Dataset columns: repo_name (string, 6 to 77 chars), path (string, 8 to 215 chars), license (string, 15 classes), cells (list), types (list)
tleonhardt/machine_learning
SL5_Ensemble_Methods.ipynb
apache-2.0
[ "Ensemble Methods\nThe goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.\nTwo families of ensemble methods are usually distinguished:\n* In averaging methods, the driving principle is to build several estimators independently and then to average their predictions. On average, the combined estimator is usually better than any of the single base estimator because its variance is reduced.\n * Examples: Bagging methods, Forests of randomized trees, ...\n* By contrast, in boosting methods, base estimators are built sequentially and one tries to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble.\n * Examples: AdaBoost, Gradient Tree Boosting, ...\nEnsembles of Decision Trees\nEnsembles are methods that combine multiple machine learning models to create more powerful models. There are many models in the machine learning literature that belong to this category, but there are two ensemble models that have proven to be effective on a wide range of datasets for classification and regression, both of which use decision trees as their building blocks: random forests and gradient boosted decision trees.\nRandom Forests\nA main drawback of decision trees is that they tend to overfit the training data. Random forests are one way to address this problem. A random forest is essentially a collection of decision trees, where each tree is slightly different from the others. The idea behind random forests is that each tree might do a relatively good job of predicting, but will likely overfit on part of the data. If we build many trees, all of which work well and overfit in different ways, we can reduce the amount of overfitting by averaging their results. This reduction in overfitting, while retaining the predictive power of the trees, can be shown using rigorous mathematics.\nTo implement this strategy, we need to build many decision trees. Each tree should do an acceptable job of predicting the target, and should also be different from the other trees. Random forests get their name from injecting randomness into the tree building to ensure each tree is different. There are two ways in which the trees in a random forest are randomized: by selecting the data points used to build a tree and by selecting the features in each split test. \nAdvantages of Random Forests\n\nVery powerful - often yield excellent results\nOften work well without heavy tuning of the parameters, and don’t require scaling of the data\nBuilding random forests on large datasets can be parallelized across multiple CPU cores within a computer easily\n\nDisadvantages of Random Forests\n\nIt is basically impossible to interpret tens or hundreds of trees in detail\nDon’t tend to perform well on very high dimensional, sparse data, such as text data\nRequire more memory and are slower to train and to predict than linear models\n\nDisclaimer: Much of the code in this notebook was borrowed from the excellent book Introduction to Machine Learning with Python by Andreas Muller and Sarah Guido.\nBuilding Random Forests\nTo build a random forest model, you need to decide on the number of trees to build (the n_estimators parameter of RandomForestRegressor or RandomForestClassifier). Let’s say we want to build 10 trees. 
These trees will be built completely independently from each other, and the algorithm will make different random choices for each tree to make sure the trees are distinct. To build a tree, we first take what is called a bootstrap sample of our data. That is, from our n_samples data points, we repeatedly draw an example randomly with replacement (meaning the same sample can be picked multiple times), n_samples times. This will create a dataset that is as big as the original dataset, but some data points will be missing from it (approximately one third), and some will be repeated.\nTo illustrate, let’s say we want to create a bootstrap sample of the list ['a', 'b', 'c', 'd']. A possible bootstrap sample would be ['b', 'd', 'd', 'c']. Another possible sample would be ['d', 'a', 'd', 'a'].\nNext, a decision tree is built based on this newly created dataset. However, the algorithm we described for the decision tree is slightly modified. Instead of looking for the best test for each node, in each node the algorithm randomly selects a subset of the features, and it looks for the best possible test involving one of these features. The number of features that are selected is controlled by the max_features parameter. This selection of a subset of features is repeated separately in each node, so that each node in a tree can make a decision using a different subset of the features.\nThe bootstrap sampling leads to each decision tree in the random forest being built on a slightly different dataset. Because of the selection of features in each node, each split in each tree operates on a different subset of features. Together, these two mechanisms ensure that all the trees in the random forest are different.\nA critical parameter in this process is max_features. If we set max_features to n_features, that means that each split can look at all features in the dataset, and no randomness will be injected in the feature selection (the randomness due to the bootstrapping remains, though). If we set max_features to 1, that means that the splits have no choice at all on which feature to test, and can only search over different thresholds for the feature that was selected randomly. Therefore, a high max_features means that the trees in the random forest will be quite similar, and they will be able to fit the data easily, using the most distinctive features. A low max_features means that the trees in the random forest will be quite different, and that each tree might need to be very deep in order to fit the data well.\nTo make a prediction using the random forest, the algorithm first makes a prediction for every tree in the forest. For regression, we can average these results to get our final prediction. For classification, a “soft voting” strategy is used. This means each algorithm makes a “soft” prediction, providing a probability for each possible output label. 
The probabilities predicted by all the trees are averaged, and the class with the highest probability is predicted.\nAnalyzing Random Forests\nLet’s apply a random forest consisting of five trees to the two_moons dataset we studied earlier:", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom sklearn.datasets import make_moons\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\n\n\nX, y = make_moons(n_samples=100, noise=0.25, random_state=3)\nX_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)\n\nforest = RandomForestClassifier(n_estimators=5, random_state=2)\nforest.fit(X_train, y_train)", "The trees that are built as part of the random forest are stored in the estimator_ attribute. Let’s visualize the decision boundaries learned by each tree, together with their aggregate prediction as made by the forest:", "from scipy import ndimage\n\ndef plot_tree_partition(X, y, tree, ax=None):\n if ax is None:\n ax = plt.gca()\n eps = X.std() / 2.\n\n x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps\n y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps\n xx = np.linspace(x_min, x_max, 1000)\n yy = np.linspace(y_min, y_max, 1000)\n\n X1, X2 = np.meshgrid(xx, yy)\n X_grid = np.c_[X1.ravel(), X2.ravel()]\n\n Z = tree.predict(X_grid)\n Z = Z.reshape(X1.shape)\n faces = tree.apply(X_grid)\n faces = faces.reshape(X1.shape)\n border = ndimage.laplace(faces) != 0\n ax.contourf(X1, X2, Z, alpha=.4, cmap=cm2, levels=[0, .5, 1])\n ax.scatter(X1[border], X2[border], marker='.', s=1)\n\n discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)\n ax.set_xlim(x_min, x_max)\n ax.set_ylim(y_min, y_max)\n ax.set_xticks(())\n ax.set_yticks(())\n return ax\n\nfrom matplotlib.colors import ListedColormap\ncm2 = ListedColormap(['#0000aa', '#ff2020'])\n\ndef plot_2d_separator(classifier, X, fill=False, ax=None, eps=None, alpha=1,\n cm=cm2, linewidth=None, threshold=None, linestyle=\"solid\"):\n # binary?\n if eps is None:\n eps = X.std() / 2.\n\n if ax is None:\n ax = plt.gca()\n\n x_min, x_max = X[:, 0].min() - eps, X[:, 0].max() + eps\n y_min, y_max = X[:, 1].min() - eps, X[:, 1].max() + eps\n xx = np.linspace(x_min, x_max, 100)\n yy = np.linspace(y_min, y_max, 100)\n\n X1, X2 = np.meshgrid(xx, yy)\n X_grid = np.c_[X1.ravel(), X2.ravel()]\n try:\n decision_values = classifier.decision_function(X_grid)\n levels = [0] if threshold is None else [threshold]\n fill_levels = [decision_values.min()] + levels + [decision_values.max()]\n except AttributeError:\n # no decision_function\n decision_values = classifier.predict_proba(X_grid)[:, 1]\n levels = [.5] if threshold is None else [threshold]\n fill_levels = [0] + levels + [1]\n if fill:\n ax.contourf(X1, X2, decision_values.reshape(X1.shape),\n levels=fill_levels, alpha=alpha, cmap=cm)\n else:\n ax.contour(X1, X2, decision_values.reshape(X1.shape), levels=levels,\n colors=\"black\", alpha=alpha, linewidths=linewidth,\n linestyles=linestyle, zorder=5)\n\n ax.set_xlim(x_min, x_max)\n ax.set_ylim(y_min, y_max)\n ax.set_xticks(())\n ax.set_yticks(())\n\nimport matplotlib as mpl\nfrom matplotlib.colors import colorConverter\n\ndef discrete_scatter(x1, x2, y=None, markers=None, s=10, ax=None,\n labels=None, padding=.2, alpha=1, c=None, markeredgewidth=None):\n \"\"\"Adaption of matplotlib.pyplot.scatter to plot classes or clusters.\n\n Parameters\n ----------\n\n x1 : nd-array\n input data, first axis\n\n x2 : nd-array\n input data, second axis\n\n y : nd-array\n 
input data, discrete labels\n\n cmap : colormap\n Colormap to use.\n\n markers : list of string\n List of markers to use, or None (which defaults to 'o').\n\n s : int or float\n Size of the marker\n\n padding : float\n Fraction of the dataset range to use for padding the axes.\n\n alpha : float\n Alpha value for all points.\n \"\"\"\n if ax is None:\n ax = plt.gca()\n\n if y is None:\n y = np.zeros(len(x1))\n\n unique_y = np.unique(y)\n\n if markers is None:\n markers = ['o', '^', 'v', 'D', 's', '*', 'p', 'h', 'H', '8', '<', '>'] * 10\n\n if len(markers) == 1:\n markers = markers * len(unique_y)\n\n if labels is None:\n labels = unique_y\n\n # lines in the matplotlib sense, not actual lines\n lines = []\n\n current_cycler = mpl.rcParams['axes.prop_cycle']\n\n for i, (yy, cycle) in enumerate(zip(unique_y, current_cycler())):\n mask = y == yy\n # if c is none, use color cycle\n if c is None:\n color = cycle['color']\n elif len(c) > 1:\n color = c[i]\n else:\n color = c\n # use light edge for dark markers\n if np.mean(colorConverter.to_rgb(color)) < .4:\n markeredgecolor = \"grey\"\n else:\n markeredgecolor = \"black\"\n\n lines.append(ax.plot(x1[mask], x2[mask], markers[i], markersize=s,\n label=labels[i], alpha=alpha, c=color,\n markeredgewidth=markeredgewidth,\n markeredgecolor=markeredgecolor)[0])\n\n if padding != 0:\n pad1 = x1.std() * padding\n pad2 = x2.std() * padding\n xlim = ax.get_xlim()\n ylim = ax.get_ylim()\n ax.set_xlim(min(x1.min() - pad1, xlim[0]), max(x1.max() + pad1, xlim[1]))\n ax.set_ylim(min(x2.min() - pad2, ylim[0]), max(x2.max() + pad2, ylim[1]))\n\n return lines\n\nfig, axes = plt.subplots(2, 3, figsize=(20, 10))\nfor i, (ax, tree) in enumerate(zip(axes.ravel(), forest.estimators_)):\n ax.set_title(\"Tree {}\".format(i))\n plot_tree_partition(X_train, y_train, tree, ax=ax)\n\nplot_2d_separator(forest, X_train, fill=True, ax=axes[-1, -1], alpha=.4)\naxes[-1, -1].set_title(\"Random Forest\")\ndiscrete_scatter(X_train[:, 0], X_train[:, 1], y_train)", "You can clearly see that the decision boundaries learned by the five trees are quite different. Each of them makes some mistakes, as some of the training points that are plotted here were not actually included in the training sets of the trees, due to the bootstrap sampling.\nThe random forest overfits less than any of the trees individually, and provides a much more intuitive decision boundary. In any real application, we would use many more trees (often hundreds or thousands), leading to even smoother boundaries.\nAs another example, let’s apply a random forest consisting of 100 trees on the Breast Cancer dataset:", "from sklearn.datasets import load_breast_cancer\n\ncancer = load_breast_cancer()\nX_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)\nforest = RandomForestClassifier(n_estimators=100, random_state=0)\nforest.fit(X_train, y_train)\n\nprint(\"Accuracy on training set: {:.3f}\".format(forest.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(forest.score(X_test, y_test)))", "The random forest gives us an accuracy of 97%, better than the linear models or a single decision tree, without tuning any parameters. We could adjust the max_features setting, or apply pre-pruning as we did for the single decision tree. 
However, often the default parameters of the random forest already work quite well.\nSimilarly to the decision tree, the random forest provides feature importances, which are computed by aggregating the feature importances over the trees in the forest. Typically, the feature importances provided by the random forest are more reliable than the ones provided by a single tree. Take a look at the figure below:", "def plot_feature_importances_cancer(model):\n n_features = cancer.data.shape[1]\n plt.figure(figsize=(10,6))\n plt.barh(range(n_features), model.feature_importances_, align='center')\n plt.yticks(np.arange(n_features), cancer.feature_names)\n plt.xlabel(\"Feature importance\")\n plt.ylabel(\"Feature\")\n \nplot_feature_importances_cancer(forest)", "As you can see, the random forest gives nonzero importance to many more features than the single tree. Similarly to the single decision tree, the random forest also gives a lot of importance to the “worst radius” feature, but it actually chooses “worst perimeter” to be the most informative feature overall. The randomness in building the random forest forces the algorithm to consider many possible explanations, the result being that the random forest captures a much broader picture of the data than a single tree.\nStrengths, weaknesses, and parameters\nRandom forests for regression and classification are currently among the most widely used machine learning methods. They are very powerful, often work well without heavy tuning of the parameters, and don’t require scaling of the data.\nEssentially, random forests share all of the benefits of decision trees, while making up for some of their deficiencies. One reason to still use decision trees is if you need a compact representation of the decision-making process. It is basically impossible to interpret tens or hundreds of trees in detail, and trees in random forests tend to be deeper than decision trees (because of the use of feature subsets). Therefore, if you need to summarize the prediction making in a visual way to nonexperts, a single decision tree might be a better choice. While building random forests on large datasets might be somewhat time consuming, it can be parallelized across multiple CPU cores within a computer easily. If you are using a multi-core processor (as nearly all modern computers do), you can use the n_jobs parameter to adjust the number of cores to use. Using more CPU cores will result in linear speed-ups (using two cores, the training of the random forest will be twice as fast), but specifying n_jobs larger than the number of cores will not help. You can set n_jobs=-1 to use all the cores in your computer. \nYou should keep in mind that random forests, by their nature, are random, and setting different random states (or not setting the random_state at all) can drastically change the model that is built. The more trees there are in the forest, the more robust it will be against the choice of random state. If you want to have reproducible results, it is important to fix the random_state.\nThe important parameters to adjust are n_estimators, max_features, and possibly pre-pruning options like max_depth. For n_estimators, larger is always better. Averaging more trees will yield a more robust ensemble by reducing overfitting. However, there are diminishing returns, and more trees need more memory and more time to train. 
A common rule of thumb is to build “as many as you have time/memory for.\nGradient boosted regression trees (gradient boosting machines)\nThe gradient boosted regression tree is another ensemble method that combines multiple decision trees to create a more powerful model. Despite the “regression” in the name, these models can be used for regression and classification. In contrast to the random forest approach, gradient boosting works by building trees in a serial manner, where each tree tries to correct the mistakes of the previous one. By default, there is no randomization in gradient boosted regression trees; instead, strong pre-pruning is used. Gradient boosted trees often use very shallow trees, of depth one to five, which makes the model smaller in terms of memory and makes predictions faster. The main idea behind gradient boosting is to combine many simple models (in this context known as weak learners), like shallow trees. Each tree can only provide good predictions on part of the data, and so more and more trees are added to iteratively improve performance.\nGradient boosted trees are frequently the winning entries in machine learning competitions, and are widely used in industry. They are generally a bit more sensitive to parameter settings than random forests, but can provide better accuracy if the parameters are set correctly.\nApart from the pre-pruning and the number of trees in the ensemble, another important parameter of gradient boosting is the learning_rate, which controls how strongly each tree tries to correct the mistakes of the previous trees. A higher learning rate means each tree can make stronger corrections, allowing for more complex models. Adding more trees to the ensemble, which can be accomplished by increasing n_estimators, also increases the model complexity, as the model has more chances to correct mistakes on the training set.\nAdvantages of Gradient Tree Boosting\n\nNatural handling of data of mixed type (= heterogeneous features)\nPredictive power\nRobustness to outliers in output space (via robust loss functions)\n\nDisadvantages of Gradient Tree Boosting\n\nScalability, due to the sequential nature of boosting it can hardly be parallelized.\n\nHere is an example of using GradientBoostingClassifier on the Breast Cancer dataset. By default, 100 trees of maximum depth 3 and a learning rate of 0.1 are used:", "from sklearn.ensemble import GradientBoostingClassifier\n\nX_train, X_test, y_train, y_test = train_test_split(cancer.data, cancer.target, random_state=0)\n\ngbrt = GradientBoostingClassifier(random_state=0)\ngbrt.fit(X_train, y_train)\n\nprint(\"Accuracy on training set: {:.3f}\".format(gbrt.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(gbrt.score(X_test, y_test)))", "As the training set accuracy is 100%, we are likely to be overfitting. 
To reduce overfitting, we could either apply stronger pre-pruning by limiting the maximum depth or lower the learning rate:", "gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)\ngbrt.fit(X_train, y_train)\n\nprint(\"Accuracy on training set: {:.3f}\".format(gbrt.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(gbrt.score(X_test, y_test)))\n\ngbrt = GradientBoostingClassifier(random_state=0, learning_rate=0.01)\ngbrt.fit(X_train, y_train)\n\nprint(\"Accuracy on training set: {:.3f}\".format(gbrt.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(gbrt.score(X_test, y_test))) ", "Both methods of decreasing the model complexity reduced the training set accuracy, as expected. In this case, lowering the maximum depth of the trees provided a significant improvement of the model, while lowering the learning rate only increased the generalization performance slightly.\nAs for the other decision tree–based models, we can again visualize the feature importances to get more insight into our model. As we used 100 trees, it is impractical to inspect them all, even if they are all of depth 1:", "gbrt = GradientBoostingClassifier(random_state=0, max_depth=1)\ngbrt.fit(X_train, y_train)\n\nplot_feature_importances_cancer(gbrt)", "We can see that the feature importances of the gradient boosted trees are somewhat similar to the feature importances of the random forests, though the gradient boosting completely ignored some of the features.\nAs both gradient boosting and random forests perform well on similar kinds of data, a common approach is to first try random forests, which work quite robustly. If random forests work well but prediction time is at a premium, or it is important to squeeze out the last percentage of accuracy from the machine learning model, moving to gradient boosting often helps.\nIf you want to apply gradient boosting to a large-scale problem, it might be worth looking into the xgboost package and its Python interface, which at the time of writing is faster (and sometimes easier to tune) than the scikit-learn implementation of gradient boosting on many datasets.\nStrengths, weaknesses, and parameters\nGradient boosted decision trees are among the most powerful and widely used models for supervised learning. Their main drawback is that they require careful tuning of the parameters and may take a long time to train. Similarly to other tree-based models, the algorithm works well without scaling and on a mixture of binary and continuous features. As with other tree-based models, it also often does not work well on high-dimensional sparse data.\nThe main parameters of gradient boosted tree models are the number of trees, n_estimators, and the learning_rate, which controls the degree to which each tree is allowed to correct the mistakes of the previous trees. These two parameters are highly interconnected, as a lower learning_rate means that more trees are needed to build a model of similar complexity. In contrast to random forests, where a higher n_estimators value is always better, increasing n_estimators in gradient boosting leads to a more complex model, which may lead to overfitting. A common practice is to fit n_estimators depending on the time and memory budget, and then search over different learning_rates.\nAnother important parameter is max_depth (or alternatively max_leaf_nodes), to reduce the complexity of each tree. Usually max_depth is set very low for gradient boosted models, often not deeper than five splits." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
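The ensemble-methods notebook above mentions parallelizing random forests with n_jobs and jointly tuning n_estimators against learning_rate for gradient boosting, but shows neither in code. A minimal sketch of both ideas using scikit-learn's public API follows; the particular grid values are illustrative assumptions, not taken from the notebook.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

cancer = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    cancer.data, cancer.target, random_state=0)

# Random forest trained across all available CPU cores via n_jobs=-1.
forest = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
forest.fit(X_train, y_train)
print("Forest test accuracy: {:.3f}".format(forest.score(X_test, y_test)))

# Illustrative joint search over n_estimators and learning_rate: a lower
# learning rate generally needs more trees to reach similar model complexity.
param_grid = {"n_estimators": [100, 300], "learning_rate": [0.01, 0.1]}
search = GridSearchCV(
    GradientBoostingClassifier(max_depth=1, random_state=0), param_grid, cv=5)
search.fit(X_train, y_train)
print("Best boosting parameters:", search.best_params_)
print("Boosting test accuracy: {:.3f}".format(search.score(X_test, y_test)))
```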
quentinsf/qhue
Qhue playground.ipynb
gpl-2.0
[ "Qhue experiments\nExperiments with the Qhue python module.\nIf you haven't already, then pip install qhue before starting. \nSome of these examples may assume you have a recent bridge with recent software.\nIf you're viewing this with my sample output, I've truncated some of it for readability. I have a lot of lights!\nBasics", "# Put in the IP address of your Hue bridge here\nBRIDGE_IP='192.168.0.45'\n\nfrom qhue import Bridge, QhueException, create_new_username\n\n\n# If you have a username set up on your bridge, enter it here\n# otherwise leave it as None and you'll be prompted to create one.\n# e.g.:\n# username='zeZomfNu-y-p1PLM9oeYTiXbtqsxn-q1-7RNLI4B'\nusername=None\n\nif username is None:\n username = create_new_username(BRIDGE_IP)\n print(\"New user: {} . Put this in the username variable above.\".format(username))", "Let's get the numbers and names of the lights:", "bridge = Bridge(BRIDGE_IP, username)\nlights = bridge.lights()\nfor num, info in lights.items():\n print(\"{:16} {}\".format(info['name'], num))", "Let's try interactively changing a light. You could make this a lot more sophisticated:", "from ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n\ndef setlight(lightid='14', on=True, ct=128, bri=128):\n bridge.lights[lightid].state(on=on)\n if on:\n bridge.lights[lightid].state(bri=bri, ct=ct)\n\nlight_list = interact(setlight,\n lightid = widgets.Dropdown(\n options={ lights[i]['name']:i for i in lights },\n value='14',\n description='Light:',\n ),\n on = widgets.Checkbox(value=True, description='On/off'),\n bri = widgets.IntSlider(min=0,max=255,value=128, description='Bright:'),\n ct = widgets.IntSlider(min=0,max=255,value=128, description='Colour:'))", "The YAML format is a nice way to view the sometimes large amount of structured information which comes back from the bridge. \nIf you haven't got the Python yaml module, pip install PyYAML.", "import yaml\nprint(\"{} lights:\\n\".format(len(lights)))\nprint(yaml.safe_dump(lights, indent=4))\n\nprint(yaml.safe_dump(bridge.lights['3'](), indent=4))", "Scenes\nLet's look at the scenes defined in the bridge, and their IDs. 
Some of these may be created manually, and others by the Hue app or other software.\nVersion 1-type scenes just refer to the lights - each light is told: \"Set the value you have stored for this scene\".\nVersion 2 scenes have more details stored in the hub, which is generally more useful.", "scenes = bridge.scenes()\nprint(\"{} scenes:\\n\".format(len(scenes)))\nprint(yaml.safe_dump(scenes, indent=4))", "Details of a particular scene from the list:", "print(yaml.safe_dump(bridge.scenes['wVXtOrFmdnySqUz']()))", "Let's list scenes with IDs, last updated time, and the lights affected:", "for sid, info in scenes.items():\n print(\"\\n{:16} {:20} {}\".format( sid, info['name'], info['lastupdated']))\n for li in info['lights']:\n print(\"{:40}- {}\".format('', lights[li]['name']))", "Tidying things up; let's delete a scene:", "# Uncomment and edit this if you actually want to run it!\n# print(bridge.scenes['cd06c70f7-on-0'](http_method='delete'))", "Show the details of the scenes that affect a particular light:", "lightname = 'Sitting room 1'\n# How's this for a nice use of python iterators?\nlight_id = next(i for i,info in lights.items() if info['name'] == lightname)\nprint(\"Light {} - {}\".format(light_id, lightname))\nfor line in [\"{} : {:20} {}\".format(sid, info['name'], info['lastupdated']) for sid, info in scenes.items() if light_id in info['lights']]:\n print(line)", "Groups and rooms\nLet's look at groups:", "print(yaml.safe_dump(bridge.groups(), indent=4))", "The current Hue software creates 'rooms', which are groups with a type value set to Room:", "groups = bridge.groups()\nrooms = [(gid, info['name']) for gid, info in groups.items() if info.get('type') == 'Room' ]\nfor room_id, info in rooms:\n print(\"{:3} : {}\".format(room_id, info))", "Sensors\nSensors are mostly switches, but a few other things come under the same category in the bridge. There's a 'daylight' sensor, implemented in software, for example, and various bits of state can also be stored here so they can be used in rule conditions later.", "sensors = bridge.sensors()\nsummary = [(info['name'], i, info['type']) for i,info in sensors.items()]\n# Sort by name\n# Python 2: summary.sort(lambda a,b: cmp(a[0], b[0]))\n# Python 3:\nsummary.sort(key = lambda a: a[0])\nfor n,i,t in summary:\n print(\"{:30} {:>3} {}\".format(n,i,t))\n #print(bridge.sensors[i]())\n ", "Here's a more complete list:", "print(yaml.safe_dump(bridge.sensors(), indent=4))", "Rules\nRules map sensor events etc. to actions.", "rules = bridge.rules()\nprint(yaml.safe_dump(rules, indent=4))\n", "Show the rules triggered by the Sitting Room switch.\nFor Tap switches, buttons 1,2,3,4 are represented by the values 34,16,17,18 respectively.", "switch = '10' # sitting room\nprint(\"Switch {} -- {}\\n\".format(switch, sensors[switch]['name']))\n\n# State changes on the switch will look like this:\nstate_string = \"/sensors/{}/state/\".format(switch)\n\n# Look through the rules for once which contain this \n# string in their conditions:\nfor rid, info in rules.items():\n this_switch = False\n matching_conditions = [c for c in info['conditions'] if state_string in c['address']]\n if len(matching_conditions) > 0:\n print(\"{:3} {:20}\".format(rid, info['name']))\n for c in info['conditions']:\n print(\" ? 
condition {}\".format(c))\n for a in info['actions']:\n\n # If the action involves applying a scene, get its name\n scene_name = \"\"\n if 'scene' in a['body']:\n scene_name = scenes[a['body']['scene']]['name']\n \n print(\" - action address {} body {!s:29s} {} \".format( a['address'], a['body'], scene_name))\n ", "Let's see what is actually done by one of these scenes:", "scene='3owQUn01W7nVsxR' # 'Evening' scene button 10.4\n\ns = bridge.scenes[scene]()\nprint(yaml.safe_dump(s, indent=4))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
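The Qhue notebook above reads groups and rooms but never shows writing to a group. As a rough sketch only: qhue builds resource paths from attribute and item access, so setting a whole room's state presumably goes through the group's action resource, as below. The bridge address, username, and group id are placeholders, and the .action(...) call is an assumption based on the Hue REST API rather than something shown in the notebook.

```python
from qhue import Bridge

BRIDGE_IP = "192.168.0.45"           # placeholder: your bridge's address
USERNAME = "your-bridge-username"    # placeholder: a username you have created

bridge = Bridge(BRIDGE_IP, USERNAME)

# Assumed pattern: PUT to /groups/<id>/action, expressed in qhue as keyword args.
room_id = "1"                        # placeholder group id taken from bridge.groups()
bridge.groups[room_id].action(on=True, bri=128, ct=300)
```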
james-prior/cohpy
20160318-dojo-python2-range-versus-xrange.ipynb
mit
[ "In Python 2, range() returns a list and xrange() returns a generator.\nI expect generating and iterating through a list\nto be faster than iterating through a generator.\nIt did not work out that way in the following exercises.\nxrange() was always faster.", "from __future__ import print_function\n\ndef get_known_good_output(n):\n n -= 1\n return n * (n+1) // 2\n\ndef foo(f, n):\n return sum(f(n))\n\nfor n in (10, 1000, 10**8):\n f = range\n assert foo(f, n) == get_known_good_output(n)\n %timeit foo(f, n)\n\nfor n in (10, 1000, 10**8):\n f = xrange\n assert foo(f, n) == get_known_good_output(n)\n %timeit foo(f, n)", "range() makes the whole list before execution can continue,\nwhereas values from xrange() a generated one at a time,\nso range() requires enough memory to hold the entire list in memory\nwhereas xrange() only needs a little bit of memory.\nFor large values, range will use up all memory then crash\nwhereas xrange will just work.\nAlso, since range() makes the whole list before continuing,\nrange() has greater latency. The following cells demonstrate that.", "def foo(f, n, last):\n total = 0\n for i in f(n):\n total += i\n if i >= last:\n break\n return total\n\nn = 10**8\nlast = 100\nknown_good_output = get_known_good_output(last+1)\nknown_good_output\n\nf = range\nassert foo(f, n, last) == known_good_output\n%timeit foo(f, n, last)\n\nf = xrange\nassert foo(f, n, last) == known_good_output\n%timeit foo(f, n, last)", "For Python 2, I prefer the behavior\nof xrange() over that of range().\nxrange() has low latency for the first value\nand is thrifty with memory.\nHowever, I dislike the ugly x in the name of xrange\nand usually stick to range() for portability with Python 3\nunless I really need the behavior of xrange()." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
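The notebook above is Python 2 specific. For Python 3, where xrange is gone and range is itself lazy, a rough equivalent of the low-latency experiment might look like the following sketch; it uses timeit directly instead of the notebook's %timeit magic, which is an adaptation rather than the original code.

```python
import timeit

def first_hundred(n, last=100):
    # Python 3's range is a lazy sequence, so this stops after about `last`
    # iterations no matter how large n is, matching Python 2 xrange behaviour.
    total = 0
    for i in range(n):
        total += i
        if i >= last:
            break
    return total

print(first_hundred(10**8))   # 5050, returned almost immediately
print(timeit.timeit(lambda: first_hundred(10**8), number=1000), "seconds for 1000 runs")
```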
Rotvig/cs231n
Project/Deep Network Comparison.ipynb
mit
[ "Deep Neural Network Comparison", "#Load necessary libraries\nimport tensorflow as tf\nimport numpy as np\nimport tensorflow.contrib.slim as slim\nimport input_data\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Load CIFAR Dataset\nTo obtain the CIFAR10 dataset, go here: https://www.cs.toronto.edu/~kriz/cifar.html\nThe training data is stored in 5 separate files, and we will alternate between them during training.", "def unpickle(file):\n import cPickle\n fo = open(file, 'rb')\n dict = cPickle.load(fo)\n fo.close()\n return dict\n\ncurrentCifar = 1\ncifar = unpickle('./cifar10/data_batch_1')\ncifarT = unpickle('./cifar10/test_batch')\n\ntotal_layers = 25 #Specify how deep we want our network\nunits_between_stride = total_layers / 5", "RegularNet\nA Deep Neural Network composed exclusively of regular and strided convolutional layers. While this architecture works well for relatively shallow networks, it becomes increasingly more difficult to train as the network depth increases.", "tf.reset_default_graph()\n\ninput_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')\nlabel_layer = tf.placeholder(shape=[None],dtype=tf.int32)\nlabel_oh = slim.layers.one_hot_encoding(label_layer,10)\n\nlayer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))\nfor i in range(5):\n for j in range(units_between_stride):\n layer1 = slim.conv2d(layer1,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str((j+1) + (i*units_between_stride)))\n layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))\n \ntop = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')\n\noutput = slim.layers.softmax(slim.layers.flatten(top))\n\nloss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output) + 1e-10, axis=[1]))\ntrainer = tf.train.AdamOptimizer(learning_rate=0.001)\nupdate = trainer.minimize(loss)", "ResNet\nAn implementation of a Residual Network as described in Identity Mappings in Deep Residual Networks.", "def resUnit(input_layer,i):\n with tf.variable_scope(\"res_unit\"+str(i)):\n part1 = slim.batch_norm(input_layer,activation_fn=None)\n part2 = tf.nn.relu(part1)\n part3 = slim.conv2d(part2,64,[3,3],activation_fn=None)\n part4 = slim.batch_norm(part3,activation_fn=None)\n part5 = tf.nn.relu(part4)\n part6 = slim.conv2d(part5,64,[3,3],activation_fn=None)\n output = input_layer + part6\n return output\n\ntf.reset_default_graph()\n\ninput_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')\nlabel_layer = tf.placeholder(shape=[None],dtype=tf.int32)\nlabel_oh = slim.layers.one_hot_encoding(label_layer,10)\n\nlayer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))\nfor i in range(5):\n for j in range(units_between_stride):\n layer1 = resUnit(layer1,j + (i*units_between_stride))\n layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))\n \ntop = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')\n\noutput = slim.layers.softmax(slim.layers.flatten(top))\n\nloss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output) + 1e-10, axis=[1]))\ntrainer = tf.train.AdamOptimizer(learning_rate=0.001)\nupdate = trainer.minimize(loss)", "HighwayNet\nAn implementation of a Highway Network as desribed in Highway Networks.", "def highwayUnit(input_layer,i):\n with tf.variable_scope(\"highway_unit\"+str(i)):\n H = 
slim.conv2d(input_layer,64,[3,3])\n T = slim.conv2d(input_layer,64,[3,3], #We initialize with a negative bias to push the network to use the skip connection\n biases_initializer=tf.constant_initializer(-1.0),activation_fn=tf.nn.sigmoid)\n output = H*T + input_layer*(1.0-T)\n return output\n\ntf.reset_default_graph()\n\ninput_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')\nlabel_layer = tf.placeholder(shape=[None],dtype=tf.int32)\nlabel_oh = slim.layers.one_hot_encoding(label_layer,10)\n\nlayer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))\nfor i in range(5):\n for j in range(units_between_stride):\n layer1 = highwayUnit(layer1,j + (i*units_between_stride))\n layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))\n \ntop = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')\n\noutput = slim.layers.softmax(slim.layers.flatten(top))\n\nloss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output) + 1e-10, axis=[1]))\ntrainer = tf.train.AdamOptimizer(learning_rate=0.001)\nupdate = trainer.minimize(loss)", "DenseNet\nAn implementation of a Dense Network as described in Densely Connected Convolutional Networks.", "def denseBlock(input_layer,i,j):\n with tf.variable_scope(\"dense_unit\"+str(i)):\n nodes = []\n a = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm)\n nodes.append(a)\n for z in range(j):\n b = slim.conv2d(tf.concat(nodes,3),64,[3,3],normalizer_fn=slim.batch_norm)\n nodes.append(b)\n return b\n\ntf.reset_default_graph()\n\ninput_layer = tf.placeholder(shape=[None,32,32,3],dtype=tf.float32,name='input')\nlabel_layer = tf.placeholder(shape=[None],dtype=tf.int32)\nlabel_oh = slim.layers.one_hot_encoding(label_layer,10)\n\nlayer1 = slim.conv2d(input_layer,64,[3,3],normalizer_fn=slim.batch_norm,scope='conv_'+str(0))\nfor i in range(5):\n layer1 = denseBlock(layer1,i,units_between_stride)\n layer1 = slim.conv2d(layer1,64,[3,3],stride=[2,2],normalizer_fn=slim.batch_norm,scope='conv_s_'+str(i))\n \ntop = slim.conv2d(layer1,10,[3,3],normalizer_fn=slim.batch_norm,activation_fn=None,scope='conv_top')\n\noutput = slim.layers.softmax(slim.layers.flatten(top))\n\nloss = tf.reduce_mean(-tf.reduce_sum(label_oh * tf.log(output) + 1e-10, axis=[1]))\ntrainer = tf.train.AdamOptimizer(learning_rate=0.001)\nupdate = trainer.minimize(loss)", "Visualize the network graph\nWe can call the Tensorflow Board to provide a graphical representation of our network.", "from IPython.display import clear_output, Image, display, HTML\n\ndef strip_consts(graph_def, max_const_size=32):\n \"\"\"Strip large constant values from graph_def.\"\"\"\n strip_def = tf.GraphDef()\n for n0 in graph_def.node:\n n = strip_def.node.add() \n n.MergeFrom(n0)\n if n.op == 'Const':\n tensor = n.attr['value'].tensor\n size = len(tensor.tensor_content)\n if size > max_const_size:\n tensor.tensor_content = \"<stripped %d bytes>\"%size\n return strip_def\n\ndef show_graph(graph_def, max_const_size=32):\n \"\"\"Visualize TensorFlow graph.\"\"\"\n if hasattr(graph_def, 'as_graph_def'):\n graph_def = graph_def.as_graph_def()\n strip_def = strip_consts(graph_def, max_const_size=max_const_size)\n code = \"\"\"\n <script>\n function load() {{\n document.getElementById(\"{id}\").pbtxt = {data};\n }}\n </script>\n <link rel=\"import\" href=\"https://tensorboard.appspot.com/tf-graph-basic.build.html\" onload=load()>\n <div style=\"height:600px\">\n <tf-graph-basic 
id=\"{id}\"></tf-graph-basic>\n </div>\n \"\"\".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))\n\n iframe = \"\"\"\n <iframe seamless style=\"width:1200px;height:620px;border:0\" srcdoc=\"{}\"></iframe>\n \"\"\".format(code.replace('\"', '&quot;'))\n display(HTML(iframe))\n\nshow_graph(tf.get_default_graph().as_graph_def())", "Training", "init = tf.global_variables_initializer()\nbatch_size = 64\ncurrentCifar = 1\ntotal_steps = 20000\nl = []\na = []\naT = []\nwith tf.Session() as sess:\n sess.run(init)\n i = 0\n draw = range(10000)\n while i < total_steps:\n if i % (10000/batch_size) != 0:\n batch_index = np.random.choice(draw,size=batch_size,replace=False)\n else:\n draw = range(10000)\n if currentCifar == 5:\n currentCifar = 1\n print \"Switched CIFAR set to \" + str(currentCifar)\n else:\n currentCifar = currentCifar + 1\n print \"Switched CIFAR set to \" + str(currentCifar)\n cifar = unpickle('./cifar10/data_batch_'+str(currentCifar))\n batch_index = np.random.choice(draw,size=batch_size,replace=False)\n x = cifar['data'][batch_index]\n x = np.reshape(x,[batch_size,32,32,3],order='F')\n x = (x/256.0)\n x = (x - np.mean(x,axis=0)) / np.std(x,axis=0)\n y = np.reshape(np.array(cifar['labels'])[batch_index],[batch_size,1])\n _,lossA,yP,LO = sess.run([update,loss,output,label_oh],feed_dict={input_layer:x,label_layer:np.hstack(y)})\n accuracy = np.sum(np.equal(np.hstack(y),np.argmax(yP,1)))/float(len(y))\n l.append(lossA)\n a.append(accuracy)\n if i % 10 == 0: print \"Step: \" + str(i) + \" Loss: \" + str(lossA) + \" Accuracy: \" + str(accuracy)\n if i % 100 == 0: \n point = np.random.randint(0,10000-500)\n xT = cifarT['data'][point:point+500]\n xT = np.reshape(xT,[500,32,32,3],order='F')\n xT = (xT/256.0)\n xT = (xT - np.mean(xT,axis=0)) / np.std(xT,axis=0)\n yT = np.reshape(np.array(cifarT['labels'])[point:point+500],[500])\n lossT,yP = sess.run([loss,output],feed_dict={input_layer:xT,label_layer:yT})\n accuracy = np.sum(np.equal(yT,np.argmax(yP,1)))/float(len(yT))\n aT.append(accuracy)\n print \"Test set accuracy: \" + str(accuracy)\n i+= 1", "Results", "plt.plot(l) #Plot training loss\n\nplt.plot(a) #Plot training accuracy\n\nplt.plot(aT) #Plot test accuracy\n\nnp.max(aT) #Best test accuracy" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
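The deep-network comparison notebook above targets TensorFlow 1.x with tf.contrib.slim and Python 2 print statements, neither of which exists in current TensorFlow. Purely as an illustration of the same pre-activation residual unit, not the notebook's code, a tf.keras version might look like this:

```python
import tensorflow as tf
from tensorflow.keras import layers

def res_unit(x, filters=64):
    # Pre-activation residual unit: (BN -> ReLU -> Conv) twice, plus identity skip.
    h = layers.BatchNormalization()(x)
    h = layers.Activation("relu")(h)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    h = layers.BatchNormalization()(h)
    h = layers.Activation("relu")(h)
    h = layers.Conv2D(filters, 3, padding="same")(h)
    return layers.add([x, h])

inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(64, 3, padding="same")(inputs)
x = res_unit(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```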
bayesimpact/bob-emploi
data_analysis/notebooks/datasets/rome/update_from_v331_to_v332.ipynb
gpl-3.0
[ "Author: Pascal, pascal@bayesimpact.org\nDate: 2016-06-28\nROME update from v331 to v332\nIn June 2017 a new version of the ROME was realeased. I want to investigate what changed and whether we need to do anything about it.\nYou might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v332. You will have to trust me on the results ;-)\nSkip the run test because it requires older versions of the ROME.", "import collections\nimport glob\nimport os\nfrom os import path\n\nimport matplotlib_venn\nimport pandas\n\nrome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')\n\nOLD_VERSION = '331'\nNEW_VERSION = '332'\n\nold_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))\nnew_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))", "First let's check if there are new or deleted files (only matching by file names).", "new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)\ndeleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)\n\nprint('{:d} new files'.format(len(new_files)))\nprint('{:d} deleted files'.format(len(deleted_files)))", "So we have the same set of files in both versions: good start.\nNow let's set up a dataset that, for each table, links the old file and the new file.", "new_to_old = dict((f, f.replace(NEW_VERSION, OLD_VERSION)) for f in new_version_files)\n\n# Load all ROME datasets for the two versions we compare.\nVersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])\nrome_data = [VersionedDataset(\n basename=path.basename(f),\n old=pandas.read_csv(f.replace(NEW_VERSION, OLD_VERSION)),\n new=pandas.read_csv(f))\n for f in sorted(new_version_files)]\n\ndef find_rome_dataset_by_name(data, partial_name):\n for dataset in data:\n if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:\n return dataset\n raise ValueError('No dataset named {}, the list is\\n{}'.format(partial_name, [dataset.basename for d in data]))", "Let's make sure the structure hasn't changed:", "for dataset in rome_data:\n if set(dataset.old.columns) != set(dataset.new.columns):\n print('Columns of {} have changed.'.format(dataset.basename))", "All files have the same columns as before: still good.\nNow let's see for each file if they more or less rows.", "same_row_count_files = 0\nfor dataset in rome_data:\n diff = len(dataset.new.index) - len(dataset.old.index)\n if diff > 0:\n print('{:d} values added in {}'.format(diff, dataset.basename))\n elif diff < 0:\n print('{:d} values removed in {}'.format(diff, dataset.basename))\n else:\n same_row_count_files += 1\nprint('{:d}/{:d} files with the same number of rows'.format(same_row_count_files, len(rome_data)))", "One important change is the one added to referentiel_code_rome, adding it might be the reason of all the other changes as it's adding a new job group and all other files would need to propagate that change.\nNew Job Group\nIdentify the New Job Group\nLet's check it out. 
First let's make sure than no job groups were removed:", "job_groups = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome')\n\nobsolete_job_groups = set(job_groups.old.code_rome) - set(job_groups.new.code_rome)\nobsolete_job_groups", "Alright, so the only change was the job group added:", "new_job_groups_codes = set(job_groups.new.code_rome) - set(job_groups.old.code_rome)\nnew_job_groups = job_groups.new[job_groups.new.code_rome.isin(new_job_groups_codes)]\nnew_job_groups ", "Let's see if this is a different grouping of existing jobs or if it's entirely new jobs. First let's check the jobs in this new job group.", "jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')\njobs.new[jobs.new.code_rome == 'L1510'].head()", "Now let's see if those jobs were already there, and if so which were there job groups:", "jobs.old[jobs.old.code_ogr.isin(jobs.new[jobs.new.code_rome == 'L1510'].code_ogr)]", "Alright, it seems that these are entirely new jobs. Just to make sure let's check with a keyword.", "jobs.old[jobs.old.libelle_appellation_court.str.contains('Animatrice 2D', case=False)]", "What? Wait a minute! what happened to this job that looks almost exactly like the new one `Animatrice 2D - films d'animation'.", "jobs.new[jobs.new.code_ogr == 10969]", "OK, this one did not move at all. What is this other job group that seems so close to ours?", "job_groups.new[job_groups.new.code_rome == 'E1205']", "Ouch, it's indeed quite close and might have fooled more than one jobseeker…\nSo we have an entirely new job group L1510 which stands for Films d'animation et effets spéciaux. It's quite close to E1205 (Réalisation de contenus multimédias) and by the past many jobs of the new job groups might have defaulted to similar jobs of E1205.\nLet's check now the impact on the rest of the ROME datasets, especially to identify other changes that might have not be related to adding this job group.\nImpact on ROME\nLet's first check the ROME mobility (there were 8 new lines):", "mobility = find_rome_dataset_by_name(rome_data, 'rubrique_mobilite')\nmobility.new[(mobility.new.code_rome == 'L1510') | (mobility.new.code_rome_cible == 'L1510')]", "Cool, we found our 8 new rows, and as expected it's linking to closeby job groups. We can see that the two job groups E1104 and E1205 are especially close as there are some mobility in both ways to and from the new job group.", "job_groups.new[job_groups.new.code_rome.isin(('E1205', 'E1104'))]", "Let's seek the skills related to that new job group:", "skills = find_rome_dataset_by_name(rome_data, 'referentiel_competence')\nlink = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')\nnew_linked_skills = link.new.join(skills.new.set_index('code_ogr'), 'code_ogr')[\n ['code_rome', 'code_ogr', 'libelle_competence', 'libelle_type_competence']]\nnew_linked_skills[new_linked_skills.code_rome == 'L1510'].dropna()", "Some of the skills already existed (e.g. Technique de dessin), others have been added with this release specially for this job group (e.g. Logiciel de motion capture).\nOK I think this is enough scrutiny for this new job group. Let's check out the other changes.\nOther Changes\nLet's first check the job names dataset. We've seen that some jobs were added for this job group but there might be others:", "new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)\nnew_linked_skills[new_linked_skills.code_rome == 'L1510'].dropna()", "Those looks legitimate. 
New jobs are added regularly to ROME and this release makes no exception.\nWhat about the skills?", "new_skills = set(skills.new.code_ogr) - set(skills.old.code_ogr)\nskills_for_new_job_group = new_linked_skills[new_linked_skills.code_rome == 'L1510'].code_ogr\nskills.new[skills.new.code_ogr.isin(new_skills) & (~skills.new.code_ogr.isin(skills_for_new_job_group))]", "Those entries look legitimate as well, some new skills have been added.\nConclusion\nThe new version of ROME, v332, introduces a major change: the addition of a new job group L1510 - Films d'animation et effets spéciaux. It's quite close to E1205 and E1204. There are also very minor changes as in each ROME release.\nThis reflect quite well what they wrote in their changelog (although at the time I am writing this notebook, their website is down).\nSo before switching to v332, we should examin what it would mean for users that would land in the new job group." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kkkddder/dmc
notebooks/week-6/01-training a RNN model in Keras.ipynb
apache-2.0
[ "Lab 6.1 - Keras for RNN\nIn this lab we will use the Keras deep learning library to construct a simple recurrent neural network (RNN) that can learn linguistic structure from a piece of text, and use that knowledge to generate new text passages. To review general RNN architecture, specific types of RNN networks such as the LSTM networks we'll be using here, and other concepts behind this type of machine learning, you should consult the following resources:\n\nhttp://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/\nhttp://ml4a.github.io/guides/recurrent_neural_networks/\nhttp://colah.github.io/posts/2015-08-Understanding-LSTMs/\nhttp://karpathy.github.io/2015/05/21/rnn-effectiveness/\n\nThis code is an adaptation of these two examples:\n\nhttp://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/\nhttps://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py\n\nYou can consult the original sites for more information and documentation.\nLet's start by importing some of the libraries we'll be using in this lab:", "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import LSTM\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.utils import np_utils\n\nfrom time import gmtime, strftime\nimport os\nimport re\nimport pickle\nimport random\nimport sys", "The first thing we need to do is generate our training data set. In this case we will use a recent article written by Barack Obama for The Economist newspaper. Make sure you have the obama.txt file in the /data folder within the /week-6 folder in your repository.", "# load ascii text from file\nfilename = \"data/obama.txt\"\nraw_text = open(filename).read()\n\n# get rid of any characters other than letters, numbers, \n# and a few special characters\nraw_text = re.sub('[^\\nA-Za-z0-9 ,.:;?!-]+', '', raw_text)\n\n# convert all text to lowercase\nraw_text = raw_text.lower()\n\nn_chars = len(raw_text)\nprint \"length of text:\", n_chars\nprint \"text preview:\", raw_text[:500]", "Next, we use python's set() function to generate a list of all unique characters in the text. This will form our 'vocabulary' of characters, which is similar to the categories found in typical ML classification problems. \nSince neural networks work with numerical data, we also need to create a mapping between each character and a unique integer value. To do this we create two dictionaries: one which has characters as keys and the associated integers as the value, and one which has integers as keys and the associated characters as the value. These dictionaries will allow us to do translation both ways.", "# extract all unique characters in the text\nchars = sorted(list(set(raw_text)))\nn_vocab = len(chars)\nprint \"number of unique characters found:\", n_vocab\n\n# create mapping of characters to integers and back\nchar_to_int = dict((c, i) for i, c in enumerate(chars))\nint_to_char = dict((i, c) for i, c in enumerate(chars))\n\n# test our mapping\nprint 'a', \"- maps to ->\", char_to_int[\"a\"]\nprint 25, \"- maps to ->\", int_to_char[25]", "Now we need to define the training data for our network. 
With RNN's, the training data usually takes the shape of a three-dimensional matrix, with the size of each dimension representing:\n[# of training sequences, # of training samples per sequence, # of features per sample]\n\nThe training sequences are the sets of data subjected to the RNN at each training step. As with all neural networks, these training sequences are presented to the network in small batches during training.\nEach training sequence is composed of some number of training samples. The number of samples in each sequence dictates how far back in the data stream the algorithm will learn, and sets the depth of the RNN layer.\nEach training sample within a sequence is composed of some number of features. This is the data that the RNN layer is learning from at each time step. In our example, the training samples and targets will use one-hot encoding, so will have a feature for each possible character, with the actual character represented by 1, and all others by 0.\n\nTo prepare the data, we first set the length of training sequences we want to use. In this case we will set the sequence length to 100, meaning the RNN layer will be able to predict future characters based on the 100 characters that came before.\nWe will then slide this 100 character 'window' over the entire text to create input and output arrays. Each entry in the input array contains 100 characters from the text, and each entry in the output array contains the single character that came after.", "# prepare the dataset of input to output pairs encoded as integers\nseq_length = 100\n\ninputs = []\noutputs = []\n\nfor i in range(0, n_chars - seq_length, 1):\n inputs.append(raw_text[i:i + seq_length])\n outputs.append(raw_text[i + seq_length])\n \nn_sequences = len(inputs)\nprint \"Total sequences: \", n_sequences", "Now let's shuffle both the input and output data so that we can later have Keras split it automatically into a training and test set. To make sure the two lists are shuffled the same way (maintaining correspondance between inputs and outputs), we create a separate shuffled list of indeces, and use these indeces to reorder both lists.", "indeces = range(len(inputs))\nrandom.shuffle(indeces)\n\ninputs = [inputs[x] for x in indeces]\noutputs = [outputs[x] for x in indeces]", "Let's visualize one of these sequences to make sure we are getting what we expect:", "print inputs[0], \"-->\", outputs[0]", "Next we will prepare the actual numpy datasets which will be used to train our network. We first initialize two empty numpy arrays in the proper formatting:\n\nX --> [# of training sequences, # of training samples, # of features]\ny --> [# of training sequences, # of features]\n\nWe then iterate over the arrays we generated in the previous step and fill the numpy arrays with the proper data. Since all character data is formatted using one-hot encoding, we initialize both data sets with zeros. 
As we iterate over the data, we use the char_to_int dictionary to map each character to its related position integer, and use that position to change the related value in the data set to 1.", "# create two empty numpy array with the proper dimensions\nX = np.zeros((n_sequences, seq_length, n_vocab), dtype=np.bool)\ny = np.zeros((n_sequences, n_vocab), dtype=np.bool)\n\n# iterate over the data and build up the X and y data sets\n# by setting the appropriate indices to 1 in each one-hot vector\nfor i, example in enumerate(inputs):\n for t, char in enumerate(example):\n X[i, t, char_to_int[char]] = 1\n y[i, char_to_int[outputs[i]]] = 1\n \nprint 'X dims -->', X.shape\nprint 'y dims -->', y.shape", "Next, we define our RNN model in Keras. This is very similar to how we defined the CNN model, except now we use the LSTM() function to create an LSTM layer with an internal memory of 128 neurons. LSTM is a special type of RNN layer which solves the unstable gradients issue seen in basic RNN. Along with LSTM layers, Keras also supports basic RNN layers and GRU layers, which are similar to LSTM. You can find full documentation for recurrent layers in Keras' documentation\nAs before, we need to explicitly define the input shape for the first layer. Also, we need to tell Keras whether the LSTM layer should pass its sequence of predictions or its internal memory as the output to the next layer. If you are connecting the LSTM layer to a fully connected layer as we do in this case, you should set the return_sequences parameter to False to have the layer pass the value of its hidden neurons. If you are connecting multiple LSTM layers, you should set the parameter to True in all but the last layer, so that subsequent layers can learn from the sequence of predictions of previous layers.\nWe will use dropout with a probability of 50% to regularize the network and prevent overfitting on our training data. The output of the network will be a fully connected layer with one neuron for each character in the vocabulary. The softmax function will convert this output to a probability distribution across all characters.", "# define the LSTM model\nmodel = Sequential()\nmodel.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))\nmodel.add(Dropout(0.50))\nmodel.add(Dense(y.shape[1], activation='softmax'))\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')", "Next, we define two helper functions: one to select a character based on a probability distribution, and one to generate a sequence of predicted characters based on an input (or 'seed') list of characters.\nThe sample() function will take in a probability distribution generated by the softmax() function, and select a character based on the 'temperature' input. The temperature (also often called the 'diversity') effects how strictly the probability distribution is sampled. \n\nLower values (closer to zero) output more confident predictions, but are also more conservative. In our case, if the model has overfit the training data, lower values are likely to give back exactly what is found in the text\nHigher values (1 and above) introduce more diversity and randomness into the results. This can lead the model to generate novel information not found in the training data. 
However, you are also likely to see more errors such as grammatical or spelling mistakes.", "def sample(preds, temperature=1.0):\n # helper function to sample an index from a probability array\n preds = np.asarray(preds).astype('float64')\n preds = np.log(preds) / temperature\n exp_preds = np.exp(preds)\n preds = exp_preds / np.sum(exp_preds)\n probas = np.random.multinomial(1, preds, 1)\n return np.argmax(probas)", "The generate() function will take in:\n\ninput sentance ('seed')\nnumber of characters to generate\nand target diversity or temperature\n\nand print the resulting sequence of characters to the screen.", "def generate(sentence, prediction_length=50, diversity=0.35):\n print '----- diversity:', diversity \n\n generated = sentence\n sys.stdout.write(generated)\n\n # iterate over number of characters requested\n for i in range(prediction_length):\n \n # build up sequence data from current sentence\n x = np.zeros((1, X.shape[1], X.shape[2]))\n for t, char in enumerate(sentence):\n x[0, t, char_to_int[char]] = 1.\n\n # use trained model to return probability distribution\n # for next character based on input sequence\n preds = model.predict(x, verbose=0)[0]\n \n # use sample() function to sample next character \n # based on probability distribution and desired diversity\n next_index = sample(preds, diversity)\n \n # convert integer to character\n next_char = int_to_char[next_index]\n\n # add new character to generated text\n generated += next_char\n \n # delete the first character from beginning of sentance, \n # and add new caracter to the end. This will form the \n # input sequence for the next predicted character.\n sentence = sentence[1:] + next_char\n\n # print results to screen\n sys.stdout.write(next_char)\n sys.stdout.flush()\n print", "Next, we define a system for Keras to save our model's parameters to a local file after each epoch where it achieves an improvement in the overall loss. This will allow us to reuse the trained model at a later time without having to retrain it from scratch. This is useful for recovering models incase your computer crashes, or you want to stop the training early.", "filepath=\"-basic_LSTM.hdf5\"\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=0, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]", "Now we are finally ready to train the model. We want to train the model over 50 epochs, but we also want to output some generated text after each epoch to see how our model is doing. \nTo do this we create our own loop to iterate over each epoch. Within the loop we first train the model for one epoch. Since all parameters are stored within the model, training one epoch at a time has the same exact effect as training over a longer series of epochs. We also use the model's validation_split parameter to tell Keras to automatically split the data into 80% training data and 20% test data for validation. Remember to always shuffle your data if you will be using validation!\nAfter each epoch is trained, we use the raw_text data to extract a new sequence of 100 characters as the 'seed' for our generated text. Finally, we use our generate() helper function to generate text using two different diversity settings.\nWarning: because of their large depth (remember that an RNN trained on a 100 long sequence effectively has 100 layers!), these networks typically take a much longer time to train than traditional multi-layer ANN's and CNN's. 
You shoud expect these models to train overnight on the virtual machine, but you should be able to see enough progress after the first few epochs to know if it is worth it to train a model to the end. For more complex RNN models with larger data sets in your own work, you should consider a native installation, along with a dedicated GPU if possible.", "epochs = 50\nprediction_length = 100\n\nfor iteration in range(epochs):\n \n print 'epoch:', iteration + 1, '/', epochs\n model.fit(X, y, validation_split=0.2, batch_size=256, nb_epoch=1, callbacks=callbacks_list)\n \n # get random starting point for seed\n start_index = random.randint(0, len(raw_text) - seq_length - 1)\n # extract seed sequence from raw text\n seed = raw_text[start_index: start_index + seq_length]\n \n print '----- generating with seed:', seed\n \n for diversity in [0.5, 1.2]:\n generate(seed, prediction_length, diversity)", "That looks pretty good! You can see that the RNN has learned alot of the linguistic structure of the original writing, including typical length for words, where to put spaces, and basic punctuation with commas and periods. Many words are still misspelled but seem almost reasonable, and it is pretty amazing that it is able to learn this much in only 50 epochs of training. \nYou can see that the loss is still going down after 50 epochs, so the model can definitely benefit from longer training. If you're curious you can try to train for more epochs, but as the error decreases be careful to monitor the output to make sure that the model is not overfitting. As with other neural network models, you can monitor the difference between training and validation loss to see if overfitting might be occuring. In this case, since we're using the model to generate new information, we can also get a sense of overfitting from the material it generates.\nA good indication of overfitting is if the model outputs exactly what is in the original text given a seed from the text, but jibberish if given a seed that is not in the original text. Remember we don't want the model to learn how to reproduce exactly the original text, but to learn its style to be able to generate new text. As with other models, regularization methods such as dropout and limiting model complexity can be used to avoid the problem of overfitting.\nFinally, let's save our training data and character to integer mapping dictionaries to an external file so we can reuse it with the model at a later time.", "pickle_file = '-basic_data.pickle'\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'X': X,\n 'y': y,\n 'int_to_char': int_to_char,\n 'char_to_int': char_to_int,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print 'Unable to save data to', pickle_file, ':', e\n raise\n \nstatinfo = os.stat(pickle_file)\nprint 'Saved data to', pickle_file\nprint 'Compressed pickle size:', statinfo.st_size" ]
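Because the checkpoint callback above writes the best weights to "-basic_LSTM.hdf5" and the pickle step stores the training arrays and character mappings, the trained model can be reused later without retraining. The following is only a sketch of what that reload might look like: it assumes the same older Keras API used in this notebook, that the file names match the ones above, and that the model is rebuilt with exactly the same architecture before the weights are loaded.

```python
import pickle

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

# restore the training data and the character <-> integer mappings saved above
with open('-basic_data.pickle', 'rb') as f:
    saved = pickle.load(f)
X = saved['X']
char_to_int = saved['char_to_int']
int_to_char = saved['int_to_char']

# rebuild the exact same architecture, then load the checkpointed weights
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.50))
model.add(Dense(len(char_to_int), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.load_weights('-basic_LSTM.hdf5')
```

From there, the generate() helper defined earlier can be used as before to sample new text.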
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
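The effect of the temperature/diversity parameter in the sample() helper above can be seen directly on a toy distribution. The sketch below reimplements only the reweighting step (take logs, divide by the temperature, re-normalize) and prints the reshaped probabilities for a few temperatures; lower values sharpen the distribution toward the most likely character, higher values flatten it.

```python
import numpy as np

def apply_temperature(preds, temperature):
    # same reweighting as in sample(): rescale log-probabilities, then re-normalize
    preds = np.asarray(preds, dtype=np.float64)
    preds = np.exp(np.log(preds) / temperature)
    return preds / preds.sum()

probs = np.array([0.6, 0.25, 0.1, 0.05])  # toy softmax output over four characters
for t in (0.2, 0.5, 1.0, 1.5):
    print(t, np.round(apply_temperature(probs, t), 3))
```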
yhilpisch/dx
01_dx_frame.ipynb
agpl-3.0
[ "<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"45%\" align=\"right\" border=\"4\">\nFramework Classes and Functions\nThis section explains the usage of some basic framework classes and functions of DX Analytics. Mainly some helper functions, the discounting classes and the market environment class used to store market data and other parameters/data needed to model, value and risk manage derivative instruments.", "from dx import *\n\nnp.set_printoptions(precision=3)", "Helper Functions\nThere are two helper functions used regulary:\n\n<code>get_year_deltas</code>: get a list of year deltas (decimal fractions) relative to first value in time_list\n<code>sn_random_numbers</code>: get an array of standard normally distributed pseudo-random numbers\n\nget_year_deltas\nSuppose we have a list object containing a number of datetime objects.", "time_list = [dt.datetime(2015, 1, 1),\n dt.datetime(2015, 4, 1),\n dt.datetime(2015, 6, 15),\n dt.datetime(2015, 10, 21)]", "Passing this object to the get_year_deltas functions yields a list of year fractions representing the time intervals between the dates given. This is sometimes used e.g. for discounting purposes.", "get_year_deltas(time_list)", "sn_random_numbers\nMonte Carlo simulation of course relies heavily an the use of random numbers. The function sn_random_numbers is a wrapper function around the pseudo-random number generator of the NumPy library. It implements antithetic variates and moment matching as generic variance reduction techniques. It also allows to fix the seed value for the random number generator. The shape parameter is a tuple object of three integers.", "ran = sn_random_numbers((2, 3, 4), antithetic=True,\n moment_matching=True, fixed_seed=False)\n\nran", "Using moment matching makes sure that the first and second moments match exactly 0 and 1, respectively.", "ran.mean()\n\nran.std()", "Setting the first value of the shape parameter to 1 yields a two-dimensional ndarray object.", "ran = sn_random_numbers((1, 3, 4), antithetic=True,\n moment_matching=True, fixed_seed=False)\n\nran", "Discounting Classes\nIn the risk-neutral valuation of derivative instrumente, discounting payoffs is a major task. The following discounting classes are implemented:\n\nconstant_short_rate: fixed short rate\ndeterministic_yield: deterministic yiels/term structure\n\nconstant_short_rate\nThe constant_short_rate class represents the most simple case for risk-neutral discounting. A discounting object is defined by instatiating the class and providing a name and a decimal short rate value only.", "r = constant_short_rate('r', 0.05)\n\nr.name\n\nr.short_rate", "The object has a method get_forward_rates to generate forward rates given, for instance, a list object of datetime objects.", "r.get_forward_rates(time_list)", "Similarly, the method get_discount_factors returns discount factors for such a list object.", "r.get_discount_factors(time_list)", "You can also pass, for instance, an ndarry object containing year fractions.", "r.get_discount_factors(np.array([0., 1., 1.5, 2.]),\n dtobjects=False)", "deterministic_short_rate\nThe deterministic_short_rate class allows to model an interest rate term structure. 
To this end, you need to pass a list object of datetime and yield pairs to the class.", "yields = [(dt.datetime(2015, 1, 1), 0.02),\n (dt.datetime(2015, 3, 1), 0.03),\n (dt.datetime(2015, 10, 15), 0.035),\n (dt.datetime(2015, 12, 31), 0.04)]\n\ny = deterministic_short_rate('y', yields)\n\ny.name\n\ny.yield_list", "The method get_interpolated_yields implements an interpolation of the yield data and returns the interpolated yields given a list object of datetime objects.", "y.get_interpolated_yields(time_list)", "In similar fashion, the methods get_forward_rates and get_discount_factors return forward rates and discount factors, respcectively.", "y.get_forward_rates(time_list)\n\ny.get_discount_factors(time_list)", "Market Environment\nThe market_environment class is used to collect relevant data for the modeling, valuation and risk management of single derivatives instruments and portfolios composed of such instruments. A market_environment object stores:\n\nconstants: e.g. maturity date of option\nlists: e.g. list of dates\ncurves: e.g. discounting objects\n\nA market_environment object is instantiated by providing a name as a string object and the pricing date as a datetime object.", "me = market_environment(name='me', pricing_date=dt.datetime(2014, 1, 1))", "Constants are added via the add_constant method and providing a key and the value.", "me.add_constant('initial_value', 100.)\n\nme.add_constant('volatility', 0.25)", "Lists of data are added via the add_list method.", "me.add_list('dates', time_list)", "The add_curve method does the same for curves.", "me.add_curve('discount_curve_1', r)\n\nme.add_curve('discount_curve_2', y)", "The single data objects are stored in separate dictionary objects.", "me.constants\n\nme.lists\n\nme.curves", "Data is retrieved from a market_environment object via the get_constant, get_list and get_curve methods and providing the respective key.", "me.get_constant('volatility')\n\nme.get_list('dates')\n\nme.get_curve('discount_curve_1')", "Retrieving, for instance, a discounting object you can in one step retrieve it and call a method on it.", "me.get_curve('discount_curve_2').get_discount_factors(time_list)", "Copyright, License & Disclaimer\n© Dr. Yves J. Hilpisch | The Python Quants GmbH\nDX Analytics (the \"dx library\" or \"dx package\") is licensed under the GNU Affero General\nPublic License version 3 or later (see http://www.gnu.org/licenses/).\nDX Analytics comes with no representations or warranties, to the extent\npermitted by applicable law.\nhttp://tpq.io | dx@tpq.io |\nhttp://twitter.com/dyjh\n<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"35%\" align=\"right\" border=\"0\"><br>\nQuant Platform | http://pqp.io\nPython for Finance Training | http://training.tpq.io\nCertificate in Computational Finance | http://compfinance.tpq.io\nDerivatives Analytics with Python (Wiley Finance) |\nhttp://dawp.tpq.io\nPython for Finance (2nd ed., O'Reilly) |\nhttp://py4fi.tpq.io" ]
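To make the idea behind get_discount_factors more concrete, here is a small sketch of how discount factors can be computed from year fractions under a constant short rate with continuous compounding. This only illustrates the concept; the compounding convention and the shape/ordering of the array that DX Analytics actually returns may differ.

```python
import numpy as np

def discount_factors_constant_rate(year_fractions, short_rate):
    # continuous compounding assumed: D(t) = exp(-r * t)
    t = np.asarray(year_fractions, dtype=float)
    return np.exp(-short_rate * t)

print(discount_factors_constant_rate([0.0, 1.0, 1.5, 2.0], 0.05))
```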
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
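The get_interpolated_yields method described above can be approximated with plain NumPy by converting the dates to year fractions and interpolating. The sketch below uses simple piecewise-linear interpolation (np.interp), so its numbers will generally not match DX Analytics, which may use a smoother (e.g., spline-based) scheme; it is only meant to show the mechanics.

```python
import datetime as dt
import numpy as np

yields = [(dt.datetime(2015, 1, 1), 0.02),
          (dt.datetime(2015, 3, 1), 0.03),
          (dt.datetime(2015, 10, 15), 0.035),
          (dt.datetime(2015, 12, 31), 0.04)]

dates, rates = zip(*yields)
t0 = dates[0]
to_years = lambda d: (d - t0).days / 365.0

knots = np.array([to_years(d) for d in dates])
query_dates = [dt.datetime(2015, 4, 1), dt.datetime(2015, 6, 15), dt.datetime(2015, 10, 21)]
query = np.array([to_years(d) for d in query_dates])

print(np.interp(query, knots, np.array(rates)))
```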
rasbt/algorithms_in_ipython_notebooks
ipython_nbs/search/binary_search.ipynb
gpl-3.0
[ "%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v", "Binary Search\nAn implementation of the binary search algorithm. More details will follow. A good summary can be found on Wikipedia: https://en.wikipedia.org/wiki/Binary_search_algorithm.\nThe figures below provide a short illustration of how the implementation works on a toy example:\n\n\nBinary Search Implementation", "def binary_search(array, value):\n ary = array\n min_idx = 0\n max_idx = len(array)\n \n while min_idx < max_idx:\n middle_idx = (min_idx + max_idx) // 2\n\n if array[middle_idx] == value:\n return middle_idx\n elif array[middle_idx] < value:\n min_idx = middle_idx + 1\n else:\n max_idx = middle_idx\n \n return None\n\nbinary_search(array=[],\n value=1)\n\nbinary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=1)\n\nbinary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=2)\n\nbinary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=4)\n\nbinary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=11)\n\nbinary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=99)", "Binary Search using Recursion\nNote that this implementation of recursive binary search deliberately avoids slicing the array (e.g., array[:middle_idx]), because slicing Python lists is expensive due to the random memory access. E.g., slicing a Python list as a_list[:k] is an O(k) operation.\nThe end_idx parameter is treated as exclusive, so the search interval is [start_idx, end_idx); this also makes single-element arrays work correctly.", "def recursive_binary_search(array, value, start_idx=None, end_idx=None):\n \n if start_idx is None:\n start_idx = 0\n if end_idx is None:\n end_idx = len(array)\n \n # base case: the interval [start_idx, end_idx) is empty\n if start_idx >= end_idx:\n return None\n \n middle_idx = (start_idx + end_idx) // 2\n if array[middle_idx] == value:\n return middle_idx\n\n elif array[middle_idx] > value:\n # search the left half, excluding middle_idx\n return recursive_binary_search(array, \n value, \n start_idx=start_idx,\n end_idx=middle_idx)\n else:\n # search the right half, excluding middle_idx\n return recursive_binary_search(array,\n value,\n start_idx=middle_idx + 1,\n end_idx=end_idx)\n\nrecursive_binary_search(array=[],\n value=1)\n\nrecursive_binary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=1)\n\nrecursive_binary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=4)\n\nrecursive_binary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=11)\n\nrecursive_binary_search(array=[1, 2, 4, 7, 8, 10, 11],\n value=99)" ]
[ "code", "markdown", "code", "markdown", "code" ]
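A quick way to gain confidence in the implementations above is to cross-check them against Python's standard-library bisect module on many random sorted inputs. The sketch below assumes it runs in the same notebook, so binary_search is already defined; because the random inputs contain unique elements, the index returned by bisect_left must match whenever the value is present.

```python
import bisect
import random

def reference_search(array, value):
    # bisect_left gives the insertion point; verify the value is actually there
    idx = bisect.bisect_left(array, value)
    return idx if idx < len(array) and array[idx] == value else None

random.seed(0)
for _ in range(1000):
    array = sorted(random.sample(range(100), random.randint(0, 20)))
    value = random.randint(0, 100)
    assert binary_search(array, value) == reference_search(array, value)

print('all random cases agree')
```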
MadsJensen/intro_to_scientific_computing
src/Z1-Outline-of-topics.ipynb
bsd-3-clause
[ "Foundations of data-driven health science\nDay 1: The anatomy of a computer and data\n\nSummarize how the main components of a computer relate to, and constrain, the act of \"computing\".\nSee What is a computer-notebook\nDescribe the basic organisation of a file system, and navigate it using commands in a \"terminal\".\nSee this notebook and associated exercises\nLearn commands in notebook, but be sure to practice them in a terminal!\nContrast textual and binary files in terms of their contents and find information in both using tools that can be automated.\nSlides needed!\n\nContrast: \n\n\ndata, talk about\n\nphilosophy of data (mje)\ninformation; storage of information in bits and bytes = data\nrelate practical 'implementation' of data to persistent (HD) and non-persistent (RAM) media\n\n\n\ndata types (use type-command in python)\n\nnumbers: ints and floats\ntoo much detail? short (16-bit) vs. long (32-/64-int) ints; single vs. double precision\n\n\ncharacters: just that (ASCII, unicode, ...)\nstrings: an ordered sequence of characters\nhas a length (len)\n\n\nlists: an ordered sequence of arbitrary data types", "2\n\ntype(2)\n\n3.0\n\ntype(3.0)\n\n2/3.0\n\ntype('a')\n\nlen('a')\n\nlen('abc')", "Example exercise (data8.org, Lecture 4)\nDay 2: The anatomy and building blocks of a program\n\nContrast local and non-local computing resources and file systems, and formulate use cases for both.\n\nUse variables in a programming language (python) and perform simple operations (manipulations) on the information (data) they contain.\n\n\nUse of for loops and control flow\n\n\nFunction for calculating mean of list of numbers", "def my_mean(my_list):\n ", "Day 3: Programming as a means to gain insight into data\n\nWrite a program to extract, collate and preprocess \"raw\" data for further processing (statistics, visualisation, etc.).\n\nFinal product\nLuck (2009): Impaired response selection in schizophrenia...\n\nlong format dataset (csv-file)\npatient/control (20/group) median RT x condition (all data good)\nfilenames (e.g.): 0001_ABC_20170101.log (possibly with a few typos?)\n\n\n\n|Group|Cond|Median|Subjid|Accuracy|\n|:---:|:---:|---|---|---|\n|Patient/Control|Freq/Rare|{float}|{int}|{float}|\n|...|...|...|...|...|\nData format/structure", "import numpy as np\nfrom scipy.stats import gamma\nimport matplotlib.pyplot as plt\n\nfig, ax = plt.subplots(1, 1)\n\n# Calculate a few first moments:\n\na, a_loc, a_scale = 2.99, 0.5, 1\nb, b_loc, b_scale = 2.79, 0, 1.2\nmean, var, skew, kurt = gamma.stats(a, moments='mvsk')\n\n# Display the probability density function (``pdf``):\n\nx = np.linspace(gamma.ppf(0.01, a),\n gamma.ppf(0.99, a), 100)\nax.plot(x, gamma.pdf(x, a),\n 'r-', lw=5, alpha=0.6, label='gamma pdf')\nax.plot(x, gamma.pdf(x, b, b_loc, b_scale),\n 'g-', lw=5, alpha=0.6, label='gamma pdf')\n\n# Alternatively, the distribution object can be called (as a function)\n# to fix the shape, location and scale parameters. 
This returns a \"frozen\"\n# RV object holding the given parameters fixed.\n\n# Freeze the distribution and display the frozen ``pdf``:\n\nrv = gamma(a)\nax.plot(x, rv.pdf(x), 'k-', lw=2, label='frozen pdf')\n\n# Check accuracy of ``cdf`` and ``ppf``:\n\nvals = gamma.ppf([0.001, 0.5, 0.999], a)\nnp.allclose([0.001, 0.5, 0.999], gamma.cdf(vals, a))\n# True\n\n# Generate random numbers:\n\nr = gamma.rvs(a, size=1000)\n\n# And compare the histogram:\n\nax.hist(r, normed=True, histtype='stepfilled', alpha=0.2)\nax.legend(loc='best', frameon=False)\nplt.show()\n\ndef generate_RT_values(group='Control', trials=1000, rareprob=0.2):\n a, a_loc, a_scale = 2.99, 0.5, 1\n b, b_loc, b_scale = 2.79, 0, 1.2\n # r = gamma.rvs(a, size=trials)\n RTs = []\n return RTs\n\npatients = ['0004_SDF', '0015_GSC']\ncontrols = ['0006_SDG', '0010_KGE']\n\n# 0004_SDF_20170101.log\n# RARECAT=digit\n# Time SMTH STIM=a\n# Time SMTH RESP=1 (correct)\n# Time SMTH STIM=j\n# Time SMTH RESP=1 (correct)\n# Time SMTH STIM=r\n# Time SMTH RESP=2 (incorrect)\n# Time SMTH STIM=4\n# Time SMTH RESP=2 (correct)\n\n", "Skills needed/Tasks performed", "# from ddhs_helpers import find_file_matching_wildcard\nimport glob\nimport os\n\ndef find_file_matching_wildcard(wildcard, path='.'):\n allfiles = glob.glob(os.path.join(path, wildcard))\n if len(allfiles) == 0:\n raise ValueError('No files found matching: ...')\n elif len(allfiles) > 1:\n raise ValueError('More than one file found matching: ...')\n return allfiles[0] \n\n%pwd\n\nfind_file_matching_wildcard('course*', path='..')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
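One possible way to fill in the generate_RT_values stub sketched above is to draw reaction times from the same gamma distributions used in the plotting cell. The parameter choices below, and the decision to give the 'Patient' group the wider/slower distribution, are assumptions for illustration only, not the course's intended solution.

```python
import numpy as np
from scipy.stats import gamma

def generate_rt_values(group='Control', trials=1000, rare_prob=0.2, seed=None):
    rng = np.random.RandomState(seed)
    # hypothetical mapping of groups to distribution parameters
    if group == 'Patient':
        shape, loc, scale = 2.79, 0.0, 1.2
    else:
        shape, loc, scale = 2.99, 0.5, 1.0
    rts = gamma.rvs(shape, loc=loc, scale=scale, size=trials, random_state=rng)
    conditions = np.where(rng.rand(trials) < rare_prob, 'Rare', 'Freq')
    return rts, conditions

rts, conditions = generate_rt_values('Patient', trials=5, seed=42)
print(np.round(rts, 2), conditions)
```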
5agado/data-science-learning
image processing/Image Processing - Basics.ipynb
apache-2.0
[ "Table of Contents\n\nIntro\nLoad and Plot Image\nPlot Color Channels\n\n\nImage from array\nMorphological Operations\n\n\nConvolution Filters\nGaussian Convolution\n\n\nImages Normalization\nMean/Deviation of Images\nNormalization\n\n\n\nIntro\nNotebook that explores the basics of image processing in Python, like image loading, representation and transformations.", "import seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport os\nimport sys\nsys.path.append(os.path.join(os.pardir))\n\nfrom utils import image_processing\n%load_ext autoreload\n%autoreload 2\n\n%matplotlib inline\n\nsns.set_style(\"dark\")", "Load and Plot Image", "img_path = os.path.join(os.path.pardir, 'resources', 'mona_lisa.jpg')\n\n# Load with open-cv\nimport cv2\n# open-cv represents RGB images as BGR (reverse order)\nimg = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB))\nsns.plt.imshow(img)\nsns.plt.show()\n\n# Load with scikit\nfrom skimage import io\nimg = io.imread(img_path)\nsns.plt.imshow(img)\nsns.plt.show()\n\n# Load with PIL\nfrom PIL import Image\nwith Image.open(img_path) as img:\n sns.plt.imshow(img.convert(mode='RGB'))\n sns.plt.show()", "Plot Color Channels", "# red color channel\nsns.plt.imshow(img[:,:,0])\nsns.plt.show()\n\n# red color channel in grey color map \nsns.plt.imshow(img[:,:,0], cmap='gray')\nsns.plt.show()", "Image from array", "# dummy list of strings representing our image\na = [\"0000000000\",\n \"0111111100\",\n \"0000111100\",\n \"0000111100\",\n \"0001111100\",\n \"0000111100\",\n \"0001100000\",\n \"0000000000\",\n \"0000000000\"]\n\n# build numpy array of 0s and 1s from previous list\na = np.array([list(map(int, s)) for s in a], dtype=np.float32)\na.shape\n\n# plot image\nplt.imshow(a, cmap='gray', interpolation='none', vmin=0, vmax=1)\nplt.show()", "Morphological Operations", "from skimage import morphology\nb = np.array([[1,1,1],\n [1,1,1],\n [1,1,1]])\nres = morphology.binary_dilation(a, b).astype(np.uint8)\n#res = morphology.binary_erosion(res, b).astype(np.uint8)\n\nplt.imshow(res, cmap='gray', interpolation='none', vmin=0, vmax=1)\nplt.show()\n\nb = np.array([[0,0,0,0],\n [0,1,1,0],\n [0,0,0,0]])\ns = np.array([[1,0],\n [1,1]])\n\nres = morphology.binary_dilation(b, s).astype(np.uint8)\nres\n\nplt.imshow(res, cmap='gray', interpolation='none', vmin=0, vmax=1)\nplt.show()", "Convolution Filters\nhttp://setosa.io/ev/image-kernels/", "from scipy import ndimage\nfrom skimage import data\n\n# load initial ref image\nimage = data.coins()\nplt.imshow(image, cmap='gray', interpolation='none')\nplt.show()\n\n# convolve image with custom kernel\nk = np.array([[1/16,1/8,1/16],\n [1/18,1,1/8],\n [1/16,1/8,1/16]])\nn_image = ndimage.convolve(image, k, mode='constant', cval=0.0)\n\n# plot convolved image\nplt.imshow(n_image, cmap='gray', interpolation='none')\nplt.show()", "Gaussian Convolution\nTry convolving an image using a Gaussian kernel.", "# gaussian distribution formula\ndef gaussian(x, mu, sig):\n return np.exp(-np.power(x - mu, 2.) 
/ (2 * np.power(sig, 2.)))\n\n# plot 1D gaussian curve\nx = np.linspace(-3.0, 3.0, 6)\nz = gaussian(x, 0, 1)\nplt.plot(z)\n\n# compute 2D gaussian\nz_2d = np.matmul(z.reshape(-1, 1), z.reshape(1, -1))\nplt.imshow(z_2d)\n\n# load initial ref image\nimage = data.camera()\nplt.imshow(image, cmap='gray')\nplt.show()\n\n# convolve\nn_image = ndimage.convolve(image, z_2d)\nplt.imshow(n_image, cmap='gray', interpolation='none')\nplt.show()", "Images Normalization\nUsing CelebA Dataset\nReference Course: creative-applications-of-deep-learning-with-tensorflow", "# load all imgs filepaths for the celeba database\ndir_path = os.path.join(os.pardir, 'resources', 'img_align_celeba')\nimgs_filepath = [os.path.join(dir_path, img_name) for img_name in os.listdir(dir_path)]\nprint(len(imgs_filepath))\n\n# load subset of images\nimgs = image_processing.load_data(imgs_filepath[:100])\nimgs.shape", "Mean/Deviation of Images", "# compute mean images (across list of images, so axis=0)\nmean_img = np.mean(imgs, axis=0)\n# plot image (convert to int values)\nplt.imshow(mean_img.astype(np.uint8))\nplt.show()\n\n# compute std images (across list of images, so axis=0)\nstd_img = np.std(imgs, axis=0)\n# plot image (convert to int values)\nplt.imshow(std_img.astype(np.uint8))\nplt.show()\n\nplt.imshow(np.mean(std_img, axis=2).astype(np.uint8))\nplt.show()", "Normalization", "# flatten imgs to single vector\nflattened_imgs = imgs.ravel()\nprint(flattened_imgs.shape)\n\n# plot flattened images\n(_, _, _) = plt.hist(flattened_imgs, bins=255)\n\n# plot flattened mean image\n(_, _, _) = plt.hist(mean_img.ravel(), bins=255)\n\nbins = 20\nfig, axs = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)\naxs[0].hist((imgs[0]).ravel(), bins)\naxs[0].set_title('img distribution')\naxs[1].hist((mean_img).ravel(), bins)\naxs[1].set_title('mean distribution')\naxs[2].hist((imgs[0] - mean_img).ravel(), bins)\naxs[2].set_title('(img - mean) distribution')\n\n# normalized image (remove mean and divide by std)\n(_, _, _) = plt.hist(((imgs[0] - mean_img)/std_img).ravel(), bins=20)" ]
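Following the normalization section above, a common next step is to standardize a single image with the dataset mean and standard deviation and then rescale it for display. The sketch below reuses the imgs, mean_img and std_img arrays computed earlier in the notebook.

```python
import numpy as np
import matplotlib.pyplot as plt

def standardize(img, mean_img, std_img, eps=1e-8):
    # z-score every pixel/channel using the dataset statistics
    return (img.astype(np.float32) - mean_img) / (std_img + eps)

def to_displayable(x):
    # rescale an arbitrary float image into [0, 1] so imshow can render it
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8)

norm = standardize(imgs[0], mean_img, std_img)
plt.imshow(to_displayable(norm))
plt.show()
```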
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
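Since the loading section above relies on the fact that OpenCV returns images in BGR channel order, here is a minimal stand-alone sketch of that conversion step, using the img_path variable defined in the notebook: read the image first, then pass the resulting array to cv2.cvtColor before plotting with matplotlib.

```python
import cv2
import matplotlib.pyplot as plt

bgr = cv2.imread(img_path)                   # OpenCV loads images as BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)   # reorder channels for matplotlib
plt.imshow(rgb)
plt.show()
```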
VectorBlox/PYNQ
Pynq-Z1/notebooks/examples/opencv_filters_webcam.ipynb
bsd-3-clause
[ "OpenCV Filters Webcam\nIn this notebook, several filters will be applied to webcam images.\nThose input sources and applied filters will then be displayed either directly in the notebook or on HDMI output.\nTo run all cells in this notebook a webcam and HDMI output monitor are required. \n1. Start HDMI output\nStep 1: Load the overlay", "from pynq import Overlay\nOverlay(\"base.bit\").download()", "Step 2: Initialize HDMI I/O", "from pynq.drivers import HDMI\nfrom pynq.drivers.video import VMODE_640x480\nhdmi_out = HDMI('out')\nhdmi_out.start()", "2. Applying OpenCV filters on Webcam input\nStep 1: Initialize Webcam and set HDMI Out resolution", "# monitor configuration: 640*480 @ 60Hz\nhdmi_out.mode(VMODE_640x480)\nhdmi_out.start()\n# monitor (output) frame buffer size\nframe_out_w = 1920\nframe_out_h = 1080\n# camera (input) configuration\nframe_in_w = 640\nframe_in_h = 480", "Step 2: Initialize camera from OpenCV", "from pynq.drivers import Frame\nimport cv2\n\nvideoIn = cv2.VideoCapture(0)\nvideoIn.set(cv2.CAP_PROP_FRAME_WIDTH, frame_in_w);\nvideoIn.set(cv2.CAP_PROP_FRAME_HEIGHT, frame_in_h);\nprint(\"capture device is open: \" + str(videoIn.isOpened()))", "Step 3: Send webcam input to HDMI output", "import numpy as np\n\nret, frame_vga = videoIn.read()\n\nif (ret):\n frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)\n frame_1080p[0:480,0:640,:] = frame_vga[0:480,0:640,:]\n hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8).tobytes()))\nelse:\n raise RuntimeError(\"Error while reading from camera.\")", "Step 4: Edge detection\nDetecting edges on webcam input and display on HDMI out.", "import time\nframe_1080p = np.zeros((1080,1920,3)).astype(np.uint8)\n\nnum_frames = 20\nreadError = 0\n\nstart = time.time()\nfor i in range (num_frames): \n # read next image\n ret, frame_vga = videoIn.read()\n if (ret):\n laplacian_frame = cv2.Laplacian(frame_vga, cv2.CV_8U)\n # copy to frame buffer / show on monitor reorder RGB (HDMI = GBR)\n frame_1080p[0:480,0:640,[0,1,2]] = laplacian_frame[0:480,0:640,\n [1,0,2]]\n hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8).tobytes()))\n else:\n readError += 1\nend = time.time()\n\nprint(\"Frames per second: \" + str((num_frames-readError) / (end - start)))\nprint(\"Number of read errors: \" + str(readError))", "Step 5: Canny edge detection\nDetecting edges on webcam input and display on HDMI out.\nAny edges with intensity gradient more than maxVal are sure to be edges and those below minVal are sure to be non-edges, so discarded. Those who lie between these two thresholds are classified edges or non-edges based on their connectivity. If they are connected to “sure-edge” pixels, they are considered to be part of edges. 
Otherwise, they are also discarded.", "frame_1080p = np.zeros((1080,1920,3)).astype(np.uint8)\n\nnum_frames = 20\n\nstart = time.time()\nfor i in range (num_frames):\n # read next image\n ret, frame_webcam = videoIn.read()\n if (ret):\n frame_canny = cv2.Canny(frame_webcam,100,110)\n frame_1080p[0:480,0:640,0] = frame_canny[0:480,0:640]\n frame_1080p[0:480,0:640,1] = frame_canny[0:480,0:640]\n frame_1080p[0:480,0:640,2] = frame_canny[0:480,0:640]\n # copy to frame buffer / show on monitor\n hdmi_out.frame_raw(bytearray(frame_1080p.astype(np.int8).tobytes()))\n else:\n readError += 1\nend = time.time()\n\nprint(\"Frames per second: \" + str((num_frames-readError) / (end - start)))\nprint(\"Number of read errors: \" + str(readError))", "Step 6: Show results\nNow use matplotlib to show filtered webcam input inside notebook.", "%matplotlib inline \nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nplt.figure(1, figsize=(10, 10))\nframe_vga = np.zeros((480,640,3)).astype(np.uint8)\nframe_vga[0:480,0:640,0] = frame_canny[0:480,0:640]\nframe_vga[0:480,0:640,1] = frame_canny[0:480,0:640]\nframe_vga[0:480,0:640,2] = frame_canny[0:480,0:640]\nplt.imshow(frame_vga[:,:,[2,1,0]])\nplt.show()", "Step 7: Release camera and HDMI", "videoIn.release()\nhdmi_out.stop()\ndel hdmi_out" ]
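The effect of the two Canny thresholds described above can be explored without the webcam or HDMI output. The sketch below runs cv2.Canny on a small synthetic image for a few (minVal, maxVal) pairs and simply counts the detected edge pixels; higher thresholds discard weaker gradients.

```python
import numpy as np
import cv2

# synthetic test image: a bright square on a dark background, slightly blurred
img = np.zeros((200, 200), dtype=np.uint8)
img[60:140, 60:140] = 200
img = cv2.blur(img, (5, 5))

for min_val, max_val in [(50, 60), (100, 110), (150, 200)]:
    edges = cv2.Canny(img, min_val, max_val)
    print(min_val, max_val, 'edge pixels:', int(np.count_nonzero(edges)))
```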
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
undercertainty/ou_nlp
tools_numpy.ipynb
apache-2.0
[ "Tools - NumPy\nNumPy is the fundamental library for scientific computing with Python. NumPy is centered around a powerful N-dimensional array object, and it also contains useful linear algebra, Fourier transform, and random number functions.\nCreating arrays\nFirst let's make sure that this notebook works both in python 2 and 3:", "from __future__ import division, print_function, unicode_literals", "Now let's import numpy. Most people import it as np:", "import numpy as np", "np.zeros\nThe zeros function creates an array containing any number of zeros:", "np.zeros(5)", "It's just as easy to create a 2D array (ie. a matrix) by providing a tuple with the desired number of rows and columns. For example, here's a 3x4 matrix:", "np.zeros((3,4))", "Some vocabulary\n\nIn NumPy, each dimension is called an axis.\nThe number of axes is called the rank.\nFor example, the above 3x4 matrix is an array of rank 2 (it is 2-dimensional).\nThe first axis has length 3, the second has length 4.\n\n\nAn array's list of axis lengths is called the shape of the array.\nFor example, the above matrix's shape is (3, 4).\nThe rank is equal to the shape's length.\n\n\nThe size of an array is the total number of elements, which is the product of all axis lengths (eg. 3*4=12)", "a = np.zeros((3,4))\na\n\na.shape\n\na.ndim # equal to len(a.shape)\n\na.size", "N-dimensional arrays\nYou can also create an N-dimensional array of arbitrary rank. For example, here's a 3D array (rank=3), with shape (2,3,4):", "np.zeros((2,3,4))", "Array type\nNumPy arrays have the type ndarrays:", "type(np.zeros((3,4)))", "np.ones\nMany other NumPy functions create ndarrays.\nHere's a 3x4 matrix full of ones:", "np.ones((3,4))", "np.full\nCreates an array of the given shape initialized with the given value. Here's a 3x4 matrix full of π.", "np.full((3,4), np.pi)", "np.empty\nAn uninitialized 2x3 array (its content is not predictable, as it is whatever is in memory at that point):", "np.empty((2,3))", "np.array\nOf course you can initialize an ndarray using a regular python array. Just call the array function:", "np.array([[1,2,3,4], [10, 20, 30, 40]])", "np.arange\nYou can create an ndarray using NumPy's range function, which is similar to python's built-in range function:", "np.arange(1, 5)", "It also works with floats:", "np.arange(1.0, 5.0)", "Of course you can provide a step parameter:", "np.arange(1, 5, 0.5)", "However, when dealing with floats, the exact number of elements in the array is not always predictible. For example, consider this:", "print(np.arange(0, 5/3, 1/3)) # depending on floating point errors, the max value is 4/3 or 5/3.\nprint(np.arange(0, 5/3, 0.333333333))\nprint(np.arange(0, 5/3, 0.333333334))\n", "np.linspace\nFor this reason, it is generally preferable to use the linspace function instead of arange when working with floats. 
The linspace function returns an array containing a specific number of points evenly distributed between two values (note that the maximum value is included, contrary to arange):", "print(np.linspace(0, 5/3, 6))", "np.rand and np.randn\nA number of functions are available in NumPy's random module to create ndarrays initialized with random values.\nFor example, here is a 3x4 matrix initialized with random floats between 0 and 1 (uniform distribution):", "np.random.rand(3,4)", "Here's a 3x4 matrix containing random floats sampled from a univariate normal distribution (Gaussian distribution) of mean 0 and variance 1:", "np.random.randn(3,4)", "To give you a feel of what these distributions look like, let's use matplotlib (see the matplotlib tutorial for more details):", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.hist(np.random.rand(100000), normed=True, bins=100, histtype=\"step\", color=\"blue\", label=\"rand\")\nplt.hist(np.random.randn(100000), normed=True, bins=100, histtype=\"step\", color=\"red\", label=\"randn\")\nplt.axis([-2.5, 2.5, 0, 1.1])\nplt.legend(loc = \"upper left\")\nplt.title(\"Random distributions\")\nplt.xlabel(\"Value\")\nplt.ylabel(\"Density\")\nplt.show()", "np.fromfunction\nYou can also initialize an ndarray using a function:", "def my_function(z, y, x):\n return x * y + z\n\nnp.fromfunction(my_function, (3, 2, 10))", "NumPy first creates three ndarrays (one per dimension), each of shape (3, 2, 10). Each array has values equal to the coordinate along a specific axis. For example, all elements in the z array are equal to their z-coordinate:\n[[[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]\n [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]\n\n [[ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]\n [ 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]\n\n [[ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]\n [ 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]]]\n\nSo the terms x, y and z in the expression x * y + z above are in fact ndarrays (we will discuss arithmetic operations on arrays below). The point is that the function my_function is only called once, instead of once per element. This makes initialization very efficient.\nArray data\ndtype\nNumPy's ndarrays are also efficient in part because all their elements must have the same type (usually numbers).\nYou can check what the data type is by looking at the dtype attribute:", "c = np.arange(1, 5)\nprint(c.dtype, c)\n\nc = np.arange(1.0, 5.0)\nprint(c.dtype, c)", "Instead of letting NumPy guess what data type to use, you can set it explicitly when creating an array by setting the dtype parameter:", "d = np.arange(1, 5, dtype=np.complex64)\nprint(d.dtype, d)", "Available data types include int8, int16, int32, int64, uint8|16|32|64, float16|32|64 and complex64|128. Check out the documentation for the full list.\nitemsize\nThe itemsize attribute returns the size (in bytes) of each item:", "e = np.arange(1, 5, dtype=np.complex64)\ne.itemsize", "data buffer\nAn array's data is actually stored in memory as a flat (one dimensional) byte buffer. It is available via the data attribute (you will rarely need it, though).", "f = np.array([[1,2],[1000, 2000]], dtype=np.int32)\nf.data", "In python 2, f.data is a buffer. In python 3, it is a memoryview.", "if (hasattr(f.data, \"tobytes\")):\n data_bytes = f.data.tobytes() # python 3\nelse:\n data_bytes = memoryview(f.data).tobytes() # python 2\n\ndata_bytes", "Several ndarrays can share the same data buffer, meaning that modifying one will also modify the others. 
We will see an example in a minute.\nReshaping an array\nIn place\nChanging the shape of an ndarray is as simple as setting its shape attribute. However, the array's size must remain the same.", "g = np.arange(24)\nprint(g)\nprint(\"Rank:\", g.ndim)\n\ng.shape = (6, 4)\nprint(g)\nprint(\"Rank:\", g.ndim)\n\ng.shape = (2, 3, 4)\nprint(g)\nprint(\"Rank:\", g.ndim)", "reshape\nThe reshape function returns a new ndarray object pointing at the same data. This means that modifying one array will also modify the other.", "g2 = g.reshape(4,6)\nprint(g2)\nprint(\"Rank:\", g2.ndim)", "Set item at row 1, col 2 to 999 (more about indexing below).", "g2[1, 2] = 999\ng2", "The corresponding element in g has been modified.", "g", "ravel\nFinally, the ravel function returns a new one-dimensional ndarray that also points to the same data:", "g.ravel()", "Arithmetic operations\nAll the usual arithmetic operators (+, -, *, /, //, **, etc.) can be used with ndarrays. They apply elementwise:", "a = np.array([14, 23, 32, 41])\nb = np.array([5, 4, 3, 2])\nprint(\"a + b =\", a + b)\nprint(\"a - b =\", a - b)\nprint(\"a * b =\", a * b)\nprint(\"a / b =\", a / b)\nprint(\"a // b =\", a // b)\nprint(\"a % b =\", a % b)\nprint(\"a ** b =\", a ** b)", "Note that the multiplication is not a matrix multiplication. We will discuss matrix operations below.\nThe arrays must have the same shape. If they do not, NumPy will apply the broadcasting rules.\nBroadcasting\nIn general, when NumPy expects arrays of the same shape but finds that this is not the case, it applies the so-called broadcasting rules:\nFirst rule\nIf the arrays do not have the same rank, then a 1 will be prepended to the smaller ranking arrays until their ranks match.", "h = np.arange(5).reshape(1, 1, 5)\nh", "Now let's try to add a 1D array of shape (5,) to this 3D array of shape (1,1,5). Applying the first rule of broadcasting!", "h + [10, 20, 30, 40, 50] # same as: h + [[[10, 20, 30, 40, 50]]]", "Second rule\nArrays with a 1 along a particular dimension act as if they had the size of the array with the largest shape along that dimension. The value of the array element is repeated along that dimension.", "k = np.arange(6).reshape(2, 3)\nk", "Let's try to add a 2D array of shape (2,1) to this 2D ndarray of shape (2, 3). 
NumPy will apply the second rule of broadcasting:", "k + [[100], [200]] # same as: k + [[100, 100, 100], [200, 200, 200]]", "Combining rules 1 & 2, we can do this:", "k + [100, 200, 300] # after rule 1: [[100, 200, 300]], and after rule 2: [[100, 200, 300], [100, 200, 300]]", "And also, very simply:", "k + 1000 # same as: k + [[1000, 1000, 1000], [1000, 1000, 1000]]", "Third rule\nAfter rules 1 & 2, the sizes of all arrays must match.", "try:\n k + [33, 44]\nexcept ValueError as e:\n print(e)", "Broadcasting rules are used in many NumPy operations, not just arithmetic operations, as we will see below.\nFor more details about broadcasting, check out the documentation.\nUpcasting\nWhen trying to combine arrays with different dtypes, NumPy will upcast to a type capable of handling all possible values (regardless of what the actual values are).", "k1 = np.arange(0, 5, dtype=np.uint8)\nprint(k1.dtype, k1)\n\nk2 = k1 + np.array([5, 6, 7, 8, 9], dtype=np.int8)\nprint(k2.dtype, k2)", "Note that int16 is required to represent all possible int8 and uint8 values (from -128 to 255), even though in this case a uint8 would have sufficed.", "k3 = k1 + 1.5\nprint(k3.dtype, k3)", "Conditional operators\nThe conditional operators also apply elementwise:", "m = np.array([20, -5, 30, 40])\nm < [15, 16, 35, 36]", "And using broadcasting:", "m < 25 # equivalent to m < [25, 25, 25, 25]", "This is most useful in conjunction with boolean indexing (discussed below).", "m[m < 25]", "Mathematical and statistical functions\nMany mathematical and statistical functions are available for ndarrays.\nndarray methods\nSome functions are simply ndarray methods, for example:", "a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])\nprint(a)\nprint(\"mean =\", a.mean())", "Note that this computes the mean of all elements in the ndarray, regardless of its shape.\nHere are a few more useful ndarray methods:", "for func in (a.min, a.max, a.sum, a.prod, a.std, a.var):\n print(func.__name__, \"=\", func())", "These functions accept an optional argument axis which lets you ask for the operation to be performed on elements along the given axis. For example:", "c=np.arange(24).reshape(2,3,4)\nc\n\nc.sum(axis=0) # sum across matrices\n\nc.sum(axis=1) # sum across rows", "You can also sum over multiple axes:", "c.sum(axis=(0,2)) # sum across matrices and columns\n\n0+1+2+3 + 12+13+14+15, 4+5+6+7 + 16+17+18+19, 8+9+10+11 + 20+21+22+23", "Universal functions\nNumPy also provides fast elementwise functions called universal functions, or ufunc. They are vectorized wrappers of simple functions. For example square returns a new ndarray which is a copy of the original ndarray except that each element is squared:", "a = np.array([[-2.5, 3.1, 7], [10, 11, 12]])\nnp.square(a)", "Here are a few more useful unary ufuncs:", "print(\"Original ndarray\")\nprint(a)\nfor func in (np.abs, np.sqrt, np.exp, np.log, np.sign, np.ceil, np.modf, np.isnan, np.cos):\n print(\"\\n\", func.__name__)\n print(func(a))", "Binary ufuncs\nThere are also many binary ufuncs, that apply elementwise on two ndarrays. 
Broadcasting rules are applied if the arrays do not have the same shape:", "a = np.array([1, -2, 3, 4])\nb = np.array([2, 8, -1, 7])\nnp.add(a, b) # equivalent to a + b\n\nnp.greater(a, b) # equivalent to a > b\n\nnp.maximum(a, b)\n\nnp.copysign(a, b)", "Array indexing\nOne-dimensional arrays\nOne-dimensional NumPy arrays can be accessed more or less like regular python arrays:", "a = np.array([1, 5, 3, 19, 13, 7, 3])\na[3]\n\na[2:5]\n\na[2:-1]\n\na[:2]\n\na[2::2]\n\na[::-1]", "Of course, you can modify elements:", "a[3]=999\na", "You can also modify an ndarray slice:", "a[2:5] = [997, 998, 999]\na", "Differences with regular python arrays\nContrary to regular python arrays, if you assign a single value to an ndarray slice, it is copied across the whole slice, thanks to broadcasting rules discussed above.", "a[2:5] = -1\na", "Also, you cannot grow or shrink ndarrays this way:", "try:\n a[2:5] = [1,2,3,4,5,6] # too long\nexcept ValueError as e:\n print(e)", "You cannot delete elements either:", "try:\n del a[2:5]\nexcept ValueError as e:\n print(e)", "Last but not least, ndarray slices are actually views on the same data buffer. This means that if you create a slice and modify it, you are actually going to modify the original ndarray as well!", "a_slice = a[2:6]\na_slice[1] = 1000\na # the original array was modified!\n\na[3] = 2000\na_slice # similarly, modifying the original array modifies the slice!", "If you want a copy of the data, you need to use the copy method:", "another_slice = a[2:6].copy()\nanother_slice[1] = 3000\na # the original array is untouched\n\na[3] = 4000\nanother_slice # similary, modifying the original array does not affect the slice copy", "Multi-dimensional arrays\nMulti-dimensional arrays can be accessed in a similar way by providing an index or slice for each axis, separated by commas:", "b = np.arange(48).reshape(4, 12)\nb\n\nb[1, 2] # row 1, col 2\n\nb[1, :] # row 1, all columns\n\nb[:, 1] # all rows, column 1", "Caution: note the subtle difference between these two expressions:", "b[1, :]\n\nb[1:2, :]", "The first expression returns row 1 as a 1D array of shape (12,), while the second returns that same row as a 2D array of shape (1, 12).\nFancy indexing\nYou may also specify a list of indices that you are interested in. This is referred to as fancy indexing.", "b[(0,2), 2:5] # rows 0 and 2, columns 2 to 4 (5-1)\n\nb[:, (-1, 2, -1)] # all rows, columns -1 (last), 2 and -1 (again, and in this order)", "If you provide multiple index arrays, you get a 1D ndarray containing the values of the elements at the specified coordinates.", "b[(-1, 2, -1, 2), (5, 9, 1, 9)] # returns a 1D array with b[-1, 5], b[2, 9], b[-1, 1] and b[2, 9] (again)", "Higher dimensions\nEverything works just as well with higher dimensional arrays, but it's useful to look at a few examples:", "c = b.reshape(4,2,6)\nc\n\nc[2, 1, 4] # matrix 2, row 1, col 4\n\nc[2, :, 3] # matrix 2, all rows, col 3", "If you omit coordinates for some axes, then all elements in these axes are returned:", "c[2, 1] # Return matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]", "Ellipsis (...)\nYou may also write an ellipsis (...) to ask that all non-specified axes be entirely included.", "c[2, ...] # matrix 2, all rows, all columns. This is equivalent to c[2, :, :]\n\nc[2, 1, ...] # matrix 2, row 1, all columns. This is equivalent to c[2, 1, :]\n\nc[2, ..., 3] # matrix 2, all rows, column 3. This is equivalent to c[2, :, 3]\n\nc[..., 3] # all matrices, all rows, column 3. 
This is equivalent to c[:, :, 3]", "Boolean indexing\nYou can also provide an ndarray of boolean values on one axis to specify the indices that you want to access.", "b = np.arange(48).reshape(4, 12)\nb\n\nrows_on = np.array([True, False, True, False])\nb[rows_on, :] # Rows 0 and 2, all columns. Equivalent to b[(0, 2), :]\n\ncols_on = np.array([False, True, False] * 4)\nb[:, cols_on] # All rows, columns 1, 4, 7 and 10", "np.ix_\nYou cannot use boolean indexing this way on multiple axes, but you can work around this by using the ix_ function:", "b[np.ix_(rows_on, cols_on)]\n\nnp.ix_(rows_on, cols_on)", "If you use a boolean array that has the same shape as the ndarray, then you get in return a 1D array containing all the values that have True at their coordinate. This is generally used along with conditional operators:", "b[b % 3 == 1]", "Iterating\nIterating over ndarrays is very similar to iterating over regular python arrays. Note that iterating over multidimensional arrays is done with respect to the first axis.", "c = np.arange(24).reshape(2, 3, 4) # A 3D array (composed of two 3x4 matrices)\nc\n\nfor m in c:\n print(\"Item:\")\n print(m)\n\nfor i in range(len(c)): # Note that len(c) == c.shape[0]\n print(\"Item:\")\n print(c[i])", "If you want to iterate on all elements in the ndarray, simply iterate over the flat attribute:", "for i in c.flat:\n print(\"Item:\", i)", "Stacking arrays\nIt is often useful to stack together different arrays. NumPy offers several functions to do just that. Let's start by creating a few arrays.", "q1 = np.full((3,4), 1.0)\nq1\n\nq2 = np.full((4,4), 2.0)\nq2\n\nq3 = np.full((3,4), 3.0)\nq3", "vstack\nNow let's stack them vertically using vstack:", "q4 = np.vstack((q1, q2, q3))\nq4\n\nq4.shape", "This was possible because q1, q2 and q3 all have the same shape (except for the vertical axis, but that's ok since we are stacking on that axis).\nhstack\nWe can also stack arrays horizontally using hstack:", "q5 = np.hstack((q1, q3))\nq5\n\nq5.shape", "This is possible because q1 and q3 both have 3 rows. But since q2 has 4 rows, it cannot be stacked horizontally with q1 and q3:", "try:\n q5 = np.hstack((q1, q2, q3))\nexcept ValueError as e:\n print(e)", "concatenate\nThe concatenate function stacks arrays along any given existing axis.", "q7 = np.concatenate((q1, q2, q3), axis=0) # Equivalent to vstack\nq7\n\nq7.shape", "As you might guess, hstack is equivalent to calling concatenate with axis=1.\nstack\nThe stack function stacks arrays along a new axis. All arrays have to have the same shape.", "q8 = np.stack((q1, q3))\nq8\n\nq8.shape", "Splitting arrays\nSplitting is the opposite of stacking. For example, let's use the vsplit function to split a matrix vertically.\nFirst let's create a 6x4 matrix:", "r = np.arange(24).reshape(6,4)\nr", "Now let's split it in three equal parts, vertically:", "r1, r2, r3 = np.vsplit(r, 3)\nr1\n\nr2\n\nr3", "There is also a split function which splits an array along any given axis. Calling vsplit is equivalent to calling split with axis=0. 
There is also an hsplit function, equivalent to calling split with axis=1:", "r4, r5 = np.hsplit(r, 2)\nr4\n\nr5", "Transposing arrays\nThe transpose method creates a new view on an ndarray's data, with axes permuted in the given order.\nFor example, let's create a 3D array:", "t = np.arange(24).reshape(4,2,3)\nt", "Now let's create an ndarray such that the axes 0, 1, 2 (depth, height, width) are re-ordered to 1, 2, 0 (depth→width, height→depth, width→height):", "t1 = t.transpose((1,2,0))\nt1\n\nt1.shape", "By default, transpose reverses the order of the dimensions:", "t2 = t.transpose() # equivalent to t.transpose((2, 1, 0))\nt2\n\nt2.shape", "NumPy provides a convenience function swapaxes to swap two axes. For example, let's create a new view of t with depth and height swapped:", "t3 = t.swapaxes(0,1) # equivalent to t.transpose((1, 0, 2))\nt3\n\nt3.shape", "Linear algebra\nNumPy 2D arrays can be used to represent matrices efficiently in python. We will just quickly go through some of the main matrix operations available. For more details about Linear Algebra, vectors and matrics, go through the Linear Algebra tutorial.\nMatrix transpose\nThe T attribute is equivalent to calling transpose() when the rank is ≥2:", "m1 = np.arange(10).reshape(2,5)\nm1\n\nm1.T", "The T attribute has no effect on rank 0 (empty) or rank 1 arrays:", "m2 = np.arange(5)\nm2\n\nm2.T", "We can get the desired transposition by first reshaping the 1D array to a single-row matrix (2D):", "m2r = m2.reshape(1,5)\nm2r\n\nm2r.T", "Matrix dot product\nLet's create two matrices and execute a matrix dot product using the dot method.", "n1 = np.arange(10).reshape(2, 5)\nn1\n\nn2 = np.arange(15).reshape(5,3)\nn2\n\nn1.dot(n2)", "Caution: as mentionned previously, n1*n2 is not a dot product, it is an elementwise product.\nMatrix inverse and pseudo-inverse\nMany of the linear algebra functions are available in the numpy.linalg module, in particular the inv function to compute a square matrix's inverse:", "import numpy.linalg as linalg\n\nm3 = np.array([[1,2,3],[5,7,11],[21,29,31]])\nm3\n\nlinalg.inv(m3)", "You can also compute the pseudoinverse using pinv:", "linalg.pinv(m3)", "Identity matrix\nThe product of a matrix by its inverse returns the identiy matrix (with small floating point errors):", "m3.dot(linalg.inv(m3))", "You can create an identity matrix of size NxN by calling eye:", "np.eye(3)", "QR decomposition\nThe qr function computes the QR decomposition of a matrix:", "q, r = linalg.qr(m3)\nq\n\nr\n\nq.dot(r) # q.r equals m3", "Determinant\nThe det function computes the matrix determinant:", "linalg.det(m3) # Computes the matrix determinant", "Eigenvalues and eigenvectors\nThe eig function computes the eigenvalues and eigenvectors of a square matrix:", "eigenvalues, eigenvectors = linalg.eig(m3)\neigenvalues # λ\n\neigenvectors # v\n\nm3.dot(eigenvectors) - eigenvalues * eigenvectors # m3.v - λ*v = 0", "Singular Value Decomposition\nThe svd function takes a matrix and returns its singular value decomposition:", "m4 = np.array([[1,0,0,0,2], [0,0,3,0,0], [0,0,0,0,0], [0,2,0,0,0]])\nm4\n\nU, S_diag, V = linalg.svd(m4)\nU\n\nS_diag", "The svd function just returns the values in the diagonal of Σ, but we want the full Σ matrix, so let's create it:", "S = np.zeros((4, 5))\nS[np.diag_indices(4)] = S_diag\nS # Σ\n\nV\n\nU.dot(S).dot(V) # U.Σ.V == m4", "Diagonal and trace", "np.diag(m3) # the values in the diagonal of m3 (top left to bottom right)\n\nnp.trace(m3) # equivalent to np.diag(m3).sum()", "Solving a system of linear 
scalar equations\nThe solve function solves a system of linear scalar equations, such as:\n\n$2x + 6y = 6$\n$5x + 3y = -9$", "coeffs = np.array([[2, 6], [5, 3]])\ndepvars = np.array([6, -9])\nsolution = linalg.solve(coeffs, depvars)\nsolution", "Let's check the solution:", "coeffs.dot(solution), depvars # yep, it's the same", "Looks good! Another way to check the solution:", "np.allclose(coeffs.dot(solution), depvars)", "Vectorization\nInstead of executing operations on individual array items, one at a time, your code is much more efficient if you try to stick to array operations. This is called vectorization. This way, you can benefit from NumPy's many optimizations.\nFor example, let's say we want to generate a 768x1024 array based on the formula $sin(xy/40.5)$. A bad option would be to do the math in python using nested loops:", "import math\ndata = np.empty((768, 1024))\nfor y in range(768):\n for x in range(1024):\n data[y, x] = math.sin(x*y/40.5) # BAD! Very inefficient.", "Sure, this works, but it's terribly inefficient since the loops are taking place in pure python. Let's vectorize this algorithm. First, we will use NumPy's meshgrid function which generates coordinate matrices from coordinate vectors.", "x_coords = np.arange(0, 1024) # [0, 1, 2, ..., 1023]\ny_coords = np.arange(0, 768) # [0, 1, 2, ..., 767]\nX, Y = np.meshgrid(x_coords, y_coords)\nX\n\nY", "As you can see, both X and Y are 768x1024 arrays, and all values in X correspond to the horizontal coordinate, while all values in Y correspond to the the vertical coordinate.\nNow we can simply compute the result using array operations:", "data = np.sin(X*Y/40.5)", "Now we can plot this data using matplotlib's imshow function (see the matplotlib tutorial).", "import matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nfig = plt.figure(1, figsize=(7, 6))\nplt.imshow(data, cmap=cm.hot, interpolation=\"bicubic\")\nplt.show()", "Saving and loading\nNumPy makes it easy to save and load ndarrays in binary or text format.\nBinary .npy format\nLet's create a random array and save it.", "a = np.random.rand(2,3)\na\n\nnp.save(\"my_array\", a)", "Done! Since the file name contains no file extension was provided, NumPy automatically added .npy. Let's take a peek at the file content:", "with open(\"my_array.npy\", \"rb\") as f:\n content = f.read()\n\ncontent", "To load this file into a NumPy array, simply call load:", "a_loaded = np.load(\"my_array.npy\")\na_loaded", "Text format\nLet's try saving the array in text format:", "np.savetxt(\"my_array.csv\", a)", "Now let's look at the file content:", "with open(\"my_array.csv\", \"rt\") as f:\n print(f.read())", "This is a CSV file with tabs as delimiters. You can set a different delimiter:", "np.savetxt(\"my_array.csv\", a, delimiter=\",\")", "To load this file, just use loadtxt:", "a_loaded = np.loadtxt(\"my_array.csv\", delimiter=\",\")\na_loaded", "Zipped .npz format\nIt is also possible to save multiple arrays in one zipped file:", "b = np.arange(24, dtype=np.uint8).reshape(2, 3, 4)\nb\n\nnp.savez(\"my_arrays\", my_a=a, my_b=b)", "Again, let's take a peek at the file content. 
Note that the .npz file extension was automatically added.", "with open(\"my_arrays.npz\", \"rb\") as f:\n content = f.read()\n\nrepr(content)[:180] + \"[...]\"", "You then load this file like so:", "my_arrays = np.load(\"my_arrays.npz\")\nmy_arrays", "This is a dict-like object which loads the arrays lazily:", "my_arrays.keys()\n\nmy_arrays[\"my_a\"]", "What next?\nNow you know all the fundamentals of NumPy, but there are many more options available. The best way to learn more is to experiment with NumPy, and go through the excellent reference documentation to find more functions and features you may be interested in." ]
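The vectorization section above claims a large efficiency gap between the nested Python loop and the meshgrid-based version; the sketch below times both on the same 768x1024 grid and checks that they produce the same values. Exact timings depend on the machine.

```python
import math
import time
import numpy as np

height, width = 768, 1024

start = time.time()
slow = np.empty((height, width))
for y in range(height):
    for x in range(width):
        slow[y, x] = math.sin(x * y / 40.5)
loop_seconds = time.time() - start

start = time.time()
X, Y = np.meshgrid(np.arange(width), np.arange(height))
fast = np.sin(X * Y / 40.5)
vector_seconds = time.time() - start

print('loop: %.3fs, vectorized: %.3fs' % (loop_seconds, vector_seconds))
print('max abs difference:', np.abs(slow - fast).max())
```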
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
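As a companion to the solve example above, the sketch below shows the over-determined case (more equations than unknowns), where linalg.solve no longer applies and a least-squares solution is computed with linalg.lstsq instead. Passing rcond=None assumes a reasonably recent NumPy; older versions use a different default.

```python
import numpy as np
import numpy.linalg as linalg

# three equations, two unknowns: no exact solution in general
A = np.array([[2., 6.],
              [5., 3.],
              [1., 1.]])
b = np.array([6., -9., 1.])

solution, residuals, rank, singular_values = linalg.lstsq(A, b, rcond=None)
print('least-squares solution:', solution)
print('sum of squared residuals:', residuals)
```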
ddavignon/CST-495
Project_search.ipynb
unlicense
[ "Search Project for CST 495\n\nCMU Movie Summary Corpus\nhttp://www.cs.cmu.edu/~ark/personas/\n\nDustin D'Avignon\nChris Ngo\nLet's go\n\nWe begin with normalise the text by removing unwanted characters and converting to lowercase", "import csv\nimport re\n\nwith open(\"data/MovieSummaries/plot_summaries.tsv\") as f:\n r = csv.reader(f, delimiter='\\t', quotechar='\"')\n tag = re.compile(r'\\b[0-9]+\\b')\n rgx = re.compile(r'\\b[a-zA-Z]+\\b')\n #docs = [ (' '.join(re.findall(tag, x[0])).lower(), ' '.join(re.findall(rgx, x[1])).lower()) for i,x in enumerate(r) if r>1 ]\n docs= {}\n for i,x in enumerate(r):\n if i >1:\n docs[' '.join(re.findall(tag, x[0])).lower()] = ' '.join(re.findall(rgx, x[1])).lower()\n\n\n\n\n\n\nimport csv\nimport re\n\nwith open(\"data/MovieSummaries/movie.metadata.tsv\") as f:\n r = csv.reader(f, delimiter='\\t', quotechar='\"')\n tag = re.compile(r'\\b[0-9]+\\b')\n rgx = re.compile(r'\\b[a-zA-Z]+\\b')\n docs2= {}\n for i,x in enumerate(r):\n if i >1:\n docs2[' '.join(re.findall(tag, x[0])).lower()] = ' '.join(re.findall(rgx, x[2])).lower(), ' '.join(re.findall(rgx, x[8])).lower()\n \n#print(docs2)", "now is the time to join the docs together", "doc = [(docs2.get(x), y) for x, y in docs.items() if docs2.get(x)]\n\n\n\n# for testing\n# import random\n #print doc[random.randint(0, len(doc)-1)]\nprint doc[0][0], doc[0][1]\n\nitems_t = [ d[0] for d in doc ] # item titles\nitems_d = [ d[1] for d in doc ] # item description\nitems_i = range(0 , len(items_t)) # item id\n\n", "term freq", "corpus = items_d[0:25]\nprint corpus", "start by computing frequncy of entire corpus", "tf = {}\nfor doc in corpus:\n for word in doc.split():\n if word in tf:\n tf[word] += 1\n else:\n tf[word] = 1\nprint(tf)", "now that we have normailised the data we can compute the term frequency", "from collections import Counter\n\ndef get_tf(corpus):\n tf = Counter()\n for doc in corpus:\n for word in doc.split():\n tf[word] += 1\n return tf\n\ntf = get_tf(corpus)\nprint(tf)\n ", "doc freq", "import collections\n\ndef get_tf(document):\n tf = Counter()\n for word in document.split():\n tf[word] += 1\n return tf\n\ndef get_dtf(corpus):\n dtf = {}\n for i,doc in enumerate(corpus):\n dtf[i]= get_tf(doc)\n return dtf\n\ndtf = get_dtf(items_d)\ndtf[342]", "compute dtf for item descriptions", "dtf = get_dtf(items_d)\ndtf[12]", "term freq matrix\n\nwith the lexicon we are able to compute the term freq matrix", "def get_tfm(corpus):\n \n def get_lexicon(corpus):\n lexicon = set()\n for doc in corpus:\n lexicon.update([word for word in doc.split()])\n return list(lexicon)\n \n lexicon = get_lexicon(corpus)\n \n tfm =[]\n for doc in corpus:\n tfv = [0]*len(lexicon)\n for term in doc.split():\n tfv[lexicon.index(term)] += 1\n \n tfm.append(tfv)\n \n return tfm, lexicon\n\n#test_corpus = ['mountain bike', 'road bike carbon', 'bike helmet']\n#tfm, lexicon = get_tfm(test_corpus)\n#print lexicon\n#print tfm\n\n\n ", "sparsity of term frequency matrix\nWe took the approach of using Bokeh for displaying the sparsity of term frequency matrix", "#!pip install bokeh\n\nimport pandas as pd\nfrom bokeh.plotting import figure, output_notebook, show, vplot\n\n# sparsity as a function of document count\nn = []\ns = []\nfor i in range(100,1000,100):\n corpus = items_d[0:i]\n tfm, lexicon = get_tfm(corpus)\n c = [ [x.count(0), x.count(1)] for x in tfm]\n n_zero = sum([ y[0] for y in c])\n n_one = sum( [y[1] for y in c])\n s.append(1.0 - (float(n_one) / (n_one + n_zero)))\n n.append(i)\n 
\noutput_notebook(hide_banner=True)\np = figure(x_axis_label='Documents', y_axis_label='Sparsity', plot_width=400, plot_height=400)\np.line(n, s, line_width=2)\np.circle(n, s, fill_color=\"white\", size=8)\nshow(p)", "boolean search\nAfter doing the term frequency matrix, we went into using our first ranking function. We are using a boolean search to find documents that contains the words that are included within a user specified query. This is how our boolean search algorithm works:\n\nCompute the lexicon for the corpus\nCompute the term frequency matrix for the corpus\nConvert query to query vector using the same lexicon \nCompare each documents term frequncy vector to the query vector - specifically for each document in the corpus:\nCompute a ranking score for each document by taking the dot product of the document's term frequency vector and the query vector\n\n\nSort the documents by ranking score", "\n\n# compute term frequency matrix and lexicon\ntfm, lexicon = get_tfm(corpus)\n\n\n# define our query\nqry = 'red bike'\n\n# convert query to query vector using lexicon\nqrv = [0]*len(lexicon)\nfor term in qry.split():\n if term in lexicon:\n qrv[lexicon.index(term)] = 1\n\n#print qrv\n\n# compare query vector to each term frequency vector\n# this is dot product between qrv and each row of tfm\nfor i,tfv in enumerate(tfm):\n print i, sum([ xy[0] * xy[1] for xy in zip(qrv, tfv) ])", "To compute the document ranking score we used the function get_results_tf() with results from the term frequency matrix", "def get_results_tf(qry, tfm, lexicon):\n qrv =[0]*len(lexicon)\n for term in qry.split():\n if term in lexicon:\n qrv[lexicon.index(term)] = 1\n \n results = []\n for i, tfv in enumerate(tfm):\n score = 0\n score = sum([ xy[0] * xy[1] for xy in zip(qrv,tfv)])\n results.append([score, i])\n \n sorted_results = sorted(results, key=lambda t: t[0] * -1)\n return sorted_results\n\n\ndef print_results(results,n, head=True):\n ''' Helper function to print results\n '''\n if head: \n print('\\nTop %d from recall set of %d items:' % (n,len(results)))\n for r in results[:n]:\n print('\\t%0.2f - %s'%(r[0],items_t[r[1]]))\n else:\n print('\\nBottom %d from recall set of %d items:' % (n,len(results)))\n for r in results[-n:]:\n print('\\t%0.2f - %s'%(r[0],items_t[r[1]]))\n \n\ntfm, lexicon = get_tfm(items_d[:1000])\nresults = get_results_tf('fun times', tfm , lexicon)\nprint_results(results,10)", "Inverted Index\n\nthe inverted index maps terms to the document in which they can be found", "def create_inverted_index(corpus):\n idx={}\n for i, document in enumerate(corpus):\n for word in document.split():\n if word in idx:\n idx[word].append(i)\n else:\n idx[word] = [i]\n ## HIDE\n return idx\n\ntest_corpus = ['mountain bike red','road bike carbon','bike helmet']\nidx = create_inverted_index(test_corpus)\nprint(idx)", "inverted index for document titles", "idx = create_inverted_index(items_d)\nprint(set(idx['good']).intersection(set(idx['times'])))\nprint(items_d[2061])", "improve the ranking function", "def get_results_tf(qry, idx):\n score = Counter()\n for term in qry.split():\n for doc in idx[term]:\n score[doc] += 1\n \n results=[]\n for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:\n if x[1] > 0:\n results.append([x[1],x[0]])\n\n sorted_results = sorted(results, key=lambda t: t[0] * -1 )\n return sorted_results;\n\n\nidx = create_inverted_index(items_d)\nresults = get_results_tf('zombies', idx)\nprint_results(results,20)", "enter different queries", "results = 
get_results_tf('ghouls and ghosts', idx)\nprint_results(results, 10)\n\nimport pandas as pd\nfrom bokeh.plotting import output_notebook, show\nfrom bokeh.charts import Bar\nfrom bokeh.charts.attributes import CatAttr\n#from bokeh.models import ColumnDataSource\n\ndf = pd.DataFrame({'term':[x for x in idx.keys()],'freq':[len(x) for x in idx.values()]})\n\noutput_notebook(hide_banner=True)\np = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='freq',\n plot_width=800, plot_height=400)\nshow(p)\n", "TF-IDF\nTo implement TF-IDF we used the function: \n$$\nIDF = log ( 1 + \\frac{N}{n_t} ) \n$$", "import math\n\ndef idf(term, idx, n):\n return math.log( float(n) / (1 + len(idx[term]))) \n\n\nprint(idf('zombie',idx,len(items_d)))\nprint(idf('survival',idx,len(items_d)))\nprint(idf('invasions',idx,len(items_d)))", "TF-IDF Intuition", "from bokeh.charts import vplot\n\nidx = create_inverted_index(items_d)\n\ndf = pd.DataFrame({'term':[x for x in idx.keys()],'freq':[len(x) for x in idx.values()],\n 'idf':[idf(x, idx, len(items_t)) for x in idx.keys()]})\n\noutput_notebook(hide_banner=True)\np1 = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='freq',\n plot_width=800, plot_height=400)\np2 = Bar(df.sort_values('freq', ascending=False)[:30], label=CatAttr(columns=['term'], sort=False), values='idf',\n plot_width=800, plot_height=400)\np = vplot(p1, p2)\nshow(p)", "TF-IDF Ranking\nWe then created an inverted index for the TD-IDF ranking", "def create_inverted_index(corpus):\n idx={}\n for i, doc in enumerate(corpus):\n for word in doc.split():\n if word in idx:\n if i in idx[word]:\n # Update document's frequency\n idx[word][i] += 1\n else:\n # Add document\n idx[word][i] = 1\n else:\n # Add term\n idx[word] = {i:1}\n return idx\n\ndef get_results_tfidf(qry, idx, n):\n score = Counter()\n for term in qry.split():\n if term in idx:\n i = idf(term, idx, n)\n for doc in idx[term]:\n score[doc] += idx[term][doc] * i\n \n results=[]\n for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:\n if x[1] > 0:\n results.append([x[1],x[0]])\n \n sorted_results = sorted(results, key=lambda t: t[0] * -1 )\n return sorted_results\n\nidx = create_inverted_index(items_d)\nresults = get_results_tfidf('lookout action bike zombie', idx, len(items_d))\nprint_results(results,10)", "Ideally we do not want scores to be the same for lots of documents. 
High TF-IDF scores in shorter documents should be more relevant - so we could try by boosting the score for documents that are shorter than average.", "def get_results_tfidf_boost(qry, corpus):\n idx = create_inverted_index(corpus)\n n = len(corpus)\n d = [len(x.split()) for x in corpus]\n d_avg = float(sum(d)) / len(d)\n score = Counter()\n for term in qry.split():\n if term in idx:\n i = idf(term, idx, n)\n for doc in idx[term]:\n f = float(idx[term][doc])\n score[doc] += i * ( f / (float(d[doc]) / d_avg) )\n \n results=[]\n for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:\n if x[1] > 0:\n # output [0] score, [1] doc_id\n results.append([x[1],x[0]])\n\n sorted_results = sorted(results, key=lambda t: t[0] * -1 )\n return sorted_results\n\nfrom bokeh.charts import Scatter\n\nresults = get_results_tfidf_boost('zombie invasion', items_d)\nprint_results(results, 10)\n\n# Plot score vs item length\ndf = pd.DataFrame({'score':[float(x[0]) for x in results],\n 'length':[len(items_d[x[1]].split()) for x in results]})\n\noutput_notebook()\np = Scatter(df, x='score', y='length')\nshow(p)", "Implementing BM25\nTo implement BM25, we used the function get_results_bm25 that used arguments \"query, corpus, and the index sizes. We then printed out the results using a Bokeh chart.", "def get_results_bm25(qry, corpus, k1=1.5, b=0.75):\n idx = create_inverted_index(corpus)\n # 1.Assign (integer) n to be the number of documents in the corpus\n n = len(corpus)\n # 2.Assign (list) d with elements corresponding to the number of terms in each document in the corpus\n d = [len(x.split()) for x in corpus]\n # 3.Assign (float) d_avg as the average document length of the documents in the corpus\n d_avg = float(sum(d)) / len(d) \n score = Counter()\n for term in qry.split():\n if term in idx:\n i = idf(term, idx, n)\n for doc in idx[term]:\n # 4.Assign (float) f equal to the number of times the term appears in doc\n f = float(idx[term][doc])\n # 5.Assign (float) s the BM25 score for this (term, document) pair\n s = i * (( f * (k1 + 1) ) / (f + k1 * (1 - b + (b * (float(d[doc]) / d_avg)))))\n score[doc] += s\n \n results=[]\n for x in [[r[0],r[1]] for r in zip(score.keys(), score.values())]:\n if x[1] > 0:\n results.append([x[1],x[0]])\n\n sorted_results = sorted(results, key=lambda t: t[0] * -1 )\n return sorted_results\n\nresults = get_results_bm25('zombie apacolypse', items_d)\nprint_results(results, 10)\n\n!pip install bokeh\nfrom bokeh.charts import Scatter\n\nresults = get_results_bm25('zombie apacolypse', items_d, k1=1.5, b=0.75)\n\n# Plot score vs item length\ndf = pd.DataFrame({'score':[float(x[0]) for x in results],\n 'length':[len(items_d[x[1]].split()) for x in results]})\noutput_notebook()\np = Scatter(df, x='score', y='length')\nshow(p)", "Implementing Random Forest Machine Learning\nUsing the example from class to implement random forest ranking algorithm.", "import findspark\nimport os\nfindspark.init(os.getenv('HOME') + '/spark-1.6.0-bin-hadoop2.6')\nos.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.3.0 pyspark-shell'\n\nimport pyspark\ntry: \n print(sc)\nexcept NameError:\n sc = pyspark.SparkContext()\n print(sc)\n\nfrom pyspark.sql import SQLContext\nimport os\n\nsqlContext = SQLContext(sc)\ndf = sqlContext.read.format('data/MovieSummaries/plot_summaries.tsv').options().options(header='true', inferSchema='true', delimiter=',') \\\n .load(os.getcwd() + 'data/MovieSummaries/plot_summaries.tsv') \n 
\ndf.schema\ndf.dropna()\n\nsqlContext.registerDataFrameAsTable(df,'dataset')\nsqlContext.tableNames()\n\ndata_full = sqlContext.sql(\"select label_relevanceBinary, feature_1, feature_2, feature_3, feature_4 \\\n feature_5, feature_6, feature_7, feature_8, feature_9, feature_10 \\\n from dataset\").rdd\n\nfrom pyspark.mllib.classification import SVMWithSGD, SVMModel\nfrom pyspark.mllib.regression import LabeledPoint\nfrom pyspark.mllib.feature import StandardScaler\n\nlabel = data_full.map(lambda row: row[0])\nfeatures = data_full.map(lambda row: row[1:])\n\nmodel = StandardScaler().fit(features)\nfeatures_transform = model.transform(features)\n\n# Now combine and convert back to labelled points:\ntransformedData = label.zip(features_transform)\ntransformedData = transformedData.map(lambda row: LabeledPoint(row[0],[row[1]]))\n\ntransformedData.take(5)\n\ndata_train, data_test = transformedData.randomSplit([.75,.25],seed=1973)\n\nprint('Training data records = ' + str(data_train.count()))\nprint('Training data records = ' + str(data_test.count()))\n\nfrom pyspark.mllib.tree import RandomForest\n\nmodel = RandomForest.trainClassifier(data_train, numClasses=2, categoricalFeaturesInfo={},\n numTrees=400, featureSubsetStrategy=\"auto\",\n impurity='gini', maxDepth=10, maxBins=32)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hanhanwu/Hanhan_Data_Science_Practice
sequencial_analysis/after_2020_practice/ts_RNN_basics.ipynb
mit
[ "Time Series Forecast with Basic RNN\n\nDataset is downloaded from https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data", "import pandas as pd\nimport numpy as np\nimport datetime\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\nfrom sklearn.preprocessing import MinMaxScaler\n\ndf = pd.read_csv('data/pm25.csv')\n\nprint(df.shape)\ndf.head()\n\ndf.isnull().sum()*100/df.shape[0]\n\ndf.dropna(subset=['pm2.5'], axis=0, inplace=True)\ndf.reset_index(drop=True, inplace=True)\n\ndf['datetime'] = df[['year', 'month', 'day', 'hour']].apply(\n lambda row: datetime.datetime(year=row['year'], \n month=row['month'], day=row['day'],hour=row['hour']), axis=1)\ndf.sort_values('datetime', ascending=True, inplace=True)\n\ndf.head()\n\ndf['year'].value_counts()\n\nplt.figure(figsize=(5.5, 5.5))\ng = sns.lineplot(data=df['pm2.5'], color='g')\ng.set_title('pm2.5 between 2010 and 2014')\ng.set_xlabel('Index')\ng.set_ylabel('pm2.5 readings')", "Note\n\nScaling the variables will make optimization functions work better, so here going to scale the variable into [0,1] range", "scaler = MinMaxScaler(feature_range=(0, 1))\ndf['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))\n\ndf.head()\n\nplt.figure(figsize=(5.5, 5.5))\ng = sns.lineplot(data=df['scaled_pm2.5'], color='purple')\ng.set_title('Scaled pm2.5 between 2010 and 2014')\ng.set_xlabel('Index')\ng.set_ylabel('scaled_pm2.5 readings')\n\n# 2014 data as validation data, before 2014 as training data\nsplit_date = datetime.datetime(year=2014, month=1, day=1, hour=0) \ndf_train = df.loc[df['datetime']<split_date]\ndf_val = df.loc[df['datetime']>=split_date]\nprint('Shape of train:', df_train.shape)\nprint('Shape of test:', df_val.shape)\n\ndf_val.reset_index(drop=True, inplace=True)\ndf_val.head()\n\n# The way this works is to have the first nb_timesteps-1 observations as X and nb_timesteps_th as the target,\n## collecting the data with 1 stride rolling window.\n\ndef makeXy(ts, nb_timesteps):\n \"\"\"\n Input: \n ts: original time series\n nb_timesteps: number of time steps in the regressors\n Output: \n X: 2-D array of regressors\n y: 1-D array of target \n \"\"\"\n X = []\n y = []\n for i in range(nb_timesteps, ts.shape[0]):\n X.append(list(ts.loc[i-nb_timesteps:i-1]))\n y.append(ts.loc[i])\n \n X, y = np.array(X), np.array(y)\n return X, y\n\nX_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)\nprint('Shape of train arrays:', X_train.shape, y_train.shape)\n\nprint(X_train[0], y_train[0])\nprint(X_train[1], y_train[1])\n\nX_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)\nprint('Shape of validation arrays:', X_val.shape, y_val.shape)\n\nprint(X_val[0], y_val[0])\nprint(X_val[1], y_val[1])", "Note\n\nIn 2D array above for X_train, X_val, it means (number of samples, number of time steps)\nHowever RNN input has to be 3D array, (number of samples, number of time steps, number of features per timestep)\nOnly 1 feature which is scaled_pm2.5\nSo, the code below converts 2D array to 3D array", "X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))\nX_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))\nprint('Shape of arrays after reshaping:', X_train.shape, X_val.shape)\n\nfrom keras.models import Sequential\nfrom keras.layers import SimpleRNN\nfrom keras.layers import Dense, Dropout, Input\nfrom keras.models import load_model\nfrom keras.callbacks import ModelCheckpoint\n\nfrom sklearn.metrics import mean_absolute_error\n\nmodel = Sequential()\nmodel.add(SimpleRNN(32, 
input_shape=(X_train.shape[1:])))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(1, activation='linear'))\n\nmodel.compile(optimizer='rmsprop', loss='mean_absolute_error', metrics=['mae'])\nmodel.summary()\n\nsave_weights_at = 'basic_rnn_model'\nsave_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,\n save_best_only=True, save_weights_only=False, mode='min',\n period=1)\nhistory = model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,\n verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),\n shuffle=True)\n\n# load the best model\nbest_model = load_model('basic_rnn_model')\n\n# Compare the prediction with y_true\npreds = best_model.predict(X_val)\npred_pm25 = scaler.inverse_transform(preds)\npred_pm25 = np.squeeze(pred_pm25)\n\n# Measure MAE of y_pred and y_true\nmae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)\nprint('MAE for the validation set:', round(mae, 4))\n\nmae = mean_absolute_error(df_val['scaled_pm2.5'].loc[7:], preds)\nprint('MAE for the scaled validation set:', round(mae, 4))\n\n# Check the metrics and loss of each apoch\nmae = history.history['mae']\nval_mae = history.history['val_mae']\nloss = history.history['loss']\nval_loss = history.history['val_loss']\n\nepochs = range(len(mae))\n\nplt.plot(epochs, mae, 'bo', label='Training MAE')\nplt.plot(epochs, val_mae, 'b', label='Validation MAE')\nplt.title('Training and Validation MAE')\nplt.legend()\n\nplt.figure()\n\n# Here I was using MAE as loss too, that's why they lookedalmost the same...\nplt.plot(epochs, loss, 'bo', label='Training loss')\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and Validation loss')\nplt.legend()\n\nplt.show()", "Note\n\nBest model saved by ModelCheckpoint saved 7th epoch result, which had 0.12 val_loss\nFrom the history plot of training vs validation loss, 7th epoch result (i=6) has the lowest validation loss. This aligh with the result from ModelCheckpoint" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
oscarvarto/oscarvarto.github.io
content/immutability-principles.ipynb
apache-2.0
[ "title: Immutability Principles\nauthor: Oscar Vargas Torres\ndate: 2018-09-12\ncategory: Functional Programming\ntags: Immutability\n\nMutability is hard to reason about\nA lot of programming languages support mutability. For example, some objects in Python are mutable:", "x = [1, 2, 3]\n\nx.reverse()\nx", "This may not seem problematic at first. A lot of people would argue that it is indeed necessary to program. However, when things can change, we sometimes are forced to understand more details than the bare minimum necessary. For example:", "# This function is just for ilustration purposes.\n# Imagine a situation where a very long and complex method mutates one of it's arguments...\nfrom typing import List, TypeVar\nT = TypeVar('T')\n\ndef m1(x: List[T]) -> None:\n \"\"\"Reverses its argument\"\"\"\n x.reverse()\n return None\n\nvowels = ['a', 'e', 'i', 'o', 'u']\nm1(vowels)\nvowels", "Now, we have to dig into the implementation of m1, to understand how the method affects its arguments.\nA simpler approach is to rely on immutable data structures/variables. This may seem like a more difficult approach, but it makes programming easier in the long run.", "# Note: The example above serves to illustrate the problems with mutation.\n# Of course, it is not the *only* way to do it on Python.\n# For example, a more functional approach would be (using `List[T]`):\ndef m2(x: List[T]) -> List[T]:\n return x[::-1]\n\nvowels2 = ['a', 'e', 'i', 'o', 'u']\n\nprint(m2(vowels2))\nprint(vowels2) # Remains unmodified", "Let's use an immutable approach to the previous problem with pyrsistent Python's library:", "from pyrsistent import plist\nns1 = plist([1, 2, 3])\nns1\n\nns2 = ns1.reverse()\nns2\n\n# Notice that original list remains unmodified (it is an immutable/persistent data structure!)\nns1", "The following script, is a complete application of the concepts just presented.", "from pyrsistent import PRecord, field\nfrom typing import Callable, Optional, TypeVar\nfrom scipy.optimize import newton\n\nimport matplotlib.pyplot as plt\nimport numpy as np\n\n\nA = TypeVar('A')\nB = TypeVar('B')\nF1 = Callable[[A], B]\nRealF = F1[float, float]\n\n\nclass RootPlot(PRecord):\n def inv(self):\n return self.x_min <= self.x_max, 'x_min bigger than x_max'\n __invariant__ = inv\n x_min = field(type=float, mandatory=True)\n x_max = field(type=float, mandatory=True)\n x_init = field(type=float)\n output_file = field(type=str)\n\n def plot(self,\n y: RealF,\n dy: Optional[RealF] = None,\n dy2: Optional[RealF] = None) -> None:\n root = newton(func=y, x0=self.x_init, fprime=dy, fprime2=dy2)\n x = np.linspace(self.x_min, self.x_max)\n plt.clf()\n plt.plot(x, np.vectorize(y)(x))\n plt.plot(root, 0.0, 'r+')\n plt.grid()\n plt.savefig(self.output_file)\n plt.close()\n\n\ndef y(x: float) -> float:\n return ((2*x - 11.7)*x + 17.7)*x - 5.0\n\ndef dy(x: float) -> float:\n return (6.0*x - 23.4)*x + 17.7\n\ndef dy2(x: float) -> float:\n return 12*x - 23.4\n\np = RootPlot(x_min=0.0,\n x_max=4.0,\n x_init=3.0,\n output_file=\"simple_plot.png\")\n\n# This wouldn't change final result. 
You would still get a plot\n# from 0.0 to 4.0\n# p.set(x_init=2.0)\np.plot(y)", "Functional programming relies on immutability\nImmutable data structures/collections exist in a lot of programming languages:\n* Haskell\n * https://haskell-containers.readthedocs.io/en/latest/intro.html\n* Scala\n * https://docs.scala-lang.org/overviews/collections/overview.html \n* FSharp\n * https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/fsharp-collection-types\n* Clojure\n * https://clojure.org/reference/data_structures\n* C#\n * https://msdn.microsoft.com/en-us/library/system.collections.immutable(v=vs.111).aspx\n * https://msdn.microsoft.com/en-us/magazine/mt795189.aspx\n* JavaScript\n * https://facebook.github.io/immutable-js/\n* etc\nYou may have a lot of questions on the practicality and performance of Immutable Data Structures. There has been a lot of work and research on this topic. To give an example, Chris Okasaki received his PhD for his work on Purely Functional Data Structures. Take a look at\nhttps://www.cs.cmu.edu/~rwh/theses/okasaki.pdf\nImmutability in the context of Object Oriented Programming\nI will use examples from several programming languages that support Object Oriented Programming, mutability as well as immutability: Java, Scala, F#, C#.\nUsing Java\nWe are going to use Java to give an example (taken from Reactive Design Patterns by Roland Kuhn, et. al.) of an unsafe mutable class, which may hide unexpected behavior:\n```java\nimport java.util.Date;\npublic class Unsafe {\n private Date timestamp;\n private final StringBuffer message;\npublic Unsafe(Date timestamp, StringBuffer message) {\n this.timestamp = timestamp;\n this.message = message;\n}\n\npublic synchronized Date getTimestamp() {\n return timestamp;\n}\n\npublic synchronized void setTimestamp(Date timestamp) {\n this.timestamp = timestamp;\n}\n\npublic StringBuffer getMessage() {\n return message;\n}\n\n}\n```\nCan you spot the problems?\nThe following behaves predictably and is easier to reason about:\n```java\nimport java.util.Date;\npublic class Immutable {\n private final Date timestamp;\n private final String message;\npublic Immutable(final Date timestamp, final String message) {\n this.timestamp = new Date(timestamp.getTime());\n this.message = message;\n\npublic Date getTimestamp() {\n return new Date(timestamp.getTime());\n}\npublic String getMessage() {\n return message;\n}\n\n}}\n```\nUsing Scala\nLet's start with an example that stresses that using mutability forces to understand the context where this technique is used.\n```scala\nclass Counter {\n private var value = 0\ndef increment() { value += 1} // <== This method mutates value\ndef current = value\n}\n```\nAssume we create a Counter instance, and then call \"several times\" the increment method:\nscala\nval counter = new Counter\n// Block1 of code using increment(), possibly several times.\n// ...\n// ...\nval count = counter.current\nCan you guess which is the current count? Why? Do you need to know more information to give the exact answer? 
Do you think this requires more effort/time from you?\nNow, lets compare with an the following immutable definition (also supported by the language):\nscala\nfinal case class ImmutableCounter(current: Int = 0) {\n def increment: ImmutableCounter = ImmutableCounter(current + 1) \n}\n\nNOTE (Scala specific): When you declare a case class, several things happen automatically:\n * Each of the constructor parameters becomes a val unless it is explicitly declared as a var.\n * An apply method is provided for the companion object that lets you construct objects without new.\n * An unapply method is provided that makes pattern matching work.\n * Methods toString, equals, hashCode and copy are generated unless they are explicitly provided.\nTo get the equivalent functionality in other languages, like Java, you would have to write much more code, and/or use libraries like Lombok. Hopefully we will see Java evolving. Take a look at Data Classes for Java from Project Amber and Value Types from Project Valhalla.\n\nNow, for a given ImmutableCounter instance, it is impossible to mutate the current count. You would need to create new instances of the class to be able to get different values. For example:\nscala\nval initialCount = ImmutableCounter(0)\nval counter1 = initialCount.increment\n// Possibly big chunk of code manipulating counters\n// ...\n// ...\nval someCount = counter1.current\nCan you guess which is the value of someCount without studying the \"Possibly big chunk of code\"? Which is the value of someCount?\nWhereas the above example may feel fictitious, it illustrates one important point: Immutability allows you to focus in less code, so it will be easier for you to catch errors, and the compiler can protect you from making mistakes. Final result: you will make less mistakes in your code (less bugs!).\nIn Scala, it is a best practice to avoid vars, and try to use vals for primitive types (the story has some subtleties for reference types) to avoid mutation and make your life easier.\nUsing .NET (F# and C#)\nTake a look at this blog post: https://fsharpforfunandprofit.com/posts/correctness-immutability/\nImmutable Data Structures allow easier concurrency\nTake a look at https://clojure.org/about/concurrent_programming to read how immutable data structures will ease multicore/multithreaded programming on the JVM with Clojure.\nGlobal Data and Mutable Variables\nUsing mutable global variables can be very dangerous (AFAIK JavaScript allows this). Take a look at a thorough discussion on this topic on Section 13.3 Global Data of Code Complete 2nd Edition, by Steve McConnell.\nExtra: Avoiding Null Reference Exceptions by using descriptive types.\nSometimes people allow mutation of variables to encode the possibility that a value sometimes does not exist.\nTo encode the absence of a value, they use nulls. 
Like this:\nscala\nfinal case class Configuration(numberOfCores: Int)\nvar configuration: Configuration = null\n// Block1 of code logic depending on configuration\n// ...\n// Some time later\nconfiguration = Configuration(4)\n(Assume you \"have to\" use vars here, because you have no control over the whole source code)\nCan you spot a potential problem in Block1 above while trying to know the number of cores that have been configured?\nIf there is a possibility that sometimes a value may not exist, you can encode that using Option:\nscala\nfinal case class Configuration(numberOfCores: Int)\nvar configuration: Option[Configuration] = None\n// Block1 of code logic depending on configuration\n// ...\n// some time later\nconfiguration = Some(Configuration(4))\nNow, our program won't crash at runtime if we try to get the number of cores configured in Block1. We will simply get None, meaning that we have not configured our system yet. No more runtime crashes. You just need to let the type system work for you, and encode the possibility of absence of a value using an appropriate type.\nWe have been using Scala to exemplify this, but optionals have been included in mainstream languages also. For example, take a look at the following references:\n* From the Java world: https://docs.oracle.com/javase/8/docs/api/java/util/Optional.html (We are now close to the Java 11 release date)\n* From the .NET world: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/options\n* C++17: https://en.cppreference.com/w/cpp/utility/optional\n* Python: Look for Optional here https://docs.python.org/3/library/typing.html (supported with type annotations, for python 3.6+)\n* PureScript (a language that compiles to JavaScript): https://pursuit.purescript.org/packages/purescript-maybe/4.0.0" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
openfisca/openfisca-france-indirect-taxation
openfisca_france_indirect_taxation/examples/notebooks/regressivite_taxation_indirecte.ipynb
agpl-3.0
[ "L'objectif est de calculer, pour chaque décile de revenu, la part de leur revenu que les ménages dépensent en taxes indirectes. On utilise plusieurs définitions du revenu pour comparer la régressivité de ces taxes. On compare également l'importance de chacune de ces taxes dans l'ensemble de l'imposition indirecte.\nImport de modules généraux", "from __future__ import division\n\nimport pandas\nimport seaborn\n", "Import de modules spécifiques à Openfisca", "from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar\nfrom openfisca_france_indirect_taxation.surveys import SurveyScenario\n", "Import d'une nouvelle palette de couleurs", "seaborn.set_palette(seaborn.color_palette(\"Set2\", 12))\n%matplotlib inline", "Sélection des variables pour la simulation", "simulated_variables = [\n 'tva_total',\n 'ticpe_totale',\n 'vin_droit_d_accise',\n 'biere_droit_d_accise',\n 'alcools_forts_droit_d_accise',\n 'cigarette_droit_d_accise',\n 'cigares_droit_d_accise',\n 'tabac_a_rouler_droit_d_accise',\n 'assurance_transport_taxe',\n 'assurance_sante_taxe',\n 'autres_assurances_taxe',\n 'revtot',\n 'rev_disponible',\n 'somme_coicop12',\n 'taxes_indirectes_total'\n ]\n", "Calcul des contributions des ménages aux différentes taxes indirectes, par décile de revenu", "for year in [2000, 2005, 2011]:\n survey_scenario = SurveyScenario.create(year = year)\n pivot_table = pandas.DataFrame()\n for values in simulated_variables:\n pivot_table = pandas.concat([\n pivot_table,\n survey_scenario.compute_pivot_table(values = [values], columns = ['niveau_vie_decile'])\n ])\n taxe_indirectes = pivot_table.T\n\n taxe_indirectes['TVA'] = taxe_indirectes['tva_total']\n taxe_indirectes['TICPE'] = taxe_indirectes['ticpe_totale']\n taxe_indirectes[u'Taxes alcools'] = (\n taxe_indirectes['vin_droit_d_accise'] +\n taxe_indirectes['biere_droit_d_accise'] +\n taxe_indirectes['alcools_forts_droit_d_accise']\n ).copy()\n taxe_indirectes[u'Taxes assurances'] = (\n taxe_indirectes['assurance_sante_taxe'] +\n taxe_indirectes['assurance_transport_taxe'] +\n taxe_indirectes['autres_assurances_taxe']\n ).copy()\n taxe_indirectes[u'Taxes tabacs'] = (\n taxe_indirectes['cigarette_droit_d_accise'] +\n taxe_indirectes['cigares_droit_d_accise'] +\n taxe_indirectes['tabac_a_rouler_droit_d_accise']\n ).copy()\n\n taxe_indirectes = taxe_indirectes.rename(columns = {'revtot': u'revenu total',\n 'rev_disponible': u'revenu disponible', 'somme_coicop12': u'depenses totales',\n 'taxes_indirectes_total': u'toutes les taxes indirectes'})\n for revenu in [u'revenu total', u'revenu disponible', u'depenses totales', u'toutes les taxes indirectes']:\n list_part_taxes = []\n for taxe in ['TVA', 'TICPE', u'Taxes alcools', u'Taxes assurances', u'Taxes tabacs']:\n taxe_indirectes[u'part ' + taxe] = (\n taxe_indirectes[taxe] / taxe_indirectes[revenu]\n )\n 'list_part_taxes_{}'.format(taxe)\n list_part_taxes.append(u'part ' + taxe)\n\n df_to_graph = taxe_indirectes[list_part_taxes]\n\n print '''Contributions aux différentes taxes indirectes en part de {0},\n par décile de revenu en {1}'''.format(revenu, year)\n graph_builder_bar(df_to_graph)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
BinRoot/TensorFlow-Book
ch02_basics/Concept05_variables.ipynb
mit
[ "Ch 02: Concept 05\nUsing variables\nHere we go, here we go, here we go! Moving on from those simple examples, let's get a better understanding of variables. Start with a session:", "import tensorflow as tf\nsess = tf.InteractiveSession()", "Below is a series of numbers. Don't worry what they mean. Just for fun, let's think of them as neural activations.", "raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]", "Create a boolean variable called spike to detect a sudden increase in the values.\nAll variables must be initialized. Go ahead and initialize the variable by calling run() on its initializer:", "spike = tf.Variable(False)\nspike.initializer.run()", "Loop through the data and update the spike variable when there is a significant increase:", "for i in range(1, len(raw_data)):\n if raw_data[i] - raw_data[i-1] > 5:\n updater = tf.assign(spike, tf.constant(True))\n updater.eval()\n else:\n tf.assign(spike, False).eval()\n print(\"Spike\", spike.eval())", "You forgot to close the session! Here, let me do it:", "sess.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
airanmehr/bio
notebooks/KGZ/QC.ipynb
mit
[ "<a href='#load'>1. Number of Het/Hom Sites per Individual</a>\n<a href='#gw-constant'>2. Estimating genomewide constant population size</a>", "%matplotlib inline\nimport matplotlib\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys,os\npath='/'.join(os.getcwd().split('/')[:-4])\nsys.path.insert(1,path)\nimport Utils.Util as utl\nimport pandas as pd\npd.options.display.max_rows = 20;\npd.options.display.expand_frame_repr = True\nfrom IPython.display import display\nimport seaborn as sns\nimport Scripts.HLI.Kyrgyz.QC.plots as qplt", "<a id='numhet'></a>\n1. Distribution of Genotyopes per Individual\nSummary.\n\nOne individual (HA11, 201852651) has excess of heterozygote variants.\nThe outlier individual has extremely high mPAP.\nA subgroup of individuals have low number of variants.\n\nBy only looking at number of homo/hetero sites, we observe that one individual significantly has excess of homozygote variants.", "qplt.outlier(False)", "Here is pairwise distributio for all possible genotypes:\nNote that the subplots are symmetric. subplots (1,2) and (2,1) and the previous plot are identical (subject to a rotation).\nYou can read these plots as following: in each subplot if a circle is above diagonal line it means excess of y-axis genotype (or deficit of x-axis genotype). Conversly if circle is below diagonal line it denots deficit of y-axis genotype.", "qplt.outlier()", "Next I look if some sub population has systematicaly excess of hetero and homozygote variants", "qplt.populationGT(False)", "Doing it for all possible genotypes:", "qplt.populationGT()\n\nreload(qplt);qplt.excessHetOutlier()\n\nreload(qplt);qplt.excessHetAll()\n\nreload(qplt);qplt.plotN();qplt.plotN(False)\n\nreload(qplt);qplt.HetChrX()\n\nreload(qplt);qplt.freq()\n\nreload(qplt);qplt.SFSHetChrX()\n\nimport Scripts.KyrgysHAPH.Util as kutl\na=pd.read_csv(kutl.pathShare+\"QC.csv\",sep='\\t')\na\npop=pd.read_pickle(kutl.pathShare+'info/pop.df').set_index('SampleName')\n# b=a[a['Sample.key'].apply(lambda x: ('201852665' in str(x)) or ('201852651' in str(x)))];b.loc[:,'label']='mixed'\n# a=pd.concat( [a,b])\na\n# a.iloc[-2:]['label']=['mixed','mixed']\nplt.figure(dpi=200,figsize=(2,5))\ndata=a[a.label.apply(lambda x: ('peruvian' not in x) & ('TKG'not in x))].replace({'label':{'new highlanders':'Individuals'}})\nsns.stripplot(ax=plt.gca(),x='label',y='FreeMix',data=data,jitter=0.05)\nplt.axhline(0.03,c='r')\nplt.xlabel('')\n\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CUBoulder-ASTR2600/lectures
lecture_05_summer_sublists_functions.ipynb
isc
[ "quick recap\nYou now have both while loop and for loop in your toolset.\nLet's look quickly at yesterday's last tutorial task.\nHowever, I also will also upload general solution notebook files later today)", "for fIndex, y in enumerate(range(2, 5)):\n \n countdown = y\n yFactorial = 1\n \n wIndex = 0\n while countdown > 1:\n yFactorial *= countdown\n \n ##### CHECKPOINT! #####\n \n countdown -= 1\n wIndex += 1\n \n print(\"RESULT: %d! = %d\" % (y, yFactorial))\n\n# Question 3\n\nprint(\"%s %s %s %s %s\" % (\"fIndex\", \"y\", \"wIndex\", \"countdown\", \"yFactorial\"))\n\nfor fIndex, y in enumerate(range(2, 5)):\n \n countdown = y\n yFactorial = 1\n \n wIndex = 0\n while countdown > 1:\n yFactorial *= countdown\n \n print(\"%-6i %1i %6i %9i %10i\" % (fIndex, y, wIndex, countdown, yFactorial))\n \n countdown -= 1\n wIndex += 1\n \n #print \"RESULT: %d! = %d\" % (y, yFactorial)", "Today\n\nSublists\nnested lists\nFunctions (the most fun object in Python in my mind)\nglobal vs local variables\ndocstrings\n\nExtracting Sublists\nSometimes we want to operate on only parts of lists.\nThe syntax for this is particularly simple:", "# create our favorite massRatios:\nmassRatios = list(range(10))\nmassRatios\n\nmassRatios[2:7]", "This is called slicing and the 2 parameters required are separated by a colon :.\nSimilar to the parameters for the range() function, the starting number is inclusive while the ending number is exclusive.\nWhen the 1st parameter is left out, the slice starts at the beginning of the list, when the last parameter is left out it goes until the end:", "print(massRatios[:4])\nprint(massRatios[4:])", "Note how in the first case, the length returned is the same as the value of the index you provide, thanks to 0-based indexing.\nNote, also, that thanks to the asymmetry of inclusivity for the start parameter vs exclusivity for the end parameter, you can use the same number twice to get both ends of a list, thisk creates easier to read code as well.", "i = 5\nprint(massRatios[:i])\nprint(massRatios[i:])", "Nested lists\nPython allows for nesting lists. 
This allows for finer substructure of data storage.\nFor example, storing vectors in a list:", "v1 = [0,1,2]\nv2 = [7,8,9]\n\nvectors = [v1, v2]\nvectors", "When accessing elements, you can also just nest the indexing:", "vectors[0][1]\n\nvectors[1][-1]", "Functions\n$B_{\\lambda}(T) = \\frac{2 h c^2}{\\lambda^5 \\left[\\exp\\left(\\frac{h c}{\\lambda k T}\\right) - 1 \\right]}$\nwhere $h$ is Planck's constant, $c$ is the speed of light, \n$k$ is Boltzmann's constant, $T$ is the blackbody temperature, and\n$\\lambda$ is the wavelength.", "# First, define the physical constants:\nh = 6.626e-34 # J s, Planck's constant\nk = 1.38e-23 # J K^-1, Boltzmann constant\nc = 3.00e8 # m s^-1, speed of light\n \n# Conversion between angstroms and meters\nangPerM = 1e10\n \n# The Planck function is a function of two variables;\n# for now, we'll set T = 5,800 K, the photospheric temperature of the Sun\n# and allow the wavelength to vary.\ntemp = 5800.0 \n\nfrom math import exp\n\n# Define the function using def:\n \ndef intensity1(waveAng): # Function header\n waveM = waveAng / angPerM # Will convert Angstroms to meters\n \n B = 2 * h * c**2 / (waveM**5 * (exp(h * c / (waveM * k * temp)) - 1))\n \n return B\n\n# Units will be W / m^2 / m / ster\n\nprint('%e' % intensity1(5000.0)) # note the %e formatting string for exponent notation\n\ndef funcNoReturn(x):\n print(\"Answer:\", x + 5)\n return x+5\n\ny = funcNoReturn(6)\nprint(\"y =\", y)\n\nwaveList = [3000 + 100 * i for i in range(41)]", "Q. What did the above command do?", "waveList\n\nintensityList = [intensity1(wave) for wave in waveList] \nintensityList", "Q. What should the output of \"intensityList\" be?", "for index in range(len(waveList)):\n print('wavelength (Angstrom) = {} Intensity (W m^-3 ster^-1) = {:.2e}'\\\n .format(waveList[index], intensityList[index]))", "Q. What will the output look like?\nLocal and Global variables", "def intensity1(waveAng): # Function header\n waveM = waveAng / angPerM # Will convert Angstroms to meters\n \n B = 2 * h * c**2 / (waveM**5 * (exp(h * c / (waveM * k * temp)) - 1))\n \n return B\n\nB\n\nwaveM", "Q. What will this print?", "g = 10\n\ndef f(x):\n g = 11\n return x + g\n\nf(5), g\n\ng = 10\n\ndef f(x):\n global g # Now \"g\" inside the function references the global variable\n g = 11 # Give that variable a new value\n return x + g\n\nf(5), g", "Functions with multiple arguments", "def intensity2(waveAng, temp): # 2nd version of function Intensity\n waveM = waveAng / angPerM\n B = 2 * h * c**2 / (waveM**5 * (exp(h * c / (waveM * k * temp)) - 1))\n return B\n\nintensity2(5000.0, 5800.0)\n\nintensity2(temp=5800.0, waveAng=5000.0)\n\nintensity2(waveAng=5000.0, temp=5800.0)", "Q. Will this work (produce the same result)?", "intensity2(5800.0, 5000.0)\n\ndef waveListGen(minWave, maxWave, delta):\n waveList = []\n \n wave = minWave\n \n while wave <= maxWave:\n waveList.append(wave)\n wave += delta\n \n return waveList", "Q. 
What will this do?", "waveList = waveListGen(3000, 5000, 200)\nwaveList\n\nlist(range(3000, 5001, 200))", "Functions with multiple return values", "# (Defined h, c, k above and imported math)\n\ndef intensity3(waveAng, temp): # 3rd version of function Intensity\n waveM = waveAng / angPerM\n \n B = 2 * h * c**2 / (waveM**5 * (exp(h * c / (waveM * k * temp)) - 1))\n \n return (waveAng, B)\n\ntemp = 10000.0 # Hot A star or cool B star; brighter than a G star\n\nwaveAng, intensity = intensity3(6000.0, temp=temp)\nwaveAng, intensity # notice the automatic packing of Python again\n\nresult = intensity3(6000.0, 10000.0)\n\nprint(result)\ntype(result)\n\nfor wave in waveListGen(3e3, 4e3, 100):\n print('Wavelength (Angstroms) = %-10i Intensity (W m^-3 ster^-1) = %.2e'\\\n % intensity3(wave, 1e4))", "Doc Strings:\nDoc strings are used to document functions. They generally include:\n\n\nA description of the functionality of the function\n\n\nA list of arguments\n\n\nA description of outputs (returned values)\n\n\nAnd, they serve as the help documentation!\nThey go right after the function header and are enclosed within triple quotes.", "def force(mass1, mass2, radius):\n \"\"\"\n Compute the gravitational force between two bodies.\n \n Parameters\n ----------\n mass1 : int, float\n Mass of the first body, in kilograms.\n mass2 : int, float\n Mass of the second body, in kilograms.\n radius : int, float\n Separation of the bodies, in meters.\n\n Example\n -------\n To compute force between Earth and the Sun:\n >>> F = force(5.97e24, 1.99e30, 1.5e11)\n\n Returns\n -------\n Force in Newtons : float\n \"\"\"\n G = 6.67e-11\n \n return G * mass1 * mass2 / radius**2\n\nresult = force(5.97e24, 2.00e30, 1.5e11)\nresult\n\n# To see the documentation for a function, use help:\n\nhelp(force)", "or with the subwindow:", "force?", "Some important functionality review", "# a = [] initialize an empty list\n# a = [1., 2] initialize a list\n# a.append(elem) add the elem object to the end of the list\n# a + [5, 4] concatenate (join) two lists\n# a.insert(i, e) insert element e at index i\n# a[5] acess the value of the element at index 5\n# a[-1] get the last list element value\n# a[4:7] slice list a\n# del a[i] delete list element with index i\n# a.remove(e) remove list element with value e (not index e)\n# a.index('test') find the index where the element has the value 'test'\n# 4 in a find out whether 4 is in a\n# a.count(4) count how many times 4 is in a\n# len(a) return the number of elements in a\n# min(a) return the smallest element in a\n# max(a) return the largest element in a\n# sum(a) add all the elements in a\n# sorted(a) return a sorted version of list a\n# reversed(a) return a reversed version of list a\n# a[1][0][4] nested list indexing (3 dimensional list in this case)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.13/_downloads/plot_mne_dspm_source_localization.ipynb
bsd-3-clause
[ "%matplotlib inline", "Source localization with MNE/dSPM/sLORETA\nThe aim of this tutorials is to teach you how to compute and apply a linear\ninverse method such as MNE/dSPM/sLORETA on evoked/raw/epochs data.", "import numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import (make_inverse_operator, apply_inverse,\n write_inverse_operator)", "Process MEG data", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)\nraw.set_eeg_reference() # set EEG average reference\nevents = mne.find_events(raw, stim_channel='STI 014')\n\nevent_id = dict(aud_r=1) # event trigger and conditions\ntmin = -0.2 # start of each epoch (200ms before the trigger)\ntmax = 0.5 # end of each epoch (500ms after the trigger)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\npicks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,\n exclude='bads')\nbaseline = (None, 0) # means from the first instant to t = 0\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\n\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,\n baseline=baseline, reject=reject, add_eeg_ref=False)", "Compute regularized noise covariance\nFor more details see tut_compute_covariance.", "noise_cov = mne.compute_covariance(\n epochs, tmax=0., method=['shrunk', 'empirical'])\n\nfig_cov, fig_spectra = mne.viz.plot_cov(noise_cov, raw.info)", "Compute the evoked response", "evoked = epochs.average()\nevoked.plot()\nevoked.plot_topomap(times=np.linspace(0.05, 0.15, 5), ch_type='mag')\n\n# Show whitening\nevoked.plot_white(noise_cov)", "Inverse modeling: MNE/dSPM on evoked and raw data", "# Read the forward solution and compute the inverse operator\n\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-oct-6-fwd.fif'\nfwd = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Restrict forward solution as necessary for MEG\nfwd = mne.pick_types_forward(fwd, meg=True, eeg=False)\n\n# make an MEG inverse operator\ninfo = evoked.info\ninverse_operator = make_inverse_operator(info, fwd, noise_cov,\n loose=0.2, depth=0.8)\n\nwrite_inverse_operator('sample_audvis-meg-oct-6-inv.fif',\n inverse_operator)", "Compute inverse solution", "method = \"dSPM\"\nsnr = 3.\nlambda2 = 1. 
/ snr ** 2\nstc = apply_inverse(evoked, inverse_operator, lambda2,\n method=method, pick_ori=None)\n\ndel fwd, inverse_operator, epochs # to save memory", "Visualization\nView activation time-series", "plt.plot(1e3 * stc.times, stc.data[::100, :].T)\nplt.xlabel('time (ms)')\nplt.ylabel('%s value' % method)\nplt.show()", "Here we use peak getter to move visualization to the time point of the peak\nand draw a marker at the maximum peak vertex.", "vertno_max, time_max = stc.get_peak(hemi='rh')\n\nsubjects_dir = data_path + '/subjects'\nbrain = stc.plot(surface='inflated', hemi='rh', subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]),\n initial_time=time_max, time_unit='s')\nbrain.add_foci(vertno_max, coords_as_verts=True, hemi='rh', color='blue',\n scale_factor=0.6)\nbrain.show_view('lateral')", "Morph data to average brain", "fs_vertices = [np.arange(10242)] * 2\nmorph_mat = mne.compute_morph_matrix('sample', 'fsaverage', stc.vertices,\n fs_vertices, smooth=None,\n subjects_dir=subjects_dir)\nstc_fsaverage = stc.morph_precomputed('fsaverage', fs_vertices, morph_mat)\nbrain_fsaverage = stc_fsaverage.plot(surface='inflated', hemi='rh',\n subjects_dir=subjects_dir,\n clim=dict(kind='value', lims=[8, 12, 15]),\n initial_time=time_max, time_unit='s')\nbrain_fsaverage.show_view('lateral')", "Exercise\n\nBy changing the method parameter to 'sloreta' recompute the source\n estimates using the sLORETA method." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
csiu/100daysofcode
misc/2017-03-02-day06.ipynb
mit
[ "layout: post\nauthor: csiu\ndate: 2017-03-02\ntitle: \"Day06: Jupyter Notebook, meet Jekyll blog post\"\ncategories: update\ntags:\n - 100daysofcode\n - setup\nexcerpt: Integrating code \n\nDAY 06 - Mar 2, 2017\nData Science meetup\nToday I went to the Data Science meetup for \"Using NLP & Machine Learning to understand and predict performance\". Fascinating stuff. Somewhat similar to my thesis work and the talk inspired a few ideas for future projects.", "speaker = 'Thomas Levi'\ntopics_mentioned_at_meetup = [\n \"latent dirichlet allocation\",\n \"collapsed gibbs sampling\",\n \"bayesian inference\",\n \"topic modelling\",\n \"porter stemmer\",\n \"flesch reading ease\",\n \"word2vec\"\n]", "Anyways, I just got home and now (as I'm typing this) have 35 minutes to do something and post it for Day06.\nJupyter Notebook meet Jekyll blog post\nGoing back to a comment I recently recieved about including and embedding code to my jekyll blog posts. I thought I would tackle this problem now. The issue is that I use Jupyter Notebooks to explore and analyze data but I haven't really looked at its integration with the Jekyll blog post.", "for t in topics_mentioned_at_meetup:\n print(\"- '{}' was mentioned\".format(t))", "Integration with Jekyll\n\nAdd yaml front matter to the top of the Jupyter Notebook\nConvert Jupyter Notebook to markdown by jupyter nbconvert --to markdown NOTEBOOK.ipynb\nDelete empty first line of markdown" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
ethen8181/machine-learning
python/algorithms/recursion.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Recursion,-Greedy-Algorithm,-Dynamic-Programming\" data-toc-modified-id=\"Recursion,-Greedy-Algorithm,-Dynamic-Programming-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Recursion, Greedy Algorithm, Dynamic Programming</a></span><ul class=\"toc-item\"><li><span><a href=\"#Recursion\" data-toc-modified-id=\"Recursion-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Recursion</a></span></li><li><span><a href=\"#Greedy-Algorithm\" data-toc-modified-id=\"Greedy-Algorithm-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Greedy Algorithm</a></span></li><li><span><a href=\"#Dynamic-Programming---Changing-Coin\" data-toc-modified-id=\"Dynamic-Programming---Changing-Coin-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Dynamic Programming - Changing Coin</a></span></li><li><span><a href=\"#Dynamic-Programming---0/1-Knapsack\" data-toc-modified-id=\"Dynamic-Programming---0/1-Knapsack-1.4\"><span class=\"toc-item-num\">1.4&nbsp;&nbsp;</span>Dynamic Programming - 0/1 Knapsack</a></span></li></ul></li></ul></div>", "from jupyterthemes import get_themes\nfrom jupyterthemes.stylefx import set_nb_theme\nthemes = get_themes()\nset_nb_theme(themes[1])\n\n%load_ext watermark\n%watermark -a 'Ethen' -d -t -v -p jupyterthemes", "Recursion, Greedy Algorithm, Dynamic Programming\nFollowing the online book, Problem Solving with Algorithms and Data Structures. Chapter 5 discusses recursion.\nRecursion\nIs a method of breaking problems down into smaller and smaller sub-problems until we get to a problem small enough that it can be solved trivially. Recursion must follow three important laws.\n\nIt must have a base case\nIt must call itself recursively\nIt must change it's state and move towards the base case\n\nThe first problem will be Converting an Integer to a String in Any Base.", "def to_str(n, base):\n convert_str = '0123456789ABCDEF'\n if n < base:\n # look up the string representation if it's smaller than the base\n return convert_str[n]\n else:\n # convert_str comes after to to_str method so that it will\n # delayed the addition until the recursive call finishes\n return to_str(n // base, base) + convert_str[n % base]\n\nprint(to_str(769, 10))\nprint(to_str(1453, 16))", "Greedy Algorithm\nChanging money is an optimization problem involves making change using the fewest coins. e.g. The answer for making a change for 63 cents will be 6 coins: two quarters, one dime, and three pennies. How did we arrive at the answer of six coins? Well, one approach will be using a greedy method. Meaning we start with the largest coin in our arsenal (a quarter) and use as many of those as possible, then we go to the next lowest coin value and use as many of those as possible and keep going until we've arrived at our solution. 
This first approach is called a greedy method because we try to solve as big a piece of the problem as possible right away.", "def change_money_greedy(amount, coin_values):\n \"\"\"\n using a greedy algorithm to solve for the minimum\n number of coins needed to make change for the input\n amount (an integer), given all the possible coin values.\n The coin values have to be sorted in\n decreasing order for this code to work properly\n \"\"\"\n \n # key = coin_values\n # value = corresponding number of that coin value\n change = {} \n for d in coin_values:\n n_coins = amount // d\n change[d] = n_coins\n amount = amount % d\n if not amount:\n break\n \n return change\n\namount = 63\ncoin_values = [25, 10, 5, 1]\nchange = change_money_greedy(amount, coin_values)\nprint(change)", "The greedy method works fine when we are using U.S. coins, but suppose, in addition to the usual 1, 5, 10, and 25 cent coins, we now have a 21 cent coin. In this instance our greedy method fails to find the optimal solution for 63 cents in change. With the addition of the 21 cent coin, the greedy method would still find the solution to be 6 coins when the optimal answer should be three 21-cent pieces.\nDynamic Programming - Changing Coin\nLet’s look at a method called dynamic programming, where we could be sure that we would find the optimal answer to the problem. The dynamic programming solution is going to start with making change for one cent and systematically work its way up to the amount of change we require. This guarantees us that at each step of the algorithm we already know the minimum number of coins needed to make change for any smaller amount.\nLet’s look at how we would fill in a table of minimum coins to use in making change for 11 cents. The following figure illustrates the process. \n\nWe start with one cent. The only solution possible up till this point is one coin (a penny). The next row shows the minimum for one cent and two cents. Again, the only solution is two pennies. The fifth row is where things get interesting. Now we have two options to consider: five pennies or one nickel. How do we decide which is best? We consult the table and see that the number of coins needed to make change for four cents is four, plus one more penny to make five, which equals five coins. Or we can look at zero cents plus one more nickel to make five cents, which equals one coin. Since the minimum of one and five is one, we store 1 in the table. Fast forward again to the end of the table and consider 11 cents. 
The three options that we have to consider:\n\nA penny plus the minimum number of coins to make change for 11−1=10 cents (1)\nA nickel plus the minimum number of coins to make change for 11−5=6 cents (2)\nA dime plus the minimum number of coins to make change for 11−10=1 cent (1)\n\nEither option 1 or 3 will give us a total of two coins which is the minimum number of coins for 11 cents.", "import numpy as np\nfrom collections import defaultdict\n\ndef change_money_dp(amount, coin_values):\n \"\"\"\n using dynamic programming to solve\n the minimum number of coins needed to make change for the \n input amount (an integer), given the all the possible coin values.\n unlike the greedy algorithm the coin values doesn't need to be sorted in\n decreasing order for this code to work properly\n \"\"\"\n \n # index starts at 0 (change 0 essentially means nothing\n min_coin = np.zeros(amount + 1, dtype = np.int)\n used_coin = np.zeros(amount + 1, dtype = np.int)\n\n for cents in range(amount + 1):\n # all the coins that are smaller than the \n # current change are all candidates for exchanging\n possible_choices = [c for c in coin_values if c <= cents]\n\n # store the minimum change number 1, and\n # the maximum number of coins required to\n # make change for the current `cents`,\n # these will later be compared and updated\n coin = 1\n coin_count = cents\n\n # consider using all possible coins to make \n # change for the amount specified by cents,\n # and store the minimum number to min_coins\n for j in possible_choices:\n\n # access the minimum coin required to make \n # cents - j amount and add 1 to account for\n # the fact that you're using the current coin\n # to give the changes\n min_coin_count = min_coin[cents - j] + 1\n if min_coin_count < coin_count:\n coin_count = min_coin_count\n coin = j\n\n min_coin[cents] = coin_count\n used_coin[cents] = coin\n \n # determine the number of each coins used to\n # make the change\n change = defaultdict(int)\n coin = amount\n while coin > 0:\n coin_current = used_coin[coin]\n coin -= coin_current\n change[coin_current] += 1\n \n return change\n\namount = 63\ncoin_values = [21, 10, 35, 5, 1]\nchange = change_money_dp(amount, coin_values)\nprint(change)", "Dynamic Programming - 0/1 Knapsack\nThe following blog has a nice introduction on this topic. 
Blog: Dynamic Programming: Knapsack Optimization", "def knapsack(value_weight, capacity):\n \"\"\"0/1 knapsack problem\"\"\"\n\n # construct the dynamic programming table, where each row represents\n # the current capacity level and each column represents the item\n n_items = len(value_weight)\n \n # the padding (0, 1) tuple represents no padding at the beginning of both\n # dimension and pad 1 value at the end of the dimension\n # https://stackoverflow.com/questions/35751306/python-how-to-pad-numpy-array-with-zeros\n table = np.pad(np.zeros((capacity, n_items)), (0, 1), 'constant').astype(np.int)\n for j in range(1, n_items + 1):\n value, weight = value_weight[j - 1]\n for i in range(1, capacity + 1):\n # if the current item's weight is\n # larger than the capacity, then\n # all we can do is lookup the maximum\n # value of the previous column, i.e.\n # best value at this capacity with previously\n # seen items\n if weight > i:\n table[i, j] = table[i, j - 1]\n else:\n # if we can fit the item in, then we compare adding this new\n # item's value with the capacity level just enough to add this\n # value in\n table[i, j] = max(table[i, j - 1], table[i - weight, j - 1] + value)\n\n return table\n\ncapacity = 11\nvalue_weight = [(8, 4), (4, 3), (10, 5), (15, 8)]\ntable = knapsack(value_weight, capacity)\nprint('max value:', table[capacity, len(value_weight)])\ntable\n\n# to see which items were taken (put in the knapsack),\n# we check whether the row corresponding to the capacity\n# we have remaining to use is different in the current\n# column and the one before it, if it is, that means\n# that item was chosen\nremaining = capacity\nitems_taken = np.zeros(len(value_weight), dtype = np.bool)\n\nfor j in range(len(value_weight), 0, -1):\n if table[remaining, j] != table[remaining, j - 1]:\n items_taken[j - 1] = True\n _, weight = value_weight[j - 1]\n remaining -= weight\n \nitems_taken" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlnapf/shogun
doc/ipython-notebooks/multiclass/KNN.ipynb
bsd-3-clause
[ "K-Nearest Neighbors (KNN)\nby Chiyuan Zhang and S&ouml;ren Sonnenburg\nThis notebook illustrates the <a href=\"http://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm\">K-Nearest Neighbors</a> (KNN) algorithm on the USPS digit recognition dataset in Shogun. Further, the effect of <a href=\"http://en.wikipedia.org/wiki/Cover_tree\">Cover Trees</a> on speed is illustrated by comparing KNN with and without it. Finally, a comparison with <a href=\"http://en.wikipedia.org/wiki/Support_vector_machine#Multiclass_SVM\">Multiclass Support Vector Machines</a> is shown. \nThe basics\nThe training of a KNN model basically does nothing but memorizing all the training points and the associated labels, which is very cheap in computation but costly in storage. The prediction is implemented by finding the K nearest neighbors of the query point, and voting. Here K is a hyper-parameter for the algorithm. Smaller values for K give the model low bias but high variance; while larger values for K give low variance but high bias.\nIn SHOGUN, you can use CKNN to perform KNN learning. To construct a KNN machine, you must choose the hyper-parameter K and a distance function. Usually, we simply use the standard CEuclideanDistance, but in general, any subclass of CDistance could be used. For demonstration, in this tutorial we select a random subset of 1000 samples from the USPS digit recognition dataset, and run 2-fold cross validation of KNN with varying K.\nFirst we load and init data split:", "import numpy as np\nimport os\nSHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')\n\nfrom scipy.io import loadmat, savemat\nfrom numpy import random\nfrom os import path\n\nmat = loadmat(os.path.join(SHOGUN_DATA_DIR, 'multiclass/usps.mat'))\nXall = mat['data']\nYall = np.array(mat['label'].squeeze(), dtype=np.double)\n\n# map from 1..10 to 0..9, since shogun\n# requires multiclass labels to be\n# 0, 1, ..., K-1\nYall = Yall - 1\n\nrandom.seed(0)\n\nsubset = random.permutation(len(Yall))\n\nXtrain = Xall[:, subset[:5000]]\nYtrain = Yall[subset[:5000]]\n\nXtest = Xall[:, subset[5000:6000]]\nYtest = Yall[subset[5000:6000]]\n\nNsplit = 2\nall_ks = range(1, 21)\n\nprint(Xall.shape)\nprint(Xtrain.shape)\nprint(Xtest.shape)", "Let us plot the first five examples of the train data (first row) and test data (second row).", "%matplotlib inline\nimport pylab as P\ndef plot_example(dat, lab):\n for i in range(5):\n ax=P.subplot(1,5,i+1)\n P.title(int(lab[i]))\n ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest')\n ax.set_xticks([])\n ax.set_yticks([])\n \n \n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xtrain, Ytrain)\n\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xtest, Ytest)", "Then we import shogun components and convert the data to shogun objects:", "import shogun as sg\nfrom shogun import MulticlassLabels, features\nfrom shogun import KNN\n\nlabels = MulticlassLabels(Ytrain)\nfeats = features(Xtrain)\nk=3\ndist = sg.distance('EuclideanDistance')\nknn = KNN(k, dist, labels)\nlabels_test = MulticlassLabels(Ytest)\nfeats_test = features(Xtest)\nknn.train(feats)\npred = knn.apply_multiclass(feats_test)\nprint(\"Predictions\", pred.get_int_labels()[:5])\nprint(\"Ground Truth\", Ytest[:5])\n\nfrom shogun import MulticlassAccuracy\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(pred, labels_test)\n\nprint(\"Accuracy = %2.2f%%\" % (100*accuracy))", "Let's plot a few missclassified examples - I guess we all agree that these are notably harder to detect.", "idx=np.where(pred != 
Ytest)[0]\nXbad=Xtest[:,idx]\nYbad=Ytest[idx]\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xbad, Ybad)", "Now the question is - is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values for k here and likely perform Cross-validation, we just use a small trick here that saves us lots of computation time: When we have to determine the $K\\geq k$ nearest neighbors we will know the nearest neigbors for all $k=1...K$ and can thus get the predictions for multiple k's in one step:", "knn.put('k', 13)\nmultiple_k=knn.classify_for_multiple_k()\nprint(multiple_k.shape)", "We have the prediction for each of the 13 k's now and can quickly compute the accuracies:", "for k in range(13):\n print(\"Accuracy for k=%d is %2.2f%%\" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest)))", "So k=3 seems to have been the optimal choice.\nAccellerating KNN\nObviously applying KNN is very costly: for each prediction you have to compare the object against all training objects. While the implementation in SHOGUN will use all available CPU cores to parallelize this computation it might still be slow when you have big data sets. In SHOGUN, you can use Cover Trees to speed up the nearest neighbor searching process in KNN. Just call set_use_covertree on the KNN machine to enable or disable this feature. We also show the prediction time comparison with and without Cover Tree in this tutorial. So let's just have a comparison utilizing the data above:", "from shogun import Time, KNN_COVER_TREE, KNN_BRUTE\nstart = Time.get_curtime()\nknn.put('k', 3)\nknn.put('knn_solver', KNN_BRUTE)\npred = knn.apply_multiclass(feats_test)\nprint(\"Standard KNN took %2.1fs\" % (Time.get_curtime() - start))\n\n\nstart = Time.get_curtime()\nknn.put('k', 3)\nknn.put('knn_solver', KNN_COVER_TREE)\npred = knn.apply_multiclass(feats_test)\nprint(\"Covertree KNN took %2.1fs\" % (Time.get_curtime() - start))\n", "So we can significantly speed it up. Let's do a more systematic comparison. For that a helper function is defined to run the evaluation for KNN:", "def evaluate(labels, feats, use_cover_tree=False):\n from shogun import MulticlassAccuracy, CrossValidationSplitting\n import time\n split = CrossValidationSplitting(labels, Nsplit)\n split.build_subsets()\n \n accuracy = np.zeros((Nsplit, len(all_ks)))\n acc_train = np.zeros(accuracy.shape)\n time_test = np.zeros(accuracy.shape)\n for i in range(Nsplit):\n idx_train = split.generate_subset_inverse(i)\n idx_test = split.generate_subset_indices(i)\n\n for j, k in enumerate(all_ks):\n #print \"Round %d for k=%d...\" % (i, k)\n\n feats.add_subset(idx_train)\n labels.add_subset(idx_train)\n\n dist = sg.distance('EuclideanDistance')\n dist.init(feats, feats)\n knn = KNN(k, dist, labels)\n knn.set_store_model_features(True)\n if use_cover_tree:\n knn.put('knn_solver', KNN_COVER_TREE)\n else:\n knn.put('knn_solver', KNN_BRUTE)\n knn.train()\n\n evaluator = MulticlassAccuracy()\n pred = knn.apply_multiclass()\n acc_train[i, j] = evaluator.evaluate(pred, labels)\n\n feats.remove_subset()\n labels.remove_subset()\n feats.add_subset(idx_test)\n labels.add_subset(idx_test)\n\n t_start = time.clock()\n pred = knn.apply_multiclass(feats)\n time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels()\n\n accuracy[i, j] = evaluator.evaluate(pred, labels)\n\n feats.remove_subset()\n labels.remove_subset()\n return {'eout': accuracy, 'ein': acc_train, 'time': time_test}", "Evaluate KNN with and without Cover Tree. 
This takes a few seconds:", "labels = MulticlassLabels(Ytest)\nfeats = features(Xtest)\nprint(\"Evaluating KNN...\")\nwo_ct = evaluate(labels, feats, use_cover_tree=False)\nwi_ct = evaluate(labels, feats, use_cover_tree=True)\nprint(\"Done!\")", "Generate plots with the data collected in the evaluation:", "import matplotlib\n\nfig = P.figure(figsize=(8,5))\nP.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*')\nP.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*')\nP.legend([\"Test Accuracy\", \"Training Accuracy\"])\nP.xlabel('K')\nP.ylabel('Accuracy')\nP.title('KNN Accuracy')\nP.tight_layout()\n\nfig = P.figure(figsize=(8,5))\nP.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*')\nP.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d')\nP.xlabel(\"K\")\nP.ylabel(\"time\")\nP.title('KNN time')\nP.legend([\"Plain KNN\", \"CoverTree KNN\"], loc='center right')\nP.tight_layout()", "Although simple and elegant, KNN is generally very resource costly. Because all the training samples are to be memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when the memory is big enough to hold all the data, the prediction will be slow, since the distances between the query point and all the training points need to be computed and ranked. The situation becomes worse if in addition the data samples are all very high-dimensional. Leaving aside computation time issues, k-NN is a very versatile and competitive algorithm. It can be applied to any kind of objects (not just numerical data) - as long as one can design a suitable distance function. In pratice k-NN used with bagging can create improved and more robust results.\nComparison to Multiclass Support Vector Machines\nIn contrast to KNN - multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from one another. They compare examples utilizing similarity measures (so called Kernels) instead of distances like KNN does. When applied, they are in Big-O notation computationally as expensive as KNN but involve another (costly) training step. They do not scale very well to cases with a huge number of classes but usually lead to favorable results when applied to small number of classes cases. So for reference let us compare how a standard multiclass SVM performs wrt. KNN on the mnist data set from above.\nLet us first train a multiclass svm using a Gaussian kernel (kind of the SVM equivalent to the euclidean distance).", "from shogun import GMNPSVM\n\nwidth=80\nC=1\n\ngk=sg.kernel(\"GaussianKernel\", log_width=np.log(width))\n\nsvm=GMNPSVM(C, gk, labels)\n_=svm.train(feats)", "Let's apply the SVM to the same test data set to compare results:", "out=svm.apply(feats_test)\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(out, labels_test)\n\nprint(\"Accuracy = %2.2f%%\" % (100*accuracy))", "Since the SVM performs way better on this task - let's apply it to all data we did not use in training.", "Xrem=Xall[:,subset[6000:]]\nYrem=Yall[subset[6000:]]\n\nfeats_rem=features(Xrem)\nlabels_rem=MulticlassLabels(Yrem)\nout=svm.apply(feats_rem)\n\nevaluator = MulticlassAccuracy()\naccuracy = evaluator.evaluate(out, labels_rem)\n\nprint(\"Accuracy = %2.2f%%\" % (100*accuracy))\n\nidx=np.where(out.get_labels() != Yrem)[0]\nXbad=Xrem[:,idx]\nYbad=Yrem[idx]\n_=P.figure(figsize=(17,6))\nP.gray()\nplot_example(Xbad, Ybad)", "The misclassified examples are indeed much harder to label even for human beings." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CrowdTruth/CrowdTruth-core
tutorial/notebooks/CrowdTruth vs. MACE vs. Majority Vote for Temporal Event Ordering.ipynb
apache-2.0
[ "CrowdTruth vs. MACE vs. Majority Vote for Temporal Event Ordering Annotation\nThis notebook contains a comparative analysis on the task of temporal event ordering between three approaches:\n\nCrowdTruth\nMACE (a probabilistic model that computes competence estimates of the individual annotators and the most likely answer to each item [1])\nMajority Vote (the most common crowd annotation aggregation method)\n\n[1] Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy (2013): Learning Whom to Trust with MACE. In: Proceedings of NAACL-HLT 2013.\nFirst we describe the task. Then, we apply the CrowdTruth metrics and give examples of clear and unclear example sentences. We then apply MACE. In the final part we perform two comparisons:\n\nCrowdTruth vs. MACE: workers' quality\nCrowdTruth vs. MACE vs. Majority Vote: metrics performance in terms of F1-score (compared to expert, ground truth annotations)\n\nData: This notebook uses the data gathered in the \"Event Annotation\" crowdsourcing experiment published in Rion Snow, Brendan O’Connor, Dan Jurafsky, and Andrew Y. Ng: Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. EMNLP 2008, pages 254–263*.\nTask Description: Given two events in a text, the crowd has to choose whether the first event happened \"strictly before\" or \"strictly after\" the second event. Following, we provide an example from the aforementioned publication:\nText: “It just blew up in the air, and then we saw two fireballs go down to the, to the water, and there was a big small, ah, smoke, from ah, coming up from that”.\nEvents: go/coming, or blew/saw\nA screenshot of the task as it appeared to workers can be seen at the following repository.\nThe dataset for this task was downloaded from the following repository, which contains the raw output from the crowd on AMT. Currently, you can find the processed input file in the folder named data. Besides the raw crowd annotations, the processed file also contains the sentence and the two events that were given as input to the crowd. However, we have the sentence and the two events only for a subset of the dataset.", "# Read the input file into a pandas DataFrame\n\nimport pandas as pd\n\ntest_data = pd.read_csv(\"../data/temp.standardized.csv\")\ntest_data.head()", "Declaring a pre-processing configuration\nThe pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:", "import crowdtruth\nfrom crowdtruth.configuration import DefaultConfig", "Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Temporal Event Ordering task:\n\ninputColumns: list of input columns from the .csv file with the input data\noutputColumns: list of output columns from the .csv file with the answers from the workers\ncustomPlatformColumns: a list of columns from the .csv file that defines a standard annotation tasks, in the following order - judgment id, unit id, worker id, started time, submitted time. This variable is used for input files that do not come from AMT or FigureEight (formarly known as CrowdFlower).\nannotation_separator: string that separates between the crowd annotations in outputColumns\nopen_ended_task: boolean variable defining whether the task is open-ended (i.e. 
the possible crowd annotations are not known beforehand, like in the case of free text input); in the task that we are processing, workers pick the answers from a pre-defined list, therefore the task is not open ended, and this variable is set to False\nannotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; for our task, this is the list of relations\nprocessJudgments: method that defines processing of the raw crowd data; for this task, we process the crowd answers to correspond to the values in annotation_vector\n\nThe complete configuration class is declared below:", "class TestConfig(DefaultConfig):\n inputColumns = [\"gold\", \"event1\", \"event2\", \"text\"]\n outputColumns = [\"response\"]\n customPlatformColumns = [\"!amt_annotation_ids\", \"orig_id\", \"!amt_worker_ids\", \"start\", \"end\"]\n \n # processing of a closed task\n open_ended_task = False\n annotation_vector = [\"before\", \"after\"]\n \n def processJudgments(self, judgments):\n # pre-process output to match the values in annotation_vector\n for col in self.outputColumns:\n # transform to lowercase\n judgments[col] = judgments[col].apply(lambda x: str(x).lower())\n return judgments", "Pre-processing the input data\nAfter declaring the configuration of our input file, we are ready to pre-process the crowd data:", "data, config = crowdtruth.load(\n file = \"../data/temp.standardized.csv\",\n config = TestConfig()\n)\n\ndata['judgments'].head()", "Computing the CrowdTruth metrics\nThe pre-processed data can then be used to calculate the CrowdTruth metrics. results is a dict object that contains the quality metrics for the sentences, annotations and crowd workers.", "results = crowdtruth.run(data, config)", "CrowdTruth Sentence Quality Score\nThe sentence metrics are stored in results[\"units\"]. The uqs column in results[\"units\"] contains the sentence quality scores, capturing the overall workers agreement over each sentences. The uqs_initial column in results[\"units\"] contains the initial sentence quality scores, before appling the CrowdTruth metrics.", "results[\"units\"].head()\n\n# Distribution of the sentence quality scores and the initial sentence quality scores\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = 15, 5\n\nplt.subplot(1, 2, 1)\nplt.hist(results[\"units\"][\"uqs\"])\nplt.ylim(0,200)\nplt.xlabel(\"Sentence Quality Score\")\nplt.ylabel(\"#Sentences\")\n\nplt.subplot(1, 2, 2)\nplt.hist(results[\"units\"][\"uqs_initial\"])\nplt.ylim(0,200)\nplt.xlabel(\"Initial Sentence Quality Score\")\nplt.ylabel(\"# Units\")\n", "The histograms above show that the final sentence quality scores are nicely distributed, with both lower and high quality sentences. We also observe that, overall, the sentence quality score increased after applying the CrowdTruth metrics, compared to the initial sentence quality scores. While initially more than half of the units had a score of around 0.55, after iteratively applying the CrowdTruth metrics, the majority of the units have quality scores above 0.7.\nThe sentence quality score is a powerful measure to understand how clear the sentence is and the suitability of the sentence to be used as training data for various machine learning models.\nThe unit_annotation_score column in results[\"units\"] contains the sentence-annotation scores, capturing the likelihood that an annotation is expressed in a sentence. 
For each sentence, we store a dictionary mapping each annotation to its sentence-annotation score.", "results[\"units\"][\"unit_annotation_score\"].head()", "Example of a clear unit based on the CrowdTruth metrics\nFirst, we sort the sentence metrics stored in results[\"units\"] based on the sentence quality score (uqs), in ascending order. Thus, the most clear sentences are found at the tail of the new structure. Because we do not have initial input for all the units, we first filter these out.", "sortedUQS = results[\"units\"].sort_values([\"uqs\"])\n# remove the units for which we don't have the events and the text\nsortedUQS = sortedUQS.dropna()\nsortedUQS = sortedUQS.reset_index()", "We print the most clear unit, which is the last unit in sortedUQS:", "sortedUQS.tail(1)", "The following two sentences contain the events that need to be ordered: \n<p>Ratners Group PLC's U.S. subsidiary has agreed to <b><u><font color=\"blue\">acquire</font></u></b> jewelry retailer Weisfield's Inc.\n\nRatners and Weisfield's said they <b><u><font color=\"purple\">reached</font></u></b> an agreement in principle for the acquisition of Weisfield's by Sterling Inc.\n\nThe unit is very clear because the second sentence clearly states that before acquiring Weisfield's Inc, the two parts reached an agreement, which means that <b><u><font color=\"blue\">acquire</font></u></b> happened after <b><u><font color=\"purple\">reached</font></u></b>.", "print(\"Text: %s\" % sortedUQS[\"input.text\"].iloc[len(sortedUQS.index)-1])\nprint(\"\\n Event1: %s\" % sortedUQS[\"input.event1\"].iloc[len(sortedUQS.index)-1])\nprint(\"\\n Event2: %s\" % sortedUQS[\"input.event2\"].iloc[len(sortedUQS.index)-1])\nprint(\"\\n Expert Answer: %s\" % sortedUQS[\"input.gold\"].iloc[len(sortedUQS.index)-1])\nprint(\"\\n Crowd Answer with CrowdTruth: %s\" % sortedUQS[\"unit_annotation_score\"].iloc[len(sortedUQS.index)-1])\nprint(\"\\n Crowd Answer without CrowdTruth: %s\" % sortedUQS[\"unit_annotation_score_initial\"].iloc[len(sortedUQS.index)-1])", "Example of an unclear unit based on the CrowdTruth metrics\nWe use the same structure as above and we print the most unclear unit, which is the first unit in sortedUQS:", "sortedUQS.head(1)", "The following sentence contains the events that need to be ordered: \nMagna International Inc..'s chief financial officer, James McAlpine, resigned and its chairman, Frank Stronach, is stepping in to help <b><u><font color=\"red\">turn</font></u></b> the automotive-parts manufacturer around, the company <b><u><font color=\"purple\">said</font></u></b>.\nThe unit is unclear due to various reasons. First of all, the sentence is very long and difficult to read. Second, there is a series of events mentioned in the text and third, it is not very clearly stated if the \"turning\" event is happening prior or after the \"announcement\".", "print(\"Text: %s\" % sortedUQS[\"input.text\"].iloc[0])\nprint(\"\\n Event1: %s\" % sortedUQS[\"input.event1\"].iloc[0])\nprint(\"\\n Event2: %s\" % sortedUQS[\"input.event2\"].iloc[0])\nprint(\"\\n Expert Answer: %s\" % sortedUQS[\"input.gold\"].iloc[0])\nprint(\"\\n Crowd Answer with CrowdTruth: %s\" % sortedUQS[\"unit_annotation_score\"].iloc[0])\nprint(\"\\n Crowd Answer without CrowdTruth: %s\" % sortedUQS[\"unit_annotation_score_initial\"].iloc[0])", "CrowdTruth Worker Quality Scores\nThe worker metrics are stored in results[\"workers\"]. 
The wqs columns in results[\"workers\"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. The wqs_initial column in results[\"workers\"] contains the initial worker quality scores, before appling the CrowdTruth metrics.", "results[\"workers\"].head()\n\n# Distribution of the worker quality scores and the initial worker quality scores\n\nplt.rcParams['figure.figsize'] = 15, 5\n\nplt.subplot(1, 2, 1)\nplt.hist(results[\"workers\"][\"wqs\"])\nplt.ylim(0,30)\nplt.xlabel(\"Worker Quality Score\")\nplt.ylabel(\"#Workers\")\n\nplt.subplot(1, 2, 2)\nplt.hist(results[\"workers\"][\"wqs_initial\"])\nplt.ylim(0,30)\nplt.xlabel(\"Initial Worker Quality Score\")\nplt.ylabel(\"#Workers\")\n\n", "The histograms above shows the worker quality scores and the initial worker quality scores. We observe that the worker quality scores are distributed across a wide spectrum, from low to high quality workers. Furthermore, the worker quality scores seem to have, overall, improved after computing the CrowdTruth iterations, compared to the initial worker quality scores, which indicates that the difficulty of the units was taken into consideration.\nLow worker quality scores can be used to identify spam workers, or workers that have misunderstood the annotation task. Similarly, high worker quality scores can be used to identify well performing workers.\nCrowdTruth Annotation Quality Score\nThe annotation metrics are stored in results[\"annotations\"]. The aqs column contains the annotation quality scores, capturing the overall worker agreement over one annotation.", "results[\"annotations\"]", "In the dataframe above we observe that after iteratively computing the sentence quality scores and the worker quality scores the overall agreement on the annotations increased. This can be seen when comparing the annotation quality scores with the initial annotation quality scores.\nMACE for Temporal Event Ordering\nWe first pre-processed the crowd results to create compatible files for running the MACE tool.\nEach row in a csv file should point to a unit in the dataset and each column in the csv file should point to a worker. The content of the csv file captures the worker answer for that particular unit (or remains empty if the worker did not annotate that unit).\nThe following implementation of MACE has been used in these experiments: https://github.com/dirkhovy/MACE.", "# MACE input file sample\nimport numpy as np\n\nmace_test_data = pd.read_csv(\"../data/mace_temp.standardized.csv\", header=None)\nmace_test_data = mace_test_data.replace(np.nan, '', regex=True)\nmace_test_data.head()", "For each sentence and each annotation, MACE computes the sentence annotation probability score, which shows the probability of each annotation to be expressed in the sentence. MACE sentence annotation probability score is similar to the CrowdTruth sentence-annotation score.", "# MACE sentence annotation probability scores:\n\nimport pandas as pd\n\nmace_data = pd.read_csv(\"../data/results/mace_units_temp.csv\")\nmace_data.head()", "For each worker in the annotators set we have MACE worker competence score, which is similar to the CrowdTruth worker quality score.", "# MACE worker competence scores\n\nmace_workers = pd.read_csv(\"../data/results/mace_workers_temp.csv\")\nmace_workers.head()", "CrowdTruth vs. 
MACE on Worker Quality\nWe read the worker quality scores as returned by CrowdTruth and MACE and merge the two dataframes", "mace_workers = pd.read_csv(\"../data/results/mace_workers_temp.csv\")\ncrowdtruth_workers = pd.read_csv(\"../data/results/crowdtruth_workers_temp.csv\")\n\nworkers_scores = pd.merge(mace_workers, crowdtruth_workers, on='worker')\nworkers_scores = workers_scores.sort_values([\"wqs\"])\nworkers_scores.head()", "Plot the quality scores of the workers as computed by both CrowdTruth and MACE:", "%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nplt.scatter(\n workers_scores[\"competence\"],\n workers_scores[\"wqs\"],\n)\nplt.plot([0, 1], [0, 1], 'red', linewidth=1)\nplt.title(\"Worker Quality Score\")\nplt.xlabel(\"MACE\")\nplt.ylabel(\"CrowdTruth\")", "In the plot above we observe that MACE and CrowdTruth have quite similar worker quality scores. It seems, however, that MACE favours extreme values, which means that the identified low quality workers will have very low scores, e.g., very close to 0.0 and the best workers will have quality scores of 1.0, or very close to 1.0. On the other side, CrowdTruth has a smaller interval of values, starting from around 0.1 to 0.9.\nFollowing, we compute the correlation between the two values using Spearman correlation and Kendall's tau correlation, to see whether the two values are correlated. More exactly, we want to see whether, overall, both metrics identify as low quality or high quality similar workers, or they are really divergent in their outcome.", "from scipy.stats import spearmanr\nx = workers_scores[\"wqs\"]\n\nx_corr = workers_scores[\"competence\"]\ncorr, p_value = spearmanr(x, x_corr)\nprint (\"correlation: \", corr)\nprint (\"p-value: \", p_value)", "Spearman correlation shows shows a very strong correlation between the two computed values, and the correlation is significant. This means that overall, even if the two metrics provide different values, they are indeed correlated and low quality workers receive low scores and high quality workers receive higher scores from both aggregation methods.", "from scipy.stats import kendalltau\nx1 = workers_scores[\"wqs\"]\nx2 = workers_scores[\"competence\"]\n\ntau, p_value = kendalltau(x1, x2)\nprint (\"correlation: \", tau)\nprint (\"p-value: \", p_value)", "Even with Kendall's tau rank correlation, we observe a strong correlation between the two computed values, where the correlation is significant. This means that the aggregation methods, MACE and CrowdTruth rank the workers based on their quality in a similar way.\nFurther, we compute the difference of the two quality scores and we check one worker for which the difference is very high.", "workers_scores[\"diff\"] = workers_scores[\"wqs\"] - workers_scores[\"competence\"]\nworkers_scores = workers_scores.sort_values([\"diff\"])\nworkers_scores.tail(5)", "We take for example the worker with the id \"A2KONK3TIL5KVX\" and check the overall disagreement among the workers on the units annotated by them. MACE rated the worker with a quality score of 0.002 while CrowdTruth rated the worker with a higher quality score of 0.32.\nWhat we observe in the dataframe below, where we show the units annotated by the worker \"A2KONK3TIL5KVX\", is that the worker \"A2KONK3TIL5KVX\" annotated, in general, units with high disagreement, i.e., which are not very clear. 
While MACE marked the worker as low quality because it seems that they always picked the same answer, CrowdTruth also considered the difficulty of the units, and thus, giving it a higher weight.", "units = list(test_data[test_data[\"!amt_worker_ids\"] == \"A2KONK3TIL5KVX\"][\"orig_id\"])\nall_results = results[\"units\"].reset_index()\nunits_df = all_results[all_results[\"unit\"].isin(units)]\nunits_df = units_df.sort_values([\"uqs_initial\"])\nunits_df.head(10)", "CrowdTruth vs. MACE vs. Majority Vote on Annotation Performance\nNext, we look into the crowd performance in terms of F1-score compared to expert annotations. We compare the crowd performance given the three aggregation methods: CrowdTruth, MACE and Majority Vote.", "mace = pd.read_csv(\"../data/results/mace_units_temp.csv\")\ncrowdtruth = pd.read_csv(\"../data/results/crowdtruth_units_temp.csv\")", "The following two functions compute the F1-score of the crowd compared to the expert annotations. The first function computes the F1-score at every sentence-annotation score threshold. The second function computes the F1-score for the majority vote approach, i.e., when at least half of the workers picked the answer.", "def compute_F1_score(dataset, label, gold_column, gold_value):\n nyt_f1 = np.zeros(shape=(100, 2))\n for idx in xrange(0, 100):\n thresh = (idx + 1) / 100.0\n tp = 0\n fp = 0\n tn = 0\n fn = 0\n\n for gt_idx in range(0, len(dataset.index)):\n if dataset[label].iloc[gt_idx] >= thresh:\n if dataset[gold_column].iloc[gt_idx] == gold_value:\n tp = tp + 1.0\n else:\n fp = fp + 1.0\n else:\n if dataset[gold_column].iloc[gt_idx] == gold_value:\n fn = fn + 1.0\n else:\n tn = tn + 1.0\n\n\n nyt_f1[idx, 0] = thresh\n \n if tp != 0:\n nyt_f1[idx, 1] = 2.0 * tp / (2.0 * tp + fp + fn)\n else:\n nyt_f1[idx, 1] = 0\n return nyt_f1\n\n\ndef compute_majority_vote(dataset, label, gold_column, gold_value):\n tp = 0\n fp = 0\n tn = 0\n fn = 0\n \n for j in range(len(dataset.index)):\n if dataset[label].iloc[j] >= 0.5:\n if dataset[gold_column].iloc[j] == gold_value:\n tp = tp + 1.0\n else:\n fp = fp + 1.0\n else:\n if dataset[gold_column].iloc[j] == gold_value:\n fn = fn + 1.0\n else:\n tn = tn + 1.0\n return 2.0 * tp / (2.0 * tp + fp + fn)", "F1-score for the annotation \"before\":", "F1_crowdtruth = compute_F1_score(crowdtruth, \"before\", \"gold\", \"before\")\nprint(\"Best CrowdTruth F1 score for annotation 'before': \", F1_crowdtruth[F1_crowdtruth[:,1].argsort()][-1:])\nF1_mace = compute_F1_score(mace, \"before\", \"gold\", \"before\")\nprint(\"Best MACE F1 score for annotation 'before': \", F1_mace[F1_mace[:,1].argsort()][-1:])\nF1_majority_vote = compute_majority_vote(crowdtruth, 'before_initial', \"gold\", \"before\")\nprint(\"Majority Vote F1 score for annotation 'before': \", F1_majority_vote)", "F1-score for the annotation \"after\":", "F1_crowdtruth = compute_F1_score(crowdtruth, \"after\", \"gold\", \"after\")\nprint(\"Best CrowdTruth F1 score for annotation 'after': \", F1_crowdtruth[F1_crowdtruth[:,1].argsort()][-1:])\nF1_mace = compute_F1_score(mace, \"after\", \"gold\", \"after\")\nprint(\"Best MACE F1 score for annotation 'after': \", F1_mace[F1_mace[:,1].argsort()][-1:])\nF1_majority_vote = compute_majority_vote(crowdtruth, 'after_initial', \"gold\", \"after\")\nprint(\"Majority Vote F1 score for annotation 'after': \", F1_majority_vote)", "From the results above we observe that MACE and CrowdTruth perform very close to each other, and they both perform a bit better than Majority Vote, but not significantly better. 
As we can observe from the initial sentence quality scores, there aren't that many unclear sentences in the dataset where half of the workers picked \"before\" as an answer and half picked \"after\" (fewer than 60 examples out of 462). \nTo further explore the CrowdTruth and MACE quality metrics, download the aggregation results in .csv format for:\n\nCrowdTruth units quality\nCrowdTruth workers quality\nMACE units quality\nMACE workers quality" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
donK23/pyData-Projects
EventDec/event_dec/notebook/1_Exploration.ipynb
apache-2.0
[ "Exploration\nExploration of prepocessed DF", "import numpy as np\nimport pandas as pd\nimport math\n\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Input\nPrivacy restriction: \nOriginal (personal) cleaned DF not in Repo. Go through nb \"0_Cleaning\" with self provided data to reproduce pickled DF of attended events (\"events_df.pkl\").\nFor further steps: Repo contains pickled DF for modeling (nb \"3_Modeling\"), in which private informations are elimated.", "file_path = \"../data/events_df.pkl\"\ndf = pd.read_pickle(file_path)\n\nprint(df.shape)\nprint(df.dtypes)\ndf.head()", "Exploration", "print(\"Stats (continuous Vars):\")\nprint(df.describe())\nprint(\"\")\nprint(\"NaN values count:\")\nprint(df.isnull().sum())\n\nfor col in df:\n print(df[col].value_counts())\n print(\"\")\n\ndf.groupby(df.main_topic).mean()[[\"distance\", \"rating\"]]\n\ndf.groupby(df.city).mean()[[\"distance\", \"rating\"]]", "Preparation for Modeling\nMissing Values", "df_cleaned = df.fillna(\"missing\") # Nan in String val Cols\n\nprint(df_cleaned.isnull().sum())", "DFs for Modeling", "# Minimal Features Model\nmodel01_cols = [u\"main_topic\", u\"buzzwordy_title\", u\"buzzwordy_organizer\", u\"days\", u\"weekday\", u\"city\", \n u\"country\", u\"distance\", u\"ticket_prize\", u\"rating\"]\ndf_model01 = df_cleaned[model01_cols]\n\ndf_model01.head()", "Dummie Encoding", "df_model01 = pd.get_dummies(df_model01, prefix=[\"main_topic\", \"weekday\", \"city\", \"country\"])", "Output for Modeling", "def pickle_model(df_model, file_path):\n \"\"\"\n Pickles provided model DF for modeling step\n \"\"\"\n df_model.to_pickle(file_path)\n\npickle_model(df_model01, \"../data/df_model01.pkl\") # Model01" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jnarhan/Breast_Cancer
src/img_processing/Image_Resizing.ipynb
mit
[ "Image Resizing Functions\nThe VGG-16 and GoogLeNet pretrained networks need the images to be 224x224 pixles the recommendation from comment threads is to start with 255x255 and wither crop or resize the image from there. The first resizing function simply works through a given image directory or list of directories and resizes the images without worry about aspect ratio.", "import os\nfrom PIL import Image\nfrom __future__ import division\n\nin_dir1 = \"E:/erikn/Dropbox (DATA698-S17)/DATA698-S17/data/ddsm/png/0/\"\nin_dir2 = \"E:/erikn/Dropbox (DATA698-S17)/DATA698-S17/data/ddsm/png/1/\"\nin_dir3 = \"E:/erikn/Dropbox (DATA698-S17)/DATA698-S17/data/ddsm/png/3/\"\n\nimg_in = [in_dir1, in_dir2, in_dir3]\n\nout_dir = \"E:/erikn/Documents/GitHub/MLProjects/data698_images/small/\"\n\ndef basic_resize(height, width, in_dir, out_dir):\n # Takes a directory or list of directories containing png images and resizes to the given height and width \n for directory in in_dir:\n images = os.listdir(directory)\n for img in images:\n im = Image.open(os.path.join(directory, img))\n size = im.resize((width,height), resample=Image.LANCZOS)\n size.save(os.path.join(out_dir, img))\n\nheight = 255\nwidth = 255\nbasic_resize(height, width, img_in, out_dir)", "The next resizing function is designed to maintain the aspect ratio of the image while doing the resize. You can selecct the desired height and the function adds in black rows to the edges of the image to make all images the same size then resizes the image to desired size.", "from scipy import misc\nfrom scipy import ndimage\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport time\n%matplotlib inline\n\n# Finds the Maximum height and width from all of the images from the first run we have \n# Max Height = 7111\n# Max Width = 5641\n# Count = 4005\nheight = 0\nwidth = 0\ncount = 0\n\nstart = time.time()\nfor directory in img_in:\n images = os.listdir(directory)\n for img in images:\n im = misc.imread(os.path.join(directory, img), flatten=False, mode='L')\n if im.shape[0] > height:\n height = im.shape[0]\n if im.shape[1] > width:\n width = im.shape[1]\n count += 1\n\nprint(\"Max Height, Max Width, Number of Images\")\nprint(height, width, count)\nend = time.time()\nprint(\"Time taken:\")\nprint(end - start)", "Loading a test image", "img_in = [\"E:/erikn/Documents/GitHub/MLProjects/data698_images/png/\"] # This is a test set use the img_in from above for all images\nimages = os.listdir(img_in[0])\nim = Image.open(os.path.join(img_in[0], images[0]))\nimg_w, img_h = im.size\nplt.imshow(im,cmap = plt.get_cmap('gray'))", "Placing an image in the maximum size box for all images.", "background = Image.new('L', (5641, 7111), (0))\nbg_w, bg_h = background.size\noffset = ((bg_w - img_w) // 2, (bg_h - img_h) // 2) # Use // division in Python 3.5 \nbackground.paste(im, offset)\nplt.imshow(background,cmap = plt.get_cmap('gray'))", "Resizing the Image to a given height and then placing in a square box.", "height = 400\nwidth = int((height/img_h)*img_w)\nprint(height,width)\nsize = im.resize((width,height), resample=Image.LANCZOS)\nbackground = Image.new('L', (height,height), (0))\nbg_w, bg_h = background.size\noffset = ((bg_w - width) // 2, (bg_h - height) // 2) # Use // division in Python 3.5 \nbackground.paste(size, offset)\nplt.imshow(background,cmap = plt.get_cmap('gray'))", "Creating an Aspect Ratio Resizer", "def aspect_resize(height, in_dir, out_dir, square):\n for directory in in_dir:\n images = os.listdir(directory)\n for img in images:\n im = 
Image.open(os.path.join(directory, img))\n            img_w, img_h = im.size\n            if square == True:\n                width = int((height/img_h)*img_w)\n                size = im.resize((width,height), resample=Image.LANCZOS)\n                background = Image.new('L', (height,height), (0))\n                bg_w, bg_h = background.size\n                offset = ((bg_w - width) // 2, (bg_h - height) // 2) # Use // division in Python 3.5 \n                background.paste(size, offset)\n            else:\n                width = int((height/im.size[1])*im.size[0])\n                background = im.resize((width,height), resample=Image.LANCZOS)\n            background.save(os.path.join(out_dir, img)) ", "Running the aspect ratio resizer without creating square images. These maintain aspect ratio but have varying widths, which I now realize is not the best idea. I will work on improving this portion if we need it. Make sure that you pass False to the function to run in this mode.", "img_in = [\"E:/erikn/Documents/GitHub/MLProjects/data698_images/png/\"] # This is a test set; use the img_in from above for all images\nimg_out = \"E:/erikn/Documents/GitHub/MLProjects/data698_images/non_square/\"\nheight = 150\naspect_resize(height,img_in, img_out, False)", "Running the aspect ratio resizer creating square images. This portion sets the height and width, resizes the image to maintain the aspect ratio, and then pastes the image into a black background that is the correct size and square. This should do a good job of giving us a base image with the aspect ratio maintained and not add too much extra space to the image.", "img_in = [\"E:/erikn/Documents/GitHub/MLProjects/data698_images/png/\"] # This is a test set; use the img_in from above for all images\nimg_out = \"E:/erikn/Documents/GitHub/MLProjects/data698_images/square/\"\nheight = 150\naspect_resize(height,img_in, img_out, True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
khalido/algorithims
monty_hall.ipynb
gpl-3.0
[ "The famous Monty Hall brain teaser:\n\nSuppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, \"Do you want to pick door No. 2?\" Is it to your advantage to switch your choice?\n\nThere is a really fun discussion over at Marilyn vos Savant's site.\nOk, now to setup the problem, along with some kind of visuals and what not.", "import random\nimport numpy as np\n\n# for plots, cause visuals\n%matplotlib inline\nimport matplotlib.pyplot as plt \nimport seaborn as sns", "setting up a game\nThere are many ways to do this, but to keep it simple and human comprehensible I'm going to do it one game at a time. \nFirst up, a helper function which takes the door number guessed and the door opened up the host to reveal a goat, and returns the switched door:", "def switch_door(guess, goat_door_opened):\n \"\"\"takes in the guessed door and the goat door opened\n and returns the switched door number\"\"\"\n doors = [0,1,2]\n doors.remove(goat_door_opened)\n doors.remove(guess)\n return doors[0]", "Now the actual monty hall function - it takes in a guess and whether you want to switch your guess, and returns True or False depending on whether you win", "def monty_hall(guess=0, switch_guess=False, open_goat_door=True):\n \"\"\"sets up 3 doors 0-2, one which has a pize, and 2 have goats.\n takes in the door number guessed by the player and whether he/she switched door\n after one goat door is revealed\"\"\"\n \n doors = [door for door in range(3)]\n np.random.shuffle(doors)\n prize_door = doors.pop()\n \n goat_door_opened = doors[0]\n \n if goat_door_opened == guess:\n goat_door_opened = doors[1]\n \n if switch_guess:\n return switch_door(guess, goat_door_opened) == prize_door\n else:\n return guess == prize_door", "Now to run through a bunch of monty hall games:", "no_switch = np.mean([monty_hall(random.randint(0,2), False) for _ in range(100000)])\nno_switch", "Not switching doors wins a third of the time, which makes intuitive sense, since we are choosing one door out of three.", "yes_switch = np.mean([monty_hall(random.randint(0,2), True) for _ in range(100000)])\nyes_switch", "This is the suprising result, since switching our guess increases the win rate to two third! To put it more graphically:", "plt.pie([yes_switch, no_switch], labels=[\"Switching win %\", \"Not switching win %\"],\n autopct='%1.1f%%', explode=(0, 0.05));", "So our chances of winning essentially double if we switch our guess once a goat door has been opened.\nA good monty hall infographic:\n<img src=\"images/monty-hall.png\" width=\"75%\">.\nthe no reveal month\nSo what if Monty never opens a goat door, and just gives us a change to switch the guessed door? 
Does the winning % still change?\nSo first we change the switch door function to remove the reveal option:", "def switch_door_no_reveal(guess):\n    \"\"\"takes in the guessed door\n    and returns the switched door number\"\"\"\n    doors = [0,1,2]\n    doors.remove(guess)\n    np.random.shuffle(doors)\n    return doors[0]", "Then I removed the goat-door-revealing code from the original monty hall function above:", "def monty_hall_no_reveal(guess=0, switch_guess=False):\n    \"\"\"sets up 3 doors 0-2, one of which has a prize, and 2 have goats.\n    takes in the door number guessed by the player and whether he/she switched door\n    \"\"\"\n    \n    doors = [door for door in range(3)]\n    np.random.shuffle(doors)\n    prize_door = doors.pop()\n    \n    if switch_guess:\n        return switch_door_no_reveal(guess) == prize_door\n    else:\n        return guess == prize_door", "Now to run some sims:", "no_switch_no_reveal = np.mean([monty_hall_no_reveal(random.randint(0,2), False) for _ in range(100000)])\nyes_switch_no_reveal = np.mean([monty_hall_no_reveal(random.randint(0,2), True) for _ in range(100000)])\n\nplt.bar([0,1], [yes_switch_no_reveal, no_switch_no_reveal], tick_label=[\"Switched Guess\",\"Didn't Switch\"], \n        color=[\"blue\",\"red\"], alpha=0.7);", "There is no impact from switching our guess if a goat door hasn't been revealed, which makes sense too, since whatever door we choose, it has a 1/3 probability of winning." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
data-cube/agdc-v2-examples
notebooks_ledaps/hyderabad_demo.ipynb
apache-2.0
[ "AGDCv2 Landsat analytics example using USGS Surface Reflectance\nImport the required libraries", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport datacube\nfrom datacube.model import Range\nfrom datetime import datetime\ndc = datacube.Datacube(app='dc-example')\nfrom datacube.storage import masking\nfrom datacube.storage.masking import mask_valid_data as mask_invalid_data\nimport pandas\nimport xarray\nimport numpy\nimport json\nimport vega\nfrom datacube.utils import geometry\nnumpy.seterr(divide='ignore', invalid='ignore')\n\nimport folium\nfrom IPython.display import display\nimport geopandas\nfrom shapely.geometry import mapping\nfrom shapely.geometry import MultiPolygon\nimport rasterio\nimport shapely.geometry\nimport shapely.ops\nfrom functools import partial\nimport pyproj\nfrom datacube.model import CRS\nfrom datacube.utils import geometry\n\n## From http://scikit-image.org/docs/dev/auto_examples/plot_equalize.html\nfrom skimage import data, img_as_float\nfrom skimage import exposure\n\ndatacube.__version__", "Include some helpful functions", "def datasets_union(dss):\n thing = geometry.unary_union(ds.extent for ds in dss)\n return thing.to_crs(geometry.CRS('EPSG:4326'))\n\nimport random\ndef plot_folium(shapes):\n\n mapa = folium.Map(location=[17.38,78.48], zoom_start=8)\n colors=['#00ff00', '#ff0000', '#00ffff', '#ffffff', '#000000', '#ff00ff']\n for shape in shapes:\n style_function = lambda x: {'fillColor': '#000000' if x['type'] == 'Polygon' else '#00ff00', \n 'color' : random.choice(colors)}\n poly = folium.features.GeoJson(mapping(shape), style_function=style_function)\n mapa.add_children(poly)\n display(mapa)\n\n# determine the clip parameters for a target clear (cloud free image) - identified through the index provided\ndef get_p2_p98(rgb, red, green, blue, index):\n\n r = numpy.nan_to_num(numpy.array(rgb.data_vars[red][index]))\n g = numpy.nan_to_num(numpy.array(rgb.data_vars[green][index]))\n b = numpy.nan_to_num(numpy.array(rgb.data_vars[blue][index]))\n \n rp2, rp98 = numpy.percentile(r, (2, 99))\n gp2, gp98 = numpy.percentile(g, (2, 99)) \n bp2, bp98 = numpy.percentile(b, (2, 99))\n\n return(rp2, rp98, gp2, gp98, bp2, bp98)\n\ndef plot_rgb(rgb, rp2, rp98, gp2, gp98, bp2, bp98, red, green, blue, index):\n\n r = numpy.nan_to_num(numpy.array(rgb.data_vars[red][index]))\n g = numpy.nan_to_num(numpy.array(rgb.data_vars[green][index]))\n b = numpy.nan_to_num(numpy.array(rgb.data_vars[blue][index]))\n\n r_rescale = exposure.rescale_intensity(r, in_range=(rp2, rp98))\n g_rescale = exposure.rescale_intensity(g, in_range=(gp2, gp98))\n b_rescale = exposure.rescale_intensity(b, in_range=(bp2, bp98))\n\n rgb_stack = numpy.dstack((r_rescale,g_rescale,b_rescale))\n img = img_as_float(rgb_stack)\n\n return(img)\n\ndef plot_water_pixel_drill(water_drill):\n vega_data = [{'x': str(ts), 'y': str(v)} for ts, v in zip(water_drill.time.values, water_drill.values)]\n vega_spec = \"\"\"{\"width\":720,\"height\":90,\"padding\":{\"top\":10,\"left\":80,\"bottom\":60,\"right\":30},\"data\":[{\"name\":\"wofs\",\"values\":[{\"code\":0,\"class\":\"dry\",\"display\":\"Dry\",\"color\":\"#D99694\",\"y_top\":30,\"y_bottom\":50},{\"code\":1,\"class\":\"nodata\",\"display\":\"No 
Data\",\"color\":\"#A0A0A0\",\"y_top\":60,\"y_bottom\":80},{\"code\":2,\"class\":\"shadow\",\"display\":\"Shadow\",\"color\":\"#A0A0A0\",\"y_top\":60,\"y_bottom\":80},{\"code\":4,\"class\":\"cloud\",\"display\":\"Cloud\",\"color\":\"#A0A0A0\",\"y_top\":60,\"y_bottom\":80},{\"code\":1,\"class\":\"wet\",\"display\":\"Wet\",\"color\":\"#4F81BD\",\"y_top\":0,\"y_bottom\":20},{\"code\":3,\"class\":\"snow\",\"display\":\"Snow\",\"color\":\"#4F81BD\",\"y_top\":0,\"y_bottom\":20},{\"code\":255,\"class\":\"fill\",\"display\":\"Fill\",\"color\":\"#4F81BD\",\"y_top\":0,\"y_bottom\":20}]},{\"name\":\"table\",\"format\":{\"type\":\"json\",\"parse\":{\"x\":\"date\"}},\"values\":[],\"transform\":[{\"type\":\"lookup\",\"on\":\"wofs\",\"onKey\":\"code\",\"keys\":[\"y\"],\"as\":[\"class\"],\"default\":null},{\"type\":\"filter\",\"test\":\"datum.y != 255\"}]}],\"scales\":[{\"name\":\"x\",\"type\":\"time\",\"range\":\"width\",\"domain\":{\"data\":\"table\",\"field\":\"x\"},\"round\":true},{\"name\":\"y\",\"type\":\"ordinal\",\"range\":\"height\",\"domain\":[\"water\",\"not water\",\"not observed\"],\"nice\":true}],\"axes\":[{\"type\":\"x\",\"scale\":\"x\",\"formatType\":\"time\"},{\"type\":\"y\",\"scale\":\"y\",\"tickSize\":0}],\"marks\":[{\"description\":\"data plot\",\"type\":\"rect\",\"from\":{\"data\":\"table\"},\"properties\":{\"enter\":{\"xc\":{\"scale\":\"x\",\"field\":\"x\"},\"width\":{\"value\":\"1\"},\"y\":{\"field\":\"class.y_top\"},\"y2\":{\"field\":\"class.y_bottom\"},\"fill\":{\"field\":\"class.color\"},\"strokeOpacity\":{\"value\":\"0\"}}}}]}\"\"\"\n spec_obj = json.loads(vega_spec)\n spec_obj['data'][1]['values'] = vega_data\n return vega.Vega(spec_obj)", "Plot the spatial extent of our data for each product", "plot_folium([datasets_union(dc.index.datasets.search_eager(product='ls5_ledaps_scene')),\\\n datasets_union(dc.index.datasets.search_eager(product='ls7_ledaps_scene')),\\\n datasets_union(dc.index.datasets.search_eager(product='ls8_ledaps_scene'))])", "Inspect the available measurements for each product", "dc.list_measurements()", "Specify the Area of Interest for our analysis", "# Hyderbad\n# 'lon': (78.40, 78.57),\n# 'lat': (17.36, 17.52),\n# Lake Singur\n# 'lat': (17.67, 17.84),\n# 'lon': (77.83, 78.0),\n\n# Lake Singur Dam\nquery = {\n 'lat': (17.72, 17.79),\n 'lon': (77.88, 77.95),\n}", "Load Landsat Surface Reflectance for our Area of Interest", "products = ['ls5_ledaps_scene','ls7_ledaps_scene','ls8_ledaps_scene']\n\ndatasets = []\nfor product in products:\n ds = dc.load(product=product, measurements=['nir','red', 'green','blue'], output_crs='EPSG:32644',resolution=(-30,30), **query)\n ds['product'] = ('time', numpy.repeat(product, ds.time.size))\n datasets.append(ds)\n\nsr = xarray.concat(datasets, dim='time')\nsr = sr.isel(time=sr.time.argsort()) # sort along time dim\nsr = sr.where(sr != -9999)\n\n##### include an index here for the timeslice with representative data for best stretch of time series\n\n# don't run this to keep the same limits as the previous sensor\n#rp2, rp98, gp2, gp98, bp2, bp98 = get_p2_p98(sr,'red','green','blue', 0)\n\nrp2, rp98, gp2, gp98, bp2, bp98 = (300.0, 2000.0, 300.0, 2000.0, 300.0, 2000.0)\nprint(rp2, rp98, gp2, gp98, bp2, bp98)\n\nplt.imshow(plot_rgb(sr,rp2, rp98, gp2, gp98, bp2, bp98,'red',\n 'green', 'blue', 0),interpolation='nearest')", "Load Landsat Pixel Quality for our area of interest", "datasets = []\nfor product in products:\n ds = dc.load(product=product, measurements=['cfmask'], output_crs='EPSG:32644',resolution=(-30,30), 
**query).cfmask\n ds['product'] = ('time', numpy.repeat(product, ds.time.size))\n datasets.append(ds)\n\npq = xarray.concat(datasets, dim='time')\npq = pq.isel(time=pq.time.argsort()) # sort along time dim\ndel(datasets)", "Visualise pixel quality information from our selected spatiotemporal subset", "pq.attrs['flags_definition'] = {'cfmask': {'values': {'255': 'fill', '1': 'water', '2': 'shadow', '3': 'snow', '4': 'cloud', '0': 'clear'}, 'description': 'CFmask', 'bits': [0, 1, 2, 3, 4, 5, 6, 7]}}\n\npandas.DataFrame.from_dict(masking.get_flags_def(pq), orient='index')", "Plot the frequency of water classified in pixel quality", "water = masking.make_mask(pq, cfmask ='water')\nwater.sum('time').plot(cmap='nipy_spectral')", "Plot the timeseries at the center point of the image", "plot_water_pixel_drill(pq.isel(y=int(water.shape[1] / 2), x=int(water.shape[2] / 2)))\n\ndel(water)", "Remove the cloud and shadow pixels from the surface reflectance", "mask = masking.make_mask(pq, cfmask ='cloud')\nmask = abs(mask*-1+1)\nsr = sr.where(mask)\nmask = masking.make_mask(pq, cfmask ='shadow')\nmask = abs(mask*-1+1)\nsr = sr.where(mask)\ndel(mask)\ndel(pq)\n\nsr.attrs['crs'] = CRS('EPSG:32644')", "Spatiotemporal summary NDVI median", "ndvi_median = ((sr.nir-sr.red)/(sr.nir+sr.red)).median(dim='time')\nndvi_median.attrs['crs'] = CRS('EPSG:32644')\nndvi_median.plot(cmap='YlGn', robust='True')", "NDVI trend over time in cropping area Point Of Interest", "poi_latitude = 17.749343\npoi_longitude = 77.935634\n\np = geometry.point(x=poi_longitude, y=poi_latitude, crs=geometry.CRS('EPSG:4326')).to_crs(sr.crs)", "Create a subset around our point of interest", "subset = sr.sel(x=((sr.x > p.points[0][0]-1000)), y=((sr.y < p.points[0][1]+1000)))\nsubset = subset.sel(x=((subset.x < p.points[0][0]+1000)), y=((subset.y > p.points[0][1]-1000)))", "Plot subset image with POI at centre", "plt.imshow(plot_rgb(subset,rp2, rp98, gp2, gp98, bp2, bp98,'red',\n 'green', 'blue',0),interpolation='nearest' )", "NDVI timeseries plot", "((sr.nir-sr.red)/(sr.nir+sr.red)).sel(x=p.points[0][0], y=p.points[0][1], method='nearest').plot(marker='o')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
satishgoda/learning
prg/web/javascript/libs/d3/d3_1_intro.ipynb
mit
[ "View this document in jupyter nbviewer\n\nReferences\n\n\nhttp://blog.thedataincubator.com/2015/08/embedding-d3-in-an-ipython-notebook\n\n\nhttps://github.com/cmoscardi/embedded_d3_example/blob/master/Embedded_D3.ipynb\n\n\nhttps://bost.ocks.org/mike/circles/", "%%javascript\nrequire.config({\n paths: {\n d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.5.5/d3.min',\n }\n});", "Template code for writing d3js scripts", "%%javascript\n\nrequire(\n['d3'],\n\nfunction(d3) {\n \n}\n);\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle0\" cx=\"40\" cy=\"60\" r=\"10\"></circle>\n <circle id=\"circle0\" cx=\"80\" cy=\"60\" r=\"10\"></circle>\n <circle id=\"circle0\" cx=\"120\" cy=\"60\" r=\"10\"></circle>\n</svg>\n\n%%javascript\n\nrequire(['d3'], function(d3) {\n var circle = d3.selectAll(\"#circle1\");\n circle.style(\"fill\", \"red\");\n circle.attr(\"r\", 10);\n});\n\n\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle1\" cx=\"40\" cy=\"60\" r=\"10\"></circle>\n <circle id=\"circle1\" cx=\"80\" cy=\"60\" r=\"10\"></circle>\n <circle id=\"circle1\" cx=\"120\" cy=\"60\" r=\"10\"></circle>\n</svg>\n\n%%javascript\n\nrequire(['d3'], function(d3) {\n var circle = d3.selectAll(\"#circle2\");\n circle.style(\"fill\", \"red\");\n circle.attr(\"r\", 30);\n \n});\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle2\" cx=\"40\" cy=\"60\" r=\"10\" style=\"fill:blue;\"></circle>\n <circle id=\"circle2\" cx=\"80\" cy=\"60\" r=\"10\" style=\"fill:blue;\"></circle>\n <circle id=\"circle2\" cx=\"120\" cy=\"60\" r=\"10\" style=\"fill:blue;\"></circle>\n</svg>\n\n%%javascript\n\nrequire(\n['d3'],\n \nfunction(d3) {\n var circle = d3.selectAll(\"#circle3\");\n \n circle.attr(\n \"cx\",\n function() {\n return Math.random() * 720;\n }\n );\n}\n\n);\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle3\" cx=\"40\" cy=\"60\" r=\"30\" style=\"fill:red;\"></circle>\n <circle id=\"circle3\" cx=\"80\" cy=\"60\" r=\"30\" style=\"fill:red;\"></circle>\n <circle id=\"circle3\" cx=\"120\" cy=\"60\" r=\"30\" style=\"fill:red;\"></circle>\n</svg>\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle4before\" cx=\"40\" cy=\"60\" r=\"10\" style=\"fill:red;\"></circle>\n <circle id=\"circle4before\" cx=\"240\" cy=\"60\" r=\"10\" style=\"fill:green;\"></circle>\n <circle id=\"circle4before\" cx=\"500\" cy=\"60\" r=\"10\" style=\"fill:blue;\"></circle>\n</svg>\n\n%%javascript\n\nrequire(\n['d3'],\n\nfunction(d3) {\n var circle = d3.selectAll(\"#circle4after\")\n \n circle.data([50, 150, 600]);\n \n circle.attr(\n \"r\",\n function(dataitem) {\n return Math.sqrt(dataitem);\n }\n );\n}\n\n);\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle id=\"circle4after\" cx=\"40\" cy=\"60\" r=\"10\" style=\"fill:red;\"></circle>\n <circle id=\"circle4after\" cx=\"240\" cy=\"60\" r=\"10\" style=\"fill:green;\"></circle>\n <circle id=\"circle4after\" cx=\"500\" cy=\"60\" r=\"10\" style=\"fill:blue;\"></circle>\n</svg>\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <rect id=\"rect0\" x=\"1\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:red;\"></rect>\n <rect id=\"rect0\" x=\"5\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:green;\"></rect>\n <rect id=\"rect0\" x=\"9\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:blue;\"></rect>\n</svg> \n\n%%javascript\n\nrequire(\n['d3'],\n\nfunction(d3) {\n var rect = d3.selectAll(\"#rect1\")\n \n rect.data([0, 40, 100]);\n \n rect.attr(\n \"x\",\n function(dataitem, index) {\n return index * 100 + dataitem;\n }\n 
);\n}\n\n);\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <rect id=\"rect1\" x=\"1\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:red;\"></rect>\n <rect id=\"rect1\" x=\"5\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:green;\"></rect>\n <rect id=\"rect1\" x=\"9\" y=\"0\" width=\"100\" height=\"100\" style=\"fill:blue;\"></rect>\n</svg> ", "TODO", "%%svg\n<svg width=\"500\" height=\"100\">\n<g stroke=\"green\">\n<line x1=\"10\" y1=\"30\" x2=\"10\" y2=\"100\" stroke-width=\"1\"></line>\n<line x1=\"20\" y1=\"30\" x2=\"20\" y2=\"100\" stroke-width=\"1\"></line>\n</g>\n</svg>\n\n%%svg\n<svg width=\"720\" height=\"120\">\n <circle cx=\"40\" cy=\"60\" r=\"10\"></circle>\n <circle cx=\"80\" cy=\"60\" r=\"10\"></circle>\n <circle cx=\"120\" cy=\"60\" r=\"10\"></circle>\n</svg>", "Enter selections are not working!!", "%%javascript\n\nrequire(\n['d3'],\n\nfunction(d3) {\n var svg = d3.select(\"svg\");\n \n var circle = svg.selectAll(\"circle\");\n \n circle.data([5, 10, 15, 20]);\n \n circle.attr(\"r\", function(d) { return d; });\n \n console.log(circle);\n \n var circleEnter = circle.enter().append(\"circle\");\n \n circleEnter.attr('r', function(d) {return 100;})\n}\n\n);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jrbourbeau/cr-composition
notebooks/legacy/lightheavy/spectrum-analysis.ipynb
mit
[ "<a id='top'> </a>\nAuthor: James Bourbeau", "%load_ext watermark\n%watermark -u -d -v -p numpy,matplotlib,scipy,pandas,sklearn,mlxtend", "Cosmic-ray composition spectrum analysis\nTable of contents\n\nDefine analysis free parameters\nData preprocessing\nFitting random forest\nFraction correctly identified\nSpectrum\nUnfolding\nFeature importance", "%matplotlib inline\nfrom __future__ import division, print_function\nfrom collections import defaultdict\nimport itertools\nimport numpy as np\nfrom scipy import interp\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\nimport seaborn.apionly as sns\nimport matplotlib as mpl\n\nfrom sklearn.metrics import accuracy_score, confusion_matrix, roc_curve, auc, classification_report\nfrom sklearn.model_selection import cross_val_score, StratifiedShuffleSplit, KFold, StratifiedKFold\nfrom mlxtend.feature_selection import SequentialFeatureSelector as SFS\n\nimport composition as comp\nimport composition.analysis.plotting as plotting\n \ncolor_dict = comp.analysis.get_color_dict()", "Define analysis free parameters\n[ back to top ]", "bin_midpoints, _, counts, counts_err = comp.get1d('/home/jbourbeau/PyUnfold/unfolded_output_h3a.root', 'NC', 'Unf_ks_ACM/bin0')", "Whether or not to train on 'light' and 'heavy' composition classes, or the individual compositions", "comp_class = True\ncomp_list = ['light', 'heavy'] if comp_class else ['P', 'He', 'O', 'Fe']", "Get composition classifier pipeline", "pipeline_str = 'GBDT'\npipeline = comp.get_pipeline(pipeline_str)", "Define energy binning for this analysis", "energybins = comp.analysis.get_energybins()", "Data preprocessing\n[ back to top ]\n1. Load simulation/data dataframe and apply specified quality cuts\n2. Extract desired features from dataframe\n3. Get separate testing and training datasets\n4. 
Feature transformation", "sim_train, sim_test = comp.preprocess_sim(comp_class=comp_class, return_energy=True)\n\n# Compute the correlation matrix\ndf_sim = comp.load_dataframe(datatype='sim', config='IC79')\nfeature_list, feature_labels = comp.analysis.get_training_features()\n\nfig, ax = plt.subplots()\ndf_sim[df_sim.MC_comp_class == 'light'].avg_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\ndf_sim[df_sim.MC_comp_class == 'heavy'].avg_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\nax.grid()\nplt.show()\n\nfig, ax = plt.subplots()\ndf_sim[df_sim.MC_comp_class == 'light'].invcharge_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\ndf_sim[df_sim.MC_comp_class == 'heavy'].invcharge_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\nax.grid()\nplt.show()\n\nfig, ax = plt.subplots()\ndf_sim[df_sim.MC_comp_class == 'light'].max_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\ndf_sim[df_sim.MC_comp_class == 'heavy'].max_inice_radius.plot(kind='hist', bins=50, ax=ax, alpha=0.75)\nax.grid()\nplt.show()\n\ncorr = df_sim[feature_list].corr()\n# Generate a mask for the upper triangle\nmask = np.zeros_like(corr, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\n\nfig, ax = plt.subplots()\nsns.heatmap(corr, mask=mask, cmap='RdBu_r', center=0,\n square=True, xticklabels=feature_labels, yticklabels=feature_labels,\n linewidths=.5, cbar_kws={'label': 'Covariance'}, annot=True, ax=ax)\n# outfile = args.outdir + '/feature_covariance.png'\n# plt.savefig(outfile)\nplt.show()\n\nlabel_replacement = {feature: labels for feature, labels in zip(feature_list, feature_labels)}\nwith plt.rc_context({'text.usetex': False}):\n g = sns.pairplot(df_sim.sample(frac=1)[:1000], vars=feature_list, hue='MC_comp_class',\n plot_kws={'alpha': 0.5, 'linewidth': 0},\n diag_kws={'histtype': 'step', 'linewidth': 2, 'fill': True, 'alpha': 0.75, 'bins': 15})\n for i in range(len(feature_list)):\n for j in range(len(feature_list)):\n xlabel = g.axes[i][j].get_xlabel()\n ylabel = g.axes[i][j].get_ylabel()\n if xlabel in label_replacement.keys():\n g.axes[i][j].set_xlabel(label_replacement[xlabel])\n if ylabel in label_replacement.keys():\n g.axes[i][j].set_ylabel(label_replacement[ylabel])\n \n g.fig.get_children()[-1].set_title('Comp class') \n# g.fig.get_children()[-1].set_bbox_to_anchor((1.1, 0.5, 0, 0))\n\ndata = comp.preprocess_data(comp_class=comp_class, return_energy=True)\n\nis_finite_mask = np.isfinite(data.X)\nnot_finite_mask = np.logical_not(is_finite_mask)\nfinite_data_mask = np.logical_not(np.any(not_finite_mask, axis=1))\ndata = data[finite_data_mask]", "Run classifier over training and testing sets to get an idea of the degree of overfitting", "clf_name = pipeline.named_steps['classifier'].__class__.__name__\nprint('=' * 30)\nprint(clf_name)\nweights = sim_train.energy**-1.7\npipeline.fit(sim_train.X, sim_train.y)\n# pipeline.fit(sim_train.X, sim_train.y, classifier__sample_weight=weights)\ntrain_pred = pipeline.predict(sim_train.X)\ntrain_acc = accuracy_score(sim_train.y, train_pred)\nprint('Training accuracy = {:.2%}'.format(train_acc))\ntest_pred = pipeline.predict(sim_test.X)\ntest_acc = accuracy_score(sim_test.y, test_pred)\nprint('Testing accuracy = {:.2%}'.format(test_acc))\nprint('=' * 30)\n\nnum_features = len(feature_list)\nimportances = pipeline.named_steps['classifier'].feature_importances_\nindices = np.argsort(importances)[::-1]\n\nfig, ax = plt.subplots()\nfor f in range(num_features):\n print('{}) {}'.format(f + 1, 
importances[indices[f]]))\n\nplt.ylabel('Feature Importances')\nplt.bar(range(num_features),\n importances[indices],\n align='center')\n\nplt.xticks(range(num_features),\n feature_labels[indices], rotation=90)\nplt.xlim([-1, len(feature_list)])\nplt.show()", "Fraction correctly identified\n[ back to top ]", "def get_frac_correct(train, test, pipeline, comp_list):\n \n assert isinstance(train, comp.analysis.DataSet), 'train dataset must be a DataSet'\n assert isinstance(test, comp.analysis.DataSet), 'test dataset must be a DataSet'\n assert train.y is not None, 'train must have true y values'\n assert test.y is not None, 'test must have true y values'\n \n pipeline.fit(train.X, train.y)\n test_predictions = pipeline.predict(test.X)\n correctly_identified_mask = (test_predictions == test.y)\n\n # Construct MC composition masks\n MC_comp_mask = {}\n for composition in comp_list:\n MC_comp_mask[composition] = (test.le.inverse_transform(test.y) == composition)\n MC_comp_mask['total'] = np.array([True]*len(test))\n \n reco_frac, reco_frac_err = {}, {}\n for composition in comp_list+['total']:\n comp_mask = MC_comp_mask[composition]\n # Get number of MC comp in each reco energy bin\n num_MC_energy = np.histogram(test.log_energy[comp_mask],\n bins=energybins.log_energy_bins)[0]\n num_MC_energy_err = np.sqrt(num_MC_energy)\n\n # Get number of correctly identified comp in each reco energy bin\n num_reco_energy = np.histogram(test.log_energy[comp_mask & correctly_identified_mask],\n bins=energybins.log_energy_bins)[0]\n num_reco_energy_err = np.sqrt(num_reco_energy)\n\n # Calculate correctly identified fractions as a function of MC energy\n reco_frac[composition], reco_frac_err[composition] = comp.ratio_error(\n num_reco_energy, num_reco_energy_err,\n num_MC_energy, num_MC_energy_err)\n \n return reco_frac, reco_frac_err", "Calculate classifier generalization error via 10-fold CV", "# Split training data into CV training and testing folds\nkf = KFold(n_splits=10)\nfrac_correct_folds = defaultdict(list)\nfold_num = 0\nprint('Fold ', end='')\nfor train_index, test_index in kf.split(sim_train.X):\n fold_num += 1\n print('{}...'.format(fold_num), end='')\n \n reco_frac, reco_frac_err = get_frac_correct(sim_train[train_index],\n sim_train[test_index],\n pipeline, comp_list)\n \n for composition in comp_list:\n frac_correct_folds[composition].append(reco_frac[composition])\n frac_correct_folds['total'].append(reco_frac['total'])\nfrac_correct_gen_err = {key: np.std(frac_correct_folds[key], axis=0) for key in frac_correct_folds}\n# scores = np.array(frac_correct_folds['total'])\n# score = scores.mean(axis=1).mean()\n# score_std = scores.mean(axis=1).std()\n\navg_frac_correct_data = {'values': np.mean(frac_correct_folds['total'], axis=0), 'errors': np.std(frac_correct_folds['total'], axis=0)}\navg_frac_correct, avg_frac_correct_err = comp.analysis.averaging_error(**avg_frac_correct_data)\n\nreco_frac, reco_frac_stat_err = get_frac_correct(sim_train, sim_test, pipeline, comp_list)\n\n# Plot fraction of events correctlt classified vs energy\nfig, ax = plt.subplots()\nfor composition in comp_list + ['total']:\n err = np.sqrt(frac_correct_gen_err[composition]**2 + reco_frac_stat_err[composition]**2)\n plotting.plot_steps(energybins.log_energy_midpoints, reco_frac[composition], err, ax,\n color_dict[composition], composition)\nplt.xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\nax.set_ylabel('Fraction correctly identified')\nax.set_ylim([0.0, 1.0])\nax.set_xlim([energybins.log_energy_min, 
energybins.log_energy_max])\nax.grid()\nleg = plt.legend(loc='upper center', frameon=False,\n bbox_to_anchor=(0.5, # horizontal\n 1.1),# vertical \n ncol=len(comp_list)+1, fancybox=False)\n# set the linewidth of each legend object\nfor legobj in leg.legendHandles:\n legobj.set_linewidth(3.0)\n\ncv_str = 'Accuracy: {:0.2f}\\% (+/- {:0.1f}\\%)'.format(avg_frac_correct*100, avg_frac_correct_err*100)\nax.text(7.4, 0.2, cv_str,\n ha=\"center\", va=\"center\", size=10,\n bbox=dict(boxstyle='round', fc=\"white\", ec=\"gray\", lw=0.8))\nplt.savefig('/home/jbourbeau/public_html/figures/frac-correct-{}.png'.format(pipeline_str))\nplt.show()\n\n# Plot the two-class decision scores\nclassifier_score = pipeline.decision_function(sim_train.X)\nlight_mask = sim_train.le.inverse_transform(sim_train.y) == 'light'\nheavy_mask = sim_train.le.inverse_transform(sim_train.y) == 'heavy'\nfig, ax = plt.subplots()\nscore_bins = np.linspace(-1, 1, 50)\nax.hist(classifier_score[light_mask], bins=score_bins, label='light', alpha=0.75)\nax.hist(classifier_score[heavy_mask], bins=score_bins, label='heavy', alpha=0.75)\nax.grid()\nax.legend()\nplt.show()\n\nimport multiprocessing as mp\n\nkf = KFold(n_splits=10)\nfrac_correct_folds = defaultdict(list)\n\n# Define an output queue\noutput = mp.Queue()\n\n# define a example function\ndef rand_string(length, output):\n \"\"\" Generates a random string of numbers, lower- and uppercase chars. \"\"\"\n rand_str = ''.join(random.choice(\n string.ascii_lowercase\n + string.ascii_uppercase\n + string.digits)\n for i in range(length))\n output.put(rand_str)\n\n# Setup a list of processes that we want to run\nprocesses = [mp.Process(target=get_frac_correct,\n args=(sim_train[train_index],\n sim_train[test_index],\n pipeline, comp_list)) for train_index, test_index in kf.split(sim_train.X)]\n\n# Run processes\nfor p in processes:\n p.start()\n\n# Exit the completed processes\nfor p in processes:\n p.join()\n\n# Get process results from the output queue\nresults = [output.get() for p in processes]\n\nprint(results)", "Spectrum\n[ back to top ]", "def get_num_comp_reco(train, test, pipeline, comp_list):\n \n assert isinstance(train, comp.analysis.DataSet), 'train dataset must be a DataSet'\n assert isinstance(test, comp.analysis.DataSet), 'test dataset must be a DataSet'\n assert train.y is not None, 'train must have true y values'\n \n pipeline.fit(train.X, train.y)\n test_predictions = pipeline.predict(test.X)\n\n # Get number of correctly identified comp in each reco energy bin\n num_reco_energy, num_reco_energy_err = {}, {}\n for composition in comp_list:\n# print('composition = {}'.format(composition))\n comp_mask = train.le.inverse_transform(test_predictions) == composition\n# print('sum(comp_mask) = {}'.format(np.sum(comp_mask)))\n print(test.log_energy[comp_mask])\n num_reco_energy[composition] = np.histogram(test.log_energy[comp_mask],\n bins=energybins.log_energy_bins)[0]\n num_reco_energy_err[composition] = np.sqrt(num_reco_energy[composition])\n\n num_reco_energy['total'] = np.histogram(test.log_energy, bins=energybins.log_energy_bins)[0]\n num_reco_energy_err['total'] = np.sqrt(num_reco_energy['total'])\n \n return num_reco_energy, num_reco_energy_err\n\ndf_sim = comp.load_dataframe(datatype='sim', config='IC79')\n\ndf_sim[['log_dEdX', 'num_millipede_particles']].corr()\n\nmax_zenith_rad = df_sim['lap_zenith'].max()\n\n# Get number of events per energy bin\nnum_reco_energy, num_reco_energy_err = get_num_comp_reco(sim_train, data, pipeline, comp_list)\nimport 
pprint\npprint.pprint(num_reco_energy)\npprint.pprint(num_reco_energy_err)\n# Solid angle\nsolid_angle = 2*np.pi*(1-np.cos(max_zenith_rad))\n\nprint(num_reco_energy['light'].sum())\nprint(num_reco_energy['heavy'].sum())\nfrac_light = num_reco_energy['light'].sum()/num_reco_energy['total'].sum()\nprint(frac_light)\n\n# Live-time information\ngoodrunlist = pd.read_table('/data/ana/CosmicRay/IceTop_GRL/IC79_2010_GoodRunInfo_4IceTop.txt', skiprows=[0, 3])\ngoodrunlist.head()\n\nlivetimes = goodrunlist['LiveTime(s)']\nlivetime = np.sum(livetimes[goodrunlist['Good_it_L2'] == 1])\nprint('livetime (seconds) = {}'.format(livetime))\nprint('livetime (days) = {}'.format(livetime/(24*60*60)))\n\nfig, ax = plt.subplots()\nfor composition in comp_list + ['total']:\n # Calculate dN/dE\n y = num_reco_energy[composition]\n y_err = num_reco_energy_err[composition]\n\n plotting.plot_steps(energybins.log_energy_midpoints, y, y_err,\n ax, color_dict[composition], composition)\nax.set_yscale(\"log\", nonposy='clip')\nplt.xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\nax.set_ylabel('Counts')\n# ax.set_xlim([6.3, 8.0])\n# ax.set_ylim([10**-6, 10**-1])\nax.grid(linestyle=':')\nleg = plt.legend(loc='upper center', frameon=False,\n bbox_to_anchor=(0.5, # horizontal\n 1.1),# vertical \n ncol=len(comp_list)+1, fancybox=False)\n# set the linewidth of each legend object\nfor legobj in leg.legendHandles:\n legobj.set_linewidth(3.0)\n\nplt.savefig('/home/jbourbeau/public_html/figures/rate.png')\nplt.show()\n\nfig, ax = plt.subplots()\nfor composition in comp_list + ['total']:\n # Calculate dN/dE\n y = num_reco_energy[composition]\n y_err = num_reco_energy_err[composition]\n # Add time duration\n# y = y / livetime\n# y_err = y / livetime\n y, y_err = comp.analysis.ratio_error(y, y_err, livetime, 0.005*livetime)\n plotting.plot_steps(energybins.log_energy_midpoints, y, y_err,\n ax, color_dict[composition], composition)\nax.set_yscale(\"log\", nonposy='clip')\nplt.xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\nax.set_ylabel('Rate [s$^{-1}$]')\n# ax.set_xlim([6.3, 8.0])\n# ax.set_ylim([10**-6, 10**-1])\nax.grid(linestyle=':')\nleg = plt.legend(loc='upper center', frameon=False,\n bbox_to_anchor=(0.5, # horizontal\n 1.1),# vertical \n ncol=len(comp_list)+1, fancybox=False)\n# set the linewidth of each legend object\nfor legobj in leg.legendHandles:\n legobj.set_linewidth(3.0)\n\nplt.savefig('/home/jbourbeau/public_html/figures/rate.png')\nplt.show()\n\ndf_sim, cut_dict_sim = comp.load_dataframe(datatype='sim', config='IC79', return_cut_dict=True)\nselection_mask = np.array([True] * len(df_sim))\nstandard_cut_keys = ['IceTopQualityCuts', 'lap_InIce_containment',\n 'num_hits_1_60',\n# 'num_hits_1_60', 'max_qfrac_1_60',\n 'InIceQualityCuts']\nfor key in standard_cut_keys:\n selection_mask *= cut_dict_sim[key]\n\ndf_sim = df_sim[selection_mask]\n\ndef get_energy_res(df_sim, energy_bins):\n reco_log_energy = df_sim['lap_log_energy'].values \n MC_log_energy = df_sim['MC_log_energy'].values\n energy_res = reco_log_energy - MC_log_energy\n bin_centers, bin_medians, energy_err = comp.analysis.data_functions.get_medians(reco_log_energy,\n energy_res,\n energy_bins)\n return np.abs(bin_medians)\n\ndef counts_to_flux(counts, counts_err, eff_area=156390.673059, livetime=1):\n # Calculate dN/dE\n y = counts/energybins.energy_bin_widths\n y_err = counts_err/energybins.energy_bin_widths\n # Add effective area\n eff_area = np.array([eff_area]*len(y))\n eff_area_error = np.array([0.01 * eff_area]*len(y_err))\n y, y_err = 
comp.analysis.ratio_error(y, y_err, eff_area, eff_area_error)\n # Add solid angle\n y = y / solid_angle\n y_err = y_err / solid_angle\n # Add time duration\n# y = y / livetime\n# y_err = y / livetime\n livetime = np.array([livetime]*len(y))\n flux, flux_err = comp.analysis.ratio_error(y, y_err, livetime, 0.01*livetime)\n # Add energy scaling \n scaled_flux = energybins.energy_midpoints**2.7 * flux\n scaled_flux_err = energybins.energy_midpoints**2.7 * flux_err\n\n return scaled_flux, scaled_flux_err\n\n# Plot fraction of events vs energy\n# fig, ax = plt.subplots(figsize=(8, 6))\nfig = plt.figure()\nax = plt.gca()\nfor composition in comp_list + ['total']:\n y, y_err = counts_to_flux(num_reco_energy[composition], num_reco_energy_err[composition], livetime=livetime)\n plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)\nax.set_yscale(\"log\", nonposy='clip')\nplt.xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\n# ax.set_ylabel('$\\mathrm{E}^{2.7} \\\\frac{\\mathrm{dN}}{\\mathrm{dE dA d\\Omega dt}} \\ [\\mathrm{GeV}^{1.7} \\mathrm{m}^{-2} \\mathrm{sr}^{-1} \\mathrm{s}^{-1}]$')\nax.set_ylabel('$\\mathrm{E}^{2.7} \\ J(E) \\ [\\mathrm{GeV}^{1.7} \\mathrm{m}^{-2} \\mathrm{sr}^{-1} \\mathrm{s}^{-1}]$')\nax.set_xlim([6.4, 9.0])\nax.set_ylim([10**2, 10**5])\nax.grid(linestyle='dotted', which=\"both\")\n \n# Add 3-year scraped flux\ndf_proton = pd.read_csv('3yearscraped/proton', sep='\\t', header=None, names=['energy', 'flux'])\ndf_helium = pd.read_csv('3yearscraped/helium', sep='\\t', header=None, names=['energy', 'flux'])\ndf_light = pd.DataFrame.from_dict({'energy': df_proton.energy, \n 'flux': df_proton.flux + df_helium.flux})\n\ndf_oxygen = pd.read_csv('3yearscraped/oxygen', sep='\\t', header=None, names=['energy', 'flux'])\ndf_iron = pd.read_csv('3yearscraped/iron', sep='\\t', header=None, names=['energy', 'flux'])\ndf_heavy = pd.DataFrame.from_dict({'energy': df_oxygen.energy, \n 'flux': df_oxygen.flux + df_iron.flux})\n\n# if comp_class:\n# ax.plot(np.log10(df_light.energy), df_light.flux, label='3 yr light',\n# marker='.', ls=':')\n# ax.plot(np.log10(df_heavy.energy), df_heavy.flux, label='3 yr heavy',\n# marker='.', ls=':')\n# ax.plot(np.log10(df_heavy.energy), df_heavy.flux+df_light.flux, label='3 yr total',\n# marker='.', ls=':')\n# else:\n# ax.plot(np.log10(df_proton.energy), df_proton.flux, label='3 yr proton',\n# marker='.', ls=':')\n# ax.plot(np.log10(df_helium.energy), df_helium.flux, label='3 yr helium',\n# marker='.', ls=':', color=color_dict['He'])\n# ax.plot(np.log10(df_oxygen.energy), df_oxygen.flux, label='3 yr oxygen',\n# marker='.', ls=':', color=color_dict['O'])\n# ax.plot(np.log10(df_iron.energy), df_iron.flux, label='3 yr iron',\n# marker='.', ls=':', color=color_dict['Fe'])\n# ax.plot(np.log10(df_iron.energy), df_proton.flux+df_helium.flux+df_oxygen.flux+df_iron.flux, label='3 yr total',\n# marker='.', ls=':', color='C2')\n\n\nleg = plt.legend(loc='upper center', frameon=False,\n bbox_to_anchor=(0.5, # horizontal\n 1.15),# vertical \n ncol=len(comp_list)+1, fancybox=False)\n# set the linewidth of each legend object\nfor legobj in leg.legendHandles:\n legobj.set_linewidth(3.0)\n\nplt.savefig('/home/jbourbeau/public_html/figures/spectrum.png')\nplt.show()\n\nif not comp_class:\n # Add 3-year scraped flux\n df_proton = pd.read_csv('3yearscraped/proton', sep='\\t', header=None, names=['energy', 'flux'])\n df_helium = pd.read_csv('3yearscraped/helium', sep='\\t', header=None, names=['energy', 'flux'])\n df_oxygen = 
pd.read_csv('3yearscraped/oxygen', sep='\\t', header=None, names=['energy', 'flux'])\n df_iron = pd.read_csv('3yearscraped/iron', sep='\\t', header=None, names=['energy', 'flux'])\n # Plot fraction of events vs energy\n fig, axarr = plt.subplots(2, 2, figsize=(8, 6))\n for composition, ax in zip(comp_list + ['total'], axarr.flatten()):\n # Calculate dN/dE\n y = num_reco_energy[composition]/energybins.energy_bin_widths\n y_err = num_reco_energy_err[composition]/energybins.energy_bin_widths\n # Add effective area\n y, y_err = comp.analysis.ratio_error(y, y_err, eff_area, eff_area_error)\n # Add solid angle\n y = y / solid_angle\n y_err = y_err / solid_angle\n # Add time duration\n y = y / livetime\n y_err = y / livetime\n y = energybins.energy_midpoints**2.7 * y\n y_err = energybins.energy_midpoints**2.7 * y_err\n plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)\n # Load 3-year flux\n df_3yr = pd.read_csv('3yearscraped/{}'.format(composition), sep='\\t',\n header=None, names=['energy', 'flux'])\n ax.plot(np.log10(df_3yr.energy), df_3yr.flux, label='3 yr {}'.format(composition),\n marker='.', ls=':', color=color_dict[composition])\n ax.set_yscale(\"log\", nonposy='clip')\n # ax.set_xscale(\"log\", nonposy='clip')\n ax.set_xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\n ax.set_ylabel('$\\mathrm{E}^{2.7} \\\\frac{\\mathrm{dN}}{\\mathrm{dE dA d\\Omega dt}} \\ [\\mathrm{GeV}^{1.7} \\mathrm{m}^{-2} \\mathrm{sr}^{-1} \\mathrm{s}^{-1}]$')\n ax.set_xlim([6.3, 8])\n ax.set_ylim([10**3, 10**5])\n ax.grid(linestyle='dotted', which=\"both\")\n ax.legend()\n\n plt.savefig('/home/jbourbeau/public_html/figures/spectrum.png')\n plt.show()", "Unfolding\n[ back to top ]", "bin_midpoints, _, counts, counts_err = comp.get1d('/home/jbourbeau/PyUnfold/unfolded_output_h3a.root', 'NC', 'Unf_ks_ACM/bin0')\n\nlight_counts = counts[::2]\nheavy_counts = counts[1::2]\nlight_counts, heavy_counts\n\nfig, ax = plt.subplots()\nfor composition in comp_list + ['total']:\n y, y_err = counts_to_flux(num_reco_energy[composition], num_reco_energy_err[composition], livetime=livetime)\n plotting.plot_steps(energybins.log_energy_midpoints, y, y_err, ax, color_dict[composition], composition)\n \nh3a_light_flux, h3a_flux_err = counts_to_flux(light_counts, np.sqrt(light_counts), livetime=livetime)\nh3a_heavy_flux, h3a_flux_err = counts_to_flux(heavy_counts, np.sqrt(heavy_counts), livetime=livetime)\n\nax.plot(energybins.log_energy_midpoints, h3a_light_flux, ls=':', label='h3a light unfolded')\nax.plot(energybins.log_energy_midpoints, h3a_heavy_flux, ls=':', label='h3a heavy unfolded')\n\nax.set_yscale(\"log\", nonposy='clip')\nplt.xlabel('$\\log_{10}(E_{\\mathrm{reco}}/\\mathrm{GeV})$')\n# ax.set_ylabel('$\\mathrm{E}^{2.7} \\\\frac{\\mathrm{dN}}{\\mathrm{dE dA d\\Omega dt}} \\ [\\mathrm{GeV}^{1.7} \\mathrm{m}^{-2} \\mathrm{sr}^{-1} \\mathrm{s}^{-1}]$')\nax.set_ylabel('$\\mathrm{E}^{2.7} \\ J(E) \\ [\\mathrm{GeV}^{1.7} \\mathrm{m}^{-2} \\mathrm{sr}^{-1} \\mathrm{s}^{-1}]$')\nax.set_xlim([6.4, 9.0])\nax.set_ylim([10**2, 10**5])\nax.grid(linestyle='dotted', which=\"both\")\n \n\nleg = plt.legend(loc='upper center', frameon=False,\n bbox_to_anchor=(0.5, # horizontal\n 1.15),# vertical \n ncol=len(comp_list)+1, fancybox=False)\n# set the linewidth of each legend object\nfor legobj in leg.legendHandles:\n legobj.set_linewidth(3.0)\n\nplt.savefig('/home/jbourbeau/public_html/figures/spectrum-unfolded.png')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mmaelicke/scikit-gstat
tutorials/07_maximum_likelihood_fit.ipynb
mit
[ "7. Maximum Likelihood fit", "import skgstat as skg\nfrom skgstat.util.likelihood import get_likelihood\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\nimport warnings\nfrom time import time\nimport matplotlib.pyplot as plt\nwarnings.filterwarnings('ignore')", "We use the pancake dataset, sampled at 300 random locations to produce a quite dense sample.", "# use the same dataset as used in GMD paper\nc, v = skg.data.pancake(N=300, seed=42).get('sample')", "First of, the variogram is calculated. We use Scott's rule to determine the number of lag classes, explicitly set Trust-Region Reflective as fitting method (although its default) and limit the distance matrix to 70% of the maximum separating distance.\nAdditionally, we capture the processing time for the whole variogram estimation. Note, that this also includes the calculation of the distance matrix, which is a mututal step.", "t1 = time()\nV = skg.Variogram(c,v, bin_func='scott', maxlag=0.7, fit_func='trf')\nt2 = time() # get time for full analysis, including fit\nprint(f\"Processing time: {round((t2 - t1) * 1000)} ms\")\nprint(V)\nfig = V.plot()", "Maximum likelihood using SciKit-GStat\nSince version 0.6.12 SciKit-GStat implements an utility function factory which takes a Variogram instance and builds up a (negative) maximum likelihood function for the associated sample, distance matrix and model type. The used function is defined in eq. 14 from Lark (2000). Eq. 16 from same publication was adapted to all available theoretical models available in SciKit-GStat, with the exception of the harmonized model, which does not require a fitting.\nFirst step to perform the fitting is to make initial guesses for the parameters. Here, we take the mean separating distance for the effective range, the sample variance for the sill and 10% of the sample variance for the nugget. To improve performance and runtime, we also define a boundary to restrict the parameter space.", "# base initial guess on separating distance and sample variance\nsep_mean = V.distance.mean()\nsam_var = V.values.var()\nprint(f\"Mean sep. distance: {sep_mean.round(1)} sample variance: {sam_var.round(1)}\")\n\n# create initial guess\n# mean dist. variance 5% of variance\np0 = np.array([sep_mean, sam_var, 0.1 * sam_var])\nprint('initial guess: ', p0.round(1))\n\n# create the bounds to restrict optimization\nbounds = [[0, V.bins[-1]], [0, 3*sam_var], [0, 2.9*sam_var]]\nprint('bounds: ', bounds)\n", "Next step is to pass the Variogram instance to the function factory. We find optimal parameters by minimizing the returned negative log-likelihood function. Please refer to SciPy's minimize function to learn about attributes. The returned function from the utility suite is built with SciPy in mind, as the function signature complies to SciPy's interface and, thus can just be passed to the minimize function.\nHere, we pass the initial guess, the bounds and set the solver method to SLSQP, a suitable solver for bounded optimization.", "# load the likelihood function for this variogram\nlikelihood = get_likelihood(V)\n\n# minimize the likelihood function \nt3 = time()\nres = minimize(likelihood, p0, bounds=bounds, method='SLSQP')\nt4 = time()\nprint(f\"Processing time {np.round(t4 - t3, 2)} seconds\")\n\nprint('initial guess: ', p0.round(1))\nprint('optimal parameters:', res.x.round(1))", "Apply the optimized parameters. For comparison, the three method-of-moment methods from SciKit-GStat are applied as well. 
Note that the used sample is quite dense. Thus we do not expect a difference between the MoM-based procedures. They should all find the same parameters.", "# use 100 steps\nx = np.linspace(0, V.bins[-1], 100)\n\n# apply the maximum likelihood fit parameters\ny_ml = V.model(x, *res.x)\n\n# apply the trf fit\ny_trf = V.fitted_model(x)\n\n# apply Levenberg-Marquardt\nV.fit_method = 'lm'\ny_lm = V.fitted_model(x)\n\n# apply parameter ml\nV.fit_method = 'ml'\ny_pml = V.fitted_model(x)\n\n# check if the method-of-moment fits are different\nprint('Trf and Levenberg-Marquardt identical: ', all(y_lm - y_trf < 0.1))\nprint('Trf and parameter ML identical: ', all(y_pml - y_trf < 0.1))", "Make the result plot", "plt.plot(V.bins, V.experimental, '.b', label='experimental')\nplt.plot(x, y_ml, '-g', label='ML fit (Lark, 2000)')\nplt.plot(x, y_trf, '-b', label='SciKit-GStat TRF')\nplt.legend(loc='lower right')\n#plt.gcf().savefig('compare.pdf', dpi=300)", "Build from scratch\nSciKit-GStat's utility suite only implements the maximum likelihood approach as published by Lark (2000). There are no settings to adjust the returned function, nor to use other implementations. If you need to use another approach, the idea behind the implementation is demonstrated below for the spherical variogram model. This solution is built on SciPy only and does not need SciKit-GStat, in case the distance matrix is built externally.", "from scipy.spatial.distance import squareform\nfrom scipy.linalg import inv, det\n\n# define the spherical model only dependent on the range\ndef f(h, a):\n if h >= a:\n return 1.\n elif h == 0:\n return 0.\n return (3*h) / (2*a) - 0.5 * (h / a)**3\n\n# create the autocovariance matrix \ndef get_A(r, s, b, dists):\n a = np.array([f(d, r) for d in dists])\n A = squareform((s / (s + b)) * (1 - a))\n np.fill_diagonal(A, 1)\n\n return A\n\n# likelihood function\ndef like(r, s, b, z, dists):\n A = get_A(r, s, b, dists)\n n = len(A)\n A_inv = inv(A)\n ones = np.ones((n, 1))\n z = z.reshape(n, -1)\n m = inv(ones.T @ A_inv @ ones) @ (ones.T @ A_inv @ z)\n b = np.log((z - m).T @ A_inv @ (z - m))\n d = np.log(det(A))\n if d == -np.inf:\n print('invalid det(A)')\n return np.inf\n loglike = (n / 2)*np.log(2*np.pi) + (n / 2) - (n / 2)* np.log(n) + 0.5* d + (n / 2) * b\n return loglike.flatten()[0]\n\nfrom scipy.optimize import minimize\nfrom scipy.spatial.distance import pdist\n\n# c and v are the coordinate and value arrays from the data source\nz = np.array(v)\n\n# in case you use 2D coordinates, without caching and euclidean metric, skgstat is using pdist under the hood\ndists = pdist(c)\n\nfun = lambda x, *args: like(x[0], x[1], x[2], z=z, dists=dists)\nt3 = time()\nres = minimize(fun, p0, bounds=bounds)\nt4 = time()\nprint(f\"Processing time {np.round(t4 - t3, 2)} seconds\")\n\nprint('initial guess: ', p0.round(1))\nprint('optimal parameters:', res.x.round(1))\n\nimport matplotlib.pyplot as plt\nmod = lambda h: f(h, res.x[0]) * res.x[1] + res.x[2]\n\nx = np.linspace(0, 450, 100)\ny = list(map(mod, x))\ny2 = V.fitted_model(x)\n\nplt.plot(V.bins, V.experimental, '.b', label='experimental')\nplt.plot(x, y, '-g', label='ML fit (Lark, 2000)')\nplt.plot(x, y2, '-b', label='SciKit-GStat default fit')\nplt.legend(loc='lower right')\nplt.gcf().savefig('compare.pdf', dpi=300)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Upward-Spiral-Science/team1
code/regression_simulation_old.ipynb
apache-2.0
[ "Regression (predicting unmasked value given (x, y, z, synapses))\nStep 1: Assumptions\nAssume that unmasked values, Y, follow some joint distribution $F_{Y \\mid X}$ where $X$ is the set of data, which are vectors in $\\mathbb{R}^4$ and its elements correspond to x coordinate, y coordinate, z coordinate, synapses, respectively.\nStep 2: Define model\nLet the true values of unmasked correspond to the set $Y$, and let the joint distribution be parameterized by $\\theta$. So for each $x_i \\in X \\textrm{ and } y_i \\in Y \\ , F(x;\\theta)=y$. \nWe want to find parameters $\\hat \\theta$ such that we minimize the loss function $l(\\hat y, y)$, where $\\hat y = F(x;\\hat \\theta)$.\nStep 3: Algorithms\nLinear Regression\nSupport Vector Regression (SVR)\nK-Nearest Neighbor Regression (KNN)\nRandom Forest Regression (RF)\nPolynomial Regression\nStep 4/5/6 part A: Null distribution\nNo relationship, i.e. all variables independent, so joint can be factored into marginals. Let's just let all marginals be uniform across their respective min and max in the actual dataset. So the target variable Y, i.e. unmasked, follows a multivariate uniform distribution.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport urllib2\n\n%matplotlib inline\n\nsample_size = 10000\nk_fold = 10\nnp.random.seed(1)\nurl = ('https://raw.githubusercontent.com/Upward-Spiral-Science'\n '/data/master/syn-density/output.csv')\ndata = urllib2.urlopen(url)\ncsv = np.genfromtxt(data, delimiter=\",\")[1:] # don't want first row (labels)\n\nmins = [np.min(csv[:,i]) for i in xrange(5)]\nmaxs = [np.max(csv[:,i]) for i in xrange(5)]\ndomains = zip(mins, maxs)\nY_range = domains[3]\ndel domains[3]\n\n\nnull_X = np.array([[np.random.randint(*domains[i]) for i in xrange(4)] for k in xrange(sample_size)])\nnull_Y = np.array([[np.random.randint(*Y_range)] for k in xrange(sample_size)])\n\n# Sample sizes from each synthetic data distribution\nS = np.array((100, 120, 200, 320,\n 400, 800, 1000, 2500, 5000, 7500))\n\n# load our regressions\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.svm import LinearSVR\nfrom sklearn.neighbors import KNeighborsRegressor as KNN\nfrom sklearn.ensemble import RandomForestRegressor as RF\nfrom sklearn.preprocessing import PolynomialFeatures as PF\nfrom sklearn.pipeline import Pipeline\nfrom sklearn import cross_validation\nnames = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']\nregressions = [LinearRegression(),\n LinearSVR(C=1.0),\n KNN(n_neighbors=10, algorithm='auto'),\n RF(max_depth=5, max_features=1),\n Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]\nr2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))\n\n#iterate over sample sizes and regression algos\nfor idx1, N in enumerate(S):\n # Randomly sample from synthetic data with sample size N\n a = np.random.permutation(np.arange(sample_size))[:N]\n X = null_X[a]\n Y = null_Y[a]\n Y = np.ravel(Y)\n\n for idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)\n r2[idx1, idx2, :] = [scores.mean(), scores.std()]\n print(\"R^2 of %s: %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))", "Now graphing this data:", "plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])\nplt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])\nplt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, 
label=names[2])\nplt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])\nplt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])\nplt.xscale('log')\nplt.axhline(1, color='red', linestyle='--')\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.show()", "Step 4/5/6 part b: Alternate distribution\nHere we want a strong relationship between variables. Let's keep the x, y, z uniformly distributed across the sample space, but let # of synapses, s, be a deterministic function, f, of x, y, z. Let $s=f(x,y,z)=\\frac{x+y+z}{3}$. Now let's say our random variable $Y=(s/4)+\\epsilon$ where $\\epsilon$ is some Gaussian noise with variance equal to average(s/4) (just to make this synthetic data slightly more realistic).", "alt_X = np.apply_along_axis(lambda row : np.hstack((row[0:3], np.average(row[0:3]))), 1, null_X)\nstd_dev = np.sqrt(np.average(alt_X[:, 3]))\nalt_Y = alt_X[:, 3]/4 + np.random.normal(scale=std_dev, size=(sample_size,))\nr2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))\n#iterate over sample sizes and regression algos\nfor idx1, N in enumerate(S):\n # Randomly sample from synthetic data with sample size N\n a = np.random.permutation(np.arange(sample_size))[:N]\n X = alt_X[a]\n Y = alt_Y[a]\n Y = np.ravel(Y)\n\n for idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n r2[idx1, idx2, :] = [scores.mean(), scores.std()]\n print(\"R^2 of %s: %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))", "Now graphing it:", "plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])\nplt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])\nplt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, label=names[2])\nplt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])\nplt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])\nplt.xscale('log')\nplt.axhline(1, color='red', linestyle='--')\nplt.legend(loc='center left', bbox_to_anchor=(1, 0.5))\nplt.show()", "Step 7: Apply on actual data", "X = csv[:, [0, 1, 2, 4]]\nY = csv[:, 3]\nfor idx2, reg in enumerate(regressions):\n scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=k_fold)\n print(\"R^2 of %s: %0.2f (+/- %0.2f)\" % (names[idx2], scores.mean(), scores.std() * 2))", "Step 8: Reflect on results\nUpdate this part. The regression accuracy on real data based on the five tested regression algorithms is, at best, 85%, and, at worst, 41%. From the poor results of the linear regression and linear support vector, we see that the relationship between the variables (x,y,z,synapses) and the unmasked value is definitely not linear. Also, since the polynomial regression of degree 2 failed, we know that the relationship between those variables is not quadratic. We believe K-nearest neighbors failed to the high dimensionality of our data. Distances become less representative of the data with increasing dimensionaltiy. Next, we plan to investigate why random forests performed so well and review our assumptions for accuracy and completeness as well as adjust our regression algorithm parameters to better represent the true data as well as the adjusted assumptions." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
FordyceLab/AcqPack
notebooks/Experiment_Arjun20170606.ipynb
mit
[ "SETUP", "import time\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport os\nfrom config import utils as ut\n%matplotlib inline", "Autosipper", "# config directory must have \"__init__.py\" file\n# from the 'config' directory, import the following classes:\nfrom config import Motor, ASI_Controller, Autosipper\nautosipper = Autosipper(Motor('config/motor.yaml'), ASI_Controller('config/asi_controller.yaml'))\nautosipper.coord_frames\n\nfrom config import gui\ngui.stage_control(autosipper.XY, autosipper.Z)\n\nplatemap = ut.generate_position_table((8,12),(9,9),95.0)\nplatemap['x'] = -platemap['x'] - 1.8792\nplatemap['y'] = platemap['y'] + 32.45\nplatemap.loc[platemap.shape[0]] = [96, 99, 99, 99, 'W01', -8.2492, 1.1709, 68.3999]\nplatemap.loc[platemap.shape[0]] = [97, 99, 99, 99, 'W02', -36.9737, 1.1709, 68.3999]\n\nplatemap['contents'] = [\"\" for i in range(len(platemap['name']))]\nfor i in range(10):\n platemap['contents'].iloc[36+i] = \"conc\"+str(i)\n\nautosipper.coord_frames.hardware.position_table = platemap\nplatemap\n\nautosipper.go_to('hardware', 'name', 'B01')", "Manifold", "from config import Manifold\nfrom config.gui import manifold_control\n\nmanifold = Manifold('192.168.1.3', 'config/valvemaps/valvemap.csv', 512)\nmanifold.valvemap[manifold.valvemap.name>0]\n\nmanifold_control(manifold)\n\ndef valve_states():\n tmp = []\n for i in [2,0,14,8]:\n status = 'x'\n if manifold.read_valve(i):\n status = 'o'\n tmp.append([status, manifold.valvemap.name.iloc[i]])\n return pd.DataFrame(tmp)\n\ntmp = []\nfor i in range(16):\n status = 'x'\n if manifold.read_valve(i):\n status = 'o'\n name = manifold.valvemap.name.iloc[i]\n tmp.append([status, name])\npd.DataFrame(tmp).replace(np.nan, '')\n\nname = 'inlet_in'\nv = manifold.valvemap['valve'][manifold.valvemap.name==name]\n\nv=14\n\nmanifold.depressurize(v)\n\nmanifold.pressurize(v)\n\nmanifold.exit()", "Micromanager", "# !!!! 
Also must have MM folder on system PATH\n# mm_version = 'C:\\Micro-Manager-1.4'\n# cfg = 'C:\\Micro-Manager-1.4\\SetupNumber2_01282017.cfg'\nmm_version = 'C:\\Program Files\\Micro-Manager-2.0beta'\ncfg = 'C:\\Program Files\\Micro-Manager-2.0beta\\Setup2_20170413.cfg'\n\nimport sys\nsys.path.insert(0, mm_version) # make it so python can find MMCorePy\nimport MMCorePy\n\nfrom PIL import Image\n\ncore = MMCorePy.CMMCore()\ncore.loadSystemConfiguration(cfg)\ncore.setProperty(\"Spectra\", \"White_Enable\", \"1\")\ncore.waitForDevice(\"Spectra\")\n\ncore.setProperty(\"Cam Andor_Zyla4.2\", \"Sensitivity/DynamicRange\", \"16-bit (low noise & high well capacity)\") # NEED TO SET CAMERA TO 16 BIT (ceiling 12 BIT = 4096)", "Preset: 1_PBP \nConfigGroup,Channel,1_PBP,TIFilterBlock1,Label,1-PBP\nPreset: 2_BF \nConfigGroup,Channel,2_BF,TIFilterBlock1,Label,2-BF\nPreset: 3_DAPI \nConfigGroup,Channel,3_DAPI,TIFilterBlock1,Label,3-DAPI\nPreset: 4_eGFP \nConfigGroup,Channel,4_eGFP,TIFilterBlock1,Label,4-GFP\nPreset: 5_Cy5 \nConfigGroup,Channel,5_Cy5,TIFilterBlock1,Label,5-Cy5\nPreset: 6_AttoPhos \nConfigGroup,Channel,6_AttoPhos,TIFilterBlock1,Label,6-AttoPhos\nACQUISITION", "core.setProperty(core.getCameraDevice(), \"Exposure\", 125)\ncore.setConfig('Channel','4_eGFP')\ncore.setProperty(core.getCameraDevice(), \"Binning\", \"3x3\")\n\nposition_list = ut.load_mm_positionlist(\"Z:/Data/Setup 2/Arjun/170609_FlippedMITOMI/170609_mwm.pos\")\nposition_list\n\n# ONE ACQUISITION / SCAN\ndef scan(channel, exposure, washtype, plate_n):\n core.setConfig('Channel', channel)\n core.setProperty(core.getCameraDevice(), \"Exposure\", exposure)\n time.sleep(.2)\n \n timestamp = time.strftime(\"%Y%m%d-%H%M%S\", time.localtime())\n rootdirectory = \"Z:/Data/Setup 2/Arjun/170609_FlippedMITOMI/\"\n solution = autosipper.coord_frames.hardware.position_table.contents.iloc[plate_n]\n scandirectory = '{}_{}_{}_{}_{}'.format(timestamp, solution, washtype, channel, exposure)\n os.makedirs(rootdirectory+scandirectory)\n \n for i in xrange(len(position_list)):\n si = str(i)\n x,y = position_list[['x','y']].iloc[i]\n core.setXYPosition(x,y)\n core.waitForDevice(core.getXYStageDevice())\n \n core.snapImage()\n img = core.getImage()\n image = Image.fromarray(img)\n \n timestamp = time.strftime(\"%Y%m%d-%H%M%S\", time.localtime())\n positionname = position_list['name'].iloc[i]\n image.save('{}/{}_{}.tif'.format(rootdirectory+scandirectory, timestamp, positionname))\n\n x,y = position_list[['x','y']].iloc[0]\n core.setXYPosition(x,y)\n core.waitForDevice(core.getXYStageDevice())\n\ndef get_valve(name):\n return ut.lookup(manifold.valvemap,'name',name,'valve',0)[0]\n\nscan('4_eGFP', 125, 'pre')\n\n# PUT PINS IN ALL INPUTS\n# in0 WASTE LINE\n# in1 WASH LINE, 4.5 PSI\n# in2 PEEK LINE, 4.5 PSI\n#----------------------------------\n\n# initialize valve states\n## all inputs close\n## sandwhich\n\n# prime inlet tree (wash)\nmanifold.depressurize(get_valve('bBSA_2')) ## wash open\nmanifold.depressurize(get_valve('Waste_2')) ## waste open\ntime.sleep(60*0.5) ## wait 0.5 min\nmanifold.pressurize(get_valve('Waste_2')) ## waste close\n\n# backflush tubing (wash)\nautosipper.go_to('hardware', 'name', 'W02', zh_travel=40) ## W02 move\nmanifold.depressurize(get_valve('NA_2')) ## inlet open\ntime.sleep(60*11) ## wait 11 min\nmanifold.pressurize(get_valve('NA_2')) ## inlet close\n\n# fill chip (wash)\n## chip_in open\n## chip_out open\n## wait 10 min\n## chip_out close\n## wait 10 min\n\n# prime tubing (1st input)\nmanifold.pressurize(get_valve('Out_2')) # 
chip_out close\nmanifold.pressurize(get_valve('In_2')) # chip_in close\n\nautosipper.go_to('hardware', 'name', 'W01', zh_travel=40)\nautosipper.go_to('hardware', 'n', 36, zh_travel=40)\nmanifold.depressurize(get_valve('NA_2')) # inlet open\nmanifold.depressurize(get_valve('Waste_2')) # waste open\ntime.sleep(60*11) # filling inlet, 11 min...\n\nmanifold.pressurize(get_valve('Waste_2')) # waste close", "first attempt above: forgot to close wash after backflush", "# prime inlet tree (wash)\nmanifold.depressurize(get_valve('bBSA_2')) ## wash open\nmanifold.depressurize(get_valve('Waste_2')) ## waste open\ntime.sleep(60*0.5) ## wait 0.5 min\nmanifold.pressurize(get_valve('Waste_2')) ## waste close\n\n# backflush tubing (wash)\nautosipper.go_to('hardware', 'name', 'W02', zh_travel=40) ## W02 move\nmanifold.depressurize(get_valve('NA_2')) ## inlet open\ntime.sleep(60*15) ## wait 15 min\nmanifold.pressurize(get_valve('NA_2')) ## inlet close\nmanifold.pressurize(get_valve('bBSA_2')) ## wash close\n\n# prime tubing (1st input)\nmanifold.pressurize(get_valve('Out_2')) # chip_out close\nmanifold.pressurize(get_valve('In_2')) # chip_in close\n\nautosipper.go_to('hardware', 'name', 'W01', zh_travel=40)\nautosipper.go_to('hardware', 'n', 36, zh_travel=40)\nmanifold.depressurize(get_valve('NA_2')) # inlet open\nmanifold.depressurize(get_valve('Waste_2')) # waste open\ntime.sleep(60*15) # filling inlet, 15 min...\n\nmanifold.pressurize(get_valve('Waste_2')) # waste close\n\n# 16,Waste_2\n# 17,bBSA_2\n# 18,NA_2\n# 19,antibody_2\n# 20,Extra1_2\n# 21,Extra2_2\n# 22,Protein_2\n# 23,Wash_2\n\nexposures = [1250,1250,1250,1250,1000,600,300,180,70,30]\nfor i,exposure in enumerate(exposures):\n plate_n = 36+i\n \n # flow on chip\n manifold.depressurize(get_valve('NA_2')) # inlet open\n manifold.depressurize(get_valve('In_2')) # chip_in open\n manifold.depressurize(get_valve('Out_2')) # chip_out open\n time.sleep(60*10) # filling chip, 10 min...\n \n # CONCURRENTLY:\n incubate_time = 60*30 # 30 min\n \n # a) incubate DNA with protein\n manifold.pressurize(get_valve('Sandwich1_2')) # sandwhiches close\n manifold.pressurize(get_valve('Sandwich2_2')) # \"\n time.sleep(1) # pause\n manifold.depressurize(get_valve('Button1_2')) # buttons open\n manifold.depressurize(get_valve('Button2_2')) # \"\n incubate_start = time.time()\n \n # b1) prime inlet tube with next INPUT\n manifold.pressurize(get_valve('Out_2')) # chip_out close\n manifold.pressurize(get_valve('In_2')) # chip_in close\n\n manifold.pressurize(get_valve('NA_2')) # inlet close\n autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)\n autosipper.go_to('hardware', 'n', plate_n+1, zh_travel=40)\n manifold.depressurize(get_valve('NA_2')) # inlet open\n manifold.depressurize(get_valve('Waste_2')) # waste open\n time.sleep(60*15) # filling inlet, 15 min...\n \n manifold.pressurize(get_valve('NA_2')) # inlet close\n\n # b2) prime inlet tree with wash\n manifold.depressurize(get_valve('bBSA_2')) # wash open\n manifold.pressurize(get_valve('Waste_2')) # waste close\n \n for v in [19,20,21,22,23]:\n manifold.depressurize(v)\n time.sleep(.2)\n manifold.pressurize(v)\n \n manifold.depressurize(get_valve('Waste_2')) # waste open\n time.sleep(60*1)\n manifold.pressurize(get_valve('Waste_2')) # waste close\n\n remaining_time = incubate_time - (time.time() - incubate_start)\n time.sleep(remaining_time) \n \n # prewash Cy5\n scan('5_Cy5', exposure, 'pre', plate_n)\n \n # wash\n manifold.pressurize(get_valve('Button1_2')) # buttons close\n 
manifold.pressurize(get_valve('Button2_2')) # \"\n time.sleep(1) # pause\n manifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open\n manifold.depressurize(get_valve('Sandwich2_2')) # \"\n \n manifold.depressurize(get_valve('In_2')) # chip_in open\n manifold.depressurize(get_valve('Out_2')) # chip_out open\n time.sleep(60*10) # washing chip, 10 min...\n \n manifold.pressurize(get_valve('Out_2')) # chip_out close\n manifold.pressurize(get_valve('In_2')) # chip_in close\n manifold.pressurize(get_valve('bBSA_2')) # wash close\n\n # postwash eGFP and postwash Cy5\n scan('4_eGFP', 125, 'post', plate_n)\n scan('5_Cy5', 1250, 'post', plate_n)\n\nautosipper.go_to('hardware', 'n', 47, zh_travel=40) \n\n# prewash Cy5\nscan('5_Cy5', exposure, 'pre', plate_n)\n\n# wash\nmanifold.pressurize(get_valve('Button1_2')) # buttons close\nmanifold.pressurize(get_valve('Button2_2')) # \"\ntime.sleep(1) # pause\nmanifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open\nmanifold.depressurize(get_valve('Sandwich2_2')) # \"\n\nmanifold.depressurize(get_valve('In_2')) # chip_in open\nmanifold.depressurize(get_valve('Out_2')) # chip_out open\ntime.sleep(60*10) # washing chip, 10 min...\n\nmanifold.pressurize(get_valve('Out_2')) # chip_out close\nmanifold.pressurize(get_valve('In_2')) # chip_in close\nmanifold.pressurize(get_valve('bBSA_2')) # wash close\n\n# postwash eGFP and postwash Cy5\nscan('4_eGFP', 125, 'post', plate_n)\nscan('5_Cy5', 1250, 'post', plate_n)\n\n\nexposures = [1250,1250,1250,1250,1000,600,300,180,70,30]\nfor i, exposure in enumerate(exposures):\n if i==0:\n continue\n \n plate_n = 36+i\n \n # flow on chip\n manifold.depressurize(get_valve('NA_2')) # inlet open\n manifold.depressurize(get_valve('In_2')) # chip_in open\n manifold.depressurize(get_valve('Out_2')) # chip_out open\n time.sleep(60*10) # filling chip, 10 min...\n \n # CONCURRENTLY:\n incubate_time = 60*30 # 30 min\n \n # a) incubate DNA with protein\n manifold.pressurize(get_valve('Sandwich1_2')) # sandwhiches close\n manifold.pressurize(get_valve('Sandwich2_2')) # \"\n time.sleep(1) # pause\n manifold.depressurize(get_valve('Button1_2')) # buttons open\n manifold.depressurize(get_valve('Button2_2')) # \"\n incubate_start = time.time()\n \n # b1) prime inlet tube with next INPUT\n manifold.pressurize(get_valve('Out_2')) # chip_out close\n manifold.pressurize(get_valve('In_2')) # chip_in close\n\n manifold.pressurize(get_valve('NA_2')) # inlet close\n autosipper.go_to('hardware', 'name', 'W01', zh_travel=40)\n autosipper.go_to('hardware', 'n', plate_n+1, zh_travel=40)\n manifold.depressurize(get_valve('NA_2')) # inlet open\n manifold.depressurize(get_valve('Waste_2')) # waste open\n time.sleep(60*15) # filling inlet, 15 min...\n \n manifold.pressurize(get_valve('NA_2')) # inlet close\n\n # b2) prime inlet tree with wash\n manifold.depressurize(get_valve('bBSA_2')) # wash open\n manifold.pressurize(get_valve('Waste_2')) # waste close\n \n for v in [19,20,21,22,23]:\n manifold.depressurize(v)\n time.sleep(.2)\n manifold.pressurize(v)\n \n manifold.depressurize(get_valve('Waste_2')) # waste open\n time.sleep(60*1)\n manifold.pressurize(get_valve('Waste_2')) # waste close\n\n remaining_time = incubate_time - (time.time() - incubate_start)\n time.sleep(remaining_time) \n \n # prewash Cy5\n scan('5_Cy5', exposure, 'pre', plate_n)\n \n # wash\n manifold.pressurize(get_valve('Button1_2')) # buttons close\n manifold.pressurize(get_valve('Button2_2')) # \"\n time.sleep(1) # pause\n 
manifold.depressurize(get_valve('Sandwich1_2')) # sandwhiches open\n manifold.depressurize(get_valve('Sandwich2_2')) # \"\n \n manifold.depressurize(get_valve('In_2')) # chip_in open\n manifold.depressurize(get_valve('Out_2')) # chip_out open\n time.sleep(60*10) # washing chip, 10 min...\n \n manifold.pressurize(get_valve('Out_2')) # chip_out close\n manifold.pressurize(get_valve('In_2')) # chip_in close\n manifold.pressurize(get_valve('bBSA_2')) # wash close\n\n # postwash eGFP and postwash Cy5\n scan('4_eGFP', 125, 'post', plate_n)\n scan('5_Cy5', 1250, 'post', plate_n)", "EXIT", "autosipper.exit()\nmanifold.exit()\ncore.unloadAllDevices()\ncore.reset()\nprint 'closed'" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
smharper/openmc
examples/jupyter/mg-mode-part-i.ipynb
mit
[ "This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D C5G7 benchmark model using the openmc.MGXSLibrary class to create the supporting data library on the fly.\nGenerate MGXS Library", "import os\n\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as colors\nimport numpy as np\n\nimport openmc\n\n%matplotlib inline", "We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark.\nThis notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy).\nNote: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section.", "# Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant)\ngroups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8))\n\nuo2_xsdata = openmc.XSdata('uo2', groups)\nuo2_xsdata.order = 0\n\n# When setting the data let the object know you are setting the data for a temperature of 294K.\nuo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,\n 3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.)\n\nuo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2,\n 3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.)\nuo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2,\n 1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.)\n\nuo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2,\n 4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.)\n\nuo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7,\n 0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.)", "We will now add the scattering matrix data. \nNote: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing Groups, Scattering Order] to best enable other scattering representations. 
We will follow the more familiar approach in this notebook, and then use numpy's numpy.rollaxis function to change the ordering to what we need (scattering order on the inner dimension).", "# The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns\n# (i.e., below the diagonal is up-scattering).\nscatter_matrix = \\\n [[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2],\n [0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]]\nscatter_matrix = np.array(scatter_matrix)\nscatter_matrix = np.rollaxis(scatter_matrix, 0, 3)\nuo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.)", "Now that the UO2 data has been created, we can move on to the remaining materials using the same process.\nHowever, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem.\nFor now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk.", "# Initialize the library\nmg_cross_sections_file = openmc.MGXSLibrary(groups)\n\n# Add the UO2 data to it\nmg_cross_sections_file.add_xsdata(uo2_xsdata)\n\n# And write to disk\nmg_cross_sections_file.export_to_hdf5('mgxs.h5')", "Generate 2-D C5G7 Problem Input Files\nTo build the actual 2-D model, we will first begin by creating the materials.xml file.\nFirst we need to define materials that will be used in the problem. In other notebooks, either nuclides or elements were added to materials at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use the Material.add_macroscopic method to specify a macroscopic object. Unlike for nuclides and elements, we do not need provide information on atom/weight percents as no number densities are needed.\nWhen assigning macroscopic objects to a material, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). 
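As a quick illustration of that density-scaling idea (hypothetical, not part of the C5G7 model built below), a water material using the same macroscopic data but at 99% of nominal density could be written as:

```python
# Hypothetical perturbation: reuse the 'water' macroscopic data at 99% density
water_perturbed = openmc.Material(name='water_perturbed')
water_perturbed.set_density('macro', 0.99)
water_perturbed.add_macroscopic('water')
```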
The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when a macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required.\nAside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks.", "# For every cross section data set in the library, assign an openmc.Macroscopic object to a material\nmaterials = {}\nfor xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']:\n materials[xs] = openmc.Material(name=xs)\n materials[xs].set_density('macro', 1.)\n materials[xs].add_macroscopic(xs)", "Now we can go ahead and produce a materials.xml file for use by OpenMC", "# Instantiate a Materials collection, register all Materials, and export to XML\nmaterials_file = openmc.Materials(materials.values())\n\n# Set the location of the cross sections file to our pre-written set\nmaterials_file.cross_sections = 'c5g7.h5'\n\nmaterials_file.export_to_xml()", "Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file.\nWe will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.", "# Create the surface used for each pin\npin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf')\n\n# Create the cells which will be used to represent each pin type.\ncells = {}\nuniverses = {}\nfor material in materials.values():\n # Create the cell for the material inside the cladding\n cells[material.name] = openmc.Cell(name=material.name)\n # Assign the half-spaces to the cell\n cells[material.name].region = -pin_surf\n # Register the material with this cell\n cells[material.name].fill = material\n \n # Repeat the above for the material outside the cladding (i.e., the moderator)\n cell_name = material.name + '_moderator'\n cells[cell_name] = openmc.Cell(name=cell_name)\n cells[cell_name].region = +pin_surf\n cells[cell_name].fill = materials['water']\n \n # Finally add the two cells we just made to a Universe object\n universes[material.name] = openmc.Universe(name=material.name)\n universes[material.name].add_cells([cells[material.name], cells[cell_name]])", "The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types", "lattices = {}\n\n# Instantiate the UO2 Lattice\nlattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly')\nlattices['UO2 Assembly'].dimension = [17, 17]\nlattices['UO2 Assembly'].lower_left = [-10.71, -10.71]\nlattices['UO2 Assembly'].pitch = [1.26, 1.26]\nu = universes['uo2']\ng = universes['guide_tube']\nf = universes['fiss_chamber']\nlattices['UO2 Assembly'].universes = \\\n [[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],\n [u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, g, u, u, u, u, u, u, 
u, u, u, g, u, u, u],\n [u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],\n [u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]]\n \n# Create a containing cell and universe\ncells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly')\ncells['UO2 Assembly'].fill = lattices['UO2 Assembly']\nuniverses['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly')\nuniverses['UO2 Assembly'].add_cell(cells['UO2 Assembly'])\n\n# Instantiate the MOX Lattice\nlattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly')\nlattices['MOX Assembly'].dimension = [17, 17]\nlattices['MOX Assembly'].lower_left = [-10.71, -10.71]\nlattices['MOX Assembly'].pitch = [1.26, 1.26]\nm = universes['mox43']\nn = universes['mox7']\no = universes['mox87']\ng = universes['guide_tube']\nf = universes['fiss_chamber']\nlattices['MOX Assembly'].universes = \\\n [[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m],\n [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],\n [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],\n [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],\n [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],\n [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],\n [m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],\n [m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],\n [m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],\n [m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],\n [m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],\n [m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]]\n \n# Create a containing cell and universe\ncells['MOX Assembly'] = openmc.Cell(name='MOX Assembly')\ncells['MOX Assembly'].fill = lattices['MOX Assembly']\nuniverses['MOX Assembly'] = openmc.Universe(name='MOX Assembly')\nuniverses['MOX Assembly'].add_cell(cells['MOX Assembly'])\n \n# Instantiate the reflector Lattice\nlattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly')\nlattices['Reflector Assembly'].dimension = [1,1]\nlattices['Reflector Assembly'].lower_left = [-10.71, -10.71]\nlattices['Reflector Assembly'].pitch = [21.42, 21.42]\nlattices['Reflector Assembly'].universes = [[universes['water']]]\n\n# Create a containing cell and universe\ncells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly')\ncells['Reflector Assembly'].fill = lattices['Reflector Assembly']\nuniverses['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly')\nuniverses['Reflector Assembly'].add_cell(cells['Reflector Assembly'])", "Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined.\nAfter that we can create the final cell to contain the entire core.", "lattices['Core'] = openmc.RectLattice(name='3x3 core lattice')\nlattices['Core'].dimension= [3, 3]\nlattices['Core'].lower_left = [-32.13, -32.13]\nlattices['Core'].pitch = [21.42, 21.42]\nr = universes['Reflector Assembly']\nu = universes['UO2 Assembly']\nm = universes['MOX Assembly']\nlattices['Core'].universes = [[u, m, r],\n [m, u, r],\n [r, r, r]]\n\n# Create boundary planes to surround the geometry\nmin_x = openmc.XPlane(x0=-32.13, boundary_type='reflective')\nmax_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum')\nmin_y = openmc.YPlane(y0=-32.13, 
boundary_type='vacuum')\nmax_y = openmc.YPlane(y0=+32.13, boundary_type='reflective')\n\n# Create root Cell\nroot_cell = openmc.Cell(name='root cell')\nroot_cell.fill = lattices['Core']\n\n# Add boundary planes\nroot_cell.region = +min_x & -max_x & +min_y & -max_y\n\n# Create root Universe\nroot_universe = openmc.Universe(name='root universe', universe_id=0)\nroot_universe.add_cell(root_cell)", "Before we commit to the geometry, we should view it using the Python API's plotting capability", "root_universe.plot(origin=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500),\n color_by='material')", "OK, it looks pretty good, let's go ahead and write the file", "# Create Geometry and set root Universe\ngeometry = openmc.Geometry(root_universe)\n\n# Export to \"geometry.xml\"\ngeometry.export_to_xml()", "We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.", "tallies_file = openmc.Tallies()\n\n# Instantiate a tally Mesh\nmesh = openmc.RegularMesh()\nmesh.dimension = [17 * 2, 17 * 2]\nmesh.lower_left = [-32.13, -10.71]\nmesh.upper_right = [+10.71, +32.13]\n\n# Instantiate tally Filter\nmesh_filter = openmc.MeshFilter(mesh)\n\n# Instantiate the Tally\ntally = openmc.Tally(name='mesh tally')\ntally.filters = [mesh_filter]\ntally.scores = ['fission']\n\n# Add tally to collection\ntallies_file.append(tally)\n\n# Export all tallies to a \"tallies.xml\" file\ntallies_file.export_to_xml()", "With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain.\nThis will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!", "# OpenMC simulation parameters\nbatches = 150\ninactive = 50\nparticles = 5000\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\n\n# Tell OpenMC this is a multi-group problem\nsettings_file.energy_mode = 'multi-group'\n\n# Set the verbosity to 6 so we dont see output for every batch\nsettings_file.verbosity = 6\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.Source(space=uniform_dist)\n\n# Tell OpenMC we want to run in eigenvalue mode\nsettings_file.run_mode = 'eigenvalue'\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()", "Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.", "# Run OpenMC\nopenmc.run()", "Results Visualization\nNow that we have run the simulation, let's look at the fission rate and flux tallies that we tallied.", "# Load the last statepoint file and keff value\nsp = openmc.StatePoint('statepoint.' 
+ str(batches) + '.h5')\n\n# Get the OpenMC pin power tally data\nmesh_tally = sp.get_tally(name='mesh tally')\nfission_rates = mesh_tally.get_values(scores=['fission'])\n\n# Reshape array to 2D for plotting\nfission_rates.shape = mesh.dimension\n\n# Normalize to the average pin power\nfission_rates /= np.mean(fission_rates[fission_rates > 0.])\n\n# Force zeros to be NaNs so their values are not included when matplotlib calculates\n# the color scale\nfission_rates[fission_rates == 0.] = np.nan\n\n# Plot the pin powers and the fluxes\nplt.figure()\nplt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower')\nplt.colorbar()\nplt.title('Pin Powers')\nplt.show()\n", "There we have it! We have just successfully run the C5G7 benchmark model!" ]
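As a possible follow-up (a sketch, not part of the original notebook), the same statepoint object `sp` and tally `mesh_tally` defined above also carry the eigenvalue and the tally statistics; note that the name of the combined-eigenvalue attribute depends on the OpenMC version (`keff` in recent releases, `k_combined` in older ones):

```python
# Sketch: report the eigenvalue and the fission-rate tally precision from the statepoint
keff = getattr(sp, 'keff', None) or getattr(sp, 'k_combined', None)  # attribute name varies by version
print('Combined k-effective: {}'.format(keff))

# Relative error of the fission-rate tally, ignoring empty mesh bins
rel_err = mesh_tally.get_values(scores=['fission'], value='rel_err')
finite = rel_err[np.isfinite(rel_err)]
print('Max relative error on fission rates: {:.4f}'.format(finite.max()))
```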
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
nathanielng/machine-learning
perceptron/logistic-regression.ipynb
apache-2.0
[ "The Linear Model II\n<hr>\n\n\nlinear classification | classification error | perceptron learning algorithm, pocket algorithm, ...\nlinear regression | squared error | pseudo-inverse, ...\nthird linear model (logistic regression) | cross-entropy error | gradient descent, ...\nnonlinear transforms \n\n<hr>\n\n1. The Logistic Regression Linear Model\n1.1 Hypothesis Functions\nIn the case of linear models, inputs are combined linearly using weights, and summed into a signal, $s$:\n$$s = \\sum\\limits_{i=0}^d w_i x_i$$\nNext, the signal passes through a function, given by:\n\nLinear classification: $h\\left(\\mathbf{x}\\right) = \\text{sign}\\left(s\\right)$\nLinear regression: $h\\left(\\mathbf{x}\\right) = s$\nLogistic regression: $h\\left(\\mathbf{x}\\right) = \\theta\\left(s\\right)$\n\nFor logistic regression, we use a \"soft threshold\", by choosing a logistic function, $\\theta$, that has a sigmoidal shape. The sigmoidal function can take on various forms, such as the following:\n$$\\theta\\left(s\\right) = \\frac{e^s}{1+e^s}$$\nThis model implements a probability that has a genuine probability interpretation.\n1.2 Likelihood Measure and Probabilistic Connotations\nThe likelihood of a dataset, $\\mathcal{D} = \\left(\\mathbf{x_1},y_1\\right), \\dots, \\left(\\mathbf{x_N},y_N\\right)$, that we wish to maximize is given by:\n$$\\prod\\limits_{n=1}^N P\\left(y_n | \\mathbf{x_n}\\right) = \\prod\\limits_{n=1}^N \\theta\\left(y_n \\mathbf{w^T x_n}\\right)$$\nIt is possible to derive an error measure (that would maximise the above likelihood measure), which has a probabilistic connotation, and is called the in-sample \"cross-entropy\" error. It is based on assuming the hypothesis (of the logistic regression function) as the target function:\n$$E_{in}\\left(\\mathbf{w}\\right) = \\frac{1}{N}\\sum\\limits_{n=1}^N \\ln\\left[1 + \\exp\\left(-y_n \\mathbf{w^T x_n}\\right)\\right]$$\n$$E_{in}\\left(\\mathbf{w}\\right) = \\frac{1}{N}\\sum\\limits_{n=1}^N e\\left[ h\\left(\\mathbf{x_n}\\right), y_n \\right]$$\nWhile the above does not have a closed form solution, it is a convex function and therefore we can find the weights corresponding to the minimum of the above error measure using various techniques. Such techniques include gradient descent (and its variations, such as stochastic gradient descent and batch gradient descent) and there are others which make use of second order derivatives (such as the conjugate gradient method) or Hessians.\n1.3 Libraries Used", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.optimize import minimize\nfrom numpy.random import permutation\nfrom sympy import var, diff, exp, latex, factor, log, simplify\nfrom IPython.display import display, Math, Latex\n%matplotlib inline", "1.4 Gradient Descent for Logistic Regression\n1.4.1 Gradient of the Cost Function - Derivation (using Sympy)\nThe Python package, sympy, can be used to obtain the form for the gradient of the cost function in logistic regression:", "var('x y w')\nlogistic_cost = log(1 + exp(-y*w*x))\ndisplay(Math(latex(logistic_cost)))\n\nlogistic_grad = logistic_cost.diff(w)\ndisplay(Math(latex(logistic_grad)))\ndisplay(Math(latex(simplify(logistic_grad))))", "1.4.2 Gradient Descent Algorithm\nThe gradient descent algorithm is a means to find the minimum of a function,\nstarting from some initial weight, $\\mathbf{w}()$.\nThe weights are adjusted at each iteration, by moving them in the direction of the steepest descent ($\\nabla E_{in}$). 
A learning rate, $\\eta$, is used to scale the gradient, $\\nabla E_{in}$.\n$$\\mathbf{w}(t+1) = \\mathbf{w}(t) - \\eta\\nabla E_{in}$$\nFor the case of logistic regression, the gradient of the error measure with respect to the weights, is calculated as:\n$$\\nabla E_{in}\\left(\\mathbf{w}\\right) = -\\frac{1}{N}\\sum\\limits_{n=1}^N \\frac{y_n\\mathbf{x_N}}{1 + \\exp\\left(y_n \\mathbf{w^T}(t)\\mathbf{x_n}\\right)}$$\n2. Linear Regression Error with Noisy Targets\n2.1 Effect of Sample Size on In-Sample Errors\nConsider a noisy target, $y=\\mathbf{w^{*T}x} + \\epsilon$ where $\\epsilon$ is a noise term with zero mean and variance, $\\sigma^2$\nThe in-sample error on a training set, $\\mathcal{D}$,\n$$\\mathbb{E}\\mathcal{D}\\left[E{in}\\left(\\mathbf{w_{lin}}\\right)\\right] = \\sigma^2\\left(1 - \\frac{d+1}{N}\\right)$$", "def in_sample_err(N, sigma = 0.1, d = 8):\n return (sigma**2)*(1 - (d+1)/N)\n\nN_arr = [10, 25, 100, 500, 1000]\nerr = [ in_sample_err(N) for N in N_arr ]\nfor i in range(len(N_arr)):\n print(\"N = {:4}, E_in = {}\".format(N_arr[i],err[i]))", "Here, we can see that, for a noisy target, as the number of examples, $N$, increases, the in-sample error also increases.", "result = minimize(lambda x: (0.008-in_sample_err(x))**2, x0=[20.0], tol=1e-11)\nif result.success is True:\n N = result.x[0]\n print(\"N = {}\".format(N))\n print(\"err({}) = {}\".format(int(N),in_sample_err(int(N))))\n print(\"err({}) = {}\".format(int(N+1),in_sample_err(int(N+1))))", "If we desire an in-sample error of not more than 0.008, then the maximum number of examples we should have is 44.\n3. Non-linear Transforms\n3.1 Background\nConsider the linear transform $z_i = \\phi_i\\left(\\mathbf{x}\\right)$ or $\\mathbf{z} = \\Phi\\left(\\mathbf{x}\\right)$, with the following mapping:\n$$\\mathbf{x} = \\left(x_0, x_1, \\dots, x_d\\right) \\rightarrow \\mathbf{z} = \\left(z_0, z_1, \\dots, z_{\\tilde d}\\right)$$\nThe final hypothesis, $\\mathcal{X}$ space is:\n$$g\\left(\\mathbf{x}\\right) = \\mathbf{\\tilde w^T} \\Phi\\left(\\mathbf{x}\\right)$$\n$$g\\left(\\mathbf{x}\\right) = \\left(w_0, w_1, w_2\\right) \\left(\\begin{array}{c}1\\x_1^2\\x_2^2\\end{array}\\right) = w_0 + w_1 x_1^2 + w_2 x_2^2$$\nThe non-linear transforms are implemented in the subroutine add_nonlinear_features() below. 
The contour plots corresponding to the non-linear transforms are implemented in plot_data_nonlinear().", "def add_nonlinear_features(X):\n N = X.shape[0]\n X = np.hstack((X,np.zeros((N,3))))\n X[:,3] = X[:,1]*X[:,2]\n X[:,4] = X[:,1]**2\n X[:,5] = X[:,2]**2\n return(X)\n\ndef plot_data_nonlinear(fig,plot_id,w_arr,w_colors,titles):\n p = 2.0\n x1 = np.linspace(-p,p,100)\n x2 = np.linspace(-p,p,100)\n X1,X2 = np.meshgrid(x1,x2)\n X1X2 = X1*X2\n X1_sq= X1**2\n X2_sq= X2**2\n\n for i,w in enumerate(w_arr):\n Y = w[0] + w[1]*X1 + w[2]*X2 + w[3]*X1X2 + \\\n w[4]*X1_sq + w[5]*X2_sq\n ax = fig.add_subplot(plot_id[i])\n cp0 = ax.contour(X1,X2,Y,1,linewidth=4, levels=[0.0],\n colors=w_colors[i])\n ax.clabel(cp0, inline=True, fontsize=14)\n #cp1 = ax.contour(X1,X2,Y,N=1,linewidth=4, levels=[-1.0, 1.0],\n # linestyles='dashed', colors=w_colors[i], alpha=0.3)\n cp1 = ax.contourf(X1,X2,Y,1,linewidth=4, linestyles='dashed', alpha=0.8)\n ax.clabel(cp1, inline=True, fontsize=14)\n\n plt.colorbar(cp1)\n ax.set_title(titles[i])\n #ax.set_axis_off() #ax.axis('off')\n ax.axes.xaxis.set_ticks([])\n ax.axes.yaxis.set_ticks([])", "Here we wish to consider the effects of the sign of the weights $\\tilde w_1, \\tilde w_2$ on the decision boundary. For simplicity, we choose the weights from [-1, 0, 1], as similar shapes would be obtained if the set of weights were scaled to something like [-2, 0, 2].", "w1 = np.array([ 1, 0, 0, 0, 0.0, 1.0])\nw2 = np.array([ 1, 0, 0, 0, 1.0, 0.0])\nw3 = np.array([ 1, 0, 0, 0, 1.0, 1.0])\nw4 = np.array([ 1, 0, 0, 0,-1.0, 1.0])\nw5 = np.array([ 1, 0, 0, 0, 1.0,-1.0])\nw_arr = [w1,w2,w3,w4,w5]\nw_colors = ['red','orange','green','blue','black']\ntitles = ['(a) $w_1$ = 0, $w_2$ > 0',\n '(b) $w_1$ > 0, $w_2$ = 0',\n '(c) $w_1$ > 0, $w_2$ > 0',\n '(d) $w_1$ < 0, $w_2$ > 0',\n '(e) $w_1$ > 0, $w_2$ < 0']\nplot_id_arr = [ 231, 232, 233, 234, 235 ]\nfig = plt.figure(figsize=(12,7))\nplot_data_nonlinear(fig,plot_id_arr,w_arr,w_colors,titles)", "In the second last example, $\\tilde w_1 <0, \\tilde w_2 > 0$, (with $x_0 = 1$), we have:\n$$\\mathbf{x} = \\left(1, x_1, x_2\\right) \\rightarrow \\mathbf{z} = \\left(1, x_1^2, x_2^2\\right)$$\n$$g\\left(\\mathbf{x}\\right) = 1 - x_1^2 + x_2^2$$\n4. Gradient Descent\n4.1 Gradient Descent Example Using Sympy\nThis example provides a demonstration of how the package sympy can be used to find the gradient of an arbitrary function, and perform gradient descent to the minimum of the function.\nOur arbitrary function in this case is:\n$$E\\left(u,v\\right) = \\left(ue^v -2ve^{-u}\\right)^2$$", "var('u v')\nexpr = (u*exp(v) -2*v*exp(-u))**2\ndisplay(Math(latex(expr)))", "The partial derivative of the function, $E$, with respect to $u$ is:", "derivative_u = expr.diff(u)\ndisplay(Math(latex(derivative_u)))\ndisplay(Math(latex(factor(derivative_u))))", "The partial derivative of the function, $E$, with respect to $v$ is:", "derivative_v = expr.diff(v)\ndisplay(Math(latex(derivative_v)))\ndisplay(Math(latex(factor(derivative_v))))", "Next, the functions to implement the gradient descent are implemented as follows. In the first case, err_gradient(), the derivatives are specified in the code. 
In the second case, err_gradient2(), the derivatives are calculated using sympy + evalf:", "def err(uv):\n u = uv[0]\n v = uv[1]\n ev = np.exp(v)\n e_u= np.exp(-u)\n return (u*ev - 2.0*v*e_u)**2\n\ndef err_gradient(uv):\n u = uv[0]\n v = uv[1]\n ev = np.exp(v)\n e_u= np.exp(-u)\n return np.array([ 2.0*(ev + 2.0*v*e_u)*(u*ev - 2.0*v*e_u),\n 2.0*(u*ev - 2.0*e_u)*(u*ev - 2.0*v*e_u) ])\ndef err_gradient2(uv):\n du = derivative_u.subs(u,uv[0]).subs(v,uv[1]).evalf()\n dv = derivative_v.subs(u,uv[0]).subs(v,uv[1]).evalf()\n return np.array([ du, dv ], dtype=float)", "To follow the gradient to the function minimum, we can either use $\\nabla E$ in the gradient descent approach, or we can alternate between the individual derivatives, $\\frac{\\partial E}{\\partial u}$ and $\\frac{\\partial E}{\\partial v}$ in the coordinate descent approach.", "def gradient_descent(x0, err, d_err, eta=0.1):\n x = x0\n for i in range(20):\n e = err(x)\n de = d_err(x)\n print(\"%2d: x = (%8.5f, %8.5f) | err' = (%8.4f, %8.4f) | err = %.3e\" %\n (i,x[0],x[1],de[0],de[1],e))\n if e < 1e-14:\n break\n x = x - eta*de\n\ndef coordinate_descent(x0, err, d_err, eta=0.1):\n x = x0\n for i in range(15):\n # Step 1: Move along the u-coordinate\n e = err(x)\n de = d_err(x)\n print(\"%2d: x = (%8.5f, %8.5f) | err' = (%8.4f, --------) | err = %.3e\" %\n (i,x[0],x[1],de[0],e))\n x[0] = x[0] - eta*de[0]\n if e < 1e-14: break\n \n # Step 2: Move along the v-coordinate\n e = err(x)\n de = d_err(x)\n print(\"%2d: x = (%8.5f, %8.5f) | err' = (--------, %8.4f) | err = %.3e\" %\n (i,x[0],x[1],de[1],e))\n x[1] = x[1] - eta*de[1]\n if e < 1e-14: break\n\nx0 = np.array([1.0,1.0])\ngradient_descent(x0=x0, err=err, d_err=err_gradient)\n\ngradient_descent(x0=x0, err=err, d_err=err_gradient2)", "Here, we can see that in both approaches of gradient descent above, it takes about 10 iterations to get the error below $10^{-14}$.\nFor comparison, an attempt to find the roots of the minimum via scipy.optimize.minimize was made, but it yielded a different result. This could be due to the fact that another method was used (in this case, conjugate gradient). At the moment, scipy.optimize.minimize, does not appear to have a gradient descent implementation.", "err_fn = lambda x: (x[0]*np.exp([1]) - 2.0*x[1]*np.exp(-x[0]))**2\nresult = minimize(err_fn, x0=np.array([1.0,1.0]), tol=1e-5, method='CG')\nif result.success is True:\n x = result.x\n print(\"x = {}\".format(x))\n print(\"f = {}\".format(result.fun))\n print(\"evalf = {}\".format(expr.subs(u,x[0]).subs(v,x[1]).evalf()))", "4.2 Coordinate Descent\nUsing the coordinate descent approach, the error minimization takes place more slowly. Even after 15 iterations, the error remains at only ~0.15, regardless of implementation.", "x0 = np.array([1.0,1.0])\ncoordinate_descent(x0=x0, err=err, d_err=err_gradient)\n\nx0 = np.array([1.0,1.0])\ncoordinate_descent(x0=x0, err=err, d_err=err_gradient2)", "5. Logistic Regression\n5.1 Creating a target function\nFor simplicity, we choose a target function, $f$, to be a 0/1 probability.\nFor visualization purposes, we choose the domain of interest to be in 2 dimensions, and choose $\\mathbf{x}$ to be picked uniformly from the region $\\mathcal{X}=\\left[-1,1\\right] \\times \\left[-1,1\\right]$,\nwhere $\\times$ denotes the Cartesian Product.\nA random line is created, and to ensure that it falls within the region of interest, it is created from two random points, $(x_0,y_0)$ and $(x_1,y_1)$ which are generated within $\\mathcal{X}$. 
The equation for this line in slope-intercept form and in the hypothesis / weights can be shown to be:\nSlope-Intercept Form\n$$m = - \\frac{w_1}{w_2}, c = - \\frac{w_0}{w_2}$$\nHypothesis Weights Form\n$$\\mathbf{w} = \\left(-c,-m,1\\right)$$", "def generate_data(n,seed=None):\n if seed is not None:\n np.random.seed(seed)\n x0 = np.ones(n)\n x1 = np.random.uniform(low=-1,high=1,size=(2,n))\n return np.vstack((x0,x1)).T\n\ndef get_random_line(seed=None):\n X = generate_data(2,seed=seed)\n x = X[:,1]\n y = X[:,2]\n m = (y[1]-y[0])/(x[1]-x[0])\n c = y[0] - m*x[0]\n return np.array([-c,-m,1])\n\ndef draw_line(ax,w,marker='g--',label=None):\n m = -w[1]/w[2]\n c = -w[0]/w[2]\n x = np.linspace(-1,1,20)\n y = m*x + c\n if label is None:\n ax.plot(x,y,marker)\n else:\n ax.plot(x,y,marker,label=label)\n \ndef get_hypothesis(X,w):\n h=np.dot(X,w)\n return np.sign(h).astype(int)", "5.2 Plotting the Data", "def plot_data(fig,plot_id,X,y=None,w_arr=None,my_x=None,title=None):\n ax = fig.add_subplot(plot_id)\n if y is None:\n ax.plot(X[:,1],X[:,2],'gx')\n else:\n ax.plot(X[y > 0,1],X[y > 0,2],'b+',label='Positive (+)')\n ax.plot(X[y < 0,1],X[y < 0,2],'ro',label='Negative (-)')\n ax.set_xlim(-1,1)\n ax.set_ylim(-1,1)\n ax.grid(True)\n if w_arr is not None:\n if isinstance(w_arr,list) is not True:\n w_arr=[w_arr]\n for i,w in enumerate(w_arr):\n if i==0:\n draw_line(ax,w,'g-',label='Theoretical')\n else:\n draw_line(ax,w,'g--')\n if my_x is not None:\n ax.plot([my_x[0]],[my_x[1]],'kx',markersize=10)\n if title is not None:\n ax.set_title(title)\n ax.legend(loc='best',frameon=True)\n\ndef create_dataset(N,make_plot=True,seed=None):\n X = generate_data(N,seed=seed)\n w_theoretical = get_random_line()\n y = get_hypothesis(X,w_theoretical)\n if make_plot is True:\n fig = plt.figure(figsize=(7,5))\n plot_data(fig,111,X,y,w_theoretical,title=\"Initial Dataset\")\n return X,y,w_theoretical", "We choose 100 training points at random from $\\mathcal{X}$ and record the outputs, $y_n$, for each of the points, $\\mathbf{x_n}$.", "N = 100\nX,y,w_theoretical = create_dataset(N=N,make_plot=True,seed=127)", "5.3 Gradient Descent\nThe gradient descent algorithm adjust the weights in the direction of the 'steepest descent' ($\\nabla E_{in}$), with the adjustment of a learning rate, $\\eta$:\n$$\\mathbf{w}(t+1) = \\mathbf{w}(t) - \\eta\\nabla E_{in}$$\nWe thus need to know the gradient of the error measure with respect to the weights, i.e.:\n$$\\nabla E_{in}\\left(\\mathbf{w}\\right) = -\\frac{1}{N}\\sum\\limits_{n=1}^N \\frac{y_n\\mathbf{x_N}}{1 + \\exp\\left(y_n \\mathbf{w^T}(t)\\mathbf{x_n}\\right)}$$\n$$E_{in}\\left(\\mathbf{w}\\right) = \\frac{1}{N}\\sum\\limits_{n=1}^N \\ln\\left[1 + \\exp\\left(-y_n \\mathbf{w^T x_n}\\right)\\right]$$", "w = w_theoretical\ndef cross_entropy(y_i,w,x):\n return np.log(1 + np.exp(-y_i*np.dot(x,w)))\ndef gradient(y_i,w,x):\n return -y_i*x/(1+np.exp(y_i*np.dot(x,w)))\nassert np.allclose(cross_entropy(y[0],w,X[0,:]),np.log(1 + np.exp(-y[0]*np.dot(X[0,:],w))))\nassert np.allclose(gradient(y[0],w,X[0,:]),-y[0]*X[0,:]/(1+np.exp(y[0]*np.dot(X[0,:],w))))\n\nnp.mean(cross_entropy(y,w,X))\n\nnp.set_printoptions(precision=4)\nassert np.linalg.norm(np.array([1.0, 2.0, 3.0])) == np.sqrt(1**2 + 2**2 + 3**2)\n\ndef run_simulation(N=100,eta=0.01,make_plot=None,w0 = np.array([0,0,0],dtype=float)):\n X = generate_data(N)\n w_theoretical = get_random_line()\n y = get_hypothesis(X,w_theoretical)\n\n w_arr = []\n w_arr2= []\n e_arr = []\n w = w0\n h = get_hypothesis(X,w)\n assert y.dtype == h.dtype\n for 
t_epoch in range(1000):\n w_epoch = w\n for i,p in enumerate(permutation(N)):\n grad = gradient(y[p],w,X[p,:])\n w = w - eta*grad;\n w_arr2.append(w)\n\n #Estimate out-of-sample error by re-generating data\n X_out = generate_data(N)\n h = get_hypothesis(X_out,w_theoretical)\n misclassified = np.mean(h != y)\n #E_out = np.mean(cross_entropy(y,w,X))\n E_out = np.mean(cross_entropy(h,w,X_out))\n delta_w = np.linalg.norm(w - w_epoch)\n w_arr.append(w)\n e_arr.append(E_out)\n #if t_epoch % 20 == 0:\n # print(\"epoch{:4}: miss={}, delta_w={}, E_out={}, w={}\".format(\n # t_epoch, misclassified, np.round(delta_w,5), E_out, w))\n if delta_w < 0.01: break\n print(\"Epochs = {}, E_out = {}, w = {}\".format(t_epoch, E_out, w))\n if make_plot is not None:\n fig = plt.figure(figsize=(7,5))\n plot_data(fig,111,X,y,[w_theoretical,w],title=\"Converged\")\n return e_arr, np.array(w_arr), X, y, np.array(w_arr2)", "Due to the randomness of starting with different target functions each time, we run stochastic gradient descent multiple times and consider the statistics in terms of the average number of epochs and the average out-of-sample errors.", "t_arr = []\ne_arr = []\nw_arr = []\nfor n in range(50):\n e, w, _, _, _ = run_simulation()\n t_arr.append(len(e)-1) #Should I subtract 1 here?\n e_arr.append(e[-1])\n w_arr.append(w[-1])", "The average out of sample error and the average number of epochs from the multiple runs above are:", "print(\"<E_out> = {}\".format(np.mean(e_arr)))\nprint(\"<Epochs> = {}\".format(np.mean(t_arr)))", "5.4 Gradient Descent Visualization", "def normalize_weights(w_arr):\n # You can't normalize the weights as this changes the cross entropy.\n w_arr[:,1] = w_arr[:,1] / w_arr[:,0]\n w_arr[:,2] = w_arr[:,2] / w_arr[:,0]\n w_arr[:,0] = 1.0\n return w_arr\n\ndef calculate_J(w0,w1,w2,X,y):\n J = np.zeros((w1.size,w2.size))\n for j in range(w1.size):\n for i in range(w2.size):\n W = np.array([w0, w1[j], w2[i]])\n J[i,j] = np.mean(cross_entropy(y,W,X))\n return J\n\ndef get_WJ(w_arr,X,y,n=100):\n w_arr = np.array(w_arr)\n\n w1_min = np.min(w_arr[:,1])\n w2_min = np.min(w_arr[:,2])\n w1_max = np.max(w_arr[:,1])\n w2_max = np.max(w_arr[:,2])\n sp = 10.0\n\n w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2]\n w1 = np.linspace(w1_min-sp,w1_max+sp,n)\n w2 = np.linspace(w2_min-sp,w2_max+sp,n)\n W1, W2 = np.meshgrid(w1,w2)\n J = calculate_J(w0,w1,w2,X,y)\n return w_arr,w1,w2,W1,W2,J\n\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\ndef visualise_SGD_3D(e_arr,w_arr,w_arr2,X,y,epoch_interval,elevation=30,azimuth=75):\n w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y)\n w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2]\n z_arr = [ np.mean(cross_entropy(y,[w0,w_i[1],w_i[2]],X)) for w_i in w_arr ]\n z_arr2 = [ np.mean(cross_entropy(y,[w0,w_i[1],w_i[2]],X)) for w_i in w_arr2 ]\n\n fig = plt.figure(figsize=(14,10))\n ax = fig.gca(projection='3d')\n surf = ax.plot_surface(W1,W2,J, rstride=10, cstride=10, cmap=cm.coolwarm,\n linewidth=0.3, antialiased=True, alpha=0.9) #, zorder=3)\n ax.set_xlabel(r'$w_1$', fontsize=18)\n ax.set_ylabel(r'$w_2$', fontsize=18)\n ax.set_zlabel(r'$E_{in}$', fontsize=18)\n ax.plot(w_arr[:,1],w_arr[:,2],z_arr,'k-',lw=0.8,label=\"Stochastic Gradient Descent (SGD)\")\n ax.plot(w_arr2[:,1],w_arr2[:,2],z_arr2,'k-',lw=1.8,alpha=0.3,label=\"SGD within epochs\")\n ax.plot(w_arr[::epoch_interval,1],w_arr[::epoch_interval,2],z_arr[::epoch_interval],\n 'ko',markersize=7,label=r\"Intervals of 
$n$ Epochs\")\n ax.scatter([w_arr[-1,1]],[w_arr[-1,2]],[z_arr[-1]], c='r', s=250, marker='x', lw=3);\n #fig.colorbar(surf, shrink=0.5, aspect=12)\n ax.legend(loc='best',frameon=False)\n ax.axes.xaxis.set_ticklabels([])\n ax.axes.yaxis.set_ticklabels([])\n ax.axes.zaxis.set_ticklabels([])\n ax.view_init(elev=elevation, azim=azimuth)\n\ndef visualise_SGD_contour(e_arr,w_arr,w_arr2,X,y,epoch_interval):\n w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y)\n\n fig = plt.figure(figsize=(12,8))\n ax = fig.gca()\n CS = plt.contour(W1,W2,J,20)\n #plt.clabel(CS, inline=1, fontsize=10)\n ax.set_xlabel(r'$w_1$', fontsize=18)\n ax.set_ylabel(r'$w_2$', fontsize=18)\n ax.plot(w_arr[:,1],w_arr[:,2],'k-',lw=0.8,label=\"Stochastic Gradient Descent (SGD)\")\n ax.plot(w_arr2[:,1],w_arr2[:,2],'k-',lw=1.8,alpha=0.3,label=\"SGD within epochs\")\n ax.plot(w_arr[::epoch_interval,1],w_arr[::epoch_interval,2],\n 'ko',markersize=7,label=r\"Intervals of $n$ Epochs\")\n ax.scatter([w_arr[-1,1]],[w_arr[-1,2]], c='r', s=150, marker='x', lw=3);\n ax.legend(loc='best',frameon=False)\n ax.axes.xaxis.set_ticklabels([])\n ax.axes.yaxis.set_ticklabels([])\n plt.title(r'$E_{in}$', fontsize=16);\n\ndef plot_epochs(e_arr,w_arr,X,y,epoch_interval):\n w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y)\n E_in = [ np.mean(cross_entropy(y,w_i,X)) for w_i in w_arr ]\n epoch = np.array(range(len(e_arr)))\n\n fig = plt.figure(figsize=(10,10))\n ax = fig.add_subplot(211)\n ax.set_ylabel(r'Error', fontsize=16)\n ax.plot(epoch,e_arr,c='g',markersize=1,marker='+',lw=1,alpha=0.8,label=r'$E_{out}$')\n #ax.scatter(epoch[::epoch_interval],e_arr[::epoch_interval],c='g',s=20,marker='o',lw=3,alpha=0.8)\n ax.plot(epoch,E_in,c='k',linestyle='--',label=r'$E_{in}$')\n ax.legend(loc='best',frameon=False, fontsize=16)\n ax.set_title('\"Cross Entropy\" Error', fontsize=16);\n ax.axes.xaxis.set_ticklabels([])\n ax.axes.yaxis.set_ticklabels([])\n ax.grid(True)\n\n ax = fig.add_subplot(212)\n ax.set_xlabel(r'Epoch', fontsize=16)\n ax.set_ylabel(r'Error', fontsize=16)\n ax.loglog(epoch,e_arr,c='g',markersize=1,marker='+',lw=1,alpha=0.8,label=r'$E_{out}$')\n ax.loglog(epoch,E_in,c='k',linestyle='--',label=r'$E_{in}$')\n #ax.loglog(epoch[::epoch_interval],e_arr[::epoch_interval],c='g',markersize=8,marker='o',lw=3,alpha=0.8,ls='None')\n ax.legend(loc='best',frameon=False, fontsize=16)\n ax.axes.xaxis.set_ticklabels([])\n ax.axes.yaxis.set_ticklabels([])\n ax.grid(True)\n\nnp.random.seed(12345)\ne_arr, w_arr, X, y, w_arr2 = run_simulation(N=15,eta=0.8,w0=np.array([2.0, 10.0, -20.0]))\n\nvisualise_SGD_3D(e_arr,w_arr,w_arr2,X,y,epoch_interval=100)\n\nvisualise_SGD_contour(e_arr,w_arr,w_arr2,X,y,epoch_interval=100)\n\nplot_epochs(e_arr,w_arr,X,y,epoch_interval=100)", "5.5 Stochastic Gradient Descent vs Perceptron Learning Algorithm\n\"Consider that you are picking a point at random out of the $N$ points. In PLA, you see if it is misclassified then update using the PLA rule if it is and not update if it isn't. In SGD, you take the gradient of the error on that point w.r.t. $\\mathbf{w}$ and update accordingly. 
Which of the 5 error functions would make these equivalent?\n\n(a): $e_n\\left(\\mathbf{w}\\right) = \\exp\\left(-y_n \\mathbf{w^T x_n}\\right)$\n(b): $e_n\\left(\\mathbf{w}\\right) = -y_n \\mathbf{w^T x_n}$\n(c): $e_n\\left(\\mathbf{w}\\right) = \\left(y_n - \\mathbf{w^T x_n}\\right)^2$\n(d): $e_n\\left(\\mathbf{w}\\right) = \\ln\\left[1 + \\exp\\left(-y_n \\mathbf{w^T x_n}\\right)\\right]$\n(e): $e_n\\left(\\mathbf{w}\\right) = -\\min\\left(0, y_n \\mathbf{w^T x_n}\\right)$\n\nAnswer: (e)\nNotes: an attempt to evaluate the gradients of the above functions using sympy was carried out as follows (the final expression, which contains the function min was excluded):", "var('y_n w_i x_n')\nexpr = exp(-y_n * w_i * x_n)\nd_expr = expr.diff(w_i)\ndisplay(Math(latex(d_expr)))\n\nexpr = -y_n * w_i * x_n\nd_expr = expr.diff(w_i)\ndisplay(Math(latex(d_expr)))\n\nexpr = (y_n - w_i * x_n)**2\nd_expr = simplify(expr.diff(w_i))\ndisplay(Math(latex(d_expr)))\n\nexpr = log(1+exp(-y_n * w_i * x_n))\nd_expr = simplify(expr.diff(w_i))\ndisplay(Math(latex(d_expr)))\n\nw_final = np.array(w_arr)[-1,:]\ne_a = np.mean(np.exp(-y*np.dot(X,w_final)))\ne_b = np.mean(-y*np.dot(X,w_final))\ne_c = np.mean((y - np.dot(X,w_final))**2)\ne_d = np.mean(np.log(1 + np.exp(-y*np.dot(X,w_final))))\ne_e = -y*np.dot(X,w_final); e_e[e_e > 0] = 0; e_e = np.mean(e_e)\nprint(\"(a) e_n(w) = {}\".format(e_a))\nprint(\"(b) e_n(w) = {}\".format(e_b))\nprint(\"(c) e_n(w) = {}\".format(e_c))\nprint(\"(d) e_n(w) = {}\".format(e_d))\nprint(\"(e) e_n(w) = {}\".format(e_e))", "An attempt was also made to visualize the gradient descent algorithm when performed on the various error functions.", "def my_err_fn(y,W,X):\n #e = np.exp(-y*np.dot(X,W)) # e_a\n #e = -y*np.dot(X,W) # e_b\n #e = (y - np.dot(X,W))**2 # e_c\n e = np.log(1 + np.exp(-y*np.dot(X,W))) # e_d\n #e = -y*np.dot(X,W); e[e > 0] = 0 # e_e\n return np.mean(e)\n\ndef calculate_J(w0,w1,w2,X,y,my_err_fn):\n J = np.zeros((w1.size,w2.size))\n for j in range(w1.size):\n for i in range(w2.size):\n W = np.array([w0, w1[j], w2[i]])\n J[i,j] = my_err_fn(y,W,X)\n return J\n\ndef get_WJ(w_arr,X,y,my_err_fn,n=100):\n w_arr = np.array(w_arr)\n w1_min = np.min(w_arr[:,1])\n w2_min = np.min(w_arr[:,2])\n w1_max = np.max(w_arr[:,1])\n w2_max = np.max(w_arr[:,2])\n sp = 10.0\n\n w0 = w_arr[-1,0] # take a 2D slice through the final value of w_0 in the 3D space [w0,w1,w2]\n w1 = np.linspace(w1_min-sp,w1_max+sp,n)\n w2 = np.linspace(w2_min-sp,w2_max+sp,n)\n W1, W2 = np.meshgrid(w1,w2)\n J = calculate_J(w0,w1,w2,X,y,my_err_fn)\n return w_arr,w1,w2,W1,W2,J\n\ndef visualise_SGD_contour2(e_arr,w_arr,X,y,my_err_fn):\n w_arr,w1,w2,W1,W2,J = get_WJ(w_arr,X,y,my_err_fn)\n\n fig = plt.figure(figsize=(10,7))\n ax = fig.gca()\n CS = plt.contour(W1,W2,J,20)\n plt.clabel(CS, inline=1, fontsize=10)\n ax.set_xlabel(r'$w_1$', fontsize=18)\n ax.set_ylabel(r'$w_2$', fontsize=18)\n ax.plot(w_arr[:,1],w_arr[:,2],'k-',label=\"Gradient Descent\")\n ax.plot(w_arr[::100,1],w_arr[::100,2],'ko',markersize=7,label=r\"Intervals of $n$ Epochs\")\n ax.scatter([w_arr[-1,1]],[w_arr[-1,2]], c='r', s=150, marker='x', lw=3);\n ax.legend(loc='best',frameon=False)\n ax.axes.xaxis.set_ticklabels([])\n ax.axes.yaxis.set_ticklabels([])\n plt.title(r'$E_{in}$', fontsize=16)\n\nnp.random.seed(12345)\ne_arr, w_arr, X, y, w_arr2 = run_simulation(N=300,eta=0.15)\nvisualise_SGD_contour2(e_arr,w_arr,X,y,my_err_fn)" ]
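To make the equivalence behind answer (e) concrete: the (sub)gradient of $e_n\left(\mathbf{w}\right) = -\min\left(0, y_n \mathbf{w^T x_n}\right)$ is $-y_n\mathbf{x_n}$ when the point is misclassified and $0$ otherwise, so a single SGD step with $\eta = 1$ reproduces the PLA update. A minimal sketch of this check (it reuses the X and y arrays already in scope from the notebook; the zero initial weight vector and the point index are arbitrary choices for illustration):

```python
# Sketch: one PLA update vs. one SGD step on e_n(w) = -min(0, y_n w.x_n), with eta = 1
w = np.zeros(3)
n = 0  # any training point

s = y[n] * np.dot(X[n], w)
# PLA: update only if the point is misclassified
w_pla = w + y[n] * X[n] if s <= 0 else w
# SGD on -min(0, s): gradient is -y_n x_n if s <= 0, else 0
grad = -y[n] * X[n] if s <= 0 else np.zeros_like(w)
w_sgd = w - 1.0 * grad

assert np.allclose(w_pla, w_sgd)
```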
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jegibbs/phys202-2015-work
assignments/assignment10/ODEsEx01.ipynb
mit
[ "Ordinary Differential Equations Exercise 1\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed", "Euler's method\nEuler's method is the simplest numerical approach for solving a first order ODE numerically. Given the differential equation\n$$ \\frac{dy}{dx} = f(y(x), x) $$\nwith the initial condition:\n$$ y(x_0)=y_0 $$\nEuler's method performs updates using the equations:\n$$ y_{n+1} = y_n + h f(y_n,x_n) $$\n$$ h = x_{n+1} - x_n $$\nWrite a function solve_euler that implements the Euler method for a 1d ODE and follows the specification described in the docstring:", "def derivs(yvec, t, h, f, ):\n x = yvec[0]\n y = yvec[1]\n dx = \n dy = \n return np.array([dx, dy])\n\ndef solve_euler(derivs, y0, x):\n \"\"\"Solve a 1d ODE using Euler's method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where\n y and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n \n\nassert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])", "The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation:\n$$ y_{n+1} = y_n + h f\\left(y_n+\\frac{h}{2}f(y_n,x_n),x_n+\\frac{h}{2}\\right) $$\nWrite a function solve_midpoint that implements the midpoint method for a 1d ODE and follows the specification described in the docstring:", "def solve_midpoint(derivs, y0, x):\n \"\"\"Solve a 1d ODE using the Midpoint method.\n \n Parameters\n ----------\n derivs : function\n The derivative of the diff-eq with the signature deriv(y,x) where y\n and x are floats.\n y0 : float\n The initial condition y[0] = y(x[0]).\n x : np.ndarray, list, tuple\n The array of times at which of solve the diff-eq.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions y[i] = y(x[i])\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])", "You are now going to solve the following differential equation:\n$$\n\\frac{dy}{dx} = x + 2y\n$$\nwhich has the analytical solution:\n$$\ny(x) = 0.25 e^{2x} - 0.5 x - 0.25\n$$\nFirst, write a solve_exact function that compute the exact solution and follows the specification described in the docstring:", "def solve_exact(x):\n \"\"\"compute the exact solution to dy/dx = x + 2y.\n \n Parameters\n ----------\n x : np.ndarray\n Array of x values to compute the solution at.\n \n Returns\n -------\n y : np.ndarray\n Array of solutions at y[i] = y(x[i]).\n \"\"\"\n # YOUR CODE HERE\n raise NotImplementedError()\n\nassert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))", "In the following cell you are going to solve the above ODE using four different algorithms:\n\nEuler's method\nMidpoint method\nodeint\nExact\n\nHere are the details:\n\nGenerate an array of x values with $N=11$ points over the interval $[0,1]$ ($h=0.1$).\nDefine the derivs function for the above differential equation.\nUsing the solve_euler, solve_midpoint, odeint and solve_exact functions to compute\n the solutions using the 4 approaches.\n\nVisualize the solutions on a sigle figure with two subplots:\n\nPlot the $y(x)$ versus $x$ 
for each of the 4 approaches.\nPlot $\\left|y(x)-y_{exact}(x)\\right|$ versus $x$ for each of the 3 numerical approaches.\n\nYour visualization should have legends, labeled axes, titles and be customized for beauty and effectiveness.\nWhile your final plot will use $N=10$ points, first try making $N$ larger and smaller to see how that affects the errors of the different approaches.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this for grading the plots" ]
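Since the exercise cells above are left as stubs, here is one possible completion for reference — a sketch that simply follows the Euler and midpoint update equations quoted in the markdown, not the assignment's official solution (the `_sketch` suffixes are mine):

```python
import numpy as np

def solve_euler_sketch(derivs, y0, x):
    # y[n+1] = y[n] + h * f(y[n], x[n])
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n], x[n])
    return y

def solve_midpoint_sketch(derivs, y0, x):
    # y[n+1] = y[n] + h * f(y[n] + (h/2) f(y[n], x[n]), x[n] + h/2)
    y = np.empty(len(x))
    y[0] = y0
    for n in range(len(x) - 1):
        h = x[n + 1] - x[n]
        y[n + 1] = y[n] + h * derivs(y[n] + 0.5 * h * derivs(y[n], x[n]), x[n] + 0.5 * h)
    return y

def solve_exact_sketch(x):
    # analytical solution of dy/dx = x + 2y quoted in the exercise
    x = np.asarray(x, dtype=float)
    return 0.25 * np.exp(2 * x) - 0.5 * x - 0.25

assert np.allclose(solve_euler_sketch(lambda y, x: 1, 0, [0, 1, 2]), [0, 1, 2])
assert np.allclose(solve_exact_sketch([0, 1, 2]), [0., 1.09726402, 12.39953751])
```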
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
uber/pyro
tutorial/source/effect_handlers.ipynb
apache-2.0
[ "Poutine: A Guide to Programming with Effect Handlers in Pyro\nNote to readers: This tutorial is a guide to the API details of Pyro's effect handling library, Poutine. We recommend readers first orient themselves with the simplified minipyro.py which contains a minimal, readable implementation of Pyro's runtime and the effect handler abstraction described here. Pyro's effect handler library is more general than minipyro's but also contains more layers of indirection; it helps to read them side-by-side.", "import torch\n\nimport pyro\nimport pyro.distributions as dist\nimport pyro.poutine as poutine\n\nfrom pyro.poutine.runtime import effectful\n\npyro.set_rng_seed(101)", "Introduction\nInference in probabilistic programming involves manipulating or transforming probabilistic programs written as generative models. For example, nearly all approximate inference algorithms require computing the unnormalized joint probability of values of latent and observed variables under a generative model.\nConsider the following example model from the introductory inference tutorial:", "def scale(guess):\n weight = pyro.sample(\"weight\", dist.Normal(guess, 1.0))\n return pyro.sample(\"measurement\", dist.Normal(weight, 0.75))", "This model defines a joint probability distribution over \"weight\" and \"measurement\":\n$${\\sf weight} \\, | \\, {\\sf guess} \\sim \\cal {\\sf Normal}({\\sf guess}, 1) $$\n$${\\sf measurement} \\, | \\, {\\sf guess}, {\\sf weight} \\sim {\\sf Normal}({\\sf weight}, 0.75)$$\nIf we had access to the inputs and outputs of each pyro.sample site, we could compute their log-joint:\npython\nlogp = dist.Normal(guess, 1.0).log_prob(weight).sum() + dist.Normal(weight, 0.75).log_prob(measurement).sum()\nHowever, the way we wrote scale above does not seem to expose these intermediate distribution objects, and rewriting it to return them would be intrusive and would violate the separation of concerns between models and inference algorithms that a probabilistic programming language like Pyro is designed to enforce.\nTo resolve this conflict and facilitate inference algorithm development, Pyro exposes Poutine, a library of effect handlers, or composable building blocks for examining and modifying the behavior of Pyro programs. Most of Pyro's internals are implemented on top of Poutine.\nA first look at Poutine: Pyro's library of algorithmic building blocks\nEffect handlers, a common abstraction in the programming languages community, give nonstandard interpretations or side effects to the behavior of particular statements in a programming language, like pyro.sample or pyro.param. For background reading on effect handlers in programming language research, see the optional \"References\" section at the end of this tutorial. 
\nRather than reviewing more definitions, let's look at a first example that addresses the problem above: we can compose two existing effect handlers, poutine.condition (which sets output values of pyro.sample statements) and poutine.trace (which records the inputs, distributions, and outputs of pyro.sample statements), to concisely define a new effect handler that computes the log-joint:", "def make_log_joint(model):\n def _log_joint(cond_data, *args, **kwargs):\n conditioned_model = poutine.condition(model, data=cond_data)\n trace = poutine.trace(conditioned_model).get_trace(*args, **kwargs)\n return trace.log_prob_sum()\n return _log_joint\n\nscale_log_joint = make_log_joint(scale)\nprint(scale_log_joint({\"measurement\": 9.5, \"weight\": 8.23}, 8.5))", "That snippet is short, but still somewhat opaque - poutine.condition, poutine.trace, and trace.log_prob_sum are all black boxes. Let's remove a layer of boilerplate from poutine.condition and poutine.trace and explicitly implement what trace.log_prob_sum is doing:", "from pyro.poutine.trace_messenger import TraceMessenger\nfrom pyro.poutine.condition_messenger import ConditionMessenger\n\ndef make_log_joint_2(model):\n def _log_joint(cond_data, *args, **kwargs):\n with TraceMessenger() as tracer:\n with ConditionMessenger(data=cond_data):\n model(*args, **kwargs)\n \n trace = tracer.trace\n logp = 0.\n for name, node in trace.nodes.items():\n if node[\"type\"] == \"sample\":\n if node[\"is_observed\"]:\n assert node[\"value\"] is cond_data[name]\n logp = logp + node[\"fn\"].log_prob(node[\"value\"]).sum()\n return logp\n return _log_joint\n\nscale_log_joint = make_log_joint_2(scale)\nprint(scale_log_joint({\"measurement\": 9.5, \"weight\": 8.23}, 8.5))", "This makes things a little more clear: we can now see that poutine.trace and poutine.condition are wrappers for context managers that presumably communicate with the model through something inside pyro.sample. We can also see that poutine.trace produces a data structure (a Trace) containing a dictionary whose keys are sample site names and values are dictionaries containing the distribution (\"fn\") and output (\"value\") at each site, and that the output values at each site are exactly the values specified in data.\nFinally, TraceMessenger and ConditionMessenger are Pyro effect handlers, or Messengers: stateful context manager objects that are placed on a global stack and send messages (hence the name) up and down the stack at each effectful operation, like a pyro.sample call. A Messenger is placed at the bottom of the stack when its __enter__ method is called, i.e. when it is used in a \"with\" statement.\nWe'll look at this process in more detail later in this tutorial. For a simplified implementation in only a few lines of code, see pyro.contrib.minipyro.\nImplementing new effect handlers with the Messenger API\nAlthough it's easiest to build new effect handlers by composing the existing ones in pyro.poutine, implementing a new effect as a pyro.poutine.messenger.Messenger subclass is actually fairly straightforward. Before diving into the API, let's look at another example: a version of our log-joint computation that performs the sum while the model is executing. 
We'll then review what each part of the example is actually doing.", "class LogJointMessenger(poutine.messenger.Messenger):\n \n def __init__(self, cond_data):\n self.data = cond_data\n \n # __call__ is syntactic sugar for using Messengers as higher-order functions.\n # Messenger already defines __call__, but we re-define it here\n # for exposition and to change the return value:\n def __call__(self, fn):\n def _fn(*args, **kwargs):\n with self:\n fn(*args, **kwargs)\n return self.logp.clone()\n return _fn\n \n def __enter__(self):\n self.logp = torch.tensor(0.)\n # All Messenger subclasses must call the base Messenger.__enter__()\n # in their __enter__ methods\n return super().__enter__()\n \n # __exit__ takes the same arguments in all Python context managers\n def __exit__(self, exc_type, exc_value, traceback):\n self.logp = torch.tensor(0.)\n # All Messenger subclasses must call the base Messenger.__exit__ method\n # in their __exit__ methods.\n return super().__exit__(exc_type, exc_value, traceback)\n \n # _pyro_sample will be called once per pyro.sample site.\n # It takes a dictionary msg containing the name, distribution,\n # observation or sample value, and other metadata from the sample site.\n def _pyro_sample(self, msg):\n # Any unobserved random variables will trigger this assertion.\n # In the next section, we'll learn how to also handle sampled values.\n assert msg[\"name\"] in self.data\n msg[\"value\"] = self.data[msg[\"name\"]]\n # Since we've observed a value for this site, we set the \"is_observed\" flag to True\n # This tells any other Messengers not to overwrite msg[\"value\"] with a sample.\n msg[\"is_observed\"] = True\n self.logp = self.logp + (msg[\"scale\"] * msg[\"fn\"].log_prob(msg[\"value\"])).sum()\n\nwith LogJointMessenger(cond_data={\"measurement\": 9.5, \"weight\": 8.23}) as m:\n scale(8.5)\n print(m.logp.clone())\n \nscale_log_joint = LogJointMessenger(cond_data={\"measurement\": 9.5, \"weight\": 8.23})(scale)\nprint(scale_log_joint(8.5))", "A convenient bit of boilerplate that allows the use of LogJointMessenger as a context manager, decorator, or higher-order function is the following. Most of the existing effect handlers in pyro.poutine, including poutine.trace and poutine.condition which we used earlier, are Messengers wrapped this way in pyro.poutine.handlers.", "def log_joint(model=None, cond_data=None):\n msngr = LogJointMessenger(cond_data=cond_data)\n return msngr(model) if model is not None else msngr\n\nscale_log_joint = log_joint(scale, cond_data={\"measurement\": 9.5, \"weight\": 8.23})\nprint(scale_log_joint(8.5))", "The Messenger API in more detail\nOur LogJointMessenger implementation has three important methods: __enter__, __exit__, and _pyro_sample. \n__enter__ and __exit__ are special methods needed by any Python context manager. When implementing new Messenger classes, if we override __enter__ and __exit__, we always need to call the base Messenger's __enter__ and __exit__ methods for the new Messenger to be applied correctly.\nThe last method LogJointMessenger._pyro_sample, is called once at each sample site. It reads and modifies a message, which is a dictionary containing the sample site's name, distribution, sampled or observed value, and other metadata. We'll examine the contents of a message in more detail in the next section.\nInstead of _pyro_sample, a generic Messenger actually contains two methods that are called once per operation where side effects are performed:\n1. 
_process_message modifies a message and sends the result to the Messenger just above on the stack\n2. _postprocess_message modifies a message and sends the result to the next Messenger down on the stack. It is always called after all active Messengers have had their _process_message method applied to the message.\nAlthough custom Messengers can override _process_message and _postprocess_message, it's convenient to avoid requiring all effect handlers to be aware of all possible effectful operation types. For this reason, by default Messenger._process_message will use msg[\"type\"] to dispatch to a corresponding method Messenger._pyro_&lt;type&gt;, e.g. Messenger._pyro_sample as in LogJointMessenger. Just as exception handling code ignores unhandled exception types, this allows Messengers to simply forward operations they don't know how to handle up to the next Messenger in the stack:\npython\nclass Messenger:\n ...\n def _process_message(self, msg):\n method_name = \"_pyro_{}\".format(msg[\"type\"]) # e.g. _pyro_sample when msg[\"type\"] == \"sample\"\n if hasattr(self, method_name):\n getattr(self, method_name)(msg)\n ...\nInterlude: the global Messenger stack\nSee pyro.contrib.minipyro for an end-to-end implementation of the mechanism in this section.\nThe order in which Messengers are applied to an operation like a pyro.sample statement is determined by the order in which their __enter__ methods are called. Messenger.__enter__ appends a Messenger to the end (the bottom) of the global handler stack:\n```python\nclass Messenger:\n ...\n # enter pushes a Messenger onto the stack\n def enter(self):\n ...\n _PYRO_STACK.append(self)\n ...\n# __exit__ removes a Messenger from the stack\ndef __exit__(self, ...):\n ...\n assert _PYRO_STACK[-1] is self\n _PYRO_STACK.pop()\n ...\n\n```\npyro.poutine.runtime.apply_stack then traverses the stack twice at each operation, first from bottom to top to apply each _process_message and then from top to bottom to apply each _postprocess_message:\npython\ndef apply_stack(msg): # simplified\n for handler in reversed(_PYRO_STACK):\n handler._process_message(msg)\n ...\n default_process_message(msg)\n ...\n for handler in _PYRO_STACK:\n handler._postprocess_message(msg) \n ...\n return msg\nReturning to the LogJointMessenger example\nThe second method _postprocess_message is necessary because some effects can only be applied after all other effect handlers have had a chance to update the message once. 
In the case of LogJointMessenger, other effects, like enumeration, may modify a sample site's value or distribution (msg[\"value\"] or msg[\"fn\"]), so we move the log-probability computation to a new method, _pyro_post_sample, which is called by _postprocess_message (via a dispatch mechanism like the one used by _process_message) at each sample site after all active handlers' _pyro_sample methods have been applied:", "class LogJointMessenger2(poutine.messenger.Messenger):\n \n def __init__(self, cond_data):\n self.data = cond_data\n \n def __call__(self, fn):\n def _fn(*args, **kwargs):\n with self:\n fn(*args, **kwargs)\n return self.logp.clone()\n return _fn\n \n def __enter__(self):\n self.logp = torch.tensor(0.)\n return super().__enter__()\n \n def __exit__(self, exc_type, exc_value, traceback):\n self.logp = torch.tensor(0.)\n return super().__exit__(exc_type, exc_value, traceback)\n\n def _pyro_sample(self, msg):\n if msg[\"name\"] in self.data:\n msg[\"value\"] = self.data[msg[\"name\"]]\n msg[\"done\"] = True\n \n def _pyro_post_sample(self, msg):\n assert msg[\"done\"] # the \"done\" flag asserts that no more modifications to value and fn will be performed.\n self.logp = self.logp + (msg[\"scale\"] * msg[\"fn\"].log_prob(msg[\"value\"])).sum()\n\n\nwith LogJointMessenger2(cond_data={\"measurement\": 9.5, \"weight\": 8.23}) as m:\n scale(8.5)\n print(m.logp)", "Inside the messages sent by Messengers\nAs the previous two examples mentioned, the actual messages sent up and down the stack are dictionaries with a particular set of keys. Consider the following sample statement:\npython\npyro.sample(\"x\", dist.Bernoulli(0.5), infer={\"enumerate\": \"parallel\"}, obs=None)\nThis sample statement is converted into an initial message before any effects are applied, and each effect handler's _process_message and _postprocess_message may update fields in place or add new fields. 
We write out the full initial message here for completeness:\npython\nmsg = {\n # The following fields contain the name, inputs, function, and output of a site.\n # These are generally the only fields you'll need to think about.\n \"name\": \"x\",\n \"fn\": dist.Bernoulli(0.5),\n \"value\": None, # msg[\"value\"] will eventually contain the value returned by pyro.sample\n \"is_observed\": False, # because obs=None by default; only used by sample sites\n \"args\": (), # positional arguments passed to \"fn\" when it is called; usually empty for sample sites\n \"kwargs\": {}, # keyword arguments passed to \"fn\" when it is called; usually empty for sample sites\n # This field typically contains metadata needed or stored by a particular inference algorithm\n \"infer\": {\"enumerate\": \"parallel\"},\n # The remaining fields are generally only used by Pyro's internals,\n # or for implementing more advanced effects beyond the scope of this tutorial\n \"type\": \"sample\", # label used by Messenger._process_message to dispatch, in this case to _pyro_sample\n \"done\": False,\n \"stop\": False,\n \"scale\": torch.tensor(1.), # Multiplicative scale factor that can be applied to each site's log_prob\n \"mask\": None,\n \"continuation\": None,\n \"cond_indep_stack\": (), # Will contain metadata from each pyro.plate enclosing this sample site.\n}\nNote that when we use poutine.trace or TraceMessenger as in our first two versions of make_log_joint, the contents of msg are exactly the information stored in the trace for each sample and param site.\nImplementing inference algorithms with existing effect handlers: examples\nIt turns out that many inference operations, like our first version of make_log_joint above, have strikingly short implementations in terms of existing effect handlers in pyro.poutine. \nExample: Variational inference with a Monte Carlo ELBO\nFor example, here is an implementation of variational inference with a Monte Carlo ELBO that uses poutine.trace, poutine.condition, and poutine.replay. 
This is very similar to the simple ELBO in pyro.contrib.minipyro.", "def monte_carlo_elbo(model, guide, batch, *args, **kwargs):\n # assuming batch is a dictionary, we use poutine.condition to fix values of observed variables\n conditioned_model = poutine.condition(model, data=batch)\n \n # we'll approximate the expectation in the ELBO with a single sample:\n # first, we run the guide forward unmodified and record values and distributions\n # at each sample site using poutine.trace\n guide_trace = poutine.trace(guide).get_trace(*args, **kwargs)\n \n # we use poutine.replay to set the values of latent variables in the model\n # to the values sampled above by our guide, and use poutine.trace\n # to record the distributions that appear at each sample site in in the model\n model_trace = poutine.trace(\n poutine.replay(conditioned_model, trace=guide_trace)\n ).get_trace(*args, **kwargs)\n \n elbo = 0.\n for name, node in model_trace.nodes.items():\n if node[\"type\"] == \"sample\":\n elbo = elbo + node[\"fn\"].log_prob(node[\"value\"]).sum()\n if not node[\"is_observed\"]:\n elbo = elbo - guide_trace.nodes[name][\"fn\"].log_prob(node[\"value\"]).sum()\n return -elbo", "We use poutine.trace and poutine.block to record pyro.param calls for optimization:", "def train(model, guide, data):\n optimizer = pyro.optim.Adam({})\n for batch in data:\n # this poutine.trace will record all of the parameters that appear in the model and guide\n # during the execution of monte_carlo_elbo\n with poutine.trace() as param_capture:\n # we use poutine.block here so that only parameters appear in the trace above\n with poutine.block(hide_fn=lambda node: node[\"type\"] != \"param\"):\n loss = monte_carlo_elbo(model, guide, batch)\n \n loss.backward()\n params = set(node[\"value\"].unconstrained()\n for node in param_capture.trace.nodes.values())\n optimizer.step(params)\n pyro.infer.util.zero_grads(params)", "Example: exact inference via sequential enumeration\nHere is an example of a very different inference algorithm--exact inference via enumeration--implemented with pyro.poutine. A complete explanation of this algorithm is beyond the scope of this tutorial and may be found in Chapter 3 of the short online book Design and Implementation of Probabilistic Programming Languages. 
This example uses poutine.queue, itself implemented using poutine.trace, poutine.replay, and poutine.block, to enumerate over possible values of all discrete variables in a model and compute a marginal distribution over all possible return values or the possible values at a particular sample site:", "def sequential_discrete_marginal(model, data, site_name=\"_RETURN\"):\n \n from six.moves import queue # queue data structures\n q = queue.Queue() # Instantiate a first-in first-out queue\n q.put(poutine.Trace()) # seed the queue with an empty trace\n \n # as before, we fix the values of observed random variables with poutine.condition\n # assuming data is a dictionary whose keys are names of sample sites in model\n conditioned_model = poutine.condition(model, data=data)\n \n # we wrap the conditioned model in a poutine.queue,\n # which repeatedly pushes and pops partially completed executions from a Queue()\n # to perform breadth-first enumeration over the set of values of all discrete sample sites in model\n enum_model = poutine.queue(conditioned_model, queue=q)\n \n # actually perform the enumeration by repeatedly tracing enum_model\n # and accumulate samples and trace log-probabilities for postprocessing\n samples, log_weights = [], []\n while not q.empty():\n trace = poutine.trace(enum_model).get_trace()\n samples.append(trace.nodes[site_name][\"value\"])\n log_weights.append(trace.log_prob_sum())\n \n # we take the samples and log-joints and turn them into a histogram:\n samples = torch.stack(samples, 0)\n log_weights = torch.stack(log_weights, 0)\n log_weights = log_weights - dist.util.logsumexp(log_weights, dim=0)\n return dist.Empirical(samples, log_weights)", "(Note that sequential_discrete_marginal is very general, but is also quite slow. For high-performance parallel enumeration that applies to a less general class of models, see the enumeration tutorial.)\nExample: implementing lazy evaluation with the Messenger API\nNow that we've learned more about the internals of Messenger, let's use it to implement a slightly more complicated effect: lazy evaluation. We first define a LazyValue class that we will use to build up a computation graph:", "class LazyValue:\n def __init__(self, fn, *args, **kwargs):\n self._expr = (fn, args, kwargs)\n self._value = None\n \n def __str__(self):\n return \"({} {})\".format(str(self._expr[0]), \" \".join(map(str, self._expr[1])))\n \n def evaluate(self):\n if self._value is None:\n fn, args, kwargs = self._expr\n fn = fn.evaluate() if isinstance(fn, LazyValue) else fn\n args = tuple(arg.evaluate() if isinstance(arg, LazyValue) else arg\n for arg in args)\n kwargs = {k: v.evaluate() if isinstance(v, LazyValue) else v\n for k, v in kwargs.items()}\n self._value = fn(*args, **kwargs)\n return self._value", "With LazyValue, implementing lazy evaluation as a Messenger compatible with other effect handlers is suprisingly easy. We just make each msg[\"value\"] a LazyValue and introduce a new operation type \"apply\" for deterministic operations:", "class LazyMessenger(pyro.poutine.messenger.Messenger):\n def _process_message(self, msg):\n if msg[\"type\"] in (\"apply\", \"sample\") and not msg[\"done\"]:\n msg[\"done\"] = True\n msg[\"value\"] = LazyValue(msg[\"fn\"], *msg[\"args\"], **msg[\"kwargs\"])", "Finally, just like torch.autograd overloads torch tensor operations to record an autograd graph, we need to wrap any operations we'd like to be lazy. We'll use pyro.poutine.runtime.effectful as a decorator to expose these operations to LazyMessenger. 
effectful constructs a message much like the one above and sends it up and down the effect handler stack, but allows us to set the type (in this case, to \"apply\" instead of \"sample\") so that these operations aren't mistaken for sample statements by other effect handlers like TraceMessenger:", "@effectful(type=\"apply\")\ndef add(x, y):\n return x + y\n\n@effectful(type=\"apply\")\ndef mul(x, y):\n return x * y\n\n@effectful(type=\"apply\")\ndef sigmoid(x):\n return torch.sigmoid(x)\n\n@effectful(type=\"apply\")\ndef normal(loc, scale):\n return dist.Normal(loc, scale)", "Applied to another model:", "def biased_scale(guess):\n weight = pyro.sample(\"weight\", normal(guess, 1.))\n tolerance = pyro.sample(\"tolerance\", normal(0., 0.25))\n return pyro.sample(\"measurement\", normal(add(mul(weight, 0.8), 1.), sigmoid(tolerance)))\n\nwith LazyMessenger():\n v = biased_scale(8.5)\n print(v)\n print(v.evaluate())", "Together with other effect handlers like TraceMessenger and ConditionMessenger, with which it freely composes, LazyMessenger demonstrates how to use Poutine to quickly and concisely implement state-of-the-art PPL techniques like delayed sampling with Rao-Blackwellization.\nReferences: algebraic effects and handlers in programming language research\nThis section contains some references to PL papers for readers interested in this direction.\nAlgebraic effects and handlers, which were developed starting in the early 2000s and are a subject of active research in the programming languages community, are a versatile abstraction for building modular implementations of nonstandard interpreters of particular statements in a programming language, like pyro.sample or pyro.param. They were originally introduced to address the difficulty of composing nonstandard interpreters implemented with monads and monad transformers.\n\n\nFor an accessible introduction to the effect handlers literature, see the excellent review/tutorial paper \"Handlers in Action\" by Ohad Kammar, Sam Lindley, and Nicolas Oury, and the references therein.\n\n\nAlgebraic effect handlers were originally introduced by Gordon Plotkin and Matija Pretnar in the paper \"Handlers of Algebraic Effects\".\n\n\nA useful mental model of effect handlers is as exception handlers that are capable of resuming computation in the try block after raising an exception and performing some processing in the except block. This metaphor is explored further in the experimental programming language Eff and its companion paper \"Programming with Algebraic Effects and Handlers\" by Andrej Bauer and Matija Pretnar.\n\n\nMost effect handlers in Pyro are \"linear,\" meaning that they only resume once per effectful operation and do not alter the order of execution of the original program. One exception is poutine.queue, which uses an inefficient implementation strategy for multiple resumptions like the one described for delimited continuations in the paper \"Capturing the Future by Replaying the Past\" by James Koppel, Gabriel Scherer, and Armando Solar-Lezama. \n\n\nMore efficient implementation strategies for effect handlers in mainstream programming languages like Python or JavaScript is an area of active research. One promising line of work involves selective continuation-passing style transforms as in the paper \"Type-Directed Compilation of Row-Typed Algebraic Effects\" by Daan Leijen." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vishaalprasad/AnimeRecommendation
notebooks/exploration/using_pretrained_cnn.ipynb
mit
[ "This notebook is an attempt to test and prototype using a pretrained tensorflow CNN to do classification. The end goal is to be able to take add the following as an anime-specific feature: a CNN's embeddings of the default image on MyAnimeList for that anime. The idea is that there is certain visual content that goes into a person's enjoyment of an anime: art style, character design, and color scheme, for example. I want that information via a high-level representation of the image in a deep CNN pipeline. This notebook is simply an attempt getting getting the CNN part to work. \nThe CNN is is downloaded from Illustration2Vec (Saito & Matsui, 2015) and is pretrained on anime images. That means the feature space is uniquely suited to capturing relevant information from anime. However, this was done on Caffe, which I do not have installed. Consequently, I used the caffe-tensorflow tool (https://github.com/ethereon/caffe-tensorflow) to convert this model into a tensorflow model (via an Amazon EC2 instance).\nThere are examples but no clear tutorials on how to use the caffe-tensorflow tool, so this exploratory notebook is an attempt to get it to work.", "import numpy as np\nimport tensorflow as tf\nimport os.path as osp", "Images are 224x224 pixels, with 3 channels. Batch size is 50. This is specified in the caffemodel but not in the tf class (mynet.py)", "input_size = {50, 3, 224, 224} \n\nfake_data = np.random.rand(2, 224, 224, 3)", "Now to actually load the model.", "from mynet import CaffeNet\nimages = tf.placeholder(tf.float32, [None, 224, 224, 3])\n\nnet = CaffeNet({'data':images})\n\nsesh = tf.Session()\nsesh.run(tf.global_variables_initializer())\n\n# Load the data\nnet.load('mynet.npy', sesh)\n\n# Forward pass\noutput = sesh.run(net.get_output(), feed_dict={images: fake_data})\n\nsesh.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jjdblast/RoadTrafficSimulator
experiments/report.ipynb
mit
[ "%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Запустим симулятор с фиксированными значениями времени переключения светофоров.", "data = pd.read_table(\"./1.data\", sep=\" \")\nplt.plot(data['multiplier'], data['avg_speed'], '-o')", "Теперь рассмотрим случайные значения в качестве времени между переключениями светофоров.", "data = pd.read_table(\"./2.data\", sep=\" \")\nplt.plot(data['it'], data['avg_speed'], '-o')", "Посмотрим на влияние phaseOffset на величину средней скорости.", "data = pd.read_table(\"./3.data\", sep=\" \")\ndata = data.sort(columns='it')\nplt.plot(data['it'], data['avg_speed'], '-o')", "Как видим, максимальное и миннимальное сильно отличаются следовательно этот параметр существенный." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
arcyfelix/Courses
18-03-07-Deep Learning With Python by François Chollet/Chapter 6.1.1 - One-hot encoding of words and characters.ipynb
apache-2.0
[ "Chapter 6.1.1 - One-hot encoding of words and characters", "import numpy as np", "Initializing an example", "samples = ['The cat sat on the mat.', 'The dog ate my homework.']\n\n# Intializing token index as an empty dictionary\ntoken_index = {}", "Splitting the sentence (word level)", "# Testing result of function .split()\nsamples[0].split()\n\nfor sample in samples:\n # Each sample is split into words. \n # Currently, punctuation is not ommited\n for word in sample.split():\n if word not in token_index:\n # Assign a unique index to each unique word\n # Index 0 is not assigned to anything.\n token_index[word] = len(token_index) + 1\n\ntoken_index", "Vectoring the example", "# Taking into consideration only first 10 words in each sentence.\nmax_length = 10\n\n# Initializing the result array with zeros\n# It will be of shape (number_of_samples, max_length_taken_into_consideration, number_of_unique_words)\nresults = np.zeros((len(samples), max_length, max(token_index.values()) + 1))\n\n# Enumerating through samples and words\n# One-hot encoding\nfor i, sample in enumerate(samples):\n for j, word in list(enumerate(sample.split()))[:max_length]:\n index = token_index.get(word)\n results[i, j, index] = 1.\n\nresults", "Splitting the sentence (character level)", "import string\n\nsamples = ['The cat sat on the mat.', 'The dog ate my homework.']\n\n# Assigning all prinatable ASCII characters\ncharacters = string.printable \n\ncharacters\n\n# Tokenizing the characters\ntoken_index = dict(zip(characters, range(1, len(characters) + 1)))\n\ntoken_index\n\n# Take intro consideration only first 50 character of the sentence\nmax_length = 50\n\nresults = np.zeros((len(samples), max_length, max(token_index.values()) + 1))\n\nfor i, sample in enumerate(samples):\n for j, character in enumerate(sample[:max_length]):\n index = token_index.get(character)\n results[i, j, index] = 1.\n\nresults", "Keras Tokenizer", "# Importing Keras Tokenizer\nfrom keras.preprocessing.text import Tokenizer\n\nsamples = ['The cat sat on the mat.', 'The dog ate my homework.']\n\n# Intializing tokenizer, which will take into account only 1000 most commonly used words.\ntokenizer = Tokenizer(num_words = 1000)\n\n# Building the dictionary\ntokenizer.fit_on_texts(samples)\n\n# Turning the sequences to an array of integers corresponding to the unique words\nsequences = tokenizer.texts_to_sequences(samples)\n\nsequences\n\n# Representing the data as one-hot encoded.\none_hot_results = tokenizer.texts_to_matrix(samples, \n mode = 'binary')\n\none_hot_results\n\n# Retrieving the index\nword_index = tokenizer.word_index\n\nword_index", "One-hot hashing\n\"A variant of one-hot encoding is the so-called \"one-hot hashing trick\", which can be used when the number of unique tokens in your vocabulary is too large to handle explicitly. Instead of explicitly assigning an index to each word and keeping a reference of these indices in a dictionary, one may hash words into vectors of fixed size. This is typically done with a very lightweight hashing function. The main advantage of this method is that it does away with maintaining an explicit word index, which saves memory and allows online encoding of the data (starting to generate token vectors right away, before having seen all of the available data). 
The one drawback of this method is that it is susceptible to \"hash collisions\": two different words may end up with the same hash, and subsequently any machine learning model looking at these hashes won't be able to tell the difference between these words. The likelihood of hash collisions decreases when the dimensionality of the hashing space is much larger than the total number of unique tokens being hashed.\"", "samples = ['The cat sat on the mat.', 'The dog ate my homework.']\n\n# Storing words as vector of size 1000.\n# Possible hash collisions and the accuracy of the encoding will drop.\n\ndimensionality = 1000\nmax_length = 10\n\nresults = np.zeros((len(samples), max_length, dimensionality))\n\nfor i, sample in enumerate(samples):\n for j, word in list(enumerate(sample.split()))[:max_length]:\n # Hash the word into a \"random\" integer index\n # that is between 0 and 1000\n index = abs(hash(word)) % dimensionality\n results[i, j, index] = 1.\n\nresults\n\n# 2 examples, each of size 10 (or less, but encoding will persist) with one-hot hashed words\nresults.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tritemio/pybroom
doc/notebooks/pybroom-example-multi-datasets-minimize.ipynb
mit
[ "PyBroom Example - Multiple Datasets - Minimize\nThis notebook is part of pybroom.\n\nThis notebook demonstrate using pybroom when performing Maximum-Likelihood fitting\n(scalar minimization as opposed to curve fitting) of a set of datasets with lmfit.minimize.\nWe will show that pybroom greatly simplifies comparing, filtering and plotting fit results \nfrom multiple datasets.\nFor an example using curve fitting see\npybroom-example-multi-datasets.", "%matplotlib inline\n%config InlineBackend.figure_format='retina' # for hi-dpi displays\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib.pylab import normpdf\nimport seaborn as sns\nfrom lmfit import Model\nimport lmfit\nprint('lmfit: %s' % lmfit.__version__)\n\nsns.set_style('whitegrid')\n\nimport pybroom as br", "Create Noisy Data\nSimulate N datasets which are identical except for the additive noise.", "N = 20 # number of datasets\nn = 1000 # number of sample in each dataset\n\nnp.random.seed(1)\nd1 = np.random.randn(20, int(0.6*n))*0.5 - 2\nd2 = np.random.randn(20, int(0.4*n))*1.5 + 2\nd = np.hstack((d1, d2))\n\nds = pd.DataFrame(data=d, columns=range(d.shape[1])).stack().reset_index()\nds.columns = ['dataset', 'sample', 'data']\nds.head()\n\nkws = dict(bins = np.arange(-5, 5.1, 0.1), histtype='step', \n lw=2, color='k', alpha=0.1)\nfor i in range(N):\n ds.loc[ds.dataset == i, :].data.plot.hist(**kws)", "Model Fitting\nTwo-peaks model\nHere, we use a Gaussian mixture distribution for fitting the data.\nWe fit the data using the Maximum-Likelihood method, i.e. we minimize the\n(negative) log-likelihood function:", "# Model PDF to be maximized\ndef model_pdf(x, a2, mu1, mu2, sig1, sig2):\n a1 = 1 - a2\n return (a1 * normpdf(x, mu1, sig1) + \n a2 * normpdf(x, mu2, sig2))\n\n# Function to be minimized by lmfit\ndef log_likelihood_lmfit(params, x):\n pnames = ('a2', 'mu1', 'mu2', 'sig1', 'sig2')\n kws = {n: params[n] for n in pnames}\n return -np.log(model_pdf(x, **kws)).sum()", "We define the parameters and \"fit\" the $N$ datasets by minimizing the (scalar) function log_likelihood_lmfit:", "params = lmfit.Parameters()\nparams.add('a2', 0.5, min=0, max=1)\nparams.add('mu1', -1, min=-5, max=5)\nparams.add('mu2', 1, min=-5, max=5)\nparams.add('sig1', 1, min=1e-6)\nparams.add('sig2', 1, min=1e-6)\nparams.add('ax', expr='a2') # just a test for a derived parameter\n\nResults = [lmfit.minimize(log_likelihood_lmfit, params, args=(di,), \n nan_policy='omit', method='least_squares')\n for di in d]", "Fit results can be inspected with\nlmfit.fit_report() or params.pretty_print():", "print(lmfit.fit_report(Results[0]))\nprint()\nResults[0].params.pretty_print()", "This is good for peeking at the results. However,\nextracting these data from lmfit objects is quite a chore\nand requires good knowledge of lmfit objects structure.\npybroom helps in this task: it extracts data from fit results and\nreturns familiar pandas DataFrame (in tidy format). 
\nThanks to the tidy format these data can be\nmuch more easily manipulated, filtered and plotted.\nLet's use the glance and \ntidy functions:", "dg = br.glance(Results)\ndg.drop('message', 1).head()\n\ndt = br.tidy(Results, var_names='dataset')\ndt.query('dataset == 0')", "Note that while glance returns one row per fit result, the tidy function\nreturn one row per fitted parameter.\nWe can query the value of one parameter (peak position) across the multiple datasets:", "dt.query('name == \"mu1\"').head()", "By computing the standard deviation of the peak positions:", "dt.query('name == \"mu1\"')['value'].std()\n\ndt.query('name == \"mu2\"')['value'].std()", "we see that the estimation of mu1 as less error than the estimation\nof mu2. \nThis difference can be also observed in the histogram of \nthe fitted values:", "dt.query('name == \"mu1\"')['value'].hist()\ndt.query('name == \"mu2\"')['value'].hist(ax=plt.gca());", "We can also use pybroom's tidy_to_dict \nand dict_to_tidy \nfunctions to convert\na set of fitted parameters to a dict (and vice-versa):", "kwd_params = br.tidy_to_dict(dt.loc[dt['dataset'] == 0])\nkwd_params\n\nbr.dict_to_tidy(kwd_params)", "This conversion is useful to call a python functions\npassing argument values from a tidy DataFrame. \nFor example, here we use tidy_to_dict\nto easily plot the model distribution:", "bins = np.arange(-5, 5.01, 0.25)\nx = bins[:-1] + 0.5*(bins[1] - bins[0])\ngrid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)\ngrid.map(plt.hist, 'data', bins=bins, normed=True);\nfor i, ax in enumerate(grid.axes):\n kw_pars = br.tidy_to_dict(dt.loc[dt.dataset == i], keys_exclude=['ax'])\n y = model_pdf(x, **kw_pars)\n ax.plot(x, y, lw=2, color='k')", "Single-peak model\nFor the sake of the example we also fit the $N$ datasets with a single Gaussian distribution:", "def model_pdf1(x, mu, sig):\n return normpdf(x, mu, sig)\n\ndef log_likelihood_lmfit1(params, x):\n return -np.log(model_pdf1(x, **params.valuesdict())).sum()\n\nparams = lmfit.Parameters()\nparams.add('mu', 0, min=-5, max=5)\nparams.add('sig', 1, min=1e-6)\n\nResults1 = [lmfit.minimize(log_likelihood_lmfit1, params, args=(di,), \n nan_policy='omit', method='least_squares')\n for di in d]\n\ndg1 = br.glance(Results)\ndg1.drop('message', 1).head()\n\ndt1 = br.tidy(Results1, var_names='dataset')\ndt1.query('dataset == 0')", "Augment?\nPybroom augment function \nextracts information that is the same size as the input dataset,\nfor example the array of residuals. 
In this case, however, we performed a scalar minimization\n(the log-likelihood function returns a scalar) and therefore the MinimizerResult object\ndoes not contain any residual array or other data of the same size as the dataset.\nComparing fit results\nWe will do instead a comparison of single and two-peaks distribution using the results\nfrom the tidy function obtained in the previous section.\nWe start with the following plot:", "dt['model'] = 'twopeaks'\ndt1['model'] = 'onepeak'\ndt_tot = pd.concat([dt, dt1], ignore_index=True)\n\nbins = np.arange(-5, 5.01, 0.25)\nx = bins[:-1] + 0.5*(bins[1] - bins[0])\ngrid = sns.FacetGrid(ds.query('dataset < 6'), col='dataset', hue='dataset', col_wrap=3)\ngrid.map(plt.hist, 'data', bins=bins, normed=True);\nfor i, ax in enumerate(grid.axes):\n kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'onepeak')])\n y1 = model_pdf1(x, **kw_pars)\n li1, = ax.plot(x, y1, lw=2, color='k', alpha=0.5)\n kw_pars = br.tidy_to_dict(dt_tot.loc[(dt_tot.dataset == i) & (dt_tot.model == 'twopeaks')], keys_exclude=['ax'])\n y = model_pdf(x, **kw_pars)\n li, = ax.plot(x, y, lw=2, color='k')\ngrid.add_legend(legend_data=dict(onepeak=li1, twopeaks=li), \n label_order=['onepeak', 'twopeaks'], title='model');", "The problem is that FacetGrid only takes one DataFrame as input. In the previous\nexample we provide the DataFrame of \"experimental\" data (ds) and use the .map method to plot\nhistograms of the different datasets. The fitted distributions, instead, are\nplotted manually in the for loop.\nWe can invert the approach, and pass to FacetGrid the DataFrame of fitted parameters (dt_tot),\nwhile leaving the simple histogram for manual plotting. In this case we need to write an \nhelper function (_plot) that knows how to plot a distribution given a set of parameter:", "def _plot(names, values, x, label=None, color=None):\n df = pd.concat([names, values], axis=1)\n kw_pars = br.tidy_to_dict(df, keys_exclude=['ax'])\n func = model_pdf1 if label == 'onepeak' else model_pdf\n y = func(x, **kw_pars)\n plt.plot(x, y, lw=2, color=color, label=label) \n\nbins = np.arange(-5, 5.01, 0.25)\nx = bins[:-1] + 0.5*(bins[1] - bins[0])\ngrid = sns.FacetGrid(dt_tot.query('dataset < 6'), col='dataset', hue='model', col_wrap=3)\ngrid.map(_plot, 'name', 'value', x=x)\ngrid.add_legend()\nfor i, ax in enumerate(grid.axes):\n ax.hist(ds.query('dataset == %d' % i).data, bins=bins, histtype='stepfilled', normed=True, \n color='gray', alpha=0.5);", "For an even better (i.e. simpler) example of plots of fit results see\npybroom-example-multi-datasets." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
y2ee201/Deep-Learning-Nanodegree
tensorboard/Anna KaRNNa Name Scoped.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use.", "with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]", "Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.", "def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]", "I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. 
In this way we can feed batches to the network and the cell states will continue through on each batch.", "def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_layers\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN putputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n \n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph", "Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. 
If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.", "batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001", "Write out the graph for TensorBoard", "model = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n file_writer = tf.summary.FileWriter('./logs/3', sess.graph)", "Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.", "!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')", "Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. 
To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jeroarenas/MLBigData
0_Introduction/Intro_PySpark_1.ipynb
mit
[ "Counting words\n 1.- Creating a simple RDD .\nWe will create a simple RDD and apply basic operations", "fruits = ['apple', 'orange', 'banana', 'grape', 'watermelon', 'apple', 'orange', 'apple']\nnumber_partitions = 4\ndataRDD = sc.parallelize(fruits, number_partitions)\nprint type(dataRDD)", "Exercise: Apply the corresponding operation:\n- obtain the total number of elements in the RDD (count)\n- print the first two elements in the RDD (take)\n- print the first two alphabetically sorted elements in the RDD (takeOrdered)\nThe answer should be:\n<pre><code>\nThere are 8 elements in the RDD\n\nThese are the first two:\n['apple', 'orange']\n\nThese are the first two, alphabetically ordered:\n['apple', 'apple']\n</code></pre>", "N_data = dataRDD.<COMPLETAR>()\nprint \"There are %d elements in the RDD\\n\" % N_data\n\nprint \"These are the first two:\"\nprint dataRDD.<COMPLETAR>(2)\n\nprint \"\\nThese are the first two, alphabetically ordered:\"\nprint dataRDD.<COMPLETAR>(2)\n", "2.- Simple transformations\n Exercise: Define a function 'complete_word' that adds ' fruit' to the input string. Use this function to process all elements in the RDD using map. Print all of the elements in the resulting RDD using collect().\nThe answer should be:\n<pre><code>\nTesting the function:\napple fruit\n\nThese are all the elements in the RDD:\n['apple fruit', 'orange fruit', 'banana fruit', 'grape fruit', 'watermelon fruit', 'apple fruit', 'orange fruit', 'apple fruit']\n</code></pre>", "def complete_word(word):\n return <COMPLETAR>\n\nprint \"Testing the function:\"\nprint complete_word('apple')\n\ndataRDDprocessed = dataRDD.map(<COMPLETAR>)\n\nprint \"\\nThese are all the elements in the RDD:\"\nprint dataRDDprocessed.<COMPLETAR>()", "We will use now a lambda function to do the same task", "dataRDDprocessed_lambda = dataRDD.map(lambda x: x + ' fruit')\n\nprint \"Result with a lambda function:\"\nprint dataRDDprocessed_lambda.<COMPLETAR>()", "Now let's count the number of characters of every processed word.\nThe answer should be:\n<pre><code>\n[11, 12, 12, 11, 16, 11, 12, 11]\n</code></pre>", "wordLengths = (dataRDDprocessed_lambda\n .map(<COMPLETAR>)\n .collect())\nprint wordLengths", "Let's obtain a string with all the words in the original RDD using two different approaches.\n Exercise: Complete the code and discuss the results:\nThe answer should be:\n<pre><code>\ntype 'str'\napple orange banana grape watermelon apple orange apple\ntype 'str'\napple orange banana grape watermelon apple orange apple\n</code></pre>", "string1 = \" \".join(<COMPLETAR>)\nprint type(string1)\nprint string1\n\nstring2 = dataRDD.reduce(lambda x, y: <COMPLETAR>)\nprint type(string2)\nprint string2", "Exercise: Repeat the scheme above to obtain the total number of characters in the RDD:\nThe answer should be:\n<pre><code>\n48\n48\n</code></pre>", "Nchars = sum(dataRDD.<COMPLETAR>)\nprint Nchars\n\nNchars = dataRDD.map(len).reduce(<COMPLETAR>)\nprint Nchars", "3.- Creating a pair RDD and counting\nEvery element of a pair RDD is a tuple (k,v) where k is the key and v is the value.\n Exercise: Transform the original RDD into a pair RDD, where the value is always 1.\nThe answer should be:\n<pre><code>\n[('apple', 1), ('orange', 1), ('banana', 1), ('grape', 1), ('watermelon', 1), ('apple', 1), ('orange', 1), ('apple', 1)]\n\nGrouped pairs as an interable:\n[('orange', <pyspark.resultiterable.ResultIterable object at 0xb0e455cc>), ('watermelon', <pyspark.resultiterable.ResultIterable object at 0xb0e45b8c>), ('grape', 
<pyspark.resultiterable.ResultIterable object at 0xb0e45bec>), ('apple', <pyspark.resultiterable.ResultIterable object at 0xb0e4546c>), ('banana', <pyspark.resultiterable.ResultIterable object at 0xb1f5dd6c>)]\n\nGrouped pairs as a list\n[('orange', [1, 1]), ('watermelon', [1]), ('grape', [1]), ('apple', [1, 1, 1]), ('banana', [1])]\n\nGrouped pairs + count\n[('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)]\n</code></pre>", "pairRDD = dataRDD.map(lambda x: (x, 1))\nprint pairRDD.collect()\n\nprint \"Result: (key, iterable):\"\ngroupedRDD = pairRDD.groupByKey()\nprint groupedRDD.collect()\nprint \" \"\n\nprint \"Result: (key, list of results):\"\ngroupedRDDprocessed = groupedRDD.mapValues(list)\nprint groupedRDDprocessed.collect()\nprint \" \"\n\nprint \"Result: (key, count):\"\ngroupedRDDprocessed = groupedRDD.mapValues(len)\nprint groupedRDDprocessed.collect()\nprint \" \"\n", "Exercise: Use groupByKey to count the frequencies of every word ( caution!: groupByKey transformation can be very inefficient, since it needs to exchange data among workers):\nThe answer should be:\n<pre><code>\nResult: (key, count):\n[('apple', 1), ('orange', 1)]\n[('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)]\n</code></pre>", "print \"Result: (key, count):\"\n\ncountRDD = pairRDD.groupByKey().map(<COMPLETAR>)\n\nprint countRDD.collect()\nprint \" \"\n", "Exercise: Repeat the counting using reduceByKey, a much more efficient approach, since it operates at every worker before sharing results.\nThe answer should be:\n<pre><code>\nResult: (key, count):\n[('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)]\n</code></pre>", "print \"Result: (key, count):\"\ncountRDD = pairRDD.reduceByKey(<COMPLETAR>)\nprint countRDD.collect()\nprint \" \"\n", "Exercise: Combine map, reduceByKey and collect to obtain the counts per word:\nThe answer should be:\n<pre><code>\n[('orange', 2), ('watermelon', 1), ('grape', 1), ('apple', 3), ('banana', 1)]\n</code></pre>", "counts = (dataRDD\n .<COMPLETAR>\n .<COMPLETAR>\n .<COMPLETAR>\n )\nprint counts\n\n", "4.- Filtering a RDD\nCount the number of words that only appear once in the dataset.\nThe answer should be:\n<pre><code>\n3\n</code></pre>", "N_unique_words = (dataRDD\n .<COMPLETAR>\n .<COMPLETAR>\n .filter(<COMPLETAR>)\n .count()\n )\nprint N_unique_words", "5.- Counting words in a file \nWe will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method.", "textRDD = sc.textFile('data/shakespeare.txt', 8)\nprint \"Number of lines of text = %d\" % textRDD.count()", "Exercise: Use the code written in the previous sections to obtain the counts for every word in the text. Print the first 10 results. Observe the result, is this what we want? What is going wrong?\nThe answer should be:\n<pre><code>\n[(u'', 9493), (u' thou diest in thine unthankfulness, and thine ignorance makes', 1), (u\" Which I shall send you written, be assur'd\", 1), (u' I do beseech you, take it not amiss:', 1), (u' their mastiffs are of unmatchable courage.', 1), (u' With us in Venice, if it be denied,', 1), (u\" Hot. I'll have it so. A little charge will do it.\", 1), (u' By what yourself, too, late have spoke and done,', 1), (u\" FIRST LORD. 
He's but a mad lord, and nought but humours sways him.\", 1), (u' none will entertain it.', 1)]\n</code></pre>", "counts = (textRDD\n .map(lambda x: (x, 1))\n .<COMPLETAR>\n .take(10)\n )\nprint counts", "Exercise: Modify the code by introducing a flatMap operation and observe the result.\nThe answer should be:\n<pre><code>\n[(u'fawn', 11), (u'bishops.', 2), (u'divinely', 1), (u'mustachio', 1), (u'four', 114), (u'reproach-', 1), (u'drollery.', 1), (u'conjuring', 1), (u'slew.', 1), (u'Calen', 1)]\n</code></pre>", "counts = (textRDD\n .flatMap(lambda x: x.split())\n .map(<COMPLETAR>)\n .<COMPLETAR>\n .take(10)\n )\nprint counts", "Exercise: Modify the code to obtain 5 words that appear exactly 111 times in the text.\nThe answer should be:\n<pre><code>\n[(u'think,', 111), (u'see,', 111), (u'gone.', 111), (u\"King's\", 111), (u'having', 111)]\n</code></pre>", "counts = (textRDD\n .flatMap(<COMPLETAR>)\n .map(<COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n .filter(<COMPLETAR>)\n .take(<COMPLETAR>)\n )\nprint counts", "Exercise: Modify the code to obtain the 5 words that most appear in the text.\nThe answer should be:\n<pre><code>\n[(u'the', 23197), (u'I', 19540), (u'and', 18263), (u'to', 15592), (u'of', 15507)]\n</code></pre>", "counts = (textRDD\n .<COMPLETAR>\n .<COMPLETAR>\n .<COMPLETAR>\n .takeOrdered(5,key = lambda x: <COMPLETAR>)\n )\nprint counts", "6.- Cleaning the text \nYou may see in the results that we observe some words in capital letters, that some other punctuation characters appear as well. We will incorporate in the code a cleaning function such that we eliminate unwanted characters. We provide a simple cleaning function that lowers all the characters.\n Exercise: Use it in the code and verify that the word \"I\" is printed as \"i\".\nNote: Since we are modifying the strings, the counts will differ with respect to the previous values.\nThe answer should be:\n<pre><code>\n[(u'the', 27267), (u'and', 25340), (u'i', 19540), (u'to', 18656), (u'of', 17301)]\n</code></pre>", "def clean_text(string):\n string = string.lower()\n return string \n\ncounts = (textRDD\n .flatMap(<COMPLETAR>)\n .map(<COMPLETAR>)\n .map(<COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n .takeOrdered(<COMPLETAR>)\n )\nprint counts", "We will now search for non-alphabetical characters in the dataset. We can use the Python method 'isalpha' to decide wether or not a string is composed of characters a-z.\n Exercise: Use that function to print the 20 words with non-alphabetic characters that most appear in the text and print the total number of strings with non-alphabetic characters.\nThe answer should be:\n<pre><code>\nThe database has 40957 words that need cleaning, for example:\n\n[(u\"i'll\", 1737), (u'you,', 1478), (u\"'tis\", 1367), (u'sir,', 1235), (u'me,', 1219), (u\"th'\", 1146), (u'o,', 1008), (u'lord,', 977), (u'come,', 875), (u'me.', 823), (u'you.', 813), (u'why,', 805), (u'now,', 785), (u'it.', 784), (u'him.', 755), (u'lord.', 702), (u'him,', 698), (u'ay,', 661), (u'well,', 647), (u'and,', 647)]\n</code></pre>", "countsRDD = (textRDD\n .flatMap(<COMPLETAR>)\n .map(<COMPLETAR>)\n .filter(lambda x: not x.isalpha())\n .map(<COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n )\ncountsRDD.cache()\n\nprint \"The database has %d words that need cleaning, for example:\\n\" % countsRDD.count()\nprint countsRDD.takeOrdered(20,key = lambda x: -x[1])", "You can clearly observe now all the punctuation symbols that have not been removed yet.\n Exercise: Write a new_clean_function such that all the unwanted symbols have been remode. 
As a hint, we include the code for removing the symbol '.'\nThe answer should be:\n<pre><code>\nThe database has 0 elements that need preprocessing, for example:\n[]\n</code></pre>", "def new_clean_text(string):\n string = string.lower()\n list_of_chars = ['.', <COMPLETAR>]\n for c in <COMPLETAR>:\n string = string.replace(c,'')\n return string \n\ncountsRDD = (textRDD\n .flatMap(<COMPLETAR>)\n .map(new_clean_text)\n .filter(lambda x: not x.isnumeric())\n .filter(lambda x: len(x)>0) \n .filter(lambda x: not x.isalnum()) \n .map(<COMPLETAR>)\n .reduceByKey(<COMPLETAR>)\n )\ncountsRDD.cache()\n\nNpreprocess = countsRDD.count()\nprint \"The database has %d elements that need preprocessing, for example:\" % Npreprocess\nprint countsRDD.takeOrdered(20,key = lambda x: -x[1])", "Exercise: Now that we have completely cleaned the words, try to find the 20 most frequent cleaned strings.\nThe answer should be:\n<pre><code>\nProcessing the dataset to find the 20 most frequent strings:\n\n[(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463), (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890), (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678), (u'for', 7558), (u'be', 6857), (u'his', 6857), (u'your', 6655), (u'this', 6602)]\n</code></pre>", "print \"Processing the dataset to find the 20 most frequent strings:\\n\"\ncountsRDDclean = (textRDD\n .<COMPLETAR>\n )\ncountsRDDclean.cache()\nprint countsRDDclean.takeOrdered(20,key = lambda x: -x[1])\n", "7.- Removing stopwords\nMany of the most frequent words obtained in the previous section are irrelevant to many tasks, they are know as stop-words. We will use here a stop list (list of meaningless words) to clean out those terms.\n Exercise: Observe the line used for converting the strings to unicode. This task could be implemented using a \"for\" loop, but we are using what is called a \"List Comprehension\".", "import csv\nwith open('data/english_stopwords.txt', 'rb') as csvfile:\n \n reader = csv.reader(csvfile)\n stopwords = []\n \n for row in reader:\n stopwords.append(row[0].replace(\"'\",'').replace('\\t',''))\n \n stopwords = [unicode(s, \"utf-8\") for s in stopwords]\n \nprint stopwords", "Exercise: Apply an extra filter that removes the stop words from the calculations. Print the 50 most frequent words ONLY THE WORDS separated with blank spaces. Are they informative about Shakespeare's books?\nThe answer should be:\n<pre><code>\nThese are the most frequent words:\n\nall no lord king good now sir come or let enter love hath man one go upon like say know may make us yet must see tis give can take speak mine first th duke tell time exeunt much think never heart exit queen doth art great hear lady death\n</code></pre>", "countsRDDclean = (textRDD\n .<COMPLETAR>\n .filter(lambda x: <COMPLETAR> stopwords)\n .<COMPLETAR>\n )\ncountsRDDclean.cache()\npairs = countsRDDclean.takeOrdered(50,key = lambda x: -x[1])\n#print pairs\n\nwords = ' '.join([x[0] for x in pairs])\nprint \"These are the most frequent words:\\n\"\nprint words\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
temmeand/scikit-rf
doc/source/tutorials/Networks.ipynb
bsd-3-clause
[ "Networks\nIntroduction\nThis tutorial gives an overview of the microwave network analysis \nfeatures of skrf. For this tutorial, and the rest of the scikit-rf documentation, it is assumed that skrf has been imported as rf. Whether or not you follow this convention in your own code is up to you.", "import skrf as rf\nfrom pylab import *", "If this produces an import error, please see Installation .\nCreating Networks\nskrf provides an object for a N-port microwave Network. A Network can be created in a number of ways:\n - from a Touchstone file\n - from S-parameters\n - from Z-parameters\n - from other RF parameters (Y, ABCD, T, etc.) \nSome examples for each situation is given below.\nCreating Network from Touchstone file\nTouchstone file (.sNp files, with N being the number of ports) is a de facto standard to export N-port network parameter data and noise data of linear active devices, passive filters, passive devices, or interconnect networks. Creating a Network from a Touchstone file is simple:", "from skrf import Network, Frequency\n\nring_slot = Network('data/ring slot.s2p')", "Note that some softwares, such as ANSYS HFSS, add additional information to the Touchstone standard, such as comments, simulation parameters, Port Impedance or Gamma (wavenumber). These data are also imported if detected. \nA short description of the network will be printed out if entered onto the command line", "ring_slot", "Creating Network from s-parameters\nNetworks can also be created by directly passing values for the frequency, s-parameters (and optionally the port impedance z0). \nThe scattering matrix of a N-port Network is expected to be a Numpy array of shape (nb_f, N, N), where nb_f is the number of frequency points and N the number of ports of the network.", "# dummy 2-port network from Frequency and s-parameters\nfreq = Frequency(1, 10, 101, 'ghz')\ns = rand(101, 2, 2) + 1j*rand(101, 2, 2) # random complex numbers \n# if not passed, will assume z0=50. name is optional but it's a good practice.\nntwk = Network(frequency=freq, s=s, name='random values 2-port') \nntwk\n\nntwk.plot_s_db()", "Often, s-parameters are stored in separate arrays. 
In such case, one needs to forge the s-matrix:", "# let's assume we have separate arrays for the frequency and s-parameters\nf = np.array([1, 2, 3, 4]) # in GHz\nS11 = np.random.rand(4)\nS12 = np.random.rand(4)\nS21 = np.random.rand(4)\nS22 = np.random.rand(4)\n\n# Before creating the scikit-rf Network object, one must forge the Frequency and S-matrix:\nfreq2 = rf.Frequency.from_f(f, unit='GHz')\n\n# forging S-matrix as shape (nb_f, 2, 2)\n# there is probably smarter way, but less explicit for the purpose of this example:\ns = np.zeros((len(f), 2, 2), dtype=complex)\ns[:,0,0] = S11\ns[:,0,1] = S12\ns[:,1,0] = S21\ns[:,1,1] = S22\n\n# constructing Network object\nntw = rf.Network(frequency=freq2, s=s)\n\nprint(ntw)", "If necessary, the characteristic impedance can be passed as a scalar (same for all frequencies), as a list or an array:", "ntw2 = rf.Network(frequency=freq, s=s, z0=25, name='same z0 for all ports')\nprint(ntw2)\nntw3 = rf.Network(frequency=freq, s=s, z0=[20, 30], name='different z0 for each port')\nprint(ntw3)\nntw4 = rf.Network(frequency=freq, s=s, z0=rand(101,2), name='different z0 for each frequencies and ports')\nprint(ntw4)", "from z-parameters\nAs networks are also defined from their Z-parameters, there is from_z() method of the Network:", "# 1-port network example\nz = 10j\nZ = np.full((len(freq), 1, 1), z) # replicate z for all frequencies\n\nntw = rf.Network()\nntw = ntw.from_z(Z)\nntw.frequency = freq\nprint(ntw)", "from other network parameters (Z, Y, ABCD, T)\nIt is also possible to generate Networks from other kind of RF parameters, using the conversion functions: z2s, y2s, a2s, t2s, h2s. For example, the ABCD parameters of a serie-impedance is:\n$$\n\\left[\n\\begin{array}{cc}\n1 & Z \\\n0 & 1\n\\end{array}\n\\right]\n$$", "z = 20\nabcd = array([[1, z],\n [0, 1]])\n\ns = rf.a2s(tile(abcd, (len(freq),1,1)))\nntw = Network(frequency=freq, s=s)\nprint(ntw)", "Basic Properties\nThe basic attributes of a microwave Network are provided by the \nfollowing properties :\n\nNetwork.s : Scattering Parameter matrix. \nNetwork.z0 : Port Characteristic Impedance matrix.\nNetwork.frequency : Frequency Object. \n\nThe Network object has numerous other properties and methods. If you are using IPython, then these properties and methods can be 'tabbed' out on the command line. \nIn [1]: ring_slot.s&lt;TAB&gt;\nring_slot.line.s ring_slot.s_arcl ring_slot.s_im\nring_slot.line.s11 ring_slot.s_arcl_unwrap ring_slot.s_mag\n...\n\nAll of the network parameters are represented internally as complex numpy.ndarray. The s-parameters are of shape (nfreq, nport, nport)", "shape(ring_slot.s)", "Slicing\nYou can slice the Network.s attribute any way you want.", "ring_slot.s[:11,1,0] # get first 10 values of S21", "Slicing by frequency can also be done directly on Network objects like so", "ring_slot[0:10] # Network for the first 10 frequency points", "or with a human friendly string,", "ring_slot['80-90ghz']", "Notice that slicing directly on a Network returns a Network. 
So, a nice way to express slicing in both dimensions is", "ring_slot.s11['80-90ghz'] ", "Plotting\nAmongst other things, the methods of the Network class provide convenient ways to plot components of the network parameters, \n\nNetwork.plot_s_db : plot magnitude of s-parameters in log scale\nNetwork.plot_s_deg : plot phase of s-parameters in degrees\nNetwork.plot_s_smith : plot complex s-parameters on Smith Chart\n...\n\nIf you would like to use skrf's plot styling,", "%matplotlib inline \nrf.stylely()", "To plot all four s-parameters of the ring_slot on the Smith Chart.", "ring_slot.plot_s_smith()", "Combining this with the slicing features,", "from matplotlib import pyplot as plt\n\nplt.title('Ring Slot $S_{21}$')\n\nring_slot.s11.plot_s_db(label='Full Band Response')\nring_slot.s11['82-90ghz'].plot_s_db(lw=3,label='Band of Interest')", "For more detailed information about plotting see Plotting. \nArithmetic Operations\nElement-wise mathematical operations on the scattering parameter matrices are accessible through overloaded operators. To illustrate their usage, load a couple of Networks stored in the data module.", "from skrf.data import wr2p2_short as short \nfrom skrf.data import wr2p2_delayshort as delayshort \n\n\nshort - delayshort\nshort + delayshort\nshort * delayshort\nshort / delayshort\n", "All of these operations return Network types. For example, to plot the complex difference between short and delay_short,", "difference = (short - delayshort)\ndifference.plot_s_mag(label='Mag of difference')", "Another common application is calculating the phase difference using the division operator,", "(delayshort/short).plot_s_deg(label='Detrended Phase')", "Linear operators can also be used with scalars or a numpy.ndarray that is the same length as the Network.", "hopen = (short*-1)\nhopen.s[:3,...]\n\nrando = hopen *rand(len(hopen))\nrando.s[:3,...]", "Comparison of Networks\nComparison operators also work with networks:", "short == delayshort\n\nshort != delayshort", "Cascading and De-embedding\nCascading and de-embedding 2-port Networks can also be done through operators. The cascade function can be called through the power operator, **. To calculate a new network which is the cascaded connection of the two individual Networks line and short,", "short = rf.data.wr2p2_short\nline = rf.data.wr2p2_line\ndelayshort = line ** short", "De-embedding can be accomplished by cascading the inverse of a network. The inverse of a network is accessed through the property Network.inv. To de-embed the short from delay_short,", "short_2 = line.inv ** delayshort\n\nshort_2 == short", "The cascading operator also works for 2N-port Networks. This is illustrated in this example on balanced Networks. For example, assuming you want to cascade three 4-port Networks ntw1, ntw2 and ntw3, you can use:\nresulting_ntw = ntw1 ** ntw2 ** ntw3. Note that the port scheme assumed by the ** cascading operator with 4-port networks is:\nntw1 ntw2 ntw3\n +----+ +----+ +----+\n0-|0 2|--|0 2|--|0 2|-2\n1-|1 3|--|1 3|--|1 3|-3\n +----+ +----+ +----+\nMore documentation on Connecting Networks is available here: Connecting Networks.\nConnecting Multi-ports\nskrf supports the connection of arbitrary ports of N-port networks. It accomplishes this using an algorithm called sub-network growth[1], available through the function connect().
\nAs an example, terminating one port of an ideal 3-way splitter can be done like so,", "tee = rf.data.tee\ntee", "To connect port 1 of the tee, to port 0 of the delay short,", "terminated_tee = rf.connect(tee, 1, delayshort, 0)\nterminated_tee", "Note that this function takes into account port impedances. If two connected ports have different port impedances, an appropriate impedance mismatch is inserted. \nMore information on connecting Networks is available here: connecting Networks.\nFor advanced connections between many arbitrary N-port Networks, the Circuit object is more relevant since it allows defining explicitly the connections between ports and Networks. For more information, please refer to the Circuit documentation. \nInterpolation and Concatenation\nA common need is to change the number of frequency points of a Network. To use the operators and cascading functions the networks involved must have matching frequencies, for instance. If two networks have different frequency information, then an error will be raised,", "from skrf.data import wr2p2_line1 as line1\n\nline1", "line1+line\n\n---------------------------------------------------------------------------\nIndexError Traceback (most recent call last)\n&lt;ipython-input-49-82040f7eab08&gt; in &lt;module&gt;()\n----&gt; 1 line1+line\n\n/home/alex/code/scikit-rf/skrf/network.py in __add__(self, other)\n 500 \n 501 if isinstance(other, Network):\n--&gt; 502 self.__compatible_for_scalar_operation_test(other)\n 503 result.s = self.s + other.s\n 504 else:\n\n/home/alex/code/scikit-rf/skrf/network.py in __compatible_for_scalar_operation_test(self, other)\n 701 '''\n 702 if other.frequency != self.frequency:\n--&gt; 703 raise IndexError('Networks must have same frequency. See `Network.interpolate`')\n 704 \n 705 if other.s.shape != self.s.shape:\n\nIndexError: Networks must have same frequency. See `Network.interpolate`\n\nThis problem can be solved by interpolating one of Networks along the frequency axis using Network.resample.", "line1.resample(201)\nline1", "And now we can do things", "line1 + line", "You can also interpolate from a Frequency object. For example,", "line.interpolate_from_f(line1.frequency)", "A related application is the need to combine Networks which cover different frequency ranges. Two Networks can be concatenated (aka stitched) together using stitch, which concatenates networks along their frequency axis. To combine a WR-2.2 Network with a WR-1.5 Network,", "from skrf.data import wr2p2_line, wr1p5_line\n\nbig_line = rf.stitch(wr2p2_line, wr1p5_line)\nbig_line", "Reading and Writing\nFor long term data storage, skrf has support for reading and partial support for writing touchstone file format. Reading is accomplished with the Network initializer as shown above, and writing with the method Network.write_touchstone().\nFor temporary data storage, skrf object can be pickled with the functions skrf.io.general.read and skrf.io.general.write. The reason to use temporary pickles over touchstones is that they store all attributes of a network, while touchstone files only store partial information.", "rf.write('data/myline.ntwk',line) # write out Network using pickle\n\nntwk = Network('data/myline.ntwk') # read Network using pickle", "Frequently there is an entire directory of files that need to be analyzed. rf.read_all creates Networks from all files in a directory quickly. 
To load all skrf files in the data/ directory which contain the string 'wr2p2'.", "dict_o_ntwks = rf.read_all(rf.data.pwd, contains = 'wr2p2')\ndict_o_ntwks", "Other times you know the list of files that need to be analyzed. rf.read_all also accepts a files parameter. This example file list contains only files within the same directory, but you can store files however your application would benefit from.", "import os\ndict_o_ntwks_files = rf.read_all(files=[os.path.join(rf.data.pwd, test_file) for test_file in ['ntwk1.s2p', 'ntwk2.s2p']])\ndict_o_ntwks_files", "Other Parameters\nThis tutorial focuses on s-parameters, but other network representations are available as well. Impedance and Admittance Parameters can be accessed through the parameters Network.z and Network.y, respectively. Scalar components of complex parameters, such as Network.z_re, Network.z_im and plotting methods are available as well.\nOther parameters are only available for 2-port networks, such as wave cascading parameters (Network.t), and ABCD-parameters (Network.a)", "ring_slot.z[:3,...]\n\nring_slot.plot_z_im(m=1,n=0)", "References\n[1] Compton, R.C.; , \"Perspectives in microwave circuit analysis,\" Circuits and Systems, 1989., Proceedings of the 32nd Midwest Symposium on , vol., no., pp.716-718 vol.2, 14-16 Aug 1989. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=101955&isnumber=3167" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
CyberCRI/dataanalysis-herocoli-redmetrics
v1.52.2/Tests/8.1 RM-GF correlations tests.ipynb
cc0-1.0
[ "8.1 RM-GF correlations tests", "%run \"../Functions/8. RM-GF correlations.ipynb\"\n\nallData = allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.copy()", "Correlation between max chapter and answers\nmethod 1: correlation matrix\nindex: question groups\ncolumns: RedMetrics parameters", "#def getScoresOnQuestionsFromAllData(allData, Qs):", "method 2\nuse max chapter or <= max chapter?", "correctPerMaxChapter = pd.DataFrame(index = posttestScientificQuestions, columns = range(15))\n\nallData.loc[:, allData.loc['maxChapter', :] == 10].columns\n\n# when reaching checkpoint N, what is the rate of good answer for question Q?\nmaxCheckpointsDF = pd.DataFrame(index = ['maxCh'], columns=range(15))\n\nfor chapter in allData.loc['maxChapter', :].unique():\n eltsCount = len(allData.loc[:, allData.loc['maxChapter', :] == chapter].columns)\n maxCheckpointsDF.loc['maxCh', chapter] = eltsCount\n for q in posttestScientificQuestions:\n interestingElts = allData.loc[q, allData.loc['maxChapter', :] == chapter]\n scoreSum = interestingElts.sum()\n correctPerMaxChapter.loc[q, chapter] = int(scoreSum * 100 / eltsCount)\ncorrectPerMaxChapterNotNan = correctPerMaxChapter.fillna(-1)\n\n_fig1 = plt.figure(figsize=(20,20))\n_ax1 = plt.subplot(111)\n_ax1.set_title(\"maxCheckpointsDF\")\nsns.heatmap(\n correctPerMaxChapterNotNan,\n ax=_ax1,\n cmap=plt.cm.jet,\n square=True,\n annot=True,\n fmt='d',\n)\n\n\nmaxCheckpointsDFNotNan = maxCheckpointsDF.fillna(0)\n\n_fig2 = plt.figure(figsize=(14,2))\n_ax2 = plt.subplot(111)\n_ax2.set_title(\"maxCheckpointsDF\")\nsns.heatmap(\n maxCheckpointsDFNotNan,\n ax=_ax2,\n cmap=plt.cm.jet,\n square=True,\n annot=True,\n fmt='d',\n )\n\ncorrChapterScQDF = pd.DataFrame(index=posttestScientificQuestions, columns=['corr'])\n\n# when reaching checkpoint N, what is the rate of good answer for question Q?\nfor q in posttestScientificQuestions:\n corrChapterScQDF.loc[q, 'corr'] = np.corrcoef(allData.loc[q,:].values, allData.loc['maxChapter',:].values)[1,0]\n\ncorrChapterScQDFNotNan = corrChapterScQDF.fillna(-2)\n\n_fig1 = plt.figure(figsize=(14,10))\n_ax1 = plt.subplot(111)\n_ax1.set_title(\"corrChapterScQDFNotNan\")\nsns.heatmap(\n corrChapterScQDFNotNan,\n ax=_ax1,\n cmap=plt.cm.jet,\n square=True,\n annot=True,\n fmt='.2f',\n vmin=-1,\n vmax=1,\n )", "Clustering answers to find underlying correlation with RedMetrics data", "from sklearn.cluster import KMeans\nfrom sklearn.neighbors.kde import KernelDensity\n\nX = np.array([[0.9], [1], [1.1], [4], [4.1], [4.2], [5]])\nkmeans = KMeans(n_clusters=2, random_state=0).fit(X)\nkmeans.inertia_\n\nkmeans.labels_\n\nkmeans.cluster_centers_\n\nkmeans.predict([[3], [4]])\n\ninertiaThreshold = 1\n\nfor question in scientificQuestions:\n posttestQuestion = answerTemporalities[1] + \" \" + question\n #deltaQuestion = delta + \" \" + question\n allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.loc[posttestQuestion, :]\n\nX = [[x] for x in allDataPlaytestPhase1PretestPosttestUniqueProfilesVolunteers.loc[posttestQuestion, :].values]\nclusterCount = 3\nkmeans = KMeans(n_clusters=clusterCount, random_state=0).fit(X)\nif len(np.unique(kmeans.labels_)) != clusterCount:\n print(\"incorrect number of clusters\")\nkmeans.inertia_", "Clustering using KernelDensity", "X = np.array([[-1], [-2], [-3], [1], [2], [3]])\nkde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X)\nkde.score_samples(X)\n\nX = np.array([-1, -2, -3, 1, 2, 3])\nkde = KernelDensity(kernel='gaussian', bandwidth=0.2).fit(X.reshape(-1, 
1))\nkde.score_samples(X.reshape(-1, 1))\nX.reshape(-1, 1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/ncc/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: NCC\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:25\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. 
Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 
13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.13/_downloads/plot_visualize_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Visualize Epochs data", "import os.path as op\n\nimport mne\n\ndata_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')\nraw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'),\n add_eeg_ref=False)\nraw.set_eeg_reference() # set EEG average reference\nevents = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))\npicks = mne.pick_types(raw.info, meg='grad')\nepochs = mne.Epochs(raw, events, [1, 2], picks=picks, add_eeg_ref=False)", "This tutorial focuses on visualization of epoched data. All of the functions\nintroduced here are basically high level matplotlib functions with built in\nintelligence to work with epoched data. All the methods return a handle to\nmatplotlib figure instance.\nAll plotting functions start with plot. Let's start with the most\nobvious. :func:mne.Epochs.plot offers an interactive browser that allows\nrejection by hand when called in combination with a keyword block=True.\nThis blocks the execution of the script until the browser window is closed.", "epochs.plot(block=True)", "The numbers at the top refer to the event id of the epoch. We only have\nevents with id numbers of 1 and 2 since we included only those when\nconstructing the epochs.\nSince we did no artifact correction or rejection, there are epochs\ncontaminated with blinks and saccades. For instance, epoch number 9 (see\nnumbering at the bottom) seems to be contaminated by a blink (scroll to the\nbottom to view the EOG channel). This epoch can be marked for rejection by\nclicking on top of the browser window. The epoch should turn red when you\nclick it. This means that it will be dropped as the browser window is closed.\nYou should check out help at the lower left corner of the window for more\ninformation about the interactive features.\nTo plot individual channels as an image, where you see all the epochs at one\nglance, you can use function :func:mne.Epochs.plot_image. It shows the\namplitude of the signal over all the epochs plus an average of the\nactivation. We explicitly set interactive colorbar on (it is also on by\ndefault for plotting functions with a colorbar except the topo plots). In\ninteractive mode you can scale and change the colormap with mouse scroll and\nup/down arrow keys. You can also drag the colorbar with left/right mouse\nbutton. Hitting space bar resets the scale.", "epochs.plot_image(97, cmap='interactive')\n\n# You also have functions for plotting channelwise information arranged into a\n# shape of the channel array. The image plotting uses automatic scaling by\n# default, but noisy channels and different channel types can cause the scaling\n# to be a bit off. Here we define the limits by hand.\nepochs.plot_topo_image(vmin=-200, vmax=200, title='ERF images')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cccr-iitm/cmip6/models/iitm-esm/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: IITM-ESM\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'iitm-esm', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. 
Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. 
Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. 
Advection --&gt; Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momemtum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? 
if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. 
Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. 
Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
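The property cells above are left as TODO templates. As a purely illustrative sketch (the values chosen here are hypothetical examples, not recommendations), a completed ENUM cell and a completed BOOLEAN cell from this notebook would look like this:

```python
# Hypothetical fill-in of one ENUM property from the template above;
# "Linear" is just one of the valid choices listed for bottom friction type.
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
DOC.set_value("Linear")

# BOOLEAN properties (e.g. ocean-colour dependence) take True/False rather than a string.
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
DOC.set_value(True)
```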
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
UPML/complexityTheory
toGit/TSP/tsp/000results.ipynb
apache-2.0
[ "Я запустил алгоритм на случайных тестах, размера начиная с 2 и он обсчитывает по 10 тестов одного размера.\nЯ хочу попробовать проанализировать данные, которые получу в результате его работы.\nА именно я получаю на выход, время работы на тесте, вес цикла и сам цикл.", "class Node:\n def __init__(self, number, cost, time, answer): \n self.number = int(number)\n self.cost = float(cost)\n self.time = float(time) / 10**9\n self.size = self.number / 100\n self.answer = answer\n def write(self):\n print(\"n = \", self.number,\" \\n\")\n print(\"cost = \", self.cost, \" \\n\")\n print(\"time = \", self.time, \" \\n\")\n print(\"size = \", self.size, \"\\n\")\n print(\"answer = \", self.answer, \"\\n\")\n def getTime(self):\n return self.time\n def getSize(self):\n return self.size\n def getNumber(self):\n return self.number\n def getAnswer(self):\n return self.answer\n\nclass Point:\n def __init__(self, x, y):\n self.x = x\n self.y = y\ndef constructNode(a):\n c = a.split('\\n')\n number = c[0]\n answerStr = c[1].split(\"to\")\n answer = []\n for i in range(len(answerStr)):\n answer.append(int(answerStr[i]))\n cost = (c[2].split())[1]\n time = (c[3].split())[1]\n return Node(number, cost, time, answer)", "Вытащим данные из файла и преобразуем в удобный формат.", "import math\n\nimport pylab\n\nfrom matplotlib import mlab\n%pylab inline\ndef plotPoints(a, size, showed):\n Y = [a[i].getTime() for i in range(size)]\n X = [a[i].getSize() for i in range(size)]\n pylab.plot (X, Y)\n if(showed):\n pylab.show()\n \ndef readNodes(name):\n fin = open(name, 'r')\n a = fin.read()\n nodesToSplit = a.split(\"i =\");\n nodes = []\n for i in range(len(nodesToSplit) -1):\n nodes.append(constructNode(nodesToSplit[i+1]))\n return nodes\nnodes = readNodes('out0.txt')\nnodesOne = []\nnodesOne.append(nodes[240])\nplotPoints(nodes, len(nodes), True)", "Построим график, видим, что есть три каких-то очень плохих теста, которые выглядят, как пики на этом графике. Давайте запомним, что они есть и выкиним их из эксперементальных данных.", "def findMaxTime(l, a, b):\n maxEl = l[a].getTime()\n deleteEl = 0\n for i in range(a, b):\n if(l[i].getTime() >= maxEl):\n maxEl = l[i].getTime()\n value = l[i]\n deleteEl = i\n l.pop(deleteEl)\n return value\nmaxTimeNodes =[]\nfor i in range(len(nodes) // 10 - 1, -1, -1):\n maxTimeNodes.append(findMaxTime(nodes, i*10, (i + 1) * 10))\nplotPoints(maxTimeNodes, len(maxTimeNodes),True)\n", "График из максимумов, похож на рост по экспоненте.", "plotPoints(nodes, len(nodes),True)", "Ну чтож попробуем выкинуть два максимума из рассмотрения. Хотя уже сейчас график выглядит намного лучше.", "for i in range(len(nodes) // 9 - 1, -1, -1):\n maxTimeNodes.append(findMaxTime(nodes, i*9, (i + 1) * 9))\nplotPoints(nodes, len(nodes), True)", "Ну вот уже намного лучше.\nпосмотрим для интереса, ещё и на начало графика.", "plotPoints(nodes, len(nodes)//2, True)", "Не красивый график, давайте запустим по 100 тестов для каждого размера. 
По значениям находящимя в середине.", "def findMinTime(l, a, b):\n minEl = l[a].getTime()\n deleteEl = 0\n for i in range(a, b):\n if(l[i].getTime() <= minEl):\n minEl = l[i].getTime()\n value = l[i]\n deleteEl = i\n l.pop(deleteEl)\n return value\n\nfin = open('16100.txt', 'r')\na = fin.read()\nnodesToSplitSmall = a.split(\"i =\");\nsmallNodes = []\nfor i in range(len(nodesToSplitSmall) -1):\n smallNodes.append(constructNode(nodesToSplitSmall[i+1]))\nsmallMaxTime = []\nfor i in range(len(smallNodes) // 100 - 1, -1, -1):\n smallMaxTime.append(findMaxTime(smallNodes, i*100, (i + 1) * 100))\n\nfor j in range(35):\n for i in range(len(smallNodes)// (99 - j * 2 + 1) - 1, -1, -1):\n findMaxTime(smallNodes, i * (99 - j * 2 + 1), (i + 1) * (99 - j * 2 + 1))\n findMinTime(smallNodes, i * (99 - j * 2), (i + 1) * (99 - j * 2)) \nplotPoints(smallNodes, len(smallNodes), True)", "Давайте попробуем понять, есть ли какая-то видимая разница, между тестами на которых алгоритм работает плохо и тех на которых он работает хорошо.", "smallMinTime = []\nfor i in range(len(smallNodes) // 100 - 1, -1, -1):\n smallMinTime.append(findMinTime(smallNodes, i*100, (i + 1) * 100))\n", "Нарисуем и посмотрим.", "import numpy as np\nfrom bokeh.plotting import *\n\ndef show(node):\n fin = open(str(node.getNumber()) + \".txt\", 'r')\n a = fin.read()\n lines = a.split('\\n')\n lines.pop(0)\n lines.pop(0)\n lines.pop(len(lines) - 1)\n lines.pop(len(lines) - 1)\n points = []\n X = []\n Y = []\n for i in range(len(lines)):\n c = lines[i].split(' ')\n points.append(Point(float(c[1]), float(c[2])))\n for i in range(len(node.getAnswer())):\n c = lines[node.getAnswer()[i] - 1].split(' ')\n X.append(float(c[1]))\n Y.append(float(c[2]))\n plot(X, Y)\n\n\nfor i in range(0, 12, 4):\n subplot(221 + i // 4)\n p1 = show(smallMinTime[i]) #синий тест с минимальным временем работы\n p2 = show(smallMaxTime[i]) #зеленый тест с максимальным временем работы\n\nfor i in range(12, 15):\n subplot(221 + i % 4)\n p1 = show(smallMinTime[i]) #синий\n p2 = show(smallMaxTime[i]) #зеленый", "Сомневаюсь, что здесь можно найти закономерность. Это и понятно в этом алгоритме многое зависит от того в каком порядке заданы вершины, от этого зависит, то на сколько быстро мы найдем действительно хороший путь, который позволит перебирать нам меньшее количество вершин. 
\nДо этого момента, старался найти честное полное решение задачи, и иногда это получалось сделать за малое время, но видно что например на тесте 2980 алгоритм работал почти то же время, что динамика за $2^n * n^2$.\nДавайте теперь построим несколько графиков, времени работы алгоритма, в зависимости от точности, которая нам требуется.", "nodes0 = nodes\nnodes010 = readNodes(\"out10.txt\")\nfor i in range(len(nodes010) // 10 - 1, -1, -1):\n findMaxTime(nodes010, i*10, (i + 1) * 10)\nfor i in range(len(nodes010) // 9 - 1, -1, -1):\n findMaxTime(nodes010, i*9, (i + 1) * 9)\nnodes025 = readNodes(\"out25.txt\")\nfor i in range(len(nodes025) // 10 - 1, -1, -1):\n findMaxTime(nodes025, i*10, (i + 1) * 10)\nfor i in range(len(nodes025) // 9 - 1, -1, -1):\n findMaxTime(nodes025, i*9, (i + 1) * 9)\nplotPoints(nodes010, len(nodes010), False) # синий \nplotPoints(nodes0, len(nodes0), False) #зеленый \nplotPoints(nodes025, len(nodes025), False) #красный\n\nplotPoints(nodes010[0:120], 120, False) # синий ошибка до 10%\nplotPoints(nodes0[0:120], 120, False) #зеленый без ошибки\nplotPoints(nodes025[0:120], 120, False) #красный ошибка до 25%\n\nplotPoints(nodes010[110:150], 40, False) # синий \nplotPoints(nodes0[110:150], 40, False) #зеленый \nplotPoints(nodes025[110:150], 40, False) #красный", "Видна зависимость между качеством апроксимации и временем работы программы.\nБудет интересно посмотреть, например на зависимость времени работы программы на одном и том же тесте от качества решения задачи. Возьмем например размер задачи 26, чтобы не ждать два часа пока все посчитается точно.", "nodesOne.append(readNodes(\"26out05.txt\")[0])\nnodesOne.append(readNodes(\"26out1.txt\")[0])\nnodesOne.append(readNodes(\"26out15.txt\")[0])\nnodesOne.append(readNodes(\"26out20.txt\")[0])\nnodesOne.append(readNodes(\"26out25.txt\")[0])\nnodesOne.append(readNodes(\"26out30.txt\")[0])\nY = [nodesOne[i].getTime() for i in range(len(nodesOne))]\nX = [0.05 * i for i in range(len(nodesOne))]\npylab.plot (X, Y)\nshow()", "Видно, что время, а соотвественно и количество перебираемых случаев падает по экспоненте \nв зависимости от требуемой точности." ]
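To put a number on the exponential-decay claim above, here is a small illustrative check (a sketch that assumes `nodesOne` has been populated exactly as in the cells above, one Node per accuracy level): fit a straight line to log(time) versus the allowed error and read off the decay rate.

```python
# Sketch: check the exponential decay of running time vs. allowed relative error.
# Assumes `nodesOne` holds one Node per accuracy level, as built above.
import numpy as np

times = np.array([node.getTime() for node in nodesOne])
errors = np.array([0.05 * i for i in range(len(nodesOne))])

# If time ~ C * exp(a * error), then log(time) is linear in error with slope a.
a, b = np.polyfit(errors, np.log(times), 1)
print("slope a =", a)
print("each extra 5% of allowed error multiplies the time by ~", np.exp(a * 0.05))
```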
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
therealAJ/python-sandbox
data-science/learning/ud1/DataScience/TrainTest.ipynb
gpl-3.0
[ "Train / Test\nWe'll start by creating some data set that we want to build a model for (in this case a polynomial regression):", "%matplotlib inline\nimport numpy as np\nfrom pylab import *\n\nnp.random.seed(2)\n\npageSpeeds = np.random.normal(3.0, 1.0, 100)\npurchaseAmount = np.random.normal(50.0, 30.0, 100) / pageSpeeds\n\n\nscatter(pageSpeeds, purchaseAmount)", "Now we'll split the data in two - 80% of it will be used for \"training\" our model, and the other 20% for testing it. This way we can avoid overfitting.", "trainX = pageSpeeds[:80]\ntestX = pageSpeeds[80:]\n\ntrainY = purchaseAmount[:80]\ntestY = purchaseAmount[80:]\n", "Here's our training dataset:", "scatter(trainX, trainY)", "And our test dataset:", "scatter(testX, testY)", "Now we'll try to fit an 8th-degree polynomial to this data (which is almost certainly overfitting, given what we know about how it was generated!)", "x = np.array(trainX)\ny = np.array(trainY)\n\np4 = np.poly1d(np.polyfit(x, y, 8))", "Let's plot our polynomial against the training data:", "import matplotlib.pyplot as plt\n\nxp = np.linspace(0, 7, 100)\naxes = plt.axes()\naxes.set_xlim([0,7])\naxes.set_ylim([0, 200])\nplt.scatter(x, y)\nplt.plot(xp, p4(xp), c='r')\nplt.show()\n", "And against our test data:", "testx = np.array(testX)\ntesty = np.array(testY)\n\naxes = plt.axes()\naxes.set_xlim([0,7])\naxes.set_ylim([0, 200])\nplt.scatter(testx, testy)\nplt.plot(xp, p4(xp), c='r')\nplt.show()", "Doesn't look that bad when you just eyeball it, but the r-squared score on the test data is kind of horrible! This tells us that our model isn't all that great...", "from sklearn.metrics import r2_score\n\nr2 = r2_score(testy, p4(testx))\n\nprint r2\n", "...even though it fits the training data better:", "from sklearn.metrics import r2_score\n\nr2 = r2_score(np.array(trainY), p4(np.array(trainX)))\n\nprint r2", "If you're working with a Pandas DataFrame (using tabular, labeled data,) scikit-learn has built-in train_test_split functions to make this easy to do.\nLater we'll talk about even more robust forms of train/test, like K-fold cross-validation - where we try out multiple different splits of the data, to make sure we didn't just get lucky with where we split it.\nActivity\nTry measuring the error on the test data using different degree polynomial fits. What degree works best?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ml4a/ml4a-guides
examples/language_models/word2vec_tsne.ipynb
gpl-2.0
[ "Word2Vec and t-SNE\nA question that might come up when working with text is: how do you turn text into numbers?\nIn the past, common techniques included methods like one-hot vectors, in which we'd have a different number associated with each word, and then turn \"on\" the value at that index in a vector (making it 1) and setting all the rest to zero.\nFor instance, if we have the sentence: \"I like dogs\", we'd have a 3-dimensional one-hot vector (3-dimensional because there are three words), so the word \"I\" might be [1,0,0], the word \"like\" might be [0,1,0], and \"dogs\" would be [0,0,1].\nOne-hot vectors worked well enough for some tasks but it's not a particularly rich or meaningful representation of text. The indices of these words are arbitrary and don't describe any relationship between them.\nWord embeddings provide a meaningful representation of text. Word embeddings, called such because they involve embedding a word in some high-dimensional space, that is, they map a word to some vector, much like one-hot vectors. The difference is that word embeddings are learned for a particular task, so they end up being meaningful representations.\nFor example, the relationships between words are meaningful (image from the TensorFlow documentation:\n{:width=\"100%\"}\nA notable property that emerges is that vector arithmetic is also meaningful. Perhaps the most well-known example of this is:\n$$\n\\text{king} - \\text{man} + \\text{woman} = \\text{queen}\n$$\n(Chris Olah's piece on word embeddings delves more into why this is.)\nSo the positioning of these words in this space actually tells us something about how these words are used.\nThis allows us to do things like find the most similar words by looking at the closest words. You can project the resulting embeddings down to 2D so that we can visualize them. We'll use t-SNE (\"t-Distributed Stochastic Neighbor Embedding\") for this, which is a dimensionality reduction method that works well for visualizing high-dimension data. We'll see that clusters of related words form in a way that a human would probably agree with. We couldn't do this with one-hot vectors - the distances between them are totally arbitrary and their proximity is essentially random.\nAs mentioned earlier, these word embeddings are trained to help with a particular task, which is learned through a neural network. Two tasks developed for training embeddings is CBOW (continuous bag of words) and skip-grams; together these methods of learning word embeddings are called \"Word2Vec\".\nFor the CBOW task, we take the context words (the words around the target word) and give the target word. We want to predict whether or not the target word belongs to the context.\nThe skip-grams is basically the inverse: we take the target word (the \"pivot\"), then give the context. We want to predict whether or not the context belongs to the word.\nThey are quite similar but have different properties, e.g. CBOW works better on smaller datasets, where as skip-grams works better for larger ones. In any case, the idea with word embeddings is that they can be trained to help with any task.\nWe're going to be using the skip-gram task here.\nCorpus\nWe need a reasonably-sized text corpus to learn from. Here we'll use State of the Union addresses retrieved from The American Presidency Project. These addresses tend to use similar patterns so we should be able to learn some decent word embeddings. Since the skip-gram task looks at context, texts that use words in a consistent way (i.e. 
in consistent contexts) we'll be able to learn better.\nThe corpus is available here. The texts were preprocessed a bit (mainly removing URL-encoded characters). The texts provided here are the processed versions (nb: this isn't the complete collection of texts but enough to work with here).\nSkip-grams\nBefore we go any further, let's get a bit more concrete about what the skip-gram task is.\nLet's consider the sentence \"I think cats are cool\".\nThe skip-gram task is as follows:\n\nWe take a word, e.g. 'cats', which we'll represent as $w_i$. We feed this as input into our neural network.\nWe take the word's context, e.g. ['I', 'think', 'are', 'cool']. We'll represent this as ${w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}}$ and we also feed this into our neural network.\nThen we just want our network to predict (i.e. classify) whether or not ${w_{i-2}, w_{i-1}, w_{i+1}, w_{i+2}}$ is the true context of $w_i$.\n\nFor this particular example we'd want the network to output 1 (i.e. yes, that is the true context).\nIf we set $w_i$ to 'frogs', then we'd want the network output 0. In our one sentence corpus, ['I', 'think', 'are', 'cool'] is not the true context for 'frogs'. Sorry frogs 🐸.\nBuilding the model\nWe'll use keras to build the neural network that we'll use to learn the embeddings.\nFirst we'll import everything:", "import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers.embeddings import Embedding\nfrom keras.layers import Flatten, Activation, Merge\nfrom keras.preprocessing.text import Tokenizer\nfrom keras.preprocessing.sequence import skipgrams, make_sampling_table", "Then load in our data. We're actually going to define a generator to load our data in on-demand; this way we'll avoid having all our data sitting around in memory when we don't need it.", "from glob import glob\ntext_files = glob('../data/sotu/*.txt')\n\ndef text_generator():\n for path in text_files:\n with open(path, 'r') as f:\n yield f.read()\n \nlen(text_files)", "Before we go any further, we need to map the words in our corpus to numbers, so that we have a consistent way of referring to them. First we'll fit a tokenizer to the corpus:", "# our corpus is small enough where we\n# don't need to worry about this, but good practice\nmax_vocab_size = 50000\n\n# `filters` specify what characters to get rid of\ntokenizer = Tokenizer(nb_words=max_vocab_size,\n filters='!\"#$%&()*+,-./:;<=>?@[\\\\]^_{|}~\\t\\n\\'`“”–')\n\n# fit the tokenizer\ntokenizer.fit_on_texts(text_generator())\n\n# we also want to keep track of the actual vocab size\n# we'll need this later\n# note: we add one because `0` is a reserved index in keras' tokenizer\nvocab_size = len(tokenizer.word_index) + 1", "Now the tokenizer knows what tokens (words) are in our corpus and has mapped them to numbers. The keras tokenizer also indexes them in order of frequency (most common first, i.e. index 1 is usually a word like \"the\"), which will come in handy later.\nAt this point, let's define the dimensions of our embeddings. It's up to you and your task to choose this number. Like many neural network hyperparameters, you may just need to play around with it.", "embedding_dim = 256", "Now let's define the model. When I described the skip-gram task, I mentioned two inputs: the target word (also called the \"pivot\") and the context. 
So we're going to build two separate models for each input and then merge them into one.", "pivot_model = Sequential()\npivot_model.add(Embedding(vocab_size, embedding_dim, input_length=1))\n\ncontext_model = Sequential()\ncontext_model.add(Embedding(vocab_size, embedding_dim, input_length=1))\n\n# merge the pivot and context models\nmodel = Sequential()\nmodel.add(Merge([pivot_model, context_model], mode='dot', dot_axes=2))\nmodel.add(Flatten())\n\n# the task as we've framed it here is\n# just binary classification,\n# so we want the output to be in [0,1],\n# and we can use binary crossentropy as our loss\nmodel.add(Activation('sigmoid'))\nmodel.compile(optimizer='adam', loss='binary_crossentropy')", "Finally, we can train the model.", "n_epochs = 60\n\n# used to sample words (indices)\nsampling_table = make_sampling_table(vocab_size)\n\nfor i in range(n_epochs):\n loss = 0\n for seq in tokenizer.texts_to_sequences_generator(text_generator()):\n # generate skip-gram training examples\n # - `couples` consists of the pivots (i.e. target words) and surrounding contexts\n # - `labels` represent if the context is true or not\n # - `window_size` determines how far to look between words\n # - `negative_samples` specifies the ratio of negative couples\n # (i.e. couples where the context is false)\n # to generate with respect to the positive couples;\n # i.e. `negative_samples=4` means \"generate 4 times as many negative samples\"\n couples, labels = skipgrams(seq, vocab_size, window_size=5, negative_samples=4, sampling_table=sampling_table)\n if couples:\n pivot, context = zip(*couples)\n pivot = np.array(pivot, dtype='int32')\n context = np.array(context, dtype='int32')\n labels = np.array(labels, dtype='int32')\n loss += model.train_on_batch([pivot, context], labels)\n print('epoch %d, %0.02f'%(i, loss))", "With any luck, the model should finish training without a hitch.\nNow we can extract the embeddings, which are just the weights of the pivot embedding layer:", "embeddings = model.get_weights()[0]", "We also want to set aside the tokenizer's word index for later use (so we can get indices for words) and also create a reverse word index (so we can get words from indices):", "word_index = tokenizer.word_index\nreverse_word_index = {v: k for k, v in word_index.items()}", "That's it for learning the embeddings. Now we can try using them.\nGetting similar words\nEach word embedding is just a mapping of a word to some point in space. 
So if we want to find words similar to some target word, we literally just need to look at the closest embeddings to that target word's embedding.\nAn example will make this clearer.\nFirst, let's write a simple function to retrieve an embedding for a word:", "def get_embedding(word):\n idx = word_index[word]\n # make it 2d\n return embeddings[idx][:,np.newaxis].T", "Then we can define a function to get a most similar word for an input word:", "from scipy.spatial.distance import cdist\n\nignore_n_most_common = 50\n\ndef get_closest(word):\n embedding = get_embedding(word)\n\n # get the distance from the embedding\n # to every other embedding\n distances = cdist(embedding, embeddings)[0]\n\n # pair each embedding index and its distance\n distances = list(enumerate(distances))\n\n # sort from closest to furthest\n distances = sorted(distances, key=lambda d: d[1])\n\n # skip the first one; it's the target word\n for idx, dist in distances[1:]:\n # ignore the n most common words;\n # they can get in the way.\n # because the tokenizer organized indices\n # from most common to least, we can just do this\n if idx > ignore_n_most_common:\n return reverse_word_index[idx]", "Now let's give it a try (you may get different results):", "print(get_closest('freedom'))\nprint(get_closest('justice'))\nprint(get_closest('america'))\nprint(get_closest('citizens'))\nprint(get_closest('citizen'))", "For the most part, we seem to be getting related words!\nNB: Here we computed distances to every other embedding, which is far from ideal when dealing with really large vocabularies. Gensim's Word2Vec class implements a most_similar method that uses an approximate, but much faster, method for finding similar words. You can import the embeddings learned here into that class:", "from gensim.models.doc2vec import Word2Vec\n\nwith open('embeddings.dat', 'w') as f:\n f.write('{} {}'.format(vocab_size, embedding_dim))\n\n for word, idx in word_index.items():\n embedding = ' '.join(str(d) for d in embeddings[idx])\n f.write('\\n{} {}'.format(word, embedding))\n\nw2v = Word2Vec.load_word2vec_format('embeddings.dat', binary=False)\nprint(w2v.most_similar(positive=['freedom']))", "t-SNE\nt-SNE (\"t-Distributed Stochastic Neighbor Embedding\") is a way of projecting high-dimensional data, e.g. our word embeddings, to a lower-dimension space, e.g. 2D, so we can visualize it.\nThis will give us a better sense of the quality of our embeddings: we should see clusters of related words.\nscikit-learn provides a t-SNE implementation that is very easy to use.", "from sklearn.manifold import TSNE\n\n# `n_components` is the number of dimensions to reduce to\ntsne = TSNE(n_components=2)\n\n# apply the dimensionality reduction\n# to our embeddings to get our 2d points\npoints = tsne.fit_transform(embeddings)", "And now let's plot it out:", "print(points)\n\nimport matplotlib\nmatplotlib.use('Agg') # for pngs\nimport matplotlib.pyplot as plt\n\n# plot our results\n# make it quite big so we can see everything\nfig, ax = plt.subplots(figsize=(40, 20))\n\n# extract x and y values separately\nxs = points[:,0]\nys = points[:,1]\n\n# plot the points\n# we don't actually care about the point markers,\n# just want to automatically set the bounds of the plot\nax.scatter(xs, ys, alpha=0)\n\n# annotate each point with its word\nfor i, point in enumerate(points):\n ax.annotate(reverse_word_index.get(i),\n (xs[i], ys[i]),\n fontsize=8)\n\nplt.savefig('tsne.png')", "This looks pretty good! 
It could certainly be improved upon, with more data or more training, but it's a great start.\nFurther Reading\n\nDeep Learning, NLP, and Representations. Chris Olah.\nOn Word Embeddings. Sebastian Ruder.\nMikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.\nMikolov, T., Sutskever, I., Chen, K., Corrado, G. S., & Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems (pp. 3111-3119)." ]
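Tying back to the vector-arithmetic property mentioned at the start of this notebook, here is a rough sketch of an analogy query built from the helpers defined above (get_embedding, embeddings, reverse_word_index). The query words are ones the notebook already confirmed are in this corpus; with a corpus this small and specialized, expect noisy results.

```python
# Sketch: nearest words to vec(a) - vec(b) + vec(c), skipping the query words themselves.
import numpy as np
from scipy.spatial.distance import cdist

def analogy(a, b, c, topn=5):
    target = get_embedding(a) - get_embedding(b) + get_embedding(c)
    distances = cdist(target, embeddings)[0]
    results = []
    for idx in np.argsort(distances):
        word = reverse_word_index.get(idx)
        if word and word not in (a, b, c):
            results.append(word)
        if len(results) == topn:
            break
    return results

# e.g. a plural/singular analogy using words the notebook already queried
print(analogy('citizens', 'citizen', 'america'))
```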
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
gschivley/Teaching-python
Pandas/Pre-written Pandas example.ipynb
mit
[ "import pandas as pd\nimport numpy as np\n\nfn1 = 'EPA emissions.txt'\nfn2 = 'may_generator2016.xlsx'\nfn3 = 'EIA923_Schedules_2_3_4_5_M_10_2016.xlsx'", "Load emissions data", "emissions = pd.read_csv(fn1)\n\nemissions.head()\n\nemissions = pd.read_csv(fn1, index_col=False)\n\nemissions.head()\n\nemissions.tail()", "Access parts of the dataframe", "emissions.columns", "Notice that most of the columns have a leading space?", "columns_strip = [name.strip() for name in emissions.columns]\ncolumns_strip\n\nemissions.columns = columns_strip\nemissions.columns\n\nemissions.dtypes", "A single column from a dataframe is called a Series", "type(emissions)\n\ntype(emissions['Operating Time'])\n\nemissions['Operating Time']", "Index into a dataframe using .loc or .iloc with square brackets and row,column notation", "emissions.loc[0:5,'Operating Time']\n\nemissions.iloc[0:5,:3]", "Sum unit emissions for each facility using groupby", "emissions.groupby('Facility ID (ORISPL)')", "Not all columns sum well", "facility_emiss = emissions.groupby('Facility ID (ORISPL)').sum()\n# facility_emiss = facility_emiss.iloc[:,2:]\nfacility_emiss", "Use apply to apply a function to every row of the dataframe\nIf we want to keep the EPA Region, there are probably better ways to do it than this. We will write a little function that divides the month by 5 (May) and then divides the region by that result.", "def correct_region(row):\n num_units = row['Month'] / 5\n region = row['EPA Region'] / num_units\n return region", "Not sure why this is returning a float. Go back to the function and return an int instead.", "facility_emiss.apply(correct_region, axis=1)\n\nfacility_emiss.loc[:,'EPA Region'] = facility_emiss.apply(correct_region, axis=1)\nfacility_emiss = facility_emiss.iloc[:,2:]\nfacility_emiss", "Load capacity data", "capacity = pd.read_excel(fn2, sheetname='Operating', header=1)\n\ncapacity.head()\n\ncapacity.tail()\n\ncapacity.drop(20187, inplace=True)\n\ncapacity.loc[:,'Plant ID'] = capacity.loc[:,'Plant ID'].astype(int)\n\ncapacity.head()", "Check the column names\nSure enough, there are some weird issues", "capacity.columns\n\ncapacity.columns = [name.strip() for name in capacity.columns]\ncapacity.columns", "Boolean filtering", "PA_cap = capacity.loc[capacity['Plant State']=='PA',:]\nPA_cap\n\nPA_NGCC_cap = capacity.loc[(capacity['Plant State']=='PA') &\n (capacity['Technology']=='Natural Gas Fired Combined Cycle'),:]\nPA_NGCC_cap", "Repeat groupby and sum to get capacity of facilities", "cols = ['Plant ID', 'Nameplate Capacity (MW)']\nfacility_cap = capacity.loc[:,cols].groupby('Plant ID').sum()\nfacility_cap", "Load generation data", "generation = pd.read_excel(fn3, header=5)", "Something weird is going on here. I know there are lots of rows with numeric data that are missing from this describe table.", "generation.describe()\n\ngeneration.head()", "Turns out that there are lots of dots (.) where it is no value. I'm going to replace these with zeros.", "generation.tail()\n\ngeneration.replace('.', 0, inplace=True)", "There are line breaks in the middle of column names. 
I don't see any breaks or spaces at the beginning or end of names, but will still strip just to be safe.", "generation.columns\n\ngeneration.columns = [name.strip().replace('\\n', ' ') for name in generation.columns]\ngeneration.columns", "Stack data by month rather than having multiple columns\nNot sure if I'll have time for this section\nI'm lazy and want to get a list of month names without typing them all", "# could have done this as a list comprehension, but it would have been harder to read\nmonths = []\nfor name in generation.columns:\n if 'Netgen' in name:\n month = name.split()[-1]\n months.append(month)\nmonths\n\nid_cols = ['Plant Id', 'Plant State', 'NERC Region', 'AER Fuel Type Code']\nmonthly_cols = []\ndef find_col_names(cols):\n for col in cols:\n if 'January' in col:\n monthly_cols.append(col.split()[0])\n\nfind_col_names(generation.columns)\nid_cols + monthly_cols\n\npd.DataFrame(columns=id_cols + monthly_cols + ['Month'])\n\ngen_list = []\nfor month in months:\n gen_df = pd.DataFrame(columns=id_cols + monthly_cols)\n \n # Took me a few tries to figure out that I couldn't use .loc for gen_df\n gen_df[id_cols] = generation.loc[:,id_cols]\n gen_df['Month'] = month\n \n for col in monthly_cols:\n gen_df.loc[:,col] = generation.loc[:,col + ' ' + month]\n \n gen_list.append(gen_df)\n\ngen_stack = pd.concat(gen_list)\ngen_stack.describe()", "Tag lines as using a combustion fuel or not", "gen_stack['AER Fuel Type Code'].unique()\n\nnon_combust = ['HYC', 'NUC', 'SUN', 'GEO', 'WND'] # might be incomplete\n\ndef tag_combust(row):\n if row['AER Fuel Type Code'] in non_combust:\n return 0\n else:\n return 1\n\ngen_stack['Combust'] = gen_stack.apply(tag_combust, axis=1)\n\ngen_stack.head()", "Now group and sum\nOnly keep data for May", "test = gen_stack.loc[gen_stack['Month']=='May',:].groupby(['Plant Id', 'NERC Region']).sum()\ntest.head()\n\ntest.reset_index('NERC Region')\n\nfacility_gen = gen_stack.loc[gen_stack['Month']=='May',:].groupby('Plant Id').sum()\n\nfacility_gen.head()", "If I want to keep the NERC Region, I can do that in the groupby", "facility_gen = gen_stack.loc[gen_stack['Month']=='May',:].groupby(['Plant Id', 'NERC Region']).sum()\nfacility_gen.head()\n\nfacility_gen.reset_index('NERC Region', inplace=True)\nfacility_gen.head()", "Merge data from all three sources", "merged = facility_gen.merge(facility_cap, how='inner', left_index=True, right_index=True)\n\nmerged.describe()", "Save the non-combustion units, because I'm going to join the merged dataframe with the emissions dataframe and want to add back in the non-combustion", "non_combust = merged.loc[merged['Combust']==0,:]\nnon_combust.describe()\n\nnon_combust.head()\n\nmerged = merged.merge(facility_emiss, how='inner', left_index=True, right_index=True)\nmerged.describe()\n\nmerged.head()", "Now concat the two dataframes", "final = pd.concat([merged, non_combust])\nfinal\n\nfinal.index.rename('Plant ID', inplace=True)\n\nfinal.describe()\n\nfinal['CO2 (short tons)'].sum() * 2000 * 2.2046 / final['Netgen'].sum()" ]
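As a possible follow-on to the fleet-wide figure computed in the last cell, here is a sketch that uses the `final` dataframe assembled above to rank individual plants by emission rate (column names and units are taken as they come from the source files):

```python
# Sketch: per-plant CO2 emission rate, then the ten most carbon-intensive plants.
# Plants without positive net generation are excluded to avoid dividing by zero.
rates = final[final['Netgen'] > 0].copy()
rates['CO2 per MWh (short tons)'] = rates['CO2 (short tons)'] / rates['Netgen']
top10 = rates.sort_values('CO2 per MWh (short tons)', ascending=False).head(10)
print(top10[['NERC Region', 'Netgen', 'CO2 per MWh (short tons)']])
```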
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bsafdi/NPTFit
examples/Example2_Creating_Masks.ipynb
mit
[ "Example 2: Creating Masks\nIn this example we show how to create masks using create_mask.py.\nOften it is convenient to consider only a reduced Region of Interest (ROI) when analyzing the data. In order to do this we need to create a mask. The masks are boolean arrays where pixels labelled as True are masked and those labelled False are unmasked. In this notebook we give examples of how to create various masks.\nThe masks are created by create_mask.py and can be passed to an instance of nptfit via the function load_mask for a run, or an instance of dnds_analysis via load_mask_analysis for an analysis. If no mask is specified the code defaults to the full sky as the ROI.\nNB: Before you can call functions from NPTFit, you must have it installed. Instructions to do so can be found here: \nhttp://nptfit.readthedocs.io/", "# Import relevant modules\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport healpy as hp\n\nfrom NPTFit import create_mask as cm # Module for creating masks", "Example 1: Mask Nothing\nIf no options are specified, create mask returns an empty mask. In the plot here and for those below, purple represents unmasked, yellow masked.", "example1 = cm.make_mask_total()\nhp.mollview(example1, title='', cbar=False, min=0,max=1)", "Example 2: Band Mask\nHere we show an example of how to mask a region either side of the plane - specifically we mask 30 degrees either side", "example2 = cm.make_mask_total(band_mask = True, band_mask_range = 30)\nhp.mollview(example2, title='', cbar = False, min=0, max=1)", "Example 3: Mask outside a band in b and l\nThis example shows several methods of masking outside specified regions in galactic longitude (l) and latitude (b). The third example shows how when two different masks are specified, the mask returned is the combination of both.", "example3a = cm.make_mask_total(l_mask = False, l_deg_min = -30, l_deg_max = 30, \n b_mask = True, b_deg_min = -30, b_deg_max = 30)\nhp.mollview(example3a,title='',cbar=False,min=0,max=1)\n\nexample3b = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30, \n b_mask = False, b_deg_min = -30, b_deg_max = 30)\nhp.mollview(example3b,title='',cbar=False,min=0,max=1)\n\nexample3c = cm.make_mask_total(l_mask = True, l_deg_min = -30, l_deg_max = 30, \n b_mask = True, b_deg_min = -30, b_deg_max = 30)\nhp.mollview(example3c,title='',cbar=False,min=0,max=1)", "Example 4: Ring and Annulus Mask\nNext we show examples of masking outside a ring or annulus. The final example demonstrates that the ring need not be at the galactic center.", "example4a = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 0, ring_l = 0)\nhp.mollview(example4a,title='',cbar=False,min=0,max=1)\n\nexample4b = cm.make_mask_total(mask_ring = True, inner = 30, outer = 180, ring_b = 0, ring_l = 0)\nhp.mollview(example4b,title='',cbar=False,min=0,max=1)\n\nexample4c = cm.make_mask_total(mask_ring = True, inner = 30, outer = 90, ring_b = 0, ring_l = 0)\nhp.mollview(example4c,title='',cbar=False,min=0,max=1)\n\nexample4d = cm.make_mask_total(mask_ring = True, inner = 0, outer = 30, ring_b = 45, ring_l = 45)\nhp.mollview(example4d,title='',cbar=False,min=0,max=1)", "Example 5: Custom Mask\nIn addition to the options above, we can also add in custom masks. 
In this example we highlight this by adding a random mask.", "random_custom_mask = np.random.choice(np.array([True, False]), hp.nside2npix(128))\nexample5 = cm.make_mask_total(custom_mask = random_custom_mask)\nhp.mollview(example5,title='',cbar=False,min=0,max=1)", "Example 6: Full Analysis Mask including Custom Point Source Catalog Mask\nFinally we show an example of a full analysis mask that we will use for an analysis of the Galactic Center Excess in Example 3 and 8. Here we mask the plane with a band mask, mask outside a ring and also include a custom point source mask. The details of the point source mask are given in Example 1.\nNB: before the point source mask can be loaded, the Fermi Data needs to be downloaded. See details in Example 1.", "pscmask=np.array(np.load('fermi_data/fermidata_pscmask.npy'), dtype=bool)\nexample6 = cm.make_mask_total(band_mask = True, band_mask_range = 2,\n mask_ring = True, inner = 0, outer = 30,\n custom_mask = pscmask)\nhp.mollview(example6,title='',cbar=False,min=0,max=1)" ]
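A quick sanity check that is often useful after building a combined mask like example6 (a sketch that only uses the numpy functionality already imported above): report how much of the sky remains unmasked.

```python
# Sketch: in these boolean masks True = masked, so ~mask picks out the analysis region.
unmasked = ~example6
print("unmasked pixels:", unmasked.sum(), "out of", len(example6))
print("unmasked sky fraction: %.3f" % unmasked.mean())
```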
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NYUDataBootcamp/Projects
MBA_S17/Boddu-Jadwani-Kutty-American_Time_Use_Study.ipynb
mit
[ "Data Boot-Camp Final Project\nAmerican Time Use Study (ATUS)\nSravya Boddu (sb5933), Sonal Jadwani (sj2280), Vineetha Kutty (vkk242) | May 5th, 2017\n<img src=\"http://marketingland.com/wp-content/ml-loads/2014/05/speed-to-market-600x300.jpg\" alt=\"Drawing\" style=\"width: 1000px;\"/>\nIntroduction\nIn this project, our aim is to thoroughly analyse the American Time Use Study (ATUS) data which primarily measures the amount of time Americans spend doing various activities such as personal work, paid work, and other daily duties. Nationally represented estimates of the time spent are processed to draw effective insights which will assist in further comprehending product business cycle, marketing, and in developing successful targeting strategy.\nWith the help of Python and its extensive libraries, we then visualized the processed data to grasp the trends more clearly and consequently obtain valuable insights.\nContents\n\nBackground\nAbout the Data\n2.1 | Data Sources\n2.2 | Python Libraries\n2.3 | Dataframes\n\n\nData Fetching, Cleaning, and Processing\n3.1 | Fetching and Slicing Data \n3.1.1 | User-defined Functions 'Extract_Main' / 'Extract_Sub' for Gender level Activity data\n3.1.2 | Leisure Activity Data\n3.1.3 | Age Level Activity Data\n3.1.4 | Geography Level Activity Data \n\n\n3.2 | Cleaning and Organizing Data\n\n\nVisualizing the Data\n4.1 | Visualizing the time spent on main activities at gender level\n4.2 | Visualizing trends for main activities through years 2011 - 2015\n4.3 | Which activity is the primary focus for each gender?\n4.4 | Does age play any role in the time spent on activities?\n4.5 | Deep-Diving and visualizing data at sub-activity level for the top 4 main activities\n4.6 | Leisure Activity Breakdown\n4.7 | Visualizing time spent on Sports & Leisure activities at a geographic level\n4.8 | Ad-Hoc Analysis: Time spent on Organizational, Civic, and Religious activities activities at a geographic level\n\n\nConclusion\n\n1 | Background\nThe American Time Use Study (ATUS) is sponsored by the Bureau of Labor Statistics (BLS) and is conducted by the United States Census Bureau. It's main goal is to record the amount of time spent by Americans on numeorus activities ranging from work to leisure to childcare and house-hold activities. This data is further classified based on gender, age group, employment status, marital status, and geographic location. \nParticipants are usually households which have completed all eight months of the Current Population Survey (CPS). Amongst this pool of participants, households are further categorized based on numerous demographic characteristics and then they are finally selected to participate in the survey. Any particular individual above the age of 15 in the household is called up and questioned regarding their time use. \nThis data is utilized for many purposes by organizations such as Bureau of Economic Analysis (BEA), Bureau of Transportation Statistics (BTS) and the Economic Research Service (ERS). It has also been employed to understand worker productivity, effective promotion and targeting strategies and to deep dive into work-life balance ideology.\n2 | About the Data\nThe data has been obtained from the Bureau of Labor Statistics - ATUS website. \nAs mentioned above, ATUS provides nationally representative estimates of the time spent by Americans. 
It is the only federal survey providing data on the full range of non-market activities, from childcare to volunteering.\nThe data is usually collected from over 170,000 interviews and we have focused on data spanning from 2011 - 2015 to maintain relevance.\nMain focus activities are: \n\nPersonal Care Activities\nEating and Drinking\nHousehold Activities\nPurchasing goods and services\nCaring for and helping household members\nCaring for and helping non-household members\nWorking and Work related activities\nEducational Activities\nOrganizational, civic, and religious activities\nLeisure and Sports\nTelephone calls, mail, and email \n\n2.1 | Data Sources\nWebsite : Bureau of Labor Statistics\nHistorical Data Span : 2011 – 2015\nData Sources URLs :\n+ Activity Level Data : https://www.bls.gov/tus/a1_all_years.xlsx\n+ Leisure Activity Data : https://www.bls.gov/tus/charts/chart9.txt\n+ Students Activity Data : https://www.bls.gov/tus/charts/chart6.txt\n+ Elders Activity Data : https://www.bls.gov/tus/charts/chart4.txt\n+ Geography Level Data : Obtained from ATUS on request - This data has been uploaded in the Dropbox\n + Sports Data : https://www.dropbox.com/s/c5ahh0ffb7tc3yv/ATUS_Geography_Data_Sports.csv?dl=1\n + Organizational, Civic, and Religious Data : https://www.dropbox.com/s/36gnok6gtn3u5gw/ATUS_Geography_Data_Religion.csv?dl=1\nData Dictionaries : https://www.bls.gov/tus/dictionaries.htm\nUser Guide : https://www.bls.gov/tus/atususersguide.pdf\n2.2 | Python Libraries\nWe employed os and requests libraries for importing the data. We then used the pandas library to manipulate and display selected data. Finally, we worked with matplotlib, plotly, and other graphic libraries for visualization.", "# Importing all the required libraries\n\n%matplotlib inline \n\nimport sys\nimport pandas as pd # data manipulation package\nimport datetime as dt # date tools, used to note current date \nimport matplotlib.pyplot as plt # graphics package\nimport matplotlib as mpl # graphics package\nimport plotly as pl # graphics package\nimport urllib.request # To import data from Dropbox\n\n# New Libraries\n\nimport os # operating system tools (check files)\nimport requests, io # internet and input tools \nimport zipfile as zf # zip file tools \nimport shutil # file management tools \nimport numpy as np # scientific computing\n\n# Geographical Views (Plotly Authentication)\n\nimport plotly.plotly as py\nimport plotly.graph_objs as go\npy.sign_in('Vinee03', '0hNA8NplYEePfVAdDtUa')\n\n# System Details\n\nprint('Python version:', sys.version)\nprint('Pandas version: ', pd.__version__)\nprint('Today: ', dt.date.today())\n\n# To maintain color uniformity throughout all of the visualizations\n\ncolors = {1: \"royalblue\",\n 2: \"bisque\",\n 3: \"navy\",\n 4: \"silver\",\n 5: \"darkmagenta\",\n 6: \"pink\",\n 7: \"chocolate\",\n 8: \"orangered\",\n 9: \"lime\",\n 10:\"orange\",\n 11:\"darkorchid\",\n 12:\"black\",\n 13:\"yellow\"} ", "2.3 | Dataframes\n\n\nThe above mentioned excel workbook comprises of multiple tabs with each tab having data for one particular year (spanning from 2003 to 2015)\n\n\nThere are two types of Activities: \n\nMain Activity (For example: Leisure & Sports, Educational Activities)\nSub-Activity (For example: Watching TV, Participating in Sports, Attending Class)\n\n\n\nThe sub-activity hours add up to the main activity hours\n\n\nTo check the trends at both main activity and sub-activity level, two dataframes have been created for the entire time span i.e. 
from 2011 to 2015\n\nMain Activity Dataframe : df_main_final\nSub-Activity Dataframe : df_sub_final\n\n\n\nLeisure Activity Dataframe : df_leisure_all\n\n\nConsolidated Age Group Activity Dataframe: df_age_data \n\nStudents Activity Dataframe : df_students\nElders Activity Dataframe : df_elderly\n\n\n\nGeographical Distribution Dataframe\n\nSports Data Dataframe : df_geo_sports\nOrganizational, Civic, and Religious Data Dataframe : df_geo_religion\n\n\n\n3 | Data Fetching, Cleaning, and Processing\n3.1 | Fetching and Slicing Data\n3.1.1 | User-defined Functions 'Extract_Main' / 'Extract_Sub' for Gender level activity data\n\n\nDataframe creation process by employing user-defined functions:\n\nFetched the Excel workbook from the ATUS Website\nRead individual tabs within the Excel workbook into separate dataframes for each year\nFiltered out the activities into Main and Sub-Activities\nMerged all of the dataframes for each year based on the primary index (Activity)\nCreated two final dataframes for Main and Sub-Activities\n\n\n\nNote: \n\n'extract_main' function dynamically creates 5 dataframes for Main activities for each year (2011 to 2015)\nMerge all dataframes into a single dataframe 'df_main_final'\nSimilarly, 'extract_sub' function dynamically creates 5 dataframes for sub-activities for each year (2011 to 2015)\nMerge all dataframes into a single dataframe 'df_sub_final'", "# Fetching data from ATUS Website \n\ndls = \"https://www.bls.gov/tus/a1_all_years.xlsx\"\nresp = requests.get(dls)\nwith open('a1_all_years.xlsx', 'wb') as output:\n c = output.write(resp.content)\n \n# Creating a dictionary of dataframes for each year of analysis\n\ndf_names = ['df_2011','df_2012','df_2013','df_2014','df_2015']\n\n# Defining a function to extract main activity data for each year ranging from 2011 to 2015\n\ndef extract_main(no_of_years):\n i= 0\n year = 2011 \n year_max = year + no_of_years\n while year < year_max: \n year = str(year)\n df = pd.read_excel(open('a1_all_years.xlsx','rb'), sheetname= year, header= None)\n df_extract = df.loc[[5,11,14,25,37,46,52,58,62,73,85,91],[0,8,9]] \n df_names[i] = df_extract\n df_names[i].columns = ['Main_Activity', 'AvgHrsMen_'+ year, 'AvgHrsWomen_' + year]\n df_names[i] = df_names[i].set_index(['Main_Activity'])\n year = int(year)\n year=year+1\n i = i+1\n \nextract_main(5)\n\n# Merging the year-level dataframes to obtain a consolidated main activity dataframe\n\ndf_main_final = pd.concat([df_names[0],df_names[1],df_names[2],df_names[3],df_names[4]], axis = 1)\ndf_main_final\n\n# Fetching data from ATUS Website\n\ndls = \"https://www.bls.gov/tus/a1_all_years.xlsx\"\nresp = requests.get(dls)\nwith open('a1_all_years.xlsx', 'wb') as output:\n c = output.write(resp.content)\n\n# Creating a dictionary of dataframes for each year of analysis\n\ndf_names = ['df_2011','df_2012','df_2013','df_2014','df_2015']\n\n# Defining a function to extract sub-activity data for each year ranging from 2011 to 2015\n\ndef extract_sub(no_of_years):\n i= 0\n year = 2011 \n year_max = year + no_of_years\n while year < year_max: \n year = str(year)\n df = pd.read_excel(open('a1_all_years.xlsx','rb'), sheetname= year, header= None)\n df_extract = df.loc[[6,7,8,9,10,12,13,15,16,17,18,19,20,21,22,23,24,26,28,32,35,38,42,45,47,48,51,53,54,55,56,57,59,60,61,\n 63,64,72,74,81,84,86,87,90],[0,8,9]] \n df_names[i] = df_extract\n df_names[i].columns = ['Sub_Activity', 'AvgHrsMen_'+ year, 'AvgHrsWomen_' + year]\n df_names[i] = df_names[i].set_index(['Sub_Activity'])\n year = int(year)\n 
year=year+1\n i = i+1\n \nextract_sub(5)\n\n# Merging the year-level dataframes to obtain a consolidated sub-activity dataframe\n\ndf_sub_final = pd.concat([df_names[0],df_names[1],df_names[2],df_names[3],df_names[4]], axis = 1)\ndf_sub_final", "3.1.2 | Leisure Activity Data\n\nFetched data from BLS website to further understand what leisure activities are most popular among Americans\nCleaned the text file in order to accommodate the missing and multiple delimiters\nHandled the gap with numeric data to ensure data consistency", "# Fetching Data for Leisure activity (will be used further down in the analysis)\n\ndls = \"https://www.bls.gov/tus/charts/chart9.txt\"\nresp = requests.get(dls)\nwith open('chart9.txt', 'wb') as output:\n c = output.write(resp.content)\n \n# Extracting the text file data\n\nlines = open(\"chart9.txt\").readlines()\nopen('newfile.txt', 'w').writelines(lines[2:-4])\nf_leisure = open(\"newfile.txt\",\"r\")\ndata_leisure = f_leisure.read()\n\n# Replace the target string to fix the delimiters\n\ndata_leisure = data_leisure.replace(\"\\t\", '|').replace(\"||||\", '|').replace(\"|||\", '|').replace(\"||\", '|')\n\n# Write the file out again\n\nwith open('newfile.txt', 'w') as file:\n file.write(data_leisure)\n\nf_leisure = open(\"newfile.txt\",\"r\")\ndata_leisure = f_leisure.read()\n\n# Extracting/Cleaning the data and renaming certain columns\n\ndf_leisure_all = pd.read_csv(open(\"newfile.txt\",\"r\"), delimiter=\"|\")\ndf_leisure_all = df_leisure_all.rename(columns={'Unnamed: 0': 'Activity_Leisure_SubActivity'})\ndf_leisure_all = df_leisure_all.drop(df_leisure_all.index[[7]])\ndf_leisure_all.iloc[0, df_leisure_all.columns.get_loc('Minutes')] = 167\ndf_leisure_all\ndf_leisure_all = df_leisure_all.set_index(['Activity_Leisure_SubActivity'])\n\ndf_leisure_all= df_leisure_all.astype(float)\ndf_leisure_all", "3.1.3 | Age Level Activity Data\n\nFetched data from BLS website to further understand the time indulged in activities based on Age\nAge Brackets considered are:\nAges 15-49\nAges 55-64\nAges 65-74\nAges 75 + \n\n\nCleaned the text file in order to accommodate the missing and multiple delimiters\nHandled the gap with numeric data to ensure data consistency", "# Fetching Students Data (will be used further down in the analysis)\n\ndls = \"https://www.bls.gov/tus/charts/chart6.txt\"\nresp = requests.get(dls)\nwith open('chart6.txt', 'wb') as output:\n c = output.write(resp.content)\n \n# Extracting the text file data\n\nlines = open(\"chart6.txt\").readlines()\nopen('newfile.txt', 'w').writelines(lines[1:-5])\nf_s = open(\"newfile.txt\",\"r\")\ndata_s = f_s.read()\n\n# Replace the target string to fix the delimiters\n\ndata_s = data_s.replace(\"\\t\", '|').replace(\"||||\", '|').replace(\"|||\", '|').replace(\"||\", '|').replace(\"activities\", 'Working').replace(\"Educational Working\", 'Household/Educational Activities')\n\n# Write the file out again\n\nwith open('newfile.txt', 'w') as file:\n file.write(data_s)\n\nf_s = open(\"newfile.txt\",\"r\")\ndata_s = f_s.read()\n\n# Extracting the data and renaming certain columns\n\ndf_students = pd.read_csv(open(\"newfile.txt\",\"r\"), delimiter=\"|\")\ndf_students = df_students.drop(df_students.index[[3,9,5,6,7,8]])\ndf_students = df_students.rename(columns={'Unnamed: 0': 'Main_Activity', 'Hours': 'Ages 15-49'})\ndf_students['Main_Activity'] = df_students['Main_Activity'].str.strip()\ndf_students = df_students.set_index(['Main_Activity'])\ndf_students\n\n# Fetching Elders Data (will be used further down in the 
analysis)\n\ndls = \"https://www.bls.gov/tus/charts/chart4.txt\"\nresp = requests.get(dls)\nwith open('chart4.txt', 'wb') as output:\n c = output.write(resp.content)\n \n# Extracting the text file data\n\nlines = open(\"chart4.txt\").readlines()\nopen('newfile.txt', 'w').writelines(lines[4:-4])\nf_e = open(\"newfile.txt\",\"r\")\ndata_e = f_e.read()\n\n# Replace the target string to fix the delimiters\n\ndata_e = data_e.replace(\"\\t\", '|').replace(\"||||\", '|').replace(\"|||\", '|').replace(\"||\", '|').replace(\"Household activities\", 'Household/Educational Activities')\n\n# Write the file out again\n\nwith open('newfile.txt', 'w') as file:\n file.write(data_e)\n\nf_e = open(\"newfile.txt\",\"r\")\ndata_e = f_e.read()\n\n# Extracting the data and renaming certain columns\n\ndf_elderly = pd.read_csv(open(\"newfile.txt\",\"r\"), delimiter=\"|\")\ndf_elderly.drop('Unnamed: 4', axis = 1, inplace = True)\ndf_elderly = df_elderly.rename(columns={'Unnamed: 0': 'Main_Activity'})\ndf_elderly['Main_Activity'] = df_elderly['Main_Activity'].str.strip()\ndf_elderly = df_elderly.set_index(['Main_Activity'])\ndf_elderly", "3.1.4 | Geography Level Activity Data\n\nObtained the Geography level data upon request from ATUS\nGeography data considered are:\nSports data\nReligion data\n\n\nIncluded the state code in order to easily visualize the data", "# Extracting Geographical Distribution of Sports Data from Dropbox\n\nurl = \"https://www.dropbox.com/s/c5ahh0ffb7tc3yv/ATUS_Geography_Data_Sports.csv?dl=1\"\nu_s = urllib.request.urlopen(url)\ndata_s = u_s.read()\nu_s.close()\n\nwith open(\"ATUS_Geography_Data_Sports.csv\", \"wb\") as f :\n f.write(data_s)\n\ndf_geo_sports = pd.read_csv(open(\"ATUS_Geography_Data_Sports.csv\",\"r\"),delimiter=\",\")\ndf_geo_sports\n\n# Extracting Geographical distribution of Organizational, Civic, and Religious Data from Dropbox\n\nurl = \"https://www.dropbox.com/s/36gnok6gtn3u5gw/ATUS_Geography_Data_Religion.csv?dl=1\"\nu_r = urllib.request.urlopen(url)\ndata_r = u_r.read()\nu_r.close()\n\nwith open(\"ATUS_Geography_Data_Religion.csv\", \"wb\") as f :\n f.write(data_r)\n\ndf_geo_religion = pd.read_csv(open(\"ATUS_Geography_Data_Religion.csv\",\"r\"),delimiter=\",\")\ndf_geo_religion", "3.2 | Cleaning and Organizing Data\n\nCleaned the Activity data to ensure no outliers are present i.e. 
negative values, consistent decimal places, substitution of missing values\nFeature engineered additional data fields to observe distinct trends\nMerged multiple datasets in order to obtain a well-rounded view", "# Cleaning the data and computing average time spent on activities at gender level\n\n# Main Activity\n\ndf_main_final= df_main_final.apply(pd.to_numeric, errors='ignore')\ndf_main_final[\"AvgHrsMenMain\"]= df_main_final[['AvgHrsMen_2011','AvgHrsMen_2012','AvgHrsMen_2013','AvgHrsMen_2014','AvgHrsMen_2015']].mean(axis = 1)\ndf_main_final[\"AvgHrsWomenMain\"]= df_main_final[['AvgHrsWomen_2011','AvgHrsWomen_2012','AvgHrsWomen_2013','AvgHrsWomen_2014','AvgHrsWomen_2015']].mean(axis =1)\ndf_main_final = df_main_final.round(2)\ndf_main_final\n\n# Cleaning the data and computing average time spent on activities at gender level\n\n# Sub-Activity\n\ndf_sub_final=df_sub_final.replace('\\(','',regex=True).replace('\\)','',regex=True) \ndf_sub_final= df_sub_final.apply(pd.to_numeric, errors='ignore')\ndf_sub_final[\"AvgHrsMenSub\"]=df_sub_final[['AvgHrsMen_2011','AvgHrsMen_2012','AvgHrsMen_2013','AvgHrsMen_2014','AvgHrsMen_2015']].mean(axis =1)\ndf_sub_final[\"AvgHrsWomenSub\"]=df_sub_final[['AvgHrsWomen_2011','AvgHrsWomen_2012','AvgHrsWomen_2013','AvgHrsWomen_2014','AvgHrsWomen_2015']].mean(axis =1)\ndf_sub_final = df_sub_final.round(2)\ndf_sub_final\n\n# Sorting the dataframes as per the hours spent\n\ndf_main_final = df_main_final.sort_values('AvgHrsMenMain', ascending=True)\n\n# Re-setting the Index\n\ndf_main_final = df_main_final.reset_index()\ndf_sub_final = df_sub_final.reset_index()\n\n# Trimming the data to format the dataframe\n\ndf_main_final['Main_Activity'] = df_main_final['Main_Activity'].str.strip()\ndf_sub_final['Sub_Activity'] = df_sub_final['Sub_Activity'].str.strip()\n\n# Combining the Students and Elderly dataframes to obtain a consolidated dataframe\n\ndf_age_data = pd.concat([df_students,df_elderly], axis = 1)\ndf_age_data = df_age_data.reset_index()\ndf_age_data = df_age_data.rename(columns={'index': 'Main_Activity'})\ndf_age_data = df_age_data.set_index('Main_Activity')\ndf_age_data", "4 | Visualizing the Data\nAfter processing the data, we have visualized it at different levels such as gender, age group, main activity/sub-activity, and geographical distribution to obtain a holistic understanding from all perspectives in order to create an effective targeting strategy plan as per the product requirements. 
\n4.1 | Visualizing the time spent on main activities at gender level", "# Plotting the Main Activity data as per the Average time\n\ncolortemp = [colors[x] for x in list(range(2,4))]\ndf_main_final = df_main_final.set_index(['Main_Activity'])\nax = df_main_final[['AvgHrsMenMain','AvgHrsWomenMain']].plot(kind='barh', title =\"AVERAGE TIME SPENT ON MAIN ACTIVITIES (2011-2015) AT GENDER LEVEL\", figsize=(8,8), legend=True, fontsize=10, color = colortemp )\nax.set_xlabel(\"AVERAGE HOURS FROM 2011 TO 2015\", fontsize=12)\nax.set_ylabel(\"MAIN ACTIVITIES\", fontsize=12)\nL=plt.legend(loc = 'lower right')\nL.get_texts()[0].set_text('Men')\nL.get_texts()[1].set_text('Women')\nplt.show()", "From the visualization above, we see that majority of the Americans (both genders) tend to spend most time on the following activities on an average:\n\nPersonal Care Activities\nWorking and Work related Activities\nEducational Activities\nLeisure and Sports\n\nAs the above visualization provides an idea regarding the top activities, we wanted to see this at a yearly level to further highlight the significance of these activities. By understanding the trends over the years, we can forecast as to which activities will pose relevance in future as well.\n4.2 | Visualizing trends for main activities through years 2011-2015\nWe can see the fluctuations in average hours per activity through each year from 2011 to 2015 for men and women. It is important to note these changes in trends to channelize advertisements/promotions in the right category for maximized results in the future as well (aiming for forecasting predictions).", "# Plotting the view for main activities at yearly level\n\ndf_men = df_main_final.filter(regex=\"Men+\",axis=1)\ndf_women = df_main_final.filter(regex=\"Women+\",axis=1)\nfig, ax = plt.subplots(nrows=1,ncols=2,figsize=(8,8))\n\n# Creating a loop so that yearly data can be mapped\n\ni = 0\nfor Activity in list(df_men.index):\n ax[0].plot(df_men.loc[Activity].tolist()[:-1],\"ko\",markersize=5)\n ax[0].plot(df_men.loc[Activity].tolist()[:-1],linestyle='dashed',label=Activity,color=colors[i+1])\n ax[1].plot(df_women.loc[Activity].tolist()[:-1],\"ko\",markersize=5)\n ax[1].plot(df_women.loc[Activity].tolist()[:-1],linestyle='dashed',label=Activity,color=colors[i+1])\n if i>=9:\n i=0\n else:\n i+=1\nax[0].legend(loc=\"upper left\", fontsize=10,framealpha=0.2,bbox_to_anchor=(2.5, 1))\nfig.subplots_adjust(wspace=0.3, hspace=0)\nax[0].set_title(\"Year level Main Activity trends (Men)\")\nax[1].set_title(\"Year level Main Activity trends (Women)\")\n\nax[0].set_xticks(list(range(0,5)))\nax[0].set_xticklabels(list(range(2011,2016)))\nax[1].set_xticks(list(range(0,5)))\nax[1].set_xticklabels(list(range(2011,2016)))\n\nxaxis1 = ax[0].get_xaxis()\nxaxis2 = ax[1].get_xaxis()\n\nfor ticks1,ticks2 in zip(xaxis1.get_ticklabels(),xaxis2.get_ticklabels()):\n ticks1.set_rotation(45)\n ticks1.set_color('k')\n ticks2.set_rotation(45)\n ticks2.set_color('k')\n\nplt.show()", "We observe from the yearly trends that - \n+ Time spent by both the genders has been gradually increasing from 2011 to 2015. 
This illustrates that a large emphasis is placed by individuals on personal care.\n+ Trends corresponding to Educational activities show a dip in the year 2014 for both genders, but it is seen that they slowly pick back up in 2015.\n+ Overall, the non-top-4 activities are consistent in their yearly trends, which indicates that placing higher emphasis on them may not be especially beneficial.\n4.3 | Which activity is the primary focus for each gender?\n From the below visualization we see the gender breakdown for some of the key primary activities. \nThis further provides an idea about which activity is of higher focus for each gender so that a gender-localized targeting strategy can be developed.\n\nWe clearly notice that though the amount of time spent is in almost the same bracket for most of the activities, women tend to invest more time in household activities. \nMen tend to spend more time indulging in leisure activities.\nThe observations above correlate with conventional gender norms as well. Though we see both genders involved in work-related activities too, this disparity of activities is still distinct.", "# Creating the visualization to distinctly identify important activities for each gender\n\nfig, ax = plt.subplots(nrows=1,ncols=2, figsize=(6,6))\nlocx = list(range(0,10))\nbarwidth = 0.35\nfor loc, Activity in zip(range(len(df_men)),list(df_men.index)):\n bar1, = ax[0].barh(loc,round(df_men.loc[Activity][\"AvgHrsMenMain\"],2),label=Activity,color = colors[loc+1])\n ax[0].text(bar1.get_width()+1.5,bar1.get_y()+barwidth/2,bar1.get_width(),ha='left', va='bottom')\n bar2, = ax[1].barh(loc,round(df_women.loc[Activity][\"AvgHrsWomenMain\"],2),label=Activity,color = colors[loc+1])\n ax[1].text(bar2.get_width()*1.05,bar2.get_y()+barwidth/2,bar2.get_width())\nax[0].set_xticks([])\nax[0].set_yticks([])\nax[1].set_xticks([])\nax[1].set_yticks([])\nax[0].spines['top'].set_visible(False)\nax[0].spines['left'].set_visible(False)\nax[0].spines['bottom'].set_visible(False)\nax[0].spines['right'].set_visible(False)\nax[1].spines['top'].set_visible(False)\nax[1].spines['right'].set_visible(False)\nax[1].spines['bottom'].set_visible(False)\n\nax[0].invert_xaxis() # this invert mirrors the left panel so the two genders can be compared back-to-back\nax[0].set_title(\"Average Hours for Men\")\nax[1].set_title(\"Average Hours for Women\")\nbox1 = ax[0].get_position()\nbox2 = ax[1].get_position()\nfig.subplots_adjust(wspace=0.01, hspace=0)\nl = ax[1].legend(loc=\"lower right\", fontsize=10,framealpha=0.6, markerscale=5,labelspacing=0.1,borderpad=0.1\n ,bbox_to_anchor=(2.2, -.1))\nplt.show()", "4.4 | Does age play any role in the time spent on activities?\n Following the understanding from the gender-level analysis, we wanted to add another layer of analysis by viewing this data based on the age-group demographic. \nFrom the below stacked visualization we see the age-group breakdown for some of the key primary activities. As expected, sleeping takes the cake followed by Leisure and Sports. This further ties to the on-going analysis where it is best to cater to all age groups via TV or Internet or Social Media. 
Knowing that this platform is highly effective, it is extremely sensible to invest thoroughly in this domain and effectively grow the sales.", "# Plotting the stacked bar chart for each age bracket\n\ndf_age_data = df_age_data.sort_values('Ages 15-49', ascending=False)\ncolortemp = [colors[x] for x in list(range(2,6))]\nax = df_age_data[['Ages 15-49','Ages 55-64', 'Ages 65-74', 'Ages 75+']].plot(kind='bar', stacked = True, color = colortemp,title =\"AVERAGE TIME SPENT IN ACTIVITIES BY AGE GROUP (2011-2015)\", figsize=(8, 8), legend=True, fontsize=10,rot=30)\nax.set_xlabel(\"MAIN ACTIVITIES\", fontsize=12)\nax.set_ylabel(\"AVERAGE HOURS FROM 2011 TO 2015\", fontsize=12)\nplt.show()", "4.5 | Deep-Diving and visualizing data at sub-activity level for the top 4 main activities\nFrom all of the above visualizations, we can concretely see that few of the primary activities really stand out. Knowing this, we wanted to further delve into the data to decipher which sub-activities (within the above mentioned top 4 main activities) do Americans perform most. This granular analysis of the data will provide a deeper insight to more locally target the relevant customer base.\nWe created main activity level segments to organize so that we can correlate easily.", "# Filtering out the sub-activities of the Top 4 main activities\n\ntop_list = ['Sleeping', 'Grooming', 'Health-related self care', 'Personal activities', 'Travel related to personal care', 'Working',\n 'Work-related activities', 'Other income-generating activities', 'Job search and interviewing', 'Travel related to work',\n 'Attending class', 'Homework and research', 'Travel related to education', 'Socializing, relaxing, and leisure', \n 'Sports, exercise, and recreation', 'Travel related to leisure and sports']\n\ndf_sub_final_u = df_sub_final[df_sub_final['Sub_Activity'].isin(top_list)]\ndf_sub_final_u = df_sub_final_u.reset_index()\n\n# Merging the sub-activity data with main activity to create categories/segements\n\ndata = {'Main_Activity': ['Personal care activities', 'Personal care activities', 'Personal care activities','Personal care activities','Personal care activities', 'Working and work-related activities', 'Working and work-related activities','Working and work-related activities', 'Working and work-related activities', 'Working and work-related activities',\n'Educational activities', 'Educational activities', 'Educational activities', 'Leisure and sports', 'Leisure and sports', 'Leisure and sports']} \ndf_main_act = pd.DataFrame(data)\n\ndf_sub_final_q = pd.merge(df_sub_final_u, df_main_act, left_index=True, right_index=True)\ndf_sub_final_q.drop('index', axis=1, inplace=True)\n\n# Cleaning and sorting the data \n\ndf_sub_final_Personal_Care = df_sub_final_q.groupby(['Main_Activity']).get_group('Personal care activities')\ndf_sub_final_Personal_Care = df_sub_final_Personal_Care.set_index(['Sub_Activity'])\ndf_sub_final_Personal_Care = df_sub_final_Personal_Care.sort_values('AvgHrsMenSub', ascending=False)\n\ndf_sub_final_Educational = df_sub_final_q.groupby(['Main_Activity']).get_group('Educational activities')\ndf_sub_final_Educational = df_sub_final_Educational.set_index(['Sub_Activity'])\ndf_sub_final_Educational = df_sub_final_Educational.sort_values('AvgHrsMenSub', ascending=False)\n\ndf_sub_final_Leisure_Sports = df_sub_final_q.groupby(['Main_Activity']).get_group('Leisure and sports')\ndf_sub_final_Leisure_Sports = df_sub_final_Leisure_Sports.set_index(['Sub_Activity'])\ndf_sub_final_Leisure_Sports = 
df_sub_final_Leisure_Sports.sort_values('AvgHrsMenSub', ascending=False)\n\ndf_sub_final_Work = df_sub_final_q.groupby(['Main_Activity']).get_group('Working and work-related activities')\ndf_sub_final_Work = df_sub_final_Work.set_index(['Sub_Activity'])\ndf_sub_final_Work = df_sub_final_Work.sort_values('AvgHrsMenSub', ascending=False)\n\n# Plotting the graph at sub-activity level\n\nfig, ax = plt.subplots(nrows=2, ncols=2)\ncolortemp = [colors[x] for x in list(range(4,6))]\ndf_sub_final_Personal_Care[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[0,0], color = colortemp, width=0.5, title =\"AVERAGE TIME SPENT ON PERSONAL CARE ACTIVITIES\", figsize=(15, 10), legend=True,fontsize=10, rot = 30)\ndf_sub_final_Work[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[0,1], color = colortemp, width=0.5, title =\"AVERAGE TIME SPENT ON WORK OR WORK RELATED ACTIVITIES\", figsize=(15, 10), legend=True, fontsize=10,sharey=ax[0,0],rot = 30 )\ndf_sub_final_Educational[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[1,0],color = colortemp, width=0.5, title =\"AVERAGE TIME SPENT ON EDUCATIONAL ACTIVITIES\", figsize=(15, 10), legend=True, fontsize=10,rot = 30 )\ndf_sub_final_Leisure_Sports[['AvgHrsMenSub','AvgHrsWomenSub']].plot(kind='bar', ax=ax[1,1],color = colortemp, width=0.5, title =\"AVERAGE TIME SPENT ON LEISURE & SPORTS ACTIVITIES\", figsize=(15, 10), legend=True, fontsize=10,sharey=ax[1,0],rot = 30)\n\n# Setting the Legend and Axis Labels\n\nL1 = ax[0,0].legend(loc = 'upper right')\nL1.get_texts()[0].set_text('Men')\nL1.get_texts()[1].set_text('Women')\n\nL2 = ax[0,1].legend(loc = 'upper right')\nL2.get_texts()[0].set_text('Men')\nL2.get_texts()[1].set_text('Women')\n\nL3 = ax[1,0].legend(loc = 'upper right')\nL3.get_texts()[0].set_text('Men')\nL3.get_texts()[1].set_text('Women')\n\nL4 = ax[1,1].legend(loc = 'upper right')\nL4.get_texts()[0].set_text('Men')\nL4.get_texts()[1].set_text('Women')\n\nax[0,0].set_ylabel('Average Hours (2011-2015)', fontsize=12)\nax[0,1].set_ylabel('Average Hours (2011-2015)', fontsize=12)\nax[1,0].set_ylabel('Average Hours (2011-2015)', fontsize=12)\nax[1,1].set_ylabel('Average Hours (2011-2015)', fontsize=12)\nax[0,0].set_xlabel('Sub-Activity', fontsize=12)\nax[0,1].set_xlabel('Sub-Activity', fontsize=12)\nax[1,0].set_xlabel('Sub-Activity', fontsize=12)\nax[1,1].set_xlabel('Sub-Activity', fontsize=12)\nfig.tight_layout() # aligns all 4 charts within subplots \nfig.subplots_adjust(wspace=0.05, hspace=0.7)", "We see that majority of the time is spent on either sleeping or working followed by attending class (target student crowd) or socializing, relaxing, and leisure. \nAll of these activities provide a profitable platform to advertise for a certain segment of population and develop a sales strategy. \n\n\nFor instance, since Americans spend an ample amount of time sleeping/resting or working, this could be leveraged such that the potential customer base is rightfully tapped based on the product. \n\n\nLikewise, products catering to students and teens can be branded and publicized in and around academic and college campuses to generate maximum revenue.\n\n\nAs we see from the plot above, socializing, relaxing, and leisure is also a primary activity with potential for convenient targeting. 
We wanted to further look into this category to clearly pin-point the modes by which advertising would be most effective.\n4.6 | Leisure Activity Breakdown\nFrom the below pie chart, we see that most of the time is consumed in:\n+ Watching TV \n+ Socializing and Communicating\n+ Playing Games; Using Computer for Leisure\n+ Sports, exercise, and recreation\nWith the above insight into the type of leisure activities, we see that TV and Internet could pose to be very strategic platforms for exploiting a potential audience.", "# Creating the exploded pie chart\n\nfig, ax = plt.subplots(figsize=(8, 8))\nexplode = [0,0.1,0.2,.4,0.3,0.5,0.6] \ncolor = [colors[x] for x in list(range(1,len(df_leisure_all[\"Minutes\"])+1))]\np,t = ax.pie(list(df_leisure_all[\"Minutes\"]), explode=explode,shadow=True, startangle=90, radius=1.3,colors=color)\nlabels = ['{0} - {1:1.2f}'.format(i,j) for i,j in zip(list(df_leisure_all.index), list(df_leisure_all[\"Minutes\"]))]\nbox1 = ax.get_position()\nax.set_position([box1.x0, box1.y0, box1.width * 0.9, box1.height])\nl = ax.legend(p,labels , loc=\"upper right\", fontsize=12,framealpha=0.2,bbox_to_anchor=(1.8, 1))\nl.get_title().set_position((30, 0))\n\n# Equal aspect ratio ensures that pie is drawn as a circle\n\nax.axis('equal') \nax.set_title(\"AVERAGE TIME SPENT BY AMERICANS IN LEISURE ACTIVITY (IN MINUTES)\",fontsize=15)\nplt.show()", "4.7 | Visualizing time spent on Sports & Leisure activities at a geographic level\n From the below geographic distribution, we see the regions within United States of America with maximum interest in sports related activities \n\nNorthern USA is more inclined towards sports activities with Alaska, Montana, and Wyoming leading the race\nSouthern USA is not very active in this particular domain", "# Geographic Distribution of Sports and Leisure Activities\n\nfor col in df_geo_sports.columns:\n df_geo_sports[col] = df_geo_sports[col].astype(str)\n#shows color gradient as hours increases\nscl = [[0.0, 'rgb(242,240,247)'],[0.2, 'rgb(218,218,235)'],[0.4, 'rgb(188,189,220)'],\\\n [0.6, 'rgb(158,154,200)'],[0.8, 'rgb(117,107,177)'],[1.0, 'rgb(84,39,143)']]\n\ndf_geo_sports['text'] = df_geo_sports['State']\nlayout = dict(\n title = 'Average Time spent on Sports & Leisure (in hours) <br>(Hover for breakdown)',\n geo = dict(\n scope='usa',\n projection=dict( type='albers usa' ),\n showlakes = True,\n lakecolor = 'rgb(255, 255, 255)'),\n )\ndata = [ dict(\n type='choropleth',\n colorscale = scl,\n autocolorscale = False,\n locations = df_geo_sports['code'], # picks 2 digit state code from csv\n z = df_geo_sports['Hours'].astype(float), # picks activity hrs as float\n locationmode = 'USA-states',\n text = df_geo_sports['text'],\n marker = dict(\n line = dict (\n color = 'rgb(255,255,255)',\n width = 2\n ) ),\n colorbar = dict(\n title = \"Time in Hours\") \n ) ]\nfig = dict( data=data, layout=layout )\npy.iplot( fig, filename='d3-cloropleth-map' )", "4.8 | Ad-Hoc Analysis: Time spent on Organizational, Civic, and Religious activities at a geographic level\n We were curious to see if religion and the corresponding activities are very popular among Americans.\nWe notice that religion is more strictly adhered to in the south-eastern part of the country. 
According to this ad-hoc analysis, this type of mentality and mind-set can also be factored into developing lucrative business strategies.", "# Geographic Distribution of Religious and Civic Organization Activities\n\nfor col in df_geo_religion.columns:\n df_geo_religion[col] = df_geo_religion[col].astype(str)\n#shows color gradient as hours increases\nscl = [[0.0, 'rgb(243,205,174)'],[0.3, 'rgb(237,184,140)'],[0.6, 'rgb(227,142,72)'],\\\n [0.9, 'rgb(72,39,11)'],[1.0, 'rgb(38,20,6)']]\n\ndf_geo_religion['text'] = df_geo_religion['State'] \nlayout = dict(\n title = 'Average time spent on Organizational, Civic & Religious Activities (in hours) <br>(Hover for breakdown)',\n geo = dict(\n scope='usa',\n projection=dict( type='albers usa' ),\n showlakes = True,\n lakecolor = 'rgb(255, 255, 255)'),\n )\ndata = [ dict(\n type='choropleth',\n colorscale = scl,\n autocolorscale = False,\n locations = df_geo_religion['code'], # picks 2-digit state code from csv\n z = df_geo_religion['Hours'].astype(float), # picks activity hrs as float\n locationmode = 'USA-states',\n text = df_geo_religion['text'],\n marker = dict(\n line = dict (\n color = 'rgb(255,255,255)',\n width = 2\n ) ),\n colorbar = dict(\n title = \"Time in Hours\")\n ) ]\nfig = dict( data=data, layout=layout )\npy.iplot( fig, filename='d3-cloropleth-map' )", "5 | Conclusion\nAfter looking at several visualizations and analyzing the data, we see that most of the time spent by Americans is on the following activities (ordered from highest to lowest time spent):\n\nPersonal Care\nWork-related\nEducational\nLeisure and Sports\n\nIt is seen that age also plays a major role in determining the type of activities Americans indulge in. For instance, folks within the age group of 15-49 tend to be more involved in educational or work activities. In order to target such individuals, the strategies should be focused around work or academic zones as this would lead to maximum impact. \nFurther, all age groups indulge in a bit of leisure time. It is seen that a bit more than 50% of this leisure time is spent on television, which indicates that it is the most lucrative bet for promotional and marketing activities for any type of product. Of course, a further analysis into the demographics of the audience that watches television at different times of the day would provide deeper insight into the type of commercials and products that should be marketed. \nFinally, looking at the data from a geographic point of view, we observed that the mid-west part of the United States is more oriented towards leisure/sports activities than the rest of the country. This kind of detail would add great value in further understanding the habits that Americans have and ultimately developing an effective sales strategy." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
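The delimiter clean-up for the BLS chart files above (chained str.replace calls that collapse runs of tabs and pipes into a single '|') can also be written as one regular-expression substitution. A minimal sketch, assuming the same chart4.txt layout with four header and four footer lines:

import re

with open("chart4.txt") as fh:
    lines = fh.readlines()[4:-4]            # drop the 4 header and 4 footer lines

text = "".join(lines)
text = re.sub(r"[\t|]+", "|", text)         # collapse any run of tabs/pipes into one '|' delimiter
text = text.replace("Household activities", "Household/Educational Activities")

with open("newfile.txt", "w") as fh:
    fh.write(text)

The resulting newfile.txt can then be parsed with pd.read_csv(..., delimiter="|") exactly as in the notebook.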
WNoxchi/Kaukasos
FADL1/lesson1-dogbreeds.ipynb
mit
[ "Lesson 1 Dogbreeds CodeAlong", "%reload_ext autoreload\n%autoreload 2\n%matplotlib inline\n\nfrom fastai.imports import *\nfrom fastai.torch_imports import *\nfrom fastai.transforms import *\nfrom fastai.model import *\nfrom fastai.dataset import *\nfrom fastai.sgdr import *\nfrom fastai.plots import *\nfrom fastai.conv_learner import *\n\nPATH = \"data/dogbreeds/\"\nsz = 224\narch = resnext101_64\nbs = 64\n\nlabel_csv = f'{PATH}labels.csv'\nn = len(list(open(label_csv)))-1\nval_idxs = get_cv_idxs(n)\n\nval_idxs, n, len(val_idxs)", "2. Initial Exploration", "!ls {PATH}\n\nlabel_df = pd.read_csv(label_csv)\nlabel_df.head()\n\n# use Pandas to create pivot table which shows how many of each label:\nlabel_df.pivot_table(index='breed', aggfunc=len).sort_values('id', ascending=False)\n\ntfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)\ndata = ImageClassifierData.from_csv(PATH, folder='train', csv_fname=f'{PATH}labels.csv', \n test_name='test', val_idxs=val_idxs, suffix='.jpg',\n tfms=tfms, bs=bs)\n\nfn = PATH + data.trn_ds.fnames[0]; fn\n\nimg = PIL.Image.open(fn); img\n\nimg.size\n\nsize_d = {k: PIL.Image.open(PATH + k).size for k in data.trn_ds.fnames}\n\nrow_sz, col_sz = list(zip(*size_d.values()))\n\nrow_sz = np.array(row_sz); col_sz = np.array(col_sz)\n\nrow_sz[:5]\n\nplt.hist(row_sz);\n\nplt.hist(row_sz[row_sz < 1000])\n\nplt.hist(col_sz);\n\nplt.hist(col_sz[col_sz < 1000])\n\nlen(data.trn_ds), len(data.test_ds)\n\nlen(data.classes), data.classes[:5]", "3. Initial Model\nstarting w/ small images, large batch sizes to train model v.fast in beginning; increase image size and decrease batch-size as go along.", "def get_data(sz, bs):\n tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)\n data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv', test_name='test', \n num_workers=4, val_idxs=val_idxs, suffix='.jpg', \n tfms=tfms, bs=bs)\n return data if sz > 300 else data.resize(340, 'tmp')", "3.1 Precompute", "data = get_data(sz, bs)\n\nlearn = ConvLearner.pretrained(arch, data, precompute=True) # GTX870M;bs=64;sz=224;MEM:2431/3017\n\nlearn.fit(1e-2, 5)", "3.2 Augment", "from sklearn import metrics\n\n# data = get_data(sz, bs)\n\nlearn = ConvLearner.pretrained(arch, data, precompute=True, ps=0.5)\n\nlearn.fit(1e-2, 2)\n\nlrf = learn.find_lr()\nlearn.sched.plot()\n\n# turn precompute off then use dataug\nlearn.precompute = False\n\nlearn.fit(1e-2, 5, cycle_len=1)\n\nlearn.save('224_pre')\n\nlearn.load('224_pre')", "3.3 Increase Size\n\nIf you train smth on a smaller size, you can call learn.set_data() and pass in a larger sized dataset. That'll take your model, however it's trained so far, and continue to train on larger images.\nThis is another way to get SotA results. 
Starting training on small images for a few epochs, then switching to larger images and continuing training is an amazing effective way to avoid overfitting.\n\nJ.Howard (paraphrased)\nNOTE: Fully-Convolutional Architectures only.", "learn.set_data(get_data(299, bs=32))\nlearn.freeze() # just making all but last layer already frozen\n\nlearn.fit(1e-2, 3, cycle_len=1) # precompute is off so DataAugmentation is back on\n\nlearn.fit(1e-2, 3, cycle_len=1, cycle_mult=2)\n\nlog_preds, y = learn.TTA()\nprobs = np.exp(log_preds)\naccuracy(log_preds, y), metrics.log_loss(y, probs)\n\nlearn.save('299_pre')\n\n# learn.load('299_pre')\n\nlearn.fit(1e-2, 1, cycle_len=2)\n\nlearn.save('299_pre')\n\nlog_preds, y = learn.TTA()\nprobs = np.exp(log_preds)\naccuracy(log_preds, y), metrics.log_loss(y, probs)\n\nSUBM = f'{PATH}subm/'\nos.makedirs(SUBM, exist_ok=True)\ndf.to_csv(f'{SUBM}subm.gz', compression='gzip', index=False)\n\nFileLink(f'{SUBM}subm.gz')", "6. Individual Prediction", "fn = data.val_ds.fnames[0]\n\nfn\n\nImage.open(PATH+fn).resize((150,150))\n\ntrn_tfms, val_tfms = tfms_from_model(arch, sz)\n\nlearn = ConvLearner.pretrained(arch, data)\nlearn.load('299_pre')\n\n# ds = FilesIndexArrayDataset([fn], np.array([0]), val_tfms, PATH)\n# dl = DataLoader(ds)\n# preds = learn.predict_dl(dl)\n# np.argmax(preds)\n\nim = trn_tfms(Image.open(PATH+fn))\npreds = to_np(learn.model(V(T(im[None]).cuda())))\nnp.argmax(preds)\n\ntrn_tfms, val_tfms = tfms_from_model(arch, sz)\n\nim = val_tfms(Image.open(PATH+fn)) # or could apply trn_tfms(.)\npreds = learn.predict_array(im[None]) # index into image as[None] to create minibatch of 1 img\nnp.argmax(preds)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
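The progressive-resizing idea described in the dog-breeds notebook above (train at 224 px, then continue at 299 px with a smaller batch size) can be expressed as a simple loop over (size, batch) pairs. This is only a compact restatement of the calls already used there, assuming learn and get_data are defined as in the notebook:

# Progressive-resizing schedule, sketched as a loop over the same calls used above
for size, batch in [(224, 64), (299, 32)]:
    learn.set_data(get_data(size, batch))   # swap in a dataset at the new image size
    learn.freeze()                          # keep all but the last layer group frozen
    learn.fit(1e-2, 3, cycle_len=1)         # a few epochs with SGDR restarts at this size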
jazracherif/algorithms
knapsack/knapsack.ipynb
mit
[ "Knapsack\nIn this programming problem and the next you'll code up the knapsack algorithm from lecture.\nLet's start with a warm-up using file knapsack1.txt\nThis file describes a knapsack instance, and it has the following format:\n[knapsack_size][number_of_items]\n[value_1] [weight_1]\n[value_2] [weight_2]\n...\nFor example, the third line of the file is \"50074 659\", indicating that the second item has value 50074 and size 659, respectively.\nYou can assume that all numbers are positive. You should assume that item weights and the knapsack capacity are integers.\nIn the box below, type in the value of the optimal solution.\nADVICE: If you're not getting the correct answer, try debugging your algorithm using some small test cases. And then post them to the discussion forum!", "import numpy as np\n\nfile = \"knapsack1.txt\"\n\nfp = open(file, 'r+')\n\ndata = fp.readlines()\nW, n = data[0].split(\" \")\nW, n = int(W), int(n)\n\nv = []\nw = []\n\nfor r in data[1:]:\n v_i, w_i = r.split(\" \")\n v.append(int(v_i))\n w.append(int(w_i))\n\n\n\nA = np.zeros([n, W+1])\n\nfor i in range(n):\n for x in range(W+1):\n if x >= w[i]:\n A[i,x]= max(A[i-1,x], A[i-1,x-w[i]]+v[i])\n else:\n A[i,x]= A[i-1,x]\n\nprint (A)\n ", "Problem 2\nThis problem also asks you to solve a knapsack instance, but a much bigger one.\nUse the text file below knapsack_big.txt\nThis file describes a knapsack instance, and it has the following format:\n[knapsack_size][number_of_items]\n[value_1] [weight_1]\n[value_2] [weight_2]\n...\nFor example, the third line of the file is \"50074 834558\", indicating that the second item has value 50074 and size 834558, respectively. As before, you should assume that item weights and the knapsack capacity are integers.\nThis instance is so big that the straightforward iterative implemetation uses an infeasible amount of time and space. So you will have to be creative to compute an optimal solution. One idea is to go back to a recursive implementation, solving subproblems --- and, of course, caching the results to avoid redundant work --- only on an \"as needed\" basis. Also, be sure to think about appropriate data structures for storing and looking up solutions to subproblems.\nIn the box below, type in the value of the optimal solution.\nADVICE: If you're not getting the correct answer, try debugging your algorithm using some small test cases. 
And then post them to the discussion forum!", "file = \"knapsack_big.txt\"\n\nfp = open(file, 'r+')\n\ndata = fp.readlines()\nW, n = data[0].split(\" \")\nW, n = int(W), int(n)\n\nv = []\nw = []\n\nfor r in data[1:]:\n v_i, w_i = r.split(\" \")\n v.append(int(v_i))\n w.append(int(w_i))\n\n", "A recursive implementation of the knapsack algorithm with caching. The base case accounts for item 0 directly, and an item that exactly fills the remaining capacity (_w == w[i]) is allowed.", "\nimport sys\nsys.setrecursionlimit(2500)\n\ncache = dict()\ndef knap(i, _w):\n# print (i, _w)\n key = str(i)+\"-\"+str(_w)\n\n if i == 0:\n # item 0 contributes its value whenever it fits in the remaining capacity\n cache[key] = v[0] if _w >= w[0] else 0\n return cache[key]\n \n if _w >= w[i]:\n key1 = str(i-1)+\"-\"+str(_w - w[i])\n key2 = str(i-1)+\"-\"+str(_w)\n \n if key1 in cache and key2 in cache:\n a1 = cache[key1]\n a2 = cache[key2]\n cache[key] = max(v[i]+a1, a2)\n elif key1 in cache:\n a1 = cache[key1]\n cache[key] = max(v[i]+a1, knap(i-1, _w))\n elif key2 in cache:\n a2 = cache[key2]\n cache[key] = max(v[i]+knap(i-1,_w-w[i]), a2)\n else:\n cache[key] = max(v[i]+knap(i-1,_w-w[i]), knap(i-1, _w))\n else:\n key2 = str(i-1)+\"-\"+str(_w)\n if key2 in cache:\n cache[key] = cache[key2]\n else:\n cache[key] = knap(i-1,_w)\n \n return cache[key]\n\n\n\nknap(n-1,W)\n\n\nprint (cache[str(n-1)+\"-\"+str(W)])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
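The hand-rolled dictionary cache in the knapsack notebook above can also be expressed with functools.lru_cache handling the memoisation. A small self-contained sketch with toy data (not the course input files); note the <= test so that an item which exactly fills the remaining capacity is still allowed:

import sys
from functools import lru_cache

v = [60, 100, 120]   # example item values
w = [10, 20, 30]     # example item weights
W = 50               # example knapsack capacity

sys.setrecursionlimit(10000)

@lru_cache(maxsize=None)
def best(i, cap):
    # Maximum value achievable using items 0..i with remaining capacity `cap`.
    if i < 0:
        return 0
    skip = best(i - 1, cap)
    if w[i] <= cap:
        return max(skip, v[i] + best(i - 1, cap - w[i]))
    return skip

print(best(len(v) - 1, W))   # 220 for this toy instance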
phockett/ePSproc
docs/doc-source/methods/geometric_method_dev_260220.ipynb
gpl-3.0
[ "Method development for geometric functions\n26/02/20\nAims:\n\nDevelop $\\beta_{L,M}$ formalism.\nDevelop corresponding numerical methods.\nSpeed things up (see low-level benchmarking notebook).\nAnalyse geometric terms.\n\nSetup", "# Imports\nimport numpy as np\nimport pandas as pd\nimport xarray as xr\nfrom functools import lru_cache # For function result caching\n\n# Special functions\n# from scipy.special import sph_harm\nimport spherical_functions as sf\nimport quaternion\n\n# Performance & benchmarking libraries\nfrom joblib import Memory\nimport xyzpy as xyz\nimport numba as nb\n\n# Timings with ttictoc\n# https://github.com/hector-sab/ttictoc\nfrom ttictoc import TicToc\n\n# Package fns.\n# For module testing, include path to module here\nimport sys\nimport os\nmodPath = r'D:\\code\\github\\ePSproc'\nsys.path.append(modPath)\nimport epsproc as ep\n# TODO: tidy this up!\nfrom epsproc.util import matEleSelector\nfrom epsproc.geomFunc import geomCalc", "Exploring Wigner 3js\nIn photoionization calculations, there is a lot of angular momentum coupling to deal with. Typically, 4 to 6 Wigner 3j terms appear (depending on the formalism), and/or higher-order terms in cases where couplings are included.\n\\begin{equation}\nW = \\left(\\begin{array}{ccc}\nl & l' & L\\\nm & m' & M\n\\end{array}\\right)\n\\end{equation}\nSince this is, effectively, a 6D space, dimensions $(l_{max}, l_{max}, 2l_{max}, 2l_{max}+1, 2l_{max}+1, 4l_{max}+1)$ things can get large quickly. For small $l_{max}$ it's easy to look at some values directly...\n(For more details on 3j symbols, see Wikipedia; for more on the numerics, see the test notebook, and benchmarks.)", "# Calculate some values.\n# w3jTable will output all values up to l=lp=Lmax (hence L=2Lmax)\nlmax = 1\nw3jlist = geomCalc.w3jTable(Lmax = Lmax, form = '2d') # For form = '2d', the function will output only valid entries as a coordinate table\n\nprint(w3jlist.shape)\nprint(f'Max value: {w3jlist[:,-1].max()}, min value: {w3jlist[:,-1].min()}\\n')\n\n# Print the table - output format has rows (l, lp, L, m, mp, M, 3j)\nprint(w3jlist)\n\n# Recalculate and set to Xarray output format, then plot with ep.lmPlot()\n\nw3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM')\n\n# Check number of valid entries matches basic table above\nprint(f'Number of valid (non-NaN) elements: {w3j.count()}')\n\n# Set parameters to restack the Xarray into (L,M) pairs\nplotDimsRed = ['l', 'm', 'lp', 'mp']\nxDim = {'LM':['L','M']}\n\n# Plot with ep.lmPlot(), real values\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r')\n\n# Print out values by QNs (Pandas table)\ndaPlotpd", "This ends up as a relatively sparse array, since many combinations are invalid (do not follow angular momentum selection rules), hence there are many NaN terms.\nThe results can also be output as a 6D sparse array, using the Sparse library.", "# Calculate and output in Sparse array format\nw3jSparse = geomCalc.w3jTable(Lmax = lmax, form = 'ndsparse')\nw3jSparse", "Here nnz is the number of non-zero elements.", "# Try a larger Lmax and plot only.\nlmax = 3\nw3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM')\n\n# Check number of valid entries matches basic table above\nprint(f'Number of valid (non-NaN) elements: {w3j.count()}')\n\n# Set parameters to restack the Xarray into (L,M) pairs\nplotDimsRed = ['l', 'm', 'lp', 'mp']\nxDim = {'LM':['L','M']}\n\n# Plot with ep.lmPlot(), real values\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, 
xDim=xDim, pType = 'r')\n\n# Resort axis by (l,lp)\nplotDimsRed = ['l', 'lp', 'm', 'mp']\nxDim = {'LM':['L','M']}\n\n# Plot with ep.lmPlot(), real values\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r')\n\n# A complementary visulization is to call directly the sns.clustermap plot, use clustering and plot by category labels - see https://seaborn.pydata.org/index.html\n# (ep.lpPlot uses a modified version of this routine.)\nep.snsMatMod.clustermap(daPlotpd.fillna(0), center=0, cmap=\"vlag\", row_cluster=True, col_cluster=True)", "This clearly shows that the valid terms become sparser at higher $l$, and the couplings become smaller.\nStructure can also be examined using other methods, e.g. correlation functions (see, for example, Seaborn Discovering structure in heatmap data. The example here shows Panda's standard Pearson correlation coefficient, which may (or may not) be particularly meaningful here... but does show structures.", "ep.snsMatMod.clustermap(daPlotpd.fillna(0).T.corr(), center=0, cmap=\"vlag\", row_cluster=True, col_cluster=True)\n\n# Test some other Seaborn methods... these likely won't scale well for large lmax!\nimport seaborn as sns\n\n# Recalculate for small lmax\nlmax = 1\nw3j = geomCalc.w3jTable(Lmax = lmax, form = 'xdaLM')\n\n# Set parameters to restack the Xarray into (L,M) pairs\nplotDimsRed = ['l', 'm', 'lp', 'mp']\nxDim = {'LM':['L','M']}\n\n# Plot with ep.lmPlot(), real values\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r')\n\n# Try sns pairplot\n# sns.pairplot(daPlotpd.fillna(0).T) # Big grids!\nsns.pairplot(daPlotpd.fillna(0)) # OK, not particularly informative\n# sns.pairplot(daPlotpd.fillna(0), hue = 'l') # Doesn't work - multindex issue?", "$E_{P,R}$ tensor\nThe coupling of two 1-photon terms can be written as a tensor contraction:\n\\begin{equation}\nE_{PR}(\\hat{e})=[e\\otimes e^{}]{R}^{P}=[P]^{\\frac{1}{2}}\\sum{p}(-1)^{R}\\left(\\begin{array}{ccc}\n1 & 1 & P\\\np & R-p & -R\n\\end{array}\\right)e_{p}e_{R-p}^{}\\label{eq:EPR-defn-1}\n\\end{equation}\nWhere $e_{p}$ and $e_{R-p}$ define the field strengths for the polarizations $p$ and $R-p$, which are coupled into the spherical tensor $E_{PR}$.", "# Calculate EPR terms, all QNs, with field strengths e = 1\nEPRtable = geomCalc.EPR(form = '2d')\n\n# Output values as list, [l, lp, P, p, R-p, R, EPR]\nprint(EPRX)", "As before, we can visualise these values...", "lmax = 1\nEPRX = geomCalc.EPR(form = 'xarray')\n\n# Set parameters to restack the Xarray into (L,M) pairs\nplotDimsRed = ['l', 'p', 'lp', 'R-p']\nxDim = {'LM':['P','R']}\n\n# Plot with ep.lmPlot(), real values\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(w3j, plotDims=plotDimsRed, xDim=xDim, pType = 'r')", "Testing betaTerm()\n[See notebook on Bemo for additonal details]", "Lmax = 1\nBLMtable = geomCalc.betaTerm(Lmax = Lmax, form = 'xdaLM') # Output as stacked Xarray\n\nBLMtable\n\nplotDimsRed = ['l', 'm', 'lp', 'mp']\nxDim = {'LM':['L','M']}\n# daPlot = BLMtable.unstack().stack({'LM':['L','M']})\n# ep.lmPlot(w3jXcombMult.unstack().stack({'LM':['l','m']}), plotDims=['lp', 'L', 'mp', 'M'], xDim='LM', SFflag = False)\n# daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable.unstack().stack(xDim), plotDims=plotDimsRed, xDim=xDim, SFflag = False, squeeze = False)\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, SFflag = False, squeeze = False)\n# ep.lmPlot(w3jXcombMult.unstack(), plotDims=['lp', 'L', 'mp', 'M'], 
xDim='L', SFflag = False)\n\n# daPlotpd = daPlot.unstack().stack(plotDim = plotDimsRed).to_pandas().dropna(axis = 1).T\n\n# daPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable.unstack().stack(xDim), plotDims=plotDimsRed, xDim=xDim, pType = 'r')\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, pType = 'r')\n\ndaPlotpd\n\n# NOW FIXED\n# Test PD conversion - seems to be giving issue for lmPlot() routine here.\n# daPlotpd = daPlot.stack(plotDim = plotDimsRed).to_pandas().T\n# daPlotpd = daPlot.stack(plotDim = plotDimsRed).to_pandas().dropna(axis = 0).T # Drop na here seems to remove everything - might be issue in lmPlot\n\n# daPlotpd = daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').to_pandas().T # This seems to reduce NaNs OK\n# daPlotpd = daPlot.stack(xDim).stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').to_pandas().T # This seems to reduce NaNs OK\n\n# daPlotpd\n\n# Test reductions by # of non-Nan elements\n# print(daPlot.count())\n# print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='any').count()) # how='any' will drop all elements it seems.\n# print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').count()) \n# print(daPlot.stack(plotDim = plotDimsRed).dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').count())\nprint(daPlot.count())\nprint(daPlot.dropna(dim='plotDim', how='any').count()) # how='any' will drop all elements it seems.\nprint(daPlot.dropna(dim='plotDim', how='all').count()) \nprint(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').count())\n\n# Test correlation fns.\n# This fails with NaNs present it seems\n# ep.snsMatMod.clustermap(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').fillna(0).to_pandas().T.corr())\n# ep.snsMatMod.clustermap(daPlot.dropna(dim='plotDim', how='all').dropna(dim='LM',how='all').fillna(0).to_pandas().corr())\nep.snsMatMod.clustermap(daPlotpd.fillna(0).T.corr())\nep.snsMatMod.clustermap(daPlotpd.fillna(0).corr())\n\n# Switch plotting dims\nplotDimsRed = ['L','M']\nxDim = {'llpmmp':['l', 'm', 'lp', 'mp']}\ndaPlot, daPlotpd, legendList, gFig = ep.lmPlot(BLMtable, plotDims=plotDimsRed, xDim=xDim, mMax = 2, pType = 'r')", "Other plotting methods...\nHoloviews\nShould be a good option, but previously had issues with multlevel coords, so may need to do some work here.", "import holoviews as hv\n\nhv_ds = hv.Dataset(daPlotpd.unstack())", "Pandas", "daPlotpd.plot(kind = 'bar')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
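The 3j tables built with geomCalc.w3jTable above can be spot-checked against SymPy's exact Wigner 3j implementation, together with the selection rules (triangle rule and m + mp + M = 0) that make most of the table sparse. A small illustrative cross-check, independent of the ePSproc package:

from sympy.physics.wigner import wigner_3j

def allowed(l, lp, L, m, mp, M):
    # Triangle rule on (l, lp, L) plus projection rule on (m, mp, M)
    return (abs(l - lp) <= L <= l + lp) and (m + mp + M == 0)

l, lp, L, m, mp, M = 1, 1, 2, 0, 0, 0
print(allowed(l, lp, L, m, mp, M))      # True
print(wigner_3j(l, lp, L, m, mp, M))    # sqrt(2/15) ~ 0.365, exact symbolic value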
PLOS/allofplos
allofplos/allofplos_basics.ipynb
mit
[ "Examples of basic allofplos functions", "import datetime\nfrom allofplos.plos_regex import (validate_doi, show_invalid_dois, find_valid_dois)\nfrom allofplos.samples.corpus_analysis import (get_random_list_of_dois, get_all_local_dois,\n get_all_plos_dois)\nfrom allofplos.corpus.plos_corpus import (get_uncorrected_proofs, get_all_solr_dois)\nfrom allofplos import Article", "Get example DOIs: get_random_list_of_dois()", "example_dois = get_random_list_of_dois(count=10)\nexample_doi = example_dois[0]\narticle = Article(example_doi)\nexample_file = article.filepath\nexample_url = article.url\nprint(\"Three ways to represent an article\\nArticle as DOI: {}\\nArticle as local file: {}\\nArticle as url: {}\" \\\n .format(example_doi, example_file, example_url))\n\nexample_corrections_dois = ['10.1371/journal.pone.0166537',\n '10.1371/journal.ppat.1005301',\n '10.1371/journal.pone.0100397']\n\nexample_retractions_dois = ['10.1371/journal.pone.0180272',\n '10.1371/journal.pone.0155388',\n '10.1371/journal.pone.0102411']\n\nexample_vor_doi = '10.1371/journal.ppat.1006307'\nexample_uncorrected_proofs = get_uncorrected_proofs()", "Validate PLOS DOI format: validate.doi(string), show_invalid_dois(list)", "validate_doi('10.1371/journal.pbio.2000797')\n\nvalidate_doi('10.1371/journal.pone.12345678') # too many trailing digits\n\ndoi_list = ['10.1371/journal.pbio.2000797', '10.1371/journal.pone.12345678', '10.1371/journal.pmed.1234567']\nshow_invalid_dois(doi_list)", "Check if a DOI resolves correctly: article.check_if_doi_resolves()", "article = Article('10.1371/journal.pbio.2000797') # working DOI\narticle.check_if_doi_resolves()\n\narticle = Article('10.1371/annotation/b8b66a84-4919-4a3e-ba3e-bb11f3853755') # working DOI\narticle.check_if_doi_resolves()\n\narticle = Article('10.1371/journal.pone.1111111') # valid DOI structure, but article doesn't exist\narticle.check_if_doi_resolves()", "Check if uncorrected proof: article.proof", "article = Article(next(iter(example_uncorrected_proofs)))\narticle.proof\n\narticle = Article(example_vor_doi)\narticle.proof", "Find PLOS DOIs in a string: find_valid_dois(string)", "find_valid_dois(\"ever seen 10.1371/journal.pbio.2000797, it's great! 
or maybe 10.1371/journal.pone.1234567?\")", "Get article pubdate: article.pubdate", "# returns a datetime object\narticle = Article(example_doi)\narticle.pubdate\n\n# datetime object can be transformed into any string format\narticle = Article(example_doi)\ndates = article.get_dates(string_=True, string_format='%Y-%b-%d')\nprint(dates['epub'])", "Check (JATS) article type of article file: article.type_", "article = Article(example_doi)\narticle.authors\n\narticle = Article(example_corrections_dois[0])\narticle.type_\n\narticle = Article(example_retractions_dois[0])\narticle.type_", "Get related DOIs: article.related_dois\nFor corrections and retractions, get the DOI(s) of the PLOS articles being retracted or corrected.", "article = Article(example_corrections_dois[0])\narticle.related_dois\n\narticle = Article(example_retractions_dois[0])\narticle.related_dois", "Working with many articles at once\nGet list of every article DOI indexed on the PLOS search API, Solr: get_all_solr_dois()", "solr_dois = get_all_solr_dois()\nprint(len(solr_dois), \"articles indexed on Solr.\")", "Get list of every PLOS article you have downloaded: get_all_local_dois()", "all_articles = get_all_local_dois()\nprint(len(all_articles), \"articles on local computer.\")", "Combine local and solr articles: get_all_plos_dois()", "plos_articles = get_all_plos_dois()\n\ndownload_updated_xml('allofplos_xml/journal.pcbi.0030158.xml')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
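The validate_doi behaviour shown in the allofplos notebook above can be illustrated with a simplified regular expression for the common journal-style PLOS DOIs. This is only a sketch of the idea and not the package's actual pattern (which also handles annotation-style DOIs and other corner cases):

import re

PLOS_JOURNAL_DOI = re.compile(r"^10\.1371/journal\.p[a-z]{3}\.\d{7}$")

def looks_like_plos_doi(doi):
    # Accepts e.g. 10.1371/journal.pbio.2000797; rejects malformed trailing digits
    return bool(PLOS_JOURNAL_DOI.match(doi))

print(looks_like_plos_doi("10.1371/journal.pbio.2000797"))    # True
print(looks_like_plos_doi("10.1371/journal.pone.12345678"))   # False (too many digits)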
CELMA-project/CELMA
derivations/boundaries/cauchyBC/cauchyBC2ndOrderDerivative.ipynb
lgpl-3.0
[ "Derivation of the cauchy BC\nWe would like to derive the cauchy BC, which for a field $f$ reads\n$$\nf(0) = a\\\n\\partial_z f\\big|_0 = b\n$$\nusing a second order approximation for the derivative.\nWARNING: This scheme is only first order convergent", "from IPython.display import display\nfrom sympy import init_printing\nfrom sympy import symbols, as_finite_diff, solve, latex\nfrom sympy import Function, Eq\n\nfg, f0, f1, f2 = symbols('f_g, f_0, f_1, f_2')\nz, h = symbols('z, h')\na, b = symbols('a, b')\nf = Function('f')\n\ninit_printing()", "Extrapolation of $f(0) = a$ to the ghost point yields (see ghost4thOrder for calculation) yields", "extraPolate = Eq(fg, 16*a/5 - 3*f0 + f1 - f2/5)\ndisplay(extraPolate)", "Which can be rewritten to", "eq1 = Eq(0, extraPolate.rhs - extraPolate.lhs)\ndisplay(eq1)", "Furthermore a second order FD of $\\partial_z f\\big|_0 = b$ reads", "deriv = as_finite_diff(f(z).diff(z), [z-h/2, z+h/2])\nderiv = Eq(b ,deriv.subs([(f(z-h/2), fg),\\\n (f(z+h/2), f0),\\\n ]).together())\ndisplay(deriv)", "Which can be rewritten to", "eq2 = Eq(0, deriv.rhs - deriv.lhs)\ndisplay(eq2)", "Thus", "full = Eq(eq1.rhs, eq2.rhs)\ndisplay(full)\n\nfullSolvedForFg = Eq(fg, solve(full, fg)[0].collect(symbols('f_0, f_1, f_2, h'), exact=True).simplify())\ndisplay(fullSolvedForFg)\n\nprint(latex(fullSolvedForFg))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
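Carrying the final solve of the Cauchy boundary-condition notebook above through by hand: equating the two zero-expressions and isolating $f_g$ gives, up to how SymPy chooses to collect terms,

$$
f_{g}=\frac{f_{0}+h\left(3f_{0}-f_{1}+\frac{f_{2}}{5}-\frac{16a}{5}-b\right)}{1-h}
=\frac{5f_{0}+h\left(15f_{0}-5f_{1}+f_{2}-16a-5b\right)}{5\left(1-h\right)}
$$

As $h\rightarrow0$ this reduces to $f_{g}\rightarrow f_{0}$, consistent with the derivative condition $b=(f_{0}-f_{g})/h$ forcing the ghost point and the first inner point together.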
Vettejeep/Data-Analysis-and-Data-Science-Projects
K Means and the UCI Wholesale Data Set.ipynb
gpl-3.0
[ "K-Means and the UCI Wholesale Customer Data Set\nKevin Maher\n<span style=\"color:blue\">Vettejeep365@gmail.com</span>\nImports needed for the script. Uses Python 2.7.13, numpy 1.11.3, pandas 0.19.2, sklearn 0.18.1, scipy 0.18.1, matplotlib 2.0.0.", "%matplotlib inline\n\nimport pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.cluster import KMeans\nfrom scipy.spatial.distance import cdist\nimport matplotlib.pyplot as plt", "Read in the data, it is from the UCI Wholesale Customer Dataset at:\nhttps://archive.ics.uci.edu/ml/datasets/wholesale+customers.", "df = pd.read_csv('Wholesale customers data.csv')", "Create a feature for total customer size. Note: 'Delicassen' misspelled in original data file.", "df['Total'] = df['Fresh'] + df['Milk'] + df['Grocery'] + df['Frozen'] + df['Detergents_Paper'] + df['Delicassen']\nprint df.head()", "Add a function to convert and join dummy variables to the model. If source df = dest df then remember to delete the original catagorical variable.", "def get_dummies(source_df, dest_df, col):\n dummies = pd.get_dummies(source_df[col], prefix=col)\n\n print 'Quantities for %s column' % col\n for col in dummies:\n print '%s: %d' % (col, np.sum(dummies[col]))\n print\n\n dest_df = dest_df.join(dummies)\n return dest_df", "Process dummy variables for the 'Channel' and 'Region' features. Drop original categorical feature, plus drop one of the dummy variables for 'leave one out' encoding.", "df = get_dummies(df, df, 'Channel')\ndf.drop(['Channel', 'Channel_2'], axis=1, inplace=True)\ndf = get_dummies(df, df, 'Region')\ndf.drop(['Region', 'Region_3'], axis=1, inplace=True)\ndf.rename(index=str, columns={'Channel_1': 'Channel_Horeca', 'Region_1': 'Region_Lisbon', 'Region_2': 'Region_Oporto'},\n inplace=True)\nprint df.head()", "Plot a histogram of customer size, shows a small number of large customers, many smaller customers.", "plt.hist(df['Total'], bins=32)\nplt.xlabel('Total Purchases')\nplt.ylabel('Number of Customers')\nplt.title('Histogram of Customer Size')\nplt.show()\nplt.close()", "Scale the data so no category dominates due to numeric scale.", "sc = StandardScaler()\nsc.fit(df)\nX = sc.transform(df)", "Set up a plotting function for K Means output.", "def plot_kmeans(pred, centroids, x_name, y_name, x_idx, y_idx, k):\n for i in range(0, k):\n plt.scatter(df[x_name].loc[pred == i], df[y_name].loc[pred == i], s=6,\n c=colors[i], marker=markers[i], label='Cluster %d' % (i + 1))\n\n centroids = sc.inverse_transform(kmeans.cluster_centers_)\n plt.scatter(centroids[:, x_idx], centroids[:, y_idx],\n marker='x', s=180, linewidths=3,\n color='k', zorder=10)\n\n plt.xlabel(x_name)\n plt.ylabel(y_name)\n plt.legend()\n plt.show()\n plt.close()", "Set up some markers and colors.", "markers = ('s', 'o', 'v', '*', 'D', '+', 'p', '<', '>', 'x')\ncolors = ('C0', 'C1', 'C2', 'C3', 'C4', 'C5', 'C6', 'C7', 'C8', 'C9')", "Try K Means with k = 3 (Mainly because when I previously studied this using R, 3 produced interesting results). The value for 'k' will be set from a distortion graph later in the notebook.", "k=3\nkmeans = KMeans(n_clusters=k)\nkmeans.fit(X)\npred = kmeans.predict(X)\n\nfor i in range(0, k):\n x = len(pred[pred == i])\n print 'Cluster %d has %d members' % ((i + 1), x)", "Use an inverse transform on the centroids. 
Needed because the centroids were calculated on the scaled data and we would like the centroids to plot correctly with the original data.", "centroids = sc.inverse_transform(kmeans.cluster_centers_)\n\nplot_kmeans(pred, centroids, 'Frozen', 'Detergents_Paper', 3, 4, k)", "Commentary: The presence of a few large customers makes plot with more clusters hard to interpret because the mass of smaller customers is pushed into the lower right corner of the graph above. Though large customer behavior is interesting, especially in a business setting, here I will focus on smaller customers. A look at the histogram above appears to make a total sales level of 75,000 euros a reasonable break point. As an aside, when I tried this in R, the K Means algorithm tended to produce two large customer clusters and one for the great mass of smaller customers. The plot above was chosen because it is representative of a pattern in the data - namely that there are customers who buy one type of product from our wholesale client but do not buy other types of products. Keeping this Jupyter notebook to a reasonable size precludes showing all possible combinations of product pairs as graphs here. \nSet up the data structures for the smaller customers. We need to rescale the data because of the elimination of larger customers.", "df = df.loc[df['Total'] <= 75000]\n\nsc = StandardScaler()\nsc.fit(df)\nX = sc.transform(df)", "Plot a distortion or elbow graph to help chose an optimal value for k.", "K = range(1, 20)\nmean_distortions = []\nfor k in K:\n np.random.seed(555)\n kmeans = KMeans(n_clusters=k, init='k-means++')\n kmeans.fit(X)\n mean_distortions.append(sum(np.min(cdist(X, kmeans.cluster_centers_, 'euclidean'), axis = 1))/ X.shape[0])\n\nplt.plot(K, mean_distortions, 'bx-')\nplt.xlabel('k')\nplt.ylabel('Average distortion')\nplt.title('Selecting K w/ Elbow Method')\nplt.show()\nplt.close()", "Distortion plots often have elbow points that are difficult to interpret, but k = 6 looks like it might make for an interesting set of clusters and plots.", "np.random.seed(555) # sets seed, makes it repeatable\nk = 6\nkmeans = KMeans(n_clusters=k) # , init='random')\nkmeans.fit(X)\npred = kmeans.predict(X)\n\nfor i in range(0, k):\n x = len(pred[pred == i])\n print 'Cluster %d has %d members' % ((i + 1), x)", "Plot K Means result with centroids.", "centroids = sc.inverse_transform(kmeans.cluster_centers_)\nplot_kmeans(pred, centroids, 'Frozen', 'Detergents_Paper', 3, 4, k)", "The pattern seen in the full data set also shows up amongst the set of customers with less than 75,000 euros in total sales. There are, for example, a number of customers who buy detergents and paper from our client but not frozen goods, the inverse also appears to be true. 
A business domain expert could be consulted in order to try and determine if this is characteristic of the needs of these customers, or whether there are missed opportunities for our client's marketing department.\nThe pattern noted is especially apparent in clusters 1, 3 and 5; let's see what we can learn about these clusters.\nA function to print cluster data.", "def print_cluster_data(cluster_number):\n print '\\nData for cluster %d' % cluster_number\n cluster = df.loc[pred == cluster_number - 1, :]\n # print cluster1.head()\n num_in_cluster = float(len(cluster.index))\n num_horeca = float(np.sum(cluster['Channel_Horeca']))\n num_retail = float(num_in_cluster - num_horeca)\n print 'Percent Horeca: %.2f, Percent Retail: %.2f' % \\\n (num_horeca / num_in_cluster * 100.0, num_retail / num_in_cluster * 100.0)\n\n num_lisbon = float(np.sum(cluster['Region_Lisbon']))\n num_oporto = float(np.sum(cluster['Region_Oporto']))\n num_other = num_in_cluster - num_lisbon - num_oporto\n print 'Percent Lisbon: %.2f, Percent Oporto: %.2f, Percent Other: %.2f' % \\\n (num_lisbon / num_in_cluster * 100.0, num_oporto / num_in_cluster * 100.0, num_other / num_in_cluster * 100.0)\n\n avg_cust_size = np.sum(cluster['Total']) / num_in_cluster\n print 'Average Customer Size is: %.2f for %d Customers' % (avg_cust_size, num_in_cluster)", "Print out data for clusters 1, 3 and 5.", "print_cluster_data(cluster_number=1)\nprint_cluster_data(cluster_number=3)\nprint_cluster_data(cluster_number=5)", "Commentary: clusters 3 and 5 are dominated by retail customers; in an actual business setting we might want to investigate why they buy lots of detergents and paper from our client but not much in the way of frozen goods. Cluster 1 represents customers from the \"Horeca\" (hotel, restaurant, cafe) distribution channel. These customers tend to buy frozen goods but not detergents and paper, perhaps because they sell food and only use detergents and paper as maintenance supplies. Clustering can help in a marketing segmentation analysis by identifying types and groups of customers. \nI hope that you have enjoyed my example of using Python and K Means to identify some of the patterns in the UCI Wholesale Customers data set." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
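Alongside the elbow/distortion plot used in the K-Means notebook above, the silhouette score is a common complementary way to compare values of k. A self-contained sketch with synthetic blob data standing in for the scaled wholesale features:

from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X_demo, _ = make_blobs(n_samples=400, centers=5, random_state=555)

for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=555).fit_predict(X_demo)
    # Mean silhouette closer to 1 means tighter, better-separated clusters
    print(k, round(silhouette_score(X_demo, labels), 3))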
gtzan/mir_book
Clustering using Gaussian Mixture Models and PCA.ipynb
cc0-1.0
[ "Gaussian Mixture Models and Principal Component Analysis\nIn this notebook I show how Gaussian Mixture Models can be used to cluster data. In addition the use of Principal Component Analysis for dimensionality reduction is also investigated. Both of these examples are based on examples provided by scikit-learn using the Iris dataset. The main change is that I use two datasets consisting of audio features for the task of genre classification. They are calculated using Marsyas, an open source software for audio analysis. The first dataset used in the GMM code contains just two song level features (average spectral centroid and average spectral rolloff). That way the data can be visualized directly with a scatter plot. The points are colored in terms of their class membership. There are three genres each represented by a 100 tracks (instances) or points in this case. The genres are classical, jazz and metal. \nThe GMM is used to cluster the points without taking into account their genre membership. As can be seen it does a pretty decent job of recovering the underlying genre structure.", "import matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n\nimport numpy as np\n\nfrom sklearn import datasets\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.decomposition import PCA\n", "Load the data file and function for drawing ellipses for each of the 3 GMM components.", "colors = ['navy', 'turquoise', 'darkorange']\n\n\n# draw ellipses for each GMM type \ndef make_ellipses(gmm, ax):\n for n, color in enumerate(colors):\n if gmm.covariance_type == 'full':\n covariances = gmm.covariances_[n][:2, :2]\n elif gmm.covariance_type == 'tied':\n covariances = gmm.covariances_[:2, :2]\n elif gmm.covariance_type == 'diag':\n covariances = np.diag(gmm.covariances_[n][:2])\n elif gmm.covariance_type == 'spherical':\n covariances = np.eye(gmm.means_.shape[1]) * gmm.covariances_[n]\n v, w = np.linalg.eigh(covariances)\n u = w[0] / np.linalg.norm(w[0])\n angle = np.arctan2(u[1], u[0])\n angle = 180 * angle / np.pi # convert to degrees\n v = 2. * np.sqrt(2.) 
* np.sqrt(v)\n ell = mpl.patches.Ellipse(gmm.means_[n, :2], v[0], v[1],\n 180 + angle, color=color)\n ell.set_clip_box(ax.bbox)\n ell.set_alpha(0.5)\n ax.add_artist(ell)\n\n\n \n(X, y) = datasets.load_svmlight_file(\"data/3genres.arff.libsvm\")\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ntarget_names = ['classical', 'jazz', 'metal']", "Now let's visualize with some plots different types of GMM based on the constrains placed on the covariance matrix.", "\n# Break up the dataset into non-overlapping training (75%) and testing\n# (25%) sets.\nskf = StratifiedKFold(n_splits=4)\n# Only take the first fold.\ntrain_index, test_index = next(iter(skf.split(X, y)))\n\nX_train = X[train_index]\ny_train = y[train_index]\nX_test = X[test_index]\ny_test = y[test_index]\n\nn_classes = len(np.unique(y_train))\n\n# Try GMMs using different types of covariances.\nestimators = dict((cov_type, GaussianMixture(n_components=n_classes,\n covariance_type=cov_type, max_iter=20, random_state=0))\n for cov_type in ['spherical', 'diag', 'tied', 'full'])\n\nn_estimators = len(estimators)\n\nplt.figure(figsize=(3 * n_estimators // 2, 6))\nplt.subplots_adjust(bottom=.01, top=0.95, hspace=.15, wspace=.05,\n left=.01, right=.99)\n\nfor index, (name, estimator) in enumerate(estimators.items()):\n # Since we have class labels for the training data, we can\n # initialize the GMM parameters in a supervised manner.\n estimator.means_init = np.array([X_train[y_train == i].mean(axis=0)\n for i in range(n_classes)])\n\n # Train the other parameters using the EM algorithm.\n estimator.fit(X_train)\n\n h = plt.subplot(2, n_estimators // 2, index + 1)\n make_ellipses(estimator, h)\n\n for n, color in enumerate(colors):\n data = X[y == n]\n plt.scatter(data[:, 0], data[:, 1], s=0.8, color=color,\n label=target_names[n])\n # Plot the test data with crosses\n for n, color in enumerate(colors):\n data = X_test[y_test == n]\n plt.scatter(data[:, 0], data[:, 1], marker='x', color=color)\n\n y_train_pred = estimator.predict(X_train)\n train_accuracy = np.mean(y_train_pred.ravel() == y_train.ravel()) * 100\n plt.text(0.05, 0.9, 'Train accuracy: %.1f' % train_accuracy,\n transform=h.transAxes)\n\n y_test_pred = estimator.predict(X_test)\n test_accuracy = np.mean(y_test_pred.ravel() == y_test.ravel()) * 100\n plt.text(0.05, 0.8, 'Test accuracy: %.1f' % test_accuracy,\n transform=h.transAxes)\n\n plt.xticks(())\n plt.yticks(())\n plt.title(name)\n\nplt.legend(scatterpoints=1, loc='lower right', prop=dict(size=12))\nplt.show() \n \n ", "Principal Component Analysis\nIn PCA the data is projected in a lower dimensional space. In this case the data loaded contains a full set of features. There are 300 instances and each instances has 120 features. After PCA we retain the first three PCA directions effectively reducing the dimensionality from 120 to 3. 
As can be seen from the 3D scatter plot, \nwhere the points are colored according to their corresponding genre, the structure of the data in terms of genre is still visible.", "(X, y) = datasets.load_svmlight_file(\"data/3genres_full.arff.libsvm\")\nX = X.toarray()\nX = MinMaxScaler().fit_transform(X)\ntarget_names = ['classical', 'jazz', 'metal']\nprint(X.shape)\n\n# To get a better understanding of the interaction of the dimensions\n# plot the first three PCA dimensions\nfig = plt.figure(1, figsize=(8, 6))\nax = Axes3D(fig, elev=-150, azim=110)\nX_reduced = PCA(n_components=3).fit_transform(X)\nprint(X_reduced.shape)\nax.scatter(X_reduced[:, 0], X_reduced[:, 1], X_reduced[:, 2], c=y,\n cmap=plt.cm.Set1, edgecolor='k', s=40)\nax.set_title(\"First three PCA directions\")\nax.set_xlabel(\"1st eigenvector\")\nax.w_xaxis.set_ticklabels([])\nax.set_ylabel(\"2nd eigenvector\")\nax.w_yaxis.set_ticklabels([])\nax.set_zlabel(\"3rd eigenvector\")\nax.w_zaxis.set_ticklabels([])\n\nplt.show()\n\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.22/_downloads/eea7e38645d4176f944e2f8d02a34fde/plot_run_ica.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute ICA components on epochs\nICA is fit to MEG raw data.\nWe assume that the non-stationary EOG artifacts have already been removed.\nThe sources matching the ECG are automatically found and displayed.\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example does quite a bit of processing, so even on a\n fast machine it can take about a minute to complete.</p></div>", "# Authors: Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.preprocessing import ICA, create_ecg_epochs\nfrom mne.datasets import sample\n\nprint(__doc__)", "Read and preprocess the data. Preprocessing consists of:\n\nMEG channel selection\n1-30 Hz band-pass filter\nepoching -0.2 to 0.5 seconds with respect to events\nrejection based on peak-to-peak amplitude", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\n\nraw = mne.io.read_raw_fif(raw_fname)\nraw.pick_types(meg=True, eeg=False, exclude='bads', stim=True).load_data()\nraw.filter(1, 30, fir_design='firwin')\n\n# peak-to-peak amplitude rejection parameters\nreject = dict(grad=4000e-13, mag=4e-12)\n# longer + more epochs for more artifact exposure\nevents = mne.find_events(raw, stim_channel='STI 014')\nepochs = mne.Epochs(raw, events, event_id=None, tmin=-0.2, tmax=0.5,\n reject=reject)", "Fit ICA model using the FastICA algorithm, detect and plot components\nexplaining ECG artifacts.", "ica = ICA(n_components=0.95, method='fastica').fit(epochs)\n\necg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5)\necg_inds, scores = ica.find_bads_ecg(ecg_epochs, threshold='auto')\n\nica.plot_components(ecg_inds)", "Plot properties of ECG components:", "ica.plot_properties(epochs, picks=ecg_inds)", "Plot the estimated source of detected ECG related components", "ica.plot_sources(raw, picks=ecg_inds)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/test-institute-3/cmip6/models/sandbox-1/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: TEST-INSTITUTE-3\nSource ID: SANDBOX-1\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:46\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'test-institute-3', 'sandbox-1', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. 
Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDecribe transport scheme if different than that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are speficied from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. 
N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic level are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. 
Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
crowd-course/datascience
examples/Gradient.ipynb
mit
[ "Code for Gradient disent\nCode for calculating local minima for $$f(x,y)= x^4 + y^4 - x^2 - y^2$$ with partial derivative wrt x : $$4x^3 - 2x$$ and partial derivative wrt y: $$4y^3 - 2y$$", "max_iter = 1000\nx_o=0\ny_o=0\nalpha = 0.01 ## Step Size\nx_k=2 ## Starting position of x coordinate \ny_k=2 ## Starting position of y coordinate \n\n\n", "Here we set the Max iterations as 1000 and starting coordinte in ($x_k,y_k$) = (2,2) and Step size as 0.01 donated by alpha", "def devx(x): ## Defining partial derivative wrt x\n return 4*x**3 - 2*x\ndef devy(y): ## Defining partial derivation wrt y\n return 4*y**3 -2*y\nfor i in range(max_iter):\n x_o = x_k\n y_o = y_k\n x_k = x_o - alpha * devx(x_o)\n y_k = y_o - alpha * devy(y_o)\n\nprint \"Local Minimum at\",x_k,\",\",y_k", "Here We define 2 functions devx(x) and devy(y) as the partial derivative with respect to x:$$4x^3 - 2x$$ and partial derivative with respect to y: $$4y^3 - 2y$$ respectively . In the following loop we calculate the local minima using equations : ." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
flaviostutz/datascience-snippets
kaggle-lung-cancer-approach2/.ipynb_checkpoints/01-nodule-segmentation-prepare-checkpoint.ipynb
mit
[ "Train nodule detector with LUNA16 dataset", "INPUT_DIR = '../../input/'\nOUTPUT_DIR = '../../output/lung-cancer/01/'\nIMAGE_DIMS = (50,50,50,1)\n\n%matplotlib inline\nimport numpy as np\nimport pandas as pd\nimport h5py\nimport matplotlib.pyplot as plt\nimport sklearn\nimport os\nimport glob\n\nfrom modules.logging import logger\nimport modules.utils as utils\nfrom modules.utils import Timer\nimport modules.logging\nimport modules.cnn as cnn\nimport modules.ctscan as ctscan", "Analyse input data\nLet us import annotations", "annotations = pd.read_csv(INPUT_DIR + 'annotations.csv')\ncandidates = pd.read_csv(INPUT_DIR + 'candidates.csv')\n\nprint(annotations.iloc[1]['seriesuid'])\nprint(str(annotations.head()))\nannotations.info()\n\nprint(candidates.iloc[1]['seriesuid'])\nprint(str(candidates.head()))\ncandidates.info()\n\nprint(len(candidates[candidates['class'] == 1]))\nprint(len(candidates[candidates['class'] == 0]))", "Lets take a look at some images", "scan = ctscan.CTScanMhd(INPUT_DIR, '1.3.6.1.4.1.14519.5.2.1.6279.6001.979083010707182900091062408058')\n\npixels = scan.get_image()\nplt.imshow(pixels[80])\n\npixels = scan.get_subimage((40,40,10), (230,230,230))\nplt.imshow(pixels[40])", "Classes are heaviliy unbalanced, hardly 0.2% percent are positive.\nThe best way to move forward will be to undersample the negative class and then augment the positive class heaviliy to balance out the samples.\nPlan of attack:\n\n\nGet an initial subsample of negative class and keep all of the positives such that we have a 80/20 class distribution\n\n\nCreate a training set such that we augment minority class heavilby rotating to get a 50/50 class distribution", "positives = candidates[candidates['class']==1].index \nnegatives = candidates[candidates['class']==0].index", "Ok the class to get image data works\nNext thing to do is to undersample negative class drastically. Since the number of positives in the data set of 551065 are 1351 and rest are negatives, I plan to make the dataset less skewed. Like a 70%/30% split.", "positives\n\nnp.random.seed(42)\nnegIndexes = np.random.choice(negatives, len(positives)*5, replace = False)\nprint(len(positives))\nprint(len(negIndexes))\n\ncandidatesDf = candidates.iloc[list(positives)+list(negIndexes)]", "Prepare input data\nSplit into test train set", "from sklearn.cross_validation import train_test_split\nX = candidatesDf.iloc[:,:-1]\nY = candidatesDf.iloc[:,-1]\nX_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.20, random_state = 42)\n\n#print(str(X_test))\n#print(str(Y_test))", "Create a validation dataset", "X_train, X_val, Y_train, Y_val = train_test_split(X_train, Y_train, test_size = 0.20, random_state = 42)\n\nprint(len(X_train))\nprint(len(X_val))\nprint(len(X_test))\n\nprint('number of positive cases are ' + str(Y_train.sum()))\nprint('total set size is ' + str(len(Y_train)))\nprint('percentage of positive cases are ' + str(Y_train.sum()*1.0/len(Y_train)))", "We will need to augment the positive dataset like mad! 
Add new keys to X_train and Y_train for augmented data", "tempDf = X_train[Y_train == 1]\ntempDf = tempDf.set_index(X_train[Y_train == 1].index + 1000000)\nX_train_new = X_train.append(tempDf)\ntempDf = tempDf.set_index(X_train[Y_train == 1].index + 2000000)\nX_train_new = X_train_new.append(tempDf)\n\nytemp = Y_train.reindex(X_train[Y_train == 1].index + 1000000)\nytemp.loc[:] = 1\nY_train_new = Y_train.append(ytemp)\nytemp = Y_train.reindex(X_train[Y_train == 1].index + 2000000)\nytemp.loc[:] = 1\nY_train_new = Y_train_new.append(ytemp)\n\nX_train = X_train_new\nY_train = Y_train_new\nprint(len(X_train), len(Y_train))\n\nprint('After undersampling')\nprint('number of positive cases are ' + str(Y_train.sum()))\nprint('total set size is ' + str(len(Y_train)))\nprint('percentage of positive cases are ' + str(Y_train.sum()*1.0/len(Y_train)))\n\nprint(len(X_train))\nprint(len(X_val))\nprint(len(X_test))\nprint(X_train.head())\nprint(Y_train.head())", "Prepare output dir", "utils.mkdirs(OUTPUT_DIR, recreate=True)\nmodules.logging.setup_file_logger(OUTPUT_DIR + 'out.log')\nlogger.info('Dir ' + OUTPUT_DIR + ' created')", "Create HDF5 dataset with input data", "def create_dataset(file_path, x_data, y_data):\n logger.info('Creating dataset ' + file_path + ' size=' + str(len(x_data)))\n file_path_tmp = file_path + '.tmp'\n with h5py.File(file_path_tmp, 'w') as h5f:\n x_ds = h5f.create_dataset('X', (len(x_data), IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), chunks=(1, IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), dtype='f')\n y_ds = h5f.create_dataset('Y', (len(y_data), 2), dtype='f')\n valid = []\n for c, idx in enumerate(x_data.index):\n #if(c>3): break\n d = x_data.loc[idx]\n filename = d[0]\n t = Timer('Loading scan ' + str(filename))\n scan = ctscan.CTScanMhd(INPUT_DIR, filename)\n pixels = scan.get_subimage((d[3],d[2],d[1]), IMAGE_DIMS)\n #add color channel dimension\n pixels = np.expand_dims(pixels, axis=3)\n #plt.imshow(pixels[round(np.shape(pixels)[0]/2),:,:,0])\n #plt.show()\n if(np.shape(pixels) == (50,50,50,1)):\n x_ds[c] = pixels\n y_ds[c] = [1,0]\n if(y_data.loc[idx] == 1):\n y_ds[c] = [0,1]\n valid.append(c)\n else:\n logger.warning('Invalid shape detected in image. Skipping. ' + str(np.shape(pixels)))\n t.stop()\n\n #dump only valid entries to dataset file\n c = 0\n with h5py.File(file_path, 'w') as h5fw:\n x_dsw = h5fw.create_dataset('X', (len(valid), IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), chunks=(1, IMAGE_DIMS[0], IMAGE_DIMS[1], IMAGE_DIMS[2], IMAGE_DIMS[3]), dtype='f')\n y_dsw = h5fw.create_dataset('Y', (len(valid), 2), dtype='f')\n with h5py.File(file_path_tmp, 'r') as h5fr:\n x_dsr = h5fr['X']\n y_dsr = h5fr['Y']\n for i in range(len(x_dsr)):\n if(i in valid):\n x_dsw[c] = x_dsr[i]\n y_dsw[c] = y_dsr[i]\n c = c + 1\n\n os.remove(file_path_tmp)\n \n utils.validate_xy_dataset(file_path, save_dir=OUTPUT_DIR + 'samples/')\n\n#create_dataset(OUTPUT_DIR + 'nodules-train.h5', X_train, Y_train)\n\n#create_dataset(OUTPUT_DIR + 'nodules-validate.h5', X_val, Y_val)\n\ncreate_dataset(OUTPUT_DIR + 'nodules-test.h5', X_test, Y_test)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
omoju/udacityUd120Lessons
Evaluation Metrics.ipynb
gpl-3.0
[ "Lesson 14 - Evaluation Metrics\nTask: Identify Persons Of Interest (POI) for Enron fraud dataset.", "\nimport pickle\nimport sys\nsys.path.append(\"../tools/\")\nfrom feature_format import featureFormat, targetFeatureSplit\n\ndata_dict = pickle.load(open(\"../final_project/final_project_dataset.pkl\", \"r\") )\n\n### first element is our labels, any added elements are predictor\n### features. Keep this the same for the mini-project, but you'll\n### have a different feature list when you do the final project.\nfeatures_list = [\"poi\", \"salary\"]\n\ndata = featureFormat(data_dict, features_list)\nlabels, features = targetFeatureSplit(data)\n\n\nprint len(labels), len(features)", "Create a decision tree classifier (just use the default parameters), train it on all the data. Print out the accuracy. \nTHIS IS AN OVERFIT TREE, DO NOT TRUST THIS NUMBER! Nonetheless, \n- what’s the accuracy?", "from sklearn import tree\nfrom time import time\n\ndef submitAcc(features, labels):\n return clf.score(features, labels)\n\n\n\nclf = tree.DecisionTreeClassifier()\nt0 = time()\nclf.fit(features, labels)\nprint(\"done in %0.3fs\" % (time() - t0))\n\npred = clf.predict(features)\nprint \"Classifier with accurancy %.2f%%\" % (submitAcc(features, labels))", "Now you’ll add in training and testing, so that you get a trustworthy accuracy number. Use the train_test_split validation available in sklearn.cross_validation; hold out 30% of the data for testing and set the random_state parameter to 42 (random_state controls which points go into the training set and which are used for testing; setting it to 42 means we know exactly which events are in which set, and can check the results you get). \n- What’s your updated accuracy?", "from sklearn import cross_validation\n\n\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(features, labels, test_size=0.30, random_state=42)\n\nprint len(X_train), len(y_train)\nprint len(X_test), len(y_test)\n \n\nclf = tree.DecisionTreeClassifier()\nt0 = time()\nclf.fit(X_train, y_train)\nprint(\"done in %0.3fs\" % (time() - t0))\n\npred = clf.predict(X_test)\nprint \"Classifier with accurancy %.2f%%\" % (submitAcc(X_test, y_test))", "How many POIs are in the test set for your POI identifier?\n\n(Note that we said test set! We are not looking for the number of POIs in the whole dataset.)", "numPoiInTestSet = len([p for p in y_test if p == 1.0])\nprint numPoiInTestSet", "If your identifier predicted 0. (not POI) for everyone in the test set, what would its accuracy be?", "from __future__ import division\n\n1.0 - numPoiInTestSet/29", "Aaaand the testing data brings us back down to earth after that 99% accuracy.\nConcerns with Accuracy\n\nIf you have a skewed dataset, as is the case with this dataset\nThe problem might be of such that it is best to err on the side of guessing innocence\nFor another case, you may want to err on the side of predicting guilt, with the hopes that the innocent persons will be cleared through the investigation.\n\nAccuracy is not particularly good if any of these cases apply to you. 
Precision and recall are a better metric for evaluating the performance of the model.\nPicking The Most Suitable Metric\nAs you may now see, having imbalanced classes like we have in the Enron dataset (many more non-POIs than POIs) introduces some special challenges, namely that you can just guess the more common class label for every point, not a very insightful strategy, and still get pretty good accuracy!\nPrecision and recall can help illuminate your performance better. \n- Use the precision_score and recall_score available in sklearn.metrics to compute those quantities.\n- What’s the precision?", "from sklearn.metrics import *\n\nprecision_score(y_test,clf.predict(X_test))", "Obviously this isn’t a very optimized machine learning strategy (we haven’t tried any algorithms besides the decision tree, or tuned any parameters, or done any feature selection), and now seeing the precision and recall should make that much more apparent than the accuracy did.", "recall_score(y_test,clf.predict(X_test))\n\n\n\ny_true = y_test\ny_pred = clf.predict(X_test)\n\ncM = confusion_matrix(y_true, y_pred)\n\nprint \"{:>72}\".format('Actual Class')\nprint \"{:>20}{:>20}{:>20}{:>23}\".format('Predicted', '', 'Positive', 'Negative')\nprint \"{:>20}{:>20}{:>20.3f}{:>23.3f}\".format('', 'Positive', cM[0][0], cM[0][1])\nprint \"{:>20}{:>20}{:>20.3f}{:>23.3f}\".format('', 'Negative', cM[1][0], cM[1][1])\n \n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/tensor2tensor
tensor2tensor/notebooks/hello_t2t-rl.ipynb
apache-2.0
[ "Tensor2Tensor Reinforcement Learning\nThe rl package provides the ability to run model-free and model-based reinforcement learning algorithms.\nCurrently, we support the Proximal Policy Optimization (PPO) and Simulated Policy Learning (SimPLe).\nBelow you will find examples of PPO training using trainer_model_free.py and SimPLe traning using trainer_model_based.py.", "#@title\n# Copyright 2018 Google LLC.\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n\n# https://www.apache.org/licenses/LICENSE-2.0\n\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n!pip install -q tensorflow==1.13.1\n!pip install -q tensorflow_probability==0.6.0\n!pip install -q tensor2tensor==1.13.1\n!pip install -q gym[atari]\n\n# Helper function for playing videos in the colab.\ndef play_video(path):\n from IPython.core.magics.display import HTML\n display_path = \"/nbextensions/vid.mp4\"\n display_abs_path = \"/usr/local/share/jupyter\" + display_path\n !rm -f $display_abs_path\n !ffmpeg -loglevel error -i $path $display_abs_path\n return HTML(\"\"\"\n <video width=\"640\" height=\"480\" controls>\n <source src=\"{}\" type=\"video/mp4\">\n </video>\n \"\"\".format(display_path))", "Play using a pre-trained policy\nWe provide pretrained policies for the following games from the Atari Learning Environment ( ALE) : alien,\namidar,\n assault,\n asterix,\n asteroids,\n atlantis,\n bank_heist,\n battle_zone,\n beam_rider,\n bowling,\n boxing,\n breakout,\n chopper_command,\n crazy_climber,\n demon_attack,\n fishing_derby,\n freeway,\n frostbite,\n gopher,\n gravitar,\n hero,\n ice_hockey,\n jamesbond,\n kangaroo,\n krull,\n kung_fu_master,\n ms_pacman,\n name_this_game,\n pong,\n private_eye,\n qbert,\n riverraid,\n road_runner,\n seaquest,\n up_n_down,\n yars_revenge.\nWe have 5 checkpoints for each game saved on Google Storage. Run the following command get the storage path:", "# experiment_id is an integer from [0, 4].\ndef get_run_dir(game, experiment_id):\n from tensor2tensor.data_generators.gym_env import ATARI_GAMES_WITH_HUMAN_SCORE_NICE\n EXPERIMENTS_PER_GAME = 5\n run_id = ATARI_GAMES_WITH_HUMAN_SCORE_NICE.index(game) * EXPERIMENTS_PER_GAME + experiment_id + 1\n return \"gs://tensor2tensor-checkpoints/modelrl_experiments/train_sd/{}\".format(run_id)\n\nget_run_dir('pong', 2)", "To evaluate and generate videos for a pretrained policy on Pong:", "game = 'pong'\nrun_dir = get_run_dir(game, 1)\n!python -m tensor2tensor.rl.evaluator \\\n --loop_hparams_set=rlmb_long_stochastic_discrete \\\n --loop_hparams=game=$game,eval_max_num_noops=8,eval_sampling_temps=[0.5] \\\n --policy_dir=$run_dir/policy \\\n --eval_metrics_dir=pong_pretrained \\\n --debug_video_path=pong_pretrained \\\n --num_debug_videos=4", "The above command will run a single evaluation setting to get the results fast. We usually run a grid of different settings (sampling temperatures and whether to do initial no-ops). To do that, remove eval_max_num_noops=8,eval_sampling_temps=[0.5] from the command. 
You can override the evaluation settings:\n--loop_hparams=game=pong,eval_max_num_noops=0,eval_sampling_temps=[0.0]\nThe evaluator generates videos from the environment:", "play_video('pong_pretrained/0.avi')", "Train your policy (model-free training)\nTraining model-free on Pong (it takes a few hours):", "!python -m tensor2tensor.rl.trainer_model_free \\\n --hparams_set=rlmf_base \\\n --hparams=game=pong \\\n --output_dir=mf_pong", "Hyperparameter sets are defined in tensor2tensor/models/research/rl.py. You can override them using the hparams flag, e.g.\n--hparams=game=kung_fu_master,frame_stack_size=5\nAs in model-based training, the periodic evaluation runs with timestep limit of 1000. To do full evaluation after training, run:", "!python -m tensor2tensor.rl.evaluator \\\n --loop_hparams_set=rlmf_tiny \\\n --hparams=game=pong \\\n --policy_dir=mf_pong \\\n --debug_video_path=mf_pong \\\n --num_debug_videos=4 \\\n --eval_metrics_dir=mf_pong/full_eval_metrics\n\nplay_video('mf_pong/0.avi')", "Model-based training\nThe rl package offers many more features, including model-based training. For instructions on how to use them, go to our README." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
banneker-aztlan/python-week-2
Part 2/galaxy_phot.ipynb
mit
[ "Simulating Galaxy Observations: Photometry\nIn this exercise, I want to focus on getting comfortable with a few core packages: matplotlib (the default plotting utility), numpy (useful for fast operations with a lot of numbers), and scipy (useful for a lot of built-in methods).\nWe will use these packages this week to do some basic science: simulating a galaxy observation. This will have two components: simulating a photometric observation (the intensity of light as a function of wavelength) and, if time permits, simulating a spectroscopic observation (the intensity of light as a function of position and wavelength).\nPreamble\nWe'll try and run the same block of code as before. This time, I want you to see if you can figure out what packages you'll need to install to get the __builtin__ import to work before moving on. Once you do, try installing it with either conda or pip (depending on your installation version).", "# only necessary if you're running Python 2.7 or lower\nfrom __future__ import print_function\nfrom __builtin__ import range", "Plotting\nBefore we get started, we need to initialize the environment. First, let's import pyplot from matplotlib, which we will use to plot things. Then, we can use one of the \"magic commands\" to enable in-line plotting, which ensures our plots will show up in our actual notebook (rather than externally).", "# import plotting utility and define our naming alias\nfrom matplotlib import pyplot as plt\n\n# plot figures within the notebook rather than externally\n%matplotlib inline", "Let's make some quick plots to see how things look using some fake data. We will use numpy to help us with this.", "# import numpy\nimport numpy as np", "Let's quickly generate some data. We'll start with a \"grid\" of points $\\mathbf{x}$ and compute the corresponding output $\\mathbf{y}$.", "# define a relationship: y = ax + b\na, b = 1., 0. # the trailing decimal guarantees this is a \"float\" rather than \"int\"\n\n# initialize our data\nn = 1000 # number of data points\nx = np.linspace(0., 100., n) # our grid of `n` data points from 0. to 100.\ny = a * x + b # our output y", "Now let's add some noise to our results using numpy's built-in random module.", "# add in noise drawn from a normal distribution\nye = np.random.normal(loc=0., scale=5., size=n) # jitter\nyobs = y + ye # observed result", "Let's see how our results look.", "# plot our results\nplt.plot(x, yobs)\nplt.plot(x, y)\nplt.xlabel('x')\nplt.ylabel('y')\nplt.title('A (Noisy) Line')", "Play around with the parameters above to get some more familiarity with plotting. If you have time, see if you can:\n- Change the original relationship to a quadratic one.\n- Change the type of random noise we are adding to the data.\n- Change the colors used for plotting.\n- Change the \"linestyle\" used for plotting from connected lines to unconnected dots.\n- Change the x and y limits in the plot.\nand any other changes you'd like to experiment with.\nFeel free to use any resources you want to figure this out. For immediate results, try the help function (shown below) or Shift-Tab within a function for in-line documentation. There's also some official documentation online.", "help(plt.plot)", "I always find the default label and axes markers to be too small to easily read (especially when showing people plots). Luckily, it's pretty straightforward to change the plotting defaults for matplotlib to make things easier to read. 
We can override the defaults whenever we plot something (which we'll get to in a bit) or we can just update them all at once (as below).", "# re-defining plotting defaults\nrcParams.update({'xtick.major.pad': '7.0'})\nrcParams.update({'xtick.major.size': '7.5'})\nrcParams.update({'xtick.major.width': '1.5'})\nrcParams.update({'xtick.minor.pad': '7.0'})\nrcParams.update({'xtick.minor.size': '3.5'})\nrcParams.update({'xtick.minor.width': '1.0'})\nrcParams.update({'ytick.major.pad': '7.0'})\nrcParams.update({'ytick.major.size': '7.5'})\nrcParams.update({'ytick.major.width': '1.5'})\nrcParams.update({'ytick.minor.pad': '7.0'})\nrcParams.update({'ytick.minor.size': '3.5'})\nrcParams.update({'ytick.minor.width': '1.0'})\nrcParams.update({'xtick.color': 'k'})\nrcParams.update({'ytick.color': 'k'})\nrcParams.update({'font.size': 30})", "The set of commands above probably didn't work on the first try. What gives? Looking at the error, it's telling us that the name 'rcParams' is not defined in any capacity. This makes some sense: we never defined this variable anywhere. See if you can find out where to import it.\nLet's re-plot our results to see what updating our defaults has changed.", "# plot our results\nplt.figure(figsize=(10, 4))\nplt.plot(x, yobs)\nplt.plot(x, y)\nplt.xlabel('x', fontsize=20, color='darkviolet')\nplt.ylabel('y', fontsize=40, color='red')\nplt.title('A (Noisy) Line with Larger Font', y=1.05, color='navy')", "Now that we've changed the defaults, we notice a number of issues with our plot to do with our font size. This is because all outputs from pyplot are intrinsically drawn on a Figure object. If one of these are not initialized explicitly at the beginning, a default one is created. With our new larger fonts, the default figure feels a little squished. \nPlay around with changing the size of our figure using the commented line. Feel free to also mess around with the arguments passed to the axes labels and titles.\nGalaxies and Emission Lines\nNow that we have some familiarity with plotting, let's move on to a bit more of the science. In the seds folder under data, there are a bunch of files containing galaxy spectral energy distributions (SEDs). Let's load in UGCA 166, a nearby blue compact dwarf galaxy that's actively forming stars.", "# load in text data\ndata = np.loadtxt('seds/brown_UGCA_166_spec.dat')\ndata", "The data we loaded in is a $N \\times 2$-dimensional numpy array, where the first column is the wavelength (measured in Angstroms, $10^{-10}$ meters) while the second column is the relative flux density (energy per time per area per wavelength, i.e. the relative intensity of the light at that particular wavelength).\nArrays are fixed-size data structures in Python that allow you to quickly manipulate lots of numbers. We'll be exploiting them here and you'll definitely be using them if you code in Python more regularly.\nThe way data is currently structured is a bit awkward: the relevant quantities are the columns here, which makes plotting a bit awkward. One possible way to get around this is to make some new variables by (1) iterating through the array (using an implicit for loop) or (2) slicing through the array. Both of these are deomnstrated below.", "# iteration through the array using an implicit for loop (list comprehension)\nprint('Wavelength:', np.array([d[0] for d in data]))\n\n# slice the array along the 0th entry in the 1st dimension (i.e. 
by column, not row)\nprint('Wavelength:', data[:, 0])", "To facilitate plotting and later manipulation, let's redefine our data to instead be 2 $N$-dimensional arrays called wave and fgal_wave. There are a bunch of ways to do this besides the two shown above, but the most direct way is to use numpy's built-in array manipulation functions.\nExtra challenge: can you code up your own custom method to do this? How many lines of code does your method take compared with the one line implementations shown above?", "# take the transpose of the array (N x M) -> (M x N)\nprint(data.T)\nwave, fgal_wave = data.T", "Let's now plot the data to see what our galaxy looks like. Build on the bare-bones example below using the skills you've learned to make a better-looking plot. (At the minimum, please label the axes!)", "# flux density (per wavelength) vs wavelength (angstroms) for UGCA 166\n#plt.plot(wave, fgal_wave)\n#plt.semilogx(wave, fgal_wave)\n#plt.semilogy(wave, fgal_wave)\n#plt.loglog(wave, fgal_wave)", "Looking closely, we see that there are a number of very visible \"spikes\" on the plot. These are particular emission lines associated with atomic transitions. These specific spectral features are a direct result of the energetic photons emitted from all the new/young stars in the galaxy. Try and zoom in on the particular region on the plot where most of these are located. Can you identify any lines in particular based on this list? Some of the most common ones are also defined below.\nExtra challenge: Overplot the line wavelengths on the galaxy spectrum using the plt.vlines function.", "# defining some common emission lines\nha = 6564.6 # H-alpha [A]\nn2 = 6549.86 # NII [A]\no3_1 = 5008.240 # OIII doublet (1) [A]\no3_2 = 4960.295 # OIII doublet (2) [A]\nhb = 4862.7 # H-beta [A]\no2 = 3728.4 # approximate center of (blended) OII doublet [A]\n\n# consolidating results in an array (in order of decreasing wavelength)\nemlines = np.array([ha, n2, o3_1, o3_2, hb, o2])\nemline_names = np.array(['Ha', 'NII', 'OIII (1)', 'OIII (2)', 'Hb', 'OII (1,2)'])\nNlines = len(emlines)", "Before moving on, feel free to load in a few different galaxy spectra from the seds/ folder (the brown_ ones are particularly nice examples) to see how different galaxy spectra look.\nFilter Transmission Curves\nGetting spectra of a galaxy is actually quite expensive and time-consuming. Instead, many large upcoming surveys such as Euclid will be photometric surveys. This just means that instead of observing galaxies as a function of wavelength, they simply count how many (relative) photons they receive in a specific wavelength interval. In other words, they take pictures of the sky in a particular wavelength range!\nUGCA 166 is kinda bizarre within our local neighborhood, but is probably a good example of what galaxies are like at higher redshifts (i.e. earlier times). We would like to simulate what our galaxy would look like in the photometric filters that will be part of the Euclid and LSST surveys.\nFirst we need to extract the relevant filter transmission curves, which tell us how much light is ultimately transmitted through the filter at a particular wavelength. I've stored these in the filters/ folder, so we need to extract them now. See if you can the code below to point to the right place.", "filt_path = '' # file path for the filters folder", "Our previous galaxy file was just a bunch of numbers, which we were able to load in using np.loadtxt. Our list of filters, however, is a bunch of names. 
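(An added aside, not part of the original exercise: np.loadtxt assumes numeric data by default, so pointing it at a file full of names raises a ValueError; you could coax it with something like `np.loadtxt(filt_path + 'Euclid.list', dtype=str)`, but reading the file by hand also works and makes each step explicit.) 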
This means we need to read in the file a bit differently. Let's do this line by line just to be very explicit about it.", "# initialize lists\nfilt_names = [] # empty list\nfilt_files = [] # empty list\n\n# read in our filter names and file paths line-by-line\nfilt_list = open(filt_path+'Euclid.list') # open list of filter files for Euclid+LSST\nfor line in filt_list:\n l = line.strip() # strip out the end-carriage '\\n' if present\n ls = l.split() # split the line into component strings\n filt_names.append(ls[0]) # filter name\n filt_files.append(ls[1]) # filter file\nfilt_list.close() # close file (**ALWAYS REMEMBER TO DO THIS**)\nNfilt = len(filt_files) # number of filters", "Play around with the components above to get a sense of what we just did. What happens if you remove parts or don't close the file? What operations can you do with lists? How about with arrays? Some quick examples are below. We'll be coming back to some of these subtleties later, but please don't hesitate to ask if you have any questions.", "#print(filt_names, filt_names * 2)\n#print(emlines, emlines * 2)\n#print(filt_list)", "Now let's load in our individual filters.", "# initialize our lists\nfw = [] # wavelengths \nfnu = [] # frequencies\nft = [] # transmission\nc = 2.998e18 # speed of light [A/s]\nfor filt in filt_files:\n fpath = filt_path + filt # append filter name to filter path\n temp = np.loadtxt(fpath) # load ASCII text file (wavelength, transmission)\n fw.append(temp[:, 0]) # wavelength ('lambda') [A]\n fnu.append(c / temp[:, 0]) # frequency ('nu') [Hz]\n ft.append(temp[:, 1]) # transmission (fraction from 0. to 1.)", "Try plotting some of these below building off of the basic layout. The basic Figure setup has been initialized along with two possible color schemes and a bunch of keyword arguments to the methods. Try to play around with the styles to see how they change things.", "# initialize figure\nplt.figure(figsize=(16, 5))\n\n# define a sequence of colors\ncolors = ['blue', 'magenta', 'red', 'orange', 'brown', # this works because python implicitly\n 'green', 'teal', 'goldenrod', 'coral', 'black'] # continues bracketed statements\n\n# define our colormap (see: https://matplotlib.org/examples/color/colormaps_reference.html)\ncolor_scale = np.linspace(0, 1, Nfilt)\ncolors = plt.get_cmap('viridis')(color_scale)\n\n# plot results\nfor i in range(Nfilt):\n plt.plot(fw[i], ft[i], ls='-', lw=2, label=filt_names[i], color=colors[i])\nplt.ylim([0, 1.5])\nplt.legend(loc=1, ncol=5, fontsize=16)\nplt.tight_layout()", "What do you think is causing the different features in the $\\lbrace U, G, R, I, Z, Y \\rbrace$ transmission curves and the worse overall levels of transmission relative to the $\\lbrace VIS, Y_w, J_w, H_w \\rbrace$ curves? (Hint: it has something to do with one of the biggest differences between the LSST and Euclid surveys.)\nBefore moving on, let's take a second to examine how we've set up the for loop above in more detail above.\n- The range argument initializes an iterator that goes from [0, Nfilt), where \"[\" signals inclusive (including zero) and \")\" signals exclusive (up to but excluding Nfilt), respectively.\n- We then step through the iterator using i, which takes on values 0, 1, 2, ... up to but excluding Nfilt.\n- For each value of i, we plot the corresponding element of fw, ft, filt_names, and colors.\nAnother way to do this is to step through all these values simultaneously. 
Python allows you to do this using the zip method, which iterates over all zipped quantities simultaneously. Using the example below, see if you can rewrite the for loop above to loop over all quantities (fw, ft, filt_names, and colors) simultaneously. Feel free to also play around with this formatting to get more comfortable with how zip works.", "plt.figure(figsize=(16, 5))\nfor i, filt_x, filt_y in zip(range(Nfilt), fw, ft):\n plt.plot(filt_x, filt_y, ls='-', lw=2, label=filt_names[i], color=colors[i])", "In addition, there's also no need for us to generate a counter using range. Python natively allows us to \"count\" within our loop using the enumerate function, which wraps whatever we're looping over. See if you can use the example below to re-write the for loop above.", "plt.figure(figsize=(16, 5))\nfor i, stuff in enumerate(zip(filt_names, colors)):\n plt.plot(fw[i], ft[i], ls='-', lw=2, label=stuff[0], color=stuff[1])", "Photometry\nWe now want to compute some basic properties of our set of filters. We'll start with the effective wavelength. This can be seen as the approximate \"mean\" wavelength of the filter, accounting for the differing transmission as a function of wavelength. Defining our wavelengths as $\\lambda$, our frequencies as $\\nu$, and our transmission at a particular frequency as $T_\\nu$, the \"standard\" defition for this effective wavelength $\\lambda_{\\textrm{eff}}$:\n$$ \\lambda_{\\textrm{eff}} = \\exp \\left[ \\frac{\\int T_\\nu \\, \\ln \\lambda \\, d(\\ln \\nu)}{\\int T_\\nu \\, d(\\ln \\nu)} \\right] \\quad . $$\nAlthough this definition might seem a bit weird, the basic idea is we want to compromise between averaging as a function of wavelength compared to as a function of frequency, which don't give the same result since $ \\lambda = c / \\nu \\propto \\nu^{-1}$.\nWe can break down this computation into four steps:\n1. compute the integral in the denominator,\n2. compute the integral in the numerator,\n3. exponentiate their ratio, and\n4. iterate over all of our filters.\nPython has a bunch of numerical integration packages available as part of scipy for more general applications. There also is a basic numerical integration tool trapz as part of numpy, which should suffice for our purposes here. An example is shown below.", "# integrate our function from the beginning of the notebook\nplt.plot(x, y) # plot our original function\nplt.fill_between(x, y, color='blue', alpha=0.3) # fill between y and 0 over our x's\ny_area = np.trapz(y, x) # numerically integrate our function\n\nprint('Integral = {0}'.format(y_area))", "Using the example above, see if you can compute (1) the denominator, (2) numerator, and (3) effective wavelength for a particular filter in the style shown below. Then see if you can turn this into an array of effective wavelengths over all filters.", "# denominator\ndenominator = ...\n\n# numerator\nnumerator = ...\n\n# effective wavelength (Schneider et al. 1983; Fukugita et al. 1996)\nfilt_cent = ...", "Extra Challenge: Code up a simple numerical integration scheme by hand.\nExtra Extra Challenge: Use scipy.integrate to do the integral instead.\nIf you need to move on, here's a one-line solution:", "# array of effective wavelengths (compact solution)\nfilt_cent = np.array([np.exp(np.trapz(ft[i] * np.log(fw[i]), np.log(fnu[i])) / \n np.trapz(ft[i], np.log(fnu[i])))\n for i in range(Nfilt)])", "In addition to the effective wavelength, we also want some basic metric of how \"wide\" our filter is. 
There a lot of ways to define this, just as there are a lot of ways to define an \"effective\" wavelength. Here, we will use the 95% interval where most of the transmission is contained. In other words, our \"width\" will be determined by the wavelengths where there's a total of 2.5% transmission remaining on the left/right edges. This is computed below.", "# initializes a Nfilt x 2 array of \"empty\" values\nfilt_bounds = np.empty((Nfilt, 2))\n\n# fraction of total flux from filter (confidence interval)\nfbound = 0.95\nfor i in range(Nfilt):\n cdf = np.cumsum(ft[i]) # compute the cumulative sum over the transmission curve\n cdf /= cdf[-1] # normalize the transmission curve to sum to 1.\n fremain = (1 - fbound) / 2. # amount remaining (fraction) on either end\n \n # compute left bound\n temp = np.abs(cdf - fremain) # absolute value\n idx_left = np.argmin(temp) # find the **index** of the minimum position\n \n # compute right bound\n temp = np.abs(cdf - (1. - fremain)) # absolute value\n idx_right = np.argmin(temp) # find the **index** of the minimum position\n \n # assign bounds to our \n filt_bounds[i] = fw[i][idx_left], fw[i][idx_right] # lower/upper bound [A]\n\n# compute the \"width\" as the difference between the upper and lower 95% bounds\nfilt_width = filt_bounds[:,1] - filt_bounds[:,0] # filter width [A]\n\nprint(filt_width)", "Spend some time breaking down the above snippet of code so that you can (ideally) explicitly summarize what exactly each line is doing and why. One way I like to make sense of unfamiliar code is by pulling it apart and plotting some of the intermediate results. An example of this is shown below.", "i = 3\ncdf = np.cumsum(ft[i])\ncdf /= cdf[-1]\nfremain = (1 - fbound) / 2.\n\nplt.figure(figsize=(16, 4))\nplt.plot(fw[i], ft[i] / max(ft[i]), color='black') # normalize to 1.\nplt.plot(fw[i], cdf, color='blue') # normalize to 1.\n\ntemp = np.abs(cdf - fremain)\nidx_left = np.argmin(temp)\nplt.plot(fw[i], temp, color='red', linestyle='--') # function we will minimize\nplt.vlines(fw[i][idx_left], 0., 1., color='red')\n\ntemp = np.abs(cdf - (1. - fremain))\nidx_right = np.argmin(temp)\nplt.plot(fw[i], temp, color='red', linestyle='--') # function we will minimize\nplt.vlines(fw[i][idx_right], 0., 1., color='red')\n\nplt.xlabel('Wavelength')\nplt.ylabel('Fraction')", "If this type of thing works for you, great! If not, definitely try and find out what general practices/strategies are effective for you since you'll probably be doing a lot of this as you become more involved in coding.\nUsing this information, we can now compute the average photon energy within each filter (i.e. the minimum unit/quantum of energy). One of the seminal findings of quantum mechanics (and the thing that got Einstein the nobel prize) was that photon energy is quantized according to the relation\n$$ E = h \\nu = h c / \\lambda \\quad . $$\nLet's use this relation to compute the (average) energy of an individual photon in each of our filters.", "c = 2.998e18 # speed of light [A/s]\nh = 6.6260755e-27 # Planck constant in erg*s\nephot_cent = h * c / filt_cent # photon energies at effective wavelength [erg]\nprint(ephot_cent)", "Now that we have all these bits and pieces, see if you can plot the average photon energy in each filter as a function of wavelength. 
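(A quick added sanity check on these numbers: at a wavelength of roughly 5000 Angstroms, $E = hc/\\lambda \\approx (6.63\\times10^{-27})(3.0\\times10^{18})/5000 \\approx 4\\times10^{-12}$ erg in these cgs/Angstrom units, which is why the printout below quotes photon energies in units of $10^{-12}$ erg.)\n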
A short example is shown below using plt.errorbar along with a short printout summary that utilizes the round function, but you're welcome to be as creative as you like!", "# figure\nplt.figure(figsize=(16, 6))\nxlow, xhigh = filt_bounds.T\nxe_low, xe_high = filt_cent - xlow, xhigh - filt_cent\nplt.errorbar(filt_cent, ephot_cent, xerr=[xe_low, xe_high], linestyle='none', marker='o')\n\n# printout\nprint('Filter', 'Center[A]', 'Width({0})[A]'.format(fbound), 'Low[A]', 'High[A]', 'E_phot[1e-12 erg]')\nfor i in xrange(Nfilt):\n print(filt_names[i], round(filt_cent[i], 1), filt_width[i], filt_bounds[i][0], \n filt_bounds[i][1], round(ephot_cent[i] * 1e12, 2))", "Extra Challenge: Can you get the axes of the plot to be (semi-)logarithmic rather than linear?\nObserving a Galaxy\nWe are now ready to simulate a basic observation of our original galaxy through our set of filters. First, we need to integrate our galaxy spectrum $S_\\nu$ over each filter. This takes a similar form to our calculation of the effective wavelength:\n$$ F_\\nu = \\frac{\\int T_\\nu \\, S_\\nu \\, d(\\ln \\nu)}{\\int T_\\nu \\, d(\\ln \\nu)} $$\nwhere $S_\\nu$ (flux density per frequency) and $S_\\lambda$ (flux density per wavelength) are related via\n$$ S_\\nu = S_\\lambda \\frac{\\lambda^2}{c} $$ \nusing the nifty result\n$$ d\\nu = d \\left(\\frac{c}{\\lambda}\\right) = - \\frac{c}{\\lambda^2} d\\lambda \\quad . $$", "# compute S_\\nu\nfgal_nu = fgal_wave * wave**2 / c # this works because units are [A] and [A/s]\n\n# plot our result\nplt.loglog(wave, fgal_nu)\nplt.xlabel('Wavelength')\nplt.ylabel(r'$S_\\nu$') # LaTeX-style math; the preceding 'r' \"protects\" the string", "Why this conversion between $S_\\nu$ and $S_\\lambda$? A mix between historical reasons and plotting usefulness (sometimes it's easier to visualize trends as per unit wavelength instead of per unit frequency, and vice versa). Plotting things in $S_\\nu$ tends to highlight behavior in the optical range (which becomes \"flatter\" the more star formation a galaxy tends to have).\nUsing everything we've covered up to this point, we're now ready to simulate our galaxy observation. First, let's integrate our galaxy over the filters to get the relative photometric flux density $F_\\nu$.", "freq = c / wave\nphot = np.array([np.trapz(ft[i] * fgal_nu, np.log(freq)) / \n np.trapz(ft[i], np.log(freq))\n for i in range(Nfilt)])", "Oh no -- it seems we've hit an error! It turns out our galaxy is observed at a different number of wavelengths compared to our original filter. To integrate numerically, we need both our filter and the galaxy spectrum to be observed at the exact same wavelength values. This requires us to interpolate one of our values. Since the galaxy wavelength grid appears to be more precise, let's go with interpolating all of our filter results onto a galaxy grid.\nInterpolation in Python is super easy using functions like np.interp. See if you can interpolate the transmission from filter onto the wavelength grid spanned by our galaxy (wave), under the condition that values outside the boundaries of the filter are automatically set to zero. An incomplete solution is given below.", "# interpolation (incomplete)\nnp.interp(wave, fw[i], ft[i])", "Using this, let's redo our integral from above. I've provided a one-statement solution below, but I would highly encourage everyone to try their hand at implementing something themselves.", "# compute relative photometry\nphot = np.array([np.trapz(np.interp(wave, fw[i], ft[i], left=0., right=0.) 
* fgal_nu, np.log(freq)) / \n np.trapz(np.interp(wave, fw[i], ft[i], left=0., right=0.), np.log(freq))\n for i in range(Nfilt)])\n\nprint(phot)", "Note that this result is unnormalized because we haven't compared this relative result to some standard value. To correct for this and make the final answer more realistic, we're just going to multiply this result by $10^{-23}$.", "phot *= 1e-23", "Now we want to derive error bars on our photometry. Let's pretend for a (beautiful) second that there is no other source of noise other than the source itself (i.e. ignoring the sky, instrument, etc., and only counting the number of photons received from our observed galaxy). The uncertainty on our photometry is directly related to the uncertainty in the number of photons we expect to receive. This is an example of a Poisson process, and it turns out the standard deviation in the number of photons we receive is just\n$$ \\sigma_N = \\sqrt{N} ~\\Rightarrow~ \\sigma_N / N = 1/\\sqrt{N} \\quad . $$\nSo our fractional uncertainty is just $1/\\sqrt{N}$.\nUsing the photon energies we computed earlier, compute the fractional uncertainties we expect in each filter assuming a 4hr observation with a 8m telescope at the effective wavelength of each filter. Remember that our photometric flux densities have units of erg/s/cm$^2$/Hz. Again, I've added in a one-statement solution so you can continue on, but please try and write your own code to compute this.", "phot_ferr = np.empty(Nfilt)\n\nphot_ferr = 1. / np.sqrt(phot * (c / filt_cent) * (8. * 100. * 100.) * (4. * 60. * 60.) / ephot_cent)\nprint(phot_ferr)", "Finally, let's plot everything together: the galaxy SED, the expected photometry, and the observed filter set. A bare-bones example is shown below, where I've input lots of values by hand using trial and error.", "# plot our results\nplt.figure(figsize=(16, 5))\nplt.plot(wave, fgal_nu / np.median(fgal_nu) * 0.8, color='red', alpha=0.4)\nfor i in range(Nfilt):\n plt.plot(fw[i], ft[i], ls='-', lw=2, color='blue')\nplt.xlim([2e3, 2e4])\nplt.ylim([0, 1.5])\nplt.errorbar(filt_cent, phot / max(phot) * 1.2, xerr=[xe_low, xe_high],\n yerr = phot * phot_ferr / max(phot) * 1.2,\n marker='o', markersize=10, linestyle='none', color='red')\nplt.xlabel('Wavelength [A]')\nplt.ylabel('Flux Density [per Hz]', fontsize=26)\nplt.title('UGCA 166 Photometry')", "Extra Challenge: Can you add in some Gaussian (i.e. Normal) white noise to the observation to better mimic an observed realization of our galaxy?\nExtra Challenge: Assume that random sky noise creates 2 photons per hour per (projected) m$^2$. How does this change the noise calculations above?\nExtra Extra Challenge: Our noise assumptions above assume that we can approximate a discrete counting process (Poisson) using a continuous function (Normal). This can lead to problems since we only observe a discrete number of photons and we can't observe negative photons. Simulate the actual expected photon counts from a Poisson distribution.\nAnd that's that!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cmmarti/housing-madrid
repo/clustering.ipynb
gpl-3.0
[ "from __future__ import division\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport numpy as np\nimport pandas as pd\nimport pysal as ps\nimport geopandas as gpd\nfrom geopandas import GeoSeries, GeoDataFrame\nfrom shapely.geometry import Point\nfrom sklearn import neighbors\n\nsns.set(style=\"white\")\nsns.set_context({\"figure.figsize\": (24, 10)})\n\npd.options.display.float_format = '{:.2f}'.format\n\nabb_link = './tfg/dbases/development3.csv'\nzc_link = './tfg/mapas/barrios_area.shp'\n\nmuestra = pd.read_csv(abb_link)\nbarrios = gpd.read_file(zc_link)\n\ngeometry = [Point(xy) for xy in zip(muestra['lon'], muestra['lat'])]\ncrs = {'init': 'epsg:4326'}\ngeo_df = GeoDataFrame(muestra, crs=crs, geometry=geometry)\n\ndb = gpd.sjoin(geo_df, barrios, how=\"inner\", op='intersects')\n\nmetro = pd.read_csv('./tfg/dbases/distance_matrix_metro.csv')\n\ndb = db.join(metro.set_index('InputID'),\n on='id', how='left')\n\ndb = db.rename(index=str, columns={\"DESBDT\": \"subdistrict_f\", \"Distance\": \"metro_distance\", \"NUMPOINTS\": \"metro_number\"})\n\ndb = pd.DataFrame(db)\ndb['floor']=db['floor'].replace(['Ground floor', 'Mezzanine', 'Semi-basement', 'Basement', 'ground', 'Floor -2', 'Floor -1'], 0,regex=True)\n#db.replace(u'\\xe', 'A')\ndb['floor'] = pd.to_numeric(db['floor'])\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport pysal as ps\nimport geopandas as gpd\nfrom sklearn import cluster\nfrom sklearn.preprocessing import scale", "Agregaci\\'on de variables a nivel barrio", "varis = ['pricems', 'rooms', 'floor', 'needs_renovating', 'garden', 'terrace', 'new_dev', 'garage']\n\naves = db.groupby('GEOCODIGO')[varis].mean()\naves.info()\n\ntypes = pd.get_dummies(db['metro_number'])\nprop_types = types.join(db['GEOCODIGO'])\\\n .groupby('GEOCODIGO')\\\n .sum()\nprop_types_pct = (prop_types * 100).div(prop_types)\nprop_types_pct.info()\n\naves_props = aves.join(prop_types_pct)\n#eliminar valores nulos\naves_props = aves_props.fillna(value=0)\n\ndb1 = pd.DataFrame(\\\n scale(aves_props), \\\n index=aves_props.index, \\\n columns=aves_props.columns)\\\n #.rename(lambda x: str(int(x)) )\n\n\n#zc = gpd.read_file(zc_link)\n#zc.plot(color='green')\n#sns.plt.show()\n\ndb1.info()\n\n#zdb = db1.set_index('subdistrict_f').join(zc[['DESBDT', 'geometry']], on='DESBDT').dropna()\n\nzdb = zc[['geometry', 'GEOCODIGO']].join(db1, on='GEOCODIGO')\\\n .dropna()\n\nkm5 = cluster.KMeans(n_clusters=3)\nkm5cls = km5.fit(zdb.drop(['geometry', 'GEOCODIGO'], axis=1).values)\nf, ax = plt.subplots(1, figsize=(9, 9))\n\nzdb.assign(cl=km5cls.labels_)\\\n .plot(column='cl', categorical=True, legend=True, \\\n linewidth=0.1, edgecolor='white', ax=ax)\nax.set_axis_off()\nplt.show()\n\n\nkm5cls.labels_\n\ncl_pcts = prop_types_pct.reindex(zdb['GEOCODIGO'])\\\n .assign(cl=km5cls.labels_)\\\n .groupby('cl')\\\n .count()", "N\\'umero de bocas de metro en los barrios que componen la zona", "cl_pcts\n\nf, ax = plt.subplots(1, figsize=(18, 9))\ncl_pcts.plot(kind='bar', stacked=False, ax=ax, \\\n cmap='Set2', linewidth=2)\nax.legend(ncol=1, loc=\"right\");\n\nplt.show()\n\ntype(cl_pcts)\n\nzdb.info()\n\nrt_av = db.groupby('GEOCODIGO')[varis]\\\n .mean()\\\n .rename(lambda x: str(int(x)))\n \n#pasar a int para join\n#rt_av.index = rt_av.index.astype(int)\n\nrt_av.describe()\n\nzc['GEOCODIGO'] = pd.to_numeric(zc['GEOCODIGO'])\n\n#pasar a int para join\nrt_av.index = rt_av.index.astype(int)\n\n\nzrt = zc[['geometry', 'GEOCODIGO']].join(rt_av, 
on='GEOCODIGO')\\\n .dropna()\nzrt.info()\n\nzrt\n\n\nzrt.to_file('tmp')\n#matriz de pesos espaciales\nw = ps.queen_from_shapefile('tmp/tmp.shp', idVariable='GEOCODIGO')\n\n#rm -r tmp\nw", "Establecer minimo de viviendas para region", "n_rev = db.groupby('GEOCODIGO')\\\n .count()\\\n ['price']\\\n \nthr = np.round(0.1 * n_rev.sum())\nthr\n\nnp.random.seed(1234)\nz = zrt.drop(['geometry', 'GEOCODIGO'], axis=1).values\nmaxp = ps.region.Maxp(w, z, thr, n_rev, initial=1000)\n", "Inferencia para comprobar que los resultados son mejores que definiendo zonas al azar", "\nnp.random.seed(1234)\nmaxp.cinference(nperm=20)\n\n\nmaxp.cpvalue\n\n\nlbls = pd.Series(maxp.area2region).reindex(zrt['GEOCODIGO'])\n\n\n\nf, ax = plt.subplots(1, figsize=(9, 9))\n\nzrt.assign(cl=lbls.values)\\\n .plot(column='cl', categorical=True, legend=True, \\\n linewidth=0.1, edgecolor='white', ax=ax)\n\nax.set_axis_off()\n\nplt.show()\n\n\nlbls\n\n\n#ver stats\nzrt[varis].groupby(lbls.values).mean().T\n\n\n#ver stats\nzrt[varis].groupby(lbls.values).mean().T" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
blakeflei/IntroScientificPythonWithJupyter
04 - Arrays - Numpy.ipynb
bsd-3-clause
[ "Numpy Arrays and Vectorization\nFrequently, matrices and vectors are needed for computation and are a convenient way to store and access data. Vectors are more commonly many rows with a single column. A significant amount of work has been done to make computers very fast at doing matrix math, and while the tradeoff is commonly framed as 'more memory for faster calculation', there is typically enough memory in contemporary computation devices to process chunks of matrices.\nIn Python's NumPy, vectors and matrices are referred to as arrays: a constant-sized collection of elements (of the same type - integer, floating point number, string of characters, etc.).\nUnderneath, Python arrays use C for greater efficiency.\nNote that this is different from the python list - lists are a python datatype, whereas arrays are objects that are made available via the python package numpy. \nArray restrictions:\n - You can't append things to an array (i.e. you can't make it bigger without creating an entirely new array)\n - You can only put things of the same type into an array\nThe array is the basis of all (fast) scientific computing in Python.\nWe need to have a solid foundation of what an array is, how to use it, and what it can do.\nBy the end of this file you should have seen simple examples of:\n1. Arrays are faster than lists!\n2. Create an array\n3. Different types of arrays\n4. Creating and accessing (indexing) arrays\n5. Building arrays from other arrays (appending)\n6. Operations on arrays of different sizes (broadcasting)\n7. Arrays as Python objects\nFurther reading:\nhttps://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html", "# Python imports\nimport numpy as np", "Arrays versus lists\nWhile both data types hold a series of discrete information, arrays are stored more efficiently in memory and have significantly higher performance than Python lists. They also bring with them a host of properties and syntax that makes them more efficient, especially for numeric operations.", "l = 20000\ntest_list = list(range(l))\ntest_array = np.arange(l)\n\nprint(type(test_list))\nprint(type(test_array))\n\nprint(test_list[:300]) # Print the first 300 elements \n # (more on indexing in a bit):\n\nprint(test_array)\n\n%timeit [np.sqrt(i) for i in test_list]\n\n%timeit [np.sqrt(test_array)]", "If statement says \"10 loops, best of 3: [time]\" it means the fastest of 10 repeated runs was recorded - then the 10 runs were repeated twice more, resulting in an overall fastest time.\nCreating and accessing (indexing) arrays\nWe can create arrays from scratch:", "test_array = np.array([[1,2,3,4], [6,7,8,9]])\nprint(test_array)", "Index arrays using square brackets, starting from zero and specifying row, column:", "test_array[0,3]", "Arrays are duck typed just like Python variables, that is to say that Python will try to determine what kind of variable it should be based on how it's used. \nNumpy arrays are all the same type of variable. To check the data type (dtype) enter:", "test_array.dtype", "Different variable types use different amounts of memory and can have an effect on performance for very large arrays. 
\nChanging the type of array is possible via:", "test_array = test_array.astype('float64')\nprint(test_array)\n\n# We can create arrays of boolean values too:\nbool_array = np.array([[True, True, False,True],[False,False,True,False]])\nprint(bool_array)", "We can replace values in an array:", "test_array[0,3]=99 # Assign value directly\nprint(test_array)", "Deleting values from an array is possible, but due to the way they're stored in memory, it makes sense to keep the array structure. Often, a 'nan' is used (not a number) or some nonsensical value is used, i.e.: 0 or -1.\nKeep in mind that 'nan' only works for some types of arrays:", "test_array[0,3] = 'nan'\nprint(test_array)", "Fancy ways of indexing\nSlicing Arrays:\nSlicing arrays refers to indexing >1 elements in a previous array. Slicing is often used when parallelizing computations using arrays. Indexing is array[row, column].", "test_array[:,1] # Use the ':' to index along one dimension fully\n\ntest_array[1,1:] # Adding a colon indexes the rest of the values \n # (includes the numbered index)\n\ntest_array[1,1:-1] # We can index relative to the first and last elements\n\ntest_array[1,::2] # We can specify the indexing order\n\ntest_array[1,1::-1] # We can get pretty fancy about it \n # Index second row, second from first to second from \n # last in reverse order.", "Logical Indexing\nWe can specify only the elements we want by using an array of True/False values:", "test_array[bool_array] # Use our bool_array from earlier", "Using the isnan function in numpy:", "nans = np.isnan(test_array) \nprint(nans)\n\ntest_array[nans] = 4\nprint(test_array)", "Building arrays from other arrays (appending)\nWe can build arrays from other array via Python stacking in a horizontal or vertical way:", "test_array_Vstacked = np.vstack((test_array, [1,2,3,4]))\nprint(test_array_Vstacked)\n\ntest_array_Hstacked = np.hstack((test_array, test_array))\nprint(test_array_Hstacked)", "We can bring these dimensions back down to one via flatten:", "test_array_Hstacked.flatten()", "Caution: appending to numpy arrays frequently is memory intensive. Every time this happens, an entirely new chunk of memory needs to be used, so the old array is moved in memory to a new location.\nIt's faster to 'preallocate' an array with empty values, and simply populate as the computation progresses.\nOperations on arrays of different sizes (broadcasting)\nPython automatically handles arithmetic operations with arrays of different dimensions. In other words, when arrays have different (but compatible) shapes, the smaller is 'broadcast' across the larger.", "test_array\n\nprint(\"The broadcasted array is: \", test_array[0,:])\ntest_array[0,:] * test_array", "However, if the dimensions don't match, it won't work:", "print(\"The broadcasted array is: \", test_array[:,0])\n#test_array[:,0] * test_array # Uncomment the line to see that the \n # dimensions don't match\n\n# Make use of the matrix transpose (also can use array.T)\nnp.transpose( test_array[:,0]*np.transpose(test_array) )", "Arrays as Python objects\nPython can be used as an object oriented language, and numpy arrays have lots of properties. 
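A couple of quick added examples of such properties (plain attribute lookups, no computation involved):\n```python\nprint(test_array.ndim)  # number of dimensions (2 for our 2 x 4 array)\nprint(test_array.size)  # total number of elements (8)\n```\n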
There are many functions we can use as numpy.&lt;function&gt;(&lt;array&gt;) and array.&lt;function&gt;\nFor example, the transpose above:", "print(\"The original array is: \", test_array)\nprint(\"The transposed array is: \", np.transpose(test_array) )\n\n# Alternatively, using test_array as an opject:\nprint(\"The transposed array is: \", test_array.transpose() )", "One of the most frequenly used properties of arrays is the dimension:", "print(\"The original array dimensions are: \", test_array.shape)\nprint(\"The array transpose dimensions are: \", test_array.transpose().shape)", "Sorting:\nSorting arrays happens in-place, so once the function is called on an array, the sorting happens to the original array:", "test_array2 = np.array([1,5,4,0,1])\nprint(\"The original array is: \", test_array2)\n\ntest_array3 = test_array2.sort() # Run the sort - note that the new variable isn't assigned\nprint(\"The reassigned array should be sorted: \", test_array3)\nprint(\"test_array2 after sort: \", test_array2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
francisbrochu/microbiome-summer-school-2017_mass-spec
example/microbiome-summer-school-2017_mass-spectrometry.ipynb
mit
[ "Microbiome Summer School 2017 - Mass Spectrometry Tutorial\nWelcome to this tutorial for Plenary 9 of the Microbiome Summer School 2017. This tutorial concerns Algorithms for Mass Spectrometry.\nThis notebook contains working code and an example of applications of the algorithms covered in Plenary 9. A dataset of mass spectra will be processed and corrected by the Virtual Lock Mass algorithm and subsequently aligned. A machine learning algorithm will then be applied to the data.", "#This section contains some fundamental imports for the notebook.\nimport numpy as np", "The following section will load the mass spectra data into memory.\nThis dataset is a set of 80 samples of red blood cell cultures. Their spectra was acquired by LDTD-ToF mass spectrometry on a Waters Synapt G2-Si instrument. These spectra were acquired in high resolution mode using a data independant acquisition mode ($MS^e$).\nOf these 80 samples, 40 are from red blood cell cultures infected by malaria. The other 40 samples are not infected. It is the objective of this tutorial to correct and align these spectra in order to classify them by machine learning.\nThe dataset is stored in the file dataset.h5, contained within this tutorial. The hdf5 format is a very efficient data storage format for multiple types of datasets and numeric data.\nThe loading operation may take some seconds to complete.", "from tutorial_code.utils import load_spectra\n\ndatafile = \"dataset.h5\"\nspectra = load_spectra(datafile)", "At this point, the mass spectra are loaded in memory and ready for the next step.\nThe next steps will be to correct and align these spectra in order to render them more comparable for the machine learning analysis to follow.\nFirst, the Virtual Lock Mass algorithm will be applied. 
\nThe following command will import the corrector code.", "from tutorial_code.virtual_lock_mass import VirtualLockMassCorrector", "We must then create a corrector for the spectra.\nThe following command will create a corrector with a minimum peak intensity of 1000 and a maximum distance of 40 ppm.\nThese settings yield the most correction points of the dataset, and thus they are considered optimal.", "corrector = VirtualLockMassCorrector(window_size=40, minimum_peak_intensity=1000)", "The corrector is then trained on the dataset in order to detect the VLM correction points.\nThis is done by using the fit function, with the dataset as a parameter.", "corrector.fit(spectra)", "Once the corrector has been trained, it can apply its correction to the spectra.\nWe simply use the transform function of the corrector on the dataset.\nHowever, we must store the result in a new variable.", "corrected_spectra = corrector.transform(spectra)", "Now the spectra are corrected and larger shifts between samples should be removed.\nWe must still align the spectra together in order to remove small variations in m/z values.\nThe following command will import the aligner code.", "from tutorial_code.alignment import Mass_Spectra_Aligner", "As before, we must create an aligner.\nThe following command will create this aligner with a window size of 30 ppm.", "aligner = Mass_Spectra_Aligner(window_size=30)", "The aligner will then detect the alignment points by being fitted to the mass spectra.", "aligner.fit(corrected_spectra)", "Once the aligner is fitted, we have the alignment points.\nThe spectra will then be aligned by the transform function of the aligner.\nOnce again, the aligned spectra will need to be stored in a new variable.", "aligned_spectra = aligner.transform(corrected_spectra)", "The spectra are now aligned.\nIn terms of m/z values, the spectra are ready to be compared.\nThe spectra must now be changed into a format more appropriate for machine learning, one which the algorithms can read directly.\nThis format is that of a data matrix, where each row represents a mass spectrum and each column represents a peak that is present in the dataset.\nTo make this conversion, import the spectrum_to_matrix function from the tutorial's utils module.", "from tutorial_code.utils import spectrum_to_matrix\n\ndata = spectrum_to_matrix(aligned_spectra)", "Finally, we need to extract labels from the spectra in order to know which spectrum represents which class (infected or not by malaria).\nThe following function extracts this information from the spectra's metadata and returns an array of tags, which are 0 for non-infected samples and 1 for malaria-infected samples.", "from tutorial_code.utils import extract_tags\n\ntags = extract_tags(aligned_spectra)", "Here we start the machine learning analysis proper.\nWe need to ensure that we have a good experimental workflow that is reproducible and whose predictors generalize to new data.\nA first step is splitting the data into a training set and a testing set of samples.\nThe algorithms will be trained on and exposed to the training set, while the testing set is set apart and kept for a final evaluation of the model.\nThis way, we can ensure that the model can generalize its predictions to new, never-before-seen data.\nA dataset can easily be split with the existing function train_test_split from the scikit-learn package.", "from sklearn.model_selection import train_test_split\n\nX_train, X_test, Y_train, Y_test = train_test_split(data, tags, test_size=0.25, 
random_state=42)", "The above command split the dataset randomly into a training set of 60 spectra (or examples, samples) and a testing set of 20 spectra.\nAll the feature information, i.e. the data matrix containing the peak intensities, is contained within the matrices X_train and X_test.\nThe arrays Y_train and Y_test respectively contain the tags pertaining to the samples.\nThe random_state is fixed so that we can reproduce this exact split of samples, instead of getting a different random split each time we repeat this command.\nNext, we will create an object to handle the cross-validation step and our learner at the same time.\nWe will create a Decision Tree classifier (as presented during the plenary sessions) for this classification task.\nCross-validation is covered in more detail on the online page for this tutorial, as well as in a previous tutorial.\nIn short, this process breaks the training set into folds, in this case 5.\nThe training algorithm will be trained on all the folds but one, and then tested on the remaining one.\nThis process is repeated so that each fold serves as a test fold once.\nBy this method of evaluation, we can determine which parameters of the algorithm (called hyper-parameters) are best.", "from sklearn.model_selection import GridSearchCV\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import AdaBoostClassifier \n#this algorithm will not be used in the tutorial, but was presented in the plenary.\n#Try using it on your own and see the results!", "The Decision Tree classifier has one hyper-parameter that we will cross-validate in this tutorial.\nThis hyper-parameter is the maximum depth of the decision tree allowed.\nThe following commands will create the parameter grid, the classifier and the cross-validator.", "param_grid = {\n    \"max_depth\":[1,2,3,4,5,6]\n}\n\nlearner = GridSearchCV(DecisionTreeClassifier(random_state=42),\n                       param_grid=param_grid,\n                       cv=5) #the number of folds", "The learner is ready to be trained.\nWe use the fit method with the training set.\nWhile using a GridSearchCV object, the fit will both run the cross-validation and train the model with the best hyper-parameters on the whole training set.", "learner.fit(X_train, Y_train)", "We can then check the optimal parameters for the learner, and evaluate the predictions on the training and testing sets.", "print(learner.best_estimator_)", "The learner can then predict on both the training and testing sets.\nWe can then evaluate the learner by comparing the true labels with the predicted ones.", "predictions_on_train = learner.predict(X_train)\n\npredictions_on_test = learner.predict(X_test)", "We can then use an existing function of scikit-learn that builds a classification report from the comparison between the true labels and the predictions.\nThis gives us the precision and recall of the classifier, as well as the F1 Score.\nAn additional function gives us the zero one loss, or the error rate, of the learner.\nIf we print one minus the zero one loss, we obtain the accuracy of the classifier.\nFurther information on the metrics is presented on the website of this tutorial.", "from sklearn.metrics import classification_report, zero_one_loss", "Here are the results on the training set (empirical risk/accuracy).", "print(classification_report(Y_train, predictions_on_train))\nprint(1. - zero_one_loss(Y_train, predictions_on_train))", "And here are the results on the testing set (test risk/accuracy).", "print(classification_report(Y_test, predictions_on_test))\nprint(1. 
- zero_one_loss(Y_test, predictions_on_test))", "This marks the end of the tutorial.\nIf you wish to experiment further, feel free to edit parameters and even change the machine learning algorithm. \nBelow is the code for a simple AdaBoost Classifier to test on the dataset.", "param_grid = {\n \"n_estimators\":[1,5,10,20,30,40,50,60,70,80,90,100],\n \"learning_rate\":[0.01, 0.1, 1., 10., 100.]\n}\n\nlearner = GridSearchCV(AdaBoostClassifier(random_state=42),\n param_grid=param_grid,\n cv=5) #the number of folds" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
netodeolino/TCC
TCC 02/Resultados/Abril/Abril.ipynb
mit
[ "import pandas\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport matplotlib.ticker as ticker\n\ndf_abril = pandas.read_csv(\"./data/Cluster-Crime-Abril.csv\")\n\ncrime_tipos = df_abril[['NATUREZA DA OCORRÊNCIA']]\ncrime_tipo_total = crime_tipos.groupby('NATUREZA DA OCORRÊNCIA').size()\ncrime_tipo_counts = df_abril[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()\ncrime_tipo_counts['TOTAL'] = crime_tipo_total\nall_crime_tipos = crime_tipo_counts.sort_values(by='TOTAL', ascending=False)", "Filtro dos 10 crimes com mais ocorrências em abril", "all_crime_tipos.head(10)\n\nall_crime_tipos_top10 = all_crime_tipos.head(10)\nall_crime_tipos_top10.plot(kind='barh', figsize=(12,6), color='#3f3fff')\nplt.title('Top 10 crimes por tipo (Abr 2017)')\nplt.xlabel('Número de crimes')\nplt.ylabel('Crime')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Todas as ocorrências criminais de abril", "all_crime_tipos\n\ngroup_df_abril = df_abril.groupby('CLUSTER')\ncrimes = group_df_abril['NATUREZA DA OCORRÊNCIA'].count()\n\ncrimes.plot(kind='barh', figsize=(10,7), color='#3f3fff')\nplt.title('Número de crimes por região (Abr 2017)')\nplt.xlabel('Número')\nplt.ylabel('Região')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "As 5 regiões com mais ocorrências", "regioes = df_abril.groupby('CLUSTER').count()\ngrupo_de_regioes = regioes.sort_values('NATUREZA DA OCORRÊNCIA', ascending=False)\n\ngrupo_de_regioes['TOTAL'] = grupo_de_regioes.ID\ntop_5_regioes_qtd = grupo_de_regioes.TOTAL.head(6)\n\ntop_5_regioes_qtd.plot(kind='barh', figsize=(10,4), color='#3f3fff')\nplt.title('Top 5 regiões com mais crimes')\nplt.xlabel('Número de crimes')\nplt.ylabel('Região')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Acima podemos ver que a região 1 teve o maior número de ocorrências criminais\nPodemos agora ver quais são essas ocorrências de forma mais detalhada", "regiao_1_detalhe = df_abril[df_abril['CLUSTER'] == 1]\nregiao_1_detalhe", "Uma análise sobre as 5 ocorrências mais comuns", "crime_types = regiao_1_detalhe[['NATUREZA DA OCORRÊNCIA']]\ncrime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()\ncrime_type_counts = regiao_1_detalhe[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()\ncrime_type_counts['TOTAL'] = crime_type_total\nall_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)\n\ncrimes_top_5 = all_crime_types.head(5)\ncrimes_top_5.plot(kind='barh', figsize=(11,3), color='#3f3fff')\nplt.title('Top 5 crimes na região 1')\nplt.xlabel('Número de crimes')\nplt.ylabel('Crime')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Filtro dos 10 horários com mais ocorrências em abril", "horas_mes = df_abril.HORA.value_counts()\nhoras_mes_top10 = horas_mes.head(10)\n\nhoras_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')\nplt.title('Crimes por hora (Abr 2017)')\nplt.xlabel('Número de ocorrências')\nplt.ylabel('Hora do dia')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Filtro dos 5 horários com mais ocorrências na região 1 (região com mais ocorrências em abril)", "crime_hours = regiao_1_detalhe[['HORA']]\ncrime_hours_total = 
crime_hours.groupby('HORA').size()\ncrime_hours_counts = regiao_1_detalhe[['HORA']].groupby('HORA').sum()\ncrime_hours_counts['TOTAL'] = crime_hours_total\nall_hours_types = crime_hours_counts.sort_values(by='TOTAL', ascending=False)\n\nall_hours_types.head(5)\n\nall_hours_types_top5 = all_hours_types.head(5)\nall_hours_types_top5.plot(kind='barh', figsize=(11,3), color='#3f3fff')\nplt.title('Top 5 crimes por hora na região 1')\nplt.xlabel('Número de ocorrências')\nplt.ylabel('Hora do dia')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Filtro dos 10 bairros com mais ocorrências em abril", "crimes_mes = df_abril.BAIRRO.value_counts()\ncrimes_mes_top10 = crimes_mes.head(10)\n\ncrimes_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')\nplt.title('Top 10 Bairros com mais crimes (Abr 2017)')\nplt.xlabel('Número de ocorrências')\nplt.ylabel('Bairro')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "O Bairro com o maior número de ocorrências em abril foi o Jangurussú\nVamos agora ver de forma mais detalhadas quais foram estes crimes", "barra_do_ceara = df_abril[df_abril['BAIRRO'] == 'JANGURUSSU']\ncrime_types = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']]\ncrime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()\ncrime_type_counts = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()\ncrime_type_counts['TOTAL'] = crime_type_total\nall_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)\n\nall_crime_tipos_5 = all_crime_types.head(5)\nall_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')\nplt.title('Top 5 crimes no Jangurussú')\nplt.xlabel('Número de Crimes')\nplt.ylabel('Crime')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Os 5 bairros mais comuns na região 1", "crime_types_bairro = regiao_1_detalhe[['BAIRRO']]\ncrime_type_total_bairro = crime_types_bairro.groupby('BAIRRO').size()\ncrime_type_counts_bairro = regiao_1_detalhe[['BAIRRO']].groupby('BAIRRO').sum()\ncrime_type_counts_bairro['TOTAL'] = crime_type_total_bairro\nall_crime_types_bairro = crime_type_counts_bairro.sort_values(by='TOTAL', ascending=False)\n\ncrimes_top_5_bairro = all_crime_types_bairro.head(5)\ncrimes_top_5_bairro.plot(kind='barh', figsize=(11,3), color='#3f3fff')\nplt.title('Top 5 bairros na região 1')\nplt.xlabel('Quantidade')\nplt.ylabel('Bairro')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()", "Análise sobre o bairro Barra do Ceará", "barra_do_ceara = df_abril[df_abril['BAIRRO'] == 'BARRA DO CEARA']\ncrime_types = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']]\ncrime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()\ncrime_type_counts = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()\ncrime_type_counts['TOTAL'] = crime_type_total\nall_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)\n\nall_crime_tipos_5 = all_crime_types.head(5)\nall_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')\nplt.title('Top 5 crimes na Barra do Ceará')\nplt.xlabel('Número de Crimes')\nplt.ylabel('Crime')\nplt.tight_layout()\nax = plt.gca()\nax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.17/_downloads/8a9d1481784df3b1e190b5615ba8fde7/plot_compute_source_psd_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute Power Spectral Density of inverse solution from single epochs\nCompute PSD of dSPM inverse solution on single trial epochs restricted\nto a brain label. The PSD is computed using a multi-taper method with\nDiscrete Prolate Spheroidal Sequence (DPSS) windows.", "# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import read_inverse_operator, compute_source_psd_epochs\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_raw = data_path + '/MEG/sample/sample_audvis_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name\nsubjects_dir = data_path + '/subjects'\n\nevent_id, tmin, tmax = 1, -0.2, 0.5\nsnr = 1.0 # use smaller SNR for raw data\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Load data\ninverse_operator = read_inverse_operator(fname_inv)\nlabel = mne.read_label(fname_label)\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)\n\n# Set up pick list\ninclude = []\nraw.info['bads'] += ['EEG 053'] # bads + 1 more\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n include=include, exclude='bads')\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,\n eog=150e-6))\n\n# define frequencies of interest\nfmin, fmax = 0., 70.\nbandwidth = 4. # bandwidth of the windows in Hz", "Compute source space PSD in label\n..note:: By using \"return_generator=True\" stcs will be a generator object\n instead of a list. This allows us so to iterate without having to\n keep everything in memory.", "n_epochs_use = 10\nstcs = compute_source_psd_epochs(epochs[:n_epochs_use], inverse_operator,\n lambda2=lambda2,\n method=method, fmin=fmin, fmax=fmax,\n bandwidth=bandwidth, label=label,\n return_generator=True, verbose=True)\n\n# compute average PSD over the first 10 epochs\npsd_avg = 0.\nfor i, stc in enumerate(stcs):\n psd_avg += stc.data\npsd_avg /= n_epochs_use\nfreqs = stc.times # the frequencies are stored here\nstc.data = psd_avg # overwrite the last epoch's data with the average", "Visualize the 10 Hz PSD:", "brain = stc.plot(initial_time=10., hemi='lh', views='lat', # 10 HZ\n clim=dict(kind='value', lims=(20, 40, 60)),\n smoothing_steps=3, subjects_dir=subjects_dir)\nbrain.add_label(label, borders=True, color='k')", "Visualize the entire spectrum:", "fig, ax = plt.subplots()\nax.plot(freqs, psd_avg.mean(axis=0))\nax.set_xlabel('Freq (Hz)')\nax.set_xlim(stc.times[[0, -1]])\nax.set_ylabel('Power Spectral Density')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_tf_dics.ipynb
bsd-3-clause
[ "%matplotlib inline", "Time-frequency beamforming using DICS\nCompute DICS source power in a grid of time-frequency windows and display\nresults.\nThe original reference is:\nDalal et al. Five-dimensional neuroimaging: Localization of the time-frequency\ndynamics of cortical activity. NeuroImage (2008) vol. 40 (4) pp. 1686-1700", "# Author: Roman Goj <roman.goj@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.event import make_fixed_length_events\nfrom mne.datasets import sample\nfrom mne.time_frequency import compute_epochs_csd\nfrom mne.beamformer import tf_dics\nfrom mne.viz import plot_source_spectrogram\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'\nnoise_fname = data_path + '/MEG/sample/ernoise_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'\nfname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nsubjects_dir = data_path + '/subjects'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name", "Read raw data", "raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\n\n# Pick a selection of magnetometer channels. A subset of all channels was used\n# to speed up the example. For a solution based on all MEG channels use\n# meg=True, selection=None and add mag=4e-12 to the reject dictionary.\nleft_temporal_channels = mne.read_selection('Left-temporal')\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,\n stim=False, exclude='bads',\n selection=left_temporal_channels)\nraw.pick_channels([raw.ch_names[pick] for pick in picks])\nreject = dict(mag=4e-12)\n# Re-normalize our empty-room projectors, which should be fine after\n# subselection\nraw.info.normalize_proj()\n\n# Setting time windows. Note that tmin and tmax are set so that time-frequency\n# beamforming will be performed for a wider range of time points than will\n# later be displayed on the final spectrogram. This ensures that all time bins\n# displayed represent an average of an equal number of time windows.\ntmin, tmax, tstep = -0.55, 0.75, 0.05 # s\ntmin_plot, tmax_plot = -0.3, 0.5 # s\n\n# Read epochs\nevent_id = 1\nevents = mne.read_events(event_fname)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax,\n baseline=None, preload=True, proj=True, reject=reject)\n\n# Read empty room noise raw data\nraw_noise = mne.io.read_raw_fif(noise_fname, preload=True)\nraw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel\nraw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])\nraw_noise.info.normalize_proj()\n\n# Create noise epochs and make sure the number of noise epochs corresponds to\n# the number of data epochs\nevents_noise = make_fixed_length_events(raw_noise, event_id)\nepochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,\n tmax_plot, baseline=None, preload=True, proj=True,\n reject=reject)\nepochs_noise.info.normalize_proj()\nepochs_noise.apply_proj()\n# then make sure the number of epochs is the same\nepochs_noise = epochs_noise[:len(epochs.events)]\n\n# Read forward operator\nforward = mne.read_forward_solution(fname_fwd, surf_ori=True)\n\n# Read label\nlabel = mne.read_label(fname_label)", "Time-frequency beamforming based on DICS", "# Setting frequency bins as in Dalal et al. 
2008\nfreq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz\nwin_lengths = [0.3, 0.2, 0.15, 0.1] # s\n# Then set FFTs length for each frequency range.\n# Should be a power of 2 to be faster.\nn_ffts = [256, 128, 128, 128]\n\n# Subtract evoked response prior to computation?\nsubtract_evoked = False\n\n# Calculating noise cross-spectral density from empty room noise for each\n# frequency bin and the corresponding time window length. To calculate noise\n# from the baseline period in the data, change epochs_noise to epochs\nnoise_csds = []\nfor freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):\n noise_csd = compute_epochs_csd(epochs_noise, mode='fourier',\n fmin=freq_bin[0], fmax=freq_bin[1],\n fsum=True, tmin=-win_length, tmax=0,\n n_fft=n_fft)\n noise_csds.append(noise_csd)\n\n# Computing DICS solutions for time-frequency windows in a label in source\n# space for faster computation, use label=None for full solution\nstcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,\n freq_bins=freq_bins, subtract_evoked=subtract_evoked,\n n_ffts=n_ffts, reg=0.001, label=label)\n\n# Plotting source spectrogram for source with maximum activity\n# Note that tmin and tmax are set to display a time range that is smaller than\n# the one for which beamforming estimates were calculated. This ensures that\n# all time bins shown are a result of smoothing across an identical number of\n# time windows.\nplot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,\n source_index=None, colorbar=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
olgabot/cshl-singlecell-2017
notebooks/1.2_Downloading_public_data_Shalek2013.ipynb
mit
[ "Downloading public data\nSomething you may want to do in the future is compare your results to papers that came before you. Today we'll go through how to find these data and how to analyze them.\nReading list\n\nWhat the FPKM: A review of RNA-Seq expression units - Explains the difference between TPM/FPKM/RPKM units\nPearson correlation - linear correlation unit\nSingle-cell transcriptomics reveals bimodality in expression and splicing in immune cells (Shalek and Satija, et al. Nature (2013))\n\n1. Find the database and accession codes\nAt the end of most recent papers, they'll put a section called \"Accession Codes\" or \"Accession Numbers\" which will list a uniquely identifying number and letter combination.\nIn the US, the Gene Expression Omnibus (GEO) is a website funded by the NIH to store the expression data associated with papers. Many journals require you to submit your data to GEO to be able to publish.\nExample data accession section from a Cell paper\n\nExample data accession section from a Nature Biotech paper\n\nLet's do this for the Shalek2013 paper. \nNote: For some \"older\" papers (pre 2014), the accession code may not be on the PDF version of the paper but on the online version only. What I usually do then is search for the title of the paper and go to the journal website.\nFor your homework, you'll need to find another dataset to use, and the expression matrix that you want may not be on a database, but rather posted in supplementary data on the journal's website.\n\nWhat database was the data deposited to? \nWhat is its accession number?\n\n2. Go to the data in the database\nIf you search for the database and the accession number, the first result will usually be the database with the paper info and the deposited data! Below is an example search for \"Array Express E-MTAB-2805.\"\n\nSearch for its database and accession number and you should get to a page that looks like this:\n\n3. Find the gene expression matrix\nLately, for many papers, they do give a processed expression matrix in the accession database that you can use directly. Luckily for us, that's exactly what the authors of the Shalek 2013 dataset did. If you notice at the bottom of the page, there's a table of Supplementary files and one of them is called \"GSE41265_allGenesTPM.txt.gz\". The link below is the \"(ftp)\" link copied down with the command \"wget\", which I think of as short for \"web-get\", so you can download files from the internet with the command line.\nIn addition to the gene expression file, we'll also look at the metadata in the \"Series Matrix\" file. \n\nDownload the \"Series Matrix\" to your laptop and \nDownload the \"GSE41265_allGenesTPM.txt.gz\" file. \n\nAll the \"Series\" file formats contain the same information in different formats. I find the matrix one is the easiest to understand.\nOpen the \"Series Matrix\" in Excel (or equivalent) on your laptop, and look at the format and what's described. On what line does the actual matrix of metadata start? You can find it where it says, in the first column, \"!!Sample_title.\" It's after an empty line.\nGet the data easily here:\nFollow this link to jump directly to the GEO page for this data. Scroll down to the bottom in supplemental material. And download the link for the table called GSE41265_allGenesTPM.txt.gz.\nWe also need the link to the metadata. It is here. Download the file called GSE41265_series_matrix.txt.gz. \nWhere did those files go on your computer? Maybe you moved them somewhere. 
Figure out what the full paths of those files are and we will read that in directly below. \n4. Reading in the data file\nTo read the gene expression matrix, we'll use \"pandas\", a Python package for \"Panel Data Analysis\" (as in panels of data), which is a fantastic library for working with dataframes, and is Python's answer to R's dataframes. We'll take this opportunity to import ALL of the python libraries that we'll use today.\nWe'll be using several additional libraries in Python:\n\nmatplotlib - This is the base plotting library in Python.\nnumpy - (pronounced \"num-pie\") which is the basis for most scientific packages. It's basically a nice-looking Python interface to C code. It's very fast.\npandas - This is the \"DataFrames in Python.\" (like R's nice dataframes) They're a super convenient form that's based on numpy so they're fast. And you can do convenient things like calculate mean and variance very easily.\nscipy - (pronounced \"sigh-pie\") \"Scientific Python\" - Contains statistical methods and calculations\nseaborn - Statistical plotting library. To be completely honest, R's plotting and graphics capabilities are much better than Python's. However, Python is a really nice language to learn and use, it's very memory efficient, can be parallelized well, and has a very robust machine learning library, scikit-learn, which has a very nice and consistent interface. So this is Python's answer to ggplot2 (very popular R library for plotting) to try and make plotting in Python nicer looking and to make statistical plots easier to do.", "# Alphabetical order is standard\n# We're doing \"import superlongname as abbrev\" for our laziness - this way we don't have to type out the whole thing each time.\n\n# Python plotting library\nimport matplotlib.pyplot as plt\n\n# Numerical python library (pronounced \"num-pie\")\nimport numpy as np\n\n# Dataframes in Python\nimport pandas as pd\n\n# Statistical plotting library we'll use\nimport seaborn as sns\n\n# This is necessary to show the plotted figures inside the notebook -- \"inline\" with the notebook cells\n%matplotlib inline\n\n", "We'll read in the data using pandas and look at the first 5 rows of the dataframe with the dataframe-specific function .head(). Whenever I read a new table or modify a dataframe, I ALWAYS look at it to make sure it was correctly imported and read in, and I want you to get into the same habit.", "# Read the data table\n# You may need to change the path to the file (what's in quotes below) relative \n# to where you downloaded the file and where this notebook is\nshalek2013_expression = pd.read_table('/home/ecwheele/cshl2017/GSE41265_allGenesTPM.txt.gz', \n \n # Sets the first (Python starts counting from 0 not 1) column as the row names\n index_col=0, \n\n # Tells pandas to decompress the gzipped file\n compression='gzip')\n\n\n\n\nprint(shalek2013_expression.shape)\nshalek2013_expression.head()", "That's kind of annoying ... we don't see all the samples.\nSo we have 21 columns, but it looks like pandas by default is showing a maximum of 20, so let's change the setting so we can see ALL of the samples instead of just skipping single cell 11 (S11). Let's change it to 50 for good measure.", "pd.options.display.max_columns = 50\npd.options.display.max_rows = 50\nshalek2013_expression.head()", "Now we can see all the samples!\nLet's take a look at the full size of the matrix with .shape:", "shalek2013_expression.shape", "Wow, ~28k rows! 
That must be the genes, while there are 18 single cell samples and 3 pooled samples as the columns. We'll do some filtering in the next few steps.\n5. Reading in the metadata", "shalek2013_metadata = pd.read_table('/home/ecwheele/cshl2017/GSE41265_series_matrix.txt.gz',\n compression = 'gzip',\n skiprows=33, \n index_col=0)\nprint(shalek2013_metadata.shape)\nshalek2013_metadata", "Let's transpose this matrix so the samples are the rows, and the features are the columns. We'll do that with .T", "shalek2013_metadata = shalek2013_metadata.T\nshalek2013_metadata", "Now we'll do some mild data cleaning. Notice that the columns have the exclamation point at the beginning, so let's get rid of that. In computer science, you keep letters between quotes, and you call those \"strings.\" Let's talk about the string function .strip(). This removes any characters that are on the outer edges of the string. For example, let's take the string \"Whoooo!!!!!!!\"", "\"Whoooo!!!!!!!\"", "Now let's remove the exclamation points:", "'Whoooo!!!!!!!'.strip('!')", "Exercise 1: Stripping strings\nWhat happens if you try to remove the 'o's?", "# YOUR CODE HERE", "", "'Whoooo!!!!!!!'.strip('o')\n\n'Whoooo!!!!!!!'.replace(\"o\",\"\")", "We can access the column names with dataframe.columns, like below:", "shalek2013_metadata.columns", "We can map the stripping function to every item of the columns. In Python, the square brackets ([ and ]) show that we're making a list. What we're doing below is called a \"list comprehension.\"", "[x.strip('!') for x in shalek2013_metadata.columns]", "In pandas, we can do the same thing by map-ping a lambda, which is a small, anonymous function that does one thing. It's called \"anonymous\" because it doesn't have a name. map runs the function on every element of the columns.", "shalek2013_metadata.columns.map(lambda x: x.strip('!'))", "The above lambda is the same as if we had written a named function called remove_exclamation, as below.", "def remove_exclamation(x):\n return x.strip('!')\n\nshalek2013_metadata.columns.map(remove_exclamation)", "Now we can assign the new column names to our matrix:", "shalek2013_metadata.columns = shalek2013_metadata.columns.map(lambda x: x.strip('!'))\nshalek2013_metadata.head()", "Okay, now we're ready to do some analysis!\nWe've looked at the top of the dataframe by using head(). By default, this shows the first 5 rows.", "shalek2013_expression.head()", "To specify a certain number of rows, put a number between the parentheses.", "shalek2013_expression.head(8)", "Exercise 2: using .head()\nShow the first 17 rows of shalek2013_expression", "# YOUR CODE HERE", "", "shalek2013_expression.head(17)", "Let's get a sense of this data by plotting the distributions using boxplot from seaborn. To save the output, we'll need to get access to the current figure and save it to a variable using plt.gcf(). And then we'll save this figure with fig.savefig(\"filename.pdf\"). You can use other extensions (e.g. \".png\", \".tiff\") and it'll automatically save in that format.", "sns.boxplot(shalek2013_expression)\n\n# gcf = Get current figure\nfig = plt.gcf()\nfig.savefig('shalek2013_expression_boxplot.pdf')", "Notice the 140,000 maximum ... Oh right, we have expression data and the scales are enormous... Let's add 1 to all values and take the log2 of the data. We add one because log(0) is undefined and then all our logged values start from zero too. 
This \"$\\log_2(TPM + 1)$\" is a very common transformation of expression data so it's easier to analyze.", "expression_logged = np.log2(shalek2013_expression+1)\nexpression_logged.head()\n\nsns.boxplot(expression_logged)\n\n# gcf = Get current figure\nfig = plt.gcf()\nfig.savefig('expression_logged_boxplot.pdf')", "Exercise 3: Interpreting distributions\nNow that these are more or less on the same scale ...\nQ: What do you notice about the pooled samples (P1, P2, P3) that is different from the single cells?\nYOUR ANSWER HERE\nFiltering expression data\nSeems like a lot of genes are near zero, which means we need to filter our genes.\nWe can ask which genes have log2 expression values less than 2 (weird example I know - stay with me). This creates a dataframe of boolean values of True/False.", "at_most_2 = expression_logged < 2\nat_most_2", "What's nice about booleans is that False is 0 and True is 1, so we can sum to get the number of \"Trues.\" This is a simple, clever way that we can filter the data on a count. We could use this boolean dataframe to filter our original dataframe, but then we lose information. For all values that are greater than 2, it puts in a \"not a number\" - \"NaN.\"", "expression_at_most_2 = expression_logged[expression_logged < 2]\nprint(expression_at_most_2.shape)\nexpression_at_most_2.head()", "Exercise 4: Crude filtering on expression data\nCreate a dataframe called \"expression_greater_than_5\" which contains only values that are greater than 5 from expression_logged.", "# YOUR CODE HERE", "", "expression_logged.head()\n\nexpression_greater_than_5 = expression_logged[expression_logged > 5]\nexpression_greater_than_5.head()", "The crude filtering above is okay, but we're smarter than that. We want to use the filtering in the paper: \n\n... discarded genes that were not appreciably expressed (transcripts per million (TPM) > 1) in at least three individual cells, retaining 6,313 genes for further analysis.\n\nWe want to do THAT, but first we need a couple more concepts. The first one is summing booleans.\nA smarter way to filter\nRemember that booleans are really 0s (False) and 1s (True)? This turns out to be VERY convenient and we can use this concept in clever ways.\nWe can use .sum() on a boolean matrix to get the number of genes with expression greater than 10 for each sample:", "(expression_logged > 10).sum()", "pandas is column-oriented and by default, it will give you a sum for each column. But we want a sum for each row. How do we do that?\nWe can sum the boolean matrix we created with \"expression_logged > 10\" along axis=1 (along the samples) to get, for each gene, how many samples have expression greater than 10. In pandas, this column is called a \"Series\" because it has only one dimension - its length. Internally, pandas stores dataframes as a bunch of columns - specifically these Seriesssssss.\nThis turns out to be not that many.", "(expression_logged > 10).sum(axis=1)", "Now we can apply ANOTHER filter and find genes that are \"present\" (expression greater than 10) in at least 5 samples. We'll save this as the variable genes_of_interest. Notice that this doesn't show the genes_of_interest but rather the list at the bottom. This is because what you see under a code cell is the output of the last thing you called. 
The \"hash mark\"/\"number sign\" \"#\" is called a comment character and makes the rest of the line after it not read by the Python language.\nExercise 5: Commenting and uncommenting\nTo see genes_of_interest, \"uncomment\" the line by removing the hash sign, and commenting out the list [1, 2, 3].", "genes_of_interest = (expression_logged > 10).sum(axis=1) >= 5\n#genes_of_interest\n[1, 2, 3]", "Getting only rows that you want (aka subsetting)\nNow we have some genes that we want to use - how do you pick just those? This can also be called \"subsetting\" and in pandas has the technical name indexing.\nIn pandas, to get the rows (genes) you want using their name (gene symbol) or a boolean matrix, you use .loc[rows_you_want]. Check it out below.", "expression_filtered = expression_logged.loc[genes_of_interest]\nprint(expression_filtered.shape) # shows (nrows, ncols) - like in Manhattan you do the Street then the Avenue\nexpression_filtered.head()", "Wow, our matrix is very small - 197 genes! We probably don't want to filter THAT much... I'd say a range of 5,000-15,000 genes after filtering is a good ballpark. Not so big that it's impossible to work with, but not so small that you can't do any statistics.\nWe'll get closer to the expression data created by the paper. Remember that they filtered on genes that had expression greater than 1 in at least 3 single cells. We'll filter for expression greater than 1 in at least 3 samples for now - we'll get to the single-cell stuff in a bit. For now, we'll filter on all samples.\nExercise 6: Filtering on the presence of genes\nCreate a dataframe called expression_filtered_by_all_samples that consists only of genes that have expression greater than 1 in at least 3 samples.\nHint for IndexingError: Unalignable boolean Series key provided\nIf you're getting this error, double-check your .sum() command. Did you remember to specify that you want to get the number of cells (columns) that express each gene (row)? Remember that .sum() by default gives you the sum over columns, but since genes are the rows .... How do you get the sum over rows?", "# YOUR CODE HERE\n\nprint(expression_filtered_by_all_samples.shape)\nexpression_filtered_by_all_samples.head()", "", "genes_of_interest = (expression_logged > 1).sum(axis=1) >= 3\n\nexpression_filtered_by_all_samples = expression_logged.loc[genes_of_interest]\nprint(expression_filtered_by_all_samples.shape)\nexpression_filtered_by_all_samples.head()", "Just for fun, let's see how the distributions in our expression matrix have changed. If you want to save the figure, you can:", "sns.boxplot(expression_filtered_by_all_samples)\n\n# gcf = Get current figure\nfig = plt.gcf()\nfig.savefig('expression_filtered_by_all_samples_boxplot.pdf')", "Discussion\n\nHow did the gene expression distributions change? Why?\nWere the single and pooled samples' distributions affected differently? Why or why not?\n\nGetting only the columns you want\nIn the next exercise, we'll get just the single cells.\nFor the next step, we're going to pull out just the pooled samples - which are conveniently labeled as \"P#\". We'll do this using a list comprehension, which means we'll create a new list based on the items in shalek2013_expression.columns and whether or not they start with the letter 'P'.\nIn Python, things in square brackets ([]) are lists unless indicated otherwise. 
We are using a list comprehension here instead of a map, because we only want a subset of the columns, rather than all of them.", "pooled_ids = [x for x in expression_logged.columns if x.startswith('P')]\npooled_ids", "We'll access the columns we want using this bracket notation (note that this only works for columns, not rows)", "pooled = expression_logged[pooled_ids]\npooled.head()", "We could do the same thing using .loc but we would need to put a colon \":\" in the \"rows\" section (first place) to show that we want \"all rows.\"", "expression_logged.loc[:, pooled_ids].head()", "Exercise 7: Make a dataframe of only single samples\nUse list comprehensions to make a list called single_ids that consists only of single cells, and use that list to subset expression_logged and create a dataframe called singles. (Hint - how are the single cells ids different from the pooled ids?)", "# YOUR CODE HERE\n\nprint(singles.shape)\nsingles.head()", "", "single_ids = [x for x in expression_logged.columns if x.startswith('S')]\nsingles = expression_logged[single_ids]\nprint(singles.shape)\nsingles.head()", "Using two different dataframes for filtering\nExercise 8: Filter the full dataframe using the singles dataframe\nNow we'll actually do the filtering done by the paper. Using the singles dataframe you just created, get the genes that have expression greater than 1 in at least 3 single cells, and use that to filter expression_logged. Call this dataframe expression_filtered_by_singles.", "# YOUR CODE HERE\n\nprint(expression_filtered_by_singles.shape)\nexpression_filtered_by_singles.head()", "", "rows = (singles > 1).sum(axis=1) > 3\n\nexpression_filtered_by_singles = expression_logged.loc[rows]\nprint(expression_filtered_by_singles.shape)\nexpression_filtered_by_singles.head()", "Let's make a boxplot again to see how the data has changed.", "sns.boxplot(expression_filtered_by_singles)\n\nfig = plt.gcf()\nfig.savefig('expression_filtered_by_singles_boxplot.pdf')", "This is much nicer because now we don't have so many zeros and each sample has a reasonable dynamic range.\nWhy did this filtering even matter?\nYou may be wondering, we did all this work to remove some zeros..... so the FPKM what? Let's take a look at how this affects the relationships between samples using sns.jointplot from seaborn, which will plot a correlation scatterplot. This also calculates the Pearson correlation, a linear correlation metric.\nLet's first do this on the unlogged data.", "sns.jointplot(shalek2013_expression['S1'], shalek2013_expression['S2'])", "Pretty funky looking huh? That's why we logged it :)\nNow let's try this on the logged data.", "sns.jointplot(expression_logged['S1'], expression_logged['S2'])", "Hmm our pearson correlation increased from 0.62 to 0.64. Why could that be?\nLet's look at this same plot using the filtered data.", "sns.jointplot(expression_filtered_by_singles['S1'], expression_filtered_by_singles['S2'])", "And now our correlation went DOWN!? Why would that be? \nExercise 9: Discuss changes in correlation\nTake 2-5 sentences to explain why the correlation changed between the different datasets.\nYOUR ANSWER HERE" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
alfkjartan/control-computarizado
discrete-time-systems/notebooks/Evaluating-polynomials.ipynb
mit
[ "Recursively computing values of a polynomial using difference equations\nIn the lecture Introduction to digital control by Peter Corke, he talks about the historical importance of difference equations for computing values of a polynomial. Let's look at this in some more detail.\nA first order polynomial\nConsider the polynomial\n$$ p(x) = 4x + 2. $$\nThe first difference is\n$$ \\Delta p(x) = p(x) - p(x-h) = 4x + 2 - \\big( 4(x-h) + 2 \\big) = 4h, $$\nand the second order difference is zero (as are all higher order differences):\n$$ \\Delta^2 p(x) = \\Delta p(x) - \\Delta p(x-h) = 4h - 4h = 0. $$\nUsing the first order difference, we can also write the second order difference $ \\Delta p(x) - \\Delta p(x-h) = \\Delta^2 p(x) $\nas\n$$ p(x) - p(x-h) - \\Delta p(x-h) = \\Delta^2p(x) $$\nor\n$$ p(x) = p(x-h) + \\Delta p(x-h) + \\Delta^2 p(x)$$\nwhich for the first order polynomial above becomes\n$$ p(x) = p(x-h) + \\Delta p(x-h) = p(x-h) + 4h. $$", "import numpy as np\nimport scipy.signal as signal\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef p1(x): return 4*x + 2 # Our first-order polynomial\n# Compute values for x=[0,0.2, 0.4, ... 2] recursively using the difference equation\nh = 0.2\nx = h*np.arange(11) # Gives the array [0,0.2, 0.4, ... 2]\npd = np.zeros(11)\nd1 = 4*h\n\n# Need to compute the first value as the initial value for the difference equation,\npd[0] = p1(x[0])\n\nfor k in range(1,11): # Solve difference equation\n pd[k] = pd[k-1] + d1\n\nplt.figure(figsize=(14,6))\nplt.plot(x, p1(x), linewidth=2)\nplt.plot(x, pd, 'ro')", "Second order polynomial\nFor a second order polynomial \n$$ p(x) = a_0x^2 + a_1x + a_2 $$\nwe have\n$$ p''(x) = 2a_0, $$\nand the differences\n$$ \\Delta p(x) = p(x) - p(x-h) = a_0x^2 + a_1x + a_2 - \\big( a_0(x-h)^2 + a_1(x-h) + a_2 \\big) = h(2a_0x + a_1) - a_0h^2, $$\n$$ \\Delta^2 p(x) = \\Delta p(x) - \\Delta p(x-h) = h(2a_0x+a_1) - a_0h^2 - \\big( h(2a_0(x-h) + a_1) - a_0 h^2 \\big) = 2a_0h^2 $$\nRecall the difference equation using the second order difference\n$$ p(x) = p(x-h) + \\Delta p(x-h) + \\Delta^2 p(x)$$\nWe now get\n$$ p(x) = p(x-h) + \\Delta p(x-h) + \\Delta^2 p(x) = p(x-h) + \\Delta p(x-h) + 2a_0h^2,$$\nor, using the definition of the first-order difference $\\Delta p(x-h)$ \n$$ p(x) = 2p(x-h) - p(x-2h) + 2a_0h^2.$$\nConsider the second order polynomial\n$$ p(x) = 2x^2 - 3x + 2, $$\nand compute values using the difference equation.", "a0 = 2\na1 = -3\na2 = 2\ndef p2(x): return a0*x**2 + a1*x + a2 # Our second-order polynomial\n\n# Compute values for x=[0,0.2, 0.4, ... 8] recursively using the difference equation\nh = 0.2\nx = h*np.arange(41) # Gives the array [0,0.2, 0.4, ... 8]\nd1 = np.zeros(41) # The first differences\npd = np.zeros(41)\nd2 = h**2*2*a0 # The constant, second difference\n\n# Need to compute the first two values to get the initial values for the difference equation,\npd[0] = p2(x[0])\npd[1] = p2(x[1])\n\nfor k in range(2,41): # Solve difference equation\n pd[k] = 2*pd[k-1] - pd[k-2] + d2\n \nplt.figure(figsize=(14,6))\nplt.plot(x, p2(x), linewidth=2) # Evaluating the polynomial\nplt.plot(x, pd, 'ro') # The solution using the difference equation", "Exercise\nWhat order would the difference equation be for computing values of a third-order polynomial? What is the difference equation?" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
paulmorio/grusData
basics/.ipynb_checkpoints/SupportVectorMachines-checkpoint.ipynb
mit
[ "Support Vector Machines\nSupport vector machines (SVMs) are a particularly powerful and flexible class of supervised algorithms for both classification and regression. In this section, we will develop the intuition behind support vector machines and their use in classification problems.\nWe begin with the standard imports:", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\n\n# use seaborn plotting defaults\nimport seaborn as sns; sns.set()", "Motivating Support Vector Machines\nAs part of our discussion of Bayesian classification (see In Depth: Naive Bayes Classification), we learned a simple model describing the distribution of each underlying class, and used these generative models to probabilistically determine labels for new points. That was an example of generative classification; here we will consider instead discriminative classification: rather than modeling each class, we simply find a line or curve (in two dimensions) or manifold (in multiple dimensions) that divides the classes from each other.\nAs an example of this, consider the simple case of a classification task, in which the two classes of points are well separated:", "from sklearn.datasets.samples_generator import make_blobs\nX, y = make_blobs(n_samples=50, centers=2,\n random_state=0, cluster_std=0.60)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');", "A linear discriminative classifier would attempt to draw a straight line separating the two sets of data, and thereby create a model for classification. For two-dimensional data like that shown here, this is a task we could do by hand. But immediately we see a problem: there is more than one possible dividing line that can perfectly discriminate between the two classes!\nWe can draw them as follows:", "xfit = np.linspace(-1, 3.5)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')\nplt.plot([0.6], [2.1], 'x', color='red', markeredgewidth=2, markersize=10)\n\nfor m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:\n plt.plot(xfit, m * xfit + b, '-k')\n\nplt.xlim(-1, 3.5);", "These are three very different separators which, nevertheless, perfectly discriminate between these samples. Depending on which you choose, a new data point (e.g., the one marked by the \"X\" in this plot) will be assigned a different label! Evidently our simple intuition of \"drawing a line between classes\" is not enough, and we need to think a bit deeper.\nSupport Vector Machines: Maximizing the Margin\nSupport vector machines offer one way to improve on this. The intuition is this: rather than simply drawing a zero-width line between the classes, we can draw around each line a margin of some width, up to the nearest point. Here is an example of how this might look:", "xfit = np.linspace(-1, 3.5)\nplt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')\n\nfor m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:\n yfit = m * xfit + b\n plt.plot(xfit, yfit, '-k')\n plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',\n color='#AAAAAA', alpha=0.4)\n\nplt.xlim(-1, 3.5);", "In support vector machines, the line that maximizes this margin is the one we will choose as the optimal model. Support vector machines are an example of such a maximum margin estimator.\nFitting the Support Vector Machine\nLet's see the result of an actual fit to this data: we will use Scikit-Learn's support vector classifier to train an SVM model on this data. 
For the time being, we will use a linear kernel and set the C parameter to a very large number (we'll discuss the meaning of these in more depth momentarily).", "from sklearn.svm import SVC # \"Support vector classifier\"\nmodel = SVC(kernel='linear', C=1E10)\nmodel.fit(X, y)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/ga360-bqml-toolkit
notebooks/GA360_Gazer_Automated_EDA.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Overview\nThis notebook provides code to populate a dashboard that compares audience behavior based on the GA360 BQ Export. This is particularly useful for customers interested in understanding behavior prior to an observed event, which is useful for behavior-based segmentation, site optimization, or as inputs for a predictive model.\nIn addition to a sound GA360 tagging implementation, you will need access to the source dataset as a viewer, and access to run BQ jobs on a GCP project.\nDataset\nThis notebook is meant to be a scalable solution that works with any GA360 BQ Export. This particular example utilizes the GA360 data from the Google Merchandise Store, publicly available here. Due to the limited nature of the Merchandise Store Data, not all aspects of this notebook will produce results; try it on your own (corporate) data!\nObjective\nThe resulting dashboard provides a quick solution to visualize differences in audience behavior based on the Google Analytics 360 BigQuery Export. Without customization, this defaults to comparing the general population vs. the behavior of a particular audience of interest, e.g. users who make a purchase online, or who purchase above a certain dollar amount. These insights can be used in a variety of ways, which include (but are not limited to):\n - Provide guidance to create rules-based audiences\n - Recommend potential ways to optimize check-out flow or site design\n - Highlight potential features for a propensity model\nCosts\nThis tutorial uses billable components of Google Cloud Platform (GCP):\n\nBigQuery\n\nLearn about BigQuery pricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nDetails\nThe insights and analysis offered by GA360 are numerous, and this notebook does not intend to cover all of them. Here is a list of features included in this example:\n- Traffic source (trafficSource.medium)\n- DMA\n- Time visited by daypart\n- Time visited by day\n- Device category (device.deviceCategory)\n- Page path level 1 (hits.page.pagePathLevel1)\n- Ecommerce action (hits.eCommerceAction.action_type)\n- Product engagement (hits.product.v2ProductCategory)\n- Browser (device.browser)\n- total sessions \n- page views\n- average time per page\n- average session depth (page views per session)\n- distinct DMAs (for users on mobile, signifies if they are traveling or not)\n- session & hit level custom dimensions\nNotes on data output:\n\nContinuous variables generate histograms and cut off the top 0.5% of data\nCustom dimensions will only populate if they are set up on the GA360 implementation, and are treated as categorical features\nAs-is, only custom dimension indices 50 or lower will be visualized; you will need to edit the dashboard to look at the distribution of indices above 50. 
All custom dimensions will be evaluated by the query, so will be present in the underlying dataset.\n\nSet up your GCP project\nIf you are not already a GCP customer with GA360 and its BQ Export enabled, follow the steps below. If you want to simply implement this on you already-existing dataset, skip to \"Import libraries and define parameters\".\n\n\nSelect or create a GCP project.. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the BigQuery API.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID_BILLING = \"\" # Set the project ID\n! gcloud config set project $PROJECT_ID_BILLING", "Authenticate your GCP account\nIf you are using AI Platform Notebooks, your environment is already\nauthenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions\nwhen prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\n\n\nIn the GCP Console, go to the Create service account key\n page.\n\n\nFrom the Service account drop-down list, select New service account.\n\n\nIn the Service account name field, enter a name.\n\n\nFrom the Role drop-down list, select\n Machine Learning Engine > AI Platform Admin and\n Storage > Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your\nlocal environment.\n\n\nEnter the path to your service account key as the\nGOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "import sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nif 'google.colab' in sys.modules:\n from google.colab import auth\n auth.authenticate_user()\n\n# If you are running this notebook locally, replace the string below with the\n# path to your service account key and run this cell to authenticate your GCP\n# account.\nelse:\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a BigQuery dataset\nIf you already have a dataset ready to save tables to, skip this step.\nSet the name of your BigQuery dataset below. Dataset IDs\nmust be alphanumeric (plus underscores) and must be at most 1024 characters\nlong.", "DATASET_NAME = \"\" # Name the dataset you'd like to save the output to\nLOCATION = \"US\"\n\n! bq mk --location=$LOCATION --dataset $PROJECT_ID_BILLING:$DATASET_NAME", "Validate that your dataset created successfully (this will throw an error if there is no dataset)", "! 
bq show --format=prettyjson $PROJECT_ID_BILLING:$DATASET_NAME", "Import libraries and define parameters\n\nPROJECT_ID_BILLING is where querying costs will be billed to\nGA_* fields are where the GA360 BQ Export is stored\nSTART_DATE and END_DATE note the date range for analysis\nUTC_ADJUSTMENT adjusts for timezone for the appropriate fields*\n\n*Note that the GA360 BQ Export has all timestamps in POSIX time", "# Import libraries\nimport numpy as np\nimport pandas as pd\n\n# Colab tools & bigquery library\nfrom google.cloud import bigquery\nbigquery.USE_LEGACY_SQL = False\n\npd.options.display.float_format = '{:.5f}'.format\n\nGA_PROJECT_ID = \"bigquery-public-data\" \nGA_DATASET_ID = \"google_analytics_sample\" \nGA_TABLE_ID = \"ga_sessions_*\" \nSTART_DATE = \"20170501\" # Format is YYYYMMDD, for GA360 BQ Export\nEND_DATE = \"20170801\" \nUTC_ADJUSTMENT = -5 \n\nclient = bigquery.Client(project=PROJECT_ID_BILLING)", "Define target audience and filters\n\n\nuser_label_query is used to segment the GA360 BQ Export between your target audience and general population.\n\n\nquery_filter is used to further define all data that is aggregated:\n\nRemoves behavior during or after the session in which the target event occurs\nSubset to only the United States\nSpecify start and end date for analysis", "# Define the query to identify your target audience with label \n# (1 for target, 0 for general population)\nuser_label_query = f\"\"\"\nSELECT \n fullvisitorId, \n max(case when totals.transactions = 1 then 1 else 0 end) as label,\n min(case when totals.transactions = 1 then visitStartTime end) as event_session\nFROM \n `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`\nWHERE \n _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'\n AND geoNetwork.Country=\"United States\"\nGROUP BY \n fullvisitorId\n\"\"\"\n\n# query_filter -- Change this if you want to adjust WHERE clause in \n# the query. 
This will be inserted after all clauses selecting from \n# the GA360 BQ Export.\n\nquery_filter = f\"\"\"\nWHERE (\n _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'\n AND geoNetwork.Country=\"United States\"\n AND (a.visitStartTime < IFNULL(event_session, 0)\n or event_session is null) )\"\"\"", "Query custom dimensions to isolate fields with fewer unique values, which will be visualized\nStart with session-level custom dimensions:", "# Set cut off for session-level custom dimensions, \n# then query BQ Export to pull relevant indices\nsessions_cut_off = 20 # Max number of distinct values in custom dimensions\n\n# By default, assume there will be custom dimensions at the session and hit level.\n# Further down, set these to False if no appropriate CDs are found.\nquery_session_cd = True\n\n# Unnest session-level custom dimensions a count values for each index\nsessions_cd = f\"\"\"\nSELECT index, count(distinct value) as dist_values\nFROM (SELECT cd.index, cd.value, count(*) as sessions\n FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,\n UNNEST(customDimensions) as cd\n WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'\n GROUP BY 1, 2\n ORDER BY 1, 2)\nGROUP BY index\n\"\"\"\n\ntry:\n # Run a Standard SQL query with the project set explicitly\n sessions_custom_dimensions = client.query(sessions_cd, \n project=PROJECT_ID_BILLING).to_dataframe()\n\n # Create list of session-level CDs to visualize\n session_index_list = sessions_custom_dimensions.loc[\n sessions_custom_dimensions.dist_values <= sessions_cut_off, 'index'].values\n session_index_exclude = sessions_custom_dimensions.loc[\n sessions_custom_dimensions.dist_values > sessions_cut_off, 'index'].values\n\n if len(session_index_list) == 0:\n query_session_cd = False\n print(\"No session-level indices found.\")\n\n else: \n print(f\"\"\"Printing visualizations for the following session-level indices: \\\n {session_index_list};\\n\n Excluded the following custom dimension indices because they had more than \\\n {sessions_cut_off} possible values: {session_index_exclude}\\n \\n\"\"\")\n\nexcept:\n query_session_cd = False", "Repeat for hit level custom dimensions:", "# Set cut off for hit-level custom dimensions, \n# then query BQ Export to pull relevant indices\nhit_cut_off = 20 \n\n# By default, assume there will be custom dimensions at the session and hit level.\n# Further down, set these to False if no appropriate CDs are found.\nquery_hit_cd = True\n\nhits_cd = f\"\"\"\nSELECT index, count(distinct value) as dist_values\nFROM (\n SELECT cd.index, cd.value, count(*) as hits\n FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}`,\n UNNEST(hits) as ht,\n UNNEST(ht.customDimensions) as cd\n WHERE _TABLE_SUFFIX BETWEEN '{START_DATE}' AND '{END_DATE}'\n GROUP BY 1, 2\n ORDER BY 1, 2 )\nGROUP BY index\n\"\"\"\n\ntry:\n hits_custom_dimensions = client.query(hits_cd, project=PROJECT_ID_BILLING).to_dataframe()\n\n # Create list of hit-level CDs to visualize\n hit_index_list = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values <= hit_cut_off, 'index'].values\n hit_index_exclude = hits_custom_dimensions.loc[hits_custom_dimensions.dist_values > hit_cut_off, 'index'].values\n\n if len(hit_index_list) == 0:\n query_hit_cd = False\n print(\"No hit-level indices found.\")\n\n else:\n print(f\"\"\"Printing visualizations for the following hit-level cds: \\\n {hit_index_list};\\n\n Excluded the following custom dimension indices because they had more than \\\n {hit_cut_off} possible values: {hit_index_exclude}\\n 
\\n\"\"\")\n\nexcept:\n print(\"No hit-level custom dimensions found!\")\n query_hit_cd = False", "Programmatically write a query that pulls distinct users, by class, for features and every custom dimension (session & hit level).\nIf you want to view the query, set View_Query to True in the cell below.", "# Write a big query that aggregates data to be used as dashboard input\n\n# Set to True if you want to print the final query after it's generated\nView_Query = False\n\nfinal_query = f\"\"\"\nWITH users_labeled as (\n{user_label_query}\n),\n\ntrafficSource_medium AS (\nSELECT count(distinct CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\ncount(distinct CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\ntrafficSource_medium AS trafficSource_medium,\n'trafficSource_medium' AS type\nFROM (\n SELECT a.fullvisitorId, \n trafficSource.medium AS trafficSource_medium,\n label\n FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n unnest (hits) as hits\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3)\nGROUP BY trafficSource_medium),\n\ndma_staging AS (\n SELECT a.fullvisitorId, \n geoNetwork.metro AS metro,\n label,\n COUNT(*) AS visits\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3),\n\n--- Finds the dma with the most visits for each user. If it's a tie, arbitrarily picks one.\nvisitor_dma AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\nmetro AS dma,\n'dma' AS type\nFROM (\n SELECT fullvisitorId,\n metro, \n label,\n ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num\n FROM dma_staging)\nWHERE row_num = 1 \nGROUP BY metro, type),\n\ndistinct_dma AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\ndistinct_dma AS distinct_dma,\n'distinct_dma' AS type\nFROM (\n SELECT COUNT(DISTINCT metro) as distinct_dma,\n fullvisitorId,\n label\n FROM dma_staging\n GROUP BY fullvisitorId, label)\nGROUP BY distinct_dma),\n \n\n-- Finds the daypart with the most pageviews for each user; adjusts for timezones and daylight savings time, loosely\nvisitor_common_daypart AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\n'day_part' AS type,\ndaypart\nFROM (\n SELECT fullvisitorId, daypart, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pageviews DESC) AS row_num\n FROM (\n SELECT\n fullvisitorId,\n label,\n CASE WHEN hour_of_day >= 1 AND hour_of_day < 6 THEN '1_night_1_6' \n WHEN hour_of_day >= 6 AND hour_of_day < 11 THEN '2_morning_6_11' \n WHEN hour_of_day >= 11 AND hour_of_day < 14 THEN '3_lunch_11_14' \n WHEN hour_of_day >= 14 AND hour_of_day < 17 THEN '4_afternoon_14_17' \n WHEN hour_of_day >= 17 AND hour_of_day < 19 THEN '5_dinner_17_19' \n WHEN hour_of_day >= 19 AND hour_of_day < 22 THEN '6_evening_19_23' \n WHEN hour_of_day >= 22 OR hour_of_day = 0 THEN '7_latenight_23_1'\n END AS daypart, SUM(pageviews) AS pageviews\n FROM (\n SELECT a.fullvisitorId, b.label, EXTRACT(HOUR\n FROM TIMESTAMP_ADD(TIMESTAMP_SECONDS(visitStartTime), INTERVAL {UTC_ADJUSTMENT} HOUR)) AS hour_of_day,\n totals.pageviews AS pageviews\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a\n 
LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n )\n GROUP BY 1,2,3) )\nWHERE row_num = 1 \nGROUP BY type, daypart),\n\n-- Finds the most common day based on pageviews\nvisitor_common_day AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\n'DoW' AS type,\ncase when day = 1 then \"1_Sunday\"\nwhen day = 2 then \"2_Monday\"\nwhen day = 3 then \"3_Tuesday\"\nwhen day = 4 then \"4_Wednesday\"\nwhen day = 5 then \"5_Thursday\"\nwhen day = 6 then \"6_Friday\"\nwhen day = 7 then \"7_Saturday\" end as day\nFROM (\n SELECT fullvisitorId, day, label, ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY pages_viewed DESC) AS row_num\n FROM (\n SELECT a.fullvisitorId, \n EXTRACT(DAYOFWEEK FROM PARSE_DATE('%Y%m%d',date)) AS day, \n SUM(totals.pageviews) AS pages_viewed,\n b.label\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,4 ) )\nWHERE row_num = 1 \nGROUP BY type, day),\n \ntechnology AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\ndeviceCategory AS deviceCategory,\nbrowser AS browser,\n'technology' AS type\nFROM (\n SELECT fullvisitorId,\n deviceCategory,\n browser,\n label,\n ROW_NUMBER() OVER (PARTITION BY fullvisitorId ORDER BY visits DESC) AS row_num\n FROM (\n SELECT a.fullvisitorId, \n device.deviceCategory AS deviceCategory,\n CASE WHEN device.browser LIKE 'Chrome%' THEN device.browser WHEN device.browser LIKE 'Safari%' THEN device.browser ELSE 'Other browser' END AS browser,\n b.label,\n COUNT(*) AS visits\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3,4))\n WHERE row_num = 1 \nGROUP BY deviceCategory,browser,type),\n\nPPL1 AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\nPPL1 AS PPL1,\n'PPL1' AS type\nFROM (\n SELECT a.fullvisitorId, \n hits.page.pagePathLevel1 AS PPL1,\n b.label\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n unnest (hits) as hits\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3)\nGROUP BY PPL1),\n\necomm_action AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId END) AS count_0_users,\nCASE WHEN ecomm_action = '1' THEN '1_Click product list'\nWHEN ecomm_action = '2' THEN '2_Product detail view'\nWHEN ecomm_action = '3' THEN '3_Add to cart'\nWHEN ecomm_action = '4' THEN '4_Remove from cart'\nWHEN ecomm_action = '5' THEN '5_Start checkout'\nWHEN ecomm_action = '6' THEN '6_Checkout complete'\nWHEN ecomm_action = '7' THEN '7_Refund'\nWHEN ecomm_action = '8' THEN '8_Checkout options'\nELSE '9_No_ecomm_action'\nEND AS ecomm_action,\n'ecomm_action' AS type\nFROM (\n SELECT a.fullvisitorId, \n hits.eCommerceAction.action_type AS ecomm_action,\n b.label\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n unnest (hits) as hits\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3)\nGROUP BY ecomm_action),\n\nprod_cat AS (\nSELECT COUNT(DISTINCT CASE WHEN label = 1 THEN fullvisitorId END) AS count_1_users,\nCOUNT(DISTINCT CASE WHEN label = 0 THEN fullvisitorId 
END) AS count_0_users,\nprod_cat AS prod_cat,\n'prod_cat' AS type\nFROM (\n SELECT a.fullvisitorId, \n prod.v2ProductCategory AS prod_cat,\n b.label\n FROM`{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n unnest (hits) as hits,\n UNNEST (hits.product) AS prod\n LEFT JOIN users_labeled b USING(fullvisitorId)\n {query_filter}\n GROUP BY 1,2,3)\nGROUP BY prod_cat),\n\nagg_metrics AS (\nSELECT fullvisitorId,\n CASE WHEN label IS NULL then 0 else label end as label,\n count(distinct visitId) as total_sessions,\n sum(totals.pageviews) as pageviews,\n count(totals.bounces)/count(distinct VisitID) as bounce_rate,\n sum(totals.timeonSite)/sum(totals.pageviews) as time_per_page,\n sum(totals.pageviews) / count(distinct VisitID) as avg_session_depth\nFROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a\nLEFT JOIN users_labeled b\nUSING (fullvisitorId)\n{query_filter}\nGROUP BY 1,2\n),\n\nAgg_sessions AS (\nSELECT fullvisitorId, label, total_sessions \nFROM agg_metrics),\n\nAgg_pageviews AS (\nSELECT fullvisitorId, label, pageviews \nFROM agg_metrics),\n\nAgg_time_per_page AS (\nSELECT fullvisitorId, label, time_per_page\nFROM agg_metrics),\n\nAgg_avg_session_depth AS (\nSELECT fullvisitorId, label, avg_session_depth\nFROM agg_metrics),\n\nhist_sessions AS (\nSELECT \n ROUND(min+max/2) as avg_sessions,\n COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,\n COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,\n 'stats_sessions' as type\nFROM Agg_sessions\nJOIN (SELECT min+step*i min, min+step*(i+1)max\n FROM (\n SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i\n FROM (\n SELECT MIN(total_sessions) min, MAX(total_sessions) max\n FROM Agg_sessions\n JOIN (select APPROX_QUANTILES(total_sessions, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_sessions) b\n ON agg_sessions.total_sessions <= b.trimmer\n )\n ), UNNEST(i) i) stats_sessions\nON Agg_sessions.total_sessions >= stats_sessions.min \nAND Agg_sessions.total_sessions < stats_sessions.max\nGROUP BY min, max\nORDER BY min),\n\nhist_pageviews AS (\nSELECT \n ROUND(min+max/2) as avg_pageviews,\n COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,\n COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,\n 'stats_pageviews' as type\nFROM Agg_pageviews\nJOIN (SELECT min+step*i min, min+step*(i+1)max\n FROM (\n SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i\n FROM (\n SELECT MIN(pageviews) min, MAX(pageviews) max\n FROM Agg_pageviews\n JOIN (select APPROX_QUANTILES(pageviews, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_pageviews) b\n ON agg_pageviews.pageviews <= b.trimmer\n )\n ), UNNEST(i) i) stats_pageviews\nON Agg_pageviews.pageviews >= stats_pageviews.min \nAND Agg_pageviews.pageviews < stats_pageviews.max\nGROUP BY min, max\nORDER BY min),\n\nhist_time_per_page AS (\nSELECT \n ROUND(min+max/2) as avg_time_per_page,\n COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,\n COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,\n 'stats_time_per_page' as type\nFROM Agg_time_per_page\nJOIN (SELECT min+step*i min, min+step*(i+1)max\n FROM (\n SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i\n FROM (\n SELECT MIN(time_per_page) min, MAX(time_per_page) max\n FROM Agg_time_per_page\n JOIN (select APPROX_QUANTILES(time_per_page, 200 IGNORE NULLS)[OFFSET(199)] as trimmer 
FROM Agg_time_per_page) b\n ON agg_time_per_page.time_per_page <= b.trimmer\n )\n ), UNNEST(i) i) stats_time_per_page\nON Agg_time_per_page.time_per_page >= stats_time_per_page.min \nAND Agg_time_per_page.time_per_page < stats_time_per_page.max\nGROUP BY min, max\nORDER BY min),\n\nhist_avg_session_depth AS (\nSELECT \n ROUND(min+max/2) as avg_avg_session_depth,\n COUNT(distinct case when label = 1 then fullvisitorId end) as count_1_users,\n COUNT(distinct case when label = 0 or label is null then fullvisitorId end) as count_0_users,\n 'stats_avg_session_depth' as type\nFROM Agg_avg_session_depth\nJOIN (SELECT min+step*i min, min+step*(i+1)max\n FROM (\n SELECT max-min diff, min, max, (max-min)/20 step, GENERATE_ARRAY(0, 20, 1) i\n FROM (\n SELECT MIN(avg_session_depth) min, MAX(avg_session_depth) max\n FROM Agg_avg_session_depth\n JOIN (select APPROX_QUANTILES(avg_session_depth, 200 IGNORE NULLS)[OFFSET(199)] as trimmer FROM Agg_avg_session_depth) b\n ON agg_avg_session_depth.avg_session_depth <= b.trimmer\n )\n ), UNNEST(i) i) stats_avg_session_depth\nON Agg_avg_session_depth.avg_session_depth >= stats_avg_session_depth.min \nAND Agg_avg_session_depth.avg_session_depth < stats_avg_session_depth.max\nGROUP BY min, max\nORDER BY min)\n\"\"\"\n\nif query_session_cd:\n session_cd_query = \",\\nsession_cds AS (SELECT * FROM (\"\n\n counter = len(session_index_list)\n start = 1\n\n for ind in session_index_list:\n ind_num = ind\n session_custom_dimension_query_base = f\"\"\"SELECT\n \"session_dim_{ind_num}\" as type,\n count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,\n count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,\n cd.value as session_dim_{ind_num}_value\n FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n UNNEST(customDimensions) as cd\n LEFT JOIN users_labeled b\n ON a.fullvisitorId = b.fullvisitorId\n {query_filter}\n AND cd.index = {ind_num}\n GROUP BY type, cd.value)\"\"\"\n query_add = session_custom_dimension_query_base\n session_cd_query += query_add\n\n if start > 1:\n session_cd_query += \"USING (type, count_1_users, count_0_users)\"\n\n if start < counter:\n session_cd_query += \"\\nFULL OUTER JOIN\\n(\"\n start+=1\n\n session_cd_query+=\")\\n\"\n\n final_query += session_cd_query\n\n# Query hits \nif query_hit_cd:\n hit_cd_query = \",\\nhits_cds AS (SELECT * FROM (\"\n\n counter = len(hit_index_list)\n start = 1\n\n for ind in hit_index_list:\n ind_num = ind\n hit_cust_d_query_base = f\"\"\"SELECT\n \"hit_dim_{ind_num}\" as type,\n count(distinct case when label = 1 then a.fullvisitorId end) as count_1_users,\n count(distinct case when label = 0 then a.fullvisitorId end) as count_0_users,\n cd.value as hit_dim_{ind_num}_value\n FROM `{GA_PROJECT_ID}.{GA_DATASET_ID}.{GA_TABLE_ID}` a,\n UNNEST(hits) as ht,\n UNNEST(ht.customDimensions) as cd\n LEFT JOIN users_labeled b\n ON a.fullvisitorId = b.fullvisitorId\n {query_filter}\n AND cd.index = {ind_num}\n GROUP BY type, cd.value)\n \"\"\"\n\n query_add = hit_cust_d_query_base\n hit_cd_query += query_add\n\n if start > 1:\n hit_cd_query += \"USING (type, count_1_users, count_0_users)\"\n\n if start < counter:\n hit_cd_query += \"\\nFULL OUTER JOIN\\n(\"\n start+=1\n\n hit_cd_query+=\")\\n\"\n\n final_query += hit_cd_query\n\n\nfinal_query += \"\"\"SELECT *, count_1_users/(count_1_users+count_0_users) as conv_rate FROM trafficSource_medium\nFULL OUTER JOIN visitor_dma USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN distinct_dma USING 
(type,count_1_users,count_0_users)\nFULL OUTER JOIN visitor_common_daypart USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN visitor_common_day USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN technology USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN PPL1 USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN ecomm_action USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN prod_cat USING (type,count_1_users,count_0_users)\nFULL OUTER JOIN hist_sessions USING (type, count_1_users, count_0_users)\nFULL OUTER JOIN hist_pageviews USING (type, count_1_users, count_0_users)\nFULL OUTER JOIN hist_time_per_page USING (type, count_1_users, count_0_users)\nFULL OUTER JOIN hist_avg_session_depth USING (type, count_1_users, count_0_users)\n\"\"\"\n\nif query_hit_cd:\n final_query+=\"FULL OUTER JOIN hits_cds USING (type,count_1_users,count_0_users)\"\n \nif query_session_cd:\n final_query+=\"FULL OUTER JOIN session_cds USING (type,count_1_users,count_0_users)\"\n\nif (View_Query):\n print(final_query)", "Save results to BQ. As-is, only writes if there is no table that already exists.", "# Set the destination for your query results.\n# This will be your data source for the Data Studio dashboard.\nDESTINATION = f\"{PROJECT_ID_BILLING}.{DATASET_NAME}.ga360_gazer_output\"\n\njob_config = bigquery.QueryJobConfig(destination=DESTINATION, \n writeDisposition=\"WRITE_EMPTY\")\n\n# Start the query, passing in the extra configuration.\nquery_job = client.query(final_query, job_config=job_config)\nquery_job.result()\n\nprint(\"Query results loaded to the table {}\".format(DESTINATION))", "Visualize results with a pre-built Data Studio dashboard:\n\nOpen the templated dashboard here \nMake a copy with the button in the top menu bar. When making a copy:\nAccept the terms and conditions, if it's your first time using Data Studio\nCreate a new data source\nSelect BigQuery (you will need to grant permissions again)\nUnder Project, select your project specified by PROJECT_ID_BILLING\nUnder Dataset, select the dataset you specified as DATASET_NAME\nUnder Table, select \"ga360_gazer_output\" (unless you changed the name)\nClick \"Connect\"\nYou will see a list of fields - click \"ADD TO REPORT\" on the top right\nYou will be prompted to make a copy of the original report with your new data source - click \"Copy Report\"\nPage through the pages to view insights\n\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.", "# Delete the dataset and all contents within\n! bq rm -r $PROJECT_ID_BILLING:$DATASET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LSSTC-DSFP/LSSTC-DSFP-Sessions
Sessions/Session03/Day4/Profiling_solns.ipynb
mit
[ "Profiling and Optimizing\n\nBy C Hummels (Caltech)", "import random\nimport numpy as np\nfrom matplotlib import pyplot as plt", "It can be hard to guess which code is going to operate faster just by looking at it because the interactions between software and computers can be extremely complex. The best way to optimize code is through using profilers to identify bottlenecks in your code and then attempt to address these problems through optimization. Let's give it a whirl.\nProblem 1) Using timeit\nWe will begin our experience with profilers by using the time and timeit commands. time can be run on any size of program, but it returns coarse level time information on how long something took to run overall.\nThere are a lot of small optimizations that can add up to a lot of time in real-world software. Let's look at a few of the non-obvious ones.\nProblem 1a\nWhat is the best way to join a bunch of strings into a larger string? There are several ways of doing this, but some are clearly superior to others. Let's use timeit to test things out. \nBelow, in each of the cells after the string_list is defined, put a new code snippet using the following three methods for building a string:\n--Use the builtin + operator to add strings together in an iterative way\n--Use the join method, as in \"\".join(list).\n--Iteratively add the strings from the list together using \"%s %s\" string composition.\nGuess which method you think will be fastest? Now test it out and see if you're right!", "string_list = ['the ', 'quick ', 'brown ', 'fox ', 'jumped ', 'over ', 'the ', 'lazy ', 'dog']\n\n%%timeit\noutput = \"\"\nfor string in string_list:\n output+=string\n\n%%timeit\n\"\".join(string_list)\n\n%%timeit\noutput = \"\"\nfor word in string_list:\n output = \"%s %s\" % (output, word)", "Interesting! So it appears that the join method was the fastest by a factor of four or so. Good to keep that in mind for future use of strings!\nProblem 1b\nWhat about building big lists or list-like structures (like numpy arrays)? We now know how to construct lists in a variety of ways, so let's see which is fastest. Make a list of ascending perfect squares (i.e. 1, 4, 9, ...) for the first 1 million integers. Use these methods:\n--Iteratively appending x**2 values on to an empty list\n--A for loop with the built in python range command\n--A for loop with the numpy arange command\n--Use the numpy arange command directly, and then take the square of it\n--Use map to map a lambda squaring function to a numpy array constructed with numpy arange\nGuess which method you think will be fastest? Now test it out and see if you're right!", "%%timeit\noutput = []\nfor x in range(1000000): output.append(x**2)\n\n%%timeit\n[x**2 for x in range(1000000)]\n\n%%timeit\n[x**2 for x in np.arange(1000000)]\n\n%%timeit\nnp.arange(1000000)**2\n\n%%timeit\nmap(lambda x: x**2, np.arange(1000000))", "Whoa! We were able to see a >100x efficiency increase by just switching these methods slightly! Numpy arrays are awesome, but I'm sort of surprised that the lambda function won compared to native numpy.\nProblem 2) Deeper profiling with cProfile and line_profiler\nProblem 2a\nOK, so what about larger program? Here is a sorting algorithm that I wrote, which may possess some inefficiencies. But it is hard to know which bugs are causing the biggest problems (some actually aren't that big of a deal in the long term). Let's see if we can speed it up. First, take this code and copy it into a file called sort.py. 
Read through the code to make sure you understand it. Then, run it with the time command, and write down the total time it took to run.", "# Sort version1\n\nimport random\n\ndef create_random_list(n_elements):\n \"\"\"\n Create a list made up of random elements in random order\n \"\"\"\n random_list = []\n for i in range(n_elements):\n random_list.append(random.random())\n return random_list\n\ndef find_minimum_index(random_list):\n \"\"\"\n Find the index of the minimum value in the list\n \"\"\"\n # current minimum\n min_value = 1\n i = 0\n\n # Find minimum in list\n for element in random_list:\n if element < min_value:\n min_value = element\n\n # Find value that matches minimum\n for element in random_list:\n if element == min_value:\n return i\n i += 1\n\ndef sort_list(random_list):\n \"\"\"\n Sort a list into ascending order\n \"\"\"\n output_list = []\n for _ in range(len(random_list)):\n i = find_minimum_index(random_list)\n minimum = random_list[i]\n output_list.append(minimum)\n del random_list[i]\n return output_list\n\nif __name__ == '__main__':\n l = create_random_list(10000)\n o = sort_list(l)", "Problem 2b\nOK, now try running the cProfile module with it in order to produce some profiling statistics. You can do this by running:\npython -m cProfile -o sort.prof sort.py\nThis will produce an output profile file called sort.prof. You can do a variety of things with sort.prof, but you'll need a few programs to do this. First, install pyprof2html with: pip install pyprof2html. Then, try:\npyprof2html sort.prof\nThis will produce a html directory, and you can just open up the enclosed index.html file to bring it to your browser. You can see function by function, what is taking the most time! You can click on column headers to change which sorting occurs.\nProblem 2c\nBut there are graphical ways of representing these data effectively. Download snakeviz, another means of viewing your profile data. You can do this with pip install snakeviz. And then open up the same file with snakeviz:\nsnakeviz sort.prof\nThis should bring up another graphical interface for analyzing the profile data. Switch to icicle mode, and explore the information a bit. Try to figure out where the \"hot\" sections of the code are. Namely, what is the most expensive function that is running in terms of time?\nProblem 2d\nOK, so if that's the most expensive, we better speed it up. We can investigate line-by-line how slow/fast things are, but we need another package for that called line_profiler. Go ahead and install this with pip install line_profiler. \nGo back to the source code file, and add a @profile line directly above the slow function. line_profiler automatically installed a file called kernprof to your $PYTHONPATH, which is used with the following format at the command line: \nkernprof.py -v -l your_script your_script_args\nStart up kernprof and we'll look at the slow function in our sort program! See if you can find where the slowdown is, based on the amount of time spent on a particular line. Can you fix this line to not be so inefficient?\nHint: Remember the ways we discussed for optimizing code: In-place operations, string concatenations, vectorizing loops, list comprehensions, range vs arange, lambda functions, etc.\nProblem 2e\nGreat! 
Now repeat these steps to improve your code:\n1) Run code with cProfile\n2) Record total time it took to run in this iteration.\n3) Load up the profiler information in snakeviz or pyprof2html\n4) Look for \"hot\" functions\n5) Run kernprof with line_profiler to identify individual lines that may be slow\n6) Make a modification to the code trying to address the problem\n7) Go back to (1) until you're satisfied.\nYou should be able to iterate on this until not one of the native functions is in the top 20 list of hot functions, the others being associated with loading numpy and such. If this is the case, there is more overhead being spent on loading the data than on your actual code--try increasing the number of elements in the sorting array.\nProblem 2f\nHere is a good test. Make a new code file where you swap out all of the sorting information and just run python's native list.sort() function. Profile this, look at total time spent running, and see how it compares with our version. Note any differences? What about if you use the np.sort() function? \nProblem 2g\nLook at the memory consumption of your optimized code versus the real list.sort() and numpy.sort() functions. You can do this by using the memory_profiler. You'll need to download it first with:\npip install memory_profiler\nNow, you can look at the line by line memory consumption of a function, just like you did with line_profiler and kernprof. Again, you have to put the @profile decorator just before the function you want to profile, but then you run:\npython -m memory_profiler program.py\nRun this on your optimized code, and then on the true python list.sort() and the numpy.sort() and see who takes up the most memory. Why do you think that is?\nProblem 3) Profiling real code\nProblem 3a\nBelow I have included the Moving Galaxy and Universe code, the challenge problem from Tuesday's session. First, glance over it, to make sure you know what it's doing for the most part. Then profile it and optimize it using the algorithm described above. What is the slowest general part of the runtime? 
\nHint: If you comment that out, do things speed up?", "from matplotlib import pyplot as plt\nimport numpy as np\nimport random\n\nclass Galaxy():\n \"\"\"\n Galaxy class for simply representing a galaxy.\n \"\"\"\n def __init__(self, total_mass, cold_gas_mass, stellar_mass, age=0):\n self.total_mass = total_mass\n self.cold_gas_mass = cold_gas_mass\n self.stellar_mass = stellar_mass\n self.age = age\n self.SFR = 0\n self.color = 'red'\n \n def __repr__(self):\n return \"Galaxy (m_total = %.1g; m_cold = %.1g; m_stars = %.1g; age = %.1g; SFR = %0.2f)\" % \\\n (self.total_mass, self.cold_gas_mass, self.stellar_mass, self.age, self.SFR)\n \nclass EvolvingGalaxy(Galaxy):\n \"\"\"\n Galaxy class for representing a galaxy that can evolve over time.\n \"\"\"\n def current_state(self):\n \"\"\"\n Return a tuple of the galaxy's total_mass, cold_gas_mass, stellar_mass, age, and SFR\n \"\"\"\n return (self.total_mass, self.cold_gas_mass, self.stellar_mass, self.age, self.SFR)\n \n def calculate_star_formation_rate(self):\n \"\"\"\n Calculate the star formation rate by taking a random number between 0 and 1 \n normalized by the galaxy total mass / 1e12; \n \n Also updates the galaxy's color to blue if SFR > 0.01, otherwise color = red\n \"\"\"\n self.SFR = random.random() * (self.total_mass / 1e12)\n if self.SFR > 0.01: \n self.color = 'blue'\n else:\n self.color = 'red'\n \n def accrete_gas_from_IGM(self, time):\n \"\"\"\n Allow the galaxy to accrete cold gas from the IGM at a variable rate normalized to\n the galaxy's mass\n \"\"\"\n cold_gas_accreted = random.random() * 0.1 * time * (self.total_mass / 1e12)\n self.cold_gas_mass += cold_gas_accreted\n self.total_mass += cold_gas_accreted\n \n def form_stars(self, time):\n \"\"\"\n Form stars according to the current star formation rate and time available\n If unable cold gas, then shut off star formation\n \"\"\"\n if self.cold_gas_mass > self.SFR * time:\n self.cold_gas_mass -= self.SFR * time\n self.stellar_mass += self.SFR * time\n else:\n self.SFR = 0\n self.color = 'red'\n \n def evolve(self, time):\n \"\"\"\n Evolve this galaxy forward for a period time\n \"\"\"\n if random.random() < 0.01:\n self.calculate_star_formation_rate()\n self.accrete_gas_from_IGM(time)\n self.form_stars(time)\n self.age += time \n \nclass MovingGalaxy(EvolvingGalaxy):\n \"\"\"\n This galaxy can move over time in the x,y plane\n \"\"\"\n def __init__(self, total_mass, cold_gas_mass, stellar_mass, x_position, y_position, x_velocity, y_velocity, idnum, age=0):\n \n # Replace self with super to activate the superclass's methods\n super().__init__(total_mass, cold_gas_mass, stellar_mass)\n \n self.x_position = x_position\n self.y_position = y_position\n self.x_velocity = x_velocity\n self.y_velocity = y_velocity\n self.idnum = idnum\n \n def __repr__(self):\n return \"Galaxy %i (x = %.0f; y = %.0f)\" % (self.idnum, self.x_position, self.y_position)\n \n def move(self, time):\n \"\"\"\n \"\"\"\n self.x_position += self.x_velocity * time\n self.y_position += self.y_velocity * time\n \n def calculate_momentum(self):\n return (self.total_mass * self.x_velocity, self.total_mass * self.y_velocity)\n\n def evolve(self, time):\n self.move(time)\n super().evolve(time)\n \ndef distance(galaxy1, galaxy2):\n x_diff = galaxy1.x_position - galaxy2.x_position\n y_diff = galaxy1.y_position - galaxy2.y_position\n return (x_diff**2 + y_diff**2)**0.5\n\nclass Universe():\n \"\"\"\n \"\"\"\n def __init__(self):\n self.xrange = (0,100)\n self.yrange = (0,100)\n self.galaxies = []\n 
self.added_galaxies = []\n self.removed_galaxies = []\n self.time = 0\n pass\n \n def __repr__(self):\n out = 'Universe: t=%.2g\\n' % self.time\n for galaxy in self.galaxies:\n out = \"%s%s\\n\" % (out, galaxy)\n return out\n \n def add_galaxy(self, galaxy=None):\n if galaxy is None:\n stellar_mass = 10**(4*random.random()) * 1e6\n cold_gas_mass = 10**(4*random.random()) * 1e6\n total_mass = (cold_gas_mass + stellar_mass)*1e2\n galaxy = MovingGalaxy(total_mass,\n cold_gas_mass,\n stellar_mass,\n x_position=random.random()*100,\n y_position=random.random()*100,\n x_velocity=random.uniform(-1,1)*1e-7,\n y_velocity=random.uniform(-1,1)*1e-7,\n idnum=len(self.galaxies))\n self.galaxies.append(galaxy)\n \n def remove_galaxy(self, galaxy):\n if galaxy in self.galaxies:\n del self.galaxies[self.galaxies.index(galaxy)]\n \n def evolve(self, time):\n for galaxy in self.galaxies:\n galaxy.evolve(time)\n galaxy.x_position %= 100\n galaxy.y_position %= 100\n self.check_for_mergers()\n for galaxy in self.removed_galaxies:\n self.remove_galaxy(galaxy)\n for galaxy in self.added_galaxies:\n self.add_galaxy(galaxy)\n self.removed_galaxies = []\n self.added_galaxies = []\n self.time += time\n \n def merge_galaxies(self, galaxy1, galaxy2):\n print('Merging:\\n%s\\n%s' % (galaxy1, galaxy2))\n x_mom1, y_mom1 = galaxy1.calculate_momentum()\n x_mom2, y_mom2 = galaxy2.calculate_momentum()\n new_total_mass = galaxy1.total_mass + galaxy2.total_mass\n new_galaxy = MovingGalaxy(total_mass = new_total_mass,\n cold_gas_mass = galaxy1.cold_gas_mass + galaxy2.cold_gas_mass,\n stellar_mass = galaxy1.stellar_mass + galaxy2.stellar_mass,\n x_position = galaxy1.x_position,\n y_position = galaxy1.y_position,\n x_velocity = (x_mom1 + x_mom2) / new_total_mass,\n y_velocity = (y_mom1 + y_mom2) / new_total_mass,\n idnum = galaxy1.idnum)\n self.added_galaxies.append(new_galaxy)\n self.removed_galaxies.append(galaxy1)\n self.removed_galaxies.append(galaxy2)\n \n def check_for_mergers(self):\n for i, galaxy1 in enumerate(self.galaxies):\n for j, galaxy2 in enumerate(self.galaxies[i+1:]):\n if distance(galaxy1, galaxy2) <= 2:\n self.merge_galaxies(galaxy1, galaxy2)\n \n def plot_state(self, frame_id):\n plt.clf()\n x = [galaxy.x_position for galaxy in self.galaxies]\n y = [galaxy.y_position for galaxy in self.galaxies]\n color = [galaxy.color for galaxy in self.galaxies]\n size = [galaxy.total_mass / 1e9 for galaxy in self.galaxies]\n plt.scatter(x,y, color=color, s=size)\n plt.xlim(uni.xrange)\n plt.ylim(uni.yrange)\n plt.savefig('frame%04i.png' % frame_id)\n\nif __name__ == '__main__':\n uni = Universe()\n n_timesteps = 2e2\n n_galaxies = 25\n for i in range(n_galaxies):\n uni.add_galaxy()\n\n for i in range(int(n_timesteps)):\n uni.evolve(2e9/n_timesteps)\n uni.plot_state(i)", "Problem 3b\nGreat! So how are we going to address this slowdown? Instead of generating one plot at a time, and later packaging them all as a single movie, why not try doing it all at once using the new matplotlib animation module: http://matplotlib.org/api/animation_api.html . See if you can figure out how to do it!" ]
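One hedged sketch of the Problem 3b idea: instead of calling `uni.plot_state(i)` and writing a PNG per timestep, the same evolve loop can be driven from `matplotlib.animation.FuncAnimation`, updating a single scatter artist each frame. The snippet below assumes the `Universe` instance `uni` and `n_timesteps` defined in the cell above are available in the session, and saving to MP4 additionally assumes an `ffmpeg` writer is installed; it is meant as a starting point rather than the only way to solve the problem.

```python
from matplotlib import animation
from matplotlib import pyplot as plt
import numpy as np

fig, ax = plt.subplots()
ax.set_xlim(uni.xrange)
ax.set_ylim(uni.yrange)
scat = ax.scatter([], [])

def update(frame):
    # Advance the universe by one timestep and redraw the galaxies in place.
    uni.evolve(2e9 / n_timesteps)
    xy = np.column_stack([[g.x_position for g in uni.galaxies],
                          [g.y_position for g in uni.galaxies]])
    scat.set_offsets(xy)
    scat.set_color([g.color for g in uni.galaxies])
    scat.set_sizes([g.total_mass / 1e9 for g in uni.galaxies])
    return (scat,)

anim = animation.FuncAnimation(fig, update, frames=int(n_timesteps),
                               interval=50, blit=False)
anim.save('universe.mp4', fps=20)  # assumes ffmpeg is available for writing MP4
```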
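As a side note on the profiling workflow in Problems 2b through 2e above: the same statistics that pyprof2html and snakeviz display can also be inspected directly in Python with the standard-library `pstats` module, which avoids leaving the notebook. The snippet below is a minimal sketch under the assumption that `create_random_list` and `sort_list` from the sort code above are defined in the current session.

```python
import cProfile
import pstats

# Profile the sort in-process and dump the stats to a file, mirroring
# `python -m cProfile -o sort.prof sort.py` from the command line.
cProfile.run('sort_list(create_random_list(10000))', 'sort.prof')

stats = pstats.Stats('sort.prof')
stats.strip_dirs().sort_stats('cumulative').print_stats(10)  # top 10 "hot" entries
```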
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Dima806/udacity-mlnd-capstone
capstone-step1-sensitivity-check-run1.ipynb
apache-2.0
[ "Udacity MLND Capstone Project\n\"Determination of students’ interaction patterns with an intelligent tutoring system and study of their correlation with successful learning\"\nStep 1 (sensitivity check, run 1)", "# Select test_size and random_state for splitting a subset\ntest_size=0.1\nrandom_state=0\n\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport time\nimport gzip\nimport shutil\nimport seaborn as sns\nfrom collections import Counter\n\nfrom sklearn.mixture import GaussianMixture\nfrom sklearn.cluster import KMeans, MeanShift, estimate_bandwidth, AgglomerativeClustering\nfrom sklearn.metrics import silhouette_score #, make_scorer\nfrom sklearn.preprocessing import StandardScaler, MinMaxScaler\nfrom sklearn.neighbors import kneighbors_graph\nfrom sklearn.model_selection import train_test_split", "Do some preprocessing to group the data by 'Anon Stud Id' and extract features for further analysis", "def hdf_fixed_write_compress(df):\n df.to_hdf('data1-step1.hdf','test',mode='w',complib='blosc')\n return\n\ndef hdf_fixed_read_compress():\n df = pd.read_hdf('data.hdf','test')\n return df\n\nwith gzip.open('data1.hdf.gz', 'rb') as f_in, open('data.hdf', 'wb') as f_out:\n shutil.copyfileobj(f_in, f_out)\n\n!ls -lh data.hdf\n\ndata = hdf_fixed_read_compress()\ndata.head()", "Note to reviewers: this algorithm is quite slow (~45 minutes), so you may consider processing a substantial subset of data (e.g. processing 500,000 rows takes only ~1 minute).", "def prepare_stud_data_new(df):\n\n start_time = time.time()\n stud_list = df['Anon Student Id'].unique()\n cols=['num_sess', \\\n 'num_days', \\\n 'num_probs', \\\n 'num_atts', \\\n 'num_hints', \\\n 'frac_corr_atts', \\\n 'frac_3s_atts', \\\n 'frac_1s_hints', \\\n 'time_atts', \\\n 'time_hints', \\\n 'max_probl_views', \\\n 'max_atts']\n \n numbers = []\n #stud_data = pd.DataFrame(columns=cols)\n stud_info_df = pd.DataFrame()\n i = 0\n for stud_name in stud_list:\n stud_info_df = df[df['Anon Student Id'] == stud_name].copy()\n\n # total number of days loading the system\n num_days = len(set(stud_info_df['Day']))\n\n # total number of sessions opened\n num_sessions = len(set(stud_info_df['Session Id']))\n\n # total number of problems entered\n num_problems = len(set(stud_info_df['Problem Name']))\n\n # total number of attempts made by the student \n num_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0].shape[0]\n\n # total number of hints made by the student \n num_hints = stud_info_df[stud_info_df['Student Response Type'] == 1].shape[0]\n\n # fraction of short attemps (with time <= 3 sec)\n if (num_attempts > 0):\n frac_3s_atts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Duration (sec)'] <= 3.0)].shape[0] / num_attempts\n else:\n frac_3s_atts = 0\n\n # fraction of short hints (with time <= 1 sec)\n if (num_hints > 0):\n frac_1s_hints = stud_info_df[(stud_info_df['Student Response Type'] == 1) & (stud_info_df['Duration (sec)'] <= 1.0)].shape[0] / num_hints\n else:\n frac_1s_hints = 0\n\n # fraction of correct attempts\n if (num_attempts > 0):\n fraction_correct_attempts = stud_info_df[(stud_info_df['Student Response Type'] == 0) & (stud_info_df['Outcome'] == 0)].shape[0] / num_attempts\n else:\n fraction_correct_attempts = 0\n\n # total number of time spent for attempts (in seconds)\n total_time_attempts = stud_info_df[stud_info_df['Student Response Type'] == 0]['Duration (sec)'].sum()\n\n # total number of time 
spent for hints (in seconds)\n total_time_hints = stud_info_df[stud_info_df['Student Response Type'] == 1]['Duration (sec)'].sum()\n\n # averaged maximal numbers of 'Problem View'\n avg_max_problem_views = stud_info_df[['Problem Name', 'Problem View']].groupby(['Problem Name']).agg(np.max).mean()[0]\n\n # averaged maximal number of attempts ('x')\n avg_max_attempts = stud_info_df[['Problem Name', 'x']].groupby(['Problem Name']).agg(np.max).mean()[0]\n\n stud_name = i # assign unique numerical ID to each student \n\n if num_attempts != 0:\n avd_time_att = total_time_attempts / num_attempts\n else:\n avg_time_att = 0\n if num_hints != 0:\n avg_time_hint = total_time_hints / num_hints\n else:\n avg_time_hint = 0 \n numbers.append([num_sessions, \\\n num_days, \\\n num_problems, \\\n num_attempts, \\\n num_hints, \\\n fraction_correct_attempts, \\\n frac_3s_atts, \\\n frac_1s_hints, \\\n total_time_attempts, \\\n total_time_hints, \\\n avg_max_problem_views, \\\n avg_max_attempts])\n print(\"\\r\\t>>> Progress\\t:{:.4%}\".format((i + 1)/len(stud_list)), end='')\n i += 1\n stud_data = pd.DataFrame(data=numbers, columns=cols)\n end_time = time.time()\n print(\"\\n\\t>>> Exec. time\\t:{}s\".format(end_time-start_time))\n return stud_data", "Reading from the scratch instead:", "#stud_data = prepare_stud_data_new(data.head(500000).copy())\n#stud_data = prepare_stud_data_new(data.copy())\n\nstud_data = pd.read_hdf('stud_data.hdf','test')", "Making backup for stud_data in HDF5 format:", "#stud_data.to_hdf('stud_data.hdf','test',mode='w',complib='blosc')\n\nstud_data.shape\n\nstud_data.describe()", "Choosing a student subset for a sensitivity check\n(note that this step updates stud_data):", "print(test_size, random_state)\nstud_data_1, stud_data_2 = train_test_split(stud_data, test_size=test_size, random_state=random_state)\n\nstud_data_1.shape[0]/stud_data.shape[0]\n\nstud_data = stud_data_1", "Clustering\nWrite a new clustering algorithm that:\n- starts from stud_data or its subset (with monotonic index)\n- finds a 2-column set with the largest score (using KMeans) and renames it that 0 is the largest group, 1 is the second largest etc.\n- returns index file (with indices 0, 1, ) that could be used for further analysis", "# old name: process_data\ndef transform_data(selected_columns, data):\n '''\n Apply log-transform and MinMaxScaler() to the selected data columns which are not fractions (frac_*)\n \n Parameters\n ==========\n selected_columns : list\n list of columns to leave in processed data\n data : pandas.DataFrame\n data to process (note that data should contain all selected_columns)\n \n Returns\n =======\n log_scaled_data : pandas.DataFrame\n log-transformed and scaled data selected by selected_columns\n '''\n \n data.reset_index(drop=True, inplace=True)\n log_data = data[selected_columns].copy()\n \n skewed = log_data.columns.tolist()\n skewed = [item for item in skewed if not item.startswith('frac_')]\n log_data[skewed] = log_data[skewed].apply(lambda x: np.log10(x + 1))\n\n scaler = MinMaxScaler().fit(log_data)\n log_scaled_data = scaler.transform(log_data)\n log_scaled_data = pd.DataFrame(log_scaled_data, columns=log_data.columns)\n \n return log_scaled_data\n\ndef replace_group_numbers(best_preds):\n '''\n Replace group numbers in best_preds with sorting by group size \n (so that the largest group is 0, the second largest is 1 etc.)\n \n Parameters\n ==========\n best_preds : numpy array\n unsorted array of predictions\n \n Returns\n =======\n best_preds_sorted : numpy array\n sorted 
array of predictions\n '''\n \n pp = pd.DataFrame(best_preds, columns = [\"old_group\"])\n dict_pp = {item[0]: i for i, item in enumerate(Counter(best_preds).most_common())}\n pp['new_group'] = pp['old_group'].replace(dict_pp)\n best_preds_sorted = np.array(pp['new_group'])\n return best_preds_sorted\n\ndef kmeans(log_scaled_data):\n '''\n Apply KMeans clustering algorithm with 2 <= cluster_number <= 6 to log_scaled_data \n (transformed and scaled by transform_data() function)\n \n Parameters\n ==========\n log_scaled_data : pandas.DataFrame\n data log-transormed and MinMaxScaler()-ed for KMeans clustering\n \n Returns\n =======\n best_clusterer : sklearn Model\n clustering algorithm with the largest Silhouette Coefficient\n best_score : float\n the largest value of the Silhouette Coefficient\n best_preds_sorted : numpy.array\n array with clustering predictions for log_scaled_data \n (0 is the largest cluster, 1 is the second largest etc.) \n '''\n \n best_score = 0\n for n_clusters in range(2,6):\n clusterer = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)\n clusterer.fit(log_scaled_data)\n preds = clusterer.predict(log_scaled_data)\n # Calculate the mean silhouette coefficient for the number of clusters chosen\n score = silhouette_score(log_scaled_data, preds)\n if best_score < score:\n best_clusterer = clusterer\n # Predict the cluster for each data point\n best_preds = best_clusterer.predict(log_scaled_data)\n best_score = score\n best_clusters = n_clusters\n best_preds_sorted = replace_group_numbers(best_preds)\n \n return best_clusterer, best_score, best_preds_sorted", "Choose the pair of columns with best score:", "all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', 'num_hints', 'frac_corr_atts', \\\n 'frac_3s_atts', 'frac_1s_hints', 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']\n\ndef choose_pair_columns_kmeans(all_columns, log_scaled_all_data):\n '''\n Selects pair of columns in data that produces clusters with the largest score.\n In this function, only KMeans clustering algorithm is used\n\n Parameters\n ==========\n all_columns : list \n list of columns to look for the pair with the largest score\n log_scaled_data : pandas DataFrame\n properly scaled DataFrame with all columns\n\n Returns\n =======\n best_columns : list\n pair of data columns with the largest score\n best_score : float\n the largest value of the score\n best_clusterer : sklearn Model\n clustering algorithm with the largest score\n best_preds : numpy.array\n array with clustering predictions for log_scaled_data \n (0 is the largest cluster, 1 is the second largest etc.) 
\n '''\n \n best_score = 0\n best_columns = []\n j = 0\n l = len(all_columns)\n num_pairs = (l-1)*l/2\n for column in all_columns:\n selected_columns = [column]\n \n columns_to_add = [a for a in all_columns if (a not in selected_columns)]\n for column1 in columns_to_add:\n if all_columns.index(column) < all_columns.index(column1):\n selected_columns = [column, column1]\n print(\"\\r\\t>>> Progress\\t:{:.4%}\".format((j+1)/num_pairs), end='')\n j += 1 \n #log_scaled_data = transform_data(selected_columns, stud_data)\n clusterer, score, preds = kmeans(log_scaled_all_data[selected_columns])\n if score > best_score:\n best_score = score\n best_clusterer = clusterer\n best_preds = preds\n best_columns = selected_columns.copy()\n \n return best_columns, best_score, best_clusterer, best_preds\n\nstart_time = time.time()\nlog_scaled_all_data = transform_data(all_columns, stud_data)\n\n# consider skipping the step below because it takes some time (~5 minutes)\nbest_columns, best_kmeans_score, best_kmeans_clusterer, best_kmeans_preds = choose_pair_columns_kmeans(all_columns, log_scaled_all_data)\n\n# Instead run it single time (6 seconds only)\n#best_columns = ['frac_1s_hints', 'max_probl_views']\n#best_kmeans_clusterer, best_kmeans_score, best_kmeans_preds = kmeans(log_scaled_all_data[best_columns]) \n\nend_time = time.time()\nprint(\"\\n\\t>>> Exec. time\\t:{}s\".format(end_time-start_time))\nprint(\"\\t>>> Best pair of cols:\", best_columns)\nprint(\"\\t>>> Best score:\", best_kmeans_score)\nprint(\"\\t>>> Best clusterer:\", best_kmeans_clusterer)\nprint(\"\\t>>> Best preds:\", best_kmeans_preds)\n\ndef preds_to_indices(preds): # gives array and returns array of indices with 1s\n new_list = []\n for i, val in enumerate(preds):\n if val == 1:\n new_list.append(i)\n return np.array(new_list)", "Visualising the KMeans clusters:", "log_scaled_all_data.describe()\n\nbest_kmeans_preds_mask = preds_to_indices(best_kmeans_preds)\nlog_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]\nlog_scaled_all_data_kmeans_1 = log_scaled_all_data.copy()[log_scaled_all_data.index.isin(best_kmeans_preds_mask)]\nplt.scatter(log_scaled_all_data_kmeans_0['frac_1s_hints'], \\\n log_scaled_all_data_kmeans_0['max_probl_views'], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_all_data_kmeans_1['frac_1s_hints'], \\\n log_scaled_all_data_kmeans_1['max_probl_views'], \\\n alpha=0.6, s=15, c='grey')\nplt.xlim([0.0, 0.6])\nplt.ylim([0.0, 0.4])\nplt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')\nplt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel('frac_1s_hints', size=14)\nax.set_ylabel('max_probl_views', size=14)\nplt.plot((0.14, 0.14), (0.001, 0.399), 'k--', c='blue')\nplt.show()\n\nprint(log_scaled_all_data_kmeans_0.shape, log_scaled_all_data_kmeans_1.shape)", "Then, consider adding one more column to further increase the score:", "def cols_iterate_kmeans(selected_columns, best_score, best_clusterer, best_preds):\n\n all_columns = ['num_sess', 'num_days', 'num_probs', 'num_atts', \\\n 'num_hints', 'frac_corr_atts', 'frac_3s_atts', 'frac_1s_hints', \\\n 'time_atts', 'time_hints', 'max_probl_views', 'max_atts']\n\n columns_to_add = [a for a in all_columns if (a not in selected_columns)]\n #print(columns_to_add)\n for column in columns_to_add:\n print(\"*\"*40)\n print(\"*** Trying to add column\", column)\n print(\"*\"*40)\n selected_columns.append(column)\n 
log_scaled_data = transform_data(selected_columns, stud_data)\n clusterer, score, preds = kmeans(log_scaled_data)\n if score > best_score:\n print(\"!!! Success !!!\")\n best_score = score\n best_clusterer = clusterer\n best_preds = preds\n print(\"!!! New score is\", best_score)\n print(\"!!! New best clusterer is\", best_clusterer)\n print(\"!!! New best selected_columns are\", selected_columns)\n columns_to_add.remove(column)\n else:\n print(\"!!! Last score is equal or worse then our best one\")\n print(\"!!! According to Occam's razor, remove the column\", column)\n selected_columns.remove(column)\n print(\"!!! Still the best selected columns are\", selected_columns)\n return selected_columns, best_score, best_clusterer, best_preds\n\n# Just skip this step, it does not give new results:\n\nkmeans_clusterer = best_kmeans_clusterer\nkmeans_score = best_kmeans_score\nkmeans_preds = best_kmeans_preds\n\nselected_columns = best_columns # ['frac_1s_hints', 'max_probl_views']\nnew_columns, new_kmeans_score, new_kmeans_clusterer, new_kmeans_preds = cols_iterate_kmeans(selected_columns, kmeans_score, kmeans_clusterer, kmeans_preds)\nif new_kmeans_score > kmeans_score:\n print(\"+++ SUCCESS\")\n selected_columns = new_columns\n best_kmeans_score = new_kmeans_score\n best_kmeans_clusterer = new_kmeans_clusterer\n best_kmeans_preds = new_kmeans_preds\nelse:\n print(\"--- GIVE UP\")", "As expected, the pair ['frac_1s_hints', 'max_probl_views'] still gives the best score.\nNow, trying with different clusterers.\nMeanShift:", "def largest_cluster_fraction(preds):\n '''\n calculates the fraction of students that are in the largest group\n \n Parameters\n ==========\n preds : list\n list of predictions\n \n Returns\n =======\n fraction : float\n largest fraction of students\n best_i : integer\n number of the largest group\n '''\n \n fraction = 0\n ll = len(preds)\n for i in np.unique(preds):\n frac = len(preds[preds == i])/ll\n if frac > fraction:\n fraction = frac\n best_i = i\n return fraction, best_i\n\n# Rewrite similar to kmeans procedure !!!\n\ndef meanshift(log_scaled_data):\n '''\n Apply MeanShift clustering algorithm to log_scaled_data\n (transformed and scaled by transform_data() function)\n Number of clusters is selected according to estimate_badwidth procedure\n with quantiles in np.linspace(0.01, 0.99, 99)\n \n \n Parameters\n ==========\n log_scaled_data : pandas.DataFrame\n data log-transormed and MinMaxScaler()-ed for KMeans clustering\n \n Returns\n =======\n best_clusterer : sklearn Model\n clustering algorithm with the largest Silhouette Coefficient\n best_score : float\n the largest value of the Silhouette Coefficient\n best_preds_sorted : numpy.array\n array with clustering predictions for log_scaled_data \n (0 is the largest cluster, 1 is the second largest etc.) 
\n cluster_frac : float\n fraction of students inside the largest group\n '''\n\n start_time = time.time()\n best_score = 0\n best_cluster_frac = 0\n for alpha in np.linspace(0.01, 0.99, 99):\n bandwidth = estimate_bandwidth(log_scaled_data, quantile=alpha, n_samples=None, random_state=0)\n\n clusterer = MeanShift(bandwidth=bandwidth, bin_seeding=True)\n clusterer.fit(log_scaled_data)\n\n preds = clusterer.fit_predict(log_scaled_data)\n cluster_frac = largest_cluster_fraction(preds)[0]\n # Calculate the mean silhouette coefficient for the number of clusters chosen\n try: \n score = silhouette_score(log_scaled_data, preds)\n except ValueError:\n score = 0\n print(alpha, clusterer.cluster_centers_.shape[0], score, cluster_frac)\n # setting cluster_frac > 0.85, the value obtained in KMeans algorithm for ['frac_1s_hints', 'max_probl_views']\n if (best_score < score) and (cluster_frac < 0.85):\n best_clusterer = clusterer\n best_preds = preds\n best_score = score\n best_clusters = clusterer.cluster_centers_.shape[0]\n best_cluster_frac = cluster_frac\n print('*'*68)\n print(\"Our best model has\", best_clusters, \"clusters and sihlouette is\", best_score)\n end_time = time.time()\n print(\"Running time is {}s\".format(end_time-start_time))\n print('>'*68)\n best_preds_sorted = replace_group_numbers(best_preds)\n cluster_frac = best_cluster_frac\n \n return best_clusterer, best_score, best_preds_sorted, cluster_frac\n\n# Rinning MeanShift is too slow: runs about 9 min for 1 pair, \n# and produces too bad results (largest score = 0.56 for reasonable max_fractions < 0.85)\n\nstart_time = time.time()\nlog_scaled_data = transform_data(best_columns, stud_data)\nbest_meanshift_clusterer, best_meanshift_score, best_meanshift_preds, _ = meanshift(log_scaled_data)\nprint(best_meanshift_clusterer, best_meanshift_score, best_meanshift_preds)\nend_time = time.time()\nprint(\"Running time is {}s\".format(end_time-start_time))", "GaussianMixture:", "def gaussmix(log_scaled_data): # GaussianMixture\n start_time = time.time()\n max_score = 0\n for n_clusters in range(2,6):\n\n clusterer = GaussianMixture(random_state=0, n_init=50, n_components=n_clusters).fit(log_scaled_data)\n\n preds = clusterer.predict(log_scaled_data)\n # Calculate the mean silhouette coefficient for the number of clusters chosen\n score = silhouette_score(log_scaled_data, preds)\n print(\"For our model with\", clusterer.n_components, \"clusters, the sihlouette score is\", score)\n if max_score < score:\n best_clusterer = clusterer\n # Predict the cluster for each data point\n best_preds = best_clusterer.predict(log_scaled_data)\n max_score = score\n best_clusters = n_clusters\n print('*'*68)\n print(\"Our best model has\", best_clusters, \"clusters and sihlouette is\", max_score)\n end_time = time.time()\n print(\"Running time is {}s\".format(end_time-start_time))\n print('>'*68)\n best_preds_sorted = replace_group_numbers(best_preds)\n return best_clusterer, max_score, best_preds_sorted\n\ndef run_clustering_gaussmix(log_scaled_data):\n best_score = 0\n print(\">>> GaussianMixture:\")\n clusterer, score, preds = gaussmix(log_scaled_data)\n if score > best_score:\n best_clusterer = clusterer\n best_score = score\n best_preds = preds\n print(\"Best clusterer is\", best_clusterer)\n print(\"Max score is\", best_score)\n print(\"Best preds is\", best_preds)\n return best_clusterer, best_score, best_preds\n\n# ~0.6 min running time but very small score (~0.39)\nstart_time = time.time()\nlog_scaled_data = transform_data(best_columns, 
stud_data)\ngaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds = run_clustering_gaussmix(log_scaled_data)\nprint(gaussmix_best_clusterer, gaussmix_best_score, gaussmix_best_preds)\nend_time = time.time()\nprint(\"Running time is {}s\".format(end_time-start_time))", "AgglomerativeClustering:", "def agglom(log_scaled_data): # AgglomerativeClustering with 'ward' connectivity\n start_time = time.time()\n max_score = 0\n for n_clusters in range(2,3): # use only 2 clusters\n connectivity = kneighbors_graph(log_scaled_data, n_neighbors=100, include_self=False)\n # make connectivity symmetric\n connectivity = 0.5 * (connectivity + connectivity.T)\n clusterer = AgglomerativeClustering(n_clusters=n_clusters, \\\n linkage='ward', \\\n connectivity=connectivity)\n\n preds = clusterer.fit_predict(log_scaled_data)\n # Calculate the mean silhouette coefficient for the number of clusters chosen\n score = silhouette_score(log_scaled_data, preds)\n print(\"For our model with\", clusterer.n_clusters, \"clusters, and the sihlouette score is\", score)\n if max_score < score:\n best_clusterer = clusterer\n # Predict the cluster for each data point\n best_preds = preds\n max_score = score\n best_clusters = n_clusters\n print('*'*68)\n print(\"Our best model has\", best_clusters, \"clusters and sihlouette is\", max_score)\n end_time = time.time()\n print(\"Running time is {}s\".format(end_time-start_time))\n print('>'*68)\n best_preds_sorted = replace_group_numbers(best_preds)\n return best_clusterer, max_score, best_preds_sorted\n\ndef run_clustering_agglom(log_scaled_data):\n best_score = 0\n print(\">>> AgglomerativeClustering:\")\n clusterer, score, preds = agglom(log_scaled_data)\n if score > best_score:\n best_clusterer = clusterer\n best_score = score\n best_preds = preds\n print(\"Best clusterer is\", best_clusterer)\n print(\"Max score is\", best_score)\n print(\"Best preds is\", best_preds)\n return best_clusterer, best_score, best_preds\n\n# Gives results very similar to KMeans but takes ~4 times more running time\nstart_time = time.time()\nlog_scaled_data = transform_data(best_columns, stud_data)\nbest_agglom_clusterer, best_agglom_score, best_agglom_preds = run_clustering_agglom(log_scaled_data)\nprint(best_agglom_clusterer, best_agglom_score, best_agglom_preds)\nend_time = time.time()\nprint(\"Running time is {}s\".format(end_time-start_time))", "Visualising the AgglomerativeClustering clusters:", "best_agglom_preds_mask = preds_to_indices(best_agglom_preds)\nlog_scaled_data_agglom_0 = log_scaled_data.copy()[~log_scaled_data.index.isin(best_agglom_preds_mask)]\nlog_scaled_data_agglom_1 = log_scaled_data.copy()[log_scaled_data.index.isin(best_agglom_preds_mask)]\nplt.scatter(log_scaled_data_agglom_0['frac_1s_hints'], \\\n log_scaled_data_agglom_0['max_probl_views'], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_data_agglom_1['frac_1s_hints'], \\\n log_scaled_data_agglom_1['max_probl_views'], \\\n alpha=0.6, s=15, c='grey')\nplt.xlim([0.0, 0.6])\nplt.ylim([0.0, 0.4])\nplt.figtext(x=0.64, y=0.56, s='Group 1', ha='center', size=14, color='black')\nplt.figtext(x=0.20, y=0.19, s='Group 0', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel('frac_1s_hints', size=14)\nax.set_ylabel('max_probl_views', size=14)\n#plt.plot((0.145, 0.145), (0.001, 0.399), 'k--', c='blue')\nplt.show()", "Further clustering of obtained KMeans groups:\nI start from group 0 that contains 6934 students:", "best_kmeans_preds_mask = 
preds_to_indices(best_kmeans_preds)\nlog_scaled_all_data_kmeans_0 = log_scaled_all_data.copy()[~log_scaled_all_data.index.isin(best_kmeans_preds_mask)]\n\n# In this particular splitting, take drop=False to save the initial index\n# (simplifying students recovery for step 2)\nlog_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=False)\n\nlog_scaled_all_data_kmeans_0.index\n\nstart_time = time.time()\n\nbest_kmeans_columns_0, \\\nbest_kmeans_score_0, \\\nbest_kmeans_clusterer_0, \\\nbest_kmeans_preds_0 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0)\n\n# best_kmeans_columns_0 = ['frac_3s_atts', 'max_probl_views']\n# best_kmeans_clusterer_0, best_kmeans_score_0, best_kmeans_preds_0 = kmeans(log_scaled_all_data_kmeans_0[best_kmeans_columns_0]) \n\nend_time = time.time()\nprint(\"\\n\\t>>> Exec. time\\t:{}s\".format(end_time-start_time))\nprint(\"\\t>>> Best pair of cols:\", best_kmeans_columns_0)\nprint(\"\\t>>> Best score:\", best_kmeans_score_0)\nprint(\"\\t>>> Best clusterer:\", best_kmeans_clusterer_0)\nprint(\"\\t>>> Best preds:\", best_kmeans_preds_0)\n\nprint(sum(best_kmeans_preds_0), len(best_kmeans_preds_0), len(best_kmeans_preds_0[best_kmeans_preds_0 == 0]))\n\nlog_scaled_all_data_kmeans_0.reset_index(inplace=True, drop=True)", "Visualise obtained clusters:", "best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0)\n\nlog_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]\n\nlog_scaled_all_data_kmeans_01 = log_scaled_all_data_kmeans_0.copy()[log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]\n\nplt.scatter(log_scaled_all_data_kmeans_00[best_kmeans_columns_0[0]], \\\n log_scaled_all_data_kmeans_00[best_kmeans_columns_0[1]], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_all_data_kmeans_01[best_kmeans_columns_0[0]], \\\n log_scaled_all_data_kmeans_01[best_kmeans_columns_0[1]], \\\n alpha=0.6, s=15, c='grey')\n# plt.xlim([0.0, 0.6])\n# plt.ylim([0.0, 0.4])\n# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')\n# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel(best_kmeans_columns_0[0], size=14)\nax.set_ylabel(best_kmeans_columns_0[1], size=14)\n#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')\nplt.show()", "As we see, group 01 contains more students with \"gaming\" behaviour, so I proceed with group 00:", "len(best_kmeans_preds_0)\n\n#best_kmeans_preds_mask_0 = preds_to_indices(best_kmeans_preds_0) # already implemented during group0 visualisation\nlog_scaled_all_data_kmeans_00 = log_scaled_all_data_kmeans_0.copy()[~log_scaled_all_data_kmeans_0.index.isin(best_kmeans_preds_mask_0)]\n\nlog_scaled_all_data_kmeans_00.reset_index(inplace=True, drop=True)\n\nlog_scaled_all_data_kmeans_00.index\n\nstart_time = time.time()\n\nbest_kmeans_columns_00, \\\nbest_kmeans_score_00, \\\nbest_kmeans_clusterer_00, \\\nbest_kmeans_preds_00 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_00)\n\n# best_kmeans_columns_00 = ['frac_3s_atts', 'time_hints']\n# best_kmeans_clusterer_00, \\\n# best_kmeans_score_00, \\\n# best_kmeans_preds_00 = kmeans(log_scaled_all_data_kmeans_00[best_kmeans_columns_00]) \n\n\nend_time = time.time()\nprint(\"\\n\\t>>> Exec. 
time\\t:{}s\".format(end_time-start_time))\nprint(\"\\t>>> Best pair of cols:\", best_kmeans_columns_00)\nprint(\"\\t>>> Best score:\", best_kmeans_score_00)\nprint(\"\\t>>> Best clusterer:\", best_kmeans_clusterer_00)\nprint(\"\\t>>> Best preds:\", best_kmeans_preds_00)\n\nprint(sum(best_kmeans_preds_00), len(best_kmeans_preds_00), len(best_kmeans_preds_00[best_kmeans_preds_00 == 0]))\n\nbest_kmeans_preds_mask_00 = preds_to_indices(best_kmeans_preds_00)\n\nlog_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]\n\nlog_scaled_all_data_kmeans_001 = log_scaled_all_data_kmeans_00.copy()[log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]\n\nplt.scatter(log_scaled_all_data_kmeans_000[best_kmeans_columns_00[0]], \\\n log_scaled_all_data_kmeans_000[best_kmeans_columns_00[1]], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_all_data_kmeans_001[best_kmeans_columns_00[0]], \\\n log_scaled_all_data_kmeans_001[best_kmeans_columns_00[1]], \\\n alpha=0.6, s=15, c='grey')\n# plt.xlim([0.0, 0.6])\n# plt.ylim([0.0, 0.4])\n# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')\n# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel(best_kmeans_columns_00[0], size=14)\nax.set_ylabel(best_kmeans_columns_00[1], size=14)\n#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')\nplt.show()", "So, there is a subgroup 001 of 1001 students that do not use many hints. What about the rest (000, 5482 students)?", "log_scaled_all_data_kmeans_000 = log_scaled_all_data_kmeans_00.copy()[~log_scaled_all_data_kmeans_00.index.isin(best_kmeans_preds_mask_00)]\n\nlog_scaled_all_data_kmeans_000.reset_index(inplace=True, drop=True)\n\nlog_scaled_all_data_kmeans_000.index\n\nstart_time = time.time()\n\nbest_kmeans_columns_000, \\\nbest_kmeans_score_000, \\\nbest_kmeans_clusterer_000, \\\nbest_kmeans_preds_000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_000)\n\n# best_kmeans_columns_000 = ['num_sess', 'num_probs']\n# best_kmeans_clusterer_000, \\\n# best_kmeans_score_000, \\\n# best_kmeans_preds_000 = kmeans(log_scaled_all_data_kmeans_000[best_kmeans_columns_000]) \n\nend_time = time.time()\nprint(\"\\n\\t>>> Exec. 
time\\t:{}s\".format(end_time-start_time))\nprint(\"\\t>>> Best pair of cols:\", best_kmeans_columns_000)\nprint(\"\\t>>> Best score:\", best_kmeans_score_000)\nprint(\"\\t>>> Best clusterer:\", best_kmeans_clusterer_000)\nprint(\"\\t>>> Best preds:\", best_kmeans_preds_000)\n\nprint(sum(best_kmeans_preds_000), len(best_kmeans_preds_000), len(best_kmeans_preds_000[best_kmeans_preds_000 == 0]))\n\nbest_kmeans_preds_mask_000 = preds_to_indices(best_kmeans_preds_000)\n\nlog_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]\n\nlog_scaled_all_data_kmeans_0001 = log_scaled_all_data_kmeans_000.copy()[log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]\n\nplt.scatter(log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[0]], \\\n log_scaled_all_data_kmeans_0000[best_kmeans_columns_000[1]], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[0]], \\\n log_scaled_all_data_kmeans_0001[best_kmeans_columns_000[1]], \\\n alpha=0.6, s=15, c='grey')\n# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')\n# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel(best_kmeans_columns_000[0], size=14)\nax.set_ylabel(best_kmeans_columns_000[1], size=14)\n#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')\nplt.show()", "Splitting group 0000 (students with large 'num_sess' and 'num_probs')", "log_scaled_all_data_kmeans_0000 = log_scaled_all_data_kmeans_000.copy()[~log_scaled_all_data_kmeans_000.index.isin(best_kmeans_preds_mask_000)]\n\nlog_scaled_all_data_kmeans_0000.reset_index(inplace=True, drop=True)\n\nlog_scaled_all_data_kmeans_0000.index\n\nstart_time = time.time()\n\nbest_kmeans_columns_0000, \\\nbest_kmeans_score_0000, \\\nbest_kmeans_clusterer_0000, \\\nbest_kmeans_preds_0000 = choose_pair_columns_kmeans(all_columns, log_scaled_all_data_kmeans_0000)\n\n# best_kmeans_columns_0000 = ['num_sess', 'num_probs']\n# best_kmeans_clusterer_0000, \\\n# best_kmeans_score_0000, \\\n# best_kmeans_preds_0000 = kmeans(log_scaled_all_data_kmeans_0000[best_kmeans_columns_0000]) \n\nend_time = time.time()\nprint(\"\\n\\t>>> Exec. 
time\\t:{}s\".format(end_time-start_time))\nprint(\"\\t>>> Best pair of cols:\", best_kmeans_columns_0000)\nprint(\"\\t>>> Best score:\", best_kmeans_score_0000)\nprint(\"\\t>>> Best clusterer:\", best_kmeans_clusterer_0000)\nprint(\"\\t>>> Best preds:\", best_kmeans_preds_0000)\n\nprint(sum(best_kmeans_preds_0000), \\\n len(best_kmeans_preds_0000), \\\n len(best_kmeans_preds_0000[best_kmeans_preds_0000 == 0]))\n\nbest_kmeans_preds_mask_0000 = preds_to_indices(best_kmeans_preds_0000)\n\nlog_scaled_all_data_kmeans_00000 = log_scaled_all_data_kmeans_0000.copy()[~log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]\n\nlog_scaled_all_data_kmeans_00001 = log_scaled_all_data_kmeans_0000.copy()[log_scaled_all_data_kmeans_0000.index.isin(best_kmeans_preds_mask_0000)]\n\nplt.scatter(log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[0]], \\\n log_scaled_all_data_kmeans_00000[best_kmeans_columns_0000[1]], \\\n alpha=0.6, s=15, c='lightgreen')\nplt.scatter(log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[0]], \\\n log_scaled_all_data_kmeans_00001[best_kmeans_columns_0000[1]], \\\n alpha=0.6, s=15, c='grey')\n# plt.xlim([0.0, 0.6])\n# plt.ylim([0.0, 0.4])\n# plt.figtext(x=0.64, y=0.56, s='Group 01', ha='center', size=14, color='black')\n# plt.figtext(x=0.20, y=0.69, s='Group 00', ha='center', size=14, color='darkgreen')\nax = plt.gca()\nax.set_xlabel(best_kmeans_columns_0000[0], size=14)\nax.set_ylabel(best_kmeans_columns_0000[1], size=14)\n#plt.plot((0.13, 0.13), (0.001, 0.499), 'k--', c='blue')\nplt.show()", "As we see, these two groups represent students with \"intermediate experience\" (00000) and \"largest experience\" (00001).\nDuring this sensitivity check, I splitted 8082 students (90% of ASSISTments students) into 6 different groups:\n- group 1, 1148 students with large 'frac_1s_hints' (\"gaming\" behaviour);\n- group 2, 451 students with small 'frac_1s_hints' and large 'frac_3s_atts' (\"gaming\" behaviour);\n- group 3, 1001 students with small 'time_hints' (\"non-gaming\" behaviour, small usage of hints);\n- group 4, 2151 students with small 'num_sess' and 'num_probs' (\"non-gaming\" behaviour, large usage of hints, small experience);\n- group 5, 1734 students with medium 'num_sess' and 'num_probs' (\"non-gaming\" behaviour, large usage of hints, medium experience);\n- group 6, 1597 students with large 'num_sess' and 'num_probs' (\"non-gaming\" behaviour, large usage of hints, large experience).\nThe final result of this step is the joint cluster index that contains numbers 1-6 for each student:", "group1_index = np.array(log_scaled_all_data_kmeans_1.index)\nlen(group1_index)\n\ngroup2_index = np.array(log_scaled_all_data_kmeans_01['index'])\nlen(group2_index)\n\ngroup3_index = np.array(log_scaled_all_data_kmeans_001['index'])\nlen(group3_index)\n\ngroup4_index = np.array(log_scaled_all_data_kmeans_0001['index'])\nlen(group4_index)\n\ngroup5_index = np.array(log_scaled_all_data_kmeans_00000['index'])\nlen(group5_index)\n\ngroup6_index = np.array(log_scaled_all_data_kmeans_00001['index'])\nlen(group6_index)\n\ndef create_joint_cluster_index():\n '''\n Saves group index files into cluster_index.csv for further analysis\n '''\n \n cluster_index_lst = []\n for i in range(len(stud_data)+1):\n if i in group1_index:\n cluster_index_lst.append(1)\n elif i in group2_index:\n cluster_index_lst.append(2)\n elif i in group3_index:\n cluster_index_lst.append(3)\n elif i in group4_index:\n cluster_index_lst.append(4)\n elif i in group5_index:\n 
cluster_index_lst.append(5)\n elif i in group6_index:\n cluster_index_lst.append(6)\n\n print(Counter(cluster_index_lst))\n cluster_index = pd.Series(cluster_index_lst, dtype=int)\n cluster_index.to_csv('cluster_index_run1.csv')\n return \n\ncreate_joint_cluster_index()\n\n! ls -lh cluster_index_run1.csv" ]
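To make the sensitivity check itself easy to quantify, the saved index can be read back and the group proportions from this 90% run compared against another run. The sketch below is one possible way to do that: the file name `cluster_index_run1.csv` comes from the cell above, the baseline file name is purely hypothetical, and `header=None` reflects the assumption that this pandas version's `Series.to_csv` wrote no header row.

```python
import pandas as pd

# Reload the per-student group labels written above (two columns: index, group).
run1 = pd.read_csv('cluster_index_run1.csv', header=None,
                   names=['student', 'group'], index_col='student')

# Fraction of students in each of the 6 groups for this run.
run1_fracs = run1['group'].value_counts(normalize=True).sort_index()
print(run1_fracs)

# Hypothetical comparison against a baseline run saved the same way (file name is an assumption):
# base = pd.read_csv('cluster_index_full.csv', header=None,
#                    names=['student', 'group'], index_col='student')
# base_fracs = base['group'].value_counts(normalize=True).sort_index()
# print((run1_fracs - base_fracs).abs())
```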
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fevangelista/pyWicked
examples/forte/spinorbital-CCSDT.ipynb
mit
[ "CCSDT theory for a closed-shell reference\nIn this notebook we will use wicked to generate equations for the CCSDT method", "import wicked as w\n\nimport psi4\nimport forte\nimport forte.utils\nfrom forte import forte_options\nimport numpy as np\nimport time\n\nw.reset_space()\nw.add_space(\"o\", \"fermion\", \"occupied\", [\"i\", \"j\", \"k\", \"l\", \"m\", \"n\"])\nw.add_space(\"v\", \"fermion\", \"unoccupied\", [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"])\n\nTop = w.op(\"T\", [\"v+ o\", \"v+ v+ o o\", \"v+ v+ v+ o o o\"])\nHop = w.utils.gen_op(\"H\",1,\"ov\",\"ov\") + w.utils.gen_op(\"H\",2,\"ov\",\"ov\")\n\nwt = w.WickTheorem()\nHbar = w.bch_series(Hop,Top,4)\nexpr = wt.contract(w.rational(1), Hbar, 0, 6)\nmbeq = expr.to_manybody_equation(\"R\")\n\ndef generate_equation(mbeq, nocc, nvir):\n res_sym = f\"R{'o' * nocc}{'v' * nvir}\"\n code = [f\"def evaluate_residual_{nocc}_{nvir}(H,T):\",\n \" # contributions to the residual\"]\n if nocc + nvir == 0:\n code.append(\" R = 0.0\")\n else:\n dims = ','.join(['nocc'] * nocc + ['nvir'] * nvir)\n code.append(f\" {res_sym} = np.zeros(({dims}))\")\n for eq in mbeq[\"o\" * nocc + \"|\" + \"v\" * nvir]:\n contraction = eq.compile(\"einsum\")\n code.append(f' {contraction}')\n code.append(f' return {res_sym}')\n funct = '\\n'.join(code)\n exec(funct)\n print(f'\\n\\n{funct}\\n')\n return funct\n\nenergy_eq = generate_equation(mbeq, 0,0)\nexec(energy_eq)\nt1_eq = generate_equation(mbeq, 1,1)\nexec(t1_eq)\nt2_eq = generate_equation(mbeq, 2,2)\nexec(t2_eq)\nt3_eq = generate_equation(mbeq, 3,3)\nexec(t3_eq)", "```python\ndef evaluate_residual_0_0(H,T):\n # contributions to the residual\n R = 0.0\n R += 1.000000000 * np.einsum(\"ai,ia->\",H[\"vo\"],T[\"ov\"])\n R += 0.500000000 * np.einsum(\"abij,ia,jb->\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"])\n R += 0.250000000 * np.einsum(\"abij,ijab->\",H[\"vvoo\"],T[\"oovv\"])\n return R\ndef evaluate_residual_1_1(H,T):\n # contributions to the residual\n Rov = np.zeros((nocc,nvir))\n Rov += 1.000000000 * np.einsum(\"ba,ib->ia\",H[\"vv\"],T[\"ov\"])\n Rov += 1.000000000 * np.einsum(\"ia->ia\",H[\"ov\"])\n Rov += -1.000000000 * np.einsum(\"bj,ja,ib->ia\",H[\"vo\"],T[\"ov\"],T[\"ov\"])\n Rov += 1.000000000 * np.einsum(\"bj,ijab->ia\",H[\"vo\"],T[\"oovv\"])\n Rov += -1.000000000 * np.einsum(\"ij,ja->ia\",H[\"oo\"],T[\"ov\"])\n Rov += 1.000000000 * np.einsum(\"bcja,ic,jb->ia\",H[\"vvov\"],T[\"ov\"],T[\"ov\"])\n Rov += -0.500000000 * np.einsum(\"bcja,ijbc->ia\",H[\"vvov\"],T[\"oovv\"])\n Rov += -1.000000000 * np.einsum(\"ibja,jb->ia\",H[\"ovov\"],T[\"ov\"])\n Rov += 1.000000000 * np.einsum(\"bcjk,kc,ijab->ia\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"])\n Rov += 0.500000000 * np.einsum(\"bcjk,ic,jkab->ia\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"])\n Rov += -1.000000000 * np.einsum(\"bcjk,ka,ic,jb->ia\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ov\"])\n Rov += 0.500000000 * np.einsum(\"bcjk,ka,ijbc->ia\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"])\n Rov += 0.250000000 * np.einsum(\"bcjk,ijkabc->ia\",H[\"vvoo\"],T[\"ooovvv\"])\n Rov += 1.000000000 * np.einsum(\"ibjk,ka,jb->ia\",H[\"ovoo\"],T[\"ov\"],T[\"ov\"])\n Rov += -0.500000000 * np.einsum(\"ibjk,jkab->ia\",H[\"ovoo\"],T[\"oovv\"])\n return Rov\ndef evaluate_residual_2_2(H,T):\n # contributions to the residual\n Roovv = np.zeros((nocc,nocc,nvir,nvir))\n Roovv += -2.000000000 * np.einsum(\"ca,ijbc->ijab\",H[\"vv\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"cdab,ic,jd->ijab\",H[\"vvvv\"],T[\"ov\"],T[\"ov\"])\n Roovv += 0.500000000 * np.einsum(\"cdab,ijcd->ijab\",H[\"vvvv\"],T[\"oovv\"])\n Roovv 
+= 2.000000000 * np.einsum(\"icab,jc->ijab\",H[\"ovvv\"],T[\"ov\"])\n Roovv += 1.000000000 * np.einsum(\"ijab->ijab\",H[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"ck,ic,jkab->ijab\",H[\"vo\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"ck,ka,ijbc->ijab\",H[\"vo\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"ck,ijkabc->ijab\",H[\"vo\"],T[\"ooovvv\"])\n Roovv += 2.000000000 * np.einsum(\"ik,jkab->ijab\",H[\"oo\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"cdka,kd,ijbc->ijab\",H[\"vvov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 4.000000000 * np.einsum(\"cdka,id,jkbc->ijab\",H[\"vvov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"cdka,kb,ic,jd->ijab\",H[\"vvov\"],T[\"ov\"],T[\"ov\"],T[\"ov\"])\n Roovv += 1.000000000 * np.einsum(\"cdka,kb,ijcd->ijab\",H[\"vvov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"cdka,ijkbcd->ijab\",H[\"vvov\"],T[\"ooovvv\"])\n Roovv += 4.000000000 * np.einsum(\"icka,kb,jc->ijab\",H[\"ovov\"],T[\"ov\"],T[\"ov\"])\n Roovv += -4.000000000 * np.einsum(\"icka,jkbc->ijab\",H[\"ovov\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"ijka,kb->ijab\",H[\"ooov\"],T[\"ov\"])\n Roovv += 1.000000000 * np.einsum(\"cdkl,ld,ijkabc->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ooovvv\"])\n Roovv += -2.000000000 * np.einsum(\"cdkl,id,lc,jkab->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += -1.000000000 * np.einsum(\"cdkl,id,jklabc->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ooovvv\"])\n Roovv += 0.500000000 * np.einsum(\"cdkl,ic,jd,klab->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += -2.000000000 * np.einsum(\"cdkl,la,kd,ijbc->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += -4.000000000 * np.einsum(\"cdkl,la,id,jkbc->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += -1.000000000 * np.einsum(\"cdkl,la,ijkbcd->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ooovvv\"])\n Roovv += 1.000000000 * np.einsum(\"cdkl,ka,lb,ic,jd->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ov\"],T[\"ov\"])\n Roovv += 0.500000000 * np.einsum(\"cdkl,ka,lb,ijcd->ijab\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"cdkl,ijad,klbc->ijab\",H[\"vvoo\"],T[\"oovv\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"cdkl,ikac,jlbd->ijab\",H[\"vvoo\"],T[\"oovv\"],T[\"oovv\"])\n Roovv += 0.250000000 * np.einsum(\"cdkl,klab,ijcd->ijab\",H[\"vvoo\"],T[\"oovv\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"cdkl,ilab,jkcd->ijab\",H[\"vvoo\"],T[\"oovv\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"ickl,lc,jkab->ijab\",H[\"ovoo\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 1.000000000 * np.einsum(\"ickl,jc,klab->ijab\",H[\"ovoo\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 4.000000000 * np.einsum(\"ickl,la,jkbc->ijab\",H[\"ovoo\"],T[\"ov\"],T[\"oovv\"])\n Roovv += 2.000000000 * np.einsum(\"ickl,ka,lb,jc->ijab\",H[\"ovoo\"],T[\"ov\"],T[\"ov\"],T[\"ov\"])\n Roovv += 1.000000000 * np.einsum(\"ickl,jklabc->ijab\",H[\"ovoo\"],T[\"ooovvv\"])\n Roovv += 1.000000000 * np.einsum(\"ijkl,ka,lb->ijab\",H[\"oooo\"],T[\"ov\"],T[\"ov\"])\n Roovv += 0.500000000 * np.einsum(\"ijkl,klab->ijab\",H[\"oooo\"],T[\"oovv\"])\n return Roovv\ndef evaluate_residual_3_3(H,T):\n # contributions to the residual\n Rooovvv = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))\n Rooovvv += 3.000000000 * np.einsum(\"da,ijkbcd->ijkabc\",H[\"vv\"],T[\"ooovvv\"])\n Rooovvv += 9.000000000 * np.einsum(\"deab,ie,jkcd->ijkabc\",H[\"vvvv\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 1.500000000 * np.einsum(\"deab,ijkcde->ijkabc\",H[\"vvvv\"],T[\"ooovvv\"])\n Rooovvv 
+= -9.000000000 * np.einsum(\"idab,jkcd->ijkabc\",H[\"ovvv\"],T[\"oovv\"])\n Rooovvv += -3.000000000 * np.einsum(\"dl,id,jklabc->ijkabc\",H[\"vo\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -3.000000000 * np.einsum(\"dl,la,ijkbcd->ijkabc\",H[\"vo\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 9.000000000 * np.einsum(\"dl,ilab,jkcd->ijkabc\",H[\"vo\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += -3.000000000 * np.einsum(\"il,jklabc->ijkabc\",H[\"oo\"],T[\"ooovvv\"])\n Rooovvv += -3.000000000 * np.einsum(\"dela,le,ijkbcd->ijkabc\",H[\"vvov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 9.000000000 * np.einsum(\"dela,ie,jklbcd->ijkabc\",H[\"vvov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -9.000000000 * np.einsum(\"dela,id,je,klbc->ijkabc\",H[\"vvov\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 18.000000000 * np.einsum(\"dela,lb,ie,jkcd->ijkabc\",H[\"vvov\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 3.000000000 * np.einsum(\"dela,lb,ijkcde->ijkabc\",H[\"vvov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -18.000000000 * np.einsum(\"dela,ijbe,klcd->ijkabc\",H[\"vvov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += -4.500000000 * np.einsum(\"dela,ilbc,jkde->ijkabc\",H[\"vvov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += -18.000000000 * np.einsum(\"idla,jd,klbc->ijkabc\",H[\"ovov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += -18.000000000 * np.einsum(\"idla,lb,jkcd->ijkabc\",H[\"ovov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += -9.000000000 * np.einsum(\"idla,jklbcd->ijkabc\",H[\"ovov\"],T[\"ooovvv\"])\n Rooovvv += -9.000000000 * np.einsum(\"ijla,klbc->ijkabc\",H[\"ooov\"],T[\"oovv\"])\n Rooovvv += 9.000000000 * np.einsum(\"delm,me,ilab,jkcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 3.000000000 * np.einsum(\"delm,ie,md,jklabc->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 4.500000000 * np.einsum(\"delm,ie,lmab,jkcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 18.000000000 * np.einsum(\"delm,ie,jmab,klcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 1.500000000 * np.einsum(\"delm,id,je,klmabc->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -1.500000000 * np.einsum(\"delm,imde,jklabc->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += 0.750000000 * np.einsum(\"delm,ijde,klmabc->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += 3.000000000 * np.einsum(\"delm,ma,le,ijkbcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -9.000000000 * np.einsum(\"delm,ma,ie,jklbcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 9.000000000 * np.einsum(\"delm,ma,id,je,klbc->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 18.000000000 * np.einsum(\"delm,ma,ijbe,klcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 4.500000000 * np.einsum(\"delm,ma,ilbc,jkde->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 9.000000000 * np.einsum(\"delm,la,mb,ie,jkcd->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 1.500000000 * np.einsum(\"delm,la,mb,ijkcde->ijkabc\",H[\"vvoo\"],T[\"ov\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -1.500000000 * np.einsum(\"delm,lmae,ijkbcd->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += 9.000000000 * np.einsum(\"delm,imae,jklbcd->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += -4.500000000 * np.einsum(\"delm,ijae,klmbcd->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += 0.750000000 * 
np.einsum(\"delm,lmab,ijkcde->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += -4.500000000 * np.einsum(\"delm,imab,jklcde->ijkabc\",H[\"vvoo\"],T[\"oovv\"],T[\"ooovvv\"])\n Rooovvv += -3.000000000 * np.einsum(\"idlm,md,jklabc->ijkabc\",H[\"ovoo\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 3.000000000 * np.einsum(\"idlm,jd,klmabc->ijkabc\",H[\"ovoo\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += 18.000000000 * np.einsum(\"idlm,ma,jd,klbc->ijkabc\",H[\"ovoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 9.000000000 * np.einsum(\"idlm,ma,jklbcd->ijkabc\",H[\"ovoo\"],T[\"ov\"],T[\"ooovvv\"])\n Rooovvv += -9.000000000 * np.einsum(\"idlm,la,mb,jkcd->ijkabc\",H[\"ovoo\"],T[\"ov\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += -4.500000000 * np.einsum(\"idlm,lmab,jkcd->ijkabc\",H[\"ovoo\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += -18.000000000 * np.einsum(\"idlm,jmab,klcd->ijkabc\",H[\"ovoo\"],T[\"oovv\"],T[\"oovv\"])\n Rooovvv += 9.000000000 * np.einsum(\"ijlm,ma,klbc->ijkabc\",H[\"oooo\"],T[\"ov\"],T[\"oovv\"])\n Rooovvv += 1.500000000 * np.einsum(\"ijlm,klmabc->ijkabc\",H[\"oooo\"],T[\"ooovvv\"])\n return Rooovvv\n```\nCompute the Hartree–Fock and MP2 energy", "# setup xyz geometry for linear H6\ngeometry = \"\"\"\nH 0.0 0.0 0.0\nH 0.0 0.0 1.0\nH 0.0 0.0 2.0\nH 0.0 0.0 3.0\nH 0.0 0.0 4.0\nH 0.0 0.0 5.1\nsymmetry c1\n\"\"\"\n\n(Escf, psi4_wfn) = forte.utils.psi4_scf(geometry,\n basis='sto-3g',\n reference='rhf',\n options={'E_CONVERGENCE' : 1.e-12})", "Prepare integrals for Forte", "# Define the orbital spaces\nmo_spaces = {'RESTRICTED_DOCC': [3],'RESTRICTED_UOCC': [3]}\n\n# pass Psi4 options to Forte\noptions = psi4.core.get_options()\noptions.set_current_module('FORTE')\nforte_options.get_options_from_psi4(options)\n\n# Grab the number of MOs per irrep\nnmopi = psi4_wfn.nmopi()\n# Grab the point group symbol (e.g. 
\"C2V\")\npoint_group = psi4_wfn.molecule().point_group().symbol()\n# create a MOSpaceInfo object\nmo_space_info = forte.make_mo_space_info_from_map(nmopi, point_group,mo_spaces, [])\n# make a ForteIntegral object\nints = forte.make_ints_from_psi4(psi4_wfn, forte_options, mo_space_info)", "Define orbital spaces and dimensions", "occmos = mo_space_info.corr_absolute_mo('RESTRICTED_DOCC')\nvirmos = mo_space_info.corr_absolute_mo('RESTRICTED_UOCC')\nallmos = mo_space_info.corr_absolute_mo('CORRELATED')\nnocc = 2 * len(occmos)\nnvir = 2 * len(virmos)", "Build the Fock matrix and the zeroth-order Fock matrix", "H = {'oo': forte.spinorbital_fock(ints,occmos, occmos,occmos),\n 'vv': forte.spinorbital_fock(ints,virmos, virmos,occmos),\n 'ov': forte.spinorbital_fock(ints,occmos, virmos,occmos),\n 'vo': forte.spinorbital_fock(ints,occmos, virmos,occmos), \n 'oovv' : forte.spinorbital_tei(ints,occmos,occmos,virmos,virmos),\n 'ooov' : forte.spinorbital_tei(ints,occmos,occmos,occmos,virmos),\n 'vvvv' : forte.spinorbital_tei(ints,virmos,virmos,virmos,virmos),\n 'vvoo' : forte.spinorbital_tei(ints,virmos,virmos,occmos,occmos),\n 'ovov' : forte.spinorbital_tei(ints,occmos,virmos,occmos,virmos),\n 'ovvv' : forte.spinorbital_tei(ints,occmos,virmos,virmos,virmos),\n 'vvov' : forte.spinorbital_tei(ints,virmos,virmos,occmos,virmos),\n 'ovoo' : forte.spinorbital_tei(ints,occmos,virmos,occmos,occmos),\n 'oooo' : forte.spinorbital_tei(ints,occmos,occmos,occmos,occmos)}", "Build the MP denominators", "fo = np.diag(H['oo'])\nfv = np.diag(H['vv'])\n\nD = {}\n\nd1 = np.zeros((nocc,nvir))\nfor i in range(nocc):\n for a in range(nvir):\n si = i % 2\n sa = a % 2\n if si == sa:\n d1[i][a] = 1.0 / (fo[i] - fv[a])\nD['ov'] = d1\n \n \nd2 = np.zeros((nocc,nocc,nvir,nvir))\nfor i in range(nocc):\n for j in range(nocc):\n for a in range(nvir):\n for b in range(nvir):\n si = i % 2\n sj = j % 2\n sa = a % 2\n sb = b % 2\n if si == sj == sa == sb:\n d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])\n if si == sa and sj == sb and si != sj:\n d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b])\n if si == sb and sj == sa and si != sj:\n d2[i][j][a][b] = 1.0 / (fo[i] + fo[j] - fv[a] - fv[b]) \nD['oovv'] = d2\n\nd3 = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))\nfor i in range(nocc):\n for j in range(nocc):\n for k in range(nocc):\n for a in range(nvir):\n for b in range(nvir):\n for c in range(nvir):\n si = i % 2\n sj = j % 2\n sk = k % 2\n sa = a % 2\n sb = b % 2\n sc = c % 2\n d3[i][j][k][a][b][c] = 1.0 / (fo[i] + fo[j] + fo[k]- fv[a] - fv[b] - fv[c])\nD['ooovvv'] = d3\n\n# Compute the MP2 correlation energy\nEmp2 = 0.0\nfor i in range(nocc):\n for j in range(nocc):\n for a in range(nvir):\n for b in range(nvir):\n Emp2 += 0.25 * H[\"oovv\"][i][j][a][b] ** 2 / (fo[i] + fo[j] - fv[a] - fv[b])\nprint(f\"MP2 corr. 
energy: {Emp2:.12f} Eh\")\n\ndef antisymmetrize_residual_2_2(Roovv):\n # antisymmetrize the residual\n Roovv_anti = np.zeros((nocc,nocc,nvir,nvir))\n Roovv_anti += np.einsum(\"ijab->ijab\",Roovv)\n Roovv_anti -= np.einsum(\"ijab->jiab\",Roovv)\n Roovv_anti -= np.einsum(\"ijab->ijba\",Roovv)\n Roovv_anti += np.einsum(\"ijab->jiba\",Roovv) \n return Roovv_anti\n\ndef antisymmetrize_residual_3_3(Rooovvv):\n # antisymmetrize the residual\n Rooovvv_anti = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ijkabc\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ijkacb\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ijkbac\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ijkbca\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ijkcab\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ijkcba\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ikjabc\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ikjacb\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ikjbac\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ikjbca\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->ikjcab\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->ikjcba\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jikabc\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jikacb\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jikbac\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jikbca\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jikcab\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jikcba\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jkiabc\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jkiacb\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jkibac\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jkibca\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->jkicab\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->jkicba\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kijabc\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kijacb\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kijbac\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kijbca\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kijcab\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kijcba\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kjiabc\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kjiacb\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kjibac\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kjibca\",Rooovvv)\n Rooovvv_anti += -1 * np.einsum(\"ijkabc->kjicab\",Rooovvv)\n Rooovvv_anti += +1 * np.einsum(\"ijkabc->kjicba\",Rooovvv)\n return Rooovvv_anti\n\ndef update_amplitudes(T,R,d):\n T['ov'] += np.einsum(\"ia,ia->ia\",R['ov'],D['ov'])\n T['oovv'] += np.einsum(\"ijab,ijab->ijab\",R['oovv'],D['oovv'])\n T['ooovvv'] += np.einsum(\"ijkabc,ijkabc->ijkabc\",R['ooovvv'],D['ooovvv'])\n\nref_CCSDT = -0.108354659115 # from forte sparse implementation\n\nT = {}\nT[\"ov\"] = np.zeros((nocc,nvir))\nT[\"oovv\"] = np.zeros((nocc,nocc,nvir,nvir))\nT[\"ooovvv\"] = np.zeros((nocc,nocc,nocc,nvir,nvir,nvir))\n\nheader = \"Iter. Corr. 
energy |R| \"\nprint(\"-\" * len(header))\nprint(header)\nprint(\"-\" * len(header))\n\nstart = time.perf_counter()\n\nmaxiter = 100\nfor i in range(maxiter):\n R = {}\n Ewicked = float(evaluate_residual_0_0(H,T))\n R['ov'] = evaluate_residual_1_1(H,T)\n Roovv = evaluate_residual_2_2(H,T)\n R['oovv'] = antisymmetrize_residual_2_2(Roovv)\n Rooovvv = evaluate_residual_3_3(H,T)\n R['ooovvv'] = antisymmetrize_residual_3_3(Rooovvv) \n\n update_amplitudes(T,R,D)\n\n # check for convergence\n norm_R = np.sqrt(np.linalg.norm(R['ov'])**2 + np.linalg.norm(R['oovv'])**2 + np.linalg.norm(R['ooovvv'])**2)\n print(f\"{i:3d} {Ewicked:+.12f} {norm_R:e}\") \n if norm_R < 1.0e-9:\n break\n \n \nend = time.perf_counter()\nt = end - start \n \nprint(\"-\" * len(header)) \nprint(f\"CCSDT correlation energy: {Ewicked:+.12f} [Eh]\")\nprint(f\"Error: {Ewicked - ref_CCSDT:+.12e} [Eh]\")\nprint(f\"Timing: {t:+.12e} [s]\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.18/_downloads/d043a6f89c85579df811f8d5bf583129/plot_stats_cluster_spatio_temporal_repeated_measures_anova.ipynb
bsd-3-clause
[ "%matplotlib inline", "Repeated measures ANOVA on source data with spatio-temporal clustering\nThis example illustrates how to make use of the clustering functions\nfor arbitrary, self-defined contrasts beyond standard t-tests. In this\ncase we will tests if the differences in evoked responses between\nstimulation modality (visual VS auditory) depend on the stimulus\nlocation (left vs right) for a group of subjects (simulated here\nusing one subject's data). For this purpose we will compute an\ninteraction effect using a repeated measures ANOVA. The multiple\ncomparisons problem is addressed with a cluster-level permutation test\nacross space and time.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Eric Larson <larson.eric.d@gmail.com>\n# Denis Engemannn <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport numpy as np\nfrom numpy.random import randn\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,\n f_mway_rm, summarize_clusters_stc)\n\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\nfrom mne.datasets import sample\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nsubjects_dir = data_path + '/subjects'\nsrc_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'\n\ntmin = -0.2\ntmax = 0.3 # Use a lower tmax to reduce multiple comparisons\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)", "Read epochs for all channels, removing a bad one", "raw.info['bads'] += ['MEG 2443']\npicks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')\n# we'll load all four conditions that make up the 'two ways' of our ANOVA\n\nevent_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)\nreject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=reject, preload=True)\n\n# Equalize trial counts to eliminate bias (which would otherwise be\n# introduced by the abs() performed below)\nepochs.equalize_event_counts(event_id)", "Transform to source space", "fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nsnr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = \"dSPM\" # use dSPM method (could also be MNE, sLORETA, or eLORETA)\ninverse_operator = read_inverse_operator(fname_inv)\n\n# we'll only use one hemisphere to speed up this example\n# instead of a second vertex array we'll pass an empty array\nsample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]\n\n# Let's average and compute inverse, then resample to speed things up\nconditions = []\nfor cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important\n evoked = epochs[cond].average()\n evoked.resample(50, npad='auto')\n condition = apply_inverse(evoked, inverse_operator, lambda2, method)\n # Let's only deal with t > 0, cropping to reduce multiple comparisons\n condition.crop(0, None)\n conditions.append(condition)\n\ntmin = conditions[0].tmin\ntstep = conditions[0].tstep", "Transform to common cortical space\nNormally you would read in estimates across several subjects and morph them\nto the same cortical space (e.g. fsaverage). 
For example purposes, we will\nsimulate this by just having each \"subject\" have the same response (just\nnoisy in source space) here.\nWe'll only consider the left hemisphere in this tutorial.", "n_vertices_sample, n_times = conditions[0].lh_data.shape\nn_subjects = 7\nprint('Simulating data for %d subjects.' % n_subjects)\n\n# Let's make sure our results replicate, so set the seed.\nnp.random.seed(0)\nX = randn(n_vertices_sample, n_times, n_subjects, 4) * 10\nfor ii, condition in enumerate(conditions):\n X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]", "It's a good idea to spatially smooth the data, and for visualization\npurposes, let's morph these to fsaverage, which is a grade 5 ICO source space\nwith vertices 0:10242 for each hemisphere. Usually you'd have to morph\neach subject's data separately, but here since all estimates are on\n'sample' we can use one morph matrix for all the heavy lifting.", "# Read the source space we are morphing to (just left hemisphere)\nsrc = mne.read_source_spaces(src_fname)\nfsave_vertices = [src[0]['vertno'], []]\nmorph_mat = mne.compute_source_morph(\n src=inverse_operator['src'], subject_to='fsaverage',\n spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat\nmorph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src\nn_vertices_fsave = morph_mat.shape[0]\n\n# We have to change the shape for the dot() to work properly\nX = X.reshape(n_vertices_sample, n_times * n_subjects * 4)\nprint('Morphing data.')\nX = morph_mat.dot(X) # morph_mat is a sparse matrix\nX = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)", "Now we need to prepare the group matrix for the ANOVA statistic. To make the\nclustering function work correctly with the ANOVA function X needs to be a\nlist of multi-dimensional arrays (one per condition) of shape: samples\n(subjects) x time x space.\nFirst we permute dimensions, then split the array into a list of conditions\nand discard the empty dimension resulting from the split using numpy squeeze.", "X = np.transpose(X, [2, 1, 0, 3]) #\nX = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]", "Prepare function for arbitrary contrast\nAs our ANOVA function is a multi-purpose tool we need to apply a few\nmodifications to integrate it with the clustering function. This\nincludes reshaping data, setting default arguments and processing\nthe return values. For this reason we'll write a tiny dummy function.\nWe will tell the ANOVA how to interpret the data matrix in terms of\nfactors. This is done via the factor levels argument which is a list\nof the number factor levels for each factor.", "factor_levels = [2, 2]", "Finally we will pick the interaction effect by passing 'A:B'.\n(this notation is borrowed from the R formula language). Without this also\nthe main effects will be returned.", "effects = 'A:B'\n# Tell the ANOVA not to compute p-values which we don't need for clustering\nreturn_pvals = False\n\n# a few more convenient bindings\nn_times = X[0].shape[1]\nn_conditions = 4", "A stat_fun must deal with a variable number of input arguments.\nInside the clustering function each condition will be passed as flattened\narray, necessitated by the clustering procedure. 
The ANOVA however expects an\ninput array of dimensions: subjects X conditions X observations (optional).\nThe following function catches the list input and swaps the first and the\nsecond dimension, and finally calls ANOVA.\n<div class=\"alert alert-info\"><h4>Note</h4><p>For further details on this ANOVA function consider the\n corresponding\n `time-frequency tutorial <tut-timefreq-twoway-anova>`.</p></div>", "def stat_fun(*args):\n # get f-values only.\n return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,\n effects=effects, return_pvals=return_pvals)[0]", "Compute clustering statistic\nTo use an algorithm optimized for spatio-temporal clustering, we\njust pass the spatial connectivity matrix (instead of spatio-temporal).", "# as we only have one hemisphere we need only need half the connectivity\nprint('Computing connectivity.')\nconnectivity = mne.spatial_src_connectivity(src[:1])\n\n# Now let's actually do the clustering. Please relax, on a small\n# notebook and one single thread only this will take a couple of minutes ...\npthresh = 0.0005\nf_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)\n\n# To speed things up a bit we will ...\nn_permutations = 128 # ... run fewer permutations (reduces sensitivity)\n\nprint('Clustering.')\nT_obs, clusters, cluster_p_values, H0 = clu = \\\n spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,\n threshold=f_thresh, stat_fun=stat_fun,\n n_permutations=n_permutations,\n buffer_size=None)\n# Now select the clusters that are sig. at p < 0.05 (note that this value\n# is multiple-comparisons corrected).\ngood_cluster_inds = np.where(cluster_p_values < 0.05)[0]", "Visualize the clusters", "print('Visualizing clusters.')\n\n# Now let's build a convenient representation of each cluster, where each\n# cluster becomes a \"time point\" in the SourceEstimate\nstc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,\n vertices=fsave_vertices,\n subject='fsaverage')\n\n# Let's actually plot the first \"time point\" in the SourceEstimate, which\n# shows all the clusters, weighted by duration\n\nsubjects_dir = op.join(data_path, 'subjects')\n# The brighter the color, the stronger the interaction between\n# stimulus modality and stimulus location\n\nbrain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',\n time_label='Duration significant (ms)',\n clim=dict(kind='value', lims=[0, 1, 40]))\nbrain.save_image('cluster-lh.png')\nbrain.show_view('medial')", "Finally, let's investigate interaction effect by reconstructing the time\ncourses", "inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in\n enumerate(good_cluster_inds)][0] # first cluster\n\ntimes = np.arange(X[0].shape[1]) * tstep * 1e3\n\nplt.figure()\ncolors = ['y', 'b', 'g', 'purple']\nevent_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']\n\nfor ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):\n # extract time course at cluster vertices\n condition = condition[:, :, inds_v]\n # normally we would normalize values across subjects but\n # here we use data from the same subject so we're good to just\n # create average time series across subjects and vertices.\n mean_tc = condition.mean(axis=2).mean(axis=0)\n std_tc = condition.std(axis=2).std(axis=0)\n plt.plot(times, mean_tc.T, color=color, label=eve_id)\n plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',\n alpha=0.5, label='')\n\nymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5\nplt.xlabel('Time (ms)')\nplt.ylabel('Activation 
(F-values)')\nplt.xlim(times[[0, -1]])\nplt.ylim(ymin, ymax)\nplt.fill_betweenx((ymin, ymax), times[inds_t[0]],\n times[inds_t[-1]], color='orange', alpha=0.3)\nplt.legend()\nplt.title('Interaction between stimulus-modality and location.')\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kubeflow/example-seldon
notebooks/training.ipynb
apache-2.0
[ "Train Various Models on MNIST using kubeflow and seldon-core\nUsing:\n\nkubeflow\nseldon-core\n\nThe example will be the MNIST handwriiten digit classification task.\n\nDependencies\n\nArgo\n\nSetup", "!kubectl config set-context $(kubectl config current-context) --namespace=kubeflow", "Tensorflow Model\nA simple neural network in Tensorflow.\nTraining\n\nCreate image from source\nRun training\n\nRun with:\n * -p build-push-image=true to build image and push to repo, needed extra params:\n * -p version=&lt;version&gt; create &lt;version&gt; of model\n * -p github-user=&lt;github-user&gt; to download example-seldon source from &lt;github-user&gt; account\n * -p github-revision=&lt;revision&gt; to use the github branch &lt;revision&gt;\n * -p docker-org=&lt;docker-org&gt; to use Docker repo &lt;docker-org&gt; to push image to. Needs docker credentials in secret as described in README.", "!pygmentize ../workflows/training-tf-mnist-workflow.yaml\n\n!argo submit ../workflows/training-tf-mnist-workflow.yaml -p tfjob-version-hack=1\n\n!argo list", "Runtime Image\nRun with:\n * -p build-push-image=true to build image and push to repo, needed extra params:\n * -p version=&lt;version&gt; create &lt;version&gt; of model\n * -p github-user=&lt;github-user&gt; to download example-seldon source from &lt;github-user&gt; account\n * -p github-revision=&lt;revision&gt; to use the github branch &lt;revision&gt;\n * -p docker-org=&lt;docker-org&gt; to use Docker user &lt;docker-org&gt; to push image to. Needs docker credentials in secret as described in README.\n * -p deploy-model=true to deploy model", "!pygmentize ../workflows/serving-tf-mnist-workflow.yaml\n\n!argo submit ../workflows/serving-tf-mnist-workflow.yaml\n\n!argo list", "Sklearn Model\nA Random forest in sklearn.\nTraining\n\nFor options see above Tensorflow example", "!pygmentize ../workflows/training-sk-mnist-workflow.yaml\n\n!argo submit ../workflows/training-sk-mnist-workflow.yaml\n\n!argo list", "Runtime Image\n\nFor options see above Tensorflow example", "!pygmentize ../workflows/serving-sk-mnist-workflow.yaml\n\n!argo submit ../workflows/serving-sk-mnist-workflow.yaml\n\n!argo list", "R Model\nA partial least squares model in R.\nTraining\n\nFor options see above Tensorflow example", "!pygmentize ../workflows/training-r-mnist-workflow.yaml\n\n!argo submit ../workflows/training-r-mnist-workflow.yaml\n\n!argo list", "Runtime Image\n\nFor options see above Tensorflow example", "!pygmentize ../workflows/serving-r-mnist-workflow.yaml\n\n!argo submit ../workflows/serving-r-mnist-workflow.yaml\n\n!argo list" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kunaltyagi/SDES
notes/python/p_norvig/logic/WWW.ipynb
gpl-3.0
[ "WWW: Will the Warriors Win?\n18 April 2016\nThe Golden State Warriors have had a historic basketball season, winning more games than any other team ever has. But will they top that off by winning the championship? There are 15 other teams in contention, including one, the Spurs, that has had a historic season as the best second-best team ever. The web site fivethirtyeight, using a complicated scoring syste, gives the Warriors a 44% chance of winning, with the Spurs at 28%. Basketball-reference has the Warriors at 41% and Spurs at 32.5%, while a betting site had the Warriors at 54% and Spurs at 18%. But what's a good way to make a prediction? There are several choices:\n\nSubjective impression of a team's strength? Or a statistical model?\nPredictions based on:\nHolistic impression of entire postseason (e.g. \"I think the Warriors have a 50% chance of winning it all\")\nSeries by series (e.g. \"I think the Warriors have a 95% chance of beating the Rockets in the first round, then ...\")\nGame by game (e.g. \"I think the Warriors have a 83% chance of beating the Rockets in Game 1, then ...\")\nPossession by possession (e.g. simulate games basket by basket, based on past stats)\n\nHere are the top four teams with their Won-Loss percentage and SRS (Simple rating system: average margin of victory, adjusted for strength of opponents):\n TEAM PCT SRS\n Warriors .890 10.38\n Spurs .817 10.28\n Thunder .671 7.09\n Cavaliers .695 5.45\n\nI decided to go with a subjective impression of one team beating another in a single game. For example, I might think that the Warriors have a 58% chance of beating the Cavaliers in any one game, and from that compute the odds of winning a series:", "def win_series(p, W=0, L=0):\n \"Probability of winning best-of-7 series, given a probability p of winning a game.\"\n return (1 if W == 4 else\n 0 if L == 4 else\n p * win_series(p, W + 1, L) +\n (1 - p) * win_series(p, W, L + 1))\n\nwin_series(0.58)", "In other words, if you have a 58% chance of winning a game, you have a 67% chance of winning the series.\nNote that I ignore the fact that games aren't strictly independent; I ignore home court advantage; and I ignore the chance of catastrophic injuries. Why? Because all these factors would change the final winning estimate by only a few percentage points, and I already have more uncertainty than that.\nNote that win_series takes optional arguments to say how many games in the sries have been won and lost so far. 
Here's a table showing your chance of winning a series, given the current tally of games won and lost on the left, and your expected percentage of winning a game at the top:", "def percents(items, fmt='{:4.0%}'): return ' '.join(fmt.format(item) for item in items)\n\ndef series_table(pcts=[p/100 for p in range(20, 81, 5)]):\n print('W-L | Singe Game Win Percentage')\n print(' | ' + percents(pcts))\n for W in range(4):\n print('----+' + '-' * 5 * len(pcts))\n for L in reversed(range(4)):\n results = [win_series(p, W, L) for p in pcts]\n print('{}-{} | {}'.format(W, L, percents(results)))\n\nseries_table()", "And here's a function to tabulate the chances of winning each series on the way to a title:", "def playoffs(name, rounds):\n \"Print probability for team winning each series.\"\n overall = (1, 1, 1) # (lo, med, hi) probabilities of winning it all\n for (opponent, probs) in rounds:\n this_round = [win_series(p) for p in probs]\n overall = [overall[i] * this_round[i] for i in range(len(probs))]\n print('{} vs {:8} win this round: {}; win through here: {}'.format(\n name, opponent, percents(this_round), percents(overall)))", "Now I enter my subjective probability estimates (low, medium, and high), and likely opponents for each round, for the three top contenders:", "playoffs('Warriors',\n [('Rockets', (0.75, 0.83, 0.85)),\n ('Clippers', (0.67, 0.73, 0.80)),\n ('Spurs', (0.45, 0.58, 0.70)),\n ('Cavs', (0.60, 0.67, 0.75))])\n\nplayoffs('Spurs',\n [('Memphis', (0.75, 0.83, 0.85)),\n ('Thunder', (0.45, 0.62, 0.70)),\n ('Warriors', (0.30, 0.42, 0.55)),\n ('Cavs', (0.60, 0.67, 0.75))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85)),\n ('Hawks', (0.45, 0.60, 0.75)),\n ('Raptors', (0.40, 0.55, 0.65)),\n ('GSW/SAS', (0.25, 0.33, 0.40))])", "I have the Warriors at 50% (for the medium estimate of winning it all) and the Spurs at 20%, so I'm more of a Warriors fan than fivethirtyeight and basketball-reference, but I have very wide margins between my low and high estimate: 22% to 78% for the Warriors; 3% to 49% for the Spurs; 1% to 21% for the Cavs. Interestingly, while fivethirtyeight does not think this year's Warriors are better than the 1995 Bulls, they do think the Spurs, Thunder, and Cavs are the best ever second-, third-, and fourth-best teams in a season.\nWhat's better--a holistic guess at the outcome, or a reductionist model like this one? I can't say that one is better than the other in every case, but it can be instructive to examine the cases where the reductionist model differs from your holistic impressions. For example, look at the low end of my prediction for the Spurs. I feel like it is crazy to say the Spurs only have a 3% chance of winning the title, but I don't feel that any of the individual game win probabilities (75%, 45%, 30%, and 60%, respectively) are crazy. So now I know that at least one of my intutions is wrong. But I'm not sure how to reconcile the mismatch...\nWWWWC: Will Warriors Win Without Curry?\n27 April 2016\nThe Playoff picture has changed! \nWe have some results for first-round series, and there have been key injuries to players including Steph Curry, Avery Bradley, Chris Paul, and Blake Griffin. 
To update, first I make a small alteration to allow the current state of a series in terms of wins/loses to be specified as part of the input to playoffs:", "def playoffs(name, rounds):\n \"Print probability for team winning each series.\"\n overall = (1, 1, 1) # (lo, med, hi) probabilities of winning it all\n for (opponent, probs, *args) in rounds:\n this_round = [win_series(p, *args) for p in probs]\n overall = [overall[i] * this_round[i] for i in range(len(probs))]\n print('{} vs {:8} win this round: ({}) win through here: ({})'.format(\n name, opponent, percents(this_round), percents(overall)))", "We don't know for sure how long Curry will be out, but here are my updated odds for the Warriors, with the middle probability value representing the assumption that Curry misses the second round, and comes back in time for the Western Conference Finals at a mildly reduced capacity; the low and high probability values represent more and less severe injuries:", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 3, 1),\n ('Blazers', (0.45, 0.55, 0.67)),\n ('Spurs', (0.30, 0.55, 0.67)),\n ('Cavs', (0.40, 0.60, 0.70))])", "The Spurs and Cavs are rolling; let's update their odds:", "playoffs('Spurs',\n [('Memphis', (0.75, 0.83, 0.85), 4, 0),\n ('Thunder', (0.45, 0.62, 0.70)),\n ('Warriors', (0.33, 0.45, 0.70)),\n ('Cavs', (0.60, 0.67, 0.75))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85), 4, 0),\n ('Hawks', (0.45, 0.60, 0.75)),\n ('Raptors', (0.40, 0.55, 0.65)),\n ('GSW/SAS', (0.30, 0.40, 0.60))])", "So my updated odds are that the Warriors and Spurs are roughly equally likely to win (26% and 24%); the Cavs are still less likely (13%), and there is more uncertainty.\nWWWWCB: Will Warriors Win With Curry Back?\n10 May 2016\nCurry has returned from his injury, and after a slow shooting start, had the highest-scoring overtime period in the history of the NBA. Meanwhile, the Thunder lead the Spurs, 3-2, and the Cavaliers have been dominant in the East, hitting a historic number of 3-point shots. Here is my revised outlook:", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 4, 1),\n ('Blazers', (0.55, 0.67, 0.75), 3, 1),\n ('Spurs', (0.45, 0.60, 0.67)),\n ('Cavs', (0.40, 0.55, 0.67))])\n\nplayoffs('Spurs',\n [('Memphis', (0.75, 0.83, 0.85), 4, 0),\n ('Thunder', (0.40, 0.60, 0.70), 2, 3),\n ('Warriors', (0.33, 0.40, 0.55)),\n ('Cavs', (0.40, 0.50, 0.70))])\n\nplayoffs('Thunder',\n [('Dallas', (0.75, 0.83, 0.85), 4, 1),\n ('Spurs', (0.30, 0.40, 0.60), 3, 2),\n ('Warriors', (0.33, 0.40, 0.55)),\n ('Cavs', (0.35, 0.45, 0.60))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85), 4, 0),\n ('Hawks', (0.45, 0.60, 0.75), 4, 0),\n ('Raptors', (0.50, 0.65, 0.75)),\n ('GS/SA/OK', (0.33, 0.45, 0.55))])", "So overall, from the start of the playoffs up to May 10th, I have:\n\nWarriors: Dropped from 50% to 26% with Curry's injury, and rebounded to 42%. \nSpurs: Dropped from 20% to 5% after falling behind Thunder.\nThunder: Increased to 7%.\nCavs: Increased to 31%.\n\nTime to Panic?\n17 May 2016\nThe Thunder finished off the Spurs and beat the Warriors in game 1. Are the Thunder, like the Cavs, peaking at just the right time, after an inconsistant regular season? Is it time for Warriors fans to panic?\nSure, the Warriors were down a game twice in last year's playoffs and came back to win both times. Sure, the Warriors are still 3-1 against the Thunder this year, and only lost two games all season to elite teams (Spurs, Thunder, Cavs, Clippers, Raptors). 
But the Thunder are playing at a top level. Here's my update, showing that the loss cost the Warriors 5%:", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 4, 1),\n ('Blazers', (0.55, 0.67, 0.75), 4, 1),\n ('Thunder', (0.45, 0.63, 0.70), 0, 1),\n ('Cavs', (0.40, 0.55, 0.65))])", "Not Yet?\n18 May 2016\nThe Warriors won game two of the series, so now they're back up to 45%, with the Cavs at 35%. At this time, fivethirtyeight has the Warriors at 45%, Cavs at 28% and Thunder at 24%", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 4, 1),\n ('Blazers', (0.55, 0.67, 0.75), 4, 1),\n ('Thunder', (0.45, 0.63, 0.70), 1, 1),\n ('Cavs', (0.40, 0.55, 0.65))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85), 4, 0),\n ('Hawks', (0.45, 0.60, 0.75), 4, 0),\n ('Raptors', (0.50, 0.65, 0.75), 1, 0),\n ('GSW', (0.35, 0.45, 0.60))])", "Yet!\n24 May 2016\nThe Thunder won two in a row (first time the Warriors had lost two in a row all year), putting the Warriors down 3-1. And the Cavs are looking mortal, losing two to the Raptors. So now it looks to me like the Thunder are favorites to win it all:", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 4, 1),\n ('Blazers', (0.55, 0.67, 0.75), 4, 1),\n ('Thunder', (0.25, 0.55, 0.65), 1, 3),\n ('Cavs', (0.40, 0.55, 0.65))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85), 4, 0),\n ('Hawks', (0.45, 0.60, 0.75), 4, 0),\n ('Raptors', (0.50, 0.55, 0.70), 2, 2),\n ('Thunder', (0.35, 0.45, 0.60))])\n\nplayoffs('Thunder',\n [('Dallas', (0.75, 0.83, 0.85), 4, 1),\n ('Spurs', (0.30, 0.40, 0.60), 4, 2),\n ('Warriors', (0.35, 0.45, 0.75), 3, 1),\n ('Cavs', (0.40, 0.55, 0.65))])", "But Not Done Yet\n26 May 2016\nThe Warriors won game 5, bringing them up from a 10% to an 18% chance of winning it all:", "playoffs('Warriors',\n [('Rockets', (0.50, 0.70, 0.80), 4, 1),\n ('Blazers', (0.55, 0.67, 0.75), 4, 1),\n ('Thunder', (0.35, 0.55, 0.65), 2, 3),\n ('Cavs', (0.40, 0.55, 0.65))])\n\nplayoffs('Cavs',\n [('Pistons', (0.75, 0.83, 0.85), 4, 0),\n ('Hawks', (0.45, 0.60, 0.75), 4, 0),\n ('Raptors', (0.50, 0.55, 0.70), 3, 2),\n ('Thunder', (0.35, 0.45, 0.60))])\n\nplayoffs('Thunder',\n [('Dallas', (0.75, 0.83, 0.85), 4, 1),\n ('Spurs', (0.30, 0.40, 0.60), 4, 2),\n ('Warriors', (0.35, 0.45, 0.75), 3, 2),\n ('Cavs', (0.40, 0.55, 0.65))])", "The Finals\n1 June 2016\nThe Warriors completed their comeback against the Thunder, putting them in a great position to win this year (and they are already established as favorites for next year). Rather than update the odds after each game 0f the finals, I'll just repeat the table (with the note that I think the Warriors are somewhere in the 60% range for each game):", "series_table()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nathawkins/PHY451_FS_2017
Cavendish_Experiment/Scripts/20171010_cavendish_dayone_trial.ipynb
gpl-3.0
[ "Analysis of Preliminary Trail Data Pulled from Cavendish Balance", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport math as m\nfrom scipy.signal import argrelextrema as argex\nplt.style.use('ggplot')\n\ndata_dir = '../data/'\ntrial_data = np.loadtxt(data_dir+'20171010_cavendish_trial.txt', delimiter='\\t')\n\nplt.plot(trial_data[:,0], trial_data[:,1])\nplt.title(\"Trial Data from Cavendish Balance\")\nplt.ylabel(\"Anglular Positon (mrads)\")\nplt.xlabel(\"Time (s)\")", "The weird behavior at the beginning occured when we were making an alteration to the experimental setup itself (doing one of our many adjustments to try and zero out our $\\theta_e$. We can ignore this as it is not indicative of our data and look at where it moves into a damped harmonic oscialltion.", "plt.plot(trial_data[3900:,0], trial_data[3900:,1])\nplt.title(\"Trial Data from Cavendish Balance, Adjusted\")\nplt.ylabel(\"Anglular Positon (mrads)\")\nplt.xlabel(\"Time (s)\")\n\nnp.savetxt(data_dir+'20171010_cavendish_trial_useable.txt', trial_data[3200:,], delimiter=',')\n\nx_data = trial_data[3900:,0]\ny_data = trial_data[3900:,1]\n\n#I want to try to extract the peaks of the corresponding sine waves to show the exponential decay and use that \n#to fit my curve to.\n\nangles = []\ntime = []\n\nfor i in range(1, len(y_data)-1):\n if y_data[i] >= np.average(y_data):\n if (y_data[i] > y_data[i-1]) and (y_data[i] > y_data[i+1]):\n angles.append(float(y_data[i]))\n time.append(float(x_data[i]))\n \ninds = argex(np.array(angles), np.greater)\ntimes = np.array(time)[inds]\nturning_points = np.array(angles)[inds]\nplt.plot(times, turning_points, 'g--')\nplt.plot(x_data, y_data, 'b-')", "Trying out the scipy.optimzie library to fit this to a decaying sinuisodal curve.", "from scipy.optimize import curve_fit\n\ndef decay(t, a, b, w, phi, theta_0):\n return a*np.exp(-b*t)*np.cos(w*t + phi) + theta_0\n\npopt, pcov = curve_fit(decay, x_data, y_data , p0 = (-32, 1.3e-3, 3e-2, -6e-1, 0))\n\npopt\n\nplt.plot(times, turning_points, 'b--', label = 'Decay from Data')\nplt.plot(x_data, y_data, 'r-', linewidth = 5, label = \"Raw Data\")\nplt.plot(x_data, decay(x_data, *popt), 'g-', label = 'Fit of a*np.exp(-b*t)*np.cos(w*t + phi) + theta_0')\nplt.title(\"Free Oscillation Data, No Large Masses\")\nplt.ylabel(\"Angle, Not Calibrated (mrad)\")\nplt.xlabel(\"Time (s)\")\nplt.legend()\n\nround(popt[1],5)", "$$b = 1.34 \\times 10 ^{-3} \\frac{1}{s}$$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
qutip/qutip-notebooks
docs/guide/TensorPtrace.ipynb
lgpl-3.0
[ "Tensor Products & Partial Traces\nContents\n\nTensor Products\nPartial Trace\nSuper Operators & Tensor Manipulations", "import numpy as np\nfrom qutip import *", "<a id='tensor'></a>\nTensor Products\nTo describe the states of multipartite quantum systems - such as two coupled qubits, a qubit coupled to an oscillator, etc. - we need to expand the Hilbert space by taking the tensor product of the state vectors for each of the system components. Similarly, the operators acting on the state vectors in the combined Hilbert space (describing the coupled system) are formed by taking the tensor product of the individual operators.\nIn QuTiP the function tensor is used to accomplish this task. This function takes as argument a collection::\npython\ntensor(op1, op2, op3)\nor a list:\npython\ntensor([op1, op2, op3])\nof state vectors or operators and returns a composite quantum object for the combined Hilbert space. The function accepts an arbitray number of states or operators as argument. The type returned quantum object is the same as that of the input(s).\nFor example, the state vector describing two qubits in their ground states is formed by taking the tensor product of the two single-qubit ground state vectors:", "tensor(basis(2, 0), basis(2, 0))", "or equivalently using the list format:", "tensor([basis(2, 0), basis(2, 0)])", "This is straightforward to generalize to more qubits by adding more component state vectors in the argument list to the tensor function, as illustrated in the following example:", "tensor((basis(2, 0) + basis(2, 1)).unit(), \n (basis(2, 0) + basis(2, 1)).unit(), basis(2, 0))", "This state is slightly more complicated, describing two qubits in a superposition between the up and down states, while the third qubit is in its ground state.\nTo construct operators that act on an extended Hilbert space of a combined system, we similarly pass a list of operators for each component system to the tensor function. For example, to form the operator that represents the simultaneous action of the $\\sigma_x$ operator on two qubits:", "tensor(sigmax(), sigmax())", "To create operators in a combined Hilbert space that only act only on a single component, we take the tensor product of the operator acting on the subspace of interest, with the identity operators corresponding to the components that are to be unchanged. For example, the operator that represents $\\sigma_z$ on the first qubit in a two-qubit system, while leaving the second qubit unaffected:", "tensor(sigmaz(), identity(2))", "Example: Constructing composite Hamiltonians\nThe tensor function is extensively used when constructing Hamiltonians for composite systems. Here we'll look at some simple examples.\nTwo coupled qubits\nFirst, let's consider a system of two coupled qubits. Assume that both qubit has equal energy splitting, and that the qubits are coupled through a $\\sigma_x\\otimes\\sigma_x$ interaction with strength $g = 0.05$ (in units where the bare qubit energy splitting is unity). 
The Hamiltonian describing this system is:", "H = tensor(sigmaz(), identity(2)) + tensor(identity(2),\n sigmaz()) + 0.05 * tensor(sigmax(), sigmax())\nH", "Three coupled qubits\nThe two-qubit example is easily generalized to three coupled qubits:", "H = (tensor(sigmaz(), identity(2), identity(2)) + \n tensor(identity(2), sigmaz(), identity(2)) + \n tensor(identity(2), identity(2), sigmaz()) + \n 0.5 * tensor(sigmax(), sigmax(), identity(2)) + \n 0.25 * tensor(identity(2), sigmax(), sigmax()))\nH", "Jaynes-Cummings Model\nThe simplest possible quantum mechanical description for light-matter interaction is encapsulated in the Jaynes-Cummings model, which describes the coupling between a two-level atom and a single-mode electromagnetic field (a cavity mode). Denoting the energy splitting of the atom and cavity omega_a and omega_c, respectively, and the atom-cavity interaction strength g, the Jaynes-Cumming Hamiltonian can be constructed as:", "N = 10 #Number of Fock states for cavity mode.\nomega_a = 1.0\nomega_c = 1.25\ng = 0.05\na = tensor(identity(2), destroy(N))\nsm = tensor(destroy(2), identity(N))\nsz = tensor(sigmaz(), identity(N))\nH = 0.5 * omega_a * sz + omega_c * a.dag() * a + g * (a.dag() * sm + a * sm.dag())", "<a id='partial'></a>\nPartial Trace\nThe partial trace is an operation that reduces the dimension of a Hilbert space by eliminating some degrees of freedom by averaging (tracing). In this sense it is therefore the converse of the tensor product. It is useful when one is interested in only a part of a coupled quantum system. For open quantum systems, this typically involves tracing over the environment leaving only the system of interest. In QuTiP the class method ptrace is used to take partial traces. ptrace acts on the Qobj instance for which it is called, and it takes one argument sel, which is a list of integers that mark the component systems that should be kept. All other components are traced out.\nFor example, the density matrix describing a single qubit obtained from a coupled two-qubit system is obtained via:", "psi = tensor(basis(2, 0), basis(2, 1))\npsi.ptrace(0)\n\npsi.ptrace(1)", "Note that the partial trace always results in a density matrix (mixed state), regardless of whether the composite system is a pure state (described by a state vector) or a mixed state (described by a density matrix):", "psi = tensor((basis(2, 0) + basis(2, 1)).unit(), basis(2, 0))\npsi.ptrace(0)\n\nrho = tensor(ket2dm((basis(2, 0) + basis(2, 1)).unit()), fock_dm(2, 0))\nrho.ptrace(0)", "<a id='super'></a>\nSuper Operators & Tensor Manipulations\nSuperoperators are operators\nthat act on Liouville space, the vectorspace of linear operators. Superoperators can be represented using the isomorphism $\\mathrm{vec} : \\mathcal{L}(\\mathcal{H}) \\to \\mathcal{H} \\otimes \\mathcal{H}$.\nTo represent superoperators acting on $\\mathcal{L}(\\mathcal{H}_1 \\otimes \\mathcal{H}_2)$ thus takes some tensor rearrangement to get the desired ordering\n$\\mathcal{H}_1 \\otimes \\mathcal{H}_2 \\otimes \\mathcal{H}_1 \\otimes \\mathcal{H}_2$.\nIn particular, this means that tensor does not act as one might expect on the results of to_super:", "A = qeye([2])\nB = qeye([3])\nto_super(tensor(A, B)).dims\n\ntensor(to_super(A), to_super(B)).dims", "In the former case, the result correctly has four copies\nof the compound index with dims [2, 3]. 
In the latter\ncase, however, each of the Hilbert space indices is listed\nindependently and in the wrong order.\nThe super_tensor function performs the needed\nrearrangement, providing the most direct analog to tensor on\nthe underlying Hilbert space. In particular, for any two type=\"oper\"\nQobjs A and B, to_super(tensor(A, B)) == super_tensor(to_super(A), to_super(B)) and\noperator_to_vector(tensor(A, B)) == super_tensor(operator_to_vector(A), operator_to_vector(B)). Returning to the previous example:", "super_tensor(to_super(A), to_super(B)).dims", "The composite function automatically switches between\ntensor and super_tensor based on the type\nof its arguments, such that composite(A, B) returns an appropriate Qobj to\nrepresent the composition of two systems.", "composite(A, B).dims\n\ncomposite(to_super(A), to_super(B)).dims", "QuTiP also allows more general tensor manipulations that are\nuseful for converting between superoperator representations.\nIn particular, the tensor_contract function allows for\ncontracting one or more pairs of indices. As detailed in\nthe channel contraction tutorial, this can be used to find\nsuperoperators that represent partial trace maps.\nUsing this functionality, we can construct some quite exotic maps,\nsuch as a map from $3 \\times 3$ operators to $2 \\times 2$\noperators:", "tensor_contract(composite(to_super(A), to_super(B)), (1, 3), (4, 6)).dims\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"../styles/guide.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
SebastianBocquet/pygtc
demo.ipynb
mit
[ "Example 1: Making a GTC/triangle plot with pygtc\nThis example is built from a jupyter notebook hosted on the pyGTC GitHub repository.\nImport dependencies", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina' # For mac users with Retina display\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport pygtc", "Generate fake data\nLet's create two sets of fake sample points with 8 dimensions each. Note that chains are allowed to have different lengths.", "# Create Npoints samples from random multivariate, nDim-dimensional Gaussian\ndef create_random_samples(nDim, Npoints):\n means = np.random.rand(nDim)\n cov = .5 - np.random.rand(nDim**2).reshape((nDim,nDim))\n cov = np.triu(cov)\n cov += cov.T - np.diag(cov.diagonal())\n cov = np.dot(cov,cov)\n samples = np.random.multivariate_normal(means, cov, Npoints)\n return samples\n\n# Create two sets of fake data with 8 parameters\nnp.random.seed(0) # To be able to create the same fake data over and over again\nsamples1 = create_random_samples(8, 50000)\nsamples2 = 1+create_random_samples(8, 70000)", "Omit one parameter for one chain\nLet's assume the samples1 does not include the second to last parameter. In the figure, we only want to show this parameter for samples2. pygtc will omit parameters that only contain nan.", "samples1[:,6] = None", "Minimal example\nNote that numpy throws a RuntimeWarning because we set one of the axes of samples1 to None just above. As we understand the warning, let's move on!", "GTC = pygtc.plotGTC(chains=[samples1,samples2])", "Complete the figure\nNow let's add:\n* axis and data labels\n* lines marking some important points in parameter space\n* Gaussian distributions on the 1d histograms that could indicate Gaussian priors we assumed\nNote that all these must match number of parameters!", "# List of parameter names, supports latex\n# NOTE: For capital greek letters in latex mode, use \\mathsf{}\nnames = ['param name',\n '$B_\\mathrm{\\lambda}$',\n '$E$', '$\\\\lambda$', \n 'C',\n 'D',\n '$\\mathsf{\\Omega}$',\n '$\\\\gamma$']\n\n# Labels for the different chains\nchainLabels = [\"data1 $\\lambda$\",\n \"data 2\"]\n\n# List of Gaussian curves to plot\n#(to represent priors): mean, width\n# Empty () or None if no prior to plot\npriors = ((2, 1),\n (-1, 2),\n (),\n (0, .4),\n None,\n (1,1),\n None,\n None)\n\n# List of truth values, to mark best-fit or input values\n# NOT a python array because of different lengths\n# Here we choose two sets of truth values\ntruths = ((4, .5, None, .1, 0, None, None, 0),\n (None, None, .3, 1, None, None, None, None))\n\n# Labels for the different truths\ntruthLabels = ( 'the truth',\n 'also true')\n\n# Do the magic\nGTC = pygtc.plotGTC(chains=[samples1,samples2],\n paramNames=names,\n chainLabels=chainLabels,\n truths=truths,\n truthLabels=truthLabels,\n priors=priors)", "Make figure publication ready\n\nSee how the prior for $B_{\\lambda}$ is cut off on the left? Let's display $B_\\lambda$ in the range (-5,4). Also, we could show a narrower range for $\\lambda$ like (-3,3).\nGiven that we're showing two sets of truth lines, let's show the line styles in the legend (legendMarker=True).\nFinally, let's make the figure size publication ready for MNRAS. 
Given that we're showing eight parameters, we'll want to choose figureSize='MNRAS_page' and show a full page-width figure.\nSave the figure as fullGTC.pdf and paste it into your publication!", "# List of parameter ranges to show,\n# empty () or None to let pyGTC decide\nparamRanges = (None,\n (-5,4),\n (),\n (-3,3),\n None,\n None,\n None,\n None)\n\n# Do the magic\nGTC = pygtc.plotGTC(chains=[samples1,samples2],\n paramNames=names,\n chainLabels=chainLabels,\n truths=truths,\n truthLabels=truthLabels,\n priors=priors,\n paramRanges=paramRanges,\n figureSize='MNRAS_page',\n plotName='fullGTC.pdf')", "Single 2d panel\nSee how the covariance between C and D is a ground-breaking result? Let's look in more detail!\nHere, we'll want single-column figures.", "# Redefine priors and truths\npriors2d = (None,(1,1))\ntruths2d = (0,None)\n\n# The 2d panel and the 1d histograms\nGTC = pygtc.plotGTC(chains=[samples1[:,4:6], samples2[:,4:6]],\n paramNames=names[4:6],\n chainLabels=chainLabels,\n truths=truths2d,\n truthLabels=truthLabels[0],\n priors=priors2d,\n figureSize='MNRAS_column')\n\n# Only the 2d panel\nRange2d = ((-3,5),(-3,7)) # To make sure there's enough space for the legend\n\nGTC = pygtc.plotGTC(chains=[samples1[:,4:6],samples2[:,4:6]],\n paramNames=names[4:6],\n chainLabels=chainLabels,\n truths=truths2d,\n truthLabels=truthLabels[0],\n priors=priors2d,\n paramRanges=Range2d,\n figureSize='MNRAS_column',\n do1dPlots=False)", "Single 1d panel\nFinally, let's just plot the posterior on C", "# Bit tricky, but remember each data set needs shape of (Npoints, nDim)\ninputarr = [np.array([samples1[:,4]]).T,\n np.array([samples2[:,4]]).T]\ntruth1d = [0.]\nGTC = pygtc.plotGTC(chains=inputarr,\n paramNames=names[4],\n chainLabels=chainLabels,\n truths=truth1d,\n truthLabels=truthLabels[0],\n figureSize='MNRAS_column',\n doOnly1dPlot=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AllenDowney/ModSimPy
notebooks/chap11.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 11\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import functions from the modsim.py module\nfrom modsim import *", "SIR implementation\nWe'll use a State object to represent the number (or fraction) of people in each compartment.", "init = State(S=89, I=1, R=0)", "To convert from number of people to fractions, we divide through by the total.", "init /= sum(init)", "make_system creates a System object with the given parameters.", "def make_system(beta, gamma):\n \"\"\"Make a system object for the SIR model.\n \n beta: contact rate in days\n gamma: recovery rate in days\n \n returns: System object\n \"\"\"\n init = State(S=89, I=1, R=0)\n init /= sum(init)\n\n t0 = 0\n t_end = 7 * 14\n\n return System(init=init, t0=t0, t_end=t_end,\n beta=beta, gamma=gamma)", "Here's an example with hypothetical values for beta and gamma.", "tc = 3 # time between contacts in days \ntr = 4 # recovery time in days\n\nbeta = 1 / tc # contact rate in per day\ngamma = 1 / tr # recovery rate in per day\n\nsystem = make_system(beta, gamma)", "The update function takes the state during the current time step and returns the state during the next time step.", "def update_func(state, t, system):\n \"\"\"Update the SIR model.\n \n state: State with variables S, I, R\n t: time step\n system: System with beta and gamma\n \n returns: State object\n \"\"\"\n s, i, r = state\n\n infected = system.beta * i * s \n recovered = system.gamma * i\n \n s -= infected\n i += infected - recovered\n r += recovered\n \n return State(S=s, I=i, R=r)", "To run a single time step, we call it like this:", "state = update_func(init, 0, system)", "Now we can run a simulation by calling the update function for each time step.", "def run_simulation(system, update_func):\n \"\"\"Runs a simulation of the system.\n \n system: System object\n update_func: function that updates state\n \n returns: State object for final state\n \"\"\"\n state = system.init\n \n for t in linrange(system.t0, system.t_end):\n state = update_func(state, t, system)\n \n return state", "The result is the state of the system at t_end", "run_simulation(system, update_func)", "Exercise Suppose the time between contacts is 4 days and the recovery time is 5 days. 
After 14 weeks, how many students, total, have been infected?\nHint: what is the change in S between the beginning and the end of the simulation?", "# Solution goes here", "Using TimeSeries objects\nIf we want to store the state of the system at each time step, we can use one TimeSeries object for each state variable.", "def run_simulation(system, update_func):\n \"\"\"Runs a simulation of the system.\n \n Add three Series objects to the System: S, I, R\n \n system: System object\n update_func: function that updates state\n \"\"\"\n S = TimeSeries()\n I = TimeSeries()\n R = TimeSeries()\n\n state = system.init\n t0 = system.t0\n S[t0], I[t0], R[t0] = state\n \n for t in linrange(system.t0, system.t_end):\n state = update_func(state, t, system)\n S[t+1], I[t+1], R[t+1] = state\n \n return S, I, R", "Here's how we call it.", "tc = 3 # time between contacts in days \ntr = 4 # recovery time in days\n\nbeta = 1 / tc # contact rate in per day\ngamma = 1 / tr # recovery rate in per day\n\nsystem = make_system(beta, gamma)\nS, I, R = run_simulation(system, update_func)", "And then we can plot the results.", "def plot_results(S, I, R):\n \"\"\"Plot the results of a SIR model.\n \n S: TimeSeries\n I: TimeSeries\n R: TimeSeries\n \"\"\"\n plot(S, '--', label='Susceptible')\n plot(I, '-', label='Infected')\n plot(R, ':', label='Recovered')\n decorate(xlabel='Time (days)',\n ylabel='Fraction of population')", "Here's what they look like.", "plot_results(S, I, R)\nsavefig('figs/chap11-fig01.pdf')", "Using a DataFrame\nInstead of making three TimeSeries objects, we can use one DataFrame.\nWe have to use row to selects rows, rather than columns. But then Pandas does the right thing, matching up the state variables with the columns of the DataFrame.", "def run_simulation(system, update_func):\n \"\"\"Runs a simulation of the system.\n \n system: System object\n update_func: function that updates state\n \n returns: TimeFrame\n \"\"\"\n frame = TimeFrame(columns=system.init.index)\n frame.row[system.t0] = system.init\n \n for t in linrange(system.t0, system.t_end):\n frame.row[t+1] = update_func(frame.row[t], t, system)\n \n return frame", "Here's how we run it, and what the result looks like.", "tc = 3 # time between contacts in days \ntr = 4 # recovery time in days\n\nbeta = 1 / tc # contact rate in per day\ngamma = 1 / tr # recovery rate in per day\n\nsystem = make_system(beta, gamma)\nresults = run_simulation(system, update_func)\nresults.head()", "We can extract the results and plot them.", "plot_results(results.S, results.I, results.R)", "Exercises\nExercise Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.", "# Solution goes here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gojomo/gensim
docs/notebooks/topic_methods.ipynb
lgpl-2.1
[ "New Term Topics Methods and Document Coloring", "from gensim.corpora import Dictionary\nfrom gensim.models import ldamodel\nimport numpy\n%matplotlib inline\n\nimport logging\nlogging.basicConfig(level=logging.INFO)", "We're setting up our corpus now. We want to show off the new get_term_topics and get_document_topics functionalities, and a good way to do so is to play around with words which might have different meanings in different context.\nThe word bank is a good candidate here, where it can mean either the financial institution or a river bank.\nIn the toy corpus presented, there are 11 documents, 5 river related and 6 finance related.", "import gensim.downloader\ncorpus = gensim.downloader.load(\"20-newsgroups\")\n\nimport collections\nfrom gensim.parsing.preprocessing import preprocess_string\n\ntexts = [\n preprocess_string(text['data'])\n for text in corpus\n if text['topic'] in ('soc.religion.christian', 'talk.politics.guns')\n]\n\ndictionary = Dictionary(texts)\ndictionary.filter_extremes(no_above=0.1, no_below=10)\ncorpus = [dictionary.doc2bow(text) for text in texts]", "We set up the LDA model in the corpus. We set the number of topics to be 2, and expect to see one which is to do with river banks, and one to do with financial banks.", "numpy.random.seed(1) # setting random seed to get the same results each time.\nmodel = ldamodel.LdaModel(corpus, id2word=dictionary, num_topics=2, alpha='asymmetric', minimum_probability=1e-8)\n\nmodel.show_topics()", "And like we expected, the LDA model has given us near perfect results. Bank is the most influential word in both the topics, as we can see. The other words help define what kind of bank we are talking about. Let's now see where our new methods fit in.\nget_term_topics\nThe function get_term_topics returns the odds of that particular word belonging to a particular topic. \nA few examples:", "model.get_term_topics('hell')", "Makes sense, the value for it belonging to topic_0 is a lot more.", "model.get_term_topics('firearm')", "This also works out well, the word finance is more likely to be in topic_1 to do with financial banks.", "model.get_term_topics('car')", "And this is particularly interesting. Since the word bank is likely to be in both the topics, the values returned are also very similar.\nget_document_topics and Document Word-Topic Coloring\nget_document_topics is an already existing gensim functionality which uses the inference function to get the sufficient statistics and figure out the topic distribution of the document.\nThe addition to this is the ability for us to now know the topic distribution for each word in the document. \nLet us test this with two different documents which have the word bank in it, one in the finance context and one in the river context.\nThe get_document_topics method returns (along with the standard document topic proprtion) the word_type followed by a list sorted with the most likely topic ids, when per_word_topics is set as true.", "bow_water = ['bank','water','bank']\nbow_finance = ['bank','finance','bank']\n\nbow = model.id2word.doc2bow(bow_water) # convert to bag of words format first\ndoc_topics, word_topics, phi_values = model.get_document_topics(bow, per_word_topics=True)\n\nword_topics", "Now what does that output mean? It means that like word_type 1, our word_type 3, which is the word bank, is more likely to be in topic_0 than topic_1.\nYou must have noticed that while we unpacked into doc_topics and word_topics, there is another variable - phi_values. 
Like the name suggests, phi_values contains the phi values for each topic for that particular word, scaled by feature length. Phi is essentially the probability of that word in that document belonging to a particular topic. The next few lines should illustrate this.", "phi_values", "This means that word_type 0 has the following phi_values for each of the topics. \nWhat is interesting to note is word_type 3 - because it has 2 occurrences (i.e., the word bank appears twice in the bow), we can see that the scaling by feature length is very evident. The sum of the phi_values is 2, and not 1.\nNow that we know exactly what get_document_topics does, let us now do the same with our second document, bow_finance.", "bow = model.id2word.doc2bow(bow_finance) # convert to bag of words format first\ndoc_topics, word_topics, phi_values = model.get_document_topics(bow, per_word_topics=True)\n\nword_topics", "And lo and behold, because the word bank is now used in the financial context, it immediately swaps to being more likely associated with topic_1.\nWe've seen quite clearly that based on the context, the most likely topic associated with a word can change. \nThis differs from our previous method, get_term_topics, where it is a 'static' topic distribution. \nIt must also be noted that because the gensim implementation of LDA uses Variational Bayes sampling, a word_type in a document is only given one topic distribution. For example, the sentence 'the bank by the river bank' is likely to be assigned to topic_0, and each of the bank word instances has the same distribution.", "get_document_topics for entire corpus\nYou can get doc_topics, word_topics and phi_values for all the documents in the corpus in the following manner:", "all_topics = model.get_document_topics(corpus, per_word_topics=True)\n\nfor doc_topics, word_topics, phi_values in all_topics:\n    print('New Document \\n')\n    print('Document topics:', doc_topics)\n    print('Word topics:', word_topics)\n    print('Phi values:', phi_values)\n    print(\" \")\n    print('-------------- \\n')", "In case you want to store doc_topics, word_topics and phi_values for all the documents in the corpus in a variable and later access details of a particular document using its index, it can be done in the following manner:", "topics = model.get_document_topics(corpus, per_word_topics=True)\nall_topics = [(doc_topics, word_topics, word_phis) for doc_topics, word_topics, word_phis in topics]", "Now, I can access details of a particular document, say Document #3, as follows:", "doc_topic, word_topics, phi_values = all_topics[2]\nprint('Document topic:', doc_topic, \"\\n\")\nprint('Word topic:', word_topics, \"\\n\")\nprint('Phi value:', phi_values)", "We can print details for all the documents (as shown above), in the following manner:", "for doc in all_topics:\n    print('New Document \\n')\n    print('Document topic:', doc[0])\n    print('Word topic:', doc[1])\n    print('Phi value:', doc[2])\n    print(\" \")\n    print('-------------- \\n')", "Coloring topic-terms\nThese methods can come in handy when we want to color the words in a corpus or a document. If we wish to color the words in a corpus (i.e., color all the words in the dictionary of the corpus), then get_term_topics would be a better choice. If not, get_document_topics would do the trick.\nWe'll now attempt to color these words and plot it using matplotlib. 
\nThis is just one way to go about plotting words - there are more and better ways.\nWordCloud is such a python package which also does this.\nFor our simple illustration, let's keep topic_1 as red, and topic_0 as blue.", "# this is a sample method to color words. Like mentioned before, there are many ways to do this.\n\ndef color_words(model, doc):\n import matplotlib.pyplot as plt\n import matplotlib.patches as patches\n \n # make into bag of words\n doc = model.id2word.doc2bow(doc)\n # get word_topics\n doc_topics, word_topics, phi_values = model.get_document_topics(doc, per_word_topics=True)\n\n # color-topic matching\n topic_colors = { 1:'red', 0:'blue'}\n \n # set up fig to plot\n fig = plt.figure()\n ax = fig.add_axes([0,0,1,1])\n\n # a sort of hack to make sure the words are well spaced out.\n word_pos = 1/len(doc)\n \n # use matplotlib to plot words\n for word, topics in word_topics:\n ax.text(word_pos, 0.8, model.id2word[word],\n horizontalalignment='center',\n verticalalignment='center',\n fontsize=20, color=topic_colors[topics[0]], # choose just the most likely topic\n transform=ax.transAxes)\n word_pos += 0.2 # to move the word for the next iter\n\n ax.set_axis_off()\n plt.show()\n", "Let us revisit our old examples to show some examples of document coloring", "# our river bank document\n\nbow_water = ['bank','water','bank']\ncolor_words(model, bow_water)\n\nbow_finance = ['bank','finance','bank']\ncolor_words(model, bow_finance)", "What is fun to note here is that while bank was colored blue in our first example, it is now red because of the financial context - something which the numbers proved to us before.", "# sample doc with a somewhat even distribution of words among the likely topics\n\ndoc = ['bank', 'water', 'bank', 'finance', 'money','sell','river','fast','tree']\ncolor_words(model, doc)\n", "We see that the document word coloring is done just the way we expected. :)\nWord-coloring a dictionary\nWe can do the same for the entire vocabulary, statically. 
The only difference would be in using get_term_topics, and iterating over the dictionary.\nWe will use a modified version of the coloring code when passing an entire dictionary.", "def color_words_dict(model, dictionary):\n import matplotlib.pyplot as plt\n import matplotlib.patches as patches\n\n word_topics = []\n for word_id in dictionary:\n word = str(dictionary[word_id])\n # get_term_topics returns static topics, as mentioned before\n probs = model.get_term_topics(word)\n # we are creating word_topics which is similar to the one created by get_document_topics\n try:\n if probs[0][1] >= probs[1][1]:\n word_topics.append((word_id, [0, 1]))\n else:\n word_topics.append((word_id, [1, 0]))\n # this in the case only one topic is returned\n except IndexError:\n word_topics.append((word_id, [probs[0][0]]))\n \n # color-topic matching\n topic_colors = { 1:'red', 0:'blue'}\n \n # set up fig to plot\n fig = plt.figure()\n ax = fig.add_axes([0,0,1,1])\n\n # a sort of hack to make sure the words are well spaced out.\n word_pos = 1/len(doc)\n \n # use matplotlib to plot words\n for word, topics in word_topics:\n ax.text(word_pos, 0.8, model.id2word[word],\n horizontalalignment='center',\n verticalalignment='center',\n fontsize=20, color=topic_colors[topics[0]], # choose just the most likely topic\n transform=ax.transAxes)\n word_pos += 0.2 # to move the word for the next iter\n\n ax.set_axis_off()\n plt.show()\n\n\ncolor_words_dict(model, dictionary)", "As we can see, the red words are to do with finance, and the blue ones are to do with water. \nYou can also notice that some words, like mud, shore and borrow seem to be incorrectly colored - however, they are correctly colored according to the LDA model used for coloring. A small corpus means that the LDA algorithm might not assign 'ideal' topic proportions to each word. Fine tuning the model and having a larger corpus would improve the model, and improve the results of the word coloring." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
BorisPolonsky/LearningTensorFlow
GAN/GAN 101.ipynb
mit
[ "GAN 101\nA simple gan model within TensorFlow r1.10 framework. \nImport", "import os\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport sys\n% matplotlib notebook", "Load Data Set", "mnist = tf.keras.datasets.mnist\n(x_train, y_train), (x_test, y_test) = mnist.load_data(path=\"mnist.npz\")", "Verify data strucutre", "x_train.shape\n\ny_train.shape", "Define model\nRemarks\nInitialize the variables with caution\nIn the two loss function defined below, singulars will occur in case the output of the network contains $1$ or $0$, given that the loss function contains the term $\\log(x)$ or $\\log(1-x)$. In our case the output layer of both generator and discriminator network contains a simgoid function. In $y=sigmoid(wx+b)$, if the weights are too large then the result of this term quickly approaches $0$ or $1$ as $x$ moves away from $0$. Hence the mean and stddev of weight distribution at the output layer are set to $0$ a relatively small value, respectively, to opt out the occurence of singulars in loss functions.", "class GAN:\n def __init__(self, noise_input_tensor, image_input_tensor, generator_hidden_dim, discriminator_hidden_dim):\n self._noise_input = noise_input_tensor\n self._image_input = image_input_tensor\n with tf.variable_scope(\"generator\"):\n self._generator_output, self._generator_parameters = self._fnn(\n noise_input_tensor, generator_hidden_dim, image_input_tensor.shape[1], \n activation=lambda x: 255 * tf.sigmoid(x)) # (0, 1) -> (0.0, 255.0)\n with tf.variable_scope(\"discriminator\"):\n self._discriminator_output_for_real_data, self._discriminator_parameters = self._fnn(\n image_input_tensor, discriminator_hidden_dim, 1, activation=tf.sigmoid)\n with tf.variable_scope(\"discriminator\", reuse=True): # Share weights and biases\n d_o_fake = self._discriminator_output_for_synth, _ = self._fnn(self._generator_output, \n discriminator_hidden_dim, \n 1, \n activation=tf.sigmoid)\n \n def _fnn(self, input_tensor, hidden_dim, output_dim, activation=None):\n w_xh = tf.get_variable(initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),\n shape=[input_tensor.shape[1], hidden_dim], name=\"W_xh\")\n b_xh = tf.get_variable(initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01),\n shape=[hidden_dim], name=\"b_xh\")\n hidden = tf.nn.relu(tf.add(tf.matmul(input_tensor, w_xh), b_xh))\n w_ho = tf.get_variable(initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01), \n shape=[hidden_dim, output_dim], name=\"W_ho\")\n b_ho = tf.get_variable(initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.01), \n shape=[output_dim], name=\"b_ho\")\n output = tf.add(tf.matmul(hidden, w_ho), b_ho)\n if activation is None:\n return output, (w_xh, b_xh, w_ho, b_ho)\n else:\n return activation(output), (w_xh, b_xh, w_ho, b_ho)\n \n \n @property\n def noise_input(self):\n return self._noise_input\n \n @property\n def image_input(self):\n return self._image_input\n \n @property\n def generator_output(self):\n return self._generator_output\n\n @property\n def discriminator_output_from_generator(self):\n return self._discriminator_output_for_synth\n \n @property\n def discriminator_output_from_image_input(self):\n return self._discriminator_output_for_real_data\n \n @property\n def g_param(self):\n return self._generator_parameters[:]\n \n @property\n def d_param(self):\n return self._discriminator_parameters[:]", "Specify Dimensions", "noise_dim = 128\nimage_dim = x_train.shape[1] * x_train.shape[2]\ngenerator_hidden_dim = 
256\ndiscriminator_hidden_dim = 256", "Create GAN model, define I/O , loss functions and optimizers.\nRemarks\nSpecifiy the variables to be trained\nBy default in TensorFlow, all variables are updated by each optimizer, so we need to specify the variables to be trained for each one of the optimizer. In this case we have two optimizers for improving the performance of the generator network and discriminator network, respectively.", "tf.reset_default_graph()\nwith tf.variable_scope(\"GAN\"):\n generator_input = tf.placeholder(shape=[None, noise_dim], dtype=tf.float32, name=\"generator_input\")\n discriminator_input = tf.placeholder(shape=[None, image_dim], dtype=tf.float32, name=\"discriminator_input_real\")\n gan = GAN(generator_input, discriminator_input, generator_hidden_dim, discriminator_hidden_dim)\n generator_loss = -tf.reduce_mean(tf.log(gan.discriminator_output_from_generator), name=\"generator_loss\")\n discriminator_loss = -tf.reduce_mean(\n tf.log(gan.discriminator_output_from_image_input)+tf.log(1.0-gan.discriminator_output_from_generator), \n name=\"discriminator_loss\")\nwith tf.variable_scope(\"training_configuration\"):\n g_learing_rate_tensor = tf.get_variable(dtype=tf.float32, initializer=0.0, name=\"generator_lr\")\n d_learing_rate_tensor = tf.get_variable(dtype=tf.float32, initializer=0.0, name=\"discriminator_lr\")\n global_step = tf.get_variable(dtype=tf.int32, shape=[], name=\"global_step\", trainable=False)\n g_train_op = tf.train.AdamOptimizer(learning_rate=g_learing_rate_tensor).minimize(generator_loss, \n var_list=gan.g_param, \n global_step=global_step)\n d_train_op = tf.train.AdamOptimizer(learning_rate=d_learing_rate_tensor).minimize(discriminator_loss, \n var_list=gan.d_param, global_step=global_step)\n \n summary_gen_loss = tf.summary.scalar(tensor=generator_loss, name=\"generator_loss_summary\")\n summary_dis_loss = tf.summary.scalar(tensor=discriminator_loss, name=\"discriminator_loss_summary\")\n summary_all = tf.summary.merge_all()", "Prepare Dataset and Start Training", "with tf.variable_scope(\"training_configuration\", auxiliary_name_scope=False): # Re-entering the name scope\n batch_size_t = tf.placeholder(dtype=tf.int64, shape=[], name=\"batch_size\")\n training_set = tf.data.Dataset.from_tensor_slices((x_train, y_train))\n training_set = training_set.batch(batch_size=batch_size_t).map(\n lambda x, y: (tf.reshape(tensor=x, shape=[-1, 28 * 28]), y))\n batch_iter_train = training_set.make_initializable_iterator()\n next_batch_train = batch_iter_train.get_next()\n\nn_epoch = 100\nbatch_size = 50\ng_lr, d_lr = 2e-5, 2e-5\nlr_decay = 0.97\nnum_batch = int(x_train.shape[0]/batch_size)\nk = 1\nmodel_param_path = os.path.normpath(r\"./model_checkpoints\")\nsaver=tf.train.Saver()\ndef batch_sampler(batch_size):\n return np.random.uniform(-5.0, 5.0, size=[batch_size, noise_dim])\nwith tf.Session() as sess, tf.summary.FileWriter(logdir=model_param_path) as writer:\n writer.add_graph(graph=tf.get_default_graph())\n sess.run(tf.global_variables_initializer())\n for epoch in range(n_epoch):\n sess.run([tf.assign(g_learing_rate_tensor, g_lr), tf.assign(d_learing_rate_tensor, d_lr)])\n sess.run(batch_iter_train.initializer, feed_dict={batch_size_t: batch_size})\n for batch_no in range(num_batch):\n x, _ = sess.run(next_batch_train)\n # print(x.shape)\n # x = x.reshape([-1, image_dim]) # flatten each sample manually\n # Train the discriminator network k times\n for _ in range(k):\n noise_batch = batch_sampler(batch_size)\n feed_dict = {gan.image_input: x, 
gan.noise_input: noise_batch}\n sess.run(d_train_op, feed_dict=feed_dict)\n # Train the generator network once\n noise_batch = batch_sampler(batch_size)\n feed_dict = {gan.image_input: x, gan.noise_input: noise_batch}\n _, summary, step = sess.run([g_train_op, summary_all, global_step], feed_dict=feed_dict)\n writer.add_summary(summary=summary, global_step=step)\n g_lr = g_lr * lr_decay\n d_lr = d_lr * lr_decay\n saver.save(sess=sess, save_path=os.path.join(model_param_path, \"GAN\"))\nprint(\"Done!\")", "Test Network", "n = 10\ncanvas = np.empty((28 * n, 28 * n))\nwith tf.Session() as sess:\n ckpt = tf.train.get_checkpoint_state(model_param_path)\n if ckpt and ckpt.model_checkpoint_path:\n saver.restore(sess, ckpt.model_checkpoint_path)\n for i in range(n):\n # Noise input.\n z = batch_sampler(n)\n # Generate image from noise.\n g = sess.run(gan.generator_output, feed_dict={gan.noise_input: z})\n # Reverse colours for better display\n # g = -1 * (g - 1)\n g = (-1 * (g - 255)).astype(np.int32)\n for j in range(n):\n # Draw the generated digits\n canvas[i * 28:(i + 1) * 28, j * 28:(j + 1) * 28] = g[j].reshape([28, 28])\n plt.figure(figsize=(n, n))\n plt.imshow(canvas, origin=\"upper\", cmap=\"gray\")\n plt.show()\n else:\n print(\"Failed to load model checkpoint.\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rucka/NeuralNetworkPlayground
notebook/sample/3_mnist_from_scratch.ipynb
apache-2.0
[ "MNIST from scratch\nThis notebook walks through an example of training a TensorFlow model to do digit classification using the MNIST data set. MNIST is a labeled set of images of handwritten digits.\nAn example follows.", "from __future__ import print_function\n\nfrom IPython.display import Image\nimport base64\nImage(data=base64.decodestring(\"iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxk
pes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7oyXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggke
soQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9lVSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==\".encode('utf-8')), embed=True)", "We're going to be building a model that recognizes these digits as 5, 0, and 4.\nImports and input data\nWe'll proceed in steps, beginning with importing and inspecting the MNIST data. This doesn't have anything to do with TensorFlow in particular -- we're just downloading the data archive.", "import os\nfrom six.moves.urllib.request import urlretrieve\n\nSOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'\nWORK_DIRECTORY = \"/tmp/mnist-data\"\n\ndef maybe_download(filename):\n \"\"\"A helper to download the data files if not present.\"\"\"\n if not os.path.exists(WORK_DIRECTORY):\n os.mkdir(WORK_DIRECTORY)\n filepath = os.path.join(WORK_DIRECTORY, filename)\n if not os.path.exists(filepath):\n filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)\n statinfo = os.stat(filepath)\n print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')\n else:\n print('Already downloaded', filename)\n return filepath\n\ntrain_data_filename = maybe_download('train-images-idx3-ubyte.gz')\ntrain_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')\ntest_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')\ntest_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')", "Working with the images\nNow we have the files, but the format requires a bit of pre-processing before we can work with it. The data is gzipped, requiring us to decompress it. And, each of the images are grayscale-encoded with values from [0, 255]; we'll normalize these to [-0.5, 0.5].\nLet's try to unpack the data using the documented format:\n[offset] [type] [value] [description] \n0000 32 bit integer 0x00000803(2051) magic number \n0004 32 bit integer 60000 number of images \n0008 32 bit integer 28 number of rows \n0012 32 bit integer 28 number of columns \n0016 unsigned byte ?? 
pixel \n0017 unsigned byte ?? pixel \n........ \nxxxx unsigned byte ?? pixel\n\nPixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).\nWe'll start by reading the first image from the test data as a sanity check.", "import gzip, binascii, struct, numpy\nimport matplotlib.pyplot as plt\n\nwith gzip.open(test_data_filename) as f:\n # Print the header fields.\n for field in ['magic number', 'image count', 'rows', 'columns']:\n # struct.unpack reads the binary data provided by f.read.\n # The format string '>i' decodes a big-endian integer, which\n # is the encoding of the data.\n print(field, struct.unpack('>i', f.read(4))[0])\n \n # Read the first 28x28 set of pixel values. \n # Each pixel is one byte, [0, 255], a uint8.\n buf = f.read(28 * 28)\n image = numpy.frombuffer(buf, dtype=numpy.uint8)\n \n # Print the first few values of image.\n print('First 10 pixels:', image[:10])", "The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.\nWe could print all 28 * 28 values, but what we really need to do to make sure we're reading our data properly is look at an image.", "%matplotlib inline\n\n# We'll show the image and its pixel value histogram side-by-side.\n_, (ax1, ax2) = plt.subplots(1, 2)\n\n# To interpret the values as a 28x28 image, we need to reshape\n# the numpy array, which is one dimensional.\nax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);\n\nax2.hist(image, bins=20, range=[0,255]);", "The large number of 0 values correspond to the background of the image, another large mass of value 255 is black, and a mix of grayscale transition values in between.\nBoth the image and histogram look sensible. But, it's good practice when training image models to normalize values to be centered around 0.\nWe'll do that next. The normalization code is fairly short, and it may be tempting to assume we haven't made mistakes, but we'll double-check by looking at the rendered input and histogram again. Malformed inputs are a surprisingly common source of errors when developing new models.", "# Let's convert the uint8 image to 32 bit floats and rescale \n# the values to be centered around 0, between [-0.5, 0.5]. \n# \n# We again plot the image and histogram to check that we \n# haven't mangled the data.\nscaled = image.astype(numpy.float32)\nscaled = (scaled - (255 / 2.0)) / 255\n_, (ax1, ax2) = plt.subplots(1, 2)\nax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);\nax2.hist(scaled, bins=20, range=[-0.5, 0.5]);", "Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].\nReading the labels\nLet's next unpack the test label data. The format here is similar: a magic number followed by a count followed by the labels as uint8 values. In more detail:\n[offset] [type] [value] [description] \n0000 32 bit integer 0x00000801(2049) magic number (MSB first) \n0004 32 bit integer 10000 number of items \n0008 unsigned byte ?? label \n0009 unsigned byte ?? label \n........ \nxxxx unsigned byte ?? label\n\nAs with the image data, let's read the first test set value to sanity check our input path. 
We'll expect a 7.", "with gzip.open(test_labels_filename) as f:\n # Print the header fields.\n for field in ['magic number', 'label count']:\n print(field, struct.unpack('>i', f.read(4))[0])\n\n print('First label:', struct.unpack('B', f.read(1))[0])", "Indeed, the first label of the test set is 7.\nForming the training, testing, and validation data sets\nNow that we understand how to read a single element, we can read a much larger set that we'll use for training, testing, and validation.\nImage data\nThe code below is a generalization of our prototyping above that reads the entire test and training data set.", "IMAGE_SIZE = 28\nPIXEL_DEPTH = 255\n\ndef extract_data(filename, num_images):\n \"\"\"Extract the images into a 4D tensor [image index, y, x, channels].\n \n For MNIST data, the number of channels is always 1.\n\n Values are rescaled from [0, 255] down to [-0.5, 0.5].\n \"\"\"\n print('Extracting', filename)\n with gzip.open(filename) as bytestream:\n # Skip the magic number and dimensions; we know these values.\n bytestream.read(16)\n\n buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)\n data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)\n data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH\n data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)\n return data\n\ntrain_data = extract_data(train_data_filename, 60000)\ntest_data = extract_data(test_data_filename, 10000)", "A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.\nLet's make sure we've got the reshaping parameters right by inspecting the dimensions and the first two images. (Again, mangled input is a very common source of errors.)", "print('Training data shape', train_data.shape)\n_, (ax1, ax2) = plt.subplots(1, 2)\nax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);\nax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);", "Looks good. Now we know how to index our full set of training and test images.\nLabel data\nLet's move on to loading the full set of labels. As is typical in classification problems, we'll convert our input labels into a 1-hot encoding over a length 10 vector corresponding to 10 digits. The vector [0, 1, 0, 0, 0, 0, 0, 0, 0, 0], for example, would correspond to the digit 1.", "NUM_LABELS = 10\n\ndef extract_labels(filename, num_images):\n \"\"\"Extract the labels into a 1-hot matrix [image index, label index].\"\"\"\n print('Extracting', filename)\n with gzip.open(filename) as bytestream:\n # Skip the magic number and count; we know these values.\n bytestream.read(8)\n buf = bytestream.read(1 * num_images)\n labels = numpy.frombuffer(buf, dtype=numpy.uint8)\n # Convert to dense 1-hot representation.\n return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)\n\ntrain_labels = extract_labels(train_labels_filename, 60000)\ntest_labels = extract_labels(test_labels_filename, 10000)", "As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.", "print('Training labels shape', train_labels.shape)\nprint('First label vector', train_labels[0])\nprint('Second label vector', train_labels[1])", "The 1-hot encoding looks reasonable.\nSegmenting data into training, test, and validation\nThe final step in preparing our data is to split it into three sets: training, test, and validation. 
This isn't the format of the original data set, so we'll take a small slice of the training data and treat that as our validation set.", "VALIDATION_SIZE = 5000\n\nvalidation_data = train_data[:VALIDATION_SIZE, :, :, :]\nvalidation_labels = train_labels[:VALIDATION_SIZE]\ntrain_data = train_data[VALIDATION_SIZE:, :, :, :]\ntrain_labels = train_labels[VALIDATION_SIZE:]\n\ntrain_size = train_labels.shape[0]\n\nprint('Validation shape', validation_data.shape)\nprint('Train size', train_size)", "Defining the model\nNow that we've prepared our data, we're ready to define our model.\nThe comments describe the architecture, which fairly typical of models that process image data. The raw input passes through several convolution and max pooling layers with rectified linear activations before several fully connected layers and a softmax loss for predicting the output class. During training, we use dropout.\nWe'll separate our model definition into three steps:\n\nDefining the variables that will hold the trainable weights.\nDefining the basic model graph structure described above. And,\nStamping out several copies of the model graph for training, testing, and validation.\n\nWe'll start with the variables.", "import tensorflow as tf\n\n# We'll bundle groups of examples during training for efficiency.\n# This defines the size of the batch.\nBATCH_SIZE = 60\n# We have only one channel in our grayscale images.\nNUM_CHANNELS = 1\n# The random seed that defines initialization.\nSEED = 42\n\n# This is where training samples and labels are fed to the graph.\n# These placeholder nodes will be fed a batch of training data at each\n# training step, which we'll write once we define the graph structure.\ntrain_data_node = tf.placeholder(\n tf.float32,\n shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))\ntrain_labels_node = tf.placeholder(tf.float32,\n shape=(BATCH_SIZE, NUM_LABELS))\n\n# For the validation and test data, we'll just hold the entire dataset in\n# one constant node.\nvalidation_data_node = tf.constant(validation_data)\ntest_data_node = tf.constant(test_data)\n\n# The variables below hold all the trainable weights. For each, the\n# parameter defines how the variables will be initialized.\nconv1_weights = tf.Variable(\n tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.\n stddev=0.1,\n seed=SEED))\nconv1_biases = tf.Variable(tf.zeros([32]))\nconv2_weights = tf.Variable(\n tf.truncated_normal([5, 5, 32, 64],\n stddev=0.1,\n seed=SEED))\nconv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))\nfc1_weights = tf.Variable( # fully connected, depth 512.\n tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],\n stddev=0.1,\n seed=SEED))\nfc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))\nfc2_weights = tf.Variable(\n tf.truncated_normal([512, NUM_LABELS],\n stddev=0.1,\n seed=SEED))\nfc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))\n\nprint('Done')", "Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.\nWe'll define a helper to do this, model, which will return copies of the graph suitable for training and testing. Note the train argument, which controls whether or not dropout is used in the hidden layer. (We want to use dropout only during training.)", "def model(data, train=False):\n \"\"\"The Model definition.\"\"\"\n # 2D convolution, with 'SAME' padding (i.e. the output feature map has\n # the same size as the input). 
Note that {strides} is a 4D array whose\n # shape matches the data layout: [image index, y, x, depth].\n conv = tf.nn.conv2d(data,\n conv1_weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Bias and rectified linear non-linearity.\n relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))\n\n # Max pooling. The kernel size spec ksize also follows the layout of\n # the data. Here we have a pooling window of 2, and a stride of 2.\n pool = tf.nn.max_pool(relu,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n conv = tf.nn.conv2d(pool,\n conv2_weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))\n pool = tf.nn.max_pool(relu,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Reshape the feature map cuboid into a 2D matrix to feed it to the\n # fully connected layers.\n pool_shape = pool.get_shape().as_list()\n reshape = tf.reshape(\n pool,\n [pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])\n \n # Fully connected layer. Note that the '+' operation automatically\n # broadcasts the biases.\n hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)\n\n # Add a 50% dropout during training only. Dropout also scales\n # activations such that no rescaling is needed at evaluation time.\n if train:\n hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)\n return tf.matmul(hidden, fc2_weights) + fc2_biases\n\nprint('Done')", "Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.\nHere, we'll do some customizations depending on which graph we're constructing. train_prediction holds the training graph, for which we use cross-entropy loss and weight regularization. We'll adjust the learning rate during training -- that's handled by the exponential_decay operation, which is itself an argument to the MomentumOptimizer that performs the actual training.\nThe vaildation and prediction graphs are much simpler the generate -- we need only create copies of the model with the validation and test inputs and a softmax classifier as the output.", "# Training computation: logits + cross-entropy loss.\nlogits = model(train_data_node, True)\nloss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(\n labels=train_labels_node, logits=logits))\n\n# L2 regularization for the fully connected parameters.\nregularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +\n tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))\n# Add the regularization term to the loss.\nloss += 5e-4 * regularizers\n\n# Optimizer: set up a variable that's incremented once per batch and\n# controls the learning rate decay.\nbatch = tf.Variable(0)\n# Decay once per epoch, using an exponential schedule starting at 0.01.\nlearning_rate = tf.train.exponential_decay(\n 0.01, # Base learning rate.\n batch * BATCH_SIZE, # Current index into the dataset.\n train_size, # Decay step.\n 0.95, # Decay rate.\n staircase=True)\n# Use simple momentum for the optimization.\noptimizer = tf.train.MomentumOptimizer(learning_rate,\n 0.9).minimize(loss,\n global_step=batch)\n\n# Predictions for the minibatch, validation set and test set.\ntrain_prediction = tf.nn.softmax(logits)\n# We'll compute them only once in a while by calling their {eval()} method.\nvalidation_prediction = tf.nn.softmax(model(validation_data_node))\ntest_prediction = tf.nn.softmax(model(test_data_node))\n\nprint('Done')", "Training and visualizing results\nNow that we have the training, test, and 
validation graphs, we're ready to actually go through the training loop and periodically evaluate loss and error.\nAll of these operations take place in the context of a session. In Python, we'd write something like:\nwith tf.Session() as s:\n  ...training / test / evaluation loop...\n\nBut, here, we'll want to keep the session open so we can poke at values as we work out the details of training. The TensorFlow API includes a function for this, InteractiveSession.\nWe'll start by creating a session and initializing the variables we defined above.", "# Create a new interactive session that we'll use in\n# subsequent code cells.\ns = tf.InteractiveSession()\n\n# Use our newly created session as the default for \n# subsequent operations.\ns.as_default()\n\n# Initialize all the variables we defined above.\ntf.global_variables_initializer().run()", "Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.", "BATCH_SIZE = 60\n\n# Grab the first BATCH_SIZE examples and labels.\nbatch_data = train_data[:BATCH_SIZE, :, :, :]\nbatch_labels = train_labels[:BATCH_SIZE]\n\n# This dictionary maps the batch data (as a numpy array) to the\n# node in the graph it should be fed to.\nfeed_dict = {train_data_node: batch_data,\n             train_labels_node: batch_labels}\n\n# Run the graph and fetch some of the nodes.\n_, l, lr, predictions = s.run(\n  [optimizer, loss, learning_rate, train_prediction],\n  feed_dict=feed_dict)\n\nprint('Done')", "Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.", "print(predictions[0])", "As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares it with the example's label. 
We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.", "# The highest probability in the first entry.\nprint('First prediction', numpy.argmax(predictions[0]))\n\n# But, predictions is actually a list of BATCH_SIZE probability vectors.\nprint(predictions.shape)\n\n# So, we'll take the highest probability for each vector.\nprint('All predictions', numpy.argmax(predictions, 1))", "Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.", "print('Batch labels', numpy.argmax(batch_labels, 1))", "Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.", "correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))\ntotal = predictions.shape[0]\n\nprint(float(correct) / float(total))\n\nconfusions = numpy.zeros([10, 10], numpy.float32)\nbundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))\nfor predicted, actual in bundled:\n confusions[predicted, actual] += 1\n\nplt.grid(False)\nplt.xticks(numpy.arange(NUM_LABELS))\nplt.yticks(numpy.arange(NUM_LABELS))\nplt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');", "Now let's wrap this up into our scoring function.", "def error_rate(predictions, labels):\n \"\"\"Return the error rate and confusions.\"\"\"\n correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))\n total = predictions.shape[0]\n\n error = 100.0 - (100 * float(correct) / float(total))\n\n confusions = numpy.zeros([10, 10], numpy.float32)\n bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))\n for predicted, actual in bundled:\n confusions[predicted, actual] += 1\n \n return error, confusions\n\nprint('Done')", "We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.\nHere, we want to iterate over the entire data set rather than just the first batch, so we'll need to slice the data to that end.\n(One pass through our training set will take some time on a CPU, so be patient if you are executing this notebook.)", "# Train over the first 1/4th of our training set.\nsteps = train_size // BATCH_SIZE\nfor step in range(steps):\n # Compute the offset of the current minibatch in the data.\n # Note that we could use better randomization across epochs.\n offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)\n batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]\n batch_labels = train_labels[offset:(offset + BATCH_SIZE)]\n # This dictionary maps the batch data (as a numpy array) to the\n # node in the graph it should be fed to.\n feed_dict = {train_data_node: batch_data,\n train_labels_node: batch_labels}\n # Run the graph and fetch some of the nodes.\n _, l, lr, predictions = s.run(\n [optimizer, loss, learning_rate, train_prediction],\n feed_dict=feed_dict)\n \n # Print out the loss periodically.\n if step % 100 == 0:\n error, _ = error_rate(predictions, batch_labels)\n print('Step %d of %d' % (step, steps))\n print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))\n print('Validation error: %.1f%%' % error_rate(\n validation_prediction.eval(), validation_labels)[0])\n", "The error seems to have gone down. 
Let's evaluate the results using the test set.\nTo help identify rare mispredictions, we'll include the raw count of each (prediction, label) pair in the confusion matrix.", "test_error, confusions = error_rate(test_prediction.eval(), test_labels)\nprint('Test error: %.1f%%' % test_error)\n\nplt.xlabel('Actual')\nplt.ylabel('Predicted')\nplt.grid(False)\nplt.xticks(numpy.arange(NUM_LABELS))\nplt.yticks(numpy.arange(NUM_LABELS))\nplt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');\n\nfor i, cas in enumerate(confusions):\n for j, count in enumerate(cas):\n if count > 0:\n xoff = .07 * len(str(count))\n plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')", "We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.\nLet's do another sanity check to make sure this matches roughly the distribution of our test set, e.g., it seems like we have fewer '5' values.", "plt.xticks(numpy.arange(NUM_LABELS))\nplt.hist(numpy.argmax(test_labels, 1));", "Indeed, we appear to have fewer 5 labels in the test set. So, on the whole, it seems like our model is learning and our early results are sensible.\nBut, we've only done one round of training. We can greatly improve accuracy by training for longer. To try this out, just re-execute the training cell above." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
antoniomezzacapo/qiskit-tutorial
community/aqua/optimization/grover.ipynb
apache-2.0
[ "Using Grover Search for 3SAT problems\nThis notebook demonstrates how to use the Qiskit Aqua library Grover algorithm and process the result.\nFurther information is available for the algorithms in the github repo qiskit_aqua/readme.md", "import pylab\nfrom qiskit_aqua import run_algorithm\nfrom qiskit_aqua.input import get_input_instance\nfrom qiskit.tools.visualization import circuit_drawer, plot_histogram", "We have a SAT problem to which we want to find solutions using Grover and SAT oracle combination. The SAT problem is specified in the DIMACS CNF format. We read one of the sample cnf files to load the problem.", "with open('3sat3-5.cnf', 'r') as f:\n sat_cnf = f.read()\nprint(sat_cnf)", "In order to run an algorithm we need to create a configuration dictionary with the parameters for the algorithm and any other dependent objects it requires. So we first define a dictionary for the algorithm. We name GROVER as the algorithm and as it has no further parameters we are done. GROVER needs an oracle so we configure one. Here we use the SAT oracle which will allow us to solve an optimization SAT problem by searching solution space. We configure the oracle with the problem we loaded above. We then combine the dictionaries into the final single params dictionary that is passed to run_algorithm.", "algorithm_cfg = {\n 'name': 'Grover'\n}\n\noracle_cfg = {\n 'name': 'SAT',\n 'cnf': sat_cnf\n}\n\nparams = {\n 'problem': {'name': 'search', 'random_seed': 50},\n 'algorithm': algorithm_cfg,\n 'oracle': oracle_cfg,\n 'backend': {'name': 'qasm_simulator'}\n}\n\nresult = run_algorithm(params)\nprint(result['result'])", "As seen above, a satisfying solution to the specified sample SAT problem is obtained, with the absolute values indicating the variable indices, and the signs the True/False assignments, similar to the DIMACS format.\nA measurements result is also returned where it can be seen, below in the plot, that result['result'] was the highest probability. But the other solutions were very close in probability too.", "pylab.rcParams['figure.figsize'] = (8, 4)\nplot_histogram(result['measurements'])\n\ncircuit_drawer(result['circuit'])", "The above figure shows the circuit that was run for Grover. This circuit was returned from the algorithm for the above visualization which was generated using qiskit.tools.visualization functionality." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Cyb3rWard0g/ThreatHunter-Playbook
docs/notebooks/campaigns/apt29Evals.ipynb
gpl-3.0
[ "Free Telemetry Notebook\n| | |\n|:--------------|:---|\n| Group | APT29 |\n| Description | APT29 is a threat group that has been attributed to the Russian government and has operated since at least 2008. This group reportedly compromised the Democratic National Committee starting in the summer of 2015 |\n| Author | Open Threat Research - APT29 Detection Hackathon |\nTelemetry Detection Category", "# Importing Libraries\nfrom bokeh.io import show\nfrom bokeh.plotting import figure\nfrom bokeh.models import ColumnDataSource, LabelSet, HoverTool\nfrom bokeh.transform import dodge\nimport pandas as pd\n\n# You need to run this code at the beginning in order to show visualization using Jupyter Notebooks\nfrom bokeh.io import output_notebook\noutput_notebook()\napt29= pd.read_json('https://raw.githubusercontent.com/OTRF/ThreatHunter-Playbook/master/docs/evals/apt29/data/otr_results.json')\nsummary = (\n apt29\n .groupby(['step','stepname']).agg(total=pd.NamedAgg(column=\"substep\", aggfunc=\"nunique\"))\n .join(\n apt29[apt29['detectiontype'] == 'Telemetry']\n .groupby(['step','stepname']).agg(telemetry=pd.NamedAgg(column=\"vendor\", aggfunc=\"count\"))\n )\n).reset_index()\nsummary['percentage'] = (summary['telemetry'] / summary['total']).map(\"{:.0%}\".format)\n# Get Total Average Telemetry coverage\ntotal_avg_percentage = '{0:.0f}'.format((summary['telemetry'].sum() / summary['total'].sum() * 100))\n\n# Lists of values to create ColumnDataSource\nstepname = summary['stepname'].tolist()\ntotal = summary['total'].tolist()\ntelemetry = summary['telemetry'].tolist()\npercentage = summary['percentage'].tolist()\n\n# Creating ColumnDataSource object: source of data for visualization\nsource = ColumnDataSource(data={'stepname':stepname,'sub-Steps':total,'covered':telemetry,'percentage':percentage})\n\n# Defining HoverTool object (Display info with Mouse): It is applied to chart named 'needHover'\nhover_tool = HoverTool(names = ['needHover'],tooltips = [(\"Covered\", \"@covered\"),(\"Percentage\", \"@percentage\")])\n\n# Creating Figure\np = figure(x_range=stepname,y_range=(0,23),plot_height=550,plot_width=600,toolbar_location='right',tools=[hover_tool])\n\n# Creating Vertical Bar Charts\np.vbar(x=dodge('stepname',0.0,range=p.x_range),top='sub-Steps',width=0.7,source=source,color=\"#c9d9d3\",legend_label=\"Total\")\np.vbar(x=dodge('stepname',0.0, range=p.x_range),top='covered',width=0.7,source=source,color=\"#718dbf\",legend_label=\"Covered\", name = 'needHover')\n\n# Adding Legend\np.legend.location = \"top_right\"\np.legend.orientation = \"vertical\"\np.legend.border_line_width = 3\np.legend.border_line_color = \"black\"\np.legend.border_line_alpha = 0.3\n\n# Adding Title\np.title.text = 'Telemetry Detection Category (Average Coverage: {}%)'.format(total_avg_percentage)\np.title.align = 'center'\np.title.text_font_size = '12pt'\n\n# Adding Axis Labels\np.xaxis.axis_label = 'Emulation Steps'\np.xaxis.major_label_orientation = 45\n\np.yaxis.axis_label = 'Count of Sub-Steps'\n\n# Adding Data Label: Only for total of sub-steps\ntotal_label = LabelSet(x='stepname',y='sub-Steps',text='sub-Steps',text_align='center',level='glyph',source= source)\np.add_layout(total_label)\n\n#Showing visualization\nshow(p)", "Import Libraries", "from pyspark.sql import SparkSession", "Start Spark Session", "spark = SparkSession.builder.getOrCreate()\nspark.conf.set(\"spark.sql.caseSensitive\", \"true\")", "Decompress Dataset", "!wget 
https://github.com/OTRF/mordor/raw/master/datasets/large/apt29/day1/apt29_evals_day1_manual.zip\n\n!unzip apt29_evals_day1_manual.zip", "Import Datasets", "df_day1_host = spark.read.json('apt29_evals_day1_manual_2020-05-01225525.json')", "Create Temporary SQL View", "df_day1_host.createTempView('apt29Host')", "Adversary - Detection Steps\n1.A.1. User Execution\nProcedure: User Pam executed payload rcs.3aka3.doc\nCriteria: The rcs.3aka3.doc process spawning from explorer.exe\nDetection Type:Telemetry(None)\nQuery ID:204B00B6-A92B-4EF7-8510-4FB237703147", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) LIKE \"%explorer.exe\"\n AND LOWER(Image) LIKE \"%3aka3%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:52540C1E-DD76-41B2-93ED-CFBA2B94ECF7", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(ParentProcessName) LIKE \"%explorer.exe\"\n AND LOWER(NewProcessName) LIKE \"%3aka3%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:General(None)\nQuery ID:DFD6A782-9BDB-4550-AB6B-525E825B095E", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 13\n AND TargetObject RLIKE '.*\\\\\\\\\\\\\\\\AppCompatFlags\\\\\\\\\\\\\\\\Compatibility Assistant\\\\\\\\\\\\\\\\Store\\\\\\\\\\\\\\\\.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "1.A.2. Masquerading\nProcedure: Used unicode right-to-left override (RTLO) character to obfuscate file name rcs.3aka3.doc (originally cod.3aka.scr)\nCriteria: Evidence of the right-to-left override character (U+202E) in the rcs.3aka.doc process ​OR the original filename (cod.3aka.scr)\nDetection Type:Telemetry(None)\nQuery ID:F4C71BF4-E068-493D-ABAA-0C5DFA02875D", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:D94222A0-72F9-4F1E-84A9-F14CA1098D44", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "1.A.3. Uncommonly Used Port\nProcedure: Established C2 channel (192.168.0.5) via rcs.3aka3.doc payload over TCP port 1234\nCriteria: Established network channel over port 1234\nDetection Type:Telemetry(None)\nQuery ID:B53A710B-43AB-4B57-BD92-4E787D494978", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 3\n AND LOWER(Image) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:1BAC5645-83CD-4D6F-A4F8-659084401F47", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 5156\n AND LOWER(Application) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "1.A.4. 
Standard Cryptographic Protocol\nProcedure: Used RC4 stream cipher to encrypt C2 (192.168.0.5) traffic\nCriteria: Evidence that the network data sent over the C2 channel is encrypted\nDetection Type:None(None)\nQuery ID:E12B701E-1222-413C-BCAF-F357CB769B3E", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 7\n AND Image LIKE \"%3aka3%\"\n AND LOWER(ImageLoaded) LIKE '%bcrypt.dll'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "1.B.1. Command-Line Interface\nProcedure: Spawned interactive cmd.exe\nCriteria: cmd.exe spawning from the rcs.3aka3.doc​ process\nDetection Type:Telemetry(Correlated)\nQuery ID:4799C203-573A-49CB-ACE4-8C4C5CD3862A", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n AND LOWER(Image) LIKE \"%cmd.exe\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:C8D664CD-48EE-4663-AE49-D5B0B19014C7", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(ParentProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n AND LOWER(NewProcessName) LIKE \"%cmd.exe\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "1.B.2. PowerShell\nProcedure: Spawned interactive powershell.exe\nCriteria: powershell.exe spawning from cmd.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:C1DBF5F2-21D5-45E4-8D9A-44905F1F8242", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host a\nINNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n AND LOWER(Image) LIKE '%cmd.exe'\n) b\nON a.ParentProcessGuid = b.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:43B46661-3407-4302-BA8C-EE772C677DCB", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host a\nINNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(ParentProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n AND LOWER(NewProcessName) LIKE '%cmd.exe'\n) b\nON a.ProcessId = b.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "2.A.1. 
File and Directory Discovery\nProcedure: Searched filesystem for document and media files using PowerShell\nCriteria: powershell.exe executing (Get-)ChildItem\nDetection Type:Telemetry(Correlated)\nQuery ID:10C87900-CC2F-4EE1-A2F2-1832A761B050", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT ParentProcessGuid, ProcessGuid, ProcessId\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) d\n ON c.ExecutionProcessID = d.ProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%childitem%\"\n) b\nON a.ProcessGuid = b.ParentProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND LOWER(a.ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:26F6963D-00D5-466A-B4BA-59DA30892B26", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) d\n ON hex(c.ExecutionProcessID) = d.NewProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%childitem%\"\n) b\nON a.NewProcessId = b.ProcessId\nWHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND LOWER(a.ParentProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "2.A.2. 
Automated Collection\nProcedure: Scripted search of filesystem for document and media files using PowerShell\nCriteria: powershell.exe executing (Get-)ChildItem\nDetection Type:Telemetry(Correlated)\nQuery ID:F96EA21C-1EB4-4988-8F98-BD018717EE2D", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT ParentProcessGuid, ProcessGuid, ProcessId\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) d\n ON c.ExecutionProcessID = d.ProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%childitem%\"\n) b\nON a.ProcessGuid = b.ParentProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND LOWER(a.ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:EAD989D4-8886-46DC-BC8C-780C10760E93", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) d\n ON hex(c.ExecutionProcessID) = d.NewProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%childitem%\"\n) b\nON a.NewProcessId = b.ProcessId\nWHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND LOWER(a.ParentProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "2.A.3. Data from Local System\nProcedure: Recursively collected files found in C:\\Users\\Pam\\ using PowerShell\nCriteria: powershell.exe reading files in C:\\Users\\Pam\\\nDetection Type:None(None)\n2.A.4. 
Data Compressed\nProcedure: Compressed and stored files into ZIP (Draft.zip) using PowerShell\nCriteria: powershell.exe executing Compress-Archive\nDetection Type:Telemetry(Correlated)\nQuery ID:6CDEBEBF-387F-4A40-A4E8-8D4DF3A8F897", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.ParentProcessGuid, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT ParentProcessGuid, ProcessGuid, ProcessId\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) d\n ON c.ExecutionProcessID = d.ProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%compress-archive%\"\n) b\nON a.ProcessGuid = b.ParentProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND LOWER(a.ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:621F8EE7-E9D8-417C-9FE5-5A0D89C3736A", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host a\nINNER JOIN (\n SELECT d.NewProcessId, d.ProcessId, c.ScriptBlockText\n FROM apt29Host c\n INNER JOIN (\n SELECT split(NewProcessId, '0x')[1] as NewProcessId, ProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) d\n ON hex(c.ExecutionProcessID) = d.NewProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%compress-archive%\"\n) b\nON a.NewProcessId = b.ProcessId\nWHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND LOWER(a.ParentProcessName) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "2.A.5. Data Staged\nProcedure: Staged files for exfiltration into ZIP (Draft.zip) using PowerShell\nCriteria: powershell.exe creating the file draft.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:76154CEC-1E01-4D3A-B9ED-C78978597C2B", "df = spark.sql(\n'''\nSELECT TargetFilename\nFROM apt29Host a\nINNER JOIN (\n SELECT d.ProcessGuid, d.ProcessId\n FROM apt29Host c\n INNER JOIN (\n SELECT ProcessGuid, ProcessId\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) d\n ON c.ExecutionProcessID = d.ProcessId\n WHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%compress-archive%\"\n) b\nON a.ProcessGuid = b.ProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 11\n AND LOWER(a.TargetFilename) LIKE \"%zip\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "2.B.1. Exfiltration Over Command and Control Channel\nProcedure: Read and downloaded ZIP (Draft.zip) over C2 channel (192.168.0.5 over TCP port 1234)\nCriteria: The rcs.3aka3.doc process reading the file draft.zip while connected to the C2 channel\nDetection Type:None(None)\n3.A.1. 
Remote File Copy\nProcedure: Dropped stage 2 payload (monkey.png) to disk\nCriteria: The rcs.3aka3.doc process creating the file monkey.png\nDetection Type:Telemetry(Correlated)\nQuery ID:64249901-ADF8-4E5D-8BB4-70540A45E26C", "df = spark.sql(\n'''\nSELECT b.Message\nFROM apt29Host a\nINNER JOIN (\n SELECT ProcessGuid, Message\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 11\n AND LOWER(TargetFilename) LIKE '%monkey.png'\n) b\nON a.ProcessGuid = b.ProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND LOWER(a.Image) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "3.A.2. Obfuscated Files or Information\nProcedure: Embedded PowerShell payload in monkey.png using steganography\nCriteria: Evidence that a PowerShell payload was within monkey.png\nDetection Type:Telemetry(None)\nQuery ID:0F10E1D1-EDF8-4B9F-B879-3651598D528A", "df = spark.sql(\n'''\nSELECT d.Image, d.CommandLine, c.ScriptBlockText\nFROM apt29Host c\nINNER JOIN (\n SELECT ParentProcessGuid, ProcessGuid, ProcessId, ParentImage, Image, ParentCommandLine, CommandLine\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) d\nON c.ExecutionProcessID = d.ProcessId\nWHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%monkey.png%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:94F9B4F2-1C52-4A47-BF47-C786513A05AA", "df = spark.sql(\n'''\nSELECT d.NewProcessName, d.CommandLine, c.ScriptBlockText\nFROM apt29Host c\nINNER JOIN (\n SELECT NewProcessName, CommandLine, split(NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) d\nON LOWER(hex(c.ExecutionProcessID)) = d.NewProcessId\nWHERE c.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND c.EventID = 4104\n AND LOWER(c.ScriptBlockText) LIKE \"%monkey.png%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "3.B.1. Component Object Model Hijacking\nProcedure: Modified the Registry to enable COM hijacking of sdclt.exe using PowerShell\nCriteria: Addition of the DelegateExecute ​subkey in ​HKCU\\Software\\Classes\\Folder\\shell\\open\\​​command​​\nDetection Type:Telemetry(None)\nQuery ID:04EB334D-A304-40D9-B177-0BB6E95FC23E", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 13\n AND LOWER(TargetObject) RLIKE '.*\\\\\\\\\\\\\\\\folder\\\\\\\\\\\\\\\\shell\\\\\\\\\\\\\\\\open\\\\\\\\\\\\\\\\command\\\\\\\\\\\\\\delegateexecute.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "3.B.2. 
Bypass User Account Control\nProcedure: Executed elevated PowerShell payload\nCriteria: High integrity powershell.exe spawning from control.exe​​ (spawned from sdclt.exe)\nDetection Type:Technique(None)\nQuery ID:7a4a8c7e-4238-4db3-a90d-34e9f3c6e60f", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) LIKE \"%sdclt.exe%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:d52fe669-55da-49e1-a76b-89297c66fa02", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:Telemetry(None)\nQuery ID:F7E315BA-6A66-44D8-ABB3-3FBB4AA8F80A", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%sdclt.exe\"\n AND IntegrityLevel = \"High\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:6C8780E9-E6AF-4210-8EA0-72E9017CEE7D", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host a\nINNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n) b\nON a.ParentProcessGuid = b.ProcessGuid\nWHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:C36B49B5-DF58-4A34-9FE9-56189B9DEFEA", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%sdclt.exe\"\n AND MandatoryLabel = \"S-1-16-12288\"\n AND TokenElevationType = \"%%1937\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:EE34D18C-0549-4AFB-8B98-01160B0C9094", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host a\nINNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n) b\nON a.ProcessId = b.NewProcessId\nWHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "3.B.3. 
Commonly Used Port\nProcedure: Established C2 channel (192.168.0.5) via PowerShell payload over TCP port 443\nCriteria: Established network channel over port 443\nDetection Type:Telemetry(Correlated)\nQuery ID:E209D0C5-5A2B-4AEC-92B0-1510165B8EC7", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN (\n SELECT a.ProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n) c\nON d.ProcessGuid = c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 3\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:2E9B9ADC-2426-419F-8E6E-2D9338384F80", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN (\n SELECT split(a.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n) c\nON LOWER(hex(CAST(ProcessId as INT))) = c.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND d.EventID = 5156\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "3.B.4. Standard Application Layer Protocol\nProcedure: Used HTTPS to transport C2 (192.168.0.5) traffic\nCriteria: Evidence that the network data sent over the C2 channel is HTTPS\nDetection Type:None(None)\n3.B.5. Standard Cryptographic Protocol\nProcedure: Used HTTPS to encrypt C2 (192.168.0.5) traffic\nCriteria: Evidence that the network data sent over the C2 channel is encrypted\nDetection Type:None(None)\n3.C.1. Modify Registry\nProcedure: Modified the Registry to remove artifacts of COM hijacking\nCriteria: Deletion of of the HKCU\\Software\\Classes\\Folder\\shell\\Open\\command subkey\nDetection Type:Telemetry(Correlated)\nQuery ID:22A46621-7A92-48C1-81BF-B3937EB4FDC3", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE b.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND b.EventID = 1\n) c\nON d.ProcessGuid = c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 12\n AND LOWER(d.TargetObject) RLIKE '.*\\\\\\\\\\\\\\\\folder\\\\\\\\\\\\\\\\shell\\\\\\\\\\\\\\\\open\\\\\\\\\\\\\\\\command.*'\n AND d.Message RLIKE '.*EventType: DeleteKey.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.A.1. 
Remote File Copy\nProcedure: Dropped additional tools (SysinternalsSuite.zip) to disk over C2 channel (192.168.0.5)\nCriteria: powershell.exe creating the file SysinternalsSuite.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:337EA65D-55A7-4890-BB2A-6A08BB9703E2", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(ParentImage) RLIKE '.*\\\\‎|â€|‪|‫|‬|â€|‮.*'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE b.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND b.EventID = 1\n) c\nON d.ProcessGuid = c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 11\n AND LOWER(d.TargetFilename) LIKE '%.zip'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.A.2. PowerShell\nProcedure: Spawned interactive powershell.exe\nCriteria: powershell.exe spawning from powershell.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:B86F90BD-716C-4432-AE97-901174F111A8", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n) c\nON d.ParentProcessGuid= c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:FA520225-1813-4EF2-BA58-98CB59C897D7", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host d\nINNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n) c\nON d.ProcessId = c.NewProcessId\nWHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.A.3. 
Deobfuscate/Decode Files or Information\nProcedure: Decompressed ZIP (SysinternalsSuite.zip) file using PowerShell\nCriteria: powershell.exe executing Expand-Archive\nDetection Type:Telemetry(Correlated)\nQuery ID:66B068A4-C3AB-4973-AE07-2C15AFF78104", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4103\n AND LOWER(f.Payload) LIKE \"%expand-archive%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:09F29912-8E93-461E-9E89-3F06F6763383", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%expand-archive%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:B5F24262-9373-43A4-A83F-0DBB708BD2C0", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4103\n AND LOWER(f.Payload) LIKE \"%expand-archive%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:4310F2AF-11EF-4EAC-A968-3436FE5F6140", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, 
a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%expand-archive%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.B.1. Process Discovery\nProcedure: Enumerated current running processes using PowerShell\nCriteria: powershell.exe executing Get-Process\nDetection Type:Telemetry(Correlated)\nQuery ID:CE6D61C3-C3B5-43D2-BD3C-4C1711A822DA", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%get-process%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:294DFB34-1FA8-464D-B85C-F2AE163DB4A9", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%get-process%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.B.2. 
File Deletion\nProcedure: Deleted rcs.3aka3.doc on disk using SDelete\nCriteria: sdelete64.exe deleting the file rcs.3aka3.doc\nDetection Type:Telemetry(Correlated)\nQuery ID:5EED5350-0BFD-4501-8B2D-4CE4F8F9E948", "df = spark.sql(\n'''\nSELECT f.ProcessGuid\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ParentProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%3aka3%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:59A9AC92-124D-4C4B-A6BF-3121C98677C3", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%3aka3%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID in (12,13)\n AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\\\\\\\\\software\\\\\\\\\\\\\\\\sysinternals\\\\\\\\\\\\\\\\sdelete.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:3A1DC1C2-B640-4FCE-A71F-2F65AB060A8C", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON f.ProcessId = e.NewProcessId\nWHERE LOWER(f.Channel) = \"security\"\n AND f.EventID = 4688\n AND LOWER(f.NewProcessName) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE 
'%3aka3%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.B.3. File Deletion\nProcedure: Deleted Draft.zip on disk using SDelete\nCriteria: sdelete64.exe deleting the file draft.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:02D0BBFB-4BDF-4167-B530-253779745EF7", "df = spark.sql(\n'''\nSELECT Message, g.CommandLine\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid, f.CommandLine\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%draft.zip%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:719618E8-9EE7-4693-937E-1FD39228DEBC", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%draft.zip%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID in (12,13)\n AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\\\\\\\\\software\\\\\\\\\\\\\\\\sysinternals\\\\\\\\\\\\\\\\sdelete.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:5A19E46B-8328-4867-81CF-87518A3784B1", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\nSELECT d.NewProcessId\nFROM apt29Host d\nINNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n) c\nON d.ProcessId = c.NewProcessId\nWHERE 
LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON f.ProcessId = e.NewProcessId\nWHERE LOWER(f.Channel) = \"security\"\nAND f.EventID = 4688\nAND LOWER(f.NewProcessName) LIKE '%sdelete%'\nAND LOWER(f.CommandLine) LIKE '%draft.zip'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.B.4. File Deletion\nProcedure: Deleted SysinternalsSuite.zip on disk using SDelete\nCriteria: sdelete64.exe deleting the file SysinternalsSuite.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:83D62033-105A-4A02-8B75-DAB52D8D51EC", "df = spark.sql(\n'''\nSELECT Message, g.CommandLine\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid, f.CommandLine\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:AC2ECFF0-D817-4893-BDED-F16B837C4DBA", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessId, d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%sdelete%'\n AND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID in (12,13)\n AND LOWER(h.TargetObject) RLIKE '.*\\\\\\\\\\\\\\\\software\\\\\\\\\\\\\\\\sysinternals\\\\\\\\\\\\\\\\sdelete.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:4D6DE690-E92C-4D60-93E6-8E5C7C4DF143", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\nSELECT d.NewProcessId\nFROM apt29Host d\nINNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND 
LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n) c\nON d.ProcessId = c.NewProcessId\nWHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON f.ProcessId = e.NewProcessId\nWHERE LOWER(f.Channel) = \"security\"\nAND f.EventID = 4688\nAND LOWER(f.NewProcessName) LIKE '%sdelete%'\nAND LOWER(f.CommandLine) LIKE '%sysinternalssuite.zip'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.1. File and Directory Discovery\nProcedure: Enumerated user's temporary directory path using PowerShell\nCriteria: powershell.exe executing $env:TEMP\nDetection Type:Telemetry(Correlated)\nQuery ID:85BFD73C-875E-4208-AD9E-1922D4D4D991", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:temp%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:D18CF7B9-CBF0-40CE-9D07-12DC83AF3B2F", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:temp%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.2. 
System Owner/User Discovery\nProcedure: Enumerated the current username using PowerShell\nCriteria: powershell.exe executing $env:USERNAME\nDetection Type:Telemetry(Correlated)\nQuery ID:A45F53ED-65CB-4739-A4D3-F2B0F08F86F8", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:username%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:6F3D1615-69D6-41C6-90D0-39ACA14941BD", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:username%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.3. 
System Information Discovery\nProcedure: Enumerated the computer hostname using PowerShell\nCriteria: powershell.exe executing $env:COMPUTERNAME\nDetection Type:Telemetry(Correlated)\nQuery ID:9B610803-2B27-4DA4-9AAC-C859F48510DA", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:computername%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:1BA09833-CDF3-44BE-86D0-6F5B1C66D151", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:computername%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.4. 
System Network Configuration Discovery\nProcedure: Enumerated the current domain name using PowerShell\nCriteria: powershell.exe executing $env:USERDOMAIN\nDetection Type:Telemetry(Correlated)\nQuery ID:1418A09E-BC90-4BC5-A0BC-1ECC4283ACF4", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:userdomain%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:8D215D46-CE33-4CB7-9934-FF9205971570", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$env:userdomain%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.5. 
Process Discovery\nProcedure: Enumerated the current process ID using PowerShell\nCriteria: powershell.exe executing $PID\nDetection Type:Telemetry(Correlated)\nQuery ID:2DBE08DB-BADD-40AD-A037-DEBD29E207C6", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$pid%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:9CFC783B-2DC8-4A3D-AC7B-2DF890827E2E", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%$pid%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.6. 
System Information Discovery\nProcedure: Enumerated the OS version using PowerShell\nCriteria: powershell.exe executing​ Gwmi Win32_OperatingSystem\nDetection Type:Telemetry(Correlated)\nQuery ID:5A2B7006-A887-465F-9D41-AED8F6AECBE1", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%gwmi win32_operatingsystem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:69A3B3AC-42BE-44F6-A418-C2356894F745", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%gwmi win32_operatingsystem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.7. 
Security Software Discovery\nProcedure: Enumerated anti-virus software using PowerShell\nCriteria: powershell.exe executing​ Get-WmiObject ...​ -Class AntiVirusProduct\nDetection Type:Telemetry(Correlated)\nQuery ID:E1E0849D-1771-438B-9D8F-A67B7EC48B97", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%-class antivirusproduct%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:956D78C8-FCB5-440D-B059-6790F729D02D", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%-class antivirusproduct%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.8. 
Security Software Discovery\nProcedure: Enumerated firewall software using PowerShell\nCriteria: powershell.exe executing Get-WmiObject ...​​ -Class FireWallProduct\nDetection Type:Telemetry(Correlated)\nQuery ID:9F924458-73AD-42C8-B98E-0CB4B4355B9B", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%-class firewallproduct%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:B7549913-AF53-4F9A-9C3F-4106578EA5F2", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%-class firewallproduct%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.9. 
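Since the anti-virus (4.C.7) and firewall (4.C.8) checks above differ only in the WMI class name, a single regex can sweep for both at once. The process correlation is left out here for brevity, so treat this as a broad triage pass rather than a replacement for the correlated queries:

```python
# Sketch: one pass over 4104 script blocks covering both security-software
# classes from 4.C.7 and 4.C.8 (no process correlation).
spark.sql('''
SELECT DISTINCT ScriptBlockText
FROM apt29Host
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
  AND EventID = 4104
  AND LOWER(ScriptBlockText) RLIKE "-class (antivirusproduct|firewallproduct)"
''').show(20, truncate=False, vertical=True)
```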
Permission Groups Discovery\nProcedure: Enumerated user's domain group membership via the NetUserGetGroups API\nCriteria: powershell.exe executing the NetUserGetGroups API\nDetection Type:technique(alert)\nQuery ID:FA458669-1C94-4150-AFFC-A3236FC6B275", "df = spark.sql(\n'''\nSELECT a.EventTime, o.TargetUserName, o.IpAddress, a.Message\nFROM apt29Host o\nINNER JOIN (\n SELECT Message, EventTime, SubjectLogonId\n FROM apt29Host\n WHERE lower(Channel) = \"security\"\n AND EventID = 4661\n AND ObjectType = \"SAM_DOMAIN\"\n AND SubjectUserName NOT LIKE '%$'\n AND AccessMask = '0x20094'\n AND LOWER(Message) LIKE '%getlocalgroupmembership%'\n ) a\nON o.TargetLogonId = a.SubjectLogonId\nWHERE lower(Channel) = \"security\" \n AND o.EventID = 4624\n AND o.LogonType = 3\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:Telemetry(Correlated)\nQuery ID:11827B7C-8010-443C-9116-500289E0ED57", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%netusergetgroups%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:52E7DFEA-05BC-4B81-BFE9-DE6085FA8228", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%netusergetgroups%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.10. 
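A quick uncorrelated sweep for any script block that references the NetUserGetGroups or NetUserGetLocalGroups APIs (the latter appears again in 4.C.11) can confirm the API usage before running the correlated queries above:

```python
# Sketch: uncorrelated sweep for script blocks touching the NetUserGet* APIs.
spark.sql('''
SELECT DISTINCT ScriptBlockText
FROM apt29Host
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
  AND EventID = 4104
  AND LOWER(ScriptBlockText) RLIKE "netusergetgroups|netusergetlocalgroups"
''').show(20, truncate=False, vertical=True)
```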
Execution through API\nProcedure: Executed API call by reflectively loading Netapi32.dll\nCriteria: The NetUserGetGroups API function loaded into powershell.exe from Netapi32.dll\nDetection Type:Telemetry(Correlated)\nQuery ID:0B50643F-98FA-4F4A-8E22-9257D85AD7C5", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\nAND f.EventID = 7\nAND LOWER(f.ImageLoaded) LIKE \"%netapi32.dll\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "4.C.11. Permission Groups Discovery\nProcedure: Enumerated user's local group membership via the NetUserGetLocalGroups API\nCriteria: powershell.exe executing the NetUserGetLocalGroups API\nDetection Type:Telemetry(Correlated)\nQuery ID:1CD16ED8-C812-40B1-B968-F0DABFC79DDF", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%netusergetlocalgroups%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:F0AC46E2-63EA-4C8E-AF39-6631444451E5", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%netusergetlocalgroups%\"\n\n'''\n)\ndf.show(100,truncate = False, 
vertical = True)", "4.C.12. Execution through API\nProcedure: Executed API call by reflectively loading Netapi32.dll\nCriteria: The NetUserGetLocalGroups API function loaded into powershelle.exe from Netapi32.dll\nDetection Type:Telemetry(Correlated)\nQuery ID:53CEF026-66EF-4B26-B5C9-10D4BBA3F9E8", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\nAND f.EventID = 7\nAND LOWER(f.ImageLoaded) LIKE \"%netapi32.dll\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "5.A.1. New Service\nProcedure: Created a new service (javamtsup) that executes a service binary (javamtsup.exe) at system startup\nCriteria: powershell.exe creating the Javamtsup service\nDetection Type:Telemetry(Correlated)\nQuery ID:A16CE10D-6EE3-4611-BE9B-B023F36E2DFF", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID IN (12,13,14)\n AND (LOWER(TargetObject) LIKE \"%javamtsup%\" OR LOWER(Details) LIKE \"%javamtsup%\")\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:E76C4174-C24A-4CA3-9EA8-46C5286D3B6F", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ParentProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4103\n AND LOWER(f.Payload) LIKE \"%new-service%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:AA3EF640-2720-4E8A-B86D-DFCF2FDB86BD", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = 
c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4103\n AND LOWER(f.Payload) LIKE \"%new-service%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "5.B.1. Registry Run Keys / Startup Folder\nProcedure: Created a LNK file (hostui.lnk) in the Startup folder that executes on login\nCriteria: powershell.exe creating the file hostui.lnk in the Startup folder\nDetection Type:Telemetry(Correlated)\nQuery ID:611FCA99-97D0-4873-9E51-1C1BA2DBB40D", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 11\n AND f.TargetFilename RLIKE '.*\\\\\\\\\\\\\\\\ProgramData\\\\\\\\\\\\\\\\Microsoft\\\\\\\\\\\\\\\\Windows\\\\\\\\\\\\\\\\Start Menu\\\\\\\\\\\\\\\\Programs\\\\\\\\\\\\\\\\StartUp.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "6.A.1. Credentials in Files\nProcedure: Read the Chrome SQL database file to extract encrypted credentials\nCriteria: accesschk.exe reading files within %APPDATALOCAL%\\Google\\chrome\\user data\\default\\\nDetection Type:None(None)\n6.A.2. Credential Dumping\nProcedure: Executed the CryptUnprotectedData API call to decrypt Chrome passwords\nCriteria: accesschk.exe executing the CryptUnprotectedData API\nDetection Type:None(None)\n6.A.3. 
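The Startup-folder match in 5.B.1 above relies on an RLIKE pattern whose stacked backslash escapes are easy to get wrong. A hedged alternative uses the DataFrame API, where contains() does a literal substring match and no SQL string-escaping layer is involved; the process correlation is omitted for brevity:

```python
# Sketch: literal-substring version of the 5.B.1 Startup-folder filter.
from pyspark.sql import functions as F

(
    spark.table("apt29Host")
    .where(
        (F.col("Channel") == "Microsoft-Windows-Sysmon/Operational")
        & (F.col("EventID") == 11)
        & F.col("TargetFilename").contains("\\Start Menu\\Programs\\StartUp")
    )
    .select("Message")
    .show(100, truncate=False, vertical=True)
)
```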
Masquerading\nProcedure: Masqueraded a Chrome password dump tool as accesscheck.exe, a legitimate Sysinternals tool\nCriteria: Evidence that accesschk.exe is not the legitimate Sysinternals tool\nDetection Type:Telemetry(Correlated)\nQuery ID:0A19F9B7-5E17-47E5-8015-29E9ABC09ADC", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid, d.ParentProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%accesschk%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 7\n AND LOWER(ImageLoaded) LIKE '%accesschk%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:General(Correlated)\nQuery ID:1FCE98FC-1FF9-41CB-9C25-0235729A2B01", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid, d.ParentProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 1\n AND LOWER(f.Image) LIKE '%accesschk%'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE h.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 7\n AND LOWER(ImageLoaded) LIKE '%accesschk%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "6.B.1. 
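For the masquerading case in 6.A.3 above, the image-load query shows the renamed binary executing. If the Sysmon process-creation events in this dataset also carry OriginalFileName, Description, and Hashes (Sysmon v10+ fields, which is an assumption here), comparing them against the claimed image name is another illustrative way to surface the mismatch:

```python
# Sketch: PE metadata recorded at process creation for the accesschk-named
# binary. Field availability (OriginalFileName, Description, Hashes) is assumed.
spark.sql('''
SELECT Image, OriginalFileName, Description, Hashes
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 1
  AND LOWER(Image) LIKE "%accesschk%"
''').show(10, truncate=False, vertical=True)
```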
Private Keys\nProcedure: Exported a local certificate to a PFX file using PowerShell\nCriteria: powershell.exe creating a certificate file exported from the system\nDetection Type:Telemetry(Correlated)\nQuery ID:6392C9F1-D975-4F75-8A70-433DEDD7F622", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\nAND f.EventID = 11\nAND LOWER(f.TargetFilename) LIKE \"%.pfx\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "6.C.1. Credential Dumping\nProcedure: Dumped password hashes from the Windows Registry by injecting a malicious DLL into Lsass.exe\nCriteria: powershell.exe injecting into lsass.exe OR lsass.exe reading Registry keys under HKLM:\\SAM\\SAM\\Domains\\Account\\Users\\\nDetection Type:Telemetry(Correlated)\nQuery ID:7B2CE2A5-4386-4EED-9A03-9B7D1049C4AE", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid, d.ParentProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.SourceProcessGuid = e.ParentProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 8\n AND f.TargetImage LIKE '%lsass.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "7.A.1. 
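The 6.C.1 query above covers the injection half of the criteria (CreateRemoteThread into lsass.exe). The other half, lsass.exe reading the SAM user keys, would only be visible if registry object-access auditing (Security 4663) were captured in the dataset, which is an assumption; the sketch below uses the standard 4663 field names:

```python
# Sketch: lsass.exe access to the SAM user keys via Security 4663, assuming
# object-access auditing is enabled and present in apt29Host.
from pyspark.sql import functions as F

(
    spark.table("apt29Host")
    .where(
        (F.lower(F.col("Channel")) == "security")
        & (F.col("EventID") == 4663)
        & F.lower(F.col("ProcessName")).like("%lsass.exe")
        & F.col("ObjectName").contains("SAM\\Domains\\Account\\Users")
    )
    .select("Message")
    .show(20, truncate=False, vertical=True)
)
```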
Screen Capture\nProcedure: Captured and saved screenshots using PowerShell\nCriteria: powershell.exe executing the CopyFromScreen function from System.Drawing.dll\nDetection Type:Telemetry(Correlated)\nQuery ID:3B4E5808-3C71-406A-B181-17B0CE3178C9", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid, d.ParentProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 7\n AND LOWER(f.ImageLoaded) LIKE \"%system.drawing.ni.dll\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:Telemetry(Correlated)\nQuery ID:B374D3E7-3580-441F-8D6E-48C40CBA7922", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ParentProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4103\nAND LOWER(f.Payload) LIKE \"%copyfromscreen%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:2AA4D448-3893-4F31-9497-0F8E2B7E3CFD", "df = spark.sql(\n'''\nSELECT Payload\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4103\nAND LOWER(f.Payload) LIKE \"%copyfromscreen%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "7.A.2. 
Clipboard Data\nProcedure: Captured clipboard contents using PowerShell\nCriteria: powershell.exe executing Get-Clipboard\nDetection Type:Telemetry(Correlated)\nQuery ID:F4609F7E-C4DB-4327-91D4-59A58C962A02", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ParentProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4103\nAND LOWER(f.Payload) LIKE \"%get-clipboard%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:6EC8D7EB-153B-459A-9333-51208449DB99", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4103\nAND LOWER(f.Payload) LIKE \"%get-clipboard%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "7.A.3. Input Capture\nProcedure: Captured user keystrokes using the GetAsyncKeyState API\nCriteria: powershell.exe executing the GetAsyncKeyState API\nDetection Type:None(None)\n7.B.1. Data from Local System\nProcedure: Read data in the user's Downloads directory using PowerShell\nCriteria: powershell.exe reading files in C:\\Users\\pam\\Downloads\\\nDetection Type:None(None)\n7.B.2. 
Data Compressed\nProcedure: Compressed data from the user's Downloads directory into a ZIP file (OfficeSupplies.7z) using PowerShell\nCriteria: powershell.exe creating the file OfficeSupplies.7z\nDetection Type:Telemetry(Correlated)\nQuery ID:BA68938F-7506-4E20-BC06-0B44B535A0B1", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid, d.ParentProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 11\n AND LOWER(f.TargetFilename) LIKE '%officesupplies%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "7.B.3. Data Encrypted\nProcedure: Encrypted data from the user's Downloads directory using PowerShell\nCriteria: powershell.exe executing Compress-7Zip with the password argument used for encryption\nDetection Type:Telemetry(Correlated)\nQuery ID:4C19DDB9-9763-4D1C-9B9D-788ECF193778", "df = spark.sql(\n'''\nSELECT f.ScriptBlockText\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ParentProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%compress-7zip%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:C670DAFF-B1FD-45B2-9DEB-AC5AEC273EE7", "df = spark.sql(\n'''\nSELECT f.ScriptBlockText\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4104\nAND 
LOWER(f.ScriptBlockText) LIKE \"%compress-7zip%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "7.B.4. Exfiltration Over Alternative Protocol\nProcedure: Exfiltrated collection (OfficeSupplies.7z) to WebDAV network share using PowerShell\nCriteria: powershell executing Copy-Item pointing to an attack-controlled WebDav network share (192.168.0.4:80)\nDetection Type:Telemetry(Correlated)\nQuery ID:7AAC6658-2B5C-4B4A-B7C9-D42D288D5218", "df = spark.sql(\n'''\nSELECT f.ScriptBlockText\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessId, d.ParentProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n ) c\n ON d.ParentProcessGuid= c.ProcessGuid\n WHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ExecutionProcessID = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND f.EventID = 4104\n AND LOWER(f.ScriptBlockText) LIKE \"%copy-item%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:B19F8E16-AA6C-45C1-8A0D-92812830C237", "df = spark.sql(\n'''\nSELECT f.ScriptBlockText\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(f.ExecutionProcessID)) = e.NewProcessId\nWHERE f.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND f.EventID = 4104\nAND LOWER(f.ScriptBlockText) LIKE \"%copy-item%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Detection Type:technique(Alert)\nQuery ID:C10730EA-6345-4934-AA0F-B0EFCA0C4BA6", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND CommandLine RLIKE '.*rundll32.exe.*\\\\\\\\\\\\\\\\windows\\\\\\\\\\\\\\\\system32\\\\\\\\\\\\\\\\davclnt.dll.*DavSetCookie.*'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.A.1. 
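The rundll32/DavSetCookie alert above flags WebDAV client activity; pulling the server address out of the command line makes the destination explicit. The command-line layout (an IP address following the DavSetCookie token) is an assumption, so the regex is illustrative:

```python
# Sketch: extract the WebDAV server from the DavSetCookie command lines.
spark.sql('''
SELECT regexp_extract(CommandLine, 'DavSetCookie ([0-9.]+)', 1) AS webdav_server,
       CommandLine
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 1
  AND CommandLine LIKE '%DavSetCookie%'
''').show(20, truncate=False, vertical=True)
```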
Remote System Discovery\nProcedure: Enumerated remote systems using LDAP queries\nCriteria: powershell.exe making LDAP queries over port 389 to the Domain Controller (10.0.0.4)\nDetection Type:Telemetry(Correlated)\nQuery ID:C1307FC1-19B7-467B-9705-95147B492CC7", "df = spark.sql(\n'''\nSELECT f.Message\nFROM apt29Host f\nINNER JOIN (\nSELECT d.ProcessId, d.ParentProcessId\nFROM apt29Host d\nINNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n) c\nON d.ParentProcessGuid= c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessId = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 3\n AND f.DestinationPort = 389\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:542C2E36-0BC0-450B-A34F-C600E9DC396B", "df = spark.sql(\n'''\nSELECT f.Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(CAST(f.ProcessId as INT))) = e.NewProcessId\nWHERE LOWER(f.Channel) = \"security\"\n AND EventID = 5156\n AND DestPort = 389\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.A.2. 
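To put the LDAP (port 389) detection above and the WinRM (port 5985) check that follows in context, a quick profile of every destination powershell.exe reaches can help. This skips the parent-process correlation and simply groups the Sysmon network events:

```python
# Sketch: destinations contacted by any powershell.exe, grouped by IP and port.
spark.sql('''
SELECT DestinationIp, DestinationPort, COUNT(*) AS connections
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 3
  AND LOWER(Image) LIKE "%powershell.exe"
GROUP BY DestinationIp, DestinationPort
ORDER BY connections DESC
''').show(50, truncate=False)
```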
Remote System Discovery\nProcedure: Established WinRM connection to remote host NASHUA (10.0.1.6)\nCriteria: Network connection to NASHUA (10.0.1.6) over port 5985\nDetection Type:Telemetry(Correlated)\nQuery ID:0A5428EA-171D-4944-B27C-0EBC3D557FAD", "df = spark.sql(\n'''\nSELECT f.Message\nFROM apt29Host f\nINNER JOIN (\nSELECT d.ProcessId, d.ParentProcessId\nFROM apt29Host d\nINNER JOIN (\n SELECT a.ProcessGuid, a.ParentProcessGuid\n FROM apt29Host a\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE \"%control.exe\"\n AND LOWER(ParentImage) LIKE \"%sdclt.exe\"\n ) b\n ON a.ParentProcessGuid = b.ProcessGuid\n WHERE a.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND a.EventID = 1\n AND a.IntegrityLevel = \"High\"\n) c\nON d.ParentProcessGuid= c.ProcessGuid\nWHERE d.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND d.EventID = 1\n AND d.Image LIKE '%powershell.exe'\n) e\nON f.ProcessId = e.ProcessId\nWHERE f.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND f.EventID = 3\n AND f.DestinationPort = 5985\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:0376E07E-3C48-4B89-A50D-B3FAAB23EDAB", "df = spark.sql(\n'''\nSELECT f.Message\nFROM apt29Host f\nINNER JOIN (\n SELECT split(d.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host d\n INNER JOIN(\n SELECT a.ProcessId, a.NewProcessId\n FROM apt29Host a\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE \"%control.exe\"\n AND LOWER(ParentProcessName) LIKE \"%sdclt.exe\"\n ) b\n ON a.ProcessId = b.NewProcessId\n WHERE LOWER(a.Channel) = \"security\"\n AND a.EventID = 4688\n AND a.MandatoryLabel = \"S-1-16-12288\"\n AND a.TokenElevationType = \"%%1937\"\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(d.Channel) = \"security\"\n AND d.EventID = 4688\n AND d.NewProcessName LIKE '%powershell.exe'\n) e\nON LOWER(hex(CAST(f.ProcessId as INT))) = e.NewProcessId\nWHERE LOWER(f.Channel) = \"security\"\n AND EventID = 5156\n AND DestPort = 5985\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.A.3. Process Discovery\nProcedure: Enumerated processes on remote host Scranton (10.0.1.4) using PowerShell\nCriteria: powershell.exe executing Get-Process\nDetection Type:Telemetry(Correlated)\nQuery ID:6C481791-2AE8-4F6B-9BFE-C1F6DE1E0BC0", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host b\nINNER JOIN (\n SELECT ProcessGuid, ProcessId\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND LOWER(Image) LIKE '%wsmprovhost.exe'\n) a\nON b.ExecutionProcessID = a.ProcessId\nWHERE b.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND b.EventID = 4104\n AND LOWER(b.ScriptBlockText) LIKE \"%get-process%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:088846AF-FF45-4FC4-896C-64F24517BBD7", "df = spark.sql(\n'''\nSELECT b.ScriptBlockText\nFROM apt29Host b\nINNER JOIN (\n SELECT split(NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND LOWER(NewProcessName) LIKE '%wsmprovhost.exe'\n) a\nON LOWER(hex(b.ExecutionProcessID)) = a.NewProcessId\nWHERE b.Channel = \"Microsoft-Windows-PowerShell/Operational\"\nAND b.EventID = 4104\nAND LOWER(b.ScriptBlockText) LIKE \"%get-process%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.B.1. 
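The 8.A.3 queries above look for Get-Process specifically; dropping the ScriptBlockText filter returns everything executed through the WinRM host process, which helps scope the whole remote session:

```python
# Sketch: all 4104 script blocks tied to wsmprovhost.exe (the WinRM host),
# i.e. the 8.A.3 query without the get-process filter.
spark.sql('''
SELECT b.ScriptBlockText
FROM apt29Host b
INNER JOIN (
    SELECT ProcessId
    FROM apt29Host
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
      AND EventID = 1
      AND LOWER(Image) LIKE "%wsmprovhost.exe"
) a
ON b.ExecutionProcessID = a.ProcessId
WHERE b.Channel = "Microsoft-Windows-PowerShell/Operational"
  AND b.EventID = 4104
''').show(100, truncate=False, vertical=True)
```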
Remote File Copy\nProcedure: Copied python.exe payload from a WebDAV share (192.168.0.4) to remote host Scranton (10.0.1.4)\nCriteria: The file python.exe created on Scranton (10.0.1.4)\nDetection Type:Telemetry(None)\nQuery ID:97402495-2449-415F-BDAD-5CC8EFC1E1B5", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 5145\n AND RelativeTargetName LIKE '%python.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:D804F2D8-C65B-42D6-A731-C13BE2BDB441", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = 'Microsoft-Windows-Sysmon/Operational'\n AND EventID = 11\n AND TargetFilename LIKE '%python.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.B.2. Software Packing\nProcedure: python.exe payload was packed with UPX\nCriteria: Evidence that the file python.exe is packed\nDetection Type:None(None)\n8.C.1. Valid Accounts\nProcedure: Logged on to remote host NASHUA (10.0.1.6) using valid credentials for user Pam\nCriteria: Successful logon as user Pam on NASHUA (10.0.1.6)\nDetection Type:Telemetry(None)\nQuery ID:AF5E8E22-DEC8-40AF-98AD-84BE1AC3F34C", "df = spark.sql(\n'''\nSELECT Hostname, a.Message\nFROM apt29Host b\nINNER JOIN (\n SELECT TargetLogonId, Message\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4624\n AND LogonType = 3\n AND TargetUserName NOT LIKE '%$'\n) a\nON b.SubjectLogonId = a.TargetLogonId\nWHERE LOWER(b.Channel) = \"security\"\n AND b.EventID = 5145\n AND b.RelativeTargetName LIKE '%python.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.C.2. Windows Admin Shares\nProcedure: Established SMB session to remote host NASHUA's (10.0.1.6) IPC$ share using PsExec\nCriteria: SMB session to NASHUA (10.0.1.6) over TCP port 445/135 OR evidence of usage of a Windows share\nDetection Type:Telemetry(None)\nQuery ID:C91A4BF2-22B1-421B-B1DE-626778AD3BBB", "df = spark.sql(\n'''\nSELECT EventTime, Hostname, ShareName, RelativeTargetName, SubjectUserName\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 5145\n AND ShareName LIKE '%IPC%'\n AND RelativeTargetName LIKE '%PSEXESVC%'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "8.C.3. Service Execution\nProcedure: Executed python.exe using PSExec\nCriteria: python.exe spawned by PSEXESVC.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:BDE98B9B-77DD-4AD4-B755-463C3C27EE5F", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host b\nINNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n) a\nON b.ParentProcessGuid = a.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:11D81CCD-163F-4347-8F1D-072F4B4B3B26", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host b\nINNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n) a\nON b.ProcessId = a.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.A.1. 
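Alongside the IPC$/PSEXESVC share activity in 8.C.2 above, PsExec normally drops its service binary on the target before starting it. A hedged sketch looking for that file write (not one of the published criteria):

```python
# Sketch: PSEXESVC.exe being written to disk (Sysmon EventID 11), a common
# PsExec artifact; illustrative only.
spark.sql('''
SELECT Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 11
  AND LOWER(TargetFilename) LIKE "%psexesvc.exe"
''').show(10, truncate=False, vertical=True)
```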
Remote File Copy\nProcedure: Dropped rar.exe to disk on remote host NASHUA (10.0.1.6)\nCriteria: python.exe creating the file rar.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:1C94AFAF-74A9-4578-B026-7AA6948D9DBE", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 11\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.A.2. Remote File Copy\nProcedure: Dropped rar.exe to disk on remote host NASHUA (10.0.1.6)\nCriteria: python.exe creating the file sdelete64.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:F98D589E-94A9-4974-A142-7E75D9760118", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n) e\nON f.ProcessGuid = e.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 11\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.1. 
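The 9.A.1 and 9.A.2 queries above return every file created by descendants of the python.exe payload. Adding a TargetFilename filter narrows the output to the two tools named in the criteria; the process ancestry is omitted here for brevity:

```python
# Sketch: file-creation events for the two dropped tools only.
spark.sql('''
SELECT TargetFilename, Message
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 11
  AND (LOWER(TargetFilename) LIKE "%rar.exe"
       OR LOWER(TargetFilename) LIKE "%sdelete64.exe")
''').show(20, truncate=False, vertical=True)
```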
PowerShell\nProcedure: Spawned interactive powershell.exe\nCriteria: powershell.exe​ spawning from python.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:77D403CE-2832-4927-B74A-42D965B5AF94", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n) e\nON f.ParentProcessGuid = e.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:B56C6666-EEF3-4028-85D4-6AAE01CD506C", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host f\nINNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT b.NewProcessId\n FROM apt29Host b\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n ) a\n ON b.ProcessId = a.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n) e\nON f.ProcessId = e.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND NewProcessName LIKE '%powershell.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.2. 
File and Directory Discovery\nProcedure: Searched filesystem for document and media files using PowerShell\nCriteria: powershell.exe executing (Get-)ChildItem​\nDetection Type:Telemetry(Correlated)\nQuery ID:3DDF2B9B-10AC-454C-BFA0-1F7BD011947E", "df = spark.sql(\n'''\nSELECT h.ScriptBlockText\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n) g\nON h.ExecutionProcessID = g.ProcessId\nWHERE h.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND h.EventID = 4104\n AND LOWER(h.ScriptBlockText) LIKE \"%childitem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:E7ED941E-F3B3-441B-B43D-1F1B194D6303", "df = spark.sql(\n'''\nSELECT h.ScriptBlockText\nFROM apt29Host h\nINNER JOIN (\n SELECT split(f.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT b.NewProcessId\n FROM apt29Host b\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n ) a\n ON b.ProcessId = a.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) e\n ON f.ProcessId = e.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND NewProcessName LIKE '%powershell.exe'\n) g\nON LOWER(hex(h.ExecutionProcessID)) = g.NewProcessId\nWHERE h.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND h.EventID = 4104\n AND LOWER(h.ScriptBlockText) LIKE \"%childitem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.3. 
Automated Collection\nProcedure: Scripted search of filesystem for document and media files using PowerShell\nCriteria: powershell.exe executing (Get-)ChildItem\nDetection Type:Telemetry(Correlated)\nQuery ID:6AE2BDBE-48BD-4323-8572-B2214D244013", "df = spark.sql(\n'''\nSELECT h.ScriptBlockText\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n) g\nON h.ExecutionProcessID = g.ProcessId\nWHERE h.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND h.EventID = 4104\n AND LOWER(h.ScriptBlockText) LIKE \"%childitem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:6A0DF333-5329-42B5-9AF6-60AB647051CD", "df = spark.sql(\n'''\nSELECT h.ScriptBlockText\nFROM apt29Host h\nINNER JOIN (\n SELECT split(f.NewProcessId, '0x')[1] as NewProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT b.NewProcessId\n FROM apt29Host b\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n ) a\n ON b.ProcessId = a.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) e\n ON f.ProcessId = e.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND NewProcessName LIKE '%powershell.exe'\n) g\nON LOWER(hex(h.ExecutionProcessID)) = g.NewProcessId\nWHERE h.Channel = \"Microsoft-Windows-PowerShell/Operational\"\n AND h.EventID = 4104\n AND LOWER(h.ScriptBlockText) LIKE \"%childitem%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.4. Data from Local System\nProcedure: Recursively collected files found in C:\\Users\\Pam\\ using PowerShell\nCriteria: powershell.exe reading files in C:\\Users\\Pam\\\nDetection Type:None(None)\n9.B.5. 
Data Staged\nProcedure: Staged files for exfiltration into ZIP (working.zip in AppData directory) using PowerShell\nCriteria: powershell.exe creating the file working.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:17B04626-D628-4CFC-9EF1-7FF9CD48FF5E", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 11\n AND LOWER(h.TargetFilename) LIKE \"%working.zip\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.6. Data Encrypted\nProcedure: Encrypted staged ZIP (working.zip in AppData directory) into working.zip (on Desktop) using rar.exe\nCriteria: powershell.exe executing rar.exe with the -a parameter for a password to use for encryption\nDetection Type:Telemetry(Correlated)\nQuery ID:9EC44B89-9B82-41F2-B11E-D49392853C63", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n) g\nON h.ParentProcessGuid = g.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 1\n AND LOWER(h.CommandLine) LIKE \"%rar.exe%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:579D025B-DFFB-416B-B07A-A36D9CE1EF93", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.NewProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT b.NewProcessId\n FROM apt29Host b\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n ) a\n ON b.ProcessId = a.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) e\n ON f.ProcessId = e.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND NewProcessName LIKE '%powershell.exe'\n) g\nON h.ProcessId = g.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND 
h.EventID = 4688\n AND LOWER(h.CommandLine) LIKE \"%rar.exe%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.7. Data Compressed\nProcedure: Compressed staged ZIP (working.zip in AppData directory) into working.zip (on Desktop) using rar.exe\nCriteria: powershell.exe executing rar.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:FD1AE986-FD91-4B91-8BCE-42C9295949F7", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%powershell.exe'\n) g\nON h.ParentProcessGuid = g.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 1\n AND LOWER(h.CommandLine) LIKE \"%rar.exe%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:8A865709-E762-4A26-BDEC-A762FB37947B", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.NewProcessId\n FROM apt29Host f\n INNER JOIN (\n SELECT d.NewProcessId\n FROM apt29Host d\n INNER JOIN (\n SELECT b.NewProcessId\n FROM apt29Host b\n INNER JOIN (\n SELECT NewProcessId\n FROM apt29Host\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n ) a\n ON b.ProcessId = a.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND NewProcessName LIKE '%python.exe'\n ) c\n ON d.ProcessId = c.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n ) e\n ON f.ProcessId = e.NewProcessId\n WHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND NewProcessName LIKE '%powershell.exe'\n) g\nON h.ProcessId = g.NewProcessId\nWHERE LOWER(Channel) = \"security\"\n AND h.EventID = 4688\n AND LOWER(h.CommandLine) LIKE \"%rar.exe%\"\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.B.8. Exfiltration Over Command and Control Channel\nProcedure: Read and downloaded ZIP (working.zip on Desktop) over C2 channel (192.168.0.5 over TCP port 8443)\nCriteria: python.exe reading the file working.zip while connected to the C2 channel\nDetection Type:None(None)\n9.C.1. 
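Since 9.B.6 and 9.B.7 above both key on rar.exe, listing the distinct command lines makes the archive name and the password switch directly inspectable:

```python
# Sketch: distinct rar.exe command lines spawned anywhere on the host.
spark.sql('''
SELECT DISTINCT CommandLine
FROM apt29Host
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
  AND EventID = 1
  AND LOWER(CommandLine) LIKE "%rar.exe%"
''').show(10, truncate=False, vertical=True)
```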
File Deletion\nProcedure: Deleted rar.exe on disk using SDelete\nCriteria: sdelete64.exe deleting the file rar.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:C20D8999-0B0D-4A50-9CDC-2BAAC4C7B577", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host j\nINNER JOIN (\n SELECT h.ProcessGuid\n FROM apt29Host h\n INNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%cmd.exe'\n ) g\n ON h.ParentProcessGuid = g.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 1\n) i\nON j.ProcessGuid = i.ProcessGuid\nWHERE j.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND j.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.C.2. File Deletion\nProcedure: Deleted working.zip (from Desktop) on disk using SDelete\nCriteria: sdelete64.exe deleting the file \\Desktop\\working.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:CB869916-7BCF-4F9F-8B95-C19B407B91E3", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host j\nINNER JOIN (\n SELECT h.ProcessGuid\n FROM apt29Host h\n INNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%cmd.exe'\n ) g\n ON h.ParentProcessGuid = g.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 1\n) i\nON j.ProcessGuid = i.ProcessGuid\nWHERE j.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND j.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.C.3. 
File Deletion\nProcedure: Deleted working.zip (from AppData directory) on disk using SDelete\nCriteria: sdelete64.exe deleting the file \\AppData\\Roaming\\working.zip\nDetection Type:Telemetry(Correlated)\nQuery ID:59F37185-0BE4-4D81-8B81-FBFBD8055587", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host j\nINNER JOIN (\n SELECT h.ProcessGuid\n FROM apt29Host h\n INNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%cmd.exe'\n ) g\n ON h.ParentProcessGuid = g.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 1\n) i\nON j.ProcessGuid = i.ProcessGuid\nWHERE j.Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND j.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "9.C.4. File Deletion\nProcedure: Deleted SDelete on disk using cmd.exe del command\nCriteria: cmd.exe deleting the file sdelete64.exe\nDetection Type:Telemetry(Correlated)\nQuery ID:0FC62E32-9052-49EB-A5D5-1DF316D634AD", "df = spark.sql(\n'''\nSELECT h.Message\nFROM apt29Host h\nINNER JOIN (\n SELECT f.ProcessGuid\n FROM apt29Host f\n INNER JOIN (\n SELECT d.ProcessGuid\n FROM apt29Host d\n INNER JOIN (\n SELECT b.ProcessGuid\n FROM apt29Host b\n INNER JOIN (\n SELECT ProcessGuid\n FROM apt29Host\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n ) a\n ON b.ParentProcessGuid = a.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND Image LIKE '%python.exe'\n ) c\n ON d.ParentProcessGuid = c.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n ) e\n ON f.ParentProcessGuid = e.ProcessGuid\n WHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND Image LIKE '%cmd.exe'\n) g\nON h.ProcessGuid = g.ProcessGuid\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND h.EventID = 23\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "10.A.1. Service Execution\nProcedure: Executed persistent service (javamtsup) on system startup\nCriteria: javamtsup.exe spawning from services.exe\nDetection Type:Telemetry(None)\nQuery ID:CB9F90C0-93EA-469A-9515-7DF27DF1592A", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 1\n AND ParentImage LIKE '%services.exe'\n AND Image LIKE '%javamtsup.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "Query ID:4DABE602-E648-4C1E-81B3-A2AC96F94CE0", "df = spark.sql(\n'''\nSELECT Message\nFROM apt29Host\nWHERE LOWER(Channel) = \"security\"\n AND EventID = 4688\n AND ParentProcessName LIKE '%services.exe'\n AND NewProcessName LIKE '%javamtsup.exe'\n\n'''\n)\ndf.show(100,truncate = False, vertical = True)", "10.B.1. 
Registry Run Keys / Startup Folder\nProcedure: Executed LNK payload (hostui.lnk) in Startup Folder on user login\nCriteria: Evidence that the file hostui.lnk (which executes hostui.bat as a byproduct) was executed from the Startup Folder\nDetection Type:None(None)\n10.B.2. Execution through API\nProcedure: Executed PowerShell payload via the CreateProcessWithToken API\nCriteria: hostui.exe executing the CreateProcessWithToken API\nDetection Type:None(None)\n10.B.3. Access Token Manipulation\nProcedure: Manipulated the token of the PowerShell payload via the CreateProcessWithToken API\nCriteria: hostui.exe manipulating the token of powershell.exe via the CreateProcessWithToken API OR powershell.exe executing with the stolen token of explorer.exe\nDetection Type:None(None)" ]
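Note: the Sysmon detection queries in this section all repeat the same nested INNER JOIN pattern to walk the process ancestry (children of services.exe, then python.exe, an intermediate hop, then powershell.exe or cmd.exe) before applying a final event filter. A minimal sketch of factoring that pattern into a helper is shown below; it assumes the same SparkSession (`spark`) and `apt29Host` view used by the queries above, and the chain and filter arguments in the example are illustrative rather than part of the original notebook.

```python
# Hedged sketch (not part of the original notebook): build the repeated
# nested-JOIN ancestry pattern programmatically instead of copying it per query.
# Assumes the `apt29Host` view with the Sysmon columns used above.
SYSMON = 'Channel = "Microsoft-Windows-Sysmon/Operational"'

def process_chain_subquery(images):
    """Return SQL selecting ProcessGuid of the last process in an ancestry chain.

    `images` is ordered root-ancestor first; a None entry means "no Image filter"
    for that hop, mirroring the unfiltered middle hop in the queries above.
    """
    sql = (f"SELECT ProcessGuid FROM apt29Host WHERE {SYSMON} "
           f"AND EventID = 1 AND ParentImage LIKE '%{images[0]}'")
    for img in images[1:]:
        image_clause = f" AND c.Image LIKE '%{img}'" if img else ""
        sql = (f"SELECT c.ProcessGuid FROM apt29Host c "
               f"INNER JOIN ({sql}) p ON c.ParentProcessGuid = p.ProcessGuid "
               f"WHERE {SYSMON} AND c.EventID = 1{image_clause}")
    return sql

def chained_event_query(images, final_filter):
    """Join the end of the chain back to apt29Host and apply a final event filter."""
    return (f"SELECT h.Message FROM apt29Host h "
            f"INNER JOIN ({process_chain_subquery(images)}) g "
            f"ON h.ProcessGuid = g.ProcessGuid "
            f"WHERE {SYSMON} AND {final_filter}")

# Example: the Data Staged detection (Sysmon EventID 11, working.zip) rebuilt
# through the helper; the hops mirror the hand-written query above.
# df = spark.sql(chained_event_query(
#         ["services.exe", "python.exe", None, "powershell.exe"],
#         'h.EventID = 11 AND LOWER(h.TargetFilename) LIKE "%working.zip"'))
# df.show(100, truncate=False, vertical=True)
```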
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ituethoslab/navcom-2017
exercises/Week 11-Tooltrack 3/Social media scraping.ipynb
gpl-3.0
[ "import pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline", "Social media scraping 3/3\nWhat have we achieved in the past 2 week?\n1. Sanity checks\nDo them\nSrsly\nE.g. a student email 💬\nMessage from Netvizz\n\nGetting posts between 2017-09-11T00:00:00+0000 and 2017-09-18T23:59:59+0000.\npid: 20446254070 / until:2017-06-19T01:15:00+0000 (100,1835008)\nNo posts were retrieved.\n\nhmm... 🤔\nLet's investigate\nRead a Netvizz output file for scraping the page since beginning of June until mid-November 2017.", "biposts = pd.read_csv('page_20446254070_2017_11_14_15_20_00.tab',\n sep='\\t',\n parse_dates=['post_published'])", "Re-index by dates, resample weekly, and plot counts", "biweeks = biposts.set_index('post_published')\nax = biweeks.resample('W')['post_id'].count().plot(title=\"Posts per week\")", "\"Between 2017-09-11 and 2017-09-18\", and \n\"until 2017-06-19\"", "biweeks = biposts.set_index('post_published')\nax = biweeks.resample('W')['post_id'].count().plot(title=\"Posts per week\")\nax.annotate('\"until\"', xy=('2017-06-19T01:15:00+0000', 100))\nax.annotate('Requested interval', xy=('2017-09-11T00:00:00+0000', 100))\nax.axvline('2017-06-19T01:15:00+0000', linestyle='dotted')\nax.axvspan('2017-09-11T00:00:00+0000', '2017-09-18T23:59:59+0000', alpha=0.3);", "Or with Tableau\n\n\n\n2. Making the new, value-added graphs with fb_scraper\nSee sections 9.1 Analysis: co-reaction graph and 9.2 Analysis: user co-interaction graph\nUse writeGraph in social media scraping 2/3 Notebook\nwrite_graph(myjob1, 'CoReactionGraph')\nwrite_graph(myjob2, 'UserCoInteractionGraph')\n\n3. The other five Netvizz modules\n\ngroup data\npage data\npage like network\npage timeline images\nsearch\nlink stats\n\n4. Examples of social media scraping projects\n5. A round of status updates\n6. What is on your ignorance map?\nRegarding your project, or make an ignorance map of social media scraping" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adriantorrie/adriantorrie.github.io
downloads/notebooks/udacity/deep_learning_foundations_nanodegree/project_1_notes_introduction_to_neural_networks.ipynb
mit
[ "Summary\nNotes taken to help for the first project for the Deep Learning Foundations Nanodegree course dellivered by Udacity.\nMy Github repo for this project can be found here: adriantorrie/udacity_dlfnd_project_1\nTable of Contents\n\n\nNeural network\n\nOutput Formula\nIntuition\nAND / OR perceptron\nNOT perceptron\nXOR perceptron\n\n\n\nActivation functions\n\nSummary\nDeep Learning Book extra notes from Chapter 6: Deep Feedforward Networks\nActivation Formula\nSigmoid\nTanh\nTanh Alternative Formula\nSoftmax\n\n\n\nGradient Descent\n\n\nMultilayer Perceptrons\n\n\nBackpropogation\n\n\nAdditional Reading\n\n\nAdditional Videos\n\n\nVersion Control", "%run ../../../code/version_check.py", "Change Log\nDate Created: 2017-02-06\n\nDate of Change Change Notes\n-------------- ----------------------------------------------------------------\n2017-02-06 Initial draft\n2017-03-23 Formatting changes for online publishing\n\nSetup", "%matplotlib inline\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nplt.style.use('bmh')\nmatplotlib.rcParams['figure.figsize'] = (15, 4)", "[Top]\nNeural network\nOutput Formula\nSynonym\n\nThe predicted value\nThe prediction\n\n\\begin{equation}\n\\hat y_j^\\mu = f \\left(\\Sigma_i w_{ij} x_i^\\mu\\right)\n\\end{equation}\nIntuition\n<img src=\"../../../../images/simple-nn.png\",width=450,height=200>\nAND / OR perceptron\n<img src=\"../../../../images/and-or-perceptron.png\",width=450,height=200>\nNOT perceptron\nThe NOT operations only cares about one input. The other inputs to the perceptron are ignored.\nXOR perceptron\nAn XOR perceptron is a logic gate that outputs 0 if the inputs are the same and 1 if the inputs are different. \n<img src=\"../../../../images/xor-perceptron.png\",width=450,height=200>\n\nActivation functions\nAF Summary\nActivation functions can be for\n * Binary outcomes (2 classes, e.g {True, False})\n * Multiclass outcomes\nBinary activation functions include:\n * Sigmoid\n * Hyperbolic tangent (and the alternative formula provided by LeCun et el, 1998)\n * Rectified linear unit\nMulti-class activation functions include:\n * Softmax\n[Top]\n\nTaken from Deep Learning Book - Chapter 6: Deep Feedforward Networks:\n6.2.2 Output Units\n * Any kind of neural network unit that may be used as an output can also be used as a hidden unit.\n6.3 Hidden Units\n * Rectified linear units are an excellent default choice of hidden unit. (My note: They are not covered in week one)\n6.3.1 Rectified Linear Units and Their Generalizations\n * g(z) = max{0, z}\n * One drawback to rectified linear units is that they cannot learn via gradient-based methods on examples for which their activation is zero.\n * Maxout units generalize rectified linear units further.\n * Maxout units can thus be seen as learning the activation function itself rather than just the relationship between units.\n * Maxout units typically need more regularization than rectified linear units. They can work well without regularization if the training set is large and the number of pieces per unit is kept low.\n * Rectified linear units and all of these generalizations of them are based on the principle that models are easier to optimize if their behavior is closer to linear.\n6.3.2 Logistic Sigmoid and Hyperbolic Tangent\n * ... use as hidden units in feedforward networks is now discouraged.\n * Sigmoidal activation functions are more common in settings other than feed-forward networks. 
Recurrent networks, many probabilistic models, and some auto-encoders have additional requirements that rule out the use of piecewise linear activation functions and make sigmoidal units more appealing despite the drawbacks of saturation.\n\nNote: Saturation as an issue is also in the extra reading, Yes, you should understand backprop, and also raised in Effecient Backprop, LeCun et el., 1998 and was one of the reasons cited for modifying the tanh function in this notebook.\n\n6.6 Historical notes\n * The core ideas behind modern feedforward networks have not changed substantially since the 1980s. The same back-propagation algorithm and the same approaches to gradient descent are still in use\n * Most of the improvement in neural network performance from 1986 to 2015 can be attributed to two factors.\n * larger datasets have reduced the degree to which statistical generalization is a challenge for neural networks\n * neural networks have become much larger, due to more powerful computers, and better software infrastructure\n * However, a small number of algorithmic changes have improved the performance of neural networks\n * ... replacement of mean squared error with the cross-entropy family of loss functions. Cross-entropy losses greatly improved the performance of models with sigmoid and softmax outputs, which had previously suffered from saturation and slow learning when using the mean squared error loss\n * ... replacement of sigmoid hidden units with piecewise linear hidden units, such as rectified linear units\n[Top]\nActivation Formula\n\\begin{equation}\na = f(x) = {sigmoid, tanh, softmax, \\text{or some other function not listed in this set}}\n\\end{equation}\nwhere:\n* $a$ is the activation function transformation of the output from $h$, e.g. apply the sigmoid function to $h$\nand\n\\begin{equation}\nh = \\Sigma_i w_i x_i + b\n\\end{equation}\nwhere:\n * $x_i$ are the incoming inputs. A perceptron can have one or more inputs.\n * $w_i$ are the weights being assigned to the respective incoming inputs\n * $b$ is a bias term\n * $h$ is the sum of the weighted input values + a bias figure\n<img src=\"../../../../images/artificial-neural-network.png\", width=450, height=200>\n[Top]\n\nSigmoid\nSynonyms:\n\nLogistic function\n\nSummary\nA sigmoid function is a mathematical function having an \"S\" shaped curve (sigmoid curve). 
Often, sigmoid function refers to the special case of the logistic function.\nThe sigmoid function is bounded between 0 and 1, and as an output can be interpreted as a probability for success.\nFormula\n\\begin{equation}\n\\text{sigmoid}(x) =\n\\frac{1} {1 + e^{-x}}\n\\end{equation}\n\\begin{equation}\n\\text{logistic}(x) =\n\\frac{L} {1 + e^{-k(x - x_0)}}\n\\end{equation}\nwhere:\n * $L$ = the curve's maximum value\n * $e$ = the natural logarithm base (also known as Euler's number)\n * $x_0$ = the x-value of the sigmoid's midpoint\n * $k$ = the steepness of the curve\nNetwork output from activation\n\\begin{equation}\n\\text{output} = a = f(h) = \\text{sigmoid}(\\Sigma_i w_i x_i + b)\n\\end{equation}\n[Top]\nCode", "def sigmoid(x):\n s = 1 / (1 + np.exp(-x))\n return s\n\ninputs = np.array([2.1, 1.5,])\nweights = np.array([0.2, 0.5,])\nbias = -0.2\n\noutput = sigmoid(np.dot(weights, inputs) + bias)\nprint(output)", "[Top]\nExample", "x = np.linspace(start=-10, stop=11, num=100)\ny = sigmoid(x)\n\nupper_bound = np.repeat([1.0,], len(x))\nsuccess_threshold = np.repeat([0.5,], len(x))\nlower_bound = np.repeat([0.0,], len(x))\n\nplt.plot(\n # upper bound\n x, upper_bound, 'w--',\n \n # success threshold\n x, success_threshold, 'w--',\n \n # lower bound\n x, lower_bound, 'w--',\n \n # sigmoid\n x, y\n)\n\nplt.grid(False)\nplt.xlabel(r'$x$')\nplt.ylabel(r'Probability of success')\nplt.title('Sigmoid Function Example')\nplt.show()", "[Top]\n\nTanh\nSynonyms:\n\nHyperbolic tangent\n\nSummary\nJust as the points (cos t, sin t) form a circle with a unit radius, the points (cosh t, sinh t) form the right half of the equilateral hyperbola.\nThe tanh function is bounded between -1 and 1, and as an output can be interpreted as a probability for success, where the output value:\n * 1 = 100%\n * 0 = 50%\n * -1 = 0%\nThe tanh function creates stronger gradients around zero, and therefore the derivatives are higher than the sigmoid function. Why this is important can apparently be found in Effecient Backprop by LeCun et al (1998). 
Also see this answer on Cross-Validated for a representation of the derivative values.\nFormula\n\\begin{equation}\n\\text{tanh}(x) =\n\\frac{2} {1 + e^{-2x}}\n- 1\n\\end{equation}\n\\begin{equation}\n\\text{tanh}(x) =\n\\frac{\\text{sinh}(x)} {\\text{cosh}(x)}\n\\end{equation}\nwhere:\n * $e$ = the natural logarithm base (also known as Euler's number)\n * $sinh$ is the hyperbolic sine\n * $cosh$ is the hyperbolic cosine\n[Top]\nTanh Alternative Formula\n\\begin{equation}\n\\text{modified tanh}(x) =\n\\text{1.7159 tanh } \\left(\\frac{2}{3}x\\right)\n\\end{equation}\nNetwork output from activation\n\\begin{equation}\n\\text{output} = a = f(h) = \\text{tanh}(\\Sigma_i w_i x_i + b)\n\\end{equation}\n[Top]\nCode", "inputs = np.array([2.1, 1.5,])\nweights = np.array([0.2, 0.5,])\nbias = -0.2\n\noutput = np.tanh(np.dot(weights, inputs) + bias)\nprint(output)", "[Top]\nExample", "x = np.linspace(start=-10, stop=11, num=100)\ny = np.tanh(x)\n\nupper_bound = np.repeat([1.0,], len(x))\nsuccess_threshold = np.repeat([0.0,], len(x))\nlower_bound = np.repeat([-1.0,], len(x))\n\nplt.plot(\n # upper bound\n x, upper_bound, 'w--',\n \n # success threshold\n x, success_threshold, 'w--',\n \n # lower bound\n x, lower_bound, 'w--',\n \n # sigmoid\n x, y\n)\n\nplt.grid(False)\nplt.xlabel(r'$x$')\nplt.ylabel(r'Probability of success (0.00 = 50%)')\nplt.title('Tanh Function Example')\nplt.show()", "[Top]\nAlternative Example", "def modified_tanh(x):\n return 1.7159 * np.tanh((2 / 3) * x)\n \nx = np.linspace(start=-10, stop=11, num=100)\ny = modified_tanh(x)\n\nupper_bound = np.repeat([1.75,], len(x))\nsuccess_threshold = np.repeat([0.0,], len(x))\nlower_bound = np.repeat([-1.75,], len(x))\n\nplt.plot(\n # upper bound\n x, upper_bound, 'w--',\n \n # success threshold\n x, success_threshold, 'w--',\n \n # lower bound\n x, lower_bound, 'w--',\n \n # sigmoid\n x, y\n)\n\nplt.grid(False)\nplt.xlabel(r'$x$')\nplt.ylabel(r'Probability of success (0.00 = 50%)')\nplt.title('Alternative Tanh Function Example')\nplt.show()", "[Top]\n\nSoftmax\nSynonyms:\n\nNormalized exponential function\nMultinomial logistic regression\n\nSummary\nSoftmax regression is interested in multi-class classification (as opposed to only binary classification when using the sigmoid and tanh functions), and so the label $y$ can take on $K$ different values, rather than only two.\nIs often used as the output layer in multilayer perceptrons to allow non-linear relationships to be learnt for multiclass problems.\nFormula\nFrom Deep Learning Book - Chapter 4: Numerical Computation\n\\begin{equation}\n\\text{softmax}(x)i =\n\\frac{\\text{exp}(x_i)} {\\sum{j=1}^n \\text{exp}(x_j)}\n\\end{equation}\nNetwork output from activation (incomplete)\n\\begin{equation}\n\\text{output} = a = f(h) = \\text{softmax}()\n\\end{equation}\n[Top]\nCode\nLink for a good discussion on SO regarding Python implementation of this function, from which the code below code was taken from.", "def softmax(X):\n assert len(X.shape) == 2\n s = np.max(X, axis=1)\n s = s[:, np.newaxis] # necessary step to do broadcasting\n e_x = np.exp(X - s)\n div = np.sum(e_x, axis=1)\n div = div[:, np.newaxis] # dito\n return e_x / div\n\nX = np.array([[1, 2, 3, 6],\n [2, 4, 5, 6],\n [3, 8, 7, 6]])\ny = softmax(X)\ny\n\n# compared to tensorflow implementation\nbatch = np.asarray([[1,2,3,6], [2,4,5,6], [3, 8, 7, 6]])\nx = tf.placeholder(tf.float32, shape=[None, 4])\ny = tf.nn.softmax(x)\n\ninit = tf.global_variables_initializer()\nsess = tf.Session()\n\nsess.run(y, feed_dict={x: batch})", 
"[Top]\nGradient Descent\nLearning weights\nWhat if you want to perform an operation, such as predicting college admission, but don't know the correct weights? You'll need to learn the weights from example data, then use those weights to make the predictions.\nWe need a metric of how wrong the predictions are, the error.\nSum of squared errors (SSE)\n\\begin{equation}\nE =\n\\frac{1}{2} \\Sigma_u \\Sigma_j \\left [ y_j ^ \\mu - \\hat y_j^ \\mu \\right ] ^ 2\n\\end{equation}\nwhere (neural network prediction):\n\\begin{equation}\n\\hat y_j^\\mu = \nf \\left(\\Sigma_i w_{ij} x_i^\\mu\\right)\n\\end{equation}\ntherefore:\n\\begin{equation}\nE =\n\\frac{1}{2} \\Sigma_u \\Sigma_j \\left [ y_j ^ \\mu - f \\left(\\Sigma_i w_{ij} x_i^\\mu\\right) \\right ] ^ 2\n\\end{equation}\nGoal\nFind weights $w_{ij}$ that minimize the squared error $E$.\nHow? Gradient descent.\n[Top]\nGradient Descent Formula\n\\begin{equation}\n\\Delta w_{ij} = \\eta (y_j - \\hat y_j) f^\\prime (h_j) x_i\n\\end{equation}\nremembering $h_j$ is the input to the output unit $j$:\n\\begin{equation}\nh = \\sum_i w_{ij} x_i\n\\end{equation}\nwhere: \n * $(y_j - \\hat y_j)$ is the prediction error.\n * The larger this error is, the larger the gradient descent step should be.\n * $f^\\prime (h_j)$ is the gradient\n * If the gradient is small, then a change in the unit input $h_j$ will have a small effect on the error.\n * This term produces larger gradient descent steps for units that have larger gradients\nThe errors can be rewritten as:\n\\begin{equation}\n\\delta_j = (y_j - \\hat y_j) f^\\prime (h_j)\n\\end{equation}\nGiving the gradient step as:\n\\begin{equation}\n\\Delta w_{ij} = \\eta \\delta_j x_i\n\\end{equation}\nwhere:\n * $\\Delta w_{ij}$ is the (delta) change to the $i$th $j$th weight\n * $\\eta$ (eta) is the learning rate\n * $\\delta_j$ (delta j) is the prediction errors\n * $x_i$ is the input\n[Top]\nAlgorithm\n\nSet the weight step to zero: $\\Delta w_i = 0$\nFor each record in the training data:\nMake a forward pass through the network, calculating the output $\\hat y = f(\\Sigma_i w_i x_i)$\nCalculate the error gradient in the output unit, $\\delta = (y − \\hat y) f^\\prime(\\Sigma_i w_i x_i)$\nUpdate the weight step $\\Delta w_i= \\Delta w_i + \\delta x_i$\n\n\nUpdate the weights $w_i = w_i + \\frac{\\eta \\Delta w_i} {m}$ where:\nη is the learning rate\n$m$ is the number of records\nHere we're averaging the weight steps to help reduce any large variations in the training data.\n\n\nRepeat for $e$ epochs.\n\n[Top]\nCode", "# Defining the sigmoid function for activations\ndef sigmoid(x):\n return 1 / ( 1 + np.exp(-x))\n\n# Derivative of the sigmoid function\ndef sigmoid_prime(x):\n return sigmoid(x) * (1 - sigmoid(x))\n\nx = np.array([0.1, 0.3])\ny = 0.2\nweights = np.array([-0.8, 0.5])\n# probably use a vector named \"w\" instead of a name like this\n# to make code look more like algebra\n\n# The learning rate, eta in the weight step equation\nlearnrate = 0.5\n\n# The neural network output\nnn_output = sigmoid(x[0] * weights[0] + x[1] * weights[1])\n# or nn_output = sigmoid(np.dot(weights, x))\n\n# output error\nerror = y - nn_output\n\n# error gradient\nerror_gradient = error * sigmoid_prime(np.dot(x, weights))\n# sigmoid_prime(x) is equal to -> nn_output * (1 - nn_output) \n\n# Gradient descent step\ndel_w = [ learnrate * error_gradient * x[0],\n learnrate * error_gradient * x[1]]\n# or del_w = learnrate * error_gradient * x", "[Top]\nCaveat\nGradient descent is reliant on beginnning weight values. 
If incorrect could result in convergergance occuring in a local minima, not a global minima. Random weights can be used.\nMomentum Term\nThe momentum term increases for dimensions whose gradients point in the same directions and reduces updates for dimensions whose gradients change directions. As a result, we gain faster convergence and reduced oscillation.\n[Top]\n\nMultilayer Perceptrons\nSynonyms\n\nMLP (just an acronym)\n\nNumpy column vector\nNumpy arays are row vectors by default, and the input_features.T (transpose) transform still leaves it as a row vector. Instead we have to use (use this one, makes more sense):\ninput_features = input_features[:, None]\n\nAlternatively you can create an array with two dimensions then transpose it:\ninput_features = np.array(input_features ndim=2).T\n\n[Top] \nCode example setting up a MLP", "# network size is a 4x3x2 network\nn_input = 4\nn_hidden = 3\nn_output = 2\n\n# make some fake data\nnp.random.seed(42)\nx = np.random.randn(4)\n\nweights_in_hidden = np.random.normal(0, scale=0.1, size=(n_input, n_hidden))\nweights_hidden_out = np.random.normal(0, scale=0.1, size=(n_hidden, n_output))\n\nprint('x shape\\t\\t\\t= {}'.format(x.shape))\nprint('weights_in_hidden shape\\t= {}'.format(weights_in_hidden.shape))\nprint('weights_hidden_out\\t= {}'.format(weights_hidden_out.shape))", "[Top]\n\nBackpropogation\nWhen doing the feed forward pass we are taking the inputs and using the weights (which are randomly assigned) to gain an output. In backprop you can view this as the errors (difference between prediction and actual expected value) being passed through the network using the weights again.\n[Top]\nWorked example\n<img src=\"../../../../images/backprop-network.png\", width=150>\n * Two layer network (Inputs not considered a layer, 1 hidden layer, 1 output layer):\n * Assume target outcome is $y = 1$.\n * Forward pass:\n * Sigmoid input = $h = \\Sigma_i w_i x_i = 0.1 \\times 0.4 - 0.2 \\times 0.3 = -0.02$\n * Activtion function = $a = f(h) = \\text{sigmoid}(-0.02) = 0.495$\n * Predicted output = $ \\hat y = f(W \\cdot a) = \\text{sigmoid}(0.1 \\times 0.495) = 0.512$\n * Backwards pass:\n * Sigmoid derivate = $f^\\prime (W \\cdot a) = f(W \\cdot a)(1 - f(W \\cdot a))$\n * Output error = $\\delta^o = (y - \\hat y)f^\\prime(W \\cdot a) = (1 - 0.512) \\times 0.512 \\times (1 - 0.512) = 0.122$\n * Usual hidden units error = $\\delta_j^h = \\Sigma_k W_{jk} \\delta_k^o f^\\prime(h_j)$\n * This example single hidden unit error = $\\delta^h = W \\delta^o f^\\prime(h) = 0.1 \\times 0.122 \\times 0.495 \\times (1 - 0.495) = 0.003$\n * Gradient descent step (output to hidden) = $\\Delta W = \\eta \\delta^o a = 0.5 \\times 0.122 \\times 0.495 = 0.0302$\n * Gradient descent step (hidden to inputs) = $\\Delta w_i = \\eta \\delta^h x_i = (0.5 \\times 0.003 \\times 0.1, 0.5 \\times 0.003 \\times 0.3) = (0.00015, 0.00045)$\nFrom this example, you can see one of the effects of using the sigmoid function for the activations. The maximum derivative of the sigmoid function is 0.5, so the errors in the output layer get scaled by at least half, and errors in the hidden layer are scaled down by at least a quarter. 
You can see that if you have a lot of layers, using a sigmoid activation function will quickly reduce the weight steps to tiny values in layers near the input.\n[Top]\nBackprop algorithm for updating the weights\n\nSet the weight steps for each layer to zero\nThe input to hidden weights $\\Delta w_{ij} = 0$\nThe hidden to output weights $\\Delta W_j = 0$\n\n\nFor each record in the training data:\nMake a forward pass through the network, calculating the output $\\hat y$ \nCalculate the error gradient in the output unit, $\\delta^o = (y − \\hat y)f^\\prime(z)$\nWhere the input to the output unit = $z = \\Sigma_j W_j a_j$,\n\n\nPropagate the errors to the hidden layer $\\delta_j^h = \\delta^o W_j f^\\prime(h_j)$\n\n\nUpdate the weight steps:\n$\\Delta W_j = \\Delta W_j + \\delta^o a_j$\n$\\Delta w_{ij} = \\Delta w_{ij} + \\delta_j^h a_i$\n\n\nUpdate the weights, where $\\eta$ is the learning rate and $m$ is the number of records:\n$W_j = W_j + \\frac {\\eta \\Delta W_j} {m}$\n$w_{ij} = w_{ij} + \\frac {\\eta \\Delta w_{ij}} {m}$\n\n\nRepeat for $e$ epochs.\n\n[Top]\n\nAdditional Reading\n\nUnderstanding the backward pass through Batch Normalization Layer\nYes, you should understand backprop\nEfficient Backprop, LeCun et al., 1998\nGood intuition of momentum term in gradient descent\nDeep Learning Book - Chapter 4: Numerical Computation\nDeep Learning Book - Chapter 6: Deep Feedforward Networks\n\nAdditional Videos\n\nKhan Academy: Gradient of multivariable function\nKhan Academy: Chain Rule Introduction\nYouTube: CS231n Winter 2016 Lecture 4 Backpropagation\nHistory of Deep Learning\n\n[Top]" ]
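The worked example above can be checked end to end with a few lines of NumPy. This is a verification sketch, not course code; the inputs, weights, target, and learning rate are the values stated in the worked example.

```python
# Reproduce the backprop worked example above with plain NumPy.
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

x = np.array([0.1, 0.3])     # inputs
w = np.array([0.4, -0.2])    # input -> hidden weights
W = 0.1                      # hidden -> output weight
y = 1.0                      # target
eta = 0.5                    # learning rate

# Forward pass
h = np.dot(w, x)             # -0.02
a = sigmoid(h)               # ~0.495
y_hat = sigmoid(W * a)       # ~0.512

# Backward pass
delta_o = (y - y_hat) * y_hat * (1 - y_hat)   # output error, ~0.122
delta_h = W * delta_o * a * (1 - a)           # hidden unit error, ~0.003
delta_W = eta * delta_o * a                   # hidden -> output step, ~0.0302
delta_w = eta * delta_h * x                   # input -> hidden steps, ~(0.00015, 0.00045)

print(y_hat, delta_o, delta_h, delta_W, delta_w)
```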
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
OSGeo-live/CesiumWidget
GSOC/notebooks/Projects/CARTOPY/00 Using cartopy with matplotlib.ipynb
apache-2.0
[ "Beautifully simple maps\nCartopy has exposed an interface to enable easy map creation using matplotlib. Creating a basic map is as simple as telling matplotlib to use a specific map projection, and then adding some coastlines to the axes:", "%matplotlib inline\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(12, 12))\nax = plt.axes(projection=ccrs.PlateCarree())\nax.coastlines();", "A list of the available projections to be used with matplotlib can be found on the Cartopy projection list notebook.\nThe line plt.axes(projection=ccrs.PlateCarree()) sets up a GeoAxes instance which exposes a variety of other map related methods, in the case of the previous example, we used the coastlines() method to add coastlines to the map.\nLets create another map in a different projection, and make use of the stock_img() method to add an underlay image to the map:", "import cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\n\nplt.figure(figsize=(12, 12))\nax = plt.axes(projection=ccrs.Mollweide())\nax.stock_img();", "Adding data to the map\nOnce you have the map just the way you want it, data can be added to it in exactly the same way as with normal matplotlib axes. By default, the coordinate system of any data added to a GeoAxes is the same as the coordinate system of the GeoAxes itself, to control which coordinate system that the given data is in, you can add the transform keyword with an appropriate cartopy.crs.CRS instance:", "%matplotlib inline\nimport cartopy.crs as ccrs\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(12, 12))\nax = plt.axes(projection=ccrs.PlateCarree())\nax.stock_img()\n\nny_lon, ny_lat = -75, 43\ndelhi_lon, delhi_lat = 77.23, 28.61\n\nplt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],\n color='blue', linewidth=2, marker='o',\n transform=ccrs.Geodetic(),\n )\n\nplt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],\n color='gray', linestyle='--',\n transform=ccrs.PlateCarree(),\n )\n\nplt.text(ny_lon - 3, ny_lat - 12, 'New York',\n horizontalalignment='right',\n transform=ccrs.Geodetic())\n\nplt.text(delhi_lon + 3, delhi_lat - 12, 'Delhi',\n horizontalalignment='left',\n transform=ccrs.Geodetic());", "Notice how the line in blue between New York and Delhi is not straight on a flat PlateCarree map, this is because the Geodetic coordinate system is a truly spherical coordinate system, where a line between two points is defined as the shortest path between those points on the globe rather than 2d Cartesian space." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
johnpfay/environ859
07_DataWrangling/Geopandas/CreateStatePSUT.ipynb
gpl-3.0
[ "Create State Physical Supply Usage Table (PSUT)\nUses values in the WaterBalanceData.csv file to populate values in the StatePSUT.xlsx file\nRequires the openyxl module: https://openpyxl.readthedocs.io/en/default/", "#Import modules\nimport sys, os\nimport pandas as pd\nfrom openpyxl import load_workbook", "Get/Set the filenames required", "#Set the location of the data directory\ndataDir = '../../Data'\n#Get the water balance input csv file\ninDataFN = dataDir + os.sep + 'StateData' + os.sep + 'la_2010.csv'\n#Get the template\ninXlsxFN = dataDir + os.sep + 'Templates' + os.sep + 'StatePSUTTemplate.xlsx' #The template that will be filled in\n\n#Load the water balance data into a pandas dataframe\ndfData = pd.read_csv(inDataFN)", "Below we set the field to column mappings. The number on the right of the '=' refers to the column in the template in which the field on the left occurs.", "#Row and column indices\n#--Columns--\nAq = 5 #Aquaculture\nDo = 19 #Domestic\nIn = 16 #Industrial\nIc = 2 #Irrigation-cropland\nIg = 17 #Irrigation-golf courses\nLi = 3 #Livestock\nMi = 7 #Mining\nPS = 14 #Public supply\nTC = 10 #Thermoelectric-once thru\nTR = 9 #Thermoelectric-recirculated\nSupply = 20 #Environment\n#--Rows--\nSf = 21 #Surface\nGw = 22 #Groundwater\n\n#Create the dictionary of GroupNames and cell locations\ncelLocs = {'Aquaculture_Surface':(Sf,Aq),\n 'Aquaculture_Groundwater':(Gw,Aq),\n 'Domestic_Surface':(Sf,Do),\n 'Domestic_Groundwater':(Gw,Do),\n 'Industrial_Surface':(Sf,In),\n 'Industrial_Groundwater':(Gw,In),\n 'Irrigation_Crop_Surface':(Sf,Ic),\n 'Irrigation_Crop_Groundwater':(Gw,Ic),\n 'Irrigation_Golf_Surface':(Sf,Ig),\n 'Irrigation_Golf_Groundwater':(Gw,Ig),\n 'Livestock_Surface':(Sf,Li),\n 'Livestock_Groundwater':(Gw,Li),\n 'Mining_Surface':(Sf,Mi),\n 'Mining_Groundwater':(Gw,Mi),\n 'PublicSupply_Surface':(Sf,PS),\n 'PublicSupply_Groundwater':(Gw,PS),\n 'ThermoElec_OnceThru_Surface':(Sf,TC),\n 'ThermoElec_OnceThru_Groundwater':(Gw,TC),\n 'ThermoElec_Recirc_Surface':(Sf,TR),\n 'ThermoElec_Recirc_Groundwater':(Gw,TR),\n 'Supply':(4,12)\n } \n\n#Create the workbook object\nwb = load_workbook(filename = inXlsxFN)\n\nfor year in (2000,2005, 2010):\n #Get the year worksheet in the workbook\n ws = wb.get_sheet_by_name(str(year))\n \n #Label the sheet\n ws.cell(column=1,row=1,value=\"US Water Balance: {}. Values in MGal/Year\".format(year))\n \n #use the dictionary to insert values\n for name, cellLoc in celLocs.items():\n #Get the value for selected year\n val = dfData[(dfData.Group == name) & (dfData.YEAR == year)]['MGal'].iloc[0]\n #insert it into the Excel file\n ws.cell(column = cellLoc[1],row = cellLoc[0],value = val)\n\nwb.save(dataDir+os.sep+'BalanceSheet.xlsx')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
scottlittle/solar-sensors
IPnotebooks/important-IPNBs/.ipynb_checkpoints/all-datasets-together-checkpoint.ipynb
apache-2.0
[ "Summon any data\nI want to make a single query and have it return data across the datasets", "from datetime import datetime,timedelta, time\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom data_helper_functions import *\nfrom IPython.display import display\npd.options.display.max_columns = 999\n%matplotlib inline\n\ndesired_channel = 'BAND_01'\ndesired_date = datetime(2014, 4, 1)\ndesired_timedelta = timedelta(hours = 15)\ndesired_datetime = desired_date + desired_timedelta\nsatellite_filefolder = '../../data/satellite/colorado/summer6months/data/'\nsensor_filefolder = '../../data/sensor_data/colorado6months/'\npvoutput_filefolder = '../../data/pvoutput/pvoutput6months/'\n\n#satellite data\nsatellite_filename = find_filename(desired_datetime, desired_channel, satellite_filefolder)\nlons, lats, data = return_satellite_data(satellite_filename, satellite_filefolder)\n\n\nplt.figure(figsize=(8, 8))\nimgplot = plt.imshow(data)\nimgplot.set_interpolation('none')\nplt.savefig('foo.png')\nplt.show()\n\n#sensor data\nsensor_filename = find_file_from_date(desired_date, sensor_filefolder)\ndf_sensor = return_sensor_data(sensor_filename, sensor_filefolder)\ndf_sensor[df_sensor.index == desired_datetime]\ndisplay(df_sensor[df_sensor.index == desired_datetime])\n\n#pvoutput data\npvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)\ndf_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)\ndisplay(df_pvoutput[df_pvoutput.index == desired_datetime])\n\n#saving df to image\n\n# a = Image(data=df_sensor)\n# type(a)", "Build up sensor to pvoutput model", "from datetime import datetime,timedelta, time\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom data_helper_functions import *\nfrom IPython.display import display\npd.options.display.max_columns = 999\n%matplotlib inline\n\n#iterate over datetimes:\nmytime = datetime(2014, 4, 1, 13)\ntimes = make_time(mytime)\n\n# Now that we can call data up over any datetime and we have a list of interested datetimes,\n# we can finally construct an X matrix and y vector for regression.\n\nsensor_filefolder = 'data/sensor_data/colorado6months/'\npvoutput_filefolder = 'data/pvoutput/pvoutput6months/'\n\nX = [] #Sensor values\ny = [] #PVOutput\n\nfor desired_datetime in times:\n \n try: #something wrong with y on last day\n desired_date = (desired_datetime - timedelta(hours=6)).date() #make sure correct date\n desired_date = datetime.combine(desired_date, time.min) #get into datetime format\n\n sensor_filename = find_file_from_date(desired_date, sensor_filefolder)\n df_sensor = return_sensor_data(sensor_filename, sensor_filefolder).ix[:,-15:-1]\n df_sensor[df_sensor.index == desired_datetime]\n\n pvoutput_filename = find_file_from_date(desired_date, pvoutput_filefolder)\n df_pvoutput = return_pvoutput_data(pvoutput_filename, pvoutput_filefolder)\n \n y.append(df_pvoutput[df_pvoutput.index == desired_datetime].values[0][0])\n X.append(df_sensor[df_sensor.index == desired_datetime].values[0])\n except:\n pass\n\nX = np.array(X)\ny = np.array(y)\n\nprint X.shape\nprint y.shape", "...finally ready to model!\nRandom Forest", "from sklearn.cross_validation import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=99)\n\nfrom sklearn.ensemble import RandomForestRegressor\nrfr = RandomForestRegressor(oob_score = True)\n\nrfr.fit(X_train,y_train)\n\ny_pred = 
rfr.predict(X_test)\n\nrfr.score(X_test,y_test)\n\ndf_sensor.columns.values.shape\n\nsorted_mask = np.argsort(rfr.feature_importances_)\n\nfor i in zip(df_sensor.columns.values,rfr.feature_importances_[sorted_mask])[::-1]:\n print i", "Linear model", "#now do a linear model and compare:\nfrom sklearn.linear_model import LinearRegression\nlr = LinearRegression()\nlr.fit(X_train,y_train)\nlr.score(X_test,y_test)\n\nsorted_mask = np.argsort(lr.coef_)\n\nfor i in zip(df_sensor.columns.values,lr.coef_[sorted_mask])[::-1]:\n print i\n\ndf_sensor.ix[:,-15:-1].head() #selects photometer and AOD, \n# useful in next iteration of using sensor data to fit", "When only keeping the photometer data, random forest and linear model do pretty similar. When I added all of the sensor instruments to the fit, rfr scored 0.87 and lr scored negative!\nAlso, I threw away the mysterious \"Research 2\" sensor, that was probably just a solar panel! I asked NREL what it is, so we'll see. If it turns out to be a solar panel, then I can do some feature engineering with the sensor data by simulating a solar panel!\nNeural Net Exploration", "import pandas as pd\nimport numpy as np\nfrom sklearn.preprocessing import scale\nfrom lasagne import layers\nfrom lasagne.nonlinearities import softmax, rectify, sigmoid, linear, very_leaky_rectify, tanh\nfrom lasagne.updates import nesterov_momentum, adagrad, momentum\nfrom nolearn.lasagne import NeuralNet\nimport theano\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\ny = y.astype('float32')\nx = X.astype('float32')\nscaler = StandardScaler()\nscaled_x = scaler.fit_transform(x)\nx_train, x_test, y_train, y_test = train_test_split(scaled_x, y, test_size = 0.2, random_state = 12)\n\nnn_regression = NeuralNet(layers=[('input', layers.InputLayer),\n# ('hidden1', layers.DenseLayer),\n# ('hidden2', layers.DenseLayer),\n ('output', layers.DenseLayer)\n ],\n\n # Input Layer\n input_shape=(None, x.shape[1]),\n\n # hidden Layer\n# hidden1_num_units=512,\n# hidden1_nonlinearity=softmax,\n \n # hidden Layer\n# hidden2_num_units=128,\n# hidden2_nonlinearity=linear,\n\n # Output Layer\n output_num_units=1,\n output_nonlinearity=very_leaky_rectify,\n\n # Optimization\n update=nesterov_momentum,\n update_learning_rate=0.03,#0.02\n update_momentum=0.8,#0.8\n max_epochs=600, #was 100\n\n # Others\n #eval_size=0.2,\n regression=True,\n verbose=0,\n )\n\nnn_regression.fit(x_train, y_train)\ny_pred = nn_regression.predict(x_test)\nnn_regression.score(x_test, y_test)\n\nval = 11\nprint y_pred[val][0]\nprint y_test[val]\n\nplt.plot(y_pred,'ro')\n\nplt.plot(y_test,'go')", "Extra Trees!", "from sklearn.ensemble import ExtraTreesRegressor\netr = ExtraTreesRegressor(oob_score=True, bootstrap=True,\n n_jobs=-1, n_estimators=1000) #nj_obs uses all cores!\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=99)\n\netr.fit(X_train, y_train)\n\nprint etr.score(X_test,y_test)\nprint etr.oob_score_\n\ny_pred = etr.predict(X_test)\n\nfrom random import randint\nval = randint(0,y_test.shape[0])\nprint y_pred[val]\nprint y_test[val]\n\nprint X.shape\nprint y.shape", "Save this thing and try it out on the simulated sensors!", "from sklearn.externals import joblib\njoblib.dump(etr, 'data/sensor-to-power-model/sensor-to-power-model.pkl') \n\nnp.savez_compressed('data/y.npz',y=y) #save y" ]
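To confirm that the persisted model and target vector above round-trip correctly, they can be loaded back and re-scored on the same split. A short sketch, using the notebook's own (older) sklearn imports and assuming the X array built earlier is still in memory:

```python
# Minimal sketch: reload the saved ExtraTrees model and y vector, then re-score
# on the same train/test split (same test_size and random_state as above).
import numpy as np
from sklearn.externals import joblib
from sklearn.cross_validation import train_test_split

etr_loaded = joblib.load('data/sensor-to-power-model/sensor-to-power-model.pkl')
y_saved = np.load('data/y.npz')['y']

X_train, X_test, y_train, y_test = train_test_split(
    X, y_saved, test_size=0.2, random_state=99)

print(etr_loaded.score(X_test, y_test))  # should match the score printed earlier
```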
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jmhsi/justin_tinker
data_science/courses/deeplearning2/seq2seq-translation.ipynb
apache-2.0
[ "Requirements", "import unicodedata, string, re, random, time, math, torch, torch.nn as nn\nfrom torch.autograd import Variable\nfrom torch import optim\nimport torch.nn.functional as F\nimport keras, numpy as np\n\nfrom keras.preprocessing import sequence", "Loading data files\nThe data for this project is a set of many thousands of English to French translation pairs.\nThis question on Open Data Stack Exchange pointed me to the open translation site http://tatoeba.org/ which has downloads available at http://tatoeba.org/eng/downloads - and better yet, someone did the extra work of splitting language pairs into individual text files here: http://www.manythings.org/anki/\nThe English to French pairs are too big to include in the repo, so download to data/fra.txt before continuing. The file is a tab separated list of translation pairs:\nI am cold. Je suis froid.\nWe'll need a unique index per word to use as the inputs and targets of the networks later. To keep track of all this we will use a helper class called Lang which has word &rarr; index (word2index) and index &rarr; word (index2word) dictionaries, as well as a count of each word word2count to use to later replace rare words.", "SOS_token = 0\nEOS_token = 1\n\nclass Lang:\n def __init__(self, name):\n self.name = name\n self.word2index = {}\n self.word2count = {}\n self.index2word = {0: \"SOS\", 1: \"EOS\"}\n self.n_words = 2 # Count SOS and EOS\n \n def addSentence(self, sentence):\n for word in sentence.split(' '):\n self.addWord(word)\n\n def addWord(self, word):\n if word not in self.word2index:\n self.word2index[word] = self.n_words\n self.word2count[word] = 1\n self.index2word[self.n_words] = word\n self.n_words += 1\n else:\n self.word2count[word] += 1", "The files are all in Unicode, to simplify we will turn Unicode characters to ASCII, make everything lowercase, and trim most punctuation.", "# Turn a Unicode string to plain ASCII, thanks to http://stackoverflow.com/a/518232/2809427\ndef unicodeToAscii(s):\n return ''.join(\n c for c in unicodedata.normalize('NFD', s)\n if unicodedata.category(c) != 'Mn'\n )\n\n# Lowercase, trim, and remove non-letter characters\ndef normalizeString(s):\n s = unicodeToAscii(s.lower().strip())\n s = re.sub(r\"([.!?])\", r\" \\1\", s)\n s = re.sub(r\"[^a-zA-Z.!?]+\", r\" \", s)\n return s", "To read the data file we will split the file into lines, and then split lines into pairs. The files are all English &rarr; Other Language, so if we want to translate from Other Language &rarr; English I added the reverse flag to reverse the pairs.", "def readLangs(lang1, lang2, pairs_file, reverse=False):\n print(\"Reading lines...\")\n\n # Read the file and split into lines\n lines = open('data/%s' % (pairs_file)).read().strip().split('\\n')\n \n # Split every line into pairs and normalize\n pairs = [[normalizeString(s) for s in l.split('\\t')] for l in lines]\n \n # Reverse pairs, make Lang instances\n if reverse:\n pairs = [list(reversed(p)) for p in pairs]\n input_lang = Lang(lang2)\n output_lang = Lang(lang1)\n else:\n input_lang = Lang(lang1)\n output_lang = Lang(lang2)\n \n return input_lang, output_lang, pairs", "Since there are a lot of example sentences and we want to train something quickly, we'll trim the data set to only relatively short and simple sentences. Here the maximum length is 10 words (that includes ending punctuation) and we're filtering to sentences that translate to the form \"I am\" or \"He is\" etc. 
(accounting for apostrophes replaced earlier).", "MAX_LENGTH = 10\n\neng_prefixes = (\n \"i am \", \"i m \",\n \"he is\", \"he s \",\n \"she is\", \"she s\",\n \"you are\", \"you re \",\n \"we are\", \"we re \",\n \"they are\", \"they re \"\n)\n\ndef filterPair(p):\n return len(p[0].split(' ')) < MAX_LENGTH and \\\n len(p[1].split(' ')) < MAX_LENGTH and \\\n p[1].startswith(eng_prefixes)\n\ndef filterPairs(pairs):\n return [pair for pair in pairs if filterPair(pair)]", "The full process for preparing the data is:\n\nRead text file and split into lines, split lines into pairs\nNormalize text, filter by length and content\nMake word lists from sentences in pairs", "def prepareData(lang1, lang2, pairs_file, reverse=False):\n input_lang, output_lang, pairs = readLangs(lang1, lang2, pairs_file, reverse)\n print(\"Read %s sentence pairs\" % len(pairs))\n pairs = filterPairs(pairs)\n print(\"Trimmed to %s sentence pairs\" % len(pairs))\n print(\"Counting words...\")\n for pair in pairs:\n input_lang.addSentence(pair[0])\n output_lang.addSentence(pair[1])\n print(\"Counted words:\")\n print(input_lang.name, input_lang.n_words)\n print(output_lang.name, output_lang.n_words)\n return input_lang, output_lang, pairs\n\ninput_lang, output_lang, pairs = prepareData('eng', 'fra', 'fra.txt', True)\nprint(random.choice(pairs))\n\ndef indexesFromSentence(lang, sentence):\n return [lang.word2index[word] for word in sentence.split(' ')]+[EOS_token]\n\ndef variableFromSentence(lang, sentence):\n indexes = indexesFromSentence(lang, sentence)\n return Variable(torch.LongTensor(indexes).unsqueeze(0))\n\ndef variablesFromPair(pair):\n input_variable = variableFromSentence(input_lang, pair[0])\n target_variable = variableFromSentence(output_lang, pair[1])\n return (input_variable, target_variable)\n\ndef index_and_pad(lang, dat):\n return sequence.pad_sequences([indexesFromSentence(lang, s) \n for s in dat], padding='post').astype(np.int64)\n\nfra, eng = list(zip(*pairs))\n\nfra = index_and_pad(input_lang, fra)\neng = index_and_pad(output_lang, eng)\n\ndef get_batch(x, y, batch_size=16):\n idxs = np.random.permutation(len(x))[:batch_size]\n return x[idxs], y[idxs]", "The Encoder\nThe encoder of a seq2seq network is a RNN that outputs some value for every word from the input sentence. For every input word the encoder outputs a vector and a hidden state, and uses the hidden state for the next input word.", "class EncoderRNN(nn.Module):\n def __init__(self, input_size, hidden_size, n_layers=1):\n super(EncoderRNN, self).__init__()\n self.hidden_size = hidden_size\n self.embedding = nn.Embedding(input_size, hidden_size)\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)\n \n def forward(self, input, hidden):\n output, hidden = self.gru(self.embedding(input), hidden)\n return output, hidden\n\n # TODO: other inits\n def initHidden(self, batch_size):\n return Variable(torch.zeros(1, batch_size, self.hidden_size))", "Simple Decoder\nIn the simplest seq2seq decoder we use only last output of the encoder. This last output is sometimes called the context vector as it encodes context from the entire sequence. This context vector is used as the initial hidden state of the decoder.\nAt every step of decoding, the decoder is given an input token and hidden state. 
The initial input token is the start-of-string &lt;SOS&gt; token, and the first hidden state is the context vector (the encoder's last hidden state).", "class DecoderRNN(nn.Module):\n def __init__(self, hidden_size, output_size, n_layers=1):\n super(DecoderRNN, self).__init__()\n self.embedding = nn.Embedding(output_size, hidden_size)\n self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True, num_layers=n_layers)\n # TODO use transpose of embedding\n self.out = nn.Linear(hidden_size, output_size)\n self.sm = nn.LogSoftmax()\n \n def forward(self, input, hidden):\n emb = self.embedding(input).unsqueeze(1)\n # NB: Removed relu\n res, hidden = self.gru(emb, hidden)\n output = self.sm(self.out(res[:,0]))\n return output, hidden", "Attention Decoder\nIf only the context vector is passed betweeen the encoder and decoder, that single vector carries the burden of encoding the entire sentence. \nAttention allows the decoder network to \"focus\" on a different part of the encoder's outputs for every step of the decoder's own outputs. First we calculate a set of attention weights. These will be multiplied by the encoder output vectors to create a weighted combination. The result (called attn_applied in the code) should contain information about that specific part of the input sequence, and thus help the decoder choose the right output words.\n\nCalculating the attention weights is done with another feed-forward layer attn, using the decoder's input and hidden state as inputs. Because there are sentences of all sizes in the training data, to actually create and train this layer we have to choose a maximum sentence length (input length, for encoder outputs) that it can apply to. Sentences of the maximum length will use all the attention weights, while shorter sentences will only use the first few.", "class AttnDecoderRNN(nn.Module):\n def __init__(self, hidden_size, output_size, n_layers=1, dropout_p=0.1, max_length=MAX_LENGTH):\n super(AttnDecoderRNN, self).__init__()\n self.hidden_size = hidden_size\n self.output_size = output_size\n self.n_layers = n_layers\n self.dropout_p = dropout_p\n self.max_length = max_length\n \n self.embedding = nn.Embedding(self.output_size, self.hidden_size)\n self.attn = nn.Linear(self.hidden_size * 2, self.max_length)\n self.attn_combine = nn.Linear(self.hidden_size * 2, self.hidden_size)\n self.dropout = nn.Dropout(self.dropout_p)\n self.gru = nn.GRU(self.hidden_size, self.hidden_size)\n self.out = nn.Linear(self.hidden_size, self.output_size)\n \n def forward(self, input, hidden, encoder_output, encoder_outputs):\n embedded = self.embedding(input).view(1, 1, -1)\n embedded = self.dropout(embedded)\n \n attn_weights = F.softmax(self.attn(torch.cat((embedded[0], hidden[0]), 1)))\n attn_applied = torch.bmm(attn_weights.unsqueeze(0), encoder_outputs.unsqueeze(0))\n \n output = torch.cat((embedded[0], attn_applied[0]), 1)\n output = self.attn_combine(output).unsqueeze(0)\n\n for i in range(self.n_layers):\n output = F.relu(output)\n output, hidden = self.gru(output, hidden)\n\n output = F.log_softmax(self.out(output[0]))\n return output, hidden, attn_weights\n\n def initHidden(self):\n return Variable(torch.zeros(1, 1, self.hidden_size))", "Note: There are other forms of attention that work around the length limitation by using a relative position approach. 
Read about \"local attention\" in Effective Approaches to Attention-based Neural Machine Translation.\nTraining\nTo train we run the input sentence through the encoder, and keep track of every output and the latest hidden state. Then the decoder is given the &lt;SOS&gt; token as its first input, and the last hidden state of the decoder as its first hidden state.\n\"Teacher forcing\" is the concept of using the real target outputs as each next input, instead of using the decoder's guess as the next input. Using teacher forcing causes it to converge faster but when the trained network is exploited, it may exhibit instability.", "def train(input_variable, target_variable, encoder, decoder, \n encoder_optimizer, decoder_optimizer, criterion, max_length=MAX_LENGTH):\n batch_size, input_length = input_variable.size()\n target_length = target_variable.size()[1]\n encoder_hidden = encoder.initHidden(batch_size).cuda()\n encoder_optimizer.zero_grad()\n decoder_optimizer.zero_grad()\n loss = 0\n\n encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)\n decoder_input = Variable(torch.LongTensor([SOS_token]*batch_size)).cuda()\n decoder_hidden = encoder_hidden\n\n for di in range(target_length):\n decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden) \n #, encoder_output, encoder_outputs)\n targ = target_variable[:, di]\n# print(decoder_output.size(), targ.size(), target_variable.size())\n loss += criterion(decoder_output, targ)\n decoder_input = targ\n\n loss.backward()\n encoder_optimizer.step()\n decoder_optimizer.step()\n return loss.data[0] / target_length\n\ndef asMinutes(s):\n m = math.floor(s / 60)\n s -= m * 60\n return '%dm %ds' % (m, s)\n\ndef timeSince(since, percent):\n now = time.time()\n s = now - since\n es = s / (percent)\n rs = es - s\n return '%s (- %s)' % (asMinutes(s), asMinutes(rs))\n\ndef trainEpochs(encoder, decoder, n_epochs, print_every=1000, plot_every=100, \n learning_rate=0.01):\n start = time.time()\n plot_losses = []\n print_loss_total = 0 # Reset every print_every\n plot_loss_total = 0 # Reset every plot_every\n \n encoder_optimizer = optim.RMSprop(encoder.parameters(), lr=learning_rate)\n decoder_optimizer = optim.RMSprop(decoder.parameters(), lr=learning_rate)\n criterion = nn.NLLLoss().cuda()\n \n for epoch in range(1, n_epochs + 1):\n training_batch = get_batch(fra, eng)\n input_variable = Variable(torch.LongTensor(training_batch[0])).cuda()\n target_variable = Variable(torch.LongTensor(training_batch[1])).cuda()\n loss = train(input_variable, target_variable, encoder, decoder, encoder_optimizer, \n decoder_optimizer, criterion)\n print_loss_total += loss\n plot_loss_total += loss\n\n if epoch % print_every == 0:\n print_loss_avg = print_loss_total / print_every\n print_loss_total = 0\n print('%s (%d %d%%) %.4f' % (timeSince(start, epoch / n_epochs), epoch, \n epoch / n_epochs * 100, print_loss_avg))\n \n if epoch % plot_every == 0:\n plot_loss_avg = plot_loss_total / plot_every\n plot_losses.append(plot_loss_avg)\n plot_loss_total = 0\n \n showPlot(plot_losses)", "Attention", "# TODO: Make this change during training\nteacher_forcing_ratio = 0.5\n\ndef attn_train(input_variable, target_variable, encoder, decoder, encoder_optimizer, \n decoder_optimizer, criterion, max_length=MAX_LENGTH):\n encoder_hidden = encoder.initHidden()\n\n encoder_optimizer.zero_grad()\n decoder_optimizer.zero_grad()\n\n input_length = input_variable.size()[0]\n target_length = target_variable.size()[0]\n encoder_outputs = 
Variable(torch.zeros(max_length, encoder.hidden_size))\n loss = 0\n\n for ei in range(input_length):\n encoder_output, encoder_hidden = encoder(input_variable[ei], encoder_hidden)\n encoder_outputs[ei] = encoder_output[0][0]\n\n decoder_input = Variable(torch.LongTensor([[SOS_token]]))\n decoder_hidden = encoder_hidden\n\n use_teacher_forcing = True if random.random() < teacher_forcing_ratio else False\n \n if use_teacher_forcing:\n # Teacher forcing: Feed the target as the next input\n for di in range(target_length):\n decoder_output, decoder_hidden, decoder_attention = decoder(\n decoder_input, decoder_hidden, encoder_output, encoder_outputs)\n loss += criterion(decoder_output[0], target_variable[di])\n decoder_input = target_variable[di] # Teacher forcing\n\n else:\n # Without teacher forcing: use its own predictions as the next input\n for di in range(target_length):\n decoder_output, decoder_hidden, decoder_attention = decoder(\n decoder_input, decoder_hidden, encoder_output, encoder_outputs)\n topv, topi = decoder_output.data.topk(1)\n ni = topi[0][0]\n decoder_input = Variable(torch.LongTensor([[ni]]))\n loss += criterion(decoder_output[0], target_variable[di])\n if ni == EOS_token:\n break\n\n loss.backward()\n \n encoder_optimizer.step()\n decoder_optimizer.step()\n \n return loss.data[0] / target_length", "Plotting results\nPlotting is done with matplotlib, using the array of loss values plot_losses saved while training.", "import matplotlib.pyplot as plt\nimport matplotlib.ticker as ticker\nimport numpy as np\n%matplotlib inline\n\ndef showPlot(points):\n plt.figure()\n fig, ax = plt.subplots()\n loc = ticker.MultipleLocator(base=0.2) # this locator puts ticks at regular intervals\n ax.yaxis.set_major_locator(loc)\n plt.plot(points)", "Evaluation\nEvaluation is mostly the same as training, but there are no targets so we simply feed the decoder's predictions back to itself for each step. Every time it predicts a word we add it to the output string, and if it predicts the EOS token we stop there. 
We also store the decoder's attention outputs for display later.", "def evaluate(encoder, decoder, sentence, max_length=MAX_LENGTH):\n input_variable = variableFromSentence(input_lang, sentence).cuda()\n input_length = input_variable.size()[0]\n encoder_hidden = encoder.initHidden(1).cuda()\n encoder_output, encoder_hidden = encoder(input_variable, encoder_hidden)\n\n decoder_input = Variable(torch.LongTensor([SOS_token])).cuda()\n decoder_hidden = encoder_hidden\n \n decoded_words = []\n# decoder_attentions = torch.zeros(max_length, max_length)\n \n for di in range(max_length):\n# decoder_output, decoder_hidden, decoder_attention = decoder(\n decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)\n #, encoder_output, encoder_outputs)\n# decoder_attentions[di] = decoder_attention.data\n topv, topi = decoder_output.data.topk(1)\n ni = topi[0][0]\n if ni == EOS_token:\n decoded_words.append('<EOS>')\n break\n else:\n decoded_words.append(output_lang.index2word[ni])\n decoder_input = Variable(torch.LongTensor([ni])).cuda()\n \n return decoded_words,0#, decoder_attentions[:di+1]\n\ndef evaluateRandomly(encoder, decoder, n=10):\n for i in range(n):\n pair = random.choice(pairs)\n print('>', pair[0])\n print('=', pair[1])\n output_words, attentions = evaluate(encoder, decoder, pair[0])\n output_sentence = ' '.join(output_words)\n print('<', output_sentence)\n print('')", "Training and Evaluating\nNote: If you run this notebook you can train, interrupt the kernel, evaluate, and continue training later. Comment out the lines where the encoder and decoder are initialized and run trainEpochs again.", "#TODO:\n# - Test set\n# - random teacher forcing\n# - attention\n# - multi layers\n# - bidirectional encoding\n\nhidden_size = 256\nencoder1 = EncoderRNN(input_lang.n_words, hidden_size).cuda()\nattn_decoder1 = DecoderRNN(hidden_size, output_lang.n_words).cuda()\n\ntrainEpochs(encoder1, attn_decoder1, 15000, print_every=500, learning_rate=0.005)\n\nevaluateRandomly(encoder1, attn_decoder1)", "Visualizing Attention\nA useful property of the attention mechanism is its highly interpretable outputs. 
Because it is used to weight specific encoder outputs of the input sequence, we can see where the network focuses most at each time step.\nYou could simply run plt.matshow(attentions) to see the attention output displayed as a matrix, with the columns being input steps and the rows being output steps:\nNOTE: This only works when using the attentional decoder; if you've been following the notebook to this point, you are using the standard decoder.",
"output_words, attentions = evaluate(encoder1, attn_decoder1, \"je suis trop froid .\")\nplt.matshow(attentions.numpy())",
"For a better viewing experience we will do the extra work of adding axes and labels:",
"def showAttention(input_sentence, output_words, attentions):\n    # Set up figure with colorbar\n    fig = plt.figure()\n    ax = fig.add_subplot(111)\n    cax = ax.matshow(attentions.numpy(), cmap='bone')\n    fig.colorbar(cax)\n\n    # Set up axes\n    ax.set_xticklabels([''] + input_sentence.split(' ') + ['<EOS>'], rotation=90)\n    ax.set_yticklabels([''] + output_words)\n\n    # Show label at every tick\n    ax.xaxis.set_major_locator(ticker.MultipleLocator(1))\n    ax.yaxis.set_major_locator(ticker.MultipleLocator(1))\n\n    plt.show()\n\ndef evaluateAndShowAttention(input_sentence):\n    output_words, attentions = evaluate(encoder1, attn_decoder1, input_sentence)\n    print('input =', input_sentence)\n    print('output =', ' '.join(output_words))\n    showAttention(input_sentence, output_words, attentions)\n\nevaluateAndShowAttention(\"elle a cinq ans de moins que moi .\")\n\nevaluateAndShowAttention(\"elle est trop petit .\")\n\nevaluateAndShowAttention(\"je ne crains pas de mourir .\")\n\nevaluateAndShowAttention(\"c est un jeune directeur plein de talent .\")",
"Replace the embedding with pre-trained word embeddings such as word2vec or GloVe" ]
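One way to act on that last suggestion is to initialise the encoder's or decoder's nn.Embedding weights from pre-trained vectors. The sketch below is an assumption-laden illustration rather than code from the notebook: it assumes you have already parsed a GloVe/word2vec file into a word_vectors dict mapping words to float lists, and that the vector dimension equals hidden_size; index2word plays the role of the output_lang.index2word mapping used above.

```python
import torch
import torch.nn as nn

def load_pretrained_embedding(index2word, word_vectors, hidden_size):
    # index2word: {token index -> word}, e.g. output_lang.index2word (assumed to exist)
    # word_vectors: {word -> list of floats} parsed from a GloVe/word2vec file (assumed)
    emb = nn.Embedding(len(index2word), hidden_size)
    for idx, word in index2word.items():
        vec = word_vectors.get(word)
        if vec is not None:
            # overwrite the randomly initialised row with the pre-trained vector
            emb.weight.data[idx] = torch.FloatTensor(vec)
    return emb
```

The returned module could then replace self.embedding in the encoder/decoder constructors; recent PyTorch versions also offer nn.Embedding.from_pretrained for the same purpose.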
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
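Regarding the "# TODO: Make this change during training" note next to teacher_forcing_ratio above, one common option (an illustration of the idea, not something the notebook implements) is to anneal the ratio from 1.0 towards 0.0 over training, so early epochs rely on the targets while later epochs feed the model its own predictions:

```python
def teacher_forcing_ratio_at(epoch, n_epochs, start=1.0, end=0.0):
    """Linearly anneal the probability of using teacher forcing."""
    if n_epochs <= 1:
        return end
    frac = (epoch - 1) / float(n_epochs - 1)   # 0.0 at the first epoch, 1.0 at the last
    return start + (end - start) * frac

# Example schedule for a 15000-epoch run like the one above
for epoch in (1, 5000, 10000, 15000):
    print(epoch, round(teacher_forcing_ratio_at(epoch, 15000), 3))
```

The value returned for the current epoch would then be used where random.random() < teacher_forcing_ratio is evaluated.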
cosmolejo/Fisica-Experimental-3
Constante_de_Planck/Constante_Plank.ipynb
gpl-3.0
[ "Calculating the Planck Constant\nby:\nYennifer Angarita Aarenas\nAlejandro Mesa Gómez\nThis experiment follows the guide proposed in the article \"Classroom fundamentals: measuring the Planck constant\" by Maria Rute de Amorim e Sá Ferreira André and Paulo Sérgio de Brito André.\nIn it, the activation voltage of several LEDs is found and, from these voltages, their energy; finally, applying the equation $$E_{p} = \\frac{hc}{\\lambda} $$ the value of h can be solved for.\nSetting up the program:",
"import numpy as np\n#import pyfirmata as pyF\nfrom time import sleep\nimport os\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom scipy import stats\nfrom scipy import constants as cons\n\n\n######################################\n## VECTORS\n######################################\nled=[1.6325,2.424,2.566,3.7095]  # ir, red, orange, blue ... activation voltages\nlamb=[1.10e6,1.60514e6,1.70648e6,2.14133e6]  # for the initial exercise\nlamb_ajuste=[1.10e6,1.60514e6,1.70648e6,1.763668e6]  # 1/lambda  # ir, red, orange, green\nlamb_ajuste2=[1.60514e6,1.70648e6,1.763668e6] \nIR=np.loadtxt(\"datos_IR.dat\")\nRed=np.loadtxt(\"datos_rojo.dat\")\n#Blue=np.loadtxt(\"datos_azul.dat\")\nGreen=np.loadtxt(\"datos_verde.dat\")\nOrange=np.loadtxt(\"datos_naranja.dat\")\n\n\nvolt_IR=IR[:,0]\nvolt_IR*=(3.3/5.)\nvolt_red=Red[:,0]\nvolt_red*=(3.3/5.)\n#volt_blue=Blue[:,0]\nvolt_green=Green[:,0]\nvolt_green*=(3.3/5.)\nvolt_orange=Orange[:,0]\nvolt_orange*=(3.3/5.)\n\ncurr_IR=IR[:,1]\ncurr_IR*=(3.3/5.)\ncurr_red=Red[:,1]\ncurr_red*=(3.3/5.)\n#curr_blue=Blue[:,1]\ncurr_green=Green[:,1]\ncurr_green*=(3.3/5.)\ncurr_orange=Orange[:,1]\ncurr_orange*=(3.3/5.)\n\n\n# keep only the voltage window where each LED behaves linearly\nVred=[]\nIred=[]\nfor i in range(len(curr_red)):\n    if (volt_red[i]>1.716):\n        if (volt_red[i]<=3.28053):\n            Vred.append(volt_red[i])\n            Ired.append(curr_red[i])\n            #print (volt_red[i],curr_red[i])\n\nVgreen=[]\nIgreen=[]\nfor i in range(len(curr_green)):\n    #print (volt_green[i],curr_green[i])\n    if (volt_green[i]>2.39019):\n        if (volt_green[i]<=3.19671):\n            Vgreen.append(volt_green[i])\n            Igreen.append(curr_green[i])\n            #print (volt_green[i],curr_green[i])\n\n\nVorange=[]\nIorange=[]\nfor i in range(len(curr_orange)):\n    #print (volt_orange[i],curr_orange[i])\n    \n    if (volt_orange[i]>2.):\n        if (volt_orange[i]<=3.19671):\n            Vorange.append(volt_orange[i])\n            Iorange.append(curr_orange[i])\n            #print (volt_orange[i],curr_orange[i])\n\n\nVIR=[]\nIIR=[]\nfor i in range(len(curr_IR)):\n    #print (volt_IR[i],curr_IR[i])\n    if (volt_IR[i]>1.5):\n        if (volt_IR[i]<=3.):\n            VIR.append(volt_IR[i])\n            IIR.append(curr_IR[i])\n            #print (volt_IR[i],curr_IR[i])\n",
"The for loops above were needed because the data do not behave linearly over the whole range, as shown below; it was therefore necessary to find a region with linear behaviour that follows the behaviour described in the article.",
"plt.plot(volt_IR,curr_IR,'ko')\nplt.plot(volt_red,curr_red,'ro')\nplt.plot(volt_orange,curr_orange,'yo')\nplt.plot(volt_green,curr_green,'go')",
"This behaviour comes from the way the data were taken: varying the resistance of the blue potentiometers used in class was not easy; nevertheless, the sweep was done from a voltage low enough that no current flowed through the LED up to the 3.3 volts provided by the Chipkit board.\nFirst Approximation\nTo find the activation voltages we initially took a more empirical route: we increased the voltage until a faint spark was noticeable in the LED and recorded that voltage value.\nThis procedure was carried out for an infrared LED (which was viewed through a phone camera in order to distinguish the light emission), a red one, an orange one and a blue one. Their activation voltages were plotted against the $1/\\lambda$ values provided by the article and finally fitted using the scipy functions, obtaining the following:",
"led=np.array(led)\nled*=(3.3/5.)\n#lamb=[1.10e6,1.60514e6,1.70648e6,1.76367e6,2.14133e6]\nslope, intercept, r_value, p_value, std_err = stats.linregress(lamb,led)\nx=np.linspace(lamb[0],lamb[-1],100)\ny=slope*x+intercept\nplt.plot(lamb,led,'o')\nplt.plot(x,y,'-')\nplt.show()\nh_planck=slope*cons.e/cons.c\nh=cons.h\nerror=(h_planck-h)/h\nprint ('r: ',r_value)\nprint ('slope: ',slope)\nprint ('error: ',std_err)\nprint ('h_planck: ',h_planck)\nprint ('h_real: ',h)\nprint ('error_h: ',error*100,'%')",
"Here we can see that, despite how simple this approach is, the error was only about 5 % with respect to the real value of $h$ taken from the scipy library. \nSecond Approximation\nThis attempt follows the procedure proposed by the article: plot the voltages against the currents of the LEDs, then extract the intercept with the y axis as the activation current and, via Ohm's law, the activation voltage.\nIn this case infrared, red, orange and green LEDs were used; we also wanted to use a blue one, but its data became corrupted and could not be recovered.",
"plt.plot(VIR,IIR,'ro')\n\npendiente, intercepto, r_value, p_value, std_err = stats.linregress(VIR,IIR)\nyir=[]\nVActivacion_IR=intercepto*330\nfor i in VIR:\n    yir.append((pendiente*i)+intercepto)\nplt.plot(VIR,yir,'k-')\n\nprint ('measured ',led[0],'fitted ',VActivacion_IR)\nprint ('r: ',r_value)\n\nplt.plot(Vred,Ired,'ro')\n\npendiente, intercepto, r_value, p_value, std_err = stats.linregress(Vred,Ired)\nyred=[]\nVActivacion_rojo=intercepto*330\nfor i in Vred:\n    yred.append((pendiente*i)+intercepto)\nplt.plot(Vred,yred,'r-')\n\nprint ('measured ',led[1],'fitted ',VActivacion_rojo)\nprint ('r: ',r_value)\n\nplt.plot(Vorange,Iorange,'yo')\n\npendiente, intercepto, r_value, p_value, std_err = stats.linregress(Vorange,Iorange)\nyorange=[]\nVActivacion_naranja=intercepto*330\nfor i in Vorange:\n    yorange.append((pendiente*i)+intercepto)\nplt.plot(Vorange,yorange,'k-')\n\nprint ('measured ',led[-2],'fitted',VActivacion_naranja)\nprint ('r: ',r_value)\n\nplt.plot(Vgreen,Igreen,'go')\n\npendiente, intercepto, r_value, p_value, std_err = stats.linregress(Vgreen,Igreen)\nygreen=[]\nVActivacion_verde=intercepto*330\nfor i in Vgreen:\n    ygreen.append((pendiente*i)+intercepto)\nplt.plot(Vgreen,ygreen,'k-')\n\nprint ('fitted',VActivacion_verde)\nprint ('r: ',r_value)",
"Once these data are obtained, the same fit as in the first approximation is performed, but with a larger error:",
"V_ajuste=[VActivacion_IR,VActivacion_rojo,VActivacion_naranja,VActivacion_verde]  # ir, red, orange, green\n#print (len(V_ajuste))\n#print (len(lamb_ajuste))\nslope, intercept, r_value, p_value, std_err = stats.linregress(lamb_ajuste,V_ajuste)\nx=np.linspace(lamb_ajuste[0],lamb_ajuste[-1],100)\ny=slope*x+intercept\nplt.plot(lamb_ajuste,V_ajuste,'o')\nplt.plot(x,y,'-')\nplt.show()\nh_planck=slope*cons.e/cons.c\nh=cons.h\nerror=abs(h_planck-h)/h\nprint ('r: ',r_value)\nprint ('slope: ',slope)\nprint ('error: ',std_err)\nprint ('h_planck: ',h_planck)\nprint ('h_real: ',h)\nprint ('error_h: ',error*100, '%')",
"A possible explanation for this larger error may be that no rigorous, formal procedure was followed to find the currents with linear behaviour in each case; we only looked for the region with the best linearity.\nFinally, we wanted to study the possible result of removing the infrared LED, which, as can be seen in the plot of its voltage, did not give very stable values, besides having an r of only $0.5$, quite low compared with the other LEDs.",
"V_ajuste2=[VActivacion_rojo,VActivacion_naranja,VActivacion_verde]  # red, orange, green\n#print (len(V_ajuste2))\n#print (len(lamb_ajuste2))\nslope, intercept, r_value, p_value, std_err = stats.linregress(lamb_ajuste2,V_ajuste2)\nx=np.linspace(lamb_ajuste[0],lamb_ajuste[-1],100)\ny=slope*x+intercept\nplt.plot(lamb_ajuste2,V_ajuste2,'o')\nplt.plot(x,y,'-')\nplt.show()\nh_planck=slope*cons.e/cons.c\nh=cons.h\nerror=abs(h_planck-h)/h\nprint ('r: ',r_value)\nprint ('slope: ',slope)\nprint ('error: ',std_err)\nprint ('h_planck: ',h_planck)\nprint ('h_real: ',h)\nprint ('error_h: ',error*100)",
"Here we conclude that, even though its value is not as exact as the others, the infrared LED is needed so that the fit can be carried out properly." ]
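To spell out the relation behind h_planck = slope*cons.e/cons.c in the cells above: assuming, as in the article, that the photon energy at turn-on equals the electrical energy given to one electron,

$$E_{p} = \frac{hc}{\lambda} = eV_{a} \;\Longrightarrow\; V_{a} = \frac{hc}{e}\,\frac{1}{\lambda} \;\Longrightarrow\; h = \frac{\text{slope}\cdot e}{c},$$

which is exactly what the code computes once slope comes from fitting $V_{a}$ against $1/\lambda$.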
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
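As a quick numerical sanity check (a small addition, not part of the original notebook), the slope that the fits above should approach can be computed directly from scipy's constants:

```python
from scipy import constants as cons

expected_slope = cons.h * cons.c / cons.e   # hc/e, in volt*metre, since 1/lambda is in 1/m
print('expected slope hc/e = %.4e V*m' % expected_slope)   # about 1.24e-6 V*m
```

Comparing this number with the slope printed by stats.linregress gives an immediate feel for how far each fit is from the accepted value of h.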