repo_name
stringlengths
6
77
path
stringlengths
8
215
license
stringclasses
15 values
cells
list
types
list
sysid/nbs
LP/Introduction-to-linear-programming/LaTeX_formatted_ipynb_files/Introduction to Linear Programming with Python - Part 4.ipynb
mit
[ "Introduction to Linear Programming with Python - Part 4\nReal world examples - Blending Problem\nWe're going to make some sausages!\nWe have the following ingredients available to us:\n| Ingredient | Cost (€/kg) | Availability (kg) |\n|------------|--------------|-------------------|\n| Pork | 4.32 | 30 |\n| Wheat | 2.46 | 20 |\n| Starch | 1.86 | 17 |\nWe'll make 2 types of sausage:\n* Economy (>40% Pork)\n* Premium (>60% Pork)\nOne sausage is 50 grams (0.05 kg)\nAccording to government regulations, the most starch we can use in our sausages is 25%\nWe have a contract with a butcher, and have already purchased 23 kg of pork that must go in our sausages.\nWe have a demand for 350 economy sausages and 500 premium sausages.\nWe need to figure out how to blend our sausages most cost-effectively.\nLet's model our problem:\n$\n p_e = \\text{Pork in the economy sausages (kg)} \\\n w_e = \\text{Wheat in the economy sausages (kg)} \\\n s_e = \\text{Starch in the economy sausages (kg)} \\\n p_p = \\text{Pork in the premium sausages (kg)} \\\n w_p = \\text{Wheat in the premium sausages (kg)} \\\n s_p = \\text{Starch in the premium sausages (kg)} \\\n$\nWe want to minimise cost, which (using the €/kg prices from the table above) is:\n$\\text{Cost} = 4.32(p_e + p_p) + 2.46(w_e + w_p) + 1.86(s_e + s_p)$\nWith the following constraints:\n$\n p_e + w_e + s_e = 350 \\times 0.05 \\\n p_p + w_p + s_p = 500 \\times 0.05 \\\n p_e \\geq 0.4(p_e + w_e + s_e) \\\n p_p \\geq 0.6(p_p + w_p + s_p) \\\n s_e \\leq 0.25(p_e + w_e + s_e) \\\n s_p \\leq 0.25(p_p + w_p + s_p) \\\n p_e + p_p \\leq 30 \\\n w_e + w_p \\leq 20 \\\n s_e + s_p \\leq 17 \\\n p_e + p_p \\geq 23 \\\n $", "import pulp\n\n# Instantiate our problem class\nmodel = pulp.LpProblem(\"Cost minimising blending problem\", pulp.LpMinimize)", "Here we have 6 decision variables. We could name them individually, but this wouldn't scale up if we had hundreds or thousands of variables (you don't want to enter all of these by hand multiple times). 
\nWe'll create a couple of lists from which we can create tuple indices.", "# Construct our decision variable lists\nsausage_types = ['economy', 'premium']\ningredients = ['pork', 'wheat', 'starch']", "Each of these decision variables will have similar characteristics (lower bound of 0, continuous variables). Therefore we can use the dict functionality of PuLP's LpVariable object, providing our tuple indices as keys.\nThese tuples will be the keys for the ing_weight dict of decision variables.", "ing_weight = pulp.LpVariable.dicts(\"weight kg\",\n ((i, j) for i in sausage_types for j in ingredients),\n lowBound=0,\n cat='Continuous')", "PuLP provides an lpSum vector calculation for the sum of a list of linear expressions.\nWhilst we only have 6 decision variables, I will demonstrate how the problem would be constructed in a way that could be scaled up to many variables using list comprehensions.", "# Objective Function\nmodel += (\n pulp.lpSum([\n 4.32 * ing_weight[(i, 'pork')]\n + 2.46 * ing_weight[(i, 'wheat')]\n + 1.86 * ing_weight[(i, 'starch')]\n for i in sausage_types])\n)", "Now we add our constraints; note again how the use of list comprehensions allows for scaling up to many ingredients or sausage types.", "# Constraints\n# 350 economy and 500 premium sausages at 0.05 kg\nmodel += pulp.lpSum([ing_weight['economy', j] for j in ingredients]) == 350 * 0.05\nmodel += pulp.lpSum([ing_weight['premium', j] for j in ingredients]) == 500 * 0.05\n\n# Economy has >= 40% pork, premium >= 60% pork\nmodel += ing_weight['economy', 'pork'] >= (\n 0.4 * pulp.lpSum([ing_weight['economy', j] for j in ingredients]))\n\nmodel += ing_weight['premium', 'pork'] >= (\n 0.6 * pulp.lpSum([ing_weight['premium', j] for j in ingredients]))\n\n# Sausages must be <= 25% starch\nmodel += ing_weight['economy', 'starch'] <= (\n 0.25 * pulp.lpSum([ing_weight['economy', j] for j in ingredients]))\n\nmodel += ing_weight['premium', 'starch'] <= (\n 0.25 * pulp.lpSum([ing_weight['premium', j] for 
j in ingredients]))\n\n# We have at most 30 kg of pork, 20 kg of wheat and 17 kg of starch available\nmodel += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) <= 30\nmodel += pulp.lpSum([ing_weight[i, 'wheat'] for i in sausage_types]) <= 20\nmodel += pulp.lpSum([ing_weight[i, 'starch'] for i in sausage_types]) <= 17\n\n# We have at least 23 kg of pork to use up\nmodel += pulp.lpSum([ing_weight[i, 'pork'] for i in sausage_types]) >= 23\n\n# Solve our problem\nmodel.solve()\nprint(pulp.LpStatus[model.status])\n\nfor var in ing_weight:\n var_value = ing_weight[var].varValue\n print(\"The weight of {0} in {1} sausages is {2} kg\".format(var[1], var[0], var_value))\n\ntotal_cost = pulp.value(model.objective)\n\nprint(\"The total cost is €{} for 350 economy sausages and 500 premium sausages\".format(round(total_cost, 2)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
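The tuple-index pattern that the notebook above builds with `pulp.LpVariable.dicts` can be sketched in plain Python, with no PuLP installed; the zero weights below are placeholders, not solver output:

```python
# Sketch of the (sausage, ingredient) keying scheme used for ing_weight.
# The 0.0 values are placeholders standing in for PuLP decision variables.
sausage_types = ['economy', 'premium']
ingredients = ['pork', 'wheat', 'starch']

# One entry per (sausage, ingredient) pair, keyed by tuple.
ing_weight = {(i, j): 0.0 for i in sausage_types for j in ingredients}

# Constraints index the same dict, e.g. total pork across both sausage types:
total_pork = sum(ing_weight[i, 'pork'] for i in sausage_types)
```

With real `LpVariable` objects in place of the floats, the same comprehension becomes the body of an `lpSum` call.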
PLN-FaMAF/DeepLearningEAIA
deep_learning_tutorial_4.ipynb
bsd-3-clause
[ "Express Deep Learning in Python - Examples\nWe will run a couple of examples to see how different parameters affect the performance of the classifier.", "import numpy\nimport keras\nimport os\n\nfrom keras import backend as K\nfrom keras import losses, optimizers, regularizers\nfrom keras.datasets import mnist\nfrom keras.layers import Activation, ActivityRegularization, Conv2D, Dense, Dropout, Flatten, MaxPooling2D\nfrom keras.models import Sequential\nfrom keras.utils.np_utils import to_categorical\n\nfrom keras.callbacks import TensorBoard\n\nbatch_size = 128\nnum_classes = 10\nepochs = 10\nTRAIN_EXAMPLES = 60000\nTEST_EXAMPLES = 10000\n\n# image dimensions\nimg_rows, img_cols = 28, 28\n\n# load the data (already shuffled and split)\n(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\n# reshape the data to add the \"channels\" dimension\nx_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)\nx_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)\ninput_shape = (img_rows, img_cols, 1)\n\n# normalize the input in the range [0, 1]\n# to make quick runs, select a smaller set of images.\ntrain_mask = numpy.random.choice(x_train.shape[0], TRAIN_EXAMPLES, replace=False)\nx_train = x_train[train_mask, :].astype('float32')\ny_train = y_train[train_mask]\ntest_mask = numpy.random.choice(x_test.shape[0], TEST_EXAMPLES, replace=False)\nx_test = x_test[test_mask, :].astype('float32')\ny_test = y_test[test_mask]\n\nx_train /= 255\nx_test /= 255\n\nprint('Train samples: %d' % x_train.shape[0])\nprint('Test samples: %d' % x_test.shape[0])\n\n# convert class vectors to binary class matrices\ny_train = to_categorical(y_train, num_classes)\ny_test = to_categorical(y_test, num_classes)", "Convolutional 1", "EXPERIMENT_COUNTER = 4\n\ndef write_summary(filename, model):\n with open(filename, 'w') as log_file:\n model.summary(print_fn=lambda x: log_file.write(x + '\\n'))\n\ndef evaluate_model(model, experiment_name=None):\n # a default argument is evaluated once, at definition time, so we\n # resolve the counter at call time to pick up later increments\n if experiment_name is None:\n experiment_name = EXPERIMENT_COUNTER\n # train the 
model\n logs_dirname = './logs/experiment-{}'.format(experiment_name)\n tensorboard = TensorBoard(log_dir=logs_dirname, histogram_freq=0,\n write_graph=False, write_images=False)\n epochs = 20\n history = model.fit(x_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(x_test, y_test),\n callbacks=[tensorboard])\n # TIP: write the model summary to keep track of your experiments\n write_summary(os.path.join(logs_dirname, 'model-summary.txt'), model)\n\n # evaluate the model\n return model.evaluate(x_test, y_test, verbose=0)\n\n# define the network architecture\nmodel = Sequential()\nmodel.add(Conv2D(filters=16,\n kernel_size=(3, 3),\n strides=(1,1),\n padding='valid',\n activation='relu',\n input_shape=input_shape,\n activity_regularizer='l2'))\nmodel.add(Conv2D(32, (3, 3), activation='relu'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax'))\n\n# compile the model\nmodel.compile(loss=losses.categorical_crossentropy,\n optimizer=optimizers.RMSprop(),\n metrics=['accuracy', 'mae'])\n\nevaluate_model(model)\nEXPERIMENT_COUNTER += 1", "Convolutional 2", "# define the network architecture\nmodel = Sequential()\nmodel.add(Conv2D(filters=16,\n kernel_size=(3, 3),\n strides=(1,1),\n padding='valid',\n activation='sigmoid',\n input_shape=input_shape,\n activity_regularizer='l2'))\nmodel.add(MaxPooling2D(pool_size=(2, 2)))\nmodel.add(Dropout(0.25))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='sigmoid'))\nmodel.add(Dropout(0.25))\nmodel.add(Dense(num_classes, activation='softmax'))\n\n# compile the model\nmodel.compile(loss=losses.categorical_crossentropy,\n optimizer=optimizers.RMSprop(),\n metrics=['accuracy', 'mae'])\n\nevaluate_model(model)\nEXPERIMENT_COUNTER += 1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
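A pitfall worth flagging in the experiment-tracking pattern above: a Python default argument is evaluated once, when the `def` statement runs, so a default taken from a module-level counter like `EXPERIMENT_COUNTER` does not track later increments. A minimal, self-contained illustration (the names here are illustrative, not from the notebook):

```python
COUNTER = 4

def run(name=COUNTER):        # default captured now, at definition time
    return name

COUNTER += 1
stale = run()                 # still 4: the default did not follow COUNTER

# The usual fix is a None sentinel resolved inside the function body:
def run_fixed(name=None):
    return COUNTER if name is None else name

fresh = run_fixed()           # 5: reads COUNTER at call time
```

This is why repeated `evaluate_model(model)` calls with a counter-valued default would keep writing to the same TensorBoard log directory.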
wittawatj/kernel-gof
ipynb/ex2_results.ipynb
mit
[ "A notebook to process experimental results of ex2_prob_params.py. p(reject) as problem parameters are varied.", "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n#%config InlineBackend.figure_format = 'svg'\n#%config InlineBackend.figure_format = 'pdf'\n\nimport numpy as np\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport kgof.data as data\nimport kgof.glo as glo\nimport kgof.goftest as gof\nimport kgof.kernel as kernel\nimport kgof.plot as plot\nimport kgof.util as util\n\nimport scipy.stats as stats\n\nimport kgof.plot\nkgof.plot.set_default_matplotlib_options()\n\ndef load_plot_vs_params(fname, xlabel='Problem parameter', show_legend=True):\n func_xvalues = lambda agg_results: agg_results['prob_params']\n ex = 2\n def func_title(agg_results):\n repeats, _, n_methods = agg_results['job_results'].shape\n alpha = agg_results['alpha']\n test_size = (1.0 - agg_results['tr_proportion'])*agg_results['sample_size']\n title = '%s. %d trials. test size: %d. $\\\\alpha$ = %.2g.'%\\\n ( agg_results['prob_label'], repeats, test_size, alpha)\n return title\n #plt.figure(figsize=(10,5))\n results = plot.plot_prob_reject(\n ex, fname, func_xvalues, xlabel, func_title=func_title)\n \n plt.title('')\n plt.gca().legend(loc='best').set_visible(show_legend)\n if show_legend:\n plt.legend(bbox_to_anchor=(1.80, 1.08))\n \n plt.grid(False)\n \n return results\n\n\ndef load_runtime_vs_params(fname, xlabel='Problem parameter', \n show_legend=True, xscale='linear', yscale='linear'):\n func_xvalues = lambda agg_results: agg_results['prob_params']\n ex = 2\n def func_title(agg_results):\n repeats, _, n_methods = agg_results['job_results'].shape\n alpha = agg_results['alpha']\n title = '%s. %d trials. 
$\\\\alpha$ = %.2g.'%\\\n ( agg_results['prob_label'], repeats, alpha)\n return title\n \n #plt.figure(figsize=(10,6))\n \n results = plot.plot_runtime(ex, fname, \n func_xvalues, xlabel=xlabel, func_title=func_title)\n \n plt.title('')\n plt.gca().legend(loc='best').set_visible(show_legend)\n if show_legend:\n plt.legend(bbox_to_anchor=(1.80, 1.05))\n \n plt.grid(False)\n if xscale is not None:\n plt.xscale(xscale)\n if yscale is not None:\n plt.yscale(yscale)\n \n return results\n\n\n# # Gaussian mean difference. Fix dimension. Vary the mean\n# #gmd_fname = 'ex2-gmd_d10_ms-me5_n1000_rs100_pmi0.000_pma0.600_a0.050_trp0.50.p'\n# gmd_fname = 'ex2-gmd_d10_ms-me4_n2000_rs50_pmi0.000_pma0.060_a0.050_trp0.50.p'\n# gmd_results = load_plot_vs_params(gmd_fname, xlabel='$m$', show_legend=True)\n# #plt.ylim([0.03, 0.1])\n# #plt.savefig(bsg_fname.replace('.p', '.pdf', 1), bbox_inches='tight')", "$$p(x) = \\mathcal{N}(0, I) \\\nq(x) = \\mathcal{N}((m,0,\\ldots), I)$$", "# # Gaussian increasing variance. 
Variance below 1.\n# gvsub1_d1_fname = 'ex2-gvsub1_d1_vs-me8_n1000_rs100_pmi0.100_pma0.700_a0.050_trp0.50.p'\n# gvsub1_d1_results = load_plot_vs_params(gvsub1_d1_fname, xlabel='$v$')\n# plt.title('d=1')\n# # plt.ylim([0.02, 0.08])\n# # plt.xlim([0, 4])\n# #plt.legend(bbox_to_anchor=(1.70, 1.05))\n# #plt.savefig(gsign_fname.replace('.p', '.pdf', 1), bbox_inches='tight')\n\n# # Gaussian increasing variance\n# gvinc_d5_fname = 'ex2-gvinc_d5-me8_n1000_rs100_pmi1.000_pma2.500_a0.050_trp0.50.p'\n# gvinc_d5_results = load_plot_vs_params(gvinc_d5_fname, xlabel='$v$', \n# show_legend=True)\n# plt.title('d=5')\n# # plt.ylim([0.02, 0.08])\n# # plt.xlim([0, 4])\n# #plt.legend(bbox_to_anchor=(1.70, 1.05))\n# #plt.savefig(gsign_fname.replace('.p', '.pdf', 1), bbox_inches='tight')", "$$p(x)=\\mathcal{N}(0, I) \\\nq(x)=\\mathcal{N}(0, vI)$$", "# # Gaussian variance diffenece (GVD)\n# gvd_fname = 'ex2-gvd-me4_n1000_rs100_pmi1.000_pma15.000_a0.050_trp0.50.p'\n# # gvd_fname = 'ex2-gvd-me4_n1000_rs50_pmi1.000_pma15.000_a0.050_trp0.80.p'\n# gvd_results = load_plot_vs_params(gvd_fname, xlabel='$d$', show_legend=True)\n# plt.figure()\n# load_runtime_vs_params(gvd_fname);\n", "$$p(x)=\\mathcal{N}(0, I) \\\nq(x)=\\mathcal{N}(0, \\mathrm{diag}(2,1,1,\\ldots))$$", "# Gauss-Bernoulli RBM\n# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me4_n1000_rs200_pmi0.000_pma0.001_a0.050_trp0.20.p'\n# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me4_n1000_rs200_pmi0.000_pma0.000_a0.050_trp0.20.p'\n# gb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me6_n1000_rs300_pmi0.000_pma0.001_a0.050_trp0.20.p'\ngb_rbm_fname = 'ex2-gbrbm_dx50_dh10-me6_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'\ngb_rbm_results = load_plot_vs_params(gb_rbm_fname, xlabel='Perturbation SD $\\sigma_{per}$', \n show_legend=False)\nplt.savefig(gb_rbm_fname.replace('.p', '.pdf', 1), bbox_inches='tight')\n# plt.xlim([-0.1, -0.2])\n\nload_runtime_vs_params(gb_rbm_fname, xlabel='Perturbation SD $\\sigma_{per}$', yscale='linear', 
show_legend=False);\nplt.savefig(gb_rbm_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')\n\n# gbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me6_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'\n# gbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me2_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'\ngbrbm_highd_fname = 'ex2-gbrbm_dx50_dh40-me1_n1000_rs200_pmi0_pma0.06_a0.050_trp0.20.p'\ngbrbm_highd_results = load_plot_vs_params(\n gbrbm_highd_fname, \n# xlabel='Perturbation SD $\\sigma_{per}$', \n xlabel='Perturbation noise', \n show_legend=False)\nplt.xticks([0, 0.02, 0.04, 0.06])\nplt.yticks([0, 0.5, 1])\nplt.ylim([0, 1.05])\nplt.ylabel('P(detect difference)', fontsize=26)\nplt.box(True)\nplt.savefig(gbrbm_highd_fname.replace('.p', '.pdf', 1), bbox_inches='tight')\n\nload_runtime_vs_params(gbrbm_highd_fname, xlabel='Perturbation SD $\\sigma_{per}$', \n yscale='linear', show_legend=False);\nplt.savefig(gbrbm_highd_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')\n\n## p: Gaussian, q: Laplace. Vary d\n# glaplace_fname = 'ex2-glaplace-me4_n1000_rs100_pmi1.000_pma15.000_a0.050_trp0.50.p'\n# glaplace_fname = 'ex2-glaplace-me4_n1000_rs200_pmi1.000_pma15.000_a0.050_trp0.20.p'\n# glaplace_fname = 'ex2-glaplace-me5_n1000_rs400_pmi1.000_pma15.000_a0.050_trp0.20.p'\nglaplace_fname = 'ex2-glaplace-me6_n1000_rs200_pmi1_pma15_a0.050_trp0.20.p'\nglaplace_results = load_plot_vs_params(glaplace_fname, xlabel='dimension $d$', show_legend=False)\nplt.savefig(glaplace_fname.replace('.p', '.pdf', 1), bbox_inches='tight')\n\nload_runtime_vs_params(glaplace_fname, xlabel='dimension $d$', show_legend=False, yscale='linear');\nplt.savefig(glaplace_fname.replace('.p', '_time.pdf', 1), bbox_inches='tight')", "$$p(x)=\\mathcal{N}(0, 1) \\\nq(x)=\\mathrm{Laplace}(0, 1/\\sqrt{2})$$\nq has the same unit variance as p." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
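The notebook above derives every output filename with `str.replace(old, new, 1)`; the third argument caps the number of substitutions, and replacement always starts from the leftmost match, which is worth keeping in mind if `.p` could occur before the extension. A quick check with made-up filenames:

```python
# Hypothetical results filenames, in the style used above.
fname = 'ex2-demo_a0.050_trp0.20.p'
pdf_name = fname.replace('.p', '.pdf', 1)   # only the first '.p' is rewritten

# Caveat: str.replace scans left to right, so an earlier '.p' wins.
tricky = 'results.part1.p'.replace('.p', '.pdf', 1)
```

For the `ex2-...` naming scheme the extension is the first `.p`, so the one-shot replace is safe; `tricky` shows the failure mode for other names.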
CommonClimate/teaching_notebooks
GEOL351/lorenz_climatechange.ipynb
mit
[ "A nonlinear perspective on climate change\n<p class=\"gap2\"></p>\n\nThe following lab will have you explore the concepts developed in Palmer (1999)\n1. Exploring the Lorenz System\nBy now you are well acquainted with the Lorenz system:\n$$\n\\begin{aligned}\n\\dot{x} & = \\sigma(y-x) \\\n\\dot{y} & = \\rho x - y - xz \\\n\\dot{z} & = -\\beta z + xy\n\\end{aligned}\n$$\nIt exhibits a range of different behaviors as the parameters ($\\sigma$, $\\beta$, $\\rho$) are varied.\nWouldn't it be nice if you could tinker with this yourself? That is exactly what you will be doing in this lab.\nEverything is based on the programming language Python, which every geoscientist should learn. But all you need to know is that Shift+Enter will execute the current cell and move on to the next one. For all the rest, read this\nNote that if you're viewing this notebook statically (e.g. on nbviewer) the examples below will not work. They require connection to a running Python kernel\nLet us start by defining a few useful modules", "%matplotlib inline\nimport numpy as np\nfrom scipy import integrate\n\nfrom matplotlib import pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.colors import cnames\nfrom matplotlib import animation\nimport seaborn as sns\nimport butter_lowpass_filter as blf\n", "Let's write a quick Lorenz solver", "def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):\n \n def lorenz_deriv(xyz, t0, sigma=sigma, beta=beta, rho=rho):\n \"\"\"Compute the time-derivative of a Lorenz system.\"\"\"\n x, y, z = xyz # unpack in the body (Python 3 dropped tuple parameters)\n return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]\n\n # Choose random starting points, uniformly distributed from -15 to 15\n np.random.seed(1)\n x0 = -15 + 30 * np.random.random((N, 3))\n\n # Solve for the trajectories\n t = np.linspace(0, max_time, int(250*max_time))\n x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)\n for x0i in x0])\n \n # choose a different color for each trajectory\n colors = 
plt.cm.jet(np.linspace(0, 1, N))\n\n # plot the results\n sns.set_style(\"white\")\n fig = plt.figure(figsize = (8,8))\n ax = fig.add_axes([0, 0, 1, 1], projection='3d')\n #ax.axis('off')\n\n # prepare the axes limits\n ax.set_xlim((-25, 25))\n ax.set_ylim((-35, 35))\n ax.set_zlim((5, 55))\n\n for i in range(N):\n x, y, z = x_t[i,:,:].T\n lines = ax.plot(x, y, z, '-', c=colors[i])\n plt.setp(lines, linewidth=1)\n\n ax.view_init(30, angle)\n ax.set_xlabel('X axis')\n ax.set_ylabel('Y axis')\n ax.set_zlabel('Z axis')\n plt.show()\n\n return t, x_t", "Let's plot this solution", "t, x_t = solve_lorenz(max_time=10.0)", "Very pretty. If you are curious, you can even change the plot angle within the function, and examine this strange attractor under all angles. This attractor has several key properties, which we will probe below.\nA forced Lorenz system\nAs a metaphor for anthropogenic climate change, we now consider the case of a forced Lorenz system. We wish to see what happens if a force is applied in the directions $X$ and $Y$. The magnitude of this force is $f_0$ and we may apply it at some angle $\theta$. Hence:\n$f = f_0 \\left( \\cos(\\theta), \\sin(\\theta) \\right)$\nThe new system is thus:\n$$\n\\begin{aligned}\n\\dot{x} & = \\sigma(y-x) + f_0 \\cos(\\theta)\\\n\\dot{y} & = \\rho x - y - xz + f_0 \\sin(\\theta)\\\n\\dot{z} & = -\\beta z + xy\n\\end{aligned}\n$$\nDoes the attractor change? Do the solutions change? 
If so, how?\nSolving the system\nLet us define a function to solve this new system:", "def forced_lorenz(N=3, fnot=2.5, theta=0, max_time=100.0, sigma=10.0, beta=8./3, rho=28.0):\n \n def lorenz_deriv(xyz, t0, sigma=sigma, beta=beta, rho=rho):\n \"\"\"Compute the time-derivative of a forced Lorenz system.\"\"\"\n x, y, z = xyz # unpack in the body (Python 3 dropped tuple parameters)\n c = 2*np.pi/360\n return [sigma * (y - x) + fnot*np.cos(theta*c), x * (rho - z) - y + fnot*np.sin(theta*c), x * y - beta * z]\n\n # Choose random starting points, uniformly distributed from -15 to 15\n np.random.seed(1)\n x0 = -15 + 30 * np.random.random((N, 3))\n\n # Solve for the trajectories\n t = np.linspace(0, max_time, int(25*max_time))\n x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)\n for x0i in x0])\n return t, x_t\n ", "Let's start by plotting the solutions for X, Y and Z when $f_0 = 0$ (no perturbation):", "sns.set_style(\"darkgrid\")\nsns.set_palette(\"Dark2\") \nt, x_t = forced_lorenz(fnot = 0, theta = 50,max_time = 100.00)\n\nxv = x_t[0,:,:]\n# time filter\nlab = 'X', 'Y', 'Z'\n\ncol = sns.color_palette(\"Paired\")\n\nfig = plt.figure(figsize = (8,8))\nxl = np.empty(xv.shape)\nfor k in range(3):\n xl[:,k] = blf.filter(xv[:,k],0.5,fs=25) \n plt.plot(t,xv[:,k],color=col[k*2])\n #plt.plot(t,xl[:,k],color=col[k*2+1],lw=3.0)\n\nplt.legend(lab)\nplt.show()\n", "What happened to X? Well, in this case X and Y are so close to each other that they plot on top of each other. \nBasically, the system orbits around some fixed points near $X, Y = \\pm 10$. The transitions between these \"regimes\" are quite random. Sometimes the system hangs out there a while, sometimes not. \nIsolating climate fluctuations\nFurthermore, you may be overwhelmed by the short-term variability in the system. In the climate system, we call this short-term variability \"weather\", and often what is of interest is the long-term behavior of the system (the \"climate\"). To isolate that, we need to filter the solutions. 
More precisely, we will apply a Butterworth lowpass filter to $X, Y, Z$ to highlight their slow evolution. (If you ever wonder what it is, take GEOL425L: Data Analysis in the Earth and Environmental Sciences.)", "# Timeseries PLOT \nsns.set_palette(\"Dark2\") \nt, x_t = forced_lorenz(fnot = 0, theta = 50,max_time = 1000.00) \n\nxv = x_t[0,:,:]\n# time filter\nlab = 'X', 'lowpass-filtered X', 'Y', 'lowpass-filtered Y', 'Z','lowpass-filtered Z' \n\ncol = sns.color_palette(\"Paired\")\n\nfig = plt.figure(figsize = (8,8))\nxl = np.empty(xv.shape)\nfor k in range(3):\n xl[:,k] = blf.filter(xv[:,k],0.5,fs=25) \n plt.plot(t,xv[:,k],color=col[k*2])\n plt.plot(t,xl[:,k],color=col[k*2+1],lw=3.0)\n\nplt.legend(lab)\nplt.xlim(0,100)\n\n# Be patient... this could take a few seconds to complete.", "(Once again, Y is on top of X.)\nLet us now plot the probability of occurrence of states in the $(X,Y)$ plane. If all motions were equally likely, this probability would be uniform. Is that what we observe?", "cmap = sns.cubehelix_palette(light=1, as_cmap=True)\nskip = 10\nsns.jointplot(xl[0::skip,0],xl[0::skip,1], kind=\"kde\", color=\"#4CB391\")", "Question 1 ###\nHow would you describe the probability of visiting states? What do \"dark\" regions correspond to?\nAnswer 1:\nWrite your answer here\n2. Visualizing climate change in the Lorenz system\nWe now wish to see if this changes once we apply a non-zero forcing. Specifically:\n1. Does the attractor change with the applied forcing?\n2. 
If not, can we say something about how frequently some states are visited?\nIn all the following we set $f_0 = 2.5$ and we tweak $\\theta$ to see how the system responds \nTo ease experimentation, let us first define a function to compute and plot the results:", "def plot_lorenzPDF(fnot, theta, max_time = 1000.00, skip = 10):\n t, x_t = forced_lorenz(fnot = fnot, theta = theta,max_time = max_time)\n xv = x_t[0,:,:]; xl = np.empty(xv.shape)\n\n for k in range(3):\n xl[:,k] = blf.filter(xv[:,k],0.5,fs=25) \n\n g = sns.jointplot(xl[0::skip,0],xl[0::skip,1], kind=\"kde\", color=\"#4CB391\")\n return g\n", "Now let's have some fun", "theta = 50; f0 = 2.5 # assign values of f0 and theta\ng = plot_lorenzPDF(fnot = f0, theta = theta) \ng.ax_joint.arrow(0, 0, 2*f0*np.cos(theta*np.pi/180), f0*np.sin(theta*np.pi/180), head_width=0.5, head_length=0.5, lw=3.0, fc='r', ec='r')\n\n## (BE PATIENT THIS COULD TAKE UP TO A MINUTE) ", "(the forcing is marked by a red arrow)\nQuestion 2.1 : What do you observe for $\\theta = 50^{\\circ}$?\nAnswer 2.1:\nWrite your answer here\nQuestion 2.2 : What do you observe for $\\theta = 140^{\\circ}$?\n(is it counter-intuitive? Welcome to the wonderful world of chaos theory)\ncompute and plot here:\nAnswer 2.2:\nWrite your answer here\nQuestion 2.3 : What do you observe for $\\theta = 225^{\\circ}$?\ncompute and plot here:\nAnswer 2.3:\nWrite your answer here\nQuestion 2.4 : What do you observe for $\\theta = -45^{\\circ}$?\ncompute and plot here\nAnswer 2.4:\nwrite your answer here\nQuestion 2.5 : Repeat one of the experiments above, with $f_0 = 10$?\ncompute and plot here\nAnswer 2.5:\nwrite your answer here\n(for fun , you may try $f_0 = 25$. What happens then?)\nTo conclude, comment on the first 2 principles enunciated by Palmer (1999):\n*\"First, the response to a forcing will be manifest primarily in terms of changes to the residence frequency associated with the quasi-stationary regimes. Second, the [...] 
structures of these regimes will be relatively insensitive to the forcing (within limits).\" *\nQuestion 3 : Is this what you observed within this toy model? Assuming that Earth's climate follows similar dynamics, what would be the best way to decide whether climate change is happening or not?\nAnswer 3:\nwrite your answer here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
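The forced-system equations used throughout the lab above translate into a standalone Python 3 derivative function (Python 3 removed the tuple-parameter syntax that older versions of this code relied on, so the state is unpacked inside the body). This is a minimal sketch of the right-hand side only; with `fnot=0` it reduces to the classic Lorenz system:

```python
import numpy as np

def forced_lorenz_deriv(state, t, sigma=10.0, beta=8.0/3, rho=28.0,
                        fnot=0.0, theta_deg=0.0):
    """Right-hand side of the forced Lorenz system.

    fnot is the forcing magnitude f0; theta_deg its direction in degrees.
    The forcing f0*(cos(theta), sin(theta)) enters the x and y equations.
    """
    x, y, z = state
    theta = np.deg2rad(theta_deg)
    return [sigma * (y - x) + fnot * np.cos(theta),
            x * (rho - z) - y + fnot * np.sin(theta),
            x * y - beta * z]
```

A function with this `(state, t)` signature can be handed directly to `scipy.integrate.odeint`, as the notebook does with its inner `lorenz_deriv`.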
cleuton/datascience
datavisualization/data_visualization_python_english.ipynb
apache-2.0
[ "Data visualization with Python\n1 - Charts turbo intro\nCleuton Sampaio, DataLearningHub\nHere you'll learn the basics of chart generation, including formatting and positioning, using the most common types (Bar, Scatter, Pie) \n\nData visualization is a very important aspect of Data Science work, and not everyone knows the most used types of graphs, in addition to the techniques and libraries used for this.\nWhen you have more than two or three attributes, direct visualization becomes impractical. In these cases, it is possible to apply techniques for dimensionality reduction, something that is outside the scope of this work.\nBars, pies and lines\nThe most common charts in research are these: Bars, Pies and Lines, and you can easily create them with Matplotlib .\nThe first thing to do is to install matplotlib:\npip install matplotlib\nIf you are creating your model using Jupyter, then you have to use a magic command to inform that the generated chart must be inserted in the notebook:\n%matplotlib inline\nLet's show a very simple example. Imagine data collected from temperatures in 3 different cities. We have lists containing temperature measurements for each city. Let's use Numpy to create vectors so we can work with them numerically:", "import numpy as np\n%matplotlib inline \ntemp_cidade1 = np.array([33.15,32.08,32.10,33.25,33.01,33.05,32.00,31.10,32.27,33.81])\ntemp_cidade2 = np.array([35.17,36.23,35.22,34.33,35.78,36.31,36.03,36.23,36.35,35.25])\ntemp_cidade3 = np.array([22.17,23.25,24.22,22.31,23.18,23.31,24.11,23.53,24.38,21.25])", "Let's calculate the average temperature of each city and use it to generate a graph:", "medias = [np.mean(temp_cidade1), np.mean(temp_cidade2), np.mean(temp_cidade3)] # Values for the graph\nnomes = ['Cidade Um', 'Cidade Dois', 'Cidade Três'] # City names", "Now, let's create a bar chart using the pyplot module. 
First, we'll show you a very simple graph:", "import matplotlib.pyplot as plt\nfig, ax = plt.subplots() # Returns the graph figure and the graphics object (axes)\nax.bar([0,1,2], medias, align='center') # We create a graph passing the position of the elements\nax.set_xticks([0,1,2]) # Indicates the position of each label on the X axis\nax.set_xticklabels(nomes) # City names\nax.set_title('Média das temperaturas') # Chart title - Temperature average\nax.yaxis.grid(True) # Do we want to show a grid on the y axis?\nplt.show() # Generate charts", "Generating a line chart can help you understand the evolution of the data over time. We will generate line charts with the temperatures of the three cities.\nTo start, let's generate a line graph with a single city:", "fig, ax = plt.subplots()\nax.plot(temp_cidade1)\nax.set_title('Temperaturas da Cidade 1') # Chart title - City 1 temperatures\nax.yaxis.grid(True)\nplt.show()", "It is very common to compare data variations, and we can do this by drawing the charts side by side. They can be charts of the same type or different types, and they can be in several lines and columns. 
For this, we build an instance of the Figure class separately, and each Axes instance as well:", "fig = plt.figure(figsize=(20, 5)) # width and height in inches\ngrade = fig.add_gridspec(1, 3) # We created a grid with 1 row and 3 columns (it can be with several rows too)\nax1 = fig.add_subplot(grade[0, 0]) # First row, first column\nax2 = fig.add_subplot(grade[0, 1]) # First row, second column\nax3 = fig.add_subplot(grade[0, 2]) # First row, third column\nax1.plot(temp_cidade1)\nax1.set_title('Temperaturas da Cidade 1') # Chart 1 title\nax1.yaxis.grid(True)\nax2.plot(temp_cidade2)\nax2.set_title('Temperaturas da Cidade 2') # Chart 2 title\nax2.yaxis.grid(True)\nax3.plot(temp_cidade3)\nax3.set_title('Temperaturas da Cidade 3') # Chart 3 title\nax3.yaxis.grid(True)\nplt.show()", "Another interesting way to compare data series is to create a chart with multiple series. Let's see how to do this:", "fig, ax = plt.subplots()\nax.plot(temp_cidade1)\nax.plot(temp_cidade2)\nax.plot(temp_cidade3)\nax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart title\nax.yaxis.grid(True)\nplt.show()", "Here, lines of different colors were used, but we can change the legend and shape of the graphics:", "fig, ax = plt.subplots()\nax.plot(temp_cidade1, marker='^') # triangle markers\nax.plot(temp_cidade2, marker='o') # circle markers\nax.plot(temp_cidade3, marker='.') # dot markers\nax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart title\nax.yaxis.grid(True)\nplt.show()", "But we can better highlight each data series:", "fig, ax = plt.subplots()\nax.plot(temp_cidade1, color=\"red\",markerfacecolor='pink', marker='^', linewidth=4, markersize=12, label='Cidade1')\nax.plot(temp_cidade2, color=\"skyblue\",markerfacecolor='blue', marker='o', linewidth=4, markersize=12, label='Cidade2')\nax.plot(temp_cidade3, color=\"green\", linewidth=4, linestyle='dashed', label='Cidade3')\nax.set_title('Temperaturas das Cidades 1,2 e 3') # Chart 
title\nax.yaxis.grid(True)\nplt.legend()\nplt.show()", "The properties below can be used to differentiate the lines:\n- color: Line color\n- marker: Type of marker\n- markerfacecolor: Marker fill color\n- linewidth: Line width\n- markersize: Marker size\n- label: Data series label\n- linestyle: Data line style\nAnd we have the \"legend()\" method that creates the legend of the graph\nBut generally, when we are creating a model, we use Pandas data frames, with several attributes. Can you generate graphs directly from them? Yes, of course. We will see a simple example.\nFirst, let's import pandas and then read two CSV datasets (the datasets are at: https://github.com/cleuton/datascience/tree/master/datasets)", "import pandas as pd\ndolar = pd.read_csv('../datasets/dolar.csv') # Dolar value in Reais (Brazilian currency)\ndesemprego = pd.read_csv('../datasets/desemprego.csv') # Unemployment\n\ndolar.head()", "Let's analyze this dataframe:", "dolar.describe()", "There's something wrong! Some values are way above 1,000. It must be a dataset error. Let's get this right:", "dolar.loc[dolar.Dolar > 1000, ['Dolar']] = dolar['Dolar'] / 1000\n\ndesemprego.head()", "We can build line charts of each one using plot, but in fact any type of chart can be generated from data frames.", "fig = plt.figure(figsize=(20, 5)) # Width and height in inches\ngrade = fig.add_gridspec(1, 2) # We created a grid with 1 row and 2 columns (it can be with several rows too)\nax1 = fig.add_subplot(grade[0, 0]) # First row, first column\nax2 = fig.add_subplot(grade[0, 1]) # First row, second column\nax1.plot(dolar['Periodo'],dolar['Dolar']) # We plot the dollar value by period\nax1.set_title('Evolução da cotação do Dólar') \nax1.yaxis.grid(True)\nax2.plot(desemprego['Periodo'],desemprego['Desemprego'])\nax2.set_title('Evolução da taxa de desemprego') # Evolution of the unemployment rate by period\nax2.yaxis.grid(True)", "Yes ... 
One of the great advantages of visualization, even simple, is that we see a positive correlation between the value of the dollar and unemployment. But be careful not to take this as a cause and effect relationship! There are other factors that influence both! To illustrate this, and show how to create scatter plots, let's plot a graph with the dollar on the X axis and unemployment on the Y:", "fig, ax = plt.subplots()\nax.scatter(dolar['Dolar'],desemprego['Desemprego'])\nax.set_xlabel(\"Valor do dólar\")\nax.set_ylabel(\"Taxa de desemprego\")\nplt.show()", "We can see that there is even some apparent correlation, but at some point, after the value of R$ 3.00, the unemployment rate jumped. This proves that the model lacks explanatory variables.\nTo end this lesson, let's look at how to generate pie charts. To begin, let's \"invent\" a monthly product sales data frame:", "df_vendas = pd.DataFrame(np.random.randint(0,100,size=(100, 4)), columns=list('ABCD'))\n\ndf_vendas.head()", "Well, imagine that each column represents the sales of one of the products, and each line is a day. We will generate a pie chart with sales.", "list(df_vendas.columns)\n\nfig, ax = plt.subplots()\ntotais = df_vendas.sum()\nax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%')\nax.set_title('Vendas do período') \nplt.show()", "And we can \"explode\" one or more pieces. 
For example, let's separate the larger piece, from product \"C\":", "listexplode = [0]*len(totais) # we created a list containing zeros: One for each slice of the pizza\nimax = totais.idxmax() # We take the index of the product with the highest amount of sales\nix = list(df_vendas.columns).index(imax) # Now, we turn this index into the list position\nlistexplode[ix]=0.1 # We modified the highlight specification of the slice with the highest value\nfig, ax = plt.subplots()\nax.pie(totais, labels=list(df_vendas.columns),autopct='%1.1f%%', explode=listexplode)\nax.set_title('Vendas do período') \nplt.show()", "Here we use some interesting functions. As \"totais\" is a series (pandas.Series) we can obtain the index of the column with the highest value with the function \"ixmax ()\". Only it will return the column index, which is the product name. We need the position (from zero to size-1). Then I do a \"trick\" with the list of columns of the original dataframe, taking the position of the product. Then, just modify the list \"listexplode\" stating how much we want to highlight that particular slice (the others are zero)." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
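The explode-list construction from the pie-chart cells above can be sketched as a standalone snippet. The sales totals here are made up for illustration (stand-ins for `df_vendas.sum()`):

```python
import pandas as pd

# Hypothetical sales totals per product
totais = pd.Series({'A': 120, 'B': 340, 'C': 510, 'D': 95})

# One explode offset per slice, all zero by default
listexplode = [0] * len(totais)

# idxmax() returns the *label* of the largest value ('C' here),
# so convert that label to a positional index before assigning
imax = totais.idxmax()
ix = list(totais.index).index(imax)
listexplode[ix] = 0.1  # pull the largest slice out by 10% of the radius

print(listexplode)  # -> [0, 0, 0.1, 0]
```

Passing this list as `explode=listexplode` to `ax.pie(...)` highlights only the best-selling product.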
Ruediger-Braun/compana16
Lektion12-Fehler.ipynb
gpl-3.0
[ "Lesson 12", "from sympy import *\ninit_printing()\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Matrix exponentials", "x = Symbol('x', real=True)\n\nA = Matrix(3,3, [x,x,0,0,x,x,0,0,x])\nA\n\nA.exp()", "Coupled pendulums\n\\begin{align}\ny'' &= w - y + \\cos(2t)\\\nw'' &= y - 3w\n\\end{align}\nThis translates into\n\\begin{align}\n y_0' &= y_1 \\\n y_1' &= y_2 - y_0 + \\cos(2t) \\\n y_2' &= y_3 \\\n y_3' &= y_0 - 3 y_2\n\\end{align}", "A = Matrix(4,4,[0,1,0,0,-1,0,1,0,0,0,0,1,1,0,-3, 0])\nA\n\nA.eigenvals()", "Fundamental system", "%time Phi = (x*A).exp() # fundamental system for the system", "Unfortunately, the fundamental system becomes too complicated", "%time len(latex(Phi))\n\nt = Symbol('t', real=True)\n\nmu = list(A.eigenvals())\nmu\n\nphi = [exp(mm*t) for mm in mu]\nphi\n\ndef element(i, j):\n f = phi[j]\n return f.diff(t, i)\n\nPhi = Matrix(4, 4, element)\n\nPhi\n\nP1 = Phi**(-1)\n\nlen(latex(P1))\n\nP4 = Phi.inv()\nlen(latex(P4))\n\nA3 = P1*Phi\nA3[0,0].n()\n\nA4 = simplify(A3)\n\nA4[0,0].n()\n\nA3[0,0].simplify()\n\nOut[44].n()\n\nlen(latex(A3[0,0]))\n\nA2 = simplify(P1*Phi)\nA2[0,0]\n\nA2[0,0].n()\n\nP2 = simplify(P1.expand())\nlen(latex(P2))\n\nP2\n\n(P2*Phi).simplify()\n\nA = Out[31]\n\nA[0,0].n()\n\nB = Matrix([0, cos(2*t), 0, 0])\nB\n\nP2*B\n\nP3 = Integral(P2*B, t).doit()\nP3\n\ntmp = (Phi*P3)[0]\ntmp = tmp.simplify()\n\nexpand(tmp).collect([sin(2*t), cos(2*t)])\n\npsi2 = (Phi*P3)[2]\npsi2.simplify().expand()\n\nim(psi2.simplify()).expand()\n\n\nM = Matrix([0,1,t])\nIntegral(M, t).doit()", "Numerical solutions", "x = Symbol('x')\ny = Function('y')\n\ndgl = Eq(y(x).diff(x,2), -sin(y(x)))\ndgl\n\n#dsolve(dgl) # NotImplementedError", "The function mpmath.odefun solves the differential equation $[y_0', \\dots, y_n'] = F(x, [y_0, \\dots, y_n])$.", "def F(x, y):\n y0, y1 = y\n w0 = y1\n w1 = -mpmath.sin(y0)\n return [w0, w1]\n\nF(0,[0,1])\n\nab = [mpmath.pi/2, 0]\nx0 = 0\n\nphi = mpmath.odefun(F, x0, ab)\nphi(1)\n\nxn = np.linspace(0, 25, 200)\nwn = [phi(xx)[0] for xx in xn]\ndwn = [phi(xx)[1] for xx in xn]\n\nplt.plot(xn, wn, label=\"$y$\")\nplt.plot(xn, dwn, label=\"$y'$\")\nplt.legend();", "Results are stored internally (cached)", "%time phi(50)\n\n%time phi(60)\n\n%time phi(40)", "The pendulum equation", "dgl\n\neta = Symbol('eta')\ny0 = Symbol('y0')", "We solve the initial value problem $y'' = -\\sin(y)$, $y(0) = y_0$, $y'(0) = 0$.", "H = Integral(-sin(eta), eta).doit()\nH\n\nE = y(x).diff(x)**2/2 - H.subs(eta, y(x)) # energy\nE \n\nE.diff(x)\n\nE.diff(x).subs(dgl.lhs, dgl.rhs)", "The energy is a conserved quantity.", "E0 = E.subs({y(x): y0, y(x).diff(x): 0})\nE0\n\ndgl_E = Eq(E, E0)\ndgl_E\n\n# dsolve(dgl_E) # aborted", "Let's solve it with the method of separation of variables.", "Lsg = solve(dgl_E, y(x).diff(x))\nLsg\n\nh = Lsg[0].subs(y(x), eta)\nh\n\nI1 = Integral(1/h, eta).doit()\nI1", "Indeed, it is not elementarily integrable.\nSeparation of variables leads to\n$$ -\\frac{\\sqrt2}2 \\int_{y_0}^{y(x)} \\frac{d\\eta}{\\sqrt{\\cos(\\eta)-\\cos(y_0)}} = x. $$\nIn particular,\n$$ -\\frac{\\sqrt2}2 \\int_{y_0}^{-y_0} \\frac{d\\eta}{\\sqrt{\\cos(\\eta)-\\cos(y_0)}} $$\nis equal to half the oscillation period.", "I2 = Integral(1/h, (eta, y0, -y0))\nI2\n\ndef T(ypsilon0):\n return 2*re(I2.subs(y0, ypsilon0).n())\n\nT(pi/2)\n\nphi(T(pi/2)), mpmath.pi/2\n\nxn = np.linspace(0.1, .95*np.pi, 5)\nwn = [T(yy) for yy in xn]\n\nplt.plot(xn, wn);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
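The `A.eigenvals()` call in the coupled-pendulum cells above can be cross-checked numerically: eliminating `w` from the homogeneous system gives the characteristic polynomial λ⁴ + 4λ² + 2 for the 4×4 first-order matrix. A small numpy sketch, independent of the sympy session:

```python
import numpy as np

# First-order form of the coupled system y'' = w - y, w'' = y - 3w
# (the cos(2t) forcing plays no role in the eigenvalue analysis)
A = np.array([[ 0., 1.,  0., 0.],
              [-1., 0.,  1., 0.],
              [ 0., 0.,  0., 1.],
              [ 1., 0., -3., 0.]])

# Characteristic polynomial coefficients, highest degree first
coeffs = np.poly(np.linalg.eigvals(A))
print(np.real(coeffs))  # coefficients of lambda**4 + 4*lambda**2 + 2, up to floating-point noise
```

All eigenvalues are purely imaginary (λ² = −2 ± √2), which is why the symbolic fundamental system consists of oscillatory exponentials.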
bobmyhill/burnman
tutorial/tutorial_04_fitting.ipynb
gpl-2.0
[ "<h1>The BurnMan Tutorial</h1>\n\nPart 4: Fitting\nThis file is part of BurnMan - a thermoelastic and thermodynamic toolkit\nfor the Earth and Planetary Sciences\nCopyright (C) 2012 - 2021 by the BurnMan team,\nreleased under the GNU GPL v2 or later.\nIntroduction\nThis ipython notebook is the fourth in a series designed to introduce new users to the code structure and functionalities present in BurnMan.\n<b>Demonstrates</b>\n\nburnman.optimize.eos_fitting.fit_PTV_data\nburnman.optimize.composition_fitting.fit_composition_to_solution\nburnman.optimize.composition_fitting.fit_phase_proportions_to_bulk_composition\n\nEverything in BurnMan and in this tutorial is defined in SI units.", "import burnman\nimport numpy as np\nimport matplotlib.pyplot as plt", "Fitting parameters for an equation of state to experimental data\nBurnMan contains least-squares optimization functions that fit model parameters to data.\nThere are two helper functions especially for use in fitting Mineral parameters to experimental data; these are burnman.optimize.eos_fitting.fit_PTp_data (which can fit multiple kinds of data at the same time), and burnman.optimize.eos_fitting.fit_PTV_data, which specifically fits only pressure-temperature-volume data. \nAn extended example of fitting various kinds of data, outlier removal and detailed analysis can be found in examples/example_fit_eos.py.\nIn this tutorial, we shall focus solely on fitting room temperature pressure-temperature-volume data. Specifically, the data we will fit is experimental volumes of stishovite, taken from Andrault et al. (2003). 
This data is provided in the form [P (GPa), V (Angstrom^3) and sigma_V (Angstrom^3)].", "PV = np.array([[0.0001, 46.5126, 0.0061],\n [1.168, 46.3429, 0.0053],\n [2.299, 46.1756, 0.0043],\n [3.137, 46.0550, 0.0051],\n [4.252, 45.8969, 0.0045],\n [5.037, 45.7902, 0.0053],\n [5.851, 45.6721, 0.0038],\n [6.613, 45.5715, 0.0050],\n [7.504, 45.4536, 0.0041],\n [8.264, 45.3609, 0.0056],\n [9.635, 45.1885, 0.0042],\n [11.69, 44.947, 0.002], \n [17.67, 44.264, 0.002],\n [22.38, 43.776, 0.003],\n [29.38, 43.073, 0.009],\n [37.71, 42.278, 0.008],\n [46.03, 41.544, 0.017],\n [52.73, 40.999, 0.009],\n [26.32, 43.164, 0.006],\n [30.98, 42.772, 0.005],\n [34.21, 42.407, 0.003],\n [38.45, 42.093, 0.004],\n [43.37, 41.610, 0.004],\n [47.49, 41.280, 0.007]])\n\nprint(f'{len(PV)} data points loaded successfully.')", "BurnMan works exclusively in SI units, so we have to convert from GPa to Pa, and Angstrom per cell into molar volume in m^3.\nThe fitting function also takes covariance matrices as input, so we have to build those matrices.", "from burnman.tools.unitcell import molar_volume_from_unit_cell_volume\n\nZ = 2. 
# number of formula units per unit cell in stishovite\nPTV = np.array([PV[:,0]*1.e9,\n 298.15 * np.ones_like(PV[:,0]),\n molar_volume_from_unit_cell_volume(PV[:,1], Z)]).T\n\n# Here, we assume that the pressure uncertainties are equal to 3% of the total pressure, \n# that the temperature uncertainties are negligible, and take the unit cell volume\n# uncertainties from the paper.\n# We also assume that the uncertainties in pressure and volume are uncorrelated.\nnul = np.zeros_like(PTV[:,0])\nPTV_covariances = np.array([[0.03*PTV[:,0], nul, nul],\n [nul, nul, nul],\n [nul, nul, molar_volume_from_unit_cell_volume(PV[:,2], Z)]]).T\nPTV_covariances = np.power(PTV_covariances, 2.)", "The next code block creates a Mineral object (stv), providing starting guesses for the parameters.\nThe user selects which parameters they wish to fit, and which they wish to keep fixed.\nThe parameters of the Mineral object are automatically updated during fitting.\nFinally, the optimized parameter values and their variances are printed to screen.", "stv = burnman.minerals.HP_2011_ds62.stv()\nparams = ['V_0', 'K_0', 'Kprime_0']\nfitted_eos = burnman.optimize.eos_fitting.fit_PTV_data(stv, params, PTV, PTV_covariances, verbose=False)\n\nprint('Optimized equation of state for stishovite:')\nburnman.tools.misc.pretty_print_values(fitted_eos.popt, fitted_eos.pcov, fitted_eos.fit_params)\nprint('')", "The fitted_eos object contains a lot of useful information about the fit. 
In the next code block, we plot the corner plot of the covariances, showing the trade-offs between different parameters.", "import matplotlib\nmatplotlib.rc('axes.formatter', useoffset=False) # turns offset off, makes for a more readable plot\n\nfig = burnman.nonlinear_fitting.corner_plot(fitted_eos.popt, fitted_eos.pcov,\n params)\naxes = fig[1]\naxes[1][0].set_xlabel('$V_0$ ($\\\\times 10^{-5}$ m$^3$)')\naxes[1][1].set_xlabel('$K_0$ ($\\\\times 10^{11}$ Pa)')\naxes[0][0].set_ylabel('$K_0$ ($\\\\times 10^{11}$ Pa)')\naxes[1][0].set_ylabel('$K\\'_0$')\nplt.show()", "We now plot our optimized equation of state against the original data.\nBurnMan also includes a useful function burnman.optimize.nonlinear_fitting.confidence_prediction_bands that can be used to calculate the nth percentile confidence and prediction bounds on a function given a model using the delta method.", "from burnman.tools.misc import attribute_function\nfrom burnman.optimize.nonlinear_fitting import confidence_prediction_bands\n\nT = 298.15\npressures = np.linspace(1.e5, 60.e9, 101)\ntemperatures = T*np.ones_like(pressures)\nvolumes = stv.evaluate(['V'], pressures, temperatures)[0]\nPTVs = np.array([pressures, temperatures, volumes]).T\n\n# Calculate the 95% confidence and prediction bands\ncp_bands = confidence_prediction_bands(model=fitted_eos,\n x_array=PTVs,\n confidence_interval=0.95,\n f=attribute_function(stv, 'V'),\n flag='V')\n\nplt.fill_between(pressures/1.e9, cp_bands[2] * 1.e6, cp_bands[3] * 1.e6,\n color=[0.75, 0.25, 0.55], label='95% prediction bands')\nplt.fill_between(pressures/1.e9, cp_bands[0] * 1.e6, cp_bands[1] * 1.e6,\n color=[0.75, 0.95, 0.95], label='95% confidence bands')\n\nplt.plot(PTVs[:,0] / 1.e9, PTVs[:,2] * 1.e6, label='Optimized fit for stishovite')\nplt.errorbar(PTV[:,0] / 1.e9, PTV[:,2] * 1.e6,\n xerr=np.sqrt(PTV_covariances[:,0,0]) / 1.e9,\n yerr=np.sqrt(PTV_covariances[:,2,2]) * 1.e6,\n linestyle='None', marker='o', label='Andrault et al. 
(2003)')\n\nplt.ylabel(\"Volume (cm$^3$/mol)\")\nplt.xlabel(\"Pressure (GPa)\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Stishovite EoS (room temperature)\")\nplt.show()", "We can also calculate the confidence and prediction bands for any other property of the mineral. In the code block below, we calculate and plot the optimized isothermal bulk modulus and its uncertainties.", "cp_bands = confidence_prediction_bands(model=fitted_eos,\n x_array=PTVs,\n confidence_interval=0.95,\n f=attribute_function(stv, 'K_T'),\n flag='V')\n\nplt.fill_between(pressures/1.e9, (cp_bands[0])/1.e9, (cp_bands[1])/1.e9, color=[0.75, 0.95, 0.95], label='95% confidence band')\nplt.plot(pressures/1.e9, (cp_bands[0] + cp_bands[1])/2.e9, color='b', label='Best fit')\n\nplt.ylabel(\"Bulk modulus (GPa)\")\nplt.xlabel(\"Pressure (GPa)\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Stishovite EoS; uncertainty in bulk modulus (room temperature)\")\nplt.show()", "Finding the best fit endmember proportions of a solution given a bulk composition\nLet's now turn our focus to a different kind of fitting. It is common in petrology to have a bulk composition of a phase (provided, for example, by electron probe microanalysis), and want to turn this composition into a formula that satisfies stoichiometric constraints. This can be formulated as a constrained, weighted least squares problem, and BurnMan can be used to solve these problems using the function burnman.optimize.composition_fitting.fit_composition_to_solution.\nIn the following example, we shall create a model garnet composition, and then fit that to the Jennings and Holland (2015) garnet solution model. 
First, let's look at the solution model endmembers (pyrope, almandine, grossular, andradite and knorringite):", "from burnman import minerals\n\ngt = minerals.JH_2015.garnet()\n\nprint(f'Endmembers: {gt.endmember_names}')\nprint(f'Elements: {gt.elements}')\nprint('Stoichiometric matrix:')\nprint(gt.stoichiometric_matrix)", "Now, let's create a model garnet composition. A unique composition can be determined with the species Fe (total), Ca, Mg, Cr, Al, Si and Fe3+, all given in mole amounts. On top of this, we add some random noise (using a fixed seed so that the composition is reproducible).", "fitted_variables = ['Fe', 'Ca', 'Mg', 'Cr', 'Al', 'Si', 'Fe3+']\nvariable_values = np.array([1.1, 2., 0., 0, 1.9, 3., 0.1])\nvariable_covariances = np.eye(7)*0.01*0.01\n\n# Add some noise.\nv_err = np.random.rand(7)\nnp.random.seed(100)\nvariable_values = np.random.multivariate_normal(variable_values,\n variable_covariances)", "Importantly, Fe3+ isn't an element or a site-species of the solution model, so we need to provide the linear conversion from Fe3+ to elements and/or site species. In this case, Fe3+ resides only on the second site (Site B), and the JH_2015.gt model has labelled Fe3+ on that site as Fef. Therefore, the conversion is simply Fe3+ = Fef_B.", "variable_conversions = {'Fe3+': {'Fef_B': 1.}}", "Now we're ready to do the fitting. 
The following line is all that is required, and yields as output the optimized parameters, the corresponding covariance matrix and the residual.", "from burnman.optimize.composition_fitting import fit_composition_to_solution\npopt, pcov, res = fit_composition_to_solution(gt,\n fitted_variables,\n variable_values,\n variable_covariances,\n variable_conversions)", "Finally, the optimized parameters can be used to set the composition of the solution model, and the optimized parameters printed to stdout.", "# We can set the composition of gt using the optimized parameters\ngt.set_composition(popt)\n\n# Print the optimized parameters and principal uncertainties\nprint('Molar fractions:')\nfor i in range(len(popt)):\n print(f'{gt.endmember_names[i]}: '\n f'{gt.molar_fractions[i]:.3f} +/- '\n f'{np.sqrt(pcov[i][i]):.3f}')\n\nprint(f'Weighted residual: {res:.3f}')", "As in the equation of state fitting, a corner plot of the covariances can also be plotted.", "fig = burnman.nonlinear_fitting.corner_plot(popt, pcov, gt.endmember_names)", "Fitting phase proportions to a bulk composition\nAnother common constrained weighted least squares problem involves fitting phase proportions, given their individual compositions and the overall bulk composition. This is particularly important in experimental petrology, where the bulk composition is known from a starting composition. 
In these cases, the residual after fitting is often used to assess whether the sample remained a closed system during the experiment.\nIn the following example, we take phase compositions and the bulk composition reported from high pressure experiments on a Martian mantle composition by Bertka and Fei (1997), and use these to calculate phase proportions in Mars mantle, and the quality of the experiments.\nFirst, some tedious data preparation...", "import itertools\n\n# Load and transpose input data\nfilename = '../burnman/data/input_fitting/Bertka_Fei_1997_mars_mantle.dat'\nwith open(filename) as f:\n column_names = f.readline().strip().split()[1:]\ndata = np.genfromtxt(filename, dtype=None, encoding='utf8')\ndata = list(map(list, itertools.zip_longest(*data, fillvalue=None)))\n\n# The first six columns are compositions given in weight % oxides\ncompositions = np.array(data[:6])\n\n# The first row is the bulk composition\nbulk_composition = compositions[:, 0]\n\n# Load all the data into a dictionary\ndata = {column_names[i]: np.array(data[i])\n for i in range(len(column_names))}\n\n# Make ordered lists of samples (i.e. 
experiment ID) and phases\nsamples = []\nphases = []\nfor i in range(len(data['sample'])):\n if data['sample'][i] not in samples:\n samples.append(data['sample'][i])\n if data['phase'][i] not in phases:\n phases.append(data['phase'][i])\n\nsamples.remove(\"bulk_composition\")\nphases.remove(\"bulk\")\n\n# Get the indices of all the phases present in each sample\nsample_indices = [[i for i in range(len(data['sample']))\n if data['sample'][i] == sample]\n for sample in samples]\n\n# Get the run pressures of each experiment\npressures = np.array([data['pressure'][indices[0]] for indices in sample_indices])", "The following code block loops over each of the compositions, and finds the best weight proportions and uncertainties on those proportions.", "from burnman.optimize.composition_fitting import fit_phase_proportions_to_bulk_composition\n\n# Create empty arrays to store the weight proportions of each phase,\n# and the principal uncertainties (we do not use the covariances here,\n# although they are calculated)\nweight_proportions = np.zeros((len(samples), len(phases)))*np.NaN\nweight_proportion_uncertainties = np.zeros((len(samples),\n len(phases)))*np.NaN\n\nresiduals = []\n# Loop over the samples, fitting phase proportions\n# to the provided bulk composition\nfor i, sample in enumerate(samples):\n # This line does the heavy lifting\n popt, pcov, res = fit_phase_proportions_to_bulk_composition(compositions[:, sample_indices[i]],\n bulk_composition)\n\n residuals.append(res)\n\n # Fill the correct elements of the weight_proportions\n # and weight_proportion_uncertainties arrays\n sample_phases = [data['phase'][i] for i in sample_indices[i]]\n for j, phase in enumerate(sample_phases):\n weight_proportions[i, phases.index(phase)] = popt[j]\n weight_proportion_uncertainties[i, phases.index(phase)] = np.sqrt(pcov[j][j])", "Finally, we plot the data.", "fig = plt.figure(figsize=(6, 6))\nax = [fig.add_subplot(4, 1, 1)]\nax.append(fig.add_subplot(4, 1, (2, 4)))\nfor i, 
phase in enumerate(phases):\n ebar = plt.errorbar(pressures, weight_proportions[:, i],\n yerr=weight_proportion_uncertainties[:, i],\n fmt=\"none\", zorder=2)\n ax[1].scatter(pressures, weight_proportions[:, i], label=phase, zorder=3)\n\nax[0].set_title('Phase proportions in the Martian Mantle (Bertka and Fei, 1997)')\nax[0].scatter(pressures, residuals)\nfor i in range(2):\n ax[i].set_xlim(0., 40.)\n\nax[1].set_ylim(0., 1.)\nax[0].set_ylabel('Residual')\nax[1].set_xlabel('Pressure (GPa)')\nax[1].set_ylabel('Phase fraction (wt %)')\nax[1].legend()\nfig.set_tight_layout(True)\nplt.show()", "We can see from this plot that most of the residuals are below one, indicating that the probe analyses are consistent with the bulk composition. Three analyses have higher residuals, which may indicate a problem with the experiments, or with the analyses around the wadsleyite field. \nThe phase proportions also show some nice trends; clinopyroxene weight percentage increases with pressure at the expense of orthopyroxene. Garnet / majorite percentage increases sharply as clinopyroxene is exhausted at 14-16 GPa.\nAnd we're done! Next time, we'll look at how to determine equilibrium assemblages in BurnMan." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
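The constrained least-squares idea behind `fit_phase_proportions_to_bulk_composition` can be sketched with scipy's non-negative least squares on made-up data. This is not BurnMan's implementation, just the core idea: each column is a phase composition, and we solve for non-negative mixing fractions that reproduce the bulk:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical phase compositions: columns are phases, rows are components (wt%)
phase_compositions = np.array([[50., 40., 10.],
                               [30., 35., 60.],
                               [20., 25., 30.]])

# Construct a bulk composition that is an exact 60/30/10 mix of the phases
true_fractions = np.array([0.6, 0.3, 0.1])
bulk = phase_compositions @ true_fractions

# Non-negative least squares recovers the phase proportions;
# residual measures how well the phases account for the bulk
fractions, residual = nnls(phase_compositions, bulk)
print(fractions)  # approximately [0.6, 0.3, 0.1]
```

With real probe data the fit is not exact, and (as in the Bertka and Fei example above) a large residual flags an open system or an analytical problem.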
jpallas/beakerx
doc/python/EasyForm.ipynb
apache-2.0
[ "Python API to EasyForm", "from beakerx import *\n\nf = EasyForm(\"Form and Run\")\nf.addTextField(\"first\")\nf.addTextField(\"last\")\nf['first'] = \"First\"\nf['last'] = \"Last\"\nf.addButton(\"Go!\", tag=\"run\")\nf\n\n\"Good morning \" + f[\"first\"] + \" \" + f[\"last\"]\n\nf['last'][::-1] + '...' + f['first']\n\nf['first'] = 'Beaker'\nf['last'] = 'Berzelius'\n\nh = EasyForm(title=\"Form and Run\")\n\nh.addTextField(\"first\", width=10)\nh.addTextField(\"default\")\nh.addTextArea(\"Text Area 1\", height=5, width=20)\nh.addTextArea(\"Text Area 2\")\nh.addTextArea(\"Text Area 3\", height=10)\nh.addTextArea(\"Text Area 4\",width=20)\nh\n\ng2 = EasyForm(\"Field Types\")\noptions = [\"a\", \"b\", \"c\", \"d\", \"e\", \"f\"]\ng2.addList(\"List Single\", options, multi=False)\ng2.addList(\"List Two Row\", options, rows=2)\ng2\n\nf['last']+ \", \"+f['first']\n\nf['last'] = \"new Value\"\n\nf['first'] = \"new Value2\"\n\n# All Kinds of Fields\n\ng = EasyForm(\"Field Types\")\ng.addTextField(\"Short Text Field\", width=10)\ng.addTextField(\"Text Field\")\ng.addPasswordField(\"Password Field\", width=10)\ng.addTextArea(\"Text Area\")\ng.addTextArea(\"Tall Text Area\", 10, 5)\ng.addCheckBox(\"Check Box\")\noptions = [\"a\", \"b\", \"c\", \"d\"]\ng.addComboBox(\"Combo Box\", options)\ng.addComboBox(\"Combo Box editable\", options, editable=True)\n\ng.addList(\"List\", options)\ng.addList(\"List Single\", options, multi=False)\ng.addList(\"List Two Row\", options, rows=2)\n\ng.addCheckBoxes(\"Check Boxes\", options)\ng.addCheckBoxes(\"Check Boxes H\", options, orientation=EasyForm.HORIZONTAL)\n\ng.addRadioButtons(\"Radio Buttons\", options)\ng.addRadioButtons(\"Radio Buttons H\", options, orientation=EasyForm.HORIZONTAL)\n\ng.addDatePicker(\"Date\")\n\ng.addButton(\"Go!\", tag=\"run2\")\ng\n\nresult = dict()\nfor child in g:\n result[child] = g[child]\n\nresult\n\ngdp = EasyForm(\"Field Types\")\ngdp.addDatePicker(\"Date\")\ngdp\n\ngdp['Date']\n\nf.put(\"first\", 
\"Michael\")\nf.put(\"last\", \"Fox\")\n# Read values from form\nfirstName = f.get(\"first\")\nlastName = f.get(\"last\")\n\nprint(\"Good morning \" + firstName + \" \" + lastName)\n\nf = EasyForm(\"actionPerformed demo\")\nf.addTextField(\"first\")\nf['first'] = \"First\"\nb = f.addButton(\"Action!\")\nb.actionPerformed = lambda: print(\"clicked \"+f[\"first\"])\nf\n\nimport operator\n\nf1 = EasyForm(\"OnInit and OnChange\")\nf1.addTextField(\"first\", width=15)\nf1.addTextField(\"last\", width=15)\\\n .onInit(lambda: operator.setitem(f1, 'last', \"setinit1\"))\\\n .onChange(lambda text: operator.setitem(f1, 'first', text + ' extra'))\n\nbutton = f1.addButton(\"action\", \"action_button\")\nbutton.actionPerformed = lambda: operator.setitem(f1, 'last', 'action done')\nf1", "Default Values and placeholder", "f3c = EasyForm(\"form3\")\nf3c.addTextArea(\"Default Value\", value = \"Initial value\")\nf3c.addTextArea(\"Place Holder\", placeholder = \"Put here some text\")\nf3c.addCheckBox(\"Default Checked\", value = True)\nf3c\n\nresult = dict()\nfor child in f3c:\n result[child] = f3c[child]\n\nresult", "JupyterJSWidgets work with EasyForm\nThe widgets from JupyterJSWidgets are compatible and can appear in forms.", "from beakerx import *\nfrom ipywidgets import * \nw = IntSlider()\n\nf = EasyForm(\"Form and Run\")\nf.addTextField(\"first\")\nf.addTextField(\"last\")\nf.addWidget(\"slider\", w)\nf['first'] = \"First\"\nf['last'] = \"Last\"\nf.addButton(\"Go!\", tag=\"run\")\nf\n\nf['slider']" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
connordurkin/CPSC_458
final_project.ipynb
mit
[ "Markowitz Portfolio Optimization\nFinal Project for CPSC 458 by Connor Durkin\nThis project explores the use of a mean-variance or Markowitz method of portfolio optimization. The goal is to employ this trading strategy for a portfolio of SPDR ETFs and track returns over historical data. More importantly, though, as this is a class in decision making, I have incorporated the ability for the functions here to explain their motivations to a human being--hopefully in a palatable manner. Below are the function definitions and at the end of the notebook you will find an example of their use. These functions were written with default key values but the operations are general enough to apply this strategy to any selection of securities with return data available via yahoo finance. Be sure to read the Results and Analysis at the end!", "%matplotlib inline\nimport yahoo_finance\nfrom yahoo_finance import Share\nimport numpy as np\nimport pandas\nimport matplotlib.pyplot as plt\nimport datetime\nimport cvxopt as opt\nfrom cvxopt import blas, solvers\n# We will do a lot of optimizations, \n# and don't want to see each step.\nsolvers.options['show_progress'] = False", "getTimeSeries( ticker, start_date, end_date)\nWhat it does:\ngetTimeSeries() takes in a date range and a ticker and returns a timeseries of adjusted closing prices.\nInputs:\n\nticker: a string indicating the security for which the time series will be generated.\nstart_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.\nend_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window\n\nReturns:\n\ntime_series: a single column Pandas DataFrame containing the time series of adjusted close prices\n for the indicated ticker.", "def getTimeSeries( ticker, start_date='2012-01-01', end_date='2012-02-01'):\n # yahoo_finance API to load list of dictionaries \n obj = Share(ticker)\n ts = obj.get_historical(start_date,end_date)\n # yahoo_finance indexes most recent date first, reverse this\n ts = list(reversed(ts))\n # Convert date strings to python datetime objects for easier manipulation\n dates = [datetime.datetime.strptime(ts[i]['Date'],'%Y-%m-%d').date() for i in range(len(ts))]\n # Convert close price strings to floats for numerical manipulation\n prices = [float(ts[i]['Adj_Close']) for i in range(len(ts))]\n # Create DataFrame from the list produced - python will recognize as Series\n time_series = pandas.DataFrame( prices, index = dates, columns = [ticker])\n return time_series", "getMultTimeSeries( tickers, start_date, end_date)\nWhat it does:\ngetMultTimeSeries() takes in a list of tickers and a specified date range and returns a Pandas DataFrame containing timeseries of adjusted closing prices. \nInputs:\n\ntickers: a list of strings indicating which tickers to include. Defaults to these 9 SPDR ETFs: 'XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'.\nstart_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.\nend_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window\n\nReturns:\n\ntime_series_dataframe: a dataframe of adjusted closing price timeseries over the specified date range for the specified group of tickers", "def getMultTimeSeries( tickers = ['XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'],\n start_date = '2012-01-01', end_date = '2012-02-01'):\n # Initialize DataFrame\n time_series_dataframe = pandas.DataFrame()\n # Iterate over all tickers and append column to DataFrame\n for ticker in tickers:\n # Use helper function to get single column DataFrame\n df = getTimeSeries( ticker, start_date, end_date)\n # Concatenate on axis = 1\n time_series_dataframe = pandas.concat([time_series_dataframe,df],axis = 1)\n return time_series_dataframe ", "markowitzReturns( returns)\nWhat it does:\nmarkowitzReturns() takes in a Pandas DataFrame (or any container which can be converted to a numpy matrix) of returns and uses mean-variance 
portfolio theory to return an optimally weighted portfolio. It does so by minimizing $\\omega^{T}\\Sigma\\omega -qR^{T}\\omega$ (the Markowitz mean-variance framework) for portfolio weights $\\omega$, where $\\Sigma$ is the covariance matrix of the securities, $R$ is the vector of expected returns and $q$ is a risk-tolerance parameter. The optimization is performed using the CVXOPT package's solvers.qp() quadratic programming method. This method minimizes $(1/2)x^{T}Px + q^{T}x$ subject to $Gx \\preceq h$ and $Ax = b$. It also utilizes CVXOPT's BLAS methods for performing linear algebra computations. Inspiration for this process was found in Dr. Thomas Starke, David Edwards and Dr. Thomas Wiecki's quantopian blog post located at: http://blog.quantopian.com/markowitz-portfolio-optimization-2/.\nInputs:\n\nreturns: a Pandas DataFrame (or other container which can be converted to a numpy matrix). NOTE: the dataframe produced by getMultTimeSeries must be transposed (returns.T) for meaningful results. \nexplain: a True / False input determining whether to print a detailed explanation of the chosen portfolio distribution. \n\nReturns:\n\noptimal_weights: the weights of the optimal portfolio in array form.\nreturns: the returns of all portfolios calculated across the efficient frontier.\nrisks: list of risks of all portfolios calculated across the efficient frontier.", "def markowitzReturns( returns, tickers, explain = False):\n n = len(returns)\n returns_df = returns\n returns = np.asmatrix(returns)\n mus = [10**(5.0 * t/50 - 1.0) for t in range(50)]\n # Convert to cvxopt matrices\n Sigma = opt.matrix(np.cov(returns))\n q = opt.matrix(np.mean(returns, axis=1))\n # Create constraint matrices\n G = -opt.matrix(np.eye(n)) # negative n x n identity matrix\n h = opt.matrix(0.0, (n ,1)) # -I*w < 0 i.e. no shorts\n A = opt.matrix(1.0, (1, n)) # A is all ones so A*w = sum(w)\n b = opt.matrix(1.0) # Dot product sums to 1 \n # Calculate efficient frontier weights using quadratic programming\n ports = [solvers.qp(mu*Sigma, -q, G, h, A, b)['x'] for mu in mus]\n # Calculate risks and returns of frontier \n returns = [blas.dot(q, x) for x in ports]\n risks = [np.sqrt(blas.dot(x, Sigma*x)) for x in ports]\n # Fit polynomial to frontier curve \n m = np.polyfit(returns, risks, 2)\n x = np.sqrt(m[2]/m[0])\n # Calculate optimal portfolio weights\n optimal_weights = solvers.qp(opt.matrix(x * Sigma), -q, G, h, A, b)['x']\n optimal_return = blas.dot(q, optimal_weights)\n optimal_risk = np.sqrt(blas.dot(optimal_weights, Sigma*optimal_weights))\n # Method to justify this portfolio distribution if asked for \n if( explain ):\n date_text = \"\"\"\n--------------------------------------------------------------------------------------------------\n\nUsing returns data from {0} to {1} a careful mean-variance analysis was performed. \nThe analysis found a number of portfolios lying on the Markowitz efficient frontier and they are \nfound below. 
The analysis indicates that the optimal portfolio for the next trading day will have \nthe following distribution:\n\"\"\"\n print(date_text.format(returns_df.columns[0],returns_df.columns[len(returns_df.columns)-1]))\n # Print optimal weights \n weights = np.asarray(optimal_weights)\n weights = [float(weights[i]) for i in range(len(weights))]\n wts = dict(zip(tickers,weights))\n for k in wts:\n weight_text = \"\\t{0} : {1:.4f}%\"\n print(weight_text.format(str(k),float(wts[k])*100))\n returns_text = \"\"\"\nThis portfolio distribution has an expected return of:\n {0:.4f}%\"\"\"\n print(returns_text.format(float(optimal_return)*100))\n risk_text = \"\"\"\nAnd the associated risk (standard deviation) is:\n {0:.4f}\"\"\"\n print(risk_text.format(float(optimal_risk)))\n break_text=\"\"\"\n--------------------------------------------------------------------------------------------------\n \"\"\"\n print(break_text)\n plt.plot(risks, returns, 'b-o')\n plt.title('Efficient Portfolios on {}'.format(returns_df.columns[len(returns_df.columns)-1]))\n plt.ylabel('Returns (%)')\n plt.xlabel('Risk (STD)')\n return np.asarray(optimal_weights), returns, risks ", "backtest( tickers, start_date, end_date, start, max_lookback, explain)\nWhat it does:\nbacktest() applies the mean-variance portfolio optimization trading strategy to a list of stocks. It applies the markowitzReturns() method over a range of dates and tracks the portfolio movement and returns, outputting a DataFrame describing the portfolio over time, a DataFrame describing the returns over time and a total return amount. Backtest does not take into account commission costs. Running backtest(explain = True) produces the output below. The default dates were carefully selected so that just one explain instance would print. \nInputs:\n\ntickers: a list of strings indicating which tickers to include. 
Defaults to these 9 SPDR ETFs: 'XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'.\nstart_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.\nend_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window\nstart: the minimum number of days to wait before beginning to trade (i.e. how much information is needed). Default is 10.\nmax_lookback: the maximum number of days to look back for data, i.e. the size of the input to markowitzReturns(). Default is 100.\n\nReturns:\n\nweights_df: a pandas DataFrame containing the portfolio weights over time beginning with the start date + start$*$days.\ntotal_returns: a pandas DataFrame containing the portfolio returns over time beginning with the start date + start$*$days.\nnaive_return: the total naive return (numpy float).", "def backtest( tickers = ['XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'],\n start_date = '2012-01-01', end_date = '2012-01-20', start = 10, max_lookback = 100,\n explain = False):\n timeseries = getMultTimeSeries( tickers, start_date, end_date)\n returns = timeseries.pct_change().dropna()\n weights_df = pandas.DataFrame()\n for i in range(len(returns)):\n if ( i > start ):\n if( i < max_lookback ):\n returns_window = returns[0:i]\n else:\n returns_window = returns[(i-max_lookback):i]\n try:\n if( explain ):\n weights, returns_window, risks = markowitzReturns(returns_window.T, tickers, explain = True)\n else:\n weights, returns_window, risks = markowitzReturns(returns_window.T, tickers, explain = False)\n except ValueError as e:\n # Sometimes CVXOPT fails (infrequently)\n # \"ValueError: Rank(A) < p or Rank([P; A; G]) < n\"\n # In this case just do nothing (keep current weights)\n weights, returns_window, risks = weights_prev, returns_window_prev, risks_prev\n weights = [float(weights[i]) for i in range(len(weights))]\n wts = dict(zip(tickers,weights))\n df = pandas.DataFrame(wts, index = [returns.index[i]])\n weights_df = 
pandas.concat([weights_df, df])\n weights_prev, returns_window_prev, risks_prev = weights, returns_window, risks\n total_returns = pandas.DataFrame(weights_df.values*returns[(start+1)::],\n columns = returns.columns, index = returns.index)\n naive_returns = [np.sum(total_returns[[i]]) for i in range(len(total_returns.columns))]\n naive_return = np.sum(naive_returns)\n return weights_df, total_returns.dropna(), naive_return\nweights, returns, naive_return = backtest(explain = True)", "analyzeResults( weights_df, total_returns, naive_return, commission)\nWhat it does:\nanalyzeResults() is the final function which analyzes and displays the results of the backtest() function. It takes the output of backtest() plus an argument for the commission wich defaults to 4 basis points. It plots the real and naive returns over time and displays the total real and naive returns over the date range from backtest(). Below is an example from 2012.\nInputs:\n\nweights_df: pandas DataFrame of portfolio weights over time, returned from backtest().\ntotal_returns: pandas DataFrame of naive returns over time, returned from backtest().\nnaive_return: total naive_return as returned by backtest().\ncommission: basis point cost on trades, defualts to 4 basis points. 
\n\nReturns:\n\nnothing", "weights, returns, naive_return = backtest(start_date='2012-01-01',end_date='2012-12-31')\ndef analyzeResults( weights_df, total_returns, naive_return, commission = .0004):\n start_date = weights_df.index[0]\n end_date = weights_df.index[len(weights_df.index)-1]\n # Get cummulative sum of returns for plotting\n return_sums = total_returns.cumsum()\n return_sums['total_return'] = return_sums.sum(axis=1)\n # Analyze data with commission costs \n weights_diff = weights_df.diff()\n weights_diff['total_delta'] = weights_diff.abs().sum(axis = 1)\n portfolio_movement = pandas.DataFrame(weights_diff['total_delta']/2)\n portfolio_movement['commissions'] = portfolio_movement['total_delta']*commission\n portfolio_movement['naive_return'] = total_returns.sum(axis=1)\n portfolio_movement['real_return'] = (portfolio_movement['naive_return'] - portfolio_movement['commissions'])\n real_sums = portfolio_movement.cumsum()\n real_return = portfolio_movement['real_return'].sum()\n # Print naive_return and real_return + analysis\n naive_return_text = \"\"\"\n--------------------------------------------------------------------------------------------------\nIn trading from {0} to {1} the total return ignoring commission fees was:\n \n {2:.4f}%\n \nAfter factoring in commission fees of {3} the total return was:\n \n {4:.4f}%\n \n-------------------------------------------------------------------------------------------------- \n \"\"\"\n print(naive_return_text.format( start_date, end_date, naive_return*100, commission ,real_return*100) )\n # Get plot of naive_returns and real returns over time \n plt.figure(figsize=(12,6))\n plt.plot(return_sums.index,return_sums['total_return'],label='Naive Returns')\n plt.plot(real_sums.index,real_sums['real_return'],label='Real Returns')\n plt.title('Returns over Time')\n plt.xlabel('Time')\n plt.ylabel('Returns (%)')\n plt.xticks(rotation=70)\n plt.legend()\n plt.legend(bbox_to_anchor=(1.01, .5), loc=2, borderaxespad=0.)\n 
return\nanalyzeResults( weights, returns, naive_return, commission = .0004)", "Results and Analysis\nHere are my thoughts on these areas which could lead to improvement:\nTransaction Costs:\nHooray! We made money! Alas, perhaps this is not the case. There are a number of shortcomings in the design and implementation of this program. Namely, I set out to build a strategy which actually optimized the mean-variance problem that considers transaction costs. This proved to be immensely difficult while keeping the program in quadratic form. More involved methods or a bit of cleverness that escapes me would be necessary to get this strategy working. However, we can still see that the strategy generally works after transaction costs have been added. That being said, my model of transaction costs is very simplistic and more detail here could have large impacts on the algorithm's performance. \nUniverse Selection:\nMy universe selection of ETFs was not actually thought of for any particular reason other than the fact that they seemed safe--for this reason I am willing to say that the performance of the algorithm is on its own merit as opposed to my biased selection of securities for it to trade.\nErratic Trading:\nIf the DataFrame of portfolio weights over time is looked at carefully, it is clear that the algorithm trades rather erratically. That is to say, a marginal increase in one security's expected returns leads to the algorithm shifting nearly the entire portfolio into that one security. This flies in the face of diversification and seems to me like bad practice. This could be rectified by building a penalization for portfolio movement into the optimization problem, but once again this complicates keeping the optimization quadratic. More complex optimization techniques could be employed, or simply adding an ad hoc damping factor may have interesting results. 
\nPredicting the Future: I had initially set out to apply a complex ARIMA or other model in order to determine the next expected returns. As I explored the topic further, I came to realize that the Markowitz method does not really rely on perfect analysis of the next prices; instead, the beauty comes from its emphasis on variance. For this reason I opted not to invest much time in developing this portion of the strategy." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Benedicto/ML-Learning
Clustering_6_hierarchical_clustering_blank.ipynb
gpl-3.0
[ "Hierarchical Clustering\nHierarchical clustering refers to a class of clustering methods that seek to build a hierarchy of clusters, in which some clusters contain others. In this assignment, we will explore a top-down approach, recursively bipartitioning the data using k-means.\nNote to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.\nImport packages\nThe following code block will check if you have the correct version of GraphLab Create. Any version later than 1.8.5 will do. To upgrade, read this page.", "import graphlab\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport sys\nimport os\nimport time\nfrom scipy.sparse import csr_matrix\nfrom sklearn.cluster import KMeans\nfrom sklearn.metrics import pairwise_distances\n%matplotlib inline\n\n'''Check GraphLab Create version'''\nfrom distutils.version import StrictVersion\nassert (StrictVersion(graphlab.version) >= StrictVersion('1.8.5')), 'GraphLab Create must be version 1.8.5 or later.'", "Load the Wikipedia dataset", "wiki = graphlab.SFrame('people_wiki.gl/')", "As we did in previous assignments, let's extract the TF-IDF features:", "wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['text'])", "To run k-means on this dataset, we should convert the data matrix into a sparse matrix.", "from em_utilities import sframe_to_scipy # converter\n\n# This will take about a minute or two.\ntf_idf, map_index_to_word = sframe_to_scipy(wiki, 'tf_idf')", "To be consistent with the k-means assignment, let's normalize all vectors to have unit norm.", "from sklearn.preprocessing import normalize\ntf_idf = normalize(tf_idf)", "Bipartition the Wikipedia dataset using k-means\nRecall our workflow for clustering text data with k-means:\n\nLoad the dataframe containing a dataset, such as the Wikipedia text dataset.\nExtract the data matrix from the dataframe.\nRun k-means on the data matrix with some value of k.\nVisualize the clustering results using the 
centroids, cluster assignments, and the original dataframe. We keep the original dataframe around because the data matrix does not keep auxiliary information (in the case of the text dataset, the title of each article).\n\nLet us modify the workflow to perform bipartitioning:\n\nLoad the dataframe containing a dataset, such as the Wikipedia text dataset.\nExtract the data matrix from the dataframe.\nRun k-means on the data matrix with k=2.\nDivide the data matrix into two parts using the cluster assignments.\nDivide the dataframe into two parts, again using the cluster assignments. This step is necessary to allow for visualization.\nVisualize the bipartition of data.\n\nWe'd like to be able to repeat Steps 3-6 multiple times to produce a hierarchy of clusters such as the following:\n(root)\n |\n +------------+-------------+\n | |\n Cluster Cluster\n +------+-----+ +------+-----+\n | | | |\n Cluster Cluster Cluster Cluster\nEach parent cluster is bipartitioned to produce two child clusters. At the very top is the root cluster, which consists of the entire dataset.\nNow we write a wrapper function to bipartition a given cluster using k-means. There are three variables that together comprise the cluster:\n\ndataframe: a subset of the original dataframe that correspond to member rows of the cluster\nmatrix: same set of rows, stored in sparse matrix format\ncentroid: the centroid of the cluster (not applicable for the root cluster)\n\nRather than passing around the three variables separately, we package them into a Python dictionary. 
The wrapper function takes a single dictionary (representing a parent cluster) and returns two dictionaries (representing the child clusters).", "def bipartition(cluster, maxiter=400, num_runs=4, seed=None):\n '''cluster: should be a dictionary containing the following keys\n * dataframe: original dataframe\n * matrix: same data, in matrix format\n * centroid: centroid for this particular cluster'''\n \n data_matrix = cluster['matrix']\n dataframe = cluster['dataframe']\n \n # Run k-means on the data matrix with k=2. We use scikit-learn here to simplify workflow.\n kmeans_model = KMeans(n_clusters=2, max_iter=maxiter, n_init=num_runs, random_state=seed, n_jobs=-1) \n kmeans_model.fit(data_matrix)\n centroids, cluster_assignment = kmeans_model.cluster_centers_, kmeans_model.labels_\n \n # Divide the data matrix into two parts using the cluster assignments.\n data_matrix_left_child, data_matrix_right_child = data_matrix[cluster_assignment==0], \\\n data_matrix[cluster_assignment==1]\n \n # Divide the dataframe into two parts, again using the cluster assignments.\n cluster_assignment_sa = graphlab.SArray(cluster_assignment) # minor format conversion\n dataframe_left_child, dataframe_right_child = dataframe[cluster_assignment_sa==0], \\\n dataframe[cluster_assignment_sa==1]\n \n \n # Package relevant variables for the child clusters\n cluster_left_child = {'matrix': data_matrix_left_child,\n 'dataframe': dataframe_left_child,\n 'centroid': centroids[0]}\n cluster_right_child = {'matrix': data_matrix_right_child,\n 'dataframe': dataframe_right_child,\n 'centroid': centroids[1]}\n \n return (cluster_left_child, cluster_right_child)", "The following cell performs bipartitioning of the Wikipedia dataset. Allow 20-60 seconds to finish.\nNote. For the purpose of the assignment, we set an explicit seed (seed=1) to produce identical outputs for every run. 
In practical applications, you might want to use different random seeds for all runs.", "wiki_data = {'matrix': tf_idf, 'dataframe': wiki} # no 'centroid' for the root cluster\nleft_child, right_child = bipartition(wiki_data, maxiter=100, num_runs=8, seed=1)", "Let's examine the contents of one of the two clusters, which we call the left_child, referring to the tree visualization above.", "left_child", "And here is the content of the other cluster we named right_child.", "right_child", "Visualize the bipartition\nWe provide you with a modified version of the visualization function from the k-means assignment. For each cluster, we print the top 5 words with highest TF-IDF weights in the centroid and display excerpts for the 8 nearest neighbors of the centroid.", "def display_single_tf_idf_cluster(cluster, map_index_to_word):\n    '''map_index_to_word: SFrame specifying the mapping between words and column indices'''\n    \n    wiki_subset = cluster['dataframe']\n    tf_idf_subset = cluster['matrix']\n    centroid = cluster['centroid']\n    \n    # Print top 5 words with largest TF-IDF weights in the cluster\n    idx = centroid.argsort()[::-1]\n    for i in xrange(5):\n        print('{0:s}:{1:.3f}'.format(map_index_to_word['category'][idx[i]], centroid[idx[i]])),\n    print('')\n    \n    # Compute distances from the centroid to all data points in the cluster.\n    distances = pairwise_distances(tf_idf_subset, [centroid], metric='euclidean').flatten()\n    # Compute nearest neighbors of the centroid within the cluster.\n    nearest_neighbors = distances.argsort()\n    # For 8 nearest neighbors, print the title as well as first 180 characters of text.\n    # Wrap the text at 80-character mark.\n    for i in xrange(8):\n        text = ' '.join(wiki_subset[nearest_neighbors[i]]['text'].split(None, 25)[0:25])\n        print('* {0:50s} {1:.5f}\\n  {2:s}\\n  {3:s}'.format(wiki_subset[nearest_neighbors[i]]['name'],\n              distances[nearest_neighbors[i]], text[:90], text[90:180] if len(text) > 90 else ''))\n    print('')", "Let's visualize the two child 
clusters:", "display_single_tf_idf_cluster(left_child, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(right_child, map_index_to_word)", "The left cluster consists of athletes, whereas the right cluster consists of non-athletes. So far, we have a single-level hierarchy consisting of two clusters, as follows:\nWikipedia\n                            +\n                            |\n    +--------------------------+--------------------+\n    |                                               |\n    +                                               +\n  Athletes                                     Non-athletes\nIs this hierarchy good enough? When building a hierarchy of clusters, we must keep our particular application in mind. For instance, we might want to build a directory for Wikipedia articles. A good directory would let you quickly narrow down your search to a small set of related articles. The categories of athletes and non-athletes are too general to facilitate efficient search. For this reason, we decide to build another level into our hierarchy of clusters with the goal of getting more specific cluster structure at the lower level. To that end, we subdivide both the athletes and non-athletes clusters.\nPerform recursive bipartitioning\nCluster of athletes\nTo help identify the clusters we've built so far, let's give them easy-to-read aliases:", "athletes = left_child\nnon_athletes = right_child", "Using the bipartition function, we produce two child clusters of the athlete cluster:", "# Bipartition the cluster of athletes\nleft_child_athletes, right_child_athletes = bipartition(athletes, maxiter=100, num_runs=8, seed=1)", "The left child cluster mainly consists of baseball players:", "display_single_tf_idf_cluster(left_child_athletes, map_index_to_word)", "On the other hand, the right child cluster is a mix of football players and ice hockey players:", "display_single_tf_idf_cluster(right_child_athletes, map_index_to_word)", "Note. Concerning use of \"football\"\nThe occurrences of the word \"football\" above refer to association football. This sport is also known as \"soccer\" in the United States (to avoid confusion with American football). 
We will use \"football\" throughout when discussing topic representation.\nOur hierarchy of clusters now looks like this:\nWikipedia\n +\n |\n +--------------------------+--------------------+\n | |\n + +\n Athletes Non-athletes\n +\n |\n +-----------+--------+\n | |\n | +\n + football/\n baseball ice hockey\nShould we keep subdividing the clusters? If so, which cluster should we subdivide? To answer this question, we again think about our application. Since we organize our directory by topics, it would be nice to have topics that are about as coarse as each other. For instance, if one cluster is about baseball, we expect some other clusters about football, basketball, volleyball, and so forth. That is, we would like to achieve similar level of granularity for all clusters.\nNotice that the right child cluster is more coarse than the left child cluster. The right cluster posseses a greater variety of topics than the left (ice hockey/football vs. baseball). So the right child cluster should be subdivided further to produce finer child clusters.\nLet's give the clusters aliases as well:", "baseball = left_child_athletes\nice_hockey_football = right_child_athletes", "Cluster of ice hockey players and football players\nIn answering the following quiz question, take a look at the topics represented in the top documents (those closest to the centroid), as well as the list of words with highest TF-IDF weights.\nQuiz Question. Bipartition the cluster of ice hockey and football players. Which of the two child clusters should be futher subdivided?\nNote. To achieve consistent results, use the arguments maxiter=100, num_runs=8, seed=1 when calling the bipartition function.\n\nThe left child cluster\nThe right child cluster", "left, right = bipartition(ice_hockey_football, maxiter=100, num_runs=8, seed=1)\n\ndisplay_single_tf_idf_cluster(left, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(right, map_index_to_word)", "Caution. 
The granularity criterion is an imperfect heuristic and must be taken with a grain of salt. It takes a lot of manual intervention to obtain a good hierarchy of clusters.\n\nIf a cluster is highly mixed, the top articles and words may not convey the full picture of the cluster. Thus, we may be misled if we judge the purity of clusters solely by their top documents and words. \nMany interesting topics are hidden somewhere inside the clusters but do not appear in the visualization. We may need to subdivide further to discover new topics. For instance, subdividing the ice_hockey_football cluster led to the appearance of golf.\n\nQuiz Question. Which diagram best describes the hierarchy right after splitting the ice_hockey_football cluster? Refer to the quiz form for the diagrams.\nCluster of non-athletes\nNow let us subdivide the cluster of non-athletes.", "# Bipartition the cluster of non-athletes\nleft_child_non_athletes, right_child_non_athletes = bipartition(non_athletes, maxiter=100, num_runs=8, seed=1)\n\ndisplay_single_tf_idf_cluster(left_child_non_athletes, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(right_child_non_athletes, map_index_to_word)", "The first cluster consists of scholars, politicians, and government officials, whereas the second consists of musicians, artists, and actors. Run the following code cell to make convenient aliases for the clusters.", "scholars_politicians_etc = left_child_non_athletes\nmusicians_artists_etc = right_child_non_athletes", "Quiz Question. Let us bipartition the clusters scholars_politicians_etc and musicians_artists_etc. Which diagram best describes the resulting hierarchy of clusters for the non-athletes? Refer to the quiz for the diagrams.\nNote. 
Use maxiter=100, num_runs=8, seed=1 for consistency of output.", "s_left, s_right = bipartition(scholars_politicians_etc, maxiter=100, num_runs=8, seed=1)\n\nm_left, m_right = bipartition(musicians_artists_etc, maxiter=100, num_runs=8, seed=1)\n\ndisplay_single_tf_idf_cluster(s_left, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(s_right, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(m_left, map_index_to_word)\n\ndisplay_single_tf_idf_cluster(m_right, map_index_to_word)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amueller/advanced_training
04.1 Pipelines.ipynb
bsd-2-clause
[ "from preamble import *\n%matplotlib inline", "Algorithm Chains and Pipelines", "from sklearn.svm import SVC\nfrom sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import MinMaxScaler\n\n# load and split the data\ncancer = load_breast_cancer()\nX_train, X_test, y_train, y_test = train_test_split(\n cancer.data, cancer.target, random_state=0)\n\n# compute minimum and maximum on the training data\nscaler = MinMaxScaler().fit(X_train)\n# rescale training data\nX_train_scaled = scaler.transform(X_train)\n\nsvm = SVC()\n# learn an SVM on the scaled training data\nsvm.fit(X_train_scaled, y_train)\n# scale test data and score the scaled data\nX_test_scaled = scaler.transform(X_test)\nsvm.score(X_test_scaled, y_test)", "Parameter Selection with Preprocessing", "from sklearn.model_selection import GridSearchCV\n# illustration purposes only, don't use this code\nparam_grid = {'C': [0.001, 0.01, 0.1, 1, 10, 100],\n 'gamma': [0.001, 0.01, 0.1, 1, 10, 100]}\ngrid = GridSearchCV(SVC(), param_grid=param_grid, cv=5)\ngrid.fit(X_train_scaled, y_train)\nprint(\"best cross-validation accuracy:\", grid.best_score_)\nprint(\"test set score: \", grid.score(X_test_scaled, y_test))\nprint(\"best parameters: \", grid.best_params_)\n\nmglearn.plots.plot_improper_processing()", "Building Pipelines", "from sklearn.pipeline import Pipeline\npipe = Pipeline([(\"scaler\", MinMaxScaler()), (\"svm\", SVC())])\n\npipe.fit(X_train, y_train)\n\npipe.score(X_test, y_test)", "Using Pipelines in Grid-searches", "param_grid = {'svm__C': [0.001, 0.01, 0.1, 1, 10, 100],\n 'svm__gamma': [0.001, 0.01, 0.1, 1, 10, 100]}\n\ngrid = GridSearchCV(pipe, param_grid=param_grid, cv=5)\ngrid.fit(X_train, y_train)\nprint(\"best cross-validation accuracy:\", grid.best_score_)\nprint(\"test set score: \", grid.score(X_test, y_test))\nprint(\"best parameters: \", grid.best_params_)\n\nmglearn.plots.plot_proper_processing()\n\nrnd = 
np.random.RandomState(seed=0)\nX = rnd.normal(size=(100, 10000))\ny = rnd.normal(size=(100,))\n\nfrom sklearn.feature_selection import SelectPercentile, f_regression\n\nselect = SelectPercentile(score_func=f_regression, percentile=5).fit(X, y)\nX_selected = select.transform(X)\nprint(X_selected.shape)\n\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.linear_model import Ridge\nnp.mean(cross_val_score(Ridge(), X_selected, y, cv=5))\n\npipe = Pipeline([(\"select\", SelectPercentile(score_func=f_regression, percentile=5)), (\"ridge\", Ridge())])\nnp.mean(cross_val_score(pipe, X, y, cv=5))", "The General Pipeline Interface", "def fit(self, X, y):\n X_transformed = X\n for step in self.steps[:-1]:\n # iterate over all but the final step\n # fit and transform the data\n X_transformed = step[1].fit_transform(X_transformed, y)\n # fit the last step\n self.steps[-1][1].fit(X_transformed, y)\n return self\n\ndef predict(self, X):\n X_transformed = X\n for step in self.steps[:-1]:\n # iterate over all but the final step\n # transform the data\n X_transformed = step[1].transform(X_transformed)\n # fit the last step\n return self.steps[-1][1].predict(X_transformed)\n\n![pipeline_illustration](figures/pipeline.svg)", "Convenient Pipeline creation with make_pipeline", "from sklearn.pipeline import make_pipeline\n# standard syntax\npipe_long = Pipeline([(\"scaler\", MinMaxScaler()), (\"svm\", SVC(C=100))])\n# abbreviated syntax\npipe_short = make_pipeline(MinMaxScaler(), SVC(C=100))\n\npipe_short.steps\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.decomposition import PCA\n\npipe = make_pipeline(StandardScaler(), PCA(n_components=2), StandardScaler())\npipe.steps", "Accessing step attributes", "# fit the pipeline defined above to the cancer dataset\npipe.fit(cancer.data)\n# extract the first two principal components from the \"pca\" step\ncomponents = pipe.named_steps[\"pca\"].components_\nprint(components.shape)", "Accessing attributes in 
grid-searched pipeline.", "from sklearn.linear_model import LogisticRegression\n\npipe = make_pipeline(StandardScaler(), LogisticRegression())\n\nparam_grid = {'logisticregression__C': [0.01, 0.1, 1, 10, 100]}\n\nX_train, X_test, y_train, y_test = train_test_split(\n cancer.data, cancer.target, random_state=4)\ngrid = GridSearchCV(pipe, param_grid, cv=5)\ngrid.fit(X_train, y_train)\n\nprint(grid.best_estimator_)\n\nprint(grid.best_estimator_.named_steps[\"logisticregression\"])\n\nprint(grid.best_estimator_.named_steps[\"logisticregression\"].coef_)", "Grid-searching preprocessing steps and model parameters", "from sklearn.datasets import load_boston\nboston = load_boston()\nX_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, random_state=0)\n\nfrom sklearn.preprocessing import PolynomialFeatures\npipe = make_pipeline(\n StandardScaler(),\n PolynomialFeatures(),\n Ridge())\n\nparam_grid = {'polynomialfeatures__degree': [1, 2, 3],\n 'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}\n\ngrid = GridSearchCV(pipe, param_grid=param_grid, cv=5, n_jobs=-1)\ngrid.fit(X_train, y_train)\n\nplt.matshow(np.array([s.mean_validation_score for s in grid.grid_scores_]).reshape(3, -1),\n vmin=0, cmap=\"viridis\")\nplt.xlabel(\"ridge__alpha\")\nplt.ylabel(\"polynomialfeatures__degree\")\nplt.xticks(range(len(param_grid['ridge__alpha'])), param_grid['ridge__alpha'])\nplt.yticks(range(len(param_grid['polynomialfeatures__degree'])), param_grid['polynomialfeatures__degree'])\n\nplt.colorbar()\n\nprint(grid.best_params_)\n\ngrid.score(X_test, y_test)\n\nparam_grid = {'ridge__alpha': [0.001, 0.01, 0.1, 1, 10, 100]}\npipe = make_pipeline(StandardScaler(), Ridge())\ngrid = GridSearchCV(pipe, param_grid, cv=5)\ngrid.fit(X_train, y_train)\ngrid.score(X_test, y_test)", "Exercise\nRepeat the exercise from the feature extraction chaper (polynomial feature expansion and feature selection), this time using pipelines and grid-search.\nSearch over the best options for the 
polynomial features and over how many features to select, together with the regularization of a linear model." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
metpy/MetPy
v1.0/_downloads/cdca3e0cb8a2930cccab0e29b97ef52a/upperair_soundings.ipynb
bsd-3-clause
[ "%matplotlib inline", "Upper Air Sounding Tutorial\nUpper air analysis is a staple of many synoptic and mesoscale analysis\nproblems. In this tutorial we will gather weather balloon data, plot it,\nperform a series of thermodynamic calculations, and summarize the results.\nTo learn more about the Skew-T diagram and its use in weather analysis and\nforecasting, check out this &lt;http://www.pmarshwx.com/research/manuals/AF_skewt_manual.pdf&gt;_\nAir Weather Service guide.", "import matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1.inset_locator import inset_axes\nimport pandas as pd\n\nimport metpy.calc as mpcalc\nfrom metpy.cbook import get_test_data\nfrom metpy.plots import Hodograph, SkewT\nfrom metpy.units import units", "Getting Data\nUpper air data can be obtained using the siphon package, but for this tutorial we will use\nsome of MetPy's sample data. This event is the Veterans Day tornado outbreak in 2002.", "col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']\n\ndf = pd.read_fwf(get_test_data('nov11_sounding.txt', as_file_obj=False),\n                 skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)\n\n# Drop any rows with all NaN values for T, Td, winds\ndf = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'\n                       ), how='all').reset_index(drop=True)\n\n# We will pull the data out of the example dataset into individual variables and\n# assign units.\n\np = df['pressure'].values * units.hPa\nT = df['temperature'].values * units.degC\nTd = df['dewpoint'].values * units.degC\nwind_speed = df['speed'].values * units.knots\nwind_dir = df['direction'].values * units.degrees\nu, v = mpcalc.wind_components(wind_speed, wind_dir)", "Thermodynamic Calculations\nOften we will want to calculate some thermodynamic parameters of a\nsounding. 
The MetPy calc module has many such calculations already implemented!\n\nLifting Condensation Level (LCL) - The level at which an air parcel's\n  relative humidity becomes 100% when lifted along a dry adiabatic path.\nParcel Path - Path followed by a hypothetical parcel of air, beginning\n  at the surface temperature/pressure and rising dry adiabatically until\n  reaching the LCL, then rising moist adiabatically.", "# Calculate the LCL\nlcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])\n\nprint(lcl_pressure, lcl_temperature)\n\n# Calculate the parcel profile.\nparcel_prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')", "Basic Skew-T Plotting\nThe Skew-T (log-P) diagram is the standard way to view rawinsonde data. The\ny-axis is height in pressure coordinates and the x-axis is temperature. The\ny coordinates are plotted on a logarithmic scale and the x coordinate system\nis skewed. An explanation of skew-T interpretation is beyond the scope of this\ntutorial, but here we will plot one that can be used for analysis or\npublication.\nThe most basic skew-T can be plotted with only five lines of Python.\nThese lines perform the following tasks:\n\n\nCreate a Figure object and set the size of the figure.\n\n\nCreate a SkewT object.\n\n\nPlot the pressure and temperature (note that the pressure,\n  the independent variable, is first even though it is plotted on the y-axis).\n\n\nPlot the pressure and dewpoint temperature.\n\n\nPlot the wind barbs at the appropriate pressure using the u and v wind\n  components.", "# Create a new figure. 
The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r', linewidth=2)\nskew.plot(p, Td, 'g', linewidth=2)\nskew.plot_barbs(p, u, v)\n\n# Show the plot\nplt.show()", "Advanced Skew-T Plotting\nFiducial lines indicating dry adiabats, moist adiabats, and mixing ratio are\nuseful when performing further analysis on the Skew-T diagram. Often the\n0C isotherm is emphasized and areas of CAPE and CIN are shaded.", "# Create a new figure. The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig, rotation=30)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Plot LCL temperature as black dot\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\n\n# Plot the parcel profile as a black line\nskew.plot(p, parcel_prof, 'k', linewidth=2)\n\n# Shade areas of CAPE and CIN\nskew.shade_cin(p, T, parcel_prof, Td)\nskew.shade_cape(p, T, parcel_prof)\n\n# Plot a zero degree isotherm\nskew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Show the plot\nplt.show()", "Adding a Hodograph\nA hodograph is a polar representation of the wind profile measured by the rawinsonde.\nWinds at different levels are plotted as vectors with their tails at the origin, the angle\nfrom the vertical axes representing the direction, and the length representing the speed.\nThe line plotted on the hodograph is a line connecting the tips of these vectors,\nwhich are not drawn.", "# Create a new figure. 
The dimensions here give a good aspect ratio\nfig = plt.figure(figsize=(9, 9))\nskew = SkewT(fig, rotation=30)\n\n# Plot the data using normal plotting functions, in this case using\n# log scaling in Y, as dictated by the typical meteorological plot\nskew.plot(p, T, 'r')\nskew.plot(p, Td, 'g')\nskew.plot_barbs(p, u, v)\nskew.ax.set_ylim(1000, 100)\nskew.ax.set_xlim(-40, 60)\n\n# Plot LCL as black dot\nskew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')\n\n# Plot the parcel profile as a black line\nskew.plot(p, parcel_prof, 'k', linewidth=2)\n\n# Shade areas of CAPE and CIN\nskew.shade_cin(p, T, parcel_prof, Td)\nskew.shade_cape(p, T, parcel_prof)\n\n# Plot a zero degree isotherm\nskew.ax.axvline(0, color='c', linestyle='--', linewidth=2)\n\n# Add the relevant special lines\nskew.plot_dry_adiabats()\nskew.plot_moist_adiabats()\nskew.plot_mixing_lines()\n\n# Create a hodograph\n# Create an inset axes object that is 40% width and height of the\n# figure and put it in the upper right hand corner.\nax_hod = inset_axes(skew.ax, '40%', '40%', loc=1)\nh = Hodograph(ax_hod, component_range=80.)\nh.add_grid(increment=20)\nh.plot_colormapped(u, v, wind_speed) # Plot a line colored by wind speed\n\n# Show the plot\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
drivendata/data-science-is-software
notebooks/lectures/2.0-environment.ipynb
mit
[ "<table style=\"width:100%; border: 0px solid black;\">\n <tr style=\"width: 100%; border: 0px solid black;\">\n <td style=\"width:75%; border: 0px solid black;\">\n <a href=\"http://www.drivendata.org\">\n <img src=\"https://s3.amazonaws.com/drivendata.org/kif-example/img/dd.png\" />\n </a>\n </td>\n </tr>\n</table>\n\nData Science is Software\nDeveloper #lifehacks for the Jupyter Data Scientist\nSection 2: This is my house\nEnvironment reproducibility for Python", "from __future__ import print_function\n\nimport os\nimport sys\n\nPROJ_ROOT = os.path.join(os.pardir, os.pardir)\n\n# add local python functions\nsys.path.append(os.path.join(PROJ_ROOT, \"src\"))", "2.1 The watermark extension\nTell everyone when your notebook was run, and with which packages. This is especially useful for nbview, blog posts, and other media where you are not sharing the notebook as executable code.", "# install the watermark extension\n!pip install watermark\n\n# once it is installed, you'll just need this in future notebooks:\n%load_ext watermark\n\n%watermark -a \"Peter Bull\" -d -t -v -p numpy,pandas -g", "2.2 Laying the foundation\nContinuum's conda tool provides a way to create isolated environments. In fact, you've already seen this at work if you followed the pydata setup instructions to setup your machine for this tutorial. The conda env functionality let's you created an isolated environment on your machine for \n\nStart from \"scratch\" on each project\nChoose Python 2 or 3 as appropriate\n\nTo create an empty environment:\n\nconda create -n &lt;name&gt; python=3\n\nNote: python=2 will create a Python 2 environment; python=3 will create a Python 3 environment.\nTo work in a particular virtual environment:\n\nsource activate &lt;name&gt;\n\nTo leave a virtual environment:\n\nsource deactivate\n\nNote: on Windows, the commands are just activate and deactivate, no need to type source.\nThere are other Python tools for environment isolation, but none of them are perfect. 
If you're interested in the other options, virtualenv and pyenv both provide environment isolation. There are sometimes compatibility issues between the Anaconda Python distribution and these packages, so if you've got Anaconda on your machine you can use conda env to create and manage environments.\n\n#lifehack: create a new environment for every project you work on\n#lifehack: if you use anaconda to manage packages using mkvirtualenv --system-site-packages &lt;name&gt; means you don't have to recompile large packages\n\n2.3 The pip requirements.txt file\nIt's a convention in the Python ecosystem to track a project's dependencies in a file called requirements.txt. We recommend using this file to keep track of your MRE, \"Minimum reproducible environment\".\nConda\n\n#lifehack: never again run pip install &lt;package&gt;. Instead, update requirements.txt and run pip install -r requirements.txt\n#lifehack: for data science projects, favor package&gt;=0.0.0 rather than package==0.0.0. This works well with the --system-site-packages flag so you don't have many versions of large packages with complex dependencies sitting around (e.g., numpy, scipy, pandas)", "# what does requirements.txt look like?\nprint(open(os.path.join(PROJ_ROOT, 'requirements.txt')).read())", "The format for a line in the requirements file is:\n| Syntax | Result |\n | --- | --- |\n | package_name | for whatever the latest version on PyPI is |\n | package_name==X.X.X | for an exact match of version X.X.X |\n | package_name&gt;=X.X.X | for at least version X.X.X |\nNow, contributors can create a new virtual environment (using conda or any other tool) and install your dependencies just by running:\n\npip install -r requirements.txt\n\n2.4 Separation of configuration from codebase\nThere are some things you don't want to be openly reproducible: your private database url, your AWS credentials for downloading the data, your SSN, which you decided to use as a hash. 
These shouldn't live in source control, but may be essential for collaborators or others reproducing your work.\nThis is a situation where we can learn from some software engineering best practices. The 12-factor app principles give a set of best practices for building web applications. Many of these principles are relevant best practices for data-science codebases as well.\nUsing a dependency manifest like requirements.txt satisfies II. Explicitly declare and isolate dependencies. The important principle here is III. Store config in the environment:\n\nAn app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). Apps sometimes store config as constants in the code. This is a violation of twelve-factor, which requires strict separation of config from code. Config varies substantially across deploys, code does not. A litmus test for whether an app has all config correctly factored out of the code is whether the codebase could be made open source at any moment, without compromising any credentials.\n\nThe dotenv package allows you to easily store these variables in a file that is not in source control (as long as you keep the line .env in your .gitignore file!). You can then reference these variables as environment variables in your application with os.environ.get('VARIABLE_NAME').", "print(open(os.path.join(PROJ_ROOT, '.env')).read())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PyDataMadrid2016/Conference-Info
workshops_materials/20160408_1100_Pandas_for_beginners/tutorial/EN - Tutorial 02 - IO.ipynb
mit
[ "In pandas we have several possibilities to read data and several possibilities to write data.\nLet's read some wind data\nIn the Datos folder you can find a file mast.txt with the following format:\n130904 0000 2.21 2.58 113.5 999.99 999.99 99.99 9999.99 9999.99 0.11\n130904 0010 1.69 2.31 99.9 999.99 999.99 99.99 9999.99 9999.99 0.35\n130904 0020 1.28 1.50 96.0 999.99 999.99 99.99 9999.99 9999.99 0.08\n130904 0030 1.94 2.39 99.2 999.99 999.99 99.99 9999.99 9999.99 0.26\n130904 0040 2.17 2.67 108.4 999.99 999.99 99.99 9999.99 9999.99 0.23\n130904 0050 2.25 2.89 105.0 999.99 999.99 99.99 9999.99 9999.99 0.35\n...\n\nWe can read in the following manner:", "# First, imports\nimport os\nimport datetime as dt\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom IPython.display import display\n\nnp.random.seed(19760812)\n%matplotlib inline\n\nipath = os.path.join('Datos', 'mast.txt')\nwind = pd.read_csv(ipath)\nwind.head(3)\n\nwind = pd.read_csv(ipath, sep = \"\\s*\")\n# When we work with text separated by whitespaces we can use the keyword delim_whitespace:\n# wind = pd.read_csv(path, delim_whitespace = True)\nwind.head(3)\n\ncols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',\n 'x1', 'x2', 'x3', 'x4', 'x5', \n 'wspd_std']\nwind = pd.read_csv(ipath, sep = \"\\s*\", names = cols)\nwind.head(3)\n\ncols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',\n 'x1', 'x2', 'x3', 'x4', 'x5', \n 'wspd_std']\nwind = pd.read_csv(ipath, sep = \"\\s*\", names = cols,\n parse_dates = [[0, 1]])\nwind.head(3)", "<div class=\"alert alert-danger\">\n<p>Depending of your operative system dates can be right or not. Don't worry now about this. 
Later we will work on this.</p>\n</div>", "cols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',\n 'x1', 'x2', 'x3', 'x4', 'x5', \n 'wspd_std']\nwind = pd.read_csv(ipath, sep = \"\\s*\", names = cols,\n parse_dates = [[0, 1]], index_col = 0)\nwind.head(3)\n\ncols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',\n 'x1', 'x2', 'x3', 'x4', 'x5', \n 'wspd_std']\nwind = pd.read_csv(ipath, sep = \"\\s*\", names = cols,\n parse_dates = {'timestamp': [0, 1]}, index_col = 0)\nwind.head(3)\n\n# The previous code is equivalent to\ncols = ['Date', 'time', 'wspd', 'wspd_max', 'wdir',\n 'x1', 'x2', 'x3', 'x4', 'x5', \n 'wspd_std']\nwind = pd.read_csv(ipath, sep = \"\\s*\", names = cols,\n parse_dates = [[0, 1]], index_col = 0)\nwind.index.name = 'Timestamp'\nwind.head(3)\n\n# in the previous cell you can replace the 0's and 1's in\n# parse_dates and index_col with the names of the columns\n# test it!!!\n\n\nhelp(pd.read_csv)", "With very few lines of code we have read a whitespace-separated text file, transformed two columns into dates, and made those dates the index (we can only have one record per timestamp).\nWarning!! Repeated indexes\n<br>\n<div class=\"alert alert-danger\">\n<h3>Note:</h3>\n<p>Nothing prevents you from having repeated indexes. Take care, as it can be a source of errors.</p>\n</div>", "tmp = pd.DataFrame([1,10,100, 1000], index = [1,1,2,2], columns = ['values'])\n\ntmp\n\nprint(tmp['values'][1], tmp['values'][2], sep = '\\n')", "Warning!! 
when you convert to dates from strings\n<br>\n<div class=\"alert alert-danger\">\n<h3>Note:</h3>\n<p>If you let pandas to parse the dates take care and write tests as it is easy to find errors in the <b>automagic</b> conversion.</p>\n</div>", "# An example with error in dates:\n\nindex = [\n '01/01/2015 00:00',\n '02/01/2015 00:00',\n '03/01/2015 00:00',\n '04/01/2015 00:00',\n '05/01/2015 00:00',\n '06/01/2015 00:00',\n '07/01/2015 00:00',\n '08/01/2015 00:00',\n '09/01/2015 00:00',\n '10/01/2015 00:00',\n '11/01/2015 00:00',\n '12/01/2015 00:00',\n '13/01/2015 00:00',\n '14/01/2015 00:00',\n '15/01/2015 00:00'\n]\nvalues = np.random.randn(len(index))\n\ntmp = pd.DataFrame(values, index = pd.to_datetime(index), columns = ['col1'])\n\ndisplay(tmp)\ntmp.plot.line(figsize = (12, 6))", "To avoid the previous error we can write our own date parser on, for instance, pd.read_csv:", "import datetime as dt\nimport io\n\ndef dateparser(date):\n date, time = date.split()\n DD, MM, YY = date.split('/')\n hh, mm = time.split(':')\n return dt.datetime(int(YY), int(MM), int(DD), int(hh), int(mm))\n\nvirtual_file = io.StringIO(\"\"\"01/01/2015 00:00, 1\n02/01/2015 00:00, 2\n03/01/2015 00:00, 3\n04/01/2015 00:00, 4\n05/01/2015 00:00, 5\n06/01/2015 00:00, 6\n07/01/2015 00:00, 7\n08/01/2015 00:00, 8\n09/01/2015 00:00, 9\n10/01/2015 00:00, 10\n11/01/2015 00:00, 11\n12/01/2015 00:00, 12\n13/01/2015 00:00, 13\n14/01/2015 00:00, 14\n15/01/2015 00:00, 15\n\"\"\")\n\ntmp_wrong = pd.read_csv(virtual_file, parse_dates = [0], index_col = 0, names = ['Date', 'values'])\n\nvirtual_file = io.StringIO(\"\"\"01/01/2015 00:00, 1\n02/01/2015 00:00, 2\n03/01/2015 00:00, 3\n04/01/2015 00:00, 4\n05/01/2015 00:00, 5\n06/01/2015 00:00, 6\n07/01/2015 00:00, 7\n08/01/2015 00:00, 8\n09/01/2015 00:00, 9\n10/01/2015 00:00, 10\n11/01/2015 00:00, 11\n12/01/2015 00:00, 12\n13/01/2015 00:00, 13\n14/01/2015 00:00, 14\n15/01/2015 00:00, 15\n\"\"\")\n\ntmp_right = pd.read_csv(virtual_file, parse_dates = True, 
index_col = 0, names = ['Date', 'values'],\n date_parser = dateparser)\n\ndisplay(tmp_wrong)\ndisplay(tmp_right)", "Let's save the result in csv format", "opath = os.path.join('Datos', 'mast_2.csv')\n#wind.to_csv(opath)\nwind.iloc[0:100].to_csv(opath)", "... or in json format", "#wind.to_json(opath.replace('csv', 'json'))\nwind.iloc[0:100].to_json(opath.replace('csv', 'json'))", "... or to an HTML table", "# If you have a lot of data I don't recommend this, it is slow\n#wind.to_html(opath.replace('csv', 'html'))\nwind.iloc[0:100].to_html(opath.replace('csv', 'html'))", "... or to an xlsx format\nHere you should have xlsxwriter, openpyxl, xlrd/xlwt,..., installed.", "writer = pd.ExcelWriter(opath.replace('csv', 'xlsx'))\n#wind.to_excel(writer, sheet_name= \"Mi hoja 1\")\nwind.iloc[0:100].to_excel(writer, sheet_name= \"Mi hoja 1\")\nwriter.save()\n\n# Now that we have files with json, html, xlsx,..., formats you can practice what we have learned by opening them\n# using the pd.read_* functions\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cmorgan/toyplot
docs/labels-and-legends.ipynb
bsd-3-clause
[ ".. _labels-and-legends:\nLabels and Legends\nOf course, most figures require proper labels before they can be of value, so Toyplot provides several labelling mechanisms to help:\nAxes Labels\nFirst, both :ref:cartesian-axes and :ref:table-axes provide labelling parameters that can be specified when they are created. In either case the label parameter provides a top-level label for the set of axes:", "import numpy\nimport toyplot\n\ncanvas = toyplot.Canvas(width=600, height=300)\ncanvas.axes(grid=(1,2,0), label=\"Cartesian Axes\").plot(numpy.linspace(0, 1)**2)\ncanvas.table(grid=(1,2,1), label=\"Table Axes\", data = numpy.random.random((4, 3)));", "Naturally, some axes - such as Cartesian axes - allow you to specify additional, axis-specific labels:", "canvas = toyplot.Canvas(width=300, height=300)\naxes = canvas.axes(label=\"Cartesian Axes\", xlabel=\"Days\", ylabel=\"Users\")\naxes.plot(numpy.linspace(0, 1)**2);", "Axes Text\nAnother option for labelling a plot is to insert text labels using the same domain as the data:", "def series(x):\n return numpy.cumsum(numpy.random.normal(loc=0.05, size=len(x)))\n\nnumpy.random.seed(1234)\nx = numpy.arange(100)\ny = numpy.column_stack([series(x) for i in range(5)])\n\nlabel_style = {\"text-anchor\":\"start\", \"-toyplot-anchor-shift\":\"5px\"}\ncanvas, axes, mark = toyplot.plot(x, y)\nfor i in range(y.shape[1]):\n axes.text(x[-1], y[-1,i], \"Series %s\" % i, style=label_style)", "Note that we are using the last coordinate in each series as the text label coordinate - by default, Toyplot renders text centered on its coordinate, so in this case we've chosen a text style that left-aligns the text and offsets it slightly for clarity.\nCanvas Text\nWhen adding text to axes, you specify the text coordinates using the same domain as your data. Naturally, this limits the added text to the bounds defined by the axes. 
For the ultimate in labeling flexibility, you can add text to the canvas directly, using canvas units, outside and/or overlapping axes:", "label_style={\"font-size\":\"18px\", \"font-weight\":\"bold\"}\n\ncanvas = toyplot.Canvas(width=600, height=300)\ncanvas.axes(grid=(1,2,0)).plot(numpy.linspace(1, 0)**2)\ncanvas.axes(grid=(1,2,1), yshow=False).plot(numpy.linspace(0, 1)**2)\ncanvas.text(300, 120, \"This label overlaps two sets of axes!\", style=label_style);", "Please keep in mind when placing labels in canvas coordinates that, unlike Cartesian coordinates, canvas coordinates increase from top-to-bottom.\nCanvas Legends\nLast-but-not-least, Toyplot provides (currently experimental) support for graphical legends:", "observations = numpy.random.power(2, size=(50, 50))\n\nx = numpy.arange(len(observations))\n\nboundaries = numpy.column_stack(\n (numpy.min(observations, axis=1),\n numpy.percentile(observations, 25, axis=1),\n numpy.percentile(observations, 50, axis=1),\n numpy.percentile(observations, 75, axis=1),\n numpy.max(observations, axis=1)))\n\nfill = [\"blue\", \"blue\", \"red\", \"red\"]\nopacity = [0.1, 0.2, 0.2, 0.1]\n\ncanvas = toyplot.Canvas(800, 400)\naxes = canvas.axes(grid=(1,5,0,1,0,4))\nfill = axes.fill(x, boundaries, fill=fill, opacity=opacity)\nmean = axes.plot(x, numpy.mean(observations, axis=1), color=\"blue\")\n\ncanvas.legend([\n (\"Mean\", mean),\n (\"First Quartile\", \"rect\", {\"fill\":\"blue\", \"opacity\":0.1}),\n (\"Second Quartile\", \"rect\", {\"fill\":\"blue\", \"opacity\":0.2}),\n (\"Third Quartile\", \"rect\", {\"fill\":\"red\", \"opacity\":0.2}),\n (\"Fourth Quartile\", \"rect\", {\"fill\":\"red\", \"opacity\":0.1}),\n ],\n corner=(\"right\", 100, 100, 125),\n );\n", "The call to :func:toyplot.canvas.Canvas.legend always includes an explicit list of items to add to the legend, plus a :ref:canvas-layout specification of where the layout should appear on the canvas. 
Currently, each item to be displayed should be either:\n\nA (label, mark) tuple, which will get its appearance from the mark, or:\nA (label, marker, style) tuple, which will render the given marker with the given style.\n\nOf course, label is the text to be displayed next to an item in the legend, while mark is a mark that has been added to the canvas. However, not all marks can provide an unambiguous legend representation - what to do when a mark represents multiple series, or has per-datum colors, markers, or styles? In these cases there isn't a one-to-one correspondence between marks and legend items, leading to the second form of legend item with explicit marker and style parameters. The marker parameter currently can be any of the following:\n\n\"line\" - draw a line in the same style that would be used for a line plot.\n\"rect\" - draw a filled rect in the same style that would be used for a fill plot.\nmarker - draw a marker shape using any of the :ref:markers that are available for line and scatter plots.\n\nAs is the case throughout Toyplot, the style parameter uses CSS styles to control the appearance of the item.\nThere are some subtleties here worth noting, many of which are driven by Toyplot's deliberate embrace of the philosophy that explicit is better than implicit:\n\nYou can have as many or as few legends on your canvas as you like.\nCallers explicitly specify the order and contents of each legend.\nThere is no relationship between axes and legends - you could combine marks from multiple axes in a single legend, for example.\n\nHere's an example of all these ideas at work:", "x = numpy.linspace(0, 1)\ny1 = (1 - x) ** 2\ny2 = numpy.column_stack((1 - (x ** 2), x ** 2))\n\ncanvas = toyplot.Canvas(width=600, height=300)\nm1 = canvas.axes(grid=(1,2,0), gutter=15).scatterplot(x, y1, marker=\"o\", color=\"rgb(255,0,0)\")\nm2 = canvas.axes(grid=(1,2,1), gutter=15, yshow=False).scatterplot(x, y2, marker=\"s\", color=[\"green\", 
\"blue\"])\n\ncanvas.legend([\n (\"Experiment 1\", \"o\", {\"fill\":\"rgb(255,0,0)\", \"stroke\": \"none\"}),\n (\"Experiment 2\", \"s\", {\"fill\":\"green\", \"stroke\": \"none\"}),\n (\"Experiment 3\", \"s\", {\"fill\":\"blue\", \"stroke\": \"none\"}),\n\n ],\n corner=(\"top\", 100, 100, 70),\n );\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/en-snapshot/lite/models/modify/model_maker/object_detection.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Object Detection with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/modify/model_maker/object_detection\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/modify/model_maker/object_detection.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker library to train a custom object detection model capable of detecting salads within images on a mobile device.\nThe Model Maker library uses transfer learning to simplify 
the process of training a TensorFlow Lite model using a custom dataset. Retraining a TensorFlow Lite model with your own custom dataset reduces the amount of training data required and will shorten the training time.\nYou'll use the publicly available Salads dataset, which was created from the Open Images Dataset V4.\nEach image in the dataset contains objects labeled as one of the following classes:\n* Baked Good\n* Cheese\n* Salad\n* Seafood\n* Tomato\nThe dataset contains bounding boxes specifying where each object is located, together with the object's label.\nHere is an example image from the dataset:\n<br/>\n<img src=\"https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png\" width=\"400\" hspace=\"0\">\nPrerequisites\nInstall the required packages\nStart by installing the required packages, including the Model Maker package from the GitHub repo and the pycocotools library you'll use for evaluation.", "!sudo apt -y install libportaudio2\n!pip install -q --use-deprecated=legacy-resolver tflite-model-maker\n!pip install -q pycocotools\n!pip install -q opencv-python-headless==4.1.2.30\n!pip uninstall -y tensorflow && pip install -q tensorflow==2.8.0", "Import the required packages.", "import numpy as np\nimport os\n\nfrom tflite_model_maker.config import QuantizationConfig\nfrom tflite_model_maker.config import ExportFormat\nfrom tflite_model_maker import model_spec\nfrom tflite_model_maker import object_detector\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\n\ntf.get_logger().setLevel('ERROR')\nfrom absl import logging\nlogging.set_verbosity(logging.ERROR)", "Prepare the dataset\nHere you'll use the same dataset as the AutoML quickstart.\nThe Salads dataset is available at:\n gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv.\nIt contains 175 images for training, 25 images for validation, and 25 images for testing. 
The dataset has five classes: Salad, Seafood, Tomato, Baked goods, Cheese.\n<br/>\nThe dataset is provided in CSV format:\nTRAINING,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Salad,0.0,0.0954,,,0.977,0.957,,\nVALIDATION,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Seafood,0.0154,0.1538,,,1.0,0.802,,\nTEST,gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg,Tomato,0.0,0.655,,,0.231,0.839,,\n\nEach row corresponds to an object localized inside a larger image, with each object specifically designated as test, train, or validation data. You'll learn more about what that means in a later stage in this notebook.\nThe three lines included here indicate three distinct objects located inside the same image available at gs://cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg.\nEach row has a different label: Salad, Seafood, Tomato, etc.\nBounding boxes are specified for each image using the top left and bottom right vertices.\n\nHere is a visualization of these three lines:\n<br>\n<img src=\"https://cloud.google.com/vision/automl/object-detection/docs/images/quickstart-preparing_a_dataset.png\" width=\"400\" hspace=\"100\">\nIf you want to know more about how to prepare your own CSV file and the minimum requirements for creating a valid dataset, see the Preparing your training data guide for more details.\nIf you are new to Google Cloud, you may wonder what gs:// URLs mean. They are URLs of files stored on Google Cloud Storage (GCS). If you make your files on GCS public or authenticate your client, Model Maker can read those files similarly to your local files.\nHowever, you don't need to keep your images on Google Cloud to use Model Maker. You can use a local path in your CSV file and Model Maker will just work.\nQuickstart\nThere are six steps to training an object detection model:\nStep 1. Choose an object detection model architecture.\nThis tutorial uses the EfficientDet-Lite0 model. 
EfficientDet-Lite[0-4] are a family of mobile/IoT-friendly object detection models derived from the EfficientDet architecture.\nHere is the performance of each EfficientDet-Lite model compared to the others.\n| Model architecture | Size(MB)* | Latency(ms)** | Average Precision*** |\n|--------------------|-----------|---------------|----------------------|\n| EfficientDet-Lite0 | 4.4 | 37 | 25.69% |\n| EfficientDet-Lite1 | 5.8 | 49 | 30.55% |\n| EfficientDet-Lite2 | 7.2 | 69 | 33.97% |\n| EfficientDet-Lite3 | 11.4 | 116 | 37.70% |\n| EfficientDet-Lite4 | 19.9 | 260 | 41.96% |\n<i>* Size of the integer quantized models. <br/>\n** Latency measured on Pixel 4 using 4 threads on CPU. <br/>\n*** Average Precision is the mAP (mean Average Precision) on the COCO 2017 validation dataset.\n</i>", "spec = model_spec.get('efficientdet_lite0')", "Step 2. Load the dataset.\nModel Maker will take input data in the CSV format. Use the object_detector.DataLoader.from_csv method to load the dataset and split it into the training, validation and test images.\n\nTraining images: These images are used to train the object detection model to recognize salad ingredients.\nValidation images: These are images that the model didn't see during the training process. You'll use them to decide when you should stop the training, to avoid overfitting.\nTest images: These images are used to evaluate the final model performance.\n\nYou can load the CSV file directly from Google Cloud Storage, but you don't need to keep your images on Google Cloud to use Model Maker. You can specify a local CSV file on your computer, and Model Maker will work just fine.", "train_data, validation_data, test_data = object_detector.DataLoader.from_csv('gs://cloud-ml-data/img/openimage/csv/salads_ml_use.csv')", "Step 3. Train the TensorFlow model with the training data.\n\nThe EfficientDet-Lite0 model uses epochs = 50 by default, which means it will go through the training dataset 50 times. 
You can look at the validation accuracy during training and stop early to avoid overfitting.\nSet batch_size = 8 here so you will see that it takes 21 steps to go through the 175 images in the training dataset.\nSet train_whole_model=True to fine-tune the whole model instead of just training the head layer to improve accuracy. The trade-off is that it may take longer to train the model.", "model = object_detector.create(train_data, model_spec=spec, batch_size=8, train_whole_model=True, validation_data=validation_data)", "Step 4. Evaluate the model with the test data.\nAfter training the object detection model using the images in the training dataset, use the remaining 25 images in the test dataset to evaluate how the model performs against new data it has never seen before.\nAs the default batch size is 64, it will take 1 step to go through the 25 images in the test dataset.\nThe evaluation metrics are the same as COCO's.", "model.evaluate(test_data)", "Step 5. Export as a TensorFlow Lite model.\nExport the trained object detection model to the TensorFlow Lite format by specifying which folder you want to export the quantized model to. The default post-training quantization technique is full integer quantization.", "model.export(export_dir='.')", "Step 6. Evaluate the TensorFlow Lite model.\nSeveral factors can affect the model accuracy when exporting to TFLite:\n* Quantization helps shrink the model size by 4 times at the expense of some accuracy drop.\n* The original TensorFlow model uses per-class non-max suppression (NMS) for post-processing, while the TFLite model uses global NMS that's much faster but less accurate.\nThe Keras model outputs a maximum of 100 detections, while the TFLite model outputs a maximum of 25.\nTherefore you'll have to evaluate the exported TFLite model and compare its accuracy with the original TensorFlow model.", "model.evaluate_tflite('model.tflite', test_data)", "You can download the TensorFlow Lite model file using the left sidebar of Colab. 
Right-click on the model.tflite file and choose Download to download it to your local computer.\nThis model can be integrated into an Android or an iOS app using the ObjectDetector API of the TensorFlow Lite Task Library.\nSee the TFLite Object Detection sample app for more details on how the model is used in a working app.\nNote: Android Studio Model Binding does not support object detection yet, so please use the TensorFlow Lite Task Library.\n(Optional) Test the TFLite model on your image\nYou can test the trained TFLite model using images from the internet.\n* Replace the INPUT_IMAGE_URL below with your desired input image.\n* Adjust the DETECTION_THRESHOLD to change the sensitivity of the model. A lower threshold means the model will pick up more objects but there will also be more false detections. Meanwhile, a higher threshold means the model will only pick up objects that it has confidently detected.\nAlthough it requires some boilerplate code to run the model in Python at this moment, integrating the model into a mobile app only requires a few lines of code.", "#@title Load the trained TFLite model and define some visualization functions\n\nimport cv2\n\nfrom PIL import Image\n\nmodel_path = 'model.tflite'\n\n# Load the labels into a list\nclasses = ['???'] * model.model_spec.config.num_classes\nlabel_map = model.model_spec.config.label_map\nfor label_id, label_name in label_map.as_dict().items():\n classes[label_id-1] = label_name\n\n# Define a list of colors for visualization\nCOLORS = np.random.randint(0, 255, size=(len(classes), 3), dtype=np.uint8)\n\ndef preprocess_image(image_path, input_size):\n \"\"\"Preprocess the input image to feed to the TFLite model\"\"\"\n img = tf.io.read_file(image_path)\n img = tf.io.decode_image(img, channels=3)\n img = tf.image.convert_image_dtype(img, tf.uint8)\n original_image = img\n resized_img = tf.image.resize(img, input_size)\n resized_img = resized_img[tf.newaxis, :]\n resized_img = tf.cast(resized_img, 
dtype=tf.uint8)\n return resized_img, original_image\n\n\ndef detect_objects(interpreter, image, threshold):\n \"\"\"Returns a list of detection results, each a dictionary of object info.\"\"\"\n\n signature_fn = interpreter.get_signature_runner()\n\n # Feed the input image to the model\n output = signature_fn(images=image)\n\n # Get all outputs from the model\n count = int(np.squeeze(output['output_0']))\n scores = np.squeeze(output['output_1'])\n classes = np.squeeze(output['output_2'])\n boxes = np.squeeze(output['output_3'])\n\n results = []\n for i in range(count):\n if scores[i] >= threshold:\n result = {\n 'bounding_box': boxes[i],\n 'class_id': classes[i],\n 'score': scores[i]\n }\n results.append(result)\n return results\n\n\ndef run_odt_and_draw_results(image_path, interpreter, threshold=0.5):\n \"\"\"Run object detection on the input image and draw the detection results\"\"\"\n # Load the input shape required by the model\n _, input_height, input_width, _ = interpreter.get_input_details()[0]['shape']\n\n # Load the input image and preprocess it\n preprocessed_image, original_image = preprocess_image(\n image_path,\n (input_height, input_width)\n )\n\n # Run object detection on the input image\n results = detect_objects(interpreter, preprocessed_image, threshold=threshold)\n\n # Plot the detection results on the input image\n original_image_np = original_image.numpy().astype(np.uint8)\n for obj in results:\n # Convert the object bounding box from relative coordinates to absolute\n # coordinates based on the original image resolution\n ymin, xmin, ymax, xmax = obj['bounding_box']\n xmin = int(xmin * original_image_np.shape[1])\n xmax = int(xmax * original_image_np.shape[1])\n ymin = int(ymin * original_image_np.shape[0])\n ymax = int(ymax * original_image_np.shape[0])\n\n # Find the class index of the current object\n class_id = int(obj['class_id'])\n\n # Draw the bounding box and label on the image\n color = [int(c) for c in COLORS[class_id]]\n 
cv2.rectangle(original_image_np, (xmin, ymin), (xmax, ymax), color, 2)\n # Make adjustments to make the label visible for all objects\n y = ymin - 15 if ymin - 15 > 15 else ymin + 15\n label = \"{}: {:.0f}%\".format(classes[class_id], obj['score'] * 100)\n cv2.putText(original_image_np, label, (xmin, y),\n cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)\n\n # Return the final image\n original_uint8 = original_image_np.astype(np.uint8)\n return original_uint8\n\n#@title Run object detection and show the detection results\n\nINPUT_IMAGE_URL = \"https://storage.googleapis.com/cloud-ml-data/img/openimage/3/2520/3916261642_0a504acd60_o.jpg\" #@param {type:\"string\"}\nDETECTION_THRESHOLD = 0.3 #@param {type:\"number\"}\n\nTEMP_FILE = '/tmp/image.png'\n\n!wget -q -O $TEMP_FILE $INPUT_IMAGE_URL\nim = Image.open(TEMP_FILE)\nim.thumbnail((512, 512), Image.ANTIALIAS)\nim.save(TEMP_FILE, 'PNG')\n\n# Load the TFLite model\ninterpreter = tf.lite.Interpreter(model_path=model_path)\ninterpreter.allocate_tensors()\n\n# Run inference and draw detection result on the local copy of the original file\ndetection_result_image = run_odt_and_draw_results(\n TEMP_FILE,\n interpreter,\n threshold=DETECTION_THRESHOLD\n)\n\n# Show the detection result\nImage.fromarray(detection_result_image)", "(Optional) Compile For the Edge TPU\nNow that you have a quantized EfficientDet Lite model, it is possible to compile it and deploy it to a Coral EdgeTPU.\nStep 1. Install the EdgeTPU Compiler", "! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -\n\n! echo \"deb https://packages.cloud.google.com/apt coral-edgetpu-stable main\" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list\n\n! sudo apt-get update\n\n! sudo apt-get install edgetpu-compiler", "Step 2. Select number of Edge TPUs, Compile\nThe EdgeTPU has 8MB of SRAM for caching model parameters (more info). 
This means that for models that are larger than 8MB, inference time will be increased in order to transfer over model parameters. One way to avoid this is Model Pipelining - splitting the model into segments that can have a dedicated EdgeTPU. This can significantly improve latency.\nThe below table can be used as a reference for the number of Edge TPUs to use - the larger models will not compile for a single TPU as the intermediate tensors can't fit in on-chip memory.\n| Model architecture | Minimum TPUs | Recommended TPUs |\n|--------------------|-------|-------|\n| EfficientDet-Lite0 | 1 | 1 |\n| EfficientDet-Lite1 | 1 | 1 |\n| EfficientDet-Lite2 | 1 | 2 |\n| EfficientDet-Lite3 | 2 | 2 |\n| EfficientDet-Lite4 | 2 | 3 |", "NUMBER_OF_TPUS = 1 #@param {type:\"number\"}\n\n!edgetpu_compiler model.tflite --num_segments=$NUMBER_OF_TPUS", "Step 3. Download, Run Model\nWith the model(s) compiled, they can now be run on EdgeTPU(s) for object detection. First, download the compiled TensorFlow Lite model file using the left sidebar of Colab. Right-click on the model_edgetpu.tflite file and choose Download to download it to your local computer.\nNow you can run the model in your preferred manner. Examples of detection include:\n* pycoral detection\n* Basic TFLite detection\n* Example Video Detection\n* libcoral C++ API\nAdvanced Usage\nThis section covers advanced usage topics like adjusting the model and the training hyperparameters.\nLoad the dataset\nLoad your own data\nYou can upload your own dataset to work through this tutorial. 
Upload your dataset by using the left sidebar in Colab.\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_object_detection.png\" alt=\"Upload File\" width=\"1000\" hspace=\"0\">\nIf you prefer not to upload your dataset to the cloud, you can also run the library locally by following the guide.\nLoad your data with a different data format\nThe Model Maker library also supports the object_detector.DataLoader.from_pascal_voc method to load data with PASCAL VOC format. makesense.ai and LabelImg are tools that can annotate images and save annotations as XML files in PASCAL VOC data format:\npython\nobject_detector.DataLoader.from_pascal_voc(image_dir, annotations_dir, label_map={1: \"person\", 2: \"notperson\"})\nCustomize the EfficientDet model hyperparameters\nThe model and training pipeline parameters you can adjust are:\n\nmodel_dir: The location to save the model checkpoint files. If not set, a temporary directory will be used.\nsteps_per_execution: Number of steps per training execution.\nmoving_average_decay: Float. The decay to use for maintaining moving averages of the trained parameters.\nvar_freeze_expr: The regular expression to map the prefix name of variables to be frozen, which means they remain the same during training. More specifically, re.match(var_freeze_expr, variable_name) is used in the codebase to map the variables to be frozen.\ntflite_max_detections: integer, 25 by default. The max number of output detections in the TFLite model.\nstrategy: A string specifying which distribution strategy to use. Accepted values are 'tpu', 'gpus', None. 'tpu' means to use TPUStrategy. 'gpus' means to use MirroredStrategy for multi-gpus. If None, use TF default with OneDeviceStrategy.\ntpu: The Cloud TPU to use for training. This should be either the name used when creating the Cloud TPU, or a grpc://ip.address.of.tpu:8470 url.\nuse_xla: Use XLA even if strategy is not tpu. 
If strategy is tpu, always use XLA, and this flag has no effect.\nprofile: Enable profile mode.\ndebug: Enable debug mode.\n\nOther parameters that can be adjusted are shown in hparams_config.py.\nFor instance, you can set var_freeze_expr='efficientnet', which freezes the variables with name prefix efficientnet (default is '(efficientnet|fpn_cells|resample_p6)'). This freezes the matching variables and keeps their values the same through training.\npython\nspec = model_spec.get('efficientdet_lite0')\nspec.config.var_freeze_expr = 'efficientnet'\nChange the Model Architecture\nYou can change the model architecture by changing the model_spec. For instance, change the model_spec to the EfficientDet-Lite4 model.\npython\nspec = model_spec.get('efficientdet_lite4')\nTune the training hyperparameters\nThe create function is the driver function that the Model Maker library uses to create models. The model_spec parameter defines the model specification. The object_detector.EfficientDetSpec class is currently supported. The create function comprises the following steps:\n\nCreates the model for object detection according to model_spec.\n\nTrains the model. The default epochs and the default batch size are set by the epochs and batch_size variables in the model_spec object.\nYou can also tune the training hyperparameters like epochs and batch_size that affect the model accuracy. For instance,\n\n\nepochs: Integer, 50 by default. More epochs could achieve better accuracy, but may lead to overfitting.\n\nbatch_size: Integer, 64 by default. The number of samples to use in one training step.\ntrain_whole_model: Boolean, False by default. If true, train the whole model. Otherwise, only train the layers that do not match var_freeze_expr.\n\nFor example, you can train with fewer epochs and only the head layer. 
You can increase the number of epochs for better results.\npython\nmodel = object_detector.create(train_data, model_spec=spec, epochs=10, validation_data=validation_data)\nExport to different formats\nThe export formats can be one or a list of the following:\n\nExportFormat.TFLITE\nExportFormat.LABEL\nExportFormat.SAVED_MODEL\n\nBy default, it exports only the TensorFlow Lite model file containing the model metadata so that you can later use it in an on-device ML application. The label file is embedded in metadata.\nIn many on-device ML applications, the model size is an important factor. Therefore, it is recommended that you quantize the model to make it smaller and potentially run faster. As for EfficientDet-Lite models, full integer quantization is used to quantize the model by default. Please refer to Post-training quantization for more detail.\npython\nmodel.export(export_dir='.')\nYou can also choose to export other files related to the model for better examination. For instance, exporting both the saved model and the label file as follows:\npython\nmodel.export(export_dir='.', export_format=[ExportFormat.SAVED_MODEL, ExportFormat.LABEL])\nCustomize Post-training quantization on the TensorFlow Lite model\nPost-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with a little degradation in model accuracy. Thus, it's widely used to optimize the model.\nThe Model Maker library applies a default post-training quantization technique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an example. 
First, define the quantization config.\npython\nconfig = QuantizationConfig.for_float16()\nThen we export the TensorFlow Lite model with this configuration.\npython\nmodel.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)\nRead more\nYou can read our object detection example to learn technical details. For more information, please refer to:\n\nTensorFlow Lite Model Maker guide and API reference.\nTask Library: ObjectDetector for deployment.\nThe end-to-end reference apps: Android, iOS, and Raspberry Pi." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
henrysky/astroNN
demo_tutorial/NN_uncertainty_analysis/Uncertainty_Demo_MNIST.ipynb
mit
[ "Classification Uncertainty Analysis in Bayesian Deep Learning with Dropout Variational Inference\nHere is astroNN, please take a look if you are interested in astronomy or how neural networks are applied in astronomy\n* Henry Leung - Astronomy student, University of Toronto - henrysky\n* Project adviser: Jo Bovy - Professor, Department of Astronomy and Astrophysics, University of Toronto - jobovy\n* Contact Henry: henrysky.leung [at] utoronto.ca\n* This tutorial was created on 16/Mar/2018 with Keras 2.1.5, Tensorflow 1.6.0, Nvidia CuDNN 7.0 for CUDA 9.0 (Optional)\n* Updated on 31/Jan/2020 with Tensorflow 2.1.0, Tensorflow Probability 0.9.0\n* Updated again on 27/Jan/2020 with Tensorflow 2.4.0, Tensorflow Probability 0.12.0\n<br>\nFor more resources on Bayesian Deep Learning with Dropout Variational Inference, please refer to README.md\nFirst import everything we need", "%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nfrom tensorflow.keras.datasets import mnist\nfrom tensorflow.keras import utils\nimport numpy as np\nimport pylab as plt\n\nfrom astroNN.models import MNIST_BCNN", "Train the neural network on MNIST training set", "(x_train, y_train), (x_test, y_test) = mnist.load_data()\n\ny_train = utils.to_categorical(y_train, 10)\ny_train = y_train.astype(np.float32)\nx_train = x_train.astype(np.float32)\nx_test = x_test.astype(np.float32)\n\n# Create an astroNN neural network instance and set the basic parameters\nnet = MNIST_BCNN()\nnet.task = 'classification'\nnet.max_epochs = 5 # Just use 5 epochs for a quick result\n\n# Train the neural network\nnet.train(x_train, y_train)", "Test the neural network on random MNIST images\nYou can see below that most test images are predicted correctly, except the last one, where the model has high uncertainty. 
As a human, you can indeed argue that this 5 is badly written and can be read as a 6 or even a badly written 8.", "test_idx = [1, 2, 3, 4, 5, 8]\npred, pred_std = net.test(x_test[test_idx])\nfor counter, i in enumerate(test_idx):\n plt.figure(figsize=(3, 3), dpi=100)\n plt.title(f'Predicted Digit {pred[counter]}, Real Answer: {y_test[i]:{1}} \\n'\n f'Total Uncertainty (Entropy): {(pred_std[\"total\"][counter]):.{2}}')\n plt.imshow(x_test[i])\n plt.show()\n plt.close('all')\n plt.clf()", "Test the neural network on random MNIST images with 90 degree rotation\nSince the neural network is trained on MNIST images without any data augmentation, if we rotate the MNIST images, they should look 'alien' to the neural network and it should give us a high uncertainty. And indeed the neural network tells us it is very uncertain about the predictions for rotated images.", "test_rot_idx = [9, 10, 11]\n\ntest_rot = x_test[test_rot_idx]\n\nfor counter, j in enumerate(test_rot):\n test_rot[counter] = np.rot90(j)\n\npred_rot, pred_rot_std = net.test(test_rot)\n\nfor counter, i in enumerate(test_rot_idx):\n plt.figure(figsize=(3, 3), dpi=100)\n plt.title(f'Predicted Digit {pred_rot[counter]}, Real Answer: {y_test[i]:{1}} \\n'\n f'Total Uncertainty (Entropy): {(pred_rot_std[\"total\"][counter]):.{2}}')\n plt.imshow(test_rot[counter])\n plt.show()\n plt.close('all')\n plt.clf()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kit-cel/wt
ccgbc/ch4_LDPC_Analysis/LDPC_Optimization_AWGN.ipynb
gpl-2.0
[ "Optimization of Degree Distributions on the AWGN\nThis code is provided as supplementary material of the lecture Channel Coding 2 - Advanced Methods.\nThis code illustrates\n* Using linear programming to optimize degree distributions on the AWGN channel using EXIT charts", "import cvxpy as cp\nimport numpy as np\nimport matplotlib.pyplot as plot\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\nimport math\n%matplotlib inline ", "Approximation of the J-function taken from [1] with\n$$\nJ(\\mu) \\approx \\left(1 - 2^{-H_1\\cdot (2\\mu)^{H_2}}\\right)^{H_3}\n$$\nand its inverse function can be easily found as\n$$\n\\mu = J^{-1}(I) \\approx \\frac{1}{2}\\left(-\\frac{1}{H_1}\\log_2\\left(1-I^{\\frac{1}{H_3}}\\right)\\right)^{\\frac{1}{H_2}}\n$$\nwith $H_1 = 0.3073$, $H_2=0.8935$, and $H_3 = 1.1064$.\n[1] F. Schreckenbach, Iterative Decoding of Bit-Interleaved Coded Modulation , PhD thesis, TU Munich, 2007", "H1 = 0.3073\nH2 = 0.8935\nH3 = 1.1064\n\ndef J_fun(mu): \n I = (1 - 2**(-H1*(2*mu)**H2))**H3\n return I\n\ndef invJ_fun(I):\n if I > (1-1e-10):\n return 100\n mu = 0.5*(-(1/H1) * np.log2(1 - I**(1/H3)))**(1/H2)\n return mu", "The following function solves the optimization problem that returns the best $\\lambda(Z)$ for a given BI-AWGN channel quality $E_s/N_0$, corresponding to a $\\mu_c = 4\\frac{E_s}{N_0}$, for a regular check node degree $d_{\\mathtt{c}}$, and for a maximum variable node degree $d_{\\mathtt{v},\\max}$. 
This optimization problem is derived in the lecture as\n$$\n\\begin{aligned}\n& \\underset{\\lambda_1,\\ldots,\\lambda_{d_{\\mathtt{v},\\max}}}{\\text{maximize}} & & \\sum_{i=1}^{d_{\\mathtt{v},\\max}}\\frac{\\lambda_i}{i} \\\n& \\text{subject to} & & \\lambda_1 = 0 \\\n& & & \\lambda_i \\geq 0, \\quad \\forall i \\in{2,3,\\ldots,d_{\\mathtt{v},\\max}} \\\n& & & \\sum_{i=2}^{d_{\\mathtt{v},\\max}}\\lambda_i = 1 \\\n& & & \\sum_{i=2}^{d_{\\mathtt{v},\\max}}\\lambda_i\\cdot J\\left(\\mu_c + (i-1)J^{-1}\\left(\\frac{j}{D}\\right)\\right) > 1 - J\\left(\\frac{1}{d_{\\mathtt{c}}-1}J^{-1}\\left(1-\\frac{j}{D}\\right)\\right),\\quad \\forall j \\in {1,\\ldots, D} \\\n& & & \\lambda_2 \\leq \\frac{e^{\\frac{\\mu_c}{4}}}{d_{\\mathtt{c}}-1}\n\\end{aligned}\n$$\nIf this optimization problem is feasible, then the function returns the polynomial $\\lambda(Z)$ as a coefficient array where the first entry corresponds to the largest exponent ($\\lambda_{d_{\\mathtt{v},\\max}}$) and the last entry to the lowest exponent ($\\lambda_1$). 
If the optimization problem has no solution (e.g., it is infeasible), then the empty vector is returned.", "def find_best_lambda(mu_c, v_max, dc): \n # quantization of EXIT chart\n D = 500\n I_range = np.arange(0, D, 1)/D \n \n # definition of variables, v_max entries \\lambda_i \n v_lambda = cp.Variable(shape=v_max) \n \n # objective function\n cv = 1/np.arange(v_max,0,-1) \n objective = cp.Maximize(v_lambda @ cv)\n \n # constraints\n # constraint 1, v_lambda are fractions between 0 and 1 and sum up to 1\n constraints = [cp.sum(v_lambda) == 1, v_lambda >= 0]\n \n # constraint 2, no variable nodes of degree 1\n constraints += [v_lambda[v_max-1] == 0]\n \n # constraint 3, EXIT chart open tunnel condition for all the discrete I values (a total of D constraints, one for each I) \n for myI in I_range: \n constraints += [v_lambda @ [J_fun(mu_c + (v_max-1-j)*invJ_fun(myI)) for j in range(v_max)] - 1 + J_fun(1/(dc-1)*invJ_fun(1-myI)) >= 0]\n \n # constraint 4, stability condition\n constraints += [v_lambda[v_max-2] <= np.exp(mu_c/4)/(dc-1)]\n\n # set up the problem and solve\n problem = cp.Problem(objective, constraints)\n problem.solve()\n \n if problem.status == \"optimal\":\n r_lambda = v_lambda.value\n # remove entries close to zero and renormalize\n r_lambda[r_lambda <= 1e-7] = 0\n r_lambda = r_lambda / sum(r_lambda)\n else:\n r_lambda = np.array([])\n \n return r_lambda", "As an example, we consider the optimization carried out in the lecture after 10 iterations, where we have $\\mu_c = 3.8086$ and $d_{\\mathtt{c}} = 14$ with $d_{\\mathtt{v},\\max}=16$.", "best_lambda = find_best_lambda(3.8086, 16, 14)\nprint(np.poly1d(best_lambda, variable='Z'))", "In the following, we provide an interactive widget that allows you to choose the parameters of the optimization yourself and get the best possible $\\lambda(Z)$. 
Additionally, the EXIT chart is plotted to visualize the good fit of the obtained degree distribution.", "def best_lambda_interactive(mu_c, v_max, dc):\n # get the lambda polynomial from the optimization for the regular check node degree dc\n p_lambda = find_best_lambda(mu_c, v_max, dc)\n \n # if optimization successful, compute rate and show plot\n if p_lambda.size == 0:\n print('Optimization infeasible, no solution found')\n else:\n design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda),1))\n if design_rate <= 0:\n print('Optimization feasible, but no code with positive rate found')\n else:\n print(\"Lambda polynomial:\")\n print(np.poly1d(p_lambda, variable='Z'))\n print(\"Design rate r_d = %1.3f\" % design_rate)\n \n # Plot EXIT-Chart\n print(\"EXIT Chart:\")\n plot.figure(3) \n x = np.linspace(0, 1, num=100)\n y_v = [np.sum([p_lambda[j] * J_fun(mu_c + (v_max-1-j)*invJ_fun(xv)) for j in range(v_max)]) for xv in x] \n y_c = [1-J_fun((dc-1)*invJ_fun(1-xv)) for xv in x] \n plot.plot(x, y_v, '#7030A0')\n plot.plot(y_c, x, '#008000') \n plot.axis('equal')\n plot.gca().set_aspect('equal', adjustable='box')\n plot.xlim(0,1)\n plot.ylim(0,1) \n plot.grid()\n plot.show()\n\ninteractive_plot = interactive(best_lambda_interactive, \\\n mu_c=widgets.FloatSlider(min=0.5,max=8,step=0.01,value=3, continuous_update=False, description=r'\\(\\mu_c\\)',layout=widgets.Layout(width='50%')), \\\n v_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\\(d_{\\mathtt{v},\\max}\\)'), \\\n dc = widgets.IntSlider(min=3,max=20,step=1,value=4, continuous_update=False, description=r'\\(d_{\\mathtt{c}}\\)')) \noutput = interactive_plot.children[-1]\noutput.layout.height = '400px'\ninteractive_plot", "Now, we carry out the optimization over a wide range of $d_{\\mathtt{c}}$ values for a given $\\mu_c$ and find the largest possible rate.", "def find_best_rate(mu_c, dv_max, dc_max):\n c_range = np.arange(3, dc_max+1)\n rates = 
np.zeros_like(c_range,dtype=float)\n \n \n # loop over all c_avg, add progress bar\n f = widgets.FloatProgress(min=0, max=np.size(c_range))\n display(f)\n for index,dc in enumerate(c_range):\n f.value += 1 \n p_lambda = find_best_lambda(mu_c, dv_max, dc) \n if np.array(p_lambda).size > 0: \n design_rate = 1 - 1/(dc * np.polyval(np.polyint(p_lambda),1))\n if design_rate >= 0:\n rates[index] = design_rate\n \n # find largest rate\n largest_rate_index = np.argmax(rates)\n best_lambda = find_best_lambda(mu_c, dv_max, c_range[largest_rate_index])\n print(\"Found best code of rate %1.3f for average check node degree of %1.2f\" % (rates[largest_rate_index], c_range[largest_rate_index]))\n print(\"Corresponding lambda polynomial\")\n print(np.poly1d(best_lambda, variable='Z'))\n \n # Plot curve with all obtained results\n plot.figure(4, figsize=(10,3)) \n plot.plot(c_range, rates, '--s',color=(0, 0.59, 0.51))\n plot.plot(c_range[largest_rate_index], rates[largest_rate_index], 'rs')\n plot.xlim(3, dc_max)\n plot.xticks(range(3,dc_max+1))\n plot.ylim(0, 1)\n plot.xlabel('$d_{\\mathtt{c}}$')\n plot.ylabel('design rate $r_d$')\n plot.grid()\n plot.show()\n\n return rates[largest_rate_index]\n \ninteractive_optim = interactive(find_best_rate, \\\n mu_c=widgets.FloatSlider(min=0.1,max=10,step=0.01,value=2, continuous_update=False, description=r'\\(\\mu_c\\)',layout=widgets.Layout(width='50%')), \\\n dv_max = widgets.IntSlider(min=3, max=20, step=1, value=16, continuous_update=False, description=r'\\(d_{\\mathtt{v},\\max}\\)'), \\\n dc_max = widgets.IntSlider(min=3, max=40, step=1, value=22, continuous_update=False, description=r'\\(d_{\\mathtt{c},\\max}\\)'))\noutput = interactive_optim.children[-1]\noutput.layout.height = '400px'\ninteractive_optim", "Running binary search to find code with a given target rate for the AWGN channel", "target_rate = 0.7\ndv_max = 16\ndc_max = 22\n\nT_Delta = 0.01\nmu_c = 10\nDelta_mu = 10\n\nwhile Delta_mu >= T_Delta: \n print('Running 
optimization for mu_c = %1.5f, corresponding to Es/N0 = %1.2f dB' % (mu_c, 10*np.log10(mu_c/4)))\n \n rate = find_best_rate(mu_c, dv_max, dc_max)\n if rate > target_rate:\n mu_c = mu_c - Delta_mu / 2\n else:\n mu_c = mu_c + Delta_mu / 2\n \n Delta_mu = Delta_mu / 2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
as595/AllOfYourBases
TIARA/GaussianProcessModelling/NOTEBOOKS/GPMIntroExtended.ipynb
gpl-3.0
[ "==============================================================================================\n&lsaquo; GPMIntro.ipynb &rsaquo;\nCopyright (C) &lsaquo; 2017 &rsaquo; &lsaquo; Anna Scaife - anna.scaife@manchester.ac.uk &rsaquo;\nThis program is free software: you can redistribute it and/or modify\nit under the terms of the GNU General Public License as published by\nthe Free Software Foundation, either version 3 of the License, or\n(at your option) any later version.\nThis program is distributed in the hope that it will be useful,\nbut WITHOUT ANY WARRANTY; without even the implied warranty of\nMERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\nGNU General Public License for more details.\nYou should have received a copy of the GNU General Public License\nalong with this program. If not, see http://www.gnu.org/licenses/.\n==============================================================================================\n[AMS - 170402] Notebook created for SKA-SA Newton Big Data Summer School, Cape Town , April 2017.\nThis notebook is an extended version of the notebook \"GPMIntro.ipynb\". This notebook uses GPM to predict a signal. It recreates some of the plots from Roberts et al. 2013 (http://www.robots.ox.ac.uk/~sjrob/Pubs/Phil.%20Trans.%20R.%20Soc.%20A-2013-Roberts-.pdf). 
\nIt extends the original notebook to show how to fit maximum likelihood hyper-parameters to covariance kernels using a combination of the george library (http://dan.iel.fm/george/current/) and the emcee library (http://dan.iel.fm/emcee/current/).\nIt is a teaching resource and is accompanied by the lecture \"Can you Predict the Future..?\".\nAll Python libraries used in this example can be installed using pip.\n\nTo start let's specify that we want our figures to appear embedded in the notebook:", "#%matplotlib inline", "Then let's import all the libraries we need...", "import numpy as np\nimport pylab as pl\nfrom scipy import linalg as sl", "Make the covariance kernel a squared-exponential,\n$k(x_1,x_2) = h^2 \\exp{ \\left( \\frac{-(x_1 - x_2)^2}{\\lambda^2} \\right)}$,\njust like Eq. 3.11 in Roberts et al. (2012).\nhttp://www.robots.ox.ac.uk/~sjrob/Pubs/Phil.%20Trans.%20R.%20Soc.%20A-2013-Roberts-.pdf", "def cov_kernel(x1,x2,h,lam):\n \"\"\"\n Squared-Exponential covariance kernel\n \"\"\"\n k12 = h**2*np.exp(-1.*(x1 - x2)**2/lam**2)\n \n return k12", "We can use this kernel to calculate the value of each element in our covariance matrix:\n$\\mathbf{K(x,x)} = \\left(\n\\begin{array}{cccc}\nk(x_1,x_1) & k(x_1,x_2) & ... & k(x_1,x_n) \\\nk(x_2,x_1) & k(x_2,x_2) & ... & k(x_2,x_n) \\\n\\vdots & \\vdots & \\vdots & \\vdots \\\nk(x_n,x_1) & k(x_n,x_2) & ... & k(x_n,x_n) \n\\end{array}\n\\right).$\nWe can then populate a covariance matrix, $K(\\mathbf{x},\\mathbf{x})$, for our data:", "def make_K(x, h, lam):\n \n \"\"\"\n Make covariance matrix from covariance kernel\n \"\"\"\n \n # for a data array of length x, make a covariance matrix x*x:\n K = np.zeros((len(x),len(x)))\n \n for i in range(0,len(x)):\n for j in range(0,len(x)):\n \n if (i==j):\n noise = 1e-5\n else:\n noise = 0.0\n \n # calculate value of K for each separation:\n K[i,j] = cov_kernel(x[i],x[j],h,lam) + noise**2\n \n return K", "Using this kernel we can then recreate Fig. 5 from Roberts et al. 
(2012).", "# make an array of 200 evenly spaced positions between 0 and 20:\nx1 = np.arange(0, 20.,0.01)\n \nfor i in range(0,3):\n \n h = 1.0\n \n if (i==0): lam = 0.1\n if (i==1): lam = 1.0\n if (i==2): lam = 5.0\n \n # make a covariance matrix:\n K = make_K(x1,h,lam)\n \n # five realisations:\n for j in range(0,5):\n \n # draw samples from a co-variate Gaussian distribution, N(0,K):\n y1 = np.random.multivariate_normal(np.zeros(len(x1)),K)\n \n tmp2 = '23'+str(i+3+1)\n pl.subplot(int(tmp2))\n pl.plot(x1,y1)\n \n \n tmp1 = '23'+str(i+1)\n pl.subplot(int(tmp1))\n pl.imshow(K)\n pl.title(r\"$\\lambda = $\"+str(lam))\n \n \npl.show()", "If we then take the final realization, which has $\\lambda = 5$, and select 5 points from it randomly we can calculate the posterior mean and variance at every point based on those five input data. \nThe mean and variance are given by Eq. 3.8 & 3.9 in Roberts et al. (2012) or Eq. 2.25 & 2.26 in Rasmussen & Williams.\nFirst let's select our training data points and our test data points:", "# set number of training points\nnx_training = 5\n\n# randomly select the training points:\ntmp = np.random.uniform(low=0.0, high=2000.0, size=nx_training)\ntmp = tmp.astype(int)\n\ncondition = np.zeros_like(x1)\nfor i in tmp: condition[i] = 1.0\n \ny_train = y1[np.where(condition==1.0)]\nx_train = x1[np.where(condition==1.0)]\ny_test = y1[np.where(condition==0.0)]\nx_test = x1[np.where(condition==0.0)]", "We then use our training data points to define a covariance matrix:", "# define the covariance matrix:\nK = make_K(x_train,h,lam)\n\n# take the inverse:\niK = np.linalg.inv(K)\n\nprint 'determinant: ',np.linalg.det(K), sl.det(K)", "For each of our test data points we can then make a prediction of the value at $x_{\\ast}$ and the uncertainly (standard deviation):", "mu=[];sig=[]\nfor xx in x_test:\n\n # find the 1d covariance matrix:\n K_x = cov_kernel(xx, x_train, h, lam)\n \n # find the kernel for (x,x):\n k_xx = cov_kernel(xx, xx, h, lam)\n \n # 
calculate the posterior mean and variance:\n mu_xx = np.dot(K_x.T,np.dot(iK,y_train))\n sig_xx = k_xx - np.dot(K_x.T,np.dot(iK,K_x))\n \n mu.append(mu_xx)\n sig.append(np.sqrt(np.abs(sig_xx))) # note sqrt to get stdev from variance\n", "Let's plot this up:", "# mu and sig are currently lists - turn them into numpy arrays:\nmu=np.array(mu);sig=np.array(sig)\n\n# make some plots:\n\n# left-hand plot\nax = pl.subplot(121)\npl.scatter(x_train,y_train) # plot the training points\npl.plot(x1,y1,ls=':') # plot the original data they were drawn from\npl.title(\"Input\")\n\n# right-hand plot\nax = pl.subplot(122)\npl.plot(x_test,mu,ls='-') # plot the predicted values\npl.plot(x_test,y_test,ls=':') # plot the original values\n\n\n# shade in the area inside a one standard deviation bound:\nax.fill_between(x_test,mu-sig,mu+sig,facecolor='lightgrey', lw=0, interpolate=True)\npl.title(\"Predicted\")\n\npl.scatter(x_train,y_train) # plot the training points\n\n# display the plot:\npl.show()", "[Note: Depending on the selection of training points you might want to specify some axis ranges for these plots]\nAt this point we've come to the end of the material presented in the lecture.\n--END--\nExtra material\nWhat if we don't know the values of the hyper-parameters in our covariance kernel? 
We can fit them directly from the data!", "# set number of training points\nfrac = 0.05\nnx_training = int(len(x1)*frac)\nprint \"Using \",nx_training,\" points.\"\n\n# randomly select the training points:\ntmp = np.random.uniform(low=0.0, high=2000.0, size=nx_training)\ntmp = tmp.astype(int)\n\ncondition = np.zeros_like(x1)\nfor i in tmp: condition[i] = 1.0\n \ny_train = y1[np.where(condition==1.0)]\nx_train = x1[np.where(condition==1.0)]\ny_test = y1[np.where(condition==0.0)]\nx_test = x1[np.where(condition==0.0)]\n\npl.scatter(x_train,y_train) # plot the training points\npl.plot(x1,y1,ls=':') # plot the original data they were drawn from\npl.title(\"Training Data\")\npl.show()", "For what follows I'm going to use the george library. The main reason for this is that the matrix inversion in numpy is not very stable for large (> (10 x 10)) matrices.", "import george\nfrom george import kernels\n\nimport emcee\n\n# we need to define three functions: \n# a log likelihood, a log prior & a log posterior.\n\n# set the loglikelihood:\ndef lnlike2(p, x, y):\n \n # update kernel parameters:\n gp.kernel[:] = p\n \n # compute covariance matrix:\n gp.compute(x)\n \n # calculate the likelihood:\n ll = gp.lnlikelihood(y, quiet=True)\n \n # return -np.inf if the likelihood is not finite,\n # so that the sampler rejects such points:\n return ll if np.isfinite(ll) else -np.inf\n\n# set the logprior\ndef lnprior2(p):\n \n # note that \"p\" contains the ln()\n # of the parameter values - set your\n # prior ranges appropriately!\n \n lnh,lnlam,lnsig = p\n \n # really crappy prior:\n if (-2.<lnh<5.) and (-1.<lnlam<10.) 
and (-30.<lnsig<0.):\n return 0.0\n \n return -np.inf\n\n# set the logposterior:\ndef lnprob2(p, x, y):\n \n lp = lnprior2(p)\n \n return lp + lnlike2(p, x, y) if np.isfinite(lp) else -np.inf\n\n\n# initiate george with the exponential squared kernel:\nkernel = 1.0*kernels.ExpSquaredKernel(30.0)+kernels.WhiteKernel(1e-5)\ngp = george.GP(kernel, mean=0.0)\n\n# put all the data into a single array:\ndata = (x_train,y_train)\n\n# set your initial guess parameters\n# remember george keeps these in ln() form!\ninitial = gp.kernel[:]\nprint \"Initial guesses: \",np.exp(initial)\n\n# set the dimension of the prior volume \n# (i.e. how many parameters do you have?)\nndim = len(initial)\nprint \"Number of parameters: \",ndim\n\n# The number of walkers needs to be more than twice \n# the dimension of your parameter space... unless you're crazy!\nnwalkers = 10\n\n# perturb your inital guess parameters very slightly\n# to get your starting values:\np0 = [np.array(initial) + 1e-4 * np.random.randn(ndim)\n for i in xrange(nwalkers)]\n\n# initalise the sampler:\nsampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob2, args=data)\n\n# run a few samples as a burn-in:\nprint(\"Running burn-in\")\np0, lnp, _ = sampler.run_mcmc(p0, 1000)\nsampler.reset()\n\n# take the highest likelihood point from the burn-in as a\n# starting point and now begin your production run:\nprint(\"Running production\")\np = p0[np.argmax(lnp)]\np0 = [p + 1e-4 * np.random.randn(ndim) for i in xrange(nwalkers)]\np0, _, _ = sampler.run_mcmc(p0, 8000)\n\nprint \"Finished\"\n\nimport acor\n\n# calculate the convergence time of our\n# MCMC chains:\nsamples = sampler.flatchain\ns2 = np.ndarray.transpose(samples)\ntau, mean, sigma = acor.acor(s2)\nprint \"Convergence time from acor: \", tau\n\n# get rid of the samples that were taken\n# before convergence:\ndelta = int(20*tau)\nsamples = sampler.flatchain[delta:,:]\n\n# extract the log likelihood for each sample:\nlnl = sampler.flatlnprobability[delta:]\n\n# find 
the point of maximum likelihood:\nml = samples[np.argmax(lnl),:]\n\n# print out those values\n# Note that we have unwrapped the ln() of the squared parameters:\nprint \"ML parameters: \", \nprint \"h: \", np.sqrt(np.exp(ml[0])),\" ; lam: \",np.sqrt(np.exp(ml[1])),\" ; sigma: \",np.sqrt(np.exp(ml[2]))", "Note that the george ExpSquaredKernel has a factor of two in the denominator of the exponent. So whereas we specified that $\\lambda_{\\rm true} = 5.0$, we have found that $\\lambda_{\\rm fit} = 3.46$... (or similar)\nIs this what we should expect given the extra factor of two? Well, 25/2 = 12.5 and $\\sqrt{12.5} \\approx 3.54$, so we're pretty close!\nLet's plot our probability surfaces for each pair of parameters, as well as the confidence intervals:", "import corner\n\n# Plot it.\nfigure = corner.corner(samples, labels=[r\"$\\ln(h^2)$\", r\"$\\ln(\\lambda^2)$\", r\"$\\ln(\\sigma^2)$\"],\n truths=ml,\n quantiles=[0.16,0.5,0.84],\n #levels=[0.39,0.86,0.99],\n levels=[0.68,0.95,0.99],\n title=\"Mauna Loa Data\",\n show_titles=True, title_args={\"fontsize\": 12})", "We can also extract the expectation value for each parameter and the individual confidence intervals:", "q1 = corner.quantile(samples[:,0],\n\t\t\t\t\t\t q=[0.16,0.5,0.84])\n\nprint \"Parameter 1: \",q1[1],\"(-\",q1[1]-q1[0],\", +\",q1[2]-q1[1],\")\"\n\nq2 = corner.quantile(samples[:,1],\n\t\t\t\t\t\t q=[0.16,0.5,0.84])\n\nprint \"Parameter 2: \",q2[1],\"(-\",q2[1]-q2[0],\", +\",q2[2]-q2[1],\")\"\n\nq3 = corner.quantile(samples[:,2],\n\t\t\t\t\t\t q=[0.16,0.5,0.84])\n\nprint \"Parameter 3: \",q3[1],\"(-\",q3[1]-q3[0],\", +\",q3[2]-q3[1],\")\"\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
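The Gaussian-process prediction loop in the notebook above (posterior mean `mu_xx` and variance `sig_xx` built from the inverted training covariance `iK`) can be condensed into one self-contained function. This is a minimal sketch assuming a squared-exponential kernel; the hyperparameters `h`, `lam` and the noise term `sigma` are illustrative defaults, not the notebook's fitted values:

```python
import numpy as np

def sq_exp_kernel(xa, xb, h=1.0, lam=1.0):
    # squared-exponential covariance: k(x, x') = h^2 exp(-(x - x')^2 / (2 lam^2))
    d = np.asarray(xa, float)[:, None] - np.asarray(xb, float)[None, :]
    return h ** 2 * np.exp(-0.5 * (d / lam) ** 2)

def gp_predict(x_train, y_train, x_test, h=1.0, lam=1.0, sigma=1e-6):
    # condition the GP on the training points and return the posterior
    # mean and standard deviation at the test points
    K = sq_exp_kernel(x_train, x_train, h, lam) + sigma ** 2 * np.eye(len(x_train))
    K_x = sq_exp_kernel(x_train, x_test, h, lam)
    k_xx = sq_exp_kernel(x_test, x_test, h, lam)
    iK = np.linalg.inv(K)  # fine for small matrices; prefer Cholesky solves at scale
    mu = K_x.T @ iK @ np.asarray(y_train, float)
    var = np.diag(k_xx - K_x.T @ iK @ K_x)
    return mu, np.sqrt(np.abs(var))  # sqrt to get stdev, as in the notebook
```

A quick sanity check on any GP implementation: at the training points themselves the posterior mean reproduces the data and the predicted standard deviation collapses towards the noise level.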
srcole/qwm
burrito/Burrito_correlations.ipynb
mit
[ "San Diego Burrito Analytics: Correlations\nScott Cole\n21 May 2016\nThis notebook investigates the correlations between different burrito measures.\nimports", "%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport seaborn as sns\nsns.set_style(\"white\")", "Load data", "import util\ndf = util.load_burritos()\nN = df.shape[0]", "Correlation matrix\nNote that many dimensions of the burrito are correlated with each other. This lack of independence across dimensions means we should be careful while interpreting the presence of a correlation between two dimensions.", "m_corr = ['Google','Yelp','Hunger','Cost','Volume','Tortilla','Temp','Meat','Fillings','Meat:filling',\n 'Uniformity','Salsa','Synergy','Wrap','overall']\nM = len(m_corr)\ndfcorr = df[m_corr].corr()\n\nfrom matplotlib import cm\n\nclim1 = (-1,1)\nplt.figure(figsize=(12,10))\ncax = plt.pcolor(range(M+1), range(M+1), dfcorr, cmap=cm.bwr)\ncbar = plt.colorbar(cax, ticks=(-1,-.5,0,.5,1))\ncbar.ax.set_ylabel('Pearson correlation (r)', size=30)\nplt.clim(clim1)\ncbar.ax.set_yticklabels((-1,-.5,0,.5,1),size=20)\nax = plt.gca()\nax.set_yticks(np.arange(M)+.5)\nax.set_yticklabels(m_corr,size=25)\nax.set_xticks(np.arange(M)+.5)\nax.set_xticklabels(m_corr,size=25)\nplt.xticks(rotation='vertical')\nplt.tight_layout()\nplt.xlim((0,M))\nplt.ylim((0,M))\n\nfigname = 'metriccorrmat'\nplt.savefig('/gh/fig/burrito/'+figname + '.png')", "Correlation: Cost and volume", "plt.figure(figsize=(4,4))\nax = plt.gca()\ndf.plot(kind='scatter',x='Cost',y='Volume',ax=ax,**{'s':40,'color':'k'})\nplt.xlabel('Cost ($)',size=20)\nplt.ylabel('Volume (L)',size=20)\nplt.xticks(np.arange(3,11),size=15)\nplt.yticks(np.arange(.4,1.4,.1),size=15)\nplt.tight_layout()\nprint(df.corr()['Cost']['Volume'])\nfrom tools.misc import pearsonp\nprint(pearsonp(df.corr()['Cost']['Volume'],len(df[['Cost','Volume']].dropna())))\n\nfigname = 
'corr-volume-cost'\nplt.savefig('/gh/fig/burrito/'+figname + '.png')\n\nplt.figure(figsize=(12,8))\nax = plt.gca()\ndf.plot(kind='scatter',x='Cost',y='overall',ax=ax,**{'s':40,'color':'k'})\nplt.xlabel('Cost ($)',size=30)\nplt.ylabel('overall rating',size=30)\nplt.xticks(np.arange(3,11),size=20)\nplt.yticks(np.arange(1,5.5,.5),size=20)\nplt.ylim((.9,5.1))\nplt.tight_layout()\nprint(df.corr()['Cost']['overall'])\nprint(pearsonp(df.corr()['Cost']['overall'],len(df[['Cost','overall']].dropna())))\n\n\nfigname = 'corr-overall-cost'\nplt.savefig('/gh/fig/burrito/'+figname + '.png')", "Positive correlation: Meat and overall rating", "# Visualize some correlations\n\nfrom tools.plt import scatt_corr\nscatt_corr(df['overall'].values,df['Meat'].values,\n xlabel = 'overall rating', ylabel='meat rating', xlim = (-.5,5.5),ylim = (-.5,5.5),xticks=range(6),yticks=range(6))\n #showline = True)\nfigname = 'corr-meat-overall'\nplt.savefig('C:/gh/fig/burrito/'+figname + '.png')", "Positive correlation: Meat and Non-meat fillings", "plt.figure(figsize=(4,4))\nax = plt.gca()\ndf.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})\nplt.xlabel('Meat rating',size=20)\nplt.ylabel('Non-meat rating',size=20)\nplt.xticks(np.arange(0,6),size=15)\nplt.yticks(np.arange(0,6),size=15)\ndfMF = df[['Meat','Fillings']].dropna()\nprint sp.stats.spearmanr(dfMF.Meat,dfMF.Fillings)\n\nfigname = 'corr-meat-filling'\nplt.savefig('C:/gh/fig/burrito/'+figname + '.png')", "Positive correlation: Meat and non-meat fillings at The Taco Stand\nThe positive correlation here indicates that the correlation between these two features is not simply due to a restaurant having good meat tending to have good non-meat fillings", "# Restrict analysis to burritos at the taco stand\nrestrictCali = False\nimport re\nreTS = re.compile('.*taco stand.*', re.IGNORECASE)\nreCali = re.compile('.*cali.*', re.IGNORECASE)\nlocTS = np.ones(len(df))\nfor i in range(len(df)):\n mat = 
reTS.match(df['Location'][i])\n if mat is None:\n locTS[i] = 0\n else:\n if restrictCali:\n mat = reCali.match(df['Burrito'][i])\n if mat is None:\n locTS[i] = 0\ntemp = np.arange(len(df))\ndfTS = df.loc[temp[locTS==1]]\n\nplt.figure(figsize=(4,4))\nax = plt.gca()\ndfTS.plot(kind='scatter',x='Meat',y='Fillings',ax=ax,**{'s':40,'color':'k','alpha':.1})\nplt.xlabel('Meat rating',size=20)\nplt.ylabel('Non-meat rating',size=20)\nplt.xticks(np.arange(0,6),size=15)\nplt.yticks(np.arange(0,6),size=15)\ndfTSMF = dfTS[['Meat','Fillings']].dropna()\nprint sp.stats.spearmanr(dfTSMF.Meat,dfTSMF.Fillings)\n\nfigname = 'corr-meat-filling-TS'\nplt.savefig('C:/gh/fig/burrito/'+figname + '.png')", "Correlation: Hunger level and overall rating", "plt.figure(figsize=(4,4))\nax = plt.gca()\ndf.plot(kind='scatter',x='Hunger',y='overall',ax=ax,**{'s':40,'color':'k'})\nplt.xlabel('Hunger',size=20)\nplt.ylabel('Overall rating',size=20)\nplt.xticks(np.arange(0,6),size=15)\nplt.yticks(np.arange(0,6),size=15)\nprint df.corr()['Hunger']['overall']\nfrom tools.misc import pearsonp\nprint pearsonp(df.corr()['Hunger']['overall'],len(df[['Hunger','overall']].dropna()))\n\nfigname = 'corr-hunger-overall'\nplt.savefig('C:/gh/fig/burrito/'+figname + '.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
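The burrito notebook builds its correlation matrix with `df[m_corr].corr()`; the Pearson coefficient underneath is easy to compute by hand. A pure-Python sketch, with made-up numbers standing in for the burrito columns:

```python
import math

def pearson_r(x, y):
    # Pearson r: covariance of x and y normalised by their standard deviations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def corr_matrix(columns):
    # pairwise correlations, like df[m_corr].corr() in the notebook
    return {(a, b): pearson_r(columns[a], columns[b])
            for a in columns for b in columns}
```

The matrix is symmetric with ones on the diagonal, which is why the notebook's heatmap is mirrored about its diagonal.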
ireapps/cfj-2017
exercises/02. Working with data files-working.ipynb
mit
[ "Working with data files\nReading and writing data files is a common task, and Python offers native support for working with many kinds of data files. Today, we're going to be working mainly with CSVs.\nImport the csv module\nWe're going to be working with delimited text files, so the first thing we need to do is import this functionality from the standard library.\nOpening a file to read the contents\nWe're going to use something called a with statement to open a file and read the contents. The open() function takes at least two arguments: The path to the file you're opening and what \"mode\" you're opening it in.\nTo start with, we're going to use the 'r' mode to read the data. We'll use the default arguments for delimiter -- comma -- and we don't need to specify a quote character.\nImportant: If you open a data file in w (write) mode, anything that's already in the file will be erased.\nThe file we're using -- MLB roster data from 2017 -- lives at data/mlb.csv.\nOnce we have the file open, we're going to use some functionality from the csv module to iterate over the lines of data and print each one.\nSpecifically, we're going to use the csv.reader method, which returns a list of lines in the data file. Each line, in turn, is a list of the \"cells\" of data in that line.\nThen we're going to loop over the lines of data and print each line. 
We can also use bracket notation to retrieve elements from inside each line of data.", "# open the MLB data file `as` mlb\n\n \n # create a reader object\n\n \n # loop over the rows in the file\n\n \n # assign variables to each element in the row (shortcut!)\n\n \n # print the row, which is a list\n", "Simple filtering\nIf you wanted to filter your data, you could use an if statement inside your with block.", "# open the MLB data file `as` mlb\n\n \n # create a reader object\n\n\n # move past the header row\n\n \n # loop over the rows in the file\n\n\n # assign variables to each element in the row (shortcut!)\n\n \n # print the line of data ~only~ if the player is on the Twins\n\n \n # print the row, which is a list\n", "Exercise\nRead in the MLB data, print only the names and salaries of players who make at least $1 million. (Hint: Use type coercion!)", "# open the MLB data file `as` mlb\n\n \n # create a reader object\n\n \n # move past the header row\n\n \n # loop over the rows in the file\n\n\n # assign variables to each element in the row (shortcut!)\n\n \n # print the line of data ~only~ if the player is on the Twins\n\n \n # print the row, which is a list\n", "DictReader: Another way to read CSV files\nSometimes it's more convenient to work with data files as a list of dictionaries instead of a list of lists. That way, you don't have to remember the position of each \"column\" of data -- you can just reference the column name. To do it, we'll use a csv.DictReader object instead of a csv.reader object. Otherwise the code is much the same.", "# open the MLB data file `as` mlb\n\n \n # create a reader object\n\n \n # loop over the rows in the file\n\n\n # print just the player's name (the column header is \"NAME\")\n", "Writing to CSV files\nYou can also use the csv module to create csv files -- same idea, you just need to change the mode to 'w'. 
As with reading, there's a list-based writing method and a dictionary-based method.", "# define the column names\n\n\n# let's make a few rows of data to write\n\n\n# open an output file in write mode\n\n \n # create a writer object\n\n \n # write the header row\n\n \n # loop over the data and write to file\n", "Using DictWriter to write data\nSimilar to using the list-based method, except that you need to ensure that the keys in your dictionaries of data match exactly a list of fieldnames.", "# define the column names\n\n\n# let's make a few rows of data to write\n\n# open an output file in write mode\n\n \n # create a writer object -- pass the list of column names to the `fieldnames` keyword argument\n\n \n # use the writeheader method to write the header row\n\n \n # loop over the data and write to file\n", "You can open multiple files for reading/writing\nSometimes you want to open multiple files at the same time. One thing you might want to do: Opening a file of raw data in read mode, clean each row in a loop and write out the clean data to a new file.\nYou can open multiple files in the same with block -- just separate your open() functions with a comma.\nFor this example, we're not going to do any cleaning -- we're just going to copy the contents of one file to another.", "# open the MLB data file `as` mlb\n# also, open `mlb-copy.csv` to write to\n\n \n # create a reader object\n\n \n # create a writer object\n # we're going to use the `fieldnames` attribute of the DictReader object\n # as our output headers, as well\n # b/c we're basically just making a copy\n\n \n # write header row\n\n \n # loop over the rows in the file\n\n \n # what type of object is `row`?\n # how would we find out?\n \n # write row to output file\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
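The exercise cells in the CSV notebook above are deliberately left as comment skeletons. One possible solution to the salary-filter exercise, run on an in-memory sample so it is self-contained — the `NAME`/`SALARY` headers follow the notebook's hints, and the rows here are invented:

```python
import csv
import io

# invented sample rows using the column headers the notebook hints at
SAMPLE = "NAME,TEAM,SALARY\nJoe Example,MIN,23000000\nA Rookie,MIN,535000\n"

def million_club(csv_text):
    # keep (name, salary) for players making at least $1 million,
    # coercing the SALARY string to an int as the exercise hint suggests
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["NAME"], int(row["SALARY"]))
            for row in reader if int(row["SALARY"]) >= 1000000]
```

Against a real file you would replace the `StringIO` with `open('data/mlb.csv', 'r')` inside a `with` block, exactly as the notebook describes.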
astroumd/GradMap
notebooks/Haiti2016/python-basic.ipynb
gpl-3.0
[ "Some very basic python\nShowing some very basic python, variables, arrays, math and plotting:\nVariables and printing them", "# setting a variable\na = 1.23\n\n# although just writing the variable will show its value, this is not the recommended\n# way, because per cell only the last one will be printed and stored in the out[]\n# list that the notebook maintains\n\na\na+1", "The right way to print is using the official print() function in python\nand this way you can also print out multiple lines in the out[]", "print(a)\nprint(type(a))\n", "Now you can see that each call to print() will cause output on a new line, and also note they are not in the Out[] list\nOverwriting the same variable, now as a string", "\na=\"1.23\"\nprint(a,type(a))\n\n# checking the value of the variable\na\n", "Python versions\nPython2 and Python3 are still being used today. To safeguard printing between python2 and python3 you will need a special import for old python2:", "from __future__ import print_function", "Now we can print(pi) in python2. The old style would be print pi", "pi = 3.1415\nprint(\"pi=\",pi)\nprint(\"pi=%15.10f\" % pi)\n\n# for reference, here is the old style of printing in python2 \n# print \"pi=\",pi", "Control structures\nMost programming languages have a way to control the flow of the program. The common ones are\n\nif/then/else\nfor-loop\nwhile-loop\n\nWhile you are looking at this code, note that white-space (indentation) controls the extent of the code, and unlike other languages, there is no special symbol or end type statement. 
This is probably the single most confusing thing for those new to the python language.", "n = 1\n\nif n > 0:\n print(\"yes, n>0\")\nelse:\n print(\"not\") \n\n \nfor i in [2,4,n,6]:\n print(\"i=\",i)\nprint(\"oulala, after this for loop, i=\",i)\n\nn = 10\nwhile n>0:\n # n = n - 2\n print(\"whiling\",n)\n n = n - 2\n\nprint(\"last n\",n)\n ", "Python Data Structures\nA list is one of four major data structures (lists, dictionaries, sets, tuples) that python uses. It is the simplest one, and has direct parallels to those in other languages such as Fortran, C/C++, Java etc.\nPython Lists\nPython uses special symbols to make up these collections; briefly they are:\n* list: [1,2,3]\n* dictionary: { \"a\":1 , \"b\":2 , \"c\": 3}\n* set: {1,2,3,\"abc\"}\n* tuple: (1,2,3)", "a1 = [1,2,3,4]\na2 = range(1,5)\nprint(a1)\nprint(a2)\na2 = ['a',1,'cccc']\nprint(a1)\nprint(a2)\na3 = range(12,20,2)\nprint(a3)\n\na1=range(3)\na2=range(1,4)\nprint(a1,a2)\n\na1+a2", "Math and Numeric Arrays", "import math\nimport numpy as np\n\nmath.pi\n\n\nnp.pi\n\n\n# %matplotlib inline\nimport matplotlib.pyplot as plt\n\na=np.arange(0,1,0.01)\n\nb = a*a\nc = np.sqrt(a)\n\nplt.plot(a,b,'-bo',label='b')\nplt.plot(a,c,'-ro',label='c')\nplt.legend()\n\nplt.plot(a,a+1)\n\n\nplt.show()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
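The python-basic notebook above targets Python 2 (`print` statements, and `range()` returning a real list so that `a1+a2` concatenates). Under Python 3, `range()` is lazy and the concatenation cell raises a `TypeError` unless the ranges are wrapped in `list()`. A sketch of the equivalent Python 3 code:

```python
# Python 3: range() is lazy, so wrap it in list() before concatenating
a1 = list(range(3))     # [0, 1, 2]
a2 = list(range(1, 4))  # [1, 2, 3]
combined = a1 + a2      # list concatenation, as in the Python 2 cell

# the while-loop cell, counting down by 2
n = 10
steps = []
while n > 0:
    steps.append(n)
    n = n - 2
```

As in the notebook, the loop variable keeps its final value after the loop body finishes.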
GrachevArtem/dl_course
week_03/HW2_Fully_Connected_NN.ipynb
mit
[ "Полносвязная нейронная сеть\nВ данном домашнем задании вы подготовите свою реализацию полносвязной нейронной сети и обучите классификатор на датасете CIFAR-10", "import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\nclass TwoLayerNet(object):\n \n def __init__(self, input_size, hidden_size, output_size, std=1e-4):\n \"\"\"\n W1: Первый слой с размерами (D, H)\n b1: Вектор байесов, размер (H,)\n W2: Второй слой с размерами (H, C)\n b2: Вектор байесов, размер (C,)\n\n Входные парамерты:\n - input_size: Размерность входных данных\n - hidden_size: Размер скрытого слоя\n - output_size: Количество классов\n \"\"\"\n self.params = {}\n self.params['W1'] = std * np.random.randn(input_size, hidden_size)\n self.params['b1'] = np.zeros(hidden_size)\n self.params['W2'] = std * np.random.randn(hidden_size, output_size)\n self.params['b2'] = np.zeros(output_size)\n \n def loss(self, X, y=None, reg=0.0):\n \"\"\"\n Вычисление функции потерь\n\n Входные парамерты:\n - X: Таблица данных (N, D). X[i] - один пример\n - y: Вектор лейблов. 
Если отсутсвует, то возвращается предсказание лейблов\n - reg: Коэффициент регуляризации\n\n Возвращает:\n Если y == None, то возвращаются скор для классов\n\n Если y != None, то возвращаются:\n - Лосс для данного семпла данных\n - grads: Словарь градиентов, ключи соответствуют ключам словаря self.params.\n \"\"\"\n W1, b1 = self.params['W1'], self.params['b1']\n W2, b2 = self.params['W2'], self.params['b2']\n N, D = X.shape\n\n scores = None\n #############################################################################\n # TODO: Расчет forward pass или прямой проход, для данных находятся скоры, #\n # на выходе размер (N, C) #\n #############################################################################\n pass\n #############################################################################\n # END OF YOUR CODE #\n #############################################################################\n\n # Если y == None, то завершаем вызов\n if y is None:\n return scores\n\n loss = None\n #############################################################################\n # TODO: Расчет Softmax loss для полученных скоров обьектов, на выходе скаляр #\n #############################################################################\n pass\n #############################################################################\n # END OF YOUR CODE #\n #############################################################################\n\n grads = {}\n #############################################################################\n # TODO: Расчет обратнохо прохода или backward pass, находятся градиенты для всех #\n # параметров, результаты сохраняются в grads, например grads['W1'] #\n #############################################################################\n pass\n #############################################################################\n # END OF YOUR CODE #\n #############################################################################\n\n return loss, grads\n \n\n def train(self, X, y, 
X_val, y_val,\n learning_rate=1e-3, learning_rate_decay=0.95,\n reg=5e-6, num_iters=100,\n batch_size=200, verbose=False):\n \"\"\"\n Обучение нейронной сети с помощью SGD\n\n Входные парамерты:\n - X: Матрица данных (N, D)\n - y: Вектор лейблов (N, )\n - X_val: Данные для валидации (N_val, D)\n - y_val: Вектор лейблов валидации (N_val, )\n - reg: Коэффициент регуляризации\n - num_iters: Количнство итераций\n - batch_size: Размер семпла данных, на 1 шаг алгоритма\n - verbose: Вывод прогресса\n \"\"\"\n num_train = X.shape[0]\n iterations_per_epoch = max(num_train / batch_size, 1)\n\n loss_history = []\n train_acc_history = []\n val_acc_history = []\n\n for it in range(num_iters):\n X_batch = None\n y_batch = None\n\n #########################################################################\n # TODO: Семпл данных их X-> X_batch, y_batch #\n #########################################################################\n pass\n #########################################################################\n # END OF YOUR CODE #\n #########################################################################\n \n loss, grads = self.loss(X_batch, y=y_batch, reg=reg)\n loss_history.append(loss)\n\n #########################################################################\n # TODO: Используя градиенты из grads обновите параметры сети #\n #########################################################################\n pass\n #########################################################################\n # END OF YOUR CODE #\n #########################################################################\n\n if verbose and it % 100 == 0:\n print('iteration %d / %d: loss %f' % (it, num_iters, loss))\n\n if it % iterations_per_epoch == 0:\n train_acc = (self.predict(X_batch) == y_batch).mean()\n val_acc = (self.predict(X_val) == y_val).mean()\n train_acc_history.append(train_acc)\n val_acc_history.append(val_acc)\n \n # Decay learning rate\n learning_rate *= learning_rate_decay\n\n return {\n 
'loss_history': loss_history,\n 'train_acc_history': train_acc_history,\n 'val_acc_history': val_acc_history,\n }\n\n def predict(self, X):\n \"\"\"\n Входные параметры:\n - X: Матрица данных (N, D)\n\n Возвращает:\n - y_pred: Вектор предсказаний классов для обьектов (N,)\n \"\"\"\n y_pred = None\n\n ###########################################################################\n # TODO: Предсказание классов для обьектов из X #\n ###########################################################################\n pass\n ###########################################################################\n # END OF YOUR CODE #\n ###########################################################################\n\n return y_pred\n\n# Инициализация простого примера. Данные и обьект модели\n\ninput_size = 4\nhidden_size = 10\nnum_classes = 3\nnum_inputs = 5\n\ndef init_toy_model():\n np.random.seed(0)\n return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)\n\ndef init_toy_data():\n np.random.seed(1)\n X = 10 * np.random.randn(num_inputs, input_size)\n y = np.array([0, 1, 2, 2, 1])\n return X, y\n\nnet = init_toy_model()\nX, y = init_toy_data()", "Прямой проход, скор\nРеализуйте прямой проход в TwoLayerNet.loss", "scores = net.loss(X)\nprint('Your scores:')\nprint(scores)\nprint()\nprint('correct scores:')\ncorrect_scores = np.asarray([\n [-0.81233741, -1.27654624, -0.70335995],\n [-0.17129677, -1.18803311, -0.47310444],\n [-0.51590475, -1.01354314, -0.8504215 ],\n [-0.15419291, -0.48629638, -0.52901952],\n [-0.00618733, -0.12435261, -0.15226949]])\nprint(correct_scores)\nprint()\n\n# The difference should be very small. 
We get < 1e-7\nprint('Difference between your scores and correct scores:')\nprint(np.sum(np.abs(scores - correct_scores)))", "Прямой проход, функция потерь\nВ том же методе реализуйте вычисление функции потерь", "loss, _ = net.loss(X, y, reg=0.05)\ncorrect_loss = 1.30378789133\n\n# Ошибка должна быть < 1e-12\nprint('Difference between your loss and correct loss:')\nprint(np.sum(np.abs(loss - correct_loss)))", "Обратный проход\nЗакончите реализацию метода, вычислением градиентов для W1, b1, W2, и b2", "def eval_numerical_gradient(f, x, verbose=True, h=0.00001):\n \n fx = f(x) # evaluate function value at original point\n grad = np.zeros_like(x)\n # iterate over all indexes in x\n it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])\n while not it.finished:\n\n # evaluate function at x+h\n ix = it.multi_index\n oldval = x[ix]\n x[ix] = oldval + h # increment by h\n fxph = f(x) # evalute f(x + h)\n x[ix] = oldval - h\n fxmh = f(x) # evaluate f(x - h)\n x[ix] = oldval # restore\n\n # compute the partial derivative with centered formula\n grad[ix] = (fxph - fxmh) / (2 * h) # the slope\n if verbose:\n print(ix, grad[ix])\n it.iternext() # step to next dimension\n\n return grad\n\nloss, grads = net.loss(X, y, reg=0.05)\n\n# Ошибка должна быть меньше или около 1e-8\nfor param_name in grads:\n f = lambda W: net.loss(X, y, reg=0.05)[0]\n param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)\n print('%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name])))", "Обучение сети\nРеализуйте метод TwoLayerNet.train и метод TwoLayerNet.predict\nПосле того как закончите, обучите сеть на игрушечных данных, которые мы сгенерировали выше, лосс дольжен быть менее или около 0.2.", "net = init_toy_model()\nstats = net.train(X, y, X, y,\n learning_rate=1e-1, reg=5e-6,\n num_iters=100, verbose=False)\n\nprint('Final training loss: ', 
stats['loss_history'][-1])\n\nplt.plot(stats['loss_history'])\nplt.xlabel('iteration')\nplt.ylabel('training loss')\nplt.title('Training Loss history')\nplt.show()", "CIFAR-10", "from keras.datasets import cifar10\n\n(X_train, y_train), (X_val, y_val) = cifar10.load_data()\n\nX_test, y_test = X_val[:int(X_val.shape[0]*0.5)], y_val[:int(X_val.shape[0]*0.5)]\nX_val, y_val = X_val[int(X_val.shape[0]*0.5):], y_val[int(X_val.shape[0]*0.5):]\n\nprint('Train data shape: ', X_train.shape)\nprint('Train labels shape: ', y_train.shape)\nprint('Validation data shape: ', X_val.shape)\nprint('Validation labels shape: ', y_val.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)", "Обучение сети\nОбучите сеть на данных CIFAR-10", "input_size = 32 * 32 * 3\nhidden_size = 50\nnum_classes = 10\nnet = TwoLayerNet(input_size, hidden_size, num_classes)\n\nstats = net.train(X_train, y_train, X_val, y_val,\n num_iters=1000, batch_size=200,\n learning_rate=1e-4, learning_rate_decay=0.95,\n reg=0.25, verbose=True)\n\nval_acc = (net.predict(X_val) == y_val).mean()\nprint('Validation accuracy: ', val_acc)\n\n", "Дебаггинг процесса обучния", "plt.subplot(2, 1, 1)\nplt.plot(stats['loss_history'])\nplt.title('Loss history')\nplt.xlabel('Iteration')\nplt.ylabel('Loss')\n\nplt.subplot(2, 1, 2)\nplt.plot(stats['train_acc_history'], label='train')\nplt.plot(stats['val_acc_history'], label='val')\nplt.title('Classification accuracy history')\nplt.xlabel('Epoch')\nplt.ylabel('Clasification accuracy')\nplt.show()", "Настройка гиперпараметров", "best_net = None\n\n#################################################################################\n# TODO: Напишите свою реализцию кросс валидации для настройки гиперпараметров сети #\n#################################################################################\npass\n#################################################################################\n# END OF YOUR CODE 
#\n#################################################################################", "Проверка качества\nС оптимальными гиперпараметрами сеть должна выдавать точнов около 48%.", "test_acc = (best_net.predict(X_test) == y_test).mean()\nprint('Test accuracy: ', test_acc)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
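The homework notebook above (written in Russian) asks students to fill in a softmax loss over the class scores in `TwoLayerNet.loss`. A minimal, numerically stable sketch of just that piece — without the regularisation term the assignment also requires — might look like:

```python
import numpy as np

def softmax_loss(scores, y):
    # scores: (N, C) class scores; y: (N,) integer labels
    # subtract the row max before exponentiating for numerical stability
    shifted = scores - scores.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # average negative log-probability of the correct class over the batch
    return float(-log_probs[np.arange(scores.shape[0]), y].mean())
```

A useful check when debugging the assignment: with all-zero scores over C classes, the loss must equal log(C) regardless of the labels.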
scikit-rf/examples
metrology/oneport_tiered_calibration/.ipynb_checkpoints/One Port Tiered Calibration-checkpoint.ipynb
bsd-3-clause
[ "One Port Tiered Calibration\nIntro\nA one-port network analyzer can be used to measure a two-port device, provided that the device is reciprocal. This is accomplished by performing two calibrations, which is why it's called a tiered calibration. \nFirst, the VNA is calibrated at the test-port as normal. This is called the first tier. Next, the device is connected to the test-port, and a calibration is performed at the far end of the device, the second tier. A diagram is shown below,", "from IPython.display import SVG\nSVG('images/boxDiagram.svg')", "This notebook will demonstrate how to use skrf to do a two-tiered one-port calibration. We'll use data that was taken to characterize a waveguide-to-CPW probe. So, for this specific example the diagram above looks like:", "SVG('images/probe.svg')", "Some Data\nThe data is available in the folders 'tier1/' and 'tier2/'.", "ls", "(if you don't have the git repo for these examples, the data for this notebook can be found here)\nIn each folder you will find the two sub-folders, called 'ideals/' and 'measured/'. These contain touchstone files of the calibration standards' ideal and measured responses, respectively.", "ls tier1/", "The first tier is at the waveguide interface, and consists of the following set of standards:\n\nshort \ndelay short\nload\nradiating open (literally an open waveguide)", "ls tier1/measured/", "Creating Calibrations\nTier 1\nFirst, define the calibration for Tier 1:", "from skrf.calibration import OnePort\nimport skrf as rf \n\n# enable in-notebook plots\n%matplotlib inline\nrf.stylely()\n\ntier1_ideals = rf.read_all_networks('tier1/ideals/')\ntier1_measured = rf.read_all_networks('tier1/measured/')\n \n\ntier1 = OnePort(measured = tier1_measured,\n ideals = tier1_ideals,\n name = 'tier1',\n sloppy_input=True)\ntier1", "Because we saved corresponding ideal and measured standards with identical names, the Calibration will automatically align our standards upon initialization. 
(More info on creating Calibration objects can be found in the docs.)\nSimilarly for the second tier,\nTier 2", "tier2_ideals = rf.read_all_networks('tier2/ideals/')\ntier2_measured = rf.read_all_networks('tier2/measured/')\n \n\ntier2 = OnePort(measured = tier2_measured,\n ideals = tier2_ideals,\n name = 'tier2',\n sloppy_input=True)\ntier2", "Error Networks\nEach one-port Calibration contains a two-port error network that is determined from the calculated error coefficients. The error network for tier1 models the VNA, while the error network for tier2 represents the VNA and the DUT. These can be visualized through the parameter 'error_ntwk'.\nFor tier 1,", "tier1.error_ntwk.plot_s_db()\ntitle('Tier 1 Error Network')", "Similarly for tier 2,", "tier2.error_ntwk.plot_s_db()\ntitle('Tier 2 Error Network')", "De-embedding the DUT\nAs previously stated, the error network for tier1 models the VNA, and the error network for tier2 represents the VNA+DUT. So to determine the DUT's response, we cascade the inverse S-parameters of the VNA with the VNA+DUT. \n$$ DUT = VNA^{-1}\\cdot (VNA \\cdot DUT)$$\nIn skrf, this is done as follows", "dut = tier1.error_ntwk.inv ** tier2.error_ntwk\ndut.name = 'probe'\ndut.plot_s_db()\ntitle('Probe S-parameters')\nylim(-60,10)", "You may want to save this to disk for future use:", "dut.write_touchstone()\n\nls probe*", "formatting junk", "from IPython.core.display import HTML\n\n\ndef css_styling():\n styles = open(\"../styles/plotly.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fsilva/deputado-histogramado
notebooks/Deputado-Histogramado-1.ipynb
gpl-3.0
[ "Deputado Histogramado\nexpressao.xyz/deputado/\nComo processar as sessões do parlamento Português\nÍndice\n\nReunir o dataset\nContando as palavras mais comuns\nFazendo histogramas\nRepresentações geograficas\nSimplificar o dataset e exportar para o expressa.xyz/deputado/\n\nO que se passou nas mais de 4000 sessões de discussão do parlamento Português que ocorreram desde 1976? \nNeste notebook vamos tentar visualizar o que se passou da maneira mais simples - contando palavras, e fazendo gráficos.\nPara obter os textos de todas as sessões usaremos o demo.cratica.org, onde podemos aceder facilmente a todas as sessões do parlamento de 1976 a 2015. Depois com um pouco de python, pandas e matplotlib vamos analisar o que se passou.\nPara executar estes notebook será necessário descarregar e abrir com o Jupiter Notebooks (a distribuição Anaconda faz com que instalar todas as ferramentas necessárias seja fácil - https://www.continuum.io/downloads)\nParte 1 - Reunir o dataset\nComecemos por carregar as sessões para um dataframe de pandas, para as podermos analisar mais facilmente. O demo.cratica.org esta organizado /sessoes/ano/mes/dia/. A quantidade todal de dadosé da ordem de 900 MB portanto cuidado com o trafego. E atenção que a transferencia de dados pode demorar bastante.\nPara cada data entre 1976 e 2015, vamos tentar transferir a página do democratica. Para as que existem, vamos extrair os dados na tag 'entry-content', remover as translineações e as tags, e passar para lowercase (pois o nosso propósito é contar palavras e ocorrências). 
Guardemos estes dados em duas listas (lista_datas, lista_sessoes), que depois vao ser transformadas nas duas colunas de um DataFrame de Pandas.", "from urllib import request \nimport zlib\nimport pandas\nfrom bs4 import BeautifulSoup #para processar o HTML\nimport re #para processar o html\n\n\nlista_datas = []\nlista_sessoes = []\nbytesTransferidos = 0\ni = 0\nfor ano in range(1976,2016):\n for mes in range(1,13):\n print(\"Processando %04d/%02d - bytes transferidos = %d...\"%(ano,mes,bytesTransferidos))\n for dia in range(1,32): #para cada dia possível, tenta transferiro ficheiro\n url = \"http://demo.cratica.org/sessoes/%04d/%02d/%02d/\"%(ano,mes,dia)\n #url = \"http://localhost:7888/radio-parlamento/backup170221/sessoes/%04d/%02d/%02d/\"%(ano,mes,dia)\n #transfere a pagina usando urllib \n r = request.Request(url)\n try: \n with request.urlopen(r) as f: #transfere o site\n dados = f.read()\n\n bytesTransferidos = bytesTransferidos + len(dados) #contabiliza quanto trafego estamos a usar\n # os dados em HTML têm mais informaçao do que queremos. 
vamos seleccionar apenas o elemento 'entre-content', que tem o texto do parlamento\n dados = ''.join([str(x).strip() for x in BeautifulSoup(dados,'lxml').find('div', class_=\"entry-content\").find_all()])\n # vamos retirar as tags de paragrafos e corrigir as translineações\n dados = dados.replace('-</p><p>','').replace('</p>',' ').replace('<p>',' ')\n # vamos retirar tags que faltem, pois nao interessam para a nossa análise (usando uma expressao regular, e assumindo que o codigo é html válido)\n dados = re.sub('<[^>]*>', '', dados)\n # texto para lowercase, para as procuras de palavras funcionarem independentemente da capitalização\n dados = dados.lower()\n \n if(len(dados) > 100): #se o texto nao for inexistente ou demasiado pequeno, adiciona á lista de sessões (há páginas que existem, mas não têm texto)\n lista_datas.append(\"%04d-%02d-%02d\"%(ano,mes,dia))\n lista_sessoes.append(dados)\n\n except request.URLError: #ficheiro nao existe, passa ao próximo\n pass\n\nprint('%d bytes transferidos, %d sessoes transferidas, entre %s e %s'%(bytesTransferidos,len(lista_sessoes), min(lista_datas).format(), max(lista_datas).format()))\n \ndados = {'data': pandas.DatetimeIndex(lista_datas), 'sessao': lista_sessoes }\nsessoes = pandas.DataFrame(dados, columns={'data','sessao'})\n", "Agora que temos os dados num dataframe podemos imediatamente tirar partido deles. 
Por exemplo representar o tamanho das sessoes em bytes ao longo do tempo é straightforward:", "%matplotlib inline\nimport pylab\nimport matplotlib\n\n#aplica a funcao len/length a cada elemento da series de dados, \n# criando uma columna com o numero de bytes de cada sessao\nsessoes['tamanho'] = sessoes['sessao'].map(len)\n\n#representa num grafico\nax = sessoes.plot(x='data',y='tamanho',figsize=(15,6),linewidth=0.1,marker='.',markersize=0.5)\nax.set_xlabel('Data da sessão')\nax.set_ylabel('Tamanho (bytes)')", "A sessão média é mais ou menos constante da ordem dos 200 mil bytes, excepto algumas sessões que periodicamente têm mais conteúdo (até >4x mais caracteres). \nA próxima coisa a fazer será guardar os dados, para não os termos que transferir novamente, e podermos usar nas próximas análises:", "sessoes.to_csv('sessoes_democratica_org.csv')", "Verificando o nome e tamanho do ficheiro:", "import os\nprint(str(os.path.getsize('sessoes_democratica_org.csv')) + ' bytes')\nprint(os.getcwd()+'/sessoes_democratica_org.csv')", "Código para carregar os dados: \n(este notebook pode ser começado por esta linha, se o ficheiro já existir)", "%matplotlib inline\nimport pylab\nimport matplotlib\nimport pandas\nimport numpy\n\n\ndateparse = lambda x: pandas.datetime.strptime(x, '%Y-%m-%d')\nsessoes = pandas.read_csv('sessoes_democratica_org.csv',index_col=0,parse_dates=['data'], date_parser=dateparse)", "Vamos só verificar se os dados ainda estão perceptíveis. Um ponto importante é se os acentos e cedilhas estão bem interpretados, pois gravar e abrir ficheiros pode confundir o python.", "sessoes", "Tudo OK. Óptimo. Vendo o inicio e fim da tabela reparamos imediatamente que o formato das sessões mudou de 1976 para 2015, de ter 'Número X' primeiro para a data por extenso primeiro. Passemos á frente." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
SHDShim/pytheos
examples/11_pvt-eos_fit-el_anh.ipynb
apache-2.0
[ "%cat 0Source_Citation.txt\n\n%matplotlib inline \n# %matplotlib notebook # for interactive", "For high dpi displays.", "%config InlineBackend.figure_format = 'retina'", "0. General note\n\n\nThis notebook shows an example of how to include anharmonic effects and/or electronic effects in the P-V-T EOS fitting.\n\n\nWe use data on SiC from Nisr et al. (2017, JGR-Planet).\n\n\nNote that the code here is for demonstration purpose. In fact, we did not find any evidence for including anharmonic and electronic effects in fitting the data for SiC. \n\n\n1. Global setup", "import numpy as np\nimport matplotlib.pyplot as plt\nimport uncertainties as uct\nfrom uncertainties import unumpy as unp\nimport pytheos as eos", "2. Setup for fitting with different gold pressure scales\nThe equations of state of gold from Fei et al. (2007, PNAS) and Dorogokupets and Dewaele (2007, HPR). These equations are provided in pytheos as built-in classes.", "au_eos = {'Fei2007': eos.gold.Fei2007bm3(), 'Dorogokupets2007': eos.gold.Dorogokupets2007()}", "Because we use Birch-Murnaghan EOS version of Fei2007 and Dorogokupets2007 used Vinet EOS, we create a dictionary to provide different static compression EOSs for the different pressure scales used.", "st_model = {'Fei2007': eos.BM3Model, 'Dorogokupets2007': eos.VinetModel}", "Assign initial values for the EOS parameters.", "k0_3c = {'Fei2007': 241.2, 'Dorogokupets2007': 243.0}\nk0p_3c = {'Fei2007': 2.84, 'Dorogokupets2007': 2.68}\nk0_6h = {'Fei2007': 243.1, 'Dorogokupets2007': 245.5}\nk0p_6h = {'Fei2007': 2.79, 'Dorogokupets2007': 2.59}", "Also for the thermal parameters. In this example, we will use the constant $q$ equation for the thermal part of the EOS.", "gamma0 = 1.06\nq = 1.\ntheta0 = 1200.", "We also setup for the physical constants of two different polymorphs of SiC.", "v0 = {'3C': 82.8042, '6H': 124.27}\nn_3c = 2.; z_3c = 4.\nn_6h = 2.; z_6h = 6.", "3. 
Data distribution (3C)\nThe data set is provided in a csv file.", "data = np.recfromcsv('./data/3C-HiTEOS-final.csv', case_sensitive=True, deletechars='')", "Set up variables for the data.", "v_std = unp.uarray( data['V(Au)'], data['sV(Au)'])\ntemp = unp.uarray(data['T(3C)'], data['sT(3C)'])\nv = unp.uarray(data['V(3C)'], data['sV(3C)'])", "Plot $P$-$V$-$T$ data in the $P$-$V$ and $P$-$T$ spaces.", "for key, value in au_eos.items():\n p = au_eos[key].cal_p(v_std, temp)\n eos.plot.thermal_data({'p': p, 'v': v, 'temp': temp}, title=key)", "4. Data fitting with constq equation (3C) with electronic effects\nNormally weight for each data point can be calculated from $\\sigma(P)$. In this case, using uncertainties, we can easily propagate the temperature and volume uncertainties to get the value.", "for key, value in au_eos.items():\n # calculate pressure\n p = au_eos[key].cal_p(v_std, temp)\n # add prefix to the parameters. this is important to distinguish thermal and static parameters\n eos_st = st_model[key](prefix='st_') \n eos_th = eos.ConstqModel(n_3c, z_3c, prefix='th_')\n eos_el = eos.ZharkovElecModel(n_3c, z_3c, prefix='el_')\n # define initial values for parameters\n params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key])\n params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q=q, theta0=theta0)\n params += eos_el.make_params(v0=v0['3C'], e0=0.1e-6, g=0.01)\n # construct PVT eos\n pvteos = eos_st + eos_th + eos_el\n # fix static parameters and some other well known parameters\n params['th_v0'].vary=False; params['th_gamma0'].vary=False; params['th_theta0'].vary=False\n params['th_q'].vary=False\n params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False\n params['el_v0'].vary=False#; params['el_e0'].vary=False#; params['el_g'].vary=False\n # calculate weights. 
setting it None results in unweighted fitting\n weights = 1./unp.std_devs(p) #None\n fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v), \n temp=unp.nominal_values(temp), weights=weights)\n print('********'+key)\n print(fit_result.fit_report())\n # plot fitting results\n eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key)", "5. Data fitting with constq equation (3C) with anharmonic effects", "for key, value in au_eos.items():\n # calculate pressure\n p = au_eos[key].cal_p(v_std, temp)\n # add prefix to the parameters. this is important to distinguish thermal and static parameters\n eos_st = st_model[key](prefix='st_') \n eos_th = eos.ConstqModel(n_3c, z_3c, prefix='th_')\n eos_anh = eos.ZharkovAnhModel(n_3c, z_3c, prefix='anh_')\n # define initial values for parameters\n params = eos_st.make_params(v0=v0['3C'], k0=k0_3c[key], k0p=k0p_3c[key])\n params += eos_th.make_params(v0=v0['3C'], gamma0=gamma0, q=q, theta0=theta0)\n params += eos_anh.make_params(v0=v0['3C'], a0=0.1e-6, m=0.01)\n # construct PVT eos\n pvteos = eos_st + eos_th + eos_anh\n # fix static parameters and some other well known parameters\n params['th_v0'].vary=False; params['th_gamma0'].vary=False; params['th_theta0'].vary=False\n params['th_q'].vary=False\n params['st_v0'].vary=False; params['st_k0'].vary=False; params['st_k0p'].vary=False\n params['anh_v0'].vary=False#; params['el_e0'].vary=False#; params['el_g'].vary=False\n # calculate weights. setting it None results in unweighted fitting\n weights = 1./unp.std_devs(p) #None\n fit_result = pvteos.fit(unp.nominal_values(p), params, v=unp.nominal_values(v), \n temp=unp.nominal_values(temp), weights=weights)\n print('********'+key)\n print(fit_result.fit_report())\n # plot fitting results\n eos.plot.thermal_fit_result(fit_result, p_err=unp.std_devs(p), v_err=unp.std_devs(v), title=key)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bashtage/statsmodels
examples/notebooks/ordinal_regression.ipynb
bsd-3-clause
[ "Ordinal Regression", "import numpy as np\nimport pandas as pd\nimport scipy.stats as stats\n\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel", "Loading a stata data file from the UCLA website.This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/ which is a R notebook from UCLA.", "url = \"https://stats.idre.ucla.edu/stat/data/ologit.dta\"\ndata_student = pd.read_stata(url)\n\ndata_student.head(5)\n\ndata_student.dtypes\n\ndata_student['apply'].dtype", "This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables:\n- their grade point average(gpa), a float between 0 and 4.\n- pared, a binary that indicates if at least one parent went to graduate school.\n- and public, a binary that indicates if the current undergraduate institution of the student is public or private.\napply, the target variable is categorical with ordered categories: unlikely < somewhat likely < very likely. It is a pd.Serie of categorical type, this is preferred over NumPy arrays.\nThe model is based on a numerical latent variable $y_{latent}$ that we cannot observe but that we can compute thanks to exogenous variables.\nMoreover we can use this $y_{latent}$ to define $y$ that we can observe.\nFor more details see the the Documentation of OrderedModel, the UCLA webpage or this book.\nProbit ordinal regression:", "mod_prob = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr='probit')\n\nres_prob = mod_prob.fit(method='bfgs')\nres_prob.summary()", "In our model, we have 3 exogenous variables(the $\\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.\nThose 3 estimations and their standard errors can be retrieved in the summary table.\nSince there are 3 categories in the target variable(unlikely, somewhat likely, very likely), we have two thresholds to estimate. 
\nAs explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual threshold values can be computed as follows:", "num_of_thresholds = 2\nmod_prob.transform_threshold_params(res_prob.params[-num_of_thresholds:])", "Logit ordinal regression:", "mod_log = OrderedModel(data_student['apply'],\n                       data_student[['pared', 'public', 'gpa']],\n                       distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()\n\npredicted = res_log.model.predict(res_log.params, exog=data_student[['pared', 'public', 'gpa']])\npredicted\n\npred_choice = predicted.argmax(1)\nprint('Fraction of correct choice predictions')\nprint((np.asarray(data_student['apply'].values.codes) == pred_choice).mean())", "Ordinal regression with a custom cumulative cLogLog distribution:\nIn addition to logit and probit regression, any continuous distribution from the SciPy.stats package can be used for the distr argument. Alternatively, one can define its own distribution simply by creating a subclass from rv_continuous and implementing a few methods.", "# using a SciPy distribution\nres_exp = OrderedModel(data_student['apply'],\n                       data_student[['pared', 'public', 'gpa']],\n                       distr=stats.expon).fit(method='bfgs', disp=False)\nres_exp.summary()\n\n# minimal definition of a custom scipy distribution.\nclass CLogLog(stats.rv_continuous):\n    def _ppf(self, q):\n        return np.log(-np.log(1 - q))\n\n    def _cdf(self, x):\n        return 1 - np.exp(-np.exp(x))\n\n\ncloglog = CLogLog()\n\n# definition of the model and fitting\nres_cloglog = OrderedModel(data_student['apply'],\n                           data_student[['pared', 'public', 'gpa']],\n                           distr=cloglog).fit(method='bfgs', disp=False)\nres_cloglog.summary()", "Using formulas - treatment of endog\nPandas' ordered categorical and numeric values are supported as dependent variable in formulas. 
Other types will raise a ValueError.", "modf_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa\", data_student,\n                                       distr='logit')\nresf_logit = modf_logit.fit(method='bfgs')\nresf_logit.summary()", "Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.", "data_student[\"apply_codes\"] = data_student['apply'].cat.codes * 2 + 5\ndata_student[\"apply_codes\"].head()\n\nOrderedModel.from_formula(\"apply_codes ~ 0 + pared + public + gpa\", data_student,\n                          distr='logit').fit().summary()\n\nresf_logit.predict(data_student.iloc[:5])", "Using string values directly as the dependent variable raises a ValueError.", "data_student[\"apply_str\"] = np.asarray(data_student[\"apply\"])\ndata_student[\"apply_str\"].head()\n\ndata_student.apply_str = pd.Categorical(data_student.apply_str, ordered=True)\ndata_student.public = data_student.public.astype(float)\ndata_student.pared = data_student.pared.astype(float)\n\nOrderedModel.from_formula(\"apply_str ~ 0 + pared + public + gpa\", data_student,\n                          distr='logit')", "Using formulas - no constant in model\nThe parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.\nPatsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As a workaround, statsmodels removes an explicit intercept. 
\nConsequently, there are two valid cases to get a design matrix without intercept.\n\nspecify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.\nspecify a model with an explicit intercept which statsmodels will remove.\n\nModels with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.\nIn the following we look at an example with an additional categorical variable.", "nobs = len(data_student)\ndata_student[\"dummy\"] = (np.arange(nobs) < (nobs / 2)).astype(float)", "explicit intercept, that will be removed:\nNote \"1 +\" is here redundant because it is patsy's default.", "modfd_logit = OrderedModel.from_formula(\"apply ~ 1 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit')\nresfd_logit = modfd_logit.fit(method='bfgs')\nprint(resfd_logit.summary())\n\nmodfd_logit.k_vars\n\nmodfd_logit.k_constant", "implicit intercept creates overparameterized model\nSpecifying \"0 +\" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.\npython\nOrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student, distr='logit')\nTo see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.\nThe parameters of the two dummy variable columns and the first threshold are not separately identified. 
Estimates for those parameters and availability of standard errors are arbitrary and depend on numerical details that differ across environments.\nSome summary measures like the log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.", "modfd2_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student,\n                                         distr='logit', hasconst=False)\nresfd2_logit = modfd2_logit.fit(method='bfgs')\nprint(resfd2_logit.summary())\n\nresfd2_logit.predict(data_student.iloc[:5])\n\nresf_logit.predict()", "Binary Model compared to Logit\nIf there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.\nThe models are (theoretically) identical in this case except for the parameterization of the constant. Logit, like most other models, requires an intercept in general. This corresponds to the threshold parameter in the OrderedModel, however, with opposite sign.\nThe implementation differs and not all of the same results statistics and post-estimation features are available. 
Estimated parameters and other results statistics differ mainly based on convergence tolerance of the optimization.", "from statsmodels.discrete.discrete_model import Logit\nfrom statsmodels.tools.tools import add_constant", "We drop the middle category from the data and keep the two extreme categories.", "mask_drop = data_student['apply'] == \"somewhat likely\"\ndata2 = data_student.loc[~mask_drop, :]\n# we need to remove the category also from the Categorical Index\ndata2['apply'].cat.remove_categories(\"somewhat likely\", inplace=True)\ndata2.head()\n\nmod_log = OrderedModel(data2['apply'],\n                       data2[['pared', 'public', 'gpa']],\n                       distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()", "The Logit model does not have a constant by default, so we have to add it to our explanatory variables.\nThe results are essentially identical between Logit and ordered model up to numerical precision mainly resulting from convergence tolerance in the estimation.\nThe only difference is in the sign of the constant: Logit and OrderedModel have opposite signs of the constant. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including a constant column in the design matrix.", "ex = add_constant(data2[['pared', 'public', 'gpa']], prepend=False)\nmod_logit = Logit(data2['apply'].cat.codes, ex)\n\nres_logit = mod_logit.fit(method='bfgs', disp=False)\n\nres_logit.summary()", "Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.\nAs an example, we specify the HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate.", "res_logit_hac = mod_logit.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\nres_log_hac = mod_log.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\n\nres_logit_hac.bse.values - res_log_hac.bse" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
laserkelvin/IPython-Notebook-Tools
Spectral Analysis Showcase.ipynb
gpl-3.0
[ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\nfrom SpectralAnalysis import *\nimport numpy as np\nimport pandas as pd\nfrom bokeh.io import output_notebook\nimport matplotlib.pyplot as plt\n\noutput_notebook()", "SpectralAnalysis showcase\nI've written a library for regular plotting and curve fitting that is much, much nicer than the FittingRoutines module. In this notebook I'll be showing off (I guess in a README-esque way) of how all the functions should be called as I've written them.\nThe first notable thing is a class Model, which will house the functions used to fit our sets of data. The advantage of doing it this way is that necessary variables will be detected, and reported back to the user to remind them what is required.\nTo create a Model instance, we give it a string name:", "testmodel = Model(\"Test\") # set up a test model with linear fit", "It'll then remind you to set a function, because as it stands it's only nominally a function. We can either define our own function within the notebook, or use one of the preset functions I've written.\nTo start off, let's make it a straight line with the equation\n$$ y = mx + c $$", "testmodel.SetFunction(Linear)", "The instance will automagically detect what variables are required for the function, providing the function you gave it is a one-liner (only has return) just because of the way python works. In this case, we need the gradient and offset values which we can set by sending a dictionary along its way", "testmodel.SetVariables({\"Gradient\": 5., \"Offset\": 2.}) # Dictionary with variables", "That's all we need to do to define a Model instance! Let's test it out by creating some fake linear data.", "x = np.linspace(0,10,20)\nNoise = np.random.rand(20)\ny = Linear(x, 6., 3.) + Noise # Generate some data to fit to", "Now I've also written a function that will package some xy data in the correct format for curve fitting. 
This is done by using the FormatData() function.", "df = FormatData(x, y)", "df is now a pandas dataframe, where x is stored as the index and y is a column called \"Y Range\", just for internal consistency.\nNow we're ready to fit the data by calling the function FitModel. The input parameters for this are a reference to the dataframe holding the target data, as well as a reference to the Model instance. What it returns are:\n\nOptimised parameters\nFit report\nFitted curves, including the original data\nCovariance matrix", "popt, report, fits, pcov = FitModel(df, testmodel)", "And it worked! Now we can call the plotting interface to show us the results. For this, I've written a function called PlotData(). The dataframe input is formatted such that x is the index, and will work with up to 11 columns. A keyword Interface will also let you choose between using matplotlib (good for static plots, saves space and time) and bokeh (interactive and pretty, slow). Another interesting argument is Labels, which is a dictionary for specifying the axes labels and whatnot. If it is left unspecified, it will just plot without labels.", "Labels = {\"X Label\": \"Durr\",\n          \"Y Label\": \"Hurr\",\n          \"Title\": \"Durr vs. Hurr\",\n          \"X Limits\": [0, 5],\n          \"Y Limits\": [0, 20],}\nPlotData(fits, Interface=\"pyplot\", Labels=Labels)", "A more advanced function, such as a Gaussian, is shown below. The procedure I went through is exactly the same as above.\nThe one thing I did differently was include boundary conditions for the curve fitting. This is done by calling the Model.SetBounds() method, where the input is shown below. Another thing demonstrated below is the use of the Spectrum class. 
I've written it to store and reference data, but admittedly haven't gone very far with it beyond storing the spectra as an attribute.", "FC063b = Spectrum(\"./FC063b_sub_KER.dat\")\n\nFC063b.PlotAll()\n\nGaussianModel = Model(\"Gaussian\")\nGaussianModel.SetFunction(GaussianFunction)\n\nGaussianModel.SetVariables({\"Amplitude\": 50., \"Centre\": 2500., \"Width\": 300.})\nBoundaries = ([0., 2400., 200.], [1e3, 2900., 500.])\nGaussianModel.SetBounds(Boundaries)\n\npopt, report, fits, cov = FitModel(FC063b.Data, GaussianModel)\n\nPlotData(fits)", "Custom functions\nWith the Model class, it is possible to write your own objective function, i.e. with convolutions, combinations etc.\nLet's start with the convolution of a Gaussian and a Boltzmann. I want to fit only the temperature and the total amplitude, but none of the other stuff. This is something I did for the $T_1$ methyl data, where the impulsive reservoir is fixed while the statistical reservoir grows with excitation energy.\nFor this, I'm going to use a routine I've written that will do the convolution of two arrays, while returning the convolution result in the same dimensions as the input using a 1D interpolation.", "def ConvolveGB(x, A, T):\n return A * ConvolveArrays(GaussianFunction(x, 1., 2300., 250.), # None of the parameters are floated\n BoltzmannFunction(x, 1., T), # Only the temperature!\n x)", "So now we'll setup a new instance of Model, and we'll call it the \"Triplet Model\":", "TripletModel = Model(\"Triplet Convolution\")\n\nTripletModel.SetFunction(ConvolveGB)", "The instance method automatically detects which variables are actually required, making it quite trivial to set up new model functions to fit anything you want (in theory anyway).", "TripletModel.SetVariables({\"A\": 100., \"T\": 10.})\nTripletModel.SetBounds(([0., 0.],\n [800., 10.]))\n\npopt, report, fits, cov = FitModel(FC063b.Data, TripletModel)\n\nPlotData(fits)\n\nnp.diagonal(cov)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
dev/_downloads/c1a3b4ec1d09d4debc078297d433a9b2/Point_Interpolation.ipynb
bsd-3-clause
[ "%matplotlib inline", "Point Interpolation\nCompares different point interpolation approaches.", "import cartopy.crs as ccrs\nimport cartopy.feature as cfeature\nfrom matplotlib.colors import BoundaryNorm\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom metpy.cbook import get_test_data\nfrom metpy.interpolate import (interpolate_to_grid, remove_nan_observations,\n remove_repeat_coordinates)\nfrom metpy.plots import add_metpy_logo\n\ndef basic_map(proj, title):\n \"\"\"Make our basic default map for plotting\"\"\"\n fig = plt.figure(figsize=(15, 10))\n add_metpy_logo(fig, 0, 80, size='large')\n view = fig.add_axes([0, 0, 1, 1], projection=proj)\n view.set_title(title)\n view.set_extent([-120, -70, 20, 50])\n view.add_feature(cfeature.STATES.with_scale('50m'))\n view.add_feature(cfeature.OCEAN)\n view.add_feature(cfeature.COASTLINE)\n view.add_feature(cfeature.BORDERS, linestyle=':')\n return fig, view\n\n\ndef station_test_data(variable_names, proj_from=None, proj_to=None):\n with get_test_data('station_data.txt') as f:\n all_data = np.loadtxt(f, skiprows=1, delimiter=',',\n usecols=(1, 2, 3, 4, 5, 6, 7, 17, 18, 19),\n dtype=np.dtype([('stid', '3S'), ('lat', 'f'), ('lon', 'f'),\n ('slp', 'f'), ('air_temperature', 'f'),\n ('cloud_fraction', 'f'), ('dewpoint', 'f'),\n ('weather', '16S'),\n ('wind_dir', 'f'), ('wind_speed', 'f')]))\n\n all_stids = [s.decode('ascii') for s in all_data['stid']]\n\n data = np.concatenate([all_data[all_stids.index(site)].reshape(1, ) for site in all_stids])\n\n value = data[variable_names]\n lon = data['lon']\n lat = data['lat']\n\n if proj_from is not None and proj_to is not None:\n proj_points = proj_to.transform_points(proj_from, lon, lat)\n return proj_points[:, 0], proj_points[:, 1], value\n\n return lon, lat, value\n\n\nfrom_proj = ccrs.Geodetic()\nto_proj = ccrs.AlbersEqualArea(central_longitude=-97.0000, central_latitude=38.0000)\n\nlevels = list(range(-20, 20, 1))\ncmap = plt.get_cmap('magma')\nnorm = 
BoundaryNorm(levels, ncolors=cmap.N, clip=True)\n\nx, y, temp = station_test_data('air_temperature', from_proj, to_proj)\n\nx, y, temp = remove_nan_observations(x, y, temp)\nx, y, temp = remove_repeat_coordinates(x, y, temp)", "Scipy.interpolate linear", "gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='linear', hres=75000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj, 'Linear')\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)", "Natural neighbor interpolation (MetPy implementation)\nReference &lt;https://cwp.mines.edu/wp-content/uploads/sites/112/2018/09/cwp-657.pdf&gt;_", "gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='natural_neighbor', hres=75000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj, 'Natural Neighbor')\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)", "Cressman interpolation\nsearch_radius = 100 km\ngrid resolution = 25 km\nmin_neighbors = 1", "gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='cressman', minimum_neighbors=1,\n hres=75000, search_radius=100000)\nimg = np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj, 'Cressman')\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)", "Barnes Interpolation\nsearch_radius = 100km\nmin_neighbors = 3", "gx, gy, img1 = interpolate_to_grid(x, y, temp, interp_type='barnes', hres=75000,\n search_radius=100000)\nimg1 = np.ma.masked_where(np.isnan(img1), img1)\nfig, view = basic_map(to_proj, 'Barnes')\nmmb = view.pcolormesh(gx, gy, img1, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)", "Radial basis function interpolation\nlinear", "gx, gy, img = interpolate_to_grid(x, y, temp, interp_type='rbf', hres=75000, rbf_func='linear',\n rbf_smooth=0)\nimg = 
np.ma.masked_where(np.isnan(img), img)\nfig, view = basic_map(to_proj, 'Radial Basis Function')\nmmb = view.pcolormesh(gx, gy, img, cmap=cmap, norm=norm)\nfig.colorbar(mmb, shrink=.4, pad=0, boundaries=levels)\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v0.10/_downloads/56e68110d2faf6be8284d896c8f4cd23/Natural_Neighbor_Verification.ipynb
bsd-3-clause
[ "%matplotlib inline", "Natural Neighbor Verification\nWalks through the steps of Natural Neighbor interpolation to validate that the algorithmic\napproach taken in MetPy is correct.\nFind natural neighbors visual test\nA triangle is a natural neighbor for a point if the\ncircumscribed circle &lt;https://en.wikipedia.org/wiki/Circumscribed_circle&gt;_ of the\ntriangle contains that point. It is important that we correctly grab the correct triangles\nfor each point before proceeding with the interpolation.\nAlgorithmically:\n\n\nWe place all of the grid points in a KDTree. These provide worst-case O(n) time\n complexity for spatial searches.\n\n\nWe generate a Delaunay Triangulation &lt;https://docs.scipy.org/doc/scipy/\n reference/tutorial/spatial.html#delaunay-triangulations&gt;_\n using the locations of the provided observations.\n\n\nFor each triangle, we calculate its circumcenter and circumradius. Using\n KDTree, we then assign each grid a triangle that has a circumcenter within a\n circumradius of the grid's location.\n\n\nThe resulting dictionary uses the grid index as a key and a set of natural\n neighbor triangles in the form of triangle codes from the Delaunay triangulation.\n This dictionary is then iterated through to calculate interpolation values.\n\n\nWe then traverse the ordered natural neighbor edge vertices for a particular\n grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate\n proportional polygon areas.\n\n\nCircumcenter of (n - 1), n, grid_location\n Circumcenter of (n + 1), n, grid_location\nDetermine what existing circumcenters (ie, Delaunay circumcenters) are associated\n with vertex n, and add those as polygon vertices. 
Calculate the area of this polygon.\n\n\nIncrement the current edges to be checked, i.e.:\n n - 1 = n, n = n + 1, n + 1 = n + 2\n\n\nRepeat steps 5 & 6 until all of the edge combinations of 3 have been visited.\n\n\nRepeat steps 4 through 7 for each grid cell.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d\nfrom scipy.spatial.distance import euclidean\n\nfrom metpy.interpolate import geometry\nfrom metpy.interpolate.points import natural_neighbor_point", "For a test case, we generate 10 random points and observations, where the\nobservation values are just the x coordinate value times the y coordinate\nvalue divided by 1000.\nWe then create two test points (grid 0 & grid 1) at which we want to\nestimate a value using natural neighbor interpolation.\nThe locations of these observations are then used to generate a Delaunay triangulation.", "np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = (pts[:, 0] * pts[:, 0]) / 1000\n\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\n\nfor i, zval in enumerate(zp):\n ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))\n\nsim_gridx = [30., 60.]\nsim_gridy = [30., 60.]\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\nax.set_aspect('equal', 'datalim')\nax.set_title('Triangulation of observations and test grid cell '\n 'natural neighbor interpolation values')\n\nmembers, tri_info = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0],\n tri_info)\nax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\n\nval = natural_neighbor_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1],\n 
tri_info)\nax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))", "Using the circumcenter and circumcircle radius information from\n:func:metpy.interpolate.geometry.find_natural_neighbors, we can visually\nexamine the results to see if they are correct.", "def draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)\n\n\nmembers, tri_info = geometry.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\ndelaunay_plot_2d(tri, ax=ax)\nax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)\n\nfor i, info in tri_info.items():\n x_t = info['cc'][0]\n y_t = info['cc'][1]\n\n if i in members[1] and i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[1]:\n draw_circle(ax, x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n else:\n draw_circle(ax, x_t, y_t, info['r'], 'k:', str(i) + ': no match')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)\n\nax.set_aspect('equal', 'datalim')\nax.legend()", "What?....the circle from triangle 8 looks pretty darn close. 
Why isn't\ngrid 0 included in that circle?", "x_t, y_t = tri_info[8]['cc']\nr = tri_info[8]['r']\n\nprint('Distance between grid0 and Triangle 8 circumcenter:',\n euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))\nprint('Triangle 8 circumradius:', r)", "Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)\nGrab the circumcenters and radii for natural neighbors", "cc = np.array([tri_info[m]['cc'] for m in members[0]])\nr = np.array([tri_info[m]['r'] for m in members[0]])\n\nprint('circumcenters:\\n', cc)\nprint('radii\\n', r)", "Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram\n&lt;https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams&gt;_\nwhich serves as a complementary (but not necessary)\nspatial data structure that we use here simply to show areal ratios.\nNotice that the two natural neighbor triangle circumcenters are also vertices\nin the Voronoi plot (green dots), and the observations are in the polygons (blue dots).", "vor = Voronoi(list(zip(xp, yp)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility\nvoronoi_plot_2d(vor, ax=ax)\n\nnn_ind = np.array([0, 5, 7, 8])\nz_0 = zp[nn_ind]\nx_0 = xp[nn_ind]\ny_0 = yp[nn_ind]\n\nfor x, y, z in zip(x_0, y_0, z_0):\n ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))\n\nax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)\nax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',\n label='natural neighbor\\ncircumcenters')\n\nfor center in cc:\n ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),\n xy=(center[0] + 1, center[1] + 1))\n\ntris = tri.points[tri.simplices[members[0]]]\nfor triangle in tris:\n x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]\n y = [triangle[0, 1], triangle[1, 1], 
triangle[2, 1], triangle[0, 1]]\n ax.plot(x, y, ':', linewidth=2)\n\nax.legend()\nax.set_aspect('equal', 'datalim')\n\n\ndef draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):\n \"\"\"Draw one of the natural neighbor polygons with some information.\"\"\"\n pts = np.array(polygon)[ConvexHull(polygon).vertices]\n for i, pt in enumerate(pts):\n ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],\n [pt[1], pts[(i + 1) % len(pts)][1]], 'k-')\n\n avex, avey = np.mean(pts, axis=0)\n ax.annotate('area: {:.3f}'.format(geometry.area(pts)), xy=(avex + off_x, avey + off_y),\n fontsize=12)\n\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc1, cc2])\n\ncc1 = geometry.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)\n\ncc1 = geometry.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = geometry.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])", "Put all of the generated polygon areas and their affiliated values in arrays.\nCalculate the total area of all of the generated polygons.", "areas = np.array([60.434, 448.296, 25.916, 70.647])\nvalues = np.array([0.064, 1.156, 2.809, 0.225])\ntotal_area = np.sum(areas)\nprint(total_area)", "For each polygon area, calculate its percent of total area.", "proportions = areas / total_area\nprint(proportions)", "Multiply the percent of total area by the respective values.", "contributions = proportions * values\nprint(contributions)", "The sum of this array is the interpolation value!", "interpolation_value = np.sum(contributions)\nfunction_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], 
sim_gridy[0]), tri,\n members[0], tri_info)\n\nprint(interpolation_value, function_output)", "The values are slightly different due to truncating the area values in\nthe above visual example to the 3rd decimal place.", "plt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
patryk-oleniuk/emotion_recognition
temp/main_emotion_recognition_Patryk.ipynb
gpl-3.0
[ "A Network Tour of Data Science, EPFL 2016\nProject: Facial Emotion Recognition\nDataset taken from: kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge\n<br>\n<br>\nstudents: Patryk Oleniuk, Carmen Galotta\nThe project presented here is an algorithm to recognize and detect emotions from a face picture. \nOf course, the task of recognize face emotions is very easy for humans to do even if somethimes is really hard to understand how a person feels, but what can be easily understood thanks to human's brain, is difficult to emulate by a machine.\nThe aim of this project is to classify faces in discrete human emotions. Due to the success of Neural Network in images classification tasks it has been tought that employing it could be a good idea in also face emotion.\nThe dataset has been taken from the kaggle competition and consists of 48x48 grey images already labeled with a number coding for classes of emotions, namely: \n0-Angry<br>\n1-Disgust<br>\n2-Fear<br>\n3-Happy<br>\n4-Sad<br>\n5-Surprise<br>\n6-Neutral<br>\nThe faces are mostly centered in the image.\nConfiguration, dataset file", "import random\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\nimport csv\nimport scipy.misc\nimport time\nimport collections\nimport os\nimport utils as ut\nimport importlib\nimport copy\n\nimportlib.reload(ut)\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (20.0, 20.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n#Data Visualization\n# Load the shortened raw CSV data, it contains only 300 pictures with labels\nemotions_dataset_dir = 'fer2013_full.csv'\n\n#obtaining the number of line of the csv file\nfile = open(emotions_dataset_dir)\nnumline = len(file.readlines())\nprint ('Number of data in the dataset:',numline)\n", 
"Load the data from *.csv file\nThe first step is to load the data from the .csv file. <br> The format of the csv line is<br>\nclass{0,1,2,3,4,5,6},pix0 pix2304,DataUsage(not used)<br>\ne.g.<br>\n2,234 1 34 23 ..... 234 256 0,Training<br>\nThe picture is always 48x48 pixels, 0-255 greyscale.\nRemove strange data\nIn the database there are some images thar are not good (e.g. some images are pixelated, unrelevant, from animations).\nIt has been tried to filter them by looking at the maximum of the histogram. If the image is very homogenous, the maximum value of the histogram will be very high (that is to say above a certain threshold) then this image is filtered out. Of course in this way are also removed some relevant information, but it's better for the CNN not to consider these images.\nMerge class 0 and 1\nWe discovered that class 1 has a very small amount of occurance in the test data et. This class, (disgust) is very similar to anger and that is why we merger class 0 and 1 together.\nTherefore, the recognized emotions and labels are\n0-Angry + Disgust\n1-Fear\n2-Happy\n3-Sad\n4-Surprise\n5-Neutral", "#Load the file in csv\nifile = open(emotions_dataset_dir, \"rt\")\nreader = csv.reader(ifile)\n\nhist_threshold = 350 # images above this threshold will be removed\nhist_div = 100 #parameter of the histogram\n\nprint('Loading Images. It may take a while, depending on the database size.')\nimages, emotions, strange_im, num_strange, num_skipped = ut.load_dataset(reader, numline, hist_div, hist_threshold)\n\nifile.close()\n\nprint('Skipped', num_skipped, 'happy class images.')\nprint(str( len(images) ) + ' are left after \\'strange images\\' removal.')\nprint('Deleted ' + str( num_strange ) + ' strange images. 
Images are shown below')\n\n\n# showing strange images\nplt.rcParams['figure.figsize'] = (5.0, 5.0) # set default size of plots\nidxs = np.random.choice(range(1,num_strange ), 6, replace=False)\nfor i, idx in enumerate(idxs):\n plt_idx = i\n plt.subplot(1, 6, plt_idx+1)\n plt.imshow(strange_im[idx])\n plt.axis('off')\n if(i == 0):\n plt.title('Some of the images removed from dataset (max(histogram) thresholded)')\nplt.show()", "Explore the correct data\nPlot some random pictures from each class.", "classes = [0,1,2,3,4,5]\nstr_emotions = ['angry','scared','happy','sad','surprised','normal']\nnum_classes = len(classes)\nsamples_per_class = 6\nplt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(emotions == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(images[idx])\n y_h, x_h = np.histogram( images[idx], hist_div );\n plt.axis('off')\n if(i == 0):\n plt.title(str_emotions[y] )\nplt.show()", "Prepare the Data for CNN\nHere the initial data have been divided to create train and test data. 
<bv>\nThis two subsets have both an associated label to train the neural network and to test its accuracy with the test data.\nThe number of images used for each category of emotions is shown both for the train as for the test data.", "print('number of clean data:' + str(images.shape[0]) + ' 48x48 pix , 0-255 greyscale images')\nn_all = images.shape[0];\nn_train = 64; # number of data for training and for batch\n\n# dividing the input data\ntrain_data_orig = images[0:n_all-n_train,:,:]\ntrain_labels = emotions[0:n_all-n_train]\ntest_data_orig = images[n_all-n_train:n_all,:,:]\ntest_labels = emotions[n_all-n_train:n_all]\n\n# Convert to float\ntrain_data_orig = train_data_orig.astype('float32')\ny_train = train_labels.astype('float32')\ntest_data_orig = test_data_orig.astype('float32')\ny_test = test_labels.astype('float32')\n\nprint('orig train data ' + str(train_data_orig.shape))\nprint('orig train labels ' + str(train_labels.shape) + 'from ' + str(train_labels.min()) + ' to ' + str(train_labels.max()) )\nprint('orig test data ' + str(test_data_orig.shape))\nprint('orig test labels ' + str(test_labels.shape)+ 'from ' + str(test_labels.min()) + ' to ' + str(test_labels.max()) )\n\nfor i in range (0, 5): \n print('TRAIN: number of' , i, 'labels',len(train_labels[train_labels == i]))\n\nfor i in range (0, 5): \n print('TEST: number of', i, 'labels',len(test_labels[test_labels == i]))\n", "Prepare the data for CNN\nConvert, normalize, subtract the const mean value from the data images.", "# Data pre-processing\nn = train_data_orig.shape[0];\ntrain_data = np.zeros([n,48**2])\nfor i in range(n):\n xx = train_data_orig[i,:,:]\n xx -= np.mean(xx)\n xx /= np.linalg.norm(xx)\n train_data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])\n\nn = test_data_orig.shape[0]\ntest_data = np.zeros([n,48**2])\nfor i in range(n):\n xx = test_data_orig[i,:,:]\n xx -= np.mean(xx)\n xx /= np.linalg.norm(xx)\n test_data[i] = 
np.reshape(xx,[-1])\n\n#print(train_data.shape)\n#print(test_data.shape)\n#print(train_data_orig[0][2][2])\n#print(test_data[0][2])\nplt.rcParams['figure.figsize'] = (2.0, 2.0) # set default size of plots\nplt.imshow(train_data[4].reshape([48,48]));\nplt.title('example image after processing');\n\n# Convert label values to one_hot vector\n\ntrain_labels = ut.convert_to_one_hot(train_labels,num_classes)\ntest_labels = ut.convert_to_one_hot(test_labels,num_classes)\n\nprint('train labels shape',train_labels.shape)\nprint('test labels shape',test_labels.shape)", "Model 1 - Overfitting the data TODO not overfitting with 35k data\nIn the first model it has been implemented a baseline softmax classifier using a single convolutional layer and a one fully connected layer. For the initial baseline\nit has not be used any regularization, dropout, or batch normalization.\nThe equation of the classifier is simply:\n$$\ny=\\textrm{softmax}(ReLU( x \\ast W_1+b_1)W_2+b_2) \n$$", "# Define computational graph (CG)\nbatch_size = n_train # batch size\nd = train_data.shape[1] # data dimensionality\nnc = 6 # number of classes\n\n# CG inputs\nxin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())\ny_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())\n#d = tf.placeholder(tf.float32);\n\n# Convolutional layer\nK0 = 8 # size of the patch\nF0 = 64 # number of filters\nncl0 = K0*K0*F0\nWcl0 = tf.Variable(tf.truncated_normal([K0,K0,1,F0], stddev=tf.sqrt(2./tf.to_float(ncl0)) )); print('Wcl=',Wcl0.get_shape())\n#bcl0 = tf.Variable(tf.zeros([F0])); print('bcl=',bcl0.get_shape())\nbcl0 = bias_variable([F0]); print('bcl0=',bcl0.get_shape()) #in ReLu case, small positive bias added to prevent killing of gradient when input is negative.\n\nx_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d=',x_2d0.get_shape())\nx = tf.nn.conv2d(x_2d0, Wcl0, strides=[1, 1, 1, 1], padding='SAME')\nx += bcl0; print('x2=',x.get_shape())\n\n# 
ReLU activation\nx = tf.nn.relu(x)\n\n# Dropout\n#x = tf.nn.dropout(x, 0.25)\n\n# Fully Connected layer\nnfc = 48*48*F0\nx = tf.reshape(x, [batch_size,-1]); print('x3=',x.get_shape())\nWfc = tf.Variable(tf.truncated_normal([nfc,nc], stddev=tf.sqrt(2./tf.to_float(nfc+nc)) )); print('Wfc=',Wfc.get_shape())\nbfc = tf.Variable(tf.zeros([nc])); print('bfc=',bfc.get_shape())\ny = tf.matmul(x, Wfc); print('y1=',y.get_shape())\ny += bfc; print('y2=',y.get_shape())\n\n# Softmax\ny = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())\n\n# Loss\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))\ntotal_loss = cross_entropy\n\n# Optimization scheme\n#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)\ntrain_step = tf.train.AdamOptimizer(0.004).minimize(total_loss)\n\n# Accuracy\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n# Run Computational Graph\nn = train_data.shape[0]\nindices = collections.deque()\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\nfor i in range(1001):\n \n # Batch extraction\n if len(indices) < batch_size:\n indices.extend(np.random.permutation(n)) \n idx = [indices.popleft() for i in range(batch_size)]\n batch_x, batch_y = train_data[idx,:], train_labels[idx]\n #print(batch_x.shape,batch_y.shape)\n \n # Run CG for vao to increase the test acriable training\n _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y})\n \n # Run CG for test set\n if not i%100:\n print('\\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)\n acc_test = sess.run(accuracy, feed_dict={xin: test_data, y_label: test_labels})\n print('test accuracy=',acc_test)", "As it is possible to see from this result, the model overfits the training data already at iteration 400, while getting a test accuracy of only 28%.\nIn order to prevent overfitting 
in the following model have been applied different techniques such as dropout and pool, as well as tried to implement a neural network of more layers. \nThis should help and improve the model since the first convolutional layer will just extract some simplest characteristics of the image such as edges, lines and curves. Adding layers will improve the performances because they will detect some high level feature which in this case could be really relevant since it's about face expressions.\nAdvanced computational graphs - functions", "d = train_data.shape[1]\n\n#Défining network\ndef weight_variable2(shape, nc10):\n initial2 = tf.random_normal(shape, stddev=tf.sqrt(2./tf.to_float(ncl0)) )\n return tf.Variable(initial2)\ndef conv2dstride2(x,W):\n return tf.nn.conv2d(x,W,strides=[1, 2, 2, 1], padding='SAME')\n\ndef conv2d(x,W):\n return tf.nn.conv2d(x,W,strides=[1, 1, 1, 1], padding='SAME')\ndef max_pool_2x2(x):\n return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')\ndef weight_variable(shape):\n initial = tf.truncated_normal(shape, stddev=1/np.sqrt(d/2) )\n return tf.Variable(initial)\ndef bias_variable(shape):\n initial = tf.constant(0.01,shape=shape)\n return tf.Variable(initial)\n", "Model 2 - 4 x Convolutional Layers, 1x Fully Connected", "tf.reset_default_graph()\n\n# Define computational graph (CG)\nbatch_size = n_train # batch size\nd = train_data.shape[1] # data dimensionality\nnc = 6 # number of classes\n\n# CG inputs\nxin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())\ny_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())\n#d = tf.placeholder(tf.float32);\n\n# Convolutional layer\nK0 = 7 # size of the patch\nF0 = 16 # number of filters\nncl0 = K0*K0*F0\n\nK1 = 5 # size of the patch\nF1 = 16 # number of filters\nncl0 = K1*K1*F1\n\nK2 = 3 # size of the patch\nF2 = 2 # number of filters\nncl0 = K2*K2*F2\n\nnfc = int(48*48*F0/4)\nnfc1 = 
int(48*48*F1/4)\nnfc2 = int(48*48*F2/4)\n\nkeep_prob_input=tf.placeholder(tf.float32)\n\n#First set of conv followed by conv stride 2 operation and dropout 0.5\nW_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())\nb_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())\nx_2d0 = tf.reshape(xin, [-1,48,48,1]); print('x_2d0=',x_2d0.get_shape())\n\nh_conv1=tf.nn.relu(conv2d(x_2d0,W_conv1)+b_conv1); print('h_conv1=',h_conv1.get_shape())\nh_conv1= tf.nn.dropout(h_conv1,keep_prob_input);\n\n# 2nd convolutional layer \nW_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())\nb_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())\n\nh_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); print('h_conv2=',h_conv2.get_shape())\nh_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())\n\n# reshaping for fully connected\nh_conv2_pooled_rs = tf.reshape(h_conv2_pooled, [batch_size,-1]); print('x_rs',h_conv2_pooled_rs.get_shape());\nW_norm3 = weight_variable([nfc1, nfc]); print('W_norm3=',W_norm3.get_shape())\nb_conv3 = bias_variable([nfc1]); print('b_conv3=',b_conv3.get_shape())\n\n# fully connected layer\nh_full3 = tf.matmul( W_norm3, tf.transpose(h_conv2_pooled_rs) ); print('h_full3=',h_full3.get_shape())\nh_full3 = tf.transpose(h_full3); print('h_full3=',h_full3.get_shape())\nh_full3 += b_conv3; print('h_full3=',h_full3.get_shape())\n\nh_full3=tf.nn.relu(h_full3); print('h_full3=',h_full3.get_shape())\nh_full3=tf.nn.dropout(h_full3,keep_prob_input); print('h_full3_dropout=',h_full3.get_shape())\n#reshaping back to conv\nh_full3_rs = tf.reshape(h_full3, [batch_size, 24,24,-1]); print('h_full3_rs=',h_full3_rs.get_shape())\n\n#Second set of conv followed by conv stride 2 operation\nW_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())\nb_conv4=bias_variable([F1]); 
print('b_conv4=',b_conv4.get_shape())\n\nh_conv4=tf.nn.relu(conv2d(h_full3_rs,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())\nh_conv4 = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4.get_shape())\n\n# reshaping for fully connected\nh_conv4_pooled_rs = tf.reshape(h_conv4, [batch_size,-1]); print('x2_rs',h_conv4_pooled_rs.get_shape());\nW_norm4 = weight_variable([ 2304, nc]); print('W_norm4=',W_norm4.get_shape())\nb_conv4 = tf.Variable(tf.zeros([nc])); print('b_conv4=',b_conv4.get_shape())\n\n# fully connected layer\nh_full4 = tf.matmul( h_conv4_pooled_rs, W_norm4 ); print('h_full4=',h_full4.get_shape())\nh_full4 += b_conv4; print('h_full4=',h_full4.get_shape())\n\ny = h_full4; \n\n## Softmax\ny = tf.nn.softmax(y); print('y(SOFTMAX)=',y.get_shape())\n\n# Loss\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))\ntotal_loss = cross_entropy\n\n# Optimization scheme\n#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)\ntrain_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)\n\n# Accuracy\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n# Run Computational Graph\nn = train_data.shape[0]\nindices = collections.deque()\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\nfor i in range(15001):\n \n # Batch extraction\n if len(indices) < batch_size:\n indices.extend(np.random.permutation(n)) \n idx = [indices.popleft() for i in range(batch_size)]\n batch_x, batch_y = train_data[idx,:], train_labels[idx]\n #print(batch_x.shape,batch_y.shape)\n \n # Run CG for vao to increase the test acriable training\n _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.2})\n \n # Run CG for test set\n if not i%50:\n print('\\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)\n acc_test = sess.run(accuracy, 
feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})\n print('test accuracy=',acc_test)", "Computational graph - 6 Layers, Conv-Relu-Maxpool, 1 Fully Connected L.\n$$\nx= maxpool2x2( ReLU( ReLU( x* W_1+b_1) * W_2+b_2))$$ 2 times (also for $W_3,b_3 W_4,b_4$) \n$$\nthen -> ReLU( x* W_5+b_5) $$ (1 additional conv layer)\n$$\nthen-> y=\\textrm{softmax} {( x W_6+b_1)}$$(fully connected layer at the end)", "tf.reset_default_graph()\n\n# implementation of Conv-Relu-COVN-RELU - pool\n# based on : http://cs231n.github.io/convolutional-networks/\n\n# Define computational graph (CG)\nbatch_size = n_train # batch size\nd = train_data.shape[1] # data dimensionality\nnc = 6 # number of classes\n\n# CG inputs\nxin = tf.placeholder(tf.float32,[batch_size,d]); #print('xin=',xin,xin.get_shape())\ny_label = tf.placeholder(tf.float32,[batch_size,nc]); #print('y_label=',y_label,y_label.get_shape())\n#d = tf.placeholder(tf.float32);\n\n#for the first conc-conv\n# Convolutional layer\nK0 = 8 # size of the patch\nF0 = 22 # number of filters\nncl0 = K0*K0*F0\n\n#for the second conc-conv\nK1 = 4 # size of the patch\nF1 = F0 # number of filters\nncl1 = K1*K1*F1\n\n#drouput probability\nkeep_prob_input=tf.placeholder(tf.float32)\n\n#1st set of conv followed by conv2d operation and dropout 0.5\nW_conv1=weight_variable([K0,K0,1,F0]); print('W_conv1=',W_conv1.get_shape())\nb_conv1=bias_variable([F0]); print('b_conv1=',b_conv1.get_shape())\nx_2d1 = tf.reshape(xin, [-1,48,48,1]); print('x_2d1=',x_2d1.get_shape())\n\n#conv2d \nh_conv1=tf.nn.relu(conv2d(x_2d1, W_conv1) + b_conv1); print('h_conv1=',h_conv1.get_shape())\n#h_conv1= tf.nn.dropout(h_conv1,keep_prob_input);\n\n# 2nd convolutional layer + max pooling\nW_conv2=weight_variable([K0,K0,F0,F0]); print('W_conv2=',W_conv2.get_shape())\nb_conv2=bias_variable([F0]); print('b_conv2=',b_conv2.get_shape())\n\n# conv2d + max pool\nh_conv2 = tf.nn.relu(conv2d(h_conv1,W_conv2)+b_conv2); 
print('h_conv2=',h_conv2.get_shape())\nh_conv2_pooled = max_pool_2x2(h_conv2); print('h_conv2_pooled=',h_conv2_pooled.get_shape())\n\n#3rd set of conv \nW_conv3=weight_variable([K0,K0,F0,F0]); print('W_conv3=',W_conv3.get_shape())\nb_conv3=bias_variable([F1]); print('b_conv3=',b_conv3.get_shape())\nx_2d3 = tf.reshape(h_conv2_pooled, [-1,24,24,F0]); print('x_2d3=',x_2d3.get_shape())\n\n#conv2d\nh_conv3=tf.nn.relu(conv2d(x_2d3, W_conv3) + b_conv3); print('h_conv3=',h_conv3.get_shape())\n\n# 4th convolutional layer \nW_conv4=weight_variable([K1,K1,F1,F1]); print('W_conv4=',W_conv4.get_shape())\nb_conv4=bias_variable([F1]); print('b_conv4=',b_conv4.get_shape())\n\n#conv2d + max pool 4x4\nh_conv4 = tf.nn.relu(conv2d(h_conv3,W_conv4)+b_conv4); print('h_conv4=',h_conv4.get_shape())\nh_conv4_pooled = max_pool_2x2(h_conv4); print('h_conv4_pooled=',h_conv4_pooled.get_shape())\nh_conv4_pooled = max_pool_2x2(h_conv4_pooled); print('h_conv4_pooled=',h_conv4_pooled.get_shape())\n\n#5th set of conv \nW_conv5=weight_variable([K1,K1,F1,F1]); print('W_conv5=',W_conv5.get_shape())\nb_conv5=bias_variable([F1]); print('b_conv5=',b_conv5.get_shape())\nx_2d5 = tf.reshape(h_conv4_pooled, [-1,6,6,F1]); print('x_2d5=',x_2d5.get_shape())\n\n#conv2d\nh_conv5=tf.nn.relu(conv2d(x_2d5, W_conv5) + b_conv5); print('h_conv5=',h_conv5.get_shape())\n\n# 6th convolutional layer \nW_conv6=weight_variable([K1,K1,F1,F1]); print('W_con6=',W_conv6.get_shape())\nb_conv6=bias_variable([F1]); print('b_conv6=',b_conv6.get_shape())\nb_conv6= tf.nn.dropout(b_conv6,keep_prob_input);\n\n#conv2d + max pool 4x4\nh_conv6 = tf.nn.relu(conv2d(h_conv5,W_conv6)+b_conv6); print('h_conv6=',h_conv6.get_shape())\nh_conv6_pooled = max_pool_2x2(h_conv6); print('h_conv6_pooled=',h_conv6_pooled.get_shape())\n\n# reshaping for fully connected\nh_conv6_pooled_rs = tf.reshape(h_conv6, [batch_size,-1]); print('x2_rs',h_conv6_pooled_rs.get_shape());\nW_norm6 = weight_variable([ 6*6*F1, nc]); 
print('W_norm6=',W_norm6.get_shape())\nb_norm6 = bias_variable([nc]); print('b_conv6=',b_norm6.get_shape())\n\n# fully connected layer\nh_full6 = tf.matmul( h_conv6_pooled_rs, W_norm6 ); print('h_full6=',h_full6.get_shape())\nh_full6 += b_norm6; print('h_full6=',h_full6.get_shape())\n\ny = h_full6; \n\n## Softmax\ny = tf.nn.softmax(y); print('y3(SOFTMAX)=',y.get_shape())\n\n# Loss\ncross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))\ntotal_loss = cross_entropy\n\n# Optimization scheme\n#train_step = tf.train.GradientDescentOptimizer(0.02).minimize(total_loss)\ntrain_step = tf.train.AdamOptimizer(0.001).minimize(total_loss)\n\n# Accuracy\ncorrect_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n\n# Run Computational Graph\nn = train_data.shape[0]\nindices = collections.deque()\ninit = tf.initialize_all_variables()\nsess = tf.Session()\nsess.run(init)\nfor i in range(20001):\n \n # Batch extraction\n if len(indices) < batch_size:\n indices.extend(np.random.permutation(n)) \n idx = [indices.popleft() for i in range(batch_size)]\n batch_x, batch_y = train_data[idx,:], train_labels[idx]\n #print(batch_x.shape,batch_y.shape)\n \n # Run CG for vao to increase the test acriable training\n _,acc_train,total_loss_o = sess.run([train_step,accuracy,total_loss], feed_dict={xin: batch_x, y_label: batch_y, keep_prob_input: 0.5})\n \n # Run CG for test set\n if not i%100:\n print('\\nIteration i=',i,', train accuracy=',acc_train,', loss=',total_loss_o)\n acc_test = sess.run(accuracy, feed_dict = {xin: test_data, y_label: test_labels, keep_prob_input: 1.0})\n print('test accuracy=',acc_test)", "saving the trained graph in TF file\nhttps://www.tensorflow.org/how_tos/variables/", "# Add ops to save and restore all the variables.\nsaver = tf.train.Saver()\n\n# Save the variables to disk.\nsave_path = saver.save(sess, \"model_6layers.ckpt\")\nprint(\"Model saved in file: %s\" % 
save_path)\n\n# calculating accuracy for each class separately for the test set\nresult_cnn = sess.run([y], feed_dict = {xin: test_data, keep_prob_input: 1.0})\n#result = sess.run(y, feed_dict={xin: test_data, keep_prob_input: 1.0})\n\ntset = test_labels.argmax(1);\nresult = np.asarray(result_cnn[:][0]).argmax(1);\n\nfor i in range (0,nc):\n print('accuracy',str_emotions[i]+str(' '), '\\t',ut.calc_partial_accuracy(tset, result, i))", "Feeding the CNN with some data (camera/file)\nFinally, to test whether the model really works, we need to feed some new, raw, unlabeled data into the neural network.\nTo do so, some images are taken from the internet, or they can be taken directly from the camera.", "faces, marked_img = ut.get_faces_from_img('diff_emotions.jpg');\n#faces, marked_img = ut.get_faces_from_img('big_bang.png');\n#faces, marked_img = ut.get_faces_from_img('camera');\n\n# if some face was found in the image\nif(len(faces)): \n #creating the blank test vector\n data_orig = np.zeros([n_train, 48,48])\n\n #putting face data into the vector (only first few)\n for i in range(0, len(faces)):\n data_orig[i,:,:] = ut.contrast_stretch(faces[i,:,:]);\n\n #preparing image and putting it into the batch \n \n n = data_orig.shape[0];\n data = np.zeros([n,48**2])\n for i in range(n):\n xx = data_orig[i,:,:]\n xx -= np.mean(xx)\n xx /= np.linalg.norm(xx)\n data[i,:] = xx.reshape(2304); #np.reshape(xx,[-1])\n\n result = sess.run([y], feed_dict={xin: data, keep_prob_input: 1.0})\n \n plt.rcParams['figure.figsize'] = (10.0, 10.0) # set default size of plots\n for i in range(0, len(faces)):\n emotion_nr = np.argmax(result[0][i]);\n plt_idx = (2*i)+1;\n plt.subplot( 5, 2*len(faces)/5+1, plt_idx)\n plt.imshow(np.reshape(data[i,:], (48,48)))\n plt.axis('off')\n plt.title(str_emotions[emotion_nr])\n ax = plt.subplot(5, 2*len(faces)/5+1, plt_idx +1)\n ax.bar(np.arange(nc) , result[0][i])\n ax.set_xticklabels(str_emotions, rotation=45, rotation_mode=\"anchor\")\n ax.set_yticks([])\n 
plt.show()\n ", "Conclusions and comments\n\nThe dataset contains a lot of noisy data, i.e. faces that are rotated and of different sizes. \nA lot of emotions in the dataset were labeled wrong (e.g. happy images among the sad images).\nWe think that is why we couldn't achieve very good accuracy.\nThe accuracy is very good for the \"Happy\" and \"Surprised\" classes. These images seem to be the most \"clean\" as data. \nThe computational power needed to train a CNN is very high, therefore it was very time consuming to try different computational graphs.\nFacial emotion recognition involves very complicated features that were hard for our computational graphs to extract.\n\nPossible improvements in the future\n\nFor sure the CNN would perform better if the faces were always the same size and aligned to be straight. \nWe could try other, deeper CNN architectures to extract more features." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
5agado/data-science-learning
graphics/physarum/Physarum.ipynb
apache-2.0
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Intro\" data-toc-modified-id=\"Intro-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Intro</a></span></li><li><span><a href=\"#Run-Test-Simulation\" data-toc-modified-id=\"Run-Test-Simulation-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Run Test Simulation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Performances-Profiling\" data-toc-modified-id=\"Performances-Profiling-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Performances Profiling</a></span></li></ul></li><li><span><a href=\"#Parameters-Grid-Search\" data-toc-modified-id=\"Parameters-Grid-Search-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Parameters Grid Search</a></span></li></ul></div>\n\nIntro\nThis notebook explores slime mold simulation and visualization. For an introduction to the phenomenon and method see this sagejenson post", "import numpy as np\nimport cupy as cp\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport tqdm\nimport math\n\nimport os\nimport sys\nfrom pathlib import Path\n\n%matplotlib inline\n\n%load_ext autoreload\n%autoreload 2\n\nimport Physarum as physarum\nfrom Physarum import Physarum\n\nfrom ds_utils.sim_utils import named_configs\nfrom ds_utils.video_utils import generate_video, imageio_generate_video", "Run Test Simulation", "width = 400\nheight = 400\nsystem_shape = (width, height)\n\ninit_fun_perlin=lambda shape: physarum.get_perlin_init(shape=shape, n=int(30e4), scale=380)\ninit_fun_circle=lambda shape: physarum.get_filled_circle_init(n=int(10e4), center=(shape[0]//2,shape[1]//2), \n radius=100)\n\ndef combined_init(shape):\n pop_01 = physarum.get_filled_circle_init(n=int(10e4), center=(shape[0]//2,shape[1]//2), radius=100)\n pop_02 = physarum.get_perlin_init(shape=shape, n=int(30e4), scale=80)\n return np.concatenate([pop_01, pop_02])\n\nspecies_a = Physarum(shape=system_shape, 
horizon_walk=1, horizon_sense=9,\n theta_walk=15, theta_sense=10., walk_range=1.,\n social_behaviour=0, trace_strength=1,\n init_fun=combined_init)\n\n\nspecies_b = Physarum(shape=system_shape, horizon_walk=1,horizon_sense=9,\n theta_walk=15, theta_sense=10., walk_range=1.2,\n social_behaviour=-16,trace_strength=1,\n init_fun=init_fun_circle)\n\nsimulation_steps = 10\nimages = physarum.run_physarum_simulation(populations=[species_a], steps=simulation_steps)\n\nout_path = Path.home() / 'Documents/graphics/generative_art_output/physarum/test_01'\nout_path.mkdir(exist_ok=True, parents=True)\n\nimageio_generate_video(str(out_path/\"test_02.mp4\"), images, fps=20, format=\"mp4\", loop=False)\n\ngenerate_video(str(out_path/\"tmp.mp4\"), (width, height),\n frame_gen_fun = lambda i: np.array(images[i])[:,:,:3],\n nb_frames = len(images))", "Performances Profiling", "%%prun -s cumulative -l 30 -r\n# We profile the cell, sort the report by \"cumulative\n# time\", limit it to 30 lines\n\nsimulation_steps = 50\nimages = physarum.run_physarum_simulation(populations=[species_b], steps=simulation_steps)", "Parameters Grid Search", "out_path = Path.home() / 'Documents/graphics/generative_art_output/physarum/grid_search'\n\nimport cupy as cp\ndef normalize_snapshots(snapshots):\n norm_s = cp.asnumpy(snapshots)\n #fix_images = np.sqrt(fix_images + 0.1) - np.sqrt(0.1)\n #fix_images = np.log(fix_images + 1\n norm_s = (norm_s/norm_s.max())*255\n # add z axis\n norm_s = norm_s[:, :, :, np.newaxis]\n return norm_s.astype(np.uint8)\n\nnb_vals = 3\ngrid_search_params = {\n 'horizon_walk': np.linspace(1., 3., nb_vals).round(2), # higher more spread, chaos\n 'horizon_sense': np.linspace(10., 25., nb_vals).round(2),\n 'theta_sense': np.linspace(10., 25., nb_vals).round(2), # the smaller, the more narrow paths they create\n 'theta_walk': np.linspace(5., 15., nb_vals).round(2), # should be close to theta_sense, if way bigger, they disappear or constrain to concentrated areas\n 'walk_range': 
[1.],\n 'social_behaviour': [0],\n 'trace_strength': [1],\n 'decay': [0.8], #np.linspace(.6, .9, nb_vals)\n}\nconfigs = list(named_configs(grid_search_params))\n\nsystem_size = 100\nsystem_shape = tuple([system_size]*2)\nrender_dir = out_path / f'{system_size}_size'\nrender_dir.mkdir(exist_ok=False, parents=True)\nnb_frames = 90\n\ngenerate_ply = True\nply_threshold = 70\n\n#init_setup = physarum.get_perlin_init(shape=system_shape, n=int(60e4), scale=150)\n#init_setup = physarum.get_filled_circle_init(n=int(20e5), center=(system_shape[0]//2,system_shape[1]//2), radius=150)\ndef combined_init(shape):\n pop_01 = physarum.get_filled_circle_init(n=int(80e4), center=(shape[0]//2,shape[1]//2), radius=150)\n pop_02 = physarum.get_perlin_init(shape=shape, n=int(10e4), scale=200)\n #pop_01 = physarum.get_gaussian_gradient(n=int(10e4), center=(system_shape[0]//2,system_shape[1]//2), sigma=200)\n #pop_02 = physarum.get_circle_init(n=int(5e4), center=(shape[0]//2,shape[1]//2), radius=100, width=30)\n return cp.concatenate([pop_01, pop_02])\n#init_setup = combined_init(system_shape)\ninit_setup = physarum.get_gaussian_gradient(n=int(50e5), center=(system_shape[0]//2,system_shape[1]//2), sigma=40)\n\nimgs_path = \"MAYBE_DUPLICATES/flat_hexa_logo/19\"\n#mask = physarum.get_image_mask(list(Path(img_path).glob('*.png'))[np.random.randint(15)], system_shape, threshold=0.5)\n\nnb_runs = 1\nfor run_idx in range(nb_runs):\n with open(str(render_dir / \"logs.txt\"), 'w+') as f:\n for config_idx, config in tqdm.tqdm_notebook(enumerate(configs)):\n print(f'#####################')\n print(f'Run {run_idx} - config {config_idx}')\n run_dir = render_dir / 'run_{}_config_{}'.format(run_idx, config_idx)\n\n system = Physarum(shape=system_shape, \n horizon_walk=config.horizon_walk,\n horizon_sense=config.horizon_sense,\n theta_walk=config.theta_walk,\n theta_sense=config.theta_sense,\n walk_range=config.walk_range,\n social_behaviour=config.social_behaviour,\n 
trace_strength=config.trace_strength,\n init_fun=lambda shape: init_setup,\n template=None, template_strength=0) \n \n imgs_path = f\"MAYBE_DUPLICATES/flat_hexa_logo/{np.random.randint(5, 19)}\"\n imgs_path = list(Path(imgs_path).glob('*.png'))\n img_path = imgs_path[np.random.randint(len(imgs_path))]\n #init_setup = physarum.get_image_init_positions(img_path, system_shape, int(80e4), invert=True)\n template = physarum.get_image_mask(img_path, system_shape, invert=False)\n template_strength = 1\n\n #config = config._replace(theta_walk = config.theta_sense-5)\n \n images = physarum.run_physarum_simulation(populations=[system], diffusion='median',\n steps=nb_frames, decay=config.decay, mask=None, mask_factor=.5)\n \n # write out config\n f.write(str(config)+\"\\n\")\n SYSTEM_CONFIG = config._asdict()\n\n norm_snapshots = normalize_snapshots(images)\n\n #np.save(render_dir / f'run_{run}.npy', fix_images)\n \n # save each frame to ply\n if generate_ply:\n print('Writing ply files')\n out_ply = run_dir / 'ply'\n out_ply.mkdir(exist_ok=False)\n ply_snapshots = prepare_for_ply(norm_snapshots, ply_threshold)\n for frame in np.arange(norm_snapshots.shape[0]):\n tmp = ply_snapshots[frame]\n tmp = tmp[tmp[:,-1] >= ply_threshold]\n write_to_ply(tmp, out_ply / f'frame_{frame:03d}.ply')\n\n # generate video\n print('Generating video')\n out_video = run_dir / 'run.mp4'\n generate_video(str(out_video), (system_shape[1], system_shape[0]),\n frame_gen_fun = lambda i: norm_snapshots[i][0],\n nb_frames=len(norm_snapshots), is_color=False, disable_tqdm=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
annarev/tensorflow
tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Post-training float16 quantization\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_float16_quant\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nTensorFlow Lite now supports\nconverting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. 
Some hardware, like GPUs, can compute natively in this reduced precision arithmetic, realizing a speedup over traditional floating point execution. The Tensorflow Lite GPU delegate can be configured to run in this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact to latency and accuracy.\nIn this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer\nwith float16 quantization. Finally, check the accuracy of the converted model and compare it to the original float32 model.\nBuild an MNIST model\nSetup", "import logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport pathlib\n\ntf.float16", "Train and export the model", "# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 and 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nmodel.fit(\n train_images,\n train_labels,\n epochs=1,\n validation_data=(test_images, test_labels)\n)", "For the example, you trained the model for just a single epoch, so it only 
trains to ~96% accuracy.\nConvert to a TensorFlow Lite model\nUsing the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.\nNow load the model using the TFLiteConverter:", "converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = converter.convert()", "Write it out to a .tflite file:", "tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\ntflite_models_dir.mkdir(exist_ok=True, parents=True)\n\ntflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\ntflite_model_file.write_bytes(tflite_model)", "To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform:", "converter.optimizations = [tf.lite.Optimize.DEFAULT]\nconverter.target_spec.supported_types = [tf.float16]", "Finally, convert the model like usual. Note, by default the converted model will still use float input and outputs for invocation convenience.", "tflite_fp16_model = converter.convert()\ntflite_model_fp16_file = tflite_models_dir/\"mnist_model_quant_f16.tflite\"\ntflite_model_fp16_file.write_bytes(tflite_fp16_model)", "Note how the resulting file is approximately 1/2 the size.", "!ls -lh {tflite_models_dir}", "Run the TensorFlow Lite models\nRun the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.\nLoad the model into the interpreters", "interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\ninterpreter.allocate_tensors()\n\ninterpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))\ninterpreter_fp16.allocate_tensors()", "Test the models on one image", "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\ninterpreter.set_tensor(input_index, test_image)\ninterpreter.invoke()\npredictions = 
interpreter.get_tensor(output_index)\n\nimport matplotlib.pylab as plt\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)\n\ntest_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter_fp16.get_input_details()[0][\"index\"]\noutput_index = interpreter_fp16.get_output_details()[0][\"index\"]\n\ninterpreter_fp16.set_tensor(input_index, test_image)\ninterpreter_fp16.invoke()\npredictions = interpreter_fp16.get_tensor(output_index)\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true= str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)", "Evaluate the models", "# A helper function to evaluate the TF Lite model using \"test\" dataset.\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return 
accuracy\n\nprint(evaluate_model(interpreter))", "Repeat the evaluation on the float16 quantized model to obtain:", "# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite\n# doesn't have super optimized server CPU kernels. For this reason this may be\n# slower than the above float interpreter. But for mobile CPUs, considerable\n# speedup can be observed.\nprint(evaluate_model(interpreter_fp16))", "In this example, you have quantized a model to float16 with no difference in the accuracy.\nIt's also possible to evaluate the fp16 quantized model on the GPU. To perform all arithmetic with the reduced precision values, be sure to create the TfLiteGPUDelegateOptions struct in your app and set precision_loss_allowed to 1, like this:\n//Prepare GPU delegate.\nconst TfLiteGpuDelegateOptions options = {\n .metadata = NULL,\n .compile_options = {\n .precision_loss_allowed = 1, // FP16\n .preferred_gl_object_type = TFLITE_GL_OBJECT_TYPE_FASTEST,\n .dynamic_batch_enabled = 0, // Not fully functional yet\n },\n};\nDetailed documentation on the TFLite GPU delegate and how to use it in your application can be found here" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sarnold/adaptive-median
AMF.ipynb
gpl-3.0
[ "import numpy as np\nimport pandas as pd\nfrom PIL import Image, ImageFilter\n\n### open the image (PLACE YOUR OWN IMAGE HERE)\nimage_org = Image.open(\"noise_2.png\")\n\ndef rgb2gray(rgb):\n if(len(rgb.shape) == 3):\n return np.uint8(np.dot(rgb[...,:3], [0.2989, 0.5870, 0.1140]))\n else:#already a grayscale\n return rgb\n\nimage = np.array(image_org)\n\ngrayscale_image = rgb2gray(image) #outputs a grayscaled image\n\n### Adaptive Median Filtering...\n\ndef calculate_median(array):\n \"\"\"Return the median of 1-d array\"\"\"\n sorted_array = np.sort(array) #timsort (O(nlogn))\n median = sorted_array[len(array)//2]\n return median\n\ndef level_A(z_min, z_med, z_max, z_xy, S_xy, S_max):\n if(z_min < z_med < z_max):\n return level_B(z_min, z_med, z_max, z_xy, S_xy, S_max)\n else:\n S_xy += 2 #increase the size of S_xy to the next odd value.\n if(S_xy <= S_max): #repeat process\n return level_A(z_min, z_med, z_max, z_xy, S_xy, S_max)\n else:\n return z_med\n\ndef level_B(z_min, z_med, z_max, z_xy, S_xy, S_max):\n if(z_min < z_xy < z_max):\n return z_xy\n else:\n return z_med\n\ndef amf(image, initial_window, max_window):\n \"\"\"runs the Adaptive Median Filter proess on an image\"\"\"\n xlength, ylength = image.shape #get the shape of the image.\n \n z_min, z_med, z_max, z_xy = 0, 0, 0, 0\n S_max = max_window\n S_xy = initial_window #dynamically to grow\n \n output_image = image.copy()\n \n for row in range(S_xy, xlength-S_xy-1):\n for col in range(S_xy, ylength-S_xy-1):\n filter_window = image[row - S_xy : row + S_xy + 1, col - S_xy : col + S_xy + 1] #filter window\n target = filter_window.reshape(-1) #make 1-dimensional\n z_min = np.min(target) #min of intensity values\n z_max = np.max(target) #max of intensity values\n z_med = calculate_median(target) #median of intensity values\n z_xy = image[row, col] #current intensity\n \n #Level A & B\n new_intensity = level_A(z_min, z_med, z_max, z_xy, S_xy, S_max)\n output_image[row, col] = new_intensity\n return 
output_image\n\noutput = amf(grayscale_image, 3, 11)", "Using Adaptive Median Filter", "Image.fromarray(output)", "Original Image (converted to grayscale)", "Image.fromarray(grayscale_image)", "Output with Python's native Median Filter function", "native_output = image_org.filter(ImageFilter.MedianFilter(size = 3))\nnative_output\n\ndeviation_native = np.sqrt(np.sum(np.square(grayscale_image-np.array(rgb2gray(np.array(native_output))))))\n\ndeviation_original = np.sum(np.square(grayscale_image-np.array(output)))\n\nprint(\"Deviation from the original salt and pepper images:\")\nprint(\"Deviation via Median Filter (built-in): \", deviation_native)\nprint(\"Deviation via Adaptive Median Filter: \", deviation_original)\nprint(f\"Percent difference b/w deviations: {100*(deviation_original - deviation_native)/deviation_original}%\")", "As shown by the printout above, AMF results in almost twice the deviation of the native median filter technique.", "### Therefore, the built-in technique is nowhere near as good as the Adaptive Median Filter technique." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zzsza/TIL
scikit-learn/Chapter 2. Supervised Learning.ipynb
mit
[ "Chapter 2. Supervised Learning\n\nSupervised learning: classification / regression\nGeneralization: when a model can make accurate predictions on data it has never seen before\nOverfitting: when a model is fitted too closely to each sample in the training set, making it hard to generalize to new data\n\nUnderfitting: when a model is so simple that it does not fit even the training data well\n\n\nBalancing the data is important! The focus is on how much data you collect and how well you balance it", "import mglearn\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nX, y = mglearn.datasets.make_forge()\n\nprint(X)\n\nprint(y)\n\nmglearn.discrete_scatter(X[:,0], X[:, 1], y)\nplt.legend([\"class 0\", \"class 1\"], loc=4)\nplt.xlabel(\"1st feature\")\nplt.ylabel(\"2nd feature\")\n\nprint(\"X.shape : {}\".format(X.shape))\n\n?mglearn.discrete_scatter\n#mglearn.discrete_scatter(x1, x2, y=None, markers=None, \\\n#s=10, ax=None, labels=None, padding=0.2, alpha=1, c=None, markeredgewidth=None)\n\n# x1 : nd-array\n# input data, first axis\n\n# x2 : nd-array\n# input data, second axis\n\n# y : nd-array\n# input data, discrete labels\n\n# cmap : colormap\n# Colormap to use.\n\n# markers : list of string\n# List of markers to use, or None (which defaults to 'o').\n\n# s : int or float\n# Size of the marker\n\n# padding : float\n# Fraction of the dataset range to use for padding the axes.\n\n# alpha : float\n# Alpha value for all points.\n\nX, y = mglearn.datasets.make_wave(n_samples=40)\nplt.plot(X, y, 'o')\nplt.ylim(-3, 3)\nplt.xlabel(\"Feature\")\nplt.ylabel(\"Target\")\n\nfrom sklearn.datasets import load_breast_cancer\ncancer = load_breast_cancer()\nprint(\"cancer.keys(): {}\".format(cancer.keys()))\n\nprint(\"Shape of cancer data: {}\".format(cancer.data.shape))\nprint(\"Sample counts per class:\\n{}\".format(\n {n: v for n, v in zip(cancer.target_names, np.bincount(cancer.target))}))\nprint(\"Feature names:\\n{}\".format(cancer.feature_names))\n\ncancer.target_names\n\nnp.bincount(cancer.target)\n\nprint(cancer.DESCR)\n\nfrom sklearn.datasets import load_boston\nboston = load_boston()\nprint(\"Data shape: 
{}\".format(boston.data.shape))\n\n# generate feature combinations including duplicates\nX, y = mglearn.datasets.load_extended_boston()\nprint(\"X.shape: {}\".format(X.shape))", "k-nearest neighbors", "mglearn.plots.plot_knn_classification(n_neighbors=1)\n\nmglearn.plots.plot_knn_classification(n_neighbors=2)\n\nmglearn.plots.plot_knn_regression(n_neighbors=1)\n\nmglearn.plots.plot_knn_classification(n_neighbors=3)\n\nfrom sklearn.model_selection import train_test_split\nX, y = mglearn.datasets.make_forge()\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\nfrom sklearn.neighbors import KNeighborsClassifier\nclf = KNeighborsClassifier(n_neighbors=3)\n\nclf\n\nclf.fit(X_train, y_train)\n\nprint(\"prediction : {}\".format(clf.predict(X_test)))\n\nprint(\"score accuracy: {:.2f}\".format(clf.score(X_test, y_test)))\n\nfig, axes = plt.subplots(1, 3, figsize=(10,3))\n\nfor n_neighbors, ax in zip([1,3,9], axes):\n clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(X, y)\n mglearn.plots.plot_2d_separator(clf, X, fill=True, eps=0.5, ax=ax, alpha=0.4)\n mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)\n ax.set_title(\"{} neighbor\".format(n_neighbors))\n ax.set_xlabel(\"feature 0\")\n ax.set_ylabel(\"feature 1\")\naxes[0].legend(loc=3)\n\n# as the number of neighbors increases, the decision boundary becomes smoother ( = a simpler model )\n\ncancer = load_breast_cancer()\nX_train, X_test, y_train, y_test = train_test_split(\n cancer.data, cancer.target, stratify=cancer.target, random_state=66)\n\ntraining_accuracy = []\ntest_accuracy = []\n# 1-10\nneighbors_settings = range(1, 11)\n\nfor n_neighbors in neighbors_settings:\n clf = KNeighborsClassifier(n_neighbors=n_neighbors)\n clf.fit(X_train, y_train)\n training_accuracy.append(clf.score(X_train, y_train))\n test_accuracy.append(clf.score(X_test, y_test))\n \nplt.plot(neighbors_settings, training_accuracy, label=\"training accuracy\")\nplt.plot(neighbors_settings, test_accuracy, label=\"test 
accuracy\")\nplt.ylabel(\"Accuracy\")\nplt.xlabel(\"n_neighbors\")\nplt.legend()\n\nmglearn.plots.plot_knn_regression(n_neighbors=1)\n\nfrom sklearn.neighbors import KNeighborsRegressor\nX, y = mglearn.datasets.make_wave(n_samples=40)\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\n\nreg = KNeighborsRegressor(n_neighbors=3)\nreg.fit(X_train, y_train)\n\n\nprint(\"Test set predictions:\\n{}\".format(reg.predict(X_test)))\n\nprint(\"Test set R^2: {:.2f}\".format(reg.score(X_test, y_test)))", "R² = 1 - ( Σ(y-ŷ)² / Σ(y-ȳ)² )", "\nfig, axes = plt.subplots(1, 3, figsize=(15, 4))\n\nline = np.linspace(-3, 3, 1000).reshape(-1, 1)\nfor n_neighbors, ax in zip([1, 3, 9], axes):\n reg = KNeighborsRegressor(n_neighbors=n_neighbors)\n reg.fit(X_train, y_train)\n ax.plot(line, reg.predict(line))\n ax.plot(X_train, y_train, '^', c=mglearn.cm2(0), markersize=8)\n ax.plot(X_test, y_test, 'v', c=mglearn.cm2(1), markersize=8)\n\n ax.set_title(\n \"{} neighbor(s)\\n train score: {:.2f} test score: {:.2f}\".format(\n n_neighbors, reg.score(X_train, y_train),\n reg.score(X_test, y_test)))\n ax.set_xlabel(\"Feature\")\n ax.set_ylabel(\"Target\")\naxes[0].legend([\"Model predictions\", \"Training data/target\",\n \"Test data/target\"], loc=\"best\")\n\n# for the kneighbors classifier, the distance metric and the number of neighbors are important!\n# when using knn, it is common to normalize the features so that they have the same scale", "linear model", "mglearn.plots.plot_linear_regression_wave()", "Linear regression finds the parameters w, b that minimize the mean squared error between the predictions and the training-set targets y\nMean squared error = the squared differences between the predicted and target values, summed and then divided by the number of samples!", "from sklearn.linear_model import LinearRegression\nX, y = mglearn.datasets.make_wave(n_samples=60)\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)\n\nlr = LinearRegression().fit(X_train, y_train)\n\nprint(\"lr.coef_: {}\".format(lr.coef_))\nprint(\"lr.intercept_: {}\".format(lr.intercept_))\n\n# coef and intercept have a trailing _ because sklearn always appends _ to attributes derived from the training data! 
\n# ( this distinguishes them from parameters specified by the user )\n\nprint(\"Training set score: {:.2f}\".format(lr.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(lr.score(X_test, y_test)))\n\n# underfitting\n\nX, y = mglearn.datasets.load_extended_boston()\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)\nlr = LinearRegression().fit(X_train, y_train)\n\nprint(\"Training set score: {:.2f}\".format(lr.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(lr.score(X_test, y_test)))\n\n# overfitting", "Ridge regression ( regularization )\n\nMakes the influence of every feature on the output as small as possible\nUses the squared coefficients as the penalty; lasso uses the absolute values as the penalty\n\n A good article to read", "from sklearn.linear_model import Ridge\n\nridge = Ridge().fit(X_train, y_train)\nprint(\"Training set score: {:.2f}\".format(ridge.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(ridge.score(X_test, y_test)))\n\n# increasing alpha pushes the coefficients closer to 0, which hurts training-set performance but helps generalization\n\nridge10 = Ridge(alpha=10).fit(X_train, y_train)\nprint(\"Training set score: {:.2f}\".format(ridge10.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(ridge10.score(X_test, y_test)))\n\nridge01 = Ridge(alpha=0.1).fit(X_train, y_train)\nprint(\"Training set score: {:.2f}\".format(ridge01.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(ridge01.score(X_test, y_test)))\n\nplt.plot(ridge.coef_, 's', label=\"Ridge alpha=1\")\nplt.plot(ridge10.coef_, '^', label=\"Ridge alpha=10\")\nplt.plot(ridge01.coef_, 'v', label=\"Ridge alpha=0.1\")\n\nplt.plot(lr.coef_, 'o', label=\"LinearRegression\")\nplt.xlabel(\"Coefficient index\")\nplt.ylabel(\"Coefficient magnitude\")\nxlims = plt.xlim()\nplt.hlines(0, xlims[0], xlims[1])\nplt.xlim(xlims)\nplt.ylim(-25, 25)\nplt.legend()\n\n# the values for alpha=0.1 are clustered around the line\n\nmglearn.plots.plot_ridge_n_samples()", "LASSO ( L1 regularization )\n\nSome coefficients become exactly 0", "from sklearn.linear_model import Lasso\n\nlasso = Lasso().fit(X_train, y_train)\nprint(\"Training set score: 
{:.2f}\".format(lasso.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(lasso.score(X_test, y_test)))\nprint(\"Number of features used: {}\".format(np.sum(lasso.coef_ != 0))) # count the entries of lasso.coef_ that are non-zero\n\nprint(lasso.coef_)\n\nlasso001 = Lasso(alpha=0.01, max_iter=100000).fit(X_train, y_train)\nprint(\"Training set score: {:.2f}\".format(lasso001.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(lasso001.score(X_test, y_test)))\nprint(\"Number of features used: {}\".format(np.sum(lasso001.coef_ != 0)))\n\n# but if alpha is set too low, the regularization effect disappears and the model overfits\nlasso00001 = Lasso(alpha=0.0001, max_iter=100000).fit(X_train, y_train)\nprint(\"Training set score: {:.2f}\".format(lasso00001.score(X_train, y_train)))\nprint(\"Test set score: {:.2f}\".format(lasso00001.score(X_test, y_test)))\nprint(\"Number of features used: {}\".format(np.sum(lasso00001.coef_ != 0)))\n\nplt.plot(lasso.coef_, 's', label=\"Lasso alpha=1\")\nplt.plot(lasso001.coef_, '^', label=\"Lasso alpha=0.01\")\nplt.plot(lasso00001.coef_, 'v', label=\"Lasso alpha=0.0001\")\n\nplt.plot(ridge01.coef_, 'o', label=\"Ridge alpha=0.1\")\nplt.legend(ncol=2, loc=(0, 1.05))\nplt.ylim(-25, 25)\nplt.xlabel(\"Coefficient index\")\nplt.ylabel(\"Coefficient magnitude\")\n\n# ElasticNet : uses L1 + L2 together! 
the ratio of each is given as a parameter", "Linear models for classification\n\nLogistic regression / support vector machines\nlinear_model.LogisticRegression / svm.LinearSVC(support vector classifier)\nLogistic regression uses the logistic loss function for binary classification and the cross-entropy loss function for multiclass classification\nThe LinearSVC class uses the squared hinge loss function by default", "from sklearn.linear_model import LogisticRegression\nfrom sklearn.svm import LinearSVC\n\nX, y = mglearn.datasets.make_forge()\n\nfig, axes = plt.subplots(1, 2, figsize=(10,3))\n\nfor model, ax, in zip([LinearSVC(), LogisticRegression()], axes):\n clf = model.fit(X, y)\n mglearn.plots.plot_2d_separator(clf, X, fill=False, eps=0.5, ax=ax, alpha=0.7)\n mglearn.discrete_scatter(X[:, 0], X[:, 1], y, ax=ax)\n ax.set_title(\"{}\".format(clf.__class__.__name__)) # fetch the class name!\n ax.set_xlabel(\"feature 0\")\n ax.set_ylabel(\"feature 1\")\naxes[0].legend()", "The parameter C ( which determines the regularization strength ): with higher values the model tries to classify each individual data point correctly; with lower values it tries to fit the majority of the data points", "mglearn.plots.plot_linear_svc_regularization()\n\nfrom sklearn.datasets import load_breast_cancer\ncancer = load_breast_cancer()\nX_train, X_test, y_train, y_test = train_test_split(\n cancer.data, cancer.target, stratify=cancer.target, random_state=42)\nlogreg = LogisticRegression().fit(X_train, y_train)\nprint(\"Training set score: {:.3f}\".format(logreg.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg.score(X_test, y_test)))\n# underfitting\n\nlogreg100 = LogisticRegression(C=100).fit(X_train, y_train)\nprint(\"Training set score: {:.3f}\".format(logreg100.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg100.score(X_test, y_test)))\n# the higher the complexity, the better the model performs\n\nlogreg001 = LogisticRegression(C=0.01).fit(X_train, y_train)\nprint(\"Training set score: {:.3f}\".format(logreg001.score(X_train, y_train)))\nprint(\"Test set score: {:.3f}\".format(logreg001.score(X_test, y_test)))\n\n\nplt.plot(logreg.coef_.T, 'o', label=\"C=1\")\nplt.plot(logreg100.coef_.T, '^', label=\"C=100\")\nplt.plot(logreg001.coef_.T, 'v', 
label=\"C=0.01\")\nplt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)\nxlims = plt.xlim()\nplt.hlines(0, xlims[0], xlims[1])\nplt.xlim(xlims)\nplt.ylim(-5, 5)\nplt.xlabel(\"Feature\")\nplt.ylabel(\"Coefficient magnitude\")\nplt.legend()\n\nfor C, marker in zip([0.001, 1, 100], ['o', '^', 'v']):\n    lr_l1 = LogisticRegression(C=C, penalty=\"l1\").fit(X_train, y_train)\n    print(\"Training accuracy of l1 logreg with C={:.3f}: {:.2f}\".format(\n        C, lr_l1.score(X_train, y_train)))\n    print(\"Test accuracy of l1 logreg with C={:.3f}: {:.2f}\".format(\n        C, lr_l1.score(X_test, y_test)))\n    plt.plot(lr_l1.coef_.T, marker, label=\"C={:.3f}\".format(C))\n\nplt.xticks(range(cancer.data.shape[1]), cancer.feature_names, rotation=90)\nxlims = plt.xlim()\nplt.hlines(0, xlims[0], xlims[1])\nplt.xlim(xlims)\nplt.xlabel(\"Feature\")\nplt.ylabel(\"Coefficient magnitude\")\n\nplt.ylim(-5, 5)\nplt.legend(loc=3)\n\nxlims[0], xlims[1]\n\nplt.hlines?", "다중 클래스 분류용 선형 모델\n\n로지스틱 회귀만 제외하고, 대부분의 선형 분류 모델은 이진 분류만 지원 (다중 클래스를 지원하지 않음)\n로지스틱 회귀는 소프트맥스 함수를 사용한 다중 클래스 분류 알고리즘을 지원함\n대부분의 선형 분류 모델은 각 클래스를 다른 모든 클래스와 구분하도록 이진 분류 모델을 학습시킴..!", "from sklearn.datasets import make_blobs\n\nX, y = make_blobs(random_state=42)\nmglearn.discrete_scatter(X[:, 0], X[:, 1], y)\nplt.xlabel(\"Feature 0\")\nplt.ylabel(\"Feature 1\")\nplt.legend([\"Class 0\", \"Class 1\", \"Class 2\"])\n\nlinear_svm = LinearSVC().fit(X, y)\nprint(\"Coefficient shape: \", linear_svm.coef_.shape)\nprint(\"Intercept shape: \", linear_svm.intercept_.shape)\n\n\nmglearn.discrete_scatter(X[:, 0], X[:, 1], y)\nline = np.linspace(-15, 15)\nfor coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,\n                                  mglearn.cm3.colors):\n    plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)\nplt.ylim(-10, 15)\nplt.xlim(-10, 8)\nplt.xlabel(\"Feature 0\")\nplt.ylabel(\"Feature 1\")\nplt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',\n            'Line class 2'], loc=(1.01, 
0.3))\n\nmglearn.plots.plot_2d_classification(linear_svm, X, fill=True, alpha=.7)\nmglearn.discrete_scatter(X[:, 0], X[:, 1], y)\nline = np.linspace(-15, 15)\nfor coef, intercept, color in zip(linear_svm.coef_, linear_svm.intercept_,\n                                  mglearn.cm3.colors):\n    plt.plot(line, -(line * coef[0] + intercept) / coef[1], c=color)\nplt.legend(['Class 0', 'Class 1', 'Class 2', 'Line class 0', 'Line class 1',\n            'Line class 2'], loc=(1.01, 0.3))\nplt.xlabel(\"Feature 0\")\nplt.ylabel(\"Feature 1\")\n\n# 회귀 모델에서는 alpha, linearSVC와 LogisticRegression에선 C\n# alpha가 클수록, C값이 작을수록 모델이 단순해짐 => Log Scale로 최적치를 정함\n# L1, L2 정규화 중 어떤 것을 사용할지 결정\n# 데이터가 많을 경우 solver='sag' 옵션을 주던가, SGDClassifier와 SGDRegressor를 사용..!\n# 선형 모델의 경우 샘플에 비해 특성이 많을 때 잘 작동. \n# 저차원의 데이터셋은 다른 모델들의 일반화 성능이 더 좋음", "나이브 베이즈 분류기\n\n선형 모델과 매우 유사하며, 로지스틱 회귀나 서포트 벡터모델보다 빠르지만 일반화 성능이 떨어짐\nGaussianNB, BernoulliNB, MultinomialNB", "X = np.array([[0, 1, 0, 1],\n              [1, 0, 1, 1],\n              [0, 0, 0, 1],\n              [1, 0, 1, 0]])\ny = np.array([0, 1, 0, 1])\n\ncounts = {}\nfor label in np.unique(y):\n    counts[label] = X[y == label].sum(axis=0)\nprint(\"Feature counts:\\n{}\".format(counts))\n\nnp.unique(y)\n\n# MultinomialNB 는 클래스별로 특성의 평균을 계산, GaussianNB는 클래스별로 각 특성의 표준편차와 평균을 저장\n# MultinomialNB와 BernoulliNB는 모델의 복잡도를 조절하는 alpha 매개변수 하나를 가지고 있음. alpha 개수만큼 추가!\n# alpha 값이 성능 향상에 크게 기여하진 않지만 정확도를 높일 수 있음\n# GaussianNB는 고차원인 데이터셋에 사용, 다른 두 모델은 희소한 데이터를 카운트하는데 사용", "결정트리 ( decision tree )\n\n스무 고개와 비슷", "mglearn.plots.plot_animal_tree()\n#brew install graphviz\n\nmglearn.plots.plot_tree_progressive()\n\n\n# 결정트리의 오버피팅을 막는 것은 2가지\n# 1. 트리 생성을 일찍 중단 ( 사전 가지치기, pre-pruning )\n# 2. 
데이터 포인트가 적은 노드를 삭제하거나 병합하는 전략 ( 사후 가지치기, post-pruning )\n# sklearn은 사전 가지치기만 지원\n\nfrom sklearn.tree import DecisionTreeClassifier\n\ncancer = load_breast_cancer()\nX_train, X_test, y_train, y_test = train_test_split(\n    cancer.data, cancer.target, stratify=cancer.target, random_state=42)\ntree = DecisionTreeClassifier(random_state=0)\ntree.fit(X_train, y_train)\nprint(\"Accuracy on training set: {:.3f}\".format(tree.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(tree.score(X_test, y_test)))\n\n# max_depth = 4로 지정\ntree = DecisionTreeClassifier(max_depth=4, random_state=0)\ntree.fit(X_train, y_train)\n\nprint(\"Accuracy on training set: {:.3f}\".format(tree.score(X_train, y_train)))\nprint(\"Accuracy on test set: {:.3f}\".format(tree.score(X_test, y_test)))\n\n\nfrom sklearn.tree import export_graphviz\nexport_graphviz(tree, out_file=\"tree.dot\", class_names=[\"malignant\", \"benign\"],\n                feature_names=cancer.feature_names, impurity=False, filled=True)\n\nimport graphviz\n\nwith open(\"tree.dot\") as f:\n    dot_graph = f.read()\ndisplay(graphviz.Source(dot_graph))\n\n# 특성 중요도 ( feature importance )\n\nprint(\"Feature importances:\\n{}\".format(tree.feature_importances_))\n\ndef plot_feature_importances_cancer(model):\n    n_features = cancer.data.shape[1]\n    plt.barh(range(n_features), model.feature_importances_, align='center')\n    plt.yticks(np.arange(n_features), cancer.feature_names)\n    plt.xlabel(\"Feature importance\")\n    plt.ylabel(\"Feature\")\n    plt.ylim(-1, n_features)\n\nplot_feature_importances_cancer(tree)\n\n\ntree = mglearn.plots.plot_tree_not_monotone()\ndisplay(tree)\n\n# 회귀를 위한 트리 기반의 모델을 사용할 경우엔 훈련 데이터 범위 밖의 포인트에 대해 예측을 할 수 없음\n\nimport os\nram_prices = pd.read_csv(os.path.join(mglearn.datasets.DATA_PATH, \"ram_price.csv\"))\n\nplt.semilogy(ram_prices.date, ram_prices.price)\nplt.xlabel(\"Year\")\nplt.ylabel(\"Price in $/Mbyte\")\n\nfrom sklearn.tree import DecisionTreeRegressor\n\ndata_train = ram_prices[ram_prices.date < 
2000]\ndata_test = ram_prices[ram_prices.date >= 2000]\n\nX_train = data_train.date[:, np.newaxis]\ny_train = np.log(data_train.price)\n\ntree = DecisionTreeRegressor().fit(X_train, y_train)\nlinear_reg = LinearRegression().fit(X_train, y_train)\n\nX_all = ram_prices.date[:, np.newaxis]\n\npred_tree = tree.predict(X_all)\npred_lr = linear_reg.predict(X_all)\n\nprice_tree = np.exp(pred_tree)\nprice_lr = np.exp(pred_lr)\n\nplt.semilogy(data_train.date, data_train.price, label=\"Training data\")\nplt.semilogy(data_test.date, data_test.price, label=\"Test data\")\nplt.semilogy(ram_prices.date, price_tree, label=\"Tree prediction\")\nplt.semilogy(ram_prices.date, price_lr, label=\"Linear prediction\")\nplt.legend()\n\n# 일반화성능이 좋지 않을 경우 앙상블 방법을 사용\n\n# 랜덤 포레스트와 그래디언트 부스팅" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hannorein/variations
Figure1.ipynb
gpl-3.0
[ "Figure 1\nThis notebook recreates Figure 1 in Rein & Tamayo 2016. The figure illustrates the use of second order variational equations in an $N$-body simulation.\nWe start by importing the REBOUND, numpy and matplotlib packages.", "import rebound\nimport numpy as np\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt", "We setup a planetary system with two Jupiter mass planets. The following function takes that system, integrates it forward in time by 10 orbits and returns the inner planet's $x$ coordinate at the end of the simulation. The $x$ coordinate changes because the planet orbits the star, but also because the planet interacts with the other planet. The function takes the outer planet's initial semi-major axis, $a$, as a parameter. We setup the system using heliocentric coordinates and therefore specify the primary attribute when adding particles to REBOUND (by default REBOUND uses Jacobi coordinates which are not supported by variational equations).", "def run_sim(a):\n    sim = rebound.Simulation()\n    sim.add(m=1.)\n    sim.add(primary=sim.particles[0],m=1e-3, a=1)\n    sim.add(primary=sim.particles[0],m=1e-3, a=a)\n    \n    sim.integrate(2.*np.pi*10.)\n    return sim.particles[1].x", "We now run this simulation 400 times for different initial $a$ in the range [1.4, 1.7] and store the final $x$ coordinate of the inner planet in the array x_exact.", "N=400\nx_exact = np.zeros((N))\na_grid = np.linspace(1.4,1.7,N)\nfor i,a in enumerate(a_grid):\n    x_exact[i] = run_sim(a)", "Next, we create a function that runs an $N$-body simulation including first and second order variational equations. For that we add two sets of variational particles with the add_variation() command (one for first order and one for second order). We then initialize the variational particles by varying the outer planet's semi-major axis. 
After integrating the system forward in time, the function returns the $x$ coordinate of the inner planet as well as the $x$ coordinate of the corresponding variational particles: $\\partial x/\\partial a$ and $\\partial^2 x/\\partial a^2$. Note that the variational particles' position coordinates are thus unit-less.", "def run_sim_var(a):\n    sim = rebound.Simulation()\n    sim.add(m=1.)\n    sim.add(primary=sim.particles[0],m=1e-3, a=1)\n    sim.add(primary=sim.particles[0],m=1e-3, a=a)\n    var_da = sim.add_variation()\n    var_dda = sim.add_variation(order=2, first_order=var_da)\n    var_da.vary(2, \"a\")\n    var_dda.vary(2, \"a\")\n    \n    sim.integrate(2.*np.pi*10.)\n    return sim.particles[1].x, var_da.particles[1].x, var_dda.particles[1].x", "We run one simulation with variational particles at $a_0=1.56$. We then use the derivatives we got from the run_sim_var() function to approximate the final position of the inner planet as a function of the outer planet's initial semi-major axis using a Taylor series:\n$$x(a) = x(a_0)+\\frac{\\partial x}{\\partial a}\\Big|_{a_0} (a-a_0)+\\frac 12\\frac{\\partial^2 x}{\\partial a^2}\\Big|_{a_0} (a-a_0)^2 + \\ldots$$\nWe do this to both first and second order.", "a_0 = 1.56\nx, dxda, ddxdda = run_sim_var(a_0)\nx_1st_order = np.zeros(N)\nx_2nd_order = np.zeros(N)\nfor i,a in enumerate(a_grid):\n    x_1st_order[i] = x + (a-a_0)*dxda\n    x_2nd_order[i] = x + (a-a_0)*dxda + 0.5*(a-a_0)*(a-a_0)*ddxdda", "Finally, we plot the exact final position that we obtained from running a full $N$-body simulation as well as our approximation near a neighbourhood of $a_0$ which we got from the variational equations.", "fig = plt.figure(figsize=(6,4))\nax = plt.subplot(111)\nax.set_xlim(a_grid[0],a_grid[-1])\nax.set_ylim(np.min(x_exact),np.max(x_exact)*1.01)\nax.set_xlabel(\"initial semi-major axis of outer planet\")\nax.set_ylabel(\"$x$ position of inner planet after 10 orbits\")\nax.plot(a_grid, x_exact, \"-\", color=\"black\", lw=2)\nax.plot(a_grid, x_1st_order, \"--\", 
color=\"green\")\nax.plot(a_grid, x_2nd_order, \":\", color=\"blue\")\nax.plot(a_0, x, \"ro\",ms=10);\nplt.savefig('paper_test1.pdf',bbox_inches='tight'); # Save to file. ", "The following code produces an interactive version of this graph where one can change the initial semi-major axis $a_0$ and immediately see the new plot. It uses the ipywidgets tool interact. Move the slider and see how REBOUND accurately calculates the first and second derivatives using variational equations.", "from ipywidgets import interact\ndef generate_plot(a_0=1.56):\n    x, dxda, ddxdda = run_sim_var(a_0)\n    x_1st_order = np.zeros(N)\n    x_2nd_order = np.zeros(N)\n    for i,a in enumerate(a_grid):\n        x_1st_order[i] = x + (a-a_0)*dxda\n        x_2nd_order[i] = x + (a-a_0)*dxda + 0.5*(a-a_0)*(a-a_0)*ddxdda\n    \n    fig = plt.figure(figsize=(6,4))\n    ax = plt.subplot(111)\n    ax.set_xlim(a_grid[0],a_grid[-1])\n    ax.set_ylim(np.min(x_exact),np.max(x_exact)*1.01)\n    ax.set_xlabel(\"initial semi-major axis of outer planet\")\n    ax.set_ylabel(\"$x$ position of inner planet after 10 orbits\")\n    ax.plot(a_grid, x_exact, \"-\", color=\"black\", lw=2)\n    ax.plot(a_grid, x_1st_order, \"--\", color=\"green\")\n    ax.plot(a_grid, x_2nd_order, \":\", color=\"blue\")\n    ax.plot(a_0, x, \"ro\",ms=10)\n    plt.show()\n    return \ninteract(generate_plot,a_0=(1.4,1.7,0.01));" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
PMEAL/OpenPNM
examples/tutorials/network/manually_adding_pores_and_throats.ipynb
mit
[ "Manually Adding and Removing Pores and Throats", "import openpnm as op\n%config InlineBackend.figure_formats = ['svg']\nimport matplotlib.pyplot as plt\nimport numpy as np\nnp.random.seed(10)\nws = op.Workspace()\nws.settings['loglevel'] = 50", "We'll start with something simple, by adding a single pore and throat to a 2D network:", "pn = op.network.Cubic(shape=[5, 5, 1], spacing=1.0)", "Add a Single Pore\nFirst add a pore and visualize it:", "# NBVAL_IGNORE_OUTPUT\nop.topotools.extend(network=pn, pore_coords=[[6, 6, 0.5]])\nfig, ax = plt.subplots()\nop.topotools.plot_coordinates(network=pn, ax=ax)\nop.topotools.plot_connections(network=pn, ax=ax)", "Note that we've only added a pore, but not specified any connections to other pores. This requires quite a bit more thought than adding pore coords. The original network had 25 pores, numbered 0 to 24 (due to python's 0 indexing), so this new pore is number 25. Specifying connections requires explicitly stating which pores are connected to which according to the pore index. \nAdd a Single Throat\nLet's connect this new pore to the single pore in the top-left corner, which we know to be pore 4. Again we'll use extend but specify throat_conns instead:", "# NBVAL_IGNORE_OUTPUT\nop.topotools.extend(network=pn, throat_conns=[[4, 25]])\nfig, ax = plt.subplots()\nop.topotools.plot_coordinates(network=pn, ax=ax)\nop.topotools.plot_connections(network=pn, ax=ax)", "Find Several Throats and Add Simultaneously\nWe can also find the indices of pores that are physically close to pore 25, then make connections between those:", "Ps = pn.find_nearby_pores(pores=25, r=3)\nprint(Ps)", "The above search yielded 3 pores that are within a radius of 3 units of pore 25. In order to connect these to pore 25 by a new throat, we need to create a pair of indices indicating the pore on each end of the new throat as shown below. 
This convention for defining network topology is based on the sparse adjacency matrix expressed in COO format, as described here.", "[[25, i] for i in Ps[0]]", "We can send this list to the extend function to add all three new throats with one call:", "# NBVAL_IGNORE_OUTPUT\nop.topotools.extend(network=pn, throat_conns=[[25, i] for i in Ps[0]])\nfig, ax = plt.subplots()\nop.topotools.plot_coordinates(network=pn, ax=ax)\nop.topotools.plot_connections(network=pn, ax=ax)", "More Complex Additions\nNow let's do something more complex, by adding pores inside a for-loop. First create a simple 2D cubic network:", "net = op.network.Cubic(shape=[5, 5, 1], spacing=1.0)", "We'll now scan through each pore in the network and add 4 new pores next to each one, at the 4 corners:", "Ps = net.Ps\nTs = net.Ts\ncoords = net['pore.coords']\ndist = 0.3\ncorners = [[-1, -1], [-1, 1], [1, 1], [1, -1]]\nfor xdir, ydir in corners:\n    adj = np.zeros_like(coords)\n    adj[:, 0] = dist*xdir\n    adj[:, 1] = dist*ydir\n    new_coords = coords + adj\n    op.topotools.extend(network=net, pore_coords=new_coords)\n    new_Ps = net.Ps[-len(Ps):]\n    new_conns = np.vstack((Ps, new_Ps)).T\n    op.topotools.extend(network=net, throat_conns=new_conns)", "After any network manipulation operation, it's a good idea to check the health of the network, which checks for disconnected pores. All empty lists mean nothing was found.", "net.check_network_health()\n\nfig, ax = plt.subplots(1, figsize=[6, 6])\nop.topotools.plot_connections(network=net, throats=Ts, ax=ax, c='b')\nop.topotools.plot_connections(network=net, throats=net.Ts[len(Ts):],\n                              ax=ax, c='y')\nop.topotools.plot_coordinates(network=net, pores=Ps, c='r', s=500, ax=ax)\nop.topotools.plot_coordinates(network=net, pores=net.Ps[len(Ps):],\n                              c='g', s=100, ax=ax)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
quests/serverlessml/03_tfdata/solution/input_pipeline.ipynb
apache-2.0
[ "Input pipeline into Keras\nIn this notebook, we will look at how to read large datasets, datasets that may not fit into memory, using TensorFlow. We can use the tf.data pipeline to feed data to Keras models that use a TensorFlow backend.\nLearning Objectives\n\nUse tf.data to read CSV files\nLoad the training data into memory\nPrune the data by removing columns\nUse tf.data to map features and labels\nAdjust the batch size of our dataset\nShuffle the dataset to optimize for deep learning\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nLet's start off with the Python imports that we need.", "%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\nimport os, json, math\nimport numpy as np\nimport shutil\nimport logging\n# SET TF ERROR LOG VERBOSITY\nlogging.getLogger(\"tensorflow\").setLevel(logging.ERROR)\nimport tensorflow as tf\n\nprint(\"TensorFlow version: \",tf.version.VERSION)\n\nPROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"PROJECT\"] = PROJECT\nos.environ[\"REGION\"] = REGION\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)\n\n# If you're not using TF 2.0+, let's enable eager execution\nif tf.version.VERSION < '2.0':\n print('Enabling v2 behavior and eager execution; if necessary restart kernel, and rerun notebook')\n tf.enable_v2_behavior()", "Locating the CSV files\nWe will start with the CSV files that we wrote out in the first notebook of this sequence. 
Just so you don't have to run the notebook, we saved a copy in ../data", "!ls -l ../../data/*.csv", "Use tf.data to read the CSV files\nSee the documentation for make_csv_dataset.\nIf you have TFRecords (which is recommended), use make_batched_features_dataset instead.", "CSV_COLUMNS = ['fare_amount', 'pickup_datetime',\n 'pickup_longitude', 'pickup_latitude', \n 'dropoff_longitude', 'dropoff_latitude', \n 'passenger_count', 'key']\nLABEL_COLUMN = 'fare_amount'\nDEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]\n\n# load the training data\ndef load_dataset(pattern):\n return tf.data.experimental.make_csv_dataset(pattern, 1, CSV_COLUMNS, DEFAULTS)\n\ntempds = load_dataset('../../data/taxi-train*')\nprint(tempds)", "Note that this is a prefetched dataset. If you loop over the dataset, you'll get the rows one-by-one. Let's convert each row into a Python dictionary:", "# print a few of the rows\nfor n, data in enumerate(tempds):\n row_data = {k: v.numpy() for k,v in data.items()}\n print(n, row_data)\n if n > 2:\n break", "What we really need is a dictionary of features + a label. So, we have to do two things to the above dictionary. 
(1) remove the unwanted column \"key\" and (2) keep the label separate from the features.", "# get features, label\ndef features_and_labels(row_data):\n    for unwanted_col in ['pickup_datetime', 'key']:\n        row_data.pop(unwanted_col)\n    label = row_data.pop(LABEL_COLUMN)\n    return row_data, label  # features, label\n\n# print a few rows to make sure it works\nfor n, data in enumerate(tempds):\n    row_data = {k: v.numpy() for k,v in data.items()}\n    features, label = features_and_labels(row_data)\n    print(n, label, features)\n    if n > 2:\n        break", "Batching\nLet's do both (loading, features_label)\nin our load_dataset function, and also add batching.", "def load_dataset(pattern, batch_size):\n  return (\n      tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n      .map(features_and_labels) # features, label\n  )\n\n# try changing the batch size and watch what happens.\ntempds = load_dataset('../../data/taxi-train*', batch_size=2)\nprint(list(tempds.take(3))) # truncate and print as a list ", "Shuffling\nWhen training a deep learning model in batches over multiple workers, it is helpful if we shuffle the data. That way, different workers will be working on different parts of the input file at the same time, and so averaging gradients across workers will help. 
Also, during training, we will need to read the data indefinitely.", "def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)\n .map(features_and_labels) # features, label\n .cache())\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(1000).repeat()\n dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE\n return dataset\n\ntempds = load_dataset('../../data/taxi-train*', 2, tf.estimator.ModeKeys.TRAIN)\nprint(list(tempds.take(1)))\ntempds = load_dataset('../../data/taxi-valid*', 2, tf.estimator.ModeKeys.EVAL)\nprint(list(tempds.take(1)))", "In the next notebook, we will build the model using this input pipeline.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ericlejan/CahierISNEtICN
CahierChapitre1PolyPythonEdu.ipynb
gpl-3.0
[ "Reprise pour proof of concept du pdf PythonEdu Amiens\nPasser par un logiciel Windows alors que jupyter et jupyterhub existent me semble une grossière erreur d'aiguillage.\nJe vais tenter de démontrer mon point de vue à partir de ce Notebook. Les librairies utilisées par le module lycee sont installées en python3 sur la raspberryPi du projet Tremplin.\nIl va de soi que ce cahier peut s'exécuter en local sur toute machine disposant d'une installation de jupyter notebook.\nL'écriture du code en Python : des règles à respecter\nl'indentation et les commentaires ##\nLa position du premier caractère d'une ligne de code obéit à une règle simple : \nUne ligne de code commence au début d'une ligne sauf si \":\" terminent la ligne précédente. Dans ce cas un décalage est nécessaire, il s'agit d'une indentation.\nLe nombre d'espaces est paramétrable dans les éditeurs de code python mais généralement il s'agit d'une tabulation.\nLes commentaires sont insérés à la suite d'un #\nLes codes des champs suivants illustrent ce propos. 
\nremarque : L'invite de saisie input() est utilisée dans sa version \"importer un entier\" int(input()).", "#solution de l'équation ax = b pour a = 2 et b = 6\na=2\nb=6\nprint(\"la solution de \",a,\"* x = \",b,\"est :\") \nprint(\"x =\",b/a)", "#résoudre l'équation ax=b pour a≠0\na = int(input(\"entrez une valeur pour a ≠ 0 : \"))\n#on s'assure que a est bien différent de 0\nif a==0: \n    #on redemande la saisie de a\n    a = int(input(\"Attention entrez une valeur pour a ≠ 0 : \"))\n    #tant que a vaut 0 \n    while a==0:\n        #on demande une valeur différente de 0\n        a = int(input(\"Attention entrez une valeur pour a ≠ 0 : \"))\n    #dès que a est différent de 0 \n    else:\n        #on demande la saisie de b\n        b = int(input(\"entrez une valeur pour b : \"))\n        # on affiche le résultat\n        print(\"la solution de \",a,\"* x = \",b,\"est :\") \n        print(\"x =\",b/a)\nelse:\n    #quand a est différent de 0 on demande la saisie de b\n    b = int(input(\"entrez une valeur pour b : \"))\n    print(\"la solution de \",a,\"* x = \",b,\"est :\") \n    print(\"x =\",b/a)", "# Exercice 1 : appliquer ses connaissances #\nRédigez un programme qui vous donne la solution de l'équation \n 2ax + b = C\n\npour a≠0", "# Pensez à faire une copie de ce Cahier avant de rédiger \"File …… \"Make a Copy\" \"\n# Renommez le cahier avec votre nom.\n# Utilisez ce champ de saisie pour rédiger votre programme puis le tester.", "la casse des caractères\nIl est important de se souvenir que les instructions en python s'écrivent en minuscule.\nLorsqu'une variable contient des Majuscules il faut dans tout le programme conserver ces majuscules.\nLes champs de code suivants montrent ce qui se produit quand on suit la règle puis lorsqu'on ne la suit pas.", "# utilisation de la casse dans les variables\nCoefficientDirecteur=2\nordonneeAlOrigine=6\nprint (\"y =\",CoefficientDirecteur,\"x +\", ordonneeAlOrigine)", "# non respect de la casse dans un nom de variable\nCoefficientDirecteur=2\nordonneeAlOrigine=6\nprint 
(\"y=\",Coefficientdirecteur,\"x +\", ordonneeAlorigine)", "L'affectation d'une valeur à une variable : utilisation du signe =\nL'affectation peut se faire pour des entiers, des réels, des chaînes de caractères\nLe champ de code suivant montre l'unité de l'affectation quel que soit le type de données.", "# affectation d'une chaîne de caractères \na=\"ceci est une chaîne de caractères\"\nprint(a)\nb='ceci est une chaîne de caractères'\nprint(b)\nc=\"l'idée est de ne pas mélanger les guillemets et les apostrophes\"\nprint(c)\nd='\"Bien mal acquis ne profite jamais\"'\nprint(d)\ne=3.8\nprint(e)\n", "Les variables peuvent être affectées par lot. On peut réaffecter les variables en même temps ou successivement.", "#affectation de a et b \na,b=\" ceci est une chaîne de caractères \",' ceci est une chaîne de caractères '\nprint(a,b)\nf=a+b\nprint(f)\ne,g=3,4\nprint(e,\" : \",g)\nh,i=e+g,e-g\nprint(h,\" : \",i)", "De l'utilité du module lycee pour nos élèves ?\nLe projet Pythonedu semble considérer qu'il faut mettre à disposition des élèves un module qui masque les fonctionnalités de python ... \nIl me semble en tant que prof validé ISN que cette démarche est contre productive. \nQue fait lycee ? \nIl est lié à des librairies python qui doivent être installées pour qu'il puisse fonctionner. \n\nimport math\nimport tkinter as Tk\nimport tkinter.filedialog as tkf\nimport random as alea\nimport matplotlib.pyplot as repere\nimport numpy as np\nimport builtins\nfrom scipy.stats import norm\n\nPuis il crée des fonctions qui sont pour le moins peu utiles ou en tout cas pas nécessaires pour un élève même débutant. Voyons deux exemples, avec lycee et sans lycee, qui pourraient être utilisés. 
\nPour les besoins de la démonstration lycee.py est placé dans le même dossier que ce notebook sur la raspberryPi de Tremplin.", "from lycee import *\n\n# version utilisant lycee \nfrom lycee import *\nx=demande('Entrez une valeur pour x = ')\ny=demande('Entrez une valeur pour y = ')\nx,y=x+y,x-y\nprint(\"x = \",x,\"y = \",y)\nx,y=x+y,x-y\nprint (\"maintenant , x =\",x,\"et y =\",y)\n\n# version utilisant python \nx = int(input(\"Entrez un entier x : \"))\ny = int(input(\"Entrez un entier y : \"))\nprint(\"réaffectation de x ET y avec x,y = x+y,x-y \")\nx,y=x+y,x-y\nprint(\"nouvelle valeur de x = \",x,\"nouvelle valeur de y = \",y)\nprint(\"réaffectation de x ET y avec x,y = x+y,x-y\") \nx,y=x+y,x-y\nprint(\"Maintenant la valeur de x = \",x,\" et la valeur de y = \",y)", "On constate donc dans le premier cas que la démarche masque ce que fait le programme en privant l'élève d'un commentaire intermédiaire expliquant les réaffectation, et que la commande python est remplacée par une fonction qui au final n'a aucune utilité pour comprendre comment le programme fonctionne. \nEn ajoutant l'importation de time on peut même marquer des pauses qui permettent à l'élève de comprendre la séquence qui se produit dans le programme.", "# version utilisant python \nimport time\nx = int(input(\"Entrez un entier x : \"))\ny = int(input(\"Entrez un entier y : \"))\nprint(\"réaffectation de x ET y avec x,y = x+y,x-y \")\nx,y=x+y,x-y\nprint(\"Calculez les valeurs attendue de x et y\")\ntime.sleep(10)\nprint(\"nouvelle valeur de x = \",x,\"nouvelle valeur de y = \",y)\ntime.sleep(10)\nprint(\"réaffectation de x ET y avec x,y = x+y,x-y\") \nx,y=x+y,x-y\nprint(\"Calculez les valeurs attendue de x et y\")\ntime.sleep(10)\nprint(\"Maintenant la valeur de x = \",x,\" et la valeur de y = \",y)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stevetjoa/stanford-mir
numpy_basics.ipynb
mit
[ "import numpy, scipy, matplotlib.pyplot as plt, pandas, librosa", "&larr; Back to Index\nNumPy and SciPy\nThe quartet of NumPy, SciPy, Matplotlib, and IPython is a popular combination in the Python world. We will use each of these libraries in this workshop.\nTutorial\nNumPy is one of the most popular libraries for numerical computing in the world. It is used in several disciplines including image processing, finance, bioinformatics, and more. This entire workshop is based upon NumPy and its derivatives.\nIf you are new to NumPy, follow this NumPy Tutorial.\nSciPy is a Python library for scientific computing which builds on top of NumPy. If NumPy is like the Matlab core, then SciPy is like the Matlab toolboxes. It includes support for linear algebra, sparse matrices, spatial data structures, statistics, and more.\nWhile there is a SciPy Tutorial, it isn't critical that you follow it for this workshop.\nSpecial Arrays", "print(numpy.arange(5))\n\nprint(numpy.linspace(0, 5, 10, endpoint=False))\n\nprint(numpy.zeros(5))\n\nprint(numpy.ones(5))\n\nprint(numpy.ones((5,2)))\n\nprint(numpy.random.randn(5)) # random Gaussian, zero-mean unit-variance\n\nprint(numpy.random.randn(5,2))", "Slicing Arrays", "x = numpy.arange(10)\nprint(x[2:4])\n\nprint(x[-1])", "The optional third parameter indicates the increment value:", "print(x[0:8:2])\n\nprint(x[4:2:-1])", "If you omit the start index, the slice implicitly starts from zero:", "print(x[:4])\n\nprint(x[:999])\n\nprint(x[::-1])", "Array Arithmetic", "x = numpy.arange(5)\ny = numpy.ones(5)\nprint(x+2*y)", "dot computes the dot product, or inner product, between arrays or matrices.", "x = numpy.random.randn(5)\ny = numpy.ones(5)\nprint(numpy.dot(x, y))\n\nx = numpy.random.randn(5,3)\ny = numpy.ones((3,2))\nprint(numpy.dot(x, y))", "Boolean Operations", "x = numpy.arange(10)\nprint(x < 5)\n\ny = numpy.ones(10)\nprint(x < y)", "Distance Metrics", "from scipy.spatial import distance\nprint(distance.euclidean([0, 0], [3, 
4]))\nprint(distance.sqeuclidean([0, 0], [3, 4]))\nprint(distance.cityblock([0, 0], [3, 4]))\nprint(distance.chebyshev([0, 0], [3, 4]))", "The cosine distance measures the angle between two vectors:", "print(distance.cosine([67, 0], [89, 0]))\nprint(distance.cosine([67, 0], [0, 89]))", "Sorting\nNumPy arrays have a method, sort, which sorts the array in-place.", "x = numpy.random.randn(5)\nprint(x)\nx.sort()\nprint(x)", "numpy.argsort returns an array of indices, ind, such that x[ind] is a sorted version of x.", "x = numpy.random.randn(5)\nprint(x)\nind = numpy.argsort(x)\nprint(ind)\nprint(x[ind])", "&larr; Back to Index" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mirjalil/DataScience
python-stuff/regular-expression.ipynb
gpl-2.0
[ "Regular Expression\n| | | |\n|----------|--------------------------------------------|---|\n| ^ | the start of a line | '^From:' |\n| $ | end of a line | |\n| . | wildcard for any character | |\n| * | Repeating a character 0 or more times | '\\s*' or '.*' |\n| *? | | |\n| + | Repeating a character 1 or more times | '[0-9]+' |\n| +? | | |\n| \\s | white space | |\n| \\S | non-white space (any non-blank character) | |\n| [list] | matching a single character in the list | |\n| [^list] | matching any character not in the list | |\n| [a-z0-9] | range of characters a to z, and digits 0-9 | |\n| ( ) | String extraction | |\n\n\nIf two intersecting matches were found:\n Greedy expressions will output the largest matches\n Non-greedy: satisfying the expression with the shortest match\n\n\nTo search for a bigger match, but extract a subset of the match:\n Example: '^From: (\\S+@\\S+)'\n\n\n```\nimport re\nre.search()\n```\nEnron email dataset: https://www.cs.cmu.edu/~./enron/\n Python regular expression functions:\nre.search() to see if there is any pattern match\nre.findall() to extract all the matches in a list", "import re\n\nemaildata = open('enron-email-dataset.txt')\nfor line in emaildata:\n line = line.rstrip()\n if re.search('^From:', line):\n print(line)\n\nx = 'Team A beat team B 38-7. 
That was the greatest record for team A since 1987.'\n\ny = re.findall('[0-9]+', x)\n\ny", "Extracting email addresses from text", "x = 'My work email address is example@work.com and \\\nmy personal email is example@personal.com.'\n\nre.findall('\\S+@\\S+', x)\n\nx = 'From: me@you.com My work email address is example@work.com and \\\nmy personal email is example@personal.com.'\n\nre.findall('^From: (\\S+@\\S+)', x)", "Extracting the domain name in email addresses", "x = 'My work email address is example@work.com and \\\nmy personal email is example@personal.com.'\n\nre.findall('\\S+@(\\S+)', x)\n\nre.findall('@([^ ]+)', x.rstrip())\n\nemaildata = open('enron-email-dataset.txt')\nfor line in emaildata:\n line = line.rstrip()\n res = re.findall('^X-To: (.*@\\S+)', line)\n if (len(res)>0):\n print(res)", "Extracting prices in text", "x = \"It's a big weekend sale! 70% Everything. \\\nYou can get jeans for $9.99 or get 2 for only $14.99\"\n\nre.findall('\\$([0-9.]+)', x)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
probml/pyprobml
notebooks/book1/13/mlp_cifar_pytorch.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/supplements/mlp_cifar_pytorch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nMLP for image classification using PyTorch\nIn this section, we follow Chap. 7 of the Deep Learning With PyTorch book, and illustrate how to fit an MLP to a two-class version of CIFAR. (We modify the code from here.)", "import sklearn\nimport scipy\nimport scipy.optimize\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits import mplot3d\nfrom mpl_toolkits.mplot3d import Axes3D\nimport seaborn as sns\nimport warnings\n\nwarnings.filterwarnings(\"ignore\")\nimport itertools\nimport time\nfrom functools import partial\n\nimport os\n\nimport numpy as np\nfrom scipy.special import logsumexp\n\nnp.set_printoptions(precision=3)\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nimport torchvision\n\nprint(\"torch version {}\".format(torch.__version__))\nif torch.cuda.is_available():\n print(torch.cuda.get_device_name(0))\n print(\"current device {}\".format(torch.cuda.current_device()))\nelse:\n print(\"Torch cannot find GPU\")\n\n\ndef set_seed(seed):\n np.random.seed(seed)\n torch.manual_seed(seed)\n torch.cuda.manual_seed_all(seed)\n\n\nuse_cuda = torch.cuda.is_available()\ndevice = torch.device(\"cuda:0\" if use_cuda else \"cpu\")\n# torch.backends.cudnn.benchmark = True", "Get the CIFAR dataset", "from torchvision import datasets\n\nfolder = \"data\"\ncifar10 = datasets.CIFAR10(folder, train=True, download=True)\ncifar10_val = datasets.CIFAR10(folder, train=False, download=True)\n\nprint(type(cifar10))\nprint(type(cifar10).__mro__) # module resolution order shows class hierarchy\n\nprint(len(cifar10))\nimg, label = cifar10[99]\nprint(type(img))\nprint(img)\nplt.imshow(img)\nplt.show()\n\nclass_names = [\"airplane\", \"automobile\", \"bird\", \"cat\", \"deer\", \"dog\", 
\"frog\", \"horse\", \"ship\", \"truck\"]\n\nfig = plt.figure(figsize=(8, 3))\nnum_classes = 10\nfor i in range(num_classes):\n ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])\n ax.set_title(class_names[i])\n img = next(img for img, label in cifar10 if label == i)\n plt.imshow(img)\nplt.show()", "Convert to tensors", "# Now we want to convert this to a tensor\nfrom torchvision import transforms\n\nto_tensor = transforms.ToTensor()\n\nimg, label = cifar10[99]\nimg_t = to_tensor(img)\nprint(type(img))\n# print(img.shape)\nprint(type(img_t))\nprint(img_t.shape) # channels * height * width, here channels=3 (RGB)\nprint(img_t.min(), img_t.max()) # pixel values are rescaled to 0..1\n\n# transform the whole dataset to tensors\ncifar10 = datasets.CIFAR10(folder, train=True, download=False, transform=transforms.ToTensor())\n\nimg, label = cifar10[99]\nprint(type(img))\nplt.imshow(img.permute(1, 2, 0)) # matplotlib expects H*W*C\nplt.show()", "Standardize the inputs\nWe standardize the features by computing the mean and std of each channel, averaging across all pixels and all images. 
This will help optimization.", "# we load the whole training set as a batch, of size 3*H*W*N\n\nimgs = torch.stack([img for img, _ in cifar10], dim=3)\nprint(imgs.shape)\n\nimgs_flat = imgs.view(3, -1) # reshape by keeping first 3 channels, but flatten all others\nprint(imgs_flat.shape)\nmu = imgs_flat.mean(dim=1) # average over second dimension (H*W*N) to get one mean per channel\nsigma = imgs_flat.std(dim=1)\nprint(mu)\nprint(sigma)\n\ncifar10 = datasets.CIFAR10(\n folder,\n train=True,\n download=False,\n transform=transforms.Compose([transforms.ToTensor(), transforms.Normalize(mu, sigma)]),\n)\n\ncifar10_val = datasets.CIFAR10(\n folder,\n train=False,\n download=False,\n transform=transforms.Compose(\n [\n transforms.ToTensor(),\n transforms.Normalize(mu, sigma),\n ]\n ),\n)\n\n# rescaled data is harder to visualize\nimg, _ = cifar10[99]\n\nplt.imshow(img.permute(1, 2, 0))\nplt.show()", "Create two-class version of dataset\nWe extract data which correspond to airplane or bird.\nThe result object is a list of pairs.\nThis \"acts like\" an object of type torch.utilts.data.dataset.Dataset, since it implements the len() and item index methods.", "class_names = [\"airplane\", \"automobile\", \"bird\", \"cat\", \"deer\", \"dog\", \"frog\", \"horse\", \"ship\", \"truck\"]\n\nlabel_map = {0: 0, 2: 1} # 0(airplane)->0, 2(bird)->1\ncifar2 = [(img, label_map[label]) for img, label in cifar10 if label in [0, 2]]\ncifar2_val = [(img, label_map[label]) for img, label in cifar10_val if label in [0, 2]]\n\nprint(len(cifar2))\nprint(len(cifar2_val))", "A shallow, fully connected model", "img, label = cifar10[0]\nimg = img.view(-1)\nninputs = len(img)\nnhidden = 512\nnclasses = 2\n\ntorch.manual_seed(0)\nmodel = nn.Sequential(nn.Linear(ninputs, nhidden), nn.Tanh(), nn.Linear(nhidden, nclasses), nn.LogSoftmax(dim=1))\nprint(model)", "We can name the layers so we can access their activations and/or parameters more easily.", "torch.manual_seed(0)\nfrom collections import 
OrderedDict\n\nmodel = nn.Sequential(\n OrderedDict(\n [\n (\"hidden_linear\", nn.Linear(ninputs, nhidden)),\n (\"activation\", nn.Tanh()),\n (\"output_linear\", nn.Linear(nhidden, nclasses)),\n (\"softmax\", nn.LogSoftmax(dim=1)),\n ]\n )\n)\nprint(model)", "Let's test the model.", "img, label = cifar2[0]\nimg_batch = img.view(-1).unsqueeze(0)\nprint(img_batch.shape)\nlogprobs = model(img_batch)\nprint(logprobs.shape)\nprint(logprobs)\nprobs = torch.exp(logprobs) # elementwise\nprint(probs)\nprint(probs.sum(1))", "Negative log likelihood loss.", "loss_fn = nn.NLLLoss()\nloss = loss_fn(logprobs, torch.tensor([label]))\nprint(loss)", "Let's access the output of the logit layer directly, bypassing the final log softmax.\n(We borrow a trick from here).", "activation = {}\n\n\ndef get_activation(name):\n def hook(model, input, output):\n activation[name] = output.detach()\n\n return hook\n\n\nmodel.output_linear.register_forward_hook(get_activation(\"output_linear\"))\n\nlogprobs = model(img_batch).detach().numpy()\nlogits = activation[\"output_linear\"]\nlogprobs2 = F.log_softmax(logits).detach().numpy()\n\nprint(logprobs)\nprint(logprobs2)\nassert np.allclose(logprobs, logprobs2)", "We can also modify the model to return logits.", "torch.manual_seed(0)\nmodel_logits = nn.Sequential(nn.Linear(ndims_input, nhidden), nn.Tanh(), nn.Linear(nhidden, nclasses))\n\nlogits2 = model_logits(img_batch)\nprint(logits)\nprint(logits2)\ntorch.testing.assert_allclose(logits, logits2)", "In this case, we need to modify the loss to take in logits.", "logprobs = model(img_batch)\nloss = nn.NLLLoss()(logprobs, torch.tensor([label]))\n\nlogits = model_logits(img_batch)\nloss2 = nn.CrossEntropyLoss()(logits, torch.tensor([label]))\n\nprint(loss)\nprint(loss2)\ntorch.testing.assert_allclose(loss, loss2)", "We can also use the functional API to specify the model. 
This avoids having to create stateless layers (i.e., layers with no adjustable parameters), such as the tanh or softmax layers.", "class MLP(nn.Module):\n def __init__(self, ninputs, nhidden, nclasses):\n super().__init__()\n self.fc1 = nn.Linear(ninputs, nhidden)\n self.fc2 = nn.Linear(nhidden, nclasses)\n\n def forward(self, x):\n out = F.tanh(self.fc1(x))\n out = self.fc2(out)\n return out # logits\n\n\ntorch.manual_seed(0)\nmodel = MLP(ninputs, nhidden, nclasses)\nlogits = model(img_batch)\nlogits2 = model_logits(img_batch)\nprint(logits)\nprint(logits2)\ntorch.testing.assert_allclose(logits, logits2)\n\n# print(list(model.named_parameters()))\nnparams = [p.numel() for p in model.parameters() if p.requires_grad == True]\nprint(nparams)\n# weights1, bias1, weights2, bias2\nprint([ninputs * nhidden, nhidden, nhidden * nclasses, nclasses])", "Evaluation pre-training", "def compute_accuracy(model, loader):\n correct = 0\n total = 0\n with torch.no_grad():\n for imgs, labels in loader:\n outputs = model(imgs.view(imgs.shape[0], -1))\n _, predicted = torch.max(outputs, dim=1)\n total += labels.shape[0]\n correct += int((predicted == labels).sum())\n return correct / total\n\n\ntrain_loader = torch.utils.data.DataLoader(cifar2, batch_size=64, shuffle=False)\nval_loader = torch.utils.data.DataLoader(cifar2_val, batch_size=64, shuffle=False)\n\ntorch.manual_seed(0)\nmodel = MLP(ninputs, nhidden, nclasses)\n\nacc_train = compute_accuracy(model, train_loader)\nacc_val = compute_accuracy(model, val_loader)\nprint([acc_train, acc_val])", "Training loop", "torch.manual_seed(0)\nmodel = MLP(ninputs, nhidden, nclasses)\n\nlearning_rate = 1e-2\noptimizer = optim.SGD(model.parameters(), lr=learning_rate)\nloss_fn = nn.CrossEntropyLoss()\nn_epochs = 20\n\nfor epoch in range(n_epochs):\n for imgs, labels in train_loader:\n outputs = model(imgs.view(imgs.shape[0], -1))\n loss = loss_fn(outputs, labels)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n\n # At end of 
each epoch\n acc_val = compute_accuracy(model, val_loader)\n loss_train_batch = float(loss)\n print(f\"Epoch {epoch}, Batch Loss {loss_train_batch}, Val acc {acc_val}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
NeuralEnsemble/elephant
doc/tutorials/spade.ipynb
bsd-3-clause
[ "SPADE Tutorial", "import numpy as np\nimport quantities as pq\nimport neo\nimport elephant\nimport viziphant\nnp.random.seed(4542)", "Generate correlated data\nSPADE is a method to detect repeated spatio-temporal activity patterns in parallel spike train data that occur in excess to chance expectation. In this tutorial, we will use SPADE to detect the simplest type of such patterns, synchronous events that are found across a subset of the neurons considered (i.e., patterns that do not exhibit a temporal extent). We will demonstrate the method on stochastic data in which we control the patterns statistics. In a first step, let use generate 10 random spike trains, each modeled after a Poisson statistics, in which a certain proportion of the spikes is synchronized across the spike trains. To this end, we use the compound_poisson_process() function, which expects the rate of the resulting processes in addition to a distribution A[n] indicating the likelihood of finding synchronous spikes of a given order n. In our example, we construct the distribution such that we have a small probability to produce a synchronous event of order 10 (A[10]==0.02). Otherwise spikes are not synchronous with those of other neurons (i.e., synchronous events of order 1, A[1]==0.98). 
Notice that the length of the distribution A determines the number len(A)-1 of spiketrains returned by the function, and that A[0] is ignored for reasons of clearer notation.", "spiketrains = elephant.spike_train_generation.compound_poisson_process(\n rate=5*pq.Hz, A=[0]+[0.98]+[0]*8+[0.02], t_stop=10*pq.s)\nlen(spiketrains)", "In a second step, we add 90 purely random Poisson spike trains using the homogeneous_poisson_process()| function, such that in total we have 10 spiketrains that exhibit occasional synchronized events, and 90 uncorrelated spike trains.", "for i in range(90):\n spiketrains.append(elephant.spike_train_generation.homogeneous_poisson_process(\n rate=5*pq.Hz, t_stop=10*pq.s))", "Mining patterns with SPADE\nIn the next step, we run the spade() method to extract the synchronous patterns. We choose 1 ms as the time scale for discretization of the patterns, and specify a window length of 1 bin (meaning, we search for synchronous patterns only). Also, we concentrate on patterns that involve at least 3 spikes, therefore significantly accelerating the search by ignoring frequent events of order 2. To test for the significance of patterns, we set to repeat the pattern detection on 100 spike dither surrogates of the original data, creating by dithing spike up to 5 ms in time. For the final step of pattern set reduction (psr), we use the standard parameter set [0, 0, 0].", "patterns = elephant.spade.spade(\n spiketrains=spiketrains, binsize=1*pq.ms, winlen=1, min_spikes=3, \n n_surr=100,dither=5*pq.ms, \n psr_param=[0,0,0],\n output_format='patterns')['patterns']", "The output patterns of the method contains information on the found patterns. In this case, we retrieve the pattern we put into the data: a pattern involving the first 10 neurons (IDs 0 to 9), occuring 5 times.", "patterns", "Lastly, we visualize the found patterns using the function plot_patterns() of the viziphant library. 
Marked in red are the patterns of order ten injected into the data.", "viziphant.patterns.plot_patterns(spiketrains, patterns)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.20/_downloads/78dfec6019dc9e7214e1efd97200f1c4/plot_10_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "Overview of MEG/EEG analysis with MNE-Python\nThis tutorial covers the basic EEG/MEG pipeline for event-related analysis:\nloading data, epoching, averaging, plotting, and estimating cortical activity\nfrom sensor data. It introduces the core MNE-Python data structures\n:class:~mne.io.Raw, :class:~mne.Epochs, :class:~mne.Evoked, and\n:class:~mne.SourceEstimate, and covers a lot of ground fairly quickly (at the\nexpense of depth). Subsequent tutorials address each of these topics in greater\ndetail.\n :depth: 1\nWe begin by importing the necessary Python modules:", "import os\nimport numpy as np\nimport mne", "Loading data\nMNE-Python data structures are based around the FIF file format from\nNeuromag, but there are reader functions for a wide variety of other\ndata formats &lt;data-formats&gt;. MNE-Python also has interfaces to a\nvariety of publicly available datasets &lt;datasets&gt;,\nwhich MNE-Python can download and manage for you.\nWe'll start this tutorial by loading one of the example datasets (called\n\"sample-dataset\"), which contains EEG and MEG data from one subject\nperforming an audiovisual experiment, along with structural MRI scans for\nthat subject. The :func:mne.datasets.sample.data_path function will\nautomatically download the dataset if it isn't found in one of the expected\nlocations, then return the directory path to the dataset (see the\ndocumentation of :func:~mne.datasets.sample.data_path for a list of places\nit checks before downloading). 
Note also that for this tutorial to run\nsmoothly on our servers, we're using a filtered and downsampled version of\nthe data (:file:sample_audvis_filt-0-40_raw.fif), but an unfiltered version\n(:file:sample_audvis_raw.fif) is also included in the sample dataset and\ncould be substituted here when running the tutorial locally.", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_filt-0-40_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)", "By default, :func:~mne.io.read_raw_fif displays some information about the\nfile it's loading; for example, here it tells us that there are four\n\"projection items\" in the file along with the recorded data; those are\n:term:SSP projectors &lt;projector&gt; calculated to remove environmental noise\nfrom the MEG signals, plus a projector to mean-reference the EEG channels;\nthese are discussed in the tutorial tut-projectors-background.\nIn addition to the information displayed during loading,\nyou can get a glimpse of the basic details of a :class:~mne.io.Raw object\nby printing it; even more is available by printing its info attribute\n(a :class:dictionary-like object &lt;mne.Info&gt; that is preserved across\n:class:~mne.io.Raw, :class:~mne.Epochs, and :class:~mne.Evoked\nobjects). The info data structure keeps track of channel locations,\napplied filters, projectors, etc. Notice especially the chs entry,\nshowing that MNE-Python detects different sensor types and handles each\nappropriately. See tut-info-class for more on the :class:~mne.Info\nclass.", "print(raw)\nprint(raw.info)", ":class:~mne.io.Raw objects also have several built-in plotting methods;\nhere we show the power spectral density (PSD) for each sensor type with\n:meth:~mne.io.Raw.plot_psd, as well as a plot of the raw sensor traces with\n:meth:~mne.io.Raw.plot. 
In the PSD plot, we'll only plot frequencies below\n50 Hz (since our data are low-pass filtered at 40 Hz). In interactive Python\nsessions, :meth:~mne.io.Raw.plot is interactive and allows scrolling,\nscaling, bad channel marking, annotation, projector toggling, etc.", "raw.plot_psd(fmax=50)\nraw.plot(duration=5, n_channels=30)", "Preprocessing\nMNE-Python supports a variety of preprocessing approaches and techniques\n(maxwell filtering, signal-space projection, independent components analysis,\nfiltering, downsampling, etc); see the full list of capabilities in the\n:mod:mne.preprocessing and :mod:mne.filter submodules. Here we'll clean\nup our data by performing independent components analysis\n(:class:~mne.preprocessing.ICA); for brevity we'll skip the steps that\nhelped us determined which components best capture the artifacts (see\ntut-artifact-ica for a detailed walk-through of that process).", "# set up and fit the ICA\nica = mne.preprocessing.ICA(n_components=20, random_state=97, max_iter=800)\nica.fit(raw)\nica.exclude = [1, 2] # details on how we picked these are omitted here\nica.plot_properties(raw, picks=ica.exclude)", "Once we're confident about which component(s) we want to remove, we pass them\nas the exclude parameter and then apply the ICA to the raw signal. The\n:meth:~mne.preprocessing.ICA.apply method requires the raw data to be\nloaded into memory (by default it's only read from disk as-needed), so we'll\nuse :meth:~mne.io.Raw.load_data first. 
We'll also make a copy of the\n:class:~mne.io.Raw object so we can compare the signal before and after\nartifact removal side-by-side:", "orig_raw = raw.copy()\nraw.load_data()\nica.apply(raw)\n\n# show some frontal channels to clearly illustrate the artifact removal\nchs = ['MEG 0111', 'MEG 0121', 'MEG 0131', 'MEG 0211', 'MEG 0221', 'MEG 0231',\n 'MEG 0311', 'MEG 0321', 'MEG 0331', 'MEG 1511', 'MEG 1521', 'MEG 1531',\n 'EEG 001', 'EEG 002', 'EEG 003', 'EEG 004', 'EEG 005', 'EEG 006',\n 'EEG 007', 'EEG 008']\nchan_idxs = [raw.ch_names.index(ch) for ch in chs]\norig_raw.plot(order=chan_idxs, start=12, duration=4)\nraw.plot(order=chan_idxs, start=12, duration=4)", "Detecting experimental events\nThe sample dataset includes several :term:\"STIM\" channels &lt;stim channel&gt;\nthat recorded electrical\nsignals sent from the stimulus delivery computer (as brief DC shifts /\nsquarewave pulses). These pulses (often called \"triggers\") are used in this\ndataset to mark experimental events: stimulus onset, stimulus type, and\nparticipant response (button press). The individual STIM channels are\ncombined onto a single channel, in such a way that voltage\nlevels on that channel can be unambiguously decoded as a particular event\ntype. On older Neuromag systems (such as that used to record the sample data)\nthis summation channel was called STI 014, so we can pass that channel\nname to the :func:mne.find_events function to recover the timing and\nidentity of the stimulus events.", "events = mne.find_events(raw, stim_channel='STI 014')\nprint(events[:5]) # show the first 5", "The resulting events array is an ordinary 3-column :class:NumPy array\n&lt;numpy.ndarray&gt;, with sample number in the first column and integer event ID\nin the last column; the middle column is usually ignored. Rather than keeping\ntrack of integer event IDs, we can provide an event dictionary that maps\nthe integer IDs to experimental conditions or events. 
In this dataset, the\nmapping looks like this:\n+----------+----------------------------------------------------------+\n| Event ID | Condition |\n+==========+==========================================================+\n| 1 | auditory stimulus (tone) to the left ear |\n+----------+----------------------------------------------------------+\n| 2 | auditory stimulus (tone) to the right ear |\n+----------+----------------------------------------------------------+\n| 3 | visual stimulus (checkerboard) to the left visual field |\n+----------+----------------------------------------------------------+\n| 4 | visual stimulus (checkerboard) to the right visual field |\n+----------+----------------------------------------------------------+\n| 5 | smiley face (catch trial) |\n+----------+----------------------------------------------------------+\n| 32 | subject button press |\n+----------+----------------------------------------------------------+", "event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'smiley': 5, 'buttonpress': 32}", "Event dictionaries like this one are used when extracting epochs from\ncontinuous data; the / character in the dictionary keys allows pooling\nacross conditions by requesting partial condition descriptors (i.e.,\nrequesting 'auditory' will select all epochs with Event IDs 1 and 2;\nrequesting 'left' will select all epochs with Event IDs 1 and 3). An\nexample of this is shown in the next section. There is also a convenient\n:func:~mne.viz.plot_events function for visualizing the distribution of\nevents across the duration of the recording (to make sure event detection\nworked as expected). 
Here we'll also make use of the :class:~mne.Info\nattribute to get the sampling frequency of the recording (so our x-axis will\nbe in seconds instead of in samples).", "fig = mne.viz.plot_events(events, event_id=event_dict, sfreq=raw.info['sfreq'],\n first_samp=raw.first_samp)", "For paradigms that are not event-related (e.g., analysis of resting-state\ndata), you can extract regularly spaced (possibly overlapping) spans of data\nby creating events using :func:mne.make_fixed_length_events and then\nproceeding with epoching as described in the next section.\nEpoching continuous data\nThe :class:~mne.io.Raw object and the events array are the bare minimum\nneeded to create an :class:~mne.Epochs object, which we create with the\n:class:~mne.Epochs class constructor. Here we'll also specify some data\nquality constraints: we'll reject any epoch where peak-to-peak signal\namplitude is beyond reasonable limits for that channel type. This is done\nwith a rejection dictionary; you may include or omit thresholds for any of\nthe channel types present in your data. The values given here are reasonable\nfor this particular dataset, but may need to be adapted for different\nhardware or recording conditions. For a more automated approach, consider\nusing the autoreject package_.", "reject_criteria = dict(mag=4000e-15, # 4000 fT\n grad=4000e-13, # 4000 fT/cm\n eeg=150e-6, # 150 µV\n eog=250e-6) # 250 µV", "We'll also pass the event dictionary as the event_id parameter (so we can\nwork with easy-to-pool event labels instead of the integer event IDs), and\nspecify tmin and tmax (the time relative to each event at which to\nstart and end each epoch). 
As mentioned above, by default\n:class:~mne.io.Raw and :class:~mne.Epochs data aren't loaded into memory\n(they're accessed from disk only when needed), but here we'll force loading\ninto memory using the preload=True parameter so that we can see the\nresults of the rejection criteria being applied:", "epochs = mne.Epochs(raw, events, event_id=event_dict, tmin=-0.2, tmax=0.5,\n reject=reject_criteria, preload=True)", "Next we'll pool across left/right stimulus presentations so we can compare\nauditory versus visual responses. To avoid biasing our signals to the\nleft or right, we'll use :meth:~mne.Epochs.equalize_event_counts first to\nrandomly sample epochs from each condition to match the number of epochs\npresent in the condition with the fewest good epochs.", "conds_we_care_about = ['auditory/left', 'auditory/right',\n 'visual/left', 'visual/right']\nepochs.equalize_event_counts(conds_we_care_about) # this operates in-place\naud_epochs = epochs['auditory']\nvis_epochs = epochs['visual']\ndel raw, epochs # free up memory", "Like :class:~mne.io.Raw objects, :class:~mne.Epochs objects also have a\nnumber of built-in plotting methods. One is :meth:~mne.Epochs.plot_image,\nwhich shows each epoch as one row of an image map, with color representing\nsignal magnitude; the average evoked response and the sensor location are\nshown below the image:", "aud_epochs.plot_image(picks=['MEG 1332', 'EEG 021'])", "<div class=\"alert alert-info\"><h4>Note</h4><p>Both :class:`~mne.io.Raw` and :class:`~mne.Epochs` objects have\n :meth:`~mne.Epochs.get_data` methods that return the underlying data\n as a :class:`NumPy array <numpy.ndarray>`. Both methods have a ``picks``\n parameter for subselecting which channel(s) to return; ``raw.get_data()``\n has additional parameters for restricting the time domain. 
The resulting\n matrices have dimension ``(n_channels, n_times)`` for\n :class:`~mne.io.Raw` and ``(n_epochs, n_channels, n_times)`` for\n :class:`~mne.Epochs`.</p></div>\n\nTime-frequency analysis\nThe :mod:mne.time_frequency submodule provides implementations of several\nalgorithms to compute time-frequency representations, power spectral density,\nand cross-spectral density. Here, for example, we'll compute for the auditory\nepochs the induced power at different frequencies and times, using Morlet\nwavelets. On this dataset the result is not especially informative (it just\nshows the evoked \"auditory N100\" response); see here\n&lt;inter-trial-coherence&gt; for a more extended example on a dataset with richer\nfrequency content.", "frequencies = np.arange(7, 30, 3)\npower = mne.time_frequency.tfr_morlet(aud_epochs, n_cycles=2, return_itc=False,\n freqs=frequencies, decim=3)\npower.plot(['MEG 1332'])", "Estimating evoked responses\nNow that we have our conditions in aud_epochs and vis_epochs, we can\nget an estimate of evoked responses to auditory versus visual stimuli by\naveraging together the epochs in each condition. This is as simple as calling\nthe :meth:~mne.Epochs.average method on the :class:~mne.Epochs object,\nand then using a function from the :mod:mne.viz module to compare the\nglobal field power for each sensor type of the two :class:~mne.Evoked\nobjects:", "aud_evoked = aud_epochs.average()\nvis_evoked = vis_epochs.average()\n\nmne.viz.plot_compare_evokeds(dict(auditory=aud_evoked, visual=vis_evoked),\n legend='upper left', show_sensors='upper right')", "We can also get a more detailed view of each :class:~mne.Evoked object\nusing other plotting methods such as :meth:~mne.Evoked.plot_joint or\n:meth:~mne.Evoked.plot_topomap. 
Here we'll examine just the EEG channels,\nand see the classic auditory evoked N100-P200 pattern over dorso-frontal\nelectrodes, then plot scalp topographies at some additional arbitrary times:", "aud_evoked.plot_joint(picks='eeg')\naud_evoked.plot_topomap(times=[0., 0.08, 0.1, 0.12, 0.2], ch_type='eeg')", "Evoked objects can also be combined to show contrasts between conditions,\nusing the :func:mne.combine_evoked function. A simple difference can be\ngenerated by negating one of the :class:~mne.Evoked objects passed into the\nfunction. We'll then plot the difference wave at each sensor using\n:meth:~mne.Evoked.plot_topo:", "evoked_diff = mne.combine_evoked([aud_evoked, -vis_evoked], weights='equal')\nevoked_diff.pick_types('mag').plot_topo(color='r', legend=False)", "Inverse modeling\nFinally, we can estimate the origins of the evoked activity by projecting the\nsensor data into this subject's :term:source space (a set of points either\non the cortical surface or within the cortical volume of that subject, as\nestimated by structural MRI scans). MNE-Python supports lots of ways of doing\nthis (dynamic statistical parametric mapping, dipole fitting, beamformers,\netc.); here we'll use minimum-norm estimation (MNE) to generate a continuous\nmap of activation constrained to the cortical surface. MNE uses a linear\n:term:inverse operator to project EEG+MEG sensor measurements into the\nsource space. The inverse operator is computed from the\n:term:forward solution for this subject and an estimate of the\ncovariance of sensor measurements &lt;tut_compute_covariance&gt;. For this\ntutorial we'll skip those computational steps and load a pre-computed inverse\noperator from disk (it's included with the sample data\n&lt;sample-dataset&gt;). 
Because this \"inverse problem\" is underdetermined (there\nis no unique solution), here we further constrain the solution by providing a\nregularization parameter specifying the relative smoothness of the current\nestimates in terms of a signal-to-noise ratio (where \"noise\" here is akin to\nbaseline activity level across all of cortex).", "# load inverse operator\ninverse_operator_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis-meg-oct-6-meg-inv.fif')\ninv_operator = mne.minimum_norm.read_inverse_operator(inverse_operator_file)\n# set signal-to-noise ratio (SNR) to compute regularization parameter (λ²)\nsnr = 3.\nlambda2 = 1. / snr ** 2\n# generate the source time course (STC)\nstc = mne.minimum_norm.apply_inverse(vis_evoked, inv_operator,\n lambda2=lambda2,\n method='MNE') # or dSPM, sLORETA, eLORETA", "Finally, in order to plot the source estimate on the subject's cortical\nsurface we'll also need the path to the sample subject's structural MRI files\n(the subjects_dir):", "# path to subjects' MRI files\nsubjects_dir = os.path.join(sample_data_folder, 'subjects')\n# plot\nstc.plot(initial_time=0.1, hemi='split', views=['lat', 'med'],\n subjects_dir=subjects_dir)", "The remaining tutorials have much more detail on each of these topics (as\nwell as many other capabilities of MNE-Python not mentioned here:\nconnectivity analysis, encoding/decoding models, lots more visualization\noptions, etc). Read on to learn more!\n.. LINKS" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
awhite40/pymks
notebooks/elasticity_3D.ipynb
mit
[ "Linear Elasticity in 3D\nIntroduction\nThis example provides a demonstration of using PyMKS to compute the linear strain field for a two-phase composite material in 3D, and presents a comparison of the computational efficiency of MKS, when compared with the finite element method. The example first provides information on the boundary conditions, used in MKS. Next, delta microstructures are used to calibrate the first-order influence coefficients. The influence coefficients are then used to compute the strain field for a random microstructure. Lastly, the calibrated influence coefficients are scaled up and are used to compute the strain field for a larger microstructure and compared with results computed using finite element analysis.\nElastostatics Equations and Boundary Conditions\nA review of the governing field equations for elastostatics can be found in the Linear Elasticity in 2D example. The same equations are used in the example with the exception that the second lame parameter (shear modulus) $\\mu$ is defined differently in 3D.\n$$ \\mu = \\frac{E}{2(1+\\nu)} $$\nIn general, generating the calibration data for the MKS requires boundary conditions that are both periodic and displaced, which are quite unusual boundary conditions. The ideal boundary conditions are given by:\n$$ u(L, y, z) = u(0, y, z) + L\\bar{\\varepsilon}_{xx} $$\n$$ u(0, L, L) = u(0, 0, L) = u(0, L, 0) = u(0, 0, 0) = 0 $$\n$$ u(x, 0, z) = u(x, L, z) $$\n$$ u(x, y, 0) = u(x, y, L) $$", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport timeit as tm\n", "Modeling with MKS\nCalibration Data and Delta Microstructures\nThe first-order MKS influence coefficients are all that is needed to compute a strain field of a random microstructure, as long as the ratio between the elastic moduli (also known as the contrast) is less than 1.5. 
If this condition is met, we can expect a mean absolute error of 2% or less when comparing the MKS results with those computed using finite element methods [1]. \nBecause we are using distinct phases and the contrast is low enough to only need the first-order coefficients, delta microstructures and their strain fields are all that we need to calibrate the first-order influence coefficients [2]. \nThe make_delta_microstructure function from pymks.datasets can be used to create the two delta microstructures needed to calibrate the first-order influence coefficients for a two-phase microstructure. This function uses the Python module SfePy to compute the strain fields using finite element methods.", "n = 9\ncenter = (n - 1) / 2\n\nfrom pymks.tools import draw_microstructures\nfrom pymks.datasets import make_delta_microstructures\n\nX_delta = make_delta_microstructures(n_phases=2, size=(n, n, n))\ndraw_microstructures(X_delta[:, center])\n", "Using delta microstructures for the calibration of the first-order influence coefficients is essentially the same as using a unit impulse response to find the kernel of a system in signal processing. Delta microstructures are composed of only two phases. One phase is located only at the center cell of the microstructure, and the rest is made up of the other phase. \nGenerating Calibration Data\nThe make_elasticFEstrain_delta function from pymks.datasets provides an easy interface to generate delta microstructures and their strain fields, which can then be used for calibration of the influence coefficients. The function calls the ElasticFESimulation class to compute the strain fields with the boundary conditions given above.\nIn this example, let's look at a two-phase microstructure with elastic moduli values of 80 and 120 and Poisson's ratio values of 0.3 and 0.3 respectively. Let's also set the macroscopic imposed strain equal to 0.02. 
All of these parameters used in the simulation must be passed into the make_elasticFEstrain_delta function.", "from pymks.datasets import make_elastic_FE_strain_delta\nfrom pymks.tools import draw_microstructure_strain\n\nelastic_modulus = (80, 120)\npoissons_ratio = (0.3, 0.3)\nmacro_strain = 0.02 \nsize = (n, n, n)\n\nt = tm.time.time()\nX_delta, strains_delta = make_elastic_FE_strain_delta(elastic_modulus=elastic_modulus,\n poissons_ratio=poissons_ratio,\n size=size, macro_strain=macro_strain)\nprint 'Elapsed Time',tm.time.time() - t, 'Seconds'\n", "Let's take a look at one of the delta microstructures and the $\varepsilon_{xx}$ strain field.", "draw_microstructure_strain(X_delta[0, center, :, :], strains_delta[0, center, :, :])\n", "Calibrating First Order Influence Coefficients\nNow that we have the delta microstructures and their strain fields, we can calibrate the influence coefficients by creating an instance of a basis and the MKSLocalizationModel class. Because we have 2 discrete phases, we will create an instance of the PrimitiveBasis with n_states equal to 2, and then pass the basis in to create an instance of the MKSLocalizationModel. The delta microstructures and their strain fields are then passed to the fit method.", "from pymks import MKSLocalizationModel\nfrom pymks.bases import PrimitiveBasis\n\nprim_basis = PrimitiveBasis(n_states=2)\nmodel = MKSLocalizationModel(basis=prim_basis)\n", "Now, pass the delta microstructures and their strain fields into the fit method to calibrate the first-order influence coefficients.", "model.fit(X_delta, strains_delta)\n", "That's it, the influence coefficients have been calibrated. Let's take a look at them.", "from pymks.tools import draw_coeff\n\ncoeff = model.coeff\ndraw_coeff(coeff[center])\n", "The influence coefficients for $l=0$ have a Gaussian-like shape, while the influence coefficients for $l=1$ are constant-valued. 
The constant-valued influence coefficients may seem superfluous, but they are equally important. They are equivalent to the constant term in multiple linear regression with categorical variables.\nPredicting the Strain Field for a Random Microstructure\nLet's now use our instance of the MKSLocalizationModel class with calibrated influence coefficients to compute the strain field for a random two-phase microstructure and compare it with the results from a finite element simulation. \nThe make_elasticFEstrain_random function from pymks.datasets is an easy way to generate a random microstructure and its strain field results from finite element analysis.", "from pymks.datasets import make_elastic_FE_strain_random\n\nnp.random.seed(99)\nt = tm.time.time()\nX, strain = make_elastic_FE_strain_random(n_samples=1, elastic_modulus=elastic_modulus,\n poissons_ratio=poissons_ratio, size=size, macro_strain=macro_strain)\nprint 'Elapsed Time',(tm.time.time() - t), 'Seconds'\ndraw_microstructure_strain(X[0, center], strain[0, center])\n", "Note that the calibrated influence coefficients can only be used to reproduce the simulation with the same boundary conditions that they were calibrated with.\nNow, to get the strain field from the MKSLocalizationModel, just pass the same microstructure to the predict method.", "t = tm.time.time()\nstrain_pred = model.predict(X)\nprint 'Elapsed Time',tm.time.time() - t,'Seconds'\n", "Finally, let's compare the results from the finite element simulation and the MKS model.", "from pymks.tools import draw_strains_compare\n\ndraw_strains_compare(strain[0, center], strain_pred[0, center])\n", "Let's look at the difference between the two plots.", "from pymks.tools import draw_differences\n\ndraw_differences([strain[0, center] - strain_pred[0, center]], ['Finite Element - MKS'])\n", "The MKS model is able to capture the strain field for the random microstructure after being calibrated with delta microstructures.\nResizing the Coefficients to use on Larger 
Microstructures\nThe influence coefficients that were calibrated on a smaller microstructure can be used to predict the strain field on a larger microstructure through spectral interpolation [3], but the accuracy of the MKS model drops slightly. To demonstrate how this is done, let's generate a new larger $m$ by $m$ random microstructure and its strain field.", "m = 3 * n\ncenter = (m - 1) / 2\nt = tm.time.time()\nX = np.random.randint(2, size=(1, m, m, m))\n", "The influence coefficients that have already been calibrated need to be resized to match the shape of the new larger microstructure that we want to compute the strain field for. This can be done by passing the shape of the new larger microstructure into the 'resize_coeff' method.", "model.resize_coeff(X[0].shape)\n", "Because the coefficients have been resized, they will no longer work for our original $n$ by $n$ sized microstructures they were calibrated on, but they can now be used on the $m$ by $m$ microstructures. Just like before, pass the microstructure as the argument of the predict method to get the strain field.", "from pymks.tools import draw_strains\n\nt = tm.time.time()\nstrain_pred = model.predict(X)\nprint 'Elapsed Time',(tm.time.time() - t), 'Seconds'\ndraw_microstructure_strain(X[0, center], strain_pred[0, center])\n", "References\n[1] Binci M., Fullwood D., Kalidindi S.R., A new spectral framework for establishing localization relationships for elastic behavior of composites and their calibration to finite-element models. Acta Materialia, 2008. 56 (10): p. 2272-2282 doi:10.1016/j.actamat.2008.01.017.\n[2] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 
2716-2725 doi:10.1016/j.actamat.2010.01.007.\n[3] Marko, K., Kalidindi S.R., Fullwood D., Computationally efficient database and spectral interpolation for fully plastic Taylor-type crystal plasticity calculations of face-centered cubic polycrystals. International Journal of Plasticity 24 (2008) 1264–1276 doi:10.1016/j.ijplas.2007.12.002.\n[4] Marko, K., Al-Harbi H.F., Kalidindi S.R., Crystal plasticity simulations using discrete Fourier transforms. Acta Materialia 57 (2009) 1777–1784 doi:10.1016/j.actamat.2008.12.017." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
evolu-tion/GenomeManagement
example/get_protein_sequences.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/evolu-tion/GenomeManagement/blob/master/example/get_protein_sequences.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nHow to get protein sequence from whole protein sequence\n1. Clone GenomeManagement package into your environment", "!git clone https://github.com/evolu-tion/GenomeManagement.git", "2. Download whole protein sequence in FASTA format", "!wget -q https://ftp.ncbi.nlm.nih.gov/genomes/all/GCF/001/995/035/GCF_001995035.1_ASM199503v1/GCF_001995035.1_ASM199503v1_protein.faa.gz\r\n!zcat GCF_001995035.1_ASM199503v1_protein.faa.gz | head", "3. Use proteins_retrieve.py to get your interested protein\n3.1 Create a tab-delimited file of your protein and EC into list_of_protein_id.txt file and save on your working environment\nXP_022136171.1 2.3.1.9\nXP_022142541.1 2.3.1.9\nXP_022152764.1 2.3.3.10\n3.2 Use proteins_retrieve.py to get protein sequence using following command\npython3 proteins_retrieve.py \\\n --input &lt;input_fasta_file&gt; \\\n --list_of_interest &lt;list_of_protein_id.txt&gt; \\\n --output &lt;output_file.fa&gt;\n3.3 Download an output of FASTA file output_file.fa on your working environment", "!python3 GenomeManagement/proteins_retrieve.py \\\r\n --input GCF_001995035.1_ASM199503v1_protein.faa.gz \\\r\n --list_of_interest list_of_protein_id.txt \\\r\n --output output_file.fa" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
david4096/bioapi-examples
python_notebooks/1kg_metadata_service.ipynb
apache-2.0
[ "GA4GH 1000 Genomes Metadata Service\nThis example illustrates how to access the available datasets in a GA4GH server. \nInitialize client\nIn this step we create a client object which will be used to communicate with the server. It is initialized using the URL.", "from ga4gh.client import client\nc = client.HttpClient(\"http://1kgenomes.ga4gh.org\")", "We will continue to refer to this client object for accessing the remote server.\nAccess the dataset\nHere, we issue or first API call to get a listing of datasets hosted by the server. The API call returns an iterator, which is iterated on once to get the 1kgenomes dataset.", "dataset = c.search_datasets().next()\nprint dataset\ndata_set_id = dataset.id", "NOTE:\nWe can also obtain individual datasets by knowing its id. From the above field, we use the id to obtain the dataset which belong to that dataset.", "dataset_via_get = c.get_dataset(dataset_id=data_set_id)\nprint dataset_via_get", "We can then use this identifier to point to the same dataset throughout our examples.\nFor documentation on the service, and more information go to.\nhttps://ga4gh-schemas.readthedocs.io/en/latest/schemas/metadata_service.proto.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kingb12/languagemodelRNN
model_comparisons/noing10_LSTM_v_BOW.ipynb
mit
[ "Comparing Encoder-Decoders Analysis\nModel Architecture", "report_files = [\"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb.json\"]\nlog_files = [\"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json\", \"/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb_logs.json\"]\nreports = []\nlogs = []\nimport json\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfor report_file in report_files:\n with open(report_file) as f:\n reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))\nfor log_file in log_files:\n with open(log_file) as f:\n logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))\n \nfor report_name, report in reports:\n print '\\n', report_name, '\\n'\n print 'Encoder: \\n', report['architecture']['encoder']\n print 'Decoder: \\n', report['architecture']['decoder']\n ", "Perplexity on Each Dataset", "%matplotlib inline\nfrom IPython.display import HTML, display\n\ndef display_table(data):\n display(HTML(\n u'<table><tr>{}</tr></table>'.format(\n u'</tr><tr>'.join(\n u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)\n )\n ))\n\ndef bar_chart(data):\n n_groups = len(data)\n \n train_perps = [d[1] for d in data]\n valid_perps = [d[2] for d in data]\n test_perps = [d[3] for d in data]\n \n fig, ax = plt.subplots(figsize=(10,8))\n \n index = np.arange(n_groups)\n bar_width = 0.3\n\n opacity = 0.4\n error_config = {'ecolor': '0.3'}\n\n train_bars = plt.bar(index, train_perps, bar_width,\n alpha=opacity,\n color='b',\n error_kw=error_config,\n label='Training Perplexity')\n\n valid_bars = plt.bar(index + bar_width, 
valid_perps, bar_width,\n alpha=opacity,\n color='r',\n error_kw=error_config,\n label='Valid Perplexity')\n test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,\n alpha=opacity,\n color='g',\n error_kw=error_config,\n label='Test Perplexity')\n\n plt.xlabel('Model')\n plt.ylabel('Scores')\n plt.title('Perplexity by Model and Dataset')\n plt.xticks(index + bar_width / 3, [d[0] for d in data])\n plt.legend()\n\n plt.tight_layout()\n plt.show()\n\ndata = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]\n\nfor rname, report in reports:\n data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])\n\ndisplay_table(data)\nbar_chart(data[1:])\n", "Loss vs. Epoch", "%matplotlib inline\nplt.figure(figsize=(10, 8))\nfor rname, l in logs:\n for k in l.keys():\n plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')\n plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')\nplt.title('Loss v. Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "Perplexity vs. Epoch", "%matplotlib inline\nplt.figure(figsize=(10, 8))\nfor rname, l in logs:\n for k in l.keys():\n plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')\n plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')\nplt.title('Perplexity v. 
Epoch')\nplt.xlabel('Epoch')\nplt.ylabel('Perplexity')\nplt.legend()\nplt.show()", "Generations", "def print_sample(sample, best_bleu=None):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n print('Input: '+ enc_input + '\\n')\n print('Gend: ' + sample['generated'] + '\\n')\n print('True: ' + gold + '\\n')\n if best_bleu is not None:\n cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])\n print('Closest BLEU Match: ' + cbm + '\\n')\n print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\\n')\n print('\\n')\n \ndef display_sample(samples, best_bleu=False):\n for enc_input in samples:\n data = []\n for rname, sample in samples[enc_input]:\n gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])\n data.append([rname, '<b>Generated: </b>' + sample['generated']])\n if best_bleu:\n cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])\n data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])\n data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])\n display_table(data)\n\ndef process_samples(samples):\n # consolidate samples with identical inputs\n result = {}\n for rname, t_samples, t_cbms in samples:\n for i, sample in enumerate(t_samples):\n enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])\n if t_cbms is not None:\n sample.update(t_cbms[i])\n if enc_input in result:\n result[enc_input].append((rname, sample))\n else:\n result[enc_input] = [(rname, sample)]\n return result\n\n\n \n \n\n\nsamples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])\n\n\nsamples = process_samples([(rname, 
r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])\n\n\nsamples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])\ndisplay_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])", "BLEU Analysis", "def print_bleu(blue_structs):\n data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]\n for rname, blue_struct in blue_structs:\n data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])\n display_table(data)\n\n# Training Set BLEU Scores\nprint_bleu([(rname, report['train_bleu']) for (rname, report) in reports])\n\n# Validation Set BLEU Scores\nprint_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])\n\n# Test Set BLEU Scores\nprint_bleu([(rname, report['test_bleu']) for (rname, report) in reports])\n\n# All Data BLEU Scores\nprint_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])", "N-pairs BLEU Analysis\nThis analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. 
We can expect very low scores for the ground truth, while high scores can expose hyper-common generations.", "# Training Set BLEU n-pairs Scores\nprint_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])\n\n# Validation Set n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])\n\n# Test Set n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])\n\n# Combined n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])\n\n# Ground Truth n-pairs BLEU Scores\nprint_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])", "Alignment Analysis\nThis analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores for the ground truth and hyper-common generations to raise the scores.", "def print_align(reports):\n data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]\n for rname, report in reports:\n data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])\n display_table(data)\n\nprint_align(reports)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/iree
samples/colab/mnist_training.ipynb
apache-2.0
[ "```\nCopyright 2020 The IREE Authors\nLicensed under the Apache License v2.0 with LLVM Exceptions.\nSee https://llvm.org/LICENSE.txt for license information.\nSPDX-License-Identifier: Apache-2.0 WITH LLVM-exception\n```\nTraining and Executing an MNIST Model with IREE\nOverview\nThis notebook covers installing IREE and using it to train a simple neural network on the MNIST dataset.\n1. Install and Import IREE", "%%capture\n!python -m pip install iree-compiler iree-runtime iree-tools-tf -f https://github.com/google/iree/releases\n\n# Import IREE's TensorFlow Compiler and Runtime.\nimport iree.compiler.tf\nimport iree.runtime", "2. Import TensorFlow and Other Dependencies", "from matplotlib import pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\ntf.random.set_seed(91)\nnp.random.seed(91)\n\nplt.style.use(\"seaborn-whitegrid\")\nplt.rcParams[\"font.family\"] = \"monospace\"\nplt.rcParams[\"figure.figsize\"] = [8, 4.5]\nplt.rcParams[\"figure.dpi\"] = 150\n\n# Print version information for future notebook users to reference.\nprint(\"TensorFlow version: \", tf.__version__)\nprint(\"Numpy version: \", np.__version__)", "3. 
Load the MNIST Dataset", "# Keras datasets don't provide metadata.\nNUM_CLASSES = 10\nNUM_ROWS, NUM_COLS = 28, 28\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()\n\n# Reshape into grayscale images:\nx_train = np.reshape(x_train, (-1, NUM_ROWS, NUM_COLS, 1))\nx_test = np.reshape(x_test, (-1, NUM_ROWS, NUM_COLS, 1))\n\n# Rescale uint8 pixel values into float32 values between 0 and 1:\nx_train = x_train.astype(np.float32) / 255\nx_test = x_test.astype(np.float32) / 255\n\n# IREE doesn't currently support int8 tensors, so we cast them to int32:\ny_train = y_train.astype(np.int32)\ny_test = y_test.astype(np.int32)\n\nprint(\"Sample image from the dataset:\")\nsample_index = np.random.randint(x_train.shape[0])\nplt.figure(figsize=(5, 5))\nplt.imshow(x_train[sample_index].reshape(NUM_ROWS, NUM_COLS), cmap=\"gray\")\nplt.title(f\"Sample #{sample_index}, label: {y_train[sample_index]}\")\nplt.axis(\"off\")\nplt.tight_layout()", "4. Create a Simple DNN\nMLIR-HLO (the MLIR dialect we use to convert TensorFlow models into assembly that IREE can compile) does not currently support training with a dynamic number of examples, so we compile the model with a fixed batch size (by specifying the batch size in the tf.TensorSpecs).", "BATCH_SIZE = 32\n\nclass TrainableDNN(tf.Module):\n\n def __init__(self):\n super().__init__()\n\n # Create a Keras model to train.\n inputs = tf.keras.layers.Input((NUM_COLS, NUM_ROWS, 1))\n x = tf.keras.layers.Flatten()(inputs)\n x = tf.keras.layers.Dense(128)(x)\n x = tf.keras.layers.Activation(\"relu\")(x)\n x = tf.keras.layers.Dense(10)(x)\n outputs = tf.keras.layers.Softmax()(x)\n self.model = tf.keras.Model(inputs, outputs)\n\n # Create a loss function and optimizer to use during training.\n self.loss = tf.keras.losses.SparseCategoricalCrossentropy()\n self.optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2)\n \n @tf.function(input_signature=[\n tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]) # inputs\n ])\n def 
predict(self, inputs):\n return self.model(inputs, training=False)\n\n # We compile the entire training step by making it a method on the model.\n @tf.function(input_signature=[\n tf.TensorSpec([BATCH_SIZE, NUM_ROWS, NUM_COLS, 1]), # inputs\n tf.TensorSpec([BATCH_SIZE], tf.int32) # labels\n ])\n def learn(self, inputs, labels):\n # Capture the gradients from forward prop...\n with tf.GradientTape() as tape:\n probs = self.model(inputs, training=True)\n loss = self.loss(labels, probs)\n\n # ...and use them to update the model's weights.\n variables = self.model.trainable_variables\n gradients = tape.gradient(loss, variables)\n self.optimizer.apply_gradients(zip(gradients, variables))\n return loss", "5. Compile the Model with IREE\ntf.keras adds a large number of methods to TrainableDNN, and most of them\ncannot be compiled with IREE. To get around this we tell IREE exactly which\nmethods we would like it to compile.", "exported_names = [\"predict\", \"learn\"]", "Choose one of IREE's three backends to compile to. (Note: Using Vulkan requires installing additional drivers.)", "backend_choice = \"dylib-llvm-aot (CPU)\" #@param [ \"vmvx (CPU)\", \"dylib-llvm-aot (CPU)\", \"vulkan-spirv (GPU/SwiftShader – requires additional drivers) \" ]\nbackend_choice = backend_choice.split(' ')[0]\n\n# Compile the TrainableDNN module\n# Note: extra flags are needed to i64 demotion, see https://github.com/google/iree/issues/8644\nvm_flatbuffer = iree.compiler.tf.compile_module(\n TrainableDNN(),\n target_backends=[backend_choice],\n exported_names=exported_names,\n extra_args=[\"--iree-mhlo-demote-i64-to-i32=false\",\n \"--iree-flow-demote-i64-to-i32\"])\ncompiled_model = iree.runtime.load_vm_flatbuffer(\n vm_flatbuffer,\n backend=backend_choice)", "6. Train the Compiled Model on MNIST\nThis compiled model is portable, demonstrating that IREE can be used for training on a mobile device. 
On mobile, IREE has a ~1000-fold binary size advantage over the current TensorFlow solution (which is to use the now-deprecated TF Mobile, as TFLite does not support training at this time).", "#@title Benchmark inference and training\nprint(\"Inference latency:\\n \", end=\"\")\n%timeit -n 100 compiled_model.predict(x_train[:BATCH_SIZE])\nprint(\"Training latency:\\n \", end=\"\")\n%timeit -n 100 compiled_model.learn(x_train[:BATCH_SIZE], y_train[:BATCH_SIZE])\n\n# Run the core training loop.\nlosses = []\n\nstep = 0\nmax_steps = x_train.shape[0] // BATCH_SIZE\n\nfor batch_start in range(0, x_train.shape[0], BATCH_SIZE):\n if batch_start + BATCH_SIZE > x_train.shape[0]:\n continue\n\n inputs = x_train[batch_start:batch_start + BATCH_SIZE]\n labels = y_train[batch_start:batch_start + BATCH_SIZE]\n\n loss = compiled_model.learn(inputs, labels).to_host()\n losses.append(loss)\n\n step += 1\n print(f\"\\rStep {step:4d}/{max_steps}: loss = {loss:.4f}\", end=\"\")\n\n#@title Plot the training results\nimport bottleneck as bn\nsmoothed_losses = bn.move_mean(losses, 32)\nx = np.arange(len(losses))\n\nplt.plot(x, smoothed_losses, linewidth=2, label='loss (moving average)')\nplt.scatter(x, losses, s=16, alpha=0.2, label='loss (per training step)')\n\nplt.ylim(0)\nplt.legend(frameon=True)\nplt.xlabel(\"training step\")\nplt.ylabel(\"cross-entropy\")\nplt.title(\"training loss\");", "7. 
Evaluate on Heldout Test Examples", "#@title Evaluate the network on the test data.\naccuracies = []\n\nstep = 0\nmax_steps = x_test.shape[0] // BATCH_SIZE\n\nfor batch_start in range(0, x_test.shape[0], BATCH_SIZE):\n if batch_start + BATCH_SIZE > x_test.shape[0]:\n continue\n\n inputs = x_test[batch_start:batch_start + BATCH_SIZE]\n labels = y_test[batch_start:batch_start + BATCH_SIZE]\n\n prediction = compiled_model.predict(inputs).to_host()\n prediction = np.argmax(prediction, -1)\n accuracies.append(np.sum(prediction == labels) / BATCH_SIZE)\n\n step += 1\n print(f\"\\rStep {step:4d}/{max_steps}\", end=\"\")\nprint()\n\naccuracy = np.mean(accuracies)\nprint(f\"Test accuracy: {accuracy:.3f}\")\n\n#@title Display inference predictions on a random selection of heldout data\nrows = 4\ncolumns = 4\nimages_to_display = rows * columns\nassert BATCH_SIZE >= images_to_display\n\nrandom_index = np.arange(x_test.shape[0])\nnp.random.shuffle(random_index)\nx_test = x_test[random_index]\ny_test = y_test[random_index]\n\npredictions = compiled_model.predict(x_test[:BATCH_SIZE]).to_host()\npredictions = np.argmax(predictions, -1)\n\nfig, axs = plt.subplots(rows, columns)\n\nfor i, ax in enumerate(np.ndarray.flatten(axs)):\n ax.imshow(x_test[i, :, :, 0])\n color = \"#000000\" if predictions[i] == y_test[i] else \"#ff7f0e\"\n ax.set_xlabel(f\"prediction={predictions[i]}\", color=color)\n ax.grid(False)\n ax.set_yticks([])\n ax.set_xticks([])\n\nfig.tight_layout()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
TheOregonian/long-term-care-db
notebooks/analysis/emeritus-springfield-woodside-and-briarwood.ipynb
mit
[ "Data were munged here.", "import pandas as pd\nimport numpy as np\nfrom IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))\ndf = pd.read_csv('../../data/processed/complaints-3-29-scrape.csv')", "<h3>How many cases of property loss/theft that are offline were there at Emeritus Springfield Woodside between 5/2013 and 8/2013?</h3>\n<i>(Facility ID 70M226)</i>", "df[['outcome','incident_date']][(df['facility_id']=='70M226') &\n                                (df['incident_date']<'2013-08-01') & \n                                (df['incident_date']>'2013-05-01') &\n                                (df['public']=='offline') &\n                                (df['outcome'].str.contains('Property'))].sort_values('incident_date').count()\n\ndf[['abuse_number','outcome_notes']][(df['facility_id']=='70M226') &\n                                     (df['incident_date']<'2013-08-01') & \n                                     (df['incident_date']>'2013-05-01') &\n                                     (df['public']=='offline') &\n                                     (df['outcome'].str.contains('Property'))]", "The case below lists two thefts. That's why our total in the paragraph is 11.", "df['outcome_notes'][df['abuse_number']=='ES133150']", "<h3>What cases at Briarwood were there that same year that are online and comparable to the offline ones at Woodside?</h3>", "df[['outcome_notes','abuse_number']][(df['facility_id']=='70A299') & \n                                     (df['outcome']=='Loss of Resident Property') & \n                                     (df['public']=='online') &\n                                     (df['year']==2013)]", "Upon review, I used my own judgement to determine that cases ES134746 and ES133151 have a similar severity to the ten cases at Woodside.", "df[['abuse_number','outcome_notes']][df['abuse_number'].isin(['ES133151','ES134746'])]", "<h1>DONE</h1>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pmorissette/bt
examples/Target_Volatility.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nimport ffn\n#import bt\n\n#using this import until pip is updated to have the version of bt with the targetVol algo\n# you will need to change this to be wherever your local version of bt is located.\nimport sys\nsys.path.insert(0, \"C:\\\\Users\\JPL09A\\\\Documents\\\\Code\\\\pmorissette\\\\bt\\\\\")\nimport bt\n\n%matplotlib inline", "Create Fake Index Data", "names = ['foo','bar','rf']\ndates = pd.date_range(start='2015-01-01',end='2018-12-31', freq=pd.tseries.offsets.BDay())\nn = len(dates)\nrdf = pd.DataFrame(\n    np.zeros((n, len(names))),\n    index = dates,\n    columns = names\n)\n\nnp.random.seed(1)\nrdf['foo'] = np.random.normal(loc = 0.1/252,scale=0.2/np.sqrt(252),size=n)\nrdf['bar'] = np.random.normal(loc = 0.04/252,scale=0.05/np.sqrt(252),size=n)\nrdf['rf'] = 0.\n\npdf = 100*np.cumprod(1+rdf)\npdf.plot()", "Build Strategy", "# algo to fire on the beginning of every week and to run on the first date\nrunWeeklyAlgo = bt.algos.RunWeekly(\n    run_on_first_date=True\n)\n\nselectTheseAlgo = bt.algos.SelectThese(['foo','bar'])\n\n# algo to set the weights to 1/vol contributions from each asset\n# with data over the last 12 months excluding yesterday\nweighInvVolAlgo = bt.algos.WeighInvVol(\n    lookback=pd.DateOffset(months=12),\n    lag=pd.DateOffset(days=1)\n)\n\n# algo to set overall volatility of the portfolio to an annualized 10%\ntargetVolAlgo = bt.algos.TargetVol(\n    0.1,\n    lookback=pd.DateOffset(months=12),\n    lag=pd.DateOffset(days=1),\n    covar_method='standard',\n    annualization_factor=252\n)\n\n\n# algo to rebalance the current weights to weights set in target.temp\nrebalAlgo = bt.algos.Rebalance()\n\n# a strategy that rebalances weekly to specified weights\nstrat = bt.Strategy('static',\n    [\n        runWeeklyAlgo,\n        selectTheseAlgo,\n        weighInvVolAlgo,\n        targetVolAlgo,\n        rebalAlgo\n    ]\n)", "Run Backtest\nNote: The logic of the strategy is separate from the data used in the backtest.", "# set 
integer_positions=False when positions are not required to be integers (round numbers)\nbacktest = bt.Backtest(\n    strat,\n    pdf,\n    integer_positions=False\n)\n\nres = bt.run(backtest)", "You can see the realized volatility below is close to the targeted 10% volatility.", "fig, ax = plt.subplots(nrows=1,ncols=1)\n(res.prices.pct_change().rolling(window=12*20).std()*np.sqrt(252)).plot(ax = ax)\nax.set_title('Rolling Volatility')\nax.plot()", "Because we are using a 1/vol allocation, bar, the less risky security, has a much smaller weight.", "fig, ax = plt.subplots(nrows=1,ncols=1)\nres.get_security_weights().plot(ax = ax)\nax.set_title('Weights')\nax.plot()", "If we plot the total risk contribution of each asset class and divide by the total volatility, then we can see that both assets contribute roughly similar amounts of volatility.", "weights = res.get_security_weights()\nrolling_cov = pdf.loc[:,weights.columns].pct_change().rolling(window=12*20).cov()*252\n\n\ntrc = pd.DataFrame(\n    np.nan,\n    index = weights.index,\n    columns = weights.columns\n)\nfor dt in pdf.index:\n    trc.loc[dt,:] = weights.loc[dt,:].values*rolling_cov.loc[dt,:].values@weights.loc[dt,:].values/np.sqrt(weights.loc[dt,:].values@rolling_cov.loc[dt,:].values@weights.loc[dt,:].values)\n\n\nfig, ax = plt.subplots(nrows=1,ncols=1)\ntrc.plot(ax=ax)\nax.set_title('% Total Risk Contribution')\nax.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DiXiT-eu/collatex-tutorial
unit8/unit8-collatex-and-XML/CollateX and XML, Part 1.ipynb
gpl-3.0
[ "CollateX and XML, Part 1\nDavid J. Birnbaum (&#100;&#106;&#98;&#112;&#105;&#116;&#116;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, http://www.obdurodon.org), 2015-06-29 \nThis is the first part of multi-part tutorial on processing XML with CollateX (http://collatex.net). This example collates a single line of XML from four witnesses. It spells out the details step by step in a way that would not be used in a real project, but that makes it easy to see how each step moves toward the final result. The output is in the three formats supported natively by CollateX: a plain-text alignment table, JSON, and colored HTML.\nStill to come:\n\nPart 2: Restructuring the code to use Python classes\nPart 3: Reading multiline input from files\n\n<!--* Part 4: Creating output in generic XML, suitable for transformation into TEI or other XML formats.-->\n<!--* Part 5: Fine-tuning the input to improve tokenization, normalization, and alignment-->\n<!--* Part 6: Quicker processing with Python multiprocessing-->\n\nNot planned: Post-processing of generic XML output, which is best done separately with XSLT 2.0.\nLoad libraries", "from collatex import *\nfrom lxml import etree\nimport json,re", "Create XSLT stylesheets and functions to use them", "addWMilestones = etree.XML(\"\"\"\n<xsl:stylesheet version=\"1.0\" xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\">\n <xsl:output method=\"xml\" indent=\"no\" encoding=\"UTF-8\" omit-xml-declaration=\"yes\"/>\n <xsl:template match=\"*|@*\">\n <xsl:copy>\n <xsl:apply-templates select=\"node() | @*\"/>\n </xsl:copy>\n </xsl:template>\n <xsl:template match=\"/*\">\n <xsl:copy>\n <xsl:apply-templates select=\"@*\"/>\n <!-- insert a <w/> milestone before the first word -->\n <w/>\n <xsl:apply-templates/>\n </xsl:copy>\n </xsl:template>\n <!-- convert <add>, <sic>, and <crease> to milestones (and leave them that way)\n CUSTOMIZE HERE: add other elements that may span multiple word tokens\n -->\n <xsl:template match=\"add | sic | 
crease \">\n <xsl:element name=\"{name()}\">\n <xsl:attribute name=\"n\">start</xsl:attribute>\n </xsl:element>\n <xsl:apply-templates/>\n <xsl:element name=\"{name()}\">\n <xsl:attribute name=\"n\">end</xsl:attribute>\n </xsl:element>\n </xsl:template>\n <xsl:template match=\"note\"/>\n <xsl:template match=\"text()\">\n <xsl:call-template name=\"whiteSpace\">\n <xsl:with-param name=\"input\" select=\"translate(.,'&#x0a;',' ')\"/>\n </xsl:call-template>\n </xsl:template>\n <xsl:template name=\"whiteSpace\">\n <xsl:param name=\"input\"/>\n <xsl:choose>\n <xsl:when test=\"not(contains($input, ' '))\">\n <xsl:value-of select=\"$input\"/>\n </xsl:when>\n <xsl:when test=\"starts-with($input,' ')\">\n <xsl:call-template name=\"whiteSpace\">\n <xsl:with-param name=\"input\" select=\"substring($input,2)\"/>\n </xsl:call-template>\n </xsl:when>\n <xsl:otherwise>\n <xsl:value-of select=\"substring-before($input, ' ')\"/>\n <w/>\n <xsl:call-template name=\"whiteSpace\">\n <xsl:with-param name=\"input\" select=\"substring-after($input,' ')\"/>\n </xsl:call-template>\n </xsl:otherwise>\n </xsl:choose>\n </xsl:template>\n</xsl:stylesheet>\n\n\"\"\")\ntransformAddW = etree.XSLT(addWMilestones)\n \nxsltWrapW = etree.XML('''\n<xsl:stylesheet xmlns:xsl=\"http://www.w3.org/1999/XSL/Transform\" version=\"1.0\">\n <xsl:output method=\"xml\" indent=\"no\" omit-xml-declaration=\"yes\"/>\n <xsl:template match=\"/*\">\n <xsl:copy>\n <xsl:apply-templates select=\"w\"/>\n </xsl:copy>\n </xsl:template>\n <xsl:template match=\"w\">\n <!-- faking <xsl:for-each-group> as well as the \"<<\" and except\" operators -->\n <xsl:variable name=\"tooFar\" select=\"following-sibling::w[1] | following-sibling::w[1]/following::node()\"/>\n <w>\n <xsl:copy-of select=\"following-sibling::node()[count(. 
| $tooFar) != count($tooFar)]\"/>\n </w>\n </xsl:template>\n</xsl:stylesheet>\n''')\ntransformWrapW = etree.XSLT(xsltWrapW)", "Create and examine XML data", "A = \"\"\"<l><abbrev>Et</abbrev>cil i partent seulement</l>\"\"\"\nB = \"\"\"<l><abbrev>Et</abbrev>cil i p<abbrev>er</abbrev>dent ausem<abbrev>en</abbrev>t</l>\"\"\"\nC = \"\"\"<l><abbrev>Et</abbrev>cil i p<abbrev>ar</abbrev>tent seulema<abbrev>n</abbrev>t</l>\"\"\"\nD = \"\"\"<l>E cil i partent sulement</l>\"\"\"\n\nATree = etree.XML(A)\nBTree = etree.XML(B)\nCTree = etree.XML(C)\nDTree = etree.XML(D)\n\nprint(A)\nprint(ATree)", "Tokenize XML input by adding &lt;w&gt; tags and examine the results", "ATokenized = transformWrapW(transformAddW(ATree))\nBTokenized = transformWrapW(transformAddW(BTree))\nCTokenized = transformWrapW(transformAddW(CTree))\nDTokenized = transformWrapW(transformAddW(DTree))\n\nprint(ATokenized)", "Function to convert the word-tokenized witness line into JSON", "def XMLtoJSON(id,XMLInput):\n unwrapRegex = re.compile('<w>(.*)</w>')\n stripTagsRegex = re.compile('<.*?>')\n words = XMLInput.xpath('//w')\n witness = {}\n witness['id'] = id\n witness['tokens'] = []\n for word in words:\n unwrapped = unwrapRegex.match(etree.tostring(word,encoding='unicode')).group(1)\n token = {}\n token['t'] = unwrapped\n token['n'] = stripTagsRegex.sub('',unwrapped.lower())\n witness['tokens'].append(token)\n return witness", "Use the function to create JSON input for CollateX, and examine it", "json_input = {}\njson_input['witnesses'] = []\njson_input['witnesses'].append(XMLtoJSON('A',ATokenized))\njson_input['witnesses'].append(XMLtoJSON('B',BTokenized))\njson_input['witnesses'].append(XMLtoJSON('C',CTokenized))\njson_input['witnesses'].append(XMLtoJSON('D',DTokenized))\nprint(json_input)", "Collate the witnesses and view the output as JSON, in a table, and as colored HTML", "collationText = collate(json_input,output='table',layout='vertical')\nprint(collationText)\ncollationJSON = 
collate(json_input,output='json')\nprint(collationJSON)\ncollationHTML2 = collate(json_input,output='html2')\n\ncollation = Collation()\ncollation.add_plain_witness('A',A)\ncollation.add_plain_witness('B',B)\ncollation.add_plain_witness('C',C)\ncollation.add_plain_witness('D',D)\nprint(collate(collation,output='table',layout='vertical'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
oasis-open/cti-python-stix2
docs/guide/filesystem.ipynb
bsd-3-clause
[ "# Delete this cell to re-enable tracebacks\nimport sys\nipython = get_ipython()\n\ndef hide_traceback(exc_tuple=None, filename=None, tb_offset=None,\n exception_only=False, running_compiled_code=False):\n etype, value, tb = sys.exc_info()\n value.__cause__ = None # suppress chained exceptions\n return ipython._showtraceback(etype, value, ipython.InteractiveTB.get_exception_only(etype, value))\n\nipython.showtraceback = hide_traceback\n\n# JSON output syntax highlighting\nfrom __future__ import print_function\nfrom pygments import highlight\nfrom pygments.lexers import JsonLexer, TextLexer\nfrom pygments.formatters import HtmlFormatter\nfrom IPython.display import display, HTML\nfrom IPython.core.interactiveshell import InteractiveShell\n\nInteractiveShell.ast_node_interactivity = \"all\"\n\ndef json_print(inpt):\n string = str(inpt)\n formatter = HtmlFormatter()\n if string[0] == '{':\n lexer = JsonLexer()\n else:\n lexer = TextLexer()\n return HTML('<style type=\"text/css\">{}</style>{}'.format(\n formatter.get_style_defs('.highlight'),\n highlight(string, lexer, formatter)))\n\nglobals()['print'] = json_print", "FileSystem\nThe FileSystem suite contains FileSystemStore, FileSystemSource and FileSystemSink. Under the hood, all FileSystem objects point to a file directory (on disk) that contains STIX 2 content. \nThe directory and file structure of the intended STIX 2 content should be:\nstix2_content/\n STIX2 Domain Object type/\n STIX2 Domain Object ID/\n 'modified' timestamp.json\n 'modified' timestamp.json\n STIX2 Domain Object ID/\n 'modified' timestamp.json\n .\n .\n STIX2 Domain Object type/\n STIX2 Domain Object ID/\n 'modified' timestamp.json\n .\n .\n .\n .\n .\n .\n STIX2 Domain Object type/\nThe master STIX 2 content directory contains subdirectories, each of which aligns to a STIX 2 domain object type (i.e. \"attack-pattern\", \"campaign\", \"malware\", etc.). 
Within each STIX 2 domain object type's subdirectory are further subdirectories containing JSON files that are STIX 2 domain objects of the specified type; the name of each of these subdirectories is the ID of the associated STIX 2 domain object. Inside each of these subdirectories are JSON files, the names of which correspond to the modified timestamp of the STIX 2 domain object found within that file. A real example of the FileSystem directory structure:\nstix2_content/\n    /attack-pattern\n        /attack-pattern--00d0b012-8a03-410e-95de-5826bf542de6\n            20201211035036648071.json\n        /attack-pattern--0a3ead4e-6d47-4ccb-854c-a6a4f9d96b22\n            20201210035036648071.json\n        /attack-pattern--1b7ba276-eedc-4951-a762-0ceea2c030ec\n            20201111035036648071.json\n    /campaign\n    /course-of-action\n        /course-of-action--2a8de25c-f743-4348-b101-3ee33ab5871b\n            20201011035036648071.json\n        /course-of-action--2c3ce852-06a2-40ee-8fe6-086f6402a739\n            20201010035036648071.json\n    /identity\n        /identity--c78cb6e5-0c4b-4611-8297-d1b8b55e40b5\n            20201215035036648071.json\n    /indicator\n    /intrusion-set\n    /malware\n        /malware--1d808f62-cf63-4063-9727-ff6132514c22\n            20201211045036648071.json\n        /malware--2eb9b131-d333-4a48-9eb4-d8dec46c19ee\n            20201211035036648072.json\n    /observed-data\n    /report\n    /threat-actor\n    /vulnerability\nFileSystemStore is intended for use cases where STIX 2 content is retrieved from and pushed to the same file directory, as FileSystemStore is just a wrapper around a paired FileSystemSource and FileSystemSink that point to the same file directory.\nFor use cases where STIX 2 content will only be retrieved or pushed, a FileSystemSource or FileSystemSink can be used individually. They can also be used individually when STIX 2 content will be retrieved from one distinct file directory and pushed to another.\nFileSystem API\nA note on get(), all_versions(), and query(): The format of the STIX2 content targeted by the FileSystem suite is JSON files. 
When the FileSystemStore retrieves STIX 2 content (in JSON) from disk, it will attempt to parse the content into full-featured python-stix2 objects and returned as such. \nA note on add(): When STIX content is added (pushed) to the file system, the STIX content can be supplied in the following forms: Python STIX objects, Python dictionaries (of valid STIX objects or Bundles), JSON-encoded strings (of valid STIX objects or Bundles), or a (Python) list of any of the previously listed types. Any of the previous STIX content forms will be converted to a STIX JSON object (in a STIX Bundle) and written to disk. \nFileSystem Examples\nFileSystemStore\nUse the FileSystemStore when you want to both retrieve STIX content from the file system and push STIX content to it, too.", "from stix2 import FileSystemStore\n\n# create FileSystemStore\nfs = FileSystemStore(\"/tmp/stix2_store\")\n\n# retrieve STIX2 content from FileSystemStore\nap = fs.get(\"attack-pattern--0a3ead4e-6d47-4ccb-854c-a6a4f9d96b22\")\nmal = fs.get(\"malware--92ec0cbd-2c30-44a2-b270-73f4ec949841\")\n\n# for visual purposes\nprint(mal.serialize(pretty=True))\n\nfrom stix2 import ThreatActor, Indicator\n\n# create new STIX threat-actor\nta = ThreatActor(name=\"Adjective Bear\",\n sophistication=\"innovator\",\n resource_level=\"government\",\n goals=[\n \"compromising media outlets\",\n \"water-hole attacks geared towards political, military targets\",\n \"intelligence collection\"\n ])\n\n# create new indicators\nind = Indicator(description=\"Crusades C2 implant\",\n pattern_type=\"stix\",\n pattern=\"[file:hashes.'SHA-256' = '54b7e05e39a59428743635242e4a867c932140a999f52a1e54fa7ee6a440c73b']\")\n\nind1 = Indicator(description=\"Crusades C2 implant 2\",\n pattern_type=\"stix\",\n pattern=\"[file:hashes.'SHA-256' = '64c7e05e40a59511743635242e4a867c932140a999f52a1e54fa7ee6a440c73b']\")\n\n# add STIX object (threat-actor) to FileSystemStore\nfs.add(ta)\n\n# can also add multiple STIX objects to FileSystemStore in 
one call\nfs.add([ind, ind1])", "FileSystemSource\nUse the FileSystemSource when you only want to retrieve STIX content from the file system.", "from stix2 import FileSystemSource\n\n# create FileSystemSource\nfs_source = FileSystemSource(\"/tmp/stix2_source\")\n\n# retrieve STIX 2 objects\nap = fs_source.get(\"attack-pattern--0a3ead4e-6d47-4ccb-854c-a6a4f9d96b22\")\n\n# for visual purposes\nprint(ap)\n\nfrom stix2 import Filter\n\n# create filter for type=malware\nquery = [Filter(\"type\", \"=\", \"malware\")]\n\n# query on the filter\nmals = fs_source.query(query)\n\nfor mal in mals:\n print(mal.id)\n\n# add more filters to the query\nquery.append(Filter(\"modified\", \">\" , \"2017-05-31T21:33:10.772474Z\"))\n\nmals = fs_source.query(query)\n\n# for visual purposes\nfor mal in mals:\n print(mal.id)", "FileSystemSink\nUse the FileSystemSink when you only want to push STIX content to the file system.", "from stix2 import FileSystemSink, Campaign, Indicator\n\n# create FileSystemSink\nfs_sink = FileSystemSink(\"/tmp/stix2_sink\")\n\n# create STIX objects and add to sink\ncamp = Campaign(name=\"The Crusades\",\n objective=\"Infiltrating Israeli, Iranian and Palestinian digital infrastructure and government systems.\",\n aliases=[\"Desert Moon\"])\n\nind = Indicator(description=\"Crusades C2 implant\",\n pattern_type=\"stix\",\n pattern=\"[file:hashes.'SHA-256' = '54b7e05e39a59428743635242e4a867c932140a999f52a1e54fa7ee6a440c73b']\")\n\nind1 = Indicator(description=\"Crusades C2 implant\",\n pattern_type=\"stix\",\n pattern=\"[file:hashes.'SHA-256' = '54b7e05e39a59428743635242e4a867c932140a999f52a1e54fa7ee6a440c73b']\")\n\n# add Campaign object to FileSystemSink\nfs_sink.add(camp)\n\n# can also add STIX objects to FileSystemSink in one call\nfs_sink.add([ind, ind1])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
espressomd/espresso
doc/tutorials/lattice_boltzmann/lattice_boltzmann_poiseuille_flow.ipynb
gpl-3.0
[ "Poiseuille flow in ESPResSo\nPoiseuille flow is the flow through a pipe or (in our case) a slit\nunder a homogeneous force density, e.g. gravity. In the limit of small Reynolds\nnumbers, the flow can be described with the Stokes equation. \nWe assume the slit to be infinitely extended in $y$ and $z$ \ndirection and a force density $f_y$ on the fluid \nin $y$ direction. No-slip boundary conditions (i.e. $\\vec{u}=0$)\nare located at $x = \\pm h/2$.\nAssuming invariance in $y$ and $z$ direction and a steady state, \nthe Stokes equation is simplified to:\n\\begin{equation}\n \\mu \\partial_x^2 u_y = -f_y\n\\end{equation}\nwhere $f_y$ denotes the force density and $\\mu$ the dynamic viscosity.\nThis can be integrated twice and the integration constants are chosen\nso that $u_y=0$ at $x = \\pm h/2$ to obtain the solution to the\nplanar Poiseuille flow [8]:\n\\begin{equation}\n u_y(x) = \\frac{f_y}{2\\mu} \\left(h^2/4-x^2\\right)\n\\end{equation}\nWe will simulate a planar Poiseuille flow using a square box, two walls\nwith normal vectors $\\left(\\pm 1, 0, 0 \\right)$, and an external force density\napplied to every node.\n1. 
Setting up the system", "import logging\nimport sys\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.rcParams.update({'font.size': 18})\nimport numpy as np\nimport tqdm\n\nimport espressomd\nimport espressomd.lb\nimport espressomd.lbboundaries\nimport espressomd.shapes\n\nlogging.basicConfig(level=logging.INFO, stream=sys.stdout)\n\nespressomd.assert_features(['LB_BOUNDARIES_GPU'])\n\n# System constants\nBOX_L = 16.0\nTIME_STEP = 0.01\n\nsystem = espressomd.System(box_l=[BOX_L] * 3)\nsystem.time_step = TIME_STEP\nsystem.cell_system.skin = 0.4", "1.1 Setting up the lattice-Boltzmann fluid\nWe will now create a lattice-Boltzmann fluid confined between two walls.", "# LB parameters\nAGRID = 0.5\nVISCOSITY = 2.0\nFORCE_DENSITY = [0.0, 0.001, 0.0]\nDENSITY = 1.5\n\n# LB boundary parameters\nWALL_OFFSET = AGRID", "Create a lattice-Boltzmann actor and append it to the list of system actors. Use the GPU implementation of LB.\nYou can refer to section setting up a LB fluid\nin the user guide.\npython\nlogging.info(\"Setup LB fluid.\")\nlbf = espressomd.lb.LBFluidGPU(agrid=AGRID, dens=DENSITY, visc=VISCOSITY, tau=TIME_STEP,\n ext_force_density=FORCE_DENSITY)\nsystem.actors.add(lbf)\nCreate a LB boundary and append it to the list of system LB boundaries.\nYou can refer to section using shapes as lattice-Boltzmann boundary in the user guide.\n```python\nlogging.info(\"Setup LB boundaries.\")\ntop_wall = espressomd.shapes.Wall(normal=[1, 0, 0], dist=WALL_OFFSET)\nbottom_wall = espressomd.shapes.Wall(normal=[-1, 0, 0], dist=-(BOX_L - WALL_OFFSET))\ntop_boundary = espressomd.lbboundaries.LBBoundary(shape=top_wall)\nbottom_boundary = espressomd.lbboundaries.LBBoundary(shape=bottom_wall)\nsystem.lbboundaries.add(top_boundary)\nsystem.lbboundaries.add(bottom_boundary)\n```\n2. 
Simulation\nWe will now simulate the fluid flow until we reach the steady state.", "logging.info(\"Iterate until the flow profile converges (5000 LB updates).\")\nfor _ in tqdm.trange(20):\n system.integrator.run(5000 // 20)", "3. Data analysis\nWe can now extract the flow profile and compare it to the analytical solution for the planar Poiseuille flow.", "logging.info(\"Extract fluid velocities along the x-axis\")\n\nfluid_positions = (np.arange(lbf.shape[0]) + 0.5) * AGRID\n# get all velocities as Numpy array and extract y components only\nfluid_velocities = (lbf[:,:,:].velocity)[:,:,:,1]\n# average velocities in y and z directions (perpendicular to the walls)\nfluid_velocities = np.average(fluid_velocities, axis=(1,2))\n\n\ndef poiseuille_flow(x, force_density, dynamic_viscosity, height):\n return force_density / (2 * dynamic_viscosity) * (height**2 / 4 - x**2)\n\n\n# Note that the LB viscosity is not the dynamic viscosity but the\n# kinematic viscosity (mu=LB_viscosity * density)\nx_values = np.linspace(0.0, BOX_L, lbf.shape[0])\nHEIGHT = BOX_L - 2.0 * AGRID\n# analytical curve\ny_values = poiseuille_flow(x_values - (HEIGHT / 2 + AGRID), FORCE_DENSITY[1],\n VISCOSITY * DENSITY, HEIGHT)\n# velocity is zero inside the walls\ny_values[np.nonzero(x_values < WALL_OFFSET)] = 0.0\ny_values[np.nonzero(x_values > BOX_L - WALL_OFFSET)] = 0.0\n\nfig1 = plt.figure(figsize=(10, 6))\nplt.plot(x_values, y_values, '-', linewidth=2, label='analytical')\nplt.plot(fluid_positions, fluid_velocities, 'o', label='simulation')\nplt.xlabel('Position on the $x$-axis', fontsize=16)\nplt.ylabel('Fluid velocity in $y$-direction', fontsize=16)\nplt.legend()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
synthicity/activitysim
activitysim/examples/example_estimation/notebooks/18_atwork_subtour_freq.ipynb
agpl-3.0
[ "Estimating At-Work Subtour Frequency\nThis notebook illustrates how to re-estimate a single model component for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries", "import os\nimport larch  # !conda install larch -c conda-forge # for estimation\nimport pandas as pd", "We'll work in our test directory, where ActivitySim has saved the estimation data bundles.", "os.chdir('test')", "Load data and prep model for estimation", "modelname = \"atwork_subtour_frequency\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)", "Review data loaded from the EDB\nThe next step is to read the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data.\nCoefficients", "data.coefficients", "Utility specification", "data.spec", "Chooser data", "data.chooser_data", "Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has built-in estimation methods, including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. 
BHHH is the default and typically runs faster, but does not follow constraints on parameters.", "model.estimate(method='SLSQP')", "Estimated coefficients", "model.parameter_summary()", "Output Estimation Results", "from activitysim.estimation.larch import update_coefficients\nresult_dir = data.edb_directory/\"estimated\"\nupdate_coefficients(\n model, data, result_dir,\n output_file=f\"{modelname}_coefficients_revised.csv\",\n);", "Write the model estimation report, including coefficient t-statistic and log likelihood", "model.to_xlsx(\n result_dir/f\"{modelname}_model_estimation.xlsx\", \n data_statistics=False,\n)", "Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.", "pd.read_csv(result_dir/f\"{modelname}_coefficients_revised.csv\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
whitead/numerical_stats
unit_11/hw_2016/homework_8_key.ipynb
gpl-3.0
[ "Homework 8\nCHE 116: Numerical Methods and Statistics\nProf. Andrew White\nVersion 1.0 (3/15/2016)\n\nShort Exercises (48 Points)\nAnswer each using Python. Write out any constraints or rearrangements of equations in Markdown. You must make a graph demonstrating your solution for each problem (4 Points per problem). You must also print your numerical answer.\n1.0 See example\nFind the maximum x-value for this equation:\n$$ \\sin(x) - x^2$$\non the domain $[0, \\pi]$\n1.1\nFind the intersection between these two curves:\n$$ \\frac{(x - 4)^2}{4} $$\n$$ \\frac{(x + 2)^2}{3} $$\n1.2\nConsider $-p\\ln p$, where $p$ is a probability. What $p$ gives the maximum?\n1.3\nSolve for $x$:\n$$ \\cos(x) = x $$\n1.4\nRepeat 1.3 by creating an objective function and minimizing it.\n*Hint: Try rearranging the equation into an expression that is $0$ when the equation is satisfied and POSITIVE everywhere else. *\n1.5\nUsing a similar idea of an objective function, what $x$ most satisfies the following equations:\n$$ 4 x + 4 = 12 $$\n$$ 3x - 2 = 3$$\nHint: you can only minimize one thing, so try adding together multiple objective functions.\n1.6\nConsider these two curves:\n$$f(x) = \\cos(14.5 x - 0.3) + ( x + 0.2) x $$\n$$g(x) = x^3 - x^2 + x$$\nFind the $x^*$ that both minimizes $f(x^*)$ and has the property that $f(x^*) > g(x^*)$", "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import optimize as opt", "Answer 1.0", "#set-up function which we're optimizing\ndef fxn_10(x):\n    return np.sin(x) - x**2\n\n#perform optimization and store result\n#invert the sign on the equation to do maximization\nresult = opt.minimize(lambda can_write_anything_here: -fxn_10(can_write_anything_here), x0= np.pi / 2, bounds=[(0, np.pi)])\n\n#make some points to use for plotting\nx_grid = np.linspace(0, np.pi, 100)\n\n#plot the function\nplt.plot(x_grid, fxn_10(x_grid))\n\n#create a vertical line at the optimum x value\nplt.axvline(result.x, 
color='red')\n\nplt.show()\n\nprint('Optimum x: {}'.format(result.x))", "Answer 1.1\nIntersection occurs when:\n$$ \\frac{(x - 4)^2}{4} - \\frac{(x + 2)^2}{3} = 0 $$", "def fxn_11(x):\n    return (x - 4)**2 / 4 - (x + 2)**2 / 3\nx = opt.newton(fxn_11, x0=0)\nprint(x)\n\ngrid = np.linspace(-8, 8, 100)\nplt.plot(grid, (grid - 4)**2 / 4)\nplt.plot(grid, (grid + 2)**2 / 3)\nplt.axvline(x, color='red')\n\nplt.show()", "Answer 1.2\nThis simply requires maximization like the example problem", "#set-up function which we're optimizing\ndef fxn_12(p):\n    return -p * np.log(p)\n\n#perform optimization and store result\n#invert the sign on the equation to do maximization\nresult = opt.minimize(lambda u: -fxn_12(u), x0= 0.5, bounds=[(0, 1)])\n\n#make some points to use for plotting\np_grid = np.linspace(0.01, 0.99, 100)\n\n#plot the function\nplt.plot(p_grid, fxn_12(p_grid))\n\n#create a vertical line at the optimum x value\nplt.axvline(result.x, color='red')\n\nplt.show()\n\nprint(result.x)", "Answer 1.3\nThis is similar to the intersection problem:\n$$ \\cos(x) - x = 0$$", "x_opt = opt.newton(lambda g: np.cos(g) - g, x0=0)\n\nprint(x_opt)\n\nx_grid = np.linspace(0,np.pi, 100)\n\nplt.plot(x_grid, x_grid)\nplt.plot(x_grid, np.cos(x_grid))\nplt.axvline(x_opt, color='purple')\n\nplt.show()", "Answer 1.4\nI'll define my objective function to be\n$$ \\left[\\cos(x) - x\\right]^2 $$", "x_opt = opt.minimize(lambda g: (np.cos(g) - g)**2, x0=0)\nx_opt = x_opt.x # remember the minimize function returns a bunch of extra crap\n\nprint(x_opt)\n\nx_grid = np.linspace(0,np.pi, 100)\n\nplt.plot(x_grid, x_grid)\nplt.plot(x_grid, np.cos(x_grid))\nplt.axvline(x_opt, color='purple')\n\nplt.show()", "Answer 1.5\nWe'll just sum together the two objective functions\n$$ \\left[ 4x - 8\\right]^2 $$\n$$ \\left[3x - 5 \\right]^2 $$\nThere are many plots to show this. 
Be lenient in grading.", "def fxn_15(x):\n    return (4 * x - 8 )**2 + (3 * x - 5)**2\n\nx_opt = opt.minimize(fxn_15, x0=0)\nx_opt = x_opt.x\nprint(x_opt)\n\n\n#not a great plot\nx_grid = np.linspace(1,3, 100)\n\nplt.plot(x_grid, 4 *x_grid - 8)\nplt.plot(x_grid, 3 * x_grid- 5)\nplt.axhline(0, color='red')\nplt.axvline(x_opt,color='red')\n\nplt.show()", "Answer 1.6\nThe main challenge here is coming up with the constraint. Your constraint equation is that $f(x) - g(x) > 0$. Also, this is non-convex so basin hopping should be used.", "def non_convex(x):\n    return np.cos(14.5 * x - 0.3) + (x + 0.2) * x\ndef poly(x):\n    return x ** 3 - x**2 + x\n\nmy_constraints = {'type': 'ineq', 'fun': lambda y: non_convex(y) - poly(y)}\nkwargs = {'constraints': my_constraints}\nresult = opt.basinhopping(non_convex, x0=0, minimizer_kwargs=kwargs, niter=1000)\nprint(result)\nx_opt = result.x\n\nx_grid = np.linspace(-2, 2, 100)\nplt.plot(x_grid, non_convex(x_grid))\nplt.plot(x_grid, poly(x_grid))\nplt.axvline(x_opt, color='orange')\nplt.ylim(-1, 2)\n\nplt.show()\n\nprint(x_opt)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
BGC_Team/Facies Prediction_submit.ipynb
apache-2.0
[ "Facies Classification Solution By Team_BGC\nCheolkyun Jeong and Ping Zhang from Team_BGC\nImport Header", "##### import basic functions\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n##### import stuff from scikit learn\nfrom sklearn.ensemble import RandomForestClassifier, RandomForestRegressor\nfrom sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict\nfrom sklearn.metrics import confusion_matrix, make_scorer, f1_score, accuracy_score, recall_score, precision_score", "1. Data Preprocessing\n1) Filtered data preparation\nAfter the initial data validation, we found that the NM_M input is a key differentiator to group non-marine stones (sandstone, c_siltstone, and f_siltstone) and marine stones (marine_silt_shale, mudstone, wackestone, dolomite, packstone, and bafflestone) in the current field. Our team decided to use this classifier aggressively and prepared a filtered dataset that cleans up the outliers.", "# Input file paths\nfacies_vector_path = 'facies_vectors.csv'\ntrain_path = 'training_data.csv'\ntest_path = 'validation_data_nofacies.csv'\n# Read training data to dataframe\n#training_data = pd.read_csv(train_path)", "Using Full data to train", "# 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale \n#5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone\nfacies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD',\n '#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D']\nfeature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']\n\nfacies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',\n 'WS', 'D','PS', 'BS']\n#facies_color_map is a dictionary that maps facies labels\n#to their respective colors\n\ntraining_data = pd.read_csv(facies_vector_path)\n\nfacies_color_map = {}\nfor ind, label in enumerate(facies_labels):\n facies_color_map[label] = facies_colors[ind]\n\ndef label_facies(row, labels):\n 
return labels[ row['Facies'] -1]\n \ntraining_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)\ntraining_data.describe()\n\n# Filtering out some outliers\nj = []\nfor i in range(len(training_data)):\n if ((training_data['NM_M'].values[i]==2)and ((training_data['Facies'].values[i]==1)or(training_data['Facies'].values[i]==2)or(training_data['Facies'].values[i]==3))):\n j.append(i)\n elif((training_data['NM_M'].values[i]==1)and((training_data['Facies'].values[i]!=1)and(training_data['Facies'].values[i]!=2)and(training_data['Facies'].values[i]!=3))):\n j.append(i)\n\ntraining_data_filtered = training_data.drop(training_data.index[j])\nprint(np.shape(training_data_filtered))", "Add Missing PE by following AR4 Team", "#X = training_data_filtered[feature_names].values\n# Testing without filtering\nX = training_data[feature_names].values\n\nreg = RandomForestRegressor(max_features='sqrt', n_estimators=50)\n# DataImpAll = training_data_filtered[feature_names].copy()\nDataImpAll = training_data[feature_names].copy()\nDataImp = DataImpAll.dropna(axis = 0, inplace=False)\nXimp=DataImp.loc[:, DataImp.columns != 'PE']\nYimp=DataImp.loc[:, 'PE']\nreg.fit(Ximp, Yimp)\nX[np.array(DataImpAll.PE.isnull()),4] = reg.predict(DataImpAll.loc[DataImpAll.PE.isnull(),:].drop('PE',axis=1,inplace=False))", "2. 
Feature Selection\nLog Plot of Facies\nFiltered Data", "#count the number of unique entries for each facies, sort them by\n#facies number (instead of by number of entries)\n#facies_counts_filtered = training_data_filtered['Facies'].value_counts().sort_index()\nfacies_counts = training_data['Facies'].value_counts().sort_index()\n#use facies labels to index each count\n#facies_counts_filtered.index = facies_labels\nfacies_counts.index = facies_labels\n\n#facies_counts_filtered.plot(kind='bar',color=facies_colors, \n# title='Distribution of Filtered Training Data by Facies')\nfacies_counts.plot(kind='bar',color=facies_colors, \n title='Distribution of Filtered Training Data by Facies')\n#facies_counts_filtered\n#training_data_filtered.columns\n#facies_counts_filtered\n\ntraining_data.columns\nfacies_counts", "Filtered facies", "#correct_facies_labels_filtered = training_data_filtered['Facies'].values\n#feature_vectors_filtered = training_data_filtered.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)\ncorrect_facies_labels = training_data['Facies'].values\nfeature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)\n\nfrom sklearn import preprocessing\n#scaler_filtered = preprocessing.StandardScaler().fit(X)\n#scaled_features_filtered = scaler_filtered.transform(X)\nscaler = preprocessing.StandardScaler().fit(X)\nscaled_features = scaler.transform(X)\n\nfrom sklearn.cross_validation import train_test_split\n#X_train_filtered, X_test_filtered, y_train_filtered, y_test_filtered = train_test_split(\n# scaled_features_filtered, correct_facies_labels_filtered, test_size=0.3, random_state=16)\n\nX_train, X_test, y_train, y_test = train_test_split(\n scaled_features, correct_facies_labels, test_size=0.3, random_state=16)\n\nX_train_full, X_test_zero, y_train_full, y_test_full = train_test_split(\n scaled_features, correct_facies_labels, test_size=0.0, random_state=42)", "3. 
Prediction Model\nAccuracy", "def accuracy(conf):\n total_correct = 0.\n nb_classes = conf.shape[0]\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n acc = total_correct/sum(sum(conf))\n return acc\n\nadjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])\n\ndef accuracy_adjacent(conf, adjacent_facies):\n nb_classes = conf.shape[0]\n total_correct = 0.\n for i in np.arange(0,nb_classes):\n total_correct += conf[i][i]\n for j in adjacent_facies[i]:\n total_correct += conf[i][j]\n return total_correct / sum(sum(conf))", "SVM", "from sklearn.model_selection import KFold, cross_val_score,LeavePGroupsOut, LeaveOneGroupOut, cross_val_predict\nfrom classification_utilities import display_cm, display_adj_cm\n\nfrom sklearn import svm\nclf_filtered = svm.LinearSVC(random_state=23) \n\n#clf_filtered.fit(X_train_filtered, y_train_filtered)\nclf_filtered.fit(X_train, y_train)\n\n#predicted_labels_filtered = clf_filtered.predict(X_test_filtered)\npredicted_labels = clf_filtered.predict(X_test)", "SVM for filtered data model\n4. Result Analysis", "well_data = pd.read_csv('validation_data_nofacies.csv')\nwell_data['Well Name'] = well_data['Well Name'].astype('category')\nwell_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)\n\nX_unknown = scaler.transform(well_features)\n\n# Using all data and optimize parameter to train the data\nclf_filtered = svm.SVC(C=10, gamma=1)\nclf_filtered.fit(X_train_full, y_train_full)\n#clf_filtered.fit(X_train_filtered, y_train_filtered)\ny_unknown = clf_filtered.predict(X_unknown)\nwell_data['Facies'] = y_unknown\nwell_data\nwell_data.to_csv('predict_result_svm_full_data.csv')", "5. 
Using Tensorflow\nFiltered Data Model", "from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nimport tensorflow as tf\n\n# Specify that all features have real-value data\nfeature_columns_filtered = [tf.contrib.layers.real_valued_column(\"\", dimension=7)]\n\n# Build 3 layer DNN with 7, 17, 10 units respectively.\nclassifier_filtered = tf.contrib.learn.DNNClassifier(feature_columns=feature_columns_filtered,\n hidden_units=[7, 17, 10],\n n_classes=10)\n\n# Fit model.\n#classifier_filtered.fit(x=X_train_filtered,y=y_train_filtered,steps=5000)\n#y_predict_filtered = []\n#predictions = classifier_filtered.predict(x=X_test_filtered)\n\nclassifier_filtered.fit(x=X_train,y=y_train,steps=5000)\ny_predict = []\npredictions = classifier_filtered.predict(x=X_test)\n\n\nfor i, p in enumerate(predictions):\n y_predict.append(p)\n #print(\"Index %s: Prediction - %s, Real - %s\" % (i + 1, p, y_test_filtered[i]))\n\n# Evaluate accuracy.\n#accuracy_score_filtered = classifier_filtered.evaluate(x=X_test_filtered, y=y_test_filtered)[\"accuracy\"]\n#print('Accuracy: {0:f}'.format(accuracy_score_filtered))\naccuracy_score = classifier_filtered.evaluate(x=X_test, y=y_test)[\"accuracy\"]\nprint('Accuracy: {0:f}'.format(accuracy_score))\n\ncv_conf_dnn = confusion_matrix(y_test, y_predict)\n\nprint('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf_dnn))\nprint('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf_dnn, adjacent_facies))\ndisplay_cm(cv_conf_dnn, facies_labels,display_metrics=True, hide_zeros=True)", "Result from DNN", "classifier_filtered.fit(x=X_train_full,\n y=y_train_full,\n steps=10000)\npredictions = classifier_filtered.predict(X_unknown)\ny_predict_filtered = []\nfor i, p in enumerate(predictions):\n y_predict_filtered.append(p)\nwell_data['Facies'] = y_predict_filtered\nwell_data\nwell_data.to_csv('predict_result_dnn_full_data.csv')\n\n#export_dir_path = 
'd:\\\\'\n#classifier_filtered.export(export_dir_path)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/zh-cn/tutorials/generative/style_transfer.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "神经风格迁移\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/tutorials/generative/style_transfer\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\"> 在 TensorFlow.org 上查看</a> </td>\n <td> <img><a>在 GitHub 上查看源代码</a> </td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 GitHub 中查看源代码</a></td>\n <td> <img><a>下载笔记本</a>\n</td>\n <td><a href=\"https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2\"><img src=\"https://tensorflow.google.cn/images/hub_logo_32px.png\">查看 TF Hub 模型</a></td>\n</table>\n\n本教程使用深度学习来用其他图像的风格创造一个图像(曾经你是否希望可以像毕加索或梵高一样绘画?)。 这被称为神经风格迁移,该技术概述于 <a href=\"https://arxiv.org/abs/1508.06576\" class=\"external\">A Neural Algorithm of Artistic Style</a> (Gatys et al.).\n注:本教程演示了原始的风格迁移算法。它将图像内容优化为特定风格。现代方式会训练模型以直接生成风格化图像(类似于 cyclegan)。这种方式要快得多(最多可达 1000 倍)。\n有关风格迁移的简单应用,请查看此教程来详细了解如何使用来自 TensorFlow Hub 的预训练任意图像风格模型或者如何搭配使用风格迁移模型和 TensorFlow Lite。 \n神经风格迁移是一种优化技术,主要用于获取两个图像(内容图像和风格参考图像(例如著名画家的艺术作品))并将它们混合在一起,以便使输出图像看起来像内容图像,但却是以风格参考图像的风格“绘制”的。\n这是通过优化输出图像以匹配内容图像的内容统计和风格参考图像的风格统计来实现的。这些统计信息是使用卷积网络从图像中提取的。\n例如,让我们为这只狗和 Wassily Kandinsky 的构图 7 拍摄一张图像:\n<img 
src=\"https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg\" class=\"\">\n黄色拉布拉多犬,来自 Wikimedia Commons 的 Elf。许可证 CC BY-SA 3.0\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg\" class=\"\">\n现在,如果 Kandinsky 决定用这种风格专门为这只狗绘画,会是什么样子?像这样的东西?\n<img src=\"https://tensorflow.org/tutorials/generative/images/stylized-image.png\" class=\"\"> \n配置\n导入和配置模块", "import tensorflow as tf\n\nimport IPython.display as display\n\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['figure.figsize'] = (12,12)\nmpl.rcParams['axes.grid'] = False\n\nimport numpy as np\nimport PIL.Image\nimport time\nimport functools\n\ndef tensor_to_image(tensor):\n tensor = tensor*255\n tensor = np.array(tensor, dtype=np.uint8)\n if np.ndim(tensor)>3:\n assert tensor.shape[0] == 1\n tensor = tensor[0]\n return PIL.Image.fromarray(tensor)", "下载图像并选择风格图像和内容图像:", "content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')\nstyle_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')", "将输入可视化\n定义一个加载图像的函数,并将其最大尺寸限制为 512 像素。", "def load_img(path_to_img):\n max_dim = 512\n img = tf.io.read_file(path_to_img)\n img = tf.image.decode_image(img, channels=3)\n img = tf.image.convert_image_dtype(img, tf.float32)\n\n shape = tf.cast(tf.shape(img)[:-1], tf.float32)\n long_dim = max(shape)\n scale = max_dim / long_dim\n\n new_shape = tf.cast(shape * scale, tf.int32)\n\n img = tf.image.resize(img, new_shape)\n img = img[tf.newaxis, :]\n return img", "创建一个简单的函数来显示图像:", "def imshow(image, title=None):\n if len(image.shape) > 3:\n image = tf.squeeze(image, axis=0)\n\n plt.imshow(image)\n if title:\n plt.title(title)\n\ncontent_image = 
load_img(content_path)\nstyle_image = load_img(style_path)\n\nplt.subplot(1, 2, 1)\nimshow(content_image, 'Content Image')\n\nplt.subplot(1, 2, 2)\nimshow(style_image, 'Style Image')", "使用 TF-Hub 进行快速风格迁移\n本教程演示了原始风格迁移算法,这种算法将图像内容优化为特定风格。在了解细节之前,我们先看一下 TensorFlow Hub 模型是如何做到这一点的:", "import tensorflow_hub as hub\nhub_model = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2')\nstylized_image = hub_model(tf.constant(content_image), tf.constant(style_image))[0]\ntensor_to_image(stylized_image)", "定义内容和风格的表示\n使用模型的中间层来获取图像的内容和风格表示。 从网络的输入层开始,前几个层的激励响应表示边缘和纹理等低级 feature (特征)。 随着层数加深,最后几层代表更高级的 feature (特征)——实体的部分,如轮子或眼睛。 在此教程中,我们使用的是 VGG19 网络结构,这是一个已经预训练好的图像分类网络。 这些中间层是从图像中定义内容和风格的表示所必需的。 对于一个输入图像,我们尝试匹配这些中间层的相应风格和内容目标的表示。\n加载 VGG19 并在我们的图像上测试它以确保正常运行:", "x = tf.keras.applications.vgg19.preprocess_input(content_image*255)\nx = tf.image.resize(x, (224, 224))\nvgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')\nprediction_probabilities = vgg(x)\nprediction_probabilities.shape\n\npredicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]\n[(class_name, prob) for (number, class_name, prob) in predicted_top_5]", "现在,加载没有分类部分的 VGG19 ,并列出各层的名称:", "vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n\nprint()\nfor layer in vgg.layers:\n print(layer.name)", "从网络中选择中间层的输出以表示图像的风格和内容:", "content_layers = ['block5_conv2'] \n\nstyle_layers = ['block1_conv1',\n 'block2_conv1',\n 'block3_conv1', \n 'block4_conv1', \n 'block5_conv1']\n\nnum_content_layers = len(content_layers)\nnum_style_layers = len(style_layers)", "用于表示风格和内容的中间层\n那么,为什么我们预训练的图像分类网络中的这些中间层的输出允许我们定义风格和内容的表示?\n从高层理解,为了使网络能够实现图像分类(该网络已被训练过),它必须理解图像。 这需要将原始图像作为输入像素并构建内部表示,这个内部表示将原始图像像素转换为对图像中存在的 feature (特征)的复杂理解。\n这也是卷积神经网络能够很好地推广的一个原因:它们能够捕获不变性并定义类别(例如猫与狗)之间的 feature (特征),这些 feature (特征)与背景噪声和其他干扰无关。 因此,将原始图像传递到模型输入和分类标签输出之间的某处的这一过程,可以视作复杂的 feature 
(特征)提取器。通过这些模型的中间层,我们就可以描述输入图像的内容和风格。\n建立模型\n使用tf.keras.applications中的网络可以让我们非常方便的利用Keras的功能接口提取中间层的值。\n在使用功能接口定义模型时,我们需要指定输入和输出:\nmodel = Model(inputs, outputs)\n以下函数构建了一个 VGG19 模型,该模型返回一个中间层输出的列表:", "def vgg_layers(layer_names):\n \"\"\" Creates a vgg model that returns a list of intermediate output values.\"\"\"\n # Load our model. Load pretrained VGG, trained on imagenet data\n vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')\n vgg.trainable = False\n \n outputs = [vgg.get_layer(name).output for name in layer_names]\n\n model = tf.keras.Model([vgg.input], outputs)\n return model", "然后建立模型:", "style_extractor = vgg_layers(style_layers)\nstyle_outputs = style_extractor(style_image*255)\n\n#Look at the statistics of each layer's output\nfor name, output in zip(style_layers, style_outputs):\n print(name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n print()", "风格计算\n图像的内容由中间 feature maps (特征图)的值表示。\n事实证明,图像的风格可以通过不同 feature maps (特征图)上的平均值和相关性来描述。 通过在每个位置计算 feature (特征)向量的外积,并在所有位置对该外积进行平均,可以计算出包含此信息的 Gram 矩阵。 对于特定层的 Gram 矩阵,具体计算方法如下所示:\n$$G^l_{cd} = \\frac{\\sum_{ij} F^l_{ijc}(x)F^l_{ijd}(x)}{IJ}$$\n这可以使用tf.linalg.einsum函数来实现:", "def gram_matrix(input_tensor):\n result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)\n input_shape = tf.shape(input_tensor)\n num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)\n return result/(num_locations)", "提取风格和内容\n构建一个返回风格和内容张量的模型。", "class StyleContentModel(tf.keras.models.Model):\n def __init__(self, style_layers, content_layers):\n super(StyleContentModel, self).__init__()\n self.vgg = vgg_layers(style_layers + content_layers)\n self.style_layers = style_layers\n self.content_layers = content_layers\n self.num_style_layers = len(style_layers)\n self.vgg.trainable = False\n\n def call(self, inputs):\n \"Expects float input in [0,1]\"\n inputs = 
inputs*255.0\n preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)\n outputs = self.vgg(preprocessed_input)\n style_outputs, content_outputs = (outputs[:self.num_style_layers], \n outputs[self.num_style_layers:])\n\n style_outputs = [gram_matrix(style_output)\n for style_output in style_outputs]\n\n content_dict = {content_name:value \n for content_name, value \n in zip(self.content_layers, content_outputs)}\n\n style_dict = {style_name:value\n for style_name, value\n in zip(self.style_layers, style_outputs)}\n \n return {'content':content_dict, 'style':style_dict}", "在图像上调用此模型,可以返回 style_layers 的 gram 矩阵(风格)和 content_layers 的内容:", "extractor = StyleContentModel(style_layers, content_layers)\n\nresults = extractor(tf.constant(content_image))\n\nprint('Styles:')\nfor name, output in sorted(results['style'].items()):\n print(\" \", name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n print()\n\nprint(\"Contents:\")\nfor name, output in sorted(results['content'].items()):\n print(\" \", name)\n print(\" shape: \", output.numpy().shape)\n print(\" min: \", output.numpy().min())\n print(\" max: \", output.numpy().max())\n print(\" mean: \", output.numpy().mean())\n", "梯度下降\n使用此风格和内容提取器,我们现在可以实现风格传输算法。我们通过计算每个图像的输出和目标的均方误差来做到这一点,然后取这些损失值的加权和。\n设置风格和内容的目标值:", "style_targets = extractor(style_image)['style']\ncontent_targets = extractor(content_image)['content']", "定义一个 tf.Variable 来表示要优化的图像。 为了快速实现这一点,使用内容图像对其进行初始化( tf.Variable 必须与内容图像的形状相同)", "image = tf.Variable(content_image)", "由于这是一个浮点图像,因此我们定义一个函数来保持像素值在 0 和 1 之间:", "def clip_0_1(image):\n return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)", "创建一个 optimizer 。 本教程推荐 LBFGS,但 Adam 也可以正常工作:", "opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)", "为了优化它,我们使用两个损失的加权组合来获得总损失:", "style_weight=1e-2\ncontent_weight=1e4\n\ndef 
style_content_loss(outputs):\n style_outputs = outputs['style']\n content_outputs = outputs['content']\n style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2) \n for name in style_outputs.keys()])\n style_loss *= style_weight / num_style_layers\n\n content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2) \n for name in content_outputs.keys()])\n content_loss *= content_weight / num_content_layers\n loss = style_loss + content_loss\n return loss", "使用 tf.GradientTape 来更新图像。", "@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))", "现在,我们运行几个步来测试一下:", "train_step(image)\ntrain_step(image)\ntrain_step(image)\ntensor_to_image(image)", "运行正常,我们来执行一个更长的优化:", "import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='', flush=True)\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"Train step: {}\".format(step))\n \nend = time.time()\nprint(\"Total time: {:.1f}\".format(end-start))", "总变分损失\n此实现只是一个基础版本,它的一个缺点是它会产生大量的高频误差。 我们可以直接通过正则化图像的高频分量来减少这些高频误差。 在风格转移中,这通常被称为总变分损失:", "def high_pass_x_y(image):\n x_var = image[:,:,1:,:] - image[:,:,:-1,:]\n y_var = image[:,1:,:,:] - image[:,:-1,:,:]\n\n return x_var, y_var\n\nx_deltas, y_deltas = high_pass_x_y(content_image)\n\nplt.figure(figsize=(14,10))\nplt.subplot(2,2,1)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Original\")\n\nplt.subplot(2,2,2)\nimshow(clip_0_1(2*x_deltas+0.5), \"Vertical Deltas: Original\")\n\nx_deltas, y_deltas = high_pass_x_y(image)\n\nplt.subplot(2,2,3)\nimshow(clip_0_1(2*y_deltas+0.5), \"Horizontal Deltas: Styled\")\n\nplt.subplot(2,2,4)\nimshow(clip_0_1(2*x_deltas+0.5), 
\"Vertical Deltas: Styled\")", "这显示了高频分量如何增加。\n而且,本质上高频分量是一个边缘检测器。 我们可以从 Sobel 边缘检测器获得类似的输出,例如:", "plt.figure(figsize=(14,10))\n\nsobel = tf.image.sobel_edges(content_image)\nplt.subplot(1,2,1)\nimshow(clip_0_1(sobel[...,0]/4+0.5), \"Horizontal Sobel-edges\")\nplt.subplot(1,2,2)\nimshow(clip_0_1(sobel[...,1]/4+0.5), \"Vertical Sobel-edges\")", "与此相关的正则化损失是这些值的平方和:", "def total_variation_loss(image):\n x_deltas, y_deltas = high_pass_x_y(image)\n return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))\n\ntotal_variation_loss(image).numpy()", "这展示了它的作用。但是没有必要自己去实现它,因为 TensorFlow 包括一个标准的实现:", "tf.image.total_variation(image).numpy()", "重新进行优化\n选择 total_variation_loss 的权重:", "total_variation_weight=30", "现在,将它加入 train_step 函数中:", "@tf.function()\ndef train_step(image):\n with tf.GradientTape() as tape:\n outputs = extractor(image)\n loss = style_content_loss(outputs)\n loss += total_variation_weight*tf.image.total_variation(image)\n\n grad = tape.gradient(loss, image)\n opt.apply_gradients([(grad, image)])\n image.assign(clip_0_1(image))", "重新初始化优化的变量:", "image = tf.Variable(content_image)", "并进行优化:", "import time\nstart = time.time()\n\nepochs = 10\nsteps_per_epoch = 100\n\nstep = 0\nfor n in range(epochs):\n for m in range(steps_per_epoch):\n step += 1\n train_step(image)\n print(\".\", end='', flush=True)\n display.clear_output(wait=True)\n display.display(tensor_to_image(image))\n print(\"Train step: {}\".format(step))\n\nend = time.time()\nprint(\"Total time: {:.1f}\".format(end-start))", "最后,保存结果:", "file_name = 'stylized-image.png'\ntensor_to_image(image).save(file_name)\n\ntry:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download(file_name)", "了解更多\n本教程演示了原始风格迁移算法。有关风格迁移的简单应用,请查看此教程,以详细了解如何使用 TensorFlow Hub 中的任意图像风格迁移模型。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ernestyalumni/Propulsion
T1000/Notebooks/Controls/DiscreteControlExamples.ipynb
gpl-2.0
[ "Setup Jupyter Notebook", "from pathlib import Path\nimport sys\n\nnotebook_directory_parent = Path.cwd().resolve().parent.parent\nif str(notebook_directory_parent) not in sys.path:\n sys.path.append(str(notebook_directory_parent))\n\nprint(notebook_directory_parent)\n\n%matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport control\nimport control.matlab as controlmatlab", "cf. Discrete control #1: Introduction and overview, Aug. 11, 2017, Brian Douglas", "G = control.tf([1,], [0.2, 1])\nprint(G)\n\nt, y = control.step_response(G)\nplt.plot(t, y)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nunit_step = control.tf([1,], [1, 0])\nprint(unit_step)\n\nprint(G * unit_step)\nt, y = control.step_response(G * unit_step)\nplt.plot(t, y)\nplt.xlabel('Time')\nplt.title('Ramp Response')", "This is hard to see, let's compare to just a ramp", "t_diff, y_diff = control.step_response(unit_step - G * unit_step)\nplt.plot(t_diff, y_diff)\nplt.xlabel('Time')\nplt.title('Ramp Response Difference')", "Steady State Error is 0.2 (20%), greater than say requirement of Steady State Error being < 2% for ramp input.\nSince it's first-order, open loop, it has infinite phase margin, no amount of phase lag will make system unstable.\nCreate the controller transfer function.", "C = control.tf([500, 50], [100, 1, 0])\nprint(C)", "Create the feedback system and find the closed loop transfer function.", "CL = control.feedback(C*G, 1)\nprint(CL)", "Check the ramp steady state error like we did before", "t, y = control.step_response(unit_step - CL * unit_step)\nplt.plot(t, y)\nplt.xlabel('Time')\nplt.title('ramp steady state error')", "There's a jump in error in the beginning, and then goes to 2 percent error.\nTo find phase margin we look at Bode plot of open loop system.", "mag, phase, omega = control.matlab.bode(C*G)\n\n# wpc = crossover frequency, wgc = gain crossover frequency (where gain crosses 1)\ngain_margin, phase_margin, wpc, wgc = 
control.margin(C*G)\nprint(gain_margin)\nprint(phase_margin) # 50.5 phase margin, beating a 48 degree phase margin\nprint(wpc)\nprint(wgc)", "We can convert to a discrete transfer function in the z domain using c2d() \nDefault conversion method is Zero Order Hold (ZOH)", "# c2d returns sysd - Discrete time system, with sampling rate Ts\nCz = control.matlab.c2d(C, 0.1) # default 'zoh'\nprint(Cz)\n\nmag, phase, omega = control.matlab.bode(C, Cz)\n\ngain_margin, phase_margin, wpc, wgc = control.margin(Cz)\nprint(gain_margin)\nprint(phase_margin) \nprint(wpc)\nprint(wgc)", "cf. Discrete control #2: Discretize! Going from continuous to discrete domain Brian Douglas", "G_s = control.tf([1,], [1, 3, 2])\nprint(G_s)\nZ_g_k = control.matlab.c2d(G_s, 1., 'zoh')\nprint(Z_g_k)\n\nnp.exp(-0.1)\n\nG = control.tf([1], [1, 1])\nprint(G)\nTs = 0.1 # seconds", "Discretization is done with the function c2d", "help(control.matlab.c2d)\n\nGz = control.matlab.c2d(G, Ts, 'zoh')\nprint(Gz)\nGf = control.matlab.c2d(G, Ts, 'foh')\nprint(Gf)\nGi = control.matlab.c2d(G, Ts, 'impulse')\nprint(Gi)\nGt = control.matlab.c2d(G, Ts, 'tustin')\nprint(Gt)\nGm = control.matlab.c2d(G, Ts, 'matched')\nprint(Gm)\nGb = control.matlab.c2d(G, Ts, 'bilinear')\nprint(Gb)\n\nmag, phase, omega = control.matlab.bode(G, Gz, Gf, Gi, Gt, Gm)\nplt.legend()\n\nmag, phase, omega = control.bode_plot([G, Gz, Gf, Gi, Gt, Gm])\nplt.gca().legend(('Continuous', 'ZOH', 'FOH', 'Impulse', 'Tustin', 'Matched'))\n\ny, T = control.matlab.step(G)\nyz, Tz = control.matlab.step(Gz)\nyf, Tf = control.matlab.step(Gf)\nyi, Ti = control.matlab.step(Gi)\nyt, Tt = control.matlab.step(Gt)\nym, Tm = control.matlab.step(Gm)\n\nplt.plot(T, y, 'r')\nplt.plot(Tz, yz, 'b')\nplt.plot(Tf, yf, 'g')\nplt.plot(Ti, yi, 'y')\nplt.plot(Tt, yt, 'p')\nplt.plot(Tm, ym, 'o')\nplt.show()\n\nprint(Gi)\n\nT, y = control.impulse_response(G)\nTi, yi = control.impulse_response(Gi)\nTz, yz = control.impulse_response(Gz)\nplt.plot(T, y, 'r')\nplt.plot(Tz, yz, 
'b')\nplt.plot(Ti, yi, 'g')\nplt.xlim([0.0, 0.7])\nplt.show()", "cf. Discrete control #3: Designing for the zero-order hold Brian Douglas", "C = control.tf([500, 50], [100, 1, 0])\nprint(C)\n\nCt = control.matlab.c2d(C, 0.2, 'tustin')\nprint(Ct)", "Discrete control #4: Discretize with the matched method Brian Douglas", "# No zeroes, poles at -1, -2, gain of 3\nb, a = control.matlab.zpk2tf([], [-1, -2], 3)\nprint(b)\nprint(a) # Confirms this is same transfer function as written before.\n\nzero_frequency_DC_gain = control.dcgain(control.tf([3], [1,3,2]))\nprint(zero_frequency_DC_gain)\n\nGz = control.TransferFunction([1], [1, -0.5032, 0.04978], 1)\nprint(Gz)\n\nG = control.tf([3,], [1, 3, 2])\nprint(G)\n\nmag, phase, omega = control.matlab.bode(G, Gz)\n\nt, y = control.step_response(G)\nplt.plot(t, y)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nt, y = control.step_response(Gz)\nplt.plot(t, y)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nt, y = control.step_response(G)\ntz, yz = control.step_response(Gz)\nplt.plot(t, y)\nplt.step(tz, yz)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nGz_proper = control.tf([1, 1], [1, -0.5032, 0.04978], 1)\nmag, phase, omega = control.matlab.bode(G, Gz_proper)\n\nmag, phase, omega = control.matlab.bode(G, Gz, Gz_proper)\n\nt, y = control.step_response(G)\ntz, yz = control.step_response(Gz)\ntzp, yzp = control.step_response(Gz_proper)\nplt.plot(t, y)\nplt.step(tz, yz)\nplt.step(tzp, yzp)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nimport sympy\nz = sympy.Symbol('z')\nprint( ((z - 0.3679) * (z -0.1353)).as_poly() )\n\nprint(G)\nprint(G.zero())\nprint(G.pole())\nGz_matched = control.matlab.c2d(G, 1.0, 'matched')\nprint(Gz_matched)\n\nmag, phase, omega = control.matlab.bode(G, Gz_matched)\n\nt, y = control.step_response(G)\ntm, ym = control.step_response(Gz_matched)\nplt.plot(t, y)\nplt.step(tm, ym)\nplt.xlabel('Time')\nplt.title('Step Response')\n\nGz_matched_hand = control.tf([0.4099, 0.4099], [1, -0.5032, 
0.04977], 1)\nprint(Gz_matched_hand)\nmag, phase, omega = control.matlab.bode(G, Gz_matched_hand)\n\nt, y = control.step_response(G)\ntm, ym = control.step_response(Gz_matched_hand)\nplt.plot(t, y)\nplt.step(tm, ym)\nplt.xlabel('Time')\nplt.title('Step Response')", "cf. Discrete control #5: The bilinear transform", "G = control.tf([2,], [1, 1])\nprint(G)\n\nGz = control.matlab.c2d(G, 1, 'tustin')\nprint(Gz)\n\nGz = control.matlab.c2d(G, 0.1, 'tustin')\nprint(Gz)\n\nmag, phase, omega = control.matlab.bode(G, Gz)\n\nt, y = control.step_response(G)\ntm, ym = control.step_response(Gz)\nplt.plot(t, y)\nplt.step(tm, ym)\nplt.xlabel('Time')\nplt.title('Step Response')", "cf. Discrete control #6: z-plane warping and the bilinear transform", "G = control.tf([1, 0, 0.01], [1, 0.1, 0.01])\nprint(G)\n\n# Set sample time to 2 seconds\nT = 2\n# Set critical frequency to 0.1 rad/s\nW0 = 0.1\n# Set quality factor to 1\nQ = 1\n\n# Create our s-domain notch filter\nG1 = control.tf([1, 0, W0*W0], [1, W0/Q, W0*W0])\nprint(G1)\n\nmag, phase, omega = control.matlab.bode(G1)", "Notch at 0.1 radians per second.", "# Let's use the bilinear transform (Tustin) to convert to z-domain\nG1t = control.matlab.c2d(G1, T, 'tustin')\nprint(G1t)\n\nmag, phase, omega = control.matlab.bode(G1, G1t)", "Blue digital filter lies on top of analog filter, good.\nChange W0, critical frequency.", "W0 = 0.7\nG1 = control.tf([1, 0, W0*W0], [1, W0/Q, W0*W0])\nprint(G1)\n\nG1t = control.matlab.c2d(G1, T, 'tustin')\nprint(G1t)\n\nmag, phase, omega = control.matlab.bode(G1, G1t)\n\nWa = 0.1 # rad / sec\nWd = 1/(2j) * np.log((1 + Wa * 1j) / (1 - Wa * 1j))\nprint(Wd)\n\nWa = 0.7 # rad / sec\nWd = 1/(2j) * np.log((1 + Wa * 1j) / (1 - Wa * 1j))\nprint(Wd)\n\nWa = 2.5 # rad / sec\nWd = 1/(2j) * np.log((1 + Wa * 1j) / (1 - Wa * 1j))\nprint(Wd)\n\nW0 = 0.7\nT = 2\nQ = 1\nG7 = control.tf([1, 0, W0*W0], [1, W0/Q, W0*W0])\nprint(G7)\n\n# G7 is our s-domain notch filter at 0.7 rad/s\n# Now we'll convert it to z-domain with 
bilinear transform and prewarping\n#G7t = control.matlab.c2d(G7, T, ['Method', 'tustin', 'PrewarpFrequency',W0])\nG7t = control.sample_system(G7, T, 'tustin', prewarp_frequency=W0)\nprint(G7t)\n\nmag, phase, omega = control.matlab.bode(G7, G7t)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
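An illustrative aside on the warping cells in the notebook above (my addition, not part of the archived notebook): the complex-log expression `Wd = 1/(2j)*log((1+jWa)/(1-jWa))` coincides with the usual Tustin frequency-warping formula, omega_d = (2/T)*arctan(omega_a*T/2), when evaluated at the notebook's sample time T = 2, so the two can be cross-checked numerically:

```python
import numpy as np

T = 2.0  # sample time used in the notebook cells above

for Wa in (0.1, 0.7, 2.5):  # analog frequencies, rad/s
    # complex-log form, as written in the notebook
    Wd_log = (1 / 2j) * np.log((1 + Wa * 1j) / (1 - Wa * 1j))
    # closed-form bilinear-transform frequency warping; equals the above when T = 2
    Wd_atan = (2 / T) * np.arctan(Wa * T / 2)
    assert np.isclose(Wd_log.real, Wd_atan)
    print(Wa, "->", Wd_atan)
```

This makes the notebook's observation concrete: low frequencies map almost unchanged (arctan(0.1) is approximately 0.0997), while higher frequencies are compressed (arctan(2.5) is approximately 1.19).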
hglanz/phys202-2015-work
assignments/assignment03/NumpyEx01.ipynb
mit
[ "Numpy Exercise 1\nImports", "import numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport antipackage\nimport github.ellisonbg.misc.vizarray as va", "Checkerboard\nWrite a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:\n\nYour function should work for both odd and even size.\nThe 0,0 element should be 1.0.\nThe dtype should be float.", "def checkerboard(size):\n \"\"\"Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array\"\"\"\n row1 = [(x % 2) for x in range(1, size+1)]\n row2 = [(x % 2) for x in range(size)]\n # alternate the two row patterns; works for any size, including 1 and 3\n board = [row1 if i % 2 == 0 else row2 for i in range(size)]\n return np.array(board, dtype=float)\n \nassert checkerboard(8).ndim == 2\n\n\na = checkerboard(4)\nassert a[0,0]==1.0\nassert a.sum()==8.0\nassert a.dtype==np.dtype(float)\nassert np.all(a[0,0:5:2]==1.0)\nassert np.all(a[1,0:5:2]==0.0)\n\nb = checkerboard(5)\nassert b[0,0]==1.0\nassert b.sum()==13.0\nassert np.all(b.ravel()[0:26:2]==1.0)\nassert np.all(b.ravel()[1:25:2]==0.0)", "Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.", "va.enable()\nva.set_block_size(10)\ncheckerboard(20)\n\nassert True", "Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.", "va.enable()\nva.set_block_size(5)\ncheckerboard(27)\n\nassert True" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
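As a side note on the checkerboard exercise above (my addition, not part of the assignment), the row-building loop can be replaced by a fully vectorized NumPy one-liner that satisfies the same assertions:

```python
import numpy as np

def checkerboard_vec(size):
    """Checkerboard of 0.0/1.0 with a 1.0 in the top-left corner, vectorized."""
    # (row + col) is even on the 1.0 squares, so offset by 1 before taking mod 2
    idx = np.indices((size, size)).sum(axis=0)
    return ((idx + 1) % 2).astype(float)

a = checkerboard_vec(4)
print(a)  # alternating 1.0/0.0 rows, a[0, 0] == 1.0
```

This avoids Python-level loops entirely and handles every size (odd, even, or 1) with no special cases.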
liufuyang/coursera-Applied-Machine-Learning-in-Python
Assignment 4.ipynb
mit
[ "You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nAssignment 4 - Understanding and Predicting Property Maintenance Fines\nThis assignment is based on a data challenge from the Michigan Data Science Team (MDST). \nThe Michigan Data Science Team (MDST) and the Michigan Student Symposium for Interdisciplinary Statistical Sciences (MSSISS) have partnered with the City of Detroit to help solve one of the most pressing problems facing Detroit - blight. Blight violations are issued by the city to individuals who allow their properties to remain in a deteriorated condition. Every year, the city of Detroit issues millions of dollars in fines to residents and every year, many of these fines remain unpaid. Enforcing unpaid blight fines is a costly and tedious process, so the city wants to know: how can we increase blight ticket compliance?\nThe first step in answering this question is understanding when and why a resident might fail to comply with a blight ticket. This is where predictive modeling comes in. For this assignment, your task is to predict whether a given blight ticket will be paid on time.\nAll data for this assignment has been provided to us through the Detroit Open Data Portal. Only the data already included in your Coursera directory can be used for training the model for this assignment. Nonetheless, we encourage you to look into data from other Detroit datasets to help inform feature creation and model selection. We recommend taking a look at the following related datasets:\n\nBuilding Permits\nTrades Permits\nImprove Detroit: Submitted Issues\nDPD: Citizen Complaints\nParcel Map\n\n\nWe provide you with two data files for use in training and validating your models: train.csv and test.csv. 
Each row in these two files corresponds to a single blight ticket, and includes information about when, why, and to whom each ticket was issued. The target variable is compliance, which is True if the ticket was paid early, on time, or within one month of the hearing date, False if the ticket was paid after the hearing date or not at all, and Null if the violator was found not responsible. Compliance, as well as a handful of other variables that will not be available at test-time, are only included in train.csv.\nNote: All tickets where the violators were found not responsible are not considered during evaluation. They are included in the training set as an additional source of data for visualization, and to enable unsupervised and semi-supervised approaches. However, they are not included in the test set.\n<br>\nFile descriptions (Use only this data for training your model!)\ntrain.csv - the training set (all tickets issued 2004-2011)\ntest.csv - the test set (all tickets issued 2012-2016)\naddresses.csv & latlons.csv - mapping from ticket id to addresses, and from addresses to lat/lon coordinates.
\n Note: misspelled addresses may be incorrectly geolocated.\n\n<br>\nData fields\ntrain.csv & test.csv\nticket_id - unique identifier for tickets\nagency_name - Agency that issued the ticket\ninspector_name - Name of inspector that issued the ticket\nviolator_name - Name of the person/organization that the ticket was issued to\nviolation_street_number, violation_street_name, violation_zip_code - Address where the violation occurred\nmailing_address_str_number, mailing_address_str_name, city, state, zip_code, non_us_str_code, country - Mailing address of the violator\nticket_issued_date - Date and time the ticket was issued\nhearing_date - Date and time the violator's hearing was scheduled\nviolation_code, violation_description - Type of violation\ndisposition - Judgment and judgement type\nfine_amount - Violation fine amount, excluding fees\nadmin_fee - $20 fee assigned to responsible judgments\n\nstate_fee - $10 fee assigned to responsible judgments\n late_fee - 10% fee assigned to responsible judgments\n discount_amount - discount applied, if any\n clean_up_cost - DPW clean-up or graffiti removal cost\n judgment_amount - Sum of all fines and fees\n grafitti_status - Flag for graffiti violations\ntrain.csv only\npayment_amount - Amount paid, if any\npayment_date - Date payment was made, if it was received\npayment_status - Current payment status as of Feb 1 2017\nbalance_due - Fines and fees still owed\ncollection_status - Flag for payments in collections\ncompliance [target variable for prediction] \n Null = Not responsible\n 0 = Responsible, non-compliant\n 1 = Responsible, compliant\ncompliance_detail - More information on why each ticket was marked compliant or non-compliant\n\n\nEvaluation\nYour predictions will be given as the probability that the corresponding blight ticket will be paid on time.\nThe evaluation metric for this assignment is the Area Under the ROC Curve (AUC). \nYour grade will be based on the AUC score computed for your classifier. 
A model with an AUROC of 0.7 passes this assignment; over 0.75 will receive full points.\n\nFor this assignment, create a function that trains a model to predict blight ticket compliance in Detroit using train.csv. Using this model, return a series of length 61001 with the data being the probability that each corresponding ticket from test.csv will be paid, and the index being the ticket_id.\nExample:\nticket_id\n 284932 0.531842\n 285362 0.401958\n 285361 0.105928\n 285338 0.018572\n ...\n 376499 0.208567\n 376500 0.818759\n 369851 0.018528\n Name: compliance, dtype: float32", "import pandas as pd\nimport numpy as np\n\ndef blight_model():\n \n # Your code here\n \n return # Your answer here\n\ndf_train = pd.read_csv('train.csv', encoding = \"ISO-8859-1\")\ndf_test = pd.read_csv('test.csv', encoding = \"ISO-8859-1\")\n\ndf_train.columns\n\nlist_to_remove = ['balance_due',\n 'collection_status',\n 'compliance_detail',\n 'payment_amount',\n 'payment_date',\n 'payment_status']\n\nlist_to_remove_all = ['violator_name', 'zip_code', 'country', 'city',\n 'inspector_name', 'violation_street_number', 'violation_street_name',\n 'violation_zip_code', 'violation_description',\n 'mailing_address_str_number', 'mailing_address_str_name',\n 'non_us_str_code',\n 'ticket_issued_date', 'hearing_date']\n\ndf_train.drop(list_to_remove, axis=1, inplace=True)\ndf_train.drop(list_to_remove_all, axis=1, inplace=True)\ndf_test.drop(list_to_remove_all, axis=1, inplace=True)\n\ndf_train.drop('grafitti_status', axis=1, inplace=True)\ndf_test.drop('grafitti_status', axis=1, inplace=True)\n\ndf_train.head()\n\ndf_train.violation_code.unique().size\n\ndf_train.disposition.unique().size\n\ndf_latlons = pd.read_csv('latlons.csv')\n\ndf_latlons.head()\n\ndf_address = pd.read_csv('addresses.csv')\ndf_address.head()\n\ndf_id_latlons = df_address.set_index('address').join(df_latlons.set_index('address'))\n\ndf_id_latlons.head()\n\ndf_train =
df_train.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))\ndf_test = df_test.set_index('ticket_id').join(df_id_latlons.set_index('ticket_id'))\n\ndf_train.head()\n\ndf_train.agency_name.value_counts()\n\n# df_train.country.value_counts()\n# so we remove zip code and country as well\n\nvio_code_freq10 = df_train.violation_code.value_counts().index[0:10]\nvio_code_freq10\n\ndf_train['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_train.violation_code ]\n\ndf_train.head()\n\ndf_train.violation_code_freq10.value_counts()\n\n# drop violation code\n\ndf_train.drop('violation_code', axis=1, inplace=True)\n\ndf_test['violation_code_freq10'] = [list(vio_code_freq10).index(c) if c in vio_code_freq10 else -1 for c in df_test.violation_code ]\ndf_test.drop('violation_code', axis=1, inplace=True)\n\n#df_train.grafitti_status.fillna('None', inplace=True)\n#df_test.grafitti_status.fillna('None', inplace=True)\n\ndf_train = df_train[df_train.compliance.isnull() == False]\n\ndf_train.isnull().sum()\n\ndf_test.isnull().sum()\n\ndf_train.lat.fillna(method='pad', inplace=True)\ndf_train.lon.fillna(method='pad', inplace=True)\ndf_train.state.fillna(method='pad', inplace=True)\n\ndf_test.lat.fillna(method='pad', inplace=True)\ndf_test.lon.fillna(method='pad', inplace=True)\ndf_test.state.fillna(method='pad', inplace=True)\n\ndf_train.isnull().sum().sum()\n\ndf_test.isnull().sum().sum()", "", "df_train.head()\n\none_hot_encode_columns = ['agency_name', 'state', 'disposition']\n\n\n[ df_train[c].unique().size for c in one_hot_encode_columns]\n\n# So remove city and states...\n\none_hot_encode_columns = ['agency_name', 'state', 'disposition']\n\ndf_train = pd.get_dummies(df_train, columns=one_hot_encode_columns)\ndf_test = pd.get_dummies(df_test, columns=one_hot_encode_columns)\n\ndf_train.head()", "Train, keep, test split", "from sklearn.model_selection import train_test_split\ntrain_features = 
df_train.columns.drop('compliance')\ntrain_features\n\nX_data, X_keep, y_data, y_keep = train_test_split(df_train[train_features], \n df_train.compliance, \n random_state=0,\n test_size=0.05)\n\nprint(X_data.shape, X_keep.shape)\n\nX_train, X_test, y_train, y_test = train_test_split(X_data[train_features], \n y_data, \n random_state=0,\n test_size=0.2)\n\nprint(X_train.shape, X_test.shape)", "Train a NeuralNet and see the performance", "from sklearn.neural_network import MLPClassifier\nfrom sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\n\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.transform(X_test)\n\nclf = MLPClassifier(hidden_layer_sizes = [50], alpha = 5,\n random_state = 0,\n solver='lbfgs')\nclf.fit(X_train_scaled, y_train)\nprint(clf.loss_)\n\nclf.score(X_train_scaled, y_train)\n\nclf.score(X_test_scaled, y_test)\n\nfrom sklearn.metrics import recall_score, precision_score, f1_score\n\ntrain_pred = clf.predict(X_train_scaled)\nprint(precision_score(y_train, train_pred),\n recall_score(y_train, train_pred),\n f1_score(y_train, train_pred))\n\nfrom sklearn.metrics import recall_score, precision_score, f1_score\n\ntest_pred = clf.predict(X_test_scaled)\nprint(precision_score(y_test, test_pred),\n recall_score(y_test, test_pred),\n f1_score(y_test, test_pred))\n\ntest_pro = clf.predict_proba(X_test_scaled)\n\ndef draw_roc_curve():\n %matplotlib notebook\n import matplotlib.pyplot as plt\n from sklearn.metrics import roc_curve, auc\n\n fpr_lr, tpr_lr, _ = roc_curve(y_test, test_pro[:,1])\n roc_auc_lr = auc(fpr_lr, tpr_lr)\n\n plt.figure()\n plt.xlim([-0.01, 1.00])\n plt.ylim([-0.01, 1.01])\n plt.plot(fpr_lr, tpr_lr, lw=3, label='MLP ROC curve (area = {:0.2f})'.format(roc_auc_lr))\n plt.xlabel('False Positive Rate', fontsize=16)\n plt.ylabel('True Positive Rate', fontsize=16)\n plt.title('ROC curve (blight compliance classifier)', fontsize=16)\n plt.legend(loc='lower right', fontsize=13)\n plt.plot([0, 1], [0, 1],
color='navy', lw=3, linestyle='--')\n plt.axes().set_aspect('equal')\n plt.show()\n \ndraw_roc_curve()\n\ntest_pro[0:10]\n\nclf.predict(X_test_scaled[0:10])\n\ny_test[0:10]\n\n1 - y_train.sum()/len(y_train)\n\nfrom sklearn.metrics import recall_score, precision_score, f1_score\n\ntest_pred = clf.predict(X_test_scaled)\nprint(precision_score(y_test, test_pred),\n recall_score(y_test, test_pred),\n f1_score(y_test, test_pred))\n\ndef draw_pr_curve():\n from sklearn.metrics import precision_recall_curve\n from sklearn.metrics import roc_curve, auc\n\n precision, recall, thresholds = precision_recall_curve(y_test, test_pro[:,1])\n print(len(thresholds))\n idx = min(range(len(thresholds)), key=lambda i: abs(thresholds[i]-0.5))\n print(idx)\n print(np.argmin(np.abs(thresholds)))\n \n closest_zero = idx # np.argmin(np.abs(thresholds))\n closest_zero_p = precision[closest_zero]\n closest_zero_r = recall[closest_zero]\n\n import matplotlib.pyplot as plt\n plt.figure()\n plt.xlim([0.0, 1.01])\n plt.ylim([0.0, 1.01])\n plt.plot(precision, recall, label='Precision-Recall Curve')\n plt.plot(closest_zero_p, closest_zero_r, 'o', markersize = 12, fillstyle = 'none', c='r', mew=3)\n plt.xlabel('Precision', fontsize=16)\n plt.ylabel('Recall', fontsize=16)\n plt.axes().set_aspect('equal')\n plt.show()\n \n return thresholds\n\nthresholds = draw_pr_curve()\n\nimport matplotlib.pyplot as plt\n%matplotlib notebook\nplt.plot(thresholds)\nplt.show()", "Let's use this first simple model to see if we could pass the test" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
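The blight assignment above is graded on ROC AUC. As an aside (my addition, not from the notebook): AUC equals the Mann-Whitney U statistic normalized by n_pos * n_neg, so it can be computed from score ranks alone, which is a handy cross-check on library implementations. This sketch ignores tied scores for simplicity:

```python
import numpy as np

def auc_from_ranks(y_true, scores):
    """AUC via the Mann-Whitney U statistic: rank positives among all scores."""
    y_true = np.asarray(y_true)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks; ties not averaged
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

print(auc_from_ranks([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]))  # → 1.0 (perfectly separated)
```

A classifier whose positive examples always outrank the negatives scores 1.0; random ordering hovers around 0.5, which is why the assignment's 0.7 threshold is a meaningful bar.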
setten/pymatgen
examples/Plotting and Analyzing a Phase Diagram using the Materials API.ipynb
mit
[ "from pymatgen.ext.matproj import MPRester\nfrom pymatgen.analysis.phase_diagram import PhaseDiagram, PDPlotter\n%matplotlib inline", "Generating the phase diagram\nTo generate a phase diagram, we obtain entries from the Materials Project and call the PhaseDiagram class in pymatgen.", "#This initializes the REST adaptor. You may need to put your own API key in as an arg.\na = MPRester()\n\n#Entries are the basic unit for thermodynamic and other analyses in pymatgen.\n#This gets all entries belonging to the Ca-C-O system.\nentries = a.get_entries_in_chemsys(['Ca', 'C', 'O'])\n\n#With entries, you can do many sophisticated analyses, like creating phase diagrams.\npd = PhaseDiagram(entries)", "Plotting the phase diagram\nTo plot a phase diagram, we send our phase diagram object into the PDPlotter class.", "#Let's show all phases, including unstable ones\nplotter = PDPlotter(pd, show_unstable=True)\nplotter.show()", "Calculating energy above hull and other phase equilibria properties\nTo perform more sophisticated analyses, use the methods of the PhaseDiagram object itself, such as get_decomp_and_e_above_hull (in older pymatgen versions this functionality lived on a separate PDAnalyzer class).", "import collections\n\ndata = collections.defaultdict(list)\nfor e in entries:\n decomp, ehull = pd.get_decomp_and_e_above_hull(e)\n data[\"Materials ID\"].append(e.entry_id)\n data[\"Composition\"].append(e.composition.reduced_formula)\n data[\"Ehull\"].append(ehull) \n data[\"Decomposition\"].append(\" + \".join([\"%.2f %s\" % (v, k.composition.formula) for k, v in decomp.items()]))\n\nfrom pandas import DataFrame\ndf = DataFrame(data, columns=[\"Materials ID\", \"Composition\", \"Ehull\", \"Decomposition\"])\n\nprint(df)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Mashimo/datascience
01-Regression/LogisticRegressionSKL.ipynb
apache-2.0
[ "Titanic\nGoal: predict survival on the Titanic\nIt's a basic learning competition on the ML platform Kaggle, a simple introduction to machine learning concepts, specifically binary classification (survived / not survived).\nHere we are looking into how to apply Logistic Regression to the Titanic dataset.\n1. Collect and understand the data\nThe data can be downloaded directly from Kaggle", "import pandas as pd\n\n# get titanic training file as a DataFrame\ntitanic = pd.read_csv(\"../datasets/titanic_train.csv\")\n\ntitanic.shape\n\n# preview the data\ntitanic.head()", "Variable Description\nSurvived: Survived (1) or died (0); this is the target variable\nPclass: Passenger's class (1st, 2nd or 3rd class) \nName: Passenger's name\nSex: Passenger's sex\nAge: Passenger's age\nSibSp: Number of siblings/spouses aboard\nParch: Number of parents/children aboard\nTicket: Ticket number\nFare: Fare\nCabin: Cabin\nEmbarked: Port of embarkation", "titanic.describe()", "Not all features are numeric:", "titanic.info()", "2. Process the Data\nCategorical variables need to be transformed to numeric variables\nTransform the embarkation port\nThere are three ports: C = Cherbourg, Q = Queenstown, S = Southampton", "ports = pd.get_dummies(titanic.Embarked, prefix='Embarked')\nports.head()", "Now the feature Embarked (a category) has been transformed into 3 binary features, e.g.
Embarked_C = 0 means not embarked in Cherbourg, 1 = embarked in Cherbourg.\nFinally, the 3 new binary features substitute the original one in the data frame:", "titanic = titanic.join(ports)\ntitanic.drop(['Embarked'], axis=1, inplace=True) # then drop the original column", "Transform the gender feature\nThis is easier, being already a binary classification (male or female).\nThis was 1912.", "titanic.Sex = titanic.Sex.map({'male':0, 'female':1})", "Extract the target variable", "y = titanic.Survived.copy() # copy “y” column values out\n\nX = titanic.drop(['Survived'], axis=1) # then, drop y column", "Drop not so important features\nFor the first model, we ignore some categorical features which will not add too much of a signal.", "X.drop(['Cabin'], axis=1, inplace=True) \n\nX.drop(['Ticket'], axis=1, inplace=True) \n\nX.drop(['Name'], axis=1, inplace=True) \n\nX.drop(['PassengerId'], axis=1, inplace=True)\n\nX.info()", "All features are now numeric, ready for regression.\nBut we still have a couple of processing steps to do.\nCheck if there are any missing values", "X.isnull().values.any()\n\n#X[pd.isnull(X).any(axis=1)]", "True, there are missing values in the data (NaN) and a quick look at the data reveals that they are all in the Age feature.\nOne possibility could be to remove the feature; another is to fill the missing values with a fixed number or the average age.", "X.Age.fillna(X.Age.mean(), inplace=True) # replace NaN with average age\n\nX.isnull().values.any()", "Now all missing values have been removed.\nThe logistic regression would otherwise not work with missing values.\nSplit the dataset into training and validation\nThe training set will be used to build the machine learning models. The model will be based on features like passengers’ gender and class but also on the known survived flag.\nThe validation set should be used to see how well the model performs on unseen data.
For each passenger in the test set, I use the trained model to predict whether or not they survived the sinking of the Titanic, and the predictions are then compared with the actual survival flag.", "from sklearn.model_selection import train_test_split\n # 80% go into the training set, 20% into the validation set\nX_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=7)", "3. Modelling\nGet a baseline\nA baseline is always useful to see if the trained model behaves significantly better than an easy-to-obtain alternative, such as a random guess or a simple heuristic like all and only female passengers survived. In this case, after quickly looking at the training dataset - where the survival outcome is present - I am going to use the following:", "def simple_heuristic(titanicDF):\n '''\n predict whether or not the passengers survived or perished.\n Here's the algorithm; predict the passenger survived:\n 1) If the passenger is female or\n 2) if his socioeconomic status is high AND if the passenger is under 18\n '''\n\n predictions = [] # a list\n \n for passenger_index, passenger in titanicDF.iterrows():\n \n if passenger['Sex'] == 1:\n # female\n predictions.append(1) # survived\n elif passenger['Age'] < 18 and passenger['Pclass'] == 1:\n # male but minor and rich\n predictions.append(1) # survived\n else:\n predictions.append(0) # everyone else perished\n\n return predictions", "Let's see how this simple algorithm will behave on the validation dataset and we will keep that number as our baseline:", "simplePredictions = simple_heuristic(X_valid)\ncorrect = sum(simplePredictions == y_valid)\nprint(\"Baseline: \", correct/len(y_valid))", "Baseline: a simple algorithm predicts correctly 73% of validation cases.\nNow let's see if the model can do better.\nLogistic Regression", "from sklearn.linear_model import LogisticRegression\nmodel = LogisticRegression()\n\nmodel.fit(X_train, y_train)", "4.
Evaluate the model", "model.score(X_train, y_train)\n\nmodel.score(X_valid, y_valid)", "Two things:\n- the score on the training set is much better than on the validation set, an indication that the model could be overfitting and may not generalize, e.g. to other ship sinkings.\n- the score on the validation set is better than the baseline, so it adds some value at a minimal cost (the logistic regression is not computationally expensive, at least not for smaller datasets).\nAn advantage of logistic regression (e.g. against a neural network) is that it's easily interpretable. It can be written as a math formula:", "model.intercept_ # the fitted intercept\n\nmodel.coef_ # the fitted coefficients", "Which means that the formula is: \n$$ \\boldsymbol P(survive) = \\frac{1}{1+e^{-logit}} $$ \nwhere the logit is: \n$$ logit = \\boldsymbol{\\beta_{0} + \\beta_{1}\\cdot x_{1} + ... + \\beta_{n}\\cdot x_{n}}$$ \nwhere $\\beta_{0}$ is the model intercept and the other beta parameters are the model coefficients from above, each multiplied by the related feature: \n$$ logit = \\boldsymbol{1.4224 - 0.9319 * Pclass + ... + 0.2228 * Embarked_S}$$ \n5. Iterate on the model\nThe model could be improved, for example transforming the excluded features above or creating new ones (e.g. I could extract titles from the names, which could be another indication of socio-economic status).\nA heat map of correlation may give us an understanding of which variables are important", "titanic.corr()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
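The Titanic notebook above ends with the logistic formula P(survive) = 1/(1+e^(-logit)). A minimal standalone sketch of that computation (my addition; the coefficient values in the notebook, such as the 1.4224 intercept, are its fitted examples and are not reproduced here):

```python
import math

def survival_probability(intercept, coefs, features):
    """P(survive) = 1 / (1 + exp(-logit)), with logit = intercept + coefs . features."""
    logit = intercept + sum(b * x for b, x in zip(coefs, features))
    return 1.0 / (1.0 + math.exp(-logit))

# With a zero logit the model is maximally uncertain:
print(survival_probability(0.0, [], []))  # → 0.5
```

A positive logit pushes the probability above 0.5 and a negative one below it, which is why the sign of each fitted coefficient (e.g. negative for Pclass) is directly interpretable.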
cmawer/pycon-2017-eda-tutorial
notebooks/1-RedCard-EDA/4-Redcard-final-joins.ipynb
mit
[ "Redcard Exploratory Data Analysis\nThis dataset is taken from a fantastic paper that looks to see how analytical choices made by different data science teams on the same dataset in an attempt to answer the same research question affect the final outcome.\nMany analysts, one dataset: Making transparent how variations in analytical choices affect results\nThe data can be found here.\nThe Task\nDo an Exploratory Data Analysis on the redcard dataset. Keeping in mind the question is the following: Are soccer referees more likely to give red cards to dark-skin-toned players than light-skin-toned players?\n\nBefore plotting/joining/doing something, have a question or hypothesis that you want to investigate\nDraw a plot of what you want to see on paper to sketch the idea\nWrite it down, then make the plan on how to get there\nHow do you know you aren't fooling yourself\nWhat else can I check if this is actually true?\nWhat evidence could there be that it's wrong?", "%matplotlib inline\n%config InlineBackend.figure_format='retina'\n\nfrom __future__ import absolute_import, division, print_function\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib.pyplot import GridSpec\nimport seaborn as sns\nimport numpy as np\nimport pandas as pd\nimport os, sys\nfrom tqdm import tqdm\nimport warnings\nwarnings.filterwarnings('ignore')\nsns.set_context(\"poster\", font_scale=1.3)\n\nimport missingno as msno\nimport pandas_profiling\n\nfrom sklearn.datasets import make_blobs\nimport time", "About the Data\n\nThe dataset is available as a list with 146,028 dyads of players and referees and includes details from players, details from referees and details regarding the interactions of player-referees. A summary of the variables of interest can be seen below. A detailed description of all variables included can be seen in the README file on the project website. 
\nFrom a company for sports statistics, we obtained data and profile photos from all soccer players (N = 2,053) playing in the first male divisions of England, Germany, France and Spain in the 2012-2013 season and all referees (N = 3,147) that these players played under in their professional career (see Figure 1). We created a dataset of player–referee dyads including the number of matches players and referees encountered each other and our dependent variable, the number of red cards given to a player by a particular referee throughout all matches the two encountered each other.\n-- https://docs.google.com/document/d/1uCF5wmbcL90qvrk_J27fWAvDcDNrO9o_APkicwRkOKc/edit\n\n| Variable Name: | Variable Description: | \n| -- | -- | \n| playerShort | short player ID | \n| player | player name | \n| club | player club | \n| leagueCountry | country of player club (England, Germany, France, and Spain) | \n| height | player height (in cm) | \n| weight | player weight (in kg) | \n| position | player position | \n| games | number of games in the player-referee dyad | \n| goals | number of goals in the player-referee dyad | \n| yellowCards | number of yellow cards player received from the referee | \n| yellowReds | number of yellow-red cards player received from the referee | \n| redCards | number of red cards player received from the referee | \n| photoID | ID of player photo (if available) | \n| rater1 | skin rating of photo by rater 1 | \n| rater2 | skin rating of photo by rater 2 | \n| refNum | unique referee ID number (referee name removed for anonymizing purposes) | \n| refCountry | unique referee country ID number | \n| meanIAT | mean implicit bias score (using the race IAT) for referee country | \n| nIAT | sample size for race IAT in that particular country | \n| seIAT | standard error for mean estimate of race IAT | \n| meanExp | mean explicit bias score (using a racial thermometer task) for referee country | \n| nExp | sample size for explicit bias in that particular 
country | \n| seExp | standard error for mean estimate of explicit bias measure |", "# Uncomment one of the following lines and run the cell:\n\n# df = pd.read_csv(\"redcard.csv.gz\", compression='gzip')\n# df = pd.read_csv(\"https://github.com/cmawer/pycon-2017-eda-tutorial/raw/master/data/redcard/redcard.csv.gz\", compression='gzip')\n\ndef save_subgroup(dataframe, g_index, subgroup_name, prefix='raw_'):\n save_subgroup_filename = \"\".join([prefix, subgroup_name, \".csv.gz\"])\n dataframe.to_csv(save_subgroup_filename, compression='gzip', encoding='UTF-8')\n test_df = pd.read_csv(save_subgroup_filename, compression='gzip', index_col=g_index, encoding='UTF-8')\n # Test that we recover what we send in\n if dataframe.equals(test_df):\n print(\"Test-passed: we recover the equivalent subgroup dataframe.\")\n else:\n print(\"Warning -- equivalence test!!! Double-check.\")\n\ndef load_subgroup(filename, index_col=[0]):\n return pd.read_csv(filename, compression='gzip', index_col=index_col)\n\nclean_players = load_subgroup(\"cleaned_players.csv.gz\")\nplayers = load_subgroup(\"raw_players.csv.gz\", )\ncountries = load_subgroup(\"raw_countries.csv.gz\")\nreferees = load_subgroup(\"raw_referees.csv.gz\")\nagg_dyads = pd.read_csv(\"raw_dyads.csv.gz\", compression='gzip', index_col=[0, 1])\n# tidy_dyads = load_subgroup(\"cleaned_dyads.csv.gz\")\ntidy_dyads = pd.read_csv(\"cleaned_dyads.csv.gz\", compression='gzip', index_col=[0, 1])", "Joining and further considerations", "!conda install pivottablejs -y\n\nfrom pivottablejs import pivot_ui\n\nclean_players = load_subgroup(\"cleaned_players.csv.gz\")\n\ntemp = tidy_dyads.reset_index().set_index('playerShort').merge(clean_players, left_index=True, right_index=True)\n\ntemp.shape\n\n# This does not work on Azure notebooks out of the box\n# pivot_ui(temp[['skintoneclass', 'position_agg', 'redcard']], )\n\n# How many games has each player played in?\ngames = 
tidy_dyads.groupby(level=1).count()\nsns.distplot(games);\n\n(tidy_dyads.groupby(level=0)\n .count()\n .sort_values('redcard', ascending=False)\n .rename(columns={'redcard':'total games refereed'})).head()\n\n(tidy_dyads.groupby(level=0)\n .sum()\n .sort_values('redcard', ascending=False)\n .rename(columns={'redcard':'total redcards given'})).head()\n\n(tidy_dyads.groupby(level=1)\n .sum()\n .sort_values('redcard', ascending=False)\n .rename(columns={'redcard':'total redcards received'})).head()\n\ntidy_dyads.head()\n\ntidy_dyads.groupby(level=0).size().sort_values(ascending=False)\n\ntotal_ref_games = tidy_dyads.groupby(level=0).size().sort_values(ascending=False)\ntotal_player_games = tidy_dyads.groupby(level=1).size().sort_values(ascending=False)\n\ntotal_ref_given = tidy_dyads.groupby(level=0).sum().sort_values(ascending=False,by='redcard')\ntotal_player_received = tidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard')\n\nsns.distplot(total_player_received, kde=False);\n\nsns.distplot(total_ref_given, kde=False);\n\ntidy_dyads.groupby(level=1).sum().sort_values(ascending=False, by='redcard').head()\n\ntidy_dyads.sum(), tidy_dyads.count(), tidy_dyads.sum()/tidy_dyads.count()\n\nplayer_ref_game = (tidy_dyads.reset_index()\n .set_index('playerShort')\n .merge(clean_players,\n left_index=True,\n right_index=True)\n )\n\nplayer_ref_game.head()\n\nplayer_ref_game.shape\n\nbootstrap = pd.concat([player_ref_game.sample(replace=True, \n n=10000).groupby('skintone').mean() \n for _ in range(100)])\n\nax = sns.regplot(bootstrap.index.values,\n y='redcard',\n data=bootstrap,\n lowess=True,\n scatter_kws={'alpha':0.4,},\n x_jitter=(0.125 / 4.0))\nax.set_xlabel(\"Skintone\");" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
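The redcard notebook above bootstraps group means by repeatedly sampling rows with replacement (`player_ref_game.sample(replace=True, ...)`). A minimal standalone sketch of that resampling idea on synthetic data (my addition, not the redcard dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=1000)  # stand-in for a numeric column

# resample with replacement, as DataFrame.sample(replace=True) does in the notebook
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(500)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% bootstrap interval for the mean: [{lo:.3f}, {hi:.3f}]")
```

The spread of `boot_means` is what the notebook's lowess-smoothed `regplot` over 100 bootstrap replicates is visualizing: uncertainty in the per-group mean, without any distributional assumption.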
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/02_tensorflow/b_tfstart_graph.ipynb
apache-2.0
[ "Getting started with TensorFlow (Graph Mode)\nLearning Objectives\n - Understand the difference between Tensorflow's two modes: Eager Execution and Graph Execution\n - Get used to the deferred execution paradigm: first define a graph then run it in a tf.Session()\n - Understand how to parameterize a graph using tf.placeholder() and feed_dict\n - Understand the difference between constant Tensors and variable Tensors, and how to define each\n - Practice using the mid-level tf.train module for gradient descent\nIntroduction\nEager Execution\nEager mode evaluates operations and returns concrete values immediately. To enable eager mode simply place tf.enable_eager_execution() at the top of your code. We recommend using eager execution when prototyping as it is intuitive, easier to debug, and requires less boilerplate code.\nGraph Execution\nGraph mode is TensorFlow's default execution mode (although it will change to eager in TF 2.0). In graph mode operations only produce a symbolic graph which doesn't get executed until run within the context of a tf.Session(). This style of coding is less intuitive and has more boilerplate, however it can lead to performance optimizations and is particularly suited for distributing training across multiple devices. We recommend using delayed execution for performance sensitive production code.", "import tensorflow as tf\nprint(tf.__version__)", "Graph Execution\nAdding Two Tensors\nBuild the Graph\nUnlike eager mode, no concrete value will be returned yet. Just a name, shape and type are printed. Behind the scenes a directed graph is being created.", "a = tf.constant(value = [5, 3, 8], dtype = tf.int32)\nb = tf.constant(value = [3, -1, 2], dtype = tf.int32)\nc = tf.add(x = a, y = b)\nprint(c)", "Run the Graph\nA graph can be executed in the context of a tf.Session(). Think of a session as the bridge between the front-end Python API and the back-end C++ execution engine.
\nWithin a session, passing a tensor operation to run() will cause Tensorflow to execute all upstream operations in the graph required to calculate that value.", "with tf.Session() as sess:\n result = sess.run(fetches = c)\n print(result)", "Parameterizing the Grpah\nWhat if values of a and b keep changing? How can you parameterize them so they can be fed in at runtime? \nStep 1: Define Placeholders\nDefine a and b using tf.placeholder(). You'll need to specify the data type of the placeholder, and optionally a tensor shape.\nStep 2: Provide feed_dict\nNow when invoking run() within the tf.Session(), in addition to providing a tensor operation to evaluate, you also provide a dictionary whose keys are the names of the placeholders.", "a = tf.placeholder(dtype = tf.int32, shape = [None]) \nb = tf.placeholder(dtype = tf.int32, shape = [None])\nc = tf.add(x = a, y = b)\n\nwith tf.Session() as sess:\n result = sess.run(fetches = c, feed_dict = {\n a: [3, 4, 5],\n b: [-1, 2, 3]\n })\n print(result)", "Linear Regression\nToy Dataset\nWe'll model the following:\n\\begin{equation}\ny= 2x + 10\n\\end{equation}", "X = tf.constant(value = [1,2,3,4,5,6,7,8,9,10], dtype = tf.float32)\nY = 2 * X + 10\nprint(\"X:{}\".format(X))\nprint(\"Y:{}\".format(Y))", "2.2 Loss Function\nUsing mean squared error, our loss function is:\n\\begin{equation}\nMSE = \\frac{1}{m}\\sum_{i=1}^{m}(\\hat{Y}_i-Y_i)^2\n\\end{equation}\n$\\hat{Y}$ represents the vector containing our model's predictions:\n\\begin{equation}\n\\hat{Y} = w_0X + w_1\n\\end{equation}\nNote below we introduce TF variables for the first time. Unlike constants, variables are mutable. 
\nBrowse the official TensorFlow guide on variables for more information on when/how to use them.", "with tf.variable_scope(name_or_scope = \"training\", reuse = tf.AUTO_REUSE):\n w0 = tf.get_variable(name = \"w0\", initializer = tf.constant(value = 0.0, dtype = tf.float32))\n w1 = tf.get_variable(name = \"w1\", initializer = tf.constant(value = 0.0, dtype = tf.float32))\n \nY_hat = w0 * X + w1\nloss_mse = tf.reduce_mean(input_tensor = (Y_hat - Y)**2)", "Optimizer\nAn optimizer in TensorFlow both calculates gradients and updates weights. In addition to basic gradient descent, TF provides implementations of several more advanced optimizers such as ADAM and FTRL. They can all be found in the tf.train module. \nNote below we're not expclictly telling the optimizer which tensors are our weight tensors. So how does it know what to update? Optimizers will update all variables in the tf.GraphKeys.TRAINABLE_VARIABLES collection. All variables are added to this collection by default. Since our only variables are w0 and w1, this is the behavior we want. If we had a variable that we didn't want to be added to the collection we would set trainable=false when creating it.", "LEARNING_RATE = tf.placeholder(dtype = tf.float32, shape = None)\noptimizer = tf.train.GradientDescentOptimizer(learning_rate = LEARNING_RATE).minimize(loss = loss_mse)", "Training Loop\nNote our results are identical to what we found in Eager mode.", "STEPS = 1000\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer()) # initialize variables\n \n for step in range(STEPS):\n #1. Calculate gradients and update seights \n sess.run(fetches = optimizer, feed_dict = {LEARNING_RATE: 0.02})\n \n #2. 
Periodically print MSE\n if step % 100 == 0:\n print(\"STEP: {} MSE: {}\".format(step, sess.run(fetches = loss_mse)))\n \n # Print final MSE and weights\n print(\"STEP: {} MSE: {}\".format(STEPS, sess.run(loss_mse)))\n print(\"w0:{}\".format(round(float(sess.run(w0)), 4)))\n print(\"w1:{}\".format(round(float(sess.run(w1)), 4)))", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
rjdkmr/do_x3dna
docs/global_elasticity.ipynb
gpl-3.0
[ "How to calculate Global Elasticity?\nTo calculate global elasticity, dnaMD globalElasticity command can be used. However, it takes HDF5 file as input.\nFollowing steps can be performed to generate HDF5 files. The tutorial file can be downloaded here. We will prepare HDF5\nfile for both free and bound DNA.\nCalculate stretching twisting and bending motions\nBoth free and bound DNA is superimposed on to the same DNA structure. Careful that bending calculation is fitting dependent.\nTherefore, at first we aligned both free and bound DNA to a common DNA structure as follows", "%%bash\n\n# Align bound DNA\necho 6 0 | gmx trjconv -f inputs/F1_complex_DNA.xtc -s inputs/dna.tpr -n inputs/dna.ndx -o complex_dna_aligned.xtc -fit rot+trans\n\n# Align free DNA\necho 6 0 | gmx trjconv -f inputs/F1_free_DNA.xtc -s inputs/dna.tpr -n inputs/dna.ndx -o free_dna_aligned.xtc -fit rot+trans", "1. Run do_x3dna on DNA trajectory, -nofit is used because DNA is already superimposed to a common DNA structure using trjconv.", "%%bash\n\n# For free DNA\necho 2 | do_x3dna -f free_dna_aligned.xtc -s inputs/dna.tpr -n inputs/dna.ndx -ref -noavg -nofit -name free\nmv *_free.dat outputs/.\n\n# For bound DNA\necho 2 | do_x3dna -f complex_dna_aligned.xtc -s inputs/dna.tpr -n inputs/dna.ndx -ref -noavg -nofit -name bound\nmv *_bound.dat outputs/.", "2. Run dnaMD to extract the parameters from do_x3dna output files and save as HDF5 file. 
Also calculate global axis,\ncurvature and tangents.", "%%bash\n\n# For free DNA\ndnaMD saveAsH5 -tbp 27 -i outputs/L-BPS_free.dat,outputs/L-BPH_free.dat,outputs/HelAxis_free.dat -o free_dna.h5\ndnaMD axisCurv -tbp 27 -bs 2 -be 25 -ctan -scp 100 -s 1000 -cta 30 -io free_dna.h5 -ap free_dna_axis.pdb\n\n# For bound DNA\ndnaMD saveAsH5 -tbp 27 -i outputs/L-BPS_bound.dat,outputs/L-BPH_bound.dat,outputs/HelAxis_bound.dat -o bound_dna.h5\ndnaMD axisCurv -tbp 27 -bs 2 -be 25 -ctan -scp 100 -s 1000 -cta 30 -io bound_dna.h5 -ap bound_dna_axis.pdb", "Now, we have HDF5 files of both free and bounds DNA. It can be used for the calculation of elastic properties. These files\ncan be used with either dnaMD Python module or dnaMD globalElasticity.\nBending Stretching Twist Modulus\nFollowing command calculate Bending Stretching Twist modulus matrix. Output matrix will be stored in csv file.\nElastic modulus matrix is printed as output and average values of contour length and cumulative twist angle\nis also printed.", "%%bash\n\ndnaMD globalElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 20 -estype BST -paxis X -o elastic_modulus_BST.csv", "The above modulus matrix is in this form:\n$$\\text{modulus matrix} =\n\\begin{bmatrix}\nM_{Bx} & M_{Bx,By} & M_{Bx,S} & M_{Bx,T} \\\nM_{Bx,By} & M_{By} & M_{By,S} & M_{By,T} \\\nM_{Bx,S} & M_{By,S} & M_{S} & M_{S,T} \\\nM_{Bx,T} & M_{Bx,T} & M_{S,T} & M_{T}\n\\end{bmatrix}\n$$\nWhere:\n\n$M_{Bx}$ - Bending-1 stiffness in one plane\n$M_{By}$ - Bending-2 stiffness in another orthogonal plane\n$M_{S}$ - Stretch Modulus\n$M_{T}$ - Twist rigidity\n$M_{Bx,By}$ - Bending-1 and Bending-2 coupling\n$M_{By,S}$ - Bending-2 and stretching coupling\n$M_{S,T}$ - Stretching Twsiting coupling\n$M_{Bx,S}$ - Bending-1 Stretching coupling\n$M_{By,T}$ - Bending-2 Twisting coupling\n$M_{Bx,T}$ - Bending-1 Twisting coupling\n\nStretching Twist Modulus\nFollowing command calculate Bending Stretching Twist modulus matrix. 
Output matrix will be stored in csv file.\nElastic modulus matrix is printed as output and average values of contour length and cumulative twist angle\nis also printed.", "%%bash\n\ndnaMD globalElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 20 -estype ST -o elastic_modulus_ST.csv", "The above modulus matrix is in this form:\n$$\\text{modulus matrix} =\n\\begin{bmatrix}\nM_{S} & M_{S,T} \\\nM_{S,T} & M_{T}\n\\end{bmatrix}\n$$\nWhere:\n\n$M_{S}$ - Stretch Modulus\n$M_{T}$ - Twist rigidity\n$M_{S,T}$ - Stretching Twsiting coupling\n\nConvergence in Modulus\nSame command can be used to calculate elasticity as a function of time with option \n-ot/--output-time and save it in csv format file. This result can beused to check\ntheir convergence. \n\nIf this option is used, -fgap/--frame-gap is an essential option.\nThis options also gives final value and error as the output on display.\nThe output file is in csv format and can be opened as spreadsheet.\n\nNOTE: Elastic properties cannot be calculated using a single frame because \nfluctuations are required. 
Therefore, here time means trajectory between zero \ntime to given time.", "%%bash\n\ndnaMD globalElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 20 -estype BST -paxis X -fgap 100 -em block -ot modulus_time_BST.csv\n\n# Print first and last 3 line of output file\necho \"=====================================\"\necho \"Elastic modulus as a function of time\"\necho \"=====================================\"\nhead -4 modulus_time_BST.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 modulus_time_BST.csv", "Some of the plots from above data can be found here\n\nSame as above but only for stretching and twisting motions.", "%%bash\n\ndnaMD globalElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 20 -estype ST -paxis X -fgap 100 -gt \"gmx analyze\" -em \"block\" -ot modulus_time_ST.csv\n\n# Print first and last 3 line of output file\necho \"=====================================\"\necho \"Elastic modulus as a function of time\"\necho \"=====================================\"\nhead -4 modulus_time_ST.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 modulus_time_ST.csv", "Global deformation free energy\nTo caluclate global deformation enrgy, dnaMD globalEnergy can be used. 
At first, elastic matrix from reference\nDNA (most often free or unbound DNA) is calculated and subsequently this matrix is used to calculate deformation free\nenergy of probe DNA (most often bound DNA).\nThe deformation free energy is calculated using elastic matrix as follows\n$$G = \\frac{1}{2L_0}\\mathbf{xKx^T}$$\n$$\\mathbf{x} = \\begin{bmatrix}\n (\\theta^{x} - \\theta^{x}_0) & (\\theta^{y} - \\theta^{y}_0) & (L - L_0) & (\\phi - \\phi_0)\n \\end{bmatrix}$$\nWhere, $\\mathbf{K}$, $\\theta^{x}_0$, $\\theta^{y}_0$, $L_0$ and $\\phi_0$ is calculated from reference DNA while $\\theta^{x}$, $\\theta^{y}$, $L$ and $\\phi$ is calculated for probe DNA from each frame.\nThis command gives output energy as a function of time in csv file and also average energies with error.", "%%bash\n\ndnaMD globalEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 4 -be 20 -estype BST -et \"all\" -paxis X -gt \"gmx analyze\" -em \"block\" -o energy_all_BST.csv\n\n# Print first and last 3 line of output file\necho \"===========================================================\"\necho \"Deformation free energy of bound DNA as a function of time\"\necho \"===========================================================\"\nhead -4 energy_all_BST.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 energy_all_BST.csv", "Some of the plots from above data can be found here\n\nDeformation free energy can be calculated as the following terms:\n\nfull : Use entire elastic matrix -- all motions with their coupling\ndiag : Use diagonal of elastic matrix -- all motions but no coupling\nb1 : Only bending-1 motion\nb2 : Only bending-2 motion\nstretch : Only stretching motion\ntwist : Only Twisting motions\nst_coupling : Only stretch-twist coupling motion\nbs_coupling : Only Bending and stretching coupling\nbt_coupling : Only Bending and Twisting coupling\nbb_coupling : Only bending-1 and bending-2 coupling\nbend : Both bending motions with their coupling\nst : Stretching and twisting motions with their coupling\nbs : 
Bending (b1, b2) and stretching motions with their coupling\nbt : Bending (b1, b2) and twisting motions with their coupling\n\nWhen all is used, all above terms were calculated.", "%%bash\n\ndnaMD globalEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 4 -be 24 -estype ST -et \"all\" -gt \"gmx analyze\" -em \"block\" -o energy_all_ST.csv\n\n# Print first and last 3 line of output file\necho \"===========================================================\"\necho \"Deformation free energy of bound DNA as a function of time\"\necho \"===========================================================\"\nhead -4 energy_all_ST.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 energy_all_ST.csv", "Same as above but only for stretching and twisting motions.\nLocal elastic properties\nLocal elastic properties can be caluclated using either local base-step parameters or local helical base-step parameters.\nIn case of base-step parameters: Shift ($Dx$), Slide ($Dy$), Rise ($Dz$), Tilt ($\\tau$), Roll ($\\rho$) and Twist ($\\omega$), following elastic matrix is calculated.\n$$\n\\mathbf{K}{base-step} = \\begin{bmatrix}\nK{Dx} & K_{Dx,Dy} & K_{Dx,Dz} & K_{Dx,\\tau} & K_{Dx,\\rho} & K_{Dx,\\omega} \\\nK_{Dx,Dy} & K_{Dy} & K_{Dy,Dz} & K_{Dy,\\tau} & K_{Dy,\\rho} & K_{Dy,\\omega} \\\nK_{Dx,Dz} & K_{Dy,Dz} & K_{Dz} & K_{Dz,\\tau} & K_{Dz,\\rho} & K_{Dz,\\omega} \\\nK_{Dx,\\tau} & K_{Dy,\\tau} & K_{Dz,\\tau} & K_{\\tau} & K_{\\tau, \\rho} & K_{\\tau,\\omega} \\\nK_{Dx,\\rho} & K_{Dy,\\rho} & K_{Dz,\\rho} & K_{\\tau, \\rho} & K_{\\rho} & K_{\\rho,\\omega} \\\nK_{Dx,\\omega} & K_{Dy,\\omega} & K_{Dz,\\omega} & K_{\\tau, \\omega} & K_{\\rho, \\omega} & K_{\\omega} \\\n\\end{bmatrix}\n$$\nIn case of helical-base-step parameters: x-displacement ($dx$), y-displacement ($dy$), h-rise ($h$), inclination ($\\eta$), tip ($\\theta$) and twist ($\\Omega$), following elastic matrix is calculated.\n$$\n\\mathbf{K}{helical-base-step} = \\begin{bmatrix}\nK{dx} & K_{dx,dy} & K_{dx,h} & K_{dx,\\eta} & K_{dx,\\theta} & 
K_{dx,\\Omega} \\\nK_{dx,dy} & K_{dy} & K_{dy,h} & K_{dy,\\eta} & K_{dy,\\theta} & K_{dy,\\Omega} \\\nK_{dx,h} & K_{dy,h} & K_{h} & K_{h,\\eta} & K_{h,\\theta} & K_{h,\\Omega} \\\nK_{dx,\\eta} & K_{dy,\\eta} & K_{h,\\eta} & K_{\\eta} & K_{\\eta, \\theta} & K_{\\eta,\\Omega} \\\nK_{dx,\\theta} & K_{dy,\\theta} & K_{h,\\theta} & K_{\\eta, \\theta} & K_{\\theta} & K_{\\theta,\\Omega} \\\nK_{dx,\\Omega} & K_{dy,\\Omega} & K_{h,\\Omega} & K_{\\eta, \\Omega} & K_{\\theta, \\Omega} & K_{\\Omega} \\\n\\end{bmatrix}\n$$", "%%bash\n\ndnaMD localElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 7 -o local_elasticity_4-7bps.csv", "Local elastic properties as a function of time\nSame command can be used to calculate elasticity as a function of time with option \n-ot/--output-time and save it in csv format file. This result can be used to check\ntheir convergence. \n\nIf this option is used, -fgap/--frame-gap is an essential option.\nThe output file is in csv format and can be opened as spreadsheet.\n\nNOTE: Elastic properties cannot be calculated using a single frame because \nfluctuations are required. Therefore, here time means trajectory between zero \ntime to given time.", "%%bash\n\ndnaMD localElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 7 -fgap 200 -ot local_elasticity_time_4-7bps.csv\n\n# Print first 3 and last 3 line of first seven columns from output file\necho \"=====================================\"\necho \"Elastic matrix as a function of time\"\necho \"=====================================\"\nhead -4 local_elasticity_time_4-8bps.csv | awk '{print $1, $2, $3, $4, $5, $6 ,$7, \"... ... ...\"}'\nprintf \".\\n.\\n.\\n\"\ntail -3 local_elasticity_time_4-8bps.csv | awk '{print $1, $2, $3, $4, $5, $6 ,$7, \"... ... ...\"}'", "Local elastic properties of the consecutive overlapped DNA segments\nAbove method gives local elasticities of a small local segment of the DNA. However, we mostly interested\nin large segment of the DNA. 
This large segment can be further divided into smaller local segments. \nFor these smaller segments local elasticities can be calculated. Here these segments overlapped with each other.\nSame command can be used to calculate elasticity as a function of time with option \n-os/--output-segments and save it in csv format file.\n\nIf this option is used, -fgap/--frame-gap is an essential option.\nThe output file is in csv format and can be opened as spreadsheet.", "%%bash\n\ndnaMD localElasticity -i free_dna.h5 -tbp 27 -bs 4 -be 20 -span 4 -fgap 200 -gt \"gmx analyze\" -em \"acf\" -os local_elasticity_segments.csv\n\n# Print first 3 and last 3 line of first seven columns from output file\necho \"========================================\"\necho \"Elasticity as a function of DNA segments\"\necho \"========================================\"\nhead -4 local_elasticity_segments.csv | awk '{print $1, $2, $3, $4, $5, $6 ,$7, \"... ... ...\"}'\nprintf \".\\n.\\n.\\n\"\ntail -3 local_elasticity_segments.csv | awk '{print $1, $2, $3, $4, $5, $6 ,$7, \"... ... 
...\"}'", "Local deformation energy of a local small segment\nAt first, elastic matrix from reference DNA (most often free or unbound DNA) is calculated \nand subsequently this matrix is used to calculate deformation free energy of probe DNA \n(most often bound DNA).\n$$G = \\frac{1}{2}\\mathbf{xKx^T}$$\nWhen helical='False'\n$$\\mathbf{K} = \\mathbf{K}_{base-step}$$\n$$\\mathbf{x} = \\begin{bmatrix}\n (Dx_{i}-Dx_0) & (Dy_i - Dy_0) & (Dz_i - Dz_0) & (\\tau_i - \\tau_0) &\n (\\rho_i - \\rho_0) & (\\omega_i - \\omega_0)\n \\end{bmatrix}$$\nWhen helical='True'\n$$\\mathbf{K} = \\mathbf{K}_{helical-base-step}$$\n$$\\mathbf{x} = \\begin{bmatrix}\n (dx_{i}-dx_0) & (dy_i - dy_0) & (h_i - h_0) & (\\eta_i - \\eta_0) &\n (\\theta_i - \\theta_0) & (\\Omega_i - \\Omega_0)\n \\end{bmatrix}$$", "%%bash\n\ndnaMD localEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 10 -be 13 -et all -gt \"gmx analyze\" -em \"block\" -o local_energy_time_4-7bps.csv\n\n# Print first 3 and last 3 line from output file\necho \"===============================================\"\necho \"Local deformation energy as a function of time\"\necho \"===============================================\"\nhead -4 local_energy_time_4-7bps.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 local_energy_time_4-7bps.csv", "Some of the plots from above data can be found \nhere\n\nSame as the above but energy is calculated using helical base-step parameters", "%%bash\n\ndnaMD localEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 10 -be 13 -et all -helical -gt \"gmx analyze\" -em \"block\" -o local_helical_energy_time_4-7bps.csv\n\n# Print first 3 and last 3 line from output file\necho \"======================================================\"\necho \"Local helical deformation energy as a function of time\"\necho \"======================================================\"\nhead -4 local_energy_time_4-7bps.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 local_energy_time_4-7bps.csv", "Deformation energy of the consecutive overlapped DNA 
segments\nAbove method gives energy of a small local segment of the DNA. \nHowever, we mostly interested in large segment of the DNA. This large segment \ncan be further divided into smaller local segments. For these smaller segments \nlocal deformation energy can be calculated. Here these segments overlapped with each other.", "%%bash\n\ndnaMD localEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 4 -be 20 -span 4 -et all -gt \"gmx analyze\" -em \"block\" -os local_energy_segments.csv\n\n# Print first 3 and last 3 line from output file\necho \"==================================================\"\necho \"Local deformation energy as a function of segments\"\necho \"==================================================\"\nhead -4 local_energy_segments.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 local_energy_segments.csv", "Some of the plots from above data can be found \nhere\n\nSame as the above but energy is calculated using helical base-step parameters", "%%bash\n\ndnaMD localEnergy -ir free_dna.h5 -ip bound_dna.h5 -tbp 27 -bs 4 -be 20 -span 4 -et all -helical -gt \"gmx analyze\" -em \"block\" -os local_helical_energy_segments.csv\n\n# Print first 3 and last 3 line from output file\necho \"==========================================================\"\necho \"Local helical deformation energy as a function of segments\"\necho \"==========================================================\"\nhead -4 local_helical_energy_segments.csv\nprintf \".\\n.\\n.\\n\"\ntail -3 local_helical_energy_segments.csv" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/miroc/cmip6/models/miroc-es2h/ocnbgchem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocnbgchem\nMIP Era: CMIP6\nInstitute: MIROC\nSource ID: MIROC-ES2H\nTopic: Ocnbgchem\nSub-Topics: Tracers. \nProperties: 65 (37 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'ocnbgchem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\n3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\n4. Key Properties --&gt; Transport Scheme\n5. Key Properties --&gt; Boundary Forcing\n6. Key Properties --&gt; Gas Exchange\n7. Key Properties --&gt; Carbon Chemistry\n8. Tracers\n9. Tracers --&gt; Ecosystem\n10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\n11. Tracers --&gt; Ecosystem --&gt; Zooplankton\n12. Tracers --&gt; Disolved Organic Matter\n13. Tracers --&gt; Particules\n14. Tracers --&gt; Dic Alkalinity \n1. Key Properties\nOcean Biogeochemistry key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean biogeochemistry model code (PISCES 2.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.model_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Geochemical\" \n# \"NPZD\" \n# \"PFT\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Elemental Stoichiometry\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe elemental stoichiometry (fixed, variable, mix of the two)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Fixed\" \n# \"Variable\" \n# \"Mix of both\" \n# TODO - please enter value(s)\n", "1.5. Elemental Stoichiometry Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe which elements have fixed/variable stoichiometry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. 
Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all prognostic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.7. Diagnostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of all diagnotic tracer variables in the ocean biogeochemistry component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Damping\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any tracer damping used (such as artificial correction or relaxation to climatology,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.damping') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Time Stepping Framework --&gt; Passive Tracers Transport\nTime stepping method for passive tracers transport in ocean biogeochemistry\n2.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for passive tracers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "2.2. 
Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for passive tracers (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Time Stepping Framework --&gt; Biology Sources Sinks\nTime stepping framework for biology sources and sinks in ocean biogeochemistry\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime stepping framework for biology sources and sinks", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"use ocean model transport time step\" \n# \"use specific time step\" \n# TODO - please enter value(s)\n", "3.2. Timestep If Not From Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTime step for biology sources and sinks (if different from ocean)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Transport Scheme\nTransport scheme in ocean biogeochemistry\n4.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transport scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline\" \n# \"Online\" \n# TODO - please enter value(s)\n", "4.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTransport scheme used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Use that of ocean model\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4.3. Use Different Scheme\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe transport scheme if different from that of ocean model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Boundary Forcing\nProperties of biogeochemistry boundary forcing\n5.1. Atmospheric Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how atmospheric deposition is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Atmospheric Chemistry model\" \n# TODO - please enter value(s)\n", "5.2. River Input\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river input is modeled", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"from file (climatology)\" \n# \"from file (interannual variations)\" \n# \"from Land Surface model\" \n# TODO - please enter value(s)\n", "5.3. Sediments From Boundary Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from boundary condition", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. Sediments From Explicit Model\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList which sediments are specified from explicit sediment model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Gas Exchange\n*Properties of gas exchange in ocean biogeochemistry *\n6.1. CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.2. CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe CO2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.3. O2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs O2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.4. O2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe O2 gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "6.5. DMS Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs DMS gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.6. DMS Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify DMS gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.7. 
N2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.8. N2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.9. N2O Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs N2O gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.10. N2O Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify N2O gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.11. CFC11 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC11 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.12. 
CFC11 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC11 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.13. CFC12 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs CFC12 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.14. CFC12 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify CFC12 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.15. SF6 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs SF6 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.16. SF6 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify SF6 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.17. 
13CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 13CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.18. 13CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 13CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.19. 14CO2 Exchange Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs 14CO2 gas exchange modeled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.20. 14CO2 Exchange Type\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify 14CO2 gas exchange scheme type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.21. Other Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSpecify any other gas exchange", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. 
Key Properties --&gt; Carbon Chemistry\nProperties of carbon chemistry biogeochemistry\n7.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how carbon chemistry is modeled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OMIP protocol\" \n# \"Other protocol\" \n# TODO - please enter value(s)\n", "7.2. PH Scale\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, describe pH scale.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea water\" \n# \"Free\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "7.3. Constants If Not OMIP\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf NOT OMIP protocol, list carbon chemistry constants.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Tracers\nOcean biogeochemistry tracers\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of tracers in ocean biogeochemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Sulfur Cycle Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sulfur cycle modeled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Nutrients Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList nutrient species present in ocean biogeochemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrogen (N)\" \n# \"Phosphorous (P)\" \n# \"Silicium (S)\" \n# \"Iron (Fe)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.4. Nitrous Species If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous species.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Nitrates (NO3)\" \n# \"Amonium (NH4)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.5. Nitrous Processes If N\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf nitrogen present, list nitrous processes.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Dentrification\" \n# \"N fixation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9. Tracers --&gt; Ecosystem\nEcosystem properties in ocean biogeochemistry\n9.1. Upper Trophic Levels Definition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefinition of upper trophic level (e.g. based on size) ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Upper Trophic Levels Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefine how upper trophic levels are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Tracers --&gt; Ecosystem --&gt; Phytoplankton\nPhytoplankton properties in ocean biogeochemistry\n10.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of phytoplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"PFT including size based (specify both below)\" \n# \"Size based only (specify below)\" \n# \"PFT only (specify below)\" \n# TODO - please enter value(s)\n", "10.2. Pft\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton functional types (PFT) (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diatoms\" \n# \"Nfixers\" \n# \"Calcifiers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPhytoplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microphytoplankton\" \n# \"Nanophytoplankton\" \n# \"Picophytoplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11. Tracers --&gt; Ecosystem --&gt; Zooplankton\nZooplankton properties in ocean biogeochemistry\n11.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of zooplankton", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Generic\" \n# \"Size based (specify below)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Size Classes\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nZooplankton size classes (if applicable)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Microzooplankton\" \n# \"Mesozooplankton\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Tracers --&gt; Disolved Organic Matter\nDisolved organic matter properties in ocean biogeochemistry\n12.1. Bacteria Present\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there bacteria representation ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "12.2. 
Lability\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe treatment of lability in dissolved organic matter", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Labile\" \n# \"Semi-labile\" \n# \"Refractory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Tracers --&gt; Particules\nParticulate carbon properties in ocean biogeochemistry\n13.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is particulate carbon represented in ocean biogeochemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diagnostic\" \n# \"Diagnostic (Martin profile)\" \n# \"Diagnostic (Balast)\" \n# \"Prognostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Types If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, type(s) of particulate matter taken into account", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"POC\" \n# \"PIC (calcite)\" \n# \"PIC (aragonite\" \n# \"BSi\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. Size If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No size spectrum used\" \n# \"Full size spectrum\" \n# \"Discrete size classes (specify which below)\" \n# TODO - please enter value(s)\n", "13.4. Size If Discrete\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic and discrete size, describe which size classes are used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.5. Sinking Speed If Prognostic\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, method for calculation of sinking speed of particules", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Function of particule size\" \n# \"Function of particule type (balast)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Tracers --&gt; Dic Alkalinity\nDIC and alkalinity properties in ocean biogeochemistry\n14.1. Carbon Isotopes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich carbon isotopes are modelled (C13, C14)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"C13\" \n# \"C14)\" \n# TODO - please enter value(s)\n", "14.2. Abiotic Carbon\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs abiotic carbon modelled ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.3. Alkalinity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow is alkalinity modelled ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Prognostic\" \n# \"Diagnostic)\" \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ggf84/python-tutorial
01-GettingStarted.ipynb
mit
[ "Getting Started\nHello World!", "print('Hello World!')\n\nimport __hello__", "The Zen of Python", "import this", "Python code looks almost like English", "love = this\n\nthis is love\n\nlove is True\n\nlove is False\n\nlove is not True or False\n\nlove is love", "Commenting your code", "4 + 5 # A single-line comment.\n\n4 + 5 # A\n # multi\n # line\n # comment.\n\n# Commenting multiple lines of code: the proper way.\n# 4 + 5\n# 9 + 6\n# 1 + 3\n# 2 + 7\n\n# Commenting multiple lines of code: the lazy way.\n\"\"\"\n4 + 5\n9 + 6\n1 + 3\n2 + 7\n\"\"\"\n\n# As you can see, triple quotes are actually strings.\n\n\"\"\"\nTry not to use triple quotes for commenting your code.\nTriple quotes are best used for code documentation. We'll\nsee how to do that when we talk about functions and modules.\n\"\"\"\n\n# Also, learn how to use a good text editor like emacs or vim.", "Whetting your appetite\nVariable Assignments\nIn languages like C/C++/Fortran a variable name has a data type, e.g., float, int, etc.\nIn Python, variable names have no type. The data type belongs to the value. A variable is like an alias to a value. It just points to a value. 
So, you can redefine it whenever you want.", "x = 42\ny = 3.14\nz = 'Hi!'\n\nx = 'Hello!'\nz = 100\n\nprint(x, y, z) # Let's print their values", "Naming your variables can be fun - Python 3 only :-)", "import math\n\nπ = math.pi\nr = 5\narea = π * r**2\n\nprint(area)\n\nm = 2\nM = 8\n\nµ = (M * m) / (M + m)\n\nprint(µ)\n\nh = 6.62607004e-34 # Planck constant: m^2 kg s^-1\n\nħ = h / (2 * π)\n\nprint(ħ)", "Multiple Assignments", "# Variables can be assigned with the same value...\n\nx = y = z = 5\n\ny += 10\n\nprint(x)\nprint(y)\nprint(z)\n\n# ...or with multiple values of any type\n\nx, y, z = 42, 3.14, 1+5j\n\nb, s = True, \"Hello!\"\n\nprint(b, '\\t\\t', 'bool')\nprint(x, '\\t\\t', 'int')\nprint(y, '\\t\\t', 'float')\nprint(z, '\\t\\t', 'complex')\nprint(s, '\\t\\t', 'str')", "Swapping values of variables", "a, b = 42, 3.14\nprint(a, b)\n\nb, a = a, b # swap values\nprint(a, b)\n\n# It works with 3 variables too! Or 4, or 5, or 6...\n\na, b, c = 1, 2, 3\nprint(a, b, c)\n\nb, c, a = a, b, c # swap values\nprint(a, b, c)", "What will be the output of the following code?", "x = 5\ny = x + 4\nx = 2\n\nprint(x, y)\n\nx, y = y + 3, x + 4\n\nprint(x, y)\n\na, b, c, d = 2, 3, 0, 1\n\nc, d, a, b = d + 7, b, c + 5, a\n\nprint(a, b, c, d)", "Blowing your mind!", "42 # Just a number\n\ntype(42) # What's the type of 42?\n\nprint(type(42)) # Let's print it!", "Wait a minute!\nIs int a class?\nIs 42 an object?\nWhaaat?! :o\nYep! In Python everything is an object!", "print(dir(42)) # WTF?!", "<img align=\"center\" src=\"https://img.memesuper.com/a203efb762aefa0a25da2c367fc9f599_taxi-driver-going-to-uber-wtf-meme-png_420-215.jpeg\" width=\"300\" alt=\"wtf meme\" />\nKeep calm! Soon you'll be flying too!", "# import antigravity", "<img align=\"center\" src=\"https://imgs.xkcd.com/comics/python.png\" alt=\"\" style=\"width: 400px;\"/>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sn0wle0pard/tracer
example/sort/.ipynb_checkpoints/Insertion-checkpoint.ipynb
mit
[ "import ipytracer\nfrom IPython.display import display", "Insertion Sort (Insert Sort)\nInsertion sort is a simple sorting algorithm that builds the final sorted array (or list) one item at a time. It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.\nComplexity\nTime\n\nWorst-case: $O(n^2)$\nBest-case: $O(n)$\nAverage: $O(n^2)$\n\nReference\n\nWikipedia\n\nCode1 - List1DTracer", "def insertion_sort(unsorted_list):\n x = ipytracer.List1DTracer(unsorted_list)\n display(x)\n for i in range(1, len(x)):\n j = i - 1\n key = x[i]\n while j >= 0 and x[j] > key:\n x[j+1] = x[j]\n j = j - 1\n x[j+1] = key\n return x.data", "work", "insertion_sort([6,4,7,9,3,5,1,8,2])", "Code2 - ChartTracer", "def insertion_sort(unsorted_list):\n x = ipytracer.ChartTracer(unsorted_list)\n display(x)\n for i in range(1, len(x)):\n j = i - 1\n key = x[i]\n while j >= 0 and x[j] > key:\n x[j+1] = x[j]\n j = j - 1\n x[j+1] = key\n return x.data", "work", "insertion_sort([6,4,7,9,3,5,1,8,2])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
merryjman/astronomy
CMS_massplot.ipynb
gpl-3.0
[ "Creating a mass plot from CERN OpenData\nIn this example, we'll import some detector data and make a plot of the masses of the particles detected.\nTo begin, click the \"play\" icon or press shift+ENTER to execute each cell.", "# First, we'll \"import\" the software packages needed.\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\ninline_rc = dict(mpl.rcParams)\n\n# Starting a line with a hashtag tells the program not to read the line.\n# That way we can write \"comments\" to humans trying to figure out what the code does.\n# Blank lines don't do anything either, but they can make the code easier to read.", "Importing a data set\nNow let's choose some data to plot. In this example we'll pull data from CERN's CMS detector and make a histogram of invariant mass. You can find more at CERN OpenData. This next cell will take a little while to run since it's grabbing a pretty big data set. This one contains 100,000 collision events. The cell label will look like \"In [*]\" while it's still thinking and \"In [2]\" when it's finished.", "# Here's dimuon data:\ndata = pd.read_csv('http://opendata.cern.ch/record/303/files/dimuon.csv')\n\n# Analyze dielectron data instead by un-commenting this URL instead:\n# http://opendata.cern.ch/record/304/files/dielectron.csv", "We can view the first few rows of the file we just imported.", "# The .head(n) command displays the first n rows of the file.\ndata.head(3)", "Part 1: Make a histogram\nCMS software calculated the invariant mass of a possible parent particle, based on the two particles' energies and momenta.\nIt's in the last column, labeled \"M\". 
The code below makes a histogram of those mass values.", "# adding a ; at the end of the next line will \"suppress\" the text output of the histogram's frequency table\nplt.hist(data.M, bins=120, range=[0,120], log=True)\nplt.title(\"CMS Dimuon Mass Plot\")\nplt.xlabel(\"mass (GeV)\")\nplt.ylabel(\"number of events\")", "Try editing the number of bins or bin range in the previous code cell. To re-execute the code, click the play icon in the toolbar or press SHIFT + ENTER.\nPart 2: Hunt for a particle\nTry to create a new histogram to show the production of one of the following particles: J/$\\Psi$, Upsilon ($\\Upsilon$), or Z.\nYou can edit the cell above or paste the code into the empty cell below.\nPart 3\nTry selecting a subset of the events to analyze. This is called \"applying cuts\" to your data. Below are a few examples you may find useful.", "# create a new data set of only the events containing oppositely charged particles\ndata2 = data[data.Q1 != data.Q2] # change != to == for same charge\n\n# create a new data set of only events in a certain mass range\ndata3 = data[(data.M > 50) & (data.M < 80)] # this chooses 50 to 80 GeV\n\n# make a scatterplot of two columns\n# plt.scatter(x_column, y_column, s=point_size, other parameters)\nplt.scatter(data.eta1, data.phi1, s=.001)\n\n# make your plots look like they're from xkcd.com\nplt.xkcd()\n\n# plt.hist can stack two histograms\nd1 = data[data.Q1 == data.Q2]\nd2 = data[data.Q1 != data.Q2]\n\nfig = plt.figure(figsize=(10, 5))\nplt.hist([d1.M, d2.M], range=[2,5], stacked=True, label=[\"events with same Q\",\"events with opp Q\"], bins=20, log=True)\nplt.title(\"Cutting on net charge\")\nplt.xlabel(\"mass (GeV)\")\nplt.ylabel(\"log number of events\")\nplt.legend()\n\n# to make normal-looking plots again\nmpl.rcParams.update(inline_rc)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/2b9ae87368ee06cd9589fd87e1be1d30/plot_time_frequency_mixed_norm_inverse.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute MxNE with time-frequency sparse prior\nThe TF-MxNE solver is a distributed inverse method (like dSPM or sLORETA)\nthat promotes focal (sparse) sources (such as dipole fitting techniques)\n[1] [2]. The benefit of this approach is that:\n\nit is spatio-temporal without assuming stationarity (source properties\n can vary over time)\nactivations are localized in space, time and frequency in one step.\nwith a built-in filtering process based on a short time Fourier\n transform (STFT), data does not need to be low passed (just high pass\n to make the signals zero mean).\nthe solver solves a convex optimization problem, hence cannot be\n trapped in local minima.\n\nReferences\n.. [1] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hämäläinen, M. Kowalski\n \"Time-Frequency Mixed-Norm Estimates: Sparse M/EEG imaging with\n non-stationary source activations\",\n Neuroimage, Volume 70, pp. 410-422, 15 April 2013.\n DOI: 10.1016/j.neuroimage.2012.12.051\n.. [2] A. Gramfort, D. Strohmeier, J. Haueisen, M. Hämäläinen, M. Kowalski\n \"Functional Brain Imaging with M/EEG Using Structured Sparsity in\n Time-Frequency Dictionaries\",\n Proceedings Information Processing in Medical Imaging\n Lecture Notes in Computer Science, Volume 6801/2011, pp. 
600-611, 2011.\n DOI: 10.1007/978-3-642-22092-0_49", "# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Daniel Strohmeier <daniel.strohmeier@tu-ilmenau.de>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\nfrom mne.inverse_sparse import tf_mixed_norm, make_stc_from_dipoles\nfrom mne.viz import (plot_sparse_source_estimates,\n plot_dipole_locations, plot_dipole_amplitudes)\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nfwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'\nave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'\ncov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'\n\n# Read noise covariance matrix\ncov = mne.read_cov(cov_fname)\n\n# Handling average file\ncondition = 'Left visual'\nevoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))\nevoked = mne.pick_channels_evoked(evoked)\n# We make the window slightly larger than what you'll eventually be interested\n# in ([-0.05, 0.3]) to avoid edge effects.\nevoked.crop(tmin=-0.1, tmax=0.4)\n\n# Handling forward solution\nforward = mne.read_forward_solution(fwd_fname)", "Run solver", "# alpha parameter is between 0 and 100 (100 gives 0 active source)\nalpha = 40. # general regularization parameter\n# l1_ratio parameter between 0 and 1 promotes temporal smoothness\n# (0 means no temporal regularization)\nl1_ratio = 0.03 # temporal regularization parameter\n\nloose, depth = 0.2, 0.9 # loose orientation & depth weighting\n\n# Compute dSPM solution to be used as weights in MxNE\ninverse_operator = make_inverse_operator(evoked.info, forward, cov,\n loose=loose, depth=depth)\nstc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. 
/ 9.,\n method='dSPM')\n\n# Compute TF-MxNE inverse solution with dipole output\ndipoles, residual = tf_mixed_norm(\n evoked, forward, cov, alpha=alpha, l1_ratio=l1_ratio, loose=loose,\n depth=depth, maxit=200, tol=1e-6, weights=stc_dspm, weights_min=8.,\n debias=True, wsize=16, tstep=4, window=0.05, return_as_dipoles=True,\n return_residual=True)\n\n# Crop to remove edges\nfor dip in dipoles:\n dip.crop(tmin=-0.05, tmax=0.3)\nevoked.crop(tmin=-0.05, tmax=0.3)\nresidual.crop(tmin=-0.05, tmax=0.3)", "Plot dipole activations", "plot_dipole_amplitudes(dipoles)", "Plot location of the strongest dipole with MRI slices", "idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])\nplot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',\n subjects_dir=subjects_dir, mode='orthoview',\n idx='amplitude')\n\n# # Plot dipole locations of all dipoles with MRI slices\n# for dip in dipoles:\n# plot_dipole_locations(dip, forward['mri_head_t'], 'sample',\n# subjects_dir=subjects_dir, mode='orthoview',\n# idx='amplitude')", "Show the evoked response and the residual for gradiometers", "ylim = dict(grad=[-120, 120])\nevoked.pick_types(meg='grad', exclude='bads')\nevoked.plot(titles=dict(grad='Evoked Response: Gradiometers'), ylim=ylim,\n proj=True, time_unit='s')\n\nresidual.pick_types(meg='grad', exclude='bads')\nresidual.plot(titles=dict(grad='Residuals: Gradiometers'), ylim=ylim,\n proj=True, time_unit='s')", "Generate stc from dipoles", "stc = make_stc_from_dipoles(dipoles, forward['src'])", "View in 2D and 3D (\"glass\" brain like 3D plot)", "plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),\n opacity=0.1, fig_name=\"TF-MxNE (cond %s)\"\n % condition, modes=['sphere'], scale_factors=[1.])\n\ntime_label = 'TF-MxNE time=%0.2f ms'\nclim = dict(kind='value', lims=[10e-9, 15e-9, 20e-9])\nbrain = stc.plot('sample', 'inflated', 'rh', views='medial',\n clim=clim, time_label=time_label, smoothing_steps=5,\n subjects_dir=subjects_dir, 
initial_time=150, time_unit='ms')\nbrain.add_label(\"V1\", color=\"yellow\", scalar_thresh=.5, borders=True)\nbrain.add_label(\"V2\", color=\"red\", scalar_thresh=.5, borders=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bokeh/bokeh
examples/howto/MultiPolygons.ipynb
bsd-3-clause
[ "MultiPolygons\nThe MultiPolygons glyph is modeled closely on the GeoJSON spec for Polygon and MultiPolygon. The data that are used to construct MultiPolygons are nested 3 deep. In the top level of nesting, each item in the list represents a MultiPolygon - an entity like a state or a contour level. Each MultiPolygon is composed of Polygons representing different parts of the MultiPolygon. Each Polygon contains a list of coordinates representing the exterior bounds of the Polygon followed by lists of coordinates of any holes contained within the Polygon. \nPolygon with no holes\nWe'll start with one square with bottom left corner at (1, 3) and top right corner at (2, 4). The simple case of one Polygon with no holes is represented in geojson as follows:\ngeojson\n {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [1, 3],\n [2, 3],\n [2, 4],\n [1, 4],\n [1, 3]\n ]\n ]\n }\nIn geojson this list of coordinates is nested 1 deep to allow for passing lists of holes within the polygon. In bokeh (using MultiPolygon) the coordinates for this same polygon will be nested 3 deep to allow space for other entities and for other parts of the MultiPolygon.", "from bokeh.plotting import figure, output_notebook, show\n\noutput_notebook()\n\np = figure(width=300, height=300, tools='hover,tap,wheel_zoom,pan,reset,help')\np.multi_polygons(xs=[[[[1, 2, 2, 1, 1]]]],\n ys=[[[[3, 3, 4, 4, 3]]]])\nshow(p)", "Notice that in geojson the Polygon always starts and ends at the same point and that the direction in which the Polygon is drawn (winding) must be counter-clockwise. In bokeh we don't have these two restrictions: the direction doesn't matter, and the polygon will be closed even if the starting and ending points are not the same.", "p = figure(width=300, height=300, tools='hover,tap,wheel_zoom,pan,reset,help')\np.multi_polygons(xs=[[[[1, 1, 2, 2]]]],\n ys=[[[[3, 4, 4, 3]]]])\nshow(p)", "Polygon with holes\nNow we'll add some holes to the square polygon defined above. 
We'll add a triangle in the lower left corner and another in the upper right corner. In geojson this can be represented as follows:\ngeojson\n {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [1, 3],\n [2, 3],\n [2, 4],\n [1, 4],\n [1, 3]\n ],\n [\n [1.2, 3.2],\n [1.6, 3.6],\n [1.6, 3.2],\n [1.2, 3.2]\n ],\n [\n [1.8, 3.8],\n [1.8, 3.4],\n [1.6, 3.8],\n [1.8, 3.8]\n ]\n ]\n }\nOnce again notice that the direction in which the polygons are drawn doesn't matter and the last point in a polygon does not need to match the first. Hover over the holes to demonstrate that they aren't considered part of the Polygon.", "p = figure(width=300, height=300, tools='hover,tap,wheel_zoom,pan,reset,help')\np.multi_polygons(xs=[[[ [1, 2, 2, 1], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ]]],\n ys=[[[ [3, 3, 4, 4], [3.2, 3.6, 3.2], [3.4, 3.8, 3.8] ]]])\nshow(p)", "MultiPolygon\nNow we'll examine a MultiPolygon. A MultiPolygon is composed of different parts each of which is a Polygon and each of which can have or not have holes. To create a MultiPolygon from the Polygon that we are using above, we'll add a triangle below the square with holes. Here is how this shape would be represented in geojson:\ngeojson\n {\n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [1, 3],\n [2, 3],\n [2, 4],\n [1, 4],\n [1, 3]\n ],\n [\n [1.2, 3.2],\n [1.6, 3.6],\n [1.6, 3.2],\n [1.2, 3.2]\n ],\n [\n [1.8, 3.8],\n [1.8, 3.4],\n [1.6, 3.8],\n [1.8, 3.8]\n ]\n ],\n [\n [\n [3, 1],\n [4, 1],\n [3, 3],\n [3, 1]\n ]\n ]\n ]\n }", "p = figure(width=300, height=300, tools='hover,tap,wheel_zoom,pan,reset,help')\np.multi_polygons(xs=[[[ [1, 1, 2, 2], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ], [ [3, 4, 3] ]]],\n ys=[[[ [4, 3, 3, 4], [3.2, 3.2, 3.6], [3.4, 3.8, 3.8] ], [ [1, 1, 3] ]]])\nshow(p)", "It is important to understand that the Polygons that make up this MultiPolygon are part of the same entity. 
It can be helpful to think of them as representing physically separate areas that are part of the same entity, such as the islands of Hawaii.\nMultiPolygons\nFinally, we'll take a look at how we can represent a list of MultiPolygons. Each MultiPolygon represents a different entity. In geojson this would be a FeatureCollection:\ngeojson\n{\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {\n \"fill\": \"blue\"\n },\n \"geometry\": {\n \"type\": \"MultiPolygon\",\n \"coordinates\": [\n [\n [\n [1, 3],\n [2, 3],\n [2, 4],\n [1, 4],\n [1, 3]\n ],\n [\n [1.2, 3.2],\n [1.6, 3.6],\n [1.6, 3.2],\n [1.2, 3.2]\n ],\n [\n [1.8, 3.8],\n [1.8, 3.4],\n [1.6, 3.8],\n [1.8, 3.8]\n ]\n ],\n [\n [\n [3, 1],\n [4, 1],\n [3, 3],\n [3, 1]\n ]\n ]\n ]\n }\n },\n {\n \"type\": \"Feature\",\n \"properties\": {\n \"fill\": \"red\"\n },\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [1, 1],\n [2, 1],\n [2, 2],\n [1, 2],\n [1, 1]\n ],\n [\n [1.3, 1.3],\n [1.3, 1.7],\n [1.7, 1.7],\n [1.7, 1.3],\n [1.3, 1.3]\n ]\n ]\n }\n }\n]}", "p = figure(width=300, height=300, tools='hover,tap,wheel_zoom,pan,reset,help')\np.multi_polygons(\n xs=[\n [[ [1, 1, 2, 2], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ], [ [3, 3, 4] ]],\n [[ [1, 2, 2, 1], [1.3, 1.3, 1.7, 1.7] ]]],\n ys=[\n [[ [4, 3, 3, 4], [3.2, 3.2, 3.6], [3.4, 3.8, 3.8] ], [ [1, 3, 1] ]],\n [[ [1, 1, 2, 2], [1.3, 1.7, 1.7, 1.3] ]]],\n color=['blue', 'red'])\nshow(p)", "Using MultiPolygons glyph directly", "from bokeh.models import ColumnDataSource, Plot, LinearAxis, Grid\nfrom bokeh.models.glyphs import MultiPolygons\nfrom bokeh.models.tools import TapTool, WheelZoomTool, ResetTool, HoverTool\n\nsource = ColumnDataSource(dict(\n xs=[\n [\n [ \n [1, 1, 2, 2], \n [1.2, 1.6, 1.6], \n [1.8, 1.8, 1.6] \n ], \n [ \n [3, 3, 4] \n ]\n ],\n [\n [ \n [1, 2, 2, 1], \n [1.3, 1.3, 1.7, 1.7] \n ]\n ]\n ],\n ys=[\n [\n [ \n [4, 3, 3, 4], \n [3.2, 3.2, 3.6], \n [3.4, 3.8, 3.8] \n ], \n [ \n [1, 3, 1] \n ]\n ],\n 
[\n [ \n [1, 1, 2, 2], \n [1.3, 1.7, 1.7, 1.3] \n ]\n ]\n ],\n color=[\"blue\", \"red\"],\n label=[\"A\", \"B\"]\n))", "By looking at the dataframe for this ColumnDataSource object, we can see that each MultiPolygon is represented by one row.", "source.to_df()\n\nhover = HoverTool(tooltips=[(\"Label\", \"@label\")])\nplot = Plot(width=300, height=300, tools=[hover, TapTool(), WheelZoomTool()])\n\nglyph = MultiPolygons(xs=\"xs\", ys=\"ys\", fill_color='color')\nplot.add_glyph(source, glyph)\n\nxaxis = LinearAxis()\nplot.add_layout(xaxis, 'below')\n\nyaxis = LinearAxis()\nplot.add_layout(yaxis, 'left')\n\nplot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))\nplot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))\n\nshow(plot)", "Using numpy arrays with MultiPolygons\nNumpy arrays can be used instead of python native lists. In the following example, we'll generate concentric circles and use them to make rings. Similar methods could be used to generate contours.", "import numpy as np\n\nfrom bokeh.palettes import Viridis10 as palette\n\ndef circle(radius):\n angles = np.linspace(0, 2*np.pi, 100)\n return {'x': radius*np.sin(angles), 'y': radius*np.cos(angles), 'radius': radius}\n\nradii = np.geomspace(1, 100, 10)\nsource = dict(xs=[], \n ys=[], \n color=[palette[i] for i in range(10)], \n outer_radius=radii)\n\nfor i, r in enumerate(radii):\n exterior = circle(r)\n if i == 0:\n polygon_xs = [exterior['x']]\n polygon_ys = [exterior['y']]\n else:\n hole = circle(radii[i-1])\n\n polygon_xs = [exterior['x'], hole['x']]\n polygon_ys = [exterior['y'], hole['y']]\n\n source['xs'].append([polygon_xs])\n source['ys'].append([polygon_ys])\n\np = figure(width=300, height=300, \n tools='hover,tap,wheel_zoom,pan,reset,help',\n tooltips=[(\"Outer Radius\", \"@outer_radius\")])\n\np.multi_polygons('xs', 'ys', fill_color='color', source=source)\nshow(p)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
metpy/MetPy
v0.11/_downloads/62a1acd718d4c5b9717787544d4cf09f/Gradient.ipynb
bsd-3-clause
[ "%matplotlib inline", "Gradient\nUse metpy.calc.gradient.\nThis example demonstrates the various ways that MetPy's gradient function\ncan be utilized.", "import numpy as np\n\nimport metpy.calc as mpcalc\nfrom metpy.units import units", "Create some test data to use for our example", "data = np.array([[23, 24, 23],\n [25, 26, 25],\n [27, 28, 27],\n [24, 25, 24]]) * units.degC\n\n# Create an array of x position data (the coordinates of our temperature data)\nx = np.array([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]) * units.kilometer\n\ny = np.array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3],\n [4, 4, 4]]) * units.kilometer", "Calculate the gradient using the coordinates of the data", "grad = mpcalc.gradient(data, coordinates=(y, x))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])", "It's also possible that we do not have the position of data points, but know\nthat they are evenly spaced. We can then specify a scalar delta value for each\naxes.", "x_delta = 2 * units.km\ny_delta = 1 * units.km\ngrad = mpcalc.gradient(data, deltas=(y_delta, x_delta))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])", "Finally, the deltas can be arrays for unevenly spaced data.", "x_deltas = np.array([[2, 3],\n [1, 3],\n [2, 3],\n [1, 2]]) * units.kilometer\ny_deltas = np.array([[2, 3, 1],\n [1, 3, 2],\n [2, 3, 1]]) * units.kilometer\ngrad = mpcalc.gradient(data, deltas=(y_deltas, x_deltas))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Vvkmnn/books
ThinkBayes/09_Two_Dimensions.ipynb
gpl-3.0
[ "# format the book\n%matplotlib inline\nimport sys\nfrom __future__ import division, print_function\nimport sys\nsys.path.insert(0,'../code')\nimport book_format\nbook_format.load_style('../code')", "Two Dimensions\nPaintball\nPaintball is a sport in which competing teams try to shoot each other\nwith guns that fire paint-filled pellets that break on impact, leaving a\ncolorful mark on the target. It is usually played in an arena decorated\nwith barriers and other objects that can be used as cover.\nSuppose you are playing paintball in an indoor arena 30 feet wide and 50\nfeet long. You are standing near one of the 30 foot walls, and you\nsuspect that one of your opponents has taken cover nearby. Along the\nwall, you see several paint spatters, all the same color, that you think\nyour opponent fired recently.\nThe spatters are at 15, 16, 18, and 21 feet, measured from the\nlower-left corner of the room. Based on these data, where do you think\nyour opponent is hiding?\nFigure [fig.paintball] shows a diagram of the arena. Using the\nlower-left corner of the room as the origin, I denote the unknown\nlocation of the shooter with coordinates $\\alpha$ and $\\beta$, or\nalpha and beta. The location of a spatter is\nlabeled x. The angle the opponent shoots at is $\\theta$ or\ntheta.\nThe Paintball problem is a modified version of the Lighthouse problem, a\ncommon example of Bayesian analysis. My notation follows the\npresentation of the problem in D.S. Sivia’s, Data Analysis: a\nBayesian Tutorial, Second Edition (Oxford, 2006).\nYou can download the code in this chapter from\nhttp://thinkbayes.com/paintball.py. For more information see\nSection [download].\nThe suite\n\n[fig.paintball]\nTo get started, we need a Suite that represents a set of hypotheses\nabout the location of the opponent. 
Each hypothesis is a pair of\ncoordinates: (alpha, beta).\nHere is the definition of the Paintball suite:", "import thinkbayes\n\nclass Paintball(thinkbayes.Suite, thinkbayes.Joint):\n\n def __init__(self, alphas, betas, locations):\n self.locations = locations\n pairs = [(alpha, beta) \n for alpha in alphas \n for beta in betas]\n thinkbayes.Suite.__init__(self, pairs)", "Paintball inherits from Suite, which we have\nseen before, and Joint, which I will explain soon.\nalphas is the list of possible values for\nalpha; betas is the list of values for\nbeta. pairs is a list of all (alpha,\nbeta) pairs.\nlocations is a list of possible locations along the wall;\nit is stored for use in Likelihood.\n\n[fig.paintball2]\nThe room is 30 feet wide and 50 feet long, so here’s the code that\ncreates the suite:", "alphas = range(0, 31)\nbetas = range(1, 51)\nlocations = range(0, 31)\n\nsuite = Paintball(alphas, betas, locations)", "This prior distribution assumes that all locations in the room are\nequally likely. Given a map of the room, we might choose a more detailed\nprior, but we’ll start simple.\nTrigonometry\nNow we need a likelihood function, which means we have to figure out the\nlikelihood of hitting any spot along the wall, given the location of the\nopponent.\nAs a simple model, imagine that the opponent is like a rotating turret,\nequally likely to shoot in any direction. In that case, he is most\nlikely to hit the wall at location alpha, and less likely\nto hit the wall far away from alpha.\nWith a little trigonometry, we can compute the probability of hitting\nany spot along the wall. 
Imagine that the shooter fires a shot at angle\n$\\theta$; the pellet would hit the wall at location $x$, where\n$$x - \\alpha = \\beta \\tan \\theta$$ Solving this equation for $\\theta$\nyields $$\\theta = \\tan^{-1} \\left( \\frac{x - \\alpha}{\\beta} \\right)$$ \nSo given a location on the wall, we can find $\\theta$.\nTaking the derivative of the first equation with respect to $\\theta$\nyields\n$$\\frac{dx}{d\\theta} = \\frac{\\beta}{\\cos^2 \\theta}$$ \nThis derivative is what I’ll call the “strafing speed”, which is the speed of\nthe target location along the wall as $\\theta$ increases. The\nprobability of hitting a given point on the wall is inversely related to\nstrafing speed.\nIf we know the coordinates of the shooter and a location along the wall,\nwe can compute strafing speed:", "import math\n\ndef StrafingSpeed(alpha, beta, x):\n theta = math.atan2(x - alpha, beta)\n speed = beta / math.cos(theta)**2\n return speed", "alpha and beta are the coordinates of the\nshooter; x is the location of a spatter. The result is the\nderivative of x with respect to theta.\n\n[fig.paintball1]\nNow we can compute a Pmf that represents the probability of hitting any\nlocation on the wall. MakeLocationPmf takes\nalpha and beta, the coordinates of the\nshooter, and locations, a list of possible values of\nx.", "def MakeLocationPmf(alpha, beta, locations):\n pmf = thinkbayes.Pmf()\n for x in locations:\n prob = 1.0 / StrafingSpeed(alpha, beta, x)\n pmf.Set(x, prob)\n pmf.Normalize()\n return pmf", "MakeLocationPmf computes the probability of hitting each\nlocation, which is inversely related to strafing speed. The result is a\nPmf of locations and their probabilities.\nFigure [fig.paintball1] shows the Pmf of location with alpha =\n10 and a range of values for beta. For all values of\nbeta the most likely spatter location is x = 10; as\nbeta increases, so does the spread of the Pmf.\nLikelihood\nNow all we need is a likelihood function. 
We can use\nMakeLocationPmf to compute the likelihood of any value of\nx, given the coordinates of the opponent.", "def Likelihood(self, data, hypo):\n alpha, beta = hypo\n x = data\n pmf = MakeLocationPmf(alpha, beta, self.locations)\n like = pmf.Prob(x)\n return like", "Again, alpha and beta are the hypothetical\ncoordinates of the shooter, and x is the location of an\nobserved spatter.\npmf contains the probability of each location, given the\ncoordinates of the shooter. From this Pmf, we select the probability of\nthe observed location.\nAnd we’re done. To update the suite, we can use UpdateSet,\nwhich is inherited from Suite.", "suite.UpdateSet([15, 16, 18, 21])", "The result is a distribution that maps each (alpha, beta)\npair to a posterior probability.\nJoint distributions\nWhen each value in a distribution is a tuple of variables, it is called\na joint distribution because it represents the\ndistributions of the variables together, that is “jointly”. A joint\ndistribution contains the distributions of the variables, as well\ninformation about the relationships among them.\nGiven a joint distribution, we can compute the distributions of each\nvariable independently, which are called the marginal\ndistributions.\nthinkbayes.Joint provides a method that computes marginal\ndistributions:\n```python\nclass Joint:\ndef Marginal(self, i):\n pmf = Pmf()\n for vs, prob in self.Items():\n pmf.Incr(vs[i], prob)\n return pmf\n\n```\ni is the index of the variable we want; in this example\ni=0 indicates the distribution of alpha, and\ni=1 indicates the distribution of beta.\nHere’s the code that extracts the marginal distributions:", "marginal_alpha = suite.Marginal(0)\nmarginal_beta = suite.Marginal(1)", "Figure [fig.paintball2] shows the results (converted to CDFs). The\nmedian value for alpha is 18, near the center of mass of\nthe observed spatters. 
For beta, the most likely values are\nclose to the wall, but beyond 10 feet the distribution is almost\nuniform, which indicates that the data do not distinguish strongly\nbetween these possible locations.\nGiven the posterior marginals, we can compute credible intervals for\neach coordinate independently:", "print('alpha CI', marginal_alpha.CredibleInterval(50))\nprint('beta CI', marginal_beta.CredibleInterval(50))", "The 50% credible intervals are (14, 21) for\nalpha and (5, 31) for beta. So\nthe data provide evidence that the shooter is in the near side of the\nroom. But it is not strong evidence. The 90% credible intervals cover\nmost of the room!\nConditional distributions {#conditional}\n\n[fig.paintball3]\nThe marginal distributions contain information about the variables\nindependently, but they do not capture the dependence between variables,\nif any.\nOne way to visualize dependence is by computing conditional\ndistributions. thinkbayes.Joint provides a method\nthat does that:", "def Conditional(self, i, j, val):\n pmf = Pmf()\n for vs, prob in self.Items():\n if vs[j] != val: continue\n pmf.Incr(vs[i], prob)\n\n pmf.Normalize()\n return pmf", "Again, i is the index of the variable we want;\nj is the index of the conditioning variable, and\nval is the conditional value.\nThe result is the distribution of the $i$th variable under the condition\nthat the $j$th variable is val.\nFor example, the following code computes the conditional distributions\nof alpha for a range of values of beta:", "betas = [10, 20, 40]\n\nfor beta in betas:\n cond = suite.Conditional(0, 1, beta)", "Figure [fig.paintball3] shows the results, which we could fully describe\nas “posterior conditional marginal distributions.” Whew!\nIf the variables were independent, the conditional distributions would\nall be the same. Since they are all different, we can tell the variables\nare dependent. 
For example, if we know (somehow) that beta = 10, the\nconditional distribution of alpha is fairly\nnarrow. For larger values of beta, the distribution of\nalpha is wider.\nCredible intervals {#credible-intervals}\n\n[fig.paintball5]\nAnother way to visualize the posterior joint distribution is to compute\ncredible intervals. When we looked at credible intervals in\nSection [credible], I skipped over a subtle point: for a given\ndistribution, there are many intervals with the same level of\ncredibility. For example, if you want a 50% credible interval, you could\nchoose any set of values whose probability adds up to 50%.\nWhen the values are one-dimensional, it is most common to choose the\ncentral credible interval; for example, the central 50%\ncredible interval contains all values between the 25th and 75th\npercentiles.\nIn multiple dimensions it is less obvious what the right credible\ninterval should be. The best choice might depend on context, but one\ncommon choice is the maximum likelihood credible interval, which\ncontains the most likely values that add up to 50% (or some other\npercentage).\nthinkbayes.Joint provides a method that computes maximum\nlikelihood credible intervals.\n```python\nclass Joint:\ndef MaxLikeInterval(self, percentage=90):\n interval = []\n total = 0\n\n t = [(prob, val) for val, prob in self.Items()]\n t.sort(reverse=True)\n\n for prob, val in t:\n interval.append(val)\n total += prob\n if total >= percentage/100.0:\n break\n\n return interval\n\n```\nThe first step is to make a list of the values in the suite, sorted in\ndescending order by probability. Next we traverse the list, adding each\nvalue to the interval, until the total probability exceeds\npercentage. 
The result is a list of values from the suite.\nNotice that this set of values is not necessarily contiguous.\nTo visualize the intervals, I wrote a function that “colors” each value\naccording to how many intervals it appears in:", "def MakeCrediblePlot(suite):\n d = dict((pair, 0) for pair in suite.Values())\n\n percentages = [75, 50, 25]\n for p in percentages:\n interval = suite.MaxLikeInterval(p)\n for pair in interval:\n d[pair] += 1\n\n return d", "d is a dictionary that maps from each value in the suite to\nthe number of intervals it appears in. The loop computes intervals for\nseveral percentages and modifies d.\nFigure [fig.paintball5] shows the result. The 25% credible interval is\nthe darkest region near the bottom wall. For higher percentages, the\ncredible interval is bigger, of course, and skewed toward the right side\nof the room.\nDiscussion\nThis chapter shows that the Bayesian framework from the previous\nchapters can be extended to handle a two-dimensional parameter space.\nThe only difference is that each hypothesis is represented by a tuple of\nparameters.\nI also presented Joint, which is a parent class that\nprovides methods that apply to joint distributions:\nMarginal, Conditional, and\nMaxLikeInterval. In object-oriented terms,\nJoint is a mixin (see\nhttp://en.wikipedia.org/wiki/Mixin).\nThere is a lot of new vocabulary in this chapter, so let’s review:\nJoint distribution:\n: A distribution that represents all possible values in a\n multidimensional space and their probabilities. The example in this\n chapter is a two-dimensional space made up of the coordinates\n alpha and beta. The joint distribution\n represents the probability of each (alpha,\n beta) pair.\nMarginal distribution:\n: The distribution of one parameter in a joint distribution, treating\n the other parameters as unknown. 
For example,\n Figure [fig.paintball2] shows the distributions of\n alpha and beta independently.\nConditional distribution:\n: The distribution of one parameter in a joint distribution,\n conditioned on one or more of the other parameters.\n Figure [fig.paintball3] shows several distributions for\n alpha, conditioned on different values of\n beta.\nGiven the joint distribution, you can compute marginal and conditional\ndistributions. With enough conditional distributions, you could\nre-create the joint distribution, at least approximately. But given the\nmarginal distributions you cannot re-create the joint distribution\nbecause you have lost information about the dependence between\nvariables.\nIf there are $n$ possible values for each of two parameters, most\noperations on the joint distribution take time proportional to $n^2$. If\nthere are $d$ parameters, run time is proportional to $n^d$, which\nquickly becomes impractical as the number of dimensions increases.\nIf you can process a million hypotheses in a reasonable amount of time,\nyou could handle two dimensions with 1000 values for each parameter, or\nthree dimensions with 100 values each, or six dimensions with 10 values\neach.\nIf you need more dimensions, or more values per dimension, there are\noptimizations you can try. I present an example in Chapter [species].\nYou can download the code in this chapter from\nhttp://thinkbayes.com/paintball.py. For more information see\nSection [download].\nExercises\nIn our simple model, the opponent is equally likely to shoot in any\ndirection. As an exercise, let’s consider improvements to this model.\nThe analysis in this chapter suggests that a shooter is most likely to\nhit the closest wall. But in reality, if the opponent is close to a\nwall, he is unlikely to shoot at the wall because he is unlikely to see\na target between himself and the wall.\nDesign an improved model that takes this behavior into account. 
Try to\nfind a model that is more realistic, but not too complicated." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Weenkus/Machine-Learning-University-of-Washington
Regression/examples/numpy-tutorial.ipynb
mit
[ "Numpy Tutorial\nNumpy is a computational library for Python that is optimized for operations on multi-dimensional arrays. In this notebook we will use numpy to work with 1-d arrays (often called vectors) and 2-d arrays (often called matrices).\nFor a the full user guide and reference for numpy see: http://docs.scipy.org/doc/numpy/", "import numpy as np # importing this way allows us to refer to numpy as np", "Creating Numpy Arrays\nNew arrays can be made in several ways. We can take an existing list and convert it to a numpy array:", "mylist = [1., 2., 3., 4.]\nmynparray = np.array(mylist)\nmynparray", "You can initialize an array (of any dimension) of all ones or all zeroes with the ones() and zeros() functions:", "one_vector = np.ones(4)\nprint (one_vector) # using print removes the array() portion\n\none2Darray = np.ones((2, 4)) # an 2D array with 2 \"rows\" and 4 \"columns\"\nprint (one2Darray)\n\nzero_vector = np.zeros(4)\nprint (zero_vector)", "You can also initialize an empty array which will be filled with values. This is the fastest way to initialize a fixed-size numpy array however you must ensure that you replace all of the values.", "empty_vector = np.empty(5)\nprint (empty_vector)", "Accessing array elements\nAccessing an array is straight forward. For vectors you access the index by referring to it inside square brackets. 
Recall that indices in Python start with 0.", "mynparray[2]", "2D arrays are accessed similarly by referring to the row and column index separated by a comma:", "my_matrix = np.array([[1, 2, 3], [4, 5, 6]])\nprint (my_matrix)\n\nprint (my_matrix[1, 2])", "Sequences of indices can be accessed using ':' for example", "print (my_matrix[0:2, 2]) # recall 0:2 = [0, 1]\n\nprint (my_matrix[0, 0:3])", "You can also pass a list of indices.", "fib_indices = np.array([1, 1, 2, 3])\nrandom_vector = np.random.random(10) # 10 random numbers between 0 and 1\nprint (random_vector)\n\nprint (random_vector[fib_indices])", "You can also use true/false values to select values", "my_vector = np.array([1, 2, 3, 4])\nselect_index = np.array([True, False, True, False])\nprint (my_vector[select_index])", "For 2D arrays you can select specific columns and specific rows. Passing ':' selects all rows/columns", "select_cols = np.array([True, False, True]) # 1st and 3rd column\nselect_rows = np.array([False, True]) # 2nd row\n\nprint (my_matrix[select_rows, :]) # just 2nd row but all columns\n\nprint (my_matrix[:, select_cols]) # all rows and just the 1st and 3rd column", "Operations on Arrays\nYou can use the operations '*', '**', '/', '+' and '-' on numpy arrays and they operate elementwise.", "my_array = np.array([1., 2., 3., 4.])\nprint (my_array*my_array)\n\nprint (my_array**2)\n\nprint (my_array - np.ones(4))\n\nprint (my_array + np.ones(4))\n\nprint (my_array / 3)\n\nprint (my_array / np.array([2., 3., 4., 5.])) # = [1.0/2.0, 2.0/3.0, 3.0/4.0, 4.0/5.0]", "You can compute the sum with np.sum() and the average with np.average()", "print (np.sum(my_array))\n\nprint (np.average(my_array))\n\nprint (np.sum(my_array)/len(my_array))", "The dot product\nAn important mathematical operation in linear algebra is the dot product. \nWhen we compute the dot product between two vectors we are simply multiplying them elementwise and adding them up. 
In numpy you can do this with np.dot()", "array1 = np.array([1., 2., 3., 4.])\narray2 = np.array([2., 3., 4., 5.])\nprint (np.dot(array1, array2))\n\nprint (np.sum(array1*array2))", "Recall that the Euclidean length (or magnitude) of a vector is the square root of the sum of the squares of the components. This is just the square root of the dot product of the vector with itself:", "array1_mag = np.sqrt(np.dot(array1, array1))\nprint (array1_mag)\n\nprint (np.sqrt(np.sum(array1*array1)))", "We can also use the dot product when we have a 2D array (or matrix). When you have a vector with the same number of elements as the matrix (2D array) has columns you can right-multiply the matrix by the vector to get another vector with the same number of elements as the matrix has rows. For example, this is how you compute the predicted values given a matrix of features and an array of weights.", "my_features = np.array([[1., 2.], [3., 4.], [5., 6.], [7., 8.]])\nprint (my_features)\n\nmy_weights = np.array([0.4, 0.5])\nprint (my_weights)\n\nmy_predictions = np.dot(my_features, my_weights) # note that the weights are on the right\nprint (my_predictions) # which has 4 elements since my_features has 4 rows", "Similarly, if you have a vector with the same number of elements as the matrix has rows you can left-multiply them.", "my_matrix = my_features\nmy_array = np.array([0.3, 0.4, 0.5, 0.6])\n\nprint (np.dot(my_array, my_matrix)) # which has 2 elements because my_matrix has 2 columns", "Multiplying Matrices\nIf we have two 2D arrays (matrices) matrix_1 and matrix_2 where the number of columns of matrix_1 is the same as the number of rows of matrix_2 then we can use np.dot() to perform matrix multiplication.", "matrix_1 = np.array([[1., 2., 3.],[4., 5., 6.]])\nprint (matrix_1)\n\nmatrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])\nprint (matrix_2)\n\nprint (np.dot(matrix_1, matrix_2))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
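The dot-product and matrix-multiplication rules walked through in the numpy cells above can be checked end to end in a short, self-contained sketch (the array values are illustrative, reusing the tutorial's shapes):

```python
import numpy as np

# Dot product: multiply elementwise, then sum.
array1 = np.array([1., 2., 3., 4.])
array2 = np.array([2., 3., 4., 5.])
dot = np.dot(array1, array2)  # 1*2 + 2*3 + 3*4 + 4*5 = 40
assert dot == np.sum(array1 * array2)

# Euclidean length: square root of the dot product of a vector with itself.
length = np.sqrt(np.dot(array1, array1))  # sqrt(30)

# Matrix multiplication: (2x3) dot (3x2) gives a (2x2) result.
matrix_1 = np.array([[1., 2., 3.], [4., 5., 6.]])
matrix_2 = np.array([[1., 2.], [3., 4.], [5., 6.]])
product = np.dot(matrix_1, matrix_2)
```

Each entry of `product` is the dot product of one row of `matrix_1` with one column of `matrix_2`, which is why the inner dimensions must match.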
fantasycheng/udacity-deep-learning-project
tv-script-generation/dlnd_tv_script_generation_2.ipynb
mit
[ "TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]", "Explore the Data\nPlay around with view_sentence_range to view different parts of the data.", "view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))", "Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. 
Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)", "import numpy as np\nimport problem_unittests as tests\nfrom collections import Counter\nfrom string import punctuation\n\n\ndef create_lookup_tables(text):\n    \"\"\"\n    Create lookup tables for vocabulary\n    :param text: The text of tv scripts split into words\n    :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n    \"\"\"\n    vocab = set(text)\n    vocab_to_int = {word: ii for ii, word in enumerate(vocab)}\n    int_to_vocab = dict(enumerate(vocab))\n    return vocab_to_int, int_to_vocab\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)", "Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. 
Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".", "def token_lookup():\n    \"\"\"\n    Generate a dict to turn punctuation into a token.\n    :return: Tokenize dictionary where the key is the punctuation and the value is the token\n    \"\"\"\n    # Delimited tokens so none of them can be confused with a real word.\n    tknz_dict = {'.': '||Period||',\n                 ',': '||Comma||',\n                 '\"': '||Quotation_Mark||',\n                 ';': '||Semicolon||',\n                 '!': '||Exclamation_Mark||',\n                 '?': '||Question_Mark||',\n                 '(': '||Left_Parentheses||',\n                 ')': '||Right_Parentheses||',\n                 '--': '||Dash||',\n                 '\\n': '||Return||'}\n    return tknz_dict\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)", "Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. 
The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()", "Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))", "Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. 
It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)", "def get_inputs():\n    \"\"\"\n    Create TF Placeholders for input, targets, and learning rate.\n    :return: Tuple (input, targets, learning rate)\n    \"\"\"\n    input = tf.placeholder(tf.int32, [None, None], name='input')\n    targets = tf.placeholder(tf.int32, [None, None])\n    learning_rate = tf.placeholder(tf.float32)\n    return input, targets, learning_rate\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)", "Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initialize Cell State using the MultiRNNCell's zero_state() function\n  - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)", "def get_init_cell(batch_size, rnn_size):\n    \"\"\"\n    Create an RNN Cell and initialize it.\n    :param batch_size: Size of batches\n    :param rnn_size: Size of RNNs\n    :return: Tuple (cell, initial state)\n    \"\"\"\n    lstm_layers = 2\n\n    # Build a fresh BasicLSTMCell for each layer; reusing one cell object\n    # across layers makes newer TensorFlow versions share or reject the weights.\n    cell = tf.contrib.rnn.MultiRNNCell(\n        [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(lstm_layers)])\n\n    initial_state = cell.zero_state(batch_size, tf.float32)\n    initial_state = tf.identity(initial_state, name='initial_state')\n    return cell, initial_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)", "Word Embedding\nApply embedding to input_data using TensorFlow. 
Return the embedded sequence.", "def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n e = tf.Variable(tf.random_uniform([vocab_size, embed_dim],-1,1))\n embed = tf.nn.embedding_lookup(e,input_data)\n return embed\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)", "Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)", "def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n outputs, final_state = tf.nn.dynamic_rnn(cell,inputs,dtype = tf.float32)\n final_state = tf.identity(final_state, name=\"final_state\")\n return outputs, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)", "Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)", "def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param 
input_data: Input data\n    :param vocab_size: Vocabulary size\n    :param embed_dim: Number of embedding dimensions\n    :return: Tuple (Logits, FinalState)\n    \"\"\"\n    embed = get_embed(input_data, vocab_size, embed_dim)\n    outputs, final_state = build_rnn(cell, embed)\n    # Linear activation (activation_fn=None) with vocab_size outputs, as specified above.\n    logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)\n    return logits, final_state\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)", "Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2 3], [ 7 8 9]],\n # Batch of targets\n [[ 2 3 4], [ 8 9 10]]\n ],\n# Second Batch\n [\n # Batch of Input\n [[ 4 5 6], [10 11 12]],\n # Batch of targets\n [[ 5 6 7], [11 12 13]]\n ]\n]\n```", "def get_batches(int_text, batch_size, seq_length):\n    \"\"\"\n    Return batches of input and target\n    :param int_text: Text with the words replaced by their ids\n    :param batch_size: The size of batch\n    :param seq_length: The length of sequence\n    :return: Batches as a Numpy array\n    \"\"\"\n    batches_size = batch_size * seq_length\n    n_batches = len(int_text) // batches_size\n    batch_x = np.array(int_text[:n_batches * batches_size])\n\n    # Targets are the inputs shifted by one word; np.roll wraps the final\n    # target around to the first word and handles exact-multiple lengths.\n    batch_y = np.roll(batch_x, -1)\n\n    batch_x_reshape = batch_x.reshape(batch_size, -1)\n    
batch_y_reshape = batch_y.reshape(batch_size,-1)\n \n batches = np.zeros([n_batches, 2, batch_size, seq_length])\n \n for i in range(n_batches):\n batches[i][0]= batch_x_reshape[ : ,i * seq_length: (i+1)* seq_length]\n batches[i][1]= batch_y_reshape[ : ,i * seq_length: (i+1)* seq_length]\n\n\n return batches\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)", "Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.", "# Number of Epochs\nnum_epochs = 50\n# Batch Size\nbatch_size = 256\n# RNN Size\nrnn_size = 10\n# Embedding Dimension Size\nembed_dim = 200\n# Sequence Length\nseq_length = 50\n# Learning Rate\nlearning_rate = 0.1\n# Show stats for every n number of batches\nshow_every_n_batches = 50\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'", "Build the Graph\nBuild the graph using the neural network you implemented.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = 
tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)", "Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')", "Save Parameters\nSave seq_length and save_dir for generating a new TV script.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))", "Checkpoint", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()", "Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). 
Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)", "def get_tensors(loaded_graph):\n    \"\"\"\n    Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n    :param loaded_graph: TensorFlow graph loaded from file\n    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n    \"\"\"\n    inputs = loaded_graph.get_tensor_by_name(\"input:0\")\n    initial_state = loaded_graph.get_tensor_by_name(\"initial_state:0\")\n    final_state = loaded_graph.get_tensor_by_name(\"final_state:0\")\n    probs = loaded_graph.get_tensor_by_name(\"probs:0\")\n    return inputs, initial_state, final_state, probs\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)", "Choose Word\nImplement the pick_word() function to select the next word using probabilities.", "def pick_word(probabilities, int_to_vocab):\n    \"\"\"\n    Pick the next word in the generated text\n    :param probabilities: Probabilities of the next word\n    :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n    :return: String of the predicted word\n    \"\"\"\n    # Sample a word id in proportion to its predicted probability; unlike a\n    # fixed cutoff, this never leaves an empty candidate list to crash on.\n    idx = np.random.choice(len(probabilities), p=probabilities)\n    return str(int_to_vocab[idx])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)", "Generate TV Script\nThis will generate the TV script for you. 
Set gen_length to the length of TV script you want to generate.", "gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n    # Load saved model\n    loader = tf.train.import_meta_graph(load_dir + '.meta')\n    loader.restore(sess, load_dir)\n\n    # Get Tensors from loaded model\n    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n    # Sentences generation setup\n    gen_sentences = [prime_word + ':']\n    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n    # Generate sentences\n    for n in range(gen_length):\n        # Dynamic Input\n        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n        dyn_seq_length = len(dyn_input[0])\n\n        # Get Prediction\n        probabilities, prev_state = sess.run(\n            [probs, final_state],\n            {input_text: dyn_input, initial_state: prev_state})\n        \n        pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)\n\n        gen_sentences.append(pred_word)\n    \n    # Remove tokens\n    tv_script = ' '.join(gen_sentences)\n    for key, token in token_dict.items():\n        ending = ' ' if key in ['\\n', '(', '\"'] else ''\n        tv_script = tv_script.replace(' ' + token.lower(), key)\n    tv_script = tv_script.replace('\\n ', '\\n')\n    tv_script = tv_script.replace('( ', '(')\n        \n    print(tv_script)", "The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beginning of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. 
After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
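The input/target windowing that the notebook's get_batches cell implements can be exercised outside TensorFlow. The sketch below is a standalone numpy copy of that slicing logic (function and variable names mirror the notebook, but this version is illustrative, not the graded solution):

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    # Keep only as many words as fill complete batches; drop the remainder.
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch
    xdata = np.array(int_text[:n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)  # targets are inputs shifted by one word, wrapping at the end

    # Reshape into batch_size rows, then slice out seq_length-wide windows.
    x_rows = xdata.reshape(batch_size, -1)
    y_rows = ydata.reshape(batch_size, -1)
    batches = np.zeros((n_batches, 2, batch_size, seq_length), dtype=xdata.dtype)
    for i in range(n_batches):
        batches[i][0] = x_rows[:, i * seq_length:(i + 1) * seq_length]
        batches[i][1] = y_rows[:, i * seq_length:(i + 1) * seq_length]
    return batches

# The docstring's example: 15 word ids, batch size 2, sequence length 3.
batches = get_batches(list(range(1, 16)), 2, 3)
```

Note the very last target wraps around to the first word id, matching the notebook's `batch_y[-1] = batch_x[0]` behaviour rather than the `13` shown in the docstring example.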
eeeeeric/rSalvador
pysalvador/userManual.ipynb
gpl-2.0
[ "pySalvador 0.1 User Manual\nQi Zheng, Texas A&M University School of Public Health\nAugust 14, 2020\npySalvador 0.1 offers a subset of the user functions that are available in rSalvador. Python users can use pySalvador 0.1 to estimate mutation rates and their confidence intervals under three common models: the classic mutation model as defined by Lea and Coulson (1949), a modified Lea-Coulson model that allows for partial plating, and another modified model that accounts for mutant fitness.\nInstallation of pySalvador is straightforward. Users only need to copy the Python code file pysalvador.py into their working directory or folder. However, the user must make sure that the standard Python packages numpy and scipy are pre-installed. To enhance flexibility, users may also put the Python code file pysalvador.py in a reserved directory and work from a different directory by adding the \"permanent\" pysalvador directory to the Python search path. For example, you may create a directory called c:/pysal as the permanent residence for pysalvador.py and then work from another directory by executing the following three Python commands:\n```python\nimport sys\nsys.path.append(\"c:/pysal\")\nimport pysalvador as sal\n```\nNote that pySalvador is written completely in Python, while a sizable portion of rSalvador is written in C. Not surprisingly, when the maximum of the mutant numbers is large, pySalvador is discernibly slower than rSalvador. However, in practice, the maximum of the numbers of mutants rarely exceeds 500, and hence computing speed is a nonissue for the most part.", "import numpy as np\n\nimport pysalvador as sal", "The basic model\nWe now use the well-known Demerec experimental data for illustration. 
The data is available in pySalvador.", "demerec=sal.demerec_data\n\nnp.transpose(demerec)", "To obtain a maximum likelihood estimate of the expected number of mutations per culture, m, you execute the following.", "sal.newtonLD(demerec)", "You may watch the iteration process as follows.", "sal.newtonLD(demerec, show_iter=True)", "A 95% confident interval for m can be obtained as follows.", "sal.confintLD(demerec,show_iter=True)", "Partial plating\nWe use data from experiment 16 of Luria and Delbruck for illustration. The plating efficiency is known to be 0.4.", "luria16=sal.luria_16_data\n\nluria16\n\nsal.newtonLD_plating(luria16,e=0.4,show_iter=True)\n\nsal.confintLD_plating(luria16,e=0.4,show_iter=True)", "Accounting for fitness\nNow assume that the mutants in Demerec's experiment had a relative fitness of 0.9.", "\nsal.newtonMK(demerec,w=0.9,show_iter=True)\n\nsal.confintMK(demerec,w=0.9,show_iter=True)", "Using your own data\npySalvador accepts mutant data as a list. Suppose you have the following 9-culture experiment.", "mydata=[0,16,20,2,2,56,3,161,9]\n\nsal.newtonLD(mydata)\n\nsal.confintLD(mydata)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
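The maximum-likelihood routines shown above require pySalvador itself, but the classical p0 method gives a quick back-of-the-envelope estimate of m from the same kind of data. This sketch is not a pySalvador function — the name `p0_estimate` and the arithmetic are ours, assuming the standard Luria–Delbrück relation m = −ln(p0):

```python
import math

def p0_estimate(mutant_counts):
    """Classical p0-method estimate of the expected number of mutations m:
    m = -ln(fraction of cultures showing zero mutants)."""
    n_zero = sum(1 for c in mutant_counts if c == 0)
    if n_zero == 0:
        raise ValueError("p0 method needs at least one zero-mutant culture")
    return -math.log(n_zero / len(mutant_counts))

# The 9-culture toy experiment from the manual's last section.
mydata = [0, 16, 20, 2, 2, 56, 3, 161, 9]
m_hat = p0_estimate(mydata)  # -ln(1/9) = ln(9)
```

Because it only uses the zero-mutant fraction, this estimator is cruder than newtonLD but handy as a sanity check on the MLE.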
deculler/DataScienceTableDemos
HealthSample.ipynb
bsd-2-clause
[ "# HIDDEN\nfrom datascience import *\n%matplotlib inline\nimport matplotlib.pyplot as plots\nplots.style.use('fivethirtyeight')\nimport numpy as np\n\n# Read in data from health records survey as a raw table\n# Public data from the UC Michigan Institute for Social Research\n# Health and Retirement Survey - rsonline.isr.umich.edu\n#\nhrec06 = Table.read_table(\"./data/hrsextract06.csv\")\nhrec06", "Indirection\nThey say \"all problems in computer science can be solved with an extra level of indirection.\" \nIt certainly provides some real leverage in data wrangling. Rather than write a bunch of spaghetti\ncode, we will build a table that defines the transformation we would like to perform on the\nraw data in order to have something cleaner to work with. In this we can map the indecipherable identifiers\ninto something more understandable; we can establish formatters; we can translate field encodings into\nclear mnemonics, and so on.\nWe need a tool for finding elements in the translation table; that's table_lookup. 
Then we can\nbuild our mapping tool, map_raw_table.", "health_map = Table([\"raw label\", \"label\", \"encoding\", \"Description\"]).with_rows(\n    [[\"hhidpn\", \"id\", None, \"identifier\"],\n     [\"r8agey_m\", \"age\", None, \"age in years in wave 8\"],\n     [\"ragender\", \"gender\", ['male','female'], \"(1 = male, 2 = female)\"],\n     [\"raracem\", \"race\", ['white','black','other'], \"(1 = white, 2 = black, 3 = other)\"],\n     [\"rahispan\", \"hispanic\", None, \"(1 = yes)\"],\n     [\"raedyrs\", \"education\", None, \"education in years\"],\n     [\"h8cpl\", \"couple\", None, \"in a couple household (1 = yes)\"],\n     [\"r8bpavgs\", \"blood pressure\", None,\"average systolic BP\"],\n     [\"r8bpavgp\", \"pulse\", None, \"average pulse\"],\n     [\"r8smoken\", \"smoker\",None, \"currently smokes cigarettes\"],\n     [\"r8mdactx\", \"exercise\", None, \"frequency of moderate exercise (1=everyday, 2=>1perweek, 3=1perweek, 4=1-3permonth\\\n, 5=never)\"],\n     [\"r8weightbio\", \"weight\", None, \"objective weight in kg\"],\n     [\"r8heightbio\",\"height\", None, \"objective height in m\"]])\nhealth_map\n\ndef table_lookup(table,key_col,key,map_col):\n    row = np.where(table[key_col]==key)\n    if len(row[0]) == 1:\n        return table[map_col][row[0]][0]\n    else:\n        return -1\n\ndef map_raw_table(raw_table,map_table):\n    mapped = Table()\n    for raw_label in raw_table :\n        if raw_label in map_table[\"raw label\"] :\n            new_label = table_lookup(map_table,'raw label',raw_label,'label')\n            encoding = table_lookup(map_table,'raw label',raw_label,'encoding')\n            if encoding is None :\n                mapped[new_label] = raw_table[raw_label]\n            else:\n                mapped[new_label] = raw_table.apply(lambda x: encoding[x-1], raw_label)\n    return mapped\n\n# create a more usable table by mapping the raw to finished\nhealth = map_raw_table(hrec06,health_map)\nhealth", "Descriptive statistics - smoking", "def firstQtile(x) : return np.percentile(x,25)\ndef thirdQtile(x) : return np.percentile(x,75)\nsummary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, 
max, sum)\n\n# Let's try what is the effect of smoking\nsmokers = health.where('smoker',1)\nnosmokers = health.where('smoker',0)\nprint(smokers.num_rows, ' smokers')\nprint(nosmokers.num_rows, ' non-smokers')\n\nsmokers.stats(summary_ops)\n\nnosmokers.stats(summary_ops)\n\nhelp(smokers.hist)", "What is the effect of smoking on weight?", "smokers.hist('weight', bins=20)\n\nnosmokers.hist('weight', bins=20)\n\nnp.mean(nosmokers['weight'])-np.mean(smokers['weight'])", "Permutation tests", "# Lets draw two samples of equal size\nn_sample = 200\nsmoker_sample = smokers.sample(n_sample)\nnosmoker_sample = nosmokers.sample(n_sample)\nweight = Table().with_columns([('NoSmoke', nosmoker_sample['weight']),('Smoke', smoker_sample['weight'])])\nweight.hist(overlay=True,bins=30,normed=True)\n\nweight.stats(summary_ops)", "Is the difference observed between these samples representative of the larger population?", "combined = Table().with_column('all', np.append(nosmoker_sample['weight'],smoker_sample['weight']))\n\ncombined.num_rows\n\n# permutation test, split the combined into two random groups, do the comparison of those\ndef getdiff():\n A,B = combined.split(n_sample)\n return (np.mean(A['all'])-np.mean(B['all']))\n\n# Do the permutation many times and form the distribution of results\nnum_samples = 300\ndiff_samples = Table().with_column('diffs', [getdiff() for i in range(num_samples)])\ndiff_samples.hist(bins=np.arange(-5,5,0.5), normed=True)", "The 4.5 kg difference is certainly not an artifact of the sample we started with. The smokers definitely weigh less. At the same time, these are not light people in this study. 
Better go back and understand what was the purpose of the study that led to the selection of these six thousand individuals.\nOther Factors", "# A sense of the overall population represented - older\nhealth.select(['age','education']).hist(bins=20)\n\n# How does education correlate with age?\nhealth.select(['age','education']).scatter('age', fit_line=True)\n\nhealth.pivot_hist('race','education',normed=True)\n\n# How are races represented in the dataset and how does hispanic overlay the three?\nrace = health.select(['race', 'hispanic']) \nrace['count']=1\nby_race = race.group('race',sum)\nby_race['race frac'] = by_race['count sum']/np.sum(by_race['count sum'])\nby_race['hisp frac'] = by_race['hispanic sum'] / by_race['count sum']\nby_race\n\nhealth.select(['height','weight']).scatter('height','weight',fit_line=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
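The permutation test run in the notebook above relies on datascience Table objects; the same resampling idea can be sketched with numpy alone. The weight samples below are synthetic stand-ins (not the survey data), so only the procedure, not the numbers, matches the notebook:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the non-smoker and smoker weight samples (kg).
group_a = rng.normal(85.0, 15.0, 200)
group_b = rng.normal(80.0, 15.0, 200)
observed = np.mean(group_a) - np.mean(group_b)

# Pool the samples, then repeatedly re-split them at random: if the group
# labels were meaningless, random splits would produce gaps this large often.
combined = np.concatenate([group_a, group_b])
n = len(group_a)
diffs = []
for _ in range(1000):
    rng.shuffle(combined)
    diffs.append(np.mean(combined[:n]) - np.mean(combined[n:]))

# One-sided p-value: the share of random splits matching or beating the observed gap.
p_value = np.mean(np.array(diffs) >= observed)
```

A small p-value here plays the same role as the histogram comparison in the notebook: the observed difference sits far outside the distribution of differences under random relabeling.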
rdhyee/evernote-web-utility
notebooks/evernote_experiments_0.ipynb
apache-2.0
[ "imports needed\nWhat library am I using?\nhttp://dev.evernote.com/\nWhen I'm ready I would hit a Get API key button and fill out the form: https://www.evernote.com/shard/s1/sh/e03e0393-b2cb-4a54-94d1-60e65f482ad3/bb93b060e287d4d979fef70d7b997df9\nDocs:\n\nThe Evernote SDK for Python Quick-start Guide\nEvernote SDK for JavaScript Quick-start Guide\n\nIn getting started, you can take one or both of the following approaches:\n\nget a key set up for the sandbox\nset up a dev key to work with your own account and not worry about Oauth initially.\n\nyou can have a dev token for both the sandbox and for production to access production accounts:\n\nhttps://sandbox.evernote.com/api/DeveloperToken.action\nhttps://www.evernote.com/api/DeveloperToken.action\n\nEvernoteWebUtil is my wrapper for ...", "import settings\nfrom evernote.api.client import EvernoteClient\n\ndev_token = settings.authToken\n\nclient = EvernoteClient(token=dev_token, sandbox=False)\n\nuserStore = client.get_user_store()\nuser = userStore.getUser()\nprint user.username\n\n\nimport EvernoteWebUtil as ewu\newu.init(settings.authToken)\n\newu.user.username", "noteStore\nhttp://dev.evernote.com/documentation/reference/NoteStore.html#Svc_NoteStore\ngetting notebook by name", "# getting notes for a given notebook\n\nimport datetime\n\nfrom itertools import islice\nnotes = islice(ewu.notes_metadata(includeTitle=True, \n includeUpdated=True,\n includeUpdateSequenceNum=True,\n notebookGuid=ewu.notebook(name=':CORE').guid), None)\n\nfor note in notes:\n print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.)\n \n\n# let's read my __MASTER note__\n# is it possible to search notes by title?\n\n[(n.guid, n.title) for n in ewu.notes(title=\".__MASTER note__\")]\n\n\nimport settings\nfrom evernote.api.client import EvernoteClient\n\ndev_token = settings.authToken\n\nclient = EvernoteClient(token=dev_token, sandbox=False)\n\nuserStore = client.get_user_store()\nuser = 
userStore.getUser()\n\nnoteStore = client.get_note_store()\n\nprint user.username\n\nuserStore.getUser()\nnoteStore.getNoteContent('ecc59d05-c010-4b3b-a04b-7d4eeb7e8505')", "my .__MASTER note__ is actually pretty complex....so parsing it and adding to it will take some effort. But let's give it a try.\nWorking with Note Contents\nThings to figure out:\n\nXML parsing\nXML creation\nXML validation via schema", "import lxml", "Getting tags by name", "ewu.tag('#1-Now')\n\nsorted(ewu.tag_counts_by_name().items(), key=lambda x: -x[1])[:10]\n\n\ntags = ewu.noteStore.listTags()\ntags_by_name = dict([(tag.name, tag) for tag in tags])\ntag_counts_by_name = ewu.tag_counts_by_name()\ntags_by_guid = ewu.tags_by_guid()\n\n# figure out which tags have no notes attached and possibly delete them -- say if they don't have children tags\n# oh -- don't delete them willy nilly -- some have organizational purposes\n\nset(tags_by_name) - set(tag_counts_by_name)\n\n# calculated tag_children -- tags that have children\n\nfrom collections import defaultdict\n\ntag_children = defaultdict(list)\nfor tag in tags:\n if tag.parentGuid is not None:\n tag_children[tag.parentGuid].append(tag)\n \n\n[tags_by_guid[guid].name for guid in tag_children.keys()]\n\nfor (guid, children) in tag_children.items():\n print tags_by_guid[guid].name\n for child in children:\n print \"\\t\", child.name", "things to do with tags\n\nfind all notes for a given tag\nget tag guid, name, count, parent / check for existence\ncreate new tag\ndelete tag\nmove tag to new parent\nexpunge tags -- disconnect tags from notes\ncan we get history of a tag: when created?\ndealing with deleted tags\nfind \"related\" tags -- in the Evernote client, when I click on a specific tag, it seems like I see the highlighting of other, possibly related, tags -- http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_findRelated ?\n\nI will also want to locate notes that have a certain tag or set of tags and are in a 
certain notebook.", "# find all notes for a given tag\n\n[n.title for n in ewu.notes_metadata(includeTitle=True, tagGuids=[tags_by_name['#1-Now'].guid])]\n\newu.notebook(name='Action Pending').guid\n\n[n.title for n in ewu.notes_metadata(includeTitle=True, \n notebookGuid=ewu.notebook(name='Action Pending').guid, \n tagGuids=[tags_by_name['#1-Now'].guid])]\n\n# with a GUID, you can get the current state of a tag\n# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getTag\n# not super useful for me since I'm already pulling a list of all tags in order to map names to guids\n\newu.noteStore.getTag(ewu.tag(name='#1-Now').guid) \n\n# create a tag\n# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_createTag\n# must pass name; optional to pass \n\nfrom evernote.edam.type.ttypes import Tag\n\newu.noteStore.createTag(Tag(name=\"happy happy2!\", parentGuid=None))\n\newu.tag(name=\"happy happy2!\", refresh=True)\n\n# expunge tag\n# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_expungeTag\n\newu.noteStore.expungeTag(ewu.tag(\"happy happy2!\").guid)\n\n# find all notes for a given tag and notebook\n\naction_now_notes = list(ewu.notes_metadata(includeTitle=True, \n notebookGuid=ewu.notebook(name='Action Pending').guid, \n tagGuids=[tags_by_name['#1-Now'].guid]))\n\n[(n.guid, n.title) for n in action_now_notes ]\n\n# get all tags for a given note\n\nimport datetime\n\nfrom itertools import islice\nnotes = list(islice(ewu.notes_metadata(includeTitle=True, \n includeUpdated=True,\n includeUpdateSequenceNum=True,\n notebookGuid=ewu.notebook(name=':PROJECTS').guid), None))\n\nplus_tags_set = set()\n\nfor note in notes:\n tags = ewu.noteStore.getNoteTagNames(note.guid)\n plus_tags = [tag for tag in tags if tag.startswith(\"+\")]\n \n plus_tags_set.update(plus_tags)\n print note.title, note.updateSequenceNum, datetime.datetime.fromtimestamp(note.updated/1000.), \\\n len(plus_tags) == 1\n ", 
"synchronization state", "syncstate = ewu.noteStore.getSyncState()\nsyncstate\n\nsyncstate.fullSyncBefore, syncstate.updateCount\n\nimport datetime\ndatetime.datetime.fromtimestamp(syncstate.fullSyncBefore/1000.)", "list notebooks and note counts", "ewu.notebookcounts()", "compute distribution of note sizes", "k = list(ewu.sizes_of_notes())\nprint len(k)\n\nplt.plot(k)\n\nsort(k)\n\nplt.plot(sort(k))\n\nplt.plot([log(i) for i in sort(k)])\n\n\"\"\"\nMake a histogram of normally distributed random numbers and plot the\nanalytic PDF over it\n\"\"\"\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.mlab as mlab\n\nmu, sigma = 100, 15\nx = mu + sigma * np.random.randn(10000)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\n# the histogram of the data\nn, bins, patches = ax.hist(x, 50, normed=1, facecolor='green', alpha=0.75)\n\n# hist uses np.histogram under the hood to create 'n' and 'bins'.\n# np.histogram returns the bin edges, so there will be 50 probability\n# density values in n, 51 bin edges in bins and 50 patches. 
To get\n# everything lined up, we'll compute the bin centers\nbincenters = 0.5*(bins[1:]+bins[:-1])\n# add a 'best fit' line for the normal PDF\ny = mlab.normpdf( bincenters, mu, sigma)\nl = ax.plot(bincenters, y, 'r--', linewidth=1)\n\nax.set_xlabel('Smarts')\nax.set_ylabel('Probability')\n#ax.set_title(r'$\\mathrm{Histogram\\ of\\ IQ:}\\ \\mu=100,\\ \\sigma=15$')\nax.set_xlim(40, 160)\nax.set_ylim(0, 0.03)\nax.grid(True)\n\nplt.show()\n\nplt.hist(k)\n\nplt.hist([log10(i) for i in k], 50)\n\n# calculate Notebook name -> note count\n\nnb_guid_dict = dict([(nb.guid, nb) for nb in ewu.all_notebooks()])\nnb_name_dict = dict([(nb.name, nb) for nb in ewu.all_notebooks()])\n\newu.notes_metadata(includeTitle=True)\n\nimport itertools\n\ng = itertools.islice(ewu.notes_metadata(includeTitle=True, includeUpdateSequenceNum=True, notebookGuid=nb_name_dict[\"Action Pending\"].guid), 10)\n\nlist(g)\n\nlen(_)\n\n# grab content of a specific note\n\n# http://dev.evernote.com/documentation/reference/NoteStore.html#Fn_NoteStore_getNote\n# params: guid, withContent, withResourcesData, withResourcesRecognition, withResourcesAlternateData\n\nnote = ewu.noteStore.getNote('a49d531e-f3f8-4e72-9523-e5a558f11d87', True, False, False, False)\n\n\n\nnote_content = ewu.noteStore.getNoteContent('a49d531e-f3f8-4e72-9523-e5a558f11d87')\n\nnote_content", "creating a new note with content and tag\n\nNote type\nnoteStore.createNote\nnice to have convenience of not having to calculate tag guids", "import EvernoteWebUtil as ewu\nreload(ewu)\n\nfrom evernote.edam.type.ttypes import Note\n\nnote_template = \"\"\"<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n<!DOCTYPE en-note SYSTEM \"http://xml.evernote.com/pub/enml2.dtd\">\n<en-note style=\"word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;\">\n{0}\n</en-note>\"\"\"\n\nnote = Note()\nnote.title = \"hello from ipython\"\nnote.content = note_template.format(\"hello from Canada 2\")\nnote.tagNames = 
[\"hello world\"]\n\nnote = ewu.noteStore.createNote(note)\n\nnote.guid\n\nassert False", "Move Evernote tags to have a different parent", "from evernote.edam.type.ttypes import Tag\nimport EvernoteWebUtil as ewu\n\ntags = ewu.noteStore.listTags()\ntags_by_name = dict([(tag.name, tag) for tag in tags])\n\nprint tags_by_name['+JoinTheAction'], tags_by_name['.Active Projects']\n\n# update +JoinTheAction tag to put it underneath .Active Projects\n\njta_tag = tags_by_name['+JoinTheAction']\njta_tag.parentGuid = tags_by_name['.Active Projects'].guid\n\nresult = ewu.noteStore.updateTag(Tag(name=jta_tag.name, guid=jta_tag.guid, parentGuid=tags_by_name['.Active Projects'].guid))\nprint result\n\n# mark certain project as inactive\n\nresult = ewu.noteStore.updateTag(Tag(name=\"+Relaunch unglue.it\", \n guid=tags_by_name[\"+Relaunch unglue.it\"].guid, \n parentGuid=tags_by_name['.Inactive Projects'].guid))\n\n\n\n# getTag?\n\newu.noteStore.getTag(tags_by_name['+JoinTheAction'].guid)\n\ntags_by_name[\"+Relaunch unglue.it\"]\n\nresult = ewu.noteStore.updateTag(ewu.authToken, Tag(name=\"+Relaunch unglue.it\", \n guid=tags_by_name[\"+Relaunch unglue.it\"].guid, \n parentGuid=tags_by_name['.Inactive Projects'].guid))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
davofis/computational_seismology
05_pseudospectral/cheby_elastic_1d.ipynb
gpl-3.0
[ "<div style='background-image: url(\"../../share/images/header.svg\") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>\n <div style=\"float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px\">\n <div style=\"position: relative ; top: 50% ; transform: translatey(-50%)\">\n <div style=\"font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%\">Computational Seismology</div>\n <div style=\"font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)\">The Chebyshev Pseudospectral Method - Elastic Waves in 1D</div>\n </div>\n </div>\n</div>\n\nSeismo-Live: http://seismo-live.org\nAuthors:\n\nDavid Vargas (@dvargas)\nHeiner Igel (@heinerigel)\n\n\nBasic Equations\nThis notebook presents the numerical solution for the 1D elastic wave equation using the Chebyshev Pseudospectral Method. We depart from the equation \n\\begin{equation}\n\\rho(x) \\partial_t^2 u(x,t) = \\partial_x (\\mu(x) \\partial_x u(x,t)) + f(x,t),\n\\end{equation}\nand use a standard 3-point finite-difference operator to approximate the time derivatives. 
Then, the displacement field is extrapolated as\n\begin{equation}\n\rho_i\frac{u_{i}^{j+1} - 2u_{i}^{j} + u_{i}^{j-1}}{dt^2}= \partial_x (\mu(x) \partial_x u(x,t))_{i}^{j} + f_{i}^{j}\n\end{equation}\nAn alternative way of performing space derivatives of a function defined on the Chebyshev collocation points is to define a derivative matrix $D_{ij}$\n\begin{equation}\nD_{ij} =\n \begin{cases}\n \frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = 0}\\n -\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N}\\n -\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1}\\n \frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j = 0,1,...,N}\n \end{cases}\n\end{equation}\nwhere $N+1$ is the number of Chebyshev collocation points $ \ x_i = cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as\n$$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$\n$$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$\nThis differentiation matrix allows us to write the derivative of the function $u_i = u(x_i)$ (possibly depending on time) simply as\n$$\partial_x u_i = D_{ij} \ u_j$$\nwhere the right-hand side is a matrix-vector product, and the Einstein summation convention applies.", "# This is a configuration step for the exercise. Please run it before calculating the derivative!\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom ricker import ricker \n\n# Show the plots in the Notebook.\nplt.switch_backend("nbagg")", "1. 
Chebyshev derivative method\nExercise\nDefine a python function called \"get_cheby_matrix(nx)\" that initializes the Chebyshev derivative matrix $D_{ij}$, call this function and display the Chebyshev derivative matrix.", "#################################################################\n# IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE!\n################################################################# \n\n# Call the chebyshev differentiation matrix\n# ---------------------------------------------------------------\n#D_ij = \n\n# ---------------------------------------------------------------\n# Display Differentiation Matrix\n# ---------------------------------------------------------------\n", "2. Initialization of setup", "# Basic parameters\n# ---------------------------------------------------------------\n#nt = 5000 # number of time steps\ntmax = 0.0006\neps = 1.4 # stability limit\nisx = 100\nlw = 0.7\nft = 10\nf0 = 100000 # dominant frequency\niplot = 20 # Snapshot frequency\n\n# material parameters\nrho = 2500.\nc = 3000.\nmu = rho*c**2\n\n# space domain\nnx = 100 # number of grid points in x\nxs = np.floor(nx/2) # source location\nxr = np.floor(nx*0.8)\nx = np.zeros(nx+1) \n\n# initialization of pressure fields\np = np.zeros(nx+1) \npnew = np.zeros(nx+1)\npold = np.zeros(nx+1)\nd2p = np.zeros(nx+1) \n\nfor ix in range(0,nx+1):\n x[ix] = np.cos(ix * np.pi / nx) \ndxmin = min(abs(np.diff(x)))\ndxmax = max(abs(np.diff(x)))\n\ndt = eps*dxmin/c # calculate time step from stability criterion\nnt = int(round(tmax/dt))", "3. 
Source Initialization", "# source time function\n# ---------------------------------------------------------------\nt = np.arange(1, nt+1)*dt # initialize time axis\nT0 = 1./f0\ntmp = ricker(dt, T0)\nisrc = tmp\ntmp = np.diff(tmp)\nsrc = np.zeros(nt) \nsrc[0:np.size(tmp)] = tmp\n\n#spatial source function\n# ---------------------------------------------------------------\nsigma = 1.5*dxmax\nx0 = x[int(xs)]\nsg = np.exp(-1/sigma**2*(x-x0)**2)\nsg = sg/max(sg) ", "4. Time Extrapolation\nNow we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation; they represent the Chebyshev collocation points. Observe how the collocation points near the domain center are less dense than towards the boundaries.", "# Initialize animated plot\n# ---------------------------------------------------------------\nplt.figure(figsize=(10,6))\nline = plt.plot(x, p, 'k.', lw=2)\nplt.title('Chebyshev Method - 1D Elastic wave', size=16)\nplt.xlabel(' x(m)', size=14)\nplt.ylabel(' Amplitude ', size=14)\n\nplt.ion() # set interactive mode\nplt.show()\n# ---------------------------------------------------------------\n# Time extrapolation\n# ---------------------------------------------------------------\n# Differentiation matrix\nD = get_cheby_matrix(nx)\nfor it in range(nt):\n    # Space derivatives\n    dp = np.dot(D, p.T)\n    dp = mu/rho * dp\n    dp = D @ dp\n    \n    # Time extrapolation    \n    pnew = 2*p - pold + np.transpose(dp) * dt**2\n    \n    # Source injection\n    pnew = pnew + sg*src[it]*dt**2/rho\n    \n    # Remapping\n    pold, p = p, pnew\n    p[0] = 0; p[nx] = 0 # set boundaries pressure free \n\n    # --------------------------------------   \n    # Animation plot. Display solution\n    if not it % iplot: \n        for l in line:\n            l.remove()\n            del l \n        \n        # -------------------------------------- \n        # Display lines\n        line = plt.plot(x, p, 'k.', lw=1.5)\n        plt.gcf().canvas.draw()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kdmurray91/kwip-experiments
writeups/coalescent/50reps_2016-05-18/sqrt-dist.ipynb
mit
[ "from glob import glob\nfrom os import path\nimport re\nfrom skbio import DistanceMatrix\nimport pandas as pd\nimport numpy as np\nimport scipy as sp\n\nfrom kwipexpt import *\n%matplotlib inline\n%load_ext rpy2.ipython\n\n%%R\nlibrary(tidyr)\nlibrary(dplyr, warn.conflicts=F, quietly=T)\nlibrary(ggplot2)\nlibrary(reshape2)", "Calculate performance of kWIP\nThe next bit of python code calculates the performance of kWIP against the distance between samples calulcated from the alignments of their genomes.\nThis code caluclates spearman's $\\rho$ between the off-diagonal elements of the triagnular distance matrices.", "expts = list(map(lambda fp: path.basename(fp.rstrip('/')), glob('data/*/')))\nprint(\"Number of replicate experiments:\", len(expts))\n\ndef process_expt(expt):\n expt_results = []\n \n def extract_info(filename):\n return re.search(r'kwip/(\\d\\.?\\d*)x-(0\\.\\d+)-(wip|ip).dist', filename).groups()\n def r_sqrt(truth, dist):\n return sp.stats.pearsonr(truth, np.sqrt(dist))[0]\n def rho_sqrt(truth, dist):\n return sp.stats.spearmanr(truth, np.sqrt(dist)).correlation\n \n \n # dict of scale: distance matrix, populated as we go\n truths = {}\n \n truth_points = []\n sim_points = []\n for distfile in glob(\"data/{}/kwip/*.dist\".format(expt)):\n cov, scale, metric = extract_info(distfile)\n if scale not in truths:\n genome_dist_path = 'data/{ex}/all_genomes-{sc}.dist'.format(ex=expt, sc=scale)\n truths[scale] = load_sample_matrix_to_runs(genome_dist_path)\n exptmat = DistanceMatrix.read(distfile)\n rho = distmat_corr(truths[scale], exptmat, stats.spearmanr).correlation\n rho2 = distmat_corr(truths[scale], exptmat, rho_sqrt)\n r = distmat_corr(truths[scale], exptmat, stats.pearsonr)[0]\n r2 = distmat_corr(truths[scale], exptmat, r_sqrt)\n if cov == \"100\" and scale == \"0.001\" and metric == \"wip\":\n truth_points.append(truths[scale].condensed_form())\n sim_points.append(exptmat.condensed_form())\n expt_results.append({\n \"coverage\": cov,\n \"scale\": 
scale,\n \"metric\": metric,\n \"rho\": rho,\n \"rhosqrt\": rho2,\n \"r\": r,\n \"rsqrt\": r2,\n \"seed\": expt,\n })\n return expt_results, (truth_points, sim_points)\n\n#process_expt('3662')\n\nresults = []\ntruepoints = []\nsimpoints = []\nfor res in map(process_expt, expts):\n results.extend(res[0])\n truepoints.extend(res[1][0])\n simpoints.extend(res[1][1])\nresults = pd.DataFrame(results)\n\ntruepoints = np.concatenate(truepoints)\nsimpoints = np.concatenate(simpoints)\n\n%%R -i truepoints -i simpoints\n\nplot(truepoints, sqrt(simpoints), pch=\".\")", "Visualisation\nBelow we see a summary and structure of the data", "%%R -i results\n\nresults$coverage = as.numeric(as.character(results$coverage))\nresults$scale = as.numeric(as.character(results$scale))\n\nprint(summary(results))\nstr(results)\n\n%%R\n\n# AND AGAIN WITHOUT SUBSETTING\ndat = results %>%\n filter(scale==0.001, metric==\"wip\") %>%\n select(coverage, rho, r, rsqrt, rhosqrt)\nmdat = melt(dat, id.vars=\"coverage\", variable.name=\"measure\", value.name=\"corr\")\nmdat$coverage = as.factor(mdat$coverage)\n\nggplot(mdat, aes(x=coverage, y=corr)) +\n geom_boxplot() +\n facet_wrap(~measure) +\n theme_bw()" ]
[ "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive/10_recommend/cf_softmax_model/solution/cfmodel_softmax_model_solution.ipynb
apache-2.0
[ "Recommendation Systems with TensorFlow\nIntroduction\nIn this lab, we will create a movie recommendation system based on the MovieLens dataset available here. The data consists of movies ratings (on a scale of 1 to 5).\nSpecifically, we'll be using matrix factorization to learn user and movie embeddings. Concepts highlighted here are also available in the course on Recommendation Systems. \nObjectives\n\nExplore the MovieLens Data\nTrain a matrix factorization model\nInspect the Embeddings\nPerform Softmax model training", "# Ensure the right version of Tensorflow is installed.\n!pip freeze | grep tensorflow==2.6\n\nfrom __future__ import print_function\n\nimport numpy as np\nimport pandas as pd\nimport collections\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom IPython import display\nfrom matplotlib import pyplot as plt\nimport sklearn\nimport sklearn.manifold\nimport tensorflow.compat.v1 as tf\ntf.disable_v2_behavior()\n\n# Add some convenience functions to Pandas DataFrame.\npd.options.display.max_rows = 10\npd.options.display.float_format = '{:.3f}'.format\ndef mask(df, key, function):\n \"\"\"Returns a filtered dataframe, by applying function to key\"\"\"\n return df[function(df[key])]\n\ndef flatten_cols(df):\n df.columns = [' '.join(col).strip() for col in df.columns.values]\n return df\n\npd.DataFrame.mask = mask\npd.DataFrame.flatten_cols = flatten_cols\n\n#Let's install Altair for interactive visualizations\n\n!pip install git+git://github.com/altair-viz/altair.git\nimport altair as alt\nalt.data_transformers.enable('default', max_rows=None)\n#alt.renderers.enable('colab')\n", "We then download the MovieLens Data, and create DataFrames containing movies, users, and ratings.", "# Download MovieLens data.\nprint(\"Downloading movielens data...\")\nfrom urllib.request import urlretrieve\nimport zipfile\n\nurlretrieve(\"http://files.grouplens.org/datasets/movielens/ml-100k.zip\", \"movielens.zip\")\nzip_ref = zipfile.ZipFile('movielens.zip', 
\"r\")\nzip_ref.extractall()\nprint(\"Done. Dataset contains:\")\nprint(zip_ref.read('ml-100k/u.info'))\n\n# Load each data set (users, ratings, and movies).\nusers_cols = ['user_id', 'age', 'sex', 'occupation', 'zip_code']\nusers = pd.read_csv(\n 'ml-100k/u.user', sep='|', names=users_cols, encoding='latin-1')\n\nratings_cols = ['user_id', 'movie_id', 'rating', 'unix_timestamp']\nratings = pd.read_csv(\n 'ml-100k/u.data', sep='\\t', names=ratings_cols, encoding='latin-1')\n\n# The movies file contains a binary feature for each genre.\ngenre_cols = [\n \"genre_unknown\", \"Action\", \"Adventure\", \"Animation\", \"Children\", \"Comedy\",\n \"Crime\", \"Documentary\", \"Drama\", \"Fantasy\", \"Film-Noir\", \"Horror\",\n \"Musical\", \"Mystery\", \"Romance\", \"Sci-Fi\", \"Thriller\", \"War\", \"Western\"\n]\nmovies_cols = [\n 'movie_id', 'title', 'release_date', \"video_release_date\", \"imdb_url\"\n] + genre_cols\nmovies = pd.read_csv(\n 'ml-100k/u.item', sep='|', names=movies_cols, encoding='latin-1')\n\n# Since the ids start at 1, we shift them to start at 0. 
This will make handling of the\n# indices easier later\nusers[\"user_id\"] = users[\"user_id\"].apply(lambda x: str(x-1))\nmovies[\"movie_id\"] = movies[\"movie_id\"].apply(lambda x: str(x-1))\nmovies[\"year\"] = movies['release_date'].apply(lambda x: str(x).split('-')[-1])\nratings[\"movie_id\"] = ratings[\"movie_id\"].apply(lambda x: str(x-1))\nratings[\"user_id\"] = ratings[\"user_id\"].apply(lambda x: str(x-1))\nratings[\"rating\"] = ratings[\"rating\"].apply(lambda x: float(x))\n\n# Compute the number of movies to which a genre is assigned.\ngenre_occurences = movies[genre_cols].sum().to_dict()\n\n# Since some movies can belong to more than one genre, we create different\n# 'genre' columns as follows:\n# - all_genres: all the active genres of the movie.\n# - genre: randomly sampled from the active genres.\ndef mark_genres(movies, genres):\n def get_random_genre(gs):\n active = [genre for genre, g in zip(genres, gs) if g==1]\n if len(active) == 0:\n return 'Other'\n return np.random.choice(active)\n def get_all_genres(gs):\n active = [genre for genre, g in zip(genres, gs) if g==1]\n if len(active) == 0:\n return 'Other'\n return '-'.join(active)\n movies['genre'] = [\n get_random_genre(gs) for gs in zip(*[movies[genre] for genre in genres])]\n movies['all_genres'] = [\n get_all_genres(gs) for gs in zip(*[movies[genre] for genre in genres])]\n\nmark_genres(movies, genre_cols)\n\n# Create one merged DataFrame containing all the movielens data.\nmovielens = ratings.merge(movies, on='movie_id').merge(users, on='user_id')\n\n# Utility to split the data into training and test sets.\ndef split_dataframe(df, holdout_fraction=0.1):\n \"\"\"Splits a DataFrame into training and test sets.\n Args:\n df: a dataframe.\n holdout_fraction: fraction of dataframe rows to use in the test set.\n Returns:\n train: dataframe for training\n test: dataframe for testing\n \"\"\"\n test = df.sample(frac=holdout_fraction, replace=False)\n train = df[~df.index.isin(test.index)]\n return 
train, test", "Exploring the Movielens Data\nBefore we dive into model building, let's inspect our MovieLens dataset. It is usually helpful to understand the statistics of the dataset.\nUsers\nWe start by printing some basic statistics describing the numeric user features.", "users.describe()", "We can also print some basic statistics describing the categorical user features", "users.describe(include=[np.object])", "We can also create histograms to further understand the distribution of the users. We use Altair to create an interactive chart.", "# The following functions are used to generate interactive Altair charts.\n# We will display histograms of the data, sliced by a given attribute.\n\n# Create filters to be used to slice the data.\noccupation_filter = alt.selection_multi(fields=[\"occupation\"])\noccupation_chart = alt.Chart().mark_bar().encode(\n x=\"count()\",\n y=alt.Y(\"occupation:N\"),\n color=alt.condition(\n occupation_filter,\n alt.Color(\"occupation:N\", scale=alt.Scale(scheme='category20')),\n alt.value(\"lightgray\")),\n).properties(width=300, height=300, selection=occupation_filter)\n\n# A function that generates a histogram of filtered data.\ndef filtered_hist(field, label, filter):\n \"\"\"Creates a layered chart of histograms.\n The first layer (light gray) contains the histogram of the full data, and the\n second contains the histogram of the filtered data.\n Args:\n field: the field for which to generate the histogram.\n label: String label of the histogram.\n filter: an alt.Selection object to be used to filter the data.\n \"\"\"\n base = alt.Chart().mark_bar().encode(\n x=alt.X(field, bin=alt.Bin(maxbins=10), title=label),\n y=\"count()\",\n ).properties(\n width=300,\n )\n return alt.layer(\n base.transform_filter(filter),\n base.encode(color=alt.value('lightgray'), opacity=alt.value(.7)),\n ).resolve_scale(y='independent')\n", "Next, we look at the distribution of ratings per user. 
Clicking on an occupation in the right chart will filter the data by that occupation. The corresponding histogram is shown in blue, and superimposed with the histogram for the whole data (in light gray). You can use SHIFT+click to select multiple subsets.\nWhat do you observe, and how might this affect the recommendations?", "users_ratings = (\n ratings\n .groupby('user_id', as_index=False)\n .agg({'rating': ['count', 'mean']})\n .flatten_cols()\n .merge(users, on='user_id')\n)\n\n# Create a chart for the count, and one for the mean.\nalt.hconcat(\n filtered_hist('rating count', '# ratings / user', occupation_filter),\n filtered_hist('rating mean', 'mean user rating', occupation_filter),\n occupation_chart,\n data=users_ratings)", "Movies\nIt is also useful to look at information about the movies and their ratings.", "movies_ratings = movies.merge(\n ratings\n .groupby('movie_id', as_index=False)\n .agg({'rating': ['count', 'mean']})\n .flatten_cols(),\n on='movie_id')\n\ngenre_filter = alt.selection_multi(fields=['genre'])\ngenre_chart = alt.Chart().mark_bar().encode(\n x=\"count()\",\n y=alt.Y('genre'),\n color=alt.condition(\n genre_filter,\n alt.Color(\"genre:N\"),\n alt.value('lightgray'))\n).properties(height=300, selection=genre_filter)\n\n(movies_ratings[['title', 'rating count', 'rating mean']]\n .sort_values('rating count', ascending=False)\n .head(10))\n\n(movies_ratings[['title', 'rating count', 'rating mean']]\n .mask('rating count', lambda x: x > 20)\n .sort_values('rating mean', ascending=False)\n .head(10))", "Finally, the last chart shows the distribution of the number of ratings and average rating.", "# Display the number of ratings and average rating per movie.\nalt.hconcat(\n filtered_hist('rating count', '# ratings / movie', genre_filter),\n filtered_hist('rating mean', 'mean movie rating', genre_filter),\n genre_chart,\n data=movies_ratings)", "Preliminaries\nOur goal is to factorize the ratings matrix $A$ into the product of a user embedding 
matrix $U$ and movie embedding matrix $V$, such that $A \approx UV^\top$ with\n$U = \begin{bmatrix} u_{1} \ \hline \vdots \ \hline u_{N} \end{bmatrix}$ and\n$V = \begin{bmatrix} v_{1} \ \hline \vdots \ \hline v_{M} \end{bmatrix}$.\nHere\n- $N$ is the number of users,\n- $M$ is the number of movies,\n- $A_{ij}$ is the rating of the $j$th movie by the $i$th user,\n- each row $U_i$ is a $d$-dimensional vector (embedding) representing user $i$,\n- each row $V_j$ is a $d$-dimensional vector (embedding) representing movie $j$,\n- the prediction of the model for the $(i, j)$ pair is the dot product $\langle U_i, V_j \rangle$.\nSparse Representation of the Rating Matrix\nThe rating matrix could be very large and, in general, most of the entries are unobserved, since a given user will only rate a small subset of movies. For efficient representation, we will use a tf.SparseTensor. A SparseTensor uses three tensors to represent the matrix: tf.SparseTensor(indices, values, dense_shape) represents a tensor, where a value $A_{ij} = a$ is encoded by setting indices[k] = [i, j] and values[k] = a. The last tensor dense_shape is used to specify the shape of the full underlying matrix.\nToy example\nAssume we have $2$ users and $4$ movies. 
Our toy ratings dataframe has three ratings,\nuser_id | movie_id | rating\n--:|--:|--:\n0 | 0 | 5.0\n0 | 1 | 3.0\n1 | 3 | 1.0\nThe corresponding rating matrix is\n$$\nA =\n\\begin{bmatrix}\n5.0 & 3.0 & 0 & 0 \\\n0 & 0 & 0 & 1.0\n\\end{bmatrix}\n$$\nAnd the SparseTensor representation is,\npython\nSparseTensor(\n indices=[[0, 0], [0, 1], [1,3]],\n values=[5.0, 3.0, 1.0],\n dense_shape=[2, 4])\nExercise 1: Build a tf.SparseTensor representation of the Rating Matrix.\nIn this exercise, we'll write a function that maps from our ratings DataFrame to a tf.SparseTensor.\nHint: you can select the values of a given column of a Dataframe df using df['column_name'].values.", "#Solution\ndef build_rating_sparse_tensor(ratings_df):\n \"\"\"\n Args:\n ratings_df: a pd.DataFrame with `user_id`, `movie_id` and `rating` columns.\n Returns:\n a tf.SparseTensor representing the ratings matrix.\n \"\"\"\n indices = ratings_df[['user_id', 'movie_id']].values\n values = ratings_df['rating'].values\n return tf.SparseTensor(\n indices=indices,\n values=values,\n dense_shape=[users.shape[0], movies.shape[0]])", "Calculating the error\nThe model approximates the ratings matrix $A$ by a low-rank product $UV^\\top$. We need a way to measure the approximation error. We'll start by using the Mean Squared Error of observed entries only (we will revisit this later). 
It is defined as\n$$\n\begin{align}\n\text{MSE}(A, UV^\top)\n&= \frac{1}{|\Omega|}\sum_{(i, j) \in\Omega}{( A_{ij} - (UV^\top)_{ij})^2} \\n&= \frac{1}{|\Omega|}\sum_{(i, j) \in\Omega}{( A_{ij} - \langle U_i, V_j\rangle)^2}\n\end{align}\n$$\nwhere $\Omega$ is the set of observed ratings, and $|\Omega|$ is the cardinality of $\Omega$.\nExercise 2: Mean Squared Error\nWrite a TensorFlow function that takes a sparse rating matrix $A$ and the two embedding matrices $U, V$ and returns the mean squared error $\text{MSE}(A, UV^\top)$.\nHints:\n * in this section, we only consider observed entries when calculating the loss.\n * a SparseTensor sp_x is a tuple of three Tensors: sp_x.indices, sp_x.values and sp_x.dense_shape.\n * you may find tf.gather_nd and tf.losses.mean_squared_error helpful.", "#Solution\ndef sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):\n  \"\"\"\n  Args:\n    sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]\n    user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding\n      dimension, such that U_i is the embedding of user i.\n    movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding\n      dimension, such that V_j is the embedding of movie j.\n  Returns:\n    A scalar Tensor representing the MSE between the true ratings and the\n      model's predictions.\n  \"\"\"\n  predictions = tf.gather_nd(\n      tf.matmul(user_embeddings, movie_embeddings, transpose_b=True),\n      sparse_ratings.indices)\n  loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)\n  return loss", "Note: One approach is to compute the full prediction matrix $UV^\top$, then gather the entries corresponding to the observed pairs. The memory cost of this approach is $O(NM)$. 
For the MovieLens dataset, this is fine, as the dense $N \\times M$ matrix is small enough to fit in memory ($N = 943$, $M = 1682$).\nAnother approach (given in the alternate solution below) is to only gather the embeddings of the observed pairs, then compute their dot products. The memory cost is $O(|\\Omega| d)$ where $d$ is the embedding dimension. In our case, $|\\Omega| = 10^5$, and the embedding dimension is on the order of $10$, so the memory cost of both methods is comparable. But when the number of users or movies is much larger, the first approach becomes infeasible.", "#Alternate Solution\ndef sparse_mean_square_error(sparse_ratings, user_embeddings, movie_embeddings):\n \"\"\"\n Args:\n sparse_ratings: A SparseTensor rating matrix, of dense_shape [N, M]\n user_embeddings: A dense Tensor U of shape [N, k] where k is the embedding\n dimension, such that U_i is the embedding of user i.\n movie_embeddings: A dense Tensor V of shape [M, k] where k is the embedding\n dimension, such that V_j is the embedding of movie j.\n Returns:\n A scalar Tensor representing the MSE between the true ratings and the\n model's predictions.\n \"\"\"\n predictions = tf.reduce_sum(\n tf.gather(user_embeddings, sparse_ratings.indices[:, 0]) *\n tf.gather(movie_embeddings, sparse_ratings.indices[:, 1]),\n axis=1)\n loss = tf.losses.mean_squared_error(sparse_ratings.values, predictions)\n return loss", "Training a Matrix Factorization model\nCFModel (Collaborative Filtering Model) helper class\nThis is a simple class to train a matrix factorization model using stochastic gradient descent.\nThe class constructor takes\n- the user embeddings U (a tf.Variable).\n- the movie embeddings V, (a tf.Variable).\n- a loss to optimize (a tf.Tensor).\n- an optional list of metrics dictionaries, each mapping a string (the name of the metric) to a tensor. These are evaluated and plotted during training (e.g. 
training error and test error).\nAfter training, one can access the trained embeddings using the model.embeddings dictionary.\nExample usage:\nU_var = ...\nV_var = ...\nloss = ...\nmodel = CFModel(U_var, V_var, loss)\nmodel.train(iterations=100, learning_rate=1.0)\nuser_embeddings = model.embeddings['user_id']\nmovie_embeddings = model.embeddings['movie_id']", "\nclass CFModel(object):\n \"\"\"Simple class that represents a collaborative filtering model\"\"\"\n def __init__(self, embedding_vars, loss, metrics=None):\n \"\"\"Initializes a CFModel.\n Args:\n embedding_vars: A dictionary of tf.Variables.\n loss: A float Tensor. The loss to optimize.\n metrics: optional list of dictionaries of Tensors. The metrics in each\n dictionary will be plotted in a separate figure during training.\n \"\"\"\n self._embedding_vars = embedding_vars\n self._loss = loss\n self._metrics = metrics\n self._embeddings = {k: None for k in embedding_vars}\n self._session = None\n\n @property\n def embeddings(self):\n \"\"\"The embeddings dictionary.\"\"\"\n return self._embeddings\n\n def train(self, num_iterations=100, learning_rate=1.0, plot_results=True,\n optimizer=tf.train.GradientDescentOptimizer):\n \"\"\"Trains the model.\n Args:\n iterations: number of iterations to run.\n learning_rate: optimizer learning rate.\n plot_results: whether to plot the results at the end of training.\n optimizer: the optimizer to use. 
Default to GradientDescentOptimizer.\n Returns:\n The metrics dictionary evaluated at the last iteration.\n \"\"\"\n \n with self._loss.graph.as_default():\n opt = optimizer(learning_rate)\n train_op = opt.minimize(self._loss)\n local_init_op = tf.group(\n tf.variables_initializer(opt.variables()),\n tf.local_variables_initializer())\n if self._session is None:\n self._session = tf.Session()\n with self._session.as_default():\n self._session.run(tf.global_variables_initializer())\n self._session.run(tf.tables_initializer())\n #tf.train.start_queue_runners()\n\n with self._session.as_default():\n local_init_op.run()\n iterations = []\n metrics = self._metrics or ({},)\n metrics_vals = [collections.defaultdict(list) for _ in self._metrics]\n\n # Train and append results.\n for i in range(num_iterations + 1):\n _, results = self._session.run((train_op, metrics))\n if (i % 10 == 0) or i == num_iterations:\n print(\"\\r iteration %d: \" % i + \", \".join(\n [\"%s=%f\" % (k, v) for r in results for k, v in r.items()]),\n end='')\n iterations.append(i)\n for metric_val, result in zip(metrics_vals, results):\n for k, v in result.items():\n metric_val[k].append(v)\n\n for k, v in self._embedding_vars.items():\n self._embeddings[k] = v.eval()\n\n if plot_results:\n # Plot the metrics.\n num_subplots = len(metrics)+1\n fig = plt.figure()\n fig.set_size_inches(num_subplots*10, 8)\n for i, metric_vals in enumerate(metrics_vals):\n ax = fig.add_subplot(1, num_subplots, i+1)\n for k, v in metric_vals.items():\n ax.plot(iterations, v, label=k)\n ax.set_xlim([1, num_iterations])\n ax.legend()\n return results", "Exercise 3: Build a Matrix Factorization model and train it\nUsing your sparse_mean_square_error function, write a function that builds a CFModel by creating the embedding variables and the train and test losses.", "#Solution\ndef build_model(ratings, embedding_dim=3, init_stddev=1.):\n \"\"\"\n Args:\n ratings: a DataFrame of the ratings\n embedding_dim: the dimension of 
the embedding vectors.\n init_stddev: float, the standard deviation of the random initial embeddings.\n Returns:\n model: a CFModel.\n \"\"\"\n # Split the ratings DataFrame into train and test.\n train_ratings, test_ratings = split_dataframe(ratings)\n # SparseTensor representation of the train and test datasets.\n A_train = build_rating_sparse_tensor(train_ratings)\n A_test = build_rating_sparse_tensor(test_ratings)\n # Initialize the embeddings using a normal distribution.\n U = tf.Variable(tf.random.normal(\n [A_train.dense_shape[0], embedding_dim], stddev=init_stddev))\n V = tf.Variable(tf.random.normal(\n [A_train.dense_shape[1], embedding_dim], stddev=init_stddev))\n train_loss = sparse_mean_square_error(A_train, U, V)\n test_loss = sparse_mean_square_error(A_test, U, V)\n metrics = {\n 'train_error': train_loss,\n 'test_error': test_loss\n }\n embeddings = {\n \"user_id\": U,\n \"movie_id\": V\n }\n return CFModel(embeddings, train_loss, [metrics])", "Great, now it's time to train the model!\nGo ahead and run the next cell, trying different parameters (embedding dimension, learning rate, iterations). The training and test errors are plotted at the end of training. You can inspect these values to validate the hyper-parameters.\nNote: by calling model.train again, the model will continue training starting from the current values of the embeddings.", "# Build the CF model and train it.\nmodel = build_model(ratings, embedding_dim=30, init_stddev=0.5)\n\n\nmodel.train(num_iterations=1000, learning_rate=10.)", "The movie and user embeddings are also displayed in the right figure. When the embedding dimension is greater than 3, the embeddings are projected on the first 3 dimensions. 
The next section will have a more detailed look at the embeddings.\nInspecting the Embeddings\nIn this section, we take a closer look at the learned embeddings, by\n- computing your recommendations\n- looking at the nearest neighbors of some movies,\n- looking at the norms of the movie embeddings,\n- visualizing the embedding in a projected embedding space.\nExercise 4: Write a function that computes the scores of the candidates\nWe start by writing a function that, given a query embedding $u \\in \\mathbb R^d$ and item embeddings $V \\in \\mathbb R^{N \\times d}$, computes the item scores.\nAs discussed in the lecture, there are different similarity measures we can use, and these can yield different results. We will compare the following:\n- dot product: the score of item j is $\\langle u, V_j \\rangle$.\n- cosine: the score of item j is $\\frac{\\langle u, V_j \\rangle}{\\|u\\|\\|V_j\\|}$.\nHints:\n- you can use np.dot to compute the product of two np.Arrays.\n- you can use np.linalg.norm to compute the norm of a np.Array.", "DOT = 'dot'\nCOSINE = 'cosine'\ndef compute_scores(query_embedding, item_embeddings, measure=DOT):\n \"\"\"Computes the scores of the candidates given a query.\n Args:\n query_embedding: a vector of shape [k], representing the query embedding.\n item_embeddings: a matrix of shape [N, k], such that row i is the embedding\n of item i.\n measure: a string specifying the similarity measure to be used. 
Can be\n either DOT or COSINE.\n Returns:\n scores: a vector of shape [N], such that scores[i] is the score of item i.\n \"\"\"\n u = query_embedding\n V = item_embeddings\n if measure == COSINE:\n V = V / np.linalg.norm(V, axis=1, keepdims=True)\n u = u / np.linalg.norm(u)\n scores = u.dot(V.T)\n return scores", "Equipped with this function, we can compute recommendations, where the query embedding can be either a user embedding or a movie embedding.", "\ndef user_recommendations(model, measure=DOT, exclude_rated=False, k=6):\n if USER_RATINGS:\n scores = compute_scores(\n model.embeddings[\"user_id\"][943], model.embeddings[\"movie_id\"], measure)\n score_key = measure + ' score'\n df = pd.DataFrame({\n score_key: list(scores),\n 'movie_id': movies['movie_id'],\n 'titles': movies['title'],\n 'genres': movies['all_genres'],\n })\n if exclude_rated:\n # remove movies that are already rated\n rated_movies = ratings[ratings.user_id == \"943\"][\"movie_id\"].values\n df = df[df.movie_id.apply(lambda movie_id: movie_id not in rated_movies)]\n display.display(df.sort_values([score_key], ascending=False).head(k)) \n\ndef movie_neighbors(model, title_substring, measure=DOT, k=6):\n # Search for movie ids that match the given substring.\n ids = movies[movies['title'].str.contains(title_substring)].index.values\n titles = movies.iloc[ids]['title'].values\n if len(titles) == 0:\n raise ValueError(\"Found no movies with title %s\" % title_substring)\n print(\"Nearest neighbors of : %s.\" % titles[0])\n if len(titles) > 1:\n print(\"[Found more than one matching movie. 
", \".join(titles[1:])))\n movie_id = ids[0]\n scores = compute_scores(\n model.embeddings[\"movie_id\"][movie_id], model.embeddings[\"movie_id\"],\n measure)\n score_key = measure + ' score'\n df = pd.DataFrame({\n score_key: list(scores),\n 'titles': movies['title'],\n 'genres': movies['all_genres']\n })\n display.display(df.sort_values([score_key], ascending=False).head(k))", "Movie Nearest neighbors\nLet's look at the nearest neighbors for some of the movies.", "movie_neighbors(model, \"Aladdin\", DOT)\nmovie_neighbors(model, \"Aladdin\", COSINE)", "It seems that the quality of learned embeddings may not be very good. Can you think of potential techniques that could be used to improve them? We can start by inspecting the embeddings.\nMovie Embedding Norm\nWe can also observe that the recommendations with dot-product and cosine are different: with dot-product, the model tends to recommend popular movies. This can be explained by the fact that in matrix factorization models, the norm of the embedding is often correlated with popularity (popular movies have a larger norm), which makes it more likely to recommend more popular items. 
We can confirm this hypothesis by sorting the movies by their embedding norm, as done in the next cell.", "\ndef movie_embedding_norm(models):\n \"\"\"Visualizes the norm and number of ratings of the movie embeddings.\n Args:\n model: A MFModel object.\n \"\"\"\n if not isinstance(models, list):\n models = [models]\n df = pd.DataFrame({\n 'title': movies['title'],\n 'genre': movies['genre'],\n 'num_ratings': movies_ratings['rating count'],\n })\n charts = []\n brush = alt.selection_interval()\n for i, model in enumerate(models):\n norm_key = 'norm'+str(i)\n df[norm_key] = np.linalg.norm(model.embeddings[\"movie_id\"], axis=1)\n nearest = alt.selection(\n type='single', encodings=['x', 'y'], on='mouseover', nearest=True,\n empty='none')\n base = alt.Chart().mark_circle().encode(\n x='num_ratings',\n y=norm_key,\n color=alt.condition(brush, alt.value('#4c78a8'), alt.value('lightgray'))\n ).properties(\n selection=nearest).add_selection(brush)\n text = alt.Chart().mark_text(align='center', dx=5, dy=-5).encode(\n x='num_ratings', y=norm_key,\n text=alt.condition(nearest, 'title', alt.value('')))\n charts.append(alt.layer(base, text))\n return alt.hconcat(*charts, data=df)\n\ndef visualize_movie_embeddings(data, x, y):\n nearest = alt.selection(\n type='single', encodings=['x', 'y'], on='mouseover', nearest=True,\n empty='none')\n base = alt.Chart().mark_circle().encode(\n x=x,\n y=y,\n color=alt.condition(genre_filter, \"genre\", alt.value(\"whitesmoke\")),\n ).properties(\n width=600,\n height=600,\n selection=nearest)\n text = alt.Chart().mark_text(align='left', dx=5, dy=-5).encode(\n x=x,\n y=y,\n text=alt.condition(nearest, 'title', alt.value('')))\n return alt.hconcat(alt.layer(base, text), genre_chart, data=data)\n\ndef tsne_movie_embeddings(model):\n \"\"\"Visualizes the movie embeddings, projected using t-SNE with Cosine measure.\n Args:\n model: A MFModel object.\n \"\"\"\n tsne = sklearn.manifold.TSNE(\n n_components=2, perplexity=40, metric='cosine', 
early_exaggeration=10.0,\n init='pca', verbose=True, n_iter=400)\n\n print('Running t-SNE...')\n V_proj = tsne.fit_transform(model.embeddings[\"movie_id\"])\n movies.loc[:,'x'] = V_proj[:, 0]\n movies.loc[:,'y'] = V_proj[:, 1]\n return visualize_movie_embeddings(movies, 'x', 'y')\n\nmovie_embedding_norm(model)", "Note: Depending on how the model is initialized, you may observe that some niche movies (ones with few ratings) have a high norm, leading to spurious recommendations. This can happen if the embedding of that movie happens to be initialized with a high norm. Then, because the movie has few ratings, it is infrequently updated, and can keep its high norm. This can be alleviated by using regularization.\nTry changing the value of the hyperparameter init_stddev. One quantity that can be helpful is that the expected norm of a $d$-dimensional vector with entries $\\sim \\mathcal N(0, \\sigma^2)$ is approximately $\\sigma \\sqrt d$.\nHow does this affect the embedding norm distribution, and the ranking of the top-norm movies?", "model_lowinit = build_model(ratings, embedding_dim=30, init_stddev=0.05)\nmodel_lowinit.train(num_iterations=1000, learning_rate=10.)\nmovie_neighbors(model_lowinit, \"Aladdin\", DOT)\nmovie_neighbors(model_lowinit, \"Aladdin\", COSINE)\nmovie_embedding_norm([model, model_lowinit])", "Embedding visualization\nSince it is hard to visualize embeddings in a higher-dimensional space (when the embedding dimension $k > 3$), one approach is to project the embeddings to a lower dimensional space. T-SNE (T-distributed Stochastic Neighbor Embedding) is an algorithm that projects the embeddings while attempting to preserve their pairwise distances. It can be useful for visualization, but one should use it with care. 
For more information on using t-SNE, see How to Use t-SNE Effectively.", "tsne_movie_embeddings(model_lowinit)", "You can highlight the embeddings of a given genre by clicking on the genres panel (SHIFT+click to select multiple genres).\nWe can observe that the embeddings do not seem to have any notable structure, and the embeddings of a given genre are located all over the embedding space. This confirms the poor quality of the learned embeddings. One of the main reasons is that we only trained the model on observed pairs, and without regularization.\nSoftmax model\nIn this section, we will train a simple softmax model that predicts whether a given user has rated a movie.\nThe model will take as input a feature vector $x$ representing the list of movies the user has rated. We start from the ratings DataFrame, which we group by user_id.", "rated_movies = (ratings[[\"user_id\", \"movie_id\"]]\n .groupby(\"user_id\", as_index=False)\n .aggregate(lambda x: list(x)))\nrated_movies.head()", "We then create a function that generates an example batch, such that each example contains the following features:\n- movie_id: A tensor of strings of the movie ids that the user rated.\n- genre: A tensor of strings of the genres of those movies\n- year: A tensor of strings of the release year.", "#@title Batch generation code (run this cell)\nyears_dict = {\n movie: year for movie, year in zip(movies[\"movie_id\"], movies[\"year\"])\n}\ngenres_dict = {\n movie: genres.split('-')\n for movie, genres in zip(movies[\"movie_id\"], movies[\"all_genres\"])\n}\n\ndef make_batch(ratings, batch_size):\n \"\"\"Creates a batch of examples.\n Args:\n ratings: A DataFrame of ratings such that examples[\"movie_id\"] is a list of\n movies rated by a user.\n batch_size: The batch size.\n \"\"\"\n def pad(x, fill):\n return pd.DataFrame.from_dict(x).fillna(fill).values\n\n movie = []\n year = []\n genre = []\n label = []\n for movie_ids in ratings[\"movie_id\"].values:\n movie.append(movie_ids)\n 
genre.append([x for movie_id in movie_ids for x in genres_dict[movie_id]])\n year.append([years_dict[movie_id] for movie_id in movie_ids])\n label.append([int(movie_id) for movie_id in movie_ids])\n features = {\n \"movie_id\": pad(movie, \"\"),\n \"year\": pad(year, \"\"),\n \"genre\": pad(genre, \"\"),\n \"label\": pad(label, -1)\n }\n batch = (\n tf.data.Dataset.from_tensor_slices(features)\n .shuffle(1000)\n .repeat()\n .batch(batch_size)\n .make_one_shot_iterator()\n .get_next())\n return batch\n\ndef select_random(x):\n \"\"\"Selects a random element from each row of x.\"\"\"\n def to_float(x):\n return tf.cast(x, tf.float32)\n def to_int(x):\n return tf.cast(x, tf.int64)\n batch_size = tf.shape(x)[0]\n rn = tf.range(batch_size)\n nnz = to_float(tf.count_nonzero(x >= 0, axis=1))\n rnd = tf.random_uniform([batch_size])\n ids = tf.stack([to_int(rn), to_int(nnz * rnd)], axis=1)\n return to_int(tf.gather_nd(x, ids))\n", "Loss function\nRecall that the softmax model maps the input features $x$ to a user embedding $\\psi(x) \\in \\mathbb R^d$, where $d$ is the embedding dimension. 
This vector is then multiplied by a movie embedding matrix $V \\in \\mathbb R^{m \\times d}$ (where $m$ is the number of movies), and the final output of the model is the softmax of the product\n$$\n\\hat p(x) = \\text{softmax}(\\psi(x) V^\\top).\n$$\nGiven a target label $y$, if we denote by $p = 1_y$ a one-hot encoding of this target label, then the loss is the cross-entropy between $\\hat p(x)$ and $p$.\nExercise 5: Write a loss function for the softmax model.\nIn this exercise, we will write a function that takes tensors representing the user embeddings $\\psi(x)$, movie embeddings $V$, target label $y$, and returns the cross-entropy loss.\nHint: You can use the function tf.nn.sparse_softmax_cross_entropy_with_logits, which takes logits as input, where logits refers to the product $\\psi(x) V^\\top$.", "#Solution\ndef softmax_loss(user_embeddings, movie_embeddings, labels):\n \"\"\"Returns the cross-entropy loss of the softmax model.\n Args:\n user_embeddings: A tensor of shape [batch_size, embedding_dim].\n movie_embeddings: A tensor of shape [num_movies, embedding_dim].\n labels: A tensor of [batch_size], such that labels[i] is the target label\n for example i.\n Returns:\n The mean cross-entropy loss.\n \"\"\"\n # Verify that the embeddings have compatible dimensions\n user_emb_dim = user_embeddings.shape[1].value\n movie_emb_dim = movie_embeddings.shape[1].value\n if user_emb_dim != movie_emb_dim:\n raise ValueError(\n \"The user embedding dimension %d should match the movie embedding \"\n \"dimension %d\" % (user_emb_dim, movie_emb_dim))\n\n logits = tf.matmul(user_embeddings, movie_embeddings, transpose_b=True)\n loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(\n logits=logits, labels=labels))\n return loss", "Exercise 6: Build a softmax model, train it, and inspect its embeddings.\nWe are now ready to build a softmax CFModel. Complete the build_softmax_model function in the next cell. 
The architecture of the model is defined in the function create_network and illustrated in the figure below. The input embeddings (movie_id, genre and year) are concatenated to form the input layer, then we have hidden layers with dimensions specified by the hidden_dims argument. Finally, the last hidden layer is multiplied by the movie embeddings to obtain the logits layer. For the target label, we will use a randomly-sampled movie_id from the list of movies the user rated.\n\nComplete the function below by creating the feature columns and embedding columns, then creating the loss tensors both for the train and test sets (using the softmax_loss function of the previous exercise).", "# Solution\n\ndef build_softmax_model(rated_movies, embedding_cols, hidden_dims):\n \"\"\"Builds a Softmax model for MovieLens.\n Args:\n rated_movies: DataFrame of training examples.\n embedding_cols: A dictionary mapping feature names (string) to embedding\n column objects. This will be used in tf.feature_column.input_layer() to\n create the input layer.\n hidden_dims: int list of the dimensions of the hidden layers.\n Returns:\n A CFModel object.\n \"\"\"\n def create_network(features):\n \"\"\"Maps input features dictionary to user embeddings.\n Args:\n features: A dictionary of input string tensors.\n Returns:\n outputs: A tensor of shape [batch_size, embedding_dim].\n \"\"\"\n # Create a bag-of-words embedding for each sparse feature.\n inputs = tf.feature_column.input_layer(features, embedding_cols)\n # Hidden layers.\n input_dim = inputs.shape[1].value\n for i, output_dim in enumerate(hidden_dims):\n w = tf.get_variable(\n \"hidden%d_w_\" % i, shape=[input_dim, output_dim],\n initializer=tf.truncated_normal_initializer(\n stddev=1./np.sqrt(output_dim))) / 10.\n outputs = tf.matmul(inputs, w)\n input_dim = output_dim\n inputs = outputs\n return outputs\n\n train_rated_movies, test_rated_movies = split_dataframe(rated_movies)\n train_batch = make_batch(train_rated_movies, 
200)\n test_batch = make_batch(test_rated_movies, 100)\n\n with tf.variable_scope(\"model\", reuse=False):\n # Train\n train_user_embeddings = create_network(train_batch)\n train_labels = select_random(train_batch[\"label\"])\n with tf.variable_scope(\"model\", reuse=True):\n # Test\n test_user_embeddings = create_network(test_batch)\n test_labels = select_random(test_batch[\"label\"])\n movie_embeddings = tf.get_variable(\n \"input_layer/movie_id_embedding/embedding_weights\")\n\n test_loss = softmax_loss(\n test_user_embeddings, movie_embeddings, test_labels)\n train_loss = softmax_loss(\n train_user_embeddings, movie_embeddings, train_labels)\n _, test_precision_at_10 = tf.metrics.precision_at_k(\n labels=test_labels,\n predictions=tf.matmul(test_user_embeddings, movie_embeddings, transpose_b=True),\n k=10)\n\n metrics = (\n {\"train_loss\": train_loss, \"test_loss\": test_loss},\n {\"test_precision_at_10\": test_precision_at_10}\n )\n embeddings = {\"movie_id\": movie_embeddings}\n return CFModel(embeddings, train_loss, metrics)", "Train the Softmax model\nWe are now ready to train the softmax model. You can set the following hyperparameters:\n- learning rate\n- number of iterations. Note: you can run softmax_model.train() again to continue training the model from its current state.\n- input embedding dimensions (the input_dims argument)\n- number of hidden layers and size of each layer (the hidden_dims argument)\nNote: since our input features are string-valued (movie_id, genre, and year), we need to map them to integer ids. This is done using tf.feature_column.categorical_column_with_vocabulary_list, which takes a vocabulary list specifying all the values the feature can take. 
Then each id is mapped to an embedding vector using tf.feature_column.embedding_column.", "# Create feature embedding columns\ndef make_embedding_col(key, embedding_dim):\n categorical_col = tf.feature_column.categorical_column_with_vocabulary_list(\n key=key, vocabulary_list=list(set(movies[key].values)), num_oov_buckets=0)\n return tf.feature_column.embedding_column(\n categorical_column=categorical_col, dimension=embedding_dim,\n # default initializer: truncated normal with stddev=1/sqrt(dimension)\n combiner='mean')\n\nwith tf.Graph().as_default():\n softmax_model = build_softmax_model(\n rated_movies,\n embedding_cols=[\n make_embedding_col(\"movie_id\", 35),\n make_embedding_col(\"genre\", 3),\n make_embedding_col(\"year\", 2),\n ],\n hidden_dims=[35])\n\nsoftmax_model.train(\n learning_rate=8., num_iterations=3000, optimizer=tf.train.AdagradOptimizer)", "Inspect the embeddings\nWe can inspect the movie embeddings as we did for the previous models. Note that in this case, the movie embeddings are used at the same time as input embeddings (for the bag of words representation of the user history), and as softmax weights.", "movie_neighbors(softmax_model, \"Aladdin\", DOT)\nmovie_neighbors(softmax_model, \"Aladdin\", COSINE)\n\nmovie_embedding_norm(softmax_model)\n\ntsne_movie_embeddings(softmax_model)", "Congratulations!\nYou have completed this lab.\nIf you would like to further explore these models, we encourage you to try different hyperparameters and observe how this affects the quality of the model and the structure of the embedding space. Here are some suggestions:\n- Change the embedding dimension.\n- In the softmax model: change the number of hidden layers, and the input features. 
For example, you can try a model with no hidden layers, and only the movie ids as inputs.\n- Using other similarity measures: In this notebook, we used dot product $d(u, V_j) = \\langle u, V_j \\rangle$ and cosine $d(u, V_j) = \\frac{\\langle u, V_j \\rangle}{\\|u\\|\\|V_j\\|}$, and discussed how the norms of the embeddings affect the recommendations. You can also try other variants which apply a transformation to the norm, for example $d(u, V_j) = \\frac{\\langle u, V_j \\rangle}{\\|V_j\\|^\\alpha}$.\nChallenge\nWith everything you learned during the Advanced Machine Learning on Google Cloud, can you try and push the model to the AI Platform for predictions?\nCopyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
cesarcontre/Simulacion2017
Modulo3/Clase23_Repaso1(Mod.3).ipynb
mit
[ "Review (Module 3)\n\nThe main topic in this module was optimization. By the end of this module, you are expected to have the following competencies\n- Optimize scalar functions over a given domain using sympy.\n- Given a linear programming problem, bring it to the form we saw in class and solve it.\n- Fit curves to given sets of points.\n- Design binary classifiers with logistic regression for linearly separable datasets.\n\nExample 1. Optimization of scalar functions using sympy\nIn class we saw how to optimize scalar functions over a closed, finite interval using sympy. In addition, you did a homework assignment where you wrote a generic function to optimize any given function.\nIn this example we review how to optimize this type of function.\n1.1\nFind the absolute maximum and minimum of the function\n$$f(x)=2x^4-16x^3+32x^2+5$$\non the interval $[-1, 4.5]$. Plot the function on this interval, together with the point where the maximum occurs (in red) and the point where the minimum occurs (in blue).\nJust a reminder of how to print in LaTeX format.", "# Symbolic computation library\nimport sympy as sym\n# To print in TeX format\nfrom sympy import init_printing; init_printing(use_latex='mathjax')", "What other libraries do we need?", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "What comes next?\n- Declare the variable and the function.", "def f(x):\n return 2*x**4-16*x**3+32*x**2+5\n\nsym.var('x', real = True)\n\nf(x)", "Take the derivative and set it equal to zero.", "df = sym.diff(f(x), x)\nxc = sym.solve(df, x)\nxc", "Evaluate at the endpoints and at the critical points.", "f(-1), f(4.5), f(xc[0]), f(xc[1]), f(xc[2])", "The largest value is the absolute maximum and the smallest is the absolute minimum. 
Plot.", "xnum = np.linspace(-1, 4.5, 100)\n\nplt.figure(figsize=(8,6))\nplt.plot(xnum, f(xnum), 'k', lw = 2, label = '$y=f(x)$')\nplt.plot([-1], [f(-1)], '*r', ms = 10, label = '$max_{-1\\leq x\\leq 4.5}f(x)=55$')\nplt.plot([xc[0], xc[2]], [f(xc[0]), f(xc[2])], '*b', ms = 10, label = '$min_{-1\\leq x\\leq 4.5}f(x)=5$')\nplt.legend(loc='best')\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.show()", "1.2\nFind the angle $\\theta$ that maximizes the area of an isosceles trapezoid whose shorter base and lateral segments each measure exactly one unit.\nPlot the area as a function of the angle $\\theta$ on the interval where it makes sense, together with the point where the maximum occurs (in red).\n\nDeclare the variable and the function.", "def A(theta):\n return (1+sym.sin(theta))*sym.cos(theta)\n\nsym.var('theta', real = True)\n\nA(theta)", "Take the derivative and set it equal to zero.", "dA = sym.diff(A(theta), theta)\ndA\n\nthetac = sym.solve(dA, theta)\nthetac", "Evaluate at the endpoints and at the critical points.", "A(0), A(thetac[1]), A(np.pi/2)", "The largest value is the absolute maximum.", "thetanum = np.linspace(0, np.pi/2, 100)\nAnum = sym.lambdify([theta], A(theta), 'numpy')\n\nplt.figure(figsize=(8,6))\nplt.plot(thetanum, Anum(thetanum), 'k', lw = 2, label = '$y=A(\\theta)$')\nplt.plot([thetac[1]], [A(thetac[1])], '*r', ms = 10, label = '$max_{0\\leq x\\leq \\pi/2}A(\\theta)$')\nplt.legend(loc='best')\nplt.xlabel('$\\theta$')\nplt.ylabel('$y$')\nplt.show()", "Example 2. Linear programming\nIn class we saw how to bring linear programming problems to the form\n\\begin{equation}\n\\begin{array}{ll}\n\\min_{\\boldsymbol{x}} & \\boldsymbol{f}^T\\boldsymbol{x} \\\n\\text{s. t. 
} & \\boldsymbol{A}_{eq}\\boldsymbol{x}=\\boldsymbol{b}_{eq} \\\n & \\boldsymbol{A}\\boldsymbol{x}\\leq\\boldsymbol{b}.\n\\end{array}\n\\end{equation}\nWe also learned to solve problems in this form with the linprog function from the pyomo_utilities.py package, providing only the parameters $\\boldsymbol{f}$, $\\boldsymbol{A}$ and $\\boldsymbol{b}$ ($\\boldsymbol{A}_{eq}$ and $\\boldsymbol{b}_{eq}$, if needed). \n2.1\nMaximize the function $x_1+2x_2+3x_3+4x_4+5$ subject to the constraints $4x_1+3x_2+2x_3+x_4\\leq10$, $x_1-x_3+2x_4=2$, $x_1+x_2+x_3+x_4\\geq1$, $x_1\\geq0$, $x_2\\geq0$, $x_3\\geq0$, $x_4\\geq0$.", "f = -np.arange(1, 5)\nA = np.array([[4, 3, 2, 1], [-1, -1, -1, -1]])\nA = np.concatenate((A, -np.eye(4)), axis = 0)\nb = np.array([10, -1, 0, 0, 0, 0])\nAeq = np.array([[1, 0, -1, 2]])\nbeq = np.array([2])\n\nimport pyomo_utilities\n\nx, obj = pyomo_utilities.linprog(f, A, b, Aeq, beq)\n\nx\n\nobj = -obj + 5\nobj", "2.2\nAn ideal diet must satisfy (or possibly exceed) certain basic nutritional requirements at the lowest possible cost, be varied, and be palatable. 
How can we formulate such a diet?\nSuppose that we only have access to the following foods:", "import pandas as pd\n\ndf = pd.DataFrame(columns=['Energia', 'Proteina', 'Calcio', 'Precio', 'Limite_diario'], index=['Avena', 'Pollo', 'Huevos', 'Leche', 'Pastel', 'Frijoles_cerdo'])\ndf.loc[:,'Energia']=[110, 205, 160, 160, 420, 260]\ndf.loc[:,'Proteina']=[4, 32, 13, 8, 4, 14]\ndf.loc[:,'Calcio']=[2, 12, 54, 285, 22, 80]\ndf.loc[:,'Precio']=[3, 24, 13, 9, 20, 19]\ndf.loc[:,'Limite_diario']=[4, 3, 2, 8, 2, 2]\n\ndf", "After consulting nutrition experts, we find that a satisfactory diet has at least $2000$ kcal of energy, $55$ g of protein, and $800$ mg of calcium.\nTo enforce variety, it has been decided to limit the number of daily servings of each food as indicated in the table.", "f = np.array(df.loc[:,'Precio'])\nA = -np.array([df.loc[:,'Energia'], df.loc[:,'Proteina'], df.loc[:,'Calcio']])\nA = np.concatenate((A, np.eye(6)), axis = 0)\nA = np.concatenate((A, -np.eye(6)), axis = 0)\nb = np.array([-2000, -55, -800])\nb = np.concatenate((b, df.loc[:,'Limite_diario']))\nb = np.concatenate((b, np.zeros((6,))))\n\nx, obj = pyomo_utilities.linprog(f, A, b)\n\nx\n\nobj", "Example 3. Curve fitting\nThe file forest_mex.csv contains annual historical data on the percentage of forest area in Mexico. The first column corresponds to the years and the second to the percentage of forest area.\nTaken from: https://data.worldbank.org/indicator/AG.LND.FRST.ZS?view=chart.\nUsing the years as the independent variable $x$ and the percentage of forest area as the dependent variable $y$, fit polynomials of degree 1 through 3.\nShow in a single plot the forest-area percentage data against the years, together with the fitted polynomials.\nPlot the cumulative squared error against the number of terms. 
Which polynomial fits best?\nWith the polynomials fitted in the previous step, estimate the year in which Mexico will run out of forest area (assuming everything stays the same).", "data_file = 'forest_mex.csv'\ndata = pd.read_csv(data_file, header = None)\n\nx = data.iloc[:, 0].values\ny = data.iloc[:, 1].values\n\nplt.figure()\nplt.plot(x, y)\nplt.show()\n\nbeta1 = pyomo_utilities.curve_polyfit(x, y, 1)\nbeta2 = pyomo_utilities.curve_polyfit(x, y, 2)\nbeta3 = pyomo_utilities.curve_polyfit(x, y, 3)\n\nyhat1 = beta1.dot(np.array([x**i for i in range(2)]))\nyhat2 = beta2.dot(np.array([x**i for i in range(3)]))\nyhat3 = beta3.dot(np.array([x**i for i in range(4)]))\n\nplt.figure(figsize = (8,6))\nplt.plot(x, y, '*b', label = 'datos reales')\nplt.plot(x, yhat1, '-r', label = 'ajuste 1')\nplt.plot(x, yhat2, '-g', label = 'ajuste 2')\nplt.plot(x, yhat3, '-k', label = 'ajuste 3')\nplt.legend(loc = 'best')\nplt.xlabel('$x$')\nplt.ylabel('$y$')\nplt.show()\n\nems = []\nems.append(sum((y-yhat1)**2))\nems.append(sum((y-yhat2)**2))\nems.append(sum((y-yhat3)**2))\n\nplt.figure(figsize = (8,6))\nplt.plot(np.arange(3)+1, ems, '*b')\nplt.xlabel('$n$')\nplt.ylabel('$e_{ms}(n)$')\nplt.show()", "The polynomial that fits best is the cubic one.", "sym.var('a', real = True)\n\nAf1 = beta1[0]+beta1[1]*a\nAf2 = beta2[0]+beta2[1]*a+beta2[2]*a**2\nAf3 = beta3[0]+beta3[1]*a+beta3[2]*a**2+beta3[3]*a**3\n\nAf1, Af2, Af3\n\na1 = sym.solve(Af1, a)\na2 = sym.solve(Af2, a)\na3 = sym.solve(Af3, a)\n\na1\n\na2\n\na3", "Example 4. Binary classifier\nSo far we have seen how to design binary classifiers given a training set. 
However, we have not yet put them to use.\nAfter designing a classifier, the next step is to feed it input data and have it classify them.\nFor the data from the binary classification task, we will design a binary classifier by linear logistic regression using only the first 80 data points.\nThen, we will use the designed classifier to classify the remaining 20 data points. How many are classified correctly? How many incorrectly?", "X = 10*np.random.random((100, 2))\nY = (X[:, 1] > X[:, 0]**2)*1\n\nplt.figure(figsize = (8,6))\nplt.scatter(X[:,0], X[:,1], c=Y)\nplt.show()\n\nXe = X[0:80, :]\nYe = Y[0:80]\n\nB = pyomo_utilities.logreg_clas(Xe, Ye)\n\nB\n\nx = np.arange(0, 10, 0.01)\ny = np.arange(0, 10, 0.01)\nXm, Ym = np.meshgrid(x, y)\nm,n = np.shape(Xm)\nXmr = np.reshape(Xm,(m*n,1))\nYmr = np.reshape(Ym,(m*n,1))\n\nXa = np.append(np.ones((len(Ymr),1)), Xmr, axis=1)\nXa = np.append(Xa,Ymr,axis=1)\n\ndef fun_log(z):\n return 1/(1+np.exp(-z))\ndef reg_log(B,Xa):\n return fun_log(Xa.dot(B))\n\nYg = reg_log(B,Xa)\nZ = np.reshape(Yg, (m,n))\nZ = np.round(Z)\n\nplt.figure(figsize=(10,10))\nplt.contour(Xm, Ym, Z)\nplt.scatter(Xe[:, 0], Xe[:, 1], c=Ye, edgecolors='w')\nplt.show()\n\nXp = X[80:, :]\nYp = Y[80:]\n\nXpa = np.append(np.ones((len(Yp),1)), Xp, axis=1)\nYhat = np.round(reg_log(B,Xpa))\n\nYhat\n\nYp\n\nmalos = sum(np.abs(Yhat-Yp))\n\nmalos", "<script>\n $(document).ready(function(){\n $('div.prompt').hide();\n $('div.back-to-top').hide();\n $('nav#menubar').hide();\n $('.breadcrumb').hide();\n $('.hidden-print').hide();\n });\n</script>\n\n<footer id=\"attribution\" style=\"float:right; color:#808080; background:#fff;\">\nCreated with Jupyter by Esteban Jiménez Rodríguez.\n</footer>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
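The diet model in the notebook above is solved with the course's local `pyomo_utilities.linprog` helper, which is not generally available. As a hedged, self-contained sketch of the same linear program (the food data are copied from the notebook's table; SciPy is assumed to be installed, and `scipy.optimize.linprog` is used in place of the Pyomo helper):

```python
import numpy as np
from scipy.optimize import linprog

# Food data copied from the notebook's table, in the order
# [Avena, Pollo, Huevos, Leche, Pastel, Frijoles_cerdo]
price   = np.array([3, 24, 13, 9, 20, 19])          # cost per serving
energy  = np.array([110, 205, 160, 160, 420, 260])  # kcal per serving
protein = np.array([4, 32, 13, 8, 4, 14])           # g per serving
calcium = np.array([2, 12, 54, 285, 22, 80])        # mg per serving
limit   = np.array([4, 3, 2, 8, 2, 2])              # max daily servings

# Minimise price @ x subject to energy @ x >= 2000, protein @ x >= 55,
# calcium @ x >= 800, and 0 <= x <= limit. SciPy expects A_ub @ x <= b_ub,
# so the >= rows are negated, exactly as in the notebook's A and b.
A_ub = -np.vstack([energy, protein, calcium])
b_ub = -np.array([2000, 55, 800])
bounds = [(0, u) for u in limit]

res = linprog(price, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, res.fun)
```

The returned `res.x` is the vector of servings and `res.fun` the minimum daily cost; the constraint checks below are the easiest way to confirm the solution is actually a valid diet.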
lyoung13/deep-learning-nanodegree
p1-bikesharing-predictions/dlnd-your-first-neural-network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). At the very bottom of the notebook, you'll find some unit tests to check the correctness of your neural network. Be sure to run these before you submit your project.\nAfter you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of which likely affect the number of riders. You'll be trying to capture all this with your model.\nDummy variables\nHere we have some categorical variables like season, weather, month.
To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
It's important to split the data randomly so all cases are represented in both sets.", "n_records = features.shape[0]\nsplit = np.random.choice(features.index, \n size=int(n_records*0.8), \n replace=False)\ntrain_features, train_targets = features.ix[split], targets.ix[split]\nval_features, val_targets = features.drop(split), targets.drop(split)", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function.
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork:\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n \n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n \n self.learning_rate = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # TODO: Activation function is the sigmoid function\n self.activation_function = lambda x : 1 / (1 + np.exp(-x))\n\n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin = 2).T\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n \n # TODO: Output error\n output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Backpropagated error\n hidden_errors = np.dot(self.weights_hidden_to_output.T, 
output_errors) # errors propagated to the hidden layer\n hidden_grad = hidden_outputs * (1 - hidden_outputs) # hidden layer gradients\n\n # TODO: Update the weights\n \n self.weights_hidden_to_output += self.learning_rate * output_errors * hidden_outputs.T # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.learning_rate * hidden_errors * hidden_grad * inputs.T # update input-to-hidden weights with gradient descent step\n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)# signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)# signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Training the network\nHere you'll set the hyperparameters for the network. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network faster. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. However, it can become too specific to the training set and will fail to generalize to the validation set. This is called overfitting.
You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "### Set the hyperparameters here ###\nepochs = 15000\nlearning_rate = 0.005\nhidden_nodes = 30\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n if e%(epochs/10) == 0:\n # Calculate losses for the training and test sets\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n \n # Print out the losses as the network is training\n print('Training loss: 
{:.4f}'.format(train_loss))\n print('Validation loss: {:.4f}'.format(val_loss))\n \n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()", "Check out your predictions\nHere, use the test data to check that network is accurately making predictions. If your predictions don't match the data, try adjusting the hyperparameters and check to make sure the forward passes in the network are correct.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nOverall my model does a good job of predicting the data, the validation loss is low and slightly higher than the training loss. The model starts to fail when trying to predict farther into the future. This makes sense given that the further you go out in time the more unpredictable forecasts will get.\nUnit tests\nRun these unit tests to check the correctness of your network implementation. 
These tests must all be successful to pass the project.", "import unittest\n\nnp.random.seed(42)\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path == 'Bike-Sharing-Dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n [ 0.22931895, -1.28754157]))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n [[-0.7128223, 0.22086344, -0.64139849],\n [-1.06444693, 1.06268915, -0.17280743]]))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n self.assertTrue(np.allclose(network.run(inputs), -0.97900982))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
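The backpropagation step in the notebook above relies on the identity that the derivative of the sigmoid is $\sigma(x)(1-\sigma(x))$ — this is what the `hidden_grad = hidden_outputs * (1 - hidden_outputs)` line computes. A quick stand-alone NumPy sketch (not part of the project's grader) checks that identity against a finite-difference approximation:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 101)

# Analytic derivative used in the backprop code above
analytic = sigmoid(x) * (1 - sigmoid(x))

# Central finite-difference approximation of the same derivative
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

max_err = np.max(np.abs(analytic - numeric))
print(max_err)  # should be tiny
```

If the two disagree, the `hidden_grad` line (or the sigmoid itself) is wrong, which is the most common bug in this exercise.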
ellisztamas/faps
docs/.ipynb_checkpoints/02 Genotype data-checkpoint.ipynb
mit
[ "Genotype data in FAPS\nTom Ellis, March 2017\nIn most cases, researchers will have a sample of offspring, maternal and candidate paternal individuals typed at a set of markers. In this section we'll look in more detail at how FAPS deals with genotype data to build a matrix we can use for sibship inference.\nThis notebook will examine how to:\n\nGenerate simple genotypeArray objects and explore what information is contained in them.\nImport external genotype data.\nWork with genotype data from multiple half-sib families.\n\nChecking genotype data is an important step before committing to a full analysis. A case study of data checking and cleaning using an empirical dataset is given in section 8.\nIn the next section we'll see how to combine genotype information on offspring and a set of candidate parents to create an array of likelihoods of paternity for dyads of offspring and candidate fathers.\nAlso relevant is the section on simulating data and power analysis.\nCurrently, FAPS genotypeArray objects assume you are using biallelic, unlinked SNPs for a diploid. If your system deviates from these criteria in some way you can also skip this stage by creating your own array of paternity likelihoods using an appropriate likelihood function, and importing this directly as a paternityArray. See the next section for more on paternityArray objects and how they should look.\ngenotypeArray objects\nBasic genotype information\nGenotype data are stored in a class of objects called a genotypeArray. We'll illustrate how these work with simulated data, since not all information is available for real-world data sets. We first generate a vector of population allele frequencies for 10 unlinked SNP markers, and use these to create a population of five adult individuals. This is obviously an unrealistically small dataset, but serves for illustration.
The optional argument family_name allows you to name this generation.", "import faps as fp\nimport numpy as np\n\nallele_freqs = np.random.uniform(0.3,0.5,10)\nmypop = fp.make_parents(5, allele_freqs, family_name='my_population')", "The object we just created contains information about the genotypes of each of the five parent individuals. Genotypes are stored as NxLx2-dimensional arrays, where N is the number of individuals and L is the number of loci. We can view the genotype for the first parent like so (recall that Python starts counting from zero, not one):", "mypop.geno[0]", "You could subset the array by indexing the genotypes, for example by taking only the first two individuals and the first five loci:", "mypop.geno[:2, :5]", "For realistic examples with many more loci, this obviously gets unwieldy pretty soon. It's cleaner to supply a list of individuals to keep or remove to the subset and drop functions. These return a new genotypeArray for the individuals of interest.", "print(mypop.subset([0,2]).names)\nprint(mypop.drop([0,2]).names)", "Information on individuals\nA genotypeArray contains other useful information about the individuals:", "print(mypop.names) # individual names\nprint(mypop.size) # number of individuals\nprint(mypop.nloci) # number of loci typed.", "make_sibships is a convenient way to generate a single half-sibling array from individuals in mypop. This code makes a half-sib array with individual 0 as the mother, with individuals 1, 2 and 3 contributing male gametes.
Each father has four offspring.", "progeny = fp.make_sibships(mypop, 0, [1,2,3], 4, 'myprogeny')", "With this generation we can extract a little more information from the genotypeArray than we could for the parents, about parentage and family structure.", "print(progeny.fathers)\nprint(progeny.mothers)\nprint(progeny.families)\nprint(progeny.nfamilies)", "Of course with real data we would not normally know the identity of the father or the number of families, but this is useful for checking accuracy in simulations. It can also be useful to look up the positions of the parents in another list of names. This code finds the indices of the mothers and fathers of the offspring in the names listed in mypop.", "print(progeny.parent_index('mother', mypop.names))\nprint(progeny.parent_index('father', mypop.names))", "Information on markers\nPull out marker names with markers. The names here are boring because they are simulated, but your data can have as exciting names as you'd like.", "mypop.markers", "Check whether the locus names for parents and offspring match. This is obviously vital for determining who shares alleles with whom, but easy to overlook! If they don't match, the most likely explanation is that you have imported genotype data and misspecified where the genotype data start (the genotype_col argument).", "mypop.markers == progeny.markers", "FAPS uses population allele frequencies to calculate the likelihood that paternal alleles are drawn at random.\nThey are useful to check the markers are doing what you think they are.\nPull out the population allele frequencies for each locus:", "mypop.allele_freqs()", "We can also check for missing data and heterozygosity for each marker and individual.
By default, data for each marker are returned:", "print(mypop.missing_data())\nprint(mypop.heterozygosity())", "To get summaries for each individual:", "print(mypop.missing_data(by='individual'))\nprint(mypop.heterozygosity(by='individual'))", "In this instance there is no missing data, because data are simulated to be error-free. See the next section for an empirical example where this is not true.\nImporting genotype data\nYou can import genotype data from a text or CSV (comma-separated text) file. Both can be easily exported from a spreadsheet program. Rows index individuals, and columns index each typed locus. More specifically:\n\nOffspring names should be given in the first column.\nIf the data are offspring, names of the mothers are given in the second column.\nIf known for some reason, names of fathers can be given as well.\nGenotype information should be given to the right of columns indicating individual or parental names, with locus names in the column headers.\n\nSNP genotype data must be biallelic, that is they can only be homozygous for the first allele, heterozygous, or homozygous for the second allele. These should be given as 0, 1 and 2 respectively. If genotype data is missing this should be entered as NA.\nThe following code imports genotype information on real samples of offspring from a half-sibling array of wild-pollinated snapdragon seedlings collected in the Spanish Pyrenees. The candidate parents are as many of the wild adult plants as we could find. You will find the data files on the IST Austria data repository (DOI:10.15479/AT:ISTA:95).
Aside from the path to where the data file is stored, the two other arguments specify the column containing names of the mothers, and the first column containing genotype data of the offspring.", "offspring = fp.read_genotypes(\n path = '../data/offspring_2012_genotypes.csv',\n mothers_col=1,\n genotype_col=2)", "Again, Python starts counting from zero rather than one, so the first column is really column zero, and so on. Because these are CSV, there was no need to specify that data are delimited by commas, but this is included for illustration.\nOffspring are divided into 60 maternal families of different sizes. You can call the name of the mother of each offspring. You can also call the names of the fathers, with offspring.fathers, but since these are unknown this is not informative.", "np.unique(offspring.mothers)", "Offspring names are a combination of maternal family and a unique ID for each offspring.", "offspring.names", "You can call summaries of genotype data to help in data cleaning. For example, this code shows the proportion of loci with missing genotype data for the first ten offspring individuals.", "print(offspring.missing_data('individual')[:10])", "This snippet shows the proportion of missing data points and heterozygosity for the first ten loci. These can be helpful in identifying dubious loci.", "print(offspring.missing_data('marker')[:10])\nprint(offspring.heterozygosity()[:10])", "Multiple families\nIn real data sets we generally work with multiple half-sibling arrays at once. For downstream analyses we need to split up the genotype data into families to reflect this. This is easy to do with split and a vector of labels to group offspring by. This returns a dictionary of genotypeArray objects labelled by maternal family. This snippet splits up the data and prints the maternal family names.", "offs_split = offspring.split(by = offspring.mothers)\noffs_split.keys()", "Each entry is an individual genotypeArray.
You can pull out individual families by indexing the dictionary by name. For example, here are the names of the offspring in family J1246:", "offs_split[\"J1246\"].names", "To perform operations on each genotypeArray we now have to iterate over each element. A convenient way to do this is with dictionary comprehensions by separating out the labels from the genotypeArray objects using items.\nAs an example, here's how you call the number of offspring in each family. It splits up the dictionary into keys for each family, and calls size on each genotypeArray (labelled genArray in the comprehension).", "{family : genArray.size for family,genArray in offs_split.items()}", "You can achieve the same thing with a list comprehension, but you lose information about family ID. It is also more difficult to pass a list on to downstream functions. This snippet shows the first ten items.", "[genArray.size for genArray in offs_split.values()][:10]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
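The `split`-then-dictionary-comprehension pattern in the FAPS notebook above does not need FAPS itself to demonstrate. A minimal plain-Python sketch (the offspring and mother names here are made up for illustration, not the real snapdragon data) groups offspring by maternal family and then summarises each family with a comprehension over `items()`, just as the notebook does with `genotypeArray` objects:

```python
# Hypothetical offspring and their mothers, mimicking genotypeArray.split()
offspring_names = ["J1246_1", "J1246_2", "K0451_1", "K0451_2", "K0451_3"]
mothers         = ["J1246",   "J1246",   "K0451",   "K0451",   "K0451"]

# Build a dict mapping each maternal family to its offspring
families = {}
for name, mum in zip(offspring_names, mothers):
    families.setdefault(mum, []).append(name)

# Dictionary comprehension over items(), as in the notebook's size example
family_sizes = {family: len(members) for family, members in families.items()}
print(family_sizes)  # {'J1246': 2, 'K0451': 3}
```

The same `{label: f(obj) for label, obj in d.items()}` shape works for any per-family summary, which is why the notebook prefers it over a plain list comprehension that drops the family labels.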
amitkaps/hackermath
Module_1c_linear_regression_ridge.ipynb
mit
[ "Linear Regression (Ridge)\nSo far we have been looking at solving for vector $x$ when there is a known matrix $A$ and vector $b$, such that\n$$ Ax = b $$\nThe first approach solves for one unique solution (or none) with $n$ observations and $p$ features when $ n = p + 1 $, i.e. an $n \\times n$ matrix\nThe second approach is using OLS - ordinary least squares linear regression, when $ n > p + 1 $\nOverfitting in OLS\nOrdinary least squares estimation leads to an overdetermined (over-fitted) solution, which fits the in-sample data well but does not generalise well outside the sample \nLet's take the OLS cars example: our sample was 7 cars for which we had $price$ and $kmpl$ data. However, the entire population is 42 cars. We want to see how well the OLS fit for 7 cars does when we extend it to the entire set of 42 cars.", "import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('fivethirtyeight')\nplt.rcParams['figure.figsize'] = (10, 6)\n\npop = pd.read_csv('data/cars_small.csv')\n\npop.shape\n\npop.head()\n\nsample_rows = [35,17,11,25,12,22,13]\n\nsample = pop.loc[sample_rows,:]\n\nsample", "Let's plot the entire population (n = 42) and the sample (n = 7) and our original prediction line.\n$$ price = 1662 - 62 * kmpl ~~~~ \\textit{(sample = 7)}$$", "# Plot the population and the sample\nplt.scatter(pop.kmpl, pop.price, s = 150, alpha = 0.5 )\nplt.scatter(sample.kmpl, sample.price, s = 150, alpha = 0.5, c = 'r')\n\n# Plot the OLS Line - Sample\nbeta_0_s, beta_1_s = 1662, -62 \nx = np.arange(min(pop.kmpl),max(pop.kmpl),1)\n\nplt.xlabel('kmpl')\nplt.ylabel('price')\n\ny_s = beta_0_s + beta_1_s * x\nplt.text(x[-1], y_s[-1], 'sample')\n\nplt.plot(x, y_s, '-')", "Let us find the best-fit OLS line for the population", "def ols (df):\n n = df.shape[0]\n x0 = np.ones(n)\n x1 = df.kmpl\n X = np.c_[x0, x1]\n X = np.asmatrix(X)\n y = np.transpose(np.asmatrix(df.price))\n X_T = 
np.transpose(X)\n X_pseudo = np.linalg.inv(X_T * X) * X_T\n beta = X_pseudo * y\n return beta\n\nols(sample)\n\nols(pop)", "So the two OLS lines are:\n$$ price = 1662 - 62 * kmpl ~~~~ \\textit{(sample = 7)}$$ \n$$ price = 1158 - 36 * kmpl ~~~~ \\textit{(population = 42)}$$ \nLet us plot this data:", "# Plot the population and the sample\nplt.scatter(pop.kmpl, pop.price, s = 150, alpha = 0.5 )\nplt.scatter(sample.kmpl, sample.price, s = 150, alpha = 0.5, c = 'r')\n\n# Plot the OLS Line - sample and population\nbeta_0_s, beta_1_s = 1662, -62 \nbeta_0_p, beta_1_p = 1158, -36 \nx = np.arange(min(pop.kmpl),max(pop.kmpl),1)\n\nplt.xlabel('kmpl')\nplt.ylabel('price')\n\ny_s = beta_0_s + beta_1_s * x\nplt.text(x[-1], y_s[-1], 'sample')\n\ny_p = beta_0_p + beta_1_p * x\nplt.text(x[-1], y_p[-1], 'population')\n\nplt.plot(x, y_s, '-')\nplt.plot(x, y_p, '-')", "Understanding Overfitting\nOverfitting happens because our original line is heavily dependent on the selection of the sample of 7 observations.
If we change our sample, we would get a different answer every time!", "# Randomly select 7 cars from this dataset\nsample_random = pop.sample(n=7)\nsample_random", "Let us write some code to randomly draw a sample of 7 and do it $z$ times and see the OLS lines and coefficients", "ols(sample_random)\n\ndef random_cars_ols (z):\n beta = []\n for i in range(z):\n \n # Select a sample and run OLS\n sample_random = pop.sample(n=7)\n b = ols(sample_random)\n beta.append([b[0,0], b[1,0]])\n \n # Get the OLS line\n x = np.arange(min(pop.kmpl), max(pop.kmpl), 1)\n y = b[0,0] + b[1,0] *x\n \n # Set the plotting area\n plt.subplot(1, 2, 1)\n plt.tight_layout()\n a = round(1/np.log(z), 2)\n \n # Plot the OLS line\n plt.plot(x,y, '-', linewidth = 1.0, c = 'b', alpha = a)\n plt.xlabel('kmpl')\n plt.ylabel('price')\n plt.ylim(0,1000)\n\n # Plot the intercept and coefficients\n plt.subplot(1,2,2)\n plt.scatter(beta[i][1],beta[i][0], s = 50, alpha = a)\n plt.xlim(-120,60)\n plt.ylim(-500,3000)\n plt.xlabel('beta_1')\n plt.ylabel('beta_0')\n \n # Plot the Population line\n plt.subplot(1, 2, 1)\n beta_0_p, beta_1_p = 1158, -36 \n x = np.arange(min(pop.kmpl),max(pop.kmpl),1)\n y_p = beta_0_p + beta_1_p * x\n plt.plot(x, y_p, '-', linewidth =4, c = 'r')", "Let us do this 500 times, $ z = 500 $", "random_cars_ols(500)", "L2 Regularization - Ridge Regression\nNow, to prevent our $\\beta $ from going all over the place to fit the line, we need to constrain $\\beta$:\n$$ \\beta^{T} \\beta < C $$\nFor OLS our error term was: \n$$ E_{ols}(\\beta)= \\frac {1}{n} (y-X\\beta)^{T}(y-X\\beta) $$\nSo now we add a penalty on $\\beta$ to our minimization function\n$$ E_{reg}(\\beta)= \\frac {1}{n} (y-X\\beta)^{T}(y-X\\beta) + \\frac {\\alpha}{n} \\beta^{T}\\beta$$\nTo get the minimum for this error function, we need to differentiate by $\\beta^T$\n$$ \\nabla E_{reg}(\\beta) = 0 $$\n$$ \\nabla E_{reg}(\\beta) ={\\frac {dE_{reg}(\\beta)}{d\\beta^T}} = \\frac {2}{n} 
X^T(X\\beta−y) + \\frac {2\\alpha}{n} \\beta = 0 $$\n$$ X^T X\\beta + \\alpha \\beta = X^T y $$\nSo our $\\beta$ for a regularized function is\n$$ \\beta_{reg} = (X^T X + \\alpha I)^{-1}X^Ty$$\nWhen $ \\alpha = 0 $, this reduces to OLS\n$$ \\beta_{ols} = (X^T X)^{-1}X^Ty$$\nDirect Calculation", "def ridge (df, alpha):\n    n = df.shape[0]\n    x0 = np.ones(n)\n    x1 = df.kmpl\n    X = np.c_[x0, x1]\n    X = np.asmatrix(X)\n    y = np.asmatrix(df.price.values.reshape(-1,1))\n    X_T = np.transpose(X)\n    I = np.identity(2)\n    beta = np.linalg.inv(X_T * X + alpha * I ) * X_T * y\n    return beta", "Let us run this with alpha = 0, which is OLS", "ridge(sample, 0)", "Let's increase alpha to constrain the fit and see the result", "def ridge_plot(df, alphas, func):\n    plt.scatter(df.kmpl, df.price, s = 150, alpha = 0.5)\n    plt.xlabel('kmpl')\n    plt.ylabel('price')\n    \n    # Plot the Ridge line\n    for a in alphas: \n        beta = func(df, a)\n        x = np.arange(min(df.kmpl), max(df.kmpl), 1)\n        y = beta[0,0] + beta[1,0] * x\n        plt.plot(x,y, '-', linewidth = 1, c = 'b')\n        plt.text(x[-1], y[-1], '%s' % a, size = \"smaller\")\n\nridge_plot(sample, [0, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1], ridge)", "Exercises\nRun a Ridge Linear Regression:\n$$ price = \\beta_{0} + \\beta_{1} kmpl + \\beta_{2} bhp + \\beta_{3} kmpl^2 + \\beta_{4} bhp/kmpl $$\nRun the Ridge Regression using Pseudo Inverse?\nPlot the Ridge Regression for different values of $\\alpha$\nPlot the overfitting by taking $n = 20$ samples?\nPlot the overfitting by taking $n = 42$ (entire population)?\nUsing sklearn", "from sklearn import linear_model\n\ndef ridge_sklearn(df, alpha):\n    y = df.price\n    X = df.kmpl.values.reshape(-1,1)\n    X = np.c_[np.ones((X.shape[0],1)),X]\n    model = linear_model.Ridge(alpha = alpha, fit_intercept = False)\n    model.fit(X,y)\n    beta = np.array([model.coef_]).T\n    return beta\n\nridge_sklearn(pop, 0)\n\nridge_plot(sample, [0, 0.005, 0.01, 0.02, 0.03, 0.05, 0.1], ridge_sklearn)", "Let us now run and see how this ridge regression helps in 
reducing overfitting", "def random_cars_ridge (z, alpha, func):\n    beta = []\n    for i in range(z):\n        \n        # Select a sample and run OLS\n        sample_random = pop.sample(n=7)\n        b = func(sample_random, alpha)\n        beta.append([b[0,0], b[1,0]])\n        \n        # Get the OLS line\n        x = np.arange(min(pop.kmpl), max(pop.kmpl), 1)\n        y = b[0,0] + b[1,0] *x\n        \n        # Set the plotting area\n        plt.subplot(1, 2, 1)\n        plt.tight_layout()\n        a = round(1/np.log(z), 2)\n        \n        # Plot the OLS line\n        plt.plot(x,y, '-', linewidth = 1, c = 'b', alpha = a)\n        plt.xlabel('kmpl')\n        plt.ylabel('price')\n        plt.ylim(0,1000)\n\n        # Plot the intercept and coefficients\n        plt.subplot(1,2,2)\n        plt.scatter(beta[i][1],beta[i][0], s = 50, alpha = a)\n        plt.xlim(-120,60)\n        plt.ylim(-500,3000)\n        plt.xlabel('beta_1')\n        plt.ylabel('beta_0')\n    \n    # Plot the Population line\n    plt.subplot(1, 2, 1)\n    beta_0_p, beta_1_p = 1158, -36 \n    x = np.arange(min(pop.kmpl),max(pop.kmpl),1)\n    y_p = beta_0_p + beta_1_p * x\n    plt.plot(x, y_p, '-', linewidth =4, c = 'r')", "random_cars_ridge (500, 0.02, ridge)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ray-project/ray
doc/source/serve/tutorials/gradio.ipynb
apache-2.0
[ "(gradio-serve-tutorial)=\nBuilding a Gradio demo with Ray Serve\nIn this example, we will show you how to wrap a machine learning model served\nby Ray Serve in a Gradio demo.\nSpecifically, we're going to download a GPT-2 model from the transformers library,\ndefine a Ray Serve deployment with it, and then define and launch a Gradio Interface.\nLet's take a look.", "# Install all dependencies for this example.\n! pip install ray gradio transformers requests", "Deploying a model with Ray Serve\nTo start off, we import Ray Serve, Gradio, the transformers and requests libraries,\nand then simply start Ray Serve:", "import gradio as gr\nfrom ray import serve\nfrom transformers import pipeline\nimport requests\n\n\nserve.start()", "Next, we define a Ray Serve deployment with a GPT-2 model, by using the @serve.deployment decorator on a model\nfunction that takes a request argument.\nIn this function we define a GPT-2 model with a call to pipeline and return the result of querying the model.", "@serve.deployment\ndef model(request):\n    language_model = pipeline(\"text-generation\", model=\"gpt2\")\n    query = request.query_params[\"query\"]\n    return language_model(query, max_length=100)", "This model can now easily be deployed using a model.deploy() call.\nTo test this deployment we use a simple example query to get a response from the model running\non localhost:8000/model.\nThe first time you use this endpoint, the model will be downloaded, which can take a while to complete.\nSubsequent calls will be faster.", "model.deploy()\nexample = \"What's the meaning of life?\"\nresponse = requests.get(f\"http://localhost:8000/model?query={example}\")\nprint(response.text)", "Defining and launching a Gradio interface\nDefining a Gradio interface is now straightforward.\nAll we need is a function that Gradio can call to get the response from the model.\nThat's just a thin wrapper around our previous requests call:", "def gpt2(query):\n    response = 
requests.get(f\"http://localhost:8000/model?query={query}\")\n    return response.json()[0][\"generated_text\"]", "Apart from our gpt2 function, the only other thing that we need to define a Gradio interface is\na description of the model inputs and outputs that Gradio understands.\nSince our model takes text as both input and output, this turns out to be pretty simple:", "iface = gr.Interface(\n    fn=gpt2,\n    inputs=[gr.inputs.Textbox(\n        default=example, label=\"Input prompt\"\n    )],\n    outputs=[gr.outputs.Textbox(label=\"Model output\")]\n)", "For more complex models served with Ray, you might need multiple gr.inputs\nand gr.outputs of different types.\n{margin}\nThe [Gradio documentation](https://gradio.app/docs/) covers all viable input and output components in detail.\nFinally, we can launch the interface using iface.launch():", "iface.launch()", "This should launch an interactive interface that looks like this:\n{image} https://raw.githubusercontent.com/ray-project/images/master/docs/serve/gradio_serve_gpt.png\nYou can run this example directly in the browser, for instance by launching this notebook directly\ninto Google Colab or Binder, by clicking on the rocket icon at the top right of this page.\nIf you run this code locally in Python, this Gradio app will be served on http://127.0.0.1:7861/.\nBuilding a Gradio app from a Scikit-Learn model\nLet's take a look at another example, so that you can see the slight differences from the first example\nin direct comparison.", "# Install all dependencies for this example.\n! 
pip install ray gradio requests scikit-learn", "This time we're going to use a Scikit-Learn model that we quickly train\nourselves on the famous Iris dataset.\nTo do this, we'll download the Iris dataset using the built-in load_iris function from the sklearn library,\nand we use the GradientBoostingClassifier from the sklearn.ensemble module for training.\nThis time we'll use the @serve.deployment decorator on a class called BoostingModel, which has an\nasynchronous __call__ method that Ray Serve needs to define your deployment.\nAll else remains the same as in the first example.", "import gradio as gr\nimport requests\nfrom sklearn.datasets import load_iris\nfrom sklearn.ensemble import GradientBoostingClassifier\n\nfrom ray import serve\n\n# Train your model.\niris_dataset = load_iris()\nmodel = GradientBoostingClassifier()\nmodel.fit(iris_dataset[\"data\"], iris_dataset[\"target\"])\n\n# Start Ray Serve.\nserve.start()\n\n# Define your deployment.\n@serve.deployment(route_prefix=\"/iris\")\nclass BoostingModel:\n    def __init__(self, model):\n        self.model = model\n        self.label_list = iris_dataset[\"target_names\"].tolist()\n\n    async def __call__(self, request):\n        payload = (await request.json())[\"vector\"]\n        print(f\"Received http request with data {payload}\")\n\n        prediction = self.model.predict([payload])[0]\n        human_name = self.label_list[prediction]\n        return {\"result\": human_name}\n\n\n# Deploy your model.\nBoostingModel.deploy(model)", "Equipped with our BoostingModel class, we can now define and launch a Gradio interface as follows.\nThe Iris dataset has a total of four features, namely the four numeric values sepal length, sepal width,\npetal length, and petal width.\nWe use this fact to define an iris function that takes these four features and returns the predicted class,\nusing our deployed model.\nThis time, the Gradio interface takes four input Numbers, and returns the predicted class as text.\nGo ahead and try it out in the browser yourself.", "# 
Define gradio function\ndef iris(sl, sw, pl, pw):\n request_input = {\"vector\": [sl, sw, pl, pw]}\n response = requests.get(\n \"http://localhost:8000/iris\", json=request_input)\n return response.json()[0][\"result\"]\n\n\n# Define gradio interface\niface = gr.Interface(\n fn=iris,\n inputs=[\n gr.inputs.Number(default=1.0, label=\"sepal length (cm)\"),\n gr.inputs.Number(default=1.0, label=\"sepal width (cm)\"),\n gr.inputs.Number(default=1.0, label=\"petal length (cm)\"),\n gr.inputs.Number(default=1.0, label=\"petal width (cm)\"),\n ],\n outputs=\"text\")\n\n# Launch the gradio interface\niface.launch()", "Launching this interface, you should see an interactive interface that looks like this:\n{image} https://raw.githubusercontent.com/ray-project/images/master/docs/serve/gradio_serve_iris.png\nConclusion\nTo summarize, it's easy to build Gradio apps from Ray Serve deployments.\nYou only need to properly encode your model's inputs and outputs in a Gradio interface, and you're good to go!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
quests/sparktobq/05_functions.ipynb
apache-2.0
[ "Migrating from Spark to BigQuery via Dataproc -- Part 5\n\nPart 1: The original Spark code, now running on Dataproc (lift-and-shift).\nPart 2: Replace HDFS by Google Cloud Storage. This enables job-specific-clusters. (cloud-native)\nPart 3: Automate everything, so that we can run in a job-specific cluster. (cloud-optimized)\nPart 4: Load CSV into BigQuery, use BigQuery. (modernize)\nPart 5: Using Cloud Functions, launch analysis every time there is a new file in the bucket. (serverless)\n\nCatch-up cell", "%%bash\nwget http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz\ngunzip kddcup.data_10_percent.gz\nBUCKET='cloud-training-demos-ml' # CHANGE\ngsutil cp kdd* gs://$BUCKET/\nbq mk sparktobq", "Create reporting function", "%%writefile main.py\n\nfrom google.cloud import bigquery\nimport google.cloud.storage as gcs\nimport tempfile\nimport os\n\ndef create_report(BUCKET, gcsfilename, tmpdir):\n \"\"\"\n Creates report in gs://BUCKET/ based on contents in gcsfilename (gs://bucket/some/dir/filename)\n \"\"\"\n # connect to BigQuery\n client = bigquery.Client()\n destination_table = client.get_table('sparktobq.kdd_cup')\n \n # Specify table schema. 
Autodetect is not a good idea for production code\n    job_config = bigquery.LoadJobConfig()\n    schema = [\n        bigquery.SchemaField(\"duration\", \"INT64\"),\n    ]\n    for name in ['protocol_type', 'service', 'flag']:\n        schema.append(bigquery.SchemaField(name, \"STRING\"))\n    for name in 'src_bytes,dst_bytes,wrong_fragment,urgent,hot,num_failed_logins'.split(','):\n        schema.append(bigquery.SchemaField(name, \"INT64\"))\n    schema.append(bigquery.SchemaField(\"unused_10\", \"STRING\"))\n    schema.append(bigquery.SchemaField(\"num_compromised\", \"INT64\"))\n    schema.append(bigquery.SchemaField(\"unused_12\", \"STRING\"))\n    for name in 'su_attempted,num_root,num_file_creations'.split(','):\n        schema.append(bigquery.SchemaField(name, \"INT64\")) \n    for fieldno in range(16, 41):\n        schema.append(bigquery.SchemaField(\"unused_{}\".format(fieldno), \"STRING\"))\n    schema.append(bigquery.SchemaField(\"label\", \"STRING\"))\n    job_config.schema = schema\n\n    # Load CSV data into BigQuery, replacing any rows that were there before\n    job_config.create_disposition = bigquery.CreateDisposition.CREATE_IF_NEEDED\n    job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE\n    job_config.skip_leading_rows = 0\n    job_config.source_format = bigquery.SourceFormat.CSV\n    load_job = client.load_table_from_uri(gcsfilename, destination_table, job_config=job_config)\n    print(\"Starting LOAD job {} for {}\".format(load_job.job_id, gcsfilename))\n    load_job.result() # Waits for table load to complete.\n    print(\"Finished LOAD job {}\".format(load_job.job_id))\n    \n    # connections by protocol\n    sql = \"\"\"\n    SELECT protocol_type, COUNT(*) AS count\n    FROM sparktobq.kdd_cup\n    GROUP BY protocol_type\n    ORDER BY count ASC \n    \"\"\"\n    connections_by_protocol = client.query(sql).to_dataframe()\n    connections_by_protocol.to_csv(os.path.join(tmpdir,\"connections_by_protocol.csv\"))\n    print(\"Finished analyzing connections\")\n    \n    # attacks plot\n    sql = \"\"\"\n    SELECT \n      protocol_type, \n      CASE label\n        WHEN 'normal.' 
THEN 'no attack'\n        ELSE 'attack'\n      END AS state,\n      COUNT(*) as total_freq,\n      ROUND(AVG(src_bytes), 2) as mean_src_bytes,\n      ROUND(AVG(dst_bytes), 2) as mean_dst_bytes,\n      ROUND(AVG(duration), 2) as mean_duration,\n      SUM(num_failed_logins) as total_failed_logins,\n      SUM(num_compromised) as total_compromised,\n      SUM(num_file_creations) as total_file_creations,\n      SUM(su_attempted) as total_root_attempts,\n      SUM(num_root) as total_root_accesses\n    FROM sparktobq.kdd_cup\n    GROUP BY protocol_type, state\n    ORDER BY 3 DESC\n    \"\"\"\n    attack_stats = client.query(sql).to_dataframe()\n    ax = attack_stats.plot.bar(x='protocol_type', subplots=True, figsize=(10,25))\n    ax[0].get_figure().savefig(os.path.join(tmpdir,'report.png'));\n    print(\"Finished analyzing attacks\")\n    \n    bucket = gcs.Client().get_bucket(BUCKET)\n    for blob in bucket.list_blobs(prefix='sparktobq/'):\n        blob.delete()\n    for fname in ['report.png', 'connections_by_protocol.csv']:\n        bucket.blob('sparktobq/{}'.format(fname)).upload_from_filename(os.path.join(tmpdir,fname))\n    print(\"Uploaded report based on {} to {}\".format(gcsfilename, BUCKET))\n\n\ndef bigquery_analysis_cf(data, context):\n    # check that trigger is for a file of interest\n    bucket = data['bucket']\n    name = data['name']\n    if ('kddcup' in name) and not ('gz' in name):\n        filename = 'gs://{}/{}'.format(bucket, data['name'])\n        print(bucket, filename)\n        with tempfile.TemporaryDirectory() as tmpdir:\n            create_report(bucket, filename, tmpdir)\n\n%%writefile requirements.txt\ngoogle-cloud-bigquery\ngoogle-cloud-storage\npandas\nmatplotlib\n\n# verify that the code in the CF works\nname='kddcup.data_10_percent'\nif 'kddcup' in name and not ('gz' in name):\n    print(True)", "Test that the function endpoint works", "# test that the function works\nimport main as bq\n\nBUCKET='cloud-training-demos-ml' # CHANGE\ntry:\n    bq.create_report(BUCKET, 'gs://{}/kddcup.data_10_percent'.format(BUCKET), \"/tmp\")\nexcept Exception as e:\n    print(e.errors)", "Deploy the cloud function", 
"!gcloud functions deploy bigquery_analysis_cf --runtime python37 --trigger-resource $BUCKET --trigger-event google.storage.object.finalize", "Try it out\nCopy the file to the bucket:", "!gsutil rm -rf gs://$BUCKET/sparktobq\n!gsutil cp kddcup.data_10_percent gs://$BUCKET/", "Verify that the Cloud Function is being run. You can do this from the Cloud Functions part of the GCP Console.\nOnce the function is complete (in about 30 seconds), see if the output folder contains the report:", "!gsutil ls gs://$BUCKET/sparktobq", "Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
dvkonst/ml_mipt
task_7/captioning.ipynb
gpl-3.0
[ "<h1 align=\"center\">First of all -- Checking Questions</h1>\n\nQuestion 1: Can convolutional networks be used for text classification? If not, justify it :D; if yes, how? How do you deal with arbitrary input length?\nYes, if the texts all have the same length.\nQuestion 2: In what ways is an LSTM better or worse than a plain RNN?\nThe gradients vanish more slowly, so information from more than just the nearest steps can be used.\nQuestion 3: Write out the derivative $\\frac{d c_{n+1}}{d c_{k}}$ for an LSTM http://colah.github.io/posts/2015-08-Understanding-LSTMs/ and explain the formula: when does the derivative vanish, and when does it explode?\n<Answer>\nQuestion 4: Why is TBPTT needed, and why is plain BPTT bad?\n<Answer>\nQuestion 5: How can recurrent and convolutional networks be combined, and, more importantly, why? Give several examples of real tasks.\nFirst recognize the objects in images with convolutional networks, then caption the images with recurrent ones.\nQuestion 6: Explain the intuition behind choosing the size of the embedding layer. Why is this a dangerous spot?\n<Answer>\n\nArseniy Ashuha, you can text me ars.ashuha@gmail.com, Alexander Panin\n\n<h1 align=\"center\"> Image Captioning </h1>\n\nIn this seminar you'll be going through the image captioning pipeline.\nTo begin with, let us download the dataset of image features from a pre-trained GoogleNet.", "!wget https://www.dropbox.com/s/3hj16b0fj6yw7cc/data.tar.gz?dl=1 -O data.tar.gz\n!tar -xvzf data.tar.gz", "Data preprocessing", "%%time\n# Read Dataset\nimport numpy as np\nimport pickle\n\nimg_codes = np.load(\"data/image_codes.npy\")\ncaptions = pickle.load(open('data/caption_tokens.pcl', 'rb'))\n\nprint \"each image code is a 1000-unit vector:\", img_codes.shape\nprint img_codes[0,:10]\nprint '\\n\\n'\nprint \"for each image there are 5-7 descriptions, e.g.:\\n\"\nprint '\\n'.join(captions[0])\n\n#split descriptions into tokens\nfor img_i in range(len(captions)):\n    for caption_i in range(len(captions[img_i])):\n        sentence = captions[img_i][caption_i] \n        
captions[img_i][caption_i] = [\"#START#\"]+sentence.split(' ')+[\"#END#\"]\n\n# Build a Vocabulary\n\n############# TO CODE IT BY YOURSELF ##################\n#<here should be dict word:number of entrances>\nword_counts = {}\nfor img_i in captions:\n for caption_i in img_i:\n for word in caption_i:\n try:\n word_counts[word] += 1\n except KeyError:\n word_counts[word] = 1\n# print word_counts\nvocab = ['#UNK#', '#START#', '#END#']\nvocab += [k for k, v in word_counts.items() if v >= 5]\nn_tokens = len(vocab)\n\nassert 10000 <= n_tokens <= 10500\n\nword_to_index = {w: i for i, w in enumerate(vocab)}\n\nPAD_ix = -1\nUNK_ix = vocab.index('#UNK#')\n\ndef as_matrix(sequences,max_len=None):\n max_len = max_len or max(map(len,sequences))\n \n matrix = np.zeros((len(sequences),max_len),dtype='int32')+PAD_ix\n for i,seq in enumerate(sequences):\n row_ix = [word_to_index.get(word,UNK_ix) for word in seq[:max_len]]\n matrix[i,:len(row_ix)] = row_ix\n \n return matrix\n\n#try it out on several descriptions of a random image\nas_matrix(captions[1337])", "Mah Neural Network", "# network shapes. 
\nCNN_FEATURE_SIZE = img_codes.shape[1]\nEMBED_SIZE = 128 #pls change me if u want\nLSTM_UNITS = 200 #pls change me if u want\n\nimport theano\nimport lasagne\nimport theano.tensor as T\nfrom lasagne.layers import *\n\n# Input Variable\nsentences = T.imatrix()# [batch_size x time] of word ids\nimage_vectors = T.matrix() # [batch size x unit] of CNN image features\nsentence_mask = T.neq(sentences, PAD_ix)\n\n#network inputs\nl_words = InputLayer((None, None), sentences)\nl_mask = InputLayer((None, None), sentence_mask)\n\n#embeddings for words \n############# TO CODE IT BY YOURSELF ##################\nl_word_embeddings = EmbeddingLayer(l_words, input_size=n_tokens, output_size=EMBED_SIZE) \n\n# input layer for image features\nl_image_features = InputLayer((None, CNN_FEATURE_SIZE), image_vectors)\n\n############# TO CODE IT BY YOURSELF ##################\n#convert 1000 image features from googlenet to whatever LSTM_UNITS you have set\n#it's also a good idea to add some dropout here and there\nl_image_features_small = DropoutLayer(l_image_features)\nl_image_features_small = DenseLayer(l_image_features_small, LSTM_UNITS)\nassert l_image_features_small.output_shape == (None, LSTM_UNITS)\n\n############# TO CODE IT BY YOURSELF ##################\n# Concatenate image features and word embeddings in one sequence \ndecoder = LSTMLayer(l_word_embeddings,\n                    num_units=LSTM_UNITS,\n                    cell_init=l_image_features_small,\n                    mask_input=l_mask,\n                    grad_clipping=10**10)\n\n# Decoding of rnn hidden states\nfrom broadcast import BroadcastLayer,UnbroadcastLayer\n\n#apply whatever comes next to each tick of each example in a batch. 
Equivalent to 2 reshapes\nbroadcast_decoder_ticks = BroadcastLayer(decoder, (0, 1))\nprint \"broadcasted decoder shape = \",broadcast_decoder_ticks.output_shape\n\npredicted_probabilities_each_tick = DenseLayer(\n    broadcast_decoder_ticks,n_tokens, nonlinearity=lasagne.nonlinearities.softmax)\n\n#un-broadcast back into (batch,tick,probabilities)\npredicted_probabilities = UnbroadcastLayer(\n    predicted_probabilities_each_tick, broadcast_layer=broadcast_decoder_ticks)\n\nprint \"output shape = \", predicted_probabilities.output_shape\n\n#remove if you know what you're doing (e.g. 1d convolutions or fixed shape)\nassert predicted_probabilities.output_shape == (None, None, 10373)\n\nnext_word_probas = get_output(predicted_probabilities)\n\nreference_answers = sentences[:,1:]\noutput_mask = sentence_mask[:,1:]\n\n#write symbolic loss function to train NN for\nloss = lasagne.objectives.categorical_crossentropy(\n    next_word_probas[:, :-1].reshape((-1, n_tokens)),\n    reference_answers.reshape((-1,))\n).reshape(reference_answers.shape)\n\n############# TO CODE IT BY YOURSELF ##################\nloss = (loss * output_mask).sum() / output_mask.sum()\n\n#trainable NN weights\n############# TO CODE IT BY YOURSELF ##################\nweights = get_all_params(predicted_probabilities)\nupdates = lasagne.updates.adam(loss, weights)\n\n#compile a function that takes input sentence and image mask, outputs loss and updates weights\n#please note that your functions must accept image features as FIRST param and sentences as second one\n############# TO CODE IT BY YOURSELF ##################\ntrain_step = theano.function([image_vectors, sentences], loss, updates=updates)\nval_step = theano.function([image_vectors, sentences], loss)", "Training\n\nYou first have to implement a batch generator\nThen the network will get trained the usual way", "captions = np.array(captions)\n\nfrom random import choice\n\ndef generate_batch(images,captions,batch_size,max_caption_len=None):\n    #sample random 
numbers for image/caption indices\n    random_image_ix = np.random.randint(0, len(images), size=batch_size)\n    \n    #get images\n    batch_images = images[random_image_ix]\n    \n    #5-7 captions for each image\n    captions_for_batch_images = captions[random_image_ix]\n    \n    #pick 1 from 5-7 captions for each image\n    batch_captions = map(choice, captions_for_batch_images)\n    \n    #convert to matrix\n    batch_captions_ix = as_matrix(batch_captions,max_len=max_caption_len)\n    \n    return batch_images, batch_captions_ix\n\ngenerate_batch(img_codes,captions, 3)", "Main loop\n\nWe recommend that you periodically evaluate the network using the next \"apply trained model\" block.\nIt's safe to interrupt training, run a few examples and start training again\n\nFor the first (roughly 30) epochs I used batches of 20-30 elements, gradually increasing the batch size to 130. In total, about 110 epochs were computed.", "batch_size = 100 #adjust me\nn_epochs = 5 #adjust me\nn_batches_per_epoch = 50 #adjust me\nn_validation_batches = 5 #how many batches are used for validation after each epoch\n\n%%time\nfrom tqdm import tqdm\n\nfor epoch in range(n_epochs):\n    train_loss=0\n    for _ in tqdm(range(n_batches_per_epoch)):\n        train_loss += train_step(*generate_batch(img_codes,captions,batch_size))\n    train_loss /= n_batches_per_epoch\n    \n    val_loss=0\n    for _ in range(n_validation_batches):\n        val_loss += val_step(*generate_batch(img_codes,captions,batch_size))\n    val_loss /= n_validation_batches\n    \n    print('\\nEpoch: {}, train loss: {}, val loss: {}'.format(epoch, train_loss, val_loss))\n\nprint(\"Finish :)\")", "apply trained model", "#the same kind you did last week, but a bit smaller\nfrom pretrained_lenet import build_model,preprocess, MEAN_VALUES\n\n# build googlenet\nlenet = build_model()\n\n#load weights\nlenet_weights = pickle.load(open('data/blvc_googlenet.pkl'))['param values']\nset_all_param_values(lenet[\"prob\"], lenet_weights)\n\n#compile get_features\ncnn_input_var = lenet['input'].input_var\ncnn_feature_layer = 
lenet['loss3/classifier']\nget_cnn_features = theano.function([cnn_input_var], lasagne.layers.get_output(cnn_feature_layer))\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\n#sample image\nimg = plt.imread('data/Dog-and-Cat.jpg')\nimg = preprocess(img)\n\n#deprocess and show, one line :)\nfrom pretrained_lenet import MEAN_VALUES\nplt.imshow(np.transpose((img[0] + MEAN_VALUES)[::-1],[1,2,0]).astype('uint8'))", "Generate caption", "last_word_probas_det = get_output(predicted_probabilities,deterministic=False)[:,-1]\n\nget_probs = theano.function([image_vectors,sentences], last_word_probas_det)\n\n#this is exactly the generation function from week5 classwork,\n#except now we condition on image features instead of words\ndef generate_caption(image,caption_prefix = (\"START\",),t=1,sample=True,max_len=100):\n image_features = get_cnn_features(image)\n caption = list(caption_prefix)\n for _ in range(max_len):\n \n next_word_probs = get_probs(image_features,as_matrix([caption]) ).ravel()\n #apply temperature\n next_word_probs = next_word_probs**t / np.sum(next_word_probs**t)\n\n if sample:\n next_word = np.random.choice(vocab,p=next_word_probs) \n else:\n next_word = vocab[np.argmax(next_word_probs)]\n\n caption.append(next_word)\n\n if next_word==\"#END#\":\n break\n \n return caption\n\nfor i in range(10):\n print ' '.join(generate_caption(img,t=1.)[1:-1])", "Bonus Part\n\nUse ResNet Instead of GoogLeNet\nUse W2V as embedding\nUse Attention :) \n\nPass Assignment https://goo.gl/forms/2qqVtfepn0t1aDgh1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ConnectedSystems/veneer-py
doc/examples/simulation/VeneerBatchRuns.ipynb
isc
[ "import veneer\nfrom veneer.manage import BulkVeneer\n%matplotlib inline\nimport geopandas as gpd\nimport pandas as pd\n\nfrom veneer.batch import BatchRunner", "Connect to Source and test connection\nUsing 4 instances", "## Veneer started elsewhere (probably from a command line using veneer.manager.start)\nports = list(range(15004,15008))\nports\n\nbv = BulkVeneer(ports)\n\nv = bv.veneers[1]\n\nnetwork = v.network()\nnetwork.as_dataframe().plot()\n\nnetwork.outlet_nodes()\n\noutlet_node = network.outlet_nodes()[0]['properties']['name'] + '$'", "Enumerate the parameter combinations", "import numpy as np\n\nN_RUNS=100\n\nparams = {\n 'x1':np.random.uniform(1.0,1500.0,size=N_RUNS),\n 'x2':np.random.uniform(1.0,5.0,size=N_RUNS),\n 'x3':np.random.uniform(1.0,200.0,size=N_RUNS),\n 'x4':np.random.uniform(0.5,3.0,size=N_RUNS)\n}\nparams = pd.DataFrame(params)\nparams", "Specify the model changes\n(Much like we do when setting up a PEST job)", "runner = BatchRunner(bv.veneers)\n\nv.model.catchment.runoff.set_param_values?\n\nfor p in ['x1','x2','x3','x4']:\n runner.parameters.model.catchment.runoff.set_param_values(p,'$%s$'%p,fus=['Grazing'])", "Specify the result 'y' that we want to retrieve", "runner.retrieve('y').retrieve_multiple_time_series(criteria={'NetworkElement':outlet_node,'RecordingVariable':'Downstream Flow Volume'}).sum()[0]\n\n%xmode Verbose\nprint(runner._retrieval.script())", "Trigger the run...\nWill run the 100 simulations across the four instances of Source (25 runs each)", "jobs,results = runner.run(params)\n\n#jobs", "The results", "pd.DataFrame(results)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cyliustack/sofa
validation/auto-caption-algorithm.ipynb
apache-2.0
[ "Auto-caption\nDate: 2018/11/14\nPurpose: swarm name matching using the data below\nData source:\nauto_caption4.csv\nauto_caption5.csv\nauto_caption7.csv\nauto_caption8.csv\nauto_caption9.csv\nauto_caption10.csv\nauto_caption11.csv", "# system\nimport os\nimport sys\n# 3rd party lib\nimport pandas as pd\nfrom sklearn.cluster import KMeans\nfrom fuzzywuzzy import fuzz # string matching\n\nprint('Python version: {}'.format(sys.version))\nprint('\\n############################')\nprint('Pandas version: {}'.format(pd.show_versions()))", "Read file", "standard_df = pd.read_csv('auto_caption4.csv', names=['cluster_ID','timestamp','event','name'])\nprint('There are {} clusters in standard_df\\n'.format(len(standard_df['cluster_ID'].unique())))\nprint(standard_df.head(5))\n\n# default is axis=0\nstandard_df_groupby = standard_df.groupby(['cluster_ID','name']).agg({'name':['count']})\nprint(standard_df.groupby(['cluster_ID','name']).agg({'name':['count']}))", "Access data of multiIndex dataframe\n\npandas, how to access multiIndex dataframe?", "# get column names\ndf = standard_df_groupby.loc[0].reset_index() \nflat_column_names = []\nfor level in df.columns: \n    # tuple to list \n    flat_column_names.extend(list(level)) # extend(): in-place\n    \n# remove duplicate and empty\nflat_column_names = filter(None, flat_column_names) # filter empty\nflat_column_names = list(set(flat_column_names)) # deduplicate\nprint('original order: {}'.format(flat_column_names))\n\n# change member order of the list because set ordering is random\nif flat_column_names[0] == 'count':\n    myorder = [1,0]\n    flat_column_names = [flat_column_names[i] for i in myorder]\n    print('New order: {}'.format(flat_column_names))\n\nstandard_df_dict = {}\n\n# Transform multi-index to single index, and update string to dict standard_df_dict\nfor id_of_cluster in standard_df['cluster_ID'].unique():\n    print('\\n# of cluster: {}'.format(id_of_cluster))\n    df = standard_df_groupby.loc[id_of_cluster].reset_index() \n    df.columns 
= flat_column_names\n    print(df.sort_values(by=['count'], ascending=False))\n    \n    standard_df_dict.update({id_of_cluster: df.name.str.cat(sep=' ', na_rep='?')}) \n\nprint('################################')\nprint('\\nDictionary of swarm data: \\n{}'.format(standard_df_dict))", "DataFrame that I want to match", "matching_df1 = pd.read_csv('auto_caption5.csv', names=['cluster_ID','timestamp','event','name'])\nprint('There are {} clusters in matching_df1\\n'.format(len(matching_df1['cluster_ID'].unique())))\nprint(matching_df1.head(5))\n\n# default is axis=0\nmatching_df1_groupby = matching_df1.groupby(['cluster_ID','name']).agg({'name':['count']})\nprint(matching_df1.groupby(['cluster_ID','name']).agg({'name':['count']}))\n\n# get column names\ndf = matching_df1_groupby.loc[0].reset_index() \nflat_column_names = []\nfor level in df.columns: \n    # tuple to list \n    flat_column_names.extend(list(level)) # extend(): in-place\n\n# remove duplicate and empty\nflat_column_names = filter(None, flat_column_names) # filter empty\nflat_column_names = list(set(flat_column_names)) # deduplicate\nprint(flat_column_names)\n\n# change member order of the list because set ordering is random\nif flat_column_names[0] == 'count':\n    myorder = [1,0]\n    flat_column_names = [flat_column_names[i] for i in myorder]\n    print('New order: {}'.format(flat_column_names))\n\nmatching_df1_dict = {}\n\n# Transform multi-index to single index, and update string to dict matching_df1_dict\nfor id_of_cluster in matching_df1['cluster_ID'].unique():\n    print('\\n# of cluster: {}'.format(id_of_cluster))\n    df = matching_df1_groupby.loc[id_of_cluster].reset_index() \n    df.columns = flat_column_names\n    print(df.sort_values(by=['count'], ascending=False))\n    \n    matching_df1_dict.update({id_of_cluster: df.name.str.cat(sep=' ', na_rep='?')}) \n\nprint('################################')\nprint('\\nDictionary of swarm data: \\n{}'.format(matching_df1_dict))", "string matching function\n\n1-to-1 matching (or mapping)\nGitHub of 
fuzzywuzzy: link\nSearch keyword: You can try 'fuzzywuzzy' + 'pandas'", "def matching_two_dicts_of_swarm(standard_dict, matching_dict, res_dict): \n \"\"\" \n match two dictoinaries with same amount of key-value pairs\n and return matching result, a dict of dict called res_dict.\n \n * standard_dict: The standard of dict\n * matching_dict: The dict that i want to match\n * res_dict: the result, a dict of dict\n \"\"\"\n key = 0 # key: number, no string \n pop_list = [k for k,v in matching_dict.items()]\n print(pop_list)\n for i in standard_dict.keys(): # control access index of standard_dict. a more pythonic way\n threshold = 0\n for j in pop_list: # control access index of matching_dict\n f_ratio = fuzz.ratio(standard_dict[i], matching_dict[j])\n if f_ratio > threshold: # update matching result only when the fuzz ratio is greater\n print('New matching fuzz ratio {} is higher than threshold {}'\\\n .format(f_ratio, threshold))\n key = j # update key\n threshold = f_ratio # update threshold value\n print('Update new threshold {}'\\\n .format(threshold)) \n res_dict.update({i: {j: matching_dict[i]}}) # \n # pop out matched key-value pair of matching dict\n if pop_list:\n pop_list.remove(key) # remove specific value. 
remove() fails when no elements remain\n print(res_dict)\n return res_dict\n\nres_dict = {}\nres_dict = matching_two_dicts_of_swarm(standard_df_dict, matching_df1_dict, res_dict)\n\nprint(res_dict)", "show all stats (Ans) and matching results (algorithm)", "std_dict_to_df = pd.DataFrame.from_dict(standard_df_dict, orient='index', columns=['Before: function_name'])\nstd_dict_to_df['std_cluster_ID'] = std_dict_to_df.index\nstd_dict_to_df = std_dict_to_df[['std_cluster_ID', 'Before: function_name']]\nstd_dict_to_df\n\nmtch_df1_dict_to_df = pd.DataFrame.from_dict(matching_df1_dict, orient='index', columns=['Matching function_name'])\nmtch_df1_dict_to_df\n\nres_dict_to_df = pd.DataFrame()\nres_dict_to_df\n\nres_list = [k for k,v in res_dict.items()]\nfor key in res_list:\n df = pd.DataFrame.from_dict(res_dict[key], orient='index', columns=['After: function name']) # res_dict[key]: a dict\n df['mtch_cluster_ID'] = df.index\n #print(df)\n res_dict_to_df = res_dict_to_df.append(df, ignore_index=True) # df.append(): not in-place\n\nres_dict_to_df = res_dict_to_df[['mtch_cluster_ID', 'After: function name']]\nprint(res_dict_to_df.head(5))\n\nfinal_df = pd.concat([std_dict_to_df, res_dict_to_df], axis=1)\nfinal_df" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
xhqu1981/pymatgen
examples/Analyze and plot band structures.ipynb
mit
[ "This notebook shows some examples of methods on a BandStructureSymmLine object (getting band gaps, vbm, etc...) and basic plotting. Written by Geoffroy Hautier (geoffroy.hautier@uclouvain.be)\nWe start by querying the MP database for a band structure object. Please note that you could get such an object from a run (VASP, ABINIT, ...) using the methods in pymatgen.io", "from pymatgen.matproj.rest import MPRester\nfrom pymatgen.electronic_structure.core import Spin\n#This initializes the Rest connection to the Materials Project db. Put your own API key if needed.\na = MPRester()\n#load the band structure from mp-3748, CuAlO2 from the MP db\nbs = a.get_bandstructure_by_material_id(\"mp-3748\")", "We print some information about the band structure", "#is the material a metal (i.e., the fermi level crosses a band)\nprint(bs.is_metal())\n#print information on the band gap\nprint(bs.get_band_gap())\n#print the energy of the 20th band and 10th kpoint\nprint(bs.bands[Spin.up][20][10])\n#print energy of direct band gap\nprint(bs.get_direct_band_gap())\n#print information on the vbm\nprint(bs.get_vbm())", "Here, we plot the bs object. By default for an insulator we have an energy limit of cbm+4eV and vbm-4 eV", "%matplotlib inline\nfrom pymatgen.electronic_structure.plotter import BSPlotter\nplotter = BSPlotter(bs)\nplotter.get_plot().show()", "We plot the Brillouin zone with the path which was used for the band structure", "plotter.plot_brillouin()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/30d3f995eaa317ef564a8fb6d72c22bf/plot_left_cerebellum_volume_source.ipynb
bsd-3-clause
[ "%matplotlib inline", "Generate a left cerebellum volume source space\nGenerate a volume source space of the left cerebellum and plot its vertices\nrelative to the left cortical surface source space and the freesurfer\nsegmentation file.", "# Author: Alan Leggitt <alan.leggitt@ucsf.edu>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import setup_source_space, setup_volume_source_space\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nsubjects_dir = data_path + '/subjects'\nsubject = 'sample'\naseg_fname = subjects_dir + '/sample/mri/aseg.mgz'", "Setup the source spaces", "# setup a cortical surface source space and extract left hemisphere\nsurf = setup_source_space(subject, subjects_dir=subjects_dir, add_dist=False)\nlh_surf = surf[0]\n\n# setup a volume source space of the left cerebellum cortex\nvolume_label = 'Left-Cerebellum-Cortex'\nsphere = (0, 0, 0, 0.12)\nlh_cereb = setup_volume_source_space(\n subject, mri=aseg_fname, sphere=sphere, volume_label=volume_label,\n subjects_dir=subjects_dir, sphere_units='m')\n\n# Combine the source spaces\nsrc = surf + lh_cereb", "Plot the positions of each source space", "fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,\n surfaces='white', coord_frame='head',\n src=src)\nmne.viz.set_3d_view(fig, azimuth=173.78, elevation=101.75,\n distance=0.30, focalpoint=(-0.03, -0.01, 0.03))", "You can export source positions to a NIfTI file::\n&gt;&gt;&gt; nii_fname = 'mne_sample_lh-cerebellum-cortex.nii'\n&gt;&gt;&gt; src.export_volume(nii_fname, mri_resolution=True)\n\nAnd display source positions in freeview::\n\n\n\nfrom mne.utils import run_subprocess\nmri_fname = subjects_dir + '/sample/mri/brain.mgz'\nrun_subprocess(['freeview', '-v', mri_fname, '-v',\n '%s:colormap=lut:opacity=0.5' % aseg_fname, '-v',\n '%s:colormap=jet:colorscale=0,2' % nii_fname,\n '-slice', '157 75 105'])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/zh-cn/io/tutorials/azure.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow IO Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "将 Azure Blob 存储与 TensorFlow 结合使用\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://tensorflow.google.cn/io/tutorials/azure\"><img src=\"https://tensorflow.google.cn/images/tf_logo_32px.png\">在 TensorFlow.org 上查看</a> </td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/azure.ipynb\"><img src=\"https://tensorflow.google.cn/images/colab_logo_32px.png\">在 Google Colab 中运行</a></td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/io/tutorials/azure.ipynb\"><img src=\"https://tensorflow.google.cn/images/GitHub-Mark-32px.png\">在 Github 上查看源代码</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/io/tutorials/azure.ipynb\"><img src=\"https://tensorflow.google.cn/images/download_logo_32px.png\">下载笔记本</a> </td>\n</table>\n\n小心:除 Python 软件包外,此笔记本还使用 npm install --user 安装软件包。在本地运行时要注意。\n概述\n本教程介绍如何通过 TensorFlow IO 的 Azure 文件系统集成,使用 TensorFlow 读写 Azure Blob 存储上的文件。\n您需要有一个 Azure 存储帐户才能读写 Azure Blob 存储上的文件。Azure 存储密钥应通过环境变量提供:\nos.environ['TF_AZURE_STORAGE_KEY'] = '&lt;key&gt;'\n文件名 URI 包含存储帐户名称和容器名称:\nazfs://&lt;storage-account-name&gt;/&lt;container-name&gt;/&lt;path&gt;\n在本教程中,出于演示目的,您可以选择设置 Azurite(Azure 存储模拟器)。利用 Azurite 模拟器,您可以使用 TensorFlow 通过 Azure Blob 
存储界面读写文件。\n设置和使用\n安装要求的软件包,然后重新启动运行时", "try:\n %tensorflow_version 2.x \nexcept Exception:\n pass\n\n!pip install tensorflow-io", "安装并设置 Azurite(可选)\n如果没有可用的 Azure 存储帐户,则需要执行以下命令才能安装和设置模拟 Azure 存储界面的 Azurite:", "!npm install azurite@2.7.0\n\n# The path for npm might not be exposed in PATH env,\n# you can find it out through 'npm bin' command\nnpm_bin_path = get_ipython().getoutput('npm bin')[0]\nprint('npm bin path: ', npm_bin_path)\n\n# Run `azurite-blob -s` as a background process. \n# IPython doesn't recognize `&` in inline bash cells.\nget_ipython().system_raw(npm_bin_path + '/' + 'azurite-blob -s &')", "使用 TensorFlow 读写 Azure 存储上的文件\n下面是使用 TensorFlow 的 API 读写 Azure 存储上的文件的一个示例。\n导入 tensorflow-io 软件包后,它与 TensorFlow 中其他文件系统(例如,POSIX 或 GCS)的行为相同,因为 tensorflow-io 会自动注册 azfs 方案以供使用。\nAzure 存储密钥应通过 TF_AZURE_STORAGE_KEY 环境变量提供。否则,可将 TF_AZURE_USE_DEV_STORAGE 设置为 True 以使用 Azurite 模拟器:", "import os\nimport tensorflow as tf\nimport tensorflow_io as tfio\n\n# Switch to False to use Azure Storage instead:\nuse_emulator = True\n\nif use_emulator:\n os.environ['TF_AZURE_USE_DEV_STORAGE'] = '1'\n account_name = 'devstoreaccount1'\nelse:\n # Replace <key> with Azure Storage Key, and <account> with Azure Storage Account\n os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'\n account_name = '<account>'\n\n # Alternatively, you can use a shared access signature (SAS) to authenticate with the Azure Storage Account\n os.environ['TF_AZURE_STORAGE_SAS'] = '<your sas>'\n account_name = '<account>'\n\npathname = 'az://{}/aztest'.format(account_name)\ntf.io.gfile.mkdir(pathname)\n\nfilename = pathname + '/hello.txt'\nwith tf.io.gfile.GFile(filename, mode='w') as w:\n w.write(\"Hello, world!\")\n\nwith tf.io.gfile.GFile(filename, mode='r') as r:\n print(r.read())", "配置\n在 TensorFlow 中,始终通过环境变量完成 Azure Blob 存储的配置。下面是可用配置的完整列表:\n\nTF_AZURE_USE_DEV_STORAGE:对于“az://devstoreaccount1/container/file.txt”之类的连接,设置为 1 可使用本地开发存储模拟器。该设置的优先级高于所有其他设置,所以,要使用任何其他连接,请将其设置为 
unset。\nTF_AZURE_STORAGE_KEY:使用的存储帐户的帐户密钥\nTF_AZURE_STORAGE_USE_HTTP:如果不想使用 https 传输,则可将其设置为任何值。将其设置为 unset 可使用默认值 https\nTF_AZURE_STORAGE_BLOB_ENDPOINT:设置为 Blob 存储的端点 - 默认值为 .core.windows.net。" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
NKhan121/Portfolio
Global Terrorism Bayesian Analysis/Predicting 1993 Bombings_Explosions from START.ipynb
mit
[ "Predicting 1993 bombings/explosions by taking the mean of 1990-1992 and 1994-1996 bombings/explosions Worldwide.", "import pandas as pd\nxls_file = pd.ExcelFile('globalterrorismdb_0616dist.xlsx')\nxls_file\n\nxls_file.sheet_names\n\ndfwhole = xls_file.parse('Data')\ndfwhole.head(2)\n\ndfwhole.groupby('iyear').attacktype1.value_counts().sum()\n\ndfwhole.groupby(['iyear', 'country_txt']).attacktype1.count()\n\ndfwhole[(dfwhole.attacktype1 == 3)]\n", "Creating boolean filters to pull the sum of bombings/explosions for each of 1990-1992 and 1994-1996.", "dfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1990)].groupby(['iyear', 'region']).attacktype1.count().sum()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1991)].groupby(['iyear', 'region']).attacktype1.count().sum()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1992)].groupby(['iyear', 'region']).attacktype1.value_counts().sum()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1994)].groupby(['iyear', 'region']).attacktype1.value_counts().sum()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1995)].groupby(['iyear', 'region']).attacktype1.value_counts().sum()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n          (dfwhole.iyear == 1996)].groupby(['iyear', 'region']).attacktype1.value_counts().sum()", "Calculating the mean of the above 6 values from the 3 years before and after 1993.", "(1217+791+1153+1738+1988+1731)/6\n# This approach estimates that 1436 bombings/explosions took place in 1993.\n\n## Looking at the number of bombs/explosions by region. 
considering taking a look at mean of regions versus overall \n## as I did above.\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1990)].groupby(['region_txt']).attacktype1_txt.value_counts()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1991)].groupby(['region_txt']).attacktype1_txt.value_counts()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1992)].groupby(['region_txt']).attacktype1_txt.value_counts()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1994)].groupby(['region_txt']).attacktype1_txt.value_counts()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1995)].groupby(['region_txt']).attacktype1_txt.value_counts()\n\ndfwhole[(dfwhole.attacktype1 ==3) & \n (dfwhole.iyear == 1996)].groupby(['region_txt']).attacktype1_txt.value_counts()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
tennem01/pymks_overview
notebooks/stress_homogenization_2D.ipynb
mit
[ "Effective Stiffness\nIntroduction\nThis example uses the MKSHomogenizationModel to create a homogenization linkage for the effective stiffness. This example starts with a brief background of the homogenization theory on the components of the effective elastic stiffness tensor for a composite material. Then the example generates random microstructures and their average stress values that will be used to show how to calibrate and use our model. We will also show how to use tools from sklearn to optimize fit parameters for the MKSHomogenizationModel. Lastly, the data is used to evaluate the MKSHomogenizationModel for effective stiffness values for a new set of microstructures.\nLinear Elasticity and Effective Elastic Modulus\nFor this example we are looking to create a homogenization linkage that predicts the effective isotropic stiffness components for two-phase microstructures. The specific stiffness component we are looking to predict in this example is $C_{xxxx}$ which is easily accessed by applying an uniaxial macroscal strain tensor (the only non-zero component is $\\varepsilon_{xx}$. \n$$ u(L, y) = u(0, y) + L\\bar{\\varepsilon}_{xx}$$\n$$ u(0, L) = u(0, 0) = 0 $$\n$$ u(x, 0) = u(x, L) $$\nMore details about these boundary conditions can be found in [1]. 
Using these boundary conditions, $C_{xxxx}$ can be estimated by calculating the ratio of the averaged stress over the applied averaged strain.\n$$ C_{xxxx}^* \\cong \\bar{\\sigma}_{xx} / \\bar{\\varepsilon}_{xx}$$ \nIn this example, $C_{xxxx}$ for 6 different types of microstructures will be estimated using the MKSHomogenizationModel from pymks, which also provides a method to compute $\\bar{\\sigma}_{xx}$ for a new microstructure with an applied strain of $\\bar{\\varepsilon}_{xx}$.", "%matplotlib inline\n%load_ext autoreload\n%autoreload 2\n\nimport numpy as np\nimport matplotlib.pyplot as plt", "Data Generation\nA set of periodic microstructures and their volume averaged elastic stress values $\\bar{\\sigma}_{xx}$ can be generated by importing the make_elastic_stress_random function from pymks.datasets. This function has several arguments. n_samples is the number of samples that will be generated, size specifies the dimensions of the microstructures, grain_size controls the effective microstructure feature size, elastic_modulus and poissons_ratio are used to indicate the material property for each of the\nphases, macro_strain is the value of the applied uniaxial strain, and the seed can be used to change the random number generator seed.\nLet's go ahead and create 6 different types of microstructures each with 200 samples with dimensions 21 x 21. Each of the 6 samples will have a different microstructure feature size. 
The function will return the microstructures and their associated volume averaged stress values.", "from pymks.datasets import make_elastic_stress_random\nsample_size = 200\ngrain_size = [(15, 2), (2, 15), (7, 7), (8, 3), (3, 9), (2, 2)]\nn_samples = [sample_size] * 6\nelastic_modulus = (380, 200)\npoissons_ratio = (0.28, 0.3)\nmacro_strain = 0.001\nsize = (21, 21)\n\nX, y = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size, \n                             elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio, \n                             macro_strain=macro_strain, seed=0)\n", "The array X contains the microstructure information and has the dimensions \nof (n_samples, Nx, Ny). The array y contains the average stress value for \neach of the microstructures and has dimensions of (n_samples,).", "print(X.shape)\nprint(y.shape)", "Let's take a look at the 6 types of microstructures to get an idea of what they \nlook like. We can do this by importing draw_microstructures.", "from pymks.tools import draw_microstructures\n\ndraw_microstructures(X[::sample_size])\n", "In this dataset 4 of the 6 microstructure types have grains that are elongated in either\nthe x or y directions. 
The remaining 2 types of samples have equiaxed grains with\ndifferent average sizes.\nLet's look at the stress values for each of the microstructures shown above.", "print('Stress Values'), (y[::200])", "Now that we have a dataset to work with, we can look at how to use the MKSHomogenizationModel to predict stress values for new microstructures.\nMKSHomogenizationModel Work Flow\nThe default instance of the MKSHomogenizationModel takes in a dataset and \n - calculates the 2-point statistics\n - performs dimensionality reduction using Singular Value Decomposition (SVD)\n - and fits a polynomial regression model to the low-dimensional representation.\nThis work flow has been shown to accurately predict effective properties in several examples [2][3], and requires that we specify the number of components used in dimensionality reduction and the order of the polynomial we will be using for the polynomial regression. In this example we will show how we can use tools from sklearn to try and optimize our selection for these two parameters.\nModeling with MKSHomogenizationModel\nIn order to make an instance of the MKSHomogenizationModel, we need to pass an instance of a basis (used to compute the 2-point statistics). For this particular example, there are only 2 discrete phases, so we will use the PrimitiveBasis from pymks. 
We only have two phases denoted by 0 and 1, therefore we have two local states and our domain is 0 to 1.\nLet's make an instance of the MKSHomogenizationModel.", "from pymks import MKSHomogenizationModel\nfrom pymks import PrimitiveBasis\n\nprim_basis = PrimitiveBasis(n_states=2, domain=[0, 1])\nmodel = MKSHomogenizationModel(basis=prim_basis)\n", "Let's take a look at the default values for the number of components and the order of the polynomial.", "print('Default Number of Components'), (model.n_components)\nprint('Default Polynomial Order'), (model.degree)", "These default parameters may not give the best model for a given problem; we will now show one method that can be used to optimize them.\nOptimizing the Number of Components and Polynomial Order\nTo start with, we can look at how the variance changes as a function of the number of components.\nIn general for SVD as well as PCA, the amount of variance captured in each component decreases\nas the component number increases.\nThis means that as the number of components used in the dimensionality reduction increases, the percentage of the variance will asymptotically approach 100%. Let's see if this is true for our dataset.\nIn order to do this we will change the number of components to 40 and then\nfit the data we have using the fit function. This function performs the dimensionality reduction and \nalso fits the regression model. Because our microstructures are periodic, we need to \nuse the periodic_axes argument when we fit the data.\nNow look at how the cumulative variance changes as a function of the number of components using draw_component_variance \nfrom pymks.tools.", "from pymks.tools import draw_component_variance\n", "Roughly 90 percent of the variance is captured with the first 5 components. This means our model may only need a few components to predict the average stress.\nNext we need to optimize the number of components and the polynomial order. To do this we are going to split the data into testing and training sets. 
This can be done using the train_test_split function from sklearn.", "from sklearn.cross_validation import train_test_split\n\nflat_shape = (X.shape[0],) + (np.prod(X.shape[1:]),)\n\nX_train, X_test, y_train, y_test = train_test_split(X.reshape(flat_shape), y,\n                                                   test_size=0.2, random_state=3)\nprint(X_train.shape)\nprint(X_test.shape)", "We will use cross validation with the training data to fit a number \nof models, each with a different number \nof components and a different polynomial order.\nThen we will use the testing data to verify the best model. \nThis can be done using GridSearchCV \nfrom sklearn.\nWe will pass a dictionary params_to_tune with the range of\npolynomial order degree and components n_components we want to try.\nA dictionary fit_params can be used to pass the periodic_axes variable to \ncalculate periodic 2-point statistics. The argument cv can be used to specify \nthe number of folds used in cross validation and n_jobs can be used to specify \nthe number of jobs that are run in parallel.\nLet's vary n_components from 1 to 7 and degree from 1 to 3.", "from sklearn.grid_search import GridSearchCV\n\nparams_to_tune = {'degree': np.arange(1, 4), 'n_components': np.arange(1, 8)}\nfit_params = {'size': X[0].shape, 'periodic_axes': [0, 1]}\ngs = GridSearchCV(model, params_to_tune, cv=3, n_jobs=1, fit_params=fit_params).fit(X_train, y_train)", "The default score method for the MKSHomogenizationModel is the R-squared value. Let's look at how the mean R-squared values and their \nstandard deviations change as we vary n_components and degree using\ndraw_gridscores_matrix from pymks.tools.", "from pymks.tools import draw_gridscores_matrix\n", "It looks like we get a poor fit when only the first and second component are used, and when we increase\nthe polynomial order and the components together. The models have a high standard deviation and \npoor R-squared values for both of these cases.\nThere seem to be several potential models that use 3 to 6 components. 
It's difficult to see which model \nis the best. Let's use our testing data X_test to see which model performs the best.", "print('Order of Polynomial'), (gs.best_estimator_.degree)\nprint('Number of Components'), (gs.best_estimator_.n_components)\nprint('R-squared Value'), (gs.score(X_test, y_test))", "For the parameter range that we searched, we have found that a model with a 3rd order polynomial \nand 3 components had the best R-squared value. It's difficult to see the differences in the score\nvalues and the standard deviation when we have 3 or more components. Let's take a closer look at those values using draw_gridscores.", "from pymks.tools import draw_gridscores\n\ngs_deg_1 = [x for x in gs.grid_scores_ \\\n            if x.parameters['degree'] == 1][2:-1]\ngs_deg_2 = [x for x in gs.grid_scores_ \\\n            if x.parameters['degree'] == 2][2:-1]\ngs_deg_3 = [x for x in gs.grid_scores_ \\\n            if x.parameters['degree'] == 3][2:-1]\n\ndraw_gridscores([gs_deg_1, gs_deg_2, gs_deg_3], 'n_components', \n                data_labels=['1st Order', '2nd Order', '3rd Order'], \n                colors=['#f46d43', '#1a9641', '#762a83'],\n                param_label='Number of Components', score_label='R-Squared')", "As we said, a model with a 3rd order polynomial and 3 components will give us the best result,\nbut there are several other models that will likely provide comparable results. Let's make the\nbest model from our grid scores.\nPrediction using MKSHomogenizationModel\nNow that we have selected values for n_components and degree, let's fit the model with the data. Again because\nour microstructures are periodic, we need to use the periodic_axes argument.\nLet's generate some more data that can be used to try and validate our model's prediction accuracy. We are going to\n
We are going to\ngenerate 20 samples of all six different types of microstructures using the same \nmake_elastic_stress_random function.", "test_sample_size = 20\nn_samples = [test_sample_size] * 6\n = make_elastic_stress_random(n_samples=n_samples, size=size, grain_size=grain_size, \n elastic_modulus=elastic_modulus, poissons_ratio=poissons_ratio, \n macro_strain=macro_strain, seed=1)", "Now let's predict the stress values for the new microstructures. \nWe can look to see if the low-dimensional representation of the \nnew data is similar to the low-dimensional representation of the data \nwe used to fit the model using draw_components from pymks.tools.", "from pymks.tools import draw_components\n\n", "The predicted data seems to be reasonably similar to the data we used to fit the model\nwith. Now let's look at the score value for the predicted data.", "from sklearn.metrics import r2_score\nprint('R-squared'), (model.score(X_new, y_new, periodic_axes=[0, 1]))", "Looks pretty good. Let's print out one actual and predicted stress value for each of the 6 microstructure types to see how they compare.", "print('Actual Stress '), (y_new[::20])\nprint('Predicted Stress'), (y_predict[::20])", "Lastly, we can also evaluate our prediction by looking at a goodness-of-fit plot. We\ncan do this by importing draw_goodness_of_fit from pymks.tools.", "from pymks.tools import draw_goodness_of_fit\n", "We can see that the MKSHomogenizationModel has created a homogenization linkage for the effective stiffness for the 6 different microstructures and has predicted the average stress values for our new microstructures reasonably well.\nReferences\n[1] Landi, G., S.R. Niezgoda, S.R. Kalidindi, Multi-scale modeling of elastic response of three-dimensional voxel-based microstructure datasets using novel DFT-based knowledge systems. Acta Materialia, 2009. 58 (7): p. 2716-2725 doi:10.1016/j.actamat.2010.01.007.\n[2] Çeçen, A., et al. 
\"A data-driven approach to establishing microstructure–property relationships in porous transport layers of polymer electrolyte fuel cells.\" Journal of Power Sources 245 (2014): 144-153. doi:10.1016/j.jpowsour.2013.06.100\n[3] Deshpande, P. D., et al. \"Application of Statistical and Machine Learning Techniques for Correlating Properties to Composition and Manufacturing Processes of Steels.\" 2nd World Congress on Integrated Computational Materials Engineering. John Wiley & Sons, Inc. doi:10.1002/9781118767061.ch25" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
boffi/boffi.github.io
dati_2017/02/Duhamel.ipynb
mit
[ "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Duhamel Integral\nProblem Data", "M = 600000\nT = 0.6\nz = 0.10\np0 = 400000\nt0, t1, t2, t3 = 0.0, 1.0, 3.0, 6.0", "Natural Frequency, Damped Frequency", "wn = 2*np.pi/T\nwd = wn*np.sqrt(1-z**2)", "Computation\nPreliminaries\nWe chose a time step and we compute a number of constants of the integration procedure that depend on the time step", "dt = 0.05\nedt = np.exp(-z*wn*dt)\nfac = dt/(2*M*wd)", "We initialize a time variable", "t = dt*np.arange(1+int(t3/dt))", "We compute the load, the sines and the cosines of $\\omega_D t$ and their products", "p = np.where(t<=t1, p0*(t-t0)/(t1-t0), np.where(t<t2, p0*(1-(t-t1)/(t2-t1)), 0))\n\ns = np.sin(wd*t)\nc = np.cos(wd*t)\nsp = s*p\ncp = c*p\n\nplt.plot(t, p/1000)\nplt.xlabel('Time/s')\nplt.ylabel('Force/kN')\nplt.xlim((t0,t3))\nplt.grid();", "The main (and only) loop in our code, we initialize A, B and a container for saving the deflections x,\nthen we compute the next values of A and B, the next value of x is eventually appended to the container.", "A, B, x = 0, 0, [0]\n\nfor i, _ in enumerate(t[1:], 1):\n A = A*edt+fac*(cp[i-1]*edt+cp[i])\n B = B*edt+fac*(sp[i-1]*edt+sp[i])\n x.append(A*s[i]-B*c[i])", "It is necessary to plot the response.", "x = np.array(x)\n\nplt.plot(t, x*1000)\nplt.xlabel('Time/s')\nplt.ylabel('Deflection/mm')\nplt.xlim((t0,t3))\nplt.grid()\nplt.show();" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rishuatgithub/MLPy
torch/PYTORCH_NOTEBOOKS/05-Using-GPU/00-Using-GPU-and-CUDA.ipynb
apache-2.0
[ "<img src=\"../Pierian-Data-Logo.PNG\">\n<br>\n<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>\nWhat is CUDA?\nMost people confuse CUDA for a language or maybe an API. It is not.\nIt’s more than that. CUDA is a parallel computing platform and programming model that makes using a GPU for general purpose computing simple and elegant. The developer still programs in the familiar C, C++, Fortran, or an ever expanding list of supported languages, and incorporates extensions of these languages in the form of a few basic keywords.\nThese keywords let the developer express massive amounts of parallelism and direct the compiler to the portion of the application that maps to the GPU.\nHow do I install PyTorch for GPU?\nRefer to video, its dependent on whether you have an NVIDIA GPU card or not.\nHow do I know if I have CUDA available?", "import torch\ntorch.cuda.is_available()\n# True", "Using GPU and CUDA\nWe've provided 2 versions of our yml file, a GPU version and a CPU version. 
To use GPU, you need to either manually create a virtual environment, please watch the video related to this lecture, as not every computer can run GPU, you need CUDA and an NVIDIA GPU.", "## Get Id of default device\ntorch.cuda.current_device()\n\n# 0\ntorch.cuda.get_device_name(0) # Get name device with ID '0'\n\n# Returns the current GPU memory usage by \n# tensors in bytes for a given device\ntorch.cuda.memory_allocated()\n\n# Returns the current GPU memory managed by the\n# caching allocator in bytes for a given device\ntorch.cuda.memory_cached()", "Using CUDA instead of CPU", "# CPU\na = torch.FloatTensor([1.,2.])\n\na\n\na.device\n\n# GPU\na = torch.FloatTensor([1., 2.]).cuda()\n\na.device\n\ntorch.cuda.memory_allocated()", "Sending Models to GPU", "import torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom torch.utils.data import Dataset, DataLoader\nfrom sklearn.model_selection import train_test_split\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nclass Model(nn.Module):\n def __init__(self, in_features=4, h1=8, h2=9, out_features=3):\n super().__init__()\n self.fc1 = nn.Linear(in_features,h1) # input layer\n self.fc2 = nn.Linear(h1, h2) # hidden layer\n self.out = nn.Linear(h2, out_features) # output layer\n \n def forward(self, x):\n x = F.relu(self.fc1(x))\n x = F.relu(self.fc2(x))\n x = self.out(x)\n return x\n\ntorch.manual_seed(32)\nmodel = Model()\n\n# From the discussions here: discuss.pytorch.org/t/how-to-check-if-model-is-on-cuda\nnext(model.parameters()).is_cuda\n\ngpumodel = model.cuda()\n\nnext(gpumodel.parameters()).is_cuda\n\ndf = pd.read_csv('../Data/iris.csv')\nX = df.drop('target',axis=1).values\ny = df['target'].values\nX_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=33)", "Convert Tensors to .cuda() tensors", "X_train = torch.FloatTensor(X_train).cuda()\nX_test = torch.FloatTensor(X_test).cuda()\ny_train = torch.LongTensor(y_train).cuda()\ny_test = 
torch.LongTensor(y_test).cuda()\n\ntrainloader = DataLoader(X_train, batch_size=60, shuffle=True)\ntestloader = DataLoader(X_test, batch_size=60, shuffle=False)\n\ncriterion = nn.CrossEntropyLoss()\noptimizer = torch.optim.Adam(model.parameters(), lr=0.01)\n\nimport time\nepochs = 100\nlosses = []\nstart = time.time()\nfor i in range(epochs):\n i+=1\n y_pred = gpumodel.forward(X_train)\n loss = criterion(y_pred, y_train)\n losses.append(loss)\n \n # a neat trick to save screen space:\n if i%10 == 1:\n print(f'epoch: {i:2} loss: {loss.item():10.8f}')\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n \nprint(f'TOTAL TRAINING TIME: {time.time()-start}')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
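The notebook above moves each tensor and model to the GPU by hand with `.cuda()`. A device-agnostic pattern is usually preferred so the same code runs with or without a GPU; this is a minimal sketch using only standard PyTorch (`torch.device`, `.to()`, `device=` keyword):

```python
import torch
import torch.nn as nn

# Select the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 3).to(device)    # moves the parameters in place
x = torch.randn(8, 4, device=device)  # allocate the data directly on the same device

out = model(x)
print(out.shape)  # torch.Size([8, 3])
```

Written this way, every explicit `.cuda()` call in the lesson becomes `.to(device)`, and the code degrades gracefully to the CPU on machines without CUDA.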
ES-DOC/esdoc-jupyterhub
notebooks/bcc/cmip6/models/sandbox-1/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: BCC\nSource ID: SANDBOX-1\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:39\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
waynegm/OpendTect-External-Attributes
Python_3/Jupyter/z_delay_estimation.ipynb
mit
[ "Delay Estimation\nOpendTect has the Match Delta attribute to measure time shifts between similar events in different seismic volumes. It works by finding the peaks in each volume and then tries to match them up. The algorithm is simple and fast but the results can be quite noisy. The following is the result of running Match Delta between a depth section and itself shifted up by 13 metres. \n\nThe purpose of this notebook is to test some other options that could then be used in an External Attribute script.\nSimple Test Signal", "import numpy as np\nimport scipy.signal as sig\nimport matplotlib.pyplot as plt\n%pylab inline\nimport test_signals as tst\n\ndef make_signals(nsamp,delay ):\n ref = np.random.rand(nsamp+abs(delay))*2-1\n wav = sig.ricker(80,5)\n filtered = np.convolve(ref, wav,'same')\n if delay < 0 :\n return filtered[0:nsamp], filtered[-delay:nsamp-delay]\n else:\n return filtered[delay:nsamp+delay], filtered[0:nsamp]\n \n#res, shifted = make_signals(1000,10)\nres, shifted = tst.make_delayed_signal_pair(1000,10)\nfig = plt.figure(figsize=(12,1))\nplt.plot(res)\nplt.plot(shifted)\nshifted.shape", "Local Normalised Cross-correlation\nHere is a mainly numpy version of local normalised cross-correlation.", "#\n# Rolling sum of squares\ndef rollingSSQ(inp, winlen):\n inpsq = np.square(inp)\n kernel = np.ones(winlen)\n return np.convolve(inpsq, kernel, 'same')\n \ndef localCorr_numpy(reference, match, winlen, nlag,lag,qual):\n hwin = winlen//2\n lags = 2*nlag+1\n ns = reference.shape[0]\n hxw = hwin-nlag\n cor = np.zeros(lags)\n refSSQ = np.sqrt(rollingSSQ(reference,2*hxw+1))\n matSSQ = np.sqrt(rollingSSQ(match,2*hxw+1))\n for ir in range(hwin,ns-hwin):\n rbeg = ir - hxw\n rend = ir + hxw + 1\n mbeg = rbeg - nlag\n mend = rend + nlag \n cor = np.divide(np.correlate(match[mbeg:mend],reference[rbeg:rend],'valid'),matSSQ[ir-nlag:ir+nlag+1]*refSSQ[ir])\n pos = np.argmax(cor)\n if pos>0 and pos<lags-1:\n cp = 
(cor[pos-1]-cor[pos+1])/(2.*cor[pos-1]-4.*cor[pos]+2.*cor[pos+1])\n lag[ir] = pos-nlag+cp\n qual[ir] = cor[pos]\n else:\n lag[ir]=0.0\n qual[ir]=0.0\n\nlag = np.zeros(res.shape)\nqual = np.zeros(res.shape)\nlocalCorr_numpy(res,shifted,51,15,lag,qual)\nfig = plt.figure(figsize=(12,2))\nplt.plot(lag)\nplt.plot(qual)\n\n%timeit -o localCorr_numpy(res,shifted,51,15,lag,qual)", "Can we get a faster result using Numba?", "import sys,os\nfrom numba import jit\nsys.path.insert(0, os.path.join(sys.path[0], '..'))\nimport extnumba as xn\n\n@jit(nopython=True)\ndef localCorr_numba(reference, match, winlen, nlag,lag,qual):\n hwin = winlen//2\n lags = 2*nlag+1\n ns = reference.shape[0]\n hxw = hwin-nlag\n cor = np.zeros(lags)\n refSSQ = np.zeros(ns)\n matSSQ = np.zeros(ns)\n xn.winSSQ(reference,2*hxw+1,refSSQ)\n xn.winSSQ(match,2*hxw+1,matSSQ)\n for ir in range(hwin,ns-hwin):\n rbeg = ir - hxw\n rend = ir + hxw + 1\n mbeg = rbeg - nlag\n mend = rend + nlag\n for il in range(lags):\n lbeg = rbeg + il - nlag\n lend = lbeg + 2 * hxw + 1\n sum = 0.0\n for iref,imat in zip(range(rbeg,rend),range(lbeg,lend)):\n sum += reference[iref]*match[imat]\n den = refSSQ[ir]*matSSQ[lbeg+hxw]\n if den== 0.0:\n cor[il] = 0.0\n else:\n cor[il] = sum/den\n pos = np.argmax(cor)\n if pos>0 and pos<lags-1:\n cp = (cor[pos-1]-cor[pos+1])/(2.*cor[pos-1]-4.*cor[pos]+2.*cor[pos+1])\n lag[ir] = pos-nlag+cp\n qual[ir] = cor[pos]\n else:\n lag[ir]=0.0\n qual[ir]=0.0\n\nlocalCorr_numba(res,shifted,51,15,lag,qual)\nfig = plt.figure(figsize=(12,2))\nplt.plot(lag)\nplt.plot(qual)\n\n%timeit -o localCorr_numba(res,shifted,51,15,lag,qual)", "The Numba version is ~6 times faster.\nLocal Normalised Cross-correlation External Attribute\nHere is the result using the same seismic test case shown above but using a local normalised cross-correlation external attribute script. Results are clearly superior to the Match Delta attribute albeit with a longer calculation time. 
\n\nAlong with the delay estimate, the attribute can also output the correlation coefficient, with 1 indicating perfect correlation and 0 no correlation, which provides a quality-control measure." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
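The `rollingSSQ` helper in the numpy implementation computes a windowed sum of squares by convolving the squared trace with a ones kernel. A minimal standalone sketch of that trick on a tiny array (the function name and sample values here are illustrative):

```python
import numpy as np

def rolling_ssq(x, winlen):
    # Windowed sum of squares via convolution with a ones kernel,
    # the same idea as rollingSSQ in the notebook above.
    return np.convolve(np.square(x), np.ones(winlen), 'same')

x = np.array([1.0, 2.0, 3.0, 4.0])
ssq = rolling_ssq(x, 3)
# Centre entries hold full 3-sample sums of squares (1+4+9 and 4+9+16);
# the 'same' mode leaves partial sums at the edges.
```

Note that, as in the notebook, the edge samples only see a partial window, which is one reason `localCorr_numpy` restricts its loop to `range(hwin, ns-hwin)`.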
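Stripped of the sliding-window machinery, both `localCorr` implementations boil down to scoring candidate integer lags with normalised cross-correlation and keeping the best one. A self-contained sketch of that core idea (function name and test signal are illustrative, not from the notebook):

```python
import numpy as np

def best_integer_lag(reference, match, max_lag):
    """Return the integer lag at which `match` best aligns with `reference`,
    scored by normalised cross-correlation over the overlapping samples."""
    n = len(reference)
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        lo, hi = max(0, -lag), min(n, n - lag)
        r = reference[lo:hi]
        m = match[lo + lag:hi + lag]
        denom = np.linalg.norm(r) * np.linalg.norm(m)
        if denom == 0.0:
            continue  # dead window, no meaningful correlation
        corr = np.dot(r, m) / denom
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return best_lag

# `delayed` is `reference` shifted by 3 samples, so the best lag should be 3
x = np.sin(np.linspace(0.0, 20.0, 500))
delayed = np.roll(x, 3)
lag = best_integer_lag(x, delayed, max_lag=10)
```

The notebook's versions do the same search independently inside every analysis window, which is what makes the estimate local rather than a single global shift.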
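The sub-sample refinement in the `cp` expression of both implementations is three-point parabolic interpolation of the correlation peak; the formula used there is algebraically the same as the standard `0.5*(y[-1]-y[+1])/(y[-1]-2*y[0]+y[+1])` form. A standalone sketch with an illustrative test parabola (names hypothetical):

```python
def parabolic_peak_offset(ym1, y0, yp1):
    # Fit a parabola through three equally spaced samples around a peak and
    # return the sub-sample offset of the vertex from the centre sample.
    denom = ym1 - 2.0 * y0 + yp1
    if denom == 0.0:
        return 0.0  # degenerate (flat) triple; no refinement possible
    return 0.5 * (ym1 - yp1) / denom

# Sample a parabola peaking at x = 0.3 at x = -1, 0, +1; the recovered
# offset is exact because the interpolation model matches the signal.
ym1, y0, yp1 = [1.0 - (x - 0.3) ** 2 for x in (-1.0, 0.0, 1.0)]
offset = parabolic_peak_offset(ym1, y0, yp1)
```

This is why the notebook only applies the refinement when `pos` is strictly inside the lag range: the interpolation needs a neighbour on each side of the peak.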