repo_name
stringlengths
6
77
path
stringlengths
8
215
license
stringclasses
15 values
cells
list
types
list
FederatedAI/FATE
doc/tutorial/pipeline/pipeline_tutorial_match_id.ipynb
apache-2.0
[ "Pipeline Match ID Tutorial\nStarting at version 1.7, FATE distinguishes sample id(sid) and match id. Sid are unique to each sample entry, while match id corresponds to individual sample source identity. This adaption allows FATE to perform private set intersection on samples with repeated match id. User may choose to create sid by appending uuid to original sample entries at uploading; then module DataTransform will extract true match id for later use. This tutorial walks through a full uploading-training process to demonstrate how to add and train with sid.\ninstall\nPipeline is distributed along with fate_client.\nbash\npip install fate_client\nTo use Pipeline, we need to first specify which FATE Flow Service to connect to. Once fate_client installed, one can find an cmd enterpoint name pipeline:", "!pipeline --help", "Assume we have a FATE Flow Service in 127.0.0.1:9380(defaults in standalone), then exec", "!pipeline init --ip 127.0.0.1 --port 9380", "upload data\nBefore start a modeling task, the data to be used should be uploaded. \n Typically, a party is usually a cluster which include multiple nodes. Thus, when we upload these data, the data will be allocated to those nodes.", "from pipeline.backend.pipeline import PipeLine", "Make a pipeline instance:\n- initiator: \n * role: guest\n * party: 9999\n- roles:\n * guest: 9999\n\nnote that only local party id is needed.", "pipeline_upload = PipeLine().set_initiator(role='guest', party_id=9999).set_roles(guest=9999)", "Define partitions for data storage", "partition = 4", "Define table name and namespace, which will be used in FATE job configuration", "dense_data_guest = {\"name\": \"breast_hetero_guest\", \"namespace\": f\"experiment\"}\ndense_data_host = {\"name\": \"breast_hetero_host\", \"namespace\": f\"experiment\"}\ntag_data = {\"name\": \"breast_hetero_host\", \"namespace\": f\"experiment\"}", "Now, we add data to be uploaded. To create uuid as sid, turn on extend_sid option. 
Alternatively, set auto_increasing_sid to make extended sids start at 0.", "import os\n\ndata_base = \"/workspace/FATE/\"\npipeline_upload.add_upload_data(file=os.path.join(data_base, \"examples/data/breast_hetero_guest.csv\"),\n                                table_name=dense_data_guest[\"name\"],             # table name\n                                namespace=dense_data_guest[\"namespace\"],         # namespace\n                                head=1, partition=partition,               # data info\n                                extend_sid=True,                           # extend sid\n                                auto_increasing_sid=False)\n\npipeline_upload.add_upload_data(file=os.path.join(data_base, \"examples/data/breast_hetero_host.csv\"),\n                                table_name=dense_data_host[\"name\"],\n                                namespace=dense_data_host[\"namespace\"],\n                                head=1, partition=partition,\n                                extend_sid=True,\n                                auto_increasing_sid=False)", "We can then upload the data:", "pipeline_upload.upload(drop=1)", "After uploading, we can start modeling. Here we build a Hetero SecureBoost model the same way as in this demo, but note how the specification of the DataTransform module needs to be adjusted to correctly load in the match id.", "from pipeline.backend.pipeline import PipeLine\nfrom pipeline.component import Reader, DataTransform, Intersection, HeteroSecureBoost, Evaluation\nfrom pipeline.interface import Data\n\npipeline = PipeLine() \\\n        .set_initiator(role='guest', party_id=9999) \\\n        .set_roles(guest=9999, host=10000)\n\nreader_0 = Reader(name=\"reader_0\")\n# set guest parameter\nreader_0.get_party_instance(role='guest', party_id=9999).component_param(\n    table={\"name\": \"breast_hetero_guest\", \"namespace\": \"experiment\"})\n# set host parameter\nreader_0.get_party_instance(role='host', party_id=10000).component_param(\n    table={\"name\": \"breast_hetero_host\", \"namespace\": \"experiment\"})\n\n# set with match id\ndata_transform_0 = DataTransform(name=\"data_transform_0\", with_match_id=True)\n# set guest parameter\ndata_transform_0.get_party_instance(role='guest', party_id=9999).component_param(\n    with_label=True)\ndata_transform_0.get_party_instance(role='host', party_id=[10000]).component_param(\n    
with_label=False)\n\nintersect_0 = Intersection(name=\"intersect_0\")\n\nhetero_secureboost_0 = HeteroSecureBoost(name=\"hetero_secureboost_0\",\n                                         num_trees=5,\n                                         bin_num=16,\n                                         task_type=\"classification\",\n                                         objective_param={\"objective\": \"cross_entropy\"},\n                                         encrypt_param={\"method\": \"paillier\"},\n                                         tree_param={\"max_depth\": 3})\n\nevaluation_0 = Evaluation(name=\"evaluation_0\", eval_type=\"binary\")", "Add components to the pipeline, in order of execution:\n- data_transform_0 consumes reader_0's output data\n- intersect_0 consumes data_transform_0's output data\n- hetero_secureboost_0 consumes intersect_0's output data\n- evaluation_0 consumes hetero_secureboost_0's prediction result on training data\n\nThen compile our pipeline to make it ready for submission.", "pipeline.add_component(reader_0)\npipeline.add_component(data_transform_0, data=Data(data=reader_0.output.data))\npipeline.add_component(intersect_0, data=Data(data=data_transform_0.output.data))\npipeline.add_component(hetero_secureboost_0, data=Data(train_data=intersect_0.output.data))\npipeline.add_component(evaluation_0, data=Data(data=hetero_secureboost_0.output.data))\npipeline.compile();", "Now, submit (fit) our pipeline:", "pipeline.fit()", "Check the data output on FATEBoard or download the component output data to see that each data instance now has a uuid as its sid.", "import json\nprint(json.dumps(pipeline.get_component(\"data_transform_0\").get_output_data(limits=3), indent=4))", "For more demos on using Pipeline to submit jobs, please refer to the pipeline demos. Here we include other pipeline examples using data with match ids." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
theislab/scanpy_usage
171106_t-test_wilcoxon_comparison/Generic Comparison T-Test Wilcoxon-Rank-Sum Test.ipynb
bsd-3-clause
[ "Comparing test statistics: T-test and Wilcoxon rank-sum test for a generic Zero-Inflated Negative Binomial Distribution", "import numpy as np\nimport scanpy.api as sc\nfrom anndata import AnnData\nfrom numpy.random import negative_binomial, binomial, seed", "First, data following a (zero-inflated) negative binomial (ZINB) distribution is created for testing purposes. Test size and distribution parameters can be specified.\nFor a specified number of marker genes in a cluster, the distribution of these genes follows a different ZINB distribution. We use the following notation:\n$z_r=\\text{zero-inflation reference group}$\n$z_c=\\text{zero-inflation cluster}$\n$p_r=\\text{success probability reference group}$\n$p_c=\\text{success probability cluster}$\n$n_r=\\text{number of successful draws till stop reference group}$\n$n_c=\\text{number of successful draws till stop cluster}$\nLet $X_r\\sim NegBin(p_r,n_r)$ and $Y_r\\sim Ber(z_r)$ independent of $X_r$; then $Z_r=Y_rX_r\\sim ZINB(z_r,p_r,n_r)$ describes the distribution for all cells/genes except for marker genes in a specified number of clustered cells, which are described using a $ZINB(z_c,p_c,n_c)$ distribution. \nIn particular, we have \n$$\\mathbb{E}[Z_r]=z_rn_r\\frac{1-p_r}{p_r}$$\nand, using standard calculations for expectation and variance, \n$$\\mathbb{V}[Z_r]=z_rn_r\\frac{1-p_r}{p_r^2}+z_r(1-z_r)\\left(n_r\\frac{1-p_r}{p_r}\\right)^2$$\nThis form of the ZINB was taken from\nhttps://papers.ssrn.com/sol3/papers.cfm?abstract_id=1293115 (Greene, 1994)\nTune parameters and create data\nIn order to demonstrate the superiority of the Wilcoxon rank-sum test in certain cases, parameter specifications have to be found that violate the t-test assumptions and therefore make it difficult for the t-test to detect marker genes. 
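The moment formulas above can be sanity-checked by simulation (a quick sketch added for illustration, not part of the original notebook; it uses the cluster parameter values z=0.9, p=0.5, n=1 that appear below, for which the formulas give E[Z]=0.9 and V[Z]=1.89):

```python
import numpy as np

# Simulate Z = Y * X with Y ~ Ber(z) and X ~ NegBin(n, p), and compare
# empirical moments against the closed-form ZINB mean and variance.
rng = np.random.default_rng(0)
z, p, n = 0.9, 0.5, 1
N = 1_000_000

samples = rng.binomial(1, z, N) * rng.negative_binomial(n, p, N)

mean_theory = z * n * (1 - p) / p                                  # 0.9
var_theory = z * n * (1 - p) / p**2 + z * (1 - z) * (n * (1 - p) / p) ** 2  # 1.89

assert abs(samples.mean() - mean_theory) < 0.01
assert abs(samples.var() - var_theory) < 0.05
```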
In short: Expectations should be the same, but variance should be different, for the simple reason that expectation differences will, due to the law of large numbers, be detected by the t-test as well, even though the lack of normal distribution means that this may take some time. \nThe effect should increase with the magnitude of the variance difference, as demonstrated below.\nIn order for the t-test to fail, little to no difference in mean should occur. This can be achieved by tuning the parameters using the formula for the expectation specified above.", "seed(1234)\n# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes\nn_simulated_cells=1000\nn_simulated_genes=100\nn_cluster=100\nn_marker_genes=10\n# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r\n# Differential gene expression is simulated using reference parameters for all cells/genes \n# except for marker genes in the distinct cells. \nreference_zero_inflation=0.15\nreference_p=0.25\nreference_n=2\ncluster_zero_inflation=0.9\ncluster_p=0.5\ncluster_n=1", "Create data.\nBoth sample names and variable names are simply integers starting from 0.", "adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),\n                          negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))\n# adapt marker_genes for cluster \nadata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),\n                                                  negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))", "Cluster according to true grouping:\nThe following code includes the true grouping such that it can be accessed by the normal function call \nsc.tl.rank_genes_groups(adata,'true_groups')\n\nor, respectively,\nsc.tl.rank_genes_groups(adata,'true_groups', test_type='wilcoxon')", "import pandas as 
pd\nsmp='true_groups'\ntrue_groups_int=np.ones((n_simulated_cells,))\ntrue_groups_int[0:n_cluster]=0\ntrue_groups=list()\nfor i,j in enumerate(true_groups_int):\n    true_groups.append(str(j))\nadata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')\nadata.uns[smp + '_order']=np.asarray(['0','1'])", "Testing\nCase 1: No mean difference, large variance difference.\nUsing the data created above, we get the following expectation and variance: \n$\\mathbb{E}[Z_r]=\\mathbb{E}[Z_c]=0.9$\n$\\mathbb{V}[Z_r]=8.19$\n$\\mathbb{V}[Z_c]=1.89$", "sc.tl.rank_genes_groups(adata, 'true_groups')\nsc.pl.rank_genes_groups(adata, n_genes=20)\n\nsc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')\nsc.pl.rank_genes_groups(adata, n_genes=20)", "As can be seen above, not only does the Wilcoxon rank-sum test detect all marker genes, but there is also a clear difference from all other genes in the ranking. \nCase 2: No mean difference, smaller variance difference", "# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes\nn_simulated_cells=1000\nn_simulated_genes=100\nn_cluster=100\nn_marker_genes=10\n# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r\n# Differential gene expression is simulated using reference parameters for all cells/genes \n# except for marker genes in the distinct cells. 
\nreference_zero_inflation=0.15\nreference_p=0.5\nreference_n=6\ncluster_zero_inflation=0.9\ncluster_p=0.5\ncluster_n=1", "This parameter initialization leads to the following expectations/variances: \n$\\mathbb{E}[Z_r]=\\mathbb{E}[Z_c]=0.9$\n$\\mathbb{V}[Z_r]=6.39$\n$\\mathbb{V}[Z_c]=1.89$", "adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),\n                          negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))\n# adapt marker_genes for cluster \nadata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),\n                                                  negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))\n\nimport pandas as pd\nsmp='true_groups'\ntrue_groups_int=np.ones((n_simulated_cells,))\ntrue_groups_int[0:n_cluster]=0\ntrue_groups=list()\nfor i,j in enumerate(true_groups_int):\n    true_groups.append(str(j))\nadata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')\nadata.uns[smp + '_order']=np.asarray(['0','1'])\n\nsc.tl.rank_genes_groups(adata, 'true_groups')\nsc.pl.rank_genes_groups(adata, n_genes=20)\n\nsc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')\nsc.pl.rank_genes_groups(adata, n_genes=20)", "With a smaller difference in variance, all marker genes are still detected, but less clearly. \nCase 3: Small difference in expectation, difference in variance", "# n_cluster needs to be smaller than n_simulated_cells, n_marker_genes needs to be smaller than n_simulated_genes\nn_simulated_cells=1000\nn_simulated_genes=100\nn_cluster=100\nn_marker_genes=10\n# Specify parameter between 0 and 1 for zero_inflation and p, positive integer for r\n# Differential gene expression is simulated using reference parameters for all cells/genes \n# except for marker genes in the distinct cells. 
\nreference_zero_inflation=0.15\nreference_p=0.5\nreference_n=6\ncluster_zero_inflation=0.9\ncluster_p=0.55\ncluster_n=2", "This parameter initialization leads to the following expectations/variances: \n$\\mathbb{E}[Z_r]=0.9$\n$\\mathbb{E}[Z_c]=1.47$\n$\\mathbb{V}[Z_r]=6.39$\n$\\mathbb{V}[Z_c]=2.92$", "adata=AnnData(np.multiply(binomial(1,reference_zero_inflation,(n_simulated_cells,n_simulated_genes)),\n                          negative_binomial(reference_n,reference_p,(n_simulated_cells,n_simulated_genes))))\n# adapt marker_genes for cluster \nadata.X[0:n_cluster,0:n_marker_genes]=np.multiply(binomial(1,cluster_zero_inflation,(n_cluster,n_marker_genes)),\n                                                  negative_binomial(cluster_n,cluster_p,(n_cluster,n_marker_genes)))\n\nsmp='true_groups'\ntrue_groups_int=np.ones((n_simulated_cells,))\ntrue_groups_int[0:n_cluster]=0\ntrue_groups=list()\nfor i,j in enumerate(true_groups_int):\n    true_groups.append(str(j))\nadata.smp['true_groups']=pd.Categorical(true_groups, dtype='category')\nadata.uns[smp + '_order']=np.asarray(['0','1'])\n\nsc.tl.rank_genes_groups(adata, 'true_groups')\nsc.pl.rank_genes_groups(adata, n_genes=20)\n\nsc.tl.rank_genes_groups(adata, 'true_groups', test_type='wilcoxon')\nsc.pl.rank_genes_groups(adata, n_genes=20)", "As can be seen above, the t-test fares better as soon as a difference in mean exists and the difference in variance decreases, but its ranking is still worse." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
intellimath/pyaxon
examples/axon_readable_as_yaml.ipynb
mit
[ "Isn't AXON as readable as YAML?\nLet's consider an example of a YAML-formatted configuration file:\n``` yaml\napplication: myapp\nversion: alpha-001\nruntime: python27\napi_version: 1\nthreadsafe: true\n# url handlers\nhandlers:\n- url: /\n  script: home.app\n- url: /index.html\n  script: home.app\n- url: /stylesheets\n  static_dir: stylesheets\n- url: /(..(gif|png|jpg))\n  static_files: static/\\1\n  upload: static/..(gif|png|jpg)\n- url: /admin/.*\n  script: admin.app\n  login: admin\n- url: /.*\n  script: not_found.app\n```", "# print all urls\nimport yaml\nimport io\nval = yaml.safe_load(io.open(\"example_config.yaml\", \"rt\"))\nprint([entry[\"url\"] for entry in val[\"handlers\"]])", "In AXON it will be formatted as:\n``` yaml\napplication: \"myapp\"\nversion: \"alpha-001\"\nruntime: \"python27\"\napi_version: 1\nthreadsafe: true\n# url handlers\nhandlers: [\n  { url: \"/\"\n    script: \"home.app\" }\n  { url: \"/index.html\"\n    script: \"home.app\" }\n  { url: \"/stylesheets\"\n    static_dir: \"stylesheets\" }\n  { url: \"/(..(gif|png|jpg))\"\n    static_files: \"static/\\1\"\n    upload: \"static/..(gif|png|jpg)\" }\n  { url: \"/admin/.*\"\n    script: \"admin.app\"\n    login: \"admin\" }\n  { url: \"/.*\"\n    script: \"not_found.app\" }\n]\n```", "# print all urls\nimport axon\nval = axon.load(\"example_config1.axon\")\nprint([entry[\"url\"] for entry in val[\"handlers\"]])", "With AXON it can also be presented in the following form:\n``` yaml\n_\n  application: \"myapp\"\n  version: \"alpha-001\"\n  runtime: \"python27\"\n  api_version: 1\n  threadsafe: true\n  # url handlers\n  handlers\n    _\n      url: \"/\"\n      script: \"home.app\" \n    _\n      url: \"/index.html\"\n      script: \"home.app\"\n    _\n      url: \"/stylesheets\"\n      static_dir: \"stylesheets\"\n    _\n      url: \"/(..(gif|png|jpg))\"\n      static_files: \"static/\\1\"\n      upload: \"static/..(gif|png|jpg)\"\n    _\n      url: \"/admin/.*\"\n      script: \"admin.app\"\n      login: \"admin\"\n    _\n      url: \"/.*\"\n      script: \"not_found.app\"\n```", "# print all urls\nvals = 
axon.load(\"example_config2.axon\")\nprint([entry.url for entry in vals[0].handlers])", "Isn't a configuration file in AXON as readable as one in YAML?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
liviu-/notebooks
notebooks/predicting_marks_by_facebook_likes.ipynb
mit
[ "Predicting Average Marks Based on Facebook Likes\nIntroduction\nIt is common for students to have a Facebook group on which they post course-relevant discussion and announcements to help each other keep up with various aspects. Among the different types of posts, it is customary for people to announce when assessment marks are available to check online, in lieu of a formal notification from the administration. The premise of this short study is that the number of likes that this sort of post gets can be used to predict the average mark of the assessment.\nThe reasoning behind this is that students may be less likely to react positively when they check their mark and it turns out to be low, but in case of a high mark, students may feel a need to express their happiness and reward the messenger with a like. Yet, the number of likes may be an imperfect measure in many ways:\n\nLow-ish marks for an exam may excite students if their expectation was low, while good-ish marks for an exam may upset them if they had higher expectations. This indicates there may be a need for a possibly-difficult-to-quantify \"expectation\" variable. One possible way to address this is by introducing a difficulty variable for each exam based on historical results.\nThe number of likes varies depending on the number of students in a class -- this may be addressed by evaluating the percentage of students who liked the post, rather than the raw number.\nPeople may express their happiness through comments. This may be addressed by counting the number of comments, but also through some sort of sentiment analysis to distinguish between happiness and disappointment. 
In my experience though, people are much more likely to stick with a simple \"like\", rather than a more emotional public display of enthusiasm.\n\nThis is just a one-day project, so only the 2nd point will be addressed.\nAdditional information\nThis is done for a UK BSc course, so if you are not familiar with the academic environment there, some aspects may not make sense. For example, 70 is considered quite a good mark (first class), and one can pass if they obtain a mark over 30 and compensate with marks from a different module. Furthermore, when I talk about a \"closed assessment\" or \"closed exam\" I refer to an exam typically done with no internet access in about 2 hours, and \"open assessment\" typically refers to a project lasting a few weeks or months (report/experiment/coding/coursework). Also, a bachelor's degree in the UK lasts 3 years.\nCollecting data\nDue to an unstandardised method of announcing that the marks are up, the collection of the data was accomplished via a tedious process using Facebook group search and manually storing relevant data. I had access to 2 groups, so I gathered data from several academic years and connected every exam to its average mark as released by the university department. I will not make the data public because I'm not sure about the privacy policies involved.\nExploring data", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nplt.rcParams['figure.figsize'] = 12, 10\nplt.rcParams.update({'font.size': 15})\n\ndata = pd.read_csv('../data/train.csv')\n\ndata.describe()", "As shown above, I managed to collect data for 56 exams. It seems that the minimum mean mark was 37, and the maximum was 83.66. On average, it looks like exams tend to end up with a mark slightly above 60 (std of 9.39), about 66% have been closed exams, and I collected slightly more data from 3rd years (as seen by the mean of the cohort column.) 
The minimum number of students that an exam had was 13 (I was there!), and the maximum was 131 (I was there too!).\nAnyway, let's get to what is likely the most interesting question right now: is there a distinguishable structure in the data?\nLet's plot the mean marks against the number of likes.", "# Create a quick function to allow reusing\ndef scatter_plot(independent, dependent, title=\"\", xlabel=\"\", ylabel=\"\"):\n    plt.scatter(independent,dependent)\n    plt.title(title)\n    plt.xlabel(xlabel)\n    plt.ylabel(ylabel)\n    plt.show()\n\nscatter_plot(data['likes'],data['mean'], \n             title=\"Relationship between number of likes and mean mark\",\n             xlabel=\"Number of students who liked the announcements\",\n             ylabel=\"Mean mark\")", "Alright, there is very little structure here, but let's see how it looks when we use the percentage of students.", "data['likes_normalised'] = (100 * data['likes'])/data['number_of_students']\n\nscatter_plot(data['likes_normalised'],data['mean'], \n             title=\"Relationship between percentage of likes and mean mark\",\n             xlabel=\"Percentage of students who liked the announcements\",\n             ylabel=\"Mean mark\")", "Quite a bit better, but let's do it for the median as it seems like a more intuitive measurement here: half the students got less than the mark indicated on the y axis. Also, the median is more resilient to outliers, which may be helpful in this case.", "scatter_plot(data['likes_normalised'],data['median'], \n             title=\"Relationship between percentage of likes and median mark\",\n             xlabel=\"Percentage of students who liked the announcements\",\n             ylabel=\"Median mark\")", "Alright, without starting to remove outliers, this seems to be the best I have. 
I will stay away from outlier detection, and from the graph there doesn't seem to be any really obvious one (maybe besides that one at 80+ median, but I won't go there).\nThe structure looks like it can be fitted with a polynomial curve, so without further ado, I'll just go ahead and try some polynomial regression.\nFitting the data", "from sklearn import linear_model\n\n# Convert series to dataframes to prepare them for fitting\nX = pd.DataFrame(data['likes_normalised'])\ny = pd.DataFrame(data['median'])\n\n# Add a squared feature\nX = pd.concat([X,pow(X,2)],axis=1)\n\n# Fit data with linear regression\nmodel_linear = linear_model.LinearRegression()\nmodel_linear.fit(X,y)\n\n# Create some new points to graph the curve\nnew_x = np.linspace(X.iloc[:,0].min(),X.iloc[:,0].max())\ntest_x = pd.DataFrame([new_x, pow(new_x,2)]).T\n\n# Plot curve\nplt.plot(test_x.iloc[:,0],model_linear.predict(test_x))\n# Plot points\nscatter_plot(X.iloc[:,0],y,\n title=\"Relationship between percentage of likes and median mark\",\n xlabel=\"Percentage of students who liked the announcements\",\n ylabel=\"Median mark\")", "Looks about right to me. I don't think stopping here would be a bad decision, but for the sake of it, I will try to fit it with a cubic polynomial.", "# Add a 3rd power and fit the new data\nX_ = pd.concat([X,pow(X.iloc[:,0],3)],axis=1)\nmodel_linear.fit(X_,y)\n\ntest_x_ = pd.concat([test_x, pow(test_x.iloc[:,0],3)],axis=1)\nplt.plot(test_x_.iloc[:,0],model_linear.predict(test_x_))\n# Plot points\nscatter_plot(X_[[0]],y,\n title=\"Relationship between percentage of likes and median mark\",\n xlabel=\"Percentage of students who liked the announcements\",\n ylabel=\"Median mark\")", "Looks pretty, but it likely overfits. 
So let's try to add L2 regularization and find the regularization parameter through 10-fold cross validation.", "from sklearn.linear_model import RidgeCV\n\nlist_reg_params = np.linspace(0.00001, X_.max().max() * 100, 2000)\nmodel_ridge = linear_model.RidgeCV(alphas=list_reg_params, cv=10)\nmodel_ridge.fit(X_,y)\n\nreg_param = model_ridge.alpha_\nprint(\"The regularization parameter that provided the best results was {}.\".format(reg_param))\nplt.plot(test_x_.iloc[:,0],model_ridge.predict(test_x_))\nscatter_plot(X_[[0]],y,\n title=\"Relationship between percentage of likes and median mark\",\n xlabel=\"Percentage of students who liked the announcements\",\n ylabel=\"Median mark\")", "The regularization parameter introduced only a very subtle difference. It's also a very big number as I haven't normalised the data. Now let's see which of the 2 polynomial curves fit the data better.\nEvaluating the models", "model_ridge_reg = linear_model.Ridge(alpha=reg_param)\n\nfrom sklearn import cross_validation\n\nscores_quadratic = abs(cross_validation.cross_val_score(\n model_linear, X, y, cv=10, scoring=\"mean_absolute_error\"))\nscores_cubic = abs(cross_validation.cross_val_score(\n model_ridge_reg, X_, y, cv=10, scoring=\"mean_absolute_error\"))\n\nprint(\"Average error for quadratic polynomial: {:.2f} (+/- {:.2f})\" .format(\n scores_quadratic.mean(), scores_quadratic.std() * 2))\nprint(\"Average error for cubic polynomial: {:.2f} (+/- {:.2f})\" .format(\n scores_cubic.mean(), scores_cubic.std() * 2))", "Not too bad. An error of less than 7 marks is better than I expected. 
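The very large alpha noted above is mostly an artifact of fitting ridge on unstandardized polynomial features. Here is a hedged sketch of scaling first, which keeps the searched alphas on a small, interpretable grid. It uses synthetic data, since the notebook's dataset is private, and the modern scikit-learn Pipeline API rather than the notebook's older imports:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the likes/median data: 56 points following a
# gentle quadratic trend plus noise.
rng = np.random.RandomState(0)
x = rng.uniform(0, 40, 56)
y = 40 + 1.5 * x - 0.02 * x**2 + rng.normal(0, 5, 56)

# Cubic polynomial features; once standardized, the x^3 column no longer
# dwarfs the x column, so comparable penalties apply to every coefficient.
X = np.column_stack([x, x**2, x**3])
model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-3, 3, 50)))
model.fit(X, y)

print(model.named_steps['ridgecv'].alpha_)
```

On standardized features the selected alpha is directly comparable across the powers, instead of being dominated by the huge raw scale of the cubic term.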
Now for a comparison test, let's just predict that the median of a module is the mean of the medians of all the other modules (but using K-fold CV).", "from sklearn.metrics import mean_absolute_error\nfrom sklearn.cross_validation import KFold\n\nkf = KFold(X.shape[0], n_folds=10)\nscores_naive = []\nfor train, test in kf:\n    train_mean = data['median'][train].mean()\n    scores_naive.append(mean_absolute_error(data['median'][test], [train_mean] * len(test)))\n    \nprint(\"Average error for naive predictor: {:.2f} (+/- {:.2f})\" .format(\n    np.mean(scores_naive), np.std(scores_naive) * 2))", "D'aww, the regression predictor is not much better than the naive one. At least it's not worse! As a last test, let's see what the chances are that the regression results are better just by chance. For this, a paired one-tailed t-test will be performed, with the null hypothesis that the 2 algorithms perform the same, and the alternative hypothesis that the regression one performs better. The significance level I will choose is 0.05.", "from scipy.stats import ttest_rel\n\nresults = ttest_rel(scores_naive, scores_cubic)\nprint(\"P-value: {:.4f}\" .format(results.pvalue/2)) # Dividing by 2 because it's one-tailed", "Everything was not in vain! It seems that the algorithm is likely to perform better than the naive approach.\nConclusion\nAnd this is it. The algorithm appears to predict grades with an average error of approximately 7 marks, and performs better than a naive predictor. Not bad, I'd say, considering we only have the number of likes on Facebook!\nI did not put aside a separate testing set because the data was not very big to begin with, and I don't think that a small test set would necessarily give a good indication of the algorithm's performance. However, I will nonetheless test it on this year's exams once the results and the statistics are up!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
fdion/infographics_research
nfl_viz.ipynb
mit
[ "Exploring NFL data through visualization\nToday, we'll go with our first option: download data from http://nflsavant.com as csv using wget. We can then load this local file using pandas read_csv. read_csv can also read the csv data directly from the URL, but this way we don't have to download the file each time we load our data frame -- something I'm sure the owner of the website will appreciate.", "!wget http://nflsavant.com/pbp_data.php?year=2015 -O pbp-2015.csv\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nsns.set_context(\"talk\")\nplt.figure(figsize=(10, 8))\n\ndf = pd.read_csv('pbp-2015.csv')\n\n# What do we have?\ndf.columns\n\ndef event_to_datetime(row):\n    \"\"\"Calculate a datetime from date, quarter, minute and second of an event.\"\"\"\n    mins = 15 * (row['Quarter'] - 1) + row['Minute']\n    hours, mins = divmod(mins, 60)\n    return \"{} {}:{:02}:{:02}\".format(row['GameDate'], hours, mins, row['Second'])\n\ndf['datetime'] = pd.to_datetime(df.apply(event_to_datetime, axis=1))", "Local team\nHappens to be the Carolina Panthers. So let's look at their offence.", "car = df[(df.OffenseTeam=='CAR')]", "Pandas plot does a decent job, but doesn't know about categoricals. We can't use the game date string for the x axis, so here we use the datetime we calculated from quarter, minutes, etc. But now it thinks it's a time series -- which it does look like. Consider it instead a form of parallel plot, ignoring the slope graph between each date, since it doesn't mean anything here (pandas also has an actual parallel plot).", "ax = car.plot(x='datetime', y='Yards')", "Not bad, but not completely helpful. Sure, pandas also has bar plots. But I think something else could be better visually. Let's see what Seaborn has to offer. How about a strip plot? It is a scatter plot for categorical data. 
We'll add jitter on the x axis to better see the data.", "g = sns.stripplot(x='GameDate', y='Yards', data=car, jitter=True)\nfor item in g.get_xticklabels(): item.set_rotation(60)\n\n# We can also alter the look of the strip plot significantly\ng = sns.stripplot(x='Yards', y='GameDate', data=car,\n                  palette=\"Set2\", size=6, marker=\"D\", edgecolor=\"gray\", alpha=.25)", "Dare to compare\nHow about comparing two teams? Say, Carolina and Atlanta. Let's see how they do in yards (losses and gains) per quarter, for this season up to 9/13.", "car_atl = df[(df.OffenseTeam=='CAR')|(df.OffenseTeam=='ATL')]", "Colors can really improve readability. The Atlanta Falcons' primary color is red and the Carolina Panthers' primary color is light blue. Using those (context manager with color_palette):", "with sns.color_palette([sns.color_palette(\"muted\")[2],sns.color_palette(\"muted\")[5]]):\n    g = sns.stripplot(x='Quarter', y='Yards', data=car_atl, hue='OffenseTeam', jitter=True)\n    g.hlines(0,-1,6, color='grey')", "Distribution\nWe can also look at the distribution using plot types that are specifically designed for this. One pandas dataframe method is boxplot. It does the job, although we can make something prettier than this.", "ax = car.boxplot(column='Yards', by='GameDate')\nax.set_title(\"Carolina offence Yardage by game\")", "Let's have a look at the same thing using Seaborn. We'll fix the x axis tick labels too, rotating them.", "g = sns.boxplot(data=car, y='Yards', x='GameDate')\nfor item in g.get_xticklabels(): item.set_rotation(60)", "Finally, let's examine the distribution of the data one more way, using Seaborn's violin plot.", "g = sns.violinplot(data=car, x='Yards', y='GameDate', orient='h')\ng.vlines(0,-1,15, alpha=0.5)", "Conclusion\nWe've barely touched on strip plots, box plots and violin plots. It's your turn to go on and explore. 
And as for the data, we've looked at every single event, play and non-play (false starts, etc.), penalties, touchdowns and so on, all on an equal footing. In order to gain better insight into the data, we'd have to look at these things individually, assign weights, etc.\nIf I get enough demand, I'll cover this subject in more detail in the future." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
JeffAbrahamson/MLWeek
practicum/05_regression/regression-lineaire.ipynb
gpl-3.0
[ "import numpy as np\nimport scipy.stats as ss\nimport matplotlib.pyplot as plt\nimport sklearn\nimport pandas as pd\n\n%matplotlib inline", "La pizza\nPrenons comme exemple le prix des pizzas par diametre.", "Diametre = [[6], [9], [12], [15], [18], [30]]\nprix = [[7], [9], [13], [17.5], [18], [24]]\nplt.figure()\nplt.title('Pizza v diametre')\nplt.xlabel('Diametre (cm)')\nplt.ylabel(u'Prix (โ‚ฌ)')\nplt.plot(Diametre, prix, 'k.')\nplt.axis([0, 32, 0, 25])\nplt.grid(True)\nplt.show()", "Et si on trouvait une pizza de 25 cm de diametre. Quel serait un prix raisonnable selon notre modรจle?", "from sklearn.linear_model import LinearRegression\nmodel = LinearRegression()\nX = Diametre\ny = prix\nmodel.fit(X, y)\nprint(u'Une pizza ร  25 cm doit coรปter {px:.2f} โ‚ฌ'.format(\n px=model.predict([[12]])[0][0]))", "La class sklearn.linear_model.LinearRegression est un estimateur (estimator). Un estimateur prรฉdit une valeur ร  partir de donnรฉes observรฉes. Brรจf, รงa crรฉe un modรจle.\nTous les estimateurs en scikit-learn implรฉmentent les mรฉthodes fit() et predict().\nExample : la diabรจte\nScikit-learn propose des exemples d'ensemble de donnรฉes (example data sets, plus couramment).", "# Code source: Jaques Grobler\n# License: BSD 3 clause\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn import datasets, linear_model\n\n# Load the diabetes dataset\ndiabetes = datasets.load_diabetes()\n\n\n# Use only one feature\ndiabetes_X = diabetes.data[:, np.newaxis]\ndiabetes_X_temp = diabetes_X[:, :, 2]\n\n# Split the data into training/testing sets\ndiabetes_X_train = diabetes_X_temp[:-20]\ndiabetes_X_test = diabetes_X_temp[-20:]\n\n# Split the targets into training/testing sets\ndiabetes_y_train = diabetes.target[:-20]\ndiabetes_y_test = diabetes.target[-20:]\n\n# Create linear regression object\nregr = linear_model.LinearRegression()\n\n# Train the model using the training sets\nregr.fit(diabetes_X_train, diabetes_y_train)\n\n# The 
coefficients\nprint('Coefficients: \\n', regr.coef_)\n# The mean square error\nprint(\"Residual sum of squares: %.2f\"\n % np.mean((regr.predict(diabetes_X_test) - diabetes_y_test) ** 2))\n# Explained variance score: 1 is perfect prediction\nprint('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))\n\n# Plot outputs\nplt.scatter(diabetes_X_test, diabetes_y_test, color='black')\nplt.plot(diabetes_X_test, regr.predict(diabetes_X_test), color='blue',\n linewidth=3)\n\nplt.xticks(())\nplt.yticks(())\n\nplt.show()", "Exercise\nVisualiser notre modรจle du prix de pizzas avec la pizza (le point) que nous avons ajoutรฉ.\nQuel est le modรจle de rรฉgression : $\\theta_0 x + \\theta_1$ ?\nQuel est la valeur du cost functionย  $J(\\theta) = \\sum_{i=1}^m (h_\\theta(x_i) - y_i)^2$" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
chagaz/sfan
StructuredSparsityDataGeneration.ipynb
mit
[ "%pylab inline\n\nnum_feats = 1000\nnum_obsvs = 150\n\nmod_size = 10\n\nnum_causl = 10", "Generate data", "# adjacency matrix\nW = np.zeros((num_feats, num_feats))\nfor i in range(num_feats/mod_size):\n W[i*mod_size:(i+1)*mod_size, i*mod_size:(i+1)*mod_size] = np.ones((mod_size, mod_size))\n if not i == (num_feats/mod_size - 1):\n W[(i+1)*mod_size-1, (i+1)*mod_size] = 1\n W[(i+1)*mod_size, (i+1)*mod_size-1] = 1\n \n# remove the diagonal\nW = W - np.eye(num_feats)\n\n# SNPs\nX = np.random.binomial(1, 0.1, size=(num_obsvs, num_feats))\n\n# Phenotype\nw_causl = np.random.normal(loc=0.2, scale=0.05, size=(num_causl))\nprint w_causl\n\nw = np.zeros((num_feats, ))\nw[:num_causl] = w_causl\n\ny = np.dot(X, w) + np.random.normal(loc=0., scale=0.1, size=(num_obsvs, ))", "Save generated data\nThe data used in the StructuredSparsity.ipnb notebook is saved under data/struct_spars. Here it will be generated under data/my_struct_spars.", "data_rep = 'data/my_struct_spars'\nX_fname = '%s/X.data' % data_rep\ny_fname = '%s/y.data' % data_rep\nW_fname = '%s/W.data' % data_rep\ncausl_fname = '%s/causl.data' % data_rep\nwghts_fname = '%s/w_causl.data' % data_rep\n\nnp.savetxt(X_fname, X, fmt='%d')\nnp.savetxt(y_fname, y)\nnp.savetxt(W_fname, W, fmt='%.1f')\nnp.savetxt(causl_fname, causl, fmt='%d')\nnp.savetxt(wghts_fname, w_causl)" ]
[ "code", "markdown", "code", "markdown", "code" ]
texib/deeplearning_homework
LogisticRegression.ipynb
mit
[ "ไฝฟ็”จๅฏถๅฏๅคขไฝœ็‚บๅฏฆ้ฉ—่ณ‡ๆ–™็ด ๆ", "import numpy as np\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndf = pd.read_csv(\"./pokemon.csv\")\n\n\nalldata = df[ (df['attack_strong_type'] == 'Normal') |(df['attack_strong_type'] == 'Flying') ]\n# alldata = alldata[df['attack_strong_type'] == 'Flying']\n\nf1 = alldata['height'].tolist()\nf2 = alldata['weight'].tolist()\ny = alldata['attack_strong_type']=='Normal'\ny = [ 1 if i else 0 for i in y.tolist()]\n\n\nf1 = np.array(f1)\nf2 = np.array(f2)\n\nc = [ 'g' if i==1 else 'b' for i in y ]\n\nplt.scatter(f1, f2, 20, c=c, alpha=0.5,\n label=\"Type\")\nplt.xlabel(\"Height\")\nplt.ylabel(\"Weight\")\nplt.legend(loc=2)\nplt.show()", "ๅฐๅŽŸๅง‹่ณ‡ๆ–™้€ฒ่กŒ Sacle to 0~1 ไน‹้–“๏ผŒ้ฟๅ… weight ๅคชๅЇ็ƒˆ่ฎŠๅ‹•๏ผŒๅฐŽ่‡ณ learning rate ้›ฃไปฅ่จญๅฎš", "from sklearn.preprocessing import scale,MinMaxScaler\nscaler1 = MinMaxScaler()\nscaler2 = MinMaxScaler()\n\nf1 = f2.reshape([f1.shape[0],1])\nf2 = f2.reshape([f2.shape[0],1])\n\n\n\nscaler1.fit(f1)\nscaler2.fit(f2)\n\nf1 = scaler1.transform(f1)\nf2 = scaler2.transform(f2)\n\nf1 = f1.reshape(f1.shape[0])\nf2 = f2.reshape(f2.shape[0])\n\nc = [ 'g' if i==1 else 'b' for i in y ]\n\nplt.scatter(f1, f2, 20, c=c, alpha=0.5,\n label=\"Type\")\nplt.xlabel(\"Height\")\nplt.ylabel(\"Weight\")\nplt.legend(loc=2)\nplt.show()\n\n\nY = np.array([1,1,0,0,1])\nA = np.array([0.8, 0.7, 0.2, 0.1, 0.9])\nA2 = np.array([0.6, 0.6, 0.2, 0.1, 0.3])\n\ndef cross_entropy(Y,A):\n \n # small tip ๅ›  log(0)ใ€€ๆœƒ่ถจ่ฟ‘่ฒ ็„ก้™ๅคง๏ผŒๆœƒ็”ข็”Ÿ nan ๏ผŒๆ•…้€™่ฃก็ตฑไธ€ๅŠ ไธŠ 0.00001\n Y = np.array(Y)\n A = np.array(A)\n m = len(A)\n cost = -(1.0/m) * np.sum(Y*np.log(A+0.00001) + (1-Y)*np.log(1-A+0.00001))\n return cost\n# Test cross_entropy Function\nprint cross_entropy(Y,A)\nprint cross_entropy(Y,A2)\nprint cross_entropy(Y,Y)", "LogisticRegression ๅ…ฌๅผๆŽจๅฐŽๅฆ‚ไธ‹\nๆŽจๅฐŽ้Ž็จ‹่ซ‹ๅƒ่€ƒ : 
http://speech.ee.ntu.edu.tw/~tlkagk/courses/ML_2017/Lecture/Logistic%20Regression%20(v4).pdf\npages 3-13 of those slides; the key steps are as follows\n\nLet $y = \mathrm{sigmoid}(w x + b) = f_{w,b}(x)$\nWe want to maximize the likelihood: $ \arg\max_{w,b} L_{w,b} = \prod_n \left( \hat y^n f_{w,b}(x^n) + (1-\hat y^n)(1-f_{w,b}(x^n)) \right) $\nTo treat it as a loss function instead, add a minus sign and take the logarithm for easier computation, so the expression becomes\n$ \arg\min_{w,b} -\sum_n \ln\left( \hat y^n f_{w,b}(x^n) + (1-\hat y^n)(1-f_{w,b}(x^n)) \right) $\nThen take the partial derivatives of this expression with respect to $w$ and $b$ to obtain $\Delta w$ and $\Delta b$,\nand update with the learning rate: $w_{t+1}=w_t - r\Delta w $ , $b_{t+1}=b_t - r\Delta b $\n\nThe final result of the derivation is as follows\n\nThe update rule for $w_i$ is given below, where $\hat y^n$ is the target label of training sample $n$ and $x^n$ the value of the $n$-th sample\n$w_{t+1} = w_t - r\sum_n \left(-(\hat y^n - f_{w,b}(x^n))x^n\right) $\nThe update for $b_i$ differs from $w_i$ only in that it is not multiplied by $x^n$: $b_{t+1} = b_t - r\sum_n \left(-(\hat y^n - f_{w,b}(x^n))\right) $", "import math\n\nw1 = 1\nw2 = 1\nb = 0\nr = 0.001\ndef fx(x1,x2):\n temp = w1*x1 + w2*x2 + b\n y_head = 1. / (1. + math.exp(-1.*temp))\n return y_head\n\n\n\ndef cross_entropy(Y, A):\n m = len(A)\n cost = -(1.0 / m) * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))\n return cost\n\nfor i in range(10000):\n w1_delta=0\n w2_delta=0\n\n b_delta = 0\n\n y_error = 0 \n \n \n for x1,x2,y_now in zip(f1,f2,y):\n \n y_error = y_now - fx(x1,x2)\n \n\n w1_delta = -1*x1*y_error\n w2_delta = -1*x2*y_error\n\n b_delta = -1*y_error\n\n w1 -= r*w1_delta\n w2 -= r*w2_delta\n\n b -= r*b_delta\n\n\n\n \n if i % 100==0 : \n error_rate = 0\n y_predict = []\n \n for x1,x2,y_now in zip(f1,f2,y): \n y_predict.append(fx(x1,x2))\n if y_now==1 and fx(x1,x2) < 0.5:\n error_rate+=1\n elif y_now==0 and fx(x1,x2) >=0.5:\n error_rate+=1\n\n print(\"{:0,.3f}, {:0,.3f}, {:0,.3f}, {:0,.3f}, {:0,.3f}\".format(error_rate*1./len(y) ,cross_entropy(np.array(y),np.array(y_predict)),w1,w2,b) )\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
flohorovicic/pynoddy
docs/notebooks/Training_Set_3.ipynb
gpl-2.0
[ "Generate Training sets\nBased on \"Reproducible Experiments\" notebook", "%matplotlib inline\n\n# here the usual imports. If any of the imports fails, \n# make sure that pynoddy is installed\n# properly, ideally with 'python setup.py develop' \n# or 'python setup.py install'\nimport sys, os\nimport matplotlib.pyplot as plt\nimport numpy as np\n# adjust some settings for matplotlib\nfrom matplotlib import rcParams\n# print rcParams\nrcParams['font.size'] = 15\n# determine path of repository to set paths corretly below\nrepo_path = os.path.realpath('../..')\nsys.path.append(repo_path)\nimport pynoddy\nimport pynoddy.history\nimport pynoddy.experiment\nimport importlib\nimportlib.reload(pynoddy.experiment)\nrcParams.update({'font.size': 15})\n\n# From notebook 4/ Traning Set example 1:\nimportlib.reload(pynoddy.history)\nimportlib.reload(pynoddy.events)\nnm = pynoddy.history.NoddyHistory()\n# add stratigraphy\nstrati_options = {'num_layers' : 3,\n 'layer_names' : ['layer 1', 'layer 2', 'layer 3'],\n 'layer_thickness' : [1500, 500, 1500]}\nnm.add_event('stratigraphy', strati_options )\n\n# The following options define the fault geometry:\nfault_options = {'name' : 'Fault_E',\n 'pos' : (4000, 0, 5000),\n 'dip_dir' : 90.,\n 'dip' : 60,\n 'slip' : 1000}\n\nnm.add_event('fault', fault_options)\nhistory = 'normal_fault.his'\noutput_name = 'normal_fault_out'\nnm.write_history(history)", "Initiate experiment with this input file:", "importlib.reload(pynoddy.history)\nimportlib.reload(pynoddy.experiment)\n\nfrom pynoddy.experiment import monte_carlo\nue = pynoddy.experiment.Experiment(history)\n\nue.change_cube_size(100)\nue.plot_section('y')", "Before we start to draw random realisations of the model, we should first store the base state of the model for later reference. This is simply possibel with the freeze() method which stores the current state of the model as the \"base-state\":", "ue.freeze()", "We now intialise the random generator. 
We can directly assign a random seed to simplify reproducibility (note that this is not essential: the random state is preserved within the model and could be retrieved at a later stage as well):", "ue.set_random_seed(12345)", "The next step is to define probability distributions for the relevant event parameters. Let's first look at the different events:", "ue.info(events_only = True)\n\nev2 = ue.events[2]\n\nev2.properties", "Next, we define the probability distributions for the uncertain input parameters:", "param_stats = [{'event' : 2, \n 'parameter': 'Slip',\n 'stdev': 300.0,\n 'type': 'normal'},\n {'event' : 2, \n 'parameter': 'Dip',\n 'stdev': 10.0,\n 'type': 'normal'},]\n\nue.set_parameter_statistics(param_stats)\n\nresolution = 100\nue.change_cube_size(resolution)\ntmp = ue.get_section('y')\nprob_2 = np.zeros_like(tmp.block[:,:,:])\nn_draws = 10\n\n\nfor i in range(n_draws):\n ue.random_draw()\n tmp = ue.get_section('y', resolution = resolution)\n prob_2 += (tmp.block[:,:,:] == 2)\n\n# Normalise\nprob_2 = prob_2 / float(n_draws)\n\nfig = plt.figure(figsize = (12,8))\nax = fig.add_subplot(111)\nax.imshow(prob_2.transpose()[:,0,:], \n origin = 'lower',\n interpolation = 'none')\nplt.title(\"Estimated probability of unit 2\")\nplt.xlabel(\"x (E-W)\")\nplt.ylabel(\"z\")", "This example shows how the base module for reproducible experiments with kinematics can be used. 
For further specification, child classes of Experiment can be defined, and we show examples of this type of extension in the next sections.\nAdjustments to generate training set\nFirst step: generate more layers and randomly select layers to visualise:", "ue.random_draw()\ns1 = ue.get_section('y')\ns1.block.shape\ns1.block[np.where(s1.block == 3)] = 1\ns1.plot_section('y', cmap='Greys')", "Idea: generate many layers, then randomly extract a couple of these and also assign different density/ color values:", "nm = pynoddy.history.NoddyHistory()\n# add stratigraphy\n\nn_layers = 8\n\nstrati_options['num_layers'] = n_layers\nstrati_options['layer_names'] = []\nstrati_options['layer_thickness'] = []\n\nfor n in range(n_layers):\n\n strati_options['layer_names'].append(\"layer %d\" % n)\n strati_options['layer_thickness'].append(5000./n_layers)\n\nnm.add_event('stratigraphy', strati_options )\n\n# The following options define the fault geometry:\nfault_options = {'name' : 'Fault_E',\n 'pos' : (1000, 0, 5000),\n 'dip_dir' : 90.,\n 'dip' : 60,\n 'slip' : 500}\n\nnm.add_event('fault', fault_options)\nhistory = 'normal_fault.his'\noutput_name = 'normal_fault_out'\nnm.write_history(history)\n\nimportlib.reload(pynoddy.history)\nimportlib.reload(pynoddy.experiment)\n\nfrom pynoddy.experiment import monte_carlo\nue = pynoddy.experiment.Experiment(history)\nue.freeze()\nue.set_random_seed(12345)\nue.set_extent(2800, 100, 2800)\n\nue.change_cube_size(50)\nue.plot_section('y')\n\nparam_stats = [{'event' : 2, \n 'parameter': 'Slip',\n 'stdev': 100.0,\n 'type': 'lognormal'},\n {'event' : 2, \n 'parameter': 'Dip',\n 'stdev': 10.0,\n 'type': 'normal'},\n# {'event' : 2, \n# 'parameter': 'Y',\n# 'stdev': 150.0,\n# 'type': 'normal'},\n {'event' : 2, \n 'parameter': 'X',\n 'stdev': 150.0,\n 'type': 'normal'},]\n\nue.set_parameter_statistics(param_stats)\n\n# randomly select layers:\nue.random_draw()\n\ns1 = ue.get_section('y')\n\n# create \"feature\" model:\nf1 = s1.block.copy()\n\n# 
randomly select layers:\nf1 = np.squeeze(f1)\n# n_featuers: number of \"features\" -> gray values in image\nn_features = 5\nvals = np.random.randint(0,255,size=n_features)\nfor n in range(n_layers):\n f1[f1 == n] = np.random.choice(vals)\n\nf1.shape\n\nplt.imshow(f1.T, origin='lower_left', cmap='Greys', interpolation='nearest')\n\n# blur image\nfrom scipy import ndimage\nf2 = ndimage.filters.gaussian_filter(f1, 1, mode='nearest')\n\n\nplt.imshow(f2.T, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)\n\n# randomly swap image\nif np.random.randint(2) == 1:\n f2 = f2[::-1,:]\n\nplt.imshow(f2.T, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)", "All in one function\nGenerate images for normal faults", "# back to before: re-initialise model:\nnm = pynoddy.history.NoddyHistory()\n# add stratigraphy\n\nn_layers = 18\n\nstrati_options['num_layers'] = n_layers\nstrati_options['layer_names'] = []\nstrati_options['layer_thickness'] = []\n\nfor n in range(n_layers):\n\n strati_options['layer_names'].append(\"layer %d\" % n)\n strati_options['layer_thickness'].append(5000./n_layers)\n\nnm.add_event('stratigraphy', strati_options )\n\n# The following options define the fault geometry:\nfault_options = {'name' : 'Fault_E',\n 'pos' : (1000, 0, 5000),\n 'dip_dir' : 90.,\n 'dip' : 60,\n 'slip' : 500}\n\nnm.add_event('fault', fault_options)\nhistory = 'normal_fault.his'\noutput_name = 'normal_fault_out'\nnm.write_history(history)\n\nfrom pynoddy.experiment import monte_carlo\nue = pynoddy.experiment.Experiment(history)\nue.freeze()\nue.set_random_seed(12345)\nue.set_extent(2800, 100, 2800)\nue.change_cube_size(50)\n\nparam_stats = [{'event' : 2, \n 'parameter': 'Slip',\n 'stdev': 100.0,\n 'type': 'lognormal'},\n {'event' : 2, \n 'parameter': 'Dip',\n 'stdev': 10.0,\n 'type': 'normal'},\n# {'event' : 2, \n# 'parameter': 'Y',\n# 'stdev': 150.0,\n# 'type': 'normal'},\n {'event' : 2, \n 'parameter': 'X',\n 'stdev': 150.0,\n 
'type': 'normal'},]\n\nue.set_parameter_statistics(param_stats)", "Generate training set for normal faults:", "n_train = 10000\nF_train = np.empty((n_train, 28*28))\n\nue.change_cube_size(100)\n\nfor i in range(n_train):\n # randomly select layers:\n ue.random_draw()\n s1 = ue.get_section('y')\n # create \"feature\" model:\n f1 = s1.block.copy()\n # randomly select layers:\n f1 = np.squeeze(f1)\n # n_featuers: number of \"features\" -> gray values in image\n n_features = 4\n vals = np.random.randint(0,255,size=n_features)\n for n in range(n_layers):\n f1[f1 == n+1] = np.random.choice(vals)\n f1 = f1.T\n f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')\n # scale image\n f2 = f2 - np.min(f2)\n if np.max(f2) != 0:\n f2 = f2/np.max(f2)*255\n # randomly swap image\n if np.random.randint(2) == 1:\n f2 = f2[::-1,:]\n F_train[i] = f2.flatten().T\n\n\nplt.imshow(f2, origin='lower_left', cmap='Greys', interpolation='nearest', vmin=0, vmax=255)\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))\nax = ax.flatten()\nfor i in range(10):\n img = F_train[i].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n\nax[0].set_xticks([])\nax[0].set_yticks([])\nplt.tight_layout()\n# plt.savefig('./figures/mnist_all.png', dpi=300)\nplt.show()\n\nimport pickle\n\n\nf = open(\"f_train_normal.pkl\", 'wb')\npickle.dump(F_train, f)", "Generate reverse faults\nAnd now: the same for reverse faults:", "# back to before: re-initialise model:\nnm = pynoddy.history.NoddyHistory()\n# add stratigraphy\n\nn_layers = 18\n\nstrati_options['num_layers'] = n_layers\nstrati_options['layer_names'] = []\nstrati_options['layer_thickness'] = []\n\nfor n in range(n_layers):\n\n strati_options['layer_names'].append(\"layer %d\" % n)\n strati_options['layer_thickness'].append(5000./n_layers)\n\nnm.add_event('stratigraphy', strati_options )\n\n# The following options define the fault 
geometry:\nfault_options = {'name' : 'Fault_E',\n 'pos' : (1000, 0, 5000),\n 'dip_dir' : 90.,\n 'dip' : 60,\n 'slip' : -500}\n\nnm.add_event('fault', fault_options)\nhistory = 'normal_fault.his'\noutput_name = 'normal_fault_out'\nnm.write_history(history)\n\nimportlib.reload(pynoddy.history)\nimportlib.reload(pynoddy.experiment)\n\nfrom pynoddy.experiment import monte_carlo\nue = pynoddy.experiment.Experiment(history)\nue.freeze()\nue.set_random_seed(12345)\nue.set_extent(2800, 100, 2800)\nue.change_cube_size(50)\n\nparam_stats = [{'event' : 2, \n 'parameter': 'Slip',\n 'stdev': 100.0,\n 'type': 'lognormal'},\n {'event' : 2, \n 'parameter': 'Dip',\n 'stdev': 10.0,\n 'type': 'normal'},\n# {'event' : 2, \n# 'parameter': 'Y',\n# 'stdev': 150.0,\n# 'type': 'normal'},\n {'event' : 2, \n 'parameter': 'X',\n 'stdev': 150.0,\n 'type': 'normal'},]\n\nue.set_parameter_statistics(param_stats)\n\nn_train = 10000\nF_train_rev = np.empty((n_train, 28*28))\n\nue.change_cube_size(100)\n\nfor i in range(n_train):\n # randomly select layers:\n ue.random_draw()\n s1 = ue.get_section('y')\n # create \"feature\" model:\n f1 = s1.block.copy()\n # randomly select layers:\n f1 = np.squeeze(f1)\n # n_features: number of \"features\" -> gray values in image\n n_features = 4\n vals = np.random.randint(0,255,size=n_features)\n for n in range(n_layers):\n f1[f1 == n+1] = np.random.choice(vals)\n f1 = f1.T\n f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')\n # scale image\n f2 = f2 - np.min(f2)\n if np.max(f2) != 0:\n f2 = f2/np.max(f2)*255\n # randomly swap image\n if np.random.randint(2) == 1:\n f2 = f2[::-1,:]\n F_train_rev[i] = f2.flatten().T\n\n\n\nfig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))\nax = ax.flatten()\nfor i in range(10):\n img = F_train_rev[i].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n\nax[0].set_xticks([])\nax[0].set_yticks([])\nplt.tight_layout()\n# plt.savefig('./figures/mnist_all.png', 
dpi=300)\nplt.show()\n\npickle.dump(F_train_rev, open(\"f_train_reverse.pkl\", 'wb'))", "Generate simple layer structure\nNo need for noddy, in this simple case - just adapt a numpy array:", "l1 = np.empty_like(s1.block[:,0,:])\n\nn_layers = 18\nfor i in range(l1.shape[0]):\n l1[:,i] = i\nl1_ori = np.floor(l1*n_layers/l1.shape[0])\n\nF_train_line = np.empty((n_train, 28*28))\n\n\n\nfor i in range(n_train):\n n_features = 4\n vals = np.random.randint(0,255,size=n_features)\n l1 = l1_ori.copy()\n for n in range(n_layers):\n l1[l1 == n] = np.random.choice(vals)\n f1 = l1.T\n f2 = ndimage.filters.gaussian_filter(f1, 0, mode='nearest')\n # scale image\n f2 = f2 - np.min(f2)\n if np.max(f2) != 0:\n f2 = f2/np.max(f2)*255\n F_train_line[i] = f2.flatten().T\n\nfig, ax = plt.subplots(nrows=2, ncols=5, sharex=True, sharey=True, figsize=(12,6))\nax = ax.flatten()\nfor i in range(10):\n img = F_train_line[i].reshape(28, 28)\n ax[i].imshow(img, cmap='Greys', interpolation='nearest')\n\nax[0].set_xticks([])\nax[0].set_yticks([])\nplt.tight_layout()\n# plt.savefig('./figures/mnist_all.png', dpi=300)\nplt.show()\n\npickle.dump(F_train_line, open(\"f_train_line.pkl\", 'wb'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
hhain/sdap17
notebooks/henrik_ueb01/02_Classification.ipynb
mit
[ "Excercise 1 Task 2\nExamination of runtime improvments of ensemble classifiers on a 250k elements dataset in dependence of the number of available cores\nThis notebook should run on an 8-core server environment to provide similar results", "# Load neccessary libraries changed pandas import for convinience\n%matplotlib inline\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom sklearn.datasets import make_classification\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.ensemble import RandomForestClassifier\n\n# creation of a dataset consisting of 250k samples\n# with the following parameters\nsamples = 250*1000\nfeatures = 40\ninformative = 5\nredundant=4\nX, Y = make_classification(n_samples=samples,\n n_features=features,\n n_informative=informative,\n n_redundant=4)\n\n# Split-out validation dataset\nvalidation_size = 0.20\nseed = 7\nscoring = 'accuracy'\nX_train, X_validation, Y_train, Y_validation = train_test_split(X,\n Y,\n test_size=validation_size,\n random_state=seed)", "Using 8 estimators (usage of one per core if 8 cores (jobs) are used)\nOne RandomForestClassifier (RFC) for each number of jobs (1 to 8 (inclusive)) is instantiated and trained on the training set of 200k elements. During the training the train time is measured with the magic %timeit function and stored in an array.", "# Create Random Forest Classifier\nestimators = 8 # For mapping one estimator per core in case of max 8 cores\njobs = 8\ntime_it_results = []\nfor _ in range(jobs):\n rf_class = RandomForestClassifier(n_estimators=estimators, n_jobs=(_+1))\n tr = %timeit -o rf_class.fit(X_train, Y_train)\n time_it_results.append(tr)\n\n# best_times are extracted\nbest_times = [timer.best for timer in time_it_results]", "Plot of the training time in seconds of each RFC against the number of used cores (number of jobs)", "x = np.arange(1,9)\nlabels = ['%i. 
Core' % i for i in x]\nfig = plt.figure()\nfig.suptitle('Training Time per number of cores')\nax = fig.add_subplot(111)\nax.set_xlabel('Number of cores')\nax.set_ylabel('Training time (s)')\nax.plot(x, best_times)\nplt.xticks(x, labels, rotation='vertical')\nplt.show()", "Execution time is exponentially decreasing till 4 cpu cores are utilized. Further increase and decrease dependes on mainly two factors.\n- Overhead intruduced by managing multiprocessing\n- Overhead introduced by copying the datasets for processing\nA slight increase in runtime between 4 and 7 cores can be experienced till 8 cpu cores are utilized" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ja/tutorials/structured_data/feature_columns.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "็‰นๅพด้‡ๅˆ—ใ‚’ไฝฟ็”จใ—ใฆๆง‹้€ ๅŒ–ใƒ‡ใƒผใ‚ฟใ‚’ๅˆ†้กžใ™ใ‚‹\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td> <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/structured_data/feature_columns\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org ใง่กจ็คบ</a>\n</td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/structured_data/feature_columns.ipynb\"> <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Google Colab ใงๅฎŸ่กŒ</a>\n</td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/structured_data/feature_columns.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub ใงใ‚ฝใƒผใ‚นใ‚’่กจ็คบ</a></td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/structured_data/feature_columns.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ใƒŽใƒผใƒˆใƒ–ใƒƒใ‚ฏใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰</a>\n</td>\n</table>\n\n\n่ญฆๅ‘Š: ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใง่ชฌๆ˜Žใ•ใ‚Œใฆใ„ใ‚‹ tf.feature_columns ใƒขใ‚ธใƒฅใƒผใƒซใฏใ€ๆ–ฐใ—ใ„ใ‚ณใƒผใƒ‰ใซใฏใŠๅ‹งใ‚ใ—ใพใ›ใ‚“ใ€‚ Keras ๅ‰ๅ‡ฆ็†ใƒฌใ‚คใƒคใƒผใŒใ“ใฎๆฉŸ่ƒฝใ‚’ใ‚ซใƒใƒผใ—ใฆใ„ใพใ™ใ€‚็งป่กŒๆ‰‹้ 
†ใซใคใ„ใฆใฏใ€็‰นๅพด้‡ๅˆ—ใฎ็งป่กŒใ‚ฌใ‚คใƒ‰ใ‚’ใ”่ฆงใใ ใ•ใ„ใ€‚tf.feature_columns ใƒขใ‚ธใƒฅใƒผใƒซใฏใ€TF1 Estimators ใงไฝฟ็”จใ™ใ‚‹ใŸใ‚ใซ่จญ่จˆใ•ใ‚Œใพใ—ใŸใ€‚ไบ’ๆ›ๆ€งไฟ่จผใฎๅฏพ่ฑกใจใชใ‚Šใพใ™ใŒใ€ใ‚ปใ‚ญใƒฅใƒชใƒ†ใ‚ฃใฎ่„†ๅผฑๆ€งไปฅๅค–ใฎไฟฎๆญฃใฏ่กŒใ‚ใ‚Œใพใ›ใ‚“ใ€‚\n\nThis tutorial demonstrates how to classify structured data (e.g. tabular data in a CSV). We will use Keras to define the model, and tf.feature_column as a bridge to map from columns in a CSV to features used to train the model. This tutorial contains complete code to:\n\nPandas ใ‚’ไฝฟ็”จใ—ใฆ CSV ใƒ•ใ‚กใ‚คใƒซใ‚’่ชญใฟ่พผใฟใพใ™ใ€‚\ntf.data ใ‚’ไฝฟ็”จใ—ใฆใ€่กŒใ‚’ใƒใƒƒใƒๅŒ–ใ—ใฆใ‚ทใƒฃใƒƒใƒ•ใƒซใ™ใ‚‹ๅ…ฅๅŠ›ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๆง‹็ฏ‰ใ—ใพใ™ใ€‚\n็‰นๅพด้‡ใฎๅˆ—ใ‚’ไฝฟใฃใฆใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใซไฝฟ็”จใ™ใ‚‹็‰นๅพด้‡ใซใ€CSV ใฎๅˆ—ใ‚’ใƒžใƒƒใƒ”ใƒณใ‚ฐใ—ใพใ™ใ€‚\nKerasใ‚’ไฝฟใฃใŸใƒขใƒ‡ใƒซใฎๆง‹็ฏ‰ใจใ€่จ“็ทดๅŠใณ่ฉ•ไพก\n\nใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆ\nไธ‹่จ˜ใฏใ“ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎ่ชชๆ˜Žใงใ™ใ€‚ๆ•ฐๅ€คๅˆ—ใจใ‚ซใƒ†ใ‚ดใƒชใƒผๅˆ—ใŒใ‚ใ‚‹ใ“ใจใซๆณจ็›ฎใ—ใฆใใ ใ•ใ„ใ€‚\nFollowing is a description of this dataset. Notice there are both numeric and categorical columns. 
There is a free text column which we will not use in this tutorial.\nๅˆ— | ่ชฌๆ˜Ž | ็‰นๅพด้‡ใฎๅž‹ | ใƒ‡ใƒผใ‚ฟๅž‹\n--- | --- | --- | ---\nType | ๅ‹•็‰ฉใฎ็จฎ้กž๏ผˆ็Šฌใ€็Œซ๏ผ‰ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nAge | ใƒšใƒƒใƒˆใฎๅนด้ฝข | ๆ•ฐๅ€ค | ๆ•ดๆ•ฐ\nBreed1 | ใƒšใƒƒใƒˆใฎไธปใชๅ“็จฎ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nColor1 | ใƒšใƒƒใƒˆใฎๆฏ›่‰ฒ 1 | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nColor2 | ใƒšใƒƒใƒˆใฎๆฏ›่‰ฒ 2 | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nMaturitySize | ๆˆ็ฃๆ™‚ใฎใ‚ตใ‚คใ‚บ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nFurLength | ๆฏ›ใฎ้•ทใ• | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nVaccinated | ไบˆ้˜ฒๆŽฅ็จฎๆธˆใฟ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nSterilized | ไธๅฆŠๆ‰‹่ก“ๆธˆใฟ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nHealth | ๅฅๅบท็Šถๆ…‹ | ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ | ๆ–‡ๅญ—ๅˆ—\nFee | ๅผ•ใๅ–ใ‚Šๆ–™ | ๆ•ฐๅ€ค | ๆ•ดๆ•ฐ\nDescription | ใƒšใƒƒใƒˆใฎใƒ—ใƒญใƒ•ใ‚ฃใƒผใƒซ | ใƒ†ใ‚ญใ‚นใƒˆ | ๆ–‡ๅญ—ๅˆ—\nPhotoAmt | ใ‚ขใƒƒใƒ—ใƒญใƒผใƒ‰ใ•ใ‚ŒใŸใƒšใƒƒใƒˆใฎๅ†™็œŸๆ•ฐ | ๆ•ฐๅ€ค | ๆ•ดๆ•ฐ\nAdoptionSpeed | ๅผ•ใๅ–ใ‚ŠใพใงใฎๆœŸ้–“ | ๅˆ†้กž | ๆ•ดๆ•ฐ\nTensorFlowไป–ใƒฉใ‚คใƒ–ใƒฉใƒชใฎใ‚คใƒณใƒใƒผใƒˆ", "!pip install sklearn\n\nimport numpy as np\nimport pandas as pd\n\nimport tensorflow as tf\n\nfrom tensorflow import feature_column\nfrom tensorflow.keras import layers\nfrom sklearn.model_selection import train_test_split", "Pandasใ‚’ไฝฟใฃใŸใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ไฝœๆˆ\nPandasใฏใ€ๆง‹้€ ๅŒ–ใƒ‡ใƒผใ‚ฟใฎ่ชญใฟ่พผใฟใ‚„ๆ“ไฝœใฎใŸใ‚ใฎไพฟๅˆฉใชใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใ‚’ๆŒใคPythonใฎใƒฉใ‚คใƒ–ใƒฉใƒชใงใ™ใ€‚ใ“ใ“ใงใฏใ€Pandasใ‚’ไฝฟใฃใฆURLใ‹ใ‚‰ใƒ‡ใƒผใ‚ฟใ‚’ใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใ€ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใซ่ชญใฟ่พผใฟใพใ™ใ€‚", "URL = 'https://storage.googleapis.com/applied-dl/heart.csv'\ndataframe = pd.read_csv(URL)\ndataframe.head()\n\ndataframe.head()", "ใ‚ฟใƒผใ‚ฒใƒƒใƒˆๅค‰ๆ•ฐใ‚’ไฝœๆˆใ™ใ‚‹\nๅ…ƒใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใฏใ€ใƒšใƒƒใƒˆใŒๅผ•ใๅ–ใ‚‰ใ‚Œใ‚‹ใพใงใฎๆœŸ้–“ (1 ้€ฑ็›ฎใ€1 ใ‹ๆœˆ็›ฎใ€3 ใ‹ๆœˆ็›ฎใชใฉ) 
ใ‚’ไบˆๆธฌใ™ใ‚‹ใ“ใจใŒใ‚ฟใ‚นใ‚ฏใจใชใฃใฆใ„ใพใ™ใŒใ€ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ใ“ใฎใ‚ฟใ‚นใ‚ฏใ‚’ๅ˜็ด”ๅŒ–ใ—ใพใ™ใ€‚ใ“ใ“ใงใฏใ€ใ“ใฎใ‚ฟใ‚นใ‚ฏใ‚’ไบŒ้ …ๅˆ†้กžๅ•้กŒใซใ—ใ€ๅ˜ใซใƒšใƒƒใƒˆใŒๅผ•ใๅ–ใ‚‰ใ‚Œใ‚‹ใ‹ใฉใ†ใ‹ใฎใฟใ‚’ไบˆๆธฌใ—ใพใ™ใ€‚\nใƒฉใƒ™ใƒซใฎๅˆ—ใ‚’ๅค‰ๆ›ดใ™ใ‚‹ใจใ€0 ใฏๅผ•ใๅ–ใ‚‰ใ‚Œใชใ‹ใฃใŸใ€1 ใฏๅผ•ใๅ–ใ‚‰ใ‚ŒใŸใ“ใจใ‚’็คบใ™ใ‚ˆใ†ใซใชใ‚Šใพใ™ใ€‚", "# In the original dataset \"4\" indicates the pet was not adopted.\ndataframe['target'] = np.where(dataframe['AdoptionSpeed']==4, 0, 1)\n\n# Drop un-used columns.\ndataframe = dataframe.drop(columns=['AdoptionSpeed', 'Description'])", "ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใ‚’ใ€่จ“็ทด็”จใ€ๆคœ่จผ็”จใ€ใƒ†ใ‚นใƒˆ็”จใซๅˆ†ๅ‰ฒ\nใƒ€ใ‚ฆใƒณใƒญใƒผใƒ‰ใ—ใŸใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏ1ใคใฎCSVใƒ•ใ‚กใ‚คใƒซใงใ™ใ€‚ใ“ใ‚Œใ‚’ใ€่จ“็ทด็”จใ€ๆคœ่จผ็”จใ€ใƒ†ใ‚นใƒˆ็”จใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซๅˆ†ๅ‰ฒใ—ใพใ™ใ€‚", "train, test = train_test_split(dataframe, test_size=0.2)\ntrain, val = train_test_split(train, test_size=0.2)\nprint(len(train), 'train examples')\nprint(len(val), 'validation examples')\nprint(len(test), 'test examples')", "tf.dataใ‚’ไฝฟใฃใŸๅ…ฅๅŠ›ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใฎๆง‹็ฏ‰\nๆฌกใซใ€tf.data ใ‚’ไฝฟใฃใฆใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใ‚’ใƒฉใƒƒใƒ—ใ—ใพใ™ใ€‚ใ“ใ†ใ™ใ‚‹ใ“ใจใงใ€็‰นๅพด้‡ใฎๅˆ—ใ‚’ Pandas ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎๅˆ—ใ‹ใ‚‰ใƒขใƒ‡ใƒซใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐ็”จใฎ็‰นๅพด้‡ใธใฎใƒžใƒƒใƒ”ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎๆฉ‹ๆธกใ—ๅฝนใจใ—ใฆไฝฟใ†ใ“ใจใŒใงใใพใ™ใ€‚(ใƒกใƒขใƒชใซๅŽใพใ‚‰ใชใ„ใใ‚‰ใ„ใฎ) ้žๅธธใซๅคงใใช CSV ใƒ•ใ‚กใ‚คใƒซใ‚’ๆ‰ฑใ†ๅ ดๅˆใซใฏใ€tf.data ใ‚’ไฝฟใฃใฆใƒ‡ใ‚ฃใ‚นใ‚ฏใ‹ใ‚‰็›ดๆŽฅ CSV ใƒ•ใ‚กใ‚คใƒซใ‚’่ชญใฟ่พผใ‚€ใ“ใจใซใชใ‚Šใพใ™ใ€‚ใ“ใฎๆ–นๆณ•ใฏใ€ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏๅ–ใ‚ŠไธŠใ’ใพใ›ใ‚“ใ€‚", "# Pandasใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใ‹ใ‚‰tf.dataใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝœใ‚‹ใŸใ‚ใฎใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใƒกใ‚ฝใƒƒใƒ‰\ndef df_to_dataset(dataframe, shuffle=True, batch_size=32):\n dataframe = dataframe.copy()\n labels = dataframe.pop('target')\n ds = 
tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n return ds\n\nbatch_size = 5 # ใƒ‡ใƒข็”จใจใ—ใฆๅฐใ•ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ไฝฟ็”จ\ntrain_ds = df_to_dataset(train, batch_size=batch_size)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)\ntest_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)", "ๅ…ฅๅŠ›ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’็†่งฃใ™ใ‚‹\nๅ…ฅๅŠ›ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ๆง‹็ฏ‰ใ—ใŸใฎใงใ€ใใ‚ŒใŒ่ฟ”ใ™ใƒ‡ใƒผใ‚ฟใฎใƒ•ใ‚ฉใƒผใƒžใƒƒใƒˆใ‚’่ฆ‹ใ‚‹ใŸใ‚ใซๅ‘ผใณๅ‡บใ—ใฆใฟใพใ—ใ‚‡ใ†ใ€‚ๅ‡บๅŠ›ใ‚’่ชญใฟใ‚„ใ™ใใ™ใ‚‹ใŸใ‚ใซใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ๅฐใ•ใใ—ใฆใ‚ใ‚Šใพใ™ใ€‚", "for feature_batch, label_batch in train_ds.take(1):\n print('Every feature:', list(feature_batch.keys()))\n print('A batch of ages:', feature_batch['Age'])\n print('A batch of targets:', label_batch )", "ใ”่ฆงใฎใจใŠใ‚Šใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฏใ€ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎ่กŒใ‹ใ‚‰ๅˆ—ใฎๅ€คใซใƒžใƒƒใƒ—ใ—ใฆใ„ใ‚‹ๅˆ—ๅใฎ (ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎๅˆ—ๅ) ใฎใƒ‡ใ‚ฃใ‚ฏใ‚ทใƒงใƒŠใƒชใ‚’่ฟ”ใ—ใฆใ„ใพใ™ใ€‚\n็‰นๅพด้‡ๅˆ—ใฎๆง˜ใ€…ใชๅž‹ใฎใƒ‡ใƒข\nTensorFlow ใซใฏๆง˜ใ€…ใชๅž‹ใฎ็‰นๅพด้‡ๅˆ—ใŒใ‚ใ‚Šใพใ™ใ€‚ใ“ใฎใ‚ปใ‚ฏใ‚ทใƒงใƒณใงใฏใ€ใ„ใใคใ‹ใฎๅž‹ใฎ็‰นๅพด้‡ๅˆ—ใ‚’ไฝœใ‚Šใ€ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎๅˆ—ใ‚’ใฉใฎใ‚ˆใ†ใซๅค‰ๆ›ใ™ใ‚‹ใ‹ใ‚’็คบใ—ใพใ™ใ€‚", "# ใ„ใใคใ‹ใฎๅž‹ใฎfeature columnsใ‚’ไพ‹็คบใ™ใ‚‹ใŸใ‚ใ“ใฎใƒใƒƒใƒใ‚’ไฝฟ็”จใ™ใ‚‹\nexample_batch = next(iter(train_ds))[0]\n\n# feature columnsใ‚’ไฝœใ‚Šใƒ‡ใƒผใ‚ฟใฎใƒใƒƒใƒใ‚’ๅค‰ๆ›ใ™ใ‚‹\n# ใƒฆใƒผใƒ†ใ‚ฃใƒชใƒ†ใ‚ฃใƒกใ‚ฝใƒƒใƒ‰\ndef demo(feature_column):\n feature_layer = layers.DenseFeatures(feature_column)\n print(feature_layer(example_batch).numpy())", "ๆ•ฐๅ€คใ‚ณใƒฉใƒ \n็‰นๅพด้‡ๅˆ—ใฎๅ‡บๅŠ›ใฏใƒขใƒ‡ใƒซใธใฎๅ…ฅๅŠ›ใซใชใ‚Šใพใ™ (ไธŠ่จ˜ใงๅฎš็พฉใ—ใŸใƒ‡ใƒข้–ขๆ•ฐใ‚’ไฝฟใ†ใจใ€ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ 
ใฎๅˆ—ใŒใฉใฎใ‚ˆใ†ใซๅค‰ๆ›ใ•ใ‚Œใ‚‹ใ‹ใ‚’่ฆ‹ใ‚‹ใ“ใจใŒใงใใพใ™)ใ€‚ๆ•ฐๅ€คๅˆ—ใฏใ€ๆœ€ใ‚‚ๅ˜็ด”ใชๅž‹ใฎๅˆ—ใงใ™ใ€‚ๆ•ฐๅ€คๅˆ—ใฏๅฎŸๆ•ฐ็‰นๅพด้‡ใ‚’่กจ็พใ™ใ‚‹ใฎใซไฝฟใ‚ใ‚Œใพใ™ใ€‚ใ“ใฎๅˆ—ใ‚’ไฝฟใ†ๅ ดๅˆใ€ใƒขใƒ‡ใƒซใซใฏใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎๅˆ—ใฎๅ€คใŒใใฎใพใพๆธกใ•ใ‚Œใพใ™ใ€‚", "photo_count = feature_column.numeric_column('PhotoAmt')\ndemo(photo_count)", "PetFinder ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใฏใ€ใƒ‡ใƒผใ‚ฟใƒ•ใƒฌใƒผใƒ ใฎใปใจใ‚“ใฉใฎๅˆ—ใŒใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซๅž‹ใงใ™ใ€‚\nใƒใ‚ฑใƒƒใƒˆๅŒ–ใ‚ณใƒฉใƒ \nๆ•ฐๅ€คใ‚’ใใฎใพใพใƒขใƒ‡ใƒซใซๅ…ฅๅŠ›ใ™ใ‚‹ใฎใงใฏใชใใ€ๅ€คใฎ็ฏ„ๅ›ฒใซๅŸบใฅใ„ใŸ็•ฐใชใ‚‹ใ‚ซใƒ†ใ‚ดใƒชใซๅˆ†ๅ‰ฒใ—ใŸใ„ใ“ใจใŒใ‚ใ‚Šใพใ™ใ€‚ไพ‹ใˆใฐใ€ไบบใฎๅนด้ฝขใ‚’่กจใ™็”Ÿใƒ‡ใƒผใ‚ฟใ‚’่€ƒใˆใฆใฟใพใ—ใ‚‡ใ†ใ€‚ใƒใ‚ฑใƒƒใƒˆๅŒ–ๅˆ—ใ‚’ไฝฟใ†ใจๅนด้ฝขใ‚’ๆ•ฐๅ€คๅˆ—ใจใ—ใฆ่กจ็พใ™ใ‚‹ใฎใงใฏใชใใ€ๅนด้ฝขใ‚’ใ„ใใคใ‹ใฎใƒใ‚ฑใƒƒใƒˆใซๅˆ†ๅ‰ฒใงใใพใ™ใ€‚ไปฅไธ‹ใฎใƒฏใƒณใƒ›ใƒƒใƒˆๅ€คใŒใ€ๅ„่กŒใŒใฉใฎๅนด้ฝข็ฏ„ๅ›ฒใซใ‚ใ‚‹ใ‹ใ‚’่กจใ—ใฆใ„ใ‚‹ใ“ใจใซๆณจ็›ฎใ—ใฆใใ ใ•ใ„ใ€‚", "age = feature_column.numeric_column('Age')\nage_buckets = feature_column.bucketized_column(age, boundaries=[1, 3, 5])\ndemo(age_buckets)", "ใ‚ซใƒ†ใ‚ดใƒชใƒผๅž‹ใ‚ณใƒฉใƒ \nใ“ใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใงใฏใ€ๅž‹ใฏ (ใ€Œ็Šฌใ€ใ‚„ใ€Œ็Œซใ€ใชใฉใฎ) ๆ–‡ๅญ—ๅˆ—ใจใ—ใฆ่กจ็พใ•ใ‚Œใฆใ„ใพใ™ใ€‚ๆ–‡ๅญ—ๅˆ—ใ‚’็›ดๆŽฅใƒขใƒ‡ใƒซใซๅ…ฅๅŠ›ใ™ใ‚‹ใ“ใจใฏใงใใพใ›ใ‚“ใ€‚ใพใšใ€ๆ–‡ๅญ—ๅˆ—ใ‚’ๆ•ฐๅ€คใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ™ใ‚‹ๅฟ…่ฆใŒใ‚ใ‚Šใพใ™ใ€‚ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซ่ชžๅฝ™ๅˆ—ใ‚’ไฝฟใ†ใจใ€(ไธŠ่จ˜ใง็คบใ—ใŸๅนด้ฝขใƒใ‚ฑใƒƒใƒˆใฎใ‚ˆใ†ใซ) ๆ–‡ๅญ—ๅˆ—ใ‚’ใƒฏใƒณใƒ›ใƒƒใƒˆใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่กจ็พใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚่ชžๅฝ™ใฏcategorical_column_with_vocabulary_list ใ‚’ไฝฟใฃใฆใƒชใ‚นใƒˆใงๆธกใ™ใ‹ใ€categorical_column_with_vocabulary_file ใ‚’ไฝฟใฃใฆใƒ•ใ‚กใ‚คใƒซใ‹ใ‚‰่ชญใฟ่พผใ‚€ใ“ใจใŒใงใใพใ™ใ€‚", "thal = feature_column.categorical_column_with_vocabulary_list(\n 'thal', ['fixed', 'normal', 'reversible'])\n\nthal_one_hot = 
feature_column.indicator_column(thal)\ndemo(thal_one_hot)", "ๅŸ‹ใ‚่พผใฟๅž‹ใ‚ณใƒฉใƒ \nๆ•ฐ็จฎ้กžใฎๆ–‡ๅญ—ๅˆ—ใงใฏใชใใ€ใ‚ซใƒ†ใ‚ดใƒชใ”ใจใซๆ•ฐๅƒ (ใ‚ใ‚‹ใ„ใฏใใ‚ŒไปฅไธŠ) ใฎๅ€คใŒใ‚ใ‚‹ใจใ—ใพใ—ใ‚‡ใ†ใ€‚ใ‚ซใƒ†ใ‚ดใƒชใฎๆ•ฐใŒๅคšใใชใฃใฆใใ‚‹ใจใ€ๆง˜ใ€…ใช็†็”ฑใ‹ใ‚‰ใ€ใƒฏใƒณใƒ›ใƒƒใƒˆใ‚จใƒณใ‚ณใƒผใƒ‡ใ‚ฃใƒณใ‚ฐใ‚’ไฝฟใฃใฆใƒ‹ใƒฅใƒผใƒฉใƒซใƒใƒƒใƒˆใƒฏใƒผใ‚ฏใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใ“ใจใŒ้›ฃใ—ใใชใ‚Šใพใ™ใ€‚ๅŸ‹ใ‚่พผใฟๅˆ—ใ‚’ไฝฟใ†ใจใ€ใ“ใ†ใ—ใŸๅˆถ็ด„ใ‚’ๅ…‹ๆœใ™ใ‚‹ใ“ใจใŒๅฏ่ƒฝใงใ™ใ€‚ๅŸ‹ใ‚่พผใฟๅˆ—ใฏใ€ใƒ‡ใƒผใ‚ฟใ‚’ๅคšๆฌกๅ…ƒใฎใƒฏใƒณใƒ›ใƒƒใƒˆใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่กจใ™ใฎใงใฏใชใใ€ใ‚ปใƒซใฎๅ€คใŒ 0 ใ‹ 1 ใ‹ใ ใ‘ใงใฏใชใใ€ใฉใ‚“ใชๆ•ฐๅ€คใงใ‚‚ใจใ‚Œใ‚‹ใ‚ˆใ†ใชๅฏ†ใชไฝŽๆฌกๅ…ƒใƒ™ใ‚ฏใƒˆใƒซใจใ—ใฆ่กจ็พใ—ใพใ™ใ€‚ๅŸ‹ใ‚่พผใฟใฎใ‚ตใ‚คใ‚บ (ไธ‹่จ˜ใฎไพ‹ใงใฏ 8) ใฏใ€ใƒใƒฅใƒผใƒ‹ใƒณใ‚ฐใŒๅฟ…่ฆใชใƒ‘ใƒฉใƒกใƒผใ‚ฟใงใ™ใ€‚\n้‡่ฆใƒใ‚คใƒณใƒˆ: ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซๅˆ—ใŒๅคšใใฎ้ธๆŠž่‚ขใ‚’ๆŒใคๅ ดๅˆใ€ๅŸ‹ใ‚่พผใฟๅˆ—ใ‚’ไฝฟ็”จใ™ใ‚‹ใ“ใจใŒๆœ€ๅ–„ใฎๆ–นๆณ•ใงใ™ใ€‚ใ“ใ“ใงใฏไพ‹ใ‚’ไธ€ใค็คบใ—ใพใ™ใฎใงใ€ไปŠๅพŒๆง˜ใ€…ใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ๆ‰ฑใ†้š›ใซใฏใ€ใ“ใฎไพ‹ใ‚’ๅ‚่€ƒใซใ—ใฆใใ ใ•ใ„ใ€‚", "# ใ“ใฎๅŸ‹่พผใฟๅž‹ใ‚ณใƒฉใƒ ใฎๅ…ฅๅŠ›ใฏใ€ๅ…ˆ็จ‹ไฝœๆˆใ—ใŸใ‚ซใƒ†ใ‚ดใƒชๅž‹ใ‚ณใƒฉใƒ ใงใ‚ใ‚‹ใ“ใจใซๆณจๆ„\nthal_embedding = feature_column.embedding_column(thal, dimension=8)\ndemo(thal_embedding)", "ใƒใƒƒใ‚ทใƒฅๅŒ–็‰นๅพด้‡ๅˆ—\nๅ€คใฎ็จฎ้กžใŒๅคšใ„ใ‚ซใƒ†ใ‚ดใƒชใ‚ซใƒซๅˆ—ใ‚’่กจ็พใ™ใ‚‹ใ‚‚ใ†ไธ€ใคใฎๆ–นๆณ•ใจใ—ใฆใ€categorical_column_with_hash_bucket ใ‚’ไฝฟใ†ใ“ใจใŒใงใใพใ™ใ€‚ใ“ใฎ็‰นๅพด้‡ๅˆ—ใฏๅ…ฅๅŠ›ใฎใƒใƒƒใ‚ทใƒฅๅ€คใ‚’่จˆ็ฎ—ใ—ใ€ๆ–‡ๅญ—ๅˆ—ใ‚’ใ‚จใƒณใ‚ณใƒผใƒ‰ใ™ใ‚‹ใŸใ‚ใซ hash_bucket_size ใƒใ‚ฑใƒƒใƒˆใฎ 1 ใคใ‚’้ธๆŠžใ—ใพใ™ใ€‚ใ“ใฎๅˆ—ใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใซใฏใ€่ชžๅฝ™ใ‚’็”จๆ„ใ™ใ‚‹ๅฟ…่ฆใฏใ‚ใ‚Šใพใ›ใ‚“ใ€‚ใพใŸใ€ใ‚นใƒšใƒผใ‚นใฎ็ฏ€็ด„ใฎใŸใ‚ใซใ€ๅฎŸ้š›ใฎใ‚ซใƒ†ใ‚ดใƒชๆ•ฐใซๆฏ”ในใฆๆฅตใ‚ใฆๅฐ‘ใชใ„ hash_buckets ๆ•ฐใ‚’้ธๆŠžใ™ใ‚‹ใ“ใจใ‚‚ๅฏ่ƒฝใงใ™ใ€‚\n้‡่ฆใƒใ‚คใƒณใƒˆ: 
ใ“ใฎๆ‰‹ๆณ•ใฎ้‡่ฆใชๆฌ ็‚นใฎไธ€ใคใฏใ€็•ฐใชใ‚‹ๆ–‡ๅญ—ๅˆ—ใŒๅŒใ˜ใƒใ‚ฑใƒƒใƒˆใซใƒžใƒƒใƒ”ใƒณใ‚ฐใ•ใ‚Œใ€่ก็ชใŒ็™บ็”Ÿใ™ใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚‹ใจใ„ใ†ใ“ใจใงใ™ใ€‚ใ—ใ‹ใ—ใชใŒใ‚‰ใ€ใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใซใ‚ˆใฃใฆใฏๅ•้กŒใŒ็™บ็”Ÿใ—ใชใ„ๅ ดๅˆใ‚‚ใ‚ใ‚Šใพใ™ใ€‚", "thal_hashed = feature_column.categorical_column_with_hash_bucket(\n 'thal', hash_bucket_size=1000)\ndemo(feature_column.indicator_column(thal_hashed))", "ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚ฏใƒญใ‚นๅˆ—\n่ค‡ๆ•ฐใฎ็‰นๅพด้‡ใ‚’ใพใจใ‚ใฆ1ใคใฎ็‰นๅพด้‡ใซใ™ใ‚‹ใ€ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚ฏใƒญใ‚นใจใ—ใฆ็Ÿฅใ‚‰ใ‚Œใฆใ„ใ‚‹ๆ‰‹ๆณ•ใฏใ€ใƒขใƒ‡ใƒซใŒ็‰นๅพด้‡ใฎ็ต„ใฟๅˆใ‚ใ›ใฎไธ€ใคไธ€ใคใซๅˆฅใ€…ใฎ้‡ใฟใ‚’ๅญฆ็ฟ’ใ™ใ‚‹ใ“ใจใ‚’ๅฏ่ƒฝใซใ—ใพใ™ใ€‚ใ“ใ“ใงใฏๅนด้ฝขใจๅž‹ใ‚’ไบคๅทฎใ•ใ›ใฆๆ–ฐใ—ใ„็‰นๅพด้‡ใ‚’ไฝœใฃใฆใฟใพใ™ใ€‚(crossed_column) ใฏใ€่ตทใ“ใ‚Šใ†ใ‚‹ใ™ในใฆใฎ็ต„ใฟๅˆใ‚ใ›ๅ…จไฝ“ใฎ่กจ (ใ“ใ‚Œใฏ้žๅธธใซๅคงใใใชใ‚‹ๅฏ่ƒฝๆ€งใŒใ‚ใ‚Šใพใ™) ใ‚’ไฝœใ‚‹ใ‚‚ใฎใงใฏใชใ„ใ“ใจใซๆณจๆ„ใ—ใฆใใ ใ•ใ„ใ€‚ใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚ฏใƒญใ‚นๅˆ—ใฏใ€ไปฃใ‚ใ‚Šใซใƒใƒƒใ‚ฏใ‚จใƒณใƒ‰ใจใ—ใฆ hashed_column ใ‚’ไฝฟ็”จใ—ใฆใ„ใ‚‹ใŸใ‚ใ€่กจใฎๅคงใใ•ใ‚’้ธๆŠžใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚", "crossed_feature = feature_column.crossed_column([age_buckets, animal_type], hash_bucket_size=10)\ndemo(feature_column.indicator_column(crossed_feature))", "ไฝฟ็”จใ™ใ‚‹ใ‚ณใƒฉใƒ ใ‚’้ธๆŠžใ™ใ‚‹\nใ“ใ‚Œใพใงใ€ใ„ใใคใ‹ใฎ็‰นๅพด้‡ๅˆ—ใฎไฝฟใ„ๆ–นใ‚’่ฆ‹ใฆใใพใ—ใŸใ€‚ใ“ใ‚Œใ‹ใ‚‰ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใซใใ‚Œใ‚‰ใ‚’ไฝฟ็”จใ—ใพใ™ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใฎ็›ฎ็š„ใฏใ€็‰นๅพด้‡ๅˆ—ใ‚’ไฝฟใ†ใฎใซๅฟ…่ฆใชๅฎŒๅ…จใชใ‚ณใƒผใƒ‰ (ใ„ใ‚ใฐไป•็ต„ใฟ) ใ‚’็คบใ™ใ“ใจใงใ™ใ€‚ไปฅไธ‹ใงใฏใƒขใƒ‡ใƒซใ‚’ใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใ™ใ‚‹ใŸใ‚ใฎๅˆ—ใ‚’้ฉๅฝ“ใซ้ธใณใพใ—ใŸใ€‚\nใ‚ญใƒผใƒใ‚คใƒณใƒˆ๏ผšๆญฃ็ขบใชใƒขใƒ‡ใƒซใ‚’ๆง‹็ฏ‰ใ™ใ‚‹ใฎใŒ็›ฎ็š„ใงใ‚ใ‚‹ๅ ดๅˆใซใฏใ€ใงใใ‚‹ใ 
ใ‘ๅคงใใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใฆใ€ใฉใฎ็‰นๅพด้‡ใ‚’ๅซใ‚ใ‚‹ใฎใŒใ‚‚ใฃใจใ‚‚ๆ„ๅ‘ณใŒใ‚ใ‚‹ใฎใ‹ใ‚„ใ€ใใ‚Œใ‚‰ใ‚’ใฉใ†่กจ็พใ—ใŸใ‚‰ใ‚ˆใ„ใ‹ใ‚’ใ€ๆ…Ž้‡ใซๆคœ่จŽใ—ใฆใใ ใ•ใ„ใ€‚", "feature_columns = []\n\n# ๆ•ฐๅ€คใ‚ณใƒฉใƒ \nfor header in ['age', 'trestbps', 'chol', 'thalach', 'oldpeak', 'slope', 'ca']:\n feature_columns.append(feature_column.numeric_column(header))\n\n# ใƒใ‚ฑใƒƒใƒˆๅŒ–ใ‚ณใƒฉใƒ \nage_buckets = feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])\nfeature_columns.append(age_buckets)\n\n# ใ‚คใƒณใ‚ธใ‚ฑใƒผใ‚ฟใƒผ๏ผˆใ‚ซใƒ†ใ‚ดใƒชใƒผๅž‹๏ผ‰ใ‚ณใƒฉใƒ \nthal = feature_column.categorical_column_with_vocabulary_list(\n 'thal', ['fixed', 'normal', 'reversible'])\nthal_one_hot = feature_column.indicator_column(thal)\nfeature_columns.append(thal_one_hot)\n\n# ๅŸ‹ใ‚่พผใฟๅž‹ใ‚ณใƒฉใƒ \nthal_embedding = feature_column.embedding_column(thal, dimension=8)\nfeature_columns.append(thal_embedding)\n\n# ใ‚ฏใƒญใ‚นใƒ•ใ‚ฃใƒผใƒใƒฃใƒผใ‚ณใƒฉใƒ \ncrossed_feature = feature_column.crossed_column([age_buckets, thal], hash_bucket_size=1000)\ncrossed_feature = feature_column.indicator_column(crossed_feature)\nfeature_columns.append(crossed_feature)\n\n# bucketized cols\nage = feature_column.numeric_column('Age')\nage_buckets = feature_column.bucketized_column(age, boundaries=[1, 2, 3, 4, 5])\nfeature_columns.append(age_buckets)\n\n# indicator_columns\nindicator_column_names = ['Type', 'Color1', 'Color2', 'Gender', 'MaturitySize',\n 'FurLength', 'Vaccinated', 'Sterilized', 'Health']\nfor col_name in indicator_column_names:\n categorical_column = feature_column.categorical_column_with_vocabulary_list(\n col_name, dataframe[col_name].unique())\n indicator_column = feature_column.indicator_column(categorical_column)\n feature_columns.append(indicator_column)\n\n# embedding columns\nbreed1 = feature_column.categorical_column_with_vocabulary_list(\n 'Breed1', dataframe.Breed1.unique())\nbreed1_embedding = 
feature_column.embedding_column(breed1, dimension=8)\nfeature_columns.append(breed1_embedding)\n\n# crossed columns\nage_type_feature = feature_column.crossed_column([age_buckets, animal_type], hash_bucket_size=100)\nfeature_columns.append(feature_column.indicator_column(age_type_feature))", "็‰นๅพด้‡ๅฑคใฎๆง‹็ฏ‰\n็‰นๅพด้‡ๅˆ—ใ‚’ๅฎš็พฉใ—ใŸใฎใงใ€ๆฌกใซ DenseFeatures ใƒฌใ‚คใƒคใƒผใ‚’ไฝฟใฃใฆ Keras ใƒขใƒ‡ใƒซใซๅ…ฅๅŠ›ใ—ใพใ™ใ€‚", "feature_layer = tf.keras.layers.DenseFeatures(feature_columns)", "ใ“ใ‚Œใพใงใฏใ€feature columnsใฎๅƒใใ‚’่ฆ‹ใ‚‹ใŸใ‚ใ€ๅฐใ•ใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใ‚’ไฝฟใฃใฆใใพใ—ใŸใ€‚ใ“ใ“ใงใฏใ‚‚ใ†ๅฐ‘ใ—ๅคงใใชใƒใƒƒใƒใ‚ตใ‚คใ‚บใฎๆ–ฐใ—ใ„ๅ…ฅๅŠ›ใƒ‘ใ‚คใƒ—ใƒฉใ‚คใƒณใ‚’ไฝœใ‚Šใพใ™ใ€‚", "batch_size = 32\ntrain_ds = df_to_dataset(train, batch_size=batch_size)\nval_ds = df_to_dataset(val, shuffle=False, batch_size=batch_size)\ntest_ds = df_to_dataset(test, shuffle=False, batch_size=batch_size)", "ใƒขใƒ‡ใƒซใฎๆง‹็ฏ‰ใ€ใ‚ณใƒณใƒ‘ใ‚คใƒซใจ่จ“็ทด", "model = tf.keras.Sequential([\n feature_layer,\n layers.Dense(128, activation='relu'),\n layers.Dense(128, activation='relu'),\n layers.Dense(1, activation='sigmoid')\n])\n\nmodel.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\nmodel.fit(train_ds, \n validation_data=val_ds, \n epochs=5)\n\nloss, accuracy = model.evaluate(test_ds)\nprint(\"Accuracy\", accuracy)", "้‡่ฆใƒใ‚คใƒณใƒˆ: ้€šๅธธใ€ใƒ‡ใƒผใ‚ฟใƒ™ใƒผใ‚นใฎ่ฆๆจกใŒๅคงใใ่ค‡้›‘ใงใ‚ใ‚‹ใปใฉใ€ใƒ‡ใ‚ฃใƒผใƒ—ใƒฉใƒผใƒ‹ใƒณใ‚ฐใฎ็ตๆžœใŒใ‚ˆใใชใ‚Šใพใ™ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใฎใ‚ˆใ†ใซใ€ๅฐใ•ใชใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹ๅ ดๅˆใฏใ€ๆฑบๅฎšๆœจใพใŸใฏใƒฉใƒณใƒ€ใƒ ใƒ•ใ‚ฉใƒฌใ‚นใƒˆใ‚’ๅผทๅŠ›ใชใƒ™ใƒผใ‚นใƒฉใ‚คใƒณใจใ—ใฆไฝฟ็”จใ™ใ‚‹ใ“ใจใ‚’ใŠๅ‹งใ‚ใ—ใพใ™ใ€‚ใ“ใฎใƒใƒฅใƒผใƒˆใƒชใ‚ขใƒซใงใฏใ€ๆง‹้€ 
ๅŒ–ใƒ‡ใƒผใ‚ฟใจใฎ้€ฃๆบใฎไป•็ต„ใฟใ‚’ๅฎŸๆผ”ใ™ใ‚‹ใ“ใจใŒ็›ฎ็š„ใงใ‚ใ‚Šใ€ใ‚ณใƒผใƒ‰ใฏๅฐ†ๆฅ็š„ใซ็‹ฌ่‡ชใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ™ใ‚‹้š›ใฎๅ‡บ็™บ็‚นใจใ—ใฆไฝฟ็”จใ™ใ‚‹ใ“ใจใŒใงใใพใ™ใ€‚\nๆฌกใฎใ‚นใƒ†ใƒƒใƒ—\nๆง‹้€ ๅŒ–ใƒ‡ใƒผใ‚ฟใฎๅˆ†้กžใ‚’ใ•ใ‚‰ใซๅญฆ็ฟ’ใ™ใ‚‹ใซใฏใ€ใ”่‡ชๅˆ†ใงๅˆฅใฎใƒ‡ใƒผใ‚ฟใ‚ปใƒƒใƒˆใ‚’ไฝฟ็”จใ—ใ€ไธŠ่จ˜ใฎใ‚ˆใ†ใชใ‚ณใƒผใƒ‰ใ‚’ไฝฟ็”จใ—ใ€ใƒขใƒ‡ใƒซใฎใƒˆใƒฌใƒผใƒ‹ใƒณใ‚ฐใจๅˆ†้กžใ‚’่ฉฆใ—ใฆใฟใฆใใ ใ•ใ„ใ€‚ๆญฃ่งฃๅบฆใ‚’ๆ”นๅ–„ใ™ใ‚‹ใซใฏใ€ใƒขใƒ‡ใƒซใซๅซใ‚ใ‚‹็‰นๅพด้‡ใจใใฎ่กจ็พๆ–นๆณ•ใ‚’ๅŸๅ‘ณใ—ใฆใใ ใ•ใ„ใ€‚" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
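The bucketized-column behavior shown in the notebook above (one-hot encoding by age range, boundaries [1, 3, 5]) can be sketched in plain Python without TensorFlow. The `bucketize` helper below is a hypothetical illustration, not part of the tutorial's API.

```python
def bucketize(value, boundaries):
    """Return a one-hot list with a 1 in the bucket that `value` falls into.

    With n boundaries there are n + 1 buckets: (-inf, b0), [b0, b1), ...,
    [bn-1, +inf), mirroring how bucketized_column assigns ranges.
    """
    index = sum(1 for b in boundaries if value >= b)
    one_hot = [0] * (len(boundaries) + 1)
    one_hot[index] = 1
    return one_hot

boundaries = [1, 3, 5]
print(bucketize(0, boundaries))  # -> [1, 0, 0, 0]
print(bucketize(2, boundaries))  # -> [0, 1, 0, 0]
print(bucketize(7, boundaries))  # -> [0, 0, 0, 1]
```

A value equal to a boundary lands in the upper bucket, matching half-open ranges.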
sthuggins/phys202-2015-work
assignments/assignment06/InteractEx05.ipynb
mit
[ "Interact Exercise 5\nImports\nPut the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.", "from IPython.display import display, SVG\nimport numpy as np\n%matplotlib inline\nfrom matplotlib import pyplot as plt\nfrom IPython.html.widgets import interact, interactive, fixed", "Interact with SVG display\nSVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:", "s = \"\"\"<svg width=\"100\" height=\"100\">\n <circle cx=\"50\" cy=\"50\" r=\"20\" fill=\"aquamarine\" />\n</svg>\"\"\" \n\nSVG(s) ", "Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.", "from IPython.display import HTML\n\ndef draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):\n \"\"\"Draw an SVG circle.\n \n Parameters\n ----------\n width : int\n The width of the svg drawing area in px.\n height : int\n The height of the svg drawing area in px.\n cx : int\n The x position of the center of the circle in px.\n cy : int\n The y position of the center of the circle in px.\n r : int\n The radius of the circle in px.\n fill : str\n The fill color of the circle.\"\"\"\n \n x = \"\"\"<svg width=\"%d\" height=\"%d\">\n <circle cx=\"%d\" cy=\"%d\" r=\"%d\" fill=\"%s\" />\n </svg>\n \"\"\" % (width, height, cx, cy, r, fill)\n display(SVG(x))\n\ndraw_circle(cx=10, cy=10, r=10, fill='blue')\n\nassert True # leave this to grade the draw_circle function", "Use interactive to build a user interface for exploring the draw_circle function:\n\nwidth: a fixed value of 300px\nheight: a fixed value of 300px\ncx/cy: a slider in the range [0,300]\nr: a slider in the range [0,50]\nfill: a text area in which you can type a color's name\n\nSave the return value of interactive to a variable named w.", "w = interactive(draw_circle, width=fixed(300),height=fixed(300),cx=(0,300), cy=(0,300), r=(0,50), fill=\"red\")\n\n\nc = w.children\nassert c[0].min==0 and c[0].max==300\nassert c[1].min==0 and c[1].max==300\nassert c[2].min==0 and c[2].max==50\nassert c[3].value=='red'", "Use the display function to show the widgets created by interactive:", "display(w)\n\nassert True # leave this to grade the display of the widget", "Play with the sliders to change the circle's parameters interactively." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
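The SVG string the notebook above builds with `%`-formatting can be illustrated stand-alone; `svg_circle` here is a hypothetical variant that returns the markup as a string instead of displaying it, so it can be inspected without IPython.

```python
def svg_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):
    # Build the same SVG fragment draw_circle displays, but return it
    # as a plain string for inspection.
    return ('<svg width="{w}" height="{h}">\n'
            '  <circle cx="{cx}" cy="{cy}" r="{r}" fill="{fill}" />\n'
            '</svg>').format(w=width, h=height, cx=cx, cy=cy, r=r, fill=fill)

s = svg_circle(cx=10, cy=10, r=10, fill='blue')
print(s)
```

Passing the returned string to `IPython.display.SVG` would render it exactly like the notebook's version.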
dmittov/misc
Thompson Sampling.ipynb
apache-2.0
[ "import sympy\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.stats as stats\nimport seaborn as sns\nfrom tqdm.notebook import tqdm\n%matplotlib inline", "We have 2 banners promoting a new sport club.\nThe first banner is aggressive: it focuses on the weight equipment we have and is very attractive to crossfitters, but completely fails to convince runners. The other one focuses mainly on the cardio trainers we have and is much more attractive to runners. The neutral banner is also attractive to crossfitters, but it's not as cool as the first one.\nActually, we don't know any of this; it's just what the designers had in mind when they created the banners.\nLet's also imagine we don't know the website visitors' interests. Ideally, we would show just the one banner which is \"the best\" in general.\nLet's define the world model and use it as a black box from here on. We use bernoulli instead of binomial to make everything transparent.", "crossfitters_ratio = .48\n\naggressive = {\"crossfitters\": .68, \"runners\": .04}\nneutral = {\"crossfitters\": .28, \"runners\": .4}\n\ndef test_banner(banner, shows):\n runners_dist = stats.bernoulli(banner[\"runners\"])\n crossfitters_dist = stats.bernoulli(banner[\"crossfitters\"])\n\n crossfitters_cnt = stats.bernoulli(crossfitters_ratio).rvs(shows).sum()\n runners_cnt = shows - crossfitters_cnt\n\n crossfitters_hits = crossfitters_dist.rvs(crossfitters_cnt).sum()\n runners_hits = runners_dist.rvs(runners_cnt).sum()\n \n return crossfitters_hits + runners_hits", "To decide which banner is better, we run an experiment: we show both banners to random clients and draw conclusions from the data we get.\nImagine we ran an experiment and computed the Conversion Rate as #Conversions / #Shows. That's a point estimate = 💩\nTo make statistically significant conclusions we need to use confidence intervals or hypothesis testing methods.\nLet's simplify everything a bit and sample from the true distribution directly. In practice we usually can't afford this and use bootstrapping (the delta method, etc.), but since we have the world model we don't need the bootstrap.", "%%time\nrevenue_agressive = [test_banner(aggressive, 100) for _ in range(1000)]\nrevenue_neutral = [test_banner(neutral, 100) for _ in range(1000)]\nsns.distplot(revenue_agressive, label=\"agressive\")\nsns.distplot(revenue_neutral, label=\"neutral\")\nplt.legend()\n\n%%time\nrevenue_agressive = [test_banner(aggressive, 1000) for _ in range(1000)]\nrevenue_neutral = [test_banner(neutral, 1000) for _ in range(1000)]\nsns.distplot(revenue_agressive)\nsns.distplot(revenue_neutral)\n\n%%time\nrevenue_agressive = [test_banner(aggressive, 100000) for _ in range(1000)]\nrevenue_neutral = [test_banner(neutral, 100000) for _ in range(1000)]\n\nsns.distplot(revenue_agressive)\nsns.distplot(revenue_neutral)\n\n%%time\nrevenue_agressive = [test_banner(aggressive, 1000000) for _ in range(1000)]\nrevenue_neutral = [test_banner(neutral, 1000000) for _ in range(1000)]\n\nsns.distplot(revenue_agressive)\nsns.distplot(revenue_neutral)", "Regret is the money we lost on an experiment. If we had a magic oracle that told us which banner is the best without any experiments, we would save ~ 347k - 343k = 4k €.\nYou may also measure regret in counts of \"bad\" banner shows. The loss on each show is the same, so the regret is 1M shows.\nSince we know how the real world behaves, let's check which banner is actually better.", ".48 * .68 + .52 * .04, .48 * .28 + .52 * .4", "Usually you have too many factors and it's hard to say if two banners really have different conversion rates. The smaller the difference, the larger the audience you need to detect it.\nSo, you want to have some early stopping method + a tool to compare more than 2 banners at the same time.\nThere are some tools in classic statistics, but they look overcomplicated compared to the following approach. 
The other benefit of Multiarmed Bandits: you can use Contextual Multiarmed Bandits when you have additional information about users (gender, city, etc).\nMultiarmed bandit [Thompson sampling]\n<img src=\"https://www.abtasty.com/content/uploads/img_5559fcc451925.png\" width=\"200px\" align=\"left\"/>\n<img src=\"https://vignette.wikia.nocookie.net/matrix/images/d/da/Spoon_Boy_Neo_Bends.jpg/revision/latest/scale-to-width-down/266?cb=20130119092916\" width=\"200px\" align=\"right\"/>\nCTR doesn't exist, but we have a CTR distribution.\n\nIn fact there is a distribution of our knowledge about CTR.\n\nLet's assume CTR is a Beta distribution. <s>Because of conjugate prior</s> Because I like Beta distribution.\nWe don't have any data-supported prior knowledge about the true CTR. Therefore it's better to use a non-informative prior than some particular value.\nDon't use prejudices/preconceptions as a prior. Use either a non-informative prior or something supported by data. Otherwise you will be losing money while the model corrects your prior beliefs with the data.\nHere is what the Beta distribution looks like.", "xs = np.linspace(0, 1, 100)\nplt.plot(xs, stats.beta(1, 1).pdf(xs), label=\"alpha = 1 beta = 1\")\nplt.plot(xs, stats.beta(.1, .1).pdf(xs), label=\"alpha = .1 beta = .1\")\nplt.plot(xs, stats.beta(7, 3).pdf(xs), label=\"alpha = 7 beta = 3\")\nplt.legend()", "$Pr(A|B) = \frac{Pr(B|A)Pr(A)}{Pr(B)}$\n$X$ - events (click/no-click)\nCTR is a distribution, obviously it's defined on [0,1] (the domain of the Beta distribution).\n$Pr(CTR|X) = \frac{Pr(X|CTR)Pr(CTR)}{Pr(X)} = \frac{Binomial(CTR) * Beta(\alpha, \beta)}{\int{Binomial(CTR) * Beta(\alpha, \beta)}} = \frac{Binomial(CTR) * Beta(\alpha, \beta)}{Const}$\nBeta: $\frac{p^{\alpha - 1}(1 - p)^{\beta - 1}}{\mathrm {B}(\alpha, \beta)}$ Binomial: $\binom{N}{k} p^k(1 - p)^{N - k}$, where p is the success probability, which is distributed as Beta\n$\nPr(CTR|X) = (p^{(\alpha + k) - 1} (1 - p)^{(\beta + N - k) - 1}) / Const\n$\nIt has the shape of a Beta distribution: $p^{\alpha - 1} (1 - p)^{\beta - 1}$\n$\alpha_{new} = \alpha + k$\n$\beta_{new} = \beta + N - k$\nBut we don't know the normalization constant. This curve may lie above or below the Beta distribution curve with the same parameters. But we know the posterior is some distribution, and the area under the curve should be = 1. So the only possible option is that the Pr(CTR|X) curve is exactly the Beta distribution with $\alpha_{new}$ and $\beta_{new}$ parameters.\n$\alpha$ & $\beta$ correspond to the number of successes / failures\nThe more data we've seen, the more confident we are in the estimation.", "xs = np.linspace(0, 1, 100)\nplt.plot(xs, stats.beta(7, 3).pdf(xs), label=\"alpha = 7 beta = 3\")\nplt.plot(xs, stats.beta(70, 30).pdf(xs), label=\"alpha = 70 beta = 30\")\nplt.plot(xs, stats.beta(700, 300).pdf(xs), label=\"alpha = 700 beta = 300\")\nplt.axvline(.7, 0, 1, color=\"red\")\nplt.legend()", "What banner to show\nLet's create a lottery. 
On each show, <s>sample</s> draw a dice from the Beta distribution to get a CTR point estimate.", "crossfitters_ratio = .48\n\naggressive = {\"crossfitters\": .68, \"runners\": .04}\nneutral = {\"crossfitters\": .28, \"runners\": .4}\n\nagressive_beta = {\"alpha\": 1, \"beta\": 1}\nneutral_beta = {\"alpha\": 1, \"beta\": 1}\n\nregret = 0\nrevenue = 0\nfor _ in tqdm(range(2000000)):\n aggresive_score = stats.beta(agressive_beta[\"alpha\"], agressive_beta[\"beta\"]).rvs()\n neutral_score = stats.beta(neutral_beta[\"alpha\"], neutral_beta[\"beta\"]).rvs()\n user_type = \"crossfitters\" if stats.bernoulli(crossfitters_ratio).rvs() > 0 else \"runners\"\n if aggresive_score > neutral_score:\n click = stats.bernoulli(aggressive[user_type]).rvs()\n if click:\n agressive_beta[\"alpha\"] += 1\n else:\n agressive_beta[\"beta\"] += 1\n else:\n regret += 1\n click = stats.bernoulli(neutral[user_type]).rvs()\n if click:\n neutral_beta[\"alpha\"] += 1\n else:\n neutral_beta[\"beta\"] += 1\n revenue += click\n\nregret, revenue\n\nagressive_beta, neutral_beta\n\nagressive_beta[\"alpha\"] / (agressive_beta[\"alpha\"] + agressive_beta[\"beta\"]), neutral_beta[\"alpha\"] / (neutral_beta[\"alpha\"] + neutral_beta[\"beta\"])\n\ncrossfitters_ratio = .48\n\naggressive = {\"crossfitters\": .68, \"runners\": .04}\nneutral = {\"crossfitters\": .28, \"runners\": .4}\n\nagressive_beta = {\"alpha\": 1, \"beta\": 1}\nneutral_beta = {\"alpha\": 1, \"beta\": 1}\n\nregret = 0\nrevenue = 0\nfor _ in tqdm(range(200000)):\n aggresive_score = stats.beta(agressive_beta[\"alpha\"], agressive_beta[\"beta\"]).rvs()\n neutral_score = stats.beta(neutral_beta[\"alpha\"], neutral_beta[\"beta\"]).rvs()\n user_type = \"crossfitters\" if stats.bernoulli(crossfitters_ratio).rvs() > 0 else \"runners\"\n if aggresive_score > neutral_score:\n click = stats.bernoulli(aggressive[user_type]).rvs()\n if click:\n agressive_beta[\"alpha\"] += 1\n else:\n agressive_beta[\"beta\"] += 1\n else:\n regret += 1\n click = 
stats.bernoulli(neutral[user_type]).rvs()\n if click:\n neutral_beta[\"alpha\"] += 1\n else:\n neutral_beta[\"beta\"] += 1\n revenue += click\n\nxs = np.linspace(0.33, 0.36, 100)\nplt.plot(xs, stats.beta(agressive_beta[\"alpha\"], agressive_beta[\"beta\"]).pdf(xs), label=\"agressive\")\nplt.plot(xs, stats.beta(neutral_beta[\"alpha\"], neutral_beta[\"beta\"]).pdf(xs), label=\"neutral\")\nplt.legend()\n\nregret, revenue\n\nagressive_beta, neutral_beta\n\nagressive_beta[\"alpha\"] / (agressive_beta[\"alpha\"] + agressive_beta[\"beta\"]), neutral_beta[\"alpha\"] / (neutral_beta[\"alpha\"] + neutral_beta[\"beta\"])", "So, why are we using Beta as a prior\nWe assume there is a true CTR and we have a distribution of our belief in CTR. Therefore this distribution should converge to a delta function and have a domain of [0, 1]. Beta satisfies these properties and is computationally efficient." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
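The conjugate update loop from the notebook above (sample from each arm's Beta posterior, show the winner, then bump alpha on a click and beta otherwise) can be reduced to a few lines of standard-library Python. The click rates below (0.6 vs 0.3) are illustrative stand-ins, not the banner mixture from the text.

```python
import random

random.seed(0)

true_ctr = [0.6, 0.3]   # hidden per-banner click probabilities (made up)
alpha = [1, 1]          # Beta(1, 1) non-informative priors
beta = [1, 1]
pulls = [0, 0]

for _ in range(2000):
    # Thompson sampling: draw one CTR estimate per banner from its posterior
    scores = [random.betavariate(alpha[i], beta[i]) for i in range(2)]
    arm = scores.index(max(scores))
    pulls[arm] += 1
    click = random.random() < true_ctr[arm]
    # Conjugate update: alpha counts clicks, beta counts non-clicks
    if click:
        alpha[arm] += 1
    else:
        beta[arm] += 1

posterior_mean = [alpha[i] / (alpha[i] + beta[i]) for i in range(2)]
print(pulls, posterior_mean)
```

After a few hundred shows the better arm dominates the pulls, which is exactly the "early stopping" behavior the text asks for.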
jdhp-docs/python-notebooks
python_scipy_optimize_brute_force_en.ipynb
mit
[ "\"Brute force\" optimization with Scipy\nOfficial documentation:\n- https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html\n- https://docs.scipy.org/doc/scipy/reference/optimize.html\nImport required modules", "%matplotlib inline\n\nimport matplotlib\nmatplotlib.rcParams['figure.figsize'] = (8, 8)\n\nimport matplotlib.pyplot as plt\n\n# Setup PyAI\nimport sys\nsys.path.insert(0, '/Users/jdecock/git/pub/jdhp/pyai')\n\nimport numpy as np\nfrom scipy import optimize\n\n# Plot functions\nfrom pyai.optimize.utils import plot_contour_2d_solution_space\nfrom pyai.optimize.utils import plot_2d_solution_space", "Define the objective function", "# Set the objective function\nfrom pyai.optimize.functions import sphere1d\nfrom pyai.optimize.functions import sphere2d", "Minimize using the \"Brute force\" algorithm\nUses the \"brute force\" method, i.e. computes the function's value at each point of a multidimensional grid of points, to find the global minimum of the function.\nSee https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brute.html#scipy.optimize.brute\nFirst example: the 1D sphere function", "%%time\n\nsearch_ranges = (slice(-3., 3.5, 0.5),)\n\nres = optimize.brute(sphere1d,\n search_ranges,\n #args=params,\n full_output=True,\n finish=None) # optimize.fmin)\n\nprint(\"x* =\", res[0])\nprint(\"f(x*) =\", res[1])\n\nprint(res[2].shape)\nprint(\"tested x:\", res[2])\nprint(res[3].shape)\nprint(\"tested f(x):\", res[3])\n\nx_star = res[0]\ny_star = res[1]\n\nx = res[2]\ny = res[3]\n\nfig, ax = plt.subplots()\n\nax.set_title('Objective function')\n\nax.plot(x, y, 'k-', alpha=0.25, label=\"f\")\nax.plot(x, y, 'g.', label=\"tested points\")\nax.plot(x_star, y_star, 'ro', label=\"$x^*$\")\n\nax.legend(fontsize=12);", "Second example: the 2D sphere function", "%%time\n\nsearch_ranges = (slice(-2., 2.5, 0.5), slice(-2., 2.5, 0.5))\n\nres = optimize.brute(sphere2d,\n search_ranges,\n #args=params,\n full_output=True,\n finish=None) # optimize.fmin)\n\nprint(\"x* =\", 
res[0])\nprint(\"f(x*) =\", res[1])\n\nprint(res[2].shape)\nprint(\"tested x:\", res[2])\nprint()\nprint(res[3].shape)\nprint(\"tested f(x):\", res[3])\n\n# Setup data #########################\n\n# Using the following 3 lines, pcolormesh won't display the last row and the last collumn...\n\n#xx = res[2][0]\n#yy = res[2][1]\n#zz = res[3]\n\n# Workaround to avoid pcolormesh ignoring the last row and last collumn...\n\nx = res[2][0][:,0]\ny = res[2][1][0,:]\n\nx = np.append(x, x[-1] + x[-1] - x[-2])\ny = np.append(y, y[-1] + y[-1] - y[-2])\n\n# Make the meshgrid\n\nxx, yy = np.meshgrid(x, y) \n\n# \"Ideally the dimensions of X and Y should be one greater than those of C;\n# if the dimensions are the same, then the last row and column of C will be ignored.\"\n# https://stackoverflow.com/questions/44526052/can-someone-explain-this-matplotlib-pcolormesh-quirk\nzz = res[3]\n\n# Plot the image #####################\n\nfig, ax = plt.subplots()\n\nax.set_title('Objective function')\n\n# Shift to center pixels to data (workaround...)\n# (https://stackoverflow.com/questions/43128106/pcolormesh-ticks-center-for-each-data-point-tile)\n\nxx -= (x[-1] - x[-2])/2.\nyy -= (y[-1] - y[-2])/2.\n\n#im = ax.imshow(z, interpolation='bilinear', origin='lower')\nim = ax.pcolormesh(xx, yy, zz, cmap='gnuplot2_r')\nplt.colorbar(im) # draw the colorbar\n\n# Plot contours ######################\n\nmax_value = np.nanmax(zz)\nlevels = np.array([0.1*max_value, 0.3*max_value, 0.6*max_value])\n\n# Shift back pixels for contours (workaround...)\n# (https://stackoverflow.com/questions/43128106/pcolormesh-ticks-center-for-each-data-point-tile)\n\nxx += (x[-1] - x[-2])/2.\nyy += (y[-1] - y[-2])/2.\n\ncs = plt.contour(xx[:-1,:-1], yy[:-1,:-1], zz, levels,\n linewidths=(2, 2, 3), linestyles=('dotted', 'dashed', 'solid'),\n alpha=0.5, colors='blue')\nax.clabel(cs, inline=False, fontsize=12)\n\n# Plot x* ############################\n\nax.scatter(*res[0], c='red', 
label=\"$x^*$\")\n\nax.legend(fontsize=12);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
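What `optimize.brute` does over the slice ranges in the notebook above can be mimicked with a plain double loop. This stand-alone sketch re-implements the grid search for a 2D sphere function rather than calling scipy or the pyai helpers.

```python
def sphere2d(x, y):
    # Same kind of objective as in the notebook: global minimum 0 at (0, 0)
    return x * x + y * y

def brute_force_2d(f, start, stop, step):
    # Evaluate f on every grid point and keep the best one,
    # mirroring scipy.optimize.brute with finish=None.
    n = int(round((stop - start) / step))
    grid = [start + step * i for i in range(n)]
    best = None
    for x in grid:
        for y in grid:
            value = f(x, y)
            if best is None or value < best[0]:
                best = (value, x, y)
    return best

best_value, x_star, y_star = brute_force_2d(sphere2d, -2.0, 2.5, 0.5)
print((x_star, y_star), best_value)  # -> (0.0, 0.0) 0.0
```

The grid matches `slice(-2., 2.5, 0.5)`: nine points per axis from -2.0 to 2.0, so 81 evaluations in total.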
khalido/nd101
Handwritten Digit Recognition with TFLearn.ipynb
gpl-3.0
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. \nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# mnist fails to load, so got this patch from the nd101 slack\ndef patched_read32(bytestream):\n dt = np.dtype(np.uint32).newbyteorder('>')\n return np.frombuffer(bytestream.read(4), dtype=dt)[0]\n\nmnist._read32 = patched_read32\n\n# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by its index in the MNIST set\ndef show_digit(index):\n label = trainY[index].argmax(axis=0)\n # Reshape 784 array into 28x28 image\n image = trainX[index].reshape([28,28])\n plt.title('Training data, index: %d, Label: %d' % (index, label))\n plt.imshow(image, cmap='gray_r')\n plt.show()\n \n# Display the first (index 0) training image\nshow_digit(0)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. 
\nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.\nHint: The final output layer must have 10 output nodes (one for each digit 0-9). 
It's also recommended to use a softmax activation layer as your final output layer.", "# Define the neural network\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n # Include the input layer, hidden layer(s), and set how you want to train the model\n \n #input layer\n net = tflearn.input_data([None, 784])\n \n #hidden layer 1\n net = tflearn.fully_connected(net, 196, activation='ReLU')\n \n # hidden layer 2\n net = tflearn.fully_connected(net, 49, activation='ReLU')\n \n # output layer\n net = tflearn.fully_connected(net, 10, activation='softmax')\n \n # how does it learn?\n net = tflearn.regression(net, optimizer='sgd', learning_rate=0.05, loss='categorical_crossentropy')\n \n # This model assumes that your network is named \"net\" \n model = tflearn.DNN(net)\n return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. \nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)", "Testing\nAfter you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. 
Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Faris137/MachineLearningArabic
Pima Project/Pima Project 2.0.ipynb
mit
[ "ู…ุญุงูˆู„ุฉ ู„ุฅุณุชูƒุดุงู ุงูุถู„ ุงู„ุทุฑู‚ ู„ุชุญุณูŠู† ุงุฏุงุก ู†ู…ูˆุฐุฌ ุจูŠู…ุง", "import numpy as np\nimport pandas as pd\nimport seaborn as sb\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn import preprocessing\n\n%matplotlib inline\n\ndf = pd.read_csv('diabetes.csv')\ndf.head(20) #ู„ุงุณุชุนุฑุงุถ ุงู„20 ุงู„ุณุฌู„ุงุช ุงู„ุงูˆู„ู‰ ู…ู† ุฅุทุงุฑ ุงู„ุจูŠุงู†ุงุช", "ู‡ุฐู‡ ุงู„ุฏุงู„ุฉ ุชุนุทูŠู†ุง ุชูˆุตูŠู ูƒุงู…ู„ ู„ู„ุจูŠุงู†ุงุช ูˆ ุชูƒุดู ู„ู†ุง ููŠ ู…ุง ุฅุฐุง ูƒุงู†ุช ู‡ู†ุงูƒ ู‚ูŠู… ู…ูู‚ูˆุฏุฉ", "df.info()", "ุณูŠุจูˆุฑู† ู…ูƒุชุจุฉ ุฌู…ูŠู„ุฉ ู„ู„ุฑุณูˆู…ูŠุงุช ุณู‡ู„ุฉ ููŠ ุงู„ูƒุชุงุจุฉ ู„ูƒู† ู…ููŠุฏุฉ ุฌุฏุงู‹ ููŠ ุงู„ู…ุนู„ูˆู…ุงุช ุงู„ุชูŠ ู…ู…ูƒู† ุงู† ู†ู‚ุฑุงุกู‡ุง ุนุจุฑ ุงู„ู‡ูŠุณุชูˆู‚ุฑุงู…\nูุงุฆุฏู‡ุง ู…ู…ูƒู† ุงู† ุชูƒูˆู† ููŠ\n1- ุชู„ุฎูŠุต ุชูˆุฒูŠุน ุงู„ุจูŠู†ุงุช ููŠ ุฑุณูˆู…ูŠุงุช\n2- ูู‡ู… ุงูˆ ุงู„ุฅุทู„ุงุน ุนู„ู‰ ุงู„ู‚ูŠู… ุงู„ูุฑูŠุฏุฉ\n3- ุชุญู…ู„ ุงู„ุฑุณูˆู…ูŠุงุช ู…ุนู†ู‰ ุงุนู…ู‚ ู…ู† ุงู„ูƒู„ู…ุงุช", "sb.countplot(x='Outcome',data=df, palette='hls')\n\nsb.countplot(x='Pregnancies',data=df, palette='hls')\n\nsb.countplot(x='Glucose',data=df, palette='hls')\n\nsb.heatmap(df.corr())\n\nsb.pairplot(df, hue=\"Outcome\")\n\nfrom scipy.stats import kendalltau\nsb.jointplot(df['Pregnancies'], df['Glucose'], kind=\"hex\", stat_func=kendalltau, color=\"#4CB391\")\n\nimport matplotlib.pyplot as plt\ng = sb.FacetGrid(df, row=\"Pregnancies\", col=\"Outcome\", margin_titles=True)\nbins = np.linspace(0, 50, 13)\ng.map(plt.hist, \"BMI\", color=\"steelblue\", bins=bins, lw=0)\n\nsb.pairplot(df, vars=[\"Pregnancies\", \"BMI\"])", "ุชุฌุฑุจุฉ ุงุณุชุฎุฏุงู… ุชู‚ูŠูŠุณ ูˆ ุชุฏุฑูŠุฌ ุงู„ุฎูˆุงุต ู„ุชุญุณูŠู† ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ", "columns = ['Pregnancies', 'Glucose', 'BloodPressure', 
'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ุชุฏุฑูŠุจ ุจุฏู‚ุฉ ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ูุญุต ุจุฏู‚ุฉ ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Testing classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of Testing \\n\", confusion_matrix(y_test, ypredict)", "ุชุฌุฑุจุฉ ุชุญุณูŠู† ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ุจุงุณุชุฎุฏุงู… ุทุฑูŠู‚ุฉ\nstandard scaler", "#scaling\nscaler = StandardScaler()\n\n# Fit only on training data\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\n# apply same transformation to test data\nX_test = scaler.transform(X_test)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ุชุฏุฑูŠุจ ุจุฏู‚ุฉ ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ูุญุต ุจุฏู‚ุฉ ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Testing classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of Testing \\n\", 
confusion_matrix(y_test, ypredict)", "ุชุฌุฑุจุฉ ุชุญุณูŠู† ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ุจุทุฑูŠู‚ุฉ\nmin-max scaler", "columns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nscaler = preprocessing.MinMaxScaler()\n\nscaler.fit(X_train)\nX_train = scaler.transform(X_train)\n# apply same transformation to test data\nX_test = scaler.transform(X_test)\n\nclf = RandomForestClassifier(n_estimators=1)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ุชุฏุฑูŠุจ ุจุฏู‚ุฉ ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ูุญุต ุจุฏู‚ุฉ ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Testing classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of Testing \\n\", confusion_matrix(y_test, ypredict)\n\ncolumns = ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age']\nlabels = df['Outcome'].values\nfeatures = df[list(columns)].values\n\nX_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.30)\n\nclf = RandomForestClassifier(n_estimators=5)\nclf = clf.fit(X_train, y_train)\n\naccuracy = clf.score(X_train, y_train)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ุชุฏุฑูŠุจ ุจุฏู‚ุฉ ', accuracy*100\n\naccuracy = clf.score(X_test, y_test)\nprint ' ุงุฏุงุก ุงู„ู†ู…ูˆุฐุฌ ููŠ ุนูŠู†ุฉ ุงู„ูุญุต ุจุฏู‚ุฉ ', accuracy*100\n\nypredict = clf.predict(X_train)\nprint '\\n Training 
classification report\\n', classification_report(y_train, ypredict)\nprint \"\\n Confusion matrix of training \\n\", confusion_matrix(y_train, ypredict)\n\nypredict = clf.predict(X_test)\nprint '\\n Testing classification report\\n', classification_report(y_test, ypredict)\nprint \"\\n Confusion matrix of Testing \\n\", confusion_matrix(y_test, ypredict)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ko/tutorials/load_data/unicode.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 Franรงois Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.", "์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/load_data/unicode\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /> TensorFlow.org์—์„œ ๋ณด๊ธฐ</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />๊ตฌ๊ธ€ ์ฝ”๋žฉ(Colab)์—์„œ ์‹คํ–‰ํ•˜๊ธฐ</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/load_data/unicode.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />๊นƒํ—ˆ๋ธŒ(GitHub) ์†Œ์Šค ๋ณด๊ธฐ</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/load_data/unicode.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nNote: ์ด ๋ฌธ์„œ๋Š” ํ…์„œํ”Œ๋กœ ์ปค๋ฎค๋‹ˆํ‹ฐ์—์„œ ๋ฒˆ์—ญํ–ˆ์Šต๋‹ˆ๋‹ค. ์ปค๋ฎค๋‹ˆํ‹ฐ ๋ฒˆ์—ญ ํ™œ๋™์˜ ํŠน์„ฑ์ƒ ์ •ํ™•ํ•œ ๋ฒˆ์—ญ๊ณผ ์ตœ์‹  ๋‚ด์šฉ์„ ๋ฐ˜์˜ํ•˜๊ธฐ ์œ„ํ•ด ๋…ธ๋ ฅํ•จ์—๋„\n๋ถˆ๊ตฌํ•˜๊ณ  ๊ณต์‹ ์˜๋ฌธ ๋ฌธ์„œ์˜ ๋‚ด์šฉ๊ณผ ์ผ์น˜ํ•˜์ง€ ์•Š์„ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.\n์ด ๋ฒˆ์—ญ์— ๊ฐœ์„ ํ•  ๋ถ€๋ถ„์ด ์žˆ๋‹ค๋ฉด\ntensorflow/docs-l10n ๊นƒํ—™ ์ €์žฅ์†Œ๋กœ ํ’€ ๋ฆฌํ€˜์ŠคํŠธ๋ฅผ ๋ณด๋‚ด์ฃผ์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค.\n๋ฌธ์„œ ๋ฒˆ์—ญ์ด๋‚˜ ๋ฆฌ๋ทฐ์— ์ฐธ์—ฌํ•˜๋ ค๋ฉด\ndocs-ko@tensorflow.org๋กœ\n๋ฉ”์ผ์„ ๋ณด๋‚ด์ฃผ์‹œ๊ธฐ ๋ฐ”๋ž๋‹ˆ๋‹ค.\n์†Œ๊ฐœ\n์ž์—ฐ์–ด ์ฒ˜๋ฆฌ ๋ชจ๋ธ์€ ์ข…์ข… ๋‹ค๋ฅธ ๋ฌธ์ž ์ง‘ํ•ฉ์„ ๊ฐ–๋Š” ๋‹ค์–‘ํ•œ ์–ธ์–ด๋ฅผ ๋‹ค๋ฃจ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค. 
์œ ๋‹ˆ์ฝ”๋“œ(unicode)๋Š” ๊ฑฐ์˜ ๋ชจ๋“  ์–ธ์–ด์˜ ๋ฌธ์ž๋ฅผ ํ‘œํ˜„ํ•  ์ˆ˜ ์žˆ๋Š” ํ‘œ์ค€ ์ธ์ฝ”๋”ฉ ์‹œ์Šคํ…œ์ž…๋‹ˆ๋‹ค. ๊ฐ ๋ฌธ์ž๋Š” 0๋ถ€ํ„ฐ 0x10FFFF ์‚ฌ์ด์˜ ๊ณ ์œ ํ•œ ์ •์ˆ˜ ์ฝ”๋“œ ํฌ์ธํŠธ(code point)๋ฅผ ์‚ฌ์šฉํ•ด์„œ ์ธ์ฝ”๋”ฉ๋ฉ๋‹ˆ๋‹ค. ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์€ 0๊ฐœ ๋˜๋Š” ๊ทธ ์ด์ƒ์˜ ์ฝ”๋“œ ํฌ์ธํŠธ๋กœ ์ด๋ฃจ์–ด์ง„ ์‹œํ€€์Šค(sequence)์ž…๋‹ˆ๋‹ค.\n์ด ํŠœํ† ๋ฆฌ์–ผ์—์„œ๋Š” ํ…์„œํ”Œ๋กœ(Tensorflow)์—์„œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์„ ํ‘œํ˜„ํ•˜๊ณ , ํ‘œ์ค€ ๋ฌธ์ž์—ด ์—ฐ์‚ฐ์˜ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฒ„์ „์„ ์‚ฌ์šฉํ•ด์„œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์„ ์กฐ์ž‘ํ•˜๋Š” ๋ฐฉ๋ฒ•์— ๋Œ€ํ•ด์„œ ์†Œ๊ฐœํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ ์Šคํฌ๋ฆฝํŠธ ๊ฐ์ง€(script detection)๋ฅผ ํ™œ์šฉํ•˜์—ฌ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์„ ํ† ํฐ์œผ๋กœ ๋ถ„๋ฆฌํ•ด ๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค.", "import tensorflow as tf", "tf.string ๋ฐ์ดํ„ฐ ํƒ€์ž…\nํ…์„œํ”Œ๋กœ์˜ ๊ธฐ๋ณธ tf.string dtype์€ ๋ฐ”์ดํŠธ ๋ฌธ์ž์—ด๋กœ ์ด๋ฃจ์–ด์ง„ ํ…์„œ๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค. ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์€ ๊ธฐ๋ณธ์ ์œผ๋กœ utf-8๋กœ ์ธ์ฝ”๋”ฉ ๋ฉ๋‹ˆ๋‹ค.", "tf.constant(u\"Thanks ๐Ÿ˜Š\")", "tf.string ํ…์„œ๋Š” ๋ฐ”์ดํŠธ ๋ฌธ์ž์—ด์„ ์ตœ์†Œ ๋‹จ์œ„๋กœ ๋‹ค๋ฃจ๊ธฐ ๋•Œ๋ฌธ์— ๋‹ค์–‘ํ•œ ๊ธธ์ด์˜ ๋ฐ”์ดํŠธ ๋ฌธ์ž์—ด์„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ๋ฌธ์ž์—ด ๊ธธ์ด๋Š” ํ…์„œ ์ฐจ์›(dimensions)์— ํฌํ•จ๋˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค.", "tf.constant([u\"You're\", u\"welcome!\"]).shape", "๋…ธํŠธ: ํŒŒ์ด์ฌ์„ ์‚ฌ์šฉํ•ด ๋ฌธ์ž์—ด์„ ๋งŒ๋“ค ๋•Œ ๋ฒ„์ „ 2์™€ ๋ฒ„์ „ 3์—์„œ ์œ ๋‹ˆ์ฝ”๋“œ๋ฅผ ๋‹ค๋ฃจ๋Š” ๋ฐฉ์‹์ด ๋‹ค๋ฆ…๋‹ˆ๋‹ค. ๋ฒ„์ „ 2์—์„œ๋Š” ์œ„์™€ ๊ฐ™์ด \"u\" ์ ‘๋‘์‚ฌ๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์„ ๋‚˜ํƒ€๋ƒ…๋‹ˆ๋‹ค. 
๋ฒ„์ „ 3์—์„œ๋Š” ์œ ๋‹ˆ์ฝ”๋“œ ์ธ์ฝ”๋”ฉ๋œ ๋ฌธ์ž์—ด์ด ๊ธฐ๋ณธ๊ฐ’์ž…๋‹ˆ๋‹ค.\n์œ ๋‹ˆ์ฝ”๋“œ ํ‘œํ˜„\nํ…์„œํ”Œ๋กœ์—์„œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์„ ํ‘œํ˜„ํ•˜๊ธฐ ์œ„ํ•œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์ด ์žˆ์Šต๋‹ˆ๋‹ค:\n\nstring ์Šค์นผ๋ผ โ€” ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ์‹œํ€€์Šค๊ฐ€ ์•Œ๋ ค์ง„ ๋ฌธ์ž ์ธ์ฝ”๋”ฉ์„ ์‚ฌ์šฉํ•ด ์ธ์ฝ”๋”ฉ๋ฉ๋‹ˆ๋‹ค.\nint32 ๋ฒกํ„ฐ โ€” ์œ„์น˜๋งˆ๋‹ค ๊ฐœ๋ณ„ ์ฝ”๋“œ ํฌ์ธํŠธ๋ฅผ ํฌํ•จํ•ฉ๋‹ˆ๋‹ค.\n\n์˜ˆ๋ฅผ ๋“ค์–ด, ์•„๋ž˜์˜ ์„ธ ๊ฐ€์ง€ ๊ฐ’์ด ๋ชจ๋‘ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด \"่ฏญ่จ€ๅค„็†\"(์ค‘๊ตญ์–ด๋กœ \"์–ธ์–ด ์ฒ˜๋ฆฌ\"๋ฅผ ์˜๋ฏธํ•จ)๋ฅผ ํ‘œํ˜„ํ•ฉ๋‹ˆ๋‹ค.", "# UTF-8๋กœ ์ธ์ฝ”๋”ฉ๋œ string ์Šค์นผ๋ผ๋กœ ํ‘œํ˜„ํ•œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์ž…๋‹ˆ๋‹ค.\ntext_utf8 = tf.constant(u\"่ฏญ่จ€ๅค„็†\")\ntext_utf8\n\n# UTF-16-BE๋กœ ์ธ์ฝ”๋”ฉ๋œ string ์Šค์นผ๋ผ๋กœ ํ‘œํ˜„ํ•œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์ž…๋‹ˆ๋‹ค.\ntext_utf16be = tf.constant(u\"่ฏญ่จ€ๅค„็†\".encode(\"UTF-16-BE\"))\ntext_utf16be\n\n# ์œ ๋‹ˆ์ฝ”๋“œ ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ๋ฒกํ„ฐ๋กœ ํ‘œํ˜„ํ•œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์ž…๋‹ˆ๋‹ค.\ntext_chars = tf.constant([ord(char) for char in u\"่ฏญ่จ€ๅค„็†\"])\ntext_chars", "ํ‘œํ˜„ ๊ฐ„์˜ ๋ณ€ํ™˜\nํ…์„œํ”Œ๋กœ๋Š” ๋‹ค๋ฅธ ํ‘œํ˜„์œผ๋กœ ๋ณ€ํ™˜ํ•˜๊ธฐ ์œ„ํ•œ ์—ฐ์‚ฐ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค.\n\ntf.strings.unicode_decode: ์ธ์ฝ”๋”ฉ๋œ string ์Šค์นผ๋ผ๋ฅผ ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ๋ฒกํ„ฐ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.\ntf.strings.unicode_encode: ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ๋ฒกํ„ฐ๋ฅผ ์ธ์ฝ”๋“œ๋œ string ์Šค์นผ๋ผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.\ntf.strings.unicode_transcode: ์ธ์ฝ”๋“œ๋œ string ์Šค์นผ๋ผ๋ฅผ ๋‹ค๋ฅธ ์ธ์ฝ”๋”ฉ์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค.", "tf.strings.unicode_decode(text_utf8,\n input_encoding='UTF-8')\n\ntf.strings.unicode_encode(text_chars,\n output_encoding='UTF-8')\n\ntf.strings.unicode_transcode(text_utf8,\n input_encoding='UTF8',\n output_encoding='UTF-16-BE')", "๋ฐฐ์น˜(batch) ์ฐจ์›\n์—ฌ๋Ÿฌ ๊ฐœ์˜ ๋ฌธ์ž์—ด์„ ๋””์ฝ”๋”ฉ ํ•  ๋•Œ ๋ฌธ์ž์—ด๋งˆ๋‹ค ํฌํ•จ๋œ ๋ฌธ์ž์˜ ๊ฐœ์ˆ˜๋Š” ๋™์ผํ•˜์ง€ ์•Š์Šต๋‹ˆ๋‹ค. 
๋ฐ˜ํ™˜๋˜๋Š” ๊ฐ’์€ tf.RaggedTensor๋กœ ๊ฐ€์žฅ ์•ˆ์ชฝ ์ฐจ์›์˜ ํฌ๊ธฐ๊ฐ€ ๋ฌธ์ž์—ด์— ํฌํ•จ๋œ ๋ฌธ์ž์˜ ๊ฐœ์ˆ˜์— ๋”ฐ๋ผ ๊ฒฐ์ •๋ฉ๋‹ˆ๋‹ค.", "# UTF-8 ์ธ์ฝ”๋”ฉ๋œ ๋ฌธ์ž์—ด๋กœ ํ‘œํ˜„ํ•œ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์˜ ๋ฐฐ์น˜์ž…๋‹ˆ๋‹ค. \nbatch_utf8 = [s.encode('UTF-8') for s in\n [u'hรƒllo', u'What is the weather tomorrow', u'Gรถรถdnight', u'๐Ÿ˜Š']]\nbatch_chars_ragged = tf.strings.unicode_decode(batch_utf8,\n input_encoding='UTF-8')\nfor sentence_chars in batch_chars_ragged.to_list():\n print(sentence_chars)", "tf.RaggedTensor๋ฅผ ๋ฐ”๋กœ ์‚ฌ์šฉํ•˜๊ฑฐ๋‚˜, ํŒจ๋”ฉ(padding)์„ ์‚ฌ์šฉํ•ด tf.Tensor๋กœ ๋ณ€ํ™˜ํ•˜๊ฑฐ๋‚˜, tf.RaggedTensor.to_tensor ์™€ tf.RaggedTensor.to_sparse ๋ฉ”์„œ๋“œ๋ฅผ ์‚ฌ์šฉํ•ด tf.SparseTensor๋กœ ๋ณ€ํ™˜ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.", "batch_chars_padded = batch_chars_ragged.to_tensor(default_value=-1)\nprint(batch_chars_padded.numpy())\n\nbatch_chars_sparse = batch_chars_ragged.to_sparse()", "๊ธธ์ด๊ฐ€ ๊ฐ™์€ ์—ฌ๋Ÿฌ ๋ฌธ์ž์—ด์„ ์ธ์ฝ”๋”ฉํ•  ๋•Œ๋Š” tf.Tensor๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ฉ๋‹ˆ๋‹ค.", "tf.strings.unicode_encode([[99, 97, 116], [100, 111, 103], [ 99, 111, 119]],\n output_encoding='UTF-8')", "๊ธธ์ด๊ฐ€ ๋‹ค๋ฅธ ์—ฌ๋Ÿฌ ๋ฌธ์ž์—ด์„ ์ธ์ฝ”๋”ฉํ•  ๋•Œ๋Š” tf.RaggedTensor๋ฅผ ์ž…๋ ฅ์œผ๋กœ ์‚ฌ์šฉํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.", "tf.strings.unicode_encode(batch_chars_ragged, output_encoding='UTF-8')", "ํŒจ๋”ฉ๋œ ํ…์„œ๋‚˜ ํฌ์†Œ(sparse) ํ…์„œ๋Š” unicode_encode๋ฅผ ํ˜ธ์ถœํ•˜๊ธฐ ์ „์— tf.RaggedTensor๋กœ ๋ฐ”๊ฟ‰๋‹ˆ๋‹ค.", "tf.strings.unicode_encode(\n tf.RaggedTensor.from_sparse(batch_chars_sparse),\n output_encoding='UTF-8')\n\ntf.strings.unicode_encode(\n tf.RaggedTensor.from_tensor(batch_chars_padded, padding=-1),\n output_encoding='UTF-8')", "์œ ๋‹ˆ์ฝ”๋“œ ์—ฐ์‚ฐ\n๊ธธ์ด\ntf.strings.length ์—ฐ์‚ฐ์€ ๊ณ„์‚ฐํ•ด์•ผ ํ•  ๊ธธ์ด๋ฅผ ๋‚˜ํƒ€๋‚ด๋Š” unit ์ธ์ž๋ฅผ ๊ฐ€์ง‘๋‹ˆ๋‹ค. 
unit์˜ ๊ธฐ๋ณธ ๋‹จ์œ„๋Š” \"BYTE\"์ด์ง€๋งŒ ์ธ์ฝ”๋”ฉ๋œ string์— ํฌํ•จ๋œ ์œ ๋‹ˆ์ฝ”๋“œ ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ์ˆ˜๋ฅผ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด \"UTF8_CHAR\"๋‚˜ \"UTF16_CHAR\"๊ฐ™์ด ๋‹ค๋ฅธ ๊ฐ’์„ ์„ค์ •ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.", "# UTF8์—์„œ ๋งˆ์ง€๋ง‰ ๋ฌธ์ž๋Š” 4๋ฐ”์ดํŠธ๋ฅผ ์ฐจ์ง€ํ•ฉ๋‹ˆ๋‹ค.\nthanks = u'Thanks ๐Ÿ˜Š'.encode('UTF-8')\nnum_bytes = tf.strings.length(thanks).numpy()\nnum_chars = tf.strings.length(thanks, unit='UTF8_CHAR').numpy()\nprint('{} ๋ฐ”์ดํŠธ; {}๊ฐœ์˜ UTF-8 ๋ฌธ์ž'.format(num_bytes, num_chars))", "๋ถ€๋ถ„ ๋ฌธ์ž์—ด\n์ด์™€ ์œ ์‚ฌํ•˜๊ฒŒ tf.strings.substr ์—ฐ์‚ฐ์€ \"unit\" ๋งค๊ฐœ๋ณ€์ˆ˜ ๊ฐ’์„ ์‚ฌ์šฉํ•ด \"pos\"์™€ \"len\" ๋งค๊ฐœ๋ณ€์ˆ˜๋กœ ์ง€์ •๋œ ๋ฌธ์ž์—ด์˜ ์ข…๋ฅ˜๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค.", "# ๊ธฐ๋ณธ: unit='BYTE'. len=1์ด๋ฉด ๋ฐ”์ดํŠธ ํ•˜๋‚˜๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.\ntf.strings.substr(thanks, pos=7, len=1).numpy()\n\n# unit='UTF8_CHAR'๋กœ ์ง€์ •ํ•˜๋ฉด 4 ๋ฐ”์ดํŠธ์ธ ๋ฌธ์ž ํ•˜๋‚˜๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.\nprint(tf.strings.substr(thanks, pos=7, len=1, unit='UTF8_CHAR').numpy())", "์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด ๋ถ„๋ฆฌ\ntf.strings.unicode_split ์—ฐ์‚ฐ์€ ์œ ๋‹ˆ์ฝ”๋“œ ๋ฌธ์ž์—ด์˜ ๊ฐœ๋ณ„ ๋ฌธ์ž๋ฅผ ๋ถ€๋ถ„ ๋ฌธ์ž์—ด๋กœ ๋ถ„๋ฆฌํ•ฉ๋‹ˆ๋‹ค.", "tf.strings.unicode_split(thanks, 'UTF-8').numpy()", "๋ฌธ์ž ๋ฐ”์ดํŠธ ์˜คํ”„์…‹\ntf.strings.unicode_decode๋กœ ๋งŒ๋“  ๋ฌธ์ž ํ…์„œ๋ฅผ ์›๋ณธ ๋ฌธ์ž์—ด๊ณผ ์œ„์น˜๋ฅผ ๋งž์ถ”๋ ค๋ฉด ๊ฐ ๋ฌธ์ž์˜ ์‹œ์ž‘ ์œ„์น˜์˜ ์˜คํ”„์…‹(offset)์„ ์•Œ์•„์•ผ ํ•ฉ๋‹ˆ๋‹ค. tf.strings.unicode_decode_with_offsets์€ unicode_decode์™€ ๋น„์Šทํ•˜์ง€๋งŒ ๊ฐ ๋ฌธ์ž์˜ ์‹œ์ž‘ ์˜คํ”„์…‹์„ ํฌํ•จํ•œ ๋‘ ๋ฒˆ์งธ ํ…์„œ๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.", "codepoints, offsets = tf.strings.unicode_decode_with_offsets(u\"๐ŸŽˆ๐ŸŽ‰๐ŸŽŠ\", 'UTF-8')\n\nfor (codepoint, offset) in zip(codepoints.numpy(), offsets.numpy()):\n print(\"๋ฐ”์ดํŠธ ์˜คํ”„์…‹ {}: ์ฝ”๋“œ ํฌ์ธํŠธ {}\".format(offset, codepoint))", "์œ ๋‹ˆ์ฝ”๋“œ ์Šคํฌ๋ฆฝํŠธ\n๊ฐ ์œ ๋‹ˆ์ฝ”๋“œ ์ฝ”๋“œ ํฌ์ธํŠธ๋Š” ์Šคํฌ๋ฆฝํŠธ(script)๋ผ ๋ถ€๋ฅด๋Š” ํ•˜๋‚˜์˜ ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ์ง‘ํ•ฉ(collection)์— ์†ํ•ฉ๋‹ˆ๋‹ค. 
๋ฌธ์ž์˜ ์Šคํฌ๋ฆฝํŠธ๋Š” ๋ฌธ์ž๊ฐ€ ์–ด๋–ค ์–ธ์–ด์ธ์ง€ ๊ฒฐ์ •ํ•˜๋Š” ๋ฐ ๋„์›€์ด ๋ฉ๋‹ˆ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด, 'ะ‘'๊ฐ€ ํ‚ค๋ฆด(Cyrillic) ์Šคํฌ๋ฆฝํŠธ๋ผ๋Š” ๊ฒƒ์„ ์•Œ๊ณ  ์žˆ์œผ๋ฉด ์ด ๋ฌธ์ž๊ฐ€ ํฌํ•จ๋œ ํ…์ŠคํŠธ๋Š” ์•„๋งˆ๋„ (๋Ÿฌ์‹œ์•„์–ด๋‚˜ ์šฐํฌ๋ผ์ด๋‚˜์–ด ๊ฐ™์€) ์Šฌ๋ผ๋ธŒ ์–ธ์–ด๋ผ๋Š” ๊ฒƒ์„ ์•Œ ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.\nํ…์„œํ”Œ๋กœ๋Š” ์ฃผ์–ด์ง„ ์ฝ”๋“œ ํฌ์ธํŠธ๊ฐ€ ์–ด๋–ค ์Šคํฌ๋ฆฝํŠธ๋ฅผ ์‚ฌ์šฉํ•˜๋Š”์ง€ ํŒ๋ณ„ํ•˜๊ธฐ ์œ„ํ•ด tf.strings.unicode_script ์—ฐ์‚ฐ์„ ์ œ๊ณตํ•ฉ๋‹ˆ๋‹ค. ์Šคํฌ๋ฆฝํŠธ ์ฝ”๋“œ๋Š” International Components for\nUnicode (ICU) UScriptCode ๊ฐ’๊ณผ ์ผ์น˜ํ•˜๋Š” int32 ๊ฐ’์ž…๋‹ˆ๋‹ค.", "uscript = tf.strings.unicode_script([33464, 1041]) # ['่Šธ', 'ะ‘']\n\nprint(uscript.numpy()) # [17, 8] == [USCRIPT_HAN, USCRIPT_CYRILLIC]", "tf.strings.unicode_script ์—ฐ์‚ฐ์€ ์ฝ”๋“œ ํฌ์ธํŠธ์˜ ๋‹ค์ฐจ์› tf.Tensor๋‚˜ tf.RaggedTensor์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค:", "print(tf.strings.unicode_script(batch_chars_ragged))", "์˜ˆ์ œ: ๊ฐ„๋‹จํ•œ ๋ถ„ํ• \n๋ถ„ํ• (segmentation)์€ ํ…์ŠคํŠธ๋ฅผ ๋‹จ์–ด์™€ ๊ฐ™์€ ๋‹จ์œ„๋กœ ๋‚˜๋ˆ„๋Š” ์ž‘์—…์ž…๋‹ˆ๋‹ค. ๊ณต๋ฐฑ ๋ฌธ์ž๊ฐ€ ๋‹จ์–ด๋ฅผ ๋‚˜๋ˆ„๋Š” ๊ตฌ๋ถ„์ž๋กœ ์‚ฌ์šฉ๋˜๋Š” ๊ฒฝ์šฐ๋Š” ์‰ฝ์ง€๋งŒ, (์ค‘๊ตญ์–ด๋‚˜ ์ผ๋ณธ์–ด ๊ฐ™์ด) ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜์ง€ ์•Š๋Š” ์–ธ์–ด๋‚˜ (๋…์ผ์–ด ๊ฐ™์ด) ๋‹จ์–ด๋ฅผ ๊ธธ๊ฒŒ ์กฐํ•ฉํ•˜๋Š” ์–ธ์–ด๋Š” ์˜๋ฏธ๋ฅผ ๋ถ„์„ํ•˜๊ธฐ ์œ„ํ•œ ๋ถ„ํ•  ๊ณผ์ •์ด ๊ผญ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ์›น ํ…์ŠคํŠธ์—๋Š” \"NYๆ ชไพก\"(New York Stock Exchange)์™€ ๊ฐ™์ด ์—ฌ๋Ÿฌ ๊ฐ€์ง€ ์–ธ์–ด์™€ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ์„ž์—ฌ ์žˆ๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์Šต๋‹ˆ๋‹ค.\n์Šคํฌ๋ฆฝํŠธ์˜ ๋ณ€ํ™”๋ฅผ ๋‹จ์–ด ๊ฒฝ๊ณ„๋กœ ๊ทผ์‚ฌํ•˜์—ฌ (ML ๋ชจ๋ธ ์‚ฌ์šฉ ์—†์ด) ๋Œ€๋žต์ ์ธ ๋ถ„ํ• ์„ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์œ„์—์„œ ์–ธ๊ธ‰๋œ \"NYๆ ชไพก\"์˜ ์˜ˆ์™€ ๊ฐ™์€ ๋ฌธ์ž์—ด์— ์ ์šฉ๋ฉ๋‹ˆ๋‹ค. 
๋‹ค์–‘ํ•œ ์Šคํฌ๋ฆฝํŠธ์˜ ๊ณต๋ฐฑ ๋ฌธ์ž๋ฅผ ๋ชจ๋‘ USCRIPT_COMMON(์‹ค์ œ ํ…์ŠคํŠธ์˜ ์Šคํฌ๋ฆฝํŠธ ์ฝ”๋“œ์™€ ๋‹ค๋ฅธ ํŠน๋ณ„ํ•œ ์Šคํฌ๋ฆฝํŠธ ์ฝ”๋“œ)์œผ๋กœ ๋ถ„๋ฅ˜ํ•˜๊ธฐ ๋•Œ๋ฌธ์— ๊ณต๋ฐฑ์„ ์‚ฌ์šฉํ•˜๋Š” ๋Œ€๋ถ€๋ถ„์˜ ์–ธ์–ด๋“ค์—์„œ๋„ ์—ญ์‹œ ์ ์šฉ๋ฉ๋‹ˆ๋‹ค.", "# dtype: string; shape: [num_sentences]\n#\n# ์ฒ˜๋ฆฌํ•  ๋ฌธ์žฅ๋“ค ์ž…๋‹ˆ๋‹ค. ์ด ๋ผ์ธ์„ ์ˆ˜์ •ํ•ด์„œ ๋‹ค๋ฅธ ์ž…๋ ฅ๊ฐ’์„ ์‹œ๋„ํ•ด ๋ณด์„ธ์š”!\nsentence_texts = [u'Hello, world.', u'ไธ–็•Œใ“ใ‚“ใซใกใฏ']", "๋จผ์ € ๋ฌธ์žฅ์„ ๋ฌธ์ž ์ฝ”๋“œ ํฌ์ธํŠธ๋กœ ๋””์ฝ”๋”ฉํ•˜๊ณ  ๊ฐ ๋ฌธ์ž์— ๋Œ€ํ•œ ์Šคํฌ๋ฆฝํŠธ ์‹๋ณ„์ž๋ฅผ ์ฐพ์Šต๋‹ˆ๋‹ค.", "# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]\n#\n# sentence_char_codepoint[i, j]๋Š”\n# i๋ฒˆ์งธ ๋ฌธ์žฅ ์•ˆ์— ์žˆ๋Š” j๋ฒˆ์งธ ๋ฌธ์ž์— ๋Œ€ํ•œ ์ฝ”๋“œ ํฌ์ธํŠธ ์ž…๋‹ˆ๋‹ค.\nsentence_char_codepoint = tf.strings.unicode_decode(sentence_texts, 'UTF-8')\nprint(sentence_char_codepoint)\n\n# dtype: int32; shape: [num_sentences, (num_chars_per_sentence)]\n#\n# sentence_char_codepoint[i, j]๋Š” \n# i๋ฒˆ์งธ ๋ฌธ์žฅ ์•ˆ์— ์žˆ๋Š” j๋ฒˆ์งธ ๋ฌธ์ž์˜ ์œ ๋‹ˆ์ฝ”๋“œ ์Šคํฌ๋ฆฝํŠธ ์ž…๋‹ˆ๋‹ค.\nsentence_char_script = tf.strings.unicode_script(sentence_char_codepoint)\nprint(sentence_char_script)", "๊ทธ๋‹ค์Œ ์Šคํฌ๋ฆฝํŠธ ์‹๋ณ„์ž๋ฅผ ์‚ฌ์šฉํ•˜์—ฌ ๋‹จ์–ด ๊ฒฝ๊ณ„๊ฐ€ ์ถ”๊ฐ€๋  ์œ„์น˜๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค. 
๊ฐ ๋ฌธ์žฅ์˜ ์‹œ์ž‘๊ณผ ์ด์ „ ๋ฌธ์ž์™€ ์Šคํฌ๋ฆฝํŠธ๊ฐ€ ๋‹ค๋ฅธ ๋ฌธ์ž์— ๋‹จ์–ด ๊ฒฝ๊ณ„๋ฅผ ์ถ”๊ฐ€ํ•ฉ๋‹ˆ๋‹ค.", "# dtype: bool; shape: [num_sentences, (num_chars_per_sentence)]\n#\n# sentence_char_starts_word[i, j]๋Š” \n# i๋ฒˆ์งธ ๋ฌธ์žฅ ์•ˆ์— ์žˆ๋Š” j๋ฒˆ์งธ ๋ฌธ์ž๊ฐ€ ๋‹จ์–ด์˜ ์‹œ์ž‘์ด๋ฉด True ์ž…๋‹ˆ๋‹ค.\nsentence_char_starts_word = tf.concat(\n [tf.fill([sentence_char_script.nrows(), 1], True),\n tf.not_equal(sentence_char_script[:, 1:], sentence_char_script[:, :-1])],\n axis=1)\n\n# dtype: int64; shape: [num_words]\n#\n# word_starts[i]์€ (๋ชจ๋“  ๋ฌธ์žฅ์˜ ๋ฌธ์ž๋ฅผ ์ผ๋ ฌ๋กœ ํŽผ์นœ ๋ฆฌ์ŠคํŠธ์—์„œ)\n# i๋ฒˆ์งธ ๋‹จ์–ด๊ฐ€ ์‹œ์ž‘๋˜๋Š” ๋ฌธ์ž์˜ ์ธ๋ฑ์Šค ์ž…๋‹ˆ๋‹ค.\nword_starts = tf.squeeze(tf.where(sentence_char_starts_word.values), axis=1)\nprint(word_starts)", "์ด ์‹œ์ž‘ ์˜คํ”„์…‹์„ ์‚ฌ์šฉํ•˜์—ฌ ์ „์ฒด ๋ฐฐ์น˜์— ์žˆ๋Š” ๋‹จ์–ด ๋ฆฌ์ŠคํŠธ๋ฅผ ๋‹ด์€ RaggedTensor๋ฅผ ๋งŒ๋“ญ๋‹ˆ๋‹ค.", "# dtype: int32; shape: [num_words, (num_chars_per_word)]\n#\n# word_char_codepoint[i, j]์€ \n# i๋ฒˆ์งธ ๋‹จ์–ด ์•ˆ์— ์žˆ๋Š” j๋ฒˆ์งธ ๋ฌธ์ž์— ๋Œ€ํ•œ ์ฝ”๋“œ ํฌ์ธํŠธ ์ž…๋‹ˆ๋‹ค.\nword_char_codepoint = tf.RaggedTensor.from_row_starts(\n values=sentence_char_codepoint.values,\n row_starts=word_starts)\nprint(word_char_codepoint)", "๋งˆ์ง€๋ง‰์œผ๋กœ ๋‹จ์–ด ์ฝ”๋“œ ํฌ์ธํŠธ RaggedTensor๋ฅผ ๋ฌธ์žฅ์œผ๋กœ ๋‹ค์‹œ ๋‚˜๋ˆ•๋‹ˆ๋‹ค.", "# dtype: int64; shape: [num_sentences]\n#\n# sentence_num_words[i]๋Š” i๋ฒˆ์งธ ๋ฌธ์žฅ ์•ˆ์— ์žˆ๋Š” ๋‹จ์–ด์˜ ์ˆ˜์ž…๋‹ˆ๋‹ค.\nsentence_num_words = tf.reduce_sum(\n tf.cast(sentence_char_starts_word, tf.int64),\n axis=1)\n\n# dtype: int32; shape: [num_sentences, (num_words_per_sentence), (num_chars_per_word)]\n#\n# sentence_word_char_codepoint[i, j, k]๋Š” i๋ฒˆ์งธ ๋ฌธ์žฅ ์•ˆ์— ์žˆ๋Š”\n# j๋ฒˆ์งธ ๋‹จ์–ด ์•ˆ์˜ k๋ฒˆ์งธ ๋ฌธ์ž์— ๋Œ€ํ•œ ์ฝ”๋“œ ํฌ์ธํŠธ์ž…๋‹ˆ๋‹ค.\nsentence_word_char_codepoint = tf.RaggedTensor.from_row_lengths(\n values=word_char_codepoint,\n row_lengths=sentence_num_words)\nprint(sentence_word_char_codepoint)", "์ตœ์ข… ๊ฒฐ๊ณผ๋ฅผ ์ฝ๊ธฐ ์‰ฝ๊ฒŒ utf-8 
๋ฌธ์ž์—ด๋กœ ๋‹ค์‹œ ์ธ์ฝ”๋”ฉํ•ฉ๋‹ˆ๋‹ค.", "tf.strings.unicode_encode(sentence_word_char_codepoint, 'UTF-8').to_list()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phoebe-project/phoebe2-docs
2.3/examples/pblum_method_compare.ipynb
gpl-3.0
[ "Comparing pblum methods\nHere we'll look into the influence of pblum_method on the resulting luminosities as a function of the stellar distortion (only applicable for alternate backends).\nSetup\nLet's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).", "#!pip install -I \"phoebe>=2.3,<2.4\"\n\nimport phoebe\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nb = phoebe.default_binary()\n\nb.add_dataset('lc')\n\nb.add_compute('ellc')", "And to avoid any issues with falling outside the atmosphere grids, we'll set a simple flat limb-darkening model and disable irradiation.", "b.set_value_all('ld_mode', 'manual')\nb.set_value_all('ld_func', 'linear')\nb.set_value_all('ld_coeffs', [0.5])\nb.set_value_all('irrad_method', 'none')\n\nb.set_value_all('atm', 'ck2004')\n\nrequiv_max = b.get_value('requiv_max', component='primary', context='component')\nrequiv_max_factors = np.arange(0.3,1.0,0.05)\nsb_pblum_abs = np.zeros_like(requiv_max_factors)\nph_pblum_abs = np.zeros_like(requiv_max_factors)\n\nfor i,requiv_max_factor in enumerate(requiv_max_factors):\n b.set_value('requiv', component='primary', value=requiv_max_factor*requiv_max)\n \n sb_pblum_abs[i] = b.compute_pblums(compute='ellc01', pblum_method='stefan-boltzmann', pblum_abs=True)['pblum_abs@primary@lc01'].value\n ph_pblum_abs[i] = b.compute_pblums(compute='ellc01', pblum_method='phoebe', pblum_abs=True)['pblum_abs@primary@lc01'].value", "Here we can see that Stefan-Boltzmann (which assumes spherical stars) is an increasingly bad approximation as the distortion of the star increases (as expected). But even in the quite detached case, the luminosities are not in great agreement. 
For this reason it is important to not trust absolute pblum values when using pblum_method='stefan-boltzmann', but rather just use them as a nuisance parameter or original estimate to adjust the light-levels.", "_ = plt.plot(requiv_max_factors, sb_pblum_abs, 'k-', label='Stefan-Boltzmann')\n_ = plt.plot(requiv_max_factors, ph_pblum_abs, 'b-', label='PHOEBE mesh')\n_ = plt.xlabel('requiv / requiv_max')\n_ = plt.ylabel('L (W)')\n_ = plt.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
bloomberg/bqplot
examples/Marks/Object Model/Pie.ipynb
apache-2.0
[ "from bqplot import Pie, Figure\nimport numpy as np\nimport string", "Basic Pie Chart", "data = np.random.rand(3)\npie = Pie(sizes=data, display_labels=\"outside\", labels=list(string.ascii_uppercase))\nfig = Figure(marks=[pie], animation_duration=1000)\nfig", "Update Data", "n = np.random.randint(1, 10)\npie.sizes = np.random.rand(n)", "Display Values", "with pie.hold_sync():\n pie.display_values = True\n pie.values_format = \".1f\"", "Enable sort", "pie.sort = True", "Set different styles for selected slices", "pie.selected_style = {\"opacity\": 1, \"stroke\": \"white\", \"stroke-width\": 2}\npie.unselected_style = {\"opacity\": 0.2}\npie.selected = [1]\n\npie.selected = None", "For more on piechart interactions, see the Mark Interactions notebook\nModify label styling", "pie.label_color = \"Red\"\npie.font_size = \"20px\"\npie.font_weight = \"bold\"", "Update pie shape and style", "pie1 = Pie(sizes=np.random.rand(6), inner_radius=0.05)\nfig1 = Figure(marks=[pie1], animation_duration=1000)\nfig1", "Change pie dimensions", "# As of now, the radius sizes are absolute, in pixels\nwith pie1.hold_sync():\n pie1.radius = 150\n pie1.inner_radius = 100\n\n# Angles are in radians, 0 being the top vertical\nwith pie1.hold_sync():\n pie1.start_angle = -90\n pie1.end_angle = 90", "Move the pie around\nx and y attributes control the position of the pie in the figure.\nIf no scales are passed for x and y, they are taken in absolute\nfigure coordinates, between 0 and 1.", "pie1.y = 0.1\npie1.x = 0.6\npie1.radius = 180", "Change slice styles\nPie slice colors cycle through the colors and opacities attribute, as the Lines Mark.", "pie1.stroke = \"brown\"\npie1.colors = [\"orange\", \"darkviolet\"]\npie1.opacities = [0.1, 1]\nfig1", "Represent an additional dimension using Color\nThe Pie allows for its colors to be determined by data, that is passed to the color attribute.\nA ColorScale with the desired color scheme must also be passed.", "from bqplot import ColorScale, 
ColorAxis\n\nNslices = 7\nsize_data = np.random.rand(Nslices)\ncolor_data = np.random.randn(Nslices)\n\nsc = ColorScale(scheme=\"Reds\")\n# The ColorAxis gives a visual representation of its ColorScale\nax = ColorAxis(scale=sc)\n\npie2 = Pie(sizes=size_data, scales={\"color\": sc}, color=color_data)\nFigure(marks=[pie2], axes=[ax])", "Position the Pie using custom scales\nPies can be positioned, via the x and y attributes,\nusing either absolute figure scales or custom 'x' or 'y' scales", "from datetime import datetime\nfrom bqplot.traits import convert_to_date\nfrom bqplot import DateScale, LinearScale, Axis\n\navg_precipitation_days = [\n (d / 30.0, 1 - d / 30.0) for d in [2, 3, 4, 6, 12, 17, 23, 22, 15, 4, 1, 1]\n]\ntemperatures = [9, 12, 16, 20, 22, 23, 22, 22, 22, 20, 15, 11]\n\ndates = [datetime(2010, k, 1) for k in range(1, 13)]\n\nsc_x = DateScale()\nsc_y = LinearScale()\nax_x = Axis(scale=sc_x, label=\"Month\", tick_format=\"%b\")\nax_y = Axis(scale=sc_y, orientation=\"vertical\", label=\"Average Temperature\")\n\npies = [\n Pie(\n sizes=precipit,\n x=date,\n y=temp,\n display_labels=\"none\",\n scales={\"x\": sc_x, \"y\": sc_y},\n radius=30.0,\n stroke=\"navy\",\n apply_clip=False,\n colors=[\"navy\", \"navy\"],\n opacities=[1, 0.1],\n )\n for precipit, date, temp in zip(avg_precipitation_days, dates, temperatures)\n]\n\nFigure(\n title=\"Kathmandu Precipitation\",\n marks=pies,\n axes=[ax_x, ax_y],\n padding_x=0.05,\n padding_y=0.1,\n)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
317070/ModelZoo
examples/Saliency Maps and Guided Backpropagation.ipynb
mit
[ "author: Jan Schlรผter (@f0k), 2015-10-13\nIntroduction\nThis example demonstrates how to compute saliency maps for a trained neural network -- a visualization of which inputs a neural network used for a particular prediction. This allows an object classifier to be used for object localization, or to better understand misclassifications. Three papers have proposed closely related methods for this:\n\n[1]: Zeiler et al. (2013): \"Visualizing and Understanding Convolutional Networks\",\n http://arxiv.org/abs/1311.2901\n[2]: Simonyan et al. (2013): \"Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps\",\n http://arxiv.org/abs/1312.6034\n[3]: Springenberg et al. (2015): \"Striving for Simplicity - The All Convolutional Net\",\n http://arxiv.org/abs/1412.6806\n\nThe common idea is to compute the gradient of the network's prediction with respect to the input, holding the weights fixed. This determines which input elements (e.g., which pixels in case of an input image) need to be changed the least to affect the prediction the most. The sole difference between the three approaches is how they backpropagate through the linear rectifier. Only the second approach actually computes the gradient, the others modify the backpropagation step to do something slightly different. As we will see, this makes a crucial difference for the saliency maps!\nPreparation steps\nWe will work on a pre-trained ImageNet model, VGG-16. 
Let's start by downloading the weights (528 MiB):", "!wget -N https://s3.amazonaws.com/lasagne/recipes/pretrained/imagenet/vgg16.pkl", "We can now load the weights into Python:", "try:\n import cPickle as pickle\nexcept ImportError:\n # Python 3\n import pickle\n with open('vgg16.pkl', 'rb') as f:\n model = pickle.load(f, encoding='latin-1')\nelse:\n # Python 2\n with open('vgg16.pkl', 'rb') as f:\n model = pickle.load(f)\n\nweights = model['param values'] # list of network weight tensors\nclasses = model['synset words'] # list of class names\nmean_pixel = model['mean value'] # mean pixel value (in BGR)\ndel model", "And define and fill the VGG-16 network in Lasagne:", "import lasagne\nfrom lasagne.layers import InputLayer, DenseLayer, NonlinearityLayer\nfrom lasagne.layers.dnn import Conv2DDNNLayer as ConvLayer\nfrom lasagne.layers import Pool2DLayer as PoolLayer\nfrom lasagne.nonlinearities import softmax\n\nnet = {}\nnet['input'] = InputLayer((None, 3, 224, 224))\nnet['conv1_1'] = ConvLayer(net['input'], 64, 3, pad=1)\nnet['conv1_2'] = ConvLayer(net['conv1_1'], 64, 3, pad=1)\nnet['pool1'] = PoolLayer(net['conv1_2'], 2)\nnet['conv2_1'] = ConvLayer(net['pool1'], 128, 3, pad=1)\nnet['conv2_2'] = ConvLayer(net['conv2_1'], 128, 3, pad=1)\nnet['pool2'] = PoolLayer(net['conv2_2'], 2)\nnet['conv3_1'] = ConvLayer(net['pool2'], 256, 3, pad=1)\nnet['conv3_2'] = ConvLayer(net['conv3_1'], 256, 3, pad=1)\nnet['conv3_3'] = ConvLayer(net['conv3_2'], 256, 3, pad=1)\nnet['pool3'] = PoolLayer(net['conv3_3'], 2)\nnet['conv4_1'] = ConvLayer(net['pool3'], 512, 3, pad=1)\nnet['conv4_2'] = ConvLayer(net['conv4_1'], 512, 3, pad=1)\nnet['conv4_3'] = ConvLayer(net['conv4_2'], 512, 3, pad=1)\nnet['pool4'] = PoolLayer(net['conv4_3'], 2)\nnet['conv5_1'] = ConvLayer(net['pool4'], 512, 3, pad=1)\nnet['conv5_2'] = ConvLayer(net['conv5_1'], 512, 3, pad=1)\nnet['conv5_3'] = ConvLayer(net['conv5_2'], 512, 3, pad=1)\nnet['pool5'] = PoolLayer(net['conv5_3'], 2)\nnet['fc6'] = DenseLayer(net['pool5'], 
num_units=4096)\nnet['fc7'] = DenseLayer(net['fc6'], num_units=4096)\nnet['fc8'] = DenseLayer(net['fc7'], num_units=1000, nonlinearity=None)\nnet['prob'] = NonlinearityLayer(net['fc8'], softmax)\n\nlasagne.layers.set_all_param_values(net['prob'], weights)", "VGG-16 needs the input image in a specific size (224x224) and format (BGR instead of RGB). We'll define a helper function to download and convert an image (original source):", "import numpy as np\nimport matplotlib.pyplot as plt\n%config InlineBackend.figure_format = 'jpeg'\n%matplotlib inline\nimport urllib\nimport io\nimport skimage.transform\n\ndef prepare_image(url):\n ext = url.rsplit('.', 1)[1]\n img = plt.imread(io.BytesIO(urllib.urlopen(url).read()), ext)\n # Resize so smallest dim = 256, preserving aspect ratio\n h, w, _ = img.shape\n if h < w:\n img = skimage.transform.resize(img, (256, w*256/h), preserve_range=True)\n else:\n img = skimage.transform.resize(img, (h*256/w, 256), preserve_range=True)\n # Central crop to 224x224\n h, w, _ = img.shape\n img = img[h//2-112:h//2+112, w//2-112:w//2+112]\n # Remember this, it's a single RGB image suitable for plt.imshow()\n img_original = img.astype('uint8')\n # Shuffle axes from 01c to c01\n img = img.transpose(2, 0, 1)\n # Convert from RGB to BGR\n img = img[::-1]\n # Subtract mean pixel value\n img = img - mean_pixel[:, np.newaxis, np.newaxis]\n # Return the original and the prepared image (as a batch of a single item)\n return img_original, lasagne.utils.floatX(img[np.newaxis])", "Computing a saliency map\nNow that we've got the basics in place, let's define a function to compute a saliency map. As mentioned in the introduction, the main idea is to compute the gradient of the output with respect to the input. More specifically, we are interested in the gradient of the predicted class, that is, the gradient of the unit of maximum activation in the network's output layer (net['prob']). 
The network's output layer has a softmax nonlinearity, though, so the gradient of any output unit will not only tell which inputs are relevant for maximizing the predicted class, but also which ones are relevant for minimizing all the other classes. To get a clearer picture, we will thus take the gradient of the softmax inputs (net['fc8']).\nWe will define a helper function for compiling the saliency map function, so we can reuse it for the different ways of propagating through the linear rectifier.", "import theano\nimport theano.tensor as T\n\ndef compile_saliency_function(net):\n \"\"\"\n Compiles a function to compute the saliency maps and predicted classes\n for a given minibatch of input images.\n \"\"\"\n inp = net['input'].input_var\n outp = lasagne.layers.get_output(net['fc8'], deterministic=True)\n max_outp = T.max(outp, axis=1)\n saliency = theano.grad(max_outp.sum(), wrt=inp)\n max_class = T.argmax(outp, axis=1)\n return theano.function([inp], [saliency, max_class])", "Let us also define a helper function for plotting an input image along with the saliency map. We will display three variants:\n\nThe magnitude per pixel, taking the maximum magnitude over the three color channels, as in [2], Section 3.1\nThe positive magnitudes (positively correlated with the output), keeping color information\nThe negative magnitudes (negatively correlated with the output), keeping color information", "def show_images(img_original, saliency, max_class, title):\n # get out the first map and class from the mini-batch\n saliency = saliency[0]\n max_class = max_class[0]\n # convert saliency from BGR to RGB, and from c01 to 01c\n saliency = saliency[::-1].transpose(1, 2, 0)\n # plot the original image and the three saliency map variants\n plt.figure(figsize=(10, 10), facecolor='w')\n plt.suptitle(\"Class: \" + classes[max_class] + \". Saliency: \" + title)\n plt.subplot(2, 2, 1)\n plt.title('input')\n plt.imshow(img_original)\n plt.subplot(2, 2, 2)\n plt.title('abs. 
saliency')\n plt.imshow(np.abs(saliency).max(axis=-1), cmap='gray')\n plt.subplot(2, 2, 3)\n plt.title('pos. saliency')\n plt.imshow((np.maximum(0, saliency) / saliency.max()))\n plt.subplot(2, 2, 4)\n plt.title('neg. saliency')\n plt.imshow((np.maximum(0, -saliency) / -saliency.min()))\n plt.show()", "Simonyan et al. (2013): Plain Gradient\nReference [2] uses the unmodified gradient to compute the saliency map, so this is what we will start with. We will prepare one of the ILSVRC2012 validation images to feed to the network:", "url = 'http://farm5.static.flickr.com/4064/4334173592_145856d89b.jpg'\nimg_original, img = prepare_image(url)", "Then we'll compile the saliency map function, pass the image through it and display the results:", "saliency_fn = compile_saliency_function(net)\nsaliency, max_class = saliency_fn(img)\nshow_images(img_original, saliency, max_class, \"default gradient\")", "As in the original paper (Figure 2), the gradient magnitudes roughly show which pixels are relevant for the network predicting \"green snake\", i.e., where in the picture the detected green snake is visible. But we can do better!\nSpringenberg et al. (2015): Guided Backpropagation\nReference [3] proposes to change a tiny detail for backpropagating through the linear rectifier $y(x) = \\max(x, 0) = x \\cdot [x > 0]$, the nonlinearity used in all the layers of VGG-16 except for the final output layer (which has a softmax activation).\nHere, $[\\cdot]$ is the indicator function in a notation promoted by Knuth.\nThe gradient of the rectifier's output wrt. its input is defined as follows: $\\frac{d}{dx} y(x) = [x > 0]$. So when backpropagating an error signal $\\delta_i$ through the rectifier, we retain $\\delta_{i-1} = \\delta_i \\cdot [x > 0]$. In [3], Section 3.4, Springenberg et al. 
propose an additional limitation: In addition to propagating the error back to every positive input, only propagate back positive error signals: $\\delta_{i-1} = \\delta_i \\cdot [x > 0] \\cdot [\\delta_i > 0]$.\nThey term this \"guided backpropagation\", because the gradient is guided not only by the input from below, but also by the error signal from above.\nTo implement this, we need to change the gradient computed by Theano. Luckily, Theano is already organized in entities called Ops that know how to compute some quantity and its partial derivatives. All we need to do is to replace the nonlinearity functions in the network with applications of an Op that computes the nonlinear rectifier in the forward pass, but does guided backpropagation in its backward pass. Implementing an Op can be tedious, but Theano provides a helper function called OpFromGraph() that turns an expression graph into an Op. We will use this to turn the nonlinearity into an Op, then simply replace its grad() method -- the one responsible for giving the partial derivatives -- by a custom implementation.\nAs we need this for both [3] and [1], we will first define a helper class that allows us to replace a nonlinearity with an Op that has the same output, but a custom gradient:", "class ModifiedBackprop(object):\n\n def __init__(self, nonlinearity):\n self.nonlinearity = nonlinearity\n self.ops = {} # memoizes an OpFromGraph instance per tensor type\n\n def __call__(self, x):\n # OpFromGraph is opaque to Theano optimizations, so we need to move\n # things to GPU ourselves if needed.\n if theano.sandbox.cuda.cuda_enabled:\n maybe_to_gpu = theano.sandbox.cuda.as_cuda_ndarray_variable\n else:\n maybe_to_gpu = lambda x: x\n # We move the input to GPU if needed.\n x = maybe_to_gpu(x)\n # We note the tensor type of the input variable to the nonlinearity\n # (mainly dimensionality and dtype); we need to create a fitting Op.\n tensor_type = x.type\n # If we did not create a suitable Op yet, this is the 
time to do so.\n if tensor_type not in self.ops:\n # For the graph, we create an input variable of the correct type:\n inp = tensor_type()\n # We pass it through the nonlinearity (and move to GPU if needed).\n outp = maybe_to_gpu(self.nonlinearity(inp))\n # Then we fix the forward expression...\n op = theano.OpFromGraph([inp], [outp])\n # ...and replace the gradient with our own (defined in a subclass).\n op.grad = self.grad\n # Finally, we memoize the new Op\n self.ops[tensor_type] = op\n # And apply the memoized Op to the input we got.\n return self.ops[tensor_type](x)", "We can now define a subclass that does guided backpropagation through a nonlinearity:", "class GuidedBackprop(ModifiedBackprop):\n def grad(self, inputs, out_grads):\n (inp,) = inputs\n (grd,) = out_grads\n dtype = inp.dtype\n return (grd * (inp > 0).astype(dtype) * (grd > 0).astype(dtype),)", "Finally, we replace all the nonlinearities of the network:", "relu = lasagne.nonlinearities.rectify\nrelu_layers = [layer for layer in lasagne.layers.get_all_layers(net['prob'])\n if getattr(layer, 'nonlinearity', None) is relu]\nmodded_relu = GuidedBackprop(relu) # important: only instantiate this once!\nfor layer in relu_layers:\n layer.nonlinearity = modded_relu", "We can now recompile the saliency map function, and compute and display the saliency maps:", "saliency_fn = compile_saliency_function(net)\nsaliency, max_class = saliency_fn(img)\nshow_images(img_original, saliency, max_class, \"guided backprop\")", "As in [3], Appendix C, they are considerably more detailed. We can see that the eye is important for the prediction: making its green part black or its black part green would reduce the output activation the most. Note that for guided backpropagation, the negative saliency values only arise from the very first convolution, because negative error signals are never propagated back through the nonlinearities.\nZeiler et al. 
(2013): DeconvNet\nFinally, we will compare results to the DeconvNet proposed in reference [1]. This was actually the first of the pack, but it was easier to start with the unmodified gradient of [2] and continue from there.\nThe central idea of Zeiler et al. is to visualize layer activations of a ConvNet by running them through a \"DeconvNet\" -- a network that undoes the convolutions and pooling operations of the ConvNet until it reaches the input space. Deconvolution is defined as convolving an image with the same filters transposed, and unpooling is defined as copying inputs to the spots in the (larger) output that were maximal in the ConvNet (i.e., an unpooling layer uses the switches from its corresponding pooling layer for the reconstruction). Any linear rectifier in the ConvNet is simply copied over to the DeconvNet.\nAs noted in [2], Section 4, this definition of a DeconvNet exactly corresponds to simply backpropagating through the ConvNet, except for the linear rectifier. So again, this can be implemented by modifying the gradient of the rectifier: Instead of propagating the error back to every positive input, propagate back all positive error signals: $\\delta_{i-1} = \\delta_i \\cdot [\\delta_i > 0]$. 
Note that this is equivalent to applying the linear rectifier to the error signal.\nSo again, let's define a subclass that does Zeiler's backpropagation through a nonlinearity:", "class ZeilerBackprop(ModifiedBackprop):\n def grad(self, inputs, out_grads):\n (inp,) = inputs\n (grd,) = out_grads\n #return (grd * (grd > 0).astype(inp.dtype),) # explicitly rectify\n return (self.nonlinearity(grd),) # use the given nonlinearity", "And again, we'll modify the network, recompile the saliency map function, compute and display the saliency maps:", "modded_relu = ZeilerBackprop(relu)\nfor layer in relu_layers:\n layer.nonlinearity = modded_relu\n\nsaliency_fn = compile_saliency_function(net)\nsaliency, max_class = saliency_fn(img)\nshow_images(img_original, saliency, max_class, \"deconvnet\")", "This is pretty much in line with [3], Appendix C, Figure 5: The DeconvNet reconstructs sharper features than the default gradient, but fails to correctly localize those features in the input image.\nBonus Image\nFor some images, even the unmodified gradient is largely uninformative, while guided backpropagation works remarkably well:", "url = 'http://farm6.static.flickr.com/5066/5595774449_b3f85b36ec.jpg'\nimg_original, img = prepare_image(url)\nfor layer in relu_layers:\n layer.nonlinearity = relu\nsaliency_fn = compile_saliency_function(net)\nshow_images(img_original, *saliency_fn(img), title=\"default gradient\")\n\nmodded_relu = GuidedBackprop(relu)\nfor layer in relu_layers:\n layer.nonlinearity = modded_relu\nsaliency_fn = compile_saliency_function(net)\nshow_images(img_original, *saliency_fn(img), title=\"guided backprop\")", "Conclusion\nWe have seen how to implement different saliency mapping methods, and compared the results. 
Guided Backpropagation seems to have a clear advantage over the plain gradient or the DeconvNet.\nNote that these methods also work fine for neural networks with leaky rectified units, but, to state the obvious, they are only useful for inspecting a trained network, not for training it.\nHave fun debugging your ConvNets!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
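The three backpropagation rules compared in the notebook above differ only in how the rectifier's backward pass masks the error signal. A minimal framework-free NumPy sketch of the three masking rules (`relu_backward` is a hypothetical helper for illustration, not part of the notebook's Theano code):

```python
import numpy as np

def relu_backward(x, delta, mode="gradient"):
    """Backpropagate error `delta` through y = max(x, 0) under three rules.

    mode="gradient":  plain gradient [2]      -> delta * [x > 0]
    mode="deconvnet": Zeiler et al. [1]       -> delta * [delta > 0]
    mode="guided":    Springenberg et al. [3] -> delta * [x > 0] * [delta > 0]
    """
    if mode == "gradient":
        return delta * (x > 0)
    if mode == "deconvnet":
        return delta * (delta > 0)
    if mode == "guided":
        return delta * (x > 0) * (delta > 0)
    raise ValueError(mode)

x = np.array([-1.0, 2.0, 3.0, -4.0])   # rectifier inputs from the forward pass
d = np.array([5.0, -6.0, 7.0, 8.0])    # error signal arriving from above
print(relu_backward(x, d, "gradient"))   # masks entries where x <= 0
print(relu_backward(x, d, "deconvnet"))  # masks entries where delta <= 0
print(relu_backward(x, d, "guided"))     # masks entries where either condition fails
```

Guided backpropagation keeps only the entries that both other rules would keep, which is why its saliency maps are the sparsest of the three.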
chapman-phys227-2016s/cw-3-classwork-team
cw-3.ipynb
mit
[ "import numpy as np\nimport sympy as sp\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport pi_sequences as p3\nimport boundary_layer_func1 as p1\nimport sequence_limits as p2\nimport diffeq_midpoint as p4\n\n\nimport math", "Classwork 3\nMichael Seaman, Chinmai Raman, Austin Ayers, Taylor Patti\nOrganized by Andrew Malfavon\nExercise A.2: Computing $\\pi$ via sequences\nMichael Seaman\nThe following sequences all converge to pi, although at different rates.\nIn order:\n$$a_n = 4\\sum_{k=1}^{n}\\frac{(-1)^{k+1}}{2k-1}$$\n$$b_n = (6\\sum_{k=1}^{n}k^{-2})^{1/2} $$\n$$c_n = (90\\sum_{k=1}^{n}k^{-4})^{1/4} $$\n$$d_n = \\frac{6}{\\sqrt{3}}\\sum_{k=0}^{n}\\frac{(-1)^{k}}{3^k(2k+1)}$$\n$$e_n = 16\\sum_{k=0}^{n}\\frac{(-1)^{k}}{5^{2k+1}(2k+1)} - 4\\sum_{k=0}^{n}\\frac{(-1)^{k}}{239^{2k+1}(2k+1)}$$", "n = 30\nplt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fa),'g.')\nplt.show()\n\nplt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fb) ** .5 ,'b.')\nplt.show()\n\nplt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fc) ** .25 ,'y.')\nplt.show()\n\nplt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fd),'r.')\nplt.show()\n\nplt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fd),'c.')\nplt.show()\n\nn = 10\nplt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fa)[-n:],'g.')\nplt.plot([x + 20 for x in range(n)],(p3.pi_sequence(n + 20, p3.fb) ** .5)[-n:] ,'b.')\nplt.plot([x + 20 for x in range(n)],(p3.pi_sequence(n + 20, p3.fc) ** .25)[-n:] ,'y.')\nplt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fd)[-n:],'r.')\nplt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fd)[-n:],'c.')\n\nplt.plot((20, 30), (math.pi, math.pi), 'b')\n\nplt.show()\n", "Chinmai Raman\nClasswork 3\n5.49 Experience Overflow in a Function\nCalculates an exponential function and returns the numerator, denominator and the fraction as a 3-tuple", "x = np.linspace(0,1,10000)\ny1 = p1.v(x, 1, np.exp)[2]\ny2 = p1.v(x, 0.1, np.exp)[2]\ny3 = p1.v(x, 0.01, 
np.exp)[2]\n\nfig = plt.figure(1)\nplt.plot(x, y1, 'b')\nplt.plot(x, y2, 'r')\nplt.plot(x, y3, 'g')\nplt.xlabel('x')\nplt.ylabel('v(x)')\nplt.legend(['(1 - exp(x / mu)) / (1 - exp(1 / mu))'])\nplt.axis([x[0], x[-1], min(y3), max(y3)])\nplt.title('Math Function')\nplt.show(fig)", "Austin Ayers\nClasswork 3\nA.1 Determine the limit of a sequence\nComputes and returns the following sequence for N = 100\n$$a_n = \\frac{7+1/(n+1)}{3-1/(n+1)^2}, \\qquad n=0,1,2,\\ldots,N$$", "p2.part_a()\n\np2.part_b()\n\np2.part_c()\n\np2.part_d()\n\np2.part_e()\n\np2.part_f()", "diffeq_midpoint\nTaylor Patti\nUses the midpoint integration rule along with numpy vectors to produce a continuous vector which gives integral data for an array of prespecified points.\nHere we use it to integrate sin from 0 to pi.", "function_call = p4.vector_midpoint(p4.np.sin, 0, p4.np.pi, 10000)\nprint function_call[1][-1]", "Observe the close adherence to the actual value of this canonical integral.\nWe can also call it at a different value of x. Let's look at the value of this integral from 0 to pi over 2. Again, the result will have strikingly close adherence to the analytical value of this integral.", "print function_call[1][5000]" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
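The `p4.vector_midpoint` routine used in the classwork above is not included here; the following is a minimal self-contained sketch of a cumulative midpoint-rule integrator (the name `midpoint` and the return convention of a grid plus running integral are assumptions):

```python
import numpy as np

def midpoint(f, a, b, n):
    """Cumulative midpoint-rule integral of f over [a, b] with n subintervals.

    Returns (x, F), where F[k] approximates the integral of f from a to x[k].
    """
    h = (b - a) / float(n)
    mids = a + h * (np.arange(n) + 0.5)   # midpoint of each subinterval
    F = np.cumsum(f(mids)) * h            # running sum of the area slices
    x = a + h * np.arange(1, n + 1)       # right edge of each subinterval
    return x, F

x, F = midpoint(np.sin, 0, np.pi, 10000)
print(F[-1])    # integral of sin over [0, pi]: close to 2
print(F[4999])  # integral over [0, pi/2]: close to 1
```

The midpoint rule is second-order accurate, so with 10000 subintervals both values match the analytical results to many digits.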
eriksalt/jupyter
Python Quick Reference/Data Algorithms.ipynb
mit
[ "Python Data Algorithms Quick Reference\nTable Of Contents\n\n<a href=\"#1.-Manually-Consuming-an-Iterator\">Manually Consuming an Iterator</a>\n<a href=\"#2.-Delegating-Iterator\">Delegating Iterator</a>\n<a href=\"#3.-Map\">Map</a>\n<a href=\"#4.-Filter\">Filter</a>\n<a href=\"#5.-Named-Slices\">Named Slices</a>\n<a href=\"#6.-zip\">zip</a>\n<a href=\"#7.-itemgetter\">itemgetter</a>\n<a href=\"#8.-attrgetter\">attrgetter</a>\n<a href=\"#9.-groupby\">groupby</a>\n<a href=\"#10.-Generator-Expressions\">Generator Expressions</a>\n<a href=\"#11.-compress\">compress</a>\n<a href=\"#12.-reversed\">reversed</a>\n<a href=\"#13.-Generators-with-State\">Generators with State</a>\n<a href=\"#14.-islice-and-dropwhile\">islice and dropwhile</a>\n<a href=\"#15.-Permutations-and-Combinations-of-Elements\">Permutations and Combinations of Elements</a>\n<a href=\"#16.-Iterating-with-Indexes\">Iterating with Indexes</a>\n<a href=\"#17.-chain\">chain</a>\n<a href=\"#18.-Flatten-a-Nested-Sequence\">Flatten a Nested Sequence</a>\n<a href=\"#19.-Merging-Presorted-Iterables\">Merging Presorted Iterables</a>\n\n1. Manually Consuming an Iterator", "items = [1, 2, 3]\n# Get the iterator\nit = iter(items) # Invokes items.__iter__()\n# Run the iterator\nnext(it) # Invokes it.__next__()\n\nnext(it)\n\nnext(it)\n\n# if you uncomment this line it would throw a StopIteration exception\n# next(it)", "2. Delegating Iterator", "# if you write a container class, and want to expose an iterator over an internal collection use the __iter__() method\nclass Node:\n def __init__(self):\n self._children = [1,2,3]\n def __iter__(self):\n return iter(self._children)\n\nroot = Node()\nfor x in root:\n print(x)", "3. 
Map\nmap applies a function to every element of a sequence and returns an iterator of elements", "simpsons = ['homer', 'marge', 'bart']\nmap(len, simpsons) # returns [5, 5, 4]\n\n#equivalent list comprehension\n[len(word) for word in simpsons]\n\nmap(lambda word: word[-1], simpsons) # returns ['r','e', 't']\n\n#equivalent list comprehension\n[word[-1] for word in simpsons]", "4. Filter\nfilter returns an iterator containing the elements from a sequence for which a condition is True:", "nums = range(5)\nfilter(lambda x: x % 2 == 0, nums) # returns [0, 2, 4]\n\n# equivalent list comprehension\n[num for num in nums if num % 2 == 0]", "5. Named Slices", "###### 0123456789012345678901234567890123456789012345678901234567890'\nrecord = '....................100 .......513.25 ..........'\n\nSHARES = slice(20,32)\nPRICE = slice(40,48)\n\ncost = int(record[SHARES]) * float(record[PRICE])\ncost", "6. zip", "# zip() allows you to create an iterable view over a tuple created out of two separate iterable views\nprices = { 'ACME' : 45.23, 'AAPL': 612.78, 'IBM': 205.55, 'HPQ' : 37.20, 'FB' : 10.75 }\n\nmin_price = min(zip(prices.values(), prices.keys())) #(10.75, 'FB')\n\nmax((zip(prices.values(), prices.keys())))", "zip can only be iterated over once!", "prices_and_names = zip(prices.values(), prices.keys())\nprint(min(prices_and_names))\n\n# running the following code would fail\n#print(min(prices_and_names))\n\n# zip usually stops when any individual iterator ends (it iterates only until the end of the shortest sequence)\n\na = [1, 2, 3]\nb = ['w', 'x', 'y', 'z']\nfor i in zip(a,b):\n print(i)\n\n# use zip_longest to keep iterating through longer sequences\nfrom itertools import zip_longest\nfor i in zip_longest(a,b):\n print(i)\n\n# zip can run over more than 2 sequences\n\nc = ['aaa', 'bbb', 'ccc']\nfor i in zip(a,b,c):\n print(i)", "7. 
itemgetter", "from operator import itemgetter\n\nrows = [\n{'fname': 'Brian', 'lname': 'Jones', 'uid': 1003},\n{'fname': 'David', 'lname': 'Beazley', 'uid': 1002},\n{'fname': 'John', 'lname': 'Cleese', 'uid': 1001},\n{'fname': 'Big', 'lname': 'Jones', 'uid': 1004}\n]\n\nrows_by_fname = sorted(rows, key=itemgetter('fname'))\nrows_by_fname\n\nrows_by_uid = sorted(rows, key=itemgetter('uid'))\nrows_by_uid\n\n# itemgetter() function can also accept multiple keys\nrows_by_lfname = sorted(rows, key=itemgetter('lname','fname'))\nrows_by_lfname", "8. attrgetter", "from operator import attrgetter\n\n#used to sort objects that don't natively support comparison\nclass User:\n def __init__(self, user_id):\n self.user_id = user_id\n def __repr__(self):\n return 'User({})'.format(self.user_id)\n \nusers = [User(23), User(3), User(99)]\nusers\n\nsorted(users, key=attrgetter('user_id'))\n\nmin(users, key=attrgetter('user_id'))", "9. groupby\nThe groupby() function works by scanning a sequence and finding sequential “runs”\nof identical values (or values returned by the given key function). On each iteration, it\nreturns the value along with an iterator that produces all of the items in a group with\nthe same value.", "from operator import itemgetter\nfrom itertools import groupby\n\nrows = [\n{'address': '5412 N CLARK', 'date': '07/01/2012'},\n{'address': '5148 N CLARK', 'date': '07/04/2012'},\n{'address': '5800 E 58TH', 'date': '07/02/2012'},\n{'address': '2122 N CLARK', 'date': '07/03/2012'},\n{'address': '5645 N RAVENSWOOD', 'date': '07/02/2012'},\n{'address': '1060 W ADDISON', 'date': '07/02/2012'},\n{'address': '4801 N BROADWAY', 'date': '07/01/2012'},\n{'address': '1039 W GRANVILLE', 'date': '07/04/2012'},\n]\n\n# important! must sort data on key field first!\nrows.sort(key=itemgetter('date'))\n\n#iterate in groups\nfor date, items in groupby(rows, key=itemgetter('date')):\n print(date)\n for i in items:\n print(' %s' % i)", "10. 
Generator Expressions", "mylist = [1, 4, -5, 10, -7, 2, 3, -1]\npositives = (n for n in mylist if n > 0)\n\npositives\n\nfor x in positives:\n print(x)\n\nnums = [1, 2, 3, 4, 5]\nsum(x * x for x in nums)\n\n# Output a tuple as CSV\ns = ('ACME', 50, 123.45)\n','.join(str(x) for x in s)\n\n# Determine if any .py files exist in a directory\nimport os\nfiles = os.listdir('.')\nif any(name.endswith('.py') for name in files):\n print('There be python!')\nelse:\n print('Sorry, no python.')\n\n# Data reduction across fields of a data structure\nportfolio = [\n{'name':'GOOG', 'shares': 50},\n{'name':'YHOO', 'shares': 75},\n{'name':'AOL', 'shares': 20},\n{'name':'SCOX', 'shares': 65}\n]\nmin(s['shares'] for s in portfolio)\n\ns = sum((x * x for x in nums)) # Pass generator-expr as argument\ns = sum(x * x for x in nums) # More elegant syntax\ns", "11. compress\nitertools.compress() takes an iterable and an accompanying Boolean selector sequence as input. As output, it gives you all of the items in the iterable where the corresponding element in the selector is True.", "from itertools import compress\n\naddresses = [\n'5412 N CLARK',\n'5148 N CLARK',\n'5800 E 58TH',\n'2122 N CLARK',\n'5645 N RAVENSWOOD',\n'1060 W ADDISON',\n'4801 N BROADWAY',\n'1039 W GRANVILLE',\n]\ncounts = [ 0, 3, 10, 4, 1, 7, 6, 1]\n\n\nmore5 = [n > 5 for n in counts]\nmore5\n\nlist(compress(addresses, more5))", "12. reversed", "#iterates in reverse\na = [1, 2, 3, 4]\nfor x in reversed(a):\n print(x)\n\n#you can customize the behavior of reversed for your class by implementing the __reversed__() method\nclass Counter:\n def __init__(self, start):\n self.start = start\n # Forward iterator\n def __iter__(self):\n n = 1\n while n <= self.start:\n yield n\n n += 1\n # Reverse iterator\n def __reversed__(self):\n n = self.start\n while n > 0:\n yield n\n n -= 1\n\nfoo = Counter(5)\nfor x in reversed(foo):\n print(x)", "13. 
Generators with State", "# To expose state available at each step of iteration, use a classs that implements __iter__()\nclass countingiterator:\n def __init__(self, items):\n self.items=items\n def __iter__(self):\n self.clear_count()\n for item in self.items:\n self.count+=1\n yield item\n def clear_count(self):\n self.count=0\n\nfoo = countingiterator([\"aaa\",\"bbb\",\"ccc\"])\n\nfor i in foo:\n print(\"{}:{}\".format(foo.count, i))", "14. islice and dropwhile", "# itertools.islice allows slicing of iterators\ndef count(n):\n while True:\n yield n\n n += 1\n\nc=count(0)\n#the next line would fail\n# c[10:20]\nimport itertools\nfor x in itertools.islice(c,10,15):\n print(x)\n\nc=count(0)\nfor x in itertools.islice(c, 10, 15, 2):\n print(x)\n\n# if you don't know how many to skip, but can define a skip condition, use dropwhile()\nfrom itertools import dropwhile\nfoo = ['#','#','#','#','aaa','bbb','#','ccc']\ndef getstrings(f):\n for i in f:\n yield i\n\nfor ch in dropwhile(lambda ch: ch.startswith('#'), getstrings(foo)):\n print(ch)", "15. Permutations and Combinations of Elements", "from itertools import permutations\n\nitems = ['a', 'b', 'c']\n\nfor p in permutations(items):\n print(p)\n\n# for smaller subset permutations\nfor p in permutations(items,2):\n print(p)\n\n# itertools.combinations ignores element order in creating unique sets\nfrom itertools import combinations\nfor c in combinations(items, 3):\n print(c)\n\nfor c in combinations(items, 2):\n print(c)\n\n# itertools.combinations_with_replacement() will not remove an item from the list of possible candidates after it is chosen \n# in other words, the same value can occur more then once\nfrom itertools import combinations_with_replacement\n\nfor c in combinations_with_replacement(items, 3):\n print(c)", "16. 
Iterating with Indexes", "# enumerate returns the iterated item and an index\nmy_list = ['a', 'b', 'c']\nfor idx, val in enumerate(my_list):\n print(idx, val)\n\n# pass a starting index to enumerate\nfor idx, val in enumerate(my_list, 7):\n print(idx, val)", "17. chain", "# chain iterates over several sequences, one after the other\n# making them look like one long sequence\n\nfrom itertools import chain\na = [1, 2]\nb = ['x', 'y', 'z']\nfor x in chain(a, b):\n print(x)", "18. Flatten a Nested Sequence", "# you want to traverse a sequence with nested sub sequences as one big sequence\nfrom collections import Iterable\n\ndef flatten(items, ignore_types=(str, bytes)):\n for x in items:\n if isinstance(x, Iterable) and not isinstance(x, ignore_types): # ignore types treats iterable string/bytes as simple values\n yield from flatten(x)\n else:\n yield x\n\nitems = [1, 2, [3, 4, [5, 6], 7], 8]\n\nfor x in flatten(items):\n print(x)", "19. Merging Presorted Iterables", "import heapq\na = [1, 4, 7]\nb = [2, 5, 6]\nfor c in heapq.merge(a, b):\n print(c)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
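A point worth testing from the quick reference above: `groupby` only groups consecutive runs, so the data must be sorted on the key field first. A short self-contained sketch reusing a subset of the notebook's own example data:

```python
from itertools import groupby
from operator import itemgetter

rows = [
    {'address': '5412 N CLARK', 'date': '07/01/2012'},
    {'address': '5800 E 58TH', 'date': '07/02/2012'},
    {'address': '4801 N BROADWAY', 'date': '07/01/2012'},
]

# Unsorted, groupby would emit '07/01/2012' twice (two separate runs),
# so sort on the key field first.
rows.sort(key=itemgetter('date'))
grouped = {date: [r['address'] for r in items]
           for date, items in groupby(rows, key=itemgetter('date'))}
print(grouped)
# {'07/01/2012': ['5412 N CLARK', '4801 N BROADWAY'], '07/02/2012': ['5800 E 58TH']}
```

Because `sort` is stable, addresses within each date keep their original relative order.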
Aggieyixin/cjc2016
code/16&17networkx.ipynb
mit
[ "Introduction to Network Science\n\n\nNetwork science: describing node attributes\n\n\nWang Chengjun \nwangchengjun@nju.edu.cn\nComputational Communication http://computational-communication.com\nhttp://networkx.readthedocs.org/en/networkx-1.11/tutorial/", "%matplotlib inline\nimport networkx as nx\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\n\nimport networkx as nx\n\nG=nx.Graph() # G = nx.DiGraph() # directed network\n# add an (isolated) node\nG.add_node(\"spam\")\n# add nodes and edges\nG.add_edge(1,2)\n\nprint(G.nodes())\n\nprint(G.edges())\n\n# draw the network\nnx.draw(G, with_labels = True)", "WWW Data download\nhttp://www3.nd.edu/~networks/resources.htm\nWorld-Wide-Web: [README] [DATA]\nRéka Albert, Hawoong Jeong and Albert-László Barabási:\nDiameter of the World Wide Web Nature 401, 130 (1999) [ PDF ]\nHomework:\n\nDownload the www data\nBuild a networkx graph object g (hint: it is a directed network)\nAdd the www data to g\nCount the number of nodes and the number of edges in the network", "G = nx.Graph()\nn = 0\nwith open ('/Users/chengjun/bigdata/www.dat.gz.txt') as f:\n for line in f:\n n += 1\n if n % 10**4 == 0:\n flushPrint(n)\n x, y = line.rstrip().split(' ')\n G.add_edge(x,y)\n\nnx.info(G)", "Describing the network\nnx.karate_club_graph\nWe start from karate_club_graph to explore the basic properties of a network.", "G = nx.karate_club_graph()\n \nclubs = [G.node[i]['club'] for i in G.nodes()]\ncolors = []\nfor j in clubs:\n if j == 'Mr. 
Hi':\n colors.append('r')\n else:\n colors.append('g')\n \nnx.draw(G, with_labels = True, node_color = colors)\n\nG.node[1] # attributes of node 1\n\nG.edge.keys()[:3] # ids of the first three edges\n\nnx.info(G)\n\nG.nodes()[:10]\n\nG.edges()[:3]\n\nG.neighbors(1)\n\nnx.average_shortest_path_length(G) ", "Network diameter", "nx.diameter(G)# returns the diameter of graph G (the length of the longest shortest path)", "Density", "nx.density(G)\n\nnodeNum = len(G.nodes())\nedgeNum = len(G.edges())\n\n2.0*edgeNum/(nodeNum * (nodeNum - 1))", "Homework:\n\nCompute the network density of the www network\n\nClustering coefficient", "cc = nx.clustering(G)\ncc.items()[:5]\n\nplt.hist(cc.values(), bins = 15)\nplt.xlabel('$Clustering \\, Coefficient, \\, C$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\nplt.show()", "Spacing in Math Mode\nIn a math environment, LaTeX ignores the spaces you type and puts in the spacing that it thinks is best. LaTeX formats mathematics the way it's done in mathematics texts. If you want different spacing, LaTeX provides the following four commands for use in math mode:\n\\; - a thick space\n\\: - a medium space\n\\, - a thin space\n\\! - a negative thin space\nAssortativity coefficient", "# M. E. J. 
Newman, Mixing patterns in networks Physical Review E, 67 026126, 2003\nnx.degree_assortativity_coefficient(G) # compute the degree assortativity coefficient of a graph\n\nGe=nx.Graph()\nGe.add_nodes_from([0,1],size=2)\nGe.add_nodes_from([2,3],size=3)\nGe.add_edges_from([(0,1),(2,3)])\nprint(nx.numeric_assortativity_coefficient(Ge,'size'))\n\n# plot degree correlation \nfrom collections import defaultdict\nimport numpy as np\n\nl=defaultdict(list)\ng = nx.karate_club_graph()\n\nfor i in g.nodes():\n    k = []\n    for j in g.neighbors(i):\n        k.append(g.degree(j))\n    l[g.degree(i)].append(np.mean(k)) \n    #l.append([g.degree(i),np.mean(k)])\n    \nx = l.keys()\ny = [np.mean(i) for i in l.values()]\n\n#x, y = np.array(l).T\nplt.plot(x, y, 'r-o', label = '$Karate\\;Club$')\nplt.legend(loc=1,fontsize=10, numpoints=1)\nplt.xscale('log'); plt.yscale('log')\nplt.ylabel(r'$<knn(k)>$', fontsize = 20)\nplt.xlabel('$k$', fontsize = 20)\nplt.show()", "Degree centrality measures\n\ndegree_centrality(G) # Compute the degree centrality for nodes.\nin_degree_centrality(G) # Compute the in-degree centrality for nodes.\nout_degree_centrality(G) # Compute the out-degree centrality for nodes.\ncloseness_centrality(G[, v, weighted_edges]) # Compute closeness centrality for nodes.\nbetweenness_centrality(G[, normalized, ...]) # Betweenness centrality measures.", "dc = nx.degree_centrality(G)\ncloseness = nx.closeness_centrality(G)\nbetweenness = nx.betweenness_centrality(G)\n\n\nfig = plt.figure(figsize=(15, 4),facecolor='white')\nax = plt.subplot(1, 3, 1)\nplt.hist(dc.values(), bins = 20)\nplt.xlabel('$Degree \\, Centrality$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\n\nax = plt.subplot(1, 3, 2)\nplt.hist(closeness.values(), bins = 20)\nplt.xlabel('$Closeness \\, Centrality$', fontsize = 20)\n\nax = plt.subplot(1, 3, 3)\nplt.hist(betweenness.values(), bins = 20)\nplt.xlabel('$Betweenness \\, Centrality$', fontsize = 
20)\nplt.tight_layout()\nplt.show()\n\n\nfig = plt.figure(figsize=(15, 8),facecolor='white')\n\nfor k in betweenness:\n    plt.scatter(dc[k], closeness[k], s = betweenness[k]*1000)\n    plt.text(dc[k], closeness[k]+0.02, str(k))\nplt.xlabel('$Degree \\, Centrality$', fontsize = 20)\nplt.ylabel('$Closeness \\, Centrality$', fontsize = 20)\nplt.show()", "Degree distribution", "from collections import defaultdict\nimport numpy as np\n\ndef plotDegreeDistribution(G):\n    degs = defaultdict(int)\n    for i in G.degree().values(): degs[i]+=1\n    items = sorted ( degs.items () )\n    x, y = np.array(items).T\n    plt.plot(x, y, 'b-o')\n    plt.xscale('log')\n    plt.yscale('log')\n    plt.legend(['Degree'])\n    plt.xlabel('$K$', fontsize = 20)\n    plt.ylabel('$P_K$', fontsize = 20)\n    plt.title('$Degree\\,Distribution$', fontsize = 20)\n    plt.show() \n    \nG = nx.karate_club_graph() \nplotDegreeDistribution(G)", "Introduction to Network Science Theory\n\n\nNetwork science: analyzing network structure\n\n\n王成军 \nwangchengjun@nju.edu.cn\n计算传播网 http://computational-communication.com\nRegular networks", "import networkx as nx\nimport matplotlib.pyplot as plt\nRG = nx.random_graphs.random_regular_graph(3,200) # generate a regular graph RG with 200 nodes, each having 3 neighbors\npos = nx.spectral_layout(RG) # define a layout; the spectral layout is used here (other layouts are introduced later; note how the figures differ)\nnx.draw(RG,pos,with_labels=False,node_size = 30) # draw the regular graph; with_labels controls whether nodes are labeled (numbered), node_size is the node diameter\nplt.show() # show the figure\n\nplotDegreeDistribution(RG)", "ER random networks", "import networkx as nx\nimport matplotlib.pyplot as plt\nER = nx.random_graphs.erdos_renyi_graph(200,0.05) # generate a random graph with 200 nodes, each pair connected with probability 0.05\npos = nx.shell_layout(ER) # define a layout; the shell layout is used here\nnx.draw(ER,pos,with_labels=False,node_size = 30) \nplt.show()\n\nplotDegreeDistribution(ER)", "Small-world networks", "import networkx as 
nx\nimport matplotlib.pyplot as plt\nWS = nx.random_graphs.watts_strogatz_graph(200,4,0.3) # generate a small-world network with 200 nodes, 4 nearest neighbors per node, and rewiring probability 0.3\npos = nx.circular_layout(WS) # define a layout; the circular layout is used here\nnx.draw(WS,pos,with_labels=False,node_size = 30) # draw the figure\nplt.show()\n\nplotDegreeDistribution(WS)\n\nnx.diameter(WS)\n\ncc = nx.clustering(WS)\nplt.hist(cc.values(), bins = 10)\nplt.xlabel('$Clustering \\, Coefficient, \\, C$', fontsize = 20)\nplt.ylabel('$Frequency, \\, F$', fontsize = 20)\nplt.show()\n\nimport numpy as np\nnp.mean(cc.values())", "BA networks", "import networkx as nx\nimport matplotlib.pyplot as plt\nBA= nx.random_graphs.barabasi_albert_graph(200,2) # generate a BA scale-free network with n=200, m=2\npos = nx.spring_layout(BA) # define a layout; the spring layout is used here\nnx.draw(BA,pos,with_labels=False,node_size = 30) # draw the figure\nplt.show()\n\nplotDegreeDistribution(BA)\n\nBA= nx.random_graphs.barabasi_albert_graph(20000,2) # generate a BA scale-free network with n=20000, m=2\nplotDegreeDistribution(BA)", "Assignment:\n\nRead Barabasi (1999) Internet Diameter of the world wide web. Nature. 401\nPlot the out-degree and in-degree distributions of the WWW network\nUse the BA model to generate networks with N nodes and power-law exponent $\\gamma$\nCompute how the average path length d depends on the number of nodes\n\n<img src = './img/diameter.png' width = 10000>", "Ns = [i*10 for i in [1, 10, 100, 1000]]\nds = []\nfor N in Ns:\n    print(N)\n    BA= nx.random_graphs.barabasi_albert_graph(N,2)\n    d = nx.average_shortest_path_length(BA)\n    ds.append(d)\n\nplt.plot(Ns, ds, 'r-o')\nplt.xlabel('$N$', fontsize = 20)\nplt.ylabel('$<d>$', fontsize = 20)\nplt.xscale('log')\nplt.show()", "More\nhttp://computational-communication.com/wiki/index.php?title=Networkx", "# subgraph\nG = nx.Graph() # or DiGraph, MultiGraph, MultiDiGraph, etc\nG.add_path([0,1,2,3])\nH = G.subgraph([0,1,2])\nG.edges(), H.edges()", 
"ๅ‚่€ƒ\n\nhttps://networkx.readthedocs.org/en/stable/tutorial/tutorial.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/structured/labs/4c_keras_wide_and_deep_babyweight.ipynb
apache-2.0
[ "LAB 4c: Create Keras Wide and Deep model.\nLearning Objectives\n\nSet CSV Columns, label column, and column defaults\nMake dataset of features and label from CSV files\nCreate input layers for raw features\nCreate feature columns for inputs\nCreate wide layer, deep dense hidden layers, and output layer\nCreate custom evaluation metric\nBuild wide and deep model tying all of the pieces together\nTrain and evaluate\n\nIntroduction\nIn this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.\nWe'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.\nEach learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.\nLoad necessary libraries", "import datetime\nimport os\nimport shutil\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\nprint(tf.__version__)", "Set your bucket:", "BUCKET = # REPLACE BY YOUR BUCKET\n\nos.environ['BUCKET'] = BUCKET", "Verify CSV files exist\nIn an earlier lab of this series, 1b_prepare_data_babyweight, we sampled from BigQuery our train, eval, and test CSV files. 
Verify that they exist; otherwise, go back to that lab and create them.", "TRAIN_DATA_PATH = \"gs://{bucket}/babyweight/data/train*.csv\".format(bucket=BUCKET)\nEVAL_DATA_PATH = \"gs://{bucket}/babyweight/data/eval*.csv\".format(bucket=BUCKET)\n\n!gsutil ls $TRAIN_DATA_PATH\n\n!gsutil ls $EVAL_DATA_PATH", "Create Keras model\nLab Task #1: Set CSV Columns, label column, and column defaults.\nNow that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.\n* CSV_COLUMNS is the list of header names of our columns. Make sure that they are in the same order as in the CSV files.\n* LABEL_COLUMN is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.\n* DEFAULTS is a list with the same length as CSV_COLUMNS, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.", "# Determine CSV, label, and key columns\n# TODO: Create list of string column headers, make sure order matches.\nCSV_COLUMNS = [\"\"]\n\n# TODO: Add string name for label column\nLABEL_COLUMN = \"\"\n\n# Set default values for each CSV column as a list of lists.\n# Treat is_male and plurality as strings.\nDEFAULTS = []", "Lab Task #2: Make dataset of features and label from CSV files.\nNext, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. 
We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.", "def features_and_labels(row_data):\n \"\"\"Splits features and labels from feature dictionary.\n\n Args:\n row_data: Dictionary of CSV column names and tensor values.\n Returns:\n Dictionary of feature tensors and label tensor.\n \"\"\"\n label = row_data.pop(LABEL_COLUMN)\n\n return row_data, label # features, label\n\n\ndef load_dataset(pattern, batch_size=1, mode='eval'):\n \"\"\"Loads dataset using the tf.data API from CSV files.\n\n Args:\n pattern: str, file pattern to glob into list of files.\n batch_size: int, the number of examples per batch.\n mode: 'eval' | 'train' to determine if training or evaluating.\n Returns:\n `Dataset` object.\n \"\"\"\n # TODO: Make a CSV dataset\n dataset = tf.data.experimental.make_csv_dataset()\n\n # TODO: Map dataset to features and label\n dataset = dataset.map() # features, label\n\n # Shuffle and repeat for training\n if mode == 'train':\n dataset = dataset.shuffle(buffer_size=1000).repeat()\n\n # Take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(buffer_size=1)\n\n return dataset", "Lab Task #3: Create input layers for raw features.\nWe'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:\n* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.\n* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). 
It will be autogenerated if it isn't provided.\n* dtype: The data type expected by the input, as a string (float32, float64, int32...)", "def create_input_layers():\n \"\"\"Creates dictionary of input layers for each feature.\n\n Returns:\n Dictionary of `tf.Keras.layers.Input` layers for each feature.\n \"\"\"\n # TODO: Create dictionary of tf.keras.layers.Input for each dense feature\n deep_inputs = {}\n\n # TODO: Create dictionary of tf.keras.layers.Input for each sparse feature\n wide_inputs = {}\n\n inputs = {**wide_inputs, **deep_inputs}\n\n return inputs", "Lab Task #4: Create feature columns for inputs.\nNext, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.", "def create_feature_columns(nembeds):\n \"\"\"Creates wide and deep dictionaries of feature columns from inputs.\n\n Args:\n nembeds: int, number of dimensions to embed categorical column down to.\n Returns:\n Wide and deep dictionaries of feature columns.\n \"\"\"\n # TODO: Create deep feature columns for numeric features\n deep_fc = {}\n\n # TODO: Create wide feature columns for categorical features\n wide_fc = {}\n\n # TODO: Bucketize the float fields. This makes them wide\n\n # TODO: Cross all the wide cols, have to do the crossing before we one-hot\n\n # TODO: Embed cross and add to deep feature columns\n\n return wide_fc, deep_fc", "Lab Task #5: Create wide and deep model and output layer.\nSo we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. 
This is regression so make sure the output layer activation is correct and that the shape is right.", "def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):\n \"\"\"Creates model architecture and returns outputs.\n\n Args:\n wide_inputs: Dense tensor used as inputs to wide side of model.\n deep_inputs: Dense tensor used as inputs to deep side of model.\n dnn_hidden_units: List of integers where length is number of hidden\n layers and ith element is the number of neurons at ith layer.\n Returns:\n Dense tensor output from the model.\n \"\"\"\n # Hidden layers for the deep side\n layers = [int(x) for x in dnn_hidden_units]\n deep = deep_inputs\n\n # TODO: Create DNN model for the deep side\n deep_out =\n\n # TODO: Create linear model for the wide side\n wide_out =\n\n # Concatenate the two sides\n both = tf.keras.layers.concatenate(\n inputs=[deep_out, wide_out], name=\"both\")\n\n # TODO: Create final output layer\n\n return output", "Lab Task #6: Create custom evaluation metric.\nWe want to make sure that we have some useful way to measure model performance for us. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset, however, this does not exist as a standard evaluation metric, so we'll have to create our own by using the true and predicted labels.", "def rmse(y_true, y_pred):\n \"\"\"Calculates RMSE evaluation metric.\n\n Args:\n y_true: tensor, true labels.\n y_pred: tensor, predicted labels.\n Returns:\n Tensor with value of RMSE between true and predicted labels.\n \"\"\"\n # TODO: Calculate RMSE from true and predicted labels\n pass", "Lab Task #7: Build wide and deep model tying all of the pieces together.\nExcellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. 
Here we will build the model using tf.keras.models.Model giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.", "def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):\n \"\"\"Builds wide and deep model using Keras Functional API.\n\n Returns:\n `tf.keras.models.Model` object.\n \"\"\"\n # Create input layers\n inputs = create_input_layers()\n\n # Create feature columns\n wide_fc, deep_fc = create_feature_columns(nembeds)\n\n # The constructor for DenseFeatures takes a list of numeric columns\n # The Functional API in Keras requires: LayerConstructor()(inputs)\n\n # TODO: Add wide and deep feature colummns\n wide_inputs = tf.keras.layers.DenseFeatures(\n feature_columns=#TODO, name=\"wide_inputs\")(inputs)\n deep_inputs = tf.keras.layers.DenseFeatures(\n feature_columns=#TODO, name=\"deep_inputs\")(inputs)\n\n # Get output of model given inputs\n output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)\n\n # Build model and compile it all together\n model = tf.keras.models.Model(inputs=inputs, outputs=output)\n\n # TODO: Add custom eval metrics to list\n model.compile(optimizer=\"adam\", loss=\"mse\", metrics=[\"mse\"])\n\n return model\n\nprint(\"Here is our wide and deep architecture so far:\\n\")\nmodel = build_wide_deep_model()\nprint(model.summary())", "We can visualize the wide and deep network using the Keras plot_model utility.", "tf.keras.utils.plot_model(\n model=model, to_file=\"wd_model.png\", show_shapes=False, rankdir=\"LR\")", "Run and evaluate model\nLab Task #8: Train and evaluate.\nWe've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. 
Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the TensorBoard callback.", "TRAIN_BATCH_SIZE = 32\nNUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around\nNUM_EVALS = 5 # how many times to evaluate\n# Enough to get a reasonable sample, but not so much that it slows down\nNUM_EVAL_EXAMPLES = 10000\n\n# TODO: Load training dataset\ntrainds = load_dataset()\n\n# TODO: Load evaluation dataset\nevalds = load_dataset().take(count=NUM_EVAL_EXAMPLES // 1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nlogdir = os.path.join(\n    \"logs\", datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\ntensorboard_callback = tf.keras.callbacks.TensorBoard(\n    log_dir=logdir, histogram_freq=1)\n\n# TODO: Fit model on training dataset and evaluate every so often\nhistory = model.fit()", "Visualize loss curve", "# Plot\nnrows = 1\nncols = 2\nfig = plt.figure(figsize=(10, 5))\n\nfor idx, key in enumerate([\"loss\", \"rmse\"]):\n    ax = fig.add_subplot(nrows, ncols, idx+1)\n    plt.plot(history.history[key])\n    plt.plot(history.history[\"val_{}\".format(key)])\n    plt.title(\"model {}\".format(key))\n    plt.ylabel(key)\n    plt.xlabel(\"epoch\")\n    plt.legend([\"train\", \"validation\"], loc=\"upper left\");", "Save the model", "OUTPUT_DIR = \"babyweight_trained_wd\"\nshutil.rmtree(OUTPUT_DIR, ignore_errors=True)\nEXPORT_PATH = os.path.join(\n    OUTPUT_DIR, datetime.datetime.now().strftime(\"%Y%m%d%H%M%S\"))\ntf.saved_model.save(\n    obj=model, export_dir=EXPORT_PATH) # with default serving function\nprint(\"Exported trained model to {}\".format(EXPORT_PATH))\n\n!ls $EXPORT_PATH", "Lab Summary:\nIn this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created input layers for the raw features. 
Next, we set up feature columns for the model inputs and built a wide and deep neural network in Keras. We created a custom evaluation metric and built our wide and deep model. Finally, we trained and evaluated our model.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Kulbear/deep-learning-nano-foundation
mnist/Handwritten Digit Recognition with TFLearn.ipynb
mit
[ "Handwritten Number Recognition with TFLearn and MNIST\nIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. \nThis kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.\nWe'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.", "# Import Numpy, TensorFlow, TFLearn, and MNIST data\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nimport tflearn.datasets.mnist as mnist", "Retrieving training and test data\nThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.\nEach MNIST data point has:\n1. an image of a handwritten digit and \n2. a corresponding label (a number 0-9 that identifies the image)\nWe'll call the images, which will be the input to our neural network, X and their corresponding labels Y.\nWe're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].\nFlattened data\nFor this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. 
\nFlattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.", "# Retrieve the training and test data\ntrainX, trainY, testX, testY = mnist.load_data(one_hot=True)", "Visualize the training data\nProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function display_digit will display that training image along with its corresponding label in the title.", "# Visualizing the data\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# Function for displaying a training image by its index in the MNIST set\ndef display_digit(index):\n    label = trainY[index].argmax(axis=0)\n    # Reshape the 784-element array into a 28x28 image\n    image = trainX[index].reshape([28,28])\n    plt.title('Training data, index: %d, Label: %d' % (index, label))\n    plt.imshow(image, cmap='gray_r')\n    plt.show()\n    \n# Display the first (index 0) training image\ndisplay_digit(0)", "Building the network\nTFLearn lets you build the network by defining the layers in that network. \nFor this example, you'll define:\n\nThe input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. \nHidden layers, which recognize patterns in data and connect the input to the output layer, and\nThe output layer, which defines how the network learns and outputs a label for a given image.\n\nLet's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,\nnet = tflearn.input_data([None, 100])\nwould create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. 
For this example, we're using 784-element-long vectors to encode our input data, so we need 784 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units). \nThen, to set how you train the network, use:\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with categorical cross-entropy.\n\nFinally, you put all this together to create the model with tflearn.DNN(net).", "# Define the neural network\ndef build_model():\n    # This resets all parameters and variables, leave this here\n    tf.reset_default_graph()\n    \n    # Inputs\n    net = tflearn.input_data([None, trainX.shape[1]])\n\n    # Hidden layer(s)\n    net = tflearn.fully_connected(net, 160, activation='ReLU')\n    net = tflearn.fully_connected(net, 64, activation='ReLU')\n    \n    # Output layer and training model\n    net = tflearn.fully_connected(net, 10, activation='softmax')\n    net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')\n    \n    model = tflearn.DNN(net)\n    return model\n\n# Build the model\nmodel = build_model()", "Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. 
Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1, which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.\nToo few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!", "# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=200)\n\n# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)\n\n# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=75, n_epoch=200)\n\n# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)\n\n# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=50, n_epoch=75)\n\n# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. 
That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)\n\n# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=25, n_epoch=100)", "Testing\nAfter you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.\nA good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!", "# Compare the labels that our model predicts with the actual labels\n\n# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.\npredictions = np.array(model.predict(testX)).argmax(axis=1)\n\n# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels\nactual = testY.argmax(axis=1)\ntest_accuracy = np.mean(predictions == actual, axis=0)\n\n# Print out the result\nprint(\"Test accuracy: \", test_accuracy)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
davidzchen/tensorflow
tensorflow/lite/g3doc/performance/post_training_quant.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Post-training dynamic range quantization\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/performance/post_training_quant\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_quant.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nTensorFlow Lite now supports\nconverting weights to 8 bit precision as part of model conversion from\ntensorflow graphdefs to TensorFlow Lite's flat buffer format. Dynamic range quantization achieves a 4x reduction in the model size. 
In addition, TFLite supports on the fly quantization and dequantization of activations to allow for:\n\nUsing quantized kernels for faster implementation when available.\nMixing of floating-point kernels with quantized kernels for different parts\n of the graph.\n\nThe activations are always stored in floating point. For ops that\nsupport quantized kernels, the activations are quantized to 8 bits of precision\ndynamically prior to processing and are de-quantized to float precision after\nprocessing. Depending on the model being converted, this can give a speedup over\npure floating point computation.\nIn contrast to\nquantization aware training\n, the weights are quantized post training and the activations are quantized dynamically \nat inference in this method.\nTherefore, the model weights are not retrained to compensate for quantization\ninduced errors. It is important to check the accuracy of the quantized model to\nensure that the degradation is acceptable.\nThis tutorial trains an MNIST model from scratch, checks its accuracy in\nTensorFlow, and then converts the model into a Tensorflow Lite flatbuffer\nwith dynamic range quantization. 
Finally, it checks the\naccuracy of the converted model and compares it to the original float model.\nBuild an MNIST model\nSetup", "import logging\nlogging.getLogger(\"tensorflow\").setLevel(logging.DEBUG)\n\nimport tensorflow as tf\nfrom tensorflow import keras\nimport numpy as np\nimport pathlib", "Train a TensorFlow model", "# Load MNIST dataset\nmnist = keras.datasets.mnist\n(train_images, train_labels), (test_images, test_labels) = mnist.load_data()\n\n# Normalize the input image so that each pixel value is between 0 and 1.\ntrain_images = train_images / 255.0\ntest_images = test_images / 255.0\n\n# Define the model architecture\nmodel = keras.Sequential([\n keras.layers.InputLayer(input_shape=(28, 28)),\n keras.layers.Reshape(target_shape=(28, 28, 1)),\n keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),\n keras.layers.MaxPooling2D(pool_size=(2, 2)),\n keras.layers.Flatten(),\n keras.layers.Dense(10)\n])\n\n# Train the digit classification model\nmodel.compile(optimizer='adam',\n loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=['accuracy'])\nmodel.fit(\n train_images,\n train_labels,\n epochs=1,\n validation_data=(test_images, test_labels)\n)", "Since you trained the model for just a single epoch, it only trains to ~96% accuracy.\nConvert to a TensorFlow Lite model\nUsing the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.\nNow load the model using the TFLiteConverter:", "converter = tf.lite.TFLiteConverter.from_keras_model(model)\ntflite_model = converter.convert()", "Write it out to a tflite file:", "tflite_models_dir = pathlib.Path(\"/tmp/mnist_tflite_models/\")\ntflite_models_dir.mkdir(exist_ok=True, parents=True)\n\ntflite_model_file = tflite_models_dir/\"mnist_model.tflite\"\ntflite_model_file.write_bytes(tflite_model)", "To quantize the model on export, set the optimizations flag to optimize for size:", "converter.optimizations = 
[tf.lite.Optimize.DEFAULT]\ntflite_quant_model = converter.convert()\ntflite_model_quant_file = tflite_models_dir/\"mnist_model_quant.tflite\"\ntflite_model_quant_file.write_bytes(tflite_quant_model)", "Note how the resulting file is approximately 1/4 the size.", "!ls -lh {tflite_models_dir}", "Run the TFLite models\nRun the TensorFlow Lite model using the Python TensorFlow Lite\nInterpreter.\nLoad the model into an interpreter", "interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))\ninterpreter.allocate_tensors()\n\ninterpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))\ninterpreter_quant.allocate_tensors()", "Test the model on one image", "test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)\n\ninput_index = interpreter.get_input_details()[0][\"index\"]\noutput_index = interpreter.get_output_details()[0][\"index\"]\n\ninterpreter.set_tensor(input_index, test_image)\ninterpreter.invoke()\npredictions = interpreter.get_tensor(output_index)\n\nimport matplotlib.pylab as plt\n\nplt.imshow(test_images[0])\ntemplate = \"True:{true}, predicted:{predict}\"\n_ = plt.title(template.format(true=str(test_labels[0]),\n predict=str(np.argmax(predictions[0]))))\nplt.grid(False)", "Evaluate the models", "# A helper function to evaluate the TF Lite model using the \"test\" dataset.\ndef evaluate_model(interpreter):\n input_index = interpreter.get_input_details()[0][\"index\"]\n output_index = interpreter.get_output_details()[0][\"index\"]\n\n # Run predictions on every image in the \"test\" dataset.\n prediction_digits = []\n for test_image in test_images:\n # Pre-processing: add batch dimension and convert to float32 to match with\n # the model's input data format.\n test_image = np.expand_dims(test_image, axis=0).astype(np.float32)\n interpreter.set_tensor(input_index, test_image)\n\n # Run inference.\n interpreter.invoke()\n\n # Post-processing: remove batch dimension and find the digit with highest\n # probability.\n 
output = interpreter.tensor(output_index)\n digit = np.argmax(output()[0])\n prediction_digits.append(digit)\n\n # Compare prediction results with ground truth labels to calculate accuracy.\n accurate_count = 0\n for index in range(len(prediction_digits)):\n if prediction_digits[index] == test_labels[index]:\n accurate_count += 1\n accuracy = accurate_count * 1.0 / len(prediction_digits)\n\n return accuracy\n\nprint(evaluate_model(interpreter))", "Repeat the evaluation on the dynamic range quantized model to obtain:", "print(evaluate_model(interpreter_quant))", "In this example, the compressed model shows no difference in accuracy.\nOptimizing an existing model\nResnets with pre-activation layers (Resnet-v2) are widely used for vision applications.\n A pre-trained frozen graph for resnet-v2-101 is available on\n TensorFlow Hub.\nYou can convert the frozen graph to a TensorFlow Lite flatbuffer with quantization by:", "import tensorflow_hub as hub\n\nresnet_v2_101 = tf.keras.Sequential([\n keras.layers.InputLayer(input_shape=(224, 224, 3)),\n hub.KerasLayer(\"https://tfhub.dev/google/imagenet/resnet_v2_101/classification/4\")\n])\n\nconverter = tf.lite.TFLiteConverter.from_keras_model(resnet_v2_101)\n\n# Convert to TF Lite without quantization\nresnet_tflite_file = tflite_models_dir/\"resnet_v2_101.tflite\"\nresnet_tflite_file.write_bytes(converter.convert())\n\n# Convert to TF Lite with quantization\nconverter.optimizations = [tf.lite.Optimize.DEFAULT]\nresnet_quantized_tflite_file = tflite_models_dir/\"resnet_v2_101_quantized.tflite\"\nresnet_quantized_tflite_file.write_bytes(converter.convert())\n\n!ls -lh {tflite_models_dir}/*.tflite", "The model size reduces from 171 MB to 43 MB.\nThe accuracy of this model on ImageNet can be evaluated using the scripts provided for TFLite accuracy measurement.\nThe optimized model top-1 accuracy is 76.8, the same as the floating point model." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
charliememory/AutonomousDriving
CarND-Traffic-Sign-Classifier-Project/traffic-sign-classification-with-keras_transfer_cifar10.ipynb
gpl-3.0
[ "Traffic Sign Classification with Keras\nKeras exists to make coding deep neural networks simpler. To demonstrate just how easy it is, youโ€™re going to use Keras to build a convolutional neural network in a few dozen lines of code.\nYouโ€™ll be connecting the concepts from the previous lessons to the methods that Keras provides.\nDataset\nThe network you'll build with Keras is similar to the example in Kerasโ€™s GitHub repository that builds out a convolutional neural network for MNIST. \nHowever, instead of using the MNIST dataset, you're going to use the German Traffic Sign Recognition Benchmark dataset that you've used previously.\nYou can download pickle files with sanitized traffic sign data here:", "from urllib.request import urlretrieve\nfrom os.path import isfile\nfrom tqdm import tqdm\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('train.p'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Train Dataset') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/train.p',\n 'train.p',\n pbar.hook)\n\nif not isfile('test.p'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Test Dataset') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/udacity-sdc/datasets/german_traffic_sign_benchmark/test.p',\n 'test.p',\n pbar.hook)\n\nprint('Training and Test data downloaded.')", "Overview\nHere are the steps you'll take to build the network:\n\nLoad the training data.\nPreprocess the data.\nBuild a feedforward neural network to classify traffic signs.\nBuild a convolutional neural network to classify traffic signs.\nEvaluate the final neural network on testing data.\n\nKeep an eye on the networkโ€™s accuracy over time. 
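In Keras this is straightforward, because fit() returns a History object whose .history dict maps each metric name to a list of per-epoch values. A minimal sketch of reading it (the dict literal below uses made-up sample numbers, and the 'acc'/'val_acc' keys are the Keras 1.x metric names used later in this notebook):

```python
# Made-up per-epoch metrics shaped like history.history from a Keras fit() call;
# the values are invented for illustration, not real training results.
history_history = {
    "acc":     [0.71, 0.85, 0.91],  # training accuracy after each epoch
    "val_acc": [0.69, 0.82, 0.88],  # validation accuracy after each epoch
}

# The last entry of each list is the accuracy after the final epoch.
final_train_acc = history_history["acc"][-1]
final_val_acc = history_history["val_acc"][-1]
print(final_train_acc, final_val_acc)  # 0.91 0.88
```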
Once the accuracy reaches the 98% range, you can be confident that youโ€™ve built and trained an effective model.", "import pickle\nimport numpy as np\nimport math\n\n# Fix error with TF and Keras\nimport tensorflow as tf\ntf.python.control_flow_ops = tf\n\nprint('Modules loaded.')", "Load the Data\nStart by importing the data from the pickle file.", "with open('train.p', 'rb') as f:\n data = pickle.load(f)\n\n# TODO: Load the feature data to the variable X_train\n X_train = data['features']\n\n# TODO: Load the label data to the variable y_train\n y_train = data['labels']\n\n\n# STOP: Do not change the tests below. Your implementation should pass these tests. \nassert np.array_equal(X_train, data['features']), 'X_train not set to data[\\'features\\'].'\nassert np.array_equal(y_train, data['labels']), 'y_train not set to data[\\'labels\\'].'\nprint('Tests passed.')", "Preprocess the Data\n\nShuffle the data\nNormalize the features using Min-Max scaling between -0.5 and 0.5\nOne-Hot Encode the labels\n\nShuffle the data\nHint: You can use the scikit-learn shuffle function to shuffle the data.", "# TODO: Shuffle the data\nfrom sklearn.utils import shuffle\nX_train, y_train = shuffle(X_train, y_train)\n\n# STOP: Do not change the tests below. Your implementation should pass these tests. \nassert X_train.shape == data['features'].shape, 'X_train has changed shape. The shape shouldn\\'t change when shuffling.'\nassert y_train.shape == data['labels'].shape, 'y_train has changed shape. 
The shape shouldn\\'t change when shuffling.'\nassert not np.array_equal(X_train, data['features']), 'X_train not shuffled.'\nassert not np.array_equal(y_train, data['labels']), 'y_train not shuffled.'\nprint('Tests passed.')", "Normalize the features\nHint: You solved this in TensorFlow lab Problem 1.", "# TODO: Normalize the data features to the variable X_normalized\nimport cv2\ndef gray_normalize(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n for i in range(image_data.shape[0]):\n gray = cv2.resize(cv2.cvtColor(image_data[i], cv2.COLOR_RGB2GRAY), (32, 32)).reshape(1,32,32,1)\n if i == 0:\n X_normalized = gray\n else:\n X_normalized = np.append(X_normalized, gray, axis=0)\n # Min-Max scaling for grayscale image data\n x_min = np.min(X_normalized)\n x_max = np.max(X_normalized)\n a = -0.5\n b = 0.5\n image_data_rescale = a + (X_normalized - x_min)*(b-a)/(x_max - x_min)\n return image_data_rescale\n\ndef normalize(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # Min-Max scaling for image data\n x_min = np.min(image_data)\n x_max = np.max(image_data)\n a = -0.5\n b = 0.5\n image_data_rescale = a + (image_data - x_min)*(b-a)/(x_max - x_min)\n return image_data_rescale\nX_normalized = normalize(X_train)\nprint('Data normalization finished')\n\n# STOP: Do not change the tests below. Your implementation should pass these tests. \nassert math.isclose(np.min(X_normalized), -0.5, abs_tol=1e-5) and math.isclose(np.max(X_normalized), 0.5, abs_tol=1e-5), 'The range of the training data is: {} to {}. 
It must be -0.5 to 0.5'.format(np.min(X_normalized), np.max(X_normalized))\nprint('Tests passed.')", "One-Hot Encode the labels\nHint: You can use the scikit-learn LabelBinarizer function to one-hot encode the labels.", "# TODO: One Hot encode the labels to the variable y_one_hot\n# Turn labels into numbers and apply One-Hot Encoding\nfrom sklearn.preprocessing import LabelBinarizer\nencoder = LabelBinarizer()\nencoder.fit(y_train)\ny_one_hot = encoder.transform(y_train)\n\n# STOP: Do not change the tests below. Your implementation should pass these tests. \nimport collections\n\nassert y_one_hot.shape == (39209, 43), 'y_one_hot is not the correct shape. It\\'s {}, it should be (39209, 43)'.format(y_one_hot.shape)\nassert next((False for y in y_one_hot if collections.Counter(y) != {0: 42, 1: 1}), True), 'y_one_hot not one-hot encoded.'\nprint('Tests passed.')", "Keras Sequential Model\n```python\nfrom keras.models import Sequential\n# Create the Sequential model\nmodel = Sequential()\n```\nThe `keras.models.Sequential` class is a wrapper for the neural network model. Just like many of the class models in scikit-learn, it provides common functions like `fit()`, `evaluate()`, and `compile()`. We'll cover these functions as we get to them. Let's start looking at the layers of the model.\nKeras Layer\nA Keras layer is just like a neural network layer. It can be fully connected, max pool, activation, etc. You can add a layer to the model using the model's add() function. 
For example, a simple model would look like this:\n```python\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten\n\n# Create the Sequential model\nmodel = Sequential()\n\n# 1st Layer - Add a flatten layer\nmodel.add(Flatten(input_shape=(32, 32, 3)))\n\n# 2nd Layer - Add a fully connected layer\nmodel.add(Dense(100))\n\n# 3rd Layer - Add a ReLU activation layer\nmodel.add(Activation('relu'))\n\n# 4th Layer - Add a fully connected layer\nmodel.add(Dense(60))\n\n# 5th Layer - Add a ReLU activation layer\nmodel.add(Activation('relu'))\n```\nKeras will automatically infer the shape of all layers after the first layer. This means you only have to set the input dimensions for the first layer.\nThe first layer from above, model.add(Flatten(input_shape=(32, 32, 3))), sets the input dimension to (32, 32, 3) and output dimension to (3072=32*32*3). The second layer takes in the output of the first layer and sets the output dimensions to (100). This chain of passing output to the next layer continues until the last layer, which is the output of the model.\nBuild a Multi-Layer Feedforward Network\nBuild a multi-layer feedforward neural network to classify the traffic sign images.\n\nSet the first layer to a Flatten layer with the input_shape set to (32, 32, 3)\nSet the second layer to a Dense layer with a width of 128.\nUse a ReLU activation function after the second layer.\nSet the output layer width to 43, since there are 43 classes in the dataset.\nUse a softmax activation function after the output layer.\n\nTo get started, review the Keras documentation about models and layers.\nThe Keras example of a Multi-Layer Perceptron network is similar to what you need to do here. 
Use that as a guide, but keep in mind that there are a number of differences.", "# TODO: Build a Multi-layer feedforward neural network with Keras here.\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten\n\n# Create the Sequential model\nmodel = Sequential()\n\n# 1st Layer - Add a flatten layer\nmodel.add(Flatten(input_shape=(32, 32, 3)))\n\n# 2nd Layer - Add a fully connected layer\nmodel.add(Dense(128))\n\n# 3rd Layer - Add a ReLU activation layer\nmodel.add(Activation('relu'))\n\n# 4th Layer - Add a fully connected layer\nmodel.add(Dense(43))\n\n# 5th Layer - Add a softmax activation layer\nmodel.add(Activation('softmax'))\n\n# STOP: Do not change the tests below. Your implementation should pass these tests.\nfrom keras.layers.core import Dense, Activation, Flatten\nfrom keras.activations import relu, softmax\n\ndef check_layers(layers, true_layers):\n assert len(true_layers) != 0, 'No layers found'\n for layer_i in range(len(layers)):\n assert isinstance(true_layers[layer_i], layers[layer_i]), 'Layer {} is not a {} layer'.format(layer_i+1, layers[layer_i].__name__)\n assert len(true_layers) == len(layers), '{} layers found, should be {} layers'.format(len(true_layers), len(layers))\n\ncheck_layers([Flatten, Dense, Activation, Dense, Activation], model.layers)\n\nassert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'\nassert model.layers[1].output_shape == (None, 128), 'Second layer output is wrong, it should be (128)'\nassert model.layers[2].activation == relu, 'Third layer not a relu activation layer'\nassert model.layers[3].output_shape == (None, 43), 'Fourth layer output is wrong, it should be (43)'\nassert model.layers[4].activation == softmax, 'Fifth layer not a softmax activation layer'\nprint('Tests passed.')", "Training a Sequential Model\nYou built a multi-layer neural network in Keras, now let's look at training a neural network.\n```python\nfrom 
keras.models import Sequential\nfrom keras.layers.core import Dense, Activation\nmodel = Sequential()\n...\n# Configures the learning process and metrics\nmodel.compile('sgd', 'mean_squared_error', ['accuracy'])\n# Train the model\n# History is a record of training loss and metrics\nhistory = model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2)\n# Calculate test score\ntest_score = model.evaluate(x_test_data, Y_test_data)\n```\nThe code above configures, trains, and tests the model. The line `model.compile('sgd', 'mean_squared_error', ['accuracy'])` configures the model's optimizer to `'sgd'` (stochastic gradient descent), the loss to `'mean_squared_error'`, and the metric to `'accuracy'`. \nYou can find more optimizers here, loss functions here, and more metrics here.\nTo train the model, use the fit() function as shown in model.fit(x_train_data, Y_train_data, batch_size=128, nb_epoch=2, validation_split=0.2). The validation_split parameter will split a percentage of the training dataset to be used to validate the model. The model can be further tested with the test dataset using the evaluate() function as shown in the last line.\nTrain the Network\n\nCompile the network using the adam optimizer and categorical_crossentropy loss function.\nTrain the network for ten epochs and validate with 20% of the training data.", "# TODO: Compile and train the model here.\n# Configures the learning process and metrics\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\n\n# Train the model\n# History is a record of training loss and metrics\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=10, validation_split=0.2)\n\n# STOP: Do not change the tests below. 
Your implementation should pass these tests.\nfrom keras.optimizers import Adam\n\nassert model.loss == 'categorical_crossentropy', 'Not using categorical_crossentropy loss function'\nassert isinstance(model.optimizer, Adam), 'Not using adam optimizer'\nassert len(history.history['acc']) == 10, 'You\\'re using {} epochs when you need to use 10 epochs.'.format(len(history.history['acc']))\n\nassert history.history['acc'][-1] > 0.92, 'The training accuracy was: %.3f. It shoud be greater than 0.92' % history.history['acc'][-1]\nassert history.history['val_acc'][-1] > 0.85, 'The validation accuracy is: %.3f. It shoud be greater than 0.85' % history.history['val_acc'][-1]\nprint('Tests passed.')", "Convolutions\n\nRe-construct the previous network\nAdd a convolutional layer with 32 filters, a 3x3 kernel, and valid padding before the flatten layer.\nAdd a ReLU activation after the convolutional layer.\n\nHint 1: The Keras example of a convolutional neural network for MNIST would be a good example to review.", "# TODO: Re-construct the network and add a convolutional layer before the flatten layer.\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten\nfrom keras.layers.convolutional import Convolution2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n 
input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(Flatten(input_shape=(32, 32, 3)))\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dense(43))\nmodel.add(Activation('softmax'))\n\n# STOP: Do not change the tests below. Your implementation should pass these tests.\nfrom keras.layers.core import Dense, Activation, Flatten\nfrom keras.layers.convolutional import Convolution2D\n\ncheck_layers([Convolution2D, Activation, Flatten, Dense, Activation, Dense, Activation], model.layers)\n\nassert model.layers[0].input_shape == (None, 32, 32, 3), 'First layer input shape is wrong, it should be (32, 32, 3)'\nassert model.layers[0].nb_filter == 32, 'Wrong number of filters, it should be 32'\nassert model.layers[0].nb_col == model.layers[0].nb_row == 3, 'Kernel size is wrong, it should be a 3x3'\nassert model.layers[0].border_mode == 'valid', 'Wrong padding, it should be valid'\n\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)\nassert(history.history['val_acc'][-1] > 0.91), \"The validation accuracy is: %.3f. It should be greater than 0.91\" % history.history['val_acc'][-1]\nprint('Tests passed.')", "Pooling\n\nRe-construct the network\nAdd a 2x2 max pooling layer immediately following your convolutional layer. (NO! 
max_pooling layer should follow the activation layer)", "# TODO: Re-construct the network and add a pooling layer after the convolutional layer.\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Flatten(input_shape=(32, 32, 3)))\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dense(43))\nmodel.add(Activation('softmax'))\n\n# STOP: Do not change the tests below. 
Your implementation should pass these tests.\nfrom keras.layers.core import Dense, Activation, Flatten\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\n\ncheck_layers([Convolution2D, Activation, MaxPooling2D, Flatten, Dense, Activation, Dense, Activation], model.layers)\nassert model.layers[2].pool_size == (2, 2), 'Second layer must be a max pool layer with pool size of 2x2'\n\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)\nassert(history.history['val_acc'][-1] > 0.91), \"The validation accuracy is: %.3f. It should be greater than 0.91\" % history.history['val_acc'][-1]\nprint('Tests passed.')", "Dropout\n\nRe-construct the network\nAdd a dropout layer after the pooling layer. Set the dropout rate to 50%.", "# TODO: Re-construct the network and add dropout after the pooling layer.\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n 
input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Dropout(0.5))\nmodel.add(Flatten(input_shape=(32, 32, 3)))\nmodel.add(Dense(128))\nmodel.add(Activation('relu'))\nmodel.add(Dense(43))\nmodel.add(Activation('softmax'))\n\n# STOP: Do not change the tests below. Your implementation should pass these tests.\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\n\ncheck_layers([Convolution2D, Activation, MaxPooling2D, Dropout, Flatten, Dense, Activation, Dense, Activation], model.layers)\nassert model.layers[3].p == 0.5, 'Third layer should be a Dropout of 50%'\n\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=2, validation_split=0.2)\nassert(history.history['val_acc'][-1] > 0.91), \"The validation accuracy is: %.3f. It should be greater than 0.91\" % history.history['val_acc'][-1]\nprint('Tests passed.')", "Use more conv layers.", "## Define traffic sign model\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, 
kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Convolution2D(nb_filters*2, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Flatten(input_shape=(32, 32, 3)))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(128, name=\"dense_1\"))\nmodel.add(Activation('relu'))\n# model.add(Dropout(0.5))\nmodel.add(Dense(43, name=\"dense_2\"))\nmodel.add(Activation('softmax'))\n\n# Train and save traffic sign model\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\n\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2)", "Transfer learning from German Traffic Sign dataset to the Cifar10 dataset\nRun the classifier used in the Traffic Sign project on the Cifar10 dataset. 
Cifar10 images are also (32, 32, 3), the only thing you'll need to change is the number of classes from 43 to 10.", "## Define traffic sign model\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel = Sequential()\n\nmodel.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape, name=\"conv_1\"))\nmodel.add(Activation('relu'))\nmodel.add(MaxPooling2D(pool_size=pool_size))\nmodel.add(Dropout(0.5))\nmodel.add(Flatten(input_shape=(32, 32, 3)))\nmodel.add(Dense(128, name=\"dense_1\"))\nmodel.add(Activation('relu'))\nmodel.add(Dense(43, name=\"dense_2\"))\nmodel.add(Activation('softmax'))\n\n# Train and save traffic sign model\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\n\nmodel.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model.fit(X_normalized, y_one_hot, batch_size=128, nb_epoch=20, validation_split=0.2)\nmodel.save_weights('traffic_weights.h5')\n\n## Download Cifar10 dataset\nfrom keras.datasets import cifar10\nfrom keras.utils import np_utils\n(X_train, y_train), (X_test, y_test) = cifar10.load_data()\n# y_train.shape is 2d, (50000, 1). 
While Keras is smart enough to handle this\n# it's a good idea to flatten the array.\ny_train = y_train.reshape(-1)\ny_test = y_test.reshape(-1)\n\n# One-hot encode the labels: the model below is compiled with\n# categorical_crossentropy, which expects one-hot targets.\ny_train = np_utils.to_categorical(y_train, 10)\ny_test = np_utils.to_categorical(y_test, 10)\n\ndef normalize(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [-0.5, 0.5]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # Min-Max scaling for image data\n x_min = np.min(image_data)\n x_max = np.max(image_data)\n a = -0.5\n b = 0.5\n image_data_rescale = a + (image_data - x_min)*(b-a)/(x_max - x_min)\n return image_data_rescale\nX_train = normalize(X_train)\nX_test = normalize(X_test)\nprint('Data normalization finished')\n\n## Define Cifar10 model\nfrom keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\nfrom keras import backend as K\n\n# input image dimensions\nimg_rows, img_cols = 32, 32\n# number of convolutional filters to use\nnb_filters = 32\n# size of pooling area for max pooling\npool_size = (2, 2)\n# convolution kernel size\nkernel_size = (3, 3)\n\nif K.image_dim_ordering() == 'th':\n X_normalized = X_normalized.reshape(X_normalized.shape[0], 3, img_rows, img_cols)\n input_shape = (3, img_rows, img_cols)\nelse:\n X_normalized = X_normalized.reshape(X_normalized.shape[0], img_rows, img_cols, 3)\n input_shape = (img_rows, img_cols, 3)\n\n# Create the Sequential model\nmodel_cifar10 = Sequential()\n\nmodel_cifar10.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],\n border_mode='valid',\n input_shape=input_shape, name=\"conv_1\"))\nmodel_cifar10.add(Activation('relu'))\nmodel_cifar10.add(MaxPooling2D(pool_size=pool_size))\nmodel_cifar10.add(Dropout(0.5))\nmodel_cifar10.add(Flatten(input_shape=(32, 32, 3)))\nmodel_cifar10.add(Dense(128, name=\"dense_1\"))\nmodel_cifar10.add(Activation('relu'))\nmodel_cifar10.add(Dense(10, 
name=\"dense_2_new\"))\nmodel_cifar10.add(Activation('softmax', name=\"acivation_3_new\"))\n\n# load and use model weight from traffic sign\nfrom keras.layers.core import Dense, Activation, Flatten, Dropout\nfrom keras.layers.convolutional import Convolution2D\nfrom keras.layers.pooling import MaxPooling2D\n\nmodel_cifar10.compile('adam', 'categorical_crossentropy', ['accuracy'])\nmodel_cifar10.load_weights('traffic_weights.h5', by_name=True)\nscore = model_cifar10.evaluate(X_test, y_test, verbose=1)\nprint('Test score:', score[0])\nprint('Test accuracy:', score[1])\nmodel_cifar10.compile('adam', 'categorical_crossentropy', ['accuracy'])\nhistory = model_cifar10.fit(X_train, y_train, batch_size=128, nb_epoch=10, validation_split=0.2)\n\n", "Optimization\nCongratulations! You've built a neural network with convolutions, pooling, dropout, and fully-connected layers, all in just a few lines of code.\nHave fun with the model and see how well you can do! Add more layers, or regularization, or different padding, or batches, or more training epochs.\nWhat is the best validation accuracy you can achieve?", "# TODO: Build a model\n\n# TODO: Compile and train the model", "Best Validation Accuracy: (fill in here)\nTesting\nOnce you've picked out your best model, it's time to test it.\nLoad up the test data and use the evaluate() method to see how well it does.\nHint 1: The evaluate() method should return an array of numbers. Use the metrics_names property to get the labels.", "# TODO: Load test data\n \n# TODO: Preprocess data & one-hot encode the labels\n\n# TODO: Evaluate model on test data", "Test Accuracy: (fill in here)\nSummary\nKeras is a great tool to use if you want to quickly build a neural network and evaluate performance." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
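A small, self-contained sketch of the Min-Max scaling used in the traffic-sign/CIFAR-10 notebook above (NumPy only, no Keras needed). Note the notebook's docstring claims a range of [0.1, 0.9] while its code scales to [-0.5, 0.5]; this sketch follows the code. The pixel values below are invented for illustration:

```python
import numpy as np

def normalize(image_data, a=-0.5, b=0.5):
    """Min-Max scale image_data into the range [a, b]."""
    x_min = np.min(image_data)
    x_max = np.max(image_data)
    return a + (image_data - x_min) * (b - a) / (x_max - x_min)

# Example: scale 8-bit pixel values into [-0.5, 0.5]
pixels = np.array([0, 64, 128, 192, 255], dtype=float)
scaled = normalize(pixels)
```

Applied to an image array, the same function rescales every channel jointly, which is what the notebook's `X_train = normalize(X_train)` call does.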
waltervh/BornAgain-tutorial
talks/day_1/python_introduction/BornAgainSchool_Matplotlib.ipynb
gpl-3.0
[ "7. Matplotlib Basics\n7.1 Verifying the python version you are using", "import sys\nprint(sys.version)", "At this point anything above python 3.5 should be ok.\n7.2 Import numpy and matplotlib", "import numpy as np\nnp.__version__\n\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nmpl.__version__", "Notes:\n7.3 Simple plot", "x = np.linspace(-3.14, 3.14, num=100)\ny = np.sin(x)\n\nplt.plot(x, y)\nplt.xlabel('x values')\nplt.ylabel('y')\nplt.title('y=sin(x)')\nplt.show()", "Notes:\n7.4 Exercice:\nPlot other simple functions like the exponential or cosinus\nNotes:\n8. 3D data visualisation\n8.1 Colormaps", "x = np.linspace (-1, 1, num =100)\ny = np.linspace (-1, 1, num =100)\nxx, yy = np.meshgrid (x, y)\n\nz = np.sin(xx**2 + yy**2 + yy)\nplt.pcolormesh(x, y, z, shading = 'gouraud')\nplt.show()\n\n\n#change the colormaps\n\n#mpl.rcParams['image.cmap'] = 'viridis'\nmpl.rcParams['image.cmap'] = 'jet'\n#mpl.rc('image', cmap='jet')\n#mpl.rc('image', cmap='hsv')\n\nplt.pcolormesh(x, y, z, shading = 'gouraud')\nplt.show()\n\n#use imshow\nplt.imshow(z, aspect='auto')\nplt.show()", "Notes:\n8.2 3D Surface", "from mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib import cm\nfig = plt.figure(figsize=(12,8))\nax = fig.gca(projection='3d')\nax.plot_surface(xx, yy, z, rstride=5, cstride=5, cmap=cm.coolwarm, linewidth=1, antialiased=True)\nplt.show()", "Notes:\n8.3 3D Wireframe\nCreate a numpy array and set the elements in it as you iterate.", "fig = plt.figure(figsize=(12,8))\nax = fig.gca(projection='3d')\nax.plot_wireframe(xx, yy, z, rstride=5, cstride=5, antialiased=True)\nplt.show()\n", "Notes:\n9. 
Multiple Plots", "fig = plt.figure(figsize=(20,15))\n\n#create the subplots\nax = fig.add_subplot(2,2,1)\nbx = fig.add_subplot(2,2,2)\ncx = fig.add_subplot(2,2,3, projection='3d')\ndx = fig.add_subplot(2,2,4, projection='3d')\n\n#the sin\nax.plot(np.linspace(-np.pi,np.pi,100), np.sin(np.linspace(-np.pi,np.pi,100)))\nax.scatter(np.linspace(-np.pi,np.pi,100), np.cos(np.linspace(-np.pi,np.pi,100)))\nax.set_xlabel('x values')\nax.set_ylabel('y')\nax.set_title('y=sin(x)')\n\n#the image\nbx.imshow(z, aspect='auto')\nbx.set_xlabel('x')\nbx.set_ylabel('y')\nbx.set_title('Some image')\n\n#the surface\ncx.set_xlabel('some x')\ncx.set_ylabel('some y')\ncx.set_zlabel('some z')\ncx.set_title('The surface')\ncx.plot_surface(xx, yy, z, rstride=5, cstride=5, cmap=cm.coolwarm, linewidth=1, antialiased=True)\n\n#the wireframe\ndx.set_xlabel('some x')\ndx.set_ylabel('some y')\ndx.set_zlabel('some z')\ndx.set_title('The wireframe')\ndx.plot_wireframe(xx, yy, z, rstride=4, cstride=4, antialiased=True)\n\nplt.show()", "Notes:\n10. Exercise:\nFrom what you have learnt about matplotlib, make four subplots of:\n- Random 2D data\n- Random 3D data as colormap\n- Gaussian 3D as a surface (np.gaussian)\n- Gaussian 3D as a wireframe (np.gaussian)\nNotes:" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
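The grid construction behind the colormap and surface cells in the matplotlib notebook above can be checked without any plotting; this sketch only builds the arrays and verifies their shapes and the bounded range of `z` (the grid size of 100 follows the notebook):

```python
import numpy as np

# Build the same 2-D grid the notebook feeds to pcolormesh/plot_surface
x = np.linspace(-1, 1, num=100)
y = np.linspace(-1, 1, num=100)
xx, yy = np.meshgrid(x, y)

# z is evaluated on the full 100x100 grid, not on the 1-D vectors
z = np.sin(xx**2 + yy**2 + yy)
```

Keeping `xx`, `yy`, and `z` with matching shapes is what makes the later `plot_surface(xx, yy, z, ...)` and `plot_wireframe(xx, yy, z, ...)` calls work.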
quantopian/research_public
notebooks/data/quandl.currfx_usdeur/notebook.ipynb
apache-2.0
[ "Quandl: US vs. EUR Exchange Rate\nIn this notebook, we'll take a look at a data set available on Quantopian. This dataset spans from 1999 through the current day. It contains the daily exchange rates for the US Dollar (USD) vs. the European Euro (EUR). We access this data via the API provided by Quandl. More details on this dataset can be found on Quandl's website.\nBlaze\nBefore we dig into the data, we want to tell you about how you generally access Quantopian partner data sets. These datasets are available using the Blaze library. Blaze provides the Quantopian user with a convenient interface to access very large datasets.\nSome of these sets (though not this one) are many millions of records. Bringing that data directly into Quantopian Research just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.\nTo learn more about using Blaze and generally accessing Quantopian partner data, clone this tutorial notebook.\nWith the preamble in place, let's get started:", "# import the dataset\nfrom quantopian.interactive.data.quandl import currfx_usdeur\n# Since this data is public domain and provided by Quandl for free, there is no _free version of this\n# data set, as found in the premium sets. This import gets you the entirety of this data set.\n\n# import data operations\nfrom odo import odo\n# import other libraries we will use\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ncurrfx_usdeur.sort('asof_date')", "The data goes all the way back to 1999 and is updated daily.\nBlaze provides us with the first 10 rows of the data for display. Just to confirm, let's count the number of rows in the Blaze expression:", "currfx_usdeur.count()", "Let's go plot it for fun.
This data set is definitely small enough to put right into a Pandas DataFrame.", "usdeur_df = odo(currfx_usdeur, pd.DataFrame)\n\nusdeur_df.plot(x='asof_date', y='rate')\nplt.xlabel(\"As Of Date (asof_date)\")\nplt.ylabel(\"Exchange Rate\")\nplt.title(\"USD vs. EUR Exchange Rate\")\nplt.legend().set_visible(False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
TimothyHelton/k2datascience
notebooks/EDA_MTA_Exercises.ipynb
bsd-3-clause
[ "Exploratory Data Analysis with Python\nWe will explore the NYC MTA turnstile data set. These data files are from the New York Subway. It tracks the hourly entries and exits to turnstiles (UNIT) by day in the subway system.\nHere is an example of what you could do with the data. James Kao investigates how subway ridership is affected by incidence of rain.\n<br>\n<font color=\"red\">\n NOTE:\n <br>\n This notebook uses code found in the\n <a href=\"https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/nyc_mta.py\">\n <strong>k2datascience.nyc_mta</strong></a> package.\n To execute all the cells do one of the following items:\n <ul>\n <li>Install the k2datascience package to the active Python interpreter.</li>\n <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>\n <li>Create a link to the nyc_mta.py file in the same directory as this notebook.</li>\n</font>\n\nImports", "from collections import defaultdict\nimport csv\nimport os\nimport os.path as osp\n\nfrom dateutil.parser import parse\nimport matplotlib.dates as mdates\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\nfrom k2datascience import nyc_mta\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n%matplotlib inline", "Download Data\nWould you like to download New York City MTA Turnstile data?\n\nEach file is for a week of data and is approximately 24 Megabytes in size.", "download = False\nfile_quantity = 2", "Scrape MTA Turnstile Web Page to extract all available data files.", "d = nyc_mta.TurnstileData()\nif download:\n d.write_data_files(qty=file_quantity)\n print(f'\\n\\nThe raw data files were written out to:\\n\\n{d.data_dir}')", "Exercise 1\n\nDownload at least 2 weeks worth of MTA turnstile data (You can do this manually or via Python)\nOpen up a file, use csv reader to read it, make a python dict where there is a key for each (C/A, UNIT, SCP, STATION). 
These are the first four columns. The value for this key should be a list of lists. Each list in the list is the rest of the columns in a row. For example, one key-value pair should look like { ('A002','R051','02-00-00','LEXINGTON AVE'): \n [\n ['NQR456', 'BMT', '01/03/2015', '03:00:00', 'REGULAR', '0004945474', '0001675324'], \n ['NQR456', 'BMT', '01/03/2015', '07:00:00', 'REGULAR', '0004945478', '0001675333'], \n ['NQR456', 'BMT', '01/03/2015', '11:00:00', 'REGULAR', '0004945515', '0001675364'],\n ... \n ] \n}\n\n\n\nStore all the weeks in a data structure of your choosing\nData File Path", "data_file = '170401.txt'\ndata_dir = osp.join('..', 'data', 'nyc_mta_turnstile')\ndata_file_path = osp.join(data_dir, data_file)", "Create Exercise 1 Dictionary", "turnstile = defaultdict(list)\nwith open(data_file_path, 'r') as f:\n reader = csv.reader(f)\n initial_row = True\n for row in reader:\n if not initial_row:\n turnstile[tuple(row[:4])].append([x.strip() for x in row[4:]])\n else:\n header = [x.strip() for x in row]\n initial_row = False", "Header\n\nC/A: Control Area (A002)\nUNIT: Remote Unit for a station (R051)\nSCP: Subunit Channel Position represents a specific address for a device (02-00-00)\nSTATION: Represents the station name the device is located at\n\nLINENAME: Represents all train lines that can be boarded at this station\n\nNormally lines are represented by one character.\nLINENAME 456NQR represents train service for 4, 5, 6, N, Q, and R trains.\n\n\n\nDIVISION: Represents the Line originally the station belonged to BMT, IRT, or IND \n\nDATE: Represents the date (MM-DD-YY)\nTIME: Represents the time (hh:mm:ss) for a scheduled audit event\n\nDESC: Represents the \"REGULAR\" scheduled audit event (Normally occurs every 4 hours)\n\nAudits may occur more than 4 hours apart due to planning or troubleshooting activities. 
\n\n\n\nENTRIES: The comulative entry register value for a device\n\nEXIST: The cumulative exit register value for a device", "header", "Example Entry in Turnstile Dictionary", "turnstile[('A002', 'R051', '02-00-00', '59 ST')][:3]", "Create Pandas DataFrame", "d.get_data()\nd.data.shape\nd.data.head()", "Exercise 2\n\nLet's turn this into a time series.\n\nFor each key (basically the control area, unit, device address and station of a specific turnstile), have a list again, but let the list be comprised of just the point in time and the cumulative count of entries.\nThis basically means keeping only the date, time, and entries fields in each list. You can convert the date and time into datetime objects -- That is a python class that represents a point in time. You can combine the date and time fields into a string and use the dateutil package to convert it into a datetime object.\nYour new dict should look something like\n{ ('A002','R051','02-00-00','LEXINGTON AVE'): \n [\n [datetime.datetime(2013, 3, 2, 3, 0), 3788],\n [datetime.datetime(2013, 3, 2, 7, 0), 2585],\n [datetime.datetime(2013, 3, 2, 12, 0), 10653],\n [datetime.datetime(2013, 3, 2, 17, 0), 11016],\n [datetime.datetime(2013, 3, 2, 23, 0), 10666],\n [datetime.datetime(2013, 3, 3, 3, 0), 10814],\n [datetime.datetime(2013, 3, 3, 7, 0), 10229],\n ...\n ],\n ....\n }\n\nCreate Exersize 2 Time Series Dictionary\nNote: The extended computational time is due to the dateutil operation.", "turnstile_ts = {}\nfor k, v in turnstile.items():\n turnstile_ts[k] = [[parse(f'{x[2]} {x[3]}'), int(x[-2])] for x in v]", "Example Entry in Turnstile Time Series Dictionary", "turnstile_ts[('A002', 'R051', '02-00-00', '59 ST')][:10]", "Add Time Stamp Series to Pandas DataFrame", "d.get_time_stamp()\nd.data.shape\nd.data.head()", "Exercise 3\n\nThese counts are cumulative every n hours. We want total daily entries. 
\n\nNow make it that we again have the same keys, but now we have a single value for a single day, which is not cumulative counts but the total number of passengers that entered through this turnstile on this day.", "daily_total = defaultdict(list)\nfor k, v in turnstile_ts.items():\n days = set([x[0].date() for x in v])\n for day in sorted(days):\n daily_total[k].append([day, sum([x[1] for x in v if x[0].date() == day])])", "Example Entry in Turnstile Time Series Dictionary", "daily_total[('A002', 'R051', '02-00-00', '59 ST')]", "Return Daily Entry Totals Using Pandas", "d.turnstile_daily.head(10)\nd.turnstile_daily.tail(10)", "Exercise 4\n\nWe will plot the daily time series for a turnstile.\n\nIn ipython notebook, add this to the beginning of your next cell: \n%matplotlib inline\n\nThis will make your matplotlib graphs integrate nicely with the notebook.\nTo plot the time series, import matplotlib with \nimport matplotlib.pyplot as plt\n\nTake the list of [(date1, count1), (date2, count2), ...], for the turnstile and turn it into two lists:\ndates and counts. 
This should plot it:\nplt.figure(figsize=(10,3))\nplt.plot(dates,counts)", "label_size = 14\n\nfig = plt.figure('Station 59 ST: Daily Turnstile Entries', figsize=(10, 3),\n facecolor='white', edgecolor='black')\nax1 = plt.subplot2grid((1, 1), (0, 0))\n\ndt = daily_total[('A002', 'R051', '02-00-00', '59 ST')]\ndates = [x[0] for x in dt]\nentries = [x[1] for x in dt]\n\nax1.plot_date(dates, entries, '^k-')\n\nplt.suptitle('Station: 59 ST', fontsize=24, y=1.16);\nplt.title('Control Area: A002 | Unit: R051 | Subunit Channel Position: 02-00-00',\n fontsize=18, y=1.10);\nax1.set_xlabel('Date', fontsize=label_size)\nax1.set_ylabel('Turnstile Entries', fontsize=label_size)\nfig.autofmt_xdate();", "Pandas Plot", "label_size = 14\nmarker_size = 5\n\nfig = plt.figure('Station 59 ST: Daily Turnstile Entries', figsize=(10, 7),\n facecolor='white', edgecolor='black')\nrows, cols = (2, 1)\nax1 = plt.subplot2grid((rows, cols), (0, 0))\nax2 = plt.subplot2grid((rows, cols), (1, 0), sharex=ax1)\n\ndt = d.turnstile_daily.query(('c_a == \"A002\"'\n '& unit == \"R051\"'\n '& scp == \"02-00-00\"'\n '& station == \"59 ST\"'))\ndt.plot(x=dt.index.levels[4], y='entries', color='IndianRed', legend=None,\n markersize=marker_size, marker='o', ax=ax1)\n\nax1.set_title('Control Area: A002 | Unit: R051 | Subunit Channel Position: 02-00-00',\n fontsize=18, y=1.10)\nax1.set_ylabel('Turnstile Entries', fontsize=label_size)\n\ndt.plot(x=dt.index.levels[4], y='exits', color='black', legend=None,\n markersize=marker_size, marker='d', ax=ax2)\n\nax2.set_xlabel('Date', fontsize=label_size)\nax2.set_ylabel('Turnstile Exits', fontsize=label_size)\n\nplt.suptitle('Station: 59 ST', fontsize=24, y=1.04);\nplt.tight_layout()\nfig.autofmt_xdate();", "Exercise 5\n\nSo far we've been operating on a single turnstile level, let's combine turnstiles in the same ControlArea/Unit/Station combo. 
There are some ControlArea/Unit/Station groups that have a single turnstile, but most have multiple turnstilea-- same value for the C/A, UNIT and STATION columns, different values for the SCP column.\n\nWe want to combine the numbers together -- for each ControlArea/UNIT/STATION combo, for each day, add the counts from each turnstile belonging to that combo.\nPandas Return Total Passengers Filtered By Control Area, Unit, Station and Date", "d.get_station_daily(control_area=True, unit=True)\nstation_daily_all = d._station_daily\nstation_daily_all.head(10)\nstation_daily_all.tail(10)", "Exercise 6\n\nSimilarly, combine everything in each station, and come up with a time series of [(date1, count1),(date2,count2),...] type of time series for each STATION, by adding up all the turnstiles in a station.\n\nPandas Return Total Passengers Filtered By Station and Date", "station_daily = d.station_daily\nstation_daily.query('station == \"59 ST\"')", "Exercise 7\n\nPlot the time series for a station", "label_size = 14\n\nfig = plt.figure('Station 59 ST: Total Passengers', figsize=(12, 4),\n facecolor='white', edgecolor='black')\nax1 = plt.subplot2grid((1, 1), (0, 0))\n\ndt = station_daily.query('station == \"59 ST\"')\ndt.plot(kind='bar', x=dt.index.levels[1], alpha=0.5, ax=ax1)\n\nax1.set_xlabel('Date', fontsize=label_size)\nax1.set_ylabel('Passengers', fontsize=label_size)\n\nplt.suptitle('Station: 59 ST', fontsize=24, y=1.16);\nplt.title('Total Passengers', fontsize=18, y=1.10);\nfig.autofmt_xdate();", "Exercise 8\n\nMake one list of counts for one week for one station. Monday's count, Tuesday's count, etc. so it's a list of 7 counts.\nMake the same list for another week, and another week, and another week.\nplt.plot(week_count_list) for every week_count_list you created this way. 
You should get a rainbow plot of weekly commute numbers on top of each other.", "week_59st = station_daily.query('station == \"59 ST\"').reset_index()\nweek_59st\n\nlabel_size = 14\n\nfig = plt.figure('Station 59 ST: Weekly Passengers', figsize=(12, 4),\n facecolor='white', edgecolor='black')\nax1 = plt.subplot2grid((1, 1), (0, 0))\n\nfor w in week_59st.week.unique():\n mask = f'station == \"59 ST\" & week == {w}' \n dt = station_daily.query(mask).reset_index()\n dt.plot(kind='area', x=dt.weekday, y='entries', alpha=0.5, label=f'Week: {w}', ax=ax1)\n\nax1.set_xlabel('Weekday', fontsize=label_size)\nax1.set_ylabel('Passengers', fontsize=label_size)\nax1.set_xticklabels(['Monday', 'Tuesday', 'Wednesday', 'Thursday',\n 'Friday', 'Saturday', 'Sunday'])\nx_min, x_max, y_min, y_max = ax1.axis()\nax1.axis((x_min, x_max, 0, 2e10))\n\nplt.suptitle('Station: 59 ST', fontsize=24, y=1.16);\nplt.title('Weekly Passengers', fontsize=18, y=1.10);\nfig.autofmt_xdate();", "Exercise 9\n\nOver multiple weeks, sum total ridership for each station and sort them, so you can find out the stations with the highest traffic during the time you investigate", "mask = ['station',\n pd.Series([x.week for x in d.data.time_stamp], name='week')]\nstation_weekly = d.data.groupby(mask)['entries', 'exits'].sum()\n\nstation_weekly.sort_values('entries', ascending=False)", "Exercise 10\n\nMake a single list of these total ridership values and plot it with plt.hist(total_ridership_counts) to get an idea about the distribution of total ridership among different stations. \nThis should show you that most stations have a small traffic, and the histogram bins for large traffic volumes have small bars.\n\nAdditional Hint: \nIf you want to see which stations take the meat of the traffic, you can sort the total ridership counts and make a plt.bar graph. For this, you want to have two lists: the indices of each bar, and the values. 
The indices can just be 0,1,2,3,..., so you can do \nindices = range(len(total_ridership_values))\nplt.bar(indices, total_ridership_values)", "station_group = d.data.groupby('station')\nstation_entries = station_group['entries'].sum()\nstation_entries.tail()\n\nlabel_size = 14\nsuptitle_size = 24\ntitle_size = 18\n\nbins = 50\n\nfig = plt.figure('', figsize=(10, 8),\n facecolor='white', edgecolor='black')\nrows, cols = (2, 1)\nax1 = plt.subplot2grid((rows, cols), (0, 0))\nax2 = plt.subplot2grid((rows, cols), (1, 0))\n\nstation_entries.sort_values().plot(kind='bar', ax=ax1)\n\nax1.set_title('Total Passengers', fontsize=title_size);\nax1.set_xlabel('Stations', fontsize=label_size)\nax1.set_ylabel('Passengers', fontsize=label_size)\nax1.set_xticklabels('')\n\nstation_entries.plot(kind='hist', alpha=0.5, bins=bins,\n edgecolor='black', label='_nolegend_', ax=ax2)\nax2.axvline(station_entries.mean(), color='crimson',\n label='Mean', linestyle='--')\nax2.axvline(station_entries.median(), color='black',\n label='Median', linestyle='-.')\n\nax2.legend()\nax2.set_xlabel('Total Passengers', fontsize=label_size)\nax2.set_ylabel('Count', fontsize=label_size)\n\nplt.suptitle('All NYC MTA Stations', fontsize=suptitle_size, y=1.03);\nplt.tight_layout();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
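The daily-total aggregation from Exercise 3 of the MTA notebook above reduces to grouping timestamped readings by calendar day. A minimal sketch with invented readings (note this mirrors the notebook's approach of summing the register values recorded each day, not differencing consecutive cumulative counts):

```python
from collections import defaultdict
from datetime import datetime

# Toy per-audit readings for one turnstile: (timestamp, entries) pairs.
# The numbers are invented for illustration only.
readings = [
    (datetime(2017, 4, 1, 3, 0), 100),
    (datetime(2017, 4, 1, 7, 0), 120),
    (datetime(2017, 4, 2, 3, 0), 130),
    (datetime(2017, 4, 2, 7, 0), 150),
]

# Group by calendar day and sum, mirroring the notebook's Exercise 3 loop
daily_total = defaultdict(int)
for ts, count in readings:
    daily_total[ts.date()] += count

totals = sorted(daily_total.items())
```

Each entry of `totals` is a `(date, total)` pair, matching the `[(date1, count1), (date2, count2), ...]` shape the exercises ask for.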
TESScience/FPE_Test_Procedures
HK_Variance.ipynb
mit
[ "HK Variance Test\nThis notebook provides a test for the noise and stability of the housekeeping data captured by the FPE. It evaluates the performance of the housekeeping system by taking a large number of samples, collecting the results for each signal channel, and evaluating the variance of the data for each channel.\nInstructions:\nEnter Your Name and today's date:\nEd Bokhour, 11/4/15\nEnter the part numbers and serial numbers of the units under test:\nSDPCB Interface 6.1, s/n 02. SDPCB 6.1 Driver s/n RPI15310002, SDPCB Video 6.1 s/n RPI15310001. Using wrapper 6.1.2 (San Diego).\nWhen the test is complete, save this notebook as a new file, indicating the date, as \"HK_Variance_Results_YYMMDD.ipynb\". Alternatively, export the notebook as a PDF file, then clear all entries and outputs.\nStart the Observatory Simulator and Load the FPE FPGA\nRemember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.\nWhen you are running this notebook and it has not been power cycled, you should set preload=False.\nRun the following cell to get the FPE loaded:", "from tessfpe.dhu.fpe import FPE\nfrom tessfpe.dhu.unit_tests import check_house_keeping_voltages\nfpe1 = FPE(1, debug=False, preload=True, FPE_Wrapper_version='6.1.1')\nprint fpe1.version\nfpe1.cmd_start_frames()\nfpe1.cmd_stop_frames()\nif check_house_keeping_voltages(fpe1):\n print \"Wrapper load complete. 
Interface voltages OK.\"", "Set all the operating parameters to the default values:", "def set_fpe_defaults(fpe):\n \"Set the FPE to the default operating parameters and return a list of the default values\"\n defaults = {}\n for k in range(len(fpe.ops.address)):\n if fpe.ops.address[k] is None:\n continue\n fpe.ops.address[k].value = fpe.ops.address[k].default\n defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default\n return defaults\nset_fpe_defaults(fpe1)", "Run the variance test:", "from numpy import var, sqrt\nsamples=100\nfrom tessfpe.data.housekeeping_channels import housekeeping_channels\n# We make sample_data a dictionary and each value will be a set of HK \n# data, with key = sample_name.\nsample_data = {}\n\n# For later:\nsignal_names = []\nsignal_values = []\nsignal_data = {}\nvariance_values = {}\n\nfor i in range(samples):\n # Get a new set of HK values\n house_keeping_values = fpe1.house_keeping[\"analogue\"]\n data_values = house_keeping_values.values()\n # Add the new HK values to the sample_data dictionary:\n sample_number = \"sample_\" + str(i)\n sample_data[sample_number] = data_values\n\n# Get the signal names for use later\nsignal_names = housekeeping_channels.keys()\n\n# Get list of units for later\nunits = {}\nfor name in housekeeping_channels:\n units[name] = housekeeping_channels[name]['unit']\n \n\"\"\"Assign the set of all HK values of the same signal (e.g. 
substrate_1) \nto the dictionary 'signal_data'\"\"\"\n\nfor k in range(len(signal_names)):\n # Build the list 'signal_values' for this signal:\n for i in range(samples):\n sample_number = \"sample_\" + str(i)\n signal_values.append(sample_data[sample_number][k])\n # Add signal_values to the signal_data dictionary:\n signal_data[signal_names[k]] = signal_values\n signal_values = []\n\n\"\"\" Now get the square root of the variance of each of the 'signal_values' in the \nsignal_data dictionary and put the result in the 'variance_values' \ndictionary.\"\"\"\nfor name in signal_data:\n variance_values[name] = sqrt(var(signal_data[name]))\n #print units[name][\"unit\"]\n #print signal_data\n #print units[name]\n print '{0:26} {1:.3} {2}'.format(name, variance_values[name], units[name])\n\n# Results will be displayed below, in engineering units (root-variance). \n# Watch ObsSim LEDs for activity.", "End HK Variance Test." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
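The per-channel root-variance that the HK Variance notebook above reports can be reproduced on simulated samples. The channel names and sample values here are invented; only the NumPy computation matches the notebook's final loop:

```python
import numpy as np
from numpy import var, sqrt

# Simulated housekeeping samples for two channels (values invented)
signal_data = {
    "substrate_1": [1.0, 1.0, 1.0, 1.0],  # perfectly stable channel
    "ccd_output": [0.0, 2.0, 0.0, 2.0],   # noisy channel
}

# Root-variance (population standard deviation) per channel,
# computed the same way as at the end of the notebook
root_variance = {name: sqrt(var(vals)) for name, vals in signal_data.items()}
```

A stable channel yields a root-variance near zero, so this figure is a direct measure of the noise on each housekeeping signal.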
tiagoft/inteligencia_computacional
mlp.ipynb
mit
[ "Modelos Discriminativos - Parte 2 - Sistemas Nรฃo-Lineares\nOs sistemas lineares tรชm a vantagem de serem facilmente treinados atravรฉs de um processo de otimizaรงรฃo convexa, isto รฉ, que sรณ tem um ponto de mรญnimo. Apesar disso, \nObjetivos\nAo fim desta iteraรงรฃo, o aluno serรก capaz de:\n* Entender o conceito e as limitaรงรตes da propriedade aproximaรงรฃo universal\n* Entender os algoritmos de backpropagation\n* Aplicar redes MLP para problemas de classificaรงรฃo\n* Configurar redes MLP quanto ao seu nรบmero de neurรดnios e nรบmero de camadas", "# Inicializacao\n%matplotlib inline\n\nimport numpy as np\nfrom matplotlib import pyplot as plt", "Erro de aproximaรงรฃo\nSeja um sistema qualquer cuja saรญda รฉ uma estimativa $\\boldsymbol y_e$. Se comparada a uma referรชncia $\\boldsymbol y$, o erro quadrรกtico mรฉdio (EQM) da aproximaรงรฃo serรก igual a\n$$E = ||\\boldsymbol y_e - \\boldsymbol y||^2.$$\nPodemos calcular a derivada da aproximaรงรฃo em relaรงรฃo ao erro (ou: o gradiente da aproximaรงรฃo em relaรงรฃo ao erro). O gradiente $\\nabla E_{\\boldsymbol y_e}$ รฉ um vetor de mesma dimensรฃo de $\\boldsymbol y_e$ que indica a direรงรฃo na qual o erro aumenta mais em relaรงรฃo ร  aproximaรงรฃo, dado por:\n$$\\nabla E_{\\boldsymbol y_e} = \n\\begin{pmatrix}\n \\frac{dE}{d y_{e1}} \\\n \\frac{dE}{d y_{e2}} \\\n \\vdots \\\n \\frac{dE}{d y_{eI}} \n \\end{pmatrix}\n = \n \\begin{pmatrix}\n 2 (y_{e1} - y_1) \\\n 2 (y_{e2} - y_2) \\\n \\vdots \\\n 2 (y_{eI} - y_I) \n \\end{pmatrix} \n =\n 2 (\\boldsymbol y_e - \\boldsymbol y).$$\nEsse resultado รฉ importante porque um passo pequeno na direรงรฃo contrรกria do gradiente levarรก ร  reduรงรฃo do erro. 
Assim, dado um fator multiplicativo $\\alpha$, temos que $\\boldsymbol y_e - \\alpha \\nabla E_{\\boldsymbol y_e}$ รฉ uma operaรงรฃo que reduz o EQM.\nAo realizar a operaรงรฃo:\n$$\\boldsymbol y_e \\leftarrow \\boldsymbol y_e - \\alpha \\frac{\\nabla E_{\\boldsymbol y_e}}{||\\nabla E_{\\boldsymbol y_e}||},$$\ntemos ainda a garantia de que a norma L2 do vetor que serรก somado a $\\boldsymbol y_e$ tem norma conhecida.", "alpha = 0.01 # Tamanho do passo\ny = np.array([1, 2., 1., 0., 0., -2., 0.5, .3]) # Referencia\ny_e = np.random.random(y.shape) # Vetor inicial\n\nn_passos = 500 # Vamos executar este numero de passos de otimizacao\neqm = np.zeros((n_passos + 1)) # Vetor que recebera o EQM a cada iteracao\neqm[0] = np.sum((y_e - y)**2)\nfor i in xrange(n_passos):\n gradiente = 2*(y_e - y)\n y_e -= alpha * gradiente / np.linalg.norm(gradiente)\n eqm[i + 1] = np.sum((y_e - y)**2)\n\nplt.figure();\nplt.plot(range(n_passos+1), eqm);\nplt.ylabel('EQM');\nplt.xlabel('Passos');\n", "Fica claro que o programa construรญdo, de fato, reduz o EQM. Apesar disso, a soluรงรฃo รณtima รฉ trivial: se sabemos o valor-objetivo $\\boldsymbol y$ de $\\boldsymbol y_e$, basta fazรช-lo igual a este valor que o erro se tornarรก nulo.\nO sistema รฉ mais interessante caso $\\boldsymbol y_e$ seja uma combinaรงรฃo linear de um vetor de entradas $\\boldsymbol x$. Neste caso, temos:\n$$\\boldsymbol y_e = \\boldsymbol A \\boldsymbol x,$$\nonde $\\boldsymbol A$ รฉ uma matriz de coeficientes de combinaรงรฃo. Temos liberdade para alterar $\\boldsymbol A$, mas nรฃo diretamente $\\boldsymbol x$ ou $\\boldsymbol y_e$. 
Para tal, vamos aplicar o mesmo processo de minimizaรงรฃo por gradiente descendente:\n$$\\nabla E _{\\boldsymbol A} = \\frac{dE}{d \\boldsymbol A} = \\frac{dE}{d \\boldsymbol y_e} \\frac{d \\boldsymbol y_e}{d \\boldsymbol A} = 2 (\\boldsymbol y_e - \\boldsymbol y) \\boldsymbol x^T.$$", "alpha = 0.01 # Tamanho do passo\nx = np.array([[1, 50, 3],[5, 300, 2],[2, 3, 100]]) # Entradas de referencia\ny = np.array([[3], [20], [-7]]) # Saidas de referencia\nA = np.random.random((y.shape[0], x.shape[0]))\n \nn_passos = 1000 # Vamos executar este numero de passos de otimizacao\neqm2 = np.zeros((n_passos+1)) # Vetor que recebera o EQM a cada iteracao\ny_e = A * x\neqm2[0] = np.sum((y_e - y)**2)\nfor i in xrange(n_passos):\n gradiente = 2*(y_e - y) * x.T\n A -= alpha * gradiente / np.linalg.norm(gradiente)\n y_e = A * x\n eqm2[i+1] = np.sum((y_e - y)**2)\n\nplt.figure();\nplt.plot(range(n_passos+1), eqm2);\nplt.ylabel('EQM');\nplt.xlabel('Passos');", "A restriรงรฃo do formato da funรงรฃo ($\\boldsymbol A \\boldsymbol x$) pode implicar em nunca reduzir o EQM a zero. Isso nรฃo significa que aproximaรงรตes lineares nรฃo sejam รบteis: podemos usar esse processo para deduzir, por exemplo, o valor da aceleraรงรฃo gravitacional ร  partir de uma sรฉrie de mediรงรตes de tempo de queda de objetos ร  em diferentes alturas. Sistemas lineares podem ser, tambรฉm, treinados usando o processo de pseudo-inversรฃo visto anteriormente.\nSistemas nรฃo-lineares\nUm sistema nรฃo-linear รฉ aquele que nรฃo obedece as condiรงรตes de linearidade. De forma geral, ele pode ser escrito como um vetor de saรญdas $\\boldsymbol y$ que รฉ definido em funรงรฃo de um vetor de entradas $\\boldsymbol x$:\n$$\\boldsymbol y_e = f(\\boldsymbol A \\boldsymbol x),$$\nonde $\\boldsymbol y$ e $\\boldsymbol x$ sรฃo dados do problema e $\\boldsymbol A$ รฉ uma matriz que combina linearmente os elementos das entradas. 
One possible choice for $f(.)$ is the hyperbolic tangent, which results in the system:", "def nl_forward(A, x):\n return np.tanh(np.dot(A,x))\n\nA = np.array([[-1, 0], [0, 1], [0.5, 0.5]]) # Two dimensions in, three out\nx = np.array([[1, 0, 0.1, 0.3], [1, 0.1, 0, 10]]) # Four input column vectors\n\nprint nl_forward(A, x)", "But let us remember: we want our function to have a specific behavior, not an arbitrary and/or seemingly random one. Ideally, we would like the weights to be adjusted so as to reduce the error of the outputs with respect to a training dataset. From that, we will check how the behavior of our function generalizes, evaluating it on test data.\nWe can again use the chain rule to derive an expression for the gradient of the error with respect to the coefficients of the matrix $\boldsymbol A$:\n$$\nabla E _\boldsymbol A = \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d\boldsymbol A} = \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol f(\boldsymbol A \boldsymbol x)}{d\boldsymbol A \boldsymbol x} \frac{d \boldsymbol A \boldsymbol x}{d \boldsymbol A}$$\nand, therefore:\n$$\nabla E _\boldsymbol A = (2 (y_e - y) \otimes f'(\boldsymbol A \boldsymbol x)) x^T,$$\nwhere $\otimes$ denotes element-wise multiplication.\nIf we fix the shape of the function $f(.)$, we can likewise run the gradient-descent minimization process:", "y = np.array([[1, 0, 2, 3], [0, 1, -1, 3], [1, 2, 1, 3]]) # Four output column vectors of three elements each\nA = np.array([[-1, 0], [0, 1], [0.5, 0.5]]) # Two elements in, three out\nx = np.array([[1, 0, 0.1, 0.3], [1, 0.1, 0, 10]]) # Four input column vectors of two elements each\n\ndef gradiente(y, A, x):\n y_e = nl_forward(A, x)\n dedy = (y_e - y)\n return np.dot(dedy * (1-np.tanh(np.dot(A,x))**2) , x.T)\n\ndef erro(y, y_e):\n return np.sum((y-y_e)**2)\n\n\nalpha = 0.01\nn_passos = 1000 # Number of optimization steps to run\neqm3 = np.zeros((n_passos+1)) # Vector that receives the MSE at each iteration\neqm3[0] = erro(y, nl_forward(A, x))\nfor i in xrange(n_passos):\n g = gradiente(y, A, x)\n A -= alpha * g / np.linalg.norm(g)\n eqm3[i+1] = erro(y, nl_forward(A, x))\n\nplt.figure();\nplt.plot(range(n_passos+1), eqm3);\nplt.ylabel('EQM');\nplt.xlabel('Passos');", "A nonlinear system with linearity at the output\nNow suppose the nonlinear system receives an additional linear-combination operation at its output. We then have:\n$$ \boldsymbol y_e = \boldsymbol B f(\boldsymbol A \boldsymbol x).$$\nIn this case, there are two coefficient matrices that can be optimized, $\boldsymbol A$ and $\boldsymbol B$. Using the chain rule, we can compute both gradients:\n$$\nabla E_{\boldsymbol A} = \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d \boldsymbol A}\n= \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d f(\boldsymbol A \boldsymbol x)} \otimes \frac{d f(\boldsymbol A \boldsymbol x)}{d \boldsymbol A \boldsymbol x} \frac{d \boldsymbol A \boldsymbol x} {d \boldsymbol A}\n= \big[ \boldsymbol B^T (2(\boldsymbol y_e - \boldsymbol y) \otimes f'(\boldsymbol A \boldsymbol x) ) \big] \boldsymbol x^T.\n$$\n$$\nabla E_{\boldsymbol B} = \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d \boldsymbol B}\n= 2(\boldsymbol y_e - \boldsymbol y) f^T(\boldsymbol A \boldsymbol x).\n$$", "y = np.array([[1, 0, 0, 3], [1, 1, -1, 3], [1, 2, 1, 3]]).astype(float) # Four output column vectors of three elements\nA = np.array([[-1, 0], [0, 1], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]).astype(float) # Two elements in, five out\nB = np.array([[-1, 1, 2, 2, 1], [1, -1, 0, 1, 2], [-2, -3, -4, 1, 2]]).astype(float) # Five elements in, three out\nx = np.array([[1, 0, 0.1, 0.3], [1, 0.1, 0, 10]]).astype(float) # Four input column vectors of two elements\n\ndef nl_forward2(A, x, B):\n return np.dot(B, np.tanh(np.dot(A,x)))\n\ndef gradienteB(y, A, x, B):\n ye = nl_forward2(A, x, B)\n z = np.dot(A, x)\n dedy = ye - y\n return np.dot(dedy, np.tanh(z).T)\n\ndef gradienteA(y, A, x, B):\n y_e = nl_forward2(A, x, B)\n z = np.dot(A,x)\n flz = (1-np.tanh(z)**2) \n deriv_front = np.dot(B.T, (y_e-y))\n deriv_back = flz\n gA = np.dot(deriv_front * deriv_back, x.T)\n return gA\n\ndef erro(y, y_e):\n return np.sum((y-y_e)**2)\n\nalpha = 0.01\nn_passos = 1000 # Number of optimization steps to run\neqm4 = np.zeros((n_passos+1)) # Vector that receives the MSE at each iteration\neqm4[0] = erro(y, nl_forward2(A, x, B))\nfor i in xrange(n_passos):\n gA = gradienteA(y, A, x, B)\n gB = gradienteB(y, A, x, B)\n A -= alpha * gA / np.linalg.norm(gA)\n B -= alpha * gB / np.linalg.norm(gB)\n eqm4[i+1] = erro(y, nl_forward2(A, x, B))\n\n\nplt.figure();\nplt.plot(range(n_passos+1), eqm4);\nplt.ylabel('EQM');\nplt.xlabel('Passos');\n", "Discussion\nNonlinearity\nThe hyperbolic-tangent nonlinearity has the characteristic of being nearly linear close to zero, but saturating at the values $-1$ and $1$ for inputs of large absolute value.", "x = np.linspace(-3, 3, num=100)\nplt.figure();\nplt.plot(x, np.tanh(x));\nplt.ylabel('tanh(x)');\nplt.xlabel('x');", "It is thus an approximation of the step-type classification functions, which take the value $1$ for positive inputs and $-1$ for negative inputs. The advantage of the hyperbolic tangent is that its derivative is defined everywhere, which allows our structure to be optimized by the gradient-descent method.\nMLP Neural Networks\nThe function approximates a neuron model in which the neuron is activated if its input exceeds a certain value. This model is called a perceptron. Thus, each unit that computes $tanh(x)$ is called a perceptron. 
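The gradient code above relies on the identity $\frac{d}{dx}\tanh(x) = 1 - \tanh^2(x)$ (the `1-np.tanh(...)**2` factor). A quick numerical sanity check with a central difference confirms it:

```python
import numpy as np

# Central-difference check of the identity used by the gradient code:
# d/dx tanh(x) = 1 - tanh(x)^2. The step size h is an arbitrary small value.
x = np.linspace(-3, 3, 13)
h = 1e-6
numeric = (np.tanh(x + h) - np.tanh(x - h)) / (2 * h)
analytic = 1 - np.tanh(x) ** 2
max_err = np.max(np.abs(numeric - analytic))   # on the order of h**2
```

The agreement to many decimal places is what lets the optimizer trust the analytic factor instead of recomputing derivatives numerically at every step.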
It is possible to create several layers of perceptrons (expanding the definition of the nonlinear function, as in $\boldsymbol y_e = \boldsymbol B f(\boldsymbol A f(\boldsymbol C \boldsymbol x))$). These perceptrons are organized in a network defined by the coefficient matrices. Thus, the structure we have been working with is a network of artificial neurons, that is, an artificial neural network. More specifically, it is a network with perceptrons organized in multiple layers, and it is therefore called a multi-layer perceptron (MLP).\nUniversal Approximation\nIn the structure $\boldsymbol y_e = \boldsymbol B f(\boldsymbol A \boldsymbol x)$ we have two linear-combination stages and, between them, a nonlinear stage. With this structure, the number of nonlinear units to be applied can be set arbitrarily by choosing the number of rows of $\boldsymbol A$ and the number of columns of $\boldsymbol B$.\nThe Universal Approximation Theorem shows that, for any function, there is a number of nonlinear elements (in this structure) that allows an approximation with arbitrarily small error. Hence it is possible to find suitable coefficients that will make this structure converge to any function from samples, that is, without the function being known. This property is especially useful in situations where a precise function cannot be deduced, for example:\n\nPredicting a student's performance on the final exam of a course from the performance on the other assessments,\nDeciding whether to buy or sell certain stocks based on their past values,\nChoosing which players should be part of a team\n\nAt the same time, the Universal Approximation Theorem does not take into account the inherent difficulty of finding the correct coefficients of the system. 
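To make the approximation capacity concrete, here is a minimal sketch of the $\boldsymbol y_e = \boldsymbol B f(\boldsymbol A \boldsymbol x)$ structure fitting samples of a nonlinear target. Three things are assumptions made only to keep the example short, not the training method of this text: the first layer $\boldsymbol A$ is left random and fixed, a per-unit bias term is added, and $\boldsymbol B$ is fitted by least squares instead of gradient descent.

```python
import numpy as np

# Sketch: with enough tanh units, y_e = B f(A x) can track a nonlinear target.
# Assumptions for illustration: fixed random first layer A, an added bias b
# (not present in the structure of the text), and B fitted in closed form.
rng = np.random.RandomState(0)
x = np.linspace(-3, 3, 200).reshape(1, -1)   # 200 scalar input samples, as columns
y = np.sin(x)                                # nonlinear target, sampled

A = rng.uniform(-2, 2, size=(30, 1))         # 30 tanh units
b = rng.uniform(-2, 2, size=(30, 1))
H = np.tanh(np.dot(A, x) + b)                # hidden activations, 30 x 200
B = np.dot(y, np.linalg.pinv(H))             # least-squares output layer
mse = np.mean((np.dot(B, H) - y) ** 2)
```

Increasing the number of units drives the training error down, which is exactly the theorem's promise; it says nothing, however, about how hard those coefficients are to find by iterative training, nor about the error on test data.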
Drawing a parallel with polynomial approximations, one can expect the number of data samples needed to train a structure like the one we have proposed to be at least proportional to the number of coefficients to be determined. An excess of coefficients leads to overfitting, that is, the function will approximate the training data well, but will fail on the test data.\nError backpropagation\nThe gradient-descent optimization algorithm, in the form we have seen, can be understood in terms of backpropagating the approximation error through the layers of the MLP network. For each additional nonlinear layer with weights $\boldsymbol A$, the gradient of the error is given by:\n$$\nabla E_{\boldsymbol A} = \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d \boldsymbol A}\n= \frac{dE}{d \boldsymbol y_e} \frac{d \boldsymbol y_e}{d f(\boldsymbol A \boldsymbol x)} \otimes \frac{d f(\boldsymbol A \boldsymbol x)}{d \boldsymbol A \boldsymbol x} \frac{d \boldsymbol A \boldsymbol x} {d \boldsymbol A}\n= \big[ \boldsymbol B^T ( \frac{d \boldsymbol y_e}{d f(\boldsymbol A \boldsymbol x)} \otimes f'(\boldsymbol A \boldsymbol x) ) \big] \boldsymbol x^T,\n$$\nwhere:\n* $\boldsymbol B$ is the weight matrix of the next layer, and\n* $\frac{d \boldsymbol y_e}{d f(\boldsymbol A \boldsymbol x)}$ is a factor that accumulates, propagating the error through the layers of the network.\nThe error-backpropagation procedure makes it possible to implement an MLP network with several layers, as in the following code:", "def nova_mlp(entradas, saidas, camadas):\n lista_de_camadas = [entradas] + camadas + [saidas]\n pesos = []\n for i in xrange(len(lista_de_camadas)-1):\n pesos.append(np.random.random((lista_de_camadas[i+1], lista_de_camadas[i])))\n return pesos\n\ndef ff_mlp(entradas, pesos):\n s = entradas\n for i in xrange(len(pesos)-1):\n s = np.tanh(np.dot(pesos[i],s))\n s = np.dot(pesos[-1],s)\n return s\n\ndef backpropagation_step(entradas, saidas, pesos, passo=0.01):\n derivadas = []\n resultados_intermediarios = [entradas]\n s = entradas\n for i in xrange(len(pesos)-1):\n s = np.tanh(np.dot(pesos[i],s))\n resultados_intermediarios.append(s)\n s = np.dot(pesos[-1],s)\n resultados_intermediarios.append(s)\n \n # Derivative of the error with respect to the estimated output\n dedye = (resultados_intermediarios[-1] - saidas)\n \n # Derivative with respect to the linear output layer\n dedb = np.dot(dedye, resultados_intermediarios[-2].T)\n \n # For each nonlinear layer, compute the new derivative as:\n deda = dedye\n\n for i in range(len(pesos)-2, -1, -1):\n linear = np.dot(pesos[i], resultados_intermediarios[i])\n flz = (1-np.tanh(linear)**2) \n deda = np.dot(pesos[i+1].T, deda) * flz # propagate the error and accumulate the f' factor\n derivada = np.dot(deda, resultados_intermediarios[i].T)\n derivadas.insert(0, derivada)\n\n derivadas.append(dedb)\n \n # Take one step against the derivative direction\n for i in xrange(len(derivadas)):\n n = np.linalg.norm(derivadas[i])\n pesos[i] -= passo * derivadas[i]/n\n \n return pesos\n\ndef erro(y, y_e):\n return np.sum((y-y_e)**2)\n\nx = np.array([[5, 4, 3, 2, 1], [3, 2, 1, 2, 3]]).T.astype(float)\ny = np.array([[2, 1, 1], [1, 0, 0]]).T.astype(float)\nmlp = nova_mlp(5, 3, [1, 1])\n#print ff_mlp(x, mlp), y\n\nn_passos = 1000\neqm5 = np.zeros((n_passos+1))\neqm5[0] = erro(y, ff_mlp(x, mlp))\nfor i in xrange(n_passos):\n mlp = backpropagation_step(x, y, mlp, 0.01)\n eqm5[i+1] = erro(y, ff_mlp(x, mlp))\n \n#print ff_mlp(x, mlp), y\nplt.figure();\nplt.plot(range(n_passos+1), eqm5);\nplt.ylabel('EQM');\nplt.xlabel('Passos');", "Exercises\n\n\nTake a labeled dataset of your choice. The data should be split into two labels. If you do not have one, use one of the datasets built into the sklearn module. Use a validation-and-test procedure to experiment with different numbers of layers and of neurons per layer in an MLP network. 
Show, through plots, how these numbers influence the training error and how that error relates to the classifier's accuracy. Run tests involving, at least, a very simple network (two layers and few neurons per layer) and a very large network (many layers and many neurons per layer).\n\n\nMake a drawing that explains the concept of backpropagation\n\n\nMake a drawing that explains the concept of universal approximation\n\n\nDiscuss: to what extent does the training process of an MLP network resemble the learning process of a human being who observes processes (for example: a child learning to play soccer)? Point out similarities and differences." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tapilab/is-karthikbmk
src/Final_code.ipynb
mit
[ "Ridge Regression", "from sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\nfrom sklearn.cross_validation import KFold\nfrom sklearn.preprocessing import PolynomialFeatures\nfrom sklearn.preprocessing import scale\nfrom sklearn.metrics.pairwise import cosine_similarity\nfrom sumy import evaluation\nfrom sumy.models import dom\nfrom sumy.nlp import tokenizers\nfrom stemming.porter2 import stem\nfrom os import listdir\nimport os.path\nfrom nltk.corpus import stopwords\nimport nltk\nimport copy\nimport pickle\nimport unicodedata\nimport re\nimport numpy as np\nimport operator\nimport matplotlib.pyplot as plt\nfrom collections import defaultdict\nimport nltk.data\nsent_detector = nltk.data.load('tokenizers/punkt/english.pickle')\n%matplotlib inline", "<b> Feature Extraction:\n<img src=\".\\Others\\Features.png\" alt=\"HTML5 Icon\" width=\"800\" height=\"500\", style=\"display: ;\">", "data_root_dir = '../data/DUC2001'\nannotation_file = 'annotations.txt'\ntxt_opn_tag = '<TEXT>'\ntxt_close_tag = '</TEXT>'\n\ndef get_cluster_and_its_files(data_root_dir,annotation_file):\n '''Get a Cluster and the file names associated with it\n Returns a dictionary of the form { cluster_1 : [file1,file2,file3....], cluster_2 : [file1,file2,file3....] 
}''' \n \n f = open(data_root_dir + '/' + annotation_file,'r')\n \n clust_files = defaultdict(list)\n \n \n for line in f.readlines():\n cur_line = line.split(';')[0]\n clust_name = cur_line.split('@')[1]\n file_name = cur_line.split('@')[0]\n \n clust_files[clust_name].append(file_name)\n \n f.close()\n \n return clust_files\n \n \n\nclust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\nprint clust_files['mad cow disease']\nclust_list = clust_files.keys()\n\ndef get_text_from_doc(document_path,txt_opn_tag,txt_close_tag):\n \n f = open(document_path,'r')\n content = f.read()\n f.close()\n \n start = content.index(txt_opn_tag) + len(txt_opn_tag)\n end = content.index(txt_close_tag)\n \n return content[start:end]\n \n\ndef tokenize_txt(text,nltk_flag=True,ner_flag=False):\n \n text = text.strip()\n \n if ner_flag == True: \n tokenizedList = re.split('[^a-zA-Z]+', text.lower())\n return tokenizedList\n \n if nltk_flag == False:\n #return [x.lower() for x in re.findall(r\"\\w+\", text)]\n\n tokenizedList = re.split('\\W+', text.lower())\n return [unicode(x,'utf-8') for x in tokenizedList if x != '' and x != '\\n' and x != u'\\x85' and x != '\\r' and x != '_']\n else:\n return nltk.word_tokenize(unicode(text,'utf-8')) \n #return [x for x in toks if x != '' and x != '\\n' and x != u'\\x85' and x != '\\r' and x != '_' and x!= ',' and x != '.'] \n \n\ntokenize_txt('What is this ?? Is this _ cool ? I don\\'t know',nltk_flag=True,ner_flag=True)", "<b>Feature 1 : Term frequency over the cluster(TF)", "def get_term_freqs(data_root_dir,annotation_file,stop_words=None) :\n '''Get the term freqs of words in clusters. 
The term freqs are unique to clusters.\n Returns a dict of form {clust1 : {word1 : 2, word2 :3...},clust2 : {word1 : 2, word2 :3..} ......}'''\n \n #Check about stop_words\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_term_freq = defaultdict(defaultdict)\n \n \n for clust,files in clust_files.iteritems():\n term_freq = defaultdict(int)\n \n for doc in files:\n doc_path = data_root_dir + '/' + doc\n txt = get_text_from_doc(doc_path,txt_opn_tag,txt_close_tag)\n doc_tokens = tokenize_txt(txt)\n \n for token in doc_tokens:\n term_freq[token] += 1\n \n clust_term_freq[clust] = term_freq\n \n return clust_term_freq\n \n \n \n \n\nclust_word_tfs = get_term_freqs(data_root_dir,annotation_file)\nprint clust_word_tfs['cattle disease']", "<b> Feature 2 : Total document number in the datasets, divided by the frequency of documents which contains this word (IDF)", "def get_doc_freqs(data_root_dir,annotation_file):\n \n '''Return a dictionary of the form {word1 : df1 , word2 : df2 ...}'''\n '''Example : {furazabol : 154.5 , the : 1.00032}'''\n \n data_root_dir += '/'\n \n docs = [file_name for _,__,file_name in os.walk(data_root_dir)][0]\n \n if annotation_file in docs:\n docs.remove(annotation_file) \n \n inverted_index = defaultdict(set)\n \n \n for doc in docs:\n doc_path = data_root_dir + doc \n txt = get_text_from_doc(doc_path,txt_opn_tag,txt_close_tag)\n doc_tokens = tokenize_txt(txt)\n \n for token in doc_tokens:\n inverted_index[token].add(doc)\n \n \n \n no_of_docs = len(docs)\n idf_dict = defaultdict(float)\n \n for term,doc_lst in inverted_index.iteritems():\n idf_dict[term] = float(no_of_docs) / len(doc_lst)\n \n return idf_dict\n \n \n \n\ndoc_freqs = get_doc_freqs(data_root_dir,annotation_file)\nprint doc_freqs['furazabol']\nprint doc_freqs['the']", "<b>Feature 3 : The frequency of documents which contains this word in the current cluster (CF)", "def get_clusterwise_dfs(data_root_dir,annotation_file):\n \n '''Return a 
dictionary of the form : {clust1 : (word1 : df1,word2 :df2 .....) , clust1 : (word3 : df3,word2 :df3 .....)}'''\n '''Note that the document frequencies of term are calculated clusterwise, and not on the whole dataset'''\n \n clust_doc_freqs = defaultdict(defaultdict)\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n for clust,files in clust_files.iteritems():\n inverted_index = defaultdict(set)\n \n for doc in files:\n doc_path = data_root_dir + '/' + doc\n txt = get_text_from_doc(doc_path,txt_opn_tag,txt_close_tag)\n doc_tokens = tokenize_txt(txt)\n \n for token in doc_tokens:\n inverted_index[token].add(doc)\n \n \n clust_df = defaultdict(int)\n \n for term,doc_lst in inverted_index.iteritems():\n clust_df[term] = len(doc_lst)\n \n clust_doc_freqs[clust] = clust_df\n \n return clust_doc_freqs\n\nclust_dfs = get_clusterwise_dfs(data_root_dir,annotation_file)\nprint sorted(clust_dfs['mad cow disease'].items(),key=operator.itemgetter(1),reverse=True)[0:20]", "<b>Feature 4 : A 4-dimension binary vector indicates whether the word is a noun, a verb, an adjective or an adverb. If the word has\nanother part-of-speech, the vector is all-zero (POS)", "def get_short_tag(long_tag,valid_pos=['NN','VB','JJ','RB']): \n '''Truncate long_tag to get its first 2 chars. If a valid POS, return first 2 chars. 
else return OT (Other)'''\n '''Valid POS are NN,VB,JJ,RB'''\n \n valid_pos_lst = valid_pos\n \n long_tag = str.upper(long_tag[0:2])\n \n if long_tag in valid_pos_lst:\n return long_tag\n \n else:\n return 'OT' \n\ndef get_sentence_tags(sentence):\n '''POS tag the words in the sentence and return a dict of the form : {word1 : [tag1,tag2..], word2 : [tag3,tag4..]..}'''\n word_tag_dict = defaultdict(set)\n #sent_tags = pos_tagger.tag(tokenize_txt(sentence))\n sent_tags = nltk.pos_tag(tokenize_txt(sentence))\n \n for word_tag in sent_tags:\n word = word_tag[0]\n tag = word_tag[1]\n \n word_tag_dict[word].add(get_short_tag(tag))\n \n return word_tag_dict\n\nprint get_sentence_tags(\"sent one\")\nprint get_sentence_tags(\"sent two\")\n\ndef get_doc_tags(document):\n \n '''Perform POS tagging on all the sentences in the document and return a dict of the form :'''\n ''' (sent_id : { word1 : tag1 ...}...}'''\n \n sent_and_tags = defaultdict(int)\n \n #sentences = document.split('.')\n sentences = sent_detector.tokenize(document,realign_boundaries=True)\n \n for i,sentence in enumerate(sentences):\n sent_and_tags[i] = get_sentence_tags(sentence.strip('.').strip('\\n'.strip('')))\n \n return sent_and_tags\n\nget_doc_tags(\"Who is Alan Turing ??. 
Alan was born in the United Kingdom\")\n\ndef get_cluster_tags(data_root_dir,annotation_file):\n '''Perform Part of Speech Tagging across all the sentences in all the documents in all the clusters'''\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_tags = defaultdict(defaultdict)\n \n i = 1\n for clust,files in clust_files.iteritems(): \n \n for doc in files:\n \n if i %10 == 0:\n print 'Finished tagging doc :', i\n i += 1\n doc_path = data_root_dir + '/' + doc\n txt = get_text_from_doc(doc_path,txt_opn_tag,txt_close_tag)\n \n clust_tags[clust][doc] = get_doc_tags(txt)\n \n return clust_tags\n\nclust_tags = get_cluster_tags(data_root_dir,annotation_file)\n\ndef serialize(file_name,data):\n \n with open(file_name, 'wb') as f: \n pickle.dump(data, f, pickle.HIGHEST_PROTOCOL)\n \n\ndef deserialize(file_name):\n\n with open(file_name, 'rb') as f:\n return pickle.load(f)\n \n\nfile_name = 'pos_tags.pickle'\n#serialize(file_name,clust_tags)\nclust_tags = deserialize(file_name)\nprint 'done'\n\nold_cpy = copy.deepcopy(clust_tags)\n\ndef vectorize_pos(pos_set,pos_idx = {'NN' : 0 ,'VB' : 1,'JJ' : 2,'RB' : 3}):\n \n '''Convert the POS set to a binary vector according to pos_idx''' \n bin_pos_vec = 4*[False]\n \n for pos in pos_set:\n \n if pos == 'OT':\n return 4*[False]\n else:\n bin_pos_vec[pos_idx[pos]] = True\n \n return bin_pos_vec\n\nprint vectorize_pos({'NN','RB'})\nprint vectorize_pos({'NN','RB','JJ','VB','OT'})\n\ndef vectorize_tags_across_clusters(clust_tags):\n '''Binarize the POS of words'''\n\n for clust,doc in clust_tags.iteritems(): \n\n doc_sent = defaultdict(defaultdict)\n\n for doc,sent in doc.iteritems():\n\n sent_word = defaultdict(defaultdict)\n\n for sen_id,word_pos in sent.iteritems():\n\n\n for word,pos in word_pos.iteritems(): \n word_pos[word] = copy.deepcopy(vectorize_pos(pos))\n\n sent_word[sen_id] = copy.deepcopy(word_pos)\n\n doc_sent[doc] = copy.deepcopy(sent_word)\n\n clust_tags[clust] = 
copy.deepcopy(doc_sent)\n\n return clust_tags\n\n\nnew_clust_tags = vectorize_tags_across_clusters(clust_tags)\n\nprint old_cpy['mad cow disease']['LA060490-0083'][2],'\\n\\n'\nprint new_clust_tags['mad cow disease']['LA060490-0083'][2]", "<b>Feature 5 : A binary value equals one iff the output of the named entity classifier from CoreNLP is not empty (Named Entity)", "def extract_ners(data_root_dir,annotation_file):\n '''Perform Named Entity Recognition on all sentences in all docs in all clusters'''\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent,nltk_flag=True,ner_flag=True) for sent in sentences]\n \n sent_ner_cnt = defaultdict(int)\n \n for s_id,tok_sent in enumerate(sent_tokens): \n \n \n ners = ner_tagger.tag(tok_sent)\n cnt = 0\n for ner in ners:\n if ner[1] != 'O':\n cnt += 1\n sent_ner_cnt[s_id] = cnt\n \n doc_sent[file_name] = copy.deepcopy(sent_ner_cnt)\n \n print 'FINISHED NER ON ', file_name\n clust_doc[clust] = copy.deepcopy(doc_sent)\n \n return clust_doc\n\nfile_name = 'ner_tags.pickle'\n#serialize(file_name,clust_ners)\nclust_ners = deserialize(file_name)\nprint 'done'\n\nclust_ners['mad cow disease']['LA060490-0083']", "<b>Feature 6 : A binary value denotes if a word is a Number (Number)</b>", "def extract_digit_cnt(data_root_dir,annotation_file,cnt_ratio_flag='C'):\n '''Count the number of digits in a sentence'''\n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n \n file_path = 
data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent) for sent in sentences]\n \n sent_dig_cnt = defaultdict(int)\n \n for s_id,tok_sent in enumerate(sent_tokens): \n dig_cnt = 0\n for tok in tok_sent:\n if tok.isdigit():\n dig_cnt += 1\n if cnt_ratio_flag == 'C':\n sent_dig_cnt[s_id] = dig_cnt\n else:\n sent_dig_cnt[s_id] = float(dig_cnt)/len(tok_sent)\n \n doc_sent[file_name] = copy.deepcopy(sent_dig_cnt) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_digs = extract_digit_cnt(data_root_dir,annotation_file)\nprint 'done'\n\nprint clust_digs['mad cow disease']['LA060490-0083'][29]", "<b>Feature 22 : The number of digits, divided by the sentence length (Number ratio)", "clust_dig_ratio = extract_digit_cnt(data_root_dir,annotation_file,'R')\nprint 'done'\nclust_dig_ratio['mad cow disease']['LA060490-0083']", "<b>Feature 23 : The number of stop words, divided by the sentence length (Stop word ratio)", "def stop_word_ratio(data_root_dir,annotation_file):\n '''Compute the stop word ratio for all sentences'''\n '''stop word ratio == no of stop words in sent / len(sent) '''\n \n english_stopwords = set(stopwords.words('english'))\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent) for sent in sentences]\n \n sent_dig_cnt = defaultdict(int)\n \n \n for s_id,tok_sent in enumerate(sent_tokens): \n stop_cnt = 0\n for tok in tok_sent:\n if tok.lower() in english_stopwords:\n stop_cnt += 1\n sent_dig_cnt[s_id] = float(stop_cnt)/len(tok_sent)\n \n 
doc_sent[file_name] = copy.deepcopy(sent_dig_cnt) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_stop_word_ratio = stop_word_ratio(data_root_dir,annotation_file)\nprint 'done'\nprint clust_stop_word_ratio['mad cow disease']['LA060490-0083'][18]", "<b>Feature 24 : No of words in the sentence (Sentence Length)", "def sent_length(data_root_dir,annotation_file):\n '''Compute the length of sentences and store them in a dictionary''' \n \n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent) for sent in sentences]\n \n sent_dig_cnt = defaultdict(int)\n \n \n for s_id,tok_sent in enumerate(sent_tokens): \n sent_dig_cnt[s_id] = len(tok_sent)\n \n doc_sent[file_name] = copy.deepcopy(sent_dig_cnt) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_sent_lens = sent_length(data_root_dir,annotation_file)\nprint 'done'\nprint clust_sent_lens['mad cow disease']['LA060490-0083'][15]\n\nfile_path = data_root_dir + '/' + 'LA060490-0083'\ndoc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\nsentences = sent_detector.tokenize(doc)\nprint len(sentences[15].split(' '))", "<b>Feature 21 : The number of named entities divided by sentence length (NER Ratio)", "def ner_ratio(data_root_dir,annotation_file,clust_ners,clust_sent_lens):\n '''Compute the Ratio of NERS : Sentence length and store them in a dictionary''' \n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: 
\n \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n total_sents = len(sent_detector.tokenize(doc))\n #sent_tokens =[tokenize_txt(sent) for sent in sentences]\n \n sent_ner_ratio = defaultdict(int)\n \n \n for i in range(0,total_sents):\n sent_ner_ratio[i] = float(clust_ners[clust][file_name][i])/clust_sent_lens[clust][file_name][i]\n \n \n doc_sent[file_name] = copy.deepcopy(sent_ner_ratio) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_ner_ratio = ner_ratio(data_root_dir,annotation_file,clust_ners,clust_sent_lens)\nprint 'done'\nprint clust_ner_ratio['mad cow disease']['LA060490-0083'][11]", "<b>Feature 20 : The number of nouns,verbs,adverbs, adjectives in the sentence, divided by the length of the sentence (POS Ratio)", "def pos_ratio(data_root_dir,annotation_file,new_clust_tags,clust_sent_lens):\n '''Compute the Ratio of nouns,verbs,adverbs and adjectives : Sentence length and store them in a dictionary'''\n \n clust_doc = defaultdict(defaultdict)\n \n clusters = clust_sent_lens.keys() \n for clust in clusters: \n doc_sent = defaultdict(defaultdict)\n \n files = clust_sent_lens[clust].keys()\n \n for _file in files:\n sent_ids = clust_sent_lens[clust][_file].keys()\n \n sent_pos_ratio = defaultdict(int)\n \n \n for sent_id in sent_ids:\n pos_cnt = 0\n for word,tag_lst in new_clust_tags[clust][_file][sent_id].iteritems():\n '''\n if _file == 'LA060490-0083' and sent_id == 3:\n print tag_lst, pos_cnt*1.0/clust_sent_lens[clust][_file][sent_id]\n #print new_clust_tags[clust][_file][sent_id] \n '''\n if True in tag_lst:\n pos_cnt += 1 \n sent_pos_ratio[sent_id] = float(pos_cnt)/ clust_sent_lens[clust][_file][sent_id]\n \n doc_sent[_file] = copy.deepcopy(sent_pos_ratio) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_pos_ratios = pos_ratio(data_root_dir,annotation_file,new_clust_tags,clust_sent_lens)\nprint 
'done'\n\nclust_pos_ratios['mad cow disease']['LA060490-0083'][3]", "<b>Feature 14 : The position of the sentence. Suppose there are M sentences in the document \n , then for the ith sentence the position is computed as 1-(i-1)/(M-1) (POSITION)</b>", "def sentence_pos(data_root_dir,annotation_file,clust_sent_lens):\n '''Compute the position of the sentence, according to the formula above'''\n \n clust_doc = defaultdict(defaultdict)\n \n clusters = clust_sent_lens.keys() \n for clust in clusters: \n doc_sent = defaultdict(defaultdict)\n \n files = clust_sent_lens[clust].keys()\n \n for _file in files:\n sent_ids = clust_sent_lens[clust][_file].keys()\n \n total_sents = len(clust_sent_lens[clust][_file].keys())\n \n #Avoid divide by 0 error\n if total_sents == 1:\n total_sents = 2 \n \n sent_position = defaultdict(int)\n \n \n for sent_id in sent_ids:\n sent_position[sent_id] = 1 - ( float( sent_id ) / (total_sents - 1) )\n \n doc_sent[_file] = copy.deepcopy(sent_position) \n \n clust_doc[clust] = copy.deepcopy(doc_sent) \n \n return clust_doc\n\nclust_sent_pos = sentence_pos(data_root_dir,annotation_file,clust_sent_lens)\nprint 'done'\n\nclust_sent_pos['mad cow disease']['LA060490-0083']", "<b>Feature 17 : The mean TF of all words in the sentence, divided by the sentence length (Averaged TF)</b>", "def averaged_tf(data_root_dir,annotation_file,clust_word_tfs):\n '''Get the average TF values of words in a sentence and store them in a dictionary'''\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent,nltk_flag=True,ner_flag=True) for sent in sentences]\n \n sent_mean_tf = defaultdict(int)\n \n for s_id,tok_sent in 
enumerate(sent_tokens): \n mean_tf = 0\n for word in tok_sent:\n mean_tf += clust_word_tfs[clust][word]\n mean_tf = float(mean_tf)/len(tok_sent)\n \n sent_mean_tf[s_id] = mean_tf\n \n doc_sent[file_name] = copy.deepcopy(sent_mean_tf)\n \n clust_doc[clust] = copy.deepcopy(doc_sent)\n \n return clust_doc\n\nclust_mean_tfs = averaged_tf(data_root_dir,annotation_file,clust_word_tfs)\nprint 'done'\n\nclust_mean_tfs['mad cow disease']['LA060490-0083']", "<b>Feature 18 : The mean IDF of all words in the sentence, divided by the sentence length (Averaged IDF)</b>", "def averaged_idf(data_root_dir,annotation_file,doc_freqs):\n '''Get the average IDF values of words in a sentence and store them in a dictionary'''\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent,nltk_flag=True,ner_flag=True) for sent in sentences]\n \n sent_mean_idf = defaultdict(int)\n \n for s_id,tok_sent in enumerate(sent_tokens): \n mean_idf = 0\n for word in tok_sent:\n mean_idf += doc_freqs[word]\n mean_idf = float(mean_idf)/len(tok_sent)\n \n sent_mean_idf[s_id] = mean_idf\n \n doc_sent[file_name] = copy.deepcopy(sent_mean_idf)\n \n clust_doc[clust] = copy.deepcopy(doc_sent)\n \n return clust_doc\n\nclust_mean_idfs = averaged_idf(data_root_dir,annotation_file,doc_freqs)\nprint 'done'\n\nclust_mean_idfs['mad cow disease']['LA060490-0083']", "<b>Feature 19 : The mean CF of all words in the sentence, divided by the sentence length (Averaged CF)</b>", "def averaged_cf(data_root_dir,annotation_file,clust_dfs):\n '''Get the average Cluster freq values of words in a sentence and store them in a dictionary'''\n \n clust_files = 
get_cluster_and_its_files(data_root_dir,annotation_file)\n \n clust_doc = defaultdict(defaultdict)\n \n for clust,files in clust_files.iteritems(): \n \n doc_sent = defaultdict(defaultdict)\n \n for file_name in files: \n \n file_path = data_root_dir + '/' + file_name\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n sent_tokens =[tokenize_txt(sent,nltk_flag=True,ner_flag=True) for sent in sentences]\n \n sent_mean_cf = defaultdict(int)\n \n for s_id,tok_sent in enumerate(sent_tokens): \n mean_cf = 0\n for word in tok_sent:\n mean_cf += clust_dfs[clust][word]\n mean_cf = float(mean_cf)/len(tok_sent)\n \n sent_mean_cf[s_id] = mean_cf\n \n doc_sent[file_name] = copy.deepcopy(sent_mean_cf)\n \n clust_doc[clust] = copy.deepcopy(doc_sent)\n \n return clust_doc\n\nclust_mean_cfs = averaged_cf(data_root_dir,annotation_file,clust_dfs)\nprint 'done'\n\nclust_mean_cfs['mad cow disease']['LA060490-0083']\n\ndef get_rouge_n_score(sent_1,sent_2,n=2,do_stem=True):\n '''Normalize the overlapping n-grams and return the score'''\n '''Sentences are converted to lower-case and words are stemmed'''\n \n #lower\n sent_1 = sent_1.lower()\n sent_2 = sent_2.lower()\n \n \n tokenizer = tokenizers.Tokenizer('english')\n\n sent_1_toks = tokenizer.to_words(sent_1)\n sent_2_toks = tokenizer.to_words(sent_2)\n \n \n #stem the sentence\n if do_stem == True:\n sent_1 = ' '.join([stem(tok) for tok in sent_1_toks])\n sent_2 = ' '.join([stem(tok) for tok in sent_2_toks])\n \n \n sent_obj_1= dom.Sentence(sent_1,tokenizer)\n sent_obj_2= dom.Sentence(sent_2,tokenizer)\n \n \n return evaluation.rouge_n([sent_obj_1],[sent_obj_2])\n \n\nprint 'ROUGE with stemming: ' , get_rouge_n_score('This iS SentENce CooLing','This is Sentence cool',2,True)\nprint 'ROUGE without stemming: ' , get_rouge_n_score('This iS SentENce CooLing','This is Sentence cool',2,False)\n\ndef get_docs_without_summary(data_root_dir,annotation_file):\n '''Return a dictionary of the
form {clust1 : [doc1,doc2...],clust2 : [doc1,doc2...] ....}'''\n '''The key is the cluster name, the value is a list of documents, for which summary do not exist'''\n '''This is because, certain documents in the DUC dataset do not have a summary. To weed out such documents, this function\n will be called.'''\n \n clust_files = get_cluster_and_its_files(data_root_dir,annotation_file)\n \n files_with_summ = set( [fname.lower() for fname in listdir(data_root_dir+ '/' + 'Summaries' + '/')] )\n clust_docs_wo_summ = defaultdict(list)\n \n for clust,files in clust_files.iteritems():\n for _file in files:\n tmp = _file + '.txt'\n if tmp.lower() not in files_with_summ:\n clust_docs_wo_summ[clust].append(_file)\n \n return clust_docs_wo_summ\n \n\ndocs_without_summ = get_docs_without_summary(data_root_dir,annotation_file)\nprint docs_without_summ\n\ndef extract_gold_summ_from_doc(document_path):\n '''Extract the Gold summary of a document.'''\n '''Gold summary is of the form <Abstract:> This is the summary <Introduction:>'''\n start_tag = 'Abstract:'\n close_tag = 'Introduction:'\n \n f = open(document_path,'r')\n content = f.read()\n f.close()\n \n start = content.index(start_tag) + len(start_tag)\n end = content.index(close_tag)\n \n return content[start:end].strip()\n\ndoc_path = data_root_dir+ '/' + 'Summaries' + '/' + 'ap880623-0135.txt'\nsumm = extract_gold_summ_from_doc(doc_path)\nprint summ", "Construct the train Matrix with the dimensions M*N,where:\n<b>N = $\\sum_{i=1}^c\\sum_{j=1}^{d_i}{X_{ij}}$ where c is the no of clusters, ${d_i}$ is the no of docs in ${cluster_i}$ , $X_{ij}$ is the no of sentences in $j^{th}$ doc of $i^{th} cluster$<b><br>\n<b>M = no of features for every sentence</b>", "def convert_dict_to_feature_column(clust_files,docs_without_summ):\n '''Convert the nested dictionary to a feature column''' \n feature_col = []\n \n clusters = sorted(clust_files.keys())\n \n for clust in clusters: \n \n files = sorted(clust_files[clust].keys())\n \n for 
_file in files:\n \n #Ignore the docs that do not have a summary. \n if _file not in docs_without_summ[clust]:\n sent_ids = sorted(clust_files[clust][_file].keys()) \n for sent_id in sent_ids:\n feature_col.append(clust_files[clust][_file][sent_id]) \n\n \n return np.array(feature_col)\n\ndef construct_X_Matrix(clust_sent_pos,clust_sent_lens,clust_mean_tfs,clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,\n clust_ner_ratio,clust_dig_ratio,clust_stop_word_ratio):\n \n '''Construct the X_Matrix by stacking the Features, columnwise, for all sentences. Finally return X_train'''\n \n F_position = convert_dict_to_feature_column(clust_sent_pos,docs_without_summ)\n F_length = convert_dict_to_feature_column(clust_sent_lens,docs_without_summ)\n F_mean_tfs = convert_dict_to_feature_column(clust_mean_tfs,docs_without_summ)\n F_mean_idfs = convert_dict_to_feature_column(clust_mean_idfs,docs_without_summ)\n F_mean_cfs = convert_dict_to_feature_column(clust_mean_cfs,docs_without_summ)\n F_pos_ratio = convert_dict_to_feature_column(clust_pos_ratios,docs_without_summ)\n F_ner_ratio = convert_dict_to_feature_column(clust_ner_ratio,docs_without_summ)\n F_dig_ratio = convert_dict_to_feature_column(clust_dig_ratio,docs_without_summ)\n F_stop_word_ratio = convert_dict_to_feature_column(clust_stop_word_ratio,docs_without_summ)\n \n stack = (F_position,F_length,F_mean_tfs,F_mean_idfs,F_mean_cfs,F_pos_ratio,F_ner_ratio,F_dig_ratio,F_stop_word_ratio)\n return np.column_stack(stack)\n\ndef construct_Y(clust_files,docs_without_summ):\n '''Construct the Y output value(ROGUE Score) for every sentence in the document, along\n with the gold summary of the document. i.e ROGUE(sentence,summary)''' \n feature_col = []\n \n clusters = sorted(clust_files.keys())\n \n for clust in clusters: \n \n files = sorted(clust_files[clust])\n \n for _file in files:\n \n #Ignore the docs that do not have a summary. 
\n if _file not in docs_without_summ[clust]:\n\n file_path = data_root_dir + '/' + _file\n doc = get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n \n sum_file_path = data_root_dir+ '/' + 'Summaries' + '/' + _file.lower() + '.txt'\n gold_summ = extract_gold_summ_from_doc(sum_file_path)\n \n for sent in sentences:\n try:\n rouge_score = get_rouge_n_score(sent,gold_summ) \n except:\n #To avoid divide by zero error\n rouge_score = 0\n \n feature_col.append(rouge_score)\n \n \n return np.array(feature_col)", "<b>Cross-validate and plot the mean absolute error of the predicted ROUGE scores</b><br><br>\nThe Cost function is $J(\theta)=\frac{1}{m}\sum_{i=1}^m \left| pred_i - actual_i \right| ,$ <br>where \n$pred_i $ is the predicted ROUGE score of $sentence_i$ and $actual_i $ is the real ROUGE score of $sentence_i$", "def do_cross_validation(X_Matrix,Y,clf,n_folds,degree,Model='ridge'):\n '''Perform n-fold cross validation. \n Params:\n X.........a Matrix of features\n y.........the true rouge score of each sentence\n n_folds...the number of folds of cross-validation to do\n \n Return:\n the average testing error across all folds.'''\n \n if Model != 'deep':\n poly = PolynomialFeatures(degree)\n X_Matrix = poly.fit_transform(X_Matrix)\n \n accuracies = []\n cv = KFold(len(Y), n_folds)\n for train_idx, test_idx in cv: \n clf.fit(X_Matrix[train_idx], Y[train_idx])\n predicted = clf.predict(X_Matrix[test_idx]) \n error = np.mean(np.abs(predicted - Y[test_idx]))\n accuracies.append(error) \n avg = np.mean(accuracies) \n return avg\n \n\ndef plot_accuracies(X_Matrix,Y,clf,n_folds=10,poly_degrees=[1,2,3,4]):\n '''Plot a graph of Test Error vs Polynomial Order''' \n errors = [do_cross_validation(X_Matrix,Y,clf,n_folds,degree) for degree in poly_degrees] \n plt.title('Ridge Regression')\n plt.ylabel('Validation Error')\n plt.xlabel('Polynomial Degree')\n plt.plot(poly_degrees, errors,'r-')\n plt.show()\n\ndef
find_best_order(n_folds,Y,poly_degrees): \n '''Experiment with various settings and figure out the best polynomial setting''' \n X_Matrix = construct_X_Matrix(clust_sent_pos,clust_sent_lens,clust_mean_tfs,clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,\n clust_ner_ratio,clust_dig_ratio,clust_stop_word_ratio)\n \n \n plot_accuracies(X_Matrix,Y,Ridge(),n_folds,poly_degrees)\n\nY = construct_Y(clust_files,docs_without_summ)\nn_folds=10\npoly_degrees=[1,2,3]\nfind_best_order(n_folds,Y,poly_degrees)", "<b>The validation error seems to be at its minimum when the polynomial order is 2. Raise the X_Matrix to this order and fit the regressor</b>", "def get_best_clf(best_order,Y,clf):\n \n poly = PolynomialFeatures(best_order) \n \n X_Matrix = construct_X_Matrix(clust_sent_pos,clust_sent_lens,clust_mean_tfs,clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,\n clust_ner_ratio,clust_dig_ratio,clust_stop_word_ratio)\n X_Matrix = poly.fit_transform(X_Matrix)\n \n print X_Matrix\n clf.fit(X_Matrix,Y)\n print '\nFitted Regressor with best settings'\n return clf\n\nclf = get_best_clf(2,Y,Ridge())\n\ndef construct_X_Matrix_for_test_doc(cluster,document,clust_sent_pos,clust_sent_lens,clust_mean_tfs,clust_mean_idfs,\n clust_mean_cfs,clust_pos_ratios,clust_ner_ratio,clust_dig_ratio,\n clust_stop_word_ratio,poly_order):\n \n '''Extract all the features for a given document and return the extracted features'''\n \n X_Matrix = []\n \n for sent_id in clust_sent_pos[cluster][document].keys(): \n \n F_position = clust_sent_pos[cluster][document][sent_id]\n F_length = clust_sent_lens[cluster][document][sent_id]\n F_mean_tfs = clust_mean_tfs[cluster][document][sent_id]\n F_mean_idfs = clust_mean_idfs[cluster][document][sent_id]\n F_mean_cfs = clust_mean_cfs[cluster][document][sent_id]\n F_pos_ratio = clust_pos_ratios[cluster][document][sent_id]\n F_ner_ratio = clust_ner_ratio[cluster][document][sent_id]\n F_dig_ratio = clust_dig_ratio[cluster][document][sent_id]\n F_stop_word_ratio =
clust_stop_word_ratio[cluster][document][sent_id]\n \n row = [F_position,F_length,F_mean_tfs,F_mean_idfs,F_mean_cfs,\n F_pos_ratio,F_ner_ratio,F_dig_ratio,F_stop_word_ratio]\n \n X_Matrix.append(row)\n \n poly = PolynomialFeatures(poly_order)\n X_Matrix = poly.fit_transform(np.array(X_Matrix))\n \n return X_Matrix", "<b> Greedy Based Sentence Selection.<br>\n-are_sentences_salient()<br>\n-select_sentences()<br></b>", "def are_sentences_salient(clust,sent_1,sent_2,threshold=0.6):\n '''Check if the sentences are salient based on a threshold.\n If COSINE_SIM(sent_1,sent_2) < 0.6, return True. Else False'''\n \n sent_1_toks = tokenize_txt(sent_1)\n sent_2_toks = tokenize_txt(sent_2)\n \n vocab = list(set(sent_1_toks) | set(sent_2_toks))\n \n vec_1 = []\n vec_2 = []\n \n for token in vocab: \n tf = clust_word_tfs[clust][token]\n idf = doc_freqs[token] \n tf_idf = tf*idf\n \n if token in sent_1_toks and token in sent_2_toks: \n vec_1.append(tf_idf)\n vec_2.append(tf_idf) \n elif token in sent_1_toks and token not in sent_2_toks: \n vec_1.append(tf_idf)\n vec_2.append(0.0)\n elif token not in sent_1_toks and token in sent_2_toks: \n vec_1.append(0.0)\n vec_2.append(tf_idf)\n \n vec_1 = np.array(vec_1).reshape(1,-1)\n vec_2 = np.array(vec_2).reshape(1,-1)\n \n sim_score = list(cosine_similarity(vec_1,vec_2)[0])[0] \n \n if sim_score < threshold:\n return True\n else:\n return False\n\ns1 = 'A commission chaired by Professor Sir Richard Southwood '\ns2 = 'The government has committed $19 million to finding the cause of the disease Germany'\nprint are_sentences_salient('mad cow disease',s1 ,s2,threshold=0.6)\n\ns1 = '$19 million disease Germany '\ns2 = 'The government has committed $19 million to finding the cause of the disease Germany'\nprint are_sentences_salient('mad cow disease',s1 ,s2,threshold=0.6)\n\ndef doc_to_sent_list(document):\n '''Convert a document to a list of sentences'''\n \n file_path = data_root_dir + '/' + document\n doc = 
get_text_from_doc(file_path,txt_opn_tag,txt_close_tag)\n sentences = sent_detector.tokenize(doc)\n \n return sentences\n\ndef select_sentences(cluster,document,y_hats,sents_in_summ):\n '''In each step of the selection,based on the greedy approach, select a sentence if it satisfies 2 conditions:\n 1. It has the next maximum predicted ROGUE score\n 2. It is salient and not very similar to the previously generated sentence in the summary'''\n \n top_scores = sorted(y_hats,reverse=True)\n \n prev_sent = ''\n sent_id = 0\n j = 0\n \n all_sentences = doc_to_sent_list(document)\n \n summary = ''\n while(sent_id < sents_in_summ and j < len(top_scores)):\n top_sent_idx = y_hats.index(top_scores[j])\n cur_sent = all_sentences[top_sent_idx]\n if are_sentences_salient(cluster,prev_sent,cur_sent,threshold=0.6):\n summary += cur_sent + ' '\n prev_sent = cur_sent\n sent_id += 1 \n j += 1\n \n return summary\n\ndef generate_summary(cluster,document,clf,order,i,sents_in_summ=2):\n \n '''Generate the summary for a document with sents_in_summ number of sentences in it'''\n \n X_Matrix = construct_X_Matrix_for_test_doc(cluster,document,clust_sent_pos,clust_sent_lens,clust_mean_tfs,\n clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,clust_ner_ratio,\n clust_dig_ratio,clust_stop_word_ratio,order)\n \n y_hats = list(clf.predict(X_Matrix)) \n \n print 'Generated SUMMARY for doc ',i, '::\\n-----------------------------------' \n summary = select_sentences(cluster,document,y_hats,sents_in_summ)\n print str.replace(str.replace(summary,'<P>',''),'</P>','').strip()\n print '\\n' \n \n print 'Actual SUMMARY for doc ',i, '::\\n-----------------------------------' \n summary_path = data_root_dir+ '/' + 'Summaries' + '/' + document.lower() + '.txt'\n print extract_gold_summ_from_doc(summary_path)\n print '\\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'\n \n '''\n print 'COMPLETE TEXT :: \\n---------------------'\n complete_text = 
''.join([sent for sent in sentences])\n print str.replace(str.replace(complete_text,'<P>',''),'</P>','').strip()\n '''\n \n\ndef random_summaries(prob,sents=1):\n '''Generate summaries for approx ( (1-prob) *total_docs) number of documents\n total_docs = 300 approx.\n prob should be in the range : 0 <= prob <= 1.0 . 0 indicates all documents, 1 indicates none of the documents'''\n \n i = 1\n for clust,docs in clust_files.items():\n for doc in docs: \n if np.random.uniform(low=0.0, high=1.0) > prob and doc not in docs_without_summ[clust] :\n generate_summary(clust,doc,clf,2,i,sents)\n i += 1\n \n print 'Generation Complete'\n\nrandom_summaries(0.98,1)\n\ndef serialize_data_matrix(poly_deg=2):\n '''Helper function to dump the data matrix onto hard disc'''\n poly = PolynomialFeatures(poly_deg) \n \n X_Matrix = construct_X_Matrix(clust_sent_pos,clust_sent_lens,clust_mean_tfs,clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,\n clust_ner_ratio,clust_dig_ratio,clust_stop_word_ratio)\n X_Matrix = poly.fit_transform(X_Matrix)\n \n data = np.column_stack((X_Matrix,Y))\n serialize('data_matrix',data)\n print 'done'\n\nserialize_data_matrix(2)", "<b>Hard Baseline which assumes the very first sentence of the document as the summary</b>", "def get_predicted_rouge(cluster,document,clf,order,sents_in_summ=1): \n \n X_Matrix = construct_X_Matrix_for_test_doc(cluster,document,clust_sent_pos,clust_sent_lens,clust_mean_tfs,\n clust_mean_idfs,clust_mean_cfs,clust_pos_ratios,clust_ner_ratio,\n clust_dig_ratio,clust_stop_word_ratio,order)\n \n y_hats = list(clf.predict(X_Matrix)) \n \n pred_summary = select_sentences(cluster,document,y_hats,sents_in_summ)\n \n summary_path = data_root_dir+ '/' + 'Summaries' + '/' + document.lower() + '.txt'\n gold_summary = extract_gold_summ_from_doc(summary_path) \n \n try:\n rogue = get_rouge_n_score(gold_summary,pred_summary,n=2,do_stem=True) \n except:\n rogue = 0 \n \n return rogue \n\ndef evaluate_custom_model(clf,order=2,sents_in_summ=1):\n 
'''Evaluate the Model'''\n \n rouge_lst = []\n for clust,docs in clust_files.items():\n for doc in docs: \n if doc not in docs_without_summ[clust] :\n rouge = get_predicted_rouge(clust,doc,clf,order,sents_in_summ) \n rouge_lst.append(rouge)\n \n \n avg = sum(rouge_lst)/len(rouge_lst)\n return avg \n\ndef evaluate_hard_baseline_2():\n '''Blindly assume first sentence as the predicted sentence. Compute the ROGUE_SCORE between\n the first sentence and the actual summary.'''\n \n rouge_lst = []\n for clust,docs in clust_files.items():\n for doc in docs: \n if doc not in docs_without_summ[clust] :\n first_sentence = doc_to_sent_list(doc)[0] \n \n doc_path = data_root_dir+ '/' + 'Summaries' + '/' + doc.lower() + '.txt'\n summary = extract_gold_summ_from_doc(doc_path)\n\n try:\n rogue = get_rouge_n_score(first_sentence,summary,n=2,do_stem=True)\n except: \n rogue = 0\n \n rouge_lst.append(rogue)\n \n avg = sum(rouge_lst)/len(rouge_lst)\n return avg \n\nhard_baseline_accuracy = evaluate_hard_baseline_2()\nmodel_accuracy = evaluate_custom_model(clf,order=2,sents_in_summ=1)\n\nprint 'First Sentence Model\\'s accuracy',hard_baseline_accuracy\nprint 'Our Model\\'s accuracy', model_accuracy", "Deep Models", "from sklearn.neural_network import MLPRegressor\n\ndef get_Xmatrix_and_y(file_name):\n summ_data = deserialize(file_name) \n X_mat = summ_data[:,0:len(summ_data[0])-1]\n y = summ_data[:,len(summ_data[0])-1:]\n return X_mat,y\n\ndef train_regressor(optimizer,hidden_layer_units,activation_func,epochs):\n '''Fit an MLP with the specified settings and return the trained regressor'''\n '''Optimizer : Any of lbfgs,sgd etc\n hidden_layer_units : A tuple of the form (x,y,z) where x is the no of units in 1st hidden layer,y in 2nd and so on \n activation_fun : logistic / tanh / relu\n epochs : No of epochs\n X : Train Matrix\n y : True values\n '''\n regr = MLPRegressor(solver=optimizer,hidden_layer_sizes=hidden_layer_units,activation=activation_func,max_iter=epochs) \n return 
regr\n\ndef run_mlp(optimizers,activation_funcs,epochs,hid_layer_sizes,X_mat,y):\n '''Run the MLP for settings specified in the parameters'''\n optimizer_act = defaultdict(defaultdict)\n for optimizer in optimizers:\n act_epoch = defaultdict(defaultdict)\n for act_func in activation_funcs:\n epoch_hl = defaultdict(defaultdict)\n for epoch in epochs:\n hl_err = defaultdict(float)\n for hid_layer_size in hid_layer_sizes: \n regr = train_regressor(optimizer,hid_layer_size,act_func,epoch)\n error = do_cross_validation(X_mat,y,regr,10,None,'deep')\n hl_err[len(hid_layer_size)] = error\n print optimizer,act_func,epoch,len(hid_layer_size),'h_layers complete. Error = ',error\n epoch_hl[epoch] = copy.deepcopy(hl_err)\n act_epoch[act_func] = copy.deepcopy(epoch_hl)\n optimizer_act[optimizer] = copy.deepcopy(act_epoch)\n return optimizer_act\n\ndef get_best_hyperparams():\n '''Specify the various settings for hyperparameters here. Calls run_mlp to get the Validation error'''\n X_mat,y = get_Xmatrix_and_y('data_matrix')\n optimizers = ['sgd','adam','lbfgs'] \n activation_funcs = ['tanh','logistic'] \n epochs = [5,10,15,20]\n single_hl = (57,)\n double_hl = (57,57)\n triple_hl = (57,57,57)\n hid_layer_sizes = [single_hl,double_hl,triple_hl]\n return run_mlp(optimizers,activation_funcs,epochs,hid_layer_sizes,X_mat,y)\n\noptimizer_act = get_best_hyperparams()\n\n#serialize('MLP_Val_Err.pickle',optimizer_act)\noptimizer_act = deserialize('MLP_Val_Err.pickle')\n\ndef plot_epoch_vs_errors(optimizer_act):\n '''Plot Epoch vs Validation Errors by varying the following :\n 1. No of Hidden Layers\n 2. No of Epochs\n 3. Varying the optimizer\n 4. 
Varying the activation function for each optimizer\n '''\n for optimizer in sorted(optimizer_act.keys()):\n for act_func in sorted(optimizer_act[optimizer].keys()):\n layer_error = defaultdict(list)\n for epoch in sorted(optimizer_act[optimizer][act_func].keys()): \n for h_layer in sorted(optimizer_act[optimizer][act_func][epoch].keys()): \n layer_error[h_layer].append(optimizer_act[optimizer][act_func][epoch][h_layer])\n\n x = sorted(optimizer_act[optimizer][act_func].keys())\n y_lst = []\n for lyr in sorted(optimizer_act[optimizer][act_func][epoch].keys()):\n y_lst.append(layer_error[lyr])\n\n plt.xlabel('Epochs')\n plt.ylabel('Validation Error')\n plt.plot(x,y_lst[0],'r-')\n plt.plot(x,y_lst[1],'g-')\n plt.plot(x,y_lst[2],'b-')\n plt.legend(['1 Hidden Layer', '2 Hidden Layers', '3 Hidden Layers'], loc='upper right',columnspacing=0.0, labelspacing=0.0,\n )\n title = optimizer.upper() + ' with ' + act_func.upper() + ' activation'\n plt.title(title)\n plt.show()\n\nplot_epoch_vs_errors(optimizer_act)", "<table border =1>\n<caption style='color:green;font-size:22px'>Settings with Less Error:</caption>\n<tr><th>Optimizer</th> <th>Activation</th><th>Hidden Layers</th></tr>\n<tr><td>Adam</td> <td>Logistic</td><td>3</td></tr>\n<tr><td>LBFGS</td> <td>Logistic</td><td>3</td></tr>\n<tr><td>SGD</td> <td>Logistic</td><td>3</td></tr>\n</table>", "def get_model_accuracy(): \n '''Get the Deep Models Accuracies'''\n model_acc = defaultdict(float)\n data = 'data_matrix'\n #ADAM, Logistic 3 \n name = 'adam'\n model = train_regressor(name,(57,57,57),'logistic',20)\n X,y = get_Xmatrix_and_y(data)\n model.fit(X,y)\n model_acc[name] = evaluate_custom_model(model,order=2,sents_in_summ=1)\n \n #LBFGS, Logistic 3 \n name = 'lbfgs'\n model = train_regressor(name,(57,57,57),'logistic',20)\n X,y = get_Xmatrix_and_y(data)\n model.fit(X,y)\n model_acc[name] = evaluate_custom_model(model,order=2,sents_in_summ=1)\n\n #SGD, Logistic 3 \n name = 'sgd'\n model = 
train_regressor(name,(57,57,57),'logistic',20)\n X,y = get_Xmatrix_and_y(data)\n model.fit(X,y)\n model_acc[name] = evaluate_custom_model(model,order=2,sents_in_summ=1) \n \n\n return model_acc\n\nmodel_acc = get_model_accuracy() \nmodel_acc['first_sent'] = evaluate_hard_baseline_2()\nmodel_acc['ridge'] = evaluate_custom_model(clf,order=2,sents_in_summ=1)\nmodel_acc \n\nobjects = tuple([x.upper() for x in model_acc.keys()])\ny_pos = np.arange(len(objects))\nperformance = model_acc.values()\n \nplt.bar(y_pos, performance, align='center', alpha=0.5)\nplt.xticks(y_pos, objects)\nplt.ylabel('Test Accuracy')\nplt.title('Test Accuracies of Different Models')\n \nplt.show()", "Ridge is the Winner" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
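The notebook above computes its ROUGE-n scores through an external summarization-evaluation library (`tokenizers`, `dom`, `evaluation`). For readers who want to follow the idea without that dependency, here is a minimal, self-contained sketch of the underlying measure — n-gram recall between a candidate and a reference — under the simplifying assumption of plain whitespace tokenization and no stemming. The names `ngrams` and `rouge_n_recall` are hypothetical and not part of the notebook's code.

```python
def ngrams(tokens, n):
    # all contiguous n-grams of a token list, collected as a set
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def rouge_n_recall(candidate, reference, n=2):
    # minimal sketch: fraction of reference n-grams that also
    # appear in the candidate (ROUGE-n is recall-oriented)
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not ref:
        return 0.0
    return len(cand & ref) / len(ref)

print(rouge_n_recall("this is sentence cool", "this is sentence cooling", n=2))
```

This omits stemming, stop-word handling, and multi-sentence averaging, which is why the notebook delegates to a proper evaluation library instead.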
mromanello/SunoikisisDC_NER
participants_notebooks/Sunoikisis - Named Entity Extraction 1a-LV.ipynb
gpl-3.0
[ "Plan of the lecture\n\nIntroduction: Information Extraction and Named Entity Recognition (NER)\nNER: definitions and tasks (extraction, classification, disambiguation)\nbasic programming concepts in Python\nDoing NER with existing libraries:\nNER from Latin texts with CLTK\nNER from journal articles with NLTK\n\n\n\nPython: basic concepts\nPython is a very flexible and very powerful programming language that can help you work with texts and corpora. Python's philosophy emphasizes code readability and features a simple and very expressive syntax. It is actually easy to master the basic aspects of Python's syntax: it is amazing how much you can do even with just the most basic concepts... The aim of these two lectures is to introduce to you some of these basic operations, let you see some code in action and also give you some exercises where you can apply what you've seen.\nIt is also amazing how many things you can accomplish with some well-written lines of Python! By the end of this class, we'd like to show you how you use Python to perform (some) Natural Language Processing. But of course, you can even just use Python to do something as easy as...", "2 + 3", "Variables and data types\nHere we go! We've written our first line of code... But I guess we want to do something a little more interesting, right? Well, for a start, we might want to use Python to execute some operation (say: sum two numbers like 2 and 3) and process the result to print it on the screen, process it, and reuse it as many times as we want...\nVariables are what we use to store values. Think of a variable as a shoebox where you place your content; next time you need that content (i.e. the result of a previous operation, or for example some input you've read from a file) you simply call the shoebox name...", "result = 2 - 2\n\n#now we print the result\nprint(result)\n\n# by the way, I'm a comment. I'm not executed\n# every line of code following the sign # is ignored:\n# print(\"I'm line n.
3: do you see me?\")\n# see? You don't see me...\nprint(\"I'm line nr. 5 and you DO see me!\")", "That's it! As easy as that (yes, in some programming languages you have to create or declare the variable first and then use it to fill the shoebox; in Python, you go ahead and simply use it!)\nNow, what do you think we will get when we execute the following code?", "result + 8", "What types of values can we put into a variable? What goes into the shoebox? We can start with the members of this list:\n\nIntegers (-1,0,1,2,3,4...)\nStrings (\"Hello\", \"s\", \"Wolfgang Amadeus Mozart\", \"I am the α and the ω!\"...)\nfloats (3.14159; 2.71828...)\nBooleans (True, False)\n\nIf you're not sure what type of value you're dealing with, you can use the function type(). Yes, it works with variables too...!", "type(\"I am the α and the ω!\")\n\ntype(2.7182818284590452353602874713527)\n\ntype(True)\n\nresult = \"hello\"\n\ntype(result)", "You declare strings with single ('') or double (\"\") quotes: it's totally indifferent! But now two questions:\n1. what happens if you forget the quotes?\n2. what happens if you put quotes around a number?", "mionome = \"lucia\"\nprint(mionome)\n\nprint(\"ciao\")\n\ntype(\"4\")", "String, integer, float... Why is that so important?
Well, try to sum two strings and see what happens...", "\"2\" + \"3\"\n\n#probably you wanted this...\nint(\"2\") + int(\"3\")", "But if we are working with strings, then the \"+\" sign is used to concatenate the strings:", "a = \"interesting!\"\nprint(\"not very \" + a)", "Lists and dictionaries\nLists and dictionaries are two very useful types to store whole collections of data", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\"]\ntype(beatles)\n\n# dictionaries are collections of key : value pairs\nbeatles_dictionary = { \"john\" : \"John Lennon\" ,\n \"paul\" : \"Paul McCartney\",\n \"george\" : \"George Harrison\",\n \"ringo\" : \"Ringo Starr\"}\ntype(beatles_dictionary)", "(there are also other types of collection, like Tuples and Sets, but we won't talk about them now; read the links if you're interested!)\nItems in a list are accessible using their index. Do remember that indexing starts from 0!", "print(beatles[3], beatles[1])\n\n#indexes can be negative!\nbeatles[-1]", "Dictionaries are collections of key : value pairs. You access the value using the key as the index", "beatles_dictionary[\"paul\"]\n\nbeatles_dictionary[0]", "There are a bunch of methods that you can apply to lists to work with them.\nYou can append items at the end of a list", "beatles.append(\"randomname\")\nbeatles", "You can learn the index of an item", "beatles.index(\"Paul\")", "You can insert elements at a given index:", "beatles.insert(0, \"Pete Best\")\nprint(beatles.index(\"George\"))\nbeatles", "But most importantly, you can slice lists, producing sub-lists by specifying the range of indexes you want:", "beatles[1:5]", "Do you notice something strange? Yes, the limit index is not inclusive (i.e.
item beatles[5] is not included)", "beatles[5]", "What happens if you specify an index that is too high?", "beatles[7]", "How can you know how long a list is?", "len(beatles)", "Do remember that indexing starts at 0, so don't make the mistake of thinking that len(yourlist) will give you the last item of your list!", "beatles[len(beatles)]", "This will work!", "beatles[len(beatles) -1]", "If-statements\nMost of the time, what you want to do when you program is to check a value and execute some operation depending on whether the value matches some condition. That's where if statements help!\nIn its easiest form, an if statement is a syntactic construction that checks whether a condition is met; if it is, some part of the code is executed", "bassist = \"Paul McCartney\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass with the Beatles!\")", "Mind the indentation very much! This is the essential element in the syntax of the statement", "bassist = \"Bill Wyman\"\n\nif bassist == \"Paul McCartney\":\n print(\"I'm part of the if statement...\")\n print(\"Paul played bass in the Beatles!\")", "What happens if the condition is not met? Nothing! The indented code is not executed, because the condition is not met, so lines 4 and 5 are simply skipped.\nBut what happens if we de-indent line 5?
Can you guess why this is what happens?\nMost of the time, we need to specify what happens if the conditions are not met", "bassist = \"\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelse:\n print(\"This guy did not play for the Beatles...\")", "This is the flow:\n* the condition in line 3 is checked\n* is it met?\n * yes: then line 4 is executed\n * no: then line 6 is executed\nOr we can specify many different conditions...", "bassist = \"Mike\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelif bassist == \"Bill Wyman\":\n print(\"Bill Wyman played for the Rolling Stones!\")\nelif bassist == \"Lucia\":\n print(\"Lucia played for Oasis\")\nelif bassist == \"Mike\":\n print(\"Mike played for xxx\")\nelse:\n print(\"I don't know what band this guy played for...\")", "For loops\nThe greatest thing about lists is that they are iterable, that is you can loop through them. What do we do if we want to apply some line of code to each element in a list? Try with a for loop!\nA for loop can be paraphrased as: \"for each element named x in an iterable (e.g. a list): do some code (e.g. print the value of x)\"", "for b in beatles:\n print(b + \" was one of the Beatles\")", "Let's break the code down to its parts:\n* b: an arbitrary name that we give to the variable holding every value in the loop (it could have been any name; b is just very convenient in this case!)\n* beatles: the list we're iterating through\n* : as in the if-statements: don't forget the colon!\n* indent: also, don't forget to indent this code!
it's the only thing that is telling python that line 2 is part of the for loop!\n* line 2: the function that we want to execute for each item in the iterables\nNow, let's join if statements and for loop to do something nice...", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\", \"Lucia\"]\nfor b in beatles:\n if b == \"Paul\":\n instrument = \"bass\"\n elif b == \"John\":\n instrument = \"rhythm guitar\"\n elif b == \"George\":\n instrument = \"lead guitar\"\n elif b == \"Ringo\":\n instrument = \"drum\"\n elif b == \"Lucia\":\n instrument = \"piano\"\n print(b + \" played \" + instrument + \" with the Beatles\")", "Input and Output\nOne of the most frequent tasks that programmers do is reading data from files, and write some of the output of the programs to a file. \nIn Python (as in many language), we need first to open a file-handler with the appropriate mode in order to process it. Files can be opened in:\n* read mode (\"r\")\n* write mode (\"w\")\n* append mode\nLet's try to read the content of one of the txt files of our Sunoikisis directory\nFirst, we open the file handler in read mode:", "#see? 
we assign the file-handler to a variable, or we wouldn't be able\n#to do anything with that!\nf = open(\"NOTES.md\", \"r\")", "note that \"r\" is optional: read is the default mode!\nNow there are a bunch of things we can do:\n* read the full content in one variable with this code:\ncontent = f.read()\n\nread the lines in a list of lines:\n\nlines = f.readlines()\n\nor, which is the easiest, simply read the content one line at the time with a for loop; the f object is iterable, so this is as easy as:", "for l in f:\n print(l)", "Once you're done, don't forget to close the handle:", "f.close()\n\n#all together\nf = open(\"NOTES.md\")\nfor l in f:\n print(l)\nf.close()", "Now, there's a shortcut statement, which you'll often see and is very convenient, because it takes care of opening, closing and cleaning up the mess, in case there's some error:", "with open(\"NOTES.md\") as f:\n #mind the indent!\n for l in f:\n #double indent, of course!\n print(l)", "Now, how about writing to a file? Let's try to write a simple message on a file; first, we open the handler in write mode", "out = open(\"test.txt\", \"w\")\n\n#the file is now open; let's write something in it\nout.write(\"Mio test This is a test!\\nThis is a second line (separated with a new-line feed)\")", "The file has been created! Let's check this out", "#don't worry if you don't understand this code!\n#We're simply listing the content of the current directory...\nimport os\nos.listdir()", "But before we can do anything (e.g. open it with your favorite text editor) you have to close the file-handler!", "out.close()", "Let's look at its content", "with open(\"test.txt\") as f:\n print(f.read())", "Again, also for writing we can use a with statement, which is very handy.\nBut let's have a look at what happens here, so we understand a bit better why \"write mode\" must be used carefully!", "with open(\"test.txt\", \"w\") as out:\n out.write(\"Oooops! 
new content\")", "Let's have a look at the content of \"test.txt\" now", "with open(\"test.txt\") as f:\n print(f.read())", "See? After we opened the file in \"write mode\" for the second time, all content of the file was erased and replaced with the new content that we wrote!!!\nSo keep in mind: when you open a file in \"w\" mode:\n\n* if it doesn't exist, a new file with that name is created\n* if it does exist, it is completely overwritten and all previous content is lost\n\nIf you want to write content to an existing file without losing its previous content, you have to open the file with the \"a\" mode:", "with open(\"test.txt\", \"a\") as out:\n out.write('''\\nAnd this is some additional content.\nThe new content is appended at the bottom of the existing file''')\n\nwith open(\"test.txt\") as f:\n print(f.read())", "Functions\nAbove, we have opened a file several times to inspect its content. Each time, we had to type the same code over and over. This is the typical case where you would like to save some typing (and write code that is much easier to maintain!) by defining a function.\nA function is a block of reusable code that can be invoked to perform a definite task. Most often (but not necessarily), it accepts one or more arguments and returns a certain value.\nWe have already seen one of the built-in functions of Python: print(\"some str\")\nBut it's actually very easy to define your own. Let's define the function to print out the file content, as we said before. 
Note that this function takes one argument (the file name) and prints out some text, but doesn't return any value.", "def printFileContent(file_name):\n #the function takes one argument: file_name\n with open(file_name) as f:\n print(f.read())", "As usual, mind the indent!\nfile_name (line 1) is the placeholder that we use in the function for any argument that we want to pass to the function in our real-life reuse of the code.\nNow, if we want to use our function we simply call it with the file name that we want to print out", "printFileContent(\"README.md\")", "Now, let's see an example of a function that returns some value to the user. Those functions typically take some arguments, process them and yield back the result of this processing.\nHere's the easiest example possible: a function that takes two numbers as arguments, sums them and returns the result.", "def sumTwoNumbers(first_int, second_int):\n s = first_int + second_int\n return s\n\n#could be even shorter:\ndef sumTwoNumbers(first_int, second_int):\n return first_int + second_int\n\nsumTwoNumbers(5, 6)", "Most often, you want to assign the result returned to a variable, so that you can go on working with the results...", "s = sumTwoNumbers(5,6)\ns * 2", "Error and exceptions\nThings can go wrong, especially when you're a beginner. But no panic! Errors and exceptions are actually a good thing! Python gives you detailed reports about what is wrong, so read them carefully and try to figure out what is not right.\nOnce you're getting better, you'll actually learn that you can do something good with the exceptions: you'll learn how to handle them, and to anticipate some of the most common problems that dirty data can face you with...\nNow, what happens if you forget the all-important syntactic constraint of the code indent?", "if 1 > 0:\n print(\"Well, we know that 1 is bigger than 0!\")", "Pretty clear, isn't it? What you get is a syntax error: you used a construct that is not grammatical in Python's syntax. 
Note that you're also told where (at what line, and at what point of the code) your error is occurring. That is not always perfect (there are cases where the problem is actually occurring before what Python thinks), but in this case it's pretty OK.\nWhat if you forget to define a variable (or you misspell the name of a variable)?", "var = \"bla bla\"\nif var1:\n print(\"If you see me, then I was defined...\")", "You get an exception! The syntax of your code is right, but the execution met with a problem that caused the program to stop.\nNow, in your program, you can handle selected exceptions: this means that you can write your code in a way that the program would still be executed even if a certain exception is raised.\nLet's see what happens if we use our function to try to print the content of a file that doesn't exist:", "printFileContent(\"file_that_is_not_there.txt\")", "We get a FileNotFoundError! Now, let's re-write the function so that this event (somebody uses the function with a wrong file name) is taken care of...", "def printFileContent(file_name):\n #the function takes one argument: file_name\n try:\n with open(file_name) as f:\n print(f.read())\n except FileNotFoundError:\n print(\"The file does not exist.\\nNevertheless, I do like you, and I will print something to you anyway...\")\n\nprintFileContent(\"file_that_doesnt_exist.txt\")", "Appendix: useful links\nPython: how to install\nIf you're using Mac OSX or Linux, you already have (at least one version) of Python installed. Anyway, it's very easy to install Python or upgrade your version. See:\nhttps://wiki.python.org/moin/BeginnersGuide/Download\nJupyter: how to install\nhttp://jupyter.org/install.html\nPython and Jupyter also come in a pre-packaged environment (which is designed especially for data science) called Anaconda. You might be interested to look at that.\nPython 2 or Python 3?\nPython 3 is the latest version of Python (currently, 3.6.1). 
It's a major upgrade from Python 2, but the language changed quite dramatically in the passage from 2 to 3, and there are some backward-compatibility problems. Some versions of Linux or Mac OSX still come with Python 2.7 (the final version of Python 2).\nAnyway, Python 3 is currently in active development: it's where the cutting-edge improvements and new stuff are being developed (especially for NLP and the NLTK library). In this code, we assume Python 3!\nhttps://wiki.python.org/moin/Python2orPython3\nNLTK: Book\nWould you like a book that is a great introduction to Python for absolute beginners, is a wonderful resource to learn the basics of Natural Language Processing and gives you a thorough introduction to the NLTK library to do NLP in Python? Oh, yeah, I was forgetting: that can be read for free on the internet? Yes, it's Christmas time!\nhttp://www.nltk.org/book/" ]
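Putting the tutorial's pieces together — writing a file, defining a function, and handling a FileNotFoundError — here is a short self-contained sketch (the file names demo.txt and not_there.txt are made up for illustration):

```python
def count_lines(file_name):
    # Return the number of lines in a file, or None if it doesn't exist
    try:
        with open(file_name) as f:
            return sum(1 for _ in f)
    except FileNotFoundError:
        print("No such file: " + file_name)
        return None

# Create a small file, then count its lines
with open("demo.txt", "w") as out:
    out.write("first line\nsecond line\nthird line")

print(count_lines("demo.txt"))       # 3
print(count_lines("not_there.txt"))  # None (after printing a message)
```

Note how the with statement closes the file for us in both the writing and the counting step, and how the try/except keeps the program running even when the file is missing.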
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
xtr33me/deep-learning
tensorboard/Anna_KaRNNa_Summaries.ipynb
mit
[ "Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">", "import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf", "First we'll load the text file and convert it into integers for our network to use.", "with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]", "Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. 
This will keep 90% of the batches in the training set, the other 10% in the validation set.", "def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Number of examples in each batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the first split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y = x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]", "I'll write another function to grab batches out of the arrays made by split_data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window to the next sequence of num_steps characters. 
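As a standalone illustration of this sliding window (a toy example with made-up sizes, separate from the notebook's split_data/get_batch):

```python
import numpy as np

batch_size, num_steps = 2, 3
# 2 rows of 9 steps each -> 3 windows of width num_steps per row
data = np.arange(18).reshape(batch_size, 9)

windows = [data[:, b*num_steps:(b+1)*num_steps]
           for b in range(data.shape[1] // num_steps)]

print(windows[0])  # columns 0..2 of each row
print(windows[1])  # columns 3..5 -- the window slid over by num_steps
```

Each window keeps the same rows (the same batch slices), so the LSTM state computed at the end of one window lines up with the start of the next.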
In this way we can feed batches to the network and the cell states will continue through on each batch.", "def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_cells\"):\n lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n seq_output = tf.concat(outputs, axis=1,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN outputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = 
tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n tf.summary.histogram('softmax_w', softmax_w)\n tf.summary.histogram('softmax_b', softmax_b)\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n tf.summary.histogram('predictions', preds)\n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n tf.summary.scalar('cost', cost)\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n merged = tf.summary.merge_all()\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer', 'merged']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph", "Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. 
Decrease the size of the network or decrease the dropout keep probability.", "batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001", "Training\nTime for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.", "!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 100\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)\n test_writer = tf.summary.FileWriter('./logs/2/test')\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost, \n model.final_state, model.optimizer], feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n train_writer.add_summary(summary, iteration)\n \n if 
(iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n summary, batch_loss, new_state = sess.run([model.merged, model.cost, \n model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n \n test_writer.add_summary(summary, iteration)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n #saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')", "Sampling\nNow that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. 
To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.", "def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)" ]
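The top-N trick used in pick_top_n above can be illustrated on its own, without the trained model — a small numpy sketch with a made-up prediction row:

```python
import numpy as np

np.random.seed(0)
preds = np.array([[0.05, 0.4, 0.3, 0.2, 0.05]])  # fake softmax output, shape (1, vocab)

p = np.squeeze(preds)
p[np.argsort(p)[:-3]] = 0   # zero out everything except the top 3 probabilities
p = p / np.sum(p)           # renormalize so it is a valid distribution again

print(p)  # only indices 1, 2, 3 keep any probability mass
samples = [np.random.choice(5, p=p) for _ in range(100)]
print(set(samples))  # every draw comes from the top-3 indices
```

This is why the generated text stays coherent: low-probability characters are cut off entirely, but there is still randomness among the plausible ones.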
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
nikodtbVf/aima-si
.ipynb_checkpoints/csp-checkpoint.ipynb
mit
[ "Constraint Satisfaction Problems (CSPs)\nThis IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in the csp.py module. Even though this notebook includes a brief summary of the main topics, familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.", "from csp import *", "Review\nCSPs are a special kind of search problem. Here we don't treat the space as a black box; the state has a particular form, and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP state is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.", "%psource CSP", "The _ init _ method parameters specify the CSP. Variables can be passed as a list of strings or integers. Domains are passed as a dict whose keys are the variables and whose values are the domains. If the variables are passed as an empty list, they are extracted from the keys of the domain dictionary. Neighbors is a dict of variables that essentially describes the constraint graph: each variable key has as its value a list of the variables it is constrained with. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. 
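As a minimal illustration of a function with this signature (the variable names 'X' and 'Y' are made up; the csp module's own constraint functions follow the same pattern):

```python
def different_values_constraint(A, a, B, b):
    # Satisfied exactly when neighbors A and B are assigned different values
    return a != b

print(different_values_constraint('X', 1, 'Y', 1))  # False: same value on neighbors
print(different_values_constraint('X', 1, 'Y', 2))  # True: constraint satisfied
```

Any binary constraint can be expressed this way: the function inspects one pair of neighboring variables and their candidate values, and returns a boolean.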
We have additional parameters like nassigns, which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.\nGraph Coloring\nWe use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of the map-coloring problem is that adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes, we use a custom dict defined by the UniversalDict class. The UniversalDict class takes in a parameter which it returns as the value for all the keys of the dict. It is very similar to defaultdict in Python, except that it does not support item assignment.", "s = UniversalDict(['R','G','B'])\ns[5]", "For our CSP we also need to define a constraint function f(A, a, B, b). In this case, what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.", "%psource different_values_constraint", "The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows us to take input in the form of strings and return a Dict of the form compatible with the CSP class.", "%pdoc parse_neighbors", "The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. 
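Independently of the csp module, a brute-force sketch shows what satisfying these constraints means on a toy map (three mutually adjacent regions, a hypothetical setup — not the module's search algorithm):

```python
from itertools import product

colors = ['R', 'G', 'B']
variables = ['WA', 'NT', 'SA']  # toy triangle of mutually adjacent regions
neighbors = {'WA': ['NT', 'SA'], 'NT': ['WA', 'SA'], 'SA': ['WA', 'NT']}

def consistent(assignment):
    # Every variable must differ from each of its neighbors
    return all(assignment[v] != assignment[n]
               for v in variables for n in neighbors[v])

solutions = [dict(zip(variables, vals))
             for vals in product(colors, repeat=len(variables))
             if consistent(dict(zip(variables, vals)))]

print(len(solutions))  # a triangle with 3 colors has 3! = 6 proper colorings
print(solutions[0])
```

Enumerating all 27 assignments is fine here, but it scales exponentially — which is exactly why the search techniques below (backtracking, ordering heuristics, inference) matter.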
australia corresponds to Figure 6.1 in the book.", "%psource MapColoringCSP\n\naustralia, usa, france", "NQueens\nThe N-queens puzzle is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring problem, NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed to the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.", "%psource queen_constraint", "The NQueensCSP class implements methods that support solving the problem via min_conflicts, which is one of the techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve the CSP, assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups, which help in recording conflicts, are explained in the docstring.", "%psource NQueensCSP", "The _ init _ method takes only one parameter, n, the size of the problem. To create an instance we just pass the required n into the constructor.", "eight_queens = NQueensCSP(8)", "Helper Functions\nWe will now implement a few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing classes and functions for additional bookkeeping. To begin with, we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assingment_history. We call this new class InstruCSP. 
This would allow us to see how the assignment evolves over time.", "import copy\nclass InstruCSP(CSP):\n \n def __init__(self, variables, domains, neighbors, constraints):\n super().__init__(variables, domains, neighbors, constraints)\n self.assingment_history = []\n \n def assign(self, var, val, assignment):\n super().assign(var,val, assignment)\n self.assingment_history.append(copy.deepcopy(assignment))\n \n def unassign(self, var, assignment):\n super().unassign(var,assignment)\n self.assingment_history.append(copy.deepcopy(assignment)) ", "Next, we define make_instru which takes an instance of CSP and returns an InstruCSP instance.", "def make_instru(csp):\n return InstruCSP(csp.variables, csp.domains, csp.neighbors,\n csp.constraints)", "We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.", "neighbors = {\n 0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4], \n 1: [12, 12, 14, 14], \n 2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14], \n 3: [20, 8, 19, 12, 20, 19, 8, 12], \n 4: [11, 0, 18, 5, 18, 5, 11, 0], \n 5: [4, 4], \n 6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14], \n 7: [13, 16, 13, 16], \n 8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14], \n 9: [20, 15, 19, 16, 15, 19, 20, 16], \n 10: [17, 11, 2, 11, 17, 2], \n 11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4], \n 12: [8, 3, 8, 14, 1, 3, 1, 14], \n 13: [7, 15, 18, 15, 16, 7, 18, 16], \n 14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12], \n 15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16], \n 16: [7, 15, 13, 9, 7, 13, 15, 9], \n 17: [10, 2, 2, 10], \n 18: [15, 0, 13, 4, 0, 15, 13, 4], \n 19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9], \n 20: [3, 19, 9, 19, 3, 9]\n}", "Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance created by MapColoringCSP, which builds on the CSP class. 
This means that our make_instru function will work perfectly for it.", "coloring_problem = MapColoringCSP('RGBY', neighbors)\n\ncoloring_problem1 = make_instru(coloring_problem)", "Backtracking Search\nFor solving a CSP, the main issue with naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and a few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.", "result = backtracking_search(coloring_problem1)\n\nresult # A dictionary of assignments.", "Let us also check the number of assignments made.", "coloring_problem1.nassigns", "Now let us check the total number of assignments and unassignments, which is the length of our assignment history.", "len(coloring_problem1.assingment_history)", "Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out methods in the CSP class that help make this work. \nThe first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. 
The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function, num_legal_values, to sort the variables by the number of legal values left in their domains. This function, in turn, calls the nconflicts method of the CSP to return such values.", "%psource mrv\n\n%psource num_legal_values\n\n%psource CSP.nconflicts", "Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. The idea behind selecting the mrv and lcv makes sense because we need to do all variables, but for values, we might better try the ones that are likely. So for vars, we face the hard ones first.", "%psource lcv", "Finally, the third parameter, inference, can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in Section 6.3.2 of the book. In short, the idea of inference is to detect possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can learn more about these by looking up the source code.\nNow let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. 
We will call the instances solve_simple and solve_parameters, solve them using backtracking, and compare the number of assignments.", "solve_simple = copy.deepcopy(usa)\nsolve_parameters = copy.deepcopy(usa)\n\nbacktracking_search(solve_simple)\nbacktracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac)\n\nsolve_simple.nassigns\n\nsolve_parameters.nassigns", "Graph Coloring Visualization\nNext, we define some functions to create the visualisation from the assingment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows, as it is the usage of Matplotlib with IPython Widgets. If you are interested in reading more about these, visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.", "%matplotlib inline\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport time", "The ipython widgets we will be using require the plots in the form of a step function, such that there is a graph corresponding to each value. We define the make_update_step_function, which returns such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. This will be more clear with the example below. 
If this sounds confusing do not worry this is not the part of the core material and our only goal is to help you visualize how the process works.", "def make_update_step_function(graph, instru_csp):\n \n def draw_graph(graph):\n # create networkx graph\n G=nx.Graph(graph)\n # draw graph\n pos = nx.spring_layout(G,k=0.15)\n return (G, pos)\n \n G, pos = draw_graph(graph)\n \n def update_step(iteration):\n # here iteration is the index of the assingment_history we want to visualize.\n current = instru_csp.assingment_history[iteration]\n # We convert the particular assingment to a default dict so that the color for nodes which \n # have not been assigned defaults to black.\n current = defaultdict(lambda: 'Black', current)\n\n # Now we use colors in the list and default to black otherwise.\n colors = [current[node] for node in G.node.keys()]\n # Finally drawing the nodes.\n nx.draw(G, pos, node_color=colors, node_size=500)\n\n labels = {label:label for label in G.node}\n # Labels shifted by offset so as to not overlap nodes.\n label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}\n nx.draw_networkx_labels(G, label_pos, labels, font_size=20)\n\n # show graph\n plt.show()\n\n return update_step # <-- this is a function\n\ndef make_visualize(slider):\n ''' Takes an input a slider and returns \n callback function for timer and animation\n '''\n \n def visualize_callback(Visualize, time_step):\n if Visualize is True:\n for i in range(slider.min, slider.max + 1):\n slider.value = i\n time.sleep(float(time_step))\n \n return visualize_callback\n ", "Finally let us plot our problem. We first use the function above to obtain a step function.", "step_func = make_update_step_function(neighbors, coloring_problem1)", "Next we set the canvas size.", "matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)", "Finally our plot using ipywidget slider and matplotib. You can move the slider to experiment and see the coloring change. 
It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds, up to one second, for each time step.", "import ipywidgets as widgets\nfrom IPython.display import display\n\niteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assingment_history)-1, step=1, value=0)\nw=widgets.interactive(step_func,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)", "NQueens Visualization\nJust like the Graph Coloring Problem, we will start by defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similarly to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which modifies the grid by placing a 3 at positions where there is a conflict.", "def label_queen_conflicts(assingment,grid):\n ''' Mark grid with queens that are under conflict. 
'''\n for col, row in assingment.items(): # check each queen for conflict\n row_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items() \n if temp_row == row and temp_col != col}\n up_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items() \n if temp_row+temp_col == row+col and temp_col != col}\n down_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items() \n if temp_row-temp_col == row-col and temp_col != col}\n \n # Now marking the grid.\n for col, row in row_conflicts.items():\n grid[col][row] = 3\n for col, row in up_conflicts.items():\n grid[col][row] = 3\n for col, row in down_conflicts.items():\n grid[col][row] = 3\n\n return grid\n\ndef make_plot_board_step_function(instru_csp):\n '''ipywidgets interactive function supports\n single parameter as input. This function\n creates and return such a function by taking\n in input other parameters.\n '''\n n = len(instru_csp.variables)\n \n \n def plot_board_step(iteration):\n ''' Add Queens to the Board.'''\n data = instru_csp.assingment_history[iteration]\n \n grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]\n grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.\n \n # color map of fixed colors\n cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])\n bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).\n norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)\n \n fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)\n\n plt.axis('off')\n fig.axes.get_xaxis().set_visible(False)\n fig.axes.get_yaxis().set_visible(False)\n\n # Place the Queens Unicode Symbol\n for col, row in data.items():\n fig.axes.text(row, col, u\"\\u265B\", va='center', ha='center', family='Dejavu Sans', fontsize=32)\n plt.show()\n \n return plot_board_step", "Now let us visualize a solution obtained via backtracking. 
We make use of the previously defined make_instru function for keeping a history of steps.", "twelve_queens_csp = NQueensCSP(12)\nbacktracking_instru_queen = make_instru(twelve_queens_csp)\nresult = backtracking_search(backtracking_instru_queen)\n\nbacktrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets", "Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set a time delay in seconds, up to one second, for each time step.", "matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)\nmatplotlib.rcParams['font.family'].append(u'Dejavu Sans')\n\niteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assingment_history)-1, step=1, value=0)\nw=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)", "Now let us finally repeat the above steps for the min_conflicts solution.", "conflicts_instru_queen = make_instru(twelve_queens_csp)\nresult = min_conflicts(conflicts_instru_queen)\n\nconflicts_step = make_plot_board_step_function(conflicts_instru_queen)", "The visualization has the same features as the above. 
But here it also highlights the conflicts by labeling the conflicted queens with a red background.", "iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assingment_history)-1, step=1, value=0)\nw=widgets.interactive(conflicts_step,iteration=iteration_slider)\ndisplay(w)\n\nvisualize_callback = make_visualize(iteration_slider)\n\nvisualize_button = widgets.ToggleButton(description = \"Visualize\", value = False)\ntime_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])\n\na = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)\ndisplay(a)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
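The record above walks through the MRV and LCV heuristics via `%psource`, without showing them working together in a complete search. A standalone sketch of both heuristics on a small map-coloring instance, independent of the aima-python module (the helper names `legal_values`, `select_mrv`, `order_lcv`, and `backtrack` are our own, not aima-python's):

```python
# Map-coloring CSP: mainland Australia plus Tasmania, three colors.
neighbors = {
    'WA': ['NT', 'SA'], 'NT': ['WA', 'SA', 'Q'],
    'SA': ['WA', 'NT', 'Q', 'NSW', 'V'],
    'Q': ['NT', 'SA', 'NSW'], 'NSW': ['Q', 'SA', 'V'],
    'V': ['SA', 'NSW'], 'T': [],
}
colors = ['R', 'G', 'B']

def legal_values(var, assignment):
    """Colors for var that conflict with no already-assigned neighbor."""
    return [c for c in colors
            if all(assignment.get(n) != c for n in neighbors[var])]

def select_mrv(assignment):
    """Minimum-remaining-values: pick the most constrained variable."""
    unassigned = [v for v in neighbors if v not in assignment]
    return min(unassigned, key=lambda v: len(legal_values(v, assignment)))

def order_lcv(var, assignment):
    """Least-constraining-value: prefer colors that prune neighbors least."""
    def ruled_out(c):
        trial = dict(assignment, **{var: c})
        return sum(len(colors) - len(legal_values(n, trial))
                   for n in neighbors[var] if n not in assignment)
    return sorted(legal_values(var, assignment), key=ruled_out)

def backtrack(assignment=None):
    assignment = {} if assignment is None else assignment
    if len(assignment) == len(neighbors):
        return assignment
    var = select_mrv(assignment)
    for color in order_lcv(var, assignment):
        result = backtrack(dict(assignment, **{var: color}))
        if result is not None:
            return result
    return None

solution = backtrack()
print(solution)
```

Because `order_lcv` only ever yields legal values, every partial assignment explored is consistent, which is exactly why these heuristics reduce the `nassigns` counts compared in the cells above.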
mne-tools/mne-tools.github.io
0.18/_downloads/8a8a7b741ef485e4d3d3772b252fcb81/plot_configuration.ipynb
bsd-3-clause
[ "%matplotlib inline", "Configuring MNE python\nThis tutorial gives a short introduction to MNE configurations.", "import os.path as op\n\nimport mne\nfrom mne.datasets.sample import data_path\n\nfname = op.join(data_path(), 'MEG', 'sample', 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(fname).crop(0, 10)\noriginal_level = mne.get_config('MNE_LOGGING_LEVEL', 'INFO')", "MNE-python stores configurations to a folder called .mne in the user's\nhome directory, or to AppData directory on Windows. The path to the config\nfile can be found out by calling :func:mne.get_config_path.", "print(mne.get_config_path())", "These configurations include information like sample data paths and plotter\nwindow sizes. Files inside this folder should never be modified manually.\nLet's see what the configurations contain.", "print(mne.get_config())", "We see fields like \"MNE_DATASETS_SAMPLE_PATH\". As the name suggests, this is\nthe path the sample data is downloaded to. All the fields in the\nconfiguration file can be modified by calling :func:mne.set_config.\nLogging\nConfigurations also include the default logging level for the functions. This\nfield is called \"MNE_LOGGING_LEVEL\".", "mne.set_config('MNE_LOGGING_LEVEL', 'INFO')\nprint(mne.get_config(key='MNE_LOGGING_LEVEL'))", "The default value is now set to INFO. This level will now be used by default\nevery time we call a function in MNE. We can set the global logging level for\nonly this session by calling :func:mne.set_log_level function.", "mne.set_log_level('WARNING')\nprint(mne.get_config(key='MNE_LOGGING_LEVEL'))", "Notice how the value in the config file was not changed. Logging level of\nWARNING only applies for this session. Let's see what logging level of\nWARNING prints for :func:mne.compute_raw_covariance.", "cov_raw = mne.compute_raw_covariance(raw)", "Nothing. This means that no warnings were emitted during the computation. 
If\nyou look at the documentation of :func:mne.compute_raw_covariance, you\nnotice the verbose keyword. Setting this parameter does not touch the\nconfigurations, but sets the logging level for just this one function call.\nLet's see what happens with logging level of INFO.", "cov = mne.compute_raw_covariance(raw, verbose=True)", "As you see there is some info about what the function is doing. The logging\nlevel can be set to 'DEBUG', 'INFO', 'WARNING', 'ERROR' or 'CRITICAL'. It can\nalso be set to an integer or a boolean value. The correspondence to string\nvalues can be seen in the table below. verbose=None uses the default\nvalue from the configuration file.\n+----------+---------+---------+\n| String | Integer | Boolean |\n+==========+=========+=========+\n| DEBUG | 10 | |\n+----------+---------+---------+\n| INFO | 20 | True |\n+----------+---------+---------+\n| WARNING | 30 | False |\n+----------+---------+---------+\n| ERROR | 40 | |\n+----------+---------+---------+\n| CRITICAL | 50 | |\n+----------+---------+---------+", "mne.set_config('MNE_LOGGING_LEVEL', original_level)\nprint('Config value restored to: %s' % mne.get_config(key='MNE_LOGGING_LEVEL'))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
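The logging-level table in the record above maps MNE's string levels onto integers and booleans. The string/integer correspondence is exactly the standard library's `logging` constants, which can be checked directly (the `bool_alias` dict is our shorthand for the table's boolean column, not an MNE API):

```python
import logging

# String levels from the MNE tutorial's table and their integer values.
level_table = {'DEBUG': 10, 'INFO': 20, 'WARNING': 30,
               'ERROR': 40, 'CRITICAL': 50}

# Each entry matches the stdlib logging constant of the same name.
for name, value in level_table.items():
    assert getattr(logging, name) == value

# The table's boolean column: True behaves like INFO, False like WARNING.
bool_alias = {True: 'INFO', False: 'WARNING'}
print('all levels verified')
```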
tensorflow/examples
courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb
apache-2.0
[ "Copyright 2018 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Running TFLite models\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />\n Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_lite/tflite_c01_linear_regression.ipynb\">\n <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />\n View source on GitHub</a>\n </td>\n</table>\n\nSetup", "import tensorflow as tf\n\nimport pathlib\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.keras.models import Model\nfrom tensorflow.keras.layers import Input", "Create a basic model of the form y = mx + c", "# Create a simple Keras model.\nx = [-1, 0, 1, 2, 3, 4]\ny = [-3, -1, 1, 3, 5, 7]\n\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Dense(units=1, input_shape=[1])\n])\nmodel.compile(optimizer='sgd', loss='mean_squared_error')\nmodel.fit(x, y, epochs=200, verbose=1)", "Generate a SavedModel", "export_dir = 'saved_model/1'\ntf.saved_model.save(model, export_dir)", "Convert the SavedModel to TFLite", "# Convert the model.\nconverter = 
tf.lite.TFLiteConverter.from_saved_model(export_dir)\ntflite_model = converter.convert()\n\ntflite_model_file = pathlib.Path('model.tflite')\ntflite_model_file.write_bytes(tflite_model)", "Initialize the TFLite interpreter to try it out", "# Load TFLite model and allocate tensors.\ninterpreter = tf.lite.Interpreter(model_content=tflite_model)\ninterpreter.allocate_tensors()\n\n# Get input and output tensors.\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\n\n# Test the TensorFlow Lite model on random input data.\ninput_shape = input_details[0]['shape']\ninputs, outputs = [], []\nfor _ in range(100):\n input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)\n interpreter.set_tensor(input_details[0]['index'], input_data)\n\n interpreter.invoke()\n tflite_results = interpreter.get_tensor(output_details[0]['index'])\n\n # Test the TensorFlow model on random input data.\n tf_results = model(tf.constant(input_data))\n output_data = np.array(tf_results)\n \n inputs.append(input_data[0][0])\n outputs.append(output_data[0][0])", "Visualize the model", "plt.plot(inputs, outputs, 'r')\nplt.show()", "Download the TFLite model file", "try:\n from google.colab import files\n files.download(tflite_model_file)\nexcept:\n pass" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
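The record above trains y = 2x - 1 with a one-unit Keras model before converting it to TFLite. The same least-squares fit can be reproduced with plain gradient descent, which makes the target weights explicit without needing TensorFlow (the learning rate and iteration count here are arbitrary choices, not values from the notebook):

```python
# Same training data as the Keras cell above; true relation is y = 2x - 1.
xs = [-1, 0, 1, 2, 3, 4]
ys = [-3, -1, 1, 3, 5, 7]

m, c = 0.0, 0.0  # slope and intercept, both start at zero
lr = 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to m and c.
    grad_m = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_c = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * grad_m
    c -= lr * grad_c

print(round(m, 3), round(c, 3))  # converges to m ~ 2, c ~ -1
```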
stallmanifold/cs229-machine-learning-stanford-fall-2016
src/homework2/q3/homework2_3.ipynb
apache-2.0
[ "Homework 2 Problem 3", "import numpy as np\nimport pandas as pd\nimport read_matrix as rm\nimport nb_train\nimport nb_test\nimport svm_train\nimport svm_test", "Part 3.a\nThe first machine learning algorithm for classifying spam emails is the Naive Bayes model. First we trained the model using the MATRIX.TRAIN data file.", "df_train = rm.read_data('spam_data/MATRIX.TRAIN')\n\nnb_model = nb_train.train(df_train)", "Next we ran the model against the testing data.", "df_test = rm.read_data('spam_data/MATRIX.TEST')\n\nnb_predictions = nb_test.test(nb_model, df_test)", "The following is the testing error.", "y = df_test.iloc[:,0]\nnb_error = nb_test.compute_error(y, nb_predictions)\n\nprint('NB Test error: {}'.format(nb_error))", "Part 3.b.\nThe five most indicative words of a spam message are the following.", "words = nb_test.k_most_indicative_words(5, nb_model.to_dataframe().iloc[:,1:])\n\nprint('The {} most spam-worthy words are: {}'.format(len(words), words))", "Part 3.c.\nTo test the convergence properties of the Naive Bayes classifier on the email data set, it needs to be run on different training set sizes. 
Here we use six different sized training sets to see how the error rate progresses.", "training_set_files = {\n 50 : 'spam_data/MATRIX.TRAIN.50', \n 100 : 'spam_data/MATRIX.TRAIN.100', \n 200 : 'spam_data/MATRIX.TRAIN.200', \n 400 : 'spam_data/MATRIX.TRAIN.400', \n 800 : 'spam_data/MATRIX.TRAIN.800', \n 1400 : 'spam_data/MATRIX.TRAIN.1400'\n }", "Estimate the models and compute the errors.", "nb_models = {}\nfor size, filename in training_set_files.items():\n df_next = rm.read_data(filename)\n m = nb_train.train(df_next)\n nb_models[size] = m\n\nnb_errors = {}\nfor size, model in nb_models.items():\n guessed_y = nb_test.test(model, df_test)\n err = nb_test.compute_error(y, guessed_y)\n nb_errors[size] = err", "The resulting errors are", "print('Naive Bayes')\nfor size, error in nb_errors.items():\n print('size: {}; error: {}'.format(size, error))", "As the training set size increases, the error rate for the Naive Bayes classifier decreases. It converges above a training set size of about 1000 emails.\nPart 3.d.\nThe second model used to classify the emails is a support vector machine. 
As in part (a), we train the SVM model using the MATRIX.TRAIN data.", "tau = 8\nmax_iters = 40\n\nsvm_model = svm_train.train(df_train, tau, max_iters)", "Next, we run the trained SVM model against the testing data.", "svm_predictions = svm_test.test(svm_model, df_test)\n\nprint(svm_predictions.shape)", "The testing error is:", "ytest = 2 * df_test.iloc[:,0].as_matrix() - 1\nsvm_error = svm_test.compute_error(ytest, svm_predictions)\n\nprint('SVM Test Error: {}'.format(svm_error))", "For the varying sized training sets, we estimate an SVM model.", "svm_models = {}\nfor size, filename in training_set_files.items():\n df_next = rm.read_data(filename)\n m = svm_train.train(df_next, tau, max_iters)\n svm_models[size] = m", "And we compute the errors for each model.", "svm_errors = {}\nfor size, model in svm_models.items():\n guessed_y = svm_test.test(model, df_test)\n err = svm_test.compute_error(ytest, guessed_y)\n svm_errors[size] = err", "The resulting errors are", "print('Support Vector Machine')\nfor size, error in svm_errors.items():\n print('size: {}; error: {}'.format(size, error))", "Part 3.e.\nFor this data set, the SVM is a much better classifier than the Naive Bayes classifier. Indeed, it converges to zero error much more rapidly than the Naive Bayes classifier in the simulations." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
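The record above compares a Naive Bayes spam classifier against an SVM but keeps the training code in external modules (`nb_train`, `nb_test`). A self-contained sketch of a multinomial Naive Bayes with Laplace smoothing, the standard model for this homework, on made-up token counts (the toy data and helper names here are illustrative only, not the homework's files):

```python
import math

# Toy corpus: per-email word counts over a 3-word vocabulary, with labels.
emails = [([3, 0, 1], 0),
          ([2, 1, 0], 0),
          ([0, 4, 2], 1),
          ([1, 3, 3], 1)]  # 0 = ham, 1 = spam
V = 3  # vocabulary size

def nb_train(emails):
    phi, prior = {}, {}
    for c in (0, 1):
        docs = [x for x, lab in emails if lab == c]
        counts = [sum(col) for col in zip(*docs)]
        total = sum(counts)
        phi[c] = [(k + 1) / (total + V) for k in counts]  # Laplace smoothing
        prior[c] = len(docs) / len(emails)
    return phi, prior

def nb_predict(x, phi, prior):
    # Log-posterior up to a constant; pick the larger class score.
    score = {c: math.log(prior[c]) +
                sum(k * math.log(p) for k, p in zip(x, phi[c]))
             for c in (0, 1)}
    return max(score, key=score.get)

phi, prior = nb_train(emails)
print(nb_predict([0, 5, 1], phi, prior))  # a word-2-heavy email -> spam (1)
```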
NuGrid/NuPyCEE
DOC/Capabilities/Modifying_yields.ipynb
bsd-3-clause
[ "Modifying yields\nThis Notebook shows how to modify specific yields without having to re-generate yields tables for every case. The modification will alter the input yields internally (within the code) and will leave the original yields table files intact.\nTo do so, the yield_modifier (developed by Tom Trueman) argument must be used, which consists of a list of arrays in the form of [ [iso, M, Z, type, modifier] , [...]]. This will modify the yield of a specific isotope for the given M and Z by multiplying the yield by a given factor (type=\"multiply\") or replacing the yield by a new value (type=\"replace\"). Modifier will be either the factor or value depending on type.\nNotebook created by Benoit Côté", "# Import Python modules\nimport matplotlib.pyplot as plt\n\n# Import NuPyCEE codes\nfrom NuPyCEE import sygma", "Modifying the isotopic yields of a specific stellar model", "# Select an isotope and a stellar model (the model needs to be in the yields table).\niso = \"Si-28\"\nM = 15.0\nZ = 0.02\n\n# Run SYGMA with no yields modification.\ns1 = sygma.sygma(iniZ=Z)\n\n# Run SYGMA where the yield is multiplied by 2.\nfactor = 2.0\nyield_modifier = [ [iso, M, Z, \"multiply\", factor] ]\ns2 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier)\n\n# Run SYGMA where the yield is replaced by 0.6.\nvalue = 0.6\nyield_modifier = [ [iso, M, Z, \"replace\", value] ]\ns3 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier)\n\n# Get the isotope array index.\ni_iso = s1.history.isotopes.index(iso)\n\n# Print the yield that was taken by SYGMA.\nprint(iso,\"yield (original) : \", s1.get_interp_yields(M,Z)[i_iso],\"Msun\")\nprint(iso,\"yield multiplied by\",factor,\":\", s2.get_interp_yields(M,Z)[i_iso],\"Msun\")\nprint(iso,\"yield replaced by\",value,\": \", s3.get_interp_yields(M,Z)[i_iso],\"Msun\")\n\n# Plot the yields as a function of stellar mass.\n# Note: The y axis is not the yields as found in the yield table file.\n# It is the IMF-weighted yields from a 1Msun stellar 
population.\n%matplotlib nbagg\ns3.plot_mass_range_contributions(specie=iso, color=\"C0\", label=\"Replaced\")\ns2.plot_mass_range_contributions(specie=iso, color=\"C1\", label=\"Multiplied\")\ns1.plot_mass_range_contributions(specie=iso, color=\"C2\", label=\"Original\")\nplt.title(iso,fontsize=12)", "Modifying the isotopic yields of several stellar models", "# Select an isotope\niso = \"Si-28\"\n\n# Define the list of multiplication factor for each stellar model\nfactor_list = [2, 4, 8, 16]\nM_list = [12.0, 15.0, 20.0, 25.0]\nZ = 0.02\n\n# Fill the yield_modifier array\nyield_modifier = []\nfor M, factor in zip(M_list, factor_list):\n yield_modifier.append([iso, M, Z, \"multiply\", factor])\n\n# Run SYGMA with yields modification\ns4 = sygma.sygma(iniZ=Z, yield_modifier=yield_modifier)\n\n# Plot the yields as a function of stellar mass (IMF weighted yields for a 1Msun population)\n%matplotlib nbagg\ns4.plot_mass_range_contributions(specie=iso, color=\"C1\", label=\"Multiplied\")\ns1.plot_mass_range_contributions(specie=iso, color=\"C2\", label=\"Original\")\nplt.title(iso,fontsize=12)", "Example with OMEGA", "# Import the NuPyCEE galactic chemical evolution code\nfrom NuPyCEE import omega\n\n# Print the list of available M and Z in the yields table\nprint(\"M:\",s1.M_table)\nprint(\"Z:\",s1.Z_table)\n\n# Boost the Mg-24 yields of all massive stars by a factor of 2\niso = \"Mg-24\"\nyield_modifier = []\nfor M in [12.0, 15.0, 20.0, 25.0]:\n for Z in [0.02, 0.01, 0.006, 0.001, 0.0001]:\n yield_modifier.append([iso,M,Z,\"multiply\",2])\n\n# Run OMEGA with and without the yield modifier\no1 = omega.omega()\no2 = omega.omega(yield_modifier=yield_modifier)\n\n# Plot the amount of Mg-24 present in the interstellar medium of the galaxy\n%matplotlib nbagg\no2.plot_mass(specie=iso, label=\"Modified\", color=\"r\", shape=\"--\")\no1.plot_mass(specie=iso, label=\"Original\")\n\n# Set visual\nplt.xscale(\"linear\")\nplt.xlabel(\"Galactic age [yr]\")\nplt.ylabel(iso+\" mass in 
the ISM [Msun]\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
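The `yield_modifier` entries in the record above follow the form [iso, M, Z, type, modifier]. A sketch of the substitution logic they imply, applied to a toy in-memory table (the dict layout and the `apply_modifiers` helper are illustrative only, not NuPyCEE internals):

```python
# Toy yields table keyed by (isotope, initial mass, metallicity); values in Msun.
yields = {("Si-28", 15.0, 0.02): 0.05,
          ("Mg-24", 15.0, 0.02): 0.01}

def apply_modifiers(yields, yield_modifier):
    """Return a modified copy, leaving the original table intact."""
    out = dict(yields)
    for iso, M, Z, kind, value in yield_modifier:
        key = (iso, M, Z)
        if kind == "multiply":
            out[key] = out[key] * value
        elif kind == "replace":
            out[key] = value
        else:
            raise ValueError("type must be 'multiply' or 'replace'")
    return out

modified = apply_modifiers(yields, [["Si-28", 15.0, 0.02, "multiply", 2.0],
                                    ["Mg-24", 15.0, 0.02, "replace", 0.6]])
print(modified)
```

Copying the table before modifying it mirrors the notebook's point that the original yields table files are left untouched.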
YihaoLu/statsmodels
examples/notebooks/statespace_structural_harvey_jaeger.ipynb
bsd-3-clause
[ "Detrending, Stylized Facts and the Business Cycle\nIn an influential article, Harvey and Jaeger (1993) described the use of unobserved components models (also known as \"structural time series models\") to derive stylized facts of the business cycle.\nTheir paper begins:\n\"Establishing the 'stylized facts' associated with a set of time series is widely considered a crucial step\nin macroeconomic research ... For such facts to be useful they should (1) be consistent with the stochastic\nproperties of the data and (2) present meaningful information.\"\n\nIn particular, they make the argument that these goals are often better met using the unobserved components approach rather than the popular Hodrick-Prescott filter or Box-Jenkins ARIMA modeling techniques.\nStatsmodels has the ability to perform all three types of analysis, and below we follow the steps of their paper, using a slightly updated dataset.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom IPython.display import display, Latex", "Unobserved Components\nThe unobserved components model available in Statsmodels can be written as:\n$$\ny_t = \\underbrace{\\mu_{t}}_{\\text{trend}} + \\underbrace{\\gamma_{t}}_{\\text{seasonal}} + \\underbrace{c_{t}}_{\\text{cycle}} + \\sum_{j=1}^k \\underbrace{\\beta_j x_{jt}}_{\\text{explanatory}} + \\underbrace{\\varepsilon_t}_{\\text{irregular}}\n$$\nsee Durbin and Koopman 2012, Chapter 3 for notation and additional details. Notice that different specifications for the different individual components can support a wide range of models. 
The specific models considered in the paper and below are specializations of this general equation.\nTrend\nThe trend component is a dynamic extension of a regression model that includes an intercept and linear time-trend.\n$$\n\\begin{align}\n\\underbrace{\\mu_{t+1}}_{\\text{level}} & = \\mu_t + \\nu_t + \\eta_{t+1} \\qquad & \\eta_{t+1} \\sim N(0, \\sigma_\\eta^2) \\\\\n\\underbrace{\\nu_{t+1}}_{\\text{trend}} & = \\nu_t + \\zeta_{t+1} & \\zeta_{t+1} \\sim N(0, \\sigma_\\zeta^2) \\\n\\end{align}\n$$\nwhere the level is a generalization of the intercept term that can dynamically vary across time, and the trend is a generalization of the time-trend such that the slope can dynamically vary across time.\nFor both elements (level and trend), we can consider models in which:\n\nThe element is included vs excluded (if the trend is included, there must also be a level included).\nThe element is deterministic vs stochastic (i.e. whether or not the variance on the error term is confined to be zero or not)\n\nThe only additional parameters to be estimated via MLE are the variances of any included stochastic components.\nThis leads to the following specifications:\n| | Level | Trend | Stochastic Level | Stochastic Trend |\n|----------------------------------------------------------------------|-------|-------|------------------|------------------|\n| Constant | ✓ | | | |\n| Local Level <br /> (random walk) | ✓ | | ✓ | |\n| Deterministic trend | ✓ | ✓ | | |\n| Local level with deterministic trend <br /> (random walk with drift) | ✓ | ✓ | ✓ | |\n| Local linear trend | ✓ | ✓ | ✓ | ✓ |\n| Smooth trend <br /> (integrated random walk) | ✓ | ✓ | | ✓ |\nSeasonal\nThe seasonal component is written as:\n<span>$$\n\\gamma_t = - \\sum_{j=1}^{s-1} \\gamma_{t+1-j} + \\omega_t \\qquad \\omega_t \\sim N(0, \\sigma_\\omega^2)\n$$</span>\nThe periodicity (number of seasons) is s, and the defining character is that (without the error term), the seasonal components sum 
to zero across one complete cycle. The inclusion of an error term allows the seasonal effects to vary over time.\nThe variants of this model are:\n\nThe periodicity s\nWhether or not to make the seasonal effects stochastic.\n\nIf the seasonal effect is stochastic, then there is one additional parameter to estimate via MLE (the variance of the error term).\nCycle\nThe cyclical component is intended to capture cyclical effects at time frames much longer than captured by the seasonal component. For example, in economics the cyclical term is often intended to capture the business cycle, and is then expected to have a period between \"1.5 and 12 years\" (see Durbin and Koopman).\nThe cycle is written as:\n<span>$$\n\\begin{align}\nc_{t+1} & = c_t \\cos \\lambda_c + c_t^* \\sin \\lambda_c + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\\\nc_{t+1}^* & = -c_t \\sin \\lambda_c + c_t^* \\cos \\lambda_c + \\tilde \\omega_t^* & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$</span>\nThe parameter $\\lambda_c$ (the frequency of the cycle) is an additional parameter to be estimated by MLE. If the cyclical effect is stochastic, then there is another parameter to estimate (the variance of the error term - note that both of the error terms here share the same variance, but are assumed to have independent draws).\nIrregular\nThe irregular component is assumed to be a white noise error term. 
Its variance is a parameter to be estimated by MLE; i.e.\n$$\n\\varepsilon_t \\sim N(0, \\sigma_\\varepsilon^2)\n$$\nIn some cases, we may want to generalize the irregular component to allow for autoregressive effects:\n$$\n\\varepsilon_t = \\rho(L) \\varepsilon_{t-1} + \\epsilon_t, \\qquad \\epsilon_t \\sim N(0, \\sigma_\\epsilon^2)\n$$\nIn this case, the autoregressive parameters would also be estimated via MLE.\nRegression effects\nWe may want to allow for explanatory variables by including additional terms\n<span>$$\n\\sum_{j=1}^k \\beta_j x_{jt}\n$$</span>\nor for intervention effects by including\n<span>$$\n\\begin{align}\n\\delta w_t \\qquad \\text{where} \\qquad w_t & = 0, \\qquad t < \\tau, \\\\\n& = 1, \\qquad t \\ge \\tau\n\\end{align}\n$$</span>\nThese additional parameters could be estimated via MLE or by including them as components of the state space formulation.\nData\nFollowing Harvey and Jaeger, we will consider the following time series:\n\nUS real GNP, \"output\", (GNPC96)\nUS GNP implicit price deflator, \"prices\", (GNPDEF)\nUS monetary base, \"money\", (AMBSL)\n\nThe time frame in the original paper varied across series, but was broadly 1954-1989. Below we use data from the period 1948-2008 for all series. Although the unobserved components approach allows isolating a seasonal component within the model, the series considered in the paper, and here, are already seasonally adjusted.\nAll data series considered here are taken from Federal Reserve Economic Data (FRED). 
Conveniently, the Python library Pandas has the ability to download data from FRED directly.", "# Datasets\nfrom pandas.io.data import DataReader\n\n# Get the raw data\nstart = '1948-01'\nend = '2008-01'\nus_gnp = DataReader('GNPC96', 'fred', start=start, end=end)\nus_gnp_deflator = DataReader('GNPDEF', 'fred', start=start, end=end)\nus_monetary_base = DataReader('AMBSL', 'fred', start=start, end=end).resample('QS')\nrecessions = DataReader('USRECQ', 'fred', start=start, end=end).resample('QS', how='last').values[:,0]\n\n# Construct the dataframe\ndta = pd.concat(map(np.log, (us_gnp, us_gnp_deflator, us_monetary_base)), axis=1)\ndta.columns = ['US GNP','US Prices','US monetary base']\ndates = dta.index._mpl_repr()", "To get a sense of these three variables over the timeframe, we can plot them:", "# Plot the data\nax = dta.plot(figsize=(13,3))\nylim = ax.get_ylim()\nax.xaxis.grid()\nax.fill_between(dates, ylim[0]+1e-5, ylim[1]-1e-5, recessions, facecolor='k', alpha=0.1);", "Model\nSince the data is already seasonally adjusted and there are no obvious explanatory variables, the generic model considered is:\n$$\ny_t = \\underbrace{\\mu_{t}}_{\\text{trend}} + \\underbrace{c_{t}}_{\\text{cycle}} + \\underbrace{\\varepsilon_t}_{\\text{irregular}}\n$$\nThe irregular will be assumed to be white noise, and the cycle will be stochastic and damped. The final modeling choice is the specification to use for the trend component. Harvey and Jaeger consider two models:\n\nLocal linear trend (the \"unrestricted\" model)\nSmooth trend (the \"restricted\" model, since we are forcing $\\sigma_\\eta = 0$)\n\nBelow, we construct kwargs dictionaries for each of these model types. Notice that there are two ways to specify the models. One way is to specify components directly, as in the table above. 
The other way is to use string names which map to various specifications.", "# Model specifications\n\n# Unrestricted model, using string specification\nunrestricted_model = {\n 'level': 'local linear trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Unrestricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# local linear trend model with a stochastic damped cycle:\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': True, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }\n\n# The restricted model forces a smooth trend\nrestricted_model = {\n 'level': 'smooth trend', 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n}\n\n# Restricted model, setting components directly\n# This is an equivalent, but less convenient, way to specify a\n# smooth trend model with a stochastic damped cycle. Notice\n# that the difference from the local linear trend model is that\n# `stochastic_level=False` here.\n# unrestricted_model = {\n# 'irregular': True, 'level': True, 'stochastic_level': False, 'trend': True, 'stochastic_trend': True,\n# 'cycle': True, 'damped_cycle': True, 'stochastic_cycle': True\n# }", "We now fit the following models:\n\nOutput, unrestricted model\nPrices, unrestricted model\nPrices, restricted model\nMoney, unrestricted model\nMoney, restricted model", "# Output\noutput_mod = sm.tsa.UnobservedComponents(dta['US GNP'], **unrestricted_model)\noutput_res = output_mod.fit(method='powell', disp=False)\n\n# Prices\nprices_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **unrestricted_model)\nprices_res = prices_mod.fit(method='powell', disp=False)\n\nprices_restricted_mod = sm.tsa.UnobservedComponents(dta['US Prices'], **restricted_model)\nprices_restricted_res = prices_restricted_mod.fit(method='powell', disp=False)\n\n# Money\nmoney_mod = sm.tsa.UnobservedComponents(dta['US 
monetary base'], **unrestricted_model)\nmoney_res = money_mod.fit(method='powell', disp=False)\n\nmoney_restricted_mod = sm.tsa.UnobservedComponents(dta['US monetary base'], **restricted_model)\nmoney_restricted_res = money_restricted_mod.fit(method='powell', disp=False)", "Once we have fit these models, there are a variety of ways to display the information. Looking at the model of US GNP, we can summarize the fit of the model using the summary method on the fit object.", "print(output_res.summary())", "For unobserved components models, and in particular when exploring stylized facts in line with point (2) from the introduction, it is often more instructive to plot the estimated unobserved components (e.g. the level, trend, and cycle) themselves to see if they provide a meaningful description of the data.\nThe plot_components method of the fit object can be used to show plots and confidence intervals of each of the estimated states, as well as a plot of the observed data versus the one-step-ahead predictions of the model to assess fit.", "fig = output_res.plot_components(legend_loc='lower right', figsize=(15, 9));", "Finally, Harvey and Jaeger summarize the models in another way to highlight the relative importances of the trend and cyclical components; below we replicate their Table I. 
The values we find are broadly consistent with, but different in the particulars from, the values from their table.", "# Create Table I\ntable_i = np.zeros((5,6))\n\nstart = dta.index[0]\nend = dta.index[-1]\ntime_range = '%d:%d-%d:%d' % (start.year, start.quarter, end.year, end.quarter)\nmodels = [\n ('US GNP', time_range, 'None'),\n ('US Prices', time_range, 'None'),\n ('US Prices', time_range, r'$\\sigma_\\eta^2 = 0$'),\n ('US monetary base', time_range, 'None'),\n ('US monetary base', time_range, r'$\\sigma_\\eta^2 = 0$'),\n]\nindex = pd.MultiIndex.from_tuples(models, names=['Series', 'Time range', 'Restrictions'])\nparameter_symbols = [\n r'$\\sigma_\\zeta^2$', r'$\\sigma_\\eta^2$', r'$\\sigma_\\kappa^2$', r'$\\rho$',\n r'$2 \\pi / \\lambda_c$', r'$\\sigma_\\varepsilon^2$',\n]\n\ni = 0\nfor res in (output_res, prices_res, prices_restricted_res, money_res, money_restricted_res):\n if res.model.stochastic_level:\n (sigma_irregular, sigma_level, sigma_trend,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n else:\n (sigma_irregular, sigma_level,\n sigma_cycle, frequency_cycle, damping_cycle) = res.params\n sigma_trend = np.nan\n period_cycle = 2 * np.pi / frequency_cycle\n \n table_i[i, :] = [\n sigma_level*1e7, sigma_trend*1e7,\n sigma_cycle*1e7, damping_cycle, period_cycle,\n sigma_irregular*1e7\n ]\n i += 1\n \npd.set_option('float_format', lambda x: '%.4g' % np.round(x, 2) if not np.isnan(x) else '-')\ntable_i = pd.DataFrame(table_i, index=index, columns=parameter_symbols)\ntable_i" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tpin3694/tpin3694.github.io
machine-learning/hyperparameter_tuning_using_grid_search.ipynb
mit
[ "Title: Hyperparameter Tuning Using Grid Search\nSlug: hyperparameter_tuning_using_grid_search\nSummary: How to conduct grid search for hyperparameter tuning in scikit-learn for machine learning in Python. \nDate: 2017-09-18 12:00\nCategory: Machine Learning\nTags: Model Selection\nAuthors: Chris Albon\n<a alt=\"Hyperparameter Tuning Using Grid Search\" href=\"https://machinelearningflashcards.com\">\n <img src=\"hyperparameter_tuning_using_grid_search/Hyperparameter_Tuning_print.png\" class=\"flashcard center-block\">\n</a>\nPreliminaries", "# Load libraries\nimport numpy as np\nfrom sklearn import linear_model, datasets\nfrom sklearn.model_selection import GridSearchCV", "Load Iris Dataset", "# Load data\niris = datasets.load_iris()\nX = iris.data\ny = iris.target", "Create Logistic Regression", "# Create logistic regression\nlogistic = linear_model.LogisticRegression()", "Create Hyperparameter Search Space", "# Create regularization penalty space\npenalty = ['l1', 'l2']\n\n# Create regularization hyperparameter space\nC = np.logspace(0, 4, 10)\n\n# Create hyperparameter options\nhyperparameters = dict(C=C, penalty=penalty)", "Create Grid Search", "# Create grid search using 5-fold cross validation\nclf = GridSearchCV(logistic, hyperparameters, cv=5, verbose=0)", "Conduct Grid Search", "# Fit grid search\nbest_model = clf.fit(X, y)", "View Hyperparameter Values Of Best Model", "# View best hyperparameters\nprint('Best Penalty:', best_model.best_estimator_.get_params()['penalty'])\nprint('Best C:', best_model.best_estimator_.get_params()['C'])", "Predict Using Best Model", "# Predict target vector\nbest_model.predict(X)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
unpingco/Python-for-Probability-Statistics-and-Machine-Learning
chapters/statistics/notebooks/Bootstrap.ipynb
mit
[ "from IPython.display import Image \nImage('../../../python_for_probability_statistics_and_machine_learning.jpg')", "Python for Probability, Statistics, and Machine Learning", "from __future__ import division\n%pylab inline", "As we have seen, outside of some toy problems, it can be very difficult or\nimpossible to determine the probability density distribution of the estimator\nof some quantity. The idea behind the bootstrap is that we can use computation\nto approximate these functions which would otherwise be impossible to solve\nfor analytically. \nLet's start with a simple example. Suppose we have the following set of random\nvariables, $\\lbrace X_1, X_2, \\ldots, X_n \\rbrace$ where each $X_k \\sim F$. In\nother words the samples are all drawn from the same unknown distribution $F$.\nHaving run the experiment, we thereby obtain the following sample set:\n$$\n\\lbrace x_1, x_2, \\ldots, x_n \\rbrace\n$$\nThe sample mean is computed from this set as,\n$$\n\\bar{x} = \\frac{1}{n}\\sum_{i=1}^n x_i\n$$\nThe next question is how close is the sample mean to the true mean,\n$\\theta = \\mathbb{E}_F(X)$. Note that the second central moment of $X$ is as\nfollows:\n$$\n\\mu_2(F) := \\mathbb{E}_F (X^2) - (\\mathbb{E}_F (X))^2\n$$\nThe standard deviation of the sample mean, $\\bar{x}$, given $n$\nsamples from an underlying distribution $F$, is the following:\n$$\n\\sigma(F) = (\\mu_2(F)/n)^{1/2}\n$$\nUnfortunately, because we have only the set of samples $\\lbrace x_1,\nx_2, \\ldots, x_n \\rbrace$ and not $F$ itself, we cannot compute this and\ninstead must use the estimated standard error,\n$$\n\\bar{\\sigma} = (\\bar{\\mu}_2/n)^{1/2}\n$$\nwhere $\\bar{\\mu}_2 = \\sum (x_i -\\bar{x})^2/(n-1) $, which is the\nunbiased estimate of $\\mu_2(F)$. 
However, that is not the only way to proceed.\nInstead, we could replace $F$ by some estimate, $\\hat{F}$ obtained as a\npiecewise function of $\\lbrace x_1, x_2, \\ldots, x_n \\rbrace$ by placing\nprobability mass $1/n$ on each $x_i$. With that in place, we can compute the\nestimated standard error as the following:\n$$\n\\hat{\\sigma}_B = (\\mu_2(\\hat{F})/n)^{1/2}\n$$\nwhich is called the bootstrap estimate of the standard error.\nUnfortunately, the story effectively ends here. In even a slightly more general\nsetting, there is no clean formula $\\sigma(F)$ within which $F$ can be swapped\nfor $\\hat{F}$.\nThis is where the computer saves the day. We actually do not need to know the\nformula $\\sigma(F)$ because we can compute it using a resampling method. The\nkey idea is to sample with replacement from $\\lbrace x_1, x_2, \\ldots, x_n\n\\rbrace$. The new set of $n$ independent draws (with replacement) from this set\nis the bootstrap sample,\n$$\ny^* = \\lbrace x_1^*, x_2^*, \\ldots, x_n^* \\rbrace\n$$\nThe Monte Carlo algorithm proceeds by first selecting a large number of\nbootstrap samples, $\\lbrace y^*_k\\rbrace$, then computing the statistic on each\nof these samples, and then computing the sample standard deviation of the\nresults in the usual way. Thus, the bootstrap estimate of the statistic\n$\\theta$ is the following,\n$$\n\\hat{\\theta}^*_B = \\frac{1}{B} \\sum_k \\hat{\\theta}^*(k)\n$$\nwith the corresponding square of the sample standard deviation as\n$$\n\\hat{\\sigma}_B^2 = \\frac{1}{B-1} \\sum_k (\\hat{\\theta}^*(k)-\\hat{\\theta}^*_B )^2\n$$\nThe process is much simpler than the notation implies.\nLet's explore this with a simple example using Python. 
The next block\nof code sets up some samples from a $\\beta(3,2)$ distribution,", "import numpy as np\n_=np.random.seed(123456)\n\nfrom scipy import stats\nrv = stats.beta(3,2)\nxsamples = rv.rvs(50)", "Because this is simulation data, we already know that the\nmean is $\\mu_1 = 3/5$ and the standard deviation of the sample mean\nfor $n=50$ is $\\bar{\\sigma} =1/\\sqrt {1250}$, which we will verify\nlater.", "%matplotlib inline\n\nfrom matplotlib.pylab import subplots\nfig,ax = subplots()\nfig.set_size_inches(8,4)\n_=ax.hist(xsamples,density=True,color='gray')\nax2 = ax.twinx()\n_=ax2.plot(np.linspace(0,1,100),rv.pdf(np.linspace(0,1,100)),lw=3,color='k')\n_=ax.set_xlabel('$x$',fontsize=28)\n_=ax2.set_ylabel(' $y$',fontsize=28,rotation='horizontal')\nfig.tight_layout()\n#fig.savefig('fig-statistics/Bootstrap_001.png')", "<!-- dom:FIGURE: [fig-statistics/Bootstrap_001.png, width=500 frac=0.85] The $\\beta(3,2)$ distribution and the histogram that approximates it. <div id=\"fig:Bootstrap_001\"></div> -->\n<!-- begin figure -->\n<div id=\"fig:Bootstrap_001\"></div>\n\n<p>The $\\beta(3,2)$ distribution and the histogram that approximates it.</p>\n<img src=\"fig-statistics/Bootstrap_001.png\" width=500>\n<!-- end figure -->\n\nFigure shows the $\\beta(3,2)$ distribution and\nthe corresponding histogram of the samples. The histogram represents\n$\\hat{F}$ and is the distribution we sample from to obtain the\nbootstrap samples. As shown, the $\\hat{F}$ is a pretty crude estimate\nfor the $F$ density (smooth solid line), but that's not a serious\nproblem insofar as the following bootstrap estimates are concerned.\nIn fact, the approximation $\\hat{F}$ has a natural tendency to\npull towards where most of the probability mass is. This is a\nfeature, not a bug, and is the underlying mechanism for why\nbootstrapping works, but the formal proofs that exploit this basic\nidea are far out of our scope here. 
The next block generates the\nbootstrap samples", "yboot = np.random.choice(xsamples,(100,50))\nyboot_mn = yboot.mean()", "and the bootstrap estimate is therefore,", "np.std(yboot.mean(axis=1)) # approx sqrt(1/1250)", "Figure shows the distribution of computed\nsample means from the bootstrap samples. As promised, the next block\nshows how to use sympy.stats to compute the $\\beta(3,2)$ parameters we quoted\nearlier.", "fig,ax = subplots()\nfig.set_size_inches(8,4)\n_=ax.hist(yboot.mean(axis=1),density=True,color='gray')\n_=ax.set_title('Bootstrap std of sample mean %3.3f vs actual %3.3f'%\n (np.std(yboot.mean(axis=1)),np.sqrt(1/1250.)))\nfig.tight_layout()\n#fig.savefig('fig-statistics/Bootstrap_002.png')", "<!-- dom:FIGURE: [fig-statistics/Bootstrap_002.png, width=500 frac=0.85] For each bootstrap draw, we compute the sample mean. This is the histogram of those sample means that will be used to compute the bootstrap estimate of the standard deviation. <div id=\"fig:Bootstrap_002\"></div> -->\n<!-- begin figure -->\n<div id=\"fig:Bootstrap_002\"></div>\n\n<p>For each bootstrap draw, we compute the sample mean. 
This is the histogram of those sample means that will be used to compute the bootstrap estimate of the standard deviation.</p>\n<img src=\"fig-statistics/Bootstrap_002.png\" width=500>\n<!-- end figure -->", "import sympy as S\nimport sympy.stats\nfor i in range(50): # 50 samples\n # load sympy.stats Beta random variables\n # into global namespace using exec\n execstring = \"x%d = S.stats.Beta('x'+str(%d),3,2)\"%(i,i)\n exec(execstring) \n\n# populate xlist with the sympy.stats random variables\n# from above\nxlist = [eval('x%d'%(i)) for i in range(50) ]\n# compute sample mean\nsample_mean = sum(xlist)/len(xlist)\n# compute expectation of sample mean\nsample_mean_1 = S.stats.E(sample_mean)\n# compute 2nd moment of sample mean\nsample_mean_2 = S.stats.E(S.expand(sample_mean**2))\n# standard deviation of sample mean\n# use sympy sqrt function\nsigma_smn = S.sqrt(sample_mean_2-sample_mean_1**2) # 1/sqrt(1250)\nprint(sigma_smn)", "Programming Tip.\nUsing the exec function enables the creation of a sequence of Sympy\nrandom variables. Sympy has the var function which can automatically\ncreate a sequence of Sympy symbols, but there is no corresponding\nfunction in the statistics module to do this for random variables.\n<!-- @@@CODE src-statistics/Bootstrap.py from-to:^import sympy as S@^print sigma_smn -->\n\n<!-- p.505 casella -->\n\nExample. Recall the delta method from the section ref{sec:delta_method}. Suppose we have a set of Bernoulli coin-flips\n($X_i$) with probability of head $p$. Our maximum likelihood estimator\nof $p$ is $\\hat{p}=\\sum X_i/n$ for $n$ flips. We know this estimator\nis unbiased with $\\mathbb{E}(\\hat{p})=p$ and $\\mathbb{V}(\\hat{p}) =\np(1-p)/n$. Suppose we want to use the data to estimate the variance of\nthe Bernoulli trials ($\\mathbb{V}(X)=p(1-p)$). In the notation of the\ndelta method, $g(x) = x(1-x)$. By the plug-in principle, our maximum\nlikelihood estimator of this variance is then $\\hat{p}(1-\\hat{p})$. 
We\nwant the variance of this quantity. Using the results of the delta\nmethod, we have\n$$\n\\begin{align}\n\\mathbb{V}(g(\\hat{p})) &=(1-2\\hat{p})^2\\mathbb{V}(\\hat{p}) \\\\\n\\mathbb{V}(g(\\hat{p})) &=(1-2\\hat{p})^2\\frac{\\hat{p}(1-\\hat{p})}{n} \\\\\n\\end{align}\n$$\nLet's see how useful this is with a short simulation.", "import numpy as np\nnp.random.seed(123)\n\nfrom scipy import stats\np= 0.25 # true head-up probability\nx = stats.bernoulli(p).rvs(10)\nprint(x)", "The maximum likelihood estimator of $p$ is $\\hat{p}=\\sum X_i/n$,", "phat = x.mean()\nprint(phat)", "Then, plugging this into the delta method approximant above,", "print((1-2*phat)**2*(phat)*(1-phat)/10.)", "Now, let's try this using the bootstrap estimate of the variance", "phat_b=np.random.choice(x,(50,10)).mean(1)\nprint(np.var(phat_b*(1-phat_b)))", "This shows that the delta method's estimated variance\nis different from the bootstrap method, but which one is better?\nFor this situation, we can solve for it directly using Sympy", "import sympy as S\nfrom sympy.stats import E, Bernoulli\nxdata =[Bernoulli(i,p) for i in S.symbols('x:10')]\nph = sum(xdata)/float(len(xdata))\ng = ph*(1-ph)", "Programming Tip.\nThe argument in the S.symbols('x:10') function returns a sequence of Sympy\nsymbols named x0, x1, and so on. This is shorthand for creating and naming each\nsymbol sequentially.\nNote that g is the $g(\\hat{p})=\\hat{p}(1- \\hat{p})$ \nwhose variance we are trying to estimate. Then,\nwe can plug in for the estimated $\\hat{p}$ and get the correct\nvalue for the variance,", "print(E(g**2) - E(g)**2)", "This case is generally representative --- the delta method tends\nto underestimate the variance and the bootstrap estimate is better here.\nParametric Bootstrap\nIn the previous example, we used the $\\lbrace x_1, x_2, \\ldots, x_n \\rbrace $\nsamples themselves as the basis for $\\hat{F}$ by weighting each with $1/n$. 
An\nalternative is to assume that the samples come from a particular\ndistribution, estimate the parameters of that distribution from the sample set,\nand then use the bootstrap mechanism to draw samples from the assumed\ndistribution, using the so-derived parameters. For example, the next code block\ndoes this for a normal distribution.", "rv = stats.norm(0,2)\nxsamples = rv.rvs(45)\n# estimate mean and var from xsamples\nmn_ = np.mean(xsamples)\nstd_ = np.std(xsamples)\n# bootstrap from assumed normal distribution with\n# mn_,std_ as parameters\nrvb = stats.norm(mn_,std_) #plug-in distribution\nyboot = rvb.rvs(1000)", "<!-- @@@CODE src-statistics/Bootstrap.py from-to:^# In\\[7\\]:@^yboot -->\n\nRecall the sample variance estimator is the following:\n$$\nS^2 = \\frac{1}{n-1} \\sum (X_i-\\bar{X})^2\n$$\nAssuming that the samples are normally distributed, this\nmeans that $(n-1)S^2/\\sigma^2$ has a chi-squared distribution with\n$n-1$ degrees of freedom. Thus, the variance, $\\mathbb{V}(S^2) = 2\n\\sigma^4/(n-1) $. Likewise, the MLE plug-in estimate for this is\n$\\mathbb{V}(S^2) = 2 \\hat{\\sigma}^4/(n-1)$. The following code computes\nthe variance of the sample variance, $S^2$, using the MLE and bootstrap\nmethods.", "# MLE-plugin variance of the sample variance\nprint(2*(std_**2)**2/9.) # MLE plugin\n# Bootstrap variance of the sample variance\nprint(yboot.var())\n# True variance of the sample variance\nprint(2*(2**2)**2/9.)", "<!-- @@@CODE src-statistics/Bootstrap.py from-to:^# In\\[8\\]:@^# end8 -->\n\nThis shows that the bootstrap estimate is better here than the MLE\nplugin estimate.\nNote that this technique becomes even more powerful with multivariate\ndistributions with many parameters because all the mechanics are the same.\nThus, the bootstrap is a great all-purpose method for computing standard\nerrors, but, in the limit, is it converging to the correct value? This is the\nquestion of consistency. 
Unfortunately, answering this question requires more\nand deeper mathematics than we can get into here. The short answer is that for\nestimating standard errors, the bootstrap is a consistent estimator in a wide\nrange of cases and so it definitely belongs in your toolkit." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
robertoalotufo/ia898
2S2018/10 A transformada discreta de Fourier - DFT.ipynb
mit
[ "A transformada discreta de Fourier (DFT)\nCaso unidimensional\nTransformada Discreta de Fourier em uma dimensรฃo:\n\nEntrada: $f(x)$ - $x$ coordenada dos pixels da imagem\nSaรญda: $F(u)$ - $u$ frequรชncia normalizada, nรบmero de ciclos na amostra\n\n$$ F(u) = \\sum_{x=0}^{N-1}f(x)\\exp(-j2\\pi(\\frac{ux}{N})) $$\n$$ 0 \\leq x < N, 0 \\leq u < N $$ \nCaso bidimensional\nTransformada Discreta de Fourier em duas dimensรตes:\n\nEntrada: $f(x,y)$ - $(x,y)$ coordenada dos pixels da imagem\nSaรญda: $F(u,v)$ - $(u,v)$ frequรชncia normalizada, nรบmero de ciclos na amostra\n\n$$ F(u,v) = \\sum_{x=0}^{N-1}\\sum_{y=0}^{M-1}f(x,y)\\exp(-j2\\pi(\\frac{ux}{N}+ \\frac{vy}{M})) $$\n$$ 0 \\leq x < N , 0 \\leq u < M $$ \n$$ 0 \\leq y < M , 0 \\leq v < M $$ \nSignificado de $u$ na equaรงรฃo\nDado $N$ amostras, o $u$ na equaรงรฃo $ \\exp(-j{2\\pi}\\frac{ux}{N}) $ indica o nรบmero de ciclos no espaรงo de $0$ a $N-1$. O perรญodo, em pixels, deste sinal รฉ $\\frac{N}{u}$. O perรญodo mรกximo รฉ $N$ e perรญodo mรญnimo รฉ 2.\n1. Exemplo unidimensional", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\nimport numpy as np\nfrom numpy.fft import *\nimport sys,os\nia898path = os.path.abspath('../../')\nif ia898path not in sys.path:\n sys.path.append(ia898path)\nimport ia898.src as ia", "Para exemplificar o caso unidimensional, vamos pegar uma imagem bidimensional (cameraman) e escolher apenas uma linha da imagem para ser nossa funรงรฃo unidimensional.", "f1 = mpimg.imread('../data/cameraman.tif')[10,:]\nplt.plot(f1)\nF1 = fft(f1)\ng1 = ifft(F1)\nprint ('comparando g1 e f1:', abs(g1-f1).max())", "2. Exemplo bidimensional", "f2 = mpimg.imread('../data/cameraman.tif')\n\nF2 = fft2(f2)\ng2 = ifft2(F2)\nprint ('comparando g2 e f2:', abs(g2-f2).max())\nplt.figure(1, figsize=(8,8))\n\nplt.subplot(1,2,1)\nplt.imshow(f2, cmap='gray')\nplt.subplot(1,2,2)\nplt.imshow(ia.dftview(F2), cmap='gray')", "Propriedades da DFT\n1. 
Periodicity\nThe discrete Fourier transform and its inverse are periodic with periods N and M, that is:\n$$ F(u,v) = F(u+N,v) = F(u, v+M) = F(u + N, v+M)$$\n2. Conjugate symmetry\nIf $f(x)$ is real, that is, the imaginary part is zero, then $F(u)$ exhibits complex-conjugate symmetry, namely: $$ F(u,v) = F^{*}(-u,-v) $$\n3. Rotation\nRotating $f(x,y)$ by an angle $\\theta$ implies a rotation of $F(u,v)$ by this same angle $\\theta$. \n4. The mean value\nThe mean value of $f(x,y)$ is related to $F(u,v)$ by: $$ \\bar{f}(x,y) = \\frac{1}{NM}F(0,0)$$\nInteresting demonstrations\n\nDFT examples\nLive 2D DFT Demo\n\nOther functions\n\ndftmatrix -- Kernel matrix for the DFT Transform.\nidft -- Inverse Discrete Fourier Transform.\ndftview -- Discrete Fourier Transform Visualization.\ndftshift -- Shift of Fourier Spectrum for Visualization." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
victorgcapone/data-science-portifolio
Monte Carlos Experiments/Monte Carlo Integration.ipynb
gpl-3.0
[ "import matplotlib.pyplot as plt\nimport random\n%matplotlib inline", "In this notebook we will use the Monte Carlo method to find the area under a curve, so first let's define a function\n$$f(x) = x^2-4x+5$$", "f = lambda x:x**2-4*x+5\nx = range(0, 11, 1)\ny = [f(v) for v in x]\nplt.plot(y)", "Now, we know the probability of a random point being below the curve is equal to \n $$P_{curve}=\\dfrac{A_{curve}}{A_{rectanle}}$$\nWhere $A_{rectangle}$ is the area of the plot in the given interval, so let's try to integrate it from 0 to 10", "#Will use 3000 points\nnumber_points=3000\n#We want to see the points\npoints=[]\nbelow=0\nfor p in range(0, number_points):\n x,y=(random.uniform(0,10), random.uniform(0, 70))\n # If the function for x is greater then the random y, then the point is under the curve\n if f(x) >= y:\n below +=1\n points.append((x,y))\nratio = below/number_points\n\ncolor_func = lambda x,y: (1,0,0) if f(x)>=y else (0,0,1,0.5)\ncolors = [color_func(x,y) for (x,y) in points]\nplt.ylim(0,70)\nplt.xlim(0,10)\nplt.scatter(*zip(*points), color=colors)\nplt.show()\nprint(\"Ratio of points under the curve: %.4f\" % ratio)\nprint(\"Approximated area under the curve: %.4f\" % (ratio*700))", "Knowing the ratio of points under the curve, we can now calculate the integral as \n$$P_{curve}A_{rectangle} = A_{cruve}$$\nIf we take the integral\n$$\\int_0^{10}x^2-4x+5$$\nWe have $$\\dfrac{x^3}{3}-2x^2+5x\\big\\lvert_0^{10} = \\dfrac{10^3}{3}-200+50 = 333.33 - 200 + 50 = 183.33$$\nWhich is close to the real area, now, let's see how many points we need", "def monte_carlo_integration(f, number_points, xlims, ylims):\n below=0\n for p in range(0, number_points):\n x,y=(random.uniform(*xlims), random.uniform(*ylims))\n # If the function for x is greater then the random y, then the point is under the curve\n if y <= f(x):\n below +=1\n ratio = below/number_points\n area = ratio * (xlims[1]-xlims[0]) * (ylims[1]-ylims[0])\n return (ratio, area)\n\ntotal_points = 10000\nstep 
= 100\nestimated = [monte_carlo_integration(f, i, (0,10), (0,70))[1]\nfor i in range(step,total_points, step)]\nmean = sum(estimated)/len(estimated)\nprint(\"Mean Approximated value %.4f\" % mean)\n\nplt.figure(figsize=(8,5))\nplt.plot(estimated)\nplt.hlines(183.3, 0, total_points/step, 'g')\nplt.hlines(mean, 0 , total_points/step, 'r')\nplt.legend(['Approximation', 'Real', 'Mean'], loc='best')\nplt.ylabel(\"Estimated area\")\nplt.xlabel(\"Points used (x100)\")\nprint(\"Approximated Integral Value: %.4f\" % mean)", "As we can see, the more points we sample, the more accurate our approximation of the real value becomes. Now, what if segments of our curve lie under the $x$ axis? Let's look at this example\n$$g(x) = x^2-4x-8$$", "g = lambda x:x**2-4*x-8\nx = range(0, 11, 1)\ny = [g(v) for v in x]\nplt.plot(y)\n\n#Will use 3000 points\nnumber_points=3000\n#We want to see the points\npoints=[]\nbelow=0\nfor p in range(0, number_points):\n x,y=(random.uniform(0,10), random.uniform(-20, 60))\n # If the point lies between the curve and the x axis, count it\n if g(x) > 0 and 0 < y < g(x):\n below +=1\n if g(x) < 0 and 0 > y >= g(x):\n below += 1\n points.append((x,y))\nratio = below/number_points\n\ncolor_func = lambda x,y: (1,0,0) if (g(x) > 0 and 0 <= y <= g(x)) or (g(x) <= 0 and 0 > y >= g(x)) else (0,0,1,0.5)\ncolors = [color_func(x,y) for (x,y) in points]\nplt.ylim(-20,60)\nplt.xlim(0,10)\nplt.scatter(*zip(*points), color=colors)\nplt.show()\nprint(\"Ratio of points under the curve: %.4f\" % ratio)\nprint(\"Approximated area under the curve: %.4f\" % (ratio*800))", "We can see the estimated area is much larger than expected, so let's take the true integral to see how far off we are\nWe have $$\\int_0^{10}x^2-4x-8 = \\dfrac{x^3}{3}-2x^2-8x\\big\\lvert_0^{10} = \\dfrac{10^3}{3}-200-80 = 333.33 - 200 - 80 = 53.33$$\nSo, we are off by a lot; that's because we are adding the area under the $x$ axis instead of subtracting it. 
To do this, we first find the point where $g(x) = 0$, which is $x = 2+\\sqrt{12} = 2 + 2\\sqrt{3}$", "#Let's adjust our function to deal with points under the x axis\ndef monte_carlo_integration(f, number_points, xlims, ylims):\n below=0\n for p in range(0, number_points):\n x,y=(random.uniform(*xlims), random.uniform(*ylims))\n # If the point lies between the curve and the x axis, count it\n if f(x) > 0 and 0 < y <= f(x):\n below += 1\n if f(x) < 0 and 0 > y >= f(x):\n below += 1\n ratio = below/number_points\n area = ratio * (xlims[1]-xlims[0]) * (ylims[1]-ylims[0])\n return (ratio, area)\n#Calculating the area below and above the x axis (before and after the root)\n_, area_before = monte_carlo_integration(g, 1000, (0,2+(12**(1/2))), (-20,60))\n_, area_after = monte_carlo_integration(g, 1000, (2+(12**(1/2)),10), (-20,60))\narea = area_after-area_before if area_after >= area_before else area_before-area_after\nprint(\"Estimated area under the curve: %.4f\" % area)\n\ndef monte_carlo_integration_helper(f, number_points, xlims, ylims, root):\n _, area_before = monte_carlo_integration(f, number_points, (xlims[0],root), ylims)\n _, area_after = monte_carlo_integration(f, number_points, (root,xlims[1]), ylims)\n return area_after-area_before if area_after >= area_before else area_before-area_after\ntotal_points = 10000\nstep = 100\nestimated = [monte_carlo_integration_helper(g, i, (0,10), (-20,70), 2+(12**(1/2)))\nfor i in range(step,total_points, step)]\nmean = sum(estimated)/len(estimated)\nprint(\"Mean Approximated value %.4f\" % mean)\n\nplt.figure(figsize=(8,2))\nplt.plot(estimated)\nplt.hlines(53.3, 0, total_points/step, 'g')\nplt.hlines(mean, 0 , total_points/step, 'r')\nplt.legend(['Approximation', 'Real', 'Mean'], loc='best')\nplt.ylabel(\"Estimated area\")\nplt.xlabel(\"Points used (x100)\")\nprint(\"Approximated Integral Value: %.4f\" % mean)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
chemiskyy/simmit
Examples/MMSP/parametric_analysis/fibre_angle.ipynb
gpl-3.0
[ "Composites simulation : perform parametric analyses", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom simmit import smartplus as sim\nfrom simmit import identify as iden\nimport os\nimport itertools\n\ndir = os.path.dirname(os.path.realpath('__file__'))", "We need to import here the data, modify them if needed and proceed", "umat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments\nnstatev = 0\n\nnphases = 2 #The number of phases\nnum_file = 0 #The num of the file that contains the subphases\nint1 = 50\nint2 = 50\nn_matrix = 0\n\nprops = np.array([nphases, num_file, int1, int2, n_matrix])\n\nNPhases_file = dir + '/keys/Nellipsoids0.dat'\nNPhases = pd.read_csv(NPhases_file, delimiter=r'\\s+', index_col=False, engine='python')\nNPhases[::]\n\n\npath_data = dir + '/data'\npath_keys = dir + '/keys'\npathfile = 'path.txt'\noutputfile = 'results_PLN.txt'\n\nnparams = 4\n\nparam_list = iden.read_parameters(nparams)\n\npsi_rve = 0.\ntheta_rve = 0.\nphi_rve = 0.\n\nalpha = np.arange(0.,91.,1)\nparam_list[1].value = 100 \nparam_list[2].value = 0.4 \nparam_list[3].value = 1.0 - param_list[2].value\n\nE_L = np.zeros(len(alpha))\nfig = plt.figure()\n\numat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments\nfor i, x in enumerate (alpha):\n \n param_list[0].value = x\n \n iden.copy_parameters(param_list, path_keys, path_data)\n iden.apply_parameters(param_list, path_data)\n\n L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)\n p = sim.L_ortho_props(L)\n E_L[i] = p[0]\n\n\nplt.plot(alpha,E_L, c='black')\nnp.savetxt('E_L-angle_MT.txt', np.transpose([alpha,E_L]), fmt='%1.8e')\n\numat_name = 'MISCN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal 
reinforcments\nfor i, x in enumerate (alpha):\n \n param_list[0].value = x\n \n iden.copy_parameters(param_list, path_keys, path_data)\n iden.apply_parameters(param_list, path_data)\n\n L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)\n p = sim.L_ortho_props(L)\n E_L[i] = p[0]\n\n\nplt.plot(alpha,E_L, c='red')\nnp.savetxt('E_L-angle_SC.txt', np.transpose([alpha,E_L]), fmt='%1.8e')\n\n\nplt.show()", "Now let's study the evolution of the concentration", "param_list[0].value = 0.0\nparam_list[1].value = 100\n\nc = np.arange(0.,1.01,0.01)\nE_L = np.zeros(len(c))\nE_T = np.zeros(len(c))\n\numat_name = 'MIMTN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments\nfor i, x in enumerate (c):\n \n param_list[3].value = x\n param_list[2].value = 1.0 - param_list[3].value \n \n iden.copy_parameters(param_list, path_keys, path_data)\n iden.apply_parameters(param_list, path_data)\n\n L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)\n p = sim.L_ortho_props(L)\n E_L[i] = p[0]\n E_T[i] = p[1] \n\nfig = plt.figure()\nnp.savetxt('E-concentration_MT.txt', np.transpose([c,E_L,E_T]), fmt='%1.8e')\nplt.plot(c,E_L, c='black')\nplt.plot(c,E_T, c='black', label='Mori-Tanaka')\n\numat_name = 'MISCN' #This is the 5 character code for the Mori-Tanaka homogenization for composites with a matrix and ellipsoidal reinforcments\nfor i, x in enumerate (c):\n \n param_list[3].value = x\n param_list[2].value = 1.0 - param_list[3].value \n \n iden.copy_parameters(param_list, path_keys, path_data)\n iden.apply_parameters(param_list, path_data)\n\n L = sim.L_eff(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, path_data)\n p = sim.L_ortho_props(L)\n E_L[i] = p[0]\n E_T[i] = p[1] \n \nnp.savetxt('E-concentration_SC.txt', np.transpose([c,E_L,E_T]), fmt='%1.8e')\nplt.plot(c,E_L, c='red')\nplt.plot(c,E_T, c='red', label='self-consistent')\nplt.xlabel('volume fraction 
$c$', size=12)\nplt.ylabel(\"Young's modulus\", size=12)\n\nplt.show()\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
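The notebook above sweeps the reinforcement volume fraction and compares the Mori-Tanaka ('MIMTN') and self-consistent ('MISCN') estimates of the effective Young's modulus. A cheap sanity check on any such homogenization sweep is that it must lie between the classical Voigt and Reuss bounds; the sketch below computes those bounds in plain Python. The phase stiffness values are illustrative placeholders, not numbers taken from the simmit data files.

```python
def voigt_reuss_bounds(e_matrix, e_reinf, c):
    """Upper (Voigt, rule of mixtures) and lower (Reuss, inverse rule of
    mixtures) bounds on the effective Young's modulus of a two-phase
    composite with reinforcement volume fraction c in [0, 1]."""
    e_voigt = (1.0 - c) * e_matrix + c * e_reinf
    e_reuss = 1.0 / ((1.0 - c) / e_matrix + c / e_reinf)
    return e_voigt, e_reuss

# Illustrative phase stiffnesses (GPa): a soft matrix and a stiff reinforcement.
for c in (0.0, 0.25, 0.5, 1.0):
    up, lo = voigt_reuss_bounds(3.0, 70.0, c)
    print(f"c={c:.2f}  Reuss={lo:8.3f}  Voigt={up:8.3f}")
```

Any admissible estimate returned by `sim.L_eff` (Mori-Tanaka or self-consistent) should fall between these two curves at every concentration, which makes the bounds useful as a regression check on the sweep.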
tensorflow/docs-l10n
site/ja/quantum/tutorials/qcnn.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Quantum Convolutional Neural Network\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/qcnn\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> View on TensorFlow.org</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/qcnn.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"> Run in Google Colab</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/quantum/tutorials/qcnn.ipynb\"> <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"> View source on GitHub</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/quantum/tutorials/qcnn.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">Download notebook</a></td>\n</table>\n\nThis tutorial implements a simplified <a href=\"https://www.nature.com/articles/s41567-019-0648-8\" class=\"external\">Quantum Convolutional Neural Network</a> (QCNN), a proposed quantum analogue to a classical convolutional neural network that is also translationally invariant.\nThis example demonstrates how to detect certain properties of a quantum data source, such as a quantum sensor or a complex simulation from a device. The quantum data source is a <a href=\"https://arxiv.org/pdf/quant-ph/0504097.pdf\" class=\"external\">cluster state</a> that may or may not have an excitation, which is what the QCNN will learn to detect (the dataset used in the paper was SPT phase classification).\nSetup", "!pip install tensorflow==2.7.0", "Install TensorFlow Quantum:", "!pip install tensorflow-quantum\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)", "Now import TensorFlow and the module dependencies:", "import tensorflow as tf\nimport tensorflow_quantum as tfq\n\nimport cirq\nimport sympy\nimport numpy as np\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit", "1. Build a QCNN\n1.1 Assemble circuits in a TensorFlow graph\nTensorFlow Quantum (TFQ) provides layer classes designed for in-graph circuit construction. One example is the tfq.layers.AddCircuit layer, which inherits from tf.keras.Layer. This layer can either prepend or append to the input batch of circuits, as shown in the following figure.\n<img src=\"./images/qcnn_1.png\" width=\"700\">\nThe following snippet uses this layer:", "qubit = cirq.GridQubit(0, 0)\n\n# Define some circuits.\ncircuit1 = cirq.Circuit(cirq.X(qubit))\ncircuit2 = cirq.Circuit(cirq.H(qubit))\n\n# Convert to a tensor.\ninput_circuit_tensor = tfq.convert_to_tensor([circuit1, circuit2])\n\n# Define a circuit that we want to append\ny_circuit = cirq.Circuit(cirq.Y(qubit))\n\n# Instantiate our layer\ny_appender = tfq.layers.AddCircuit()\n\n# Run our circuit tensor through the layer and save the output.\noutput_circuit_tensor = y_appender(input_circuit_tensor, append=y_circuit)", "Examine the input tensor:", "print(tfq.from_tensor(input_circuit_tensor))", "And examine the output tensor:", "print(tfq.from_tensor(output_circuit_tensor))", "While it is possible to run the examples below without using tfq.layers.AddCircuit, it is a good opportunity to understand how complex functionality can be embedded into TensorFlow compute graphs.\n1.2 Problem overview\nYou will prepare a cluster state and train a quantum classifier to detect if it is \"excited\" or not. The cluster state is highly entangled, but not necessarily difficult for a classical computer. For clarity, this is a simpler dataset than the one used in the paper.\nFor this classification task you will implement a deep <a href=\"https://arxiv.org/pdf/quant-ph/0610099.pdf\" class=\"external\">MERA</a>-like QCNN architecture, because:\n\nLike the QCNN, the cluster state on a ring is translationally invariant\nThe cluster state is highly entangled\n\nThis architecture should be effective at reducing entanglement, obtaining the classification by reading out a single qubit.\n<img src=\"./images/qcnn_2.png\" width=\"1000\">\nAn \"excited\" cluster state is defined as a cluster state that had a cirq.rx gate applied to any of its qubits. Qconv and QPool are discussed later in this tutorial.\n1.3 Building blocks for TensorFlow\n<img src=\"./images/qcnn_3.png\" width=\"1000\">\nOne way to solve this problem with TensorFlow Quantum is to implement the following:\n\nThe input to the model is a circuit tensor: either an empty circuit or an X gate on a particular qubit indicating an excitation.\nThe rest of the model's quantum components are constructed with tfq.layers.AddCircuit layers.\nFor inference a tfq.layers.PQC layer is used. This reads $\\langle \\hat{Z} \\rangle$ and compares it to a label of 1 for an excited state, or -1 for a non-excited state.\n\n1.4 Data\nBefore building your model, you can generate the data. In this case it is going to be excitations of the cluster state (the original paper uses a more complicated dataset). Excitations are represented with cirq.rx gates. A large enough rotation is deemed an excitation and is labeled 1, and a rotation that isn't large enough is labeled -1 and deemed not an excitation.", "def generate_data(qubits):\n \"\"\"Generate training and testing data.\"\"\"\n n_rounds = 20 # Produces n_rounds * n_qubits datapoints.\n excitations = []\n labels = []\n for n in range(n_rounds):\n for bit in qubits:\n rng = np.random.uniform(-np.pi, np.pi)\n excitations.append(cirq.Circuit(cirq.rx(rng)(bit)))\n labels.append(1 if (-np.pi / 2) <= rng <= (np.pi / 2) else -1)\n\n split_ind = int(len(excitations) * 0.7)\n train_excitations = excitations[:split_ind]\n test_excitations = excitations[split_ind:]\n\n train_labels = labels[:split_ind]\n test_labels = labels[split_ind:]\n\n return tfq.convert_to_tensor(train_excitations), np.array(train_labels), \\\n tfq.convert_to_tensor(test_excitations), np.array(test_labels)", "Just like with regular machine learning, you create a training and testing set to use to benchmark the model. You can quickly look at some datapoints with:", "sample_points, sample_labels, _, __ = generate_data(cirq.GridQubit.rect(1, 4))\nprint('Input:', tfq.from_tensor(sample_points)[0], 'Output:', sample_labels[0])\nprint('Input:', tfq.from_tensor(sample_points)[1], 'Output:', sample_labels[1])", "1.5 Define layers\nNow define the layers shown in the figure above in TensorFlow.\n1.5.1 Cluster state\nThe first step is to define the <a href=\"https://arxiv.org/pdf/quant-ph/0504097.pdf\" class=\"external\">cluster state</a> using <a href=\"https://github.com/quantumlib/Cirq\" class=\"external\">Cirq</a>, a Google-provided framework for programming quantum circuits. Since this is a static part of the model, embed it using the tfq.layers.AddCircuit functionality.", "def cluster_state_circuit(bits):\n \"\"\"Return a cluster state on the qubits in `bits`.\"\"\"\n circuit = cirq.Circuit()\n circuit.append(cirq.H.on_each(bits))\n for this_bit, next_bit in zip(bits, bits[1:] + [bits[0]]):\n circuit.append(cirq.CZ(this_bit, next_bit))\n return circuit", "Display a cluster state circuit for a rectangle of <a href=\"https://cirq.readthedocs.io/en/stable/generated/cirq.GridQubit.html\" class=\"external\"><code>cirq.GridQubit</code></a>s:", "SVGCircuit(cluster_state_circuit(cirq.GridQubit.rect(1, 4)))", "1.5.2 QCNN layers\nUsing the <a href=\"https://arxiv.org/abs/1810.03787\" class=\"external\">paper by Cong and Lukin on QCNNs</a>, define the layers that make up the model. There are a few prerequisites:\n\nThe one- and two-qubit parameterized unitary matrices from the <a href=\"https://arxiv.org/abs/quant-ph/0507171\" class=\"external\">paper by Tucci</a>\nA general parameterized two-qubit pooling operation", "def one_qubit_unitary(bit, symbols):\n \"\"\"Make a Cirq circuit enacting a rotation of the bloch sphere about the X,\n Y and Z axis, that depends on the values in `symbols`.\n \"\"\"\n return cirq.Circuit(\n cirq.X(bit)**symbols[0],\n cirq.Y(bit)**symbols[1],\n cirq.Z(bit)**symbols[2])\n\n\ndef two_qubit_unitary(bits, symbols):\n \"\"\"Make a Cirq circuit that creates an arbitrary two qubit unitary.\"\"\"\n circuit = cirq.Circuit()\n circuit += one_qubit_unitary(bits[0], symbols[0:3])\n circuit += one_qubit_unitary(bits[1], symbols[3:6])\n circuit += [cirq.ZZ(*bits)**symbols[6]]\n circuit += [cirq.YY(*bits)**symbols[7]]\n circuit += [cirq.XX(*bits)**symbols[8]]\n circuit += one_qubit_unitary(bits[0], symbols[9:12])\n circuit += one_qubit_unitary(bits[1], symbols[12:])\n return circuit\n\n\ndef two_qubit_pool(source_qubit, sink_qubit, symbols):\n \"\"\"Make a Cirq circuit to do a parameterized 'pooling' operation, which\n attempts to reduce entanglement down from two qubits to just one.\"\"\"\n pool_circuit = cirq.Circuit()\n sink_basis_selector = one_qubit_unitary(sink_qubit, symbols[0:3])\n source_basis_selector = one_qubit_unitary(source_qubit, symbols[3:6])\n pool_circuit.append(sink_basis_selector)\n pool_circuit.append(source_basis_selector)\n pool_circuit.append(cirq.CNOT(control=source_qubit, target=sink_qubit))\n pool_circuit.append(sink_basis_selector**-1)\n return pool_circuit", "To see what you created, print out the one-qubit unitary circuit:", "SVGCircuit(one_qubit_unitary(cirq.GridQubit(0, 0), sympy.symbols('x0:3')))", "And the two-qubit unitary circuit:", "SVGCircuit(two_qubit_unitary(cirq.GridQubit.rect(1, 2), sympy.symbols('x0:15')))", "And the two-qubit pooling circuit:", "SVGCircuit(two_qubit_pool(*cirq.GridQubit.rect(1, 2), sympy.symbols('x0:6')))", "1.5.2.1 Quantum convolution\nAs in the <a href=\"https://arxiv.org/abs/1810.03787\" class=\"external\">Cong and Lukin</a> paper, define the 1D quantum convolution as the application of a two-qubit parameterized unitary to every pair of adjacent qubits with a stride of one.", "def quantum_conv_circuit(bits, symbols):\n \"\"\"Quantum Convolution Layer following the above diagram.\n Return a Cirq circuit with the cascade of `two_qubit_unitary` applied\n to all pairs of qubits in `bits` as in the diagram above.\n \"\"\"\n circuit = cirq.Circuit()\n for first, second in zip(bits[0::2], bits[1::2]):\n circuit += two_qubit_unitary([first, second], symbols)\n for first, second in zip(bits[1::2], bits[2::2] + [bits[0]]):\n circuit += two_qubit_unitary([first, second], symbols)\n return circuit", "Display the (very horizontal) circuit:", "SVGCircuit(\n quantum_conv_circuit(cirq.GridQubit.rect(1, 8), sympy.symbols('x0:15')))", "1.5.2.2 Quantum pooling\nA quantum pooling layer pools from $N$ qubits down to $\\frac{N}{2}$ qubits using the two-qubit pool defined above.", "def quantum_pool_circuit(source_bits, sink_bits, symbols):\n \"\"\"A layer that specifies a quantum pooling operation.\n A Quantum pool tries to learn to pool the relevant information from two\n qubits onto 1.\n \"\"\"\n circuit = cirq.Circuit()\n for source, sink in zip(source_bits, sink_bits):\n circuit += two_qubit_pool(source, sink, symbols)\n return circuit", "Examine a pooling component circuit:", "test_bits = cirq.GridQubit.rect(1, 8)\n\nSVGCircuit(\n quantum_pool_circuit(test_bits[:4], test_bits[4:], sympy.symbols('x0:6')))", "1.6 Model definition\nNow use the defined layers to construct a purely quantum CNN. Start with eight qubits, pool down to one, then measure $\\langle \\hat{Z} \\rangle$.", "def create_model_circuit(qubits):\n \"\"\"Create sequence of alternating convolution and pooling operators \n which gradually shrink over time.\"\"\"\n model_circuit = cirq.Circuit()\n symbols = sympy.symbols('qconv0:63')\n # Cirq uses sympy.Symbols to map learnable variables. TensorFlow Quantum\n # scans incoming circuits and replaces these with TensorFlow variables.\n model_circuit += quantum_conv_circuit(qubits, symbols[0:15])\n model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],\n symbols[15:21])\n model_circuit += quantum_conv_circuit(qubits[4:], symbols[21:36])\n model_circuit += quantum_pool_circuit(qubits[4:6], qubits[6:],\n symbols[36:42])\n model_circuit += quantum_conv_circuit(qubits[6:], symbols[42:57])\n model_circuit += quantum_pool_circuit([qubits[6]], [qubits[7]],\n symbols[57:63])\n return model_circuit\n\n\n# Create our qubits and readout operators in Cirq.\ncluster_state_bits = cirq.GridQubit.rect(1, 8)\nreadout_operators = cirq.Z(cluster_state_bits[-1])\n\n# Build a sequential model enacting the logic in 1.3 of this notebook.\n# Here you are making the static cluster state prep as a part of the AddCircuit and the\n# \"quantum datapoints\" are coming in the form of excitation\nexcitation_input = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\ncluster_state = tfq.layers.AddCircuit()(\n excitation_input, prepend=cluster_state_circuit(cluster_state_bits))\n\nquantum_model = tfq.layers.PQC(create_model_circuit(cluster_state_bits),\n readout_operators)(cluster_state)\n\nqcnn_model = tf.keras.Model(inputs=[excitation_input], outputs=[quantum_model])\n\n# Show the keras plot of the model\ntf.keras.utils.plot_model(qcnn_model,\n show_shapes=True,\n show_layer_names=False,\n dpi=70)", "1.7 Train the model\nTrain the model over the full batch to simplify this example.", "# Generate some training data.\ntrain_excitations, train_labels, test_excitations, test_labels = generate_data(\n cluster_state_bits)\n\n\n# Custom accuracy metric.\n@tf.function\ndef custom_accuracy(y_true, y_pred):\n y_true = tf.squeeze(y_true)\n y_pred = tf.map_fn(lambda x: 1.0 if x >= 0 else -1.0, y_pred)\n return tf.keras.backend.mean(tf.keras.backend.equal(y_true, y_pred))\n\n\nqcnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nhistory = qcnn_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations, test_labels))\n\nplt.plot(history.history['loss'][1:], label='Training')\nplt.plot(history.history['val_loss'][1:], label='Validation')\nplt.title('Training a Quantum CNN to Detect Excited Cluster States')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\nplt.show()", "2. Hybrid models\nYou don't have to go from eight qubits down to one qubit using quantum convolution. You could have done one or two rounds of quantum convolution and fed the results into a classical neural network. This section explores quantum-classical hybrid models.\n2.1 Hybrid model with a single quantum filter\nApply one layer of quantum convolution, reading out $\\langle \\hat{Z}_n \\rangle$ on all bits, followed by a densely-connected neural network.\n<img src=\"./images/qcnn_5.png\" width=\"1000\"> \n2.1.1 Model definition", "# 1-local operators to read out\nreadouts = [cirq.Z(bit) for bit in cluster_state_bits[4:]]\n\n\ndef multi_readout_model_circuit(qubits):\n \"\"\"Make a model circuit with less quantum pool and conv operations.\"\"\"\n model_circuit = cirq.Circuit()\n symbols = sympy.symbols('qconv0:21')\n model_circuit += quantum_conv_circuit(qubits, symbols[0:15])\n model_circuit += quantum_pool_circuit(qubits[:4], qubits[4:],\n symbols[15:21])\n return model_circuit\n\n\n# Build a model enacting the logic in 2.1 of this notebook.\nexcitation_input_dual = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\n\ncluster_state_dual = tfq.layers.AddCircuit()(\n excitation_input_dual, prepend=cluster_state_circuit(cluster_state_bits))\n\nquantum_model_dual = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_dual)\n\nd1_dual = tf.keras.layers.Dense(8)(quantum_model_dual)\n\nd2_dual = tf.keras.layers.Dense(1)(d1_dual)\n\nhybrid_model = tf.keras.Model(inputs=[excitation_input_dual], outputs=[d2_dual])\n\n# Display the model architecture\ntf.keras.utils.plot_model(hybrid_model,\n show_shapes=True,\n show_layer_names=False,\n dpi=70)", "2.1.2 Train the model", "hybrid_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nhybrid_history = hybrid_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations,\n test_labels))\n\nplt.plot(history.history['val_custom_accuracy'], label='QCNN')\nplt.plot(hybrid_history.history['val_custom_accuracy'], label='Hybrid CNN')\nplt.title('Quantum vs Hybrid CNN performance')\nplt.xlabel('Epochs')\nplt.legend()\nplt.ylabel('Validation Accuracy')\nplt.show()", "As you can see, with very modest classical assistance, the hybrid model will usually converge faster than the purely quantum version.\n2.2 Hybrid convolution with multiple quantum filters\nNow let's try an architecture that uses multiple quantum convolutions and a classical neural network to combine them.\n<img src=\"./images/qcnn_6.png\" width=\"1000\"> \n2.2.1 Model definition", "excitation_input_multi = tf.keras.Input(shape=(), dtype=tf.dtypes.string)\n\ncluster_state_multi = tfq.layers.AddCircuit()(\n excitation_input_multi, prepend=cluster_state_circuit(cluster_state_bits))\n\n# apply 3 different filters and measure expectation values\n\nquantum_model_multi1 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\nquantum_model_multi2 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\nquantum_model_multi3 = tfq.layers.PQC(\n multi_readout_model_circuit(cluster_state_bits),\n readouts)(cluster_state_multi)\n\n# concatenate outputs and feed into a small classical NN\nconcat_out = tf.keras.layers.concatenate(\n [quantum_model_multi1, quantum_model_multi2, quantum_model_multi3])\n\ndense_1 = tf.keras.layers.Dense(8)(concat_out)\n\ndense_2 = tf.keras.layers.Dense(1)(dense_1)\n\nmulti_qconv_model = tf.keras.Model(inputs=[excitation_input_multi],\n outputs=[dense_2])\n\n# Display the model architecture\ntf.keras.utils.plot_model(multi_qconv_model,\n show_shapes=True,\n show_layer_names=True,\n dpi=70)", "2.2.2 Train the model", "multi_qconv_model.compile(\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.02),\n loss=tf.losses.mse,\n metrics=[custom_accuracy])\n\nmulti_qconv_history = multi_qconv_model.fit(x=train_excitations,\n y=train_labels,\n batch_size=16,\n epochs=25,\n verbose=1,\n validation_data=(test_excitations,\n test_labels))\n\nplt.plot(history.history['val_custom_accuracy'][:25], label='QCNN')\nplt.plot(hybrid_history.history['val_custom_accuracy'][:25], label='Hybrid CNN')\nplt.plot(multi_qconv_history.history['val_custom_accuracy'][:25],\n label='Hybrid CNN \\n Multiple Quantum Filters')\nplt.title('Quantum vs Hybrid CNN performance')\nplt.xlabel('Epochs')\nplt.legend()\nplt.ylabel('Validation Accuracy')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
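The QCNN notebook above parameterizes each single-qubit rotation as `X**a · Y**b · Z**c` with learnable exponents. The same parameterization can be written down directly as 2×2 matrices; the sketch below is a plain-Python stand-in (no Cirq or TFQ dependency, and not their implementation) that uses the closed form for Pauli powers and checks that the product stays unitary for arbitrary exponents.

```python
import cmath

def matmul2(a, b):
    """Multiply two 2x2 complex matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def pauli_power(pauli, t):
    """Closed form for X**t, Y**t, Z**t (cirq-style exponent convention):
    eigenvalue 1 on the +1 eigenvector, exp(i*pi*t) on the -1 eigenvector."""
    e = cmath.exp(1j * cmath.pi * t)
    p, m = (1 + e) / 2, (1 - e) / 2
    if pauli == 'Z':
        return [[1, 0], [0, e]]
    if pauli == 'X':
        return [[p, m], [m, p]]
    if pauli == 'Y':
        return [[p, -1j * m], [1j * m, p]]
    raise ValueError(pauli)

def one_qubit_unitary(a, b, c):
    """Matrix of X**a, then Y**b, then Z**c (applied to a state in that order,
    so the matrices compose right to left)."""
    return matmul2(pauli_power('Z', c),
                   matmul2(pauli_power('Y', b), pauli_power('X', a)))

def is_unitary(u, tol=1e-9):
    """Check that the columns of u are orthonormal, i.e. U†U = I."""
    for i in range(2):
        for j in range(2):
            acc = sum(u[k][i].conjugate() * u[k][j] for k in range(2))
            if abs(acc - (1 if i == j else 0)) > tol:
                return False
    return True

print(is_unitary(one_qubit_unitary(0.3, -0.7, 1.2)))  # prints True
```

Because each factor is a projector combination with unimodular eigenvalues, the product is unitary for any real exponents, which is why TFQ can treat these exponents as unconstrained trainable variables.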
mdeff/ntds_2016
algorithms/08_sol_graph_inpainting.ipynb
mit
[ "A Network Tour of Data Science\nMichaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.\nAssignment 4: Transductive Learning using Graphs\nTransduction is reasoning from observed, specific (training) cases to specific (test) cases. For this assignment, the task is to infer missing values in some dataset, while the training and testing cases are available to construct a graph. The exercise consists of two parts: (1) construct some artificial data and (2) retrieve the missing values and measure performance.\n1 Smooth graph signal\nLet $\\mathcal{G} = (\\mathcal{V}, W)$ be a graph of vertex set $\\mathcal{V}$ and weighted adjacency matrix $W$.", "import numpy as np\nimport scipy.io\nimport scipy.sparse\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport os.path\n\nX = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'embedding.mtx'))\nW = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'graph.mtx'))\nN = W.shape[0]\n\nprint('N = |V| = {}, k|V| < |E| = {}'.format(N, W.nnz))\nplt.spy(W, markersize=2, color='black');", "Design a technique to construct smooth scalar signals $x \\in \\mathbb{R}^N$ over the graph $\\mathcal{G}$.\nHint:\n* This part is related to our last exercise.\n* There are multiple ways to do this; one of them is to filter random signals.", "# Fourier basis.\nD = W.sum(axis=0)\nD = scipy.sparse.diags(D.A.squeeze(), 0)\nL = D - W\nlamb, U = np.linalg.eigh(L.toarray())\n\n# Low-pass filters.\n\ndef f1(u, a=4):\n y = np.zeros(u.shape)\n y[:a] = 1\n return y\ndef f2(u, m=4):\n return np.maximum(1 - m * u / u[-1], 0)\ndef f3(u, a=0.8):\n return np.exp(-u / a)\n\n# Random signal.\nx = np.random.uniform(-1, 1, size=W.shape[0])\nxhat = U.T.dot(x)\n\nfig, ax = plt.subplots(1, 2, figsize=(15, 5))\nax[0].plot(lamb, xhat, '.-')\nax[0].set_title('Random signal spectrum')\nax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)\nax[1].set_title('Random signal')\n\n# Smooth signal through filtering.\nxhat *= f3(lamb)\nx = 
U.dot(xhat)\n\nM = x.T.dot(L.dot(x))\nprint('M = x^T L x = {}'.format(M))\n\nfig, ax = plt.subplots(1, 2, figsize=(15, 5))\nax[0].set_title('Smooth signal spectrum')\nax[0].plot(lamb, abs(xhat), '.-', label='spectrum |U^T x|')\n#ax[0].plot(lamb, np.sqrt(M/lamb))\nax[0].plot(lamb[1:], np.sqrt(M/lamb[1:]), label='Decay associated with smoothness M')\nax[0].legend()\nax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)\nax[1].set_title('Smooth signal');", "2 Graph Signal Inpainting\nLet $y$ be a signal obtained by observing $n$ out of the $N$ entries of a smooth signal $x$. Design and implement a procedure to infer the missing values and test its average reconstruction error $\\| x^\\ast - x \\|_2^2$ as a function of $n/N$ on a test set of signals created using the technique developed above.\nFirst complete the equations below, then do the implementation.\nObservation:\n$$y = Ax$$\nwhere $A$ is a diagonal masking matrix with $\\operatorname{diag}(A) \\in \\{0,1\\}^N$.\nOptimization problem:\n$$x^\\ast = \\arg \\min_x \\frac{\\tau}{2} \\|Ax - y\\|_2^2 + \\frac12 x^T L x$$\nwhere $\\|Ax - y\\|_2^2$ is the fidelity term and \n$x^T L x = \\sum_{u \\sim v} w(u,v) (x(u) - x(v))^2$ is the smoothness prior.\nOptimal solution (by setting the derivative to zero):\n$$\\tau Ax^\\ast - \\tau y + L x^\\ast = 0\n\\hspace{0.3cm} \\rightarrow \\hspace{0.3cm}\nx^\\ast = (\\tau A + L)^{-1} \\tau y$$\nHint: in the end the solution should be a linear system of equations, to be solved with np.linalg.solve().", "tau = 1e5 # Balance between fidelity and smoothness prior.\nnum = 100 # Number of signals and masks to generate.\n\n# Percentage of values to keep.\nprobs = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]\n\nerrors = []\nfor p in probs:\n mse = 0\n for _ in range(num):\n # Smooth signal.\n x = np.random.uniform(-1, 1, size=W.shape[0])\n xhat = U.T.dot(x) * f3(lamb)\n x = U.dot(xhat)\n\n # Observation.\n A = np.diag(np.random.uniform(size=N) < p)\n y = A.dot(x)\n\n # Reconstruction.\n        
x_sol = np.linalg.solve(tau * A + L, tau * y)\n mse += np.linalg.norm(x - x_sol)**2\n errors.append(mse / num)\n\n# Show one example.\nfig, ax = plt.subplots(1, 3, figsize=(15, 5))\nparam = dict(s=40, vmin=min(x), vmax=max(x), linewidths=0)\nax[0].scatter(X[:, 0], X[:, 1], c=x, **param)\nax[1].scatter(X[:, 0], X[:, 1], c=y, **param)\nax[2].scatter(X[:, 0], X[:, 1], c=x_sol, **param)\nax[0].set_title('Ground truth')\nax[1].set_title('Observed signal (missing values set to 0)')\nax[2].set_title('Inpainted signal')\n\nprint('|x-y|_2^2 = {:5f}'.format(np.linalg.norm(x - y)**2))\nprint('|x-x*|_2^2 = {:5f}'.format(np.linalg.norm(x - x_sol)**2))\n\n# Show reconstruction error w.r.t. percentage of observed values.\nplt.figure(figsize=(15, 5))\nplt.semilogy(probs, errors, '.', markersize=10)\nplt.xlabel('Percentage of observed values n/N')\nplt.ylabel('Reconstruction error |x* - x|_2^2');" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
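The closed form derived in the assignment, $x^\ast = (\tau A + L)^{-1} \tau y$, is just a linear system. The toy below sets it up for a 3-node path graph with one hidden node and solves it with a hand-rolled Gaussian elimination, so it has no NumPy dependency; the graph, mask, and $\tau$ are made up for illustration. In the large-$\tau$ limit the hidden middle value is recovered as the average of its neighbours.

```python
def solve(mat, rhs):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(rhs)
    m = [row[:] + [rhs[i]] for i, row in enumerate(mat)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for k in range(col, n + 1):
                m[r][k] -= f * m[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):  # back substitution
        x[r] = (m[r][n] - sum(m[r][k] * x[k] for k in range(r + 1, n))) / m[r][r]
    return x

# 3-node path graph Laplacian; the mask observes nodes 0 and 2 only.
L = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
tau, mask, y = 1e6, [1, 0, 1], [0.0, 0.0, 1.0]
system = [[tau * (mask[i] if i == j else 0) + L[i][j] for j in range(3)]
          for i in range(3)]
x_star = solve(system, [tau * mask[i] * y[i] for i in range(3)])
print([round(v, 4) for v in x_star])  # -> [0.0, 0.5, 1.0]
```

The unobserved node 1 lands at 0.5 because the smoothness term $x^T L x$ penalizes the squared differences along both incident edges equally, exactly the behaviour the notebook measures at scale with `np.linalg.solve`.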
bzamecnik/ml
snippets/keras/lstm_hello_world.ipynb
mit
[ "Hello, LSTM!\nIn this project we'd like to explore the basic usage of LSTM (Long Short-Term Memory), which is a flavor of RNN (Recurrent Neural Network).\n\nA nice theoretical tutorial is Understanding LSTM Networks.\nKeras docs: http://keras.io/layers/recurrent/\nKeras examples: https://github.com/fchollet/keras/tree/master/examples\nhttps://github.com/fchollet/keras/blob/master/examples/imdb_bidirectional_lstm.py\nhttps://github.com/fchollet/keras/blob/master/examples/imdb_cnn_lstm.py\nhttps://github.com/fchollet/keras/blob/master/examples/imdb_lstm.py\nhttps://github.com/fchollet/keras/blob/master/examples/lstm_text_generation.py\n\nThe goals\n\nDefine the problem that LSTM can solve.\nShow a basic working example of LSTM usage in Keras.\nTry to learn some basic patterns in simple sequences.\n\nSetup\nInstall keras, tensorflow and the basic ML/Data Science libs (numpy/matplotlib/etc.).\nSet TensorFlow as the keras backend in ~/.keras/keras.json:\njson\n{\"epsilon\": 1e-07, \"floatx\": \"float32\", \"backend\": \"tensorflow\"}", "%matplotlib inline\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nmpl.rc('image', interpolation='nearest', cmap='gray')\nmpl.rc('figure', figsize=(20,10))", "Basic problems\nPrediction of the next value of a sequence\nsequence of (110)+\nJust a repeated pattern:\n110110110110110...\nClassification of sequences\nThe inputs/outputs must be tensors of shape (samples, time_steps, features).\nIn this case (1, len(X), 1).\nFor simplicity we have a single training example and no test set.\nPredict one step ahead:\n(A, B, C, [D, E]) -&gt; D", "X = np.array([[[1],[1],[0]], [[1],[0],[1]], [[0],[1],[1]]])\ny = np.array([[1], [1], [0]])\n\n# X = np.array([[[1],[0],[0]], [[0],[1],[0]], [[0],[0],[1]]])\n# y = np.array([[1], [0], [0]])\n\n# input: 3 samples of 3-step sequences with 1 feature\n# output: 3 samples with 1 feature\nX.shape, y.shape", "Basic usage of LSTM layers in Keras\nNotes:\n\nthe first layer must 
specify the input shape\nTensorFlow needs explicit length of series, so input_shape or batch_input_shape must be used, not just input_dim\nwhen specifying batch_input_shape in LSTM, we need to explicitly add batch_size to model.fit()", "from keras.models import Sequential\nfrom keras.layers.core import Dense, Activation, TimeDistributedDense\nfrom keras.layers.recurrent import LSTM\n\n# model = Sequential()\n# # return_sequences=False\n# model.add(LSTM(output_dim=1, input_shape=(3, 1)))\n# # since the LSTM layer has only one output after activation we can directly use it as the model output\n# model.add(Activation('sigmoid'))\n# model.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')\n\n# This model is probably too easy: it is not even able to overfit the training dataset.\n# For LSTM output dim 3 it works ok (after a few hundred epochs).\n\nmodel = Sequential()\nmodel.add(LSTM(output_dim=3, input_shape=(3, 1)))\n# Since the LSTM layer has multiple outputs and model has single one\n# we need to add another Dense layer with single output.\n# In case the LSTM would return sequences we would use TimeDistributedDense layer.\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\n\nmodel.compile(loss='binary_crossentropy', optimizer='adam', class_mode='binary')\n\nmodel.count_params()\n\nmodel.fit(X, y, nb_epoch=500, show_accuracy=True)\n\nplt.plot(model.predict_proba(X).flatten(), 'rx')\nplt.plot(model.predict_classes(X).flatten(), 'ro')\nplt.plot(y.flatten(), 'g.')\nplt.xlim(-0.1, 2.1)\nplt.ylim(-0.1, 1.1)\n\nmodel.predict_proba(X)\n\nmodel.predict_classes(X)\n\n# del model", "LSTM weight meanings:\n\nhttp://colah.github.io/posts/2015-08-Understanding-LSTMs/\nsource code LSTM in recurrent.py\n\n[W_i, U_i, b_i,\n W_c, U_c, b_c,\n W_f, U_f, b_f,\n W_o, U_o, b_o]\nTypes of weights:\n- W - weight matrix - from input to output\n- b - bias vector - from input to output\n- U - weight matrix - from hidden to output (it has no companion biases)\nUsage of 
weights:\n- i - input - to control whether to modify the cell state\n- c - candidate - a new value of cell state\n- f - forget - to remove the previous cell state\n- o - output - to control whether to output something\nInputs and outputs of a LSTM unit:\n- value, cell state, hidden state", "weight_names = ['W_i', 'U_i', 'b_i',\n 'W_c', 'U_c', 'b_c',\n 'W_f', 'U_f', 'b_f',\n 'W_o', 'U_o', 'b_o']\n\nweight_shapes = [w.shape for w in model.get_weights()]\n# for n, w in zip(weight_names, weight_shapes):\n# print(n, ':', w)\nprint(weight_shapes)\n\ndef pad_vector_shape(s):\n return (s[0], 1) if len(s) == 1 else s\n\nall_shapes = np.array([pad_vector_shape(s) for s in weight_shapes])\nall_shapes\n\nfor w in model.get_weights():\n print(w)\n\nall_weights = np.zeros((all_shapes[:,0].sum(axis=0), all_shapes[:,1].max(axis=0)))\n\ndef add_weights(src, target):\n target[0] = src[0]\n target[1:4] = src[1]\n target[4:7,0] = src[2]\n \nfor i in range(4):\n add_weights(model.get_weights()[i*3:(i+1)*3], all_weights[i*7:(i+1)*7])\n\nall_weights[28:31,0] = model.get_weights()[12].T\nall_weights[31,0] = model.get_weights()[13]\n \nplt.imshow(all_weights.T)\n\n\nfrom matplotlib.patches import Rectangle\n\nax = plt.gca()\nax.add_patch(Rectangle([-.4, -0.4], 28-0.2, 3-0.2, fc='none', ec='r', lw=2, alpha=0.75))\nax.add_patch(Rectangle([28 - .4, -0.4], 3-0.2, 3-0.2, fc='none', ec='g', lw=2, alpha=0.75))\nax.add_patch(Rectangle([31 - .4, -0.4], 1-0.2, 3-0.2, fc='none', ec='b', lw=2, alpha=0.75))\n\nplt.savefig('weights_110.png')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
michaelaye/planet4
notebooks/LPSC 2018 stats.ipynb
isc
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Read-in-data\" data-toc-modified-id=\"Read-in-data-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Read in data</a></span><ul class=\"toc-item\"><li><span><a href=\"#Get-original-input-stats\" data-toc-modified-id=\"Get-original-input-stats-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Get original input stats</a></span></li></ul></li><li><span><a href=\"#Convert-distance-to-meters\" data-toc-modified-id=\"Convert-distance-to-meters-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Convert distance to meters</a></span><ul class=\"toc-item\"><li><ul class=\"toc-item\"><li><span><a href=\"#Reduction-of-number-of-fan-markings-to-finals\" data-toc-modified-id=\"Reduction-of-number-of-fan-markings-to-finals-2.0.1\"><span class=\"toc-item-num\">2.0.1&nbsp;&nbsp;</span>Reduction of number of fan markings to finals</a></span></li></ul></li></ul></li><li><span><a href=\"#Length-stats\" data-toc-modified-id=\"Length-stats-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Length stats</a></span><ul class=\"toc-item\"><li><span><a href=\"#Blotch-sizes\" data-toc-modified-id=\"Blotch-sizes-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Blotch sizes</a></span></li><li><span><a href=\"#Longest-fans\" data-toc-modified-id=\"Longest-fans-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Longest fans</a></span></li></ul></li><li><span><a href=\"#Regional\" data-toc-modified-id=\"Regional-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Regional</a></span></li></ul></div>", "%matplotlib nbagg\n\nimport seaborn as sns\nfrom planet4 import io, stats, markings\nfrom planet4.catalog_production import ReleaseManager", "Read in data", "rm = ReleaseManager('v1.0b4')\n\ndb = io.DBManager()\n\ndb.n_image_names\n\ndb.dbname\n\nblotches = rm.read_blotch_file()\nfans = rm.read_fan_file()", "Get original input stats", "import 
dask.dataframe as dd\n\ndata = dd.read_hdf(db.dbname, 'df')\n\nfan_input = data[data.marking=='fan']\n\nblotch_input = data[data.marking=='blotch']\n\nfan_input.compute().shape\n\nblotch_input.compute().shape", "Convert distance to meters", "fans['distance_m'] = fans.distance*fans.map_scale\n\nblotches['radius_1_m'] = blotches.radius_1*blotches.map_scale\nblotches['radius_2_m'] = blotches.radius_2*blotches.map_scale", "Reduction of number of fan markings to finals", "n_fan_in = 2792963\n\nfans.shape[0]\n\nfans.shape[0] / n_fan_in\n\nblotches.shape[0]", "Length stats\nPercentage of fan markings below 100 m:", "import scipy\nscipy.stats.percentileofscore(fans.distance_m, 100)", "Cumulative histogram of fan lengths", "def add_percentage_line(ax, meters, column):\n y = scipy.stats.percentileofscore(column, meters)\n ax.axhline(y/100)\n ax.axvline(meters)\n ax.text(meters, y/100, f\"{y/100:0.2f}\")\n\nplt.close('all')\n\nfig, ax = plt.subplots(figsize=(8,4))\nsns.distplot(fans.distance_m, bins=500, kde=False, hist_kws={'cumulative':True,'normed':True},\n axlabel='Fan length [m]', ax=ax)\nax.set_title(\"Cumulative normalized histogram for fan lengths\")\nax.set_ylabel(\"Fraction of fans with given length\")\nadd_percentage_line(ax, 100, fans.distance_m)\nadd_percentage_line(ax, 50, fans.distance_m)", "General fan stats, in numbers", "fans.distance_m.describe()", "In words, the mean length of fans is {{f\"{fans.distance_m.describe()['mean']:.1f}\"}} m, while the median is\n{{f\"{fans.distance_m.describe()['50%']:.1f}\"}} m.\nBlotch sizes", "plt.figure()\ncols = ['radius_1','radius_2']\nsns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.), \n color=['r','g'], label=cols)\nplt.legend()\n\nplt.figure()\ncols = ['radius_1_m','radius_2_m']\nsns.distplot(blotches[cols], kde=False, bins=np.arange(2.0,50.), \n color=['r','g'], label=cols)\nplt.legend()\n\nfig, ax = plt.subplots(figsize=(8,4))\nsns.distplot(blotches.radius_2_m, bins=500, kde=False, 
hist_kws={'cumulative':True,'normed':True},\n axlabel='Blotch radius_1 [m]', ax=ax)\nax.set_title(\"Cumulative normalized histogram for blotch lengths\")\nax.set_ylabel(\"Fraction of blotches with given radius_1\")\nadd_percentage_line(ax, 30, blotches.radius_2_m)\nadd_percentage_line(ax, 10, blotches.radius_2_m)\n\nimport scipy\nscipy.stats.percentileofscore(blotches.radius_2_m, 30)\n\nplt.close('all')", "Longest fans", "fans.query('distance_m > 350')[\n 'distance_m distance obsid image_x image_y image_id x_tile y_tile'.split()].sort_values(\n by='distance_m')\n\nusers1 = markings.ImageID(\"APF0000dtk\").data.user_name.unique()\n\nusers2 = markings.ImageID(\"de3\").data.user_name.unique()\n\nsame = []\nfor user in users1:\n if user in users2:\n same.append(user)\n\nsame\n\nlen(users2)\n\nfrom planet4 import plotting\n\nplotting.plot_image_id_pipeline('q45', datapath=rm.catalog, via_obsid=False, figsize=(12,8))", "Regional", "from planet4 import stats\nfrom planet4 import region_data\n\nstats.define_season_column(fans)\nstats.define_season_column(blotches)\n\nregions = ['Manhattan2', 'Giza', 'Inca', 'Ithaca']\n\nfor reg in regions:\n obj = getattr(region_data, reg)\n roi = obj()\n for marking in [fans, blotches]:\n if reg == 'Manhattan2':\n reg = 'Manhattan'\n marking.loc[marking.obsid.isin(roi.all_obsids), 'roi'] = reg\n\nfans.roi.value_counts(dropna=False)\n\nfans_rois = fans[fans.roi.notnull()]\nblotches_rois = blotches[blotches.roi.notnull()]\n\nfans_rois.roi.value_counts(dropna=False)\n\nfans.query('season==2').distance_m.median()\n\nfans.query('season==3').distance_m.median()\n\nimport seaborn as sns\nsns.set_palette('Set1')\n\nfans_rois\n\ndef my_plot(x, **kwargs):\n sns.distplot(x, kde=True, **kwargs)\n# plt.axvline(x.median(), color='blue')\n plt.gca().set_xlim(-10, 150)\n\ng = sns.FacetGrid(fans_rois, col=\"roi\", hue='season', size=2, aspect=1.1, legend_out=False)\n# g.map(sns.distplot, \"distance_m\", kde=True);\ng.map(my_plot, 
'distance_m')\ng.add_legend()\n\ng = sns.FacetGrid(fans_rois, col=\"roi\", hue='season', size=2, aspect=1.1, legend_out=False)\ng.map(sns.distplot, \"distance_m\", kde=True);\n# g.map(my_plot, 'distance_m')\ng.add_legend()\n\nfor region in ['Manhattan2', 'Giza','Ithaca']:\n print(region)\n obj = getattr(region_data, region)\n for s in ['season2','season3']:\n print(s)\n obsids = getattr(obj, s)\n print(fans[fans.obsid.isin(obsids)].distance_m.median())\n\nimport numpy as np\nimport scipy\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure()\n\nsns.set_palette(\"hls\", 1)\ndata = np.random.randn(30)\np=sns.kdeplot(data, shade=True)\n\nx,y = p.get_lines()[0].get_data()\n\n#care with the order, it is first y\n#initial fills a 0 so the result has same length than x\ncdf = scipy.integrate.cumtrapz(y, x, initial=0)\n\nnearest_05 = np.abs(cdf-0.5).argmin()\n\nx_median = x[nearest_05]\ny_median = y[nearest_05]\n\nplt.vlines(x_median, 0, y_median)\n\nimport numpy as np\nimport scipy\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nplt.figure()\n\nsns.set_palette(\"hls\", 1)\ndata = np.random.randn(30)\np=sns.kdeplot(data, shade=True)\n\nx,y = p.get_lines()[0].get_data()\n\n#care with the order, it is first y\n#initial fills a 0 so the result has same length than x\ncdf = scipy.integrate.cumtrapz(y, x, initial=0)\n\nnearest_05 = np.abs(cdf-0.5).argmin()\n\nx_median = x[nearest_05]\ny_median = y[nearest_05]\n\nplt.vlines(x_median, 0, y_median)\n\nnp.median(x)\n\nnp.percentile(x, 50)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
deniederhut/workshop_pyintensive
instructor/day_three.ipynb
bsd-2-clause
[ "Text Data\nPre-introduction\nWe'll be spending a lot of time today manipulating text. Make sure you remember how to split, join, and search strings.\nIntroduction\nWe've spent a lot of time in python dealing with text data, and that's because text data is everywhere. It is the primary form of communication between persons and persons, persons and computers, and computers and computers. The kind of inferential methods that we apply to text data, however, are different from those applied to tabular data. \nThis is partly because documents are typically specified in a way that expresses both structure and content using text (i.e. the document object model).\nLargely, however, it's because text is difficult to turn into numbers in a way that preserves the information in the document. Today, we'll talk about dominant language model in NLP and the basics of how to implement it in Python.\nThe term-document model\nThis is also sometimes referred to as \"bag-of-words\" by those who don't think very highly of it. The term document model looks at language as individual communicative efforts that contain one or more tokens. The kind and number of the tokens in a document tells you something about what is attempting to be communicated, and the order of those tokens is ignored.\nTo start with, let's load a document.", "import nltk\n#nltk.download('webtext')\ndocument = nltk.corpus.webtext.open('grail.txt').read()", "Let's see what's in this document", "len(document.split('\\n'))\n\ndocument.split('\\n')[0:10]", "It looks like we've gotten ourselves a bit of the script from Monty Python and the Holy Grail. Note that when we are looking at the text, part of the structure of the document is written in tokens. 
For example, stage directions have been placed in brackets, and the names of the person speaking are in all caps.\nRegular expressions\nIf we wanted to read out all of the stage directions for analysis, or just King Arthur's lines, doing so in base python string processing will be very difficult. Instead, we are going to use regular expressions. Regular expressions are a method for string manipulation that match patterns instead of bytes.", "import re\nsnippet = \"I fart in your general direction! Your mother was a hamster, and your father smelt of elderberries!\"\nre.search(r'mother', snippet)", "Just like with str.find, we can search for plain text. But re also gives us the option for searching for patterns of bytes - like only alphabetic characters.", "re.search(r'[a-z]', snippet)", "In this case, we've told re to search for the first sequence of bytes that is only composed of lowercase letters between a and z. We could get the letters at the end of each sentence by including a bang at the end of the pattern.", "re.search(r'[a-z]!', snippet)", "If we wanted to pull out just the stage directions from the screenplay, we might try a pattern like this:", "re.findall(r'[a-zA-Z]', document)[0:10]", "So that's obviously no good. There are two things happening here:\n\n[ and ] do not mean 'bracket'; they are special characters which mean 'any thing of this class'\nwe've only matched one letter each\n\nA better regular expression, then, would wrap this in escaped brackets, and include a command saying more than one letter.\nRe is flexible about how you specify numbers - you can match none, some, a range, or all repetitions of a sequence or character class.\ncharacter | meaning\n----------|--------\n{x} | exactly x repetitions\n{x,y} | between x and y repetitions\n? | 0 or 1 repetition\n* | 0 or many repetitions\n+ | 1 or many repetitions", "re.findall(r'\\[[a-zA-Z]+\\]', document)[0:10]", "This is better, but it's missing that [clop clop clop] we saw above. 
This is because we told the regex engine to match any alphabetic character, but we did not specify whitespaces, commas, etc. To match these, we'll use the dot operator, which will match anything except a newline.\nPart of the power of regular expressions is their special characters. Common ones that you'll see are:\ncharacter | meaning\n----------|--------\n. | match anything except a newline\n^ | match the start of a line\n$ | match the end of a line\n\\s | matches any whitespace or newline\nFinally, we need to fix this + character. It is a 'greedy' operator, which means it will match as much of the string as possible. To see why this is a problem, try:", "snippet = 'This is [cough cough] an example of a [really] greedy operator'\nre.findall(r'\\[.+\\]', snippet)", "Since the operator is greedy, it is matching everything in between the first open and the last close bracket. To make + consume the least possible amount of string, we'll add a ?.", "p = re.compile(r'\\[.+?\\]')\nre.findall(p, document)[0:10]", "What if we wanted to grab all of Arthur's speech? This one is a little trickier, since:\n\nIt is not conveniently bracketed; and,\nWe want to match on ARTHUR, but not to capture it\n\nIf we wanted to do this using base string manipulation, we would need to do something like:\nsplit the document into lines\ncreate a new list of just lines that start with ARTHUR\ncreate a newer list with ARTHUR removed from the front of each element\nRegex gives us a way of doing this in one line, by using something called groups. 
Groups are pieces of a pattern that can be ignored, negated, or given names for later retrieval.\ncharacter | meaning\n----------|--------\n(x) | match x\n(?:x) | match x but don't capture it\n(?P&lt;x&gt;) | match something and give it name x\n(?=x) | match only if string is followed by x\n(?!x) | match only if string is not followed by x", "p = re.compile(r'(?:ARTHUR: )(.+)')\nre.findall(p, document)[0:10]", "Because we are using findall, the regex engine is capturing and returning the normal groups, but not the non-capturing group. For complicated, multi-piece regular expressions, you may need to pull groups out separately. You can do this with names.", "p = re.compile(r'(?P<name>[A-Z ]+)(?::)(?P<line>.+)')\nmatch = re.search(p, document)\nmatch\n\nmatch.group('name'), match.group('line')", "Now let's try a small challenge!\nTo check that you've understood something about regular expressions, we're going to have you do a small test challenge. Partner up with the person next to you - we're going to do this as a pair coding exercise - and choose which computer you are going to use.\nThen, navigate to challenges/03_analysis/ and read through challenge A. When you think you've completed it successfully, run py.test test_A.py .\nTokenizing\nLet's grab Arthur's speech from above, and see what we can learn about Arthur from it.", "p = re.compile(r'(?:ARTHUR: )(.+)')\narthur = ' '.join(re.findall(p, document))\narthur[0:100]", "In our model for natural language, we're interested in words. The document is currently a continuous string of bytes, which isn't ideal. You might be tempted to separate this into words using your newfound regex knowledge:", "p = re.compile(r'\\w+', flags=re.I)\nre.findall(p, arthur)[0:10]", "But this is problematic for languages that make extensive use of punctuation. 
For example, see what happens with:", "re.findall(p, \"It isn't Dav's cheesecake that I'm worried about\")", "The practice of pulling apart a continuous string into units is called \"tokenizing\", and it creates \"tokens\". NLTK, the canonical library for NLP in Python, has a couple of implementations for tokenizing a string into words.", "from nltk import word_tokenize\nword_tokenize(\"It isn't Dav's cheesecake that I'm worried about\")", "The distinction here is subtle, but look at what happened to \"isn't\". It's been separated into \"IS\" and \"N'T\", which is more in keeping with the way contractions work in English.", "tokens = word_tokenize(arthur)\ntokens[0:10]", "At this point, we can start asking questions like what are the most common words, and what words tend to occur together.", "len(tokens), len(set(tokens))", "So we can see right away that Arthur is using the same words a whole bunch - on average, each unique word is used four times. This is typical of natural language. \n\nNot necessarily the value, but that the number of unique words in any corpus increases much more slowly than the total number of words.\nA corpus with 100M tokens, for example, probably only has 100,000 unique tokens in it.\n\nFor more complicated metrics, it's easier to use NLTK's classes and methods.", "from nltk import collocations\nfd = collocations.FreqDist(tokens)\nfd.most_common()[:10]\n\nmeasures = collocations.BigramAssocMeasures()\nc = collocations.BigramCollocationFinder.from_words(tokens)\nc.nbest(measures.pmi, 10)\n\nc.nbest(measures.likelihood_ratio, 10)", "We see here that the collocation finder is pulling out some things that have face validity. When Arthur is talking about peasants, he calls them \"bloody\" more often than not. 
However, collocations like \"Brother Maynard\" and \"BLACK KNIGHT\" are less informative to us, because we know that they are proper names.\nIf you were interested in collocations in particular, what step do you think you would have to take during the tokenizing process?\nStemming\nThis has gotten us as far as identical tokens, but in language processing, it is often the case that the specific form of the word is not as important as the idea to which it refers. For example, if you are trying to identify the topic of a document, counting 'running', 'runs', 'ran', and 'run' as four separate words is not useful. Reducing words to their stems is a process called stemming.\nA popular stemming implementation is the Snowball Stemmer, which is based on the Porter Stemmer. Its algorithm looks at word forms and does things like drop final 's's, 'ed's, and 'ing's.\nJust like the tokenizers, we first have to create a stemmer object with the language we are using.", "snowball = nltk.SnowballStemmer('english')", "Now, we can try stemming some words", "snowball.stem('running')\n\nsnowball.stem('eats')\n\nsnowball.stem('embarassed')", "Snowball is a very fast algorithm, but it has a lot of edge cases. 
In some cases, words with the same stem are reduced to two different stems.", "snowball.stem('cylinder'), snowball.stem('cylindrical')", "In other cases, two different words are reduced to the same stem.\n\nThis is sometimes referred to as a 'collision'", "snowball.stem('vacation'), snowball.stem('vacate')\n\nsnowball.stem('organization'), snowball.stem('organ')\n\nsnowball.stem('iron'), snowball.stem('ironic')\n\nsnowball.stem('vertical'), snowball.stem('vertices')", "A more accurate approach is to use an English word bank like WordNet to call dictionary lookups on word forms, in a process called lemmatization.", "# nltk.download('wordnet')\nwordnet = nltk.WordNetLemmatizer()\n\nwordnet.lemmatize('iron'), wordnet.lemmatize('ironic')\n\nwordnet.lemmatize('vacation'), wordnet.lemmatize('vacate')", "Nothing comes for free, and you've probably noticed already that the lemmatizer is slower. We can see how much slower with one of IPython's magic functions.", "%timeit wordnet.lemmatize('table')\n\n4.45 * 5.12\n\n%timeit snowball.stem('table')", "Time for another small challenge!\nSwitch computers for this one, so that you are using your partner's computer, and try your hand at challenge B!\nSentiment\nFrequently, we are interested in text to learn something about the person who is speaking. One of these things we've talked about already - linguistic diversity. A similar metric was used a couple of years ago to settle the question of who has the largest vocabulary in Hip Hop.\n\nUnsurprisingly, top spots go to Canibus, Aesop Rock, and the Wu Tang Clan. E-40 is also in the top 20, but mostly because he makes up a lot of words; as are OutKast, who print their lyrics with words slurred in the actual typography.\n\nAnother thing we can learn is about how the speaker is feeling, with a process called sentiment analysis. Before we start, be forewarned that this is not a robust method by any stretch of the imagination. 
Sentiment classifiers are often trained on product reviews, which limits their ecological validity.\nWe're going to use TextBlob's built-in sentiment classifier, because it is super easy.", "from textblob import TextBlob\n\nblob = TextBlob(arthur)\n\nfor sentence in blob.sentences[10:25]:\n print(sentence.sentiment.polarity, sentence)", "Semantic distance\nAnother common NLP task is to look for semantic distance between documents. This is used by search engines like Google (along with other things like PageRank) to decide which websites to show you when you search for things like 'bike' versus 'motorcycle'.\nIt is also used to cluster documents into topics, in a process called topic modeling. The math behind this is beyond the scope of this course, but the basic strategy is to represent each document as a one-dimensional array, where the indices correspond to integer ids of tokens in the document. Then, some measure of semantic similarity, like the cosine of the angle between unitized versions of the document vectors, is calculated.\nLuckily for us there is another python library that takes care of the heavy lifting for us.", "from gensim import corpora, models, similarities", "We already have a document for Arthur, but let's grab the text from someone else to compare it with.", "p = re.compile(r'(?:GALAHAD: )(.+)')\ngalahad = ' '.join(re.findall(p, document))\narthur_tokens = tokens\ngalahad_tokens = word_tokenize(galahad)", "Now, we use gensim to create vectors from these tokenized documents:", "dictionary = corpora.Dictionary([arthur_tokens, galahad_tokens])\ncorpus = [dictionary.doc2bow(doc) for doc in [arthur_tokens, galahad_tokens]]\ntfidf = models.TfidfModel(corpus, id2word=dictionary)", "Then, we create matrix models of our corpus and query", "query = tfidf[dictionary.doc2bow(['peasant'])]\nindex = similarities.MatrixSimilarity(tfidf[corpus])", "And finally, we can test our query, \"peasant\" on the two documents in our corpus", 
"list(enumerate(index[query]))", "So we see here that \"peasant\" does not match Galahad very well (a really bad match would have a negative value), and is more similar to the kind of speach output that we see from King Arthur.\nTabular data\nIn data storage, data visualization, inferential statistics, and machine learning, the most common way to pass data between applications is in the form of tables (these are called tabular, structured, or rectangular data). These are convenient in that, when used correctly, they store data in a DRY and easily queryable way, and are also easily turned into matrices for numeric processing.\n\nnote - it is sometimes tempting to refer to N-dimensional matrices as arrays, following the numpy naming convention, but these are not the same as arrays in C++ or Java, which may cause confusion\n\nIt is common in enterprise applications to store tabular data in a SQL database. In the sciences, data is typically passed around as comma separated value files (.csv), which you have already been dealing with over the course of the last two days.\nFor this brief introduction to analyzing tabular data, we'll be using the scipy stack, which includes numpy, pandas, scipy, and \"scikits\" like sk-learn and sk-image.", "import pandas as pd", "You might not have seen this as convention yet. It is just telling python that when we import pandas, we don't want to access it in the namespace as pandas but as pd instead.\nPandas basics\nWe'll start by making a small table to practice on. Tables in pandas are called data frames, so we'll start by making an instance of class DataFrame, and initialize it with some data.\n\nnote - pandas and R use the same name for their tables, but their behavior is often very different", "table = pd.DataFrame({'id': [1,2,3], 'name':['dillon','juan','andrew'], 'age':[47,27,23]})\nprint(table)", "Variables in pandas are represented by a pandas-specific data structure, called a Series. 
You can grab a Series out of a DataFrame by using the slicing operator with the name of the variable that you want to pull.", "table['name'], type(table['name'])", "We could have made each variable a Series, and then put it into the DataFrame object, but it's easier in this instance to pass in a dictionary where the keys are variable names and the values are lists. You can also modify a data frame in place using similar syntax:", "table['fingers'] = [9, 10, None]", "If you try to run that code without the None there, pandas will return an error. In a table (in any language) each column must have the same number of rows.\nWe've entered None, base python's missingness indicator, but pandas is going to swap this out with something else:", "table['fingers']", "You might be tempted to write your own control structures around these missing values (which are variably called NaN, nan, and NA), but this is always a bad idea:", "table['fingers'][2] == None\n\ntable['fingers'][2] == 'NaN'\n\ntype(table['fingers'][2]) == str", "None of this works because the pandas NaN is a subclass of numpy's double precision floating point number. However, for ambiguous reasons, even numpy.nan does not evaluate as being equal to itself.\nTo handle missing data, you'll need to use the pandas method isnull.", "pd.isnull(table['fingers'])", "In the same way that we've been pulling out columns by name, you can pull out rows by index. If I want to grab the first row, I can use:", "table[:1]", "Recall that indices in python start at zero, and that selecting by a range does not include the final value (i.e. [ , )).\nUnlike other software languages (R, I'm looking at you here), row indices in pandas are immutable. 
So, if I rearrange my data, the index also gets shuffled.", "table.sort_values('age')", "Because of this, it's common to set the index to be something like a timestamp or UUID.\nWe can select parts of a DataFrame with conditional statements:", "table[table['age'] < 40]", "Merging tables\nAs you might expect, tables in pandas can also be merged by keys. So, if we make a new dataset that shares an attribute in common:", "other_table = pd.DataFrame({\n 'name':['dav', 'juan', 'dillon'], \n 'languages':['python','python','python']})\n\ntable.merge(other_table, on='name')", "Note that we have done an \"inner join\" here, which means we are only getting the intersection of the two tables. If we want the union, we can specify that we want an outer join:", "table.merge(other_table, on='name', how='outer')", "Or maybe we want all of the data from table, but not other_table", "table.merge(other_table, on='name', how='left')", "Reshaping\nTo make analysis easier, you may have to reshape your data. It's easiest to deal with data when each table meets the following criteria:\n\nEach row is exactly one observation\nEach column is exactly one kind of data\nThe table expresses one and only one relationship between observations and variables\n\nThis kind of format is easy to work with, because:\n\nIt's easy to update when every piece of data exists in one and only one place\nIt's easy to subset conditionally across rows\nIt's easy to test across columns\n\nTo make this more concrete, let's take an example table.\nname | city1 | city2 | population\n-------|-------|-------|-----------\ndillon | williamsburg | berkeley | 110\njuan | berkeley | berkeley | 110\ndav | cambridge | berkeley | 110\nThis table violates all three of the rules above. 
Specifically, it:\n\neach row is about two observations\ntwo columns are about the same kind of data (city), while another datatype (time) has been hidden in the column names\nit expresses the relationship between people and where they live; and, cities and their population\n\nIn this particular example, our data is too wide. If we create that dataframe in pandas", "wide_table = pd.DataFrame({'name' : ['dillon', 'juan', 'dav'],\n 'city1' : ['williamsburg', 'berkeley', 'cambridge'],\n 'city2' : ['berkeley', 'berkeley', 'berkeley'],\n 'population' : [110, 110, 110]\n })\nwide_table", "We can make this longer in pandas using the melt function", "long_table = pd.melt(wide_table, id_vars = ['name'])\nlong_table", "We can make the table wider using the pivot method\n\nside note - this kind of inconsistency between melt and pivot is un-pythonic and should not be emulated", "long_table.pivot(columns='variable')", "WHOA\nOne of the really cool things about pandas is that it allows you to have multiple indexes for rows and columns. Since pandas couldn't figure out what to do with two kinds of value variables, it doubled up our column index. We can fix this by specifying that we only want the 'values' values", "long_table.pivot(columns='variable', values='value')", "Challenge time!\nSwitch computers again so that you are working on the first computer of the day, and have a look at challenge C. This will have you practice reading and merging tables. 
Again, when you are finished, check your work by running py.test test_C in a shell.\nDescriptive statistics\nSingle descriptives have their own method calls in the Series class.", "table['fingers'].mean()\n\ntable['fingers'].std()\n\ntable['fingers'].quantile(.25)\n\ntable['fingers'].kurtosis()", "You can call several of these at once with the describe method", "table.describe()", "Inferential statistics\npandas does not have statistical functions baked in, so we are going to call them from the scipy.stats library and the statsmodels scikit.\nWe are also going to load in an actual dataset, as stats examples aren't very interesting with tiny bits of fake data.", "from scipy import stats\ndata = pd.read_csv('../data/03_feedback.csv')", "Using what you've learned so far about manipulating pandas objects, how would you find out the names of the variables in this dataset? Their datatypes? The distribution of their values?\nComparisons of group means\nA common statistical procedure is to look for differences between groups of values. Typically, the values are grouped by a variable of interest, like sex or age. 
Here, we are going to compare the barriers of access to technology that people experience in the D-Lab compared to the world outside.\nIf you only have two groups in your sample, you can use a t-test:", "i = data['inside.barriers'].dropna()\no = data['outside.barriers'].dropna()\nstats.ttest_ind(i, o)", "Notice that here, we are passing in two whole columns, but we could also be subsetting by some other factor.\nIf you have more than two groups (or levels) that you would like to compare, you'll have to use something like an ANOVA:", "m = data[data.gender == \"Male/Man\"]['outside.barriers'].dropna()\nf = data[data.gender == \"Female/Woman\"]['outside.barriers'].dropna()\nq = data[data.gender == \"Genderqueer/Gender non-conforming\"]['outside.barriers'].dropna()\nstats.f_oneway(m, f, q)", "Linear relationships\nAnother common task is to establish if/how two variables are related across linear space. This could be something, for example, like relating shoe size to height. Here, we are going to ask whether barriers to access to technology inside and outside of the D-Lab are related.\nOne implementation of linear relationships is correlation testing:", "intermediate = data.dropna(subset=['inside.barriers', 'outside.barriers'])\nstats.pearsonr(intermediate['outside.barriers'], intermediate['inside.barriers'])", "At this point, we're going to pivot to using statsmodels", "import statsmodels.formula.api as smf", "The formulas module in statsmodels lets us work with pandas dataframes, and linear model specifications that are similar to R and other variants of statistical software, e.g.:\noutcome ~ var1 + var2", "model_1 = smf.ols(\"inside_barriers ~ outside_barriers\", data=data).fit()\nmodel_1", "To get a summary of the test results, call the model's summary method", "model_1.summary()", "Since Python does not have private data or hidden attributes, you can pull out just about any intermediate information you want, including coefficients, residuals, and 
eigenvalues\n\nRaymond Hettinger would say that Python is a \"consenting adult language\"", "model_1.params['outside_barriers']", "statsmodels also exposes methods for validity checking your regressions, like looking for outliers by influence statistics", "model_1.get_influence().summary_frame()", "If, at this stage, you suspect that one or more outliers is unduly influencing your model fit, you can transform your results into robust OLS with a method call:", "model_1.get_robustcov_results().summary()", "This isn't very different, so we're probably okay.\nIf you want to add more predictors to your model, you can do so inside the function string:", "smf.ols(\"inside_barriers ~ outside_barriers + gender\", data=data).fit().summary()", "Note that our categorical/factor variable has been automatically one-hot encoded as treatment conditions. There's no way to change this within statsmodels, but you can specify your contrasts indirectly using a library called [Patsy](http://statsmodels.sourceforge.net/stable/contrasts.html).\nTo add interactions to your model, you can use :, or * [for full factorial]", "smf.ols(\"inside_barriers ~ outside_barriers * gender\", data=data).fit().summary()", "Practice\nIn the time remaining, pull up a dataset that you have, and that you'd like to work with in Python. The instructors will be around to help you apply what you've learned today to problems in your data that you are dealing with.\nIf you don't have data of your own, you should practice with the test data we've given you here. For example, you could try to figure out:\n\nIs King Arthur happier than Sir Robin, based on his speech?\nWhich character in Monty Python has the biggest vocabulary?\nDo different departments have the same gender ratios?\nWhat variable in this dataset is the best predictor for how useful people find our workshops to be?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/bigquery-notebooks
notebooks/official/template_notebooks/getting_started_bigquery_ML.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Getting started with BigQuery ML\nBigQuery ML enables users to create and execute machine learning models in BigQuery using SQL queries. The goal is to democratize machine learning by enabling SQL practitioners to build models using their existing tools and to increase development speed by eliminating the need for data movement.\nIn this tutorial, you use the Google Analytics sample dataset for BigQuery to create a model that predicts whether a website visitor will make a transaction. For information on the schema of the Analytics dataset, see BigQuery export schema in the Google Analytics Help Center.\nObjectives\nIn this tutorial, you use:\n\nBigQuery ML to create a binary logistic regression model using the CREATE MODEL statement\nThe ML.EVALUATE function to evaluate the ML model\nThe ML.PREDICT function to make predictions using the ML model\n\nCreate your dataset\nEnter the following code to import the BigQuery Python client library and initialize a client. The BigQuery client is used to send and receive messages from the BigQuery API.", "from google.cloud import bigquery\n\nclient = bigquery.Client(location=\"US\")", "Next, you create a BigQuery dataset to store your ML model.
Run the following to create your dataset:", "dataset = client.create_dataset(\"bqml_tutorial\")", "Create your model\nNext, you create a logistic regression model using the Google Analytics sample\ndataset for BigQuery. The model is used to predict whether a\nwebsite visitor will make a transaction. The standard SQL query uses a\nCREATE MODEL statement to create and train the model. Standard SQL is the\ndefault query syntax for the BigQuery python client library.\nThe BigQuery python client library provides a cell magic,\n%%bigquery, which runs a SQL query and returns the results as a Pandas\nDataFrame.\nTo run the CREATE MODEL query to create and train your model:", "%%bigquery\nCREATE OR REPLACE MODEL `bqml_tutorial.sample_model`\nOPTIONS(model_type='logistic_reg') AS\nSELECT\n IF(totals.transactions IS NULL, 0, 1) AS label,\n IFNULL(device.operatingSystem, \"\") AS os,\n device.isMobile AS is_mobile,\n IFNULL(geoNetwork.country, \"\") AS country,\n IFNULL(totals.pageviews, 0) AS pageviews\nFROM\n `bigquery-public-data.google_analytics_sample.ga_sessions_*`\nWHERE\n _TABLE_SUFFIX BETWEEN '20160801' AND '20170630'", "The query takes several minutes to complete. After the first iteration is\ncomplete, your model (sample_model) appears in the navigation panel of the\nBigQuery web UI. Because the query uses a CREATE MODEL statement to create a\ntable, you do not see query results. The output is an empty DataFrame.\nGet training statistics\nTo see the results of the model training, you can use the\nML.TRAINING_INFO\nfunction, or you can view the statistics in the BigQuery web UI. This functionality\nis not currently available in the BigQuery Classic web UI.\nIn this tutorial, you use the ML.TRAINING_INFO function.\nA machine learning algorithm builds a model by examining many examples and\nattempting to find a model that minimizes loss. 
This process is called empirical\nrisk minimization.\nLoss is the penalty for a bad prediction &mdash; a number indicating\nhow bad the model's prediction was on a single example. If the model's\nprediction is perfect, the loss is zero; otherwise, the loss is greater. The\ngoal of training a model is to find a set of weights that have low\nloss, on average, across all examples.\nTo see the model training statistics that were generated when you ran the\nCREATE MODEL query:", "%%bigquery\nSELECT\n *\nFROM\n ML.TRAINING_INFO(MODEL `bqml_tutorial.sample_model`)", "Note: Typically, it is not a best practice to use a SELECT * query. Because the model output is a small table, this query does not process a large amount of data. As a result, the cost is minimal.\nWhen the query is complete, the results appear below the query. The results should look like the following:\n\nThe loss column represents the loss metric calculated after the given iteration\non the training dataset. Since you performed a logistic regression, this column\nis the log loss.\nThe eval_loss column is the same loss metric calculated on\nthe holdout dataset (data that is held back from training to validate the model).\nFor more details on the ML.TRAINING_INFO function, see the\nBigQuery ML syntax reference.\nEvaluate your model\nAfter creating your model, you evaluate the performance of the classifier using\nthe ML.EVALUATE\nfunction. You can also use the ML.ROC_CURVE\nfunction for logistic regression specific metrics.\nA classifier is one of a set of enumerated target values for a label. For\nexample, in this tutorial you are using a binary classification model that\ndetects transactions. 
The two classes are the values in the label column:\n0 (no transaction) and 1 (transaction made).\nTo run the ML.EVALUATE query that evaluates the model:", "%%bigquery\nSELECT\n *\nFROM ML.EVALUATE(MODEL `bqml_tutorial.sample_model`, (\n SELECT\n IF(totals.transactions IS NULL, 0, 1) AS label,\n IFNULL(device.operatingSystem, \"\") AS os,\n device.isMobile AS is_mobile,\n IFNULL(geoNetwork.country, \"\") AS country,\n IFNULL(totals.pageviews, 0) AS pageviews\n FROM\n `bigquery-public-data.google_analytics_sample.ga_sessions_*`\n WHERE\n _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))", "When the query is complete, the results appear below the query. The\nresults should look like the following:\n\nBecause you performed a logistic regression, the results include the following\ncolumns:\n\nprecision\nrecall\naccuracy\nf1_score\nlog_loss\nroc_auc\n\nUse your model to predict outcomes\nNow that you have evaluated your model, the next step is to use it to predict\noutcomes. You use your model to predict the number of transactions made by\nwebsite visitors from each country. And you use it to predict purchases per user.\nTo run the query that uses the model to predict the number of transactions:", "%%bigquery\nSELECT\n country,\n SUM(predicted_label) as total_predicted_purchases\nFROM ML.PREDICT(MODEL `bqml_tutorial.sample_model`, (\n SELECT\n IFNULL(device.operatingSystem, \"\") AS os,\n device.isMobile AS is_mobile,\n IFNULL(totals.pageviews, 0) AS pageviews,\n IFNULL(geoNetwork.country, \"\") AS country\n FROM\n `bigquery-public-data.google_analytics_sample.ga_sessions_*`\n WHERE\n _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))\n GROUP BY country\n ORDER BY total_predicted_purchases DESC\n LIMIT 10", "When the query is complete, the results appear below the query. The\nresults should look like the following.
Because model training is not\ndeterministic, your results may differ.\n\nIn the next example, you try to predict the number of transactions each website\nvisitor will make. This query is identical to the previous query except for the\nGROUP BY clause. Here the GROUP BY clause &mdash; GROUP BY fullVisitorId\n&mdash; is used to group the results by visitor ID.\nTo run the query that predicts purchases per user:", "%%bigquery\nSELECT\n fullVisitorId,\n SUM(predicted_label) as total_predicted_purchases\nFROM ML.PREDICT(MODEL `bqml_tutorial.sample_model`, (\n SELECT\n IFNULL(device.operatingSystem, \"\") AS os,\n device.isMobile AS is_mobile,\n IFNULL(totals.pageviews, 0) AS pageviews,\n IFNULL(geoNetwork.country, \"\") AS country,\n fullVisitorId\n FROM\n `bigquery-public-data.google_analytics_sample.ga_sessions_*`\n WHERE\n _TABLE_SUFFIX BETWEEN '20170701' AND '20170801'))\n GROUP BY fullVisitorId\n ORDER BY total_predicted_purchases DESC\n LIMIT 10", "When the query is complete, the results appear below the query. The\nresults should look like the following:\n\nCleaning up\nTo delete the resources created by this tutorial, execute the following code to delete the dataset and its contents:", "client.delete_dataset(dataset, delete_contents=True)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jhconning/Dev-II
notebooks/incentives_corruption.ipynb
bsd-3-clause
[ "Notes on Incentive Contracts and Corruption\nMany public policy issues in developing countries can be understood and modeled as asymmetric information problems where Principal - Monitor - Agent contract structures must be employed. \nConsider for example a central government that launches a project either to improve rural roads (as in Olken, 2007) or perhaps instead a project to carry out a campaign to distribute anti-malaria mosquito nets. \nFor either type of project the government will make funds available to local communities across the country but must rely on local agents (e.g. local contractors or officials) to implement the projects. The government may have a very good idea of how much it costs to repair a kilometer of road or make and distribute 1000 mosquito nets but typically cannot monitor whether the local agents 'diligently' carry out the project to specification or not. The government will in the end see if the project has succeeded or failed, and the project is much more likely to succeed if the local contractor has carried out the contract to specification, but the government cannot directly verify whether this has been the case or not. This moral hazard situation means that the government can only put local agents on outcome-contingent and not action-contingent contracts. \nFor example after the local road repair project is completed the government will be able to see if the project succeeded (e.g. the bridge was not washed away this year when the river floods) or whether it failed (it did wash away).
But the government cannot directly observe the agent's level of diligence in the project -- for example whether they built the bridge to specification with the allocated funds or perhaps substituted shoddier building materials to divert funds to private gain -- because the project's outcome is probabilistic.\nSimilarly in an anti-malarial campaign the local agent might claim to have made and distributed 1000 mosquito nets when in fact they only made and distributed 500. All the government can do is observe whether the number of malaria cases has been reduced (success) or increased or stayed the same compared to earlier years (failure). \nLet's model this principal-agent problem using the same simple model we used to describe tenancy and credit contracts. \nThe government advances $I$ in funds to an agent to purchase materials to carry out the project. If the agent diligently carries out the project to specification the project succeeds with probability $p$ in which case the community receives benefit $X_s$ or it fails with probability $1-p$ in which case the community receives benefits of only $X_f<X_s$. \nIf the agent is not diligent they divert project funds and/or put in less effort, all of which allows them to capture private benefits $B$. Furthermore when the agent is non-diligent the project succeeds with only probability $q$ (in which case the community again receives benefit $X_s$) or it fails with probability $1-q$ (in which case the community receives benefits of only $X_f<X_s$). \nSince the government can only observe project outcomes and not the choice of diligence the principal can only get the agent to be diligent by offering a contract that rewards the agent sufficiently more for project successes than for failures, in order to make the agent want to raise the probability of success on their own via their choice of diligence.
\nThe government aims to maximize the expected value of community benefits minus the cost of the investment funds and the cost of remunerating the agent:\n$$\max_{c_s, c_f} p (X_s - c_s) + (1-p) (X_f - c_f) - I$$\nsubject to the agent's participation (PC) constraint:\n$$ p c_s + (1-p) c_f \geq \bar u $$\nWithout loss of generality in what follows we will set $\bar u = 0$, so in ideal circumstances the government would be able to hire the agent away from their next best opportunity by paying them an amount normalized to zero. \nand an incentive compatibility (IC) constraint:\n$$ p c_s + (1-p) c_f \geq q c_s + (1-q) c_f + \bar B $$ \nNote that we will at times write this problem compactly as:\n$$\max_{c_s,c_f} E(X|p) - E(c|p) - I$$\ns.t.\n$$E(c|p) \geq \bar u$$\n$$E(c|p) \geq E(c|q) + \bar B$$\nThe IC constraint can be rewritten:\n$$ c_s \geq c_f + \frac{\bar B}{p-q} $$ \nThis can be satisfied at minimum cost when this constraint binds. This tells us that in the event of project success the agent must receive a 'bonus' of $\frac{\bar B}{p-q}$ over what they get paid for failure outcomes. This higher reward for success compared to failure is what induces the agent to want to be diligent and increase the probability of success from $q$ to $p$. The contractual cost of this remuneration strategy is then $p c_s + (1-p) c_f$ or:\n$$E(c|p) = c_f + p \frac{\bar B}{\Delta}$$\nwhere $\Delta = p-q$, which then means that the expected net benefit of the government project is:\n$$E(X|p) -I - c_f - p \frac{\bar B}{\Delta}$$ \nNote that we earlier normalized the agent's next best employment opportunity to a remuneration of zero.
If the government could get local agents to competitively bid against each other for the government contract the agent's participation constraint could be made to bind, but this in turn would require:\n$$c_f = - p \frac{\bar B}{\Delta}+\bar u$$\n$$c_s = (1-p) \frac{\bar B}{\Delta}+\bar u$$\nOne way to think of this is that the agent is made to pay a fine of $p \frac{\bar B}{\Delta}$ when the project fails while if the project succeeds she earns a reward of $(1-p) \frac{\bar B}{\Delta}$.\nA possible problem with this type of project is that it may be difficult for the government to impose a penalty on agents when the project fails (e.g. the local contractor leaves town when the bridge collapses or the incidence of malaria cases surges). One way to try to resolve that problem is by asking local contractors to post a bond but this solution may be hard to implement particularly in poor communities where the agents are poor to start with. \nThe consequence of not being able to impose a fine when the project fails is that we have to now impose yet another constraint on the contract design problem, a limited liability constraint of the form\n$$c_f \geq 0$$\nfor example if the heaviest fine that can be imposed is to pay the local agent nothing when the project fails. The lowest cost way to remunerate the agent will be for this limited liability constraint and the incentive compatibility constraint to bind (to set the punishment as high as possible and the bonus as low as possible, compatible with maintaining incentives). With $c_f = 0$ an extra bonus must now be paid following success outcomes to continue to satisfy the incentive constraint.
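The competitive-bidding contract derived above can be checked numerically. This is just a sketch; the parameter values below are illustrative assumptions, not part of the derivation:

```python
# Numerical check of the competitive-bidding contract (illustrative values).
p, q = 0.95, 0.50      # success probability: diligent vs. non-diligent
B, ubar = 10.0, 0.0    # private benefit of non-diligence; reservation utility
bonus = B / (p - q)            # success premium required by the IC constraint
cf = -p * bonus + ubar         # payment after failure (a fine)
cs = (1 - p) * bonus + ubar    # payment after success (a reward)
# The participation constraint binds: expected pay equals ubar.
assert abs(p * cs + (1 - p) * cf - ubar) < 1e-9
```

With these numbers the fine is large and the success reward small, yet in expectation the agent is held exactly to their reservation utility.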
But this increases the expected cost of remuneration and reduces expected benefits from the project to:\n$$E(X|p) - I - p \frac{\bar B}{\Delta}$$ \nThe last term $p\frac{\bar B}{\Delta}$ is sometimes referred to as an 'information rent' that must be paid to the agent and that arises due to the asymmetric information problem.\nAn Example", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom ipywidgets import interact, fixed\n\ndef E(xs,xf,p):\n \"\"\"Expectation operator \"\"\"\n return p*xs + (1-p)*xf", "Consider a project with the following characteristics:", "I = 75 # Lump sum investment to start project\nXs = 100 # project success return\nXf = 0 # project failure return\np = 0.95 # probability of success when diligent\nq = 0.50 # probability of success when non-diligent\nEX = E(Xs,Xf,p) # Gross expected project return\nubar = 5 # Consumer reservation income \nB = 10 # private benefits to being non-diligent\n\nprint('Expected returns Diligent (p): {}, Non-diligent (q): {}'.format(E(Xs,Xf,p), E(Xs,Xf,q)))", "This project fails only 5 percent of the time when the agent is diligent but fails 50 percent of the time when they are non-diligent (corrupt). We associate non-diligence with an opportunity to divert $\bar B$ in funds to private uses.", "B = 10", "As derived above the optimal remuneration contract calls for the agent to pay a big fine for failure and earn a positive reward for success:", "cf = -p*B/(p-q) + ubar\ncs = (1-p)*B/(p-q) + ubar\n\nprint('(c_f, c_s) = ({:5.1f}, {:5.1f})'.format(cf, cs))\n\nprint('agent and principal expected payoffs:')\nE(cs,cf,p), E(Xs-cs, Xf-cf,p) - I", "In expectation this covers the agent's opportunity cost of funds $\bar u$.
Since the incentive compatibility constraint is met (by construction), she chooses to be diligent.\nDiagram", "def zeroprofit(c):\n return EX/p -((1-p)/p)*c - I/p\n\ndef IC(c):\n return c + B/(p-q)\n\ndef BPC(c,ubar):\n return ubar/p - ((1-p)/p)*c\n\ncf_min, cf_max = -25,10\nc = np.arange(cf_min, cf_max)\n\nax = plt.subplot(111)\nax.plot(c,zeroprofit(c), 'k--',label='zero $\Pi$')\nax.plot(c,IC(c), label='IC')\nax.plot(c,BPC(c,ubar), label='PC',color='b')\nax.plot(cf,cs,marker='o')\nax.legend(loc='lower right')\nax.set_xlabel('$c_f$'), ax.set_ylabel('$c_s$')\nax.axvline(0, color='k')\nax.set_ylim(0,25)\nax.spines['right'].set_visible(False)\nax.spines['top'].set_visible(False)\nax.yaxis.set_ticks_position('left')\nax.xaxis.set_ticks_position('bottom')", "The principal extracts", "EX - I - ubar", "Under this contract the agent will be diligent even though they cannot be observed. Had they been non-diligent, the principal's expected payoff would instead have been:", "q*(Xs- cs) + (1-q)*(Xf-cf) - I", "Limited Liability constraints\nIn the example above the agent is asked to pay the principal in the event of failure ($c_f <0$). Suppose however that the agent cannot be made to pay the fine (e.g. they cannot post a bond and can run away before paying the fine). \nSuppose in fact that the worst fine we can impose is that they pay $c_f = -5$. \nWhen that is the case the cheapest way to satisfy the incentive compatibility constraint is to set:", "cf = -5\ncs = -5 + B/(p-q)", "Which then means that in expectation the agent earns an information rent of", "E(cs,cf,p)", "This is above their reservation utility $\bar u$ and this contract is therefore quite a bit more expensive to the principal.
Their net return is now\n$$E(X|p) - I - E(c|p)$$\n$$= E(X|p) - I + 5 - p\frac{\bar B}{p-q}$$\nAnd the net benefits to the project are greatly reduced to:", "EX - I - E(cs,cf,p)", "The net benefits have been reduced by the heavy cost of the incentive contract needed to keep the agent diligent (non-corrupt).\nBut this is still better than allowing the agent to be corrupt.", "q*(Xs-cs)+(1-q)*(Xf-cf) - I ", "As we can see from the diagram above the principal can contract with agents who face limited liability, but they earn less from agents where the LL constraint binds. The limited liability constraint means the agent must earn a rent in excess of their reservation utility. \nSuppose the most we can take away from the agent is an amount $A$, equal for example to the amount of resources that can be seized or that they posted as bond or collateral. \nThe cheapest way to satisfy the IC is then:\n$c_f = -A$\n$c_s = -A + B/(p-q)$\nWhich implies the agent's expected repayment is:\n$$E(c|p) = - A + \frac{p B}{p-q}$$\nwhich will be more than their reservation wage $\bar u$ as long as $A < \frac{p B}{p-q} - \bar u$\nMinimum collateral requirement\nWhat is the minimum collateral requirement below which the contract cannot both satisfy the incentive compatibility constraint and guarantee at least zero profits to the principal?\nSubstitute the expected repayment under limited liability (above) into the principal's zero profit condition and solve for $A$ (on the diagram above this is the $c_f$ at the intersection of the IC constraint and the principal's zero profit condition):\n$E(X|p) - E(c|p) - I = 0$\n$E(X|p) + A - \frac{p \bar B}{p-q} - I = 0$\n$$\underline{A} = \frac{p B}{p-q} - [E(X|p) - I] $$\nFor our running example this minimum collateral requirement is:", "Amc = p*B/(p-q) - (EX - I)\nAmc", "This is an important expression.
This tells us that unless the agent can post a minimum bond or collateral of this amount then the principal cannot provide them with strong enough incentives for them to be diligent and still allow the principal to break even on the transaction. \nThe take-away lesson is that sometimes in asymmetric information situations one has to pay employees a rent (expected payment in excess of their next best option) in order to motivate their behavior. It also means however that if the principal (employer, lender, etc.) has a choice of agent to deal with they will prefer to deal with those who can post collateral.\nMonitoring by an intermediary\nSuppose an intermediary can 'monitor' the project. By expending resources $m$ the monitor can reduce the agent's private benefits from non-diligence from $\bar B$ to $\bar B(m) < \bar B$.\nFor example the intermediary might visit the agent at random times to check up on the progress of the project. This does not completely eliminate the scope for corruption but limits how much can be privately captured (perhaps because the agent now has to spend additional resources hiding her diversions of effort and funds).\nThe obvious advantage of this is that it reduces the size of the information rent to:\n$$p\frac{\bar B(m)}{\Delta}$$\nAnd this in turn will reduce the total cost of remunerating the agent. Intuitively, since the private benefit that can be captured has been directly reduced by monitoring, the contract does not have to rely so much on costly bonus payments to motivate diligence.\nNow of course the Principal will have to pay the intermediary to compensate them for their expense $m$ and this will add to the cost. But so long as this extra cost is smaller than the reduction in the cost of remunerating the agent, net project benefits will improve.
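For intuition, a quick numerical sketch (the parameter values are illustrative assumptions, matching the running example): because the information rent scales one-for-one with the capturable private benefit, monitoring that halves $\bar B$ halves the rent, and is worthwhile whenever its cost falls short of that saving.

```python
# Sketch: the information rent p*B(m)/(p - q) falls one-for-one with B(m).
# Illustrative numbers (assumed): monitoring halves the capturable benefit.
p, q, B = 0.95, 0.50, 10.0
D = p - q
rent_unmonitored = p * B / D       # rent when B(m) = B
rent_halved = p * (B / 2) / D      # rent when monitoring halves B(m)
saving = rent_unmonitored - rent_halved
# Monitoring is worthwhile whenever its cost m is below this saving.
```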
\nUnder the assumption that the Principal can specify and observe the intermediary's monitoring effort the net benefits from the project will now be:\n$$E(X|p) - I - p \frac{\bar B(m)}{\Delta} - m$$ \nTo take a concrete example suppose that the 'monitoring function' were given by: \n$$\bar B(m) = \frac{\bar B}{1+m}$$\nthen the net benefits from the project would look as follows as a function of $m$:", "D = p - q # Delta, the incentive wedge\nm = np.linspace(0,10,20)\nplt.plot(m, EX - I - p*(B/(1+m))/D - m)", "Which shows that over a range, monitoring by the intermediary lowers the information rent that must be left with the agent faster than the cost of monitoring but eventually diminishing returns to this activity kick in (at somewhat less than 4 units of monitoring).\nWho monitors the monitor? Two-layered moral hazard\nMore likely the principal cannot directly contract on the intermediary's level of monitoring. The intermediary is supposed to spend resources $m$ to monitor the agent but if the government has no way to directly verify if this is happening or not, the intermediary may well be tempted to monitor at expense of zero but claim that it has monitored at expense $m$.\nThe only way for the government to avoid this from happening is to also put the intermediary on an incentive contract. The way to do this is to make the intermediary share in the agent's successes and failures.
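To see why a flat payment cannot work, here is a minimal sketch (with assumed illustrative numbers, not values derived in the text): a wage that does not depend on the outcome makes monitoring a pure cost to the intermediary, while a success-contingent bonus restores incentives.

```python
# Sketch: flat pay vs. success-contingent pay for the intermediary monitor.
# Illustrative numbers (assumed): p, q as in the running example, m = 2.
p, q, m = 0.95, 0.50, 2.0
w_flat = 5.0
# Under a flat wage, monitoring only subtracts its cost, so the monitor shirks:
assert w_flat - m < w_flat
# A success bonus of m/(p - q), with zero pay after failure, makes the
# monitor's incentive constraint bind:
ws = m / (p - q)
assert abs((p * ws - m) - q * ws) < 1e-9
```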
\nThe Principal's contract design problem is to choose remuneration packages $(c_s,c_f)$ and $(w_s, w_f)$ to maximize:\n$$ E(X|p) - E(c|p) - E(w|p) - I $$\nsubject to participation constraints for both the agent and the intermediary\n$$ E(c|p) \geq 0$$ \n$$ E(w|p) \geq 0$$ \nthe (now modified) incentive compatibility constraint for the agent:\n$$E(c|p) \geq E(c|q) + \bar B(m)$$\nand an incentive compatibility constraint for the intermediary monitor:\n$$E(w|p) \geq E(w|q) + m$$\nAs was the case with the agent the cost of providing incentives to the intermediary monitor will depend on whether the intermediary can be made to lose money when the project fails or not. \nLet's first consider the case where it cannot. In that event the intermediary is paid $0$ when the project fails and \n$$w_s = \frac{m}{\Delta}$$\nwhen the project succeeds. Note this is very much like the expression we derived for the bonus that had to be paid to the agent. The expected cost of this intermediary remuneration contract (when $w_f =0$) is then:\n$$E(w|p) = p \frac{m}{\Delta}$$\nwhich is always larger than $m$ so long as $q > 0$. This means that the intermediary will also earn an information rent equal to \n$$E(w|p) - m = p \frac{m}{\Delta} - m > 0$$\nsince the monitor has to pay expenses $m$ while monitoring.\nIf on the other hand we can assume that intermediaries can be made to bear liability for the projects that they monitor and that fail then this rent can be eliminated. Consider the case of competitive bidding for the intermediary monitor job. Different firms will compete to offer their services until the expected return to being an intermediary monitor is equal to what they could earn in their next best occupation which we assume to be zero.
Then\n$$E(w|p) = m $$\nwhich implies\n$$w_f + p \frac{m}{\Delta} = m $$\nor \n$$ w_f = -p \frac{m}{\Delta} + m$$\n(which then implies $w_s = (1-p)\frac{m}{\Delta} +m $)\nOne way to think of this is that the principal asks the intermediary to put up a portion \n$$I_m = p \frac{m}{\Delta} - m$$\nof the total cost $I$ of the project while the uninformed principal puts up the remainder \n$$I_u = I - I_m$$\nThen if the project fails the intermediary loses $I_m + m$ (their investment and their monitoring cost). If the project succeeds the intermediary pockets, net of their monitoring cost:\n$$w_s - m = (1-p) \frac{m}{\Delta}$$\nFor a zero profit expected return. \nIn this last competitive scenario the cost to the Principal of adding the intermediary to the contract is just the monitoring cost $m$ that must be compensated and not the larger expected payment $p\frac{m}{\Delta}$ that we saw in the non-competitive case. \nThe diagram below depicts net benefits under the competitive (solid line) and non-competitive (dashed line) scenarios:", "D = p - q # Delta, the incentive wedge\nplt.plot(m, EX - I - p*(B/(1+m))/D - m)\nplt.plot(m, EX - I - p*(B/(1+m))/D - p*m/D,'r--')", "In the competitive scenario more monitoring is employed but this monitoring is effective at bringing down the total cost of implementation, leaving more net project benefits. When the market for intermediaries is not competitive (and/or intermediaries are subject to limited liability constraints themselves) then monitoring still works (given our parameter assumptions in this example) but more of the project returns must be used for costly contractual bonus payments to motivate both the agent and the intermediary. Less monitoring will be employed and the net project returns will be reduced.\nKeep in mind that net project returns could be even larger if agents themselves could be made to post bond (credibly pay fines) in the event of project failure.
Then no monitoring at all would be required and the agent could be paid the reservation wage (of zero in this example) and maximum project returns of $E(X|p) - I$ or", "EX - I", "could be achieved.\nExtensions\nThis is the type of model that made Jean-Jacques Laffont and Jean Tirole famous. See for example their book on A Theory of Incentives in Procurement and Regulation. Jean Tirole powers much of his other important book *Modern Corporate Finance* with similar double-moral hazard models as well (and come to think of it his famous The Theory of Industrial Organization features this type of structure as well). Many problems of fiscal federalism with Central, State and Local governments can also be modeled this way.\nHolmstrom and Tirole (1993) and Conning (1997) working with variations on an idea first introduced by Diamond (1984) note that if the intermediary monitors several independent (or at least not perfectly correlated) projects then the problem is like a multi-task principal - agent problem (except here between the principal and the intermediary) and it turns out that one can reduce the money that the intermediary has to put at risk in the event of failure by making their 'bonus' reward depend on success across several projects. For example if the intermediary is monitoring two agents with independent projects it may be cheaper to pay zero or low compensation to the intermediary when either or both projects fail but a healthy bonus if they both succeed, than to structure the two remuneration packages separately.\nA variation on the above idea is also one way to understand joint liability contracts and 'peer monitoring' although in this case the problem is a bit more tricky. For example suppose that we had two projects and two agents and we engaged each agent to work on their own project (say building bridges as described above) and also as a monitor of the other agent.
The challenge is then to design a contract that induces the two agents to choose effort on their bridge building and monitoring of the other agent strategically. This can be solved as a mechanism design problem: the principal determines the terms of a joint liability contract to implement the desired levels of monitoring and project effort as the solution to a Nash game played between the agents, guarding against the possibility that the agents might collude with each other against the principal. \nDepending on the assumptions one makes about monitoring technologies and credible enforcement within groups, one can derive several interesting results that are relevant to understanding group versus individual liability loans as well as questions as to whether it might be preferable to monitor projects subject to corruption with 'community' monitoring or outside monitoring. \nLocal community agents may have better information or monitoring technologies which might appear to make them better monitors, but they may have fewer resources to put up as bond and/or they may be more likely to 'collude' against the outside principal." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
h-mayorquin/time_series_basic
presentations/2016-01-26(Wall-Street-Letters-Support-Vector-Machines-Study).ipynb
bsd-3-clause
[ "Support Vector Machines Study\nThis notebook will be used to study how well Support Vector Machines perform in the easily created representation provided by SLM.", "import numpy as np\nimport h5py\nfrom sklearn import svm, cross_validation, preprocessing\n\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nimport seaborn as sns\n%matplotlib inline", "Now we load the data", "# First we load the file \nfile_location = '../results_database/text_wall_street_big.hdf5'\nrun_name = '/low-resolution'\nf = h5py.File(file_location, 'r')\n\n\n# Now we need to get the letters and align them\ntext_directory = '../data/wall_street_letters.npy'\nletters_sequence = np.load(text_directory)\nNletters = len(letters_sequence)\nsymbols = set(letters_sequence)\n\n# Nexa parameters\nNspatial_clusters = 5\nNtime_clusters = 15\nNembedding = 3\n\nparameters_string = '/' + str(Nspatial_clusters)\nparameters_string += '-' + str(Ntime_clusters)\nparameters_string += '-' + str(Nembedding)\n\nnexa = f[run_name + parameters_string]\n\n# We need to extract the SLM\nSLM = np.array(f[run_name]['SLM'])", "Amount of data required for good predictions.\nFirst let's see how much data we actually require to make good predictions using both a linear and an RBF kernel for a support vector machine.", "number_of_data = np.logspace(2, 3.5, 10, dtype='int')\ndelay = 5\ncache_size = 5000\n\naccuracy_linear = []\naccuracy_rbf = []\naccuracy_linear_std = []\naccuracy_rbf_std = []\n\nfor N in number_of_data:\n    # Standardized\n    X = SLM[:,:(N - delay)].T\n    y = letters_sequence[delay:N]\n\n    # We now scale X\n    X = preprocessing.scale(X)\n    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)\n\n    clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')\n    clf_linear.fit(X_train, y_train)\n    score = clf_linear.score(X_test, y_test) * 100.0\n    accuracy_linear_std.append(score)\n\n    clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')\n    
clf_rbf.fit(X_train, y_train)\n    score = clf_rbf.score(X_test, y_test) * 100.0\n    accuracy_rbf_std.append(score)\n    \n    # Not standardized\n    X = SLM[:,:(N - delay)].T\n    y = letters_sequence[delay:N]\n\n    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)\n\n    clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')\n    clf_linear.fit(X_train, y_train)\n    score = clf_linear.score(X_test, y_test) * 100.0\n    accuracy_linear.append(score)\n\n    clf_rbf = svm.SVC(C=1.0, cache_size=cache_size, kernel='rbf')\n    clf_rbf.fit(X_train, y_train)\n    score = clf_rbf.score(X_test, y_test) * 100.0\n    accuracy_rbf.append(score)\n    \n    print(N)", "Plot it", "gs = gridspec.GridSpec(1, 2)\nfig = plt.figure(figsize=(12, 9))\n\nax1 = fig.add_subplot(gs[0, 0])\nax1.plot(number_of_data, accuracy_linear, 'o-', lw=2, markersize=10, label='linear')\nax1.plot(number_of_data, accuracy_rbf, 'o-', lw=2, markersize=10, label='rbf')\nax1.set_xlabel('Total Data')\nax1.set_ylabel('Accuracy %')\nax1.set_title('Accuracy vs Amount of Data')\nax1.set_ylim([0, 110])\nax1.set_xscale('log')\nax1.legend()\n\nax2 = fig.add_subplot(gs[0, 1])\nax2.plot(number_of_data, accuracy_linear_std, 'o-', lw=2, markersize=10, label='linear')\nax2.plot(number_of_data, accuracy_rbf_std, 'o-', lw=2, markersize=10, label='rbf')\nax2.set_xlabel('Total Data')\nax2.set_ylabel('Accuracy %')\n\nax2.set_title('Accuracy vs Amount of Data (Normalized)')\nax2.set_ylim([0, 110])\nax2.set_xscale('log')\nax2.legend()\n", "Latency analysis for SVMs", "delays = np.arange(0, 10)\nN = 1000\naccuracy_lattency = []\n\nfor delay in delays:\n    # Standardized\n    print(delay)\n    X = SLM[:,:(N - delay)].T\n    y = letters_sequence[delay:N]\n\n    # We now scale X\n    X = preprocessing.scale(X)\n    X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.10)\n\n    clf_linear = svm.SVC(C=1.0, cache_size=cache_size, kernel='linear')\n    clf_linear.fit(X_train, y_train)\n    score = clf_linear.score(X_test, 
y_test) * 100.0\n accuracy_lattency.append(score)", "Plot it", "plt.plot(delays, accuracy_lattency, 'o-', lw=2, markersize=10., label='Accuracy')\nplt.xlabel('Delays')\nplt.ylim([0, 105])\nplt.xlim([-0.5, 10])\nplt.ylabel('Accuracy %')\nplt.title('Delays vs Accuracy')\nfig = plt.gcf()\nfig.set_size_inches((12, 9))\nplt.legend()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
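One caveat for re-running the notebook above: the `sklearn.cross_validation` module it imports was removed in scikit-learn 0.20. A minimal sketch of the same delay-aligned split and linear-SVM fit with the current `sklearn.model_selection` API (synthetic random arrays stand in for the SLM matrix and the letter sequence, which are not reproduced here, so the accuracy is meaningless):

```python
import numpy as np
from sklearn.model_selection import train_test_split  # replaces sklearn.cross_validation
from sklearn.preprocessing import scale
from sklearn.svm import SVC

rng = np.random.RandomState(0)
SLM = rng.rand(20, 500)                            # synthetic stand-in for the SLM features
letters_sequence = rng.choice(list('abc'), 500)    # synthetic stand-in for the letter labels

N, delay = 400, 5
X = scale(SLM[:, :(N - delay)].T)   # features at time t, standardized column-wise
y = letters_sequence[delay:N]       # the letter observed `delay` steps later

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=0)
clf = SVC(C=1.0, kernel='linear').fit(X_train, y_train)
score = clf.score(X_test, y_test) * 100.0
print(round(score, 1))
```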
tiagofabre/tiagofabre.github.io
_notebooks/Radial basis function.ipynb
mit
[ "import numpy as np\nfrom math import exp, pow, sqrt\nfrom numpy.linalg import inv\nfrom functools import reduce\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nfrom matplotlib.colors import ListedColormap", "Interpolation with RBF\n$$\nf(x) = \sum_{p=1}^{P} a_{p} R_{p} + b\n$$\n$$\nR_{p} = e^{-\frac{1}{2\sigma^{2}} \parallel X_{i} - X_{p} \parallel^{2}}\n$$\n$$\n\sigma = \frac{P_{max} - P_{min}}{\sqrt{2P}}\n$$\n$$\n\sigma = \frac{4-2}{\sqrt{2 \cdot 2}}\n$$\n$$\n\sigma^{2} = 1\n$$\n$$\nC_{1}=2\n$$\n$$\nC_{2}=4\n$$\n$$\displaystyle \frac{1}{[R]} = \left([R]^{t} [R]\right)^{-1} [R]^{t}$$\n$$\n\displaystyle \begin{bmatrix}\na\n\end{bmatrix} = \frac{1}{[R]} \begin{bmatrix}\nA\n\end{bmatrix}\n$$", "def rbf(inp, out, center):\n    def euclidean_norm(x1, x2):\n        return sqrt(((x1 - x2)**2).sum(axis=0))\n    \n    def gaussian(x, c):\n        # Gaussian basis exp(-||x - c||^2 / (2*sigma^2)) with sigma^2 = 1, as in the formula above\n        return exp(-0.5 * pow(euclidean_norm(x, c), 2))\n    \n    R = np.ones((len(inp), (len(center) + 1)))\n\n    for i, iv in enumerate(inp):\n        for j, jv in enumerate(center):\n            R[i, j] = gaussian(inp[i], center[j])\n    \n    Rt = R.transpose()\n    RtR = Rt.dot(R)\n    iRtR = inv(RtR)\n    oneR = iRtR.dot(Rt)\n    a = oneR.dot(out)\n    \n    def rbf_interpolation(x):\n        phi = np.ones(len(center) + 1)\n\n        for i, iv in enumerate(center):\n            phi[i] = gaussian(x, iv)\n\n        y = a * phi\n        return reduce((lambda x, y: x + y), y)\n    \n    return rbf_interpolation", "", "inp = np.array([2, 3, 4])\nout = np.array([3, 6, 5])\ncenter = np.array([2, 4])\n\nrbf_instance = rbf(inp, out, center)\n\ninput_test = np.linspace(0,10,100)\noutput_test = list(map(rbf_instance, input_test))\n\nplt.plot(input_test, output_test)\nplt.plot(inp, out, 'ro')\nplt.ylabel('expected vs predicted')\nplt.savefig(\"rbf1.svg\")\nplt.show()", "", "inp = np.array([2, 3, 4, 5])\nout = np.array([3, 1, 5, -2])\ncenter = np.array([2, 3, 4])\n\nrbf_instance = rbf(inp, out, center)\n\ninput_test = np.linspace(-5,10,100)\noutput_test = list(map(rbf_instance, 
input_test))\n\n# plt.plot(input_test, output_test)\nplt.plot(inp, out, 'ro')\nplt.ylabel('expected vs predicted')\nplt.savefig(\"interpolate1.svg\")\nplt.show()", "", "inp = np.array([2, 3, 4, 5])\nout = np.array([3, 1, 5, -2])\ncenter = np.array([2, 3, 4])\n\nrbf_instance = rbf(inp, out, center)\n\ninput_test = np.linspace(2,5,100)\noutput_test = list(map(rbf_instance, input_test))\n\nplt.plot(input_test, output_test)\nplt.plot(inp, out, 'ro')\nplt.ylabel('expected vs predicted')\nplt.savefig(\"rbf3.svg\")\nplt.show()", "XOR input", "inp = np.array([np.array([1,1]), np.array([0,1]), np.array([0,0]), np.array([1,0])])\nout = np.array([ 0, 1, 0, 1])\ncenter = np.array([ np.array([1,1]), np.array([0,0])])\n\nrbf_instance = rbf(inp, out, center)\n\ninp_test = np.array([np.array([1,1]), \n                     np.array([0,1]), \n                     np.array([0,0]), \n                     np.array([1,0])])\noutput = list(map(rbf_instance, inp_test))  # list() so it can be indexed and reused below\n\ndef colorize(output):\n    c = [None]* len(output)\n    for i, iv in enumerate(output):\n        if (output[i] > 0):\n            c[i] = 'blue'\n        else:\n            c[i] = 'red'\n    return c\n\ninp_x = [1, 0, 0, 1]\ninp_y = [1, 1, 0, 0]\n\nc = colorize(output)\n\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\n\n# Plot the four XOR points at their predicted heights, colored by sign\nax.scatter(inp_x, inp_y, output, c=c, depthshade=False)\nplt.savefig(\"rbf_xor.svg\")\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
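The weight computation in the notebook above, $[a] = ([R]^t [R])^{-1} [R]^t [A]$, is an ordinary least-squares solve. A small NumPy sketch (using the negative exponent $-\|x - c\|^2 / (2\sigma^2)$ from the stated formula, with $\sigma^2 = 1$) checks that with three samples and three basis functions (two Gaussians plus a constant bias) the design matrix is square and the fit interpolates the data exactly:

```python
import numpy as np

inp = np.array([2.0, 3.0, 4.0])
out = np.array([3.0, 6.0, 5.0])
centers = np.array([2.0, 4.0])
sigma2 = 1.0

# Design matrix: one Gaussian column per center, plus a constant bias column.
R = np.hstack([
    np.exp(-0.5 * (inp[:, None] - centers[None, :]) ** 2 / sigma2),
    np.ones((len(inp), 1)),
])

# Least-squares weights; equivalent to (R^T R)^{-1} R^T out when R^T R is invertible.
a, *_ = np.linalg.lstsq(R, out, rcond=None)

# With 3 samples and 3 basis functions, R is square (and here nonsingular),
# so the fitted values reproduce the training outputs exactly.
fitted = R @ a
print(np.allclose(fitted, out))
```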
GoogleCloudPlatform/practical-ml-vision-book
03_image_models/03b_finetune_MOBILENETV2_flowers5.ipynb
apache-2.0
[ "import urllib\nfrom IPython.display import Markdown as md\n_nb_loc = \"03_image_models/03b_finetune_MOBILENETV2_flowers5.ipynb\" # change to reflect your notebook\n_nb_title = \"Fine-tuning MobileNetV2 on 5 flowers (image classification)\" # change to reflect your notebook\n_nb_message = \"This notebook is set up to run on TPU or GPU. It has been executed on a TPUv3. When running on hardware with less memory such as a TPUv2 (Colab) or a GPU, you might have to set a lower batch size and/or image size in the Configuration section below.\" # change to reflect your notebook\n_icons=[\"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/logo-cloud.png\", \"https://www.tensorflow.org/images/colab_logo_32px.png\", \"https://www.tensorflow.org/images/GitHub-Mark-32px.png\", \"https://www.tensorflow.org/images/download_logo_32px.png\"]\n_links=[\"https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?\" + urllib.parse.urlencode({\"name\": _nb_title, \"download_url\": \"https://github.com/GoogleCloudPlatform/practical-ml-vision-book/raw/master/\"+_nb_loc}), \"https://colab.research.google.com/github/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}\".format(_nb_loc), \"https://github.com/GoogleCloudPlatform/practical-ml-vision-book/blob/master/{0}\".format(_nb_loc), \"https://raw.githubusercontent.com/GoogleCloudPlatform/practical-ml-vision-book/master/{0}\".format(_nb_loc)]\nmd(\"\"\"<table class=\"tfo-notebook-buttons\" align=\"left\"><td><a target=\"_blank\" href=\"{0}\"><img src=\"{4}\"/>Run in AI Platform Notebook</a></td><td><a target=\"_blank\" href=\"{1}\"><img src=\"{5}\" />Run in Google Colab</a></td><td><a target=\"_blank\" href=\"{2}\"><img src=\"{6}\" />View source on GitHub</a></td><td><a href=\"{3}\"><img src=\"{7}\" />Download notebook</a></td></table><br/><br/><h1>{8}</h1>{9}\"\"\".format(_links[0], _links[1], _links[2], _links[3], _icons[0], _icons[1], _icons[2], _icons[3], _nb_title, 
_nb_message))\n\n!pip install --user --quiet --force keras-adamw\n\n# restart kernel to pick up the adamw package\nimport IPython\n\nIPython.Application.instance().kernel.do_shutdown(True) #automatically restarts kernel\n\nimport math, re, os, sys\nimport tensorflow as tf\nimport numpy as np\nfrom matplotlib import pyplot as plt\nfrom sklearn.metrics import f1_score, precision_score, recall_score, confusion_matrix\nprint(\"Tensorflow version \" + tf.__version__)\nAUTO = tf.data.experimental.AUTOTUNE\n\n# WARNING! This call has to come before *any* TensorFlow calls. \n# If you get errors about AdamW not being found, restart the kernel and start from previous cell.\nos.environ['TF_KERAS'] = '1' # for AdamW\nfrom keras_adamw import AdamW", "TPU or GPU detection", "try: # detect TPUs\n tpu = tf.distribute.cluster_resolver.TPUClusterResolver.connect()\n strategy = tf.distribute.TPUStrategy(tpu)\nexcept ValueError: # detect GPUs or multi-GPU machines\n strategy = tf.distribute.MirroredStrategy()\n\nprint(\"REPLICAS: \", strategy.num_replicas_in_sync)", "Configuration", "GCS_DS_PATH = \"gs://practical-ml-vision-book/flowers_5_tfr\"\n\n# Settings for TPUv3. 
When running on hardware with less memory such as a TPUv2 (Colab)\n# or a GPU, you might have to use lower BATCH_SIZE and/or IMAGE_SIZE values.\n\nIMAGE_SIZE = [224, 224] # available image sizes in flowers104 dataset: 512x512, 331x331, 224x224, 192,192\nEPOCHS = 13\n\n# Learning rate schedule for fine-tuning: use trainable=True (best validation accuracy 0.91)\n#BATCH_SIZE = 16 * strategy.num_replicas_in_sync\n#LR_START = 0.00001\n#LR_MAX = 0.000025 * strategy.num_replicas_in_sync\n#LR_MIN = 0.00001\n#LR_RAMPUP_EPOCHS = 3\n#LR_SUSTAIN_EPOCHS = 1\n#LR_EXP_DECAY = .8\n\n# Learning rate schedule for fine-tuning with AdamW: use trainable=True (best validation accuracy 0.92)\nBATCH_SIZE = 8 * strategy.num_replicas_in_sync\nLR_START = 0.00001\nLR_MAX = 0.0001 * strategy.num_replicas_in_sync\nLR_MIN = 0.00001\nLR_RAMPUP_EPOCHS = 3\nLR_SUSTAIN_EPOCHS = 0\nLR_EXP_DECAY = .8\n\n# learning rate schedule for transfer learning: use trainable=False (best validation accuracy 0.90)\n#BATCH_SIZE = 16 * strategy.num_replicas_in_sync\n#LR_START = 0.00001\n#LR_MAX = 0.00075 * strategy.num_replicas_in_sync #(Note: 0.00007 with trainable=True to replicate graph in book)\n#LR_MIN = 0.00001\n#LR_RAMPUP_EPOCHS = 0\n#LR_SUSTAIN_EPOCHS = 0\n#LR_EXP_DECAY = .8\n\nGCS_PATH_SELECT = { # available image sizes\n 192: GCS_DS_PATH + '/tfrecords-jpeg-192x192',\n 224: GCS_DS_PATH + '/tfrecords-jpeg-224x224',\n 331: GCS_DS_PATH + '/tfrecords-jpeg-331x331',\n 512: GCS_DS_PATH + '/tfrecords-jpeg-512x512'\n}\nGCS_PATH = GCS_PATH_SELECT[IMAGE_SIZE[0]] + '/*.tfrec'\nfilenames = tf.io.gfile.glob(GCS_PATH)\nvalidation_split = 0.19\nsplit = len(filenames) - int(len(filenames) * validation_split)\nTRAINING_FILENAMES = filenames[:split]\nVALIDATION_FILENAMES = filenames[split:]\n\nCLASSES = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']\n\ndef lrfn(epoch):\n if epoch < LR_RAMPUP_EPOCHS:\n lr = (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START\n elif epoch < LR_RAMPUP_EPOCHS + 
LR_SUSTAIN_EPOCHS:\n lr = LR_MAX\n else:\n lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN\n return lr\n \nlr_callback = tf.keras.callbacks.LearningRateScheduler(lrfn, verbose=True)\n\nrng = [i for i in range(EPOCHS)]\ny = [lrfn(x) for x in rng]\nplt.plot(rng, y)\nprint(\"Learning rate schedule: {:.3g} to {:.3g} to {:.3g}\".format(y[0], max(y), y[-1]))", "Visualization utilities\ndata -> pixels, nothing of much interest for the machine learning practitioner in this section.", "# numpy and matplotlib defaults\nnp.set_printoptions(threshold=15, linewidth=80)\n\ndef batch_to_numpy_images_and_labels(data):\n images, labels = data\n numpy_images = images.numpy()\n numpy_labels = labels.numpy()\n if numpy_labels.dtype == object: # binary string in this case, these are image ID strings\n numpy_labels = [None for _ in enumerate(numpy_images)]\n # If no labels, only image IDs, return None for labels (this is the case for test data)\n return numpy_images, numpy_labels\n\ndef title_from_label_and_target(label, correct_label):\n if correct_label is None:\n return CLASSES[label], True\n correct = (label == correct_label)\n return \"{} [{}{}{}]\".format(CLASSES[label], 'OK' if correct else 'NO', u\"\\u2192\" if not correct else '',\n CLASSES[correct_label] if not correct else ''), correct\n\ndef display_one_flower(image, title, subplot, red=False, titlesize=16):\n plt.subplot(*subplot)\n plt.axis('off')\n plt.imshow(image)\n if len(title) > 0:\n plt.title(title, fontsize=int(titlesize) if not red else int(titlesize/1.2), color='red' if red else 'black', fontdict={'verticalalignment':'center'}, pad=int(titlesize/1.5))\n return (subplot[0], subplot[1], subplot[2]+1)\n \ndef display_batch_of_images(databatch, predictions=None):\n \"\"\"This will work with:\n display_batch_of_images(images)\n display_batch_of_images(images, predictions)\n display_batch_of_images((images, labels))\n display_batch_of_images((images, labels), 
predictions)\n \"\"\"\n # data\n images, labels = batch_to_numpy_images_and_labels(databatch)\n if labels is None:\n labels = [None for _ in enumerate(images)]\n \n # auto-squaring: this will drop data that does not fit into square or square-ish rectangle\n rows = int(math.sqrt(len(images)))\n cols = len(images)//rows\n \n # size and spacing\n FIGSIZE = 13.0\n SPACING = 0.1\n subplot=(rows,cols,1)\n if rows < cols:\n plt.figure(figsize=(FIGSIZE,FIGSIZE/cols*rows))\n else:\n plt.figure(figsize=(FIGSIZE/rows*cols,FIGSIZE))\n \n # display\n for i, (image, label) in enumerate(zip(images[:rows*cols], labels[:rows*cols])):\n title = '' if label is None else CLASSES[label]\n correct = True\n if predictions is not None:\n title, correct = title_from_label_and_target(predictions[i], label)\n dynamic_titlesize = FIGSIZE*SPACING/max(rows,cols)*40+3 # magic formula tested to work from 1x1 to 10x10 images\n subplot = display_one_flower(image, title, subplot, not correct, titlesize=dynamic_titlesize)\n \n #layout\n plt.tight_layout()\n if label is None and predictions is None:\n plt.subplots_adjust(wspace=0, hspace=0)\n else:\n plt.subplots_adjust(wspace=SPACING, hspace=SPACING)\n plt.show()\n\ndef display_confusion_matrix(cmat, score, precision, recall):\n #plt.figure(figsize=(15,15))\n ax = plt.gca()\n ax.matshow(cmat, cmap='Reds')\n ax.set_xticks(range(len(CLASSES)))\n ax.set_xticklabels(CLASSES)\n plt.setp(ax.get_xticklabels(), rotation=45, ha=\"left\", rotation_mode=\"anchor\")\n ax.set_yticks(range(len(CLASSES)))\n ax.set_yticklabels(CLASSES)\n plt.setp(ax.get_yticklabels(), rotation=45, ha=\"right\", rotation_mode=\"anchor\")\n #titlestring = \"\"\n #if score is not None:\n # titlestring += 'f1 = {:.3f} '.format(score)\n #if precision is not None:\n # titlestring += '\\nprecision = {:.3f} '.format(precision)\n #if recall is not None:\n # titlestring += '\\nrecall = {:.3f} '.format(recall)\n #if len(titlestring) > 0:\n # ax.text(101, 1, titlestring, fontdict={'fontsize': 
18, 'horizontalalignment':'right', 'verticalalignment':'top', 'color':'#804040'})\n plt.show()\n \ndef display_training_curves(training, validation, title, subplot, zoom_pcent=None, ylim=None):\n # zoom_pcent: X autoscales y axis for the last X% of data points\n if subplot%10==1: # set up the subplots on the first call\n plt.subplots(figsize=(10,10), facecolor='#F0F0F0')\n plt.tight_layout()\n ax = plt.subplot(subplot)\n ax.set_facecolor('#F8F8F8')\n ax.plot(training)\n ax.plot(validation, '--')\n ax.set_title('model '+ title)\n ax.set_ylabel(title)\n if zoom_pcent is not None:\n ylen = len(training)*(100-zoom_pcent)//100\n ymin = min([min(training[ylen:]), min(validation[ylen:])])\n ymax = max([max(training[ylen:]), max(validation[ylen:])])\n ax.set_ylim([ymin-(ymax-ymin)/20, ymax+(ymax-ymin)/20])\n if ylim is not None:\n ymin = ylim[0]\n ymax = ylim[1]\n ax.set_ylim([ymin-(ymax-ymin)/20, ymax+(ymax-ymin)/20])\n ax.set_xlabel('epoch')\n ax.legend(['train', 'valid.'])", "Datasets", "def decode_image(image_data):\n image = tf.image.decode_jpeg(image_data, channels=3) # decoded inamge in uint8 format range [0,255]\n image = tf.reshape(image, [*IMAGE_SIZE, 3]) # explicit size needed for TPU\n return image\n\ndef read_tfrecord(example):\n TFREC_FORMAT = {\n \"image\": tf.io.FixedLenFeature([], tf.string), # tf.string means bytestring\n \"class\": tf.io.FixedLenFeature([], tf.int64), # shape [] means single element\n }\n example = tf.io.parse_single_example(example, TFREC_FORMAT)\n image = decode_image(example['image'])\n label = tf.cast(example['class'], tf.int32)\n return image, label # returns a dataset of (image, label) pairs\n\ndef load_dataset(filenames, ordered=False):\n # Read from TFRecords. For optimal performance, reading from multiple files at once and\n # disregarding data order. 
Order does not matter since we will be shuffling the data anyway.\n\n ignore_order = tf.data.Options()\n if not ordered:\n ignore_order.experimental_deterministic = False # disable order, increase speed\n\n dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO) # automatically interleaves reads from multiple files\n dataset = dataset.with_options(ignore_order) # uses data as soon as it streams in, rather than in its original order\n dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)\n # returns a dataset of (image, label) pairs\n return dataset\n\ndef data_augment(image, label):\n # data augmentation. Thanks to the dataset.prefetch(AUTO) statement in the next function (below),\n # this happens essentially for free on TPU. Data pipeline code is executed on the \"CPU\" part\n # of the TPU while the TPU itself is computing gradients.\n image = tf.image.random_flip_left_right(image)\n #image = tf.image.random_saturation(image, 0, 2)\n return image, label \n\ndef get_training_dataset():\n dataset = load_dataset(TRAINING_FILENAMES)\n dataset = dataset.map(data_augment, num_parallel_calls=AUTO)\n dataset = dataset.repeat() # the training dataset must repeat for several epochs\n dataset = dataset.shuffle(2048)\n dataset = dataset.batch(BATCH_SIZE)\n dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)\n return dataset\n\ndef get_validation_dataset(ordered=False):\n dataset = load_dataset(VALIDATION_FILENAMES, ordered=ordered)\n dataset = dataset.batch(BATCH_SIZE)\n dataset = dataset.cache()\n dataset = dataset.prefetch(AUTO) # prefetch next batch while training (autotune prefetch buffer size)\n return dataset\n\ndef count_data_items(filenames):\n # the number of data items is written in the name of the .tfrec files, i.e. 
flowers00-230.tfrec = 230 data items\n n = [int(re.compile(r\"-([0-9]*)\\.\").search(filename).group(1)) for filename in filenames]\n return np.sum(n)\n\nNUM_TRAINING_IMAGES = count_data_items(TRAINING_FILENAMES)\nNUM_VALIDATION_IMAGES = count_data_items(VALIDATION_FILENAMES)\nTRAIN_STEPS = NUM_TRAINING_IMAGES // BATCH_SIZE\nSTEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE\nVALIDATION_STEPS = -(-NUM_VALIDATION_IMAGES // BATCH_SIZE) # The \"-(-//)\" trick rounds up instead of down :-)\nprint('Dataset: {} training images, {} validation images'.format(NUM_TRAINING_IMAGES, NUM_VALIDATION_IMAGES))", "Dataset visualizations (5 flowers)", "# data dump\nprint(\"Training data shapes:\")\nfor image, label in get_training_dataset().take(3):\n print(image.numpy().shape, label.numpy().shape)\nprint(\"Training data label examples:\", label.numpy())\n\n# Peek at training data\ntraining_dataset = get_training_dataset()\ntraining_dataset = training_dataset.unbatch().batch(20)\ntrain_batch = iter(training_dataset)\n\n# run this cell again for next set of images\ndisplay_batch_of_images(next(train_batch))", "Model", "with strategy.scope():\n pretrained_model = tf.keras.applications.MobileNetV2(\n weights='imagenet',\n include_top=False,\n input_shape=[*IMAGE_SIZE, 3])\n \n pretrained_model.trainable = True # fine-tuning\n \n model = tf.keras.Sequential([\n tf.keras.layers.Lambda( # convert image format from int [0,255] to the format expected by this model\n lambda data: tf.keras.applications.mobilenet.preprocess_input(\n tf.cast(data, tf.float32)), input_shape=[*IMAGE_SIZE, 3]),\n pretrained_model,\n tf.keras.layers.GlobalAveragePooling2D(),\n tf.keras.layers.Dense(16, activation='relu', name='flower_dense'),\n tf.keras.layers.Dense(len(CLASSES), activation='softmax', name='flower_prob')\n ])\n \n mult = 0.4 # for pretrained layers\n mult_by_layer={ \n # Clasification head\n 'flower_prob': 1.0,\n 'flower_dense': 1.0,\n # Pretrained layers\n 'block_1_': 0.02 * mult,\n 'block_2_': 
0.04 * mult,\n 'block_3_': 0.06 * mult,\n 'block_4_': 0.08 * mult,\n 'block_5_': 0.1 * mult,\n 'block_6_': 0.15 * mult,\n 'block_7_': 0.2 * mult,\n 'block_8_': 0.25 * mult,\n 'block_9_': 0.3 * mult,\n 'block_10_': 0.35 * mult,\n 'block_11_': 0.4 * mult,\n 'block_12_': 0.5 * mult,\n 'block_13_': 0.6 * mult,\n 'block_14_': 0.7 * mult,\n 'block_15_': 0.8 * mult,\n 'block_16_': 0.9 * mult,\n # these layers do not have stable identifiers in tf.keras.applications.MobileNetV2\n 'conv': 0.5 * mult,\n 'Conv': 0.5 * mult\n }\n \n optimizer = AdamW(lr=LR_MAX, model=model, lr_multipliers=mult_by_layer)\n \nmodel.compile(\n #optimizer='adam',\n optimizer=optimizer,\n loss = 'sparse_categorical_crossentropy',\n metrics=['sparse_categorical_accuracy'],\n steps_per_execution=8\n)\nmodel.summary()", "Training", "history = model.fit(get_training_dataset(), steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS,\n validation_data=get_validation_dataset(), validation_steps=VALIDATION_STEPS,\n callbacks=[lr_callback])\n\ndisplay_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 211, ylim=[0,1.7])\ndisplay_training_curves(history.history['sparse_categorical_accuracy'], history.history['val_sparse_categorical_accuracy'], 'accuracy', 212)\n", "Confusion matrix", "cmdataset = get_validation_dataset(ordered=True) # since we are splitting the dataset and iterating separately on images and labels, order matters.\nimages_ds = cmdataset.map(lambda image, label: image)\nlabels_ds = cmdataset.map(lambda image, label: label).unbatch()\ncm_correct_labels = next(iter(labels_ds.batch(NUM_VALIDATION_IMAGES))).numpy() # get everything as one batch\ncm_probabilities = model.predict(images_ds, steps=VALIDATION_STEPS)\ncm_predictions = np.argmax(cm_probabilities, axis=-1)\nprint(\"Correct labels: \", cm_correct_labels.shape, cm_correct_labels)\nprint(\"Predicted labels: \", cm_predictions.shape, cm_predictions)\n\ncmat = confusion_matrix(cm_correct_labels, cm_predictions, 
labels=range(len(CLASSES)))\nscore = f1_score(cm_correct_labels, cm_predictions, labels=range(len(CLASSES)), average='macro')\nprecision = precision_score(cm_correct_labels, cm_predictions, labels=range(len(CLASSES)), average='macro')\nrecall = recall_score(cm_correct_labels, cm_predictions, labels=range(len(CLASSES)), average='macro')\ncmat = (cmat.T / cmat.sum(axis=1)).T # normalized\ndisplay_confusion_matrix(cmat, score, precision, recall)\nprint('f1 score: {:.3f}, precision: {:.3f}, recall: {:.3f}'.format(score, precision, recall))", "Visual validation", "dataset = get_validation_dataset()\ndataset = dataset.unbatch().batch(20)\nbatch = iter(dataset)\n\n# run this cell again for next set of images\nimages, labels = next(batch)\nprobabilities = model.predict(images)\npredictions = np.argmax(probabilities, axis=-1)\ndisplay_batch_of_images((images, labels), predictions)", "License\nCopyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
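The ramp-up/sustain/decay learning-rate schedule in the notebook above is a pure function of the epoch index, so it can be sanity-checked in isolation. This standalone sketch restates the same formula with the uncommented AdamW settings, assuming a single replica:

```python
# Schedule constants copied from the AdamW fine-tuning block (num_replicas_in_sync assumed 1).
LR_START, LR_MAX, LR_MIN = 1e-5, 1e-4, 1e-5
LR_RAMPUP_EPOCHS, LR_SUSTAIN_EPOCHS, LR_EXP_DECAY = 3, 0, 0.8
EPOCHS = 13

def lrfn(epoch):
    # Linear warm-up, optional plateau at LR_MAX, then exponential decay toward LR_MIN.
    if epoch < LR_RAMPUP_EPOCHS:
        return (LR_MAX - LR_START) / LR_RAMPUP_EPOCHS * epoch + LR_START
    if epoch < LR_RAMPUP_EPOCHS + LR_SUSTAIN_EPOCHS:
        return LR_MAX
    return (LR_MAX - LR_MIN) * LR_EXP_DECAY ** (epoch - LR_RAMPUP_EPOCHS - LR_SUSTAIN_EPOCHS) + LR_MIN

schedule = [lrfn(e) for e in range(EPOCHS)]
print([round(lr, 6) for lr in schedule])
```

The printed list should rise linearly for the first three epochs, peak at `LR_MAX`, then decay geometrically toward `LR_MIN`.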
joshnsolomon/phys202-2015-work
assignments/assignment05/InteractEx01.ipynb
mit
[ "Interact Exercise 01\nImport", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display", "Interact basics\nWrite a print_sum function that prints the sum of its arguments a and b.", "def print_sum(a, b):\n    \"\"\"Print the sum of the arguments a and b.\"\"\"\n    print(a+b)", "Use the interact function to interact with the print_sum function.\n\na should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1\nb should be an integer slider over the interval [-8, 8] with step sizes of 2.", "interact(print_sum,a=(-10.,10.,.1),b=(-8,8,2));\n\nassert True # leave this for grading the print_sum exercise", "Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.", "def print_string(s, length=False):\n    \"\"\"Print the string s and optionally its length.\"\"\"\n    print(s)\n    if length:\n        print(len(s))", "Use the interact function to interact with the print_string function.\n\ns should be a textbox with the initial value \"Hello World!\".\nlength should be a checkbox with an initial value of True.", "# YOUR CODE HERE\ninteract(print_string,s='Hello World!',length=True);\n\nassert True # leave this for grading the print_string exercise" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
astro4dev/OAD-Data-Science-Toolkit
Teaching Materials/Machine Learning/Supervised Learning/Examples/PPC/Predicting_Pulsar_Candidates.ipynb
gpl-3.0
[ "Build simple models to predict pulsar candidates\nIn this notebook we will look at building machine learning models to predict pulsar candidates. The data comes from Rob Lyon at Manchester. This data is publicly available. For more information check out https://figshare.com/articles/HTRU2/3080389/1\nLet's start with the basic imports", "# For numerical stuff\nimport pandas as pd\n\n# Plotting\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (7.0, 7.0)\n\n# Some preprocessing utilities\nfrom sklearn.model_selection import train_test_split # Data splitting\nfrom sklearn.utils import shuffle\n\n# The different classifiers\nfrom sklearn.neighbors import KNeighborsClassifier # Nearest Neighbor - Analogizer\nfrom sklearn.naive_bayes import GaussianNB # Bayesian Classifier - Bayesian\nfrom sklearn.neural_network import MLPClassifier # Neural Network - Connectionist\n\n# Model result function\nfrom sklearn.metrics import classification_report,accuracy_score", "Load dataset\n\n\nData is a CSV file with columns as features and rows as samples of positive and negative candidates\n\n\nClass label is the last column, where \"1\" corresponds to a true pulsar candidate and \"0\" to a false candidate", "data = pd.read_csv('Data/pulsar.csv')\n\n# Show some information\nprint ('Dataset has %d rows and %d columns including features and labels'%(data.shape[0],data.shape[1]))", "Let's print the feature names", "print (data.columns.values[0:-1])", "Do a scatter plot", "ax = plt.figure().add_subplot(projection='3d')\nax.scatter3D(data['std_pf'], data['mean_dm'], data['mean_int_pf'],c=data['class'],alpha=.25)\nax.set_xlabel('std_pf')\nax.set_ylabel('mean_dm')\nax.set_zlabel('mean_int_pf')", "Get the features and labels", "# Let's shuffle the rows of the data 10 times\nfor i in range(10):\n    data = shuffle(data)\n\n# Now split the dataset into separate variables for features and labels\nfeatures = data.loc[:,data.columns 
!= 'class'].values # All columns except class\nlabels = data['class'].values # Class labels", "Split data into training and validation sets", "# Do a 70 - 30 split of the whole data for training and testing\n# The last argument specifies the fraction of samples for testing\ntrain_data,test_data,train_labels,test_labels = train_test_split(features,labels,test_size=.3)\n# Print some info\nprint ('Number of training data points : %d'%(train_data.shape[0]))\nprint ('Number of testing data points : %d'%(test_data.shape[0]))", "Let's do the training on different algorithms\nWe will be using the following algorithms\n\n\nk-Nearest Neighbours (KNN) [ https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm ]\n\n\nNaive Bayes Classifier [ https://en.wikipedia.org/wiki/Naive_Bayes_classifier ]\n\n\nMultilayer Neural Network [ https://en.wikipedia.org/wiki/Multilayer_perceptron ] \n\n\nLet's start with default model parameters for each classifier.\nCheck the link above each block for the function definition\n\nScikit KNN\n\nhttp://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html", "# K nearest neighbor\nknn = KNeighborsClassifier()\nknn.fit(train_data,train_labels)", "Scikit Naive Bayes\n\nhttp://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.GaussianNB.html", "# Naive Bayes\nnb = GaussianNB()\nnb.fit(train_data,train_labels)", "Scikit MLP\n\nhttps://en.wikipedia.org/wiki/Multilayer_perceptron", "# MLP\nmlp = MLPClassifier(solver='sgd',hidden_layer_sizes=(5, 1))\nmlp.fit(train_data,train_labels)", "Fancy function to print results for model evaluation", "# Pretty function to test a model and print accuracy score\ndef evaluate(model,modelname,test_data,test_labels):\n    predictions = model.predict(test_data) # Do the actual prediction\n    print('====================================================')\n    print('Classification Report for %s'%modelname)\n    print('===================================================')\n    
print(classification_report(test_labels,predictions,target_names=['Non Pulsar','Pulsar']))\n \n print('\\n The model is %.2f%% accurate' %(accuracy_score(test_labels,predictions)*100))\n print('====================================================\\n\\n')\n\n# Making some stuff easy\nmodels =[knn,nb,mlp]\nmodel_names =['KNN','Naive Bayes','Neural Network']", "Now let's test each classifier and display their accuracy", "for i in range(0,3):\n evaluate(models[i],model_names[i],test_data,test_labels)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
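The pulsar notebook above delegates k-nearest-neighbours to scikit-learn's `KNeighborsClassifier`. To make the majority-vote idea concrete without the HTRU2 CSV (not bundled here), the sketch below implements a tiny k-NN from scratch; the 2-D points are invented stand-ins for non-pulsar (0) and pulsar (1) samples:

```python
from collections import Counter
import math

def knn_predict(train_data, train_labels, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    # Euclidean distance from x to every training point, nearest first
    dists = sorted(
        (math.dist(point, x), label) for point, label in zip(train_data, train_labels)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Invented, well-separated clusters: label 0 near the origin, label 1 near (5, 5)
train_data = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
train_labels = [0, 0, 0, 1, 1, 1]

print(knn_predict(train_data, train_labels, (0.05, 0.1)))  # 0
print(knn_predict(train_data, train_labels, (5.0, 5.1)))   # 1
```

scikit-learn layers distance weighting, tree-based neighbour search and the consistent fit/predict API on top of this same idea.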
Aggieyixin/cjc2016
code/04.PythonCrawler_beautifulsoup.ipynb
mit
[ "ๆ•ฐๆฎๆŠ“ๅ–๏ผš\n\nBeautifulsoup็ฎ€ไป‹\n\n\n็Ž‹ๆˆๅ†›\nwangchengjun@nju.edu.cn\n่ฎก็ฎ—ไผ ๆ’ญ็ฝ‘ http://computational-communication.com\n้œ€่ฆ่งฃๅ†ณ็š„้—ฎ้ข˜\n\n้กต้ข่งฃๆž\n่Žทๅ–Javascript้š่—ๆบๆ•ฐๆฎ\n่‡ชๅŠจ็ฟป้กต\n่‡ชๅŠจ็™ปๅฝ•\n่ฟžๆŽฅAPIๆŽฅๅฃ", "import urllib2\nfrom bs4 import BeautifulSoup", "ไธ€่ˆฌ็š„ๆ•ฐๆฎๆŠ“ๅ–๏ผŒไฝฟ็”จurllib2ๅ’Œbeautifulsoup้…ๅˆๅฐฑๅฏไปฅไบ†ใ€‚\nๅฐคๅ…ถๆ˜ฏๅฏนไบŽ็ฟป้กตๆ—ถurlๅ‡บ็Žฐ่ง„ๅˆ™ๅ˜ๅŒ–็š„็ฝ‘้กต๏ผŒๅช้œ€่ฆๅค„็†่ง„ๅˆ™ๅŒ–็š„urlๅฐฑๅฏไปฅไบ†ใ€‚\nไปฅ็ฎ€ๅ•็š„ไพ‹ๅญๆ˜ฏๆŠ“ๅ–ๅคฉๆถฏ่ฎบๅ›ไธŠๅ…ณไบŽๆŸไธ€ไธชๅ…ณ้”ฎ่ฏ็š„ๅธ–ๅญใ€‚\nๅœจๅคฉๆถฏ่ฎบๅ›๏ผŒๅ…ณไบŽ้›พ้œพ็š„ๅธ–ๅญ็š„็ฌฌไธ€้กตๆ˜ฏ๏ผš\nhttp://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=้›พ้œพ\n็ฌฌไบŒ้กตๆ˜ฏ๏ผš\nhttp://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=้›พ้œพ\n\n\n\nBeautiful Soup\n\nBeautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:\n\n\nBeautiful Soup provides a few simple methods. It doesn't take much code to write an application\nBeautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding.\nBeautiful Soup sits on top of popular Python parsers like lxml and html5lib.\n\nInstall beautifulsoup4\nopen your terminal/cmd\n\n$ pip install beautifulsoup4\n\n็ฌฌไธ€ไธช็ˆฌ่™ซ\nBeautifulsoup Quick Start \nhttp://www.crummy.com/software/BeautifulSoup/bs4/doc/", "url = 'file:///Users/chengjun/GitHub/cjc2016/data/test.html'\ncontent = urllib2.urlopen(url).read() \nsoup = BeautifulSoup(content, 'html.parser') \nsoup", "html.parser\nBeautiful Soup supports the html.parser included in Pythonโ€™s standard library\nlxml\nbut it also supports a number of third-party Python parsers. One is the lxml parser lxml. 
Depending on your setup, you might install lxml with one of these commands:\n\n$ apt-get install python-lxml\n$ easy_install lxml\n$ pip install lxml\n\nhtml5lib\nAnother alternative is the pure-Python html5lib parser html5lib, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:\n\n$ apt-get install python-html5lib\n$ easy_install html5lib\n$ pip install html5lib", "print(soup.prettify())", "html\nhead\ntitle\n\n\nbody\np (class = 'title', 'story' )\na (class = 'sister')\nhref/id", "for tag in soup.find_all(True):\n print(tag.name)\n\nsoup('head') # or soup.head\n\nsoup('body') # or soup.body\n\nsoup('title') # or soup.title\n\nsoup('p')\n\nsoup.p\n\nsoup.title.name\n\nsoup.title.string\n\nsoup.title.text\n\nsoup.title.parent.name\n\nsoup.p\n\nsoup.p['class']\n\nsoup.find_all('p', {'class': 'title'})\n\nsoup.find_all('p', class_= 'title')\n\nsoup.find_all('p', {'class': 'story'})\n\nsoup.find_all('p', {'class': 'story'})[0].find_all('a')\n\nsoup.a\n\nsoup('a')\n\nsoup.find(id=\"link3\")\n\nsoup.find_all('a')\n\nsoup.find_all('a', {'class': 'sister'}) # compare with soup.find_all('a')\n\nsoup.find_all('a', {'class': 'sister'})[0]\n\nsoup.find_all('a', {'class': 'sister'})[0].text\n\nsoup.find_all('a', {'class': 'sister'})[0]['href']\n\nsoup.find_all('a', {'class': 'sister'})[0]['id']\n\nsoup.find_all([\"a\", \"b\"])\n\nprint(soup.get_text())", "Data scraping:\n\nScraping the content of a WeChat public account article from its URL\n\n\n\nWang Chengjun (王成军)\nwangchengjun@nju.edu.cn\nComputational Communication http://computational-communication.com", "from IPython.display import display_html, HTML\nHTML('<iframe src=http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\\\nmid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\\\nwidth=500 height=500></iframe>')\n# the webpage we would like to crawl", "View the page source\nInspect", "url = 
\"http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\\\nmid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\"\ncontent = urllib2.urlopen(url).read() # get the html text of the web page\nsoup = BeautifulSoup(content, 'html.parser') \ntitle = soup.title.text\nrmml = soup.find('div', {'class': 'rich_media_meta_list'})\ndate = rmml.find(id = 'post-date').text\nrmc = soup.find('div', {'class': 'rich_media_content'})\ncontent = rmc.get_text()\nprint title\nprint date\nprint content", "Homework:\n\nScrape the content of the latest issue of the Fudan New Media WeChat public account" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
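The crawler cells above are Python 2 (`urllib2`) and assume `bs4` is installed. If neither is available, the standard library's `html.parser` (the same parser Beautiful Soup is told to use above) can approximate `soup.find_all('a', {'class': 'sister'})`. The document below is a small invented page modelled on the three-sisters `test.html` example, not the actual file from the repo:

```python
from html.parser import HTMLParser

class SisterLinks(HTMLParser):
    """Collect (href, id) for every <a class="sister"> start tag."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)  # attrs arrives as a list of (name, value) pairs
        if tag == 'a' and attrs.get('class') == 'sister':
            self.links.append((attrs.get('href'), attrs.get('id')))

doc = """<p class="story">Once upon a time there were three little sisters:
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a> and
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>.</p>"""

parser = SisterLinks()
parser.feed(doc)
print(parser.links)
# [('http://example.com/elsie', 'link1'), ('http://example.com/lacie', 'link2')]
```

BeautifulSoup is still the better tool for real pages (it tolerates broken markup and gives you tree navigation), but this shows there is no magic in the query itself.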
turbomanage/training-data-analyst
courses/machine_learning/deepdive/09_sequence_keras/poetry.ipynb
apache-2.0
[ "Text generation using tensor2tensor on Cloud ML Engine\nThis notebook illustrates using the <a href=\"https://github.com/tensorflow/tensor2tensor\">tensor2tensor</a> library to do from-scratch, distributed training of a poetry model. Then, the trained model is used to complete new poems.\n<br/>\nInstall tensor2tensor, and specify Google Cloud Platform project and bucket\nInstall the necessary packages. tensor2tensor will give us the Transformer model. Project Gutenberg gives us access to historical poems.\n<b>p.s.</b> Note that this notebook uses Python 2 because Project Gutenberg relies on BSD-DB which was deprecated in Python 3 and removed from the standard library.\ntensor2tensor itself can be used on Python 3. It's just Project Gutenberg that has this issue.", "%%bash\npip freeze | grep tensor\n\n# Choose a version of TensorFlow that is supported on TPUs\nTFVERSION='1.13'\nimport os\nos.environ['TFVERSION'] = TFVERSION\n\n%%bash\npip install tensor2tensor==${TFVERSION} gutenberg \n\n# install from source\n#git clone https://github.com/tensorflow/tensor2tensor.git\n#cd tensor2tensor\n#yes | pip install --user -e .", "If the following cell does not reflect the version of tensorflow and tensor2tensor that you just installed, click \"Reset Session\" on the notebook so that the Python environment picks up the new packages.", "%%bash\npip freeze | grep tensor\n\nimport os\nPROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'cloud-training-demos-ml' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. 
us-central1\n\n# this is what this notebook is demonstrating\nPROBLEM= 'poetry_line_problem'\n\n# for bash\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\nos.environ['PROBLEM'] = PROBLEM\n\n#os.environ['PATH'] = os.environ['PATH'] + ':' + os.getcwd() + '/tensor2tensor/tensor2tensor/bin/'\n\n%%bash\ngcloud config set project $PROJECT\ngcloud config set compute/region $REGION", "Download data\nWe will get some <a href=\"https://www.gutenberg.org/wiki/Poetry_(Bookshelf)\">poetry anthologies</a> from Project Gutenberg.", "%%bash\nrm -rf data/poetry\nmkdir -p data/poetry\n\nfrom gutenberg.acquire import load_etext\nfrom gutenberg.cleanup import strip_headers\nimport re\n\nbooks = [\n # bookid, skip N lines\n (26715, 1000, 'Victorian songs'),\n (30235, 580, 'Baldwin collection'),\n (35402, 710, 'Swinburne collection'),\n (574, 15, 'Blake'),\n (1304, 172, 'Bulchevys collection'),\n (19221, 223, 'Palgrave-Pearse collection'),\n (15553, 522, 'Knowles collection') \n]\n\nwith open('data/poetry/raw.txt', 'w') as ofp:\n lineno = 0\n for (id_nr, toskip, title) in books:\n startline = lineno\n text = strip_headers(load_etext(id_nr)).strip()\n lines = text.split('\\n')[toskip:]\n # any line that is all upper case is a title or author name\n # also don't want any lines with years (numbers)\n for line in lines:\n if (len(line) > 0 \n and line.upper() != line \n and not re.match('.*[0-9]+.*', line)\n and len(line) < 50\n ):\n cleaned = re.sub('[^a-z\\'\\-]+', ' ', line.strip().lower())\n ofp.write(cleaned)\n ofp.write('\\n')\n lineno = lineno + 1\n else:\n ofp.write('\\n')\n print('Wrote lines {} to {} from {}'.format(startline, lineno, title))\n\n!wc -l data/poetry/*.txt", "Create training dataset\nWe are going to train a machine learning model to write poetry given a starting point. We'll give it one line, and it is going to tell us the next line. So, naturally, we will train it on real poetry. 
Our feature will be a line of a poem and the label will be the next line of that poem.\n<p>\nOur training dataset will consist of two files. The first file will consist of the input lines of poetry and the other file will consist of the corresponding output lines, one output line per input line.", "with open('data/poetry/raw.txt', 'r') as rawfp,\\\n open('data/poetry/input.txt', 'w') as infp,\\\n open('data/poetry/output.txt', 'w') as outfp:\n \n prev_line = ''\n for curr_line in rawfp:\n curr_line = curr_line.strip()\n # poems break at empty lines, so this ensures we train only\n # on lines of the same poem\n if len(prev_line) > 0 and len(curr_line) > 0: \n infp.write(prev_line + '\\n')\n outfp.write(curr_line + '\\n')\n prev_line = curr_line \n\n!head -5 data/poetry/*.txt", "We do not need to generate the data beforehand -- instead, we can have Tensor2Tensor create the training dataset for us. So, in the code below, I will use only data/poetry/raw.txt -- obviously, this allows us to productionize our model better. Simply keep collecting raw data and generate the training/test data at the time of training.\nSet up problem\nThe Problem in tensor2tensor is where you specify parameters like the size of your vocabulary and where to get the training data from.", "%%bash\nrm -rf poetry\nmkdir -p poetry/trainer\n\n%%writefile poetry/trainer/problem.py\nimport os\nimport tensorflow as tf\nfrom tensor2tensor.utils import registry\nfrom tensor2tensor.models import transformer\nfrom tensor2tensor.data_generators import problem\nfrom tensor2tensor.data_generators import text_encoder\nfrom tensor2tensor.data_generators import text_problems\nfrom tensor2tensor.data_generators import generator_utils\n\ntf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file\n\n@registry.register_problem\nclass PoetryLineProblem(text_problems.Text2TextProblem):\n \"\"\"Predict next line of poetry from the last line. 
From Gutenberg texts.\"\"\"\n\n @property\n def approx_vocab_size(self):\n return 2**13 # ~8k\n\n @property\n def is_generate_per_split(self):\n # generate_data will NOT shard the data into TRAIN and EVAL for us.\n return False\n\n @property\n def dataset_splits(self):\n \"\"\"Splits of data to produce and number of output shards for each.\"\"\"\n # 10% evaluation data\n return [{\n \"split\": problem.DatasetSplit.TRAIN,\n \"shards\": 90,\n }, {\n \"split\": problem.DatasetSplit.EVAL,\n \"shards\": 10,\n }]\n\n def generate_samples(self, data_dir, tmp_dir, dataset_split):\n with open('data/poetry/raw.txt', 'r') as rawfp:\n prev_line = ''\n for curr_line in rawfp:\n curr_line = curr_line.strip()\n # poems break at empty lines, so this ensures we train only\n # on lines of the same poem\n if len(prev_line) > 0 and len(curr_line) > 0: \n yield {\n \"inputs\": prev_line,\n \"targets\": curr_line\n }\n prev_line = curr_line \n\n\n# Smaller than the typical translate model, and with more regularization\n@registry.register_hparams\ndef transformer_poetry():\n hparams = transformer.transformer_base()\n hparams.num_hidden_layers = 2\n hparams.hidden_size = 128\n hparams.filter_size = 512\n hparams.num_heads = 4\n hparams.attention_dropout = 0.6\n hparams.layer_prepostprocess_dropout = 0.6\n hparams.learning_rate = 0.05\n return hparams\n\n@registry.register_hparams\ndef transformer_poetry_tpu():\n hparams = transformer_poetry()\n transformer.update_hparams_for_tpu(hparams)\n return hparams\n\n# hyperparameter tuning ranges\n@registry.register_ranged_hparams\ndef transformer_poetry_range(rhp):\n rhp.set_float(\"learning_rate\", 0.05, 0.25, scale=rhp.LOG_SCALE)\n rhp.set_int(\"num_hidden_layers\", 2, 4)\n rhp.set_discrete(\"hidden_size\", [128, 256, 512])\n rhp.set_float(\"attention_dropout\", 0.4, 0.7)\n\n%%writefile poetry/trainer/__init__.py\nfrom . 
import problem\n\n%%writefile poetry/setup.py\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nREQUIRED_PACKAGES = [\n 'tensor2tensor'\n]\n\nsetup(\n name='poetry',\n version='0.1',\n author = 'Google',\n author_email = 'training-feedback@cloud.google.com',\n install_requires=REQUIRED_PACKAGES,\n packages=find_packages(),\n include_package_data=True,\n description='Poetry Line Problem',\n requires=[]\n)\n\n!touch poetry/__init__.py\n\n!find poetry", "Generate training data\nOur problem (translation) requires the creation of text sequences from the training dataset. This is done using t2t-datagen and the Problem defined in the previous section.\n(Ignore any runtime warnings about np.float64. they are harmless).", "%%bash\nDATA_DIR=./t2t_data\nTMP_DIR=$DATA_DIR/tmp\nrm -rf $DATA_DIR $TMP_DIR\nmkdir -p $DATA_DIR $TMP_DIR\n# Generate data\nt2t-datagen \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --data_dir=$DATA_DIR \\\n --tmp_dir=$TMP_DIR", "Let's check to see the files that were output. If you see a broken pipe error, please ignore.", "!ls t2t_data | head", "Provide Cloud ML Engine access to data\nCopy the data to Google Cloud Storage, and then provide access to the data. 
gsutil throws an error when removing an empty bucket, so you may see an error the first time this code is run.", "%%bash\nDATA_DIR=./t2t_data\ngsutil -m rm -r gs://${BUCKET}/poetry/\ngsutil -m cp ${DATA_DIR}/${PROBLEM}* ${DATA_DIR}/vocab* gs://${BUCKET}/poetry/data\n\n%%bash\nPROJECT_ID=$PROJECT\nAUTH_TOKEN=$(gcloud auth print-access-token)\nSVC_ACCOUNT=$(curl -X GET -H \"Content-Type: application/json\" \\\n -H \"Authorization: Bearer $AUTH_TOKEN\" \\\n https://ml.googleapis.com/v1/projects/${PROJECT_ID}:getConfig \\\n | python -c \"import json; import sys; response = json.load(sys.stdin); \\\n print(response['serviceAccount'])\")\n\necho \"Authorizing the Cloud ML Service account $SVC_ACCOUNT to access files in $BUCKET\"\ngsutil -m defacl ch -u $SVC_ACCOUNT:R gs://$BUCKET\ngsutil -m acl ch -u $SVC_ACCOUNT:R -r gs://$BUCKET # error message (if bucket is empty) can be ignored\ngsutil -m acl ch -u $SVC_ACCOUNT:W gs://$BUCKET", "Train model locally on subset of data\nLet's run it locally on a subset of the data to make sure it works.", "%%bash\nBASE=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/subset\ngsutil -m rm -r $OUTDIR\ngsutil -m cp \\\n ${BASE}/${PROBLEM}-train-0008* \\\n ${BASE}/${PROBLEM}-dev-00000* \\\n ${BASE}/vocab* \\\n $OUTDIR", "Note: the following will work only if you are running Jupyter on a reasonably powerful machine. 
Don't be alarmed if your process is killed.", "%%bash\nDATA_DIR=gs://${BUCKET}/poetry/subset\nOUTDIR=./trained_model\nrm -rf $OUTDIR\nt2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --hparams_set=transformer_poetry \\\n --output_dir=$OUTDIR --job-dir=$OUTDIR --train_steps=10", "Option 1: Train model locally on full dataset (use if running on Notebook Instance with a GPU)\nYou can train on the full dataset if you are on a Google Cloud Notebook Instance with a P100 or better GPU", "%%bash\nLOCALGPU=\"--train_steps=7500 --worker_gpu=1 --hparams_set=transformer_poetry\"\n\nDATA_DIR=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/model\nrm -rf $OUTDIR\nt2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --hparams_set=transformer_poetry \\\n --output_dir=$OUTDIR ${LOCALGPU}", "Option 2: Train on Cloud ML Engine\ntensor2tensor has a convenient --cloud_mlengine option to kick off the training on the managed service.\nIt uses the Python API mentioned in the Cloud ML Engine docs, rather than requiring you to use gcloud to submit the job.\n<p>\nNote: your project needs P100 quota in the region.\n<p>\nThe echo is because t2t-trainer asks you to confirm before submitting the job to the cloud. Ignore any error about \"broken pipe\".\nIf you see a message similar to this:\n<pre>\n [... cloud_mlengine.py:392] Launched transformer_poetry_line_problem_t2t_20190323_000631. 
See console to track: https://console.cloud.google.com/mlengine/jobs/.\n</pre>\nthen, this step has been successful.", "%%bash\nGPU=\"--train_steps=7500 --cloud_mlengine --worker_gpu=1 --hparams_set=transformer_poetry\"\n\nDATADIR=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/model\nJOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\nyes Y | t2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --output_dir=$OUTDIR \\\n ${GPU}\n\n%%bash\n## CHANGE the job name (based on output above: You will see a line such as Launched transformer_poetry_line_problem_t2t_20190322_233159)\ngcloud ml-engine jobs describe transformer_poetry_line_problem_t2t_20190323_003001", "The job took about <b>25 minutes</b> for me and ended with these evaluation metrics:\n<pre>\nSaving dict for global step 8000: global_step = 8000, loss = 6.03338, metrics-poetry_line_problem/accuracy = 0.138544, metrics-poetry_line_problem/accuracy_per_sequence = 0.0, metrics-poetry_line_problem/accuracy_top5 = 0.232037, metrics-poetry_line_problem/approx_bleu_score = 0.00492648, metrics-poetry_line_problem/neg_log_perplexity = -6.68994, metrics-poetry_line_problem/rouge_2_fscore = 0.00256089, metrics-poetry_line_problem/rouge_L_fscore = 0.128194\n</pre>\nNotice that accuracy_per_sequence is 0 -- Considering that we are asking the NN to be rather creative, that doesn't surprise me. Why am I looking at accuracy_per_sequence and not the other metrics? This is because it is more appropriate for problem we are solving; metrics like Bleu score are better for translation.\nOption 3: Train on a directly-connected TPU\nIf you are running on a VM connected directly to a Cloud TPU, you can run t2t-trainer directly. 
Unfortunately, you won't see any output from Jupyter while the program is running.\nCompare this command line to the one using GPU in the previous section.", "%%bash\n# use one of these\nTPU=\"--train_steps=7500 --use_tpu=True --cloud_tpu_name=laktpu --hparams_set=transformer_poetry_tpu\"\n\nDATADIR=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/model_tpu\nJOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\necho \"'Y'\" | t2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --output_dir=$OUTDIR \\\n ${TPU}\n\n%%bash\ngsutil ls gs://${BUCKET}/poetry/model_tpu", "The job took about <b>10 minutes</b> for me and ended with these evaluation metrics:\n<pre>\nSaving dict for global step 8000: global_step = 8000, loss = 6.03338, metrics-poetry_line_problem/accuracy = 0.138544, metrics-poetry_line_problem/accuracy_per_sequence = 0.0, metrics-poetry_line_problem/accuracy_top5 = 0.232037, metrics-poetry_line_problem/approx_bleu_score = 0.00492648, metrics-poetry_line_problem/neg_log_perplexity = -6.68994, metrics-poetry_line_problem/rouge_2_fscore = 0.00256089, metrics-poetry_line_problem/rouge_L_fscore = 0.128194\n</pre>\nNotice that accuracy_per_sequence is 0 -- Considering that we are asking the NN to be rather creative, that doesn't surprise me. Why am I looking at accuracy_per_sequence and not the other metrics? This is because it is more appropriate for problem we are solving; metrics like Bleu score are better for translation.\nOption 4: Training longer\nLet's train on 4 GPUs for 75,000 steps. Note the change in the last line of the job.", "%%bash\n\nXXX This takes 3 hours on 4 GPUs. 
Remove this line if you are sure you want to do this.\n\nDATADIR=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/model_full2\nJOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\necho \"'Y'\" | t2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --hparams_set=transformer_poetry \\\n --output_dir=$OUTDIR \\\n --train_steps=75000 --cloud_mlengine --worker_gpu=4", "This job took <b>12 hours</b> for me and ended with these metrics:\n<pre>\nglobal_step = 76000, loss = 4.99763, metrics-poetry_line_problem/accuracy = 0.219792, metrics-poetry_line_problem/accuracy_per_sequence = 0.0192308, metrics-poetry_line_problem/accuracy_top5 = 0.37618, metrics-poetry_line_problem/approx_bleu_score = 0.017955, metrics-poetry_line_problem/neg_log_perplexity = -5.38725, metrics-poetry_line_problem/rouge_2_fscore = 0.0325563, metrics-poetry_line_problem/rouge_L_fscore = 0.210618\n</pre>\nAt least the accuracy per sequence is no longer zero. It is now 0.0192308 ... note that we are using a relatively small dataset (12K lines) and this is tiny in the world of natural language problems.\n<p>\nIn order that you have your expectations set correctly: a high-performing translation model needs 400-million lines of input and takes 1 whole day on a TPU pod!\n\n## Check trained model", "%%bash\ngsutil ls gs://${BUCKET}/poetry/model #_modeltpu", "Batch-predict\nHow will our poetry model do when faced with Rumi's spiritual couplets?", "%%writefile data/poetry/rumi.txt\nWhere did the handsome beloved go?\nI wonder, where did that tall, shapely cypress tree go?\nHe spread his light among us like a candle.\nWhere did he go? 
So strange, where did he go without me?\nAll day long my heart trembles like a leaf.\nAll alone at midnight, where did that beloved go?\nGo to the road, and ask any passing traveler -\nThat soul-stirring companion, where did he go?\nGo to the garden, and ask the gardener -\nThat tall, shapely rose stem, where did he go?\nGo to the rooftop, and ask the watchman -\nThat unique sultan, where did he go?\nLike a madman, I search in the meadows!\nThat deer in the meadows, where did he go?\nMy tearful eyes overflow like a river -\nThat pearl in the vast sea, where did he go?\nAll night long, I implore both moon and Venus -\nThat lovely face, like a moon, where did he go?\nIf he is mine, why is he with others?\nSince he's not here, to what \"there\" did he go?\nIf his heart and soul are joined with God,\nAnd he left this realm of earth and water, where did he go?\nTell me clearly, Shams of Tabriz,\nOf whom it is said, \"The sun never dies\" - where did he go?", "Let's write out the odd-numbered lines. 
We'll compare how close our model can get to the beauty of Rumi's second lines given his first.", "%%bash\nawk 'NR % 2 == 1' data/poetry/rumi.txt | tr '[:upper:]' '[:lower:]' | sed \"s/[^a-z\\'-\\ ]//g\" > data/poetry/rumi_leads.txt\nhead -3 data/poetry/rumi_leads.txt\n\n%%bash\n# same as the above training job ...\nTOPDIR=gs://${BUCKET}\nOUTDIR=${TOPDIR}/poetry/model #_tpu # or ${TOPDIR}/poetry/model_full\nDATADIR=${TOPDIR}/poetry/data\nMODEL=transformer\nHPARAMS=transformer_poetry #_tpu\n\n# the file with the input lines\nDECODE_FILE=data/poetry/rumi_leads.txt\n\nBEAM_SIZE=4\nALPHA=0.6\n\nt2t-decoder \\\n --data_dir=$DATADIR \\\n --problem=$PROBLEM \\\n --model=$MODEL \\\n --hparams_set=$HPARAMS \\\n --output_dir=$OUTDIR \\\n --t2t_usr_dir=./poetry/trainer \\\n --decode_hparams=\"beam_size=$BEAM_SIZE,alpha=$ALPHA\" \\\n --decode_from_file=$DECODE_FILE", "<b> Note </b> if you get an error about \"AttributeError: 'HParams' object has no attribute 'problems'\" please <b>Reset Session</b>, run the cell that defines the PROBLEM and run the above cell again.", "%%bash \nDECODE_FILE=data/poetry/rumi_leads.txt\ncat ${DECODE_FILE}.*.decodes", "Some of these are still phrases and not complete sentences. This indicates that we might need to train longer or better somehow. 
We need to diagnose the model ...\n<p>\n\n### Diagnosing training run\n\n<p>\nLet's diagnose the training run to see what we'd improve the next time around.\n(Note that this package may not be present on Jupyter -- `pip install pydatalab` if necessary)", "from google.datalab.ml import TensorBoard\nTensorBoard().start('gs://{}/poetry/model_full'.format(BUCKET))\n\nfor pid in TensorBoard.list()['pid']:\n TensorBoard().stop(pid)\n print('Stopped TensorBoard with pid {}'.format(pid))", "<table>\n<tr>\n<td><img src=\"diagrams/poetry_loss.png\"/></td>\n<td><img src=\"diagrams/poetry_acc.png\"/></td>\n</table>\nLooking at the loss curve, it is clear that we are overfitting (note that the orange training curve is well below the blue eval curve). Both loss curves and the accuracy-per-sequence curve, which is our key evaluation measure, plateaus after 40k. (The red curve is a faster way of computing the evaluation metric, and can be ignored). So, how do we improve the model? Well, we need to reduce overfitting and make sure the eval metrics keep going down as long as the loss is also going down.\n<p>\nWhat we really need to do is to get more data, but if that's not an option, we could try to reduce the NN and increase the dropout regularization. We could also do hyperparameter tuning on the dropout and network sizes.\n\n## Hyperparameter tuning\n\ntensor2tensor also supports hyperparameter tuning on Cloud ML Engine. Note the addition of the autotune flags.\n<p>\nThe `transformer_poetry_range` was registered in problem.py above.", "%%bash\n\nXXX This takes about 15 hours and consumes about 420 ML units. 
Uncomment if you wish to proceed anyway\n\nDATADIR=gs://${BUCKET}/poetry/data\nOUTDIR=gs://${BUCKET}/poetry/model_hparam\nJOBNAME=poetry_$(date -u +%y%m%d_%H%M%S)\necho $OUTDIR $REGION $JOBNAME\ngsutil -m rm -rf $OUTDIR\necho \"'Y'\" | t2t-trainer \\\n --data_dir=gs://${BUCKET}/poetry/subset \\\n --t2t_usr_dir=./poetry/trainer \\\n --problem=$PROBLEM \\\n --model=transformer \\\n --hparams_set=transformer_poetry \\\n --output_dir=$OUTDIR \\\n --hparams_range=transformer_poetry_range \\\n --autotune_objective='metrics-poetry_line_problem/accuracy_per_sequence' \\\n --autotune_maximize \\\n --autotune_max_trials=4 \\\n --autotune_parallel_trials=4 \\\n --train_steps=7500 --cloud_mlengine --worker_gpu=4", "When I ran the above job, it took about 15 hours and finished with these as the best parameters:\n<pre>\n{\n \"trialId\": \"37\",\n \"hyperparameters\": {\n \"hp_num_hidden_layers\": \"4\",\n \"hp_learning_rate\": \"0.026711152525921437\",\n \"hp_hidden_size\": \"512\",\n \"hp_attention_dropout\": \"0.60589466163419292\"\n },\n \"finalMetric\": {\n \"trainingStep\": \"8000\",\n \"objectiveValue\": 0.0276162791997\n }\n</pre>\nIn other words, the accuracy per sequence achieved was 0.027 (as compared to 0.019 before hyperparameter tuning, so a <b>40% improvement!</b>) using 4 hidden layers, a learning rate of 0.0267, a hidden size of 512 and a dropout probability of 0.606. This is in spite of training for only 7500 steps instead of 75,000 steps ... we could train for 75k steps with these parameters, but I'll leave that as an exercise for you.\n<p>\nInstead, let's try predicting with this optimized model. Note the addition of the hp* flags in order to override the values hardcoded in the source code. (there is no need to specify learning rate and dropout because they are not used during inference). 
I am using 37 because I got the best result at trialId=37", "%%bash\n# same as the above training job ...\nBEST_TRIAL=28 # CHANGE as needed.\nTOPDIR=gs://${BUCKET}\nOUTDIR=${TOPDIR}/poetry/model_hparam/$BEST_TRIAL\nDATADIR=${TOPDIR}/poetry/data\nMODEL=transformer\nHPARAMS=transformer_poetry\n\n# the file with the input lines\nDECODE_FILE=data/poetry/rumi_leads.txt\n\nBEAM_SIZE=4\nALPHA=0.6\n\nt2t-decoder \\\n --data_dir=$DATADIR \\\n --problem=$PROBLEM \\\n --model=$MODEL \\\n --hparams_set=$HPARAMS \\\n --output_dir=$OUTDIR \\\n --t2t_usr_dir=./poetry/trainer \\\n --decode_hparams=\"beam_size=$BEAM_SIZE,alpha=$ALPHA\" \\\n --decode_from_file=$DECODE_FILE \\\n --hparams=\"num_hidden_layers=4,hidden_size=512\"\n\n%%bash \nDECODE_FILE=data/poetry/rumi_leads.txt\ncat ${DECODE_FILE}.*.decodes", "Take the first three lines. I'm showing the first line of the couplet provided to the model, how the AI model that we trained completes it and how Rumi completes it:\n<p>\nINPUT: where did the handsome beloved go <br/>\nAI: where art thou worse to me than dead <br/>\nRUMI: I wonder, where did that tall, shapely cypress tree go?\n<p>\nINPUT: he spread his light among us like a candle <br/>\nAI: like the hurricane eclipse <br/>\nRUMI: Where did he go? So strange, where did he go without me? <br/>\n<p>\nINPUT: all day long my heart trembles like a leaf <br/>\nAI: and through their hollow aisles it plays <br/>\nRUMI: All alone at midnight, where did that beloved go? \n<p>\nOh wow. The couplets as completed are quite decent considering that:\n* We trained the model on American poetry, so feeding it Rumi is a bit out of left field.\n* Rumi, of course, has a context and thread running through his lines while the AI (since it was fed only that one line) doesn't. \n\n<p>\n\"Spreading light like a hurricane eclipse\" is a metaphor I won't soon forget. And it was created by a machine learning model!\n\n## Serving poetry\n\nHow would you serve these predictions? 
There are two ways:\n<ol>\n<li> Use [Cloud ML Engine](https://cloud.google.com/ml-engine/docs/deploying-models) -- this is serverless and you don't have to manage any infrastructure.\n<li> Use [Kubeflow](https://github.com/kubeflow/kubeflow/blob/master/user_guide.md) on Google Kubernetes Engine -- this uses clusters but will also work on-prem on your own Kubernetes cluster.\n</ol>\n<p>\nIn either case, you need to export the model first and have TensorFlow serving serve the model. The model, however, expects to see *encoded* (i.e. preprocessed) data. So, we'll do that in the Python Flask application (in AppEngine Flex) that serves the user interface.", "%%bash\nTOPDIR=gs://${BUCKET}\nOUTDIR=${TOPDIR}/poetry/model_full2\nDATADIR=${TOPDIR}/poetry/data\nMODEL=transformer\nHPARAMS=transformer_poetry\nBEAM_SIZE=4\nALPHA=0.6\n\nt2t-exporter \\\n --model=$MODEL \\\n --hparams_set=$HPARAMS \\\n --problem=$PROBLEM \\\n --t2t_usr_dir=./poetry/trainer \\\n --decode_hparams=\"beam_size=$BEAM_SIZE,alpha=$ALPHA\" \\\n --data_dir=$DATADIR \\\n --output_dir=$OUTDIR\n\n%%bash\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)\necho $MODEL_LOCATION\nsaved_model_cli show --dir $MODEL_LOCATION --tag_set serve --signature_def serving_default", "Cloud ML Engine", "%%writefile mlengine.json\ndescription: Poetry service on ML Engine\nautoScaling:\n minNodes: 1 # We don't want this model to autoscale down to zero\n\n%%bash\nMODEL_NAME=\"poetry\"\nMODEL_VERSION=\"v1\"\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)\necho \"Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... 
this will take a few minutes\"\ngcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}\n#gcloud ml-engine models delete ${MODEL_NAME}\n#gcloud ml-engine models create ${MODEL_NAME} --regions $REGION\ngcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \\\n --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json\n\n%%bash\ngcloud components update --quiet\ngcloud components install alpha --quiet\n\n%%bash\nMODEL_NAME=\"poetry\"\nMODEL_VERSION=\"v1\"\nMODEL_LOCATION=$(gsutil ls gs://${BUCKET}/poetry/model_full2/export/Servo | tail -1)\ngcloud alpha ml-engine versions create --machine-type=mls1-highcpu-4 ${MODEL_VERSION} \\\n --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=1.5 --config=mlengine.json", "Kubeflow\nFollow these instructions:\n* On the GCP console, launch a Google Kubernetes Engine (GKE) cluster named 'poetry' with 2 nodes, each of which is a n1-standard-2 (2 vCPUs, 7.5 GB memory) VM\n* On the GCP console, click on the Connect button for your cluster, and choose the CloudShell option\n* In CloudShell, run: \n git clone https://github.com/GoogleCloudPlatform/training-data-analyst`\n cd training-data-analyst/courses/machine_learning/deepdive/09_sequence\n* Look at ./setup_kubeflow.sh and modify as appropriate.\nAppEngine\nWhat's deployed in Cloud ML Engine or Kubeflow is only the TensorFlow model. We still need a preprocessing service. That is done using AppEngine. Edit application/app.yaml appropriately.", "!cat application/app.yaml\n\n%%bash\ncd application\n#gcloud app create # if this is your first app\n#gcloud app deploy --quiet --stop-previous-version app.yaml", "Now visit https://mlpoetry-dot-cloud-training-demos.appspot.com and try out the prediction app!\n<img src=\"diagrams/poetry_app.png\" width=\"50%\"/>\nCopyright 2018 Google Inc. 
Licensed under the Apache License, Version 2.0 (the \\\"License\\\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
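The Flask/AppEngine front end in the poetry notebook above has to package user input into the request body that Cloud ML Engine online prediction accepts, which is a JSON object of the form `{"instances": [...]}`. A minimal sketch of building that envelope — note the per-instance key `"input"` is an assumption here; the real key must match the exported model's serving signature, which the notebook inspects with `saved_model_cli show`:

```python
import json

def build_predict_request(poem_lines):
    """Wrap input lines in the {"instances": [...]} envelope used by
    Cloud ML Engine online prediction.

    The per-instance key "input" is hypothetical -- it must be replaced
    with whatever input name the exported serving signature declares."""
    return json.dumps({"instances": [{"input": line} for line in poem_lines]})

body = build_predict_request(["shall i compare thee"])
print(body)
```

The front end would POST this body to the deployed model version; keeping the request construction in a small pure function like this makes it easy to unit-test without touching the network.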
f-guitart/data_mining
notes/03 - Introduction to Data Cleaning.ipynb
gpl-3.0
[ "Beginning with Data Cleaning with Pandas\nIntroduction\nThe aim of this notebook is to introduce you to basic data cleaning using Python and Pandas. Most of the contents follow the ideas presented in the great report of Jonge van der Loo - <cite>Introduction to data cleaning with R</cite>.\nAs explained in [1], most of a Data Scientist's work is spent in cleaning and preparing data before any statistical analysis or model application. It is often said that 80% of data analysis is spent on the process of cleaning and preparing the data (Dasu T, Johnson T (2003). Exploratory Data Mining and Data Cleaning. Wiley-IEEE.). Of course one can find data sources that are ready to go, but usually these sources are already-explored data sets used in practical examples. The reality, however, is that data is full of errors, lacks format consistency and is potentially incomplete. The Data Scientist's mission is to convert these raw data sources into consistent data sets that can be used as input for further analysis.\nEven with technically correct data sets, and after hard work on cleaning, checking and filling, data sets can lack a standard way to organize data values. Hadley Wickham defined this standardization as Tidy Data.\nStatistical Analysis in Five Steps\nVan der Loo defines statistical data analysis in five steps:\n1. Raw data: \n | type checking, normalizing \n v \n2. Technically correct data \n | fix and impute \n v \n3. Consistent data \n | estimate, analyze, derive, etc.\n v\n4. Statistical results\n | tabulate, plot\n v\n5. Formatted output\nIn the previous graph you find a numbered list with five items. Each item represents data in different states and arrows represent actions needed at each step to move to the next one. It has to be noted that as data is transformed from the first to the fifth state it gains value at each step in an incremental way.\nAt the first stage, we have data as is. Raw data is a rough diamond that is going to be cut and shined at each step. 
Among the errors we can find are: wrong types, different variable encodings, data without labels, etc.\nTechnically correct data is data that can be loaded into Pandas structures; let's say that it has the proper \"shape\", with correct names, types, labels and so on. However, variables may be out of range or potentially inconsistent (relations between variables).\nIn the consistent data stage, data is ready for statistical inference. For example, the total income in a year is the sum of all monthly incomes.\nThe later stages contain statistical results derived from the analysis that ultimately can be formatted to provide a synthetic layout.\nBest Practice\nIt is a good idea to keep the input of each step in a local file, and the methods applied ready to be reproduced at each stage (at least). \nWhy?\nWe will see that at each stage, we can potentially lose or modify the initial data. This loss or modification of data can influence the final analysis. All operations performed over a data set should be reproducible.\nPython offers a good interactive environment that facilitates the transformation and computation of datasets while generating a nice scripting framework to reproduce procedures.\nKind Reminder on (statistical) variables\nData cleaning can be seen as the first step of statistical analysis, and as programmers we tend to forget or mix up the statistical terms. What does a statistician mean by variable? For a computer programmer, a variable is a memory space that can be filled with a known (or unknown) quantity of information (a.k.a. a value). Moreover, this space has an associated name that can be used in a program at run time to modify the value of the variable. Well, don't take this as an exact definition, but it helps to give us a general refresh of what a variable is (for us, the computer scientists).\nWell, statisticians have their own variables; let's have an (again) informal definition. 
In statistics, a variable is an attribute that describes a person, place, thing, or idea (often referred to as a feature). \nAs an example, we can take the list of physical characteristics of 10 persons. The objects of the matrix are the persons, the variables are the measured properties, such as the weight or the color of the eyes.", "import pandas as pd\ndf = pd.read_csv('../data/people.csv',index_col=0)\ndf", "Variables can be classified as qualitative (aka, categorical) or quantitative (aka, numeric).\n\n\nQualitative: Qualitative variables take on values that are names or labels. The eye color (e.g., brown, green, gray, etc.) or the sex of the person (e.g., female, male) would be examples of qualitative or categorical variables.\n\n\nQuantitative: Quantitative variables are numeric. They represent a measurable quantity. For example, when we speak of the age of a person, we are talking about the time passed since their birth - a measurable attribute of the person. Therefore, age would be a quantitative variable.\n\n\nBasically, in Pandas there are two fundamental data structures: Series and DataFrame. According to the previous definition of statistical variables, a Series would be a variable and a DataFrame would be a set of variables. Moreover, if we slice a DataFrame (set of variables) we get a Series (variable).", "print(type(df[\"Sex\"]))", "And what kind of variable is, for example, the age of a person?", "print(df[\"Age[years]\"].dtype)", "And what about categorical variables? 
Well, that deserves more than a word; commonly, for us, categorical variables will have the object dtype.", "print(df[\"Sex\"].dtype)\nprint(df[\"Eye Color\"].dtype)", "To know more about this, give these links a read:\n* http://stackoverflow.com/questions/21018654/strings-in-a-dataframe-but-dtype-is-object\n* http://pandas.pydata.org/pandas-docs/stable/categorical.html\n* http://pandas.pydata.org/pandas-docs/version/0.15.2/text.html\nSpecial Values\nThere are some values that are considered special. This is the case when we have missing data (this is when no data value is stored for the variable in an observation). There are other cases, such as Inf and -Inf, etc.\nMissing data\nLet's suppose that we are collecting individuals' attributes, and we lose the height of an individual, or it can't be measured. We always try to make the most of the data we have, so the best way to deal with this kind of situation is to mark these values as missing.", "import pandas as pd\ndf = pd.read_csv('../data/people_with_nas.csv',index_col=0,na_values=\"NaN\")\nprint(df)\nprint(df[\"Weight[kg]\"])\npd.isnull(df[\"Weight[kg]\"])", "You should be asking yourself... Why use np.nan and not None?\nThe answer is (or at least should be) that np.nan allows vectorized operations.", "import numpy as np\n\nv1 = pd.Series([1,2,3,4])\nv2 = pd.Series([1,2,None,4])\nv3 = pd.Series([1,2,np.nan,4])\n\n# this can cause problems\nprint(v1 * v2)\n# this shouldn't\nprint(v2 * v3)", "Further reading: \n* http://pandas.pydata.org/pandas-docs/stable/missing_data.html\n* http://stackoverflow.com/questions/17534106/what-is-the-difference-between-nan-and-none\nExercises\nExercise 1: Load iqsize.csv using the csv library. The result should be a list of lists.\nExercise 2: Do you think that there is any advantage to using dictionaries?\nExercise 3: Load iqsize.csv taking advantage of indexes and dictionaries. Describe the problems that you faced so far.\nExercise 4: Identify dataset variables. 
Are they quantitative or qualitative? Can you identify the units? If you have enough time, change the units to the metric system.\nExercise 5: Check the range of quantitative variables. Are they correct? If not, how would you correct them (don't spend much time)? (If you get an error, catch the exception and pass it.)\nExercise 6: Check the labels of qualitative variables. Are they correct? If not, how would you correct them?\nExercise 7: For quantitative variables, compute the mean and median.\nExercise 8: For qualitative variables, count how many observations of each label exist.\nExercise 9: Compute the Exercise 7 statistics, but now for each label of the Sex variable.\nBibliography\n[1] Jonge van der Loo, Introduction to data cleaning with R - https://cran.r-project.org/doc/contrib/de_Jonge+van_der_Loo-Introduction_to_data_cleaning_with_R.pdf\n[2] Dasu T, Johnson T (2003). Exploratory Data Mining and Data Cleaning. Wiley-IEEE.\n[3] Hadley Wickham. Tidy Data. http://vita.had.co.nz/papers/tidy-data.pdf \n[4] http://stattrek.com/descriptive-statistics/variables.aspx" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
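The np.nan-versus-None point from the data-cleaning notebook above can be illustrated with plain Python floats: `float("nan")` follows the same IEEE-754 rules as `np.nan`, so it propagates silently through arithmetic, while `None` simply breaks it. A small stdlib-only sketch (no pandas needed):

```python
import math

# A NaN placeholder behaves like np.nan: it flows through arithmetic
# without raising, which is what makes vectorized column operations work.
values = [1.0, 2.0, float("nan"), 4.0]
products = [v * 2 for v in values]
print(products)                 # the NaN slot stays NaN
print(math.isnan(products[2]))

# None, by contrast, has no arithmetic defined at all:
try:
    None * 2.0
except TypeError as err:
    print("TypeError:", err)
```

This is why marking missing entries with NaN (rather than None) lets whole-column computations proceed, with missingness checked afterwards via `pd.isnull` or `math.isnan`.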
whitead/numerical_stats
unit_12/hw_2017/problem_set_1.ipynb
gpl-3.0
[ "Problem 1 Instructions\nAnswer the following short-answer questions using Markdown cells\nProblem 1.1\nA $t$-test and $zM$ test rely on the assumption of normality. How could you test that assumption?\nShapiro-Wilk hypothesis test\nProblem 1.2\nWhat is $\\hat{\\beta}$ in OLS? \nThe best-fit slope\nProblem 1.3\nWhat is $S_{\\epsilon}$ in OLS?\nThe standard error in residuals. \nProblem 1.4\nWhat is the difference between SSR and TSS? Is one always greater than the other?\nSSR is the sum of squared distance between fit y and data y. TSS is the sum of squared distance between average y and all y data. $TSS \\geq SSR$\nProblem 1.5\nWe learned three ways to do regression. One way was with algebraic equations (OLS-ND). What were the other two ways?\nOLS-1D, NLS-ND\nProblem 1.6\nAside from a plot, what are the steps to complete for a good regression analysis?\n(1) Justify with Spearman test (2) Check normality of residuals (3) hypothesis tests/confidence intervals as needed\nProblem 1.7\nIs a goodness of fit applicable to a multidimensional regression? If so, what are the x/y axes for this plot?\nyes, $y$ vs $\\hat{y}$\nProblem 1.8\nWhen is it valid to linearize a non-linear problem?\nWhen it doesn't change the noise in the model from normal to some other distribution\nProblem 1.9\nSometimes expressions for a model have $\\hat{y}=\\ldots$ on the left-hand side and other times $y=\\ldots$. What is the difference between these two quantities and what changes on the right-hand side when adding/removing the $\\hat{}$?\n$\\hat{y}$ is the best fit and $y$ is the data. When we write $y$, to achieve equality with our model we have to add $\\epsilon$, some noise to describe the discrepancy between our model and the data.\nProblem Set 2\nProblem 2.1\nAre these numbers normally distributed? 
[-26.3,-24.2, -20.9, -25.8, -24.3, -22.6, -23.0, -26.8, -26.5, -23.1, -20.0, -23.1, -22.4, -22.8]", "import scipy.stats as ss\n\nss.shapiro([-26.3,-24.2, -20.9, -25.8, -24.3, -22.6, -23.0, -26.8, -26.5, -23.1, -20.0, -23.1, -22.4, -22.8])", "The $p$-value is 0.43, so it could be normal\nProblem 2.2\nGiven $\\hat{\\alpha} = 0.2$, $\\hat{\\beta} = 1.6$, $N = 11$, $S^2_\\alpha = 0.4$, $S^2_\\epsilon = 0.5$, $S^2_\\beta = 4$, give a justification for or against there being an intercept", "import numpy as np\nT = (0.2 - 0) / np.sqrt(0.4)\n# Use 11 - 1 because null hypothesis is there is no intercept!\n1 - (ss.t.cdf(T, 11 - 1) - ss.t.cdf(-T, 11 - 1))", "The $p$-value is 0.76, so we cannot reject the null hypothesis of no intercept\nProblem 2.3\nConduct a hypothesis test for the slope being positive using the above data. This is a one-sided hypothesis test. Hint: a good null hypothesis would be that the slope is negative. Describe your test in Markdown first, then complete it in Python, and finally write an explanation of the p-value in the final cell.\nLet's make the null hypothesis that the slope is negative as suggested. We will create a T statistic, which should correspond to some interval/$p$-value that gets smaller (closer to our significance threshold) as we get more positive in our slope. This will work:\n$$ p = 1 - \\int_{0}^{T} p(t)\\,dt$$ \nwhere $T$ is our positive value reflecting how positive the slope is.\nYou can use 1 or 2 deducted degrees of freedom. 1 is correct, since there is no degree of freedom for the intercept here, but it's a little bit tricky to see that.", "T = 1.6 / np.sqrt(4)\nss.t.cdf(T, 11 - 1) - ss.t.cdf(0,11 - 1)", "The $p$-value is 0.28, so it's not guaranteed that the slope is positive. This is due to the large uncertainty in the slope\nProblem 2.4\nWrite a function which computes the SSR for $\\hat{y} = \\beta_0 x + \\beta_1 \\exp\\left( -\\beta_2 x\\right) $. Your function should take in one argument. 
You may assume $x$ and $y$ are defined elsewhere in the code.", "def ssr(beta):\n yhat = beta[0] * x + beta[1] * np.exp(-beta[2] * x)\n return np.sum( (y - yhat)**2)", "Problem 2.5\nIn NLS-ND, if I have 11 $x$ values, each is 2 dimensions, and my fit equation is $y = \\beta_0 x_0 x_1$ (where $x_0$ is first dimension and $x_1$ is second), how many degrees of freedom do I have? Why?\n$11 - 1 = 10$. Only deduct number of fit coefficients for non-linear regression\nProblem 2.6\nIf my model equation is $\\hat{z} = \\beta_0 x y^{\\,\\beta_1}$, what would ${\\mathbf F_{10}}$ be if $\\hat{\\beta_0} = 1.2$, $\\hat{\\beta_1} = 1.8$, $x_1 = 1.0$, $x_2 = 1.5$, $y_1 = 0.5$, $y_2 = -0.2$. Answer in Markdown (you can compute in a Python cell or with calculator).\n$$F_{10} = \\frac{\\partial f(\\hat{\\beta}, x_1)}{\\partial \\beta_0} = x_1 y_1^{\\hat{\\beta}_1} = 0.287$$" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
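The $F_{10}$ value quoted at the end of the problem set above is easy to check numerically. This is just a quick sanity check of that answer, not part of the original solution set:

```python
import math

# Model: z_hat = b0 * x * y**b1, so the partial derivative with respect
# to b0, evaluated at (x1, y1), is x1 * y1**b1_hat.
b1_hat = 1.8
x1, y1 = 1.0, 0.5

F_10 = x1 * y1 ** b1_hat
print(round(F_10, 3))  # ~0.287, matching the quoted answer
```

The value agrees with the worked answer $x_1 y_1^{\hat{\beta}_1} = 0.287$ to three decimal places.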
mromanello/SunoikisisDC_NER
participants_notebooks/Sunoikisis - Named Entity Extraction 1a_PG.ipynb
gpl-3.0
[ "Plan of the lecture\n\nIntroduction: Information Extraction and Named Entity Recognition (NER)\nNER: definitions and tasks (extraction, classification, disambiguation)\nbasic programming concepts in Python\nDoing NER with existing libraries:\nNER from Latin texts with CLTK\nNER from journal articles with NLTK\n\n\n\nPython: basic concepts\nPython is a very flexible and very powerful programming language that can help you work with texts and corpora. Python's philosophy emphasizes code readability and features a simple and very expressive syntax. It is actually easy to master the basic aspects of Python's syntax: it is amazing how much you can do even with just the most basic concepts... The aim of these two lectures is to introduce to you some of these basic operations, let you see some code in action and also give you some exercises where you can apply what you've seen.\nIt is also amazing how many things you can accomplish with some well-written lines of Python! By the end of this class, we'd like to show you how to use Python to perform (some) Natural Language Processing. But of course, you can even just use Python to do something as easy as...", "2 + 3", "Variables and data types\nHere we go! We've written our first line of code... But I guess we want to do something a little more interesting, right? Well, for a start, we might want to use Python to execute some operation (say: sum two numbers like 2 and 3) and process the result to print it on the screen, process it, and reuse it as many times as we want...\nVariables are what we use to store values. Think of a variable as a shoebox where you place your content; next time you need that content (i.e. the result of a previous operation, or for example some input you've read from a file) you simply call the shoebox name...", "result = 2 + 3\n\n#now we print the result\nprint(result)\n\n# by the way, I'm a comment. I'm not executed\n# every line of code following the sign # is ignored:\n# print(\"I'm line n. 
3: do you see me?\")\n# see? You don't see me...\nprint(\"I'm line nr. 5 and you DO see me!\")", "That's it! As easy as that (yes, in some programming languages you have to create or declare the variable first and then use it to fill the shoebox; in Python, you go ahead and simply use it!)\nNow, what do you think we will get when we execute the following code?", "result + 5", "What types of values can we put into a variable? What goes into the shoebox? We can start with the members of this list:\n\nIntegers (-1,0,1,2,3,4...)\nStrings (\"Hello\", \"s\", \"Wolfgang Amadeus Mozart\", \"I am the α and the ω!\"...)\nFloats (3.14159; 2.71828...)\nBooleans (True, False)\n\nIf you're not sure what type of value you're dealing with, you can use the function type(). Yes, it works with variables too...!", "type(\"I am the α and the ω!\")\n\ntype(2.7182818284590452353602874713527)\n\ntype(True)\n\nresult = \"hello\"\n\ntype(result)", "You declare strings with single ('') or double (\"\") quotes: it makes no difference! But now two questions:\n1. what happens if you forget the quotes?\n2. what happens if you put quotes around a number?", "hello = \"goodbye\"\nprint(hello)\n\nprint(\"hello\")\n\ntype(\"2\")", "String, integer, float... Why is that so important? 
Well, try to sum two strings and see what happens...", "\"2\" + \"3\"\n\n#probably you wanted this...\nint(\"2\") + int(\"3\")", "But if we are working with strings, then the \"+\" sign is used to concatenate the strings:", "a = \"interesting!\"\nprint(\"not very \" + a)", "Lists and dictionaries\nLists and dictionaries are two very useful types to store whole collections of data", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\"]\ntype(beatles)\n\n# dictionaries collections of key : value pairs\nbeatles_dictionary = { \"john\" : \"John Lennon\" ,\n \"paul\" : \"Paul McCartney\",\n \"george\" : \"George Harrison\",\n \"ringo\" : \"Ringo Starr\"}\ntype(beatles_dictionary)", "(there are also other types of collection, like Tuples and Sets, but we won't talk about them now; read the links if you're interested!)\nItems in list are accessible using their index. Do remember that indexing starts from 0!", "print(beatles[0])\n\n#indexes can be negative!\nbeatles[-1]", "Dictionaries are collections of key : value pairs. You access the value using the key as index", "beatles_dictionary[\"john\"]\n\nbeatles_dictionary[0]", "There are a bunch of methods that you can apply to list to work with them.\nYou can append items at the end of a list", "beatles.append(\"Billy Preston\")\nbeatles", "You can learn the index of an item", "beatles.index(\"George\")", "You can insert elements at a predefinite index:", "beatles.insert(0, \"Pete Best\")\nprint(beatles.index(\"George\"))\nbeatles", "But most importantly, you can slice lists, producing sub-lists by specifying the range of indexes you want:", "beatles[1:5]", "Do you notice something strange? Yes, the limit index is not inclusive (i.e. 
item beatles[5] is not included)", "beatles[5]", "What happens if you specify an index that is too high?", "beatles[7]", "How can you know how long a list is?", "len(beatles)", "Do remember that indexing starts at 0, so don't make the mistake of thinking that len(yourlist) will give you the last item of your list!", "beatles[len(beatles)]", "This will work!", "beatles[len(beatles) -1]", "If-statements\nMost of the time, what you want to do when you program is to check a value and execute some operation depending on whether the value matches some condition. That's where if statements help!\nIn its easiest form, an if statement is a syntactic construction that checks whether a condition is met; if it is, some part of the code is executed", "bassist = \"Paul McCartney\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass with the Beatles!\")", "Mind the indentation very much! This is the essential element in the syntax of the statement", "bassist = \"Bill Wyman\"\n\nif bassist == \"Paul McCartney\":\n print(\"I'm part of the if statement...\")\n print(\"Paul played bass in the Beatles!\")", "What happens if the condition is not met? Nothing! The indented code is not executed, because the condition is not met, so lines 4 and 5 are simply skipped.\nBut what happens if we de-indent line 5? 
Can you guess why this is what happens?\nMost of the time, we need to specify what happens if the conditions are not met", "bassist = \"\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelse:\n print(\"This guy did not play for the Beatles...\")", "This is the flow:\n* the condition in line 3 is checked\n* is it met?\n * yes: then line 4 is executed\n * no: then line 6 is executed\nOr we can specify many different conditions...", "bassist = \"Bill\"\n\nif bassist == \"Paul McCartney\":\n print(\"Paul played bass in the Beatles!\")\nelif bassist == \"Bill Wyman\":\n print(\"Bill Wyman played for the Rolling Stones!\")\nelse:\n print(\"I don't know what band this guy played for...\")", "For loops\nThe greatest thing about lists is that they are iterable, that is, you can loop through them. What do we do if we want to apply some line of code to each element in a list? Try with a for loop!\nA for loop can be paraphrased as: \"for each element named x in an iterable (e.g. a list): do some code (e.g. print the value of x)\"", "for b in beatles:\n print(b + \" was one of the Beatles\")", "Let's break the code down to its parts:\n* b: an arbitrary name that we give to the variable holding every value in the loop (it could have been any name; b is just very convenient in this case!)\n* beatles: the list we're iterating through\n* : as in the if-statements: don't forget the colon!\n* indent: also, don't forget to indent this code! 
it's the only thing that is telling python that line 2 is part of the for loop!\n* line 2: the function that we want to execute for each item in the iterables\nNow, let's join if statements and for loop to do something nice...", "beatles = [\"John\", \"Paul\", \"George\", \"Ringo\"]\nfor b in beatles:\n if b == \"Paul\":\n instrument = \"bass\"\n elif b == \"John\":\n instrument = \"rhythm guitar\"\n elif b == \"George\":\n instrument = \"lead guitar\"\n elif b == \"Ringo\":\n instrument = \"drum\"\n print(b + \" played \" + instrument + \" with the Beatles\")", "Input and Output\nOne of the most frequent tasks that programmers do is reading data from files, and write some of the output of the programs to a file. \nIn Python (as in many language), we need first to open a file-handler with the appropriate mode in order to process it. Files can be opened in:\n* read mode (\"r\")\n* write mode (\"w\")\n* append mode\nLet's try to read the content of one of the txt files of our Sunoikisis directory\nFirst, we open the file handler in read mode:", "#see? 
we assign the file-handler to a variable, or we wouldn't be able\n#to do anything with that!\nf = open(\"NOTES.md\", \"r\")", "note that \"r\" is optional: read is the default mode!\nNow there are a bunch of things we can do:\n* read the full content in one variable with this code:\ncontent = f.read()\n\nread the lines in a list of lines:\n\nlines = f.readlines()\n\nor, which is the easiest, simply read the content one line at the time with a for loop; the f object is iterable, so this is as easy as:", "for l in f:\n print(l)", "Once you're done, don't forget to close the handle:", "f.close()\n\n#all together\nf = open(\"NOTES.md\")\nfor l in f:\n print(l)\nf.close()", "Now, there's a shortcut statement, which you'll often see and is very convenient, because it takes care of opening, closing and cleaning up the mess, in case there's some error:", "with open(\"NOTES.md\") as f:\n #mind the indent!\n for l in f:\n #double indent, of course!\n print(l)", "Now, how about writing to a file? Let's try to write a simple message on a file; first, we open the handler in write mode", "out = open(\"test.txt\", \"w\")\n\n#the file is now open; let's write something in it\nout.write(\"This is a test!\\nThis is a second line (separated with a new-line feed)\")", "The file has been created! Let's check this out", "#don't worry if you don't understand this code!\n#We're simply listing the content of the current directory...\nimport os\nos.listdir()", "But before we can do anything (e.g. open it with your favorite text editor) you have to close the file-handler!", "out.close()", "Let's look at its content", "with open(\"test.txt\") as f:\n print(f.read())", "Again, also for writing we can use a with statement, which is very handy.\nBut let's have a look at what happens here, so we understand a bit better why \"write mode\" must be used carefully!", "with open(\"test.txt\", \"w\") as out:\n out.write(\"Oooops! 
new content\")", "Let's have a look at the content of \"test.txt\" now", "with open(\"test.txt\") as f:\n print(f.read())\n ", "See? After we opened the file in \"write mode\" for the second time, all content of the file was erased and replaced with the new content that we wrote!!!\nSo keep in mind: when you open a file in \"w\" mode:\n\nif it doesn't exist, a new file with that name is created\nif it does exist, it is completely overwritten and all previous content is lost\n\nIf you want to write content to an existing file without losing its previous content, you have to open the file with the \"a\" mode:", "with open(\"test.txt\", \"a\") as out:\n out.write('''\\nAnd this is some additional content.\nThe new content is appended at the bottom of the existing file''')\n \n\nwith open(\"test.txt\") as f:\n print(f.read())\n ", "Functions\nAbove, we have opened a file several times to inspect its content. Each time, we had to type the same code over and over. This is the typical case where you would like to save some typing (and write code that is much easier to maintain!) by defining a function.\nA function is a block of reusable code that can be invoked to perform a definite task. Most often (but not necessarily), it accepts one or more arguments and returns a certain value.\nWe have already seen one of the built-in functions of Python: print(\"some str\")\nBut it's actually very easy to define your own. Let's define the function to print out the file content, as we said before. 
Note that this function takes one argument (the file name) and prints out some text, but doesn't return any value.", "def printFileContent(file_name):\n #the function takes one argument: file_name\n with open(file_name) as f:\n print(f.read())", "As usual, mind the indent!\nfile_name (line 1) is the placeholder that we use in the function for any argument that we want to pass to the function in our real-life reuse of the code.\nNow, if we want to use our function we simply call it with the file name that we want to print out", "printFileContent(\"README.md\")", "Now, let's see an example of a function that returns some value to the users. Those functions typically take some arguments, process them and yield back the result of this processing.\nHere's the easiest example possible: a function that takes two numbers as arguments, sums them and returns the result.", "def sumTwoNumbers(first_int, second_int):\n s = first_int + second_int\n return s\n\n#could be even shorter:\ndef sumTwoNumbers(first_int, second_int):\n return first_int + second_int\n\nsumTwoNumbers(5, 6)", "Most often, you want to assign the result returned to a variable, so that you can go on working with the results...", "s = sumTwoNumbers(5,6)\ns * 2", "Error and exceptions\nThings can go wrong, especially when you're a beginner. But don't panic! Errors and exceptions are actually a good thing! Python gives you detailed reports about what is wrong, so read them carefully and try to figure out what is not right.\nOnce you're getting better, you'll actually learn that you can do something good with the exceptions: you'll learn how to handle them, and to anticipate some of the most common problems that dirty data can face you with...\nNow, what happens if you forget the all-important syntactic constraint of the code indent?", "if 1 > 0:\n print(\"Well, we know that 1 is bigger than 0!\")", "Pretty clear, isn't it? What you get is an error: a construct that is not grammatical in Python's syntax. 
Note that you're also told where (at what line, and at what point of the code) your error is occurring. That is not always perfect (there are cases where the problem is actually occuring before what Python thinks), but in this case it's pretty OK.\nWhat if you forget to define a variable (or you misspell the name of a variable)?", "var = \"bla bla\"\nif var1:\n print(\"If you see me, then I was defined...\")", "You get an exception! The syntax of your code is right, but the execution met with a problem that caused the program to stop.\nNow, in your program, you can handle selected exception: this means that you can write your code in a way that the program would still be executed even if a certain exception is raised.\nLet's see what happens if we use our function to try to print the content of a file that doesn't exist:", "printFileContent(\"file_that_is_not_there.txt\")", "We get a FileNotFoundError! Now, let's re-write the function so that this event (somebody uses the function with a wrong file name) is taken care of...", "def printFileContent(file_name):\n #the function takes one argument: file_name\n try:\n with open(file_name) as f:\n print(f.read())\n except FileNotFoundError:\n print(\"The file does not exist.\\nNevertheless, I do like you, and I will print something to you anyway...\")\n\nprintFileContent(\"file_that_doesnt_exist.txt\")", "Appendix: useful links\nPython: how to install\nIf you're using Mac OSX or Linux, you already have (at least one version) of Python installed. Anyway, it's very easy to install Python or upgrade your version. See:\nhttps://wiki.python.org/moin/BeginnersGuide/Download\nJupyter: how to install\nhttp://jupyter.org/install.html\nPython and Jupyter come also in a pre-packaged environment (which is designed especially for data science) called Anaconda. You might be interested to look at that.\nPython 2 or Python 3?\nPython 3 is the latest version of Python (currently, 3.6.1). 
It's a major upgrade from Python 2, but the code has been somewhat dramatically changed in the passage from 2 to 3 and there are some backward-compatibility problems. Some versions of Linux or Mac OSX still come with Python 2.7 (the final version of Python 2).\nAnyway, Python 3 is currently in active development: it's where the cutting-edge improvements and new stuff are being developed (especially for NLP and the NLTK library). In this code, we assume Python 3!\nhttps://wiki.python.org/moin/Python2orPython3\nNLTK: Book\nWould you like a book that is a great introduction to Python for absolute beginners, is a wonderful resource to learn the basics of Natural Language Processing and gives you a thorough introduction to the NLTK library to do NLP in Python? Oh, yeah, I was forgetting: that can be read for free on the internet? Yes, it's Christmas time!\nhttp://www.nltk.org/book/" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
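The file-handling and exception-handling patterns introduced across the Python-basics notebook above (the `with` statement, the `"w"` vs `"a"` modes, and catching `FileNotFoundError`) fit together in one short self-contained sketch. It uses a throwaway temp file so nothing in the working directory is touched:

```python
import os
import tempfile

def print_file_content(file_name):
    # Same pattern as the notebook's printFileContent: a with-statement
    # plus a try/except guarding against a missing file.
    try:
        with open(file_name) as f:
            print(f.read())
    except FileNotFoundError:
        print("The file does not exist.")

# Exercise the "w" (create/overwrite) and "a" (append) modes.
path = os.path.join(tempfile.mkdtemp(), "test.txt")
with open(path, "w") as out:
    out.write("first line\n")
with open(path, "a") as out:
    out.write("appended line\n")

print_file_content(path)                       # prints both lines
print_file_content("file_that_is_not_there.txt")  # handled gracefully
```

Because the `with` blocks close the handles automatically, there is no `close()` call to forget, and the missing-file case is handled instead of crashing the program.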
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/building_production_ml_systems/solutions/0_export_data_from_bq_to_gcs.ipynb
apache-2.0
[ "Exporting data from BigQuery to Google Cloud Storage\nIn this notebook, we export BigQuery data to GCS so that we can reuse our Keras model that was developed on CSV data.\nUncomment the following line if you are running the notebook locally:", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\n#%load_ext google.cloud.bigquery\n\nimport os\n\nfrom google.cloud import bigquery", "Change the following cell as necessary:", "# Change with your own bucket and project below:\nBUCKET = \"<BUCKET>\"\nPROJECT = \"<PROJECT>\"\n\nOUTDIR = \"gs://{bucket}/taxifare/data\".format(bucket=BUCKET)\n\nos.environ['BUCKET'] = BUCKET\nos.environ['OUTDIR'] = OUTDIR\nos.environ['PROJECT'] = PROJECT", "Create BigQuery tables\nIf you have not already created a BigQuery dataset for our data, run the following cell:", "bq = bigquery.Client(project = PROJECT)\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\ntry:\n bq.create_dataset(dataset)\n print(\"Dataset created\")\nexcept:\n print(\"Dataset already exists\")", "Let's create a table with 1 million examples.\nNote that the order of columns is exactly what was in our CSV files.", "%%bigquery\n\nCREATE OR REPLACE TABLE taxifare.feateng_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 1000)) = 1\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0", "Make the validation dataset 1/10 the size of the training dataset.", "%%bigquery\n\nCREATE
OR REPLACE TABLE taxifare.feateng_valid_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers,\n 'unused' AS key\nFROM `nyc-tlc.yellow.trips`\nWHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\nAND\n trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0", "Export the tables as CSV files", "%%bash\n\necho \"Deleting current contents of $OUTDIR\"\ngsutil -m -q rm -rf $OUTDIR\n\necho \"Extracting training data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_training_data \\\n $OUTDIR/taxi-train-*.csv\n\necho \"Extracting validation data to $OUTDIR\"\nbq --location=US extract \\\n --destination_format CSV \\\n --field_delimiter \",\" --noprint_header \\\n taxifare.feateng_valid_data \\\n $OUTDIR/taxi-valid-*.csv\n\ngsutil ls -l $OUTDIR\n\n!gsutil cat gs://$BUCKET/taxifare/data/taxi-train-000000000000.csv | head -2", "Copyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/thu/cmip6/models/sandbox-2/ocean.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: THU\nSource ID: SANDBOX-2\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:40\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'thu', 'sandbox-2', 'ocean')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Seawater Properties\n3. Key Properties --&gt; Bathymetry\n4. Key Properties --&gt; Nonoceanic Waters\n5. Key Properties --&gt; Software Properties\n6. Key Properties --&gt; Resolution\n7. Key Properties --&gt; Tuning Applied\n8. Key Properties --&gt; Conservation\n9. Grid\n10. Grid --&gt; Discretisation --&gt; Vertical\n11. Grid --&gt; Discretisation --&gt; Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --&gt; Tracers\n14. Timestepping Framework --&gt; Baroclinic Dynamics\n15. Timestepping Framework --&gt; Barotropic\n16. Timestepping Framework --&gt; Vertical Physics\n17. Advection\n18. Advection --&gt; Momentum\n19. Advection --&gt; Lateral Tracers\n20. Advection --&gt; Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --&gt; Momentum --&gt; Operator\n23. 
Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\n24. Lateral Physics --&gt; Tracers\n25. Lateral Physics --&gt; Tracers --&gt; Operator\n26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\n27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\n30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n32. Vertical Physics --&gt; Interior Mixing --&gt; Details\n33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n35. Uplow Boundaries --&gt; Free Surface\n36. Uplow Boundaries --&gt; Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\n39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\n40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\n41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of ocean model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Model Family\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of ocean model.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nBasic approximations made in the ocean.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nList of prognostic variables in the ocean component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.2. 
Eos Functional Temp\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTemperature used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n", "2.3. Eos Functional Salt\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSalinity used in EOS for sea water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n", "2.4. Eos Functional Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n", "2.5. Ocean Freezing Point\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2.6. 
Ocean Specific Heat\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "2.7. Ocean Reference Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nReference date of bathymetry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. Ocean Smoothing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Source\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe source of bathymetry in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Nonoceanic Waters\nNon oceanic waters treatment in ocean\n4.1. Isolated Seas\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how isolated seas is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. River Mouth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Canonical Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Range Horizontal Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. 
Number Of Horizontal Gridpoints\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.5. Number Of Vertical Levels\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "6.6. Is Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6.7. Thickness Level 1\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThickness of first surface ocean level (in meters)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. 
Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state (e.g. THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation\nConservation in the ocean component\n8.1. 
Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBrief description of conservation methodology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "8.3. Consistency Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Was Flux Correction Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes conservation involve flux correction ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of grid in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Grid --&gt; Discretisation --&gt; Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical coordinates in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. Partial Steps\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nUsing partial steps with Z or Z* vertical coordinate in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "11. Grid --&gt; Discretisation --&gt; Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal grid type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Staggering\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nHorizontal grid staggering type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHorizontal discretisation scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. 
Diurnal Cycle\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiurnal cycle type", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13. Timestepping Framework --&gt; Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time stepping scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTracers time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14. Timestepping Framework --&gt; Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics type", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBaroclinic dynamics scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.3. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBaroclinic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15. Timestepping Framework --&gt; Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime splitting method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Time Step\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nBarotropic time step (in seconds)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "16. Timestepping Framework --&gt; Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of vertical time stepping in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of advection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Advection --&gt; Momentum\nProperties of lateral momentum advection scheme in ocean\n18.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of lateral momentum advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n", "18.2. Scheme Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of ocean momentum advection scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. ALE\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nUsing ALE for vertical advection ? 
(if vertical coordinates are sigma)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19. Advection --&gt; Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "19.3. Effective Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.5. 
Passive Tracers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nPassive tracers advected", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.6. Passive Tracers Advection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Advection --&gt; Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.2. Flux Limiter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "21. Lateral Physics\nOcean lateral physics\n21.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lateral physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of transient eddy representation in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n", "22. Lateral Physics --&gt; Momentum --&gt; Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.3. 
Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics momentum scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momentum scheme in the ocean\n23.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics momentum eddy viscosity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "23.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.5. Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24. Lateral Physics --&gt; Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "24.2. Submesoscale Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "25. Lateral Physics --&gt; Tracers --&gt; Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Order\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Constant Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.3. Variable Coefficient\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Coeff Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.5. 
Coeff Backscatter\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Constant Val\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.3. Flux Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV flux (advective or skew)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. 
Added Diffusivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vertical physics in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. 
Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Closure Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.3. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "31.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32. 
Vertical Physics --&gt; Interior Mixing --&gt; Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of vertical convection in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.2. Tide Induced Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.3. Double Diffusion\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there double diffusion", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.4. Shear Mixing\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there interior shear mixing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33. 
Vertical Physics --&gt; Interior Mixing --&gt; Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for tracers in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "33.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34. 
Vertical Physics --&gt; Interior Mixing --&gt; Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of interior mixing for momentum in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "34.2. Constant\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "34.3. Profile\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "34.4. Background\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may be none)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35. Uplow Boundaries --&gt; Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of free surface in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "35.2. Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nFree surface scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "35.3. Embeded Seaice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "36. Uplow Boundaries --&gt; Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "36.2. Type Of Bbl\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of bottom boundary layer in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "36.3. Lateral Mixing Coef\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "36.4. Sill Overflow\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe any specific treatment of sill overflows", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of boundary forcing in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.2. 
Surface Pressure\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.3. Momentum Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.4. Tracers Flux Correction\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.5. Wave Effects\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.6. River Runoff Budget\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "37.7. Geothermal Heating\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum bottom friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of momentum lateral friction in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. 
Scheme\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of sunlight penetration scheme in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "40.2. Ocean Colour\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "40.3. Extinction Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.2. 
From Sea Ice\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "41.3. Forced Mode Restoring\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jacobdein/alpine-soundscapes
examples/Playing with rasterio and fiona.ipynb
mit
[ "Playing with rasterio and fiona\nVariable declarations\nsample_points_filepath โ€“ path to sample points shapefile <br />\nDEM_filepath โ€“ path to DEM raster <br />\nelevation_filepath โ€“ path to export excel file containing elevation values for each sample site", "sample_points_filepath = \"\"\n\nDEM_filepath = \"\"\n\nelevation_filepath = \"\"", "Import statements", "import rasterio\nimport fiona\nimport pandas\nimport numpy\nfrom pyproj import Proj, transform\nfrom fiona.crs import from_epsg\n\nwith fiona.open(sample_points_filepath, 'r') as source_points:\n points = [f['geometry']['coordinates'] for f in source_points]\n \n original = Proj(source_points.crs)\n destination = Proj(from_epsg(4326))\n #destination = Proj(' +proj=latlong +ellps=bessel')\n \n with rasterio.drivers():\n with rasterio.open(DEM_filepath) as source_dem:\n s = source_dem.sample(points)\n elevs = numpy.array([n[0] for n in s])\n source_dem.close\n source_points.close", "Transform points", "points_projected = []\nfor p in points:\n x, y = p\n lat, long = transform(original, destination, x, y)\n points_projected.append((long,lat))\n\npoints_projected_pd = pandas.DataFrame(points_projected, columns=[\"lat\", \"long\"])\n\nwith fiona.open(sample_points_filepath, 'r') as source_points:\n names = numpy.array([p['properties']['NAME'] for p in source_points])\n IDs = numpy.array([p['properties']['ID'] for p in source_points])\n \n source_points.close\n\nelevs_names = [{\"ID\":IDs[i],\"elevation\":elevs[i], \"name\":names[i], \"latitude\":points_projected[i][0], \"longitude\":points_projected[i][1]} for i in range(len(elevs))]\n\nelevs_pd = pandas.DataFrame(elevs_names)\n\nelevs_pd\n\nelevs_pd.to_excel(elevation_filepath)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
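The rasterio/fiona notebook in the record above pulls elevation values out of a DEM with `source_dem.sample(points)`. Under the hood, that call maps each world coordinate to a row/column index through the raster's affine transform and reads the corresponding cell. A minimal pure-numpy sketch of that index mapping for a simple north-up grid (the `sample_grid` helper, the grid origin, and the pixel size are illustrative assumptions, not rasterio's API):

```python
import numpy as np

def sample_grid(grid, origin_x, origin_y, pixel_size, points):
    """Nearest-cell sample of a north-up grid at world coordinates.

    Mimics the core of rasterio's DatasetReader.sample for the simplest
    affine transform: x = origin_x + col * pixel_size,
    y = origin_y - row * pixel_size (y decreases as the row index grows).
    """
    values = []
    for x, y in points:
        col = int((x - origin_x) // pixel_size)
        row = int((origin_y - y) // pixel_size)
        values.append(grid[row, col])
    return np.array(values)

# A 4x4 toy "DEM" whose value encodes its own flat index.
dem = np.arange(16).reshape(4, 4)
# Upper-left corner at (100, 200) with 10-unit square pixels.
elevs = sample_grid(dem, 100.0, 200.0, 10.0, [(105.0, 195.0), (135.0, 165.0)])
print(elevs.tolist())  # -> [0, 15]
```

The real `sample` call also handles rotated transforms, nodata values and multiple bands; this sketch only shows the coordinate-to-index arithmetic.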
tensorflow/fairness-indicators
g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb
apache-2.0
[ "Copyright 2020 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Fairness Indicators TensorBoard Plugin Example Colab\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_indicators_TensorBoard_Plugin_Example_Colab\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TensorBoard_Plugin_Example_Colab.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nOverview\nIn this activity, you'll use Fairness Indicators for TensorBoard. 
With the plugin, you can visualize fairness evaluations for your runs and easily compare performance across groups.\nImporting\nRun the following code to install the required libraries.", "!pip install -q -U pip==20.2\n\n!pip install fairness_indicators 'absl-py<0.9,>=0.7'\n!pip install google-api-python-client==1.8.3\n!pip install tensorboard-plugin-fairness-indicators\n!pip install tensorflow-serving-api==2.8.0", "Restart the runtime. After the runtime is restarted, continue with following cells without running previous cell again.", "# %tf.disable_v2_behavior()\t# Uncomment this line if running in Google Colab.\n\nimport datetime\nimport os\nimport tempfile\nfrom tensorboard_plugin_fairness_indicators import summary_v2\nimport tensorflow.compat.v1 as tf\n\n# example_model.py is provided in fairness_indicators package to train and\n# evaluate an example model. \nfrom fairness_indicators import example_model\n\ntf.compat.v1.enable_eager_execution()", "Data and Constants", "# To know about dataset, check Fairness Indicators Example Colab at:\n# https://github.com/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_Example_Colab.ipynb\n\ntrain_tf_file = tf.keras.utils.get_file('train.tf', 'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')\nvalidate_tf_file = tf.keras.utils.get_file('validate.tf', 'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')\n\nBASE_DIR = tempfile.gettempdir()\nTEXT_FEATURE = 'comment_text'\nLABEL = 'toxicity'\nFEATURE_MAP = {\n # Label:\n LABEL: tf.io.FixedLenFeature([], tf.float32),\n # Text:\n TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),\n\n # Identities:\n 'sexual_orientation': tf.io.VarLenFeature(tf.string),\n 'gender': tf.io.VarLenFeature(tf.string),\n 'religion': tf.io.VarLenFeature(tf.string),\n 'race': tf.io.VarLenFeature(tf.string),\n 'disability': tf.io.VarLenFeature(tf.string),\n}", "Train the Model", "model_dir = 
os.path.join(BASE_DIR, 'train',\n datetime.datetime.now().strftime('%Y%m%d-%H%M%S'))\n\nclassifier = example_model.train_model(model_dir,\n train_tf_file,\n LABEL,\n TEXT_FEATURE,\n FEATURE_MAP)", "Run TensorFlow Model Analysis with Fairness Indicators\nThis step might take 2 to 5 minutes.", "tfma_eval_result_path = os.path.join(BASE_DIR, 'tfma_eval_result')\n\nexample_model.evaluate_model(classifier,\n validate_tf_file,\n tfma_eval_result_path,\n 'gender',\n LABEL,\n FEATURE_MAP)", "Visualize Fairness Indicators in TensorBoard\nBelow you will visualize Fairness Indicators in Tensorboard and compare performance of each slice of the data on selected metrics. You can adjust the baseline comparison slice as well as the displayed threshold(s) using the drop down menus at the top of the visualization. You can also select different evaluation runs using the drop down menu at the top-left corner.\nWrite Fairness Indicators Summary\nWrite summary file containing all required information to visualize Fairness Indicators in TensorBoard.", "import tensorflow.compat.v2 as tf2\n\nwriter = tf2.summary.create_file_writer(\n os.path.join(model_dir, 'fairness_indicators'))\nwith writer.as_default():\n summary_v2.FairnessIndicators(tfma_eval_result_path, step=1)\nwriter.close()", "Launch TensorBoard\nNavigate to \"Fairness Indicators\" tab to visualize Fairness Indicators.", "%load_ext tensorboard\n\n%tensorboard --logdir=$model_dir" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
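The Fairness Indicators colab in the record above evaluates the toxicity model sliced by an identity feature ('gender') so TensorBoard can compare performance across groups. The basic quantity behind such a comparison is simply a metric computed per slice; a minimal sketch with plain numpy (the `accuracy_by_group` helper and the toy data are illustrative, not the TFMA/Fairness Indicators API):

```python
import numpy as np

def accuracy_by_group(labels, preds, groups):
    """Accuracy per slice: the per-group numbers a fairness dashboard plots."""
    labels, preds, groups = map(np.asarray, (labels, preds, groups))
    out = {}
    for g in np.unique(groups):
        mask = groups == g
        # Plain string keys so the result is easy to index and print.
        out[str(g)] = float(np.mean(labels[mask] == preds[mask]))
    return out

labels = [1, 0, 1, 1, 0, 0]
preds  = [1, 0, 0, 1, 0, 1]
groups = ['a', 'a', 'a', 'b', 'b', 'b']
print(accuracy_by_group(labels, preds, groups))
```

Comparing these per-slice values against an overall baseline, at one or more decision thresholds, is exactly what the TensorBoard plugin's UI automates.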
beangoben/HistoriaDatos_Higgs
Dia1/.ipynb_checkpoints/Intro a Matplotlib-checkpoint.ipynb
gpl-2.0
[ "Intro a Matplotlib\nMatplotlib = Libreria para graficas cosas matematicas\nQue es Matplotlib?\n\nMatplotlin es un libreria para crear imagenes 2D de manera facil.\nChecate mas en :\n\nPagina oficial : http://matplotlib.org/\nGalleria de ejemplo: http://matplotlib.org/gallery.html\nUna libreria mas avanzada que usa matplotlib, Seaborn: http://stanford.edu/~mwaskom/software/seaborn/\nLibreria de visualizacion interactiva: http://bokeh.pydata.org/\nBuenisimo Tutorial: http://www.labri.fr/perso/nrougier/teaching/matplotlib/\n\nPara usar matplotlib, solo tiene que importar el modulo ..tambien te conviene importar numpy pues es muy util", "import numpy as np # modulo de computo numerico\nimport matplotlib.pyplot as plt # modulo de graficas\nimport pandas as pd # modulo de datos\n# esta linea hace que las graficas salgan en el notebook\n%matplotlib inline", "Crear graficas (plot)\nCrear graficas es muy facil en matplotlib, si tienes una lista de valores X y otra y..solo basta usar :", "x = np.array([0,1,2,3,4])\ny = x**2 #cuadramos x\nplt.plot(x,y)\nplt.title(\"Grafica sencilla\")\nplt.show()", "Podemos usar la funcion np.linspace para crear valores en un rango, por ejemplo si queremos 100 numeros entre 0 y 10 usamos:", "x = np.linspace(0,10,100)\ny = x**2 #cuadramos x\nplt.plot(x,y)\nplt.title(\"Grafica sencilla\")\nplt.show()", "Y podemos graficar dos cosas al mismo tiempo:", "x = np.linspace(0,10,100)\ny1 = x # una linea\ny2 = x**2 # cuadramos x\nplt.plot(x,y1)\nplt.plot(x,y2)\nplt.title(\"Dos graficas sencillas\")\nplt.show()", "Que tal si queremos distinguir cada linea? 
Pues usamos legend(), de leyenda..tambien tenemos que agregarles nombres a cada plot", "x = np.linspace(0,10,100)\ny1 = x # una linea\ny2 = x**2 # cuadramos x\nplt.plot(x,y1,label=\"Linea\")\nplt.plot(x,y2,label=\"Cuadrado\")\nplt.legend()\nplt.title(\"Dos graficas sencillas\")\nplt.show()", "Tambien podemos hacer mas cosas, como dibujar solamente los puntos, o las lineas con los puntos usando linestyle:", "x = np.linspace(0,10,100)\ny1 = x # una linea\ny2 = x**2 # cuadramos x\ny3 = np.sqrt(x) # sacamos raiz cuadrada a x\ny4 = np.power(x,1.5) # elevamos x a la potencia 1.5\n\nplt.plot(x,y1,label=\"Linea\",linestyle='-') # linea\nplt.plot(x,y2,label=\"Cuadrado\",linestyle=':') # puntitos\nplt.plot(x,y3,label=\"Raiz\",linestyle='-.') # linea y punto\nplt.plot(x,y4,label=\"potencia 1.5\",linestyle='--') # lineas salteadas\nplt.legend()\nplt.title(\"Dos graficas sencillas\")\nplt.show()", "Dibujando puntos (scatter)\nAveces no queremos dibujar lineas, sino puntos, esto nos da informacion de donde se encuentras datos de manera espacial. 
Para esto podemos usarlo de la siguiente manera:", "N = 50 # numero de puntos\nx = np.random.rand(N) # numeros aleatorios entre 0 y 1\ny = np.random.rand(N)\nplt.scatter(x, y)\nplt.title(\"Scatter de puntos aleatorios\")\nplt.show()", "Pero ademas podemos meter mas informacion, por ejemplo dar colores cada punto, o darle tamanos diferentes:", "N = 50 # numero de puntos\nx = np.random.rand(N) # numeros aleatorios entre 0 y 1\ny = np.random.rand(N)\ncolores = np.random.rand(N) # colores aleatorios\nradios= 15 * np.random.rand(N) # numeros aleatorios entre 0 y 15\nareas = np.pi * radios**2 # la formula de area de un circulo\nplt.scatter(x, y, s=areas, c=colores, alpha=0.5)\nplt.title(\"Scatter plot de puntos aleatorios\")\nplt.show()", "Histogramas (hist)\nLos histogramas nos muestran distribuciones de datos, la forma de los datos, nos muestran el numero de datos de diferentes tipos:", "N=500\nx = np.random.rand(N) # numeros aleatorios entre 0 y 1\nplt.hist(x)\nplt.title(\"Histograma aleatorio\")\nplt.show()", "otro tipo de datos, tomados de una campana de gauss, es decir una distribucion normal:", "N=500\nx = np.random.randn(N)\nplt.hist(x)\nplt.title(\"Histograma aleatorio Normal\")\nplt.show()\n\nN=1000\nx1 = np.random.randn(N)\nx2 = 2+2*np.random.randn(N)\n\nplt.hist(x1,20,alpha=0.3)\nplt.hist(x2,20,alpha=0.3)\nplt.title(\"Histograma de dos distribuciones\")\nplt.show()", "Bases de datos en el internet\nAveces los datos que queremos se encuentran en el internet. Asumiendo que se encuentran ordenados y en un formato amigable siempre los podemos bajar y guardar como un DataFrame.\nPor ejemplo:\nGapminder es una pagina con mas de 500 conjunto de daatos relacionado a indicadores globales como ingresos, producto interno bruto (PIB=GDP) y esperanza de vida.\nAqui bajamos la base de datos de esperanza de vida, lo guardamos en memoria y lo lodeamos como un excel:\nOjo! 
Aqui usamos .head() para imprimir los primeros 5 renglones del dataframe pues son gigantescos los datos.", "xurl=\"http://spreadsheets.google.com/pub?key=phAwcNAVuyj2tPLxKvvnNPA&output=xls\"\ndf=pd.read_excel(xurl)\nprint(\"Tamano completo es %s\"%str(df.shape))\ndf.head()", "Arreglando los Datos\nHead nos permite darle un vistazo a los datos... asi a puro ojo vemos que las columnas son anios y los renglones los paises...ponder reversar esto con transpose, pero tambien vemos que esta con indices enumerados, prefeririamos que los indices fueran los paises, entonces los cambiamos y tiramos la columna que ya no sirve...al final un head para ver que todo esta bien... a este juego de limpiar y arreglar datos se llama \"Data Wrangling\"", "df = df.rename(columns={'Life expectancy with projections. Yellow is IHME': 'Life expectancy'})\ndf.index=df['Life expectancy']\ndf=df.drop('Life expectancy',axis=1)\ndf=df.transpose()\ndf.head()", "Entonces ahora podemos ver la calidad de vida en Mexico atravez del tiempo:", "df['Mexico'].plot()\nprint(\"== Esperanza de Vida en Mexico ==\")", "de esta visualizacion vemos que la caldiad ha ido subiendo apartir de 1900, ademas vemos mucho movimiento entre 1890 y 1950, justo cuando habia muchas guerras en Mexico.\nTambien podemos seleccionar un rango selecto de años, vemos que este rango es interesante entonces", "subdf=df[ df.index >= 1890 ]\nsubdf=subdf[ subdf.index <= 1955 ]\nsubdf['Mexico'].plot()\nplt.title(\"Esperanza de Vida en Mexico entre 1890 y 1955\")\nplt.show()", "o sin tanto rollo, podemos restringuir el rango de nuestra grafica con xlim (los limites del eje X)", "df['Mexico'].plot()\nplt.xlim(1890,1955)\nplt.title(\"Esperanza de Vida en Mexico entre 1890 y 1955\")\nplt.show()", "Tambien es importante ver como esto se compara con otros paises, podemos comparar con todo Norteamerica:", "df[['Mexico','United States','Canada']].plot()\nplt.title(\"Esperanza de Vida en Norte-America\")\nplt.show()", "Ejercicios:\n\nCompara la
esperanza de vida en Latino America (o al menos algunos paises de ella).\nSolo grafica los años entre 1900 y 2000, tambien 2000-2014.\nQuita los paises que tienen valores 'Nan', checa la funcion .dropna().\nSaca estadisticas para paises Latino Americanos.\nLo mismo de arriba para diferentes periodos 1800-1900, 1900-2000, 2000-2014" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
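The histogram cells in the matplotlib tutorial record above (`plt.hist(x, 20)`) delegate the actual binning to the same logic as `numpy.histogram`: equal-width bins spanning the data range, with the rightmost edge inclusive so every sample lands in some bin. A small sketch that checks the bin counts directly, without any plotting:

```python
import numpy as np

# Draw 1000 standard-normal samples, like the notebook's np.random.randn(N).
rng = np.random.default_rng(0)
x = rng.normal(size=1000)

# 20 equal-width bins over [x.min(), x.max()]; edges has one more entry
# than counts because each bin is bounded by two edges.
counts, edges = np.histogram(x, bins=20)
print(counts.sum(), len(edges))  # -> 1000 21
```

Because the rightmost bin is closed on both sides, the counts always sum to the sample size, which is a handy sanity check before layering two histograms with `alpha` as the tutorial does.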
google-research/big_transfer
colabs/big_transfer_jax.ipynb
apache-2.0
[ "Copyright 2020 Google LLC.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "<a href=\"https://colab.research.google.com/github/google-research/big_transfer/blob/master/colabs/big_transfer_jax.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nBigTransfer (BiT): A step-by-step tutorial for state-of-the-art vision\nThis colab demonstrates how to:\n1. Load BiT models in JAX.\n2. Make predictions using BiT pre-trained on CIFAR-10.\n3. Fine-tune BiT on 5-shot CIFAR-100 and get amazing results!\nIt is good to get an understanding or quickly try things. 
However, to run longer training runs, we recommend using the commandline scripts at http://github.com/google-research/big_transfer\nInstall flax and run imports", "!pip install flax\n\nimport io\nimport re\n\nfrom functools import partial\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n\nimport jax\nimport jax.numpy as jnp\n\nimport flax\nimport flax.nn as nn\nimport flax.optim as optim\nimport flax.jax_utils as flax_utils\n\n# Assert that GPU is available\nassert 'Gpu' in str(jax.devices())\n\nimport tensorflow as tf\nimport tensorflow_datasets as tfds", "Architecture and function for transforming BiT weights to JAX to format", "def fixed_padding(x, kernel_size):\n pad_total = kernel_size - 1\n pad_beg = pad_total // 2\n pad_end = pad_total - pad_beg\n\n x = jax.lax.pad(x, 0.0,\n ((0, 0, 0),\n (pad_beg, pad_end, 0), (pad_beg, pad_end, 0),\n (0, 0, 0)))\n return x\n\n\ndef standardize(x, axis, eps):\n x = x - jnp.mean(x, axis=axis, keepdims=True)\n x = x / jnp.sqrt(jnp.mean(jnp.square(x), axis=axis, keepdims=True) + eps)\n return x\n\n\nclass GroupNorm(nn.Module):\n \"\"\"Group normalization (arxiv.org/abs/1803.08494).\"\"\"\n\n def apply(self, x, num_groups=32):\n\n input_shape = x.shape\n group_shape = x.shape[:-1] + (num_groups, x.shape[-1] // num_groups)\n\n x = x.reshape(group_shape)\n\n # Standardize along spatial and group dimensions\n x = standardize(x, axis=[1, 2, 4], eps=1e-5)\n x = x.reshape(input_shape)\n\n bias_scale_shape = tuple([1, 1, 1] + [input_shape[-1]])\n x = x * self.param('scale', bias_scale_shape, nn.initializers.ones)\n x = x + self.param('bias', bias_scale_shape, nn.initializers.zeros)\n return x\n\n\nclass StdConv(nn.Conv):\n\n def param(self, name, shape, initializer):\n param = super().param(name, shape, initializer)\n if name == 'kernel':\n param = standardize(param, axis=[0, 1, 2], eps=1e-10)\n return param\n\n\nclass RootBlock(nn.Module):\n\n def apply(self, x, width):\n x = fixed_padding(x, 7)\n x = StdConv(x, width, (7, 
7), (2, 2),\n padding=\"VALID\",\n bias=False,\n name=\"conv_root\")\n\n x = fixed_padding(x, 3)\n x = nn.max_pool(x, (3, 3), strides=(2, 2), padding=\"VALID\")\n\n return x\n\n\nclass ResidualUnit(nn.Module):\n \"\"\"Bottleneck ResNet block.\"\"\"\n\n def apply(self, x, nout, strides=(1, 1)):\n x_shortcut = x\n needs_projection = x.shape[-1] != nout * 4 or strides != (1, 1)\n\n group_norm = GroupNorm\n conv = StdConv.partial(bias=False)\n\n x = group_norm(x, name=\"gn1\")\n x = nn.relu(x)\n if needs_projection:\n x_shortcut = conv(x, nout * 4, (1, 1), strides, name=\"conv_proj\")\n x = conv(x, nout, (1, 1), name=\"conv1\")\n\n x = group_norm(x, name=\"gn2\")\n x = nn.relu(x)\n x = fixed_padding(x, 3)\n x = conv(x, nout, (3, 3), strides, name=\"conv2\", padding='VALID')\n\n x = group_norm(x, name=\"gn3\")\n x = nn.relu(x)\n x = conv(x, nout * 4, (1, 1), name=\"conv3\")\n\n return x + x_shortcut\n\n\nclass ResidualBlock(nn.Module):\n\n def apply(self, x, block_size, nout, first_stride):\n x = ResidualUnit(\n x, nout, strides=first_stride,\n name=\"unit01\")\n for i in range(1, block_size):\n x = ResidualUnit(\n x, nout, strides=(1, 1),\n name=f\"unit{i+1:02d}\")\n return x\n\n\nclass ResNet(nn.Module):\n \"\"\"ResNetV2.\"\"\"\n\n def apply(self, x, num_classes=1000,\n width_factor=1, num_layers=50):\n block_sizes = _block_sizes[num_layers]\n\n width = 64 * width_factor\n\n root_block = RootBlock.partial(width=width)\n x = root_block(x, name='root_block')\n\n # Blocks\n for i, block_size in enumerate(block_sizes):\n x = ResidualBlock(x, block_size, width * 2 ** i,\n first_stride=(1, 1) if i == 0 else (2, 2),\n name=f\"block{i + 1}\")\n\n # Pre-head\n x = GroupNorm(x, name='norm-pre-head')\n x = nn.relu(x)\n x = jnp.mean(x, axis=(1, 2))\n\n # Head\n x = nn.Dense(x, num_classes, name=\"conv_head\",\n kernel_init=nn.initializers.zeros)\n\n return x.astype(jnp.float32)\n\n\n_block_sizes = {\n 50: [3, 4, 6, 3],\n 101: [3, 4, 23, 3],\n 152: [3, 8, 36, 3],\n }\n\n\ndef 
transform_params(params, params_tf, num_classes, init_head=False):\n # BiT and JAX models have different naming conventions, so we need to\n # properly map TF weights to JAX weights\n params['root_block']['conv_root']['kernel'] = (\n params_tf['resnet/root_block/standardized_conv2d/kernel'])\n\n for block in ['block1', 'block2', 'block3', 'block4']:\n units = set([re.findall(r'unit\\d+', p)[0] for p in params_tf.keys()\n if p.find(block) >= 0])\n for unit in units:\n for i, group in enumerate(['a', 'b', 'c']):\n params[block][unit][f'conv{i+1}']['kernel'] = (\n params_tf[f'resnet/{block}/{unit}/{group}/'\n 'standardized_conv2d/kernel'])\n params[block][unit][f'gn{i+1}']['bias'] = (\n params_tf[f'resnet/{block}/{unit}/{group}/'\n 'group_norm/beta'][None, None, None])\n params[block][unit][f'gn{i+1}']['scale'] = (\n params_tf[f'resnet/{block}/{unit}/{group}/'\n 'group_norm/gamma'][None, None, None])\n\n projs = [p for p in params_tf.keys()\n if p.find(f'{block}/{unit}/a/proj') >= 0]\n assert len(projs) <= 1\n if projs:\n params[block][unit]['conv_proj']['kernel'] = params_tf[projs[0]]\n\n params['norm-pre-head']['bias'] = (\n params_tf['resnet/group_norm/beta'][None, None, None])\n params['norm-pre-head']['scale'] = (\n params_tf['resnet/group_norm/gamma'][None, None, None])\n\n if init_head:\n params['conv_head']['kernel'] = params_tf['resnet/head/conv2d/kernel'][0, 0]\n params['conv_head']['bias'] = params_tf['resnet/head/conv2d/bias']\n else:\n params['conv_head']['kernel'] = np.zeros(\n (params['conv_head']['kernel'].shape[0], num_classes), dtype=np.float32)\n params['conv_head']['bias'] = np.zeros(num_classes, dtype=np.float32)", "Run BiT-M-ResNet50x1 already fine-tuned on CIFAR-10\nBuild model and load weights", "with tf.io.gfile.GFile('gs://bit_models/BiT-M-R50x1-CIFAR10.npz', 'rb') as f:\n params_tf = np.load(f)\nparams_tf = dict(zip(params_tf.keys(), params_tf.values()))\n\nfor k in params_tf:\n params_tf[k] = jnp.array(params_tf[k])\n\nResNet_cifar10 = 
ResNet.partial(num_classes=10)\n\ndef resnet_fn(params, images):\n return ResNet_cifar10.partial(num_classes=10).call(params, images)\n\nresnet_init = ResNet_cifar10.init_by_shape\n_, params = resnet_init(jax.random.PRNGKey(0), [([1, 224, 224, 3], jnp.float32)])\n\ntransform_params(params, params_tf, 10, init_head=True)", "Prepare data", "data_builder = tfds.builder('cifar10')\ndata_builder.download_and_prepare()\n\ndef _pp(data):\n im = data['image']\n im = tf.image.resize(im, [128, 128])\n im = (im - 127.5) / 127.5\n data['image'] = im\n return {'image': data['image'], 'label': data['label']}\n\ndata = data_builder.as_dataset(split='test')\ndata = data.map(_pp)\ndata = data.batch(100)\ndata_iter = data.as_numpy_iterator()", "Run BiT", "correct, n = 0, 0\nfor batch in data_iter:\n preds = resnet_fn(params, batch['image'])\n correct += np.sum(np.argmax(preds, axis=1) == batch['label'])\n n += len(preds)\n\nprint(f\"CIFAR-10 accuracy of BiT-M-R50x1: {correct / n:0.3%}\")", "Run finetuning on CIFAR-100\nPrepare data", "data_builder = tfds.builder('cifar100')\ndata_builder.download_and_prepare()\n\ndef get_data(split, repeats, batch_size, images_per_class, shuffle_buffer):\n data = data_builder.as_dataset(split=split)\n\n if split == 'train':\n data = data.batch(50000)\n\n data = data.as_numpy_iterator().next()\n\n np.random.seed(0)\n indices = [idx \n for cls in range(100)\n for idx in np.random.choice(np.where(data['label'] == cls)[0],\n images_per_class,\n replace=False)]\n\n data = {'image': data['image'][indices],\n 'label': data['label'][indices]}\n\n data = tf.data.Dataset.zip((tf.data.Dataset.from_tensor_slices(data['image']),\n tf.data.Dataset.from_tensor_slices(data['label'])))\n data = data.map(lambda x, y: {'image': x, 'label': y})\n else:\n data = data.map(lambda d: {'image': d['image'], 'label': d['label']})\n\n def _pp(data):\n im = data['image']\n if split == 'train':\n im = tf.image.resize(im, [160, 160])\n im = tf.image.random_crop(im, [128, 128, 
3])\n im = tf.image.flip_left_right(im)\n else:\n im = tf.image.resize(im, [128, 128])\n im = (im - 127.5) / 127.5\n data['image'] = im\n data['label'] = tf.one_hot(data['label'], 100)\n return {'image': data['image'], 'label': data['label']}\n\n data = data.repeat(repeats)\n data = data.shuffle(shuffle_buffer)\n data = data.map(_pp)\n return data.batch(batch_size)\n\ndata_train = get_data(split='train', repeats=None, images_per_class=5,\n batch_size=64, shuffle_buffer=500)\ndata_test = get_data(split='test', repeats=1, images_per_class=None,\n batch_size=250, shuffle_buffer=1)", "Build model and load weights", "@jax.jit\ndef resnet_fn(params, images):\n return ResNet.partial(num_classes=100).call(params, images)\n\ndef cross_entropy_loss(*, logits, labels):\n logp = jax.nn.log_softmax(logits)\n return -jnp.mean(jnp.sum(logp * labels, axis=1))\n\ndef loss_fn(params, images, labels):\n logits = resnet_fn(params, images)\n return cross_entropy_loss(logits=logits, labels=labels)\n\n@jax.jit\ndef update_fn(opt, lr, images, labels):\n l, g = jax.value_and_grad(loss_fn)(opt.target, images, labels)\n opt = opt.apply_gradient(g, learning_rate=lr)\n return opt, l\n\nwith tf.io.gfile.GFile('gs://bit_models/BiT-M-R50x1.npz', 'rb') as f:\n params_tf = np.load(f)\nparams_tf = dict(zip(params_tf.keys(), params_tf.values()))\n\nresnet_init = ResNet.partial(num_classes=100).init_by_shape\n_, params = resnet_init(jax.random.PRNGKey(0), [([1, 224, 224, 3], jnp.float32)])\ntransform_params(params, params_tf, 100, init_head=False)", "Run optimization", "def get_lr(step):\n lr = 0.003\n if step < 100:\n return lr * (step / 100)\n else:\n for s in [200, 300, 400]:\n if s < step:\n lr /= 10\n return lr\n\nopt = optim.Momentum(beta=0.9).create(params)\n\nfor step, batch in zip(range(500), data_train.as_numpy_iterator()):\n\n opt, loss_value = update_fn(\n opt, get_lr(step), batch[\"image\"], batch[\"label\"])\n \n if opt.state.step % 100 == 0:\n acc = np.mean([c for test_batch in 
data_test.as_numpy_iterator()\n for c in (np.argmax(test_batch['label'], axis=1) ==\n np.argmax(resnet_fn(opt.target, test_batch['image']), axis=1))])\n print(f\"Step: {opt.state.step}, Test accuracy: {acc:0.3%}\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
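The BiT fine-tuning cells in the record above drive the optimizer with a `get_lr` helper: linear warmup for the first 100 steps, then the rate is divided by 10 after each of steps 200, 300 and 400. A standalone sketch of that warmup-plus-staircase schedule (the parameter names here are illustrative; only the shape of the schedule comes from the notebook):

```python
def get_lr_sketch(step, base_lr=0.003, warmup=100, decay_steps=(200, 300, 400)):
    """Linear warmup followed by a /10 staircase decay, as in the
    get_lr helper of the fine-tuning cell above."""
    if step < warmup:
        # Ramp linearly from 0 up to base_lr over the warmup steps.
        return base_lr * step / warmup
    lr = base_lr
    for s in decay_steps:
        if s < step:  # strictly past the boundary, matching the original
            lr /= 10
    return lr

print(get_lr_sketch(50), get_lr_sketch(150), get_lr_sketch(250))
```

This is the "hyper-rule" part of the BiT-HyperRule recipe: the warmup keeps the first gradient steps from destroying the pre-trained weights, and the staircase decay lets the short 500-step schedule converge.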
NumCosmo/NumCosmo
notebooks/DataNCount/run_ascaso_mcmc_binned.ipynb
gpl-3.0
[ " #!/usr/bin/env python\n\ntry:\n import gi\n gi.require_version('NumCosmo', '1.0')\n gi.require_version('NumCosmoMath', '1.0')\nexcept:\n pass\n\nimport math\nimport matplotlib.pyplot as plt\nfrom gi.repository import GObject\nfrom gi.repository import NumCosmo as nc\nfrom gi.repository import NumCosmoMath as ncm\nimport numpy as np\nfrom astropy.io import fits\nfrom astropy.table import Table\nimport sys\nsys.path.insert(0,'../../scripts')\n\nimport pyccl as ccl\n\nfrom nc_ccl import create_nc_obj, ccl_cosmo_set_high_prec\n\nncm.cfg_init()\nncm.cfg_set_log_handler (lambda msg: sys.stdout.write (msg) and sys.stdout.flush ())\n\nfrom IPython.display import display, HTML\ndisplay(HTML(\"<style>.container { width:80% !important; }</style>\"))", "initialize the Cosmological models", "#CCL cosmology\ncosmo_ccl = ccl.Cosmology(Omega_c = 0.30711 - 0.048254, Omega_b = 0.048254, h = 0.677, sigma8 = 0.8822714165197718, n_s=0.96, Omega_k = 0, transfer_function='eisenstein_hu')\n#ccl_cosmo_set_high_prec (cosmo_ccl)\n\ncosmo_numcosmo, dist, ps_lin, ps_nln, hmfunc = create_nc_obj (cosmo_ccl)\n\npsf = hmfunc.peek_psf ()", "Define proxy modelling\nUse a mass proxy, define the probability for observing a proxy given a mass and redhsift\n$$\nP(\\log\\lambda|M,z) = N(\\mu(M,z), \\sigma^2(M,z))\n$$\nthe mean is\n$$\n\\mu(M,z) = \\mu_0 + a_\\mu^M\\log_{10}\\frac{M}{M_0} + a_\\mu^z\\log_{10}\\frac{1+z}{1+z_0} \n$$\nvariance is\n$$\n\\sigma(M,z) = \\sigma_0 + a_\\sigma^M\\log_{10}\\frac{M}{M_0} + a_\\sigma ^z\\log_{10}\\frac{1+z}{1+z_0} \n$$", "#CosmoSim_proxy model\n#M_0, z_0\ntheta_pivot = [3e14/0.71, 0.6]\n#\\mu_0, a_\\mu^z, a_\\mu^M\ntheta_mu = [3.19, -0.7, 2]\n#\\sigma_0, a_\\sigma^z, a_\\sigma^M\ntheta_sigma = [0.33, 0.,-0.08]\n#Richness object\n\narea = (0.25)*4*np.pi / 100.0\nlnRl = 1.0\nlnRu = 2.0\nzl = 0.25\nzu = 1.0\n\n#Numcosmo_proxy model\ncluster_z = nc.ClusterRedshift.new_from_name(\"NcClusterRedshiftNodist{'z-min': <%20.15e>, 'z-max':<%20.15e>}\" % (zl, zu))\ncluster_m 
= nc.ClusterMass.new_from_name(\"NcClusterMassAscaso{'M0':<%20.15e>,'z0':<%20.15e>,'lnRichness-min':<%20.15e>, 'lnRichness-max':<%20.15e>}\" % (3e14/(0.71),0.6, lnRl, lnRu))\ncluster_m.param_set_by_name('mup0', 3.19)\ncluster_m.param_set_by_name('mup1', 2/np.log(10))\ncluster_m.param_set_by_name('mup2', -0.7/np.log(10))\ncluster_m.param_set_by_name('sigmap0', 0.33)\ncluster_m.param_set_by_name('sigmap1', -0.08/np.log(10))\ncluster_m.param_set_by_name('sigmap2', 0/np.log(10))", "initialize the ClusterAbundance object", "#Numcosmo Cluster Abundance\n\n#First we need to define the multiplicity function here we will use the tinker\nmulf = nc.MultiplicityFuncTinker.new()\nmulf.set_linear_interp (True)\nmulf.set_mdef(nc.MultiplicityFuncMassDef.CRITICAL)\nmulf.set_Delta(200)\n#Second we need to construct a filtered power spectrum \n\nhmf = nc.HaloMassFunction.new(dist,psf,mulf)\nhmf.set_area(area)\n\nca = nc.ClusterAbundance.new(hmf,None)\nmset = ncm.MSet.new_array([cosmo_numcosmo,cluster_m,cluster_z])\n\nncount = Nc.DataClusterNCount.new (ca, \"NcClusterRedshiftNodist\", \"NcClusterMassAscaso\")\nncount.catalog_load (\"ncount_ascaso.fits\")\n\ncosmo_numcosmo.props.Omegac_fit = True\ncosmo_numcosmo.props.w0_fit = True\ncluster_m.props.mup0_fit = True\nmset.prepare_fparam_map ()\n\nncount.set_binned (True)\n\ndset = ncm.Dataset.new ()\ndset.append_data (ncount)\n\nlh = Ncm.Likelihood (dataset = dset)\nfit = Ncm.Fit.new (Ncm.FitType.NLOPT, \"ln-neldermead\", lh, mset, Ncm.FitGradType.NUMDIFF_FORWARD)\n\nprint (Ncm.func_eval_log_pool_stats ())\n\ninit_sampler = Ncm.MSetTransKernGauss.new (0)\ninit_sampler.set_mset (mset)\ninit_sampler.set_prior_from_mset ()\ninit_sampler.set_cov_from_rescale (1.0)\n\nnwalkers = 300\n\nwalker = Ncm.FitESMCMCWalkerAPES.new (nwalkers, mset.fparams_len ())\n\nesmcmc = Ncm.FitESMCMC.new (fit, nwalkers, init_sampler, walker, Ncm.FitRunMsgs.SIMPLE)\nesmcmc.set_nthreads (3)\n\nesmcmc.set_data_file 
(\"ncount_ascaso_mcmc_binned.fits\")\n\nesmcmc.start_run ()\nesmcmc.run_lre (50, 1.0e-3)\nesmcmc.end_run ()\n\nesmcmc.mean_covar ()\nfit.log_covar ()\n\n\nntests = 100.0\nnwalkers = 300\nburnin = 80\nmcat = Ncm.MSetCatalog.new_from_file_ro (\"ncount_ascaso_mcmc_binned.fits\", nwalkers * burnin)\n\nmcat.log_current_chain_stats ()\nmcat.calc_max_ess_time (ntests, Ncm.FitRunMsgs.FULL);\nmcat.calc_heidel_diag (ntests, 0.0, Ncm.FitRunMsgs.FULL);\n\nmset.pretty_log ()\nmcat.log_full_covar ()\nmcat.log_current_stats ()\n\nbe, post_lnnorm_sd = mcat.get_post_lnnorm ()\nlnevol, glnvol = mcat.get_post_lnvol (0.6827)\n\nNcm.cfg_msg_sepa ()\nprint (\"# Bayesian evidence: % 22.15g +/- % 22.15g\" % (be, post_lnnorm_sd))\nprint (\"# 1 sigma posterior volume: % 22.15g\" % lnevol)\nprint (\"# 1 sigma posterior volume (Gaussian approximation): % 22.15g\" % glnvol)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
noppanit/machine-learning
word-vector/.ipynb_checkpoints/Word Vector-checkpoint.ipynb
mit
[ "Introduction\nMy notebook to learn Word Vectors.\nReference\nhttps://www.kaggle.com/c/word2vec-nlp-tutorial/details/part-1-for-beginners-bag-of-words\nImport the pandas package, then use the \"read_csv\" function to read\nthe labeled training data", "# Import\nimport pandas as pd \nimport numpy as np\nfrom bs4 import BeautifulSoup\nimport nltk", "Reading the Data\nThe necessary files can be downloaded from the Data page. The first file that you'll need is labeledTrainData.tsv, which contains 25,000 IMDB movie reviews, each with a positive or negative sentiment label.\nNext, read the tab-delimited file into Python. To do this, we can use the pandas package, introduced in the Titanic tutorial, which provides the read_csv function for easily reading and writing data files. If you haven't used pandas before, you may need to install it.", "train = pd.read_csv(\"labeledTrainData.tsv\", header=0, delimiter=\"\\t\", quoting=3)\n\ntrain.shape # Get the shape, 25000 rows and 3 columns\n\ntrain.columns.values # Show the column names\n\nprint(train[\"review\"][0]) # Looking at some review", "Data Cleaning and Text Preprocessing\nRemoving HTML Markup: The BeautifulSoup Package\nFirst, we'll remove the HTML tags. For this purpose, we'll use the Beautiful Soup library. If you don't have Beautiful Soup installed, do:", "# Initialize the BeautifulSoup object on a single movie review \nexample1 = BeautifulSoup(train[\"review\"][0], 'html.parser') \n\nprint(example1.get_text())", "Dealing with Punctuation, Numbers and Stopwords: NLTK and regular expressions\nWhen considering how to clean the text, we should think about the data problem we are trying to solve. For many problems, it makes sense to remove punctuation. On the other hand, in this case, we are tackling a sentiment analysis problem, and it is possible that \"!!!\" or \":-(\" could carry sentiment, and should be treated as words. 
In this tutorial, for simplicity, we remove the punctuation altogether, but it is something you can play with on your own.\nSimilarly, in this tutorial we will remove numbers, but there are other ways of dealing with them that make just as \nmuch sense. For example, we could treat them as words, or replace them all with a placeholder string such as \"NUM\".\nTo remove punctuation and numbers, we will use a package for dealing with regular expressions, called re. The package comes built-in with Python; no need to install anything. For a detailed description of how regular expressions work, see the package documentation. Now, try the following:", "import re\n# Use regular expressions to do a find-and-replace\n# Basically we get rid of all the word that doesn't begin with a-Z or A-Z\nletters_only = re.sub(\"[^a-zA-Z]\", # The pattern to search for\n \" \", # The pattern to replace it with\n example1.get_text() ) # The text to search\nprint(letters_only)\n\nlower_case = letters_only.lower() # Convert to lower case\nwords = lower_case.split() # Split into words\n\nprint(words)", "Finally, we need to decide how to deal with frequently occurring words that don't carry much meaning. Such words are called \"stop words\"; in English they include words such as \"a\", \"and\", \"is\", and \"the\". Conveniently, there are Python packages that come with stop word lists built in. Let's import a stop word list from the Python Natural Language Toolkit (NLTK). 
You'll need to install the library if you don't already have it on your computer; you'll also need to install the data packages that come with it, as follows:", "from nltk.corpus import stopwords # Import the stop word list\n# Remove stop words from \"words\"\n\nwords = [w for w in words if not w in stopwords.words(\"english\")]\nprint(words)\n\ndef review_to_words( raw_review ):\n    # Function to convert a raw review to a string of words\n    # The input is a single string (a raw movie review), and \n    # the output is a single string (a preprocessed movie review)\n    #\n    # 1. Remove HTML\n    review_text = BeautifulSoup(raw_review, 'html.parser').get_text() \n    #\n    # 2. Remove non-letters \n    letters_only = re.sub(\"[^a-zA-Z]\", \" \", review_text) \n    #\n    # 3. Convert to lower case, split into individual words\n    words = letters_only.lower().split() \n    #\n    # 4. In Python, searching a set is much faster than searching\n    # a list, so convert the stop words to a set\n    stops = set(stopwords.words(\"english\")) \n    # \n    # 5. Remove stop words\n    meaningful_words = [w for w in words if not w in stops] \n    #\n    # 6. Join the words back into one string separated by space, \n    # and return the result.\n    return( \" \".join( meaningful_words )) ", "Let's clean all the data", "print(\"Cleaning and parsing the training set movie reviews...\\n\")\n# Get the number of reviews from the dataframe column size\nnum_reviews = train[\"review\"].size\nclean_train_reviews = []\nfor i in range( 0, num_reviews ):\n    # If the index is evenly divisible by 1000, print a message\n    if( (i+1)%1000 == 0 ):\n        print(\"Review %d of %d\\n\" % ( i+1, num_reviews )) \n    clean_train_reviews.append( review_to_words( train[\"review\"][i] ))", "Creating Features from a Bag of Words (Using scikit-learn)\nNow that we have our training reviews tidied up, how do we convert them to some kind of numeric representation for machine learning? One common approach is called a Bag of Words. 
The Bag of Words model learns a vocabulary from all of the documents, then models each document by counting the number of times each word appears. For example, consider the following two sentences:\nSentence 1: \"The cat sat on the hat\"\nSentence 2: \"The dog ate the cat and the hat\"\nFrom these two sentences, our vocabulary is as follows:\n{ the, cat, sat, on, hat, dog, ate, and }\nTo get our bags of words, we count the number of times each word occurs in each sentence. In Sentence 1, \"the\" appears twice, and \"cat\", \"sat\", \"on\", and \"hat\" each appear once, so the feature vector for Sentence 1 is:\n{ the, cat, sat, on, hat, dog, ate, and }\nSentence 1: { 2, 1, 1, 1, 1, 0, 0, 0 }\nSimilarly, the features for Sentence 2 are: { 3, 1, 0, 0, 1, 1, 1, 1}\nIn the IMDB data, we have a very large number of reviews, which will give us a large vocabulary. To limit the size of the feature vectors, we should choose some maximum vocabulary size. Below, we use the 5000 most frequent words (remembering that stop words have already been removed).\nWe'll be using the feature_extraction module from scikit-learn to create bag-of-words features. If you did the Random Forest tutorial in the Titanic competition, you should already have scikit-learn installed; otherwise you will need to install it.", "print(\"Creating the bag of words...\\n\")\nfrom sklearn.feature_extraction.text import CountVectorizer\n\n# Initialize the \"CountVectorizer\" object, which is scikit-learn's\n# bag of words tool. \nvectorizer = CountVectorizer(analyzer = \"word\", \\\n tokenizer = None, \\\n preprocessor = None, \\\n stop_words = None, \\\n max_features = 5000) \n\n# fit_transform() does two functions: First, it fits the model\n# and learns the vocabulary; second, it transforms our training data\n# into feature vectors. 
The input to fit_transform should be a list of \n# strings.\ntrain_data_features = vectorizer.fit_transform(clean_train_reviews)\n\n# Numpy arrays are easy to work with, so convert the result to an \n# array\ntrain_data_features = train_data_features.toarray()", "It has 25,000 rows and 5,000 features (one for each vocabulary word).\nNote that CountVectorizer comes with its own options to automatically do preprocessing, tokenization, and stop word removal -- for each of these, instead of specifying \"None\", we could have used a built-in method or specified our own function to use. See the function documentation for more details. However, we wanted to write our own function for data cleaning in this tutorial to show you how it's done step by step.\nNow that the Bag of Words model is trained, let's look at the vocabulary:", "vocab = vectorizer.get_feature_names()\nprint(vocab)\n\n# Sum up the counts of each vocabulary word\ndist = np.sum(train_data_features, axis=0)\n\n# For each, print the vocabulary word and the number of times it \n# appears in the training set\nfor tag, count in zip(vocab, dist):\n print(count, tag)", "Random Forest\nAt this point, we have numeric training features from the Bag of Words and the original sentiment labels for each feature vector, so let's do some supervised learning! Here, we'll use the Random Forest classifier that we introduced in the Titanic tutorial. The Random Forest algorithm is included in scikit-learn (Random Forest uses many tree-based classifiers to make predictions, hence the \"forest\"). Below, we set the number of trees to 100 as a reasonable default value. More trees may (or may not) perform better, but will certainly take longer to run. 
Likewise, the more features you include for each review, the longer this will take.", "print(\"Training the random forest...\")\nfrom sklearn.ensemble import RandomForestClassifier\n\n# Initialize a Random Forest classifier with 100 trees\nforest = RandomForestClassifier(n_estimators = 100) \n\n# Fit the forest to the training set, using the bag of words as \n# features and the sentiment labels as the response variable\n#\n# This may take a few minutes to run\nforest = forest.fit( train_data_features, train[\"sentiment\"] )", "Creating a Submission\nAll that remains is to run the trained Random Forest on our test set and create a submission file. If you haven't already done so, download testData.tsv from the Data page. This file contains another 25,000 reviews and ids; our task is to predict the sentiment label.\nNote that when we use the Bag of Words for the test set, we only call \"transform\", not \"fit_transform\" as we did for the training set. In machine learning, you shouldn't use the test set to fit your model, otherwise you run the risk of overfitting. 
For this reason, we keep the test set off-limits until we are ready to make predictions.", "# Read the test data\ntest = pd.read_csv(\"testData.tsv\", header=0, delimiter=\"\\t\", quoting=3 )\n\n# Verify that there are 25,000 rows and 2 columns\nprint(test.shape)\n\n# Create an empty list and append the clean reviews one by one\nnum_reviews = len(test[\"review\"])\nclean_test_reviews = [] \n\nprint(\"Cleaning and parsing the test set movie reviews...\\n\")\nfor i in range(0, num_reviews):\n    if( (i+1) % 1000 == 0 ):\n        print(\"Review %d of %d\\n\" % (i+1, num_reviews))\n    clean_review = review_to_words( test[\"review\"][i] )\n    clean_test_reviews.append( clean_review )\n\n# Get a bag of words for the test set, and convert to a numpy array\ntest_data_features = vectorizer.transform(clean_test_reviews)\ntest_data_features = test_data_features.toarray()\n\n# Use the random forest to make sentiment label predictions\nresult = forest.predict(test_data_features)\n\n# Copy the results to a pandas dataframe with an \"id\" column and\n# a \"sentiment\" column\noutput = pd.DataFrame( data={\"id\":test[\"id\"], \"sentiment\":result} )\n\n# Use pandas to write the comma-separated output file\noutput.to_csv( \"Bag_of_Words_model.csv\", index=False, quoting=3 )" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jbocharov-mids/W207-Machine-Learning
reference/Naive_Bayes_partially_modified.ipynb
apache-2.0
[ "Build and test a Naive Bayes classifier.\nWe will again use the iris data. In case you don't feel familiar with the iris varieties yet, here are some pictures. The petals are smaller and stick out above the larger, flatter sepals. In many flowers, the sepal is a greenish support below the petals, but the iris sepals are designed specifically as landing pads for bumblebees, and the bright yellow coloring on the sepal directs the bees down into the tight space where pollination happens.\n<img src=\"../Extra/iris.jpg\">", "# This tells matplotlib not to try opening a new window for each plot.\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom sklearn.datasets import load_iris\nfrom sklearn.naive_bayes import BernoulliNB\n\n# Load the data, which is included in sklearn.\niris = load_iris()\nprint 'Iris target names:', iris.target_names\nprint 'Iris feature names:', iris.feature_names\nX, Y = iris.data, iris.target\n\n# Shuffle the data, but make sure that the features and accompanying labels stay in sync.\nnp.random.seed(0)\nshuffle = np.random.permutation(np.arange(X.shape[0]))\nX, Y = X[shuffle], Y[shuffle]\n\n# Split into train and test.\ntrain_data, train_labels = X[:100], Y[:100]\ntest_data, test_labels = X[100:], Y[100:]", "The iris feature values are real valued -- measurements in centimeters. Let's look at histograms of each feature.", "# Create a new figure and set the figsize argument so we get square-ish plots of the 4 features.\nplt.figure(figsize=(15, 3))\n\n# Iterate over the features, creating a subplot with a histogram for each one.\nfor feature in range(train_data.shape[1]):\n plt.subplot(1, 4, feature+1)\n plt.hist(train_data[:,feature], 20)\n plt.title(iris.feature_names[feature])", "To make things simple, let's binarize these feature values. That is, we'll treat each measurement as either \"short\" or \"long\". I'm just going to choose a threshold for each feature. Binning usually depends on the distribution. 
There are two ways to bin data: constant bin width and constant bin size. For now let's just go with the two.\nThis is equivalent to running the previous block of code with only two bins for plt.hist, except that we'll use a different binning approach:", "# Create a new figure and set the figsize argument so we get square-ish plots of the 4 features.\nplt.figure(figsize=(15, 3))\n\n# Iterate over the features, creating a subplot with a histogram for each one.\nfor feature in range(train_data.shape[1]):\n plt.subplot(1, 4, feature+1)\n plt.hist(train_data[:,feature], 2)\n plt.title(iris.feature_names[feature])\n\n# Define a function that applies a threshold to turn real valued iris features into 0/1 features.\n# 0 will mean \"short\" and 1 will mean \"long\".\ndef binarize_iris(data, thresholds=[6.0, 3.0, 2.5, 1.0]):\n # Initialize a new feature array with the same shape as the original data.\n binarized_data = np.zeros(data.shape)\n\n # Apply a threshold to each feature.\n for feature in range(data.shape[1]):\n binarized_data[:,feature] = data[:,feature] > thresholds[feature]\n return binarized_data\n\n# Create new binarized training and test data\nbinarized_train_data = binarize_iris(train_data)\nbinarized_test_data = binarize_iris(test_data)\n", "Recall that Naive Bayes assumes conditional independence of features. With $Y$ the set of labels and $X$ the set of features ($y$ is a specific label and $x$ is a specific feature), Naive Bayes gives the probability of a label $y$ given input features $X$ as:\n$ \\displaystyle P(y|X) \\approx \n \\frac { P(y) \\prod_{x \\in X} P(x|y) }\n { \\sum_{y \\in Y} P(y) \\prod_{x \\in X} P(x|y) }\n$\nLet's estimate some of these probabilities using maximum likelihood, which is just a matter of counting and normalizing. 
\nWe'll start with the prior probability of the label: $P(y)$.", "# Initialize counters for all labels to zero.\nlabel_counts = [0 for i in iris.target_names]\n''' print label_counts '''\nprint iris.target_names\n\n# Calculate the counts of labels in the training data set by iterating over labels.\nfor label in train_labels:\n label_counts[label] += 1\nprint label_counts\n\n# Normalize counts to get the estimates of the probabilities of each label.\ntotal = sum(label_counts)\nprint total\nlabel_probs = [1.0 * count / total for count in label_counts]\nfor (prob, name) in zip(label_probs, iris.target_names):\n print '%15s : %.2f' %(name, prob) ", "Example of what zip() function does (from https://docs.python.org/2/library/functions.html#zip):", "x = [1, 2, 3]\ny = [4, 5, 6]\nzipped = zip(x, y)\nzipped", "Now, we have estimated the prior probabilities of each label, $P(y)$\nNext, let's estimate $P(X|Y)$, that is, the probability of each feature given each label: if I am a flower labeled $y (e.g., setosa)$, what is the probability that my measurements (features) will be $x$? \nRemember that we can get the conditional probability from the joint distribution:\n$\\displaystyle P(X|Y) = \\frac{ P(X,Y) } { P(Y) } \\approx \\frac{ \\textrm{Count}(X,Y) } { \\textrm{Count}(Y) }$\nLet's think carefully about the size of the count matrix we need to build. There are 3 labels $y_1$, $y_2$, and $y_3$ ($setosa$, $versicolor$, and $virginica$) and 4 features $x_0$, $x_1$, $x_2$, and $x_3$ ($petalLength$, $petalWidth$, $sepalLength$, and $sepalWidth$). Each feature has 2 possible values, 0 or 1. So there are actually $3 \\times 4 \\times 2=24$ probabilities we need to estimate: \n$P(x_0=0, Y=y_0)$\n$P(x_0=1, Y=y_0)$\n$P(x_1=0, Y=y_0)$\n$P(x_1=1, Y=y_0)$\n...\nHowever, we already estimated (above) the probability of each label, $P(y)$. And, we know that each feature value is either 0 or 1 (the advantage of having converted the problem to binary). 
So, for example,\n$P(x_0=0, Y=\\textrm{setosa}) + P(x_0=1, Y=\\textrm{setosa}) = P(Y=\\textrm{setosa}) \\approx 0.31$.\nAs a result, we can just estimate probabilities for one of the feature values, say, $x_i = 0$. This requires a $4 \\times 3$ matrix.", "# Initialize a matrix for joint counts of feature=0 and label.\nfeature0_and_label_counts = np.zeros([len(iris.feature_names), len(iris.target_names)])\n'''print feature0_and_label_counts'''\n\n# Just to check our work, let's also keep track of joint counts of feature=1 and label.\nfeature1_and_label_counts = np.zeros([len(iris.feature_names), len(iris.target_names)])\n'''print feature1_and_label_counts'''\n\nprint binarized_train_data.shape\n'''binarized_train_data.shape[1] corresponds to the rows of data'''\nfor i in range(binarized_train_data.shape[0]): \n # Pick up one training example at a time: a label and a feature vector.\n label = train_labels[i]\n features = binarized_train_data[i]\n \n # Update the count matrices.\n for feature_index, feature_value in enumerate(features):\n feature0_and_label_counts[feature_index][label] += (feature_value == 0)\n feature1_and_label_counts[feature_index][label] += (feature_value == 1)\n\n# Let's look at the counts.\nprint 'Feature = 0 and label:\\n', feature0_and_label_counts\nprint '\\nFeature = 1 and label:\\n', feature1_and_label_counts\n\n# As a sanity check, what should the total sum of all counts be?\n# We have 100 training examples, each with 4 features. 
So we should have counted 400 things.\ntotal_sum = feature0_and_label_counts.sum() + feature1_and_label_counts.sum()\nprint '\\nTotal count:', total_sum\n\n# As another sanity check, the label probabilities should be equal to the normalized feature counts for each label.\nprint 'Label probabilities:', (feature0_and_label_counts.sum(0) + feature1_and_label_counts.sum(0)) / total_sum", "We still need to normalize the joint counts to get probabilities: P(feature|label) = P(feature, label) / P(label)", "# Initialize new matrices to hold conditional probabilities.\nfeature0_given_label = np.zeros(feature0_and_label_counts.shape)\nfeature1_given_label = np.zeros(feature1_and_label_counts.shape)\n\n# P(feature|label) = P(feature, label) / P(label) =~ count(feature, label) / count(label).\n# Note that we could do this normalization more efficiently with array operations, but for the sake of clarity,\n# let's iterate over each label and each feature.\nfor label in range(feature0_and_label_counts.shape[1]):\n for feature in range(feature0_and_label_counts.shape[0]):\n feature0_given_label[feature,label] = feature0_and_label_counts[feature,label] / label_counts[label]\n feature1_given_label[feature,label] = feature1_and_label_counts[feature,label] / label_counts[label]\n\n# Here's our estimated conditional probability table.\nprint 'Estimated values of P(feature=0|label):\\n', feature0_given_label\n\n# As a sanity check, which probabilities should sum to 1?\nprint '\\nCheck that P(feature=0|label) + P(feature=1|label) = 1\\n',feature0_given_label + feature1_given_label", "Now that we have all the pieces, let's try making a prediction for the first test example. It looks like this is a setosa (label 0) example with all small measurements -- all the feature values are 0.\nWe start by assuming the prior distribution , which has a slight preference for virginica, followed by versicolor. 
Of course, these estimates come from our training data, which might not be a representative sample. In practice, we may prefer to use a uniform prior.", "# What does the feature vector look like? And what's the true label?\nindex = 0\nprint 'Feature vector:', binarized_test_data[index]\nprint 'Label:', test_labels[index]\n\n# Start with the prior distribution over labels.\npredictions = label_probs[:]\nprint 'Prior:', predictions", "You can think of each feature as an additional piece of evidence. After observing the first feature, we update our belief by multiplying our initial probabilities by the probability of the observation, conditional on each possible label.", "# Let's include the first feature. We use feature0_given_label since the feature value is 0.\npredictions *= feature0_given_label[0]\n\n# We could wait until we've multiplied by all the feature probabilities, but there's no harm in normalizing after each update.\npredictions /= predictions.sum()\nprint 'After observing sepal length:', predictions", "So after observing a short sepal, our updated belief prefers setosa. Let's include the remaining observations.", "# Include the second feature.\npredictions *= feature0_given_label[1]\npredictions *= feature0_given_label[2]\npredictions *= feature0_given_label[3]\n# print feature0_given_label\n# print feature1_given_label\n\n# We could wait until we've multiplied by all the feature probabilities, but there's no harm in normalizing after each update.\npredictions /= predictions.sum()\nprint 'After observing all features:', predictions", "What happened?\nWell, it looks like Naive Bayes came up with the right answer. But it seems overconfident!\nLet's look again at our conditional probability estimates for the features, feature0_given_label and feature1_given_label. Notice that there are a bunch of zero probabilities. This is bad because as soon as we multiply anything by zero, we're guaranteed that our final estimate will be zero. 
This is an overly harsh penalty for an observation that simply never occurred in our training data. Surely there's some possibility, even if very small, that there could exist a setosa with a long sepal.\nThis is where smoothing comes in. \nThe maximum likelihood estimate is only optimal in the case where we have infinite training data. When we have less than that, we need to temper maximum likelihood by reserving some small probability for unseen events. The simplest way to do this is with Laplace smoothing (http://en.wikipedia.org/wiki/Additive_smoothing) -- rather than starting with a count of 0 for each joint (feature, label) observation, we start with a count of $\\alpha$. Note that $\\alpha$ is applied during the training step: it is added to the label_counts and used to initialize the feature0_and_label_counts, which are later normalized into the conditional feature probabilities.\nNow we have covered everything we need to package training and inference into a class. This NaiveBayes class below has been modeled after sklearn's BernoulliNB (details here: http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.BernoulliNB.html).", "class NaiveBayes:\n    # Initialize an instance of the class.\n    def __init__(self, alpha=1.0):\n        self.alpha = alpha     # additive (Laplace) smoothing parameter\n        self.priors = None     # estimated by fit()\n        self.probs = None      # estimated by fit()\n        self.num_labels = 0    # set by fit()\n        self.num_features = 0  # set by fit()\n    \n    def fit(self, train_data, train_labels):\n        # Store number of labels, number of features, and number training examples.\n        self.num_labels = len(np.unique(train_labels))\n        self.num_features = train_data.shape[1]\n        self.num_examples = train_data.shape[0]\n        \n        # Initialize an array of (feature=1, label) counts to alpha.\n        feature0_and_label_counts = np.ones([self.num_features, self.num_labels]) * self.alpha\n        '''We do not care for feature1_and_label_counts (it is 1-feature0_and_label_counts), \n        but we would use the same 
initialization for it'''\n \n # Initialize an array of label counts. Each label gets a smoothed count of 2*alpha because\n # each feature value (0 and 1) gets an extra count of alpha (see details in the Wkipedia page on Additive Smoothing).\n label_counts = np.ones(self.num_labels) * self.alpha * 2\n\n # Count features with value == 1.\n for i in range(self.num_examples):\n label = train_labels[i]\n label_counts[label] += 1\n for feature_index, feature_value in enumerate(train_data[i]):\n feature0_and_label_counts[feature_index][label] += (feature_value == 1)\n\n # Normalize to get probabilities P(feature=1|label).\n self.probs = feature0_and_label_counts / label_counts\n \n # Normalize label counts to get prior probabilities P(label).\n self.priors = label_counts / label_counts.sum()\n\n # Make predictions for each test example and return results.\n ''' Nothing new here: same predict() method as we used in NearestNeighbors class'''\n def predict(self, test_data):\n results = []\n for item in test_data:\n results.append(self._predict_item(item))\n return np.array(results)\n \n # Private function for making a single prediction.\n def _predict_item(self, item):\n # Make a copy of the prior probabilities.\n predictions = self.priors.copy()\n \n # Multiply by each conditional feature probability.\n for (index, value) in enumerate(item):\n feature_probs = self.probs[index]\n if not value: ## same as \"if value != 1\" \n feature_probs = 1 - feature_probs\n predictions *= feature_probs\n\n # Normalize to the [0,1] range and return the label that gives the largest probability.\n predictions /= predictions.sum()\n #print item, predictions\n return predictions.argmax()\n ", "The NumPy method argmax called with axis=None (default) returns the indices of the array being passed where the value reaches its maximum. 
For more details, see here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.argmax.html\nNow we can compare our implementation with the sklearn implementation. Do the predictions agree? What about the estimated parameters? Try changing alpha from 0 to 1.\nNote: I think there might be a bug in the sklearn code. What do you think?", "alpha = 0\nnb = NaiveBayes(alpha=alpha)\nnb.fit(binarized_train_data, train_labels)\nprint \"Trained the model on binarized_train_data\"\n\n# Compute accuracy on the test data.\nprint \"Using our NB classifier\"\npreds = nb.predict(binarized_test_data)\ncorrect, total = 0, 0\nfor pred, label in zip(preds, test_labels):\n if pred == label: correct += 1\n total += 1\nprint 'With alpha = %.2f' %alpha\nprint '[OUR implementation] total: %3d correct: %3d accuracy: %3.2f' %(total, correct, 1.0*correct/total)\n\n# Compare to sklearn's implementation.\nprint \"Using sklearn's NB classifier\"\nclf = BernoulliNB(alpha=alpha)\nclf.fit(binarized_train_data, train_labels)\nprint 'sklearn accuracy: %3.2f' %clf.score(binarized_test_data, test_labels)\n\nprint '\\nOur feature probabilities\\n', nb.probs\nprint '\\nsklearn feature probabilities\\n', np.exp(clf.feature_log_prob_).T\n\nprint '\\nOur prior probabilities\\n', nb.priors\nprint '\\nsklearn prior probabilities\\n', np.exp(clf.class_log_prior_)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
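The smoothed-count recipe in the NaiveBayes class above can be condensed into a small standalone sketch. This is not the notebook's class — function names such as `train_bernoulli_nb` are illustrative — but it implements the same Bernoulli model with additive smoothing:

```python
import numpy as np

def train_bernoulli_nb(X, y, num_labels, alpha=1.0):
    """Estimate P(feature=1|label) and P(label) with additive smoothing."""
    n_examples, n_features = X.shape
    # Each label starts with 2*alpha pseudo-counts (alpha per feature value).
    label_counts = np.ones(num_labels) * alpha * 2
    # Counts of feature == 1 per (feature, label), smoothed by alpha.
    feat1_counts = np.ones((n_features, num_labels)) * alpha
    for i in range(n_examples):
        label_counts[y[i]] += 1
        feat1_counts[:, y[i]] += (X[i] == 1)
    probs = feat1_counts / label_counts        # P(feature=1 | label)
    priors = label_counts / label_counts.sum() # P(label)
    return probs, priors

def predict_bernoulli_nb(item, probs, priors):
    """Return the most probable label for one binary feature vector."""
    p = priors.copy()
    for j, v in enumerate(item):
        fp = probs[j]
        p *= fp if v else (1 - fp)
    return int((p / p.sum()).argmax())
```

With `alpha=1` on four toy examples where feature 0 marks label 0 and feature 1 marks label 1, the sketch recovers the expected smoothed probabilities (3/4 and 1/4) and classifies accordingly.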
qutip/qutip-notebooks
examples/heom/heom-5b-fermions-discrete-boson-model.ipynb
lgpl-3.0
[ "Example 5b: Discrete boson coupled to an impurity + fermionic leads\nHere we model a single fermion coupled to two electronic leads or reservoirs (e.g., this can describe a single quantum dot, a molecular transistor, etc.), also coupled to a discrete bosonic (vibronic) mode. \nNote that in this implementation we primarily follow the definitions used by Christian Schinabeck in his dissertation https://opus4.kobv.de/opus4-fau/files/10984/DissertationChristianSchinabeck.pdf and related publications. In particular this example reproduces some results from https://journals.aps.org/prb/abstract/10.1103/PhysRevB.94.201407\nNotation:\n$K=L/R$ refers to the left or right lead.\n$\sigma=\pm$ refers to input/output\nWe choose a Lorentzian spectral density for the leads, with a peak at the chemical potential. The latter simplifies the notation required for the correlation functions a little, but can be relaxed if necessary.\n$$J(\omega) = \frac{\Gamma W^2}{((\omega-\mu_K)^2 +W^2 )}$$\nThe Fermi distribution is\n$$f_F (x) = (\exp(x) + 1)^{-1}$$\nwhich gives the correlation functions\n$$C^{\sigma}_K(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} d\omega e^{\sigma i \omega t} \Gamma_K(\omega) f_F[\sigma\beta(\omega - \mu)]$$\nAs with the bosonic case, we can treat these with Matsubara, Pade, or fitting approaches.\nThe Pade decomposition approximates the Fermi distribution as \n$$f_F(x) \approx f_F^{\mathrm{approx}}(x) = \frac{1}{2} - \sum_l^{l_{max}} \frac{2k_l x}{x^2 + \epsilon_l^2}$$\n$k_l$ and $\epsilon_l$ are coefficients defined in J. Chem Phys 133,10106\nEvaluating the integral for the correlation functions gives\n$$C_K^{\sigma}(t) \approx \sum_{l=0}^{l_{max}} \eta_{K,l} e^{-\gamma_{K,\sigma,l}t}$$\nwhere\n$$\eta_{K,0} = \frac{\Gamma_KW_K}{2} f_F^{approx}(i\beta_K W)$$\n$$\gamma_{K,\sigma,0} = W_K - \sigma i\mu_K$$ \n$$\eta_{K,l\neq 0} = -i\cdot \frac{k_m}{\beta_K} \cdot \frac{\Gamma_K W_K^2}{-\frac{\epsilon^2_m}{\beta_K^2} + W_K^2}$$\n$$\gamma_{K,\sigma,l\neq 0}= \frac{\epsilon_m}{\beta_K} - \sigma i \mu_K$$ \nThe system is now described by the single-impurity model coupled to a discrete bosonic mode\n$$\nH_{\mathrm{vib}} = H_{\mathrm{SIAM}} + \Omega a^{\dagger}a + \lambda (a+a^{\dagger})c^{\dagger}c.\n$$\nNote: This example is quite numerically challenging. For an easier introduction into the fermionic case, see example 4a.", "%load_ext autoreload\n%autoreload 2\n%pylab inline\n\nfrom qutip import *\n\nimport contextlib\nimport time\n\nimport numpy as np\n\nfrom qutip import *\nfrom qutip.nonmarkov.heom import HEOMSolver\nfrom qutip.nonmarkov.heom import FermionicBath\nfrom qutip.nonmarkov.heom import LorentzianBath\nfrom qutip.nonmarkov.heom import LorentzianPadeBath", "We first specify the properties of the two reservoirs, and plot their power spectra.", "#parameters and spectra check\noptions = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nGamma = 0.01\nW = 10**4 #Wide-band limit\nT = 0.025851991 #in eV\nbeta = 1./T\n\ntheta = 2.\nmu_l = theta/2.\nmu_r = -theta/2.\n\nw_list = np.linspace(-2,2,100)\n\ndef Gamma_L_w(w):\n return Gamma*W**2/((w-mu_l)**2 + W**2)\n\ndef Gamma_R_w(w):\n return Gamma*W**2/((w-mu_r)**2 + W**2)\n\n\ndef f(x):\n kB=1.\n return 1/(exp(x)+1.)\ndef f2(x):\n return 0.5\n\nfig, ax1 = plt.subplots(figsize=(12, 7))\ngam_list_in = [Gamma_L_w(w)*f(beta*(w-mu_l)) for w in w_list]\n\nax1.plot(w_list,gam_list_in, color=\"b\", linewidth=3, label= r\"Gamma_L(w) input (absorption)\")\n\n#ax1.set_ylim(0, 
0.1)\nax1.set_xlabel(\"w\")\nax1.set_ylabel(r\"$n$\")\nax1.legend()\n\n\ngam_list_out = [Gamma_L_w(w)*f(-beta*(w-mu_l)) for w in w_list]\nspec = [Gamma_L_w(w) for w in w_list]\n#print(gam_list)\nax1.plot(w_list,gam_list_out, color=\"r\", linewidth=3, label= r\"Gamma_L(w) output (emission)\")\n\ngam_list_in = [Gamma_R_w(w)*f(beta*(w-mu_r)) for w in w_list]\n\nax1.plot(w_list,gam_list_in, color=\"b\", linewidth=3, label= r\"Gamma_R(w) input (absorption)\")\n\n\n\ngam_list_out = [Gamma_R_w(w)*f(-beta*(w-mu_r)) for w in w_list]\nspec = [Gamma_R_w(w) for w in w_list]\n\nax1.plot(w_list,gam_list_out, color=\"r\", linewidth=3, label= r\"Gamma_R(w) output (emission)\")\n\nax1.set_xlabel(\"w\")\nax1.set_ylabel(r\"$n$\")\nax1.legend()", "Below we give one example data set from Paper\nHere we just give one example of the current as a function of bias voltage, but in general one can try different cut-offs of the bosonic Fock space and the expansion of the correlation functions until convergence is found.\nOne note: for very large problems, this can be slow.", "def get_curr(theta,Nk,Ncc,Nbos):\n print(\"------------- theta:\",theta)\n mu_l = theta/2.\n mu_r = -theta/2.\n\n\n d1 = tensor(destroy(2), qeye(Nbos))\n a = tensor(qeye(2), destroy(Nbos))\n\n e1 = 0.3 #d1 = spin up\n Omega = 0.2\n Lambda = 0.12\n\n H0 = e1*d1.dag() * d1 + Omega * a.dag()*a + Lambda * (a+a.dag()) * d1.dag() * d1\n\n rho_0 = tensor(basis(2,0)*basis(2,0).dag(),basis(Nbos,0)*basis(Nbos,0).dag())\n\n \n start = time.time()\n Q = d1\n bathL = LorentzianPadeBath(Q,Gamma,W,mu_l,T,Nk,tag=\"L\")\n bathR = LorentzianPadeBath(Q,Gamma,W,mu_r,T,Nk,tag=\"R\")\n # for a single impurity we converge with max_depth = 2\n resultHEOM = HEOMSolver(H0, [bathL,bathR], Ncc, options=options)\n \n end = time.time()\n print(\"construct time:\", end - start)\n\n start = time.time()\n\n\n rhossHP,fullssP=resultHEOM.steady_state()\n end = time.time()\n print(\"Steady state time new\",end - start)\n return rhossHP, fullssP\n 
\n\nrhoHssPlistl5n2N16 = []\nfullssPlistl5n2N16 = []\n\nNk=5\n\ntheta_list = np.linspace(0,2,30)\nfor theta in theta_list:\n rhotemp, fulltemp = get_curr(theta,Nk=Nk,Ncc=2,Nbos=16)\n rhoHssPlistl5n2N16.append(rhotemp)\n fullssPlistl5n2N16.append(fulltemp)\n \n\ndef state_current(ado_state,bath_tag):\n level_1_aux = [\n (ado_state.extract(label), ado_state.exps(label)[0])\n for label in ado_state.filter(level=1,tags =[bath_tag])\n ]\n def exp_sign(exp):\n return 1 if exp.type == exp.types[\"+\"] else -1\n\n def exp_op(exp):\n return exp.Q if exp.type == exp.types[\"+\"] else exp.Q.dag()\n\n k = Nk + 1\n return -1.0j * sum(\n exp_sign(exp) * (exp_op(exp) * aux).tr()\n for aux, exp in level_1_aux \n )\n \n\ncurrPunitsl5n2N16 = [2.434e-4*1e6*state_current(fullss,\"R\") for fullss in fullssPlistl5n2N16]\n\nfig, ax1 = plt.subplots(figsize=(12, 10))\n\nax1.plot(theta_list,currPunitsl5n2N16, color=\"green\", linestyle='-', linewidth=3, label= r\"$l_{\\mathrm{max}}=5$, $n_{\\mathrm{max}}= 2$, $N = 16$\")\n\n\nax1.set_yticks([0,0.5,1])\nax1.set_yticklabels([0,0.5,1])\n\nax1.locator_params(axis='y', nbins=4)\nax1.locator_params(axis='x', nbins=4)\n\nax1.set_xlabel(r\"Bias voltage $\\Delta \\mu$ ($V$)\", fontsize=30 )\nax1.set_ylabel(r\"Current ($\\mu A$)\", fontsize=30)\nax1.legend()\n\nfig, ax1 = plt.subplots(figsize=(12, 10))\n\n\nax1.plot(theta_list,currPunitsl5n2N16, color=\"green\", linestyle='-', linewidth=3, label= r\"$l_{\\mathrm{max}}=5$, $n_{\\mathrm{max}} = 2$, $N = 16$\")\nax1.plot(theta_list,currPunitsl6n2N16, color=\"blue\", linestyle=':', linewidth=3, label= r\"$l_{\\mathrm{max}}=6$, $n_{\\mathrm{max}} = 2$, $N = 16$\")\n\nax1.plot(theta_list,currPunitsl4n2N34, color=\"black\", linestyle='-.', linewidth=3, label= r\"$l_{\\mathrm{max}}=4$, $n_{\\mathrm{max}}= 2$, $N = 34$\")\nax1.plot(theta_list,currPunitsl5n2N34, color=\"red\", linestyle='--', linewidth=3, label= r\"$l_{\\mathrm{max}}=5$, $n_{\\mathrm{max}} = 2$, $N = 
34$\")\n\nax1.set_yticks([0,0.5,1])\nax1.set_yticklabels([0,0.5,1])\n\nax1.locator_params(axis='y', nbins=4)\nax1.locator_params(axis='x', nbins=4)\n\nax1.set_xlabel(r\"Bias voltage $\\Delta \\mu$ ($V$)\", fontsize=30 )\nax1.set_ylabel(r\"Current ($\\mu A$)\", fontsize=30)\nax1.legend()\n#plt.savefig(\"figures/figImpBos.pdf\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
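The $l = 0$ coefficients above translate directly into code. Below is a minimal numerical sketch (not part of the notebook) of the leading exponential term of $C_K^{\sigma}(t)$; for simplicity it plugs the exact Fermi function into the $\eta_{K,0}$ formula in place of the Pade approximant, and the function names are illustrative:

```python
import numpy as np

def fermi(x):
    # Fermi function; accepts the complex argument needed for eta_{K,0}
    return 1.0 / (np.exp(x) + 1.0)

def lead_term(t, Gamma, W, mu, beta, sigma=+1):
    """Leading (l = 0) exponential of C_K^sigma(t)."""
    eta0 = Gamma * W / 2.0 * fermi(1j * beta * W)  # eta_{K,0}
    gamma0 = W - sigma * 1j * mu                   # gamma_{K,sigma,0}
    return eta0 * np.exp(-gamma0 * t)
```

Since $\mathrm{Re}\,\gamma_{K,\sigma,0} = W_K$, the magnitude of this term decays at rate $W$: the lead width sets the memory time of the reservoir, which is why the wide-band limit above gives nearly memoryless leads.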
jorisvandenbossche/DS-python-data-analysis
_solved/case4_air_quality_analysis.ipynb
bsd-3-clause
[ "<p><font size=\"6\"><b> CASE - air quality data of European monitoring stations (AirBase)</b></font></p>\n\n\nยฉ 2021, Joris Van den Bossche and Stijn Van Hoey (&#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "We processed some raw data files of the AirBase air quality data. The data contains hourly concentrations of nitrogen dioxide (NO2) for 4 different measurement stations:\n\nFR04037 (PARIS 13eme): urban background site at Square de Choisy\nFR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia\nBETR802: urban traffic site in Antwerp, Belgium\nBETN029: rural background site in Houtem, Belgium\n\nSee http://www.eea.europa.eu/themes/air/interactive/no2\nImporting and quick exploration\nWe processed the individual data files in the previous notebook (case4_air_quality_processing.ipynb), and saved it to a csv file airbase_data_processed.csv. 
Let's import the file here (if you didn't finish the previous notebook, a copy of the pre-processed dataset is also available in data/airbase_data.csv):", "alldata = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True)", "We only use the data from 1999 onwards:", "data = alldata['1999':].copy()", "Some first exploration with the typical functions:", "data.head() # tail()\n\ndata.info()\n\ndata.describe(percentiles=[0.1, 0.5, 0.9])\n\ndata.plot(figsize=(12,6))", "<div class=\"alert alert-warning\">\n\n**ATTENTION!**:\n\nWhen just using `.plot()` without further notice (selection, aggregation,...)\n\n* Risk of running into trouble by overloading your computer's processing (certainly with looooong time series).\n* Not always the most informative/interpretable visualisation.\n\n</div>\n\nPlot only a subset\nWhy not just use the head/tail possibilities?", "data.tail(500).plot(figsize=(12,6))", "Summary figures\nUse summary statistics...", "data.plot(kind='box', ylim=[0,250])", "Seaborn's plotting functions also work here; just start with some subsets as a first impression...\nAs we have already seen previously, the plotting library seaborn provides some high-level plotting functions on top of matplotlib (check the docs!). One of those functions is pairplot, which we can use here to quickly visualize the concentrations at the different stations and their relation:", "import seaborn as sns\n\nsns.pairplot(data.tail(5000).dropna())", "Is this a tidy dataset?", "data.head()", "In principle this is not a tidy dataset. The variable that was measured is the NO2 concentration, and it is divided over 4 columns. Of course those measurements were made at different stations, so one could interpret it as separate variables. 
But in any case, such a format does not always work well with libraries like seaborn, which expect a pure tidy format.\nReasons not to use a tidy dataset here: \n\nsmaller memory use\ntimeseries functionality like resample works better\npandas plotting already does what we want when having different columns for some types of plots (eg line plots of the timeseries)\n\n<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Create a tidy version of this dataset <code>data_tidy</code>, ensuring the result has new columns 'station' and 'no2'.</li>\n <li>Check how many missing values are contained in the 'no2' column.</li>\n <li>Drop the rows with missing values in that column.</li>\n</ul>\n</div>", "data_tidy = data.reset_index().melt(id_vars=[\"datetime\"], var_name='station', value_name='no2')\ndata_tidy.head()\n\ndata_tidy['no2'].isna().sum()\n\ndata_tidy = data_tidy.dropna()", "In the following exercises we will mostly do our analysis on data and often use pandas plotting, but once we have produced some kind of summary dataframe as the result of an analysis, it becomes more interesting to convert that result to a tidy format to be able to use the more advanced plotting functionality of seaborn.\nExercises\n<div class=\"alert alert-warning\">\n\n<b>REMINDER</b>: <br><br>\n\nTake a look at the [Timeseries notebook](pandas_04_time_series_data.ipynb) when you require more info about:\n\n <ul>\n <li><code>resample</code></li>\n <li>string indexing of DateTimeIndex</li>\n</ul><br>\n\nTake a look at the [matplotlib](visualization_01_matplotlib.ipynb) and [seaborn](visualization_02_seaborn.ipynb) notebooks when you require more info about the plot requirements.\n\n</div>\n\n<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Plot the monthly mean and median concentration of the 'FR04037' station for the years 2009 - 2013 in a single figure/ax</li>\n</ul>\n</div>", "fig, ax = plt.subplots()\ndata.loc['2009':, 
'FR04037'].resample('M').mean().plot(ax=ax, label='mean')\ndata.loc['2009':, 'FR04037'].resample('M').median().plot(ax=ax, label='median')\nax.legend(ncol=2)\nax.set_title(\"FR04037\");\n\ndata.loc['2009':, 'FR04037'].resample('M').agg(['mean', 'median']).plot()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>Make a violin plot for January 2011 until August 2011 (check out the documentation to improve the plotting settings)</li>\n <li>Change the y-label to 'NO$_2$ concentration (µg/m³)'</li>\n</ul><br>\n\n_NOTE:_ In this case, we can use seaborn both with the wide-format data (with the different columns for which you want to make violin plots) and with the tidy data.\n\n</div>", "# with wide dataframe\nfig, ax = plt.subplots()\nsns.violinplot(data=data['2011-01': '2011-08'], palette=\"GnBu_d\", ax=ax)\nax.set_ylabel(\"NO$_2$ concentration (µg/m³)\")\n\n# with tidy dataframe\ndata_tidy_subset = data_tidy[(data_tidy['datetime'] >= \"2011-01\") & (data_tidy['datetime'] < \"2011-09\")]\n\nfig, ax = plt.subplots()\nsns.violinplot(data=data_tidy_subset, x=\"station\", y=\"no2\", palette=\"GnBu_d\", ax=ax)\nax.set_ylabel(\"NO$_2$ concentration (µg/m³)\")\n\n# with figure-level function\nsns.catplot(data=data_tidy_subset, x=\"station\", y=\"no2\", kind=\"violin\", palette=\"GnBu_d\")", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>Make a bar plot with pandas of the mean of each of the stations in the year 2012 (check the documentation of Pandas plot to adapt the rotation of the labels) and make sure all bars have the same color.</li>\n <li>Using the matplotlib objects, change the y-label to 'NO$_2$ concentration (µg/m³)'</li>\n <li>Add a 'darkorange' horizontal line on the ax for the y-value 40 µg/m³ (command for horizontal line from matplotlib: <code>axhline</code>).</li>\n <li><a href=\"visualization_01_matplotlib.ipynb\">Place the text</a> 'Yearly limit is 40 µg/m³' just above the 'darkorange' line.</li>\n</ul>\n\n</div>", "fig, ax = plt.subplots()\ndata['2012':].mean().plot(kind='bar', ax=ax, rot=0, color='C0')\nax.set_ylabel(\"NO$_2$ concentration (µg/m³)\")\nax.axhline(y=40., color='darkorange')\nax.text(0.01, 0.48, 'Yearly limit is 40 µg/m³',\n horizontalalignment='left', fontsize=13, \n transform=ax.transAxes, color='darkorange');", "<div class=\"alert alert-success\">\n\n<b>EXERCISE:</b> Did the air quality improve over time?\n\n <ul>\n <li>For the data from 1999 till the end, plot the yearly averages</li>\n <li>For the same period, add the overall mean (all stations together) as an additional line to the graph, use a thicker black line (<code>linewidth=4</code> and <code>linestyle='--'</code>)</li>\n <li>[OPTIONAL] Add a legend above the ax for all lines</li>\n\n\n</ul>\n</div>", "fig, ax = plt.subplots()\n\ndata['1999':].resample('A').mean().plot(ax=ax)\ndata['1999':].mean(axis=1).resample('A').mean().plot(color='k', \n linestyle='--', \n linewidth=4, \n ax=ax, \n label='Overall mean')\nax.legend(loc='center', ncol=3, \n bbox_to_anchor=(0.5, 1.06))\nax.set_ylabel(\"NO$_2$ concentration (µg/m³)\");", "<div class=\"alert alert-info\">\n\n**REMEMBER**:\n\n`resample` is a special version of a `groupby` operation. 
For example, taking annual means with `data.resample('A').mean()` is equivalent to `data.groupby(data.index.year).mean()` (but the result of `resample` still has a DatetimeIndex).\n\nChecking the index of the resulting DataFrame when using **groupby** instead of resample: You'll notice that the Index lost the DateTime capabilities:\n\n```python\n>>> data.groupby(data.index.year).mean().index\n```\n<br>\n\nResults in:\n\n```\nInt64Index([1990, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000,\n 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011,\n 2012],\n dtype='int64')\n```\n<br>\n\nWhen using **resample**, we keep the DateTime capabilities:\n\n```python\n>>> data.resample('A').mean().index\n```\n<br>\n\nResults in:\n\n```\nDatetimeIndex(['1999-12-31', '2000-12-31', '2001-12-31', '2002-12-31',\n '2003-12-31', '2004-12-31', '2005-12-31', '2006-12-31',\n '2007-12-31', '2008-12-31', '2009-12-31', '2010-12-31',\n '2011-12-31', '2012-12-31'],\n dtype='datetime64[ns]', freq='A-DEC')\n```\n<br>\n\nBut, `groupby` is more flexible and can also do resamples that do not result in a new continuous time series, e.g. by grouping by the hour of the day to get the diurnal cycle.\n</div>\n\n<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>How does the <i>typical yearly profile</i> (typical averages for the different months over the years) look for the different stations? 
(add a 'month' column as a first step)</li>\n\n</ul>\n</div>", "# add a column to the dataframe that indicates the month (integer value of 1 to 12):\ndata['month'] = data.index.month\n\n# now, we can calculate the mean of each month over the different years:\ndata.groupby('month').mean()\n\n# plot the typical monthly profile of the different stations:\ndata.groupby('month').mean().plot()", "Remove the temporary 'month' column generated in the solution of the previous exercise:", "data = data.drop(\"month\", axis=1, errors=\"ignore\")", "Note: Technically, we could reshape the result of the groupby operation to a tidy format (we no longer have a real time series), but since we already have the things we want to plot as lines in different columns, doing .plot already does what we want.\n<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>Plot the weekly 95% percentiles of the concentration in 'BETR801' and 'BETN029' for 2011</li>\n\n</ul>\n</div>", "# Resample wise\ndf2011 = data.loc['2011']\ndf2011[['BETN029', 'BETR801']].resample('W').quantile(0.95).plot()\n\n# Groupby wise\n# Note the different x-axis labels\ndf2011.groupby(df2011.index.isocalendar().week)[['BETN029', 'BETR801']].quantile(0.95).plot()", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>Plot the typical diurnal profile (typical hourly averages) for the different stations taking into account the whole time period.</li>\n\n</ul>\n</div>", "data.groupby(data.index.hour).mean().plot()", "<div class=\"alert alert-success\">\n\n__EXERCISE__\n\nWhat is the difference in the typical diurnal profile between week and weekend days? (and visualise it)\n\nStart with only visualizing the different in diurnal profile for the 'BETR801' station. In a next step, make the same plot for each station.\n\n<details><summary>Hints</summary>\n\n- Add a column `weekend` defining if a value of the index is in the weekend (i.e. 
days of the week 5 and 6) or not\n- Add a column `hour` with the hour of the day for each row.\n- You can `groupby` on multiple items at the same time.\n\n</details>\n\n</div>", "data['weekend'] = data.index.dayofweek.isin([5, 6])\ndata['weekend'] = data['weekend'].replace({True: 'weekend', False: 'weekday'})\ndata['hour'] = data.index.hour\n\ndata_weekend = data.groupby(['weekend', 'hour']).mean()\ndata_weekend.head()\n\n# using unstack and pandas plotting\ndata_weekend_BETR801 = data_weekend['BETR801'].unstack(level=0)\ndata_weekend_BETR801.plot()\n\n# using a tidy dataset and seaborn\ndata_weekend_BETR801_tidy = data_weekend['BETR801'].reset_index()\n\nsns.lineplot(data=data_weekend_BETR801_tidy, x=\"hour\", y=\"BETR801\", hue=\"weekend\")\n\n# tidy dataset that still includes all stations\n\ndata_weekend_tidy = pd.melt(data_weekend.reset_index(), id_vars=['weekend', 'hour'],\n var_name='station', value_name='no2')\ndata_weekend_tidy.head()\n\n# when still having multiple factors, it becomes useful to convert to tidy dataset and use seaborn\nsns.relplot(data=data_weekend_tidy, x=\"hour\", y=\"no2\", kind=\"line\",\n hue=\"weekend\", col=\"station\", col_wrap=2)", "Remove the temporary columns 'hour' and 'weekend' used in the solution of the previous exercise:", "data = data.drop(['hour', 'weekend'], axis=1, errors=\"ignore\")", "<div class=\"alert alert-success\">\n\n__EXERCISE__\n\nCalculate the correlation between the different stations (check in the documentation, google \"pandas correlation\" or use the magic function <code>%psearch</code>)\n\n</div>", "data[['BETR801', 'BETN029', 'FR04037', 'FR04012']].corr()", "<div class=\"alert alert-success\">\n\n__EXERCISE__\n\nCount the number of exceedances of hourly values above the European limit 200 µg/m³ for each year and station after 2005. Make a barplot of the counts. Add a horizontal line indicating the maximum number of exceedances (which is 18) allowed per year.\n\n<details><summary>Hints</summary>\n\n- Create a new DataFrame, called <code>exceedances</code>, (with boolean values) indicating if the threshold is exceeded or not\n- Remember that the sum of True values can be used to count elements\n- Adding a horizontal line can be done with the matplotlib function <code>ax.axhline</code>\n\n</details>\n\n</div>", "exceedances = data > 200\n\n# group by year and count exceedances (sum of boolean)\nexceedances = exceedances.groupby(exceedances.index.year).sum()\n\n# Make a barplot of the yearly number of exceedances\nax = exceedances.loc[2005:].plot(kind='bar')\nax.axhline(18, color='k', linestyle='--')", "More advanced exercises...", "data = alldata['1999':].copy()", "<div class=\"alert alert-success\">\n\n__EXERCISE__\n\nPerform the following actions for the station `'FR04012'` only:\n\n <ul>\n <li>Remove the rows containing <code>NaN</code> or zero values</li>\n <li>Sort the values of the rows according to the air quality values (low to high values)</li>\n <li>Rescale the values to the range [0-1] and store the result as <code>FR_scaled</code> (Hint: check <a href=\"https://en.wikipedia.org/wiki/Feature_scaling#Rescaling\">wikipedia</a>)</li>\n <li>Use pandas to plot these values sorted, not taking into account the dates</li>\n <li>Add the station name 'FR04012' as y-label</li>\n <li>[OPTIONAL] Add a vertical line to the plot where the line (hence, the values of variable FR_scaled) reaches the value <code>0.3</code>. 
You will need the documentation of <code>np.searchsorted</code> and matplotlib's <code>axvline</code></li>\n</ul>\n</div>", "FR_station = data['FR04012'] # select the specific data series\nFR_station = FR_station[(FR_station.notnull()) & (FR_station != 0.0)] # exclude the Nan and zero values\n\nFR_sorted = FR_station.sort_values(ascending=True)\nFR_scaled = (FR_sorted - FR_sorted.min())/(FR_sorted.max() - FR_sorted.min())\n\nfig, axfr = plt.subplots()\nFR_scaled.plot(use_index=False, ax = axfr) #alternative version: FR_scaled.reset_index(drop=True).plot(use_index=False) \naxfr.set_ylabel('FR04012')\n# optional addition, just in case you need this\naxfr.axvline(x=FR_scaled.searchsorted(0.3), color='0.6', linestyle='--', linewidth=3)", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Create a Figure with two subplots (axes), for which both ax<b>i</b>s are shared</li>\n <li>In the left subplot, plot the histogram (30 bins) of station 'BETN029', only for the year 2009</li>\n <li>In the right subplot, plot the histogram (30 bins) of station 'BETR801', only for the year 2009</li>\n <li>Add the title representing the station name on each of the subplots, you do not want to have a legend</li>\n</ul>\n</div>", "# Mixing an matching matplotlib and Pandas\nfig, (ax1, ax2) = plt.subplots(1, 2, \n sharex=True, \n sharey=True)\n\ndata.loc['2009', ['BETN029', 'BETR801']].plot(kind='hist', subplots=True, \n bins=30, legend=False, \n ax=(ax1, ax2))\nax1.set_title('BETN029')\nax2.set_title('BETR801')\n# Remark: the width of the bins is calculated over the x data range for both plots together\n\n# A more step by step approach (equally valid)\nfig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, sharex=True)\ndata.loc['2009', 'BETN029'].plot(kind='hist', bins=30, ax=ax1)\nax1.set_title('BETN029')\ndata.loc['2009', 'BETR801'].plot(kind='hist', bins=30, ax=ax2)\nax2.set_title('BETR801')\n# Remark: the width of the bins is calculated over the x data range for each 
plot individually", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>\n\n <ul>\n <li>Make a selection of the original dataset of the data in January 2009, call the resulting variable <code>subset</code></li>\n <li>Add a new column, called 'dayofweek', to the variable <code>subset</code> which defines for each data point the day of the week</li>\n <li>From the <code>subset</code> DataFrame, select only Monday (= day 0) and Sunday (=day 6) and remove the others (so, keep this as variable <code>subset</code>)</li>\n <li>Change the values of the dayofweek column in <code>subset</code> according to the following mapping: <code>{0:\"Monday\", 6:\"Sunday\"}</code></li>\n <li>With seaborn, make a scatter plot of the measurements at 'BETN029' vs 'FR04037', with the color variation based on the weekday. Add a linear regression to this plot.</li>\n</ul><br>\n\n**Note**: If you run into the **SettingWithCopyWarning** and do not know what to do, recheck [pandas_03b_indexing](pandas_03b_indexing.ipynb)\n\n</div>", "subset = data.loc['2009-01'].copy()\nsubset[\"dayofweek\"] = subset.index.dayofweek\nsubset = subset[subset['dayofweek'].isin([0, 6])]\n\nsubset[\"dayofweek\"] = subset[\"dayofweek\"].replace(to_replace={0:\"Monday\", 6:\"Sunday\"})\n\nsns.set_style(\"whitegrid\")\n\nsns.lmplot(\n data=subset, x=\"BETN029\", y=\"FR04037\", hue=\"dayofweek\"\n)", "<div class=\"alert alert-success\">\n\n__EXERCISE__\n\nThe maximum daily, 8 hour mean, should be below 100 ยตg/mยณ. What is the number of exceedances of this limit for each year/station?\n\n<details><summary>Hints</summary>\n\n- Have a look at the `rolling` method to perform moving window operations.\n\n</details>\n\n<br>_Note:_\nThis is not an actual limit for NO$_2$, but a nice exercise to introduce the `rolling` method. 
Other pollutants, such as O$_3$, actually have such limit values based on 8-hour means.\n\n</div>", "exceedances = data.rolling(8).mean().resample('D').max() > 100\n\nexceedances = exceedances.groupby(exceedances.index.year).sum()\nax = exceedances.plot(kind='bar')", "<div class=\"alert alert-success\">\n\n<b>EXERCISE</b>:\n\n <ul>\n <li>Visualize the typical week profile for station 'BETR801' as boxplots (where the values in one boxplot are the <i>daily means</i> for the different <i>weeks</i> for a certain day of the week).</li><br>\n </ul>\n\n\n**Tip:**<br>\n\nThe boxplot method of a DataFrame expects the data for the different boxes in different columns. For this, you can either use `pivot_table` or a combination of `groupby` and `unstack`\n\n\n</div>\n\nCalculating daily means and adding day-of-the-week information:", "data_daily = data.resample('D').mean()\n\n# add a dayofweek column\ndata_daily['dayofweek'] = data_daily.index.dayofweek\ndata_daily.head()", "Plotting with seaborn:", "# seaborn\nsns.boxplot(data=data_daily, x='dayofweek', y='BETR801', color=\"grey\")", "Reshaping and plotting with pandas:", "# when using pandas to plot, the different boxplots should be different columns\n# therefore, pivot table so that the weekdays are the different columns\ndata_daily['week'] = data_daily.index.isocalendar().week\ndata_pivoted = data_daily.pivot_table(columns='dayofweek', index='week',\n values='BETR801')\ndata_pivoted.head()\ndata_pivoted.boxplot();\n\n# An alternative method using `groupby` and `unstack`\ndata_daily.groupby(['dayofweek', 'week'])['BETR801'].mean().unstack(level=0).boxplot();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
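The resample/groupby equivalence described in the REMEMBER box can be checked on a tiny synthetic series. This is a standalone sketch, not part of the notebook; the try/except covers the rename of the yearly frequency alias from 'A' to 'YE' in newer pandas versions:

```python
import numpy as np
import pandas as pd

# Two full years of daily data (2000 is a leap year: 366 + 364 = 730 days)
idx = pd.date_range("2000-01-01", periods=730, freq="D")
s = pd.Series(np.arange(730, dtype=float), index=idx)

try:
    by_resample = s.resample("YE").mean()   # keeps a DatetimeIndex
except ValueError:                          # older pandas uses "A" for year end
    by_resample = s.resample("A").mean()

by_groupby = s.groupby(s.index.year).mean() # plain integer index of years
```

Both results hold the same annual means; only the index type differs, which is exactly why `resample` composes better with further time-series operations while `groupby` is the more flexible tool.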
KristianJensen/cameo
examples/Simulation Methods.ipynb
apache-2.0
[ "from cameo import load_model\n\nmodel = load_model(\"iJR904\")\n\nmodel\n\nfrom cameo.flux_analysis import simulation\n\nhelp(simulation)", "Step 1: Simulate the flux distribution of the Wild-Type\nFBA and PFBA can be used to compute flux distributions for the Wild-Type (the model as it has been described) as well as knockouts (next section)", "%time\nfba_result = simulation.fba(model)\n\n%time\npfba_result = simulation.pfba(model)", "Step 2: Simulate knockout phenotypes\nAlthough PFBA and FBA can be used to simulate the effect of knockouts, other methods have been proven more valuable for that task: MOMA and ROOM. In cameo we implement a linear version of MOMA.\n\nSimulating knockouts:\n\nManipulate the bounds of the reaction (or use the shorthand method knock_out)", "model.reactions.PGI\n\nmodel.reactions.PGI.knock_out()\nmodel.reactions.PGI", "Simulate using different methods:", "%time\nfba_knockout_result = simulation.fba(model)\nfba_knockout_result[model.objective]\n\npfba_knockout_result = simulation.pfba(model)\npfba_knockout_result[model.objective]", "MOMA and ROOM rely on a reference (wild-type) flux distribution and we can use the one previously computed.\nParsimonious FBA references seem to produce better results with these methods", "%time\nlmoma_result = simulation.lmoma(model, reference=pfba_result.fluxes)\nlmoma_result[model.objective]\n\nlmoma_result[\"2 * EX_glc_lp_e_rp_\"]\n\n%time\nroom_result = simulation.room(model, reference=pfba_result.fluxes)\nroom_result[model.objective]\n\nroom_result" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
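Linear MOMA, as used by cameo's `lmoma`, minimizes the summed absolute flux distance to the reference subject to the stoichiometric constraints. The LP behind it can be sketched with scipy on an invented toy network; the network, names, and bounds here are purely illustrative and have nothing to do with cameo's API or the iJR904 model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: R1: -> A, R2: A -> B, R3: A -> B, R4: B ->
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  1, -1]])   # metabolite B balance
v_ref = np.array([1.0, 1.0, 0.0, 1.0])  # wild-type reference (routes flux via R2)
n = len(v_ref)

# Variables: [v_1..v_n, d_1..d_n]; minimize sum(d) with d_i >= |v_i - v_ref_i|
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([S, np.zeros_like(S)])   # steady state: S v = 0
b_eq = np.zeros(S.shape[0])
#  v_i - d_i <= ref_i   and   -v_i - d_i <= -ref_i
A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),
                  np.hstack([-np.eye(n), -np.eye(n)])])
b_ub = np.concatenate([v_ref, -v_ref])
bounds = [(0, 10)] * n + [(0, None)] * n
bounds[1] = (0, 0)  # knock out R2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
v_knockout = res.x[:n]
```

The solver reroutes the knocked-out flux through the parallel reaction R3 while keeping the remaining fluxes at their reference values, which is the intuition behind using MOMA rather than plain FBA for knockout phenotypes.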
Caranarq/01_Dmine
Datasets/LEED/LEED.ipynb
gpl-3.0
[ "Cleaning data on LEED-certified buildings\n1. Introduction\nThe United States Green Building Council (USGBC) maintains a database of LEED-certified buildings around the world. The USGBC website provides an interface to query its database directly; however, it offers no API or direct URL for bulk download, so the query has to be sent to the database through the USGBC URL:\nhttps://www.usgbc.org/projects/list?page=17&keys=Mexico\nAfter waiting for the database to process the query, it returns the file \"leed_projects.xls\", which is saved as \"D:\\PCCS\\00_RawData\\01_CSV\\LEED\\leed_projects.xls\"\n2. Standardization of the dataset", "# Libraries used\nimport pandas as pd\nimport sys\nimport os\nimport csv\nfrom lxml import html\nimport requests\nimport time\n\n# System configuration\nprint('Python {} on {}'.format(sys.version, sys.platform))\nprint('Pandas version: {}'.format(pd.__version__))\nimport platform; print('Running on {} {}'.format(platform.system(), platform.release()))", "Although it is Excel tabular data, the file as downloaded raises an error when opened directly. Therefore, before processing it, it must be opened in Excel and saved in .xlsx format", "path = r'D:\\PCCS\\00_RawData\\01_CSV\\LEED\\leed_projects.xlsx'\nraw_data = pd.read_excel(path)\nraw_data.index.name = 'Building'\nraw_data.head()\n\n# Remove records that do not belong to Mexico\n\nprint('The table has {} records'.format(len(raw_data)))\nx = 'United States [us]'\nraw_data = raw_data[raw_data['Country'] != x]\nprint('After removing the records where the country is \"{}\", the table has {} records'.format(x, len(raw_data)))\nx = 'Colombia'\nraw_data = raw_data[raw_data['Country'] != x]\nprint('After removing the records where the country is \"{}\", the table has {} records'.format(x, len(raw_data)))\nx = 'United States'\nraw_data = raw_data[raw_data['Country'] != x]\nprint('After removing the records where the country is \"{}\", the table has {} records'.format(x, len(raw_data)))\n\nraw_data.head()", "The database is a list of buildings that includes, for each building: \n- The building name\n- A reference URL\n- The building's certification date\n- The city, state, and country where the building is located\n- The rating system under which the building was certified\n- The certification version\n- The certification level achieved by the building.\nBecause the city, state, and country columns do not follow any standard, each row must be assigned the 5-digit municipal geostatistical key (CVE_MUN) of the municipality where the building is located.\nThis will be done manually, since each row has to be interpreted individually.\nDuring the review I noticed that, although the table has no key identifying each city and municipality, each building's link leads to a building page that usually does contain a postal code; and from the postal code it is possible to obtain the municipality and the state.\nBelow, one page is inspected to learn its structure and scrape it, expecting the structure to be the same on all pages:", "# Download the page HTML\npage = requests.get('https://www.usgbc.org/projects/reforma-180')\ntree = html.fromstring(page.content)\n\n# Extract variables from the structure\nstreet = tree.xpath('//span[@itemprop=\"streetAddress\"]/text()')\nlocality = tree.xpath('//span[@itemprop=\"addressLocality\"]/text()')\npostalcode = tree.xpath('//span[@itemprop=\"postalCode\"]/text()')\ncountry = tree.xpath('//span[@itemprop=\"addressCountry\"]/text()')\n\n''.join(street).replace('\\n', '')\n\n# Let's see, what data did we get?\nprint('len({}), type({}) - {}'.format(len(street), type(street), street))\nprint('len({}), type({}) - {}'.format(len(locality), type(locality), locality))\nprint('len({}), type({}) - {}'.format(len(postalcode), type(postalcode), postalcode))\nprint('len({}), type({}) - {}'.format(len(country), type(country), country))", "All the values are lists, but \"street\" has 2 elements. So, in the script, I will remove every line break and concatenate the text of all the elements in each list", "# Script to extract data from the building pages given their URL\ndef webcrawler(x):\n time.sleep(0.05)\n url = x\n try:\n page = requests.get(x)\n tree = html.fromstring(page.content)\n except: # Return False if the URL cannot be reached\n street = False\n locality = False\n postalcode = False\n country = False\n return [street, locality, postalcode, country]\n # Extract the data from the tree. Return None if nothing was found \n try:\n street = ''.join(tree.xpath('//span[@itemprop=\"streetAddress\"]/text()'))\n except:\n street = None\n try:\n locality = tree.xpath('//span[@itemprop=\"addressLocality\"]/text()')\n except:\n locality = None\n try:\n postalcode = tree.xpath('//span[@itemprop=\"postalCode\"]/text()')\n except:\n postalcode = None\n try:\n country = tree.xpath('//span[@itemprop=\"addressCountry\"]/text()')\n except:\n country = None\n \n return [street, locality, postalcode, country]\n\n# Put the crawler to work (but not if the file already exists)\narchivoraw = r'D:\\PCCS\\00_RawData\\01_CSV\\LEED\\crawl_leed.xlsx'\nif os.path.isfile(archivoraw):\n print('THE WEBCRAWL WAS NOT PERFORMED BECAUSE THE DATA ALREADY EXISTS IN \\n {}'.format(archivoraw))\n print('*** Import that file instead, to avoid wasting resources ***')\nelse:\n raw_data['crawl'] = raw_data.Path.apply(webcrawler)", "Replace the line breaks in each list\n(I will skip this step because what I actually need is the postal code, but I leave the code here in case I need it in the future)\ndef listtotext(x):\n templist = []\n for element in x:\n if element == None or element == False:\n templist.append(element)\n else:\n templist.append(''.join(x).replace('\\n', ''))\n return templist", "raw_data.head()\n\n# Save a copy of raw_data in case this dataset is needed again, \n# so that the webcrawling, which takes a long time, does not have to be repeated\nwriter = pd.ExcelWriter(archivoraw)\nraw_data.to_excel(writer, sheet_name = 'DATOS')\nwriter.save()\n\n# Create a working copy of raw_data\ndatasetfinal = raw_data\n\n# Create single columns with the address and postal code data extracted by the crawler.\ndatasetfinal['address'] = datasetfinal.crawl.apply(lambda x: x[0].replace('\\n', ''))\n# raw_data['city'] = raw_data.crawl.apply(lambda x: x[1][0].replace('/n', ''))\ndatasetfinal['CP'] = datasetfinal.crawl.apply(lambda x: str(x[2][0]))\n# raw_data['city'] = raw_data.crawl.apply(lambda x: x[3][0].replace('/n', ''))\ndatasetfinal.head(2)", "From the postal codes it is now possible to identify the city and municipality each building belongs to. For this, we will use the SEPOMEX postal code database downloaded in another data-mining notebook:", "bd_sepo = r'D:\\PCCS\\01_Dmine\\Datasets\\SEPOMEX\\sepomex_CP_CVEMUN.xlsx'\nSEPOMEX = pd.read_excel(bd_sepo, dtype={'CVE_MUN':'str', 'CP':'str'})\nSEPOMEX.head(3)", "With the SEPOMEX database it is now possible to merge both datasets and obtain the municipal key for each building", "datasetfinal.head()\n\n# Copy CVE_MUN into the dataset based on the postal code\ndatasetfinal = datasetfinal.reset_index().merge(SEPOMEX, on='CP', how='left').set_index('Building')\ndatasetfinal.head()", "There remain 70 rows where the municipal key could not be identified", "len(datasetfinal[datasetfinal['CVE_MUN'].isnull()])", "Particular cases\nThese 70 records contain 33 unique postal codes (CP) that need to be assigned individually to determine the CVE_MUN of each building. For this, we will write a script that lets us review each code, do the necessary research, and assign it a CVE_MUN", "mira = ['City', 'State', 'CP', 'address', 'CVE_MUN'] # The 'mira' list will be used from here on to print subsets of the information\nsinmun = datasetfinal[datasetfinal['CVE_MUN'].isnull()][mira]\nsinmun.head()\n\nlen(sinmun['CP'].unique())", "The following dictionary collects the CVE_MUN values that will be assigned to the postal codes requiring individual assignment. Codes whose value is None will be assigned later", "# Dictionary where key = 'CP' and value = 'CVE_MUN'\ndefmuns = {'00000': None,\n '00100': '09010',\n '00502': '15024',\n '00604': '15121',\n '00702': '15051',\n '01006': '09010',\n '01152': '09010',\n '01209': '09004',\n '01300': '09004',\n '03130': '09014',\n '03210': '09014',\n '05300': '09004',\n '05490': '15104',\n '05940': '15013',\n '08424': '14094',\n '11010': '09016',\n '11111': '14098',\n '11570': '09016',\n '12345': None,\n '21118': '02002',\n '22320': '02004',\n '23410': '03008',\n '23479': '03008',\n '31240': '08019',\n '46685': '14006',\n '48219': '16053',\n '56277': '15099',\n '66601': '19006',\n '67114': '19026',\n '76232': '22014',\n '77780': '23009',\n '78341': '24028',\n '87131': None}", "The following dictionary contains postal codes that need to be corrected", "# Dictionary where key = postal code listed in the dataset; value = correct postal code\ndeberiaser = {'00100': '45620',\n '00502': '54830',\n '00604': '54713',\n '00702': '52004',\n '03130': '03103',\n '11111': '45620',\n '48219': '58218'}", "Assigning the postal codes", "# Replace the identified CVE_MUN values in the final dataset\ndatasetfinal['CVE_MUN'] = datasetfinal['CP'].map(defmuns).fillna(datasetfinal['CVE_MUN'])", "Some buildings, marked with the postal codes 00000 and 12345 (presumably due to careless data entry), will have to be assigned individually", "sinmun.loc[sinmun['CP'].isin(['00000', '12345'])]\n\n# Dictionary of buildings to be assigned individually\n# For this dictionary key = building name, value = CVE_MUN to assign to that building\nbuildings = {\n 'Grainger Mexico HQ': '19039',\n 'La Concha Pearl': '03003',\n #'Schneider Electric at COK': '66629', # This building is duplicated, so nothing will be assigned and it will be removed at the end\n 'Bank of America-Reforma 115 5th floor': '09016',\n 'Vesta Corporate Headquarters': '09016',\n 'Air Traffic Control Tower': '15101', # Assuming this is the NAICM air traffic control tower\n 'Passenger Terminal Building': '15101', # The NAICM terminal building\n 'Area Control Center': '15101', # NAICM infrastructure\n 'Corporativo TRIO': '09004',\n 'Casa GF': '19019',\n 'Eurocenter 2': '09004',\n 'ROUZ TOWER': '09014',\n 'Periferico Sur Parque Industrial': '14098'\n}\n\n\n# There is a duplicated building. The duplicate will be removed later\ndatasetfinal.loc['Schneider Electric at COK'][mira]\n\n# Replace individual values in the dataset.\nfor k, v in buildings.items():\n building = datasetfinal.loc[k].name\n CVEMUN_prev = datasetfinal.loc[k]['CVE_MUN']\n datasetfinal.at[k, 'CVE_MUN'] = v\n print('Building:{} - CVE_MUN {} was replaced with {}'.format(building, CVEMUN_prev, datasetfinal.at[k, 'CVE_MUN']))", "The dataset contains two buildings that do not belong to Mexico:", "sinmun[sinmun['CP'] == '87131']", "The following buildings will be removed from the dataset:", "datasetfinal[datasetfinal['CVE_MUN'].isnull()][mira]", "The first because it is duplicated, and the rest because they are not in the United Mexican States.", "datasetfinal = datasetfinal.dropna(subset=['CVE_MUN'])\ndatasetfinal.head(3)", "The buildings whose postal codes need correcting are the following:", "datasetfinal[datasetfinal['CP'].isin(list(deberiaser.keys()))][mira]\n\n# Fix erroneous postal codes\ndatasetfinal['CP'] = datasetfinal['CP'].map(deberiaser).fillna(datasetfinal['CP'])\ndatasetfinal[mira].head()\n\n# Rename columns to create unique variable names\ncolumns={\n 'address':'direccion',\n 'Path': 'URL',\n 'Certification date': 'usgbc_fecha_cert', \n 'Rating system':'usgbc_sis_val',\n 'Version': 'usgbc_ver_sisv',\n 'Certification level': 'usgbc_nv_cert',\n}\ndatasetfinal = datasetfinal.rename(columns=columns)\ndatasetfinal.head(2)\n\n# Column descriptions\nvariables = {\n 'direccion': 'Location (street and number)',\n 'CVE_MUN': 'Five-digit municipal-level geostatistical key, according to the INEGI Unique Catalog of State, Municipal, and Locality Geostatistical Area Keys',\n 'usgbc_fecha_cert': 'Date of certification as a LEED building by the United States Green Building Council',\n 'usgbc_sis_val': 'Rating system applied to the building by the United States Green Building Council',\n 'usgbc_ver_sisv': 'Version of the rating system applied to the building by the United States Green Building Council',\n 'usgbc_nv_cert': 'LEED certification level achieved by the building',\n 'CP': 'Postal code',\n 'URL': 'Uniform Resource Locator, reference to an online resource'\n}\n\n# Convert the descriptions to a DataFrame\nvariables = pd.DataFrame.from_dict(variables, orient='index', dtype=None)\nvariables.columns = ['Descripcion']\nvariables = variables.rename_axis('Mnemonico')\nvariables.head()\n\n# Drop columns that will no longer be used and reorder\nsetfinal = [\n 'direccion',\n 'CVE_MUN',\n 'usgbc_fecha_cert',\n 'usgbc_sis_val',\n 'usgbc_ver_sisv',\n 'usgbc_nv_cert',\n 'CP',\n 'URL']\ndatasetfinal = datasetfinal[setfinal]\ndatasetfinal.head()\n\nmetadatos = {\n 'Nombre del Dataset': 'LEED-Certified Buildings',\n 'Descripcion del dataset': 'Buildings that have received some level of Leadership in Energy and Environmental Design' \\\n ' (LEED) certification, awarded by the United States Green Building Council (USGBC)',\n 'Disponibilidad Temporal': '2007 - 2018',\n 'Periodo de actualizacion': 'Not defined',\n 'Nivel de Desagregacion': 'Building',\n 'Notas': 'n/a',\n 'Fuente': 'United States Green Building Council',\n 'URL_Fuente': 'https://www.usgbc.org/projects/list?page=17&keys=Mexico',\n 'Dataset base': None\n}\n\n# Metadata to a DataFrame for export\nmetadatos = pd.DataFrame.from_dict(metadatos, orient='index', dtype=None)\nmetadatos.columns = ['Descripcion']\nmetadatos = metadatos.rename_axis('Metadato')\nmetadatos\n\n# Save the dataset\nfile = r'D:\\PCCS\\01_Dmine\\Datasets\\LEED\\PCCS_leed_projects.xlsx'\nwriter = pd.ExcelWriter(file)\ndatasetfinal.to_excel(writer, sheet_name = 'DATOS')\nmetadatos.to_excel(writer, sheet_name = 'METADATOS')\nvariables.to_excel(writer, sheet_name = 'VARIABLES')\nwriter.save()\nprint('---------------DONE---------------')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JuanIgnacioGil/basket-stats
NBA_stats_visualization/Shot charts Notebook.ipynb
mit
[ "How to Create NBA Shot Charts in Python\nIn this post I go over how to extract a player's shot chart data and then plot it using matplotlib and seaborn.", "%matplotlib inline\nimport requests\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nimport json", "Getting the data\nGetting the data from stats.nba.com is pretty straightforward. While there isn't a public API provided by the NBA,\nwe can actually access the API that the NBA uses for stats.nba.com using the requests library. \nThis blog post \nby Greg Reda does a great job of explaining how to access this API (or finding an API to any web app for that matter).", "playerID='2200'\n\nshot_chart_url ='http://stats.nba.com/stats/shotchartdetail?CFID=33&CFPARAMS=2015-16&' \\\n'ContextFilter=&ContextMeasure=FGA&DateFrom=&DateTo=&GameID=&GameSegment=&LastNGames=0&' \\\n'LeagueID=00&Location=&MeasureType=Base&Month=0&OpponentTeamID=0&Outcome=&PaceAdjust=N&' \\\n'PerMode=PerGame&Period=0&PlayerID='+playerID+'&PlusMinus=N&Position=&Rank=N&RookieYear=&' \\\n'Season=2015-16&SeasonSegment=&SeasonType=Regular+Season&TeamID=0&VsConference=&' \\\n'VsDivision=&mode=Advanced&showDetails=0&showShots=1&showZones=0'\n\nprint(shot_chart_url)", "The above url sends us to the JSON file containing the data we want. \nAlso note that the url contains the various API parameters used to access the data. 
\nThe PlayerID parameter in the url is set to the playerID value defined above (201935 is James Harden's PlayerID).\nNow let's use requests to get the data we want", "# Get the webpage containing the data\nresponse = requests.get(shot_chart_url)\n\n# Grab the headers to be used as column headers for our DataFrame\nheaders = response.json()['resultSets'][0]['headers']\n# Grab the shot chart data\nshots = response.json()['resultSets'][0]['rowSet']", "Create a pandas DataFrame using the scraped shot chart data.", "shot_df = pd.DataFrame(shots, columns=headers)\n\n# View the head of the DataFrame and all its columns\nfrom IPython.display import display\nwith pd.option_context('display.max_columns', None):\n display(shot_df.head())", "The above shot chart data contains all the field goal attempts James Harden took during the 2015-16 \nregular season. The data we want is found in LOC_X and LOC_Y. These are coordinate values for each shot \nattempt, which can then be plotted onto a set of axes that represent the basketball court.\nPlotting the Shot Chart Data\nLet's just quickly plot the data to see how it looks.", "sns.set_style(\"white\")\nsns.set_color_codes()\nplt.figure(figsize=(12,11))\nplt.scatter(shot_df.LOC_X, shot_df.LOC_Y)\nplt.show()", "Please note that the above plot misrepresents the data. The x-axis values are the inverse \nof what they actually should be. Let's plot the shots taken from only the right side to see \nthis issue.", "right = shot_df[shot_df.SHOT_ZONE_AREA == \"Right Side(R)\"]\nplt.figure(figsize=(12,11))\nplt.scatter(right.LOC_X, right.LOC_Y)\nplt.xlim(-300,300)\nplt.ylim(-100,500)\nplt.show()", "As we can see, the shots categorized as shots from the \"Right Side(R)\", \nwhile to the viewer's right, are actually to the left side of the hoop. \nThis is something we will need to fix when creating our final shot chart.\nDrawing the Court\nBut first we need to figure out how to draw the court lines onto our plot. 
By looking at the first plot and \nat the data we can roughly estimate that the center of the hoop is at the origin. We can also estimate that \nevery 10 units on either the x or y axis represents one foot. We can verify this by just looking at the first\nobservation in our DataFrame. That shot was taken from the Right Corner 3 spot from a distance of 22 feet\nwith a LOC_X value of 226. So the shot was taken from about 22.6 feet to the right of the hoop. Now that we \nknow this we can actually draw the court onto our plot.\nThe dimensions of a basketball court can be seen here, and here.\nUsing those dimensions we can convert them to fit the scale of our plot and just draw them using \nMatplotlib Patches. We'll be using Circle, Rectangle, and Arc objects to draw our court. \nNow to create our function that draws our basketball court.\nNOTE: While you can draw lines onto the plot using Line2D I found it more convenient to use Rectangles (without a height or width) instead.\nEDIT (Aug 4, 2015): I made a mistake in drawing the outerlines and the half court arcs. The outer courtlines height was changed from the incorrect value of 442.5 to 470. The y-values for the centers of the center court arcs were changed from 395 to 422.5. 
The ylim values for the plots were changed from (395, -47.5) to (422.5, -47.5)", "from matplotlib.patches import Circle, Rectangle, Arc\ndef draw_court(ax=None, color='black', lw=2, outer_lines=False):\n # If an axes object isn't provided to plot onto, just get current one\n if ax is None:\n ax = plt.gca()\n # Create the various parts of an NBA basketball court\n # Create the basketball hoop\n # Diameter of a hoop is 18\" so it has a radius of 9\", which is a value\n # 7.5 in our coordinate system\n hoop = Circle((0, 0), radius=7.5, linewidth=lw, color=color, fill=False)\n # Create backboard\n backboard = Rectangle((-30, -7.5), 60, -1, linewidth=lw, color=color)\n # The paint\n # Create the outer box of the paint, width=16ft, height=19ft\n outer_box = Rectangle((-80, -47.5), 160, 190, linewidth=lw, color=color,\n fill=False)\n # Create the inner box of the paint, width=12ft, height=19ft\n inner_box = Rectangle((-60, -47.5), 120, 190, linewidth=lw, color=color,\n fill=False)\n # Create free throw top arc\n top_free_throw = Arc((0, 142.5), 120, 120, theta1=0, theta2=180,\n linewidth=lw, color=color, fill=False)\n # Create free throw bottom arc\n bottom_free_throw = Arc((0, 142.5), 120, 120, theta1=180, theta2=0,\n linewidth=lw, color=color, linestyle='dashed')\n # Restricted Zone, it is an arc with 4ft radius from center of the hoop\n restricted = Arc((0, 0), 80, 80, theta1=0, theta2=180, linewidth=lw,\n color=color)\n # Three point line\n # Create the side 3pt lines, they are 14ft long before they begin to arc\n corner_three_a = Rectangle((-220, -47.5), 0, 140, linewidth=lw,\n color=color)\n corner_three_b = Rectangle((220, -47.5), 0, 140, linewidth=lw, color=color)\n # 3pt arc - center of arc will be the hoop, arc is 23'9\" away from hoop\n # I just played around with the theta values until they lined up with the\n # threes\n three_arc = Arc((0, 0), 475, 475, theta1=22, theta2=158, linewidth=lw,\n color=color)\n # Center Court\n center_outer_arc = Arc((0, 422.5), 120, 120, theta1=180, theta2=0,\n linewidth=lw, color=color)\n center_inner_arc = Arc((0, 422.5), 40, 40, theta1=180, theta2=0,\n linewidth=lw, color=color)\n # List of the court elements to be plotted onto the axes\n court_elements = [hoop, backboard, outer_box, inner_box, top_free_throw,\n bottom_free_throw, restricted, corner_three_a,\n corner_three_b, three_arc, center_outer_arc,\n center_inner_arc]\n if outer_lines:\n # Draw the half court line, baseline and side out bound lines\n outer_lines = Rectangle((-250, -47.5), 500, 470, linewidth=lw,\n color=color, fill=False)\n court_elements.append(outer_lines)\n \n # Add the court elements onto the axes\n for element in court_elements:\n ax.add_patch(element)\n return ax\n", "Let's draw our court", "plt.figure(figsize=(12,11))\ndraw_court(outer_lines=True)\nplt.xlim(-300,300)\nplt.ylim(-100,500)\nplt.show()", "Creating some Shot Charts\nNow plot our properly adjusted shot chart data along with the court. We can adjust \nthe x-values in two ways. We can either pass plt.scatter the negated LOC_X values, or we \ncan pass descending values to plt.xlim. We'll do the latter to plot\nour shot chart.", "plt.figure(figsize=(12,11))\nplt.scatter(shot_df.LOC_X, shot_df.LOC_Y)\ndraw_court(outer_lines=True)\n# Descending values along the axis from left to right\nplt.xlim(300,-300)\nplt.show()", "Let's orient our shot chart with the hoop by the top of the chart, which is the same orientation as the shot charts on stats.nba.com. We do this by setting descending y-values from the bottom to the top of the y-axis. 
When we do this we no longer need to adjust the x-values of our plot.", "plt.figure(figsize=(12,11))\nplt.scatter(shot_df.LOC_X, shot_df.LOC_Y)\ndraw_court(outer_lines=True)\n# Adjust plot limits to just fit in half court\nplt.xlim(-250,250)\n# Descending values along the y axis from bottom to top\n# in order to place the hoop by the top of plot\nplt.ylim(422.5, -47.5)\n# get rid of axis tick labels\nplt.tick_params(labelbottom=False, labelleft=False)\nplt.show()", "Let's create a few shot charts using jointplot from seaborn.", " # create our jointplot\njoint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y, \n stat_func=None,kind='scatter', space=0, alpha=0.5)\njoint_shot_chart.fig.set_size_inches(12,11)\n# A joint plot has 3 Axes, the first one called ax_joint\n# is the one we want to draw our court onto and adjust some other settings\nax = joint_shot_chart.ax_joint\ndraw_court(ax, outer_lines=True)\n# Adjust the axis limits and orientation of the plot in order\n# to plot half court, with the hoop by the top of the plot\nax.set_xlim(-250,250)\nax.set_ylim(422.5, -47.5)\n# Get rid of axis labels and tick marks\nax.set_xlabel('')\nax.set_ylabel('')\nax.tick_params(labelbottom=False, labelleft=False)\n# Add a title\nax.set_title('James Harden FGA \\n2015-16 Reg. Season',\n y=1.2, fontsize=18)\n# Add Data Source and Author\nauthors=\"\"\"Data Source: stats.nba.com\nAuthor: Juan Ignacio Gil\nOriginal code by Savvas Tjortjoglou (savvastjortjoglou.com)\"\"\"\n \nax.text(-250,460,authors,fontsize=12)\nplt.show()", "Getting a Player's Image\nWe could also scrape James Harden's picture from stats.nba.com and place it on our plot. 
\nWe can find his image at this url.\nTo retrieve the image for our plot we can use urlretrieve from urllib.request as follows:", "import urllib.request\n\n# we pass in the link to the image as the 1st argument\n# the 2nd argument is the filename urlretrieve saves the image to\n\npic = urllib.request.urlretrieve(\"http://stats.nba.com/media/players/230x185/\"+playerID+\".png\",\n playerID+\".png\")\n\n# urlretrieve returns a tuple with our image as the first\n# element and imread reads in the image as a\n# multidimensional numpy array so matplotlib can plot it\nharden_pic = plt.imread(pic[0])\n# plot the image\nplt.imshow(harden_pic)\nplt.show()", "Now to plot Harden's face on a jointplot we will import OffsetImage from matplotlib.offsetbox, which will allow us to place the image at the top right corner of the plot. \nSo let's create our shot chart like we did above, but this time we will create a KDE jointplot and at the end add \non our image.", "from matplotlib.offsetbox import OffsetImage\n\n# create our jointplot\n# get our colormap for the main kde plot\n# Note we can extract a color from cmap to use for\n# the plots that lie on the side and top axes\ncmap=plt.cm.YlOrRd_r\n\n# n_levels sets the number of contour lines for the main kde plot\njoint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y, stat_func=None,\n kind='kde', space=0, color=cmap(0.1),\n cmap=cmap, n_levels=50)\njoint_shot_chart.fig.set_size_inches(12,11)\n# A joint plot has 3 Axes, the first one called ax_joint\n# is the one we want to draw our court onto and adjust some other settings\nax = joint_shot_chart.ax_joint\ndraw_court(ax,outer_lines=True)\n# Adjust the axis limits and orientation of the plot in order\n# to plot half court, with the hoop by the top of the plot\nax.set_xlim(-250,250)\nax.set_ylim(422.5, -47.5)\n# Get rid of axis labels and tick marks\nax.set_xlabel('')\nax.set_ylabel('')\nax.tick_params(labelbottom=False, labelleft=False)\n# Add a title\nax.set_title('James Harden FGA \\n2015-16 Reg. Season',\n y=1.2, fontsize=18)\n# Add Data Source and Author\nax.text(-250,460,authors,fontsize=12)\n# Add Harden's image to the top right\n# First create our OffsetImage by passing in our image\n# and set the zoom level to make the image small enough\n# to fit on our plot\nimg = OffsetImage(harden_pic, zoom=0.6)\n# Pass in a tuple of x,y coordinates to set_offset\n# to place the plot where you want, I just played around\n# with the values until I found a spot where I wanted\n# the image to be\nimg.set_offset((625,621))\n# add the image\nax.add_artist(img)\nplt.show()", "And another jointplot but with hexbins.", "# create our jointplot\n\ncmap=plt.cm.gist_heat_r\njoint_shot_chart = sns.jointplot(shot_df.LOC_X, shot_df.LOC_Y, stat_func=None,\n kind='hex', space=0, color=cmap(.2), cmap=cmap)\n\njoint_shot_chart.fig.set_size_inches(12,11)\n\n# A joint plot has 3 Axes, the first one called ax_joint \n# is the one we want to draw our court onto \nax = joint_shot_chart.ax_joint\ndraw_court(ax)\n\n# Adjust the axis limits and orientation of the plot in order\n# to plot half court, with the hoop by the top of the plot\nax.set_xlim(-250,250)\nax.set_ylim(422.5, -47.5)\n\n# Get rid of axis labels and tick marks\nax.set_xlabel('')\nax.set_ylabel('')\nax.tick_params(labelbottom=False, labelleft=False)\n\n# Add a title\nax.set_title('FGA 2015-16 Reg. Season', y=1.2, fontsize=14)\n\n# Add Data Source and Author\nax.text(-250,450,authors, fontsize=12)\n\n# Add James Harden's image to the top right\nimg = OffsetImage(harden_pic, zoom=0.6)\nimg.set_offset((625,621))\nax.add_artist(img)\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
atulsingh0/MachineLearning
HandsOnML/code/03_classification.ipynb
gpl-3.0
[ "Chapter 3 โ€“ Classification\nThis notebook contains all the sample code and solutions to the exercises in chapter 3.\nSetup\nFirst, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:", "# To support both python 2 and python 3\nfrom __future__ import division, print_function, unicode_literals\n\n# Common imports\nimport numpy as np\nimport os\n\n# to make this notebook's output stable across runs\nnp.random.seed(42)\n\n# To plot pretty figures\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nplt.rcParams['axes.labelsize'] = 14\nplt.rcParams['xtick.labelsize'] = 12\nplt.rcParams['ytick.labelsize'] = 12\n\n# Where to save the figures\nPROJECT_ROOT_DIR = \".\"\nCHAPTER_ID = \"classification\"\n\ndef save_fig(fig_id, tight_layout=True):\n path = os.path.join(PROJECT_ROOT_DIR, \"images\", CHAPTER_ID, fig_id + \".png\")\n print(\"Saving figure\", fig_id)\n if tight_layout:\n plt.tight_layout()\n plt.savefig(path, format='png', dpi=300)", "MNIST", "from sklearn.datasets import fetch_mldata\nmnist = fetch_mldata('MNIST original')\nmnist\n\nX, y = mnist[\"data\"], mnist[\"target\"]\nX.shape\n\ny.shape\n\n28*28\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\n\nsome_digit = X[36000]\nsome_digit_image = some_digit.reshape(28, 28)\nplt.imshow(some_digit_image, cmap = matplotlib.cm.binary,\n interpolation=\"nearest\")\nplt.axis(\"off\")\n\nsave_fig(\"some_digit_plot\")\nplt.show()\n\ndef plot_digit(data):\n image = data.reshape(28, 28)\n plt.imshow(image, cmap = matplotlib.cm.binary,\n interpolation=\"nearest\")\n plt.axis(\"off\")\n\n# EXTRA\ndef plot_digits(instances, images_per_row=10, **options):\n size = 28\n images_per_row = min(len(instances), images_per_row)\n images = [instance.reshape(size,size) for instance in instances]\n n_rows = (len(instances) - 1) // images_per_row + 1\n row_images = 
[]\n n_empty = n_rows * images_per_row - len(instances)\n images.append(np.zeros((size, size * n_empty)))\n for row in range(n_rows):\n rimages = images[row * images_per_row : (row + 1) * images_per_row]\n row_images.append(np.concatenate(rimages, axis=1))\n image = np.concatenate(row_images, axis=0)\n plt.imshow(image, cmap = matplotlib.cm.binary, **options)\n plt.axis(\"off\")\n\nplt.figure(figsize=(9,9))\nexample_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]\nplot_digits(example_images, images_per_row=10)\nsave_fig(\"more_digits_plot\")\nplt.show()\n\ny[36000]\n\nX_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]\n\nimport numpy as np\n\nshuffle_index = np.random.permutation(60000)\nX_train, y_train = X_train[shuffle_index], y_train[shuffle_index]", "Binary classifier", "y_train_5 = (y_train == 5)\ny_test_5 = (y_test == 5)\n\nfrom sklearn.linear_model import SGDClassifier\n\nsgd_clf = SGDClassifier(random_state=42)\nsgd_clf.fit(X_train, y_train_5)\n\nsgd_clf.predict([some_digit])\n\nfrom sklearn.model_selection import cross_val_score\ncross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring=\"accuracy\")\n\nfrom sklearn.model_selection import StratifiedKFold\nfrom sklearn.base import clone\n\nskfolds = StratifiedKFold(n_splits=3, random_state=42)\n\nfor train_index, test_index in skfolds.split(X_train, y_train_5):\n clone_clf = clone(sgd_clf)\n X_train_folds = X_train[train_index]\n y_train_folds = (y_train_5[train_index])\n X_test_fold = X_train[test_index]\n y_test_fold = (y_train_5[test_index])\n\n clone_clf.fit(X_train_folds, y_train_folds)\n y_pred = clone_clf.predict(X_test_fold)\n n_correct = sum(y_pred == y_test_fold)\n print(n_correct / len(y_pred))\n\nfrom sklearn.base import BaseEstimator\nclass Never5Classifier(BaseEstimator):\n def fit(self, X, y=None):\n pass\n def predict(self, X):\n return np.zeros((len(X), 1), dtype=bool)\n\nnever_5_clf = Never5Classifier()\ncross_val_score(never_5_clf, 
X_train, y_train_5, cv=3, scoring=\"accuracy\")\n\nfrom sklearn.model_selection import cross_val_predict\n\ny_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)\n\nfrom sklearn.metrics import confusion_matrix\n\nconfusion_matrix(y_train_5, y_train_pred)\n\ny_train_perfect_predictions = y_train_5\n\nconfusion_matrix(y_train_5, y_train_perfect_predictions)\n\nfrom sklearn.metrics import precision_score, recall_score\n\nprecision_score(y_train_5, y_train_pred)\n\n4344 / (4344 + 1307)\n\nrecall_score(y_train_5, y_train_pred)\n\n4344 / (4344 + 1077)\n\nfrom sklearn.metrics import f1_score\nf1_score(y_train_5, y_train_pred)\n\n4344 / (4344 + (1077 + 1307)/2)\n\ny_scores = sgd_clf.decision_function([some_digit])\ny_scores\n\nthreshold = 0\ny_some_digit_pred = (y_scores > threshold)\n\ny_some_digit_pred\n\nthreshold = 200000\ny_some_digit_pred = (y_scores > threshold)\ny_some_digit_pred\n\ny_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3,\n method=\"decision_function\")", "Note: there is an issue introduced in Scikit-Learn 0.19.0 where the result of cross_val_predict() is incorrect in the binary classification case when using method=\"decision_function\", as in the code above. The resulting array has an extra first dimension full of 0s. 
We need to add this small hack for now to work around this issue:", "y_scores.shape\n\n# hack to work around issue #9589 introduced in Scikit-Learn 0.19.0\nif y_scores.ndim == 2:\n y_scores = y_scores[:, 1]\n\nfrom sklearn.metrics import precision_recall_curve\n\nprecisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)\n\ndef plot_precision_recall_vs_threshold(precisions, recalls, thresholds):\n plt.plot(thresholds, precisions[:-1], \"b--\", label=\"Precision\", linewidth=2)\n plt.plot(thresholds, recalls[:-1], \"g-\", label=\"Recall\", linewidth=2)\n plt.xlabel(\"Threshold\", fontsize=16)\n plt.legend(loc=\"upper left\", fontsize=16)\n plt.ylim([0, 1])\n\nplt.figure(figsize=(8, 4))\nplot_precision_recall_vs_threshold(precisions, recalls, thresholds)\nplt.xlim([-700000, 700000])\nsave_fig(\"precision_recall_vs_threshold_plot\")\nplt.show()\n\n(y_train_pred == (y_scores > 0)).all()\n\ny_train_pred_90 = (y_scores > 70000)\n\nprecision_score(y_train_5, y_train_pred_90)\n\nrecall_score(y_train_5, y_train_pred_90)\n\ndef plot_precision_vs_recall(precisions, recalls):\n plt.plot(recalls, precisions, \"b-\", linewidth=2)\n plt.xlabel(\"Recall\", fontsize=16)\n plt.ylabel(\"Precision\", fontsize=16)\n plt.axis([0, 1, 0, 1])\n\nplt.figure(figsize=(8, 6))\nplot_precision_vs_recall(precisions, recalls)\nsave_fig(\"precision_vs_recall_plot\")\nplt.show()", "ROC curves", "from sklearn.metrics import roc_curve\n\nfpr, tpr, thresholds = roc_curve(y_train_5, y_scores)\n\ndef plot_roc_curve(fpr, tpr, label=None):\n plt.plot(fpr, tpr, linewidth=2, label=label)\n plt.plot([0, 1], [0, 1], 'k--')\n plt.axis([0, 1, 0, 1])\n plt.xlabel('False Positive Rate', fontsize=16)\n plt.ylabel('True Positive Rate', fontsize=16)\n\nplt.figure(figsize=(8, 6))\nplot_roc_curve(fpr, tpr)\nsave_fig(\"roc_curve_plot\")\nplt.show()\n\nfrom sklearn.metrics import roc_auc_score\n\nroc_auc_score(y_train_5, y_scores)\n\nfrom sklearn.ensemble import RandomForestClassifier\nforest_clf = 
RandomForestClassifier(random_state=42)\ny_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3,\n method=\"predict_proba\")\n\ny_scores_forest = y_probas_forest[:, 1] # score = proba of positive class\nfpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5,y_scores_forest)\n\nplt.figure(figsize=(8, 6))\nplt.plot(fpr, tpr, \"b:\", linewidth=2, label=\"SGD\")\nplot_roc_curve(fpr_forest, tpr_forest, \"Random Forest\")\nplt.legend(loc=\"lower right\", fontsize=16)\nsave_fig(\"roc_curve_comparison_plot\")\nplt.show()\n\nroc_auc_score(y_train_5, y_scores_forest)\n\ny_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)\nprecision_score(y_train_5, y_train_pred_forest)\n\nrecall_score(y_train_5, y_train_pred_forest)", "Multiclass classification", "sgd_clf.fit(X_train, y_train)\nsgd_clf.predict([some_digit])\n\nsome_digit_scores = sgd_clf.decision_function([some_digit])\nsome_digit_scores\n\nnp.argmax(some_digit_scores)\n\nsgd_clf.classes_\n\nsgd_clf.classes_[5]\n\nfrom sklearn.multiclass import OneVsOneClassifier\novo_clf = OneVsOneClassifier(SGDClassifier(random_state=42))\novo_clf.fit(X_train, y_train)\novo_clf.predict([some_digit])\n\nlen(ovo_clf.estimators_)\n\nforest_clf.fit(X_train, y_train)\nforest_clf.predict([some_digit])\n\nforest_clf.predict_proba([some_digit])\n\ncross_val_score(sgd_clf, X_train, y_train, cv=3, scoring=\"accuracy\")\n\nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train.astype(np.float64))\ncross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring=\"accuracy\")\n\ny_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)\nconf_mx = confusion_matrix(y_train, y_train_pred)\nconf_mx\n\ndef plot_confusion_matrix(matrix):\n \"\"\"If you prefer color and a colorbar\"\"\"\n fig = plt.figure(figsize=(8,8))\n ax = fig.add_subplot(111)\n cax = ax.matshow(matrix)\n fig.colorbar(cax)\n\nplt.matshow(conf_mx, 
cmap=plt.cm.gray)\nsave_fig(\"confusion_matrix_plot\", tight_layout=False)\nplt.show()\n\nrow_sums = conf_mx.sum(axis=1, keepdims=True)\nnorm_conf_mx = conf_mx / row_sums\n\nnp.fill_diagonal(norm_conf_mx, 0)\nplt.matshow(norm_conf_mx, cmap=plt.cm.gray)\nsave_fig(\"confusion_matrix_errors_plot\", tight_layout=False)\nplt.show()\n\ncl_a, cl_b = 3, 5\nX_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]\nX_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]\nX_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]\nX_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]\n\nplt.figure(figsize=(8,8))\nplt.subplot(221); plot_digits(X_aa[:25], images_per_row=5)\nplt.subplot(222); plot_digits(X_ab[:25], images_per_row=5)\nplt.subplot(223); plot_digits(X_ba[:25], images_per_row=5)\nplt.subplot(224); plot_digits(X_bb[:25], images_per_row=5)\nsave_fig(\"error_analysis_digits_plot\")\nplt.show()", "Multilabel classification", "from sklearn.neighbors import KNeighborsClassifier\n\ny_train_large = (y_train >= 7)\ny_train_odd = (y_train % 2 == 1)\ny_multilabel = np.c_[y_train_large, y_train_odd]\n\nknn_clf = KNeighborsClassifier()\nknn_clf.fit(X_train, y_multilabel)\n\nknn_clf.predict([some_digit])", "Warning: the following cell may take a very long time (possibly hours depending on your hardware).", "y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_multilabel, cv=3)\nf1_score(y_multilabel, y_train_knn_pred, average=\"macro\")", "Multioutput classification", "noise = np.random.randint(0, 100, (len(X_train), 784))\nX_train_mod = X_train + noise\nnoise = np.random.randint(0, 100, (len(X_test), 784))\nX_test_mod = X_test + noise\ny_train_mod = X_train\ny_test_mod = X_test\n\nsome_index = 5500\nplt.subplot(121); plot_digit(X_test_mod[some_index])\nplt.subplot(122); plot_digit(y_test_mod[some_index])\nsave_fig(\"noisy_digit_example_plot\")\nplt.show()\n\nknn_clf.fit(X_train_mod, y_train_mod)\nclean_digit = 
knn_clf.predict([X_test_mod[some_index]])\nplot_digit(clean_digit)\nsave_fig(\"cleaned_digit_example_plot\")", "Extra material\nDummy (ie. random) classifier", "from sklearn.dummy import DummyClassifier\ndmy_clf = DummyClassifier()\ny_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method=\"predict_proba\")\ny_scores_dmy = y_probas_dmy[:, 1]\n\nfprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)\nplot_roc_curve(fprr, tprr)", "KNN classifier", "from sklearn.neighbors import KNeighborsClassifier\nknn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)\nknn_clf.fit(X_train, y_train)\n\ny_knn_pred = knn_clf.predict(X_test)\n\nfrom sklearn.metrics import accuracy_score\naccuracy_score(y_test, y_knn_pred)\n\nfrom scipy.ndimage.interpolation import shift\ndef shift_digit(digit_array, dx, dy, new=0):\n return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)\n\nplot_digit(shift_digit(some_digit, 5, 1, new=100))\n\nX_train_expanded = [X_train]\ny_train_expanded = [y_train]\nfor dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):\n shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)\n X_train_expanded.append(shifted_images)\n y_train_expanded.append(y_train)\n\nX_train_expanded = np.concatenate(X_train_expanded)\ny_train_expanded = np.concatenate(y_train_expanded)\nX_train_expanded.shape, y_train_expanded.shape\n\nknn_clf.fit(X_train_expanded, y_train_expanded)\n\ny_knn_expanded_pred = knn_clf.predict(X_test)\n\naccuracy_score(y_test, y_knn_expanded_pred)\n\nambiguous_digit = X_test[2589]\nknn_clf.predict_proba([ambiguous_digit])\n\nplot_digit(ambiguous_digit)", "Exercise solutions\nComing soon" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
renekm/CD-atualizado-
exercicios/ex3.ipynb
gpl-3.0
[ "Activity: Probability Theory\n\nLecture 08\nReading reference:\n1. Magalhães e Lima (7th edition): pages 49 to 68 - Probabilities\nToday:\n1. Concept of probability\n2. Conditional probability\n3. Independence of events\n4. Bayes' theorem. Simulation of the Monty Hall problem\nNext lecture:\n1. Magalhães e Lima (7th edition): pages 69 to 104 - Discrete random variables", "%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Required libraries\nfrom numpy.random import shuffle, randint, choice\n", "<font color='blue'>Exercise 1</font>\nSimulate 1000 rolls of a fair six-sided die. Plot the (normalized) frequency histogram.\na) Looking at the histogram, what can you say about the probability of each face?\n\nb) Discuss why the probabilities are not exactly equal to the theoretical ones.", "lista = []\n\nfor i in range(1, 1001):\n numero = randint(1, 7)\n lista.append(numero)\n\n# density=True replaces the deprecated normed=True\nplt.hist(lista, 6, density=True)\nplt.axis([1, 6, 0, 0.25])\nplt.xlabel('Die face')\nplt.ylabel('Frequency')\nplt.show()", "B\nThe probabilities are not exactly equal to the theoretical ones because 1000 rolls is still a fairly small number of trials, so the observed frequencies fluctuate around 1/6; as the number of rolls grows, the frequencies stabilize at the theoretical values.\n\n<font color='blue'>Exercise 2</font>\nExtending the sample space to the possible rolls of 2 dice, analyze the following situations:\na) Rolling both dice at the same time. What is the probability of getting a sum of 7?\n\nb) Rolling one die and then the second one. What is the probability of getting a sum of 7 once the result of the first die is known? Compare the result with the previous item! Why is it equal or different?", "#a\nsoma = 0\ni = 0\n\n# the original loop incremented i twice per iteration; increment it once\nwhile i < 1000:\n p1 = randint(1, 7)\n p2 = randint(1, 7)\n if p1 + p2 == 7:\n soma += 1\n i += 1\nprint(soma / i)\n", "B\nThe probability is the same as in the previous item, 1/6, because whatever the first die shows, exactly one of the six faces of the second die yields a sum of 7.\n\n<font color='blue'>Exercise 3</font>\nSimulate the Monty Hall problem¹ 10000 times, using the following algorithm:\n\n\nRepeat 10000 times:\n\nDraw a door number from 1 to 3 to be the winning door\nDraw a door number from 1 to 3 to be the chosen door.\n\nDraw a door number to be the opened door, provided it is neither the winning door nor the chosen door. So, if:\n\nthe winning door is 1 and the chosen one is 1, draw between doors 2 and 3 to be opened\nthe winning door is 1 and the chosen one is 2, door 3 must be opened with probability 1\nthe winning door is 1 and the chosen one is 3, door 2 must be opened with probability 1\nand so on for the remaining cases...\n\n\n\nCount how many times the contestant wins by switching doors. That is, if:\n\nthe winning door is 1, the chosen one is 1 and the opened one is 2 (or 3), the contestant loses by switching doors\nthe winning door is 1, the chosen one is 2 and the opened one is 3, the contestant wins by switching doors\nthe winning door is 1, the chosen one is 3 and the opened one is 2, the contestant wins by switching doors\nand so on for the remaining cases...\n\n\n\nShow how many times out of 10000 the contestant won by switching doors.\n\n\n\n\nCompare the numerical result with the analytical result obtained via Bayes' theorem.\n¹https://en.wikipedia.org/wiki/Monty_Hall_problem and \nExercise 1.4.5 of http://www.portalaction.com.br/probabilidades/14-eventos-independentes-e-probabilidade-condicional", "cont = 0 # wins by staying (car behind the chosen door)\nb = 0 # wins by switching\nfor i in range(10000):\n lista = ['g', 'g', 'c'] # two goats and one car\n shuffle(lista) # position 0 is the chosen door\n if lista[1] == 'c': # car behind door 1: host opens door 2\n del lista[2]\n elif lista[2] == 'c': # car behind door 2: host opens door 1\n del lista[1]\n else: # car behind the chosen door: host opens door 1 or 2 at random\n x = randint(1, 3) # numpy randint excludes the upper bound\n del lista[x]\n if lista[0] == 'c':\n cont += 1\n else:\n b += 1\n\nprint(cont / 10000) # ~1/3: probability of winning by staying\nprint(b / 10000) # ~2/3: probability of winning by switching" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
QuantStack/quantstack-talks
2019-07-10-CICM/notebooks/wealth-of-nations.ipynb
bsd-3-clause
[ "This is a bqplot recreation of Mike Bostock's Wealth of Nations. This was also done by Gapminder. It is originally based on a TED Talk by Hans Rosling.", "import pandas as pd\nimport numpy as np\nimport os\n\nfrom bqplot import (\n LogScale, LinearScale, OrdinalColorScale, ColorAxis,\n Axis, Scatter, Lines, CATEGORY10, Label, Figure, Tooltip\n)\n\nfrom ipywidgets import HBox, VBox, IntSlider, Play, jslink\n\ninitial_year = 1800", "Cleaning and Formatting JSON Data", "data = pd.read_json(os.path.abspath('./nations.json'))\n\ndef clean_data(data):\n for column in ['income', 'lifeExpectancy', 'population']:\n data = data.drop(data[data[column].apply(len) <= 4].index)\n return data\n\ndef extrap_interp(data):\n data = np.array(data)\n x_range = np.arange(1800, 2009, 1.)\n y_range = np.interp(x_range, data[:, 0], data[:, 1])\n return y_range\n\ndef extrap_data(data):\n for column in ['income', 'lifeExpectancy', 'population']:\n data[column] = data[column].apply(extrap_interp)\n return data\n\ndata = clean_data(data)\ndata = extrap_data(data)\n\nincome_min, income_max = np.min(data['income'].apply(np.min)), np.max(data['income'].apply(np.max))\nlife_exp_min, life_exp_max = np.min(data['lifeExpectancy'].apply(np.min)), np.max(data['lifeExpectancy'].apply(np.max))\npop_min, pop_max = np.min(data['population'].apply(np.min)), np.max(data['population'].apply(np.max))\n\ndef get_data(year):\n year_index = year - 1800\n income = data['income'].apply(lambda x: x[year_index])\n life_exp = data['lifeExpectancy'].apply(lambda x: x[year_index])\n pop = data['population'].apply(lambda x: x[year_index])\n return income, life_exp, pop", "Creating the Tooltip to display the required fields\nbqplot's native Tooltip allows us to simply display the data fields we require on a mouse-interaction.", "tt = Tooltip(fields=['name', 'x', 'y'], labels=['Country Name', 'Income per Capita', 'Life Expectancy'])", "Creating the Label to display the year\nStaying true to the d3 recreation of the 
talk, we place a Label widget in the bottom-right of the Figure (it inherits the Figure co-ordinates when no scale is passed to it). With enable_move set to True, the Label can be dragged around.", "year_label = Label(x=[0.75], y=[0.10], font_size=52, font_weight='bolder', colors=['orange'],\n text=[str(initial_year)], enable_move=True)", "Defining Axes and Scales\nThe inherent skewness of the income data favors the use of a LogScale. Also, since the color coding by regions does not follow an ordering, we use the OrdinalColorScale.", "x_sc = LogScale(min=income_min, max=income_max)\ny_sc = LinearScale(min=life_exp_min, max=life_exp_max)\nc_sc = OrdinalColorScale(domain=data['region'].unique().tolist(), colors=CATEGORY10[:6])\nsize_sc = LinearScale(min=pop_min, max=pop_max)\n\nax_y = Axis(label='Life Expectancy', scale=y_sc, orientation='vertical', side='left', grid_lines='solid')\nax_x = Axis(label='Income per Capita', scale=x_sc, grid_lines='solid')", "Creating the Scatter Mark with the appropriate size and color parameters passed\nTo generate the appropriate graph, we need to pass the population of the country to the size attribute and its region to the color attribute.", "# Start with the first year's data\ncap_income, life_exp, pop = get_data(initial_year)\n\nwealth_scat = Scatter(x=cap_income, y=life_exp, color=data['region'], size=pop,\n names=data['name'], display_names=False,\n scales={'x': x_sc, 'y': y_sc, 'color': c_sc, 'size': size_sc},\n default_size=4112, tooltip=tt, animate=True, stroke='Black',\n unhovered_style={'opacity': 0.5})\n\nnation_line = Lines(x=data['income'][0], y=data['lifeExpectancy'][0], colors=['Gray'],\n scales={'x': x_sc, 'y': y_sc}, visible=False)", "Creating the Figure", "time_interval = 10\n\nfig = Figure(marks=[wealth_scat, year_label, nation_line], axes=[ax_x, ax_y],\n title='Health and Wealth of Nations', animation_duration=time_interval)", "Using a Slider to allow the user to change the year and a button for animation\nHere we 
see how we can seamlessly integrate bqplot into the jupyter widget infrastructure.", "year_slider = IntSlider(min=1800, max=2008, step=1, description='Year', value=initial_year)", "When the hovered_point of the Scatter plot is changed (i.e. when the user hovers over a different element), the entire path of that country is displayed by making the Lines object visible and setting it's x and y attributes.", "def hover_changed(change):\n if change.new is not None:\n nation_line.x = data['income'][change.new + 1]\n nation_line.y = data['lifeExpectancy'][change.new + 1]\n nation_line.visible = True\n else:\n nation_line.visible = False\n \nwealth_scat.observe(hover_changed, 'hovered_point')", "On the slider value callback (a function that is triggered everytime the value of the slider is changed) we change the x, y and size co-ordinates of the Scatter. We also update the text of the Label to reflect the current year.", "def year_changed(change):\n wealth_scat.x, wealth_scat.y, wealth_scat.size = get_data(year_slider.value)\n year_label.text = [str(year_slider.value)]\n\nyear_slider.observe(year_changed, 'value')", "Add an animation button", "play_button = Play(min=1800, max=2008, interval=time_interval)\njslink((play_button, 'value'), (year_slider, 'value'))", "Displaying the GUI", "VBox([HBox([play_button, year_slider]), fig])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ledeprogram/algorithms
class7/homework/najmabadi_shannon_7_1.ipynb
gpl-3.0
[ "We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)", "import numpy as np\nimport pandas as pd\nimport pydotplus\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfrom pandas.tools.plotting import scatter_matrix\nfrom sklearn import datasets\nfrom sklearn import tree\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn import metrics\nfrom sklearn.externals.six import StringIO", "1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)", "iris = datasets.load_iris()\niris\n\nx = iris.data[:,2:]\ny = iris.target\ndt = tree.DecisionTreeClassifier()\n\nx_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)\n\ndt = dt.fit(x_train,y_train)\n\ndef measure_performance(x,y,dt, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):\n y_pred=dt.predict(x)\n if show_accuracy:\n print(\"Accuracy:{0:.3f}\".format(metrics.accuracy_score(y, y_pred)),\"\\n\")\n if show_classification_report:\n print(\"Classification report\")\n print(metrics.classification_report(y,y_pred),\"\\n\")\n if show_confusion_matrix:\n print(\"Confusion matrix\")\n print(metrics.confusion_matrix(y,y_pred),\"\\n\")\n\nmeasure_performance(x_test,y_test,dt)", "Nearly 95%\naccuracy, with 100% precision for the first species (0), and progressively less precision for the latter two species (1, 2) . \n30 were classified as species 1, 26 as species 2, and 15 as species 3; though there were two falsely classified samples in both species 2 and species 3. 
\n2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.", "x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)\ndt = dt.fit(x_train,y_train)\ny_pred=dt.predict(x_test)\nmeasure_performance(x_test,y_test,dt)", "Over 97% accuracy, with again 100% precision for the first species (0), and similarly high precision for the latter two species (1, 2).\n12 were classified as species 1, 14 as species 2, and 11 as species 3; and there was one falsely classified sample in species 3.\nThere are fewer total samples because the test split is smaller (25% instead of 50%).\n3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?\nFor context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29", "bc = datasets.load_breast_cancer()\nbc", "The attributes (x) are listed near the top of the dataset, and include radius, texture, perimeter, etc.\nThe target (y) is what we are trying to predict. Here, that is whether a tumor is malignant or benign.", "x = bc.data[:,:] \ny = bc.target \nx_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.5,train_size=0.5)\ndt = dt.fit(x_train,y_train)\n\ndef measure_performance(x,y,dt, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):\n y_pred=dt.predict(x)\n if show_accuracy:\n print(\"Accuracy:{0:.5f}\".format(metrics.accuracy_score(y, y_pred)),\"\\n\")\n if show_classification_report:\n print(\"Classification report\")\n print(metrics.classification_report(y,y_pred),\"\\n\")\n if show_confusion_matrix:\n print(\"Confusion matrix\")\n print(metrics.confusion_matrix(y,y_pred),\"\\n\")\n\nmeasure_performance(x_test,y_test,dt)", "4. Using the breast cancer data, create a classifier to predict the tumor type. 
Perform the above hold-out evaluation (50-50 and 75-25) and discuss the results.", "x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)\ndt = dt.fit(x_train,y_train)\n\nmeasure_performance(x_test,y_test,dt)", "Over 95% accuracy, with roughly 95% precision. 7 out of 143 samples were misclassified." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/en-snapshot/lite/models/modify/model_maker/speech_recognition.ipynb
apache-2.0
[ "Copyright 2022 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Retrain a speech recognition model with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/modify/model_maker/speech_recognition\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/modify/model_maker/speech_recognition.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n\n</table>\n\nIn this colab notebook, you'll learn how to use the TensorFlow Lite Model Maker to train a speech recognition model that can classify spoken words or short phrases using one-second sound samples. 
The Model Maker library uses transfer learning to retrain an existing TensorFlow model with a new dataset, which reduces the amount of sample data and time required for training. \nBy default, this notebook retrains the model (BrowserFft, from the TFJS Speech Command Recognizer) using a subset of words from the speech commands dataset (such as \"up,\" \"down,\" \"left,\" and \"right\"). Then it exports a TFLite model that you can run on a mobile device or embedded system (such as a Raspberry Pi). It also exports the trained model as a TensorFlow SavedModel.\nThis notebook is also designed to accept a custom dataset of WAV files, uploaded to Colab in a ZIP file. The more samples you have for each class, the better your accuracy will be, but because the transfer learning process uses feature embeddings from the pre-trained model, you can still get a fairly accurate model with only a few dozen samples in each of your classes.\nNote: The model we'll be training is optimized for speech recognition with one-second samples. If you want to perform more generic audio classification (such as detecting different types of music), we suggest you instead follow this Colab to retrain an audio classifier.\nIf you want to run the notebook with the default speech dataset, you can run the whole thing now by clicking Runtime > Run all in the Colab toolbar. 
However, if you want to use your own dataset, then continue down to Prepare the dataset and follow the instructions there.\nImport the required packages\nYou'll need TensorFlow, TFLite Model Maker, and some modules for audio manipulation, playback, and visualizations.", "!sudo apt -y install libportaudio2\n!pip install tflite-model-maker\n\nimport os\nimport glob\nimport random\nimport shutil\n\nimport librosa\nimport soundfile as sf\nfrom IPython.display import Audio\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nimport tensorflow as tf\nimport tflite_model_maker as mm\nfrom tflite_model_maker import audio_classifier\nfrom tflite_model_maker.config import ExportFormat\n\nprint(f\"TensorFlow Version: {tf.__version__}\")\nprint(f\"Model Maker Version: {mm.__version__}\")", "Prepare the dataset\nTo train with the default speech dataset, just run all the code below as-is.\nBut if you want to train with your own speech dataset, follow these steps:\nNote: \nThe model you'll retrain expects input data to be roughly one second of audio at 44.1 kHz. Model Maker perfoms automatic resampling for the training dataset, so there's no need to resample your dataset if it has a sample rate other than 44.1 kHz. But beware that audio samples longer than one second will be split into multiple one-second chunks, and the final chunk will be discarded if it's shorter than one second.\n\nBe sure each sample in your dataset is in WAV file format, about one second long. Then create a ZIP file with all your WAV files, organized into separate subfolders for each classification. For example, each sample for a speech command \"yes\" should be in a subfolder named \"yes\". Even if you have only one class, the samples must be saved in a subdirectory with the class name as the directory name. 
(This script assumes your dataset is not split into train/validation/test sets and performs that split for you.)\nClick the Files tab in the left panel and just drag-drop your ZIP file there to upload it.\nUse the following drop-down option to set use_custom_dataset to True.\nThen skip to Prepare a custom audio dataset to specify your ZIP filename and dataset directory name.", "use_custom_dataset = False #@param [\"False\", \"True\"] {type:\"raw\"}", "Generate a background noise dataset\nWhether you're using the default speech dataset or a custom dataset, you should have a good set of background noises so your model can distinguish speech from other noises (including silence). \nBecause the following background samples are provided in WAV files that are a minute long or longer, we need to split them up into smaller one-second samples so we can reserve some for our test dataset. We'll also combine a couple different sample sources to build a comprehensive set of background noises and silence:", "tf.keras.utils.get_file('speech_commands_v0.01.tar.gz',\n 'http://download.tensorflow.org/data/speech_commands_v0.01.tar.gz',\n cache_dir='./',\n cache_subdir='dataset-speech',\n extract=True)\ntf.keras.utils.get_file('background_audio.zip',\n 'https://storage.googleapis.com/download.tensorflow.org/models/tflite/sound_classification/background_audio.zip',\n cache_dir='./',\n cache_subdir='dataset-background',\n extract=True)\n", "Note: Although there is a newer version available, we're using v0.01 of the speech commands dataset because it's a smaller download. 
v0.01 includes 30 commands, while v0.02 adds five more (\"backward\", \"forward\", \"follow\", \"learn\", and \"visual\").", "# Create a list of all the background wav files\nfiles = glob.glob(os.path.join('./dataset-speech/_background_noise_', '*.wav'))\nfiles = files + glob.glob(os.path.join('./dataset-background', '*.wav'))\n\nbackground_dir = './background'\nos.makedirs(background_dir, exist_ok=True)\n\n# Loop through all files and split each into several one-second wav files\nfor file in files:\n filename = os.path.basename(os.path.normpath(file))\n print('Splitting', filename)\n name = os.path.splitext(filename)[0]\n rate = librosa.get_samplerate(file)\n length = round(librosa.get_duration(filename=file))\n for i in range(length - 1):\n start = i * rate\n stop = (i * rate) + rate\n data, _ = sf.read(file, start=start, stop=stop)\n sf.write(os.path.join(background_dir, name + str(i) + '.wav'), data, rate)", "Prepare the speech commands dataset\nWe already downloaded the speech commands dataset, so now we just need to prune the number of classes for our model.\nThis dataset includes over 30 speech command classifications, and most of them have over 2,000 samples. But because we're using transfer learning, we don't need that many samples. 
So the following code does a few things:\n\nSpecify which classifications we want to use, and delete the rest.\nKeep only 150 samples of each class for training (to prove that transfer learning works well with smaller datasets and simply to reduce the training time).\nCreate a separate directory for a test dataset so we can easily run inference with them later.", "if not use_custom_dataset:\n commands = [ \"up\", \"down\", \"left\", \"right\", \"go\", \"stop\", \"on\", \"off\", \"background\"]\n dataset_dir = './dataset-speech'\n test_dir = './dataset-test'\n\n # Move the processed background samples\n shutil.move(background_dir, os.path.join(dataset_dir, 'background')) \n\n # Delete all directories that are not in our commands list\n dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n for dir in dirs:\n name = os.path.basename(os.path.normpath(dir))\n if name not in commands:\n shutil.rmtree(dir)\n\n # Count is per class\n sample_count = 150\n test_data_ratio = 0.2\n test_count = round(sample_count * test_data_ratio)\n\n # Loop through child directories (each class of wav files)\n dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n for dir in dirs:\n files = glob.glob(os.path.join(dir, '*.wav'))\n random.seed(42)\n random.shuffle(files)\n # Move test samples:\n for file in files[sample_count:sample_count + test_count]:\n class_dir = os.path.basename(os.path.normpath(dir))\n os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)\n os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))\n # Delete remaining samples\n for file in files[sample_count + test_count:]:\n os.remove(file)", "Prepare a custom dataset\nIf you want to train the model with our own speech dataset, you need to upload your samples as WAV files in a ZIP (as described above) and modify the following variables to specify your dataset:", "if use_custom_dataset:\n # Specify the ZIP file you uploaded:\n !unzip YOUR-FILENAME.zip\n # Specify the unzipped path to your custom 
dataset\n # (this path contains all the subfolders with classification names):\n dataset_dir = './YOUR-DIRNAME'", "After changing the filename and path name above, you're ready to train the model with your custom dataset. In the Colab toolbar, select Runtime > Run all to run the whole notebook.\nThe following code integrates our new background noise samples into your dataset and then separates a portion of all samples to create a test set.", "def move_background_dataset(dataset_dir):\n dest_dir = os.path.join(dataset_dir, 'background')\n if os.path.exists(dest_dir):\n files = glob.glob(os.path.join(background_dir, '*.wav'))\n for file in files:\n shutil.move(file, dest_dir)\n else:\n shutil.move(background_dir, dest_dir)\n\nif use_custom_dataset:\n # Move background samples into custom dataset\n move_background_dataset(dataset_dir)\n\n # Now we separate some of the files that we'll use for testing:\n test_dir = './dataset-test'\n test_data_ratio = 0.2\n dirs = glob.glob(os.path.join(dataset_dir, '*/'))\n for dir in dirs:\n files = glob.glob(os.path.join(dir, '*.wav'))\n test_count = round(len(files) * test_data_ratio)\n random.seed(42)\n random.shuffle(files)\n # Move test samples:\n for file in files[:test_count]:\n class_dir = os.path.basename(os.path.normpath(dir))\n os.makedirs(os.path.join(test_dir, class_dir), exist_ok=True)\n os.rename(file, os.path.join(test_dir, class_dir, os.path.basename(file)))\n print('Moved', test_count, 'images from', class_dir)", "Play a sample\nTo be sure the dataset looks correct, let's play a random sample from the test set:", "def get_random_audio_file(samples_dir):\n files = os.path.abspath(os.path.join(samples_dir, '*/*.wav'))\n files_list = glob.glob(files)\n random_audio_path = random.choice(files_list)\n return random_audio_path\n\ndef show_sample(audio_path):\n audio_data, sample_rate = sf.read(audio_path)\n class_name = os.path.basename(os.path.dirname(audio_path))\n print(f'Class: {class_name}')\n print(f'File: 
{audio_path}')\n print(f'Sample rate: {sample_rate}')\n print(f'Sample length: {len(audio_data)}')\n\n plt.title(class_name)\n plt.plot(audio_data)\n display(Audio(audio_data, rate=sample_rate))\n\nrandom_audio = get_random_audio_file(test_dir)\nshow_sample(random_audio)", "Define the model\nWhen using Model Maker to retrain any model, you have to start by defining a model spec. The spec defines the base model from which your new model will extract feature embeddings to begin learning new classes. The spec for this speech recognizer is based on the pre-trained BrowserFft model from TFJS.\nThe model expects input as an audio sample that's 44.1 kHz, and just under a second long: the exact sample length must be 44034 frames.\nYou don't need to do any resampling with your training dataset. Model Maker takes care of that for you. But when you later run inference, you must be sure that your input matches that expected format.\nAll you need to do here is instantiate the BrowserFftSpec:", "spec = audio_classifier.BrowserFftSpec()", "Load your dataset\nNow you need to load your dataset according to the model specifications. Model Maker includes the DataLoader API, which will load your dataset from a folder and ensure it's in the expected format for the model spec.\nWe already reserved some test files by moving them to a separate directory, which makes it easier to run inference with them later. 
Now we'll create a DataLoader for each split: the training set, the validation set, and the test set.\nLoad the speech commands dataset", "if not use_custom_dataset:\n train_data_ratio = 0.8\n train_data = audio_classifier.DataLoader.from_folder(\n spec, dataset_dir, cache=True)\n train_data, validation_data = train_data.split(train_data_ratio)\n test_data = audio_classifier.DataLoader.from_folder(\n spec, test_dir, cache=True)", "Load a custom dataset\nNote: Setting cache=True is important to make training faster (especially when the dataset must be re-sampled) but it will also require more RAM to hold the data. If you use a very large custom dataset, caching might exceed your RAM capacity.", "if use_custom_dataset:\n train_data_ratio = 0.8\n train_data = audio_classifier.DataLoader.from_folder(\n spec, dataset_dir, cache=True)\n train_data, validation_data = train_data.split(train_data_ratio)\n test_data = audio_classifier.DataLoader.from_folder(\n spec, test_dir, cache=True)\n", "Train the model\nNow we'll use the Model Maker create() function to create a model based on our model spec and training dataset, and begin training.\nIf you're using a custom dataset, you might want to change the batch size as appropriate for the number of samples in your train set.\nNote: The first epoch takes longer because it must create the cache.", "# If your dataset has fewer than 100 samples per class,\n# you might want to try a smaller batch size\nbatch_size = 25\nepochs = 25\nmodel = audio_classifier.create(train_data, spec, validation_data, batch_size, epochs)", "Review the model performance\nEven if the accuracy/loss looks good from the training output above, it's important to also run the model using test data that the model has not seen yet, which is what the evaluate() method does here:", "model.evaluate(test_data)", "View the confusion matrix\nWhen training a classification model such as this one, it's also useful to inspect the confusion matrix. 
The confusion matrix gives you a detailed visual representation of how well your classifier performs for each classification in your test data.", "def show_confusion_matrix(confusion, test_labels):\n \"\"\"Compute confusion matrix and normalize.\"\"\"\n # Normalize each row (true class) so it sums to 1.\n confusion_normalized = confusion.astype(\"float\") / confusion.sum(axis=1)[:, np.newaxis]\n sns.set(rc = {'figure.figsize':(6,6)})\n sns.heatmap(\n confusion_normalized, xticklabels=test_labels, yticklabels=test_labels,\n cmap='Blues', annot=True, fmt='.2f', square=True, cbar=False)\n plt.title(\"Confusion matrix\")\n plt.ylabel(\"True label\")\n plt.xlabel(\"Predicted label\")\n\nconfusion_matrix = model.confusion_matrix(test_data)\nshow_confusion_matrix(confusion_matrix.numpy(), test_data.index_to_label)", "Export the model\nThe last step is exporting your model into the TensorFlow Lite format for execution on mobile/embedded devices and into the SavedModel format for execution elsewhere.\nWhen exporting a .tflite file from Model Maker, it includes model metadata that describes various details that can later help during inference. It even includes a copy of the classification labels file, so you don't need a separate labels.txt file. (In the next section, we show how to use this metadata to run an inference.)", "TFLITE_FILENAME = 'browserfft-speech.tflite'\nSAVE_PATH = './models'\n\nprint(f'Exporting the model to {SAVE_PATH}')\nmodel.export(SAVE_PATH, tflite_filename=TFLITE_FILENAME)\nmodel.export(SAVE_PATH, export_format=[mm.ExportFormat.SAVED_MODEL, mm.ExportFormat.LABEL])", "Run inference with TF Lite model\nNow your TFLite model can be deployed and run using any of the supported inferencing libraries or with the new TFLite AudioClassifier Task API. The following code shows how you can run inference with the .tflite model in Python.", "# This library provides the TFLite metadata API\n! 
pip install -q tflite_support\n\nfrom tflite_support import metadata\nimport json\n\ndef get_labels(model):\n \"\"\"Returns a list of labels, extracted from the model metadata.\"\"\"\n displayer = metadata.MetadataDisplayer.with_model_file(model)\n labels_file = displayer.get_packed_associated_file_list()[0]\n labels = displayer.get_associated_file_buffer(labels_file).decode()\n return [line for line in labels.split('\\n')]\n\ndef get_input_sample_rate(model):\n \"\"\"Returns the model's expected sample rate, from the model metadata.\"\"\"\n displayer = metadata.MetadataDisplayer.with_model_file(model)\n metadata_json = json.loads(displayer.get_metadata_json())\n input_tensor_metadata = metadata_json['subgraph_metadata'][0][\n 'input_tensor_metadata'][0]\n input_content_props = input_tensor_metadata['content']['content_properties']\n return input_content_props['sample_rate']", "To observe how well the model performs with real samples, run the following code block over and over. Each time, it will fetch a new test sample and run inference with it, and you can listen to the audio sample below.", "# Get a WAV file for inference and list of labels from the model\ntflite_file = os.path.join(SAVE_PATH, TFLITE_FILENAME)\nlabels = get_labels(tflite_file)\nrandom_audio = get_random_audio_file(test_dir)\n\n# Ensure the audio sample fits the model input\ninterpreter = tf.lite.Interpreter(tflite_file)\ninput_details = interpreter.get_input_details()\noutput_details = interpreter.get_output_details()\ninput_size = input_details[0]['shape'][1]\nsample_rate = get_input_sample_rate(tflite_file)\naudio_data, _ = librosa.load(random_audio, sr=sample_rate)\nif len(audio_data) < input_size:\n audio_data.resize(input_size)\naudio_data = np.expand_dims(audio_data[:input_size], axis=0)\n\n# Run inference\ninterpreter.allocate_tensors()\ninterpreter.set_tensor(input_details[0]['index'], audio_data)\ninterpreter.invoke()\noutput_data = 
interpreter.get_tensor(output_details[0]['index'])\n\n# Display prediction and ground truth\ntop_index = np.argmax(output_data[0])\nlabel = labels[top_index]\nscore = output_data[0][top_index]\nprint('---prediction---')\nprint(f'Class: {label}\\nScore: {score}')\nprint('----truth----')\nshow_sample(random_audio)", "Download the TF Lite model\nNow you can deploy the TF Lite model to your mobile or embedded device. You don't need to download the labels file because you can instead retrieve the labels from .tflite file metadata, as shown in the previous inferencing example.", "try:\n from google.colab import files\nexcept ImportError:\n pass\nelse:\n files.download(tflite_file)", "Check out our end-to-end example apps that perform inferencing with TFLite audio models on Android and iOS." ]
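The inference snippet above reports only the single best class via `np.argmax`. When scores are close, it can help to inspect the top few candidates instead. Below is a small helper sketch; the function name and the toy label/score arrays are invented for illustration — in the notebook you would pass `output_data[0]` and the `labels` list returned by `get_labels()`.

```python
import numpy as np

def top_k_predictions(scores, labels, k=3):
    """Return the k (label, score) pairs with the highest scores."""
    # Sort indices by descending score and keep the first k.
    order = np.argsort(scores)[::-1][:k]
    return [(labels[i], float(scores[i])) for i in order]

# Toy example (hypothetical scores, not real model output):
toy_labels = ['up', 'down', 'left', 'right', 'background']
toy_scores = np.array([0.05, 0.70, 0.12, 0.03, 0.10])
print(top_k_predictions(toy_scores, toy_labels, k=2))
# → [('down', 0.7), ('left', 0.12)]
```

In the inference loop this would replace the `np.argmax` block, e.g. `top_k_predictions(output_data[0], labels, k=3)`.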
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ceos-seo/data_cube_notebooks
notebooks/feature_extraction/Clustering_Notebook.ipynb
apache-2.0
[ "<a id=\"clustering_notebook_top\"></a>\nClustering Notebook\n<hr>\n\nNotebook Summary\nThis notebook demonstrates how to use Open Data Cube utilities to cluster geospatial data. \n<hr>\n\nIndex\n\nImport Dependencies and Connect to the Data Cube\nChoose Platform and Product\nGet the Maximum Extents of the Cube\nDefine the Extents of the Analysis (selecting too much can make the acquisition process slow)\nLoad Data from the Data Cube and Create a Composite\nExamine the Composite and Export as a GeoTIFF\nPerform Clustering\nVisualize the Clustered Data\nExport the Clustered Data as a GeoTIFF\n\n<span id=\"clustering_notebook_import\">Import Dependencies and Connect to the Data Cube &#9652;</span>", "import sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport datacube\nimport datetime as dt\nimport xarray as xr \nimport numpy as np\n\nfrom utils.data_cube_utilities.data_access_api import DataAccessApi\nfrom utils.data_cube_utilities.plotter_utils import figure_ratio\n\napi = DataAccessApi()\ndc = api.dc", "<span id=\"clustering_notebook_plat_prod\">Choose Platform and Product &#9652;</span>\nExamine available products", "# Get available products\nproducts_info = dc.list_products()\n\n# List LANDSAT 7 products\nprint(\"LANDSAT 7 Products:\")\nproducts_info[[\"platform\", \"name\"]][products_info.platform == \"LANDSAT_7\"]\n\n# List LANDSAT 8 products\nprint(\"LANDSAT 8 Products:\")\nproducts_info[[\"platform\", \"name\"]][products_info.platform == \"LANDSAT_8\"]", "Choose product and platform", "product = 'ls8_usgs_sr_scene'\nplatform = 'LANDSAT_8'\ncollection = 'c1'\nlevel = 'l2'", "<span id=\"clustering_notebook_extents\">Get the Maximum Extents of the Cube &#9652;</span>", "from utils.data_cube_utilities.dc_load import get_product_extents\n\nfull_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)\n\nprint(\"{}:\".format(platform))\nprint(\"Lat bounds:\", 
full_lat)\nprint(\"Lon bounds:\", full_lon)\nprint(\"Time bounds:\", min_max_dates) \n\nfrom utils.data_cube_utilities.dc_display_map import display_map\n\n# Display the total shared area available for these datacube products.\ndisplay_map(latitude = full_lat,longitude = full_lon)", "<span id=\"clustering_notebook_define_extents\">Define the Extents of the Analysis &#9652;</span>\nSpecify start and end dates in the same order as platforms and products", "# from datetime import datetime \n# start_date, end_date = (datetime(2010,1,1), datetime(2011,1,1))\n# start_date, end_date = dt.datetime(2014,1,1), dt.datetime(2016,1,1)\nstart_date, end_date = dt.datetime(2014,9,1), dt.datetime(2015,3,1)\ndate_range = (start_date, end_date)", "Specify an area to analyze", "# Specify latitude and longitude bounds of an interesting area within the full extents\n\n# Vietnam\n# lat_small = (9.8, 9.85) # Area #1\n# lon_small = (105.1, 105.15) # Area #1\n\n# Ghana\n# Weija Reservoir - North\nlat_small = (5.5974, 5.6270)\nlon_small = (-0.3900, -0.3371)", "Visualize the selected area", "display_map(latitude = lat_small,longitude = lon_small)", "<span id=\"clustering_notebook_retrieve_data\">Load Data from the Data Cube and Create a Composite &#9652;</span>\nCreate geographic chunks for efficient processing", "from utils.data_cube_utilities.dc_chunker import create_geographic_chunks\n\ngeographic_chunks = create_geographic_chunks(\n latitude=lat_small, \n longitude=lon_small, \n geographic_chunk_size=.05)", "Create a geomedian composite", "from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full\nfrom utils.data_cube_utilities.dc_mosaic import create_hdmedians_multiple_band_mosaic\n\nmeasurements = ['blue', 'green', 'red', 'nir', 'swir1', 'swir2', 'pixel_qa']\nproduct_chunks = []\n\nfor index, chunk in enumerate(geographic_chunks):\n data = dc.load(measurements = measurements,\n time = date_range,\n platform = platform,\n product = product,\n 
longitude=chunk['longitude'],\n latitude=chunk['latitude'])\n # Mask out clouds and scan lines.\n clean_mask = landsat_clean_mask_full(dc, data, product=product, platform=platform,\n collection=collection, level=level)\n # Create the mosaic.\n product_chunks.append(create_hdmedians_multiple_band_mosaic(data, clean_mask=clean_mask, dtype=np.float32))", "Combine the chunks to produce the final mosaic", "from utils.data_cube_utilities.dc_chunker import combine_geographic_chunks\nfinal_composite = combine_geographic_chunks(product_chunks)", "<span id=\"clustering_notebook_examine_composite\">Examine the Composite and Export as a GeoTIFF &#9652;</span>\n\nTrue color", "from utils.data_cube_utilities.dc_rgb import rgb\n\nfig = plt.figure(figsize=figure_ratio(final_composite, fixed_width=8, fixed_height=8))\nrgb(final_composite, bands=['red', 'green', 'blue'], fig=fig)\nplt.title('True Color Geomedian Composite', fontsize=16)\nplt.show()", "False color", "fig = plt.figure(figsize=figure_ratio(final_composite, fixed_width=8, fixed_height=8))\nrgb(final_composite, bands=['swir1', 'nir', 'red'], fig=fig)\nplt.title('False Color Geomedian Composite', fontsize=16)\nplt.show()", "Example of a composited swir1 band", "final_composite.swir1.plot(figsize = figure_ratio(final_composite, fixed_width=10, \n fixed_height=10), cmap = 'magma')\nplt.title('SWIR1 Composite', fontsize=16)\nplt.show()", "Export to GeoTIFF", "from utils.data_cube_utilities.import_export import export_slice_to_geotiff\nimport os\ngeotiff_dir = 'output/geotiffs/Clustering_Notebook'\nif not os.path.exists(geotiff_dir):\n os.makedirs(geotiff_dir)\nexport_slice_to_geotiff(final_composite, '{}/final_composite.tif'.format(geotiff_dir))", "<span id=\"clustering_notebook_cluster\">Perform Clustering &#9652;</span>", "from utils.data_cube_utilities.aggregate import xr_scale_res\n\nfrom utils.data_cube_utilities.dc_clustering import kmeans_cluster_dataset, get_frequency_counts\n\n# Bands used for 
clustering\ncluster_bands = ['red', 'green', 'blue', 'swir1']\n\nclassification_4 = kmeans_cluster_dataset(final_composite, cluster_bands, n_clusters=4)\nfreq_counts_4 = get_frequency_counts(classification_4)\nclassification_8 = kmeans_cluster_dataset(final_composite, cluster_bands, n_clusters=8)\nfreq_counts_8 = get_frequency_counts(classification_8)\nclassification_12 = kmeans_cluster_dataset(final_composite, cluster_bands, n_clusters=12)\nfreq_counts_12 = get_frequency_counts(classification_12)", "<span id=\"clustering_notebook_visualize\">Visualize the Clustered Data &#9652;</span>", "# Define standard formatting.\ndef get_figsize_geospatial(fixed_width=8, fixed_height=14, \n num_cols=1, num_rows=1):\n return figure_ratio(final_composite, \n fixed_width=fixed_width, fixed_height=fixed_height,\n num_cols=num_cols, num_rows=num_rows)\nxarray_imshow_params = dict(use_colorbar=False, use_legend=True, \n fig_kwargs=dict(dpi=120, figsize=get_figsize_geospatial()))\n\nfrom utils.data_cube_utilities.plotter_utils import xarray_imshow\n\nfor class_num, freq, fractional_freq in freq_counts_4:\n # The `*_cluster_dataset()` functions set -1 as the cluster number for \"rows\" with missing data.\n class_num, freq = int(class_num), int(freq)\n class_mem_str = \"in class {:d}\".format(class_num) if class_num != -1 else \"that had missing data\"\n print(\"There were {:d} data points {}, comprising {:.2%} \"\\\n \"of all data points.\".format(int(freq), class_mem_str, \n fractional_freq))\nlegend_labels = {v:\"Cluster {}\".format(v) if v != -1 else \"Missing Data\" for v in np.unique(classification_4)}\nxarray_imshow(classification_4, **xarray_imshow_params, legend_labels=legend_labels)\nplt.show()\n\nfor class_num, freq, fractional_freq in freq_counts_8:\n # The `*_cluster_dataset()` functions set -1 as the cluster number for \"rows\" with missing data.\n class_num, freq = int(class_num), int(freq)\n class_mem_str = \"in class {:d}\".format(class_num) if class_num != -1 else 
\"that had missing data\"\n print(\"There were {:d} data points {}, comprising {:.2%} \"\\\n \"of all data points.\".format(int(freq), class_mem_str, \n fractional_freq))\nlegend_labels = {v:\"Cluster {}\".format(v) if v != -1 else \"Missing Data\" for v in np.unique(classification_8)}\nxarray_imshow(classification_8, **xarray_imshow_params, legend_labels=legend_labels)\nplt.show()\n\nfor class_num, freq, fractional_freq in freq_counts_12:\n # The `*_cluster_dataset()` functions set -1 as the cluster number for \"rows\" with missing data.\n class_num, freq = int(class_num), int(freq)\n class_mem_str = \"in class {:d}\".format(class_num) if class_num != -1 else \"that had missing data\"\n print(\"There were {:d} data points {}, comprising {:.2%} \"\\\n \"of all data points.\".format(int(freq), class_mem_str, \n fractional_freq))\nlegend_labels = {v:\"Cluster {}\".format(v) if v != -1 else \"Missing Data\" for v in np.unique(classification_12)}\nxarray_imshow(classification_12, **xarray_imshow_params, legend_labels=legend_labels)\nplt.show()", "<span id=\"clustering_notebook_export_clustered_data\">Export the Clustered Data as a GeoTIFF &#9652;</span>", "from utils.data_cube_utilities.import_export import export_slice_to_geotiff\n\nif not os.path.exists(geotiff_dir):\n os.makedirs(geotiff_dir)\n\noutput_kmeans_cluster4_file_path = os.path.join(geotiff_dir, \"cluster4_kmeans.tif\")\noutput_kmeans_cluster8_file_path = os.path.join(geotiff_dir, \"cluster8_kmeans.tif\")\noutput_kmeans_cluster12_file_path = os.path.join(geotiff_dir, \"cluster12_kmeans.tif\")\n\nexport_slice_to_geotiff(classification_4.to_dataset(name='classification'), \n output_kmeans_cluster4_file_path)\nexport_slice_to_geotiff(classification_8.to_dataset(name='classification'), \n output_kmeans_cluster8_file_path)\nexport_slice_to_geotiff(classification_12.to_dataset(name='classification'), \n output_kmeans_cluster12_file_path)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google/earthengine-community
tutorials/detecting-changes-in-sentinel-1-imagery-pt-2/index.ipynb
apache-2.0
[ "#@title Copyright 2020 The Earth Engine Community Authors { display-mode: \"form\" }\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Detecting Changes in Sentinel-1 Imagery (Part 2)\nAuthor: mortcanty\nRun me first\nRun the following cell to initialize the API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.", "import ee\n\n# Trigger the authentication flow.\nee.Authenticate()\n\n# Initialize the library.\nee.Initialize()", "Datasets and Python modules\nOne dataset will be used in the tutorial:\n\nCOPERNICUS/S1_GRD_FLOAT\nSentinel-1 ground range detected images\n\n\n\nThe following cell imports some python modules which we will be using as we go along and enables inline graphics.", "import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.stats import norm, gamma, f, chi2\nimport IPython.display as disp\n%matplotlib inline", "And to make use of interactive graphics, we import the folium package:", "# Import the Folium library.\nimport folium\n\n# Define a method for displaying Earth Engine image tiles to folium map.\ndef add_ee_layer(self, ee_image_object, vis_params, name):\n map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)\n folium.raster_layers.TileLayer(\n tiles = map_id_dict['tile_fetcher'].url_format,\n attr = 'Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n name = name,\n overlay = True,\n control = True\n ).add_to(self)\n\n# Add EE 
drawing method to folium.\nfolium.Map.add_ee_layer = add_ee_layer", "Part 2. Hypothesis testing\nWe continue from Part 1 of the Tutorial with the area of interest aoi covering the Frankfurt International Airport and a subset aoi_sub consisting of uniform pixels within a forested region.", "geoJSON = {\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {},\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n 8.473892211914062,\n 49.98081240937428\n ],\n [\n 8.658599853515625,\n 49.98081240937428\n ],\n [\n 8.658599853515625,\n 50.06066538593667\n ],\n [\n 8.473892211914062,\n 50.06066538593667\n ],\n [\n 8.473892211914062,\n 49.98081240937428\n ]\n ]\n ]\n }\n }\n ]\n}\ncoords = geoJSON['features'][0]['geometry']['coordinates']\naoi = ee.Geometry.Polygon(coords)\ngeoJSON = {\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {},\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n 8.534317016601562,\n 50.021637833966786\n ],\n [\n 8.530540466308594,\n 49.99780882512238\n ],\n [\n 8.564186096191406,\n 50.00663576154257\n ],\n [\n 8.578605651855469,\n 50.019431940583104\n ],\n [\n 8.534317016601562,\n 50.021637833966786\n ]\n ]\n ]\n }\n }\n ]\n}\ncoords = geoJSON['features'][0]['geometry']['coordinates']\naoi_sub = ee.Geometry.Polygon(coords)", "This time we filter the S1 archive to get an image collection consisting of two images acquired in the month of August, 2020. Because we are interested in change detection, it is essential that the local incidence angles be the same in both images. 
So now we specify both the orbit pass (ASCENDING) as well as the relative orbit number (15):", "im_coll = (ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')\n .filterBounds(aoi)\n .filterDate(ee.Date('2020-08-01'),ee.Date('2020-08-31'))\n .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))\n .filter(ee.Filter.eq('relativeOrbitNumber_start', 15))\n .sort('system:time_start'))", "Here are the acquisition times in the collection, formatted with Python's time module:", "import time\nacq_times = im_coll.aggregate_array('system:time_start').getInfo()\n[time.strftime('%x', time.gmtime(acq_time/1000)) for acq_time in acq_times]", "A ratio image\nLet's select the first two images and extract the VV bands, clipping them to aoi_sub:", "im_list = im_coll.toList(im_coll.size())\nim1 = ee.Image(im_list.get(0)).select('VV').clip(aoi_sub)\nim2 = ee.Image(im_list.get(1)).select('VV').clip(aoi_sub)", "Now we'll build the ratio of the VV bands and display it:", "ratio = im1.divide(im2)\nurl = ratio.getThumbURL({'min': 0, 'max': 10})\ndisp.Image(url=url, width=800)", "As in the first part of the Tutorial, standard GEE reducers can be used to calculate a histogram, mean and variance of the ratio image:", "hist = ratio.reduceRegion(ee.Reducer.fixedHistogram(0, 5, 500), aoi_sub).get('VV').getInfo()\nmean = ratio.reduceRegion(ee.Reducer.mean(), aoi_sub).get('VV').getInfo()\nvariance = ratio.reduceRegion(ee.Reducer.variance(), aoi_sub).get('VV').getInfo()", "Here is a plot of the (normalized) histogram using numpy and matplotlib:", "a = np.array(hist)\nx = a[:, 0]\ny = a[:, 1] / np.sum(a[:, 1])\nplt.grid()\nplt.plot(x, y, '.')\nplt.show()", "This looks a bit like the gamma distribution we met in Part 1 but is in fact an F probability distribution. The F distribution is defined as the ratio of two chi square distributions, see Eq. (1.12), with $m_1$ and $m_2$ degrees of freedom. 
The above histogram is an $F$ distribution with $m_1=2m$ and $m_2=2m$ degrees of freedom and is given by\n$$\np_{f;2m,2m}(x) = {\\Gamma(2m)\\over \\Gamma(m)^2} x^{m-1}(1+x)^{-2m},\n$$\n$$\n\\quad {\\rm mean}(x) = {m\\over m-1},\\tag{2.1}\n$$\n$$\n\\quad {\\rm var}(x) = {m(2m-1)\\over (m-1)^2 (m-2)}\n$$\nwith parameter $m = 5$. We can see this empirically by overlaying the distribution onto the histogram with the help of scipy.stats.f. The histogram bucket widths are 0.01 so we have to divide by 100:", "m = 5\nplt.grid()\nplt.plot(x, y, '.', label='data')\nplt.plot(x, f.pdf(x, 2*m, 2*m) / 100, '-r', label='F-dist')\nplt.legend()\nplt.show()", "Checking the mean and variance, we get approximate agreement", "print(mean, m/(m-1))\nprint(variance, m*(2*m-1)/(m-1)**2/(m-2))", "So what is so special about this distribution? When looking for changes between two co-registered Sentinel-1 images acquired at different times, it might seem natural to subtract one from the other and then examine the difference, much as we would do for instance with visual/infrared ground reflectance images. In the case of SAR intensity images this is not a good idea. In the difference of two uncorrelated multilook images $\\langle s_1\\rangle$ and $\\langle s_2\\rangle$ the variances add together and, from Eq. (1.21) in the first part of the Tutorial,\n$$\n{\\rm var}(\\langle s_1\\rangle-\\langle s_2\\rangle) = {a_1^2+a_2^2\\over m}, \\tag{2.4}\n$$\nwhere $a_1$ and $a_2$ are mean intensities. So difference pixels in bright areas will have a higher variance than difference pixels in darker areas. It is not possible to set a reliable threshold to determine with a given confidence where change has occurred. \nIt turns out that the F distributed ratio of the two images which we looked at above is much more informative. 
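Before going further, the claim that this ratio follows $F_{2m,2m}$ can also be checked by direct simulation, independently of Earth Engine. The sketch below draws two independent sets of $m$-look intensities from the gamma distribution of Part 1 (unit mean $a=1$; the sample size and seed are arbitrary choices) and compares the sample mean and variance of their ratio with Eq. (2.1):

```python
import numpy as np

# An m-look intensity with mean a is Gamma(shape=m, scale=a/m) (Part 1).
m, a, n = 5, 1.0, 200_000
rng = np.random.default_rng(1)
s1 = rng.gamma(shape=m, scale=a / m, size=n)
s2 = rng.gamma(shape=m, scale=a / m, size=n)
r = s1 / s2  # should be F-distributed with (2m, 2m) degrees of freedom

print('mean: sample {:.4f}, theory {:.4f}'.format(r.mean(), m / (m - 1)))
print('var : sample {:.4f}, theory {:.4f}'.format(
    r.var(), m * (2 * m - 1) / ((m - 1) ** 2 * (m - 2))))
```

For $m=5$ the theoretical values are $1.25$ and $0.9375$, and the simulated moments land close to them.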
For each pixel position in the two images, the quotient $\\langle s_1\\rangle / \\langle s_2\\rangle$ is a likelihood ratio test statistic for deciding whether or not a change has occurred between the two acquisition dates at that position. We will explain what this means below. Here for now is the ratio of the two Frankfurt Airport images, this time within the complete aoi:", "im1 = ee.Image(im_list.get(0)).select('VV').clip(aoi)\nim2 = ee.Image(im_list.get(1)).select('VV').clip(aoi)\nratio = im1.divide(im2)\n\nlocation = aoi.centroid().coordinates().getInfo()[::-1]\nmp = folium.Map(location=location, zoom_start=12)\nmp.add_ee_layer(ratio,\n {'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')\nmp.add_child(folium.LayerControl())\n\ndisplay(mp)", "We might guess that the bright pixels here are significant changes, for instance due to aircraft movements on the tarmac or vehicles moving on the highway. Of course ''significant'' doesn't necessarily imply ''interesting''. We already know Frankfurt has a busy airport and that a German Autobahn is always crowded. The question is, how significant are the changes in the statistical sense? Let's now try to answer that question.\nStatistical testing\nA statistical hypothesis is a conjecture about the distributions of one or more measured variables. It might, for instance, be an assertion about the mean of a distribution, or about the equivalence of the variances of two different distributions. We distinguish between simple hypotheses, for which the distributions are completely specified, for example: the mean of a normal distribution with variance $\\sigma^2$ is $\\mu=0$, and composite hypotheses, for which this is not the case, e.g., the mean is $\\mu\\ge 0$.\nIn order to test such assertions on the basis of measured values, it is also necessary to formulate alternative hypotheses. To distinguish these from the original assertions, the latter are traditionally called null hypotheses. 
Thus we might be interested in testing the simple null hypothesis $\\mu = 0$ against the composite alternative hypothesis $\\mu\\ne 0$. An appropriate combination of measurements for deciding whether or not to reject the null hypothesis in favor of its alternative is referred to as a test statistic, often denoted by the symbol $Q$. An appropriate test procedure will partition the possible test statistics into two subsets: an acceptance region for the null hypothesis and a rejection region. The latter is customarily referred to as the critical region.\nReferring to the null hypothesis as $H_0$, there are two kinds of errors which can arise from any test procedure:\n\n$H_0$ may be rejected when in fact it is true. This is called an error of the first kind and the probability that it will occur is denoted $\\alpha$.\n$H_0$ may be accepted when in fact it is false, which is called an error of the second kind with probability of occurrence $\\beta$.\n\nThe probability of obtaining a value of the test statistic within the critical region when $H_0$ is true is thus $\\alpha$. The probability $\\alpha$ is also referred to as the level of significance of the test or the probability of a false positive. It is generally the case that the lower the value of $\\alpha$, the higher is the probability $\\beta$ of making a second kind error, so there is always a trade-off. (Judge Roy Bean, from the film of the same name, didn't believe in trade-offs. He hanged all defendants regardless of the evidence. His $\\beta$ was zero, but his $\\alpha$ was rather large.)\nAt any rate, traditionally, significance levels of 0.01 or 0.05 are often used.\nThe P value\nSuppose we determine the test statistic to have the value $q$. The P value is defined as the probability of getting a test statistic $Q$ that is at least as extreme as the one observed given the null hypothesis. What is meant by \"extreme\" depends on how we choose the test statistic. 
If this probability is small, then the null hypothesis is unlikely. If it is smaller than the prescribed significance level $\\alpha$, then the null hypothesis is rejected.\nLikelihood Functions\nThe $m$-look VV intensity bands of the two Sentinel-1 images that we took from the archive have pixel values\n$$\n\\langle s\\rangle=\\langle|S_{vv}|^2\\rangle, \\quad {\\rm with\\ mean}\\ a=|S^a_{vv}|^2,\n$$\nand are gamma distributed according to Eq. (1.1), with parameters $\\alpha=m$ and $\\beta = a/m$. To make the notation a bit simpler, let's write $s = \\langle s \\rangle$, so that the multi-look averaging is understood.\nUsing subscript $i=1,2$ to refer to the two images, the probability densities are\n$$\np(s_i| a_i) = {1 \\over (a_i/m)^m\\Gamma(m)}s_i^{m-1}e^{-s_i m/a_i},\\quad i=1,2. \\tag{2.5}\n$$\nWe've left out the number of looks $m$ on the left hand side, since it is the same for both images. \nNow let's formulate a null hypothesis, namely that no change has taken place in the signal strength $a = |S^a_{vv}|^2$ between the two acquisitions, i.e.,\n$$\nH_0: \\quad a_1=a_2 = a\n$$ \nand test it against the alternative hypothesis that a change took place\n$$\nH_1: \\quad a_1\\ne a_2.\n$$ \nIf the null hypothesis is true, then the so-called likelihood for getting the measured pixel intensities $s_1$ and $s_2$ is defined as the product of the probability densities for that value of $a$,\n$$\nL_0(a) = p(s_1|a)p(s_2|a) = {1\\over(a/m)^{2m}\\Gamma(m)^2}(s_1s_2)^{m-1}e^{-(s_1+s_2)m/a}. \\tag{2.6}\n$$\nTaking the product of the probability densities like this is justified by the fact that the measurements $s_1$ and $s_2$ are independent.\nThe maximum likelihood is obtained by maximizing $L_0(a)$ with respect to $a$,\n$$\nL_0(\\hat a) = p(s_1|\\hat a)p(s_2|\\hat a), \\quad \\hat a = \\arg\\max_a L_0(a). 
\n$$\nWe can get $\\hat a$ simply by solving the equation\n$$\n{d L_0(a)\\over da} = 0\n$$\nfor which we derive the maximum likelihood estimate (an easy exercise)\n$$\n\\hat a = {s_1 + s_2 \\over 2}.\n$$\nMakes sense: the only information we have is $s_1$ and $s_2$, so, if there was no change, our best estimate of the intensity $a$ is to take the average. Thus, substituting this value into Eq. (2.6), the maximum likelihood under $H_0$ is\n$$\nL_0(\\hat a) = {1\\over ((s_1+s_2)/2m)^{2m}\\Gamma(m)^2}(s_1s_2)^{m-1}e^{-2m}. \\tag{2.7}\n$$\nSimilarly, under the alternative hypothesis $H_1$, the maximum likelihood is\n$$\nL_1(\\hat a_1,\\hat a_2) = p(s_1|\\hat a_1)p(s_2|\\hat a_2),\\quad \\hat a_1, \\hat a_2 = \\arg\\max_{a_1,a_2} L_1(a_1,a_2). \n$$\nAgain, setting derivatives equal to zero, we get for $H_1$\n$$\n\\hat a_1 = s_1, \\quad \\hat a_2 = s_2,\n$$\nand the maximum likelihood\n$$\nL_1(\\hat a_1,\\hat a_2) = {m^{2m}\\over \\Gamma(m)^2\\, s_1s_2}\\, e^{-2m}. \\tag{2.8}\n$$\nThe Likelihood Ratio Test\nThe theory of statistical testing specifies methods for\ndetermining the most appropriate test procedure, one which minimizes the probability $\\beta$ of an error of the second kind for a fixed level of significance $\\alpha$. Rather than giving a general definition, we state the appropriate test for our case: \nWe should reject the null hypothesis if the ratio of the two likelihoods satisfies the inequality\n$$\nQ = {L_0(\\hat a)\\over L_1(\\hat a_1,\\hat a_2)} \\le k \\tag{2.9}\n$$\nfor some appropriately small value of threshold $k$.\nThis definition simply reflects the fact that, if the null hypothesis is true, the maximum likelihood when $a_1=a_2$ should be close to the maximum likelihood without that restriction, given the measurements $s_1$ and $s_2$. Therefore, if the likelihood ratio is small (less than or equal to some small value $k$), then $H_0$ should be rejected. \nWith some (very) simple algebra, Eq. 
(2.9) evaluates to\n$$\nQ = \\left[2^2 \\left( s_1s_2\\over (s_1+s_2)^2\\right)\\right]^m \\le k \\tag{2.10}\n$$\nusing (2.7) and (2.8). This is the same as saying\n$$\n{s_1s_2\\over (s_1+s_2)^2} \\le k'\\quad {\\rm or}\\quad {(s_1+s_2)^2\\over s_1s_2}\\ge k''\\quad {\\rm or}\\quad {s_1\\over s_2}+{s_2\\over s_1}\\ge k''-2\n$$\nwhere $k',k''$ depend on $k$. The last inequality is satisfied if either term is small enough:\n$$\n{s_1\\over s_2} < c_1 \\quad {\\rm or}\\quad {s_2\\over s_1} < c_2 \\tag{2.11}\n$$\nagain for some appropriate threshold $c_1$ and $c_2$ which depend on $k''$. \nSo the ratio image $s_1/s_2$ that we generated above is indeed a Likelihood Ratio Test (LRT) statistic, one of two possible. We'll call it $Q_1 = s_1/s_2$ and the other one $Q_2 = s_2/s_1$. The former tests for a significant increase in intensity between times $t_1$ and $t_2$, the latter for a significant decrease.\nFine, but where does the F distribution come in?\nBoth $s_1$ and $s_2$ are gamma distributed\n$$\np(s\\mid a) = {1\\over (a/m)^m\\Gamma(m)}s^{m-1}e^{-sm/a}.\n$$\nLet $z = 2sm/a$. Then\n$$\np(z\\mid a) = p(s\\mid a)\\left |{ds\\over dz}\\right | = {1\\over (a/m)^m\\Gamma(m)}\\left({za\\over 2m}\\right)^{m-1}\\left({a\\over 2m}\\right) = {1\\over 2^m\\Gamma(m)}z^{m-1}e^{-z/2}.\n$$\nComparing this with Eq. (1.12) from the first part of the Tutorial, we see that $z$ is chi square distributed with $2m$ degrees of freedom, and therefore so are the variables $2s_1m/a$ and $2s_2m/a$. The quotients $s_1/s_2$ and $s_2/s_1$ are thus ratios of two chi square distributed variables with $2m$ degrees of freedom. They therefore have the F distribution of Eq. (2.1).\nIn order to decide the test for $Q_1$, we need the P value for a measurement $q_1$ of the statistic. Recall that this is the probability of getting a result at least as extreme as the one measured under the null hypothesis. 
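The distributional claim above, that the ratio of two $m$-look intensities follows the F distribution with $(2m, 2m)$ degrees of freedom, can be checked by a quick simulation. This is a standalone sketch, independent of the Earth Engine code; the mean intensity $a$ is arbitrary because it cancels in the ratio.

```python
import numpy as np
from scipy.stats import f

m, a = 5, 3.0                      # number of looks and mean intensity under H0
rng = np.random.default_rng(42)

# Multilook intensities are gamma distributed with shape m and scale a/m.
s1 = rng.gamma(m, a / m, size=200_000)
s2 = rng.gamma(m, a / m, size=200_000)
q1 = s1 / s2

# Compare an empirical percentile of the ratio with the F(2m, 2m) percentile.
emp = np.quantile(q1, 0.05)
theo = f.ppf(0.05, 2 * m, 2 * m)
print(emp, theo)                   # the two values should nearly coincide
```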
So in this case\n$$\nP_1 = {\\rm Prob}(Q_1\\le q_1\\mid H_0), \\tag{2.12}\n$$\nwhich we can calculate from the percentiles of the F distribution, Eq. (2.1). Then if $P_1\\le \\alpha/2$ we reject $H_0$ and conclude with significance $\\alpha/2$ that a change occurred. We do the same test for $Q_2$, so that the combined significance is $\\alpha$.\nNow we can make a change map for the Frankfurt Airport for the two acquisitions, August 5 and August 11, 2020. We want to see quite large changes associated primarily with airplane and vehicle movements, so we will set the significance generously low to $\\alpha = 0.001$. We will also distinguish the direction of change and mask out the no-change pixels:", "# Decision threshold alpha/2:\ndt = f.ppf(0.0005, 2*m, 2*m)\n\n# LRT statistics.\nq1 = im1.divide(im2)\nq2 = im2.divide(im1)\n\n# Change map with 0 = no change, 1 = decrease, 2 = increase in intensity.\nc_map = im1.multiply(0).where(q2.lt(dt), 1)\nc_map = c_map.where(q1.lt(dt), 2)\n\n# Mask no-change pixels.\nc_map = c_map.updateMask(c_map.gt(0))\n\n# Display map with red for increase and blue for decrease in intensity.\nlocation = aoi.centroid().coordinates().getInfo()[::-1]\nmp = folium.Map(\n location=location, tiles='Stamen Toner',\n zoom_start=13)\nfolium.TileLayer('OpenStreetMap').add_to(mp)\nmp.add_ee_layer(ratio,\n {'min': 0, 'max': 20, 'palette': ['black', 'white']}, 'Ratio')\nmp.add_ee_layer(c_map,\n {'min': 0, 'max': 2, 'palette': ['black', 'blue', 'red']},\n 'Change Map')\nmp.add_child(folium.LayerControl())\n\ndisplay(mp)", "Most changes are within the airport or on the Autobahn. Barge movements on the Main River (upper left hand corner) are also signaled as significant changes. 
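The same thresholding logic is easy to prototype on ordinary NumPy arrays before committing it to Earth Engine. The four pixel intensities below are invented for illustration; only the decision rule mirrors the cell above.

```python
import numpy as np
from scipy.stats import f

m, alpha = 5, 0.001
dt = f.ppf(alpha / 2, 2 * m, 2 * m)    # decision threshold at alpha/2

# Toy multilook intensities at times t1 and t2 (hypothetical values).
s1 = np.array([1.0, 5.0, 0.2, 1.1])
s2 = np.array([1.1, 0.2, 5.0, 1.0])

q1, q2 = s1 / s2, s2 / s1
c_map = np.zeros(len(s1), dtype=int)   # 0 = no change
c_map[q2 < dt] = 1                     # 1 = significant decrease in intensity
c_map[q1 < dt] = 2                     # 2 = significant increase in intensity
print(c_map)                           # -> [0 1 2 0]
```

Only the second and third toy pixels change strongly enough to beat the very small threshold; the mild ratios near 1 are left as no change.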
Note that the 'red' changes (significant increases in intensity) do not show up in the 'ratio' overlay, which displays $s_1/s_2$.\nBivariate change detection\nRather than analyzing the VV and VH bands individually, it would make more sense to treat them together, and that is what we will now do. It is convenient to work with the covariance matrix form for measured intensities that we introduced in Part 1, see Eq. (1.6a). Again with the aim of keeping the notation simple, define\n$$\n\\pmatrix{ s_i & 0\\cr 0 & r_i} = \\pmatrix{\\langle|S_{vv}|^2\\rangle_i & 0 \\cr 0 & \\langle|S_{vh}|^2\\rangle_i}, \\quad {\\rm with\\ means}\\quad a_i = |S^{a_i}_{vv}|^2, \\quad b_i = |S^{b_i}_{vh}|^2 \\tag{2.13}\n$$\nfor the two acquisition times $t_i,\\ i=1,2$. \nUnder $H_0$ we have $a_1=a_2=a$ and $b_1=b_2=b$. Assuming independence of $s_i$ and $r_i$, the likelihood function is the product of the four gamma distributions\n$$\nL_0(a,b) = p(s_1\\mid a)p(r_1\\mid b)p(s_2\\mid a)p(r_2\\mid b).\n$$\nUnder $H_1$,\n$$\nL_1(a_1,b_1,a_2,b_2) = p(s_1\\mid a_1)p(r_1\\mid b_1)p(s_2\\mid a_2)p(r_2\\mid b_2).\n$$\nWith maximum likelihood estimates under $H_0$ \n$$\n\\hat a = (s_1+s_2)/2\\quad {\\rm and}\\quad \\hat b = (r_1+r_2)/2\n$$ \nfor the parameters and some simple algebra, we get \n$$\nL_0(\\hat a,\\hat b) = {(2m)^{4m}\\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}\\Gamma(m)^4}(s_1r_1s_2r_2)^{m-1}e^{-4m}. 
\\tag{2.14}\n$$ \nSimilarly with $\\hat a_1=s_1,\\ \\hat b_1=r_1,\\ \\hat a_2=s_2,\\ \\hat b_2=r_2$, we calculate\n$$\nL_1(\\hat a_1,\\hat b_1,\\hat a_2,\\hat b_2) = {m^{4m}\\over \\Gamma(m)^4\\, s_1r_1s_2r_2}\\,e^{-4m}.\n$$\nThe likelihood ratio test statistic is then\n$$\nQ = {L_0(\\hat a,\\hat b)\\over L_1(\\hat a_1,\\hat b_1,\\hat a_2,\\hat b_2)}={2^{4m}(s_1r_1s_2r_2)^m\\over (s_1+s_2)^{2m}(r_1+r_2)^{2m}}.\n$$\nWriting this in terms of the covariance matrix representation,\n$$\nc_i = \\pmatrix{s_i & 0\\cr 0 & r_i},\\quad i=1,2,\n$$\nwe derive, finally, the likelihood ratio test\n$$\nQ = \\left[2^4\\left({|c_1|\\, |c_2|\\over |c_1+c_2|^2 }\\right)\\right]^m \\le k, \\tag{2.15}\n$$\nwhere $|\\cdot|$ indicates the matrix determinant, $|c_i|=s_ir_i$. \nSo far so good. But in order to determine P values, we need the probability distribution of $Q$. This time we have no idea how to obtain it. Here again, statistical theory comes to our rescue.\nLet $\\Theta$ be the parameter space for the LRT. In our example it is \n$$\n\\Theta = \\{ a_1,b_1,a_2,b_2\\}\n$$ \nand has $d=4$ dimensions. Under the null hypothesis the parameter space is restricted by the conditions $a=a_1=a_2$ and $b=b_1=b_2$ to \n$$\n\\Theta_0 = \\{ a,b\\}\n$$ \nwith $d_0=2$ dimensions. According to Wilks' Theorem, as the number of measurements determining the LRT statistic $Q$ approaches $\\infty$, the test statistic $-2\\log Q$ approaches a chi square distribution with $d-d_0=2$ degrees of freedom. (Recall that, in order to determine the matrices $c_1$ and $c_2$, five individual measurements were averaged or multi-looked.) 
So rather than working with $Q$ directly, we use $-2\\log Q$ instead and hope that Wilks' theorem is a good enough approximation for our case.\nIn order to check if this is so, we just have to program \n$$\n-2\\log Q = (\\log{|c_1|}+\\log{|c_2|}-2\\log{|c_1+c_2|}+4\\log{2})(-2m)\n$$ \nin GEE-ese:", "def det(im):\n return im.expression('b(0) * b(1)')\n\n# Number of looks.\nm = 5\n\nim1 = ee.Image(im_list.get(0)).select('VV', 'VH').clip(aoi)\nim2 = ee.Image(im_list.get(1)).select('VV', 'VH').clip(aoi)\n\nm2logQ = det(im1).log().add(det(im2).log()).subtract(\n det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)", "and then plot its histogram, comparing it with the chi square distribution scipy.stats.chi2.pdf() with two degrees of freedom:", "hist = m2logQ.reduceRegion(\n ee.Reducer.fixedHistogram(0, 20, 200), aoi).get('VV').getInfo()\na = np.array(hist)\nx = a[:, 0]\ny = a[:, 1] / np.sum(a[:, 1])\nplt.plot(x, y, '.', label='data')\nplt.plot(x, chi2.pdf(x, 2)/10, '-r', label='chi square')\nplt.legend()\nplt.grid()\nplt.show()", "Looks pretty good. Note now that a small value of the LRT $Q$ in Eq. (2.15) corresponds to a large value of $-2\\log{Q}$. 
Therefore the P value for a measurement $q$ is now the probability of getting the value $-2\\log{q}$\nor higher,\n$$\nP = {\\rm Prob}(-2\\log{Q} \\ge -2\\log{q}) = 1 - {\\rm Prob}(-2\\log{Q} < -2\\log{q}).\n$$\nSo let's try out our bivariate change detection procedure, this time on an agricultural scene where we expect to see larger regions of change.", "geoJSON ={\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {},\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n -98.2122802734375,\n 49.769291532628515\n ],\n [\n -98.00559997558594,\n 49.769291532628515\n ],\n [\n -98.00559997558594,\n 49.88578690918283\n ],\n [\n -98.2122802734375,\n 49.88578690918283\n ],\n [\n -98.2122802734375,\n 49.769291532628515\n ]\n ]\n ]\n }\n }\n ]\n}\ncoords = geoJSON['features'][0]['geometry']['coordinates']\naoi1 = ee.Geometry.Polygon(coords)", "This is a mixed agricultural/forest area in southern Manitoba, Canada. We'll gather two images, one from the beginning of August and one from the beginning of September, 2018. 
A lot of harvesting takes place in this interval, so we expect some extensive changes.", "im1 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT')\n .filterBounds(aoi1)\n .filterDate(ee.Date('2018-08-01'), ee.Date('2018-08-31'))\n .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))\n .filter(ee.Filter.eq('relativeOrbitNumber_start', 136))\n .first()\n .clip(aoi1))\nim2 = ee.Image(ee.ImageCollection('COPERNICUS/S1_GRD_FLOAT').filterBounds(aoi1)\n .filterDate(ee.Date('2018-09-01'), ee.Date('2018-09-30'))\n .filter(ee.Filter.eq('orbitProperties_pass', 'ASCENDING'))\n .filter(ee.Filter.eq('relativeOrbitNumber_start', 136))\n .first()\n .clip(aoi1))", "Here are the acquisition times:", "acq_time = im1.get('system:time_start').getInfo()\nprint( time.strftime('%x', time.gmtime(acq_time/1000)) )\nacq_time = im2.get('system:time_start').getInfo()\nprint( time.strftime('%x', time.gmtime(acq_time/1000)) )", "Fortunately it is possible to map the chi square cumulative distribution function over an ee.Image() so that a P value image can be calculated directly. This wasn't possible in the single band case, as the F cumulative distribution is not available on the GEE. 
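Before mapping the statistic over an image, it helps to sanity-check the formula on plain numbers. The helper below is a NumPy transcription of the Earth Engine expression for $-2\log{Q}$ (the intensity values are made up); for identical covariance matrices $Q=1$, so $-2\log{Q}$ vanishes and the P value is 1.

```python
import numpy as np
from scipy.stats import chi2

def minus2logq(c1, c2, m):
    """-2 log Q for diagonal 2x2 covariance matrices stored as (s, r) pairs."""
    det = lambda c: c[0] * c[1]
    return (np.log(det(c1)) + np.log(det(c2))
            - 2 * np.log(det(c1 + c2)) + 4 * np.log(2)) * (-2 * m)

def p_val(c1, c2, m):
    # Wilks: -2 log Q is approximately chi square with 2 degrees of freedom.
    return 1 - chi2.cdf(minus2logq(c1, c2, m), 2)

c1 = np.array([1.3, 0.4])      # hypothetical (VV, VH) intensities at t1
c2 = np.array([5.0, 0.4])      # hypothetical intensities at t2 (VV increased)
print(minus2logq(c1, c1, 5), p_val(c1, c1, 5))   # no change: ~0 and ~1
print(minus2logq(c1, c2, 5), p_val(c1, c2, 5))   # change: positive, smaller P
```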
Here are the P values:", "def chi2cdf(chi2, df):\n ''' Chi square cumulative distribution function for df degrees of freedom\n using the built-in incomplete gamma function gammainc() '''\n return ee.Image(chi2.divide(2)).gammainc(ee.Number(df).divide(2))\n\n# The observed test statistic image -2logq.\nm2logq = det(im1).log().add(det(im2).log()).subtract(\n det(im1.add(im2)).log().multiply(2)).add(4*np.log(2)).multiply(-2*m)\n\n# The P value image prob(m2logQ > m2logq) = 1 - prob(m2logQ < m2logq).\np_value = ee.Image.constant(1).subtract(chi2cdf(m2logq, 2))\n\n# Project onto map.\nlocation = aoi1.centroid().coordinates().getInfo()[::-1]\nmp = folium.Map(location=location, zoom_start=12)\nmp.add_ee_layer(p_value,\n {'min': 0,'max': 1, 'palette': ['black', 'white']}, 'P-value')\nmp.add_child(folium.LayerControl())", "The uniformly dark areas correspond to small or vanishing P values and signify change. The bright areas correspond to no change. Why they are not uniformly bright will be explained below. Now we set a significance threshold of $\\alpha=0.01$ and display the significant changes, whereby 1% of them will be false positives. For reference we also show the 2018 Canada AAFC Annual Crop Inventory map, which is available as a GEE collection:", "c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)\n\ncrop2018 = (ee.ImageCollection('AAFC/ACI')\n .filter(ee.Filter.date('2018-01-01', '2018-12-01'))\n .first()\n .clip(aoi1))\n\nmp = folium.Map(location=location, zoom_start=12)\nmp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')\nmp.add_ee_layer(c_map.updateMask(\n c_map.gt(0)), {'min': 0, 'max': 1, 'palette': ['black', 'red']}, 'c_map')\nmp.add_child(folium.LayerControl())", "The major crops in the scene are soybeans (dark brown), oats (light brown), canola (light green), corn (light yellow) and winter wheat (dark gray). 
The wooded areas exhibit little change, while canola has evidently been extensively harvested in the interval.\nA note on P values\nBecause small P values are indicative of change, it is tempting to say that, the larger the P value, the higher the probability of no change. Or more explicitly, the P value is itself the no change probability. Let's see why this is false. Below we choose a wooded area of the agricultural scene where few significant changes are to be expected and use it to subset the P value image. Then we plot the histogram of the subset:", "geoJSON ={\n \"type\": \"FeatureCollection\",\n \"features\": [\n {\n \"type\": \"Feature\",\n \"properties\": {},\n \"geometry\": {\n \"type\": \"Polygon\",\n \"coordinates\": [\n [\n [\n -98.18550109863281,\n 49.769735012247885\n ],\n [\n -98.13949584960938,\n 49.769735012247885\n ],\n [\n -98.13949584960938,\n 49.798109268622\n ],\n [\n -98.18550109863281,\n 49.798109268622\n ],\n [\n -98.18550109863281,\n 49.769735012247885\n ]\n ]\n ]\n }\n }\n ]\n}\ncoords = geoJSON['features'][0]['geometry']['coordinates']\naoi1_sub = ee.Geometry.Polygon(coords)\nhist = p_value.reduceRegion(ee.Reducer.fixedHistogram(0, 1, 100), aoi1_sub).get('constant').getInfo()\na = np.array(hist)\nx = a[:,0]\ny = a[:,1]/np.sum(a[:,1])\nplt.plot(x, y, '.b', label='p-value')\nplt.ylim(0, 0.05)\nplt.grid()\nplt.legend()\nplt.show()", "So the P values of no-change measurements are uniformly distributed over $[0, 1]$ (the excess of small P values at the left can be ascribed to genuine changes within the polygon). A large P value is no more indicative of no change than a small one. Of course it has to be this way. When, for example, we set a significance level of 5%, then the fraction of false positives, i.e., the fraction of P values smaller than 0.05 given $H_0$, must also be 5%. 
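For the single-band ratio test this uniformity can be demonstrated exactly by simulation, since there the F distribution is exact rather than asymptotic. The following standalone sketch draws no-change pixel pairs and inspects the resulting P values:

```python
import numpy as np
from scipy.stats import f

m, a = 5, 1.0
rng = np.random.default_rng(0)
s1 = rng.gamma(m, a / m, size=100_000)
s2 = rng.gamma(m, a / m, size=100_000)

# P values of the ratio test under H0 (no change at any pixel).
p = f.cdf(s1 / s2, 2 * m, 2 * m)

print(p.mean())            # close to 0.5: P values are uniform on [0, 1]
print(np.mean(p < 0.05))   # close to 0.05: false positive rate equals alpha
```

A large P value therefore carries no evidence for no change; only small P values are informative.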
This accounts for the noisy appearance of the P value image in the no-change regions.\nChange direction: the Loewner order\nWhat about the direction of change in the bivariate case? This is less clear, as we can have the situation where the VV intensity gets larger and the VH smaller from time $t_1$ to $t_2$, or vice versa. When we are dealing with the C2 covariance matrix representation of SAR imagery, see Eq. (2.13), a characterization of change can be made as follows (Nielsen et al. (2019)): For each significantly changed pixel, we determine the difference $C2_{t_2}-C2_{t_1}$ and examine its so-called definiteness, also known as the Loewner order of the change. A matrix is said to be positive definite if all of its eigenvalues are positive, negative definite if they are all negative, otherwise indefinite. In the case of the $2\\times 2$ diagonal matrices that we are concerned with, the eigenvalues are just the two diagonal elements themselves, so determining the Loewner order is trivial. 
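The sign check can be written out explicitly. The sketch below classifies hypothetical difference matrices $C2_{t_2}-C2_{t_1}$ by their eigenvalues; for the diagonal case the eigenvalues are simply the diagonal entries (a zero eigenvalue is lumped in with the indefinite case here for brevity).

```python
import numpy as np

def loewner(diff):
    """Loewner order of a symmetric matrix: 'posdef', 'negdef' or 'indef'."""
    eig = np.linalg.eigvalsh(diff)
    if np.all(eig > 0):
        return 'posdef'
    if np.all(eig < 0):
        return 'negdef'
    return 'indef'

# Hypothetical diagonal differences (VV change, VH change).
print(loewner(np.diag([0.5, 0.2])))    # 'posdef': both bands increased
print(loewner(np.diag([-0.5, -0.2])))  # 'negdef': both bands decreased
print(loewner(np.diag([0.5, -0.2])))   # 'indef': mixed change
```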
For full $2\\times 2$ dual pol or $3\\times 3$ quad pol SAR imagery, devising an efficient way to determine the Loewner order is more difficult, see Nielsen (2019).\nSo let's include the Loewner order in our change map:", "c_map = p_value.multiply(0).where(p_value.lt(0.01), 1)\ndiff = im2.subtract(im1)\nd_map = c_map.multiply(0) # Initialize the direction map to zero.\nd_map = d_map.where(det(diff).gt(0), 2) # All pos or neg def diffs are now labeled 2.\nd_map = d_map.where(diff.select(0).gt(0), 3) # Re-label pos def (and label some indef) to 3.\nd_map = d_map.where(det(diff).lt(0), 1) # Label all indef to 1.\nc_map = c_map.multiply(d_map) # Re-label the c_map, 0*X = 0, 1*1 = 1, 1*2 = 2, 1*3 = 3.", "Now we display the changes, with positive definite red, negative definite blue, and indefinite yellow:", "mp = folium.Map(location=location, zoom_start=12)\nmp.add_ee_layer(crop2018, {'min': 0, 'max': 255}, 'crop2018')\nmp.add_ee_layer(\n c_map.updateMask(c_map.gt(0)), {\n 'min': 0,\n 'max': 3,\n 'palette': ['black', 'yellow', 'blue', 'red']\n }, 'c_map')\nmp.add_child(folium.LayerControl())
To go from the monovariate LRT to the bivariate LRT, we simply replace the product of intensities $s_1s_2$ by the product of determinants $|c_1||c_2|$, the sum $s_1+s_2$ by $|c_1+c_2|$ and the factor $2^{2}$ by $2^4=2^{2\\cdot2}$. This observation will come in handy in Part 3." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
synthicity/activitysim
activitysim/examples/example_mtc/notebooks/getting_started.ipynb
agpl-3.0
[ "Getting Started with ActivitySim\nThis getting started guide is a Jupyter notebook. It is an interactive Python 3 environment that describes how to set up, run, and begin to analyze the results of ActivitySim modeling scenarios. It is assumed users of ActivitySim are familiar with the basic concepts of activity-based modeling. This tutorial covers:\n\nInstallation and setup\nSetting up and running a base model\nInputs and outputs\nSetting up and running an alternative scenario\nComparing results\nNext steps and further reading\n\nThis notebook depends on Anaconda Python 3 64bit.\nInstall ActivitySim\nThe first step is to install activitysim from conda forge. This also installs dependent packages such as tables for reading/writing HDF5, openmatrix for reading/writing OMX matrix, and pyyaml for yaml settings files. The command below also creates an asim conda environment just for activitysim.", "!conda create -n asim python=3.9 activitysim -c conda-forge --override-channels", "Activate the Environment", "!conda activate asim", "Creating an Example Setup\nThe example is included in the package and can be copied to a user defined location using the package's command line interface. The example includes all model steps. The command below copies the example_mtc example to a new example folder. It also changes into the new example folder so we can run the model from there.", "!activitysim create -e example_mtc -d example\n%cd example", "Run the Example\nThe code below runs the example, which runs in a few minutes. The example consists of 100 synthetic households and the first 25 zones in the example model region. The full example (example_mtc_full) can be created and downloaded from the activitysim resources repository using activitysim's create command above. As the model runs, it logs information to the screen. \nTo run the example, use activitysim's built-in run command. 
As shown in the script help, the default settings assume a configs, data, and output folder in the current directory.", "!activitysim run -c configs -d data -o output", "Inputs and Outputs Overview\nAn ActivitySim model requires:\n\nConfigs: settings, model step expressions files, etc.\nsettings.yaml - main settings file for running the model\nnetwork_los.yaml - network level-of-service (skims) settings file\n[model].yaml - configuration file for the model step (such as auto ownership)\n[model].csv - expressions file for the model step\nData: input data - input data tables and skims\nland_use.csv - zone data file\nhouseholds.csv - synthetic households\npersons.csv - synthetic persons\nskims.omx - all skims in one open matrix file\nOutput: output data - output data, tables, tracing info, etc.\npipeline.h5 - data pipeline database file (all tables at each model step)\nfinal_[table].csv - final household, person, tour, trip CSV tables\nactivitysim.log - console log file\ntrace.[model].csv - trace calculations for select households\nsimulation.py: main script to run the model\n\nRun the command below to list the example folder contents.", "import os\nfor root, dirs, files in os.walk(\".\", topdown=False):\n for name in files:\n print(os.path.join(root, name))\n for name in dirs:\n print(os.path.join(root, name))", "Inputs\nRun the commands below to: \n* Load required Python libraries for reading data\n* Display the settings.yaml, including the list of models to run\n* Display the land_use, households, and persons tables\n* Display the skims", "print(\"Load libraries.\")\nimport pandas as pd\nimport openmatrix as omx\nimport yaml\nimport glob\n\nprint(\"Display the settings file.\\n\")\n\nwith open(r'configs/settings.yaml') as file:\n file_contents = yaml.load(file, Loader=yaml.FullLoader)\n print(yaml.dump(file_contents))\n\nprint(\"Display the network_los file.\\n\")\n\nwith open(r'configs/network_los.yaml') as file:\n file_contents = yaml.load(file, 
Loader=yaml.FullLoader)\n print(yaml.dump(file_contents))\n\nprint(\"Input land_use. Primary key: TAZ. Required additional fields depend on the downstream submodels (and expression files).\")\npd.read_csv(\"data/land_use.csv\")\n\nprint(\"Input households. Primary key: HHID. Foreign key: TAZ. Required additional fields depend on the downstream submodels (and expression files).\")\npd.read_csv(\"data/households.csv\")\n\nprint(\"Input persons. Primary key: PERID. Foreign key: household_id. Required additional fields depend on the downstream submodels (and expression files).\")\npd.read_csv(\"data/persons.csv\")\n\nprint(\"Skims. All skims are input via one OMX file. Required skims depend on the downstream submodels (and expression files).\\n\")\nprint(omx.open_file(\"data/skims.omx\"))", "Outputs\nRun the commands below to: \n* Display the output household and person tables\n* Display the output tour and trip tables", "print(\"The output pipeline contains the state of each table after each model step.\")\npipeline = pd.io.pytables.HDFStore('output/pipeline.h5')\npipeline.keys()\n\nprint(\"Households table after trip mode choice, which contains several calculated fields.\")\npipeline['/households/joint_tour_frequency'] #watch out for key changes if not running all models\n\nprint(\"Final output households table written to CSV, which is the same as the table in the pipeline.\")\npd.read_csv(\"output/final_households.csv\")\n\nprint(\"Final output persons table written to CSV, which is the same as the table in the pipeline.\")\npd.read_csv(\"output/final_persons.csv\")\n\nprint(\"Final output tours table written to CSV, which is the same as the table in the pipeline. Joint tours are stored as one record.\")\npd.read_csv(\"output/final_tours.csv\")\n\nprint(\"Final output trips table written to CSV, which is the same as the table in the pipeline. 
Joint trips are stored as one record.\")\npd.read_csv(\"output/final_trips.csv\")", "Other notable outputs", "print(\"Final output accessibility table written to CSV.\")\npd.read_csv(\"output/final_accessibility.csv\")\n\nprint(\"Joint tour participants table, which contains the person ids of joint tour participants.\")\npipeline['joint_tour_participants/joint_tour_participation']\n\nprint(\"Destination choice sample logsums table for school location if want_dest_choice_sample_tables=True.\")\nif '/school_location_sample/school_location' in pipeline:\n pipeline['/school_location_sample/school_location']", "Trip matrices\nA write_trip_matrices step at the end of the model adds boolean indicator columns to the trip table in order to assign each trip into a trip matrix and then aggregates the trip counts and writes OD matrices to OMX (open matrix) files. The coding of trips into trip matrices is done via annotation expressions.", "print(\"trip matrices by time of day for assignment\")\noutput_files = os.listdir(\"output\")\nfor output_file in output_files:\n if \"omx\" in output_file:\n print(output_file)", "Tracing calculations\nTracing calculations is an important part of model setup and debugging. Oftentimes data issues, such as missing values in input data and/or incorrect submodel expression files, do not reveal themselves until a downstream submodel fails. There are two types of tracing in ActivitySim: household and origin-destination (OD) pair. If a household trace ID is specified via trace_hh_id, then ActivitySim will output a comprehensive set of trace files for all calculations for all household members. 
These trace files are listed below and explained.", "print(\"All trace files.\\n\")\nglob.glob(\"output/trace/*.csv\")\n\n\nprint(\"Trace files for auto ownership.\\n\")\nglob.glob(\"output/trace/auto_ownership*.csv\")\n\nprint(\"Trace chooser data for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.choosers.csv\")\n\nprint(\"Trace utility expression values for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.eval_utils.expression_values.csv\")\n\nprint(\"Trace alternative total utilities for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.utilities.csv\")\n\nprint(\"Trace alternative probabilities for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.probs.csv\")\n\nprint(\"Trace random number for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.rands.csv\")\n\nprint(\"Trace choice for auto ownership.\\n\")\npd.read_csv(\"output\\\\trace\\\\auto_ownership_simulate.simple_simulate.eval_mnl.choices.csv\")", "Run the Multiprocessor Example\nThe command below runs the multiprocessor example, which runs in a few minutes. It uses settings inheritance to override settings in the configs folder with settings in the configs_mp folder. This allows for re-using expression files and settings files in the single and multiprocessed setups. 
The multiprocessed example uses the following additional settings:\n```\nnum_processes: 2\nchunk_size: 0\nchunk_training_mode: disabled\nmultiprocess_steps:\n - name: mp_initialize\n begin: initialize_landuse\n - name: mp_households\n begin: school_location\n slice:\n tables:\n - households\n - persons\n - name: mp_summarize\n begin: write_data_dictionary\n```\nIn brief, num_processes specifies the number of processors to use and a chunk_size of 0 plus a chunk_training_mode of disabled means ActivitySim is free to use all the available RAM if needed. The multiprocess_steps specifies the beginning, middle, and end steps in multiprocessing. The mp_initialize step is single processed because there is no slice setting. It starts with the initialize_landuse submodel and runs until the submodel identified by the next multiprocess submodel starting point, school_location. The mp_households step is multiprocessed and the households and persons tables are sliced and allocated to processes using the chunking settings. The rest of the submodels are run multiprocessed until the final multiprocess step. The mp_summarize step is single processed because there is no slice setting and it writes outputs. See multiprocessing and chunk_size for more information.", "!activitysim run -c configs_mp -c configs -d data -o output", "Next Steps and Further Reading\nFor further information on the software, management consortium, and activity-based models in general, see the resources below. \n\nActivitySim\nUser Documentation\nGitHub Repository\nProject Wiki\nActivity-Based Travel Demand Models: A Primer" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ML4DS/ML4all
TM4.DTUCourse/notebook/DTU02901_student.ipynb
mit
[ "Exploring and understanding document databases with topic models and graph analysis\nExercise notebook\nVersion 1.0\nDate: Aug 31, 2017\nAuthors: \n\nJerónimo Arenas-García (jeronimo.arenas@uc3m.es)\nJesús Cid-Sueiro (jcid@tsc.uc3m.es)", "# Common imports \n\nimport numpy as np\n# import pandas as pd\n# import os\nfrom os.path import isfile, join\n# import scipy.io as sio\n# import scipy\nimport zipfile as zp\n# import shutil\n# import difflib", "1. Corpus acquisition\nIn this block we will work with collections of text documents. The objectives will be:\n\nFind the most important topics in the collection and assign documents to topics\nAnalyze the structure of the collection by means of graph analysis\n\nWe will work with a collection of research projects funded by the US National Science Foundation, which you can find under the ./data directory. These files are publicly available from the NSF website.\n(As a side note, there are many other available text collections to work with. In particular, the NLTK library has many examples that you can explore using the nltk.download() tool.\nimport nltk\nnltk.download()\n\nfor instance, you can take the gutenberg dataset\nMycorpus = nltk.corpus.gutenberg\ntext_name = Mycorpus.fileids()[0]\nraw = Mycorpus.raw(text_name)\nWords = Mycorpus.words(text_name)\n\nAlso, tools like Gensim or scikit-learn include text databases to work with).\n1.1. Exploring file structure\nNSF project information is provided in XML files. Projects are yearly grouped in .zip files, and each project is saved in a different XML file. To explore the structure of such files, we will use the file 1600057.xml. Parsing XML files in Python is rather easy using the ElementTree module. \nTo introduce some common functions to work with XML files we will follow <a href=http://docs.python.org/3.4/library/xml.etree.elementtree.html#module-xml.etree.ElementTree>this tutorial</a>.\n1.1.1. 
File format\nTo start with, you can have a look at the contents of the example file. We are interested in the following information of each project:\n\nProject identifier\nProject Title\nProject Abstract\nBudget\nStarting Year (we will ignore project duration)\nInstitution (name, zipcode, and state)", "xmlfile = '../data/1600057.xml'\n\nwith open(xmlfile,'r') as fin:\n \n print(fin.read())", "1.1.2. Parsing XML\nXML is an inherently hierarchical data format, and the most natural way to represent it is with a tree. The ElementTree module has two classes for this purpose:\n\nElementTree represents the whole XML document as a tree\nElement represents a single node in this tree\n\nWe can import XML data by reading an XML file:", "import xml.etree.ElementTree as ET\ntree = ET.parse(xmlfile)\nroot = tree.getroot()", "or directly reading a string:", "root = ET.fromstring(open(xmlfile,'r').read())", "fromstring() parses XML from a string directly into an Element, which is the root element of the parsed tree. Other parsing functions may create an ElementTree, but we will not cover them here.\nAs an Element, root has a tag and a dictionary of attributes:", "print(root.tag)\nprint(root.attrib)", "It also has children nodes over which we can iterate:", "for child in root:\n print(child.tag, child.attrib)", "Children are nested, and we can access specific child nodes by index. We can also access the text of specified elements. For instance:", "for child in root[0]:\n print(child.tag, child.attrib, child.text)", "The presented classes and functions are all you need to solve the following exercise. However, there are many other interesting functions that can probably make it easier for you to work with XML files. For more information, please refer to the ElementTree API.\n1.1.3. 
Exercise: Parsing the XML project files\nImplement a function that parses the XML files and provides as its output a dictionary with fields:\nproject_code (string)\ntitle (string)\nabstract (string)\nbudget (float)\nyear (string)\ninstitution (tuple with elements: name, zipcode, and statecode)", "def parse_xmlproject(xml_string):\n \"\"\"This function processes the specified XML file,\n and outputs a dictionary with the desired project information\n \n :xml_string: String with XML content\n :Returns: Dictionary with the indicated fields\n \"\"\"\n \n #<SOL>\n #</SOL>\n \nparse_xmlproject(open(xmlfile,'r').read())", "1.2. Building the dataset\nNow, we will use the function you just implemented to create a database that we will use throughout this module.\nFor simplicity, and given that the dataset is not too large, we will keep all projects in RAM. The dataset will consist of a list containing the dictionaries associated to each of the considered projects in a time interval.", "# Construct an iterator (or a list) for the years you want to work with\nyears = range(2015,2017)\ndatafiles_path = '../data/'\nNSF_data = []\n\nfor year in years:\n \n zpobj = zp.ZipFile(join(datafiles_path, str(year)+'.zip'))\n for fileinzip in zpobj.namelist():\n if fileinzip.endswith('xml'):\n \n #Some files seem to be incorrectly parsed\n try:\n project_dictio = parse_xmlproject(zpobj.read(fileinzip))\n if project_dictio['abstract']:\n NSF_data.append(project_dictio)\n except:\n pass\n\n", "We will extract some characteristics of the constructed dataset:", "print('Number of projects in dataset:', len(NSF_data))\n\n####\nbudget_data = list(map(lambda x: x['budget'], NSF_data))\nprint('Average budget of projects in dataset:', np.mean(budget_data))\n\n####\ninsti_data = list(map(lambda x: x['institution'], NSF_data))\nprint('Number of unique institutions in dataset:', len(set(insti_data)))\n\n####\ncounts = dict()\nfor project in NSF_data:\n counts[project['year']] = 
counts.get(project['year'],0) + 1\n\nprint('Breakdown of projects by starting year:')\nfor el in counts:\n print(el, ':', counts[el])", "Exercise\nCompute the average length of the abstracts of all projects in the dataset", "#<SOL>\n#</SOL>", "2. Corpus Processing\nTopic modelling algorithms process vectorized data. In order to apply them, we need to transform the raw text input data into a vector representation. To do so, we will remove irrelevant information from the text data and preserve as much relevant information as possible to capture the semantic content in the document collection.\nThus, we will proceed with the following steps:\n\nTokenization\nHomogenization\nCleaning\nVectorization\n\n2.1. Tokenization\nFor the first steps, we will use some of the powerful methods available from the Natural Language Toolkit. In order to use the word_tokenize method from nltk, you might need to get the appropriate libraries using nltk.download(). You must select option \"d) Download\", and identifier \"punkt\"", "import nltk\n\n# You should comment this code fragment if the package is already available.\n# Select option \"d) Download\", and identifier \"punkt\"\n# nltk.download()", "We will create a list that contains just the abstracts in the dataset. As the order of the elements in a list is fixed, it will be later straightforward to match the processed abstracts to metadata associated to their corresponding projects.", "from nltk.tokenize import word_tokenize\n\nNSF_abstracts = list(map(lambda x: x['abstract'], NSF_data))\n\ntokenized_abstracts = []\nnprojects = len(NSF_abstracts)\n\nfor n, abstract in enumerate(NSF_abstracts):\n if not n%100:\n print('\\rTokenizing abstract', n, 'out of', nprojects, end='', flush=True)\n tokenized_abstracts.append(word_tokenize(abstract))\n\nprint('\\n\\n The corpus has been tokenized. Check the result for the first abstract:')\nprint(NSF_abstracts[0])\nprint(tokenized_abstracts[0])", "2.2. 
Homogenization\nBy looking at the tokenized corpus you may verify that there are many tokens that correspond to punctuation signs and other symbols that are not relevant to analyze the semantic content. They can be removed using the stemming or lemmatization tools from nltk.\nThe homogenization process will consist of:\n\nRemoving capitalization: capital alphabetic characters will be transformed to their corresponding lowercase characters.\nRemoving non-alphanumeric tokens (e.g. punctuation signs)\nStemming/Lemmatization: removing word terminations to preserve the root of the words and ignore grammatical information.\n\nExercise\nConvert all tokens in tokenized_abstracts to lowercase (using the .lower() method) and remove non-alphanumeric tokens (that you can detect with the .isalnum() method). You can complete the following code fragment with a single line of code ...", "filtered_abstracts = []\n\nfor n, abstract in enumerate(tokenized_abstracts):\n if not n%100:\n print('\\rFiltering abstract', n, 'out of', nprojects, end='', flush=True)\n\n #<SOL>\n #</SOL>\n\nprint('\\n',filtered_abstracts[0])", "2.2.1. Stemming vs Lemmatization\nAt this point, we can choose between applying simple stemming or using lemmatization. We will try both to test their differences.\nThe lemmatizer from NLTK is based on WordNet. 
If you have not used WordNet before, you will likely need to download it from nltk (use the nltk.download() command)", "stemmer = nltk.stem.SnowballStemmer('english')\nfrom nltk.stem import WordNetLemmatizer\nwnl = WordNetLemmatizer()\n\nprint('Result for the first abstract in dataset applying stemming')\nprint([stemmer.stem(el) for el in filtered_abstracts[0]])\n\nprint('Result for the first abstract in the dataset applying lemmatization')\nprint([wnl.lemmatize(el) for el in filtered_abstracts[0]])\n", "One of the advantages of the lemmatizer method is that the result of lemmatization is still a true word, which is preferable for presenting text processing results.\nHowever, without using contextual information, lemmatize() does not remove grammatical differences. This is the reason why \"is\" or \"are\" are preserved and not replaced by the infinitive \"be\".\nAs an alternative, we can apply .lemmatize(word, pos), where 'pos' is a string code specifying the part-of-speech (pos), i.e. the grammatical role of the word in its sentence. For instance, you can check the difference between wnl.lemmatize('is') and wnl.lemmatize('is', pos='v').\nExercise\nComplete the following code fragment to lemmatize all abstracts in the NSF dataset", "lemmatized_abstracts = []\n\nfor n, abstract in enumerate(filtered_abstracts):\n if not n%100:\n print('\\rLemmatizing abstract', n, 'out of', nprojects, end='', flush=True)\n\n #<SOL>\n #</SOL>\n\nprint('Result for the first abstract in the dataset applying lemmatization')\nprint('\\n',lemmatized_abstracts[0])", "2.3. Cleaning\nThe third step consists of removing those words that are very common in language and do not carry useful semantic content (articles, pronouns, etc.).\nOnce again, we might need to load the stopword files using the download tools from nltk\nExercise\nIn the second line below we read a list of common English stopwords. 
Clean lemmatized_abstracts by removing all tokens in the stopword list.", "from nltk.corpus import stopwords\nstopwords_en = stopwords.words('english')\n\nclean_abstracts = []\n\nfor n, abstract in enumerate(lemmatized_abstracts):\n if not n%100:\n print('\\rCleaning abstract', n, 'out of', nprojects, end='', flush=True)\n \n # Remove all tokens in the stopwords list and append the result to clean_abstracts\n # <SOL>\n # </SOL>\n clean_abstracts.append(clean_tokens)\n \nprint('\\n Let us check tokens after cleaning:')\nprint(clean_abstracts[0])", "2.4. Vectorization\nUp to this point, we have transformed the raw text collection of articles into a list of articles, where each article is a collection of the word roots that are most relevant for semantic analysis. Now, we need to convert these data (a list of token lists) into a numerical representation (a list of vectors, or a matrix). To do so, we will start using the tools provided by the gensim library. \nAs a first step, we create a dictionary containing all tokens in our text corpus, and assign an integer identifier to each one of them.", "import gensim\n\n# Create dictionary of tokens\nD = gensim.corpora.Dictionary(clean_abstracts)\nn_tokens = len(D)\n\nprint('The dictionary contains', n_tokens, 'terms')\nprint('First terms in the dictionary:')\nfor n in range(10):\n print(str(n), ':', D[n])", "We can also filter out terms that appear in too few or too many of the documents in the dataset:", "no_below = 5 #Minimum number of documents to keep a term in the dictionary\nno_above = .75 #Maximum proportion of documents in which a term can appear to be kept in the dictionary\n\nD.filter_extremes(no_below=no_below,no_above=no_above, keep_n=25000)\nn_tokens = len(D)\n\nprint('The dictionary contains', n_tokens, 'terms')\n\nprint('First terms in the dictionary:')\nfor n in range(10):\n print(str(n), ':', D[n])", "In the second step, let us create a numerical version of our corpus using the doc2bow method. 
In general, D.doc2bow(token_list) transforms any list of tokens into a list of tuples (token_id, n), one per token in token_list, where token_id is the token identifier (according to dictionary D) and n is the number of occurrences of such token in token_list.", "corpus_bow = [D.doc2bow(doc) for doc in clean_abstracts]", "At this point, it is worth making sure you understand what has happened. In clean_abstracts we had a list of token lists. With it, we have constructed a Dictionary, D, which assigns an integer identifier to each token in the corpus.\nAfter that, we have transformed each article (in clean_abstracts) into a list of tuples (id, n).", "print('Original article (after cleaning):')\nprint(clean_abstracts[0])\nprint('Sparse vector representation (first 10 components):')\nprint(corpus_bow[0][:10])\nprint('Word counts for the first project (first 10 components):')\nprint(list(map(lambda x: (D[x[0]], x[1]), corpus_bow[0][:10])))", "Note that we can interpret each element of corpus_bow as a sparse vector. For example, a list of tuples \n[(0, 1), (3, 3), (5,2)]\n\nfor a dictionary of 10 elements can be represented as a vector, where any tuple (id, n) states that position id must take value n. The rest of the positions must be zero.\n[1, 0, 0, 3, 0, 2, 0, 0, 0, 0]\n\nThese sparse vectors will be the inputs to the topic modeling algorithms.\nAs a summary, the following variables will be relevant for the next chapters:\n\nD: A gensim dictionary. Term strings can be accessed using the numeric identifiers. For instance, D[0] contains the string corresponding to the first position in the BoW representation.\ncorpus_bow: BoW corpus. 
A list containing an entry per project in the dataset, and consisting of the (sparse) BoW representation for the abstract of that project.\nNSF_data: A list containing an entry per project in the dataset, and consisting of metadata for the projects in the dataset\n\nThe way we have constructed the corpus_bow variable guarantees that the order is preserved, so that the projects are listed in the same order in the lists corpus_bow and NSF_data.\n2.5. Dictionary properties\nIn the following code fragment, we build a list all_counts that contains tuples (terms, document_counts). You can use this list to calculate some statistics about the vocabulary of the dataset", "all_counts = [(D[el], D.dfs[el]) for el in D.dfs]\nall_counts = sorted(all_counts, key=lambda x: x[1])", "3. Topic Modeling\n3.1. Training a topic model using Gensim LDA\nSince we have already computed the dictionary and the documents' BoW representation using Gensim, computing the topic model is straightforward using the LdaModel() function. Please refer to the Gensim API documentation for more information on the different parameters accepted by the function:", "import gensim\nnum_topics = 50\n\nldag = gensim.models.ldamodel.LdaModel(corpus=corpus_bow, id2word=D, num_topics=num_topics)", "3.2. LDA model visualization\nGensim provides a basic visualization of the obtained topics:", "ldag.print_topics(num_topics=-1, num_words=10)", "A more useful visualization is provided by the Python LDA visualization library, pyLDAvis.\nBefore executing the next code fragment you need to install pyLDAvis:\n&gt;&gt; pip install (--user) pyLDAvis", "import pyLDAvis.gensim as gensimvis\nimport pyLDAvis\n\nvis_data = gensimvis.prepare(ldag, corpus_bow, D)\npyLDAvis.display(vis_data)", "3.3. 
Gensim utility functions\nBeyond visualization, topic models are useful for obtaining a semantic representation of documents that can later be used for other purposes:\n\nIn document classification problems\nIn content-based recommendation systems\n\nEssentially, the idea is that the topic model provides a (semantic) vector representation of documents, and that probability divergences can be used to measure document similarity. The following functions of the LdaModel class will be useful in this context:\n\nget_topic_terms(topic_id): Gets the vector of the probability distribution among words for the indicated topic\nget_document_topics(bow_vector): Gets the (sparse) vector with the probability distribution among topics for the provided document", "ldag.get_topic_terms(topicid=0)\n\nldag.get_document_topics(corpus_bow[0])", "An alternative to the use of the get_document_topics() function is to directly transform a dataset using the ldag object as follows. You can apply this transformation to several documents at once, but then the result is an iterator from which you can build the corresponding list if necessary", "print(ldag[corpus_bow[0]])\n\nprint('When applied to a dataset it will provide an iterator')\nprint(ldag[corpus_bow[:3]])\n\nprint('We can rebuild the list from the iterator with a one liner')\nprint([el for el in ldag[corpus_bow[:3]]])", "Finally, Gensim provides some useful functions to convert between formats, and to simplify interaction with numpy and scipy. 
The following code fragment converts a corpus in sparse format to a full numpy matrix", "reduced_corpus = [el for el in ldag[corpus_bow[:3]]]\nreduced_corpus = gensim.matutils.corpus2dense(reduced_corpus, num_topics).T\nprint(reduced_corpus)", "Exercise\nBuild a function that returns the most relevant projects for a given topic", "def most_relevant_projects(ldag, topicid, corpus_bow, nprojects=10):\n \"\"\"This function returns the most relevant projects in corpus_bow\n \n : ldag: The trained topic model object provided by gensim\n : topicid: The topic for which we want to find the most relevant documents\n : corpus_bow: The BoW representation of documents in Gensim format\n : nprojects: Number of most relevant projects to identify\n \n : Returns: A list with the identifiers of the most relevant projects\n \"\"\"\n\n print('Computing most relevant projects for Topic', topicid)\n print('Topic composition is:')\n print(ldag.show_topic(topicid))\n \n #<SOL>\n #</SOL>\n \n#To test the function we will find the most relevant projects for a subset of the NSF dataset\nproject_id = most_relevant_projects(ldag, 17, corpus_bow[:10000])\n\n#Print titles of selected projects\nfor idproject in project_id:\n print(NSF_data[idproject]['title'])", "Exercise\nBuild a function that computes the semantic distance between two documents. 
For this, you can use the functions (or code fragments) provided in the library dist_utils.py.", "def pairwise_dist(doc1, doc2):\n \"\"\"This function returns the Jensen-Shannon\n distance between the corresponding vectors of the documents\n \n : doc1: Semantic representation for doc1 (a vector of length ntopics)\n : doc2: Semantic representation for doc2 (a vector of length ntopics)\n\n : Returns: The JS distance between doc1 and doc2 (a number)\n \"\"\"\n #<SOL>\n #</SOL>", "Function that creates the Node CSV file for Gephi", "#print(NSF_data[0].keys())\n#print(NSF_data[0]['institution'])\n\ndef strNone(str_to_convert):\n if str_to_convert is None:\n return ''\n else:\n return str_to_convert\n\nwith open('NSF_nodes.csv','w') as fout:\n fout.write('Id;Title;Year;Budget;UnivName;UnivZIP;State\\n')\n for project in NSF_data:\n fout.write(project['project_code']+';'+project['title']+';')\n fout.write(project['year']+';'+str(project['budget'])+';')\n fout.write(project['institution'][0]+';')\n fout.write(strNone(project['institution'][1])+';')\n fout.write(strNone(project['institution'][2])+'\\n')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
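The semantic-distance exercise above relies on the Jensen-Shannon distance between topic distributions. As a hedged sketch (one possible implementation, not the contents of the course's dist_utils.py), the distance can be computed directly with NumPy:

```python
import numpy as np

def js_distance(p, q, eps=1e-12):
    """Jensen-Shannon distance (base-2) between two discrete distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    def kl(a, b):
        # eps avoids log(0); negligible for well-behaved inputs
        return np.sum(a * np.log2((a + eps) / (b + eps)))
    return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))
```

In base 2 the distance lies in [0, 1]: identical distributions give 0 and distributions with disjoint support give 1, which makes it convenient for comparing the LDA topic vectors returned by get_document_topics().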
jdsanch1/SimRC
02. Parte 2/15. Clase 15/.ipynb_checkpoints/04Class NB-checkpoint.ipynb
mit
[ "Class 4: Portfolios and risk\nJuan Diego Sánchez Torres, \nProfessor, MAF ITESO\n\nDepartment of Mathematics and Physics\ndsanchez@iteso.mx\nTel. 3669-34-34 Ext. 3069\nOffice: Cubicle 4, Building J, 2nd floor\n\n1. Motivation\nFirst of all, in order to download prices and option information from Yahoo, we need to load some Python packages. In this case, the main package will be pandas. We will also use SciPy and NumPy for the necessary mathematics, and Matplotlib and Seaborn to plot the data series.", "# import the packages to be used\nimport pandas as pd\nimport pandas_datareader.data as web\nimport numpy as np\nimport datetime\nfrom datetime import datetime\nimport scipy.stats as stats\nimport scipy as sp\nimport scipy.optimize as scopt\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n# some options for Python\npd.set_option('display.notebook_repr_html', True)\npd.set_option('display.max_columns', 6)\npd.set_option('display.max_rows', 10)\npd.set_option('display.width', 78)\npd.set_option('precision', 3)\n\ndef def_portafolio(tickers, participacion=None):\n if (participacion is None):\n participacion = np.ones(len(tickers))/len(tickers) \n portfolio = pd.DataFrame({'Tickers': tickers, 'Participacion': participacion}, index=tickers)\n return portfolio\n\nportafolio = def_portafolio(['Acción A', 'Acción B'], [1, 1])\nportafolio\n\nrendimientos = pd.DataFrame({'Acción A': [0.1, 0.24, 0.05, -0.02, 0.2],\n 'Acción B': [-0.15, -0.2, -0.01, 0.04, -0.15]})\nrendimientos\n\ndef valor_portafolio_ponderado(portafolio, rendimientos, name='Valor'):\n total_participacion = portafolio.Participacion.sum()\n ponderaciones=portafolio.Participacion/total_participacion\n rendimientos_ponderados = rendimientos*ponderaciones\n return pd.DataFrame({name: rendimientos_ponderados.sum(axis=1)})\n\nrend_portafolio=valor_portafolio_ponderado(portafolio, rendimientos, 
'Valor')\nrend_portafolio\n\ntotal_rend=pd.concat([rendimientos, rend_portafolio], axis=1)\ntotal_rend\n\ntotal_rend.std()\n\nrendimientos.corr()\n\ntotal_rend.plot(figsize=(8,6));\n\ndef plot_portafolio_rend(rend, title=None):\n rend.plot(figsize=(8,6))\n plt.xlabel('Año')\n plt.ylabel('Rendimientos')\n if (title is not None): plt.title(title)\n plt.show()\n\nplot_portafolio_rend(total_rend);", "2. Using pandas to download closing price data\nNow, as a function", "def get_historical_closes(ticker, start_date, end_date):\n p = web.DataReader(ticker, \"yahoo\", start_date, end_date).sort_index('major_axis')\n d = p.to_frame()['Adj Close'].reset_index()\n d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)\n pivoted = d.pivot(index='Date', columns='Ticker')\n pivoted.columns = pivoted.columns.droplevel(0)\n return pivoted", "Once the packages are loaded, we need to define the tickers of the stocks to be used, the download source (Yahoo in this case, although Google also works) and the dates of interest. With this, the DataReader function of the pandas_datareader package will download the requested prices.\nNote: Python distributions usually do not include the pandas_datareader package by default, so it must be installed separately. The following command installs the package in Anaconda:\n*conda install -c conda-forge pandas-datareader *", "closes=get_historical_closes(['AA','AAPL','MSFT','KO'], '2010-01-01', '2016-12-31')\ncloses\n\ncloses.plot(figsize=(8,6));", "Note: To download data from the Mexican stock exchange (BMV), the ticker must have the MX extension. \nFor example: MEXCHEM.MX, LABB.MX, GFINBURO.MX and GFNORTEO.MX.\n3. 
Formulating portfolio risk", "def calc_daily_returns(closes):\n return np.log(closes/closes.shift(1))[1:]\n\ndaily_returns=calc_daily_returns(closes)\ndaily_returns.plot(figsize=(8,6));\n\ndaily_returns.corr()\n\ndef calc_annual_returns(daily_returns):\n grouped = np.exp(daily_returns.groupby(lambda date: date.year).sum())-1\n return grouped\n\nannual_returns = calc_annual_returns(daily_returns)\nannual_returns\n\ndef calc_portfolio_var(returns, weights=None):\n if (weights is None):\n weights = np.ones(returns.columns.size)/returns.columns.size\n sigma = np.cov(returns.T,ddof=0)\n # portfolio variance is the quadratic form w' Sigma w\n var = weights.dot(sigma).dot(weights.T)\n return var\n\ncalc_portfolio_var(annual_returns)\n\ndef sharpe_ratio(returns, weights = None, risk_free_rate = 0.015):\n n = returns.columns.size\n if weights is None: weights = np.ones(n)/n\n var = calc_portfolio_var(returns, weights)\n means = returns.mean()\n return (means.dot(weights) - risk_free_rate)/np.sqrt(var)\n\nsharpe_ratio(annual_returns)", "4. 
Portfolio optimization", "def f(x): return 2+x**2\n\nscopt.fmin(f, 10)\n\ndef negative_sharpe_ratio_n_minus_1_stock(weights,returns,risk_free_rate):\n \"\"\"\n Given n-1 weights, return a negative sharpe ratio\n \"\"\"\n weights2 = sp.append(weights, 1-np.sum(weights))\n return -sharpe_ratio(returns, weights2, risk_free_rate)\n\ndef optimize_portfolio(returns, risk_free_rate):\n w0 = np.ones(returns.columns.size-1, dtype=float) * 1.0 / returns.columns.size\n w1 = scopt.fmin(negative_sharpe_ratio_n_minus_1_stock, w0, args=(returns, risk_free_rate))\n final_w = sp.append(w1, 1 - np.sum(w1))\n final_sharpe = sharpe_ratio(returns, final_w, risk_free_rate)\n return (final_w, final_sharpe)\n\noptimize_portfolio(annual_returns, 0.0003)\n\ndef objfun(W, R, target_ret):\n stock_mean = np.mean(R,axis=0)\n port_mean = np.dot(W,stock_mean)\n cov=np.cov(R.T)\n port_var = np.dot(np.dot(W,cov),W.T)\n penalty = 2000*abs(port_mean-target_ret)\n return np.sqrt(port_var) + penalty\n\ndef calc_efficient_frontier(returns):\n result_means = []\n result_stds = []\n result_weights = []\n means = returns.mean()\n min_mean, max_mean = means.min(), means.max()\n nstocks = returns.columns.size\n for r in np.linspace(min_mean, max_mean, 150):\n weights = np.ones(nstocks)/nstocks\n bounds = [(0,1) for i in np.arange(nstocks)]\n constraints = ({'type': 'eq', 'fun': lambda W: np.sum(W) - 1})\n results = scopt.minimize(objfun, weights, (returns, r), method='SLSQP', constraints = constraints, bounds = bounds)\n if not results.success: # handle error\n raise Exception(results.message)\n result_means.append(np.round(r,4)) # 4 decimal places\n std_=np.round(np.std(np.sum(returns*results.x,axis=1)),6)\n result_stds.append(std_)\n result_weights.append(np.round(results.x, 5))\n return {'Means': result_means, 'Stds': result_stds, 'Weights': result_weights}\n\nfrontier_data = calc_efficient_frontier(annual_returns)\n\ndef plot_efficient_frontier(ef_data):\n plt.figure(figsize=(12,8))\n 
plt.title('Efficient Frontier')\n plt.xlabel('Standard Deviation of the portfolio (Risk)')\n plt.ylabel('Return of the portfolio')\n plt.plot(ef_data['Stds'], ef_data['Means'], '--');\n\nplot_efficient_frontier(frontier_data)", "5. ETF", "etf=get_historical_closes(['PICK','IBB','XBI','MLPX','AMLP','VGT','RYE','IEO','AAPL'], '2014-01-01', '2014-12-31')\netf.plot(figsize=(8,6));\n\ndaily_returns_etf=calc_daily_returns(etf)\ndaily_returns_etf\n\ndaily_returns_etf_mean=1000*daily_returns_etf.mean()\ndaily_returns_etf_mean\n\ndaily_returns_etf_std=daily_returns_etf.std()\ndaily_returns_etf_std\n\ndaily_returns_ms=pd.concat([daily_returns_etf_mean, daily_returns_etf_std], axis=1)\ndaily_returns_ms\n\nfrom sklearn.cluster import KMeans\n\nrandom_state = 10\ny_pred = KMeans(n_clusters=4, random_state=random_state).fit_predict(daily_returns_ms)\n\nplt.scatter(daily_returns_etf_mean, daily_returns_etf_std, c=y_pred);\nplt.axis([-1, 1, 0.01, 0.03]);\n\nimport scipy.cluster.hierarchy as hac\n\ndaily_returns_etf.corr()\n\nZ = hac.linkage(daily_returns_etf.corr(), 'single')\n\n# Plot the dendrogram\nplt.figure(figsize=(25, 10))\nplt.title('Hierarchical Clustering Dendrogram')\nplt.xlabel('sample index')\nplt.ylabel('distance')\nhac.dendrogram(\n Z,\n leaf_rotation=90., # rotates the x axis labels\n leaf_font_size=8., # font size for the x axis labels\n)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
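The portfolio-risk computation above reduces to the quadratic form w'Σw. A small self-contained check with assumed toy numbers (not the notebook's Yahoo data) shows how imperfect correlation lowers combined risk:

```python
import numpy as np

# Toy covariance matrix for two assets (assumed numbers for illustration):
# variances 0.04 and 0.09, with a slightly negative covariance.
sigma = np.array([[0.04, -0.01],
                  [-0.01, 0.09]])
w = np.array([0.5, 0.5])

# Portfolio variance is the quadratic form w' Sigma w.
port_var = float(w @ sigma @ w)
```

Here the 50/50 portfolio's variance (0.0275) falls below even the less risky asset's variance (0.04); this diversification effect is what the efficient-frontier plot above traces out across target returns.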
rflamary/POT
notebooks/plot_fgw.ipynb
mit
[ "%matplotlib inline", "Plot Fused-Gromov-Wasserstein\nThis example illustrates the computation of FGW for 1D measures [18].\n.. [18] Vayer Titouan, Chapel Laetitia, Flamary Rémi, Tavenard Romain\n and Courty Nicolas\n \"Optimal Transport for structured data with application on graphs\"\n International Conference on Machine Learning (ICML). 2019.", "# Author: Titouan Vayer <titouan.vayer@irisa.fr>\n#\n# License: MIT License\n\nimport matplotlib.pyplot as pl\nimport numpy as np\nimport ot\nfrom ot.gromov import gromov_wasserstein, fused_gromov_wasserstein", "Generate data", "#%% parameters\n# We create two 1D random measures\nn = 20 # number of points in the first distribution\nn2 = 30 # number of points in the second distribution\nsig = 1 # std of first distribution\nsig2 = 0.1 # std of second distribution\n\nnp.random.seed(0)\n\nphi = np.arange(n)[:, None]\nxs = phi + sig * np.random.randn(n, 1)\nys = np.vstack((np.ones((n // 2, 1)), 0 * np.ones((n // 2, 1)))) + sig2 * np.random.randn(n, 1)\n\nphi2 = np.arange(n2)[:, None]\nxt = phi2 + sig * np.random.randn(n2, 1)\nyt = np.vstack((np.ones((n2 // 2, 1)), 0 * np.ones((n2 // 2, 1)))) + sig2 * np.random.randn(n2, 1)\nyt = yt[::-1, :]\n\np = ot.unif(n)\nq = ot.unif(n2)", "Plot data", "#%% plot the distributions\n\npl.close(10)\npl.figure(10, (7, 7))\n\npl.subplot(2, 1, 1)\n\npl.scatter(ys, xs, c=phi, s=70)\npl.ylabel('Feature value a', fontsize=20)\npl.title('$\\mu=\\sum_i \\delta_{x_i,a_i}$', fontsize=25, usetex=True, y=1)\npl.xticks(())\npl.yticks(())\npl.subplot(2, 1, 2)\npl.scatter(yt, xt, c=phi2, s=70)\npl.xlabel('coordinates x/y', fontsize=25)\npl.ylabel('Feature value b', fontsize=20)\npl.title('$\\\\nu=\\sum_j \\delta_{y_j,b_j}$', fontsize=25, usetex=True, y=1)\npl.yticks(())\npl.tight_layout()\npl.show()", "Create structure matrices and across-feature distance matrix", "#%% Structure matrices and across-features distance matrix\nC1 = ot.dist(xs)\nC2 = ot.dist(xt)\nM = ot.dist(ys, yt)\nw1 = 
ot.unif(C1.shape[0])\nw2 = ot.unif(C2.shape[0])\nGot = ot.emd([], [], M)", "Plot matrices", "#%%\ncmap = 'Reds'\npl.close(10)\npl.figure(10, (5, 5))\nfs = 15\nl_x = [0, 5, 10, 15]\nl_y = [0, 5, 10, 15, 20, 25]\ngs = pl.GridSpec(5, 5)\n\nax1 = pl.subplot(gs[3:, :2])\n\npl.imshow(C1, cmap=cmap, interpolation='nearest')\npl.title(\"$C_1$\", fontsize=fs)\npl.xlabel(\"$k$\", fontsize=fs)\npl.ylabel(\"$i$\", fontsize=fs)\npl.xticks(l_x)\npl.yticks(l_x)\n\nax2 = pl.subplot(gs[:3, 2:])\n\npl.imshow(C2, cmap=cmap, interpolation='nearest')\npl.title(\"$C_2$\", fontsize=fs)\npl.ylabel(\"$l$\", fontsize=fs)\n#pl.ylabel(\"$l$\",fontsize=fs)\npl.xticks(())\npl.yticks(l_y)\nax2.set_aspect('auto')\n\nax3 = pl.subplot(gs[3:, 2:], sharex=ax2, sharey=ax1)\npl.imshow(M, cmap=cmap, interpolation='nearest')\npl.yticks(l_x)\npl.xticks(l_y)\npl.ylabel(\"$i$\", fontsize=fs)\npl.title(\"$M_{AB}$\", fontsize=fs)\npl.xlabel(\"$j$\", fontsize=fs)\npl.tight_layout()\nax3.set_aspect('auto')\npl.show()", "Compute FGW/GW", "#%% Computing FGW and GW\nalpha = 1e-3\n\not.tic()\nGwg, logw = fused_gromov_wasserstein(M, C1, C2, p, q, loss_fun='square_loss', alpha=alpha, verbose=True, log=True)\not.toc()\n\n#%reload_ext WGW\nGg, log = gromov_wasserstein(C1, C2, p, q, loss_fun='square_loss', verbose=True, log=True)", "Visualize transport matrices", "#%% visu OT matrix\ncmap = 'Blues'\nfs = 15\npl.figure(2, (13, 5))\npl.clf()\npl.subplot(1, 3, 1)\npl.imshow(Got, cmap=cmap, interpolation='nearest')\n#pl.xlabel(\"$y$\",fontsize=fs)\npl.ylabel(\"$i$\", fontsize=fs)\npl.xticks(())\n\npl.title('Wasserstein ($M$ only)')\n\npl.subplot(1, 3, 2)\npl.imshow(Gg, cmap=cmap, interpolation='nearest')\npl.title('Gromov ($C_1,C_2$ only)')\npl.xticks(())\npl.subplot(1, 3, 3)\npl.imshow(Gwg, cmap=cmap, interpolation='nearest')\npl.title('FGW ($M+C_1,C_2$)')\n\npl.xlabel(\"$j$\", fontsize=fs)\npl.ylabel(\"$i$\", fontsize=fs)\n\npl.tight_layout()\npl.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
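For context on what the POT example above optimizes: fused Gromov-Wasserstein blends a linear Wasserstein term on the across-feature distances `M` with a quadratic Gromov term on the structure matrices `C1` and `C2`. Below is a minimal NumPy sketch of that objective evaluated at a fixed coupling; the helper name `fgw_cost` is hypothetical and not part of POT.

```python
import numpy as np

def fgw_cost(T, M, C1, C2, alpha):
    # FGW objective for a fixed coupling T:
    #   (1 - alpha) * <M, T>
    #   + alpha * sum_{i,j,k,l} (C1[i,k] - C2[j,l])^2 * T[i,j] * T[k,l]
    feature_term = np.sum(M * T)
    p = T.sum(axis=1)  # first marginal of the coupling
    q = T.sum(axis=0)  # second marginal
    # Expand the squared loss into three matrix products so the structure
    # term costs O(n^2 m + n m^2) instead of a naive O(n^2 m^2) loop.
    structure_term = (p @ (C1 ** 2) @ p + q @ (C2 ** 2) @ q
                      - 2.0 * np.sum((C1 @ T @ C2.T) * T))
    return (1.0 - alpha) * feature_term + alpha * structure_term
```

A similar factorization of the square loss is what makes library implementations of (F)GW tractable; here it just serves to show how the two terms the example's plots contrast ("$M$ only" vs "$C_1,C_2$ only") enter one objective.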
fastai/fastai
nbs/34_callback.rnn.ipynb
apache-2.0
[ "#|hide\n#|skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab\n\n#|export\nfrom __future__ import annotations\nfrom fastai.basics import *\n\n#|hide\nfrom nbdev.showdoc import *\n\n#|default_exp callback.rnn", "Callback for RNN training\n\nCallback that uses the outputs of language models to add AR and TAR regularization", "#|export\n@docs\nclass ModelResetter(Callback):\n \"`Callback` that resets the model at each validation/training step\"\n def before_train(self): self.model.reset()\n def before_validate(self): self.model.reset()\n def after_fit(self): self.model.reset()\n _docs = dict(before_train=\"Reset the model before training\",\n before_validate=\"Reset the model before validation\",\n after_fit=\"Reset the model after fitting\")\n\n#|export\nclass RNNCallback(Callback):\n \"Save the raw and dropped-out outputs and only keep the true output for loss computation\"\n def after_pred(self): self.learn.pred,self.raw_out,self.out = [o[-1] if is_listy(o) else o for o in self.pred]\n\n#|export\nclass RNNRegularizer(Callback):\n \"Add AR and TAR regularization\"\n order,run_valid = RNNCallback.order+1,False\n def __init__(self, alpha=0., beta=0.): store_attr()\n def after_loss(self):\n if not self.training: return\n if self.alpha: self.learn.loss_grad += self.alpha * self.rnn.out.float().pow(2).mean()\n if self.beta:\n h = self.rnn.raw_out\n if len(h)>1: self.learn.loss_grad += self.beta * (h[:,1:] - h[:,:-1]).float().pow(2).mean()\n\n#|export\ndef rnn_cbs(alpha=0., beta=0.):\n \"All callbacks needed for (optionally regularized) RNN training\"\n reg = [RNNRegularizer(alpha=alpha, beta=beta)] if alpha or beta else []\n return [ModelResetter(), RNNCallback()] + reg", "Export -", "#|hide\nfrom nbdev.export import notebook2script\nnotebook2script()" ]
[ "code", "markdown", "code", "markdown", "code" ]
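To make the arithmetic inside `RNNRegularizer.after_loss` above concrete, here is a framework-free NumPy sketch of the AR and TAR penalties it adds to the loss. The function name is illustrative (not fastai API), and the sketch checks the sequence axis explicitly, which is the intent of the regularizers for `[batch, seq, hidden]` activations.

```python
import numpy as np

def ar_tar_penalty(out, raw_out, alpha=2.0, beta=1.0):
    # AR (activation regularization): penalize large dropped-out activations.
    ar = alpha * np.mean(out.astype(float) ** 2) if alpha else 0.0
    # TAR (temporal activation regularization): penalize large changes
    # between consecutive raw activations along the sequence axis.
    tar = 0.0
    if beta and raw_out.shape[1] > 1:  # need at least two timesteps
        tar = beta * np.mean((raw_out[:, 1:] - raw_out[:, :-1]).astype(float) ** 2)
    return ar + tar
```

This mirrors why `RNNCallback` saves both the dropped-out output (`out`, fed to AR) and the raw output (`raw_out`, fed to TAR) before the loss is computed.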
econ-ark/HARK
examples/Gentle-Intro/Gentle-Intro-To-HARK.ipynb
apache-2.0
[ "A Gentle Introduction to HARK\nThis notebook provides a simple, hands-on tutorial for first-time HARK users -- and potentially first-time Python users. It does not go \"into the weeds\" - we have hidden some code cells that do boring things that you don't need to digest on your first experience with HARK. Our aim is to convey a feel for how the toolkit works.\nFor readers for whom this is their very first experience with Python, we have put important Python concepts in boldface. For those for whom this is the first time they have used a Jupyter notebook, we have put Jupyter instructions in italics. Only cursory definitions (if any) are provided here. If you want to learn more, there are many online Python and Jupyter tutorials.", "# This cell has a bit of initial setup. You can click the triangle to the left to expand it.\n# Click the \"Run\" button immediately above the notebook in order to execute the contents of any cell\n# WARNING: Each cell in the notebook relies upon results generated by previous cells\n# The most common problem beginners have is to execute a cell before all its predecessors\n# If you do this, you can restart the kernel (see the \"Kernel\" menu above) and start over\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport HARK \nfrom copy import deepcopy\nmystr = lambda number : \"{:.4f}\".format(number)\nfrom HARK.utilities import plot_funcs", "Your First HARK Model: Perfect Foresight\nWe start with almost the simplest possible consumption model: A consumer with CRRA utility \n\\begin{equation}\nU(C) = \\frac{C^{1-\\rho}}{1-\\rho}\n\\end{equation}\nhas perfect foresight about everything except the (stochastic) date of death, which occurs with constant probability implying a \"survival probability\" $\\newcommand{\\LivPrb}{\\aleph}\\LivPrb < 1$. Permanent labor income $P_t$ grows from period to period by a factor $\\Gamma_t$. 
At the beginning of each period $t$, the consumer has some amount of market resources $M_t$ (which includes both market wealth and current income) and must choose how much of those resources to consume $C_t$ and how much to retain in a riskless asset $A_t$ which will earn return factor $R$. The agent's flow of utility $U(C_t)$ from consumption is geometrically discounted by factor $\\beta$. Between periods, the agent dies with probability $\\mathsf{D}_t$, ending his problem.\nThe agent's problem can be written in Bellman form as:\n\\begin{eqnarray}\nV_t(M_t,P_t) &=& \\max_{C_t}~U(C_t) + \\beta \\aleph V_{t+1}(M_{t+1},P_{t+1}), \\\n& s.t. & \\\n%A_t &=& M_t - C_t, \\\nM_{t+1} &=& R (M_{t}-C_{t}) + Y_{t+1}, \\\nP_{t+1} &=& \\Gamma_{t+1} P_t, \\\n\\end{eqnarray}\nA particular perfect foresight agent's problem can be characterized by values of risk aversion $\\rho$, discount factor $\\beta$, and return factor $R$, along with sequences of income growth factors ${ \\Gamma_t }$ and survival probabilities ${\\mathsf{\\aleph}_t}$. To keep things simple, let's forget about \"sequences\" of income growth and mortality, and just think about an $\\textit{infinite horizon}$ consumer with constant income growth and survival probability.\nRepresenting Agents in HARK\nHARK represents agents solving this type of problem as $\\textbf{instances}$ of the $\\textbf{class}$ $\\texttt{PerfForesightConsumerType}$, a $\\textbf{subclass}$ of $\\texttt{AgentType}$. To make agents of this class, we must import the class itself into our workspace. (Run the cell below in order to do this).", "from HARK.ConsumptionSaving.ConsIndShockModel import PerfForesightConsumerType", "The $\\texttt{PerfForesightConsumerType}$ class contains within itself the Python code that constructs the solution for the perfect foresight model we are studying here, as specifically articulated in these lecture notes. 
\nTo create an instance of $\\texttt{PerfForesightConsumerType}$, we simply call the class as if it were a function, passing as arguments the specific parameter values we want it to have. In the hidden cell below, we define a $\\textbf{dictionary}$ named $\\texttt{PF_dictionary}$ with these parameter values:\n| Param | Description | Code | Value |\n| :---: | --- | --- | :---: |\n| $\\rho$ | Relative risk aversion | $\\texttt{CRRA}$ | 2.5 |\n| $\\beta$ | Discount factor | $\\texttt{DiscFac}$ | 0.96 |\n| $R$ | Risk free interest factor | $\\texttt{Rfree}$ | 1.03 |\n| $\\aleph$ | Survival probability | $\\texttt{LivPrb}$ | 0.98 |\n| $\\Gamma$ | Income growth factor | $\\texttt{PermGroFac}$ | 1.01 |\nFor now, don't worry about the specifics of dictionaries. All you need to know is that a dictionary lets us pass many arguments wrapped up in one simple data structure.", "# This cell defines a parameter dictionary. You can expand it if you want to see what that looks like.\nPF_dictionary = {\n 'CRRA' : 2.5,\n 'DiscFac' : 0.96,\n 'Rfree' : 1.03,\n 'LivPrb' : [0.98],\n 'PermGroFac' : [1.01],\n 'T_cycle' : 1,\n 'cycles' : 0,\n 'AgentCount' : 10000\n}\n\n# To those curious enough to open this hidden cell, you might notice that we defined\n# a few extra parameters in that dictionary: T_cycle, cycles, and AgentCount. Don't\n# worry about these for now.", "Let's make an object named $\\texttt{PFexample}$ which is an instance of the $\\texttt{PerfForesightConsumerType}$ class. The object $\\texttt{PFexample}$ will bundle together the abstract mathematical description of the solution embodied in $\\texttt{PerfForesightConsumerType}$, and the specific set of parameter values defined in $\\texttt{PF_dictionary}$. 
Such a bundle is created by passing $\\texttt{PF_dictionary}$ to the class $\\texttt{PerfForesightConsumerType}$:", "PFexample = PerfForesightConsumerType(**PF_dictionary) \n# the asterisks ** basically say \"here come some arguments\" to PerfForesightConsumerType", "In $\\texttt{PFexample}$, we now have defined the problem of a particular infinite horizon perfect foresight consumer who knows how to solve this problem. \nSolving an Agent's Problem\nTo tell the agent actually to solve the problem, we call the agent's $\\texttt{solve}$ method. (A method is essentially a function that an object runs that affects the object's own internal characteristics -- in this case, the method adds the consumption function to the contents of $\\texttt{PFexample}$.)\nThe cell below calls the $\\texttt{solve}$ method for $\\texttt{PFexample}$", "PFexample.solve()", "Running the $\\texttt{solve}$ method creates the attribute of $\\texttt{PFexample}$ named $\\texttt{solution}$. In fact, every subclass of $\\texttt{AgentType}$ works the same way: The class definition contains the abstract algorithm that knows how to solve the model, but to obtain the particular solution for a specific instance (parameterization/configuration), that instance must be instructed to $\\texttt{solve()}$ its problem. \nThe $\\texttt{solution}$ attribute is always a $\\textit{list}$ of solutions to a single period of the problem. In the case of an infinite horizon model like the one here, there is just one element in that list -- the solution to all periods of the infinite horizon problem. The consumption function stored as the first element (element 0) of the solution list can be retrieved by:", "PFexample.solution[0].cFunc", "One of the results proven in the associated lecture notes is that, for the specific problem defined above, there is a solution in which the ratio $c = C/P$ is a linear function of the ratio of market resources to permanent income, $m = M/P$. 
\nThis is why $\\texttt{cFunc}$ can be represented by a linear interpolation. It can be plotted between an $m$ ratio of 0 and 10 using the command below.", "mPlotTop=10\nplot_funcs(PFexample.solution[0].cFunc,0.,mPlotTop)", "The figure illustrates one of the surprising features of the perfect foresight model: A person with zero money should be spending at a rate more than double their income (that is, $\\texttt{cFunc}(0.) \\approx 2.08$ - the intercept on the vertical axis). How can this be?\nThe answer is that we have not incorporated any constraint that would prevent the agent from borrowing against the entire PDV of future earnings -- human wealth. How much is that? What's the minimum value of $m_t$ where the consumption function is defined? We can check by retrieving the $\\texttt{hNrm}$ attribute of the solution, which calculates the value of human wealth normalized by permanent income:", "humanWealth = PFexample.solution[0].hNrm\nmMinimum = PFexample.solution[0].mNrmMin\nprint(\"This agent's human wealth is \" + str(humanWealth) + ' times his current income level.')\nprint(\"This agent's consumption function is defined (consumption is positive) down to m_t = \" + str(mMinimum))", "Yikes! Let's take a look at the bottom of the consumption function. In the cell below, the bounds of the plot_funcs function are set to display down to the lowest defined value of the consumption function.", "plot_funcs(PFexample.solution[0].cFunc,\n mMinimum,\n mPlotTop)", "Changing Agent Parameters\nSuppose you wanted to change one (or more) of the parameters of the agent's problem and see what that does. We want to compare consumption functions before and after we change parameters, so let's make a new instance of $\\texttt{PerfForesightConsumerType}$ by copying $\\texttt{PFexample}$.", "NewExample = deepcopy(PFexample)", "You can assign new parameters to an AgentType with the assign_parameters method. 
For example, we could make the new agent less patient:", "NewExample.assign_parameters(DiscFac = 0.90)\nNewExample.solve()\nmPlotBottom = mMinimum\nplot_funcs([PFexample.solution[0].cFunc,\n NewExample.solution[0].cFunc],\n mPlotBottom,\n mPlotTop)", "(Note that you can pass a list of functions to plot_funcs as the first argument rather than just a single function. Lists are written inside of [square brackets].)\nLet's try to deal with the \"problem\" of massive human wealth by making another consumer who has essentially no future income. We can virtually eliminate human wealth by making the permanent income growth factor $\\textit{very}$ small.\nIn $\\texttt{PFexample}$, the agent's income grew by 1 percent per period -- his $\\texttt{PermGroFac}$ took the value 1.01. What if our new agent had a growth factor of 0.01 -- his income shrinks by 99 percent each period? In the cell below, set $\\texttt{NewExample}$'s discount factor back to its original value, then set its $\\texttt{PermGroFac}$ attribute so that the growth factor is 0.01 each period.\nImportant: Recall that the model at the top of this document said that an agent's problem is characterized by a sequence of income growth factors, but we tabled that concept. 
Because $\\texttt{PerfForesightConsumerType}$ treats $\\texttt{PermGroFac}$ as a time-varying attribute, it must be specified as a list (with a single element in this case).", "# Revert NewExample's discount factor and make his future income minuscule\n# print(\"your lines here\")\n\n# Compare the old and new consumption functions\nplot_funcs([PFexample.solution[0].cFunc,NewExample.solution[0].cFunc],0.,10.)", "Now $\\texttt{NewExample}$'s consumption function has the same slope (MPC) as $\\texttt{PFexample}$, but it emanates from (almost) zero -- he has basically no future income to borrow against!\nIf you'd like, use the cell above to alter $\\texttt{NewExample}$'s other attributes (relative risk aversion, etc) and see how the consumption function changes. However, keep in mind that $\\textit{no solution exists}$ for some combinations of parameters. HARK should let you know if this is the case when you try to solve such a model.\nYour Second HARK Model: Adding Income Shocks\nLinear consumption functions are pretty boring, and you'd be justified in feeling unimpressed if all HARK could do was plot some lines. Let's look at another model that adds two important layers of complexity: income shocks and (artificial) borrowing constraints.\nSpecifically, our new type of consumer receives two income shocks at the beginning of each period: a completely transitory shock $\\theta_t$ and a completely permanent shock $\\psi_t$. Moreover, lenders will not let the agent borrow money such that his ratio of end-of-period assets $A_t$ to permanent income $P_t$ is less than $\\underline{a}$. As with the perfect foresight problem, this model can be framed in terms of normalized variables, e.g. $m_t \\equiv M_t/P_t$. 
(See here for all the theory).\n\\begin{eqnarray}\nv_t(m_t) &=& \\max_{c_t} ~ U(c_t) ~ + \\beta \\mathbb{E} [(\\Gamma_{t+1}\\psi_{t+1})^{1-\\rho} v_{t+1}(m_{t+1}) ], \\\na_t &=& m_t - c_t, \\\na_t &\\geq& \\underline{a}, \\\nm_{t+1} &=& R/(\\Gamma_{t+1} \\psi_{t+1}) a_t + \\theta_{t+1}, \\\n\\mathbb{E}[\\psi]=\\mathbb{E}[\\theta] &=& 1, \\\nU(c) &=& \\frac{c^{1-\\rho}}{1-\\rho}.\n\\end{eqnarray}\nHARK represents agents with this kind of problem as instances of the class $\\texttt{IndShockConsumerType}$. To create an $\\texttt{IndShockConsumerType}$, we must specify the same set of parameters as for a $\\texttt{PerfForesightConsumerType}$, as well as an artificial borrowing constraint $\\underline{a}$ and a sequence of income shocks. It's easy enough to pick a borrowing constraint -- say, zero -- but how would we specify the distributions of the shocks? Can't the joint distribution of permanent and transitory shocks be just about anything?\nYes, and HARK can handle whatever correlation structure a user might care to specify. However, the default behavior of $\\texttt{IndShockConsumerType}$ is that the distribution of permanent income shocks is mean-one lognormal, and the distribution of transitory shocks is mean-one lognormal augmented with a point mass representing unemployment. The distributions are independent of each other by default, and by default are approximated with $N$-point equiprobable distributions.\nLet's make an infinite horizon instance of $\\texttt{IndShockConsumerType}$ with the same parameters as our original perfect foresight agent, plus the extra parameters to specify the income shock distribution and the artificial borrowing constraint. 
As before, we'll make a dictionary:\n| Param | Description | Code | Value |\n| :---: | --- | --- | :---: |\n| $\\underline{a}$ | Artificial borrowing constraint | $\\texttt{BoroCnstArt}$ | 0.0 |\n| $\\sigma_\\psi$ | Underlying stdev of permanent income shocks | $\\texttt{PermShkStd}$ | 0.1 |\n| $\\sigma_\\theta$ | Underlying stdev of transitory income shocks | $\\texttt{TranShkStd}$ | 0.1 |\n| $N_\\psi$ | Number of discrete permanent income shocks | $\\texttt{PermShkCount}$ | 7 |\n| $N_\\theta$ | Number of discrete transitory income shocks | $\\texttt{TranShkCount}$ | 7 |\n| $\\mho$ | Unemployment probability | $\\texttt{UnempPrb}$ | 0.05 |\n| $\\underline{\\theta}$ | Transitory shock when unemployed | $\\texttt{IncUnemp}$ | 0.3 |", "# This cell defines a parameter dictionary for making an instance of IndShockConsumerType.\n\nIndShockDictionary = {\n 'CRRA': 2.5, # The dictionary includes our original parameters...\n 'Rfree': 1.03,\n 'DiscFac': 0.96,\n 'LivPrb': [0.98],\n 'PermGroFac': [1.01],\n 'PermShkStd': [0.1], # ... and the new parameters for constructing the income process. \n 'PermShkCount': 7,\n 'TranShkStd': [0.1],\n 'TranShkCount': 7,\n 'UnempPrb': 0.05,\n 'IncUnemp': 0.3,\n 'BoroCnstArt': 0.0,\n 'aXtraMin': 0.001, # aXtra parameters specify how to construct the grid of assets.\n 'aXtraMax': 50., # Don't worry about these for now\n 'aXtraNestFac': 3,\n 'aXtraCount': 48,\n 'aXtraExtra': [None],\n 'vFuncBool': False, # These booleans indicate whether the value function should be calculated\n 'CubicBool': False, # and whether to use cubic spline interpolation. You can ignore them.\n 'aNrmInitMean' : -10.,\n 'aNrmInitStd' : 0.0, # These parameters specify the (log) distribution of normalized assets\n 'pLvlInitMean' : 0.0, # and permanent income for agents at \"birth\". They are only relevant in\n 'pLvlInitStd' : 0.0, # simulation and you don't need to worry about them.\n 'PermGroFacAgg' : 1.0,\n 'T_retire': 0, # What's this about retirement? 
ConsIndShock is set up to be able to\n 'UnempPrbRet': 0.0, # handle lifecycle models as well as infinite horizon problems. Swapping\n 'IncUnempRet': 0.0, # out the structure of the income process is easy, but ignore for now.\n 'T_age' : None,\n 'T_cycle' : 1,\n 'cycles' : 0,\n 'AgentCount': 10000,\n 'tax_rate':0.0,\n}\n \n# Hey, there's a lot of parameters we didn't tell you about! Yes, but you don't need to\n# think about them for now.", "As before, we need to import the relevant subclass of $\\texttt{AgentType}$ into our workspace, then create an instance by passing the dictionary to the class as if the class were a function.", "from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType\nIndShockExample = IndShockConsumerType(**IndShockDictionary)", "Now we can solve our new agent's problem just like before, using the $\\texttt{solve}$ method.", "IndShockExample.solve()\nplot_funcs(IndShockExample.solution[0].cFunc,0.,10.)", "Changing Constructed Attributes\nIn the parameter dictionary above, we chose values for HARK to use when constructing its numeric representation of $F_t$, the joint distribution of permanent and transitory income shocks. When $\\texttt{IndShockExample}$ was created, those parameters ($\\texttt{TranShkStd}$, etc) were used by the constructor or initialization method of $\\texttt{IndShockConsumerType}$ to construct an attribute called $\\texttt{IncomeDstn}$.\nSuppose you were interested in changing (say) the amount of permanent income risk. From the section above, you might think that you could simply change the attribute $\\texttt{PermShkStd}$, solve the model again, and it would work.\nThat's almost true-- there's one extra step. $\\texttt{PermShkStd}$ is a primitive input, but it's not the thing you actually want to change. Changing $\\texttt{PermShkStd}$ doesn't actually update the income distribution... 
unless you tell it to (just like changing an agent's preferences does not change the consumption function that was stored for the old set of parameters -- until you invoke the $\\texttt{solve}$ method again). In the cell below, we invoke the method $\\texttt{update_income_process}$ so HARK knows to reconstruct the attribute $\\texttt{IncomeDstn}$.", "OtherExample = deepcopy(IndShockExample) # Make a copy so we can compare consumption functions\nOtherExample.assign_parameters(PermShkStd = [0.2]) # Double permanent income risk (note that it's a one element list)\nOtherExample.update_income_process() # Call the method to reconstruct the representation of F_t\nOtherExample.solve()", "In the cell below, use your blossoming HARK skills to plot the consumption function for $\\texttt{IndShockExample}$ and $\\texttt{OtherExample}$ on the same figure.", "# Use the line(s) below to plot the consumption functions against each other" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
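The linear consumption function that the HARK tutorial above plots has a textbook closed form, which makes the quoted intercept of roughly 2.08 easy to verify by hand. Below is a sketch using the standard perfect-foresight formulas with the tutorial's parameter values; it is plain Python, not a call into HARK, and `kappa`, `hNrm`, and `cFunc` are local names chosen for illustration.

```python
# Closed-form perfect foresight solution (standard theory, not HARK internals):
#   kappa = 1 - (R * beta * LivPrb)**(1/rho) / R   (marginal propensity to consume)
#   hNrm  = (Gamma/R) / (1 - Gamma/R)              (PDV of future income / current income)
#   cFunc(m) = kappa * (m + hNrm)
CRRA, DiscFac, Rfree, LivPrb, PermGroFac = 2.5, 0.96, 1.03, 0.98, 1.01

kappa = 1.0 - (Rfree * DiscFac * LivPrb) ** (1.0 / CRRA) / Rfree
hNrm = (PermGroFac / Rfree) / (1.0 - PermGroFac / Rfree)

def cFunc(m):
    # Consume a constant fraction kappa of total (market + human) wealth.
    return kappa * (m + hNrm)

print(hNrm, cFunc(0.0))  # human wealth 50.5, intercept about 2.08
```

With the tutorial's parameters this reproduces both numbers the notebook discusses: human wealth of 50.5 times current income, and consumption of about 2.08 at zero market resources.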
ES-DOC/esdoc-jupyterhub
notebooks/nasa-giss/cmip6/models/sandbox-1/landice.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: NASA-GISS\nSource ID: SANDBOX-1\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:20\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'landice')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --&gt; Mass Balance\n7. Ice --&gt; Mass Balance --&gt; Basal\n8. Ice --&gt; Mass Balance --&gt; Frontal\n9. Ice --&gt; Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. 
Ice Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify how ice albedo is modelled", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Atmospheric Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Oceanic Coupling Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhich variables are passed between the ocean and ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of land ice code\n2.1. 
Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. Adaptive Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs an adaptive grid being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.3. 
Base Resolution\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe base resolution (in metres), before any adaption", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Resolution Limit\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Projection\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of glaciers in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of glaciers, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. 
Dynamic Areal Extent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDoes the model include a dynamic glacial extent?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Grounding Line Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.3. Ice Sheet\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice sheets simulated?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.4. Ice Shelf\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre ice shelves simulated?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Ice --&gt; Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Ice --&gt; Mass Balance --&gt; Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Ocean\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Ice --&gt; Mass Balance --&gt; Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Melting\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Ice --&gt; Dynamics\n**\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Approximation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nApproximation type used in modelling ice dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "9.3. Adaptive Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.4. Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. 
If the timestep is adaptive, then state a representative timestep.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
privong/pythonclub
sessions/01-introduction/Five Minute Notebook.ipynb
gpl-3.0
[ "A First Brush With Jupyter\nThis notebook will show you some things I find useful to do in these notebooks.", "# you only need to do this once. Shamelessly stolen from Johannsen.\n!pip2 install --upgrade version_information\n\n#Preamble. These are some standard things I like to include in IPython Notebooks.\nimport astropy\nfrom astropy.table import Table, Column, MaskedColumn\nimport numpy as np\nimport matplotlib.pyplot as plt\n%load_ext version_information\n\n%version_information numpy, scipy, matplotlib, sympy, version_information", "You will more than likely want to plot some things. In the notebook environment, this can be done in different ways. I typically choose an inline plot. However, you can also have images from matplotlib run as separate windows or as interactive objects within the notebook.", "# special IPython command to prepare the notebook for matplotlib\n#interactive plotting in separate window\n#%matplotlib qt \n#interactive charts inside notebooks, matplotlib 1.4+\n#%matplotlib notebook \n#normal charts inside notebooks\n%matplotlib inline", "So what to do first? Let's download a Gaia file.", "#This cell will download some gaia data file to your pwd\nimport urllib2\nimport gzip\nsome_zipped_gaia_file = urllib2.urlopen('http://cdn.gea.esac.esa.int/Gaia/gaia_source/csv/GaiaSource_000-010-207.csv.gz')\nsome_gaia_file_saved = open('GaiaSource_000-010-207.csv.gz','wb')\nsome_gaia_file_saved.write(some_zipped_gaia_file.read())\nsome_zipped_gaia_file.close()\nsome_gaia_file_saved.close()\nsome_gaia_zipfile = gzip.GzipFile('GaiaSource_000-010-207.csv.gz', 'r') \n\nfrom astropy.io import ascii\ndata = ascii.read(some_gaia_zipfile)\n\ndata\n\ndata['ra'].mean()\n\ndata['dec'].mean()\n\nfrom numpy import random\nrandom_subsample = data[random.choice(len(data), 10000)]\n\nplt.scatter(random_subsample['ra'],random_subsample['dec'], s=0.1, color='black')\n\nplt.xlabel('R.A.', fontsize=16)\nplt.ylabel('Dec', fontsize=16)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
southpaw94/MachineLearning
TextExamples/3547_04_Code.ipynb
gpl-2.0
[ "Sebastian Raschka, 2015\nPython Machine Learning Essentials\nBuilding Good Training Sets – Data Pre-Processing\nNote that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).", "%load_ext watermark\n%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,scikit-learn\n\n# to install watermark just uncomment the following line:\n#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py", "<br>\n<br>\nSections\n\nDealing with missing data\nEliminating samples or features with missing values\nImputing missing values\nUnderstanding the scikit-learn estimator API\n\n\nHandling categorical data\nMapping ordinal features\nEncoding class labels\nPerforming one-hot encoding on nominal features\n\n\nPartitioning a dataset in training and test sets\nBringing features onto the same scale\nSelecting meaningful features\nSparse solutions with L1-regularization\nSequential feature selection algorithms\nAssessing Feature Importances with Random Forests\n\n\n\n<br>\n<br>\nDealing with missing data\n[back to top]", "import pandas as pd\nfrom io import StringIO\n\ncsv_data = '''A,B,C,D\n1.0,2.0,3.0,4.0\n5.0,6.0,,8.0\n10.0,11.0,12.0,'''\n\n# If you are using Python 2.7, you need\n# to convert the string to unicode:\n# csv_data = unicode(csv_data)\n\ndf = pd.read_csv(StringIO(csv_data))\ndf\n\ndf.isnull().sum()", "<br>\n<br>\nEliminating samples or features with missing values\n[back to top]", "df.dropna()\n\ndf.dropna(axis=1)\n\n# only drop rows where all columns are NaN\ndf.dropna(how='all') \n\n# drop rows that do not have at least 4 non-NaN values\ndf.dropna(thresh=4)\n\n# only drop rows where NaN appear in specific columns (here: 'C')\ndf.dropna(subset=['C'])", "<br>\n<br>\nImputing missing values", "from sklearn.preprocessing import Imputer\n\nimr = Imputer(missing_values='NaN', strategy='mean', axis=0)\nimr = imr.fit(df)\nimputed_data 
= imr.transform(df.values)\nimputed_data\n\ndf.values", "<br>\n<br>\nHandling categorical data\n[back to top]", "import pandas as pd\ndf = pd.DataFrame([\n ['green', 'M', 10.1, 'class1'], \n ['red', 'L', 13.5, 'class2'], \n ['blue', 'XL', 15.3, 'class1']])\n\ndf.columns = ['color', 'size', 'price', 'classlabel']\ndf", "<br>\n<br>\nMapping ordinal features\n[back to top]", "size_mapping = {\n 'XL': 3,\n 'L': 2,\n 'M': 1}\n\ndf['size'] = df['size'].map(size_mapping)\ndf\n\ninv_size_mapping = {v: k for k, v in size_mapping.items()}\ndf['size'].map(inv_size_mapping)", "<br>\n<br>\nEncoding class labels\n[back to top]", "import numpy as np\n\nclass_mapping = {label:idx for idx,label in enumerate(np.unique(df['classlabel']))}\nclass_mapping\n\ndf['classlabel'] = df['classlabel'].map(class_mapping)\ndf\n\ninv_class_mapping = {v: k for k, v in class_mapping.items()}\ndf['classlabel'] = df['classlabel'].map(inv_class_mapping)\ndf\n\nfrom sklearn.preprocessing import LabelEncoder\n\nclass_le = LabelEncoder()\ny = class_le.fit_transform(df['classlabel'].values)\ny\n\nclass_le.inverse_transform(y)", "<br>\n<br>\nPerforming one-hot encoding on nominal features\n[back to top]", "X = df[['color', 'size', 'price']].values\n\ncolor_le = LabelEncoder()\nX[:, 0] = color_le.fit_transform(X[:, 0])\nX\n\nfrom sklearn.preprocessing import OneHotEncoder\n\nohe = OneHotEncoder(categorical_features=[0])\nohe.fit_transform(X).toarray()\n\npd.get_dummies(df[['price', 'color', 'size']])", "<br>\n<br>\nPartitioning a dataset in training and test sets\n[back to top]", "df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)\n\ndf_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash', \n'Alcalinity of ash', 'Magnesium', 'Total phenols', \n'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', \n'Color intensity', 'Hue', 'OD280/OD315 of diluted wines', 'Proline']\n\nprint('Class labels', np.unique(df_wine['Class 
label']))\ndf_wine.head()\n\nfrom sklearn.cross_validation import train_test_split\n\nX, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values\n\nX_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=0.3, random_state=0)", "<br>\n<br>\nBringing features onto the same scale\n[back to top]", "from sklearn.preprocessing import MinMaxScaler\nmms = MinMaxScaler()\nX_train_norm = mms.fit_transform(X_train)\nX_test_norm = mms.transform(X_test)\n\nfrom sklearn.preprocessing import StandardScaler\n\nstdsc = StandardScaler()\nX_train_std = stdsc.fit_transform(X_train)\nX_test_std = stdsc.transform(X_test)", "A visual example:", "ex = pd.DataFrame([0, 1, 2 ,3, 4, 5])\n\n# standardize\nex[1] = (ex[0] - ex[0].mean()) / ex[0].std()\n# normalize\nex[2] = (ex[0] - ex[0].min()) / (ex[0].max() - ex[0].min())\nex.columns = ['input', 'standardized', 'normalized']\nex", "<br>\n<br>\nSelecting meaningful features\n[back to top]\n<br>\n<br>\nSparse solutions with L1-regularization\n[back to top]", "from sklearn.linear_model import LogisticRegression\n\nlr = LogisticRegression(penalty='l1', C=0.1)\nlr.fit(X_train_std, y_train)\nprint('Training accuracy:', lr.score(X_train_std, y_train))\nprint('Test accuracy:', lr.score(X_test_std, y_test))\n\nlr.intercept_\n\nlr.coef_\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nfig = plt.figure()\nax = plt.subplot(111)\n \ncolors = ['blue', 'green', 'red', 'cyan', \n 'magenta', 'yellow', 'black', \n 'pink', 'lightgreen', 'lightblue', \n 'gray', 'indigo', 'orange']\n\nweights, params = [], []\nfor c in np.arange(-4, 6):\n lr = LogisticRegression(penalty='l1', C=10**c, random_state=0)\n lr.fit(X_train_std, y_train)\n weights.append(lr.coef_[1])\n params.append(10**c)\n\nweights = np.array(weights)\n\nfor column, color in zip(range(weights.shape[1]), colors):\n plt.plot(params, weights[:, column],\n label=df_wine.columns[column+1],\n color=color)\nplt.axhline(0, color='black', linestyle='--', 
linewidth=3)\nplt.xlim([10**(-5), 10**5])\nplt.ylabel('weight coefficient')\nplt.xlabel('C')\nplt.xscale('log')\nplt.legend(loc='upper left')\nax.legend(loc='upper center', \n bbox_to_anchor=(1.38, 1.03),\n ncol=1, fancybox=True)\n# plt.savefig('./figures/l1_path.png', dpi=300)\nplt.show()", "<br>\n<br>\nSequential feature selection algorithms\n[back to top]", "from sklearn.base import clone\nfrom itertools import combinations\nimport numpy as np\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import accuracy_score\n\nclass SBS():\n def __init__(self, estimator, k_features, scoring=accuracy_score,\n test_size=0.25, random_state=1):\n self.scoring = scoring\n self.estimator = clone(estimator)\n self.k_features = k_features\n self.test_size = test_size\n self.random_state = random_state\n\n def fit(self, X, y):\n \n X_train, X_test, y_train, y_test = \\\n train_test_split(X, y, test_size=self.test_size, \n random_state=self.random_state)\n\n dim = X_train.shape[1]\n self.indices_ = tuple(range(dim))\n self.subsets_ = [self.indices_]\n score = self._calc_score(X_train, y_train, \n X_test, y_test, self.indices_)\n self.scores_ = [score]\n\n while dim > self.k_features:\n scores = []\n subsets = []\n\n for p in combinations(self.indices_, r=dim-1):\n score = self._calc_score(X_train, y_train, \n X_test, y_test, p)\n scores.append(score)\n subsets.append(p)\n\n best = np.argmax(scores)\n self.indices_ = subsets[best]\n self.subsets_.append(self.indices_)\n dim -= 1\n\n self.scores_.append(scores[best])\n self.k_score_ = self.scores_[-1]\n\n return self\n\n def transform(self, X):\n return X[:, self.indices_]\n\n def _calc_score(self, X_train, y_train, X_test, y_test, indices):\n self.estimator.fit(X_train[:, indices], y_train)\n y_pred = self.estimator.predict(X_test[:, indices])\n score = self.scoring(y_test, y_pred)\n return score\n\n%matplotlib inline\nfrom sklearn.neighbors import KNeighborsClassifier\nimport matplotlib.pyplot as 
plt\n\nknn = KNeighborsClassifier(n_neighbors=2)\n\n# selecting features\nsbs = SBS(knn, k_features=1)\nsbs.fit(X_train_std, y_train)\n\n# plotting performance of feature subsets\nk_feat = [len(k) for k in sbs.subsets_]\n\nplt.plot(k_feat, sbs.scores_, marker='o')\nplt.ylim([0.7, 1.1])\nplt.ylabel('Accuracy')\nplt.xlabel('Number of features')\nplt.grid()\nplt.tight_layout()\n# plt.savefig('./sbs.png', dpi=300)\nplt.show()\n\nk5 = list(sbs.subsets_[8])\nprint(df_wine.columns[1:][k5])\n\nknn.fit(X_train_std, y_train)\nprint('Training accuracy:', knn.score(X_train_std, y_train))\nprint('Test accuracy:', knn.score(X_test_std, y_test))\n\nknn.fit(X_train_std[:, k5], y_train)\nprint('Training accuracy:', knn.score(X_train_std[:, k5], y_train))\nprint('Test accuracy:', knn.score(X_test_std[:, k5], y_test))", "<br>\n<br>\nAssessing Feature Importances with Random Forests\n[back to top]", "from sklearn.ensemble import RandomForestClassifier\n\nfeat_labels = df_wine.columns[1:]\n\nforest = RandomForestClassifier(n_estimators=10000,\n random_state=0,\n n_jobs=-1)\n\nforest.fit(X_train, y_train)\nimportances = forest.feature_importances_\n\nindices = np.argsort(importances)[::-1]\n\nfor f in range(X_train.shape[1]):\n print(\"%2d) %-*s %f\" % (f + 1, 30, \n feat_labels[f], \n importances[indices[f]]))\n\nplt.title('Feature Importances')\nplt.bar(range(X_train.shape[1]), \n importances[indices],\n color='lightblue', \n align='center')\n\nplt.xticks(range(X_train.shape[1]), \n feat_labels, rotation=90)\nplt.xlim([-1, X_train.shape[1]])\nplt.tight_layout()\n# plt.savefig('./figures/random_forest.png', dpi=300)\nplt.show()\n\nX_selected = forest.transform(X_train, threshold=0.15)\nX_selected.shape" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
stijnvanhoey/hydropy
Analyze_USGS_data.ipynb
bsd-2-clause
[ "Quick Start with HydroPy!", "# Import the libraries that we'll be using\nimport numpy as np\nimport pandas as pd\nimport hydropy as hp\n\n# Set the notebook to plot graphs in the output cells.\n%matplotlib inline", "Load USGS data into a dataframe", "# Use HydroCloud.org to find a stream gauge to investigate.\n# Click on the red points to find the site number.\nfrom IPython.display import HTML\nHTML('<iframe src=http://hydrocloud.org/ width=700 height=400></iframe>')\n\n# Create a Pandas dataframe using the USGS daily discharge for Herring Run.\nherring = hp.get_usgs('01585200', 'dv', '2011-01-01', '2016-01-01')\n\n# List the first few values from the top of the dataframe.\nherring.head()\n\n# Calculate some basic statistics for the dataframe.\nherring.describe()\n\n# For more advanced analysis, use the HydroAnalysis class.\nmy_analysis = hp.HydroAnalysis(herring)\n\n# Plot discharge on a logarithmic scale for the Y axis.\nmy_analysis.plot(figsize=(16,6), logy=True)\n\n## Finding Help\n\n# Use help() to learn more about a particular function.\nhelp(hp.get_usgs)", "Learn More!\nTo learn more about hydropy, read the documentation, visit us on github, or try out more notebooks!" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
rcrehuet/Python_for_Scientists_2017
notebooks/2_0_Loops.ipynb
gpl-3.0
[ "Introductory exercises: Loops\nCelsius to Kelvin\nPrint the conversion from Celsius degrees to Kelvin, from 0ºC to 40ºC, with a step of 5. That is, 0, 5, 10, 15...", "for t in range(41):\n if t % 5 == 0:\n print(t+273.15)\n\nfor t in range(0,41,5):\n print(t+273.15)", "Multiples\nPrint all the multiples of 3 from 0 to 25 that are not multiples of 5 or 7.", "for n in range(26):\n #Finish", "Now, instead of printing, generate a list of all the multiples of 3 from 0 to 25 that are not multiples of 5 or 7.\nMessing with loops\nWhat do you expect this loop to do? Check it.", "for i in range(10):\n print('before:', i)\n if i==3 or i==7: i=i+2 #Trying to skip values 4 and 8...\n print('after: ',i)\n print('----------')", "From the previous example you should deduce that it is better not to modify the loop variable. So now translate the previous incorrect loop into a while loop that really skips i==4 and i==8.\nQueuing system\nYou have a list that should act as a kind of queueing system:\nqueue=['Mariona','Ramon', 'Joan', 'Quique', 'Laia']\n\nYou want to do something (say print it) with each element of the list, and then remove it from the list. (pop can be a useful method). Check that at the end, the list is empty.", "queue=['Mariona','Ramon', 'Joan', 'Quique', 'Laia']\n\nwhile queue:\n print(\"popping name : \",queue.pop(0), \"remaining\", queue)\n\n\nqueue.pop(0), queue", "Factorial\nFind the sum of the digits in 100! (answer is 648)", "import math\n\nmath.factorial(100)\n\nresult = 1\nfor i in range(100):\n result = result*(i+1)\nresult\n\nresult = 1\nfor i in range(1,101):\n result = result*i\n\nresult = str(result)\nsuma = 0\nfor caracter in result:\n suma = suma + int(caracter)\nsuma", "Dictionaries\nChecking for keys\nSome software uses keywords to define the type of calculations to be performed. 
Imagine we have stored Gaussian keywords in a dictionary as in:\nkeywords={'basis':'6-31+G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}\nCheck that if there is a diffuse function in the basis set, SCF has 'Tight' as one of its keywords.", "# Check different possibilities\nkeywords={'basis':'6-31+G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}\n#keywords={'basis':'6-31G', 'SCF':['XQC', 'Tight'], 'Opt':['TS', 'NoEigenTest']}\n#keywords={'basis':'6-31+G', 'SCF':['XQC',], 'Opt':['TS', 'NoEigenTest']}\n#keywords={'basis':'6-31+G', 'Opt':['TS', 'NoEigenTest']}\n\n\nif #Finish...\n print('When using diffuse functions, \"Tight\" should be used in the SCF!')\n", "What happens if the 'SCF' keyword is not present, as here?\nkeywords={'basis':'6-31+G', 'Opt':['TS', 'NoEigenTest']}", "#Finish", "Common keys\nGiven two dictionaries, find the keys that are present in both dictionaries. (Hint: you can use sets)", "def common_keys(d1, d2):\n \"\"\"\n Return the keys shared by dictionaries d1 and d2\n returns a set\n \"\"\"\n #Finish\n\n\n#Test it\nd1 = dict(red=1, green=2, blue=3)\nd2 = dict(purple=3, green=5, blue=6, yellow=1)\nprint(common_keys(d1, d2))", "Genetic code (difficult!)\nGiven the genetic code dictionary, calculate how many codons code each amino acid. Which amino acid is coded by the most codons? The underscore means the STOP codon. 
(Answer: R and L, 6 times each)", "gencode = {\n 'ATA':'I', 'ATC':'I', 'ATT':'I', 'ATG':'M',\n 'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACT':'T',\n 'AAC':'N', 'AAT':'N', 'AAA':'K', 'AAG':'K',\n 'AGC':'S', 'AGT':'S', 'AGA':'R', 'AGG':'R',\n 'CTA':'L', 'CTC':'L', 'CTG':'L', 'CTT':'L',\n 'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCT':'P',\n 'CAC':'H', 'CAT':'H', 'CAA':'Q', 'CAG':'Q',\n 'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGT':'R',\n 'GTA':'V', 'GTC':'V', 'GTG':'V', 'GTT':'V',\n 'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCT':'A',\n 'GAC':'D', 'GAT':'D', 'GAA':'E', 'GAG':'E',\n 'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGT':'G',\n 'TCA':'S', 'TCC':'S', 'TCG':'S', 'TCT':'S',\n 'TTC':'F', 'TTT':'F', 'TTA':'L', 'TTG':'L',\n 'TAC':'Y', 'TAT':'Y', 'TAA':'_', 'TAG':'_',\n 'TGC':'C', 'TGT':'C', 'TGA':'_', 'TGG':'W'}", "Remember that you can iterate over a dictionary's keys with: for k in gencode: and over its values with: for v in gencode.values(): Or access the values like this:\nfor k in d:\n v = d[k]\n\nThis exercise has many possible solutions.\nHow many Leu (L) codons differ by only one point mutation from an Ile (I) codon?", "#Finish" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
JKeun/lecture-statistics
ch02-variables.ipynb
mit
[ "Ch 2. ์ž๋ฃŒ์˜ ์ •๋ฆฌ\n\n๋ณ€์ˆ˜์™€ ์ž๋ฃŒ\n๋„์ˆ˜๋ถ„ํฌํ‘œ\n\n\n1. ๋ณ€์ˆ˜ ( variable, feature )\n์–‘์ ๋ณ€์ˆ˜ ( quantitative variable, real value )\n์ˆ˜์น˜๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์žˆ๋Š” ๋ณ€์ˆ˜\n - ์ด์‚ฐ๋ณ€์ˆ˜ ( discrete )\n - ์ •์ˆซ๊ฐ’์„ ์ทจํ•œ ์ˆ˜ ์žˆ๋Š” ๋ณ€์ˆ˜\n - ex. ์ž๋…€์ˆ˜, ์ž๋™์ฐจํŒ๋งค๋Œ€์ˆ˜ ๋“ฑ\n - ์—ฐ์†๋ณ€์ˆ˜ ( continuous )\n - ๋ชจ๋“  ์‹ค์ˆ˜๊ฐ’์„ ์ทจํ•  ์ˆ˜ ์žˆ๋Š” ๋ณ€์ˆ˜\n - ex. ๊ธธ์ด, ๋ฌด๊ฒŒ ๋“ฑ\n์งˆ์ ๋ณ€์ˆ˜ ( qualitative variable, categorical value )\n์ˆ˜์น˜๋กœ ๋‚˜ํƒ€๋‚ผ ์ˆ˜ ์—†๋Š” ๋ณ€์ˆ˜\n - ๋ช…๋ชฉ๋ณ€์ˆ˜ ( nominal )\n - ex. ์„ฑ๋ณ„, ์ข…๊ต, ์ถœ์ƒ์ง€, ์šด๋™์„ ์ˆ˜ ๋“ฑ๋ฒˆํ˜ธ ๋“ฑ\n - ์„œ์—ด๋ณ€์ˆ˜ ( ordinal )\n - ์ธก์ •๋Œ€์ƒ ๊ฐ„์˜ ์ˆœ์„œ๋ฅผ ๋งค๊ธฐ๊ธฐ ์œ„ํ•ด ์‚ฌ์šฉ๋˜๋Š” ๋ณ€์ˆ˜\n - ex. ์„ฑ์  A, B, C ๋“ฑ๊ธ‰\n2. ๋„์ˆ˜๋ถ„ํฌํ‘œ\n์ˆ˜์ง‘๋œ ์ž๋ฃŒ๋ฅผ ์ ์ ˆํ•œ ๋“ฑ๊ธ‰(๋˜๋Š” ๋ฒ”์ฃผ)์œผ๋กœ ๋ถ„๋ฅ˜ํ•˜๊ณ  ๊ฐ ๋“ฑ๊ธ‰์— ํ•ด๋‹น๋˜๋Š” ๋นˆ๋„์ˆ˜ ๋“ฑ์„ ์ •๋ฆฌํ•œ ํ‘œ\n<img src=\"http://trsketch.dothome.co.kr/_contents/2009curi/images/img900027.png\", width=600>\n๋„์ˆ˜๋ถ„ํฌํ‘œ ์ž‘์„ฑ์š”๋ น\n\n๋ชจ๋“  ์ž๋ฃŒ๋Š” ๋น ์ง์—†์ด ๋„์ˆ˜๋ถ„ํฌํ‘œ์— ํฌํ•จ๋˜์–ด์•ผ ํ•œ๋‹ค. ๊ทน๋‹จ์  ์ˆ˜์น˜(์ด์ƒ์น˜)๊ฐ€ ์žˆ๋‹ค๊ณ  ํ•ด๋„ ์ œ์™ธํ•˜์ง€ ๋ง๊ณ  ๊ตฌ๊ฐ„์˜ ๊ฐ„๊ฒฉ์„ ~์ด์ƒ, ๋˜๋Š” ~์ดํ•˜๋กœ ํ‘œ์‹œํ•˜์—ฌ ๋‹ค ํฌํ•จ์‹œ์ผœ์•ผ ํ•œ๋‹ค.\n์ด์ƒ์น˜๋ฅผ ์ œ์™ธํ•œ ๋‚˜๋จธ์ง€ ๋“ฑ๊ธ‰์˜ ๊ตฌ๊ฐ„์€ ๋ชจ๋‘ ๊ฐ™์•„์•ผ ํ•œ๋‹ค.\n๋“ฑ๊ธ‰์€ ์„œ๋กœ ์ค‘๋ณต๋˜์ง€ ์•Š์•„์•ผ ํ•œ๋‹ค.\n๋“ฑ๊ธ‰์€ ์—ฐ์†์ ์œผ๋กœ ํ‘œ์‹œ๋˜์–ด์•ผ ํ•œ๋‹ค. ํ•ด๋‹น ์‚ฌ๋ก€์ˆ˜๊ฐ€ ์—†๋‹ค๊ณ  ํ•ด์„œ ๊ทธ ๊ตฌ๊ฐ„์„ ์ œ์™ธํ•ด์„  ์•ˆ ๋œ๋‹ค.\n๋“ฑ๊ธ‰์˜ ๊ตฌ๊ฐ„์˜ ํฌ๊ธฐ๋Š” ํ™€์ˆ˜๋กœ ์ •ํ•˜๋Š” ๊ฒƒ์ด ์ข‹๋‹ค. ๊ทธ๋ž˜์•ผ ๊ทธ ๊ตฌ๊ฐ„์˜ ์ค‘๊ฐ„์ ์„ ์‰ฝ๊ฒŒ ์ •ํ•  ์ˆ˜ ์žˆ๋‹ค.\n๋“ฑ๊ธ‰ ๊ตฌ๊ฐ„์˜ ์ฒซ๋ฒˆ์งธ ์ˆซ์ž๋Š” ํ•œ๋ˆˆ์— ์ž˜ ๋„๋Š” ์ˆซ์ž๋กœ ์‹œ์ž‘ํ•˜๋Š” ๊ฒƒ์ด ์ข‹๋‹ค. ์˜ˆ๋ฅผ ๋“ค๋ฉด 93์ด์ƒ ~ 98๋ฏธ๋งŒ ๋ณด๋‹จ 90์ด์ƒ ~ 95๋ฏธ๋งŒ ์ด ์ข‹๋‹ค.\n\n๋“ฑ๊ธ‰์˜ ๊ตฌ๊ฐ„\n- ๋“ฑ๊ธ‰์˜ ๊ตฌ๊ฐ„ = ( ์ž๋ฃŒ์˜ ์ตœ๋Œ€๊ฐ’ - ์ž๋ฃŒ์˜ ์ตœ์†Œ๊ฐ’ ) / ๋“ฑ๊ธ‰์˜ ์ˆ˜\nQ. 
Practice problem", "import pandas as pd\nimport numpy as np\n\nnp.random.seed(0)\ndata = np.random.randint(50, 100, size=(8, 5))\ndata[0][0] = 12\n\ndata\n\nnp.sort(data.flatten())", "Setting the intervals\n- Number of intervals : 5\n- Interval size : 10", "interval = 5\ninterval_len = ( data.max() - 50 ) / interval\ninterval_len", "Counting frequencies per interval\n- below 50 : 1\n- 50 to below 60 : 10\n- 60 to below 70 : 9\n- 70 to below 80 : 10\n- 80 to below 90 : 6\n- 90 to below 100 : 4", "data1 = [[45, 1, 1],\n [55, 10, 11],\n [65, 9, 20],\n [75, 10, 30],\n [85, 6, 36],\n [95, 4, 40]]\ndf = pd.DataFrame(data1,\n index=[u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'],\n columns=[u\"중간값\" ,u\"빈도수\", u\"누적빈도\"])\ndf\n\nimport matplotlib.pyplot as plt\n\nX = df.중간값\ny = df.빈도수\n\nplt.bar(X, y, width=5, align='center')\nplt.title(\"histogram\")\nplt.xlabel(\"score\")\nplt.ylabel(\"frequency\")\nplt.xticks(X, [u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'])\nplt.show()\n\nX = df.중간값\ny = df.누적빈도\n\nplt.plot(X, y, \"--o\")\nplt.xlim(45, 95)\nplt.title(\"cumulative line plot\")\nplt.xlabel(\"score\")\nplt.ylabel(\"cumulative frequency\")\nplt.xticks(X, [u'~50', u'50~60', u'60~70', u'70~80', u'80~90', u'90~100'])\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
esa-as/2016-ml-contest
esaTeam/esa_Submission01b.ipynb
apache-2.0
[ "Facies classification using machine learning techniques\nThe ideas of \n<a href=\"https://home.deib.polimi.it/bestagini/\">Paolo Bestagini's</a> \"Try 2\", <a href=\"https://github.com/ar4\">Alan Richardson's</a> \"Try 2\",\n<a href=\"https://github.com/dalide\">Dalide's</a> \"Try 6\", augmented by Dimitrios Oikonomou and Eirik Larsen (ESA AS) by \n\nadding the gradient of the gradient of features as augmented features.\nadding an ML estimator for PE using both training and blind well data.\nremoving NM_M from the augmented features. \n\nIn the following, we provide a possible solution to the facies classification problem described at https://github.com/seg/2016-ml-contest.\nThe proposed algorithm is based on the use of random forests, xgboost or gradient boost combined in a one-vs-one multiclass strategy. In particular, we would like to study the effect of:\n- Robust feature normalization.\n- Feature imputation for missing feature values.\n- Well-based cross-validation routines.\n- Feature augmentation strategies.\n- Testing multiple classifiers. \nScript initialization\nLet's import the required packages and define some parameters (e.g., colors, labels, etc.).", "# Import\nfrom __future__ import division\nget_ipython().magic(u'matplotlib inline')\nimport matplotlib as mpl\n\nimport matplotlib.pyplot as plt\nmpl.rcParams['figure.figsize'] = (20.0, 10.0)\ninline_rc = dict(mpl.rcParams)\nfrom classification_utilities import make_facies_log_plot\n\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\n\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import LeavePGroupsOut\nfrom sklearn.metrics import f1_score\nfrom sklearn.model_selection import GridSearchCV\nfrom sklearn.multiclass import OneVsOneClassifier\nfrom sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, GradientBoostingClassifier \nimport xgboost as xgb\nfrom xgboost.sklearn import XGBClassifier\n\nfrom scipy.signal import medfilt\n\nimport sys, scipy, 
sklearn\nprint('Python: ' + sys.version.split('\\n')[0])\nprint(' ' + sys.version.split('\\n')[0])\nprint('Pandas: ' + pd.__version__)\nprint('Numpy: ' + np.__version__)\nprint('Scipy: ' + scipy.__version__)\nprint('Sklearn: ' + sklearn.__version__)\nprint('Xgboost: ' + xgb.__version__)", "Parameters", "feature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']\nfacies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\n#Select classifier type\n\n\n#clfType='GB' #Gradient Boosting Classifier\nclfType='XBA' #XGB Classifier\n\n#Seed\nseed = 24\nnp.random.seed(seed)", "Load data\nLet's load the data", "# Load data from file\ndata = pd.read_csv('../facies_vectors.csv')\n\n# Load Test data from file\ntest_data = pd.read_csv('../validation_data_nofacies.csv') \ntest_data.insert(0,'Facies',np.ones(test_data.shape[0])*(-1))\n\n#Create Dataset for PE prediction from both datasets\nall_data=pd.concat([data,test_data])", "Let's store features, labels and other data into numpy arrays.", "# Store features and labels\nX = data[feature_names].values # features\n\ny = data['Facies'].values # labels\n\n# Store well labels and depths\nwell = data['Well Name'].values\ndepth = data['Depth'].values\n", "Data inspection\nLet us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. 
Specifically, it is possible to observe that:\n - Some features seem to be affected by a few outlier measurements.\n - Only a few wells contain samples from all classes.\n - PE measurements are available only for some wells.", "# Define function for plotting feature statistics\ndef plot_feature_stats(X, y, feature_names, facies_colors, facies_names):\n \n # Remove NaN\n nan_idx = np.any(np.isnan(X), axis=1)\n X = X[np.logical_not(nan_idx), :]\n y = y[np.logical_not(nan_idx)]\n \n # Merge features and labels into a single DataFrame\n features = pd.DataFrame(X, columns=feature_names)\n labels = pd.DataFrame(y, columns=['Facies'])\n for f_idx, facies in enumerate(facies_names):\n labels[labels[:] == f_idx] = facies\n data = pd.concat((labels, features), axis=1)\n\n # Plot features statistics\n facies_color_map = {}\n for ind, label in enumerate(facies_names):\n facies_color_map[label] = facies_colors[ind]\n\n sns.pairplot(data, hue='Facies', palette=facies_color_map, hue_order=list(reversed(facies_names)))", "Feature distribution\nplot_feature_stats(X, y, feature_names, facies_colors, facies_names)\nmpl.rcParams.update(inline_rc)", "# Facies per well\nfor w_idx, w in enumerate(np.unique(well)):\n ax = plt.subplot(3, 4, w_idx+1)\n hist = np.histogram(y[well == w], bins=np.arange(len(facies_names)+1)+.5)\n plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center')\n ax.set_xticks(np.arange(len(hist[0])))\n ax.set_xticklabels(facies_names)\n ax.set_title(w)\n\n \n# Features per well\nfor w_idx, w in enumerate(np.unique(well)):\n ax = plt.subplot(3, 4, w_idx+1)\n hist = np.logical_not(np.any(np.isnan(X[well == w, :]), axis=0))\n plt.bar(np.arange(len(hist)), hist, color=facies_colors, align='center')\n ax.set_xticks(np.arange(len(hist)))\n ax.set_xticklabels(feature_names)\n ax.set_yticks([0, 1])\n ax.set_yticklabels(['miss', 'hit'])\n ax.set_title(w)", "Feature imputation\nLet us fill missing PE values. 
Currently no feature engineering is used, but this should be explored in the future.", "reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)\n\nDataImpAll = all_data[feature_names].copy()\nDataImp = DataImpAll.dropna(axis = 0, inplace=False)\nXimp=DataImp.loc[:, DataImp.columns != 'PE']\nYimp=DataImp.loc[:, 'PE']\nreg.fit(Ximp, Yimp)\nX[np.array(data.PE.isnull()),feature_names.index('PE')] = reg.predict(data.loc[data.PE.isnull(),:][['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']])\n", "Augment features", "# ## Feature augmentation\n# Our guess is that facies do not abrutly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somehow correlated. To possibly exploit this fact, let us perform feature augmentation by:\n# - Select features to augment.\n# - Aggregating aug_features at neighboring depths.\n# - Computing aug_features spatial gradient.\n# - Computing aug_features spatial gradient of gradient.\n\n\n# Feature windows concatenation function\ndef augment_features_window(X, N_neig, features=-1):\n \n # Parameters\n N_row = X.shape[0]\n if features==-1:\n N_feat = X.shape[1]\n features=np.arange(0,X.shape[1])\n else:\n N_feat = len(features)\n\n # Zero padding\n X = np.vstack((np.zeros((N_neig, X.shape[1])), X, (np.zeros((N_neig, X.shape[1])))))\n\n # Loop over windows\n X_aug = np.zeros((N_row, N_feat*(2*N_neig)+X.shape[1]))\n for r in np.arange(N_row)+N_neig:\n this_row = []\n for c in np.arange(-N_neig,N_neig+1):\n if (c==0):\n this_row = np.hstack((this_row, X[r+c,:]))\n else:\n this_row = np.hstack((this_row, X[r+c,features]))\n X_aug[r-N_neig] = this_row\n\n return X_aug\n\n\n# Feature gradient computation function\ndef augment_features_gradient(X, depth, features=-1):\n \n if features==-1:\n features=np.arange(0,X.shape[1])\n # Compute features gradient\n d_diff = np.diff(depth).reshape((-1, 1))\n d_diff[d_diff==0] = 0.001\n X_diff = np.diff(X[:,features], 
axis=0)\n X_grad = X_diff / d_diff\n \n # Compensate for last missing value\n X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))\n \n return X_grad\n\n\n\n# Feature augmentation function\ndef augment_features(X, well, depth, N_neig=1, features=-1):\n \n if (features==-1):\n N_Feat=X.shape[1]\n else:\n N_Feat=len(features)\n # Augment features\n X_aug = np.zeros((X.shape[0], X.shape[1] + N_Feat*(N_neig*2+2)))\n for w in np.unique(well):\n w_idx = np.where(well == w)[0]\n X_aug_win = augment_features_window(X[w_idx, :], N_neig,features)\n X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx],features)\n X_aug_grad_grad = augment_features_gradient(X_aug_grad, depth[w_idx])\n X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad,X_aug_grad_grad), axis=1)\n \n # Find padded rows\n padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])\n \n return X_aug, padded_rows\n\n\n# Train and test a classifier\ndef train_and_test(X_tr, y_tr, X_v, well_v, clf):\n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n # Train classifier\n clf.fit(X_tr, y_tr)\n \n # Test classifier\n y_v_hat = clf.predict(X_v)\n \n # Clean isolated facies for each well\n for w in np.unique(well_v):\n y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=3)\n \n return y_v_hat\n\n# Define window length\nN_neig=1\n\n# Define which features to augment by introducing window and gradients.\naugm_Features=['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'RELPOS']\n\n# Get the columns of features to be augmented \nfeature_indices=[feature_names.index(log) for log in augm_Features]\n\n# Augment features\nX_aug, padded_rows = augment_features(X, well, depth, N_neig=N_neig, features=feature_indices)\n\n# Remove padded rows \ndata_no_pad = np.setdiff1d(np.arange(0,X_aug.shape[0]), padded_rows)\n\nX=X[data_no_pad 
,:]\ndepth=depth[data_no_pad]\nX_aug=X_aug[data_no_pad ,:]\ny=y[data_no_pad]\ndata=data.iloc[data_no_pad ,:]\nwell=well[data_no_pad]\n", "Generate training, validation and test data splits\nThe choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:\n- Features from each well belong to either the training or the validation set.\n- Training and validation sets contain at least one sample for each class.\nInitialize model selection methods", "lpgo = LeavePGroupsOut(2)\n\n# Generate splits\nsplit_list = []\nfor train, val in lpgo.split(X, y, groups=data['Well Name']):\n    hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)\n    hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)\n    if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):\n        split_list.append({'train':train, 'val':val})\n    \n# Print splits\nfor s, split in enumerate(split_list):\n    print('Split %d' % s)\n    print('    training:   %s' % (data.iloc[split['train']]['Well Name'].unique()))\n    print('    validation: %s' % (data.iloc[split['val']]['Well Name'].unique()))", "Classification parameters optimization\nLet us perform the following steps for each set of parameters:\n\nSelect a data split.\nNormalize features using a robust scaler.\nTrain the classifier on training data.\nTest the trained classifier on validation data.\nRepeat for all splits and average the F1 scores. \nAt the end of the loop, we select the classifier that maximizes the average F1 score on the validation set. 
Hopefully, this classifier should be able to generalize well on new data.", "\nif clfType=='XB':\n md_grid = [2,3]\n# mcw_grid = [1]\n gamma_grid = [0.2, 0.3, 0.4] \n ss_grid = [0.7, 0.9, 0.5] \n csb_grid = [0.6,0.8,0.9]\n alpha_grid =[0.2, 0.4, 0.3]\n lr_grid = [0.04, 0.06, 0.05] \n ne_grid = [100,200,300]\n param_grid = []\n for N in md_grid:\n# for M in mcw_grid:\n for S in gamma_grid:\n for L in ss_grid:\n for K in csb_grid:\n for P in alpha_grid:\n for R in lr_grid:\n for E in ne_grid:\n param_grid.append({'maxdepth':N, \n# 'minchildweight':M, \n 'gamma':S, \n 'subsample':L,\n 'colsamplebytree':K,\n 'alpha':P,\n 'learningrate':R,\n 'n_estimators':E})\n\nif clfType=='XBA':\n \n learning_rate_grid=[0.12] #[0.06, 0.10, 0.12]\n max_depth_grid=[5] #[3, 5]\n min_child_weight_grid=[6] #[6, 8, 10]\n colsample_bytree_grid = [0.9] #[0.7, 0.9]\n n_estimators_grid=[120] #[80, 120, 150] #[150]\n \n param_grid = []\n for max_depth in max_depth_grid:\n for min_child_weight in min_child_weight_grid:\n for colsample_bytree in colsample_bytree_grid:\n for learning_rate in learning_rate_grid: \n for n_estimators in n_estimators_grid: \n param_grid.append({'maxdepth':max_depth, \n 'minchildweight':min_child_weight, \n 'colsamplebytree':colsample_bytree,\n 'learningrate':learning_rate,\n 'n_estimators':n_estimators})\n\nif clfType=='RF':\n N_grid = [50, 100, 150]\n M_grid = [5, 10, 15]\n S_grid = [10, 25, 50, 75]\n L_grid = [2, 3, 4, 5, 10, 25]\n param_grid = []\n for N in N_grid:\n for M in M_grid:\n for S in S_grid:\n for L in L_grid:\n param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})\n \n \n \nif clfType=='GB':\n N_grid = [100] #[80, 100, 120] \n MD_grid = [3] #[3, 5] \n M_grid = [10]\n LR_grid = [0.14] #[0.1, 0.08, 0.14] \n L_grid = [7] #[3, 5, 7]\n S_grid = [30] #[20, 25, 30] \n param_grid = []\n for N in N_grid:\n for M in MD_grid:\n for M1 in M_grid:\n for S in LR_grid: \n for L in L_grid:\n for S1 in S_grid:\n param_grid.append({'N':N, 'MD':M, 
'MF':M1,'LR':S,'L':L,'S1':S1})\n\n\n\ndef getClf(clfType, param):\n if clfType=='RF':\n clf = OneVsOneClassifier(RandomForestClassifier(n_estimators=param['N'], criterion='entropy',\n max_features=param['M'], min_samples_split=param['S'], min_samples_leaf=param['L'],\n class_weight='balanced', random_state=seed), n_jobs=-1)\n if clfType=='XB':\n clf = OneVsOneClassifier(XGBClassifier(\n learning_rate = param['learningrate'],\n n_estimators=param['n_estimators'],\n max_depth=param['maxdepth'],\n# min_child_weight=param['minchildweight'],\n gamma = param['gamma'],\n subsample=param['subsample'],\n colsample_bytree=param['colsamplebytree'],\n reg_alpha = param['alpha'],\n nthread =4,\n seed = seed,\n ) , n_jobs=4)\n\n \n \n if clfType=='XBA':\n clf = XGBClassifier(\n learning_rate = param['learningrate'],\n n_estimators=param['n_estimators'],\n max_depth=param['maxdepth'],\n min_child_weight=param['minchildweight'],\n colsample_bytree=param['colsamplebytree'],\n nthread =4,\n seed = 17\n ) \n if clfType=='GB':\n clf=OneVsOneClassifier(GradientBoostingClassifier(\n loss='exponential',\n n_estimators=param['N'], \n learning_rate=param['LR'], \n max_depth=param['MD'],\n max_features= param['MF'],\n min_samples_leaf=param['L'],\n min_samples_split=param['S1'],\n random_state=seed, \n max_leaf_nodes=None,)\n , n_jobs=-1)\n return clf\n\n# For each set of parameters\nscore_param = []\nprint('features: %d' % X_aug.shape[1])\nexportScores=[]\nfor param in param_grid:\n print('features: %d' % X_aug.shape[1])\n # For each data split\n score_split = []\n split = split_list[5]\n \n split_train_no_pad = split['train']\n\n # Select training and validation data from current split\n X_tr = X_aug[split_train_no_pad, :]\n X_v = X_aug[split['val'], :]\n y_tr = y[split_train_no_pad]\n y_v = y[split['val']]\n\n # Select well labels for validation data\n well_v = well[split['val']]\n\n # Train and test\n y_v_hat = train_and_test(X_tr, y_tr, X_v, well_v, getClf(clfType,param))\n\n # Score\n 
score = f1_score(y_v, y_v_hat, average='micro')\n score_split.append(score)\n #print('Split: {0}, Score = {1:0.3f}'.format(split_list.index(split),score))\n# print('Split: , Score = {0:0.3f}'.format(score))\n # Average score for this param\n score_param.append(np.mean(score_split))\n print('Average F1 score = %.3f %s' % (score_param[-1], param))\n exportScores.append('Average F1 score = %.3f %s' % (score_param[-1], param))\n\n# Best set of parameters\nbest_idx = np.argmax(score_param)\nparam_best = param_grid[best_idx]\nscore_best = score_param[best_idx]\nprint('\\nBest F1 score = %.3f %s' % (score_best, param_best))\n\n# Store F1 scores for multiple param grids\nif len(exportScores)>1:\n exportScoresFile=open('results_{0}_{1}_sub01b.txt'.format(clfType,N_neig),'wb')\n exportScoresFile.write('features: %d' % X_aug.shape[1])\n for item in exportScores:\n exportScoresFile.write(\"%s\\n\" % item)\n exportScoresFile.write('\\nBest F1 score = %.3f %s' % (score_best, param_best))\n exportScoresFile.close()\n\n# ## Predict labels on test data\n# Let us now apply the selected classification technique to test data.\n\n# Training data\nX_tr = X_aug\ny_tr = y\n\n# Prepare test data\nwell_ts = test_data['Well Name'].values\ndepth_ts = test_data['Depth'].values\nX_ts = test_data[feature_names].values\n\n# Augment Test data features\nX_ts, padded_rows = augment_features(X_ts, well_ts,depth_ts,N_neig=N_neig, features=feature_indices)\n\n# Predict test labels\ny_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts, getClf(clfType,param_best))\n\n# Save predicted labels\ntest_data['Facies'] = y_ts_hat\ntest_data.to_csv('esa_predicted_facies_{0}_{1}_sub01b.csv'.format(clfType,N_neig))\n\n\n# Plot predicted labels\nmake_facies_log_plot(\n test_data[test_data['Well Name'] == 'STUART'],\n facies_colors=facies_colors)\n\nmake_facies_log_plot(\n test_data[test_data['Well Name'] == 'CRAWFORD'],\n facies_colors=facies_colors)\nmpl.rcParams.update(inline_rc)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
junhwanjang/DataSchool
Lecture/09. ๊ธฐ์ดˆ ํ™•๋ฅ ๋ก  3 - ํ™•๋ฅ ๋ชจํ˜•/11) ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ.ipynb
mit
[ "๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ\n๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ(Dirichlet distribution)๋Š” ๋ฒ ํƒ€ ๋ถ„ํฌ์˜ ํ™•์žฅํŒ์ด๋ผ๊ณ  ํ•  ์ˆ˜ ์žˆ๋‹ค. ๋ฒ ํƒ€ ๋ถ„ํฌ๋Š” 0๊ณผ 1์‚ฌ์ด์˜ ๊ฐ’์„ ๊ฐ€์ง€๋Š” ๋‹จ์ผ(univariate) ํ™•๋ฅ  ๋ณ€์ˆ˜์˜ ๋ฒ ์ด์ง€์•ˆ ๋ชจํ˜•์— ์‚ฌ์šฉ๋˜๊ณ  ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ๋Š” 0๊ณผ 1์‚ฌ์ด์˜ ์‚ฌ์ด์˜ ๊ฐ’์„ ๊ฐ€์ง€๋Š” ๋‹ค๋ณ€์ˆ˜(multivariate) ํ™•๋ฅ  ๋ณ€์ˆ˜์˜ ๋ฒ ์ด์ง€์•ˆ ๋ชจํ˜•์— ์‚ฌ์šฉ๋œ๋‹ค. ๋˜ํ•œ ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌํ‹‘ ๋‹ค๋ณ€์ˆ˜ ํ™•๋ฅ  ๋ณ€์ˆ˜๋“ค์˜ ํ•ฉ์ด 1์ด๋˜์–ด์•ผ ํ•œ๋‹ค๋Š” ์ œํ•œ ์กฐ๊ฑด์„ ๊ฐ€์ง„๋‹ค.\n๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ์˜ ํ™•๋ฅ  ๋ฐ€๋„ ํ•จ์ˆ˜๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.\n$$ f(x_1, x_2, \\cdots, x_K) = \\frac{1}{\\mathrm{B}(\\boldsymbol\\alpha)} \\prod_{i=1}^K x_i^{\\alpha_i - 1} $$\n์—ฌ๊ธฐ์—์„œ \n$$ \\mathrm{B}(\\boldsymbol\\alpha) = \\frac{\\prod_{i=1}^K \\Gamma(\\alpha_i)} {\\Gamma\\bigl(\\sum_{i=1}^K \\alpha_i\\bigr)} $$\n์ด๊ณ  ๋‹ค์Œ๊ณผ ๊ฐ™์€ ์ œํ•œ ์กฐ๊ฑด์ด ์žˆ๋‹ค.\n$$ \\sum_{i=1}^{K} x_i = 1 $$\n๋ฒ ํƒ€ ๋ถ„ํ† ์™€ ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ์˜ ๊ด€๊ณ„\n๋ฒ ํƒ€ ๋ถ„ํฌ๋Š” $K=2$ ์ธ ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ๋ผ๊ณ  ๋ณผ ์ˆ˜ ์žˆ๋‹ค.\n์ฆ‰ $x_1 = x$, $x_2 = 1 - x$, $\\alpha_1 = a$, $\\alpha_2 = b$ ๋กœ ํ•˜๋ฉด \n$$ \n\\begin{eqnarray}\n\\text{Beta}(x;a,b) \n&=& \\frac{\\Gamma(a+b)}{\\Gamma(a)\\Gamma(b)}\\, x^{a-1}(1-x)^{b-1} \\\n&=& \\frac{\\Gamma(\\alpha_1+\\alpha_2)}{\\Gamma(\\alpha_1)\\Gamma(\\alpha_2)}\\, x_1^{\\alpha_1 - 1} x_2^{\\alpha_2 - 1} \\\n&=& \\frac{1}{\\mathrm{B}(\\alpha_1, \\alpha_2)} \\prod_{i=1}^2 x_i^{\\alpha_i - 1}\n\\end{eqnarray}\n$$\n๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ์˜ ๋ชจ๋ฉ˜ํŠธ ํŠน์„ฑ\n๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ์˜ ๊ธฐ๋Œ“๊ฐ’, ๋ชจ๋“œ, ๋ถ„์‚ฐ์€ ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.\n\n\n๊ธฐ๋Œ“๊ฐ’\n$$E[x_k] = \\dfrac{\\alpha_k}{\\alpha}$$\n์—ฌ๊ธฐ์—์„œ\n$$\\alpha=\\sum\\alpha_k$$\n\n\n๋ชจ๋“œ\n$$ \\dfrac{\\alpha_k - 1}{\\alpha - K}$$\n\n\n๋ถ„์‚ฐ\n$$\\text{Var}[x_k] =\\dfrac{\\alpha_k(\\alpha - \\alpha_k)}{\\alpha^2(\\alpha + 1)}$$\n\n\n๊ธฐ๋Œ“๊ฐ’ ๊ณต์‹์„ ๋ณด๋ฉด ๋ชจ์ˆ˜์ธ $\\boldsymbol\\alpha = (\\alpha_1, \\alpha_2, \\ldots, \\alpha_K)$๋Š” $(x_1, x_2, \\ldots, x_K$ ์ค‘ ์–ด๋А ์ˆ˜๊ฐ€ ๋” ํฌ๊ฒŒ ๋‚˜์˜ฌ 
๊ฐ€๋Šฅ์„ฑ์ด ๋†’์€์ง€๋ฅผ ๊ฒฐ์ •ํ•˜๋Š” ํ˜•์ƒ ์ธ์ž(shape factor)์ž„์„ ์•Œ ์ˆ˜ ์žˆ๋‹ค. ๋ชจ๋“  $\\alpha_i$๊ฐ’์ด ๋™์ผํ•˜๋ฉด ๋ชจ๋“  $x_i$์˜ ๋ถ„ํฌ๊ฐ€ ๊ฐ™์•„์ง„๋‹ค. \n๋˜ํ•œ ๋ถ„์‚ฐ ๊ณต์‹์„ ๋ณด๋ฉด $\\boldsymbol\\alpha$์˜ ์ ˆ๋Œ€๊ฐ’์ด ํด์ˆ˜๋ก ๋ถ„์‚ฐ์ด ์ž‘์•„์ง„๋‹ค. ์ฆ‰, ์–ด๋–ค ํŠน์ •ํ•œ ๊ฐ’์ด ๋‚˜์˜ฌ ๊ฐ€๋Šฅ์„ฑ์ด ๋†’์•„์ง„๋‹ค.\n๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ์˜ ์‘์šฉ\n๋‹ค์Œ๊ณผ ๊ฐ™์€ ๋ฌธ์ œ๋ฅผ ๋ณด์ž ์ด ๋ฌธ์ œ๋Š” $K=3$์ด๊ณ  $ \\alpha_1 = \\alpha_2 = \\alpha_3$ ์ธ Dirichlet ๋ถ„ํฌ์˜ ํŠน์ˆ˜ํ•œ ๊ฒฝ์šฐ์ด๋‹ค.\n<img src=\"https://datascienceschool.net/upfiles/d0acaf490aaa41389b975e20c58ac1ee.png\" style=\"width:90%; margin: 0 auto 0 auto;\">\n3์ฐจ์› ๋””๋ฆฌํด๋ ˆ ๋ฌธ์ œ๋Š” ๋‹ค์Œ ๊ทธ๋ฆผ๊ณผ ๊ฐ™์ด 3์ฐจ์› ๊ณต๊ฐ„ ์ƒ์—์„œ (1,0,0), (0,1,0), (0,0,1) ์„ธ ์ ์„ ์—ฐ๊ฒฐํ•˜๋Š” ์ •์‚ผ๊ฐํ˜• ๋ฉด์œ„์˜ ์ ์„ ์ƒ์„ฑํ•˜๋Š” ๋ฌธ์ œ๋ผ๊ณ  ๋ณผ ์ˆ˜ ์žˆ๋‹ค.", "from mpl_toolkits.mplot3d import Axes3D\nfrom mpl_toolkits.mplot3d.art3d import Poly3DCollection\n\nfig = plt.figure()\nax = Axes3D(fig)\nx = [1,0,0]\ny = [0,1,0]\nz = [0,0,1]\nverts = [zip(x, y,z)]\nax.add_collection3d(Poly3DCollection(verts, edgecolor=\"k\", lw=5, alpha=0.4))\nax.text(1, 0, 0, \"(1,0,0)\", position=(0.7,0.1))\nax.text(0, 1, 0, \"(0,1,0)\", position=(0,1.04))\nax.text(0, 0, 1, \"(0,0,1)\", position=(-0.2,0))\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\nax.set_zlabel(\"z\")\nax.set_xticks([0, 1])\nax.set_yticks([0, 1])\nax.set_zticks([0, 1])\nax.view_init(20, -20)\nplt.show()", "๋‹ค์Œ ํ•จ์ˆ˜๋Š” ์ƒ์„ฑ๋œ ์ ๋“ค์„ 2์ฐจ์› ์‚ผ๊ฐํ˜• ์œ„์—์„œ ๋ณผ ์ˆ˜ ์žˆ๋„๋ก ๊ทธ๋ ค์ฃผ๋Š” ํ•จ์ˆ˜์ด๋‹ค.", "def plot_triangle(X, kind):\n n1 = np.array([1, 0, 0])\n n2 = np.array([0, 1, 0])\n n3 = np.array([0, 0, 1])\n n12 = (n1 + n2)/2\n m1 = np.array([1, -1, 0])\n m2 = n3 - n12\n m1 = m1/np.linalg.norm(m1)\n m2 = m2/np.linalg.norm(m2)\n\n X1 = (X-n12).dot(m1)\n X2 = (X-n12).dot(m2)\n \n g = sns.jointplot(X1, X2, kind=kind, xlim=(-0.8,0.8), ylim=(-0.45,0.9))\n g.ax_joint.axis(\"equal\")\n plt.show()\n", "๋งŒ์•ฝ ์ด ๋ฌธ์ œ๋ฅผ 
๋‹จ์ˆœํ•˜๊ฒŒ ์ƒ๊ฐํ•˜์—ฌ ์„œ๋กœ ๋…๋ฆฝ์ธ 0๊ณผ 1์‚ฌ์ด์˜ ์œ ๋‹ˆํผ ํ™•๋ฅ  ๋ณ€์ˆ˜๋ฅผ 3๊ฐœ ์ƒ์„ฑํ•˜๊ณ  ์ด๋“ค์˜ ํ•ฉ์ด 1์ด ๋˜๋„๋ก ํฌ๊ธฐ๋ฅผ ์ •๊ทœํ™”(normalize)ํ•˜๋ฉด ๋‹ค์Œ ๊ทธ๋ฆผ๊ณผ ๊ฐ™์ด ์‚ผ๊ฐํ˜•์˜ ์ค‘์•™ ๊ทผ์ฒ˜์— ๋งŽ์€ ํ™•๋ฅ  ๋ถ„ํฌ๊ฐ€ ์ง‘์ค‘๋œ๋‹ค. ์ฆ‰, ํ™•๋ฅ  ๋ณ€์ˆ˜๊ฐ€ ๊ณจ๊ณ ๋ฃจ ๋ถ„ํฌ๋˜์ง€ ์•Š๋Š”๋‹ค.", "X1 = np.random.rand(1000, 3)\nX1 = X1/X1.sum(axis=1)[:, np.newaxis]\nplot_triangle(X1, kind=\"scatter\")\n\nplot_triangle(X1, kind=\"hex\")", "๊ทธ๋Ÿฌ๋‚˜ $\\alpha=(1,1,1)$์ธ ๋””๋ฆฌํด๋ ˆ ๋ถ„ํฌ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ๊ณจ๊ณ ๋ฃจ ์ƒ˜ํ”Œ์„ ์ƒ์„ฑํ•œ๋‹ค.", "X2 = sp.stats.dirichlet((1,1,1)).rvs(1000)\nplot_triangle(X2, kind=\"scatter\")\n\nplot_triangle(X2, kind=\"hex\")", "$\\alpha$๊ฐ€ $(1,1,1)$์ด ์•„๋‹Œ ๊ฒฝ์šฐ์—๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™์ด ํŠน์ • ์œ„์น˜์— ๋ถ„ํฌ๊ฐ€ ์ง‘์ค‘๋˜๋„๋ก ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด ํŠน์„ฑ์„ ์ด์šฉํ•˜๋ฉด ๋‹คํ•ญ ๋ถ„ํฌ์˜ ๋ชจ์ˆ˜๋ฅผ ์ถ”์ •ํ•˜๋Š” ๋ฒ ์ด์ง€์•ˆ ์ถ”์ • ๋ฌธ์ œ์— ์‘์šฉํ•  ์ˆ˜ ์žˆ๋‹ค.", "def project(x):\n n1 = np.array([1, 0, 0])\n n2 = np.array([0, 1, 0])\n n3 = np.array([0, 0, 1])\n n12 = (n1 + n2)/2\n m1 = np.array([1, -1, 0])\n m2 = n3 - n12\n m1 = m1/np.linalg.norm(m1)\n m2 = m2/np.linalg.norm(m2)\n return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]\n\ndef project_reverse(x):\n n1 = np.array([1, 0, 0])\n n2 = np.array([0, 1, 0])\n n3 = np.array([0, 0, 1])\n n12 = (n1 + n2)/2\n m1 = np.array([1, -1, 0])\n m2 = n3 - n12\n m1 = m1/np.linalg.norm(m1)\n m2 = m2/np.linalg.norm(m2)\n return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12\n\neps = np.finfo(float).eps * 10\nX = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])\n\nimport matplotlib.tri as mtri\ntriang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])\nrefiner = mtri.UniformTriRefiner(triang)\ntriang2 = refiner.refine_triangulation(subdiv=6)\nXYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])\n\npdf = sp.stats.dirichlet((1,1,1)).pdf(XYZ.T)\nplt.tricontourf(triang2, pdf)\nplt.axis(\"equal\")\nplt.show()\n\npdf = 
sp.stats.dirichlet((3,4,2)).pdf(XYZ.T)\nplt.tricontourf(triang2, pdf)\nplt.axis(\"equal\")\nplt.show()\n\npdf = sp.stats.dirichlet((16,24,14)).pdf(XYZ.T)\nplt.tricontourf(triang2, pdf)\nplt.axis(\"equal\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Bowenislandsong/Distributivecom
Archive/The Ray API.ipynb
gpl-3.0
[ "import ray", "Starting Ray\nThere are two main ways in which Ray can be used. First, you can start all of the relevant Ray processes and shut them all down within the scope of a single script. Second, you can connect to and use an existing Ray cluster.\nStarting and stopping a cluster within a script\nOne use case is to start all of the relevant Ray processes when you call ray.init and shut them down when the script exits. These processes include local and global schedulers, an object manager, a redis server, and more.\nNote: this approach is limited to a single machine. \nThis can be done as follows.", "ray.init()", "If there are GPUs available on the machine, you should specify this with the num_gpus argument. Similarly, you can also specify the number of CPUs with num_cpus.", "#ray.init(num_cpus=20,num_gpus=2)", "By default, Ray will use psutil.cpu_count() to determine the number of CPUs, and by default the number of GPUs will be zero.\nInstead of thinking about the number of \"worker\" processes on each node, we prefer to think in terms of the quantities of CPU and GPU resources on each node and to provide the illusion of an infinite pool of workers. Tasks will be assigned to workers based on the availability of resources so as to avoid contention and not based on the number of available worker processes.\nConnecting to an existing cluster\nOnce a Ray cluster has been started, the only thing you need in order to connect to it is the address of the Redis server in the cluster. In this case, your script will not start up or shut down any processes. The cluster and all of its processes may be shared between multiple scripts and multiple users. To do this, you simply need to know the address of the cluster's Redis server. 
This can be done with a command like the following.", "# ray.init(redis_address=\"12.345.67.89:6379\")", "In this case, you cannot specify num_cpus or num_gpus in ray.init because that information is passed into the cluster when the cluster is started, not when your script is started.\nray.init (redis_address=None, node_ip_address=None, object_id_seed=None, num_workers=None, driver_mode=0, redirect_output=False, num_cpus=None, num_gpus=None, num_custom_resource=None, num_redis_shards=None, plasma_directory=None, huge_pages=False)\nParameters\n* redis_address (str) - The address of the Redis server to connect to. If this address is not provided, then this command will start Redis, a global scheduler, a local scheduler, a plasma store, a plasma manager, and some workers. It will also kill these processes when Python exits.\n\n\nobject_id_seed (int) - Used to seed the deterministic generation of object IDs. The same value can be used across multiple runs of the same job in order to generate the object IDs in a consistent manner. However, the same ID should not be used for different jobs.\n\n\nnum_workers (int) - The number of workers to start. This is only provided if redis_address is not provided. \n\n\ndriver_mode (bool) - The mode in which to start the driver. This should be one of ray.SCRIPT_MODE, ray.PYTHON_MODE, and ray.SILENT_MODE.\n\n\nredirect_output (bool) - True if stdout and stderr for all the processes should be redirected to files and false otherwise.\n\n\nnum_cpus (int) - Number of cpus the user wishes all local schedulers to be configured with.\n\n\nnum_gpus (int) - Number of gpus the user wishes all local schedulers to be configured with. \n\n\nnum_custom_resource (int) - The quantity of a user-defined custom resource that the local scheduler should be configured with. This flag is experimental and is subject to changes in the future.\n\n\nnum_redis_shards - The number of Redis shards to start in addition to the primary Redis shard. 
\n\n\nplasma_directory - A directory where the Plasma memory mapped files will be created. \n\n\nhuge_pages - Boolean flag indicating whether to start the Object Store with hugetlbfs support. Requires plasma_directory. \n\n\nReturns Address information about the started processes.\nRaises Exception - An exception is raised if an inappropriate combination of arguments is passed in. \nDefining Remote Functions\nRemote functions are used to create tasks. To define a remote function, the @ray.remote decorator is placed over the function definition.\nThe function can then be invoked with f.remote. Invoking the function creates a task which will be scheduled on and executed by some worker process in the Ray cluster. The call will return an object ID (essentially a future) representing the eventual return value of the task. Anyone with the object ID can retrieve its value, regardless of where the task was executed.\nWhen a task executes, its outputs will be serialized into a string of bytes and stored in the object store.\nNote that arguments to remote functions can be values or object IDs.", "@ray.remote\ndef f(x):\n return x+1\n\nx_id = f.remote(0)\nray.get(x_id) # 1\n\n#y_id = f.remote(x_id)\n#ray.get(y_id) # 2", "If you want a remote function to return multiple object IDs, you can do that by passing the num_return_vals argument into the remote decorator.", "@ray.remote(num_return_vals=2)\ndef f():\n return 1,2\n\nx_id, y_id = f.remote()\nray.get(x_id) #1\n#ray.get(y_id) #2", "ray.remote (*args , **kwargs)\nThis decorator is used to define remote functions and to define actors.\nParameters\n * num_return_vals (int) - The number of object IDs that a call to this function should return. \n\n\nnum_cpus (int) - The number of CPUs needed to execute this function.\n\n\nnum_gpus (int) - The number of GPUs needed to execute this function.\n\n\nnum_custom_resource (int) - The quantity of a user-defined custom resource that is needed to execute this function. 
This flag is experimental and is subject to changes in the future.\n\n\nmax_calls (int) - The maximum number of tasks of this kind that can be run on a worker before the worker needs to be restarted. \n\n\ncheckpoint_interval (int) - The number of tasks to run between checkpoints of the actor state. \n\n\nGetting Values from Object IDs\nObject IDs can be converted into objects by calling ray.get on the object ID. Note that ray.get accepts either a single object ID or a list of object IDs.", "@ray.remote\ndef f():\n return {'key1': ['value']}\n\n# Get one object ID.\nray.get(f.remote()) # {'key1': ['value']}\n\n# Get a list of object IDs.\nray.get([f.remote() for _ in range(2)]) # [{'key1': ['value']}, {'key1': ['value']}]", "Numpy arrays\nuse numpy arrays whenever possible (efficiency)\nAny numpy arrays that are part of the serialized object will not be copied out of the object store. They will remain in the object store and the resulting deserialized object will simply have a pointer to the relevant place in the object storeโ€™s memory.\nSince objects in the object store are immutable, this means that if you want to mutate a numpy array that was returned by a remote function, you will have to first copy it.\nray.get (object_ids, worker= < ray.worker.Worker object >)\nGet a remote object or a list of remote objects from the object store. \nThis method blocks until the object corresponding to the object ID is available in the local object store. If this object is not in the local object store, it will be shipped from an object store that has it (once the object has been created). If object_ids is a list, then the objects corresponding to each object in the list will be returned. \nParameters\nobject_ids - Object ID of the object to get or a list of object IDs to get.\nReturns A Python object or a list of Python objects. \nPutting Objects in the Object Store\nThe primary way that objects are placed in the object store is by being returned by a task. 
However, it is also possible to directly place objects in the object store using ray.put.", "x_id = ray.put(1)\nray.get(x_id) # 1", "The main reason to use ray.put is that you want to pass the same large object into a number of tasks. By first doing ray.put and then passing the resulting object ID into each of the tasks, the large object is copied into the object store only once, whereas when we directly pass the object in, it is copied multiple times, which is not efficient.", "import numpy as np\n\n@ray.remote\ndef f(x):\n pass\n\nx = np.zeros(10 ** 6)\n\n# Alternative 1: Here, x is copied into the object store 10 times.\n[f.remote(x) for _ in range(10)]\n\n# Alternative 2: Here, x is copied into the object store once.\nx_id = ray.put(x)\n[f.remote(x_id) for _ in range(10)]", "Note that ray.put is called under the hood in a couple situations.\n\nIt is called on the values returned by a task.\nIt is called on the arguments to a task, unless the arguments are Python primitives like integers or short strings, lists, tuples, or dictionaries.\n\nray.put(value, worker= < ray.worker.Worker object >)\nStore an object in the object store.\nParameters value โ€“ The Python object to be stored. \nReturns The object ID assigned to this value.\nWaiting for A Subset of Tasks to Finish\nIt is often desirable to adapt the computation being done based on when different tasks finish. For example, if a bunch of tasks each take a variable length of time, and their results can be processed in any order, then it makes sense to simply process the results in the order that they finish. In other settings, it makes sense to discard straggler tasks whose results may turn out to be negligible to the entire system (dynamic resource allocation).\nTo do this, we introduce the ray.wait primitive, which takes a list of object IDs and returns when a subset of them are available. 
By default it blocks until a single object is available, but the num_returns value can be specified to wait for a different number. If a timeout argument is passed in, it will block for at most that many milliseconds and may return a list with fewer than num_returns elements.\nThe ray.wait function returns two lists. The first list is a list of object IDs of available objects (of length at most num_returns), and the second list is a list of the remaining object IDs, so the combination of these two lists is equal to the list passed in to ray.wait (up to ordering).", "import time\nimport numpy as np\n\n@ray.remote\ndef f(n):\n time.sleep(n)\n return n\n\n# # Start 3 tasks with different durations.\n# results = [f.remote(i) for i in range(3)]\n# # Block until 2 of them have finished.\n# ready_ids, remaining_ids = ray.wait(results, num_returns=2)\n\n# Start 5 tasks with different durations.\nresults = [f.remote(i) for i in range(5)]\n# Block until 4 of them have finished or 2.5 seconds pass.\nready_ids, remaining_ids = ray.wait(results, num_returns=4, timeout=2500)\n# Task 4 will be finished after 4 seconds.\nready_ids_4, remaining_ids_4 = ray.wait(results, num_returns=4, timeout=4000)\n\nready_ids\n\nremaining_ids\n\nready_ids_4\n\nremaining_ids_4", "It is easy to use this construct to create an infinite loop in which multiple tasks are executing, and whenever one task finishes, a new one is launched.", "@ray.remote\ndef f():\n return 10\n\n# Start 5 tasks.\nremaining_ids = [f.remote() for i in range(5)]\n#print(ray.get(remaining_ids))\n'''\nThe following few lines is for testing the behavior of the wait method\nwithout specifying the number of returning object IDs. Comment it before\nrunning the for loop in the original documentation.\n'''\n# # Whenever one task finishes, start a new one. \n# ready_ids, remaining_ids, = ray.wait(remaining_ids, num_returns=3)\n# # Get the available object and do something with it. 
\n# print(ray.get(ready_ids))\n# # Print out the remaining ids\n# print(ray.get(remaining_ids))\n\n'''\nObservation: when the number of object IDs to be returned is not specified,\nthe wait method will automatically set it to 1. \n'''\nfor _ in range(10):\n # actually this only works when num_returns = 1, otherwise it will \n # result in a dead kernel since finally the number of remaining IDs \n # will be 0\n ready_ids, remaining_ids = ray.wait(remaining_ids)\n #print(ray.get(remaining_ids))\n # Get the available object and do something with it. \n print(ray.get(ready_ids))\n # Start a new task. The number of remaining IDs will be consistent.\n remaining_ids.append(f.remote())", "ray.wait (object_ids, num_returns=1, timeout=None, worker= < ray.worker.Worker object > )\nReturn a list of IDs that are ready and a list of IDs that are not. \nIf timeout is set, the function returns either when the requested number of IDs are ready or when the timeout is reached, whichever occurs first. If it is not set, the function simply waits until that number of objects is ready and returns that exact number of object IDs.\nThis method returns two lists. The first list consists of object IDs that correspond to objects that are stored in the object store. The second list corresponds to the rest of the object IDs (which may or may not be ready).\nParameters\n\nobject_ids (List [Object ID]) - List of object IDs for objects that may or may not be ready. Note that these IDs must be unique. \nnum_returns (int) - The number of object IDs that should be returned. \ntimeout (int) - The maximum amount of time in milliseconds to wait before returning. \n\nReturns\n A list of object IDs that are ready and a list of the remaining object IDs.\nViewing errors\nKeeping track of errors that occur in different processes throughout a cluster can be challenging. 
There are a couple of mechanisms to help with this.\n\n\nIf a task throws an exception, that exception will be printed in the background of the driver process. \n\n\nIf ray.get is called on an object ID whose parent task threw an exception before creating the object, the exception will be re-raised by ray.get.\n\n\nThe errors will also be accumulated in Redis and can be accessed with ray.error_info. Normally, you shouldn't need to do this, but it is possible.", "@ray.remote\ndef f():\n    raise Exception(\"This task failed!!\")\n    \nf.remote() # An error message will be printed in the background.\n\n# Wait for the error to propagate to Redis.\nimport time\ntime.sleep(1)\n\nray.error_info() # This returns a list containing the error message. 
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pjbull/data-science-is-software
notebooks/edit-run-repeat.ipynb
mit
[ "import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\n\nsns.set_style(\"darkgrid\")", "Edit-run-repeat: Stopping the cycle of pain\n\n1. No more docs-guessing", "df = pd.read_csv(\"../data/water-pumps.csv\", index=0)\ndf.head(1)\n\npd.read_csv?\n\ndf = pd.read_csv(\"../data/water-pumps.csv\",\n index_col=0,\n parse_dates=\"date_recorded\")\ndf.head(1)", "2. No more copy pasta\nDon't repeat yourself.", "plot_data = df['construction_year']\nplot_data = plot_data[plot_data != 0]\nsns.kdeplot(plot_data, bw=0.1)\nplt.show()\n\nplot_data = df['longitude']\nplot_data = plot_data[plot_data != 0]\nsns.kdeplot(plot_data, bw=0.1)\nplt.show()\n\nplot_data = df['amount_tsh']\nplot_data = plot_data[plot_data > 20000]\nsns.kdeplot(plot_data, bw=0.1)\nplt.show()\n\nplot_data = df['latitude']\nplot_data = plot_data[plot_data > 20000]\nsns.kdeplot(plot_data, bw=0.1)\nplt.show()\n\ndef kde_plot(dataframe, variable, upper=0.0, lower=0.0, bw=0.1):\n plot_data = dataframe[variable]\n plot_data = plot_data[(plot_data > lower) & (plot_data < upper)]\n sns.kdeplot(plot_data, bw=bw)\n plt.show()\n\nkde_plot(df, 'construction_year', upper=2016)\nkde_plot(df, 'longitude', upper=42)\n\nkde_plot(df, 'amount_tsh', lower=20000, upper=400000)", "3. No more guess-and-check\nUse pdb the Python debugger to debug inside a notebook. Key commands are:\n\n\np: Evaluate and print Python code\n\n\nw: Where in the stack trace am I?\n\nu: Go up a frame in the stack trace.\n\nd: Go down a frame in the stack trace.\n\n\nc: Continue execution\n\nq: Stop execution\n\nThere are two ways to activate the debugger:\n - %pdb: toggles wether or not the debugger will be called on an exception\n - %debug: enters the debugger at the line where this magic is", "kde_plot(df, 'date_recorded')\n\n# \"1\" turns pdb on, \"0\" turns pdb off\n%pdb 1\n\nkde_plot(df, 'date_recorded')\n\n# turn off debugger\n%pdb 0", "4. 
No more \"Restart & Run All\"\nassert is the poor man's unit test: stops execution if condition is False, continues silently if True", "def gimme_the_mean(series):\n return np.mean(series)\n\nassert gimme_the_mean([0.0]*10) == 0.0\n\ndata = np.random.normal(0.0, 1.0, 1000000)\nassert gimme_the_mean(data) == 0.0\n\nnp.testing.assert_almost_equal(gimme_the_mean(data),\n 0.0,\n decimal=1)", "" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]