| repo_name | path | license | cells | types |
|---|---|---|---|---|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/end-to-end-structured/solutions/4a_sample_babyweight.ipynb
|
apache-2.0
|
[
"LAB 4a: Creating a Sampled Dataset.\nLearning Objectives\n\nSetup up the environment\nSample the natality dataset to create train/eval/test sets\nPreprocess the data in Pandas dataframe\n\nIntroduction\nIn this notebook, we'll read data from BigQuery into our notebook to preprocess the data within a Pandas dataframe for a small, repeatable sample.\nWe will set up the environment, sample the natality dataset to create train/eval/test splits, and preprocess the data in a Pandas dataframe.\nSet up environment variables and load necessary libraries\nImport necessary libraries.",
"import os\n\nimport pandas as pd\nfrom google.cloud import bigquery",
"Set environment variables so that we can use them throughout the entire lab. We will be using our project ID for our bucket, so you only need to change your project and region.",
"PROJECT = !gcloud config list --format 'value(core.project)'\nPROJECT = PROJECT[0]\nBUCKET = PROJECT\nREGION = \"us-central1\"\n\nos.environ[\"BUCKET\"] = BUCKET\nos.environ[\"REGION\"] = REGION",
"Create ML datasets by sampling using BigQuery\nWe'll begin by sampling the BigQuery data to create smaller datasets. Let's create a BigQuery client that we'll use throughout the lab.",
"bq = bigquery.Client(project=PROJECT)",
"We need to figure out the right way to divide our hash values to get our desired splits. To do that we need to define some values to hash with in the modulo. Feel free to play around with these values to get the perfect combination.",
"modulo_divisor = 100\ntrain_percent = 80.0\neval_percent = 10.0\n\ntrain_buckets = int(modulo_divisor * train_percent / 100.0)\neval_buckets = int(modulo_divisor * eval_percent / 100.0)",
"We can make a series of queries to check if our bucketing values result in the correct sizes of each of our dataset splits and then adjust accordingly. Therefore, to make our code more compact and reusable, let's define a function to return the head of a dataframe produced from our queries up to a certain number of rows.",
"def display_dataframe_head_from_query(query, count=10):\n \"\"\"Displays count rows from dataframe head from query.\n\n Args:\n query: str, query to be run on BigQuery, results stored in dataframe.\n count: int, number of results from head of dataframe to display.\n Returns:\n Dataframe head with count number of results.\n \"\"\"\n df = bq.query(query + f\" LIMIT {count}\").to_dataframe()\n\n return df.head(count)",
"For our first query, we're going to use the original query above to get our label, features, and columns to combine into our hash which we will use to perform our repeatable splitting. There are only a limited number of years, months, days, and states in the dataset. Let's see what the hash values are. We will need to include all of these extra columns to hash on to get a fairly uniform spread of the data. Feel free to try less or more in the hash and see how it changes your results.",
"# Get label, features, and columns to hash and split into buckets\nhash_cols_fixed_query = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n year,\n month,\n CASE\n WHEN day IS NULL THEN\n CASE\n WHEN wday IS NULL THEN 0\n ELSE wday\n END\n ELSE day\n END AS date,\n IFNULL(state, \"Unknown\") AS state,\n IFNULL(mother_birth_state, \"Unknown\") AS mother_birth_state\nFROM\n publicdata.samples.natality\nWHERE\n year > 2000\n AND weight_pounds > 0\n AND mother_age > 0\n AND plurality > 0\n AND gestation_weeks > 0\n\"\"\"\n\ndisplay_dataframe_head_from_query(hash_cols_fixed_query)",
"Using COALESCE would provide the same result as the nested CASE WHEN. This is preferable when all we want is the first non-null instance. To be precise the CASE WHEN would become COALESCE(wday, day, 0) AS date. You can read more about it here.\nNext query will combine our hash columns and will leave us just with our label, features, and our hash values.",
"data_query = \"\"\"\nSELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks,\n FARM_FINGERPRINT(\n CONCAT(\n CAST(year AS STRING),\n CAST(month AS STRING),\n CAST(date AS STRING),\n CAST(state AS STRING),\n CAST(mother_birth_state AS STRING)\n )\n ) AS hash_values\nFROM\n ({CTE_hash_cols_fixed})\n\"\"\".format(\n CTE_hash_cols_fixed=hash_cols_fixed_query\n)\n\ndisplay_dataframe_head_from_query(data_query)",
"The next query is going to find the counts of each of the unique 657484 hash_values. This will be our first step at making actual hash buckets for our split via the GROUP BY.",
"# Get the counts of each of the unique hashs of our splitting column\nfirst_bucketing_query = \"\"\"\nSELECT\n hash_values,\n COUNT(*) AS num_records\nFROM\n ({CTE_data})\nGROUP BY\n hash_values\n\"\"\".format(\n CTE_data=data_query\n)\n\ndisplay_dataframe_head_from_query(first_bucketing_query)",
"The query below performs a second layer of bucketing where now for each of these bucket indices we count the number of records.",
"# Get the number of records in each of the hash buckets\nsecond_bucketing_query = \"\"\"\nSELECT\n ABS(MOD(hash_values, {modulo_divisor})) AS bucket_index,\n SUM(num_records) AS num_records\nFROM\n ({CTE_first_bucketing})\nGROUP BY\n ABS(MOD(hash_values, {modulo_divisor}))\n\"\"\".format(\n CTE_first_bucketing=first_bucketing_query, modulo_divisor=modulo_divisor\n)\n\ndisplay_dataframe_head_from_query(second_bucketing_query)",
"The number of records is hard for us to easily understand the split, so we will normalize the count into percentage of the data in each of the hash buckets in the next query.",
"# Calculate the overall percentages\npercentages_query = \"\"\"\nSELECT\n bucket_index,\n num_records,\n CAST(num_records AS FLOAT64) / (\n SELECT\n SUM(num_records)\n FROM\n ({CTE_second_bucketing})) AS percent_records\nFROM\n ({CTE_second_bucketing})\n\"\"\".format(\n CTE_second_bucketing=second_bucketing_query\n)\n\ndisplay_dataframe_head_from_query(percentages_query)",
"We'll now select the range of buckets to be used in training.",
"# Choose hash buckets for training and pull in their statistics\ntrain_query = \"\"\"\nSELECT\n *,\n \"train\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= 0\n AND bucket_index < {train_buckets}\n\"\"\".format(\n CTE_percentages=percentages_query, train_buckets=train_buckets\n)\n\ndisplay_dataframe_head_from_query(train_query)",
"We'll do the same, selecting the range of buckets to be used for evaluation.",
"# Choose hash buckets for validation and pull in their statistics\neval_query = \"\"\"\nSELECT\n *,\n \"eval\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= {train_buckets}\n AND bucket_index < {cum_eval_buckets}\n\"\"\".format(\n CTE_percentages=percentages_query,\n train_buckets=train_buckets,\n cum_eval_buckets=train_buckets + eval_buckets,\n)\n\ndisplay_dataframe_head_from_query(eval_query)",
"Lastly, we'll select the hash buckets to be used for the test split.",
"# Choose hash buckets for testing and pull in their statistics\ntest_query = \"\"\"\nSELECT\n *,\n \"test\" AS dataset_name\nFROM\n ({CTE_percentages})\nWHERE\n bucket_index >= {cum_eval_buckets}\n AND bucket_index < {modulo_divisor}\n\"\"\".format(\n CTE_percentages=percentages_query,\n cum_eval_buckets=train_buckets + eval_buckets,\n modulo_divisor=modulo_divisor,\n)\n\ndisplay_dataframe_head_from_query(test_query)",
"In the below query, we'll UNION ALL all of the datasets together so that all three sets of hash buckets will be within one table. We added dataset_id so that we can sort on it in the query after.",
"# Union the training, validation, and testing dataset statistics\nunion_query = \"\"\"\nSELECT\n 0 AS dataset_id,\n *\nFROM\n ({CTE_train})\nUNION ALL\nSELECT\n 1 AS dataset_id,\n *\nFROM\n ({CTE_eval})\nUNION ALL\nSELECT\n 2 AS dataset_id,\n *\nFROM\n ({CTE_test})\n\"\"\".format(\n CTE_train=train_query, CTE_eval=eval_query, CTE_test=test_query\n)\n\ndisplay_dataframe_head_from_query(union_query)",
"Lastly, we'll show the final split between train, eval, and test sets. We can see both the number of records and percent of the total data. It is really close to the 80/10/10 that we were hoping to get.",
"# Show final splitting and associated statistics\nsplit_query = \"\"\"\nSELECT\n dataset_id,\n dataset_name,\n SUM(num_records) AS num_records,\n SUM(percent_records) AS percent_records\nFROM\n ({CTE_union})\nGROUP BY\n dataset_id,\n dataset_name\nORDER BY\n dataset_id\n\"\"\".format(\n CTE_union=union_query\n)\n\ndisplay_dataframe_head_from_query(split_query)",
"Now that we know that our splitting values produce a good global splitting on our data, here's a way to get a well-distributed portion of the data in such a way that the train/eval/test sets do not overlap and takes a subsample of our global splits.",
"# every_n allows us to subsample from each of the hash values\n# This helps us get approximately the record counts we want\nevery_n = 1000\n\nsplitting_string = \"ABS(MOD(hash_values, {} * {}))\".format(\n every_n, modulo_divisor\n)\n\n\ndef create_data_split_sample_df(query_string, splitting_string, lo, up):\n \"\"\"Creates a dataframe with a sample of a data split.\n\n Args:\n query_string: str, query to run to generate splits.\n splitting_string: str, modulo string to split by.\n lo: float, lower bound for bucket filtering for split.\n up: float, upper bound for bucket filtering for split.\n Returns:\n Dataframe containing data split sample.\n \"\"\"\n query = \"SELECT * FROM ({0}) WHERE {1} >= {2} and {1} < {3}\".format(\n query_string, splitting_string, int(lo), int(up)\n )\n\n df = bq.query(query).to_dataframe()\n\n return df\n\n\ntrain_df = create_data_split_sample_df(\n data_query, splitting_string, lo=0, up=train_percent\n)\n\neval_df = create_data_split_sample_df(\n data_query,\n splitting_string,\n lo=train_percent,\n up=train_percent + eval_percent,\n)\n\ntest_df = create_data_split_sample_df(\n data_query,\n splitting_string,\n lo=train_percent + eval_percent,\n up=modulo_divisor,\n)\n\nprint(f\"There are {len(train_df)} examples in the train dataset.\")\nprint(f\"There are {len(eval_df)} examples in the validation dataset.\")\nprint(f\"There are {len(test_df)} examples in the test dataset.\")",
"Preprocess data using Pandas\nWe'll perform a few preprocessing steps to the data in our dataset. Let's add extra rows to simulate the lack of ultrasound. That is we'll duplicate some rows and make the is_male field be Unknown. Also, if there is more than child we'll change the plurality to Multiple(2+). While we're at it, we'll also change the plurality column to be a string. We'll perform these operations below. \nLet's start by examining the training dataset as is.",
"train_df.head()",
"Also, notice that there are some very important numeric fields that are missing in some rows (the count in Pandas doesn't count missing data)",
"train_df.describe()",
"It is always crucial to clean raw data before using in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model so users will have to provide the mother's age; otherwise, our service won't work. The features we use for our model were chosen because they are such good predictors and because they are easy enough to collect.",
"def preprocess(df):\n \"\"\"Preprocess pandas dataframe for augmented babyweight data.\n\n Args:\n df: Dataframe containing raw babyweight data.\n Returns:\n Pandas dataframe containing preprocessed raw babyweight data as well\n as simulated no ultrasound data masking some of the original data.\n \"\"\"\n # Clean up raw data\n # Filter out what we don\"t want to use for training\n df = df[df.weight_pounds > 0]\n df = df[df.mother_age > 0]\n df = df[df.gestation_weeks > 0]\n df = df[df.plurality > 0]\n\n # Modify plurality field to be a string\n twins_etc = dict(\n zip(\n [1, 2, 3, 4, 5],\n [\n \"Single(1)\",\n \"Twins(2)\",\n \"Triplets(3)\",\n \"Quadruplets(4)\",\n \"Quintuplets(5)\",\n ],\n )\n )\n df[\"plurality\"].replace(twins_etc, inplace=True)\n\n # Clone data and mask certain columns to simulate lack of ultrasound\n no_ultrasound = df.copy(deep=True)\n\n # Modify is_male\n no_ultrasound[\"is_male\"] = \"Unknown\"\n\n # Modify plurality\n condition = no_ultrasound[\"plurality\"] != \"Single(1)\"\n no_ultrasound.loc[condition, \"plurality\"] = \"Multiple(2+)\"\n\n # Concatenate both datasets together and shuffle\n return pd.concat([df, no_ultrasound]).sample(frac=1).reset_index(drop=True)",
"Let's process the train/eval/test set and see a small sample of the training data after our preprocessing:",
"train_df = preprocess(train_df)\neval_df = preprocess(eval_df)\ntest_df = preprocess(test_df)\n\ntrain_df.head()\n\ntrain_df.tail()",
"Let's look again at a summary of the dataset. Note that we only see numeric columns, so plurality does not show up.",
"train_df.describe()",
"Write to .csv files\nIn the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as csv files. Using csv files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling the data helps prevent the same data from being assigned to the slow workers.",
"# Define columns\ncolumns = [\n \"weight_pounds\",\n \"is_male\",\n \"mother_age\",\n \"plurality\",\n \"gestation_weeks\",\n]\n\n# Write out CSV files\ntrain_df.to_csv(\n path_or_buf=\"train.csv\", columns=columns, header=False, index=False\n)\neval_df.to_csv(\n path_or_buf=\"eval.csv\", columns=columns, header=False, index=False\n)\ntest_df.to_csv(\n path_or_buf=\"test.csv\", columns=columns, header=False, index=False\n)\n\n%%bash\nwc -l *.csv\n\n%%bash\nhead *.csv\n\n%%bash\ntail *.csv",
"Lab Summary:\nIn this lab, we set up the environment, sampled the natality dataset to create train/eval/test splits, and preprocessed the data in a Pandas dataframe.\nCopyright 2021 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
uber-common/deck.gl
|
bindings/pydeck/examples/01 - Introduction.ipynb
|
mit
|
[
"A brief intro to pydeck\nThe pydeck library is made for visualizing data points in 2D or 3D maps. Specifically, it handles\n\nrendering large (>1M points) data sets, like LIDAR point clouds or GPS pings\nlarge-scale updates to data points, like plotting points with motion\nmaking beautiful maps\n\nUnder the hood, it's powered by the deck.gl JavaScript framework.\npydeck is strongest when used in tandem with Pandas but doesn't have to be.\nPlease note that these demo notebooks are best when executed cell-by-cell, so ideally clone this repo or run it from mybinder.org.",
"import pydeck as pdk\nprint(\"Welcome to pydeck version\", pdk.__version__)",
"There are three steps for most pydeck visualizations\nWe'll walk through pydeck using a visualization of vehicle accident data in the United Kingdom.\n1. Choose your data\nHere, we'll use the history of accident data throughout the United Kingdom. This data set presents the location of every latitude and longitude of car accidents in the UK in 2014 (source).",
"import pandas as pd\n\nUK_ACCIDENTS_DATA = 'https://raw.githubusercontent.com/visgl/deck.gl-data/master/examples/3d-heatmap/heatmap-data.csv'\n\npd.read_csv(UK_ACCIDENTS_DATA).head()",
"2. Configure the visualization: Choose your layer(s) and viewport\npydeck's Layer object takes two positional and many keyword arguments:\n\nFirst, a string specifying the layer type, with our example below using 'HexagonLayer'\nNext, a data URL–below you'll see the UK_ACCIDENTS_DATA that we set above, but we could alternately pass a data frame or list of dictionaries\nFinally, keywords representing that layer's attributes–in our example, this would include elevation_scale, elevation_range, extruded, coverage. pickable=True also allows us to add a tooltip that appears on hover.\n\npython\nlayer = pdk.Layer(\n 'HexagonLayer',\n UK_ACCIDENTS_DATA,\n get_position=['lng', 'lat'],\n elevation_scale=50,\n pickable=True,\n auto_highlight=True,\n elevation_range=[0, 3000],\n extruded=True, \n coverage=1)\nThere is of course an entire catalog of layers which you're welcome to check out within the deck.gl documentation.\nConfigure your viewport\nWe also have to specifiy a ViewState object.\nThe ViewState object specifies a camera angle relative to the map data. If you don't want to manually specify it, the function pydeck.data_utils.compute_view can take your data and automatically zoom to it.\npydeck also provides some controls, most of which should be familiar from map applications throughout the web. By default, you can hold out and drag to rotate the map.",
"layer = pdk.Layer(\n 'HexagonLayer',\n UK_ACCIDENTS_DATA,\n get_position=['lng', 'lat'],\n auto_highlight=True,\n elevation_scale=50,\n pickable=True,\n elevation_range=[0, 3000],\n extruded=True, \n coverage=1)\n\n# Set the viewport location\nview_state = pdk.ViewState(\n longitude=-1.415,\n latitude=52.2323,\n zoom=6,\n min_zoom=5,\n max_zoom=15,\n pitch=40.5,\n bearing=-27.36)\n\n# Combined all of it and render a viewport\nr = pdk.Deck(layers=[layer], initial_view_state=view_state)\nr.show()",
"Render an update to the visualization\nExecute the cell below and look at the map in the cell above–you'll notice a seamless rendered update on the map",
"layer.elevation_range = [0, 10000]\n\nr.update()",
"Support updates over time\nWe can combine any Python function with our work here, of course. Execute the cell below to update our map above over time.",
"import time\nr.show()\n\nfor i in range(0, 10000, 1000):\n layer.elevation_range = [0, i]\n r.update()\n time.sleep(0.1)",
"pydeck without Jupyter\nIf you prefer not to use Jupyter or you'd like to export your map to a separate file, you can also write out maps to HTML locally using the .to_html function.",
"r.to_html()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/inm/cmip6/models/sandbox-1/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-1\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-1', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDo the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
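Note the template's convention: BOOLEAN and INTEGER properties take an unquoted Python value (`DOC.set_value(value)`), while STRING and ENUM properties take a quoted string (`DOC.set_value("value")`). The sketch below illustrates only that calling pattern — `DocStub` is a hypothetical stand-in, not the real `DOC` object provided by the pyesdoc notebook environment.

```python
# Hypothetical stand-in for the pyesdoc DOC object, illustrating only
# the set_id / set_value calling pattern used throughout this notebook.
class DocStub:
    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, prop_id):
        # Select the property to be filled in next.
        self._current_id = prop_id

    def set_value(self, value):
        # Record the value against the currently selected property.
        self.values.setdefault(self._current_id, []).append(value)

DOC = DocStub()

# BOOLEAN property: pass a Python bool, not the string "True".
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
DOC.set_value(True)

# STRING property: pass a quoted string.
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
DOC.set_value("Soil depth derived from a global soil map")
```

With the real `DOC` object the same two-step pattern applies; only the property ids and values change from cell to cell.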
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependencies of the snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
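Properties with Cardinality 0.N or 1.N (such as 10.2 above) accept multiple values: call `DOC.set_value` once per selected choice. The sketch below shows that pattern with a hypothetical stub in place of the real pyesdoc `DOC` object.

```python
# Hypothetical stand-in for the pyesdoc DOC object; only the
# multi-value calling pattern for 0.N / 1.N properties matters here.
class DocStub:
    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, prop_id):
        self._current_id = prop_id

    def set_value(self, value):
        # Each call appends one more value to the selected property.
        self.values.setdefault(self._current_id, []).append(value)

DOC = DocStub()

# Cardinality 0.N: one set_value call per choice taken from the
# "Valid Choices" list in the cell above.
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
DOC.set_value("vegetation type")
DOC.set_value("soil humidity")
```

Single-valued properties (Cardinality 0.1 or 1.1) should receive exactly one `set_value` call instead.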
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependencies of the snow albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"open shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintenance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintenance respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupled to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre endorheic basins (basins not flowing to the ocean) included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session03/Day4/ANTARES/miniAntaresSolutions_serial.ipynb
|
mit
|
[
"from __future__ import print_function, division, absolute_import",
"Challenges of Streaming Data:\nBuilding an ANTARES-like Pipeline for Data Management and Discovery\nVersion 0.1\n\nBy AA Miller 2017 Apr 10 \nEdited by Gautham Narayan, 2017 Apr 26\nAs we just saw in Gautham's lecture - LSST will produce an unprecedented volume of time-domain information for the astronomical sky. $>37$ trillion individual photometric measurements will be recorded. While the vast, vast majority of these measurements will simply confirm the status quo, some will represent rarities that have never been seen before (e.g., LSST may be the first telescope to discover the electromagnetic counterpart to a LIGO gravitational wave event), which the community will need to know about in ~real time. \nStoring, filtering, and serving this data is going to be a huge <del>nightmare</del> challenge. ANTARES, as detailed by Gautham, is one proposed solution to this challenge. In this exercise you will build a miniature version of ANTARES, which will require the application of several of the lessons from earlier this week. Many of the difficult, and essential, steps necessary for ANTARES will be skipped here as they are too time consuming or beyond the scope of what we have previously covered. We will point out these challenges as we come across them.",
"import numpy as np\nimport scipy.stats as spstat\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib notebook",
"Problem 1) Light Curve Data\nWe begin by ignoring the streaming aspect of the problem (we will come back to that later) and instead we will work with full light curves. The collection of light curves has been curated by Gautham and like LSST it features objects of different types covering a large range in brightness and observations in multiple filters taken at different cadences.\nAs the focus of this exercise is the construction of a data management pipeline, we have already created a Python class to read in the data and store light curves as objects. The data are stored in flat text files with the following format:\n|t |pb |flux |dflux |\n|:--------------:|:---:|:----------:|-----------:|\n| 56254.160000 | i | 6.530000 | 4.920000 |\n| 56254.172000 | z | 4.113000 | 4.018000 |\n| 56258.125000 | g | 5.077000 | 10.620000 |\n| 56258.141000 | r | 6.963000 | 5.060000 |\n| . | . | . | . |\n| . | . | . | . |\n| . | . | . | . |\nand named FAKE0XX.dat, where XX is a running index from 01 to 99. \nProblem 1a\nRead in the data for the first light curve file and plot the $g'$ light curve for that source.",
"# execute this cell\n\nlc = pd.read_csv('training_set_for_LSST_DSFP/FAKE001.dat', delim_whitespace=True, comment = '#')\n\ng_obs = lc['pb'] == 'g'\nplt.errorbar(np.array(lc['t'][g_obs]),\n             np.array(lc['flux'][g_obs]),\n             np.array(lc['dflux'][g_obs]), fmt = 'o', color = 'green')\nplt.xlabel('MJD')\nplt.ylabel('flux')",
"As we have many light curve files (in principle as many as 37 billion...), we will define a light curve class to ease our handling of the data.\nProblem 1b\nFix the ANTARESlc class definition below.\nHint - the only purpose of this problem is to make sure you actually read each line of code below; it is not intended to be difficult.",
"class ANTARESlc():\n    '''Light curve object for NOAO formatted data'''\n\n    def __init__(self, filename):\n        '''Read in light curve data'''\n        DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')\n        self.DFlc = DFlc\n        self.filename = filename\n\n    def plot_multicolor_lc(self):\n        '''Plot the 4 band light curve'''\n        fig, ax = plt.subplots()\n        g = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'g'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'g'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'g'],\n                        fmt = 'o', color = '#78A5A3', label = r\"$g'$\")\n        r = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'r'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'r'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'r'],\n                        fmt = 'o', color = '#CE5A57', label = r\"$r'$\")\n        i = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'i'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'i'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'i'],\n                        fmt = 'o', color = '#E1B16A', label = r\"$i'$\")\n        z = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'z'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'z'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'z'],\n                        fmt = 'o', color = '#444C5C', label = r\"$z'$\")\n        ax.legend(fancybox = True)\n        ax.set_xlabel(r\"$\\mathrm{MJD}$\")\n        ax.set_ylabel(r\"$\\mathrm{flux}$\")",
"Problem 1c\nConfirm the corrections made in 1b by plotting the multiband light curve for the source FAKE010.",
"lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')\n\nlc.plot_multicolor_lc()",
"One thing that we brushed over previously is that the brightness measurements have units of flux, rather than the traditional use of magnitudes. The reason for this is that LSST will measure flux variations via image differencing, which will for some sources in some filters result in a measurement of negative flux. (You may have already noticed this in 1a.) Statistically there is nothing wrong with such a measurement, but it is impossible to convert a negative flux into a magnitude. Thus we will use flux measurements throughout this exercise. [Aside - if you are bored during the next break, I'd be happy to rant about why we should have ditched the magnitude system years ago.]\nUsing flux measurements will allow us to make unbiased measurements of the statistical distributions of the variations of the sources we care about. \nProblem 1d\nWhat is FAKE010, the source plotted above?\nHint 1 - if you have no idea that's fine, move on.\nHint 2 - ask Szymon or Tomas... \nSolution 1d\nFAKE010 is a transient, as can be seen by the rapid rise followed by a gradual decline in the light curve. In this particular case, we can further guess that FAKE010 is a Type Ia supernova due to the secondary maxima in the $i'$ and $z'$ light curves. These secondary peaks are not present in any other known type of transient.\nProblem 1e\nTo get a better sense of the data, plot the multiband light curves for sources FAKE060 and FAKE073.",
"lc60 = ANTARESlc(\"training_set_for_LSST_DSFP/FAKE060.dat\")\nlc60.plot_multicolor_lc()\n\nlc73 = ANTARESlc(\"training_set_for_LSST_DSFP/FAKE073.dat\")\nlc73.plot_multicolor_lc()",
"Problem 2) Data Preparation\nWhile we could create a database table that includes every single photometric measurement made by LSST, this ~37 trillion row db would be enormous without providing a lot of added value beyond the raw flux measurements [while this table is necessary, alternative tables may prove more useful]. Furthermore, extracting individual light curves from such a database will be slow. Instead, we are going to develop summary statistics for every source which will make it easier to select individual sources and develop classifiers to identify objects of interest. \nBelow we will redefine the ANTARESlc class to include additional methods so we can (eventually) store summary statistics in a database table. In the interest of time, we limit the summary statistics to a relatively small list, all of which have been shown to be useful for classification (see Richards et al. 2011 for further details). The statistics that we include (for now) are: \n\nStd -- the standard deviation of the flux measurements \nAmp -- the amplitude of flux deviations\nMAD -- the median absolute deviation of the flux measurements\nbeyond1std -- the fraction of flux measurements beyond 1 standard deviation\nthe mean $g' - r'$, $r' - i'$, and $i' - z'$ color\n\nProblem 2a\nComplete the mean color module in the ANTARESlc class. Feel free to use the other modules as a template for your work. \nHint/food for thought - if a source is observed in different filters but the observations are not simultaneous (or quasi-simultaneous), what is the meaning of a \"mean color\"?\nSolution to food for thought - in this case we simply want you to take the mean flux in each filter and create a statistic that is $-2.5 \\log \\frac{\\langle f_X \\rangle}{\\langle f_{Y} \\rangle}$, where ${\\langle f_{Y} \\rangle}$ is the mean flux in band $Y$, while $\\langle f_X \\rangle$ is the mean flux in band $X$, which can be $g', r', i', z'$. Note that our use of image-difference flux measurements, which can be negative, means you'll need to add some form of special-case handling if $\\langle f_X \\rangle$ or $\\langle f_Y \\rangle$ is negative. In these cases set the color to -999.",
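The color statistic described above can be sketched as a standalone function (the name `mean_color` is ours, and a plain unweighted mean is used for illustration; the full solution class instead uses its SNR-weighted mean):

```python
import numpy as np

def mean_color(flux_x, flux_y):
    """Color statistic -2.5*log10(<f_X>/<f_Y>) from two flux arrays.

    Returns -999 when either mean flux is non-positive, since the
    logarithm of a negative (image-difference) flux is undefined.
    """
    mean_x, mean_y = np.mean(flux_x), np.mean(flux_y)
    if mean_x <= 0 or mean_y <= 0:
        return -999
    return -2.5 * np.log10(mean_x / mean_y)

# a factor of 10 in mean flux corresponds to 2.5 mag
print(mean_color([40.0], [4.0]))             # -> -2.5
# negative image-difference fluxes trigger the guard
print(mean_color([-3.0, 1.0], [15.0, 15.0])) # -> -999
```

The -999 sentinel simply flags "color undefined" rows so they can be filtered out (or treated as missing) in later queries.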
"class ANTARESlc():\n    '''Light curve object for NOAO formatted data'''\n\n    def __init__(self, filename):\n        '''Read in light curve data'''\n        DFlc = pd.read_csv(filename, delim_whitespace=True, comment = '#')\n        self.DFlc = DFlc\n        self.filename = filename\n\n    def plot_multicolor_lc(self):\n        '''Plot the 4 band light curve'''\n        fig, ax = plt.subplots()\n        g = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'g'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'g'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'g'],\n                        fmt = 'o', color = '#78A5A3', label = r\"$g'$\")\n        r = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'r'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'r'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'r'],\n                        fmt = 'o', color = '#CE5A57', label = r\"$r'$\")\n        i = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'i'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'i'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'i'],\n                        fmt = 'o', color = '#E1B16A', label = r\"$i'$\")\n        z = ax.errorbar(self.DFlc['t'][self.DFlc['pb'] == 'z'],\n                        self.DFlc['flux'][self.DFlc['pb'] == 'z'],\n                        self.DFlc['dflux'][self.DFlc['pb'] == 'z'],\n                        fmt = 'o', color = '#444C5C', label = r\"$z'$\")\n        ax.legend(fancybox = True)\n        ax.set_xlabel(r\"$\\mathrm{MJD}$\")\n        ax.set_ylabel(r\"$\\mathrm{flux}$\")\n\n    def filter_flux(self):\n        '''Store individual passband fluxes as object attributes'''\n\n        self.gFlux = self.DFlc['flux'][self.DFlc['pb'] == 'g']\n        self.gFluxUnc = self.DFlc['dflux'][self.DFlc['pb'] == 'g']\n\n        self.rFlux = self.DFlc['flux'][self.DFlc['pb'] == 'r']\n        self.rFluxUnc = self.DFlc['dflux'][self.DFlc['pb'] == 'r']\n\n        self.iFlux = self.DFlc['flux'][self.DFlc['pb'] == 'i']\n        self.iFluxUnc = self.DFlc['dflux'][self.DFlc['pb'] == 'i']\n\n        self.zFlux = self.DFlc['flux'][self.DFlc['pb'] == 'z']\n        self.zFluxUnc = self.DFlc['dflux'][self.DFlc['pb'] == 'z']\n\n    def weighted_mean_flux(self):\n        '''Measure (SNR weighted) mean flux in griz'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        weighted_mean = lambda flux, dflux: np.sum(flux*(flux/dflux)**2)/np.sum((flux/dflux)**2)\n\n        self.gMean = weighted_mean(self.gFlux, self.gFluxUnc)\n        self.rMean = weighted_mean(self.rFlux, self.rFluxUnc)\n        self.iMean = weighted_mean(self.iFlux, self.iFluxUnc)\n        self.zMean = weighted_mean(self.zFlux, self.zFluxUnc)\n\n    def normalized_flux_std(self):\n        '''Measure standard deviation of flux in griz'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        if not hasattr(self, 'gMean'):\n            self.weighted_mean_flux()\n\n        normalized_flux_std = lambda flux, wMeanFlux: np.std(flux/wMeanFlux, ddof = 1)\n\n        self.gStd = normalized_flux_std(self.gFlux, self.gMean)\n        self.rStd = normalized_flux_std(self.rFlux, self.rMean)\n        self.iStd = normalized_flux_std(self.iFlux, self.iMean)\n        self.zStd = normalized_flux_std(self.zFlux, self.zMean)\n\n    def normalized_amplitude(self):\n        '''Measure the normalized amplitude of variations in griz'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        if not hasattr(self, 'gMean'):\n            self.weighted_mean_flux()\n\n        normalized_amplitude = lambda flux, wMeanFlux: (np.max(flux) - np.min(flux))/wMeanFlux\n\n        self.gAmp = normalized_amplitude(self.gFlux, self.gMean)\n        self.rAmp = normalized_amplitude(self.rFlux, self.rMean)\n        self.iAmp = normalized_amplitude(self.iFlux, self.iMean)\n        self.zAmp = normalized_amplitude(self.zFlux, self.zMean)\n\n    def normalized_MAD(self):\n        '''Measure normalized Median Absolute Deviation (MAD) in griz'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        if not hasattr(self, 'gMean'):\n            self.weighted_mean_flux()\n\n        normalized_MAD = lambda flux, wMeanFlux: np.median(np.abs((flux - np.median(flux))/wMeanFlux))\n\n        self.gMAD = normalized_MAD(self.gFlux, self.gMean)\n        self.rMAD = normalized_MAD(self.rFlux, self.rMean)\n        self.iMAD = normalized_MAD(self.iFlux, self.iMean)\n        self.zMAD = normalized_MAD(self.zFlux, self.zMean)\n\n    def normalized_beyond_1std(self):\n        '''Measure fraction of flux measurements beyond 1 std'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        if not hasattr(self, 'gMean'):\n            self.weighted_mean_flux()\n\n        beyond_1std = lambda flux, wMeanFlux: sum(np.abs(flux - wMeanFlux) > np.std(flux, ddof = 1))/len(flux)\n\n        self.gBeyond = beyond_1std(self.gFlux, self.gMean)\n        self.rBeyond = beyond_1std(self.rFlux, self.rMean)\n        self.iBeyond = beyond_1std(self.iFlux, self.iMean)\n        self.zBeyond = beyond_1std(self.zFlux, self.zMean)\n\n    def skew(self):\n        '''Measure the skew of the flux measurements'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        skew = lambda flux: spstat.skew(flux)\n\n        self.gSkew = skew(self.gFlux)\n        self.rSkew = skew(self.rFlux)\n        self.iSkew = skew(self.iFlux)\n        self.zSkew = skew(self.zFlux)\n\n    def mean_colors(self):\n        '''Measure the mean g'-r', r'-i', and i'-z' colors'''\n\n        if not hasattr(self, 'gFlux'):\n            self.filter_flux()\n\n        if not hasattr(self, 'gMean'):\n            self.weighted_mean_flux()\n\n        self.gMinusR = -2.5*np.log10(self.gMean/self.rMean) if self.gMean > 0 and self.rMean > 0 else -999\n        self.rMinusI = -2.5*np.log10(self.rMean/self.iMean) if self.rMean > 0 and self.iMean > 0 else -999\n        self.iMinusZ = -2.5*np.log10(self.iMean/self.zMean) if self.iMean > 0 and self.zMean > 0 else -999",
"Problem 2b\nConfirm your solution to 2a by measuring the mean colors of source FAKE010. Does your measurement make sense given the plot you made in 1c?",
"lc = ANTARESlc('training_set_for_LSST_DSFP/FAKE010.dat')\n\nlc.filter_flux()\nlc.weighted_mean_flux()\nlc.mean_colors()\n\nprint(\"The g'-r', r'-i', and i'-z' colors are: {:.3f}, {:.3f}, and {:.3f}, respectively.\".format(lc.gMinusR, lc.rMinusI, lc.iMinusZ))",
"Problem 3) Store the sources in a database\nBuilding (and managing) a database from scratch is a challenging task. For (very) small projects one solution to this problem is to use SQLite, which is a self-contained, publicly available SQL engine. One of the primary advantages of SQLite is that no server setup is required, unlike other popular tools such as postgres and MySQL. In fact, SQLite is already integrated with Python so everything we want to do (create database, add tables, load data, write queries, etc.) can be done within Python.\nWithout diving too deep into the details, here are situations where SQLite has advantages and disadvantages according to their own documentation:\nAdvantages\n\nSituations where expert human support is not needed\nFor basic data analysis (SQLite is easy to install and manage for new projects)\nEducation and training\n\nDisadvantages\n\nClient/Server applications (SQLite does not behave well if multiple systems need to access the db at the same time)\nVery large data sets (SQLite stores the entire db in a single disk file, other solutions can store data across multiple files/volumes)\nHigh concurrency (Only 1 writer allowed at a time for SQLite)\n\nFrom the (limited) lists above, you can see that while SQLite is perfect for our application right now, if you were building an actual ANTARES-like system a more sophisticated database solution would be required. \nProblem 3a\nImport sqlite3 into the notebook. \nHint - sqlite3 is part of the Python standard library, so no separate installation should be necessary.",
"import sqlite3",
"Following the sqlite3 import, we must first connect to the database. If we attempt a connection to a database that does not exist, then a new database is created. Here we will create a new database file, called miniANTARES.db.",
"conn = sqlite3.connect(\"miniANTARES.db\")",
"We now have a database connection object, conn. To interact with the database (create tables, load data, write queries) we need a cursor object.",
"cur = conn.cursor()",
"Now that we have a cursor object, we can populate the database. As an example we will start by creating a table to hold all the raw photometry (though ultimately we will not use this table for analysis).\nNote - there are many cursor methods capable of interacting with the database. The most common, execute, takes a single SQL command as its argument and executes that command. Other useful methods include executemany, which is useful for inserting data into the database, and executescript, which take an SQL script as its argument and executes the script.\nIn many cases, as below, it will be useful to use triple quotes in order to improve the legibility of your code.",
"cur.execute(\"\"\"drop table if exists rawPhot\"\"\") # drop the table if is already exists\ncur.execute(\"\"\"create table rawPhot(\n id integer primary key,\n objId int,\n t float, \n pb varchar(1),\n flux float,\n dflux float) \n\"\"\")",
"Let's unpack everything that happened in these two commands. First - if the table rawPhot already exists, we drop it to start over from scratch. (this is useful here, but should not be adopted as general practice)\nSecond - we create the new table rawPhot, which has 6 columns: id - a running index for every row in the table, objId - an ID to identify which source the row belongs to, t - the time of observation in MJD, pb - the passband of the observation, flux the observation flux, and dflux the uncertainty on the flux measurement. In addition to naming the columns, we also must declare their type. We have declared id as the primary key, which means this value will automatically be assigned and incremented for all data inserted into the database. We have also declared pb as a variable character of length 1, which is more useful and restrictive than simply declaring pb as text, which allows any freeform string.\nNow we need to insert the raw flux measurements into the database. To do so, we will use the ANTARESlc class that we created earlier. As an initial example, we will insert the first 3 observations from the source FAKE010.",
"filename = \"training_set_for_LSST_DSFP/FAKE001.dat\"\nlc = ANTARESlc(filename)\n\nobjId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\ncur.execute(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values {}\"\"\".format((objId,) + tuple(lc.DFlc.ix[0])))\ncur.execute(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values {}\"\"\".format((objId,) + tuple(lc.DFlc.ix[1])))\ncur.execute(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values {}\"\"\".format((objId,) + tuple(lc.DFlc.ix[2])))",
"There are two things to highlight above: (1) we do not specify an id for the data as this is automatically generated, and (2) the data insertion happens via a tuple. In this case, we are taking advantage of the fact that a Python tuple is can be concatenated:\n(objId,) + tuple(lc10.DFlc.ix[0]))\n\nWhile the above example demonstrates the insertion of a single row to the database, it is far more efficient to bulk load the data. To do so we will delete, i.e. DROP, the rawPhot table and use some pandas manipulation to load the contents of an entire file at once via executemany.",
"cur.execute(\"\"\"drop table if exists rawPhot\"\"\") # drop the table if it already exists\ncur.execute(\"\"\"create table rawPhot(\n id integer primary key,\n objId int,\n t float, \n pb varchar(1),\n flux float,\n dflux float) \n\"\"\")\n\n# next 3 lines are already in name space; repeated for clarity\nfilename = \"training_set_for_LSST_DSFP/FAKE001.dat\"\nlc = ANTARESlc(filename)\nobjId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\ndata = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples\n\ncur.executemany(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)\"\"\", data)\n",
"Problem 3b\nLoad all of the raw photometric observations into the rawPhot table in the database. \nHint - you can use glob to select all of the files being loaded.\nHint 2 - you have already loaded the data from FAKE001 into the table.",
"import glob\n\nfilenames = glob.glob(\"training_set_for_LSST_DSFP/FAKE*.dat\")\n\nfor filename in filenames[1:]: \n lc = ANTARESlc(filename)\n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples\n\n cur.executemany(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)\"\"\", data)",
"Problem 3c\nTo ensure the data have been loaded properly, select the $r'$ light curve for source FAKE010 from the rawPhot table and plot the results. Does it match the plot from 1c?",
"cur.execute(\"\"\"select t, flux, dflux \n from rawPhot\n where objId = 61 and pb = 'g'\"\"\")\n\ndata = cur.fetchall()\ndata = np.array(data)\n\nfig, ax = plt.subplots()\nax.errorbar(data[:,0], data[:,1], data[:,2], fmt = 'o', color = '#78A5A3')\nax.set_xlabel(r\"$\\mathrm{MJD}$\")\nax.set_ylabel(r\"$\\mathrm{flux}$\")",
"Now that we have loaded the raw observations, we need to create a new table to store summary statistics for each object. This table will include everything we've added to the ANTARESlc class.",
"cur.execute(\"\"\"drop table if exists lcFeats\"\"\") # drop the table if it already exists\ncur.execute(\"\"\"create table lcFeats(\n id integer primary key,\n objId int,\n gStd float,\n rStd float,\n iStd float,\n zStd float,\n gAmp float, \n rAmp float, \n iAmp float, \n zAmp float, \n gMAD float,\n rMAD float,\n iMAD float, \n zMAD float, \n gBeyond float,\n rBeyond float,\n iBeyond float,\n zBeyond float,\n gSkew float,\n rSkew float,\n iSkew float,\n zSkew float,\n gMinusR float,\n rMinusI float,\n iMinusZ float,\n FOREIGN KEY(objId) REFERENCES rawPhot(objId)\n ) \n\"\"\")",
"The above procedure should look familiar to above, with one exception: the addition of the foreign key in the lcFeats table. The inclusion of the foreign key ensures a connected relationship between rawPhot and lcFeats. In brief, a row cannot be inserted into lcFeats unless a corresponding row, i.e. objId, exists in rawPhot. Additionally, rows in rawPhot cannot be deleted if there are dependent rows in lcFeats. \nProblem 3d\nCalculate features for every source in rawPhot and insert those features into the lcFeats table.",
"for filename in filenames:\n lc = ANTARESlc(filename)\n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n lc.filter_flux()\n lc.weighted_mean_flux()\n lc.normalized_flux_std()\n lc.normalized_amplitude()\n lc.normalized_MAD()\n lc.normalized_beyond_1std()\n lc.skew()\n lc.mean_colors()\n \n feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd, \n lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp, \n lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD, \n lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,\n lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew, \n lc.gMinusR, lc.rMinusI, lc.iMinusZ)\n\n cur.execute(\"\"\"insert into lcFeats(objId, \n gStd, rStd, iStd, zStd, \n gAmp, rAmp, iAmp, zAmp, \n gMAD, rMAD, iMAD, zMAD, \n gBeyond, rBeyond, iBeyond, zBeyond,\n gSkew, rSkew, iSkew, zSkew,\n gMinusR, rMinusI, iMinusZ) values {}\"\"\".format(feats))",
"Problem 3e\nConfirm that the data loaded correctly by counting the number of sources with gAmp > 2.\nHow many sources have gMinusR = -999?\nHint - you should find 9 and 2, respectively.",
"cur.execute(\"\"\"select count(*) from lcFeats where gAmp > 2\"\"\")\n\nnAmp2 = cur.fetchone()[0]\n\ncur.execute(\"\"\"select count(*) from lcFeats where gMinusR = -999\"\"\")\nnNoColor = cur.fetchone()[0]\n\nprint(\"There are {:d} sources with gAmp > 2\".format(nAmp2))\nprint(\"There are {:d} sources with no measured i' - z' color\".format(nNoColor))",
"Finally, we close by commiting the changes we made to the database.\nNote that strictly speaking this is not needed, however, were we to update any values in the database then we would need to commit those changes.",
"conn.commit()",
"mini Challenge Problem\nIf there is less than 45 min to go, please skip this part. \nEarlier it was claimed that bulk loading the data is faster than loading it line by line. For this problem - prove this assertion, use %%timeit to \"profile\" the two different options (bulk load with executemany and loading one photometric measurement at a time via for loop).\nHint - to avoid corruption of your current working database, miniANTARES.db, create a new temporary database for the pupose of running this test. Also be careful with the names of your connection and cursor variables.",
"%%timeit\n# bulk load solution\n\ntmp_conn = sqlite3.connect(\"tmp1.db\")\ntmp_cur = tmp_conn.cursor()\n\ntmp_cur.execute(\"\"\"drop table if exists rawPhot\"\"\") # drop the table if it already exists\ntmp_cur.execute(\"\"\"create table rawPhot(\n id integer primary key,\n objId int,\n t float, \n pb varchar(1),\n flux float,\n dflux float) \n \"\"\")\n\nfor filename in filenames: \n lc = ANTARESlc(filename)\n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples\n\n tmp_cur.executemany(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)\"\"\", data)\n\n%%timeit\n# bulk load solution\n\ntmp_conn = sqlite3.connect(\"tmp1.db\")\ntmp_cur = tmp_conn.cursor()\n\ntmp_cur.execute(\"\"\"drop table if exists rawPhot\"\"\") # drop the table if it already exists\ntmp_cur.execute(\"\"\"create table rawPhot(\n id integer primary key,\n objId int,\n t float, \n pb varchar(1),\n flux float,\n dflux float) \n \"\"\")\n\nfor filename in filenames: \n lc = ANTARESlc(filename)\n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n for obs in lc.DFlc.values:\n tmp_cur.execute(\"\"\"insert into rawPhot(objId, t, pb, flux, dflux) values {}\"\"\".format((objId,) + tuple(obs)))",
"Problem 4) Build a Classification Model\nOne of the primary goals for ANTARES is to separate the Wheat from the Chaff, in other words, given that ~10 million alerts will be issued by LSST on a nightly basis, what is the single (or 10, or 100) most interesting alert.\nHere we will build on the skills developed during the DSFP Session 2 to construct a machine-learning model to classify new light curves. \nFortunately - the data that has already been loaded to miniANTARES.db is a suitable training set for the classifier (we simply haven't provided you with labels just yet). Execute the cell below to add a new table to the database which includes the appropriate labels.",
"cur.execute(\"\"\"drop table if exists lcLabels\"\"\") # drop the table if it already exists\ncur.execute(\"\"\"create table lcLabels(\n objId int,\n label int, \n foreign key(objId) references rawPhot(objId)\n )\"\"\")\n\nlabels = np.zeros(100)\nlabels[20:60] = 1\nlabels[60:] = 2\n\ndata = np.append(np.arange(1,101)[np.newaxis].T, labels[np.newaxis].T, axis = 1)\ntup_data = [tuple(x) for x in data]\n\ncur.executemany(\"\"\"insert into lcLabels(objId, label) values (?,?)\"\"\", tup_data)",
"For now - don't worry about what the labels mean (though if you inspect the light curves you may be able to figure this out...)\nProblem 4a\nQuery the database to select features and labels for the light curves in your training set. Store the results of these queries in numpy arrays, X and y, respectively, which are suitable for the various scikit-learn machine learning algorithms.\nHint - recall that databases do not store ordered results.\nHint 2 - recall that scikit-learn expects y to be a 1d array. You will likely need to convert a 2d array to 1d.",
"cur.execute(\"\"\"select label\n from lcLabels \n order by objId asc\"\"\")\ny = np.array(cur.fetchall()).ravel()\n\ncur.execute(\"\"\"select gStd, rStd, iStd, zStd, \n gAmp, rAmp, iAmp, zAmp, \n gMAD, rMAD, iMAD, zMAD, \n gBeyond, rBeyond, iBeyond, zBeyond,\n gSkew, rSkew, iSkew, zSkew,\n gMinusR, rMinusI, iMinusZ\n from lcFeats\n order by objId asc\"\"\")\nX = np.array(cur.fetchall())",
"Problem 4b\nTrain a SVM model (SVC in scikit-learn) using a radial basis function (RBF) kernel with penalty parameter, $C = 1$, and kernel coefficient, $\\gamma = 0.1$.\nEvaluate the accuracy of the model via $k = 5$ fold cross validation. \nHint - you may find the cross_val_score module helpful.",
"from sklearn.svm import SVC\nfrom sklearn.cross_validation import cross_val_score\n\ncv_scores = cross_val_score(SVC(C = 1.0, gamma = 0.1, kernel = 'rbf'), X, y, cv = 5)\n\nprint(\"The SVM model produces a CV accuracy of {:.4f}\".format(np.mean(cv_scores)))",
"The SVM model does a decent job of classifying the data. However - we are going to have 10 million alerts every night. Therefore, we need something that runs quickly. For most ML models the training step is slow, while predictions (relatively) are fast. \nProblem 4c\nPick any other classification model from scikit-learn, and \"profile\" the time it takes to train that model vs. the time it takes to train an SVM model.\nIs the model that you have selected faster than SVM?\nHint - you should import the model outside your timing loop as we only care about the training step in this case.",
"from sklearn.ensemble import RandomForestClassifier\nrf_clf = RandomForestClassifier()\nsvm_clf = SVC(C = 1.0, gamma = 0.1, kernel = 'rbf')\n\n%%timeit\n# timing solution for RF model\nrf_clf.fit(X,y)\n\n%%timeit\n# timing solution for SVM model\nsvm_clf.fit(X,y)",
"Problem 4d\nDoes the model you selected perform better than the SVM model? Perform a $k = 5$ fold cross validation to determine which model provides superior accuracy.",
"cv_scores = cross_val_score(RandomForestClassifier(), X, y, cv = 5)\n\nprint(\"The RF model produces a CV accuracy of {:.4f}\".format(np.mean(cv_scores)))",
"Problem 4e\nWhich model are you going to use in your miniANTARES? Justify your answer. \nWrite solution to 4e here\nIn this case we are going to adopt the SVM model as it is a factor of 20 times faster than RF, while providing nearly identical performance from an accuracy stand point.\nProblem 5) Class Predictions for New Sources\nNow that we have developed a basic infrastructure for dealing with streaming data, we may reap the rewards of our efforts. We will use our ANTARES-like software to classify newly observed sources.\nProblem 5a\nLoad the light curves for the new observations (found in full_testset_for_LSST_DSP) into the a table in the database. \nHint - ultimately it doesn't matter much one way or another, but you may choose to keep new observations in a table separate from the training data. I'm putting it into a new testPhot database. Up to you.",
"cur.execute(\"\"\"drop table if exists testPhot\"\"\") # drop the table if is already exists\ncur.execute(\"\"\"create table testPhot(\n id integer primary key,\n objId int,\n t float, \n pb varchar(1),\n flux float,\n dflux float) \n\"\"\")\ncur.execute(\"\"\"drop table if exists testFeats\"\"\") # drop the table if it already exists\ncur.execute(\"\"\"create table testFeats(\n id integer primary key,\n objId int,\n gStd float,\n rStd float,\n iStd float,\n zStd float,\n gAmp float, \n rAmp float, \n iAmp float, \n zAmp float, \n gMAD float,\n rMAD float,\n iMAD float, \n zMAD float, \n gBeyond float,\n rBeyond float,\n iBeyond float,\n zBeyond float,\n gSkew float,\n rSkew float,\n iSkew float,\n zSkew float,\n gMinusR float,\n rMinusI float,\n iMinusZ float,\n FOREIGN KEY(objId) REFERENCES testPhot(objId)\n ) \n\"\"\")\n\nnew_obs_filenames = glob.glob(\"test_set_for_LSST_DSFP/FAKE*.dat\")\n\nfor filename in new_obs_filenames: \n lc = ANTARESlc(filename)\n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n data = [(objId,) + tuple(x) for x in lc.DFlc.values] # array of tuples\n\n cur.executemany(\"\"\"insert into testPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)\"\"\", data)",
"Problem 5b\nCalculate features for the new observations and insert those features into a database table.\nHint - again, you may want to create a new table for this, up to you. I'm using the testFeats table.",
"for filename in new_obs_filenames:\n lc = ANTARESlc(filename)\n \n # simple HACK to get rid of data with too few observations (fails because std is nan with just one observation)\n if len(lc.DFlc) <= 14:\n continue\n \n objId = int(filename.split('FAKE')[1].split(\".dat\")[0])\n\n lc.filter_flux()\n lc.weighted_mean_flux()\n lc.normalized_flux_std()\n lc.normalized_amplitude()\n lc.normalized_MAD()\n lc.normalized_beyond_1std()\n lc.skew()\n lc.mean_colors()\n \n feats = (objId, lc.gStd, lc.rStd, lc.iStd, lc.zStd, \n lc.gAmp, lc.rAmp, lc.iAmp, lc.zAmp, \n lc.gMAD, lc.rMAD, lc.iMAD, lc.zMAD, \n lc.gBeyond, lc.rBeyond, lc.iBeyond, lc.zBeyond,\n lc.gSkew, lc.rSkew, lc.iSkew, lc.zSkew,\n lc.gMinusR, lc.rMinusI, lc.iMinusZ)\n\n cur.execute(\"\"\"insert into testFeats(objId, \n gStd, rStd, iStd, zStd, \n gAmp, rAmp, iAmp, zAmp, \n gMAD, rMAD, iMAD, zMAD, \n gBeyond, rBeyond, iBeyond, zBeyond,\n gSkew, rSkew, iSkew, zSkew,\n gMinusR, rMinusI, iMinusZ) values {}\"\"\".format(feats))",
"Problem 5c\nTrain the model that you adopted in 4e on the training set, and produce predictions for the newly observed sources.\nWhat is the class distribution for the newly detected sources?\nHint - the training set was constructed to have a nearly uniform class distribution, that may not be the case for the actual observed distribution of sources.",
"svm_clf = SVC(C=1.0, gamma = 0.1, kernel = 'rbf').fit(X, y)\n\ncur.execute(\"\"\"select gStd, rStd, iStd, zStd, \n gAmp, rAmp, iAmp, zAmp, \n gMAD, rMAD, iMAD, zMAD, \n gBeyond, rBeyond, iBeyond, zBeyond,\n gSkew, rSkew, iSkew, zSkew,\n gMinusR, rMinusI, iMinusZ\n from testFeats\n order by objId asc\"\"\")\nX_new = np.array(cur.fetchall())\n\ny_preds = svm_clf.predict(X_new)\n\nprint(\"\"\"There are {:d}, {:d}, and {:d} sources \n in classes 1, 2, 3, respectively\"\"\".format(*list(np.bincount(y_preds)))) # be careful using bincount",
"Problem 5d\nANOTHER PROBLEM HERE INVESTIGATING THE DATA IN SOME WAY - LIGHT CURVE PLOTS OR SOMETHING, BUT NEED REAL DATA FIRST.\nProblem 6) Anomaly Detection\nAs we learned earlier - one of the primary goals of ANTARES is to reduce the stream of 10 million alerts on any given night to the single (or 10, or 100) most interesting objects. One possible definition of \"interesting\" is rarity - in which case it would be useful to add some form of anomaly detection to the pipeline. scikit-learn has several different algorithms that can be used for anomaly detection. Here we will employ isolation forest which has many parallels to random forests, which we have previously learned about.\nIn brief, isolation forest builds an ensemble of decision trees where the splitting parameter in each node of the tree is selected randomly. In each tree the number of branches necessary to isolate each source is measured - outlier sources will, on average, require fewer splittings to be isolated than sources in high-density regions of the feature space. Averaging the number of branchings over many trees results in a relative ranking of the anomalousness (yes, I just made up a word) of each source.\nProblem 6a\nUsing IsolationForest in sklearn.ensemble - determine the 10 most isolated sources in the data set.\nHint - for IsolationForest you will want to use the decision_function() method rather than predict_proba(), which is what we have previously used with sklearn.ensemble models to get relative rankings from the model.",
"from sklearn.ensemble import IsolationForest\n\nisoF_clf = IsolationForest(n_estimators = 100)\nisoF_clf.fit(X_new)\nanomaly_score = isoF_clf.decision_function(X_new)\n\nprint(\"The 10 most anomalous sources are: {}\".format(np.arange(1,5001)[np.argsort(anomaly_score)[:10]]))",
"Problem 6b\nPlot the light curves of the 2 most anomalous sources. \nCan you identify why these sources have been selected as outliers?",
"lc485 = ANTARESlc(\"test_set_for_LSST_DSFP/FAKE00485.dat\")\nlc485.plot_multicolor_lc()\n\nlc2030 = ANTARESlc(\"test_set_for_LSST_DSFP/FAKE02030.dat\")\nlc2030.plot_multicolor_lc()",
"Write solution to 6b here\nFor source 485 - this looks like a supernova at intermediate redshifts. What might be throwing it is the outlier point. We never really made our features very robust to outliers.\nFor source 2030 - This is a weird faint source with multiple unsynced rises and falls in different bands.\nChallenge Problem) Simulate a Real ANTARES\nThe problem that we just completed features a key difference from the true ANTARES system - namely, all the light curves analyzed had a complete set of observations loaded into the database. One of the key challenges for LSST (and by extension ANTARES) is that the data will be streaming - new observations will be available every night, but the full light curves for all sources won't be available until the 10 yr survey is complete. In this problem, you will use the same data to simulate an LSST-like classification problem.\nAssume that your training set (i.e. the first 100 sources loaded into the database) were observed prior to LSST, thus, these light curves can still be used in their entirety to train your classification models. For the test set of observations, simulate LSST by determining the min and max observation date and take 1-d quantized steps through these light curves. On each day when there are new observations, update the feature calculations for every source that has been newly observed. Classify those sources and identify possible anomalies.\nHere are some things you should think about as you build this software:\n\nShould you use the entire light curves for training-set objects when classifying sources with only a few data points?\nHow are you going to handle objects on the first epoch when they are detected?\nWhat threshold (if any) are you going to set to notify the community about rarities that you have discovered\n\nHint - Since you will be reading these light curves from the database (and not from text files) the ANTARESlc class that we previously developed will not be useful. 
You will (likely) either need to re-write this class to interact with the database or figure out how to massage the query results to comply with the class definitions."
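As a starting point for the challenge, the day-by-day replay itself can be sketched against a toy testPhot table. The observations below are invented for the demo, and the feature-recalculation/classification step is left as a stub comment:

```python
import sqlite3

# toy stand-in for the testPhot table built earlier in the notebook
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""create table testPhot(
                   id integer primary key,
                   objId int, t float, pb varchar(1),
                   flux float, dflux float)""")
obs = [(1, 100.2, 'g', 5.0, 0.1), (2, 100.7, 'r', 3.0, 0.1),
       (1, 101.4, 'g', 6.0, 0.1), (3, 102.1, 'i', 1.0, 0.1)]
cur.executemany("insert into testPhot(objId, t, pb, flux, dflux) values (?,?,?,?,?)", obs)

# determine the survey span, then step through it in 1-d quantized steps
t_min, t_max = cur.execute("select min(t), max(t) from testPhot").fetchone()

nightly_updates = []
day = int(t_min)
while day <= int(t_max):
    # all sources with new observations on this (quantized) night
    cur.execute("""select distinct objId from testPhot
                   where t >= ? and t < ?""", (day, day + 1))
    new_ids = sorted(row[0] for row in cur.fetchall())
    # for objId in new_ids: recompute features, classify, flag anomalies
    nightly_updates.append((day, new_ids))
    day += 1
conn.close()
```

For the toy data this yields one update list per night: sources 1 and 2 on night 100, source 1 again on 101, and source 3 on 102.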
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/sdk/sdk_automl_forecasting_evaluating_a_model.ipynb
|
apache-2.0
|
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex AI Model Builder SDK: AutoML Forecasting Model Training Example\nTo use this Colaboratory notebook, you copy the notebook to your own Google Drive and open it with Colaboratory (or Colab). You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Colab automatically displays the return value of the last line in each cell. For more information about running notebooks in Colab, see the Colab welcome page.\nThis notebook demonstrates how to create an AutoML Forecasting model based on a time series dataset. The process of exporting and visualizing test set predictions will be mentioned as well. It will require you provide a bucket where the dataset will be stored.\nNote: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK.\nInstall Vertex AI SDK, Authenticate, and upload of a Dataset to your GCS bucket\nAfter the SDK installation the kernel will be automatically restarted. You may see this error message Your session crashed for an unknown reason which is normal.",
"%%capture\n!pip3 uninstall -y google-cloud-aiplatform\n!pip3 install google-cloud-aiplatform\n\nimport IPython\n\napp = IPython.Application.instance()\napp.kernel.do_shutdown(True)\n\nimport sys\n\nif \"google.colab\" in sys.modules:\n from google.colab import auth\n\n auth.authenticate_user()",
"Enter your project and GCS bucket\nEnter your Project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.\nIf you don't know your project ID, you may be able to get your project ID using gcloud.",
"import os\n\nPROJECT_ID = \"\"\n\n# Get your Google Cloud project ID from gcloud\nif not os.getenv(\"IS_TESTING\"):\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID: \", PROJECT_ID)",
"Otherwise, set your project ID here.",
"if PROJECT_ID == \"\" or PROJECT_ID is None:\n PROJECT_ID = \"\" # @param {type:\"string\"}",
"If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.\nYou may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.",
"BUCKET_NAME = \"\" # @param {type:\"string\"}\nREGION = \"\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP",
"The datasets we are using are samples from the Iowa Liquor Retail Sales dataset. The training sample contains the sales from 2020 and the prediction sample (used in the batch prediction step) contains the January - April sales from 2021.",
"TRAINING_DATASET_BQ_PATH = (\n \"bq://bigquery-public-data:iowa_liquor_sales_forecasting.2020_sales_train\"\n)",
"Initialize Vertex AI SDK\nInitialize the client for Vertex AI.",
"from google.cloud import aiplatform\n\naiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)",
"Create a Managed Time Series Dataset from BigQuery\nThis section will create a dataset from a BigQuery table.",
"ds = aiplatform.datasets.TimeSeriesDataset.create(\n display_name=\"iowa_liquor_sales_train_job\", bq_source=[TRAINING_DATASET_BQ_PATH]\n)\n\nds.resource_name\nds",
"Launch a Training Job to Create a Model\nOnce we have defined your training script, we will create a model, export the test set predictions, and output the BigQuery location of the test set predictions in the training log.",
"time_column = \"date\"\ntime_series_identifier_column = \"store_name\"\ntarget_column = \"sale_dollars\"\n\njob = aiplatform.AutoMLForecastingTrainingJob(\n display_name=\"train-iowa-liquor-sales-automl-1\",\n optimization_objective=\"minimize-rmse\",\n column_specs={\n time_column: \"timestamp\",\n target_column: \"numeric\",\n \"city\": \"categorical\",\n \"zip_code\": \"categorical\",\n \"county\": \"categorical\",\n },\n)\n\ndataset_id = \"iowa_liquor_sales_train_job\"\nbq_table_name = \"iowa_liquor_sales_test_pred\"\nbq_evaluated_examples_uri = \"bq://{}:{}:{}\".format(\n PROJECT_ID, dataset_id, bq_table_name\n)\n# This will take around an hour to run\nmodel = job.run(\n dataset=ds,\n target_column=target_column,\n time_column=time_column,\n time_series_identifier_column=time_series_identifier_column,\n available_at_forecast_columns=[time_column],\n unavailable_at_forecast_columns=[target_column],\n time_series_attribute_columns=[\"city\", \"zip_code\", \"county\"],\n forecast_horizon=30,\n context_window=30,\n data_granularity_unit=\"day\",\n data_granularity_count=1,\n weight_column=None,\n export_evaluated_data_items=True,\n budget_milli_node_hours=500,\n model_display_name=\"iowa-liquor-sales-forecast-model\",\n predefined_split_column_name=None,\n)\n\n# @title # Fetch Model Evaluation Metrics\n# @markdown Fetch the model evaluation metrics calculated during training on the test set.\n\nimport pandas as pd\n\nlist_evaluation_pager = model.api_client.list_model_evaluations(\n parent=model.resource_name\n)\nfor model_evaluation in list_evaluation_pager:\n metrics_dict = {m[0]: m[1] for m in model_evaluation.metrics.items()}\n df = pd.DataFrame(metrics_dict.items(), columns=[\"Metric\", \"Value\"])\n print(df.to_string(index=False))\n\ntime_column = \"date\"\ntime_series_identifier_column = \"store_name\"\ntarget_column = \"sale_dollars\"\nMY_PROJECT = PROJECT_ID\n\neval_ex_uri = job.evaluated_data_items_bigquery_uri\n\n# @title # Visualize the Forecasts\n# 
@markdown The following snippet visualizes the test set predictions from the forecasting training job above to aid in model evaluation.\n# @markdown Visit the given link to view the generated forecasts in [Data Studio](https://support.google.com/datastudio/answer/6283323?hl=en).\n\nimport urllib\n\n\ndef _sanitize_bq_uri(bq_uri):\n if bq_uri.startswith(\"bq://\"):\n bq_uri = bq_uri[5:]\n return bq_uri.replace(\":\", \".\")\n\n\neval_ex_uri_clean = _sanitize_bq_uri(eval_ex_uri)\n\n\ndef get_data_studio_link(\n eval_input_uri, time_column, time_series_identifier_column, target_column\n):\n\n base_url = \"https://datastudio.google.com/c/u/0/reporting\"\n query = (\n \"SELECT \\\\n\"\n \" CAST({} as DATETIME) timestamp_col,\\\\n\"\n \" CAST({} as STRING) time_series_identifier_col,\\\\n\"\n \" CAST({} as NUMERIC) actual_values,\\\\n\"\n \" CAST(predicted_{}.value as NUMERIC) predicted_values,\\\\n\"\n \" CAST(predicted_on_{} as DATETIME) predicted_on_Date_col,\\\\n\"\n \" CAST({} as NUMERIC) - CAST(predicted_{}.value as NUMERIC) residuals,\\\\n\"\n \" * \\\\n\"\n \"FROM `{}` input\"\n )\n query = query.format(\n time_column,\n time_series_identifier_column,\n target_column,\n target_column,\n time_column,\n target_column,\n target_column,\n eval_input_uri,\n )\n params = {\n \"templateId\": \"5df87696-b427-49d8-aeec-b885f9b7080f\",\n \"ds0.connector\": \"BIG_QUERY\",\n \"ds0.projectId\": MY_PROJECT,\n \"ds0.billingProjectId\": MY_PROJECT,\n \"ds0.type\": \"CUSTOM_QUERY\",\n \"ds0.sql\": query,\n }\n params_str_parts = []\n for k, v in params.items():\n params_str_parts.append('\"{}\":\"{}\"'.format(k, v))\n params_str = \"\".join([\"{\", \",\".join(params_str_parts), \"}\"])\n return \"{}?{}\".format(base_url, urllib.parse.urlencode({\"params\": params_str}))\n\n\nprint(\n get_data_studio_link(\n eval_ex_uri_clean, time_column, time_series_identifier_column, target_column\n )\n)",
"Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:",
"# Delete model resource\nmodel.delete(sync=True)\n\n# Delete Cloud Storage objects that were created\n! gsutil -m rm -r $BUCKET_NAME"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
brainiak/brainiak
|
examples/reconstruct/iem_example_synthetic_RF_data.ipynb
|
apache-2.0
|
[
"import numpy as np\nfrom brainiak.reconstruct import iem as IEM\nimport matplotlib.pyplot as plt\nimport numpy.matlib as matlib\nimport scipy.signal",
"In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space.",
"# Set up parameters\nn_channels = 6\ncos_exponent = 5\nrange_start = 0\n\nrange_stop = 360\nfeature_resolution = 360\niem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='circular', range_start=range_start, \n range_stop=range_stop, channel_density=feature_resolution)\n\n# You can also try the half-circular space. Here's the associated code:\n# range_stop = 180 # since 0 and 360 degrees are the same, we want to stop shy of 360\n# feature_resolution = 180\n# iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='halfcircular', range_start=range_start, \n# range_stop=range_stop, channel_density=feature_resolution, verbose=True)\n\nstim_vals = np.linspace(0, feature_resolution - (feature_resolution/6), 6).astype(int)",
"Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above.\nThe following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials.",
"# Generate synthetic data s.t. each voxel has a Gaussian tuning function\n\ndef generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=True, RF_noise=0.):\n if random_tuning:\n # Voxel selectivity is random\n voxel_tuning = np.floor((np.random.rand(n_voxels) * range_stop) + range_start).astype(int)\n else:\n # Voxel selectivity is evenly spaced along the feature axis\n voxel_tuning = np.linspace(range_start, range_stop, n_voxels+1)\n voxel_tuning = voxel_tuning[0:-1]\n voxel_tuning = np.floor(voxel_tuning).astype(int)\n gaussian = scipy.signal.gaussian(feature_resolution, 15)\n voxel_RFs = np.zeros((n_voxels, feature_resolution))\n for i in range(0, n_voxels):\n voxel_RFs[i, :] = np.roll(gaussian, voxel_tuning[i] - ((feature_resolution//2)-1))\n voxel_RFs += np.random.rand(n_voxels, feature_resolution)*RF_noise # add noise to voxel RFs\n voxel_RFs = voxel_RFs / np.max(voxel_RFs, axis=1)[:, None]\n \n return voxel_RFs, voxel_tuning\n\n\ndef generate_voxel_data(voxel_RFs, n_voxels, trial_list, feature_resolution, \n trial_noise=0.25):\n one_hot = np.eye(feature_resolution)\n # Generate trial-wise responses based on voxel RFs\n if range_start > 0:\n trial_list = trial_list + range_start\n elif range_start < 0:\n trial_list = trial_list - range_start\n stim_X = one_hot[:, trial_list] #@ basis_set.transpose()\n trial_data = voxel_RFs @ stim_X\n trial_data += np.random.rand(n_voxels, trial_list.size)*(trial_noise*np.max(trial_data))\n \n return trial_data",
"Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials.",
"np.random.seed(100)\nn_voxels = 50\nn_train_trials = 120\ntraining_stim = np.repeat(stim_vals, n_train_trials/6)\nvoxel_RFs, voxel_tuning = generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=False, RF_noise=0.1)\ntrain_data = generate_voxel_data(voxel_RFs, n_voxels, training_stim, feature_resolution, trial_noise=0.25)\nprint(np.linalg.cond(train_data))\n# print(\"Voxels are tuned to: \", voxel_tuning)\n\n# Generate plots to look at the RF of an example voxel.\nvoxi = 20\nf = plt.figure()\nplt.subplot(1, 2, 1)\nplt.plot(train_data[voxi, :])\nplt.xlabel(\"trial\")\nplt.ylabel(\"activation\")\nplt.title(\"Activation over trials\")\nplt.subplot(1, 2, 2)\nplt.plot(voxel_RFs[voxi, :])\nplt.xlabel(\"degrees (motion direction)\")\nplt.axvline(voxel_tuning[voxi])\nplt.title(\"Receptive field at {} deg\".format(voxel_tuning[voxi]))\nplt.suptitle(\"Example voxel\")\n\nplt.figure()\nplt.imshow(train_data)\nplt.ylabel('voxel')\nplt.xlabel('trial')\nplt.suptitle('Simulated data from each voxel')",
"Using this synthetic training data, we can fit the IEM.",
"# Fit an IEM\niem_obj.fit(train_data.transpose(), training_stim)",
"Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate.\nRemember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot).\nWe can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line.",
"# Let's visualize the basis functions.\nchannels = iem_obj.channels_\nfeature_axis = iem_obj.channel_domain\nprint(channels.shape)\n\nplt.figure()\nplt.subplot(1, 2, 1)\nfor i in range(0, channels.shape[0]):\n plt.plot(feature_axis, channels[i,:])\nplt.title('Channels (i.e. basis functions)')\nplt.subplot(1, 2, 2)\nplt.plot(np.sum(channels, 0))\nplt.ylim(0, 2.5)\nplt.title('Sum across channels')",
"Now we can generate test data and see how well we can predict the test stimuli.",
"# Generate test data\nn_test_trials = 12\ntest_stim = np.repeat(stim_vals, n_test_trials/len(stim_vals))\nnp.random.seed(330)\ntest_data = generate_voxel_data(voxel_RFs, n_voxels, test_stim, feature_resolution, trial_noise=0.25)\n\n# Predict test stim & get R^2 score\npred_feature = iem_obj.predict(test_data.transpose())\nR2 = iem_obj.score(test_data.transpose(), test_stim)\n\nprint(\"Predicted features are: {} degrees.\".format(pred_feature))\nprint(\"Actual features are: {} degrees.\".format(test_stim))\nprint(\"Test R^2 is {}\".format(R2))",
"In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature.\nBelow we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation.",
"# Now get the model-based reconstructions, which are continuous\n# functions that should peak at each test stimulus feature\nrecons = iem_obj._predict_feature_responses(test_data.transpose())\n\nf = plt.figure()\nfor i in range(0, n_test_trials-1):\n plt.plot(feature_axis, recons[:, i])\nfor i in stim_vals:\n plt.axvline(x=i, color='k', linestyle='--')\n\nplt.title(\"Reconstructions of {} degrees\".format(np.unique(test_stim)))",
"For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below.",
"iem_obj.verbose = False\ndef train_and_test(nvox, ntrn, ntst, rfn, tn):\n vRFs, vox_tuning = generate_voxel_RFs(nvox, feature_resolution, random_tuning=True, RF_noise=rfn)\n trn = np.repeat(stim_vals, ntrn/6).astype(int)\n trnd = generate_voxel_data(vRFs, nvox, trn, feature_resolution, trial_noise=tn)\n tst = np.repeat(stim_vals, ntst/6).astype(int)\n tstd = generate_voxel_data(vRFs, nvox, tst, feature_resolution, trial_noise=tn)\n \n iem_obj.fit(trnd.transpose(), trn)\n recons = iem_obj._predict_feature_responses(tstd.transpose())\n pred_ori = iem_obj.predict(tstd.transpose())\n R2 = iem_obj.score(tstd.transpose(), tst)\n\n return recons, pred_ori, R2, tst",
"We'll iterate through the list and look at the resulting R^2 values.",
"np.random.seed(300)\nvox_list = (5, 10, 15, 25, 50)\nR2_list = np.zeros(len(vox_list))\nfor idx, nvox in enumerate(vox_list):\n recs, preds, R2_list[idx], test_features = train_and_test(nvox, 120, 30, 0.1, 0.25)\n\nprint(\"The R2 values for increasing numbers of voxels: \")\nprint(R2_list)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.24/_downloads/7ca3f34c286b629113cbb522edf26a21/75_cluster_ftest_spatiotemporal.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Spatiotemporal permutation F-test on full sensor data\nTests for differential evoked responses in at least\none condition using a permutation clustering test.\nThe FieldTrip neighbor templates will be used to determine\nthe adjacency between sensors. This serves as a spatial prior\nto the clustering. Spatiotemporal clusters will then\nbe visualized using custom matplotlib code.\nSee the FieldTrip website_ for a caveat regarding\nthe possible interpretation of \"significant\" clusters.",
"# Authors: Denis Engemann <denis.engemann@gmail.com>\n# Jona Sassenhagen <jona.sassenhagen@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable\n\nimport mne\nfrom mne.stats import spatio_temporal_cluster_test\nfrom mne.datasets import sample\nfrom mne.channels import find_ch_adjacency\nfrom mne.viz import plot_compare_evokeds\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}\ntmin = -0.2\ntmax = 0.5\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(1, 30, fir_design='firwin')\nevents = mne.read_events(event_fname)",
"Read epochs for the channel of interest",
"picks = mne.pick_types(raw.info, meg='mag', eog=True)\n\nreject = dict(mag=4e-12, eog=150e-6)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=None, reject=reject, preload=True)\n\nepochs.drop_channels(['EOG 061'])\nepochs.equalize_event_counts(event_id)\n\nX = [epochs[k].get_data() for k in event_id] # as 3D matrix\nX = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering",
"Find the FieldTrip neighbor definition to setup sensor adjacency",
"adjacency, ch_names = find_ch_adjacency(epochs.info, ch_type='mag')\n\nprint(type(adjacency)) # it's a sparse matrix!\n\nplt.imshow(adjacency.toarray(), cmap='gray', origin='lower',\n interpolation='nearest')\nplt.xlabel('{} Magnetometers'.format(len(ch_names)))\nplt.ylabel('{} Magnetometers'.format(len(ch_names)))\nplt.title('Between-sensor adjacency')",
"Compute permutation statistic\nHow does it work? We use clustering to \"bind\" together features which are\nsimilar. Our features are the magnetic fields measured over our sensor\narray at different times. This reduces the multiple comparison problem.\nTo compute the actual test-statistic, we first sum all F-values in all\nclusters. We end up with one statistic for each cluster.\nThen we generate a distribution from the data by shuffling our conditions\nbetween our samples and recomputing our clusters and the test statistics.\nWe test for the significance of a given cluster by computing the probability\nof observing a cluster of that size. For more background read:\nMaris/Oostenveld (2007), \"Nonparametric statistical testing of EEG- and\nMEG-data\" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.\ndoi:10.1016/j.jneumeth.2007.03.024",
"# set cluster threshold\nthreshold = 50.0 # very high, but the test is quite sensitive on this data\n# set family-wise p-value\np_accept = 0.01\n\ncluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,\n threshold=threshold, tail=1,\n n_jobs=1, buffer_size=None,\n adjacency=adjacency)\n\nT_obs, clusters, p_values, _ = cluster_stats\ngood_cluster_inds = np.where(p_values < p_accept)[0]",
"Note. The same functions work with source estimate. The only differences\nare the origin of the data, the size, and the adjacency definition.\nIt can be used for single trials or for groups of subjects.\nVisualize clusters",
"# configure variables for visualization\ncolors = {\"Aud\": \"crimson\", \"Vis\": 'steelblue'}\nlinestyles = {\"L\": '-', \"R\": '--'}\n\n# organize data for plotting\nevokeds = {cond: epochs[cond].average() for cond in event_id}\n\n# loop over clusters\nfor i_clu, clu_idx in enumerate(good_cluster_inds):\n # unpack cluster information, get unique indices\n time_inds, space_inds = np.squeeze(clusters[clu_idx])\n ch_inds = np.unique(space_inds)\n time_inds = np.unique(time_inds)\n\n # get topography for F stat\n f_map = T_obs[time_inds, ...].mean(axis=0)\n\n # get signals at the sensors contributing to the cluster\n sig_times = epochs.times[time_inds]\n\n # create spatial mask\n mask = np.zeros((f_map.shape[0], 1), dtype=bool)\n mask[ch_inds, :] = True\n\n # initialize figure\n fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))\n\n # plot average test statistic and mark significant sensors\n f_evoked = mne.EvokedArray(f_map[:, np.newaxis], epochs.info, tmin=0)\n f_evoked.plot_topomap(times=0, mask=mask, axes=ax_topo, cmap='Reds',\n vmin=np.min, vmax=np.max, show=False,\n colorbar=False, mask_params=dict(markersize=10))\n image = ax_topo.images[0]\n\n # create additional axes (for ERF and colorbar)\n divider = make_axes_locatable(ax_topo)\n\n # add axes for colorbar\n ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)\n plt.colorbar(image, cax=ax_colorbar)\n ax_topo.set_xlabel(\n 'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))\n\n # add new axis for time courses and plot time courses\n ax_signals = divider.append_axes('right', size='300%', pad=1.2)\n title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))\n if len(ch_inds) > 1:\n title += \"s (mean)\"\n plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,\n colors=colors, linestyles=linestyles, show=False,\n split_legend=True, truncate_yaxis='auto')\n\n # plot temporal cluster extent\n ymin, ymax = ax_signals.get_ylim()\n 
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],\n color='orange', alpha=0.3)\n\n # clean up viz\n mne.viz.tight_layout(fig=fig)\n fig.subplots_adjust(bottom=.05)\n plt.show()",
"Exercises\n\nWhat is the smallest p-value you can obtain, given the finite number of\n permutations?\n\nuse an F distribution to compute the threshold by traditional significance\n levels. Hint: take a look at :obj:scipy.stats.f\nhttp://www.fieldtriptoolbox.org/faq/\n how_not_to_interpret_results_from_a_cluster-based_permutation_test"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
daniestevez/jupyter_notebooks
|
ESEO.ipynb
|
gpl-3.0
|
[
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport binascii\n\ndef hexprint(x):\n print(' '.join([('0'+j[2:])[-2:] for j in map(hex, np.packbits(x))]))",
"An example packet from ESEO.",
"bits = np.array([0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 
0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 
0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1],\\\n dtype = 'uint8')\n\nhexprint(bits)",
"Trim the data between the 0x7e7e flags. We skip Reed-Solomon decoding, since we are confident that there are no bit errors. We remove the 16 Reed-Solomon parity check bytes.",
"data = bits[16:16+161*8-16*8]\nhexprint(data)",
"Reverse the bytes in the data.",
"def reflect_bytes(x):\n return np.fliplr(x[:x.size//8*8].reshape((-1,8))).ravel()\n\ndata_rev = reflect_bytes(data)\nhexprint(data_rev)",
"Perform bit de-stuffing.",
"def destuff(x):\n y = list()\n run = 0\n for i, bit in enumerate(x):\n if run == 5:\n if bit == 1:\n print('Long run found at bit', i)\n break\n else:\n run = 0\n elif bit == 0:\n run = 0\n y.append(bit)\n elif bit == 1:\n run += 1\n y.append(bit)\n return np.array(y, dtype = 'uint8')\n\ndata_rev_destuff = destuff(data_rev)",
"That it interesting, we have found a run of ones longer than 5 inside the data. We wouln't expect such a run due to byte stuffing. This happens during byte of a total of 161 data bytes.",
"1193/8\n\nhexprint(data_rev_destuff)",
"Perform G3RUH descrambling.",
"def descramble(x):\n y = np.concatenate((np.zeros(17, dtype='uint8'), x))\n z = y[:-17] ^ y[5:-12] ^ y[17:]\n return z\n\ndef nrzi_decode(x):\n return x ^ np.concatenate((np.zeros(1, dtype = 'uint8'), x[:-1])) ^ 1\n\ndata_descrambled = descramble(data_rev_destuff)\nhexprint(data_descrambled)",
"Perform NRZ-I decoding.",
"data_nrz = nrzi_decode(data_descrambled)\nhexprint(data_nrz)",
"The long sequences of zeros are a good indicator, but still we don't have the expected 8A A6 8A 9E 40 40 60 92 AE 68 88 AA 98 61 AX.25 header.\nReflect the bytes again.",
"data_nrz_rev = reflect_bytes(data_nrz)\nhexprint(data_nrz_rev)",
"The CRC is CRC16_CCITT_ZERO following the notation of this online calculator.\nData from SITAEL",
"raw_input_bits = np.array([1,1,0,1,0,0,1,1,1,1,1,1,1,0,0,0,0,0,0,0,1,1,1,0,1,0,1,0,0,0,1,0,1,1,1,0,1,0,1,0,1,1,0,0,0,1,0,0,1,0,0,0,1,1,1,0,0,0,1,0,1,1,0,0,0,1,0,0,1,0,1,1,0,0,0,0,1,1,1,0,0,0,1,0,1,0,0,0,1,1,1,1,1,0,1,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,1,1,0,0,0,1,1,1,0,0,1,1,0,0,1,1,1,0,1,0,0,1,1,0,0,1,0,1,1,0,0,0,1,1,0,0,1,0,1,1,0,1,0,1,0,0,1,0,1,1,1,0,1,1,1,1,0,0,0,0,0,0,0,1,1,0,0,1,0,1,0,0,0,0,0,1,0,1,1,0,0,0,0,1,0,1,0,0,0,0,1,0,1,0,0,1,0,0,0,1,1,0,0,0,1,1,1,0,0,0,1,0,1,1,1,0,1,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,0,1,1,1,0,0,1,1,1,1,1,0,1,1,1,0,1,0,0,0,1,0,1,1,0,1,0,1,0,1,0,0,0,1,1,0,1,1,0,0,1,1,0,0,0,1,0,1,0,0,0,0,1,1,0,1,0,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,1,0,1,0,0,0,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,0,0,1,1,0,1,1,1,0,0,0,0,0,1,1,0,0,1,1,0,1,1,1,0,0,1,0,0,1,1,0,0,1,0,0,0,1,0,0,0,0,1,1,1,0,1,1,0,0,1,1,0,0,1,0,1,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,1,1,0,1,0,0,0,0,0,1,1,1,0,0,0,1,1,1,0,0,1,0,0,1,1,1,0,0,1,0,1,1,0,1,0,1,1,1,0,1,0,1,0,0,0,1,0,0,1,1,0,0,0,1,1,1,1,0,1,0,0,1,0,1,1,1,1,0,0,0,0,0,1,0,0,1,1,0,0,1,0,1,1,0,1,1,1,0,1,0,0,1,0,0,0,0,1,1,0,0,0,1,1,0,0,1,1,0,0,0,0,0,1,0,0,0,1,0,0,0,0,1,1,0,0,1,1,1,0,0,1,0,0,1,0,0,0,0,1,0,0,0,0,1,1,0,1,1,0,0,1,1,0,1,0,1,0,0,0,0,0,1,0,1,0,0,1,1,1,0,1,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,1,0,0,1,0,0,1,1,1,0,1,1,0,0,0,0,1,0,1,1,0,1,1,0,0,1,1,0,1,1,0,0,1,0,0,0,0,0,0,1,0,0,1,1,1,1,1,1,1,0,0,1,1,1,0,0,0,1,1,0,0,0,1,1,0,0,0,1,0,0,1,1,1,0,0,1,0,1,0,1,1,0,1,0,0,1,1,1,1,0,1,1,1,0,1,0,1,0,1,1,1,1,1,0,1,0,1,1,0,1,0,0,0,1,1,0,0,0,0,1,0,0,1,1,0,0,0,1,0,0,1,1,1,0,1,1,1,1,0,1,0,1,0,1,1,1,0,1,1,0,0,0,1,0,1,1,1,0,1,1,0,0,1,1,0,1,0,0,0,1,0,0,1,0,0,1,0,1,1,0,1,1,0,0,0,1,1,1,1,1,0,1,0,1,1,1,1,0,1,1,0,0,1,1,0,0,0,0,1,1,0,0,0,0,0,0,0,1,1,0,0,0,1,0,1,0,1,0,1,1,1,1,1,0,0,1,1,0,0,0,0,1,1,0,1,1,0,1,1,1,1,0,0,0,0,1,0,1,1,1,1,0,0,1,0,1,1,0,0,0,0,0,1,1,1,1,0,0,0,0,0,0,0,1,0,0,1,0,1,1,0,1,0,1,1,1,1,0,0,1,1,0,1,0,0,0,1,0,1,0,0,0,0,0,0,1,0,0,1,0,1,1,1,1,0,1,0,1,1,1,0,1,1,0,1,1,0,1,1,1,0,1,1,0,0,1,0,1,0,0,0,1,1,1,1,1,0,0,1,0,1,1,1,1,1,1,0,1,1,0,1,1,1,0,1,1,1,0,0,1,1,0,1,1,1,1,1,0,1,0,
0,1,1,0,1,0,1,1,1,0,1,1,1,1,1,1,0,0,1,1,0,1,0,1,1,1,0,1,0,0,1,0,0,0,0,0,1,0,1,0,0,1,1,0,1,1,1,1,1,0,1,0,0,1,1,1,1,1,0,0,1,1,1,1,1,0,0,0,0,1,1,0,1,1,0,1,0,0,1,1,0,1,1,1,0,0,1,1,1,0,0,1,1,1,0,1,0,0,0,0,1,1,1,0,0,1,0,1,1,0,0,0,1,1,1,1,1,0,0,0,1,0,0,0,0,1,1,0,0,0,0,1,1,1,1,0,0,1,1,1,0,0,0,0,0,1,1,1,0,0,1,0,0,1,1,1,1,1,1,1,1,0,0,1,0,1,1,1,0,0,1,0,1,1,0,0,0,0,1,1,0,0,0,1,0,1,1,0,1,0,0,0,0,1,0,0,0,1,1,0,1,1,0,1,0,1,0,1,1,1,1,0,0,0,0,0,0,0,1,0,0,1,0,0,1,1,0,0,1,0,1,0,1,0,1,1,0,1,0,0,1,1,0,0,0,1,1,0,1,1,0,0,1,0,0,0,1,0,1,0,1,1,0,0,0,0,1,1,0,1,0,0,0,1,0,1,0,0,1,0,0,1,1,1,1,1,0,0,1,1,0,1,1,1,0,1,0,1,0,0,1,1,1,1,0,1,1,1,0,0], dtype = 'uint8')\nhexprint(raw_input_bits)\n\nraw_input_bits.size//8\n\nraw_input_stream = 'D3F8 0EA2 EAC4 8E2C 4B0E 28FA 9020 C733 A658 CB52 EF01 9416 1429 18E2 E87C 773E E8B5 46CC 50D0 CDCA 337D 1B83 3726 443B 329D AC34'\n\n#input_stream = np.unpackbits(np.frombuffer(binascii.a2b_hex(raw_input_stream.replace(' ','')), dtype='uint8'))\ninput_stream = raw_input_bits\ninput_stream_reflected = reflect_bytes(input_stream)\nhexprint(input_stream_reflected)",
"They have CC64 rather than ec 64 near the end. Why?\nWe drop the Reed-Solomon parity check bytes (last 16 bytes).",
"input_stream_reflected_no_rs = input_stream_reflected[:-16*8]\ninput_stream_reflected_no_rs.size//8\n\nafter_unstuffing = destuff(input_stream_reflected_no_rs)\nhexprint(after_unstuffing)",
"Here we have 18 3d instead of 1839 near the end.",
"after_unstuffing.size/8\n\nafter_derandom = nrzi_decode(descramble(after_unstuffing))\nhexprint(reflect_bytes(after_derandom))\n\nafter_derandom.size",
"For some reason we have needed to do something funny with the start of the descrambler (changing byte align) and reflect the bytes again to get something as in their example.",
"reflect_bytes(after_derandom).size/8"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arviz-devs/arviz
|
doc/source/getting_started/ConversionGuideEmcee.ipynb
|
apache-2.0
|
[
"(emcee_conversion)=\nConverting emcee objects to InferenceData\n{class}~arviz.InferenceData is the central data format for ArviZ. InferenceData itself is just a container that maintains references to one or more {class}xarray.Dataset. \nBelow are various ways to generate an InferenceData from emcee objects.\n```{seealso}\n\nConversion from Python, numpy or pandas objects\n{ref}xarray_for_arviz for an overview of InferenceData and its role within ArviZ. \n{ref}schema describes the structure of InferenceData objects and the assumptions made by ArviZ to ease your exploratory analysis of Bayesian models.\n```\n\nWe will start by importing the required packages and defining the model. The famous 8 school model.",
"import arviz as az\nimport numpy as np\nimport emcee\n\naz.style.use(\"arviz-darkgrid\")\n\nJ = 8\ny_obs = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])\nsigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])\n\ndef log_prior_8school(theta):\n mu, tau, eta = theta[0], theta[1], theta[2:]\n # Half-cauchy prior, hwhm=25\n if tau < 0:\n return -np.inf\n prior_tau = -np.log(tau ** 2 + 25 ** 2)\n prior_mu = -(mu / 10) ** 2 # normal prior, loc=0, scale=10\n prior_eta = -np.sum(eta ** 2) # normal prior, loc=0, scale=1\n return prior_mu + prior_tau + prior_eta\n\ndef log_likelihood_8school(theta, y, s):\n mu, tau, eta = theta[0], theta[1], theta[2:]\n return -((mu + tau * eta - y) / s) ** 2\n\ndef lnprob_8school(theta, y, s):\n prior = log_prior_8school(theta)\n like_vect = log_likelihood_8school(theta, y, s)\n like = np.sum(like_vect)\n return like + prior\n\nnwalkers = 40 # called chains in ArviZ\nndim = J + 2\ndraws = 1500\npos = np.random.normal(size=(nwalkers, ndim))\npos[:, 1] = np.absolute(pos[:, 1])\nsampler = emcee.EnsembleSampler(\n nwalkers,\n ndim,\n lnprob_8school,\n args=(y_obs, sigma)\n)\nsampler.run_mcmc(pos, draws);",
"Manually set variable names\nThis first example will show how to convert manually setting the variable names only, leaving everything else to ArviZ defaults.",
"# define variable names, it cannot be inferred from emcee\nvar_names = [\"mu\", \"tau\"] + [\"eta{}\".format(i) for i in range(J)]\nidata1 = az.from_emcee(sampler, var_names=var_names)\nidata1",
"ArviZ has stored the posterior variables with the provided names as expected, but it has also included other useful information in the InferenceData object. The log probability of each sample is stored in the sample_stats group under the name lp and all the arguments passed to the sampler as args have been saved in the observed_data group.\nIt can also be useful to perform a burn in cut to the MCMC samples (see :meth:arviz.InferenceData.sel for more details)",
"idata1.sel(draw=slice(100, None))",
"From an InferenceData object, ArviZ's native data structure, the {func}posterior plot <arviz.plot_posterior> of a few variables can be done in one line:",
"az.plot_posterior(idata1, var_names=[\"mu\", \"tau\", \"eta4\"])",
"Structuring the posterior as multidimensional variables\nThis way of calling from_emcee stores each eta as a different variable, called eta#, \nhowever, they are in fact different dimensions of the same variable. \nThis can be seen in the code of the likelihood and prior functions, where theta is unpacked as:\nmu, tau, eta = theta[0], theta[1], theta[2:]\n\nArviZ has support for multidimensional variables, and there is a way to tell it how to split the variables like it was done in the likelihood and prior functions:",
"idata2 = az.from_emcee(sampler, slices=[0, 1, slice(2, None)])\nidata2",
"After checking the default variable names, the trace of one dimension of eta can be plotted using ArviZ syntax:",
"az.plot_trace(idata2, var_names=[\"var_2\"], coords={\"var_2_dim_0\": 4});",
"blobs: unlock sample stats, posterior predictive and miscellanea\nEmcee does not store per-draw sample stats, however, it has a functionality called\nblobs that allows to store any variable on a per-draw basis. It can be used\nto store some sample_stats or even posterior_predictive data. \nYou can modify the probability function to use this blobs functionality and store the pointwise log likelihood,\nthen rerun the sampler using the new function:",
"def lnprob_8school_blobs(theta, y, s):\n prior = log_prior_8school(theta)\n like_vect = log_likelihood_8school(theta, y, s)\n like = np.sum(like_vect)\n return like + prior, like_vect\n\nsampler_blobs = emcee.EnsembleSampler(\n nwalkers,\n ndim,\n lnprob_8school_blobs,\n args=(y_obs, sigma),\n)\nsampler_blobs.run_mcmc(pos, draws);",
"You can now use the blob_names argument to indicate how to store this blob-defined variable. As the group is not specified, it will go to sample_stats.\nNote that the argument blob_names is added to the arguments covered in the previous examples and we are also introducing coords and dims arguments to show the power and flexibility of the converter. For more on coords and dims see page_in_construction.",
"dims = {\"eta\": [\"school\"], \"log_likelihood\": [\"school\"]}\nidata3 = az.from_emcee(\n sampler_blobs,\n var_names = [\"mu\", \"tau\", \"eta\"],\n slices=[0, 1, slice(2,None)],\n blob_names=[\"log_likelihood\"],\n dims=dims,\n coords={\"school\": range(8)}\n)\nidata3",
"Multi-group blobs\nYou might even have more complicated blobs, each corresponding to a different group of the InferenceData object. Moreover, you can store the variables passed to the EnsembleSampler via the args argument in observed or constant data groups. This is shown in the example below:",
"sampler_blobs.blobs[0, 1]\n\ndef lnprob_8school_blobs(theta, y, sigma):\n mu, tau, eta = theta[0], theta[1], theta[2:]\n prior = log_prior_8school(theta)\n like_vect = log_likelihood_8school(theta, y, sigma)\n like = np.sum(like_vect)\n # store pointwise log likelihood, useful for model comparison with az.loo or az.waic\n # and posterior predictive samples as blobs\n return like + prior, (like_vect, np.random.normal((mu + tau * eta), sigma))\n\nsampler_blobs = emcee.EnsembleSampler(\n nwalkers,\n ndim,\n lnprob_8school_blobs,\n args=(y_obs, sigma),\n)\nsampler_blobs.run_mcmc(pos, draws);\n\ndims = {\"eta\": [\"school\"], \"log_likelihood\": [\"school\"], \"y\": [\"school\"]}\nidata4 = az.from_emcee(\n sampler_blobs,\n var_names = [\"mu\", \"tau\", \"eta\"],\n slices=[0, 1, slice(2,None)],\n arg_names=[\"y\",\"sigma\"],\n arg_groups=[\"observed_data\", \"constant_data\"],\n blob_names=[\"log_likelihood\", \"y\"],\n blob_groups=[\"log_likelihood\", \"posterior_predictive\"],\n dims=dims,\n coords={\"school\": range(8)}\n)\nidata4",
"This last version, which contains both observed data and posterior predictive can be used to plot posterior predictive checks:",
"az.plot_ppc(idata4, var_names=[\"y\"], alpha=0.3, num_pp_samples=200);\n\n%load_ext watermark\n%watermark -n -u -v -iv -w"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hainm/mdtraj
|
examples/clustering.ipynb
|
lgpl-2.1
|
[
"In this example, we cluster our alanine dipeptide trajectory using the RMSD distance metric and Ward's method.",
"from __future__ import print_function\n%matplotlib inline\nimport mdtraj as md\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport scipy.cluster.hierarchy",
"Let's load up our trajectory. This is the trajectory that we generated in the \"Running a simulation in OpenMM and analyzing the results with mdtraj\" example. The first step is to build the rmsd cache, which precalculates some values for the RMSD computation.",
"traj = md.load('ala2.h5')",
"Lets compute all pairwise rmsds between conformations.",
"distances = np.empty((traj.n_frames, traj.n_frames))\nfor i in range(traj.n_frames):\n distances[i] = md.rmsd(traj, traj, i)\nprint('Max pairwise rmsd: %f nm' % np.max(distances))",
"scipy.cluster implements the ward linkage algorithm (among others)",
"linkage = scipy.cluster.hierarchy.ward(distances)",
"Lets plot the resulting dendrogram.",
"plt.title('RMSD Ward hierarchical clustering')\nscipy.cluster.hierarchy.dendrogram(linkage, no_labels=True, count_sort='descendent')\nNone"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
molpopgen/fwdpy
|
docs/examples/windows.ipynb
|
gpl-3.0
|
[
"Example: Sliding windows\nThere are two basic ways of getting sliding windows from simulated data:\n\nManually\nUsing pylibseq\n\nBoth work, and both are pretty easy.",
"#import our modules\nfrom __future__ import print_function\nimport fwdpy as fp\nimport numpy as np\nimport datetime\nimport time\n\n#set up our sim\nrng = fp.GSLrng(101)\nnregions = [fp.Region(0,1,1),fp.Region(2,3,1)]\nsregions = [fp.ExpS(1,2,1,-0.1),fp.ExpS(1,2,0.1,0.001)]\nrregions = [fp.Region(0,3,1)]\npopsizes = np.array([1000]*10000,dtype=np.uint32)\n\n#Run the sim\npops = fp.evolve_regions(rng,4,1000,popsizes[0:],0.001,0.0001,0.001,nregions,sregions,rregions)\n\n#Take samples from the simulation\nsamples = [fp.get_samples(rng,i,20) for i in pops]",
"Calculating sliding windows\nWe are going to want non-overlapping widwos of size 0.1.\nOne thing to keep track of is the total size of our region, which is the half-open interval $[0,3)$\nManual method\nLet's just do it using pure Python:",
"for i in samples:\n windows = []\n start = 0\n while start < 3:\n ##We will only look at neutral mutations, which are element 0 of each sampl\n window = [j[0] for j in i[0] if (j[0] >=start and j[0] < start+0.1)]\n windows.append(window)\n start += 0.1\n ##We now have a full set of windows that we can do something with\n print (len(windows)) ##There should be 30, and many will be empy",
"Using pylibseq",
"from libsequence.windows import Windows\nfrom libsequence.polytable import SimData\nfor i in samples:\n ##We need to convert our list of tuples\n ##into types that pylibseq/libsequence understand:\n windows = Windows(SimData(i[0]),0.1,0.1,0,3)\n ##Now, you can analyze the windows, etc.\n print(len(windows))",
"Well, the pylibseq version is clearly more compact. Of course, you can/should abstract the pure Python version into a standalone function.\nWhy would you ever use the manual version? It can save you memory. The pylibseq version constructs an iterable list of windows, meaning that there is an object allocated for each window. For the manual version above, we grew a list of objects, but we could just have easily processed them and let them go out of scope."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
liupengyuan/python_tutorial
|
chapter3/python正则表达式.ipynb
|
mit
|
[
"python正则表达式快速基础教程\n正则表达式,这个术语不太容易望文生义(没有去考证是如何被翻译为正则表达式的),其实其英文为Regular Expression,直接翻译就是:有规律的表达式。这个表达式其实就是一个字符序列,反映某种字符规律,用(字符串模式匹配)来处理字符串。很多高级语言均支持利用正则表达式对字符串进行处理的操作。\npython提供的正则表达式文档可参见:https://docs.python.org/3/library/re.html",
"import re",
"首先引入python正则表达式库re\n\n1. 初识",
"s = 'Blow low, follow in of which low. lower, lmoww oow aow bow cow 23742937 dow kdiieur998.'\np = 'low'",
"假设要在字符串s中查找单词low,由于该单词的规律就是low,因此可将low作为一个正则表达式,命名为p。",
"m = re.findall(p, s)\nm",
"findall(pattern, string)是re模块中的函数,会在字符串string中将所有匹配正则表达式pattern模式的字符串提取出来,并以一个list的形式返回。该方法是从左到右进行扫描,所返回的list中的每个匹配按照从左到右匹配的顺序进行存放。\n正则表达式low能够将所有单词low匹配出来,但是也会将lower,Blow等含有low字符串中的low也匹配出来。",
"p = input('请输入字符模式,回车结束!\\n')\nm = re.findall(p,s)\nif not m:\n print('没有找到匹配字符!')\nelse:\n print('成功匹配!')",
"如果不存在可以匹配的字符模式,则返回空列表。可以利用列表是否为空作为分支条件。",
"p = r'\\blow\\b'\nm = re.findall(p, s)\nm",
"\\b,即boundary,是正则表达式中的一种特殊字符,表示单词的边界。正则表达式r'\\blow\\b'就是要单独匹配low,该字符串两侧为单词的边界(边界为空格等,但是并不对边界进行匹配)",
"p = r'[lmo]ow'\nm = re.findall(p, s)\nm",
"[lmo],匹配lmo字母中的任何一个",
"p = r'[a-d]ow'\nm = re.findall(p, s)\nm",
"[a-d],匹配abcd字母中的任何一个",
"p = r'\\d'\nm = re.findall(p, s)\nm",
"\\d,即digit,表示数字,\\d表示数字(一个数字字符)",
"p = r'\\d+'\nm = re.findall(p, s)\nm",
"+,表示一个或者重复多个对象,对象为+前面指定的模式\n因此\\d+可以匹配长度至少为1的任意正整数字符。\n\n2. 基本匹配与实例\n字符模式|匹配模式内容|等价于\n----|---|--\n[a-d]|One character of: a, b, c, d|[abcd]\n[^a-d]|One character except: a, b, c, d|[^abcd]\nabc丨def|abc or def|\n\\d|One digit|[0-9]\n\\D|One non-digit|[^0-9]\n\\s|One whitespace|[ \\t\\n\\r\\f\\v]\n\\S|One non-whitespace|[^ \\t\\n\\r\\f\\v]\n\\w|One word character|[a-zA-Z0-9_]\n\\W|One non-word character|[^a-zA-Z0-9_]\n.|Any character (except newline)|[^\\n]\n固定点标记|匹配模式内容\n----|---\n^|Start of the string\n$|End of the string\n\\b|Boundary between word and non-word characters\n数量词|匹配模式内容\n----|---\n{5}|Match expression exactly 5 times\n{2,5}|Match expression 2 to 5 times\n{2,}|Match expression 2 or more times\n{,5}|Match expression 0 to 5 times\n*|Match expression 0 or more times\n{,}|Match expression 0 or more times\n?|Match expression 0 or 1 times\n{0,1}|Match expression 0 or 1 times\n+|Match expression 1 or more times\n{1,}|Match expression 1 or more times\n字符转义|转义匹配内容\n----|---\n\\.|. character\n\\\\|\\ character\n\\| character\n\\+|+ character\n\\?|? character\n\\{|{ character\n\\)|) character\n\\[|[ character",
"m = re.findall(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nm",
"匹配电话号码,区号可以是3或者4位,号码为8位,中间可以有-或者没有。",
"m = re.findall(r'[\\u4e00-\\u9fa5]', '测试 汉 字,abc,测试xia,可以')\nm",
"匹配汉字\n\n\n几个组合实例\n\n\n正则表达式|匹配内容\n----|---\n[A-Za-z0-9]|匹配英文和数字\n[\\u4E00-\\u9FA5A-Za-z0-9_]|中文英文和数字及下划线\n^[a-zA-Z][a-zA-Z0-9_]{4,15}$`|合法账号,长度在5-16个字符之间,只能用字母数字下划线,且第一个位置必须为字母\n3. 进阶\n3.1 python正则表达式常用函数\n函数|功能|用法\n----|---|---\nre.search|Return a match object if pattern found in string|re.search(r'[pat]tern', 'string')\nre.finditer|Return an iterable of match objects (one for each match)|re.finditer(r'[pat]tern', 'string')\nre.findall|Return a list of all matched strings (different when capture groups)|re.findall(r'[pat]tern', 'string')\nre.split|Split string by regex delimeter & return string list|re.split(r'[ -]', 'st-ri ng')\nre.compile|Compile a regular expression pattern for later use|re.compile(r'[pat]tern')\nre.sub|Replaces all occurrences of the RE pattern in string with repl, substituting all occurrences unless max provided. This method returns modified string|re.sub(r'[pat]tern', repl, 'string')",
"m = re.search(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nm\n\nm.group()",
"search总是返回第一个成功匹配,如果没有匹配,则返回None\n利用group()函数,取出match对象中的内容",
"m.span()",
"span()函数返回匹配字符串的起始和结束位置",
"ms = re.finditer(r'\\d{3,4}-?\\d{8}', '010-66677788,02166697788, 0451-22882828')\nfor m in ms:\n print(m.group())",
"finditer()是返回所有匹配,放置在一个元组中,每个匹配都是类似search()函数所返回的match对象,内含每个匹配的详细信息\n可以对该元组进行迭代,取得每个match对象,进一步可以取得其详细信息\n与findall()的区别是,findall()只取得所有匹配字符串,返回包含所有匹配字符串的列表,不关心匹配字符串在原字符串中的各项信息。",
"words = re.split(r'[,-]', '010-66677788,02166697788,0451-22882828')\nwords",
"正则下的split(),是一般split()函数的增强版本,可以对字符串以正则表达式匹配的字符进行切割,返回切割后的列表。",
"p = re.compile(r'[,-]')\np.split('010-66677788,02166697788,0451-22882828')",
"利用compile()函数将正则表达式编译,如以后多次运行,可加快程序运行速度\n\n3.2 分组与引用\nGroup Type|Expression\n----|---\nCapturing|( ... )\nNon-capturing|(?: ... )\nCapturing group named Y|(?P<Y> ... )\nMatch the Y'th captured group|\\Y\nMatch the named group Y|(?P=Y)\n\n(...) 将括号中的部分,放在一起,视为一组,即group。以该group来匹配符合条件的字符串。\ngroup,可被同一正则表达式的后续,所引用,引用可以利用其位置,或者利用其名称,可称为反向引用。",
"p = re.compile('(ab)+')\np.search('ababababab').group()\n\np.search('ababababab').groups()",
"有分组的情况,用groups()函数取出匹配的所有分组",
"p=re.compile('(\\d)-(\\d)-(\\d)')\np.search('1-2-3').group()\n\np.search('1-2-3').groups()\n\ns = '喜欢/v 你/x 的/u 眼睛/n 和/u 深情/n 。/w'\np = re.compile(r'(\\S+)/n')\nm = p.findall(s)\nm",
"按出现顺序捕获名词(/n)。",
"p=re.compile('(?P<first>\\d)-(\\d)-(\\d)')\np.search('1-2-3').group()",
"在分组内,可通过?P<name>的形式,给该分组命名,其中name是给该分组的命名",
"p.search('1-2-3').group('first')",
"可利用group('name'),直接通过组名来获取匹配的该分组",
"s = 'age:13,name:Tom;age:18,name:John'\np = re.compile(r'age:(\\d+),name:(\\w+)')\nm = p.findall(s)\nm\n\np = re.compile(r'age:(?:\\d+),name:(\\w+)')\nm = p.findall(s)\nm",
"(?:\\d+),匹配该模式,但不捕获该分组。因此没有捕获该分组的数字",
"s = 'abcdebbcde'\np = re.compile(r'([ab])\\1')\nm = p.search(s)\nprint('The match is {},the capture group is {}'.format(m.group(), m.groups()))",
"此即为反向引用\n当分组([ab])内的a或b匹配成功后,将开始匹配\\1,\\1将匹配前面分组成功的字符。因此该正则表达式将匹配aa或bb。\n类似地,r'([a-z])\\1{3}',该正则将匹配连续的4个英文小写字母。",
"s = '12,56,89,123,56,98, 12'\np = re.compile(r'\\b(\\d+)\\b.*\\b\\1\\b')\nm = p.search(s)\nm.group(1)",
"利用反向引用来判断是否含有重复数字,可提取第一个重复的数字。\n其中\\1是引用前一个分组的匹配。",
"s = '12,56,89,123,56,98, 12'\np = re.compile(r'\\b(?P<name>\\d+)\\b.*\\b(?P=name)\\b')\nm = p.search(s)\nm.group(1)",
"与前一个类似,但是利用了带分组名称的反向引用。\n\n3.3 贪婪与懒惰\n数量词|匹配模式内容\n----|---\n{2,5}?|Match 2 to 5 times (less preferred)\n{2,}?|Match 2 or more times (less preferred)\n{,5}?|Match 0 to 5 times (less preferred)\n*?|Match 0 or more times (less preferred)\n{,}?|Match 0 or more times (less preferred)\n??|Match 0 or 1 times (less preferred)\n{0,1}?|Match 0 or 1 times (less preferred)\n+?|Match 1 or more times (less preferred)\n{1,}?|Match 1 or more times (less preferred)\n\n当正则表达式中包含能接受重复的限定符时,通常的行为是(在使整个表达式能得到匹配的前提下)匹配尽可能多的字符。(贪婪匹配)\n而懒惰匹配,是匹配尽可能少的字符。方法是在重复的后面加一个?。",
"p = re.compile('<.*>')\np.search('<python>perl>').group()",
"贪婪匹配(默认)将匹配尽可能多的重复",
"p = re.compile('<.*?>')\np.search('<python>perl>').group()",
"懒惰匹配(非贪婪匹配),将匹配尽可能少的重复",
"p = re.compile('(ab)+')\np.search('ababababab').group()\n\np = re.compile('(ab)+?')\np.search('ababababab').group()",
"4. 文本处理中的一些应用实例\n4.1 提取文本中的符合某种模式的字串并进行处理后替换",
"with open(r'test_re.txt', encoding = 'utf-8') as f:\n text = f.read()",
"读入文本文件",
"p = '[\\S]+/t[^/]+/t'\nlines = re.findall(p, text)\nlines[:10]",
"提取词性标记模式为连续2个时间标记t的子串\n模式可构造为:1或多个非空白字符 /t 一或者多个不含/的字符 /t\n利用findall()函数,返回所有符合模式的子串,并查看前10个子串",
"import re\np = '((?:[\\S]+/t\\s){2,})'\nmatchs = re.findall(p, text)\nmatchs[:10]",
"捕获连续两个以上的时间标记词汇\nfindall()函数在应用分组时,将捕获所有分组,不是捕获符合模式的字符串,因此这里将整体作为一个分组\n?:表示不捕获该分组内容",
"def express(line):\n return line.group().replace('/t ', '')+'/t '\n\ntxt = re.sub(p, express, text)\ntxt.split('\\n')[:5]",
"express()为一个自定义函数,参数为match对象,可以将match对象中所有的'/t'替换为空,并在结尾加上'/t '\nre.sub()函数,第二个参数可以是函数,当其为函数时,会将pattern每一次匹配到的结果作为match对象,传参给该函数,接受该函数的返回值来进行字符串替换\n本例中将所有连续的时间标记词汇合成为一个\n\n4.2 网页文件(HTML)处理示例\n5. 常用正则表达式(待补充)\n进一步学习可参考官方文档以及《精通正则表达式(第3版)》"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.23/_downloads/cfbef36033f8d33f28c4fe2cfa35314a/30_cluster_ftest_spatiotemporal.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"2 samples permutation test on source data with spatio-temporal clustering\nTests if the source space data are significantly different between\n2 groups of subjects (simulated here using one subject's data).\nThe multiple comparisons problem is addressed with a cluster-level\npermutation test across space and time.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n# Eric Larson <larson.eric.d@gmail.com>\n# License: BSD (3-clause)\n\nimport os.path as op\n\nimport numpy as np\nfrom scipy import stats as stats\n\nimport mne\nfrom mne import spatial_src_adjacency\nfrom mne.stats import spatio_temporal_cluster_test, summarize_clusters_stc\nfrom mne.datasets import sample\n\nprint(__doc__)",
"Set parameters",
"data_path = sample.data_path()\nstc_fname = data_path + '/MEG/sample/sample_audvis-meg-lh.stc'\nsubjects_dir = data_path + '/subjects'\nsrc_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'\n\n# Load stc to in common cortical space (fsaverage)\nstc = mne.read_source_estimate(stc_fname)\nstc.resample(50, npad='auto')\n\n# Read the source space we are morphing to\nsrc = mne.read_source_spaces(src_fname)\nfsave_vertices = [s['vertno'] for s in src]\nmorph = mne.compute_source_morph(stc, 'sample', 'fsaverage',\n spacing=fsave_vertices, smooth=20,\n subjects_dir=subjects_dir)\nstc = morph.apply(stc)\nn_vertices_fsave, n_times = stc.data.shape\ntstep = stc.tstep * 1000 # convert to milliseconds\n\nn_subjects1, n_subjects2 = 7, 9\nprint('Simulating data for %d and %d subjects.' % (n_subjects1, n_subjects2))\n\n# Let's make sure our results replicate, so set the seed.\nnp.random.seed(0)\nX1 = np.random.randn(n_vertices_fsave, n_times, n_subjects1) * 10\nX2 = np.random.randn(n_vertices_fsave, n_times, n_subjects2) * 10\nX1[:, :, :] += stc.data[:, :, np.newaxis]\n# make the activity bigger for the second set of subjects\nX2[:, :, :] += 3 * stc.data[:, :, np.newaxis]\n\n# We want to compare the overall activity levels for each subject\nX1 = np.abs(X1) # only magnitude\nX2 = np.abs(X2) # only magnitude",
"Compute statistic\nTo use an algorithm optimized for spatio-temporal clustering, we\njust pass the spatial adjacency matrix (instead of spatio-temporal)",
"print('Computing adjacency.')\nadjacency = spatial_src_adjacency(src)\n\n# Note that X needs to be a list of multi-dimensional array of shape\n# samples (subjects_k) x time x space, so we permute dimensions\nX1 = np.transpose(X1, [2, 1, 0])\nX2 = np.transpose(X2, [2, 1, 0])\nX = [X1, X2]\n\n# Now let's actually do the clustering. This can take a long time...\n# Here we set the threshold quite high to reduce computation.\np_threshold = 0.0001\nf_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,\n n_subjects1 - 1, n_subjects2 - 1)\nprint('Clustering.')\nT_obs, clusters, cluster_p_values, H0 = clu =\\\n spatio_temporal_cluster_test(X, adjacency=adjacency, n_jobs=1,\n threshold=f_threshold, buffer_size=None)\n# Now select the clusters that are sig. at p < 0.05 (note that this value\n# is multiple-comparisons corrected).\ngood_cluster_inds = np.where(cluster_p_values < 0.05)[0]",
"Visualize the clusters",
"print('Visualizing clusters.')\n\n# Now let's build a convenient representation of each cluster, where each\n# cluster becomes a \"time point\" in the SourceEstimate\nfsave_vertices = [np.arange(10242), np.arange(10242)]\nstc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,\n vertices=fsave_vertices,\n subject='fsaverage')\n\n# Let's actually plot the first \"time point\" in the SourceEstimate, which\n# shows all the clusters, weighted by duration\nsubjects_dir = op.join(data_path, 'subjects')\n# blue blobs are for condition A != condition B\nbrain = stc_all_cluster_vis.plot('fsaverage', hemi='both',\n views='lateral', subjects_dir=subjects_dir,\n time_label='temporal extent (ms)',\n clim=dict(kind='value', lims=[0, 1, 40]))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
OceanPARCELS/parcels
|
parcels/examples/documentation_unstuck_Agrid.ipynb
|
mit
|
[
"Tutorial on implementing boundary conditions in an A grid\nIn another notebook, we have shown how particles may end up getting stuck on land, especially in A gridded velocity fields. Here we show how you can work around this problem and how large the effects of the solutions on the trajectories are.\nCommon solutions are:\n1. Delete the particles\n2. Displace the particles when they are within a certain distance of the coast.\n3. Implement free-slip or partial-slip boundary conditions\nIn the first two of these solutions, kernels are used to modify the trajectories near the coast. The kernels all consist of two parts:\n1. Flag particles whose trajectory should be modified\n2. Modify the trajectory accordingly\nIn the third solution, the interpolation method is changed; this has to be done when creating the FieldSet.\nThis notebook is mainly focused on comparing the different modifications to the trajectory. The flagging of particles is also very relevant however and further discussion on this is encouraged. Some options shown here are:\n1. Flag particles within a specific distance to the shore\n2. Flag particles in any gridcell that has a shore edge\nAs argued in the previous notebook, it is important to accurately plot the grid discretization, in order to understand the motion of particles near the boundary. The velocity fields can best be depicted using points or arrows that define the velocity at a single position. Four of these nodes then form gridcells that can be shown using tiles, for example with matplotlib.pyplot.pcolormesh.",
"import numpy as np\nimport numpy.ma as ma\nfrom netCDF4 import Dataset\nimport xarray as xr\nfrom scipy import interpolate\n\nfrom parcels import FieldSet, ParticleSet, JITParticle, ScipyParticle, AdvectionRK4, Variable, Field,GeographicPolar,Geographic\nfrom datetime import timedelta as delta\n\nimport matplotlib.pyplot as plt\nimport matplotlib.gridspec as gridspec\nfrom matplotlib.colors import ListedColormap\nfrom matplotlib.lines import Line2D\nfrom copy import copy\nimport cmocean",
"1. Particle deletion\nThe simplest way to avoid trajectories that interact with the coastline is to remove them entirely. To do this, all Particle objects have a delete function that can be invoked in a kernel using particle.delete()\n2. Displacement\nA simple concept to avoid particles moving onto shore is displacing them towards the ocean as they get close to shore. This is for example done in Kaandorp et al. (2020) and Delandmeter and van Sebille (2018). To do so, a particle must be 'aware' of where the shore is and displaced accordingly. In Parcels, we can do this by adding a 'displacement' Field to the Fieldset, which contains vectors pointing away from shore. \nImport a velocity field - the A gridded SMOC product",
"file_path = \"GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_20190704_R20190705.nc\"\nmodel = xr.open_dataset(file_path)\n\n# --------- Define meshgrid coordinates to plot velocity field with matplotlib pcolormesh ---------\nlatmin = 1595\nlatmax = 1612\nlonmin = 2235\nlonmax = 2260\n\n# Velocity nodes\nlon_vals, lat_vals = np.meshgrid(model['longitude'], model['latitude'])\nlons_plot = lon_vals[latmin:latmax,lonmin:lonmax]\nlats_plot = lat_vals[latmin:latmax,lonmin:lonmax]\n\ndlon = 1/12\ndlat = 1/12\n\n# Centers of the gridcells formed by 4 nodes = velocity nodes + 0.5 dx\nx = model['longitude'][:-1]+np.diff(model['longitude'])/2\ny = model['latitude'][:-1]+np.diff(model['latitude'])/2\nlon_centers, lat_centers = np.meshgrid(x, y)\n\ncolor_land = copy(plt.get_cmap('Reds'))(0)\ncolor_ocean = copy(plt.get_cmap('Reds'))(128)",
"Make a landmask where land = 1 and ocean = 0.",
"def make_landmask(fielddata):\n \"\"\"Returns landmask where land = 1 and ocean = 0\n fielddata is a netcdf file.\n \"\"\"\n datafile = Dataset(fielddata)\n\n landmask = datafile.variables['uo'][0, 0]\n landmask = np.ma.masked_invalid(landmask)\n landmask = landmask.mask.astype('int')\n\n return landmask\n\nlandmask = make_landmask(file_path)\n\n# Interpolate the landmask to the cell centers - only cells with 4 neighbouring land points will be land\nfl = interpolate.interp2d(model['longitude'],model['latitude'],landmask)\n\nl_centers = fl(lon_centers[0,:],lat_centers[:,0]) \n\nlmask = np.ma.masked_values(l_centers,1) # land when interpolated value == 1\n\nfig = plt.figure(figsize=(12,5))\nfig.suptitle('Figure 1. Landmask', fontsize=18, y=1.01)\ngs = gridspec.GridSpec(ncols=2, nrows=1, figure=fig)\n\nax0 = fig.add_subplot(gs[0, 0])\nax0.set_title('A) lazy use of pcolormesh', fontsize=11)\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\n\nland0 = ax0.pcolormesh(lons_plot, lats_plot, landmask[latmin:latmax,lonmin:lonmax],cmap='Reds_r', shading='auto')\nax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05,edgecolors='k')\n\ncustom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]\nax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)\n\nax1 = fig.add_subplot(gs[0, 1])\nax1.set_title('B) correct A grid representation in Parcels', fontsize=11)\nax1.set_ylabel('Latitude [degrees]')\nax1.set_xlabel('Longitude [degrees]')\n\nland1 = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax1.scatter(lons_plot, lats_plot, 
c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05,edgecolors='k')\n\nax1.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)",
"Figure 1 shows why it is important to be precise when visualizing the model land and ocean. Parcels trajectories should not cross the land boundary between two land nodes as seen in 1B.\nDetect the coast\nWe can detect the edges between land and ocean nodes by computing the Laplacian with the 4 nearest neighbors [i+1,j], [i-1,j], [i,j+1] and [i,j-1]:\n$$\\nabla^2 \\text{landmask} = \\partial_{xx} \\text{landmask} + \\partial_{yy} \\text{landmask},$$\nand filtering the positive and negative values. This gives us the location of coast nodes (ocean nodes next to land) and shore nodes (land nodes next to the ocean). \nAdditionally, we can find the nodes that border the coast/shore diagonally by considering the 8 nearest neighbors, including [i+1,j+1], [i-1,j+1], [i-1,j+1] and [i-1,j-1].",
"def get_coastal_nodes(landmask):\n \"\"\"Function that detects the coastal nodes, i.e. the ocean nodes directly\n next to land. Computes the Laplacian of landmask.\n\n - landmask: the land mask built using `make_landmask`, where land cell = 1\n and ocean cell = 0.\n\n Output: 2D array array containing the coastal nodes, the coastal nodes are\n equal to one, and the rest is zero.\n \"\"\"\n mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)\n mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)\n mask_lap -= 4*landmask\n coastal = np.ma.masked_array(landmask, mask_lap > 0)\n coastal = coastal.mask.astype('int')\n\n return coastal\n\ndef get_shore_nodes(landmask):\n \"\"\"Function that detects the shore nodes, i.e. the land nodes directly\n next to the ocean. Computes the Laplacian of landmask.\n\n - landmask: the land mask built using `make_landmask`, where land cell = 1\n and ocean cell = 0.\n\n Output: 2D array array containing the shore nodes, the shore nodes are\n equal to one, and the rest is zero.\n \"\"\"\n mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)\n mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)\n mask_lap -= 4*landmask\n shore = np.ma.masked_array(landmask, mask_lap < 0)\n shore = shore.mask.astype('int')\n\n return shore\n\ndef get_coastal_nodes_diagonal(landmask):\n \"\"\"Function that detects the coastal nodes, i.e. the ocean nodes where \n one of the 8 nearest nodes is land. 
Computes the Laplacian of landmask\n and the Laplacian of the 45 degree rotated landmask.\n\n - landmask: the land mask built using `make_landmask`, where land cell = 1\n and ocean cell = 0.\n\n Output: 2D array array containing the coastal nodes, the coastal nodes are\n equal to one, and the rest is zero.\n \"\"\"\n mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)\n mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)\n mask_lap += np.roll(landmask, (-1,1), axis=(0,1)) + np.roll(landmask, (1, 1), axis=(0,1))\n mask_lap += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (1, -1), axis=(0,1))\n mask_lap -= 8*landmask\n coastal = np.ma.masked_array(landmask, mask_lap > 0)\n coastal = coastal.mask.astype('int')\n \n return coastal\n \ndef get_shore_nodes_diagonal(landmask):\n \"\"\"Function that detects the shore nodes, i.e. the land nodes where \n one of the 8 nearest nodes is ocean. Computes the Laplacian of landmask \n and the Laplacian of the 45 degree rotated landmask.\n\n - landmask: the land mask built using `make_landmask`, where land cell = 1\n and ocean cell = 0.\n\n Output: 2D array array containing the shore nodes, the shore nodes are\n equal to one, and the rest is zero.\n \"\"\"\n mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)\n mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)\n mask_lap += np.roll(landmask, (-1,1), axis=(0,1)) + np.roll(landmask, (1, 1), axis=(0,1))\n mask_lap += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (1, -1), axis=(0,1))\n mask_lap -= 8*landmask\n shore = np.ma.masked_array(landmask, mask_lap < 0)\n shore = shore.mask.astype('int')\n\n return shore\n\ncoastal = get_coastal_nodes_diagonal(landmask)\nshore = get_shore_nodes_diagonal(landmask)\n\nfig = plt.figure(figsize=(10,4), constrained_layout=True)\nfig.suptitle('Figure 2. 
Coast and Shore', fontsize=18, y=1.04)\ngs = gridspec.GridSpec(ncols=2, nrows=1, figure=fig)\n\n\nax0 = fig.add_subplot(gs[0, 0])\nland0 = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\ncoa = ax0.scatter(lons_plot,lats_plot, c=coastal[latmin:latmax,lonmin:lonmax], cmap='Reds_r', s=50)\nax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05)\n\nax0.set_title('Coast')\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\n\ncustom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=5, lw=0),\n Line2D([0], [0], c = color_ocean, marker='o', markersize=7, markeredgecolor='w', markeredgewidth=2, lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='firebrick', lw=0)]\nax0.legend(custom_lines, ['ocean node', 'coast node', 'land node'], bbox_to_anchor=(.01,.9), loc='center left', borderaxespad=0.,framealpha=1, facecolor='silver')\n\n\nax1 = fig.add_subplot(gs[0, 1])\nland1 = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nsho = ax1.scatter(lons_plot,lats_plot, c=shore[latmin:latmax,lonmin:lonmax], cmap='Reds_r', s=50)\nax1.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05)\n\nax1.set_title('Shore')\nax1.set_ylabel('Latitude [degrees]')\nax1.set_xlabel('Longitude [degrees]')\n\ncustom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=5, lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='w', markeredgewidth=2, lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='firebrick', lw=0)]\nax1.legend(custom_lines, ['ocean node', 'shore node', 'land node'], bbox_to_anchor=(.01,.9), loc='center left', 
borderaxespad=0.,framealpha=1, facecolor='silver')",
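The Laplacian-based detection in get_coastal_nodes can be sanity-checked on a toy landmask: a single land node surrounded by ocean should yield exactly four direct coastal neighbours.

```python
import numpy as np

# Toy landmask: one land node in the middle of a 5x5 ocean
landmask = np.zeros((5, 5), dtype=int)
landmask[2, 2] = 1

# Same 4-neighbour Laplacian as in get_coastal_nodes
mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)
mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)
mask_lap -= 4 * landmask
coastal = (mask_lap > 0).astype(int)

print(coastal.sum())  # 4
```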
"Assigning coastal velocities\nFor the displacement kernel we define a velocity field that pushes the particles back to the ocean. This velocity is a vector normal to the shore. \nFor the shore nodes directly next to the ocean, we can take the simple derivative of landmask and project the result to the shore array, this will capture the orientation of the velocity vectors. \nFor the shore nodes that only have a diagonal component, we need to take into account the diagonal nodes also and project the vectors only onto the inside corners that border the ocean diagonally.\nThen to make the vectors unitary, we normalize them by their magnitude.",
"def create_displacement_field(landmask, double_cell=False):\n \"\"\"Function that creates a displacement field 1 m/s away from the shore.\n\n - landmask: the land mask dUilt using `make_landmask`.\n - double_cell: Boolean for determining if you want a double cell.\n Default set to False.\n\n Output: two 2D arrays, one for each camponent of the velocity.\n \"\"\"\n shore = get_shore_nodes(landmask)\n shore_d = get_shore_nodes_diagonal(landmask) # bordering ocean directly and diagonally\n shore_c = shore_d - shore # corner nodes that only border ocean diagonally\n \n Ly = np.roll(landmask, -1, axis=0) - np.roll(landmask, 1, axis=0) # Simple derivative\n Lx = np.roll(landmask, -1, axis=1) - np.roll(landmask, 1, axis=1)\n \n Ly_c = np.roll(landmask, -1, axis=0) - np.roll(landmask, 1, axis=0)\n Ly_c += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (-1,1), axis=(0,1)) # Include y-component of diagonal neighbours\n Ly_c += - np.roll(landmask, (1,-1), axis=(0,1)) - np.roll(landmask, (1,1), axis=(0,1))\n \n Lx_c = np.roll(landmask, -1, axis=1) - np.roll(landmask, 1, axis=1)\n Lx_c += np.roll(landmask, (-1,-1), axis=(1,0)) + np.roll(landmask, (-1,1), axis=(1,0)) # Include x-component of diagonal neighbours\n Lx_c += - np.roll(landmask, (1,-1), axis=(1,0)) - np.roll(landmask, (1,1), axis=(1,0))\n \n v_x = -Lx*(shore)\n v_y = -Ly*(shore)\n \n v_x_c = -Lx_c*(shore_c)\n v_y_c = -Ly_c*(shore_c)\n \n v_x = v_x + v_x_c\n v_y = v_y + v_y_c\n\n magnitude = np.sqrt(v_y**2 + v_x**2)\n # the coastal nodes between land create a problem. Magnitude there is zero\n # I force it to be 1 to avoid problems when normalizing.\n ny, nx = np.where(magnitude == 0)\n magnitude[ny, nx] = 1\n\n v_x = v_x/magnitude\n v_y = v_y/magnitude\n\n return v_x, v_y\n\nv_x, v_y = create_displacement_field(landmask)\n\nfig = plt.figure(figsize=(7,6), constrained_layout=True)\nfig.suptitle('Figure 3. 
Displacement field', fontsize=18, y=1.04)\ngs = gridspec.GridSpec(ncols=1, nrows=1, figure=fig)\n\nax0 = fig.add_subplot(gs[0, 0])\nland = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=30,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')\nquiv = ax0.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange',angles='xy', scale_units='xy', scale=19, width=0.005)\n\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\n\ncustom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]\nax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)",
"Calculate the distance to the shore\nIn this tutorial, we will only displace particles that are within some distance (smaller than the grid size) to the shore. \nFor this we map the distance of the coastal nodes to the shore: Coastal nodes directly neighboring the shore are $1dx$ away. Diagonal neighbors are $\\sqrt{2}dx$ away. The particles can then sample this field and will only be displaced when closer than a threshold value. This gives a crude estimate of the distance.",
"def distance_to_shore(landmask, dx=1):\n \"\"\"Function that computes the distance to the shore. It is based on the\n `get_coastal_nodes` algorithm.\n\n - landmask: the land mask built using the `make_landmask` function.\n - dx: the grid cell dimension. This is a crude approximation of the real\n distance (be careful).\n\n Output: 2D array containing the distances from shore.\n \"\"\"\n ci = get_coastal_nodes(landmask) # direct neighbours\n dist = ci*dx # 1 dx away\n \n ci_d = get_coastal_nodes_diagonal(landmask) # diagonal neighbours\n dist_d = (ci_d - ci)*np.sqrt(2*dx**2) # sqrt(2) dx away\n \n return dist+dist_d\n\nd_2_s = distance_to_shore(landmask)\n\nfig = plt.figure(figsize=(6,5), constrained_layout=True)\n\nax0 = fig.add_subplot()\nax0.set_title('Figure 4. Distance to shore', fontsize=18)\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\n\nland = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nd2s = ax0.scatter(lons_plot,lats_plot, c=d_2_s[latmin:latmax,lonmin:lonmax])\n\nplt.colorbar(d2s,ax=ax0, label='Distance [gridcells]')",
"Particle and Kernels\nThe distance to shore, used to flag whether a particle must be displaced, is stored in a particle Variable d2s. To visualize the displacement, the zonal and meridional displacements are stored in the variables dU and dV. \nTo write the displacement vector to the output before displacing the particle, the set_displacement kernel is invoked after the advection kernel. Then only in the next timestep are particles displaced by displace, before resuming the advection.",
"class DisplacementParticle(JITParticle):\n dU = Variable('dU')\n dV = Variable('dV')\n d2s = Variable('d2s', initial=1e3)\n \ndef set_displacement(particle, fieldset, time):\n particle.d2s = fieldset.distance2shore[time, particle.depth,\n particle.lat, particle.lon]\n if particle.d2s < 0.5:\n dispUab = fieldset.dispU[time, particle.depth, particle.lat,\n particle.lon]\n dispVab = fieldset.dispV[time, particle.depth, particle.lat,\n particle.lon]\n particle.dU = dispUab\n particle.dV = dispVab\n else:\n particle.dU = 0.\n particle.dV = 0.\n \ndef displace(particle, fieldset, time): \n if particle.d2s < 0.5:\n particle.lon += particle.dU*particle.dt\n particle.lat += particle.dV*particle.dt",
"Simulation\nLet us first do a simulation with the default AdvectionRK4 kernel for comparison later",
"SMOCfile = 'GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_201907*.nc'\nfilenames = {'U': SMOCfile,\n 'V': SMOCfile}\n\nvariables = {'U': 'uo',\n 'V': 'vo'}\n\ndimensions = {'U': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'},\n 'V': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}}\nindices = {'lon': range(lonmin, lonmax), 'lat': range(latmin, latmax)} # to load only a small part of the domain\n\nfieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices)",
"And we use the following set of 9 particles",
"npart = 9 # number of particles to be released\nlon = np.linspace(7, 7.2, int(np.sqrt(npart)), dtype=np.float32)\nlat = np.linspace(53.45, 53.65, int(np.sqrt(npart)), dtype=np.float32)\nlons, lats = np.meshgrid(lon,lat)\ntime = np.zeros(lons.size)\n\nruntime = delta(hours=100)\ndt = delta(minutes=10)\n\npset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)\n\nkernels = AdvectionRK4\n\noutput_file = pset.ParticleFile(name=\"SMOC.nc\", outputdt=delta(hours=1))\n\npset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)\noutput_file.close()",
"Now let's add the Fields we created above to the FieldSet and do a simulation to test the displacement of the particles as they approach the shore.",
"fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices)\nu_displacement = v_x\nv_displacement = v_y\nfieldset.add_field(Field('dispU', data=u_displacement[latmin:latmax,lonmin:lonmax],\n lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,\n mesh='spherical'))\nfieldset.add_field(Field('dispV', data=v_displacement[latmin:latmax,lonmin:lonmax],\n lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,\n mesh='spherical'))\nfieldset.dispU.units = GeographicPolar()\nfieldset.dispV.units = Geographic()\n\nfieldset.add_field(Field('landmask', landmask[latmin:latmax,lonmin:lonmax],\n lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,\n mesh='spherical'))\nfieldset.add_field(Field('distance2shore', d_2_s[latmin:latmax,lonmin:lonmax],\n lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,\n mesh='spherical'))\n\npset = ParticleSet(fieldset=fieldset, pclass=DisplacementParticle, lon=lons, lat=lats, time=time)\n\nkernels = pset.Kernel(displace)+pset.Kernel(AdvectionRK4)+pset.Kernel(set_displacement)\n\noutput_file = pset.ParticleFile(name=\"SMOC-disp.nc\", outputdt=delta(hours=1))\n\npset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)\noutput_file.close()",
"Output\nTo visualize the effect of the displacement, the particle trajectory output can be compared to the simulation without the displacement kernel.",
"ds_SMOC = xr.open_dataset('SMOC.nc')\nds_SMOC_disp = xr.open_dataset('SMOC-disp.nc')\n\nfig = plt.figure(figsize=(16,4), facecolor='silver', constrained_layout=True)\nfig.suptitle('Figure 5. Trajectory difference', fontsize=18, y=1.06)\ngs = gridspec.GridSpec(ncols=4, nrows=1, width_ratios=[1,1,1,0.3], figure=fig)\n\nax0 = fig.add_subplot(gs[0, 0])\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\nax0.set_title('A) No displacement', fontsize=14, fontweight = 'bold')\nax0.set_xlim(6.9, 7.6)\nax0.set_ylim(53.4, 53.8)\n\nland = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')\nax0.plot(ds_SMOC['lon'].T, ds_SMOC['lat'].T,linewidth=3, zorder=1)\nax0.scatter(ds_SMOC['lon'], ds_SMOC['lat'], color='limegreen', zorder=2)\n\nn_p0 = 0\nax1 = fig.add_subplot(gs[0, 1])\nax1.set_ylabel('Latitude [degrees]')\nax1.set_xlabel('Longitude [degrees]')\nax1.set_title('B) Displacement trajectory '+str(n_p0), fontsize=14, fontweight = 'bold')\nax1.set_xlim(6.9, 7.3)\nax1.set_ylim(53.4, 53.55)\n\nland = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax1.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')\nquiv = ax1.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange', scale=19, width=0.005)\nax1.plot(ds_SMOC_disp['lon'][n_p0].T, ds_SMOC_disp['lat'][n_p0].T,linewidth=3, zorder=1)\nax1.scatter(ds_SMOC['lon'][n_p0], ds_SMOC['lat'][n_p0], color='limegreen', zorder=2)\nax1.scatter(ds_SMOC_disp['lon'][n_p0], ds_SMOC_disp['lat'][n_p0], cmap='viridis_r', 
zorder=2)\nax1.quiver(ds_SMOC_disp['lon'][n_p0], ds_SMOC_disp['lat'][n_p0],ds_SMOC_disp['dU'][n_p0], ds_SMOC_disp['dV'][n_p0], color='w',angles='xy', scale_units='xy', scale=2e-4, zorder=3)\n\nn_p1 = 4\nax2 = fig.add_subplot(gs[0, 2])\nax2.set_ylabel('Latitude [degrees]')\nax2.set_xlabel('Longitude [degrees]')\nax2.set_title('C) Displacement trajectory '+str(n_p1), fontsize=14, fontweight = 'bold')\nax2.set_xlim(7., 7.6)\nax2.set_ylim(53.4, 53.8)\n\nland = ax2.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax2.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')\nq1 = ax2.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange', scale=19, width=0.005)\nax2.plot(ds_SMOC_disp['lon'][n_p1].T, ds_SMOC_disp['lat'][n_p1].T,linewidth=3, zorder=1)\nax2.scatter(ds_SMOC['lon'][n_p1], ds_SMOC['lat'][n_p1], color='limegreen', zorder=2)\nax2.scatter(ds_SMOC_disp['lon'][n_p1], ds_SMOC_disp['lat'][n_p1], cmap='viridis_r', zorder=2)\nq2 = ax2.quiver(ds_SMOC_disp['lon'][n_p1], ds_SMOC_disp['lat'][n_p1],ds_SMOC_disp['dU'][n_p1], ds_SMOC_disp['dV'][n_p1], color='w',angles='xy', scale_units='xy', scale=2e-4, zorder=3)\n\n\nax3 = fig.add_subplot(gs[0, 3])\nax3.axis('off')\ncustom_lines = [Line2D([0], [0], c = 'tab:blue', marker='o', markersize=10),\n Line2D([0], [0], c = 'limegreen', marker='o', markersize=10),\n Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]\nax3.legend(custom_lines, ['with displacement', 'without displacement', 'ocean point', 'land point'], bbox_to_anchor=(0.,0.6), loc='center left', borderaxespad=0.,framealpha=1)\n\nax2.quiverkey(q1, 1.3, 0.9, 2, 'displacement field', coordinates='axes')\nax2.quiverkey(q2, 
1.3, 0.8, 1e-5, 'particle displacement', coordinates='axes')\nplt.show()",
"Conclusion\nFigure 5 shows how particles are prevented from approaching the coast in a 5 day simulation. Note that to show each computation, the integration timestep (dt) is equal to the output timestep (outputdt): 1 hour. This is relatively large, and causes the displacements to be relatively infrequent and on the order of 4 km each. It is advised to use a smaller dt in real simulations.",
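The "order of 4 km" quoted in the conclusion follows directly from the unit magnitude of the displacement field; a quick back-of-the-envelope check (illustrative arithmetic only, assuming the 1 hour timestep stated above):

```python
# The displacement field is normalized to 1 m/s, so one displacement
# step moves a flagged particle by |v| * dt.
speed = 1.0              # m/s, magnitude of the normalized displacement field
dt = 60.0 * 60.0         # s, the 1 hour timestep stated in the conclusion
displacement_km = speed * dt / 1000.0
print(displacement_km)   # 3.6, i.e. on the order of 4 km per step
```

With the smaller dt recommended for real simulations (e.g. 10 minutes), the per-step displacement drops proportionally.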
"d2s_cmap = copy(plt.get_cmap('cmo.deep_r'))\nd2s_cmap.set_over('gold')\n\nfig = plt.figure(figsize=(11,6), constrained_layout=True)\n\nax0 = fig.add_subplot()\nax0.set_title('Figure 6. Distance to shore', fontsize=18)\nland = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\nax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r', edgecolor='k',vmin=-0.05,vmax=0.05)\nax0.plot(ds_SMOC_disp['lon'].T, ds_SMOC_disp['lat'].T,linewidth=3, zorder=1)\nd2s = ax0.scatter(ds_SMOC_disp['lon'], ds_SMOC_disp['lat'], c=ds_SMOC_disp['d2s'],cmap=d2s_cmap, s=20,vmax=0.5, zorder=2)\nq2 = ax0.quiver(ds_SMOC_disp['lon'], ds_SMOC_disp['lat'],ds_SMOC_disp['dU'], ds_SMOC_disp['dV'], color='k',angles='xy', scale_units='xy', scale=2.3e-4, width=0.003, zorder=3)\n\nax0.set_xlim(6.9, 8)\nax0.set_ylim(53.4, 53.8)\nax0.set_ylabel('Latitude [degrees]')\nax0.set_xlabel('Longitude [degrees]')\nplt.colorbar(d2s,ax=ax0, label='Distance [gridcells]',extend='max')\n\ncolor_land = copy(plt.get_cmap('Reds'))(0)\ncolor_ocean = copy(plt.get_cmap('Reds'))(128)\n\ncustom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]\nax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.95), loc='center left', borderaxespad=0.,framealpha=1)\n",
"3. Slip boundary conditions\nThe reason trajectories do not neatly follow the coast in A grid velocity fields is that the lack of staggering causes both velocity components to go to zero in the same way towards the cell edge. This no-slip condition can be turned into a free-slip or partial-slip condition by separately considering the cross-shore and along-shore velocity components as in a staggered C-grid. Each interpolation of the velocity field must then be corrected with a factor depending on the direction of the boundary. \nThese boundary conditions have been implemented in Parcels as interp_method=partialslip and interp_method=freeslip, which we will show in the plot below",
"cells_x = np.array([[0,0],[1,1],[2,2]])\ncells_y = np.array([[0,1],[0,1],[0,1]])\nU0 = 1\nV0 = 1\nU = np.array([U0,U0,0,0,0,0])\nV = np.array([V0,V0,0,0,0,0])\nxsi = np.linspace(0.001,0.999)\n\nu_interp = U0*(1-xsi)\nv_interp = V0*(1-xsi)\n\nu_freeslip = u_interp\nv_freeslip = v_interp/(1-xsi)\n\nu_partslip = u_interp\nv_partslip = v_interp*(1-.5*xsi)/(1-xsi)\n\n\nfig = plt.figure(figsize=(15,4), constrained_layout=True)\nfig.suptitle('Figure 7. Boundary conditions', fontsize=18, y=1.06)\ngs = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)\n\nax0 = fig.add_subplot(gs[0, 0])\n\nax0.pcolormesh(cells_x, cells_y, np.array([[0],[1]]), cmap='Greys',edgecolor='k')\nax0.scatter(cells_x,cells_y, c='w', edgecolor='k')\nax0.quiver(cells_x,cells_y,U,V, scale=15)\n\nax0.plot(xsi, u_interp,linewidth=5, label='u_interpolation')\nax0.plot(xsi, v_interp, linestyle='dashed',linewidth=5, label='v_interpolation')\nax0.set_xlim(-0.3,2.3)\nax0.set_ylim(-0.5,1.5)\nax0.set_ylabel('u - v [-]', fontsize=14)\nax0.set_xlabel(r'$\\xi$', fontsize = 14)\nax0.set_title('A) Bilinear interpolation')\nax0.legend(loc='lower right')\n\n\nax1 = fig.add_subplot(gs[0, 1])\n\nax1.pcolormesh(cells_x, cells_y,np.array([[0],[1]]), cmap='Greys',edgecolor='k')\nax1.scatter(cells_x,cells_y, c='w', edgecolor='k')\nax1.quiver(cells_x,cells_y,U,V, scale=15)\n\nax1.plot(xsi, u_freeslip,linewidth=5, label='u_freeslip')\nax1.plot(xsi, v_freeslip, linestyle='dashed',linewidth=5, label='v_freeslip')\nax1.set_xlim(-0.3,2.3)\nax1.set_ylim(-0.5,1.5)\nax1.set_xlabel(r'$\\xi$', fontsize = 14)\nax1.text(0., 1.3, r'$v_{freeslip} = v_{interpolation}*\\frac{1}{1-\\xi}$', fontsize = 18)\nax1.set_title('B) Free slip condition')\nax1.legend(loc='lower right')\n\nax2 = fig.add_subplot(gs[0, 2])\n\nax2.pcolormesh(cells_x, cells_y,np.array([[0],[1]]), cmap='Greys',edgecolor='k')\nax2.scatter(cells_x,cells_y, c='w', edgecolor='k')\nax2.quiver(cells_x,cells_y,U,V, scale=15)\n\nax2.plot(xsi, u_partslip,linewidth=5, 
label='u_partialslip')\nax2.plot(xsi, v_partslip, linestyle='dashed',linewidth=5, label='v_partialslip')\nax2.set_xlim(-0.3,2.3)\nax2.set_ylim(-0.5,1.5)\nax2.set_xlabel(r'$\\xi$', fontsize = 14)\nax2.text(0., 1.3, r'$v_{partialslip} = v_{interpolation}*\\frac{1-1/2\\xi}{1-\\xi}$', fontsize = 18)\nax2.set_title('C) Partial slip condition')\nax2.legend(loc='lower right');",
"Consider a grid cell with a solid boundary to the right and vectors $(U0, V0) = (1, 1)$ on the left-hand nodes, as in Figure 7. Parcels' bilinear interpolation will interpolate in the $x$ and $y$ directions. Since this cell is invariant in the $y$-direction, we only consider the effect in the direction normal to the boundary. In the $x$-direction, both $u$ and $v$ are interpolated along $\\xi$, the normalized $x$-coordinate within the cell. This is plotted with the blue and orange lines in subfigure 7A.\nA free slip boundary condition is defined by $\\frac{\\partial v}{\\partial \\xi}=0$: the tangential velocity is constant in the direction normal to the boundary. This can be achieved in a kernel after interpolation by dividing by $(1-\\xi)$. The resulting velocity profiles are shown in subfigure 7B.\nA partial slip boundary condition is defined by a tangential velocity profile that decreases toward the boundary, but not to zero. This can be achieved by multiplying the interpolated velocity by $\\frac{1-1/2\\xi}{1-\\xi}$. This is shown in subfigure 7C.\nFor each direction and boundary condition a different factor must be used (where $\\xi$ and $\\eta$ are the normalized x- and y-coordinates within the cell, respectively):\n- Free slip\n1: $f_u = \\frac{1}{\\eta}$\n2: $f_u = \\frac{1}{(1-\\eta)}$\n4: $f_v = \\frac{1}{\\xi}$\n8: $f_v = \\frac{1}{(1-\\xi)}$\n\n- Partial slip\n1: $f_u = \\frac{1/2+1/2\\eta}{\\eta}$\n2: $f_u = \\frac{1-1/2\\eta}{1-\\eta}$\n4: $f_v = \\frac{1/2+1/2\\xi}{\\xi}$\n8: $f_v = \\frac{1-1/2\\xi}{1-\\xi}$\n\nWe now simulate the three different boundary conditions by advecting the 9 particles from above in a time-evolving SMOC dataset from CMEMS.",
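The per-boundary factors listed above all depend only on $s$, the normalized distance to the solid boundary ($s$ is $\eta$, $1-\eta$, $\xi$ or $1-\xi$ for codes 1, 2, 4 and 8, respectively). A minimal sketch of this (note: `slip_factor` is a hypothetical helper for illustration, not part of Parcels):

```python
import numpy as np

def slip_factor(s, condition="freeslip"):
    """Correction factor for the interpolated tangential velocity.

    s: normalized distance to the solid boundary
       (eta, 1-eta, xsi or 1-xsi, depending on the boundary direction).
    """
    if condition == "freeslip":
        return 1.0 / s                  # f = 1/s
    if condition == "partialslip":
        return (1.0 + s) / (2.0 * s)    # f = (1/2 + s/2)/s
    raise ValueError(f"unknown condition: {condition}")

# Solid boundary on the right of the cell (code 8, s = 1 - xsi), as in Figure 7.
xsi = np.linspace(0.001, 0.999, 50)
V0 = 1.0
v_interp = V0 * (1 - xsi)  # bilinear (no-slip) interpolation of v
v_free = v_interp * slip_factor(1 - xsi, "freeslip")
v_part = v_interp * slip_factor(1 - xsi, "partialslip")

assert np.allclose(v_free, V0)                 # free slip: tangential v constant
assert np.all(v_part > v_interp) and np.all(v_part <= V0)
```

Checking code 2 against the table: with $s = 1-\eta$ the partial-slip factor $(1+s)/(2s)$ reduces to $(1-1/2\eta)/(1-\eta)$, matching the expression for $f_u$ above.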
"SMOCfile = 'GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_201907*.nc'\nfilenames = {'U': SMOCfile,\n 'V': SMOCfile}\n\nvariables = {'U': 'uo',\n 'V': 'vo'}\n\ndimensions = {'U': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'},\n 'V': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}}\n\nindices = {'lon': range(lonmin, lonmax), 'lat': range(latmin, latmax)}",
"First up is the partialslip interpolation (note that we have to redefine the FieldSet because the interp_method=partialslip is set there)",
"fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices, \n interp_method={'U': 'partialslip', 'V': 'partialslip'}) # Setting the interpolation for U and V\npset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)\n\nkernels = pset.Kernel(AdvectionRK4)\n\noutput_file = pset.ParticleFile(name=\"SMOC_partialslip.nc\", outputdt=delta(hours=1))\n\npset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)\noutput_file.close() # export the trajectory data to a netcdf file",
"And then we also use the freeslip interpolation",
"fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices, \n interp_method={'U': 'freeslip', 'V': 'freeslip'}) # Setting the interpolation for U and V\npset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)\n\nkernels = pset.Kernel(AdvectionRK4)\n\noutput_file = pset.ParticleFile(name=\"SMOC_freeslip.nc\", outputdt=delta(hours=1))\n\npset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)\noutput_file.close() # export the trajectory data to a netcdf file",
"Now we can load and plot the three different interpolation_methods",
"ds_SMOC = xr.open_dataset('SMOC.nc')\nds_SMOC_part = xr.open_dataset('SMOC_partialslip.nc')\nds_SMOC_free = xr.open_dataset('SMOC_freeslip.nc')\n\nfig = plt.figure(figsize=(18,5), constrained_layout=True)\nfig.suptitle('Figure 8. Solution comparison', fontsize=18, y=1.06)\ngs = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)\n\nn_p=[[0, 1, 3, 4, 6, 7, 8], 0, 6]\n\nfor i in range(3):\n ax = fig.add_subplot(gs[0, i])\n ax.set_title(chr(i+65)+') Trajectory '+str(n_p[i]), fontsize = 18)\n land = ax.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')\n ax.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')\n\n ax.scatter(ds_SMOC['lon'][n_p[i]], ds_SMOC['lat'][n_p[i]], s=30, color='limegreen', zorder=2)\n\n ax.scatter(ds_SMOC_disp['lon'][n_p[i]], ds_SMOC_disp['lat'][n_p[i]], s=25, color='tab:blue', zorder=2)\n\n ax.scatter(ds_SMOC_part['lon'][n_p[i]], ds_SMOC_part['lat'][n_p[i]], s=20, color='magenta', zorder=2)\n\n ax.scatter(ds_SMOC_free['lon'][n_p[i]], ds_SMOC_free['lat'][n_p[i]], s=15, color='gold', zorder=2)\n\n ax.set_xlim(6.9, 7.6)\n ax.set_ylim(53.4, 53.9)\n ax.set_ylabel('Latitude [degrees]')\n ax.set_xlabel('Longitude [degrees]')\n\n color_land = copy(plt.get_cmap('Reds'))(0)\n color_ocean = copy(plt.get_cmap('Reds'))(128)\n\n custom_lines = [Line2D([0], [0], c = 'limegreen', marker='o', markersize=10, lw=0),\n Line2D([0], [0], c = 'tab:blue', marker='o', markersize=10, lw=0),\n Line2D([0], [0], c = 'magenta', marker='o', markersize=10, lw=0),\n Line2D([0], [0], c = 'gold', marker='o', markersize=10, lw=0),\n Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),\n Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]\n ax.legend(custom_lines, ['basic RK4','displacement','partial slip', 'free slip','ocean point', 'land 
point'], bbox_to_anchor=(.01,.8), loc='center left', borderaxespad=0.,framealpha=1)",
"Figure 8 shows the influence of the different solutions on the particle trajectories near the shore. Subfigure 8B shows how the different solutions make trajectory 0 move along the shore and around the corner of the model geometry. Subfigure 8C shows that trajectories are unaffected by the different interpolation schemes as long as they do not cross a coastal grid cell."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
gngdb/variational-dropout
|
notebooks/Investigative Adaptive Properties.ipynb
|
mit
|
[
"This should just be a short notebook showing some of the properties of the adaptive dropout parameters that we can learn with this method.",
"import varout.layers\nimport varout.objectives\nimport varout.experiments\nimport lasagne.layers\nimport lasagne.nonlinearities\nimport lasagne.init\nimport theano\nimport theano.tensor as T\nimport numpy as np\nimport holonets\nimport holoviews as hv\n%load_ext holoviews.ipython\n\ndataset = varout.experiments.load_data()\n\nbatch_size, input_dim, n_hidden, output_dim = 200, 784, 100, 10\nl_in = lasagne.layers.InputLayer((batch_size, input_dim))\nl_drop1 = varout.layers.VariationalDropoutA(l_in, p=0.2, adaptive='elementwise')\nl_hidden1 = lasagne.layers.DenseLayer(l_drop1, num_units=n_hidden)\nl_drop2 = varout.layers.VariationalDropoutA(l_hidden1, p=0.5, adaptive='elementwise')\nl_hidden2 = lasagne.layers.DenseLayer(l_drop2, num_units=n_hidden)\nl_drop3 = varout.layers.VariationalDropoutA(l_hidden2, p=0.5, adaptive='elementwise')\nl_hidden3 = lasagne.layers.DenseLayer(l_drop3, num_units=n_hidden)\nl_drop4 = varout.layers.VariationalDropoutA(l_hidden3, p=0.5, adaptive='elementwise')\nl_out = lasagne.layers.DenseLayer(l_drop4, num_units=output_dim,\n nonlinearity=lasagne.nonlinearities.softmax)\n\ndef set_up_training(l_out, dataset, squash_updates=False, N_train=50000, N_test=10000):\n expressions = holonets.monitor.Expressions(l_out, dataset, update_rule=lasagne.updates.adam,\n loss_function=lasagne.objectives.categorical_crossentropy,\n loss_aggregate=T.mean, \n extra_loss=-varout.objectives.priorKL(l_out)/N_train,\n learning_rate=0.001)\n # add channel to monitor loss and accuracy on training and test\n expressions.add_channel(**expressions.loss('train', False))\n expressions.add_channel(**expressions.accuracy('train', False))\n expressions.add_channel(**expressions.loss('test', True))\n expressions.add_channel(**expressions.accuracy('test', True))\n expressions.add_channel(name='cross-entropy loss', dimension='Loss',\n expression=T.mean(\n lasagne.objectives.categorical_crossentropy(expressions.network_output, expressions.y_batch)),\n function='train')\n 
expressions.add_channel(name='DKL', dimension='Loss', \n expression=-varout.objectives.priorKL(l_out)/N_train, function='train')\n # would like to track the various alphas\n for i, dlayer in enumerate([l for l in lasagne.layers.get_all_layers(l_out) \n if isinstance(l, varout.layers.VariationalDropout)]):\n expressions.add_channel(name='Dropout Layer {0} Mean Alpha'.format(i+1),\n dimension='Alpha', function='train',\n expression=T.mean(T.nnet.sigmoid(dlayer.logitalpha)))\n expressions.add_channel(name='Dropout Layer {0} Sigma Alpha'.format(i+1),\n dimension='Alpha', function='train',\n expression=T.sqrt(T.var(T.nnet.sigmoid(dlayer.logitalpha))))\n channels = expressions.build_channels()\n train = holonets.train.Train(channels, n_batches={'train': N_train//batch_size,\n 'test': N_test//batch_size})\n loop = holonets.run.EpochLoop(train, dimensions=train.dimensions)\n return loop\n\nloop = set_up_training(l_out, dataset)\n\nresults = loop.run(100)\nresults.layout('Channel')",
"Looking at the alpha parameters learnt over the input image:",
"hv.Image(T.nnet.sigmoid(l_drop1.logitalpha).eval().reshape(28,28))",
"And comparing the filters learnt by their logitalpha scores:",
"filteralphas = T.nnet.sigmoid(l_drop2.logitalpha).eval()\n\nfiltermap = hv.HoloMap(kdims=['Alpha'])\nfor i,a in enumerate(filteralphas):\n filtermap[a] = hv.Image(l_hidden1.W.get_value()[:,i].reshape(28,28))\n\nfiltermap[:0.4].layout('Alpha')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
newworldnewlife/TensorFlow-Tutorials
|
03_PrettyTensor.ipynb
|
mit
|
[
"TensorFlow Tutorial #03\nPrettyTensor\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nThe previous tutorial showed how to implement a Convolutional Neural Network in TensorFlow, which required low-level knowledge of how TensorFlow works. It was complicated and easy to make mistakes.\nThis tutorial shows how to use the add-on package for TensorFlow called PrettyTensor, which is also developed by Google. PrettyTensor provides much simpler ways of constructing Neural Networks in TensorFlow, thus allowing us to focus on the idea we wish to implement and not worry so much about low-level implementation details. This also makes the source-code much shorter and easier to read and modify.\nMost of the source-code in this tutorial is identical to Tutorial #02 except for the graph-construction which is now done using PrettyTensor, as well as some other minor changes.\nThis tutorial builds directly on Tutorial #02 and it is recommended that you study that tutorial first if you are new to TensorFlow. You should also be familiar with basic linear algebra, Python and the Jupyter Notebook editor.\nFlowchart\nThe following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See the previous tutorial for a more detailed description of convolution.",
"from IPython.display import Image\nImage('images/02_network_flowchart.png')",
"The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled so the image resolution is decreased from 28x28 to 14x14.\nThese 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels.\nThe output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.\nThe convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.\nThese particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.\nNote that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow.\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nfrom sklearn.metrics import confusion_matrix\nimport time\nfrom datetime import timedelta\nimport math\n\n# We also need PrettyTensor.\nimport prettytensor as pt",
"This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:",
"tf.__version__",
"PrettyTensor version:",
"pt.__version__",
"Load Data\nThe MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.",
"from tensorflow.examples.tutorials.mnist import input_data\ndata = input_data.read_data_sets('data/MNIST/', one_hot=True)",
"The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.",
"print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(data.train.labels)))\nprint(\"- Test-set:\\t\\t{}\".format(len(data.test.labels)))\nprint(\"- Validation-set:\\t{}\".format(len(data.validation.labels)))",
"The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.",
"data.test.cls = np.argmax(data.test.labels, axis=1)",
"Data Dimensions\nThe data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.",
"# We know that MNIST images are 28 pixels in each dimension.\nimg_size = 28\n\n# Images are stored in one-dimensional arrays of this length.\nimg_size_flat = img_size * img_size\n\n# Tuple with height and width of images used to reshape arrays.\nimg_shape = (img_size, img_size)\n\n# Number of colour channels for the images: 1 channel for gray-scale.\nnum_channels = 1\n\n# Number of classes, one class for each of 10 digits.\nnum_classes = 10",
"Helper-function for plotting images\nFunction used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.",
"def plot_images(images, cls_true, cls_pred=None):\n assert len(images) == len(cls_true) == 9\n \n # Create figure with 3x3 sub-plots.\n fig, axes = plt.subplots(3, 3)\n fig.subplots_adjust(hspace=0.3, wspace=0.3)\n\n for i, ax in enumerate(axes.flat):\n # Plot image.\n ax.imshow(images[i].reshape(img_shape), cmap='binary')\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true[i])\n else:\n xlabel = \"True: {0}, Pred: {1}\".format(cls_true[i], cls_pred[i])\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()",
"Plot a few images to see if data is correct",
"# Get the first images from the test-set.\nimages = data.test.images[0:9]\n\n# Get the true classes for those images.\ncls_true = data.test.cls[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true)",
"TensorFlow Graph\nThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.\nTensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.\nTensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.\nA TensorFlow graph consists of the following parts which will be detailed below:\n\nPlaceholder variables used for inputting data to the graph.\nVariables that are going to be optimized so as to make the convolutional network perform better.\nThe mathematical formulas for the convolutional network.\nA cost measure that can be used to guide the optimization of the variables.\nAn optimization method which updates the variables.\n\nIn addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.\nPlaceholder variables\nPlaceholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.\nFirst we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. 
This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.",
"x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')",
"The convolutional layers expect x to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead [num_images, img_height, img_width, num_channels]. Note that img_height == img_width == img_size and num_images can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:",
"x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])",
"Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.",
"y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')",
"We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.",
"y_true_cls = tf.argmax(y_true, dimension=1)",
"TensorFlow Implementation\nThis section shows the original source-code from Tutorial #02 which implements the Convolutional Neural Network directly in TensorFlow. The code is not actually used in this Notebook and is only meant for easy comparison to the PrettyTensor implementation below.\nThe thing to note here is how many lines of code there are and the low-level details of how TensorFlow stores its data and performs the computation. It is easy to make mistakes even for fairly small Neural Networks.\nHelper-functions\nIn the direct TensorFlow implementation, we first make some helper-functions which will be used several times in the graph construction.\nThese two functions create new variables in the TensorFlow graph that will be initialized with random values.",
"def new_weights(shape):\n return tf.Variable(tf.truncated_normal(shape, stddev=0.05))\n\ndef new_biases(length):\n return tf.Variable(tf.constant(0.05, shape=[length]))",
"The following helper-function creates a new convolutional network. The input and output are 4-dimensional tensors (aka. 4-rank tensors). Note the low-level details of the TensorFlow API, such as the shape of the weights-variable. It is easy to make a mistake somewhere which may result in strange error-messages that are difficult to debug.",
"def new_conv_layer(input, # The previous layer.\n num_input_channels, # Num. channels in prev. layer.\n filter_size, # Width and height of filters.\n num_filters, # Number of filters.\n use_pooling=True): # Use 2x2 max-pooling.\n\n # Shape of the filter-weights for the convolution.\n # This format is determined by the TensorFlow API.\n shape = [filter_size, filter_size, num_input_channels, num_filters]\n\n # Create new weights aka. filters with the given shape.\n weights = new_weights(shape=shape)\n\n # Create new biases, one for each filter.\n biases = new_biases(length=num_filters)\n\n # Create the TensorFlow operation for convolution.\n # Note the strides are set to 1 in all dimensions.\n # The first and last stride must always be 1,\n # because the first is for the image-number and\n # the last is for the input-channel.\n # But e.g. strides=[1, 2, 2, 1] would mean that the filter\n # is moved 2 pixels across the x- and y-axis of the image.\n # The padding is set to 'SAME' which means the input image\n # is padded with zeroes so the size of the output is the same.\n layer = tf.nn.conv2d(input=input,\n filter=weights,\n strides=[1, 1, 1, 1],\n padding='SAME')\n\n # Add the biases to the results of the convolution.\n # A bias-value is added to each filter-channel.\n layer += biases\n\n # Use pooling to down-sample the image resolution?\n if use_pooling:\n # This is 2x2 max-pooling, which means that we\n # consider 2x2 windows and select the largest value\n # in each window. 
Then we move 2 pixels to the next window.\n layer = tf.nn.max_pool(value=layer,\n ksize=[1, 2, 2, 1],\n strides=[1, 2, 2, 1],\n padding='SAME')\n\n # Rectified Linear Unit (ReLU).\n # It calculates max(x, 0) for each input pixel x.\n # This adds some non-linearity to the formula and allows us\n # to learn more complicated functions.\n layer = tf.nn.relu(layer)\n\n # Note that ReLU is normally executed before the pooling,\n # but since relu(max_pool(x)) == max_pool(relu(x)) we can\n # save 75% of the relu-operations by max-pooling first.\n\n # We return both the resulting layer and the filter-weights\n # because we will plot the weights later.\n return layer, weights\n",
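"The 4-dim layout of the filter-weights can be illustrated without TensorFlow. The NumPy sketch below uses the same sizes as the second conv-layer of this tutorial (5x5 filters, 16 input channels, 36 filters):

```python
import numpy as np

# Same format as the TensorFlow filter variable:
# [filter_size, filter_size, num_input_channels, num_filters].
filter_size, num_input_channels, num_filters = 5, 16, 36
weights = np.zeros((filter_size, filter_size, num_input_channels, num_filters))

# The weights that filter i applies to input channel c form a 5x5 grid;
# this is exactly the slice that plot_conv_weights() extracts further below.
patch = weights[:, :, 0, 0]
```
",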
"The following helper-function flattens a 4-dim tensor to 2-dim so we can add fully-connected layers after the convolutional layers.",
"def flatten_layer(layer):\n # Get the shape of the input layer.\n layer_shape = layer.get_shape()\n\n # The shape of the input layer is assumed to be:\n # layer_shape == [num_images, img_height, img_width, num_channels]\n\n # The number of features is: img_height * img_width * num_channels\n # We can use a function from TensorFlow to calculate this.\n num_features = layer_shape[1:4].num_elements()\n\n # Reshape the layer to [num_images, num_features].\n # Note that we just set the size of the second dimension\n # to num_features and the size of the first dimension to -1\n # which means the size in that dimension is calculated\n # so the total size of the tensor is unchanged from the reshaping.\n layer_flat = tf.reshape(layer, [-1, num_features])\n\n # The shape of the flattened layer is now:\n # [num_images, img_height * img_width * num_channels]\n\n # Return both the flattened layer and the number of features.\n return layer_flat, num_features",
"The following helper-function creates a fully-connected layer.",
"def new_fc_layer(input, # The previous layer.\n num_inputs, # Num. inputs from prev. layer.\n num_outputs, # Num. outputs.\n use_relu=True): # Use Rectified Linear Unit (ReLU)?\n\n # Create new weights and biases.\n weights = new_weights(shape=[num_inputs, num_outputs])\n biases = new_biases(length=num_outputs)\n\n # Calculate the layer as the matrix multiplication of\n # the input and weights, and then add the bias-values.\n layer = tf.matmul(input, weights) + biases\n\n # Use ReLU?\n if use_relu:\n layer = tf.nn.relu(layer)\n\n return layer",
"Graph Construction\nThe Convolutional Neural Network will now be constructed using the helper-functions above. Without the helper-functions this would have been very long and confusing\nNote that the following code will not actually be executed. It is just shown here for easy comparison to the PrettyTensor code below.\nThe previous tutorial used constants defined elsewhere so they could be changed easily. For example, instead of having filter_size=5 as an argument to new_conv_layer() we had filter_size=filter_size1 and then defined filter_size1=5 elsewhere. This made it easier to change all the constants.",
"if False: # Don't execute this! Just show it for easy comparison.\n # First convolutional layer.\n layer_conv1, weights_conv1 = \\\n new_conv_layer(input=x_image,\n num_input_channels=num_channels,\n filter_size=5,\n num_filters=16,\n use_pooling=True)\n\n # Second convolutional layer.\n layer_conv2, weights_conv2 = \\\n new_conv_layer(input=layer_conv1,\n num_input_channels=16,\n filter_size=5,\n num_filters=36,\n use_pooling=True)\n\n # Flatten layer.\n layer_flat, num_features = flatten_layer(layer_conv2)\n\n # First fully-connected layer.\n layer_fc1 = new_fc_layer(input=layer_flat,\n num_inputs=num_features,\n num_outputs=128,\n use_relu=True)\n\n # Second fully-connected layer.\n layer_fc2 = new_fc_layer(input=layer_fc1,\n num_inputs=128,\n num_outputs=num_classes,\n use_relu=False)\n\n # Predicted class-label.\n y_pred = tf.nn.softmax(layer_fc2)\n\n # Cross-entropy for the classification of each image.\n cross_entropy = \\\n tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,\n labels=y_true)\n\n # Loss aka. cost-measure.\n # This is the scalar value that must be minimized.\n loss = tf.reduce_mean(cross_entropy)",
"PrettyTensor Implementation\nThis section shows how to make the exact same implementation of a Convolutional Neural Network using PrettyTensor.\nThe basic idea is to wrap the input tensor x_image in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Neural Network. This is a bit similar to the helper-functions we implemented above, but it is even simpler because PrettyTensor also keeps track of each layer's input and output dimensionalities, etc.",
"x_pretty = pt.wrap(x_image)",
"Now that we have wrapped the input image in a PrettyTensor object, we can add the convolutional and fully-connected layers in just a few lines of source-code.\nNote that pt.defaults_scope(activation_fn=tf.nn.relu) makes activation_fn=tf.nn.relu an argument for each of the layers constructed inside the with-block, so that Rectified Linear Units (ReLU) are used for each of these layers. The defaults_scope makes it easy to change arguments for all of the layers.",
"with pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n conv2d(kernel=5, depth=16, name='layer_conv1').\\\n max_pool(kernel=2, stride=2).\\\n conv2d(kernel=5, depth=36, name='layer_conv2').\\\n max_pool(kernel=2, stride=2).\\\n flatten().\\\n fully_connected(size=128, name='layer_fc1').\\\n softmax_classifier(num_classes=num_classes, labels=y_true)",
"That's it! We have now created the exact same Convolutional Neural Network in a few simple lines of code that required many complex lines of code in the direct TensorFlow implementation.\nUsing PrettyTensor instead of TensorFlow, we can clearly see the network structure and how the data flows through the network. This allows us to focus on the main ideas of the Neural Network rather than low-level implementation details. It is simple and pretty!\nGetting the Weights\nUnfortunately, not everything is pretty when using PrettyTensor.\nFurther below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using PrettyTensor, all the variables of the layers are created indirectly by PrettyTensor. We therefore have to retrieve the variables from TensorFlow.\nWe used the names layer_conv1 and layer_conv2 for the two convolutional layers. These are also called variable scopes (not to be confused with defaults_scope as described above). PrettyTensor automatically gives names to the variables it creates for each layer, so we can retrieve the weights for a layer using the layer's scope-name and the variable-name.\nThe implementation is somewhat awkward because we have to use the TensorFlow function get_variable() which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.",
"def get_weights_variable(layer_name):\n # Retrieve an existing variable named 'weights' in the scope\n # with the given layer_name.\n # This is awkward because the TensorFlow function was\n # really intended for another purpose.\n\n with tf.variable_scope(layer_name, reuse=True):\n variable = tf.get_variable('weights')\n\n return variable",
"Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: contents = session.run(weights_conv1) as demonstrated further below.",
"weights_conv1 = get_weights_variable(layer_name='layer_conv1')\nweights_conv2 = get_weights_variable(layer_name='layer_conv2')",
"Optimization Method\nPrettyTensor gave us the predicted class-label (y_pred) as well as a loss-measure that must be minimized, so as to improve the ability of the Neural Network to classify the input images.\nIt is unclear from the documentation for PrettyTensor whether the loss-measure is cross-entropy or something else. But we now use the AdamOptimizer to minimize the loss.\nNote that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.",
"optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)",
"Performance Measures\nWe need a few more performance measures to display the progress to the user.\nFirst we calculate the predicted class number from the output of the Neural Network y_pred, which is a vector with 10 elements. The class number is the index of the largest element.",
"y_pred_cls = tf.argmax(y_pred, dimension=1)",
"Then we create a vector of booleans telling us whether the predicted class equals the true class of each image.",
"correct_prediction = tf.equal(y_pred_cls, y_true_cls)",
"The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.",
"accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"TensorFlow Run\nCreate TensorFlow session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.",
"session = tf.Session()",
"Initialize variables\nThe variables for weights and biases must be initialized before we start optimizing them.",
"session.run(tf.global_variables_initializer())",
"Helper-function to perform optimization iterations\nThere are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.",
"train_batch_size = 64",
"Function for performing a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.",
"# Counter for total number of iterations performed so far.\ntotal_iterations = 0\n\ndef optimize(num_iterations):\n # Ensure we update the global variable rather than a local copy.\n global total_iterations\n\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(total_iterations,\n total_iterations + num_iterations):\n\n # Get a batch of training examples.\n # x_batch now holds a batch of images and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = data.train.next_batch(train_batch_size)\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n session.run(optimizer, feed_dict=feed_dict_train)\n\n # Print status every 100 iterations.\n if i % 100 == 0:\n # Calculate the accuracy on the training-set.\n acc = session.run(accuracy, feed_dict=feed_dict_train)\n\n # Message for printing.\n msg = \"Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}\"\n\n # Print it.\n print(msg.format(i + 1, acc))\n\n # Update the total number of iterations performed.\n total_iterations += num_iterations\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))",
"Helper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.",
"def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect = (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = data.test.images[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = data.test.cls[incorrect]\n \n # Plot the first 9 images.\n plot_images(images=images[0:9],\n cls_true=cls_true[0:9],\n cls_pred=cls_pred[0:9])",
"Helper-function to plot confusion matrix",
"def plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the true classifications for the test-set.\n cls_true = data.test.cls\n \n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_true,\n y_pred=cls_pred)\n\n # Print the confusion matrix as text.\n print(cm)\n\n # Plot the confusion matrix as an image.\n plt.matshow(cm)\n\n # Make various adjustments to the plot.\n plt.colorbar()\n tick_marks = np.arange(num_classes)\n plt.xticks(tick_marks, range(num_classes))\n plt.yticks(tick_marks, range(num_classes))\n plt.xlabel('Predicted')\n plt.ylabel('True')\n\n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()",
"Helper-function for showing the performance\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.\nNote that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.",
"# Split the test-set into smaller batches of this size.\ntest_batch_size = 256\n\ndef print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # Number of images in the test-set.\n num_test = len(data.test.images)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_test, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_test:\n # The ending index for the next batch is denoted j.\n j = min(i + test_batch_size, num_test)\n\n # Get the images from the test-set between index i and j.\n images = data.test.images[i:j, :]\n\n # Get the associated labels.\n labels = data.test.labels[i:j, :]\n\n # Create a feed-dict with these images and labels.\n feed_dict = {x: images,\n y_true: labels}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n\n # Convenience variable for the true class-numbers of the test-set.\n cls_true = data.test.cls\n\n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n # Calculate the number of correctly classified images.\n # When summing a boolean array, False means 0 and True means 1.\n correct_sum = correct.sum()\n\n # Classification accuracy is the number of correctly classified\n # images divided by the total number of images in the test-set.\n acc = float(correct_sum) / num_test\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, correct_sum, num_test))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n 
print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)",
"Performance before any optimization\nThe accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.",
"print_test_accuracy()",
"Performance after 1 optimization iteration\nThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.",
"optimize(num_iterations=1)\n\nprint_test_accuracy()",
"Performance after 100 optimization iterations\nAfter 100 optimization iterations, the model has significantly improved its classification accuracy.",
"optimize(num_iterations=99) # We already performed 1 iteration above.\n\nprint_test_accuracy(show_example_errors=True)",
"Performance after 1000 optimization iterations\nAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.",
"optimize(num_iterations=900) # We performed 100 iterations above.\n\nprint_test_accuracy(show_example_errors=True)",
"Performance after 10,000 optimization iterations\nAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.",
"optimize(num_iterations=9000) # We performed 1000 iterations above.\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)",
"Visualization of Weights and Layers\nWhen the Convolutional Neural Network was implemented directly in TensorFlow, we could easily plot both the convolutional weights and the images that were output from the different layers. When using PrettyTensor instead, we can also retrieve the weights as shown above, but we cannot so easily retrieve the images that are output from the convolutional layers. So in the following we only plot the weights.\nHelper-function for plotting convolutional weights",
"def plot_conv_weights(weights, input_channel=0):\n # Assume weights are TensorFlow ops for 4-dim variables\n # e.g. weights_conv1 or weights_conv2.\n \n # Retrieve the values of the weight-variables from TensorFlow.\n # A feed-dict is not necessary because nothing is calculated.\n w = session.run(weights)\n\n # Get the lowest and highest values for the weights.\n # This is used to correct the colour intensity across\n # the images so they can be compared with each other.\n w_min = np.min(w)\n w_max = np.max(w)\n\n # Number of filters used in the conv. layer.\n num_filters = w.shape[3]\n\n # Number of grids to plot.\n # Rounded-up, square-root of the number of filters.\n num_grids = math.ceil(math.sqrt(num_filters))\n \n # Create figure with a grid of sub-plots.\n fig, axes = plt.subplots(num_grids, num_grids)\n\n # Plot all the filter-weights.\n for i, ax in enumerate(axes.flat):\n # Only plot the valid filter-weights.\n if i<num_filters:\n # Get the weights for the i'th filter of the input channel.\n # See new_conv_layer() for details on the format\n # of this 4-dim tensor.\n img = w[:, :, input_channel, i]\n\n # Plot image.\n ax.imshow(img, vmin=w_min, vmax=w_max,\n interpolation='nearest', cmap='seismic')\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook cell.\n plt.show()",
"Convolution Layer 1\nNow plot the filter-weights for the first convolutional layer.\nNote that positive weights are red and negative weights are blue.",
"plot_conv_weights(weights=weights_conv1)",
"Convolution Layer 2\nNow plot the filter-weights for the second convolutional layer.\nThere are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weigths for the first channel.\nNote again that positive weights are red and negative weights are blue.",
"plot_conv_weights(weights=weights_conv2, input_channel=0)",
"There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.",
"plot_conv_weights(weights=weights_conv2, input_channel=1)",
"Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources.",
"# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# session.close()",
"Conclusion\nPrettyTensor allows you to implement Neural Networks using a much simpler syntax than a direct implementation in TensorFlow. This lets you focus on your ideas rather than low-level implementation details. It makes the code much shorter and easier to understand, and you will make fewer mistakes.\nHowever, there are some inconsistencies and awkward designs in PrettyTensor, and it can be difficult to learn because the documentation is short and confusing. Hopefully this gets better in the future (this was written in July 2016).\nThere are alternatives to PrettyTensor including TFLearn and Keras.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to backup this Notebook before making any changes.\n\nChange the activation function to sigmoid for all the layers.\nUse sigmoid in some layers and relu in others. Can you use defaults_scope for this?\nUse l2loss in all layers. Then try it for only some of the layers.\nUse PrettyTensor's reshape for x_image instead of TensorFlow's. Is one better than the other?\nAdd a dropout-layer after the fully-connected layer. If you want a different keep_prob during training and testing then you will need a placeholder variable and set it in the feed-dict.\nReplace the 2x2 max-pooling layers with stride=2 in the convolutional layers. Is there a difference in classification accuracy? What if you optimize it again and again? The difference is random, so how would you measure if there really is a difference? What are the pros and cons of using max-pooling vs. stride in the conv-layer?\nChange the parameters for the layers, e.g. the kernel, depth, size, etc. 
What is the difference in time usage and classification accuracy?\nAdd and remove some convolutional and fully-connected layers.\nWhat is the simplest network you can design that still performs well?\nRetrieve the bias-values for the convolutional layers and print them. See get_weights_variable() for inspiration.\nRemake the program yourself without looking too much at this source-code.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
karlstroetmann/Artificial-Intelligence
|
Python/7 Neural Networks/Neural-Network-Keras.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open (\"../style.css\", \"r\") as file:\n css = file.read()\nHTML(css)",
"Building a Neural Network with Keras",
"import gzip\nimport pickle\nimport numpy as np\nimport keras\nimport tensorflow as tf",
"The following magic command is necessary to prevent the Python kernel to die because of linkage problems.",
"%env KMP_DUPLICATE_LIB_OK=TRUE",
"The function $\\texttt{vectorized_result}(d)$ converts the digit $d \\in {0,\\cdots,9}$ and returns a NumPy vector $\\mathbf{x}$ of shape $(10, 1)$ such that\n$$\n\\mathbf{x}[i] = \n\\left{\n \\begin{array}{ll}\n 1 & \\mbox{if $i = j$;} \\\n 0 & \\mbox{otherwise.}\n \\end{array}\n\\right.\n$$\nThis function is used to convert a digit $d$ into the expected output of a neural network that has an output unit for every digit.",
"def vectorized_result(d):\n e = np.zeros((10, ), dtype=np.float32)\n e[d] = 1.0\n return e",
"The function $\\texttt{load_data}()$ returns a pair of the form\n$$ (\\texttt{training_data}, \\texttt{test_data}) $$\nwhere \n<ul>\n<li> $\\texttt{training_data}$ is a list containing 60,000 pairs $(\\textbf{x}, \\textbf{y})$ s.t. $\\textbf{x}$ is a 784-dimensional `numpy.ndarray` containing the input image and $\\textbf{y}$ is a 10-dimensional `numpy.ndarray` corresponding to the correct digit for x.</li>\n<li> $\\texttt{test_data}$ is a list containing 10,000 pairs $(\\textbf{x}, y)$. In each case, \n $\\textbf{x}$ is a 784-dimensional `numpy.ndarray` containing the input image, \n and $y$ is the corresponding digit value.\n</ul>",
"def load_data():\n with gzip.open('../mnist.pkl.gz', 'rb') as f:\n train, validate, test = pickle.load(f, encoding=\"latin1\")\n X_train = np.array([np.reshape(x, (784, )) for x in train[0]])\n X_test = np.array([np.reshape(x, (784, )) for x in test [0]])\n Y_train = np.array([vectorized_result(y) for y in train[1]])\n Y_test = np.array([vectorized_result(y) for y in test [1]])\n return (X_train, X_test, Y_train, Y_test)\n\nX_train, X_test, Y_train, Y_test = load_data()",
"Let us see what we have read:",
"X_train.shape, X_test.shape, Y_train.shape, Y_test.shape",
"Below, we create a neural network with two hidden layers.\n- The first hidden layer has 60 nodes and uses the <a href=\"https://en.wikipedia.org/wiki/Rectifier_(neural_networks)\">ReLU function</a> \n as activation function.\n- The second hidden layer uses 30 nodes and also uses the ReLu function.\n- The output layer uses the <a href=\"https://en.wikipedia.org/wiki/Softmax_function\">softmax function</a> as \n activation function. This function is defined as follows:\n $$ \\sigma(\\mathbf{z})i := \\frac{e^{z_i}}{\\sum\\limits{d=0}^{9} e^{z_d}} $$\n Here, $N$ is the number of output nodes and $z_i$ is the sum of the inputs of the $i$-th output neuron.\n This function guarantees that the outputs of the 10 output nodes can be interpreted as probabilities, since \n there sum is equal to $1$.\n- The <em style=\"color:blue\">loss function</em> used is the <em style=\"color:blue\">cross-entropy</em>.\n If a neuron outputs the value $a$, when it should output the value $y \\in {0,1}$, the cross entropy cost of \n this neuron is defined as\n $$ C(a, y) := - y \\cdot \\ln(a) - (1-y)\\cdot \\ln(1-a). $$\n- The cost function is minimized using stochastic gradient descent with a learning rate of $0.3$.",
"model = keras.models.Sequential()\nmodel.add(keras.layers.Dense( 80, activation='relu', input_dim=784))\nmodel.add(keras.layers.Dense( 40, activation='relu' ))\nmodel.add(keras.layers.Dense( 40, activation='relu' ))\nmodel.add(keras.layers.Dense( 10, activation='softmax' ))\nmodel.compile(loss = 'categorical_crossentropy', \n optimizer = tf.keras.optimizers.SGD(learning_rate=0.3), \n metrics = ['accuracy'])\nmodel.summary()\n\n%%time\nhistory = model.fit(X_train, Y_train, validation_data=(X_test, Y_test), epochs=30, batch_size=100, verbose=1)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
empet/Math
|
Joukowski-airfoil.ipynb
|
bsd-3-clause
|
[
"The flow past a Joukowski airfoil\nThe generation of streamlines of the flow past a Joukowski airfoil follows this chapter from the Internet Book of Fluid Dynamics.\nVisualization of streamlines is based on the property of the complex flow\nwith respect to a conformal transformation:\nIf w is the complex plane of the airfoil, z is the complex plane of the circle as the section in a circular cylinder,\nand $w=w(z)$ is a conformal transformation mapping the outside of the disc onto the outside of the airfoil,\nthen the complex flow, $F$, past the airfoil is related to the complex flow, $f$, past the circle (cylinder) by:\n$F(w)=f(z(w))$ or equivalently $F(w(z))=f(z)$. \nThe streamlines of each flow are defined as contour plots of the imaginary part of the complex flow.\nIn our case, due to the latter relation, we plot the contours of the stream function, $\\mathrm{Im}(f)$, over $w(z)$, where $w(z)$ is the Joukowski transformation that maps a suitable circle onto the airfoil.",
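As a quick standalone check of the Joukowski map $w(z) = z + \lambda^2/z$ used below: it sends the circle $|z| = \lambda$ onto the real segment $[-2\lambda, 2\lambda]$ (the degenerate "flat plate" airfoil). The values here are illustrative only.

```python
import numpy as np

lam = 1.0
t = np.linspace(0, 2 * np.pi, 201)
z = lam * np.exp(1j * t)       # the circle |z| = lam
w = z + lam**2 / z             # Joukowski transformation

# The image is the real segment [-2*lam, 2*lam]: w = 2*lam*cos(t)
assert np.allclose(w.imag, 0, atol=1e-12)
assert abs(w.real.max() - 2 * lam) < 1e-9
assert abs(w.real.min() + 2 * lam) < 1e-9
```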
"import numpy as np\nimport numpy.ma as ma\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef Juc(z, lam):#Joukowski transformation\n return z+(lam**2)/z\n\ndef circle(C, R):\n t=np.linspace(0,2*np.pi, 200)\n return C+R*np.exp(1j*t)\n\ndef deg2radians(deg):\n return deg*np.pi/180\n\nplt.rcParams['figure.figsize'] = 8, 8\n\ndef streamlines(alpha=10, beta=5, V_inf=1, R=1, ratio=1.2):\n #ratio=R/lam\n alpha=deg2radians(alpha)# angle of attack\n beta=deg2radians(beta)# -beta is the argument of the complex no (Joukowski parameter - circle center)\n if ratio<=1: #R/lam must be >1\n raise ValueError('R/lambda must be >1')\n lam=R/ratio#lam is the parameter of the Joukowski transformation\n\n center_c=lam-R*np.exp(-1j*beta)# Center of the circle\n x=np.arange(-3,3, 0.1)\n y=np.arange(-3,3, 0.1)\n x,y=np.meshgrid(x,y)\n z=x+1j*y\n z=ma.masked_where(np.absolute(z-center_c)<=R, z)\n Z=z-center_c\n Gamma=-4*np.pi*V_inf*R*np.sin(beta+alpha)#circulation\n # np.log(Z) cannot be calculated correctly due to a numpy bug np.log(MaskedArray);\n #https://github.com/numpy/numpy/issues/8516\n # we perform an elementwise computation\n U=np.zeros(Z.shape, dtype=complex)\n with np.errstate(divide='ignore'):#avoid warning when evaluates np.log(0+1jy).\n #In this case the arg is arctan(y/0)+cst\n for m in range(Z.shape[0]):\n for n in range(Z.shape[1]):\n #U[m,n]=Gamma*np.log(Z[m,n]/R)/(2*np.pi)# \n U[m,n]=Gamma*np.log((Z[m,n]*np.exp(-1j*alpha))/R)/(2*np.pi)\n c_flow=V_inf*Z*np.exp(-1j*alpha) + (V_inf*np.exp(1j*alpha)*R**2)/Z - 1j*U #the complex flow\n\n J=Juc(z, lam)#Joukowski transformation of the z-plane minus the disc D(center_c, R)\n Circle=circle(center_c, R)\n Airfoil=Juc(Circle, lam)# airfoil \n return J, c_flow.imag, Airfoil\n\n\nJ, stream_func, Airfoil=streamlines()\nlevels=np.arange(-2.8, 3.8, 0.2).tolist()",
"Matplotlib plot of the streamlines:",
"fig=plt.figure()\nax=fig.add_subplot(111)\ncp=ax.contour(J.real, J.imag, stream_func,levels=levels, colors='blue', linewidths=1,\n linestyles='solid')# this means that the flow is evaluated at Juc(z) since c_flow(Z)=C_flow(csi(Z))\n\nax.plot(Airfoil.real, Airfoil.imag)\nax.set_aspect('equal')\n ",
"Plotly plot of the streamlines:",
"import plotly.plotly as py\n\npy.sign_in('empet', 'my_api_key')\n\nconts=cp.allsegs # get the segments of line computed via plt.contour\nxline=[]\nyline=[]\n\nfor cont in conts:\n if len(cont)!=0:\n for arr in cont: \n \n xline+=arr[:,0].tolist()\n yline+=arr[:,1].tolist()\n xline.append(None) \n yline.append(None)\n\nflowlines=dict(x=xline, \n y=yline, \n type='scatter', \n mode='lines', \n line=dict(color='blue', width=1)\n ) \n\n#define a filled path (a shape) representing the airfoil\n\nshapes=[]\npath='M'\nfor pt in Airfoil:\n path+=str(pt.real)+', '+str(pt.imag)+' L '\nshapes.append(dict(line=dict(color='blue', \n width=1.5\n ),\n path= path,\n type='path',\n fillcolor='#edf4fe' \n )\n )\n\naxis=dict(showline=True, zeroline=False, ticklen=4, mirror=True, showgrid=False)\n\nlayout=dict(title=\"The streamlines for the flow past a Joukowski airfoil<br>Angle of attack, alpha=10 degrees\",\n font=dict(family='Balto'),\n showlegend=False, \n autosize=False, \n width=600, \n height=600, \n xaxis=dict(axis, **{'range': [ma.min(J.real), ma.max(J.real)]}),\n yaxis=dict(axis, **{'range':[ma.min(J.imag), ma.max(J.imag)]}),\n shapes=shapes,\n plot_bgcolor='#c1e3ff',\n hovermode='closest',\n )\nfig=dict(data=[flowlines],layout=layout)\npy.iplot(fig, filename='Joucstreamlns')\n\nfrom IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
esumitra/minecraft-programming
|
notebooks/Adventure2.ipynb
|
mit
|
[
"Welcome Home\nUsually when Steve comes home, there is no one at home. Steve can get lonely at times, especially after a long, hard battle with creepers and zombies.\nIn this programming adventure we'll make Minecraft display a warm and friendly welcome message when Steve comes home. We'll test your program by exploring the world and then coming back home to a friendly welcome. Along the way we will learn about coordinate systems, which help us locate objects in a game. We will also learn about variables and conditions. \nCoordinate Systems\nFrom your math classes, you will remember coordinate systems to locate points in a plane. The points (2,3), (-3,1) and (-1,-2.5) are shown in the grid below.\n\nThe Minecraft coordinate grid is shown below:\n\nIn Minecraft, when you move East, your X-coordinate increases and when you move South, your Z-coordinate increases. Let's confirm this through a few Minecraft exercises.\nTask 1: Moving in Minecraft coordinate systems\nIn Minecraft look at Steve's coordinates. Now move Steve to any other position. See how his coordinates change as you move? \n\n\n[ ] Change your direction so that only the X-coordinate changes when you move forward or back. \n\n\n[ ] Change your direction so that only the Z-coordinate changes when you move forward or back.\n\n\nTask 2: Write a program to show Steve's position on the screen\nRemember functions from the first Adventure? A function lets us do things in a computer program or in the Minecraft game. The function getTilePos() gets the player's position as (x,y,z) coordinates in Minecraft. Let's use this function to print Steve's position as he moves around. We need to store Steve's position when we call the function getTilePos() so that we can print the position later. We can use a program variable to store the position. A variable has a name and can be used to store values. We'll call our variable pos for position and it will contain Steve's position. When we want to print the position, we print the values of the position x, y and z coordinates using another function print() which prints any strings you give it.\nStart up Minecraft and type the following in a new cell.\npython\nfrom mcpi.minecraft import *\nmc = Minecraft.create()\npos = mc.player.getTilePos()\nprint(pos.x)\nprint(pos.y)\nprint(pos.z)\nWhen you run your program by pressing Ctrl+Enter in the program cell, you should now see Steve's position printed.\n Great Job!",
"from mcpi.minecraft import *\nimport time\nmc = Minecraft.create()\n\n# Type Task 2 program here\n",
"Task 3: Prettying up messages\nThe messages we printed are somewhat basic and can be confusing, since we don't know which number is x, y or z. Why not print a message that is more useful? Often messages are built by attaching strings and data. Try typing \npython\n\"my name is \" + \"Steve\" \nin a code cell. What message gets printed? Now try\npython\n\"my age is \" + 10 \nHmmm... That did not work :( Strings can only be attached or concatenated with other strings. In order to attach a number to a string, we need to convert the number into a printable string. We will use another function str() which returns a printable string of its arguments. Since x, y, z coordinates are numbers, we need to convert them to strings in order to print them with other strings. To see how the str() function works, type the following in a code cell and run it.\npython\n\"my age is \" + str(10) \nWhat gets printed by the line below?\npython\n\"x = \" + str(10) + \",y = \" + str(20) + \",z = \" + str(30) \nYou now have all the information you need to print a pretty message. \n\n[ ] Modify your program to print a pretty message shown below to correctly print Steve's position\n Steve's position is: x = 10,y = 20,z = 30\n[ ] Modify your program to use a variable named message to store the pretty message and then print the message\n Hint:\n\npython\nmessage = ...\nprint(message)",
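If you want to check your answer, here is a minimal sketch of the string building described above (plain Python, no Minecraft needed):

```python
# Strings concatenate directly with other strings
greeting = "my name is " + "Steve"

# Numbers must first be converted with str()
x, y, z = 10, 20, 30
message = "x = " + str(x) + ",y = " + str(y) + ",z = " + str(z)
print(message)  # x = 10,y = 20,z = 30
```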
"## Type Task 3 program here\n",
"Task 4: Display Steve's coordinates in Minecraft\nFor this task, instead of printing Steve's coordinates, let's display them in Minecraft using the postToChat() function from Adventure1.\nYou should see a message like the one below once you run your program.",
"while True:\n time.sleep(1)\n ## Type Task 4 program here\n ",
"Home\nIn Minecraft move to a location that you want to call home and place a Gold block there. Move Steve on top of the Gold block and write down his coordinates. Let's save these coordinates in the variables home_x and home_z. We will use these variables to detect when Steve returns home.",
"## Change these values for your home\nhome_x = 0\nhome_z = 0",
"Is Steve home?\nNow the magic of figuring out if Steve is home. As Steve moves in Minecraft, his x and z coordinates change. We can detect that Steve is home when his coordinates are equal to the coordinates of his home! To put it in math terms, Steve is home when\n$$\n(pos_{x},pos_{z}) = (home_{x},home_{z})\n$$\nIn the program we can write the math expression as\npython\npos.x == home_x and pos.z == home_z\nWe can use an if program block to check if Steve's coordinates equal his home coordinates. An if block is written as shown below\npython\nif (condition):\n do something 1\n do something 2\nLet's put this all together in the program below\npython\nwhile True:\n time.sleep(1)\n pos = mc.player.getTilePos()\n if (pos.x == home_x and pos.z == home_z):\n mc.postToChat(\"Welcome home Steve.\")\n # the rest of your program from task 4\nWhat happens when you run around in Minecraft and return to the gold block that is your home? That warm message makes Steve happy. He can now be well rested for battling creepers the next day.",
"## Type Task 5 program here\n",
"Recap\nIn this adventure you learned about coordinates, variables and if conditions. You used your knowledge to greet Steve with a warm message when he returns home. \n Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tritemio/PyBroMo
|
notebooks/PyBroMo - B.1 Disk-single-core - Generate photon timestamps.ipynb
|
gpl-2.0
|
[
"PyBroMo - B.1 Disk-single-core - Generate photon timestamps\n<small><i>\nThis notebook is part of <a href=\"http://tritemio.github.io/PyBroMo\" target=\"_blank\">PyBroMo</a> a \npython-based single-molecule Brownian motion diffusion simulator \nthat simulates confocal smFRET\nexperiments.\n</i></small>\nOverview\nIn this notebook we show how to generate timestamps of emitted photons from saved diffusion traces.",
"%matplotlib inline\nimport numpy as np\nimport tables\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pybromo as pbm\nprint('Numpy version:', np.__version__)\nprint('PyTables version:', tables.__version__)\nprint('PyBroMo version:', pbm.__version__)",
"Timestamps simulation\nAs a memo, let's write some formulas related to the FRET efficiency:\n$$ k = \\frac{F_a}{F_d} \\quad,\\qquad E = \\frac{k}{k+1} \\qquad\\Rightarrow\\qquad k = \\frac{E}{1-E}$$",
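A quick standalone numerical check of the relations above (the values are illustrative):

```python
import numpy as np

E = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.4])
k = E / (1 - E)          # k = E / (1 - E)
E_back = k / (k + 1)     # E = k / (k + 1)

# The two relations are inverses of each other
assert np.allclose(E_back, E)
```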
"S = pbm.ParticlesSimulation.from_datafile('016', mode='w')\n\ndef em_rates_from_E(em_rate_tot, E_values):\n E_values = np.asarray(E_values)\n em_rates_a = E_values * em_rate_tot\n em_rates_d = em_rate_tot - em_rates_a\n\n k_values = E_values/(1 - E_values)\n assert np.allclose((em_rates_a/em_rates_d), k_values)\n\n em_rates = np.hstack([em_rates_a, em_rates_d])\n return em_rates\n\nem_rate_tot = 200e3\nE_list = np.array([0.01, 0.02, 0.05, 0.1, 0.2, 0.4])\n\nem_rate_list = em_rates_from_E(em_rate_tot, E_list)\nem_rate_list\n\n# Get the random state at the end of the diffusion simulation\nsaved_rs_state = S.traj_group._v_attrs['last_random_state']\npbm.hash_(saved_rs_state)",
"Simulation of the series of emission rates\nHere we perform the timestamps simulation for the list of emission rates previously computed.\nAt this point we also choose the Poisson background level to add to the simulation.",
"em_rate_list",
"Simulate timestamps for background = 1kcps:",
"rs = np.random.RandomState()\nrs.set_state(saved_rs_state)\n\n%%timeit -n1 -r1\nfor em_rate in em_rate_list:\n print(' Emission rate: ', em_rate, flush=True)\n S.simulate_timestamps_mix(max_rates=(em_rate,), populations=(slice(0, 20),), \n bg_rate=1e3, rs=rs)",
"Simulate timestamps for background = 4kcps:",
"%%timeit -n1 -r1\nfor em_rate in em_rate_list:\n print(' Emission rate: ', em_rate, flush=True)\n S.simulate_timestamps_mix(max_rates=(em_rate,), populations=(slice(0, 20),), \n bg_rate=4e3, rs=rs)\n\nfor k in S.ts_store.h5file.root.timestamps._v_children.keys():\n if not k.endswith('_par'):\n print(k)\n\nts, ts_par = S.get_timestamps_part('Pop1_P20_Pstart0_max_rate198000cps_BG4000cps_t_1s_rs_8798a6')\n\nts[:]\n\nbins = np.arange(0, 1, 1e-3)\nplt.hist(ts*ts.attrs['clk_p'], bins=bins, histtype='step');",
"Verify the simulation\nCheck that the new arrays show up in the data file:",
"group = '/timestamps'\n\nprint('Nodes in in %s:\\n' % group)\n\nprint(S.ts_store.h5file.get_node(group))\nfor node in S.ts_store.h5file.get_node(group)._f_list_nodes():\n print('\\t%s' % node.name)\n #print('\\t %s' % node.title)\n\n[t for t in S.timestamp_names if 'BG4000cps' in t]\n\nS.ts_store.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ModSim
|
soln/chap26.ipynb
|
gpl-2.0
|
[
"Chapter 26\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *",
"Case studies!\nBungee jumping\nSuppose you want to set the world record for the highest \"bungee dunk\",\nwhich is a stunt in which a bungee jumper dunks a cookie in a cup of tea\nat the lowest point of a jump. An example is shown in this video:\nhttp://modsimpy.com/dunk.\nSince the record is 70 m, let's design a jump for 80 m. We'll start with\nthe following modeling assumptions:\n\n\nInitially the bungee cord hangs from a crane with the attachment\n point 80 m above a cup of tea.\n\n\nUntil the cord is fully extended, it applies no force to the jumper.\n It turns out this might not be a good assumption; we will revisit\n it.\n\n\nAfter the cord is fully extended, it obeys Hooke's Law; that is, it\n applies a force to the jumper proportional to the extension of the\n cord beyond its resting length. See http://modsimpy.com/hooke.\n\n\nThe mass of the jumper is 75 kg.\n\n\nThe jumper is subject to drag force so that their terminal velocity\n is 60 m/s.\n\n\nOur objective is to choose the length of the cord, L, and its spring\nconstant, k, so that the jumper falls all the way to the tea cup, but\nno farther!\nIn the repository for this book, you will find a notebook,\nbungee.ipynb, which contains starter code and exercises for this case\nstudy.\nBungee dunk revisited\nIn the previous case study, we assume that the cord applies no force to\nthe jumper until it is stretched. It is tempting to say that the cord\nhas no effect because it falls along with the jumper, but that intuition\nis incorrect. 
As the cord falls, it transfers energy to the jumper.\nAt http://modsimpy.com/bungee you'll find a paper[^1] that explains\nthis phenomenon and derives the acceleration of the jumper, $a$, as a\nfunction of position, $y$, and velocity, $v$:\n$$a = g + \\frac{\\mu v^2/2}{\\mu(L+y) + 2L}$$ where $g$ is acceleration\ndue to gravity, $L$ is the length of the cord, and $\\mu$ is the ratio of\nthe mass of the cord, $m$, and the mass of the jumper, $M$.\nIf you don't believe that their model is correct, this video might\nconvince you: http://modsimpy.com/drop.\nIn the repository for this book, you will find a notebook,\nbungee2.ipynb, which contains starter code and exercises for this case\nstudy. How does the behavior of the system change as we vary the mass of\nthe cord? When the mass of the cord equals the mass of the jumper, what\nis the net effect on the lowest point in the jump?\nSpider-Man\nIn this case study we'll develop a model of Spider-Man swinging from a\nspringy cable of webbing attached to the top of the Empire State\nBuilding. Initially, Spider-Man is at the top of a nearby building, as\nshown in Figure [spiderman]{reference-type=\"ref\"\nreference=\"spiderman\"}.\n{height=\"2.8in\"}\nThe origin, O, is at the base of the Empire State Building. The vector\nH represents the position where the webbing is attached to the\nbuilding, relative to O. The vector P is the position of Spider-Man\nrelative to O. 
And L is the vector from the attachment point to\nSpider-Man.\nBy following the arrows from O, along H, and along L, we can see\nthat\nH + L = P\nSo we can compute L like this:\nL = P - H\nThe goals of this case study are:\n\n\nImplement a model of this scenario to predict Spider-Man's\n trajectory.\n\n\nChoose the right time for Spider-Man to let go of the webbing in\n order to maximize the distance he travels before landing.\n\n\nChoose the best angle for Spider-Man to jump off the building, and\n let go of the webbing, to maximize range.\n\n\nWe'll use the following parameters:\n\n\nAccording to the Spider-Man Wiki[^2], Spider-Man weighs 76 kg.\n\n\nLet's assume his terminal velocity is 60 m/s.\n\n\nThe length of the web is 100 m.\n\n\nThe initial angle of the web is 45 ° to the left of straight down.\n\n\nThe spring constant of the web is 40 N/m when the cord is stretched,\n and 0 when it's compressed.\n\n\nIn the repository for this book, you will find a notebook,\nspiderman.ipynb, which contains starter code. Read through the\nnotebook and run the code. It uses minimize, which is a SciPy function\nthat can search for an optimal set of parameters (as contrasted with\nminimize_scalar, which can only search along a single axis).\nKittens\nLet's simulate a kitten unrolling toilet paper. As reference material,\nsee this video: http://modsimpy.com/kitten.\nThe interactions of the kitten and the paper roll are complex. To keep\nthings simple, let's assume that the kitten pulls down on the free end\nof the roll with constant force. Also, we will neglect the friction\nbetween the roll and the axle.\n{height=\"2.5in\"}\nFigure [kitten]{reference-type=\"ref\" reference=\"kitten\"}\nshows the paper roll with $r$, $F$, and $\\tau$. As a vector quantity,\nthe direction of $\\tau$ is into the page, but we only care about its\nmagnitude for now.\nHere's the Params object with the parameters we'll need:",
"params = Params(Rmin = 0.02 * m,\n Rmax = 0.055 * m,\n Mcore = 15e-3 * kg,\n Mroll = 215e-3 * kg,\n L = 47 * m,\n tension = 2e-4 * N,\n t_end = 180 * s)",
"As before, Rmin is the minimum radius and Rmax is the maximum. L\nis the length of the paper. Mcore is the mass of the cardboard tube at\nthe center of the roll; Mroll is the mass of the paper. tension is\nthe force applied by the kitten, in N. I chose a value that yields\nplausible results.\nAt http://modsimpy.com/moment you can find moments of inertia for\nsimple geometric shapes. I'll model the cardboard tube at the center of\nthe roll as a \"thin cylindrical shell\", and the paper roll as a\n\"thick-walled cylindrical tube with open ends\".\nThe moment of inertia for a thin shell is just $m r^2$, where $m$ is the\nmass and $r$ is the radius of the shell.\nFor a thick-walled tube the moment of inertia is\n$$I = \\frac{\\pi \\rho h}{2} (r_2^4 - r_1^4)$$ where $\\rho$ is the density\nof the material, $h$ is the height of the tube, $r_2$ is the outer\nradius, and $r_1$ is the inner radius.\nSince the outer radius changes as the kitten unrolls the paper, we\nhave to compute the moment of inertia, at each point in time, as a\nfunction of the current radius, r. Here's the function that does it:",
"def moment_of_inertia(r, system):\n Mcore, Rmin = system.Mcore, system.Rmin\n rho_h = system.rho_h\n \n Icore = Mcore * Rmin**2 \n Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)\n return Icore + Iroll",
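As a standalone plausibility check of this function, the `system` argument is replaced here by hypothetical keyword defaults; `rho_h=250.0` is an assumed value, not the one computed later in make_system:

```python
from math import pi

def moment_of_inertia(r, Mcore=15e-3, Rmin=0.02, rho_h=250.0):
    # thin cylindrical shell (cardboard core): m * r^2
    Icore = Mcore * Rmin**2
    # thick-walled tube (paper), with density*height folded into rho_h
    Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
    return Icore + Iroll

# At r == Rmin the paper contributes nothing, so only the core term remains,
# and the total grows monotonically with the roll radius.
assert abs(moment_of_inertia(0.02) - 15e-3 * 0.02**2) < 1e-15
assert moment_of_inertia(0.055) > moment_of_inertia(0.02)
```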
"rho_h is the product of density and height, $\\rho h$, which is the\nmass per area. rho_h is computed in make_system:",
"def make_system(params):\n L, Rmax, Rmin = params.L, params.Rmax, params.Rmin\n Mroll = params.Mroll\n \n init = State(theta = 0 * radian,\n omega = 0 * radian/s,\n y = L)\n \n area = pi * (Rmax**2 - Rmin**2)\n rho_h = Mroll / area\n k = (Rmax**2 - Rmin**2) / 2 / L / radian \n \n return System(params, init=init, area=area, \n rho_h=rho_h, k=k)",
"make_system also computes k using\nEquation [eqn4]{reference-type=\"ref\" reference=\"eqn4\"}.\nIn the repository for this book, you will find a notebook,\nkitten.ipynb, which contains starter code for this case study. Use it\nto implement this model and check whether the results seem plausible.\nSimulating a yo-yo\nSuppose you are holding a yo-yo with a length of string wound around its\naxle, and you drop it while holding the end of the string stationary. As\ngravity accelerates the yo-yo downward, tension in the string exerts a\nforce upward. Since this force acts on a point offset from the center of\nmass, it exerts a torque that causes the yo-yo to spin.\n{height=\"2.5in\"}\nFigure [yoyo]{reference-type=\"ref\" reference=\"yoyo\"} is a\ndiagram of the forces on the yo-yo and the resulting torque. The outer\nshaded area shows the body of the yo-yo. The inner shaded area shows the\nrolled up string, the radius of which changes as the yo-yo unrolls.\nIn this model, we can't figure out the linear and angular acceleration\nindependently; we have to solve a system of equations: $$\\begin{aligned}\n\\sum F &= m a \\\\\n\\sum \\tau &= I \\alpha\\end{aligned}$$ where the summations indicate that\nwe are adding up forces and torques.\nAs in the previous examples, linear and angular velocity are related\nbecause of the way the string unrolls:\n$$\\frac{dy}{dt} = -r \\frac{d \\theta}{dt}$$ In this example, the linear\nand angular accelerations have opposite sign. As the yo-yo rotates\ncounter-clockwise, $\\theta$ increases and $y$, which is the length of\nthe rolled part of the string, decreases.\nTaking the derivative of both sides yields a similar relationship\nbetween linear and angular acceleration:\n$$\\frac{d^2 y}{dt^2} = -r \\frac{d^2 \\theta}{dt^2}$$ which we can write\nmore concisely: $$a = -r \\alpha$$ This relationship is not a general law\nof nature; it is specific to scenarios like this where one object rolls\nalong another without stretching or slipping.\nBecause of the way we've set up the problem, $y$ actually has two\nmeanings: it represents the length of the rolled string and the height\nof the yo-yo, which decreases as the yo-yo falls. Similarly, $a$\nrepresents acceleration in the length of the rolled string and the\nheight of the yo-yo.\nWe can compute the acceleration of the yo-yo by adding up the linear\nforces: $$\\sum F = T - mg = ma$$ where $T$ is positive because the\ntension force points up, and $mg$ is negative because gravity points\ndown.\nBecause gravity acts on the center of mass, it creates no torque, so the\nonly torque is due to tension: $$\\sum \\tau = T r = I \\alpha$$ Positive\n(upward) tension yields positive (counter-clockwise) angular\nacceleration.\nNow we have three equations in three unknowns, $T$, $a$, and $\\alpha$,\nwith $I$, $m$, $g$, and $r$ as known quantities. It is simple enough to\nsolve these equations by hand, but we can also get SymPy to do it for\nus:",
"T, a, alpha, I, m, g, r = symbols('T a alpha I m g r')\neq1 = Eq(a, -r * alpha)\neq2 = Eq(T - m*g, m * a)\neq3 = Eq(T * r, I * alpha)\nsoln = solve([eq1, eq2, eq3], [T, a, alpha])",
"The results are $$\\begin{aligned}\nT &= m g I / I^* \\\\\na &= -m g r^2 / I^* \\\\\n\\alpha &= m g r / I^*\n\\end{aligned}$$ where $I^*$ is the augmented\nmoment of inertia, $I^* = I + m r^2$. To simulate the system, we don't really\nneed $T$; we can plug $a$ and $\\alpha$ directly into the slope function.\nIn the repository for this book, you will find a notebook, yoyo.ipynb,\nwhich contains the derivation of these equations and starter code for\nthis case study. Use it to implement and test this model.\n[^1]: Heck, Uylings, and Kędzierska, \"Understanding the physics of\n bungee jumping\", Physics Education, Volume 45, Number 1, 2010."
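As a quick standalone check (with arbitrary illustrative values) that the solved expressions satisfy the three original equations:

```python
m, g, r, I = 0.05, 9.8, 0.006, 2e-5   # illustrative yo-yo parameters
I_star = I + m * r**2                  # augmented moment of inertia

T = m * g * I / I_star
a = -m * g * r**2 / I_star
alpha = m * g * r / I_star

assert abs(a + r * alpha) < 1e-12          # a = -r * alpha
assert abs((T - m * g) - m * a) < 1e-12    # sum F = m a
assert abs(T * r - I * alpha) < 1e-12      # sum tau = I alpha
```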
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jc091/deep-learning
|
first-neural-network/.ipynb_checkpoints/DLND Your first neural network-checkpoint.ipynb
|
mit
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor dummy_field in dummy_fields:\n dummies = pd.get_dummies(rides[dummy_field], prefix=dummy_field, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()\n\ndata.size",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor quant_feature in quant_features:\n mean, std = data[quant_feature].mean(), data[quant_feature].std()\n scaled_features[quant_feature] = [mean, std]\n data.loc[:, quant_feature] = (data[quant_feature] - mean)/std",
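The scaling and its inverse can be sketched on toy data (note that pandas `.std()` defaults to the sample standard deviation, `ddof=1`, while NumPy defaults to `ddof=0`; this sketch uses `ddof=1` to match):

```python
import numpy as np

data = np.array([10.0, 12.0, 14.0, 18.0])
mean, std = data.mean(), data.std(ddof=1)   # ddof=1 matches pandas .std()

scaled = (data - mean) / std                # forward: standardize
restored = scaled * std + mean              # backward: undo with saved mean/std

assert abs(scaled.mean()) < 1e-12
assert np.allclose(restored, data)
```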
"Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
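Before implementing the class below, it can help to verify the two derivatives the backward pass needs: the derivative of the output activation $f(x)=x$ is $1$, and the derivative of the sigmoid is $\sigma(x)(1-\sigma(x))$. A standalone check against a numerical derivative:

```python
import numpy as np

sigmoid = lambda x: 1 / (1 + np.exp(-x))

x, h = 0.5, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)   # central difference
analytic = sigmoid(x) * (1 - sigmoid(x))                # sigma * (1 - sigma)
assert abs(numeric - analytic) < 1e-8

# the output activation f(x) = x has slope 1 everywhere
f = lambda x: x
assert abs((f(x + h) - f(x - h)) / (2 * h) - 1.0) < 1e-6
```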
"class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n self.activation_function = lambda x : 1 / (1 + np.exp(-x)) \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. \n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n\n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # 
Output layer error is the difference between desired target and actual output.\n\n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output, error)\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error * 1\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n \n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[:, None]\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:, None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. 
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\niterations = 5000\nlearning_rate = 0.4\nhidden_nodes = 30\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"OPTIONAL: Thinking about your results(this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nThe model did a great job in predicting the data before Dec 22.\nIt started to fail after Dec 21.\nI guess it is due to the holiday.\nWe splited the last approximately 21 days from data set for testing instead of split the data randomly for training and testing."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/starthinker
|
colabs/twitter.ipynb
|
apache-2.0
|
[
"Twitter Targeting\nAdjusts line item settings based on Twitter hashtags and locations specified in a sheet.\nLicense\nCopyright 2020 Google LLC,\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nDisclaimer\nThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.\nThis code generated (see starthinker/scripts for possible source):\n - Command: \"python starthinker_ui/manage.py colab\"\n - Command: \"python starthinker/tools/colab.py [JSON RECIPE]\"\n1. Install Dependencies\nFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.",
"!pip install git+https://github.com/google/starthinker\n",
"2. Set Configuration\nThis code is required to initialize the project. Fill in required fields and press play.\n\nIf the recipe uses a Google Cloud Project:\n\nSet the configuration project value to the project identifier from these instructions.\n\n\nIf the recipe has auth set to user:\n\nIf you have user credentials:\nSet the configuration user value to your user credentials JSON.\n\n\n\nIf you DO NOT have user credentials:\n\nSet the configuration client value to downloaded client credentials.\n\n\n\nIf the recipe has auth set to service:\n\nSet the configuration service value to downloaded service credentials.",
"from starthinker.util.configuration import Configuration\n\n\nCONFIG = Configuration(\n project=\"\",\n client={},\n service={},\n user=\"/content/user.json\",\n verbose=True\n)\n\n",
"3. Enter Twitter Targeting Recipe Parameters\n\nClick Run Now and a sheet called Twitter Targeting will be generated with a tab called Twitter Triggers.\nFollow instructions on the sheets tab to provide triggers and lineitems.\nClick Run Now again, trends are downloaded and triggered.\nOr give these intructions to the client.\nModify the values below for your use case, can be done multiple times, then click play.",
"FIELDS = {\n 'auth_read':'user', # Credentials used for reading data.\n 'auth_write':'service', # Credentials used for writing data.\n 'recipe_name':'', # Name of sheet where Line Item settings will be read from.\n 'twitter_secret':'', # Twitter API secret token.\n 'recipe_slug':'', # Name of Google BigQuery dataset to create.\n 'twitter_key':'', # Twitter API key token.\n}\n\nprint(\"Parameters Set To: %s\" % FIELDS)\n",
"4. Execute Twitter Targeting\nThis does NOT need to be modified unless you are changing the recipe, click play.",
"from starthinker.util.configuration import execute\nfrom starthinker.util.recipe import json_set_fields\n\nTASKS = [\n {\n 'dataset':{\n 'description':'Create a dataset where data will be combined and transfored for upload.',\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':1,'description':'Place where tables will be created in BigQuery.'}}\n }\n },\n {\n 'sheets':{\n 'description':'Read mapping of hash tags to line item toggles from sheets.',\n 'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},\n 'template':{\n 'sheet':'https://docs.google.com/spreadsheets/d/1iYCGa2NKOZiL2mdT4yiDfV_SWV9C7SUosXdIr4NAEXE/edit?usp=sharing',\n 'tab':'Twitter Triggers'\n },\n 'sheet':{'field':{'name':'recipe_name','kind':'string','prefix':'Twitter Targeting For ','order':2,'description':'Name of sheet where Line Item settings will be read from.','default':''}},\n 'tab':'Twitter Triggers',\n 'range':'A7:E',\n 'out':{\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}},\n 'table':'Twitter_Triggers',\n 'schema':[\n {\n 'name':'Location',\n 'type':'STRING',\n 'mode':'REQUIRED'\n },\n {\n 'name':'WOEID',\n 'type':'INTEGER',\n 'mode':'REQUIRED'\n },\n {\n 'name':'Hashtag',\n 'type':'STRING',\n 'mode':'REQUIRED'\n },\n {\n 'name':'Advertiser_Id',\n 'type':'INTEGER',\n 'mode':'REQUIRED'\n },\n {\n 'name':'Line_Item_Id',\n 'type':'INTEGER',\n 'mode':'REQUIRED'\n }\n ]\n }\n }\n }\n },\n {\n 'twitter':{\n 'description':'Read trends from Twitter and place into BigQuery.',\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 
'secret':{'field':{'name':'twitter_secret','kind':'string','order':3,'default':'','description':'Twitter API secret token.'}},\n 'key':{'field':{'name':'twitter_key','kind':'string','order':4,'default':'','description':'Twitter API key token.'}},\n 'trends':{\n 'places':{\n 'single_cell':True,\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}},\n 'query':'SELECT DISTINCT WOEID FROM {dataset}.Twitter_Triggers',\n 'legacy':False,\n 'parameters':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}\n }\n }\n }\n },\n 'out':{\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}},\n 'table':'Twitter_Trends_Place'\n }\n }\n }\n },\n {\n 'google_api':{\n 'description':'Combine sheet and twitter data into API operations for each line item. Match all possibilities and PAUSE if no trigger match.',\n 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},\n 'api':'displayvideo',\n 'version':'v1',\n 'function':'advertisers.lineItems.patch',\n 'kwargs_remote':{\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},\n 'query':\" SELECT CAST(S.Advertiser_Id AS STRING) advertiserId, CAST(S.Line_Item_Id AS STRING) AS lineItemId, STRUCT( IF(LOGICAL_OR(T.Name is NULL), 'ENTITY_STATUS_ACTIVE', 'ENTITY_STATUS_PAUSED') AS entityStatus ) AS body, 'entityStatus' AS updateMask, FROM `{dataset}.Twitter_Triggers` AS S LEFT JOIN `{dataset}.Twitter_Trends_Place` As T ON S.WOEID=T.WOEID AND REPLACE(LOWER(S.Hashtag), '#', '')=REPLACE(LOWER(T.Name), '#', '') GROUP BY 1,2 \",\n 'parameters':{\n 
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}\n }\n }\n },\n 'results':{\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},\n 'table':'Trigger_Results'\n }\n },\n 'errors':{\n 'bigquery':{\n 'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},\n 'table':'Trigger_Errors'\n }\n }\n }\n }\n]\n\njson_set_fields(TASKS, FIELDS)\n\nexecute(CONFIG, TASKS, force=True)\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jdnz/qml-rg
|
Meeting 7/qutipHHL.ipynb
|
gpl-3.0
|
[
"from __future__ import print_function, division\nimport qutip as qt\nimport numpy as np\nimport scipy.linalg as sp\nimport math\nimport cmath\nπ = math.pi\n\ndef preprocess(mat, vec):\n\n if mat.shape[0] != mat.shape[1] or mat.shape[0] != vec.shape[0]:\n raise Exception(\"Matrix A should be square and b should have a matching dimension!\")\n \n if not mat.isherm:\n zero_block = np.zeros(mat.shape)\n mat = qt.Qobj(np.bmat([[zero_block, mat.full()] , \n [mat.dag().full(), zero_block]]))\n vec = qt.Qobj(np.hstack([b_init.full().flatten(), zero_block[0]]))\n \n ### Normalise b and remember normalisation factor\n if vec.norm() != 1:\n nf = vec.norm()\n vec = vec / nf\n else:\n nf = 1\n\n return mat, vec, nf\n\ndef qft(N):\n mat = 1 / math.sqrt(N) * qt.Qobj([[cmath.exp(1j * 2 * π * i * j / N)\n for i in range(N)] for j in range(N)])\n return mat\n\ndef angle(k, t0, C):\n return math.asin(- C * t0 /(2 * π * k)) # Sign is for appropriate phases in the rotation, but should not affect the solution\n\n\ndef rot(k, t0, C):\n return qt.Qobj([[math.cos(angle(k, t0, C)), math.sin(angle(k, t0, C))],\n [- math.sin(angle(k, t0, C)), math.cos(angle(k, t0, C))]])\n\n# def inv(Quantobj):\n# mat = Quantobj.full()\n# matinv = np.linalg.inv(mat)\n# invQobj = qt.Qobj(matinv)\n# invQobj.dims = Quantobj.dims\n# return invQobj\n\nA_init = qt.Qobj([[0.2, 0.1],[0.1, 0.2]])\nb_init = qt.Qobj([[0.2] , [0.7]])\nprec = 1000 # Choose the dimension of the ancillary state\n\n# Produce a Hermitian matrix A given the problem matrix A_int\n# and a unit-length vector b given the problem vector b_int \nA, b, nf1 = preprocess(A_init, b_init)\n\neigenvals, eigenvecs = A.eigenstates() # The eigendecomposition of A\n\n# Condition number, of use for estimating constants\nκ = eigenvals.max() / eigenvals.min()\n\n# Additive error achieved in the estimation of |x>, of use for estimating constants\nϵ = 0.2",
"STEP 1: STATE PREPARATION OF b\n[HHL: page 2, column 2, center]\nWrite b in the eigenbasis of A",
"# Create the matrix that diagonalizes A\ndiagonalizer = qt.Qobj(np.array([eigenvecs[i].full().T.flatten()\n for i in range(len(eigenvals))]))\nb = diagonalizer * b\nA = diagonalizer.dag() * A * diagonalizer",
"STEP 2: QUANTUM SIMULATION AND QUANTUM PHASE ESTIMATION\n[HHL: page 2, column 2, bottom]",
"T = prec\nt0 = κ / ϵ # It should be O(κ/ϵ), whatever that means\nψ0 = qt.Qobj([[math.sqrt(2 / T) * math.sin(π * (τ + 0.5) / T)] \n for τ in range(T)])\n\n# Order is b, τ, and then ancilla\nevo = qt.tensor(qt.identity(A.shape[0]), qt.ket2dm(qt.basis(T, 0))) \n\nfor τ in range(1, T):\n evo += qt.tensor((1j * A * τ * t0 / T).expm(), qt.ket2dm(qt.basis(T, τ)))\n\nψev = evo * qt.tensor(b, ψ0)\n\nftrans = qt.tensor(qt.identity(b.shape[0]), qft(T))\n\nψfourier = ftrans * ψev # This is Eq. 3 in HHL",
"STEP 2-1: Making true the assumption of perfect phase estimation",
"# w = (ψfourier[:T] / b[0]).argmax()\n# prj = qt.ket2dm(qt.basis(T, w))\n\n# ψfourier = qt.tensor(qt.identity(b.shape[0]), prj) * ψfourier",
"STEP 3-1: Conditional Rotation of Ancilla",
"total_state = qt.tensor(ψfourier, qt.basis(2, 0)) # Add ancilla for swapping\n\nC = 1 / κ # Constant, should be O(1/κ)\n\n# Do conditional rotation only on τ and ancilla\nrotation = qt.tensor(qt.ket2dm(qt.basis(T, 0)), qt.identity(2)) \n\nfor τ in range(1, T):\n rotation += qt.tensor(qt.ket2dm(qt.basis(T, τ)), rot(τ, t0, C))\n \nfinal_state = qt.tensor(qt.identity(b.shape[0]), rotation) * total_state",
"STEP 3-2: Post-selection on $\\left|1\\right\\rangle$ on the ancilla register\nWe perform the post-selection by projecting onto the desired ancilla state and later tracing out the ancilla and the $\\left|\\psi_0\\right\\rangle$ registers.",
"projector = qt.tensor(qt.identity(b.shape[0]), qt.identity(T), qt.ket2dm(qt.basis(2, 1)))\npostsel = projector * final_state\nprob1 = qt.expect(projector, final_state)\n\n# Trace out ancilla and τ registers, leaving only the b register\nfinalstate = qt.ket2dm(postsel).ptrace([0]) / prob1\nfinalstate.eigenenergies()",
"$\\left|finalstate\\right\\rangle$ is essentially a pure state (it should be if all the process was perfect), so now we just isolate that main part, that is our $\\left|x\\right\\rangle$ (after a transformation if our original A was not Hermitian) in the basis that diagonalizes $A$.",
"fsevls, fsevcs = finalstate.eigenstates()\nx = math.sqrt(fsevls.max()) * fsevcs[fsevls.argmax()]\nx",
"This state is supposed to be $\\left|x\\right\\rangle=A^{-1}\\left|b\\right\\rangle=\\sum_j \\beta_j\\lambda_j^{-1}\\left|u_j\\right\\rangle$, although I don't understand why it has complex numbers.\nIMPORTANT NOTE: If A_init is not Hermitian, the output $\\left|x\\right\\rangle$ has dimension double than the input b_init. It is to be interpreted as the representation of the vector (0, x) in the basis that diagonalizes $C = [[0,\\,A], [A^{\\dagger},\\, 0]]$\nEq. (A26) in HHL is wrong, but Eq. (A27) is right. The global sign in Eq. (A29) is wrong, as well as the $\\sqrt{2}$ (it should be in the denominator). The problem with the $\\sqrt{2}$ carries until Eq. (A31)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wise-r/R-Topics
|
7.Crawler/Presentation/爬虫工具.ipynb
|
mit
|
[
"rvest & urllib2 简介\nFibears\n2016.03.29",
"# 载入rpy2.ipython\n# rpy2提供了Python和R之间的交互环境\n%load_ext rpy2.ipython",
"rvest\nrvest是大神Hadley的作品,对于结构比较良好的网页,利用rvest, CSS/XPath选择器和管道符号来处理效率最高。\nrvest包里面主要有以下几个函数:\n\nread_html(x, ..., encoding = \"\", as_html = FALSE): 既可以从网络中获取html文档,也可以从本地中载入html文档;\nhtml_nodes(x, css, xpath): 利用css和xpath选择器从html文档中提取出节点信息;\nhtml_text(x): 提取所有满足条件的文本信息;\nhtml_attrs(x): 提取所有满足条件的属性信息;\nhtml_table(x, header = NA, trim = TRUE, fill = FALSE, dec = \".\"): 提取表格信息;\nhtml_session(), jump_to(), follow_link(), back(), forward(): 这些都是用于模拟浏览网站",
"%%R\nlibrary(rvest)\n# vignette(\"selectorgadget\")\nlego_movie <- read_html(\"http://www.imdb.com/title/tt1490017/\")\n\n%%R\nlego_movie\n\n%%R\nrating <- lego_movie %>% \n html_nodes(\"strong span\")\nrating\n\n%%R\ncast <- lego_movie %>%\n html_nodes(\"#titleCast .itemprop span\") %>%\n html_text()\ncast\n\n%%R\nposter <- lego_movie %>%\n html_nodes(xpath=\"//div[@class='poster']/a/img\") %>%\n html_attr(\"src\")\nposter",
"urllib*模块\n首先,urllib*中包含了urllib, urllib2和urllib3等几个模块。其中urllib3是第三方扩展模块,所以在这里我们只讨论前两个Python自带模块。那么这两个模块之间到底有何区别呢?\n官方文档对urllib和urllib2分别是这样描述的:“通过url打开任意资源”和“打开url的拓展库”。urllib2主要用于处理一些更复杂的操作,比如操作相关的一些认证、重定向和cookie等等。\nurllib模块\nurllib模块中主要有以下几个方法:\n\nurllib.urlopen(url, data, proxies): 向url发出一个请求,并获取服务器返回的文件对象;",
"import urllib\nf = urllib.urlopen('http://www.baidu.com')\nfirstLine = f.readline()\nfirstLine",
"urllib.urlretrieve(url, filename, reporthook, data): 将url对应的html文件下载到本地电脑中;",
"file = urllib.urlretrieve('http://www.baidu.com/', filename='baidu.html')\nfile",
"quote(): 将url中的特殊字符或汉字encode成指定编码;\n\n\nunquote(): 将url中的url编码解码;\n\n\nurllib.urlencode(query): 将URL中的数据对以连接符&连接起来,作为post方法和get方法的请求参数",
"urllib.quote('经济学院')\n\nprint urllib.unquote('%E7%BB%8F%E6%B5%8E%E5%AD%A6%E9%99%A2')\n\nData = urllib.urlencode({'UserName':'fibears','PassWd':123456})\nData\n\n# GET 方法\n# GET方式是直接以链接形式访问,链接中包含了所有的参数。\nresponse = urllib.urlopen(\"http://event.wisesoe.com/Logon.aspx\" + '?' + Data)\n# response.read()\n\n\"http://event.wisesoe.com/Logon.aspx\" + '?' + Data\n\n# POST 方法\n# POST将参数以变量的形式传递给处理器,所以不会在网址上显示所有的参数。\nresponse = urllib.urlopen(\"http://event.wisesoe.com/Logon.aspx\",Data)\n# response.read()",
"urllib2模块\nurllib2模块中主要有以下几个方法:\n\nurllib2.urlopen(url, data, timeout): 向url发出一个请求,并获取服务器返回的文件对象,该方法还可以接受一个Request类的实例;",
"import urllib2\nresponse1 = urllib2.urlopen(\"http://www.baidu.com\")\n# print response1.read()",
"urllib2.Request(url, data, headers): 构建一个请求(request)对象;",
"request = urllib2.Request('http://www.baidu.com')\nresponse2 = urllib2.urlopen(request)\n# print response2.read()\n\n# GET 方法\nData = urllib.urlencode({'UserName':'fibears','PassWd':123456})\nGetUrl = \"http://event.wisesoe.com/Logon.aspx\" + '?' + Data\nprint GetUrl\nrequest = urllib2.Request(GetUrl)\nresponse = urllib2.urlopen(request)\n\n# POST 方法\nData = urllib.urlencode({'UserName':'fibears','PassWd':123456})\nUrl = \"http://event.wisesoe.com/Logon.aspx\"\nrequest = urllib2.Request(Url, Data)\nresponse = urllib2.urlopen(request)",
"有些网站不会同意程序直接用上面的方式进行访问,站点根本不会响应我们所发出的简单请求,所以为了完全模拟浏览器的工作,我们需要设置一些Headers的属性。以下是几个常用的 Headers 属性:\n- \"User-Agent\": 表明了你的浏览器版本和系统信息;\n- \"Host\": 代表基本的主机名;\n- \"Cookie\": 浏览器所存储的Cookie信息;\n- \"Referer\": 主要用于让服务器判断来源页面, 即用户是从哪个页面跳转过来的;",
"Data = urllib.urlencode({'UserName':'fibears','PassWd':123456})\nuser_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0'\nheaders = {'User-Agent' : user_agent,\n 'Referer': 'http://event.wisesoe.com/Authenticate.aspx?returnUrl=/LectureOrder.aspx'}\nUrl = \"http://event.wisesoe.com/LectureOrder.aspx\"\nrequest = urllib2.Request(Url, Data, headers)\nresponse = urllib2.urlopen(request)",
"有的网站会检测某一段时间内某个 IP 访问网页的次数,如果访问过于频数,它会禁止该 IP 对网页的访问。这种情况下,我们通常有两种处理办法,一是设置延迟机制,降低爬虫程序发出请求的频率;二是设置代理服务器,每隔一段时间更换一个代理,这样就不会被躲过网站的检测机制。我们可以利用urllib2.ProxyHandler方法来设置代理:\n\nurllib2.ProxyHandler({\"http\" : 'http://some-proxy.com:8080'})",
"# 设置代理\nimport urllib2\n\nproxy_handler = urllib2.ProxyHandler({\"http\" : 'http://some-proxy.com:8080'})\nopener = urllib2.build_opener(proxy_handler)\n# urllib2.install_opener(opener)",
"Cookie的使用方法\nCookie指某些网站为了辨别用户身份、进行session跟踪而储存在用户本地终端上的数据。\n有些网站需要登录后才能访问某个页面,在登录之前,我们无法获取网页的内容。这种情况下我们可以利用urllib2库保存登录网页的Cookie,然后再利用该Cookie来抓取其他页面。\n步骤:\n\n构建一个带有Cookie的处理器\n模拟登陆网页\n获取Cookie\n将Cookie保存到本地\n读取Cookie并构建用于访问网页的opener\n发出HTTP请求,并得到返回的响应文件",
"import urllib\nimport urllib2\nimport cookielib\n\n# 声明cookie文件的存储路径\nCookieFile = \"cookie.txt\"\n\n# 构建一个MozillaCookieJar对象来保存cookie文件\ncookie = cookielib.MozillaCookieJar(CookieFile)\n\n# urlopen()方法就是一个特殊的opener\n# 构建一个带有Cookie的处理器opener\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))\n\n# 利用处理器opener发出HTTP请求\nData = urllib.urlencode({'UserName':'fibears','PassWd':123456})\nuser_agent = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10.11; rv:45.0) Gecko/20100101 Firefox/45.0'\nheaders = {'User-Agent' : user_agent,\n 'Referer': 'http://event.wisesoe.com/Authenticate.aspx?returnUrl=/LectureOrder.aspx'}\nLectureUrl = \"http://event.wisesoe.com/LectureOrder.aspx\"\nrequest = urllib2.Request(LectureUrl, Data, headers)\nresponse = opener.open(request)\nprint cookie\n\n# 将Cookie保存到本地\n# ignore_discard: 即使浏览网页过程中cookie被丢弃也将其保存下来\n# ignore_expires: 对于文件中已经存在的cookie,将其覆盖并写入新的信息\ncookie.save(ignore_discard=True, ignore_expires=True)\n\n# 读取Cookie并构建用于访问网页的opener\ncookie = cookielib.MozillaCookieJar()\ncookie.load('cookie.txt', ignore_discard=True, ignore_expires=True)\nopener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookie))\n\n# 发出HTTP请求,并得到返回的响应文件\nrequest = urllib2.Request(LectureUrl, Data, headers)\nresponse = opener.open(request)\n# print response.read()",
"豆瓣电影爬虫程序\n接下来我以结构比较简单的豆瓣网站为例,分别介绍如何利用rvest和urllib2从网上爬取数据。",
"%%R\nlibrary(RCurl)\nlibrary(rvest)\nlibrary(stringr)\nlibrary(plyr)\nlibrary(dplyr)\n\n%%R\n# 爬取豆瓣电影TOP250的数据\n# 获取豆瓣电影首页URL\nDoubanUrl <- 'http://movie.douban.com/top250'\n\n# 从首页中获取所有页面的URL\nPageUrlList <- read_html(DoubanUrl) %>% \n html_nodes(xpath = \"//div[@class='paginator']/a\") %>% \n html_attr(\"href\") %>% \n str_c(DoubanUrl, ., sep=\"\") %>% c(DoubanUrl,.)\nPageUrlList\n\n%%R\n# 从每个PageUrl中提取出每部电影的链接\nMovieUrl <- NULL\nfor (url in PageUrlList) {\n item = read_html(url) %>% \n html_nodes(xpath=\"//div[@class='hd']/a\") %>% \n html_attrs(\"href\")\n MovieUrl = c(MovieUrl, item)\n}\nhead(MovieUrl,5)\n\n%%R\n# 从每个MovieUrl中提取出最终的数据\n## 定义函数Getdata,用于获取数据并输出dataframe格式\nGetImdbScore <- function(url){\n ImdbScore = read_html(url) %>% \n html_nodes(xpath = \"//span[@itemprop='ratingValue']/text()\") %>% \n html_text()\n return(ImdbScore)\n}\nGetdata <- function(url){\n Movie = url\n if(url.exists(url)){\n MovieHTML = read_html(url, encoding = 'UTF-8')\n Rank = html_nodes(MovieHTML, xpath = \"//span[@class='top250-no']/text()\") %>% html_text()\n MovieName = html_nodes(MovieHTML, xpath = \"//span[@property='v:itemreviewed']/text()\") %>% html_text()\n Director = html_nodes(MovieHTML, xpath = \"//a[@rel='v:directedBy']/text()\") %>% \n html_text() %>% paste(collapse = \";\")\n Type = html_nodes(MovieHTML, xpath = \"//span[@property='v:genre']/text()\") %>% \n html_text() %>% paste(collapse = \";\")\n Score = html_nodes(MovieHTML, xpath = \"//strong[@property='v:average']/text()\") %>% html_text()\n ImdbUrl = html_nodes(MovieHTML, xpath = \"//a[contains(@href,'imdb')]/@href\") %>% html_text()\n ImdbScore = GetImdbScore(ImdbUrl) \n Description = html_nodes(MovieHTML, xpath = \"//span[@property='v:summary']/text()\") %>% \n html_text() %>% str_replace(\"\\n[\\\\s]+\", \"\") %>% paste(collapse = \";\")\n data.frame(Rank, Movie, MovieName, Director, Type, Score, ImdbScore, Description)\n }\n}\n\n%%R\n## 抓取数据\nDouban250 <- data.frame()\nfor (i in 1:2) {\n Douban250 = 
rbind(Douban250, Getdata(MovieUrl[i]))\n print(paste(\"Movie\",i,sep = \"-\"))\n Sys.sleep(round(runif(1,1,3)))\n}\nDouban250\n# for (i in 1:length(MovieUrl)) {\n# Douban250 = rbind(Douban250, Getdata(MovieUrl[i]))\n# print(paste(\"Movie\",i,sep = \"-\"))\n# Sys.sleep(round(runif(1,1,3)))\n# }\n\n\n%%R\n# 豆瓣API\nurl <- \"https://api.douban.com/v2/movie/1292052\"\nlibrary(rvest)\nresult <- read_html(url)\nresult <- html_nodes(result, \"p\") %>% html_text()\n\n%%R\nclass(result)\n\n%%R\n# 这是json(javascript online notation)格式的文件,可以利用rjson中的函数fromJSON将其转化为结构化的数据。\nMovie = rjson::fromJSON(result)\n\n%%R\nMovie"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NeuroDataDesign/pan-synapse
|
pipeline_1/background/ConnectedComponents_Algorithms.md.ipynb
|
apache-2.0
|
[
"Algorithm\nDescription:\nConnected components takes in a binary volume containing clusters and generates labels for those clusters. Specifically, it generates labels such that every voxel within a cluster has the same label as the other voxels within that same clusters, but a different label than all the voxels within all the other clusters. It then turns each of these components into an object of type Cluster.\nInputs: \nthe input volume \nOutputs:\nthe list of clusters\nPseudocode\nOverall Structures:\n1. Run Connected Components to label distinct clusters \n2. For each label, find which indices of the matrix are equal to that label and put them in a memberList \n3. Instantiate an object of type Cluster containing the memberList from (2) as its members \nPseudocode for Step 1:\n linked = []\n labels = structure with dimensions of data, initialized with the value of Background\n\n First pass\n\n for row in data:\n for column in row:\n if data[row][column] is not Background\n\n neighbors = connected elements with the current element's value\n\n if neighbors is empty\n linked[NextLabel] = set containing NextLabel\n labels[row][column] = NextLabel\n NextLabel += 1\n\n else\n\n Find the smallest label\n\n L = neighbors labels\n labels[row][column] = min(L)\n for label in L\n linked[label] = union(linked[label], L)\n\n Second pass\n\n for row in data\n for column in row\n if data[row][column] is not Background\n labels[row][column] = find(labels[row][column])\n\n return labels\n\nActual Code",
"import itertools\nimport sys\nsys.path.insert(0, '../code/functions/')\nimport connectLib as cLib\n\ndef connectedComponents(volume):\n # the connectivity structure matrix\n s = [[[1 for k in xrange(3)] for j in xrange(3)] for i in xrange(3)]\n \n # find connected components\n labeled, nr_objects = ndimage.label(volume, s) \n #change them to object type Cluster\n if nr_objects == 1: \n nr_objects += 1\n clusterList = []\n labelTime = 0\n clusterTime = 0\n \n for label in range(0, nr_objects):\n \n start_time = time.time()\n memberList = np.argwhere(labeled == label)\n labelTime += time.time() - start_time\n \n start_time = time.time()\n if not len(memberList) == 0:\n clusterList.append(Cluster(memberList))\n clusterTime += time.time() - start_time\n print 'Member-Find Time: ' + str(labelTime)\n print 'Cluster Time: ' + str(clusterTime)\n\n return clusterList",
"The Cluster Class for Reference:",
"import numpy as np\nimport math\n\nclass Cluster:\n def __init__(self, members):\n self.members = members\n self.volume = self.getVolume()\n\n def getVolume(self):\n return len(self.members)\n\n def getCentroid(self):\n unzipList = zip(*self.members)\n listZ = unzipList[0]\n listY = unzipList[1]\n listX = unzipList[2]\n return [np.average(listZ), np.average(listY), np.average(listX)]\n\n def getStdDeviation(self):\n unzipList = zip(*self.members)\n listZ = unzipList[0]\n listY = unzipList[1]\n listX = unzipList[2]\n listOfDistances = []\n for location in self.members:\n listOfDistances.append(math.sqrt((location[0]-self.centroid[0])**2 + (location[1]-self.centroid[1])**2 + (location[2]-self.centroid[2])**2))\n stdDevDistance = np.std(listOfDistances)\n return stdDevDistance\n\n def probSphere(self):\n unzipList = zip(*self.members)\n listZ = unzipList[0]\n listY = unzipList[1]\n listX = unzipList[2]\n volume = ((max(listZ) - min(listZ) + 1)*(max(listY) - min(listY) + 1)*(max(listX) - min(listX) + 1))\n ratio = len(self.members)*1.0/volume\n return 1 - abs(ratio/(math.pi/6) - 1)\n\n def getMembers(self):\n return self.members",
"Connected Components Conditions\nConnected Components would work well under the conditions that the input volume contains seperable, non-overlapping, sparse clusters and that the input volume is in binary-form (i.e. the values of the background voxels are 0's and the value of the foreground voxels are all positive integers). \nConnected Components would work poorly if the volume is not binary (i.e. the values of the background voxels are anything besides 0) or if the clusters are dense or in any way neighboring eachother. \nPredictable Data Sets\nThe Good Data Set:\nDescription: The good data set is a 1000 x 1000 x 100 volume containing 1875 clusters of size 125 with value of 1. Every other value in the volume is 0. \nPlot: I will plot the data at z=5 because it provides better visualization.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nclusterGrid = np.zeros((100, 1000, 1000))\nfor i in range(40):\n for j in range(40):\n for k in range(40):\n clusterGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1\n \nplt.imshow(clusterGrid[5])\nplt.axis('off')\nplt.title('Slice at z=5')\nplt.show()",
"Prediction: I predict that this volume will be perfectly segmented into 1875 clusters.\nThe Difficult Data Set:\nDescription: The good data set is a 1000 x 1000 x 100 volume containing 1875 clusters of size 125 with value of 2. Every other value in the volume is 1. In other words, the image is not binarized.\nPlot: I will plot the data at z=5 because it provides better visualization.",
"clusterGrid = clusterGrid + 1\nplt.imshow(clusterGrid[5])\nplt.axis('off')\nplt.title('Slice at z=5')\nplt.show()",
"Prediction: I predict that the entire volume will be segmented into one big component.\nSimulation\nToy Data Generation\nThe Good Data Set:",
"simEasyGrid = np.zeros((100, 100, 100))\nfor i in range(4):\n for j in range(4):\n for k in range(4):\n simEasyGrid[20*(2*j): 20*(2*j + 1), 20*(2*i): 20*(2*i + 1), 20*(2*k): 20*(2*k + 1)] = 1",
"Predicting what good data will look like: I believe the good data will look like a grid of 27 cubes, 9 in each slice that contains clusters.",
"plt.imshow(simEasyGrid[5])\nplt.axis('off')\nplt.show()",
"Visualization relative to prediction: As predicted, the good data looks like a grid of cubes, 9 in each slice that contains clusters.\nThe Difficult Data Set:",
"simDiffGrid = simEasyGrid + 1",
"Predicting what difficult data will look like: I believe the good data will look like a grid of 27 cubes, 9 in each slice that contains clusters.",
"plt.imshow(simDiffGrid[5])\nplt.axis('off')\nplt.show()",
"Visualization relative to prediction: As predicted, the difficult data looks like a grid of cubes, 9 in each slice that contains clusters.\nToy Data Analysis\nGood Data Prediction: \nI predict that the good data will segment the easy simulation into 27 clusters very quickly.",
"def connectAnalysis(rawData, expected):\n start_time = time.time()\n clusterList = connectedComponents(rawData)\n print \"time taken to label: \" + str((time.time() - start_time)) + \" seconds\"\n print \"Number of connected components:\\n\\tExpected: \" + expected + \"\\n\\tActual: \" + str(len(clusterList))\n displayIm = np.zeros_like(rawData)\n for cluster in range(len(clusterList)):\n for member in range(len(clusterList[cluster].members)):\n z, y, x = clusterList[cluster].members[member]\n displayIm[z][y][x] = cluster\n\n plt.imshow(displayIm[0])\n plt.axis('off')\n plt.show()\n\nconnectAnalysis(simEasyGrid, '27')",
"Results of Good Data Relative to Predictions: As expected, the volume was segmented into 27 seperate clusters very quickly.\nRepeating the Good Data Simulation:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pylab\nlabeledLengths = []\ntimes = []\n\nfor i in range(10):\n start_time = time.time()\n clusterList = connectedComponents(simEasyGrid)\n labeledLengths.append(len(clusterList))\n times.append((time.time() - start_time))\n \n\npylab.hist(labeledLengths, normed=1)\npylab.xlabel('Number of Components')\npylab.ylabel('Number of Trials')\npylab.show()\nprint 'Average Number of Components on Easy Simulation Data:\\n\\tExpected: 27\\tActual: ' + str(np.mean(labeledLengths))\n\n\npylab.hist(times, normed=1)\npylab.xlabel('Time Taken to Execute')\npylab.ylabel('Number of Trials')\nplt.show()\nprint 'Average Time Taken to Execute: ' + str(np.mean(times))",
"Difficult Data Prediction: I predict the difficult data will be segmented into 1 big cluster.",
"connectAnalysis(simDiffGrid, '1')",
"Results of Difficult Data Result Relative to Prediction: As expected, the volume was segmented into one big component.\nRepeating the Difficult Data Simulation:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pylab\nlabeledLengths = []\ntimes = []\n\nfor i in range(10):\n start_time = time.time()\n clusterList = connectedComponents(simDiffGrid)\n labeledLengths.append(len(clusterList))\n times.append((time.time() - start_time))\n \n\npylab.hist(labeledLengths, normed=1)\npylab.xlabel('Number of Components')\npylab.ylabel('Number of Trials')\npylab.show()\nprint 'Average Number of Components on Difficult Simulation Data:\\n\\tExpected: 27\\tActual: ' + str(np.mean(labeledLengths))\n\n\npylab.hist(times, normed=1)\npylab.xlabel('Time Taken to Execute')\npylab.ylabel('Number of Trials')\nplt.show()\nprint 'Average Time Taken to Execute: ' + str(np.mean(times))",
"Summary of Performances: Connected Components performed extremely well on the easy simulation, correctly detecting 27 components very quickly for every trial. It also performed poorly as expected on the difficult simulation, connecting 1 component for every trial\nReal Data\nSynthetic Data Analysis\nDescription: Validation testing will be performed on a a 100x100x100 volume with a pixel intensity distribution approximately the same as that of the true image volumes (i.e., 98% background, 2% synapse). The synapse pixels will be grouped together in clusters as they would in the true data. Based on research into the true size of synapses, these synthetic synapse clusters will be given area of ~1 micron ^3, or about 139 voxels (assuming the synthetic data here and the real world data have identical resolutions). After the data goes through the algorithm, I will gauge performance based on the following:\nnumber of clusters (should be about 500)\nvolumetric density of data (should be about 2% of the data)\nPlotting Raw Synthetic Data:",
"from random import randrange as rand\n\ndef generatePointSet():\n center = (rand(0, 99), rand(0, 99), rand(0, 99))\n toPopulate = []\n for z in range(-1, 5):\n for y in range(-1, 5):\n for x in range(-1, 5):\n curPoint = (center[0]+z, center[1]+y, center[2]+x)\n #only populate valid points\n valid = True\n for dim in range(3):\n if curPoint[dim] < 0 or curPoint[dim] >= 100:\n valid = False\n if valid:\n toPopulate.append(curPoint)\n return set(toPopulate)\n \ndef generateTestVolume():\n #create a test volume\n volume = np.zeros((100, 100, 100))\n myPointSet = set()\n for _ in range(rand(500, 800)):\n potentialPointSet = generatePointSet()\n #be sure there is no overlap\n while len(myPointSet.intersection(potentialPointSet)) > 0:\n potentialPointSet = generatePointSet()\n for elem in potentialPointSet:\n myPointSet.add(elem)\n #populate the true volume\n for elem in myPointSet:\n volume[elem[0], elem[1], elem[2]] = 60000\n #introduce noise\n noiseVolume = np.copy(volume)\n for z in range(noiseVolume.shape[0]):\n for y in range(noiseVolume.shape[1]):\n for x in range(noiseVolume.shape[2]):\n if not (z, y, x) in myPointSet:\n noiseVolume[z][y][x] = rand(0, 10000)\n return volume\n\nforeground = generateTestVolume()\n\n\n#displaying the random clusters\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nz, y, x = foreground.nonzero()\nax.scatter(x, y, z, zdir='z', c='r')\nplt.title('Random Foreground Clusters')\nplt.show()",
"Expectation for Synthetic Data: I expect that the Connected Components will detect around 500 clusters.\nRunning Algorithm on Synethetic Data:",
"print 'Analysis Before My Adjusting'\nconnectAnalysis(foreground, \"Around 500\")\n\nprint 'Analysis After Adjusting Clustering Instantiation'\nconnectAnalysis(foreground, \"Around 500\")",
"Results on Synthetic Data Relative to Prediction: The data correctly detected around 500 connected components. More importantly, it did so extremely quickly.\nReal Data Analysis\nVisualizing Real Data Subset:",
"import sys\nsys.path.insert(0,'../code/functions/')\nimport tiffIO as tIO\nimport connectLib as cLib\nimport plosLib as pLib\n\ndataSubset = tIO.unzipChannels(tIO.loadTiff('../data/SEP-GluA1-KI_tp1.tif'))[0][0:5]\nplt.imshow(dataSubset[0], cmap=\"gray\")\nplt.show()",
"Predicting Performance of Subset: I predict that the data will be segmented into roughly 2000 synapses, hopefully in under a minute.\nRunning the Algorithm on Real Data Subset:",
"#finding the clusters after plosPipeline\nplosOutSub = pLib.pipeline(dataSubset)\n\n#binarize output of plos lib\nbianOutSub = cLib.otsuVox(plosOutSub)\n\n#dilate the output based on neigborhood size\nbianOutSub = ndimage.morphology.binary_dilation(bianOutSub).astype(int)\n\nprint 'Analysis Before My Adjusting'\nconnectAnalysis(bianOutSub, 'Around 2 thousand')\n\nprint 'Analysis After Adjusting Cluster class'\nconnectAnalysis(bianOutSub, 'Around 2 thousand')",
"Performance of Subset Relative to Predictions: As expected, Connected Components picked up around 2 thousand cluster in under a minute. Furthermore, my changes to the Cluster class cut the total time to 36 seconds down from 87 by reducing the Cluster Time from 51 seconds to .018 seconds.\nExpectations in Relation to Other Data Sets: I expect that this version of Connected Components will work well for all data sets that are binary (that is, 0's for background, any positive integer for foreground). It also seems that the algorithm performs particularly quickly for data sets with fewer labels. \nWays to Improve: To expand to larger data sets, we would need to find an efficient way to search through a matrix for specific values with the knowledge that same-valued-indices will all be near eachother in clusters."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/recommendation_systems/solutions/2_als_bqml.ipynb
|
apache-2.0
|
[
"Collaborative filtering on the MovieLense Dataset\nLearning Objectives\n\nKnow how to explore the data using BigQuery\nKnow how to use the model to make recommendations for a user\nKnow how to use the model to recommend an item to a group of users\n\nThis notebook is based on part of Chapter 9 of BigQuery: The Definitive Guide by Lakshmanan and Tigani.\nMovieLens dataset\nTo illustrate recommender systems in action, let’s use the MovieLens dataset. This is a dataset of movie reviews released by GroupLens, a research lab in the Department of Computer Science and Engineering at the University of Minnesota, through funding by the US National Science Foundation.\nDownload the data and load it as a BigQuery table using:",
"PROJECT = !(gcloud config get-value core/project)\nPROJECT = PROJECT[0]\n\n%env PROJECT=$PROJECT\n\n%%bash\nrm -r bqml_data\nmkdir bqml_data\ncd bqml_data\ncurl -O 'http://files.grouplens.org/datasets/movielens/ml-20m.zip'\nunzip ml-20m.zip\nyes | bq rm -r $PROJECT:movielens\nbq --location=US mk --dataset \\\n --description 'Movie Recommendations' \\\n $PROJECT:movielens\nbq --location=US load --source_format=CSV \\\n --autodetect movielens.ratings ml-20m/ratings.csv\nbq --location=US load --source_format=CSV \\\n --autodetect movielens.movies_raw ml-20m/movies.csv",
"Exploring the data\nTwo tables should now be available in <a href=\"https://console.cloud.google.com/bigquery\">BigQuery</a>.\nCollaborative filtering provides a way to generate product recommendations for users, or user targeting for products. The starting point is a table, <b>movielens.ratings</b>, with three columns: a user id, an item id, and the rating that the user gave the product. This table can be sparse -- users don’t have to rate all products. Then, based on just the ratings, the technique finds similar users and similar products and determines the rating that a user would give an unseen product. Then, we can recommend the products with the highest predicted ratings to users, or target products at users with the highest predicted ratings.",
"%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.ratings\nLIMIT 10",
"A quick exploratory query yields that the dataset consists of over 138 thousand users, nearly 27 thousand movies, and a little more than 20 million ratings, confirming that the data has been loaded successfully.",
"%%bigquery --project $PROJECT\nSELECT \n COUNT(DISTINCT userId) numUsers,\n COUNT(DISTINCT movieId) numMovies,\n COUNT(*) totalRatings\nFROM movielens.ratings",
"On examining the first few movies using the query following query, we can see that the genres column is a formatted string:",
"%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.movies_raw\nWHERE movieId < 5",
"We can parse the genres into an array and rewrite the table as follows:",
"%%bigquery --project $PROJECT\nCREATE OR REPLACE TABLE movielens.movies AS\n SELECT * REPLACE(SPLIT(genres, \"|\") AS genres)\n FROM movielens.movies_raw\n\n%%bigquery --project $PROJECT\nSELECT *\nFROM movielens.movies\nWHERE movieId < 5",
"Matrix factorization\nMatrix factorization is a collaborative filtering technique that relies on factorizing the ratings matrix into two vectors called the user factors and the item factors. The user factors is a low-dimensional representation of a user_id and the item factors similarly represents an item_id.\nNote: MF model training requires BQ flat rate contract. So here we will retrieve pre-trained model from external project.\nIf you activated flat rate pricing in BQ, you can train MF model with this Query.\n```SQL\nCREATE OR REPLACE MODEL movielens.recommender\noptions(model_type='matrix_factorization',\n user_col='userId', item_col='movieId', rating_col='rating')\nAS\nSELECT \nuserId, movieId, rating\nFROM movielens.ratings\n```",
"%%bigquery --project $PROJECT\nSELECT iteration, loss, duration_ms\n-- Note: remove cloud-training-demos if you are using your own model: \nFROM ML.TRAINING_INFO(MODEL `cloud-training-demos.movielens.recommender`)",
"Making recommendations\nWith the trained model, we can now provide recommendations. For example, let’s find the best comedy movies to recommend to the user whose userId is 903. In the query below, we are calling ML.PREDICT passing in the trained recommendation model and providing a set of movieId and userId to carry out the predictions on. In this case, it’s just one userId (903), but all movies whose genre includes Comedy.",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender`, (\n SELECT \n movieId, title, 903 AS userId\n FROM movielens.movies, UNNEST(genres) g\n WHERE g = 'Comedy'\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"Filtering out already rated movies\nOf course, this includes movies the user has already seen and rated in the past. Let’s remove them.\nTODO 1: Make a prediction for user 903 that does not include already seen movies.",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender`, (\n WITH seen AS (\n SELECT ARRAY_AGG(movieId) AS movies \n FROM movielens.ratings\n WHERE userId = 903\n )\n SELECT \n movieId, title, 903 AS userId\n FROM movielens.movies, UNNEST(genres) g, seen\n WHERE g = 'Comedy' AND movieId NOT IN UNNEST(seen.movies)\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"For this user, this happens to yield the same set of movies -- the top predicted ratings didn’t include any of the movies the user has already seen.\nCustomer targeting\nIn the previous section, we looked at how to identify the top-rated movies for a specific user. Sometimes, we have a product and have to find the customers who are likely to appreciate it. Suppose, for example, we wish to get more reviews for movieId = 96481 (American Mullet) which has only one rating and we wish to send coupons to the 5 users who are likely to rate it the highest. \nTODO 2: Find the top five users who will likely enjoy American Mullet (2001)",
"%%bigquery --project $PROJECT\nSELECT * FROM\nML.PREDICT(MODEL `cloud-training-demos.movielens.recommender`, (\n WITH allUsers AS (\n SELECT DISTINCT userId\n FROM movielens.ratings\n )\n SELECT \n 96481 AS movieId, \n (SELECT title FROM movielens.movies WHERE movieId=96481) title,\n userId\n FROM\n allUsers\n))\nORDER BY predicted_rating DESC\nLIMIT 5",
"Batch predictions for all users and movies\nWhat if we wish to carry out predictions for every user and movie combination? Instead of having to pull distinct users and movies as in the previous query, a convenience function is provided to carry out batch predictions for all movieId and userId encountered during training. A limit is applied here, otherwise, all user-movie predictions will be returned and will crash the notebook.",
"%%bigquery --project $PROJECT\nSELECT *\nFROM ML.RECOMMEND(MODEL `cloud-training-demos.movielens.recommender`)\nLIMIT 10",
"As seen in a section above, it is possible to filter out movies the user has already seen and rated in the past. The reason already seen movies aren’t filtered out by default is that there are situations (think of restaurant recommendations, for example) where it is perfectly expected that we would need to recommend restaurants the user has liked in the past.\nCopyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jmschrei/pomegranate
|
tutorials/B_Model_Tutorial_7_Markov_Networks.ipynb
|
mit
|
[
"Markov Networks\nauthor: Jacob Schreiber <br>\ncontact: jmschreiber91@gmail.com\nMarkov networks are probabilistic models that are usually represented as an undirected graph, where the nodes represent variables and the edges represent associations. Markov networks are similar to Bayesian networks with the primary difference being that Bayesian networks can be represented as directed graphs with known parental relations. Generally, Bayesian networks are easier to interpret and can be used to calculate probabilities faster but, naturally, require that causality is known. However, in many settings, one can only know the associations between variables and not necessarily the direction of causality.\nThe underlying implementation of inference in pomegranate for both Markov networks and Bayesian networks is the same, because both get converted to their factor graph representations. However, there are many important differences between the two models that should be considered before choosing one. \nIn this tutorial we will go over how to use Markov networks in pomegranate and what some of the current limitations of the implementation are.",
"%matplotlib inline\nimport numpy\nimport itertools\n\nfrom pomegranate import *\n\nnumpy.random.seed(0)\nnumpy.set_printoptions(suppress=True)\n\n%load_ext watermark\n%watermark -m -n -p numpy,scipy,pomegranate",
"Defining a Markov Network\nA Markov network is defined by passing in a list of the joint probability tables associated with a series of cliques rather than an explicit graph structure. This is because the probability distributions for a particular variable are not defined by themselves, but rather through associations with other variables. While a Bayesian network has root variables that do not hav e parents, the undirected nature of the edges in a Markov networks means that variables are generally grouped together.\nLet's define a simple Markov network where the cliques are A-B, B-C-D, and C-D-E. B-C-D-E is almost a clique but are missing the connection between B and E.",
"d1 = JointProbabilityTable([\n [0, 0, 0.1],\n [0, 1, 0.2],\n [1, 0, 0.4],\n [1, 1, 0.3]], [0, 1])\n\nd2 = JointProbabilityTable([\n [0, 0, 0, 0.05],\n [0, 0, 1, 0.15],\n [0, 1, 0, 0.07],\n [0, 1, 1, 0.03],\n [1, 0, 0, 0.12],\n [1, 0, 1, 0.18],\n [1, 1, 0, 0.10],\n [1, 1, 1, 0.30]], [1, 2, 3])\n\nd3 = JointProbabilityTable([\n [0, 0, 0, 0.08],\n [0, 0, 1, 0.12],\n [0, 1, 0, 0.11],\n [0, 1, 1, 0.19],\n [1, 0, 0, 0.04],\n [1, 0, 1, 0.06],\n [1, 1, 0, 0.23],\n [1, 1, 1, 0.17]], [2, 3, 4])\n\nmodel = MarkovNetwork([d1, d2, d3])\nmodel.bake()",
"We can see that the initialization is fairly straightforward. An important note is that the JointProbabilityTable object requires as the second argument a list of variables that are included in that clique in the order that they appear in the table, from left to right.\nCalculating the probability of examples\nSimilar to the other probabilistic models in pomegranate, Markov networks can be used to calculate the probability or log probability of examples. However, unlike the other models, calculating the log probability for Markov networks is generally computationally intractable for data with even a modest number of variables (~30). \nThe process for calculating the log probability begins by calculating the \"unnormalized\" log probability $\\hat{P}$, which is just the product of the probabilities for the variables $c$ in each clique $c \\in C$ under their joint probability table $JPT(c)$. This step is easy because it just involves, for each clique, taking the columns corresponding to that clique and performing a table lookup.\n\\begin{equation}\n\\hat{P}(X=x) = \\prod\\limits_{c \\in C} JPT(c)\n\\end{equation}\nThe reason this is called the unnormalized probability is because the sum of all combinations of variables that $X$ can take $\\sum\\limits_{x \\in X} \\hat{P}(X=x)$ does not sum to 1; thus, it is not a true probability distribution. \nWe can calculate the normalized probability $P(X)$ by dividing by the sum of probabilities under all combinations of variables, frequently referred to as the \"partition function\". Calculating the partition function $Z$ is as simple as summing the unnormalized probabilities over all possible combinations of variables $x \\in X$. 
\n\\begin{equation}\nZ = \\sum\\limits_{x \\in X} \\hat{P}(X=x)\n\\end{equation}\nFinally, we can divide any unnormalized probability calculation by the partition function to get the correct probability.\n\\begin{equation}\nP(X = x) = \\frac{1}{Z} \\hat{P}(X = x)\n\\end{equation}\nThe probability method returns the normalized probability value. We can check this by seeing that it is different than simply passing the columns of data in to the distributions for their respective cliques.",
"model.probability([0, 1, 0, 0, 1])",
"And the probability if we simply passed the columns into the corresponding cliques:",
"d1.probability([0, 1]) * d2.probability([1, 0, 0]) * d3.probability([0, 0, 1])",
"However, by passing the unnormalized=True parameter in to the probability method we can return the unnormalized probability values.",
"model.probability([0, 1, 0, 0, 1], unnormalized=True)",
"We can see that the two are identical, subject to machine precision.\nCalculating the partition function\nCalculating the partition function involves summing the unnormalized probabilities of all combinations of variables that an example can take. Unfortunately, the time it takes to calculate the partition function grows exponentially with the number of dimensions. This means that it may not be able to calculate the partition function exactly for more than ~25 variables, depending on your machine. While pomegranate does not currently support any methods for calculating the partition function other than the exact method, it is flexible enough to allow users to get around this limitation.\nThe partition function itself is calculated in the bake method because, at that point, all combinations of variables are known to the model. This value is then cached so that calls to probability or log_probability are just as fast regardless of if the normalized or unnormalized probabilities are calculated. However, if the user passes in calculate_partition=False the model will not spend time calculating the partition function. We can see the difference in time here:",
"X = numpy.random.randint(2, size=(100, 14))\nmodel2 = MarkovNetwork.from_samples(X)\n\n%timeit model2.bake()\n%timeit model2.bake(calculate_partition=False)",
"There are two main reasons that one might not want to calculate the partition function when creating the model. The first is when the user will only be inspecting the model, such as after structure learning, or only doing inference, which uses the approximate loopy belief propogation. The second is if the user wants to estimate the partition function themselves using an approximate algorithm.\nLet's look at how one would manually calculate the exact partition function to see how an approximate algorithm could be substituted in. First, what happens if we don't calculate the partition but try to calculate probabilities?",
"model.bake(calculate_partition=False)\nmodel.probability([0, 1, 0, 0, 1])",
"Looks like we get an error. We can still calculate unnormalized probabilities though.",
"model.probability([0, 1, 0, 0, 1], unnormalized=True)",
"Now we can calculate the partition function by calculating the unnormalized probability of all combinations.",
"Z = model.probability(list(itertools.product(*model.keys_)), unnormalized=True).sum()\nZ",
"We can set the stored partition function in the model to be this value (or specifically the log partition function) and then calculate normalized probabilities as before.",
"model.partition = numpy.log(Z)\nmodel.probability([0, 1, 0, 0, 1])",
"Now you can calculate $Z$ however you'd like and simply plug it in.\nInference\nSimilar to Bayesian networks, Markov networks can be used to infer missing values in data sets. In pomegranate, inference is done for Markov networks by converting them to their factor graph representations and then using loopy belief propagation. This results in fast, but approximate, inference.",
"model.predict([[None, 1, 0, None, None], [None, None, 1, None, 0]])",
"If we go back and inspect the joint probability tables we can easily see that the inferred values are the correct ones.\nIf we want the full probability distribution rather than just the most likely value we can use the predict_proba method.",
"model.predict_proba([[None, None, 1, None, 0]])",
"Structure Learning\nMarkov networks can be learned from data just as Bayesian networks can. While there are many algorithms that have been proposed for Bayesian network structure learning, there are fewer for Markov networks. Currently, pomegranate only supports the Chow-Liu tree-building algorithm for learning a tree-like network. Let's see a simple example where we learn a Markov network over a chunk of features from the digits data set.",
"from sklearn.datasets import load_digits\n\nX = load_digits().data[:, 22:40]\nX = (X > numpy.median(X)).astype(int)\n\nmodel = MarkovNetwork.from_samples(X)\nicd = IndependentComponentsDistribution.from_samples(X, distributions=DiscreteDistribution)\n\nmodel.log_probability(X).sum(), icd.log_probability(X).sum()",
"It looks like the Markov network is somewhat better than simply modeling each pixel individually for this set of features."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kimkipyo/dss_git_kkp
|
통계, 머신러닝 복습/160627월_21일차_PCA Principal Component Analysis/3.PCA.ipynb
|
mit
|
[
"PCA\nPCA (Principal Component Analysis) is a dimensionality-reduction technique that tries to approximate the information in the original high-dimensional data using as few dimensions as possible.\nDimension Reduction\nDimension reduction means shrinking a high-dimensional vector into a low-dimensional one by setting the values of some dimensions to zero (truncation).\nTo preserve as much of the original high-dimensional vector's character as possible, the data is first rotated (rotation transform) toward the directions of highest variance.\n<img src=\"https://alliance.seas.upenn.edu/~cis520/dynamic/2014/wiki/uploads/Lectures/pca-example-1D-of-2D_small.png\" style=\"width:40%; margin: 0 auto 0 auto;\">\n<img src=\"http://www.nlpca.org/fig_pca_principal_component_analysis.png\" style=\"width:90%; margin: 0 auto 0 auto;\">\nEven reducing 3 dimensions to 2 yields data that looks almost the same.\nThe decomposition subpackage of Scikit-Learn provides a PCA class for PCA analysis. It is used as follows.\n\nInput argument: \n\nn_components : integer\n\nnumber of final components\n\n\n\nAttributes: \n\ncomponents_ \nprincipal axes\n\n\nn_components_ (the first parameter above): the number of principal components to keep; the components with small eigenvalues are dropped\n\n\nmean_ :\nmean of each feature\n\n\nexplained_variance_ratio_ \nvariance ratio of each component\n\n\n\nExample: PCA in 2 dimensions",
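The rotation just described can be sketched directly in NumPy: the principal axes are the eigenvectors of the data covariance matrix, sorted by eigenvalue (a sketch independent of scikit-learn; the variable names are ours):

```python
import numpy as np

# Same toy data as the scikit-learn example below
X = np.array([[-1, -1], [-2, -1], [-3, -2],
              [1, 1], [2, 1], [3, 2]], dtype=float)

# Principal axes = eigenvectors of the (biased) covariance matrix
cov = np.cov(X.T, bias=True)
eigvals, eigvecs = np.linalg.eig(cov)

# Sort the axes by decreasing eigenvalue (variance) and rotate onto them
order = np.argsort(eigvals)[::-1]
Z = X @ eigvecs[:, order]

# After the rotation the first axis carries almost all the variance
print(Z.var(axis=0))
```

Because the rotation is orthogonal, the total variance is unchanged; it is only redistributed so the first axis holds the most.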
"import numpy as np\nimport matplotlib.pyplot as plt\n\nX = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])\nplt.scatter(X[:,0], X[:,1], s=100)\nplt.xlim(-4,4)\nplt.ylim(-3,3)\nplt.title(\"original data\")\nplt.show()\n\nfrom sklearn.decomposition import PCA\npca = PCA(n_components=2)\npca.fit(X)\n\nZ = pca.transform(X)\nZ\n\nw, V = np.linalg.eig(pca.get_covariance())\n\nV.T.dot(X.T).T # matches Z up to sign: eigenvectors are only determined up to sign, since what matters in an eigenvalue is its absolute value\n\nplt.scatter(Z[:,0], Z[:,1], c='r', s=100)\nplt.xlim(-4,4)\nplt.ylim(-3,3)\nplt.title(\"transformed data\")\nplt.show()",
"The data now extends sideways instead of standing upright; this is the coordinate transform of random variables discussed earlier.",
"plt.scatter(Z[:,0], np.zeros_like(Z[:,1]), c='g', s=100)\nplt.xlim(-4,4)\nplt.ylim(-3,3)\nplt.title(\"transformed and truncated data\")\nplt.show()",
"Example: the iris data",
"from sklearn.datasets import load_iris\niris = load_iris()\nX = iris.data[:,2:]\nplt.scatter(X[:, 0], X[:, 1], c=iris.target, s=200, cmap=plt.cm.jet);\n\nX2 = PCA(2).fit_transform(X) # PCA(2) on data that is already 2-D only rotates it;\n# after the rotation we can essentially ignore x2 and look at x1 alone\nplt.scatter(X2[:, 0], X2[:, 1], c=iris.target, s=200, cmap=plt.cm.jet)\nplt.xlim(-6, 6)\nplt.show()\n\nX1 = PCA(1).fit_transform(X)\nsns.distplot(X1[iris.target==0], color=\"b\", bins=20, rug=True, kde=False)\nsns.distplot(X1[iris.target==1], color=\"g\", bins=20, rug=True, kde=False)\nsns.distplot(X1[iris.target==2], color=\"r\", bins=20, rug=True, kde=False)\nplt.xlim(-6, 6)\nplt.show()\n\nX3 = PCA(2).fit_transform(iris.data) # here all 4 features go in and only 2 components are kept\nplt.scatter(X3[:, 0], X3[:, 1], c=iris.target, s=200, cmap=plt.cm.jet);\n# the 4 original features are combined into new axes x1 and x2; in this combination the classes separate very clearly\n\nX4 = PCA(3).fit_transform(iris.data) \nfrom mpl_toolkits.mplot3d import Axes3D\n\ndef plot_pca(azim):\n    fig = plt.figure()\n    ax = fig.add_subplot(111, projection='3d');\n    ax.scatter(X4[:,0], X4[:,1], X4[:,2], c=iris.target, s=100, cmap=plt.cm.jet, alpha=1);\n    ax.view_init(20, azim)\n\nplot_pca(-60)\n\nfrom ipywidgets import widgets\nwidgets.interact(plot_pca, azim=widgets.IntSlider(min=0, max=180, step=10, value=0));",
"Image PCA",
"from sklearn.datasets import load_digits\ndigits = load_digits()\nX_digits, y_digits = digits.data, digits.target\n\nN=2; M=5;\nfig = plt.figure(figsize=(10, 4))\nplt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)\nfor i in range(N):\n for j in range(M):\n k = i*M+j\n ax = fig.add_subplot(N, M, k+1)\n ax.imshow(digits.images[k], cmap=plt.cm.bone, interpolation=\"none\");\n ax.grid(False)\n ax.xaxis.set_ticks([])\n ax.yaxis.set_ticks([])\n plt.title(digits.target_names[k])",
"In a high-dimensional space, one image is one point. All the images of the digit 0, for example, cluster together in one region of that space (their own galaxy).",
"from sklearn.decomposition import PCA\npca = PCA(n_components=10)\nX_pca = pca.fit_transform(X_digits)\nprint(X_digits.shape)\nprint(X_pca.shape)\n\nplt.scatter(X_pca[:,0], X_pca[:,1], c=y_digits, s=100, cmap=plt.cm.jet)\nplt.axis(\"equal\")\nplt.show()",
"The classes overlap here because we are only looking at 2 dimensions; as the number of dimensions grows, they separate.",
"from mpl_toolkits.mplot3d import Axes3D\n\ndef plot_pca2(azim):\n fig = plt.figure(figsize=(8,8))\n ax = fig.add_subplot(111, projection='3d');\n ax.scatter(X_pca[:,0], X_pca[:,1], X_pca[:,2], c=y_digits, s=100, cmap=plt.cm.jet, alpha=1);\n ax.view_init(20, azim)\n\nplot_pca2(-60)\n\nfrom ipywidgets import widgets\nwidgets.interact(plot_pca2, azim=widgets.IntSlider(min=0,max=180,step=10,value=0));\n\nN=2; M=5;\nfig = plt.figure(figsize=(10,3.2))\nplt.subplots_adjust(top=1, bottom=0, hspace=0, wspace=0.05)\nfor i in range(N):\n for j in range(M):\n k = i*M+j\n p = fig.add_subplot(N, M, k+1)\n p.imshow(pca.components_[k].reshape((8,8)), cmap=plt.cm.bone, interpolation='none')\n plt.xticks([])\n plt.yticks([]) \n plt.grid(False)",
"If an image is a vector, then conversely a vector is also an image; since a vector is a point, it can be mapped back to an image.\nWith the bone colormap, white marks where the variation is largest: dark pixels are close to 0 and barely move across the dataset, while white pixels have large variance (movement) in that direction.\nAn entire image can then be represented by just these 10 numbers, and other images can be expressed as combinations of these components.\n\nKernel PCA\nApplying PCA to data that has first been passed through a nonlinear transform $\\phi(x)$, in order to improve separability, is called Kernel PCA.\n$$ x \\;\\; \\rightarrow \\;\\; \\phi(x) \\;\\; \\rightarrow \\;\\; \\text{PCA} \\;\\; \\rightarrow \\;\\; z $$\nNonlinear structure cannot be found by rotation alone, so the data is deliberately warped first and PCA is applied to the warped data.",
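The pipeline $x \rightarrow \phi(x) \rightarrow \text{PCA} \rightarrow z$ can be illustrated with an explicit feature map. Real kernel methods never compute $\phi$ explicitly, so this is only a sketch; the ring data and the map $\phi(x) = (x_1, x_2, x_1^2 + x_2^2)$ are our own choices:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# Two concentric rings: not linearly separable in the original 2-D space
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.r_[np.ones(100), 3 * np.ones(100)]
X = np.c_[r * np.cos(theta), r * np.sin(theta)]

# Explicit nonlinear lift: phi(x) = (x1, x2, x1^2 + x2^2)
phi = np.c_[X, (X ** 2).sum(axis=1)]

# Ordinary linear PCA in the lifted space separates the rings on one axis
Z = PCA(n_components=1).fit_transform(phi)
inner, outer = Z[:100], Z[100:]
print(inner.mean(), outer.mean())
```

The third coordinate of $\phi$ is the squared radius, so the first principal component of the lifted data separates the two rings cleanly.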
"A1_mean = [1, 1]\nA1_cov = [[2, 1], [1, 1]]\nA1 = np.random.multivariate_normal(A1_mean, A1_cov, 50)\nA2_mean = [5, 5]\nA2_cov = [[2, 1], [1, 1]]\nA2 = np.random.multivariate_normal(A2_mean, A2_cov, 50)\nA = np.vstack([A1, A2])\nB_mean = [5, 0]\nB_cov = [[0.8, -0.7], [-0.7, 0.8]]\nB = np.random.multivariate_normal(B_mean, B_cov, 50)\nAB = np.vstack([A, B])\n\nplt.scatter(A[:,0], A[:,1], c='r')\nplt.scatter(B[:,0], B[:,1], c='g')\nplt.show()",
"These are Cartesian coordinates. In polar coordinates (radius r and angle θ) the two groups can be separated; using a nonlinear transform like this is the kernel approach.",
"pca = PCA(n_components=2)\npca.fit(AB)\nA_transformed = pca.transform(A)\nB_transformed = pca.transform(B)\nplt.scatter(A_transformed[:,0], A_transformed[:,1], c=\"r\", s=100)\nplt.scatter(B_transformed[:,0], B_transformed[:,1], c=\"g\", s=100)\nplt.show()",
"Linear PCA cannot usefully reduce the dimension of this data.",
"pca = PCA(n_components=1)\npca.fit(AB)\nA_transformed = pca.transform(A)\nB_transformed = pca.transform(B)\nplt.scatter(A_transformed, np.zeros(len(A_transformed)), c=\"r\", s=100)\nplt.scatter(B_transformed, np.zeros(len(B_transformed)), c=\"g\", s=100)\nplt.show()\n\nsns.distplot(A_transformed, color=\"b\", bins=20, rug=True, kde=False)\nsns.distplot(B_transformed, color=\"g\", bins=20, rug=True, kde=False)\nplt.show()\n\nfrom sklearn.decomposition import KernelPCA\n\nkpca = KernelPCA(kernel=\"cosine\", n_components=2) # the kernel changes the shape of the data; now dimension reduction is possible\nkpca.fit(AB)\nA_transformed2 = kpca.transform(A)\nB_transformed2 = kpca.transform(B)\nplt.scatter(A_transformed2[:,0], A_transformed2[:,1], c=\"r\", s=100)\nplt.scatter(B_transformed2[:,0], B_transformed2[:,1], c=\"g\", s=100)\nplt.show()\n\nfrom sklearn.decomposition import KernelPCA\nkpca = KernelPCA(kernel=\"cosine\", n_components=1)\nkpca.fit(AB)\nA_transformed2 = kpca.transform(A)\nB_transformed2 = kpca.transform(B)\nplt.scatter(A_transformed2, np.zeros(len(A_transformed2)), c=\"r\", s=100)\nplt.scatter(B_transformed2, np.zeros(len(B_transformed2)), c=\"g\", s=100)\nplt.show()\n\nsns.distplot(A_transformed2, color=\"b\", bins=20, rug=True, kde=False)\nsns.distplot(B_transformed2, color=\"g\", bins=20, rug=True, kde=False)\nplt.show()\n\nfrom sklearn.datasets import make_circles\nnp.random.seed(0)\nX, y = make_circles(n_samples=400, factor=.3, noise=.05)\nreds = y == 0\nblues = y == 1\nplt.plot(X[reds, 0], X[reds, 1], \"ro\")\nplt.plot(X[blues, 0], X[blues, 1], \"bo\")\nplt.xlabel(\"$x_1$\")\nplt.ylabel(\"$x_2$\")\nplt.show()",
"A decision tree can separate these classes, but a perceptron or logistic regression cannot, no matter how it is trained.",
"kpca = KernelPCA(kernel=\"rbf\", fit_inverse_transform=True, gamma=10) # radial basis function (RBF) kernel\nkpca.fit(X)\nA_transformed2 = kpca.transform(X[reds])\nB_transformed2 = kpca.transform(X[blues])\nplt.scatter(A_transformed2[:,0], A_transformed2[:,1], c=\"r\", s=100)\nplt.scatter(B_transformed2[:,0], B_transformed2[:,1], c=\"b\", s=100)\nplt.show()",
"Choosing the Number of Components\nWhen PCA keeps the same number of components, the eigenvalues of the covariance matrix of the transformed data equal those of the covariance matrix of the original data.\nWhen the number of components must be reduced, drop the components with the smallest eigenvalues first.\n\n\nEigenvalues of the covariance matrix $X^TX$ of the original data $X$\n $$ \\lambda_1, \\lambda_2, \\lambda_3, \\cdots, \\lambda_D $$\n\n\nEigenvalues of the covariance matrix $Z^TZ$ of the PCA-transformed data $Z$\n $$ \\lambda_1, \\cdots, \\lambda_L $$\n\n\nExplained Variance \n$$ \\dfrac{\\lambda_1 + \\cdots + \\lambda_L}{\\lambda_1 + \\lambda_2 + \\lambda_3 + \\cdots + \\lambda_D} < 1$$\n\n\nHow many components should be cut? Ultimately the analyst's choice, but the explained variance gives a criterion: truncation removes variance, so find the smallest number of components that still retains, say, 80% of the total.",
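The 80% rule above can be written as a small helper that returns the smallest $L$ whose cumulative explained-variance ratio reaches a given threshold (a sketch; the function name is ours, demonstrated on the iris data):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

def n_components_for(X, threshold=0.8):
    """Smallest number of components whose cumulative
    explained-variance ratio reaches the threshold."""
    ratios = PCA().fit(X).explained_variance_ratio_
    cumulative = np.cumsum(ratios)
    # first index where the cumulative ratio reaches the threshold
    return int(np.searchsorted(cumulative, threshold) + 1)

X = load_iris().data
print(n_components_for(X, 0.8))
```

For iris, the first component alone already explains over 92% of the variance, so the 80% criterion keeps a single component.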
"from sklearn.datasets import load_wine\nfrom sklearn.decomposition import PCA\n\n# fetch_mldata was removed from scikit-learn; load_wine provides the same UCI wine data\nwine = load_wine()\nX, y = wine.data, wine.target\n\npca = PCA().fit(X)\nvar = pca.explained_variance_\ncmap = sns.color_palette()\nplt.bar(np.arange(1,len(var)+1), var/np.sum(var), align=\"center\", color=cmap[0])\nplt.step(np.arange(1,len(var)+1), np.cumsum(var)/np.sum(var), where=\"mid\", color=cmap[1])\nplt.show()",
"At least 5 components must be kept here to retain more than 80% of the variance. As you reduce the dimension, performance sometimes improves slightly: with less collinearity, the variance of the model's performance goes down. Choosing the dimension at that point is also reasonable.",
"X_pca = PCA(2).fit_transform(X)\ncmap = mpl.colors.ListedColormap(sns.color_palette(\"Set1\"))\nplt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap=cmap)\nplt.show()\n\nfrom sklearn.linear_model import LogisticRegression\n\nclf = LogisticRegression()\nclf.fit(X_pca, y)\n\nxmin, xmax = X_pca[:,0].min(), X_pca[:,0].max()\nymin, ymax = X_pca[:,1].min(), X_pca[:,1].max()\nXGrid, YGrid = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000), np.arange(ymin, ymax, (ymax-ymin)/1000))\nZGrid = np.reshape(clf.predict(np.array([XGrid.ravel(), YGrid.ravel()]).T), XGrid.shape)\ncmap = mpl.colors.ListedColormap(sns.color_palette(\"Set3\"))\nplt.contourf(XGrid, YGrid, ZGrid, cmap=cmap)\nplt.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap=cmap)\nplt.show()",
"Pipeline",
"from sklearn import linear_model, decomposition, datasets\nfrom sklearn.pipeline import Pipeline\n\ndigits = datasets.load_digits()\nX_digits = digits.data\ny_digits = digits.target\n\nmodel1 = linear_model.LogisticRegression()\nmodel1.fit(X_digits, y_digits)\n\npca = decomposition.PCA()\nlogistic = linear_model.LogisticRegression()\nmodel2 = Pipeline(steps=[('pca', pca), ('logistic', logistic)])\nmodel2.fit(X_digits, y_digits)\n\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_digits, model1.predict(X_digits)))\nprint(classification_report(y_digits, model2.predict(X_digits)))",
"Exercise\n\nSome datasets cannot be usefully reduced with PCA.\nNow that we have covered PCA, let's practice on the iris data. The iris data is 4-dimensional, so there are 6 ways to pick 2 of the features. For each of the 6 combinations, fit a logistic regression and compare the precision.",
"from itertools import combinations\n\nfor x in combinations(np.arange(4), 2):\n    print(x)\n\nfrom sklearn.datasets import load_iris\niris = load_iris()\ny = iris.target\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import precision_score\n\nfor idx in combinations(np.arange(4), 2):\n    X = iris.data[:, idx]\n    model = LogisticRegression().fit(X, y)\n    print(precision_score(y, model.predict(X), average=\"micro\")) # average=\"micro\" computes the precision globally over all classes\n\nfrom sklearn.metrics import classification_report\nprint(classification_report(y, model.predict(X)))\n\nfrom sklearn.decomposition import PCA\n\nX = PCA(2).fit_transform(iris.data)\nmodel = LogisticRegression().fit(X, y)\nprint(precision_score(y, model.predict(X), average=\"micro\"))\n\nprint(classification_report(y, model.predict(X)))",
"PCA does not end here; next, pipelines.",
"from sklearn import linear_model, decomposition, datasets\nfrom sklearn.pipeline import Pipeline\n\ndigits = datasets.load_digits()\nX_digits = digits.data\ny_digits = digits.target\n\nmodel1 = linear_model.LogisticRegression()\nmodel1.fit(X_digits, y_digits)\n\npca = decomposition.PCA(16)\nlogistic = linear_model.LogisticRegression()\nmodel2 = Pipeline(steps=[('pca', pca), ('logistic', logistic)])\nmodel2.fit(X_digits, y_digits)\n\nX_digits.shape\n\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_digits, model1.predict(X_digits)))\nprint(classification_report(y_digits, model2.predict(X_digits)))",
"Questions\n\nAre the basis vectors all mutually independent? Recall the definition of a basis: two basis vectors in 2-D are any pair whose combinations can produce every point in the plane.\nHow does kernel PCA warp the data? For now it is enough to know that a kernel lets us apply a nonlinear transform; details will come later.\n\nReview of the previous lecture\n\nThe key idea is the coordinate transform: a fixed vector x can be decomposed in many ways, onto e1, e2 or onto f1, f2, and so on. The only condition is that e1 and e2 must not point in the same direction.\nEigendecomposition: rotating into the eigenbasis turns the tilted covariance ellipse into an axis-aligned, independent-looking shape. With unit vectors the transform can only rotate; it cannot change lengths.\nWhat does large or small mean here? Variance. Compute the covariance directly and check.",
"X = np.array([[-1, -1], [-2, -1], [-3,-2], [1, 1], [2, 1], [3, 2]])\nX\n\nnp.cov(X)\n\nnp.cov(X.T)\n\nnpcov = np.cov(X.T, bias=True)\nnpcov\n\nL, V = np.linalg.eig(npcov)\n\nL # lambda: the eigenvalues\n\nV # the eigenvectors",
"A large spread in the eigenvalues means the problem is ill-conditioned: optimization struggles because the loss surface is a narrow valley and the minimum is hard to find.\nA large eigenvalue gap indicates multicollinearity: some directions carry almost no information. PCA's remedy is to drop the components with very small eigenvalues, e.g. going from 2 dimensions down to 1.\nThink of PCA less as dimension reduction and more as a rotation that sorts the axes by decreasing variance.\nWith 4 features, for example, 4 basis vectors appear, and the largest component v1 is some combination of the original features (promotion spend, server location, and so on). The eigenvalue gives the overall direction and scale of that component, and each data point's coefficients (alpha-1, alpha-2, ...) are its coordinates along those components.\nEigenfaces work the same way: extract and store the principal features of face images, then generate new faces from combinations of them."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/bigquery/labs/b_bqml.ipynb
|
apache-2.0
|
[
"Big Query Machine Learning (BQML)\nLearning Objectives\n- Understand that it is possible to build ML models in Big Query\n- Understand when this is appropriate\n- Experience building a model using BQML\nIntroduction\nBigQuery is more than just a data warehouse; it also has some ML capabilities baked into it. \nBQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat, but for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML framework like TensorFlow.\nIn this notebook, we will build a naive model using BQML. This notebook is intended to inspire usage of BQML; we will not focus on model performance.\nSet up environment variables and load necessary libraries",
"from google import api_core\nfrom google.cloud import bigquery\n\nPROJECT = !gcloud config get-value project\nPROJECT = PROJECT[0]\n\n%env PROJECT=$PROJECT",
"Create BigQuery dataset\nPrior to now we've just been reading an existing BigQuery table; now we're going to create our own, so we need some place to put it. In BigQuery parlance, a Dataset means a folder for tables. \nWe will take advantage of BigQuery's Python Client to create the dataset.",
"bq = bigquery.Client(project=PROJECT)\n\ndataset = bigquery.Dataset(bq.dataset(\"bqml_taxifare\"))\ntry:\n bq.create_dataset(dataset) # will fail if dataset already exists\n print(\"Dataset created\")\nexcept api_core.exceptions.Conflict:\n print(\"Dataset already exists\")",
"Create model\nTo create a model (Documentation)\n1. Use CREATE MODEL and provide a destination table for the resulting model. Alternatively we can use CREATE OR REPLACE MODEL which allows overwriting an existing model.\n2. Use OPTIONS to specify the model type (linear_reg or logistic_reg). There are many more options we could specify, such as regularization and learning rate, but we'll accept the defaults.\n3. Provide the query which fetches the training data \nExercise 1\nUse the query we created in the previous lab to Clean the Data to now train a Linear Regression model with BQML called taxifare_model. This should amount to adding a line to create the model and adding OPTIONS to specify the model type. Our label will be the sum of tolls_amount and fare_amount and for features we will use the pickup datetime and pickup & dropoff latitude and longitude.\nHINT: Have a look at Step Two of this tutorial if you get stuck or if you want to see another example.\nYour query could take about two minutes to complete.",
"%%bigquery --project $PROJECT\n# TODO: Your code goes here",
"Get training statistics\nBecause the query uses a CREATE MODEL statement to create a table, you do not see query results. The output is an empty string.\nTo get the training results we use the ML.TRAINING_INFO function.\nExercise 2\nAfter completing the exercise above, query the training information of the model you created. Have a look at Step Three and Four of this tutorial to see a similar example.",
"%%bigquery --project $PROJECT\n# TODO: Your code goes here",
"'eval_loss' is reported as mean squared error. Your RMSE should be about 8.29. Your results may vary.\nPredict\nTo use our model to make predictions, we use ML.PREDICT\nExercise 3\nLastly, use the taxifare_model you trained above to infer the cost of a taxi ride that occurs at 10:00 am on January 3rd, 2014 going\nfrom the Google Office in New York (latitude: 40.7434, longitude: -74.0080) to the JFK airport (latitude: 40.6413, longitude: -73.7781)\nHint: Have a look at Step Five of this tutorial if you get stuck or if you want to see another example.",
"%%bigquery --project $PROJECT\n# TODO: Your code goes here",
"Recap\nThe value of BQML is its ease of use:\n\nWe created a model with just two additional lines of SQL\nWe never had to move our data out of BigQuery\nWe didn't need to use an ML Framework or code, just SQL\n\nThere's lots of work going on behind the scenes to make this look easy. For example BQML is automatically creating a training/evaluation split, tuning our learning rate, and one-hot encoding features if necessary. When we move to TensorFlow these are all things we'll need to do ourselves. \nThis notebook was just to inspire usage of BQML; the current model is actually very poor. We'll prove this in the next lesson by beating it with a simple heuristic. \nWe could improve our model considerably with some feature engineering but we'll save that for a future lesson. Also there are additional BQML functions such as ML.WEIGHTS and ML.EVALUATE that we haven't even explored. If you're interested in learning more about BQML I encourage you to read the official docs.\nFrom here on out we'll focus on pulling data out of BigQuery and building models using TensorFlow, which is more effort but also offers much more flexibility.\nCopyright 2021 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
psas/composite-propellant-tank
|
Analysis/Calculations/LOX_BoilOff_HeatTransferAnalysis.ipynb
|
gpl-3.0
|
[
"Heat and Mass Transfer Analysis of Liquid Oxygen Boil-Off due to Conduction through Composite Layers.\nObjective:\nDetermine a rough estimate of the time it takes for a 3\" diameter 5.15\" long cylinder of liquid oxygen to boil off on a hot summer day in Brothers, OR (T~38C).\nAssumptions:\n\n\nThe temperature of the outermost carbon fiber layer is equal to the ambient air temperature ($T_s = T_{inf}$).\n\n\nThe temperature of the innermost (PTFE) layer is equal to the liquid oxygen (LOX) temperature ($T_1 = T_{lox}$).\n\n\nThe Nomex Honeycomb layer is considered to be composed of mostly air.\n\n\nConvection and radiation effects are neglected.\n\n\nMaterial properties are constant.",
"# Import packages here:\n\nimport math as m\nimport numpy as np\nfrom IPython.display import Image\nimport matplotlib.pyplot as plt\n\n# Properties of Materials (engineeringtoolbox.com, Cengel, Tian)\n\n# Conductivity\nKair = 0.026 # w/mk\nKptfe = 0.25 # w/mk\nKcf = 0.8 # transverse conductivity 0.5 -0.8 w/mk\n\n# Fluid Properties\n\nrhoLox = 1141 # kg/m^3\nTLox = -183 # *C\n\n# Latent Heat of Evaporation\nheOxy = 214000 # j/kg\n\n\n# Layer Dimensions:\n\nr1 = 0.0381 # meters (1.5\")\nr2 = 0.0396 # m\nr3 = 0.0399 # m\nr4 = 0.0446 # m\nr5 = 0.0449 # m\nL = 0.13081 # meters (5.15\")\n\n# Environmental Properties:\n\nTs = 38 # *C\nT1 = -183 #*C",
"Heat Transfer Rate\nThe following analysis was performed using steady heat conduction analysis of multilayered cylinders (Cengel, pg 156). The heat transfer rate is given by:\n$$\\dot{Q} = \\frac{T_s -T_1}{R_{total}}$$\nwhere $R_{total}$ is the total thermal resistance expressed as:\n$$R_{total} = R_{PTFE} + R_{CF1} + R_{Air}+ R_{CF2}$$\nwhere: \n$$R_{material} = \\frac{\\ln(r_{outer} / r_{inner})}{2 \\pi L K_{material}}$$",
"Rptfe = m.log(r2/r1)/(2*m.pi*L*Kptfe)\nRcf1 = m.log(r3/r2)/(2*m.pi*L*Kcf)\nRair = m.log(r4/r3)/(2*m.pi*L*Kair)\nRcf2 = m.log(r5/r4)/(2*m.pi*L*Kcf)\n\nRtot = Rptfe + Rcf1 + Rair + Rcf2 \n\nprint('Total Thermal Resistance equals: ', \"%.2f\" % Rtot, 'K/W')\n\n#Heat transfer rate: \nQrate = (Ts - T1)/Rtot\n\nprint('Calculated Heat Transfer rate equals: ',\"%.2f\" % Qrate, 'W')",
"Evaporation Rate\nThe energy balance on a thin layer of liquid at the surface is expressed by: (Cengel, pg. 841)\n$$\\dot{Q}_{transferred} = \\dot{Q}_{latent,\\, absorbed}$$ or $$\\dot{Q} = \\dot{m}_v h_{e,\\, oxygen}$$\nwhere $\\dot{m}_v$ is the rate of evaporation. Solving for $\\dot{m}_v$ gives us:\n$$\\dot{m}_v = \\frac{\\dot{Q}}{h_{e,\\, oxygen}}$$",
"EvapRate = Qrate/heOxy\nprint ('The rate of evaporation is', \"%.6f\" % EvapRate, 'kg/s')",
"Time for total Evaporation\nThe mass of liquid oxygen is given by: $$m = \\rho_{LOX} V$$ where $$V = \\pi r_1^2 L$$",
"VLox = m.pi*(r1)**2*L\nmLox = rhoLox*VLox\nprint('The mass of the liquid oxygen in tank is: ', \"%.2f\" % mLox, 'kg')",
"Finally, the amount of time that it takes for all 0.68kg of LOX to boil off is:",
"Tboiloff = mLox/EvapRate/60\nprint('%.2f' % Tboiloff, 'minutes' )"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Yangqing/caffe2
|
caffe2/python/tutorials/Basics.ipynb
|
apache-2.0
|
[
"Caffe2 Basic Concepts - Operators & Nets\nIn this tutorial we will go through a set of Caffe2 basics: the basic concepts including how operators and nets are being written.\nFirst, let's import Caffe2. core and workspace are usually the two that you need most. If you want to manipulate protocol buffers generated by Caffe2, you probably also want to import caffe2_pb2 from caffe2.proto.",
"from __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\nfrom __future__ import unicode_literals\n\n# We'll also import a few standard python libraries\nfrom matplotlib import pyplot\nimport numpy as np\nimport time\n\n# These are the droids you are looking for.\nfrom caffe2.python import core, workspace\nfrom caffe2.proto import caffe2_pb2\n\n# Let's show all plots inline.\n%matplotlib inline",
"You might see a warning saying that caffe2 does not have GPU support. That means you are running a CPU-only build. Don't be alarmed - anything CPU is still runnable without a problem.\nWorkspaces\nLet's cover workspaces first, where all the data resides.\nSimilar to Matlab, the Caffe2 workspace consists of blobs you create and store in memory. For now, consider a blob to be a N-dimensional Tensor similar to numpy's ndarray, but contiguous. Down the road, we will show you that a blob is actually a typed pointer that can store any type of C++ objects, but Tensor is the most common type stored in a blob. Let's show what the interface looks like.\nBlobs() prints out all existing blobs in the workspace. \nHasBlob() queries if a blob exists in the workspace. As of now, we don't have any.",
"print(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))\nprint(\"Workspace has blob 'X'? {}\".format(workspace.HasBlob(\"X\")))",
"We can feed blobs into the workspace using FeedBlob().",
"X = np.random.randn(2, 3).astype(np.float32)\nprint(\"Generated X from numpy:\\n{}\".format(X))\nworkspace.FeedBlob(\"X\", X)",
"Now, let's take a look at what blobs are in the workspace.",
"print(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))\nprint(\"Workspace has blob 'X'? {}\".format(workspace.HasBlob(\"X\")))\nprint(\"Fetched X:\\n{}\".format(workspace.FetchBlob(\"X\")))",
"Let's verify that the arrays are equal.",
"np.testing.assert_array_equal(X, workspace.FetchBlob(\"X\"))",
"Note that if you try to access a blob that does not exist, an error will be thrown:",
"try:\n workspace.FetchBlob(\"invincible_pink_unicorn\")\nexcept RuntimeError as err:\n print(err)",
"One thing that you might not use immediately: you can have multiple workspaces in Python using different names, and switch between them. Blobs in different workspaces are separate from each other. You can query the current workspace using CurrentWorkspace. Let's try switching the workspace by name (gutentag) and creating a new one if it doesn't exist.",
"print(\"Current workspace: {}\".format(workspace.CurrentWorkspace()))\nprint(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))\n\n# Switch the workspace. The second argument \"True\" means creating \n# the workspace if it is missing.\nworkspace.SwitchWorkspace(\"gutentag\", True)\n\n# Let's print the current workspace. Note that there is nothing in the\n# workspace yet.\nprint(\"Current workspace: {}\".format(workspace.CurrentWorkspace()))\nprint(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))",
"Let's switch back to the default workspace.",
"workspace.SwitchWorkspace(\"default\")\nprint(\"Current workspace: {}\".format(workspace.CurrentWorkspace()))\nprint(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))",
"Finally, ResetWorkspace() clears anything that is in the current workspace.",
"workspace.ResetWorkspace()\nprint(\"Current blobs in the workspace after reset: {}\".format(workspace.Blobs()))",
"Operators\nOperators in Caffe2 are kind of like functions. From the C++ side, they all derive from a common interface, and are registered by type, so that we can call different operators during runtime. The interface of operators is defined in caffe2/proto/caffe2.proto. Basically, it takes in a bunch of inputs, and produces a bunch of outputs.\nRemember, when we say \"create an operator\" in Caffe2 Python, nothing gets run yet. All it does is create the protocol buffer that specifies what the operator should be. At a later time it will be sent to the C++ backend for execution. If you are not familiar with protobuf, it is a json-like serialization tool for structured data. Find more about protocol buffers here.\nLet's see an actual example.",
"# Create an operator.\nop = core.CreateOperator(\n \"Relu\", # The type of operator that we want to run\n [\"X\"], # A list of input blobs by their names\n [\"Y\"], # A list of output blobs by their names\n)\n# and we are done!",
"As we mentioned, the created op is actually a protobuf object. Let's show the content.",
"print(\"Type of the created op is: {}\".format(type(op)))\nprint(\"Content:\\n\")\nprint(str(op))",
"Ok, let's run the operator. We first feed the input X to the workspace. \nThen the simplest way to run an operator is to do workspace.RunOperatorOnce(operator)",
"workspace.FeedBlob(\"X\", np.random.randn(2, 3).astype(np.float32))\nworkspace.RunOperatorOnce(op)",
"After execution, let's see if the operator is doing the right thing.\nIn this case, the operator is a common activation function used in neural networks, called [ReLU](https://en.wikipedia.org/wiki/Rectifier_(neural_networks)), or Rectified Linear Unit activation. ReLU activation helps to add necessary non-linear characteristics to the neural network classifier, and is defined as:\n$$ReLU(x) = max(0, x)$$",
"print(\"Current blobs in the workspace: {}\\n\".format(workspace.Blobs()))\nprint(\"X:\\n{}\\n\".format(workspace.FetchBlob(\"X\")))\nprint(\"Y:\\n{}\\n\".format(workspace.FetchBlob(\"Y\")))\nprint(\"Expected:\\n{}\\n\".format(np.maximum(workspace.FetchBlob(\"X\"), 0)))",
"This is working if your Expected output matches your Y output in this example.\nOperators also take optional arguments if needed. They are specified as key-value pairs. Let's take a look at one simple example, which takes a tensor and fills it with Gaussian random variables.",
"op = core.CreateOperator(\n \"GaussianFill\",\n [], # GaussianFill does not need any parameters.\n [\"Z\"],\n shape=[100, 100], # shape argument as a list of ints.\n mean=1.0, # mean as a single float\n std=1.0, # std as a single float\n)\nprint(\"Content of op:\\n\")\nprint(str(op))",
"Let's run it and see if things are as intended.",
"workspace.RunOperatorOnce(op)\ntemp = workspace.FetchBlob(\"Z\")\npyplot.hist(temp.flatten(), bins=50)\npyplot.title(\"Distribution of Z\")",
"If you see a bell shaped curve then it worked!\nNets\nNets are essentially computation graphs. We keep the name Net for backward consistency (and also to pay tribute to neural nets). A Net is composed of multiple operators just like a program written as a sequence of commands. Let's take a look.\nWhen we talk about nets, we will also talk about BlobReference, which is an object that wraps around a string so we can do easy chaining of operators.\nLet's create a network that is essentially the equivalent of the following python math:\nX = np.random.randn(2, 3)\nW = np.random.randn(5, 3)\nb = np.ones(5)\nY = X * W^T + b\nWe'll show the progress step by step. Caffe2's core.Net is a wrapper class around a NetDef protocol buffer.\nWhen creating a network, its underlying protocol buffer is essentially empty other than the network name. Let's create the net and then show the proto content.",
"net = core.Net(\"my_first_net\")\nprint(\"Current network proto:\\n\\n{}\".format(net.Proto()))",
"Let's create a blob called X, and use GaussianFill to fill it with some random data.",
"X = net.GaussianFill([], [\"X\"], mean=0.0, std=1.0, shape=[2, 3], run_once=0)\nprint(\"New network proto:\\n\\n{}\".format(net.Proto()))",
"You might have observed a few differences from the earlier core.CreateOperator call. Basically, when using a net, you can directly create an operator and add it to the net at the same time by calling net.SomeOp where SomeOp is a registered type string of an operator. This gets translated to\nop = core.CreateOperator(\"SomeOp\", ...)\nnet.Proto().op.append(op)\nAlso, you might be wondering what X is. X is a BlobReference which records two things:\n\n\nThe blob's name, which is accessed with str(X)\n\n\nThe net it got created from, which is recorded by the internal variable _from_net\n\n\nLet's verify it. Also, remember, we are not actually running anything yet, so X contains nothing but a symbol. Don't expect to get any numerical values out of it right now :)",
"print(\"Type of X is: {}\".format(type(X)))\nprint(\"The blob name is: {}\".format(str(X)))",
"Let's continue to create W and b.",
"W = net.GaussianFill([], [\"W\"], mean=0.0, std=1.0, shape=[5, 3], run_once=0)\nb = net.ConstantFill([], [\"b\"], shape=[5,], value=1.0, run_once=0)",
"Now, one simple code sugar: since the BlobReference objects know what net it is generated from, in addition to creating operators from net, you can also create operators from BlobReferences. Let's create the FC operator in this way.",
"Y = X.FC([W, b], [\"Y\"])",
"Under the hood, X.FC(...) simply delegates to net.FC by inserting X as the first input of the corresponding operator, so what we did above is equivalent to\nY = net.FC([X, W, b], [\"Y\"])\nLet's take a look at the current network.",
"print(\"Current network proto:\\n\\n{}\".format(net.Proto()))",
"Too verbose huh? Let's try to visualize it as a graph. Caffe2 ships with a very minimal graph visualization tool for this purpose.",
"from caffe2.python import net_drawer\nfrom IPython import display\ngraph = net_drawer.GetPydotGraph(net, rankdir=\"LR\")\ndisplay.Image(graph.create_png(), width=800)",
"So we have defined a Net, but nothing has been executed yet. Remember that the net above is essentially a protobuf that holds the definition of the network. When we actually run the network, what happens under the hood is:\n- A C++ net object is instantiated from the protobuf\n- The instantiated net's Run() function is called\nBefore we do anything, we should clear any earlier workspace variables with ResetWorkspace().\nThen there are two ways to run a net from Python. We will do the first option in the example below.\n\nCall workspace.RunNetOnce(), which instantiates, runs and immediately destructs the network \nCall workspace.CreateNet() to create the C++ net object owned by the workspace, then call workspace.RunNet(), passing the name of the network to it",
"workspace.ResetWorkspace()\nprint(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))\nworkspace.RunNetOnce(net)\nprint(\"Blobs in the workspace after execution: {}\".format(workspace.Blobs()))\n# Let's dump the contents of the blobs\nfor name in workspace.Blobs():\n print(\"{}:\\n{}\".format(name, workspace.FetchBlob(name)))",
"Now let's try the second way to create the net, and run it. First, clear the variables with ResetWorkspace(). Then create the net with the workspace's net object that we created earlier using CreateNet(net_object). Finally, run the net with RunNet(net_name).",
"workspace.ResetWorkspace()\nprint(\"Current blobs in the workspace: {}\".format(workspace.Blobs()))\nworkspace.CreateNet(net)\nworkspace.RunNet(net.Proto().name)\nprint(\"Blobs in the workspace after execution: {}\".format(workspace.Blobs()))\nfor name in workspace.Blobs():\n print(\"{}:\\n{}\".format(name, workspace.FetchBlob(name)))",
"There are a few differences between RunNetOnce and RunNet, but the main difference is the computational overhead. Since RunNetOnce involves serializing the protobuf to pass between Python and C and instantiating the network, it may take longer to run. Let's run a test and see what the time overhead is.",
"# It seems that %timeit magic does not work well with\n# C++ extensions so we'll basically do for loops\nstart = time.time()\nfor i in range(1000):\n workspace.RunNetOnce(net)\nend = time.time()\nprint('Run time per RunNetOnce: {}'.format((end - start) / 1000))\n\nstart = time.time()\nfor i in range(1000):\n workspace.RunNet(net.Proto().name)\nend = time.time()\nprint('Run time per RunNet: {}'.format((end - start) / 1000))",
"Congratulations, you now know the many of the key components of the Caffe2 Python API! Ready for more Caffe2? Check out the rest of the tutorials for a variety of interesting use-cases!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
matthewljones/computingincontext
|
CiC_lecture_01_as_taught.ipynb
|
gpl-2.0
|
[
"Computing in Context sub history\nLecture one\nNumber munging\nThis is iPython.\nIt is swell.\nIt is Python in a brower.\nPure CS types not love.\nWe hackish types adore!\nDownload anaconda (esp if on windows)",
"#This is a comment\n#This is all blackboxed for now--DON'T worry about it\n# Render our plots inline\n%matplotlib inline\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\npd.set_option('display.mpl_style', 'default') # Make the graphs a bit prettier\nplt.rcParams['figure.figsize'] = (15, 5)\n\n",
"Our first data format\nRk,G,Date,Age,Tm,,Opp,,GS,MP,FG,FGA,FG%,3P,3PA,3P%,FT,FTA,FT%,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,GmSc,+/-\n1,1,2013-10-29,28-303,MIA,,CHI,W (+12),1,38:01,5,11,.455,0,1,.000,7,9,.778,0,6,6,8,1,0,2,0,17,16.9,+8\n2,2,2013-10-30,28-304,MIA,@,PHI,L (-4),1,36:38,9,17,.529,4,7,.571,3,4,.750,0,4,4,13,0,0,4,3,25,21.4,-8\n3,3,2013-11-01,28-306,MIA,@,BRK,L (-1),1,42:14,11,19,.579,1,2,.500,3,5,.600,1,6,7,6,2,1,5,2,26,19.9,-3\n4,4,2013-11-03,28-308,MIA,,WAS,W (+10),1,34:41,9,14,.643,3,5,.600,4,5,.800,0,3,3,5,1,0,6,2,25,17.0,+16\n5,5,2013-11-05,28-310,MIA,@,TOR,W (+9),1,36:01,13,20,.650,1,3,.333,8,8,1.000,2,6,8,8,0,1,1,2,35,33.9,+3",
"#looks much nicer on a wide screen!",
"Comma-separated value (CSVs) (files)\nLeBron James' first five games of the 2013-2014 NBA season",
"import csv\nimport urllib\n\nurl = \"https://gist.githubusercontent.com/aparrish/cb1672e98057ea2ab7a1/raw/13166792e0e8436221ef85d2a655f1965c400f75/lebron_james.csv\"\nstats = list(csv.reader(urllib.urlopen(url)))\n#example courtesy the great Allison Parrish!\n#What different things do urllib.urlopen(url) then csv.reader() and then list() do? \n\nstats[0]\n\n\nlen(stats)\n\nstats[74][0]\n",
"You can compose indexes! this is the 0th item of the 74th list. \nBUT I'm not going to torture you with this lower level analysis (for now)\nPandas first-line python tool for Exploratory Data Analysis\n\nrich data structures\npowerful ways to slice, dice, reformate, fix, and eliminate data\ntaste of what can do\n\n\nrich queries like databases\n\ndataframes\nThe library Pandas provides us with a powerful overlay that lets us use matrices but always keep their row and column names: a spreadsheet on speed. It allows us to work directly with the datatype \"Dataframes\" that keeps track of values and their names for us. And it allows us to perform many operations on slices of the dataframe without having to run for loops and the like. This is more convenient and involves faster processing.",
"import pandas as pd #we've already done this but just to remind you you'll need to\n\n#Let's start with yet another way to read csv files, this time from `pandas`\nimport os\ndirectory=(\"/Users/mljones/repositories/comp_in_context_trial/\")\nos.chdir(directory)",
"Now we read a big csv file using a function from pandas called pd.read_csv()",
"df=pd.read_csv('HMXPC_13.csv', sep=\",\")\n\ndf",
"Note at the bottom that the display tells us how many rows and columns we're dealing with.\nAs a general rule, pandas dataframe objects default to slicing by column using a syntax you'll know from dicts as in df[\"course_id\"].",
"df[\"course_id\"]\n\ndf[\"course_id\"][3340:3350] #pick out a list of values from ONE column",
"Instead of (column, row) we use name_of_dataframe[column name][row #]",
"df[3340:3350] # SLICE a list of ROWS\n\n#This was _not_ in class PREPARE FOR TERRIBLE ERROR!\n#THIS DOESN'T WORK\ndf[3340]\n\n#That's icky.\n#to pick out one row use `.ix`\ndf.ix[3340]",
"Why? A good question. Now try passing a list of just one row:",
"df.ix[[3340]]",
"We can pick out columns using their names and with a slice of rows.",
"df['final_cc_cname_DI'][100:110]\n\ndf.dtypes",
"In inputing CSV, Pandas parses each column and attempts to discern what sort of data is within. It's good but not infallible.\n- Pandas is particularly good with dates: you simply tell it which columns to parse as dates.\nLet's refine our reading of the CSV to parse the dates.",
"df=pd.read_csv('HMXPC_13.csv', sep=\",\" , parse_dates=['start_time_DI', 'last_event_DI'])",
"note that we pass a list of columns to pick out multiple columns",
"df[\"start_time_DI\"]",
"Now we can count how many times someone started",
"startdates=df['start_time_DI'].value_counts()\n# Exercise to the reader: how might you do this without using the `.value_counts()` method?\n\nstartdates\n\nstartdates.plot()\n\nstartdates.plot(title=\"I can't it's not butter.\")",
"What are",
"startdates.plot(kind=\"bar\")\n\n#Ok, let's consider how many times different people played a video\ndf[\"nplay_video\"].dropna().plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thammegowda/algos
|
deeplearning/udacity730/1_notmnist.ipynb
|
apache-2.0
|
[
"Deep Learning\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport tarfile\nimport random\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\n\n# Config the matlotlib backend as plotting inline in IPython\n%matplotlib inline\n\nprint(\"All imports are fine\")\n\n# defining some useful utils\n\ndef randindex(items):\n '''Gets random index'''\n return items[random.randint(0, len(items) -1)]",
"First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labelled examples. Given these sizes, it should be possible to train models quickly on any machine.",
"url = 'http://commondatastorage.googleapis.com/books1000/'\nlast_percent_reported = None\n\ndef download_progress_hook(count, blockSize, totalSize):\n \"\"\"A hook to report the progress of a download. This is mostly intended for users with\n slow internet connections. Reports every 1% change in download progress.\n \"\"\"\n global last_percent_reported\n percent = int(count * blockSize * 100 / totalSize)\n\n if last_percent_reported != percent:\n if percent % 5 == 0:\n sys.stdout.write(\"%s%%\" % percent)\n sys.stdout.flush()\n else:\n sys.stdout.write(\".\")\n sys.stdout.flush()\n \n last_percent_reported = percent\n \ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n if force or not os.path.exists(filename):\n print('Attempting to download:', filename) \n filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)\n print('\\nDownload Complete!')\n statinfo = os.stat(filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', filename)\n else:\n raise Exception(\n 'Failed to verify ' + filename + '. Can you get to it with a browser?')\n return filename\n\ntrain_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\ntest_filename = maybe_download('notMNIST_small.tar.gz', 8458043)",
"Extract the dataset from the compressed .tar.gz file.\nThis should give you a set of directories, labelled A through J.",
"num_classes = 10\nnp.random.seed(133)\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' % (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall()\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != num_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n num_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n \ntrain_folders = maybe_extract(train_filename)\ntest_folders = maybe_extract(test_filename)",
"Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.",
"from IPython.display import Image, display\nrootdir = \"notMNIST_large\"\nfor letter in os.listdir(rootdir):\n if \".pickle\" in letter:\n continue\n images = os.listdir(os.path.join(rootdir, letter))\n image = images[random.randint(0, len(images)-1)]\n image = os.path.join(rootdir, letter, image)\n display(Image(filename=image))",
"Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\nWe'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \nA few images might not be readable, we'll just skip them.",
"image_size = 28 # Pixel width and height.\npixel_depth = 255.0 # Number of levels per pixel.\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label.\"\"\"\n image_files = os.listdir(folder)\n dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n dtype=np.float32)\n print(folder)\n num_images = 0\n for image in image_files:\n image_file = os.path.join(folder, image)\n try:\n image_data = (ndimage.imread(image_file).astype(float) - \n pixel_depth / 2) / pixel_depth\n if image_data.shape != (image_size, image_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n dataset[num_images, :, :] = image_data\n num_images = num_images + 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n \n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n \ndef maybe_pickle(data_folders, min_num_images_per_class, force=False):\n dataset_names = []\n for folder in data_folders:\n set_filename = folder + '.pickle'\n dataset_names.append(set_filename)\n if os.path.exists(set_filename) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping pickling.' % set_filename)\n else:\n print('Pickling %s.' % set_filename)\n dataset = load_letter(folder, min_num_images_per_class)\n try:\n with open(set_filename, 'wb') as f:\n pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', set_filename, ':', e)\n \n return dataset_names\n\ntrain_datasets = maybe_pickle(train_folders, 45000)\ntest_datasets = maybe_pickle(test_folders, 1800)",
"Problem 2\nLet's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.",
"stats = {} \ncols = 5\nrows = 10 / cols\nf, grid = plt.subplots(rows, cols)\ncounter = 0\nfor picklefile in train_datasets:\n with open(picklefile) as f:\n dataset = pickle.load(f)\n L = picklefile.split(\"/\")[-1].replace(\".pickle\", \"\")\n stats[L]= len(dataset)\n grid[counter / cols][counter % cols].imshow(dataset[random.randint(0, len(dataset) - 1)])\n counter += 1",
"Problem 3\nAnother check: we expect the data to be balanced across classes. Verify that.",
"print(stats)\nplt.bar(range(len(stats)), stats.values(), align='center')\nplt.xticks(range(len(stats)), stats.keys())\n\nplt.show()",
"Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.\nAlso create a validation dataset for hyperparameter tuning.",
"def make_arrays(nb_rows, img_size):\n if nb_rows:\n dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n labels = np.ndarray(nb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\ndef merge_datasets(pickle_files, train_size, valid_size=0):\n num_classes = len(pickle_files)\n valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n train_dataset, train_labels = make_arrays(train_size, image_size)\n vsize_per_class = valid_size // num_classes\n tsize_per_class = train_size // num_classes\n \n start_v, start_t = 0, 0\n end_v, end_t = vsize_per_class, tsize_per_class\n end_l = vsize_per_class+tsize_per_class\n for label, pickle_file in enumerate(pickle_files): \n try:\n with open(pickle_file, 'rb') as f:\n letter_set = pickle.load(f)\n # let's shuffle the letters to have random validation and training set\n np.random.shuffle(letter_set)\n if valid_dataset is not None:\n valid_letter = letter_set[:vsize_per_class, :, :]\n valid_dataset[start_v:end_v, :, :] = valid_letter\n valid_labels[start_v:end_v] = label\n start_v += vsize_per_class\n end_v += vsize_per_class\n \n train_letter = letter_set[vsize_per_class:end_l, :, :]\n train_dataset[start_t:end_t, :, :] = train_letter\n train_labels[start_t:end_t] = label\n start_t += tsize_per_class\n end_t += tsize_per_class\n except Exception as e:\n print('Unable to process data from', pickle_file, ':', e)\n raise\n \n return valid_dataset, valid_labels, train_dataset, train_labels\n \n \ntrain_size = 200000\nvalid_size = 10000\ntest_size = 10000\n\nvalid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n train_datasets, train_size, valid_size)\n_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n\nprint('Training:', train_dataset.shape, train_labels.shape)\nprint('Validation:', valid_dataset.shape, valid_labels.shape)\nprint('Testing:', test_dataset.shape, test_labels.shape)",
"Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.",
"def randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\ntrain_dataset, train_labels = randomize(train_dataset, train_labels)\ntest_dataset, test_labels = randomize(test_dataset, test_labels)\nvalid_dataset, valid_labels = randomize(valid_dataset, valid_labels)\nprint(\"Shuffled\")",
"Problem 4\nConvince yourself that the data is still good after shuffling!",
"cols = 5\nrows = 10 / cols\n\nfor h, ds in {'Train': train_dataset, 'Test':test_dataset, 'Validation': valid_dataset}.items():\n print(h)\n _, grid = plt.subplots(rows, cols)\n counter = 0\n for i in range(10):\n grid[counter / cols][counter % cols].imshow(randindex(ds))\n counter += 1\n plt.show()\n",
"Finally, let's save the data for later reuse:",
"pickle_file = 'notMNIST.pickle'\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'valid_dataset': valid_dataset,\n 'valid_labels': valid_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)",
"Problem 5\nBy construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\nMeasure how much overlap there is between training, validation and test samples.\nOptional questions:\n- What about near duplicates between datasets? (images that are almost identical)\n- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.",
"# Hash is computed using Zobrist Hashing Algorithm https://en.wikipedia.org/wiki/Zobrist_hashing\n\nimport uuid\nmin_val = 0\nmax_val = 255\ntot_vals = max_val - min_val + 1 # possible entries for each pixel\n\nZB = np.zeros(shape=(image_size, image_size, tot_vals), dtype=object) # Zobrist Board\nfor i in range(image_size):\n for j in range(image_size):\n for k in range(tot_vals):\n randmbits = long(uuid.uuid4().int)\n ZB[i][j][k] = randmbits\nprint(\"Zobrist Board initialized\")\n\ndef hashfunc(img):\n h = 0\n for i in range(image_size):\n for j in range(image_size):\n k = img[i][j]\n # color is in range of [-1.0, 1.0];\n # converting to [0, 255]\n k = int(k * 127) + 128\n assert k >= min_val\n assert k <= max_val\n h ^= ZB[i][j][k]\n return h\ndef aggr_dictval_reptn(d): # Finds the value repeartation sum\n return sum(map(lambda x: x - 1, filter(lambda x: x > 1, d.values())))\n\ndef find_overlap(ds):\n d = {}\n percent = len(ds) / 100\n i = 0\n for img in ds:\n h = hashfunc(img)\n d[h] = d.get(h, 0) + 1\n if i % percent == 0:\n print(\"%d%%.. \" % (i/percent), end=\"\"),\n i += 1\n tot = len(ds)\n reptn = aggr_dictval_reptn(d)\n ovrlp = reptn / float(tot)\n return ovrlp, d\n \ndef find_overlap_hash(ds1, ds2): \n ovrlp1, d1 = find_overlap(ds1)\n print(\"overlap in ds1 = %f\" % ovrlp1)\n ovrlp2, d2 = find_overlap(ds2)\n print(\"overlap in ds2 = %f\" % ovrlp2)\n \n # finding duplication across datasets \n d = d1 # not making another copy!\n for h, c in d2.items():\n d[h] = d.get(h, 0) + c\n reptn = aggr_dictval_reptn(d)\n ovrlp = reptn / float(len(ds1) + len(ds2))\n print(\"repeatation across datasets = %d\" % reptn)\n print(\"overlap across datasets = %f\" % ovrlp)\n\nprint(\"Starting\")\nfind_overlap_hash(train_dataset, test_dataset)\nprint(\"Done\")",
"Problem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!",
"lg_model = LogisticRegression(C=1e5)\nX_data = np.array(map(lambda x: x.flatten(), train_dataset))\nlg_model = lg_model.fit(X_data, train_labels)\nprint(\"Fitting Done\")\n\npickle_file = 'logistic_regr_model.pickle'\nwith open(pickle_file, 'wb') as handle:\n pickle.dump(lg_model, handle)\nprint(\"Model dumped\")\nstatinfo = os.stat(pickle_file)\nprint('Model pickle size:', statinfo.st_size)\n\nerror = 0\ntds = np.array(map(lambda x: x.flatten(), test_dataset))\nY = lg_model.predict(tds)\nY_ = test_labels\nassert len(Y) == len(Y_) == len(test_dataset)\nfor i in xrange(len(Y)):\n if Y[i] != Y_[i]:\n error += 1\n\nprint(\"Error = %d out of %d, i.e. %.2f%%\" % (error, len(test_dataset), 100.0 * error/len(test_dataset)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
camillescott/ucd-ecs253
|
ECS253 - Homework 1.ipynb
|
cc0-1.0
|
[
"This is the common problem set for Homework 1 from the spring quarter Network Theory class at UC Davis taught by Prof. Raissa D'Souza. The original assignment is at http://mae.engr.ucdavis.edu/dsouza/Classes/253-S16/hw1.pdf. Source code for this notebook is on github at https://github.com/camillescott/ucd-ecs253.",
"%matplotlib inline\nimport numpy as np\nimport networkx as nx\nimport seaborn as sns\nsns.set_style('ticks')\nsns.set_context('poster')",
"Adjacency Matrices\nThe Matrix",
"A_directed = np.array( [[0, 1, 0, 0, 1],\n [0, 0, 1, 0, 0],\n [1, 0, 0, 1, 1],\n [0, 1, 1, 0, 0],\n [0, 0, 0, 0, 0]] )\nprint(A_directed)",
"A little visualization, just to double check.",
"nx.draw_networkx(nx.DiGraph(data=A_directed.T), \n labels=dict(zip(range(len(A_directed)), range(1, len(A_directed)+1))),\n node_size=600, \n node_color=sns.xkcd_rgb[\"pale orange\"])",
"Steady-State Probability of Random Walker\nWe can calculate the steady state probabilities quite simply by dividing out the column sums of the adjacency matrix $A$ to convert it into a state transition matrix $M$ and then computing the matrix power $M^i$. With $i$ as a reasonably high number, the result will converge to the steady state probabilities.",
"def calc_steady_state(A, i=100):\n M = A / A.sum(axis=0)\n M = np.linalg.matrix_power(M, i)\n return M\n\ndef print_probs(probs):\n print('\\n'.join('Node {0}: {1:.4f}'.format(node, p) for node, p in \\\n zip(range(1, len(probs)+1), probs)))",
"The resulting probabilities:",
"probs = calc_steady_state(A_directed)[:,0]\nprint_probs(probs)",
"Steady-State Probabilities for Undirected Graph\nFor the undirected variant, we just mirror across the diagonal.",
"A_undirected = np.array( [[0, 1, 1, 0, 1],\n [1, 0, 1, 1, 0],\n [1, 1, 0, 1, 1],\n [0, 1, 1, 0, 0],\n [1, 0, 1, 0, 0]] )\nprint(A_undirected)\n\nprobs = calc_steady_state(A_undirected)[:,0]\nprint_probs(probs)",
"Rate equations: Network growth with uniform attachment\nRate Equations\n$k > m$: $n_{k,t+1} = n_{k,t} + \\frac{1}{t}n_{k-1,t} - \\frac{1}{t}n_{k,t}$\n$k = m$: $n_{m,t+1} = n_{m,t} + 1 - \\frac{1}{t}n_{m,t}$\nProbabilistic Treatment\n$k > m$: $(t+1)P_{k,t+1} = tP_{k,t} + \\frac{1}{t}tP_{k-1,t} - \\frac{1}{t}tP_{k,t}$\n$\\Rightarrow (t+1)P_{k,t+1} = (t-1)P_{k,t} + P_{k-1}{t}$\n$k = m$: $(t+1)P_{m,t+1} = tP_{m,t} + 1 - \\frac{1}{t}tP_{m,t}$\n$\\Rightarrow (t+1)P_{m,t+1} = (t-1)P_{m,t} + 1$\nSolving for $P_k$\n$k > m$: $tP_k + P_k = tP_k - P_k + P_k - 1$\n$\\Rightarrow P_k = \\frac{P_{k-1}}{2}$\n$k = m$: $tP_m + P_m = tP_m - P_m + 1$\n$\\Rightarrow P_m = \\frac{1}{2}$\nFinal Expression\n$P_k = \\frac{1}{2} \\times \\frac{1}{2} \\times \\frac{1}{2} \\times ... \\times P_m$\n$\\Rightarrow P_k = \\frac{1}{2^k}$"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
napjon/ds-nd
|
p7-abtest/code-analysis.ipynb
|
mit
|
[
"Overview: Free Trial Screener\nAt the time of this experiment, Udacity courses currently have two options on the home page: \"start free trial\", and \"access course materials\". If the student clicks \"start free trial\", they will be asked to enter their credit card information, and then they will be enrolled in a free trial for the paid version of the course. After 14 days, they will automatically be charged unless they cancel first. If the student clicks \"access course materials\", they will be able to view the videos and take the quizzes for free, but they will not receive coaching support or a verified certificate, and they will not submit their final project for feedback.\nIn the experiment, Udacity tested a change where if the student clicked \"start free trial\", they were asked how much time they had available to devote to the course. If the student indicated 5 or more hours per week, they would be taken through the checkout process as usual. If they indicated fewer than 5 hours per week, a message would appear indicating that Udacity courses usually require a greater time commitment for successful completion, and suggesting that the student might like to access the course materials for free. At this point, the student would have the option to continue enrolling in the free trial, or access the course materials for free instead. This screenshot shows what the experiment looks like.\nThe hypothesis was that this might set clearer expectations for students upfront, thus reducing the number of frustrated students who left the free trial because they didn't have enough time—without significantly reducing the number of students to continue past the free trial and eventually complete the course. 
If this hypothesis held true, Udacity could improve the overall student experience and improve coaches' capacity to support students who are likely to complete the course.\nThe unit of diversion is a cookie, although if the student enrolls in the free trial, they are tracked by user-id from that point forward. The same user-id cannot enroll in the free trial twice. For users that do not enroll, their user-id is not tracked in the experiment, even if they were signed in when they visited the course overview page.\n<!--TEASER_END-->\n\nBackground\n\nStart free trial to allow paid version\nAccess course materials for free\n\nexperiment\n\n\nusers see this popup after clicking\nif they're not committed, warn them.\n\nWe have two initial hypotheses.\n\n“..this might set clearer expectations for students upfront, thus reducing the number of frustrated students who left the free trial because they didn't have enough time”\n”..without significantly reducing the number of students to continue past the free trial and eventually complete the course. “\n\nUnit of diversion: cookie\n\nusers in free trial tracked by user-id\nsame user-id can't enroll in free trial twice\nusers who don't enroll aren't tracked\n\nExperiment Design\nMetric Choice\nInvariant metrics\n\nNumber of cookies\nNumber of clicks\nClick through probability\n\nA sanity check is useful when we want to make sure that the data filtered for the experiment and control groups is the same. This can be done using the right invariant metrics. These three metrics shouldn't change because they sit outside of the experiment, in the sense that they are all measured before the experiment begins.\nThe number of cookies that view the page should be the same across groups: those visitors haven't clicked the \"Start Now\" button or seen the \"Free Trial Screener\" experiment yet. So the number of cookies can be used as an invariant metric. 
When users click the button, they still haven't seen the experiment that Udacity runs, so the number of clicks shouldn't change between the experiment and control groups.\nSince the experiment only occurs after the users click the \"Start Now\" button, the click-through-probability also has to be the same for the experiment and control groups. We know that the number of cookies and the number of clicks have to be the same, so the click-through-probability also has to be the same.\nBesides the cookie-id, there is also the user-id. But user-id is not a good invariant, because Udacity also lets unregistered users view the page up until they click the button.\nEvaluation metrics\n\nGross Conversion\nNet Conversion\n\nWe have two initial hypotheses.\n\n“..this might set clearer expectations for students upfront, thus reducing the number of frustrated students who left the free trial because they didn't have enough time”\n”..without significantly reducing the number of students to continue past the free trial and eventually complete the course. “\n\nFor evaluation metrics, I choose Gross Conversion, Retention, and Net Conversion. All of these are good evaluation metrics since they change when the experiment changes, and since their unit of analysis is close to the unit of diversion (the cookie), the empirical standard error should be much smaller. \nGross conversion is the number of user-ids to complete checkout and enroll in the free trial divided by the number of unique cookies to click the \"Start free trial\" button. After visitors click the button, they see the screener, hence the warning. It should make visitors who don't have a serious commitment back down and cancel right away.\nRetention is the number of user-ids to remain enrolled past the 14-day boundary (and thus make at least one payment) divided by the number of user-ids to complete checkout. 
The experiment intend to focus the visitors that only want to make a serious commitment. The retention rate should be higher for experiment group than the control group.\nNet conversion is number of user-ids to remain enrolled past the 14-day boundary (and thus make at least one payment) divided by the number of unique cookies to click the \"Start free trial\" button. Net Conversion also true, since the experiment intend to see higher conversion rate for students to continue (at least make one payment) than the users that only click the button, that doesn't even see the warning experiment given.\nOuf of these metrics, Retention turns out have a longer duration, which is 118 days. This takes too long, and it's not something Udacity willing to give for the experiment. So Retention will be excluded.\nThe first part is what Gross Conversion does, we should expect after the experiment, Gross Conversion shows significantly reduce the number who left trial because they don’t have time commitment. The experiment group should be significantly different than control group.\nThe second part is what Net Conversion does, we should expect after experiment, the metric shows insignificantly reduce the number of students who at least make one payment. The experiment group should not significantly different than control group.\nMeasuring Standard Deviation\n\nGross Conversion: 0.02\nNet Conversion 0.0156\n\nExpect analytical variance match empirical variance because unit of analysis and unit of diversion is same.\nTo calculate standard deviation, we use this formula\nPython\nFormula = np.sqrt(p * (1-p) / n)\nand using baseline data below.",
"import numpy as np\n\nbaselines = \"\"\"Unique cookies to view page per day:\t40000\nUnique cookies to click \"Start free trial\" per day:\t3200\nEnrollments per day:\t660\nClick-through-probability on \"Start free trial\":\t0.08\nProbability of enrolling, given click:\t0.20625\nProbability of payment, given enroll:\t0.53\nProbability of payment, given click:\t0.1093125\"\"\"\n\nlines = baselines.split('\\n')\nd_baseline = dict([(e.split(':\\t')[0], float(e.split(':\\t')[1])) for e in lines])",
"Since we have 5,000 sample cookies instead of the original 40,000, we can scale the counts accordingly using the baseline probabilities. For these two evaluation metrics, we need the number of users who click the \"Start free trial\" button, calculated as",
"n = 5000\nn_click = n * d_baseline['Click-through-probability on \"Start free trial\"']\nn_click",
"Next, standard deviation for Gross conversion is",
"p = d_baseline['Probability of enrolling, given click']\nround(np.sqrt(p * (1-p) / n_click),4)",
"and for Net Conversion,",
"p = d_baseline['Probability of payment, given click']\nround(np.sqrt(p * (1-p) / n_click),4)",
"For Gross Conversion and Net Conversion, the empirical variance should approximate the analytical variance, because the unit of analysis and the unit of diversion are the same (cookie-ids/user-ids).\nSizing\nNumber of Samples vs. Power\n\nGross Conversion. Baseline: 0.20625, dmin: 0.01 → 25,839 cookies who click.\nNet Conversion. Baseline: 0.1093125, dmin: 0.0075 → 27,411 cookies who click.\nNot using Bonferroni correction.\nUsing alpha = 0.05 and beta = 0.2\n\nWe feed these values into a sample size calculator and take the larger requirement, so the minimum number of clicking cookies is sufficient for both metrics. The calculator's output is for one group only, so it must be doubled to cover both groups. And since this counts only the users who click, we convert clicks to pageviews using the click-through probability. The pageviews needed will then be 685,275 impressions:",
"(27411 * 2) / d_baseline['Click-through-probability on \"Start free trial\"']",
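The calculator's result can be cross-checked analytically. A sketch using the standard normal-approximation formula for comparing two proportions (`required_sample_size` is a helper name introduced here, not from the original notebook; this approximation lands a little below the calculator's 27,411 clicks, but in the same neighborhood):

```python
import numpy as np
from scipy.stats import norm

def required_sample_size(p_base, d_min, alpha=0.05, beta=0.2):
    # Per-group sample size to detect an absolute change d_min in a
    # proportion with baseline p_base (normal approximation).
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(1 - beta)
    p_alt = p_base - d_min
    p_bar = (p_base + p_alt) / 2
    num = (z_a * np.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * np.sqrt(p_base * (1 - p_base) + p_alt * (1 - p_alt))) ** 2
    return num / d_min ** 2

n_clicks = required_sample_size(0.1093125, 0.0075)  # Net Conversion
pageviews = 2 * n_clicks / 0.08                     # both groups, via CTP
print(round(n_clicks), round(pageviews))
```

Online calculators use a slightly different exact formulation, so small differences from the quoted numbers are expected.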
"Duration vs. Exposure\n\nFraction: 0.8 (Low risk)\nDuration: 22 days (40000 pageviews/day)\n\nThe fraction of Udacity's traffic exposed to the experiment will be 80%. The experiment isn't risky enough to potentially leak as a blog post or news article; it isn't big news, since Udacity only wants to show users a small warning. Because only 40,000 pageviews can be gathered each day, the duration will be 22 days.\nThis is where the Retention metric fails as an evaluation metric: it would need a much longer duration, 118 days. That takes too long, and it's not something Udacity is willing to give the experiment, so Retention is excluded.\nExperiment Analysis\nSanity Checks\n\n\nNumber of Cookies:\n\nBounds = (0.4988,0.5012)\nObserved = 0.5006\nPasses? Yes\n\n\n\nNumber of clicks on “Start free trial”:\n\nBounds = (0.4959,0.5041)\nObserved = 0.5005\nPasses? Yes\n\n\n\nClick-through-probability on “Start free trial”:\n\nBounds = (0.0812,0.0830)\nObserved = 0.0821\nPasses? Yes\n\n\n\nSince we have passed all of the sanity checks, we can continue to analyze the experiment.\nWe do sanity checks to ensure that the experiment and control groups have equal proportions of the invariant metrics chosen earlier: the metrics that shouldn't change when the experiment changes. First, let's look at the data we want to analyze for both the control and experiment groups.",
"control = pd.read_csv('control_data.csv')\nexperiment = pd.read_csv('experiment.csv')\n\ncontrol.head()\n\nexperiment.head()",
"Next, we count the total views and clicks for both control and experiment groups.",
"control_views = control.Pageviews.sum()\ncontrol_clicks = control.Clicks.sum()\n\nexperiment_views = experiment.Pageviews.sum()\nexperiment_clicks = experiment.Clicks.sum()",
"For counts like the number of cookies and the number of clicks on the \"Start free trial\" button, we can build a confidence interval around the fraction we expect in the control group, with the actual fraction as the observed outcome. Since we expect the control and experiment groups to have equal proportions, we set the expected proportion to 0.5. For both invariant metrics, the confidence interval for the sanity checks uses the function below.",
"import numpy as np\n\ndef sanity_check_CI(control, experiment, expected):\n    \"\"\"95% confidence interval around the expected proportion for a count metric\"\"\"\n    SE = np.sqrt((expected * (1 - expected)) / (control + experiment))\n    ME = 1.96 * SE\n    return (expected - ME, expected + ME)",
"Now for sanity checks confidence interval of number of cookies who views the page,",
"sanity_check_CI(control_views,experiment_views,0.5)",
"The actual proportion is",
"float(control_views)/(control_views+experiment_views)",
"Since 0.5006 is within the interval, the experiment passes the sanity check for the number of cookies.\nNext, we calculate the confidence interval for the number of clicks on the \"Start free trial\" button.",
"sanity_check_CI(control_clicks,experiment_clicks,0.5)",
"And the actual proportion,",
"float(control_clicks)/(control_clicks+experiment_clicks)",
"The observed proportion, 0.5005, is again within the interval, so our experiment also passes this sanity check.\nThe sanity check for click-through probability is a slightly different calculation. For the simple counts above, we know that if the experiment is set up properly, the true proportion assigned to the control group should be 0.5. Since we don't know the true click-through probability of the control group, we instead build a confidence interval around the control group's CTP and treat the experiment group's CTP as the observed outcome. If the experiment group's CTP falls outside the control group's confidence interval, the experiment fails the sanity check and we can't continue the analysis.",
"import numpy as np\nfrom scipy.stats import norm\n\nctp_control = float(control_clicks) / control_views\nctp_experiment = float(experiment_clicks) / experiment_views\n\n# 95% confidence interval around the control group's click-through probability\nSE = np.sqrt(ctp_control * (1 - ctp_control) / control_views)\nz_star = round(norm.ppf(0.975), 2)  # 1.96\nME = z_star * SE\nprint((ctp_control - ME, ctp_control + ME))\n\nctp_experiment",
"As you can see, the experiment group's click-through probability is within the confidence interval built around the control group's click-through probability. Since we have passed all of the sanity checks, we can continue to analyze the experiment.\nEffect Size Test\n\nDid not use Bonferroni correction\nGross Conversion\nBounds = (-0.0291, -0.0120)\nStatistical Significance? Yes\nPractical Significance? Yes\n\n\nNet Conversion\nBounds = (-0.0116, 0.0019)\nStatistical Significance? No\nPractical Significance? No",
"get_gross = lambda group: float(group.dropna().Enrollments.sum())/ group.Clicks.sum()\nget_net = lambda group: float(group.dropna().Payments.sum())/ group.Clicks.sum()",
"Gross Conversion\nKeep in mind that observed_diff can be negative",
"N_cont = control.dropna().Clicks.sum()\nX_cont = control.dropna().Enrollments.sum()\nN_exp = experiment.dropna().Clicks.sum()\nX_exp = experiment.dropna().Enrollments.sum()\nprint('N_cont = %i' % N_cont)\nprint('X_cont = %i' % X_cont)\nprint('N_exp = %i' % N_exp)\nprint('X_exp = %i' % X_exp)\n\n# confidence interval for the difference in Gross Conversion,\n# using the pooled probability of both groups\nobserved_diff = float(X_exp) / N_exp - float(X_cont) / N_cont\np_pool = float(X_cont + X_exp) / (N_cont + N_exp)\nSE = np.sqrt((p_pool * (1 - p_pool)) * ((1.0 / N_cont) + (1.0 / N_exp)))\nME = 1.96 * SE\nprint((observed_diff - ME, observed_diff + ME))\n\nobserved_diff",
"Zero lies outside the confidence interval, so the difference is statistically significant. The magnitude of the observed difference is also above the dmin of 0.01, the minimum detectable effect, so it is practically significant as well: by this metric, the screener works as intended.\nNet Conversion",
"N_cont = control.dropna().Clicks.sum()\nX_cont = control.dropna().Payments.sum()\nN_exp = experiment.dropna().Clicks.sum()\nX_exp = experiment.dropna().Payments.sum()\nprint('N_cont = %i' % N_cont)\nprint('X_cont = %i' % X_cont)\nprint('N_exp = %i' % N_exp)\nprint('X_exp = %i' % X_exp)\n\n# confidence interval for the difference in Net Conversion,\n# using the pooled probability of both groups\nobserved_diff = float(X_exp) / N_exp - float(X_cont) / N_cont\np_pool = float(X_cont + X_exp) / (N_cont + N_exp)\nSE = np.sqrt((p_pool * (1 - p_pool)) * ((1.0 / N_cont) + (1.0 / N_exp)))\nME = 1.96 * SE\nprint((observed_diff - ME, observed_diff + ME))\n\nobserved_diff",
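The same pooled-probability interval can be packaged as a small helper. A sketch using the Net Conversion counts printed above (`diff_ci` is a name introduced here, not part of the original notebook):

```python
import numpy as np

def diff_ci(x_cont, n_cont, x_exp, n_exp, z=1.96):
    # Confidence interval for the difference in proportions,
    # using the pooled probability of both groups.
    d = x_exp / n_exp - x_cont / n_cont
    p_pool = (x_cont + x_exp) / (n_cont + n_exp)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_cont + 1 / n_exp))
    return d - z * se, d + z * se

# Net Conversion counts from the cells above
lo, hi = diff_ci(2033, 17293, 1945, 17260)
print(round(lo, 4), round(hi, 4))  # -0.0116 0.0019
```

The rounded bounds match the (-0.0116, 0.0019) interval reported for Net Conversion.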
"The observed difference is within the confidence interval, so it's not statistically significant, and it's not practically significant either. Based on this metric alone, the result is inconclusive.\nSign Test\n\nDid not use Bonferroni correction\nGross Conversion\np-value = 0.0026\nStatistical Significance? Yes\n\n\nNet Conversion\np-value = 0.6776\nStatistical Significance? No\n\n\n\nThe sign test is a secondary test whose result should agree with the effect size test. I'm using an online calculator to compute the binomial p-value: the probability of seeing this many days on which one group beats the other if the day-to-day outcome were due to chance. If that probability is small enough, below the 5% significance level I chose, the pattern is unlikely to be due to chance and the experiment's effect is significant.\nI'm using a helper function to compare, day by day, whether the metric in question is smaller for the control group than for the experiment group.",
"compare_prob = lambda col: ((control.dropna()[col] / control.dropna().Clicks) <\n (experiment.dropna()[col]/experiment.dropna().Clicks))\n ",
"Gross Conversion\nCounting the day-by-day Gross Conversion comparisons, I get:",
"compare_prob('Enrollments').value_counts()",
"Net Conversion",
"compare_prob('Payments').value_counts()",
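Instead of an online calculator, the same two-sided binomial p-values can be computed with `scipy`. A sketch, assuming the day-by-day comparisons come out 19 of 23 days in one direction for Gross Conversion and 10 of 23 for Net Conversion, counts consistent with the p-values reported in this analysis:

```python
from scipy.stats import binomtest  # scipy >= 1.7; older versions use binom_test

# Days (out of 23 with enrollment data) on which one group's daily rate
# beat the other's; under the null hypothesis each day is a fair coin flip.
p_gross = binomtest(19, n=23, p=0.5).pvalue
p_net = binomtest(10, n=23, p=0.5).pvalue
print(round(p_gross, 4), round(p_net, 4))  # 0.0026 0.6776
```

Because the null probability is 0.5, counting the days the other way around (4 of 23, 13 of 23) gives the same two-sided p-values.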
"I got a p-value of 0.6776 for Net Conversion. \nConclusion\nI would not use the Bonferroni correction in this case. The Bonferroni correction protects against launching when any one of several metrics happens to be significant by chance; our launch criteria instead require all conditions to hold at once, with Gross Conversion significant and Net Conversion not significant. So the correction does not fit this experiment.\n\nNo Bonferroni correction.\nGross Conversion needs to be significant, but Net Conversion doesn't.\n\nRecommendation\nGross Conversion looks good because it passes Udacity's practical significance boundary: the screener reduces the number of students who don't feel committed (in time/cost). However, even though Net Conversion is not statistically significant, its confidence interval touches the practical significance boundary, which is not what Udacity wants; Udacity could lose potential revenue if the experiment launches. So my recommendation is a further experiment, or cancelling if Udacity doesn't have the time.\n\nGross Conversion: passes\n\nNet Conversion: borderline; could lose potential revenue\n\n\ndecision: risky. Delay for a further experiment or cancel the launch.\n\n\nFollow-Up Experiment\nSo what would it take to turn away not-so-serious users without losing potential revenue? Every course overview page on Udacity already states the hours expected for the course, so warning users about the time commitment again may be unnecessary. What we could do instead is give them an incentive after enrollment.\nIn the experiment, after students enroll they are shown a panel on the right side of the video material page: an offer of free tuition until they graduate. The deal is that they have to become a Udacity Code Reviewer. Udacity has this program; it pays a reasonable hourly rate to graduates who review students' code. If they agree, they can click the “Start debt program” button below the information panel. They will be able to continue past the 14-day boundary and finish the course. But in return, they have to serve as a Code Reviewer and pay off the debt through payroll; they won't receive any salary until the debt is repaid.\nYes, it seems risky for Udacity. But if users break the agreement, for example by not becoming a Code Reviewer within two months, they will automatically be charged through their registered credit card. They will also be charged automatically if they cancel the program. So it's safe to assume the risk of users running off is handled, though this is not part of the experiment.\nThe hypothesis is that after being given the incentive, students become more serious and committed to completing the course. With this incentive, the number of users who cancel early in the course should be significantly reduced, adding to those who were already committed.\nThe unit of diversion is a user-id. As with the free trial, the same user-id can't join the debt program twice. A user-id is more cross-platform and better represents a person than a cookie. User-ids that don't enroll in the program are not tracked in the experiment. The number of user-ids that join the debt program but cancel at the end of the free trial is also not tracked.\n\nNot necessary to show the warning again\nStart Debt Program\nRisks: users break the agreement\nNot becoming a Udacity Code Reviewer\nCancelling midway through the program\n\n\nHypothesis\nNon-serious users become more committed after the incentive\nThe number of users who cancel early is reduced\nAdds to those already committed\n\n\n\nWe can use these invariant metrics for the follow-up experiment:\n\nNumber of cookies: That is, the number of unique cookies to view the course overview page.\nNumber of clicks: That is, the number of unique cookies to click the \"Start free trial\" button (which happens before the free trial screener is triggered).\nClick-through-probability: That is, the number of unique cookies to click the \"Start free trial\" button divided by the number of unique cookies to view the course overview page.\nGross conversion: That is, the number of user-ids to complete checkout and enroll in the free trial divided by the number of unique cookies to click the \"Start free trial\" button.\n\nAnd the evaluation metrics:\n\nDebt Conversion: That is, the number of user-ids to click “Start Debt Program” divided by the number of user-ids that enroll in the free trial.\nDebt-Net Conversion: That is, the number of user-ids to click “Start Debt Program” divided by the number of user-ids to remain enrolled past the 14-day boundary (and thus make at least one payment).\nNet conversion: That is, the number of user-ids to remain enrolled past the 14-day boundary (and thus make at least one payment) divided by the number of unique cookies to click the \"Start free trial\" button.\n\nSince we use user-ids as the unit of diversion, we expect all of the evaluation metrics to be practically significant."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
MegaShow/college-programming
|
Homework/Principles of Artificial Neural Networks/Week 3 Backpropagation/week_3_numpy.ipynb
|
mit
|
[
"Week 3 Back Propagation\nWe introduce back propagation in numpy and pytorch respectively. \nIf you have questions or suggestions about backpropagation with Numpy, contact Jiaxin Zhuang or email(zhuangjx5@mail2.sysu.edu.cn) \n1. Simple expressions and interpretation of the gradient\n1.1 Simple expressions\nLet's start simple so that we can develop the notation and conventions for more complex expressions. Consider a simple multiplication function of two numbers $f(x,y)=xy$. It is a matter of simple calculus to derive the partial derivative for either input:\n$$f(x,y) = x y \\hspace{0.5in} \\rightarrow \\hspace{0.5in} \\frac{\\partial f}{\\partial x} = y \\hspace{0.5in} \\frac{\\partial f}{\\partial y} = x$$",
"# set some inputs\nx1 = -2; x2 = 5;\n\n# perform the forward pass\nf = x1 * x2 # f becomes -10\n\n# perform the backward pass (backpropagation) in reverse order:\n# backprop through f = x * y\ndfdx1 = x2 # df/dx = y, so gradient on x becomes 5\nprint(\"gradient on x is {:2}\".format(dfdx1))\ndfdx2 = x1 # df/dy = x, so gradient on y becomes -2\nprint('gradient on y is {:2}'.format(dfdx2))",
"1.2 Interpretation of the gradient\nInterpretation: the derivatives indicate the rate of change of a function with respect to that variable surrounding an infinitesimally small region near a particular point:\n$$\\frac{df(x)}{dx} = \\lim_{h\\ \\to 0} \\frac{f(x + h) - f(x)}{h}$$\nIn other words, the derivative on each variable tells you the sensitivity of the whole expression on its value. As mentioned, the gradient $\\nabla f$ is the vector of partial derivatives, so we have that $\\nabla f = [\\frac{\\partial f}{\\partial x}, \\frac{\\partial f}{\\partial y}] = [y, x]$. \n2. Compound expressions with chain rule\n2.1 Simple examples for chain rule\nLet's now start to consider more complicated expressions that involve multiple composed functions, such as $f(x,y,z) = (x + y) z$.\nThis expression is still simple enough to differentiate directly, but we'll take a particular approach to it that will be helpful with understanding the intuition behind backpropagation. \nIn particular, note that this expression can be broken down into two expressions: $q=x+y$ and $f=qz$. As seen in the previous section, $f$ is just multiplication of $q$ and $z$, so $\\frac{\\partial f}{\\partial q} = z, \\frac{\\partial f}{\\partial z} = q$, and $q$ is addition of $x$ and $y$ so $\\frac{\\partial q}{\\partial x} = 1, \\frac{\\partial q}{\\partial y} = 1$.\nHowever, we don't necessarily care about the gradient on the intermediate value $q$ - the value of $\\frac{\\partial f}{\\partial q}$ is not useful. Instead, we are ultimately interested in the gradient of $f$ with respect to its inputs $x$, $y$, $z$. \nThe chain rule tells us that the correct way to “chain” these gradient expressions together is through multiplication. For example, $\\frac{\\partial f}{\\partial x} = \\frac{\\partial f}{\\partial q} \\frac{\\partial q}{\\partial x}$. In practice this is simply a multiplication of the two numbers that hold the two gradients. Let's see this with an example:",
"# set some inputs\nx = -2; y = 5; z = -4 \n\n# perform the forward pass\nq = 2*x + y # q becomes 1\nf = q * z # f becomes -4\nprint(q, f)\n\n# perform the backward pass (backpropagation) in reverse order:\n# first backprop through f = q * z = (2*x+y) * z\ndfdz = q # df/dz = q, so gradient on z becomes 1\ndfdq = z # df/dq = z, so gradient on q becomes -4\n# now backprop through q = 2*x + y\ndfdx = 2.0 * dfdq # dq/dx = 2. And the multiplication here is the chain rule!\ndfdy = 1.0 * dfdq # dq/dy = 1\nprint('df/dx is {:2}'.format(dfdx))\nprint('df/dy is {:2}'.format(dfdy))",
"2.2 Intuitive understanding of backpropagation\nNotice that backpropagation is a beautifully local process. \nEvery gate in a circuit diagram gets some inputs and can right away compute two things: \n1. its output value and \n2. the local gradient of its inputs with respect to its output value. \n3. Practice: Writing a simple Feedforward Neural Network\n3.1 Outline\nWe will implement a simple feedforward neural network using numpy. Thus, we need to define the network and implement the forward pass as well as the backward propagation.\n\nDefine a simple feedforward neural network with 1 hidden layer; implement forward and backward\nLoad data from a local csv file with pandas, which contains training and testing points generated by 3 different Gaussian distributions (different mean and std)\nDefine some functions for visualization and training\nTrain and predict every epoch\nPlot the distribution of the points' labels and the predictions",
"# Load necessary module for later\nimport numpy as np\nimport pandas as pd\nnp.random.seed(1024)",
"3.2 Define a Feedforward Neural Network, implement forward and backward\nA simple neural network with 1 hidden layer.\n```\n Networks Structure\n Input Weights Output\n\nHidden Layer [batch_size, 2] x [2,5] -> [batch_size, 5]\nactivation function(sigmoid) [batch_size, 5] -> [batch_size, 5]\nClassification Layer [batch_size, 5] x [5,3] -> [batch_size, 3]\nactivation function(sigmoid) [batch_size, 3] -> [batch_size, 3]\n```\nAccording to the training and testing data, each point is in two-dimensional space and there are three categories. Predictions will be one-hot vectors, like [0 0 1], [1 0 0], [0 1 0]",
"w1_initialization = np.random.randn(2, 5)\nw2_initialization = np.random.randn(5, 3)\n\nw2_initialization\n\nclass FeedForward_Neural_Network(object):\n    def __init__(self, learning_rate):\n        self.input_channel = 2   # number of input neurons\n        self.output_channel = 3  # number of output neurons\n        self.hidden_channel = 5  # number of hidden neurons\n        self.learning_rate = learning_rate\n\n        # Weights initialization.\n        # Usually we use random or uniform initialization to initialize weights;\n        # for simplicity, here we use fixed arrays instead.\n        # (2x5) weight matrix from input to hidden layer\n        self.weight1 = np.array([[ 2.12444863,  0.25264613,  1.45417876,  0.56923979,  0.45822365],\n                                 [-0.80933344,  0.86407349,  0.20170137, -1.87529904, -0.56850693]])\n\n        # (5x3) weight matrix from hidden to output layer\n        self.weight2 = np.array([[-0.06510141,  0.80681666, -0.5778176 ],\n                                 [ 0.57306064, -0.33667496,  0.29700734],\n                                 [-0.37480416,  0.15510474,  0.70485719],\n                                 [ 0.8452178 , -0.65818079,  0.56810558],\n                                 [ 0.51538125, -0.61564998,  0.92611427]])\n\n    def forward(self, X):\n        \"\"\"forward propagation through our network\n        \"\"\"\n        # dot product of X (input) and the first set of (2x5) weights\n        self.h1 = np.dot(X, self.weight1)\n        # activation function\n        self.z1 = self.sigmoid(self.h1)\n        # dot product of the hidden layer (z1) and the second set of (5x3) weights\n        self.h2 = np.dot(self.z1, self.weight2)\n        # final activation function\n        o = self.sigmoid(self.h2)\n        return o\n\n    def backward(self, X, y, o):\n        \"\"\"Backward: compute gradients and update parameters\n        Inputs:\n            X: data, [batch_size, 2]\n            y: label, one-hot vector, [batch_size, 3]\n            o: predictions, [batch_size, 3]\n        \"\"\"\n        # backward propagate through the network\n        self.o_error = y - o  # error in output\n        # applying the derivative of sigmoid to the output error (output delta)\n        self.o_delta = self.o_error * self.sigmoid_prime(o)\n\n        # z1 error: how much our hidden layer weights contributed to the output error\n        self.z1_error = self.o_delta.dot(self.weight2.T)\n        # applying the derivative of sigmoid to the z1 error\n        self.z1_delta = self.z1_error * self.sigmoid_prime(self.z1)\n\n        # adjusting first set (input --> hidden) weights\n        self.weight1 += X.T.dot(self.z1_delta) * self.learning_rate\n        # adjusting second set (hidden --> output) weights\n        self.weight2 += self.z1.T.dot(self.o_delta) * self.learning_rate\n\n    def sigmoid(self, s):\n        \"\"\"activation function\n        \"\"\"\n        return 1 / (1 + np.exp(-s))\n\n    def sigmoid_prime(self, s):\n        \"\"\"derivative of sigmoid, expressed in terms of the sigmoid output s\n        \"\"\"\n        return s * (1 - s)",
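A quick way to sanity-check `sigmoid_prime` is a finite-difference comparison. Note that the class applies `sigmoid_prime` to the sigmoid *output* s, so the analytic derivative is s * (1 - s). A minimal standalone check (not part of the original notebook):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
h = 1e-6

# central finite difference of sigmoid at x
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
# analytic derivative in terms of the sigmoid output, as in sigmoid_prime
s = sigmoid(x)
analytic = s * (1 - s)

print(np.max(np.abs(numeric - analytic)))  # close to zero (below 1e-8)
```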
"3.3 Loading data from a local CSV file using Pandas",
"# Import Module\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport math\n\ntrain_csv_file = './labels/train.csv'\ntest_csv_file = './labels/test.csv'\n# Load data from csv file, without header\ntrain_frame = pd.read_csv(train_csv_file, encoding='utf-8', header=None)\ntest_frame = pd.read_csv(test_csv_file, encoding='utf-8', header=None)\n\n# show data in Dataframe format (defined in pandas)\ntrain_frame\n\n# obtain data from specific columns\n\n# obtain data from first and second columns and convert into narray\ntrain_data = train_frame.iloc[:,0:2].values \n# obtain labels from third columns and convert into narray\ntrain_labels = train_frame.iloc[:,2].values \n# obtain data from first and second columns and convert into narray\ntest_data = test_frame.iloc[:,0:2].values\n# obtain labels from third columns and convert into narray\ntest_labels = test_frame.iloc[:,2].values\n\n# train & test data shape\nprint(train_data.shape)\nprint(test_data.shape)\n# train & test labels shape\nprint(train_labels.shape)\nprint(test_labels.shape)",
"3.4 Define some functions for visualization and training",
"def plot(data, labels, caption):\n    \"\"\"plot the data distribution, !!YOU CAN READ THIS LATER, if you are interested\n    \"\"\"\n    colors = cm.rainbow(np.linspace(0, 1, len(set(labels))))\n    for i in set(labels):\n        xs = []\n        ys = []\n        for index, label in enumerate(labels):\n            if label == i:\n                xs.append(data[index][0])\n                ys.append(data[index][1])\n        plt.scatter(xs, ys, color=colors[int(i)])\n    plt.title(caption)\n    plt.xlabel('x')\n    plt.ylabel('y')\n    plt.show()\n\nplot(train_data, train_labels, 'train_dataset')\n\nplot(test_data, test_labels, 'test_dataset')\n\ndef int2onehot(label):\n    \"\"\"convert labels into one-hot vectors, !!YOU CAN READ THIS LATER, if you are interested\n    Args:\n        label: [batch_size]\n    Returns:\n        onehot: [batch_size, categories]\n    \"\"\"\n    dims = len(set(label))\n    imgs_size = len(label)\n    onehot = np.zeros((imgs_size, dims))\n    onehot[np.arange(imgs_size), label] = 1\n    return onehot\n\n# convert labels into one-hot vectors\ntrain_labels_onehot = int2onehot(train_labels)\ntest_labels_onehot = int2onehot(test_labels)\nprint(train_labels_onehot.shape)\nprint(test_labels_onehot.shape)\n\ndef get_accuracy(predictions, labels):\n    \"\"\"Compute accuracy, !!YOU CAN READ THIS LATER, if you are interested\n    Inputs: \n        predictions: [batch_size, categories] one-hot vector\n        labels: [batch_size, categories]\n    \"\"\"\n    predictions = np.argmax(predictions, axis=1)\n    labels = np.argmax(labels, axis=1)\n    all_imgs = len(labels)\n    predict_true = np.sum(predictions == labels)\n    return predict_true / all_imgs\n\n# Please read this function carefully; it is related to the implementation of GD, SGD, and mini-batch\ndef generate_batch(train_data, train_labels, batch_size):\n    \"\"\"Generate batches\n    when batch_size=len(train_data), it's GD\n    when batch_size=1, it's SGD\n    when batch_size>1 & batch_size<len(train_data), it's mini-batch; usually batch_size=2,4,8,16...\n    \"\"\"\n    iterations = math.ceil(len(train_data) / batch_size)\n    for i in range(iterations):\n        index_from = i * batch_size\n        index_end = (i + 1) * batch_size\n        yield (train_data[index_from:index_end], train_labels[index_from:index_end])\n\ndef show_curve(ys, title):\n    \"\"\"plot curve for Loss and Accuracy, !!YOU CAN READ THIS LATER, if you are interested\n    Args:\n        ys: loss or acc list\n        title: Loss or Accuracy\n    \"\"\"\n    x = np.array(range(len(ys)))\n    y = np.array(ys)\n    plt.plot(x, y, c='b')\n    plt.axis()\n    plt.title('{} Curve:'.format(title))\n    plt.xlabel('Epoch')\n    plt.ylabel('{} Value'.format(title))\n    plt.show()",
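The behaviour of `generate_batch` is easiest to see on a tiny array: with 10 samples and `batch_size=4` it yields `ceil(10/4) = 3` batches, the last one shorter. A self-contained sketch that restates the slicing logic above:

```python
import math
import numpy as np

def generate_batch(data, labels, batch_size):
    # yields consecutive slices; the final batch may be shorter
    iterations = math.ceil(len(data) / batch_size)
    for i in range(iterations):
        yield (data[i * batch_size:(i + 1) * batch_size],
               labels[i * batch_size:(i + 1) * batch_size])

data = np.arange(10).reshape(10, 1)
labels = np.arange(10)
sizes = [len(xs) for xs, ys in generate_batch(data, labels, 4)]
print(sizes)  # [4, 4, 2]
```

Setting `batch_size=1` recovers SGD, and `batch_size=len(data)` recovers full-batch gradient descent, as described in the docstring.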
"3.5 Training model and make predictions",
"learning_rate = 0.1\n\nepochs = 400 # training epoch\n\nbatch_size = len(train_data) # GD\n# batch_size = 1 # SGD\n# batch_size = 8 # mini-batch\n\nmodel = FeedForward_Neural_Network(learning_rate) # declare a simple feedforward neural model\n\nlosses = []\naccuracies = []\n\nfor i in range(epochs):\n loss = 0\n for index, (xs, ys) in enumerate(generate_batch(train_data, train_labels_onehot, batch_size)):\n predictions = model.forward(xs) # forward phase\n loss += 1/2 * np.mean(np.sum(np.square(ys-predictions), axis=1)) # Mean square error\n model.backward(xs, ys, predictions) # backward phase\n \n losses.append(loss)\n \n # train dataset acc computation\n predictions = model.forward(train_data)\n # compute acc on train dataset\n accuracy = get_accuracy(predictions, train_labels_onehot)\n accuracies.append(accuracy)\n \n if i % 50 == 0:\n print('Epoch: {}, has {} iterations'.format(i, index+1))\n print('\\tLoss: {:.4f}, \\tAccuracy: {:.4f}'.format(loss, accuracy))\n \ntest_predictions = model.forward(test_data)\n# compute acc on test dataset\ntest_accuracy = get_accuracy(test_predictions, test_labels_onehot)\nprint('Test Accuracy: {:.4f}'.format(test_accuracy))",
"!!! Homework 1. Describe the training procedure, based on codes above.\nThe training of a BP (backpropagation) neural network consists of four main steps: network initialization, forward propagation of the signal, backward propagation of the error, and weight updates.\n\nNetwork initialization: initialize the nodes of the input, hidden, and output layers, initialize the weights between layers, and choose the activation function, learning rate, and other parameters. Here the layers have 2, 5, and 3 nodes respectively, the weights are fixed values rather than uniformly random, and the sigmoid function is used as the activation function.\nForward propagation: feed in a training sample; through the weights and the activation function, an output value is finally produced.\nBackward propagation: compare the output value with the training sample's label to compute the error, and propagate it back step by step to the hidden layer and the input layer.\nWeight update: update the current weights according to the error passed back from the following layer and the learning rate.\n\nTraining first initializes the network and then loops for a number of epochs.\nWithin the training loop, the data is split according to the gradient descent variant chosen in advance, and each split in turn goes through forward propagation, backward propagation, and weight updates. Full-batch gradient descent does not split the data, stochastic gradient descent uses only one sample at a time, and mini-batch gradient descent uses a batch size determined by a parameter.\nAfter training, the test set samples can be used to measure the accuracy of the BP neural network.",
"# Draw losses curve using losses \nshow_curve(losses, 'Loss')\n\n# Draw Accuracy curve using accuracies\nshow_curve(accuracies, 'Accuracy')",
"!!! Homework 2\nSet learning rate = 0.01 to train the model and show the two curves below",
"def training(learning_rate, batch_size):\n model = FeedForward_Neural_Network(learning_rate) # declare a simple feedforward neural model\n\n losses = []\n accuracies = []\n\n for i in range(epochs):\n loss = 0\n for index, (xs, ys) in enumerate(generate_batch(train_data, train_labels_onehot, batch_size)):\n predictions = model.forward(xs) # forward phase\n loss += 1/2 * np.mean(np.sum(np.square(ys-predictions), axis=1)) # Mean square error\n model.backward(xs, ys, predictions) # backward phase\n\n losses.append(loss)\n\n # train dataset acc computation\n predictions = model.forward(train_data)\n # compute acc on train dataset\n accuracy = get_accuracy(predictions, train_labels_onehot)\n accuracies.append(accuracy)\n\n if i % 50 == 0:\n print('Epoch: {}, has {} iterations'.format(i, index+1))\n print('\\tLoss: {:.4f}, \\tAccuracy: {:.4f}'.format(loss, accuracy))\n\n test_predictions = model.forward(test_data)\n # compute acc on test dataset\n test_accuracy = get_accuracy(test_predictions, test_labels_onehot)\n print('Test Accuracy: {:.4f}'.format(test_accuracy))\n\n # Draw losses curve using losses \n show_curve(losses, 'Loss')\n \n # Draw Accuracy curve using accuracies\n show_curve(accuracies, 'Accuracy')\n \n return model\n\nlearning_rate = 0.01\ntraining(learning_rate, batch_size)",
"!!! Homework 3\nUse SGD and mini-batch to train the model and show the four curves below",
"# SGD\nlearning_rate = 0.1\nbatch_size = 1\ntraining(learning_rate, batch_size)\n\n# mini-batch\nlearning_rate = 0.1\nbatch_size = 8\ntraining(learning_rate, batch_size)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GeoNet/fits
|
examples/Notebook_2.ipynb
|
apache-2.0
|
[
"Plot multiple volcanic data sets from the FITS (FIeld Time Series) database\nIn this notebook we will plot data of multiple types from volcano observatory instruments using data from the FITS (FIeld Time Series) database. This notebook assumes that the reader has either read the previous FITS data access and plotting Jupyter Notebook or has a basic understanding of Python. Some of the code from the previous notebook has been brought over in the form of package imports and a function in the first code segment.\nTo begin, run the following code segment:",
"# Import packages\n\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Define functions\n\ndef build_query(site, data_type):\n \n '''\n Take site code and data type and generate a FITS API query for an observations csv file\n '''\n \n # Ensure parameters are in the correct format for use with the FITS API\n\n site = str.upper(site) # ensure site code is upper case\n \n # Build a FITS API query by combining parameter:value pairs in the query format\n\n query_suffix = 'siteID=%s&typeID=%s' % (site, data_type)\n\n # Combine the query parameter=value string with the FITS observation data URL\n\n URL = 'https://fits.geonet.org.nz/observation?' + query_suffix\n\n return URL",
"Next we specify the sites and corresponding data types we want to view. Volcanic data has many types, and FITS database TypeID may not be obvious, so this may be a useful resource if the query isn't working.\nTo discover what geodetic sites exist, browse the GeoNet Network Maps. Volcanic sites can be found by data type in the FITS GUI.\nAt the Ruapehu Crater Lake (RU001) lake temperature and Mg<sup>2+</sup> concentration are used (in combination with the lake level and wind speed) to model the amount of energy that enters the lake from below. In the next code segment we will gather this data.",
"# Set sites and respective data types in lists.\n\nsites = ['RU001', 'RU001']\ndata_types = ['t', 'Mg-w']\n\n# Ensure input is in list format\n\nif type(sites) != list:\n \n site = sites\n sites = []\n sites.append(site)\n \nif type(data_types) != list:\n \n temp_data_types = data_types\n data_types = []\n data_types.append(temp_data_types)\n\n# Check that each site has a corresponding data type and vice versa\n\nif len(sites) != len(data_types):\n \n print('Number of sites and data types are not equal!')\n\n# Create a list to store DataFrame objects in\n\ndata = [[] for i in range(len(sites))]\n\n# Parse csv data from the FITS database into the data list \n\nfor i in range(len(sites)):\n \n URL = build_query(sites[i], data_types[i]) # FITS API query building function\n \n try:\n \n data[i] = pd.read_csv(URL, names=['date-time', data_types[i], 'error'], header=0, parse_dates = [0], index_col = 0)\n \n except:\n \n print('Site or data type does not exist')",
"The only difference between this code segment and the corresponding segment of the previous notebook is that here we store DataFrame objects in a list and generate them using a for loop. Again we can plot the data in a simple plot:",
"# Plot the data on one figure\n\ncolors = ['red', 'blue']\nfor i in range(len(data)):\n \n data[i].loc[:, data_types[i]].plot(marker='o', linestyle=':', color = colors[i])\n \nplt.xlabel('Time', fontsize = 12)\nplt.ylabel('')\nplt.show()",
"While the above plot may succeed in showing the two data series on the same figure, it doesn't do it in a very useful way. We can use a few features of matplotlib to improve the readability of the figure:",
"# Generate blank figure to plot onto\n\nfig, ax1 = plt.subplots()\n\n# Plot the first data series onto the figure\n\ndata[0].loc[:, data_types[0]].plot(marker='o', linestyle=':', ax = ax1, color = colors[0], label = data_types[0])\n\n# Plot the second data series onto the figure\n\nax2 = ax1.twinx() # Share x axis between two y axes\ndata[1].loc[:, data_types[1]].plot(marker='o', linestyle=':', ax = ax2, color = colors[1], label = data_types[1])\n\n# Make a legend for both plots\n\nplot1, labels1 = ax1.get_legend_handles_labels()\nplot2, labels2 = ax2.get_legend_handles_labels()\nax1.legend(plot1 + plot2, labels1 + labels2, loc = 0)\n\n# Tidy up plot\n\nax1.set_xlabel('Time', rotation = 0, labelpad = 15, fontsize = 12)\nax1.set_ylabel(data_types[0], rotation = 0, labelpad = 35, fontsize = 12)\nax2.set_ylabel(data_types[1], rotation = 0, labelpad = 35, fontsize = 12)\nplt.title(data_types[0] + ' and ' + data_types[1] + ' data for ' + sites[0] + ' and ' + sites[1], fontsize = 12)\n\nplt.show()",
"This figure makes it much easier to compare the data series, but it is also very cluttered. The next code segment plots each data series in a separate subplot within the same figure to maximise readability without reducing the operator's ability to compare the two data series.",
"# New figure\n\nplt.figure()\n\n# Plot first data series onto subplot\n\nax1 = plt.subplot(211) # Generate first subplot\ndata[0].loc[:, data_types[0]].plot(marker='o', linestyle=':', ax = ax1, color = colors[0], label = data_types[0])\nplt.title(data_types[0] + ' and ' + data_types[1] + ' data for ' + sites[0] + ' and ' + sites[1], fontsize = 12)\n\n# Plot second data series onto second subplot\n\nax2 = plt.subplot(212, sharex = ax1)\ndata[1].loc[:, data_types[1]].plot(marker='o', linestyle=':', color = colors[1], label = data_types[1])\n\n# Tidy up plot\n\nax2.set_xlabel('Time', rotation = 0, labelpad = 15, fontsize = 12)\nax1.set_ylabel(data_types[0], rotation = 0, labelpad = 35, fontsize = 12)\nax2.set_ylabel(data_types[1], rotation = 0, labelpad = 35, fontsize = 12)\n\n# Remove messy minor x ticks\n\nax1.tick_params(axis = 'x', which = 'minor', size = 0)\nax2.tick_params(axis = 'x', which = 'minor', size = 0)\n\nplt.show()\n",
"Which of these two plot styles is best is data-dependent and a matter of preference. When only two data series are being plotted it is fairly easy to overlay the two, but when more than two are used subplotting quickly becomes favourable.\nAnother useful dataset used for volcanic activity observation is the CO<sub>2</sub>/SO<sub>2</sub> ratio, as high values of this ratio can indicate a fresh batch of magma beneath a volcano. We will look now at the dataset for monthly airborne measurements of the two gases at White Island. As multiple collection methods exist for these data types, we will need to expand the build_query function to allow methodID specification.",
"# Define functions\n\ndef build_query(site, data_type, method_type):\n \n '''\n Take site code and data type and generate a FITS API query for an observations csv file\n '''\n \n # Ensure parameters are in the correct format for use with the FITS API\n\n site = str.upper(site) # ensure site code is upper case\n \n # Build a FITS API query by combining parameter:value pairs in the query format\n\n query_suffix = 'siteID=%s&typeID=%s&methodID=%s' % (site, data_type, method_type)\n\n # Combine the query parameter=value string with the FITS observation data URL\n\n URL = 'https://fits.geonet.org.nz/observation?' + query_suffix\n\n return URL",
"Now we will gather the data, make a few specific modifications, and then present it. If you change the data types used here, the code will fail, because there are hard-coded variable redefinitions that require this particular type of data. Consider this code segment an example, not a modifiable script like the other segments of this notebook.",
"# Redefine variables\n\nsites = ['WI000','WI000']\ndata_types = ['SO2-flux-a', 'CO2-flux-a']\nmethod_types = ['cont', 'cont']\n\n# Check that each site has a corresponding data type and vice versa\n\nif (len(sites) != len(data_types)) or (len(sites) != len(method_types)) or (len(data_types) != len(method_types)):\n \n print('Number of sites, data types, and collection methods are not all equal!')\n\n# Create a list to store DataFrame objects in\n\ndata = [[] for i in range(len(sites))]\n\n# Parse csv data from the FITS database into the data list \n\nfor i in range(len(sites)):\n \n URL = build_query(sites[i], data_types[i], method_types[i]) # FITS API query building function\n \n try:\n \n data[i] = pd.read_csv(URL, names=['date-time', data_types[i], 'error'], header=0, parse_dates = [0], index_col = 0)\n \n except:\n \n print('Site or data type does not exist')\n\n# Remove non-synchronous measurements in the two data series\n\ndata[0] = data[0][data[0].index.isin(data[1].index)]\ndata[1] = data[1][data[1].index.isin(data[0].index)]\n\n# Hard-code in the ratio calculation\n\nratio = pd.DataFrame() # make an empty DataFrame\nratio['value'] = data[1]['CO2-flux-a'] / data[0]['SO2-flux-a'] # calculate the ratio between observations and call it value\nratio.index = data[1].index # the ratio index is the CO2 flux index (observation times)\n\n# Plot the dataset\n\nratio.loc[:,'value'].plot(marker='o', linestyle=':', color='blue')\n\n# Add functional aspects to plot\n\nplt.xlabel('Time', fontsize = 12)\nplt.ylabel('Ratio', fontsize = 12)\nplt.title('Ratio of ' + data_types[0] + ' and ' + data_types[1] + ' for site ' + sites[0], fontsize = 12)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
myinxd/agn-ae
|
code-sdss/calc_reconstruction.ipynb
|
mit
|
[
"# Applying kcorrect_python to SDSS observed ugriz data\n# Download\n# 1. kcorrect package: http://cosmo.nyu.edu/blanton/kcorrect/kcorrect.v4_3.tar.gz\n# 2. kcorrect_python: https://pypi.python.org/pypi/kcorrect_python/2017.07.05\n# Installation\n# 1. see kcorrect install\n# 2. see kcorrect_python readme\n\n# Some physics links\n# Measures of flux and magnitude: http://www.sdss.org/dr12/algorithms/magnitudes/\n# kcorrect: http://kcorrect.org",
"Given the excellent agreement between cmodel magnitudes (see cmodel magnitudes above) and PSF magnitudes for point sources, and between cmodel magnitudes and Petrosian magnitudes (albeit with intrinsic offsets due to aperture corrections) for galaxies, the cmodel magnitude is now an adequate proxy to use as a universal magnitude for all types of objects. As it is approximately a matched aperture to a galaxy, it has the great advantage over Petrosian magnitudes, in particular, of having close to optimal noise properties.\nA \"maggy\" is the flux f of the source relative to the standard source f0 (which defines the zeropoint of the magnitude scale). Therefore, a \"nanomaggy\" is 10^-9 times a maggy.\nTo relate these quantities to standard magnitudes, an object with flux f given in nMgy has a Pogson magnitude:\nm = [22.5 mag] - 2.5 log10(f)\n- Note that magnitudes listed in the SDSS catalog, however, are not standard Pogson magnitudes, but asinh magnitudes.\n- Magnitudes within the SDSS are expressed as inverse hyperbolic sine (asinh) magnitudes, sometimes referred to informally as luptitudes.\nThe relation between detected flux f and asinh magnitude m is:\n m = -2.5 / ln(10) * [asinh((f/f0) / (2b)) + ln(b)]\nHere f0 is given by the classical zero point of the magnitude scale, i.e., f0 is the flux of an object with conventional magnitude of zero.\nAsinh softening parameters\n| Filter | b | zero-flux magnitude [m(f/f0 = 0)] | m(f/f0 = 10b) |\n|:-------|:------:|:-------:|:--------:|\n| u | 1.4e-10| 24.63 | 22.12 |\n| g | 0.9e-10| 25.11 | 22.60 |\n| r | 1.2e-10| 24.80 | 22.29 |\n| i | 1.8e-10| 24.36 | 21.85 |\n| z | 7.4e-10| 22.83 | 20.32 |\nSDSS ugriz magnitudes to AB ugriz magnitudes\n- u: u_AB = u_SDSS - 0.04 mag\n- g,r,i are close to AB\n- z: z_AB = z_SDSS + 0.02 mag\nmagnitudes to maggy\n\ncmodelmag - extinction_correction\nSDSS to AB\nasinh mag to flux\n f = sinh([m / (-2.5/ln(10)) - ln(b)]) * (2b) * f0\n\nmaggy_ivar\nNote that the conversion to the inverse variances from the maggies and the magnitude errors is (0.4 ln(10) × maggies × magerr)^-2\nwhat are the magerr? Can be downloaded from SDSS skyserver",
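The asinh relation above and its inverse can be checked numerically. This is a sketch of the formulas as written, not code from the notebook; the r-band softening parameter comes from the table:

```python
import math

def asinh_mag(flux_ratio_val, b):
    # m = -2.5/ln(10) * (asinh((f/f0)/(2b)) + ln(b))
    return -2.5 / math.log(10) * (math.asinh(flux_ratio_val / (2 * b)) + math.log(b))

def flux_ratio(m, b):
    # inverse relation: f/f0 = 2b * sinh(-m*ln(10)/2.5 - ln(b))
    return 2 * b * math.sinh(-m * math.log(10) / 2.5 - math.log(b))

b_r = 1.2e-10  # r-band softening parameter b
print(round(asinh_mag(0.0, b_r), 2))  # zero-flux magnitude, 24.8, matching the table
```

The zero-flux magnitude reproducing the tabulated value is a quick sanity check that the softening parameter and the sign conventions are consistent.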
"import kcorrect as kc\nimport math\nimport numpy as np\nimport os\nimport pandas as pd\n\n# load the unLRG sample list\nlistpath = \"./BH_SDSS_cross_checked.xlsx\"\ndata = pd.read_excel(listpath, \"Sheet2\")\n\ndef calc_maggies(mag, ext, band_id):\n    \"\"\"Calc flux from magnitude\"\"\"\n    # extinction correction\n    mag = mag - ext\n    # AB calibration (additive offsets, see the conversions above)\n    ab_coeff = [-0.04, 0, 0, 0, 0.02]\n    mag = mag + ab_coeff[band_id]\n    # mag to maggie\n    f0 = 3631 #[Jy]\n    b_coeff = [1.4e-10, 0.9e-10, 1.2e-10, 1.8e-10, 7.4e-10]\n    b = b_coeff[band_id]\n    # maggie = math.sinh((mag / (-2.5/math.log(10)) - math.log(b))) * (2*b) * f0\n    maggie = 10 ** (mag / -2.5)\n\n    return maggie\n\n# TODO\ndef calc_maggies_ivar(maggie, magerr):\n    \"\"\"Calc maggie inverse variance\"\"\"\n    maggie_ivar = (0.4 * math.log(10) * maggie * magerr)**(-2)\n    return maggie_ivar\n\ndef get_sample_params(mags, exts, magerrs, z):\n    \"\"\"Generate the parameters for kcorrection estimation\"\"\"\n    param = np.zeros((11,))\n    param[0] = z\n    # calc maggies\n    for i, mag in enumerate(mags):\n        param[i+1] = calc_maggies(mag=mag, ext=exts[i], band_id=i)\n    # calc maggie_ivar\n    for i, magerr in enumerate(magerrs):\n        param[i+6] = calc_maggies_ivar(maggie=param[i+1], magerr=magerr)\n\n    return param\n\n# calc reconstructed_mag\ndef calc_reconmag(sample_params):\n    kc.load_templates() # load the default template\n    kc.load_filters() # load the SDSS filters\n    # get the coeffs\n    coeff = kc.fit_coeffs(sample_params)\n    reconmag = kc.reconstruct_maggies(coeff)\n    return reconmag\n\n# calc the absolute magnitude from magnitude_r and k_correction\nfrom astropy.cosmology import FlatLambdaCDM\nimport astropy.units as au\n\ndef calc_luminosity_distance(redshift):\n    \"\"\"Calculate the luminosity distance.\"\"\"\n    # Hubble constant at z = 0\n    H0 = 71.0\n    # Omega0, total matter density\n    Om0 = 0.27\n    cosmo = FlatLambdaCDM(H0=H0, Om0=Om0)\n    # Luminosity distance, [Mpc]\n    dL = cosmo.luminosity_distance(redshift)\n\n    return dL.to(au.pc)\n\ndef calc_absmag(mag_r, dl, kcrt):\n    mag_abs = mag_r - 5*(math.log10(dl) - 1) - kcrt\n    return mag_abs",
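The distance-modulus step in `calc_absmag` can be sanity-checked in isolation. The magnitude, distance, and k-correction below are made-up values, not from the sample:

```python
import math

def calc_absmag(mag_r, dl, kcrt):
    # M = m - 5*(log10(d_L/pc) - 1) - K  (distance modulus with k-correction)
    return mag_r - 5 * (math.log10(dl) - 1) - kcrt

# hypothetical: apparent r magnitude 17.0 at 100 Mpc (1e8 pc) with K = 0.1
M = calc_absmag(17.0, 1.0e8, 0.1)
print(M)  # -18.1 (distance modulus of 100 Mpc is 35 mag)
```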
"Single test",
"data.keys()\n\n# a single test\nextinction_u = data[\"extinction_u\"]\nextinction_g = data[\"extinction_g\"]\nextinction_r = data[\"extinction_r\"]\nextinction_i = data[\"extinction_i\"]\nextinction_z = data[\"extinction_z\"]\n\ncmodelmag_u = np.nan_to_num(data[\"cmodelmag_u\"])\ncmodelmag_g = np.nan_to_num(data[\"cmodelmag_g\"])\ncmodelmag_r = np.nan_to_num(data[\"cmodelmag_r\"])\ncmodelmag_i = np.nan_to_num(data[\"cmodelmag_i\"])\ncmodelmag_z = np.nan_to_num(data[\"cmodelmag_z\"])\n\ncmodelmagerr_u = np.nan_to_num(data[\"cmodelmagerr_u\"])\ncmodelmagerr_g = np.nan_to_num(data[\"cmodelmagerr_g\"])\ncmodelmagerr_r = np.nan_to_num(data[\"cmodelmagerr_r\"])\ncmodelmagerr_i = np.nan_to_num(data[\"cmodelmagerr_i\"])\ncmodelmagerr_z = np.nan_to_num(data[\"cmodelmagerr_z\"])\n\nidx_u1 = np.where(cmodelmag_u != -9999)[0]\nidx_u2 = np.where(cmodelmag_u != 0.0)[0]\nidx_u3 = np.where(cmodelmag_u != 10000)[0]\nidx = np.intersect1d(idx_u1,idx_u2)\nidx = np.intersect1d(idx, idx_u3)\n\nredshift = data[\"z\"]\n\ni = 1109\nz = redshift[i]\nmags = [cmodelmag_u[i],cmodelmag_g[i],cmodelmag_r[i],cmodelmag_i[i],cmodelmag_z[i]]\nexts = [extinction_u[i],extinction_g[i],extinction_r[i],extinction_i[i],extinction_z[i]]\nmagerrs = [cmodelmagerr_u[i],cmodelmagerr_g[i],cmodelmagerr_r[i],cmodelmagerr_i[i],cmodelmagerr_z[i]]\nparams = get_sample_params(mags,exts,magerrs,z)\n\nparams\n\nreconmag = calc_reconmag(params)\n\nreconmag\n\ndl = calc_luminosity_distance(z)\n# mag_abs = calc_absmag(mags[2],dl,reconmag[3])\n\ndl\n\n5*(math.log10(dl.value) - 1)\n\nmags[2]\n\nreconmag[3]\n\nmags[2] - 40.8 - reconmag[3]\n\nmag_abs = calc_absmag(mags[2],dl.value,reconmag[3])\n\nmag_abs\n\nf = 10 ** (mag_abs/(-2.5))\n\nf"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
timomwa/50ForReel
|
ITEC610_Python_Codes_And_Comments.ipynb
|
gpl-3.0
|
[
"<a href=\"https://colab.research.google.com/github/timomwa/50ForReel/blob/master/ITEC610_Python_Codes_And_Comments.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nAnswers for:\n\nITEC610 Python Fundamentals for data science, Semester 1, 2022 Assignment number (2)\nAssessment Artefact: Python Codes and Comments\nWeighting [30%]\nQuestion 2\nDownload the csv data with pandas using the code below\n```\nimport pandas as pd\ndeaths_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')\n```",
"import pandas as pd\n\ndeaths_df = pd.read_csv('https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/time_series_covid19_deaths_global.csv')",
"Question 3\nDisplay first 5 rows of the loaded data",
"# Question 3\n# Display first 5 rows\n# of the loaded data\ndeaths_df.head(5)",
"...and a short summary of the data:\n\nThe resultant table comes from the CSV served by the URI.\nThe data set consists of a list of geographical locations with GPS coordinates for each.\nThe data spans 839 days, from 22nd Jan 2020 to 9th May 2022, about 2 years and 4 months.\n\nQuestion 4\nGet daily cases worldwide (hint: summarise daily death cases over all countries)",
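The "summarising over all countries" hint can be tried on a toy frame first. The column names below mirror `deaths_df`, but the numbers are made up:

```python
import pandas as pd

# hypothetical mini-frame with the same column layout as deaths_df
toy = pd.DataFrame({
    "Province/State": [None, None],
    "Country/Region": ["A", "B"],
    "Lat": [0.0, 1.0],
    "Long": [0.0, 1.0],
    "1/22/20": [1, 2],
    "1/23/20": [3, 4],
})

# drop the non-numeric columns, then sum each date column over all rows
worldwide = toy.drop(["Province/State", "Country/Region", "Lat", "Long"], axis=1).sum()
print(worldwide["1/23/20"])  # 7
```

The same `drop(...).sum()` pattern applied to the full `deaths_df` yields one worldwide cumulative total per date column.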
"# Yank out 3 unnecessary columns,\n# i.e. 'Lat', 'Long', 'Province/State',\n# leaving 'Country/Region' & the deaths-per-day\n# columns\ndeath_cases_worldwide = deaths_df.drop(['Lat','Long','Province/State'], axis=1)\ndeath_cases_worldwide",
"Question 5\nGet daily increasement of death cases via defining a function (hint: use the death cases of today minus the death cases of yesterday from the data obtained in task 4).",
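Pandas' built-in `diff` already implements "today minus yesterday". A toy sketch (made-up cumulative totals, not the real data) shows the expected behaviour:

```python
import pandas as pd

# hypothetical cumulative worldwide totals for four consecutive days
totals = pd.Series([3, 7, 12, 20], index=["5/1/22", "5/2/22", "5/3/22", "5/4/22"])

# today's cumulative count minus yesterday's gives the daily increase;
# the first day has no "yesterday", so keep its cumulative value
daily_increase = totals.diff().fillna(totals.iloc[0])
print(list(daily_increase))  # [3.0, 4.0, 5.0, 8.0]
```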
"death_cases_worldwide_ = death_cases_worldwide.head(5)\n\ndef calc_increment(x):\n current_col_idx = death_cases_worldwide_.columns.get_loc(x.name)\n if current_col_idx > 1:\n prev_column_idx = current_col_idx - 1;\n prev_column = death_cases_worldwide_.iloc[:, (current_col_idx-1):]\n death_cases_of_today = x.iloc[0]\n death_cases_of_yesterday = prev_column.iloc[0].iloc[0]\n increment = int(death_cases_of_today) - int(death_cases_of_yesterday)\n x.iloc[0] = increment\n \ndeath_cases_worldwide_.apply( calc_increment, axis=0 )\n# print(death_cases_worldwide_)\n",
"Question 6\nVisualize the data obtained in task 4 with library matplotlib",
"# Import library\nimport matplotlib.pyplot as plt\n# Import numpy\nimport numpy as np\n\n#Specify X axis to be that of Country/Region\ndeath_cases_worldwide.plot(x='Country/Region')\n#Finally Show the graph\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NAMD/corpushash
|
demos/benchmark.ipynb
|
lgpl-3.0
|
[
"benchmarking the corpushash library\ncorpushash is a simple library that aims to make the natural language processing of sensitive documents easier. the library enables performing common NLP tasks on sensitive documents without disclosing their contents. This is done by hashing every token in the corpus along with a salt (to prevent dictionary attacks).\nits workflow is as simple as having the sensitive corpora as a python nested list (or generator) whose elements are themselves (nested) lists of strings. after the hashing is done, NLP can be carried out by a third party, and when the results are in they can be decoded by a dictionary that maps hashes to the original strings. so that makes:\n```python\nimport corpushash as ch\nhashed_corpus = ch.CorpusHash(mycorpus_as_a_nested_list, '/home/sensitive-corpus')\n\"42 documents hashed and saved to '/home/sensitive-corpus/public/$(timestamp)'\"\n```\n**NLP is done, and `results` are in**:\n```python\nfor token in results:\n    print(token, \">\", hashed_corpus.decode_dictionary[token])\n\"7)JBMGG?sGu+>%Js~dG=%c1Qn1HpAU{jM-~Buu7?\" > \"gutenberg\"\n```\nThe library requires as input:\n\n\na tokenized corpus as a nested list, whose elements are themselves nested lists of the tokens of each document in the corpus\neach list corresponds to a document structure: its chapters, paragraphs, sentences. you decide how the nested list is to be created or structured, as long as the input is a nested list with strings as their bottom-most elements.\n\n\ncorpus_path, a path to a directory where the output files are to be stored\n\n\nThe output includes:\n\n\na .json file for every document in the corpus provided, named sequentially as non-negative integers, e.g., the first document being 0.json, stored in corpus_path/public/$(timestamp-of-hash)/\n\n\ntwo pickled dictionaries stored in corpus_path/private. they are used to decode the .json files or the NLP results\n\n\nloading libraries..",
"import os\nimport sys\nimport shutil\nimport copy\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom timeit import default_timer as timer\nimport nltk\nfrom nltk.corpus import words\nfrom nltk.corpus import reuters\nimport corpushash as ch",
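corpushash's own hashers are more elaborate than this (among other things, salts are stored so decoding works across runs), but the core operation being benchmarked — hash every token with a salt and keep a decode dictionary — can be sketched as follows; the function name and fixed salt are illustrative only:

```python
import hashlib

def hash_corpus(corpus, salt):
    # hash every token together with a salt (to resist dictionary attacks)
    # and keep a dictionary mapping each digest back to the original token
    decode = {}
    hashed = []
    for document in corpus:
        hashed_doc = []
        for token in document:
            digest = hashlib.sha256(salt + token.encode("utf-8")).hexdigest()
            decode[digest] = token
            hashed_doc.append(digest)
        hashed.append(hashed_doc)
    return hashed, decode

hashed, decode = hash_corpus([["gutenberg", "press"]], salt=b"fixed-salt")
```

Repeated tokens hash to the same digest, which is why a corpus with many repeats (like abc) can be cheaper per token than a pure wordlist: the decode dictionary lookup replaces a fresh hash.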
"disabling logging information so that we are not swamped by it when hashing several corpora in a row:",
"import logging\n\nlogging.getLogger('corpushash.hashers').setLevel(logging.WARNING)\n\ncorpus_path = 'bench_test'\n\nplt.rcParams['figure.figsize'] = (30, 12)",
"benchmark corpus\nthe benchmark corpus is a wordlist, so it represents a worst-case scenario where all tokens have to be hashed, instead of some of them being repeated and merely looked up.\nis hashing really slower than looking up words in a dictionary?\nwe'll test the wordlist against the abc corpus, which is 3 times bigger, but which has only 31885 unique words.",
"# if corpus is missing, comment here and nltk import (above)\nnltk.download('words')\nnltk.download('abc')\nnltk.download('reuters')\n\nprint('|wordlist corpus|\\ntotal nr of tokens & nr of unique tokens:', len(words.words()))\n\nprint('|abc corpus|\\ntotal nr of tokens:', len(nltk.corpus.abc.words()), '\\nunique tokens:', len(set(nltk.corpus.abc.words())))",
"wordlist corpus",
"corpus = [words.words()]\n\n%time wordlistcorpus = ch.CorpusHash(corpus, corpus_path)\n\nprint('decode dictionary size (bytes)\\n', os.path.getsize(os.path.join(corpus_path, 'private', 'decode_dictionary.json')))\n\nprint('hashed corpus size (bytes)\\n', os.path.getsize(os.path.join(wordlistcorpus.public_path, '{}.json'.format(wordlistcorpus.corpus_size-1))))",
"removing folder to have clean slate for next corpus hash:",
"shutil.rmtree(corpus_path)",
"abc corpus",
"corpus = [list(nltk.corpus.abc.words())]\nlen(corpus[0])\n\n%time abccorpus = ch.CorpusHash(corpus, corpus_path)\n\nprint('decode dictionary size (bytes)\\n', os.path.getsize(os.path.join(corpus_path, 'private', 'decode_dictionary.json')))\n\nprint('hashed corpus size (bytes)\\n', os.path.getsize(os.path.join(abccorpus.public_path, '{}.json'.format(abccorpus.corpus_size-1))))\n\nshutil.rmtree(corpus_path)",
"determining library complexity: three main variables\nthree main variables should determine the time taken to hash a corpus: \n\n\nthe documents' sizes;\n\n\nthe number of documents;\n\n\nand the documents' nesting level\na corpus of zero-depth is a list of tokens, while a higher-depth corpus is a nested list of tokens, where the nesting represents document structure, such as sentences and paragraphs.\n\n\nzero-depth corpus, one document, variable nr of tokens",
"corpus_size = 10\nstep = 100\niterations = 2330\nmax_corpus_size = corpus_size + step*iterations\nloops = 1\niterations_time = np.zeros((iterations))\n\niterations_time.shape\n\nmax_corpus_size\n\n%%time\ni = 0\nfor size in range(corpus_size, max_corpus_size, step):\n corpus = [words.words()[:size]]\n startt = timer()\n ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[i] = endt - startt\n shutil.rmtree(corpus_path)\n i += 1\n print(i)\n\nfile_name = '0_d_tokens.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)",
"iterations_time is a vector with each element being the time it took to hash the corpus of a given size.",
"x = np.arange(corpus_size, max_corpus_size, step)\ny = iterations_time / iterations_time[0]\nplt.plot(x, y)\nplt.ylabel('time (normalized)')\nplt.xlabel('corpus size (in nr of tokens)')\nplt.title('time to hash list of tokens')\nplt.show()\n\nz = np.polyfit(x, y, 1)\np = np.poly1d(z)\nprint(p)\n\nxp = np.linspace(corpus_size, max_corpus_size, 10000) # creating evenly-spaced points to evaluate polynomial\nplt.plot(x, y, '.', xp, p(xp), '-')\nplt.ylabel('time (normalized)')\nplt.xlabel('corpus size (in nr of tokens)')\nplt.title('time to hash list of tokens')\nplt.show()",
"final benchmark\nvariable nr of documents, variable corpus size, three nesting levels\nnesting level 0 (list of tokens)\nusing reuters corpus and varying corpus size and nr of docs independently:",
"corpus_size = 10\nstep = 5\niterations = 260\nmax_corpus_size = corpus_size + step*iterations\niterations_time = np.zeros((iterations*iterations))\n\niterations_time.shape\n\nmax_corpus_size\n\ndocument = list(nltk.corpus.reuters.words())\n\ncorpus_length = len(document)\ncorpus_length\n\n%%time\ni = 0\nk = 0\nfor size in range(corpus_size, max_corpus_size, step):\n for nrdocs in range(iterations):\n doc_length = size\n corpus = []\n for doc in range(nrdocs+1):\n begin = doc*doc_length\n end = (1 + doc)*doc_length\n corpus.append(document[begin:end])\n startt = timer()\n a = ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[i] = endt - startt\n shutil.rmtree(corpus_path)\n i += 1\n print(k)\n k += 1\n\nfile_name = '0_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)",
"plotting",
"from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nxs = np.array(list(range(iterations))*iterations) # nr of docs\n\ny = []\nfor i in range(corpus_size, max_corpus_size, step):\n    y += [i]*iterations\nys = np.array(y) # nr of tokens in each document\n\niterations_time[0]\n\nzs = iterations_time / iterations_time[0]\n\nax.scatter(xs, ys, zs)\n\nax.set_xlabel('nr of documents')\nax.set_ylabel('nr of tokens per document')\nax.set_zlabel('time to hash (normalized)')\n\nplt.show()",
"nesting level 1\nusing reuters and varying corpus size and nr of docs independently",
"corpus_size = 10\nstep = 5\niterations = 260\nmax_corpus_size = corpus_size + step*iterations\niterations_time = np.zeros((iterations*iterations))\n\niterations_time.shape\n\nmax_corpus_size",
"using a worst-case scenario where every other word is in a nested list and the others are not\nnormal nesting:",
"normal_corpus = ch.text_split(reuters.raw())\n\n%time ch.CorpusHash([normal_corpus], 'split_test')\n\ndocument = []\nfor ix, word in enumerate(reuters.words()):\n if ix % 2:\n document.append([word])\n else:\n document.append(word)\ndocument[:10]\n\nshutil.rmtree('split_test')\n\n%time ch.CorpusHash([document], 'split_test')\n\ncorpus_length = len(document)\ncorpus_length\n\n%%time\ni = 0\nk = 0\nfor size in range(corpus_size, max_corpus_size, step):\n for nrdocs in range(iterations):\n doc_length = size\n corpus = []\n for doc in range(nrdocs+1):\n begin = doc*doc_length\n end = (1 + doc)*doc_length\n corpus.append(document[begin:end])\n startt = timer()\n a = ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[i] = endt - startt\n shutil.rmtree(corpus_path)\n i += 1\n print(k)\n k += 1\n\nfile_name = '1_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)",
"plotting",
"from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nxs = np.array(list(range(iterations))*iterations) # nr of docs\n\ny = []\nfor i in range(corpus_size, max_corpus_size, step):\n y += [i]*iterations\nys = np.array(y) # nr of tokens in each document\n\niterations_time[0]\n\nzs = iterations_time / iterations_time[0]\n\nax.scatter(xs, ys, zs)\n\nax.set_xlabel('nr of documents')\nax.set_ylabel('nr of tokens per document')\nax.set_zlabel('time to hash (normalized)')\n\nplt.show()",
"nesting level 2",
"corpus_size = 10\nstep = 5\niterations = 260\nmax_corpus_size = corpus_size + step*iterations\niterations_time = np.zeros((iterations*iterations))\n\niterations_time.shape\n\nmax_corpus_size\n\ndocument = []\nfor ix, word in enumerate(nltk.corpus.reuters.words()):\n if ix % 3 == 0:\n document.append(word)\n elif ix % 3 == 2:\n document.append([word])\n else:\n document.append([[word]])\ndocument[:10]\n\ncorpus_length = len(document)\ncorpus_length\n\n%%time\ni = 0\nk = 0\nfor size in range(corpus_size, max_corpus_size, step):\n for nrdocs in range(iterations):\n doc_length = size\n corpus = []\n for doc in range(nrdocs+1):\n begin = doc*doc_length\n end = (1 + doc)*doc_length\n corpus.append(document[begin:end])\n startt = timer()\n a = ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[i] = endt - startt\n shutil.rmtree(corpus_path)\n i += 1\n print(k)\n k += 1\nfile_name = '2_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)",
"plotting",
"from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nxs = np.array(list(range(iterations))*iterations) # nr of docs\n\ny = []\nfor i in range(corpus_size, max_corpus_size, step):\n y += [i]*iterations\nys = np.array(y) # nr of tokens in each document\n\niterations_time[0]\n\nzs = iterations_time / iterations_time[0]\n\nax.scatter(xs, ys, zs)\n\nax.set_xlabel('nr of documents')\nax.set_ylabel('nr of tokens per document')\nax.set_zlabel('time to hash (normalized)')\n\nplt.show()",
"plotting all three together",
"file_name = '0_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n zero = np.load(file_name)\n\nfile_name = '1_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n um = np.load(file_name)\n\nfile_name = '2_d_tokens_documents.npy'\nif os.path.isfile(file_name):\n dois = np.load(file_name)\n\nfrom mpl_toolkits.mplot3d import Axes3D\n\nxs = np.array(list(range(iterations))*iterations) # nr of docs\n\ny = []\nfor i in range(corpus_size, max_corpus_size, step):\n y += [i]*iterations\nys = np.array(y) # nr of tokens in each document\n\nz0 = zero / zero[0]\n\nz1 = um / zero[0]\n\nz2 = dois / zero[0]\n\nfig = plt.figure(figsize=plt.figaspect(3))\nax = fig.add_subplot(3, 1, 1, projection='3d')\nax.scatter(xs, ys, z0, c='r')\nplt.title('nesting level 0')\n\nax = fig.add_subplot(3, 1, 2, projection='3d')\nax.scatter(xs, ys, z1, c='g')\nplt.title('nesting level 1')\n\nax = fig.add_subplot(3, 1, 3, projection='3d')\nax.scatter(xs, ys, z2, c='b')\nplt.title('nesting level 2')\n\nax.set_xlabel('nr of documents')\nax.set_ylabel('nr of tokens per document')\nax.set_zlabel('time to hash (normalized)')\n\nplt.show()",
"paper benchmark\nin this benchmark we'll be using the Brazilian Portuguese and English dictionaries, for a corpus of moderate size.",
"with open('brazilian.txt', 'r') as f:\n portuguese = f.read().split()\nenglish = words.words()\n\ndocument = english + portuguese\ncorpus_length = len(document)\ncorpus_length\n\ncorpus_size = 10\nstep = 100\niterations = 74\nmax_corpus_size = corpus_size + step*iterations\nloops = 3\niterations_time = np.zeros((loops, iterations))\n\n%%time\ni = 0\nfor size in range(corpus_size, max_corpus_size, step):\n corpus = []\n doc_length = size\n for doc in range(size):\n begin = doc*doc_length\n end = (1 + doc)*doc_length\n corpus.append(document[begin:end])\n for loop in range(loops):\n startt = timer()\n a = ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[loop, i] = endt - startt\n shutil.rmtree(corpus_path)\n print(i)\n i += 1",
"saving data to disk:",
"file_name = 'paper-iterations-time.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)",
"plotting:",
"from mpl_toolkits.mplot3d import Axes3D\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nxs = np.arange(1, iterations+1)\nys = np.arange(corpus_size, max_corpus_size, step)\nzs = np.average(iterations_time, axis=0)\nnormalized_zs = np.divide(zs, zs[0])\n\nax.scatter(xs, ys, zs)\n\nax.set_xlabel('documents')\nax.set_ylabel('tokens per document')\nax.set_zlabel('time to hash (normalized)')\n\nplt.show()\n\nz = np.polyfit(ys*xs, normalized_zs, 1)\np = np.poly1d(z)\nprint(p)\n\nfrom PIL import Image\nfrom io import BytesIO",
"saving plot as .tiff, for the paper:",
"fig = plt.figure(figsize=(20, 12), dpi=300)\n\nxp = np.linspace(0, 600000, iterations, ) # creating evenly-spaced points to evaluate polynomial\nplt.plot(xs*ys, normalized_zs, '.', xp, p(xp), 'r-')\nplt.ylabel('time', fontsize=22)\nplt.xlabel('corpus size (documents $\\cdot$ words)', fontsize=22)\nplt.text(300000, 3000, r'$y(x) = 0.01821 x + 172.5$', fontsize=20, color='r')\n#plt.title('time to hash corpus')\nplt.tick_params(axis='both', which='major', labelsize=20)\n\n\npng1 = BytesIO()\nfig.savefig(png1, format='png')\n\n# (2) load this image into PIL\npng2 = Image.open(png1)\n\n# (3) save as TIFF\npng2.save('complexity.tiff')\npng1.close()",
"benchmarking different nesting levels",
"document2 = []\nfor ix, word in enumerate(document):\n if ix % 2:\n document2.append([word])\n else:\n document2.append(word)\n\ndocument3 = []\nfor ix, word in enumerate(document):\n if ix % 3 == 0:\n document3.append(word)\n elif ix % 3 == 2:\n document3.append([word])\n else:\n document3.append([[word]])\n\ndocs = [document, document2, document3]\n\ncorpus_length = len(document)\ncorpus_size = 10\nstep = 10000\niterations = 54\nmax_corpus_size = corpus_size + step*iterations\niterations_time = np.zeros((3, iterations))\n\nmax_corpus_size\n\n%%time\ni = 0\nfor size in range(corpus_size, max_corpus_size, step):\n for ix, doc in enumerate(docs):\n corpus = [doc[:size]]\n startt = timer()\n a = ch.CorpusHash(corpus, corpus_path)\n endt = timer()\n iterations_time[ix, i] = endt - startt\n shutil.rmtree(corpus_path)\n print(i)\n i += 1\n\nfile_name = 'nest-iterations-time.npy'\nif os.path.isfile(file_name):\n iterations_time = np.load(file_name)\nelse:\n np.save(file_name, iterations_time)\n\niterations_time\n\nto_2_1 = np.divide(iterations_time[1], iterations_time[0])\n\nprint(to_2_1, np.mean(to_2_1), np.std(to_2_1))\n\nto_3_1 = np.divide(iterations_time[2], iterations_time[0])\n\nprint(to_3_1, np.mean(to_3_1), np.std(to_3_1))\n\nto_3_2 = np.divide(iterations_time[2], iterations_time[1])\n\nprint(to_3_2, np.mean(to_3_2), np.std(to_3_2))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
srcole/qwm
|
burrito/Burrito_Coordinates.ipynb
|
mit
|
[
"San Diego Burrito Analytics: Coordinates\nDetermine the longitude and latitude for each restaurant based on its address\nDefault imports",
"%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nimport seaborn as sns\nsns.set_style(\"white\")",
"Load data",
"import util2\ndf, dfRestaurants, dfIngredients = util2.load_burritos()\nN = df.shape[0]",
"Process",
"dfRestaurants = dfRestaurants.reset_index(drop=True)\n\ndfRestaurants",
"Process Cali burrito data: Averages for each restaurant",
"dfAvg = df.groupby('Location').agg({'Cost': np.mean,'Volume': np.mean,'Hunger': np.mean,\n 'Tortilla': np.mean,'Temp': np.mean,'Meat': np.mean,\n 'Fillings': np.mean,'Meat:filling': np.mean,'Uniformity': np.mean,\n 'Salsa': np.mean,'Synergy': np.mean,'Wrap': np.mean,\n 'overall': np.mean, 'Location': np.size})\ndfAvg.rename(columns={'Location': 'N'}, inplace=True)\ndfAvg['Location'] = list(dfAvg.index)\n\n# Calculate latitude and longitude for each restaurant\nimport geocoder\naddresses = dfRestaurants['Address'] + ', ' + dfRestaurants['Neighborhood'] + ', San Diego, CA' # alternative: dfRestaurants['Address'] + ', San Diego, CA'\nlats = np.zeros(len(addresses))\nlongs = np.zeros(len(addresses))\nfor i, address in enumerate(addresses):\n g = geocoder.google(address)\n Ntries = 1\n while g.latlng == []:\n g = geocoder.google(address)\n Ntries += 1\n print('try again: ' + address)\n if Ntries >= 5:\n if 'Marshall College' in address:\n address = '9500 Gilman Drive, La Jolla, CA'\n g = geocoder.google(address)\n Ntries = 1\n while g.latlng == []:\n g = geocoder.google(address)\n Ntries += 1\n print('try again: ' + address)\n if Ntries >= 5:\n raise ValueError('Address not found: ' + address)\n else:\n raise ValueError('Address not found: ' + address)\n lats[i], longs[i] = g.latlng\n\n# Check for nonsense lats and longs\nif sum(np.logical_or(lats > 34, lats < 32)):\n raise ValueError('Address not in San Diego')\nif sum(np.logical_or(longs < -118, longs > -117)):\n raise ValueError('Address not in San Diego')\n\n# Incorporate lats and longs into restaurant data\ndfRestaurants['Latitude'] = lats\ndfRestaurants['Longitude'] = longs\n# Merge restaurant data with burrito data\ndfTableau = pd.merge(dfRestaurants, dfAvg, on='Location')\n\ndfTableau.head()\n\ndfTableau.to_csv('df_burrito_tableau.csv')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
childresslab/MicrocavityExp1
|
tools/Finesse_Measurement.ipynb
|
gpl-3.0
|
[
"import numpy as np \nimport matplotlib.pyplot as plt\nimport glob\n# Loads nicard and scope\nmanager.startModule('logic','cavitylogic')",
"Initialize setup\n\nAmplifier is set to -15 V and +85 V\nCalibrate the strain gauge to zero with the Kinesis software\nPull the fiber back\nSet the nicard to -3.75",
"cavitylogic._ni.cavity_set_voltage(0.0)\n\ncavitylogic._current_filepath = r'C:\\BittorrentSyncDrive\\Personal - Rasmus\\Rasmus notes\\Measurements\\171001_position15_2'\n\ncavitylogic.last_sweep = None\ncavitylogic.get_nth_full_sweep(sweep_number=1, save=True)",
"Move close with fiber",
"cavitylogic.ramp_popt = cavitylogic._fit_ramp(xdata=cavitylogic.time_trim[::9], ydata=cavitylogic.volts_trim[cavitylogic.ramp_channel,::9])\nModes = cavitylogic._ni.sweep_function(cavitylogic.RampUp_time[cavitylogic.first_corrected_resonances], *cavitylogic.ramp_popt)\n\ncavitylogic.current_mode_number = len(cavitylogic.first_corrected_resonances) - 2\n\nlen(cavitylogic.first_corrected_resonances)\n\ncavitylogic.linewidth_measurement(Modes, target_mode=cavitylogic.current_mode_number, repeat=10, freq=40)\n\nhigh_mode = len(cavitylogic.first_corrected_resonances) - 2\nlow_mode = 0\n\nfor i in range(15):\n cavitylogic.current_mode_number -= 1\n ret_val = cavitylogic.get_nth_full_sweep(sweep_number=2+i)\n target_mode = cavitylogic.get_target_mode(cavitylogic.current_resonances, low_mode=low_mode, high_mode=high_mode, plot=True)\n if target_mode is None:\n print('Moved more than 5 modes')\n cavitylogic.ramp_popt = cavitylogic._fit_ramp(xdata=cavitylogic.time_trim[::9], ydata=cavitylogic.volts_trim[cavitylogic.ramp_channel,::9])\n Modes = cavitylogic._ni.sweep_function(cavitylogic.RampUp_time[cavitylogic.current_resonances], *cavitylogic.ramp_popt)\n cavitylogic.linewidth_measurement(Modes, target_mode=target_mode, repeat=10, freq=40)\n\ncavitylogic.current_mode_number\n\ntarget_mode\n\ncavitylogic.mode_shift_list"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
Nikea/scikit-xray-examples
|
demos/spatial_correlation/Spatial_Correlations.ipynb
|
bsd-3-clause
|
[
"Running Spatial Correlations + options\nThe goal of this notebook is to show the API of the spatial correlations and the options available.\nWith this code, it is possible to run the spatial correlations on masked and unmasked data.\nIt is also possible to apply a correction, \n called symmetric averaging, which is a \n derivative of a method by Schätzel (1988):\nSchätzel, Klaus, Martin Drewel, and Sven Stimac. \"Photon correlation measurements at large lag times: improving statistical accuracy.\" Journal of Modern Optics 35.4 (1988): 711-718.\nTechnique adapted to arbitrary masks by Julien Lhermitte, Jan 2017\nThe correlation function in 1 dimension is:\n$$CC = \\frac{1}{N(k)} \\sum \\limits_j^{N_t} I_j I_{j+k} M_j M_{j+k}$$\nWe may normalize it by its average intensity in two different ways:\n1. Naive averaging:\nThe normalized correlation function is simply divided by the square of the average intensity:\n$$cc_{reg} = \\frac{CC}{\\bar{I}^2}$$\nwhere:\n$$\\bar{I} = \\frac{1}{N(k)} \\sum \\limits_{j=1}^{N_t} I_j$$\nis the average intensity,\nand where $$N(k) = \\sum \\limits_{j= 1}^{N_t}M_j M_{j+k}$$\n(Note that in the limit of no mask, $N(k) = N_t$ as it should; the mask has the effect of inducing a $k$ dependence on the effective ''$N_t$''.)\n2. Symmetric Averaging:\nFor symmetric averaging, we define two new averages, $I_p$ and $I_f$ (I 'past' and I 'future'):\n$$I_p = \\frac{1}{N(k)} \\sum \\limits_j I_j M_j M_{j+k}$$\n$$I_f = \\frac{1}{N(k)} \\sum \\limits_l I_{l+k} M_l M_{l+k}$$\nWe define symmetric averaging as:\n$$cc_{sym} = \\frac{CC}{\\bar{I}_p \\bar{I}_f}$$\nSchätzel shows this averaging is superior for the case of a simple ''mask'': a 1D time series (data outside of the time range is ''masked'').\nImport some essential libraries/code",
"%matplotlib inline\nimport numpy as np\n#from pyCXD.tools.CrossCorrelator import CrossCorrelator\nfrom skbeam.core.correlation import CrossCorrelator\nimport matplotlib.pyplot as plt\nfrom skbeam.core.roi import ring_edges, segmented_rings\n\n\n# for some convolutions, used to smooth images (make spatially correlated images)\n# avoid more dependencies for this example\ndef convol2d(a,b=None,axes=(-2,-1)):\n ''' convolve a and b along axes axes\n if axes 1 element, then convolves along that dimension\n only works with dimensions 1 or 2 (1 or 2 axes)\n '''\n from numpy.fft import fft2, ifft2\n if(b is None):\n b = a\n return ifft2(fft2(a,axes=axes)*np.conj(fft2(b,axes=axes)),axes=axes).real\n\ndef pos2extent(pos):\n # convenience routine to turn positions to extent\n # left right bottom top. For 2D data\n extent = [pos[1][0], pos[1][-1], pos[0][-1], pos[0][0]]\n return extent",
"1. Try on 1D data",
"# test 1D data\nsigma = .1\nNpoints = 1000\nx = np.linspace(-10, 10, Npoints)\ny = convol2d(np.random.random(Npoints)*10, np.exp(-x**2/(2*sigma**2)),axes=(-1,))\n\nmask_1D = np.ones_like(y)\nmask_1D[10:20] = 0\nmask_1D[60:90] = 0\nmask_1D[111:137] = 0\nmask_1D[211:237] = 0\nmask_1D[411:537] = 0\n\nmask_1D *= mask_1D[::-1]\n\ny_masked = y*mask_1D\n\ncc1D = CrossCorrelator(mask_1D.shape)\ncc1D_symavg = CrossCorrelator(mask_1D.shape,normalization='symavg')\ncc1D_masked = CrossCorrelator(mask_1D.shape,mask=mask_1D)\ncc1D_masked_symavg = CrossCorrelator(mask_1D.shape, mask=mask_1D,normalization='symavg')\n\nycorr_1D = cc1D(y)\nycorr_1D_masked = cc1D_masked(y*mask_1D)\nycorr_1D_symavg = cc1D_symavg(y)\nycorr_1D_masked_symavg = cc1D_masked_symavg(y*mask_1D)\n\n# the x axis\nycorr_1D_x = cc1D.positions\nycorr_1D_masked_x = cc1D_masked.positions\nycorr_1D_symavg_x = cc1D_symavg.positions\nycorr_1D_masked_symavg_x = cc1D_masked_symavg.positions\n\n\nycorr_1D[0].shape",
"Plot the data",
"# plot 1D Data\nplt.figure(0);plt.clf();\nplt.plot(x,y)\nplt.plot(x,y*mask_1D)\nplt.xlabel(\"position\")\nplt.ylabel(\"intensity (arb. units)\")",
"Correlations for different cases",
"plt.figure(1);plt.clf();\nplt.plot(ycorr_1D_x, ycorr_1D,color='k',label='regular')\nplt.plot(ycorr_1D_masked_x, ycorr_1D_masked,color='r',label='masked')\nplt.plot(ycorr_1D_symavg_x, ycorr_1D_symavg,color='g',label='symavg')\nplt.plot(ycorr_1D_masked_symavg_x, ycorr_1D_masked_symavg,color='b',label='masked + symavg')\nplt.ylim(.9,1.2)\nplt.xlabel(\"shift ($\\Delta x$)\")\nplt.ylabel(\"Correlation\")\nplt.legend()",
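The `CrossCorrelator` class handles the normalization internally; as a cross-check, here is a minimal direct numpy implementation of the two normalizations defined at the top, for a single lag of a 1D signal. This is a sketch, not part of skbeam: the function name `masked_corr_1d` is hypothetical, and it interprets $\bar{I}$ as the average intensity over the unmasked points.

```python
import numpy as np

def masked_corr_1d(I, M, k):
    """Correlation at lag k of signal I with binary mask M,
    returned with both naive and symmetric-averaged normalization."""
    I, M = np.asarray(I, float), np.asarray(M, float)
    # Aligned "past" (index j) and "future" (index j+k) views:
    Ij, Mj = (I[:-k], M[:-k]) if k else (I, M)
    Ijk, Mjk = (I[k:], M[k:]) if k else (I, M)
    Nk = np.sum(Mj * Mjk)                      # N(k): number of valid pairs
    CC = np.sum(Ij * Ijk * Mj * Mjk) / Nk      # unnormalized correlation
    Ibar = np.sum(I * M) / np.sum(M)           # masked average intensity
    Ip = np.sum(Ij * Mj * Mjk) / Nk            # "past" average I_p
    If = np.sum(Ijk * Mj * Mjk) / Nk           # "future" average I_f
    return CC / Ibar**2, CC / (Ip * If)        # cc_reg, cc_sym
```

For a constant, fully unmasked signal both normalizations return exactly 1, which is a quick sanity check on the bookkeeping.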
"2. Try for 2D data\n(In this case, even the unmasked data shows a strong effect: data with no mask still contains an implicit ''mask'', since at larger correlation lengths we are correlating fewer points. Symmetric averaging excels at overcoming these effects here.)",
"# test 2D data\nNpoints2 = 100\nx2 = np.linspace(-10, 10, Npoints2)\nX, Y = np.meshgrid(x2,x2)\nZ = np.random.random((Npoints2,Npoints2))\nZ = convol2d(Z, np.exp(-(X**2 + Y**2)/2./sigma**2))\n\n\nmask_2D = np.ones_like(Z)\nmask_2D[10:20, 10:20] = 0\nmask_2D[73:91, 45:67] = 0\nmask_2D[1:20, 90:] = 0\n\ncc2D = CrossCorrelator(mask_2D.shape)\ncc2D_symavg = CrossCorrelator(mask_2D.shape,normalization='symavg')\ncc2D_masked = CrossCorrelator(mask_2D.shape,mask=mask_2D)\ncc2D_masked_symavg = CrossCorrelator(mask_2D.shape, mask=mask_2D,normalization='symavg')\n\nycorr_2D = cc2D(Z)\nycorr_2D_masked = cc2D_masked(Z*mask_2D)\nycorr_2D_symavg = cc2D_symavg(Z)\nycorr_2D_masked_symavg = cc2D_masked_symavg(Z*mask_2D)\n\nycorr_2D_pos = cc2D.positions\nycorr_2D_masked_pos = cc2D_masked.positions\nycorr_2D_symavg_pos = cc2D_symavg.positions\nycorr_2D_masked_symavg_pos = cc2D_masked_symavg.positions",
"plot 2D data",
"plt.figure(2);plt.clf();\nplt.subplot(2,2,1)\nplt.title(\"not masked\")\nplt.imshow(Z)\n\nplt.subplot(2,2,2)\nplt.title(\"masked\")\nplt.imshow(Z*mask_2D)",
"Correlations (2D)",
"vmin=.95; vmax=1.03\n\n\nplt.figure(3);plt.clf();\nplt.subplot(2,2,1)\nplt.title(\"regular\")\nplt.imshow(ycorr_2D,extent = pos2extent(ycorr_2D_pos))\n#plt.axhline(ycorr_2D_masked.shape[0]//2)\nplt.clim(vmin,vmax)\nplt.xlim(-30,30)\nplt.ylim(-30,30)\nplt.subplot(2,2,2)\nplt.title(\"masked\")\nplt.imshow(ycorr_2D_masked, extent = pos2extent(ycorr_2D_masked_pos))\n#plt.axhline(ycorr_2D_masked.shape[0]//2)\nplt.clim(vmin,vmax)\nplt.xlim(-30,30)\nplt.ylim(-30,30)\nplt.subplot(2,2,3)\nplt.title(\"symavg\")\nplt.imshow(ycorr_2D_symavg, extent = pos2extent(ycorr_2D_symavg_pos))\n#plt.axhline(ycorr_2D_masked.shape[0]//2)\nplt.clim(vmin,vmax)\nplt.xlim(-30,30)\nplt.ylim(-30,30)\n\nplt.subplot(2,2,4)\nplt.title(\"mask + symavg\")\nplt.imshow(ycorr_2D_masked_symavg, extent = pos2extent(ycorr_2D_masked_symavg_pos))\n#plt.axhline(ycorr_2D_masked.shape[0]//2)\nplt.clim(vmin,vmax)\nplt.xlim(-30,30)\nplt.ylim(-30,30)\n",
"Correlation Cross sections",
"plt.figure(4);plt.clf();\nplt.plot(cc2D.positions[1], ycorr_2D[cc2D.centers[0]],label=\"reg\")\nplt.plot(cc2D_masked.positions[1], ycorr_2D_masked[cc2D_masked.centers[0]],label=\"masked\")\nplt.plot(cc2D_symavg.positions[1], ycorr_2D_symavg[cc2D_symavg.centers[0]],label=\"symavg\")\nplt.plot(cc2D_masked_symavg.positions[1], ycorr_2D_masked_symavg[cc2D_masked_symavg.centers[0]],label=\"masked+symavg\")\nplt.ylim(0.8, 1.2)\nplt.xlabel(\"shift ($\\Delta x$)\")\nplt.ylabel(\"Correlation\")\nplt.legend()",
"3. Try with different id's in different regions of image",
"# make id numbers\nedges = ring_edges(1, 20, num_rings=2)\nsegments = 5\nx0, y0 = np.array(mask_2D.shape)//2\nmaskids = segmented_rings(edges,segments,(y0,x0),mask_2D.shape)\n\ncc2D_ids = CrossCorrelator(mask_2D.shape, mask=maskids)\ncc2D_ids_symavg = CrossCorrelator(mask_2D.shape,mask=maskids,normalization='symavg')\n\nycorr_ids_2D = cc2D_ids(Z)\nycorr_ids_2D_symavg = cc2D_ids_symavg(Z)",
"Plot mask",
"plt.figure(2);plt.clf();\nplt.imshow(maskids)\n",
"Plot correlations\nHere, we see that without symmetric averaging, the correlations quickly climb back to values higher than at the point of initial correlation, whereas with symmetric averaging the result looks more like what is expected: a nice Gaussian-like curve centered in the image. (The center of the image is zero shift.)",
"vmin=.95; vmax=1.1\n\nfig, axes = plt.subplots(2,4)\nax1 = axes[:len(axes)//2].ravel()\nax2 = axes[len(axes)//2:].ravel()\n\nfor i in range(len(ax1)):\n plt.sca(ax1[i])\n plt.title(\"regular\")\n plt.imshow(ycorr_ids_2D[i],extent=pos2extent(cc2D_ids.positions[i]))\n plt.clim(vmin,vmax)\n \n plt.sca(ax2[i])\n plt.title(\"sym avg\")\n plt.imshow(ycorr_ids_2D_symavg[i],extent=pos2extent(cc2D_ids_symavg.positions[i]))\n plt.clim(vmin,vmax)\n\n## Cross correlate image with itself shifted\nZ2 = np.roll(np.roll(Z, 4,axis=0),-5,axis=1)\n\nycorr_ids_2D_shift = cc2D_ids(Z, Z2)\ncenters_ids_2D_shift = cc2D_ids.centers\nycorr_ids_2D_symavg_shift = cc2D_ids_symavg(Z,Z2)\ncenters_ids_2D_symavg_shift = cc2D_ids_symavg.centers\n\nvmin=.95; vmax=1.05\n\nfig, axes = plt.subplots(2,4)\nax1 = axes[:len(axes)//2].ravel()\nax2 = axes[len(axes)//2:].ravel()\n\nfor i in range(len(ax1)):\n plt.sca(ax1[i])\n plt.title(\"regular\")\n plt.imshow(ycorr_ids_2D_shift[i], extent=pos2extent(cc2D_ids.positions[i]))\n yc, xc = centers_ids_2D_shift[i]\n plt.axvline(xc)\n plt.axhline(yc)\n plt.clim(vmin,vmax)\n \n plt.sca(ax2[i])\n plt.title(\"sym avg\")\n plt.imshow(ycorr_ids_2D_symavg_shift[i], extent=pos2extent(cc2D_ids_symavg.positions[i]))\n yc, xc = centers_ids_2D_symavg_shift[i]\n plt.axvline(xc)\n plt.axhline(yc)\n plt.clim(vmin,vmax)\n\nmask_test = (maskids == 1).astype(float)\nfrom scipy.signal import fftconvolve\nfrom numpy.fft import fft2, ifft2, fftshift\n\ncc = fftconvolve(mask_test, mask_test, mode='same')\ncc2 = fftshift(ifft2((np.conj(fft2(mask_test)))*fft2(mask_test)).real)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
w4zir/ml17s
|
.ipynb_checkpoints/python-tutorial-checkpoint.ipynb
|
mit
|
[
"CS228 Python Tutorial\nAdapted from the CS231n Python tutorial by Justin Johnson (http://cs231n.github.io/python-numpy-tutorial/).\nIntroduction\nPython is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.\nWe expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.\nSome of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).\nIn this tutorial, we will cover:\n\nBasic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes\nNumpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting\nMatplotlib: Plotting, Subplots, Images\n\nBasics of Python\nPython is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:",
"def quicksort(arr):\n if len(arr) <= 1:\n return arr\n pivot = arr[len(arr) // 2]\n left = [x for x in arr if x < pivot]\n middle = [x for x in arr if x == pivot]\n right = [x for x in arr if x > pivot]\n return quicksort(left) + middle + quicksort(right)\n\nprint (quicksort([3,6,8,10,1,2]))",
"Python versions\nThere are currently two different supported versions of Python, 2.7 and 3.6. Somewhat confusingly, Python 3.X introduced many backwards-incompatible changes to the language, so code written for 2.7 may not work under 3.6 and vice versa. For this class all code will use Python 2.7.\nYou can check your Python version at the command line by running python --version.\nBasic data types\nNumbers\nIntegers and floats work as you would expect from other languages:",
"x,y = 3,4\nprint (x,y)\n\n# type of variable\nprint(type(x))\n\nprint (x + 1) # Addition;\nprint (x - 1) # Subtraction;\nprint (x * 2) # Multiplication;\nprint (x ** 2) # Exponentiation;\n\nx += 1\nprint (x) # Prints \"4\"\nx *= 2\nprint (x) # Prints \"8\"\n\ny = 2.5\nprint (type(y)) # Prints \"<type 'float'>\"\nprint (y, y + 1, y * 2, y ** 2) # Prints \"2.5 3.5 5.0 6.25\"",
"Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.\nPython also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.\nBooleans\nPython implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):",
"t, f = True, False\nprint (type(t)) # Prints \"<type 'bool'>\"",
"Now let's look at the operations:",
"print (t and f) # Logical AND;\nprint (t or f) # Logical OR;\nprint (not t) # Logical NOT;\nprint (t != f) # Logical XOR;",
"Strings",
"hello = 'hello' # String literals can use single quotes\nworld = \"world\" # or double quotes; it does not matter.\nprint (hello, len(hello))\n\nhw = hello + ' ' + world # String concatenation\nprint (hw) # prints \"hello world\"\n\nhw12 = '%s %s %d' % (hello, world, 12) # sprintf style string formatting\nprint (hw12) # prints \"hello world 12\"",
"String objects have a bunch of useful methods; for example:",
"s = \"hello\"\nprint (s.capitalize()) # Capitalize a string; prints \"Hello\"\nprint (s.upper()) # Convert a string to uppercase; prints \"HELLO\"\nprint (s.rjust(7)) # Right-justify a string, padding with spaces; prints \" hello\"\nprint (s.center(7)) # Center a string, padding with spaces; prints \" hello \"\nprint (s.replace('l', '(ell)')) # Replace all instances of one substring with another;\n # prints \"he(ell)(ell)o\"\nprint (' world '.strip()) # Strip leading and trailing whitespace; prints \"world\"",
"You can find a list of all string methods in the documentation.\nContainers\nPython includes several built-in container types: lists, dictionaries, sets, and tuples.\nLists\nA list is the Python equivalent of an array, but is resizeable and can contain elements of different types:",
"xs = [3, 1, 2] # Create a list\nprint (xs, xs[2])\nprint (xs[-1]) # Negative indices count from the end of the list; prints \"2\"\n\nys = [[1,2,3],[2,3,4]]\nprint(ys)\nprint(ys[1][2])\n\nxs[2] = 'foo' # Lists can contain elements of different types\nprint (xs)\n\nxs.append('bar') # Add a new element to the end of the list\nprint (xs) \n\nx = xs.pop() # Remove and return the last element of the list\nprint (x, xs) ",
"As usual, you can find all the gory details about lists in the documentation.\nSlicing\nIn addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:",
"# nums = list(range(5)) # range is a built-in function that produces a sequence of integers\nnums = [2, 3, 5, 1, 2, 8]\nprint (nums) # Prints \"[2, 3, 5, 1, 2, 8]\"\nprint (nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints \"[5, 1]\"\nprint (nums[2:]) # Get a slice from index 2 to the end; prints \"[5, 1, 2, 8]\"\nprint (nums[:2]) # Get a slice from the start to index 2 (exclusive); prints \"[2, 3]\"\nprint (nums[:]) # Get a slice of the whole list; prints \"[2, 3, 5, 1, 2, 8]\"\nprint (nums[:-2]) # Slice indices can be negative; prints \"[2, 3, 5, 1]\"",
"Loops\nYou can loop over the elements of a list like this:",
"animals = ['cat', 'dog', 'monkey']\nfor animal in animals:\n print (animal)\n print(1)\n \nx =1\nprint(x)",
"If you want access to the index of each element within the body of a loop, use the built-in enumerate function:",
"animals = ['cat', 'dog', 'monkey']\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))",
"List comprehensions:\nWhen programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:",
"nums = [0, 1, 2, 3, 4]\nsquares = []\nfor x in nums:\n squares.append(x ** 2)\nprint (squares)",
"You can make this code simpler using a list comprehension:",
"nums = [0, 1, 2, 3, 4]\nsquares = [x ** 2 for x in nums]\nprint (squares)",
"List comprehensions can also contain conditions:",
"nums = [0, 1, 2, 3, 4]\neven_squares = [x ** 2 for x in nums if x % 2 == 0]\nprint (even_squares)",
"Dictionaries\nA dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:",
"d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data\nprint (d['cat']) # Get an entry from a dictionary; prints \"cute\"\nprint ('cat' in d) # Check if a dictionary has a given key; prints \"True\"\n\nd['fish'] = 'wet' # Set an entry in a dictionary\nprint (d['fish']) # Prints \"wet\"\n\n# print (d['monkey']) # Uncommenting this raises KeyError ('monkey' is not a key of d)\n # and stops the rest of the cell from running\n\nprint (d.get('monkey', 'N/A')) # Get an element with a default; prints \"N/A\"\nprint (d.get('fish', 'N/A')) # Get an element with a default; prints \"wet\"\n\ndel d['fish'] # Remove an element from a dictionary\nprint (d.get('fish', 'N/A')) # \"fish\" is no longer a key; prints \"N/A\"",
"You can find all you need to know about dictionaries in the documentation.\nIt is easy to iterate over the keys in a dictionary:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal in d:\n legs = d[animal]\n print ('A %s has %d legs' % (animal, legs))",
"If you want access to keys and their corresponding values, use the items method:",
"d = {'person': 2, 'cat': 4, 'spider': 8}\nfor animal, legs in d.items():\n print ('A %s has %d legs' % (animal, legs))",
"Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:",
"nums = [0, 1, 2, 3, 4]\neven_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}\nprint (even_num_to_square)",
"Sets\nA set is an unordered collection of distinct elements. As a simple example, consider the following:",
"animals = {'cat', 'dog'}\nprint ('cat' in animals) # Check if an element is in a set; prints \"True\"\nprint ('fish' in animals) # prints \"False\"\n\n\nanimals.add('fish') # Add an element to a set\nprint ('fish' in animals)\nprint (len(animals)) # Number of elements in a set; prints \"3\"\n\nanimals.add('cat') # Adding an element that is already in the set does nothing\nprint (animals)\nanimals.remove('cat') # Remove an element from a set\nprint (animals)",
"Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:",
"animals = {'cat', 'dog', 'fish'}\nfor idx, animal in enumerate(animals):\n print ('#%d: %s' % (idx + 1, animal))\n# Prints \"#1: fish\", \"#2: dog\", \"#3: cat\"",
"Functions\nPython functions are defined using the def keyword. For example:",
"def sign(x):\n if x > 0:\n return 'positive'\n elif x < 0:\n return 'negative'\n else:\n return 'zero'\n\nfor x in [-1, 0, 1]:\n print (sign(x))",
"We will often define functions to take optional keyword arguments, like this:",
"def hello(name, loud=False):\n if loud:\n print ('HELLO, %s' % name.upper())\n else:\n print ('Hello, %s!' % name)\n\nhello('Bob')\nhello('Fred', loud=True)",
"Classes\nThe syntax for defining classes in Python is straightforward:",
"class Greeter:\n\n # Constructor\n def __init__(self, name):\n self.name = name # Create an instance variable\n\n # Instance method\n def greet(self, loud=False):\n if loud:\n print ('HELLO, %s!' % self.name.upper())\n else:\n print ('Hello, %s' % self.name)\n\ng = Greeter('Fred') # Construct an instance of the Greeter class\ng.greet() # Call an instance method; prints \"Hello, Fred\"\ng.greet(loud=True) # Call an instance method; prints \"HELLO, FRED!\"",
"Numpy\nNumpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.\nTo use Numpy, we first need to import the numpy package:",
"import numpy as np",
"Arrays\nA numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.\nWe can initialize numpy arrays from nested Python lists, and access elements using square brackets:",
"a = np.array([1, 2, 3]) # Create a rank 1 array\nprint (type(a), a.shape, a[0], a[1], a[2])\na[0] = 5 # Change an element of the array\nprint (a) \n\nb = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array\nprint (b)\n\nprint (b.shape) \nprint (b[0, 0], b[0, 1], b[1, 0])",
"Numpy also provides many functions to create arrays:",
"a = np.zeros((2,2)) # Create an array of all zeros\nprint (a)\n\nb = np.ones((1,2)) # Create an array of all ones\nprint (b)\n\nc = np.full((2,2), 7) # Create a constant array\nprint (c) \n\nd = np.eye(2) # Create a 2x2 identity matrix\nprint (d)\n\ne = np.random.random((2,2)) # Create an array filled with random values\nprint (e)",
"Array indexing\nNumpy offers several ways to index into arrays.\nSlicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:",
"import numpy as np\n\n# Create the following rank 2 array with shape (3, 4)\n# [[ 1 2 3 4]\n# [ 5 6 7 8]\n# [ 9 10 11 12]]\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\n\n# Use slicing to pull out the subarray consisting of the first 2 rows\n# and columns 1 and 2; b is the following array of shape (2, 2):\n# [[2 3]\n# [6 7]]\nb = a[:2, 1:3]\nprint (b)",
"A slice of an array is a view into the same data, so modifying it will modify the original array.",
"print (a[0, 1]) \nb[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]\nprint (a[0, 1])",
"You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:",
"# Create the following rank 2 array with shape (3, 4)\na = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint (a)\nprint(a.shape)",
"Two ways of accessing the data in the middle row of the array.\nMixing integer indexing with slices yields an array of lower rank,\nwhile using only slices yields an array of the same rank as the\noriginal array:",
"row_r1 = a[1, :] # Rank 1 view of the second row of a \nrow_r2 = a[1:2, :] # Rank 2 view of the second row of a\nrow_r3 = a[[1], :] # Rank 2 view of the second row of a\nprint (row_r1, row_r1.shape)\nprint (row_r2, row_r2.shape)\nprint (row_r3, row_r3.shape)\n\n# We can make the same distinction when accessing columns of an array:\ncol_r1 = a[:, 1]\ncol_r2 = a[:, 1:2]\nprint (col_r1, col_r1.shape)\nprint (col_r2, col_r2.shape)",
"Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:",
"a = np.array([[1,2], [3, 4], [5, 6]])\n\n# An example of integer array indexing.\n# The returned array will have shape (3,) and \nprint (a[[0, 1, 2], [0, 1, 0]])\n\n# The above example of integer array indexing is equivalent to this:\nprint (np.array([a[0, 0], a[1, 1], a[2, 0]]))\n\n# When using integer array indexing, you can reuse the same\n# element from the source array:\nprint (a[[0, 0], [1, 1]])\n\n# Equivalent to the previous integer array indexing example\nprint (np.array([a[0, 1], a[0, 1]]))",
"One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:",
"# Create a new array from which we will select elements\na = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nprint (a)\n\n# Create an array of indices\nb = np.array([0, 2, 0, 1])\n\n# Select one element from each row of a using the indices in b\nprint (a[np.arange(4), b]) # Prints \"[ 1 6 7 11]\"\n\n# Mutate one element from each row of a using the indices in b\na[np.arange(4), b] += 10\nprint (a)",
"Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:",
"import numpy as np\n\na = np.array([[1,2], [3, 4], [5, 6]])\n\nbool_idx = (a > 2) # Find the elements of a that are bigger than 2;\n # this returns a numpy array of Booleans of the same\n # shape as a, where each slot of bool_idx tells\n # whether that element of a is > 2.\n\nprint (bool_idx)\n\n# We use boolean array indexing to construct a rank 1 array\n# consisting of the elements of a corresponding to the True values\n# of bool_idx\nprint (a[bool_idx])\n\n# We can do all of the above in a single concise statement:\nprint (a[a > 2])",
"For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.\nDatatypes\nEvery numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:",
"x = np.array([1, 2]) # Let numpy choose the datatype\ny = np.array([1.0, 2.0]) # Let numpy choose the datatype\nz = np.array([1, 2], dtype=np.int64) # Force a particular datatype\n\nprint (x.dtype, y.dtype, z.dtype)",
"You can read all about numpy datatypes in the documentation.\nArray math\nBasic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:",
"x = np.array([[1,2],[3,4]], dtype=np.float64)\ny = np.array([[5,6],[7,8]], dtype=np.float64)\n\n# Elementwise sum; both produce the array\nprint (x + y)\nprint (np.add(x, y))\n\n# Elementwise difference; both produce the array\nprint x - y\nprint np.subtract(x, y)\n\n# Elementwise product; both produce the array\nprint x * y\nprint np.multiply(x, y)\n\n# Elementwise division; both produce the array\n# [[ 0.2 0.33333333]\n# [ 0.42857143 0.5 ]]\nprint x / y\nprint np.divide(x, y)\n\n# Elementwise square root; produces the array\n# [[ 1. 1.41421356]\n# [ 1.73205081 2. ]]\nprint np.sqrt(x)",
"Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:",
"x = np.array([[1,2],[3,4]])\ny = np.array([[5,6],[7,8]])\n\nv = np.array([9,10])\nw = np.array([11, 12])\n\n# Inner product of vectors; both produce 219\nprint v.dot(w)\nprint np.dot(v, w)\n\n# Matrix / vector product; both produce the rank 1 array [29 67]\nprint x.dot(v)\nprint np.dot(x, v)\n\n# Matrix / matrix product; both produce the rank 2 array\n# [[19 22]\n# [43 50]]\nprint x.dot(y)\nprint np.dot(x, y)",
"Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:",
"x = np.array([[1,2],[3,4]])\n\nprint np.sum(x) # Compute sum of all elements; prints \"10\"\nprint np.sum(x, axis=0) # Compute sum of each column; prints \"[4 6]\"\nprint np.sum(x, axis=1) # Compute sum of each row; prints \"[3 7]\"",
"You can find the full list of mathematical functions provided by numpy in the documentation.\nApart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:",
"print x\nprint x.T\n\nv = np.array([[1,2,3]])\nprint v \nprint v.T",
"Broadcasting\nBroadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.\nFor example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:",
"# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = np.empty_like(x) # Create an empty matrix with the same shape as x\n\n# Add the vector v to each row of the matrix x with an explicit loop\nfor i in range(4):\n y[i, :] = x[i, :] + v\n\nprint (y)",
"This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:",
"vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other\nprint (vv) # Prints \"[[1 0 1]\n # [1 0 1]\n # [1 0 1]\n # [1 0 1]]\"\n\ny = x + vv # Add x and vv elementwise\nprint (y)",
"Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:",
"import numpy as np\n\n# We will add the vector v to each row of the matrix x,\n# storing the result in the matrix y\nx = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])\nv = np.array([1, 0, 1])\ny = x + v # Add v to each row of x using broadcasting\nprint (y)",
"The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.\nBroadcasting two arrays together follows these rules:\n\nIf the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.\nThe two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.\nThe arrays can be broadcast together if they are compatible in all dimensions.\nAfter broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.\nIn any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension\n\nIf this explanation does not make sense, try reading the explanation from the documentation or this explanation.\nFunctions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.\nHere are some applications of broadcasting:",
"# Compute outer product of vectors\nv = np.array([1,2,3]) # v has shape (3,)\nw = np.array([4,5]) # w has shape (2,)\n# To compute an outer product, we first reshape v to be a column\n# vector of shape (3, 1); we can then broadcast it against w to yield\n# an output of shape (3, 2), which is the outer product of v and w:\n\nprint (np.reshape(v, (3, 1)) * w)\n\n# Add a vector to each row of a matrix\nx = np.array([[1,2,3], [4,5,6]])\n# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),\n# giving the following matrix:\n\nprint (x + v)\n\n# Add a vector to each column of a matrix\n# x has shape (2, 3) and w has shape (2,).\n# If we transpose x then it has shape (3, 2) and can be broadcast\n# against w to yield a result of shape (3, 2); transposing this result\n# yields the final result of shape (2, 3) which is the matrix x with\n# the vector w added to each column. Gives the following matrix:\n\nprint ((x.T + w).T)\n\n# Another solution is to reshape w to be a row vector of shape (2, 1);\n# we can then broadcast it directly against x to produce the same\n# output.\nprint x + np.reshape(w, (2, 1))\n\n# Multiply a matrix by a constant:\n# x has shape (2, 3). Numpy treats scalars as arrays of shape ();\n# these can be broadcast together to shape (2, 3), producing the\n# following array:\nprint x * 2",
"Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.\nThis brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.\nMatplotlib\nMatplotlib is a plotting library. In this section give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.",
"import matplotlib.pyplot as plt",
"By running this special iPython command, we will be displaying plots inline:",
"%matplotlib inline",
"Plotting\nThe most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:",
"# Compute the x and y coordinates for points on a sine curve\nx = np.arange(0, 3 * np.pi, 0.1)\ny = np.sin(x)\n\n# Plot the points using matplotlib\nplt.scatter(x, y)",
"With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:",
"y_cos = np.cos(x)\n\n# Plot the points using matplotlib\nplt.plot(x, y)\nplt.plot(x, y_cos)\nplt.xlabel('x axis label')\nplt.ylabel('y axis label')\nplt.title('Sine and Cosine')\nplt.legend(['Sine', 'Cosine'])",
"Subplots\nYou can plot different things in the same figure using the subplot function. Here is an example:",
"# Compute the x and y coordinates for points on sine and cosine curves\nx = np.arange(0, 3 * np.pi, 0.1)\ny_sin = np.sin(x)\ny_cos = np.cos(x)\n\n# Set up a subplot grid that has height 2 and width 1,\n# and set the first such subplot as active.\nplt.subplot(2, 1, 1)\n\n# Make the first plot\nplt.plot(x, y_sin)\nplt.title('Sine')\n\n# Set the second subplot as active, and make the second plot.\nplt.subplot(2, 1, 2)\nplt.plot(x, y_cos)\nplt.title('Cosine')\n\n# Show the figure.\nplt.show()",
"You can read much more about the subplot function in the documentation.\nPandas",
"import pandas as pd\n\n# read_csv() is the function (or feature) from pandas we want to use to load the file into memory\ndframe = pd.read_csv(\"lectures/datasets/titanic_dataset.csv\")\n\n# .head(num_of_rows) is a method that displays the first few (num_of_rows) rows, not counting column headers\ndframe.head(5)\n\n# rows and columns in dataset\ndframe.shape\n\n# check columns in dataset\ndframe.columns\n\n# select a row\nhundredth_row = dframe.loc[99]\nprint(hundredth_row)\n\n# select multiple rows\nprint(\"Rows 3, 4, 5 and 6\")\nprint(dframe.loc[3:6])\n\n# select specific columns\ncols = ['survived','sex','age']\nspecific_cols = dframe[cols]\nspecific_cols.head()\n\n# check statistics of the data\ndframe.describe()\n\n# check histogram of age\ndframe.hist(column='age', bins=10)\n\n# Replace all the occurences of male with the number 0 and female with 1\ndframe.loc[dframe[\"sex\"] == \"male\", \"sex\"] = 0\ndframe.loc[dframe[\"sex\"] == \"female\", \"sex\"] = 1",
"Images",
"from IPython.display import Image\nImage(filename='lectures/images/01_02.png', width=500)\n\nImage(filename='lectures/images/01_01.png', width=500)",
"KNN Classifier",
"# read X and y\n# cols = ['pclass','sex','age','fare']\ncols = ['pclass','sex','age']\nX = dframe[cols]\ny = dframe[[\"survived\"]]\n\ndframe.head()\n\n# Use scikit-learn KNN classifier to predit survival probability\nfrom sklearn.neighbors import KNeighborsClassifier\nneigh = KNeighborsClassifier(n_neighbors=3)\nneigh.fit(X, y) \n\n# check accuracy\nneigh.score(X,y)\n\n# define a passenger\npassenger = [1,1,29]\n\n# predict survial label\nprint(neigh.predict([passenger]))\n\n# predict survial probability\nprint(neigh.predict_proba([passenger]))\n\n# find k-nearest neighbors\nneigh.kneighbors(passenger,3)\n\n# Let's create some data for DiCaprio and Winslet and you\nimport numpy as np\ncolsidx = [0,2,3];\ndicaprio = np.array([3, 'Jack Dawson', 0, 19, 0, 0, 'N/A', 5.0000])\nwinslet = np.array([1, 'Rose DeWitt Bukater', 1, 17, 1, 2, 'N/A', 100.0000])\nyou = np.array([1, 'user', 1, 24, 0, 2, 'N/A', 50.0000])\n# Preprocess data\ndicaprio = dicaprio[colsidx]\nwinslet = winslet[colsidx]\nyou = you[colsidx]\n# # Predict surviving chances (class 1 results)\npred = neigh.predict([dicaprio, winslet, you])\nprob = neigh.predict_proba([dicaprio, winslet, you])\nprint(\"DiCaprio Surviving:\", pred[0], \" with probability\", prob[0])\nprint(\"Winslet Surviving Rate:\", pred[1], \" with probability\", prob[2])\nprint(\"user Surviving Rate:\", pred[2], \" with probability\", prob[2])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hashiprobr/redes-sociais
|
encontro03/show-graph.ipynb
|
gpl-3.0
|
[
"Encontro 03: Grafos Reais\nImportando a biblioteca:",
"import sys\nsys.path.append('..')\n\nimport socnet as sn",
"Carregando e visualizando o grafo:",
"sn.node_size = 3\nsn.node_color = (0, 0, 0)\nsn.edge_width = 1\nsn.edge_color = (192, 192, 192)\nsn.node_label_position = 'top center'\n\ng = sn.load_graph('twitter.gml')\n\nsn.show_graph(g, nlab=True)",
"Dependendo do tamanho, o grafo pode demorar um pouco para aparecer."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.21/_downloads/995d3b6a9ece3566320943b2dcc13e22/plot_opm_data.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Optically pumped magnetometer (OPM) data\nIn this dataset, electrical median nerve stimulation was delivered to the\nleft wrist of the subject. Somatosensory evoked fields were measured using\nnine QuSpin SERF OPMs placed over the right-hand side somatomotor area.\nHere we demonstrate how to localize these custom OPM data in MNE.",
"import os.path as op\n\nimport numpy as np\nimport mne\n\ndata_path = mne.datasets.opm.data_path()\nsubject = 'OPM_sample'\nsubjects_dir = op.join(data_path, 'subjects')\nraw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')\nbem_fname = op.join(subjects_dir, subject, 'bem',\n subject + '-5120-5120-5120-bem-sol.fif')\nfwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')\ncoil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')",
"Prepare data for localization\nFirst we filter and epoch the data:",
"raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(None, 90, h_trans_bandwidth=10.)\nraw.notch_filter(50., notch_widths=1)\n\n\n# Set epoch rejection threshold a bit larger than for SQUIDs\nreject = dict(mag=2e-10)\ntmin, tmax = -0.5, 1\n\n# Find Median nerve stimulator trigger\nevent_id = dict(Median=257)\nevents = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')\npicks = mne.pick_types(raw.info, meg=True, eeg=False)\n# we use verbose='error' to suppress warning about decimation causing aliasing,\n# ideally we would low-pass and then decimate instead\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',\n reject=reject, picks=picks, proj=False, decim=10,\n preload=True)\nevoked = epochs.average()\nevoked.plot()\ncov = mne.compute_covariance(epochs, tmax=0.)\ndel epochs, raw",
"Examine our coordinate alignment for source localization and compute a\nforward operator:\n<div class=\"alert alert-info\"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the\n co-registration method used equates the two coordinate\n systems. This mis-defines the head coordinate system\n (which should be based on the LPA, Nasion, and RPA)\n but should be fine for these analyses.</p></div>",
"bem = mne.read_bem_solution(bem_fname)\ntrans = None\n\n# To compute the forward solution, we must\n# provide our temporary/custom coil definitions, which can be done as::\n#\n# with mne.use_coil_def(coil_def_fname):\n# fwd = mne.make_forward_solution(\n# raw.info, trans, src, bem, eeg=False, mindist=5.0,\n# n_jobs=1, verbose=True)\n\nfwd = mne.read_forward_solution(fwd_fname)\n# use fixed orientation here just to save memory later\nmne.convert_forward_solution(fwd, force_fixed=True, copy=False)\n\nwith mne.use_coil_def(coil_def_fname):\n fig = mne.viz.plot_alignment(\n evoked.info, trans, subject, subjects_dir, ('head', 'pial'), bem=bem)\n\nmne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,\n focalpoint=(0.02, 0, 0.04))",
"Perform dipole fitting",
"# Fit dipoles on a subset of time points\nwith mne.use_coil_def(coil_def_fname):\n dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.015, 0.080),\n cov, bem, trans, verbose=True)\nidx = np.argmax(dip_opm.gof)\nprint('Best dipole at t=%0.1f ms with %0.1f%% GOF'\n % (1000 * dip_opm.times[idx], dip_opm.gof[idx]))\n\n# Plot N20m dipole as an example\ndip_opm.plot_locations(trans, subject, subjects_dir,\n mode='orthoview', idx=idx)",
"Perform minimum-norm localization\nDue to the small number of sensors, there will be some leakage of activity\nto areas with low/no sensitivity. Constraining the source space to\nareas we are sensitive to might be a good idea.",
"inverse_operator = mne.minimum_norm.make_inverse_operator(\n evoked.info, fwd, cov, loose=0., depth=None)\ndel fwd, cov\n\nmethod = \"MNE\"\nsnr = 3.\nlambda2 = 1. / snr ** 2\nstc = mne.minimum_norm.apply_inverse(\n evoked, inverse_operator, lambda2, method=method,\n pick_ori=None, verbose=True)\n\n# Plot source estimate at time of best dipole fit\nbrain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,\n initial_time=dip_opm.times[idx],\n clim=dict(kind='percent', lims=[99, 99.9, 99.99]))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SnShine/aima-python
|
search-4e.ipynb
|
mit
|
[
"Note: This is not yet ready, but shows the direction I'm leaning in for Fourth Edition Search.\nState-Space Search\nThis notebook describes several state-space search algorithms, and how they can be used to solve a variety of problems. We start with a simple algorithm and a simple domain: finding a route from city to city. Later we will explore other algorithms and domains.\nThe Route-Finding Domain\nLike all state-space search problems, in a route-finding problem you will be given:\n- A start state (for example, 'A' for the city Arad).\n- A goal state (for example, 'B' for the city Bucharest).\n- Actions that can change state (for example, driving from 'A' to 'S').\nYou will be asked to find:\n- A path from the start state, through intermediate states, to the goal state.\nWe'll use this map:\n<img src=\"http://robotics.cs.tamu.edu/dshell/cs625/images/map.jpg\" height=\"366\" width=\"603\">\nA state-space search problem can be represented by a graph, where the vertexes of the graph are the states of the problem (in this case, cities) and the edges of the graph are the actions (in this case, driving along a road).\nWe'll represent a city by its single initial letter. \nWe'll represent the graph of connections as a dict that maps each city to a list of the neighboring cities (connected by a road). For now we don't explicitly represent the actions, nor the distances\nbetween cities.",
"romania = {\n 'A': ['Z', 'T', 'S'],\n 'B': ['F', 'P', 'G', 'U'],\n 'C': ['D', 'R', 'P'],\n 'D': ['M', 'C'],\n 'E': ['H'],\n 'F': ['S', 'B'],\n 'G': ['B'],\n 'H': ['U', 'E'],\n 'I': ['N', 'V'],\n 'L': ['T', 'M'],\n 'M': ['L', 'D'],\n 'N': ['I'],\n 'O': ['Z', 'S'],\n 'P': ['R', 'C', 'B'],\n 'R': ['S', 'C', 'P'],\n 'S': ['A', 'O', 'F', 'R'],\n 'T': ['A', 'L'],\n 'U': ['B', 'V', 'H'],\n 'V': ['U', 'I'],\n 'Z': ['O', 'A']}",
"Suppose we want to get from A to B. Where can we go from the start state, A?",
"romania['A']",
"We see that from A we can get to any of the three cities ['Z', 'T', 'S']. Which should we choose? We don't know. That's the whole point of search: we don't know which immediate action is best, so we'll have to explore, until we find a path that leads to the goal. \nHow do we explore? We'll start with a simple algorithm that will get us from A to B. We'll keep a frontier—a collection of not-yet-explored states—and expand the frontier outward until it reaches the goal. To be more precise:\n\nInitially, the only state in the frontier is the start state, 'A'.\nUntil we reach the goal, or run out of states in the frontier to explore, do the following:\nRemove the first state from the frontier. Call it s.\nIf s is the goal, we're done. Return the path to s.\nOtherwise, consider all the neighboring states of s. For each one:\nIf we have not previously explored the state, add it to the end of the frontier.\nAlso keep track of the previous state that led to this new neighboring state; we'll need this to reconstruct the path to the goal, and to keep us from re-visiting previously explored states.\n\n\n\nA Simple Search Algorithm: breadth_first\nThe function breadth_first implements this strategy:",
"from collections import deque # Doubly-ended queue: pop from left, append to right.\n\ndef breadth_first(start, goal, neighbors):\n \"Find a shortest sequence of states from start to the goal.\"\n frontier = deque([start]) # A queue of states\n previous = {start: None} # start has no previous state; other states will\n while frontier:\n s = frontier.popleft()\n if s == goal:\n return path(previous, s)\n for s2 in neighbors[s]:\n if s2 not in previous:\n frontier.append(s2)\n previous[s2] = s\n \ndef path(previous, s): \n \"Return a list of states that lead to state s, according to the previous dict.\"\n return [] if (s is None) else path(previous, previous[s]) + [s]",
"A couple of things to note: \n\nWe always add new states to the end of the frontier queue. That means that all the states that are adjacent to the start state will come first in the queue, then all the states that are two steps away, then three steps, etc.\nThat's what we mean by breadth-first search.\nWe recover the path to an end state by following the trail of previous[end] pointers, all the way back to start.\nThe dict previous is a map of {state: previous_state}. \nWhen we finally get an s that is the goal state, we know we have found a shortest path, because any other state in the queue must correspond to a path that is as long or longer.\nNote that previous contains all the states that are currently in frontier as well as all the states that were in frontier in the past.\nIf no path to the goal is found, then breadth_first returns None. If a path is found, it returns the sequence of states on the path.\n\nSome examples:",
"breadth_first('A', 'B', romania)\n\nbreadth_first('L', 'N', romania)\n\nbreadth_first('N', 'L', romania)\n\nbreadth_first('E', 'E', romania)",
"Now let's try a different kind of problem that can be solved with the same search function.\nWord Ladders Problem\nA word ladder problem is this: given a start word and a goal word, find the shortest way to transform the start word into the goal word by changing one letter at a time, such that each change results in a word. For example starting with green we can reach grass in 7 steps:\ngreen → greed → treed → trees → tress → cress → crass → grass\nWe will need a dictionary of words. We'll use 5-letter words from the Stanford GraphBase project for this purpose. Let's get that file from aimadata.",
"from search import *\nsgb_words = open_data(\"EN-text/sgb-words.txt\")",
"We can assign WORDS to be the set of all the words in this file:",
"WORDS = set(sgb_words.read().split())\nlen(WORDS)",
"And define neighboring_words to return the set of all words that are a one-letter change away from a given word:",
"def neighboring_words(word):\n \"All words that are one letter away from this word.\"\n neighbors = {word[:i] + c + word[i+1:]\n for i in range(len(word))\n for c in 'abcdefghijklmnopqrstuvwxyz'\n if c != word[i]}\n return neighbors & WORDS",
"For example:",
"neighboring_words('hello')\n\nneighboring_words('world')",
"Now we can create word_neighbors as a dict of {word: {neighboring_word, ...}}:",
"word_neighbors = {word: neighboring_words(word)\n for word in WORDS}",
"Now the breadth_first function can be used to solve a word ladder problem:",
"breadth_first('green', 'grass', word_neighbors)\n\nbreadth_first('smart', 'brain', word_neighbors)\n\nbreadth_first('frown', 'smile', word_neighbors)",
"More General Search Algorithms\nNow we'll embelish the breadth_first algorithm to make a family of search algorithms with more capabilities:\n\nWe distinguish between an action and the result of an action.\nWe allow different measures of the cost of a solution (not just the number of steps in the sequence).\nWe search through the state space in an order that is more likely to lead to an optimal solution quickly.\n\nHere's how we do these things:\n\nInstead of having a graph of neighboring states, we instead have an object of type Problem. A Problem\nhas one method, Problem.actions(state) to return a collection of the actions that are allowed in a state,\nand another method, Problem.result(state, action) that says what happens when you take an action.\nWe keep a set, explored of states that have already been explored. We also have a class, Frontier, that makes it efficient to ask if a state is on the frontier.\nEach action has a cost associated with it (in fact, the cost can vary with both the state and the action).\nThe Frontier class acts as a priority queue, allowing the \"best\" state to be explored next.\nWe represent a sequence of actions and resulting states as a linked list of Node objects.\n\nThe algorithm breadth_first_search is basically the same as breadth_first, but using our new conventions:",
"def breadth_first_search(problem):\n \"Search for goal; paths with least number of steps first.\"\n if problem.is_goal(problem.initial): \n return Node(problem.initial)\n frontier = FrontierQ(Node(problem.initial), LIFO=False)\n explored = set()\n while frontier:\n node = frontier.pop()\n explored.add(node.state)\n for action in problem.actions(node.state):\n child = node.child(problem, action)\n if child.state not in explored and child.state not in frontier:\n if problem.is_goal(child.state):\n return child\n frontier.add(child)",
"Next is uniform_cost_search, in which each step can have a different cost, and we still consider first one os the states with minimum cost so far.",
"def uniform_cost_search(problem, costfn=lambda node: node.path_cost):\n frontier = FrontierPQ(Node(problem.initial), costfn)\n explored = set()\n while frontier:\n node = frontier.pop()\n if problem.is_goal(node.state):\n return node\n explored.add(node.state)\n for action in problem.actions(node.state):\n child = node.child(problem, action)\n if child.state not in explored and child not in frontier:\n frontier.add(child)\n elif child in frontier and frontier.cost[child] < child.path_cost:\n frontier.replace(child)",
"Finally, astar_search in which the cost includes an estimate of the distance to the goal as well as the distance travelled so far.",
"def astar_search(problem, heuristic):\n costfn = lambda node: node.path_cost + heuristic(node.state)\n return uniform_cost_search(problem, costfn)",
"Search Tree Nodes\nThe solution to a search problem is now a linked list of Nodes, where each Node\nincludes a state and the path_cost of getting to the state. In addition, for every Node except for the first (root) Node, there is a previous Node (indicating the state that lead to this Node) and an action (indicating the action taken to get here).",
"class Node(object):\n \"\"\"A node in a search tree. A search tree is spanning tree over states.\n A Node contains a state, the previous node in the tree, the action that\n takes us from the previous state to this state, and the path cost to get to \n this state. If a state is arrived at by two paths, then there are two nodes \n with the same state.\"\"\"\n\n def __init__(self, state, previous=None, action=None, step_cost=1):\n \"Create a search tree Node, derived from a previous Node by an action.\"\n self.state = state\n self.previous = previous\n self.action = action\n self.path_cost = 0 if previous is None else (previous.path_cost + step_cost)\n\n def __repr__(self): return \"<Node {}: {}>\".format(self.state, self.path_cost)\n \n def __lt__(self, other): return self.path_cost < other.path_cost\n \n def child(self, problem, action):\n \"The Node you get by taking an action from this Node.\"\n result = problem.result(self.state, action)\n return Node(result, self, action, \n problem.step_cost(self.state, action, result)) ",
"Frontiers\nA frontier is a collection of Nodes that acts like both a Queue and a Set. A frontier, f, supports these operations:\n\n\nf.add(node): Add a node to the Frontier.\n\n\nf.pop(): Remove and return the \"best\" node from the frontier.\n\n\nf.replace(node): add this node and remove a previous node with the same state.\n\n\nstate in f: Test if some node in the frontier has arrived at state.\n\n\nf[state]: returns the node corresponding to this state in frontier.\n\n\nlen(f): The number of Nodes in the frontier. When the frontier is empty, f is false.\n\n\nWe provide two kinds of frontiers: One for \"regular\" queues, either first-in-first-out (for breadth-first search) or last-in-first-out (for depth-first search), and one for priority queues, where you can specify what cost function on nodes you are trying to minimize.",
"from collections import OrderedDict\nimport heapq\n\nclass FrontierQ(OrderedDict):\n \"A Frontier that supports FIFO or LIFO Queue ordering.\"\n \n def __init__(self, initial, LIFO=False):\n \"\"\"Initialize Frontier with an initial Node.\n If LIFO is True, pop from the end first; otherwise from front first.\"\"\"\n self.LIFO = LIFO\n self.add(initial)\n \n def add(self, node):\n \"Add a node to the frontier.\"\n self[node.state] = node\n \n def pop(self):\n \"Remove and return the next Node in the frontier.\"\n (state, node) = self.popitem(self.LIFO)\n return node\n \n def replace(self, node):\n \"Make this node replace the nold node with the same state.\"\n del self[node.state]\n self.add(node)\n\nclass FrontierPQ:\n \"A Frontier ordered by a cost function; a Priority Queue.\"\n \n def __init__(self, initial, costfn=lambda node: node.path_cost):\n \"Initialize Frontier with an initial Node, and specify a cost function.\"\n self.heap = []\n self.states = {}\n self.costfn = costfn\n self.add(initial)\n \n def add(self, node):\n \"Add node to the frontier.\"\n cost = self.costfn(node)\n heapq.heappush(self.heap, (cost, node))\n self.states[node.state] = node\n \n def pop(self):\n \"Remove and return the Node with minimum cost.\"\n (cost, node) = heapq.heappop(self.heap)\n self.states.pop(node.state, None) # remove state\n return node\n \n def replace(self, node):\n \"Make this node replace a previous node with the same state.\"\n if node.state not in self:\n raise ValueError('{} not there to replace'.format(node.state))\n for (i, (cost, old_node)) in enumerate(self.heap):\n if old_node.state == node.state:\n self.heap[i] = (self.costfn(node), node)\n heapq._siftdown(self.heap, 0, i)\n return\n\n def __contains__(self, state): return state in self.states\n \n def __len__(self): return len(self.heap)",
"Search Problems\nProblem is the abstract class for all search problems. You can define your own class of problems as a subclass of Problem. You will need to override the actions and result method to describe how your problem works. You will also have to either override is_goal or pass a collection of goal states to the initialization method. If actions have different costs, you should override the step_cost method.",
"class Problem(object):\n \"\"\"The abstract class for a search problem.\"\"\"\n\n def __init__(self, initial=None, goals=(), **additional_keywords):\n \"\"\"Provide an initial state and optional goal states.\n A subclass can have additional keyword arguments.\"\"\"\n self.initial = initial # The initial state of the problem.\n self.goals = goals # A collection of possibe goal states.\n self.__dict__.update(**additional_keywords)\n\n def actions(self, state):\n \"Return a list of actions executable in this state.\"\n raise NotImplementedError # Override this!\n\n def result(self, state, action):\n \"The state that results from executing this action in this state.\"\n raise NotImplementedError # Override this!\n\n def is_goal(self, state):\n \"True if the state is a goal.\" \n return state in self.goals # Optionally override this!\n\n def step_cost(self, state, action, result=None):\n \"The cost of taking this action from this state.\"\n return 1 # Override this if actions have different costs \n\ndef action_sequence(node):\n \"The sequence of actions to get to this node.\"\n actions = []\n while node.previous:\n actions.append(node.action)\n node = node.previous\n return actions[::-1]\n\ndef state_sequence(node):\n \"The sequence of states to get to this node.\"\n states = [node.state]\n while node.previous:\n node = node.previous\n states.append(node.state)\n return states[::-1]",
"Two Location Vacuum World",
"dirt = '*'\nclean = ' '\n\nclass TwoLocationVacuumProblem(Problem):\n \"\"\"A Vacuum in a world with two locations, and dirt.\n Each state is a tuple of (location, dirt_in_W, dirt_in_E).\"\"\"\n\n def actions(self, state): return ('W', 'E', 'Suck')\n \n def is_goal(self, state): return dirt not in state\n \n def result(self, state, action):\n \"The state that results from executing this action in this state.\" \n (loc, dirtW, dirtE) = state\n if action == 'W': return ('W', dirtW, dirtE)\n elif action == 'E': return ('E', dirtW, dirtE)\n elif action == 'Suck' and loc == 'W': return (loc, clean, dirtE)\n elif action == 'Suck' and loc == 'E': return (loc, dirtW, clean) \n else: raise ValueError('unknown action: ' + action)\n\nproblem = TwoLocationVacuumProblem(initial=('W', dirt, dirt))\nresult = uniform_cost_search(problem)\nresult\n\naction_sequence(result)\n\nstate_sequence(result)\n\nproblem = TwoLocationVacuumProblem(initial=('E', clean, dirt))\nresult = uniform_cost_search(problem)\naction_sequence(result)",
"Water Pouring Problem\nHere is another problem domain, to show you how to define one. The idea is that we have a number of water jugs and a water tap and the goal is to measure out a specific amount of water (in, say, ounces or liters). You can completely fill or empty a jug, but because the jugs don't have markings on them, you can't partially fill them with a specific amount. You can, however, pour one jug into another, stopping when the second is full or the first is empty.",
"class PourProblem(Problem):\n \"\"\"Problem about pouring water between jugs to achieve some water level.\n Each state is a tuple of levels. In the initialization, provide a tuple of \n capacities, e.g. PourProblem(capacities=(8, 16, 32), initial=(2, 4, 3), goals={7}), \n which means three jugs of capacity 8, 16, 32, currently filled with 2, 4, 3 units of \n water, respectively, and the goal is to get a level of 7 in any one of the jugs.\"\"\"\n \n def actions(self, state):\n \"\"\"The actions executable in this state.\"\"\"\n jugs = range(len(state))\n return ([('Fill', i) for i in jugs if state[i] != self.capacities[i]] +\n [('Dump', i) for i in jugs if state[i] != 0] +\n [('Pour', i, j) for i in jugs for j in jugs if i != j])\n\n def result(self, state, action):\n \"\"\"The state that results from executing this action in this state.\"\"\"\n result = list(state)\n act, i, j = action[0], action[1], action[-1]\n if act == 'Fill': # Fill i to capacity\n result[i] = self.capacities[i]\n elif act == 'Dump': # Empty i\n result[i] = 0\n elif act == 'Pour':\n a, b = state[i], state[j]\n result[i], result[j] = ((0, a + b) \n if (a + b <= self.capacities[j]) else\n (a + b - self.capacities[j], self.capacities[j]))\n else:\n raise ValueError('unknown action', action)\n return tuple(result)\n\n def is_goal(self, state):\n \"\"\"True if any of the jugs has a level equal to one of the goal levels.\"\"\"\n return any(level in self.goals for level in state)\n\np7 = PourProblem(initial=(2, 0), capacities=(5, 13), goals={7})\np7.result((2, 0), ('Fill', 1))\n\nresult = uniform_cost_search(p7)\naction_sequence(result)",
"Visualization Output",
"def showpath(searcher, problem):\n \"Show what happens when searcher solves problem.\"\n problem = Instrumented(problem)\n print('\\n{}:'.format(searcher.__name__))\n result = searcher(problem)\n if result:\n actions = action_sequence(result)\n state = problem.initial\n path_cost = 0\n for steps, action in enumerate(actions, 1):\n path_cost += problem.step_cost(state, action, 0)\n result = problem.result(state, action)\n # note the trailing {} so the '; GOAL!' marker is actually printed\n print(' {} =={}==> {}; cost {} after {} steps{}'\n .format(state, action, result, path_cost, steps,\n '; GOAL!' if problem.is_goal(result) else ''))\n state = result\n msg = 'GOAL FOUND' if result else 'no solution'\n print('{} after {} results and {} goal checks'\n .format(msg, problem._counter['result'], problem._counter['is_goal']))\n \nfrom collections import Counter\n\nclass Instrumented:\n \"Instrument an object to count all the attribute accesses in _counter.\"\n def __init__(self, obj):\n self._object = obj\n self._counter = Counter()\n def __getattr__(self, attr):\n self._counter[attr] += 1\n return getattr(self._object, attr) \n\nshowpath(uniform_cost_search, p7)\n\np = PourProblem(initial=(0, 0), capacities=(7, 13), goals={2})\nshowpath(uniform_cost_search, p)\n\nclass GreenPourProblem(PourProblem): \n def step_cost(self, state, action, result=None):\n \"The cost is the amount of water used in a fill.\"\n if action[0] == 'Fill':\n i = action[1]\n return self.capacities[i] - state[i]\n return 0\n\np = GreenPourProblem(initial=(0, 0), capacities=(7, 13), goals={2})\nshowpath(uniform_cost_search, p)\n\ndef compare_searchers(problem, searchers=None):\n \"Apply each of the search algorithms to the problem, and show results\"\n if searchers is None: \n searchers = (breadth_first_search, uniform_cost_search)\n for searcher in searchers:\n showpath(searcher, problem)\n\ncompare_searchers(p)",
"Random Grid\nAn environment where you can move in any of 4 directions, unless there is an obstacle there.",
"import random\n\nN, S, E, W = DIRECTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]\n\ndef Grid(width, height, obstacles=0.1):\n \"\"\"A 2-D grid, width x height, with obstacles that are either a collection of points,\n or a fraction between 0 and 1 indicating the density of obstacles, chosen at random.\"\"\"\n grid = {(x, y) for x in range(width) for y in range(height)}\n if isinstance(obstacles, (float, int)):\n obstacles = random.sample(grid, int(width * height * obstacles))\n def neighbors(x, y):\n for (dx, dy) in DIRECTIONS:\n (nx, ny) = (x + dx, y + dy)\n if (nx, ny) not in obstacles and 0 <= nx < width and 0 <= ny < height:\n yield (nx, ny)\n return {(x, y): list(neighbors(x, y))\n for x in range(width) for y in range(height)}\n\nGrid(5, 5)\n\nclass GridProblem(Problem):\n \"Create with a call like GridProblem(grid=Grid(10, 10), initial=(0, 0), goal=(9, 9))\"\n def actions(self, state): return DIRECTIONS\n def result(self, state, action):\n #print('ask for result of', state, action)\n (x, y) = state\n (dx, dy) = action\n r = (x + dx, y + dy)\n return r if r in self.grid[state] else state\n\ngp = GridProblem(grid=Grid(5, 5, 0.3), initial=(0, 0), goals={(4, 4)})\nshowpath(uniform_cost_search, gp)\n",
"Finding a hard PourProblem\nWhat solvable two-jug PourProblem requires the most steps? We can define the hardness as the number of steps, and then iterate over all PourProblems with capacities up to size M, keeping the hardest one.",
"def hardness(problem):\n L = breadth_first_search(problem)\n #print('hardness', problem.initial, problem.capacities, problem.goals, L)\n return len(action_sequence(L)) if (L is not None) else 0\n\nhardness(p7)\n\naction_sequence(breadth_first_search(p7))\n\nC = 9 # Maximum capacity to consider\n\nphard = max((PourProblem(initial=(a, b), capacities=(A, B), goals={goal})\n for A in range(C+1) for B in range(C+1)\n for a in range(A) for b in range(B)\n for goal in range(max(A, B))),\n key=hardness)\n\nphard.initial, phard.capacities, phard.goals\n\nshowpath(breadth_first_search, PourProblem(initial=(0, 0), capacities=(7, 9), goals={8}))\n\nshowpath(uniform_cost_search, phard)\n\nclass GridProblem(Problem):\n \"\"\"A Grid.\"\"\"\n\n def actions(self, state): return ['N', 'S', 'E', 'W'] \n \n def result(self, state, action):\n \"\"\"The state that results from executing this action in this state.\"\"\" \n (W, H) = self.size\n if action == 'N' and state > W: return state - W\n if action == 'S' and state + W < W * W: return state + W\n if action == 'E' and (state + 1) % W !=0: return state + 1\n if action == 'W' and state % W != 0: return state - 1\n return state\n\ncompare_searchers(GridProblem(initial=0, goals={44}, size=(10, 10)))\n\ndef test_frontier():\n \n #### Breadth-first search with FIFO Q\n f = FrontierQ(Node(1), LIFO=False)\n assert 1 in f and len(f) == 1\n f.add(Node(2))\n f.add(Node(3))\n assert 1 in f and 2 in f and 3 in f and len(f) == 3\n assert f.pop().state == 1\n assert 1 not in f and 2 in f and 3 in f and len(f) == 2\n assert f\n assert f.pop().state == 2\n assert f.pop().state == 3\n assert not f\n \n #### Depth-first search with LIFO Q\n f = FrontierQ(Node('a'), LIFO=True)\n for s in 'bcdef': f.add(Node(s))\n assert len(f) == 6 and 'a' in f and 'c' in f and 'f' in f\n for s in 'fedcba': assert f.pop().state == s\n assert not f\n\n #### Best-first search with Priority Q\n f = FrontierPQ(Node(''), lambda node: len(node.state))\n assert '' in f and 
len(f) == 1 and f\n for s in ['book', 'boo', 'bookie', 'bookies', 'cook', 'look', 'b']:\n assert s not in f\n f.add(Node(s))\n assert s in f\n assert f.pop().state == ''\n assert f.pop().state == 'b'\n assert f.pop().state == 'boo'\n assert {f.pop().state for _ in '123'} == {'book', 'cook', 'look'}\n assert f.pop().state == 'bookie'\n \n #### Romania: Two paths to Bucharest; cheapest one found first\n S = Node('S')\n SF = Node('F', S, 'S->F', 99)\n SFB = Node('B', SF, 'F->B', 211)\n SR = Node('R', S, 'S->R', 80)\n SRP = Node('P', SR, 'R->P', 97)\n SRPB = Node('B', SRP, 'P->B', 101)\n f = FrontierPQ(S)\n f.add(SF); f.add(SR), f.add(SRP), f.add(SRPB); f.add(SFB)\n def cs(n): return (n.path_cost, n.state) # cs: cost and state\n assert cs(f.pop()) == (0, 'S')\n assert cs(f.pop()) == (80, 'R')\n assert cs(f.pop()) == (99, 'F')\n assert cs(f.pop()) == (177, 'P')\n assert cs(f.pop()) == (278, 'B')\n return 'test_frontier ok'\n\ntest_frontier()\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\np = plt.plot([i**2 for i in range(10)])\nplt.savefig('destination_path.eps', format='eps', dpi=1200)\n\nimport itertools\nimport random\n# http://stackoverflow.com/questions/10194482/custom-matplotlib-plot-chess-board-like-table-with-colored-cells\n\nfrom matplotlib.table import Table\n\ndef main():\n grid_table(8, 8)\n plt.axis('scaled')\n plt.show()\n\ndef grid_table(nrows, ncols):\n fig, ax = plt.subplots()\n ax.set_axis_off()\n colors = ['white', 'lightgrey', 'dimgrey']\n tb = Table(ax, bbox=[0,0,2,2])\n for i,j in itertools.product(range(ncols), range(nrows)):\n tb.add_cell(i, j, 2./ncols, 2./nrows, text='{:0.2f}'.format(0.1234), \n loc='center', facecolor=random.choice(colors), edgecolor='grey') # facecolors=\n ax.add_table(tb)\n #ax.plot([0, .3], [.2, .2])\n #ax.add_line(plt.Line2D([0.3, 0.5], [0.7, 0.7], linewidth=2, color='blue'))\n return fig\n\nmain()\n\nimport collections\nclass defaultkeydict(collections.defaultdict):\n \"\"\"Like defaultdict, but the 
default_factory is a function of the key.\n >>> d = defaultkeydict(abs); d[-42]\n 42\n \"\"\"\n def __missing__(self, key):\n self[key] = self.default_factory(key)\n return self[key]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jhprinz/openpathsampling
|
examples/alanine_dipeptide_mstis/AD_mstis_4_analysis.ipynb
|
lgpl-2.1
|
[
"Analyzing the MSTIS simulation\nIncluded in this notebook:\n\nOpening files for analysis\nRates, fluxes, total crossing probabilities, and conditional transition probabilities\nPer-ensemble properties such as path length distributions and interface crossing probabilities\nMove scheme analysis\nReplica exchange analysis\nReplica move history tree visualization\nReplaying the simulation\nMORE TO COME! Like free energy projections, path density plots, and more",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport openpathsampling as paths\nimport numpy as np",
"The optimum way to use storage depends on whether you're doing production or analysis. For analysis, you should open the file as an AnalysisStorage object. This makes the analysis much faster.",
"%%time\nstorage = paths.AnalysisStorage(\"ala_mstis_production.nc\")\n\nprint \"PathMovers:\", len(storage.pathmovers)\nprint \"Engines:\", len(storage.engines)\nprint \"Samples:\", len(storage.samples)\nprint \"Ensembles:\", len(storage.ensembles)\nprint \"SampleSets:\", len(storage.samplesets)\nprint \"Snapshots:\", len(storage.snapshots)\nprint \"Trajectories:\", len(storage.trajectories)\nprint \"Networks:\", len(storage.networks)\n\n%%time\nmstis = storage.networks[0]\n\n%%time\nfor cv in storage.cvs:\n print cv.name, cv._store_dict",
"Reaction rates\nTIS methods are especially good at determining reaction rates, and OPS makes it extremely easy to obtain the rate from a TIS network.\nNote that, although you can get the rate directly, it is very important to look at other results of the sampling (illustrated in this notebook and in notebooks referred to herein) in order to check the validity of the rates you obtain.\nBy default, the built-in analysis calculates histograms of the maximum value of some order parameter and of the path length of every sampled ensemble. You can add other things to this list as well, but you must always specify histogram parameters for these two. The pathlength is in units of frames.",
"mstis.hist_args['max_lambda'] = { 'bin_width' : 2, 'bin_range' : (0.0, 90) }\nmstis.hist_args['pathlength'] = { 'bin_width' : 5, 'bin_range' : (0, 100) }\n\n%%time\nmstis.rate_matrix(storage.steps, force=True)",
"The self-rates (the rate of returning to the initial state) are undefined, and return not-a-number.\nThe rate is calculated according to the formula:\n$$k_{AB} = \\phi_{A,0} P(B|\\lambda_m) \\prod_{i=0}^{m-1} P(\\lambda_{i+1} | \\lambda_i)$$\nwhere $\\phi_{A,0}$ is the flux from state A through its innermost interface, $P(B|\\lambda_m)$ is the conditional transition probability (the probability that a path which crosses the interface at $\\lambda_m$ ends in state B), and $\\prod_{i=0}^{m-1} P(\\lambda_{i+1} | \\lambda_i)$ is the total crossing probability. We can look at each of these terms individually.\nTotal crossing probability",
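As a quick sanity check of the formula, the product can be computed by hand. The flux, per-interface crossing probabilities, and conditional transition probability below are hypothetical illustrative numbers, not values taken from this simulation:

```python
import math

def tis_rate(flux, crossing_probs, ctp):
    # k_AB = phi_{A,0} * prod_i P(lambda_{i+1} | lambda_i) * P(B | lambda_m)
    total_crossing = math.prod(crossing_probs)
    return flux * total_crossing * ctp

# hypothetical numbers: flux 0.5 per time unit, three interface crossings, CTP 0.2
k_AB = tis_rate(0.5, [0.4, 0.3, 0.25], 0.2)  # 0.5 * 0.03 * 0.2 = 0.003
```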
"stateA = storage.volumes[\"A0\"]\nstateB = storage.volumes[\"B0\"]\nstateC = storage.volumes[\"C0\"]\n\ntcp_AB = mstis.transitions[(stateA, stateB)].tcp\ntcp_AC = mstis.transitions[(stateA, stateC)].tcp\ntcp_BC = mstis.transitions[(stateB, stateC)].tcp\ntcp_BA = mstis.transitions[(stateB, stateA)].tcp\ntcp_CA = mstis.transitions[(stateC, stateA)].tcp\ntcp_CB = mstis.transitions[(stateC, stateB)].tcp\n\nplt.plot(tcp_AB.x, tcp_AB)\nplt.plot(tcp_CA.x, tcp_CA)\nplt.plot(tcp_BC.x, tcp_BC)\nplt.plot(tcp_AC.x, tcp_AC) # same as tcp_AB in MSTIS",
"We normally look at these on a log scale:",
"plt.plot(tcp_AB.x, np.log(tcp_AB))\nplt.plot(tcp_CA.x, np.log(tcp_CA))\nplt.plot(tcp_BC.x, np.log(tcp_BC))",
"Flux\nHere we also calculate the flux contribution to each transition. The flux is calculated based on",
"import pandas as pd\nflux_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states)\nfor state_pair in mstis.transitions:\n transition = mstis.transitions[state_pair]\n flux_matrix.set_value(state_pair[0], state_pair[1], transition._flux)\n\nflux_matrix",
"Conditional transition probability",
"outer_ctp_matrix = pd.DataFrame(columns=mstis.states, index=mstis.states)\nfor state_pair in mstis.transitions:\n transition = mstis.transitions[state_pair]\n outer_ctp_matrix.set_value(state_pair[0], state_pair[1], transition.ctp[transition.ensembles[-1]]) \n\nouter_ctp_matrix\n\nctp_by_interface = pd.DataFrame(index=mstis.transitions)\nfor state_pair in mstis.transitions:\n transition = mstis.transitions[state_pair]\n for ensemble_i in range(len(transition.ensembles)):\n ctp_by_interface.set_value(\n state_pair, ensemble_i,\n transition.conditional_transition_probability(\n storage.steps,\n transition.ensembles[ensemble_i]\n ))\n \n \nctp_by_interface ",
"Path ensemble properties",
"hists_A = mstis.transitions[(stateA, stateB)].histograms\nhists_B = mstis.transitions[(stateB, stateC)].histograms\nhists_C = mstis.transitions[(stateC, stateB)].histograms",
"Interface crossing probabilities\nWe obtain the total crossing probability, shown above, by combining the individual crossing probabilities of each interface ensemble.",
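A rough sketch of how the per-ensemble histograms relate to the total crossing probability (all numbers here are hypothetical, and OPS actually stitches the histograms together with WHAM rather than this naive product):

```python
import numpy as np

# hypothetical reverse-cumulative max-lambda histograms, one row per
# interface ensemble, sampled at the interface positions lambda_0..lambda_3
hists = np.array([
    [1.0, 0.5, 0.2, 0.1],   # ensemble of interface lambda_0
    [1.0, 1.0, 0.4, 0.2],   # ensemble of interface lambda_1
    [1.0, 1.0, 1.0, 0.5],   # ensemble of interface lambda_2
])

# P(lambda_{i+1} | lambda_i): read each ensemble's histogram at the next interface
conditional = [hists[i][i + 1] for i in range(len(hists))]
total_crossing = float(np.prod(conditional))  # 0.5 * 0.4 * 0.5 = 0.1
```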
"for hist in [hists_A, hists_B, hists_C]:\n for ens in hist['max_lambda']:\n normalized = hist['max_lambda'][ens].normalized()\n plt.plot(normalized.x, normalized)\n\n# add visualization of the sum\n\nfor hist in [hists_A, hists_B, hists_C]:\n for ens in hist['max_lambda']:\n reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative()\n plt.plot(reverse_cumulative.x, reverse_cumulative)\n\nfor hist in [hists_A, hists_B, hists_C]:\n for ens in hist['max_lambda']:\n reverse_cumulative = hist['max_lambda'][ens].reverse_cumulative()\n plt.plot(reverse_cumulative.x, np.log(reverse_cumulative))",
"Path length histograms",
"for hist in [hists_A, hists_B, hists_C]:\n for ens in hist['pathlength']:\n normalized = hist['pathlength'][ens].normalized()\n plt.plot(normalized.x, normalized)\n\nfor ens in hists_A['pathlength']:\n normalized = hists_A['pathlength'][ens].normalized()\n plt.plot(normalized.x, normalized)",
"Sampling properties\nThe properties we illustrated above were properties of the path ensembles. If your path ensembles are sufficiently well-sampled, these will never depend on how you sample them.\nBut to figure out whether you've done a good job of sampling, you often want to look at properties related to the sampling process. OPS also makes these very easy.\nMove scheme analysis",
"scheme = storage.schemes[0]\n\nscheme.move_summary(storage.steps)\n\nscheme.move_summary(storage.steps, 'shooting')\n\nscheme.move_summary(storage.steps, 'minus')\n\nscheme.move_summary(storage.steps, 'repex')\n\nscheme.move_summary(storage.steps, 'pathreversal')",
"Replica exchange sampling\nSee the notebook repex_networks.ipynb for more details on tools to study the convergence of replica exchange. However, a few simple examples are shown here. All of these are analyzed with a separate object, ReplicaNetwork.",
"repx_net = paths.ReplicaNetwork(scheme, storage.steps)",
"Replica exchange mixing matrix",
"repx_net.mixing_matrix()",
"Replica exchange graph\nThe mixing matrix tells a story of how well various interfaces are connected to other interfaces. The replica exchange graph is essentially a visualization of the mixing matrix (actually, of the transition matrix -- the mixing matrix is a symmetrized version of the transition matrix).\nNote: We're still developing better layout tools to visualize these.",
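The symmetrization mentioned above is simple to sketch; the transition counts here are made up for illustration:

```python
import numpy as np

# hypothetical replica transition counts between three interface ensembles
transitions = np.array([[0.0, 5.0, 1.0],
                        [3.0, 0.0, 4.0],
                        [1.0, 2.0, 0.0]])

# the mixing matrix is the symmetrized version of the transition matrix
mixing = 0.5 * (transitions + transitions.T)
```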
"repxG = paths.ReplicaNetworkGraph(repx_net)\nrepxG.draw('spring')",
"Replica exchange flow\nReplica flow is defined as TODO\nFlow is designed for calculations where the replica exchange graph is linear, which ours clearly is not. However, we can define the flow over a subset of the interfaces.\nReplica move history tree",
"import openpathsampling.visualize as vis\nreload(vis)\nfrom IPython.display import SVG\n\ntree = vis.PathTree(\n [step for step in storage.steps if not isinstance(step.change, paths.EmptyPathMoveChange)],\n vis.ReplicaEvolution(replica=3, accepted=False)\n)\ntree.options.css['width'] = 'inherit'\n\nSVG(tree.svg())\n\ndecorrelated = tree.generator.decorrelated\nprint \"We have \" + str(len(decorrelated)) + \" decorrelated trajectories.\"",
"Visualizing trajectories\nHistogramming data (TODO)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
TESScience/FPE_Test_Procedures
|
Manual_cmds.ipynb
|
mit
|
[
"Manual Commands Workbook\nThis notebook is a workbook for testing hardware with manual FPE commands, and general empirical testing. It turns out that it's also a handy command reference.\nStart the Observatory Simulator and Load the FPE FPGA\nRemember that whenever you power-cycle the Observatory Simulator, you should set preload=True below.\nWhen you are running this notebook and it has not been power cycled, you should set preload=False.\nRun the following cell to get the FPE loaded:",
"from tessfpe.dhu.fpe import FPE\nfrom tessfpe.dhu.unit_tests import check_house_keeping_voltages\nfpe1 = FPE(1, debug=False, preload=True, FPE_Wrapper_version='6.1.1')\nprint fpe1.version\nfpe1.cmd_start_frames()\nfpe1.cmd_stop_frames()\nif check_house_keeping_voltages(fpe1):\n print \"Wrapper load complete. Interface voltages OK.\"",
"Useful Commands:\nping()\nfpe1.cmd_start_frames() # Starts frame generation.\nfpe1.cmd_stop_frames() # Stops frame generation.\nfpe1.cmd_camrst # Don't know how to work this. As-is, it fails.\nfpe1.cmd_cam_status() # Returns the camera status register values.\nfpe1.cmd_version() # Returns ObsSim version info.\nfpe1.house_keeping # Returns a set of HK data in alphabetical order, in engineering units, without frames running. This includes all the FPGA digital housekeeping values.\nfpe1.house_keeping[\"analogue\"] #Returns only the analog values of the housekeeping set.\n{fpe1.cmd_cam_hsk() # Returns raw, un-parsed housekeeping data, two samples per word (decimal), mostly useless here.}\ncheck_house_keeping_voltages(fpe1, tolerance=0.05) # Returns True if standard set of supply voltages are in tolerance.\nIf you plan on setting operating parameters (DACs), run this cell:",
"from tessfpe.data.operating_parameters import operating_parameters\noperating_parameters[\"heater_1_current\"]",
"Reading a housekeeping value has this form:\nfpe1.house_keeping[\"analogue\"][\"parameter_name\"]\nHere are a couple of sample reads of housekeeping values:",
"fpe1.house_keeping[\"analogue\"][\"heater_1_current\"]\nfpe1.house_keeping[\"analogue\"][\"ccd1_input_diode_high\"]",
"Setting an operating parameter has this form:\nfpe1.ops.parameter_name = value\nfpe1.ops.send()\nSetting the 3 trim heaters to their minimum values looks like this:",
"fpe1.ops.heater_1_current = fpe1.ops.heater_1_current.low\nfpe1.ops.heater_2_current = fpe1.ops.heater_2_current.low\nfpe1.ops.heater_3_current = fpe1.ops.heater_3_current.low\nfpe1.ops.send()",
"Setting all the operating parameters to the default values:",
"def set_fpe_defaults(fpe):\n \"Set the FPE to the default operating parameters and return a list of the default values\"\n defaults = {}\n for k in range(len(fpe.ops.address)):\n if fpe.ops.address[k] is None:\n continue\n fpe.ops.address[k].value = fpe.ops.address[k].default\n defaults[fpe.ops.address[k].name] = fpe.ops.address[k].default\n return defaults\n\nset_fpe_defaults(fpe1)",
"Workspace:",
"operating_parameters[\"ccd1_output_drain_a_offset\"]\n#operating_parameters[\"ccd1_reset_drain\"]\n\nfpe1.ops.ccd1_reset_drain = 15\nfpe1.ops.ccd1_output_drain_a_offset = 10\nfpe1.ops.send()\nfpe1.house_keeping[\"analogue\"][\"ccd1_output_drain_a\"]\n\n#operating_parameters[\"ccd1_reset_high\"]\noperating_parameters['ccd1_reset_low_offset']\n\nfpe1.ops.ccd1_reset_high = -10.3\nfpe1.ops.ccd1_reset_low_offset = -9.9\nfpe1.ops.send()\nfpe1.house_keeping[\"analogue\"][\"ccd1_reset_low\"]\n\nfpe1.cmd_start_frames() # Starts frame generation.\n\nfpe1.cmd_stop_frames() # Stops frame generation.\n\nfrom tessfpe.data.housekeeping_channels import housekeeping_channels\n\nfrom tessfpe.data.housekeeping_channels import housekeeping_channel_memory_map\n\nprint fpe1.house_keeping\n\nprint fpe1.house_keeping[\"analogue\"]\n\nfrom numpy import var\nsamples=100\nfrom tessfpe.data.housekeeping_channels import housekeeping_channels\n# We make sample_data a dictionary and each value will be a set of HK data, with key = sample_name.\nsample_data = {}\n\n# For later:\nsignal_names = []\nsignal_values = []\nsignal_data = {}\nvariance_values = {}\n \n#my_dict[\"new key\"] = \"New value\"\n\nfor i in range(samples):\n # Get a new set of HK values\n house_keeping_values = fpe1.house_keeping[\"analogue\"]\n data_values = house_keeping_values.values()\n # Add the new HK values to the sample_data dictionary:\n sample_number = \"sample_\" + str(i)\n sample_data[sample_number] = data_values\n\n# Get the signal names for use later\nsignal_names = house_keeping_values.keys()\n\n\"\"\"Assign the set of all HK values of the same signal (e.g. 
substrate_1) \nto the dictionary 'signal_data'\"\"\"\n\nfor k in range(len(signal_names)):\n # Build the list 'signal_values' for this signal:\n for i in range(samples):\n sample_number = \"sample_\" + str(i)\n signal_values.append(sample_data[sample_number][k])\n # Add signal_values to the signal_data dictionary:\n signal_data[signal_names[k]] = signal_values\n signal_values = []\n\n\"\"\" Now get the variance of each of the 'signal_values' in the \nsignal_data dictionary and put the result in the 'variance_values' \ndictionary.\"\"\"\nfor name in signal_data:\n variance_values[name] = var(signal_data[name])\n # print name, str(variance_values[name])\n print '{0} {1:<5}'.format(name, variance_values[name])\n\n\ndata = []\n\nfor i in range(10):\n set_values = {}\n for k in range(len(fpe1.ops.address)):\n if fpe1.ops.address[k] is None:\n continue\n low = fpe1.ops.address[k].low\n high = fpe1.ops.address[k].high\n name = fpe1.ops.address[k].name\n set_values[name] = fpe1.ops.address[k].value = low + i / 100. * (high - low)\n fpe1.ops.send()\n data.append({\"set values\": set_values,\"measured values\": fpe1.house_keeping[\"analogue\"]})\n print data\n\nprint sample_data\n\nv = {}\nfor name in operating_parameters.keys():\n v[name] = operating_parameters[name]\n print v[name][\"unit\"]\n print name"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai
|
notebooks/tensorflow/fashion_mnist_resnet.ipynb
|
mit
|
[
"View in Colaboratory\nFashion MNIST with Keras and Resnet\nAdapted from \n* https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/fashion_mnist.ipynb\n* https://github.com/margaretmz/deep-learning/blob/master/fashion_mnist_keras.ipynb",
"import warnings\nwarnings.filterwarnings('ignore')\n\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\n\nimport numpy as np\n\n(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\nx_train.shape\n\n# add empty color dimension\nx_train = np.expand_dims(x_train, -1)\nx_test = np.expand_dims(x_test, -1)\n\nx_train.shape\n\n# reduce memory and compute time\n# NUMBER_OF_SAMPLES = 50000\nNUMBER_OF_SAMPLES = 50000\n\nx_train_samples = x_train[:NUMBER_OF_SAMPLES]\n\ny_train_samples = y_train[:NUMBER_OF_SAMPLES]\n\nimport skimage.data\nimport skimage.transform\n\n# note: resized to 32x32 despite the _224 suffix in the name\nx_train_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_train_samples])\n\nx_train_224.shape",
"Alternative: ResNet\n\nbasic ideas\ndepth does matter\n8x deeper than VGG\npossible by using shortcuts and skipping final fc layer\nprevents vanishing gradient problem\nhttps://keras.io/applications/#resnet50\nhttps://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba\n\nhttp://arxiv.org/abs/1512.03385",
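The shortcut idea listed above can be sketched in a few lines of NumPy (a toy two-unit block, not the actual ResNet-50 layers):

```python
import numpy as np

def residual_block(x, W1, W2):
    # y = relu(x + F(x)) with F(x) = W2 @ relu(W1 @ x); the identity
    # shortcut gives gradients a path that bypasses F entirely, which is
    # what keeps very deep stacks of such blocks trainable
    relu = lambda z: np.maximum(z, 0.0)
    return relu(x + W2 @ relu(W1 @ x))

x = np.array([1.0, -2.0])
# with zero weights F(x) = 0, so the block reduces to relu(x):
# the shortcut passes the input straight through
y = residual_block(x, np.zeros((2, 2)), np.zeros((2, 2)))
```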
"from tensorflow.keras.applications.resnet50 import ResNet50\n\n# https://keras.io/applications/#mobilenet\n# https://arxiv.org/pdf/1704.04861.pdf\nfrom tensorflow.keras.applications.mobilenet import MobileNet\n\n# model = ResNet50(classes=10, weights=None, input_shape=(32, 32, 1))\nmodel = MobileNet(classes=10, weights=None, input_shape=(32, 32, 1))\n\nmodel.summary()\n\nBATCH_SIZE=10\nEPOCHS = 20\n\nmodel.compile(loss='sparse_categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\n%time history = model.fit(x_train_224, y_train_samples, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_split=0.2, verbose=1)\n\nimport pandas as pd\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\ndef plot_history(history, samples=10, init_phase_samples=None):\n epochs = history.params['epochs']\n \n acc = history.history['acc']\n val_acc = history.history['val_acc']\n\n every_sample = int(epochs / samples)\n acc = pd.DataFrame(acc).iloc[::every_sample, :]\n val_acc = pd.DataFrame(val_acc).iloc[::every_sample, :]\n\n fig, ax = plt.subplots(figsize=(20,5))\n\n ax.plot(acc, 'bo', label='Training acc')\n ax.plot(val_acc, 'b', label='Validation acc')\n ax.set_title('Training and validation accuracy')\n ax.legend()\n\nplot_history(history)",
"Checking our results (inference)",
"x_test_224 = np.array([skimage.transform.resize(image, (32, 32)) for image in x_test])\n\nLABEL_NAMES = ['t_shirt', 'trouser', 'pullover', 'dress', 'coat', 'sandal', 'shirt', 'sneaker', 'bag', 'ankle_boots']\n\n\ndef plot_predictions(images, predictions):\n n = images.shape[0]\n nc = int(np.ceil(n / 4))\n f, axes = plt.subplots(nc, 4)\n for i in range(nc * 4):\n y = i // 4\n x = i % 4\n axes[x, y].axis('off')\n \n # skip grid cells beyond the last image before indexing into predictions\n if i >= n:\n continue\n label = LABEL_NAMES[np.argmax(predictions[i])]\n confidence = np.max(predictions[i])\n axes[x, y].imshow(images[i])\n axes[x, y].text(0.5, 0.5, label + '\\n%.3f' % confidence, fontsize=14)\n\n plt.gcf().set_size_inches(8, 8) \n\nplot_predictions(np.squeeze(x_test_224[:16]), \n model.predict(x_test_224[:16]))\n\ntrain_loss, train_accuracy = model.evaluate(x_train_224, y_train_samples, batch_size=BATCH_SIZE)\ntrain_accuracy\n\ntest_loss, test_accuracy = model.evaluate(x_test_224, y_test, batch_size=BATCH_SIZE)\ntest_accuracy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
enakai00/jupyter_tfbook
|
Chapter04/MNIST dynamic filter result.ipynb
|
gpl-3.0
|
[
"[MDR-01] 必要なモジュールをインポートします。",
"import tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom tensorflow.examples.tutorials.mnist import input_data",
"[MDR-02] MNISTのデータセットを用意します。",
"mnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)",
"[MDR-03] フィルターに対応する Variable を用意して、入力データにフィルターとプーリング層を適用する計算式を定義します。",
"num_filters = 16\n\nx = tf.placeholder(tf.float32, [None, 784])\nx_image = tf.reshape(x, [-1,28,28,1])\n\nW_conv = tf.Variable(tf.truncated_normal([5,5,1,num_filters], stddev=0.1))\nh_conv = tf.nn.conv2d(x_image, W_conv,\n strides=[1,1,1,1], padding='SAME')\nh_pool = tf.nn.max_pool(h_conv, ksize=[1,2,2,1],\n strides=[1,2,2,1], padding='SAME')",
"[MDR-04] プーリング層からの出力を全結合層を経由してソフトマックス関数に入力する計算式を定義します。",
"h_pool_flat = tf.reshape(h_pool, [-1, 14*14*num_filters])\n\nnum_units1 = 14*14*num_filters\nnum_units2 = 1024\n\nw2 = tf.Variable(tf.truncated_normal([num_units1, num_units2]))\nb2 = tf.Variable(tf.zeros([num_units2]))\nhidden2 = tf.nn.relu(tf.matmul(h_pool_flat, w2) + b2)\n\nw0 = tf.Variable(tf.zeros([num_units2, 10]))\nb0 = tf.Variable(tf.zeros([10]))\np = tf.nn.softmax(tf.matmul(hidden2, w0) + b0)",
"[MDR-05] 誤差関数 loss、トレーニングアルゴリズム train_step、正解率 accuracy を定義します。",
"t = tf.placeholder(tf.float32, [None, 10])\nloss = -tf.reduce_sum(t * tf.log(p))\ntrain_step = tf.train.AdamOptimizer(0.0005).minimize(loss)\ncorrect_prediction = tf.equal(tf.argmax(p, 1), tf.argmax(t, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))",
"[MDR-06] セッションを用意して Variable を初期化した後、最適化処理を実施済みのセッションを復元します。",
"sess = tf.Session()\nsess.run(tf.initialize_all_variables())\nsaver = tf.train.Saver()\nsaver.restore(sess, 'mdc_session-4000')",
"[MDR-07] 畳込みフィルターの値と、最初の9個分の画像データに対して、畳み込みフィルターとプーリング層を適用した結果を取得します。",
"filter_vals, conv_vals, pool_vals = sess.run(\n [W_conv, h_conv, h_pool], feed_dict={x:mnist.test.images[:9]})",
"[MDR-08] 畳込みフィルターを適用した結果を画像として表示します。\n畳込みフィルターを適用した後は、ピクセル値が負の値をとることもあるため、背景(ピクセル値 0)の部分が白にならない点に注意してください。",
"fig = plt.figure(figsize=(10,num_filters+1))\n\nfor i in range(num_filters):\n subplot = fig.add_subplot(num_filters+1, 10, 10*(i+1)+1)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.imshow(filter_vals[:,:,0,i],\n cmap=plt.cm.gray_r, interpolation='nearest')\n\nfor i in range(9):\n subplot = fig.add_subplot(num_filters+1, 10, i+2)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.set_title('%d' % np.argmax(mnist.test.labels[i]))\n subplot.imshow(mnist.test.images[i].reshape((28,28)),\n vmin=0, vmax=1,\n cmap=plt.cm.gray_r, interpolation='nearest')\n\n for f in range(num_filters):\n subplot = fig.add_subplot(num_filters+1, 10, 10*(f+1)+i+2)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.imshow(conv_vals[i,:,:,f],\n cmap=plt.cm.gray_r, interpolation='nearest') ",
"[MDR-09] Similarly, display the results of applying the convolutional filters and the pooling layer as images.",
"fig = plt.figure(figsize=(10,num_filters+1))\n\nfor i in range(num_filters):\n subplot = fig.add_subplot(num_filters+1, 10, 10*(i+1)+1)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.imshow(filter_vals[:,:,0,i],\n cmap=plt.cm.gray_r, interpolation='nearest')\n\nfor i in range(9):\n subplot = fig.add_subplot(num_filters+1, 10, i+2)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.set_title('%d' % np.argmax(mnist.test.labels[i]))\n subplot.imshow(mnist.test.images[i].reshape((28,28)),\n vmin=0, vmax=1,\n cmap=plt.cm.gray_r, interpolation='nearest')\n\n for f in range(num_filters):\n subplot = fig.add_subplot(num_filters+1, 10, 10*(f+1)+i+2)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.imshow(pool_vals[i,:,:,f],\n cmap=plt.cm.gray_r, interpolation='nearest') ",
"[MDR-10] For several samples that were not classified correctly, examine the predicted probability for each digit.",
"fig = plt.figure(figsize=(12,10))\nc=0\nfor (image, label) in zip(mnist.test.images, \n mnist.test.labels):\n p_val = sess.run(p, feed_dict={x:[image]})\n pred = p_val[0]\n prediction, actual = np.argmax(pred), np.argmax(label)\n if prediction == actual:\n continue\n subplot = fig.add_subplot(5,4,c*2+1)\n subplot.set_xticks([])\n subplot.set_yticks([])\n subplot.set_title('%d / %d' % (prediction, actual))\n subplot.imshow(image.reshape((28,28)), vmin=0, vmax=1,\n cmap=plt.cm.gray_r, interpolation=\"nearest\")\n subplot = fig.add_subplot(5,4,c*2+2)\n subplot.set_xticks(range(10))\n subplot.set_xlim(-0.5,9.5)\n subplot.set_ylim(0,1)\n subplot.bar(range(10), pred, align='center')\n c += 1\n if c == 10:\n break"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
basnijholt/holoviews
|
examples/user_guide/17-Dashboards.ipynb
|
bsd-3-clause
|
[
"Creating interactive dashboards",
"import pandas as pd\nimport holoviews as hv\n\nfrom bokeh.sampledata import stocks\nfrom holoviews.operation.timeseries import rolling, rolling_outlier_std\n\nhv.extension('bokeh')",
"In the Data Processing Pipelines section we discovered how to declare a DynamicMap and control multiple processing steps with the use of custom streams as described in the Responding to Events guide. Here we will use the same example exploring a dataset of stock timeseries and build a small dashboard using the Panel library, which allows us to easily declare custom widgets and link them to our streams. We will begin by once again declaring our function that loads the stock data:",
"def load_symbol(symbol, variable='adj_close', **kwargs):\n df = pd.DataFrame(getattr(stocks, symbol))\n df['date'] = df.date.astype('datetime64[ns]')\n return hv.Curve(df, ('date', 'Date'), variable)\n\nstock_symbols = ['AAPL', 'IBM', 'FB', 'GOOG', 'MSFT']\ndmap = hv.DynamicMap(load_symbol, kdims='Symbol').redim.values(Symbol=stock_symbols)\n\ndmap.opts(framewise=True)",
"Building dashboards\nControlling stream events manually from the Python prompt can be a bit cumbersome. However, since you can now trigger events from Python, we can easily bind any Python-based widget framework to the stream. HoloViews itself is based on param, and param has various UI toolkits that accompany it and allow you to quickly generate a set of widgets. Here we will use panel, which is based on bokeh, to control our stream values.\nTo do so we will declare a StockExplorer class subclassing Parameterized, which defines three parameters: the rolling_window as an Integer, and the symbol and variable as ObjectSelectors. Additionally we define a load_symbol method, which returns a Curve of the selected variable for the selected stock symbol.",
"import param\nimport panel as pn\nfrom holoviews.streams import Params\n\nclass StockExplorer(param.Parameterized):\n\n rolling_window = param.Integer(default=10, bounds=(1, 365))\n \n symbol = param.ObjectSelector(default='AAPL', objects=stock_symbols)\n \n variable = param.ObjectSelector(default='adj_close', objects=[\n 'date', 'open', 'high', 'low', 'close', 'volume', 'adj_close'])\n\n @param.depends('symbol', 'variable')\n def load_symbol(self):\n df = pd.DataFrame(getattr(stocks, self.symbol))\n df['date'] = df.date.astype('datetime64[ns]')\n return hv.Curve(df, ('date', 'Date'), self.variable).opts(framewise=True)",
"You will have noticed the param.depends decorator on the load_symbol method above; this declares that the method depends on those parameters. When we pass the method to a DynamicMap, it will automatically listen for changes to the 'symbol' and 'variable' parameters. To generate a set of widgets to control these parameters, we can simply supply the explorer.param accessor to a panel layout, and by combining the two we can quickly build a little GUI:",
"explorer = StockExplorer()\n\nstock_dmap = hv.DynamicMap(explorer.load_symbol)\n\npn.Row(explorer.param, stock_dmap)",
"The rolling_window parameter is not yet connected to anything, however, so just like in the Data Processing Pipelines section we will see how we can get the widget to control the parameters of an operation. Both the rolling and rolling_outlier_std operations accept a rolling_window parameter, so we create a Params stream to listen to that parameter and then pass it to the operations. Finally, we compose everything into a panel Row:",
"# Apply rolling mean\nwindow = Params(explorer, ['rolling_window'])\nsmoothed = rolling(stock_dmap, streams=[window])\n\n# Find outliers\noutliers = rolling_outlier_std(stock_dmap, streams=[window]).opts(\n hv.opts.Scatter(color='red', marker='triangle')\n)\n\npn.Row(explorer.param, (smoothed * outliers).opts(width=600))",
"Replacing the output\nUpdating plots using a DynamicMap is a very efficient means of updating a plot since it will only update the data that has changed. In some cases it is either necessary or more convenient to redraw a plot entirely. Panel makes this easy by annotating a method with any dependencies that should trigger the plot to be redrawn. In the example below we extend the StockExplorer by adding a datashade boolean and a view method which will flip between a datashaded and regular view of the plot:",
"from holoviews.operation.datashader import datashade, dynspread\n\nclass AdvancedStockExplorer(StockExplorer): \n\n datashade = param.Boolean(default=False)\n\n @param.depends('datashade')\n def view(self):\n stocks = hv.DynamicMap(self.load_symbol)\n\n # Apply rolling mean\n window = Params(self, ['rolling_window'])\n smoothed = rolling(stocks, streams=[window])\n if self.datashade:\n smoothed = dynspread(datashade(smoothed)).opts(framewise=True)\n\n # Find outliers\n outliers = rolling_outlier_std(stocks, streams=[window]).opts(\n width=600, color='red', marker='triangle', framewise=True)\n return (smoothed * outliers)",
"In the previous example we explicitly called the view method, but to allow panel to update the plot when the datashade parameter is toggled we instead pass it the actual view method. Whenever the datashade parameter is toggled panel will call the method and update the plot with whatever is returned:",
"explorer = AdvancedStockExplorer()\npn.Row(explorer.param, explorer.view)",
"As you can see, by using streams we have bound the widgets to the stream values, letting us easily control them and making it trivial to define complex dashboards. For more information on how to deploy bokeh apps from HoloViews and build dashboards see the Deploying Bokeh Apps guide."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
datascience-practice/data-quest
|
python_introduction/beginner/Challenge Files Loops and Conditional Logic.ipynb
|
mit
|
[
"2: Unisex names\n3: Read the file into a string\nInstructions\nUse the open() function to return a File object with the parameters:\n\ndq_unisex_names.csv for the file name\nr for read mode\n\nThen use the read() method of the File object to read the file into a string. Assign that string to a variable named data.",
"f = open(\"dq_unisex_names.csv\", \"r\")\ndata = f.read()\nprint(data)",
"4: Convert the string to a list\nInstructions\nUse the split() method that strings have to split on the new-line delimiter (\"\\n\") and assign the resulting list to data_list. Then use the print() function to display the first 5 elements in data_list.\nAnswer",
"f = open('dq_unisex_names.csv', 'r')\ndata = f.read()\ndata_list = data.split(\"\\n\")\nprint(data_list[:5])",
"5: Convert the list of strings to a list of lists\nInstructions\nSplit each element in data_list on the comma delimiter (,) and append the resulting list to string_data.\nTo accomplish this:\n\ncreate an empty list and assign it to string_data\nwrite a for loop that iterates over data_list\nwithin the loop body, run the split() method on each element to return a list (call that list comma_list)\nwithin the loop body, run the append() method to add each list (comma_list) to string_data.\n\nFinally, use the print() function to display the first 5 elements in string_data.\nAnswer",
"f = open('dq_unisex_names.csv', 'r')\ndata = f.read()\ndata_list = data.split('\\n')\n\nstring_data = []\nfor data_elm in data_list:\n comma_list = data_elm.split(\",\")\n string_data.append(comma_list)\n \nprint(string_data[:5])",
"6: Convert numerical values\nInstructions\nCreate a new list of lists called numerical_data where:\n\nthe value at index 0 for each list is the unisex name (as a string)\nthe value at index 1 for each list is the number of people who share that name (as a float)\n\nTo accomplish this:\n\ncreate an empty list and assign it to numerical_data\nwrite a for loop that iterates over string_data\nin the loop body, retrieve the value at index 0 and assign it to a variable\nin the loop body, retrieve the value at index 1, convert it to a float, and assign it to a variable\nin the loop body, create a new list containing these 2 values (in the same order)\nin the loop body, use the append() method to add this new list to numerical_data\n\nFinally, display the first 5 elements in numerical_data.\nAnswer",
"numerical_data = []\nfor str_elm in string_data:\n if len(str_elm) != 2:\n continue\n name = str_elm[0]\n num = float(str_elm[1])\n lst = [name, num]\n numerical_data.append(lst)\n \nprint(numerical_data[:5])",
"7: Filter the list\nInstructions\nCreate a new list of strings called thousand_or_greater that contains just the names that at least a thousand people share.\nTo accomplish this:\n\ncreate an empty list and assign to thousand_or_greater\nwrite a for loop that iterates over numerical_data\nin the loop body, use an if statement to determine if the value at index 1 for that element (a list) is greater than or equal to 1000\nif the value is larger than or equal to 1000, use the append() method to add it to thousand_or_greater\n\nFinally, display the first 10 elements in thousand_or_greater.\nAnswer",
"# The last value is ~100 people\nnumerical_data[len(numerical_data)-1]\n\nthousand_or_greater = []\nfor num in numerical_data:\n if num[1] >= 1000:\n thousand_or_greater.append(num[0])\n \nprint(thousand_or_greater[:10])"
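As a side note, the whole pipeline above can be sketched more concisely with Python's built-in csv module. This is only an illustrative alternative: the rows below are fabricated stand-ins for the contents of dq_unisex_names.csv, which the lesson assumes exists on disk.

```python
import csv
import io

# Alternative sketch of the pipeline using the built-in csv module.
# The sample rows are fabricated stand-ins for dq_unisex_names.csv.
sample = "Casey,176544.3\nRiley,154860.6\nLow,999.0\n"
reader = csv.reader(io.StringIO(sample))
numerical_data = [[name, float(num)] for name, num in reader]
thousand_or_greater = [name for name, num in numerical_data if num >= 1000]
print(thousand_or_greater)  # ['Casey', 'Riley']
```

The csv module also handles quoted fields and embedded commas, which the manual split(",") approach does not.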
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rebeccabilbro/viz
|
bokeh/gapminder.ipynb
|
mit
|
[
"Bokeh Visualization Demo\nRecreating Hans Rosling's \"The Health and Wealth of Nations\"\nThis notebook is intended to illustrate some of the utilities of the Python Bokeh visualization library.",
"import numpy as np\nimport pandas as pd\n\nfrom bokeh.embed import file_html\nfrom bokeh.io import output_notebook, show\nfrom bokeh.layouts import layout\nfrom bokeh.models import (\n ColumnDataSource, Plot, Circle, Range1d, LinearAxis, HoverTool, \n Text, SingleIntervalTicker, Slider, CustomJS)\nfrom bokeh.palettes import Spectral6\n\noutput_notebook()\n\nimport bokeh.sampledata\nbokeh.sampledata.download()",
"Setting up the data\nThe plot animates with the slider showing the data over time from 1964 to 2013. We can think of each year as a separate static plot, and when the slider moves, we use the Callback to change the data source that is driving the plot. \nWe could use bokeh-server to drive this change, but as the data is not too big we can also pass all the datasets to the javascript at once and switch between them on the client side. \nThis means that we need to build one data source for each year that we have data for and are going to switch between using the slider. We build them and add them to a dictionary sources that holds them under a key that is the name of the year prefixed with a _.",
"def process_data():\n from bokeh.sampledata.gapminder import fertility, life_expectancy, population, regions\n\n # Make the column names ints not strings for handling\n columns = list(fertility.columns)\n years = list(range(int(columns[0]), int(columns[-1])))\n rename_dict = dict(zip(columns, years))\n\n fertility = fertility.rename(columns=rename_dict)\n life_expectancy = life_expectancy.rename(columns=rename_dict)\n population = population.rename(columns=rename_dict)\n regions = regions.rename(columns=rename_dict)\n\n # Turn population into bubble sizes. Use min_size and factor to tweak.\n scale_factor = 200\n population_size = np.sqrt(population / np.pi) / scale_factor\n min_size = 3\n population_size = population_size.where(population_size >= min_size).fillna(min_size)\n\n # Use pandas categories and categorize & color the regions\n regions.Group = regions.Group.astype('category')\n regions_list = list(regions.Group.cat.categories)\n\n def get_color(r):\n return Spectral6[regions_list.index(r.Group)]\n regions['region_color'] = regions.apply(get_color, axis=1)\n\n return fertility, life_expectancy, population_size, regions, years, regions_list\n\nfertility_df, life_expectancy_df, population_df_size, regions_df, years, regions = process_data()\n\nsources = {}\n\nregion_color = regions_df['region_color']\nregion_color.name = 'region_color'\n\nfor year in years:\n fertility = fertility_df[year]\n fertility.name = 'fertility'\n life = life_expectancy_df[year]\n life.name = 'life' \n population = population_df_size[year]\n population.name = 'population' \n new_df = pd.concat([fertility, life, population, region_color], axis=1)\n sources['_' + str(year)] = ColumnDataSource(new_df)",
"sources looks like this \n{'_1964': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165cc0>,\n '_1965': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165b00>,\n '_1966': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d1656a0>,\n '_1967': <bokeh.models.sources.ColumnDataSource at 0x7f7e7d165ef0>,\n '_1968': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9dac18>,\n '_1969': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da9b0>,\n '_1970': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da668>,\n '_1971': <bokeh.models.sources.ColumnDataSource at 0x7f7e7e9da0f0>... \nWe will pass this dictionary to the Callback. In doing so, we will find that in our javascript we have an object called, for example, _1964 that refers to our ColumnDataSource. Note that we needed the prefixing as JS identifiers cannot begin with a number. \nFinally we construct a string that we can insert into our javascript code to define an object. \nThe string looks like this: {1964: _1964, 1965: _1965, ...} \nNote the keys of this object are integers and the values are the references to our ColumnDataSources from above. Now, in our JS code, we have an object that's storing all of our ColumnDataSources and we can look them up.",
"dictionary_of_sources = dict(zip([x for x in years], ['_%s' % x for x in years]))\njs_source_array = str(dictionary_of_sources).replace(\"'\", \"\")",
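As a quick sanity check, here is a minimal toy version of the same string construction, using a fabricated three-year list so the result is short enough to read at a glance:

```python
# Toy version of the JS lookup-string construction above,
# using a fabricated three-year list for illustration.
years = [1964, 1965, 1966]
dictionary_of_sources = dict(zip(years, ["_%s" % y for y in years]))
js_source_array = str(dictionary_of_sources).replace("'", "")
print(js_source_array)  # {1964: _1964, 1965: _1965, 1966: _1966}
```

Stripping the quotes is what turns the Python string values into bare JS identifiers, so the generated object literal references the ColumnDataSource variables rather than strings.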
"Build the plot",
"# Set up the plot\nxdr = Range1d(1, 9)\nydr = Range1d(20, 100)\nplot = Plot(\n x_range=xdr,\n y_range=ydr,\n plot_width=800,\n plot_height=400,\n outline_line_color=None,\n toolbar_location=None, \n min_border=20,\n)",
"Build the axes",
"AXIS_FORMATS = dict(\n minor_tick_in=None,\n minor_tick_out=None,\n major_tick_in=None,\n major_label_text_font_size=\"10pt\",\n major_label_text_font_style=\"normal\",\n axis_label_text_font_size=\"10pt\",\n\n axis_line_color='#AAAAAA',\n major_tick_line_color='#AAAAAA',\n major_label_text_color='#666666',\n\n major_tick_line_cap=\"round\",\n axis_line_cap=\"round\",\n axis_line_width=1,\n major_tick_line_width=1,\n)\n\nxaxis = LinearAxis(ticker=SingleIntervalTicker(interval=1), axis_label=\"Children per woman (total fertility)\", **AXIS_FORMATS)\nyaxis = LinearAxis(ticker=SingleIntervalTicker(interval=20), axis_label=\"Life expectancy at birth (years)\", **AXIS_FORMATS) \nplot.add_layout(xaxis, 'below')\nplot.add_layout(yaxis, 'left')",
"Add the background year text\nWe add this first so it is below all the other glyphs",
"# Add the year in background (add before circle)\ntext_source = ColumnDataSource({'year': ['%s' % years[0]]})\ntext = Text(x=2, y=35, text='year', text_font_size='150pt', text_color='#EEEEEE')\nplot.add_glyph(text_source, text)",
"Add the bubbles and hover\nWe add the bubbles using the Circle glyph. We start from the first year of data and that is our source that drives the circles (the other sources will be used later). \nplot.add_glyph returns the renderer, and we pass this to the HoverTool so that hover only happens for the bubbles on the page and not other glyph elements.",
"# Add the circle\nrenderer_source = sources['_%s' % years[0]]\ncircle_glyph = Circle(\n x='fertility', y='life', size='population',\n fill_color='region_color', fill_alpha=0.8, \n line_color='#7c7e71', line_width=0.5, line_alpha=0.5)\ncircle_renderer = plot.add_glyph(renderer_source, circle_glyph)\n\n# Add the hover (only against the circle and not other plot elements)\ntooltips = \"@index\"\nplot.add_tools(HoverTool(tooltips=tooltips, renderers=[circle_renderer]))",
"Add the legend\nWe manually build the legend by adding circles and texts to the upper-right portion of the plot.",
"text_x = 7\ntext_y = 95\nfor i, region in enumerate(regions):\n plot.add_glyph(Text(x=text_x, y=text_y, text=[region], text_font_size='10pt', text_color='#666666'))\n plot.add_glyph(Circle(x=text_x - 0.1, y=text_y + 2, fill_color=Spectral6[i], size=10, line_color=None, fill_alpha=0.8))\n text_y = text_y - 5 ",
"Add the slider and callback\nNext we add the slider widget and the JS callback code which changes the data of the renderer_source (powering the bubbles / circles) and the data of the text_source (powering background text). After we've set() the data we need to trigger() a change. slider, renderer_source, text_source are all available because we add them as args to Callback. \nIt is the combination of sources = %s % (js_source_array) in the JS and Callback(args=sources...) that provides the ability to look-up, by year, the JS version of our Python-made ColumnDataSource.",
"# Add the slider\ncode = \"\"\"\n var year = slider.get('value'),\n sources = %s,\n new_source_data = sources[year].get('data');\n renderer_source.set('data', new_source_data);\n text_source.set('data', {'year': [String(year)]});\n\"\"\" % js_source_array\n\ncallback = CustomJS(args=sources, code=code)\nslider = Slider(start=years[0], end=years[-1], value=1, step=1, title=\"Year\", callback=callback)\ncallback.args[\"renderer_source\"] = renderer_source\ncallback.args[\"slider\"] = slider\ncallback.args[\"text_source\"] = text_source",
"Render together with a slider\nLast but not least, we put the chart and the slider together in a layout and display it inline in the notebook.",
"# Stick the plot and the slider together\nshow(layout([[plot], [slider]], sizing_mode='scale_width'))",
"Check out Rosling's version here: https://www.gapminder.org/world \nCheck out Bostock's D3 version here: https://bost.ocks.org/mike/nations/"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/quantum
|
docs/tutorials/quantum_data.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Quantum data\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/quantum/tutorials/quantum_data\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/quantum/blob/master/docs/tutorials/quantum_data.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/quantum/blob/master/docs/tutorials/quantum_data.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/quantum/docs/tutorials/quantum_data.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nBuilding off of the comparisons made in the MNIST tutorial, this tutorial explores the recent work of Huang et al. that shows how different datasets affect performance comparisons. In the work, the authors seek to understand how and when classical machine learning models can learn as well as (or better than) quantum models. The work also showcases an empirical performance separation between classical and quantum machine learning models via a carefully crafted dataset. You will:\n\nPrepare a reduced dimension Fashion-MNIST dataset.\nUse quantum circuits to re-label the dataset and compute Projected Quantum Kernel features (PQK).\nTrain a classical neural network on the re-labeled dataset and compare the performance with a model that has access to the PQK features.\n\nSetup",
"!pip install tensorflow==2.7.0 tensorflow-quantum\n\n# Update package resources to account for version changes.\nimport importlib, pkg_resources\nimportlib.reload(pkg_resources)\n\nimport cirq\nimport sympy\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_quantum as tfq\n\n# visualization tools\n%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom cirq.contrib.svg import SVGCircuit\nnp.random.seed(1234)",
"1. Data preparation\nYou will begin by preparing the fashion-MNIST dataset for running on a quantum computer.\n1.1 Download fashion-MNIST\nThe first step is to get the traditional fashion-mnist dataset. This can be done using the tf.keras.datasets module.",
"(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()\n\n# Rescale the images from [0,255] to the [0.0,1.0] range.\nx_train, x_test = x_train/255.0, x_test/255.0\n\nprint(\"Number of original training examples:\", len(x_train))\nprint(\"Number of original test examples:\", len(x_test))",
"Filter the dataset to keep just the T-shirts/tops and dresses, removing the other classes. At the same time, convert the label, y, to boolean: True for class 0 (T-shirt/top) and False for class 3 (dress).",
"def filter_03(x, y):\n keep = (y == 0) | (y == 3)\n x, y = x[keep], y[keep]\n y = y == 0\n return x,y\n\nx_train, y_train = filter_03(x_train, y_train)\nx_test, y_test = filter_03(x_test, y_test)\n\nprint(\"Number of filtered training examples:\", len(x_train))\nprint(\"Number of filtered test examples:\", len(x_test))\n\nprint(y_train[0])\n\nplt.imshow(x_train[0, :, :])\nplt.colorbar()",
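The boolean-mask filtering inside filter_03 can be illustrated on its own with a minimal NumPy sketch; the labels below are fabricated toy values standing in for the real fashion-MNIST labels:

```python
import numpy as np

# Toy illustration of the boolean-mask filtering used in filter_03,
# with fabricated labels in place of the fashion-MNIST classes.
x = np.arange(6)                    # six fake "images"
y = np.array([0, 3, 1, 0, 3, 5])    # fake labels
keep = (y == 0) | (y == 3)          # mask selecting classes 0 and 3
x, y = x[keep], y[keep]
y = (y == 0)                        # True for class 0, False for class 3
print(x)  # [0 1 3 4]
print(y)  # [ True False  True False]
```

Because the same mask indexes both arrays, images and labels stay aligned after filtering.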
"1.2 Downscale the images\nJust like the MNIST example, you will need to downscale these images in order to be within the boundaries for current quantum computers. This time however you will use a PCA transformation to reduce the dimensions instead of a tf.image.resize operation.",
"def truncate_x(x_train, x_test, n_components=10):\n \"\"\"Perform PCA on image dataset keeping the top `n_components` components.\"\"\"\n n_points_train = tf.gather(tf.shape(x_train), 0)\n n_points_test = tf.gather(tf.shape(x_test), 0)\n\n # Flatten to 1D\n x_train = tf.reshape(x_train, [n_points_train, -1])\n x_test = tf.reshape(x_test, [n_points_test, -1])\n\n # Normalize.\n feature_mean = tf.reduce_mean(x_train, axis=0)\n x_train_normalized = x_train - feature_mean\n x_test_normalized = x_test - feature_mean\n\n # Truncate.\n e_values, e_vectors = tf.linalg.eigh(\n tf.einsum('ji,jk->ik', x_train_normalized, x_train_normalized))\n return tf.einsum('ij,jk->ik', x_train_normalized, e_vectors[:,-n_components:]), \\\n tf.einsum('ij,jk->ik', x_test_normalized, e_vectors[:, -n_components:])\n\nDATASET_DIM = 10\nx_train, x_test = truncate_x(x_train, x_test, n_components=DATASET_DIM)\nprint(f'New datapoint dimension:', len(x_train[0]))",
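The eigendecomposition trick in truncate_x can be checked in isolation: projecting centered data onto the top-k eigenvectors of $X^T X$ keeps exactly k columns. A NumPy-only sketch with random data (shapes chosen arbitrarily for illustration):

```python
import numpy as np

# NumPy-only sketch of the PCA-style truncation above: project centered
# data onto the top-3 eigenvectors of X^T X. The data is random and the
# shapes are fabricated; this only checks the mechanics.
rng = np.random.default_rng(0)
x = rng.normal(size=(50, 8))
x = x - x.mean(axis=0)                  # center, as in truncate_x
_, e_vectors = np.linalg.eigh(x.T @ x)  # eigh returns ascending order
x_reduced = x @ e_vectors[:, -3:]       # keep the top 3 components
print(x_reduced.shape)  # (50, 3)
```

Note that eigh orders eigenvalues ascending, which is why both this sketch and truncate_x slice the last columns to get the largest components.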
"The last step is to reduce the size of the dataset to just 1000 training datapoints and 200 testing datapoints.",
"N_TRAIN = 1000\nN_TEST = 200\nx_train, x_test = x_train[:N_TRAIN], x_test[:N_TEST]\ny_train, y_test = y_train[:N_TRAIN], y_test[:N_TEST]\n\nprint(\"New number of training examples:\", len(x_train))\nprint(\"New number of test examples:\", len(x_test))",
"2. Relabeling and computing PQK features\nYou will now prepare a \"stilted\" quantum dataset by incorporating quantum components and re-labeling the truncated fashion-MNIST dataset you've created above. In order to get the most separation between quantum and classical methods, you will first prepare the PQK features and then relabel outputs based on their values. \n2.1 Quantum encoding and PQK features\nYou will create a new set of features, based on x_train, y_train, x_test and y_test that is defined to be the 1-RDM on all qubits of: \n$V(x_{\\text{train}} / n_{\\text{trotter}}) ^ {n_{\\text{trotter}}} U_{\\text{1qb}} | 0 \\rangle$\nWhere $U_\\text{1qb}$ is a wall of single qubit rotations and $V(\\hat{\\theta}) = e^{-i\\sum_i \\hat{\\theta_i} (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1})}$\nFirst, you can generate the wall of single qubit rotations:",
"def single_qubit_wall(qubits, rotations):\n \"\"\"Prepare a single qubit X,Y,Z rotation wall on `qubits`.\"\"\"\n wall_circuit = cirq.Circuit()\n for i, qubit in enumerate(qubits):\n for j, gate in enumerate([cirq.X, cirq.Y, cirq.Z]):\n wall_circuit.append(gate(qubit) ** rotations[i][j])\n\n return wall_circuit",
"You can quickly verify this works by looking at the circuit:",
"SVGCircuit(single_qubit_wall(\n cirq.GridQubit.rect(1,4), np.random.uniform(size=(4, 3))))",
"Next you can prepare $V(\\hat{\\theta})$ with the help of tfq.util.exponential which can exponentiate any commuting cirq.PauliSum objects:",
"def v_theta(qubits):\n \"\"\"Prepares a circuit that generates V(\\theta).\"\"\"\n ref_paulis = [\n cirq.X(q0) * cirq.X(q1) + \\\n cirq.Y(q0) * cirq.Y(q1) + \\\n cirq.Z(q0) * cirq.Z(q1) for q0, q1 in zip(qubits, qubits[1:])\n ]\n exp_symbols = list(sympy.symbols('ref_0:'+str(len(ref_paulis))))\n return tfq.util.exponential(ref_paulis, exp_symbols), exp_symbols",
"This circuit might be a little bit harder to verify by looking at, but you can still examine a two qubit case to see what is happening:",
"test_circuit, test_symbols = v_theta(cirq.GridQubit.rect(1, 2))\nprint(f'Symbols found in circuit:{test_symbols}')\nSVGCircuit(test_circuit)",
"Now you have all the building blocks you need to put your full encoding circuits together:",
"def prepare_pqk_circuits(qubits, classical_source, n_trotter=10):\n \"\"\"Prepare the pqk feature circuits around a dataset.\"\"\"\n n_qubits = len(qubits)\n n_points = len(classical_source)\n\n # Prepare random single qubit rotation wall.\n random_rots = np.random.uniform(-2, 2, size=(n_qubits, 3))\n initial_U = single_qubit_wall(qubits, random_rots)\n\n # Prepare parametrized V\n V_circuit, symbols = v_theta(qubits)\n exp_circuit = cirq.Circuit(V_circuit for t in range(n_trotter))\n \n # Convert to `tf.Tensor`\n initial_U_tensor = tfq.convert_to_tensor([initial_U])\n initial_U_splat = tf.tile(initial_U_tensor, [n_points])\n\n full_circuits = tfq.layers.AddCircuit()(\n initial_U_splat, append=exp_circuit)\n # Replace placeholders in circuits with values from `classical_source`.\n return tfq.resolve_parameters(\n full_circuits, tf.convert_to_tensor([str(x) for x in symbols]),\n tf.convert_to_tensor(classical_source*(n_qubits/3)/n_trotter))",
"Choose some qubits and prepare the data encoding circuits:",
"qubits = cirq.GridQubit.rect(1, DATASET_DIM + 1)\nq_x_train_circuits = prepare_pqk_circuits(qubits, x_train)\nq_x_test_circuits = prepare_pqk_circuits(qubits, x_test)",
"Next, compute the PQK features based on the 1-RDM of the dataset circuits above and store the results in rdm, a tf.Tensor with shape [n_points, n_qubits, 3]. The entries in rdm[i][j][k] = $\\langle \\psi_i | OP^k_j | \\psi_i \\rangle$ where i indexes over datapoints, j indexes over qubits and k indexes over $\\lbrace \\hat{X}, \\hat{Y}, \\hat{Z} \\rbrace$ .",
"def get_pqk_features(qubits, data_batch):\n \"\"\"Get PQK features based on above construction.\"\"\"\n ops = [[cirq.X(q), cirq.Y(q), cirq.Z(q)] for q in qubits]\n ops_tensor = tf.expand_dims(tf.reshape(tfq.convert_to_tensor(ops), -1), 0)\n batch_dim = tf.gather(tf.shape(data_batch), 0)\n ops_splat = tf.tile(ops_tensor, [batch_dim, 1])\n exp_vals = tfq.layers.Expectation()(data_batch, operators=ops_splat)\n rdm = tf.reshape(exp_vals, [batch_dim, len(qubits), -1])\n return rdm\n\nx_train_pqk = get_pqk_features(qubits, q_x_train_circuits)\nx_test_pqk = get_pqk_features(qubits, q_x_test_circuits)\nprint('New PQK training dataset has shape:', x_train_pqk.shape)\nprint('New PQK testing dataset has shape:', x_test_pqk.shape)",
"2.2 Re-labeling based on PQK features\nNow that you have these quantum generated features in x_train_pqk and x_test_pqk, it is time to re-label the dataset. To achieve maximum separation between quantum and classical performance you can re-label the dataset based on the spectrum information found in x_train_pqk and x_test_pqk.\nNote: This preparation of your dataset to explicitly maximize the separation in performance between the classical and quantum models might feel like cheating, but it provides a very important proof of existence for datasets that are hard for classical computers and easy for quantum computers to model. There would be no point in searching for quantum advantage in QML if you couldn't first create something like this to demonstrate advantage.",
"def compute_kernel_matrix(vecs, gamma):\n \"\"\"Computes d[i][j] = e^ -gamma * (vecs[i] - vecs[j]) ** 2 \"\"\"\n scaled_gamma = gamma / (\n tf.cast(tf.gather(tf.shape(vecs), 1), tf.float32) * tf.math.reduce_std(vecs))\n return scaled_gamma * tf.einsum('ijk->ij',(vecs[:,None,:] - vecs) ** 2)\n\ndef get_spectrum(datapoints, gamma=1.0):\n \"\"\"Compute the eigenvalues and eigenvectors of the kernel of datapoints.\"\"\"\n KC_qs = compute_kernel_matrix(datapoints, gamma)\n S, V = tf.linalg.eigh(KC_qs)\n S = tf.math.abs(S)\n return S, V\n\nS_pqk, V_pqk = get_spectrum(\n tf.reshape(tf.concat([x_train_pqk, x_test_pqk], 0), [-1, len(qubits) * 3]))\n\nS_original, V_original = get_spectrum(\n tf.cast(tf.concat([x_train, x_test], 0), tf.float32), gamma=0.005)\n\nprint('Eigenvectors of pqk kernel matrix:', V_pqk)\nprint('Eigenvectors of original kernel matrix:', V_original)",
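The broadcasting expression vecs[:, None, :] - vecs inside compute_kernel_matrix builds all pairwise differences in one step. A NumPy sketch of just that trick, using two fabricated 2-D vectors whose distance is easy to check by hand:

```python
import numpy as np

# NumPy sketch of the pairwise squared-distance broadcasting trick used
# in compute_kernel_matrix above. The two vectors are fabricated so the
# distance (a 3-4-5 triangle) is easy to verify by hand.
vecs = np.array([[0.0, 0.0], [3.0, 4.0]])
sq_dists = ((vecs[:, None, :] - vecs) ** 2).sum(axis=-1)
print(sq_dists)
# [[ 0. 25.]
#  [25.  0.]]
```

Inserting the length-1 axis makes the subtraction broadcast to shape (n, n, d), so summing over the last axis yields the full n-by-n matrix of squared distances.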
"Now you have everything you need to re-label the dataset! You can consult the flowchart to better understand how to maximize performance separation when re-labeling the dataset:\n<img src=\"./images/quantum_data_1.png\">\nIn order to maximize the separation between quantum and classical models, you will attempt to maximize the geometric difference between the original dataset and the PQK features kernel matrices $g(K_1 || K_2) = \\sqrt{ || \\sqrt{K_2} K_1^{-1} \\sqrt{K_2} || _\\infty}$ using S_pqk, V_pqk and S_original, V_original. A large value of $g$ ensures that you initially move to the right in the flowchart down towards a prediction advantage in the quantum case.\nNote: Computing quantities for $s$ and $d$ is also very useful when looking to better understand performance separations. In this case ensuring a large $g$ value is enough to see performance separation.",
"def get_stilted_dataset(S, V, S_2, V_2, lambdav=1.1):\n    \"\"\"Prepare new labels that maximize geometric distance between kernels.\"\"\"\n    S_diag = tf.linalg.diag(S ** 0.5)\n    S_2_diag = tf.linalg.diag(S_2 / (S_2 + lambdav) ** 2)\n    scaling = S_diag @ tf.transpose(V) @ \\\n        V_2 @ S_2_diag @ tf.transpose(V_2) @ \\\n        V @ S_diag\n\n    # Generate new labels using the largest eigenvector.\n    _, vecs = tf.linalg.eig(scaling)\n    new_labels = tf.math.real(\n        tf.einsum('ij,j->i', tf.cast(V @ S_diag, tf.complex64), vecs[-1])).numpy()\n    # Create new labels and add some small amount of noise.\n    final_y = new_labels > np.median(new_labels)\n    noisy_y = (final_y ^ (np.random.uniform(size=final_y.shape) > 0.95))\n    return noisy_y\n\ny_relabel = get_stilted_dataset(S_pqk, V_pqk, S_original, V_original)\ny_train_new, y_test_new = y_relabel[:N_TRAIN], y_relabel[N_TRAIN:]",
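The noise step at the end of get_stilted_dataset flips each label independently with probability of roughly 0.05 (the uniform > 0.95 mask XOR-ed into the labels). A minimal, seeded numpy illustration of that trick (a standalone sketch, not tutorial code):

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.zeros(10_000, dtype=bool)
# XOR with a mask that is True ~5% of the time flips ~5% of the labels.
noisy = labels ^ (rng.uniform(size=labels.shape) > 0.95)
print(noisy.mean())  # fraction of flipped labels, close to 0.05
```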
"3. Comparing models\nNow that you have prepared your dataset, it is time to compare model performance. You will create two small feedforward neural networks and compare performance when they are given access to the PQK features found in x_train_pqk.\n3.1 Create PQK enhanced model\nUsing standard tf.keras library features you can now create and train a model on the x_train_pqk and y_train_new datapoints:",
"#docs_infra: no_execute\ndef create_pqk_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[len(qubits) * 3,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\npqk_model = create_pqk_model()\npqk_model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.003),\n metrics=['accuracy'])\n\npqk_model.summary()\n\n#docs_infra: no_execute\npqk_history = pqk_model.fit(tf.reshape(x_train_pqk, [N_TRAIN, -1]),\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(tf.reshape(x_test_pqk, [N_TEST, -1]), y_test_new))",
"3.2 Create a classical model\nSimilar to the code above, you can now also create a classical model that doesn't have access to the PQK features in your stilted dataset. This model can be trained using x_train and y_train_new.",
"#docs_infra: no_execute\ndef create_fair_classical_model():\n model = tf.keras.Sequential()\n model.add(tf.keras.layers.Dense(32, activation='sigmoid', input_shape=[DATASET_DIM,]))\n model.add(tf.keras.layers.Dense(16, activation='sigmoid'))\n model.add(tf.keras.layers.Dense(1))\n return model\n\nmodel = create_fair_classical_model()\nmodel.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(learning_rate=0.03),\n metrics=['accuracy'])\n\nmodel.summary()\n\n#docs_infra: no_execute\nclassical_history = model.fit(x_train,\n y_train_new,\n batch_size=32,\n epochs=1000,\n verbose=0,\n validation_data=(x_test, y_test_new))",
"3.3 Compare performance\nNow that you have trained the two models, you can quickly plot the performance gaps in the validation data between the two. Typically both models will achieve > 0.9 accuracy on the training data. However, on the validation data it becomes clear that only the information found in the PQK features is enough to make the model generalize well to unseen instances.",
"#docs_infra: no_execute\nplt.figure(figsize=(10,5))\nplt.plot(classical_history.history['accuracy'], label='accuracy_classical')\nplt.plot(classical_history.history['val_accuracy'], label='val_accuracy_classical')\nplt.plot(pqk_history.history['accuracy'], label='accuracy_quantum')\nplt.plot(pqk_history.history['val_accuracy'], label='val_accuracy_quantum')\nplt.xlabel('Epoch')\nplt.ylabel('Accuracy')\nplt.legend()",
"Success: You have engineered a stilted quantum dataset that can intentionally defeat classical models in a fair (but contrived) setting. Try comparing results using other types of classical models. The next step is to see if you can find new and interesting datasets that can defeat classical models without needing to engineer them yourself!\n4. Important conclusions\nThere are several important conclusions you can draw from this and the MNIST experiments:\n\n\nIt's very unlikely that the quantum models of today will beat classical model performance on classical data, especially on today's classical datasets that can have upwards of a million datapoints.\n\n\nJust because the data might come from a quantum circuit that is hard to simulate classically doesn't necessarily make the data hard for a classical model to learn.\n\n\nDatasets (ultimately quantum in nature) that are easy for quantum models to learn and hard for classical models to learn do exist, regardless of model architecture or training algorithms used."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tebeka/pythonwise
|
numpy-testing.ipynb
|
bsd-3-clause
|
[
"numpy Testing\nMiki Tebeka .:. 353solutions .:. Highly effective Python, Scientific Python and Go workshops\nWe'll explore certain caveats while testing numpy code.\nTL;DR\nUse np.allclose when comparing numpy arrays. Beware of nan.",
"import numpy as np",
"The Naive Approach",
"def test_mul():\n arr = np.array([0.0, 1.0, 1.1])\n v, expected = 1.1, np.array([0.0, 1.1, 1.21])\n assert arr * v == expected, 'bad multiplication'\n \ntest_mul()",
"This is due to the fact that when we compare two numpy arrays with == we'll get an array of boolean values comparing each element.",
"np.array([1,2,3]) == np.array([1, 1, 3])",
"And the truth value of an array (as the error says) is ambiguous.",
"bool(np.array([1, 2, 3]))",
"We need to use np.all to check that all elements are equal.",
"np.all([True, True, True])",
"Using np.all",
"def test_mul():\n arr = np.array([0.0, 1.0, 1.1])\n v, expected = 1.1, np.array([0.0, 1.1, 1.21])\n assert np.all(arr * v == expected), 'bad multiplication'\n \ntest_mul()",
"This is due to the fact that floating points are not exact.",
"1.1 * 1.1",
"This is not a bug in Python but how floating points are implemented. You'll get the same result in C, Java, Go ...\nTo overcome this we're going to use np.allclose.\nBTW: If you're really interested in floating points, read this article.\nUsing np.allclose",
"def test_mul():\n arr = np.array([0.0, 1.0, 1.1])\n v, expected = 1.1, np.array([0.0, 1.1, 1.21])\n assert np.allclose(arr * v, expected), 'bad multiplication'\n \ntest_mul()",
"Oh nan, Let Me Count the Ways ...",
"def test_div():\n arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])\n expected = np.array([0.5, np.nan, 1.0])\n assert np.allclose(arr1 / arr2, expected), 'bad nan'\n \ntest_div()",
"This is due to the fact that nan does not equal itself.",
"np.nan == np.nan",
"To check if a number is nan we need to use np.isnan.",
"np.isnan(np.inf/np.inf)",
"We have two options to solve this:\n\nConvert all nan to numbers\nUse equal_nan argument to np.allclose\n\nOption 1: Convert nan to Numbers",
"def test_div():\n arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])\n expected = np.array([0.5, np.nan, 1.0])\n result = arr1 / arr2\n \n result[np.isnan(result)] = 0.0\n expected[np.isnan(expected)] = 0.0\n assert np.allclose(result, expected), 'bad nan'\n \ntest_div()",
"Option 2: Use equal_nan in np.allclose",
"def test_div():\n arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])\n expected = np.array([0.5, np.nan, 1.0])\n assert np.allclose(arr1 / arr2, expected, equal_nan=True), 'bad nan'\n \ntest_div()"
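A related helper worth knowing is np.testing.assert_allclose, which treats nan entries as equal by default (equal_nan=True) and raises an AssertionError with a detailed mismatch report:

```python
import numpy as np

def test_div():
    arr1, arr2 = np.array([1.0, np.inf, 2.0]), np.array([2.0, np.inf, 2.0])
    expected = np.array([0.5, np.nan, 1.0])
    # nan entries compare equal by default, and failures print which
    # elements mismatched and by how much.
    np.testing.assert_allclose(arr1 / arr2, expected)

test_div()
```

np.nan_to_num is another way to neutralize nan (it replaces nan with 0.0 by default) before comparing.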
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
CentreForCorpusResearch/clic
|
docs/notebooks/Concordance/Debugging the Concordance View.ipynb
|
mit
|
[
"Basic setup",
"# coding: utf-8\n\nimport os\n\nfrom cheshire3.baseObjects import Session\nfrom cheshire3.document import StringDocument\nfrom cheshire3.internal import cheshire3Root\nfrom cheshire3.server import SimpleServer \n\nsession = Session()\nsession.database = 'db_dickens'\nserv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml'))\ndb = serv.get_object(session, session.database)\nqf = db.get_object(session, 'defaultQueryFactory')\nresultSetStore = db.get_object(session, 'resultSetStore')\nidxStore = db.get_object(session, 'indexStore')",
"The problems\nWhen using the any search function to search for two different terms, the results are wrong.\nProblem 1: searching for fog OR dense is not the same as dense OR fog.\nProblem 2: the counts for fog OR dense are off.\nCurrently, there are 150 results for fog OR dense and 221 for dense OR fog, but there should be many more (142 or 144 if one counts compound nouns).",
"# This is the query that is currently being used.\n# The count is the number of chapters\n\nquery = qf.get_query(session, \"\"\" \n    ((c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"fog\") or c3.chapter-idx = \"dense\")\n    \"\"\")\nresult_set = db.search(session, query)\nprint len(result_set)\n\n# To get a more specific count one also needs to include the numbers of hits\n# in the different chapters\n\ndef count_total(result_set):\n    \"\"\"\n    Helper function to count the total number of hits\n    in the search results\n    \"\"\"\n    count = 0\n    for result in result_set:\n        count += len(result.proxInfo)\n    return count\n\ncount_total(result_set)\n\ndef try_query(query):\n    \"\"\"\n    Another helper function to take a query and return\n    the total number of hits\n    \"\"\"\n    query = qf.get_query(session, query)\n    result_set = db.search(session, query)\n    return count_total(result_set)",
"Solving problem 1\nThis query gets wrong results because the OR query is poorly constructed.",
"try_query(\"\"\"\n ((c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"dense\") or c3.chapter-idx = \"fog\")\n \"\"\"\n )",
"Properly structuring the OR clause takes away the problem of having different results for\nfog OR dense\ndense OR fog\n\nOption 1",
"try_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo (c3.chapter-idx = \"dense\" or c3.chapter-idx = \"fog\"))\n \"\"\"\n )",
"Option 2",
"try_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx any \"dense fog\")\n \"\"\"\n )\n\ntry_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx any \"fog dense\")\n \"\"\"\n )",
"Option 3: the verbose one",
"try_query(\"\"\"\n ((c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"dense\") or \n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"fog\"))\n \"\"\"\n )",
"Solving problem 2\nTo really get the right results, though, one should not just use\nany, but rather any/cql.proxinfo.",
"try_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/proxinfo (c3.chapter-idx = \"dense\" or/proxinfo c3.chapter-idx = \"fog\"))\n \"\"\"\n )",
"Or in its simpler form:",
"try_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx any/proxinfo \"fog dense\")\n \"\"\"\n )",
"This does not seem to be affected by whether you include the cql. prefix or not (proxinfo and cql.proxinfo behave the same here).",
"try_query(\"\"\"\n (c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx any/cql.proxinfo \"fog dense\")\n \"\"\"\n )",
"The counts are now correct:",
"dense = try_query(\"\"\"(c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"dense\")\"\"\")\nprint dense\n\nfog = try_query(\"\"\"(c3.subcorpus-idx all \"dickens\" and/cql.proxinfo c3.chapter-idx = \"fog\")\"\"\")\nprint fog\n\ndense + fog"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
keras-team/keras-io
|
guides/ipynb/making_new_layers_and_models_via_subclassing.ipynb
|
apache-2.0
|
[
"Making new layers and models via subclassing\nAuthor: fchollet<br>\nDate created: 2019/03/01<br>\nLast modified: 2020/04/13<br>\nDescription: Complete guide to writing Layer and Model objects from scratch.\nSetup",
"import tensorflow as tf\nfrom tensorflow import keras",
"The Layer class: the combination of state (weights) and some computation\nOne of the central abstractions in Keras is the Layer class. A layer\nencapsulates both a state (the layer's \"weights\") and a transformation from\ninputs to outputs (a \"call\", the layer's forward pass).\nHere's a densely-connected layer. It has a state: the variables w and b.",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32, input_dim=32):\n super(Linear, self).__init__()\n w_init = tf.random_normal_initializer()\n self.w = tf.Variable(\n initial_value=w_init(shape=(input_dim, units), dtype=\"float32\"),\n trainable=True,\n )\n b_init = tf.zeros_initializer()\n self.b = tf.Variable(\n initial_value=b_init(shape=(units,), dtype=\"float32\"), trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n",
"You would use a layer by calling it on some tensor input(s), much like a Python\nfunction.",
"x = tf.ones((2, 2))\nlinear_layer = Linear(4, 2)\ny = linear_layer(x)\nprint(y)",
"Note that the weights w and b are automatically tracked by the layer upon\nbeing set as layer attributes:",
"assert linear_layer.weights == [linear_layer.w, linear_layer.b]",
"Note you also have access to a quicker shortcut for adding weight to a layer:\nthe add_weight() method:",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32, input_dim=32):\n super(Linear, self).__init__()\n self.w = self.add_weight(\n shape=(input_dim, units), initializer=\"random_normal\", trainable=True\n )\n self.b = self.add_weight(shape=(units,), initializer=\"zeros\", trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n\nx = tf.ones((2, 2))\nlinear_layer = Linear(4, 2)\ny = linear_layer(x)\nprint(y)",
"Layers can have non-trainable weights\nBesides trainable weights, you can add non-trainable weights to a layer as\nwell. Such weights are meant not to be taken into account during\nbackpropagation, when you are training the layer.\nHere's how to add and use a non-trainable weight:",
"\nclass ComputeSum(keras.layers.Layer):\n def __init__(self, input_dim):\n super(ComputeSum, self).__init__()\n self.total = tf.Variable(initial_value=tf.zeros((input_dim,)), trainable=False)\n\n def call(self, inputs):\n self.total.assign_add(tf.reduce_sum(inputs, axis=0))\n return self.total\n\n\nx = tf.ones((2, 2))\nmy_sum = ComputeSum(2)\ny = my_sum(x)\nprint(y.numpy())\ny = my_sum(x)\nprint(y.numpy())",
"It's part of layer.weights, but it gets categorized as a non-trainable weight:",
"print(\"weights:\", len(my_sum.weights))\nprint(\"non-trainable weights:\", len(my_sum.non_trainable_weights))\n\n# It's not included in the trainable weights:\nprint(\"trainable_weights:\", my_sum.trainable_weights)",
"Best practice: deferring weight creation until the shape of the inputs is known\nOur Linear layer above took an input_dim argument that was used to compute\nthe shape of the weights w and b in __init__():",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32, input_dim=32):\n super(Linear, self).__init__()\n self.w = self.add_weight(\n shape=(input_dim, units), initializer=\"random_normal\", trainable=True\n )\n self.b = self.add_weight(shape=(units,), initializer=\"zeros\", trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n",
"In many cases, you may not know in advance the size of your inputs, and you\nwould like to lazily create weights when that value becomes known, some time\nafter instantiating the layer.\nIn the Keras API, we recommend creating layer weights in the build(self,\ninput_shape) method of your layer. Like this:",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32):\n super(Linear, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n",
"The __call__() method of your layer will automatically run build the first time\nit is called. You now have a layer that's lazy and thus easier to use:",
"# At instantiation, we don't know on what inputs this is going to get called\nlinear_layer = Linear(32)\n\n# The layer's weights are created dynamically the first time the layer is called\ny = linear_layer(x)\n",
"Implementing build() separately as shown above nicely separates creating weights\nonly once from using weights in every call. However, for some advanced custom\nlayers, it can become impractical to separate the state creation and computation.\nLayer implementers are allowed to defer weight creation to the first __call__(),\nbut need to take care that later calls use the same weights. In addition, since\n__call__() is likely to be executed for the first time inside a tf.function,\nany variable creation that takes place in __call__() should be wrapped in a tf.init_scope.\nLayers are recursively composable\nIf you assign a Layer instance as an attribute of another Layer, the outer layer\nwill start tracking the weights created by the inner layer.\nWe recommend creating such sublayers in the __init__() method and leaving it to\nthe first __call__() to trigger building their weights.",
"\nclass MLPBlock(keras.layers.Layer):\n def __init__(self):\n super(MLPBlock, self).__init__()\n self.linear_1 = Linear(32)\n self.linear_2 = Linear(32)\n self.linear_3 = Linear(1)\n\n def call(self, inputs):\n x = self.linear_1(inputs)\n x = tf.nn.relu(x)\n x = self.linear_2(x)\n x = tf.nn.relu(x)\n return self.linear_3(x)\n\n\nmlp = MLPBlock()\ny = mlp(tf.ones(shape=(3, 64))) # The first call to the `mlp` will create the weights\nprint(\"weights:\", len(mlp.weights))\nprint(\"trainable weights:\", len(mlp.trainable_weights))",
"The add_loss() method\nWhen writing the call() method of a layer, you can create loss tensors that\nyou will want to use later, when writing your training loop. This is doable by\ncalling self.add_loss(value):",
"# A layer that creates an activity regularization loss\nclass ActivityRegularizationLayer(keras.layers.Layer):\n def __init__(self, rate=1e-2):\n super(ActivityRegularizationLayer, self).__init__()\n self.rate = rate\n\n def call(self, inputs):\n self.add_loss(self.rate * tf.reduce_sum(inputs))\n return inputs\n",
"These losses (including those created by any inner layer) can be retrieved via\nlayer.losses. This property is reset at the start of every __call__() to\nthe top-level layer, so that layer.losses always contains the loss values\ncreated during the last forward pass.",
"\nclass OuterLayer(keras.layers.Layer):\n def __init__(self):\n super(OuterLayer, self).__init__()\n self.activity_reg = ActivityRegularizationLayer(1e-2)\n\n def call(self, inputs):\n return self.activity_reg(inputs)\n\n\nlayer = OuterLayer()\nassert len(layer.losses) == 0 # No losses yet since the layer has never been called\n\n_ = layer(tf.zeros(1, 1))\nassert len(layer.losses) == 1 # We created one loss value\n\n# `layer.losses` gets reset at the start of each __call__\n_ = layer(tf.zeros(1, 1))\nassert len(layer.losses) == 1 # This is the loss created during the call above",
"In addition, the loss property also contains regularization losses created\nfor the weights of any inner layer:",
"\nclass OuterLayerWithKernelRegularizer(keras.layers.Layer):\n def __init__(self):\n super(OuterLayerWithKernelRegularizer, self).__init__()\n self.dense = keras.layers.Dense(\n 32, kernel_regularizer=tf.keras.regularizers.l2(1e-3)\n )\n\n def call(self, inputs):\n return self.dense(inputs)\n\n\nlayer = OuterLayerWithKernelRegularizer()\n_ = layer(tf.zeros((1, 1)))\n\n# This is `1e-3 * sum(layer.dense.kernel ** 2)`,\n# created by the `kernel_regularizer` above.\nprint(layer.losses)",
"These losses are meant to be taken into account when writing training loops,\nlike this:\n```python\n# Instantiate an optimizer.\noptimizer = tf.keras.optimizers.SGD(learning_rate=1e-3)\nloss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)\n\n# Iterate over the batches of a dataset.\nfor x_batch_train, y_batch_train in train_dataset:\n    with tf.GradientTape() as tape:\n        logits = layer(x_batch_train)  # Logits for this minibatch\n        # Loss value for this minibatch\n        loss_value = loss_fn(y_batch_train, logits)\n        # Add extra losses created during this forward pass:\n        loss_value += sum(model.losses)\n\n    grads = tape.gradient(loss_value, model.trainable_weights)\n    optimizer.apply_gradients(zip(grads, model.trainable_weights))\n```\nFor a detailed guide about writing training loops, see the\nguide to writing a training loop from scratch.\nThese losses also work seamlessly with fit() (they get automatically summed\nand added to the main loss, if any):",
"import numpy as np\n\ninputs = keras.Input(shape=(3,))\noutputs = ActivityRegularizationLayer()(inputs)\nmodel = keras.Model(inputs, outputs)\n\n# If there is a loss passed in `compile`, the regularization\n# losses get added to it\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\nmodel.fit(np.random.random((2, 3)), np.random.random((2, 3)))\n\n# It's also possible not to pass any loss in `compile`,\n# since the model already has a loss to minimize, via the `add_loss`\n# call during the forward pass!\nmodel.compile(optimizer=\"adam\")\nmodel.fit(np.random.random((2, 3)), np.random.random((2, 3)))",
"The add_metric() method\nSimilarly to add_loss(), layers also have an add_metric() method\nfor tracking the moving average of a quantity during training.\nConsider the following layer: a \"logistic endpoint\" layer.\nIt takes as inputs predictions & targets, it computes a loss which it tracks\nvia add_loss(), and it computes an accuracy scalar, which it tracks via\nadd_metric().",
"\nclass LogisticEndpoint(keras.layers.Layer):\n def __init__(self, name=None):\n super(LogisticEndpoint, self).__init__(name=name)\n self.loss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\n self.accuracy_fn = keras.metrics.BinaryAccuracy()\n\n def call(self, targets, logits, sample_weights=None):\n # Compute the training-time loss value and add it\n # to the layer using `self.add_loss()`.\n loss = self.loss_fn(targets, logits, sample_weights)\n self.add_loss(loss)\n\n # Log accuracy as a metric and add it\n # to the layer using `self.add_metric()`.\n acc = self.accuracy_fn(targets, logits, sample_weights)\n self.add_metric(acc, name=\"accuracy\")\n\n # Return the inference-time prediction tensor (for `.predict()`).\n return tf.nn.softmax(logits)\n",
"Metrics tracked in this way are accessible via layer.metrics:",
"layer = LogisticEndpoint()\n\ntargets = tf.ones((2, 2))\nlogits = tf.ones((2, 2))\ny = layer(targets, logits)\n\nprint(\"layer.metrics:\", layer.metrics)\nprint(\"current accuracy value:\", float(layer.metrics[0].result()))",
"Just like for add_loss(), these metrics are tracked by fit():",
"inputs = keras.Input(shape=(3,), name=\"inputs\")\ntargets = keras.Input(shape=(10,), name=\"targets\")\nlogits = keras.layers.Dense(10)(inputs)\npredictions = LogisticEndpoint(name=\"predictions\")(logits, targets)\n\nmodel = keras.Model(inputs=[inputs, targets], outputs=predictions)\nmodel.compile(optimizer=\"adam\")\n\ndata = {\n \"inputs\": np.random.random((3, 3)),\n \"targets\": np.random.random((3, 10)),\n}\nmodel.fit(data)",
"You can optionally enable serialization on your layers\nIf you need your custom layers to be serializable as part of a\nFunctional model, you can optionally implement a get_config()\nmethod:",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32):\n super(Linear, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n def get_config(self):\n return {\"units\": self.units}\n\n\n# Now you can recreate the layer from its config:\nlayer = Linear(64)\nconfig = layer.get_config()\nprint(config)\nnew_layer = Linear.from_config(config)",
"Note that the __init__() method of the base Layer class takes some keyword\narguments, in particular a name and a dtype. It's good practice to pass\nthese arguments to the parent class in __init__() and to include them in the\nlayer config:",
"\nclass Linear(keras.layers.Layer):\n def __init__(self, units=32, **kwargs):\n super(Linear, self).__init__(**kwargs)\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(\n shape=(input_shape[-1], self.units),\n initializer=\"random_normal\",\n trainable=True,\n )\n self.b = self.add_weight(\n shape=(self.units,), initializer=\"random_normal\", trainable=True\n )\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n def get_config(self):\n config = super(Linear, self).get_config()\n config.update({\"units\": self.units})\n return config\n\n\nlayer = Linear(64)\nconfig = layer.get_config()\nprint(config)\nnew_layer = Linear.from_config(config)",
"If you need more flexibility when deserializing the layer from its config, you\ncan also override the from_config() class method. This is the base\nimplementation of from_config():\n```python\ndef from_config(cls, config):\n    return cls(**config)\n```\nTo learn more about serialization and saving, see the complete\nguide to saving and serializing models.\nPrivileged training argument in the call() method\nSome layers, in particular the BatchNormalization layer and the Dropout\nlayer, have different behaviors during training and inference. For such\nlayers, it is standard practice to expose a training (boolean) argument in\nthe call() method.\nBy exposing this argument in call(), you enable the built-in training and\nevaluation loops (e.g. fit()) to correctly use the layer in training and\ninference.",
"\nclass CustomDropout(keras.layers.Layer):\n def __init__(self, rate, **kwargs):\n super(CustomDropout, self).__init__(**kwargs)\n self.rate = rate\n\n def call(self, inputs, training=None):\n if training:\n return tf.nn.dropout(inputs, rate=self.rate)\n return inputs\n",
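A quick sanity check of the layer above (a hypothetical usage example, not from the guide; the class is repeated so the snippet is self-contained): with training=False the layer is the identity, while training=True zeroes a random subset of entries and rescales the survivors by 1/(1 - rate):

```python
import tensorflow as tf

class CustomDropout(tf.keras.layers.Layer):
    def __init__(self, rate, **kwargs):
        super(CustomDropout, self).__init__(**kwargs)
        self.rate = rate

    def call(self, inputs, training=None):
        if training:
            return tf.nn.dropout(inputs, rate=self.rate)
        return inputs

layer = CustomDropout(0.5)
x = tf.ones((4, 4))
print(layer(x, training=False))  # identical to x
print(layer(x, training=True))   # entries are either 0.0 or 2.0 (= 1 / (1 - rate))
```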
"Privileged mask argument in the call() method\nThe other privileged argument supported by call() is the mask argument.\nYou will find it in all Keras RNN layers. A mask is a boolean tensor (one\nboolean value per timestep in the input) used to skip certain input timesteps\nwhen processing timeseries data.\nKeras will automatically pass the correct mask argument to __call__() for\nlayers that support it, when a mask is generated by a prior layer.\nMask-generating layers are the Embedding\nlayer configured with mask_zero=True, and the Masking layer.\nTo learn more about masking and how to write masking-enabled layers, please\ncheck out the guide\n\"understanding padding and masking\".\nThe Model class\nIn general, you will use the Layer class to define inner computation blocks,\nand will use the Model class to define the outer model -- the object you\nwill train.\nFor instance, in a ResNet50 model, you would have several ResNet blocks\nsubclassing Layer, and a single Model encompassing the entire ResNet50\nnetwork.\nThe Model class has the same API as Layer, with the following differences:\n\nIt exposes built-in training, evaluation, and prediction loops\n(model.fit(), model.evaluate(), model.predict()).\nIt exposes the list of its inner layers, via the model.layers property.\nIt exposes saving and serialization APIs (save(), save_weights()...)\n\nEffectively, the Layer class corresponds to what we refer to in the\nliterature as a \"layer\" (as in \"convolution layer\" or \"recurrent layer\") or as\na \"block\" (as in \"ResNet block\" or \"Inception block\").\nMeanwhile, the Model class corresponds to what is referred to in the\nliterature as a \"model\" (as in \"deep learning model\") or as a \"network\" (as in\n\"deep neural network\").\nSo if you're wondering, \"should I use the Layer class or the Model class?\",\nask yourself: will I need to call fit() on it? Will I need to call save()\non it? If so, go with Model.\n
If not (either because your class is just a block\nin a bigger system, or because you are writing training & saving code yourself),\nuse Layer.\nFor instance, we could take our mini-resnet example above, and use it to build\na Model that we could train with fit(), and that we could save with\nsave_weights():\n```python\nclass ResNet(tf.keras.Model):\n\n    def __init__(self, num_classes=1000):\n        super(ResNet, self).__init__()\n        self.block_1 = ResNetBlock()\n        self.block_2 = ResNetBlock()\n        self.global_pool = layers.GlobalAveragePooling2D()\n        self.classifier = Dense(num_classes)\n\n    def call(self, inputs):\n        x = self.block_1(inputs)\n        x = self.block_2(x)\n        x = self.global_pool(x)\n        return self.classifier(x)\n\nresnet = ResNet()\ndataset = ...\nresnet.fit(dataset, epochs=10)\nresnet.save(filepath)\n```\nPutting it all together: an end-to-end example\nHere's what you've learned so far:\n\nA Layer encapsulates a state (created in __init__() or build()) and some\ncomputation (defined in call()).\nLayers can be recursively nested to create new, bigger computation blocks.\nLayers can create and track losses (typically regularization losses) as well\nas metrics, via add_loss() and add_metric().\nThe outer container, the thing you want to train, is a Model. A Model is\njust like a Layer, but with added training and serialization utilities.\n\nLet's put all of these things together into an end-to-end example: we're going\nto implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.\nOur VAE will be a subclass of Model, built as a nested composition of layers\nthat subclass Layer. It will feature a regularization loss (KL divergence).",
"from tensorflow.keras import layers\n\n\nclass Sampling(layers.Layer):\n \"\"\"Uses (z_mean, z_log_var) to sample z, the vector encoding a digit.\"\"\"\n\n def call(self, inputs):\n z_mean, z_log_var = inputs\n batch = tf.shape(z_mean)[0]\n dim = tf.shape(z_mean)[1]\n epsilon = tf.keras.backend.random_normal(shape=(batch, dim))\n return z_mean + tf.exp(0.5 * z_log_var) * epsilon\n\n\nclass Encoder(layers.Layer):\n \"\"\"Maps MNIST digits to a triplet (z_mean, z_log_var, z).\"\"\"\n\n def __init__(self, latent_dim=32, intermediate_dim=64, name=\"encoder\", **kwargs):\n super(Encoder, self).__init__(name=name, **kwargs)\n self.dense_proj = layers.Dense(intermediate_dim, activation=\"relu\")\n self.dense_mean = layers.Dense(latent_dim)\n self.dense_log_var = layers.Dense(latent_dim)\n self.sampling = Sampling()\n\n def call(self, inputs):\n x = self.dense_proj(inputs)\n z_mean = self.dense_mean(x)\n z_log_var = self.dense_log_var(x)\n z = self.sampling((z_mean, z_log_var))\n return z_mean, z_log_var, z\n\n\nclass Decoder(layers.Layer):\n \"\"\"Converts z, the encoded digit vector, back into a readable digit.\"\"\"\n\n def __init__(self, original_dim, intermediate_dim=64, name=\"decoder\", **kwargs):\n super(Decoder, self).__init__(name=name, **kwargs)\n self.dense_proj = layers.Dense(intermediate_dim, activation=\"relu\")\n self.dense_output = layers.Dense(original_dim, activation=\"sigmoid\")\n\n def call(self, inputs):\n x = self.dense_proj(inputs)\n return self.dense_output(x)\n\n\nclass VariationalAutoEncoder(keras.Model):\n \"\"\"Combines the encoder and decoder into an end-to-end model for training.\"\"\"\n\n def __init__(\n self,\n original_dim,\n intermediate_dim=64,\n latent_dim=32,\n name=\"autoencoder\",\n **kwargs\n ):\n super(VariationalAutoEncoder, self).__init__(name=name, **kwargs)\n self.original_dim = original_dim\n self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)\n self.decoder = Decoder(original_dim, 
intermediate_dim=intermediate_dim)\n\n def call(self, inputs):\n z_mean, z_log_var, z = self.encoder(inputs)\n reconstructed = self.decoder(z)\n # Add KL divergence regularization loss.\n kl_loss = -0.5 * tf.reduce_mean(\n z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1\n )\n self.add_loss(kl_loss)\n return reconstructed\n",
"Let's write a simple training loop on MNIST:",
"original_dim = 784\nvae = VariationalAutoEncoder(original_dim, 64, 32)\n\noptimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\nmse_loss_fn = tf.keras.losses.MeanSquaredError()\n\nloss_metric = tf.keras.metrics.Mean()\n\n(x_train, _), _ = tf.keras.datasets.mnist.load_data()\nx_train = x_train.reshape(60000, 784).astype(\"float32\") / 255\n\ntrain_dataset = tf.data.Dataset.from_tensor_slices(x_train)\ntrain_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)\n\nepochs = 2\n\n# Iterate over epochs.\nfor epoch in range(epochs):\n print(\"Start of epoch %d\" % (epoch,))\n\n # Iterate over the batches of the dataset.\n for step, x_batch_train in enumerate(train_dataset):\n with tf.GradientTape() as tape:\n reconstructed = vae(x_batch_train)\n # Compute reconstruction loss\n loss = mse_loss_fn(x_batch_train, reconstructed)\n loss += sum(vae.losses) # Add KLD regularization loss\n\n grads = tape.gradient(loss, vae.trainable_weights)\n optimizer.apply_gradients(zip(grads, vae.trainable_weights))\n\n loss_metric(loss)\n\n if step % 100 == 0:\n print(\"step %d: mean loss = %.4f\" % (step, loss_metric.result()))",
"Note that since the VAE is subclassing Model, it features built-in training\nloops. So you could also have trained it like this:",
"vae = VariationalAutoEncoder(784, 64, 32)\n\noptimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\n\nvae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())\nvae.fit(x_train, x_train, epochs=2, batch_size=64)",
"Beyond object-oriented development: the Functional API\nWas this example too much object-oriented development for you? You can also\nbuild models using the Functional API. Importantly,\nchoosing one style or another does not prevent you from leveraging components\nwritten in the other style: you can always mix-and-match.\nFor instance, the Functional API example below reuses the same Sampling layer\nwe defined in the example above:",
"original_dim = 784\nintermediate_dim = 64\nlatent_dim = 32\n\n# Define encoder model.\noriginal_inputs = tf.keras.Input(shape=(original_dim,), name=\"encoder_input\")\nx = layers.Dense(intermediate_dim, activation=\"relu\")(original_inputs)\nz_mean = layers.Dense(latent_dim, name=\"z_mean\")(x)\nz_log_var = layers.Dense(latent_dim, name=\"z_log_var\")(x)\nz = Sampling()((z_mean, z_log_var))\nencoder = tf.keras.Model(inputs=original_inputs, outputs=z, name=\"encoder\")\n\n# Define decoder model.\nlatent_inputs = tf.keras.Input(shape=(latent_dim,), name=\"z_sampling\")\nx = layers.Dense(intermediate_dim, activation=\"relu\")(latent_inputs)\noutputs = layers.Dense(original_dim, activation=\"sigmoid\")(x)\ndecoder = tf.keras.Model(inputs=latent_inputs, outputs=outputs, name=\"decoder\")\n\n# Define VAE model.\noutputs = decoder(z)\nvae = tf.keras.Model(inputs=original_inputs, outputs=outputs, name=\"vae\")\n\n# Add KL divergence regularization loss.\nkl_loss = -0.5 * tf.reduce_mean(z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1)\nvae.add_loss(kl_loss)\n\n# Train.\noptimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)\nvae.compile(optimizer, loss=tf.keras.losses.MeanSquaredError())\nvae.fit(x_train, x_train, epochs=3, batch_size=64)",
"For more information, make sure to read the Functional API guide."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
amitkaps/full-stack-data-science
|
credit-risk/notebooks/Intuition.ipynb
|
mit
|
[
"Intuition\nPaper Exercise\nLet us start with a simple exercise in classifying credit risk. \nWe have the following features in our dataset. \n- Risk - ordinal (label)\n- Income - continuous\n- Credit History - ordinal\nWe want to find out the rules that would help us classify the three risk types - this is a pen-and-paper exercise first!",
"import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')\nplt.rcParams['figure.figsize'] = (9,6)\n\ndf = pd.read_csv(\"../data/creditRisk.csv\")\n\ndf.head()",
"Plotting the Data",
"from plotnine import *\n\nggplot(df, aes(x = \"Income\", y = \"Credit History\", color = \"Risk\")) + geom_point(size = 4)",
"Preparing Data\nWe have one ordinal variable (Risk) and one nominal variable (Credit History).\nLet's use LabelEncoder for the nominal variable",
"from sklearn import preprocessing\nle = preprocessing.LabelEncoder()\n\ndf['Credit History'].unique()\n\nle.fit(df['Credit History'].unique())\n\ndf['Credit History'].tail()\n\n# Converting the categorical data using label encoder\ndf['Credit History'] = le.transform(df['Credit History'])\n\ndf['Credit History'].tail()\n\nle.classes_",
"Let's use a dictionary to encode the ordinal variable (Risk)",
"df.Risk.unique()\n\nRisk_mapping = {\n 'High': 2,\n 'Moderate': 1,\n 'Low': 0}\n\ndf.Risk.tail()\n\ndf['Risk'] = df['Risk'].map(Risk_mapping)\n\ndf.Risk.tail()\n\ndf.head()",
"Classifier - Logistic Regression",
"data = df.iloc[:,0:2]\ntarget = df['Risk']\n\nfrom sklearn.linear_model import LogisticRegression\n\nclf_LR = LogisticRegression()\n\nclf_LR\n\nclf_LR = clf_LR.fit(data, target)\n\nfrom modelvis import plot_classifier_2d\n\nX = np.array(data)\ny = np.array(target)\n\nplot_classifier_2d(clf_LR, data, target, probability = False)",
"Classifier",
"data = df.iloc[:,0:2]\ntarget = df.iloc[:,2:3]\n\nfrom sklearn import tree\n\nclf = tree.DecisionTreeClassifier()\n\nclf\n\nclf = clf.fit(data, target)",
"Visualise the Tree",
"import pydotplus \nfrom IPython.display import Image\n\ndata.columns\n\ntarget.columns\n\ndot_data = tree.export_graphviz(clf, out_file='tree.dot', feature_names=data.columns,\n class_names=['Low', 'Moderate', 'High'], filled=True, \n rounded=True, special_characters=True)\n\ngraph = pydotplus.graph_from_dot_file('tree.dot') \n\nImage(graph.create_png()) ",
"Understanding how the Decision Tree works\nTerminology\n- Each internal node represents a single input variable (x) and a split point on that variable.\n- The leaf nodes of the tree contain an output variable (y) which is used to make a prediction.\nGrowing the tree\n\nThe first choice we have is how many branches to create at each split. We choose binary splits because anything more would explode combinatorially. So BINARY TREES is a practical consideration.\nThe second decision is which variable to split on and where to split it. We need an objective function to do this.\n\nOne objective function is to maximize the information gain (IG) at each split:\n$$ IG(D_p,f)= I(D_p) - \\frac{N_{right}}{N} I(D_{right}) - \\frac{N_{left}}{N} I(D_{left}) $$\nwhere: \n- f is the feature to perform the split\n- $D_p$, $D_{left}$, and $D_{right}$ are the datasets of the parent, left and right child node, respectively\n- I is the impurity measure\n- N is the total number of samples\n- $N_{left}$ and $N_{right}$ are the numbers of samples in the left and right child node. \nNow we first need to define an impurity measure. The three popular impurity measures are:\n - Gini Impurity\n - Entropy\n - Classification Error\nGini Impurity and Entropy lead to similar results when growing the tree, while Classification Error is not as useful for growing the tree (but is useful for pruning it) - see the example here http://sebastianraschka.com/faq/docs/decision-tree-binary.html\nLet's understand Gini Impurity a little better. Gini impurity is a measure of how often a randomly chosen element from the set would be incorrectly labeled if it was randomly labeled according to the distribution of labels in the subset. It can be computed by summing the probability $t_{i}$ of an item with label $i$ being chosen times the probability $1-t_{i}$ of a mistake in categorizing that item.\n$$ I_{G}(f)=\\sum_{i=1}^{J}t_{i}(1-t_{i})=\\sum_{i=1}^{J}(t_{i}-{t_{i}}^{2})=\\sum_{i=1}^{J}t_{i}-\\sum_{i=1}^{J}{t_{i}}^{2}=1-\\sum_{i=1}^{J}{t_{i}}^{2} $$\nLet's calculate the Gini impurity for the overall data set:\nLow - 4, Moderate - 6, High - 8, with 18 observations in total \n$$ I_G(t) = 1 - \\left(\\frac{6}{18}\\right)^2 - \\left(\\frac{4}{18}\\right)^2 - \\left(\\frac{8}{18}\\right)^2 = 1 - \\frac{116}{324} \\approx 0.642 $$\nscikit-learn uses an optimized CART algorithm, which takes a greedy approach: the input space is divided using recursive binary splitting, a numerical procedure where all the values are lined up and different split points are tried and tested using an objective cost function. The split with the best cost (lowest, because we minimize cost) is selected.\nAnother way to think of this is that a learned binary tree is actually a partitioning of the input space. You can think of each input variable as a dimension in a p-dimensional space. The decision tree splits this up into rectangles (when p=2 input variables) or some kind of hyper-rectangles with more inputs. \nWe can draw these partitions for our dataset",
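As a sanity check on the arithmetic above, the Gini impurity can be computed directly. This is a small sketch; `gini` is just a helper name here, not a library function:

```python
def gini(counts):
    """Gini impurity 1 - sum(t_i^2) for a list of class counts."""
    n = float(sum(counts))
    return 1.0 - sum((c / n) ** 2 for c in counts)

# Low - 4, Moderate - 6, High - 8 observations
print(round(gini([4, 6, 8]), 3))  # 0.642
```

A pure node (only one label left) gives an impurity of 0, which is the natural point to stop splitting.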
"def plot_classifier_2d(clf, data, target):\n x_min, x_max = data.iloc[:,0].min(), data.iloc[:,0].max()\n y_min, y_max = data.iloc[:,1].min(), data.iloc[:,1].max()\n xx, yy = np.meshgrid(\n np.arange(x_min, x_max, (x_max - x_min)/100), \n np.arange(y_min, y_max, (y_max - y_min)/100))\n Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])\n Z = Z.reshape(xx.shape)\n cs = plt.contourf(xx, yy, Z, cmap=\"viridis\", alpha = 0.5)\n plt.colorbar(cs)\n plt.scatter(x = data.iloc[:,0], y = data.iloc[:,1], c = target, s = 100, cmap=\"magma\")\n\n\nplot_classifier_2d(clf, data,target)",
"Stop growing the tree\n\nThe obvious point to stop growing the tree is when the Gini Impurity = 0, that is, when only one label is left in a node.\nAnother option is to define a max_depth of the tree, though this may lead to suboptimal trees.\nThe most common stopping procedure is to use a minimum count on the number of training instances assigned to each leaf node. If the count is less than some minimum then the split is not accepted and the node is taken as a final leaf node.\n\nTrees - Advantages and Disadvantages\nAdvantages of Trees\n- Simple to understand and interpret.\n- Requires little data prep. No need for data normalisation, dummy variables, or removal of missing values.\n- Able to handle both numerical and categorical data. \n- Uses a white-box model with simple, clear rules.\n- Can be easily validated.\n- Scales well with data.\nDisadvantages of Trees\n- Overfitting: over-complex trees that do not generalise data well. \n- Unstable: small variations in the data result in a different tree.\n- Locally Optimal: learning an optimal tree is known to be NP-complete, hence the need for heuristics like the greedy algorithm (locally optimal but may not be globally optimal).\n- Biased Trees: not a good learner if one class dominates."
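These stopping rules map directly onto scikit-learn hyperparameters. A minimal sketch on hypothetical toy data (not the credit-risk dataset used above):

```python
import numpy as np
from sklearn import tree

# Toy data: 2 features, 3 classes derived from the first feature.
rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = (X[:, 0] * 3).astype(int)

# max_depth caps how deep the tree can grow; min_samples_leaf
# rejects any split that would leave fewer than 5 training
# instances in a leaf. Both stop growth before impurity reaches 0.
clf = tree.DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
clf.fit(X, y)
print(clf.get_depth())  # at most 3
```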
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ToqueWillot/M2DAC
|
FDMS/TME3/DataViz.ipynb
|
gpl-2.0
|
[
"FDMS TME3\nKaggle How Much Did It Rain? II\nFlorian Toque & Paul Willot \nData Viz",
"# from __future__ import exam_success\nfrom __future__ import absolute_import\nfrom __future__ import print_function\n\n%matplotlib inline\nimport sklearn\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nimport random\nimport pandas as pd\nimport scipy.stats as stats\n\n# Sk cheats\nfrom sklearn.cross_validation import cross_val_score # cross val\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.ensemble import ExtraTreesRegressor\nfrom sklearn.preprocessing import Imputer # get rid of nan",
"13,765,202 lines in train.csv \n8,022,757 lines in test.csv \n\nLoad the dataset",
"%%time\nfilename = \"data/reduced_train_100000.csv\"\n#filename = \"data/reduced_test_100000.csv\"\nraw = pd.read_csv(filename)\nraw = raw.set_index('Id')\n\nraw['Expected'].describe()",
"Per Wikipedia, a value of more than 421 mm/h is considered \"Extreme/large hail\".\nIf we encounter a value of 327.40 meters per hour, we should probably start building Noah's ark.\nTherefore, it seems reasonable to drop values that are too large and treat them as outliers",
"# Considering that the gauge may concentrate the rainfall, we set the cap to 1000\n# Comment this line to analyse the complete dataset \nl = len(raw)\nraw = raw[raw['Expected'] < 1000]\nprint(\"Dropped %d (%0.2f%%)\"%(l-len(raw),(l-len(raw))/float(l)*100))\n\nraw.head(5)",
"Quick analysis of the sparsity by column",
"l = float(len(raw[\"minutes_past\"]))\ncomp = [[1-raw[i].isnull().sum()/l , i] for i in raw.columns]\ncomp.sort(key=lambda x: x[0], reverse=True)\n\n# zip(*comp) transposes the [value, name] pairs; wrap in list() so this\n# also works under Python 3, where zip objects are not subscriptable.\ncomp_values, comp_names = zip(*comp)\nsns.barplot(list(comp_values), list(comp_names), palette=sns.cubehelix_palette(len(comp), start=.5, rot=-.75))\nplt.title(\"Percentage of non NaN data\")\nplt.show()",
"We see that, except for the fixed features minutes_past, radardist_km and Expected, the dataset is mainly sparse.\nLet's transform the dataset to conduct more analysis.\nWe regroup the data by ID",
"# We select all features except for the minutes past,\n# because we ignore the time repartition of the sequence for now\n\nfeatures_columns = list([u'Ref', u'Ref_5x5_10th',\n u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',\n u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',\n u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',\n u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',\n u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',\n u'Kdp_5x5_50th', u'Kdp_5x5_90th'])\n\ndef getXy(raw):\n selected_columns = list([ u'radardist_km', u'Ref', u'Ref_5x5_10th',\n u'Ref_5x5_50th', u'Ref_5x5_90th', u'RefComposite',\n u'RefComposite_5x5_10th', u'RefComposite_5x5_50th',\n u'RefComposite_5x5_90th', u'RhoHV', u'RhoHV_5x5_10th',\n u'RhoHV_5x5_50th', u'RhoHV_5x5_90th', u'Zdr', u'Zdr_5x5_10th',\n u'Zdr_5x5_50th', u'Zdr_5x5_90th', u'Kdp', u'Kdp_5x5_10th',\n u'Kdp_5x5_50th', u'Kdp_5x5_90th'])\n \n data = raw[selected_columns]\n \n docX, docY = [], []\n for i in data.index.unique():\n if isinstance(data.loc[i],pd.core.series.Series):\n m = [data.loc[i].as_matrix()]\n docX.append(m)\n docY.append(float(raw.loc[i][\"Expected\"]))\n else:\n m = data.loc[i].as_matrix()\n docX.append(m)\n docY.append(float(raw.loc[i][:1][\"Expected\"]))\n X , y = np.array(docX) , np.array(docY)\n return X,y\n\nraw.index.unique()",
"How many observations are there for each ID?",
"X,y=getXy(raw)\n\ntmp = []\nfor i in X:\n tmp.append(len(i))\ntmp = np.array(tmp)\nsns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))\nplt.title(\"Number of ID per number of observations\\n(On complete dataset)\")\nplt.plot()\n\nprint(\"Average gauge observation in mm: %0.2f\"%y.mean())",
"We see there are a lot of IDs with 6 or 12 observations, which means one every 10 or 5 minutes on average, respectively.",
"pd.DataFrame(y).describe()",
"Now let's do the analysis on different subsets:\nOn fully filled dataset",
"noAnyNan = raw.loc[raw[features_columns].dropna(how='any').index.unique()]\n\nX,y=getXy(noAnyNan)\n\ntmp = []\nfor i in X:\n tmp.append(len(i))\ntmp = np.array(tmp)\nsns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))\nplt.title(\"Number of ID per number of observations\\n(On fully filled dataset)\")\nplt.plot()\n\nprint(\"Average gauge observation in mm: %0.2f\"%y.mean())\n\npd.DataFrame(y).describe()\n\nnoFullNan = raw.loc[raw[features_columns].dropna(how='all').index.unique()]\n\nX,y=getXy(noFullNan)\n\ntmp = []\nfor i in X:\n tmp.append(len(i))\ntmp = np.array(tmp)\nsns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))\nplt.title(\"Number of ID per number of observations\\n(On partly filled dataset)\")\nplt.plot()\n\nprint(\"Average gauge observation in mm: %0.2f\"%y.mean())\n\npd.DataFrame(y).describe()\n\nfullNan = raw.drop(raw[features_columns].dropna(how='all').index)\n\nX,y=getXy(fullNan)\n\ntmp = []\nfor i in X:\n tmp.append(len(i))\ntmp = np.array(tmp)\nsns.countplot(tmp,order=range(tmp.min(),tmp.max()+1))\nplt.title(\"Number of ID per number of observations\\n(On fully empty dataset)\")\nplt.plot()\n\nprint(\"Average gauge observation in mm: %0.2f\"%y.mean())\n\npd.DataFrame(y).describe()",
"Strangely, we notice that the fewer observations there are, the more it rains on average.\nHowever, most of the expected rainfall values fall below 0.5.\nWhat prediction should we make if there is no data?",
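One common answer, sketched below with made-up numbers (not values from this dataset), is to fall back to a constant such as the training median of Expected, which is robust to the heavy right tail:

```python
import numpy as np

# Hypothetical gauge readings; the real notebook would use the
# 'Expected' column of the training set instead.
train_expected = np.array([0.25, 0.25, 0.51, 1.02, 3.05, 254.0])

# The median is far less sensitive to extreme outliers than the mean.
fallback = np.median(train_expected)
print(fallback)  # 0.765
```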
"print(\"%d observations\" %(len(raw)))\n#print(\"%d fully filled, %d partly filled, %d fully empty\"\n# %(len(noAnyNan),len(noFullNan),len(raw)-len(noFullNan)))\nprint(\"%0.1f%% fully filled, %0.1f%% partly filled, %0.1f%% fully empty\"\n %(len(noAnyNan)/float(len(raw))*100,\n len(noFullNan)/float(len(raw))*100,\n (len(raw)-len(noFullNan))/float(len(raw))*100))\n\nimport numpy as np\nfrom scipy.stats import kendalltau\nimport seaborn as sns\n#sns.set(style=\"ticks\")\n\nrs = np.random.RandomState(11)\nx = rs.gamma(2, size=1000)\ny = -.5 * x + rs.normal(size=1000)\n\nsns.jointplot(x, y, kind=\"hex\", stat_func=kendalltau, color=\"#4CB391\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
InsightSoftwareConsortium/SimpleITK-Notebooks
|
Python/00_Setup.ipynb
|
apache-2.0
|
[
"Welcome to SimpleITK Jupyter Notebooks\nNewcomers to Jupyter Notebooks:\n\nWe use two types of cells, code and markdown.\nTo run a code cell, select it (mouse or arrow key so that it is highlighted) and then press shift+enter which also moves focus to the next cell or ctrl+enter which doesn't.\nClosing the browser window does not close the Jupyter server. To close the server, go to the terminal where you ran it and press ctrl+c twice.\n\nFor additional details see the Jupyter project documentation on Jupyter Notebook or JupyterLab.\nSimpleITK Environment Setup\nCheck that SimpleITK and auxiliary program(s) are correctly installed in your environment, and that you have the SimpleITK version which you expect (<b>requires network connectivity</b>).\nYou can optionally download all of the data used in the notebooks in advance. This step is only necessary if you expect to run the notebooks without network connectivity.\nThe following cell checks that all expected packages are installed.",
"import importlib\nfrom distutils.version import LooseVersion\n\n# check that all packages are installed (see requirements.txt file)\nrequired_packages = {\n \"jupyter\",\n \"numpy\",\n \"matplotlib\",\n \"ipywidgets\",\n \"scipy\",\n \"pandas\",\n \"numba\",\n \"multiprocess\",\n \"SimpleITK\",\n}\n\nproblem_packages = list()\n# Iterate over the required packages: If the package is not installed\n# ignore the exception.\nfor package in required_packages:\n try:\n p = importlib.import_module(package)\n except ImportError:\n problem_packages.append(package)\n\nif len(problem_packages) == 0:\n print(\"All is well.\")\nelse:\n print(\n \"The following packages are required but not installed: \"\n + \", \".join(problem_packages)\n )\n\nimport SimpleITK as sitk\n\n%run update_path_to_download_script\nfrom downloaddata import fetch_data, fetch_data_all\n\nfrom ipywidgets import interact\n\nprint(sitk.Version())",
"We expect that you have an external image viewer installed. The default viewer is <a href=\"https://fiji.sc/#download\">Fiji</a>. If you have another viewer (i.e. ITK-SNAP or 3D Slicer) you will need to set an environment variable to point to it. This can be done from within a notebook as shown below.",
"# Uncomment the line below to change the default external viewer to your viewer of choice and test that it works.\n#%env SITK_SHOW_COMMAND /Applications/ITK-SNAP.app/Contents/MacOS/ITK-SNAP\n\n# Retrieve an image from the network, read it and display using the external viewer.\n# The show method will also set the display window's title and by setting debugOn to True,\n# will also print information with respect to the command it is attempting to invoke.\n# NOTE: The debug information is printed to the terminal from which you launched the notebook\n# server.\nsitk.Show(sitk.ReadImage(fetch_data(\"SimpleITK.jpg\")), \"SimpleITK Logo\", debugOn=True)",
"Now we check that the ipywidgets will display correctly. When you run the following cell you should see a slider.\nIf you don't see a slider please shutdown the Jupyter server, at the command line prompt press Control-c twice, and then run the following command:\njupyter nbextension enable --py --sys-prefix widgetsnbextension",
"interact(lambda x: x, x=(0, 10));",
"Download all of the data in advance if you expect to be working offline (may take a couple of minutes).",
"import os\n\nfetch_data_all(os.path.join(\"..\", \"Data\"), os.path.join(\"..\", \"Data\", \"manifest.json\"))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/hub
|
examples/colab/tf2_arbitrary_image_stylization.ipynb
|
apache-2.0
|
[
"Copyright 2019 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"Fast Style Transfer for Arbitrary Styles\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/tf2_arbitrary_image_stylization\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/tf2_arbitrary_image_stylization.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n <td>\n <a href=\"https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n </td>\n</table>\n\nBased on the model code in magenta and the publication:\nExploring the structure of a real-time, arbitrary neural artistic stylization\nnetwork.\nGolnaz Ghiasi, Honglak Lee,\nManjunath Kudlur, Vincent Dumoulin, Jonathon Shlens,\nProceedings of the British Machine Vision Conference (BMVC), 2017.\nSetup\nLet's start with importing TF2 and all relevant dependencies.",
"import functools\nimport os\n\nfrom matplotlib import gridspec\nimport matplotlib.pylab as plt\nimport numpy as np\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nprint(\"TF Version: \", tf.__version__)\nprint(\"TF Hub version: \", hub.__version__)\nprint(\"Eager mode enabled: \", tf.executing_eagerly())\nprint(\"GPU available: \", tf.config.list_physical_devices('GPU'))\n\n# @title Define image loading and visualization functions { display-mode: \"form\" }\n\ndef crop_center(image):\n \"\"\"Returns a cropped square image.\"\"\"\n shape = image.shape\n new_shape = min(shape[1], shape[2])\n offset_y = max(shape[1] - shape[2], 0) // 2\n offset_x = max(shape[2] - shape[1], 0) // 2\n image = tf.image.crop_to_bounding_box(\n image, offset_y, offset_x, new_shape, new_shape)\n return image\n\n@functools.lru_cache(maxsize=None)\ndef load_image(image_url, image_size=(256, 256), preserve_aspect_ratio=True):\n \"\"\"Loads and preprocesses images.\"\"\"\n # Cache image file locally.\n image_path = tf.keras.utils.get_file(os.path.basename(image_url)[-128:], image_url)\n # Load and convert to float32 numpy array, add batch dimension, and normalize to range [0, 1].\n img = tf.io.decode_image(\n tf.io.read_file(image_path),\n channels=3, dtype=tf.float32)[tf.newaxis, ...]\n img = crop_center(img)\n img = tf.image.resize(img, image_size, preserve_aspect_ratio=True)\n return img\n\ndef show_n(images, titles=('',)):\n n = len(images)\n image_sizes = [image.shape[1] for image in images]\n w = (image_sizes[0] * 6) // 320\n plt.figure(figsize=(w * n, w))\n gs = gridspec.GridSpec(1, n, width_ratios=image_sizes)\n for i in range(n):\n plt.subplot(gs[i])\n plt.imshow(images[i][0], aspect='equal')\n plt.axis('off')\n plt.title(titles[i] if len(titles) > i else '')\n plt.show()\n",
"Let's also get some images to play with.",
"# @title Load example images { display-mode: \"form\" }\n\ncontent_image_url = 'https://upload.wikimedia.org/wikipedia/commons/thumb/f/fd/Golden_Gate_Bridge_from_Battery_Spencer.jpg/640px-Golden_Gate_Bridge_from_Battery_Spencer.jpg' # @param {type:\"string\"}\nstyle_image_url = 'https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg' # @param {type:\"string\"}\noutput_image_size = 384 # @param {type:\"integer\"}\n\n# The content image size can be arbitrary.\ncontent_img_size = (output_image_size, output_image_size)\n# The style prediction model was trained with image size 256 and it's the \n# recommended image size for the style image (though, other sizes work as \n# well but will lead to different results).\nstyle_img_size = (256, 256) # Recommended to keep it at 256.\n\ncontent_image = load_image(content_image_url, content_img_size)\nstyle_image = load_image(style_image_url, style_img_size)\nstyle_image = tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME')\nshow_n([content_image, style_image], ['Content image', 'Style image'])",
"Import TF Hub module",
"# Load TF Hub module.\n\nhub_handle = 'https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2'\nhub_module = hub.load(hub_handle)",
"The signature of this hub module for image stylization is:\noutputs = hub_module(content_image, style_image)\nstylized_image = outputs[0]\nWhere content_image, style_image, and stylized_image are expected to be 4-D Tensors with shapes [batch_size, image_height, image_width, 3].\nIn the current example we provide only single images and therefore the batch dimension is 1, but one can use the same module to process more images at the same time.\nThe input and output values of the images should be in the range [0, 1].\nThe shapes of content and style image don't have to match. Output image shape\nis the same as the content image shape.\nDemonstrate image stylization",
"# Stylize content image with given style image.\n# This is pretty fast, taking only a few milliseconds on a GPU.\n\noutputs = hub_module(tf.constant(content_image), tf.constant(style_image))\nstylized_image = outputs[0]\n\n# Visualize input images and the generated stylized image.\n\nshow_n([content_image, style_image, stylized_image], titles=['Original content image', 'Style image', 'Stylized image'])",
"Let's try it on more images",
"# @title To Run: Load more images { display-mode: \"form\" }\n\ncontent_urls = dict(\n sea_turtle='https://upload.wikimedia.org/wikipedia/commons/d/d7/Green_Sea_Turtle_grazing_seagrass.jpg',\n tuebingen='https://upload.wikimedia.org/wikipedia/commons/0/00/Tuebingen_Neckarfront.jpg',\n grace_hopper='https://storage.googleapis.com/download.tensorflow.org/example_images/grace_hopper.jpg',\n )\nstyle_urls = dict(\n kanagawa_great_wave='https://upload.wikimedia.org/wikipedia/commons/0/0a/The_Great_Wave_off_Kanagawa.jpg',\n kandinsky_composition_7='https://upload.wikimedia.org/wikipedia/commons/b/b4/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg',\n hubble_pillars_of_creation='https://upload.wikimedia.org/wikipedia/commons/6/68/Pillars_of_creation_2014_HST_WFC3-UVIS_full-res_denoised.jpg',\n van_gogh_starry_night='https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg/1024px-Van_Gogh_-_Starry_Night_-_Google_Art_Project.jpg',\n turner_nantes='https://upload.wikimedia.org/wikipedia/commons/b/b7/JMW_Turner_-_Nantes_from_the_Ile_Feydeau.jpg',\n munch_scream='https://upload.wikimedia.org/wikipedia/commons/c/c5/Edvard_Munch%2C_1893%2C_The_Scream%2C_oil%2C_tempera_and_pastel_on_cardboard%2C_91_x_73_cm%2C_National_Gallery_of_Norway.jpg',\n picasso_demoiselles_avignon='https://upload.wikimedia.org/wikipedia/en/4/4c/Les_Demoiselles_d%27Avignon.jpg',\n picasso_violin='https://upload.wikimedia.org/wikipedia/en/3/3c/Pablo_Picasso%2C_1911-12%2C_Violon_%28Violin%29%2C_oil_on_canvas%2C_Kr%C3%B6ller-M%C3%BCller_Museum%2C_Otterlo%2C_Netherlands.jpg',\n picasso_bottle_of_rum='https://upload.wikimedia.org/wikipedia/en/7/7f/Pablo_Picasso%2C_1911%2C_Still_Life_with_a_Bottle_of_Rum%2C_oil_on_canvas%2C_61.3_x_50.5_cm%2C_Metropolitan_Museum_of_Art%2C_New_York.jpg',\n fire='https://upload.wikimedia.org/wikipedia/commons/3/36/Large_bonfire.jpg',\n 
derkovits_woman_head='https://upload.wikimedia.org/wikipedia/commons/0/0d/Derkovits_Gyula_Woman_head_1922.jpg',\n amadeo_style_life='https://upload.wikimedia.org/wikipedia/commons/8/8e/Untitled_%28Still_life%29_%281913%29_-_Amadeo_Souza-Cardoso_%281887-1918%29_%2817385824283%29.jpg',\n derkovtis_talig='https://upload.wikimedia.org/wikipedia/commons/3/37/Derkovits_Gyula_Talig%C3%A1s_1920.jpg',\n amadeo_cardoso='https://upload.wikimedia.org/wikipedia/commons/7/7d/Amadeo_de_Souza-Cardoso%2C_1915_-_Landscape_with_black_figure.jpg'\n)\n\ncontent_image_size = 384\nstyle_image_size = 256\ncontent_images = {k: load_image(v, (content_image_size, content_image_size)) for k, v in content_urls.items()}\nstyle_images = {k: load_image(v, (style_image_size, style_image_size)) for k, v in style_urls.items()}\nstyle_images = {k: tf.nn.avg_pool(style_image, ksize=[3,3], strides=[1,1], padding='SAME') for k, style_image in style_images.items()}\n\n\n#@title Specify the main content image and the style you want to use. { display-mode: \"form\" }\n\ncontent_name = 'sea_turtle' # @param ['sea_turtle', 'tuebingen', 'grace_hopper']\nstyle_name = 'munch_scream' # @param ['kanagawa_great_wave', 'kandinsky_composition_7', 'hubble_pillars_of_creation', 'van_gogh_starry_night', 'turner_nantes', 'munch_scream', 'picasso_demoiselles_avignon', 'picasso_violin', 'picasso_bottle_of_rum', 'fire', 'derkovits_woman_head', 'amadeo_style_life', 'derkovtis_talig', 'amadeo_cardoso']\n\nstylized_image = hub_module(tf.constant(content_images[content_name]),\n tf.constant(style_images[style_name]))[0]\n\nshow_n([content_images[content_name], style_images[style_name], stylized_image],\n titles=['Original content image', 'Style image', 'Stylized image'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/cccr-iitm/cmip6/models/sandbox-3/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: CCCR-IITM\nSource ID: SANDBOX-3\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:48\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cccr-iitm', 'sandbox-3', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintenance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintenance respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated (horizontal, vertical, etc.)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
etendue/deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll apply what you learned to build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 15\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
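Before wiring a normalizer into the pipeline, it can help to sanity-check the idea on a toy array. The sketch below is an assumption on my part (the helper name is hypothetical, not part of the project code): since CIFAR-10 pixels are 8-bit, dividing by 255.0 maps every value into [0, 1] while preserving the input shape.

```python
import numpy as np

# Hypothetical helper, not the project's required implementation:
# CIFAR-10 pixels are 8-bit (0-255), so dividing by 255.0 maps them
# into [0, 1] while keeping the input shape unchanged.
def normalize_sketch(x):
    return np.asarray(x, dtype=np.float32) / 255.0

sample = np.array([[0, 64], [128, 255]])
print(normalize_sketch(sample))
```

Dividing by the global maximum (255) rather than min-max scaling each batch keeps the mapping identical across batches, which is usually what you want for image data.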
"from sklearn.preprocessing import minmax_scale\ndef normalize(x):\n    \"\"\"\n    Normalize a list of sample image data in the range of 0 to 1\n    : x: List of image data.  The image shape is (32, 32, 3)\n    : return: Numpy array of normalized data\n    \"\"\"\n    # TODO: Implement Function\n    shape = x.shape\n    return minmax_scale(x.flatten()).reshape(shape)\n    \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing.  This time, you'll implement the one_hot_encode function. The input, x, is a list of labels.  Implement the function to return the list of labels as a one-hot encoded Numpy array.  The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode.  Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
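As a hint toward a wheel that already exists in NumPy: indexing an identity matrix yields a one-hot encoding that is stable across calls as long as the class count is fixed. This is only a sketch (the helper name is hypothetical), with `n_classes` pinned at 10 for CIFAR-10:

```python
import numpy as np

# Hypothetical sketch: row i of the 10x10 identity matrix is the
# one-hot vector for label i, so fancy indexing encodes a whole batch.
def one_hot_sketch(labels, n_classes=10):
    return np.eye(n_classes)[np.asarray(labels)]

print(one_hot_sketch([0, 9, 3]))
```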
"from sklearn.preprocessing import label_binarize\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n return label_binarize(x,classes=[0,1,2,3,4,5,6,7,8,9])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples is randomized.  It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function.  Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function.  This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section.  TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. 
Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32,shape=(None,)+image_shape,name=\"x\")\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32,shape=(None,n_classes),name=\"y\")\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n return tf.placeholder(tf.float32,name=\"keep_prob\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n    \"\"\"\n    Apply convolution then max pooling to x_tensor\n    :param x_tensor: TensorFlow Tensor\n    :param conv_num_outputs: Number of outputs for the convolutional layer\n    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer\n    :param conv_strides: Stride 2-D Tuple for convolution\n    :param pool_ksize: kernel size 2-D Tuple for pool\n    :param pool_strides: Stride 2-D Tuple for pool\n    : return: A tensor that represents convolution and max pooling of x_tensor\n    \"\"\"\n    # TODO: Implement Function\n    # create the filter weights: [k_h, k_w, in_channels, out_channels]\n    weights_shape = list(conv_ksize) + [x_tensor.get_shape().as_list()[-1], conv_num_outputs]\n    wc = tf.Variable(tf.truncated_normal(weights_shape, stddev=0.1), name=\"wc\")\n    \n    # create the biases, one per output channel\n    bc = tf.Variable(tf.zeros(conv_num_outputs), name=\"bc\")\n    \n    # stride shape is [1, x, y, 1]\n    stride_shape = [1] + list(conv_strides) + [1]\n    # apply the convolution with SAME padding\n    conv_layer = tf.nn.conv2d(x_tensor, wc, strides=stride_shape, padding='SAME')\n    # add the biases\n    conv_layer = tf.nn.bias_add(conv_layer, bc)\n    # apply a ReLU (nonlinear) activation\n    conv_layer = tf.nn.relu(conv_layer)\n    # apply max pooling\n    pksize = [1] + list(pool_ksize) + [1]\n    pstrides = [1] + list(pool_strides) + [1]\n    \n    return tf.nn.max_pool(conv_layer, pksize, pstrides, padding='SAME') \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
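The shape arithmetic behind flattening can be checked with plain NumPy before doing it in TensorFlow. This is a sketch under the assumption of CIFAR-sized 32x32x3 images:

```python
import numpy as np

# A batch of 16 CIFAR-sized images: (Batch, Height, Width, Channels)
batch = np.zeros((16, 32, 32, 3))

# The flattened image size is the product of all non-batch dimensions
flat_size = int(np.prod(batch.shape[1:]))   # 32 * 32 * 3

# -1 lets the batch dimension stay dynamic, mirroring the TF reshape
flattened = batch.reshape(-1, flat_size)
print(flattened.shape)
```

Using `-1` for the batch dimension is what makes the same flatten work for any batch size at runtime.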
"import numpy as np\ndef flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n dim = x_tensor.get_shape().as_list()\n flattened_size = np.prod(dim[1:])\n return tf.reshape(x_tensor,[-1,flattened_size])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs):\n    \"\"\"\n    Apply a fully connected layer to x_tensor using weight and bias\n    : x_tensor: A 2-D tensor where the first dimension is batch size.\n    : num_outputs: The number of outputs that the new tensor should have.\n    : return: A 2-D tensor where the second dimension is num_outputs.\n    \"\"\"\n    # TODO: Implement Function\n    weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1), name=\"wf\")\n    biases = tf.Variable(tf.zeros(num_outputs), name=\"bf\")\n    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), biases))\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n    \"\"\"\n    Apply an output layer to x_tensor using weight and bias\n    : x_tensor: A 2-D tensor where the first dimension is batch size.\n    : num_outputs: The number of outputs that the new tensor should have.\n    : return: A 2-D tensor where the second dimension is num_outputs.\n    \"\"\"\n    # TODO: Implement Function\n    weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1), name=\"wo\")\n    biases = tf.Variable(tf.zeros(num_outputs), name=\"bo\")\n    return tf.add(tf.matmul(x_tensor, weights), biases)\n    \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n    \"\"\"\n    Create a convolutional neural network model\n    : x: Placeholder tensor that holds image data.\n    : keep_prob: Placeholder tensor that holds dropout keep probability.\n    : return: Tensor that represents logits\n    \"\"\"\n    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n    #    Play around with different number of outputs, kernel size and stride\n    # Function Definition from Above:\n    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n    # layer 1\n    x_tensor = conv2d_maxpool(x, 64, (3,3), (1,1), (2,2), (1,1))\n    \n    # layer 2\n    x_tensor = conv2d_maxpool(x_tensor, 64, (5,5), (1,1), (3,3), (1,1))\n    \n    # TODO: Apply a Flatten Layer\n    # Function Definition from Above:\n    #   flatten(x_tensor)\n    x_tensor = flatten(x_tensor)\n    \n    # TODO: Apply 1, 2, or 3 Fully Connected Layers\n    #    Play around with different number of outputs\n    # Function Definition from Above:\n    x_tensor = fully_conn(x_tensor, 192)\n    x_tensor = fully_conn(x_tensor, 64)\n    \n    x_tensor = tf.nn.dropout(x_tensor, keep_prob)\n    \n    # TODO: Apply an Output Layer\n    #    Set this to the number of classes\n    # Function Definition from Above:\n    #   output(x_tensor, num_outputs)\n    \n    # TODO: return output\n    return output(x_tensor, 10)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that it can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# 
Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n    \"\"\"\n    Optimize the session on a batch of images and labels\n    : session: Current TensorFlow session\n    : optimizer: TensorFlow optimizer function\n    : keep_probability: keep probability\n    : feature_batch: Batch of Numpy image data\n    : label_batch: Batch of Numpy label data\n    \"\"\"\n    # TODO: Implement Function\n    session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef print_stats(session, feature_batch, label_batch, cost, accuracy):\n    \"\"\"\n    Print information about loss and validation accuracy\n    : session: Current TensorFlow session\n    : feature_batch: Batch of Numpy image data\n    : label_batch: Batch of Numpy label data\n    : cost: TensorFlow cost function\n    : accuracy: TensorFlow accuracy function\n    \"\"\"\n    # TODO: Implement Function\n    # Evaluate loss and accuracy in a single run; keep_prob is 1.0 so\n    # dropout is disabled during evaluation\n    feed = {x: valid_features[0:2048], y: valid_labels[0:2048], keep_prob: 1.0}\n    valid_loss, valid_accu = session.run([cost, accuracy], feed_dict=feed)\n    print('Validation Loss: {:>10.4f} Accuracy: {:.4f}'.format(valid_loss, valid_accu))\n    ",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 10\nbatch_size = 128\nkeep_probability = 0.5",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n ",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, 
random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%.  That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rishuatgithub/MLPy
|
torch/PYTORCH_NOTEBOOKS/01-PyTorch-Basics/00-Tensor-Basics.ipynb
|
apache-2.0
|
[
"<img src=\"../Pierian-Data-Logo.PNG\">\n<br>\n<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>\nTensor Basics\nThis section covers:\n* Converting NumPy arrays to PyTorch tensors\n* Creating tensors from scratch\nPerform standard imports",
"import torch\nimport numpy as np",
"Confirm you're using PyTorch version 1.1.0",
"torch.__version__",
"Converting NumPy arrays to PyTorch tensors\nA <a href='https://pytorch.org/docs/stable/tensors.html'><strong><tt>torch.Tensor</tt></strong></a> is a multi-dimensional matrix containing elements of a single data type.<br>\nCalculations between tensors can only happen if the tensors share the same dtype.<br>\nIn some cases tensors are used as a replacement for NumPy to use the power of GPUs (more on this later).",
"arr = np.array([1,2,3,4,5])\nprint(arr)\nprint(arr.dtype)\nprint(type(arr))\n\nx = torch.from_numpy(arr)\n# Equivalent to x = torch.as_tensor(arr)\n\nprint(x)\n\n# Print the type of data held by the tensor\nprint(x.dtype)\n\n# Print the tensor object type\nprint(type(x))\nprint(x.type()) # this is more specific!\n\narr2 = np.arange(0.,12.).reshape(4,3)\nprint(arr2)\n\nx2 = torch.from_numpy(arr2)\nprint(x2)\nprint(x2.type())",
"Here <tt>torch.DoubleTensor</tt> refers to 64-bit floating point data.\n<h2><a href='https://pytorch.org/docs/stable/tensors.html'>Tensor Datatypes</a></h2>\n<table style=\"display: inline-block\">\n<tr><th>TYPE</th><th>NAME</th><th>EQUIVALENT</th><th>TENSOR TYPE</th></tr>\n<tr><td>32-bit integer (signed)</td><td>torch.int32</td><td>torch.int</td><td>IntTensor</td></tr>\n<tr><td>64-bit integer (signed)</td><td>torch.int64</td><td>torch.long</td><td>LongTensor</td></tr>\n<tr><td>16-bit integer (signed)</td><td>torch.int16</td><td>torch.short</td><td>ShortTensor</td></tr>\n<tr><td>32-bit floating point</td><td>torch.float32</td><td>torch.float</td><td>FloatTensor</td></tr>\n<tr><td>64-bit floating point</td><td>torch.float64</td><td>torch.double</td><td>DoubleTensor</td></tr>\n<tr><td>16-bit floating point</td><td>torch.float16</td><td>torch.half</td><td>HalfTensor</td></tr>\n<tr><td>8-bit integer (signed)</td><td>torch.int8</td><td></td><td>CharTensor</td></tr>\n<tr><td>8-bit integer (unsigned)</td><td>torch.uint8</td><td></td><td>ByteTensor</td></tr></table>\n\nCopying vs. sharing\n<a href='https://pytorch.org/docs/stable/torch.html#torch.from_numpy'><strong><tt>torch.from_numpy()</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.as_tensor'><strong><tt>torch.as_tensor()</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.tensor'><strong><tt>torch.tensor()</tt></strong></a><br>\nThere are a number of different functions available for <a href='https://pytorch.org/docs/stable/torch.html#creation-ops'>creating tensors</a>. When using <a href='https://pytorch.org/docs/stable/torch.html#torch.from_numpy'><strong><tt>torch.from_numpy()</tt></strong></a> and <a href='https://pytorch.org/docs/stable/torch.html#torch.as_tensor'><strong><tt>torch.as_tensor()</tt></strong></a>, the PyTorch tensor and the source NumPy array share the same memory. This means that changes to one affect the other. 
However, the <a href='https://pytorch.org/docs/stable/torch.html#torch.tensor'><strong><tt>torch.tensor()</tt></strong></a> function always makes a copy.",
"# Using torch.from_numpy()\narr = np.arange(0,5)\nt = torch.from_numpy(arr)\nprint(t)\n\narr[2]=77\nprint(t)\n\n# Using torch.tensor()\narr = np.arange(0,5)\nt = torch.tensor(arr)\nprint(t)\n\narr[2]=77\nprint(t)",
"Class constructors\n<a href='https://pytorch.org/docs/stable/tensors.html'><strong><tt>torch.Tensor()</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/tensors.html'><strong><tt>torch.FloatTensor()</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/tensors.html'><strong><tt>torch.LongTensor()</tt></strong></a>, etc.<br>\nThere's a subtle difference between using the factory function <font color=black><tt>torch.tensor(data)</tt></font> and the class constructor <font color=black><tt>torch.Tensor(data)</tt></font>.<br>\nThe factory function determines the dtype from the incoming data, or from a passed-in dtype argument.<br>\nThe class constructor <tt>torch.Tensor()</tt> is simply an alias for <tt>torch.FloatTensor(data)</tt>. Consider the following:",
"data = np.array([1,2,3])\n\na = torch.Tensor(data)  # Equivalent to a = torch.FloatTensor(data)\nprint(a, a.type())\n\nb = torch.tensor(data)\nprint(b, b.type())\n\nc = torch.tensor(data, dtype=torch.long)\nprint(c, c.type())",
"Creating tensors from scratch\nUninitialized tensors with <tt>.empty()</tt>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.empty'><strong><tt>torch.empty()</tt></strong></a> returns an <em>uninitialized</em> tensor. Essentially a block of memory is allocated according to the size of the tensor, and any values already sitting in the block are returned. This is similar to the behavior of <tt>numpy.empty()</tt>.",
"x = torch.empty(4, 3)\nprint(x)",
"Initialized tensors with <tt>.zeros()</tt> and <tt>.ones()</tt>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.zeros'><strong><tt>torch.zeros(size)</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.ones'><strong><tt>torch.ones(size)</tt></strong></a><br>\nIt's a good idea to pass in the intended dtype.",
"x = torch.zeros(4, 3, dtype=torch.int64)\nprint(x)",
"Tensors from ranges\n<a href='https://pytorch.org/docs/stable/torch.html#torch.arange'><strong><tt>torch.arange(start,end,step)</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.linspace'><strong><tt>torch.linspace(start,end,steps)</tt></strong></a><br>\nNote that with <tt>.arange()</tt>, <tt>end</tt> is exclusive, while with <tt>linspace()</tt>, <tt>end</tt> is inclusive.",
"x = torch.arange(0,18,2).reshape(3,3)\nprint(x)\n\nx = torch.linspace(0,18,12).reshape(3,4)\nprint(x)",
"Tensors from data\n<tt>torch.tensor()</tt> will choose the dtype based on incoming data:",
"x = torch.tensor([1, 2, 3, 4])\nprint(x)\nprint(x.dtype)\nprint(x.type())",
"Alternatively you can set the type by the tensor method used.\nFor a list of tensor types visit https://pytorch.org/docs/stable/tensors.html",
"x = torch.FloatTensor([5,6,7])\nprint(x)\nprint(x.dtype)\nprint(x.type())",
"You can also pass the dtype in as an argument. For a list of dtypes visit https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.dtype<br>",
"x = torch.tensor([8,9,-3], dtype=torch.int)\nprint(x)\nprint(x.dtype)\nprint(x.type())",
"Changing the dtype of existing tensors\nDon't be tempted to use <tt>x = torch.tensor(x, dtype=torch.type)</tt> as it will raise an error about improper use of tensor cloning.<br>\nInstead, use the tensor <tt>.type()</tt> method.",
"print('Old:', x.type())\n\nx = x.type(torch.int64)\n\nprint('New:', x.type())",
"Random number tensors\n<a href='https://pytorch.org/docs/stable/torch.html#torch.rand'><strong><tt>torch.rand(size)</tt></strong></a> returns random samples from a uniform distribution over [0, 1)<br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.randn'><strong><tt>torch.randn(size)</tt></strong></a> returns samples from the \"standard normal\" distribution [σ = 1]<br>\n Unlike <tt>rand</tt> which is uniform, values closer to zero are more likely to appear.<br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.randint'><strong><tt>torch.randint(low,high,size)</tt></strong></a> returns random integers from low (inclusive) to high (exclusive)",
"x = torch.rand(4, 3)\nprint(x)\n\nx = torch.randn(4, 3)\nprint(x)\n\nx = torch.randint(0, 5, (4, 3))\nprint(x)",
"Random number tensors that follow the input size\n<a href='https://pytorch.org/docs/stable/torch.html#torch.rand_like'><strong><tt>torch.rand_like(input)</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.randn_like'><strong><tt>torch.randn_like(input)</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.randint_like'><strong><tt>torch.randint_like(input,low,high)</tt></strong></a><br> these return random number tensors with the same size as <tt>input</tt>",
"x = torch.zeros(2,5)\nprint(x)\n\nx2 = torch.randn_like(x)\nprint(x2)",
"The same syntax can be used with<br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.zeros_like'><strong><tt>torch.zeros_like(input)</tt></strong></a><br>\n<a href='https://pytorch.org/docs/stable/torch.html#torch.ones_like'><strong><tt>torch.ones_like(input)</tt></strong></a>",
"x3 = torch.ones_like(x2)\nprint(x3)",
"Setting the random seed\n<a href='https://pytorch.org/docs/stable/torch.html#torch.manual_seed'><strong><tt>torch.manual_seed(int)</tt></strong></a> is used to obtain reproducible results",
"torch.manual_seed(42)\nx = torch.rand(2, 3)\nprint(x)\n\ntorch.manual_seed(42)\nx = torch.rand(2, 3)\nprint(x)",
"Tensor attributes\nBesides <tt>dtype</tt>, we can look at other <a href='https://pytorch.org/docs/stable/tensor_attributes.html'>tensor attributes</a> like <tt>shape</tt>, <tt>device</tt> and <tt>layout</tt>",
"x.shape\n\nx.size() # equivalent to x.shape\n\nx.device",
"PyTorch supports use of multiple <a href='https://pytorch.org/docs/stable/tensor_attributes.html#torch-device'>devices</a>, harnessing the power of one or more GPUs in addition to the CPU. We won't explore that here, but you should know that operations between tensors can only happen for tensors allocated on the same device.",
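Since same-device tensors are a hard requirement for operations, the usual idiom is to pick a device once and move tensors onto it with `.to()`. This sketch is not from the course material and assumes nothing beyond a stock PyTorch install:

```python
import torch

# Pick a device once: GPU when available, otherwise fall back to CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.zeros(2, 3)
y = x.to(device)   # .to() moves (or copies) the tensor to the chosen device
print(y.device)
```

Writing code this way keeps it device-agnostic: the same script runs on a laptop CPU or a GPU box without edits.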
"x.layout",
"PyTorch has a class to hold the <a href='https://pytorch.org/docs/stable/tensor_attributes.html#torch.torch.layout'>memory layout</a> option. The default setting of <a href='https://en.wikipedia.org/wiki/Stride_of_an_array'>strided</a> will suit our purposes throughout the course.\nGreat job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
biosustain/marsi
|
notebooks/1.1 Sensitivity Analysis.ipynb
|
apache-2.0
|
[
"from bokeh.io import output_notebook\noutput_notebook()\n\nfrom cameo import models\nfrom marsi.cobra.flux_analysis import sensitivity_analysis\n\nmodel = models.bigg.iJO1366",
"Sensitivity analysis for L-Serine\nIn this example, the ammount of produced serine is increased in steps. The biomass production will decrease with increased accumulation of Serine. This is a scenario where an metabolite analog would compete with Serine and the cell needs to increase the production of Serine to compete for biomass production and enzyme activity.",
"ser__L = model.metabolites.ser__L_c\n\nresult = sensitivity_analysis(model, ser__L, is_essential=True, steps=10,\n biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)\n\nresult.data_frame\n\nresult.plot(width=700, height=500)",
"We can also see how does this affects other variables.",
"result = sensitivity_analysis(model, ser__L, is_essential=True, steps=10, \n variables=[model.reactions.SERAT, model.reactions.SUCOAS],\n biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)\n\nresult.data_frame\n\nresult.plot(width=700, height=500)",
"The same analysis can be done with different simulation methods (e.g. lMOMA).",
"from cameo.flux_analysis.simulation import lmoma\n\nresult = sensitivity_analysis(model, ser__L, is_essential=True, steps=10, simulation_method=lmoma,\n biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)\n\nresult.plot(width=700, height=500)",
"Sensitivity analysis for Pyruvate\nIn this example, the pyruvate of succinate is decreased. This is a scenario where the cells are evolved with a toxic compound and the consumption turnover of that compound decreases.",
"pyr = model.metabolites.pyr_c\n\nresult = sensitivity_analysis(model, pyr, is_essential=False, steps=10,\n biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)\n\nresult.plot(width=700, height=500)\n\nresult = sensitivity_analysis(model, pyr, is_essential=False, steps=10, simulation_method=lmoma,\n biomass=model.reactions.BIOMASS_Ec_iJO1366_core_53p95M)\n\nresult.data_frame"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/covid-19-open-data
|
examples/category_estimation.ipynb
|
apache-2.0
|
[
"Estimating Current Cases by Category\nThis notebook explores a methodology to estimate current mild, severe and critical patients. Both mild and critical categories appear to be correlated to independently reported categories from Italy's ministry of health.\nMost of the reporting of COVID-19 data, including what is reported by the ECDC, focuses on the daily counts of new cases and deaths. While this is useful for tracking the general development of the disease, it provides little information about the capacity required by a health care system to cope with the case load. Here we explore a methodology to estimate total active cases (cases between onset and recovery / death) broken down by category.\nBreakdown of cases\nEarly data from China classified the cases into mild, severe and critical with 80.9%, 13.8% and 4.7% of occurrences respectively (source). While this might be useful for categorising cases, it does not appear to match some outcome-based data like hospitalization rate where, as of March 26, Italy reported 43.2%, Spain 56.80% and New York 15% of all confirmed cases.\nSuch wild range in hospitalization rates is potentially due to different criteria being used across health care systems as well as hitting potential system capacity limits. Therefore, the estimations performed here cannot be used as a proxy for hospitalization rates unless a country-dependent correcting factor is applied. However, using an estimate of 5% of all cases appears to be a good predictor for ICU admission rates, and 15% of all cases seem to correlate to a rate of confirmed cases that only experience mild symptoms.",
"# Since reported numbers are approximate, they are rounded for the sake of simplicity\nsevere_ratio = .15\ncritical_ratio = .05\nmild_ratio = 1 - severe_ratio - critical_ratio",
"Discharge time for severe vs critical cases\nUnfortunately, early data from Chinese sources only reported a median stay of 12 and a mean stay of 12.8 for all hospitalizations without specifying which of them required ICU resources.\nSince we know the ratio of severe vs critical cases, we only need to guess the discharge time of severe cases since there will only be one way to satisfy the constraint of overall hospitalization median and mean. Here, we plot the estimated discharge time for critical cases (y axis) given a discharge time for severe cases (x axis):",
"import numpy as np\nimport pandas as pd\nimport seaborn as sns\nsns.set()\n\n# Data from early Chinese reports\nmean_discharge_time = 12.8\nsevere_ratio_norm = severe_ratio / (severe_ratio + critical_ratio) \ncritical_ratio_norm = critical_ratio / (severe_ratio + critical_ratio) \n\ndef compute_icu_discharge_time(severe_discharge_time, mean_discharge_time: float = 12.8):\n ''' Using mean discharge time from https://www.cnn.com/2020/03/20/health/covid-19-recovery-rates-intl/index.html '''\n return (mean_discharge_time - severe_discharge_time * severe_ratio_norm) / critical_ratio_norm\n\nX = np.linspace(10, 15, 100)\ny = np.array([compute_icu_discharge_time(x) for x in X])\n\nX_name = 'Hospitalization time for severe cases (days)'\ny_name = 'Hospitalization time for critical cases(days)'\ndf = pd.DataFrame([(x, y_) for x, y_ in zip(X, y)], columns=[X_name, y_name]).set_index(X_name)\ndf.plot(figsize=(16, 8), grid=True, ylim=(0, max(y)));",
"Because the data is reported daily, we can only pick an estimated whole number for both variables. The only possible value for hospital discharge days that would result in a median discharge time of 12 days is, unsurprisingly, 12. Then, the estimated ICU discharge days is 15 days.",
"severe_recovery_days = 12\ncritical_recovery_days = 15",
"Recovery time for mild cases\nIn order to estimate the current number of mild cases, there is one more number that we must guess: how many days it takes for recovery, on average, for all cases that are not severe or critical. Reported recovery times from a COVID-19 infection range from 2 weeks (source) to \"a week to 10 days\" (source). After experimenting with several choices, using a median recovery time of 7 days appears to match empirical data from multiple official reports.",
"mild_recovery_days = 7",
"Estimating new cases breakdown\nUsing Italy's data up to March 26, assuming the proportion of each category remains constant for every new case, daily breakdowns can be estimated by multiplying the number of new cases by the ratios of the respective categories:",
"import pandas as pd\n\n# Load country-level data for Italy\ndata = pd.read_csv('https://storage.googleapis.com/covid19-open-data/v2/IT/main.csv').set_index('date')\n\n# Estimate daily counts per category assuming ratio is constant\ndata = data[data.index <= '2020-03-27']\ndata['new_mild'] = data['new_confirmed'] * mild_ratio\ndata['new_severe'] = data['new_confirmed'] * severe_ratio\ndata['new_critical'] = data['new_confirmed'] * critical_ratio\ndata = data[['new_confirmed', 'new_deceased', 'new_mild', 'new_severe', 'new_critical']]\n\ndata.tail()",
"Estimating current cases breakdown\nNow that we have an estimate for the category breakdown of new cases as well as for the discharge time per category, we can estimate the number of current cases per category by adding up each category over a rolling window:",
"data['current_mild'] = data['new_mild'].rolling(round(mild_recovery_days)).sum()\ndata['current_severe'] = data['new_severe'].rolling(round(severe_recovery_days)).sum()\ndata['current_critical'] = data['new_critical'].rolling(round(critical_recovery_days)).sum()\ndata.tail()",
"Comparing with Italy's home care, hospitalizations and ICU counts\nItaly categorises the cases into home care, hospitalizations and ICU admission, which appear to map well into mild, severe and critical categories. From Italy's ministry of health, this is the breakdown as of March 26 of cases compared to our model estimates:",
"estimated = data.iloc[-1]\nreported = pd.DataFrame.from_records([\n {'Category': 'current_mild', 'Count': 30920},\n {'Category': 'current_severe', 'Count': 23112},\n {'Category': 'current_critical', 'Count': 3489},\n]).set_index('Category')\n\npd.DataFrame.from_records([\n {\n 'Category': col, \n 'Estimated': '{0:.02f}'.format(estimated[col]),\n 'Reported': reported.loc[col, 'Count'],\n 'Difference': '{0:.02f}%'.format(100.0 * (estimated[col] - reported.loc[col, 'Count']) / reported.loc[col, 'Count']),\n }\n for col in reported.index.tolist()\n]).set_index('Category')",
"While the severe category does not match what was estimated by the model, both mild cases and critical cases were very accurate predictors for home care and ICU patients, respectively."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Flaviolib/dx
|
06_dx_portfolio_parallel.ipynb
|
agpl-3.0
|
[
"<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"45%\" align=\"right\" border=\"4\">\nParallel Valuation of Large Portfolios\nDerivatives (portfolio) valuation by Monte Carlo simulation is a computationally demanding task. For practical applications, when valuation speed plays an important role, parallelization of both simulation and valuation tasks might prove a useful strategy. DX Analytics has built in a basic parallelization option which allows the use of the Python mulitprocessing module. Depending on the tasks at hand this can already lead to significant speed-ups.",
"from dx import *\nimport time\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\n%matplotlib inline",
"Single Risk Factor\nThe example is based on a single risk factor, a geometric_brownian_motion object.",
"# constant short rate\nr = constant_short_rate('r', 0.02)\n\n# market environments\nme_gbm = market_environment('gbm', dt.datetime(2015, 1, 1))\n\n# geometric Brownian motion\nme_gbm.add_constant('initial_value', 100.)\nme_gbm.add_constant('volatility', 0.2) \nme_gbm.add_constant('currency', 'EUR')\nme_gbm.add_constant('model', 'gbm')\n\n# valuation environment\nval_env = market_environment('val_env', dt.datetime(2015, 1, 1))\nval_env.add_constant('paths', 25000)\nval_env.add_constant('frequency', 'M')\nval_env.add_curve('discount_curve', r)\nval_env.add_constant('starting_date', dt.datetime(2015, 1, 1))\nval_env.add_constant('final_date', dt.datetime(2015, 12, 31))\n\n# add valuation environment to market environments\nme_gbm.add_environment(val_env)\n\nrisk_factors = {'gbm' : me_gbm}",
"American Put Option\nWe also model only a single derivative instrument.",
"gbm = geometric_brownian_motion('gbm_obj', me_gbm)\n\nme_put = market_environment('put', dt.datetime(2015, 1, 1))\nme_put.add_constant('maturity', dt.datetime(2015, 12, 31))\nme_put.add_constant('strike', 40.)\nme_put.add_constant('currency', 'EUR')\nme_put.add_environment(val_env)\n\nam_put = valuation_mcs_american_single(\n 'am_put', mar_env=me_put, underlying=gbm,\n payoff_func='np.maximum(strike - instrument_values, 0)')",
"Large Portfolio\nHowever, the derivatives_portfolio object we compose consists of 100 derivatives positions. Each option differes with respect to the strike.",
"positions = {}\nstrikes = np.linspace(80, 120, 100)\nfor i, strike in enumerate(strikes):\n positions[i] = derivatives_position(\n name='am_put_pos_%s' % strike,\n quantity=1,\n underlyings=['gbm'],\n mar_env=me_put,\n otype='American single',\n payoff_func='np.maximum(%5.3f - instrument_values, 0)' % strike)",
"Sequential Valuation\nFirst, the derivatives portfolio with sequential valuation.",
"port_sequ = derivatives_portfolio(\n name='portfolio',\n positions=positions,\n val_env=val_env,\n risk_factors=risk_factors,\n correlations=None,\n parallel=False) # sequential calculation",
"The call of the get_values method to value all instruments ...",
"t0 = time.time()\nress = port_sequ.get_values()\nts = time.time() - t0\nprint \"Time in sec %.2f\" % ts",
"... and the results visualized.",
"ress['strike'] = strikes\nress.set_index('strike')['value'].plot(figsize=(10, 6))\nplt.ylabel('option value estimates')",
"Parallel Valuation\nSecond, the derivatives portfolio with parallel valuation.",
"port_para = derivatives_portfolio(\n 'portfolio',\n positions,\n val_env,\n risk_factors,\n correlations=None,\n parallel=True) # parallel valuation",
"The call of the get_values method for the parall valuation case.",
"t0 = time.time()\nresp = port_para.get_values()\n # parallel valuation with as many cores as available\ntp = time.time() - t0\nprint \"Time in sec %.2f\" % tp",
"Again, the results visualized (and compared to the sequential results).",
"plt.figure(figsize=(10, 6))\nplt.plot(strikes, resp['value'].values, 'r.', label='parallel')\nplt.plot(strikes, ress['value'].values, 'b', label='sequential')\nplt.legend(loc=0)\nplt.ylabel('option value estimates')",
"Speed.up\nThe realized speed-up is of course dependend on the hardware used, and in particular the number of cores (threads) available.",
"ts / tp\n # speed-up factor\n # of course harware-dependent\n\nwi = 0.4\nplt.figure(figsize=(10, 6))\nplt.bar((1.5 - wi/2, 2.5 - wi/2), (ts/ts, tp/ts), width=wi)\nplt.xticks((1.5, 2.5), ('sequential', 'parallel'))\nplt.ylim(0, 1.1), plt.xlim(0.75, 3.25)\nplt.ylabel('relative performance (lower = better)')\nplt.title('DX Analytics Portfolio Valuation')",
"Copyright, License & Disclaimer\n© Dr. Yves J. Hilpisch | The Python Quants GmbH\nDX Analytics (the \"dx library\") is licensed under the GNU Affero General Public License\nversion 3 or later (see http://www.gnu.org/licenses/).\nDX Analytics comes with no representations\nor warranties, to the extent permitted by applicable law.\n<img src=\"http://hilpisch.com/tpq_logo.png\" alt=\"The Python Quants\" width=\"35%\" align=\"right\" border=\"0\"><br>\nhttp://tpq.io | team@tpq.io | http://twitter.com/dyjh\nQuant Platform |\nhttp://quant-platform.com\nDerivatives Analytics with Python (Wiley Finance) |\nhttp://derivatives-analytics-with-python.com\nPython for Finance (O'Reilly) |\nhttp://python-for-finance.com"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tkurfurst/deep-learning
|
image-classification/Project 2 Submission 1 63.1%/dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 10\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n # TODO: Implement Function\n maximum = np.max(x)\n minimum = np.min(x)\n return (x - minimum) / (maximum - minimum)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"from sklearn import preprocessing\nlabels = np.array([0,1,2,3,4,5,6,7,8,9])\none_hot = preprocessing.LabelBinarizer()\none_hot.fit(labels)\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # TODO: Implement Function\n \n return one_hot.transform(x)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. 
Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a bach of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n # TODO: Implement Function\n\n x = tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name='x')\n \n return x \n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n # TODO: Implement Function\n \n y = tf.placeholder(tf.float32, [None, n_classes], name='y')\n \n return y \n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n # TODO: Implement Function\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n return keep_prob\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # TODO: Implement Function\n \n\n x_shape = x_tensor.get_shape().as_list()\n xb = x_shape[0]\n xh = x_shape[1]\n xw = x_shape[2]\n xd = x_shape[3]\n \n ###\n # CHECK: random_normal or truncated_normal, mean=0.0 stdev=0.05 or 1.0\n ###\n \n weights = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], xd, conv_num_outputs], mean=0.0, stddev=0.05))\n\n biases = tf.Variable(tf.random_normal([conv_num_outputs]))\n \n def conv2d(x, W, b, strides=[1,1]):\n x = tf.nn.conv2d(x, W, strides=[1, strides[0], strides[1], 1], padding='SAME')\n x = tf.nn.bias_add(x, b)\n return tf.nn.relu(x)\n \n def maxpool2d(x, k=[2,2], s=[2,2]):\n return tf.nn.max_pool(\n x,\n ksize=[1, k[0], k[1], 1],\n strides=[1, s[0], s[1], 1],\n padding='SAME')\n\n conv = conv2d(x_tensor, weights, biases, strides=conv_strides)\n conv2dmax = maxpool2d(conv, k=pool_ksize, s=pool_strides)\n \n return conv2dmax \n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # TODO: Implement Function\n \n x_shape = x_tensor.get_shape().as_list()\n xb = x_shape[0]\n xh = x_shape[1]\n xw = x_shape[2]\n xd = x_shape[3]\n \n # flat = tf.reshape(x_tensor, [-1, weights['wd1'].get_shape().as_list()[0]])\n\n flat = tf.reshape(x_tensor, [-1, xh * xw * xd])\n \n return flat\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n \n x_shape = x_tensor.get_shape().as_list()\n xb = x_shape[0]\n xl = x_shape[1]\n \n ###\n # CHECK: random_normal or truncated_normal, mean=0.0 stdev=0.05 or 1.0\n ###\n \n weights = tf.Variable(tf.truncated_normal([xl, num_outputs], mean=0.0, stddev=0.05))\n biases = tf.Variable(tf.random_normal([num_outputs]))\n \n fc = tf.add(tf.matmul(x_tensor, weights), biases)\n fc = tf.nn.relu(fc)\n \n return fc\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n # TODO: Implement Function\n \n x_shape = x_tensor.get_shape().as_list()\n xb = x_shape[0]\n xl = x_shape[1]\n \n ###\n # CHECK: random_normal or truncated_normal, mean=0.0 stdev=0.05 or 1.0\n ###\n \n weights = tf.Variable(tf.truncated_normal([xl, num_outputs], mean=0.0, stddev=0.05))\n biases = tf.Variable(tf.random_normal([num_outputs]))\n \n out = tf.add(tf.matmul(x_tensor, weights), biases)\n \n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers\n # Play around with different number of outputs, kernel size and stride\n # Function Definition from Above:\n # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n \n # CHECK - set hyperparameters\n \n conv_num_outputs = 16\n conv_ksize = [5,5]\n conv_strides = [1,1]\n pool_ksize = [2,2]\n pool_strides = [2,2]\n \n conv2dmax1 = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)\n\n # TODO: Apply a Flatten Layer\n # Function Definition from Above:\n # flatten(x_tensor)\n \n flat1 = flatten(conv2dmax1)\n\n # TODO: Apply 1, 2, or 3 Fully Connected Layers\n # Play around with different number of outputs\n # Function Definition from Above:\n # fully_conn(x_tensor, num_outputs)\n \n # CHECK - set hyperparameters\n \n fc1_outputs = 1024\n \n fc1 = fully_conn(flat1, fc1_outputs)\n \n \n # TODO - ME: Apply Dropout between Fully Connected and FC or Outpit Layers\n # CHECK\n \n drop1 = tf.nn.dropout(fc1, keep_prob)\n \n # TODO: Apply an Output Layer\n # Set this to the number of classes\n # Function Definition from Above:\n # output(x_tensor, num_outputs)\n \n output_classes = 10\n \n out = output(drop1, output_classes)\n \n # TODO: return output\n \n return out\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# 
Name logits Tensor, so that it can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
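The cost above is the mean softmax cross-entropy between the logits and the one-hot labels. As a sketch of the math tf.nn.softmax_cross_entropy_with_logits implements (not TensorFlow's actual code), the same value can be computed in NumPy, shifting by the row max for numerical stability:

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy between softmax(logits) and one-hot labels.
    Subtracting each row's max keeps exp() from overflowing."""
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -np.mean((labels * log_probs).sum(axis=1))

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # true class is index 0
print(float(softmax_cross_entropy(logits, labels)))
```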
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n # TODO: Implement Function\n \n \n session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n # TODO: Implement Function\n \n loss = session.run(cost, feed_dict = {x: feature_batch, y: label_batch, keep_prob: 1.0})\n valid_acc = session.run(accuracy, feed_dict = {x: valid_features, y: valid_labels, keep_prob: 1.0})\n \n print('Loss: {:>6.4f} Validation Accuracy: {:.6f}'.format(\n loss,\n valid_acc))",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or starts overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters - Hyperparameters\nepochs = 50\nbatch_size = 128\nkeep_probability = 0.50",
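keep_probability drives dropout: each activation survives with that probability, and tf.nn.dropout scales survivors by 1/keep_prob ("inverted dropout") so the expected activation is unchanged and no rescaling is needed at test time. A NumPy sketch of the idea:

```python
import numpy as np

def dropout(x, keep_prob, rng=np.random.default_rng(0)):
    """Inverted dropout: zero each unit with probability 1 - keep_prob,
    then scale survivors by 1/keep_prob so E[output] == input."""
    mask = rng.random(x.shape) < keep_prob
    return x * mask / keep_prob

x = np.ones(100000)
out = dropout(x, 0.5)
print(out.mean())  # close to 1.0 on average despite half the units being zeroed
```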
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, 
random_test_predictions)\n\n\ntest_model()",
"Why 50-70% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 70%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as an HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
flyinactor91/Find-Me
|
FindMe.ipynb
|
mit
|
[
"Find Me\nMichael duPont - CodeCamp 2017\n\nFind Faces\nThe first thing we need to do is pick out faces from a larger image. Because the model for this is not user or case specific, we can use an existing model, load it with OpenCV, and tune the hyperparameters instead of building one from scratch, which we will have to do later.",
"import cv2\nimport numpy as np\n\nCASCADE = cv2.CascadeClassifier('findme/haar_cc_front_face.xml')\n\ndef find_faces(img: np.ndarray, sf=1.16, mn=5) -> np.array([[int]]):\n \"\"\"Returns a list of bounding boxes for every face found in an image\"\"\"\n return CASCADE.detectMultiScale(\n cv2.cvtColor(img, cv2.COLOR_RGB2GRAY),\n scaleFactor=sf,\n minNeighbors=mn,\n minSize=(45, 45),\n flags=cv2.CASCADE_SCALE_IMAGE\n )",
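The scaleFactor hyperparameter controls how densely detectMultiScale scans across scales: each pass the effective search window grows by that factor, so a larger factor means fewer, coarser passes (and a faster but less thorough search). A sketch of the window sizes sf=1.16 implies between the 45 px minimum and a 400 px image edge (the 400 px figure is illustrative, not from the notebook):

```python
def scan_sizes(min_size, max_size, scale_factor):
    """Window sizes a Haar cascade effectively evaluates,
    each one scale_factor times larger than the last."""
    sizes = []
    size = float(min_size)
    while size <= max_size:
        sizes.append(round(size))
        size *= scale_factor
    return sizes

print(scan_sizes(45, 400, 1.16))  # 15 scales between 45 px and 400 px
```

Raising sf toward 1.3 would roughly halve the number of scales, speeding detection at the cost of missing faces whose size falls between passes.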
"That's really all we need. Now let's test it by drawing rectangles around a few images of groups. Here's one example:",
"import matplotlib.pyplot as plt\nfrom matplotlib.image import imread, imsave\n%matplotlib inline\n\nplt.imshow(imread('test_imgs/initial/group0.jpg'))\n\nfrom glob import glob\n\ndef draw_boxes(bboxes: [[int]], img: 'np.array', line_width: int=2) -> 'np.array':\n \"\"\"Returns an image array with the bounding boxes drawn around potential faces\"\"\"\n for x, y, w, h in bboxes:\n cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), line_width)\n return img\n\n#Find faces for each test image\nfor fname in glob('test_imgs/initial/group*.jpg'):\n img = imread(fname)\n bboxes = find_faces(img)\n print(bboxes)\n imsave(fname.replace('initial', 'find_faces'), draw_boxes(bboxes, img))\n\nplt.imshow(imread('test_imgs/find_faces/group0.jpg'))",
"After tuning the hyperparameters, we're getting good face identification over our test images.\nBuild Dataset\nBase Corpus\nNow let's use this to build a base corpus of \"these faces are not mine\" so we can augment it later with the face we want to target.",
"#Creates cropped faces for imgs matching 'test_imgs/group*.jpg'\n\ndef crop(img: np.ndarray, x: int, y: int, width: int, height: int) -> np.ndarray:\n \"\"\"Returns an image cropped to a given bounding box of top-left coords, width, and height\"\"\"\n return img[y:y+height, x:x+width]\n\ndef pull_faces(glob_in: str, path_out: str) -> int:\n \"\"\"Pulls faces out of images found in glob_in and saves them as path_out\n Returns the total number of faces found\n \"\"\"\n i = 0\n for fname in glob(glob_in):\n print(fname)\n img = imread(fname)\n bboxes = find_faces(img)\n for bbox in bboxes:\n cropped = crop(img, *bbox)\n imsave(path_out.format(i), cropped)\n i += 1\n return i\n\nfound = pull_faces('test_imgs/initial/group*.jpg', 'test_imgs/corpus/face{}.jpg')\n\nprint('Total number of base corpus faces found:', found)\nplt.imshow(imread('test_imgs/corpus/face0.jpg'))",
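The crop helper is plain NumPy slicing: rows are y:y+height and columns are x:x+width, matching OpenCV's (x, y, w, h) bounding-box order. A minimal check on a fake image:

```python
import numpy as np

img = np.arange(36).reshape(6, 6)   # fake 6x6 single-channel image
patch = img[1:1 + 3, 2:2 + 2]       # crop(img, x=2, y=1, width=2, height=3)

print(patch.shape)   # (3, 2) -- height rows by width columns
print(patch[0, 0])   # 8 -- the pixel at row 1, column 2
```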
"Now that we have some faces to work with, let's save them to a pickle file for use later on.",
"from pickle import dump\n\n#Creates base_corpus.pkl from face imgs in test_imgs/corpus\nimgs = [imread(fname) for fname in glob('test_imgs/corpus/face*.jpg')]\ndump(imgs, open('findme/base_corpus.pkl', 'wb'))",
"Target Corpus\nNow we need to add our target data. Since this is going to power a personal project, I'm going to train it to recognize my face. Other than adding some new images, we can reuse the code from before but just supplying a different glob string.",
"found = pull_faces('test_imgs/initial/me*.jpg', 'test_imgs/corpus/me{}.jpg')\n\nprint('Total number of target faces found:', found)\nplt.imshow(imread('test_imgs/corpus/me0.jpg'))",
"That was easy enough. In order to have a large enough corpus of target faces, I included pictures of myself with other people and deleted their faces after the code block ran. It ended up having eleven target faces.\nModel Training Data\nNow that we have our faces, we need to create the features and labels that will be used to train our facial recognition model. We've already classified our data based on the face's filename; all we need to do is assign a 1 or 0 to each group for our labels. We'll also need to scale each image to a standard size. Thankfully the output for each bounding box is a square, so we don't have to worry about introducing distortions.",
"#Load the two sets of images\nfrom pickle import load\n\nnotme = load(open('findme/base_corpus.pkl', 'rb'))\nme = [imread(fname) for fname in glob('test_imgs/corpus/me*.jpg')]\n\n#Create features and labels\nfeatures = notme + me\nlabels = [0] * len(notme) + [1] * len(me)\n\n#Preprocess images for the model\ndef preprocess(img: np.ndarray) -> np.ndarray:\n \"\"\"Resizes a given image and remove alpha channel\"\"\"\n img = cv2.resize(img, (45, 45), interpolation=cv2.INTER_AREA)[:,:,:3]\n return img\n\nfeatures = [preprocess(face) for face in features]",
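The [:, :, :3] slice in preprocess guards against a fourth alpha channel: PNGs loaded by matplotlib can come back as HxWx4 arrays while JPEGs are HxWx3, and slicing to the first three channels is a no-op on images that are already RGB. For example:

```python
import numpy as np

rgba = np.zeros((45, 45, 4))   # image with an alpha channel
rgb = np.zeros((45, 45, 3))    # image without one

print(rgba[:, :, :3].shape)  # (45, 45, 3) -- alpha dropped
print(rgb[:, :, :3].shape)   # (45, 45, 3) -- unchanged
```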
"Simple enough. Let's do a quick check before shuffling. The first image should be part of the base corpus:",
"print('Is the target:', labels[0] == 1)\nplt.imshow(features[0], cmap='gray')",
"And the last image should be of the target:",
"print('Is the target:', labels[-1] == 1)\nplt.imshow(features[-1], cmap='gray')",
"Looks good. Let's create a quick data and file checkpoint. This means we'll be able to load the file in from this point on without having to run most of the above code.",
"#Convert into numpy arrays\nfeatures = np.array(features)\nlabels = np.array(labels)\n\ndump(features, open('test_imgs/features.pkl', 'wb'))\ndump(labels, open('test_imgs/labels.pkl', 'wb'))",
"DATA/FILE CHECKPOINT\nThe notebook can be run from scratch from this point onward.",
"# DATA/FILE CHECKPOINT\nfrom pickle import load\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.image import imread, imsave\n%matplotlib inline\nfrom findme.imageutil import crop, draw_boxes, preprocess\nfrom findme.models import find_faces\n\nfeatures = load(open('findme/features.pkl', 'rb'))\nlabels = load(open('findme/labels.pkl', 'rb'))\n\nfeatures = features[-24:]\nlabels = labels[-24:]",
"That's it for our data. You'll notice that we only loaded a subset of our dataset. This ensures that the number of target and non-target images matches, which leads to a better model even though it has less data overall. We'll split our data in the next section.\nAm I in This?\nWe've already created all of our data. Now for the model we're going to train. First, we need to convert our labels to one-hot encoding for use in the model. This means our output layer will have two nodes: True and False.",
"from sklearn.preprocessing import OneHotEncoder\n\nenc = OneHotEncoder()\nlabels = enc.fit_transform(labels.reshape(-1, 1)).toarray()\nprint('Not target label:', labels[0])\nprint('Is target label:', labels[-1])",
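One-hot encoding replaces each integer label with a vector that has a 1 at that class's index and 0 elsewhere. What OneHotEncoder does above can be sketched in plain NumPy:

```python
import numpy as np

def one_hot(labels, num_classes):
    """Return an (n, num_classes) array with a 1 at each label's index."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1
    return encoded

print(one_hot(np.array([0, 1, 1]), 2))
# [[1. 0.]
#  [0. 1.]
#  [0. 1.]]
```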
"Now we need to define our model architecture one layer at a time. We'll create three convolutional layers, two fully-connected layers, and the output layer.",
"from keras.layers import Activation, Convolution2D, Dense, Dropout, Flatten, MaxPooling2D\nfrom keras.metrics import binary_accuracy\nfrom keras.models import Sequential\n\nSHAPE = features[0].shape\nNB_FILTER = 16\n\ndef make_model() -> Sequential:\n \"\"\"Create a Sequential Keras model to boolean classify faces\"\"\"\n model = Sequential()\n #First Convolution\n model.add(Convolution2D(NB_FILTER, (3, 3), input_shape=SHAPE))\n model.add(Activation('relu'))\n model.add(MaxPooling2D())\n model.add(Dropout(0.1))\n # Second Convolution\n model.add(Convolution2D(NB_FILTER*2, (2, 2)))\n model.add(Activation('relu'))\n model.add(MaxPooling2D())\n model.add(Dropout(0.2))\n # Third Convolution\n model.add(Convolution2D(NB_FILTER*4, (2, 2)))\n model.add(Activation('relu'))\n model.add(MaxPooling2D())\n model.add(Dropout(0.3))\n # Flatten for Fully Connected\n model.add(Flatten())\n # First Fully Connected\n model.add(Dense(1024))\n model.add(Activation('relu'))\n model.add(Dropout(0.4))\n # Second Fully Connected\n model.add(Dense(1024))\n model.add(Activation('relu'))\n model.add(Dropout(0.5))\n # Output\n model.add(Dense(2))\n model.compile(loss = 'mean_squared_error', optimizer = 'rmsprop', metrics=[binary_accuracy])\n return model\n\nprint(make_model().summary())",
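As a sanity check on the summary, a Conv2D layer's parameter count can be computed by hand: kernel_height * kernel_width * input_channels * filters weights, plus one bias per filter. For the layer sizes defined above (45x45x3 inputs, then 16 channels feeding the second convolution):

```python
def conv2d_params(kh, kw, in_ch, filters):
    """Trainable parameters in a Conv2D layer: weights plus one bias per filter."""
    return kh * kw * in_ch * filters + filters

print(conv2d_params(3, 3, 3, 16))   # 448 -- first convolution (3x3x3 kernels, 16 filters)
print(conv2d_params(2, 2, 16, 32))  # 2080 -- second convolution (2x2x16 kernels, 32 filters)
```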
"Now we need to train the model. Even though we have a large model in terms of its parameters, we can still let the model train for many epochs because our feature set is so small. On a MacBook Air, it takes around 30 seconds to train the model with 500 epochs. To save space, I've disabled the full training printout that Keras provides, but you can watch the accuracy progress yourself by changing verbose from 0 to 1.\nWe also need to shuffle our data because feeding all of the non-target and target faces into the model in order will lead to a biased model. Scikit-Learn has a convenient function to do this for us. Rather than just calling random, this function preserves the relationship between the feature and label indexes.",
"from keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.utils import shuffle\n\nmodel = KerasClassifier(build_fn=make_model, epochs=500, batch_size=len(labels), verbose=0)\nX, Y = shuffle(features, labels, random_state=42)\nmodel.fit(X, Y)",
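Under the hood, shuffling features and labels together just applies one random permutation to both arrays, so index i still pairs the i-th face with its label. A NumPy equivalent of what sklearn.utils.shuffle does, shown on toy arrays rather than the real dataset:

```python
import numpy as np

rng = np.random.default_rng(42)
features = np.array([10, 20, 30, 40])
labels = np.array([0, 0, 1, 1])

perm = rng.permutation(len(labels))   # one permutation reused for both arrays
X, Y = features[perm], labels[perm]

# The pairing survives: 30 and 40 are still exactly the label-1 samples
print(sorted(int(v) for v in X[Y == 1]))  # [30, 40]
```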
"Let's quickly see how well it trained to the given data. Because the dataset is so small, we didn't want to keep any for a test or validation set. We'll test it on a new image later.",
"preds = model.predict(features)\nprint('Non-target faces predicted correctly:', np.all(preds[:12] == 0))\nprint('Target faces predicted correctly:', np.all(preds[-12:] == 1))",
"That's it. While Keras has its own mechanisms for training and validating models, we're using a wrapper around our Keras model so it conforms to the Scikit-Learn model API. We can use fit and predict when working with the model in our code, and it lets us train and use our model with the other helper modules sk-learn provides. For example, we could have evaluated the model using StratifiedKFold and cross_val_score which would look like this:\n```python\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom sklearn.model_selection import StratifiedKFold, cross_val_score\nmodel = KerasClassifier(build_fn=make_model, epochs=5, batch_size=len(labels), verbose=0)\n# evaluate using 3-fold cross validation\nkfold = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)\nresult = cross_val_score(model, features, labels, cv=kfold)\nprint(result.mean())\n```\nThis method allows us to determine how effective our model is but does not return a trained model for us to use.\nPutting It Together\nLastly, let's create a single function that takes in an image and returns whether the target was found and where.\nFirst we'll load in our test image. Keep in mind that the model we just trained has never seen this image before and it contains multiple people (and a manatee statue).",
"test_img = imread('test_imgs/evaluate/me1.jpg')\nplt.imshow(test_img)",
"Now for the function itself. Because we've already made functions around the core parts of our data pipeline, this function is going to be incredibly short yet powerful.",
"def target_in_img(img: np.ndarray) -> (bool, np.array([int])):\n \"\"\"Returns whether the target is in a given image and where\"\"\"\n for bbox in find_faces(img):\n face = preprocess(crop(img, *bbox))\n if model.predict(np.array([face])) == 1:\n return True, bbox\n return False, None",
"Yeah. That's it. Let's break down the steps:\n\nfind_faces returns a list of bounding boxes containing faces\nWe prepare each face by cropping the image to the bounding box, scaling to 45x45, and removing the alpha channel\nThe model predicts whether the face is or is not the target\nIf the target is found (pred == 1), return True and the current bounding box\nIf there aren't any faces or none of the faces belongs to the target, return False and None\n\nNow let's test it. If it works properly, we should see a bounding box appear around the target's face.",
"found, bbox = target_in_img(test_img)\n\nprint('Target face found in test image:', found)\nif found:\n plt.imshow(draw_boxes([bbox], test_img, line_width=20))",
"We're finally done."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ldhagen/docker-jupyter
|
OpenCV.ipynb
|
mit
|
[
"From http://giusedroid.blogspot.com/2015/04/blog-post.htmld\nQuickie: Mix up OpenCV and Jupyter (iPython Notebook)\nThe purpose of this post is to show how to plot images acquired with opencv rather than matplotlib. Just in case. \nFirst of all, set matplotlib inline and import the necessary stuff.",
"! wget --no-check-certificate http://www.hobieco.com/linked_images/H18-Magnum.jpg\n\n%matplotlib inline\nimport cv2\nfrom matplotlib import pyplot as plt\nimport numpy as np\nimport time as t\nprint \"OpenCV Version : %s \" % cv2.__version__\n\nimage = cv2.imread(\"H18-Magnum.jpg\")\nfig, ax = plt.subplots()\nfig.set_size_inches(3, 3)\nax.axis([35, 150, 250, 100])\nimage_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\nplt.imshow(image_rgb)\nplt.show()",
"The image has been correctly loaded by openCV as a numpy array, but the color of each pixel has been stored as BGR. Matplotlib's plot expects an RGB image so, for a correct display of the image, it is necessary to swap those channels. This operation can be done either by using openCV conversion functions cv2.cvtColor() or by working directly with the numpy array.\ncvtColor\ncvtColor is the openCV function which changes the color space of an image. It takes as input an image and a numerical flag which represents the conversion function. Let's list some of them.",
"from matplotlib.pyplot import imshow\nimport numpy as np\nfrom PIL import Image\n\n%matplotlib inline\npil_im = Image.open('H18-Magnum.jpg', 'r')\nimshow(np.asarray(pil_im))\n\nfrom IPython.display import Image \nImage(filename='H18-Magnum.jpg')\n\nBGRflags = [flag for flag in dir(cv2) if flag.startswith('COLOR_BGR') ]\nprint BGRflags",
"In this case it's necessary to change the image space from BGR (Blue, Green, Red) to RGB, so the correct flag is cv2.COLOR_BGR2RGB",
"t0 = t.time()\ncv_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\nt1 = t.time()\ndt_cv = t1-t0\nprint \"Conversion took %0.5f seconds\" % dt_cv\n\nplt.imshow(cv_rgb)\nplt.show()",
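The same BGR-to-RGB swap can also be done directly on the numpy array, as mentioned above: reversing the last axis with [:, :, ::-1] turns (B, G, R) into (R, G, B) with no OpenCV call. Illustrated on a tiny synthetic pixel rather than the loaded image:

```python
import numpy as np

bgr = np.array([[[255, 0, 0]]])   # one pure-blue pixel in BGR order
rgb = bgr[:, :, ::-1]             # reverse the channel axis: BGR -> RGB

print(rgb[0, 0].tolist())  # [0, 0, 255] -- the 255 now sits in the blue slot of RGB
```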
"below from http://matplotlib.org/users/text_intro.html",
"# -*- coding: utf-8 -*-\nimport matplotlib.pyplot as plt\n\nfig = plt.figure()\nfig.suptitle('bold figure suptitle', fontsize=14, fontweight='bold')\n\nax = fig.add_subplot(111)\nfig.subplots_adjust(top=0.85)\nax.set_title('axes title')\n\nax.set_xlabel('xlabel')\nax.set_ylabel('ylabel')\n\nax.text(3, 8, 'boxed italics text in data coords', style='italic',\n bbox={'facecolor':'red', 'alpha':0.5, 'pad':10})\n\nax.text(2, 6, r'an equation: $E=mc^2$', fontsize=15)\n\nax.text(3, 2, u'unicode: Institut f\\374r Festk\\366rperphysik')\n\nax.text(0.95, 0.01, 'colored text in axes coords',\n verticalalignment='bottom', horizontalalignment='right',\n transform=ax.transAxes,\n color='green', fontsize=15)\n\n\nax.plot([2], [1], 'o')\nax.annotate('annotate', xy=(2, 1), xytext=(3, 4),\n arrowprops=dict(facecolor='black', shrink=0.05))\n\nax.axis([0, 10, 0, 10])\n\nplt.show()",
"Added Friday afternoon 17 Mar 17",
"%matplotlib inline\nimport cv2\nfrom matplotlib import pyplot as plt\nimport matplotlib.cm as cm\nimage = cv2.imread(\"Screenshot_2016-02-23-12-47-43.png\")\nfig, ax = plt.subplots()\nfig.set_size_inches(4, 4)\n#ax.axis([1280, 1400, 400, 200])\nimage_rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)\nplt.imshow(image_rgb)\nplt.show()\n\nimage_gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)\nfig, ax = plt.subplots(ncols=1, nrows=1, figsize=(2, 2))\nup_right_gray_target = image_gray[210:310, 1280:1400]\nplt.imshow(up_right_gray_target, cmap = cm.gray)\nplt.show()\n\nimage_gray = cv2.cvtColor(image_rgb, cv2.COLOR_RGB2GRAY)\nfig, ax = plt.subplots(ncols=1, nrows=1, figsize=(2, 2))\nlow_left_gray_target = image_gray[2412:2512,65:165]\nplt.imshow(low_left_gray_target, cmap = cm.gray)\nplt.show()\n\nimage_gray = cv2.imread(\"Screenshot_2016-02-23-12-47-43.png\",0)\n#targets = [up_right_gray_target,low_left_gray_target]\ntargets = [up_right_gray_target]\nfor tgt in targets:\n w, h = tgt.shape[::-1]\n res = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF)\n res1= cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF_NORMED)\n min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)\n top_left = max_loc\n bottom_right = (top_left[0] + w, top_left[1] + h)\n cv2.rectangle(image_gray,top_left, bottom_right, 255, 2)\n#fig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 14))\n#plt.imshow(image_gray, cmap = cm.gray)\n#plt.show()\n\nplt.figure(figsize=(16,9))\nplt.subplot(1,2,1)\nplt.imshow(res,cmap=cm.gray)\nplt.subplot(1,2,2)\nplt.imshow(res1,cmap=cm.gray)\nplt.show()",
"Added Thursday afternoon 23 Mar 17",
"%matplotlib inline\nimport cv2\nfrom matplotlib import pyplot as plt\nimport matplotlib.cm as cm\nimage_gray = cv2.imread(\"Screenshot_2016-02-23-12-47-43.png\",0)\nup_right_gray_target = image_gray[210:310, 1280:1400]\n#targets = [up_right_gray_target,low_left_gray_target]\ntargets = [up_right_gray_target]\nfor tgt in targets:\n w, h = tgt.shape[::-1]\n res_TM_CCOEFF = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF)\n res_TM_CCOEFF_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCOEFF_NORMED)\n res_TM_SQDIFF = cv2.matchTemplate(image_gray,tgt,cv2.TM_SQDIFF)\n res_TM_SQDIFF_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_SQDIFF_NORMED)\n res_TM_CORR = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCORR)\n res_TM_CORR_NORMED = cv2.matchTemplate(image_gray,tgt,cv2.TM_CCORR_NORMED)\n min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res_TM_SQDIFF_NORMED)\n# top_left = max_loc\n top_left = min_loc\n bottom_right = (top_left[0] + w, top_left[1] + h)\n cv2.rectangle(image_gray,top_left, bottom_right, 255, 2)\nfig, ax = plt.subplots(ncols=1, nrows=1, figsize=(10, 14))\nplt.imshow(image_gray, cmap = cm.gray)\nplt.show()\n\nfig = plt.figure(figsize=(12,29))\nax1 = fig.add_subplot(321)\nplt.title('CCOEFF')\nplt.imshow(res_TM_CCOEFF,cmap=cm.gray)\nplt.subplot(3,2,2)\nplt.title('CCOEFF_NORMED')\nplt.imshow(res_TM_CCOEFF_NORMED,cmap=cm.gray)\nplt.subplot(3,2,3)\nplt.title('TM_SQDIFF')\nplt.imshow(res_TM_SQDIFF,cmap=cm.gray)\nplt.subplot(3,2,4)\nplt.title('TM_SQDIFF_NORMED')\nplt.imshow(res_TM_SQDIFF_NORMED,cmap=cm.gray)\nplt.subplot(3,2,5)\nplt.title('TM_CORR')\nplt.imshow(res_TM_CORR,cmap=cm.gray)\nplt.subplot(3,2,6)\nplt.title('TM_CORR_NORMED')\nplt.imshow(res_TM_CORR_NORMED,cmap=cm.gray)\nplt.show()",
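A detail worth calling out: TM_SQDIFF scores each offset by the summed squared difference, so the best match is the minimum (hence min_loc above), whereas the correlation-based methods peak at the maximum. A brute-force NumPy sketch of the SQDIFF score on a toy image:

```python
import numpy as np

def sqdiff_match(image, tgt):
    """Brute-force TM_SQDIFF: score every offset, return the best (minimum)."""
    ih, iw = image.shape
    th, tw = tgt.shape
    scores = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            scores[y, x] = ((image[y:y + th, x:x + tw] - tgt) ** 2).sum()
    # best match = smallest difference, mirroring min_loc for TM_SQDIFF
    return tuple(int(i) for i in np.unravel_index(scores.argmin(), scores.shape))

image = np.zeros((8, 8))
image[3:5, 4:6] = 1.0                        # plant a 2x2 bright patch
print(sqdiff_match(image, np.ones((2, 2))))  # (3, 4) -- (row, col) of the patch
```

cv2.matchTemplate computes the same kind of score map far faster; this loop only shows why the minimum, not the maximum, marks the match.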
"Added Friday afternoon 15 Apr 17",
"! pip install --upgrade pandas\n%matplotlib inline\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib.dates import date2num, MonthLocator, WeekdayLocator, DateFormatter\nimport datetime as dt\nimport numpy as np\nimport pandas as pd\n\ncount = (dt.datetime.today() - dt.datetime(2016,11,15)).days\ncount\n\ndates = [dt.datetime(2016,11,15) + dt.timedelta(days=i) for i in xrange(count)]\ntype(dates)\n\nimport numpy as np\ndates_np = np.arange(np.datetime64('2016-11-15','D'),np.datetime64(dt.datetime.today(),'D'))\n\ndates_np\n\ntype1 = np.random.randint(0,5,count)\ntype2 = np.random.randint(0,5,count)\ntype3 = np.random.randint(0,7,count)\n\ntype(type1)\n\n#plt.figure(figsize=(20,7))\n#plt.title('Testing', fontsize=16)\n#plt.xlabel('Date', fontsize=16)\n#plt.ylabel('Frequency', fontsize=16)\nfig, ax = plt.subplots(1,1)\np1 = plt.bar(dates_np, type1, width=1, label='Type 1')\np2 = plt.bar(dates_np, type2, bottom = type1, width=1, label='Type 2')\np3 = plt.bar(dates_np, type3, bottom = type1 + type2, width=1, label='Type 3')\nax.xaxis_date()\nax.xaxis.set_major_locator(MonthLocator())\nax.xaxis.set_minor_locator(WeekdayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\nax.set_title('Testing', fontsize=16)\nax.set_xlabel('Date')\nax.set_ylabel('Frequency')\nax.set_xlim(dates_np[0],dates_np[-1])\nfig.set_size_inches(17,6)\nfig.autofmt_xdate()\nfig.tight_layout()\nplt.legend((p1[0],p2[0],p3[0]), ('First', 'Second','Third'))\nplt.show()\n\ntype(dates_np[0]),type(type1[0])",
"20 Jun 17 test case for Matplotlib bug https://github.com/matplotlib/matplotlib/issues/7215/\nbehavior fixed by importing pandas (installing not enough as nothing seems to upgrade) even though it is not called",
"! pip install --upgrade pandas\nimport pandas\n\n%matplotlib inline\nimport numpy as np\nimport datetime as dt\nimport matplotlib as mpl\nfrom matplotlib import pyplot as plt\nfrom matplotlib.dates import date2num, MonthLocator, WeekdayLocator, DateFormatter",
"Below is (dangerously) relying on the latest python 2.7.13 dictionary preserving key order based on creation sequence. Will fix later",
"class test_object_type:\n '''builds test objects which have random dates within a range plus type name, and magnatudes'''\n def __init__(self, first_date, last_date):\n self.full_space_list = self.generate_random_spaced_list()\n self.first_date = first_date\n self.last_date = last_date\n self.event_date_list = self.build_obj_date_list()\n self.date_value_dict = self.build_obj_dict()\n self.sorted_keys = self.build_obj_sorted_keys()\n self.value_list = np.asarray(self.build_value_list())\n self.full_date_dict = self.build_full_date_dict()\n \n def generate_random_spaced_list(self):\n return np.random.randint(4,size=325)\n \n def build_obj_date_list(self):\n '''Makes event days based on spacing by self.full_space_list. May get several \n zero spaces in a row those are not checked for before attempting to recreated same key\n instead a new entry overwrites the previous.'''\n obj_date_list = []\n current_date = self.first_date \n for x in self.full_space_list:\n current_date = current_date + np.timedelta64(x,'D')\n if not current_date > self.last_date:\n obj_date_list.append(current_date)\n else:\n return obj_date_list\n \n def build_obj_dict(self):\n date_value_dict = {}\n for x in self.event_date_list:\n value = np.random.randint(1,5)\n date_value_dict[x] = value\n return date_value_dict \n \n def build_obj_sorted_keys(self):\n dict_keys = self.date_value_dict.keys()\n dict_keys.sort()\n return dict_keys\n \n def build_value_list(self):\n value_list = []\n for x in self.sorted_keys:\n value_list.append(self.date_value_dict[x])\n return value_list\n \n def build_full_date_dict(self):\n full_date_list =[]\n current_date = self.first_date\n while not current_date > self.last_date:\n full_date_list.append(current_date)\n current_date = current_date + np.timedelta64(1,'D')\n full_date_dict = {}\n for x in full_date_list:\n if x in self.date_value_dict:\n full_date_dict[x] = self.date_value_dict[x]\n else:\n full_date_dict[x] = 0\n return full_date_dict\n\n%prun aaa = 
test_object_type(np.datetime64('2016-11-15','D'),np.datetime64(dt.datetime.today(),'D'))\n\naaa.full_date_dict.keys()[-1] - aaa.full_date_dict.keys()[0]\n\nxxx = np.asarray(aaa.full_date_dict.keys())\nyyy = np.asarray(aaa.full_date_dict.values())\n\ntype(xxx[0]),type(yyy[0])\n\nfig, ax = plt.subplots(1,1)\np1 = plt.bar(xxx, yyy, width=1, label='Type 1')\nax.xaxis_date()\nax.xaxis.set_major_locator(MonthLocator())\nax.xaxis.set_minor_locator(WeekdayLocator())\nax.xaxis.set_major_formatter(DateFormatter('%b %y'))\nax.set_title('Testing', fontsize=16)\nax.set_xlabel('Date')\nax.set_ylabel('Frequency')\nfig.set_size_inches(17,6)\nfig.autofmt_xdate()\nfig.tight_layout()\nplt.show()\n\nimport matplotlib as mpl\nmpl.__version__"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gururajl/deep-learning
|
autoencoder/Simple_Autoencoder.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32 # feel free to change this value\n\nimage_size = mnist.train.images.shape[1]\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, [None, image_size])\ntargets_ = tf.placeholder(tf.float32, [None, image_size])\n\n# Output of hidden layer, single fully connected layer here with ReLU activation\nencoded = tf.layers.dense(inputs=inputs_, units=encoding_dim, activation=tf.nn.relu)\n\n# Output layer logits, fully connected layer with no activation\nlogits = tf.layers.dense(inputs=encoded, units=784)\n# Sigmoid output from logits\ndecoded = tf.nn.sigmoid(logits)\n\n# Sigmoid cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n# Mean of the loss\ncost = tf.reduce_mean(loss)\n\n# Adam optimizer\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/applied-machine-learning-intensive
|
content/06_other_models/05_svm/colab.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/06_other_models/05_svm/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Support Vector Machines\nSupport Vector Machines (SVM) are powerful tools for performing both classification and regression tasks. In this colab we'll create a classification model using an SVM in scikit-learn.\nLoad the Data\nLet's begin by loading a dataset that we'll use for classification.",
"import pandas as pd\nfrom sklearn.datasets import load_iris\n\niris_bunch = load_iris()\n\niris_df = pd.DataFrame(iris_bunch.data, columns=iris_bunch.feature_names)\niris_df['species'] = iris_bunch.target\n\niris_df.describe() ",
"You can see in the data description above that the range of values for each of the columns is quite a bit different. For instance, the mean sepal length is almost twice as big as the mean sepal width.\nSVM is sensitive to features with different scales. We'll run the data through the StandardScaler to get all of the feature data scaled.\nFirst let's create the scalar and fit it to our features.",
"from sklearn.preprocessing import StandardScaler\n\nscaler = StandardScaler()\nscaler.fit(iris_df[iris_bunch.feature_names])\n\nscaler.mean_",
"We can now transform the data by applying the scaler.",
"iris_df[iris_bunch.feature_names] = scaler.transform(\n iris_df[iris_bunch.feature_names])\n\niris_df.describe()",
"Since we scaled the data, the column names are now a bit deceiving. These are no longer unaltered centimeters, but normalized lengths. Let's rename the columns to get \"(cm)\" out of the names.",
"iris_df = iris_df.rename(index=str, columns={\n 'sepal length (cm)': 'sepal_length',\n 'sepal width (cm)': 'sepal_width',\n 'petal length (cm)': 'petal_length',\n 'petal width (cm)': 'petal_width'})\niris_df.head()",
"We could use all of the features to train our model, but in this case we are going to pick two features so that we can make some nice visualizations later on in the colab.",
"features = ['petal_length', 'petal_width']\ntarget = 'species'",
"Now we can create and train a classifier. There are multiple ways to create an SVM model in scikit-learn. We are going to use the linear support vector classifier.",
"from sklearn.svm import LinearSVC\n\nclassifier = LinearSVC()\nclassifier.fit(iris_df[features], iris_df[target])",
"We can now use our model to make predictions. We'll make predictions on the data we just trained on in order to get an F1 score.",
"from sklearn.metrics import f1_score\n\npredictions = classifier.predict(iris_df[features])\n\nf1_score(iris_df[target], predictions, average='micro')",
"We can visualize the decision boundaries using the pyplot contourf function.",
"import matplotlib.pyplot as plt\nimport numpy as np\n\n# Find the smallest value in the feature data. We are looking across both\n# features since we scaled them. Make the min value a little smaller than\n# reality in order to better see all of the points on the chart.\nmin_val = min(iris_df[features].min()) - 0.25\n\n# Find the largest value in the feature data. Make the max value a little bigger\n# than reality in order to better see all of the points on the chart.\nmax_val = max(iris_df[features].max()) + 0.25\n\n# Create a range of numbers from min to max with some small step. This will be\n# used to make multiple predictions that will create the decision boundary\n# outline.\nrng = np.arange(min_val, max_val, .02)\n\n# Create a grid of points.\nxx, yy = np.meshgrid(rng, rng)\n\n# Make predictions on every point in the grid.\npredictions = classifier.predict(np.c_[xx.ravel(), yy.ravel()])\n\n# Reshape the predictions for plotting.\nzz = predictions.reshape(xx.shape)\n\n# Plot the predictions on the grid.\nplt.contourf(xx, yy, zz)\n\n# Plot each class of iris with a different marker.\n# Class 0 with circles\n# Class 1 with triangles\n# Class 2 with squares\nfor species_and_marker in ((0, 'o'), (1, '^'), (2, 's')):\n plt.scatter(\n iris_df[iris_df[target] == species_and_marker[0]][features[0]],\n iris_df[iris_df[target] == species_and_marker[0]][features[1]],\n marker=species_and_marker[1])\nplt.show()",
"Exercises\nExercise 1: Polynomial SVC\nThe scikit-learn module also has an SVC classifier that can use non-linear kernels. Create an SVC classifier with a 3-degree polynomial kernel, and train it on the iris data. Make predictions on the iris data that you trained on, and then print out the F1 score.\nStudent Solution",
"# Your code goes here",
"Exercise 2: Plotting\nCreate a plot that shows the decision boundaries of the polynomial SVC that you created in exercise 1.\nStudent Solution",
"# Your code goes here",
"Exercise 3: C Hyperparameter\nWe accepted the default 1.0 C hyperparameter in the classifier above. Try halving and doubling the C value. How does it affect the F1 score?\nVisualize the decision boundaries. Do they visibly change?\nStudent Solution",
"# Your code goes here",
"Exercise 4: Regression\nUse the LinearSVR to predict Boston housing prices in the Boston housing dataset. Hold out some test data and print your final RMSE.\nStudent Solution",
"# Your code goes here",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Diyago/Machine-Learning-scripts
|
clustering/Базовая кластеризация.ipynb
|
apache-2.0
|
[
"1. Кластеризация\nВыбор оптимального количества кластеров \nКластерный анализ (Data clustering) — это задача разбиения заданной выборки объектов (ситуаций) на непересекающиеся подмножества, называемые кластерами, так, чтобы каждый кластер состоял из схожих объектов, а объекты разных кластеров существенно отличались. Задача кластеризации относится к широкому классу задач обучения без учителя[1].\nТо есть мы изначально, решая эту задачу, не знаем правильного количества кластеров. Используемые алгоритмы при этом оставляют выбор количества кластеров за пользователем. При этом, так или иначе, хотелось бы выбрать наиболее оптимальное количество, такое, которое лучше всего описовало бы наши данные.\nПожалуй самым известным и наиболее часто употребимым методом кластеризации является K-means, который стремится минимизировать суммарное квадратичное отклонение точек кластеров от центров этих кластеров, но может быть использована и другая метрика.\nСгененирируем 4 разных распределения точек и применим метод K средних [2].",
"#импортируем библиотеки\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom sklearn.cluster import KMeans\nfrom sklearn.datasets import make_blobs\n\nfrom sklearn.cluster import DBSCAN\n\n\n\n\nplt.figure(figsize=(12, 12))\n\nn_samples = 2300\nrandom_state = 220\nX, y = make_blobs(n_samples=n_samples, random_state=random_state, centers=7)\n\n# Равномерное распределение кластеров\ny_pred = KMeans(n_clusters=7, random_state=random_state, n_jobs = -1).fit_predict(X)\n\nplt.subplot(221)\nplt.scatter(X[:, 0], X[:, 1], c=y_pred)\nplt.title(\"Равномерное распределение кластеров\")\n\n# Удлиненные по одной из оси распределение точек\ntransformation = [[0.70834549, -0.563667341], [-0.30887718, 0.75253229]]\nX_aniso = np.dot(X, transformation)\ny_pred = KMeans(n_clusters=7, random_state=random_state).fit_predict(X_aniso)\n\n\nplt.subplot(222)\nplt.scatter(X_aniso[:, 0], X_aniso[:, 1], c=y_pred)\nplt.title(\"Удлиненные по оси\")\n\n# Кластеры разной дисперсии\nX_varied, y_varied = make_blobs(n_samples=n_samples, centers=7, \n cluster_std=[1.0, 2.5, 0.5, 3, 0.7, 0.1, 2.3],\n random_state=random_state)\ny_pred = KMeans(n_clusters=7, random_state=random_state).fit_predict(X_varied)\n\nplt.subplot(223)\nplt.scatter(X_varied[:, 0], X_varied[:, 1], c=y_pred)\nplt.title(\"Разная дисперсия точек\")\n\n# Unevenly sized blobs\nX_filtered = np.vstack((X[y == 0][:2000], X[y == 1][:500], X[y == 2][:400], X[y == 3][:300],\n X[y == 4][:200],X[y == 5][:120],X[y == 6][:42]))\ny_pred = KMeans(n_clusters=7,\n random_state=random_state).fit_predict(X_filtered)\n\nplt.subplot(224)\nplt.scatter(X_filtered[:, 0], X_filtered[:, 1], c=y_pred)\nplt.title(\"Разное количество точек в кластерах\")\n\nplt.show()",
"Как видим распредение по кластерам оказалось вполне логичным, не смотря на выбор параметров по умолчанию, за исключением второго случая, но там действительно все несколько неочевидно. Но нужно заметить, что мы рассматриваем достаточно простой двухмерный случай, при этом другие алгоритмы кластеризации (их в sklearn достаточно много) могут показать другой и несколько лучший результат.\n2. Выбор количества кластеров\nПри выборе количества кластеров, хотелось бы минимизировать некоторый функционал. Если использовать исходную метрику евклидовых расстояних и мы придем в выводу, что суммарная функция ошибки будет минимальна при количестве кластеров равных исходному количеству точек.",
"y_pred = KMeans(n_clusters=7, random_state=random_state, n_jobs = -1).fit_predict(X)\n\nplt.scatter(X[:, 0], X[:, 1], c=y_pred)\nplt.title(\"Равномерное распределение кластеров\")\nplt.show()",
"http://scikit-learn.org/stable/modules/clustering.html\nhttp://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation\nhttp://scikit-learn.org/stable/modules/clustering.html#k-means\nhttp://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_assumptions.html#sphx-glr-auto-examples-cluster-plot-kmeans-assumptions-py\n\nКластеризация http://www.machinelearning.ru/wiki/index.php?title=%D0%9A%D0%BB%D0%B0%D1%81%D1%82%D0%B5%D1%80%D0%B8%D0%B7%D0%B0%D1%86%D0%B8%D1%8F\nK means https://ru.wikipedia.org/wiki/%D0%9C%D0%B5%D1%82%D0%BE%D0%B4_k-%D1%81%D1%80%D0%B5%D0%B4%D0%BD%D0%B8%D1%85\nSklearn http://scikit-learn.org/stable/modules/clustering.html",
"db = DBSCAN(eps=0.35,min_samples=5)\ny_pred = db.fit_predict(X_aniso)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.23/_downloads/d2352ab4b72ce7d1dc05c76bda6ef71d/55_setting_eeg_reference.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Setting the EEG reference\nThis tutorial describes how to set or change the EEG reference in MNE-Python.\nAs usual we'll start by importing the modules we need, loading some\nexample data <sample-dataset>, and cropping it to save memory. Since\nthis tutorial deals specifically with EEG, we'll also restrict the dataset to\njust a few EEG channels so the plots are easier to see:",
"import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)\nraw.crop(tmax=60).load_data()\nraw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])",
"Background\nEEG measures a voltage (difference in electric potential) between each\nelectrode and a reference electrode. This means that whatever signal is\npresent at the reference electrode is effectively subtracted from all the\nmeasurement electrodes. Therefore, an ideal reference signal is one that\ncaptures none of the brain-specific fluctuations in electric potential,\nwhile capturing all of the environmental noise/interference that is being\npicked up by the measurement electrodes.\nIn practice, this means that the reference electrode is often placed in a\nlocation on the subject's body and close to their head (so that any\nenvironmental interference affects the reference and measurement electrodes\nsimilarly) but as far away from the neural sources as possible (so that the\nreference signal doesn't pick up brain-based fluctuations). Typical reference\nlocations are the subject's earlobe, nose, mastoid process, or collarbone.\nEach of these has advantages and disadvantages regarding how much brain\nsignal it picks up (e.g., the mastoids pick up a fair amount compared to the\nothers), and regarding the environmental noise it picks up (e.g., earlobe\nelectrodes may shift easily, and have signals more similar to electrodes on\nthe same side of the head).\nEven in cases where no electrode is specifically designated as the reference,\nEEG recording hardware will still treat one of the scalp electrodes as the\nreference, and the recording software may or may not display it to you (it\nmight appear as a completely flat channel, or the software might subtract out\nthe average of all signals before displaying, making it look like there is\nno reference).\nSetting or changing the reference channel\nIf you want to recompute your data with a different reference than was used\nwhen the raw data were recorded and/or saved, MNE-Python provides the\n:meth:~mne.io.Raw.set_eeg_reference method on :class:~mne.io.Raw objects\nas well as the :func:mne.add_reference_channels 
function. To use an\nexisting channel as the new reference, use the\n:meth:~mne.io.Raw.set_eeg_reference method; you can also designate multiple\nexisting electrodes as reference channels, as is sometimes done with mastoid\nreferences:",
"# code lines below are commented out because the sample data doesn't have\n# earlobe or mastoid channels, so this is just for demonstration purposes:\n\n# use a single channel reference (left earlobe)\n# raw.set_eeg_reference(ref_channels=['A1'])\n\n# use average of mastoid channels as reference\n# raw.set_eeg_reference(ref_channels=['M1', 'M2'])\n\n# use a bipolar reference (contralateral)\n# raw.set_bipolar_reference(anode='[F3'], cathode=['F4'])",
"If a scalp electrode was used as reference but was not saved alongside the\nraw data (reference channels often aren't), you may wish to add it back to\nthe dataset before re-referencing. For example, if your EEG system recorded\nwith channel Fp1 as the reference but did not include Fp1 in the data\nfile, using :meth:~mne.io.Raw.set_eeg_reference to set (say) Cz as the\nnew reference will then subtract out the signal at Cz without restoring\nthe signal at Fp1. In this situation, you can add back Fp1 as a flat\nchannel prior to re-referencing using :func:~mne.add_reference_channels.\n(Since our example data doesn't use the 10-20 electrode naming system_, the\nexample below adds EEG 999 as the missing reference, then sets the\nreference to EEG 050.) Here's how the data looks in its original state:",
"raw.plot()",
"By default, :func:~mne.add_reference_channels returns a copy, so we can go\nback to our original raw object later. If you wanted to alter the\nexisting :class:~mne.io.Raw object in-place you could specify\ncopy=False.",
"# add new reference channel (all zero)\nraw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])\nraw_new_ref.plot()",
".. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ",
"# set reference to `EEG 050`\nraw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])\nraw_new_ref.plot()",
"Notice that the new reference (EEG 050) is now flat, while the original\nreference channel that we added back to the data (EEG 999) has a non-zero\nsignal. Notice also that EEG 053 (which is marked as \"bad\" in\nraw.info['bads']) is not affected by the re-referencing.\nSetting average reference\nTo set a \"virtual reference\" that is the average of all channels, you can use\n:meth:~mne.io.Raw.set_eeg_reference with ref_channels='average'. Just\nas above, this will not affect any channels marked as \"bad\", nor will it\ninclude bad channels when computing the average. However, it does modify the\n:class:~mne.io.Raw object in-place, so we'll make a copy first so we can\nstill go back to the unmodified :class:~mne.io.Raw object later:",
"# use the average of all channels as reference\nraw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')\nraw_avg_ref.plot()",
"Creating the average reference as a projector\nIf using an average reference, it is possible to create the reference as a\n:term:projector rather than subtracting the reference from the data\nimmediately by specifying projection=True:",
"raw.set_eeg_reference('average', projection=True)\nprint(raw.info['projs'])",
"Creating the average reference as a projector has a few advantages:\n\n\nIt is possible to turn projectors on or off when plotting, so it is easy\n to visualize the effect that the average reference has on the data.\n\n\nIf additional channels are marked as \"bad\" or if a subset of channels are\n later selected, the projector will be re-computed to take these changes\n into account (thus guaranteeing that the signal is zero-mean).\n\n\nIf there are other unapplied projectors affecting the EEG channels (such\n as SSP projectors for removing heartbeat or blink artifacts), EEG\n re-referencing cannot be performed until those projectors are either\n applied or removed; adding the EEG reference as a projector is not subject\n to that constraint. (The reason this wasn't a problem when we applied the\n non-projector average reference to raw_avg_ref above is that the\n empty-room projectors included in the sample data :file:.fif file were\n only computed for the magnetometers.)",
"for title, proj in zip(['Original', 'Average'], [False, True]):\n fig = raw.plot(proj=proj, n_channels=len(raw))\n # make room for title\n fig.subplots_adjust(top=0.9)\n fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')",
"Using an infinite reference (REST)\nTo use the \"point at infinity\" reference technique described in\n:footcite:Yao2001 requires a forward model, which we can create in a few\nsteps. Here we use a fairly large spacing of vertices (pos = 15 mm) to\nreduce computation time; a 5 mm spacing is more typical for real data\nanalysis:",
"raw.del_proj() # remove our average reference projector first\nsphere = mne.make_sphere_model('auto', 'auto', raw.info)\nsrc = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)\nforward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)\nraw_rest = raw.copy().set_eeg_reference('REST', forward=forward)\n\nfor title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]):\n fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5))\n # make room for title\n fig.subplots_adjust(top=0.9)\n fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')",
"Using a bipolar reference\nTo create a bipolar reference, you can use :meth:~mne.set_bipolar_reference\nalong with the respective channel names for anode and cathode which\ncreates a new virtual channel that takes the difference between two\nspecified channels (anode and cathode) and drops the original channels by\ndefault. The new virtual channel will be annotated with the channel info of\nthe anode with location set to (0, 0, 0) and coil type set to\nEEG_BIPOLAR by default. Here we use a contralateral/transverse bipolar\nreference between channels EEG 054 and EEG 055 as described in\n:footcite:YaoEtAl2019 which creates a new virtual channel\nnamed EEG 054-EEG 055.",
"raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'],\n cathode=['EEG 055'])\nraw_bip_ref.plot()",
"EEG reference and source modeling\nIf you plan to perform source modeling (either with EEG or combined EEG/MEG\ndata), it is strongly recommended to use the\naverage-reference-as-projection approach. It is important to use an average\nreference because using a specific\nreference sensor (or even an average of a few sensors) spreads the forward\nmodel error from the reference sensor(s) into all sensors, effectively\namplifying the importance of the reference sensor(s) when computing source\nestimates. In contrast, using the average of all EEG channels as reference\nspreads the forward modeling error evenly across channels, so no one channel\nis weighted more strongly during source estimation. See also this FieldTrip\nFAQ on average referencing_ for more information.\nThe main reason for specifying the average reference as a projector was\nmentioned in the previous section: an average reference projector adapts if\nchannels are dropped, ensuring that the signal will always be zero-mean when\nthe source modeling is performed. In contrast, applying an average reference\nby the traditional subtraction method offers no such guarantee.\nFor these reasons, when performing inverse imaging, MNE-Python will raise\na ValueError if there are EEG channels present and something other than\nan average reference strategy has been specified.\n.. LINKS\nhttp://www.fieldtriptoolbox.org/faq/why_should_i_use_an_average_reference_for_eeg_source_reconstruction/\n https://en.wikipedia.org/wiki/10%E2%80%9320_system_(EEG)\nReferences\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/intelligent_annotation_dialogs
|
exp1_fixed_strategies_IAD_prob.ipynb
|
apache-2.0
|
[
"Copyright 2018 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\nhttps://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License.\nExperiment 1: fixed detector in many scenarios\nThis notebook computes the performance of the fixed strategies in various scenarios. This experiment is described in Sec. 5.2 of CVPR submission \"Learning Intelligent Dialogs for Bounding Box Annotation\".",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom __future__ import division\nfrom __future__ import print_function\nimport math\nimport gym\nfrom gym import spaces\n\nimport pandas as pd\n\nfrom sklearn import linear_model, ensemble, neural_network, model_selection, ensemble\nfrom sklearn.neural_network import MLPClassifier\n\nfrom third_party import np_box_ops\nimport annotator, detector, dialog, environment",
"To specify the experiments, 3 paramters need to be defined: \n\ndetector\ntype of drawing\ndesired quality of bounding boxes (this notebook only works with strong detections)\n\nAll together, it gives 8 possible experiment, 6 of which were presented in the paper.",
"# desired quality: high (min_iou=0.7) and low (min_iou=0.5)\nmin_iou = 0.7 # @param [\"0.5\", \"0.7\"]\n# drawing speed: high (time_draw=7) and low (time_draw=25)\ntime_draw = 7 # @param [\"7\", \"25\"]\n\n# if detector is weak, then we use best MIL, if it is strong, we use detector trained on PASCAL 2012\ndetector_weak = False # @param [\"False\"]",
"Other parameters of the experiment",
"random_seed = 805 # global variable that fixes the random seed everywhere for replroducibility of results\n\n# what kind of features will be used to represent the state\n# numerical values 1-20 correspond to one hot encoding of class\npredictive_fields = ['prediction_score', 'relative_size', 'avg_score', 'dif_avg_score', 'dif_max_score', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20]\n\ntime_verify = 1.8 # @param\n\n# select one of the 10 folds\nfold = 8 # @param",
"Load all data",
"# Download GT:\n# wget wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_gt_for_iad.h5\n# Download detections with features\n# wget https://storage.googleapis.com/iad_pascal_annotations_and_detections/pascal_proposals_plus_features_for_iad.h5\n\ndownload_dir = ''\nground_truth = pd.read_hdf(download_dir + 'pascal_gt_for_iad.h5', 'ground_truth')\nbox_proposal_features = pd.read_hdf(download_dir + 'pascal_proposals_plus_features_for_iad.h5', 'box_proposal_features')",
"Initialise the experiment",
"the_annotator = annotator.AnnotatorSimple(ground_truth, random_seed, time_verify, time_draw, min_iou)\nthe_detector = detector.Detector(box_proposal_features, predictive_fields)\n\nimage_class = ground_truth[['image_id', 'class_id']]\nimage_class = image_class.drop_duplicates()",
"Select the trainig and testing data according to the selected fold. We split all images in 10 approximately equal parts and each fold includes these images together with all classes present in them.",
"# get a list of unique images\nunique_image = image_class['image_id'].drop_duplicates()\n# a list of image+class pairs\nimage_class_array = image_class.values[:,0]\n\n\nif fold==1:\n index_image_class1 = 0\nelse:\n image_division1 = unique_image.iloc[502+501*(fold-2)]\n index_image_class1 = np.searchsorted(image_class_array, image_division1, side='right')\n \nif fold==10:\n index_image_class2 = len(image_class_array)\nelse:\n image_division2 = unique_image.iloc[502+501*(fold-1)]\n index_image_class2 = np.searchsorted(image_class_array, image_division2, side='right')\n\n# the selected fold becomes the training set\nimage_class_train = image_class.iloc[index_image_class1:index_image_class2]\n# the other 9 folds become test set\nimage_class_test = pd.concat([image_class.iloc[0:index_image_class1],image_class.iloc[index_image_class2:]])",
"Initialise the environment for testing the strategies.",
"env_train = environment.AnnotatingDataset(the_annotator, the_detector, image_class_train)\nenv_test = environment.AnnotatingDataset(the_annotator, the_detector, image_class_test)",
"Experiment with fixed strategies",
"lower_bound = False\n# possible values: DialogD, DialogV, DialogV1D, DialogV2D, DialogV3D, best\nif lower_bound:\n num_verifications = int(math.floor(env_train.annotator.time_draw/env_train.annotator.time_verify))\nelse:\n num_verifications = 2 # @param\n\n%output_height 300\n\nprint('Running ', len(env_test.image_class), 'episodes with strategy: V', str(num_verifications), 'D')\n\n# total reward of all annotation episodes\ntotal_reward = 0\nall_rewards = []\n# go through all training image+class pairs\nfor i in range(len(env_test.image_class)):\n print('Episode ', i, end = ': ')\n # create an agent that generates dialogs\n agent = dialog.FixedDialog(num_verifications)\n # resent the enviroment to set the current imag+class pair to be at index i\n state = env_test.reset(current_index=i)\n done = False\n episode_reward = 0\n # until the end of the episode is reached\n while not(done):\n action = agent.get_next_action(state)\n if action==0:\n print('V', end='')\n elif action==1:\n print('D', end='')\n # make the next step\n next_state, reward, done, _ = env_test.step(action)\n state = next_state\n total_reward += reward\n episode_reward += reward\n all_rewards.append(episode_reward)\n print()\n \nif lower_bound:\n all_rewards = np.array(all_rewards)\n too_long = (all_rewards < -env_test.annotator.time_draw)\n all_rewards[too_long] = -env_test.annotator.time_draw\n total_reward = sum(all_rewards)\n \nprint('Total duration of all episodes = ', -total_reward) \nprint('Average episode duration = ', -total_reward/len(env_test.image_class))",
"Experiments with IAD-Prob\nCollect example episodes on the training part of the data",
"%output_height 300\n\nprint('Running ', len(env_train.image_class), 'episodes with strategy V3D for data collection')\n\n# total reward of all annotation episodes\ntotal_reward = 0\ndata_for_classifier = []\n# go through all training image+class pairs\nfor i in range(len(env_train.image_class)):\n print('Episode ', i, end = ': ')\n # create an agent that generates dialogs\n # for collecting data we usually use strategy V3D\n # however, when longer verification series can be better, it is better to also collect data with longer verification series\n agent = dialog.FixedDialog(5)\n # reset the environment to set the current image+class pair to be at index i\n state = env_train.reset(current_index=i)\n done = False\n # until the end of the episode is reached\n while not(done):\n action = agent.get_next_action(state)\n # make the next step\n next_state, reward, done, _ = env_train.step(action)\n if action==0:\n print('V', end='')\n state_dict = dict(state)\n state_dict['is_accepted'] = done\n data_for_classifier.append(state_dict)\n elif action==1:\n print('D', end='')\n state = next_state\n total_reward += reward\n print()\n \nprint('Total duration of all episodes = ', -total_reward) \nprint('Average episode duration = ', -total_reward/len(env_train.image_class))\n\ndata_for_classifier = pd.DataFrame(data_for_classifier)",
"Find the best classification model to predict if a box is going to be accepted or rejected",
"np.random.seed(random_seed) # for reproducibility of fitting the classifier and cross-validation\n\nprint('Cross-validating parameters\\' values... This might take some time.')\n\n# possible parameter values\nparameters = {'hidden_layer_sizes': ((20, 20, 20), (50, 50, 50), (80, 80, 80), (20, 20, 20, 20), (50, 50, 50, 50), (80, 80, 80, 80), (20, 20, 20, 20, 20), (50, 50, 50, 50, 50), (80, 80, 80, 80, 80)), 'activation': ('logistic', 'relu'), 'alpha': [0.0001, 0.001, 0.01]}\nmodel_mlp = neural_network.MLPClassifier()\n# cross-validate parameters\ngrid_search = model_selection.GridSearchCV(model_mlp, parameters, scoring='neg_log_loss', refit=True)\ngrid_search.fit(data_for_classifier[predictive_fields], data_for_classifier['is_accepted'])\nprint('best score = ', grid_search.best_score_)\nprint('best parameters = ', grid_search.best_params_)\n# use the model with the best parameters\nacceptance_prediction_model = grid_search.best_estimator_",
"Run the experiment with IAD-Prob",
"%output_height 300\n\n# initialise the agent\nagent = dialog.DialogProb(acceptance_prediction_model, the_annotator)\n\nprint('Running ', len(env_test.image_class), 'episodes with strategy IAD-Prob')\n# total reward of all annotation episodes\ntotal_reward = 0\n# go through all test image+class pairs\nfor i in range(len(env_test.image_class)):\n print('Episode ', i, end = ': ')\n # reset the environment and select the next image+class pair\n state = env_test.reset(current_index=i)\n done = False\n # until the end of the episode is reached\n while not(done):\n action = agent.get_next_action(state)\n if action==0:\n print('V', end='')\n elif action==1:\n print('D', end='')\n # make the next step\n next_state, reward, done, _ = env_test.step(action)\n state = next_state\n total_reward += reward\n print()\n\n \nprint('Total duration of all episodes = ', -total_reward) \nprint('Average episode duration = ', -total_reward/len(env_test.image_class))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kaleoyster/nbi-data-science
|
Deterioration Curves/(Southeast) Deterioration+Curves++and+Classification+of+Bridges+in+the+Southeast+United+States.ipynb
|
gpl-2.0
|
[
"Libraries and Packages",
"import pymongo\nfrom pymongo import MongoClient\nimport time\nimport pandas as pd\nimport numpy as np\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport csv",
"Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance",
"Client = MongoClient(\"mongodb://bridges:readonly@nbi-mongo.admin/bridge\")\ndb = Client.bridge\ncollection = db[\"bridges\"]",
"Deterioration curves of the Southeast United States\nFor demonstration purposes, the results focus only on the states in the Southeast United States: West Virginia, Virginia, Kentucky, Tennessee, North Carolina, South Carolina, Georgia, Alabama, Mississippi, Arkansas, Louisiana, and Florida.\nBridges are classified as slow deteriorating, fast deteriorating, or average deteriorating based on each bridge's rate of deterioration. This section demonstrates how bridges deteriorate over time in the Southeast United States. To plot the deterioration curve of bridges in every state, bridges were grouped by their age, giving 60 groups of bridges from age 1 to 60; the mean condition rating of the deck, superstructure, and substructure is plotted for every age.\nExtracting data of the Southeast United States from 1992 - 2016\nThe following query extracts data from the MongoDB instance and projects only selected attributes such as structure number, yearBuilt, deck, year, superstructure, and substructure.",
"def getData(state):\n pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":1991, \"$lt\":2017}},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"structureNumber\":1,\n \"yearBuilt\":1,\n \"deck\":1, ## rating of deck\n \"year\":1, ## survey year\n \"substructure\":1, ## rating of substructure\n \"superstructure\":1, ## rating of superstructure\n }}]\n \n dec = collection.aggregate(pipeline)\n conditionRatings = pd.DataFrame(list(dec)) \n conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']\n return conditionRatings\n\n",
"Filtering Null Values, Converting JSON Format to Dataframes, and Calculating Mean Condition Ratings of Deck, Superstructure, and Substructure\nAfter the NBI data is extracted, it has to be filtered to remove data points with missing values such as 'N' and 'NA'.\nThe mean condition rating then has to be calculated for all the components: deck, substructure, and superstructure.",
"def getMeanRatings(state,startAge, endAge, startYear, endYear):\n conditionRatings = getData(state)\n conditionRatings = conditionRatings[['structureNumber','Age','superstructure','deck','substructure','year']]\n conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]\n conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]\n #conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]\n #conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]\n \n maxAge = conditionRatings['Age'].unique()\n tempConditionRatingsDataFrame = conditionRatings.loc[conditionRatings['year'].isin([i for i in range(startYear, endYear+1, 1)])]\n \n MeanDeck = []\n StdDeck = []\n \n MeanSubstructure = []\n StdSubstructure = []\n \n MeanSuperstructure = []\n StdSuperstructure = []\n \n ## start point of the age to be = 1 and ending point = 100\n for age in range(startAge,endAge+1,1):\n ## Select all the bridges from with age = i\n tempAgeDf = tempConditionRatingsDataFrame.loc[tempConditionRatingsDataFrame['Age'] == age]\n \n ## type conversion deck rating into int\n listOfMeanDeckOfAge = list(tempAgeDf['deck'])\n listOfMeanDeckOfAge = [ int(deck) for deck in listOfMeanDeckOfAge ] \n \n ## takeing mean and standard deviation of deck rating at age i\n meanDeck = np.mean(listOfMeanDeckOfAge)\n stdDeck = np.std(listOfMeanDeckOfAge)\n \n ## type conversion substructure rating into int\n listOfMeanSubstructureOfAge = list(tempAgeDf['substructure'])\n listOfMeanSubstructureOfAge = [ int(substructure) for substructure in listOfMeanSubstructureOfAge ] \n \n meanSub = np.mean(listOfMeanSubstructureOfAge)\n stdSub = np.std(listOfMeanSubstructureOfAge)\n \n \n ## type conversion substructure rating into int\n listOfMeanSuperstructureOfAge = list(tempAgeDf['superstructure'])\n 
listOfMeanSuperstructureOfAge = [ int(superstructure) for superstructure in listOfMeanSuperstructureOfAge ] \n \n meanSup = np.mean(listOfMeanSuperstructureOfAge)\n stdSup = np.std(listOfMeanSuperstructureOfAge)\n \n #Append Deck\n MeanDeck.append(meanDeck)\n StdDeck.append(stdDeck)\n \n #Append Substructure\n MeanSubstructure.append(meanSub)\n StdSubstructure.append(stdSub)\n \n #Append Superstructure\n MeanSuperstructure.append(meanSup)\n StdSuperstructure.append(stdSup)\n \n return [MeanDeck, StdDeck ,MeanSubstructure, StdSubstructure, MeanSuperstructure, StdSuperstructure]\n",
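As a side note, the per-age mean and standard deviation computed by the loop in `getMeanRatings` above can also be sketched with a pandas `groupby`. This is only an illustration on hypothetical miniature data, not part of the original notebook; note that pandas' `std` uses the sample estimator (`ddof=1`), whereas the `np.std` call above uses `ddof=0`.

```python
import pandas as pd

# Hypothetical miniature of the filtered conditionRatings frame,
# with deck ratings already cast to int.
df = pd.DataFrame({
    'Age':  [1, 1, 2, 2, 2],
    'deck': [8, 6, 7, 7, 4],
})

# Mean and standard deviation of the deck rating per age group, in one call.
stats = df.groupby('Age')['deck'].agg(['mean', 'std'])
print(stats)
```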
"Creating DataFrames of the Mean Condition Ratings of the Deck, Superstructure, and Substructure\nThe calculated mean condition ratings of the deck, superstructure, and substructure are now stored in separate dataframes for convenience.",
"states = ['54','51','21','47','37','45','13','01','28','02','22','12'] \n\n# state code to state abbreviation \nstateNameDict = {'25':'MA',\n '04':'AZ',\n '08':'CO',\n '38':'ND',\n '09':'CT',\n '19':'IA',\n '26':'MI',\n '48':'TX',\n '35':'NM',\n '17':'IL',\n '51':'VA',\n '23':'ME',\n '16':'ID',\n '36':'NY',\n '56':'WY',\n '29':'MO',\n '39':'OH',\n '28':'MS',\n '11':'DC',\n '21':'KY',\n '18':'IN',\n '06':'CA',\n '47':'TN',\n '12':'FL',\n '24':'MD',\n '34':'NJ',\n '46':'SD',\n '13':'GA',\n '55':'WI',\n '30':'MT',\n '54':'WV',\n '15':'HI',\n '32':'NV',\n '37':'NC',\n '10':'DE',\n '33':'NH',\n '44':'RI',\n '50':'VT',\n '42':'PA',\n '05':'AR',\n '20':'KS',\n '45':'SC',\n '22':'LA',\n '40':'OK',\n '72':'PR',\n '41':'OR',\n '27':'MN',\n '53':'WA',\n '01':'AL',\n '31':'NE',\n '02':'AK',\n '49':'UT'\n }\n\ndef getBulkMeanRatings(states, stateNameDict):\n # Initializaing the dataframes for deck, superstructure and subtructure\n df_mean_deck = pd.DataFrame({'Age':range(1,61)})\n df_mean_sup = pd.DataFrame({'Age':range(1,61)})\n df_mean_sub = pd.DataFrame({'Age':range(1,61)})\n \n df_std_deck = pd.DataFrame({'Age':range(1,61)})\n df_std_sup = pd.DataFrame({'Age':range(1,61)})\n df_std_sub = pd.DataFrame({'Age':range(1,61)})\n\n for state in states:\n meanDeck, stdDeck, meanSub, stdSub, meanSup, stdSup = getMeanRatings(state,1,100,1992,2016)\n stateName = stateNameDict[state]\n df_mean_deck[stateName] = meanDeck[:60]\n df_mean_sup[stateName] = meanSup[:60]\n df_mean_sub[stateName] = meanSub[:60]\n \n df_std_deck[stateName] = stdDeck[:60]\n df_std_sup[stateName] = stdSup[:60]\n df_std_sub[stateName] = stdSub[:60]\n \n return df_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub\n \ndf_mean_deck, df_mean_sup, df_mean_sub, df_std_deck, df_std_sup, df_std_sub = getBulkMeanRatings(states, stateNameDict)",
"Deterioration Curves - Deck",
"%matplotlib inline\n\npalette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n index = index + 1\n stateName = stateNameDict[state]\n plt.plot(df_mean_deck['Age'],df_mean_deck[stateName], color = palette[index])\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Deck Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Deck Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\n#palette = plt.get_cmap('gist_ncar')\npalette = [\n 'blue', 'blue', 'green','magenta','cyan','brown','grey','red','silver','purple','gold','black','olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_deck.drop('Age', axis=1):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_mean_deck['Age'], df_mean_deck[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,9)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 4, 'Mean Deck Rating', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Mean Deck Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n ",
"Deterioration Curve - Superstructure",
"palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n index = index + 1\n stateName = stateNameDict[state]\n plt.plot(df_mean_sup['Age'],df_mean_sup[stateName], color = palette[index])\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Superstructure Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Superstructure Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\n#palette = plt.get_cmap('gist_ncar')\npalette = [\n 'blue',\n 'blue',\n 'green',\n 'magenta',\n 'cyan',\n 'brown',\n 'grey',\n 'red',\n 'silver',\n 'purple',\n 'gold',\n 'black',\n 'olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_sup.drop('Age', axis=1):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_mean_sup['Age'], df_mean_sup[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,9)\n \n # Not ticks everywhere\n if num in range(10) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7,10]:\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 4, 'Mean Superstructure Rating', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Mean Superstructure Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n\npalette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' 
]\n",
"Deterioration Curves - Substructure",
"palette = [ 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey',\n 'red','silver','purple', 'gold', 'black','olive' ]\n\nplt.figure(figsize = (10,8))\nindex = 0\nfor state in states:\n index = index + 1\n stateName = stateNameDict[state]\n plt.plot(df_mean_sub['Age'],df_mean_sub[stateName], color = palette[index], linewidth=4)\nplt.legend([stateNameDict[state] for state in states],loc='upper right', ncol = 2) \nplt.xlim(1,60)\nplt.ylim(1,9)\nplt.title('Mean Substructure Rating Vs Age')\nplt.xlabel('Age')\nplt.ylabel('Mean Substructure Rating')\n\n\nplt.figure(figsize = (16,12))\nplt.xlabel('Age')\nplt.ylabel('Mean')\n\n\n# Initialize the figure\nplt.style.use('seaborn-darkgrid')\n \n# create a color palette\npalette = [\n 'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'\n]\n# multiple line plot\nnum=1\nfor column in df_mean_sub.drop('Age', axis=1):\n \n # Find the right spot on the plot\n plt.subplot(4,3, num)\n \n # Plot the lineplot\n plt.plot(df_mean_sub['Age'], df_mean_sub[column], marker='', color=palette[num], linewidth=4, alpha=0.9, label=column)\n \n # Same limits for everybody!\n plt.xlim(1,60)\n plt.ylim(1,9)\n \n # Not ticks everywhere\n if num in range(7) :\n plt.tick_params(labelbottom='off')\n if num not in [1,4,7] :\n plt.tick_params(labelleft='off')\n \n # Add title\n plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])\n plt.text(30, -1, 'Age', ha='center', va='center')\n plt.text(1, 4, 'Mean Substructure Rating', ha='center', va='center', rotation='vertical')\n num = num + 1\n \n# general title\nplt.suptitle(\"Mean Substructure Rating vs Age \\nIndividual State Deterioration Curves\", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)\n \n\ndef getDataOneYear(state):\n pipeline = [{\"$match\":{\"$and\":[{\"year\":{\"$gt\":2015, \"$lt\":2017}},{\"stateCode\":state}]}},\n {\"$project\":{\"_id\":0,\n \"Structure 
Type\":\"$structureTypeMain.typeOfDesignConstruction\",\n \"Type of Wearing Surface\":\"$wearingSurface/ProtectiveSystem.typeOfWearingSurface\",\n \"yearBuilt\":1,\n \"deck\":1, ## rating of deck\n \"year\":1, ## survey year\n \"substructure\":1, ## rating of substructure\n \"superstructure\":1, ## rating of superstructure\n }}]\n \n dec = collection.aggregate(pipeline)\n conditionRatings = pd.DataFrame(list(dec)) \n conditionRatings['Age'] = conditionRatings['year'] - conditionRatings['yearBuilt']\n \n \n return conditionRatings\n\n## Condition ratings of all states concatenated into one single data frame ConditionRatings\nframes = []\nfor state in states:\n f = getDataOneYear(state)\n frames.append(f)\ndf_nbi_se = pd.concat(frames)\ndf_nbi_se = df_nbi_se.loc[~df_nbi_se['deck'].isin(['N','NA'])]\ndf_nbi_se = df_nbi_se.loc[~df_nbi_se['substructure'].isin(['N','NA'])]\ndf_nbi_se = df_nbi_se.loc[~df_nbi_se['superstructure'].isin(['N','NA'])]\ndf_nbi_se = df_nbi_se.loc[~df_nbi_se['Type of Wearing Surface'].isin(['6'])]",
"Classification Criteria\nThe classification criteria below are used to classify bridges into slow deterioration, average deterioration, and fast deterioration. Bridges are classified based on how far an individual bridge’s deterioration score is from the mean deterioration score.\n| Categories | Value |\n|------------------------|-------------------------------|\n| Slow Deterioration | $z_{ia} \\geq \\bar{x}_a + 1\\sigma(x_a)$ |\n| Average Deterioration | $\\bar{x}_a - 1\\sigma(x_a) \\leq z_{ia} \\leq \\bar{x}_a + 1\\sigma(x_a)$ |\n| Fast Deterioration | $z_{ia} \\leq \\bar{x}_a - 1\\sigma(x_a)$ |",
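The table above can be read as a simple z-score rule: a bridge is compared against the one-standard-deviation band around the mean rating for its age. A minimal sketch follows (a hypothetical helper, not part of the notebook; boundary handling is simplified relative to the loop below):

```python
# Classify one bridge by how far its condition rating sits from the
# mean rating of bridges of the same age (one-standard-deviation bands).
def classify_deterioration(rating, mean, std):
    if rating > mean + std:
        return 'Slow Deterioration'     # much better condition than average for its age
    if rating < mean - std:
        return 'Fast Deterioration'     # much worse condition than average for its age
    return 'Average Deterioration'      # within one standard deviation of the mean

print(classify_deterioration(8, 6.0, 1.0))  # Slow Deterioration
print(classify_deterioration(4, 6.0, 1.0))  # Fast Deterioration
```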
"stat = ['54','51','21','47','37','45','13','01','28','02','22','12'] \nAgeList = list(df_nbi_se['Age'])\ndeckList = list(df_nbi_se['deck'])\nnum = 1\nfor st in stat:\n deckR = []\n deckR = getDataOneYear(st)\n deckR = deckR[['Age','deck']]\n deckR= deckR.loc[~deckR['deck'].isin(['N','NA'])]\n stateName = stateNameDict[st]\n labels = []\n for deckRating, Age in zip (deckList,AgeList):\n if Age < 60:\n mean_age_conditionRating = df_mean_deck[stateName][Age]\n std_age_conditionRating = df_std_deck[stateName][Age]\n\n detScore = (int(deckRating) - mean_age_conditionRating) / std_age_conditionRating\n\n if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Average Deterioration')\n # else, if more than a value,\n elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Slow Deterioration')\n # else, if more than a value,\n elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating):\n # Append a label\n labels.append('Fast Deterioration')\n else:\n labels.append('Null Value')\n D = dict((x,labels.count(x)) for x in set(labels))\n \n plt.figure(figsize=(12,6))\n plt.title(stateName)\n plt.bar(range(len(D)), list(D.values()), align='center')\n plt.xticks(range(len(D)), list(D.keys()))\n plt.xlabel('Categories')\n plt.ylabel('Number of Bridges')\n plt.show()\n num = num + 1\n\nstat = ['54','51','21','47','37','45','13','01','28','02','22','12'] \nAgeList = list(df_nbi_se['Age'])\ndeckList = list(df_nbi_se['deck'])\nnum = 1\nlabels = []\nfor st in stat:\n deckR = []\n deckR = getDataOneYear(st)\n deckR = deckR[['Age','deck']]\n deckR= deckR.loc[~deckR['deck'].isin(['N','NA'])]\n stateName = stateNameDict[st]\n \n for deckRating, Age in zip (deckList,AgeList):\n if Age < 60:\n mean_age_conditionRating = df_mean_deck[stateName][Age]\n std_age_conditionRating = df_std_deck[stateName][Age]\n\n 
detScore = (int(deckRating) - mean_age_conditionRating) / std_age_conditionRating\n\n if (mean_age_conditionRating - std_age_conditionRating) < int(deckRating) <= (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Average Deterioration')\n # else, if more than a value,\n elif int(deckRating) > (mean_age_conditionRating + std_age_conditionRating):\n # Append a label\n labels.append('Slow Deterioration')\n # else, if more than a value,\n elif int(deckRating) < (mean_age_conditionRating - std_age_conditionRating):\n # Append a label\n labels.append('Fast Deterioration')\n else:\n labels.append('Null Value')\n\n ",
"Classification of all the bridges in the Southeast United States",
"D = dict((x,labels.count(x)) for x in set(labels))\nplt.figure(figsize=(12,6))\nplt.title('Classification of Bridges in Southeast United States')\nplt.bar(range(len(D)), list(D.values()), align='center')\nplt.xticks(range(len(D)), list(D.keys()))\nplt.xlabel('Categories of Bridges')\nplt.ylabel('Number of Bridges')\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dietmarw/EK5312_ElectricalMachines
|
Chapman/Ch5-Problem_5-02.ipynb
|
unlicense
|
[
"Exercises Electric Machinery Fundamentals\nChapter 5\nProblem 5-2",
"%pylab notebook\n%precision %.4g\nimport cmath",
"Description\nAssume that the motor of Problem 5-1 is operating at rated conditions.",
"Vt = 480 # [V]\nPF = 0.8\nfse = 60 # [Hz]\np = 8.0\nPout = 400 * 746 # [W]\nXs = 0.6 # [Ohm]",
"(a)\n\nWhat are the magnitudes and angles of $\\vec{E}_A$, $\\vec{I}_A$, and $I_F$?\n\n(b)\nSuppose the load is removed from the motor.\n\nWhat are the magnitudes and angles of $\\vec{E}_A$ and $\\vec{I}_A$ now?\n\nSOLUTION\n(a)\nThe line current flow at rated conditions is:\n$$I_L = \\frac{P}{\\sqrt{3}\\,V_T\\,PF}$$",
"Pin = Pout\nil = Pin / (sqrt(3) * Vt * PF)\nil # [A]",
"Because the motor is $\\Delta$-connected, the corresponding phase current is:",
"ia = il / sqrt(3)\nia # [A]",
"The angle of the current is:",
"Ia_angle = arccos(PF)\nIa_angle /pi *180 # [degrees]\n\nIa = ia * (cos(Ia_angle) + sin(Ia_angle)*1j)\nprint('''\nIa = {:.0f} A ∠{:.2f}°\n=================='''.format(abs(Ia), Ia_angle / pi *180))",
"The internal generated voltage $\\vec{E}_A$ is:\n$$\\vec{E}_A = \\vec{V}_\\phi - jX_S\\vec{I}_A$$",
"EA = Vt - Xs * 1j * Ia\nEA_angle = arctan(EA.imag/EA.real)\nprint('''\nEa = {:.0f} V ∠{:.1f}°\n=================='''.format(abs(EA), EA_angle / pi *180))",
"The field current is directly proportional to $|\\vec{E}_A|$, which equals 480 V when $I_F = 4\\,A$. The required field current is:\n$$\\frac{|\\vec{E}_{A2}|}{|\\vec{E}_{A1}|} = \\frac{I_{F2}}{I_{F1}}$$",
"If1 = 4 # [A]\nEa1 = 480 # [V]\nEa2 = abs(EA)\nIf2 = (Ea2/Ea1) * If1\nprint('''\nIf2 = {:.2f} A\n============'''.format(If2))",
"(b)\nWhen the load is removed from the motor, the magnitude of $|\\vec{E}_A|$ remains unchanged but the torque angle goes to $\\delta = 0°$. The resulting armature current is:\n$$\\vec{I}_A = \\frac{\\vec{V}_\\phi - \\vec{E}_A}{jX_S}$$",
"delta_b = 0*pi/180 # [rad]\nEA_b = abs(EA) *(cos(delta_b) + sin(delta_b)*1j)\nEA_b_angle = arctan(EA_b.imag/EA_b.real)\n\nIa_b = (Vt - EA_b) / (Xs*1j)\nIa_b_angle = arctan(Ia_b.imag/Ia_b.real) # a possible warning might occur here because of division by zero\n\nprint('''\nEA_b = {:.1f} ∠{:>2.0f}°\nIa_b = {:.1f} ∠{:>2.0f}°\n=================='''.format(abs(EA_b), EA_b_angle/pi*180, \n abs(Ia_b), Ia_b_angle/pi*180))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/mri/cmip6/models/mri-esm2-0/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: MRI\nSource ID: MRI-ESM2-0\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:19\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mri', 'mri-esm2-0', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. 
Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified, describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
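The stub cells above all follow the same fill-in pattern: `DOC.set_id(...)` selects the property, then `DOC.set_value(...)` records the value(s), with Cardinality 1.1 properties taking exactly one value and 1.N properties taking one `set_value` call per applicable choice. The sketch below illustrates that pattern with a minimal stand-in class, since the real `DOC` object is provided by the pyesdoc notebook environment; the property values chosen here are hypothetical examples, not a description of any actual model.

```python
class DocStub:
    """Minimal stand-in for the ES-DOC notebook's DOC helper (illustration only).

    Collects (property_id, value) pairs the way the stub cells do; the real
    DOC object comes from the pyesdoc environment.
    """

    def __init__(self):
        self.values = {}
        self._current_id = None

    def set_id(self, property_id):
        # Each cell first selects the property to be filled in.
        self._current_id = property_id

    def set_value(self, value):
        # Cardinality 1.1 properties take one call; 1.N properties are
        # filled by calling set_value once per selected choice.
        self.values.setdefault(self._current_id, []).append(value)


DOC = DocStub()

# Cardinality 1.1 -- exactly one value (hypothetical choice):
DOC.set_id("cmip6.atmos.key_properties.orography.type")
DOC.set_value("modified")

# Cardinality 1.N -- one call per applicable choice (hypothetical choices):
DOC.set_id("cmip6.atmos.key_properties.orography.changes")
DOC.set_value("related to ice sheets")
DOC.set_value("modified mean")

print(DOC.values)
```

For ENUM properties, values must be drawn from the "Valid Choices" list in the corresponding stub cell; free-text STRING and numeric INTEGER/BOOLEAN properties are set the same way, with unquoted values where the stub shows `DOC.set_value(value)`.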
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Fluorinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Fluorinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
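Note the quoting convention across these template cells: ENUM and STRING properties are set with quoted strings, `DOC.set_value("value")`, while BOOLEAN, INTEGER, and FLOAT properties take bare Python literals, `DOC.set_value(value)`. A hedged illustration of why the distinction matters, with a hypothetical type check standing in for `DOC`:

```python
# Hypothetical stand-in showing why BOOLEAN/INTEGER properties take bare
# literals while ENUM/STRING properties take quoted strings.
def check_value(expected_type, value):
    # Reject bools where an int is expected (bool is a subclass of int).
    if not isinstance(value, expected_type) or (
        expected_type is int and isinstance(value, bool)
    ):
        raise TypeError(f"expected {expected_type.__name__}, got {value!r}")
    return value


check_value(bool, True)   # 30.4 Counter Gradient: BOOLEAN, bare literal
check_value(int, 2)       # 30.3 Closure Order: INTEGER, bare literal
check_value(str, "EDMF")  # 30.1 Scheme Name: quoted string

try:
    check_value(int, "2")  # a quoted integer is rejected
except TypeError as err:
    print(err)
```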
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISCCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISCCP top height estimation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISCCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-columns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
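FLOAT properties such as the radar frequency are entered as bare numerics in the stated unit (here Hz). For instance, a COSP configuration mimicking a 94 GHz space-borne cloud radar (the CloudSat CPR band) would record the frequency as `94.0e9`. A small sketch of the unit conversion, with a hypothetical helper rather than any ES-DOC API:

```python
# Hypothetical helper: convert a radar frequency from GHz to the Hz
# value that the FLOAT property in this template expects.
def ghz_to_hz(frequency_ghz):
    return frequency_ghz * 1.0e9


frequency_hz = ghz_to_hz(94.0)  # 94 GHz, the CloudSat CPR band
print(frequency_hz)  # 94000000000.0
```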
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propagation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propagation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
statsmodels/statsmodels.github.io
|
v0.12.1/examples/notebooks/generated/ordinal_regression.ipynb
|
bsd-3-clause
|
[
"Ordinal Regression",
"import numpy as np\nimport pandas as pd\nimport scipy.stats as stats\n\nfrom statsmodels.miscmodels.ordinal_model import OrderedModel",
"Loading a stata data file from the UCLA website.This notebook is inspired by https://stats.idre.ucla.edu/r/dae/ordinal-logistic-regression/ which is a R notebook from UCLA.",
"url = \"https://stats.idre.ucla.edu/stat/data/ologit.dta\"\ndata_student = pd.read_stata(url)\n\ndata_student.head(5)\n\ndata_student.dtypes\n\ndata_student['apply'].dtype",
"This dataset is about the probability for undergraduate students to apply to graduate school given three exogenous variables:\n- their grade point average(gpa), a float between 0 and 4.\n- pared, a binary that indicates if at least one parent went to graduate school.\n- and public, a binary that indicates if the current undergraduate institution of the student is public or private.\napply, the target variable is categorical with ordered categories: unlikely < somewhat likely < very likely. It is a pd.Serie of categorical type, this is preferred over NumPy arrays.\nThe model is based on a numerical latent variable $y_{latent}$ that we cannot observe but that we can compute thanks to exogenous variables.\nMoreover we can use this $y_{latent}$ to define $y$ that we can observe.\nFor more details see the the Documentation of OrderedModel, the UCLA webpage or this book.\nProbit ordinal regression:",
"mod_prob = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr='probit')\n\nres_prob = mod_prob.fit(method='bfgs')\nres_prob.summary()",
"In our model, we have 3 exogenous variables(the $\\beta$s if we keep the documentation's notations) so we have 3 coefficients that need to be estimated.\nThose 3 estimations and their standard errors can be retrieved in the summary table.\nSince there are 3 categories in the target variable(unlikely, somewhat likely, very likely), we have two thresholds to estimate. \nAs explained in the doc of the method OrderedModel.transform_threshold_params, the first estimated threshold is the actual value and all the other thresholds are in terms of cumulative exponentiated increments. Actual thresholds values can be computed as follows:",
"num_of_thresholds = 2\nmod_prob.transform_threshold_params(res_prob.params[-num_of_thresholds:])",
"Logit ordinal regression:",
"mod_log = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()\n\npredicted = res_log.model.predict(res_log.params, exog=data_student[['pared', 'public', 'gpa']])\npredicted\n\npred_choice = predicted.argmax(1)\nprint('Fraction of correct choice predictions')\nprint((np.asarray(data_student['apply'].values.codes) == pred_choice).mean())",
"Ordinal regression with a custom cumulative cLogLog distribution:\nIn addition to logit and probit regression, any continuous distribution from SciPy.stats package can be used for the distr argument. Alternatively, one can define its own distribution simply creating a subclass from rv_continuous and implementing a few methods.",
"# using a SciPy distribution\nres_exp = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr=stats.expon).fit(method='bfgs', disp=False)\nres_exp.summary()\n\n# minimal definition of a custom scipy distribution.\nclass CLogLog(stats.rv_continuous):\n def _ppf(self, q):\n return np.log(-np.log(1 - q))\n\n def _cdf(self, x):\n return 1 - np.exp(-np.exp(x))\n\n\ncloglog = CLogLog()\n\n# definition of the model and fitting\nres_cloglog = OrderedModel(data_student['apply'],\n data_student[['pared', 'public', 'gpa']],\n distr=cloglog).fit(method='bfgs', disp=False)\nres_cloglog.summary()",
"Using formulas - treatment of endog\nPandas' ordered categorical and numeric values are supported as dependent variable in formulas. Other types will raise a ValueError.",
"modf_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa\", data_student,\n distr='logit')\nresf_logit = modf_logit.fit(method='bfgs')\nresf_logit.summary()",
"Using numerical codes for the dependent variable is supported but loses the names of the category levels. The levels and names correspond to the unique values of the dependent variable sorted in alphanumeric order as in the case without using formulas.",
"data_student[\"apply_codes\"] = data_student['apply'].cat.codes * 2 + 5\ndata_student[\"apply_codes\"].head()\n\nOrderedModel.from_formula(\"apply_codes ~ 0 + pared + public + gpa\", data_student,\n distr='logit').fit().summary()\n\nresf_logit.predict(data_student.iloc[:5])",
"Using string values directly as dependent variable raises a ValueError.",
"data_student[\"apply_str\"] = np.asarray(data_student[\"apply\"])\ndata_student[\"apply_str\"].head()\n\nOrderedModel.from_formula(\"apply_str ~ 0 + pared + public + gpa\", data_student,\n distr='logit')",
"Using formulas - no constant in model\nThe parameterization of OrderedModel requires that there is no constant in the model, neither explicit nor implicit. The constant is equivalent to shifting all thresholds and is therefore not separately identified.\nPatsy's formula specification does not allow a design matrix without explicit or implicit constant if there are categorical variables (or maybe splines) among explanatory variables. As workaround, statsmodels removes an explit intercept. \nConsequently, there are two valid cases to get a design matrix without intercept.\n\nspecify a model without explicit and implicit intercept which is possible if there are only numerical variables in the model.\nspecify a model with an explicit intercept which statsmodels will remove.\n\nModels with an implicit intercept will be overparameterized, the parameter estimates will not be fully identified, cov_params will not be invertible and standard errors might contain nans.\nIn the following we look at an example with an additional categorical variable.",
"nobs = len(data_student)\ndata_student[\"dummy\"] = (np.arange(nobs) < (nobs / 2)).astype(float)",
"explicit intercept, that will be removed:\nNote \"1 +\" is here redundant because it is patsy's default.",
"modfd_logit = OrderedModel.from_formula(\"apply ~ 1 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit')\nresfd_logit = modfd_logit.fit(method='bfgs')\nprint(resfd_logit.summary())\n\nmodfd_logit.k_vars\n\nmodfd_logit.k_constant",
"implicit intercept creates overparameterized model\nSpecifying \"0 +\" in the formula drops the explicit intercept. However, the categorical encoding is now changed to include an implicit intercept. In this example, the created dummy variables C(dummy)[0.0] and C(dummy)[1.0] sum to one.",
"OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit')\n",
"To see what would happen in the overparameterized case, we can avoid the constant check in the model by explicitly specifying whether a constant is present or not. We use hasconst=False, even though the model has an implicit constant.\nThe parameters of the two dummy variable columns and the first threshold are not separately identified. Estimates for those parameters and availability of standard errors are arbitrary and depends on numerical details that differ across environments.\nSome summary measures like log-likelihood value are not affected by this, within convergence tolerance and numerical precision. Prediction should also be possible. However, inference is not available, or is not valid.",
"modfd2_logit = OrderedModel.from_formula(\"apply ~ 0 + pared + public + gpa + C(dummy)\", data_student,\n distr='logit', hasconst=False)\nresfd2_logit = modfd2_logit.fit(method='bfgs')\nprint(resfd2_logit.summary())\n\nresfd2_logit.predict(data_student.iloc[:5])\n\nresf_logit.predict()",
"Binary Model compared to Logit\nIf there are only two levels of the dependent ordered categorical variable, then the model can also be estimated by a Logit model.\nThe models are (theoretically) identical in this case except for the parameterization of the constant. Logit as most other models requires in general an intercept. This corresponds to the threshold parameter in the OrderedModel, however, with opposite sign.\nThe implementation differs and not all of the same results statistic and post-estimation features are available. Estimated parameters and other results statistic differ mainly based on convergence tolerance of the optimization.",
"from statsmodels.discrete.discrete_model import Logit\nfrom statsmodels.tools.tools import add_constant",
"We drop the middle category from the data and keep the two extreme categories.",
"mask_drop = data_student['apply'] == \"somewhat likely\"\ndata2 = data_student.loc[~mask_drop, :]\n# we need to remove the category also from the Categorical Index\ndata2['apply'].cat.remove_categories(\"somewhat likely\", inplace=True)\ndata2.head()\n\nmod_log = OrderedModel(data2['apply'],\n data2[['pared', 'public', 'gpa']],\n distr='logit')\n\nres_log = mod_log.fit(method='bfgs', disp=False)\nres_log.summary()",
"The Logit model does not have a constant by default, we have to add it to our explanatory variables.\nThe results are essentially identical between Logit and ordered model up to numerical precision mainly resulting from convergence tolerance in the estimation.\nThe only difference is in the sign of the constant, Logit and OrdereModel have opposite signs of he constant. This is a consequence of the parameterization in terms of cut points in OrderedModel instead of including and constant column in the design matrix.",
"ex = add_constant(data2[['pared', 'public', 'gpa']], prepend=False)\nmod_logit = Logit(data2['apply'].cat.codes, ex)\n\nres_logit = mod_logit.fit(method='bfgs', disp=False)\n\nres_logit.summary()",
"Robust standard errors are also available in OrderedModel in the same way as in discrete.Logit.\nAs example we specify HAC covariance type even though we have cross-sectional data and autocorrelation is not appropriate.",
"res_logit_hac = mod_logit.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\nres_log_hac = mod_log.fit(method='bfgs', disp=False, cov_type=\"hac\", cov_kwds={\"maxlags\": 2})\n\nres_logit_hac.bse.values - res_log_hac.bse"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
shankari/folium
|
examples/WidthHeight.ipynb
|
mit
|
[
"import os\nimport folium\n\nprint(folium.__version__)\n\nfrom branca.element import Figure\n\nlon, lat = -122.1889, 46.1991\n\nlocation = [lat, lon]\n\nzoom_start = 13\n\ntiles = 'OpenStreetMap'",
"Using same width and height triggers the scroll bar",
"width, height = 480, 350\n\nfig = Figure(width=width, height=height)\n\nm = folium.Map(\n location=location,\n tiles=tiles,\n width=width,\n height=height,\n zoom_start=zoom_start\n)\n\nfig.add_child(m)\n\nfig.save(os.path.join('results', 'WidthHeight_0.html'))\n\nfig",
"Can figure take relative sizes?",
"width, height = '100%', 350\n\nfig = Figure(width=width, height=height)\n\nm = folium.Map(\n location=location,\n tiles=tiles,\n width=width,\n height=height,\n zoom_start=zoom_start\n)\n\nfig.add_child(m)\n\nfig.save(os.path.join('results', 'WidthHeight_1.html'))\n\nfig",
"I guess not. (Well, it does make sense for a single HTML page, but not for iframes.)",
"width, height = 480, '100%'\n\nfig = Figure(width=width, height=height)\n\nm = folium.Map(\n location=location,\n tiles=tiles,\n width=width,\n height=height,\n zoom_start=zoom_start\n)\n\nfig.add_child(m)\n\nfig.save(os.path.join('results', 'WidthHeight_2.html'))\n\nfig",
"Not that Figure is interpreting this as 50px. We should raise something and be explicit on the docs.",
"width, height = '50%', '100%'\n\nfig = Figure(width=width, height=height)\n\nm = folium.Map(\n location=location,\n tiles=tiles,\n width=width,\n height=height,\n zoom_start=zoom_start\n)\n\nfig.add_child(m)\n\nfig.save(os.path.join('results', 'WidthHeight_3.html'))\n\nfig\n\nwidth, height = '150%', '100%'\n\ntry:\n folium.Map(location=location, tiles=tiles,\n width=width, height=height, zoom_start=zoom_start)\nexcept ValueError as e:\n print(e)\n\nwidth, height = '50%', '80p'\n\ntry:\n folium.Map(location=location, tiles=tiles,\n width=width, height=height, zoom_start=zoom_start)\nexcept ValueError as e:\n print(e)\n\nwidth, height = width, height = 480, -350\n\ntry:\n folium.Map(location=location, tiles=tiles,\n width=width, height=height, zoom_start=zoom_start)\nexcept ValueError as e:\n print(e)",
"Maybe we should recommend",
"width, height = 480, 350\n\nfig = Figure(width=width, height=height)\n\nm = folium.Map(\n location=location,\n tiles=tiles,\n width='100%',\n height='100%',\n zoom_start=zoom_start\n)\n\nfig.add_child(m)\n\nfig.save(os.path.join('results', 'WidthHeight_4.html'))\n\nfig"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.13/_downloads/plot_sensors_time_frequency.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Frequency and time-frequency sensors analysis\nThe objective is to show you how to explore the spectral content\nof your data (frequency and time-frequency). Here we'll work on Epochs.\nWe will use the somatosensory dataset that contains so\ncalled event related synchronizations (ERS) / desynchronizations (ERD) in\nthe beta band.",
"import numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.time_frequency import tfr_morlet, psd_multitaper\nfrom mne.datasets import somato",
"Set parameters",
"data_path = somato.data_path()\nraw_fname = data_path + '/MEG/somato/sef_raw_sss.fif'\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname, add_eeg_ref=False)\nevents = mne.find_events(raw, stim_channel='STI 014')\n\n# picks MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True, stim=False)\n\n# Construct Epochs\nevent_id, tmin, tmax = 1, -1., 3.\nbaseline = (None, 0)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=baseline, reject=dict(grad=4000e-13, eog=350e-6),\n preload=True, add_eeg_ref=False)\n\nepochs.resample(150., npad='auto') # resample to reduce computation time",
"Frequency analysis\nWe start by exploring the frequence content of our epochs.\nLet's first check out all channel types by averaging across epochs.",
"epochs.plot_psd(fmin=2., fmax=40.)",
"Now let's take a look at the spatial distributions of the PSD.",
"epochs.plot_psd_topomap(ch_type='grad', normalize=True)",
"Alternatively, you can also create PSDs from Epochs objects with functions\nthat start with psd_ such as\n:func:mne.time_frequency.psd_multitaper and\n:func:mne.time_frequency.psd_welch.",
"f, ax = plt.subplots()\npsds, freqs = psd_multitaper(epochs, fmin=2, fmax=40, n_jobs=1)\npsds = 10 * np.log10(psds)\npsds_mean = psds.mean(0).mean(0)\npsds_std = psds.mean(0).std(0)\n\nax.plot(freqs, psds_mean, color='k')\nax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,\n color='k', alpha=.5)\nax.set(title='Multitaper PSD (gradiometers)', xlabel='Frequency',\n ylabel='Power Spectral Density (dB)')\nplt.show()",
"Time-frequency analysis: power and intertrial coherence\nWe now compute time-frequency representations (TFRs) from our Epochs.\nWe'll look at power and intertrial coherence (ITC).\nTo this we'll use the function :func:mne.time_frequency.tfr_morlet\nbut you can also use :func:mne.time_frequency.tfr_multitaper\nor :func:mne.time_frequency.tfr_stockwell.",
"freqs = np.arange(6, 30, 3) # define frequencies of interest\nn_cycles = freqs / 2. # different number of cycle per frequency\npower, itc = tfr_morlet(epochs, freqs=freqs, n_cycles=n_cycles, use_fft=True,\n return_itc=True, decim=3, n_jobs=1)",
"Inspect power\n<div class=\"alert alert-info\"><h4>Note</h4><p>The generated figures are interactive. In the topo you can click\n on an image to visualize the data for one censor.\n You can also select a portion in the time-frequency plane to\n obtain a topomap for a certain time-frequency region.</p></div>",
"power.plot_topo(baseline=(-0.5, 0), mode='logratio', title='Average power')\npower.plot([82], baseline=(-0.5, 0), mode='logratio')\n\nfig, axis = plt.subplots(1, 2, figsize=(7, 4))\npower.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=8, fmax=12,\n baseline=(-0.5, 0), mode='logratio', axes=axis[0],\n title='Alpha', vmax=0.45, show=False)\npower.plot_topomap(ch_type='grad', tmin=0.5, tmax=1.5, fmin=13, fmax=25,\n baseline=(-0.5, 0), mode='logratio', axes=axis[1],\n title='Beta', vmax=0.45, show=False)\nmne.viz.tight_layout()\nplt.show()",
"Inspect ITC",
"itc.plot_topo(title='Inter-Trial coherence', vmin=0., vmax=1., cmap='Reds')",
"<div class=\"alert alert-info\"><h4>Note</h4><p>Baseline correction can be applied to power or done in plots\n To illustrate the baseline correction in plots the next line is\n commented power.apply_baseline(baseline=(-0.5, 0), mode='logratio')</p></div>\n\nExercise\n\nVisualize the intertrial coherence values as topomaps as done with\n power."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
xtr33me/deep-learning
|
intro-to-tensorflow/intro_to_tensorflow.ipynb
|
mit
|
[
"<h1 align=\"center\">TensorFlow Neural Network Lab</h1>\n\n<img src=\"image/notmnist.png\">\nIn this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href=\"http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html\">notMNIST</a>, consists of images of a letter from A to J in different fonts.\nThe above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!\nTo start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print \"All modules imported\".",
"import hashlib\nimport os\nimport pickle\nfrom urllib.request import urlretrieve\n\nimport numpy as np\nfrom PIL import Image\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import LabelBinarizer\nfrom sklearn.utils import resample\nfrom tqdm import tqdm\nfrom zipfile import ZipFile\n\nprint('All modules imported.')",
"The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).",
"def download(url, file):\n \"\"\"\n Download file from <url>\n :param url: URL to file\n :param file: Local file path\n \"\"\"\n if not os.path.isfile(file):\n print('Downloading ' + file + '...')\n urlretrieve(url, file)\n print('Download Finished')\n\n# Download the training and test dataset.\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')\ndownload('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')\n\n# Make sure the files aren't corrupted\nassert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\\\n 'notMNIST_train.zip file is corrupted. Remove the file and try again.'\nassert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\\\n 'notMNIST_test.zip file is corrupted. Remove the file and try again.'\n\n# Wait until you see that all files have been downloaded.\nprint('All files downloaded.')\n\ndef uncompress_features_labels(file):\n \"\"\"\n Uncompress features and labels from a zip file\n :param file: The zip file to extract the data from\n \"\"\"\n features = []\n labels = []\n\n with ZipFile(file) as zipf:\n # Progress Bar\n filenames_pbar = tqdm(zipf.namelist(), unit='files')\n \n # Get features and labels from all files\n for filename in filenames_pbar:\n # Check if the file is a directory\n if not filename.endswith('/'):\n with zipf.open(filename) as image_file:\n image = Image.open(image_file)\n image.load()\n # Load image data as 1 dimensional array\n # We're using float32 to save on memory space\n feature = np.array(image, dtype=np.float32).flatten()\n\n # Get the the letter from the filename. 
This is the letter of the image.\n label = os.path.split(filename)[1][0]\n\n features.append(feature)\n labels.append(label)\n return np.array(features), np.array(labels)\n\n# Get the features and labels from the zip files\ntrain_features, train_labels = uncompress_features_labels('notMNIST_train.zip')\ntest_features, test_labels = uncompress_features_labels('notMNIST_test.zip')\n\n# Limit the amount of data to work with a docker container\ndocker_size_limit = 150000\ntrain_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)\n\n# Set flags for feature engineering. This will prevent you from skipping an important step.\nis_features_normal = False\nis_labels_encod = False\n\n# Wait until you see that all features and labels have been uncompressed.\nprint('All features and labels uncompressed.')",
"<img src=\"image/Mean_Variance_Image.png\" style=\"height: 75%;width: 75%; position: relative; right: 5%\">\nProblem 1\nThe first problem involves normalizing the features for your training and test data.\nImplement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.\nSince the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.\nMin-Max Scaling:\n$\nX'=a+{\\frac {\\left(X-X_{\\min }\\right)\\left(b-a\\right)}{X_{\\max }-X_{\\min }}}\n$\nIf you're having trouble solving problem 1, you can view the solution here.",
"# Problem 1 - Implement Min-Max scaling for grayscale image data\ndef normalize_grayscale(image_data):\n \"\"\"\n Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]\n :param image_data: The image data to be normalized\n :return: Normalized image data\n \"\"\"\n # TODO: Implement Min-Max scaling for grayscale image data\n a = 0.1\n b = 0.9\n xmin = np.min(image_data)\n xmax = np.max(image_data)\n valRange = b-a\n denominator = xmax-xmin\n return [a + ((x-xmin)*valRange)/denominator for x in image_data]\n \n\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Test Cases\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),\n [0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,\n 0.125098039216, 0.128235294118, 0.13137254902, 0.9],\n decimal=3)\nnp.testing.assert_array_almost_equal(\n normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),\n [0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,\n 0.896862745098, 0.9])\n\nif not is_features_normal:\n train_features = normalize_grayscale(train_features)\n test_features = normalize_grayscale(test_features)\n is_features_normal = True\n\nprint('Tests Passed!')\n\nif not is_labels_encod:\n # Turn labels into numbers and apply One-Hot Encoding\n encoder = LabelBinarizer()\n encoder.fit(train_labels)\n train_labels = encoder.transform(train_labels)\n test_labels = encoder.transform(test_labels)\n\n # Change to float32, so it can be multiplied against the features in TensorFlow, which are float32\n train_labels = train_labels.astype(np.float32)\n test_labels = test_labels.astype(np.float32)\n is_labels_encod = True\n\nprint('Labels One-Hot Encoded')\n\nassert is_features_normal, 'You skipped the step to normalize the features'\nassert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'\n\n# Get 
randomized datasets for training and validation\ntrain_features, valid_features, train_labels, valid_labels = train_test_split(\n train_features,\n train_labels,\n test_size=0.05,\n random_state=832289)\n\nprint('Training features and labels randomized and split.')\n\n# Save the data for easy access\npickle_file = 'notMNIST.pickle'\nif not os.path.isfile(pickle_file):\n print('Saving data to pickle file...')\n try:\n with open('notMNIST.pickle', 'wb') as pfile:\n pickle.dump(\n {\n 'train_dataset': train_features,\n 'train_labels': train_labels,\n 'valid_dataset': valid_features,\n 'valid_labels': valid_labels,\n 'test_dataset': test_features,\n 'test_labels': test_labels,\n },\n pfile, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nprint('Data cached in pickle file.')",
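The list-comprehension implementation of `normalize_grayscale()` above works, but the same Min-Max scaling can be written in vectorized NumPy, which is faster and returns an ndarray rather than a Python list. A minimal sketch (the function name `normalize_grayscale_vec` is illustrative, not part of the lab):

```python
import numpy as np

def normalize_grayscale_vec(image_data, a=0.1, b=0.9):
    """Vectorized Min-Max scaling of grayscale pixel values to [a, b]."""
    image_data = np.asarray(image_data, dtype=np.float64)
    xmin, xmax = image_data.min(), image_data.max()
    # X' = a + (X - Xmin)(b - a) / (Xmax - Xmin), applied elementwise
    return a + (image_data - xmin) * (b - a) / (xmax - xmin)

# The endpoints of the input range map exactly onto [0.1, 0.9]
scaled = normalize_grayscale_vec(np.array([0, 128, 255]))
```

The shape of the input is preserved, so the same call works on the full `train_features` matrix without the per-element Python loop.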
"Checkpoint\nAll your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.",
"%matplotlib inline\n\n# Load the modules\nimport pickle\nimport math\n\nimport numpy as np\nimport tensorflow as tf\nfrom tqdm import tqdm\nimport matplotlib.pyplot as plt\n\n# Reload the data\npickle_file = 'notMNIST.pickle'\nwith open(pickle_file, 'rb') as f:\n pickle_data = pickle.load(f)\n train_features = pickle_data['train_dataset']\n train_labels = pickle_data['train_labels']\n valid_features = pickle_data['valid_dataset']\n valid_labels = pickle_data['valid_labels']\n test_features = pickle_data['test_dataset']\n test_labels = pickle_data['test_labels']\n del pickle_data # Free up memory\n\nprint('Data and modules loaded.')",
"Problem 2\nNow it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.\n<img src=\"image/network_diagram.png\" style=\"height: 40%;width: 40%; position: relative; right: 10%\">\nFor the input here the images have been flattened into a vector of $28 \\times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network. \nFor the neural network to train on your data, you need the following <a href=\"https://www.tensorflow.org/resources/dims_types.html#data-types\">float32</a> tensors:\n - features\n - Placeholder tensor for feature data (train_features/valid_features/test_features)\n - labels\n - Placeholder tensor for label data (train_labels/valid_labels/test_labels)\n - weights\n - Variable Tensor with random numbers from a truncated normal distribution.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal\">tf.truncated_normal() documentation</a> for help.\n - biases\n - Variable Tensor with all zeros.\n - See <a href=\"https://www.tensorflow.org/api_docs/python/constant_op.html#zeros\"> tf.zeros() documentation</a> for help.\nIf you're having trouble solving problem 2, review \"TensorFlow Linear Function\" section of the class. If that doesn't help, the solution for this problem is available here.",
"# All the pixels in the image (28 * 28 = 784)\nfeatures_count = 784\n# All the labels\nlabels_count = 10\n\n# TODO: Set the features and labels tensors\nfeatures = tf.placeholder(tf.float32)\nlabels = tf.placeholder(tf.float32)\n\n# TODO: Set the weights and biases tensors\nweights = tf.Variable(tf.truncated_normal((features_count, labels_count)))\nbiases = tf.Variable(tf.zeros(labels_count))\n\n\n\n### DON'T MODIFY ANYTHING BELOW ###\n\n#Test Cases\nfrom tensorflow.python.ops.variables import Variable\n\nassert features._op.name.startswith('Placeholder'), 'features must be a placeholder'\nassert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'\nassert isinstance(weights, Variable), 'weights must be a TensorFlow variable'\nassert isinstance(biases, Variable), 'biases must be a TensorFlow variable'\n\nassert features._shape == None or (\\\n features._shape.dims[0].value is None and\\\n features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'\nassert labels._shape == None or (\\\n labels._shape.dims[0].value is None and\\\n labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'\nassert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'\nassert biases._variable._shape == (10), 'The shape of biases is incorrect'\n\nassert features._dtype == tf.float32, 'features must be type float32'\nassert labels._dtype == tf.float32, 'labels must be type float32'\n\n# Feed dicts for training, validation, and test session\ntrain_feed_dict = {features: train_features, labels: train_labels}\nvalid_feed_dict = {features: valid_features, labels: valid_labels}\ntest_feed_dict = {features: test_features, labels: test_labels}\n\n# Linear Function WX + b\nlogits = tf.matmul(features, weights) + biases\n\nprediction = tf.nn.softmax(logits)\n\n# Cross entropy\ncross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)\n\n# Training loss\nloss = 
tf.reduce_mean(cross_entropy)\n\n# Create an operation that initializes all variables\ninit = tf.global_variables_initializer()\n\n# Test Cases\nwith tf.Session() as session:\n session.run(init)\n session.run(loss, feed_dict=train_feed_dict)\n session.run(loss, feed_dict=valid_feed_dict)\n session.run(loss, feed_dict=test_feed_dict)\n biases_data = session.run(biases)\n\nassert not np.count_nonzero(biases_data), 'biases must be zeros'\n\nprint('Tests Passed!')\n\n# Determine if the predictions are correct\nis_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))\n# Calculate the accuracy of the predictions\naccuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))\n\nprint('Accuracy function created.')",
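The loss defined above is the mean categorical cross-entropy of the softmax outputs over the batch. As a numerical sanity check, the same computation can be sketched in plain NumPy, independent of the TensorFlow graph (the logits and one-hot labels below are hypothetical):

```python
import numpy as np

def softmax(z):
    # Shift each row by its max for numerical stability; the result is unchanged.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical logits for a batch of 2 samples and 3 classes
logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.0]])
labels = np.array([[1.0, 0.0, 0.0],   # one-hot targets
                   [0.0, 1.0, 0.0]])

prediction = softmax(logits)
# Per-sample cross-entropy, then the batch mean — mirrors the TF ops above
cross_entropy = -np.sum(labels * np.log(prediction), axis=1)
loss = cross_entropy.mean()
```

Note that computing `log(softmax(...))` explicitly, as the lab's graph does, can underflow when a predicted probability is near zero; fused ops that combine the softmax and the log are generally more stable, but the explicit form is kept here to match the graph being tested.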
"<img src=\"image/Learn_Rate_Tune_Image.png\" style=\"height: 70%;width: 70%\">\nProblem 3\nBelow are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.\nParameter configurations:\nConfiguration 1\n* Epochs: 1\n* Learning Rate:\n * 0.8\n * 0.5\n * 0.1\n * 0.05\n * 0.01\nConfiguration 2\n* Epochs:\n * 1\n * 2\n * 3\n * 4\n * 5\n* Learning Rate: 0.2\nThe code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.\nIf you're having trouble solving problem 3, you can view the solution here.",
"# Change if you have memory restrictions\nbatch_size = 128\n\n# TODO: Find the best parameters for each configuration\nepochs = 3\nlearning_rate = 0.1\n\n### DON'T MODIFY ANYTHING BELOW ###\n# Gradient Descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss) \n\n# The accuracy measured against the validation set\nvalidation_accuracy = 0.0\n\n# Measurements use for graphing loss and accuracy\nlog_batch_step = 50\nbatches = []\nloss_batch = []\ntrain_acc_batch = []\nvalid_acc_batch = []\n\nwith tf.Session() as session:\n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer and get loss\n _, l = session.run(\n [optimizer, loss],\n feed_dict={features: batch_features, labels: batch_labels})\n\n # Log every 50 batches\n if not batch_i % log_batch_step:\n # Calculate Training and Validation accuracy\n training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\n # Log batches\n previous_batch = batches[-1] if batches else 0\n batches.append(log_batch_step + previous_batch)\n loss_batch.append(l)\n train_acc_batch.append(training_accuracy)\n valid_acc_batch.append(validation_accuracy)\n\n # Check accuracy against Validation data\n validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)\n\nloss_plot = plt.subplot(211)\nloss_plot.set_title('Loss')\nloss_plot.plot(batches, loss_batch, 'g')\nloss_plot.set_xlim([batches[0], batches[-1]])\nacc_plot = 
plt.subplot(212)\nacc_plot.set_title('Accuracy')\nacc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')\nacc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')\nacc_plot.set_ylim([0, 1.0])\nacc_plot.set_xlim([batches[0], batches[-1]])\nacc_plot.legend(loc=4)\nplt.tight_layout()\nplt.show()\n\nprint('Validation accuracy at {}'.format(validation_accuracy))",
"Test\nYou're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.",
"### DON'T MODIFY ANYTHING BELOW ###\n# The accuracy measured against the test set\ntest_accuracy = 0.0\n\nwith tf.Session() as session:\n \n session.run(init)\n batch_count = int(math.ceil(len(train_features)/batch_size))\n\n for epoch_i in range(epochs):\n \n # Progress bar\n batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')\n \n # The training cycle\n for batch_i in batches_pbar:\n # Get a batch of training features and labels\n batch_start = batch_i*batch_size\n batch_features = train_features[batch_start:batch_start + batch_size]\n batch_labels = train_labels[batch_start:batch_start + batch_size]\n\n # Run optimizer\n _ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})\n\n # Check accuracy against Test data\n test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)\n\n\nassert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)\nprint('Nice Job! Test Accuracy is {}'.format(test_accuracy))",
"Multiple layers\nGood job! You built a one layer TensorFlow network! However, you might want to build more than one layer. This is deep learning after all! In the next section, you will start to satisfy your need for more layers."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/inm/cmip6/models/sandbox-2/landice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Landice\nMIP Era: CMIP6\nInstitute: INM\nSource ID: SANDBOX-2\nTopic: Landice\nSub-Topics: Glaciers, Ice. \nProperties: 30 (21 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:05\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'inm', 'sandbox-2', 'landice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Grid\n4. Glaciers\n5. Ice\n6. Ice --> Mass Balance\n7. Ice --> Mass Balance --> Basal\n8. Ice --> Mass Balance --> Frontal\n9. Ice --> Dynamics \n1. Key Properties\nLand ice key properties\n1.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Ice Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify how ice albedo is modelled",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.ice_albedo') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"function of ice age\" \n# \"function of ice density\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Atmospheric Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the atmosphere and ice (e.g. orography, ice mass)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Oceanic Coupling Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhich variables are passed between the ocean and ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich variables are prognostically calculated in the ice model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice velocity\" \n# \"ice thickness\" \n# \"ice temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of land ice code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Grid\nLand ice grid\n3.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs an adaptive grid being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Base Resolution\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThe base resolution (in metres), before any adaption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.base_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Resolution Limit\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf an adaptive grid is being used, what is the limit of the resolution (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.resolution_limit') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Projection\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThe projection of the land ice grid (e.g. albers_equal_area)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.grid.projection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Glaciers\nLand ice glaciers\n4.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of glaciers in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of glaciers, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Dynamic Areal Extent\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes the model include a dynamic glacial extent?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5. Ice\nIce sheet and ice shelf\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the ice sheet and ice shelf in the land ice scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Grounding Line Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.grounding_line_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grounding line prescribed\" \n# \"flux prescribed (Schoof)\" \n# \"fixed grid size\" \n# \"moving grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.3. Ice Sheet\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice sheets simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_sheet') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.4. Ice Shelf\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre ice shelves simulated?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.ice_shelf') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Ice --> Mass Balance\nDescription of the surface mass balance treatment\n6.1. Surface Mass Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Ice --> Mass Balance --> Basal\nDescription of basal melting\n7.1. Bedrock\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over bedrock",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Ocean\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of basal melting over the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Ice --> Mass Balance --> Frontal\nDescription of calving/melting from the ice shelf front\n8.1. Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of calving from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Melting\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the implementation of melting from the front of the ice shelf",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Ice --> Dynamics\n**\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of ice sheet and ice shelf dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Approximation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nApproximation type used in modelling ice dynamics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.approximation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SIA\" \n# \"SAA\" \n# \"full stokes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Adaptive Timestep\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there an adaptive time scheme for the ice scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.4. Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.landice.ice.dynamics.timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jeancochrane/learning
|
python-machine-learning/code/ch03solutions.ipynb
|
mit
|
[
"Exercises for Chapter 3\n\nMost exercises below are adapted from An Introduction to Statistical Learning.\nSupport Vector Machines\nThe following questions test your ability to implement SVM classifiers and reason about their effectiveness.\n(a) Generate a simulated two-class data set with 100 observations and two features in which there is a visible but nonlinear separation between the two classes.",
"% matplotlib inline\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n\n# Generate observations on the interval [0, 2) \nx1 = np.random.uniform(low=0.0, high=2.0, size=100)\nx2 = np.random.uniform(low=0.0, high=2.0, size=100)\nX = np.matrix([x1, x2]).T\n\n# Assign class labels based on the decision surface 2x1^3 + 3x2 = 5\ny = np.where(((2 * (x1 ** 3)) + (3 * x2) >= 5), 1, -1)\n\n# Plot the decision boundary\ndec_x = np.arange(0, 2, 0.1)\ndec_y = (5 - (2 *(dec_x ** 3))) / 3\nplt.plot(dec_x, dec_y)\n\n# Plot the samples\nmarkers = ('x', 'o')\nfor idx, cl in enumerate(np.unique(y)):\n plt.scatter(X[y == cl, 0], X[y == cl, 1],\n marker=markers[idx], label=cl)\n\nplt.xlim(0.0, 2.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 2.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.show()\nplt.close()",
"(b) Show that in this setting, a support vector machine with a polynomial kernel (with degree greater than 1) or a radial kernel will outperform a support vector classifier on the training data.",
"from sklearn import svm\n\n# Instantiate the three classifiers\nlinear = svm.SVC(kernel='linear')\nradial = svm.SVC(kernel='rbf')\npoly_2 = svm.SVC(kernel='poly', degree=2)\npoly_3 = svm.SVC(kernel='poly', degree=3)\nclassifiers = [linear, radial, poly_2, poly_3]\nnames = ['linear', 'radial', 'degree 2 polynomial', 'degree 3 polynomial']\n\n# Fit classifiers to the training data and calculate accuracy\nfor name, classifier in zip(names, classifiers):\n classifier.fit(X, y)\n score = classifier.score(X, y)\n print('''The {name} classifier has a mean accuracy of {score} \\\non the training data.'''.format(name=name, score=str(int(score*100)) + '%'))\n\nprint()\nprint('''Hence, the polynomial and radial kernels out-perform the linear classifier.''')",
"(c) Generate 1000 test observations through the same method that you used in (a).",
"# Generate observations on the interval [0, 2) \nx1_test = np.random.uniform(low=0.0, high=2.0, size=1000)\nx2_test = np.random.uniform(low=0.0, high=2.0, size=1000)\nX_test = np.matrix([x1_test, x2_test]).T\n\n# Assign class labels based on the decision surface 2x1^3 + 3x2 = 5\ny_test = np.where(((2 * (x1_test ** 3)) + (3 * x2_test) >= 5), 1, -1)",
"(d) Which technique performs best on the test data? Make plots and report training and test error rates in order to back up your assertions.",
"for name, classifier in zip(names, classifiers):\n # Predict training and test data\n pred_train = classifier.predict(X)\n pred_test = classifier.predict(X_test)\n \n # Calculate error rates for training and test data\n err_train = np.sum(np.where(pred_train != y, 1, 0)) / len(pred_train)\n err_test = np.sum(np.where(pred_test != y_test, 1, 0)) / len(pred_test)\n\n # Plot results\n plt.plot(dec_x, dec_y, label='actual boundary')\n markers = ('x', 'o')\n for idx, cl in enumerate(np.unique(y_test)):\n plt.scatter(X_test[pred_test == cl, 0], X_test[pred_test == cl, 1],\n marker=markers[idx], label=cl)\n\n plt.xlim(0.0, 2.0)\n plt.xlabel('X1')\n plt.ylim(0.0, 2.0)\n plt.ylabel('X2')\n plt.legend(loc='upper left')\n plt.title('SVM with a {name} classifier'.format(name=name))\n plt.show()\n plt.close()\n \n # Print results\n print('''Training error for the {name} classifier: {err}'''.format(\n name=name, err=str(err_train*100) + '%'))\n print('''Testing error for the {name} classifier: {err}'''.format(\n name=name, err=str(err_test*100) + '%'))\n \n ",
"As we can see in the error reports and plots above, the radial and polynomial classifiers perform equally well on the test data within 0.5% error. The polynomial classifier with degree 3 performs the best in most iterations of this experiment, as we would expect given the actual underlying decision boundary.\n\nLogistic regression\nWe have seen that we can fit an SVM with a nonlinear kernel in order to perform classification using a non-linear decision boundary. We will now see that we can also obtain a non-linear decision boundary by performing logistic regression using non-linear transformations of the features.\n(a) Generate a data set with n = 500 and p = 2, such that the observations belong to two classes with a quadratic decision boundary between them.",
"# Generate random training data\nx1 = np.random.uniform(low=0.0, high=10.0, size=500)\nx2 = np.random.uniform(low=0.0, high=10.0, size=500)\nX = np.matrix([x1, x2]).T\ny = np.where(0.15 * (x1 ** 2) - x2 > 0, 1, 0)",
"(b) Plot the observations, colored according to their class labels. Your plot should display X1 on the x-axis, and X2 on the y-axis.",
"import math\n\n# Plot the decision boundary\ndec_x = np.arange(0, 10, 0.1)\ndec_y = 0.15 * (dec_x ** 2)\nplt.plot(dec_x, dec_y)\n\n# Plot the samples\nmarkers = ('x', 'o')\nfor idx, cl in enumerate(np.unique(y)):\n plt.scatter(X[y == cl, 0], X[y == cl, 1],\n marker=markers[idx], label=cl)\n \nplt.xlim(0.0, 10.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 10.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.show()\nplt.close()",
"(c) Fit a logistic regression model to the data, using X1 and X2 as predictors.",
"from sklearn.linear_model import LogisticRegression\n\nlogistic = LogisticRegression()\nlogistic.fit(X, y)",
"(d) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be linear.",
"y_pred = logistic.predict(X)\n\n# Plot the decision boundary\ndec_x = np.arange(0, 10, 0.1)\ndec_y = 0.15 * (dec_x ** 2)\nplt.plot(dec_x, dec_y)\n\n# Plot the samples\nmarkers = ('x', 'o')\nfor idx, cl in enumerate(np.unique(y_pred)):\n plt.scatter(X[y_pred == cl, 0], X[y_pred == cl, 1],\n marker=markers[idx], label=cl)\n \nplt.xlim(0.0, 10.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 10.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.show()\nplt.close()",
"(e) Now fit a logistic regression model to the data using non-linear functions of X1 and X2 as predictors (e.g. $X_1^2$, $X_1 \\times X_2$, $\\log(X_2)$, and so forth).",
"logistic_sqd = LogisticRegression()\nX_sqd = np.matrix([0.15 * (x1**2), x2]).T\n\nlogistic_sqd.fit(X_sqd, y)",
"(f) Apply this model to the training data in order to obtain a predicted class label for each training observation. Plot the observations, colored according to the predicted class labels. The decision boundary should be obviously non-linear. If it is not, then repeat (a)-(e) until you come up with an example in which the predicted class labels are obviously non-linear.",
"y_pred = logistic_sqd.predict(X_sqd)\nplt.plot(dec_x, dec_y)\nfor idx, cl in enumerate(np.unique(y_pred)):\n plt.scatter(X[y_pred == cl, 0], X[y_pred == cl, 1],\n marker=markers[idx], label=cl)\n\nplt.xlim(0.0, 10.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 10.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.title('Logistic regression with nonlinear features')\nplt.show()\nplt.close()",
"(g) Fit a support vector classifier to the data with X1 and X2 as predictors. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.",
"svc = svm.SVC(kernel='linear')\nsvc.fit(X, y)\n\ny_pred = svc.predict(X)\nplt.plot(dec_x, dec_y)\nfor idx, cl in enumerate(np.unique(y_pred)):\n plt.scatter(X[y_pred == cl, 0], X[y_pred == cl, 1],\n marker=markers[idx], label=cl)\n\nplt.xlim(0.0, 10.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 10.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.title('Linear SVM')\nplt.show()\nplt.close()",
"(h) Fit an SVM using a non-linear kernel to the data. Obtain a class prediction for each training observation. Plot the observations, colored according to the predicted class labels.",
"svc = svm.SVC(kernel='rbf')\nsvc.fit(X, y)\n\ny_pred = svc.predict(X)\nplt.plot(dec_x, dec_y)\nfor idx, cl in enumerate(np.unique(y_pred)):\n plt.scatter(X[y_pred == cl, 0], X[y_pred == cl, 1],\n marker=markers[idx], label=cl)\n\nplt.xlim(0.0, 10.0)\nplt.xlabel('X1')\nplt.ylim(0.0, 10.0)\nplt.ylabel('X2')\nplt.legend(loc='upper left')\nplt.title('Kernel SVM')\nplt.show()\nplt.close()",
"(i) Comment on your results.\nSeems like logistic regression with nonlinear training data performs just as well, if not better, than the kernelized SVM! This makes some sense – the nonlinear logistic regression classifier, after all, is in this case fit to transformed training data that exactly models the relationship underlying the data ($x_{2} = 0.15 \\cdot x_{1}^{2}$). If we didn't know the underlying relationship and had to guess at the degree and shape of the polynomial, accurately transforming the training data would probably be much more difficult.\n\nProve algebraically that the logistic and logit representations of the logistic regression model are equivalent. More specifically, prove that:\n$$ p(X) = \\frac{1}{1 + e^{-z}} \\quad \\Leftrightarrow \\quad log(\\frac{p(X)}{1-p(X)}) = z $$\nGiven:\n$$ p(X) = \\frac{1}{1 + e^{-z}} $$\nIt follows by algebra that:\n$$\\begin{gathered}\n(1 + e^{-z}) \\cdot p(X) = 1 \\\n\\\np(X) + p(X) \\cdot e^{-z} = 1 \\\n\\\np(X) \\cdot e^{-z} = 1 - p(X) \\\n\\\ne^{-z} = \\frac{1 - p(X)}{p(X)} \\\n\\\ne^{z} = \\frac{p(X)}{1 - p(X)} \\\n\\\nln(e^{z}) = ln(\\frac{p(X)}{1 - p(X)}) \\\n\\\nz = ln(\\frac{p(X)}{1 - p(X)}) \\quad \\square\n\\end{gathered}$$\n\nComprehension questions about odds:\n(a) On average, what fraction of people with an odds of 0.37 of\ndefaulting on their credit card payment will in fact default?\nIf the odds are 0.37 that a person will default on their credit card payment, then we can say that:\n$$\\begin{gathered}\n\\frac{p(default)}{1 - p(default)} = 0.37 \\\n\\\n0.37 - 0.37 \\cdot p(default) = p(default) \\\n\\\n0.37 = 1.37 \\cdot p(default) \\\n\\\np(default) = \\frac{0.37}{1.37} = 27\\% \\\n\\end{gathered}$$\n(b) Suppose that an individual has a 16% chance of defaulting on their credit card payment. 
What are the odds that they will default?\nBy the definition of odds:\n$$\\begin{gathered}\nodds = \\frac{p(default)}{1 - p(default)} \\\n\\\nodds = \\frac{0.16}{0.84} = 0.19 \\\n\\end{gathered}$$\n\nSuppose we collect data for a group of students in a statistics class with variables $x_{1}$ = hours studied, $x_{2}$ = undergrad GPA, and $y$ = receive an A. We fit a logistic regression and produce estimated coefficients $w_{0} = -6$, $w_{1} = 0.05$, and $w_{2} = 1$.\n(a) Estimate the probability that a student who studies for 40 hours and has an undergrad GPA of 3.5 gets an A in the class.\nThe probability of a class label given data, $p(y \\mid X; w)$, under the logistic regression model is given by:\n$$ p(X) = \\frac{1}{1 + e^{-z}} $$\nIn this case, $z = w_{0} + w_{1}x_{1} + w_{2}x_{2} = -6 + 0.05 \\cdot (40) + 3.5 = -0.5$, so:\n$$ p(X) = \\frac{1}{1 + e^{0.5}} = 0.378 $$\n(b) How many hours would the student in part (a) need to study to have a 50% chance of getting an A in the class?\nWe can interpret this question as a univariate equation in which we must solve for $x_{1}$:\n$$\\begin{gathered}\nz = w_{0} + w_{1}x_{1} + w_{2}x_{2} = 0.05x_{1} - 2.5 \\\n\\\n0.5 = \\frac{1}{1 + e^{2.5 - 0.05x_{1}}} \\\n\\\n0.5 + 0.5e^{2.5 - 0.05x_{1}} = 1 \\\n\\\ne^{2.5 - 0.05x_{1}} = 1 \\\n\\\n2.5 - 0.05x_{1} = ln(1) = 0 \\\n\\\n0.05x_{1} = 2.5 \\\n\\\nx_{1} = 50 \\\n\\end{gathered}$$\nSo, the student would have to study for 50 hours to have a 50% chance of getting an A.\n\nK-Nearest Neighbors\nAn exercise to help investigate the curse of dimensionality in nearest-neighbor algorithms:\n(a) Suppose that we have a set of observations, each with measurements on $p = 1$ feature, $X$. We assume that $X$ is uniformly distributed on $[0,1]$. Each observation $X^{(i)}$ is associated with a response value $y^{(i)}$. Suppose that we wish to predict a test observation’s response using only observations that are within 10% of the range of $X$ closest to that test observation. 
For instance, in order to predict the response for a test observation with $X = 0.6$, we will use observations in the range $[0.55, 0.65]$. On average, what fraction of the available observations will we use to make the prediction?\nAnswer: Since $X$ is uniformly distributed on $[0, 1]$, observations that are within 10% of the range of any given $X^{(i)}$ represent on average 10% of the available observations.\n(b) Now suppose that we have a set of observations, each with measurements on $p = 2$ features, $X_{1}$ and $X_{2}$. We assume that ($X_{1}$, $X_{2}$) are uniformly distributed on $[0,1] × [0,1]$. We wish to predict a test observation’s response using only observations that are within 10% of the range of $X_{1}$ and within 10% of the range of $X_{2}$ closest to that test observation. For instance, in order to predict the response for a test observation with $X_{1} = 0.6$ and $X_{2} = 0.35$, we will use observations in the range $[0.55, 0.65]$ for $X_{1}$ and in the range $[0.3, 0.4]$ for $X_{2}$. On average, what fraction of the available observations will we use to make the prediction?\nAnswer: Since $X_{1}$ and $X_{2}$ are both uniformly distributed on $[0, 1]$ we can apply the same logic as in question (a) and reason that on average, we will consider 10% of the available observations to make our prediction for each variable. But since we must perform this narrowing for both variables, the fractions are multiplicative: in the case of two variables combined, we actually consider $10\\% \\cdot 10\\% = 1\\%$ of the available observations.\nTo see why the effect of considering two variables would be multiplicative, think of the relevant range of each variable as its own random variable (i.e. it maps an observation to a range of values within 10% of that observation). We'll define the range of $X_{1}$ to be the random variable $A$, and the range of $X_{2}$ to be the random variable $B$. 
Then, the set of observations that we care about can be represented by the intersection of our events, $A \\cap B$. Hence, since the events are independent (each feature is sampled independently by assumption):\n$$ P(A \\cap B) = P(A) \\cdot P(B) $$\nso the fractions are multiplicative.\n(c) Now suppose that we have a set of observations on $p = 100$ features. Again the observations are uniformly distributed on each feature, and again each feature ranges in value from 0 to 1. We wish to predict a test observation’s response using observations within the 10% of each feature’s range that is closest to that test observation. What fraction of the available observations will we use to make the prediction?\nAnswer: Let $S$ be the set corresponding to our relevant observations, and $A_{p}$ be the event corresponding to the range of relevant observations for the feature $X_{p}$. It follows from the reasoning in (b) that:\n$$\\begin{gathered}\nS = A_{1} \\; \\cap \\; A_{2} \\; \\cap \\; ... \\; \\cap \\; A_{p} \\\n\\\nS = \\bigcap_{p} A_{p} \\\n\\end{gathered}$$\nOr in this case:\n$$ S = A_{1} \\; \\cap \\; A_{2} \\; \\cap \\; ... \\; \\cap \\; A_{100} $$\nAnd since all events $A_{p}$ represent the same range (10%), we can easily determine the total fraction of relevant observations, $S_{frac}$:\n$$ S_{frac} = A_{frac}^{p} = 0.1^{100} = 1 \\cdot 10^{-100} $$\n(d) Using your answers to parts (a)–(c), argue that a drawback of KNN when $p$ is large is that there are very few training observations “near” any given test observation.\nAnswer: By the reasoning in part (c), we can see that the set of relevant training observations shrinks exponentially according to $p$ (at least in this particular case, where the random variables that represent our dataset's features are independent). 
Hence, as $p$ gets large, the probability that we'll have any neighbors to consider approaches 0!\n(e) Now suppose that we wish to make a prediction for a test observation by creating a $p$-dimensional hypercube centered around the test observation that contains, on average, 10% of the training observations. For $p = 1$, $p = 2$, and $p = 100$, what is the length of each side of the hypercube? Comment on your answer.\nNote: A hypercube is a generalization of a cube to an arbitrary number of dimensions. When $p = 1$, a hypercube is simply a line segment; when $p = 2$ it is a square; and when $p = 100$ it is a 100-dimensional cube.\nAnswer: We can interpret the statement that the hypercube must \"contain, on average, 10% of the training observations\" to mean that the volume of the hypercube should be 10% the volume of the dataset, or:\n$$ Volume_{hypercube} = \\frac{Volume_{sample space}}{10} $$\nSince the volume of a hypercube is $s^{p}$, where $s$ is the length of a side and $p$ is the number of dimensions, it follows that:\n$$ s^{p} = \\frac{S^{p}}{10} $$\nWhere $S$ is the length of a dimension of the sample space. Further, since each dimension has length 1 (since we're sampling on the interval $[0, 1]$):\n$$\\begin{gathered}\ns^{p} = \\frac{1^{p}}{10} = \\frac{1}{10} \\\n\\\np \\cdot log(s) = log(\\frac{1}{10}) = -1 \\\n\\\nlog(s) = -\\frac{1}{p} \\\n\\\ns = 10^{-\\frac{1}{p}} = \\frac{1}{\\sqrt[p]{10}} \\\n\\end{gathered}$$\n\nDecision trees\nConsider the Gini index, classification error, and entropy measures of impurity in a simple classification setting with two classes. Create a single plot that displays each of these quantities as a function of $p(i \\mid t)$. The x-axis should display $p(i \\mid t)$, ranging from 0 to 1, and the y-axis should display the value of the Gini index, classification error, and entropy.",
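Before moving on to the decision-tree exercises, the arithmetic in the logistic-regression and nearest-neighbor answers above is easy to spot-check numerically. A dependency-free sketch (plain Python, standard library only):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Odds <-> probability: odds = p / (1 - p)  =>  p = odds / (1 + odds)
print(round(0.37 / 1.37, 2))  # 0.27 -> a 27% default rate
print(round(0.16 / 0.84, 2))  # 0.19 -> the odds of default

# Logistic regression with w0 = -6, w1 = 0.05, w2 = 1
print(round(sigmoid(-6 + 0.05 * 40 + 1 * 3.5), 3))  # 0.378
# A 50% chance corresponds to z = 0  =>  hours = (6 - 3.5) / 0.05
print(round((6 - 3.5) / 0.05, 6))                   # 50.0

# Curse of dimensionality: fraction of points "near" a query, and the
# hypercube side length s = 0.1**(1/p) that covers 10% of the unit volume
for p in (1, 2, 100):
    print(p, f"{0.1 ** p:.1e}", round(0.1 ** (1 / p), 3))
```

Note the last line for $p = 100$: each side of the "local" hypercube has to span about 97.7% of each feature's range, which is hardly local at all.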
"def gini(px):\n return (px * (1 - px)) + ((1 - px) * (1 - (1 - px)))\n\ndef entropy(px):\n return -(px * np.log2(px)) - ((1 - px) * np.log2(1 - px))\n\ndef class_error(px):\n return 1 - max([px, 1 - px])\n\nx = np.arange(0, 1, 0.01)\nI_G = [gini(px) for px in x]\nI_H = [entropy(px) if px != 0 else None for px in x]\nI_E = [class_error(px) for px in x]\n\nplt.plot(x, I_G, label='Gini')\nplt.plot(x, I_H, label='Entropy')\nplt.plot(x, I_E, label='Classification error')\nplt.legend(loc='upper center', ncol=3, bbox_to_anchor=(0.5, 1.15))\nplt.xlabel('Proportion of samples of a given class')\nplt.ylabel('Impurity')\nplt.show()\nplt.close()",
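As a quick sanity check on the impurity measures plotted above: at $p = 0.5$ a two-class node is maximally impure, and a pure node has zero impurity. A standalone sketch mirroring the same definitions with plain `math` instead of NumPy:

```python
import math

def gini(p):
    # Two-class Gini index, equivalent to 2 * p * (1 - p)
    return p * (1 - p) + (1 - p) * p

def entropy(p):
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def class_error(p):
    return 1 - max(p, 1 - p)

# Maximal impurity at p = 0.5
print(gini(0.5), entropy(0.5), class_error(0.5))  # 0.5 1.0 0.5
# A pure node (p = 1) has zero impurity (entropy needs the p -> 1 limit)
print(gini(1.0), class_error(1.0))                # 0.0 0.0
```

Entropy peaks at 1 bit while Gini and classification error peak at 0.5, which matches the relative heights of the three curves in the plot.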
"This problem tests your ability to train decision trees and reason about their effectiveness. It uses the built-in breast cancer dataset that ships with scikit-learn. You can import this dataset through the module method sklearn.datasets.load_breast_cancer.\n(a) Import the breast cancer dataset from scikit-learn. Then, create a training set containing a random sample of 80% of the observations, and a test set containing the remaining 20%. (The dataset has 569 observations in total.)",
"from sklearn.datasets import load_breast_cancer\nfrom sklearn.model_selection import train_test_split\n\nbreast_cancer = load_breast_cancer()\ndata, target = breast_cancer.data, breast_cancer.target\n\nX_train, X_test, y_train, y_test = train_test_split(\n data, target, test_size=0.2, random_state=0)\n\nprint(X_train.shape, y_train.shape)\nprint(X_test.shape, y_test.shape)",
"(b) Fit a decision tree to the training data, with the diagnosis (malignant or benign) as the response and all other variables as predictors. Produce summary statistics about the tree and describe the results obtained. What is the training error rate?",
"from sklearn.tree import DecisionTreeClassifier\nfrom sklearn.metrics import accuracy_score\n\ndtree = DecisionTreeClassifier()\ndtree.fit(X_train, y_train)\nscore = accuracy_score(dtree.predict(X_train), y_train)\n\nprint('The accuracy of the decision tree is {score} on the training set.'.format(score=score))",
"(c) Type in the name of the tree object in order to get a detailed text output.",
"print(dtree)",
"(d) Predict the response on the test data, and produce a confusion matrix comparing the test labels to the predicted test labels. What is the test error rate?",
"from sklearn.metrics import confusion_matrix\n\npred = dtree.predict(X_test)\n# confusion_matrix and accuracy_score expect (y_true, y_pred)\nconfusion = confusion_matrix(y_test, pred)\naccuracy = round(accuracy_score(y_test, pred), 3)\n\nprint('The classifier has an accuracy of {score} on the test data.'.format(score=accuracy))\nprint('Confusion matrix:')\nprint()\nprint(confusion)",
"(e) Apply a cross-validation function to the training set in order to determine the optimal tree size.",
"# Note: this scores each depth on the held-out test set; k-fold\n# cross-validation on the training set (e.g. sklearn's cross_val_score)\n# would be the stricter approach.\nscores = []\nfor i in range(1, 10):\n cv_tree = DecisionTreeClassifier(max_depth=i)\n cv_tree.fit(X_train, y_train)\n scores.append(accuracy_score(cv_tree.predict(X_test), y_test))\n\nhighest_score = (0, 0)\nfor index, score in enumerate(scores):\n if score > highest_score[1]:\n highest_score = (index + 1, score)\n\noutput = 'The highest score, {score}, occurs in a tree with max depth {size}.'\n\nprint(output.format(score=round(highest_score[1], 3), size=highest_score[0]))",
"(f) Produce a plot with tree size on the x-axis and cross-validated classification error rate on the y-axis.",
"plt.plot(np.arange(1,10), scores)\nplt.xlabel('Tree size')\nplt.ylabel('Accuracy score')\nplt.title('How big should our decision tree be?')\nplt.show()\nplt.close()",
"(g) Which tree size corresponds to the lowest cross-validated classification error rate?\n Answer: Trees of size 2 and 3 perform equally well, with the lowest cross-validated classification error.\n(h) Produce a pruned tree corresponding to the optimal tree size obtained using cross-validation. If cross-validation does not lead to selection of a pruned tree, then create a pruned tree with five terminal nodes.",
"optimal_tree = DecisionTreeClassifier(max_depth=3)\noptimal_tree.fit(X_train, y_train)",
"(i) Compare the training error rates between the pruned and un-pruned trees. Which is higher?",
"prune_acc_train = round(accuracy_score(optimal_tree.predict(X_train), y_train), 3)\nnoprune_acc_train = round(accuracy_score(dtree.predict(X_train), y_train), 3)\n\nprint('The pruned tree has an accuracy score of {score} on training data.'.format(score=prune_acc_train))\nprint('The un-pruned tree has an accuracy score of {score} on training data.'.format(score=noprune_acc_train))",
"(j) Compare the test error rates between the pruned and unpruned trees. Which is higher?",
"prune_acc_test = round(accuracy_score(optimal_tree.predict(X_test), y_test), 3)\nnoprune_acc_test = round(accuracy_score(dtree.predict(X_test), y_test), 3)\n\nprint('The pruned tree has an accuracy score of {score} on test data.'.format(score=prune_acc_test))\nprint('The un-pruned tree has an accuracy score of {score} on test data.'.format(score=noprune_acc_test))",
"(k) Comment on your results.\nAnswer: Decision trees are very prone to overfitting! Our non-pruned model performs perfectly on training data (100% accuracy), but doesn't generalize very well to new samples (~90% accuracy). Our pruned model, on the other hand, does worse on the training data, but far better on test data (around 97% accuracy in both cases). \nTurns out, we don't always want to perfectly maximize information gain on the training data when building a decision tree. Instead, it pays to cross-validate and prune the tree to a size that generalizes better. (Or better yet, to use a random forest!)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rebeccabernie/Iris-Jupyter
|
IrisData.ipynb
|
mit
|
[
"Iris Data Set\nThis problem sheet relates to the Iris data set and uses jupyter, numpy and pyplot. Problems are labelled 1 to 10. \n1. Get and load the Iris data.",
"import numpy as np\n# Adapted from https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.genfromtxt.html\n\nfilename = 'data.csv'\nsLen, sWid, pLen, pWid = np.genfromtxt(filename, delimiter=',', usecols=(0,1,2,3), unpack=True, dtype=float)\nspec = np.genfromtxt(filename, delimiter=',', usecols=(4), unpack=True, dtype=str)\n\nfor i in range(10):\n print('{0:.1f} {1:.1f} {2:.1f} {3:.1f} {4:s}'.format(sLen[i], sWid[i], pLen[i], pWid[i], spec[i]))",
"2. Write a note about the data set.\nThe Iris data set was introduced by Ronald Fisher in 1936, using measurements collected by the botanist Edgar Anderson, and contains 50 samples from each of the three species of Iris - Iris setosa, Iris virginica and Iris versicolor. The structure of the set is as follows: sepal length, sepal width, petal length, petal width, species classification. A raw copy of the data set can be found here. \n3. Create a simple plot.\nUse pyplot to create a scatter plot of sepal length on the x-axis versus sepal width on the y-axis.",
"import matplotlib.pyplot as pl\n\npl.rcParams['figure.figsize'] = (14, 6) # Adapted from gradient descent notebook: https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/gradient-descent.ipynb\npl.scatter(sLen, sWid, marker='.')\n\npl.title('Scatter Diagram of Sepal Width vs Length', fontsize=14)\npl.xlabel('Sepal Length')\npl.ylabel('Sepal Width')\npl.show()",
"4. Create a more complex plot.\nRecreate the above plot, marking the data points in different colours depending on species. Add a legend to the plot to show what species relates to what colour.",
"import matplotlib.patches as mpatches\n\npl.rcParams['figure.figsize'] = (14,6)\n# Colour related to type adapted from https://stackoverflow.com/questions/27318906/python-scatter-plot-with-colors-corresponding-to-strings\ncolours = {'Iris-setosa': 'red', 'Iris-versicolor': 'green', 'Iris-virginica': 'blue'}\npl.scatter(sLen, sWid, c=[colours[i] for i in spec], label=[colours[i] for i in colours], marker=\".\")\n\npl.title('Scatter Diagram of Sepal Width vs Length', fontsize=14)\npl.xlabel('Sepal Length')\npl.ylabel('Sepal Width')\n\n# Custom handles adapted from https://stackoverflow.com/a/44164349/7232648\na = 'red'\nb = 'green'\nc = 'blue'\nhandles = [mpatches.Patch(color=colour, label=label) for label, colour in [('Iris-setosa', a), ('Iris-versicolor', b), ('Iris-virginica', c)]]\npl.legend(handles=handles, loc=2, frameon=True)\n\n#pl.grid()\npl.show()",
"5. Use Seaborn.\nUse Seaborn to create a scatterplot matrix of all five variables (sepal length, sepal width, petal length, petal width, species classification).\nNote: needs work, dataframe working but sb plot isn't. Will do other questions and come back to this if there's time.",
"# Seaborn scatterplot adapted from http://seaborn.pydata.org/examples/scatterplot_matrix.html\nimport seaborn as sb\nsb.set(style=\"ticks\")\n\n# Load the data - Iris included in Seaborn's github repo for csv files here: https://github.com/mwaskom/seaborn-data\ndata = sb.load_dataset(\"iris\")\n\n# Plot data, base the colour of points on species\nsb.pairplot(data, hue=\"species\")\n\npl.show()",
"6. Fit a line.\nFit a straight line to the petal length and width variables for the whole data set. Plot the data points in a scatter plot, including the best fit line.",
"# Conversions adapted from https://stackoverflow.com/a/26440523/7232648\n\n# Adapted from https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb\nw = pLen\nd = pWid\n\nw_avg = np.mean(w)\nd_avg = np.mean(d)\n\nw_zero = w - w_avg\nd_zero = d - d_avg\n\nm = np.sum(w_zero * d_zero) / np.sum(w_zero * w_zero)\nc = d_avg - m * w_avg\n\n# Graph labels etc\npl.rcParams['figure.figsize'] = (14,6)\npl.title('Petal Measurements', fontsize=14)\npl.xlabel('Petal Length')\npl.ylabel('Petal Width')\n\npl.scatter(w, d, marker='.', label='Data Set')\npl.plot(w, m * w + c, 'r', label='Best Fit Line')\n\npl.legend(loc=2, frameon=True)\npl.show()",
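The closed-form least-squares fit used above (mean-centre, then $m = \sum \bar{x}\bar{y} / \sum \bar{x}^2$, $c = \bar{y} - m\bar{x}$) can be sanity-checked without NumPy on data generated from a known line, which the fit should recover exactly:

```python
def fit_line(xs, ys):
    # Closed-form simple linear regression:
    # m = sum((x - x̄)(y - ȳ)) / sum((x - x̄)^2),  c = ȳ - m·x̄
    x_avg = sum(xs) / len(xs)
    y_avg = sum(ys) / len(ys)
    num = sum((x - x_avg) * (y - y_avg) for x, y in zip(xs, ys))
    den = sum((x - x_avg) ** 2 for x in xs)
    m = num / den
    return m, y_avg - m * x_avg

# Points generated from y = 3x + 2 should be recovered exactly
xs = [0.0, 1.0, 2.0, 3.0]
ys = [3 * x + 2 for x in xs]
m, c = fit_line(xs, ys)
print(round(m, 6), round(c, 6))  # 3.0 2.0
```

The same function applied to the petal length/width arrays would reproduce the `m` and `c` computed with NumPy above.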
"7. Calculate R-squared.\nThe R-squared value estimates how much of the variation in the $y$ value (petal width) is explained by changes in the $x$ value (petal length), compared to all of the other factors affecting the $y$ value.",
"# Adapted from https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/simple-linear-regression.ipynb\n\nrsq = 1.0 - (np.sum((d - m * w - c)**2)/np.sum((d - d_avg)**2))\nprint(\"R-squared: {0:.6f}\".format(rsq))",
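The formula above is $R^2 = 1 - SS_{res}/SS_{tot}$. Its two extremes are easy to verify on toy data: a perfect fit gives $R^2 = 1$, and a "model" that just predicts the mean gives $R^2 = 0$. A dependency-free sketch:

```python
def r_squared(xs, ys, m, c):
    # R^2 = 1 - SS_res / SS_tot for the line y = m*x + c
    y_avg = sum(ys) / len(ys)
    ss_res = sum((y - (m * x + c)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - y_avg) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0, 5.0, 8.0, 11.0]          # exactly y = 3x + 2; mean is 6.5
print(r_squared(xs, ys, 3.0, 2.0))  # 1.0 -> a perfect fit
print(r_squared(xs, ys, 0.0, 6.5))  # 0.0 -> predicting the mean explains nothing
```

Values between 0 and 1, like the one printed above for the petal data, sit between those two extremes.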
"8. Fit another line.\nUse numpy to fit a straight line to the petal length and width variables for the species Iris-setosa. Plot the data points in a scatter plot with the best fit line shown.",
"# Adding arrays as columns adapted from https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.column_stack.html\ndata = np.column_stack((sLen, sWid, pLen, pWid, spec))\n#for i in range(5):\n #print(data[i])\n\n# Setosa data -> 0 - 49 in data set. (Definitely better ways of doing this but works for now, will change if there's time)\nspLen, spWid = [], []\nfor index, row in enumerate(data):\n # Petal info contained in cols 2 & 3\n # For each row, append column 2 to spLen array and column 3 to spWid array\n spLen.append(float(row[2]))\n spWid.append(float(row[3]))\n\n if index == 49:\n break\n\n# Calculate best values for m and c\nm, c = np.polyfit(spLen, spWid, 1)\ny = m * np.array(spLen) + c # y = mx + c (converting the list to an array first)\n\n# Graph labels etc\npl.rcParams['figure.figsize'] = (16,8)\npl.title('Iris Setosa Petal Measurements', fontsize=14)\npl.xlabel('Petal Length')\npl.ylabel('Petal Width')\n\npl.scatter(spLen, spWid, label = 'Iris Setosa') # Plot the data points\npl.plot(spLen, y, 'r', label = 'Best Fit Line') # Plot the line\n\npl.legend(loc=2, frameon=True)\npl.show()\n\norM = m\norC = c",
"9. Calculate R-squared for the Setosa line.\nCalculate the r-squared of the best fitting line for the Setosa data, plotted above.",
"# Use the Setosa arrays and the polyfit coefficients from problem 8\n# (w, d and d_avg from problem 6 refer to the whole data set, not Setosa)\nw_s = np.array(spLen)\nd_s = np.array(spWid)\nd_s_avg = np.mean(d_s)\n\nrsq = 1.0 - (np.sum((d_s - m * w_s - c)**2)/np.sum((d_s - d_s_avg)**2))\nprint(\"R-squared: {0:.6f}\".format(rsq))",
"10. Use Gradient Descent.",
"w = np.array(spLen)\nd = np.array(spWid)\nprint(\"Original \\t\\tm: %20.16f c: %20.16f\" % (orM, orC))\n\n# Adapted from Gradient Descent worksheet - https://github.com/emerging-technologies/emerging-technologies.github.io/blob/master/notebooks/gradient-descent.ipynb\n# Partial derivatives with respect to m and c\ndef grad_m(x, y, m, c):\n return -2.0 * np.sum(x * (y - m * x - c))\n\ndef grad_c(x, y, m, c):\n return -2.0 * np.sum(y - m * x - c)\n\n# Set up variables\neta = 0.0001 # Learning rate (step size for each gradient descent update)\ngdm, gdc = 1.0, 1.0 # Initial guesses for GD m and c\nchange = True\n\nwhile change:\n mnew = gdm - eta * grad_m(w, d, gdm, gdc)\n cnew = gdc - eta * grad_c(w, d, gdm, gdc)\n if gdm == mnew and gdc == cnew:\n # Calculations no longer changing, stop the loop\n change = False\n else:\n gdm, gdc = mnew, cnew\n\n# - End adapted from Gradient Descent worksheet -\n\nprint(\"Gradient desc \\t\\tm: %20.16f c: %20.16f\" % (gdm, gdc))\nprint()\n\n# Graph labels etc\npl.rcParams['figure.figsize'] = (16,8)\npl.title('Iris Setosa Best Fit Line using Gradient Descent', fontsize=14)\npl.xlabel('Petal Length')\npl.ylabel('Petal Width')\n\ny = gdm * w + gdc # y = mx + c using the gradient descent coefficients\npl.scatter(spLen, spWid, label = 'Iris Setosa')\npl.plot(spLen, y, 'g', label='Best Fit Line using Gradient Descent')\npl.legend()\npl.show()",
"As we can see above, there is a very slight difference in best fit lines generated using polyfit and the gradient descent method. The difference is so small that if you were looking at these lines plotted on two graphs, they would look identical - see the graph in problem 8, which used polyfit to get the best fit line."
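The same gradient-descent procedure can be demonstrated without NumPy on toy data drawn exactly from a known line, where it should converge to the true slope and intercept (the data and variable names here are illustrative, not from the notebook):

```python
# Toy data from y = 2x + 1, so gradient descent should find m = 2, c = 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

m, c = 1.0, 1.0  # initial guesses
eta = 0.01       # learning rate (step size)
for _ in range(20000):
    # Partial derivatives of the sum-of-squares cost with respect to m and c
    grad_m = -2.0 * sum(x * (y - m * x - c) for x, y in zip(xs, ys))
    grad_c = -2.0 * sum(y - m * x - c for x, y in zip(xs, ys))
    m -= eta * grad_m
    c -= eta * grad_c

print(round(m, 4), round(c, 4))  # 2.0 1.0
```

With `eta` small enough for the iteration to be stable, the fixed number of iterations plays the same role as the notebook's "stop when the updates no longer change" loop.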
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/noaa-gfdl/cmip6/models/gfdl-esm2m/atmoschem.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: NOAA-GFDL\nSource ID: GFDL-ESM2M\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-20 15:02:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-esm2m', 'atmoschem')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Software Properties\n3. Key Properties --> Timestep Framework\n4. Key Properties --> Timestep Framework --> Split Operator Order\n5. Key Properties --> Tuning Applied\n6. Grid\n7. Grid --> Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --> Surface Emissions\n11. Emissions Concentrations --> Atmospheric Emissions\n12. Emissions Concentrations --> Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --> Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmospheric chemistry model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmospheric chemistry model code.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Chemistry Scheme Scope\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables Form\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.6. Number Of Tracers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"1.7. Family Approach\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"1.8. Coupling With Chemical Reactivity\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs turbulence in the atmospheric chemistry transport scheme coupled with chemical reactivity?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"2. Key Properties --> Software Properties\nSoftware properties of aerosol code\n2.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Split Operator Advection Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemical species advection (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Split Operator Physical Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for physics (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.4. Split Operator Chemistry Timestep\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTimestep for chemistry (in seconds).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.5. Split Operator Alternate Order\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\n?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.6. Integrated Timestep\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.7. Integrated Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the type of timestep scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"4. Key Properties --> Timestep Framework --> Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.2. Convection\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.3. Precipitation\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.4. Emissions\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.5. Deposition\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.6. Gas Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.9. Photo Chemistry\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4.10. Aerosols\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the atmopsheric chemistry grid",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Canonical Horizontal Resolution\nIs Required: FALSE Type: STRING Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.4. Number Of Vertical Levels\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7.5. Is Adaptive Grid\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview of transport implementation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Use Atmospheric Transport\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric cehmistry?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.3. Transport Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric chemistry emissions",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Emissions Concentrations --> Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Emissions Concentrations --> Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Method\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Prescribed Climatology Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Interactive Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.6. Other Emitted Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an "other method"",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Emissions Concentrations --> Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the lower boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Prescribed Upper Boundary\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of species prescribed at the upper boundary.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview gas phase atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Number Of Bimolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.4. Number Of Termolecular Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.7. Number Of Advected Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.8. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"13.9. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.10. Wet Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"13.11. Wet Oxidation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry startospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview stratospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Gas Phase Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n",
"14.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n",
"14.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.5. Sedimentation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs sedimentation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"14.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the stratospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview tropospheric heterogenous atmospheric chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Gas Phase Species\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Aerosol Species\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n",
"15.4. Number Of Steady State Species\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.5. Interactive Dry Deposition\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"15.6. Coagulation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs coagulation is included in the tropospheric heterogeneous chemistry scheme or not?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview atmospheric photo chemistry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16.2. Number Of Reactions\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17. Photo Chemistry --> Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nPhotolysis scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n",
"17.2. Environmental Conditions\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
wcmckee/wcmckee
|
artcgallery.ipynb
|
mit
|
[
"<h3>artcontrol gallery</h3>\n\nCreate gallery for artcontrol artwork. \nUses Year / Month / Day format.\nCreate blog post for each day there is a post.\nIt will need to list the files for that day and create a markdown file in posts that contains the artwork. Name of art then followed by each pience of artwork -line, bw, color. \nwrite a message about each piece of artwork.",
"import os\nimport arrow\nimport getpass\n\nraw = arrow.now()\nmyusr = getpass.getuser()\ngalpath = ('/home/{}/git/artcontrolme/galleries/'.format(myusr))\n \npopath = ('/home/{}/git/artcontrolme/posts/'.format(myusr))\n \n\nclass DayStuff():\n def getUsr():\n return getpass.getuser()\n\n def reTime():\n return arrow.now()\n\n def getYear():\n return raw.strftime(\"%Y\")\n\n def getMonth():\n return raw.strftime(\"%m\")\n\n def getDay():\n return raw.strftime(\"%d\")\n\n def Fullday():\n return (DayStuff.getYear() + '/' + DayStuff.getMonth() + '/' + DayStuff.getDay())\n\n def fixDay():\n return (raw.strftime('%Y/%m/%d'))\n\n #def postPath():\n #return ('/home/{}/git/artcontrolme/posts/'.format(myusr))\n\n \n def listPath():\n return os.listdir(popath)\n\n def galyrPath():\n return ('{}{}'.format(galpath, DayStuff.getYear()))\n \n def galmonPath():\n return('{}{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth()))\n \n def galdayPath():\n return('{}{}/{}/{}'.format(galpath, DayStuff.getYear(), DayStuff.getMonth(), DayStuff.getDay()))\n \n def galleryList():\n return os.listdir(galpath)\n \n def galyrList():\n return os.listdir(DayStuff.galyrPath())\n\n def galmonList():\n return os.listdir(DayStuff.galmonPath()) \n \n def galdayList():\n return os.listdir(DayStuff.galdayPath()) \n \n\n def checkYear():\n if DayStuff.getYear() not in DayStuff.galleryList():\n return os.mkdir(DayStuff.galyrPath())\n\n\n def checkMonth():\n if DayStuff.getMonth() not in DayStuff.galyrList():\n return os.mkdir(DayStuff.galmonPath())\n \n def checkDay():\n if DayStuff.getDay() not in DayStuff.galmonList():\n return os.mkdir(DayStuff.galdayPath())\n \n #def 
makeDay\n\n\n\n#DayStuff.getUsr()\n\n#DayStuff.getYear()\n\n#DayStuff.getMonth()\n\n#DayStuff.getDay()\n\n#DayStuf\n\n#DayStuff.Fullday()\n\n#DayStuff.postPath()\n\n#DayStuff.\n\n#DayStuff.galmonPath()\n\n#DayStuff.galdayPath()\n\n#DayStuff.galyrList()\n\n#getDay()\n\n#getMonth()\n\n#galleryList()\n\n#DayStuff.checkDay()\n\n#DayStuff.galyrList()\n\n#DayStuff.galmonList()\n\n#DayStuff.checkDay()\n\n#DayStuff.checl\n\n#DayStuff.checkMonth()\n\n#DayStuff.galyrList()\n\n#listPath()\n\n#if getYear() not in galleryList():\n# os.mkdir('{}{}'.format(galleryPath(), getYear()))\n\n#galleryPath()\n\n#fixDay()\n\n#galleryPath()\n\n#Fullday()\n\n#getDay()\n\n#getYear()\n\n#getMonth()\n\n#getusr()\n\n#yraw = raw.strftime(\"%Y\")\n#mntaw = raw.strftime(\"%m\")\n#dytaw = raw.strftime(\"%d\")\n\n#fulda = yraw + '/' + mntaw + '/' + dytaw\n\n#fultim = fulda + ' ' + raw.strftime('%H:%M:%S')\n\n#arnow = arrow.now()\n\n#curyr = arnow.strftime('%Y')\n\n#curmon = arnow.strftime('%m')\n\n#curday = arnow.strftime('%d')\n\n#galerdir = ('/home/wcmckee/github/artcontrolme/galleries/')\n\n#galdir = os.listdir('/home/wcmckee/github/artcontrolme/galleries/')\n\n#galdir\n\n#mondir = os.listdir(galerdir + curyr)\n\n#daydir = os.listdir(galerdir + curyr + '/' + curmon )\n\n#daydir\n\n#galdir#\n\n#mondir\n\n#daydir\n\n#if curyr in galdir:\n# pass\n#else:\n# os.mkdir(galerdir + curyr)\n\n#if curmon in mondir:\n# pass\n#else:\n# os.mkdir(galerdir + curyr + '/' + curmon)\n\n#fulldaypath = (galerdir + curyr + '/' + curmon + '/' + curday)\n\n#if curday in daydir:\n# pass\n#else:\n# os.mkdir(galerdir + curyr + '/' + curmon + '/' + curday)\n\n#galdir\n\n#mondir\n\n#daydir\n\n#str(arnow.date())\n\n#nameofblogpost = input('Post name: ')",
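Since the `DayStuff` class above leans on module-level globals and per-level `check*` steps, a compact sketch of the same Year/Month/Day directory logic might look like this; the base path is illustrative, and the standard library's `datetime` stands in for `arrow`:

```python
import datetime
import os


def day_gallery_path(base='/tmp/artcontrolme/galleries', when=None):
    """Return (and create) the galleries/YYYY/MM/DD directory for a date."""
    when = when or datetime.date.today()
    path = os.path.join(base, when.strftime('%Y'),
                        when.strftime('%m'), when.strftime('%d'))
    # makedirs replaces the separate checkYear/checkMonth/checkDay steps
    os.makedirs(path, exist_ok=True)
    return path


print(day_gallery_path(when=datetime.date(2016, 7, 1)))
```

`os.makedirs(..., exist_ok=True)` creates the whole year/month/day chain in one call and is a no-op when the directory already exists.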
"Check whether that blog post name already exists; if so, raise an error and ask for something more unique! \nInput the art piece write-ups: show each piece of art, then ask for input, appending the input below the artwork. Give a name for the art that is appended above.",
"#daypost = open('/home/{}/github/artcontrolme/posts/{}.md'.format(getusr(), nameofblogpost), 'w')\n\n#daymetapost = open('/home/{}/github/artcontrolme/posts/{}.meta'.format(getUsr(), nameofblogpost), 'w')\n\n#daymetapost.write('.. title: ' + nameofblogpost + ' \\n' + '.. slug: ' + nameofblogpost + ' \\n' + '.. date: ' + fultim + ' \\n' + '.. author: wcmckee')\n\n#daymetapost.close()\n\n#todayart = os.listdir(fulldaypath)\n\n#titlewor = list()\n\n#titlewor",
"",
"#galpath = ('/galleries/' + curyr + '/' + curmon + '/' + curday + '/')\n\n#galpath\n\n#todayart.sort()\n\n#todayart\n\n#for toar in todayart:\n# daypost.write(('!' + '[' + toar.strip('.png') + '](' + galpath + toar + ')\\n'))\n\n\n#daypost.close()"
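The commented-out loop above builds markdown image links with `toar.strip('.png')`, but `str.strip` removes a *set of characters* rather than a suffix (it would also eat a trailing `p`, `n`, or `g` from the title). A working sketch of the link builder, with an illustrative gallery path:

```python
import os


def art_markdown_links(filenames, gallery_path='/galleries/2016/07/01/'):
    """Build a markdown image link for each artwork file in a day's gallery."""
    lines = []
    for name in sorted(filenames):
        title, _ = os.path.splitext(name)  # strips only the extension
        lines.append('![{}]({}{})'.format(title, gallery_path, name))
    return '\n'.join(lines)


print(art_markdown_links(['tui-line.png', 'tui-color.png']))
```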
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
frankbearzou/Data-analysis
|
Police Killings/Police Killing.ipynb
|
mit
|
[
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n%matplotlib inline",
"Data Exploration",
"police_killings = pd.read_csv('police_killings.csv', encoding='ISO-8859-1')\n\npolice_killings.dtypes\n\npolice_killings.head()",
"Does race matter?",
"race_counts = police_killings['raceethnicity'].value_counts()\n\nplt.bar(np.arange(race_counts.size), race_counts, align='center')\nplt.xticks(np.arange(race_counts.size), race_counts.index, rotation='vertical', fontsize='large')\nplt.show()\n\nrace_counts / np.sum(race_counts) * 100",
"From the chart above, we find that about half of the people killed are white, while black people account for approximately one third.\nDoes regional income matter?",
"income = police_killings[police_killings['p_income'] != '-']['p_income']\n\nincome = income.astype('float')\n\nsns.histplot(income)\nplt.show()\n\nincome.median()",
"The figure above shows that police killings are concentrated in areas where the median personal income is around 22,000 dollars.\nShootings By State\nIn order to analyze the killing data by state, we need to know not only the number of people who have been killed in each state, but also the population of each state.",
"state_pop = pd.read_csv('state_population.csv')\n\nstate_pop.head()\n\nstate_counts = police_killings['state_fp'].value_counts()",
"Create a new DataFrame and merge it with the state population data, so that we can compute the killing rate.",
"states = pd.DataFrame({'STATE': state_counts.index, 'shooting': state_counts})\n\nstates = states.merge(state_pop, on='STATE')\n\nstates.head()",
"Convert the unit of population to millions.",
"states['pop_millions'] = states['POPESTIMATE2015'] / 1000000\n\nstates['rate'] = states['shooting'] / states['pop_millions']",
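As a quick sanity check of the rate arithmetic above, a hypothetical state with 74 shootings and 38 million residents works out to just under 2 shootings per million (the numbers here are made up, not taken from the dataset):

```python
shootings = 74          # hypothetical shooting count for one state
population = 38000000   # hypothetical POPESTIMATE2015 value

pop_millions = population / 1000000
rate = shootings / pop_millions
print(round(rate, 2))  # -> 1.95
```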
"Order the rate column from highest to lowest.",
"states[['STATE', 'shooting', 'NAME', 'pop_millions', 'rate']].sort_values('rate', ascending=False)",
"From the chart above, we can wrap up: generally speaking, states in the mid-south have the highest police killing rates, while the rates in the northeast appear to be the lowest.\nState By State Differences\nWhich states have the highest number of police killings?",
"states[['STATE', 'shooting', 'NAME', 'pop_millions', 'rate']].sort_values('shooting', ascending=False)\n\n# copy so the astype assignments below don't trigger SettingWithCopyWarning\npk = police_killings[(police_killings['share_white'] != '-') & \\\n (police_killings['share_black'] != '-') & \\\n (police_killings['share_hispanic'] != '-')].copy()\n\npk['share_white'] = pk['share_white'].astype('float64')\npk['share_black'] = pk['share_black'].astype('float64')\npk['share_hispanic'] = pk['share_hispanic'].astype('float64')",
"We have to refer to each state by its abbreviation, since the killings data identifies states that way rather than by full name.",
"lowest_states_list = [\"CT\", \"PA\", \"IA\", \"NY\", \"MA\", \"NH\", \"ME\", \"IL\", \"OH\", \"WI\"]\nhighest_states_list = [\"OK\", \"AZ\", \"NE\", \"HI\", \"AK\", \"ID\", \"NM\", \"LA\", \"CO\", \"DE\"]\n\nlowest_states = pk[pk['state'].isin(lowest_states_list)]\nhighest_states = pk[pk['state'].isin(highest_states_list)]",
"Compare these columns using their medians.",
"columns = [\"pop\", \"county_income\", \"share_white\", \"share_black\", \"share_hispanic\"]\n\nlowest_states[columns].median()\n\nhighest_states[columns].median()",
"In states with lower county income, the police killing rate is higher; by contrast, in states with higher county income, the rate is lower.\nLet's look at the recorded cause of death.",
"causes = police_killings['cause'].value_counts()\ncauses.plot(kind='pie', autopct='%.2f', title='Cause', fontsize=16, figsize=(12,12))\nplt.legend(labels=causes.index)\nplt.show()",
"Clearly, the vast majority of killings are by gunshot."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
zczapran/datascienceintensive
|
naive_bayes/Mini_Project_Naive_Bayes.ipynb
|
mit
|
[
"Basic Text Classification with Naive Bayes\n\nIn the mini-project, you'll learn the basics of text analysis using a subset of movie reviews from the rotten tomatoes database. You'll also use a fundamental technique in Bayesian inference, called Naive Bayes. This mini-project is based on Lab 10 of Harvard's CS109 class. Please feel free to go to the original lab for additional exercises and solutions.",
"%matplotlib inline\nimport numpy as np\nimport scipy as sp\nimport matplotlib as mpl\nimport matplotlib.cm as cm\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\nfrom six.moves import range\n\n# Setup Pandas\npd.set_option('display.width', 500)\npd.set_option('display.max_columns', 100)\npd.set_option('display.notebook_repr_html', True)\n\n# Setup Seaborn\nsns.set_style(\"whitegrid\")\nsns.set_context(\"poster\")",
"Table of Contents\n\nRotten Tomatoes Dataset\nExplore\n\n\nThe Vector Space Model and a Search Engine\nIn Code\n\n\nNaive Bayes\nMultinomial Naive Bayes and Other Likelihood Functions\nPicking Hyperparameters for Naive Bayes and Text Maintenance\n\n\nInterpretation\n\nRotten Tomatoes Dataset",
"critics = pd.read_csv('./critics.csv')\n#let's drop rows with missing quotes\ncritics = critics[~critics.quote.isnull()]\ncritics.head()",
"Explore",
"n_reviews = len(critics)\nn_movies = critics.rtid.unique().size\nn_critics = critics.critic.unique().size\n\n\nprint(\"Number of reviews: {:d}\".format(n_reviews))\nprint(\"Number of critics: {:d}\".format(n_critics))\nprint(\"Number of movies: {:d}\".format(n_movies))\n\ndf = critics.copy()\ndf['fresh'] = df.fresh == 'fresh'\ngrp = df.groupby('critic')\ncounts = grp.critic.count() # number of reviews by each critic\nmeans = grp.fresh.mean() # average freshness for each critic\n\nmeans[counts > 100].hist(bins=10, edgecolor='w', lw=1)\nplt.xlabel(\"Average Rating per critic\")\nplt.ylabel(\"Number of Critics\")\nplt.yticks([0, 2, 4, 6, 8, 10]);",
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set I</h3>\n<br/>\n<b>Exercise:</b> Look at the histogram above. Tell a story about the average ratings per critic. What shape does the distribution look like? What is interesting about the distribution? What might explain these interesting things?\n</div>\n\nThe Vector Space Model and a Search Engine\nAll the diagrams here are snipped from Introduction to Information Retrieval by Manning et al., which is a great resource on text processing. For additional information on text mining and natural language processing, see Foundations of Statistical Natural Language Processing by Manning and Schutze.\nAlso check out Python packages nltk, spaCy, pattern, and their associated resources. Also see word2vec.\nLet us define the vector derived from document $d$ by $\\bar V(d)$. What does this mean? Each document is treated as a vector containing information about the words contained in it. Each vector has the same length and each entry \"slot\" in the vector contains some kind of data about the words that appear in the document such as presence/absence (1/0), count (an integer) or some other statistic. Each vector has the same length because each document shares the same vocabulary across the full collection of documents -- this collection is called a corpus.\nTo define the vocabulary, we take a union of all words we have seen in all documents. We then just associate an array index with them. So \"hello\" may be at index 5 and \"world\" at index 99.\nSuppose we have the following corpus:\nA Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree. The grapes seemed ready to burst with juice, and the Fox's mouth watered as he gazed longingly at them.\nSuppose we treat each sentence as a document $d$. 
The vocabulary (often called the lexicon) is the following:\n$V = \\left{\\right.$ a, along, and, as, at, beautiful, branches, bunch, burst, day, fox, fox's, from, gazed, grapes, hanging, he, juice, longingly, mouth, of, one, ready, ripe, seemed, spied, the, them, to, trained, tree, vine, watered, with$\\left.\\right}$\nThen the document\nA Fox one day spied a beautiful bunch of ripe grapes hanging from a vine trained along the branches of a tree\nmay be represented as the following sparse vector of word counts:\n$$\\bar V(d) = \\left( 4,1,0,0,0,1,1,1,0,1,1,0,1,0,1,1,0,0,0,0,2,1,0,1,0,0,1,0,0,0,1,1,0,0 \\right)$$\nor more succinctly as\n[(0, 4), (1, 1), (5, 1), (6, 1), (7, 1), (9, 1), (10, 1), (12, 1), (14, 1), (15, 1), (20, 2), (21, 1), (23, 1),\n(26, 1), (30, 1), (31, 1)]\nalong with a dictionary\n{\n 0: a, 1: along, 5: beautiful, 6: branches, 7: bunch, 9: day, 10: fox, 12: from, 14: grapes, \n 15: hanging, 19: mouth, 20: of, 21: one, 23: ripe, 24: seemed, 25: spied, 26: the, \n 30: tree, 31: vine, \n}\nThen, a set of documents becomes, in the usual sklearn style, a sparse matrix with rows being sparse arrays representing documents and columns representing the features/words in the vocabulary.\nNotice that this representation loses the relative ordering of the terms in the document. That is \"cat ate rat\" and \"rat ate cat\" are the same. Thus, this representation is also known as the Bag-Of-Words representation.\nHere is another example, from the book quoted above, although the matrix is transposed here so that documents are columns:\n\nSuch a matrix is also called a Term-Document Matrix. Here, the terms being indexed could be stemmed before indexing; for instance, jealous and jealousy after stemming are the same feature. One could also make use of other \"Natural Language Processing\" transformations in constructing the vocabulary. We could use Lemmatization, which reduces words to lemmas: work, working, worked would all reduce to work. 
We could remove \"stopwords\" from our vocabulary, such as common words like \"the\". We could look for particular parts of speech, such as adjectives. This is often done in Sentiment Analysis. And so on. It all depends on our application.\nFrom the book:\n\nThe standard way of quantifying the similarity between two documents $d_1$ and $d_2$ is to compute the cosine similarity of their vector representations $\\bar V(d_1)$ and $\\bar V(d_2)$:\n\n$$S_{12} = \\frac{\\bar V(d_1) \\cdot \\bar V(d_2)}{|\\bar V(d_1)| \\times |\\bar V(d_2)|}$$\n\n\nThere is a far more compelling reason to represent documents as vectors: we can also view a query as a vector. Consider the query q = jealous gossip. This query turns into the unit vector $\\bar V(q)$ = (0, 0.707, 0.707) on the three coordinates below. \n\n\n\nThe key idea now: to assign to each document d a score equal to the dot product:\n\n$$\\bar V(q) \\cdot \\bar V(d)$$\nThen we can use this simple Vector Model as a Search engine.\nIn Code",
"from sklearn.feature_extraction.text import CountVectorizer\n\ntext = ['Hop on pop', 'Hop off pop', 'Hop Hop hop']\nprint(\"Original text is\\n{}\".format('\\n'.join(text)))\n\nvectorizer = CountVectorizer(min_df=0)\n\n# call `fit` to build the vocabulary\nvectorizer.fit(text)\n\n# call `transform` to convert text to a bag of words\nx = vectorizer.transform(text)\n\n# CountVectorizer uses a sparse array to save memory, but it's easier in this assignment to \n# convert back to a \"normal\" numpy array\nx = x.toarray()\n\nprint(\"\")\nprint(\"Transformed text vector is \\n{}\".format(x))\n\n# `get_feature_names_out` tracks which word is associated with each column of the transformed x\nprint(\"\")\nprint(\"Words for each feature:\")\nprint(vectorizer.get_feature_names_out())\n\n# Notice that the bag of words treatment doesn't preserve information about the *order* of words, \n# just their frequency\n\ndef make_xy(critics, vectorizer=None):\n #Your code here \n if vectorizer is None:\n vectorizer = CountVectorizer()\n X = vectorizer.fit_transform(critics.quote)\n X = X.tocsc() # some versions of sklearn return COO format\n y = (critics.fresh == 'fresh').values.astype(int)\n return X, y\nX, y = make_xy(critics)",
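The cosine similarity $S_{12}$ defined in the previous section can be checked directly on count vectors like the ones just produced; here the two vectors are written out by hand for 'Hop on pop' and 'Hop off pop' over the vocabulary (hop, off, on, pop):

```python
import numpy as np


def cosine_sim(u, v):
    """Cosine similarity between two count vectors, as in S_12 above."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))


a = np.array([1, 0, 1, 1])  # 'Hop on pop'
b = np.array([1, 1, 0, 1])  # 'Hop off pop'
print(cosine_sim(a, b))  # shared 'hop' and 'pop' give 2/3 ~= 0.667
```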
"Naive Bayes\nFrom Bayes' Theorem, we have that\n$$P(c \\vert f) = \\frac{P(c \\cap f)}{P(f)}$$\nwhere $c$ represents a class or category, and $f$ represents a feature vector, such as $\\bar V(d)$ as above. We are computing the probability that a document (or whatever we are classifying) belongs to category c given the features in the document. $P(f)$ is really just a normalization constant, so the literature usually writes Bayes' Theorem in context of Naive Bayes as\n$$P(c \\vert f) \\propto P(f \\vert c) P(c) $$\n$P(c)$ is called the prior and is simply the probability of seeing class $c$. But what is $P(f \\vert c)$? This is the probability that we see feature set $f$ given that this document is actually in class $c$. This is called the likelihood and comes from the data. One of the major assumptions of the Naive Bayes model is that the features are conditionally independent given the class. While the presence of a particular discriminative word may uniquely identify the document as being part of class $c$ and thus violate general feature independence, conditional independence means that the presence of that term is independent of all the other words that appear within that class. This is a very important distinction. Recall that if two events are independent, then:\n$$P(A \\cap B) = P(A) \\cdot P(B)$$\nThus, conditional independence implies\n$$P(f \\vert c) = \\prod_i P(f_i | c) $$\nwhere $f_i$ is an individual feature (a word in this example).\nTo make a classification, we then choose the class $c$ such that $P(c \\vert f)$ is maximal.\nThere is a small caveat when computing these probabilities. For floating point underflow we change the product into a sum by going into log space. This is called the LogSumExp trick. So:\n$$\\log P(f \\vert c) = \\sum_i \\log P(f_i \\vert c) $$\nThere is another caveat. What if we see a term that didn't exist in the training data? 
This means that $P(f_i \\vert c) = 0$ for that term, and thus $P(f \\vert c) = \\prod_i P(f_i | c) = 0$, which doesn't help us at all. Instead of using zeros, we add a small negligible value called $\\alpha$ to each count. This is called Laplace Smoothing.\n$$P(f_i \\vert c) = \\frac{N_{ic}+\\alpha}{N_c + \\alpha N_i}$$\nwhere $N_{ic}$ is the number of times feature $i$ was seen in class $c$, $N_c$ is the number of times class $c$ was seen and $N_i$ is the number of times feature $i$ was seen globally. $\\alpha$ is sometimes called a regularization parameter.\nMultinomial Naive Bayes and Other Likelihood Functions\nSince we are modeling word counts, we are using variation of Naive Bayes called Multinomial Naive Bayes. This is because the likelihood function actually takes the form of the multinomial distribution.\n$$P(f \\vert c) = \\frac{\\left( \\sum_i f_i \\right)!}{\\prod_i f_i!} \\prod_{f_i} P(f_i \\vert c)^{f_i} \\propto \\prod_{i} P(f_i \\vert c)$$\nwhere the nasty term out front is absorbed as a normalization constant such that probabilities sum to 1.\nThere are many other variations of Naive Bayes, all which depend on what type of value $f_i$ takes. If $f_i$ is continuous, we may be able to use Gaussian Naive Bayes. First compute the mean and variance for each class $c$. Then the likelihood, $P(f \\vert c)$ is given as follows\n$$P(f_i = v \\vert c) = \\frac{1}{\\sqrt{2\\pi \\sigma^2_c}} e^{- \\frac{\\left( v - \\mu_c \\right)^2}{2 \\sigma^2_c}}$$\n<div class=\"span5 alert alert-info\">\n<h3>Exercise Set II</h3>\n\n<p><b>Exercise:</b> Implement a simple Naive Bayes classifier:</p>\n\n<ol>\n<li> split the data set into a training and test set\n<li> Use `scikit-learn`'s `MultinomialNB()` classifier with default parameters.\n<li> train the classifier over the training set and test on the test set\n<li> print the accuracy scores for both the training and the test sets\n</ol>\n\nWhat do you notice? Is this a good classifier? If not, why not?\n</div>",
"from sklearn.naive_bayes import MultinomialNB\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import accuracy_score\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\n\nclf = MultinomialNB()\nclf.fit(X_train, y_train)\n\nprint(accuracy_score(y_train, clf.predict(X_train)))\nprint(accuracy_score(y_test, clf.predict(X_test)))\n",
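The Laplace-smoothed estimate $P(f_i \vert c) = (N_{ic}+\alpha)/(N_c + \alpha N_i)$ from the section above can be checked with a tiny hand computation (the counts here are made up):

```python
def smoothed_prob(n_ic, n_c, n_i, alpha=1.0):
    """Laplace-smoothed P(f_i | c) with the counts defined in the text."""
    return (n_ic + alpha) / (n_c + alpha * n_i)


# A word never seen in class c still gets a small, nonzero probability...
print(smoothed_prob(n_ic=0, n_c=100, n_i=10))  # 1/110 ~= 0.0091
# ...while an observed word's estimate is only slightly shrunk
print(smoothed_prob(n_ic=5, n_c=100, n_i=10))  # 6/110 ~= 0.0545
```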
"Picking Hyperparameters for Naive Bayes and Text Maintenance\nWe need to know what value to use for $\\alpha$, and we also need to know which words to include in the vocabulary. As mentioned earlier, some words are obvious stopwords. Other words appear so infrequently that they serve as noise, and other words in addition to stopwords appear so frequently that they may also serve as noise.\nFirst, let's find an appropriate value for min_df for the CountVectorizer. min_df can be either an integer or a float/decimal. If it is an integer, min_df represents the minimum number of documents a word must appear in for it to be included in the vocabulary. If it is a float, it represents the minimum percentage of documents a word must appear in to be included in the vocabulary. From the documentation:\n\nmin_df: When building the vocabulary ignore terms that have a document frequency strictly lower than the given threshold. This value is also called cut-off in the literature. If float, the parameter represents a proportion of documents, integer absolute counts. This parameter is ignored if vocabulary is not None.\n\n<div class=\"span5 alert alert-info\">\n<h3>Exercise Set III</h3>\n\n<p><b>Exercise:</b> Construct the cumulative distribution of document frequencies (df). The $x$-axis is a document count $x_i$ and the $y$-axis is the percentage of words that appear less than $x_i$ times. For example, at $x=5$, plot a point representing the percentage or number of words that appear in 5 or fewer documents.</p>\n\n<p><b>Exercise:</b> Look for the point at which the curve begins climbing steeply. This may be a good value for `min_df`. If we were interested in also picking `max_df`, we would likely pick the value where the curve starts to plateau. What value did you choose?</p>\n</div>\n\nThe parameter $\\alpha$ is chosen to be a small value that simply avoids having zeros in the probability computations. 
This value can sometimes be chosen arbitrarily with domain expertise, but we will use K-fold cross validation. In K-fold cross-validation, we divide the data into $K$ non-overlapping parts. We train on $K-1$ of the folds and test on the remaining fold. We then iterate, so that each fold serves as the test fold exactly once. The function cv_score performs the K-fold cross-validation algorithm for us, but we need to pass a function that measures the performance of the algorithm on each fold.",
"from sklearn.model_selection import KFold\ndef cv_score(clf, X, y, scorefunc):\n result = 0.\n nfold = 5\n for train, test in KFold(nfold).split(X): # split data into train/test groups, 5 times\n clf.fit(X[train], y[train]) # fit the classifier, passed is as clf.\n result += scorefunc(clf, X[test], y[test]) # evaluate score function on held-out data\n return result / nfold # average",
"We use the log-likelihood as the score here in scorefunc. The higher the log-likelihood, the better. Indeed, what we do in cv_score above is to implement the cross-validation part of GridSearchCV.\nThe custom scoring function scorefunc allows us to use different metrics depending on the decision risk we care about (precision, accuracy, profit etc.) directly on the validation set. You will often find people using roc_auc, precision, recall, or F1-score as the scoring function.",
"def log_likelihood(clf, x, y):\n prob = clf.predict_log_proba(x)\n rotten = y == 0\n fresh = ~rotten\n return prob[rotten, 0].sum() + prob[fresh, 1].sum()",
"We'll cross-validate over the regularization parameter $\\alpha$.\nLet's set up the train and test masks first, and then we can run the cross-validation procedure.",
"from sklearn.model_selection import train_test_split\n_, itest = train_test_split(range(critics.shape[0]), train_size=0.7)\nmask = np.zeros(critics.shape[0], dtype=bool)\nmask[itest] = True",
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set IV</h3>\n\n<p><b>Exercise:</b> What does using the function `log_likelihood` as the score mean? What are we trying to optimize for?</p>\n\n<p><b>Exercise:</b> Without writing any code, what do you think would happen if you choose a value of $\\alpha$ that is too high?</p>\n\n<p><b>Exercise:</b> Using the skeleton code below, find the best values of the parameter `alpha`, and use the value of `min_df` you chose in the previous exercise set. Use the `cv_score` function above with the `log_likelihood` function for scoring.</p>\n</div>",
"from sklearn.naive_bayes import MultinomialNB\n\n#the grid of parameters to search over\nalphas = [0.001, 0.01, .1, 1, 5, 10, 50]\nbest_min_df = 0.01 # YOUR TURN: put your value of min_df here.\n\n#Find the best value for alpha and min_df, and the best classifier\nbest_alpha = None\nmaxscore=-np.inf\nfor alpha in alphas: \n vectorizer = CountVectorizer(min_df=best_min_df) \n Xthis, ythis = make_xy(critics, vectorizer)\n Xtrainthis = Xthis[mask]\n ytrainthis = ythis[mask]\n score = cv_score(MultinomialNB(alpha), Xtrainthis, ytrainthis, log_likelihood)\n if (score > maxscore):\n maxscore = score\n best_alpha = alpha\n\nprint(\"alpha: {}\".format(best_alpha))",
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set V: Working with the Best Parameters</h3>\n\n<p><b>Exercise:</b> Using the best value of `alpha` you just found, calculate the accuracy on the training and test sets. Is this classifier better? Why (not)?</p>\n\n</div>",
"vectorizer = CountVectorizer(min_df=best_min_df)\nX, y = make_xy(critics, vectorizer)\nxtrain=X[mask]\nytrain=y[mask]\nxtest=X[~mask]\nytest=y[~mask]\n\nclf = MultinomialNB(alpha=best_alpha).fit(xtrain, ytrain)\n\n#your turn. Print the accuracy on the test and training dataset\ntraining_accuracy = clf.score(xtrain, ytrain)\ntest_accuracy = clf.score(xtest, ytest)\n\nprint(\"Accuracy on training data: {:.2f}\".format(training_accuracy))\nprint(\"Accuracy on test data: {:.2f}\".format(test_accuracy))\n\nfrom sklearn.metrics import confusion_matrix\nprint(confusion_matrix(ytest, clf.predict(xtest)))",
"Interpretation\nWhat are the strongly predictive features?\nWe use a neat trick to identify strongly predictive features (i.e. words). \n\nfirst, create a data set such that each row has exactly one feature. This is represented by the identity matrix.\nuse the trained classifier to make predictions on this matrix\nsort the rows by predicted probabilities, and pick the top and bottom $K$ rows",
"words = np.array(vectorizer.get_feature_names_out())\n\nx = np.eye(xtest.shape[1])\nprobs = clf.predict_log_proba(x)[:, 0]\nind = np.argsort(probs)\n\ngood_words = words[ind[:10]]\nbad_words = words[ind[-10:]]\n\ngood_prob = probs[ind[:10]]\nbad_prob = probs[ind[-10:]]\n\nprint(\"Good words\\t P(fresh | word)\")\nfor w, p in zip(good_words, good_prob):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))\n \nprint(\"Bad words\\t P(fresh | word)\")\nfor w, p in zip(bad_words, bad_prob):\n print(\"{:>20}\".format(w), \"{:.2f}\".format(1 - np.exp(p)))",
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VI</h3>\n\n<p><b>Exercise:</b> Why does this method work? What does the probability for each row in the identity matrix represent</p>\n\n</div>\n\nThe above exercise is an example of feature selection. There are many other feature selection methods. A list of feature selection methods available in sklearn is here. The most common feature selection technique for text mining is the chi-squared $\\left( \\chi^2 \\right)$ method.\nPrediction Errors\nWe can see mis-predictions as well.",
"x, y = make_xy(critics, vectorizer)\n\nprob = clf.predict_proba(x)[:, 0]\npredict = clf.predict(x)\n\nbad_rotten = np.argsort(prob[y == 0])[:5]\nbad_fresh = np.argsort(prob[y == 1])[-5:]\n\nprint(\"Mis-predicted Rotten quotes\")\nprint('---------------------------')\nfor row in bad_rotten:\n print(critics[y == 0].quote.iloc[row])\n print(\"\")\n\nprint(\"Mis-predicted Fresh quotes\")\nprint('--------------------------')\nfor row in bad_fresh:\n print(critics[y == 1].quote.iloc[row])\n print(\"\")",
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VII: Predicting the Freshness for a New Review</h3>\n<br/>\n<div>\n<b>Exercise:</b>\n<ul>\n<li> Using your best trained classifier, predict the freshness of the following sentence: *'This movie is not remarkable, touching, or superb in any way'*\n<li> Is the result what you'd expect? Why (not)?\n</ul>\n</div>\n</div>",
"r = vectorizer.transform(['This movie is not remarkable, touching, or superb in any way'])\nclf.predict(r)",
"Aside: TF-IDF Weighting for Term Importance\nTF-IDF stands for \nTerm-Frequency X Inverse Document Frequency.\nIn the standard CountVectorizer model above, we used just the term frequency in a document of words in our vocabulary. In TF-IDF, we weight this term frequency by the inverse of its popularity in all documents. For example, if the word \"movie\" showed up in all the documents, it would not have much predictive value. It could actually be considered a stopword. By weighing its counts by 1 divided by its overall frequency, we downweight it. We can then use this TF-IDF weighted features as inputs to any classifier. TF-IDF is essentially a measure of term importance, and of how discriminative a word is in a corpus. There are a variety of nuances involved in computing TF-IDF, mainly involving where to add the smoothing term to avoid division by 0, or log of 0 errors. The formula for TF-IDF in scikit-learn differs from that of most textbooks: \n$$\\mbox{TF-IDF}(t, d) = \\mbox{TF}(t, d)\\times \\mbox{IDF}(t) = n_{td} \\log{\\left( \\frac{\\vert D \\vert}{\\vert d : t \\in d \\vert} + 1 \\right)}$$\nwhere $n_{td}$ is the number of times term $t$ occurs in document $d$, $\\vert D \\vert$ is the number of documents, and $\\vert d : t \\in d \\vert$ is the number of documents that contain $t$",
"# http://scikit-learn.org/dev/modules/feature_extraction.html#text-feature-extraction\n# http://scikit-learn.org/dev/modules/classes.html#text-feature-extraction-ref\nfrom sklearn.feature_extraction.text import TfidfVectorizer\ntfidfvectorizer = TfidfVectorizer(min_df=1, stop_words='english')\nXtfidf=tfidfvectorizer.fit_transform(critics.quote)",
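The scikit-learn formula quoted above can be reproduced by hand on a toy corpus; this computes only the raw weight (`TfidfVectorizer` additionally L2-normalizes each document row by default):

```python
import math

docs = [['good', 'movie'], ['bad', 'movie'], ['good', 'film']]
n_docs = len(docs)


def tfidf(term, doc):
    """TF-IDF(t, d) = n_td * (ln(|D| / df(t)) + 1), per the formula above."""
    tf = doc.count(term)
    df = sum(term in d for d in docs)  # number of documents containing term
    return tf * (math.log(n_docs / df) + 1)


# 'movie' appears in 2 of 3 documents, so it is downweighted...
print(round(tfidf('movie', docs[0]), 4))  # ln(3/2) + 1 ~= 1.4055
# ...while 'bad' appears in only 1 document and is weighted more heavily
print(round(tfidf('bad', docs[1]), 4))    # ln(3) + 1 ~= 2.0986
```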
"<div class=\"span5 alert alert-info\">\n<h3>Exercise Set VIII: Enrichment</h3>\n\n<p>\nThere are several additional things we could try. Try some of these as exercises:\n<ol>\n<li> Build a Naive Bayes model where the features are n-grams instead of words. N-grams are phrases containing n words next to each other: a bigram contains 2 words, a trigram contains 3 words, and 6-gram contains 6 words. This is useful because \"not good\" and \"so good\" mean very different things. On the other hand, as n increases, the model does not scale well since the feature set becomes more sparse.\n<li> Try a model besides Naive Bayes, one that would allow for interactions between words -- for example, a Random Forest classifier.\n<li> Try adding supplemental features -- information about genre, director, cast, etc.\n<li> Use word2vec or [Latent Dirichlet Allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) to group words into topics and use those topics for prediction.\n<li> Use TF-IDF weighting instead of word counts.\n</ol>\n</p>\n\n<b>Exercise:</b> Try a few of these ideas to improve the model (or any other ideas of your own). Implement here and report on the result.\n</div>",
"X_train, X_test, y_train, y_test = train_test_split(Xtfidf, y, test_size=0.25, random_state=42)\n\nclf = MultinomialNB()\nclf.fit(X_train, y_train)\n\nprint(accuracy_score(y_train, clf.predict(X_train)))\nprint(accuracy_score(y_test, clf.predict(X_test)))\n\n\nX, y = make_xy(critics, CountVectorizer(ngram_range=(1,2)))\n\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)\n\nclf = MultinomialNB(alpha=1.0)\nclf.fit(X_train, y_train)\n\nprint(accuracy_score(y_train, clf.predict(X_train)))\nprint(accuracy_score(y_test, clf.predict(X_test)))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
srcole/qwm
|
burrito/.ipynb_checkpoints/Burrito_nonlinear-checkpoint.ipynb
|
mit
|
[
"San Diego Burrito Analytics: Data characterization\nScott Cole\n1 July 2016\nThis notebook applies nonlinear techniques to analyze the contributions of burrito dimensions to the overall burrito rating.\n\nCreate the ‘vitalness’ metric. For each dimension, identify the burritos that scored below average (defined as 2 or lower), then calculate the linear model’s predicted overall score and compare it to the actual overall score. For what dimensions is this distribution not symmetric around 0?\n If this distribution trends greater than 0 (Overall_predict - Overall_actual), that means that the actual score is lower than the predicted score. This means that this metric is ‘vital’ and that it being bad will make the whole burrito bad\n If vitalness < 0, then the metric being really bad actually doesn’t affect the overall burrito as much as it should.\nIn opposite theme, make the ‘saving’ metric for all burritos in which the dimension was 4.5 or 5\nFor those that are significantly different from 0, quantify the effect size. (e.g. a burrito with a 2 or lower rating for this metric: its overall rating will be disproportionately impacted by XX points).\nFor the dimensions, how many are nonzero? If all of them are 0, then burritos are perfectly linear, which would be weird. If many of them are nonzero, then burritos are highly nonlinear. \n\nDefault imports",
"%config InlineBackend.figure_format = 'retina'\n%matplotlib inline\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport statsmodels.api as sm\nimport pandasql\n\nimport seaborn as sns\nsns.set_style(\"white\")",
"Load data",
"import util\ndf = util.load_burritos()\nN = df.shape[0]",
"Vitalness metric",
"def vitalness(df, dim, rating_cutoff = 2,\n metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',\n 'Uniformity','Salsa','Wrap']):\n # Fit GLM to get predicted values\n dffull = df[np.hstack((metrics,'overall'))].dropna()\n X = sm.add_constant(dffull[metrics])\n y = dffull['overall']\n my_glm = sm.GLM(y,X)\n res = my_glm.fit()\n dffull['overallpred'] = res.fittedvalues\n \n # Make exception for Meat:filling in order to avoid pandasql error\n if dim == 'Meat:filling':\n dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})\n dim = 'Meatfilling'\n\n # Compare predicted and actual overall ratings for each metric below the rating cutoff\n import pandasql\n q = \"\"\"\n SELECT\n overall, overallpred\n FROM\n dffull\n WHERE\n \"\"\"\n q = q + dim + ' <= ' + np.str(rating_cutoff)\n df2 = pandasql.sqldf(q.lower(), locals())\n return sp.stats.ttest_rel(df2.overall,df2.overallpred)\n\nvital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',\n 'Uniformity','Salsa','Wrap']\nfor metric in vital_metrics:\n print metric\n if metric == 'Volume':\n rating_cutoff = .7\n else:\n rating_cutoff = 1\n print vitalness(df,metric,rating_cutoff=rating_cutoff, metrics=vital_metrics)",
"Savior metric",
"def savior(df, dim, rating_cutoff = 2,\n metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meatfilling',\n 'Uniformity','Salsa','Wrap']):\n \n # Fit GLM to get predicted values\n dffull = df[np.hstack((metrics,'overall'))].dropna()\n X = sm.add_constant(dffull[metrics])\n y = dffull['overall']\n my_glm = sm.GLM(y,X)\n res = my_glm.fit()\n dffull['overallpred'] = res.fittedvalues\n \n # Make exception for Meat:filling in order to avoid pandasql error\n if dim == 'Meat:filling':\n dffull = dffull.rename(columns={'Meat:filling':'Meatfilling'})\n dim = 'Meatfilling'\n\n # Compare predicted and actual overall ratings for each metric below the rating cutoff\n import pandasql\n q = \"\"\"\n SELECT\n overall, overallpred\n FROM\n dffull\n WHERE\n \"\"\"\n q = q + dim + ' >= ' + np.str(rating_cutoff)\n df2 = pandasql.sqldf(q.lower(), locals())\n print len(df2)\n return sp.stats.ttest_rel(df2.overall,df2.overallpred)\n\nvital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',\n 'Uniformity','Salsa','Wrap']\nfor metric in vital_metrics:\n print metric\n print savior(df,metric,rating_cutoff=5, metrics=vital_metrics)\nprint 'Volume'\nvital_metrics = ['Hunger','Tortilla','Temp','Meat','Fillings','Meat:filling',\n 'Uniformity','Salsa','Wrap','Volume']\nprint savior(df,'Volume',rating_cutoff=.9,metrics=vital_metrics)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jibanCat/healpy_object_counts
|
tutorial/Jupyter_Intro_Healpix_WISE-2MASS.ipynb
|
mit
|
[
"Note for MF:\n- Jupyter functions other than notebook\n- kernels\n- github tutorial (e.g., pymc3), gallery\n- reveal.js RISE, Blog\n- Jupyter nbextensions\n- Jupyter Lab\nFound it last weekend, quite useful:\nReproducible Data Analysis in Jupyter (jakevdp)\ntool for version control in Jupyter (nbdime) (haven't tried it!)\nWhy is Jupyter AWESOME?\n\nAs a Document: Helps you share your code and your ideas step by step\nPrototyping: Helps you interact with and visualize your idea before you move to a more serious text editor\nPresentation: Use reveal.js RISE to present your notebook in slides\n\nSome Jupyter Tricks\nLet us follow the post: 28 jupyter tricks\n\nmagic functions: %\nbash command: !\ninteract with R",
"# how many magic function we got?\n%lsmagic\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n%config IPython.matplotlib.backend = 'retina'\n%config InlineBackend.figure_format = 'retina'\n\nplt.style.use('ggplot') # seaborn\nplt.plot(range(10), [x*x for x in range(10)]);\n\n%%writefile test.py\nprint('Hello, AstroTalk.')\n\n%run test.py\n\n# %load test.py\nprint('Hello, AstroTalk.')\n\n%pycat test.py",
"$$\nx^2 = a + b +c\n$$",
"%%python2\nprint 'AstroTalk'",
"Cython Magic fast for loop\n\nFast for loop, pyimagesearch\nCython with OpenMP\n\nline profiler\nCOOL things I've never used\n- nbtutor\n- pivottablejs",
"%load_ext nbtutor\n\n%%nbtutor -r -f\ndef AstroTalk(a):\n a += 1\n return a\nAstroTalk(2)\n\nfrom pivottablejs import pivot_ui\nimport pandas as pd\ndf = pd.DataFrame({\n 'Letter': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],\n 'X': [4, 3, 5, 2, 1, 7, 7, 5, 9],\n 'Y': [0, 4, 3, 6, 7, 10, 11, 9, 13],\n 'Z': [1, 2, 3, 1, 2, 3, 1, 2, 3]\n })\n\npivot_ui(df)",
"Basic healpy\nNote for MF\n- RING & NESTED: Hierarchical Equal Area Iso Latitude pixelation of the sphere\n- hack: cartesian projection to Healpix format\n - try to change nside\n - try smoothing\n - try different projections\n - try rotation\n - try diff images\n- counting objects:\n - generate from Healpix pixels with large nside (cheat)\n - counting with pixfunc\n- something you can play with yourself? (list them:)\n- hack: Not so important, but you can use vpython for 3D sphere.",
"from urllib import request\n\nURL = 'http://attach.kmt.org.tw/200910/20091015163058.jpg'\n\nrequest.urlretrieve(URL, filename='kmt.jpg')\n\nfrom PIL import Image\nimport numpy as np\n\nimage = Image.open('kmt.jpg').convert('L')\nimage = np.array(image)\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nplt.imshow(image); plt.colorbar();\n\nimage = image[image.shape[0]//2 - 300:image.shape[0]//2 + 300, \n image.shape[1]//2 - 600:image.shape[1]//2 + 600]\nplt.imshow(image)\n\nimport numpy as np\nimport healpy as hp\n\ndef cart_healpix(cartview, nside):\n '''read in a matrix and return a healpix pixelization map'''\n # Generate a blank Healpix map and angular to pixels\n healpix = np.zeros(hp.nside2npix(nside), dtype=np.double)\n hptheta = np.linspace(0, np.pi, num=cartview.shape[0])[:, None]\n hpphi = np.linspace(-np.pi, np.pi, num=cartview.shape[1])\n pix = hp.ang2pix(nside, hptheta, hpphi)\n \n # re-pixelize\n healpix[pix] = cartview\n return healpix\n\narray = np.arange(hp.nside2npix(2))\nhp.mollview(array, nest=False, sub=(1,2,1), title='RING')\nhp.mollview(array, nest=True, sub=(1,2,2), title='NEST')\n\n# cartview -> healpix\nhealpix = cart_healpix(image, 128)\nhp.mollview(healpix, sub=(1,3,1))\nhp.cartview(healpix, sub=(1,3,2))\nhp.orthview(healpix, sub=(1,3,3))\n\ncl = hp.anafast(healpix)\nell = np.arange(len(cl))\nplt.loglog(ell, cl);\n\nnside = 1\ncartview = np.arange(200).reshape(10, 20)\n# Generate a blank Healpix map and angular to pixels\nhealpix = np.zeros(hp.nside2npix(nside), dtype=np.double)\n\n\nhptheta = np.linspace(0, np.pi, num=cartview.shape[0])[:, None]\nhpphi = np.linspace(-np.pi, np.pi, num=cartview.shape[1])\npix = hp.ang2pix(nside, hptheta, hpphi)\n\n\npix\n\ncartview\n\n# re-pixelize\nhealpix[pix] = cartview\n\nhealpix\n\n# Try yourself with your own image!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SXBK/kaggle
|
mercedes-benz/Mercedes-Benz.ipynb
|
gpl-3.0
|
[
"#Basic libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#Get data\ntrain = pd.read_csv('./train.csv')\ntest = pd.read_csv('./test.csv')\ncombined = pd.concat([train, test], axis=0)\ncombined.info()\n\n#Preprocessing\ntrain_c = train.copy()\ntest_c = test.copy()\ntrain_c['label'] = train_c.y\ntrain_c.drop(['ID', 'y'], inplace=True, axis=1)\ntest_c.drop('ID', inplace=True, axis=1)\n\n#Histogram of y; has four peaks\ntrain_c.label.hist(bins=1000)\nplt.show()\n\n#Drop the y-outlier\ntrain_c = train_c[train_c.label < 175]\ntrain_c.label.hist(bins=1000)\nplt.show()\n\n#divide features into qualitative and quantitative\nqual = []\nquan = []\nfor col in train_c.columns[:-1]:\n if train_c[col].dtype == 'object':\n qual.append(col)\n elif train_c[col].dtype != 'object':\n quan.append(col)\nlen(qual), len(quan)",
"There is a lot of room for feature engineering the 8 qualitative features, but we'll reserve it for later",
"#Drop quantitative features for which most samples take 0 or 1\nfor cols in quan:\n if train_c[cols].mean() < 0.01 or train_c[cols].mean() > 0.99:\n train_c.drop(cols, inplace=True, axis=1)\n test_c.drop(cols, inplace=True, axis=1)\n\n#For now we only use the quantitative features left to make predictions\nquan_features = train_c.columns[8:-1]\n\nfrom sklearn.metrics import r2_score\nfrom sklearn.model_selection import GridSearchCV\nimport warnings\nwarnings.filterwarnings('ignore')",
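The near-constant-column filter in the loop above can also be written as a small vectorised helper; the toy 0/1 frame below is an assumption standing in for the binary quantitative columns, and the column names are hypothetical:

```python
import pandas as pd

# toy 0/1 frame standing in for the binary quantitative columns (assumption)
df = pd.DataFrame({
    'X10': [0] * 9 + [1],   # mean 0.1 -> informative, kept
    'X11': [0] * 10,        # mean 0.0 -> almost never on, dropped
    'X12': [1] * 10,        # mean 1.0 -> always on, dropped
})

def drop_rare_binary(frame, low=0.01, high=0.99):
    """Keep only columns whose mean lies in [low, high] --
    the same rule as the loop above, applied in one pass."""
    means = frame.mean()
    keep = means[(means >= low) & (means <= high)].index
    return frame[keep]

reduced = drop_rare_binary(df)
```

Columns that are almost always 0 or almost always 1 carry near-zero variance, so dropping them rarely costs predictive power.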
"From now we try a range of estimators and use GridSearch to iteratively tune their hyperparameters",
"from sklearn.linear_model import Ridge\nridge = Ridge()\nridge_cv = GridSearchCV(estimator=ridge, param_grid={'alpha':np.arange(1, 50, 1)}, cv=5)\nridge_cv.fit(train_c[quan_features], train_c.label)\n\nridge_cv.best_score_\n\nfrom sklearn.linear_model import Lasso\nlasso = Lasso()\nlasso_cv = GridSearchCV(estimator=lasso, param_grid={'alpha':np.arange(0, 0.05, 0.005)}, cv=5)\nlasso_cv.fit(train_c[quan_features], train_c.label)\n\nlasso_cv.best_score_\n\nfrom sklearn.ensemble import RandomForestRegressor\nrf = RandomForestRegressor()\nparams = {'max_depth':np.arange(5,8),\n 'min_samples_split':np.arange(3, 6)}\nrf_cv = GridSearchCV(estimator=rf, param_grid=params, cv=5)\nrf_cv.fit(train_c[quan_features], train_c.label)\n\nrf_cv.best_score_\n\nfrom sklearn.linear_model import ElasticNet\nen = ElasticNet()\nparams = {'alpha':np.arange(0.01, 0.05, 0.005),\n 'l1_ratio': np.arange(0.1, 0.9, 0.1)}\nen_cv = GridSearchCV(estimator=en, param_grid=params, cv=5)\nen_cv.fit(train_c[quan_features], train_c.label)\n\nen_cv.best_score_\n\nfrom mlxtend.regressor import StackingRegressor\nfrom sklearn.linear_model import LinearRegression\nlin=LinearRegression()\nbasic_regressors= [ridge_cv.best_estimator_, lasso_cv.best_estimator_, \n rf_cv.best_estimator_, en_cv.best_estimator_]\nstacker=StackingRegressor(regressors=basic_regressors, meta_regressor=lin)\nstacker.fit(train_c[quan_features], train_c.label)\npred = stacker.predict(train_c[quan_features])\nr2_score(train_c.label, pred)\n\nresult = pd.DataFrame()\nresult['ID']=test.ID\nresult['y']=stacker.predict(test_c[quan_features])\nresult.to_csv('./stackedprediction.csv', index=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
willingc/jupyter-data-seeker
|
Generic-GitHub.ipynb
|
gpl-2.0
|
[
"GitHub Basics\nThis notebook is a starter notebook for finding information about repositories that are managed by the Jupyter team.",
"import github3\nimport json\nfrom os.path import join\nimport pprint\nimport requests\nfrom urllib.parse import urljoin",
"GitHub Authorization\nNote: Be careful. Don't check in to version control your username and password.",
"TOKEN=''\ngh = github3.login(token=TOKEN)\ntype(gh)",
"Basic API request",
"url = 'https://api.github.com/orgs/jupyterhub/repos'\n\nresponse = requests.get(url)\nif response.status_code != 200:\n # This means something went wrong.\n raise RuntimeError('GET /orgs/ {}'.format(response.status_code))\n\nrepos = response.json()\npprint.pprint(repos)\n\nprint('The total number of repos in the organization is {}'.format(len(repos)))\n\n# print repos\nprint('{0:30s} {1:20s}\\n'.format('Repository name', 'open_issues_count'))\n\nfor num in range(0, len(repos)):\n print('{0:30s} {1:4d}\\n'.format(repos[num]['name'], repos[num]['open_issues_count']))\n\nfor num in range(0, len(repos)):\n print('{0:30s} {1:50s}\\n'.format(repos[num]['name'], repos[num]['description']))\n\nprint('{0:30s} {1:20s}\\n'.format('Repository name', 'open_issues_count'))\n\nfor num in range(0, len(repos)):\n print('{0:30s} {1:4d} {2:20s}\\n'.format(repos[num]['name'], repos[num]['open_issues_count'], repos[num]['description']))",
"Issues in an organization's repos",
"def get_issues(my_org, my_repo):\n for issue in gh.iter_repo_issues(owner=my_org, repository=my_repo):\n print(issue.number, issue.title) \n\nmy_org = 'jupyterhub'\nmy_repo = 'configurable-http-proxy'\nget_issues(my_org, my_repo)\n\nmy_org = 'jupyterhub'\nmy_repo = 'jupyterhub'\nget_issues(my_org, my_repo)\n\nsubgroup={'authenticators':['oauthenticator', 'ldapauthenticator'],\n 'spawners':['dockerspawner', 'sudospawner', 'kubespawner', 'batchspawner', 'wrapspawner', 'systemdspawner'],\n 'deployments':['jupyterhub-deploy-docker', 'jupyterhub-deploy-teaching', 'jupyterhub-deploy-hpc', 'jupyterhub-example-kerberos'],\n 'fundamentals':['jupyterhub', 'configurable-http-proxy', 'hubshare', 'jupyterhub-labextension'],\n 'community':['jupyterhub-tutorial', 'jupyterhub-2016-workshop'],\n }\n\nprint(subgroup['authenticators'])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
adriantorrie/adriantorrie.github.io
|
downloads/notebooks/eoddata/eoddata_web_service_series_master.ipynb
|
mit
|
[
"Table of Contents\n<p><div class=\"lev1 toc-item\"><a href=\"#Summary\" data-toc-modified-id=\"Summary-1\"><span class=\"toc-item-num\">1 </span>Summary</a></div><div class=\"lev1 toc-item\"><a href=\"#Version-Control\" data-toc-modified-id=\"Version-Control-2\"><span class=\"toc-item-num\">2 </span>Version Control</a></div><div class=\"lev1 toc-item\"><a href=\"#Change-Log\" data-toc-modified-id=\"Change-Log-3\"><span class=\"toc-item-num\">3 </span>Change Log</a></div><div class=\"lev1 toc-item\"><a href=\"#Setup\" data-toc-modified-id=\"Setup-4\"><span class=\"toc-item-num\">4 </span>Setup</a></div><div class=\"lev1 toc-item\"><a href=\"#Secure-Credentials-File\" data-toc-modified-id=\"Secure-Credentials-File-5\"><span class=\"toc-item-num\">5 </span>Secure Credentials File</a></div><div class=\"lev1 toc-item\"><a href=\"#Inspect-the-XML-returned\" data-toc-modified-id=\"Inspect-the-XML-returned-6\"><span class=\"toc-item-num\">6 </span>Inspect the XML returned</a></div><div class=\"lev3 toc-item\"><a href=\"#Data-inspection-(root)\" data-toc-modified-id=\"Data-inspection-(root)-601\"><span class=\"toc-item-num\">6.0.1 </span>Data inspection (root)</a></div><div class=\"lev3 toc-item\"><a href=\"#Get-data--(token)\" data-toc-modified-id=\"Get-data--(token)-602\"><span class=\"toc-item-num\">6.0.2 </span>Get data (token)</a></div><div class=\"lev1 toc-item\"><a href=\"#Client\" data-toc-modified-id=\"Client-7\"><span class=\"toc-item-num\">7 </span>Client</a></div>\n\n# Summary\n\n * Master post for the blog series that holds all the links related to making web service calls to Eoddata.com. Overview of the web service can be found [here](http://ws.eoddata.com/data.asmx)\n * Download the [class definition file](https://adriantorrie.github.io/downloads/code/eoddata.py) for an easy-to-use client, which is demonstrated below\n * This post shows you how to create a secure credentials file to hold the username and password so you don't have to keep entering it, and will allow for automation later.\n * A quick overview is given below of establishing a session using the `requests` module, and parsing the xml response using `xml.etree.cElementTree`. Then a quick inspection of the objects created follows.\n\nThe following links were used to help get these things working.\n\n* http://stackoverflow.com/a/17378332/893766\n* http://stackoverflow.com/a/1912483/893766\n* hidden password entry: https://docs.python.org/2/library/getpass.html\n\n# Version Control",
"%run ../../code/version_check.py",
"Change Log\nDate Created: 2017-03-25\n\nDate of Change Change Notes\n-------------- ----------------------------------------------------------------\n2017-03-25 Initial draft\n2017-04-02 Added \"file saved: <location>\" output\n\n[Top]\nSetup",
"%run ../../code/eoddata.py\n\nfrom getpass import getpass\nimport json\nimport os\nimport os.path\nimport requests as r\nimport stat\nimport xml.etree.cElementTree as etree\n\nws = 'http://ws.eoddata.com/data.asmx'\nns='http://ws.eoddata.com/Data'\nsession = r.Session()\n\nusername = getpass()\n\npassword = getpass()",
"[Top]\nSecure Credentials File\nCreate credentials file for later usage. The file will have permissions created so only the current user can access the file. The following SO post was followed.\nThe following directory will be created if it doesn't exist:\n * Windows: %USERPROFILE%/.eoddata\n * Linux: ~/.eoddata",
"# gather credentials\ncredentials = {'username': username, 'password': password}\n\n# set filename variables\ncredentials_dir = os.path.join(os.path.expanduser(\"~\"), '.eoddata')\ncredentials_file_name = 'credentials'\ncredentials_path = os.path.join(credentials_dir, credentials_file_name)\n\n# set security variables\nflags = os.O_WRONLY | os.O_CREAT | os.O_EXCL # Refer to \"man 2 open\".\nmode = stat.S_IRUSR | stat.S_IWUSR # This is 0o600 in octal and 384 in decimal.\n\n# create directory for file if not exists\nif not os.path.exists(credentials_dir):\n os.makedirs(credentials_dir)\n\n# for security, remove file with potentially elevated mode\ntry:\n os.remove(credentials_path)\nexcept OSError:\n pass\n\n# open file descriptor\numask_original = os.umask(0)\ntry:\n fdesc = os.open(credentials_path, flags, mode)\nfinally:\n os.umask(umask_original)\n\n# save credentials in secure file\nwith os.fdopen(fdesc, 'w') as f:\n json.dump(credentials, f)\n f.write(\"\\n\")\n \nprint(\"file saved: {}\".format(credentials_path))",
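Once the file exists, later sessions can load it back with `json.load`. The sketch below round-trips dummy credentials through a temporary directory standing in for `~/.eoddata`; the `demo`/`secret` values are placeholders, not real credentials:

```python
import json
import os
import tempfile

# a temporary directory stands in for ~/.eoddata in this sketch
tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, 'credentials')

# write the file the same way the cell above does
with open(path, 'w') as f:
    json.dump({'username': 'demo', 'password': 'secret'}, f)

# later: read the credentials back without prompting via getpass
with open(path) as f:
    credentials = json.load(f)

username = credentials['username']
password = credentials['password']
```

Reading from the file is what makes unattended, automated runs possible later in the series.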
"[Top]\nInspect the XML returned",
"call = 'Login'\nurl = '/'.join((ws, call))\n\npayload = {'Username': username, 'Password': password}\n\nresponse = session.get(url, params=payload, stream=True)\n\nif response.status_code == 200:\n root = etree.parse(response.raw).getroot()",
"Data inspection (root)",
"dir(root)\n\nfor child in root.getchildren():\n print (child.tag, child.attrib)\n\nfor item in root.items():\n print (item)\n\nfor key in root.keys():\n print (key)\n\nprint (root.get('Message'))\nprint (root.get('Token'))\nprint (root.get('DataFormat'))\nprint (root.get('Header'))\nprint (root.get('Suffix'))",
"Get data (token)",
"token = root.get('Token')",
"[Top]\nClient",
"# client can be opened using a with statement\nwith (Client()) as eoddata:\n print('token: {}'.format(eoddata.get_token()))\n\n# initialise using secure credentials file\neoddata = Client()\n\n# client field accessors\nws = eoddata.get_web_service()\nns = eoddata.get_namespace()\ntoken = eoddata.get_token()\nsession = eoddata.get_session()\n\nprint('ws: {}'.format(ws))\nprint('ns: {}'.format(ns))\nprint('token: {}'.format(token))\nprint(session)\n\n# the client has a list of exchange codes avaiable once intialised\neoddata.get_exchange_codes()\n\n# client must be closed if opened outside a with block\nsession.close()\neoddata.close_session()",
"[Top]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
metpy/MetPy
|
v0.8/_downloads/Natural_Neighbor_Verification.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Natural Neighbor Verification\nWalks through the steps of Natural Neighbor interpolation to validate that the algorithmic\napproach taken in MetPy is correct.\nFind natural neighbors visual test\nA triangle is a natural neighbor for a point if the\ncircumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>_ of the\ntriangle contains that point. It is important that we correctly grab the correct triangles\nfor each point before proceeding with the interpolation.\nAlgorithmically:\n\n\nWe place all of the grid points in a KDTree. These provide worst-case O(n) time\n complexity for spatial searches.\n\n\nWe generate a Delaunay Triangulation <https://docs.scipy.org/doc/scipy/\n reference/tutorial/spatial.html#delaunay-triangulations>_\n using the locations of the provided observations.\n\n\nFor each triangle, we calculate its circumcenter and circumradius. Using\n KDTree, we then assign each grid a triangle that has a circumcenter within a\n circumradius of the grid's location.\n\n\nThe resulting dictionary uses the grid index as a key and a set of natural\n neighbor triangles in the form of triangle codes from the Delaunay triangulation.\n This dictionary is then iterated through to calculate interpolation values.\n\n\nWe then traverse the ordered natural neighbor edge vertices for a particular\n grid cell in groups of 3 (n - 1, n, n + 1), and perform calculations to generate\n proportional polygon areas.\n\n\nCircumcenter of (n - 1), n, grid_location\n Circumcenter of (n + 1), n, grid_location\nDetermine what existing circumcenters (ie, Delaunay circumcenters) are associated\n with vertex n, and add those as polygon vertices. Calculate the area of this polygon.\n\n\nIncrement the current edges to be checked, i.e.:\n n - 1 = n, n = n + 1, n + 1 = n + 2\n\n\nRepeat steps 5 & 6 until all of the edge combinations of 3 have been visited.\n\n\nRepeat steps 4 through 7 for each grid cell.",
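Step 3 above hinges on computing the circumcenter and circumradius of each Delaunay triangle. A standalone sketch of that calculation (the standard 2D determinant formula, not MetPy's own implementation) is:

```python
import math

def circumcircle(a, b, c):
    """Circumcenter and circumradius of triangle (a, b, c) in 2D."""
    ax, ay = a
    bx, by = b
    cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax * ax + ay * ay) * (by - cy)
          + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx)
          + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    r = math.hypot(ux - ax, uy - ay)  # distance from center to any vertex
    return (ux, uy), r

# right triangle: the circumcenter is the midpoint of the hypotenuse
center, radius = circumcircle((0, 0), (2, 0), (0, 2))
```

A grid point is then a natural neighbor of a triangle exactly when its distance to `center` is less than `radius`, which is what the KDTree query in step 3 tests in bulk.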
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d\nfrom scipy.spatial.distance import euclidean\n\nfrom metpy.gridding import polygons, triangles\nfrom metpy.gridding.interpolation import nn_point",
"For a test case, we generate 10 random points and observations, where the\nobservation values are just the x coordinate value times the y coordinate\nvalue divided by 1000.\nWe then create two test points (grid 0 & grid 1) at which we want to\nestimate a value using natural neighbor interpolation.\nThe locations of these observations are then used to generate a Delaunay triangulation.",
"np.random.seed(100)\n\npts = np.random.randint(0, 100, (10, 2))\nxp = pts[:, 0]\nyp = pts[:, 1]\nzp = (pts[:, 0] * pts[:, 0]) / 1000\n\ntri = Delaunay(pts)\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\ndelaunay_plot_2d(tri, ax=ax)\n\nfor i, zval in enumerate(zp):\n ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))\n\nsim_gridx = [30., 60.]\nsim_gridy = [30., 60.]\n\nax.plot(sim_gridx, sim_gridy, '+', markersize=10)\nax.set_aspect('equal', 'datalim')\nax.set_title('Triangulation of observations and test grid cell '\n 'natural neighbor interpolation values')\n\nmembers, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nval = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)\nax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))\n\nval = nn_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], tri_info)\nax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))",
"Using the circumcenter and circumcircle radius information from\n:func:metpy.gridding.triangles.find_natural_neighbors, we can visually\nexamine the results to see if they are correct.",
"def draw_circle(ax, x, y, r, m, label):\n th = np.linspace(0, 2 * np.pi, 100)\n nx = x + r * np.cos(th)\n ny = y + r * np.sin(th)\n ax.plot(nx, ny, m, label=label)\n\n\nmembers, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\ndelaunay_plot_2d(tri, ax=ax)\nax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)\n\nfor i, info in tri_info.items():\n x_t = info['cc'][0]\n y_t = info['cc'][1]\n\n if i in members[1] and i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[0]:\n draw_circle(ax, x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n elif i in members[1]:\n draw_circle(ax, x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)\n else:\n draw_circle(ax, x_t, y_t, info['r'], 'k:', str(i) + ': no match')\n ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)\n\nax.set_aspect('equal', 'datalim')\nax.legend()",
"What?....the circle from triangle 8 looks pretty darn close. Why isn't\ngrid 0 included in that circle?",
"x_t, y_t = tri_info[8]['cc']\nr = tri_info[8]['r']\n\nprint('Distance between grid0 and Triangle 8 circumcenter:',\n euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))\nprint('Triangle 8 circumradius:', r)",
"Let's do a manual check of the above interpolation value for grid 0 (southernmost grid)\nGrab the circumcenters and radii for natural neighbors",
"cc = np.array([tri_info[m]['cc'] for m in members[0]])\nr = np.array([tri_info[m]['r'] for m in members[0]])\n\nprint('circumcenters:\\n', cc)\nprint('radii\\n', r)",
"Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram\n<https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams>_\nwhich serves as a complementary (but not necessary)\nspatial data structure that we use here simply to show areal ratios.\nNotice that the two natural neighbor triangle circumcenters are also vertices\nin the Voronoi plot (green dots), and the observations are in the polygons (blue dots).",
"vor = Voronoi(list(zip(xp, yp)))\n\nfig, ax = plt.subplots(1, 1, figsize=(15, 10))\nvoronoi_plot_2d(vor, ax=ax)\n\nnn_ind = np.array([0, 5, 7, 8])\nz_0 = zp[nn_ind]\nx_0 = xp[nn_ind]\ny_0 = yp[nn_ind]\n\nfor x, y, z in zip(x_0, y_0, z_0):\n ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))\n\nax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)\nax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))\nax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',\n label='natural neighbor\\ncircumcenters')\n\nfor center in cc:\n ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),\n xy=(center[0] + 1, center[1] + 1))\n\ntris = tri.points[tri.simplices[members[0]]]\nfor triangle in tris:\n x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]\n y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]\n ax.plot(x, y, ':', linewidth=2)\n\nax.legend()\nax.set_aspect('equal', 'datalim')\n\n\ndef draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):\n \"\"\"Draw one of the natural neighbor polygons with some information.\"\"\"\n pts = np.array(polygon)[ConvexHull(polygon).vertices]\n for i, pt in enumerate(pts):\n ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],\n [pt[1], pts[(i + 1) % len(pts)][1]], 'k-')\n\n avex, avey = np.mean(pts, axis=0)\n ax.annotate('area: {:.3f}'.format(polygons.area(pts)), xy=(avex + off_x, avey + off_y),\n fontsize=12)\n\n\ncc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc1, cc2])\n\ncc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))\ncc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)\n\ncc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))\ndraw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)\n\ncc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))\ncc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))\ndraw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])",
"Put all of the generated polygon areas and their affiliated values in arrays.\nCalculate the total area of all of the generated polygons.",
"areas = np.array([60.434, 448.296, 25.916, 70.647])\nvalues = np.array([0.064, 1.156, 2.809, 0.225])\ntotal_area = np.sum(areas)\nprint(total_area)",
"For each polygon area, calculate its percent of total area.",
"proportions = areas / total_area\nprint(proportions)",
"Multiply the percent of total area by the respective values.",
"contributions = proportions * values\nprint(contributions)",
"The sum of this array is the interpolation value!",
"interpolation_value = np.sum(contributions)\nfunction_output = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)\n\nprint(interpolation_value, function_output)",
"The values are slightly different due to truncating the area values in\nthe above visual example to the 3rd decimal place.",
"plt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.12/_downloads/plot_read_bem_surfaces.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Reading BEM surfaces from a forward solution\nPlot BEM surfaces used for forward solution generation.",
"# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'\n\nsurfaces = mne.read_bem_surfaces(fname, patch_stats=True)\n\nprint(\"Number of surfaces : %d\" % len(surfaces))",
"Show result",
"head_col = (0.95, 0.83, 0.83) # light pink\nskull_col = (0.91, 0.89, 0.67)\nbrain_col = (0.67, 0.89, 0.91) # light blue\ncolors = [head_col, skull_col, brain_col]\n\n# 3D source space\nfrom mayavi import mlab # noqa\n\nmlab.figure(size=(600, 600), bgcolor=(0, 0, 0))\nfor c, surf in zip(colors, surfaces):\n points = surf['rr']\n faces = surf['tris']\n mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,\n color=c, opacity=0.3)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/workshops
|
extras/tensorflow_lattice/04_lattice_basics.ipynb
|
apache-2.0
|
[
"Basics of lattice models\nIn this notebook, we'll explain a lattice model, an interpolated lookup table.\nIn addition, we'll show how monotonicity and smooth regularizers can change the model.\nFirst we need to import libraries we're going to use.",
"!pip install tensorflow_lattice\nimport tensorflow as tf\nimport tensorflow_lattice as tfl\nfrom mpl_toolkits.mplot3d import Axes3D\nimport matplotlib.pyplot as plt\nfrom matplotlib import cm\nfrom matplotlib.ticker import LinearLocator, FormatStrFormatter\nimport numpy as np",
"Lattice model visualization\nNow, let us define helper functions for visualizing the surface of a 2d lattice.",
"# Hypercube (multilinear) interpolation in a 2 x 2 lattice.\n# params[0] == lookup value at (0, 0)\n# params[1] == lookup value at (0, 1)\n# params[2] == lookup value at (1, 0)\n# params[3] == lookup value at (1, 1)\ndef twod(x1, x2, params):\n y = ((1 - x1) * (1 - x2) * params[0]\n + (1 - x1) * x2 * params[1]\n + x1 * (1 - x2) * params[2]\n + x1 * x2 * params[3])\n return y\n\n# This function will generate 3d plot for lattice function values.\n# params uniquely characterizes the lattice lookup values.\ndef lattice_surface(params):\n print('Lattice params:')\n print(params)\n \n %matplotlib inline\n fig = plt.figure()\n ax = fig.gca(projection='3d')\n\n # Make data.\n n = 50\n xv, yv = np.meshgrid(np.linspace(0.0, 1.0, num=n),\n np.linspace(0.0, 1.0, num=n))\n zv = np.zeros([n, n])\n for k1 in range(n):\n for k2 in range(n):\n zv[k1, k2] = twod(xv[k1, k2], yv[k1, k2], params)\n\n # Plot the surface.\n surf = ax.plot_surface(xv, yv, zv, cmap=cm.coolwarm)\n # Customize the z axis.\n ax.set_zlim(0.0, 1.0)\n ax.zaxis.set_major_locator(LinearLocator(10))\n ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))\n\n # Add a color bar which maps values to colors.\n fig.colorbar(surf, shrink=0.5, aspect=5,)",
"Let's draw the surface of a 2d lattice model.\nThis model represents an \"XOR\" function.",
"# This will plot the surface plot.\nlattice_surface([0.0, 1.0, 1.0, 0.0])",
"Train XOR function\nWe'll provide a synthetic data that represents the \"XOR\" function, that is\nf(0, 0) = 0\nf(0, 1) = 1\nf(1, 0) = 1\nf(1, 1) = 0\nand check whether a lattice can learn this function.",
"# Reset the graph.\ntf.reset_default_graph()\n\n# Prepare the dataset.\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [0.0]]\n\n# Define placeholders.\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, _, _) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n# Iterate 100 times\nfor _ in range(100):\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw the surface!\nlattice_surface(lattice_param_val[0])",
"Train with monotonicity\nNow we'll impose monotonicity on the lattice model. We'll use the same synthetic data generated by the \"XOR\" function, but now we'll require full monotonicity in both directions, x1 and x2. Note that the data itself is not monotonic, since the \"XOR\" function value decreases: f(1, 0) > f(1, 1) and f(0, 1) > f(1, 1).\nSo the trained model will do its best to fit the data while satisfying the monotonicity constraints.",
"tf.reset_default_graph()\n\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [0.0]]\n\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, projection_op, _) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1, is_monotone=True)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.Session()\nsess.run(tf.global_variables_initializer())\n# Iterate 100 times\nfor _ in range(100):\n  # Apply gradient.\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n  # Then projection.\n  sess.run(projection_op)\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw it!\n# You can see that the prediction does not decrease.\nlattice_surface(lattice_param_val[0])",
"Train with partial monotonicity\nNow we'll set partial monotonicity. Here only one input is constrained to be monotonic.",
"tf.reset_default_graph()\n\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [0.0]]\n\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, projection_op, _) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1, is_monotone=[True, False])\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 100 times\nfor _ in range(100):\n  # Apply gradient.\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n  # Then projection.\n  sess.run(projection_op)\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw it!\n# You can see that the prediction does not decrease in one direction.\nlattice_surface(lattice_param_val[0])",
"Training OR function\nNow we switch to a synthetic dataset generated by the \"OR\" function to illustrate other regularizers.\nf(0, 0) = 0\nf(0, 1) = 1\nf(1, 0) = 1\nf(1, 1) = 1",
"tf.reset_default_graph()\n\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [1.0]]\n\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, _, _) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 100 times\nfor _ in range(100):\n  # Apply gradient.\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw it!\nlattice_surface(lattice_param_val[0])",
"Laplacian regularizer\nThe Laplacian regularizer penalizes differences between adjacent lookup values. In other words, it tries to make the slope of each face as small as possible.",
"tf.reset_default_graph()\n\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [1.0]]\n\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, _, regularization) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1, l2_laplacian_reg=[0.0, 1.0])\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\nloss += regularization\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 1000 times\nfor _ in range(1000):\n  # Apply gradient.\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw it!\n# With heavy Laplacian regularization along the second axis, the second axis's slope becomes zero.\nlattice_surface(lattice_param_val[0])",
"Torsion regularizer\nThe torsion regularizer penalizes nonlinear interactions between the features.",
"tf.reset_default_graph()\n\nx_data = [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]\ny_data = [[0.0], [1.0], [1.0], [1.0]]\n\nx = tf.placeholder(dtype=tf.float32, shape=(None, 2))\ny_ = tf.placeholder(dtype=tf.float32, shape=(None, 1))\n\n# 2 x 2 lattice with 1 output.\n# lattice_param is [output_dim, 4] tensor.\nlattice_sizes = [2, 2]\n(y, lattice_param, _, regularization) = tfl.lattice_layer(\n    x, lattice_sizes=lattice_sizes, output_dim=1, l2_torsion_reg=1.0)\n\n# Squared loss\nloss = tf.reduce_mean(tf.square(y - y_))\nloss += regularization\n\n# Minimize!\ntrain_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)\n\nsess = tf.InteractiveSession()\nsess.run(tf.global_variables_initializer())\n# Iterate 1000 times\nfor _ in range(1000):\n  # Apply gradient.\n  sess.run(train_op, feed_dict={x: x_data, y_: y_data})\n\n# Fetching trained lattice parameter.\nlattice_param_val = sess.run(lattice_param)\n# Draw it!\n# With heavy Torsion regularization, the model becomes a linear model.\nlattice_surface(lattice_param_val[0])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
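The lattice surfaces plotted above come from plain bilinear interpolation of the four corner lookup values, so the notebook's `twod()` helper can be exercised on its own, without TensorFlow:

```python
# Bilinear (hypercube) interpolation on a 2x2 lattice, mirroring the
# notebook's twod() helper; params are the lookup values at the corners
# (0,0), (0,1), (1,0), (1,1).
def twod(x1, x2, params):
    return ((1 - x1) * (1 - x2) * params[0]
            + (1 - x1) * x2 * params[1]
            + x1 * (1 - x2) * params[2]
            + x1 * x2 * params[3])

xor_params = [0.0, 1.0, 1.0, 0.0]  # the "XOR" lookup table

# Corners reproduce the table exactly; the centre interpolates to 0.5,
# which is why an unconstrained 2x2 lattice can fit XOR perfectly.
print(twod(0, 0, xor_params), twod(0.5, 0.5, xor_params))  # prints: 0.0 0.5
```

Imposing monotonicity, as in the constrained cells above, then amounts to projecting these corner parameters so that they never decrease along the monotone axes.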
MonicaGutierrez/PracticalMachineLearningClass
|
exercises/04-BikesRent.ipynb
|
mit
|
[
"Exercise 04\nEstimate a regression using the Capital Bikeshare data\nForecast use of a city bikeshare system\nWe'll be working with a dataset from Capital Bikeshare that was used in a Kaggle competition (data dictionary).\nGet started on this competition through Kaggle Scripts\nBike sharing systems are a means of renting bicycles where the process of obtaining membership, rental, and bike return is automated via a network of kiosk locations throughout a city. Using these systems, people are able to rent a bike from one location and return it to a different place on an as-needed basis. Currently, there are over 500 bike-sharing programs around the world.\nThe data generated by these systems makes them attractive for researchers because the duration of travel, departure location, arrival location, and time elapsed is explicitly recorded. Bike sharing systems therefore function as a sensor network, which can be used for studying mobility in a city. In this competition, participants are asked to combine historical usage patterns with weather data in order to forecast bike rental demand in the Capital Bikeshare program in Washington, D.C.",
"import pandas as pd\nimport numpy as np\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# read the data and set the datetime as the index\nurl = 'https://raw.githubusercontent.com/justmarkham/DAT8/master/data/bikeshare.csv'\nbikes = pd.read_csv(url, index_col='datetime', parse_dates=True)\n\n# \"count\" is a method, so it's best to name that column something else\nbikes.rename(columns={'count':'total'}, inplace=True)\n\nbikes.head()",
"datetime - hourly date + timestamp \nseason - \n1 = spring\n2 = summer \n3 = fall \n4 = winter \nholiday - whether the day is considered a holiday\nworkingday - whether the day is neither a weekend nor holiday\nweather - \n1: Clear, Few clouds, Partly cloudy, Partly cloudy \n2: Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist \n3: Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds \n4: Heavy Rain + Ice Pallets + Thunderstorm + Mist, Snow + Fog \ntemp - temperature in Celsius\natemp - \"feels like\" temperature in Celsius\nhumidity - relative humidity\nwindspeed - wind speed\ncasual - number of non-registered user rentals initiated\nregistered - number of registered user rentals initiated\ntotal - number of total rentals",
"bikes.shape",
"Exercise 4.1\nWhat is the relation between the temperature and total?\nFor a one percent increase in temperature how much the bikes shares increases?\nUsing sklearn estimate a linear regression and predict the total bikes share when the temperature is 31 degrees",
"# Pandas scatter plot\nbikes.plot(kind='scatter', x='temp', y='total', alpha=0.2)\n\nfeature_cols = ['temp']\nX1 = bikes[feature_cols]\nY1 = bikes.total\n\nfrom sklearn.linear_model import LinearRegression\n\nclf1 = LinearRegression()\nclf1.fit(X1, Y1)\nclf1.predict(X1)\n\nprint(clf1.coef_)\nprint(clf1.intercept_)",
"The relationship between temperature and total rentals is directly proportional. In the first plot we can see that as the temperature increases, the total number of rented bikes increases as well; this is confirmed by the coefficient of the linear regression model (B1), which, being positive, indicates that when the X variable (temp) increases, the Y variable (total) increases too.\nIf the temperature increases by 1 unit, the total of rented bikes increases by about 9 units.",
"prediction=clf1.intercept_+(clf1.coef_*31)\nprediction",
"The predicted total of rented bikes when the temperature is 31°C is about 290.\nExercise 04.2\nEvaluate the model using the MSE",
"Y1_pred=clf1.predict(X1)\nY1_pred\n\nfrom sklearn import metrics\nimport numpy as np\nprint('MSE:', metrics.mean_squared_error(Y1, Y1_pred))\n",
"Exercise 04.3\nDoes the scale of the features matter?\nLet's say that temperature was measured in Fahrenheit, rather than Celsius. How would that affect the model?",
"bikes[\"temp_conv\"]=(bikes.temp*(9/5))+32\nbikes.head()\n\nfeature_cols2 = ['temp_conv']\nX2 = bikes[feature_cols2]\nY2 = bikes.total\n\nclf2 = LinearRegression()\nclf2.fit(X2, Y2)\nY2_pred=clf2.predict(X2)\nY2_pred\n\nY1_pred-Y2_pred",
"As seen above, the difference between the regression predictions with temperature in degrees Celsius and in degrees Fahrenheit is essentially zero. In other words, even though the temperature is on a different scale, the predictions show no variation at all.\nExercise 04.4\nRun a regression model using as features the temperature and temperature$^2$ using the OLS equations",
"bikes['temp2']=bikes.temp**2\nbikes.head()\n\nfeature_cols3 = ['temp','temp2']\nX3 = bikes[feature_cols3]\nY3 = bikes.total\n\nclf3 = LinearRegression()\nclf3.fit(X3, Y3)\nclf3.coef_,clf3.intercept_\n\nY3_pred=clf3.predict(X3)\nY3_pred",
"Exercise 04.5\nData visualization.\nWhat behavior is unexpected?",
"# explore more features\nfeature_cols = ['temp', 'season', 'weather', 'humidity']\n\n# multiple scatter plots in Pandas\nfig, axs = plt.subplots(1, len(feature_cols), sharey=True)\nfor index, feature in enumerate(feature_cols):\n bikes.plot(kind='scatter', x=feature, y='total', ax=axs[index], figsize=(16, 3))",
"Are you seeing anything that you did not expect?\nseasons: \n * 1 = spring\n * 2 = summer \n * 3 = fall \n * 4 = winter",
"# pivot table of season and month\nmonth = bikes.index.month\npd.pivot_table(bikes, index='season', columns=month, values='temp', aggfunc=np.count_nonzero).fillna(0)\n\n# box plot of rentals, grouped by season\nbikes.boxplot(column='total', by='season')\n\n# line plot of rentals\nbikes.total.plot()",
"The unexpected behavior is that more bikes are rented in winter. One would think the other seasons would see more rentals, but the plots say otherwise. The box plots show that the highest average is for \"winter\", and the previous plot shows that October, which here corresponds to winter, is the month with the most rentals. Another reason to call this unexpected is that in the first exercise we saw that the relationship between temperature and total rentals is directly proportional, so higher temperatures should mean more rentals, and that pattern does not show up graphically here.\nExercise 04.6\nEstimate a regression using more features ['temp', 'season', 'weather', 'humidity'].\nHow is the performance compared to using only the temperature?",
"feature_cols4 = ['temp', 'season', 'weather', 'humidity']\nX4 = bikes[feature_cols4]\nY4 = bikes.total\n\nclf4 = LinearRegression()\nclf4.fit(X4, Y4)\nclf4.coef_,clf4.intercept_\n\nclf1.score(X1,Y1,sample_weight=None), clf4.score(X4,Y4,sample_weight=None)\n\nY4_pred=clf4.predict(X4)\nY4_pred\n\nprint('MSE:', metrics.mean_squared_error(Y1, Y1_pred))\n\nprint('MSE:', metrics.mean_squared_error(Y4, Y4_pred))",
"Exercise 04.7 (3 points)\nSplit randomly the data in train and test\nWhich of the following models is the best in the testing set?\n* ['temp', 'season', 'weather', 'humidity']\n* ['temp', 'season', 'weather']\n* ['temp', 'season', 'humidity']",
"import numpy as np\nfrom sklearn.model_selection import train_test_split\n\n# Use the same test_size and random_state for all three models so the\n# comparison on the test set is fair.\nX4_train, X4_test, Y4_train, Y4_test = train_test_split(X4, Y4, test_size=0.35, random_state=666)\nprint(Y4_train.shape, Y4_test.shape)\n\nfeature_cols = ['temp', 'season', 'weather']\nX5 = bikes[feature_cols]\nY5 = bikes.total\nX5_train, X5_test, Y5_train, Y5_test = train_test_split(X5, Y5, test_size=0.35, random_state=666)\nprint(Y5_train.shape, Y5_test.shape)\n\nfeature_cols = ['temp', 'season', 'humidity']\nX6 = bikes[feature_cols]\nY6 = bikes.total\nX6_train, X6_test, Y6_train, Y6_test = train_test_split(X6, Y6, test_size=0.35, random_state=666)\nprint(Y6_train.shape, Y6_test.shape)\n\nclf4 = LinearRegression()\nclf4.fit(X4_train, Y4_train)\n\nclf5 = LinearRegression()\nclf5.fit(X5_train, Y5_train)\n\nclf6 = LinearRegression()\nclf6.fit(X6_train, Y6_train)\n\nY4_pred=clf4.predict(X4_test)\nY5_pred=clf5.predict(X5_test)\nY6_pred=clf6.predict(X6_test)\n\nprint('MSE:', metrics.mean_squared_error(Y4_test, Y4_pred))\nprint('MSE:', metrics.mean_squared_error(Y5_test, Y5_pred))\nprint('MSE:', metrics.mean_squared_error(Y6_test, Y6_pred))",
"The lowest MSE turned out to be that of the (temp, season, humidity) model, so it is the best model under the train/test approach."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
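Exercise 04.3's point about feature scale can be checked directly: refitting ordinary least squares on a linearly rescaled feature changes the coefficients but not the fitted values. A sketch on synthetic data (illustrative numbers, not the bikeshare dataset):

```python
import numpy as np

# Toy data standing in for the bikeshare temp -> total relation.
rng = np.random.default_rng(0)
temp_c = rng.uniform(0, 40, size=200)
total = 6 + 9 * temp_c + rng.normal(0, 20, size=200)

# Ordinary least squares fit in Celsius and in Fahrenheit.
slope_c, intercept_c = np.polyfit(temp_c, total, 1)
temp_f = temp_c * 9 / 5 + 32
slope_f, intercept_f = np.polyfit(temp_f, total, 1)

pred_c = slope_c * temp_c + intercept_c
pred_f = slope_f * temp_f + intercept_f
# Predictions agree despite different coefficients: the scale only
# matters for interpreting the slope, not for the model's fit or MSE.
print(np.allclose(pred_c, pred_f))  # prints: True
```

The Fahrenheit slope is exactly 5/9 of the Celsius slope, compensating for the stretched axis.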