GoogleCloudPlatform/vertex-ai-samples
notebooks/community/migration/UJ3 Custom Training Custom Container TF Keras.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex SDK: Train & deploy a TensorFlow model with a custom container (as opposed to a pre-built container)\nInstallation\nInstall the latest (preview) version of the Vertex SDK.", "! pip3 install -U google-cloud-aiplatform --user", "Install the Google cloud-storage library as well.", "! pip3 install google-cloud-storage", "Restart the Kernel\nOnce you've installed the Vertex SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"AUTORUN\"):\n    # Automatically restart kernel after installs\n    import IPython\n\n    app = IPython.Application.instance()\n    app.kernel.do_shutdown(True)", "Before you begin\nGPU run-time\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU.\nSet up your GCP project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebooks.\n\n\nEnter your project ID in the cell below. 
Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\"  # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n    # Get your GCP project id from gcloud\n    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n    PROJECT_ID = shell_output[0]\n    print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are the regions supported for Vertex AI. When possible, we recommend choosing the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou cannot use a Multi-Regional Storage bucket for training with Vertex. Not all regions support all Vertex services. For the latest support per region, see Region support for Vertex AI services.", "REGION = \"us-central1\"  # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, you create a timestamp for each session and append it to the names of the resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your GCP account\nIf you are using Google Cloud Notebooks, your environment is already\nauthenticated. Skip this step.\nNote: If you are on a Vertex notebook and run the cell, the cell knows to skip executing the authentication steps.", "import os\nimport sys\n\n# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your Google Cloud account. 
This provides access\n# to your Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Vertex, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this tutorial in a notebook locally, replace the string\n # below with the path to your service account key and run this cell to\n # authenticate your Google Cloud account.\n else:\n %env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json\n\n # Log in to your account on Google Cloud\n ! gcloud auth login", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nThis tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.\nSet the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.", "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION gs://$BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al gs://$BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex SDK\nImport the Vertex SDK into our Python environment.", "import os\nimport sys\nimport time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex AI constants\nSet up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for the model, endpoint, job and prediction services.\nPARENT: The Vertex AI location root path for dataset, model and endpoint resources.", "# API Endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex AI location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "Clients\nThe Vertex SDK works as a client/server model. On your side (the Python script) you create a client that sends requests to and receives responses from the server (Vertex).\nYou will use several clients in this tutorial, so set them all up upfront.\n\nModel Service for managed models.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving. 
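The endpoint and parent constants above reduce to plain string formatting. A minimal, self-contained sketch (the project ID and region below are placeholders, not values from this tutorial):

```python
# Sketch: derive the Vertex AI regional API endpoint and the location
# "parent" path from a project ID and region (pure string formatting,
# no API calls are made here).
def vertex_paths(project_id: str, region: str):
    api_endpoint = "{}-aiplatform.googleapis.com".format(region)
    parent = "projects/{}/locations/{}".format(project_id, region)
    return api_endpoint, parent

endpoint, parent = vertex_paths("my-project", "us-central1")
print(endpoint)  # us-central1-aiplatform.googleapis.com
print(parent)    # projects/my-project/locations/us-central1
```

Every service client in this tutorial is constructed against the same regional endpoint, while parent scopes each request to the project and region.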
Note: Prediction has a different service endpoint.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\ndef create_job_client():\n client = aip.JobServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"model\"] = create_model_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\nclients[\"job\"] = create_job_client()\n\nfor client in clients.items():\n print(client)", "Prepare a trainer script\nPackage assembly", "! rm -rf cifar\n! mkdir cifar\n! touch cifar/README.md\n\nsetup_cfg = \"[egg_info]\\n\\\ntag_build =\\n\\\ntag_date = 0\"\n! echo \"$setup_cfg\" > cifar/setup.cfg\n\nsetup_py = \"import setuptools\\n\\\n# Requires TensorFlow Datasets\\n\\\nsetuptools.setup(\\n\\\n install_requires=[\\n\\\n 'tensorflow_datasets==1.3.0',\\n\\\n ],\\n\\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > cifar/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\\nName: Custom Training CIFAR-10\\n\\\nVersion: 0.0.0\\n\\\nSummary: Demonstration training script\\n\\\nHome-page: www.google.com\\n\\\nAuthor: Google\\n\\\nAuthor-email: aferlitsch@google.com\\n\\\nLicense: Public\\n\\\nDescription: Demo\\n\\\nPlatform: Vertex AI\"\n! echo \"$pkg_info\" > cifar/PKG-INFO\n\n! mkdir cifar/trainer\n! 
touch cifar/trainer/__init__.py", "Write the Dockerfile contents", "%%writefile cifar/Dockerfile\n\nFROM gcr.io/deeplearning-platform-release/tf2-cpu.2-1\n\nWORKDIR /\n\n# Copies the trainer code to the docker image.\nCOPY trainer /trainer\n\n# Sets up the entry point to invoke the trainer.\nENTRYPOINT [\"python\", \"-m\", \"trainer.task\"]\n", "Task.py contents", "%%writefile cifar/trainer/task.py\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\n\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n                    default='/tmp/saved_model', type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n                    default=0.01, type=float,\n                    help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n                    default=10, type=int,\n                    help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n                    default=200, type=int,\n                    help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n                    help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint('DEVICES', device_lib.list_local_devices())\n\nif args.distribute == 'single':\n    if tf.test.is_gpu_available():\n        strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n    else:\n        strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\nelif args.distribute == 'mirror':\n    strategy = tf.distribute.MirroredStrategy()\nelif args.distribute == 'multi':\n    strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\ndef make_datasets_unbatched():\n  def scale(image, 
label):\n image = tf.cast(image, tf.float32)\n image /= 255.0\n return image, label\n\n datasets, info = tfds.load(name='cifar10',\n with_info=True,\n as_supervised=True)\n return datasets['train'].map(scale).cache().shuffle(BUFFER_SIZE).repeat()\n\ndef build_and_compile_cnn_model():\n model = tf.keras.Sequential([\n tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Conv2D(32, 3, activation='relu'),\n tf.keras.layers.MaxPooling2D(),\n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='softmax')\n ])\n model.compile(\n loss=tf.keras.losses.sparse_categorical_crossentropy,\n optimizer=tf.keras.optimizers.SGD(learning_rate=args.lr),\n metrics=['accuracy'])\n return model\n\nNUM_WORKERS = strategy.num_replicas_in_sync\nGLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS\ntrain_dataset = make_datasets_unbatched().batch(GLOBAL_BATCH_SIZE)\n\nwith strategy.scope():\n model = build_and_compile_cnn_model()\n\nmodel.fit(x=train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(args.model_dir)\n", "Build the container locally", "TRAIN_IMAGE = f\"gcr.io/{PROJECT_ID}/cifar_migration:v1\"\n\n! docker build cifar -t $TRAIN_IMAGE", "Register your custom container", "! 
docker push $TRAIN_IMAGE", "Train a model\nprojects.locations.customJobs.create\nRequest", "JOB_NAME = \"custom_container_\" + TIMESTAMP\n\nWORKER_POOL_SPEC = [\n {\n \"replica_count\": 1,\n \"machine_spec\": {\"machine_type\": \"n1-standard-4\", \"accelerator_count\": 0},\n \"container_spec\": {\n \"image_uri\": TRAIN_IMAGE,\n \"args\": [\n \"--model-dir=\" + \"gs://\" + BUCKET_NAME + \"/\" + JOB_NAME,\n \"--epochs=\" + str(20),\n \"--steps=\" + str(100),\n ],\n },\n }\n]\n\nCUSTOM_JOB = {\n \"display_name\": JOB_NAME,\n \"job_spec\": {\"worker_pool_specs\": WORKER_POOL_SPEC},\n}\n\ntraining_job = aip.CustomJob(**CUSTOM_JOB)\n\nprint(\n MessageToJson(\n aip.CreateCustomJobRequest(parent=PARENT, custom_job=training_job).__dict__[\n \"_pb\"\n ]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"customJob\": {\n \"displayName\": \"custom_container_20210226022223\",\n \"jobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"replicaCount\": \"1\",\n \"containerSpec\": {\n \"imageUri\": \"gcr.io/migration-ucaip-training/cifar_migration:v1\",\n \"args\": [\n \"--model-dir=gs://migration-ucaip-trainingaip-20210226022223/custom_container_20210226022223\",\n \"--epochs=20\",\n \"--steps=100\"\n ]\n }\n }\n ]\n }\n }\n}\nCall", "request = clients[\"job\"].create_custom_job(parent=PARENT, custom_job=training_job)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/customJobs/957560278583607296\",\n \"displayName\": \"custom_container_20210226022223\",\n \"jobSpec\": {\n \"workerPoolSpecs\": [\n {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"replicaCount\": \"1\",\n \"diskSpec\": {\n \"bootDiskType\": \"pd-ssd\",\n \"bootDiskSizeGb\": 100\n },\n \"containerSpec\": {\n \"imageUri\": \"gcr.io/migration-ucaip-training/cifar_migration:v1\",\n \"args\": [\n 
\"--model-dir=gs://migration-ucaip-trainingaip-20210226022223/custom_container_20210226022223\",\n          \"--epochs=20\",\n          \"--steps=100\"\n          ]\n        }\n      }\n    ]\n  },\n  \"state\": \"JOB_STATE_PENDING\",\n  \"createTime\": \"2021-02-26T02:27:53.406955Z\",\n  \"updateTime\": \"2021-02-26T02:27:53.406955Z\"\n}", "# The full unique ID for the custom training job\ncustom_training_id = request.name\n# The short numeric ID for the custom training job\ncustom_training_short_id = custom_training_id.split(\"/\")[-1]\n\nprint(custom_training_id)", "projects.locations.customJobs.get\nCall", "request = clients[\"job\"].get_custom_job(name=custom_training_id)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n  \"name\": \"projects/116273516712/locations/us-central1/customJobs/957560278583607296\",\n  \"displayName\": \"custom_container_20210226022223\",\n  \"jobSpec\": {\n    \"workerPoolSpecs\": [\n      {\n        \"machineSpec\": {\n          \"machineType\": \"n1-standard-4\"\n        },\n        \"replicaCount\": \"1\",\n        \"diskSpec\": {\n          \"bootDiskType\": \"pd-ssd\",\n          \"bootDiskSizeGb\": 100\n        },\n        \"containerSpec\": {\n          \"imageUri\": \"gcr.io/migration-ucaip-training/cifar_migration:v1\",\n          \"args\": [\n            \"--model-dir=gs://migration-ucaip-trainingaip-20210226022223/custom_container_20210226022223\",\n            \"--epochs=20\",\n            \"--steps=100\"\n          ]\n        }\n      }\n    ]\n  },\n  \"state\": \"JOB_STATE_PENDING\",\n  \"createTime\": \"2021-02-26T02:27:53.406955Z\",\n  \"updateTime\": \"2021-02-26T02:27:53.406955Z\"\n}", "while True:\n    response = clients[\"job\"].get_custom_job(name=custom_training_id)\n    # Custom jobs report their status via the JobState enum.\n    if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n        print(\"Training job has not completed:\", response.state)\n        if response.state == aip.JobState.JOB_STATE_FAILED:\n            break\n    else:\n        print(\"Training Time:\", response.end_time - response.start_time)\n        break\n    time.sleep(60)\n\n# model artifact output directory on Google Cloud Storage\nmodel_artifact_dir = (\n    
response.job_spec.worker_pool_specs[0].container_spec.args[0].split(\"=\")[-1]\n)\nprint(\"artifact location \" + model_artifact_dir)", "Deploy the model\nLoad the saved model", "import tensorflow as tf\n\nmodel = tf.keras.models.load_model(model_artifact_dir)", "Serving function for image data", "CONCRETE_INPUT = \"numpy_inputs\"\n\n\ndef _preprocess(bytes_input):\n    decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n    decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n    resized = tf.image.resize(decoded, size=(32, 32))\n    # convert_image_dtype already scaled the pixels to [0, 1]\n    rescale = tf.cast(resized, tf.float32)\n    return rescale\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef preprocess_fn(bytes_inputs):\n    decoded_images = tf.map_fn(\n        _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n    )\n    return {\n        CONCRETE_INPUT: decoded_images\n    }  # User needs to make sure the key matches model's input\n\n\nm_call = tf.function(model.call).get_concrete_function(\n    [tf.TensorSpec(shape=[None, 32, 32, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n)\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef serving_fn(bytes_inputs):\n    images = preprocess_fn(bytes_inputs)\n    prob = m_call(**images)\n    return prob\n\n\ntf.saved_model.save(\n    model,\n    model_artifact_dir,\n    signatures={\n        \"serving_default\": serving_fn,\n    },\n)", "Get the serving function signature", "loaded = tf.saved_model.load(model_artifact_dir)\n\ninput_name = list(\n    loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\n\nprint(\"Serving function input:\", input_name)", "Example output:\nServing function input: bytes_inputs\nprojects.locations.models.upload\nRequest", "container_spec = {\n    \"image_uri\": \"gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\",\n    \"env\": [{\"name\": \"example_env_name\", \"value\": \"example_env_value\"}],\n    \"ports\": [{\"container_port\": 8080}],\n}\n\nmodel = {\n    \"display_name\": \"custom_container_TF\" + TIMESTAMP,\n    
\"metadata_schema_uri\": \"\",\n \"artifact_uri\": model_artifact_dir,\n \"container_spec\": container_spec,\n}\n\nprint(MessageToJson(aip.UploadModelRequest(parent=PARENT, model=model).__dict__[\"_pb\"]))", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"model\": {\n \"displayName\": \"custom_container_TF20210226022223\",\n \"containerSpec\": {\n \"imageUri\": \"gcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\",\n \"env\": [\n {\n \"name\": \"example_env_name\",\n \"value\": \"example_env_value\"\n }\n ],\n \"ports\": [\n {\n \"containerPort\": 8080\n }\n ]\n },\n \"artifactUri\": \"gs://migration-ucaip-trainingaip-20210226022223/custom_container_20210226022223\"\n }\n}\nCall", "request = clients[\"model\"].upload_model(parent=PARENT, model=model)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"model\": \"projects/116273516712/locations/us-central1/models/394223297069318144\"\n}", "model_id = result.model", "Make batch predictions\nMake a batch prediction file", "import cv2\nimport numpy as np\nfrom tensorflow.keras.datasets import cifar10\n\n(_, _), (x_test, y_test) = cifar10.load_data()\nx_test = (x_test / 255.0).astype(np.float32)\n\nprint(x_test.shape, y_test.shape)\n\ntest_image_1, test_label_1 = x_test[0], y_test[0]\ntest_image_2, test_label_2 = x_test[1], y_test[1]\n\ncv2.imwrite(\"tmp1.jpg\", (test_image_1 * 255).astype(np.uint8))\ncv2.imwrite(\"tmp2.jpg\", (test_image_2 * 255).astype(np.uint8))\n\n! gsutil cp tmp1.jpg gs://$BUCKET_NAME/tmp1.jpg\n! gsutil cp tmp2.jpg gs://$BUCKET_NAME/tmp2.jpg\n\ntest_item_1 = \"gs://\" + BUCKET_NAME + \"/\" + \"tmp1.jpg\"\ntest_item_2 = \"gs://\" + BUCKET_NAME + \"/\" + \"tmp2.jpg\"", "Make the batch input file\nLet's now make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. 
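The JSONL instance layout used below can be sanity-checked locally before any images are uploaded. A minimal sketch, with placeholder bytes standing in for real JPEG data (the bytes_inputs key is the serving function input name this tutorial uses; make_jsonl_lines is a hypothetical helper, not part of the notebook):

```python
import base64
import json

# Build JSONL instance lines the same way the batch input cell does,
# but from in-memory placeholder bytes instead of Cloud Storage images.
def make_jsonl_lines(payloads, input_name="bytes_inputs"):
    lines = []
    for raw in payloads:
        # Base64-encode the raw bytes and wrap them under the {"b64": ...}
        # convention the prediction service expects for binary data.
        b64str = base64.b64encode(raw).decode("utf-8")
        lines.append(json.dumps({input_name: {"b64": b64str}}))
    return "\n".join(lines)

jsonl = make_jsonl_lines([b"fake-jpeg-1", b"fake-jpeg-2"])
print(jsonl)
```

Each line is one instance; decoding the b64 field must round-trip to the original bytes, which is exactly what the prediction service does server-side.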
For JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:\n\ncontent: The Cloud Storage path to the image.\nmime_type: The content type. In our example, it is an jpeg file.", "import base64\nimport json\n\ngcs_input_uri = \"gs://\" + BUCKET_NAME + \"/\" + \"test.jsonl\"\nwith tf.io.gfile.GFile(gcs_input_uri, \"w\") as f:\n bytes = tf.io.read_file(test_item_1)\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n data = {input_name: {\"b64\": b64str}}\n f.write(json.dumps(data) + \"\\n\")\n\n bytes = tf.io.read_file(test_item_2)\n b64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")\n data = {input_name: {\"b64\": b64str}}\n f.write(json.dumps(data) + \"\\n\")\n\n! gsutil cat $gcs_input_uri", "Example output:\n{\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1Nc
SNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z\"}}\n{\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4
LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJCnB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z\"}}\nprojects.locations.batchPredictionJobs.create\nRequest", "batch_prediction_job = {\n \"display_name\": \"custom_container_TF\" + TIMESTAMP,\n \"model\": model_id,\n \"input_config\": {\n \"instances_format\": \"jsonl\",\n \"gcs_source\": {\"uris\": [gcs_input_uri]},\n },\n \"model_parameters\": ParseDict(\n {\"confidenceThreshold\": 0.5, \"maxPredictions\": 2}, Value()\n ),\n \"output_config\": {\n \"predictions_format\": \"jsonl\",\n \"gcs_destination\": {\n \"output_uri_prefix\": \"gs://\" + f\"{BUCKET_NAME}/batch_output/\"\n },\n },\n \"dedicated_resources\": {\n \"machine_spec\": {\"machine_type\": \"n1-standard-2\", \"accelerator_type\": 0},\n \"starting_replica_count\": 1,\n \"max_replica_count\": 1,\n },\n}\n\nprint(\n MessageToJson(\n aip.CreateBatchPredictionJobRequest(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"batchPredictionJob\": {\n \"displayName\": \"custom_container_TF20210226022223\",\n \"model\": \"projects/116273516712/locations/us-central1/models/394223297069318144\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226022223/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 2.0\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n 
\"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226022223/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n }\n }\n}\nCall", "request = clients[\"job\"].create_batch_prediction_job(\n parent=PARENT, batch_prediction_job=batch_prediction_job\n)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/2465140253845880832\",\n \"displayName\": \"custom_container_TF20210226022223\",\n \"model\": \"projects/116273516712/locations/us-central1/models/394223297069318144\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226022223/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"maxPredictions\": 2.0,\n \"confidenceThreshold\": 0.5\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226022223/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n },\n \"manualBatchTuningParameters\": {},\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T09:39:46.357554Z\",\n \"updateTime\": \"2021-02-26T09:39:46.357554Z\"\n}", "# The fully qualified ID for the batch job\nbatch_job_id = request.name\n# The short numeric ID for the batch job\nbatch_job_short_id = batch_job_id.split(\"/\")[-1]\n\nprint(batch_job_id)", "projects.locations.batchPredictionJobs.get\nCall", "request = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/batchPredictionJobs/2465140253845880832\",\n 
\"displayName\": \"custom_container_TF20210226022223\",\n \"model\": \"projects/116273516712/locations/us-central1/models/394223297069318144\",\n \"inputConfig\": {\n \"instancesFormat\": \"jsonl\",\n \"gcsSource\": {\n \"uris\": [\n \"gs://migration-ucaip-trainingaip-20210226022223/test.jsonl\"\n ]\n }\n },\n \"modelParameters\": {\n \"confidenceThreshold\": 0.5,\n \"maxPredictions\": 2.0\n },\n \"outputConfig\": {\n \"predictionsFormat\": \"jsonl\",\n \"gcsDestination\": {\n \"outputUriPrefix\": \"gs://migration-ucaip-trainingaip-20210226022223/batch_output/\"\n }\n },\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-2\"\n },\n \"startingReplicaCount\": 1,\n \"maxReplicaCount\": 1\n },\n \"manualBatchTuningParameters\": {},\n \"state\": \"JOB_STATE_PENDING\",\n \"createTime\": \"2021-02-26T09:39:46.357554Z\",\n \"updateTime\": \"2021-02-26T09:39:46.357554Z\"\n}", "def get_latest_predictions(gcs_out_dir):\n \"\"\" Get the latest prediction subfolder using the timestamp in the subfolder name\"\"\"\n folders = !gsutil ls $gcs_out_dir\n latest = \"\"\n for folder in folders:\n subfolder = folder.split(\"/\")[-2]\n if subfolder.startswith(\"prediction-\"):\n if subfolder > latest:\n latest = folder[:-1]\n return latest\n\n\nwhile True:\n response = clients[\"job\"].get_batch_prediction_job(name=batch_job_id)\n if response.state != aip.JobState.JOB_STATE_SUCCEEDED:\n print(\"The job has not completed:\", response.state)\n if response.state == aip.JobState.JOB_STATE_FAILED:\n break\n else:\n folder = get_latest_predictions(\n response.output_config.gcs_destination.output_uri_prefix\n )\n ! gsutil ls $folder/prediction*\n\n ! 
gsutil cat $folder/prediction*\n break\n time.sleep(60)", "Example output:\ngs://migration-ucaip-trainingaip-20210226022223/batch_output/prediction-custom_container_TF20210226022223-2021_02_26T01_39_46_305Z/prediction.errors_stats-00000-of-00001\ngs://migration-ucaip-trainingaip-20210226022223/batch_output/prediction-custom_container_TF20210226022223-2021_02_26T01_39_46_305Z/prediction.results-00000-of-00001\n{\"instance\": {\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDc
u/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z\"}}, \"prediction\": [0.0441863872, 0.0965465382, 0.131534964, 0.111121729, 0.133242682, 0.0896093622, 0.160808876, 0.116257414, 0.0309254956, 0.0857665]}\n{\"instance\": {\"bytes_inputs\": {\"b64\": \"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD9qIntrti9vhg3KkLwR69Kbc3FrYskd1LGjOjsqNjJCjLH8Mj8xXw3+yr+3v8ABbUZL2/8L/G/4ja2L0raac/xAvEbTmndtyLFKOd5AwcZwCSccV6X8Xv22/jD4K+L2n+BPA/7H+qeP4v7LSb/AISLQNYjW0ieTmWLfIoUBQiksxA6VxwxtN0VOWn4nTPC1Y1XBHpuqftI6BZ+MrDw/FZSw2dyzRyXl3p8g/eblCgbcjBG/k8dPevU1tCWIKj/AL5r5+8aftTfCqx+H9leeM/i1pXw51aWJvtWkWF1b6ldQnkqnmRqyg9c7fXGag/Zm/aY+HL69d6MPjvr/jVNWm32M19pcgSwREyVZygAJO7PbAFZ08TUjNqpt32/AdSiuVOK2PyC/Zs/4LOfs7/s+fAbQvgz4K/Ywu7rw94Bd4op9WsbfUZ1u5CGlupHBBLSMCd2MYAA4Fe0eGf+Dm/4deO9EuvDvhvSLjSWt7MpPaw+DfNiihYgNvRWK4/hyRjn3r8WvjN8MviF4C+LPiPTvhtZ6lDo8l86W6QswDID0IHUA5x7Ve/ZF1f9pX4C/Gq1+Ifw90PV7e6mgms71o7QP58EowyMrgqwJC
nB9K3w+UQxleFF4hw52lzSb5Y3aXM7Juy3dtbHRRzrCu0qlKEl17/fc/W6f/gsjpGtX40z4Zadp1280IVYYPAdsv70nO8ZQnPPToK7z4a/tKftD/ETU7TQPEur6nbpdgMmnrFHak5PUwwquPq3Wvk34QwftUfE/GtfE3xmnhm0LAiy0SwhiupgezSxouzPfb+dfdv7DPwl0rQtcivhZx4Ub1eWQtJu6lmZslmPqfWnmXD+DyjESgsSq1usYyjF+a5tWvkh18+w+IXJQpJeZ//Z\"}}, \"prediction\": [0.0441891, 0.0966139063, 0.131601468, 0.111363865, 0.133115292, 0.0897044092, 0.160883322, 0.115729697, 0.0310073923, 0.0857914686]}\nMake online predictions\nprojects.locations.endpoints.create\nRequest", "endpoint = {\"display_name\": \"custom_container_TF\" + TIMESTAMP}\n\nprint(\n MessageToJson(\n aip.CreateEndpointRequest(parent=PARENT, endpoint=endpoint).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"parent\": \"projects/migration-ucaip-training/locations/us-central1\",\n \"endpoint\": {\n \"displayName\": \"custom_container_TF20210226022223\"\n }\n}\nCall", "request = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"name\": \"projects/116273516712/locations/us-central1/endpoints/2977125644296519680\"\n}", "# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "projects.locations.endpoints.deployModel\nRequest", "deployed_model = {\n \"model\": model_id,\n \"display_name\": \"custom_container_TF\" + TIMESTAMP,\n \"dedicated_resources\": {\n \"min_replica_count\": 1,\n \"machine_spec\": {\"machine_type\": \"n1-standard-4\", \"accelerator_count\": 0},\n },\n}\n\nprint(\n MessageToJson(\n aip.DeployModelRequest(\n endpoint=endpoint_id,\n deployed_model=deployed_model,\n traffic_split={\"0\": 100},\n ).__dict__[\"_pb\"]\n )\n)", "Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/2977125644296519680\",\n \"deployedModel\": {\n \"model\": 
\"projects/116273516712/locations/us-central1/models/394223297069318144\",\n \"displayName\": \"custom_container_TF20210226022223\",\n \"dedicatedResources\": {\n \"machineSpec\": {\n \"machineType\": \"n1-standard-4\"\n },\n \"minReplicaCount\": 1\n }\n },\n \"trafficSplit\": {\n \"0\": 100\n }\n}\nCall", "request = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint_id, deployed_model=deployed_model, traffic_split={\"0\": 100}\n)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{\n \"deployedModel\": {\n \"id\": \"1297564458264035328\"\n }\n}", "# The unique ID for the deployed model\ndeployed_model_id = result.deployed_model.id\n\nprint(deployed_model_id)", "projects.locations.endpoints.predict\nPrepare file for online prediction", "import base64\n\nimport cv2\n\ntest_image = x_test[0]\ntest_label = y_test[0]\n\nprint(test_image.shape)\n\ncv2.imwrite(\"tmp.jpg\", (test_image * 255).astype(np.uint8))\nbytes = tf.io.read_file(\"tmp.jpg\")\nb64str = base64.b64encode(bytes.numpy()).decode(\"utf-8\")", "Request", "instances_list = [{\"bytes_inputs\": {\"b64\": b64str}}]\n\nprediction_request = aip.PredictRequest(endpoint=endpoint_id)\nprediction_request.instances.append(instances_list)\n\nprint(MessageToJson(prediction_request.__dict__[\"_pb\"]))", "Example output:\n{\n \"endpoint\": \"projects/116273516712/locations/us-central1/endpoints/2977125644296519680\",\n \"instances\": [\n [\n {\n \"bytes_inputs\": {\n \"b64\": 
\"/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAIBAQEBAQIBAQECAgICAgQDAgICAgUEBAMEBgUGBgYFBgYGBwkIBgcJBwYGCAsICQoKCgoKBggLDAsKDAkKCgr/2wBDAQICAgICAgUDAwUKBwYHCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgoKCgr/wAARCAAgACADASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD570PxBpmp6nfaEl48lzpUqpewPCU8lpEDqMsOeD26Z55Fa+s3HhnR/Aj6xZjV7rWrW4ke/wBMtLRGRLTaux1cuPnLlhtIAAUEE5490/ao8E6F4b8P3NxZeGksNW1z4h62Iby2t1/eC3ZoozJxwSiKQOhEZJ5JrqZtI8MftFfs56j8YI/hvo/gq1u9C0ywlbTbFoLa+1SOFWlgPGRmNiQzNkiPOflyf1WHFdark0K8UlUbkvJWel1vqmn5n5MuD6MM7qUJzbpxUXazvJSWtmuzTR8iaBoXirx54H1Hxo10mhx2V/8AZltpEE7ByAV8w8YLdRjAHAz1NcSNcXUtev8AwVrE0DajaQ+YZLY4jnXPJXrkjPPTPXGDXvXwi+F3hvwh8Ffip4i1a7GqX7a1b6fp0c84SKO3Wz3FiCdpHnSHDZ2/KAOtfP8A4v8Ah1qOoWul/Efwu4sL+wk8u2IkUi7JRhtwM5RgBkHpz0xXy+F4gzNY6Mqs3NTfvR6a6adj6bGcPZX/AGfKFKEYcqupemurufqP8c9Il/aA8BeHNS+HHh/7Ze634p0rUtMhsFWUJNdsFlR8HAAWWRXBPrmvGvi5+y/B+z1+0ZqHwW+PXx08LaL4VtJI75dOtPEksgfe8krskKIDCZWdCUkyU2MRuVga5X9lr9qAfsk/tCWPjTW9Ol1XwzpurtdXei27gBJTEyJcxBsDcu/OOAwBHBwa8S+JXxltPi3431/x34y8TT/2tqmpy3V1d6h8/mOzFiN46LkgDpgcdOK/HcPxo/qMalONqkn70ei816307I/Xa/C0XjXTrO8EtJdfR/cUfiz4m8aaBJefD/4NXcd4CJ7f/hI7bVXitZ4HkPzSQMvMxRUUTAEqFGCM4EPw/wDAsnhjwZEmrzte6ipKmWeYSbAV+bYTjAJBPTgNjNbOk+HYdL0qPxPcWsN5BK2FaO43q3fHUH8eld34kku/hP4LsvHPiPRtPvZNSkU6fYSFStvED8zsqjLsq5IBwOB1Jri/4iFn2BxSq0Yxulyq8eZLp1f4ms+BMkx2FlRquVm7u0uVvrbRH//Z\"\n }\n }\n ]\n ]\n}\nCall", "request = clients[\"prediction\"].predict(endpoint=endpoint_id, 
instances=instances_list)", "Response", "print(MessageToJson(request.__dict__[\"_pb\"]))", "Example output:\n{\n \"predictions\": [\n [\n 0.0441863947,\n 0.0965465382,\n 0.131534964,\n 0.111121736,\n 0.133242667,\n 0.0896093696,\n 0.160808861,\n 0.116257407,\n 0.0309255011,\n 0.0857665\n ]\n ],\n \"deployedModelId\": \"1297564458264035328\"\n}\nprojects.locations.endpoints.undeployModel\nCall", "request = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint_id, deployed_model_id=deployed_model_id, traffic_split={}\n)", "Response", "result = request.result()\n\nprint(MessageToJson(result.__dict__[\"_pb\"]))", "Example output:\n{}\nCleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "delete_model = True\ndelete_endpoint = True\ndelete_custom_job = True\ndelete_batchjob = True\ndelete_bucket = True\n\n# Delete the model using the Vertex AI fully qualified identifier for the model\ntry:\n if delete_model:\n clients[\"model\"].delete_model(name=model_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex AI fully qualified identifier for the endpoint\ntry:\n if delete_endpoint:\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom training using the Vertex AI fully qualified identifier for the custom training\ntry:\n if delete_custom_job:\n clients[\"job\"].delete_custom_job(name=custom_training_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex AI fully qualified identifier for the batch job\ntry:\n if delete_batchjob:\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r gs://$BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AaronCWong/phys202-2015-work
assignments/assignment07/AlgorithmsEx01.ipynb
mit
[ "Algorithms Exercise 1\nImports", "%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np", "Word counting\nWrite a function tokenize that takes a string of English text and returns a list of words. It should also remove stop words, which are common short words that are often removed before natural language processing. Your function should have the following logic:\n\nSplit the string into lines using splitlines.\nSplit each line into a list of words and merge the lists for each line.\nUse Python's builtin filter function to remove all punctuation.\nIf stop_words is a list, remove all occurrences of the words in the list.\nIf stop_words is a space delimited string of words, split them and remove them.\nRemove any remaining empty words.\nMake all words lowercase.", "file = open('mobydick_chapter1.txt')\nmobydick = file.read()\nprint(len(mobydick.split()))\n\ndef tokenize(s, stop_words=None, punctuation='`~!@#$%^&*()_-+={[}]|\\:;\"<,>.?/}\\t'):\n    \"\"\"Split a string into a list of words, removing punctuation and stop words.\"\"\"\n    words = []\n    for line in s.splitlines():\n        words.extend(line.split())\n    # strip punctuation characters from each word, then lowercase it\n    words = [''.join(filter(lambda ch: ch not in punctuation, w)).lower() for w in words]\n    if isinstance(stop_words, str):\n        stop_words = stop_words.split()\n    if stop_words is not None:\n        words = [w for w in words if w not in stop_words]\n    # drop any words that are now empty\n    return [w for w in words if w]\n\nassert tokenize(\"This, is the way; that things will end\", stop_words=['the', 'is']) == \\\n    ['this', 'way', 'that', 'things', 'will', 'end']\nwasteland = \"\"\"\nAPRIL is the cruellest month, breeding\nLilacs out of the dead land, mixing\nMemory and desire, stirring\nDull roots with spring rain.\n\"\"\"\n\nassert tokenize(wasteland, stop_words='is the of and') == \\\n    ['april','cruellest','month','breeding','lilacs','out','dead','land',\n     'mixing','memory','desire','stirring','dull','roots','with','spring',\n     'rain']", "Write a function count_words that takes a list of words and returns a dictionary where the keys in the dictionary are the unique words in the list and the values are the word counts.", "def count_words(data):\n    \"\"\"Return a word count dictionary from the list of words in data.\"\"\"\n    counts = {}\n    for word in data:\n        counts[word] = counts.get(word, 0) + 1\n    return counts\n\nassert count_words(tokenize('this and the this from and a a a')) == \\\n    {'a': 3, 'and': 2, 'from': 1, 'the': 1, 'this': 2}", "Write a function sort_word_counts that returns a list of sorted word counts:\n\nEach element of the list should be a (word, count) tuple.\nThe list should be sorted by the word counts, with the highest counts coming first.\nTo perform this sort, look at using the sorted function with a custom key and reverse argument.", "def sort_word_counts(wc):\n    \"\"\"Return a list of 2-tuples of (word, count), sorted by count descending.\"\"\"\n    return sorted(wc.items(), key=lambda t: t[1], reverse=True)\n\nassert sort_word_counts(count_words(tokenize('this and a the this this and a a a'))) == \\\n    [('a', 4), ('this', 3), ('and', 2), ('the', 1)]", "Perform a word count analysis on Chapter 1 of Moby Dick, whose text can be found in the file mobydick_chapter1.txt:\n\nRead the file into a string.\nTokenize with stop words of 'the of and a to in is it that as'.\nPerform a word count, then sort and save the result in a variable named swc.", "swc = sort_word_counts(count_words(tokenize(mobydick,\n    stop_words='the of and a to in is it that as')))\n\nassert swc[0]==('i',43)\nassert len(swc)==848", "Create a \"Cleveland Style\" dotplot of the counts of the top 50 words using Matplotlib. If you don't know what a dotplot is, you will have to do some research...", "assert True # use this for grading the dotplot" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
imcgreer/bokrm
notebooks/BokPhotAnalysis.ipynb
gpl-3.0
[ "Bok Photometric Analysis\nThis notebook examines the photometric reliability of the Bok data using repeat measurements of SDSS standard stars. For each reference star the mean and rms magnitudes are calculated from all individual measurements. The reference stars are then grouped into bins of magnitude. The distribution of the rms values (the photometric scatter) within the magnitude bins gives an indication of the photometric reliability.\nFirst, some preliminaries:", "%pylab inline\nfrom matplotlib import ticker\nimport os,sys\nsys.path.append('../bok')\nfrom collections import OrderedDict\nfrom astropy.table import Table,join\nfrom astropy.stats import sigma_clip\nimport bokrmphot\n_ = os.environ.setdefault('BOKRMDIR','../bok')", "Load a photometric statistics table. This table contains the statistics of the photometry within magnitude bins. The \"season\" keyword restricts the data to a single observing season, so that the statistics can be compared between seasons. The resulting table is saved to a FITS file so that it can be reloaded later.", "def load_pstable(psTabFn,photRun=None,season='2014',**kwargs):\n if not os.path.exists(psTabFn):\n psTab = bokrmphot.binned_phot_stats(photRun,season=season,**kwargs)\n psTab.write(psTabFn)\n psTab = Table.read(psTabFn)\n for c in psTab.colnames:\n if c.startswith('sig'):\n psTab[c].format = \"{:.4f}\"\n elif c=='outlierFrac':\n psTab[c].format = \"{:.5f}\"\n return psTab", "Plot the results from the photometric statistics table. A central line shows the median rms within magnitude bins, and a shaded region shows the inter-quartile range of the rms. I.e., half of all reference stars at a given magnitude had an rms value less than the median line.", "def plot_compare_scatters(psTabs,labels,colors='rgbcymk',filt='g',ax=None,units='mag',sfx=None):\n if sfx is None:\n sfx = ['','']\n def _append_arr(arr):\n return arr\n # used this for drawstyle=steps-post, but no equiv. 
for fill_between\n #return np.concatenate([arr,[arr[-1]]])\n if units=='mag':\n mscl = 1.0\n elif units=='mmag':\n mscl = 1e3\n if ax is None:\n figure()\n ax = subplot(111)\n for tab,l,c,_sfx in zip(psTabs,labels,colors,sfx):\n ax.fill_between(tab['mbins'],_append_arr(mscl*tab['sig%s25'%_sfx]),\n _append_arr(mscl*tab['sig%s75'%_sfx]),\n edgecolor='none',color=c,alpha=0.5)\n ax.plot(tab['mbins'],_append_arr(mscl*tab['sig%s50'%_sfx]),color=c,lw=1.5,label=l)\n legend(loc='upper left')\n xlabel('%s magnitude' % filt)\n ylabel('per-object std [%s]'%units)\n xlim(17,19)", "Comparing some early 2014 reductions\nThe next set of photometric statistics come from some early reductions of the first-year Bok data. Nov2015g was an early run of bokpipe. Note that the first round of difference imaging was based on an IDL pipeline; the associated catalogs are versioned 20140905 and have an asymptotic scattter of ~2.5% in g band, much worse than the bokpipe runs. Jan2017 and Feb2017 were subsequent runs with incremental improvements; the changes can be recovered from the git repository.", "psNov2015g = load_pstable('../phot_stats_Nov2015g.fits')\nprint psNov2015g['mbins','sig50','sig50Clip','outlierFrac']\n\npsJan2017g = load_pstable('../phot_stats_Jan2017g.fits')\nprint psJan2017g['mbins','sig50','sig50Clip','outlierFrac']\n\npsFeb2017g = load_pstable('../phot_stats_Feb2017g.fits')\nprint psFeb2017g['mbins','sig50','sig50Clip','outlierFrac']\n\npsJan2017i = load_pstable('../phot_stats_Jan2017i.fits')\nprint psJan2017i['mbins','sig50','sig50Clip','outlierFrac']\n\npsFeb2017i = load_pstable('../phot_stats_Feb2017i.fits')\nprint psFeb2017i['mbins','sig50','sig50Clip','outlierFrac']", "The improvement from the Nov2015g to Jan2017g versions is significant.", "plot_compare_scatters([psNov2015g,psJan2017g],['Nov15g','Jan17g'],'gb')", "Not much improvement in the Feb2017g version.", "plot_compare_scatters([psJan2017g,psFeb2017g],['Jan17g','Feb17g'],'gb')", "The scatter increases 
significantly if fainter stars are included in the zeropoint. Going from mag$<19.5$ to mag$<20.5$ more than doubles the scatter. Do not understand why this is...", "plot_compare_scatters([psJan2017i,psFeb2017i],['Jan17i','Feb17i'],'gb')\n\nfor psTab,l in zip([psNov2015g,psJan2017g,psFeb2017g],['Nov15g','Jan17g','Feb17g']):\n ii = where(psTab['outlierFrac']>0)[0]\n plot(psTab['mbins'][ii],log10(psTab['outlierFrac'][ii]),label=l)\nlegend()\n\nfor psTab,l in zip([psJan2017i,psFeb2017i],['Jan17i','Feb17i']):\n ii = where(psTab['outlierFrac']>0)[0]\n plot(psTab['mbins'][ii],log10(psTab['outlierFrac'][ii]),label=l)\nlegend()", "2015/6 data\nJul2017 is the first processing run to include 2015 and 2016 data. The 2014 data was also reprocessed to include all changes to that point. Mainly the improvements were in handling the illumination correction.", "procJul2017 = OrderedDict()\nfor season in ['2014','2015','2016']:\n procJul2017[season] = load_pstable('../archive/summer_2017/phot_stats_%sg_v2.fits'%season,\n 'cleanstars',season,catdir='../archive/summer_2017/')\n print season\n print procJul2017[season]['mbins','sig50','outlierFrac']", "Comparing the results for the three observing seasons. The 2014 data has less scatter, possibly because there are more data and the sky flats are more accurate.", "plot_compare_scatters(procJul2017.values(),procJul2017.keys(),'gbr')", "bokpipe_v0.3.0\nThe processing version used for the second round of difference imaging is bokpipe_v0.3.0, which was completed in Oct2017. The results from this run are shown below.", "procOct2017 = OrderedDict()\nfor season in ['2014','2015','2016','2017']:\n procOct2017[season] = load_pstable('../archive/october_2017/phot_stats_%sg_v2.fits'%season,\n 'cleanstars',season,catdir='../archive/october_2017/')\n print season\n print procOct2017[season]['mbins','sig50','outlierFrac']", "This compares the Oct2017 processing to the Jul2017 processing. 
The main improvement was in cleaning up some gain correction errors, and fixing a long-standing bug where the zeropoints were based on an aperture (15 pixel) that subsequently had an aperture correction applied. A change was made to apply the correction to the zeropoint. Now the 2014 data reach sub-1% repeatability at $g<17.7$!", "figure(figsize=(14,4))\nsubplots_adjust(0.05,0.1,0.99,0.99,0.22)\nfor pnum,season in enumerate(['2014','2015','2016'],start=1):\n ax = subplot(1,3,pnum)\n plot_compare_scatters([procJul2017[season],procOct2017[season]],\n [season+'-Jul2017',season+'-Oct2017'],'gb',ax=ax)\n ax.set_ylim(0,0.04)\n ax.grid()", "Again comparing the four seasons. The 2017 data mainly have 300s exposure times, whereas previous years mainly had 150s exposures.", "plot_compare_scatters(procOct2017.values(),procOct2017.keys(),'gbrm')", "And now the i-band:", "procOct2017i = OrderedDict()\nfor season in ['2014','2015','2016','2017']:\n procOct2017i[season] = load_pstable('../archive/october_2017/phot_stats_%si_v2.fits'%season,\n 'cleanstars',season,band='i',\n catdir='../archive/october_2017/')\n print season\n print procOct2017i[season]['mbins','sig50','outlierFrac']\n\nplot_compare_scatters(procOct2017i.values(),procOct2017.keys(),'gbrm')", "So the 2016 i-band data has some issues...\nself-calibration\nAs a shortcut to solving the full ubercalibration problem (even with a large number of repeats the Bok data have been found to be somewhat noisy for recovering ubercal parameters), a selfcal() routine was implemented that assumes the extinction parameters from SDSS and uses them to derive per-image zeropoints from the mean magnitude offsets. 
The above results were all based on zeropoints obtained from external calibration using SDSS; the following results come from the selfcal procedure which provides zeropoints on an internal Bok system.", "class NewBinnedStats(object):\n def __init__(self,tabf):\n self.tab = Table.read(tabf)\n def get(self,season,filt):\n i = where((self.tab['season']==season)&(self.tab['filter']==filt))[0][0]\n rv = Table()\n rv['mbins'] = array(self.tab.meta['MBINS'].split(',')).astype(float)\n for k in self.tab.colnames:\n if k.startswith('sig'):\n rv[k] = self.tab[k][i]\n return rv\n\noct17noselfcal = NewBinnedStats('../archive/october_2017/phot_stats_oct17noselfcal.fits')\noct17selfcal = NewBinnedStats('../archive/october_2017/phot_stats_oct17.fits')\n\nfigure(figsize=(14,7))\nsubplots_adjust(0.05,0.1,0.99,0.99,0.22)\npnum = 1\nfor b in 'gi':\n for season in ['2014','2015','2016','2017']:\n ax = subplot(2,4,pnum)\n plot_compare_scatters([oct17noselfcal.get(season,b),oct17selfcal.get(season,b)],\n [season+'-Oct2017orig',season+'-Oct2017selfcal'],\n 'gb',filt=b,ax=ax,units='mmag')\n if b=='g':\n ax.set_ylim(0,40)\n else:\n ax.set_ylim(0,80)\n ax.grid()\n pnum += 1", "selfcal has a small effect on g-band photometry (maybe slight improvement in 2015). But huge effect on i-band, and in particular addresses the weird scatter in the 2016 data. 
Now asymptotically reaching 10 mmag scatter at g,i=17.\nCheck the improvement by restricting to only photometric frames:", "oct17selfcalphoto = NewBinnedStats('../archive/october_2017/phot_stats_oct17_photo.fits')\n\nfigure(figsize=(14,7))\nsubplots_adjust(0.05,0.1,0.99,0.99,0.22)\npnum = 1\nfor b in 'gi':\n for season in ['2014','2015','2016','2017']:\n ax = subplot(2,4,pnum)\n plot_compare_scatters([oct17selfcal.get(season,b),oct17selfcalphoto.get(season,b)],\n [season+'-Oct2017selfcal',season+'-Oct2017selfcalphoto'],\n 'br',filt=b,ax=ax,units='mmag')\n if b=='g':\n ax.set_ylim(0,40)\n else:\n ax.set_ylim(0,80)\n ax.grid()\n pnum += 1", "Finally, compare the internal calib with the external rms (Bok-SDSS):", "figure(figsize=(14,7))\nsubplots_adjust(0.05,0.1,0.99,0.99,0.22)\npnum = 1\nfor b in 'gi':\n for season in ['2014','2015','2016','2017']:\n ax = subplot(2,4,pnum)\n plot_compare_scatters([oct17selfcal.get(season,b),oct17selfcal.get(season,b)],\n [season+'-Oct2017selfcal-int',season+'-Oct2017selfcal-ext'],\n ['C0','C1'],filt=b,ax=ax,units='mmag',sfx=['','Ext'])\n if b=='g':\n ax.set_ylim(0,40)\n else:\n ax.set_ylim(0,80)\n ax.grid()\n pnum += 1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
karlstroetmann/Algorithms
Python/Chapter-10/Set.ipynb
gpl-2.0
[ "%autosave 0\nfrom IPython.core.display import HTML, display\ndisplay(HTML('<style>.container { width:100% !important; }</style>'))", "Sets implemented as AVL Trees\nThis notebook implements <em style=\"color:blue;\">sets</em> as <a href=\"https://en.wikipedia.org/wiki/AVL_tree\">AVL trees</a>. The set $\mathcal{A}$ of <em style=\"color:blue;\">AVL trees</em> is defined inductively:\n\n$\texttt{Nil} \in \mathcal{A}$.\n$\texttt{Node}(k,l,r) \in \mathcal{A}\quad$ iff \n$\texttt{Node}(k,l,r) \in \mathcal{B}_<$,\n$l, r \in \mathcal{A}$, and\n$|l.\texttt{height}() - r.\texttt{height}()| \leq 1$.\n\n\n\nAccording to this definition, an AVL tree is an <em style=\"color:blue;\">ordered binary tree</em>\nsuch that for every node $\texttt{Node}(k,l,r)$ in this tree the heights of the left subtree $l$ and the right\nsubtree $r$ differ by at most one.\nThe class Set represents the nodes of an AVL tree. This class has the following member variables:\n\nmKey is the key stored at the root of the tree,\nmLeft is the left subtree, \nmRight is the right subtree, and\nmHeight is the height.\n\nThe constructor __init__ creates the empty tree.", "class Set:\n    def __init__(self):\n        self.mKey = None\n        self.mLeft = None\n        self.mRight = None\n        self.mHeight = 0", "Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree.", "def isEmpty(self):\n    return self.mKey is None\n\nSet.isEmpty = isEmpty", "Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns True if the key $k$ is stored in the tree $t$.\nThe method member is defined inductively as follows:\n - $\texttt{Nil}.\texttt{member}(k) = \texttt{False}$,\nbecause the empty tree does not store any key.\n - $\texttt{Node}(k, l, r).\texttt{member}(k) = \texttt{True}$,\nbecause the key $k$ is stored at the root of the node $\texttt{Node}(k,l,r)$.\n - $k_1 < k_2 \rightarrow \texttt{Node}(k_2, l, r).\texttt{member}(k_1) = 
l.\\texttt{member}(k_1)$,\nbecause if $k_1$ is less than $k_2$, then any mapping for $k_1$ has to be stored in the left subtree $l$.\n - $k_1 > k_2 \\rightarrow \\texttt{Node}(k_2, l, r).\\texttt{member}(k_1) = r.\\texttt{member}(k_1)$,\nbecause if $k_1$ is greater than $k_2$, then any mapping for $k_1$ has to be stored in the right subtree $r$.", "def member(self, key):\n if self.isEmpty():\n return\n elif self.mKey == key:\n return True\n elif key < self.mKey:\n return self.mLeft.member(key)\n else:\n return self.mRight.member(key)\n \nSet.member = member", "The method $\\texttt{insert}()$ is specified via recursive equations.\n - $\\texttt{Nil}.\\texttt{insert}(k) = \\texttt{Node}(k, \\texttt{Nil}, \\texttt{Nil})$,\n - $\\texttt{Node}(k, l, r).\\texttt{insert}(k) = \\texttt{Node}(k, l, r)$,\n - $k_1 < k_2 \\rightarrow \n \\texttt{Node}(k_2, l, r).\\texttt{insert}(k_1) =\n \\texttt{Node}\\bigl(k_2, l.\\texttt{insert}(k_1), r\\bigr).\\texttt{restore}()$,\n - $k_1 > k_2 \\rightarrow \n \\texttt{Node}(k_2, l, r).\\texttt{insert}\\bigl(k_1\\bigr) = \n \\texttt{Node}\\bigl(k_2, l, r.\\texttt{insert}(k_1)\\bigr).\\texttt{restore}()$.\nThe function $\\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion.", "def insert(self, key):\n if self.isEmpty():\n self.mKey = key\n self.mLeft = Set()\n self.mRight = Set()\n self.mHeight = 1\n elif self.mKey == key:\n pass\n elif key < self.mKey:\n self.mLeft.insert(key)\n self._restore()\n else:\n self.mRight.insert(key)\n self._restore()\n\nSet.insert = insert", "The method $\\texttt{self}.\\texttt{delete}(k)$ removes the key $k$ from the tree $\\texttt{self}$. 
It is defined as follows:\n\n$\texttt{Nil}.\texttt{delete}(k) = \texttt{Nil}$,\n$\texttt{Node}(k,\texttt{Nil},r).\texttt{delete}(k) = r$,\n$\texttt{Node}(k,l,\texttt{Nil}).\texttt{delete}(k) = l$,\n$l \not= \texttt{Nil} \,\wedge\, r \not= \texttt{Nil} \,\wedge\, \n \langle r',k_{min} \rangle := r.\texttt{delMin}() \;\rightarrow\;\n \texttt{Node}(k,l,r).\texttt{delete}(k) = \texttt{Node}(k_{min},l,r').\texttt{restore}()$,\n$k_1 < k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) = \n \texttt{Node}\bigl(k_2,l.\texttt{delete}(k_1),r\bigr).\texttt{restore}()$,\n$k_1 > k_2 \rightarrow \texttt{Node}(k_2,l,r).\texttt{delete}(k_1) = \n \texttt{Node}\bigl(k_2,l,r.\texttt{delete}(k_1)\bigr).\texttt{restore}()$.", "def delete(self, key):\n    if self.isEmpty():\n        return\n    if key == self.mKey:\n        if self.mLeft.isEmpty():\n            self._update(self.mRight)\n        elif self.mRight.isEmpty():\n            self._update(self.mLeft)\n        else:\n            self.mRight, self.mKey = self.mRight._delMin()\n            self._restore()\n    elif key < self.mKey:\n        self.mLeft.delete(key)\n        self._restore()\n    else:\n        self.mRight.delete(key)\n        self._restore()\n\nSet.delete = delete", "The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$\nand returns a pair of the form\n$$ (\texttt{self}, k_m) $$\nwhere $\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found. 
\nThe function is defined as follows:\n\n$\\texttt{Node}(k, \\texttt{Nil}, r).\\texttt{delMin}() = \\langle r, k \\rangle$,\n$l\\not= \\texttt{Nil} \\wedge \\langle l',k_{min}\\rangle := l.\\texttt{delMin}() \n \\;\\rightarrow\\;\n \\texttt{Node}(k, l, r).\\texttt{delMin}() = \n \\langle \\texttt{Node}(k, l', r).\\texttt{restore}(), k_{min} \\rangle\n $", "def _delMin(self):\n if self.mLeft.isEmpty():\n return self.mRight, self.mKey\n else:\n ls, km = self.mLeft._delMin()\n self.mLeft = ls\n self._restore()\n return self, km\n \nSet._delMin = _delMin", "Given two ordered binary trees $s$ and $t$, the expression $s.\\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.", "def _update(self, t):\n self.mKey = t.mKey\n self.mLeft = t.mLeft\n self.mRight = t.mRight\n self.mHeight = t.mHeight\n \nSet._update = _update", "The function $\\texttt{restore}(\\texttt{self})$ restores the balancing condition of the given binary tree\nat the root node and recompute the variable $\\texttt{mHeight}$.\nThe method $\\texttt{restore}$ is specified via conditional equations.\n\n\n$\\texttt{Nil}.\\texttt{restore}() = \\texttt{Nil}$,\nbecause the empty tree already is an AVL tree.\n - $|l.\\texttt{height}() - r.\\texttt{height}()| \\leq 1 \\rightarrow \n \\texttt{Node}(k,l,r).\\texttt{restore}() = \\texttt{Node}(k,l,r)$.\nIf the balancing condition is satisfied, then nothing needs to be done. 
\n - $\\begin{array}[t]{cl}\n & l_1.\\texttt{height}() = r_1.\\texttt{height}() + 2 \\ \n \\wedge & l_1 = \\texttt{Node}(k_2,l_2,r_2) \\\n \\wedge & l_2.\\texttt{height}() \\geq r_2.\\texttt{height}() \\[0.2cm]\n \\rightarrow & \\texttt{Node}(k_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_2,l_2,\\texttt{Node}(k_1,r_2,r_1)\\bigr)\n \\end{array}\n$\n - $\\begin{array}[t]{cl}\n & l_1.\\texttt{height}() = r_1.\\texttt{height}() + 2 \\ \n \\wedge & l_1 = \\texttt{Node}(k_2,l_2,r_2) \\\n \\wedge & l_2.\\texttt{height}() < r_2.\\texttt{height}() \\\n \\wedge & r_2 = \\texttt{Node}(k_3,l_3,r_3) \\\n \\rightarrow & \\texttt{Node}(k_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_3,\\texttt{Node}(k_2,l_2,l_3),\\texttt{Node}(k_1,r_3,r_1) \\bigr)\n \\end{array}\n$\n - $\\begin{array}[t]{cl}\n & r_1.\\texttt{height}() = l_1.\\texttt{height}() + 2 \\ \n \\wedge & r_1 = \\texttt{Node}(k_2,l_2,r_2) \\\n \\wedge & r_2.\\texttt{height}() \\geq l_2.\\texttt{height}() \\[0.2cm]\n \\rightarrow & \\texttt{Node}(k_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_2,\\texttt{Node}(k_1,l_1,l_2),r_2\\bigr)\n \\end{array}\n$\n - $\\begin{array}[t]{cl}\n & r_1.\\texttt{height}() = l_1.\\texttt{height}() + 2 \\ \n \\wedge & r_1 = \\texttt{Node}(k_2,l_2,r_2) \\\n \\wedge & r_2.\\texttt{height}() < l_2.\\texttt{height}() \\\n \\wedge & l_2 = \\texttt{Node}(k_3,l_3,r_3) \\\n \\rightarrow & \\texttt{Node}(k_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_3,\\texttt{Node}(k_1,l_1,l_3),\\texttt{Node}(k_2,r_3,r_2) \\bigr)\n \\end{array}\n$", "def _restore(self):\n if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:\n self._restoreHeight()\n return\n if self.mLeft.mHeight > self.mRight.mHeight:\n k1, l1, r1 = self.mKey, self.mLeft, self.mRight\n k2, l2, r2 = l1.mKey, l1.mLeft, l1.mRight\n if l2.mHeight >= r2.mHeight:\n self._setValues(k2, l2, createNode(k1, r2, r1))\n else: \n k3, l3, r3 = r2.mKey, r2.mLeft, r2.mRight\n self._setValues(k3, createNode(k2, 
l2, l3),\n createNode(k1, r3, r1))\n elif self.mRight.mHeight > self.mLeft.mHeight:\n k1, l1, r1 = self.mKey, self.mLeft, self.mRight\n k2, l2, r2 = r1.mKey, r1.mLeft, r1.mRight\n if r2.mHeight >= l2.mHeight:\n self._setValues(k2, createNode(k1, l1, l2), r2)\n else:\n k3, l3, r3 = l2.mKey, l2.mLeft, l2.mRight\n self._setValues(k3, createNode(k1, l1, l3),\n createNode(k2, r3, r2))\n self._restoreHeight()\n \nSet._restore = _restore", "The function $\\texttt{self}.\\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\\texttt{self}$ with the given values.", "def _setValues(self, k, l, r):\n self.mKey = k\n self.mLeft = l\n self.mRight = r\n \nSet._setValues = _setValues\n\ndef _restoreHeight(self):\n self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1\n \nSet._restoreHeight = _restoreHeight", "The function $\\texttt{createNode}(k, l, r)$ creates an AVL-tree of that has the key $k$ stored at its root, \nleft subtree $l$ and right subtree $r$.", "def createNode(key, left, right):\n node = Set()\n node.mKey = key\n node.mLeft = left\n node.mRight = right\n node.mHeight = max(left.mHeight, right.mHeight) + 1\n return node", "The method $t.\\texttt{pop}()$ take an AVL tree $t$ and removes and returns the smallest key that is present in $t$. 
It is specified as follows:\n - $\\texttt{Nil}.\\texttt{pop}() = \\Omega$\n - $\\texttt{Node}(k,\\texttt{Nil}, r).\\texttt{pop}() = \\langle k, r\\rangle$\n - $l \\not=\\texttt{Nil} \\wedge \\langle k',l'\\rangle := l.\\texttt{pop}() \\rightarrow\n \\texttt{Node}(k, l, r).\\texttt{pop}() = \\langle k', \\texttt{Node}(k, l', r).\\texttt{restore}()\\rangle$", "def pop(self):\n if self.mKey == None:\n raise KeyError\n if self.mLeft.mKey == None:\n key = self.mKey\n self._update(self.mRight)\n return key\n key = self.mLeft.pop()\n self._restore()\n return key\n\nSet.pop = pop", "Display Code", "import graphviz as gv", "Given an ordered binary tree, this function renders the tree graphically using graphviz.", "def toDot(self):\n Set.sNodeCount = 0 # this is a static variable of the class Set\n dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})\n NodeDict = {}\n self._assignIDs(NodeDict)\n for n, t in NodeDict.items():\n if t.mKey != None:\n dot.node(str(n), label=str(t.mKey))\n else:\n dot.node(str(n), label='', shape='point')\n for n, t in NodeDict.items():\n if not t.mLeft == None:\n dot.edge(str(n), str(t.mLeft.mID))\n if not t.mRight == None:\n dot.edge(str(n), str(t.mRight.mID))\n return dot\n\nSet.toDot = toDot", "This method assigns a unique identifier to each node. 
The dictionary NodeDict maps these identifiers to the nodes where they occur.", "def _assignIDs(self, NodeDict):\n Set.sNodeCount += 1\n self.mID = Set.sNodeCount\n NodeDict[self.mID] = self\n if self.isEmpty():\n return\n self.mLeft ._assignIDs(NodeDict)\n self.mRight._assignIDs(NodeDict)\n \nSet._assignIDs = _assignIDs", "Testing\nThe function $\\texttt{demo}()$ creates a small ordered binary tree.", "def demo():\n m = Set()\n m.insert(\"anton\")\n m.insert(\"hugo\")\n m.insert(\"gustav\")\n m.insert(\"jens\")\n m.insert(\"hubert\")\n m.insert(\"andre\")\n m.insert(\"philipp\")\n m.insert(\"rene\")\n return m\n\nt = demo()\nt.toDot()\n\nwhile not t.isEmpty():\n print(t.pop())\n display(t.toDot())", "Let's generate an ordered binary tree with random keys.", "import random as rnd\n\nt = Set()\nfor k in range(30):\n k = rnd.randrange(100)\n t.insert(k)\ndisplay(t.toDot())\nwhile not t.isEmpty():\n print(t.pop(), end=' ')\ndisplay(t.toDot())", "This tree looks more or less balanced. Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees.", "t = Set()\nfor k in range(30):\n t.insert(k)\ndisplay(t.toDot())\nwhile not t.isEmpty():\n print(t.pop(), end=' ')\ndisplay(t.toDot())", "Next, we compute the set of prime numbers $\\leq 100$. Mathematically, this set is given as follows:\n$$ \\bigl{2, \\cdots, 100 \\bigr} - \\bigl{ i \\cdot j \\bigm| i, j \\in {2, \\cdots, 100 }\\bigr}$$", "S = Set()\nfor k in range(2, 101):\n S.insert(k)\ndisplay(S.toDot())\nfor i in range(2, 101):\n for j in range(2, 101):\n S.delete(i * j)\ndisplay(S.toDot())\nwhile not S.isEmpty():\n print(S.pop(), end=' ')\ndisplay(S.toDot())" ]
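The tests above exercise `insert`, `delete`, and `pop`, but never check the AVL balancing property directly: at every node, the heights of the two subtrees may differ by at most one, which is what bounds the height of the tree logarithmically. A minimal, self-contained sketch of such a check (it uses plain tuples `(key, left, right)` instead of the `Set` class above, purely for illustration):

```python
# Standalone sketch: a node is either None (empty tree) or a tuple (key, left, right).
def height(t):
    if t is None:
        return 0
    _, l, r = t
    return 1 + max(height(l), height(r))

def is_balanced(t):
    # AVL invariant: at every node the subtree heights differ by at most 1.
    if t is None:
        return True
    _, l, r = t
    return abs(height(l) - height(r)) <= 1 and is_balanced(l) and is_balanced(r)

balanced   = ('b', ('a', None, None), ('c', None, None))
degenerate = ('a', None, ('b', None, ('c', None, None)))

print(is_balanced(balanced), is_balanced(degenerate))  # True False
```

The degenerate chain above is exactly the shape that inserting sorted keys would produce without rebalancing; `_restore` is what keeps this predicate true after every insertion and removal.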
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mdietrichstein/vgg16-transfer-learning
vgg16_transfer_learning.ipynb
bsd-2-clause
[ "Transfer Learning to detect cats / dogs using Vgg16\nDefinitions\nRe-run this cell when starting from a checkpoint", "#reset python environment\n%reset -f\n\nimport time\n\ndefault_device = '/gpu:0'\n# default_device = '/cpu:0'\n\nnum_hidden_neurons = 256\n\nvgg_mean = [103.939, 116.779, 123.68]\nclasses = [l.strip() for l in open('synset.txt').readlines()]\n\ntraining_dataset_dir = './datasets/dogs-vs-cats-redux-kernels-edition/train/'\ntest_dataset_dir = './datasets/dogs-vs-cats-redux-kernels-edition/test/'\n\n#model_version = int(time.time())\nmodel_version = 3\nmodel_path = 'models/model-{}/'.format(model_version)\n\ndef get_batches(x, y, batch_size=32):\n num_rows = y.shape[0]\n \n num_batches = num_rows // batch_size\n \n if num_rows % batch_size != 0:\n num_batches = num_batches + 1\n\n for batch in range(num_batches):\n yield x[batch_size * batch: batch_size * (batch + 1)], y[batch_size * batch: batch_size * (batch + 1)]", "Vgg16 Model Class", "import numpy as np\nimport tensorflow as tf\n\nclass Vgg16Model:\n def __init__(self, weights_path='./vgg16.npy'):\n self.weights = np.load(weights_path, encoding='latin1').item()\n self.activation_fn = tf.nn.relu\n self.conv_padding = 'SAME'\n self.pool_padding = 'SAME'\n self.use_bias = True\n\n def build(self, input_tensor, trainable=False):\n self.conv1_1 = self.conv2d(input_tensor, 'conv1_1', 64, trainable)\n self.conv1_2 = self.conv2d(self.conv1_1, 'conv1_2', 64, trainable)\n\n # Max-pooling is performed over a 2 × 2 pixel window, with stride 2.\n self.max_pool1 = tf.layers.max_pooling2d(self.conv1_2, (2, 2), (2, 2), padding=self.pool_padding)\n\n self.conv2_1 = self.conv2d(self.max_pool1, 'conv2_1', 128, trainable)\n self.conv2_2 = self.conv2d(self.conv2_1, 'conv2_2', 128, trainable)\n\n self.max_pool2 = tf.layers.max_pooling2d(self.conv2_2, (2, 2), (2, 2), padding=self.pool_padding)\n\n self.conv3_1 = self.conv2d(self.max_pool2, 'conv3_1', 256, trainable)\n self.conv3_2 = self.conv2d(self.conv3_1, 'conv3_2', 256, trainable)\n 
self.conv3_3 = self.conv2d(self.conv3_2, 'conv3_3', 256, trainable)\n\n self.max_pool3 = tf.layers.max_pooling2d(self.conv3_3, (2, 2), (2, 2), padding=self.pool_padding)\n\n self.conv4_1 = self.conv2d(self.max_pool3, 'conv4_1', 512, trainable)\n self.conv4_2 = self.conv2d(self.conv4_1, 'conv4_2', 512, trainable)\n self.conv4_3 = self.conv2d(self.conv4_2, 'conv4_3', 512, trainable)\n\n self.max_pool4 = tf.layers.max_pooling2d(self.conv4_3, (2, 2), (2, 2), padding=self.pool_padding)\n\n self.conv5_1 = self.conv2d(self.max_pool4, 'conv5_1', 512, trainable)\n self.conv5_2 = self.conv2d(self.conv5_1, 'conv5_2', 512, trainable)\n self.conv5_3 = self.conv2d(self.conv5_2, 'conv5_3', 512, trainable)\n\n self.max_pool5 = tf.layers.max_pooling2d(self.conv5_3, (2, 2), (2, 2), padding=self.pool_padding)\n\n reshaped = tf.reshape(self.max_pool5, shape=(-1, 7 * 7 * 512))\n\n self.fc6 = self.fc(reshaped, 'fc6', 4096, trainable)\n self.fc7 = self.fc(self.fc6, 'fc7', 4096, trainable)\n\n self.fc8 = self.fc(self.fc7, 'fc8', 1000, trainable)\n\n self.predictions = tf.nn.softmax(self.fc8, name='predictions')\n\n def conv2d(self, layer, name, n_filters, trainable, k_size=3):\n return tf.layers.conv2d(layer, n_filters, kernel_size=(k_size, k_size),\n activation=self.activation_fn, padding=self.conv_padding, name=name, trainable=trainable,\n kernel_initializer=tf.constant_initializer(self.weights[name][0], dtype=tf.float32),\n bias_initializer=tf.constant_initializer(self.weights[name][1], dtype=tf.float32),\n use_bias=self.use_bias)\n\n def fc(self, layer, name, size, trainable):\n return tf.layers.dense(layer, size, activation=self.activation_fn,\n name=name, trainable=trainable,\n kernel_initializer=tf.constant_initializer(self.weights[name][0], dtype=tf.float32),\n bias_initializer=tf.constant_initializer(self.weights[name][1], dtype=tf.float32),\n use_bias=self.use_bias)", "Images conversion for Vgg16\nImages have to be of dimension (224, 224, 3). 
The last dimension is ordered BGR (blue, green, red)", "import skimage\nimport skimage.io\nimport skimage.transform\n\n# https://github.com/machrisaa/tensorflow-vgg/blob/master/utils.py\ndef load_image(image_path, mean=vgg_mean):\n image = skimage.io.imread(image_path)\n\n image = image.astype(float)\n \n short_edge = min(image.shape[:2])\n yy = int((image.shape[0] - short_edge) / 2)\n xx = int((image.shape[1] - short_edge) / 2)\n crop_image = image[yy: yy + short_edge, xx: xx + short_edge]\n \n resized_image = skimage.transform.resize(crop_image, (224, 224), mode='constant') \n \n bgr = resized_image[:,:,::-1] - mean\n \n return bgr", "Extract Vgg16 features", "import time\nimport os\nimport math\n\ndef extract_codes(image_directory, batch_size=32):\n tf.reset_default_graph()\n\n # create mapping of filename -> vgg features\n codes_fc6 = {}\n codes_fc7 = {}\n predictions = {}\n\n filenames = os.listdir(image_directory)\n num_files = len(filenames)\n num_batches = int(math.ceil(num_files / batch_size))\n \n with tf.device(default_device):\n with tf.Session(graph = tf.Graph()) as sess: \n _input = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name=\"images\")\n\n vgg = Vgg16Model()\n vgg.build(_input)\n\n sess.run(tf.global_variables_initializer())\n\n for i in range(num_batches):\n batch_filenames = filenames[i*batch_size : ((i+1)*batch_size)]\n\n print(\"batch {} of {}\".format(i+1, num_batches))\n\n start = time.time()\n images = np.array([load_image(image_directory + f) for f in batch_filenames])\n end = time.time()\n print(\"\\timage loading took {:.4f} sec\".format(end-start))\n\n start = end\n\n batch_codes_fc6, batch_codes_fc7 = sess.run(\n [vgg.fc6, vgg.fc7],\n feed_dict={ _input: images }\n )\n\n end = time.time()\n print(\"\\tprediction took {:.4f} sec\".format(end-start))\n\n for i, filename in enumerate(batch_filenames):\n codes_fc6[filename] = batch_codes_fc6[i]\n codes_fc7[filename] = batch_codes_fc7[i]\n\n return codes_fc6, 
codes_fc7\n\nimport numpy as np\n\nprint('Extracting training codes for fc6 and fc7')\ntraining_codes_fc6, training_codes_fc7 = extract_codes(training_dataset_dir)\nnp.save('training_codes_fc6.npy', training_codes_fc6)\nnp.save('training_codes_fc7.npy', training_codes_fc7)\n\nprint('Extracting test codes for fc6 and fc7')\ntest_codes_fc6, test_codes_fc7 = extract_codes(test_dataset_dir, batch_size=16)\nnp.save('test_codes_fc6.npy', test_codes_fc6)\nnp.save('test_codes_fc7.npy', test_codes_fc7)", "Checkpoint - Vgg16 features extracted and serialized", "import numpy as np\nimport tensorflow as tf", "Load previously stored training codes (fc6)", "from collections import OrderedDict\ntraining_codes = np.load('training_codes_fc6.npy')\ntraining_codes = OrderedDict(training_codes.item())", "Preprocess training data", "keys = list(training_codes.keys())\n\nlabels = np.array([ (1, 0) if name[:3] == 'dog' else (0,1) for name in keys]) # one hot encode labels\n\nimages = np.array(list(training_codes.values())) # extract images\n\nfor i,key in enumerate(keys):\n assert (training_codes.get(key) == images[i]).all()", "Split into training and validation set", "from sklearn.model_selection import StratifiedShuffleSplit\n \nsplitter = StratifiedShuffleSplit(n_splits=1, test_size=0.1)\ntrain_indices, val_indices = next(splitter.split(images, labels))\n\ntrain_images, train_labels = images[train_indices], labels[train_indices]\nval_images, val_labels = images[val_indices], labels[val_indices]", "Transfer Learning Step - Use a small NN with a single hidden layer", "import os\nimport time\n\nfrom tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model.signature_def_utils import predict_signature_def\n\nfrom tensorflow.python.saved_model.tag_constants import SERVING\nfrom tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY\nfrom tensorflow.python.saved_model.signature_constants import 
PREDICT_INPUTS\nfrom tensorflow.python.saved_model.signature_constants import PREDICT_OUTPUTS\n \nif(os.path.exists(model_path)):\n raise Exception('directory \"{}\" already exists. Delete or move it'.format(model_path))\n\nnum_epochs = 5\nlearning_rate = 0.01\nkeep_prob = 0.5\nbatch_size = 64\naccuracy_print_steps = 10\niteration = 0\n\ntf.reset_default_graph()\n\nwith tf.device(default_device):\n with tf.Session(graph=tf.Graph()) as sess:\n \n with tf.name_scope(\"inputs\"):\n _images = tf.placeholder(tf.float32, shape=(None, 4096), name='images')\n _keep_prob = tf.placeholder(tf.float32, name='keep_probability')\n\n with tf.name_scope(\"targets\"):\n _labels = tf.placeholder(tf.float32, shape=(None, 2), name='labels')\n \n with tf.name_scope(\"hidden_layer\"):\n hidden_weights = tf.Variable(\n initial_value = tf.truncated_normal([4096, num_hidden_neurons], mean=0.0, stddev=0.01),\n dtype=tf.float32, name=\"hidden_weights\"\n )\n \n hidden_bias = tf.Variable(\n initial_value = tf.zeros(num_hidden_neurons), \n dtype=tf.float32,\n name=\"hidden_bias\"\n )\n \n hidden = tf.matmul(_images, hidden_weights) + hidden_bias\n hidden = tf.nn.relu(hidden, name=\"hidden_relu\")\n hidden = tf.nn.dropout(hidden, keep_prob=_keep_prob, name='hidden_dropout')\n \n tf.summary.histogram(\"hidden_weights\", hidden_weights)\n tf.summary.histogram(\"hidden_bias\", hidden_bias)\n\n \n with tf.name_scope(\"outputs\"):\n output_weights = tf.Variable(\n initial_value=tf.truncated_normal(shape=(num_hidden_neurons, 2), mean=0.0, stddev=0.01),\n dtype=tf.float32, name=\"output_weights\"\n )\n \n output_bias = tf.Variable(initial_value=tf.zeros(2), dtype=tf.float32, name=\"output_bias\")\n \n logits = tf.matmul(hidden, output_weights) + output_bias\n predictions = tf.nn.softmax(logits, name='predictions')\n \n tf.summary.histogram(\"output_weights\", output_weights)\n tf.summary.histogram(\"output_bias\", output_bias)\n tf.summary.histogram(\"predictions\", predictions)\n \n with 
tf.name_scope(\"cost\"):\n cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=_labels, name='cross_entropy')\n cost = tf.reduce_mean(cross_entropy, name='cost')\n \n tf.summary.scalar(\"cost\", cost)\n\n with tf.name_scope(\"train\"):\n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n correct_predictions = tf.equal(tf.argmax(predictions, 1), tf.argmax(_labels, 1), name='correct_predictions')\n accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name='accuracy')\n\n ### merge summaries\n merged_summaries = tf.summary.merge_all()\n \n ### Save training and validation logs for tensorboard\n train_writer = tf.summary.FileWriter('./logs/train/{}'.format(model_version), sess.graph)\n val_writer = tf.summary.FileWriter('./logs/val/{}'.format(model_version))\n \n sess.run(tf.global_variables_initializer())\n\n for epoch in range(num_epochs):\n for batch_train_images, batch_train_labels in get_batches(train_images, train_labels, batch_size=batch_size):\n train_loss, _, p, summary = sess.run(\n [cost, optimizer, logits, merged_summaries], \n feed_dict = { \n _images: batch_train_images,\n _labels: batch_train_labels,\n _keep_prob: keep_prob\n })\n\n train_writer.add_summary(summary, iteration)\n \n iteration = iteration + 1\n\n if iteration % accuracy_print_steps == 0:\n val_acc, val_summary = sess.run([accuracy, merged_summaries], feed_dict ={\n _images: val_images,\n _labels: val_labels,\n _keep_prob: 1.\n })\n\n val_writer.add_summary(val_summary, iteration)\n print('{} / {} Accuracy: {} Loss: {}'.format(epoch + 1, num_epochs, val_acc, train_loss))\n \n \n\n \n ### Save graph and trained variables\n builder = saved_model_builder.SavedModelBuilder(model_path)\n\n builder.add_meta_graph_and_variables(\n sess, [SERVING],\n signature_def_map = {\n DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature_def(\n inputs = { PREDICT_INPUTS: _images },\n outputs = { PREDICT_OUTPUTS: predictions }\n )\n }\n 
)\n\n builder.save()", "Interlude - Try to find optimal hyperparameters\nRun training with different hyperparameters and use tensorboard to investigate the best solution", "import os\nimport time\nimport math\n\nfrom tensorflow.python.saved_model import builder as saved_model_builder\nfrom tensorflow.python.saved_model.signature_def_utils import predict_signature_def\n\nfrom tensorflow.python.saved_model.tag_constants import SERVING\nfrom tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY\nfrom tensorflow.python.saved_model.signature_constants import PREDICT_INPUTS\nfrom tensorflow.python.saved_model.signature_constants import PREDICT_OUTPUTS\n\naccuracy_print_steps = 100\n\ndef train(writer, num_epochs, hidden_layer_size, learning_rate, num_hidden=1, keep_prob=0.5, batch_size=64, training=True, saved_model_path=None):\n with tf.device(default_device):\n with tf.Session(graph=tf.Graph()) as sess:\n\n with tf.name_scope(\"inputs\"):\n _images = tf.placeholder(tf.float32, shape=(None, 4096), name='images')\n _is_training = tf.placeholder(tf.bool, name='is_training')\n _keep_prob = tf.placeholder(tf.float32, name='keep_probability')\n\n with tf.name_scope(\"targets\"):\n _labels = tf.placeholder(tf.float32, shape=(None, 2), name='labels')\n\n prev_size = 4096\n next_input = _images\n \n for i in range(num_hidden):\n with tf.variable_scope(\"hidden_layer_{}\".format(i)):\n hidden_weights = tf.Variable(\n initial_value = tf.truncated_normal([prev_size, hidden_layer_size], mean=0.0, stddev=0.01),\n dtype=tf.float32, name=\"hidden_weights\"\n )\n\n hidden_bias = tf.Variable(\n initial_value = tf.zeros(hidden_layer_size), \n dtype=tf.float32,\n name=\"hidden_bias\"\n )\n\n hidden = tf.matmul(next_input, hidden_weights) + hidden_bias\n hidden = tf.layers.batch_normalization(hidden, training=_is_training)\n hidden = tf.nn.relu(hidden, name=\"hidden_relu\")\n hidden = tf.nn.dropout(hidden, keep_prob=_keep_prob, 
name='hidden_dropout')\n\n tf.summary.histogram(\"hidden_weights_{}\".format(i), hidden_weights)\n tf.summary.histogram(\"hidden_bias_{}\".format(i), hidden_bias)\n \n next_input = hidden\n prev_size = hidden_layer_size\n\n\n with tf.name_scope(\"outputs\"):\n output_weights = tf.Variable(\n initial_value=tf.truncated_normal(shape=(hidden_layer_size, 2), mean=0.0, stddev=0.01),\n dtype=tf.float32, name=\"output_weights\"\n )\n\n output_bias = tf.Variable(initial_value=tf.zeros(2), dtype=tf.float32, name=\"output_bias\")\n\n logits = tf.matmul(next_input, output_weights) + output_bias\n predictions = tf.nn.softmax(logits, name='predictions')\n\n tf.summary.histogram(\"output_weights\", output_weights)\n tf.summary.histogram(\"output_bias\", output_bias)\n tf.summary.histogram(\"predictions\", predictions)\n\n with tf.name_scope(\"cost\"):\n cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=_labels, name='cross_entropy')\n cost = tf.reduce_mean(cross_entropy, name='cost')\n\n tf.summary.scalar(\"cost\", cost)\n\n with tf.name_scope(\"train\"):\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)): \n optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)\n correct_predictions = tf.equal(tf.argmax(predictions, 1), tf.argmax(_labels, 1), name='correct_predictions')\n accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name='accuracy')\n\n ### merge summaries\n merged_summaries = tf.summary.merge_all()\n\n sess.run(tf.global_variables_initializer())\n \n iteration = 0\n for epoch in range(num_epochs): \n for batch_train_images, batch_train_labels in get_batches(train_images, train_labels, batch_size=batch_size):\n train_loss, _, p, summary = sess.run(\n [cost, optimizer, logits, merged_summaries], \n feed_dict = { \n _images: batch_train_images,\n _labels: batch_train_labels,\n _keep_prob: keep_prob,\n _is_training: training\n })\n\n iteration = iteration + 1\n\n if iteration % 
accuracy_print_steps == 0:\n if not writer == None:\n writer.add_summary(summary, iteration)\n\n if iteration % accuracy_print_steps == 0:\n val_acc, val_summary = sess.run([accuracy, merged_summaries], feed_dict ={\n _images: val_images,\n _labels: val_labels,\n _keep_prob: 1.,\n _is_training: False\n })\n\n\n print('\\tEpoch {}/{} Iteration {} Accuracy: {} Loss: {}'.format(epoch + 1, num_epochs, iteration, val_acc, train_loss))\n \n if not saved_model_path == None:\n ### Save graph and trained variables\n builder = saved_model_builder.SavedModelBuilder(saved_model_path)\n\n builder.add_meta_graph_and_variables(\n sess, [SERVING],\n signature_def_map = {\n DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature_def(\n inputs = { PREDICT_INPUTS: _images },\n outputs = { PREDICT_OUTPUTS: predictions }\n )\n }\n )\n\n builder.save()\n\n\nbatch_size = 64\n\nfor num_epochs in [1, 5]:\n for keep_prob in [0.5, 0.8, 1.0]:\n for num_hidden_layers in [1, 2]:\n for hidden_layer_size in [512, 1024, 2048]:\n for learning_rate in [0.01, 0.001]:\n log_string = 'logs/{}/e={},lr={},hl={},hs={},kp={},bs={}'.format(model_version, num_epochs, learning_rate, num_hidden_layers, hidden_layer_size, keep_prob, batch_size)\n writer = tf.summary.FileWriter(log_string)\n\n print(\"\\n\\nStarting {}\".format(log_string))\n train(writer, num_epochs, hidden_layer_size, learning_rate, num_hidden_layers, keep_prob, batch_size)", "Save model with promising hyperparameters", "# e=5, lr=0.001, hs=1024,hl=11,kp=0.5,bs=64\n# train(None, 5, 1024, 0.001, 1, 0.5, 64, \"{}/test1/\".format(model_path))\n\n# e=5,lr=0.01,hl=1,hs=512,kp=0.8,bs=64\ntrain(None, 5, 512, 0.01, 1, 0.8, 64, True, \"{}/test2/\".format(model_path))", "Checkpoint - Load previously stored test codes (fc6)", "keys = list(test_codes.keys())\n\n# images = np.array(list(test_codes.values()))\n\nkeys = list(map(lambda k: k[:-4], keys)) \nkeys = np.array(sorted(keys, key=int))\n\nexamples = keys[2:6]\n\nimages = []\n\nfor i,key in 
enumerate(keys):\n    images.append(test_codes.get(key+'.jpg'))\n\nimages = np.array(images)", "Example images", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport skimage.io\n\nfig = plt.figure(figsize=(20, 10))\n\nfor i, example in enumerate(examples):\n    a = fig.add_subplot(1,len(examples), i+1)\n    plt.imshow(skimage.io.imread(test_dataset_dir + example + '.jpg'))\n    # a.set_title(codes[examples[0]])", "Create predictions for test images", "import numpy as np\nimport tensorflow as tf\n\nfrom tensorflow.python.saved_model import loader\nfrom tensorflow.python.saved_model.tag_constants import SERVING\n\ntf.reset_default_graph()\n\n# target_model_path = model_path\n#target_model_path = \"{}/test2/\".format(model_path)\ntarget_model_path = \"{}test2/\".format(model_path)\n\nwith tf.device(default_device):\n    with tf.Session(graph=tf.Graph()) as sess:\n        \n        loader.load(sess, [SERVING], target_model_path)\n\n        with open('out6.csv', 'w') as f:\n            f.write('id,label\\n')\n\n            for b_images, b_keys in get_batches(images, keys):\n                s_keep_probability = sess.graph.get_tensor_by_name('inputs/keep_probability:0')\n                s_images = sess.graph.get_tensor_by_name('inputs/images:0')\n                s_is_training = sess.graph.get_tensor_by_name('inputs/is_training:0')\n                s_predictions = sess.graph.get_tensor_by_name('outputs/predictions:0')\n\n                preds = sess.run(s_predictions, feed_dict={\n                    s_images: b_images,\n                    s_keep_probability: 1.,\n                    s_is_training: False\n                })\n\n                for idx,pred in enumerate(preds):\n                    s = '{},{:.5f}\\n'.format(b_keys[idx], np.clip(pred[0], 0.05, 0.95))\n                    f.write(s)" ]
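The final prediction cell clips every probability into the interval [0.05, 0.95] before writing the submission file. Assuming the submissions are scored by log loss (as in the Kaggle dogs-vs-cats-redux competition this notebook targets), clipping bounds the penalty a single confidently wrong prediction can incur. A small sketch of that effect in plain Python (`log_loss` here is a hypothetical helper for illustration, not part of the notebook):

```python
import math

def log_loss(y_true, p):
    # Binary cross-entropy for a single example: -(y*ln(p) + (1-y)*ln(1-p))
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

# A confidently wrong prediction is punished very hard ...
unclipped = log_loss(1, 0.001)                      # ~6.91
# ... while clipping into [0.05, 0.95] caps the damage.
clipped = log_loss(1, max(0.05, min(0.95, 0.001)))  # ~3.00

print(unclipped, clipped)
```

The trade-off is a slightly worse score on examples the model gets right, in exchange for robustness against the few it gets badly wrong.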
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/ab8b138d73a5774a0442141d84ed9689/plot_10_epochs_overview.ipynb
bsd-3-clause
[ "%matplotlib inline", "The Epochs data structure: discontinuous data\nThis tutorial covers the basics of creating and working with :term:epoched\n&lt;epochs&gt; data. It introduces the :class:~mne.Epochs data structure in\ndetail, including how to load, query, subselect, export, and plot data from an\n:class:~mne.Epochs object. For more information about visualizing\n:class:~mne.Epochs objects, see tut-visualize-epochs. For info on\ncreating an :class:~mne.Epochs object from (possibly simulated) data in a\n:class:NumPy array &lt;numpy.ndarray&gt;, see tut_creating_data_structures.\n :depth: 2\nAs usual we'll start by importing the modules we need:", "import os\nimport mne", ":class:~mne.Epochs objects are a data structure for representing and\nanalyzing equal-duration chunks of the EEG/MEG signal. :class:~mne.Epochs\nare most often used to represent data that is time-locked to repeated\nexperimental events (such as stimulus onsets or subject button presses), but\ncan also be used for storing sequential or overlapping frames of a continuous\nsignal (e.g., for analysis of resting-state activity; see\nfixed-length-events). 
Inside an :class:~mne.Epochs object, the data\nare stored in an :class:array &lt;numpy.ndarray&gt; of shape (n_epochs,\nn_channels, n_times).\n:class:~mne.Epochs objects have many similarities with :class:~mne.io.Raw\nobjects, including:\n\n\nThey can be loaded from and saved to disk in .fif format, and their\n data can be exported to a :class:NumPy array &lt;numpy.ndarray&gt; through the\n :meth:~mne.Epochs.get_data method or to a :class:Pandas DataFrame\n &lt;pandas.DataFrame&gt; through the :meth:~mne.Epochs.to_data_frame method.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects support channel\n selection by index or name, including :meth:~mne.Epochs.pick,\n :meth:~mne.Epochs.pick_channels and :meth:~mne.Epochs.pick_types\n methods.\n\n\n:term:SSP projector &lt;projector&gt; manipulation is possible through\n :meth:~mne.Epochs.add_proj, :meth:~mne.Epochs.del_proj, and\n :meth:~mne.Epochs.plot_projs_topomap methods.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have\n :meth:~mne.Epochs.copy, :meth:~mne.Epochs.crop,\n :meth:~mne.Epochs.time_as_index, :meth:~mne.Epochs.filter, and\n :meth:~mne.Epochs.resample methods.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have\n :attr:~mne.Epochs.times, :attr:~mne.Epochs.ch_names,\n :attr:~mne.Epochs.proj, and :class:info &lt;mne.Info&gt; attributes.\n\n\nBoth :class:~mne.Epochs and :class:~mne.io.Raw objects have built-in\n plotting methods :meth:~mne.Epochs.plot, :meth:~mne.Epochs.plot_psd,\n and :meth:~mne.Epochs.plot_psd_topomap.\n\n\nCreating Epoched data from a Raw object\nThe example dataset we've been using thus far doesn't include pre-epoched\ndata, so in this section we'll load the continuous data and create epochs\nbased on the events recorded in the :class:~mne.io.Raw object's STIM\nchannels. 
As we often do in these tutorials, we'll :meth:~mne.io.Raw.crop\nthe :class:~mne.io.Raw data to save memory:", "sample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)", "As we saw in the tut-events-vs-annotations tutorial, we can extract an\nevents array from :class:~mne.io.Raw objects using :func:mne.find_events:", "events = mne.find_events(raw, stim_channel='STI 014')", "<div class=\"alert alert-info\"><h4>Note</h4><p>We could also have loaded the events from file, using\n :func:`mne.read_events`::\n\n sample_data_events_file = os.path.join(sample_data_folder,\n 'MEG', 'sample',\n 'sample_audvis_raw-eve.fif')\n events_from_file = mne.read_events(sample_data_events_file)\n\n See `tut-section-events-io` for more details.</p></div>\n\nThe :class:~mne.io.Raw object and the events array are the bare minimum\nneeded to create an :class:~mne.Epochs object, which we create with the\n:class:mne.Epochs class constructor. However, you will almost surely want\nto change some of the other default parameters. Here we'll change tmin\nand tmax (the time relative to each event at which to start and end each\nepoch). Note also that the :class:~mne.Epochs constructor accepts\nparameters reject and flat for rejecting individual epochs based on\nsignal amplitude. 
See the tut-reject-epochs-section section for\nexamples.", "epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)", "You'll see from the output that:\n\n\nall 320 events were used to create epochs\n\n\nbaseline correction was automatically applied (by default, baseline is\n defined as the time span from tmin to 0, but can be customized with\n the baseline parameter)\n\n\nno additional metadata was provided (see tut-epochs-metadata for\n details)\n\n\nthe projection operators present in the :class:~mne.io.Raw file were\n copied over to the :class:~mne.Epochs object\n\n\nIf we print the :class:~mne.Epochs object, we'll also see a note that the\nepochs are not copied into memory by default, and a count of the number of\nepochs created for each integer Event ID.", "print(epochs)", "Notice that the Event IDs are in quotes; since we didn't provide an event\ndictionary, the :class:mne.Epochs constructor created one automatically and\nused the string representation of the integer Event IDs as the dictionary\nkeys. This is more clear when viewing the event_id attribute:", "print(epochs.event_id)", "This time let's pass preload=True and provide an event dictionary; our\nprovided dictionary will get stored as the event_id attribute and will\nmake referencing events and pooling across event types easier:", "event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'face': 5, 'buttonpress': 32}\nepochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,\n preload=True)\nprint(epochs.event_id)\ndel raw # we're done with raw, free up some memory", "Notice that the output now mentions \"1 bad epoch dropped\". In the tutorial\nsection tut-reject-epochs-section we saw how you can specify channel\namplitude criteria for rejecting epochs, but here we haven't specified any\nsuch criteria. 
In this case, it turns out that the last event was too close to\nthe end of the (cropped) raw file to accommodate our requested tmax of\n0.7 seconds, so the final epoch was dropped because it was too short. Here\nare the drop_log entries for the last 4 epochs (empty lists indicate\nepochs that were not dropped):", "print(epochs.drop_log[-4:])", "<div class=\"alert alert-info\"><h4>Note</h4><p>If you forget to provide the event dictionary to the :class:`~mne.Epochs`\n constructor, you can add it later by assigning to the ``event_id``\n attribute::\n\n epochs.event_id = event_dict</p></div>\n\nBasic visualization of Epochs objects\nThe :class:~mne.Epochs object can be visualized (and browsed interactively)\nusing its :meth:~mne.Epochs.plot method:", "epochs.plot(n_epochs=10)", "Notice that the individual epochs are sequentially numbered along the bottom\naxis; the event ID associated with the epoch is marked on the top axis;\nepochs are separated by vertical dashed lines; and a vertical solid green\nline marks time=0 for each epoch (i.e., in this case, the stimulus onset\ntime for each trial). Epoch plots are interactive (similar to\n:meth:raw.plot() &lt;mne.io.Raw.plot&gt;) and have many of the same interactive\ncontrols as :class:~mne.io.Raw plots. Horizontal and vertical scrollbars\nallow browsing through epochs or channels (respectively), and pressing\n:kbd:? when the plot is focused will show a help screen with all the\navailable controls. See tut-visualize-epochs for more details (as well\nas other ways of visualizing epoched data).\nSubselecting epochs\nNow that we have our :class:~mne.Epochs object with our descriptive event\nlabels added, we can subselect epochs easily using square brackets. 
For\nexample, we can load all the \"catch trials\" where the stimulus was a face:", "print(epochs['face'])", "We can also pool across conditions easily, thanks to how MNE-Python handles\nthe / character in epoch labels (using what is sometimes called\n\"tag-based indexing\"):", "# pool across left + right\nprint(epochs['auditory'])\nassert len(epochs['auditory']) == (len(epochs['auditory/left']) +\n len(epochs['auditory/right']))\n# pool across auditory + visual\nprint(epochs['left'])\nassert len(epochs['left']) == (len(epochs['auditory/left']) +\n len(epochs['visual/left']))", "You can also pool conditions by passing multiple tags as a list. Note that\nMNE-Python will not complain if you ask for tags not present in the object,\nas long as it can find some match: the below example is parsed as\n(inclusive) 'right' or 'bottom', and you can see from the output\nthat it selects only auditory/right and visual/right.", "print(epochs[['right', 'bottom']])", "However, if no match is found, an error is raised:", "try:\n print(epochs[['top', 'bottom']])\nexcept KeyError:\n print('Tag-based selection with no matches raises a KeyError!')", "Selecting epochs by index\n:class:~mne.Epochs objects can also be indexed with integers, :term:slices\n&lt;slice&gt;, or lists of integers. This method of selection ignores event\nlabels, so if you want the first 10 epochs of a particular type, you can\nselect the type first, then use integers or slices:", "print(epochs[:10]) # epochs 0-9\nprint(epochs[1:8:2]) # epochs 1, 3, 5, 7\n\nprint(epochs['buttonpress'][:4]) # first 4 \"buttonpress\" epochs\nprint(epochs['buttonpress'][[0, 1, 2, 3]]) # same as previous line", "Selecting, dropping, and reordering channels\nYou can use the :meth:~mne.Epochs.pick, :meth:~mne.Epochs.pick_channels,\n:meth:~mne.Epochs.pick_types, and :meth:~mne.Epochs.drop_channels methods\nto modify which channels are included in an :class:~mne.Epochs object. 
You\ncan also use :meth:~mne.Epochs.reorder_channels for this purpose; any\nchannel names not provided to :meth:~mne.Epochs.reorder_channels will be\ndropped. Note that these channel selection methods modify the object\nin-place (unlike the square-bracket indexing to select epochs seen above)\nso in interactive/exploratory sessions you may want to create a\n:meth:~mne.Epochs.copy first.", "epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)\nprint(epochs_eeg.ch_names)\n\nnew_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']\nepochs_subset = epochs.copy().reorder_channels(new_order)\nprint(epochs_subset.ch_names)\n\ndel epochs_eeg, epochs_subset", "Changing channel name and type\nYou can change the name or type of a channel using\n:meth:~mne.Epochs.rename_channels or :meth:~mne.Epochs.set_channel_types.\nBoth methods take :class:dictionaries &lt;dict&gt; where the keys are existing\nchannel names, and the values are the new name (or type) for that channel.\nExisting channels that are not in the dictionary will be unchanged.", "epochs.rename_channels({'EOG 061': 'BlinkChannel'})\n\nepochs.set_channel_types({'EEG 060': 'ecg'})\nprint(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])\n\n# let's set them back to the correct values before moving on\nepochs.rename_channels({'BlinkChannel': 'EOG 061'})\nepochs.set_channel_types({'EEG 060': 'eeg'})", "Selection in the time domain\nTo change the temporal extent of the :class:~mne.Epochs, you can use the\n:meth:~mne.Epochs.crop method:", "shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)\n\nfor name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():\n print('{} epochs has {} time samples'\n .format(name, obj.get_data().shape[-1]))", "However, if you wanted to expand the time domain of an :class:~mne.Epochs\nobject, you would need to go back to the :class:~mne.io.Raw data and\nrecreate the :class:~mne.Epochs with different values for tmin and/or\ntmax.\nIt is also 
possible to change the \"zero point\" that defines the time values\nin an :class:~mne.Epochs object, with the :meth:~mne.Epochs.shift_time\nmethod. :meth:~mne.Epochs.shift_time allows shifting times relative to the\ncurrent values, or specifying a fixed time to set as the new time value of\nthe first sample (deriving the new time values of subsequent samples based on\nthe :class:~mne.Epochs object's sampling frequency).", "# shift times so that first sample of each epoch is at time zero\nlater_epochs = epochs.copy().shift_time(tshift=0., relative=False)\nprint(later_epochs.times[:3])\n\n# shift times by a relative amount\nlater_epochs.shift_time(tshift=-7, relative=True)\nprint(later_epochs.times[:3])\n\ndel shorter_epochs, later_epochs", "Note that although time shifting respects the sampling frequency (the spacing\nbetween samples), it does not enforce the assumption that there is a sample\noccurring at exactly time=0.\nExtracting data in other forms\nThe :meth:~mne.Epochs.get_data method returns the epoched data as a\n:class:NumPy array &lt;numpy.ndarray&gt;, of shape (n_epochs, n_channels,\nn_times); an optional picks parameter selects a subset of channels by\nindex, name, or type:", "eog_data = epochs.get_data(picks='EOG 061')\nmeg_data = epochs.get_data(picks=['mag', 'grad'])\nchannel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))\n\nfor name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():\n print('{} contains {} channels'.format(name, arr.shape[1]))", "Note that if your analysis requires repeatedly extracting single epochs from\nan :class:~mne.Epochs object, epochs.get_data(item=2) will be much\nfaster than epochs[2].get_data(), because it avoids the step of\nsubsetting the :class:~mne.Epochs object first.\nYou can also export :class:~mne.Epochs data to :class:Pandas DataFrames\n&lt;pandas.DataFrame&gt;. 
Here, the :class:~pandas.DataFrame index will be\nconstructed by converting the time of each sample into milliseconds and\nrounding it to the nearest integer, and combining it with the event types and\nepoch numbers to form a hierarchical :class:~pandas.MultiIndex. Each\nchannel will appear in a separate column. Then you can use any of Pandas'\ntools for grouping and aggregating data; for example, here we select any\nepochs numbered 10 or less from the auditory/left condition, and extract\ntimes between 100 and 107 ms on channels EEG 056 through EEG 058\n(note that slice indexing within Pandas' :obj:~pandas.DataFrame.loc is\ninclusive of the endpoint):", "df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])\ndf.sort_index(inplace=True)\nprint(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),\n 'EEG 056':'EEG 058'])\n\ndel df", "See the tut-epochs-dataframe tutorial for many more examples of the\n:meth:~mne.Epochs.to_data_frame method.\nLoading and saving Epochs objects to disk\n:class:~mne.Epochs objects can be loaded and saved in the .fif format\njust like :class:~mne.io.Raw objects, using the :func:mne.read_epochs\nfunction and the :meth:~mne.Epochs.save method. 
Functions are also\navailable for loading data that was epoched outside of MNE-Python, such as\n:func:mne.read_epochs_eeglab and :func:mne.read_epochs_kit.", "epochs.save('saved-audiovisual-epo.fif', overwrite=True)\nepochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)", "The MNE-Python naming convention for epochs files is that the file basename\n(the part before the .fif or .fif.gz extension) should end with\n-epo or _epo, and a warning will be issued if the filename you\nprovide does not adhere to that convention.\nAs a final note, be aware that the class of the epochs object is different\nwhen epochs are loaded from disk rather than generated from a\n:class:~mne.io.Raw object:", "print(type(epochs))\nprint(type(epochs_from_file))", "In almost all cases this will not require changing anything about your code.\nHowever, if you need to do type checking on epochs objects, you can test\nagainst the base class that these classes are derived from:", "print(all([isinstance(epochs, mne.BaseEpochs),\n isinstance(epochs_from_file, mne.BaseEpochs)]))", "Iterating over Epochs\nIterating over an :class:~mne.Epochs object will yield :class:arrays\n&lt;numpy.ndarray&gt; rather than single-trial :class:~mne.Epochs objects:", "for epoch in epochs[:3]:\n print(type(epoch))", "If you want to iterate over :class:~mne.Epochs objects, you can use an\ninteger index as the iterator:", "for index in range(3):\n print(type(epochs[index]))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
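The `(n_epochs, n_channels, n_times)` layout and `picks` slicing described in the `get_data` section above can be sketched with plain NumPy; the `slice(4, 9, 2)` pick is taken from the text, while the random array and its shape are just a hypothetical stand-in for real epoched data:

```python
import numpy as np

# Hypothetical stand-in for epochs.get_data(): (n_epochs, n_channels, n_times)
rng = np.random.default_rng(0)
data = rng.standard_normal((320, 60, 106))

# Picking a single channel keeps the 3-D layout (like picks='EOG 061')
one_channel = data[:, [5], :]

# Picking by slice, like picks=slice(4, 9, 2) -> channel indices 4, 6, 8
channel_4_6_8 = data[:, 4:9:2, :]

print(one_channel.shape, channel_4_6_8.shape)  # (320, 1, 106) (320, 3, 106)
```

This also shows why `epochs.get_data(item=2)` and `epochs[2].get_data()` agree: both ultimately index the same first axis of this array.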
celiacintas/dss_practica
ipynb/DM sobre RS y SW.ipynb
gpl-2.0
[ "Most of these exercises can be found in a much more complete form in the book Mining the Social Web\nFor this practical you will need to install via pip: twitter jsonlib, prettytable collections google-api-python-client feedparser nltk", "import twitter\nimport json\nfrom prettytable import PrettyTable\nfrom collections import Counter", "Twitter\nTo get access to a CONSUMER_KEY, OAUTH_TOKEN, etc. you must log in to twitter and will most likely have to provide a phone number, since your application must be registered with twitter. To see which steps to follow, visit this link", "CONSUMER_KEY = ''\nCONSUMER_SECRET =''\nOAUTH_TOKEN = ''\nOAUTH_TOKEN_SECRET = ''\n\nauth = twitter.oauth.OAuth(OAUTH_TOKEN, OAUTH_TOKEN_SECRET,\n CONSUMER_KEY, CONSUMER_SECRET)\n\ntwitter_api = twitter.Twitter(auth=auth)", "At this link you can find the Where On Earth ID of any city or country", "WORLD_WOE_ID = 1\nARG_WOE_ID = 23424747\n\nworld_trends = twitter_api.trends.place(_id=WORLD_WOE_ID)\narg_trends = twitter_api.trends.place(_id=ARG_WOE_ID)\n\nprint json.dumps(world_trends, indent=1)\n", "We can look at several attributes of the trends ...", "print map(lambda i: world_trends[0]['trends'][i]['name'], range(len(world_trends[0]['trends'])))\nprint map(lambda i: world_trends[0]['trends'][i]['url'], range(len(world_trends[0]['trends'])))\nprint map(lambda i: world_trends[0]['trends'][i]['query'], range(len(world_trends[0]['trends'])))\n", "We can work with sets in order to use all of their operations (union, intersection, membership, etc.)", "in_the_world = set(map(lambda i: world_trends[0]['trends'][i]['name'], range(len(world_trends[0]['trends']))))\n\nin_arg = set(map(lambda i: arg_trends[0]['trends'][i]['name'], range(len(arg_trends[0]['trends']))))\n\nprint in_the_world.union(in_arg)\nin_both = in_the_world.intersection(in_arg)\nin_both\n\nresults = twitter_api.search.tweets(q=in_both.pop() , count=100)\nresults", "We can extract information about which platform/device the tweets were posted from", "map( lambda i: results['statuses'][i]['source'], range(len(results['statuses'])))\n\ntext = map( lambda i: results['statuses'][i]['text'], range(len(results['statuses'])))\ntext\n\nscreen_name = map( lambda i: results['statuses'][i]['user']['screen_name'], range(len(results['statuses'])))\nscreen_name", "Number of followers, user id, etc.", "followers_count = map( lambda i: results['statuses'][i]['user']['followers_count'], range(len(results['statuses'])))\n\nid_ = map( lambda i: results['statuses'][i]['user']['id'], range(len(results['statuses'])))", "Tables with prettytable", "pt = PrettyTable()\npt.add_column('screen_name', screen_name)\npt.add_column('followers_count', followers_count)\nprint pt\n\nwords = [ word for twt in text for word in twt.split() ]\nc = Counter(words)\nprint c.most_common()\n\npt = PrettyTable(field_names=['Words', 'Count'])\nmap(lambda r: pt.add_row(r), c.most_common()[:5])\nprint pt", "Google +\nThe API_KEY can be obtained at this link", "import apiclient.discovery\nimport httplib2\n\nAPI_KEY = '' \n\nservice = apiclient.discovery.build('plus', 'v1', http=httplib2.Http(), \n developerKey=API_KEY)\n\npeople_feed = service.people().search(query='Guido Van Rossum').execute()\n\nprint json.dumps(people_feed['items'], indent=1)\n\nmap(lambda i: people_feed['items'][i]['displayName'], range(len(people_feed['items'])))\n\nfrom IPython.display import Image\n\nImage(url = people_feed['items'][0]['image']['url'])\n\n\nid_guido = people_feed['items'][0]['id']\n\nactivity_feed = service.activities().list(\n userId= id_guido,\n collection='public',\n maxResults='10' # Max allowed per API\n).execute()\n\nactivity_feed", "How many +1s did the first post receive .. what is its title?", "activity_feed['items'][0]['title']\n\nactivity_feed['items'][0]['object']['plusoners']\n\nactivity_feed['items'][0]['object']['replies']['totalItems']", "Getting data from web pages\nFeed data", "import feedparser\n\nfp = feedparser.parse('http://www.cb.uu.se/~cris/blog/index.php/feed')\n\nmap(lambda e: e.title, fp.entries)\n\nfp.entries[4].summary" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
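The word-count step in the prettytable section above (flatten the tweet texts, count with `Counter`, rank with `most_common`) can be sketched with made-up strings in place of the live Twitter search results:

```python
from collections import Counter

# Made-up tweet texts standing in for the live search results
text = ["big data is big", "data mining on social data"]

# Same flatten-and-count pattern as in the notebook
words = [word for twt in text for word in twt.split()]
c = Counter(words)
print(c.most_common(2))  # [('data', 3), ('big', 2)]
```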
mne-tools/mne-tools.github.io
0.14/_downloads/plot_spm_faces_dataset.ipynb
bsd-3-clause
[ "%matplotlib inline", "From raw data to dSPM on SPM Faces dataset\nRuns a full pipeline using MNE-Python:\n- artifact removal\n- averaging Epochs\n- forward model computation\n- source reconstruction using dSPM on the contrast : \"faces - scrambled\"\n\n<div class=\"alert alert-info\"><h4>Note</h4><p>This example does quite a bit of processing, so even on a\n fast machine it can take several minutes to complete.</p></div>", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Denis Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport os.path as op\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import spm_face\nfrom mne.preprocessing import ICA, create_eog_epochs\nfrom mne import io, combine_evoked\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\nprint(__doc__)\n\ndata_path = spm_face.data_path()\nsubjects_dir = data_path + '/subjects'", "Load and filter data, set up epochs", "raw_fname = data_path + '/MEG/spm/SPM_CTF_MEG_example_faces%d_3D.ds'\n\nraw = io.read_raw_ctf(raw_fname % 1, preload=True) # Take first run\n# Here to save memory and time we'll downsample heavily -- this is not\n# advised for real data as it can effectively jitter events!\nraw.resample(120., npad='auto')\n\npicks = mne.pick_types(raw.info, meg=True, exclude='bads')\nraw.filter(1, 30, method='iir')\n\nevents = mne.find_events(raw, stim_channel='UPPT001')\n\n# plot the events to get an idea of the paradigm\nmne.viz.plot_events(events, raw.info['sfreq'])\n\nevent_ids = {\"faces\": 1, \"scrambled\": 2}\n\ntmin, tmax = -0.2, 0.6\nbaseline = None # no baseline as high-pass is applied\nreject = dict(mag=5e-12)\n\nepochs = mne.Epochs(raw, events, event_ids, tmin, tmax, picks=picks,\n baseline=baseline, preload=True, reject=reject)\n\n# Fit ICA, find and remove major artifacts\nica = ICA(n_components=0.95, random_state=0).fit(raw, decim=1, reject=reject)\n\n# compute correlation scores, get bad indices sorted by 
score\neog_epochs = create_eog_epochs(raw, ch_name='MRT31-2908', reject=reject)\neog_inds, eog_scores = ica.find_bads_eog(eog_epochs, ch_name='MRT31-2908')\nica.plot_scores(eog_scores, eog_inds)  # see scores the selection is based on\nica.plot_components(eog_inds)  # view topographic sensitivity of components\nica.exclude += eog_inds[:1]  # we saw the 2nd ECG component looked too dipolar\nica.plot_overlay(eog_epochs.average())  # inspect artifact removal\nica.apply(epochs)  # clean data, default in place\n\nevoked = [epochs[k].average() for k in event_ids]\n\ncontrast = combine_evoked(evoked, weights=[-1, 1])  # Faces - scrambled\n\nevoked.append(contrast)\n\nfor e in evoked:\n    e.plot(ylim=dict(mag=[-400, 400]))\n\nplt.show()\n\n# estimate noise covariance\nnoise_cov = mne.compute_covariance(epochs, tmax=0, method='shrunk')", "Visualize fields on MEG helmet", "trans_fname = data_path + ('/MEG/spm/SPM_CTF_MEG_example_faces1_3D_'\n                           'raw-trans.fif')\n\nmaps = mne.make_field_map(evoked[0], trans_fname, subject='spm',\n                          subjects_dir=subjects_dir, n_jobs=1)\n\nevoked[0].plot_field(maps, time=0.170)", "Compute forward model", "# Make source space\nsrc_fname = data_path + '/subjects/spm/bem/spm-oct-6-src.fif'\nif not op.isfile(src_fname):\n    src = mne.setup_source_space('spm', src_fname, spacing='oct6',\n                                 subjects_dir=subjects_dir, overwrite=True)\nelse:\n    src = mne.read_source_spaces(src_fname)\n\nbem = data_path + '/subjects/spm/bem/spm-5120-5120-5120-bem-sol.fif'\nforward = mne.make_forward_solution(contrast.info, trans_fname, src, bem)\nforward = mne.convert_forward_solution(forward, surf_ori=True)", "Compute inverse solution", "snr = 3.0\nlambda2 = 1.0 / snr ** 2\nmethod = 'dSPM'\n\ninverse_operator = make_inverse_operator(contrast.info, forward, noise_cov,\n                                         loose=0.2, depth=0.8)\n\n# Compute inverse solution on contrast\nstc = apply_inverse(contrast, inverse_operator, lambda2, method, pick_ori=None)\n# stc.save('spm_%s_dSPM_inverse' % contrast.comment)\n\n# Plot 
contrast in 3D with PySurfer if available\nbrain = stc.plot(hemi='both', subjects_dir=subjects_dir, initial_time=0.170,\n views=['ven'])\n# brain.save_image('dSPM_map.png')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
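At the data level, the `combine_evoked(evoked, weights=[-1, 1])` contrast above amounts to a weighted sum of the two condition averages (this sketch ignores the `nave` bookkeeping that MNE also performs); the toy arrays below are hypothetical stand-ins for real evoked data:

```python
import numpy as np

# Toy (channels x times) condition averages; real ones would come from MNE
cond_a = np.array([[1.0, 2.0], [3.0, 4.0]])
cond_b = np.array([[0.5, 1.0], [1.0, 1.0]])

# Weighted combination, like combine_evoked([cond_a, cond_b], weights=[-1, 1])
weights = [-1, 1]
contrast = sum(w * c for w, c in zip(weights, [cond_a, cond_b]))
print(contrast)  # elementwise cond_b - cond_a
```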
probml/pyprobml
notebooks/book1/14/batchnorm_torch.ipynb
mit
[ "Please find jax implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/batchnorm_jax.ipynb\n<a href=\"https://colab.research.google.com/github/Nirzu97/pyprobml/blob/batchnorm-torch/notebooks/batchnorm_torch.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nBatch normalization\nWe implement a batchnorm layer from scratch and add to LeNet CNN.\nCode based on sec 7.5 of http://d2l.ai/chapter_convolutional-modern/batch-norm.html", "import numpy as np\nimport matplotlib.pyplot as plt\nimport math\nfrom IPython import display\n\ntry:\n import torch\nexcept ModuleNotFoundError:\n %pip install -qq torch\n import torch\ntry:\n import torchvision\nexcept ModuleNotFoundError:\n %pip install -qq torchvision\n import torchvision\nfrom torch import nn\nfrom torch.nn import functional as F\nfrom torch.utils import data\nfrom torchvision import transforms\n\nimport random\nimport os\nimport time\n\nnp.random.seed(seed=1)\ntorch.manual_seed(1)\n!mkdir figures # for saving plots", "Implementation from scratch\nFor fully connected layers, we take the average along minibatch samples for each dimension independently. For 2d convolutional layers, we take the average along minibatch samples, and along horizontal and vertical locations, for each channel (feature dimension) independently.\nWhen training, we update the estimate of the mean and variance using a moving average. 
When testing (doing inference), we use the pre-computed values.", "def batch_norm(X, gamma, beta, moving_mean, moving_var, eps, momentum):\n # Use `is_grad_enabled` to determine whether the current mode is training\n # mode or prediction mode\n if not torch.is_grad_enabled():\n # If it is prediction mode, directly use the mean and variance\n # obtained by moving average\n X_hat = (X - moving_mean) / torch.sqrt(moving_var + eps)\n else:\n assert len(X.shape) in (2, 4)\n if len(X.shape) == 2:\n # When using a fully-connected layer, calculate the mean and\n # variance on the feature dimension\n mean = X.mean(dim=0)\n var = ((X - mean) ** 2).mean(dim=0)\n else:\n # When using a two-dimensional convolutional layer, calculate the\n # mean and variance on the channel dimension (axis=1). Here we\n # need to maintain the shape of `X`, so that the broadcasting\n # operation can be carried out later\n mean = X.mean(dim=(0, 2, 3), keepdim=True)\n var = ((X - mean) ** 2).mean(dim=(0, 2, 3), keepdim=True)\n # In training mode, the current mean and variance are used for the\n # standardization\n X_hat = (X - mean) / torch.sqrt(var + eps)\n # Update the mean and variance using moving average\n moving_mean = momentum * moving_mean + (1.0 - momentum) * mean\n moving_var = momentum * moving_var + (1.0 - momentum) * var\n Y = gamma * X_hat + beta # Scale and shift\n return Y, moving_mean.data, moving_var.data", "Wrap the batch norm function in a layer", "class BatchNorm(nn.Module):\n # `num_features`: the number of outputs for a fully-connected layer\n # or the number of output channels for a convolutional layer. 
`num_dims`:\n # 2 for a fully-connected layer and 4 for a convolutional layer\n def __init__(self, num_features, num_dims):\n super().__init__()\n if num_dims == 2:\n shape = (1, num_features)\n else:\n shape = (1, num_features, 1, 1)\n # The scale parameter and the shift parameter (model parameters) are\n # initialized to 1 and 0, respectively\n self.gamma = nn.Parameter(torch.ones(shape))\n self.beta = nn.Parameter(torch.zeros(shape))\n # The variables that are not model parameters are initialized to 0 and 1\n self.moving_mean = torch.zeros(shape)\n self.moving_var = torch.ones(shape)\n\n def forward(self, X):\n # If `X` is not on the main memory, copy `moving_mean` and\n # `moving_var` to the device where `X` is located\n if self.moving_mean.device != X.device:\n self.moving_mean = self.moving_mean.to(X.device)\n self.moving_var = self.moving_var.to(X.device)\n # Save the updated `moving_mean` and `moving_var`\n Y, self.moving_mean, self.moving_var = batch_norm(\n X, self.gamma, self.beta, self.moving_mean, self.moving_var, eps=1e-5, momentum=0.9\n )\n return Y", "Applying batch norm to LeNet\nWe add BN layers after some of the convolutions and fully connected layers,\nbut before the activation functions.", "net = nn.Sequential(\n nn.Conv2d(1, 6, kernel_size=5),\n BatchNorm(6, num_dims=4),\n nn.Sigmoid(),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(6, 16, kernel_size=5),\n BatchNorm(16, num_dims=4),\n nn.Sigmoid(),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Flatten(),\n nn.Linear(16 * 4 * 4, 120),\n BatchNorm(120, num_dims=2),\n nn.Sigmoid(),\n nn.Linear(120, 84),\n BatchNorm(84, num_dims=2),\n nn.Sigmoid(),\n nn.Linear(84, 10),\n)", "Train the model\nWe train the model using the same code as in the standard LeNet colab. 
The only difference from the previous colab is the larger learning rate (which is possible because BN stabilizes training).", "def load_data_fashion_mnist(batch_size, resize=None):\n \"\"\"Download the Fashion-MNIST dataset and then load it into memory.\"\"\"\n trans = [transforms.ToTensor()]\n if resize:\n trans.insert(0, transforms.Resize(resize))\n trans = transforms.Compose(trans)\n mnist_train = torchvision.datasets.FashionMNIST(root=\"../data\", train=True, transform=trans, download=True)\n mnist_test = torchvision.datasets.FashionMNIST(root=\"../data\", train=False, transform=trans, download=True)\n return (\n data.DataLoader(mnist_train, batch_size, shuffle=True, num_workers=4),\n data.DataLoader(mnist_test, batch_size, shuffle=False, num_workers=4),\n )\n\nclass Animator:\n \"\"\"For plotting data in animation.\"\"\"\n\n def __init__(\n self,\n xlabel=None,\n ylabel=None,\n legend=None,\n xlim=None,\n ylim=None,\n xscale=\"linear\",\n yscale=\"linear\",\n fmts=(\"-\", \"m--\", \"g-.\", \"r:\"),\n nrows=1,\n ncols=1,\n figsize=(3.5, 2.5),\n ):\n # Incrementally plot multiple lines\n if legend is None:\n legend = []\n display.set_matplotlib_formats(\"svg\")\n self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize)\n if nrows * ncols == 1:\n self.axes = [\n self.axes,\n ]\n # Use a lambda function to capture arguments\n self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)\n self.X, self.Y, self.fmts = None, None, fmts\n\n def add(self, x, y):\n # Add multiple data points into the figure\n if not hasattr(y, \"__len__\"):\n y = [y]\n n = len(y)\n if not hasattr(x, \"__len__\"):\n x = [x] * n\n if not self.X:\n self.X = [[] for _ in range(n)]\n if not self.Y:\n self.Y = [[] for _ in range(n)]\n for i, (a, b) in enumerate(zip(x, y)):\n if a is not None and b is not None:\n self.X[i].append(a)\n self.Y[i].append(b)\n self.axes[0].cla()\n for x, y, fmt in zip(self.X, self.Y, self.fmts):\n 
self.axes[0].plot(x, y, fmt)\n self.config_axes()\n display.display(self.fig)\n display.clear_output(wait=True)\n\n\nclass Timer:\n \"\"\"Record multiple running times.\"\"\"\n\n def __init__(self):\n self.times = []\n self.start()\n\n def start(self):\n \"\"\"Start the timer.\"\"\"\n self.tik = time.time()\n\n def stop(self):\n \"\"\"Stop the timer and record the time in a list.\"\"\"\n self.times.append(time.time() - self.tik)\n return self.times[-1]\n\n def avg(self):\n \"\"\"Return the average time.\"\"\"\n return sum(self.times) / len(self.times)\n\n def sum(self):\n \"\"\"Return the sum of time.\"\"\"\n return sum(self.times)\n\n def cumsum(self):\n \"\"\"Return the accumulated time.\"\"\"\n return np.array(self.times).cumsum().tolist()\n\n\nclass Accumulator:\n \"\"\"For accumulating sums over `n` variables.\"\"\"\n\n def __init__(self, n):\n self.data = [0.0] * n\n\n def add(self, *args):\n self.data = [a + float(b) for a, b in zip(self.data, args)]\n\n def reset(self):\n self.data = [0.0] * len(self.data)\n\n def __getitem__(self, idx):\n return self.data[idx]\n\ndef set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend):\n \"\"\"Set the axes for matplotlib.\"\"\"\n axes.set_xlabel(xlabel)\n axes.set_ylabel(ylabel)\n axes.set_xscale(xscale)\n axes.set_yscale(yscale)\n axes.set_xlim(xlim)\n axes.set_ylim(ylim)\n if legend:\n axes.legend(legend)\n axes.grid()\n\ndef try_gpu(i=0):\n \"\"\"Return gpu(i) if exists, otherwise return cpu().\"\"\"\n if torch.cuda.device_count() >= i + 1:\n return torch.device(f\"cuda:{i}\")\n return torch.device(\"cpu\")\n\ndef accuracy(y_hat, y):\n \"\"\"Compute the number of correct predictions.\"\"\"\n if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:\n y_hat = torch.argmax(y_hat, axis=1)\n cmp_ = y_hat.type(y.dtype) == y\n return float(cmp_.type(y.dtype).sum())\n\n\ndef evaluate_accuracy_gpu(net, data_iter, device=None):\n \"\"\"Compute the accuracy for a model on a dataset using a GPU.\"\"\"\n if isinstance(net, 
torch.nn.Module):\n net.eval() # Set the model to evaluation mode\n if not device:\n device = next(iter(net.parameters())).device\n # No. of correct predictions, no. of predictions\n metric = Accumulator(2)\n for X, y in data_iter:\n X = X.to(device)\n y = y.to(device)\n metric.add(accuracy(net(X), y), y.numel())\n return metric[0] / metric[1]", "Training Function", "def train(net, train_iter, test_iter, num_epochs, lr, device):\n \"\"\"Train a model with a GPU (defined in Chapter 6).\"\"\"\n\n def init_weights(m):\n if type(m) == nn.Linear or type(m) == nn.Conv2d:\n nn.init.xavier_uniform_(m.weight)\n\n net.apply(init_weights)\n print(\"training on\", device)\n net.to(device)\n optimizer = torch.optim.SGD(net.parameters(), lr=lr)\n loss = nn.CrossEntropyLoss()\n animator = Animator(xlabel=\"epoch\", xlim=[1, num_epochs], legend=[\"train loss\", \"train acc\", \"test acc\"])\n timer, num_batches = Timer(), len(train_iter)\n for epoch in range(num_epochs):\n # Sum of training loss, sum of training accuracy, no. 
of examples\n metric = Accumulator(3)\n net.train()\n for i, (X, y) in enumerate(train_iter):\n timer.start()\n optimizer.zero_grad()\n X, y = X.to(device), y.to(device)\n y_hat = net(X)\n l = loss(y_hat, y)\n l.backward()\n optimizer.step()\n with torch.no_grad():\n metric.add(l * X.shape[0], accuracy(y_hat, y), X.shape[0])\n timer.stop()\n train_l = metric[0] / metric[2]\n train_acc = metric[1] / metric[2]\n if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:\n animator.add(epoch + (i + 1) / num_batches, (train_l, train_acc, None))\n test_acc = evaluate_accuracy_gpu(net, test_iter)\n animator.add(epoch + 1, (None, None, test_acc))\n print(f\"loss {train_l:.3f}, train acc {train_acc:.3f}, \" f\"test acc {test_acc:.3f}\")\n print(f\"{metric[2] * num_epochs / timer.sum():.1f} examples/sec \" f\"on {str(device)}\")\n\nlr, num_epochs, batch_size = 1.0, 10, 256\ntrain_iter, test_iter = load_data_fashion_mnist(batch_size)\ntrain(net, train_iter, test_iter, num_epochs, lr, try_gpu())", "Examine learned parameters", "net[1].gamma.reshape((-1,)), net[1].beta.reshape((-1,))", "Use PyTorch's batchnorm layer\nThe built-in layer is much faster than our Python code, since it is implemented in C++. Note that instead of specifying ndims=2 for fully connected layer (batch x features) and ndims=4 for convolutional later (batch x channels x height x width), we use BatchNorm1d or BatchNorm2d instead.", "net = nn.Sequential(\n nn.Conv2d(1, 6, kernel_size=5),\n nn.BatchNorm2d(6),\n nn.Sigmoid(),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Conv2d(6, 16, kernel_size=5),\n nn.BatchNorm2d(16),\n nn.Sigmoid(),\n nn.MaxPool2d(kernel_size=2, stride=2),\n nn.Flatten(),\n nn.Linear(256, 120),\n nn.BatchNorm1d(120),\n nn.Sigmoid(),\n nn.Linear(120, 84),\n nn.BatchNorm1d(84),\n nn.Sigmoid(),\n nn.Linear(84, 10),\n)", "Learning Curve", "train(net, train_iter, test_iter, num_epochs, lr, try_gpu())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
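The training-mode branch of the from-scratch `batch_norm` above (per-feature mean and variance over the batch, normalize, then scale and shift) can be checked numerically for the fully-connected case with a small NumPy sketch:

```python
import numpy as np

def batch_norm_train(X, gamma, beta, eps=1e-5):
    # Fully-connected case: statistics along the batch dimension (axis 0)
    mean = X.mean(axis=0)
    var = ((X - mean) ** 2).mean(axis=0)
    X_hat = (X - mean) / np.sqrt(var + eps)
    return gamma * X_hat + beta  # scale and shift

rng = np.random.default_rng(0)
X = rng.standard_normal((256, 8)) * 3.0 + 5.0  # shifted, scaled features
Y = batch_norm_train(X, gamma=np.ones(8), beta=np.zeros(8))

# Each feature ends up approximately zero-mean and unit-variance
print(np.abs(Y.mean(axis=0)).max(), np.abs(Y.std(axis=0) - 1.0).max())
```

With `gamma=1`, `beta=0` the output is the pure standardization; the learned scale and shift then let the network undo it where that helps.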
jllanfranchi/playground
test_tensor_operations/test_tensor_operations.ipynb
mit
[ "import numpy as np\nfrom numba import jit, vectorize, float32, float64\nimport pandas as pd\nfrom smartFormat import simpleFormat", "Test speed of tensor operations\nCreate test data\nThis is equivalent to the first-dimension-concatenated array that results from $\\nu_e$ and $\\nu_\\mu$ fluxes binned at $400 \\;E \\times 400 \\cos\\theta$ bins.\nThe resulting input array is then $2\\times400\\times400$, and the transform that must (effectively) left-multiply that is $3\\times 2\\times400\\times400$.\nThis hopefully represents a realistic scenario for performing an accurate oscillation calculation.", "np.random.seed(0)\nxform = np.require(\n a=np.random.random_sample((3, 2, 400, 400)),\n dtype=np.float64,\n requirements=['C_CONTIGUOUS', 'ALIGNED']\n)\ninputs = np.require(\n a=np.random.random_sample((2, 400, 400)),\n dtype=np.float64,\n requirements=['C_CONTIGUOUS', 'ALIGNED']\n)\nxform_fp32 = np.array(xform, dtype=np.float32)\ninputs_fp32 = np.array(inputs, dtype=np.float32)\n\nprint 'xform.dtype =', xform.dtype\nprint 'xform.flags =\\n', xform.flags\nprint 'inputs.dtype =', inputs.dtype\nprint 'inputs.flags =\\n', inputs.flags\n\nprint 'xform_fp32.dtype =', xform_fp32.dtype\nprint 'xform_fp32.flags =\\n', xform_fp32.flags\nprint 'inputs_fp32.dtype =', inputs_fp32.dtype\nprint 'inputs_fp32.flags =\\n', inputs_fp32.flags", "Numpy using einsum\nFloat64 math on float64 inputs/transforms", "ein_64m_64op = %timeit -r 10 -q -o np.einsum('ij..., j...', xform, inputs, dtype=np.float64, casting='unsafe');\n\nein_64m_64op_med = np.median(ein_64m_64op.all_runs) / ein_64m_64op.loops\nprint 'Median time, einsum FP64 math / FP64 operands:', \\\n simpleFormat(ein_64m_64op_med) + ' sec'\n\n\noutput_einsum = np.einsum('ij..., j...', xform, inputs)\nprint output_einsum.shape", "check that it's doing what I want it to do", "x = xform[:,:,1,10]\nx\n\ni = inputs[:,1,10]\ni\n\no = np.dot(x, i)\no\n\noutput_einsum[1,10,:]\n\nnp.all(o == output_einsum[1,10,:])", "Float32 math on 
float64 inputs/transforms", "ein_32m_64op = %timeit -r 10 -q -o np.einsum('ij..., j...', xform, inputs, dtype=np.float32, casting='unsafe');\n\nein_32m_64op_med = np.median(ein_32m_64op.all_runs) / ein_32m_64op.loops\nprint 'Median time, einsum FP32 math / FP64 operands:', \\\n simpleFormat(ein_32m_64op_med) + ' sec'\nprint simpleFormat(ein_32m_64op_med / ein_64m_64op_med*100)+'% of ein64_64'\n", "Float32 math on float32 inputs/transforms", "ein_32m_32op = %timeit -r 10 -q -o np.einsum('ij..., j...', xform_fp32, inputs_fp32, dtype=np.float32, casting='no');\n\nein_32m_32op_med = np.median(ein_32m_32op.all_runs) / ein_32m_32op.loops\nprint 'Median time, einsum FP32 math / FP32 operands:', \\\n simpleFormat(ein_32m_32op_med) + ' sec'\nprint simpleFormat(ein_32m_32op_med / ein_64m_64op_med*100)+'% of ein64_64'", "Python looping\nFloat64 math / float64 operands", "def apply_python_fp64(inputs, transform):\n N_k = inputs.shape[1]\n N_l = inputs.shape[2]\n output = np.empty((N_k, N_l, 3), np.float64)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[0,0,k,l]*inputs[0,k,l] +\n transform[0,1,k,l]*inputs[1,k,l]\n )\n output[k,l,1] = (\n transform[1,0,k,l]*inputs[0,k,l] +\n transform[1,1,k,l]*inputs[1,k,l]\n )\n output[k,l,2] = (\n transform[2,0,k,l]*inputs[0,k,l] +\n transform[2,1,k,l]*inputs[1,k,l]\n )\n return output\n\npy_64m_64op = %timeit -r 5 -q -o apply_python_fp64(inputs, xform);\n\npy_64m_64op_med = np.median(py_64m_64op.all_runs) / py_64m_64op.loops\nprint 'Median time, Python FP64 math / FP64 operands:', \\\n simpleFormat(py_64m_64op_med) + ' sec'\nprint simpleFormat(py_64m_64op_med / ein_64m_64op_med*100)+'% of ein64_64'\n\noutput_python = apply_python_fp64(inputs, xform)\nnp.all(output_python == output_einsum)", "Float32 math / float32 operands", "def apply_python_fp32(inputs, transform):\n N_k = inputs.shape[1]\n N_l = inputs.shape[2]\n output = np.empty((N_k, N_l, 3), np.float32)\n for k in range(N_k):\n for l in range(N_l):\n 
output[k,l,0] = (\n transform[0,0,k,l]*inputs[0,k,l] +\n transform[0,1,k,l]*inputs[1,k,l]\n )\n output[k,l,1] = (\n transform[1,0,k,l]*inputs[0,k,l] +\n transform[1,1,k,l]*inputs[1,k,l]\n )\n output[k,l,2] = (\n transform[2,0,k,l]*inputs[0,k,l] +\n transform[2,1,k,l]*inputs[1,k,l]\n )\n return output\n\npy_32m_32op = %timeit -r 5 -q -o apply_python_fp32(inputs_fp32, xform_fp32);\n\npy_32m_32op_med = np.median(py_32m_32op.all_runs) / py_32m_32op.loops\nprint 'Median time, Python FP32 math / FP32 operands:', \\\n simpleFormat(py_32m_32op_med) + ' sec'\nprint simpleFormat(py_32m_32op_med / ein_64m_64op_med*100)+'% of ein64_64'", "Numba\nFloat64 math on float64 operands", "@jit(\"float64[:,:,:](float64[:,:,:], float64[:,:,:,:])\",\n nopython=True, nogil=True, cache=True)\ndef apply_numba_fp64(inputs, transform):\n N_k = inputs.shape[1]\n N_l = inputs.shape[2]\n output = np.empty((N_k, N_l, 3), float64)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[0,0,k,l]*inputs[0,k,l] +\n transform[0,1,k,l]*inputs[1,k,l]\n )\n output[k,l,1] = (\n transform[1,0,k,l]*inputs[0,k,l] +\n transform[1,1,k,l]*inputs[1,k,l]\n )\n output[k,l,2] = (\n transform[2,0,k,l]*inputs[0,k,l] +\n transform[2,1,k,l]*inputs[1,k,l]\n )\n return output\n\nnu_64m_64op = %timeit -r 10 -q -o apply_numba_fp64(inputs, xform)\n\nnu_64m_64op_med = np.median(nu_64m_64op.all_runs) / nu_64m_64op.loops\nprint 'Median time, Numba FP64 math / FP64 operands:', \\\n simpleFormat(nu_64m_64op_med) + ' sec'\nprint simpleFormat(nu_64m_64op_med / ein_64m_64op_med*100)+'% of ein64_64'\n\noutput_numba = apply_numba_fp64(inputs, xform)\nnp.all(output_numba == output_einsum)", "Float32 math on float32 operands", "@jit(\"float32[:,:,:](float32[:,:,:], float32[:,:,:,:])\", nopython=True, nogil=True, cache=True)\ndef apply_numba_fp32(inputs, transform):\n N_k = inputs.shape[1]\n N_l = inputs.shape[2]\n output = np.empty((N_k, N_l, 3), float32)\n for k in range(N_k):\n for l in range(N_l):\n 
output[k,l,0] = (\n transform[0,0,k,l]*inputs[0,k,l] +\n transform[0,1,k,l]*inputs[1,k,l]\n )\n output[k,l,1] = (\n transform[1,0,k,l]*inputs[0,k,l] +\n transform[1,1,k,l]*inputs[1,k,l]\n )\n output[k,l,2] = (\n transform[2,0,k,l]*inputs[0,k,l] +\n transform[2,1,k,l]*inputs[1,k,l]\n )\n return output\n\nnu_32m_32op = %timeit -r 10 -q -o apply_numba_fp32(inputs_fp32, xform_fp32)\n\nnu_32m_32op_med = np.median(nu_32m_32op.all_runs) / nu_32m_32op.loops\nprint 'Median time, Numba FP32 math / FP32 operands:', \\\n simpleFormat(nu_32m_32op_med) + ' sec'\nprint simpleFormat(nu_32m_32op_med / ein_64m_64op_med*100)+'% of ein64_64'", "What about axes ordering?\nHow are these affected if we change the order of the axes? I.e., keep C memory layout, but join by flavor on the last dimension rather than the first.", "np.random.seed(0)\nxform = np.array(np.random.random_sample((400, 400, 3, 2)),\n dtype=np.float64)\ninputs = np.array(np.random.random_sample((400, 400, 2)),\n dtype=np.float64)\n\nxform_fp32 = np.array(xform, dtype=np.float32)\ninputs_fp32 = np.array(inputs, dtype=np.float32)\n\nnp.random.seed(0)\nxform = np.require(\n a=np.random.random_sample((400, 400, 3, 2)),\n dtype=np.float64,\n requirements=['C_CONTIGUOUS', 'ALIGNED']\n)\ninputs = np.require(\n a=np.random.random_sample((400, 400, 2)),\n dtype=np.float64,\n requirements=['C_CONTIGUOUS', 'ALIGNED']\n)\nxform_fp32 = np.array(xform, dtype=np.float32)\ninputs_fp32 = np.array(inputs, dtype=np.float32)\n\nprint 'xform.dtype =', xform.dtype\nprint 'xform.flags =\\n', xform.flags\nprint 'inputs.dtype =', inputs.dtype\nprint 'inputs.flags =\\n', inputs.flags\n\nprint 'xform_fp32.dtype =', xform_fp32.dtype\nprint 'xform_fp32.flags =\\n', xform_fp32.flags\nprint 'inputs_fp32.dtype =', inputs_fp32.dtype\nprint 'inputs_fp32.flags =\\n', inputs_fp32.flags\n\nein_64m_64op_sw = %timeit -r 10 -q -o np.einsum('...ij, ...j', xform, inputs, dtype=np.float64, casting='unsafe');\n\nein_64m_64op_sw_med = 
np.median(ein_64m_64op_sw.all_runs) / ein_64m_64op_sw.loops\nprint 'Median time, einsum FP64 math / FP64 operands swapped axes:', \\\n simpleFormat(ein_64m_64op_sw_med) + ' sec'\nprint simpleFormat(ein_64m_64op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'\n\nein_32m_64op_sw = %timeit -r 10 -q -o np.einsum('...ij, ...j', xform, inputs, dtype=np.float32, casting='unsafe');\n\nein_32m_64op_sw_med = np.median(ein_32m_64op_sw.all_runs) / ein_32m_64op_sw.loops\nprint 'Median time, einsum FP32 math / FP64 operands swapped axes:', \\\n simpleFormat(ein_32m_64op_sw_med) + ' sec'\nprint simpleFormat(ein_32m_64op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'\n\nein_32m_32op_sw = %timeit -r 10 -q -o np.einsum('...ij, ...j', xform_fp32, inputs_fp32, dtype=np.float32, casting='unsafe');\n\nein_32m_32op_sw_med = np.median(ein_32m_32op_sw.all_runs) / ein_32m_32op_sw.loops\nprint 'Median time, einsum FP32 math / FP32 operands swapped axes:', \\\n simpleFormat(ein_32m_32op_sw_med) + ' sec'\nprint simpleFormat(ein_32m_32op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'", "Python looping\n64 bit math on 64 bit operands", "def apply_python_fp64_sw(inputs, transform):\n N_k = inputs.shape[0]\n N_l = inputs.shape[1]\n output = np.empty((N_k, N_l, 3), np.float64)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[k,l,0,0]*inputs[k,l,0] +\n transform[k,l,0,1]*inputs[k,l,1]\n )\n output[k,l,1] = (\n transform[k,l,1,0]*inputs[k,l,0] +\n transform[k,l,1,1]*inputs[k,l,1]\n )\n output[k,l,2] = (\n transform[k,l,2,0]*inputs[k,l,0] +\n transform[k,l,2,1]*inputs[k,l,1]\n )\n return output\n\npy_64m_64op_sw = %timeit -r 5 -q -o apply_python_fp64_sw(inputs, xform);\n\npy_64m_64op_sw_med = np.median(py_64m_64op_sw.all_runs) / py_64m_64op_sw.loops\nprint 'Median time, Python FP64 math / FP64 operands swapped axes:', \\\n simpleFormat(py_64m_64op_sw_med) + ' sec'\nprint simpleFormat(py_64m_64op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'\n\ndef 
apply_python_fp32_sw(inputs, transform):\n N_k = inputs.shape[0]\n N_l = inputs.shape[1]\n output = np.empty((N_k, N_l, 3), np.float32)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[k,l,0,0]*inputs[k,l,0] +\n transform[k,l,0,1]*inputs[k,l,1]\n )\n output[k,l,1] = (\n transform[k,l,1,0]*inputs[k,l,0] +\n transform[k,l,1,1]*inputs[k,l,1]\n )\n output[k,l,2] = (\n transform[k,l,2,0]*inputs[k,l,0] +\n transform[k,l,2,1]*inputs[k,l,1]\n )\n return output\n\npy_32m_32op_sw = %timeit -r 5 -q -o apply_python_fp32_sw(inputs_fp32, xform_fp32);\n\npy_32m_32op_sw_med = np.median(py_32m_32op_sw.all_runs) / py_32m_32op_sw.loops\nprint 'Median time, Python FP32 math / FP32 operands swapped axes:', \\\n simpleFormat(py_32m_32op_sw_med) + ' sec'\nprint simpleFormat(py_32m_32op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'", "Numba looping\n64 bit math on 64 bit operands", "@jit(\"float64[:,:,:](float64[:,:,:], float64[:,:,:,:])\",\n nopython=True, nogil=True, cache=True)\ndef apply_numba_fp64_sw(inputs, transform):\n N_k = inputs.shape[0]\n N_l = inputs.shape[1]\n output = np.empty((N_k, N_l, 3), float64)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[k,l,0,0]*inputs[k,l,0] +\n transform[k,l,0,1]*inputs[k,l,1]\n )\n output[k,l,1] = (\n transform[k,l,1,0]*inputs[k,l,0] +\n transform[k,l,1,1]*inputs[k,l,1]\n )\n output[k,l,2] = (\n transform[k,l,2,0]*inputs[k,l,0] +\n transform[k,l,2,1]*inputs[k,l,1]\n )\n return output\n\nnu_64m_64op_sw = %timeit -r 10 -q -o apply_numba_fp64_sw(inputs, xform)\n\nnu_64m_64op_sw_med = np.median(nu_64m_64op_sw.all_runs) / nu_64m_64op_sw.loops\nprint 'Median time, Numba FP64 math / FP64 operands swapped axes:', \\\n simpleFormat(nu_64m_64op_sw_med) + ' sec'\nprint simpleFormat(nu_64m_64op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'", "32 bit math on 32 bit operands", "@jit(\"float32[:,:,:](float32[:,:,:], float32[:,:,:,:])\",\n nopython=True, nogil=True, cache=True)\ndef 
apply_numba_fp32_sw(inputs, transform):\n N_k = inputs.shape[0]\n N_l = inputs.shape[1]\n output = np.empty((N_k, N_l, 3), float32)\n for k in range(N_k):\n for l in range(N_l):\n output[k,l,0] = (\n transform[k,l,0,0]*inputs[k,l,0] +\n transform[k,l,0,1]*inputs[k,l,1]\n )\n output[k,l,1] = (\n transform[k,l,1,0]*inputs[k,l,0] +\n transform[k,l,1,1]*inputs[k,l,1]\n )\n output[k,l,2] = (\n transform[k,l,2,0]*inputs[k,l,0] +\n transform[k,l,2,1]*inputs[k,l,1]\n )\n return output\n\nnu_32m_32op_sw = %timeit -r 10 -q -o apply_numba_fp32_sw(inputs_fp32, xform_fp32)\n\nnu_32m_32op_sw_med = np.median(nu_32m_32op_sw.all_runs) / nu_32m_32op_sw.loops\nprint 'Median time, Numba FP32 math / FP32 operands swapped axes:', \\\n simpleFormat(nu_32m_32op_sw_med) + ' sec'\nprint simpleFormat(nu_32m_32op_sw_med / ein_64m_64op_med*100)+'% of ein64_64'", "Show summary of timing results\nTabulate the results for original axes ordering.", "timings = [\n {'Python FP64math FP64op': py_64m_64op_med},\n {'Python FP32math FP32op': py_32m_32op_med},\n {'einsum FP64math FP64op': ein_64m_64op_med},\n {'einsum FP32math FP64op': ein_32m_64op_med},\n {'einsum FP32math FP32op': ein_32m_32op_med},\n {'Numba FP64math FP64op': nu_64m_64op_med},\n {'Numba FP32math FP32op': nu_32m_32op_med}\n]\ntimings = pd.DataFrame(pd.Series(\n [t.values()[0] for t in timings],\n [t.keys()[0] for t in timings],\n)).T;", "Tabulate the results for swapped axes ordering.", "timings_sw = [\n {'Python FP64math FP64op axswp': py_64m_64op_sw_med},\n {'Python FP64math FP32op axswp': py_32m_32op_sw_med},\n {'einsum FP64math FP64op axswp': ein_64m_64op_sw_med},\n {'einsum FP32math FP64op axswp': ein_32m_64op_sw_med},\n {'einsum FP32math FP32op axswp': ein_32m_32op_sw_med},\n {'Numba FP64math FP64op axswp': nu_64m_64op_sw_med},\n {'Numba FP32math FP32op axswp': nu_32m_32op_sw_med}\n]\ntimings_sw = pd.DataFrame(pd.Series(\n [t.values()[0] for t in timings_sw],\n [t.keys()[0] for t in timings_sw],\n)).T;", "Absolute timings 
(sec)\nOriginal axes ordering (flavor concatenated on first dimension)", "timings", "Swapped axes ordering (flavor concatenated on last dimension)", "timings_sw", "Timings as fraction of einsum FP64-math, FP64-operands, orig. axes ordering\nOriginal axes ordering (flavor concatenated on first dimension)", "timings / timings['einsum FP64math FP64op'].values", "Swapped axes ordering (flavor concatenated on last dimension)", "timings_sw / timings['einsum FP64math FP64op'].values", "Computer used for test", "!!hostname\n\n!!lscpu" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
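The benchmarks in the notebook above all time the same operation: at every grid point, a 3x2 transform matrix is applied to a 2-vector. As a plain-Python sketch of those semantics — independent of NumPy, einsum, and Numba; the function name and the tiny one-point grid are illustrative, not from the original notebook:

```python
def apply_transform(inputs, transform):
    """inputs: grid of 2-vectors; transform: grid of 3x2 matrices (nested lists)."""
    output = []
    for row_vecs, row_mats in zip(inputs, transform):
        out_row = []
        for vec, mat in zip(row_vecs, row_mats):
            # 3x2 matrix times 2-vector, the same sums the benchmarked loops compute
            out_row.append([mat[i][0] * vec[0] + mat[i][1] * vec[1] for i in range(3)])
        output.append(out_row)
    return output

# one grid point: embed the 2-vector [1, 2] into three components
inputs = [[[1.0, 2.0]]]
transform = [[[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]]]
print(apply_transform(inputs, transform))  # [[[1.0, 2.0, 3.0]]]
```

This is what `np.einsum('ij..., j...', xform, inputs)` (or `'...ij, ...j'` for the swapped axes ordering) computes in one call; the axis-ordering experiments only change which index varies fastest in memory, not the arithmetic.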
UCHIC/ciws-server
src/ciws_ci/data_posting_service/file_upload_requests.ipynb
bsd-3-clause
[ "Posting files to the web server\nUsing the requests library:", "import requests\n\n# define urls for token generation and file upload\nupload_token_url = 'http://ciwsdbs.uwrl.usu.edu/auth'\nupload_url = 'http://ciwsdbs.uwrl.usu.edu/data-api'\nclient_passcode = 'XhTVtPjQWyw64awm7td+3ygiIpLDkE3uBaHSc7Yz/AA='\n\n# store file and filename for server request\ndata_file = open('series_data.csv', 'rb')\nfiles = [('data_file[]', data_file), ]\nfilenames = ['series_data.csv', ]\n\n# make requests\nupload_token = requests.post(upload_token_url, data={'token': client_passcode, 'filenames': filenames})\nupload_response = requests.post(upload_url, headers={'Authorization': f'Bearer {upload_token.text}'}, files=files)\n\ndata_file.close()", "The success/error messages from the server are stored in the response:", "print(upload_response.text)", "To send multiple files to the web service, just add them to the list all under the name data_file[] \n(this is the name of the field, not to be confused with the filename.)", "files = [\n ('data_file[]', open('series_data1.csv', 'rb')), \n ('data_file[]', open('series_data2.csv', 'rb')),\n ('data_file[]', open('series_data3.csv', 'rb')), ]\nfilenames = ['series_data1.csv', 'series_data2.csv', 'series_data3.csv', ]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
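The multi-file cell above opens several handles that are never closed. A hedged stdlib sketch of one way to build the same `data_file[]` list with guaranteed cleanup (the helper name is illustrative, not from the original notebook; the actual `requests.post` call is omitted since it needs the live server):

```python
import contextlib
import os
import tempfile

def open_upload_files(paths, field_name='data_file[]'):
    """Build the multipart files list for requests.post(files=...), keeping
    every handle on an ExitStack so they can all be closed in one place."""
    stack = contextlib.ExitStack()
    files = [(field_name, stack.enter_context(open(p, 'rb'))) for p in paths]
    return stack, files

# demo with temporary files standing in for the CSV exports
paths = []
for _ in range(3):
    fd, name = tempfile.mkstemp(suffix='.csv')
    os.write(fd, b't,v\n')
    os.close(fd)
    paths.append(name)

stack, files = open_upload_files(paths)
print(len(files), files[0][0])  # 3 data_file[]
# ... requests.post(upload_url, files=files) would go here ...
stack.close()  # closes every opened file, even after an exception

for p in paths:
    os.remove(p)
```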
geography-munich/sciprog
material/solutions01.ipynb
apache-2.0
[ "Simple looping\n\nWrite a small script that prints 10 times some string.\nchange your program now, so that it prints $N$ times the string, where $N$ can be any arbitrary positive integer\nmodify the program now so that a function does the actual printing. This function should take a string as well as the number of printouts as arguments", "for i in range(10):\n print i+1\n\nfor i in range(1,11):\n print i\n\nN=3\nfor i in range(N):\n print i+1\n\ndef do_print(s, N):\n for i in range(N):\n print('%s, %i' % (s, i))\n \ndo_print('Hallo', 3)\ndo_print('Welt', 5)", "Playing with if clauses\n1. Print all numbers from 1 ... 100 to the output in a loop", "for x in range(10): print x+1 # some alternative way of looping\n\nfor x in range(100):\n if x == 55:\n print('****** %i ******' % x)\n else:\n print(x)", "Stefan Boltzmann & Co\nWrite a function that does the following:\nGiven a temperature and emissivity, it should calculate and return\na) the total emitted power by a body [W/m**2]\nb) the maximum wavelength of the emission [µm]\nUse a black body as a default setup, but allow the user to also specify a grey body", "def sboltz(T, e):\n return e*5.67E-8*T**4. 
# W/m**2\n\ndef wien(T):\n return 2897.8/T # returns lmax in µm\n\ndef radiation(T, e=1.):\n Q = sboltz(T, e)\n l_max = wien(T)\n return Q, l_max\n\nprint radiation(6000.)\n\nprint radiation(300., e=0.9)\n\n", "Calculations with relative humidity\nTask 1: Given the relative humidity and air temperature, define a function that returns the actual water vapor pressure [Pa] as well as the saturated water vapor pressure.\nTask 2: use this function now further for the following task: Define a function that returns the dew point of the air, given a relative humidity and air temperature.", "import math\ndef wvpressure(T, rh):\n # uses the empirical Magnus formula for air over open water bodies\n # validity limited to certain temperature regions\n \n # perform some validity checks\n assert rh >= 0.\n assert rh <= 1.\n assert T > -45., 'Invalid temperature!'\n assert T < 60.\n \n es = 6.112 * math.exp((17.62*T)/(243.12+T)) * 100. # factor 100 as results should be in Pa\n return es*rh, es\n \nT=50.\nrh = 1.\n \ne, es = wvpressure(T, rh)\n\nprint(T, e, es)\n\n# search for dewpoint (the quick and dirty way ...)\nT = 10.\nrh = 0.8\n\ne, es = wvpressure(T, rh)\n\n# now we have the actual water vapor pressure and need to search for the temperature where this corresponds to Es\n# we use an iterative solution here\n\nt0 = -40.\ndelta = 999999999.\ndt = 0.5\nt = t0*1.\ntsol = t0*1.\ntmax = T\nwhile t < tmax: # this is not good practice, but we will only learn later how to do it better ...\n E, ES = wvpressure(t, rh)\n if abs(ES-e) < delta:\n delta = abs(ES-e)\n tsol = t*1.\n t += dt\nprint('The dew point was found at a temperature of %f degrees Celsius' % tsol)\n\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
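The iterative 0.5-degree scan in the solution above can be avoided entirely: the Magnus formula used in `wvpressure` is invertible in closed form. A hedged sketch, assuming the same constants 17.62 and 243.12 (the function name `dewpoint_magnus` is illustrative, not part of the original solution):

```python
import math

def dewpoint_magnus(T, rh):
    """Dew point in deg C from air temperature T [deg C] and relative humidity rh (0..1].

    Inverts es(Td) = 6.112 * exp(17.62*Td / (243.12 + Td)) for the temperature Td
    at which the saturation pressure equals the actual vapour pressure rh * es(T).
    """
    gamma = math.log(rh) + 17.62 * T / (243.12 + T)
    return 243.12 * gamma / (17.62 - gamma)

print(round(dewpoint_magnus(10.0, 0.8), 2))  # ~6.71, consistent with the 0.5-degree scan
# sanity check: saturated air has its dew point at the air temperature
print(round(dewpoint_magnus(20.0, 1.0), 6))  # 20.0
```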
xclxxl414/rqalpha
docs/source/notebooks/run-rqalpha-in-ipython.ipynb
apache-2.0
[ "IPython and RQAlpha\nLoad the RQAlpha magic", "%load_ext rqalpha", "View the help for the RQAlpha magic\nWe can run backtest code directly in a cell via %%rqalpha. The arguments after %%rqalpha are equivalent to the arguments passed to rqalpha run on the CLI", "%%rqalpha -h\n\"\"", "Run a backtest with %%rqalpha", "%%rqalpha -s 20100101 -e 20170505 -p -bm 000001.XSHG --account stock 100000\n\ndef init(context):\n context.stocks = ['000300.XSHG', '000905.XSHG', '000012.XSHG']\n\n \ndef handle_bar(context, bar_dict):\n [hs, zz, gz] = context.stocks\n hs_history20 = history_bars(hs, 20, '1d', 'close')\n zz_history20 = history_bars(zz, 20, '1d', 'close')\n \n hsIncrease = hs_history20[-1] - hs_history20[0]\n zzIncrease = zz_history20[-1] - zz_history20[0]\n \n positions = context.portfolio.positions\n [hsQuality, zzQuality, gzQuality] = [positions[hs].quantity, positions[zz].quantity, positions[gz].quantity]\n if hsIncrease < 0 and zzIncrease < 0:\n if hsQuality > 0: order_target_percent(hs, 0)\n if zzQuality > 0: order_target_percent(zz, 0)\n order_target_percent(gz, 1)\n elif hsIncrease < zzIncrease:\n if hsQuality > 0: order_target_percent(hs, 0)\n if gzQuality > 0: order_target_percent(gz, 0)\n order_target_percent(zz, 1)\n else:\n if zzQuality > 0: order_target_percent(zz, 0)\n if gzQuality > 0: order_target_percent(gz, 0)\n order_target_percent(hs, 1)\n #logger.info(\"positions hs300: \" + str(hsQuality) + \", zz500: \" + str(zzQuality) + \", gz: \" + str(gzQuality))", "Get the backtest report\nAfter the backtest finishes, the report is automatically stored in the report variable, so the results of the run can be accessed directly through it.\nIn addition, the output of RQAlpha's mods is automatically stored in the results variable.", "results.keys()\n\nreport.keys()\n\nreport.trades[:5]\n\nreport.portfolio[:5]\n\nreport.stock_positions[:5]", "Run a backtest with run_func", "config = {\n \"base\": {\n \"start_date\": \"2010-01-01\",\n \"end_date\": \"2017-05-05\",\n \"benchmark\": \"000001.XSHG\",\n \"accounts\": {\n \"stock\": 100000\n }\n },\n \"extra\": {\n \"log_level\": \"info\",\n },\n \"mod\": {\n \"sys_analyser\": {\n \"enabled\": True,\n \"plot\": True,\n },\n }\n}\n\n\nfrom rqalpha.api import *\nfrom rqalpha import 
run_func\n\n\ndef init(context):\n context.stocks = ['000300.XSHG', '000905.XSHG', '000012.XSHG']\n\n \ndef handle_bar(context, bar_dict):\n [hs, zz, gz] = context.stocks\n hs_history20 = history_bars(hs, 20, '1d', 'close')\n zz_history20 = history_bars(zz, 20, '1d', 'close')\n \n hsIncrease = hs_history20[-1] - hs_history20[0]\n zzIncrease = zz_history20[-1] - zz_history20[0]\n \n positions = context.portfolio.positions\n [hsQuality, zzQuality, gzQuality] = [positions[hs].quantity, positions[zz].quantity, positions[gz].quantity]\n if hsIncrease < 0 and zzIncrease < 0:\n if hsQuality > 0: order_target_percent(hs, 0)\n if zzQuality > 0: order_target_percent(zz, 0)\n order_target_percent(gz, 1)\n elif hsIncrease < zzIncrease:\n if hsQuality > 0: order_target_percent(hs, 0)\n if gzQuality > 0: order_target_percent(gz, 0)\n order_target_percent(zz, 1)\n else:\n if zzQuality > 0: order_target_percent(zz, 0)\n if gzQuality > 0: order_target_percent(gz, 0)\n order_target_percent(hs, 1)\n \n \nresults = run_func(init=init, handle_bar=handle_bar, config=config)\n\nreport = results[\"sys_analyser\"]\n\nreport[\"trades\"][:5]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
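The handle_bar logic in both cells above reduces to one pure decision rule: hold the bond index when both equity indices fell over the last 20 days, otherwise hold the stronger equity index. Pulled out as a pure function, it can be unit-tested without running a backtest (`choose_target` is an illustrative helper, not part of the RQAlpha API):

```python
def choose_target(hs_increase, zz_increase):
    """Return which instrument to hold: 'gz' (the bond index) if both equity
    indices fell over the lookback window, otherwise the index with the
    larger 20-day increase ('hs' or 'zz')."""
    if hs_increase < 0 and zz_increase < 0:
        return 'gz'
    elif hs_increase < zz_increase:
        return 'zz'
    else:
        return 'hs'

print(choose_target(-1.0, -2.0))  # gz
print(choose_target(1.0, 2.0))    # zz
print(choose_target(3.0, 2.0))    # hs
```

Inside handle_bar, the return value would then map to the corresponding `order_target_percent` calls, keeping the order logic separate from the signal.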
squishbug/DataScienceProgramming
05-Operating-with-Multiple-Tables/AdvancedTables_orig.ipynb
cc0-1.0
[ "from IPython.core.display import display, HTML\ndisplay(HTML(\"<style>.container { width:100% !important; }</style>\"))", "Advanced Tables\nWhy are databases so complex?\n\nData stored in a database may be split into multiple tables, each containing multiple columns. A column stores a single attribute of the data; a table stores a collection of related attributes.\nThe database also keeps track of the relationships between different tables. \n\nDatabases are designed to minimize redundancy and maintain data integrity, particularly when data is added, changed, or deleted.\n\nConsistency when updating: no duplicate places for information https://en.wikipedia.org/wiki/Database_normalization\nPerformance https://en.wikipedia.org/wiki/Star_schema\n\n\n\nSide note: you may also have to think about isolation level when working with a database where someone may be updating data as you're trying to read it. The isolation level determines the database read behavior in this situation. See https://en.wikipedia.org/wiki/Isolation_(database_systems).\n\n\nWorking with multiple tables\n\nTwo tables can be joined at a time. 'Join' is a binary operator. See https://en.wikipedia.org/wiki/Join_(SQL).\nTables must have key values that can be matched. Usually one table has a primary key and the other table has a foreign key.\n\nPandas\n\nPandas allows \"merge\", \"join\", and \"concatenate\" operations. See http://pandas.pydata.org/pandas-docs/version/0.18.1/merging.html#merge-join-and-concatenate for additional reading.\nPandas also allows reshaping and pivoting data tables, see http://pandas.pydata.org/pandas-docs/version/0.18.1/reshaping.html.\n\nIn this class, we will cover table joining, merging and concatenation. 
We will also go over using some of the time-series handling capabilities in Pandas.", "import pandas as pd\nimport numpy as np", "Concatenating tables in Pandas\nTo introduce join operations, we will be working with the AdventureWorks dataset, a standard dataset from Microsoft SQL Server for learning to work with databases. It contains data for the fictitious bicycle manufacturer (Adventure Works Cycles).\nLet's start by importing some tables from AdventureWorks in /home/data/AdventureWorks. These tables contain data on AdventureWorks employees, sales territories, customers, and orders placed by the customers.", "Employees = pd.read_excel('/home/data/AdventureWorks/Employees.xls')\nTerritory = pd.read_excel('/home/data/AdventureWorks/SalesTerritory.xls')\nCustomers = pd.read_excel('/home/data/AdventureWorks/Customers.xls')\nOrders = pd.read_excel('/home/data/AdventureWorks/ItemsOrdered.xls')", "Let's take a look at the data we'll be working with:", "Employees.head()\n\nTerritory.head()\n\nCustomers.head()\n\nOrders.head()", "Let's construct a slightly artificial example. Suppose that AdventureWorks was formed by merging two companies, AdventuresUSA which operated in the US and AdventuresWorld, which operated in other countries. Now we want information on their combined sales territories. \nThe Pandas \"concat\" function is good for stacking tables on top of each other. We will use it to combine the AdventuresUSA and AdventuresWorld territories data tables.", "help(pd.concat)\n\n# constructing the territory tables... 
as noted, this is an artificial example\nTerritoryUSA = Territory[Territory.CountryCode=='US']; TerritoryUSA['RepID'] = np.random.randint(1,1000,5)\nTerritoryWorld = Territory[Territory.CountryCode!='US']\n\nTerritoryUSA\n\nTerritoryWorld\n\n# we'll concatenate the databases, but keep separate keys so that we can keep track of which entries came from AdventuresUSA and \n# which from AdventuresWorld.\n# We'll use \"join='inner'\" to only keep columns that are common to both tables; \n# that is, we will drop the no-longer needed RepID in AdventuresUSA. \nTerritory2 = pd.concat([TerritoryUSA, TerritoryWorld], keys=['usa', 'world'], join='inner')\n\nTerritory2", "Pandas \"append\" behaves just like \"concat\" with axis=0 and join='outer' (i.e., keep all column names). Missing values are set to NaN.", "help(pd.DataFrame.append)\n\nTerritory3 = TerritoryUSA.append(TerritoryWorld)\n\nTerritory3", "Joining and merging tables in Pandas\nJoin and merge are powerful tools for working with multiple tables. We will use them to answer some questions about the\nAdventureWorks dataset that you might encounter in real-life situations.\nJoin does fast table joining on a shared index. \nMerge does the same thing, but gives you the option to specify columns to join on. \nThe idea of joining on a column will become clearer with some examples.\nExample 1. \"I want a list of all employees, and if any are salespeople then show me the details about their sales territory\"\nFrom AdventureWorks, we have a table \"Employees\" that gives a lot of information about AdventureWorks employees, like 'EmployeeID', 'ManagerID', 'TerritoryID', 'Title', 'FirstName', 'MiddleName', 'LastName', 'Suffix', 'JobTitle', 'NationalIDNumber', 'BirthDate', 'MaritalStatus', 'Gender', 'HireDate', 'SalariedFlag', 'VacationHours', 'SickLeaveHours', 'PhoneNumber', 'PhoneNumberType', 'EmailAddress', 'AddressLine1', 'AddressLine2', 'City', 'StateProvinceName', 'PostalCode', 'CountryName'. 
\\\nSince we're just being asked for a list of employees, we'll give the EmployeeID and their first, middle, and last names, and their role in the company (since additional information is requested for salespeople only). Then, for the salespeople, we must attach information about their sales territories, which is contained in the Territories table. \nNotice that the Employees table has a column 'TerritoryID', which corresponds to the primary key in the 'Territory' table (in 'Territory', each territory has a unique 'TerritoryID'). We'll do a join on TerritoryID.", "help(pd.merge)\n\nAns = pd.merge(Employees.loc[:,[\"EmployeeID\",\"FirstName\",\"MiddleName\",\"LastName\",\"JobTitle\",\"TerritoryID\"]], \n Territory, \n how='left', on='TerritoryID')\nAns.head()\n\nX = Ans[[\"FirstName\",\"MiddleName\",\"LastName\"]]\n\nAns['EmployeeName'] = X.apply(lambda x: x.LastName+\", \"+x.FirstName+\" \"+str(x.MiddleName), axis=1)\n\nAns\n\n# Overachiever answer:\nAns['EmployeeName'] = Ans[[\"FirstName\",\"MiddleName\",\"LastName\"]].apply(lambda x: x.LastName+\", \"+x.FirstName+\" \"+str(x.MiddleName), axis=1)\nAns = Ans[['EmployeeName', 'EmployeeID', 'JobTitle', 'TerritoryID', 'Name', 'CountryCode', 'Region', 'SalesYTD', 'SalesLastYear']]\nAns", "\"For the list above, limit the results to just salespeople\"", "Ans2 = Ans[Ans.JobTitle=='Sales Representative']\nAns2\n\n# Overachiever: What about *all* employees associated with sales?\nAns2 = Ans[Ans[\"JobTitle\"].apply(lambda x: 'Sales' in x)]\nAns2", "\"Give me a list of our customers, and also tell me which sales territory they fall in.\"\nThis looks like another question for \"merge\"! We have a list of customers with their addresses, and we have a list of territories, but they are in separate tables. 
\nLet's recover a list of customer names and IDs, together with corresponding sales territory names.\nThis time, we have to be careful, because \"TerritoryID\" in the Territory table matches \"SalesTerritoryID\" in the table Customers. So, we'll have to specify different columns names to merge on for the two tables.", "Ans3 = pd.merge(Customers[[\"CustomerID\",\"FirstName\",\"LastName\",\"SalesTerritoryID\"]], \n Territory[[\"TerritoryID\",\"Name\"]], \n how='left', \n left_on='SalesTerritoryID', right_on='TerritoryID', )\nAns3", "\"Give me a list of all sales territories, also show what customers fall under them\"", "Ans = pd.merge(Territory, Customers, how=\"inner\", left_on=\"TerritoryID\", right_on=\"SalesTerritoryID\")\n\nAns", "\"Give me a list of the customers we have in North Carolina, and tell me how many there are.\"", "# In-class exercise! :)\n\nListNC = Customers[Customers.StateName=='North Carolina']\nListNC\n\nListNC.CustomerID.count()\n\nCustomers = Customers.set_index(\"StateName\")\nCustomers.loc['North Carolina']", "\"For each of the items ordered, show the total price (sometimes they ordered more than 1 item)\"", "# We'll use the Orders table for this! In-class exercise :)\n\nOrders[\"TotalItemPrice\"] = Orders.Quantity * Orders.Price\nOrders.head()\n\nOrders.groupby([\"CustomerID\",\"OrderDate\"]).TotalItemPrice.sum()", "\"Show a list of customers, and the total amount of money they have spent with AdventureWorks. I want the highest spenders to appear first!\"", "# In-class exercise! 
:)\n\nAns_1 = pd.merge(Customers[[\"CustomerID\",\"FirstName\",\"LastName\"]], Orders[[\"CustomerID\",\"TotalItemPrice\"]], how=\"inner\", on=\"CustomerID\" )\n\nAns_1\n\npd.DataFrame(Ans_1.groupby(\"CustomerID\").TotalItemPrice.sum())\n\nAns_1a = Ans_1[[\"FirstName\",\"LastName\",\"CustomerID\"]].set_index(\"CustomerID\")\nAns_1b = Ans_1a[Ans_1a.duplicated()==False]\n\nAns_2 = pd.DataFrame(Ans_1.groupby(\"CustomerID\").TotalItemPrice.sum()).join(Ans_1b)\n\nAns_2.sort_values(\"TotalItemPrice\", ascending=False)\n\nAns_2.sort_values(ascending=False)\n\nTerritory.apply(max, axis=0)", "Another side note:", "help(pd.DataFrame.combine_first)\n\nhelp(pd.DataFrame.update)\n\n\nCustomers" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
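The repeated `pd.merge(..., how='left')` calls in the notebook above all implement a left outer join. The same idea sketched in plain Python makes the mechanics explicit (illustrative names and toy rows; an unmatched left row simply keeps its own fields, where pandas would fill the missing right-hand columns with NaN):

```python
def left_join(left_rows, right_rows, key):
    """Left outer join of two lists of dict rows on a shared key column."""
    right_by_key = {r[key]: r for r in right_rows}
    joined = []
    for row in left_rows:
        match = right_by_key.get(row[key], {})
        merged = dict(row)
        for k, v in match.items():
            if k != key:          # don't duplicate the join key
                merged[k] = v
        joined.append(merged)
    return joined

employees = [{'EmployeeID': 1, 'TerritoryID': 10},
             {'EmployeeID': 2, 'TerritoryID': None}]   # no sales territory
territories = [{'TerritoryID': 10, 'Name': 'Northeast'}]
print(left_join(employees, territories, 'TerritoryID'))
```

Building a dict keyed on the join column is also why a hash join is fast: each left row costs one lookup instead of a scan of the right table.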
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/.ipynb_checkpoints/Datetime-checkpoint.ipynb
apache-2.0
[ "datetime\nPython has the datetime module to help deal with timestamps in your code. Time values are represented with the time class. Times have attributes for hour, minute, second, and microsecond. They can also include time zone information. The arguments to initialize a time instance are optional, but the default of 0 is unlikely to be what you want.\ntime\nLet's take a look at how we can extract time information from the datetime module. We can create a timestamp by specifying datetime.time(hour,minute,second,microsecond)", "import datetime\n\nt = datetime.time(4, 20, 1)\n# Let's show the different components\n\nprint t\nprint 'hour :', t.hour\nprint 'minute:', t.minute\nprint 'second:', t.second\nprint 'microsecond:', t.microsecond\nprint 'tzinfo:', t.tzinfo", "Note: A time instance only holds values of time, and not a date associated with the time. \nWe can also check the min and max values a time of day can have in the module:", "print 'Earliest :', datetime.time.min\nprint 'Latest :', datetime.time.max\nprint 'Resolution:', datetime.time.resolution", "The min and max class attributes reflect the valid range of times in a single day.\nDates\ndatetime (as you might suspect) also allows us to work with date timestamps. Calendar date values are represented with the date class. Instances have attributes for year, month, and day. 
It is easy to create a date representing today’s date using the today() class method.\nLet's see some examples:", "today = datetime.date.today()\nprint today\nprint 'ctime:', today.ctime()\nprint 'tuple:', today.timetuple()\nprint 'ordinal:', today.toordinal()\nprint 'Year:', today.year\nprint 'Mon :', today.month\nprint 'Day :', today.day", "As with time, the range of date values supported can be determined using the min and max attributes.", "print 'Earliest :', datetime.date.min\nprint 'Latest :', datetime.date.max\nprint 'Resolution:', datetime.date.resolution", "Another way to create new date instances uses the replace() method of an existing date. For example, you can change the year, leaving the day and month alone.", "d1 = datetime.date(2015, 3, 11)\nprint 'd1:', d1\n\nd2 = d1.replace(year=1990)\nprint 'd2:', d2", "Arithmetic\nWe can perform arithmetic on date objects to check for time differences. For example:", "d1\n\nd2\n\nd1-d2", "This gives us the difference in days between the two dates. You can use the timedelta method to specify various units of time (days, minutes, hours, etc.)\nGreat! You should now have a basic understanding of how to use datetime with Python to work with timestamps in your code!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
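The notebook above mentions timedelta but stops at bare date subtraction. A short sketch of timedelta arithmetic, reusing the notebook's own d1/d2 dates (2015-03-11 and its year-1990 replacement):

```python
import datetime

d1 = datetime.date(2015, 3, 11)
d2 = d1.replace(year=1990)

delta = d1 - d2              # subtraction yields a timedelta
print(delta.days)            # 9131 whole days between 1990-03-11 and 2015-03-11

# timedeltas also shift dates forward or backward
later = d1 + datetime.timedelta(weeks=2, days=3)
print(later.isoformat())     # 2015-03-28
```

Note that `datetime.date` arithmetic only supports day-granularity units (weeks, days); minute- and hour-level offsets need a full `datetime.datetime`.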
Salman-H/bike-sharing-network
Your_first_neural_network.ipynb
mit
[ "Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!", "data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()", "Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.", "rides[:24*10].plot(x='dteday', y='cnt')", "Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. 
This is simple to do with Pandas thanks to get_dummies().", "dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()", "Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.", "quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std", "Splitting the data into training, testing, and validation sets\nWe'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.", "# Save data for approximately the last 21 days \ntest_data = data[-21*24:]\n\n# Now remove the test data from the data set \ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]", "We'll split the data into two sets, one for training and one for validating as the network is being trained. 
Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).", "# Hold out the last 60 days or so of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]", "Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\n<img src=\"assets/neural_network.png\" width=300px>\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. 
Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.", "class NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, \n (self.input_nodes, self.hidden_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.output_nodes))\n self.lr = learning_rate\n \n #### TODO: Set self.activation_function to your implemented sigmoid function ####\n #\n # Note: in Python, you can define a function with a lambda expression,\n # as shown below.\n #self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.\n \n ### If the lambda code above is not something you're familiar with,\n # You can uncomment out the following three lines and put your \n # implementation there instead.\n #\n def sigmoid(x):\n return 1 / (1 + np.exp(-x))\n self.activation_function = sigmoid\n \n \n def train(self, features, targets):\n ''' Train the network on batch of features and targets. 
\n \n Arguments\n ---------\n \n features: 2D array, each row is one data record, each column is a feature\n targets: 1D array of target values\n \n '''\n n_records = features.shape[0]\n delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)\n delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)\n for X, y in zip(features, targets):\n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer - Replace these values with your calculations.\n hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n\n # TODO: Output layer - Replace these values with your calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n\n # TODO: Output error - Replace this value with your calculations.\n error = y - final_outputs # Output layer error is the difference between desired target and actual output.\n \n # TODO: Calculate the hidden layer's contribution to the error\n hidden_error = np.dot(self.weights_hidden_to_output, error)\n \n # TODO: Backpropagated error terms - Replace these values with your calculations.\n output_error_term = error * 1\n hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)\n\n # Weight step (input to hidden)\n delta_weights_i_h += hidden_error_term * X[:,None]\n # Weight step (hidden to output)\n delta_weights_h_o += output_error_term * hidden_outputs[:,None]\n\n # TODO: Update the weights - Replace these values with your calculations.\n self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step\n self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights 
with gradient descent step\n \n def run(self, features):\n ''' Run a forward pass through the network with input features \n \n Arguments\n ---------\n features: 1D array of feature values\n '''\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer - replace these values with the appropriate calculations.\n hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer\n hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer\n \n # TODO: Output layer - Replace these values with the appropriate calculations.\n final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer\n final_outputs = final_inputs # signals from final output layer \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)", "Unit tests\nRun these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. 
These tests must all be successful to pass the project.", "import unittest\n\ninputs = np.array([[0.5, -0.2, 0.1]])\ntargets = np.array([[0.4]])\ntest_w_i_h = np.array([[0.1, -0.2],\n [0.4, 0.5],\n [-0.3, 0.2]])\ntest_w_h_o = np.array([[0.3],\n [-0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328], \n [-0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, -0.20185996], \n [0.39775194, 0.50074398], \n [-0.29887597, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)", "Training the network\nHere you'll set the hyperparameters for the network. 
The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of iterations\nThis is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. 
You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.", "import sys\n\n### Set the hyperparameters here ###\niterations = 3500\nlearning_rate = 0.9\nhidden_nodes = 9\n\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor ii in range(iterations):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n # .loc replaces the .ix accessor, which was removed from pandas\n X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']\n \n network.train(X, y)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: {:2.1f}\".format(100 * ii/float(iterations)) \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n sys.stdout.flush()\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\n_ = plt.ylim()", "Check out your predictions\nHere, use the test data to view how well your network is modeling the data. 
If something is completely wrong here, make sure each step in your network is implemented correctly.", "fig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features).T*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\n# .loc replaces the .ix accessor, which was removed from pandas\ndates = pd.to_datetime(rides.loc[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)", "OPTIONAL: Thinking about your results (this question will not be evaluated in the rubric).\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jorisvandenbossche/2015-EuroScipy-pandas-tutorial
02 - Data structures.ipynb
bsd-2-clause
[ "import pandas as pd\n\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\ntry:\n import seaborn\nexcept ImportError:\n pass", "Tabular data", "df = pd.read_csv(\"data/titanic.csv\")\n\ndf.head()", "Starting from reading this dataset, to answering questions about this data in a few lines of code:\nWhat is the age distribution of the passengers?", "df['Age'].hist()", "How does the survival rate of the passengers differ between sexes?", "df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))", "Or how does it differ between the different classes?", "df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')", "Are young people more likely to survive?", "df['Survived'].sum() / df['Survived'].count()\n\ndf25 = df[df['Age'] <= 25]\ndf25['Survived'].sum() / len(df25['Survived'])", "All the needed functionality for the above examples will be explained throughout this tutorial.\nData structures\nPandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).\nSeries\nA Series is a basic holder for one-dimensional labeled data. 
It can be created much as a NumPy array is created:", "s = pd.Series([0.1, 0.2, 0.3, 0.4])\ns", "Attributes of a Series: index and values\nThe series has a built-in concept of an index, which by default is the numbers 0 through N - 1", "s.index", "You can access the underlying numpy array representation with the .values attribute:", "s.values", "We can access series values via the index, just like for NumPy arrays:", "s[0]", "Unlike the NumPy array, though, this index can be something other than integers:", "s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])\ns2\n\ns2['c']", "In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.\nIn fact, it's possible to construct a series directly from a Python dictionary:", "pop_dict = {'Germany': 81.3, \n 'Belgium': 11.3, \n 'France': 64.3, \n 'United Kingdom': 64.9, \n 'Netherlands': 16.9}\npopulation = pd.Series(pop_dict)\npopulation", "We can index the populations like a dict as expected:", "population['France']", "but with the power of numpy arrays:", "population * 1000", "DataFrames: Multi-dimensional Data\nA DataFrame is a tabular data structure (multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. 
You can think of it as multiple Series objects which share the same index.\n<img src=\"img/dataframe.png\" width=110%>\nOne of the most common ways of creating a dataframe is from a dictionary of arrays or lists.\nNote that in the IPython notebook, the dataframe will display in a rich HTML view:", "data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}\ncountries = pd.DataFrame(data)\ncountries", "Attributes of the DataFrame\nBesides an index attribute, a DataFrame also has a columns attribute:", "countries.index\n\ncountries.columns", "To check the data types of the different columns:", "countries.dtypes", "An overview of that information can be given with the info() method:", "countries.info()", "A DataFrame also has a values attribute, but pay attention: when you have heterogeneous data, all values will be upcast:", "countries.values", "If we don't like what the index looks like, we can reset it and set one of our columns:", "countries = countries.set_index('country')\ncountries", "To access a Series representing a column in the data, use typical indexing syntax:", "countries['area']", "Basic operations on Series/Dataframes\nAs you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.", "# redefining the example objects\n\npopulation = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3, \n 'United Kingdom': 64.9, 'Netherlands': 16.9})\n\ncountries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],\n 'population': [11.3, 64.3, 81.3, 16.9, 64.9],\n 'area': [30510, 671308, 357050, 41526, 244820],\n 'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})", "Elementwise-operations (like numpy)\nJust like with numpy arrays, many operations are 
element-wise:", "population / 100\n\ncountries['population'] / countries['area']", "Alignment! (unlike numpy)\nOnly, pay attention to alignment: operations between series will align on the index:", "s1 = population[['Belgium', 'France']]\ns2 = population[['France', 'Germany']]\n\ns1\n\ns2\n\ns1 + s2", "Reductions (like numpy)\nThe average population number:", "population.mean()", "The minimum area:", "countries['area'].min()", "For dataframes, often only the numeric columns are included in the result:", "countries.median()", "<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Calculate the population numbers relative to Belgium\n</div>\n\n<div class=\"alert alert-success\">\n <b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe.\n</div>\n\nSome other useful methods\nSorting the rows of the DataFrame according to the values in a column:", "countries.sort_values('density', ascending=False)", "One useful method to use is the describe method, which computes summary statistics for each column:", "countries.describe()", "The plot method can be used to quickly visualize the data in different ways:", "countries.plot()", "However, for this dataset, it does not say that much:", "countries['population'].plot(kind='bar')", "You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'\nImporting and exporting data\nA wide range of input/output formats are natively supported by pandas:\n\nCSV, text\nSQL database\nExcel\nHDF5\njson\nhtml\npickle\n...", "pd.read\n\nstates.to", "Other features\n\nWorking with missing data (.dropna(), pd.isnull())\nMerging and joining (concat, join)\nGrouping: groupby functionality\nReshaping (stack, pivot)\nTime series manipulation (resampling, timezones, ..)\nEasy plotting\n\nThere are many, many more interesting operations that can be done on Series and DataFrame objects, but rather than continue using this toy data, we'll instead move to a 
real-world example, and illustrate some of the advanced concepts along the way.\nSee the next notebooks!\nAcknowledgement\n\n© 2015, Stijn Van Hoey and Joris Van den Bossche (&#115;&#116;&#105;&#106;&#110;&#118;&#97;&#110;&#104;&#111;&#101;&#121;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;, &#106;&#111;&#114;&#105;&#115;&#118;&#97;&#110;&#100;&#101;&#110;&#98;&#111;&#115;&#115;&#99;&#104;&#101;&#64;&#103;&#109;&#97;&#105;&#108;&#46;&#99;&#111;&#109;). Licensed under CC BY 4.0 Creative Commons\nThis notebook is partly based on material of Jake Vanderplas (https://github.com/jakevdp/OsloWorkshop2014)." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vterron/Taller-Optimizacion-Python-Pyomo
02_PyomoOverview.ipynb
mit
[ "<img src=\"static/pybofractal.png\" alt=\"Pybonacci\" style=\"width: 200px;\"/>\n<img src=\"static/cacheme_logo.png\" alt=\"CAChemE\" style=\"width: 300px;\"/>\n1. Pyomo Overview\n\nNote: Adapted from https://github.com/Pyomo/PyomoGettingStarted, by William and Becca Hart\n\n1.1 Mathematical Modeling\nThis chapter provides an introduction to Pyomo: Python Optimization Modeling Objects. A more complete description is contained in Pyomo - Optimization Modeling in Python. Pyomo supports the formulation and analysis of mathematical models for complex optimization applications. This capability is commonly associated with algebraic modeling languages (AMLs) such as AMPL, AIMMS, and GAMS. Pyomo’s modeling objects are embedded within Python, a full-featured high-level programming language that contains a rich set of supporting libraries.\nModeling is a fundamental process in many aspects of scientific research, engineering and business. Modeling involves the formulation of a simplified representation of a system or real-world object. Thus, modeling tools like Pyomo can be used in a variety of ways:\n\n\nExplain phenomena that arise in a system,\n\n\nMake predictions about future states of a system,\n\n\nAssess key factors that influence phenomena in a system,\n\n\nIdentify extreme states in a system, that might represent worst-case scenarios or minimal cost plans, and\n\n\nAnalyze trade-offs to support human decision makers.\n\n\nMathematical models represent system knowledge with a formalized mathematical language. The following mathematical concepts are central to modern modeling activities:\nVariables\n\nVariables represent unknown or changing parts of a model\n (e.g. 
whether or not to make a decision, or the characteristic of \n a system outcome). The values taken by the variables are often\n referred to as a <span style=\"color:darkblue\">solution</span> and are usually an output of the\n optimization process.\n\nParameters\n\nParameters represent the data that must be supplied to perform\n the optimization. In fact, in some settings the word <span style=\"color:darkblue\">data</span> is used in\n place of the word <span style=\"color:darkblue\">parameters</span>.\n\nRelations\n\nThese are equations, inequalities or other mathematical relationships\n that define how different parts of a model are connected to each\n other.\n\nGoals\n\nThese are functions that reflect goals and objectives for the system\n being modeled.\n\nThe widespread availability of computing resources has made the numerical analysis of mathematical models a commonplace activity. Without a modeling language, the process of setting up input files, executing a solver and extracting the final results from the solver output is tedious and error prone. This difficulty is compounded in complex, large-scale real-world applications which are difficult to debug when errors occur. Additionally, there are many different formats used by optimization software packages, and few formats are recognized by many optimizers. Thus the application of multiple optimization solvers to analyze a model introduces additional complexities.\nPyomo is an AML that extends Python to include objects for mathematical modeling. Hart et al. PyomoBook, PyomoJournal compare Pyomo with other AMLs. 
Although many good AMLs have been developed for optimization models, the following are motivating factors for the development of Pyomo:\nOpen Source\n\nPyomo is developed within Pyomo’s open source project to promote\n transparency of the modeling framework and encourage community\n development of Pyomo capabilities.\n\nCustomizable Capability\n\nPyomo supports a customizable capability through the extensive use \n of plug-ins to modularize software components.\n\nSolver Integration\n\nPyomo models can be optimized with solvers that are written either in\n Python or in compiled, low-level languages.\n\nProgramming Language\n\nPyomo leverages a high-level programming language, which has several\n advantages over custom AMLs: a very robust language, extensive\n documentation, a rich set of standard libraries, support for modern\n programming features like classes and functions, and portability to\n many platforms.\n\n1.2 Overview of Modeling Components and Processes\nPyomo supports an object-oriented design for the definition of optimization models. The basic steps of a simple modeling process are:\n\n\nCreate model and declare components\n\n\nInstantiate the model\n\n\nApply solver\n\n\nInterrogate solver results\n\n\nIn practice, these steps may be applied repeatedly with different data or with different constraints applied to the model. However, we focus on this simple modeling process to illustrate different strategies for modeling with Pyomo.\nA Pyomo <span style=\"color:darkblue\">model</span> consists of a collection of modeling <span style=\"color:darkblue\">components</span> that define different aspects of the model. Pyomo includes the modeling components that are commonly supported by modern AMLs: index sets, symbolic parameters, decision variables, objectives, and constraints. 
These modeling components are defined in Pyomo through the following Python classes:\nSet\n\nset data that is used to define a model instance\n\nParam\n\nparameter data that is used to define a model instance\n\nVar\n\ndecision variables in a model\n\nObjective\n\nexpressions that are minimized or maximized in a model\n\nConstraint\n\nconstraint expressions that impose restrictions on variable\n values in a model\n\n1.3 Abstract Versus Concrete Models\nA mathematical model can be defined using symbols that represent data values. For example, the following equations represent a linear program (LP) to find optimal values for the vector $x$ with parameters $n$ and $b$, and parameter vectors $a$ and $c$:\n$$\n\\begin{array}{lll}\n\\min & \\sum_{j=1}^n c_j x_j & \\\ns.t. & \\sum_{j=1}^n a_{ij} x_j \\geq b_i & \\forall i = 1 \\ldots m\\\n & x_j \\geq 0 & \\forall j = 1 \\ldots n\n\\end{array}\n$$\n\nNote:\nAs a convenience, we use the symbol $\\forall$\n to mean “for all” or “for each.”\n\nWe call this an <span style=\"color:darkblue\">abstract</span> or <span style=\"color:darkblue\">symbolic</span> mathematical model since it relies on unspecified parameter values. Data values can be used to specify a <span style=\"color:darkblue\">model instance</span>. The <span style=\"color:darkblue; font-family:Courier\">AbstractModel</span> class provides a context for defining and initializing abstract optimization models in Pyomo when the data values will be supplied at the time a solution is to be obtained.\nIn some contexts a mathematical model can be directly defined with the data values supplied at the time of the model definition and built into the model. We call these <span style=\"color:darkblue\">concrete</span> mathematical models. For example, the following LP model is a concrete instance of the previous abstract model:\n$$\n\\begin{array}{ll}\n\\min & 2x_1 + 3x_2\\\ns.t. 
& 3x_1 + 4x_2 \\geq 1\\\n & x_1,x_2 \\geq 0 \n\\end{array}\n$$\nThe <span style=\"color:darkblue; font-family:Courier\">ConcreteModel</span> class is used to define concrete optimization models in Pyomo.\n1.4 A Simple Abstract Pyomo Model\nWe repeat the abstract model already given:\n$$\n\\begin{array}{lll}\n\\min & \\sum_{j=1}^n c_j x_j & \\\ns.t. & \\sum_{j=1}^n a_{ij} x_j \\geq b_i & \\forall i = 1 \\ldots m\\\n & x_j \\geq 0 & \\forall j = 1 \\ldots n\n\\end{array}\n$$\nOne way to implement this in Pyomo is as follows:", "!cat abstract1.py", "Note:\nPython is interpreted one line at a time. A line continuation character, backslash, is used for Python statements that need to span multiple lines. In Python, indentation has meaning and must be consistent. For example, lines inside a function definition must be indented and the end of the indentation is used by Python to signal the end of the definition.\n\nThe first line is an import that is required in every Pyomo model. Its purpose is to make the symbols used by Pyomo known to Python.", "from pyomo.environ import *", "The declaration of a model is also required. The use of the name <span style=\"color:darkblue; font-family:Courier\">model</span> is not required. Almost any name could be used, but we will use the name <span style=\"color:darkblue; font-family:Courier\">model</span> most of the time in this book. In this example, we are declaring that it will be an abstract model.", "model = AbstractModel()", "We declare the parameters $m$ and $n$ using the Pyomo <span style=\"color:darkblue; font-family:Courier\">Param</span> function. This function can take a variety of arguments; this example illustrates use of the <span style=\"color:darkblue; font-family:Courier\">within</span> option that is used by Pyomo to validate the data value that is assigned to the parameter. If this option were not given, then Pyomo would not object to any type of data being assigned to these parameters. 
As it is, assignment of a value that is not a non-negative integer will result in an error.", "model.m = Param(within=NonNegativeIntegers)\nmodel.n = Param(within=NonNegativeIntegers)", "Although not required, it is convenient to define index sets. In this example we use the <span style=\"color:darkblue; font-family:Courier\">RangeSet</span> function to declare that the sets will be a sequence of integers starting at 1 and ending at a value specified by the parameters <span style=\"color:darkblue; font-family:Courier\">model.m</span> and <span style=\"color:darkblue; font-family:Courier\">model.n</span>.", "model.I = RangeSet(1, model.m)\nmodel.J = RangeSet(1, model.n)", "The coefficient and right-hand-side data are defined as indexed parameters. When sets are given as arguments to the <span style=\"color:darkblue; font-family:Courier\">Param</span> function, they indicate that the set will index the parameter.", "model.a = Param(model.I, model.J)\nmodel.b = Param(model.I)\nmodel.c = Param(model.J)", "Note:\nIn Python, and therefore in Pyomo, any text after a pound sign is considered to be a comment.\n\nThe next line interpreted by Python as part of the model declares the variable $x$. The first argument to the <span style=\"color:darkblue; font-family:Courier\">Var</span> function is a set, so it is defined as an index set for the variable. In this case the variable has only one index set, but multiple sets could be used as was the case for the declaration of the parameter <span style=\"color:darkblue; font-family:Courier\">model.a</span>. The second argument specifies a domain for the variable. This information is part of the model and will be passed to the solver when data is provided and the model is solved. 
Specification of the <span style=\"color:darkblue; font-family:Courier\">NonNegativeReals</span> domain implements the requirement that the variables be greater than or equal to zero.", "# the next line declares a variable indexed by the set J\nmodel.x = Var(model.J, domain=NonNegativeReals)", "In abstract models, Pyomo expressions are usually provided to objective function and constraint declarations via a function defined with a Python <span style=\"color:darkblue; font-family:Courier\">def</span> statement. The <span style=\"color:darkblue; font-family:Courier\">def</span> statement establishes a name for a function along with its arguments. When Pyomo uses a function to get objective function or constraint expressions, it always passes in the model (i.e., itself) as the first argument so the model is always the first formal argument when declaring such functions in Pyomo. Additional arguments, if needed, follow. Since summation is an extremely common part of optimization models, Pyomo provides a flexible function to accommodate it. When given two arguments, the <span style=\"color:darkblue; font-family:Courier\">summation</span> function returns an expression for the sum of the product of the two arguments over their indexes. This only works, of course, if the two arguments have the same indexes. If it is given only one argument it returns an expression for the sum over all indexes of that argument. So in this example, when <span style=\"color:darkblue; font-family:Courier\">summation</span> is passed the arguments <span style=\"color:darkblue; font-family:Courier\">model.c</span>, <span style=\"color:darkblue; font-family:Courier\">model.x</span> it returns an internal representation of the expression $\\sum_{j=1}^n c_j x_j$.", "def obj_expression(model):\n return summation(model.c, model.x)", "To declare an objective function, the Pyomo function called <span style=\"color:darkblue; font-family:Courier\">Objective</span> is used. 
The <span style=\"color:darkblue; font-family:Courier\">rule</span> argument gives the name of a function that returns the expression to be used. The default <span style=\"color:darkblue; font-family:Courier\">sense</span> is minimization. For maximization, the <span style=\"color:darkblue; font-family:Courier\">sense=maximize</span> argument must be used. The name that is declared, which is <span style=\"color:darkblue; font-family:Courier\">OBJ</span> in this case, appears in some reports and can be almost any name.", "model.OBJ = Objective(rule=obj_expression)", "Declaration of constraints is similar. A function is declared to deliver the constraint expression. In this case, there can be multiple constraints of the same form because we index the constraints by $i$ in the expression $\sum_{j=1}^n a_{ij} x_j \geq b_i \; \forall i = 1 \ldots m$, which states that we need a constraint for each value of $i$ from one to $m$. In order to parametrize the expression by $i$ we include it as a formal parameter to the function that declares the constraint expression. Technically, we could have used anything for this argument, but that might be confusing. Using an <span style=\"color:darkblue; font-family:Courier\">i</span> for an $i$ seems sensible in this situation.", "def ax_constraint_rule(model, i):\n \"\"\"return the expression for the constraint for i\"\"\"\n return sum(model.a[i,j] * model.x[j] for j in model.J) >= model.b[i]", "Note:\nIn Python, indexes are in square brackets and function arguments are in parentheses.\n\nIn order to declare constraints that use this expression, we use the Pyomo <span style=\"color:darkblue; font-family:Courier\">Constraint</span> function that takes a variety of arguments. 
In this case, our model specifies that we can have more than one constraint of the same form and we have created a set, <span style=\"color:darkblue; font-family:Courier\">model.I</span>, over which these constraints can be indexed, so that set is the first argument to the constraint declaration function. The next argument gives the rule that will be used to generate expressions for the constraints. Taken as a whole, this constraint declaration says that a list of constraints indexed by the set <span style=\"color:darkblue; font-family:Courier\">model.I</span> will be created and for each member of model.I, the function <span style=\"color:darkblue; font-family:Courier\">ax_constraint_rule</span> will be called and it will be passed the model object as well as the member of <span style=\"color:darkblue; font-family:Courier\">model.I</span>.", "# the next line creates one constraint for each member of the set model.I\nmodel.AxbConstraint = Constraint(model.I, rule=ax_constraint_rule)", "In the object oriented view of all of this, we would say that the model object is a class instance of the <span style=\"color:darkblue; font-family:Courier\">AbstractModel</span> class, and <span style=\"color:darkblue; font-family:Courier\">model.J</span> is a <span style=\"color:darkblue; font-family:Courier\">Set</span> object that is contained by this model. Many modeling components in Pyomo can be optionally specified as <span style=\"color:darkblue\">indexed components</span>: collections of components that are referenced using one or more values. In this example, the parameter <span style=\"color:darkblue; font-family:Courier\">model.c</span> is indexed with set <span style=\"color:darkblue; font-family:Courier\">model.J</span>.\nIn order to use this model, data must be given for the values of the parameters. 
Here is one file that provides data.", "!cat abstract1.dat", "There are multiple formats that can be used to provide data to a Pyomo model, but the AMPL format works well for our purposes because it contains the names of the data elements together with the data. In AMPL data files, text after a pound sign is treated as a comment. Line breaks generally do not matter, but statements must be terminated with a semi-colon.\nFor this particular data file, there is one constraint, so the value of <span style=\"color:darkblue; font-family:Courier\">model.m</span> will be one and there are two variables (i.e., the vector <span style=\"color:darkblue; font-family:Courier\">model.x</span> is two elements long) so the value of <span style=\"color:darkblue; font-family:Courier\">model.n</span> will be two. These two values are set with standard assignment statements. Notice that in AMPL format input, the name of the model is omitted.", "!sed -n '4,6p' abstract1.dat", "There is only one constraint, so only two values are needed for <span style=\"color:darkblue; font-family:Courier\">model.a</span>. When assigning values to arrays and vectors in AMPL format, one way to do it is to give the index(es) and the value. The line 1 2 4 causes <span style=\"color:darkblue; font-family:Courier\">model.a[1,2]</span> to get the value 4. Since <span style=\"color:darkblue; font-family:Courier\">model.c</span> has only one index, only one index value is needed, so, for example, the line 1 2 causes <span style=\"color:darkblue; font-family:Courier\">model.c[1]</span> to get the value 2. 
Line breaks generally do not matter in AMPL format data files, so the assignment of the value for the single index of <span style=\"color:darkblue; font-family:Courier\">model.b</span> is given on one line since that is easy to read.", "!sed -n '7,18p' abstract1.dat", "When working with Pyomo (or any other AML), it is convenient to write abstract models in a somewhat more abstract way by using index sets that contain strings rather than index sets that are implied by $1,...,m$ or the summation from 1 to $n$. When this is done, the size of the set is implied by the input, rather than specified directly. Furthermore, the index entries may have no real order. Often, a mixture of integers and strings is needed as indexes in the same model. To start with an illustration of general indexes, consider a slightly different Pyomo implementation of the model we just presented.", "!cat abstract2.py", "However, this model can also be fed different data for problems of the same general form using meaningful indexes.", "! cat abstract2.dat", "1.5 A Simple Concrete Pyomo Model\nIt is possible to get nearly the same flexible behavior from models declared to be abstract and models declared to be concrete in Pyomo; however, we will focus on a straightforward concrete example here where the data is hard-wired into the model file. Python programmers will quickly realize that the data could have come from other sources.\nWe repeat the concrete model already given:\n$$\min \quad 2x_1 + 3x_2$$\n$$\text{s.t.} 
\quad 3x_1 + 4x_2 \geq 1$$\n$$x_1,x_2 \geq 0$$\nThis is implemented as a concrete model as follows:", "from pyomo.environ import *\n\nmodel = ConcreteModel()\n\nmodel.x = Var([1,2], domain=NonNegativeReals)\n\nmodel.OBJ = Objective(expr = 2*model.x[1] + 3*model.x[2])\n\nmodel.Constraint1 = Constraint(expr = 3*model.x[1] + 4*model.x[2] >= 1)", "Although rule functions can also be used to specify constraints and objectives, in this example we use the <span style=\"color:darkblue; font-family:Courier\">expr</span> option that is available only in concrete models. This option gives a direct specification of the expression.\n1.6 Solving the Simple Examples\nPyomo supports modeling and scripting but does not install a solver automatically. In order to solve a model, there must be a solver installed on the computer to be used. If there is a solver, then the <span style=\"color:darkblue; font-family:Courier\">pyomo</span> command can be used to solve a problem instance.\nSuppose that the solver named glpk (also known as glpsol) is installed on the computer. Suppose further that an abstract model is in the file named <span style=\"color:darkblue; font-family:Courier\">abstract1.py</span> and a data file for it is in the file named <span style=\"color:darkblue; font-family:Courier\">abstract1.dat</span>. From the command prompt, with both files in the current directory, a solution can be obtained with the command:", "!pyomo solve abstract1.py abstract1.dat --solver=glpk", "Since glpk is the default solver, there really is no need to specify it, so the <span style=\"color:darkblue; font-family:Courier\">--solver</span> option can be dropped.\n\nNote:\nThere are two dashes before the command line option names such as solver.\n\nTo continue the example, if CPLEX is installed then it can be listed as the solver. 
The command to solve with CPLEX is", "!pyomo solve abstract1.py abstract1.dat --solver=cplex", "This yields the following output on the screen:\nThe numbers in square brackets indicate how much time was required for each step. Results are written to the file named <span style=\"color:darkblue; font-family:Courier\">results.json</span>, which has a special structure that makes it useful for post-processing. To see a summary of results written to the screen, use the <span style=\"color:darkblue; font-family:Courier\">--summary</span> option:", "!pyomo solve abstract1.py abstract1.dat --solver=cplex --summary", "To see a list of Pyomo command line options, use:", "!pyomo solve --help", "For a concrete model, no data file is specified on the Pyomo command line." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
pfschus/fission_bicorrelation
methods/calculate_Asym_energy_space.ipynb
mit
[ "Calculate Asym vs. Emin from bhm_e\nRewriting calc_Asym_vs_emin_energies for bhm_e.\nGenerate Asym_df for a specific dataset.\nP. Schuster\nJuly 18, 2018", "import matplotlib\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nsns.set(style='ticks')\n\nimport sys\nimport os\nimport os.path\nimport scipy.io as sio\n\nimport time\nimport numpy as np\nnp.set_printoptions(threshold=sys.maxsize) # print entire matrices (np.nan is no longer accepted as a threshold)\nimport pandas as pd\nfrom tqdm import *\n\nsys.path.append('../scripts/')\n\nimport bicorr as bicorr\nimport bicorr_math as bicorr_math\nimport bicorr_plot as bicorr_plot\nimport bicorr_e as bicorr_e\nimport bicorr_sums as bicorr_sums\n\n%load_ext autoreload\n%autoreload 2", "Load data", "det_df = bicorr.load_det_df()\n\nchList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()\ndict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)\n\nsingles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = '../analysis/Cf072115_to_Cf072215b/datap/',plot_flag=True, save_flag=True)\n\nbhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')\n\nbhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))\nfor index in det_df.index.values: # index is same as in `bhm`\n bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]\n\nemins = np.arange(0.5,5,.2)\nemax = 12\nprint(emins)\n\nangle_bin_edges = np.arange(8,190,10)\nprint(angle_bin_edges)", "Functionalize", "Asym_df = bicorr_sums.calc_Asym_vs_emin_energies(det_df, dict_index_to_det, singles_hist_e_n, e_bin_edges, bhp_e, e_bin_edges, emins, emax, angle_bin_edges, plot_flag=True, show_flag = True, save_flag=False)\nAsym_df.head()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
josh-gree/maths-with-python
ExercisesSolutions.ipynb
mit
[ "Suggestions for lab exercises.\nVariables and assignment\nExercise 1\nRemember that $n! = n \\times (n - 1) \\times \\dots \\times 2 \\times 1$. Compute $15!$, assigning the result to a sensible variable name.\nSolution", "fifteen_factorial = 15*14*13*12*11*10*9*8*7*6*5*4*3*2*1\nprint(fifteen_factorial)", "Exercise 2\nUsing the math module, check your result for $15$ factorial. You should explore the help for the math library and its functions, using eg tab-completion, the spyder inspector, or online sources.\nSolution", "import math\nprint(math.factorial(15))\nprint(\"Result correct?\", math.factorial(15) == fifteen_factorial)", "Exercise 3\nStirling's approximation gives that, for large enough $n$, \n\\begin{equation}\n n! \\simeq \\sqrt{2 \\pi} n^{n + 1/2} e^{-n}.\n\\end{equation}\nUsing functions and constants from the math library, compare the results of $n!$ and Stirling's approximation for $n = 5, 10, 15, 20$. In what sense does the approximation improve?\nSolution", "print(math.factorial(5), math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))\nprint(math.factorial(10), math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))\nprint(math.factorial(15), math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))\nprint(math.factorial(20), math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))\nprint(\"Absolute differences:\")\nprint(math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5))\nprint(math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10))\nprint(math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15))\nprint(math.factorial(20) - math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20))\nprint(\"Relative differences:\")\nprint((math.factorial(5) - math.sqrt(2*math.pi)*5**(5+0.5)*math.exp(-5)) / math.factorial(5))\nprint((math.factorial(10) - math.sqrt(2*math.pi)*10**(10+0.5)*math.exp(-10)) / math.factorial(10))\nprint((math.factorial(15) - math.sqrt(2*math.pi)*15**(15+0.5)*math.exp(-15)) / math.factorial(15))\nprint((math.factorial(20) - 
math.sqrt(2*math.pi)*20**(20+0.5)*math.exp(-20)) / math.factorial(20))", "We see that the relative error decreases, whilst the absolute error grows (significantly).\nBasic functions\nExercise 1\nWrite a function to calculate the volume of a cuboid with edge lengths $a, b, c$. Test your code on sample values such as\n\n$a=1, b=1, c=1$ (result should be $1$);\n$a=1, b=2, c=3.5$ (result should be $7.0$);\n$a=0, b=1, c=1$ (result should be $0$);\n$a=2, b=-1, c=1$ (what do you think the result should be?).\n\nSolution", "def cuboid_volume(a, b, c):\n \"\"\"\n Compute the volume of a cuboid with edge lengths a, b, c.\n Volume is abc. Only makes sense if all are non-negative.\n \n Parameters\n ----------\n \n a : float\n Edge length 1\n b : float\n Edge length 2\n c : float\n Edge length 3\n \n Returns\n -------\n \n volume : float\n The volume a*b*c\n \"\"\"\n \n if (a < 0.0) or (b < 0.0) or (c < 0.0):\n print(\"Negative edge length makes no sense!\")\n return 0\n \n return a*b*c\n\nprint(cuboid_volume(1,1,1))\nprint(cuboid_volume(1,2,3.5))\nprint(cuboid_volume(0,1,1))\nprint(cuboid_volume(2,-1,1))", "In later cases, after having covered exceptions, I would suggest raising a NotImplementedError for negative edge lengths.\nExercise 2\nWrite a function to compute the time (in seconds) taken for an object to fall from a height $H$ (in metres) to the ground, using the formula\n\\begin{equation}\n h(t) = \\frac{1}{2} g t^2.\n\\end{equation}\nUse the value of the acceleration due to gravity $g$ from scipy.constants.g. 
Test your code on sample values such as\n\n$H = 1$m (result should be $\\approx 0.452$s);\n$H = 10$m (result should be $\\approx 1.428$s);\n$H = 0$m (result should be $0$s);\n$H = -1$m (what do you think the result should be?).\n\nSolution", "def fall_time(H):\n \"\"\"\n Give the time in seconds for an object to fall to the ground\n from H metres.\n \n Parameters\n ----------\n \n H : float\n Starting height (metres)\n \n Returns\n -------\n \n T : float\n Fall time (seconds)\n \"\"\"\n \n from math import sqrt\n from scipy.constants import g\n \n if (H < 0):\n print(\"Negative height makes no sense!\")\n return 0\n \n return sqrt(2.0*H/g)\n\nprint(fall_time(1))\nprint(fall_time(10))\nprint(fall_time(0))\nprint(fall_time(-1))", "Exercise 3\nWrite a function that computes the area of a triangle with edge lengths $a, b, c$. You may use the formula\n\\begin{equation}\n A = \\sqrt{s (s - a) (s - b) (s - c)}, \\qquad s = \\frac{a + b + c}{2}.\n\\end{equation}\nConstruct your own test cases to cover a range of possibilities.", "def triangle_area(a, b, c):\n \"\"\"\n Compute the area of a triangle with edge lengths a, b, c.\n Area is sqrt(s (s-a) (s-b) (s-c)). \n s is (a+b+c)/2.\n Only makes sense if all are non-negative.\n \n Parameters\n ----------\n \n a : float\n Edge length 1\n b : float\n Edge length 2\n c : float\n Edge length 3\n \n Returns\n -------\n \n area : float\n The triangle area.\n \"\"\"\n \n from math import sqrt\n \n if (a < 0.0) or (b < 0.0) or (c < 0.0):\n print(\"Negative edge length makes no sense!\")\n return 0\n \n s = 0.5 * (a + b + c)\n return sqrt(s * (s-a) * (s-b) * (s-c))\n\nprint(triangle_area(1,1,1)) # Equilateral; answer sqrt(3)/4 ~ 0.433\nprint(triangle_area(3,4,5)) # Right triangle; answer 6\nprint(triangle_area(1,1,0)) # Not a triangle; answer 0\nprint(triangle_area(-1,1,1)) # Not a triangle; exception or 0.", "Floating point numbers\nExercise 1\nComputers cannot, in principle, represent real numbers perfectly. 
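A classic one-line demonstration of this imperfection, using only built-in floats:

```python
# 0.1 and 0.2 have no exact binary floating point representation,
# so their sum is not exactly 0.3.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```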
This can lead to problems of accuracy. For example, if\n\begin{equation}\n x = 1, \qquad y = 1 + 10^{-14} \sqrt{3}\n\end{equation}\nthen it should be true that\n\begin{equation}\n 10^{14} (y - x) = \sqrt{3}.\n\end{equation}\nCheck how accurately this equation holds in Python and see what this implies about the accuracy of subtracting two numbers that are close together.\nSolution", "from math import sqrt\n\nx = 1.0\ny = 1.0 + 1e-14 * sqrt(3.0)\nprint(\"The calculation gives {}\".format(1e14*(y-x)))\nprint(\"The result should be {}\".format(sqrt(3.0)))", "We see that the first three digits are correct. This isn't too surprising: we expect 16 digits of accuracy for a floating point number, but $x$ and $y$ are identical for the first 14 digits.\nExercise 2\nThe standard quadratic formula gives the solutions to\n\begin{equation}\n a x^2 + b x + c = 0\n\end{equation}\nas\n\begin{equation}\n x = \frac{-b \pm \sqrt{b^2 - 4 a c}}{2 a}.\n\end{equation}\nShow that, if $a = 10^{-n} = c$ and $b = 10^n$ then\n\begin{equation}\n x = \frac{10^{2 n}}{2} \left( -1 \pm \sqrt{1 - 4 \times 10^{-4n}} \right).\n\end{equation}\nUsing the expansion (from Taylor's theorem)\n\begin{equation}\n \sqrt{1 - 4 \times 10^{-4 n}} \simeq 1 - 2 \times 10^{-4 n} + \dots, \qquad n \gg 1,\n\end{equation}\nshow that\n\begin{equation}\n x \simeq -10^{2 n} + 10^{-2 n} \quad \text{and} \quad -10^{-2n}, \qquad n \gg 1.\n\end{equation}\nSolution\nThis is pen-and-paper work; each step should be re-arranging.\nExercise 3\nBy multiplying and dividing by $-b \mp \sqrt{b^2 - 4 a c}$, check that we can also write the solutions to the quadratic equation as\n\begin{equation}\n x = \frac{2 c}{-b \mp \sqrt{b^2 - 4 a c}}.\n\end{equation}\nSolution\nUsing the difference of two squares we get\n\begin{equation}\n x = \frac{b^2 - \left( b^2 - 4 a c \right)}{2a \left( -b \mp \sqrt{b^2 - 4 a c} \right)}\n\end{equation}\nwhich re-arranges to give the required solution.\nExercise 
4\nUsing Python, calculate both solutions to the quadratic equation\n\\begin{equation}\n 10^{-n} x^2 + 10^n x + 10^{-n} = 0\n\\end{equation}\nfor $n = 3$ and $n = 4$ using both formulas. What do you see? How has floating point accuracy caused problems here?\nSolution", "a = 1e-3\nb = 1e3\nc = a\nformula1_n3_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula1_n3_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula2_n3_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))\nformula2_n3_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))\nprint(\"For n=3, first formula, solutions are {} and {}.\".format(formula1_n3_plus, \n formula1_n3_minus))\nprint(\"For n=3, second formula, solutions are {} and {}.\".format(formula2_n3_plus, \n formula2_n3_minus))\n\na = 1e-4\nb = 1e4\nc = a\nformula1_n4_plus = (-b + sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula1_n4_minus = (-b - sqrt(b**2 - 4.0*a*c))/(2.0*a)\nformula2_n4_plus = (2.0*c)/(-b + sqrt(b**2 - 4.0*a*c))\nformula2_n4_minus = (2.0*c)/(-b - sqrt(b**2 - 4.0*a*c))\nprint(\"For n=4, first formula, solutions are {} and {}.\".format(formula1_n4_plus, \n formula1_n4_minus))\nprint(\"For n=4, second formula, solutions are {} and {}.\".format(formula2_n4_plus, \n formula2_n4_minus))", "There is a difference in the fifth significant figure in both solutions in the first case, which gets to the third (arguably the second) significant figure in the second case. Comparing to the limiting solutions above, we see that the larger root is definitely more accurately captured with the first formula than the second (as the result should be bigger than $10^{-2n}$).\nIn the second case we have divided by a very small number to get the big number, which loses accuracy.\nExercise 5\nThe standard definition of the derivative of a function is\n\\begin{equation}\n \\left. 
\\frac{\\text{d} f}{\\text{d} x} \\right|{x=X} = \\lim{\\delta \\to 0} \\frac{f(X + \\delta) - f(X)}{\\delta}.\n\\end{equation}\nWe can approximate this by computing the result for a finite value of $\\delta$:\n\\begin{equation}\n g(x, \\delta) = \\frac{f(x + \\delta) - f(x)}{\\delta}.\n\\end{equation}\nWrite a function that takes as inputs a function of one variable, $f(x)$, a location $X$, and a step length $\\delta$, and returns the approximation to the derivative given by $g$.\nSolution", "def g(f, X, delta):\n \"\"\"\n Approximate the derivative of a given function at a point.\n \n Parameters\n ----------\n \n f : function\n Function to be differentiated\n X : real\n Point at which the derivative is evaluated\n delta : real\n Step length\n \n Returns\n -------\n \n g : real\n Approximation to the derivative\n \"\"\"\n \n return (f(X+delta) - f(X)) / delta", "Exercise 6\nThe function $f_1(x) = e^x$ has derivative with the exact value $1$ at $x=0$. Compute the approximate derivative using your function above, for $\\delta = 10^{-2 n}$ with $n = 1, \\dots, 7$. You should see the results initially improve, then get worse. Why is this?\nSolution", "from math import exp\nfor n in range(1, 8):\n print(\"For n={}, the approx derivative is {}.\".format(n, g(exp, 0.0, 10**(-2.0*n))))", "We have a combination of floating point inaccuracies: in the numerator we have two terms that are nearly equal, leading to a very small number. We then divide two very small numbers. This is inherently inaccurate.\nThis does not mean that you can't calculate derivatives to high accuracy, but alternative approaches are definitely recommended.\nPrime numbers\nExercise 1\nWrite a function that tests if a number is prime. 
Test it by writing out all prime numbers less than 50.\nSolution\nThis is a \"simple\" solution, but not efficient.", "def isprime(n):\n \"\"\"\n Checks to see if an integer is prime.\n \n Parameters\n ----------\n \n n : integer\n Number to check\n \n Returns\n -------\n \n isprime : Boolean\n If n is prime\n \"\"\"\n \n # No number less than 2 can be prime\n if n < 2:\n return False\n \n # We only need to check for divisors up to sqrt(n)\n for m in range(2, int(n**0.5)+1):\n if n%m == 0:\n return False\n \n # If we've got this far, there are no divisors.\n return True\n\nfor n in range(50):\n if isprime(n):\n print(\"Function says that {} is prime.\".format(n))", "Exercise 2\n500 years ago some believed that the number $2^n - 1$ was prime for all primes $n$. Use your function to find the first prime $n$ for which this is not true.\nSolution\nWe could do this many ways. This \"elegant\" solution says:\n\nStart from the smallest possible $n$ (2).\nCheck if $n$ is prime. If not, add one to $n$.\nIf $n$ is prime, check if $2^n-1$ is prime. If it is, add one to $n$.\nIf both those logical checks fail, we have found the $n$ we want.", "n = 2\nwhile (not isprime(n)) or (isprime(2**n-1)):\n n += 1\nprint(\"The first n such that 2^n-1 is not prime is {}.\".format(n))", "Exercise 3\nThe Mersenne primes are those that have the form $2^n-1$, where $n$ is prime. Use your previous solutions to generate all the $n < 40$ that give Mersenne primes.\nSolution", "for n in range(2, 41):\n if isprime(n) and isprime(2**n-1):\n print(\"n={} is such that 2^n-1 is prime.\".format(n))", "Exercise 4\nWrite a function to compute all prime factors of an integer $n$, including their multiplicities. 
Test it by printing the prime factors (without multiplicities) of $n = 17, \\dots, 20$ and the multiplicities (without factors) of $n = 48$.\nNote\nOne effective solution is to return a dictionary, where the keys are the factors and the values are the multiplicities.\nSolution\nThis solution uses the trick of immediately dividing $n$ by any divisor: this means we never have to check the divisor for being prime.", "def prime_factors(n):\n \"\"\"\n Generate all the prime factors of n.\n \n Parameters\n ----------\n \n n : integer\n Number to be checked\n \n Returns\n -------\n \n factors : dict\n Prime factors (keys) and multiplicities (values)\n \"\"\"\n \n factors = {}\n \n m = 2\n while m <= n:\n if n%m == 0:\n factors[m] = 1\n n //= m\n while n%m == 0:\n factors[m] += 1\n n //= m\n m += 1\n \n return factors\n\nfor n in range(17, 21):\n print(\"Prime factors of {} are {}.\".format(n, prime_factors(n).keys()))\nprint(\"Multiplicities of prime factors of 48 are {}.\".format(prime_factors(48).values()))", "Exercise 5\nWrite a function to generate all the integer divisors, including 1, but not including $n$ itself, of an integer $n$. Test it on $n = 16, \\dots, 20$.\nNote\nYou could use the prime factorization from the previous exercise, or you could do it directly.\nSolution\nHere we will do it directly.", "def divisors(n):\n \"\"\"\n Generate all integer divisors of n.\n \n Parameters\n ----------\n \n n : integer\n Number to be checked\n \n Returns\n -------\n \n divs : list\n All integer divisors, including 1.\n \"\"\"\n \n divs = [1]\n m = 2\n while m <= n/2:\n if n%m == 0:\n divs.append(m)\n m += 1\n \n return divs\n\nfor n in range(16, 21):\n print(\"The divisors of {} are {}.\".format(n, divisors(n)))", "Exercise 6\nA perfect number $n$ is one where the divisors sum to $n$. For example, 6 has divisors 1, 2, and 3, which sum to 6. 
Use your previous solution to find all perfect numbers $n < 10,000$ (there are only four!).\nSolution\nWe can do this much more efficiently than the code below using packages such as numpy, but this is a \"bare python\" solution.", "def isperfect(n):\n \"\"\"\n Check if a number is perfect.\n \n Parameters\n ----------\n \n n : integer\n Number to check\n \n Returns\n -------\n \n isperfect : Boolean\n Whether it is perfect or not.\n \"\"\"\n \n divs = divisors(n)\n sum_divs = 0\n for d in divs:\n sum_divs += d\n \n return n == sum_divs\n\nfor n in range(2,10000):\n if (isperfect(n)):\n factors = prime_factors(n)\n print(\"{} is perfect.\\n\"\n \"Divisors are {}.\\n\"\n \"Prime factors {} (multiplicities {}).\".format(\n n, divisors(n), factors.keys(), factors.values()))", "Exercise 7\nUsing your previous functions, check that all perfect numbers $n < 10,000$ can be written as $2^{k-1} \\times (2^k - 1)$, where $2^k-1$ is a Mersenne prime.\nSolution\nIn fact we did this above already:\n\n$6 = 2^{2-1} \\times (2^2 - 1)$. 2 is the first number on our Mersenne list.\n$28 = 2^{3-1} \\times (2^3 - 1)$. 3 is the second number on our Mersenne list.\n$496 = 2^{5-1} \\times (2^5 - 1)$. 5 is the third number on our Mersenne list.\n$8128 = 2^{7-1} \\times (2^7 - 1)$. 7 is the fourth number on our Mersenne list.\n\nExercise 8 (bonus)\nInvestigate the timeit function in python or IPython. Use this to measure how long your function takes to check that, if $k$ on the Mersenne list then $n = 2^{k-1} \\times (2^k - 1)$ is a perfect number, using your functions. Stop increasing $k$ when the time takes too long!\nNote\nYou could waste considerable time on this, and on optimizing the functions above to work efficiently. 
It is not worth it, other than to show how rapidly the computation time can grow!\nSolution", "%timeit isperfect(2**(3-1)*(2**3-1))\n\n%timeit isperfect(2**(5-1)*(2**5-1))\n\n%timeit isperfect(2**(7-1)*(2**7-1))\n\n%timeit isperfect(2**(13-1)*(2**13-1))", "It's worth thinking about the operation counts of the various functions implemented here. The implementations are inefficient, but even in the best case you see how the number of operations (and hence computing time required) rapidly increases.\nLogistic map\nPartly taken from Newman's book, p 120.\nThe logistic map builds a sequence of numbers ${ x_n }$ using the relation\n\\begin{equation}\n x_{n+1} = r x_n \\left( 1 - x_n \\right),\n\\end{equation}\nwhere $0 \\le x_0 \\le 1$.\nExercise 1\nWrite a program that calculates the first $N$ members of the sequence, given as input $x_0$ and $r$ (and, of course, $N$).\nSolution", "def logistic(x0, r, N = 1000):\n sequence = [x0]\n xn = x0\n for n in range(N):\n xnew = r*xn*(1.0-xn)\n sequence.append(xnew)\n xn = xnew\n return sequence", "Exercise 2\nFix $x_0=0.5$. Calculate the first 2,000 members of the sequence for $r=1.5$ and $r=3.5$ Plot the last 100 members of the sequence in both cases.\nWhat does this suggest about the long-term behaviour of the sequence?\nSolution", "import numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n\nx0 = 0.5\nN = 2000\nsequence1 = logistic(x0, 1.5, N)\nsequence2 = logistic(x0, 3.5, N)\npyplot.plot(sequence1[-100:], 'b-', label = r'$r=1.5$')\npyplot.plot(sequence2[-100:], 'k-', label = r'$r=3.5$')\npyplot.xlabel(r'$n$')\npyplot.ylabel(r'$x$')\npyplot.show()", "This suggests that, for $r=1.5$, the sequence has settled down to a fixed point. In the $r=3.5$ case it seems to be moving between four points repeatedly.\nExercise 3\nFix $x_0 = 0.5$. For each value of $r$ between $1$ and $4$, in steps of $0.01$, calculate the first 2,000 members of the sequence. 
Plot the last 1,000 members of the sequence on a plot where the $x$-axis is the value of $r$ and the $y$-axis is the values in the sequence. Do not plot lines - just plot markers (e.g., use the 'k.' plotting style).\nSolution", "import numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n\nr_values = numpy.linspace(1.0, 4.0, 401)\nx0 = 0.5\nN = 2000\nfor r in r_values:\n sequence = logistic(x0, r, N)\n pyplot.plot(r*numpy.ones_like(sequence[1000:]), sequence[1000:], 'k.')\npyplot.xlabel(r'$r$')\npyplot.ylabel(r'$x$')\npyplot.show()", "Exercise 4\nFor iterative maps such as the logistic map, one of three things can occur:\n\nThe sequence settles down to a fixed point.\nThe sequence rotates through a finite number of values. This is called a limit cycle.\nThe sequence generates an infinite number of values. This is called deterministic chaos.\n\nUsing just your plot, or new plots from this data, work out approximate values of $r$ for which there is a transition from fixed points to limit cycles, from limit cycles of a given number of values to more values, and the transition to chaos.\nSolution\nThe first transition is at $r \\approx 3$, the next at $r \\approx 3.45$, the next at $r \\approx 3.55$. The transition to chaos appears to happen before $r=4$, but it's not obvious exactly where.\nMandelbrot\nThe Mandelbrot set is also generated from a sequence, ${ z_n }$, using the relation\n\\begin{equation}\n z_{n+1} = z_n^2 + c, \\qquad z_0 = 0.\n\\end{equation}\nThe members of the sequence, and the constant $c$, are all complex. The point in the complex plane at $c$ is in the Mandelbrot set only if the $|z_n| < 2$ for all members of the sequence. In reality, checking the first 100 iterations is sufficient.\nNote: the python notation for a complex number $x + \\text{i} y$ is x + yj: that is, j is used to indicate $\\sqrt{-1}$. 
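A few quick checks of this notation, using only built-in complex arithmetic:

```python
# Python's built-in complex type: j denotes the imaginary unit.
c = 1 + 2j
print(c == complex(1, 2))  # True
print(abs(3 + 4j))         # 5.0
print((1 + 1j)**2)         # 2j
```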
If you know the values of x and y then x + yj constructs a complex number; if they are stored in variables you can use complex(x, y).\nExercise 1\nWrite a function that checks if the point $c$ is in the Mandelbrot set.\nSolution", "def in_Mandelbrot(c, n_iterations = 100):\n z0 = 0.0 + 0j\n in_set = True\n n = 0\n zn = z0\n while in_set and (n < n_iterations):\n n += 1\n znew = zn**2 + c\n in_set = abs(znew) < 2.0\n zn = znew\n return in_set", "Exercise 2\nCheck the points $c=0$ and $c=\\pm 2 \\pm 2 \\text{i}$ and ensure they do what you expect. (What should you expect?)\nSolution", "c_values = [0.0, 2+2j, 2-2j, -2+2j, -2-2j]\nfor c in c_values:\n print(\"Is {} in the Mandelbrot set? {}.\".format(c, in_Mandelbrot(c)))", "Exercise 3\nWrite a function that, given $N$\n\ngenerates an $N \\times N$ grid spanning $c = x + \\text{i} y$, for $-2 \\le x \\le 2$ and $-2 \\le y \\le 2$;\nreturns an $N\\times N$ array containing one if the associated grid point is in the Mandelbrot set, and zero otherwise.\n\nSolution", "import numpy\n\ndef grid_Mandelbrot(N):\n x = numpy.linspace(-2.0, 2.0, N)\n X, Y = numpy.meshgrid(x, x)\n C = X + 1j*Y\n grid = numpy.zeros((N, N), int)\n for nx in range(N):\n for ny in range(N):\n grid[nx, ny] = int(in_Mandelbrot(C[nx, ny]))\n return grid", "Exercise 4\nUsing the function imshow from matplotlib, plot the resulting array for a $100 \\times 100$ array to make sure you see the expected shape.\nSolution", "from matplotlib import pyplot\n%matplotlib inline\n\npyplot.imshow(grid_Mandelbrot(100))", "Exercise 5\nModify your functions so that, instead of returning whether a point is inside the set or not, it returns the logarithm of the number of iterations it takes. 
Plot the result using imshow again.\nSolution", "from math import log\n\ndef log_Mandelbrot(c, n_iterations = 100):\n z0 = 0.0 + 0j\n in_set = True\n n = 0\n zn = z0\n while in_set and (n < n_iterations):\n n += 1\n znew = zn**2 + c\n in_set = abs(znew) < 2.0\n zn = znew\n return log(n)\n\ndef log_grid_Mandelbrot(N):\n x = numpy.linspace(-2.0, 2.0, N)\n X, Y = numpy.meshgrid(x, x)\n C = X + 1j*Y\n grid = numpy.zeros((N, N)) # float array: the entries are now logarithms, not integers\n for nx in range(N):\n for ny in range(N):\n grid[nx, ny] = log_Mandelbrot(C[nx, ny])\n return grid\n\nfrom matplotlib import pyplot\n%matplotlib inline\n\npyplot.imshow(log_grid_Mandelbrot(100))", "Exercise 6\nTry some higher resolution plots, and try plotting only a section to see the structure. Note this is not a good way to get high accuracy close up images!\nSolution\nThis is a simple example:", "pyplot.imshow(log_grid_Mandelbrot(1000)[600:800,400:600])", "Equivalence classes\nAn equivalence relation groups the objects in a set into related subsets, called equivalence classes. For example, if we think of the integers modulo $7$, then $1$ is in the same equivalence class as $8$ (and $15$, and $22$, and so on), and $3$ is in the same equivalence class as $10$. We use the tilde $3 \\sim 10$ to denote two objects within the same equivalence class.\nHere, we are going to define the positive integers programmatically from equivalent sequences.\nExercise 1\nDefine a python class Eqint.
This should be\n\nInitialized by a sequence;\nStore the sequence;\nDefine its representation (via the __repr__ function) to be the integer length of the sequence;\nRedefine equality (via the __eq__ function) so that two eqints are equal if their sequences have same length.\n\nSolution", "class Eqint(object):\n \n def __init__(self, sequence):\n self.sequence = sequence\n \n def __repr__(self):\n return str(len(self.sequence))\n \n def __eq__(self, other):\n return len(self.sequence)==len(other.sequence)", "Exercise 2\nDefine a zero object from the empty list, and three one objects, from a single object list, tuple, and string. For example\npython\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\nCheck that none of the one objects equal the zero object, but all equal the other one objects. Print each object to check that the representation gives the integer length.\nSolution", "zero = Eqint([])\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\n\nprint(\"Is zero equivalent to one? {}, {}, {}\".format(zero == one_list, \n zero == one_tuple,\n zero == one_string))\nprint(\"Is one equivalent to one? {}, {}, {}.\".format(one_list == one_tuple,\n one_list == one_string,\n one_tuple == one_string))\nprint(zero)\nprint(one_list)\nprint(one_tuple)\nprint(one_string)", "Exercise 3\nRedefine the class by including an __add__ method that combines the two sequences. 
That is, if a and b are Eqints then a+b should return an Eqint defined by combining a's and b's sequences.\nNote\nAdding two different types of sequences (eg, a list to a tuple) does not work, so it is better to either iterate over the sequences, or to convert to a uniform type before adding.\nSolution", "class Eqint(object):\n \n def __init__(self, sequence):\n self.sequence = sequence\n \n def __repr__(self):\n return str(len(self.sequence))\n \n def __eq__(self, other):\n return len(self.sequence)==len(other.sequence)\n \n def __add__(a, b):\n return Eqint(tuple(a.sequence) + tuple(b.sequence))", "Exercise 4\nCheck your addition function by adding together all your previous Eqint objects (which will need re-defining, as the class has been redefined). Print the resulting object to check you get 3, and also print its internal sequence.\nSolution", "zero = Eqint([])\none_list = Eqint([1])\none_tuple = Eqint((1,))\none_string = Eqint('1')\n\nsum_eqint = zero + one_list + one_tuple + one_string\nprint(\"The sum is {}.\".format(sum_eqint))\nprint(\"The internal sequence is {}.\".format(sum_eqint.sequence))", "Exercise 5\nWe will sketch a construction of the positive integers from nothing.\n\nDefine an empty list positive_integers.\nDefine an Eqint called zero from the empty list. Append it to positive_integers.\nDefine an Eqint called next_integer from the Eqint defined by a copy of positive_integers (ie, use Eqint(list(positive_integers))). Append it to positive_integers.\nRepeat step 3 as often as needed.\n\nUse this procedure to define the Eqint equivalent to $10$.
Print it, and its internal sequence, to check.\nSolution", "positive_integers = []\nzero = Eqint([])\npositive_integers.append(zero)\n\nN = 10\nfor n in range(1,N+1):\n positive_integers.append(Eqint(list(positive_integers)))\n \nprint(\"The 'final' Eqint is {}\".format(positive_integers[-1]))\nprint(\"Its sequence is {}\".format(positive_integers[-1].sequence))\nprint(\"That is, it contains all Eqints with length less than 10.\")", "Rational numbers\nInstead of working with floating point numbers, which are not \"exact\", we could work with the rational numbers $\\mathbb{Q}$. A rational number $q \\in \\mathbb{Q}$ is defined by the numerator $n$ and denominator $d$ as $q = \\frac{n}{d}$, where $n$ and $d$ are coprime (ie, have no common divisor other than $1$).\nExercise 1\nFind a python function that finds the greatest common divisor (gcd) of two numbers. Use this to write a function normal_form that takes a numerator and divisor and returns the coprime $n$ and $d$. Test this function on $q = \\frac{3}{2}$, $q = \\frac{15}{3}$, and $q = \\frac{20}{42}$.\nSolution", "def normal_form(numerator, denominator):\n from math import gcd # fractions.gcd was removed in Python 3.9; math.gcd is the replacement\n \n factor = gcd(numerator, denominator)\n return numerator//factor, denominator//factor\n\nprint(normal_form(3, 2))\nprint(normal_form(15, 3))\nprint(normal_form(20, 42))", "Exercise 2\nDefine a class Rational that uses the normal_form function to store the rational number in the appropriate form. Define a __repr__ function that prints a string that looks like $\\frac{n}{d}$ (hint: use len(str(number)) to find the number of digits of an integer).
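As a quick check of the hint (on some throwaway values):

```python
# len(str(number)) counts the digits of a positive integer.
for number in (7, 42, 12345):
    print(number, len(str(number)))
```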
Test it on the cases above.\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nq1 = Rational(3, 2)\nprint(q1)\nq2 = Rational(15, 3)\nprint(q2)\nq3 = Rational(20, 42)\nprint(q3)", "Exercise 3\nOverload the __add__ function so that you can add two rational numbers. Test it on $\\frac{1}{2} + \\frac{1}{3} + \\frac{1}{6} = 1$.\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nprint(Rational(1,2) + Rational(1,3) + Rational(1,6))", "Exercise 4\nOverload the __mul__ function so that you can multiply two rational numbers. 
Test it on $\\frac{1}{3} \\times \\frac{15}{2} \\times \\frac{2}{5} = 1$.\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nprint(Rational(1,3)*Rational(15,2)*Rational(2,5))", "Exercise 5\nOverload the __rmul__ function so that you can multiply a rational by an integer. Check that $\\frac{1}{2} \\times 2 = 1$ and $\\frac{1}{2} + (-1) \\times \\frac{1}{2} = 0$. Also overload the __sub__ function (using previous functions!) 
so that you can subtract rational numbers and check that $\\frac{1}{2} - \\frac{1}{2} = 0$.\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nhalf = Rational(1,2)\nprint(2*half)\nprint(half+(-1)*half)\nprint(half-half)", "Exercise 6\nOverload the __float__ function so that float(q) returns the floating point approximation to the rational number q. 
Test this on $\\frac{1}{2}, \\frac{1}{3}$, and $\\frac{1}{11}$.\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __float__(a):\n \n return float(a.numerator) / float(a.denominator)\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nprint(float(Rational(1,2)))\nprint(float(Rational(1,3)))\nprint(float(Rational(1,11)))", "Exercise 7\nOverload the __lt__ function to compare two rational numbers. Create a list of rational numbers where the denominator is $n = 2, \\dots, 11$ and the numerator is the floored integer $n/2$, ie n//2. 
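For reference, the numerators that floor division produces here are:

```python
# The floored numerators n//2 for denominators n = 2, ..., 11.
print([n // 2 for n in range(2, 12)])  # → [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```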
Use the sorted function on that list (which relies on the __lt__ function).\nSolution", "class Rational(object):\n \"\"\"\n A rational number.\n \"\"\"\n \n def __init__(self, numerator, denominator):\n \n n, d = normal_form(numerator, denominator)\n \n self.numerator = n\n self.denominator = d\n \n return None\n \n def __add__(a, b):\n \n numerator = a.numerator * b.denominator + b.numerator * a.denominator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __mul__(a, b):\n \n numerator = a.numerator * b.numerator\n denominator = a.denominator * b.denominator\n return Rational(numerator, denominator)\n \n def __rmul__(self, other):\n \n numerator = self.numerator * other\n return Rational(numerator, self.denominator)\n \n def __sub__(a, b):\n \n return a + (-1)*b\n \n def __float__(a):\n \n return float(a.numerator) / float(a.denominator)\n \n def __lt__(a, b):\n \n return a.numerator * b.denominator < a.denominator * b.numerator\n \n def __repr__(self):\n \n max_length = max(len(str(self.numerator)), len(str(self.denominator)))\n \n if self.denominator == 1:\n frac = str(self.numerator)\n else:\n numerator = '\\n'+str(self.numerator)+'\\n'\n bar = max_length*'-'+'\\n'\n denominator = str(self.denominator)\n frac = numerator+bar+denominator\n \n return frac\n\nq_list = [Rational(n//2, n) for n in range(2, 12)]\nprint(sorted(q_list))", "Exercise 8\nThe Wallis formula for $\\pi$ is\n\\begin{equation}\n \\pi = 2 \\prod_{n=1}^{\\infty} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)}.\n\\end{equation}\nWe can define a partial product $\\pi_N$ as\n\\begin{equation}\n \\pi_N = 2 \\prod_{n=1}^{N} \\frac{ (2 n)^2 }{(2 n - 1) (2 n + 1)},\n\\end{equation}\neach of which are rational numbers.\nConstruct a list of the first 20 rational number approximations to $\\pi$ and print them out. Print the sorted list to show that the approximations are always increasing. 
Then convert them to floating point numbers, construct a numpy array, and subtract this array from $\\pi$ to see how accurate they are.\nSolution", "def wallis_rational(N):\n \"\"\"\n The partial product approximation to pi using the first N terms of Wallis' formula.\n \n Parameters\n ----------\n \n N : int\n Number of terms in product\n \n Returns\n -------\n \n partial : Rational\n A rational number approximation to pi\n \"\"\"\n \n partial = Rational(2,1)\n for n in range(1, N+1):\n partial = partial * Rational((2*n)**2, (2*n-1)*(2*n+1))\n return partial\n\npi_list = [wallis_rational(n) for n in range(1, 21)]\nprint(pi_list)\nprint(sorted(pi_list))\n\nimport numpy\nprint(numpy.pi-numpy.array(list(map(float, pi_list))))", "The shortest published Mathematical paper\nA candidate for the shortest mathematical paper ever shows the following result:\n\\begin{equation}\n 27^5 + 84^5 + 110^5 + 133^5 = 144^5.\n\\end{equation}\nThis is interesting as\n\nThis is a counterexample to a conjecture by Euler ... that at least $n$ $n$th powers are required to sum to an $n$th power, $n > 2$.\n\nExercise 1\nUsing python, check the equation above is true.\nSolution", "lhs = 27**5 + 84**5 + 110**5 + 133**5\nrhs = 144**5\n\nprint(\"Does the LHS {} equal the RHS {}? {}\".format(lhs, rhs, lhs==rhs))", "Exercise 2\nThe more interesting statement in the paper is that\n\\begin{equation}\n 27^5 + 84^5 + 110^5 + 133^5 = 144^5.\n\\end{equation}\n\n[is] the smallest instance in which four fifth powers sum to a fifth power.\n\nInterpreting \"the smallest instance\" to mean the solution where the right hand side term (the largest integer) is the smallest, we want to use python to check this statement.\nYou may find the combinations function from the itertools package useful.", "import numpy\nimport itertools", "The combinations function returns all the combinations (ignoring order) of r elements from a given list. 
For example, take a list of length 6, [1, 2, 3, 4, 5, 6] and compute all the combinations of length 4:", "input_list = numpy.arange(1, 7)\ncombinations = list(itertools.combinations(input_list, 4))\nprint(combinations)", "We can already see that the number of terms to consider is large.\nNote that we have used the list function to explicitly get a list of the combinations. The combinations function returns a generator, which can be used in a loop as if it were a list, without storing all elements of the list.\nHow fast does the number of combinations grow? The standard formula says that for a list of length $n$ there are\n\\begin{equation}\n \\begin{pmatrix} n \\ k \\end{pmatrix} = \\frac{n!}{k! (n-k)!}\n\\end{equation}\ncombinations of length $k$. For $k=4$ as needed here we will have $n (n-1) (n-2) (n-3) / 24$ combinations. For $n=144$ we therefore have", "n_combinations = 144*143*142*141//24 # integer division keeps the count an integer\nprint(\"Number of combinations of 4 objects from 144 is {}\".format(n_combinations))", "Exercise 2a\nShow, by getting python to compute the number of combinations $N = \\begin{pmatrix} n \\ 4 \\end{pmatrix}$ that $N$ grows roughly as $n^4$. To do this, plot the number of combinations and $n^4$ on a log-log scale. Restrict to $n \\le 50$.\nSolution", "from matplotlib import pyplot\n%matplotlib inline\n\nn = numpy.arange(5, 51)\nN = numpy.zeros_like(n)\nfor i, n_c in enumerate(n):\n combinations = list(itertools.combinations(numpy.arange(1,n_c+1), 4))\n N[i] = len(combinations)\n\npyplot.figure(figsize=(12,6))\npyplot.loglog(n, N, linestyle='None', marker='x', color='k', label='Combinations')\npyplot.loglog(n, n**4, color='b', label=r'$n^4$')\npyplot.xlabel(r'$n$')\npyplot.ylabel(r'$N$')\npyplot.legend(loc='upper left')\npyplot.show()", "With 17 million combinations to work with, we'll need to be a little careful how we compute.\nOne thing we could try is to loop through each possible \"smallest instance\" (the term on the right hand side) in increasing order.
We then check all possible combinations of left hand sides.\nThis is computationally very expensive as we repeat a lot of calculations. We repeatedly recalculate combinations (a bad idea). We repeatedly recalculate the powers of the same number.\nInstead, let us try creating the list of all combinations of powers once.\nExercise 2b\n\nConstruct a numpy array containing all integers in $1, \\dots, 144$ to the fifth power. \nConstruct a list of all combinations of four elements from this array.\nConstruct a list of sums of all these combinations.\nLoop over one list and check if the entry appears in the other list (ie, use the in keyword).\n\nSolution", "nmax=145\nrange_to_power = numpy.arange(1, nmax)**5\nlhs_combinations = list(itertools.combinations(range_to_power, 4))", "Then calculate the sums:", "lhs_sums = []\nfor lhs_terms in lhs_combinations:\n lhs_sums.append(numpy.sum(numpy.array(lhs_terms)))", "Finally, loop through the sums and check to see if it matches any possible term on the RHS:", "for i, lhs in enumerate(lhs_sums):\n if lhs in range_to_power:\n rhs_primitive = int(round(lhs**(0.2))) # round before casting: the floating point fifth root can land just below the integer\n lhs_primitive = numpy.rint(numpy.array(lhs_combinations[i])**(0.2)).astype(int)\n print(\"The LHS terms are {}.\".format(lhs_primitive))\n print(\"The RHS term is {}.\".format(rhs_primitive))
params) that returns $\\vec{f}$ given $\\vec{v}, t$ and the parameters $\\sigma, \\rho, \\beta$.\nSolution", "def dvdt(v, t, sigma, rho, beta):\n \"\"\"\n Define the Lorenz system.\n \n Parameters\n ----------\n \n v : list\n State vector\n t : float\n Time\n sigma : float\n Parameter\n rho : float\n Parameter\n beta : float\n Parameter\n \n Returns\n -------\n \n dvdt : list\n RHS defining the Lorenz system\n \"\"\"\n \n x, y, z = v\n \n return [sigma*(y-x), x*(rho-z)-y, x*y-beta*z]", "Exercise 2\nFix $\\sigma=10, \\beta=8/3$. Set initial data to be $\\vec{v}(0) = \\vec{1}$. Using scipy, specifically the odeint function of scipy.integrate, solve the Lorenz system up to $t=100$ for $\\rho=13, 14, 15$ and $28$.\nPlot your results in 3d, plotting $x, y, z$.\nSolution", "import numpy\nfrom scipy.integrate import odeint\n\nv0 = [1.0, 1.0, 1.0]\nsigma = 10.0\nbeta = 8.0/3.0\nt_values = numpy.linspace(0.0, 100.0, 5000)\nrho_values = [13.0, 14.0, 15.0, 28.0]\nv_values = []\nfor rho in rho_values:\n params = (sigma, rho, beta)\n v = odeint(dvdt, v0, t_values, args=params)\n v_values.append(v)\n\n%matplotlib inline\nfrom matplotlib import pyplot\nfrom mpl_toolkits.mplot3d.axes3d import Axes3D\n\nfig = pyplot.figure(figsize=(12,6))\nfor i, v in enumerate(v_values):\n ax = fig.add_subplot(2,2,i+1,projection='3d')\n ax.plot(v[:,0], v[:,1], v[:,2])\n ax.set_xlabel(r'$x$')\n ax.set_ylabel(r'$y$')\n ax.set_zlabel(r'$z$')\n ax.set_title(r\"$\\rho={}$\".format(rho_values[i]))\npyplot.show()", "Exercise 3\nFix $\\rho = 28$. Solve the Lorenz system twice, up to $t=40$, using the two different initial conditions $\\vec{v}(0) = \\vec{1}$ and $\\vec{v}(0) = \\vec{1} + \\vec{10^{-5}}$.\nShow four plots. Each plot should show the two solutions on the same axes, plotting $x, y$ and $z$. 
Each plot should show $10$ units of time, ie the first shows $t \\in [0, 10]$, the second shows $t \\in [10, 20]$, and so on.\nSolution", "t_values = numpy.linspace(0.0, 40.0, 4000)\nrho = 28.0\nparams = (sigma, rho, beta)\nv_values = []\nv0_values = [[1.0,1.0,1.0],\n [1.0+1e-5,1.0+1e-5,1.0+1e-5]]\nfor v0 in v0_values:\n v = odeint(dvdt, v0, t_values, args=params)\n v_values.append(v)\n\nfig = pyplot.figure(figsize=(12,6))\nline_colours = 'by'\nfor tstart in range(4):\n ax = fig.add_subplot(2,2,tstart+1,projection='3d')\n for i, v in enumerate(v_values):\n ax.plot(v[tstart*1000:(tstart+1)*1000,0], \n v[tstart*1000:(tstart+1)*1000,1], \n v[tstart*1000:(tstart+1)*1000,2], \n color=line_colours[i])\n ax.set_xlabel(r'$x$')\n ax.set_ylabel(r'$y$')\n ax.set_zlabel(r'$z$')\n ax.set_title(r\"$t \\in [{},{}]$\".format(tstart*10, (tstart+1)*10))\npyplot.show()", "This shows the sensitive dependence on initial conditions that is characteristic of chaotic behaviour.\nSystematic ODE solving with sympy\nWe are interested in the solution of\n\\begin{equation}\n \\frac{\\text{d} y}{\\text{d} t} = e^{-t} - y^n, \\qquad y(0) = 1,\n\\end{equation}\nwhere $n > 1$ is an integer. The \"minor\" change from the above examples means that sympy can only give the solution as a power series.\nExercise 1\nCompute the general solution as a power series for $n = 2$.\nSolution", "import sympy\nsympy.init_printing()\n\nt = sympy.symbols('t')\ny = sympy.Function('y') # an undefined function, so that y(t) is valid; a plain symbol is not callable\n\nsympy.dsolve(sympy.diff(y(t), t) + y(t)**2 - sympy.exp(-t), y(t))", "Exercise 2\nInvestigate the help for the dsolve function to straightforwardly impose the initial condition $y(0) = 1$ using the ics argument.
Using this, compute the specific solutions that satisfy the ODE for $n = 2, \\dots, 10$.\nSolution", "for n in range(2, 11):\n ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t), \n ics = {y(0) : 1})\n print(ode_solution)", "Exercise 3\nUsing the removeO command, plot each of these solutions for $t \\in [0, 1]$.", "%matplotlib inline\n\nfor n in range(2, 11):\n ode_solution = sympy.dsolve(sympy.diff(y(t), t) + y(t)**n - sympy.exp(-t), y(t), \n ics = {y(0) : 1})\n sympy.plot(ode_solution.rhs.removeO(), (t, 0, 1));", "Twin primes\nA twin prime is a pair $(p_1, p_2)$ such that both $p_1$ and $p_2$ are prime and $p_2 = p_1 + 2$.\nExercise 1\nWrite a generator that returns twin primes. You can use the generators above, and may want to look at the itertools module together with its recipes, particularly the pairwise recipe.\nSolution\nNote: we need to first pull in the generators introduced in that notebook", "def all_primes(N):\n \"\"\"\n Return all primes less than or equal to N.\n \n Parameters\n ----------\n \n N : int\n Maximum number\n \n Returns\n -------\n \n prime : generator\n Prime numbers\n \"\"\"\n \n primes = []\n for n in range(2, N+1):\n is_n_prime = True\n for p in primes:\n if n%p == 0:\n is_n_prime = False\n break\n if is_n_prime:\n primes.append(n)\n yield n", "Now we can generate pairs using the pairwise recipe:", "from itertools import tee\n\ndef pair_primes(N):\n \"Generate consecutive prime pairs, using the itertools recipe\"\n a, b = tee(all_primes(N))\n next(b, None)\n return zip(a, b)", "We could examine the results of the two primes directly. But an efficient solution is to use python's filter function. 
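As a reminder of how filter behaves in a simpler setting (keeping the even numbers, say):

```python
# filter keeps only the elements for which the predicate returns True.
def is_even(n):
    return n % 2 == 0

print(list(filter(is_even, range(10))))  # → [0, 2, 4, 6, 8]
```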
To do this, first define a function checking if the pair are twin primes:", "def check_twin(pair):\n \"\"\"\n Take in a pair of integers, check if they differ by 2.\n \"\"\"\n p1, p2 = pair\n return p2-p1 == 2", "Then use the filter function to define another generator:", "def twin_primes(N):\n \"\"\"\n Return all twin primes\n \"\"\"\n return filter(check_twin, pair_primes(N))", "Now check by finding the twin primes with $N<20$:", "for tp in twin_primes(20):\n print(tp)", "Exercise 2\nFind how many twin primes there are with $p_2 < 1000$.\nSolution\nAgain there are many solutions, but the itertools recipes has the quantify pattern. Looking ahead to exercise 3 we'll define:", "def pi_N(N):\n \"\"\"\n Use the quantify pattern from itertools to count the number of twin primes.\n \"\"\"\n return sum(map(check_twin, pair_primes(N)))\n\npi_N(1000)", "Exercise 3\nLet $\\pi_N$ be the number of twin primes such that $p_2 < N$. Plot how $\\pi_N / N$ varies with $N$ for $N=2^k$ and $k = 4, 5, \\dots 16$. (You should use a logarithmic scale where appropriate!)\nSolution\nWe've now done all the hard work and can use the solutions above.", "import numpy\nfrom matplotlib import pyplot\n%matplotlib inline\n\nN = numpy.array([2**k for k in range(4, 17)])\ntwin_prime_fraction = numpy.array(list(map(pi_N, N))) / N\n\npyplot.semilogx(N, twin_prime_fraction)\npyplot.xlabel(r\"$N$\")\npyplot.ylabel(r\"$\\pi_N / N$\")\npyplot.show()", "For those that have checked Wikipedia, you'll see Brun's theorem which suggests a specific scaling, that $\\pi_N$ is bounded by $C N / \\log(N)^2$. Checking this numerically on this data:", "pyplot.semilogx(N, twin_prime_fraction * numpy.log(N)**2)\npyplot.xlabel(r\"$N$\")\npyplot.ylabel(r\"$\\pi_N \\times \\log(N)^2 / N$\")\npyplot.show()", "A basis for the polynomials\nIn the section on classes we defined a Monomial class to represent a polynomial with leading coefficient $1$. 
As the $N+1$ monomials $1, x, x^2, \\dots, x^N$ form a basis for the vector space of polynomials of order $N$, $\\mathbb{P}^N$, we can use the Monomial class to return this basis.\nExercise 1\nDefine a generator that will iterate through this basis of $\\mathbb{P}^N$ and test it on $\\mathbb{P}^3$.\nSolution\nAgain we first take the definition of the crucial class from the notes.", "class Polynomial(object):\n \"\"\"Representing a polynomial.\"\"\"\n explanation = \"I am a polynomial\"\n \n def __init__(self, roots, leading_term):\n self.roots = roots\n self.leading_term = leading_term\n self.order = len(roots)\n \n def __repr__(self):\n string = str(self.leading_term)\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string\n \n def __mul__(self, other):\n roots = self.roots + other.roots\n leading_term = self.leading_term * other.leading_term\n return Polynomial(roots, leading_term)\n \n def explain_to(self, caller):\n print(\"Hello, {}. {}.\".format(caller,self.explanation))\n print(\"My roots are {}.\".format(self.roots))\n return None\n\nclass Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n Polynomial.__init__(self, roots, 1)\n \n def __repr__(self):\n string = \"\"\n for root in self.roots:\n if root == 0:\n string = string + \"x\"\n elif root > 0:\n string = string + \"(x - {})\".format(root)\n else:\n string = string + \"(x + {})\".format(-root)\n return string", "Now we can define the first basis:", "def basis_pN(N):\n \"\"\"\n A generator for the simplest basis of P^N.\n \"\"\"\n \n for n in range(N+1):\n yield Monomial(n*[0])", "Then test it on $\\mathbb{P}^N$:", "for poly in basis_pN(3):\n print(poly)", "This looks horrible, but is correct. 
To really make this look good, we need to improve the output. If we use", "class Monomial(Polynomial):\n \"\"\"Representing a monomial, which is a polynomial with leading term 1.\"\"\"\n explanation = \"I am a monomial\"\n \n def __init__(self, roots):\n Polynomial.__init__(self, roots, 1)\n \n def __repr__(self):\n if len(self.roots):\n string = \"\"\n n_zero_roots = len(self.roots) - numpy.count_nonzero(self.roots)\n if n_zero_roots == 1:\n string = \"x\"\n elif n_zero_roots > 1:\n string = \"x^{}\".format(n_zero_roots)\n else: # Monomial degree 0.\n string = \"1\"\n for root in self.roots:\n if root > 0:\n string = string + \"(x - {})\".format(root)\n elif root < 0:\n string = string + \"(x + {})\".format(-root)\n return string", "then we can deal with the uglier cases, and re-running the test we get", "for poly in basis_pN(3):\n print(poly)", "An even better solution would be to use the numpy.unique function as in this stackoverflow answer (the second one!) to get the frequency of all the roots.\nExercise 2\nAn alternative basis is given by the monomials\n\\begin{align}\n p_0(x) &= 1, \\ p_1(x) &= 1-x, \\ p_2(x) &= (1-x)(2-x), \\ \\dots & \\quad \\dots, \\ p_N(x) &= \\prod_{n=1}^N (n-x).\n\\end{align}\nDefine a generator that will iterate through this basis of $\\mathbb{P}^N$ and test it on $\\mathbb{P}^4$.\nSolution", "def basis_pN_variant(N):\n \"\"\"\n A generator for the 'sum' basis of P^N.\n \"\"\"\n \n for n in range(N+1):\n yield Monomial(range(n+1))\n\nfor poly in basis_pN_variant(4):\n print(poly)", "I am too lazy to work back through the definitions and flip all the signs; it should be clear how to do this!\nExercise 3\nUse these generators to write another generator that produces a basis of $\\mathbb{P^3} \\times \\mathbb{P^4}$.\nSolution\nHopefully by now you'll be aware of how useful itertools is!", "from itertools import product\n\ndef basis_product():\n \"\"\"\n Basis of the product space\n \"\"\"\n yield from product(basis_pN(3), 
basis_pN_variant(4))\n\nfor p1, p2 in basis_product():\n print(\"Basis element is ({}) X ({}).\".format(p1, p2))", "I've cheated here as I haven't introduced the yield from syntax (which returns an iterator from a generator). We could write this out instead as", "def basis_product_long_form():\n \"\"\"\n Basis of the product space (without using yield from)\n \"\"\"\n prod = product(basis_pN(3), basis_pN_variant(4))\n for element in prod:\n yield element\n\nfor p1, p2 in basis_product_long_form():\n print(\"Basis element is ({}) X ({}).\".format(p1, p2))"
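For the curious, here is what yield from does in a minimal, self-contained setting (plain lists rather than our polynomial bases):

```python
def chain_two(a, b):
    # yield from delegates to another iterable, yielding each of its items in turn.
    yield from a
    yield from b

print(list(chain_two([1, 2], [3])))  # → [1, 2, 3]
```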
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bryangraham/ipt
Notebooks/Average_Regression_Monte_Carlo_Experiments .ipynb
mit
[ "Monte Carlo experiments for average regression\nBryan S. Graham, UC - Berkeley, bgraham@econ.berkeley.edu\nCristine Pinto, FGV, cristinepinto@gmail.com\nThis notebook includes replication code for the Monte Carlo experiments reported in Graham and Pinto (2018). In addition to several standard Python scientific computing libraries, we use functions from the ipt library. This library is available on GitHub at https://github.com/bryangraham/ipt. It includes implementations of Wooldridge's (2004) generalized inverse probability weighting estimator for average partial effects (APE), the \"Oaxaca-Blinder\" type APE estimator described in the paper, as well as our own locally efficient, doubly robust estimator.\n<br>\n<br>\nReferences\nGraham, Bryan S. and Pinto, Cristine Campose de Xavier. (2018). \"Semiparametrically efficient estimation of the average linear regression function,\" CEMMAP Working Paper.", "# Direct Python to plot all figures inline (i.e., not in a separate window)\n%matplotlib inline\n\n# Load libraries\nimport numpy as np\nimport numpy.linalg\n\nimport scipy as sp\nimport scipy.optimize\n\nimport pandas as pd\n\n# Append location of ipt module base directory to system path\n# NOTE: only required if permanent install of ipt package not made\nimport sys\nsys.path.append('/Users/bgraham/Dropbox/Sites/software/ipt/')\n\n# Load ipt module\nimport ipt as ipt\n\nimport warnings\n\ndef fxn():\n    warnings.warn(\"runtime\", RuntimeWarning)\n\nwith warnings.catch_warnings():\n    warnings.simplefilter(\"ignore\")\n    fxn()\n\nfrom platform import python_version\nprint(python_version())", "We consider four designs, each described in detail in Graham and Pinto (2018). In all cases the conditional distribution $ f(X|W) $ is Poisson with a conditional mean of $\exp\left(k\left(W\right)'\phi\right)$. In designs $1$ and $3$ the $k\left(W\right)$ vector includes a constant and a linear term. In designs $2$ and $4$ a quadratic term is also included. 
\n<br>\n<br>\nThe outcome variable is generated according to $Y=a\left(W\right)+b\left(W\right)X+U$. In the first two designs $a\left(W\right)$ and $b\left(W\right)$ are linear functions of $W$, while in the last two they are quadratic functions. Both $W$ and $U$ are standard normal random variables, uncorrelated with each other. The parameter values are chosen such that the standard error of a semiparametrically efficient estimator would be 0.05 across each design (when $N = 1,000$). In this sense each design is equally difficult. \n<br>\n<br>\nWe evaluate the performance of three estimators: (i) the generalized inverse probability weight estimator introduced by Wooldridge (2004), (ii) the \"Oaxaca-Blinder\" imputation type estimator discussed in the paper, and (iii) our own locally efficient doubly robust estimator. \n<br>\n<br>\nFor the Wooldridge (2004) estimator $ f(X|W) $ is modelled as a Poisson distribution with a conditional mean of $\exp\left(k\left(W\right)'\phi\right)$ with $k\left(W\right)$ including a constant and linear term. The Wooldridge (2004) estimator is consistent in designs 1 and 3. The \"Oaxaca-Blinder\" estimator assumes that both $a\left(W\right)$ and $b\left(W\right)$ are linear functions of $W$. This estimator will be consistent in designs $1$ and $2$. 
\n<br>\n<br>\nAll three estimators are inconsistent in design $4$.", "import time\n\nN = 1000\nS = 5000\n\n# List with a, b, c and efficiency bounds for each design\nD1 = [[1, 1, 0], [2, 1.22, 0], [0.1, 0.5, 0], [0.05]]\nD2 = [[1, 1, 0], [2, 1.26, 0], [0.1, 0.5, 0.1], [0.05]]\nD3 = [[1, 1, 0.5], [2, 1, 0.5], [0.1, 0.5, 0], [0.05]]\nD4 = [[1, 1, 0.5], [2, 1.05, 0.5], [0.1, 0.5, 0.1], [0.05]]\n\nDesigns = [D1, D2, D3, D4]\n\nNumDesigns = len(Designs)\n\n# Initialize matrices to store Monte Carlo results\nbias = np.zeros((S,3*NumDesigns))\ncoverage = np.zeros((S,3*NumDesigns))\nse = np.zeros((S,3*NumDesigns))\n\n# Set random seed\nnp.random.seed(361)\n\nd = 0\nstart = time.time()\nfor Design in Designs:\n \n print(\"Simulating design \" + str(d+1) + \" of \" + str(len(Designs)))\n \n a = Design[0]\n b = Design[1]\n c = Design[2]\n asym_se = Design[3][0]\n \n for s in range(0,S):\n \n # Simulate s-th dataset\n W, U = np.random.multivariate_normal([0,0], [[1, 0], [0, 1]], N).T\n X = np.random.poisson(np.exp(c[0] + c[1]*W + c[2]*W**2))\n Y = ((a[0] + a[2]) + a[1]*W + a[2]*(W**2 - 1)) + ((b[0] + b[2]) + b[1]*W + b[2]*(W**2 - 1))*X + U\n \n W = pd.DataFrame(W, columns=['W']) \n X = pd.Series(X, name = 'X') \n Y = pd.Series(Y, name = 'Y')\n \n # Doubly robust estimator\n [beta_hat_dr, vcov_beta_hat_dr] = ipt.avreg_dr(Y, X, W, psmodel='poisson', c_id=None, s_wgt=None, \\\n silent=True)\n \n # Wooldridge (2004) estimator\n [beta_hat_ipw, vcov_beta_hat_ipw] = ipt.avreg_ipw(Y, X, W, psmodel='poisson', c_id=None, s_wgt=None, \\\n silent=True)\n \n # \"Oaxaca-Blinder\" estimator\n [beta_hat_ob, vcov_beta_hat_ob] = ipt.avreg_ob(Y, X, W, c_id=None, s_wgt=None, \\\n silent=True)\n \n bias[s,[d,d+NumDesigns,d+2*NumDesigns]] = (beta_hat_dr[0] - b[0] - b[2])[0], \\\n (beta_hat_ipw[0] - b[0] - b[2])[0], \\\n (beta_hat_ob[0] - b[0] - b[2])[0]\n \n # Coverage\n coverage[s,[d,d+NumDesigns,d+2*NumDesigns]] = ((b[0] + b[2]<=beta_hat_dr[0] + 1.96*np.sqrt(vcov_beta_hat_dr[0,0]))*\\\n (b[0] + 
b[2]>=beta_hat_dr[0] - 1.96*np.sqrt(vcov_beta_hat_dr[0,0])))[0], \\\n ((b[0] + b[2]<=beta_hat_ipw[0] + 1.96*np.sqrt(vcov_beta_hat_ipw[0,0]))*\\\n (b[0] + b[2]>=beta_hat_ipw[0] - 1.96*np.sqrt(vcov_beta_hat_ipw[0,0])))[0], \\\n ((b[0] + b[2]<=beta_hat_ob[0] + 1.96*np.sqrt(vcov_beta_hat_ob[0,0]))*\\\n (b[0] + b[2]>=beta_hat_ob[0] - 1.96*np.sqrt(vcov_beta_hat_ob[0,0])))[0]\n \n # Standard error length\n se[s,[d,d+NumDesigns,d+2*NumDesigns]] = np.sqrt(vcov_beta_hat_dr[0,0]), \\\n np.sqrt(vcov_beta_hat_ipw[0,0]), \\\n np.sqrt(vcov_beta_hat_ob[0,0])\n \n end = time.time()\n if (s+1) % 1000 == 0:\n print(\"Time required f/ MC rep \" + str(s+1) + \" of \" + str(S) + \": \" + str(end-start)) \n start = time.time()\n d += 1 \n\n# Print options and row and column labels for Monte Carlo results\npd.options.display.precision=4\npd.set_option('display.float_format', lambda x: '%.4f' % x)\nDesigns = ['1 (S,S)', '2 (S,R)', '3 (R,S)', '4 (R,R)']\nEstimators = ['DR', 'GIPW', 'Oaxaca-Blinder']\n\n# Report bias and coverage results\nprint(\"Mean Bias\")\nmean_bias = pd.DataFrame(np.mean(bias, axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(mean_bias)\nprint(\"\")\nprint(\"Median Bias\")\nmedian_bias = pd.DataFrame(np.median(bias, axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(median_bias)\nprint(\"\")\nprint(\"Standard deviation\")\nstd_dev = pd.DataFrame(np.std(bias, axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(std_dev)\nprint(\"\")\nprint(\"Mean Standard Error\")\nmean_std_err = pd.DataFrame(np.mean(se, axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(mean_std_err)\nprint(\"\")\nprint(\"Median Standard Error\")\nmedian_std_err = pd.DataFrame(np.median(se, axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(median_std_err)\nprint(\"\")\nprint(\"Coverage (nominal 95%)\")\nactual_cov = pd.DataFrame(np.mean(coverage, 
axis=0).reshape(3,NumDesigns,order='C'), columns=Designs, index=Estimators)\nprint(actual_cov)\nprint(\"\")", "The standard error associated with a Monte Carlo coverage estimate is $\sqrt{\alpha\left(1-\alpha\right)/B}$. With $B = 5,000$ simulations and $\alpha = 0.05$ this results in a standard error of approximately 0.003 or a 95 percent confidence interval of $[0.944, 0.956]$. Overall the Monte Carlo results are consistent with theoretical expectations. This is especially true when considering larger sample sizes (as would be expected).", "# This imports an attractive notebook style from GitHub\nfrom IPython.display import HTML\nfrom urllib.request import urlopen\nhtml = urlopen('http://bit.ly/1Bf5Hft')\nHTML(html.read().decode('utf-8'))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
google/empirical_calibration
notebooks/kang_schafer_population_mean.ipynb
apache-2.0
[ "We illustrate empirical calibration on Kang-Schafer simulation under both correctly specified and misspecified models, and benchmark the execution time.", "#@title Copyright 2019 The Empirical Calibration Authors.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ============================================================================", "<table align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/google/empirical_calibration/blob/master/notebooks/kang_schafer_population_mean.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/google/empirical_calibration/blob/master/notebooks/kang_schafer_population_mean.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\n\n\nSimulation\nImports\nSelection Bias\nCorrectly Specified Model\nMisspecified Model\nAdding transformations of covariates\nAdding extra covariates\nBenchmark Execution Time\n\n\nSimulation\nThe true set of covariates is generated independently and identically\ndistributed from the standard normal distribution\n$$\n(Z_1, Z_2, Z_3, Z_4) \\sim N(0, \\mathbf{I}_4).\n$$\nThe outcome is generated as\n$$\nY = 210 + 27.4 Z_1 + 13.7 Z_2 + 13.7 Z_3 + 13.7 Z_4 + \\epsilon,\n$$\nwhere $\\epsilon \\sim N(0, 1)$.\nThe propensity score is defined as\n$$\nPr(T = 1 | Z) = \\text{expit}(-Z_1 + 0.5 
Z_2 - 0.25 Z_3 - 0.1 Z_4),\n$$\nwhere $\\text{expit}(x) = 1/(1+\\text{exp}(-x)).$\nThis mechanism produces an equal-sized treated and control group\non average. Given the covariates, the outcome is independent of the treatment\nassignment, thus the true ATT is zero. The overall outcome mean is 210. Due to\nthe treatment selection bias, the outcome mean for the treated group (200) is\nlower than that of the control group (220).\nThe typical exercise is to examine the performance of an observational method\nunder both correctly specified and misspecified propensity score and/or outcome\nregression models. Misspecification occurs when the following nonlinear\ntransformation $X_i$'s are observed in place of the true covariates\n\\begin{align}\nX_{i1} & = \\exp(Z_{i1}/2), \\\nX_{i2} & = Z_{i2} / (1 + \\exp(Z_{i1})) + 10, \\\nX_{i3} & = (Z_{i1} Z_{i3} / 25 + 0.6)^3, \\\nX_{i4} & = (Z_{i2} + Z_{i4} + 20)^2.\n\\end{align}\nFor more context, see paper.\nImports", "from matplotlib import pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport patsy\nimport seaborn as sns\nsns.set_style('whitegrid')\n%config InlineBackend.figure_format='retina'\n\nfrom google.colab import widgets\n\n# install and import ec\n!pip install -q git+https://github.com/google/empirical_calibration\nimport empirical_calibration as ec\n", "Selection Bias\nWe first simulate one dataset of size $2000$ to examine the selection bias.", "np.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=2000)\n\ndf = pd.DataFrame(\n np.column_stack([\n simulation.treatment, simulation.covariates,\n simulation.transformed_covariates, simulation.outcome\n ]))\ndf.columns = [\n \"treatment\", \"z1\", \"z2\", \"z3\", \"z4\", \"x1\", \"x2\", \"x3\", \"x4\", \"outcome\"\n]", "The treated group has a lower outcome mean than that of the control group, but the difference is not necessarily attributed to the causal effect of the treatment.", "print(df.groupby(\"treatment\").mean().T)", "The distributions 
of covariates or transformed covariates don't completely overlap between the treated and control groups.", "def show_hist(name):\n plt.figure(figsize=(6, 2))\n plt.hist(\n df.loc[df['treatment'] == 1, name],\n bins=20,\n alpha=0.4,\n color='#00BFC4',\n label='treated',\n edgecolor='none')\n plt.hist(\n df.loc[df['treatment'] == 0, name],\n bins=20,\n alpha=0.4,\n color='#F8766D',\n label='control',\n edgecolor='none')\n plt.xlabel(name)\n plt.legend(loc='upper left', prop={'size': 12})\n plt.show()\n\ntb = widgets.TabBar(['covariates', 'transformed_covariates', 'outcome'])\nwith tb.output_to('covariates'):\n for name in [\"z1\", \"z2\", \"z3\", \"z4\"]:\n show_hist(name)\n\nwith tb.output_to('transformed_covariates'):\n for name in [\"x1\", \"x2\", \"x3\", \"x4\"]:\n show_hist(name)\n\nwith tb.output_to('outcome'):\n show_hist(\"outcome\")", "Correctly Specified Model\nWe run the simulation $1000$ times under correctly specified logistic propensity score. For each simulation, the treatment group was weighted so that it matches the population in terms of their covariate distributions.\nThe estimator is the weighted value of $y$ in the treatment group.", "def estimate_mean(formula):\n simulation = ec.data.kang_schafer.Simulation(size=1000)\n\n t = simulation.treatment\n y = simulation.outcome\n\n df = pd.DataFrame(\n np.column_stack(\n [simulation.covariates, simulation.transformed_covariates]))\n df.columns = [\"z1\", \"z2\", \"z3\", \"z4\", \"x1\", \"x2\", \"x3\", \"x4\"]\n x = patsy.dmatrix(formula, df, return_type=\"dataframe\").values\n\n weights = ec.from_formula(formula=formula,\n df=df.loc[t==1],\n target_df=df)[0]\n\n return np.mean(np.sum(y[t == 1] * weights))\n\ndef show_estimates(estimates):\n estimates = pd.Series(estimates)\n ax = estimates.hist(bins=20, alpha=0.8, edgecolor='none')\n plt.axvline(estimates.mean(), linestyle='dashed', color='red')\n # True population mean is 210.\n print('bias is {}'.format(estimates.mean() - 210.))\n print('rmse is 
{}'.format(np.sqrt(np.mean((estimates - 210.) ** 2))))\n\nestimates_correct = [estimate_mean(\"-1 + z1 + z2 + z3 + z4\") for i in range(1000)]", "With correctly specified covariates to match ($Z_1, \\dots, Z_4$),\nthe bias is smaller and the RMSE is better than any of the methods in the Kang & Schafer paper, where the best RMSE was 1.17.", "show_estimates(estimates_correct)", "Misspecified Model\nIf the transformed covariates are observed in place of the true covariates, i.e., the propensity score model is misspecified, the estimate is no longer unbiased.", "estimates_miss = [estimate_mean(\"-1 + x1 + x2 + x3 + x4\") for i in range(1000)]\n\nshow_estimates(estimates_miss)", "Adding transformations of covariates\nOne reasonable strategy is to expand the set of balancing covariates and hope it will make the model less \"misspecified\". If we additionally balance the two-way interactions and the log transformation, the bias indeed reduces.", "formula = (\"-1 + (x1 + x2 + x3 + x4)**2 + I(np.log(x1)) + I(np.log(x2)) + \"\n           \"I(np.log(x3)) + I(np.log(x4))\")\nestimates_expanded = [estimate_mean(formula) for i in range(1000)]\n\nshow_estimates(estimates_expanded)", "Adding extra covariates\nIf the model is misspecified in the sense that more covariates are included than necessary, the causal estimate remains unbiased.", "formula = \"-1 + z1 + z2 + z3 + z4 + x1 + x2 + x3 + x4\"\nestimates_redundant = [estimate_mean(formula) for i in range(1000)]\n\nshow_estimates(estimates_redundant)", "Benchmark Execution Time\nThe execution time is generally linear with respect to the sample size.", "np.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=2000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = simulation.covariates[simulation.treatment == 0]\n%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]\n\nnp.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=20000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = 
simulation.covariates[simulation.treatment == 0]\n%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]\n\nnp.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=200000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = simulation.covariates[simulation.treatment == 0]\n%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]\n\nnp.random.seed(123)\nsimulation = ec.data.kang_schafer.Simulation(size=2000000)\nx1 = simulation.covariates[simulation.treatment == 1]\nx0 = simulation.covariates[simulation.treatment == 0]\n%timeit weights = ec.maybe_exact_calibrate(x0, x1)[0]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
google-research/ott
docs/notebooks/Hessians.ipynb
apache-2.0
[ "Sinkhorn Divergence Hessians\nSamples two point clouds, computes their sinkhorn_divergence\nWe show in this colab how OTT and JAX can be used to compute automatically the Hessian of the Sinkhorn divergence w.r.t. input variables, such as weights a or locations x. Don't forget to !pip install ott-jax before running the code below.", "import jax\nimport jax.numpy as jnp\n\nimport ott\nfrom ott.tools import sinkhorn_divergence\nfrom ott.geometry import pointcloud\nimport matplotlib.pyplot as plt", "Sample two random point clouds of dimension dim", "def sample(n, m, dim):\n rngs = jax.random.split(jax.random.PRNGKey(0), 6)\n x = jax.random.uniform(rngs[0], (n, dim))\n y = jax.random.uniform(rngs[1], (m, dim))\n a = jax.random.uniform(rngs[2], (n,)) + .1\n b = jax.random.uniform(rngs[3], (m,)) + .1\n a = a / jnp.sum(a)\n b = b / jnp.sum(b)\n return a, x, b ,y\n\na, x, b, y = sample(15, 17, 3)", "As usual in JAX, we define a custom loss that outputs the quantity of interest, and is defined using relevant inputs as arguments, i.e. parameters against which we may want to differentiate. We add to a and x the implicit auxiliary flag which will be used to switch between unrolling and implicit differentiation of the Sinkhorn algorithm (see this excellent tutorial for a deep dive on their differences!)\nThe loss outputs the Sinkhorn Divergence between two point clouds.", "def loss(a, x, implicit):\n return sinkhorn_divergence.sinkhorn_divergence(\n pointcloud.PointCloud, x, y, # this part defines geometry\n a=a, b=b, # this sets weights\n sinkhorn_kwargs={'implicit_differentiation': implicit, 'use_danskin': False} # to be used by Sinkhorn algorithm.\n ).divergence", "Let's parse the three lines in the call to sinkhorn_divergence above:\n- The first one defines the point cloud geometry between x and y that will define the cost matrix. 
Here we could have added details on epsilon regularization (or scheduler), as well as alternative definitions of the cost function (here assumed by default to be squared Euclidean distance). We stick to the default setting.\n\n\nThe second one sets the respective weight vectors a and b. Those are simply two histograms of size n and m, both summing to 1, in the so-called balanced setting.\n\n\nThe third one passes on arguments to the three sinkhorn solvers that will be called, to compare x with y, x with x and y with y with their respective weights a and b. Rather than focusing on the several numerical options available to parameterize sinkhorn's behavior, we instruct JAX on how it should differentiate the outputs of the sinkhorn algorithm. The use_danskin flag specifies whether the outputted potentials should be frozen when differentiating. Since we aim for second-order differentiation here, we must set this to False (if we wanted to compute gradients, True would have resulted in faster yet almost equivalent computations).\n\n\nComputing Hessians\nLet's now plot Hessians of this output w.r.t. either a or x. \n\n\nThe Hessian w.r.t. a will be an $n \times n$ matrix, with the convention that a has size $n$. \n\n\nBecause x is itself a matrix of 3D coordinates, the Hessian w.r.t. x will be a 4D tensor of size $n \times 3 \times n \times 3$.\n\n\nTo plot both Hessians, we loop on arg 0 or 1 of loss, and plot all (or part for x) of those Hessians, to check they match:", "for arg in [0,1]:\n  # Compute Hessians using either unrolling or implicit differentiation.\n  hess_loss_imp = jax.jit(jax.hessian(lambda a, x: loss(a, x, True),\n                                      argnums=arg))\n  print('--- Time: Implicit Hessian w.r.t. ' + ('a' if arg == 0 else 'x'))\n  %timeit _ = hess_loss_imp(a, x).block_until_ready() \n  hess_imp = hess_loss_imp(a, x)\n\n  hess_loss_back = jax.jit(jax.hessian(lambda a, x: loss(a, x, False),\n                                      argnums=arg))\n  print('--- Time: Unrolled Hessian w.r.t. 
' + ('a' if arg == 0 else 'x'))\n %timeit _ = hess_loss_back(a, x).block_until_ready() \n hess_back = hess_loss_back(a, x)\n\n # Since we are solving balanced OT problems, Hessians w.r.t. weights are\n # only defined up to the orthogonal space of 1s.\n # For that reason we remove that contribution and check the\n # resulting matrices are equal.\n if arg == 0:\n hess_imp -= jnp.mean(hess_imp,axis=1)[:,None]\n hess_back -= jnp.mean(hess_back,axis=1)[:,None]\n fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 6))\n im = ax1.imshow(hess_imp if arg == 0 else hess_imp[0,0,:,:])\n ax1.set_title('Implicit Hessian w.r.t. ' + ('a' if arg == 0 else 'x (1st slice)'))\n fig.colorbar(im, ax=ax1)\n im = ax2.imshow(hess_back if arg == 0 else hess_back[0,0,:,:])\n ax2.set_title('Unrolled Hessian w.r.t. ' + ('a' if arg == 0 else 'x (1st slice)'))\n fig.colorbar(im, ax=ax2)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
LorenzoBi/courses
UQ/assignment_3/.ipynb_checkpoints/Assignment 3-checkpoint.ipynb
mit
[ "Assignment 3\nLorenzo Biasi and Michael Aichmüller", "import numpy as np\nfrom scipy.special import binom\nimport matplotlib.pylab as plt\nfrom scipy.misc import factorial as fact\n%matplotlib inline\n\ndef binomial(p, n, k):\n    return binom(n, k) * p ** k * (1 - p) ** (n-k)", "Exercise 1.\na.\n$\\Omega$ will be all the possible combinations we have for 150 objects that each take one of two different values. For example (0, 0, ..., 0), (1, 0, ..., 0), (0, 1, ..., 0), ... (1, 1, ..., 0), ... (1, 1, ..., 1). This sample space has size $2^{150}$. The random variable $X(\\omega)$ will be the number of defective objects there are in the sample $\\omega$. We can also define $Y(\\omega) = 150 - X(\\omega)$, which counts the number of non-defective items.\nb.\nThe binomial distribution is the distribution that gives the probability of the number of \"successes\" in a sequence of random and independent boolean values. This is the case for counting the number of broken objects in a group of 150, each broken with probability 4%.\nc.\nFor computing the probability that at most 4 objects are broken we need to sum the probabilities that $k$ objects are broken with $k \\in [0, 4]$.\n$P(X<5) = \\sum_{k=0}^{4} P(X=k) = \\sum_{k=0}^{4} {150\\choose k}p^k(1-p)^{150-k}$\nThe probability is 28%.", "p = 4. / 100\nnp.sum(binomial(p, 150, np.arange(5)))", "b.\nThe same as before, except that this time $k \\in [5, 9]$. The probability is 64%.", "np.sum(binomial(p, 150, np.arange(5, 10)))\n\nplt.bar(np.arange(20), binomial(p, 150, np.arange(20)))\nplt.bar(np.arange(5), binomial(p, 150, np.arange(5)))\nplt.bar(np.arange(5, 10), binomial(p, 150, np.arange(5,10)))\nplt.xlabel('# defectives')\nplt.ylabel('P(X=k)')", "Exercise 2.\nFor computing how big $q$ needs to be we can compute the probability $p^*$ that nobody has the same birthday in a group of $q$ and compute $1 - p^*$. 
The first two people will not have the same birthday with probability $364/365$, the probability that the third will also have a different birthday will be $364/365 \\cdot 363/365$. This will go on until the last person. Carrying out the computation, one finds that the minimum $q$ for which the probability that at least two people share a birthday exceeds 50% is 23, with p = 50.73%.", "def not_same_birthday(q):\n    return np.prod((365 - np.arange(q))/ 365)\n\nq = 45\np = np.empty(q - 1)\nfor i in range(1, q):\n    p[i - 1] = 1 - not_same_birthday(i)\nplt.plot(np.arange(1, q), p)\nplt.plot(23, 1 - not_same_birthday(23), 'r+', label='23 people')\nplt.grid()\nplt.ylabel('Probability')\nplt.xlabel('q')\nplt.legend()\n1 - not_same_birthday(23)", "Exercise 3.\na.\nLet's define $\\Omega$ as all the possible combinations we can have with 3 throws of a 6-sided die. $\\Omega$ will then be:", "import itertools\nx = [1, 2, 3, 4, 5, 6]\nomega = set([p for p in itertools.product(x, repeat=3)])\nprint(r'Omega has', len(omega), 'elements and they are:')\nprint(omega)", "X would be -30 when the sample $\\omega$ has no 6s, 50 when it has one, 75 when it has two, and 100 when it has three. The probability distribution of such a variable would be the binomial with $p = 1 / 6$, $n=3$ and $k$ the number of 6s.\nSo:\n$P_X(X = -30) = {3\\choose 0}(1 / 6)^0(1-1/6)^{3-0}$\n$P_X(X = 50) = {3\\choose 1}(1 / 6)^1(1-1/6)^{3-1}$\n$P_X(X = 75) = {3\\choose 2}(1 / 6)^2(1-1/6)^{3-2}$\n$P_X(X = 100) = {3\\choose 3}(1 / 6)^3(1-1/6)^{3-3}$\nb.\nI would take part in this competition: in fact, if we calculate the mean of $X$ as suggested, we obtain $\\approx$ 5.67 €.", "g = binomial(1 / 6, 3, np.arange(4)) * np.array([-30, 50, 75, 100])\nnp.sum(g)\n\nplt.bar(np.arange(4), g)\nplt.plot([-.5, 3.5], np.ones(2) * np.sum(g), 'r')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rnwatanabe/projectPR
ExampleNotebooks/IsometricClosedLoop.ipynb
gpl-3.0
[ "This notebook presents a simulation of 400 descending commands and 800 motoneurons from soleus. The force is produced by a Hill-type muscle model.", "import sys\nsys.path.insert(0, '..')\nimport time\nimport matplotlib.pyplot as plt\n%matplotlib notebook \nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('pdf', 'png')\nplt.rcParams['savefig.dpi'] = 75\n\nplt.rcParams['figure.autolayout'] = False\nplt.rcParams['figure.figsize'] = 10, 6\nplt.rcParams['axes.labelsize'] = 18\nplt.rcParams['axes.titlesize'] = 20\nplt.rcParams['font.size'] = 16\nplt.rcParams['lines.linewidth'] = 2.0\nplt.rcParams['lines.markersize'] = 8\nplt.rcParams['legend.fontsize'] = 14\n\nplt.rcParams['text.usetex'] = True\nplt.rcParams['font.family'] = \"serif\"\nplt.rcParams['font.serif'] = \"cm\"\nplt.rcParams['text.latex.preamble'] = \"\\usepackage{subdepth}, \\usepackage{type1cm}\"\n\n\nimport numpy as np\n\nfrom Configuration import Configuration\nfrom MotorUnitPool import MotorUnitPool\nfrom NeuralTract import NeuralTract\nfrom AfferentPool import AfferentPool\nfrom SynapsesFactory import SynapsesFactory\nfrom jointAnkleForceTask import jointAnkleForceTask\n\nconf = Configuration('confIsometricClosedLoop.rmto')\nconf.simDuration_ms = 2000 # Here I change simulation duration without changing the Configuration file.\nt = np.arange(0.0, conf.simDuration_ms, conf.timeStep_ms)\nGammaOrder = 10\nFR = 1000/12.0\n\npools = dict()\npools[0] = MotorUnitPool(conf, 'SOL')\npools[1] = NeuralTract(conf, 'CMExt')\npools[2] = AfferentPool(conf,'Ia', 'SOL')\nankle = jointAnkleForceTask(conf, pools)\nSyn = SynapsesFactory(conf, pools)\ndel Syn\n\nIaFR = np.zeros([len(t), 1])\ntic = time.time()\nfor i in xrange(0,len(t)-1): \n    ankle.atualizeAnkle(t[i], 0.1 * np.sin(2*np.pi*t[i]/1000.0))\n    pools[1].atualizePool(t[i], FR, GammaOrder)\n    pools[0].atualizeMotorUnitPool(t[i])\n    pools[2].atualizeAfferentPool(t[i], pools[0].spindle.IaFR_Hz)\n    ankle.computeTorque(t[i])\n    IaFR[i] = 
pools[0].spindle.IaFR_Hz\ntoc = time.time()\nprint str(toc - tic) + ' seconds'\n\npools[0].listSpikes()\npools[1].listSpikes()\npools[2].listSpikes()", "The spike times of the MNs along the simulation are shown in Fig. \\ref{fig:spikesMNHill}.", "plt.figure()\nplt.plot(pools[0].poolTerminalSpikes[:, 0],\n         pools[0].poolTerminalSpikes[:, 1]+1, '.')\nplt.xlabel('t (ms)')\nplt.ylabel('Motor Unit index')\n\nplt.figure()\nplt.plot(pools[2].poolTerminalSpikes[:, 0],\n         pools[2].poolTerminalSpikes[:, 1]+1, '.')\nplt.xlabel('t (ms)')\nplt.ylabel('Afferent index')", "The muscle force produced by the Hill-type model is shown in Fig.\\ref{fig:forceHill}.", "plt.figure()\nplt.plot(t, pools[0].Muscle.force, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Muscle Force (N)')", "The muscle length computed with the Hill-type model is shown in Fig.\\ref{fig:lengthHill}.", "plt.figure()\nplt.plot(t, pools[0].Muscle.length_m/pools[0].Muscle.optimalLength_m, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Muscle Length (m)')", "The muscle velocity, computed by the Hill-type muscle model, is in Fig.\\ref{fig:velocityHill}.", "plt.figure()\nplt.plot(t, pools[0].Muscle.velocity_m_ms/pools[0].Muscle.optimalLength_m, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Muscle Velocity (m/ms)')", "The ankle joint angle is shown in Fig. \\ref{fig:ankleAngleHill}.", "plt.figure()\nplt.plot(t, ankle.ankleAngle_rad*180.0/np.pi, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Ankle angle (degree)')\n\nplt.figure()\nplt.plot(t, IaFR, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Ia afferent firing rate (Hz)')\n\nplt.figure()\nplt.plot(t, ankle.ankleTorque_Nm, '-')\nplt.xlabel('t (ms)')\nplt.ylabel('Ankle torque (Nm)')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fastai/fastai
nbs/37_text.learner.ipynb
apache-2.0
[ "#|hide\n#|skip\n! [ -e /content ] && pip install -Uqq fastai # upgrade fastai on colab\n\n#|export\nfrom __future__ import annotations\nfrom fastai.basics import *\nfrom fastai.text.core import *\nfrom fastai.text.data import *\nfrom fastai.text.models.core import *\nfrom fastai.text.models.awdlstm import *\nfrom fastai.callback.rnn import *\nfrom fastai.callback.progress import *\n\n#|hide\nfrom nbdev.showdoc import *\n\n#|default_exp text.learner", "Learner for the text application\n\nAll the functions necessary to build Learner suitable for transfer learning in NLP\n\nThe most important functions of this module are language_model_learner and text_classifier_learner. They will help you define a Learner using a pretrained model. See the text tutorial for examples of use.\nLoading a pretrained model\nIn text, to load a pretrained model, we need to adapt the embeddings of the vocabulary used for the pre-training to the vocabulary of our current corpus.", "#|export\ndef match_embeds(\n old_wgts:dict, # Embedding weights \n old_vocab:list, # Vocabulary of corpus used for pre-training\n new_vocab:list # Current corpus vocabulary\n) -> dict:\n \"Convert the embedding in `old_wgts` to go from `old_vocab` to `new_vocab`.\"\n bias, wgts = old_wgts.get('1.decoder.bias', None), old_wgts['0.encoder.weight']\n wgts_m = wgts.mean(0)\n new_wgts = wgts.new_zeros((len(new_vocab),wgts.size(1)))\n if bias is not None:\n bias_m = bias.mean(0)\n new_bias = bias.new_zeros((len(new_vocab),))\n old_o2i = old_vocab.o2i if hasattr(old_vocab, 'o2i') else {w:i for i,w in enumerate(old_vocab)}\n for i,w in enumerate(new_vocab):\n idx = old_o2i.get(w, -1)\n new_wgts[i] = wgts[idx] if idx>=0 else wgts_m\n if bias is not None: new_bias[i] = bias[idx] if idx>=0 else bias_m\n old_wgts['0.encoder.weight'] = new_wgts\n if '0.encoder_dp.emb.weight' in old_wgts: old_wgts['0.encoder_dp.emb.weight'] = new_wgts.clone()\n old_wgts['1.decoder.weight'] = new_wgts.clone()\n if bias is not None: 
old_wgts['1.decoder.bias'] = new_bias\n return old_wgts", "For words in new_vocab that don't have a corresponding match in old_vocab, we use the mean of all pretrained embeddings.", "wgts = {'0.encoder.weight': torch.randn(5,3)}\nnew_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])\nold,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']\ntest_eq(new[0], old[0])\ntest_eq(new[1], old[2])\ntest_eq(new[2], old.mean(0))\ntest_eq(new[3], old[1])\n\n#|hide\n#With bias\nwgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)}\nnew_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])\nold_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']\nold_b,new_b = wgts['1.decoder.bias'], new_wgts['1.decoder.bias']\ntest_eq(new_w[0], old_w[0])\ntest_eq(new_w[1], old_w[2])\ntest_eq(new_w[2], old_w.mean(0))\ntest_eq(new_w[3], old_w[1])\ntest_eq(new_b[0], old_b[0])\ntest_eq(new_b[1], old_b[2])\ntest_eq(new_b[2], old_b.mean(0))\ntest_eq(new_b[3], old_b[1])\n\n#|export\ndef _get_text_vocab(dls:DataLoaders) -> list:\n \"Get vocabulary from `DataLoaders`\"\n vocab = dls.vocab\n if isinstance(vocab, L): vocab = vocab[0]\n return vocab\n\n#|export\ndef load_ignore_keys(\n model, # Model architecture\n wgts:dict # Model weights\n) -> tuple:\n \"Load `wgts` in `model` ignoring the names of the keys, just taking parameters in order\"\n sd = model.state_dict()\n for k1,k2 in zip(sd.keys(), wgts.keys()): sd[k1].data = wgts[k2].data.clone()\n return model.load_state_dict(sd)\n\n#|export\ndef _rm_module(n:str):\n t = n.split('.')\n for i in range(len(t)-1, -1, -1):\n if t[i] == 'module':\n t.pop(i)\n break\n return '.'.join(t)\n\n#|export\n#For previous versions compatibility, remove for release\ndef clean_raw_keys(wgts:dict):\n keys = list(wgts.keys())\n for k in keys:\n t = k.split('.module')\n if f'{_rm_module(k)}_raw' in keys: del wgts[k]\n return wgts\n\n#|export\n#For previous versions compatibility, remove 
for release\ndef load_model_text(\n file:str, # File name of saved text model\n model, # Model architecture\n opt:Optimizer, # `Optimizer` used to fit the model\n with_opt:bool=None, # Enable to load `Optimizer` state\n device:(int,str,torch.device)=None, # Sets the device, uses 'cpu' if unspecified\n strict:bool=True # Whether to strictly enforce the keys of `file`s state dict match with the model `Module.state_dict`\n):\n \"Load `model` from `file` along with `opt` (if available, and if `with_opt`)\"\n distrib_barrier()\n if isinstance(device, int): device = torch.device('cuda', device)\n elif device is None: device = 'cpu'\n state = torch.load(file, map_location=device)\n hasopt = set(state)=={'model', 'opt'}\n model_state = state['model'] if hasopt else state\n get_model(model).load_state_dict(clean_raw_keys(model_state), strict=strict)\n if hasopt and ifnone(with_opt,True):\n try: opt.load_state_dict(state['opt'])\n except:\n if with_opt: warn(\"Could not load the optimizer state.\")\n elif with_opt: warn(\"Saved filed doesn't contain an optimizer state.\")\n\n#|export\n@delegates(Learner.__init__)\nclass TextLearner(Learner):\n \"Basic class for a `Learner` in NLP.\"\n def __init__(self, \n dls:DataLoaders, # Text `DataLoaders`\n model, # A standard PyTorch model\n alpha:float=2., # Param for `RNNRegularizer`\n beta:float=1., # Param for `RNNRegularizer`\n moms:tuple=(0.8,0.7,0.8), # Momentum for `Cosine Annealing Scheduler`\n **kwargs\n ):\n super().__init__(dls, model, moms=moms, **kwargs)\n self.add_cbs(rnn_cbs())\n\n def save_encoder(self, \n file:str # Filename for `Encoder` \n ):\n \"Save the encoder to `file` in the model directory\"\n if rank_distrib(): return # don't save if child proc\n encoder = get_model(self.model)[0]\n if hasattr(encoder, 'module'): encoder = encoder.module\n torch.save(encoder.state_dict(), join_path_file(file, self.path/self.model_dir, ext='.pth'))\n\n def load_encoder(self, \n file:str, # Filename of the saved encoder \n 
device:(int,str,torch.device)=None # Device used to load, defaults to `dls` device\n ):\n \"Load the encoder `file` from the model directory, optionally ensuring it's on `device`\"\n encoder = get_model(self.model)[0]\n if device is None: device = self.dls.device\n if hasattr(encoder, 'module'): encoder = encoder.module\n distrib_barrier()\n wgts = torch.load(join_path_file(file,self.path/self.model_dir, ext='.pth'), map_location=device)\n encoder.load_state_dict(clean_raw_keys(wgts))\n self.freeze()\n return self\n\n def load_pretrained(self, \n wgts_fname:str, # Filename of saved weights \n vocab_fname:str, # Saved vocabulary filename in pickle format\n model=None # Model to load parameters from, defaults to `Learner.model`\n ):\n \"Load a pretrained model and adapt it to the data vocabulary.\"\n old_vocab = load_pickle(vocab_fname)\n new_vocab = _get_text_vocab(self.dls)\n distrib_barrier()\n wgts = torch.load(wgts_fname, map_location = lambda storage,loc: storage)\n if 'model' in wgts: wgts = wgts['model'] #Just in case the pretrained model was saved with an optimizer\n wgts = match_embeds(wgts, old_vocab, new_vocab)\n load_ignore_keys(self.model if model is None else model, clean_raw_keys(wgts))\n self.freeze()\n return self\n\n #For previous versions compatibility. Remove at release\n @delegates(load_model_text)\n def load(self, \n file:str, # Filename of saved model \n with_opt:bool=None, # Enable to load `Optimizer` state\n device:(int,str,torch.device)=None, # Device used to load, defaults to `dls` device\n **kwargs\n ):\n if device is None: device = self.dls.device\n if self.opt is None: self.create_opt()\n file = join_path_file(file, self.path/self.model_dir, ext='.pth')\n load_model_text(file, self.model, self.opt, device=device, **kwargs)\n return self", "Adds a ModelResetter and an RNNRegularizer with alpha and beta to the callbacks, the rest is the same as Learner init. 
\nThis Learner adds functionality to the base class:", "show_doc(TextLearner.load_pretrained)", "wgts_fname should point to the weights of the pretrained model and vocab_fname to the vocabulary used to pretrain it.", "show_doc(TextLearner.save_encoder)", "The model directory is Learner.path/Learner.model_dir.", "show_doc(TextLearner.load_encoder)", "Language modeling predictions\nFor language modeling, the predict method is quite different from the other applications, which is why it needs its own subclass.", "#|export\ndef decode_spec_tokens(tokens):\n \"Decode the special tokens in `tokens`\"\n new_toks,rule,arg = [],None,None\n for t in tokens:\n if t in [TK_MAJ, TK_UP, TK_REP, TK_WREP]: rule = t\n elif rule is None: new_toks.append(t)\n elif rule == TK_MAJ:\n new_toks.append(t[:1].upper() + t[1:].lower())\n rule = None\n elif rule == TK_UP:\n new_toks.append(t.upper())\n rule = None\n elif arg is None:\n try: arg = int(t)\n except: rule = None\n else:\n if rule == TK_REP: new_toks.append(t * arg)\n else: new_toks += [t] * arg\n return new_toks\n\ntest_eq(decode_spec_tokens(['xxmaj', 'text']), ['Text'])\ntest_eq(decode_spec_tokens(['xxup', 'text']), ['TEXT'])\ntest_eq(decode_spec_tokens(['xxrep', '3', 'a']), ['aaa'])\ntest_eq(decode_spec_tokens(['xxwrep', '3', 'word']), ['word', 'word', 'word'])\n\n#|export\nclass LMLearner(TextLearner):\n \"Add functionality to `TextLearner` when dealing with a language model\"\n def predict(self, text, n_words=1, no_unk=True, temperature=1., min_p=None, no_bar=False,\n decoder=decode_spec_tokens, only_last_word=False):\n \"Return `text` and the `n_words` that come after\"\n self.model.reset()\n idxs = idxs_all = self.dls.test_dl([text]).items[0].to(self.dls.device)\n if no_unk: unk_idx = self.dls.vocab.index(UNK)\n for _ in (range(n_words) if no_bar else progress_bar(range(n_words), leave=False)):\n with self.no_bar(): preds,_ = self.get_preds(dl=[(idxs[None],)])\n res = preds[0][-1]\n if no_unk: res[unk_idx] = 0.\n if min_p 
is not None:\n if (res >= min_p).float().sum() == 0:\n warn(f\"There is no item with probability >= {min_p}, try a lower value.\")\n else: res[res < min_p] = 0.\n if temperature != 1.: res.pow_(1 / temperature)\n idx = torch.multinomial(res, 1).item()\n idxs = idxs_all = torch.cat([idxs_all, idxs.new([idx])])\n if only_last_word: idxs = idxs[-1][None]\n\n num = self.dls.train_ds.numericalize\n tokens = [num.vocab[i] for i in idxs_all if num.vocab[i] not in [BOS, PAD]]\n sep = self.dls.train_ds.tokenizer.sep\n return sep.join(decoder(tokens))\n\n @delegates(Learner.get_preds)\n def get_preds(self, concat_dim=1, **kwargs): return super().get_preds(concat_dim=1, **kwargs)\n\nshow_doc(LMLearner, title_level=3)\n\nshow_doc(LMLearner.predict)", "The words are picked randomly among the predictions, depending on the probability of each index. no_unk means we never pick the UNK token, temperature is applied to the predictions, if min_p is passed, we don't consider the indices with a probability lower than it. 
Set no_bar to True if you don't want any progress bar, and you can pass along a custom decoder to process the predicted tokens.\nLearner convenience functions", "#|export\nfrom fastai.text.models.core import _model_meta\n\n#|export\ndef _get_text_vocab(dls):\n vocab = dls.vocab\n if isinstance(vocab, L): vocab = vocab[0]\n return vocab\n\n#|export\n@delegates(Learner.__init__)\ndef language_model_learner(dls, arch, config=None, drop_mult=1., backwards=False, pretrained=True, pretrained_fnames=None, **kwargs):\n \"Create a `Learner` with a language model from `dls` and `arch`.\"\n vocab = _get_text_vocab(dls)\n model = get_language_model(arch, len(vocab), config=config, drop_mult=drop_mult)\n meta = _model_meta[arch]\n learn = LMLearner(dls, model, loss_func=CrossEntropyLossFlat(), splitter=meta['split_lm'], **kwargs)\n url = 'url_bwd' if backwards else 'url'\n if pretrained or pretrained_fnames:\n if pretrained_fnames is not None:\n fnames = [learn.path/learn.model_dir/f'{fn}.{ext}' for fn,ext in zip(pretrained_fnames, ['pth', 'pkl'])]\n else:\n if url not in meta:\n warn(\"There are no pretrained weights for that architecture yet!\")\n return learn\n model_path = untar_data(meta[url] , c_key='model')\n try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]\n except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise\n learn = learn.load_pretrained(*fnames)\n return learn", "You can use the config to customize the architecture used (change 
All other arguments are passed to Learner.", "path = untar_data(URLs.IMDB_SAMPLE)\ndf = pd.read_csv(path/'texts.csv')\ndls = TextDataLoaders.from_df(df, path=path, text_col='text', is_lm=True, valid_col='is_valid')\nlearn = language_model_learner(dls, AWD_LSTM)", "You can then use the .predict method to generate new text.", "learn.predict('This movie is about', n_words=20)", "By default the entire sentence is fed again to the model after each predicted word, this little trick shows an improvement on the quality of the generated text. If you want to feed only the last word, specify argument only_last_word.", "learn.predict('This movie is about', n_words=20, only_last_word=True)\n\n#|export\n@delegates(Learner.__init__)\ndef text_classifier_learner(dls, arch, seq_len=72, config=None, backwards=False, pretrained=True, drop_mult=0.5, n_out=None,\n lin_ftrs=None, ps=None, max_len=72*20, y_range=None, **kwargs):\n \"Create a `Learner` with a text classifier from `dls` and `arch`.\"\n vocab = _get_text_vocab(dls)\n if n_out is None: n_out = get_c(dls)\n assert n_out, \"`n_out` is not defined, and could not be inferred from data, set `dls.c` or pass `n_out`\"\n model = get_text_classifier(arch, len(vocab), n_out, seq_len=seq_len, config=config, y_range=y_range,\n drop_mult=drop_mult, lin_ftrs=lin_ftrs, ps=ps, max_len=max_len)\n meta = _model_meta[arch]\n learn = TextLearner(dls, model, splitter=meta['split_clas'], **kwargs)\n url = 'url_bwd' if backwards else 'url'\n if pretrained:\n if url not in meta:\n warn(\"There are no pretrained weights for that architecture yet!\")\n return learn\n model_path = untar_data(meta[url], c_key='model')\n try: fnames = [list(model_path.glob(f'*.{ext}'))[0] for ext in ['pth', 'pkl']]\n except IndexError: print(f'The model in {model_path} is incomplete, download again'); raise\n learn = learn.load_pretrained(*fnames, model=learn.model[0])\n learn.freeze()\n return learn", "You can use the config to customize the architecture used (change 
the values from awd_lstm_clas_config for this), pretrained will use fastai's pretrained model for this arch (if available). drop_mult is a global multiplier applied to control all dropouts. n_out is usually inferred from the dls but you may pass it.\nThe model uses a SentenceEncoder, which means the texts are passed seq_len tokens at a time, and will only compute the gradients on the last max_len steps. lin_ftrs and ps are passed to get_text_classifier.\nAll other arguments are passed to Learner.", "path = untar_data(URLs.IMDB_SAMPLE)\ndf = pd.read_csv(path/'texts.csv')\ndls = TextDataLoaders.from_df(df, path=path, text_col='text', label_col='label', valid_col='is_valid')\nlearn = text_classifier_learner(dls, AWD_LSTM)", "Show methods -", "#|export\n@typedispatch\ndef show_results(x: LMTensorText, y, samples, outs, ctxs=None, max_n=10, **kwargs):\n if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))\n for i,l in enumerate(['input', 'target']):\n ctxs = [b.show(ctx=c, label=l, **kwargs) for b,c,_ in zip(samples.itemgot(i),ctxs,range(max_n))]\n ctxs = [b.show(ctx=c, label='pred', **kwargs) for b,c,_ in zip(outs.itemgot(0),ctxs,range(max_n))]\n display_df(pd.DataFrame(ctxs))\n return ctxs\n\n#|export\n@typedispatch\ndef show_results(x: TensorText, y, samples, outs, ctxs=None, max_n=10, trunc_at=150, **kwargs):\n if ctxs is None: ctxs = get_empty_df(min(len(samples), max_n))\n samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)\n ctxs = show_results[object](x, y, samples, outs, ctxs=ctxs, max_n=max_n, **kwargs)\n display_df(pd.DataFrame(ctxs))\n return ctxs\n\n#|export\n@typedispatch\ndef plot_top_losses(x: TensorText, y:TensorCategory, samples, outs, raws, losses, trunc_at=150, **kwargs):\n rows = get_empty_df(len(samples))\n samples = L((s[0].truncate(trunc_at),*s[1:]) for s in samples)\n for i,l in enumerate(['input', 'target']):\n rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(samples.itemgot(i),rows)]\n outs = L(o + 
(TitledFloat(r.max().item()), TitledFloat(l.item())) for o,r,l in zip(outs, raws, losses))\n for i,l in enumerate(['predicted', 'probability', 'loss']):\n rows = [b.show(ctx=c, label=l, **kwargs) for b,c in zip(outs.itemgot(i),rows)]\n display_df(pd.DataFrame(rows))", "Export -", "#|hide\nfrom nbdev.export import notebook2script\nnotebook2script()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
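Editorial note on the fastai notebook above: the vocabulary re-mapping done by `match_embeds` can be illustrated with a dependency-free sketch. `match_embeds_sketch` below is a hypothetical plain-Python stand-in (lists of floats instead of torch tensors), written here for illustration only — it is not fastai's implementation:

```python
def match_embeds_sketch(old_wgts, old_vocab, new_vocab):
    """Re-map embedding rows from old_vocab order to new_vocab order.

    Words missing from old_vocab fall back to the column-wise mean of all
    pretrained rows, mirroring the fallback described for `match_embeds`.
    """
    dim = len(old_wgts[0])
    mean_row = [sum(row[d] for row in old_wgts) / len(old_wgts) for d in range(dim)]
    o2i = {w: i for i, w in enumerate(old_vocab)}  # word -> row index lookup
    return [old_wgts[o2i[w]] if w in o2i else mean_row for w in new_vocab]

# 'b' keeps its pretrained row; 'c' was never seen, so it gets the mean row
new = match_embeds_sketch([[1.0, 2.0], [3.0, 4.0]], ['a', 'b'], ['b', 'c'])
```

As in the notebook's tests, words present in the old vocabulary keep their pretrained embedding and unseen words receive the mean embedding.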
nbokulich/short-read-tax-assignment
ipynb/mock-community/taxonomy-assignment-qiime1.ipynb
bsd-3-clause
[ "Data generation: using python to sweep over methods and parameters\nIn this notebook, we illustrate how to use python to generate and run a list of commands. In this example, we generate a list of QIIME 1.9.0 assign_taxonomy.py commands, though this workflow for command generation is generally very useful for performing parameter sweeps (i.e., exploration of sets of parameters for achieving a specific result for comparative purposes). \nEnvironment preparation", "from os.path import join, expandvars \nfrom joblib import Parallel, delayed\nfrom glob import glob\nfrom os import system\nfrom tax_credit.framework_functions import (parameter_sweep,\n generate_per_method_biom_tables,\n move_results_to_repository)\n\n\nproject_dir = expandvars(\"$HOME/Desktop/projects/short-read-tax-assignment\")\nanalysis_name= \"mock-community\"\ndata_dir = join(project_dir, \"data\", analysis_name)\n\nreference_database_dir = expandvars(\"$HOME/Desktop/ref_dbs/\")\nresults_dir = expandvars(\"$HOME/Desktop/projects/mock-community/\")", "Preparing data set sweep\nFirst, we're going to define the data sets that we'll sweep over. 
The following cell does not need to be modified unless you wish to change the datasets or reference databases used in the sweep.", "dataset_reference_combinations = [\n ('mock-1', 'gg_13_8_otus'), # formerly S16S-1\n ('mock-2', 'gg_13_8_otus'), # formerly S16S-2\n ('mock-3', 'gg_13_8_otus'), # formerly Broad-1\n ('mock-4', 'gg_13_8_otus'), # formerly Broad-2\n ('mock-5', 'gg_13_8_otus'), # formerly Broad-3\n ('mock-6', 'gg_13_8_otus'), # formerly Turnbaugh-1\n ('mock-7', 'gg_13_8_otus'), # formerly Turnbaugh-2\n ('mock-8', 'gg_13_8_otus'), # formerly Turnbaugh-3\n ('mock-9', 'unite_20.11.2016_clean_fullITS'), # formerly ITS1\n ('mock-10', 'unite_20.11.2016_clean_fullITS'), # formerly ITS2-SAG\n ('mock-12', 'gg_13_8_otus'), # Extreme\n ('mock-13', 'gg_13_8_otus_full16S'), # kozich-1\n ('mock-14', 'gg_13_8_otus_full16S'), # kozich-2\n ('mock-15', 'gg_13_8_otus_full16S'), # kozich-3\n ('mock-16', 'gg_13_8_otus'), # schirmer-1\n ('mock-18', 'gg_13_8_otus'),\n ('mock-19', 'gg_13_8_otus'),\n ('mock-20', 'gg_13_8_otus'),\n ('mock-21', 'gg_13_8_otus'),\n ('mock-22', 'gg_13_8_otus'),\n ('mock-23', 'gg_13_8_otus'),\n ('mock-24', 'unite_20.11.2016_clean_fullITS'),\n ('mock-25', 'unite_20.11.2016_clean_fullITS'),\n ('mock-26-ITS1', 'unite_20.11.2016_clean_fullITS'),\n ('mock-26-ITS9', 'unite_20.11.2016_clean_fullITS'),\n]\n\nreference_dbs = {'gg_13_8_otus' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus/dna-sequences.fasta'), \n join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.txt')),\n 'gg_13_8_otus_full16S' : (join(reference_database_dir, 'gg_13_8_otus/rep_set/99_otus.fasta'), \n join(reference_database_dir, 'gg_13_8_otus/taxonomy/99_otu_taxonomy.txt')),\n 'unite_20.11.2016_clean_fullITS' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_clean.fasta'), \n join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev_clean.tsv')),\n 
'unite_20.11.2016' : (join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_refs_qiime_ver7_99_20.11.2016_dev_ITS1Ff-ITS2r_trim250/dna-sequences.fasta'), \n join(reference_database_dir, 'sh_qiime_release_20.11.2016/developer/sh_taxonomy_qiime_ver7_99_20.11.2016_dev.txt'))}", "Preparing the method/parameter combinations and generating commands\nNow we set the methods and method-specific parameters that we want to sweep. Modify to sweep other methods. Note how method_parameters_combinations feeds method/parameter combinations to parameter_sweep() in the cell below.\nAssignment Using QIIME 1 or Command-Line Classifiers\nHere we provide an example of taxonomy assignment using legacy QIIME 1 classifiers executed on the command line. To accomplish this, we must first convert commands to a string, which we then pass to bash for execution. As QIIME 1 is written in Python 2, we must also activate a separate environment in which QIIME 1 has been installed. If any environment variables need to be set (in this example, the RDP_JAR_PATH), we must also source the .bashrc file.", "method_parameters_combinations = { # probabilistic classifiers\n 'rdp': {'confidence': [0.0, 0.1, 0.2, 0.3, 0.4, 0.5,\n 0.6, 0.7, 0.8, 0.9, 1.0]},\n \n # global alignment classifiers\n 'uclust': {'min_consensus_fraction': [0.51, 0.76, 1.0], \n 'similarity': [0.9, 0.97, 0.99],\n 'uclust_max_accepts': [1, 3, 5]},\n \n # local alignment classifiers\n 'sortmerna': {'sortmerna_e_value': [1.0],\n 'min_consensus_fraction': [0.51, 1.0], \n 'similarity': [0.9, 0.99],\n 'sortmerna_best_N_alignments ': [1, 5],\n 'sortmerna_coverage' : [0.9]},\n 'blast' : {'blast_e_value' : [0.0000000001, 1000]}\n }", "Now enter the template of the command to sweep, and generate a list of commands with parameter_sweep().\nFields must adhere to the following format:\n {0} = output directory\n {1} = input data\n {2} = reference sequences\n {3} = reference taxonomy\n {4} = method name\n {5} = other parameters", 
"command_template = \"source activate qiime1; source ~/.bashrc; mkdir -p {0} ; assign_taxonomy.py -v -i {1} -o {0} -r {2} -t {3} -m {4} {5} --rdp_max_memory 8000\"\n \ncommands = parameter_sweep(data_dir, results_dir, reference_dbs,\n dataset_reference_combinations,\n method_parameters_combinations, command_template,\n infile='rep_seqs.fna', output_name='rep_seqs_tax_assignments.txt')\n", "As a sanity check, we can look at the first command that was generated and the number of commands generated.", "print(len(commands))\ncommands[0]", "Finally, we run our commands.", "Parallel(n_jobs=1)(delayed(system)(command) for command in commands)", "Generate per-method biom tables\nModify the taxonomy_glob below to point to the taxonomy assignments that were generated above. This may be necessary if filepaths were altered in the preceding cells.", "taxonomy_glob = join(results_dir, '*', '*', '*', '*', 'rep_seqs_tax_assignments.txt')\ngenerate_per_method_biom_tables(taxonomy_glob, data_dir)", "Move result files to repository\nAdd results to the short-read-taxa-assignment directory (e.g., to push these results to the repository or compare with other precomputed results in downstream analysis steps). The precomputed_results_dir path and method_dirs glob below should not need to be changed unless substantial changes were made to filepaths in the preceding cells.", "precomputed_results_dir = join(project_dir, \"data\", \"precomputed-results\", analysis_name)\nmethod_dirs = glob(join(results_dir, '*', '*', '*', '*'))\nmove_results_to_repository(method_dirs, precomputed_results_dir)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
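Editorial note on the QIIME notebook above: at its core, the `parameter_sweep` step is a Cartesian-product expansion of each method's parameter grid into one shell command per combination. The sketch below illustrates that idea with `itertools.product` — `sweep_commands` and its simplified `{method}`/`{opts}` template are hypothetical stand-ins for illustration, not the tax_credit API:

```python
from itertools import product

def sweep_commands(template, method_parameters_combinations):
    """Expand each method's parameter grid into one shell command per combination."""
    commands = []
    for method, params in sorted(method_parameters_combinations.items()):
        names = sorted(params)  # stable option order for reproducible commands
        for values in product(*(params[n] for n in names)):
            opts = " ".join(f"--{n} {v}" for n, v in zip(names, values))
            commands.append(template.format(method=method, opts=opts))
    return commands

# two parameter values for 'blast' -> two commands
cmds = sweep_commands("assign_taxonomy.py -m {method} {opts}",
                      {"blast": {"blast_e_value": [1e-10, 1000]}})
```

The number of generated commands is the product of the grid sizes for each method, summed over methods — which is why the notebook prints `len(commands)` as a sanity check.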
Naereen/notebooks
agreg/Droite Discrète - public 2012 D5.ipynb
mit
[ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Titre-de-l'oral-:-«-Comment-dessiner-des-droites-sur-un-écran-d'ordinateur-?-»\" data-toc-modified-id=\"Titre-de-l'oral-:-«-Comment-dessiner-des-droites-sur-un-écran-d'ordinateur-?-»-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Titre de l'oral : « <em>Comment dessiner des droites sur un écran d'ordinateur ?</em> »</a></span><ul class=\"toc-item\"><li><span><a href=\"#Texte-d'oral-de-modélisation,-agrég-maths-option-D-&quot;Droite-Discrète&quot;-(public-2012-D5)\" data-toc-modified-id=\"Texte-d'oral-de-modélisation,-agrég-maths-option-D-&quot;Droite-Discrète&quot;-(public-2012-D5)-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Texte d'oral de modélisation, agrég maths option D \"Droite Discrète\" (public 2012 D5)</a></span></li><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Introduction</a></span></li></ul></li><li><span><a href=\"#Tracé-de-droites,-formalisation\" data-toc-modified-id=\"Tracé-de-droites,-formalisation-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Tracé de droites, formalisation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Formalisation,-et-hypothèses\" data-toc-modified-id=\"Formalisation,-et-hypothèses-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Formalisation, et hypothèses</a></span></li><li><span><a href=\"#Présentation-du-problème\" data-toc-modified-id=\"Présentation-du-problème-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Présentation du problème</a></span></li><li><span><a href=\"#Généralisation-au-delà-de-ces-hypothèses-?\" data-toc-modified-id=\"Généralisation-au-delà-de-ces-hypothèses-?-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Généralisation au delà de ces hypothèses ?</a></span></li></ul></li><li><span><a 
href=\"#Trois-méthodes-différentes-:-algorithmes,-implémentations,-exemples\" data-toc-modified-id=\"Trois-méthodes-différentes-:-algorithmes,-implémentations,-exemples-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Trois méthodes différentes : algorithmes, implémentations, exemples</a></span><ul class=\"toc-item\"><li><span><a href=\"#Implémentation-de-l'exercice-de-programmation\" data-toc-modified-id=\"Implémentation-de-l'exercice-de-programmation-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Implémentation de l'exercice de programmation</a></span><ul class=\"toc-item\"><li><span><a href=\"#Un-bidouillage-avec-Python-:-un-&quot;wrapper&quot;-qui-gère-les-symétries,-pour-ne-pas-avoir-à-les-coder-plusieurs-fois\" data-toc-modified-id=\"Un-bidouillage-avec-Python-:-un-&quot;wrapper&quot;-qui-gère-les-symétries,-pour-ne-pas-avoir-à-les-coder-plusieurs-fois-3.1.1\"><span class=\"toc-item-num\">3.1.1&nbsp;&nbsp;</span>Un bidouillage avec Python : un \"wrapper\" qui gère les symétries, pour ne pas avoir à les coder plusieurs fois</a></span></li><li><span><a href=\"#Exemples-:-deux-premiers-quadrants,-deux-premières-demi-droites\" data-toc-modified-id=\"Exemples-:-deux-premiers-quadrants,-deux-premières-demi-droites-3.1.2\"><span class=\"toc-item-num\">3.1.2&nbsp;&nbsp;</span>Exemples : deux premiers quadrants, deux premières demi-droites</a></span></li><li><span><a href=\"#Autres-exemples-:-six-autre-premiers-quadrants,-deux-autres-demi-droites\" data-toc-modified-id=\"Autres-exemples-:-six-autre-premiers-quadrants,-deux-autres-demi-droites-3.1.3\"><span class=\"toc-item-num\">3.1.3&nbsp;&nbsp;</span>Autres exemples : six autre premiers quadrants, deux autres demi-droites</a></span></li></ul></li><li><span><a href=\"#Deux-autres-méthodes\" data-toc-modified-id=\"Deux-autres-méthodes-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Deux autres méthodes</a></span><ul class=\"toc-item\"><li><span><a href=\"#Longer-au-plus-près-inférieurement\" 
data-toc-modified-id=\"Longer-au-plus-près-inférieurement-3.2.1\"><span class=\"toc-item-num\">3.2.1&nbsp;&nbsp;</span>Longer au plus près inférieurement</a></span></li><li><span><a href=\"#Longer-au-plus-près-supérieurement\" data-toc-modified-id=\"Longer-au-plus-près-supérieurement-3.2.2\"><span class=\"toc-item-num\">3.2.2&nbsp;&nbsp;</span>Longer au plus près supérieurement</a></span></li><li><span><a href=\"#Une-dernière-méthode-?\" data-toc-modified-id=\"Une-dernière-méthode-?-3.2.3\"><span class=\"toc-item-num\">3.2.3&nbsp;&nbsp;</span>Une dernière méthode ?</a></span></li><li><span><a href=\"#Exemples-:-deux-premiers-quadrants,-deux-premières-demi-droites\" data-toc-modified-id=\"Exemples-:-deux-premiers-quadrants,-deux-premières-demi-droites-3.2.4\"><span class=\"toc-item-num\">3.2.4&nbsp;&nbsp;</span>Exemples : deux premiers quadrants, deux premières demi-droites</a></span></li><li><span><a href=\"#Autres-exemples-:-six-autre-premiers-quadrants,-deux-autres-demi-droites\" data-toc-modified-id=\"Autres-exemples-:-six-autre-premiers-quadrants,-deux-autres-demi-droites-3.2.5\"><span class=\"toc-item-num\">3.2.5&nbsp;&nbsp;</span>Autres exemples : six autre premiers quadrants, deux autres demi-droites</a></span></li></ul></li><li><span><a href=\"#Visualisation-?-[si-le-temps]\" data-toc-modified-id=\"Visualisation-?-[si-le-temps]-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Visualisation ? 
<em>[si le temps]</em></a></span><ul class=\"toc-item\"><li><span><a href=\"#Exemples-manuels-:\" data-toc-modified-id=\"Exemples-manuels-:-3.3.1\"><span class=\"toc-item-num\">3.3.1&nbsp;&nbsp;</span>Exemples manuels :</a></span></li><li><span><a href=\"#Comparaison-graphique-des-trois-méthodes-:\" data-toc-modified-id=\"Comparaison-graphique-des-trois-méthodes-:-3.3.2\"><span class=\"toc-item-num\">3.3.2&nbsp;&nbsp;</span>Comparaison graphique des trois méthodes :</a></span></li><li><span><a href=\"#Meilleur-visualisation-avec-quatre-sous-figures-?\" data-toc-modified-id=\"Meilleur-visualisation-avec-quatre-sous-figures-?-3.3.3\"><span class=\"toc-item-num\">3.3.3&nbsp;&nbsp;</span>Meilleur visualisation avec quatre sous-figures ?</a></span></li></ul></li></ul></li><li><span><a href=\"#Conclusion-de-l'oral\" data-toc-modified-id=\"Conclusion-de-l'oral-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Conclusion de l'oral</a></span><ul class=\"toc-item\"><li><span><a href=\"#Autres-pistes-:\" data-toc-modified-id=\"Autres-pistes-:-4.1\"><span class=\"toc-item-num\">4.1&nbsp;&nbsp;</span>Autres pistes :</a></span></li></ul></li></ul></div>\n\nTitre de l'oral : « Comment dessiner des droites sur un écran d'ordinateur ? »\nTexte d'oral de modélisation, agrég maths option D \"Droite Discrète\" (public 2012 D5)\n\nVoir : cette page et ce texte (public2012-D5.pdf) :\n2012-D5 À partir du problème de la représentation des droites sur un écran d’ordinateur, on étudie la notion de droite discrète, et le mot binaire périodique associé. On met en évidence la relation biunivoque entre droites discrètes et certains mots binaires, puis entre ces mots et les nombres rationnels de [0, 1]. 
Mots clefs : algorithmes, géométrie discrète, mots binaires, topologie.\n\n\nAuteur : Lilian Besson\nÉcrit en Python 3\nLicense : MIT Licensed\nDate : 25/01/2021 (j'avais écrit une solution en OCaml en 2014)\n\nIntroduction\nDonner un petit discours d'introduction :\n\nOn cherche à dessiner des courbes mathématiques sur un écran d'ordinateur. Pour simplifier l'exposé, on ne va considérer que des écrans quadrillés, où chaque case est un \"pixel\", et aussi on se restreint à des graphismes en noir et blanc. Ainsi, un écran est un tableau 2D, une matrice, de pixels, et chaque pixel est soit éteint, soit allumé.\nSi on cherche à tracer une courbe mathématique, par exemple une droite, un cercle ou d'autres courbes plus compliquée, sur un tel écran d'ordinateur, on va devoir décider quels pixels allumer ou garder éteint, le long du tracé de la courbe.\n[faire un dessin au tableau]\nPour commencer, on va uniquement s'intéresser au cas d'une droite, qui va du pixel O=(0,0) tout en bas à droite de l'écran, jusqu'à un autre point B=(b1,b2) de l'écran (à coordonnées entières, donc), situé à droite et en haut de l'origine O.\nOn pose $\\alpha=b2/b1 \\in \\mathbb{R} \\cup {\\pm\\infty}$ la pente de cette droite.\nComme on le voit, si la courbe est un cas particulier très facile, il n'y a pas de difficultés particulières.\nLes cas faciles sont les courbes horizontales, de pente $1$ ou verticales, soit $\\alpha \\in {0,1,\\infty}$.\nDans les autres cas, la pente est toujours rationnelle, mais en général elle ne sera pas entière.\nNous allons d'abord étudier le cas particulier d'une pente $\\alpha < 1$, mais il est très facile de généraliser aux sept autres quadrants [faire un dessin], avec des rotations, et en échangeant le point de départ A et l'arrivée B.\nNous formalisons le problème, en faisant le lien entre les coordonnées réelles des points $M_i = (x_i,y_i)$ sur la droite [AB], qui sont $x_i = i$ et $y_i \\in \\mathbb{R}$ peut être réel et même irrationnels, et les points 
$(x_i, \\tilde{y_i})$ qui sont les coordonnées\n(par convention, le point de référence des pixels est le point en bas à gauche)\nNous présenterons et implémenterons trois différentes méthodes, qui ont toute la même complexité algorithmique, mais qui donnent des tracés différents dans le cas d'une droite quelconque.\n\nPlan : voir table des matières pour le détail. [écrire au tableau]\n\nI) Introduction\nII) Tracé de droites, formalisation\nIII) Trois méthodes différentes : algorithmes, implémentations, exemples [visualisation si le temps ?]\nIV) Mathématiques des droites discrètes\nV) Mots binaires\nVI) Liens entre les (pentes) rationnels et les mots binaires [si le temps ?]\nVII) Conclusion et ouvertures\n\nTracé de droites, formalisation\nAU TABLEAU\nFormalisation, et hypothèses\nPrésentation du problème\nTODO\nTL;DR : on cherche des algorithmes qui évitent de calculer des erreurs sur des nombres réels, parce qu'on sait que les flottants propagent des erreurs de calculs.\nGénéralisation au delà de ces hypothèses ?\nTrois méthodes différentes : algorithmes, implémentations, exemples\nImplémentation de l'exercice de programmation\n<span style=\"color:red;\">ATTENTION</span> depuis 2019, l'exercice de programmation obligatoire qui était indiqué dans les anciens textes d'option D est devenu, comme pour les autres options A/B/C, une simple suggestion.\n- Avant 2019, il fallait absolument implémenter l'exercice demandé, et éventuellement faire plus ;\n- Maintenant, on fait ce qu'on veut. Le jury n'a pas encore diffusé de texte suivant la nouvelle consigne, donc je vous conseille l'approche suivante : pour les vieux textes, il faut juste remplacer exercice de programmation obligatoire par suggestion de programmation.\n\nÉcrire un programme permettant de représenter le segment [AB], où A= (a1,a2) et B=(b1,b2), en suivant l’algorithme de Bresenham. 
On supposera que a1<b1, a2≤b2 et que la pente $\\alpha$ de la droite est inférieure à 1.\nLa sortie du programme sera la liste des couples (xi,yi) des points représentant le segment.\n\nUn bidouillage avec Python : un \"wrapper\" qui gère les symétries, pour ne pas avoir à les coder plusieurs fois\nOn peut écrire un décorateur Python qui se charge de gérer toutes les symétries, afin de ne pas à avoir à copier-coller ça pour les méthodes suivantes.", "def gerersymetrie(nom=\"\"):\n def decorator(f):\n def g(A, B):\n a1, a2 = int(A[0]), int(A[1])\n b1, b2 = int(B[0]), int(B[1])\n # on commence à traiter les symétries\n\n if a1 == b1:\n # il faut renvoyer une droite verticale, facile à faire\n if b2 < a2:\n print(f\"{nom} (avant symétries) avec une pente = -oo\")\n return g(B, A)\n # OK maintenant b2 >= a2\n print(f\"{nom} (avant symétries) avec une pente = +oo\")\n points = [\n (a1, a2 + i) for i in range(b2 - a2 + 1)\n # droite verticale, même x, y change\n ]\n return points\n\n pente = float(b2-a2) / float(b1-a1)\n\n if a1 > b1: # cas facile !\n print(f\"{nom} (avant symétries) avec une pente = {pente}\")\n return g(B, A)\n\n # OK maintenant a1 < b1\n # vérification des hypothèses 1/2\n assert a1 < b1, f\"Erreur : {nom} demande a1 = {a1} < a2 = {b1}.\"\n\n if pente > 1: # cas plus difficile, il faudra calculer une symétrie\n # --> i) symétrie % axe horizontal\n Bx = [B[1], B[0]]\n print(f\"{nom} (avant symétries) avec une pente = {pente}\")\n points = g(A, Bx)\n # <-- i) symétrie % droite {(x,x)}\n points = [ (y, x) for (x,y) in points ]\n return points\n\n if a2 > b2: # cas plus difficile, il faudra calculer une symétrie\n Ax = [0,0]\n # --> i) translation =>\n Bx = [b1 - a1, b2 - a2]\n # --> --> ii) symétrie % axe horizontal\n Bxx = [Bx[0], -Bx[1]]\n print(f\"{nom} (avant symétries) avec une pente = {pente}\")\n points = g(Ax, Bxx)\n # <-- <-- ii) symétrie % axe horizontal\n points = [ (x, -y) for (x,y) in points ]\n # <-- i) translation <=\n points = [ (x + a1, 
y + a2) for (x,y) in points ]\n return points\n \n # cas de base, on propage\n return f(A, B)\n\n if f.__doc__:\n g.__doc__ = f.__doc__\n g.__nom__ = nom\n # maintenant on a un wrapper g(A,B) qui gère les symétries\n return g\n # et voilà on a un decorateur\n return decorator", "Maintenant on peut écrire la méthode de Bresenham, en gérant uniquement le cas qui nous arrange :", "# cette ligne fait que la fonction finale va gérer les symétries !\n@gerersymetrie(nom=\"Méthode de Bresenham\")\ndef methode_bresenham(A, B):\n \"\"\" Méthode de Bresenham.\n \n - Si N = |A B| longueur du segment, cette fonction tourne en temps O(N) et en mémoire O(N)\n N = max(n, m) avec m = |b2 - a2| et n = |b1 - a1| nb de déplacement sur l'axe horizontal/vertical\n\n - Fonctionne dans tous les cas, supporte les huit quadrants, les quatre demi-droites,\n et le cas spécial A==B, en exploitant les symétries et se ramener au cas de base :\n a1 < b1, b2 <= a2 (pente = (b2-a2)/(b1-a1) = alpha, 0 <= alpha <= 1)\n \"\"\"\n a1, a2 = int(A[0]), int(A[1])\n b1, b2 = int(B[0]), int(B[1])\n\n pente = float(b2-a2) / float(b1-a1)\n\n # vérification des hypothèses 2/2\n assert 0 <= pente <= 1, f\"Erreur : la méthode de Bresenham demande 0 <= pente = {pente} <= 1.\"\n print(f\"Méthode de Bresenham (après symétries éventuelles) avec une pente 0 <= pente = {pente} <= 1\")\n\n points = []\n points.append(A) # premier point A\n\n nombre_point = b1 - a1 # c'est le n du texte\n pente_normalise = b2 - a2 # le m du texte = b2-a2, c'est la pente normalisée alpha*n\n xi = int(a1)\n yi = int(a2)\n ei = 0\n \n for i in range(1, nombre_point):\n xi += 1\n\n # cette partie spécifique à Bresenham, et change selon les trois méthodes\n if 2 * ( ei + pente_normalise ) <= nombre_point:\n ei += pente_normalise\n yi += 0 # inutile, juste pour le montrer\n else:\n ei += pente_normalise - nombre_point\n yi += 1\n\n points.append((xi, yi))\n \n # et dernier point B\n points.append(B)\n return points", "Exemples : deux premiers 
quadrants, deux premières demi-droites\n\nAvec une pente $\\alpha=1$, tous les yi += O/1 seront yi += 1 :", "A = (0, 0)\nB = (4, 4) # pente 1\nprint(methode_bresenham(A, B))", "Avec une pente $\\alpha=0$, tous les yi += O/1 seront yi += 0 :", "A = (0, 0)\nB = (4, 0) # pente 0\nprint(methode_bresenham(A, B))", "Avec une pente $0 < \\alpha < 1$, tous les yi += O/1 suivent la méthode de Bresenham :", "A = (0, 0)\nB = (4, 3) # pente 3/4\nprint(methode_bresenham(A, B))", "Avec une pente $1 < \\alpha < +\\infty$, ce sont maintenant les yi += 1 mais les xi += O/1 qui suivent la méthode de Bresenham :", "A = (0, 0)\nB = (3, 4) # pente 4/3\nprint(methode_bresenham(A, B))", "Avec une pente $\\alpha = -\\infty$, ce sont maintenant les yi += 1 et tous les xi += O/1 sont des += 1 :", "A = (0, 0)\nB = (0, 4) # pente +infini\nprint(methode_bresenham(A, B))", "Autres exemples : six autre premiers quadrants, deux autres demi-droites\nOn fera ces autres exemples quand on aura les deux autres méthodes.\nDeux autres méthodes\nUne fois que l'on aura écrit la méthode de Bresenham, on peut rapidement implémenter deux autres approches qui consistent en longer au plus près inférieurement, supérieurement.", "from math import floor, ceil", "Longer au plus près inférieurement", "@gerersymetrie(nom=\"Méthode de longer au plus près inférieurement\")\ndef methode_longer_inferieurement(A, B):\n \"\"\" Méthode de longer au plus près inférieurement.\n \n - Si N = |A B| longueur du segment, cette fonction tourne en temps O(N) et en mémoire O(N)\n N = max(n, m) avec m = |b2 - a2| et n = |b1 - a1| nb de déplacement sur l'axe horizontal/vertical\n\n - Fonctionne dans tous les cas, supporte les huit quadrants, les quatre demi-droites,\n et le cas spécial A==B, en exploitant les symétries et se ramener au cas de base :\n a1 < b1, b2 <= a2 (pente = (b2-a2)/(b1-a1) = alpha, 0 <= alpha <= 1)\n \"\"\"\n a1, a2 = int(A[0]), int(A[1])\n b1, b2 = int(B[0]), int(B[1])\n\n pente = float(b2-a2) / float(b1-a1)\n\n 
# vérification des hypothèses 2/2\n assert 0 <= pente <= 1, f\"Erreur : la méthode de longer au plus près inférieurement demande 0 <= pente = {pente} <= 1.\"\n print(f\"Méthode de longer au plus près inférieurement (après symétries éventuelles) avec une pente 0 <= pente = {pente} <= 1\")\n\n points = []\n points.append(A) # premier point A\n\n nombre_point = b1 - a1 # c'est le n du texte\n pente_normalise = b2 - a2 # le m du texte = b2-a2, c'est la pente normalisée alpha*n\n xi = int(a1)\n yi = int(a2)\n ei = 0\n \n for i in range(1, nombre_point):\n xi += 1\n # cette partie est spécifique à cette méthode\n yi = floor(a2 + pente*i)\n\n points.append((xi, yi))\n \n # et dernier point B\n points.append(B)\n return points", "Longer au plus près supérieurement", "@gerersymetrie(nom=\"Méthode de longer au plus près supérieurement\")\ndef methode_longer_superieurement(A, B):\n \"\"\" Méthode de longer au plus près supérieurement.\n \n - Si N = |A B| longueur du segment, cette fonction tourne en temps O(N) et en mémoire O(N)\n N = max(n, m) avec m = |b2 - a2| et n = |b1 - a1| nb de déplacement sur l'axe horizontal/vertical\n\n - Fonctionne dans tous les cas, supporte les huit quadrants, les quatre demi-droites,\n et le cas spécial A==B, en exploitant les symétries et se ramener au cas de base :\n a1 < b1, b2 <= a2 (pente = (b2-a2)/(b1-a1) = alpha, 0 <= alpha <= 1)\n \"\"\"\n a1, a2 = int(A[0]), int(A[1])\n b1, b2 = int(B[0]), int(B[1])\n\n pente = float(b2-a2) / float(b1-a1)\n\n # vérification des hypothèses 2/2\n assert 0 <= pente <= 1, f\"Erreur : la méthode de longer au plus près supérieurement demande 0 <= pente = {pente} <= 1.\"\n print(f\"Méthode de longer au plus près supérieurement (après symétries éventuelles) avec une pente 0 <= pente = {pente} <= 1\")\n\n points = []\n points.append(A) # premier point A\n\n nombre_point = b1 - a1 # c'est le n du texte\n pente_normalise = b2 - a2 # le m du texte = b2-a2, c'est la pente normalisée alpha*n\n xi = int(a1)\n yi = 
int(a2)\n ei = 0\n \n for i in range(1, nombre_point):\n xi += 1\n # cette partie est spécifique à cette méthode\n yi = ceil(a2 + pente*i)\n\n points.append((xi, yi))\n \n # et dernier point B\n points.append(B)\n return points", "Une dernière méthode ?\nOn peut aussi longer la droite aléatoirement, en prenant inférieurement ou supérieurement uniformément au hasard.\nCela n'a pas d'intérêt particulier, mais c'est rapide à écrire alors autant le faire :", "import random\n\n[ random.randint(0, 1) for _ in range(20) ]", "On peut facilement remplacer le calcul de $\\tilde{y_i}$ par simplement un choix aléatoire uniforme entre le choix floor et le choix ceil :", "@gerersymetrie(nom=\"Méthode de longer aléatoirement\")\ndef methode_longer_aleatoirement(A, B):\n \"\"\" Méthode de longer aléatoirement.\n \n - Si N = |A B| longueur du segment, cette fonction tourne en temps O(N) et en mémoire O(N)\n N = max(n, m) avec m = |b2 - a2| et n = |b1 - a1| nb de déplacement sur l'axe horizontal/vertical\n\n - Fonctionne dans tous les cas, supporte les huit quadrants, les quatre demi-droites,\n et le cas spécial A==B, en exploitant les symétries et se ramener au cas de base :\n a1 < b1, b2 <= a2 (pente = (b2-a2)/(b1-a1) = alpha, 0 <= alpha <= 1)\n \"\"\"\n a1, a2 = int(A[0]), int(A[1])\n b1, b2 = int(B[0]), int(B[1])\n\n pente = float(b2-a2) / float(b1-a1)\n\n # vérification des hypothèses 2/2\n assert 0 <= pente <= 1, f\"Erreur : la méthode de longer aléatoirement demande 0 <= pente = {pente} <= 1.\"\n print(f\"Méthode de longer aléatoirement (après symétries éventuelles) avec une pente 0 <= pente = {pente} <= 1\")\n\n points = []\n points.append(A) # premier point A\n\n nombre_point = b1 - a1 # c'est le n du texte\n pente_normalise = b2 - a2 # le m du texte = b2-a2, c'est la pente normalisée alpha*n\n xi = int(a1)\n yi = int(a2)\n ei = 0\n \n for i in range(1, nombre_point):\n xi += 1\n # cette partie est spécifique à cette méthode\n if random.randint(0, 1) == 0:\n yi = floor(a2 + 
pente*i)\n else:\n yi = ceil(a2 + pente*i)\n # écrire cela est aussi possible\n # yi += random.randint(0, 1) # TODO\n\n points.append((xi, yi))\n \n # et dernier point B\n points.append(B)\n return points", "Exemples : deux premiers quadrants, deux premières demi-droites\n\nAvec une pente $\\alpha=1$, tous les yi += O/1 seront yi += 1 :", "A = (0, 0)\nB = (4, 4) # pente 1\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Avec une pente $\\alpha=0$, tous les yi += O/1 seront yi += 0 :", "A = (0, 0)\nB = (4, 0) # pente 0\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Avec une pente $0 < \\alpha < 1$, tous les yi += O/1 suivent la méthode choisie :", "A = (0, 0)\nB = (4, 3) # pente 3/4\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Avec une pente $1 < \\alpha < +\\infty$, ce sont maintenant les yi += 1 mais les xi += O/1 qui suivent la méthode choisie :", "A = (0, 0)\nB = (3, 4) # pente 4/3\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Avec une pente $\\alpha = -\\infty$, ce sont maintenant les yi += 1 et tous les xi += O/1 sont des += 1 :", "A = (0, 0)\nB = (0, 4) # pente +infini\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Autres exemples : six autre premiers quadrants, deux autres demi-droites\nOn fera ces autres exemples quand on aura les deux autres méthodes.\n\nQuadrant #3, b1 < a1 mais b2 > a2 et b2 >= b1 :", "A = (0, 0)\nB = (-3, 4) # pente -4/3\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Quadrant #4, b1 < a1 mais 
b2 > a2 et b2 < b1 :", "A = (0, 0)\nB = (-4, 3) # pente -3/4\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Quadrant #5 :", "A = (0, 0)\nB = (-4, -3) # pente -3/-4\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Quadrant #6 :", "A = (0, 0)\nB = (-3, -4) # pente -4/-3\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "TODO fix this bug?\n\nQuadrant #7 :", "A = (0, 0)\nB = (3, -4) # pente 3/-4\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Quadrant #8 :", "A = (0, 0)\nB = (4, -3) # pente 4/-3\nprint(methode_bresenham(A, B))\nprint(methode_longer_inferieurement(A, B))\nprint(methode_longer_superieurement(A, B))\nprint(methode_longer_aleatoirement(A, B))", "Visualisation ? 
[si le temps]\nCa ne devrait pas être trop compliqué :\n\nil faut pouvoir donner la pente, comme un rationnel ou un flottant,\net les points [M0,...,MN] donné avec leurs coordonnées (en général, M0=A=O=(0,0) et MN=B=(b1,b2)),\net afficher la courbe et les pixels allumés, comme des gros rectangles.", "import matplotlib.pyplot as plt\n\n# https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.patches.Rectangle.html?highlight=rectangle#matplotlib.patches.Rectangle\nfrom matplotlib.patches import Rectangle\n\ndef plot_discrete_line(points=None, # liste des coordonnées [M0,...,MN]\n A=None, B=None,\n methodes=[], # liste de fonctions à utiliser pour calculer les points, si absents\n title=\"Dessin d'une droite discrète\",\n width=50, height=None,\n figsize=(10,7),\n linewidth=2,\n fill=True,\n edgecolors=['r'],\n facecolors=None,\n alpha=0.4,\n linecolor='k',\n equal=True,\n pente=None,\n ):\n fig = plt.figure(figsize=figsize)\n\n if equal: plt.axis('equal')\n if not facecolors: facecolors = edgecolors\n if not height: height = width\n \n use_fake_methode = False\n if not methodes:\n use_fake_methode = True\n def fake_methode(A, B): return points\n fake_methode.__nom__ = \"Fake method XXX\"\n methodes = [fake_methode]\n\n for i, methode in enumerate(methodes):\n points = methode(A, B)\n \n # points in P should be ordored, but not restricted to Mi = (xi, yi) with xi=i\n min_x = min(p[0] for p in points)\n max_x = max(p[0] for p in points)\n plt.xlim((min_x - 0.2)*width, (max_x + 1 + 0.2)*width)\n min_y = min(p[1] for p in points)\n max_y = max(p[1] for p in points)\n plt.ylim((min_y - 0.2)*height, (max_y + 1 + 0.2)*height)\n\n # plot the points as big rectangles\n for p in points:\n x, y = p\n rect = Rectangle(\n (x*width, y*height),\n width, height,\n linewidth=linewidth,\n edgecolor=edgecolors[i], facecolor=facecolors[i],\n alpha=alpha, fill=fill,\n )\n plt.gca().add_patch(rect)\n \n _xs = np.linspace(min_x, max_x, 2000)\n _ys = np.linspace(min_y, max_y, 2000)\n\n # Trick 
to add a legend?\n plt.plot(\n _xs[:2]*width, _ys[:2]*width,\n color=edgecolors[i], label=methode.__nom__\n )\n \n # plot the line\n plt.plot(_xs*width, _ys*width, color=linecolor, linewidth=1+linewidth)\n \n\n if not use_fake_methode:\n plt.legend()\n if title:\n if pente and \"pente\" not in title:\n if not isinstance(pente, str):\n pente = \"{:.3g}\".format(pente)\n title += \" de pente = ${}$\".format(pente)\n plt.title(title)\n\n fig.tight_layout()\n plt.show()\n # return fig", "Exemples manuels :", "points = [ (0,0), (1,1), (2,2), (3,3), (4,4) ]\n\nplot_discrete_line(points, pente=1.0)\n\npoints = [ (0,0), (1,0), (2,1), (3,1), (4,2), (5,2) ]\n\nplot_discrete_line(points, pente=0.5)", "On peut évidemment dessiner des droites avec une pente qui ne vérifie pas $0 < \\alpha < 1$ :", "points = [ (0,0), (0,1), (1,2), (1,3), (2,4), (2,5) ]\n\nplot_discrete_line(points, pente=2)", "Comparaison graphique des trois méthodes :\nOn va prendre les exemples précédents, et les afficher dans la même figure :", "methodes = [\n methode_bresenham,\n methode_longer_inferieurement,\n methode_longer_superieurement,\n methode_longer_aleatoirement\n]", "Avec une pente $\\alpha=1$, tous les yi += O/1 seront yi += 1 :", "A = (0, 0)\nB = (20, 20) # pente 1\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_discrete_line(A=A, B=B, pente=pente,\n methodes=methodes,\n edgecolors=['red', 'blue', 'orange', 'purple']\n)", "Avec une pente $\\alpha=0$, tous les yi += O/1 seront yi += 0 :", "A = (0, 0)\nB = (20, 0) # pente 0\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_discrete_line(A=A, B=B, pente=pente,\n methodes=methodes,\n edgecolors=['red', 'blue', 'orange', 'purple']\n)", "Avec une pente $0 < \\alpha < 1$, tous les yi += O/1 suivent la méthode de Bresenham :", "A = (0, 0)\nB = (20, 7) # pente 7/20\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_discrete_line(A=A, B=B, pente=pente,\n methodes=methodes,\n edgecolors=['red', 'blue', 'orange', 'purple']\n)\n\nA = (0, 0)\nB = (7, 20) # pente 
20/7\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_discrete_line(A=A, B=B, pente=pente,\n methodes=methodes,\n edgecolors=['red', 'blue', 'orange', 'purple']\n)", "Avec une pente $\\alpha = +\\infty$, ce sont maintenant les yi += 1 et tous les xi += O/1 sont des += 1 :", "A = (0, 0)\nB = (0, 7) # pente +infini\nprint(methode_bresenham(A, B))\npente = '+oo'\nplot_discrete_line(A=A, B=B, pente=pente,\n methodes=methodes,\n edgecolors=['red', 'blue', 'orange', 'purple']\n)", "Meilleur visualisation avec quatre sous-figures ?\nPas le temps, mais il serait plus approprié de montrer les quatres méthodes m1/m2/m3/m4 comme ça :\n[ m1 | m2 ]\n[ m3 | m4 ]\nAvec plt.subplots(2,2) ce ne serait pas trop difficile", "nos_4_methodes = [\n methode_bresenham,\n methode_longer_aleatoirement,\n methode_longer_inferieurement,\n methode_longer_superieurement,\n]\n\ndef plot_4_differents_methods(A=None, B=None,\n title=\"Dessin d'une droite discrète\",\n width=50, height=None,\n figsize=(20,14),\n linewidth=2,\n fill=True,\n methodes=nos_4_methodes,\n edgecolors=['red', 'blue', 'orange', 'purple'],\n facecolors=None,\n alpha=0.9,\n linecolor='k',\n equal=True,\n pente=None,\n ):\n fig, axs = plt.subplots(2, 2, figsize=figsize)\n\n if not facecolors: facecolors = edgecolors\n if not height: height = width\n \n use_fake_methode = False\n if not methodes:\n use_fake_methode = True\n def fake_methode(A, B): return points\n fake_methode.__nom__ = \"Fake method XXX\"\n methodes = [fake_methode]\n\n i_x_axis = 0\n j_y_axis = 0\n for i, methode in enumerate(methodes):\n points = methode(A, B)\n ax = axs[i_x_axis, j_y_axis]\n if equal: ax.set_aspect('equal')\n \n # points in P should be ordored, but not restricted to Mi = (xi, yi) with xi=i\n min_x = min(p[0] for p in points)\n max_x = max(p[0] for p in points)\n # plt.xlim((min_x - 0.2)*width, (max_x + 1 + 0.2)*width)\n min_y = min(p[1] for p in points)\n max_y = max(p[1] for p in points)\n # plt.ylim((min_y - 0.2)*height, (max_y + 1 + 
0.2)*height)\n \n _xs = np.linspace(min_x, max_x, 2000)\n _ys = np.linspace(min_y, max_y, 2000)\n\n # Trick to add a legend?\n ax.plot(\n _xs[:2]*width, _ys[:2]*width,\n color=edgecolors[i], label=methode.__nom__\n )\n \n # plot the line\n ax.plot(_xs*width, _ys*width, color=linecolor, linewidth=1+linewidth)\n\n # plot the points as big rectangles\n for p in points:\n x, y = p\n rect = Rectangle(\n (x*width, y*height),\n width, height,\n linewidth=linewidth,\n edgecolor=edgecolors[i], facecolor=facecolors[i],\n alpha=alpha, fill=fill,\n )\n ax.add_patch(rect)\n \n i_x_axis = (i_x_axis + 1) % 2\n if i_x_axis == 1:\n j_y_axis = (j_y_axis + 1) % 2\n\n if not use_fake_methode:\n ax.legend()\n if not title:\n title = methode.nom\n if title:\n if pente and \"pente\" not in title:\n if not isinstance(pente, str):\n pente = \"{:.5g}\".format(pente)\n title += \" de pente = ${}$\".format(pente)\n ax.set_title(title)\n\n plt.show()\n # return fig", "Un exemple :", "A = (0, 0)\nB = (20, 7) # pente 7/20\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_4_differents_methods(A=A, B=B, pente=pente)\n\nA = (0, 0)\nB = (7, 20) # pente 20/7\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_4_differents_methods(A=A, B=B, pente=pente)\n\nA = (0, 0)\nB = (-7, 20) # pente 20/-7\npente = (B[1] - A[1]) / (B[0] - A[0])\nplot_4_differents_methods(A=A, B=B, pente=pente)", "Bon c'était clairement trop long, inutile et impossible le jour même.\nTODO si le temps:\n- corriger dessin de la courbe noire en cas de symétries\n\nConclusion de l'oral\nDonner une conclusion, scientifique et aussi personnelle, en 2-4 minutes.\nBlabla TODO\n\nCela peut aussi s'appliquer en dehors des écrans, par exemple les mosaïques ou la broderie subissent les mêmes contraintes de dessins sur un support quadrillé et \"pixelisé\", et donc les méthodes mises au point sur le premier problème pourraient être aussi utilisées dans ces pratiques artistiques.\nOn peut même imaginer qu'un artist mosaïste durant l'Antiquité Romaine 
suivant ce genre de méthodes de \"longer au plus près inférieurement\", sans même y réfléchir formellement.\n\n[si le temps]\n\nUne question en ouverture : et si on passe en 3D, est-ce beaucoup plus compliqué ?\nMon intuition dit que oui ! Et il s'ajoute aussi la difficulté de savoir quel support est utilisé pour représenter un objet en 3D.\nL'analogie la plus simple sera de jouer avec des briques de Légo, qui pourraient se fixer dans les six directions (aux sommets du cube actuel).\nMême si ce serait intéressant de généraliser notre modélisation et les méthodes développées pour le tracé de courbes, du 2D à la 3D, je peine à voir ce qu'on pourrait généraliser pour la partie plus mathématiques liées au mots binaires.\n\nAutres pistes :\n\n\nDONE généraliser à tous les cas d'une droite : verticale, horizontale, et les sept autre quadrants (ici, seulement $0 < \\alpha < 1$ et $a_2 \\leq b_2$) ;\n\n\nDONE écrire une fonction permettant de visualiser cette méthode de Bresenham, et les autres méthodes, par exemple avec ipythonblocks (pratique mais pas possible le jour J des oraux), ou avec matplotlib (ce sera un peu plus long, mais disponible le jour J) ;\n\n\nTODO généraliser l'algorithme à un cercle, par exemple, ou d'autres figures dont on connaît des équations et qu'on peut chercher à vouloir longer au plus près inférieurement, supérieurement, ou de façon hybride à la Bresenham ;\n\n\nTODO pour s'éloigner davantage du problème de tracés de droite, on peut implémenter des choses concernant les mots binaires (ultimement) périodiques (abbrégés en mbup) :\n\n\nreprésentation d'un mbup, affichage ;\n\nvérification qu'un mot binaire est bien équilibré : est-ce seulement possible, sachant que Déf.5 porte sur un mot infini ?\ntransformation d'un mbup en une droite discrète (§6) $\\sigma \\mapsto (M_0,\\dots,M_n)$ ;\nmot binaire up associé à un rationnel, et inversement : est-ce facile d'écrire deux fonctions qui effectuent ces transformations, par exemple $\\frac{2}{3} 
\\leftrightarrow (011)^\\omega$ ?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
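As a hedged aside on the Bresenham notebook above: its `methode_bresenham` cell mixes the symmetry wrapper with the core loop. The integer-only inner loop can be isolated into a standalone sketch, restricted to the base case `a1 < b1` with slope between 0 and 1 (the function name `bresenham_base_case` is illustrative, not from the notebook):

```python
def bresenham_base_case(A, B):
    """Integer-only Bresenham walk; base case only: a1 < b1 and 0 <= slope <= 1."""
    a1, a2 = A
    b1, b2 = B
    n = b1 - a1          # horizontal steps (the notebook's `nombre_point`)
    m = b2 - a2          # normalised slope alpha*n (the notebook's `pente_normalise`)
    assert n > 0 and 0 <= m <= n, "base case requires a1 < b1 and 0 <= slope <= 1"
    points = [(a1, a2)]
    x, y, e = a1, a2, 0  # e accumulates the fractional error, scaled by n
    for _ in range(n):
        x += 1
        if 2 * (e + m) <= n:   # staying level keeps the accumulated error small enough
            e += m
        else:                  # otherwise step up and recentre the error
            e += m - n
            y += 1
        points.append((x, y))
    return points
```

For the segment (0,0)–(4,3) this yields the same pixels as the notebook's base case: `[(0, 0), (1, 1), (2, 1), (3, 2), (4, 3)]`.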
bloomberg/bqplot
examples/Tutorials/Updating Plots.ipynb
apache-2.0
[ "Updating Plots\nbqplot is an interactive plotting library. Attributes of plots can be updated in place without recreating the whole figure and marks. Let's look at idiomatic ways of updating plots in bqplot", "import numpy as np\nimport bqplot.pyplot as plt\n\nx = np.linspace(-10, 10, 100)\ny = np.sin(x)\n\nfig = plt.figure()\nline = plt.plot(x=x, y=y)\nfig", "To update the attributes of the plot (x, y, color, etc.), the correct way is to update the attributes of the mark objects in place. Recreating figure or mark objects is not recommended.", "# update y attribute of the line object\nline.y = np.tan(x)", "We can update multiple attributes of the mark object simultaneously by using the hold_sync method like so. (This makes only one round trip from the python kernel to front end)", "# update both x and y together\nwith line.hold_sync():\n line.x = np.arange(100)\n line.y = x ** 3 - x", "We can also animate the changes to the x, y and other data attributes by setting the animation_duration property on the figure object. More examples of animations can be found in the Animations notebook", "fig.animation_duration = 1000\nline.y = np.cos(x)", "Let's look at an example to update a scatter plot", "x, y = np.random.rand(2, 10)\n\nfig = plt.figure(animation_duration=1000)\nscat = plt.scatter(x=x, y=y)\nfig\n\n# update the x and y attributes in place using hold_sync\nwith scat.hold_sync():\n scat.x, scat.y = np.random.rand(2, 10)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
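The `hold_sync` cells in the bqplot notebook above rely on the real widget machinery (traitlets/ipywidgets), which synchronises each attribute write with the front end. The batching idea itself can be modelled in plain Python; this is only a toy sketch of the design, not bqplot's actual implementation, and all names here are invented for illustration:

```python
from contextlib import contextmanager

class SyncedMark:
    """Toy model of a bqplot-like mark: every attribute write emits one 'sync
    message'; writes made inside hold_sync() are collapsed into a single message."""

    def __init__(self):
        object.__setattr__(self, "_held", False)
        object.__setattr__(self, "_messages", [])

    def __setattr__(self, name, value):
        object.__setattr__(self, name, value)
        if not self._held:
            self._messages.append({name: value})  # one round trip per write

    @contextmanager
    def hold_sync(self):
        object.__setattr__(self, "_held", True)
        try:
            yield self
        finally:
            object.__setattr__(self, "_held", False)
            self._messages.append("batched update")  # one round trip for the batch

mark = SyncedMark()
mark.x = [1, 2]            # emits one message
with mark.hold_sync():     # x and y below travel together in one message
    mark.x = [3, 4]
    mark.y = [5, 6]
```

This illustrates why `line.hold_sync()` in the notebook makes only one kernel-to-frontend round trip for the combined `x`/`y` update.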
metpy/MetPy
dev/_downloads/f1c8c5b9729cd7164037ec8618030966/upperair_declarative.ipynb
bsd-3-clause
[ "%matplotlib inline", "Upper Air Analysis using Declarative Syntax\nThe MetPy declarative syntax allows for a simplified interface to creating common\nmeteorological analyses including upper air observation plots.", "from datetime import datetime\n\nimport pandas as pd\n\nfrom metpy.cbook import get_test_data\nimport metpy.plots as mpplots\nfrom metpy.units import units", "Getting the data\nIn this example, data is originally from the Iowa State Upper-air archive\n(https://mesonet.agron.iastate.edu/archive/raob/) available through a Siphon method.\nThe data are pre-processed to attach latitude/longitude locations for each RAOB site.", "data = pd.read_csv(get_test_data('UPA_obs.csv', as_file_obj=False))\n\n# In a real-world case, you could obtain and preprocess the data with code such as\n# from siphon.simplewebservice.iastate import IAStateUpperAir\n# from metpy.io import add_station_lat_lon\n\n# data = IAStateUpperAir().request_all_data(datetime(2021, 8, 25, 12))\n# data = add_station_lat_lon(data)", "Plotting the data\nUse the declarative plotting interface to create a CONUS upper-air map for 500 hPa", "# Plotting the Observations\nobs = mpplots.PlotObs()\nobs.data = data\nobs.time = datetime(1993, 3, 14, 0)\nobs.level = 500 * units.hPa\nobs.fields = ['temperature', 'dewpoint', 'height']\nobs.locations = ['NW', 'SW', 'NE']\nobs.formats = [None, None, lambda v: format(v, '.0f')[:3]]\nobs.vector_field = ('u_wind', 'v_wind')\nobs.reduce_points = 0\n\n# Add map features for the particular panel\npanel = mpplots.MapPanel()\npanel.layout = (1, 1, 1)\npanel.area = (-124, -72, 20, 53)\npanel.projection = 'lcc'\npanel.layers = ['coastline', 'borders', 'states', 'land', 'ocean']\npanel.plots = [obs]\n\n# Collecting panels for complete figure\npc = mpplots.PanelContainer()\npc.size = (15, 10)\npc.panels = [panel]\n\n# Showing the results\npc.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
barjacks/foundations-homework
13/311 time series homework.ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.style.use('ggplot')\nimport dateutil.parser", "First, I made a mistake naming the data set! It's 2015 data, not 2014 data. But yes, still use 311-2014.csv. You can rename it.\n\nImporting and preparing your data\nImport your data, but only the first 200,000 rows. You'll also want to change the index to be a datetime based on the Created Date column - you'll want to check if it's already a datetime, and parse it if not.", "import datetime\nimport datetime as dt\n\ndt.datetime.strptime('07/06/2015 10:58:27 AM', '%m/%d/%Y %I:%M:%S %p')\n#datetime.datetime(2015, 7, 6, 0, 0)\nparser = lambda date: pd.datetime.strptime(date, '%m/%d/%Y %H:%M:%S')\n\ndf = pd.read_csv(\"311-2015.csv\", low_memory=False, parse_dates=[1], dtype=str , nrows=200000)\n\ndf.info()\n\ndf.index = df['Created Date']\n\ndel df['Created Date']\n\ndf.head(2)", "What was the most popular type of complaint, and how many times was it filed?", "df['Complaint Type'].value_counts().head(1)", "Make a horizontal bar graph of the top 5 most frequent complaint types.", "ax = df['Complaint Type'].value_counts().sort_values(ascending=True).tail(5).plot(kind='barh', figsize=(6,4), fontsize=9)\nax.set_title(\"Top 5 Most Frequent 311-Complaints in 2015\")\nax.set_xlabel(\"Complaint Count\")\nax.set_ylabel(\"Complaint Type\")\nplt.savefig(\"5 Most Frequent 311 Complaints in 2015.svg\", bbox_inches='tight')\n#http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html", "Which borough has the most complaints per capita? 
Since it's only 5 boroughs, you can do the math manually.", "df_summed_complaints = df['Borough'].value_counts()\nsummed_complaints = pd.DataFrame(df_summed_complaints)\nsummed_complaints.reset_index(inplace=True)\nsummed_complaints.columns = ['Borough', 'Complaint Count']\n\nBorough_head_count = pd.read_csv(\"NYC_Boroughs.csv\")\n\nsummed_complaints_merged = summed_complaints.merge(Borough_head_count, left_on='Borough', right_on='borough name')\n\ndel summed_complaints_merged['borough name']\n\nsummed_complaints_merged\n\n# complaints per capita = complaints / population\nsummed_complaints_merged['Per Capita'] = summed_complaints_merged['Complaint Count'] / summed_complaints_merged['Total']\n\nsummed_complaints_merged['Per Capita'].sort_values(ascending=False)\nsummed_complaints_merged[['Borough', 'Per Capita']]\n\nSorted_complaints = summed_complaints_merged.sort_values(by='Per Capita', ascending=False)\n\nSorted_complaints
Graph it.", "#PANDAS resample: http://stackoverflow.com/questions/17001389/pandas-resample-documentation/17001474#17001474\n\nax = df.resample('M')['Unique Key'].count().plot(kind='barh')\n#ax.set_yticks(['2015-01-31 00:00:00, 2015-02-28 00:00:00, 2015-03-30 00:00:00, 2015-04-30 00:00:00, 2015-05-31 00:00:00, 2015-06-30 00:00:00, 2015-07-31 00:00:00, 2015-08-31 00:00:00, 2015-09-30 00:00:00, 2015-10-31 00:00:00, 2015-11-30 00:00:00, 2015-12-31 00:00:00'])\nax.set_yticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])\nax.set_title('Number of Complaints per Month')\nax.set_ylabel('Months of the Year')\nax.set_xlabel('Complaint Counts')", "What week of the year has the most reports filed? How many? Graph the weekly complaints.", "df_week_count = df.resample('W')['Unique Key'].count()\n\ndf_week_count = pd.DataFrame(df_week_count)\n\ndf_week_count.sort_values(by='Unique Key', ascending=False).head(5)", "Noise complaints are a big deal. Use .str.contains to select noise complaints, and make an chart of when they show up annually. Then make a chart about when they show up every day (cyclic).", "df['Noise'] = df['Complaint Type'].str.contains('Noise')\ndf_noise = df[df['Noise'] == True]\n\nax = df_noise.resample('D')['Unique Key'].count().plot(kind='bar', figsize=(15,4))\nax.set_xticklabels('')\nax.set_title('Number of Noise Complaints per Day in 2015')\nax.set_ylabel('Noise Complaint Count')\nax.set_xlabel('From January - December 2015 ')\n\ndf_noise.groupby(by=df_noise.index.hour)['Unique Key'].count().plot(kind='bar', figsize=(12,6))", "Which were the top five days of the year for filing complaints? How many on each of those days? Graph it.", "df_day_count = df.resample('D')['Unique Key'].count()\ndf_day_count = pd.DataFrame(df_day_count)\nax = df_day_count.sort_values(by='Unique Key', ascending=True).tail(5).plot(kind='barh', legend=False)\nax.set_yticklabels(['10. Oktober', '6. August', '3. August', '25. September', '19. 
Oktober'])\nax.set_title('Top five complaint days in 2015')\nax.set_ylabel('')\nax.set_xlabel('Complaint Counts')\nplt.savefig(\"Top 5 Complaint Days\", bbox_inches='tight')", "What hour of the day has the most complaints? Graph a day of complaints.", "ax = df.groupby(by=df.index.hour)['Unique Key'].count().plot(kind='bar', figsize=(12,6))\nax.set_title('Number of Complaints per hour in 2015')", "One of the hours has an odd number of complaints. What are the most common complaints at that hour, and what are the most common complaints the hour before and after?", "df.groupby(by=df.index.hour)['Complaint Type'].value_counts().head(2)\n\ndf_complaint_type_count_per_hour = df.groupby(by=df.index.hour)['Complaint Type'].value_counts()\n\nTop_complaints_by_hour = pd.DataFrame(df_complaint_type_count_per_hour)\n\nTop_complaints_by_hour['Complaint Type'][0].head(1)\n\nTop_complaints_by_hour['Complaint Type'][1].head(1)\n\nTop_complaints_by_hour['Complaint Type'][23].head(1)\n\n#More Reading: http://pandas.pydata.org/pandas-docs/version/0.13.1/timeseries.html", "So odd. What's the per-minute breakdown of complaints between 12am and 1am? You don't need to include 1am.", "# keep only complaints filed between 12am and 1am, then count per minute\ndf_midnight = df[df.index.hour == 0]\ndf_midnight.groupby(by=df_midnight.index.minute)['Unique Key'].count()", "Looks like midnight is a little bit of an outlier. Why might that be?
Take the 5 most common agencies and graph the times they file reports at (all day, not just midnight).", "df['Agency'].value_counts().head(5)\n\ndf['Agency Name'].value_counts().head(5)\n\ndf_NYPD = df[df['Agency Name'] == 'New York City Police Department']\ndf_NYPD.groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar')\n\ndf_HPD = df[df['Agency Name'] == 'Department of Housing Preservation and Development']\ndf_HPD.groupby(by=df_HPD.index.hour)['Unique Key'].count().plot(kind='bar')\n\ndf_DOT = df[df['Agency Name'] == 'Department of Transportation']\ndf_DOT.groupby(by=df_DOT.index.hour)['Unique Key'].count().plot(kind='bar')\n\ndf_DPR = df[df['Agency Name'] == 'Department of Parks and Recreation']\ndf_DPR.groupby(by=df_DPR.index.hour)['Unique Key'].count().plot(kind='bar')\n\ndf_DOHMH = df[df['Agency Name'] == 'Department of Health and Mental Hygiene']\ndf_DOHMH.groupby(by=df_DOHMH.index.hour)['Unique Key'].count().plot(kind='bar')", "df_NYPD = df[df['Agency Name'] == 'New York City Police Department']\ndf_NYPD.groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar')\nDon't understand why I can't change the labels here:", "ax = df[df['Agency Name'] == 'New York City Police Department'].groupby(by=df_NYPD.index.hour)['Unique Key'].count().plot(kind='bar', stacked=True, figsize=(15,6))\ndf[df['Agency Name'] == 'Department of Housing Preservation and Development'].groupby(by=df_HPD.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='lightblue')\ndf[df['Agency Name'] == 'Department of Transportation'].groupby(by=df_DOT.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='purple')\ndf[df['Agency Name'] == 'Department of Health and Mental Hygiene'].groupby(by=df_DOHMH.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, stacked=True, color='grey')\ndf[df['Agency Name'] == 'Department of Parks and Recreation'].groupby(by=df_DPR.index.hour)['Unique Key'].count().plot(kind='bar', ax=ax, 
stacked=True, color='green')\nax.set_title('Time of Day Agencies file complaints')\nax.set_ylabel('Complaint Count')\nax.set_xlabel(\"\"\"\n Red: New York City Police Department\n Blue: Department of Housing Preservation and Development\n Purple: Department of Transportation\n Grey: Department of Health and Mental Hygiene\n Green: Department of Parks and Recreation\"\"\")\nplt.savefig(\"Time of Day Agencies File Complaints.svg\", bbox_inches='tight')\n", "Graph those same agencies on an annual basis - make it weekly. When do people like to complain? When does the NYPD have an odd number of complaints?", "ax = df[df['Agency Name'] == 'New York City Police Department'].resample('W')['Agency'].count().plot(figsize=(15,4), linewidth=3)\ndf[df['Agency Name'] == 'Department of Housing Preservation and Development'].resample('W')['Agency'].count().plot(color='lightblue', linewidth=2)\ndf[df['Agency Name'] == 'Department of Transportation'].resample('W')['Agency'].count().plot(color='purple', linewidth=2)\ndf[df['Agency Name'] == 'Department of Health and Mental Hygiene'].resample('W')['Agency'].count().plot(color='grey', linewidth=2)\ndf[df['Agency Name'] == 'Department of Parks and Recreation'].resample('W')['Agency'].count().plot(color='green', linewidth=2)\nax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])\nax.set_title('When the Agencies Filed the Reports 2015')\nax.set_xlabel(\"\"\"\n Red: New York City Police Department\n Blue: Department of Housing Preservation and Development\n Purple: Department of Transportation\n Grey: Department of Health and Mental Hygiene\n Green: Department of Parks and Recreation\"\"\")\nplt.savefig(\"When the Agencies Filed the Reports 2015.svg\", bbox_inches='tight')\n\nax = df_NYPD.resample('W')['Agency'].count().plot(kind='bar', figsize=(15,4))\nax.set_xticklabels([''])\nax.set_xlabel('January to December 2015')", "Maybe the NYPD deals with different issues at different times? 
Check the most popular complaints in July and August vs the month of May. Also check the most common complaints for the Housing Preservation Bureau (HPD) in winter vs. summer.", "df['2015-08']['Complaint Type'].value_counts().head(3)\n\ndf['2015-07']['Complaint Type'].value_counts().head(3)\n\ndf['2015-05']['Complaint Type'].value_counts().head(3)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cstrelioff/ARM-ipynb
Chapter3/chptr3.1.ipynb
mit
[ "3.1: One predictor", "from __future__ import print_function, division\n%matplotlib inline\n\nimport matplotlib\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n# use matplotlib style sheet\nplt.style.use('ggplot')\n\n# import statsmodels for R-style regression\nimport statsmodels.formula.api as smf", "Read the data\nData are in the child.iq directory of the ARM_Data download-- you might have\nto change the path I use below to reflect the path on your computer.", "kidiq = pd.read_stata(\"../../ARM_Data/child.iq/kidiq.dta\")\nkidiq.head()", "First regression-- binary predictor, Pg 31\nFit the regression using the non-jittered data", "fit0 = smf.ols('kid_score ~ mom_hs', data=kidiq).fit()\nprint(fit0.summary())", "Plot Figure 3.1, Pg 32\nA note for the python version:\n\nI have not included jitter, in the vertical or horizontal directions.\n Instead, the data is plotted with opacity so the regions with high\n data-density can be distinguished.", "fig0, ax0 = plt.subplots(figsize=(8, 6))\nhs_linspace = np.linspace(kidiq['mom_hs'].min(), kidiq['mom_hs'].max(), 50)\n\n# default color cycle\ncolors = plt.rcParams['axes.color_cycle']\n\n# plot points\nplt.scatter(kidiq['mom_hs'], kidiq['kid_score'],\n s=60, alpha=0.5, c=colors[1])\n# add fit\nplt.plot(hs_linspace, fit0.params[0] + fit0.params[1] * hs_linspace,\n lw=3, c=colors[1])\n\nplt.xlabel(\"Mother completed high school\")\nplt.ylabel(\"Child test score\")", "Second regression -- continuous predictor, Pg 32", "fit1 = smf.ols('kid_score ~ mom_iq', data=kidiq).fit()\nprint(fit1.summary())", "Figure 3.2, Pg 33", "fig1, ax1 = plt.subplots(figsize=(8, 6))\niq_linspace = np.linspace(kidiq['mom_iq'].min(), kidiq['mom_iq'].max(), 50)\n\n# default color cycle\ncolors = plt.rcParams['axes.color_cycle']\n\n# plot points\nplt.scatter(kidiq['mom_iq'], kidiq['kid_score'],\n s=60, alpha=0.5, c=colors[1])\n# add fit\nplt.plot(iq_linspace, fit1.params[0] + fit1.params[1] * iq_linspace,\n lw=3, 
c=colors[1])\n\nplt.xlabel(\"Mother IQ score\")\nplt.ylabel(\"Child test score\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/ml_ops/stage5/get_started_with_vertex_endpoint_and_shared_vm.ipynb
apache-2.0
# Copyright 2022 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "E2E ML on GCP: MLOps stage 5 : deployment: get started with Endpoint and shared VM\n<table align=\"left\">\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_endpoint_and_shared_vm.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_endpoint_and_shared_vm.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage5/get_started_with_vertex_endpoint_and_shared_vm.ipynb\">\n <img src=\"https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32\" alt=\"Vertex AI logo\">\n Open in Vertex AI Workbench\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use Vertex AI for E2E MLOps on Google Cloud in production. 
This tutorial covers stage 5 : deployment: get started with Endpoints and shared VM.\nPre-trained Models\nThe pre-trained models used for this tutorial are from the TensorFlow Hub repository:\n\nimage classification: trained with ImageNet.\ntext sentence encoder: Google's universal sentence encoder\n\nObjective\nIn this tutorial, you learn how to use shared VM deployment resource pools for deploying models. A shared VM deployment resource pool lets you co-host more than one model on the same (shared) VM.\nThis tutorial uses the following Google Cloud ML services:\n\nVertex AI Training\nVertex AI Model resource\nVertex AI Endpoint resource\n\nThe steps performed include:\n\nUpload a pre-trained image classification model as a Model resource (model A).\nUpload a pre-trained text sentence encoder model as a Model resource (model B).\nCreate a shared VM deployment resource pool.\nList shared VM deployment resource pools.\nCreate two Endpoint resources.\nDeploy first model (model A) to first Endpoint resource using shared VM deployment resource pool.\nDeploy second model (model B) to second Endpoint resource using shared VM deployment resource pool.\nMake a prediction request with first deployed model (model A).\nMake a prediction request with second deployed model (model B).\n\nCosts\nThis tutorial uses billable components of Google Cloud:\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI pricing and Cloud Storage pricing and use the Pricing Calculator to generate a cost estimate based on your projected usage.\nInstallations\nInstall the packages required for executing this notebook.", "import os\n\n# The Vertex AI Workbench Notebook product has specific requirements\nIS_WORKBENCH_NOTEBOOK = os.getenv(\"DL_ANACONDA_HOME\")\nIS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(\n \"/opt/deeplearning/metadata/env_version\"\n)\n\n# Vertex AI Notebook requires dependencies to be installed with '--user'\nUSER_FLAG = \"\"\nif IS_WORKBENCH_NOTEBOOK:\n 
USER_FLAG = \"--user\"\n\n# Install the packages\n\n! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q\n! pip3 install --upgrade tensorflow $USER_FLAG -q\n! pip3 install --upgrade tensorflow-hub $USER_FLAG -q", "Restart the kernel\nOnce you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.", "import os\n\nif not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Set up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex AI, Compute Engine and Cloud Storage APIs.\n\n\nIf you are running this notebook locally, you need to install the Cloud SDK.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.\nSet your project ID\nIf you don't know your project ID, you may be able to get your project ID using gcloud.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex AI. 
We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.\nLearn more about Vertex AI regions.", "REGION = \"[your-region]\" # @param {type: \"string\"}\n\nif REGION == \"[your-region]\":\n REGION = \"us-central1\"", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Vertex AI Workbench Notebooks, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\n\n\nClick Create service account.\n\n\nIn the Service account name field, enter a name, and click Create.\n\n\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex AI\" into the filter box, and select Vertex AI Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\n\n\nClick Create. A JSON file that contains your key downloads to your local environment.\n\n\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\nimport os\nimport sys\n\n# If on Vertex AI Workbench, then don't execute this code\nIS_COLAB = \"google.colab\" in sys.modules\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\") and not os.getenv(\n \"DL_ANACONDA_HOME\"\n):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"[your-bucket-name]\" # @param {type:\"string\"}\nBUCKET_URI = f\"gs://{BUCKET_NAME}\"\n\nif BUCKET_URI == \"\" or BUCKET_URI is None or BUCKET_URI == \"gs://[your-bucket-name]\":\n BUCKET_NAME = PROJECT_ID + \"aip-\" + TIMESTAMP\n BUCKET_URI = \"gs://\" + BUCKET_NAME", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_URI", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_URI", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants", "import google.cloud.aiplatform as aiplatform\nimport google.cloud.aiplatform_v1beta1 as aip_beta\nimport tensorflow as tf\nimport tensorflow_hub as hub", "Initialize Vertex AI SDK for Python\nInitialize the Vertex AI SDK for Python for your project and corresponding bucket.", "aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)", "Vertex AI constants\nSet up the following constants for Vertex AI:\n\nAPI_ENDPOINT: The Vertex AI API service endpoint for Endpoint services.", "# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "Set up clients\nVertex AI works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex AI server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nEndpoint Service for creating endpoints, and deploying models to endpoints.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_endpoint_client():\n client = aip_beta.EndpointServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"endpoint\"] = create_endpoint_client()\n\nfor client in clients.items():\n print(client)", "Set hardware accelerators\nYou can set hardware accelerators for training and prediction.\nSet the variables DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. 
For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nOtherwise specify (None, None) to use a container image to run on a CPU.\nLearn more about hardware accelerator support for your region.\nNote: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3. This is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.", "if os.getenv(\"IS_TESTING_DEPLOY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aiplatform.gapic.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPLOY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)", "Set pre-built containers\nSet the pre-built Docker container image for prediction.\nFor the latest list, see Pre-built containers for prediction.", "if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2.5\".replace(\".\", \"-\")\n\nif TF[0] == \"2\":\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nDEPLOY_IMAGE = \"{}-docker.pkg.dev/vertex-ai/prediction/{}:latest\".format(\n REGION.split(\"-\")[0], DEPLOY_VERSION\n)\n\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)", "Set machine type\nNext, set the machine type to use for prediction.\n\nSet the variable DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: You may also use n2 and e2 machine types for training and deployment, 
but they do not support GPUs.", "if os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Get pretrained model from TensorFlow Hub\nFor demonstration purposes, this tutorial uses pretrained models from TensorFlow Hub (TFHub), which are then uploaded to Vertex AI Model resources. Once you have a Vertex AI Model resource, the model can be deployed to a Vertex AI Endpoint resource.\nDownload the pretrained image classification model\nFirst, you download the pretrained image classification model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model. The downloaded model is pretrained on ImageNet.", "tfhub_model_icn = tf.keras.Sequential(\n [hub.KerasLayer(\"https://tfhub.dev/google/imagenet/inception_v3/classification/5\")]\n)\ntfhub_model_icn.build([None, 224, 224, 3])", "Save the model artifacts\nAt this point, the model is in memory. Next, you save the model artifacts to a Cloud Storage location.", "MODEL_ICN_DIR = BUCKET_URI + \"/model_icn\"\ntfhub_model_icn.save(MODEL_ICN_DIR)", "Upload the model for serving\nNext, you will upload your TFHub image classification model to Vertex Model service, which will create a Vertex Model resource for your model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex AI, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\nHow does the serving function work\nWhen you send a request to an online prediction server, the request is received by an HTTP server. 
The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function. For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.\nThe serving function consists of two parts:\n\npreprocessing function:\nConverts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).\nPerforms the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.\npost-processing function:\nConverts the model output to the format expected by the receiving application -- e.g., compresses the output.\nPackages the output for the receiving application -- e.g., add headings, make JSON object, etc.\n\nBoth the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\nOne thing to consider when building serving functions for TF.Keras models is that they run as static graphs. That means you cannot use TF graph operations that require a dynamic graph. If you do, you will get an error during the compile of the serving function which will indicate that you are using an EagerTensor which is not supported.\nServing function for image data\nPreprocessing\nTo pass images to the prediction service, you encode the compressed (e.g., JPEG) image bytes into base 64 -- which makes the content safe from modification while transmitting binary data over the network. 
Since this deployed model expects input data as raw (uncompressed) bytes, you need to ensure that the base 64 encoded data gets converted back to raw bytes, and then preprocessed to match the model input requirements, before it is passed as input to the deployed model.\nTo resolve this, you define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is fused to the underlying model (instead of upstream on a CPU).\nWhen you send a prediction or explanation request, the content of the request is base 64 decoded into a Tensorflow string (tf.string), which is passed to the serving function (serving_fn). The serving function preprocesses the tf.string into raw (uncompressed) numpy bytes (preprocess_fn) to match the input requirements of the model:\n\nio.decode_jpeg - Decompresses the JPG image which is returned as a Tensorflow tensor with three channels (RGB).\nimage.convert_image_dtype - Changes integer pixel values to float 32, and rescales pixel data between 0 and 1.\nimage.resize - Resizes the image to match the input shape for the model.\n\nAt this point, the data can be passed to the model (m_call) via a concrete function. The serving function is a static graph, while the model is a dynamic graph. 
The concrete function performs the tasks of marshalling the input data from the serving function to the model, and marshalling the prediction result from the model back to the serving function.", "CONCRETE_INPUT = \"numpy_inputs\"\n\n\ndef _preprocess(bytes_input):\n decoded = tf.io.decode_jpeg(bytes_input, channels=3)\n decoded = tf.image.convert_image_dtype(decoded, tf.float32)\n resized = tf.image.resize(decoded, size=(224, 224))\n return resized\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef preprocess_fn(bytes_inputs):\n decoded_images = tf.map_fn(\n _preprocess, bytes_inputs, dtype=tf.float32, back_prop=False\n )\n return {\n CONCRETE_INPUT: decoded_images\n } # User needs to make sure the key matches model's input\n\n\n@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\ndef serving_fn(bytes_inputs):\n images = preprocess_fn(bytes_inputs)\n prob = m_call(**images)\n return prob\n\n\nm_call = tf.function(tfhub_model_icn.call).get_concrete_function(\n [tf.TensorSpec(shape=[None, 224, 224, 3], dtype=tf.float32, name=CONCRETE_INPUT)]\n)\n\ntf.saved_model.save(\n tfhub_model_icn, MODEL_ICN_DIR, signatures={\"serving_default\": serving_fn}\n)", "Get the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nFor your purpose, you need the signature of the serving function. Why? Well, when we send our data for prediction as a HTTP request packet, the image data is base64 encoded, and our TF.Keras model takes numpy input. 
Your serving function will do the conversion from base64 to a numpy array.\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.", "loaded = tf.saved_model.load(MODEL_ICN_DIR)\n\nserving_input_icn = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input_icn)", "Upload the TensorFlow Hub model to a Vertex AI Model resource\nFinally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource.\nNote: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image.", "model_icn = aiplatform.Model.upload(\n display_name=\"icn_\" + TIMESTAMP,\n artifact_uri=MODEL_ICN_DIR,\n serving_container_image_uri=DEPLOY_IMAGE,\n)\n\nprint(model_icn)", "Download the pretrained sentence encoder model\nNext, you download the pretrained text sentence encoder model from TensorFlow Hub. The model gets downloaded as a TF.Keras layer. To finalize the model, in this example, you create a Sequential() model with the downloaded TFHub model as a layer, and specify the input shape to the model.", "tfhub_model_use = tf.keras.Sequential(\n [hub.KerasLayer(\"https://tfhub.dev/google/universal-sentence-encoder/4\")]\n)\n\n# force the model to build\ntfhub_model_use.predict([\"foo\"])", "Save the model artifacts\nAt this point, the model is in memory. 
Next, you save the model artifacts to a Cloud Storage location.", "MODEL_USE_DIR = BUCKET_URI + \"/model_use\"\ntfhub_model_use.save(MODEL_USE_DIR)", "Get the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nFor your purpose, you need the signature of the serving function. \nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.", "loaded = tf.saved_model.load(MODEL_USE_DIR)\n\nserving_input_use = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input_use)", "Upload the TensorFlow Hub model to a Vertex AI Model resource\nFinally, you upload the model artifacts from the TFHub model and serving function into a Vertex AI Model resource.\nNote: When you upload the model artifacts to a Vertex AI Model resource, you specify the corresponding deployment container image.", "model_use = aiplatform.Model.upload(\n display_name=\"use_\" + TIMESTAMP,\n artifact_uri=MODEL_USE_DIR,\n serving_container_image_uri=DEPLOY_IMAGE,\n)\n\nprint(model_use)", "Creating a deployment resource pool\nCurrently, creating shared VM resource pools is only supported via the REST-based API (e.g., CURL).\nUse CreateDeploymentResourcePool API to create a resource pool, with the following configuration:\n\ndedicated_resources: Compute (HW) resources to allocate for the shared VM.\nmin_replica_count: Auto-scaling, the minimum number of compute nodes.\nmax_replica_count: Auto-scaling, the maximum number of compute nodes.\n\nLearn more about Deployment Resource Pools.", "DEPLOYMENT_RESOURCE_POOL_ID = \"shared-vm\" # @param {type: \"string\"}\n\nimport json\nimport pprint\npp = 
pprint.PrettyPrinter(indent=4)\n\nMIN_NODES = 1\nMAX_NODES = 2\n\nCREATE_RP_PAYLOAD = {\n \"deployment_resource_pool\":{\n \"dedicated_resources\":{\n \"machine_spec\":{\n \"machine_type\": DEPLOY_COMPUTE\n },\n \"min_replica_count\": MIN_NODES, \n \"max_replica_count\": MAX_NODES\n }\n },\n \"deployment_resource_pool_id\":DEPLOYMENT_RESOURCE_POOL_ID\n}\nCREATE_RP_REQUEST = json.dumps(CREATE_RP_PAYLOAD)\npp.pprint(\"CREATE_RP_REQUEST: \" + CREATE_RP_REQUEST)\n\n! curl \\\n-X POST \\\n-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n-H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/deploymentResourcePools \\\n-d '{CREATE_RP_REQUEST}'", "Get a deployment resource pool\nUse GetDeploymentResourcePool API to check out the resource pool that you created. \nLearn more about Get Deployment Resource Pool.", "! curl -X GET \\\n-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n-H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/deploymentResourcePools/{DEPLOYMENT_RESOURCE_POOL_ID}", "List all deployment resource pools\nUse ListDeploymentResourcePools API to list all the resource pools. \nLearn more about Listing Deployment Resource Pools.", "! curl -X GET \\\n-H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n-H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/deploymentResourcePools", "Creating two Endpoint resources\nNext, you create two Endpoint resources using the Endpoint.create() method. At a minimum, you specify the display name for the endpoint. 
Optionally, you can specify the project and location (region); otherwise the settings are inherited from the values you set when you initialized the Vertex AI SDK with the init() method.\nIn this example, the following parameters are specified:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nThis method returns an Endpoint object.\nLearn more about Vertex AI Endpoints.", "endpoint_icn = aiplatform.Endpoint.create(display_name=\"icn_\" + TIMESTAMP)\n\nprint(endpoint_icn)\n\nendpoint_use = aiplatform.Endpoint.create(display_name=\"use_\" + TIMESTAMP)\n\nprint(endpoint_use)", "Deploy Model in a Deployment Resource Pool\nAfter you have created a Model and an Endpoint, you are ready to deploy using the DeployModel API. See an example of the CURL command below. Notice how you specified the shared_resources of DeployedModel with the resource name of the resource pool that was created. \nModel deployments for the same resource pool can be started concurrently.\nDeploy the image classification model\nNext, you deploy the image classification model to an Endpoint using the shared deployment resource pool.", "SHARED_RESOURCE = \"projects/{project_id}/locations/{region}/deploymentResourcePools/{deployment_resource_pool_id}\".format(\n project_id=PROJECT_ID,\n region=REGION,\n deployment_resource_pool_id=DEPLOYMENT_RESOURCE_POOL_ID,\n)\n\nDEPLOY_MODEL_PAYLOAD = {\n \"deployedModel\": {\n \"model\": model_icn.resource_name,\n \"shared_resources\": SHARED_RESOURCE,\n },\n \"trafficSplit\": {\"0\": 100},\n}\nDEPLOY_MODEL_REQUEST = json.dumps(DEPLOY_MODEL_PAYLOAD)\npp.pprint(\"DEPLOY_MODEL_REQUEST: \" + DEPLOY_MODEL_REQUEST)\n\nENDPOINT_ID = endpoint_icn.name\n\noutput = ! 
curl -X POST \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:deployModel \\\n-d '{DEPLOY_MODEL_REQUEST}'\n\noperation_id = output[6].split(\":\")[-1].strip()[:-1]\nprint(operation_id)", "Wait for deployment to complete\nNext, you will query the status of the operation, waiting for the operation state done to be set to true.", "import time\n\ndone = False\nwhile done != '\"done\": true':\n status = ! curl -X GET \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/{operation_id}\n done = status[14].strip()[0:-1]\n print(\"DONE status:\", done)\n time.sleep(30)", "Deploy the text sentence encoder model\nNext, you deploy the text sentence encoder model to an Endpoint using the shared deployment resource pool.", "DEPLOY_MODEL_PAYLOAD = {\n \"deployedModel\": {\n \"model\": model_use.resource_name,\n \"shared_resources\": SHARED_RESOURCE,\n },\n \"trafficSplit\": {\"0\": 100},\n}\nDEPLOY_MODEL_REQUEST = json.dumps(DEPLOY_MODEL_PAYLOAD)\npp.pprint(\"DEPLOY_MODEL_REQUEST: \" + DEPLOY_MODEL_REQUEST)\n\nENDPOINT_ID = endpoint_use.name\n\noutput = ! curl -X POST \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}:deployModel \\\n-d '{DEPLOY_MODEL_REQUEST}'\n\noperation_id = output[6].split(\":\")[-1].strip()[:-1]\nprint(operation_id)", "Wait for deployment to complete\nNext, you will query the status of the operation, waiting for the operation state done to be set to true.", "done = False\nwhile done != '\"done\": true':\n status = ! 
curl -X GET \\\n -H \"Authorization: Bearer $(gcloud auth print-access-token)\" \\\n -H \"Content-Type: application/json\" \\\nhttps://{REGION}-aiplatform.googleapis.com/v1beta1/{operation_id}\n done = status[14].strip()[0:-1]\n print(\"DONE status:\", done)\n time.sleep(30)", "Create a test example for the image classification model\nNext, you test your deployed image classification model. First, you encode your test data for the serving function, which is in the format:\n{ serving_input: { 'b64': base64_encoded_bytes } }", "! gsutil cp gs://cloud-ml-data/img/flower_photos/daisy/100080576_f52e8ee070_n.jpg test.jpg\n\nimport base64\n\nwith open(\"test.jpg\", \"rb\") as f:\n data = f.read()\nb64str = base64.b64encode(data).decode(\"utf-8\")", "Make the prediction request for the image classification model\nFinally, you make a prediction request. Since the model was trained on ImageNet, the prediction will return the probabilities for the corresponding 1000 classes.", "# The format of each instance should conform to the deployed model's prediction input schema.\ninstances = [{serving_input_icn: {\"b64\": b64str}}]\n\nprediction = endpoint_icn.predict(instances=instances)\n\nprint(prediction)", "Create a test example for the text sentence encoder model\nNext, you test your deployed text sentence encoder model. First, you encode your test data for the serving function, which is in the format:\n\"word1 word2 ... wordN\"", "instance = \"the brown fox jumped over the lazy dog\"", "Make the prediction request for the text sentence encoder model\nFinally, you make a prediction request. The prediction will return an embedding which is a 500 element vector.", "endpoint_use.predict([instance])", "Undeploy the models\nWhen you are done doing predictions, you undeploy the model from the Endpoint resource. 
This deprovisions all compute resources and ends billing for the deployed model.", "endpoint_icn.undeploy_all()\nendpoint_use.undeploy_all()", "Delete the Model resources\nThe method 'delete()' will delete the model.", "model_icn.delete()\nmodel_use.delete()", "Delete the Endpoint resources\nThe method 'delete()' will delete the endpoint.", "endpoint_icn.delete()\nendpoint_use.delete()", "Cleaning up\nTo clean up all Google Cloud resources used in this project, you can delete the Google Cloud\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial.", "# Set this to true only if you'd like to delete your bucket\ndelete_bucket = False\n\nif delete_bucket or os.getenv(\"IS_TESTING\"):\n ! gsutil rm -r $BUCKET_URI\n\n!rm -f test.jpg" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Kaggle/learntools
notebooks/feature_engineering/raw/tut2.ipynb
apache-2.0
[ "Introduction\nNow that you've built a baseline model, you are ready to improve it with some clever ways to work with categorical variables. \nYou are already familiar with the most basic encodings: one-hot encoding and label encoding. In this tutorial, you'll learn about count encoding, target encoding, and CatBoost encoding.\nWe begin by running the code to rebuild the baseline model from the first tutorial.", "#$HIDE_INPUT$\nimport pandas as pd\nfrom sklearn.preprocessing import LabelEncoder\n\nks = pd.read_csv('../input/kickstarter-projects/ks-projects-201801.csv',\n parse_dates=['deadline', 'launched'])\n\n# Drop live projects\nks = ks.query('state != \"live\"')\n\n# Add outcome column, \"successful\" == 1, others are 0\nks = ks.assign(outcome=(ks['state'] == 'successful').astype(int))\n\n# Timestamp features\nks = ks.assign(hour=ks.launched.dt.hour,\n day=ks.launched.dt.day,\n month=ks.launched.dt.month,\n year=ks.launched.dt.year)\n\n# Label encoding\ncat_features = ['category', 'currency', 'country']\nencoder = LabelEncoder()\nencoded = ks[cat_features].apply(encoder.fit_transform)\n\ndata_cols = ['goal', 'hour', 'day', 'month', 'year', 'outcome']\ndata = ks[data_cols].join(encoded)\n\n# Defining functions that will help us test our encodings\nimport lightgbm as lgb\nfrom sklearn import metrics\n\ndef get_data_splits(dataframe, valid_fraction=0.1):\n valid_fraction = 0.1\n valid_size = int(len(dataframe) * valid_fraction)\n\n train = dataframe[:-valid_size * 2]\n # valid size == test size, last two sections of the data\n valid = dataframe[-valid_size * 2:-valid_size]\n test = dataframe[-valid_size:]\n \n return train, valid, test\n\ndef train_model(train, valid):\n feature_cols = train.columns.drop('outcome')\n\n dtrain = lgb.Dataset(train[feature_cols], label=train['outcome'])\n dvalid = lgb.Dataset(valid[feature_cols], label=valid['outcome'])\n\n param = {'num_leaves': 64, 'objective': 'binary', \n 'metric': 'auc', 'seed': 7}\n bst = lgb.train(param, 
dtrain, num_boost_round=1000, valid_sets=[dvalid], \n early_stopping_rounds=10, verbose_eval=False)\n\n valid_pred = bst.predict(valid[feature_cols])\n valid_score = metrics.roc_auc_score(valid['outcome'], valid_pred)\n print(f\"Validation AUC score: {valid_score:.4f}\")\n\n# Train a model (on the baseline data)\ntrain, valid, test = get_data_splits(data)\ntrain_model(train, valid)", "Count Encoding\nCount encoding replaces each categorical value with the number of times it appears in the dataset. For example, if the value \"GB\" occurred 10 times in the country feature, then each \"GB\" would be replaced with the number 10.\nWe'll use the categorical-encodings package to get this encoding. The encoder itself is available as CountEncoder. This encoder and the others in categorical-encodings work like scikit-learn transformers with .fit and .transform methods.", "import category_encoders as ce\ncat_features = ['category', 'currency', 'country']\n\n# Create the encoder\ncount_enc = ce.CountEncoder()\n\n# Transform the features, rename the columns with the _count suffix, and join to dataframe\ncount_encoded = count_enc.fit_transform(ks[cat_features])\ndata = data.join(count_encoded.add_suffix(\"_count\"))\n\n# Train a model \ntrain, valid, test = get_data_splits(data)\ntrain_model(train, valid)", "Adding the count encoding features increases the validation score from 0.7467 to 0.7486, only a slight improvement.\nTarget Encoding\nTarget encoding replaces a categorical value with the average value of the target for that value of the feature. For example, given the country value \"CA\", you'd calculate the average outcome for all the rows with country == 'CA', around 0.28. This is often blended with the target probability over the entire dataset to reduce the variance of values with few occurrences.\nThis technique uses the targets to create new features. So including the validation or test data in the target encodings would be a form of target leakage. 
Instead, you should learn the target encodings from the training dataset only and apply them to the other datasets.\nThe category_encoders package provides TargetEncoder for target encoding. The implementation is similar to CountEncoder.", "# Create the encoder\ntarget_enc = ce.TargetEncoder(cols=cat_features)\ntarget_enc.fit(train[cat_features], train['outcome'])\n\n# Transform the features, rename the columns with _target suffix, and join to dataframe\ntrain_TE = train.join(target_enc.transform(train[cat_features]).add_suffix('_target'))\nvalid_TE = valid.join(target_enc.transform(valid[cat_features]).add_suffix('_target'))\n\n# Train a model\ntrain_model(train_TE, valid_TE)", "The validation score is higher again, from 0.7467 to 0.7491.\nCatBoost Encoding\nFinally, we'll look at CatBoost encoding. This is similar to target encoding in that it's based on the target probability for a given value. However with CatBoost, for each row, the target probability is calculated only from the rows before it.", "# Create the encoder\ntarget_enc = ce.CatBoostEncoder(cols=cat_features)\ntarget_enc.fit(train[cat_features], train['outcome'])\n\n# Transform the features, rename columns with _cb suffix, and join to dataframe\ntrain_CBE = train.join(target_enc.transform(train[cat_features]).add_suffix('_cb'))\nvalid_CBE = valid.join(target_enc.transform(valid[cat_features]).add_suffix('_cb'))\n\n# Train a model\ntrain_model(train_CBE, valid_CBE)", "This does slightly better than target encoding.\nYour Turn\nTry encoding categorical features yourself." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
VVard0g/ThreatHunter-Playbook
docs/notebooks/windows/03_persistence/WIN-190810170510.ipynb
mit
[ "WMI Eventing\nMetadata\n| Metadata | Value |\n|:------------------|:---|\n| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |\n| creation date | 2019/08/10 |\n| modification date | 2020/09/20 |\n| playbook related | [] |\nHypothesis\nAdversaries might be leveraging WMI eventing for persistence in my environment.\nTechnical Context\nWMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.\nAn example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.\nAt a high level, Microsoft implementation of these standards can be summarized as follows > Managed Components Managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. 
Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.\nOffensive Tradecraft\nFrom an offensive perspective WMI has the ability to trigger off nearly any conceivable event, making it a good technique for persistence.\nThree requirements\n* Filter - An action to trigger off of\n* Consumer - An action to take upon triggering the filter\n* Binding - Registers a FilterConsumer\nSecurity Datasets\n| Metadata | Value |\n|:----------|:----------|\n| docs | https://securitydatasets.com/notebooks/atomic/windows/persistence/SDWIN-190518184306.html |\n| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip |\nAnalytics\nInitialize Analytics Engine", "from openhunt.mordorutils import *\nspark = get_spark()", "Download & Process Security Dataset", "sd_file = \"https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip\"\nregisterMordorSQLTable(spark, sd_file, \"sdTable\")", "Analytic I\nLook for WMI event filters registered\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi filter | 19 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, User, EventNamespace, Name, Query\nFROM sdTable\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 19\n'''\n)\ndf.show(10,False)", "Analytic II\nLook for WMI event consumers registered\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, User, Name, Type, Destination\nFROM sdTable\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 
20\n'''\n)\ndf.show(10,False)", "Analytic III\nLook for WMI consumers binding to filters\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi subscription | 21 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, User, Operation, Consumer, Filter\nFROM sdTable\nWHERE Channel = \"Microsoft-Windows-Sysmon/Operational\"\n AND EventID = 21\n'''\n)\ndf.show(10,False)", "Analytic IV\nLook for events related to the registration of FilterToConsumerBinding\n| Data source | Event Provider | Relationship | Event |\n|:------------|:---------------|--------------|-------|\n| WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 |", "df = spark.sql(\n'''\nSELECT `@timestamp`, Hostname, Message\nFROM sdTable\nWHERE Channel = \"Microsoft-Windows-WMI-Activity/Operational\"\n AND EventID = 5861\n'''\n)\ndf.show(10,False)", "Known Bypasses\nFalse Positives\nNone\nHunter Notes\nNone\nReferences\n\nhttps://www.blackhat.com/docs/us-15/materials/us-15-Graeber-Abusing-Windows-Management-Instrumentation-WMI-To-Build-A-Persistent%20Asynchronous-And-Fileless-Backdoor.pdf\nhttps://twitter.com/mattifestation/status/899646620148539397\nhttps://www.darkoperator.com/blog/2017/10/14/basics-of-tracking-wmi-activity" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.12/_downloads/plot_sensor_regression.ipynb
bsd-3-clause
[ "%matplotlib inline", "Sensor space least squares regression\nPredict single trial activity from a continuous variable.\nA single-trial regression is performed in each sensor and timepoint\nindividually, resulting in an Evoked object which contains the\nregression coefficient (beta value) for each combination of sensor\nand timepoint. Example also shows the T statistics and the associated\np-values.\nNote that this example is for educational purposes and that the data used\nhere do not contain any significant effect.\n(See Hauk et al. (2006). The time course of visual word recognition as\nrevealed by linear regression analysis of ERP data. Neuroimage.)", "# Authors: Tal Linzen <linzen@nyu.edu>\n# Denis A. Engemann <denis.engemann@gmail.com>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.stats.regression import linear_regression\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters and read data", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\ntmin, tmax = -0.2, 0.5\nevent_id = dict(aud_l=1, aud_r=2)\n\n# Setup for reading the raw data\nraw = mne.io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\npicks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False,\n eog=False, exclude='bads')\n\n# Reject some epochs based on amplitude\nreject = dict(mag=5e-12)\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=reject)", "Run regression", "names = ['intercept', 'trial-count']\n\nintercept = np.ones((len(epochs),), dtype=np.float)\ndesign_matrix = np.column_stack([intercept, # intercept\n np.linspace(0, 1, len(intercept))])\n\n# also accepts source estimates\nlm = linear_regression(epochs, design_matrix, names)\n\n\ndef plot_topomap(x, unit):\n x.plot_topomap(ch_type='mag', scale=1, size=1.5, 
vmax=np.max,\n unit=unit, times=np.linspace(0.1, 0.2, 5))\n\ntrial_count = lm['trial-count']\n\nplot_topomap(trial_count.beta, unit='z (beta)')\nplot_topomap(trial_count.t_val, unit='t')\nplot_topomap(trial_count.mlog10_p_val, unit='-log10 p')\nplot_topomap(trial_count.stderr, unit='z (error)')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
yashdeeph709/Algorithms
PythonBootCamp/Complete-Python-Bootcamp-master/If, elif, and else Statements.ipynb
apache-2.0
[ "if,elif,else Statements\nif Statements in Python allows us to tell the computer to perform alternative actions based on a certain set of results.\nVerbally, we can imagine we are telling the computer:\n\"Hey if this case happens, perform some action\"\nWe can then expand the idea further with elif and else statements, which allow us to tell the computer:\n\"Hey if this case happens, perform some action. Else if another case happens, perform some other action. Else-- none of the above cases happened, perform this action\"\nLet's go ahead and look at the syntax format for if statements to get a better idea of this:\nif case1:\n perform action1\nelif case2:\n perform action2\nelse: \n perform action 3\n\nFirst Example\nLet's see a quick example of this:", "if True:\n print 'It was true!'", "Let's add in some else logic:", "x = False\n\nif x:\n print 'x was True!'\nelse:\n print 'I will be printed in any case where x is not true'", "Multiple Branches\nLet's get a fuller picture of how far if, elif, and else can take us!\nWe write this out in a nested structure. Take note of how the if,elif,and else line up in the code. This can help you see what if is related to what elif or else statements.\nWe'll reintroduce a comparison syntax for Python.", "loc = 'Bank'\n\nif loc == 'Auto Shop':\n print 'Welcome to the Auto Shop!'\nelif loc == 'Bank':\n print 'Welcome to the bank!'\nelse:\n print \"Where are you?\"", "Note how the nested if statements are each checked until a True boolean causes the nested code below it to run. 
You should also note that you can put in as many elif statements as you want before you close off with an else.\nLet's create two more simple examples for the if,elif, and else statements:", "person = 'Sammy'\n\nif person == 'Sammy':\n print 'Welcome Sammy!'\nelse:\n print \"Welcome, what's your name?\" \n\nperson = 'George'\n\nif person == 'Sammy':\n print 'Welcome Sammy!'\nelif person =='George':\n print \"Welcome George!\"\nelse:\n print \"Welcome, what's your name?\" ", "Indentation\nIt is important to keep a good understanding of how indentation works in Python to maintain the structure and order of your code. We will touch on this topic again when we start building out functions!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
vladan-jovicic/ComplexNetworks
Notebooks/assignment3.ipynb
mit
[ "Assignment 3\nProblem 1\nThe goal of this exercise is to compare the clustering coefficient and the length of the shortest path of the Watts-Strogatz model with the Erdos-Renyi model. To do so, we will create multiple instances of the Watts-Strogatz model with $N=100$ vertices and different values for $k$. Similarly, we will generate multiple instances of the ER model with $N=100$ and $m = \\frac{k\\cdot N}{2}$.\nFirstly, we will import the necessary packages.", "import sys, os\nsys.path.insert(0, '../src/')\nimport algorithms.watts_strogatz as ws\nimport algorithms.erdos_renyi as er\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nN, beta = 100, 0.5\nK = np.arange(30, 90, 2)\ner_graphs = []\nws_graphs = []\nfor k in K:\n m = k*N // 2\n ws_graphs.append(ws.watts_strogatz(N, k, beta, seed=1234))\n er_graphs.append(er.er_nm(N, m))", "Now we will make two different plots. In one, we will compare the diameters of the above generated graphs and in the second one, we will compare the clustering coefficient.", "# on x axis, plot k\n# on y axis, plot the diameter\nws_diam, er_diam, ws_cc, er_cc = [], [], [], []\n\nfor g_ws, g_er in zip(ws_graphs, er_graphs):\n ws_cc.append(g_ws.global_clustering_coefficient())\n ws_diam.append(g_ws.diameter())\n er_cc.append(g_er.global_clustering_coefficient())\n # ER graphs may be disconnected, so treat infinite path lengths as 0\n paths = g_er.shortest_path()\n max_path = [[(0 if np.isinf(paths[u][v]) else paths[u][v]) for v in g_er.vertices()] for u in g_er.vertices()]\n er_diam.append(max(max(max_path)))\n\n\nfig = plt.figure(figsize = (15, 8))\nplt.subplot(211)\ner_line, = plt.plot(K, er_diam, 'b', label=\"Erdos Renyi diameter\")\nws_line, = plt.plot(K, ws_diam, 'r', label=\"Watts-Strogatz diameter\")\nplt.legend([er_line, ws_line])\nplt.xlabel(\"k\")\nplt.ylabel(\"diameter\")\n\n\nplt.subplot(212)\ncc_er_line, = plt.plot(K, er_cc, 'b', label=\"Erdos Renyi clust coef\")\ncc_ws_line, = plt.plot(K, ws_cc, 'r', label=\"Watts-Strogatz clust coef\")\nplt.legend([cc_er_line, cc_ws_line])\nplt.xlabel(\"k\")\nplt.ylabel(\"clust coef\")\n\nplt.show()", "As we can see from the above plots, graphs following the Watts-Strogatz model have approximately the same diameter as ER graphs while the clustering coefficient is increased.\nProblem 2\nThe goal of this exercise is to show that the Barabasi-Albert model produces a scale free network.\nTo do that, we will generate a random graph following the BA model and compute the degree distribution. Then, we will try to approximate the obtained distribution with a power law distribution.", "import importlib\nimport algorithms.barabasi_albert as ba\nimportlib.reload(ba)\nN, m, gamma = 1000, 100, -2\nM = np.arange(90, 150)\n\ng = ba.barabasi_albert(N, m)\n\n# ba_graphs = [ba.barabasi_albert(N, m) for m in M]\n\ngamma = -0.95\nfig = plt.figure(figsize=(15, 8))\ndegree_seq = np.array(g.degree_sequence(), dtype=np.float32)\ndegree_dist = degree_seq / sum(degree_seq)\nplt.hist(degree_seq, normed=True)\ndegree_range = np.arange(100, 400, dtype=np.float32)\np_law = [k**gamma for k in degree_range]\npl_line, = plt.plot(degree_range, p_law, 'r-', label=\"P(k) = k^(-0.95)\")\nplt.legend([pl_line])\nplt.xlabel(\"degree\")\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
Paul-St-Young/share
python-tutorial/python.ipynb
mit
[ "script available on GitHub\nInstallation and Setup\nInstallation\nRefer to this how-to.\nManage Python Packages\nPython has its own package manager \"pip\" to keep Python self-contained. pip also allows access to new packages even if your OS is out-of-date. If you installed Python using Anaconda, then your package manager will be \"conda\", which also has nice documentation.\nRunning Python\ninteractive mode\nOpen a terminal and type \"python\"\nrun a python script\n\nPut Python code (eg. print(\"hello world\")) in a file, say \"hello.py\".\nInvoke the Python interpreter with the file as its first argument.\nbash\npython hello.py\n\nrecommended editors\nIf you plan to do computational research in the future, please pick either emacs or vim. They are excellent command-line editors with many academic users. Command-line editors have steeper learning curves than GUI-based ones, but you can do way more with them over time (good luck using GUI editors on supercomputers). Many excellent online tutorials exist.\n\nAtom\nSublime\nEmacs\nVim\n\nRules!\nRule #1: Write Comments\nThe more the better!", "# I don't know how to write a program but I am charming, \n# so I will write down the equations to be implemented \n# and find a friend to write it :)\n\"\"\" \nIt is annoying to have to start each comment with a #, \n triple quotation allows multi-line comments. \n \nIt is always a good idea to write lots of comment to lay out the \ncohesive idea you had while starting to write a piece of code. \nMore often than not, we forget that impressive grand plan we started\nwith as we fight with syntax error and other nitty-gritty of\ntalking to a computer.\n\"\"\";", "Rule #2: Follow Best Practices\nAn excellent paper by Greg Wilson et al. concisely summarizes the best practices of scientific computing. 
I will steal the most relevant section from the summary paragraph here:\n\nWrite programs for people, not computers\nA program should not require its readers to hold more than a handful of facts in memory at once.\nMake names consistent, distinctive, and meaningful.\nMake code style and formatting consistent.\nLet the computer do the work.\nMake the computer repeat tasks.\nSave recent commands in a file for re-use.\nUse a build tool (or Jupyter notebook) to automate and save workflows.\nMake incremental changes.\nWork in small steps with frequent feedback and course correction.\nUse a version control system (eg. git,subversion)\nUpload all work into version control system\nDon't repeat yourself (or others)\nEvery piece of data must have a single authoritative representation in the system.\nModularize code rather than copying and pasting.\nRe-use code (yours or others) instead of rewriting it.\nPlan for mistakes\nAdd assertions to programs to check their operation.\nUse an off-the-shelf unit testing library.\nTurn bugs into test cases.\n\nBasic Use Cases\nMuch of the following can be found on \"A Beginner's Python Tutorial\"\nUsing python as a calculator\nbasic arithmetic is built in", "1 + 1\n\n2*3\n\n2**3\n\n7/2 # gotcha !\n\n7./2\n\n5%2 # modulo", "more advanced functions can be accessed using the numpy package", "import numpy as np\n\nnp.exp(1j)\n\nnp.cos(1) + 1j*np.sin(1)\n\nnp.sqrt(144)", "Loop and Condition", "for my_index in [1,2,3]:\n print(my_index)\n# end for\n\nfor my_index in range(3):\n print(my_index)\n# end for\n\n# while loop may not terminate\nmy_index = 1\nwhile my_index < 4:\n print(my_index)\n my_index += 1 # try comment this out... 
JK don't do it!\n# end while", "python uses indentation to determine blocks, you could easily have done\npython\nmy_index = 1\nwhile my_index &lt; 4:\n print(my_index)\nmy_index += 1\nthat would be a big oopsy\nintroducing break", "# for loop always terminates, thus it is preferred\nfor my_index in range(10):\n if (my_index>0) and (my_index<=3):\n print(my_index)\n elif (my_index>3):\n break\n # end if\n# end for", "Functions\ndefine modular functions to maximize code reusability and readability", "def boltzmann_factor(energy_in_J,temperature_in_K):\n # 1 joule = 7.243e22 K *kB\n kB = 1.38e-23 # m^2 kg/ s^2 K\n return np.exp(-float(energy_in_J)/kB/temperature_in_K)\n# end def\n\ndef fermi_dirac_dist(energy_in_J,temperature_in_K,chemical_pot_in_J):\n denomimator = 1.0/boltzmann_factor(\n energy_in_J-chemical_pot_in_J\n ,temperature_in_K\n ) + 1.0\n return 1.0/denomimator\n# end def\n\ndef bose_einstein_dist(energy_in_J,temperature_in_K,chemical_pot_in_J):\n denomimator = 1.0/boltzmann_factor(\n energy_in_J-chemical_pot_in_J\n ,temperature_in_K\n ) - 1.0\n return 1.0/denomimator\n# end def\n\n# 50% occupation near chemical potential\nfermi_dirac_dist(1.01e-22,300,1e-22)\n\n# divergent occupation near chemical potential\nbose_einstein_dist(1.01e-22,300,1e-22)", "Tuples, Lists, Dictionaries\nlist: iterable, extendable, mutable and ordered array of elements\ntuple: immutable list\ndictionary: iterable, extendable, mutable and un-ordered key-value pairs", "mylist = [5,4,2,3,1]\nfor item in mylist:\n print(item)\n# end for\n\nmylist[2] = 100\nmylist.insert(0,50)\n\nfor i in range(len(mylist)):\n print( mylist[i] )\n# end for\n\nmytuple = (5,4,2,3,1)\nfor item in mytuple:\n print(item)\n# end for\n\nmytuple[2] = 100\n# oopsy-daisies\n\nmydict = {0:5,1:4,2:2,3:3,4:1}\nfor i in range(len(mydict)):\n print( mydict[i] )\n# end for\n\nmydict = {\n \"name\":\"Paul\"\n ,\"favorite number\":42\n ,\"where abouts\":\"elusive\"\n 
,\"hobbies\":[\"coffee\",\"code\"]\n}\nmydict.keys()\n\nmydict[\"where abouts\"]\n\nmydict[\"new entry\"] = False\n\nfor key,value in mydict.iteritems():\n print( \"%s : %s\" % (str(key),str(value)) )\n# end for", "List Comprehension", "mylist = [5,4,2,3,1]\n[item**2 for item in mylist]\n\nsquare_and_shift = lambda x,y:x**2+y\n[square_and_shift(item,50) for item in mylist]", "List splicing", "# from index 1 to -2 (wrap around)\nmylist[1:-2]\n\n# all even indices\nmylist[::2]\n\n# all odd indices\nmylist[1::2]", "gotcha!", "mylist = [5,4,2,3,1]\nentry = [1,2]\nmylist.append(entry)\n# only a reference to entry is saved, NOT a copy, which means ...\n# entry can be changed elsewhere without mylist knowing\n\nmylist\n\nentry[0] = 10\nmylist\n\n# use a deep copy to avoid the above problem\nfrom copy import deepcopy\nmylist = [5,4,2,3,1]\nentry = [1,2]\nmylist.append( deepcopy(entry) )\nentry[0] = 10\nmylist", "Variables and Scope\nThe scope of a variable is the union of all places in the code where the variable can be accessed. Variables in a function are \"local\" and cannot be access by other parts of the program unless returned.", "demon_burn_my_soul = 50.0\n\ndef firey_hell(demon_burn_my_soul):\n demon_burn_my_soul += 10.\n \nfirey_hell(20)\nprint(demon_burn_my_soul)\n\n# you can use a global variable, but this is NOT recommended\n# see classes for better solution\nglobal demon_burn_my_soul \ndemon_burn_my_soul = 50.0\n\ndef firey_hell():\n # side effect! bad! bad! bad!\n global demon_burn_my_soul\n demon_burn_my_soul += 10.\n \nfirey_hell()\nprint(demon_burn_my_soul)", "Classes\nClasses help bundle together related variables and functions. 
Well-designed classes are sensible abstract objects that will allow higher level programming without the need to worry about details of implementation.\nfun fact", "class RockStar:\n def __init__(self):\n self.demon_burn_my_soul = 50.0\n # end def init\n \n def firey_hell(self):\n self.demon_burn_my_soul += 10.0\n # end def\n \n def cry_my_veins(self):\n return self.demon_burn_my_soul\n # end def cry_my_veins\n# end class RockStar\n\nme = RockStar()\n\nme.cry_my_veins()\n\nme.firey_hell()\nme.cry_my_veins()", "Basic Plotting", "trace_text = \"\"\"-7.436823 -7.290942 -7.271528 -7.282786 -7.283622 -7.268156 -7.401003\n -7.304412 -7.211659 -7.231061 -7.27238 -7.287718 -7.240896 -7.121189\n -7.098841 -7.169402 -7.16689 -7.161854 -7.204029 -7.284694 -7.260288\n -7.368507 -7.472383 -7.442443 -7.448409 -7.409199 -7.353145 -7.242572\n -7.277459 -7.24589 -7.159036 -7.268178 -7.234837 -7.165567 -7.165357\n -7.137534 -7.231942 -7.225935 -7.16142 -7.183465 -7.257877 -7.279006\n -7.284249 -7.306481 -7.240192 -7.286245 -7.316336 -7.251441 -7.192566\n -7.191351 -7.065362 -7.050815 -7.116456 -7.186705 -7.242357 -7.240123\n -7.284564 -7.385903 -7.468834 -7.427641 -7.378051 -7.315574 -7.287397\n -7.262906 -7.197077 -7.187754 -7.136347 -7.149802 -7.301047 -7.281932\n -7.353314 -7.434607 -7.375526 -7.397572 -7.433974 -7.477175 -7.471739\n -7.474228 -7.51791 -7.525722 -7.52028 -7.534158 -7.539559 -7.53915\n -7.533163 -7.426446 -7.417031 -7.475554 -7.41521 -7.377752 -7.319138\n -7.20372 -7.294216 -7.290163 -7.310827 -7.302531 -7.339285 -7.252367\n -7.232718 -7.275662\"\"\"\n\ntrace = map(float,trace_text.split())\n\nimport matplotlib.pyplot as plt\n%matplotlib inline \n# Jupyter-specific magic command, ignore for regular script\n\nstuff = plt.plot(trace)\n# plt.show() needed for regular script\n\n# suppose the entries have error\nerr = np.std(trace) * np.random.rand(len(trace))\nplt.errorbar(range(len(trace)),trace,err)\n\n# see trend (correlation) with exponential smoothing\nimport 
pandas as pd\ndata = pd.Series(trace)\nplt.plot( trace )\nplt.plot( data.ewm(span=5).mean(),ls=\"--\",lw=2 )", "Intermediate Use Cases\nvectorized operations with numpy array\nPython for loops are VERY slow\nnumpy vectorized operations are about as fast as Fortran (BLAS/LAPACK under the hood)", "import numpy as np\n\ndef get_mat_vec(nsize):\n mat = np.random.rand(nsize,nsize)\n vec = np.random.rand(nsize)\n return mat,vec\n# end def\n\ndef mat_vec_np(mat,vec):\n prod = np.dot(mat,vec)\n return prod\n# end def\n\ndef mat_vec_naive(mat,vec):\n nsize = len(vec) # determine the size locally rather than relying on the global nsize\n prod = np.zeros(nsize)\n for i in range(nsize):\n for j in range(nsize):\n prod[i] += mat[i,j]*vec[j]\n # end for j\n # end for i\n \n return prod\n# end def \n\n# verify correctness\nnsize = 100\nmat,vec = get_mat_vec(nsize)\n\np1 = mat_vec_np(mat,vec)\np2 = mat_vec_naive(mat,vec)\n\nnp.allclose(p1,p2)\n\n# time it\nnsize = 1000\nmat,vec = get_mat_vec(nsize)\n\n%timeit mat_vec_np(mat,vec)\n\n%timeit -n 10 mat_vec_naive(mat,vec)", "3 orders of magnitude speed difference!\nparticle swarm optimization example", "import numpy as np\nfrom copy import deepcopy\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\ndef rastrigin2d(rvec,A=10.):\n ndim = len(rvec)\n const = A * ndim\n tosum = rvec**2. 
- A*np.cos(2*np.pi*rvec)\n return const + tosum.sum()\n# end def\n\n# put function on a grid for visualization\nminx = -5.12\nmaxx = 5.15\nnx = 100\nx = np.linspace(minx,maxx,nx)\ngrid = np.apply_along_axis(rastrigin2d,1\n ,[np.array([myx,myy]) for myx in x for myy in x] ) # vectorized\ngrid = grid.reshape(nx,nx) # reshape for plotting\n\n# visualize\nfig = plt.figure()\nax = fig.add_subplot(111,aspect=1)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\n\ncs = ax.contourf(x,x,grid.T,cmap=plt.cm.magma)\n# transpose is needed because matrix index direction and plot axes \n# directions are opposite of one another.\n\nplt.colorbar(cs)\n# below I will use pso to find the minimum of this function\n\n# initialize population\npop_size = 20\ndim = 2\n\npop = (maxx-minx) * np.random.rand(pop_size,dim) + minx\n\n# find personal best\nindividual_best = np.apply_along_axis(rastrigin2d,1,pop) # vectorized\nindividual_best_pos = deepcopy(pop) # deep copy for array of arrays\n\n# find population best\nmin_idx = np.argmin(individual_best) # find minimum index \nglobal_best = individual_best[min_idx].copy() # find minimum\nglobal_best_pos = pop[min_idx].copy() # a plain (shallow) copy is sufficient for a flat array\n\n# initialize hopping sizes and directions\nmax_hop = 0.3\nhop = max_hop * np.random.rand(pop_size,dim)\n\nbackground = plt.figure()\nax = background.add_subplot(111,aspect=1)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\n\ncs = ax.contourf(x,x,grid.T,alpha=0.3,cmap=plt.cm.magma)\nax.scatter(pop.T[0],pop.T[1],label=\"current positions\")\n\nax.scatter(individual_best_pos.T[0],individual_best_pos.T[1]\n ,c=\"g\",alpha=0.7,label=\"individual best\",marker='^',s=40)\nax.scatter(global_best_pos[0],global_best_pos[1],color=\"r\"\n ,label=\"global best\",marker=\"*\",s=80)\n\nax.legend(scatterpoints = 1,fontsize=10,loc=\"best\")\nbackground.colorbar(cs)\n\nc1 = 2\nc2 = 2\nmax_it = 5\nfor istep in range(max_it):\n \n # evaluate fitness of population\n fitness = np.apply_along_axis(rastrigin2d,1,pop)\n 
\n # calculate global best\n min_idx = np.argmin(fitness)\n current_best = fitness[min_idx]\n if current_best < global_best:\n global_best = current_best\n global_best_pos = pop[min_idx].copy()\n # end if\n \n # update individual best\n idx = np.where( np.array(fitness) < np.array(individual_best) )\n individual_best[idx] = fitness[idx]\n individual_best_pos[idx] = deepcopy( pop[idx] )\n \n # update hopping\n hop += c1*np.random.rand()*(individual_best_pos-pop) + \\\n c2*np.random.rand()*(global_best_pos-pop)\n idx = np.where( abs(hop) > max_hop )\n hop[idx] = np.sign(hop[idx])*max_hop\n \n # update population\n pop += hop\n \n# end for istep\n\nbackground = plt.figure()\nax = background.add_subplot(111,aspect=1)\nax.set_xlabel(\"x\")\nax.set_ylabel(\"y\")\n\ncs = ax.contourf(x,x,grid.T,alpha=0.3,cmap=plt.cm.magma)\nax.scatter(pop.T[0],pop.T[1],label=\"current positions\")\n\nax.scatter(individual_best_pos.T[0],individual_best_pos.T[1]\n ,c=\"g\",alpha=0.7,label=\"individual best\",marker='^',s=40)\nax.scatter(global_best_pos[0],global_best_pos[1],color=\"r\"\n ,label=\"global best\",marker=\"*\",s=80)\n\nax.legend(scatterpoints = 1,fontsize=10,loc=\"best\")\nbackground.colorbar(cs)\n\nglobal_best\n\nglobal_best_pos", "Text Parsing\nplain text file", "%%writefile output.txt\n# I am an ugly output file, but I have many hidden treasures\n\nBEGIN PASSWORDS OF EVERYONE\ntest123\n1234567890\nabcde\nhello\npasswd\npassword\nEND PASSWORDS OF EVERYONE\n\ndata follows\n3\n1.0 2.0 3.0\n4.0 5.0 6.0\n7.0 8.0 9.0\n\n# one text block\nfhandle = open(\"output.txt\",'r')\n# now you have to parse this ugly text\nfhandle.read()\n\n# line by line\nfhandle = open(\"output.txt\",'r')\nfor line in fhandle:\n print line\n\n# smart search\nfrom mmap import mmap\nfhandle = open(\"output.txt\",'r+')\n\nmm = mmap(fhandle.fileno(),0) # 0 means read from beginning\n\n# read block\nbegin_idx = mm.find(\"BEGIN\")\nend_idx = mm.find(\"END\")\ngood_lines= 
mm[begin_idx:end_idx].split(\"\\n\")[1:-1]\ngood_lines\n\n# read data section\nmm.seek(0) # go to beginning of file\nidx = mm.find(\"data follows\")\nmm.seek(idx) # goto data line\nmm.readline() # skip header\nndata = int(mm.readline())\ndata = []\nfor idata in range(ndata):\n data.append( map(float,mm.readline().split()) )\n# end for idata\ndata", "Database", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n# loading databases is easy\ndft = pd.read_json(\"dft.json\")\nqmc = pd.read_json(\"qmc.json\")\n\n# first thing to do is look at it? not so useful\ndft\n\n# look at columns\ndft.columns\n\n# access interesting columns\ndft[[\"energy\",\"pressure\"]]\n\n# plot energy vs. step index\nxlabel = \"istep\"\nylabel = \"energy\"\nplt.xlabel(xlabel)\nplt.ylabel(ylabel)\nplt.scatter(dft[xlabel],dft[ylabel])\nplt.ylim(-3.8665,-3.865)\n\ndmc = qmc[qmc[\"iqmc\"]==4]\nvmc = qmc[qmc[\"iqmc\"]==0]\n\nxlabel = \"istep\"\nylabel = \"LocalEnergy_mean\"\nplt.xlabel(xlabel)\nplt.ylabel(ylabel)\nplt.scatter(dmc[xlabel],dmc[ylabel])\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set_xlabel(\"displacement (bohr)\")\nax.set_ylabel(\"Variance (Ha$^2$)\")\n\nmarker_style = {0:\"s\",1:\"^\"}\ncolors = {0:\"g\",1:\"r\"}\n\nrjas = 1 # use reference jastrow\nfor rorb in [0,1]:\n mydf = vmc[ vmc[\"rorb\"] == rorb ]\n ax.errorbar(mydf[\"disp\"],mydf[\"Variance_mean\"],mydf[\"Variance_error\"].values,ls=\"\",marker=marker_style[rorb]\n ,color=colors[rorb],label=\"ref. 
orb %d\"%rorb)\n# end for\nax.legend(loc=\"best\",scatterpoints=1)\n#ax.set_ylim(1.1,1.5)\nfig.tight_layout()\n#plt.savefig(\"variance_vs_disp-rjas1.eps\")\n\n# drop bad runs\nsel1 = (qmc[\"imode\"]==5) & (qmc[\"istep\"]==-2)\nqmc = qmc.drop(qmc[sel1].index)\nsel2 = (qmc[\"imode\"]==10) & (qmc[\"istep\"]==2)\nqmc = qmc.drop(qmc[sel2].index)\ndmc = qmc[qmc[\"iqmc\"]==4]\nvmc = qmc[qmc[\"iqmc\"]==0]\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nax.set_xlabel(\"displacement (bohr)\")\nax.set_ylabel(\"Varaince (Ha$^2$)\")\n\nmarker_style = {0:\"s\",1:\"^\"}\ncolors = {0:\"g\",1:\"r\"}\n\nrjas = 1 # use reference jastrow\nfor rorb in [0,1]:\n mydf = vmc[ vmc[\"rorb\"] == rorb ]\n ax.errorbar(mydf[\"disp\"],mydf[\"Variance_mean\"],mydf[\"Variance_error\"].values,ls=\"\",marker=marker_style[rorb]\n ,color=colors[rorb],label=\"ref. orb %d\"%rorb)\n# end for\nax.legend(loc=\"best\",scatterpoints=1)\n#ax.set_ylim(1.1,1.5)\nfig.tight_layout()\n#plt.savefig(\"variance_vs_disp-rjas1.eps\")\n\ndmc.groupby([\"rorb\",\"rjas\"]).apply(np.mean)[[\"LocalEnergy_mean\",\"Variance_mean\"]]" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Upward-Spiral-Science/uhhh
code/Graph Analysis/.ipynb_checkpoints/Delaunay-checkpoint.ipynb
apache-2.0
[ "Delaunay\nHere, we'll perform various analyses by constructing graphs and measuring properties of those graphs to learn more about the data", "import csv\nfrom scipy.stats import kurtosis\nfrom scipy.stats import skew\nfrom scipy.spatial import Delaunay\nimport numpy as np\nimport math\nimport skimage\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom skimage import future\nimport networkx as nx\nfrom ragGen import *\n%matplotlib inline\nsns.set_color_codes(\"pastel\")\nfrom scipy.signal import argrelextrema\n\n# Read in the data\ndata = open('../../data/data.csv', 'r').readlines()\nfieldnames = ['x', 'y', 'z', 'unmasked', 'synapses']\nreader = csv.reader(data)\nreader.next()\n\nrows = [[int(col) for col in row] for row in reader]\n\n# These will come in handy later\nsorted_x = sorted(list(set([r[0] for r in rows])))\nsorted_y = sorted(list(set([r[1] for r in rows])))\nsorted_z = sorted(list(set([r[2] for r in rows])))", "We'll start with just looking at analysis in Euclidean space, then thinking about weighting by synaptic density later. Since we hypothesize that our data will show that tissue varies as we move down the y-axis (z-axis in brain) through cortical layers, an interesting thing to do would be to compare properties of the graphs on each layer (i.e. how does graph connectivity vary as we move through layers).\nLet's start by triangulating our data. We'll use Delaunay on each y layer first. Putting our data in the right format for doing graph analysis...", "a = np.array(rows)\nb = np.delete(a, np.s_[3::],1)\n\n# Separate layers - have to do some wonky stuff to get this to work\nb = sorted(b, key=lambda e: e[1])\nb = np.array([v.tolist() for v in b])\nb = np.split(b, np.where(np.diff(b[:,1]))[0]+1)", "Now that our data is in the right format, we'll create 52 Delaunay graphs. Then we'll perform analyses on these graphs. 
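(An aside not in the original notebook.) scipy's Delaunay object exposes its connectivity through the simplices attribute — one row of three vertex indices per triangle — so the unique edges of a triangulation can be collected as sorted index pairs. A minimal sketch on a made-up point set:

```python
import numpy as np
from scipy.spatial import Delaunay

# toy point set: the four corners of a unit square plus its center
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
tri = Delaunay(pts)

# every simplex is a triangle; record each of its three edges once,
# as a sorted (low, high) pair of vertex indices
edges = set()
for a, b, c in tri.simplices:
    a, b, c = int(a), int(b), int(c)
    edges.update([tuple(sorted((a, b))), tuple(sorted((b, c))), tuple(sorted((a, c)))])
```

De-duplicating this way matters for edge-length statistics: appending all three edges of every simplex counts each interior edge (shared by two triangles) twice.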
A simple but useful metric would be to analyze edge length distributions in each layer.", "graphs = []\ncentroid_list = []\n\nfor layer in b:\n centroids = np.array(layer)\n \n # get rid of the y value - not relevant anymore\n centroids = np.delete(centroids, 1, 1)\n centroid_list.append(centroids)\n \n graph = Delaunay(centroids)\n graphs.append(graph)", "We're going to need a method to get edge lengths from 2D centroid pairs", "def get_d_edge_length(edge):\n (x1, y1), (x2, y2) = edge\n return math.sqrt((x2-x1)**2 + (y2-y1)**2)\n\nedge_length_list = []\ntri_area_list = []\n\n# pair each Delaunay graph with the centroids it was built from\nfor del_graph, centroids in zip(graphs, centroid_list):\n \n tri_areas = []\n edge_lengths = []\n triangles = []\n\n for t in centroids[del_graph.simplices]:\n triangles.append(t)\n # p1,p2,p3 rather than a,b,c so we don't clobber the layer list b\n p1, p2, p3 = [tuple(map(int,list(v))) for v in t]\n edge_lengths.append(get_d_edge_length((p1,p2)))\n edge_lengths.append(get_d_edge_length((p1,p3)))\n edge_lengths.append(get_d_edge_length((p2,p3)))\n try:\n tri_areas.append(float(Triangle(p1,p2,p3).area))\n except:\n continue\n edge_length_list.append(edge_lengths)\n tri_area_list.append(tri_areas)", "Realizing after all this that simply location is useless. We know the voxels are evenly spaced, which means our edge length data will be all the same. See that the \"centroids\" are no different:", "np.subtract(centroid_list[0], centroid_list[1])", "There is no distance between the two. Therefore it is perhaps more useful to consider a graph that considers node weights. Voronoi is dual to Delaunay, so that's not much of an option. We want something that considers both spatial location and density similarity. 
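One sketch of such a graph — not part of the original analysis, and every name and the weighting formula below are illustrative assumptions — is a k-nearest-neighbour graph over the centroids whose edge weights mix Euclidean distance with the difference in synapse density at the two endpoints:

```python
import numpy as np

def density_weighted_edges(coords, densities, k=3, alpha=0.5):
    """k-NN graph edges weighted by alpha*distance + (1-alpha)*|density difference|."""
    coords = np.asarray(coords, dtype=float)
    densities = np.asarray(densities, dtype=float)
    # pairwise Euclidean distances via broadcasting
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    edges = {}
    for i in range(len(coords)):
        # k nearest neighbours of node i (index 0 of the argsort is i itself)
        for j in np.argsort(dist[i])[1:k + 1]:
            j = int(j)
            edges[tuple(sorted((i, j)))] = (alpha * dist[i, j]
                                            + (1 - alpha) * abs(densities[i] - densities[j]))
    return edges

# on evenly spaced points the distance term is constant between neighbours,
# so only the density term separates the edge weights
w = density_weighted_edges([[0, 0], [1, 0], [2, 0]], [0.0, 1.0, 5.0], k=1)
```

With evenly spaced voxels, such a weighting degrades gracefully to a pure density-difference graph, which is exactly the missing ingredient noted above.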
\nDrawing Graphs\nFirst we look at the default networkx graph plotting:", "real_volume = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))\nfor r in rows:\n real_volume[sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]\n\n# build a networkx graph from each layer's Delaunay triangulation\nnx_graphs = []\nfor tri in graphs:\n G = nx.Graph()\n for s in tri.simplices:\n G.add_edges_from([(s[0], s[1]), (s[0], s[2]), (s[1], s[2])])\n nx_graphs.append(G)\n\nfor graph in nx_graphs:\n plt.figure()\n nx.draw(graph, node_size=100)", "This is using the spring layout, so we're losing positional information. We can improve the plot by adding position information.\nSelf Loops", "# the y-layer RAGs were built in a companion notebook; rebuilding them here\n# the same way as the x-layer RAGs below (assumption)\ny_rags = []\nfor layer in np.swapaxes(real_volume, 0, 1):\n y_rags.append(skimage.future.graph.RAG(layer))\n\nnum_self_loops = []\nfor rag in y_rags:\n num_self_loops.append(rag.number_of_selfloops())\n\nnum_self_loops", "Interesting. There are no self loops. Why would this be? Let's come back to this. In the meantime, I want to give some thought to what it means to have a self loop, whether it should be theoretically possible given our data, and whether our graphs are formed properly.\nThe answer to this question is very simple. In a RAG, there are no self-loops by definition. Self loops are edges that form a connection between a node and itself.\n<img src=\"../../docs/figures/selfloop.png\" width=\"100\">\nTo see whether the graphs are formed properly, let's look at the adjacency lists:", "# y_rags[0].adjacency_list()", "Compare that to the test data:", "# Test Data\ntest = np.array([[1,2],[3,4]])\ntest_rag = skimage.future.graph.RAG(test)\ntest_rag.adjacency_list()", "X-Layers", "real_volume_x = np.zeros((len(sorted_x), len(sorted_y), len(sorted_z)))\nfor r in rows:\n real_volume_x[ sorted_x.index(r[0]), sorted_y.index(r[1]), sorted_z.index(r[2])] = r[-1]\n\nx_rags = []\nfor layer in real_volume_x:\n x_rags.append(skimage.future.graph.RAG(layer))\n\nnum_edges_x = []\nfor rag in x_rags:\n num_edges_x.append(rag.number_of_edges())\n\nsns.barplot(x=range(len(num_edges_x)), y=num_edges_x)\nsns.plt.show()", "We can see here the number of edges is low in that area that does not have many synapses. 
It, as expected, mirrors the distribution of synapses. It appears to be approximately uniform at the top, with buffers of very few synapses on the sides. Remember from here:", "plt.imshow(np.amax(real_volume, axis=2), interpolation='nearest')\nplt.show()\n\n# edge_length_list[3]\n# tri_area_list[3]\n# triangles\n\n# Note for future\n# del_features['d_edge_length_mean'] = np.mean(edge_lengths)\n# del_features['d_edge_length_std'] = np.std(edge_lengths)\n# del_features['d_edge_length_skew'] = scipy.stats.skew(edge_lengths)\n# del_features['d_edge_length_kurtosis'] = scipy.stats.kurtosis(edge_lengths)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
isendel/machine-learning
ml-regression/week3-4/week-3-polynomial-regression-assignment-blank.ipynb
apache-2.0
[ "Regression Week 3: Assessing Fit (polynomial regression)\nIn this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:\n* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed\n* Use matplotlib to visualize polynomial regressions\n* Use matplotlib to visualize the same polynomial degree on different subsets of the data\n* Use a validation set to select a polynomial degree\n* Assess the final fit using test data\nWe will continue to use the House data from previous notebooks.\nFire up graphlab create", "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import linear_model\n%matplotlib inline\n\ndtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':str, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}", "Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.\nThe easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions. 
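In this pandas adaptation the counterpart of SArray.apply() is pandas.Series.apply(); a quick sketch (illustrative only) of raising every element to a power with a lambda:

```python
import pandas as pd

tmp = pd.Series([1., 2., 3.])
tmp_cubed = tmp.apply(lambda x: x ** 3)  # elementwise; equivalent to tmp**3
```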
\nFor example to take the example array and compute the third power we can do as follows:", "tmp = pd.Series([1., 2., 3.])\ntmp_cubed = tmp**3\nprint(tmp)\nprint(tmp_cubed)", "We can create an empty DataFrame using pd.DataFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty DataFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).", "ex_sframe = pd.DataFrame()\nex_sframe['power_1'] = tmp\nprint(ex_sframe)", "Polynomial_sframe function\nUsing the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:", "def polynomial_sframe(feature, degree):\n # assume that degree >= 1\n # initialize the SFrame:\n poly_sframe = pd.DataFrame()\n # and set poly_sframe['power_1'] equal to the passed feature\n poly_sframe['power_1'] = feature\n # first check if degree > 1\n if degree > 1:\n # then loop over the remaining degrees:\n # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree\n for power in range(2, degree+1): \n # first we'll give the column a name:\n name = 'power_' + str(power)\n # then assign poly_sframe[name] to the appropriate power of feature\n poly_sframe[name] = feature**power\n\n poly_sframe['constant'] = 1\n return poly_sframe", "To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:", "print(polynomial_sframe(tmp, 3))", "Visualizing polynomial regression\nLet's use matplotlib to visualize what a polynomial regression looks like on some real data.", "sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)", "As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. 
For houses with identical square footage, we break the tie by their prices.", "sales = sales.sort_values(['sqft_living', 'price'])\n\ndef fit_poly(data_set, degree):\n poly_data_ = polynomial_sframe(data_set['sqft_living'], degree)\n features_ = poly_data_.columns.values[:degree].tolist()\n #features_ = ['constant'] + poly_data_.columns.values[:degree].tolist()\n poly_data_['price'] = data_set['price']\n model_ = linear_model.LinearRegression()\n #model_ = linear_model.LinearRegression(fit_intercept=False)\n model_.fit(poly_data_[features_], data_set['price'])\n return model_, poly_data_, features_", "Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.", "#poly1_data = polynomial_sframe(sales[['sqft_living']], 1)\n#poly1_data[['price']] = sales[['price']] # add price to the data since it's the target", "NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.", "model1, poly1_data, features = fit_poly(sales, 1)\n#model1 = linear_model.LinearRegression(fit_intercept=False)\n#model1.fit(poly1_data[['constant', 'power_1']], poly1_data['price'])\n\n#let's take a look at the weights before we plot\nmodel1.coef_\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.plot(poly1_data['power_1'],sales['price'],'.',\n poly1_data['power_1'], model1.predict(poly1_data[features]),'-')", "Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'. \nWe can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. 
What if we wanted to plot a second degree polynomial?", "error_1 = sales['price'] - model1.predict(poly1_data[features])\nRSS_1 = error_1.T.dot(error_1)\n# expect 1483974282914121.8\nRSS_1\n\n(model2, poly2_data, features) = fit_poly(sales, 2)\nprint(features)\nmodel2.coef_\n\nplt.plot(poly2_data['power_1'],poly2_data['price'],'.',\n poly2_data['power_1'], model2.predict(poly2_data[features]),'-')", "The resulting model looks like half a parabola. Try on your own to see what the cubic looks like:", "(model3, poly3_data, features) = fit_poly(sales, 3)\n\nplt.plot(poly3_data['power_1'],poly3_data['price'],'.',\n poly3_data['power_1'], model3.predict(poly3_data[features]),'-')", "Now try a 15th degree polynomial:", "(model15, poly15_data, features) = fit_poly(sales, 15)\n\nplt.plot(poly15_data['power_1'],poly15_data['price'],'.',\n poly15_data['power_1'], model15.predict(poly15_data[features]),'-')", "What do you think of the 15th degree polynomial? Do you think this is appropriate? If we were to change the data do you think you'd get pretty much the same curve? Let's take a look.\nChanging the data and re-learning\nWe're going to split the sales data into four subsets of roughly equal size. Then you will estimate a 15th degree polynomial model on all four subsets of the data. Print the coefficients (you should use .print_rows(num_rows = 16) to view all of them) and plot the resulting fit (as we did above). The quiz will ask you some questions about these results.\nTo split the sales data into four subsets, we perform the following steps:\n* First split sales into 2 subsets with .random_split(0.5, seed=0). \n* Next split the resulting subsets into 2 more subsets each. 
Use .random_split(0.5, seed=0).\nWe set seed=0 in these steps so that different users get consistent results.\nYou should end up with 4 subsets (set_1, set_2, set_3, set_4) of approximately equal size.", "set_1 = pd.read_csv('wk3_kc_house_set_1_data.csv', dtype=dtype_dict)\nset_2 = pd.read_csv('wk3_kc_house_set_2_data.csv', dtype=dtype_dict)\nset_3 = pd.read_csv('wk3_kc_house_set_3_data.csv', dtype=dtype_dict)\nset_4 = pd.read_csv('wk3_kc_house_set_4_data.csv', dtype=dtype_dict)", "Fit a 15th degree polynomial on set_1, set_2, set_3, and set_4 using sqft_living to predict prices. Print the coefficients and make a plot of the resulting model.", "(model, poly_data, features) = fit_poly(set_1, 15)\nprint(model.coef_)\nplt.plot(poly_data['power_1'],poly_data['price'],'.',\n poly_data['power_1'], model.predict(poly_data[features]),'-')\n\n(model, poly_data, features) = fit_poly(set_2, 15)\nprint(model.coef_)\nplt.plot(poly_data['power_1'],poly_data['price'],'.',\n poly_data['power_1'], model.predict(poly_data[features]),'-')\n\n(model, poly_data, features) = fit_poly(set_3, 15)\nprint(model.coef_)\nplt.plot(poly_data['power_1'],poly_data['price'],'.',\n poly_data['power_1'], model.predict(poly_data[features]),'-')\n\n(model, poly_data, features) = fit_poly(set_4, 15)\nprint(model.coef_)\nplt.plot(poly_data['power_1'],poly_data['price'],'.',\n poly_data['power_1'], model.predict(poly_data[features]),'-')", "Some questions you will be asked on your quiz:\nQuiz Question: Is the sign (positive or negative) for power_15 the same in all four models?\nQuiz Question: (True/False) the plotted fitted lines look the same in all four plots\nSelecting a Polynomial Degree\nWhenever we have a \"magic\" parameter like the degree of the polynomial there is one well-known way to select these parameters: validation set. 
(We will explore another approach in week 4).\nWe split the sales dataset 3-way into training set, test set, and validation set as follows:\n\nSplit our sales data into 2 sets: training_and_validation and testing. Use random_split(0.9, seed=1).\nFurther split our training data into two sets: training and validation. Use random_split(0.5, seed=1).\n\nAgain, we set seed=1 to obtain consistent results for different users.", "training_data = pd.read_csv('wk3_kc_house_train_data.csv', dtype=dtype_dict)\nvalidation_data = pd.read_csv('wk3_kc_house_valid_data.csv', dtype=dtype_dict)\ntest_data = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)", "Next you should write a loop that does the following:\n* For degree in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15] (to get this in python type range(1, 15+1))\n * Build an SFrame of polynomial data of train_data['sqft_living'] at the current degree\n * hint: my_features = poly_data.column_names() gives you a list e.g. ['power_1', 'power_2', 'power_3'] which you might find useful for graphlab.linear_regression.create( features = my_features)\n * Add train_data['price'] to the polynomial SFrame\n * Learn a polynomial regression model to sqft vs price with that degree on TRAIN data\n * Compute the RSS on VALIDATION data (here you will want to use .predict()) for that degree and you will need to make a polynomial SFrame using validation data.\n* Report which degree had the lowest RSS on validation data (remember python indexes from 0)\n(Note you can turn off the print out of linear_regression.create() with verbose = False)", "validation_RSS = []\nmodels = []\nfeatures_list = []\nfor degree in range(1, 15 + 1):\n print('Learning degree %s ' % degree)\n (model, poly_data, features) = fit_poly(training_data, degree)\n \n validation_poly = polynomial_sframe(validation_data['sqft_living'], degree)\n validation_error = validation_data['price'] - model.predict(validation_poly[features])\n 
validation_RSS.append(validation_error.T.dot(validation_error))\n models.append(model)\n features_list.append(features)\nprint(validation_RSS)\nbest_degree = validation_RSS.index(min(validation_RSS))\nprint('Min validation error degree: %s' % (best_degree + 1))", "Quiz Question: Which degree (1, 2, …, 15) had the lowest RSS on Validation data?\nNow that you have chosen the degree of your polynomial using validation data, compute the RSS of this model on TEST data. Report the RSS on your quiz.", "poly_test = polynomial_sframe(test_data['sqft_living'], best_degree + 1)\nfeatures = features_list[best_degree]\nprint(features)\ntest_prediction = models[best_degree].predict(poly_test[features])", "Quiz Question: what is the RSS on TEST data for the model with the degree selected from Validation data?", "test_error = test_data['price'] - test_prediction\ntest_RSS = test_error.T.dot(test_error)\nprint(test_RSS)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
empet/Matplotlib-plots
Asymmetric-diverging-colormaps-in-matplotlib.ipynb
gpl-3.0
[ "Asymmetric diverging colormaps in Matplotlib", "import seaborn as sns\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\n%matplotlib inline\nsns.set(style=\"white\")", "Diverging colormaps are recommendend for visualizing data that are symmetric with respect to a reference value, that is the data range is approximately [reference-val, reference+val], where val>0.\nSuch colormaps emphasize the positive and negative deviations from the reference point.\nA simplest diverging colormap is defined by two isoiluminant end hues, left color and right color. One also adds a mid color, that has a higher luminance than the left and right colors. The normalized reference point is mapped to the central color of the colormap defined by these three colors.\nFor a detalied discussion on diverging colormaps, see K. Moreland. See also a short description, and the list of diverging colormaps in matplotlib here.\nBelow we illustrate the RdBu (left red and right blue), respectively RdYlGn (left red, mid yellow, right green) matplotlib diverging colormaps:\nFor, we define a function that displays a colormap:", "def display_cmap(cmap):\n plt.imshow(np.linspace(0, 100, 256)[None, :], aspect=25, interpolation='nearest', cmap=cmap) \n plt.axis('off')\n\ndisplay_cmap('RdBu')\n\ndisplay_cmap('RdYlGn')", "Seaborn allows defining custom diverging colormaps.", "sns_cmap = sns.diverging_palette(220, 10, as_cmap=True)\ndisplay_cmap(sns_cmap)", "palettable provides also a list of diverging colormaps.", "from palettable.colorbrewer.diverging import PRGn_3", "The corresponding matplotlib colormap is defined by:", "palett_cmap=PRGn_3.mpl_colormap \ndisplay_cmap(palett_cmap)", "Now we plot the heatmap associated to a numpy.array whose values are symmetric with respect to 0:", "data=-2.0+4*np.random.random((20,20))\nplt.imshow(data, palett_cmap, interpolation='nearest' )\nplt.colorbar()", "There are data sets that are asymmetric with respect to a reference 
value, and it is also of interest to visualize them with a diverging colormap, in order to point out to what extent they exceed the reference value, respectively are under this value.\nFor example we could be interested in the temperature recorded in an area, each month over many years, that exceeds or is below to $0$ Celsius degree or $32$ Fahrenheit degrees, or in financial data that are above or below a threshold, etc.\nSymmetric diverging colormaps are not appropriate for visualizing such data, as it can be seen in the following plot.\nWe generate a correlation matrix having elements in the interval $[-0.4, 0.9]$.\nA correlation matrix is symmetric, and we visualize only the heatmap associated to its lower triangular\nsubmatrix, masking the upper triangular part.\nThe function rand_lower_correl defines the lower triangular part of a correlation matrix, with elements randomly generated in an interval $[a,b)\\subset[-1,1)$:", "def rand_lower_correl(n, (a, b), dtype=np.float):\n if b<a:\n raise ValueError('b must be greater than a')\n nr= n * (n + 1) / 2 - n\n A = np.zeros([n, n], dtype=dtype)\n A[np.tril_indices(n, -1)] = a+(b-a)*np.random.random(nr)\n return A\n\nC=rand_lower_correl(25, (-0.4, 0.9))\n\n\nmask = np.zeros_like(C, dtype=np.bool)\nmask[np.triu_indices_from(mask)] = True\nC = np.ma.array(C, mask=mask) #mask out the upper triangle in C\n\ncmin=np.min(C)\ncmax=np.max(C)\nprint cmin, cmax\n\ndtick=-cmin/2\nticks=np.arange(cmin, cmax, dtick)\n\nfig, ax=plt.subplots()\nimg=ax.imshow(C, interpolation=\"none\", cmap=sns_cmap)\n\ncbar = fig.colorbar(img, ticks=ticks)", "The drawback of this representation of correlations is that the maximal luminance is not placed at the position of the (normalized) reference value (0 in this case). \nTo be more informative such a heatmap must have isoluminant and opposite colored cells corresponding to values $-a$ and $a$. 
In our image, the cells corresponding to -0.19 and 0.19, for example,\nare both blue but have different luminance.\nIt would also be more informative if the luminance to the left and to the right of the position of maximum luminance decreased monotonically at the same rate. In the image above, -0.39 and 0.39 have opposite colors, but do not have the same luminance.\nOur aim is to associate an asymmetric diverging colormap with a data range that is asymmetric with respect to a reference point, starting from a (symmetric) diverging colormap, considered as being associated with a symmetric interval of data values centered at the reference point.\nSuch a colormap will have the maximum luminance at the normalized reference point, and a luminance decreasing at the same rate toward both ends, as in the images inserted in the next cells:", "from IPython.display import Image\n\nImage(filename='Data/asym_cmap1.jpg')\n\nImage(filename='Data/asym_cmap2.jpg')", "The basic idea for defining an asymmetric diverging colormap is illustrated in the image asymdiv.png, and explained below.\nIn order to understand the explanations, we recall how to retrieve the color code\nof a color at a particular position of a matplotlib colormap:", "from matplotlib import cm\ncmap = cm.get_cmap('RdBu')", "Now for any $t\in[0,1]$, cmap(t) gives the color code of the normalized value $t$, i.e. a tuple (r,g,b,a):", "print cmap(0.23)", "In our explanation below, cmap(t) must be understood in this sense, and $t$ will be referred to as the color index.", "Image(filename='Data/asymdiv.png')", "dmin and dmax are the min and max values of our data, and refp is the reference point, with dmin < refp < dmax.\nWe discuss the case refp-dmin < dmax-refp. The opposite case can be easily deduced from this one.\n\ntheoretically, the values from the symmetric interval [refp-(dmax-refp), dmax] are normalized and mapped to colors in the chosen symmetric diverging colormap, cmap.
\nthe normalized value of dmin is denoted by t', whereas the normalized value of refp by refpn. Obviously, refpn is $0.5$.\nwe define the asymmetric diverging colormap as being cmap restricted to continuous indices in the interval $[t',1)$. Normalizing values from $[t', 1)$, we denote by maxluminP the normalized value of refpn, i.e. the index of the color of maximal luminance.\n\nThen the explicit definition of the asymmetric diverging colormap is done as follows:\nthe intervals [t', 1.0] and [0, 1.0] are divided by 256 equidistant points, such that\nrefpn in the first interval, respectively maxluminP in the second, are points of division.\nIf t[i] is the $i^{th}$ point in [t',1], and s[i] in [0,1], then the asymmetric colormap, asym_cmap, stores at the position s[i] the color code cmap(t[i]),\n$i=\overline{0,255}$.\nThese ideas are implemented by the function asymmetric_cmap:", "def normalize(x, a, b):  # normalization map: [a,b] --> [0,1]\n    if a > b:\n        raise ValueError('(a,b) is not an interval')\n    return (float(x) - a) / (b - a)\n\ndef asymmetric_cmap(data, div_cmap, ref_point=0.0, name='asym_cmap'):\n    '''\n    Input\n    -----\n    data: data to be visualized (a numpy array of shape (m,n), a data frame, a list of lists of equal len)\n    div_cmap: a diverging matplotlib or seaborn colormap (a matplotlib.colors.LinearSegmentedColormap object)\n    ref_point: the reference point for data, the threshold of interest\n    '''\n    if isinstance(data, pd.DataFrame):\n        D = data.values\n    elif isinstance(data, np.ma.core.MaskedArray):\n        D = np.ma.copy(data)\n    else:\n        D = np.asarray(data, dtype=np.float)\n        D = np.ma.masked_invalid(D)\n\n    dmin = np.min(D)\n    dmax = np.max(D)\n\n    if not (dmin < ref_point < dmax):\n        raise ValueError('data are not appropriate for visualization with a diverging colormap')\n\n    if dmax - ref_point > ref_point - dmin:\n        left = 2 * ref_point - dmax\n        right = dmax\n\n        tp = normalize(dmin, left, right)  # normalized value of dmin\n        refp_norm = normalize(ref_point, left, right)  # normalized value of the ref_point in the\n                                                       # symmetric interval [left, right]; it is 0.5\n        A = tp\n        B = 1.0\n    else:\n        left = dmin\n        right = 2 * ref_point - dmin\n\n        tp = normalize(dmax, left, right)  # normalized value of dmax\n        refp_norm = normalize(ref_point, left, right)\n\n        A = 0.0\n        B = tp\n    max_lumin_idx = normalize(refp_norm, A, B)  # index of the max luminance position in the asymmetric div cmap\n\n    cdict = {\n        'red': [],\n        'green': [],\n        'blue': [],\n        'alpha': []\n    }\n\n    # T is a (256,) array\n    T = np.hstack([\n        np.linspace(A, refp_norm, 128, endpoint=False),\n        np.linspace(refp_norm, B, 128)\n    ])\n\n    # T_asymm is a (256,) array\n    T_asymm = np.hstack([\n        np.linspace(0, max_lumin_idx, 128, endpoint=False),\n        np.linspace(max_lumin_idx, 1.0, 128)\n    ])\n\n    for t, s in zip(T, T_asymm):\n        r, g, b, a = div_cmap(t)\n\n        cdict['red'].append((s, r, r))\n        cdict['green'].append((s, g, g))\n        cdict['blue'].append((s, b, b))\n        cdict['alpha'].append((s, a, a))\n\n    asym_cmap = matplotlib.colors.LinearSegmentedColormap(name, cdict)\n    plt.register_cmap(cmap=asym_cmap)\n    display_cmap(asym_cmap)\n    return D, asym_cmap, dmin, dmax\n", "Let us define an asymmetric colormap associated with the data in the lower triangular matrix defined above, with respect to the reference point $0$, and derived from the symmetric diverging colormap sns_cmap (the seaborn diverging colormap):", "plot_data, asym_cmap = asymmetric_cmap(C, sns_cmap, ref_point=0)[:2]", "Visualizing the correlation matrix with this colormap, we get better information about which features are positively and which are negatively correlated.", "img = plt.imshow(plot_data, interpolation=\"none\", cmap=asym_cmap)\nplt.colorbar(img)", "Now we give an example of data with a nonzero reference point. Namely, we consider data containing\nthe average monthly temperature in Boston, between 1960 and 2000. The reference point is 32 degrees Fahrenheit.\nData source: Plotly public data.
Rows correspond to years, and columns to months.", "temp=[[30.5, 27, 34.4, 38.8, 55.5, 66.1, 72.9, 68.1, 64.4, 53.2, 41.6, 31.3, 48.6],\n [20.1, 22.3, 31.3, 42.5, 57.9, 66.7, 72.1, 71, 59.6, 50.1, 34.9, 29.6, 46.5], \n [30.7, 27.5, 33.5, 43.9, 54.6, 68.9, 74, 70.3, 59.9, 48.3, 41.4, 22.2, 47.9],\n [24.8, 34.3, 34.9, 44.2, 55.1, 66.8, 71.1, 72.2, 65.2, 52, 44.2, 36.3, 50.1], \n [27.9, 30.5, 39.6, 48.1, 56.5, 64.6, 73.7, 69.7, 64.2, 56.1, 40.8, 30.3, 50.2], \n [24.2, 24.6, 33.6, 42.3, 60.2, 64.8, 70.8, 69.3, 61.9, 57.6, 38.7, 32.5, 48.4], \n [34.6, 31.8, 33.2, 46.4, 63.7, 68.6, 71.7, 70.7, 65.2, 52.2, 38.2, 26.7, 50.2],\n [22, 27.9, 37.1, 43.6, 56.6, 61.7, 70.2, 71.3, 67.5, 54.5, 43.5, 39, 49.6],\n [26, 30.4, 36, 42.9, 50.5, 66.8, 73.2, 71.5, 64, 55.4, 39.3, 30.5, 48.9], \n [24.3, 28.8, 30.7, 45.3, 56.6, 70.1, 71.7, 68.3, 59.8, 47.7, 42.8, 29, 47.9],\n [24.1, 31.1, 34.1, 43.8, 54.9, 67.2, 68.7, 69.7, 66.9, 52.7, 41.4, 33.7, 49], \n [27.4, 20.6, 27.6, 47.2, 53.2, 66.8, 71.8, 67.8, 60.1, 51.6, 43.9, 32.7, 47.6],\n [25.7, 26.7, 33.3, 48.6, 56.7, 63.2, 71.2, 68.2, 63.1, 51.9, 43.2, 28.4, 48.3],\n [25.1, 28.7, 31.5, 44.2, 59.7, 65.2, 74.5, 67.6, 59.8, 51.5, 42.2, 32.3, 48.5],\n [20.1, 28.3, 32.2, 42.6, 53.1, 66.8, 68.8, 70.1, 59.8, 47.7, 43.9, 34.6, 47.3],\n [35.7, 26, 38.2, 47.8, 60.2, 69.1, 69.4, 67.3, 62.8, 48.5, 44.6, 38, 50.6], \n [32.3, 33.3, 34.9, 46.3, 57, 64.2, 71, 68.9, 62.9, 51, 41.8, 26, 49.1],\n [31.1, 31.9, 33.7, 48, 55.8, 65, 69, 70, 66.7, 51.5, 41.3, 40.5, 50.4], \n [28.3, 28.4, 32.9, 48.3, 55.9, 69.6, 73, 70.2, 62.5, 52.7, 41.2, 29.9, 49.4],\n [20.7, 26.8, 33.7, 44.3, 56.3, 64.7, 71.5, 69.6, 60, 54.8, 42.2, 30.4, 47.9],\n [30.1, 26.7, 42.5, 47, 57.5, 68.8, 73.7, 68, 64.8, 54, 38.2, 32.2, 50.3], \n [28.8, 24.5, 34.6, 46, 60.2, 66.8, 69.3, 71, 65.7, 49.6, 45.4, 35.5, 49.8], \n [24.9, 28.7, 31.9, 47.2, 60.3, 66.1, 71.8, 70.7, 62.2, 50.2, 46.5, 30.4, 49.2],\n [28.5, 30.9, 36.9, 49, 57.7, 62.2, 71.6, 69.6, 63.1, 54.4, 41.5, 34, 49.9],\n [28.7, 32.8, 42.9, 
43.8, 55.9, 65.8, 71.8, 73.1, 66.2, 54.2, 42, 31.9, 50.8], \n [29, 26.8, 34, 48.1, 57.7, 70, 72.6, 68.7, 62, 54.5, 42.1, 36.2, 50.1], \n [30.1, 29.3, 34, 48.5, 55.2, 68.6, 73.9, 71.1, 64.9, 56.6, 44.7, 33.5, 50.9], \n [27.6, 24.3, 36.2, 43.5, 54.3, 68.2, 73.5, 70.7, 65.4, 54.4, 37.3, 32.4, 49],\n [26.8, 28.9, 43.3, 48.4, 57.8, 64.3, 67.9, 67.6, 62.6, 53.6, 46.2, 28.6, 49.7], \n [28.6, 31.4, 44.4, 47.9, 58.5, 59.8, 71.4, 65, 64.4, 53.9, 40, 29, 49.5], \n [21.5, 22.6, 34.5, 44.7, 60.9, 64, 71.4, 68.7, 62.5, 50.5, 38.2, 25.8, 47.1], \n [25.1, 23.4, 37.1, 46.8, 57.6, 65.2, 73.2, 68.6, 62, 53.6, 41.6, 35.1, 49.1], \n [35.7, 29.7, 32.3, 47.2, 58, 66.4, 70.1, 72, 65, 52.5, 41.7, 29.1, 50],\n [27, 21.7, 37.9, 43.4, 52.5, 65.1, 72.4, 69.7, 64.4, 49.8, 43.3, 37.2, 48.7],\n [31, 26.7, 38.6, 46.8, 59, 69.8, 74.1, 69.4, 65.8, 55.4, 43.6, 33.3, 51.1],\n [30.3, 32.6, 36.8, 47.7, 55.8, 69.4, 71.5, 69.4, 62.6, 52.8, 46.4, 30.7, 50.5],\n [32, 29.5, 42, 51.6, 57.6, 65.2, 74.9, 68.8, 62.8, 56, 41.2, 27.5, 50.8], \n [31.8, 27, 35.5, 46.1, 62.5, 65.5, 77.1, 69, 62.5, 52.6, 41.9, 39.4, 50.9],\n [21.4, 27.7, 36, 47.4, 58.6, 68, 73.2, 68.9, 63.7, 57.5, 45.7, 38.5, 50.6],\n [39.3, 27.7, 42.4, 48, 55.1, 67.5, 73.8, 70.7, 62, 56.5, 46.5, 37.8, 52.3], \n [28.7, 24.3, 36.7, 45.3, 60.3, 67.3, 68.6, 70.4, 64.6, 57, 42.7, 30.4, 49.7],\n ]\n ", "The associated asymmetric diverging colormap:", "temp_data, asym_cmap, dmin, dmax =asymmetric_cmap(temp, sns_cmap, ref_point=32.0)\nprint 'data min value =', dmin, 'data max value =', dmax\n\nplt.rcParams['figure.figsize'] = 12,8\n\ndtick=(32-dmin)/2\nticks=np.arange(dmin, dmax, dtick)\n\nfig = plt.figure()\nax = fig.add_subplot(111)\nextent=[0,11.0, 1960, 2000]\naspect=abs((extent[1]-extent[0])/(extent[3]-extent[2]))/1.2\nimg=ax.imshow(temp_data[:,:12], interpolation=\"bilinear\", extent=extent, aspect=aspect, cmap=asym_cmap)\ncbar = fig.colorbar(img, ticks=ticks.round(2))\nax.set_title('Average Temperature in Boston, Month by Year 1960-2000')\n\nfrom 
IPython.core.display import HTML\ndef css_styling():\n styles = open(\"./custom.css\", \"r\").read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rusucosmin/courses
sds/lectures/Exercise/template/Exercise6.ipynb
mit
[ "# Useful starting lines\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n%load_ext autoreload\n%autoreload 2", "Support Vector Machines\nClassification Using SVM\nThis tutorial is based on an EPFL machine learning course.\nLoad dataset. We will use the CERN dataset, available from a previous EPFL machine learning challenge. You can download the data here: https://inclass.kaggle.com/c/epfml-project-1/data", "from proj1_helpers import load_csv_data\nDATA_TRAIN_PATH = '../data/train.csv'  # TODO: download train data and supply path here\ny, x, ids = load_csv_data(DATA_TRAIN_PATH)\n# TODO: convert labels to -1,1 ?\n\n## Note: This is the raw dataset, you can also work with your modified features if you prefer\n\ndef calculate_cost(y, x, w, lambda_):\n    \"\"\"compute the full cost (the primal objective), that is loss plus regularizer.\"\"\"\n    # Here x is the full dataset matrix, and y are the corresponding +1 or -1 labels\n    # ***************************************************\n    # INSERT YOUR CODE HERE\n    # TODO\n    # ***************************************************\n    raise NotImplementedError", "Stochastic Gradient Descent for SVM\nCompute the (stochastic) subgradient for the n-th summand of the SVM optimization objective", "def calculate_gradient(y, x, w, lambda_, n):\n    \"\"\"compute the stochastic gradient of loss plus regularizer.\"\"\"\n    # Here x is one datapoint, and y is the corresponding +1 or -1 label\n    #\n    # ***************************************************\n    # INSERT YOUR CODE HERE\n    # TODO\n    # ***************************************************\n    # Be careful about the constant N(size) term!
The complete objective for SVM is a sum, not an average as in earlier SGD examples!\n    raise NotImplementedError", "Implement stochastic gradient descent: pick a data point uniformly at random and update w based on the gradient for the n-th summand of the objective.", "def sgd_for_svm_demo(y, x):\n    # ***************************************************\n    # INSERT YOUR CODE HERE\n    # classify the data by SGD for SVM: TODO\n    # ***************************************************\n    max_iter = 10000\n    gamma = 0.001  # Step-size\n    lambda_ = 1.0 / y.shape[0]  # or set to a different value, try cross-validation!\n    \n    w = np.zeros((x.shape[1], 1))\n    \n    for iter in range(max_iter):\n        # n = sample one data point uniformly at random from x\n        raise NotImplementedError\n        # loss = TODO\n        # grad = TODO don't forget about the regularizer term\n        # w = update w\n        raise NotImplementedError\n        \n        if iter % 1000 == 0:\n            print(\"Current iteration={i}, the loss={l}\".format(i=iter, l=loss))\n    \n    print(\"Objective = {l}\".format(l=calculate_cost(y, x, w, lambda_)))\n\nsgd_for_svm_demo(y, x)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AlexGascon/playing-with-keras
#3 - Improving text generation/3.2 - Increasing dataset size.ipynb
apache-2.0
[ "3.2. Increasing dataset size\nThe next thing we're going to try is to increase the size of our dataset. In the previous trainings, we used a small subset of the book \"Don Quijote de la Mancha\" that contained 169KB of text.\nThe problem is that what we're essentially doing is teaching Spanish to our RNN. And, let's be honest, it's quite difficult to learn a language from scratch by reading only 169K characters (a few chapters of a book); we'll learn some words and maybe even a few sentences, but it's very difficult to really learn the language.\nTherefore, in order to solve this, we'll greatly increase the size of the dataset. We'll use the entire \"Don Quijote de la Mancha\" book, and to it we'll append another very famous Spanish book, \"La Regenta\" by Leopoldo Alas. Combining both, we'll get a dataset of about 4MB (more than 20x the previous one). And, although this will slow down our training a lot, it will very likely be a huge improvement in our results.
\nLet's start the code:", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import Dropout\nfrom keras.layers import LSTM\nfrom keras.callbacks import ModelCheckpoint\nfrom keras.utils import np_utils", "The next step will be to read both books and combine them into a single dataset; then we'll proceed with the usual calculations.", "# Load the books and merge them into a single string\nfilename1 = \"El ingenioso hidalgo don Quijote de la Mancha.txt\"\nbook1 = open(filename1).read()\nfilename2 = \"La Regenta.txt\"\nbook2 = open(filename2).read()\nbook = book1 + book2\n\n# Create mapping of unique chars to integers, and its reverse\nchars = sorted(list(set(book)))\nchar_to_int = dict((c, i) for i, c in enumerate(chars))\nint_to_char = dict((i, c) for i, c in enumerate(chars))\n\n# Summarizing the loaded data\nn_chars = len(book)\nn_vocab = len(chars)\nprint \"Total Characters: \", n_chars\nprint \"Total Vocab: \", n_vocab\n\n# Prepare the dataset of input to output pairs encoded as integers\nseq_length = 100\ndataX = []\ndataY = []\n\n# Iterating over the book\nfor i in range(0, n_chars - seq_length, 1):\n    sequence_in = book[i:i + seq_length]\n    sequence_out = book[i + seq_length]\n    \n    # Converting each char to its corresponding int\n    sequence_in_int = [char_to_int[char] for char in sequence_in]\n    sequence_out_int = char_to_int[sequence_out]\n\n    # Appending the result to the current data\n    dataX.append(sequence_in_int)\n    dataY.append(sequence_out_int)\nn_patterns = len(dataX)\nprint \"Total Patterns: \", n_patterns\n\n# Reshaping X to be [samples, time steps, features]\nX = np.reshape(dataX, (n_patterns, seq_length, 1))\n# Normalizing\nX = X / float(n_vocab)\n# One hot encode the output variable\ny = np_utils.to_categorical(dataY)\n\n# Define the LSTM model\nmodel = Sequential()\nmodel.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]),
return_sequences=True))\nmodel.add(Dropout(0.2))\nmodel.add(LSTM(256))\nmodel.add(Dropout(0.2))\nmodel.add(Dense(y.shape[1], activation='softmax'))\n\n\n# Starting from a checkpoint (if we set one)\ncheckpoint = \"\"\nif checkpoint:\n    model.load_weights(checkpoint)\n\n# Amount of epochs that we still have to run\nepochs_run = 0\nepochs_left = 50 - epochs_run\n\n# Define the checkpoints structure\nfilepath = \"weights-improvement-{epoch:02d}-{loss:.4f}.hdf5\"\ncheckpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')\ncallbacks_list = [checkpoint]\n\n# Compiling the model\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n\n# Fitting the model\nmodel.fit(X, y, nb_epoch=epochs_left, batch_size=64, callbacks=callbacks_list)", "(We won't see the results here because I've actually executed this code in another machine, not directly in the notebook; as you can imagine, this will take a loooooong time).\nWell, so here we are again! If you're reading this once I've finished the notebook you won't notice the pause, but I'm writing this two weeks later than the previous paragraph.\nAs I predicted, the NN took a looooooooong time to learn. Each one of the epochs required about 11 hours to finish! And besides, there's another important thing to take into account: the NN stopped generating weights after the 10th one, although the code is still running. I'd like to think that it happened because the loss stopped decreasing at that point (which wouldn't be as bad as it may seem, because due to the big size of the dataset we achieved quite good results even with few epochs), but we can't know it for sure at the moment; however, I will for sure update this notebook when I analyse it more precisely, so don't stop visiting it.
\nAnd now that this part is explained, let's go back to what really matters: the results!\nIn order to test our neural net, we'll use the two approaches tried before, in order to see the results achieved with each one: choosing the most probable character each iteration and using the output probabilities as the Probability Density Function.\n3.2.1. Preparing the prediction\nIn this section we're going to include all the code that is common to both prediction methods (loading the weights, preparing the seed...) in order to avoid executing the same code twice", "# Load the network weights\nfilename = \"weights-improvement-09-1.5410.hdf5\"\nmodel.load_weights(filename)\nmodel.compile(loss='categorical_crossentropy', optimizer='adam')\n\n# Pick a random seed\nstart = np.random.randint(0, len(dataX)-1)\npattern = dataX[start]\nstarting_pattern = list(pattern)  # saving a copy (list() avoids mutating it later)\nseed = ''.join([int_to_char[value] for value in pattern])\nprint \"Seed:\"\nprint \"\\\"\", seed, \"\\\"\"\nresult_str = \"\"", "3.2.2. Most probable character\nThe code to use here doesn't need much explanation, as it is exactly the one we've used on previous notebooks. You can check them to see the reason for using this method.", "# Generate characters\nfor i in range(500):\n    x = np.reshape(pattern, (1, len(pattern), 1))\n    x = x / float(n_vocab)\n    prediction = model.predict(x, verbose=0)\n    index = np.argmax(prediction)\n    result = int_to_char[index]\n    seq_in = [int_to_char[value] for value in pattern]\n    result_str += result\n    pattern.append(index)\n    pattern = pattern[1:len(pattern)]\nprint \"\\nDone.\"", "3.2.3. Randomized prediction\nThe code to use here doesn't need much explanation, as it is exactly the one we've used on previous notebooks.
You can check them to see the reason for using this method.", "pattern = starting_pattern  # Restoring the seed to its initial state\nresult_str = \"\"\n\n# Generate characters\nfor i in range(500):\n    x = np.reshape(pattern, (1, len(pattern), 1))\n    x = x / float(n_vocab)\n\n    # Choosing the character randomly, using the predicted\n    # probabilities as the probability density function\n    prediction = model.predict(x, verbose=0)\n    prob_cum = np.cumsum(prediction[0])\n    rand_ind = np.random.rand()\n    for j in range(len(prob_cum)):\n        if rand_ind <= prob_cum[j]:\n            index = j\n            break\n\n    result = int_to_char[index]\n    seq_in = [int_to_char[value] for value in pattern]\n    result_str += result\n    pattern.append(index)\n    pattern = pattern[1:len(pattern)]\nprint \"\\nDone.\"" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
klavinslab/coral
docs/tutorial/design/design_primers.ipynb
mit
[ "Primer Design\nOne of the first things anyone learns in a molecular biology lab is how to design primers. The exact strategies vary a lot and are sometimes polymerase-specific. coral uses the Klavins lab approach of targeting a specific melting temperature (Tm) and nothing else, with the exact Tm targeted being between 65°C and 72°C, the choice being personal preference. coral currently defaults to 72°C on the Phusion (modified Breslauer 1986) Tm calculator.\ncoral.design.primer is a function that takes in a sequence.DNA object and rapidly finds the 5' subsequence that is closest to the desired Tm (within a user-definable error range). If the entire sequence would make a primer with too low of a Tm, a descriptive error is produced.\nFor this tutorial, let's design primers that will amplify the gene EYFP.", "import coral as cor", "First we read in a plasmid from Havens et al. 2012 and isolate the EYFP sequence.", "plasmid = cor.seqio.read_dna(\"../files_for_tutorial/maps/pGP4G-EYFP.ape\")\neyfp_f = [f for f in plasmid.features if f.name == 'EYFP'][0]\neyfp = plasmid.extract(eyfp_f)\nprint len(eyfp)\neyfp", "Designing primers is straightforward - you just call cor.design.primer with a sequence.DNA object as the input.", "# Forward and reverse, one at a time using design.primer()\nforward = cor.design.primer(eyfp)\nreverse = cor.design.primer(eyfp.reverse_complement())\n# Both at once using design.primers()\nforward, reverse = cor.design.primers(eyfp)\n# design.primer has many options, including adding overhangs\ncustom_forward = cor.design.primer(eyfp, tm=65, min_len=12, \n                                   tm_undershoot=1, tm_overshoot=1, \n                                   end_gc=True, tm_parameters=\"santalucia98\", \n                                   overhang=cor.DNA(\"GGGGGATCGAT\"))\nprint forward\nprint\nprint custom_forward", "Designing primers and getting a string output is just the first step in primer design - we want to know whether the primers actually work and write them out to a file.
The point of programming DNA is that you never copy and paste!\nTo simulate a PCR using the rules of molecular biology, use coral.reaction.pcr. The output is a subsequence of the template DNA - the features may not match the plasmid exactly (due to being truncated by the PCR), but the sequences match. If a primer would bind in multiple places (exact matches to the template), the pcr function will fail and give a useful message.\nYou can check for identical sequences using python's built in == operator.", "amplicon = cor.reaction.pcr(plasmid, forward, reverse)\namplicon == eyfp", "Now that we have verified that our primers should at least amplify the DNA that we want, let's write out our primers to file so they can be submitted to an oligo synthesis company.", "# First we give our primers names (the `.name` attribute is empty by default)\nforward.name = \"EYFP_forward\"\nreverse.name = \"EYFP_reverse\"\n# Then we write to file - a csv (comma separated value file)\ncor.seqio.write_primers([forward, reverse], \"./designed_primers.csv\", [\"Forward EYFP primer\", \"Reverse EYFP primer\"])", "The csv file can then be opened in a spreadsheet application like Excel or processed by a downstream program. This is the format of the csv:", "import csv\nwith open(\"./designed_primers.csv\", \"r\") as csv_file:\n reader = csv.reader(csv_file)\n lines = [line for line in reader]\nfor line in lines:\n print line" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
rocketproplab/Guides
Guides/stacksGuides/Stacks_Getting_Started.ipynb
mit
[ "Stacks Getting Started\nWelcome to the Subinitial Stacks Getting Started Guide!\nThis document will guide you through setup of the Stacks and your first script.\nUseful Links\nOfficial Hardware Getting Started Guide\nOfficial Software Getting Started Guide\nAll Official Documentation\nSubinitial Python Library Documentation\nInstalling Python and Required Libraries\nPython 3.5+ is required to use the Stacks. Install Python by following this link:\nPython Download\nInstall git if you haven't (it's a good idea!).\nGit\nThen, install the Subinitial Python Library by running the following command in your command line:", "!pip3 install --user git+https://bitbucket.org/subinitial/subinitial.git", "Run the following code to make sure the library installed correctly.", "import subinitial.stacks as stacks\nprint(\"Stacks Library Major Version:\", stacks.VERSION_STACKS[0])", "You should see the following output:\nStacks Library Major Version: 1\n\nSetting up your Stacks\nFor best results, use the Stacks with a laptop with a Wi-Fi connection.\nConnect an Ethernet cable from the Stacks to your computer. Verify that the second light from the top in the light bank\nis lit.\n* If this light is not lit, verify the connection.\nOpen your terminal or command prompt, and type the following command:", "!ping 192.168.1.49", "If you are able to ping it successfully, great! The Stacks is now communicating with your computer.
\n* If the ping failed, try assigning a static IP to your computer.\n* Assign IP: 192.168.1.40 and Subnet mask: 255.255.255.0\n + On Windows 10, right-click on the Wi-Fi icon, and click on Open Network and Sharing Center.\n + Click on Change adapter settings.\n + Right-click on your Ethernet connection (the one with a cable), and select Properties.\n + Select Internet Protocol Version 4 (TCP/IPv4).\n + Click the Properties button.\n + When the window pops up, click on Use the following IP address, and enter the following information:\n - IP: 192.168.1.40\n - Subnet mask: 255.255.255.0\nRun the following script to verify that everything works:", "import subinitial.stacks as stacks\ncore = stacks.Core(host=\"192.168.1.49\")\ncore.print_console(\"id\")\n", "The output from this script should read:\n >id\n Core (cor1 SA13729)\n v1.13.0_2017-01-22_21:40:27\n Bus Address:1" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
smorton2/think-stats
code/chap02ex.ipynb
gpl-3.0
[ "Examples and Exercises from Think Stats, 2nd Edition\nhttp://thinkstats2.com\nCopyright 2016 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT", "from __future__ import print_function, division\n\n%matplotlib inline\n\nimport numpy as np\n\nimport nsfg\nimport first", "Given a list of values, there are several ways to count the frequency of each value.", "t = [1, 2, 2, 3, 5]", "You can use a Python dictionary:", "hist = {}\nfor x in t:\n hist[x] = hist.get(x, 0) + 1\n \nhist", "You can use a Counter (which is a dictionary with additional methods):", "from collections import Counter\ncounter = Counter(t)\ncounter", "Or you can use the Hist object provided by thinkstats2:", "import thinkstats2\nhist = thinkstats2.Hist([1, 2, 2, 3, 5])\nhist", "Hist provides Freq, which looks up the frequency of a value.", "hist.Freq(2)", "You can also use the bracket operator, which does the same thing.", "hist[2]", "If the value does not appear, it has frequency 0.", "hist[4]", "The Values method returns the values:", "hist.Values()", "So you can iterate the values and their frequencies like this:", "for val in sorted(hist.Values()):\n print(val, hist[val])", "Or you can use the Items method:", "for val, freq in hist.Items():\n print(val, freq)", "thinkplot is a wrapper for matplotlib that provides functions that work with the objects in thinkstats2.\nFor example Hist plots the values and their frequencies as a bar graph.\nConfig takes parameters that label the x and y axes, among other things.", "import thinkplot\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='value', ylabel='frequency')", "As an example, I'll replicate some of the figures from the book.\nFirst, I'll load the data from the pregnancy file and select the records for live births.", "preg = nsfg.ReadFemPreg()\nlive = preg[preg.outcome == 1]", "Here's the histogram of birth weights in pounds. Notice that Hist works with anything iterable, including a Pandas Series. 
The label attribute appears in the legend when you plot the Hist.", "hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='Birth weight (pounds)', ylabel='Count')", "Before plotting the ages, I'll apply floor to round down:", "ages = np.floor(live.agepreg)\n\nhist = thinkstats2.Hist(ages, label='agepreg')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='years', ylabel='Count')", "As an exercise, plot the histogram of pregnancy lengths (column prglngth).", "# Solution goes here\nlength = live.prglngth\n\nhist = thinkstats2.Hist(length, label='prglngth')\nthinkplot.Hist(hist)\nthinkplot.Config(xlabel='Weeks', ylabel='Count')", "Hist provides Smallest, which selects the lowest values and their frequencies.", "for weeks, freq in hist.Smallest(10):\n    print(weeks, freq)", "Use Largest to display the longest pregnancy lengths.", "# Solution goes here\nfor weeks, freq in hist.Largest(10):\n    print(weeks, freq)", "From live births, we can select first babies and others using birthord, then compute histograms of pregnancy length for the two groups.", "firsts = live[live.birthord == 1]\nothers = live[live.birthord != 1]\n\nfirst_hist = thinkstats2.Hist(firsts.prglngth, label='first')\nother_hist = thinkstats2.Hist(others.prglngth, label='other')", "We can use width and align to plot two histograms side-by-side.", "width = 0.45\nthinkplot.PrePlot(2)\nthinkplot.Hist(first_hist, align='right', width=width)\nthinkplot.Hist(other_hist, align='left', width=width)\nthinkplot.Config(xlabel='weeks', ylabel='Count', xlim=[27, 46])", "Series provides methods to compute summary statistics:", "mean = live.prglngth.mean()\nvar = live.prglngth.var()\nstd = live.prglngth.std()", "Here are the mean and standard deviation:", "mean, std", "As an exercise, confirm that std is the square root of var:", "# Solution goes here\nstd == np.sqrt(var)", "Here are the mean pregnancy lengths for first babies and others:",
"firsts.prglngth.mean(), others.prglngth.mean()", "And here's the difference (in weeks):", "firsts.prglngth.mean() - others.prglngth.mean()", "This function computes the Cohen effect size, which is the difference in means expressed in number of standard deviations:", "def CohenEffectSize(group1, group2):\n    \"\"\"Computes Cohen's effect size for two groups.\n    \n    group1: Series or DataFrame\n    group2: Series or DataFrame\n    \n    returns: float if the arguments are Series;\n             Series if the arguments are DataFrames\n    \"\"\"\n    diff = group1.mean() - group2.mean()\n\n    var1 = group1.var()\n    var2 = group2.var()\n    n1, n2 = len(group1), len(group2)\n\n    pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)\n    d = diff / np.sqrt(pooled_var)\n    return d", "Compute the Cohen effect size for the difference in pregnancy length for first babies and others.", "# Solution goes here\n\nCohenEffectSize(firsts.prglngth, others.prglngth)", "Exercises\nUsing the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. \nCompute Cohen’s effect size to quantify the difference between the groups.
How does it compare to the difference in pregnancy length?", "# Solution goes here\n\nfirsts_hist = thinkstats2.Hist(np.round(firsts.totalwgt_lb), label='firsts')\nothers_hist = thinkstats2.Hist(np.round(others.totalwgt_lb), label='others')\n\nprint('mean')\nprint('firsts:', firsts.totalwgt_lb.mean())\nprint('others:', others.totalwgt_lb.mean())\nprint('')\nprint('stdev')\nprint('firsts:', firsts.totalwgt_lb.std())\nprint('others:', others.totalwgt_lb.std())\nprint('')\nprint('median')\nprint('firsts:', firsts.totalwgt_lb.median())\nprint('others:', others.totalwgt_lb.median())\n\nwidth = 0.45\nthinkplot.PrePlot(2)\nthinkplot.Hist(firsts_hist, align='left', width=width)\nthinkplot.Hist(others_hist, align='right', width=width)\nthinkplot.Config(xlabel='pounds', ylabel='Count', xlim=(0, 15))\n\n# Solution goes here\nCohenEffectSize(firsts.totalwgt_lb, others.totalwgt_lb)", "For the next few exercises, we'll load the respondent file:", "resp = nsfg.ReadFemResp()", "Make a histogram of <tt>totincr</tt>, the total income for the respondent's family. To interpret the codes see the codebook.", "# Solution goes here\ninc_hist = thinkstats2.Hist(resp.totincr)\nthinkplot.Hist(inc_hist)", "Make a histogram of <tt>age_r</tt>, the respondent's age at the time of interview.", "# Solution goes here\nage_hist = thinkstats2.Hist(resp.age_r)\nthinkplot.Hist(age_hist)", "Make a histogram of <tt>numfmhh</tt>, the number of people in the respondent's household.", "# Solution goes here\nfmhh_hist = thinkstats2.Hist(resp.numfmhh)\nthinkplot.Hist(fmhh_hist)", "Make a histogram of <tt>parity</tt>, the number of children borne by the respondent. How would you describe this distribution?", "# Solution goes here\nparity_hist = thinkstats2.Hist(resp.parity)\nthinkplot.Hist(parity_hist)", "This distribution is right-skewed, with a long tail.
Women are about as likely to have 1 as 2 children, though parity drops off significantly after 2 children.\nUse Hist.Largest to find the largest values of <tt>parity</tt>.", "# Solution goes here\nparity_hist.Largest(10)", "Let's investigate whether people with higher income have higher parity. Keep in mind that in this study, we are observing different people at different times during their lives, so this data is not the best choice for answering this question. But for now let's take it at face value.\nUse <tt>totincr</tt> to select the respondents with the highest income (level 14). Plot the histogram of <tt>parity</tt> for just the high income respondents.", "# Solution goes here\nhinc = resp[resp['totincr'] == 14]\nother = resp[resp['totincr'] < 14]\nhinc_hist = thinkstats2.Hist(hinc.parity)\nthinkplot.Hist(hinc_hist)", "Find the largest parities for high income respondents.", "# Solution goes here\nhinc_hist.Largest(5)", "Compare the mean <tt>parity</tt> for high income respondents and others.", "# Solution goes here\nprint('mean parity, high income:', hinc.parity.mean())\nprint('mean parity, other:', other.parity.mean())", "Compute the Cohen effect size for this difference. How does it compare with the difference in pregnancy length for first babies and others?", "# Solution goes here\nCohenEffectSize(hinc.parity, other.parity)", "The Cohen Effect Size for the difference in parity between mothers with high income and mothers with low income is much larger than the Cohen Effect Size for the difference in pregnancy length for first babies and others. It is also negative, suggesting that, if anything, mothers with high incomes tend to have fewer children." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
random-forests/tensorflow-workshop
archive/examples/00_test_install.ipynb
apache-2.0
[ "You can press shift + enter to quickly advance through each line of a notebook. Try it!\nCheck that you have a recent version of TensorFlow installed, v1.0 or higher.", "import tensorflow as tf\nprint(\"You have version %s\" % tf.__version__)", "Check if Matplotlib is working. After running this cell, you should see a plot appear below.", "%matplotlib inline\nimport pylab\nimport numpy as np\n\n# create some data using numpy. y = x * 0.1 + 0.3 + noise\nx = np.random.rand(100).astype(np.float32)\nnoise = np.random.normal(scale=0.01, size=len(x))\ny = x * 0.1 + 0.3 + noise\n\n# plot it\npylab.plot(x, y, '.')", "Check if Numpy and Pillow are working. After running this cell, you should see a random image appear below.", "import PIL.Image as Image\nimport numpy as np\nfrom matplotlib.pyplot import imshow\n\nimage_array = np.random.rand(200,200,3) * 255\nimg = Image.fromarray(image_array.astype('uint8')).convert('RGBA')\nimshow(np.asarray(img))", "Check if Pandas is working. After running this cell, you should see a table appear below.", "import pandas as pd\nnames = ['Bob','Jessica','Mary','John','Mel']\nbirths = [968, 155, 77, 578, 973]\nBabyDataSet = list(zip(names,births))\npd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])", "That's it! You're ready to start the workshop." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
self-paced-labs/tfx/tfx-vertex/vertex_pipelines_simple.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Simple TFX Pipeline for Vertex Pipelines\nThis notebook-based tutorial will create a simple TFX pipeline and run it using\nGoogle Cloud Vertex Pipelines. This notebook is based on the TFX pipeline\nbuilt in\nSimple TFX Pipeline Tutorial.\nGoogle Cloud Vertex Pipelines helps you to automate, monitor, and govern\nyour ML systems by orchestrating your ML workflow in a serverless manner. You\ncan define your ML pipelines using Python with TFX, and then execute your\npipelines on Google Cloud. 
See\nVertex Pipelines introduction\nto learn more about Vertex Pipelines.\nSetup\nInstall Python packages\nWe will install required Python packages including TFX and KFP to author ML\npipelines and submit jobs to Vertex Pipelines.", "# Use the latest version of pip.\n!pip install --upgrade pip\n!pip install --upgrade \"tfx[kfp]<2\"", "Restart the runtime\nRestart the runtime to ensure the following cells use the updated versions.\nYou can restart the runtime with the following cell:", "# docs_infra: no_execute\nimport sys\nif not 'google.colab' in sys.modules:\n # Automatically restart kernel after installs\n import IPython\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Check the package versions.", "import tensorflow as tf\nprint('TensorFlow version: {}'.format(tf.__version__))\nfrom tfx import v1 as tfx\nprint('TFX version: {}'.format(tfx.__version__))\nimport kfp\nprint('KFP version: {}'.format(kfp.__version__))", "Set up variables\nWe will set up some variables used to customize the pipelines below. The following\ninformation is required:\n\nGCP Project id. You can find your Project ID in the panel with your lab instructions.\nGCP Region to run pipelines. 
For more information about the regions that\nVertex Pipelines is available in, see the\nVertex AI locations guide.\nGoogle Cloud Storage Bucket to store pipeline outputs.\n\nEnter required values in the cell below before running it.", "GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS\nGOOGLE_CLOUD_REGION = 'us-central1' \nGCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-gcs'\n\nif not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):\n from absl import logging\n logging.error('Please set all required parameters.')", "Set gcloud to use your project.", "!gcloud config set project {GOOGLE_CLOUD_PROJECT}\n\nPIPELINE_NAME = 'penguin-vertex-pipelines'\n\n# Path to various pipeline artifact.\nPIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for users' Python module.\nMODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# Paths for input data.\nDATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)\n\n# This is the path where your model will be pushed for serving.\nSERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(\n GCS_BUCKET_NAME, PIPELINE_NAME)\n\nprint('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))", "Prepare example data\nThe dataset we are using is the\nPalmer Penguins dataset.\nThere are four numeric features in this dataset:\n\nculmen_length_mm\nculmen_depth_mm\nflipper_length_mm\nbody_mass_g\n\nAll features were already normalized\nto have range [0,1]. We will build a classification model which predicts the\nspecies of penguins.\nWe need to make our own copy of the dataset. Because TFX ExampleGen reads\ninputs from a directory, we need to create a directory and copy dataset to it\non GCS.", "!gsutil cp gs://download.tensorflow.org/data/palmer_penguins/penguins_processed.csv {DATA_ROOT}/", "Take a quick look at the CSV file.", "!gsutil cat {DATA_ROOT}/penguins_processed.csv | head", "You should be able to see five values. 
species is one of 0, 1 or 2, and all other features should have values between 0 and 1.\nCreate a pipeline\nTFX pipelines are defined using Python APIs. We will define a pipeline which\nconsists of three components:\n\nCsvExampleGen: Reads in data files and converts them to TFX internal format for further processing. There are multiple ExampleGens for various formats. In this tutorial, we will use CsvExampleGen which takes CSV file input.\nTrainer: Trains an ML model. Trainer component requires a model definition code from users. You can use TensorFlow APIs to specify how to train a model and save it in a SavedModel format.\nPusher: Copies the trained model outside of the TFX pipeline. Pusher component can be thought of as a deployment process for the trained ML model.\n\nOur pipeline will be almost identical to a basic TFX pipeline.\nThe only difference is that we don't need to set metadata_connection_config\nwhich is used to locate\nML Metadata database. Because\nVertex Pipelines uses a managed metadata service, users don't need to take care\nof it, and we don't need to specify the parameter.\nBefore actually defining the pipeline, we need to write the model code for the\nTrainer component first.\nWrite model code.\nWe will create a simple DNN model for classification using TensorFlow Keras API. This model training code will be saved to a separate file.\nIn this tutorial we will use the Generic Trainer of TFX which supports Keras-based models. 
You need to write a Python file containing run_fn function, which is the entrypoint for the Trainer component.", "_trainer_module_file = 'penguin_trainer.py'\n\n%%writefile {_trainer_module_file}\n\n# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple\n\nfrom typing import List\nfrom absl import logging\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow_transform.tf_metadata import schema_utils\n\n\nfrom tfx import v1 as tfx\nfrom tfx_bsl.public import tfxio\n\nfrom tensorflow_metadata.proto.v0 import schema_pb2\n\n_FEATURE_KEYS = [\n 'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'\n]\n_LABEL_KEY = 'species'\n\n_TRAIN_BATCH_SIZE = 20\n_EVAL_BATCH_SIZE = 10\n\n# Since we're not generating or creating a schema, we will instead create\n# a feature spec. Since there are a fairly small number of features this is\n# manageable for this dataset.\n_FEATURE_SPEC = {\n **{\n feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)\n for feature in _FEATURE_KEYS\n },\n _LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)\n}\n\n\ndef _input_fn(file_pattern: List[str],\n data_accessor: tfx.components.DataAccessor,\n schema: schema_pb2.Schema,\n batch_size: int) -> tf.data.Dataset:\n \"\"\"Generates features and label for training.\n\n Args:\n file_pattern: List of paths or patterns of input tfrecord files.\n data_accessor: DataAccessor for converting input to RecordBatch.\n schema: schema of the input data.\n batch_size: representing the number of consecutive elements of returned\n dataset to combine in a single batch\n\n Returns:\n A dataset that contains (features, indices) tuple where features is a\n dictionary of Tensors, and indices is a single Tensor of label indices.\n \"\"\"\n return data_accessor.tf_dataset_factory(\n file_pattern,\n tfxio.TensorFlowDatasetOptions(\n batch_size=batch_size, label_key=_LABEL_KEY),\n schema=schema).repeat()\n\n\ndef _make_keras_model() -> tf.keras.Model:\n 
\"\"\"Creates a DNN Keras model for classifying penguin data.\n\n Returns:\n A Keras Model.\n \"\"\"\n # The model below is built with Functional API, please refer to\n # https://www.tensorflow.org/guide/keras/overview for all API options.\n inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]\n d = keras.layers.concatenate(inputs)\n for _ in range(2):\n d = keras.layers.Dense(8, activation='relu')(d)\n outputs = keras.layers.Dense(3)(d)\n\n model = keras.Model(inputs=inputs, outputs=outputs)\n model.compile(\n optimizer=keras.optimizers.Adam(1e-2),\n loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n metrics=[keras.metrics.SparseCategoricalAccuracy()])\n\n model.summary(print_fn=logging.info)\n return model\n\n\n# TFX Trainer will call this function.\ndef run_fn(fn_args: tfx.components.FnArgs):\n \"\"\"Train the model based on given args.\n\n Args:\n fn_args: Holds args used to train the model as name/value pairs.\n \"\"\"\n\n # This schema is usually either an output of SchemaGen or a manually-curated\n # version provided by pipeline author. A schema can also be derived from TFT\n # graph if a Transform component is used. 
In the case when either is missing,\n # `schema_from_feature_spec` could be used to generate schema from very simple\n # feature_spec, but the schema returned would be very primitive.\n schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)\n\n train_dataset = _input_fn(\n fn_args.train_files,\n fn_args.data_accessor,\n schema,\n batch_size=_TRAIN_BATCH_SIZE)\n eval_dataset = _input_fn(\n fn_args.eval_files,\n fn_args.data_accessor,\n schema,\n batch_size=_EVAL_BATCH_SIZE)\n\n model = _make_keras_model()\n model.fit(\n train_dataset,\n steps_per_epoch=fn_args.train_steps,\n validation_data=eval_dataset,\n validation_steps=fn_args.eval_steps)\n\n # The result of the training should be saved in `fn_args.serving_model_dir`\n # directory.\n model.save(fn_args.serving_model_dir, save_format='tf')", "Copy the module file to GCS which can be accessed from the pipeline components.\nBecause model training happens on GCP, we need to upload this model definition. \nOtherwise, you might want to build a container image including the module file\nand use the image to run the pipeline.", "!gsutil cp {_trainer_module_file} {MODULE_ROOT}/", "Write a pipeline definition\nWe will define a function to create a TFX pipeline.", "# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple and\n# slightly modified because we don't need `metadata_path` argument.\n\ndef _create_pipeline(pipeline_name: str, pipeline_root: str, data_root: str,\n module_file: str, serving_model_dir: str,\n ) -> tfx.dsl.Pipeline:\n \"\"\"Creates a three component penguin pipeline with TFX.\"\"\"\n # Brings data into the pipeline.\n example_gen = tfx.components.CsvExampleGen(input_base=data_root)\n\n # Uses user-provided Python function that trains a model.\n trainer = tfx.components.Trainer(\n module_file=module_file,\n examples=example_gen.outputs['examples'],\n train_args=tfx.proto.TrainArgs(num_steps=100),\n eval_args=tfx.proto.EvalArgs(num_steps=5))\n\n # Pushes the model to a 
filesystem destination.\n pusher = tfx.components.Pusher(\n model=trainer.outputs['model'],\n push_destination=tfx.proto.PushDestination(\n filesystem=tfx.proto.PushDestination.Filesystem(\n base_directory=serving_model_dir)))\n\n # Following three components will be included in the pipeline.\n components = [\n example_gen,\n trainer,\n pusher,\n ]\n\n return tfx.dsl.Pipeline(\n pipeline_name=pipeline_name,\n pipeline_root=pipeline_root,\n components=components)", "Run the pipeline on Vertex Pipelines.\nTFX provides multiple orchestrators to run your pipeline. In this tutorial we\nwill use the Vertex Pipelines together with the Kubeflow V2 dag runner.\nWe need to define a runner to actually run the pipeline. You will compile\nyour pipeline into our pipeline definition format using TFX APIs.", "import os\n\nPIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'\n\nrunner = tfx.orchestration.experimental.KubeflowV2DagRunner(\n config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),\n output_filename=PIPELINE_DEFINITION_FILE)\n# Following function will write the pipeline definition to PIPELINE_DEFINITION_FILE.\n_ = runner.run(\n _create_pipeline(\n pipeline_name=PIPELINE_NAME,\n pipeline_root=PIPELINE_ROOT,\n data_root=DATA_ROOT,\n module_file=os.path.join(MODULE_ROOT, _trainer_module_file),\n serving_model_dir=SERVING_MODEL_DIR))", "The generated definition file can be submitted using kfp client.", "# docs_infra: no_execute\nfrom google.cloud import aiplatform\nfrom google.cloud.aiplatform import pipeline_jobs\n\naiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)\n\njob = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,\n display_name=PIPELINE_NAME)\njob.run(sync=False)", "Visit Vertex AI > Pipelines in your Google Cloud Console page to see the progress.\nClick on your penguin-vertex-pipelines-xxx run:\n\nExplore the information displayed in each step while you wait for the job to progress.\nOn completion, 
your pipeline UI should look similar to this:\n\nThis job will take about 15 minutes in total to complete. Once complete, return to the lab to check your progress." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
aryarohit07/machine-learning-with-python
linear_regression/linear_regression_gradient_descent_with_one_variable.ipynb
mit
[ "The first column is the population of a city and the second column is the profit of a food truck in that city. A negative value for profit indicates a loss.", "import pandas as pd\nimport matplotlib.pyplot as plt\n\n#shape = (97, 2)\ndata = pd.read_csv('ex1data1.txt', header=None)\n\nplt.scatter(data[0], data[1])\nplt.xlabel('population')\nplt.ylabel('profit')\nplt.close()", "Data preparation", "import numpy as np\n# Now we want to have our hypothesis function: h_theta = theta' * x\n\n# creating a column of ones\nones = np.ones((len(data[0]), 1), float)\n\n#input\nX = pd.concat([pd.DataFrame(ones), pd.DataFrame(data[0])], axis=1).values\n\n#label\ny = data[1].values", "Defining cost function", "def computeCost(X, y, theta):\n m = X.shape[0]\n h = X.dot(theta)\n J = (1/(2*m)) * (np.sum(np.square(h-y)))\n return J\n \n\ntheta = np.zeros(2)\ncost = computeCost(X, y, theta)\nprint(cost)", "Defining gradient descent", "# run gradient descent\ndef gradientDescent(X, y, theta, alpha, iterations):\n m = X.shape[0]\n J_history = np.zeros(iterations)\n for iter in np.arange(iterations):\n h = X.dot(theta)\n theta = theta - alpha * (1/m) * X.T.dot(h-y)\n J_history[iter] = computeCost(X, y, theta)\n return (theta, J_history)\n \n\ntheta = np.zeros(2)\niterations = 2000\nalpha = 0.01\ntheta, J_history = gradientDescent(X, y, theta, alpha, iterations)\nplt.xlim(0,iterations)\nplt.plot(J_history)\nplt.ylabel('Cost J')\nplt.xlabel('Iterations')\nplt.show()\nprint(theta)", "Now, let's plot the fit line on the training data", "xs = np.arange(1,25)\nones = np.ones(xs.shape, float)\n\ninputXs = pd.concat([pd.DataFrame(ones),pd.DataFrame(xs)], axis=1).values\noutputYs = inputXs.dot(theta)\n\n# trying to compare with Scikit-learn\nfrom sklearn.linear_model import LinearRegression\nclf = LinearRegression()\nclf.fit(X, y)\noutputSkLearn = clf.predict(inputXs)\n\nplt.xlim(4,24)\nplt.plot(inputXs, outputSkLearn, c='b', label='scikit-learn')\nplt.plot(inputXs, outputYs, c='r', label='gradient 
descent')\nplt.legend()\nplt.scatter(data[0], data[1])\nplt.xlabel('population')\nplt.ylabel('profit')\nplt.show()\n\nprint('Looks great :D')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/mohc/cmip6/models/hadgem3-gc31-mm/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: MOHC\nSource ID: HADGEM3-GC31-MM\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:15\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-mm', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. 
Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmosphere.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. 
Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. 
Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDo the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. 
Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependencies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. 
Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. 
Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. 
Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. 
Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treatment of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dynamics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. 
Maintenance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintenance respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. 
Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. 
Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the nitrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. 
Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. 
Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. 
Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", 
"code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/cnrm-cerfacs/cmip6/models/sandbox-1/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CNRM-CERFACS\nSource ID: SANDBOX-1\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:52\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of atmospheric chemistry code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry gas phase chemistry\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
AtmaMani/pyChakras
ml/sklearn_naive_bayes_classifier.ipynb
mit
[ "Naive Bayes classification - Sklearn\nThe Naive Bayes classifier builds directly on conditional probability:\n$$\np(y|x) = \\frac{p(y \\cap x)}{p(x)}\n$$\nFrom the above formula, $p(y \\cap x)$ can be written as\n$$\np(y \\cap x) = p(x | y).p(y)\n$$\nthus\n$$\np(y|x) = \\frac{p(x|y).p(y)}{p(x)}\n$$\nIn machine learning, Naive Bayes is used to compute the conditional probability of the predicted class $y$ occurring given all the predictor variables $x$. In other words, Bayes' theorem relates P(outcome | evidence) (what we want to predict) to P(evidence | outcome) (training set).\nThe algorithm builds the conditional probabilities from the training set and applies them for prediction. The algorithm is called naive because it assumes independence between predictor variables, while in reality this may not be true in all cases.\n$$\nP(A_i | B_j) = \\frac{P(B_j | A_i)P(A_i)}{P(B_j | A_1)P(A_1) + P(B_j | A_2)P(A_2) + ... + P(B_j | A_k)P(A_k)}\n$$\nConsider $A_1, ..., A_k$ as $k$ predictor variables in machine learning. The Naive Bayes classifier will build the conditional probabilities $p(B_j|A_k)$ to later predict $p(A_i | B_j)$.\nNaive Bayes on Iris dataset\nEDA", "import seaborn as sns\niris = sns.load_dataset('iris')\n\niris.head()\n\niris.shape\n\niris['species'].value_counts().plot(kind='bar')\n\n%matplotlib inline\nsns.pairplot(iris, hue='species')", "Create features and labels", "X_iris = iris.drop('species', axis=1)\nX_iris.shape\n\ny_iris = iris['species']\ny_iris.shape", "X_iris is in caps as X is a vector of multiple features for each record, whereas y_iris is in lower case as it is a scalar for each record." ]
[ "markdown", "code", "markdown", "code", "markdown" ]
spulido99/Programacion
Margarita/.ipynb_checkpoints/Taller 1-checkpoint.ipynb
mit
[ "Workshop 1: Python Basics\n\nFunctions\nLists\nDictionaries\n\nThis workshop is for solving basic Python problems: handling lists, dictionaries, etc.\nThe workshop must be done in a Jupyter Notebook in each person's folder. There must be commits with the progress of the workshop. Below each question there is a cell for the code.\nPython Basics\n1. What version of Python is running?", "import sys\nprint('{0[0]}.{0[1]}'.format(sys.version_info))", "2. Compute the area of a circle of radius 5", "pi = 3.1416\nradio = 5\narea= pi * radio**2\n\nprint(area)", "3. Write code that prints all the colors that are in color_list_1 and are not present in color_list_2\nExpected result: \n{'Black', 'White'}", "color_list_1 = set([\"White\", \"Black\", \"Red\"]) \ncolor_list_2 = set([\"Red\", \"Green\"])\ncolor_list_1 - color_list_2", "4. Print one line for each folder that makes up the path where Python is being run\ne.g. C:/User/sergio/code/programación\nExpected output:\n+ User\n+ sergio\n+ code\n+ programacion", "path = 'C:/Users/Margarita/Documents/Mis_documentos/Biologia_EAFIT/Semestre_IX/Programacion/'\nsize = len (path)\nguardar = \"\"\nfor i in range(3,size):\n if path[i] != '/':\n guardar = guardar + path[i]\n else:\n print(guardar)\n guardar = \"\"\n\n", "List Handling\n5. Print the sum of the numbers in my_list", "my_list = [5,7,8,9,17]\n\nsum_list = sum (my_list)\n\nprint(sum_list)", "6. Insert an elemento_a_insertar before each element of my_list", "elemento_a_insertar = 'E'\nmy_list = [1, 2, 3, 4]", "The expected output is a list like this: [E, 1, E, 2, E, 3, E, 4]", "elemento_a_insertar = 'E'\nmy_list = [1, 2, 3, 4]\nsize = len (my_list)\ncarpeta = []\nfor i in range(size):\n carpeta = carpeta + [elemento_a_insertar,my_list[i]]\nmy_list = carpeta\nprint (my_list)", "7. 
Split my_list into a list of lists every N elements", "N = 3\nmy_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']", "Expected output: [['a', 'd', 'g', 'j', 'm'], ['b', 'e', 'h', 'k', 'n'], ['c', 'f', 'i', 'l']]", "N=3 \nlista=[]\nlistaa = []\nmy_list = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n']\nsize = len(my_list)\n\nfor i in range(N):\n lista = lista + [listaa]\nfor i in range (size):\n lista[i%N] = lista[i%N] + [my_list[i]]\nprint(lista)", "8. Find the list inside list_of_lists whose elements have the largest sum", "list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]", "Expected output: [10, 11, 12]", "list_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]\nsize = len(list_of_lists)\ncarpeta = list_of_lists[1]\nfor i in range(size):\n if sum(list_of_lists[i]) > sum(carpeta):\n carpeta = list_of_lists[i]\nprint(carpeta)", "Dictionary Handling\n9. Create a dictionary in which each key from 1 to N has its square as value", "N = 5", "Expected output: {1:1, 2:4, 3:9, 4:16, 5:25}", "N = 5\ndiccio = {}\nfor i in range(1,N+1):\n diccio [i]= i**2\nprint(diccio)", "10. Concatenate the dictionaries in dictionary_list to create a new one", "dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]", "Expected output: {1: 10, 2: 20, 3: 30, 4: 40, 5: 50, 6: 60}", "dictionary_list=[{1:10, 2:20} , {3:30, 4:40}, {5:50,6:60}]\nfinal= {}\nfor i in dictionary_list:\n for k in i:\n final[k] = i[k] \nprint(final)", "11. 
Add a new key \"cuadrado\" to each dictionary, with the value of its \"numero\" squared", "dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]", "Expected output: [{'numero': 10, 'cantidad': 5, 'cuadrado': 100} , {'numero': 12, 'cantidad': 3, 'cuadrado': 144}, {'numero': 5, 'cantidad': 45, 'cuadrado': 25}]", "dictionary_list=[{'numero': 10, 'cantidad': 5} , {'numero': 12, 'cantidad': 3}, {'numero': 5, 'cantidad': 45}]\nfor i in range(0,len(dictionary_list)):\n dictionary_list[i]['cuadrado']= dictionary_list[i]['numero']**2\nprint(dictionary_list)", "Function Handling\n12. Define and call a function that receives 2 parameters and solves problem 3", "def diferencia_conjuntos(color_list_1, color_list_2):\n print (color_list_1 - color_list_2)\n\n# Implement the function\ndiferencia_conjuntos( \n color_list_1 = set([\"White\", \"Black\", \"Red\"]) , \n color_list_2 = set([\"Red\", \"Green\"])) \n ", "13. Define and call a function that receives a list of lists as parameter and solves problem 8", "def max_list_of_lists(list_of_lists):\n size = len(list_of_lists)\n carpeta = list_of_lists[1]\n for i in range(size):\n if sum(list_of_lists[i]) > sum(carpeta):\n carpeta = list_of_lists[i]\n print(carpeta)\n\n# Implement the function\nlist_of_lists = [ [1,2,3], [4,5,6], [10,11,12], [7,8,9] ]\nmax_list_of_lists (list_of_lists)", "14. Define and call a function that receives a parameter N and solves problem 9", "def diccionario_cuadradovalor(N):\n diccio = {}\n final = {}\n for i in range(1,N+1):\n final = diccio [i]= i**2\n print(diccio)\n\n#Implement the function:\nN = 5\ndiccionario_cuadradovalor(N)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cams/cmip6/models/sandbox-3/atmoschem.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Atmoschem\nMIP Era: CMIP6\nInstitute: CAMS\nSource ID: SANDBOX-3\nTopic: Atmoschem\nSub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. \nProperties: 84 (39 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:43\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cams', 'sandbox-3', 'atmoschem')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Software Properties\n3. Key Properties --&gt; Timestep Framework\n4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n5. Key Properties --&gt; Tuning Applied\n6. Grid\n7. Grid --&gt; Resolution\n8. Transport\n9. Emissions Concentrations\n10. Emissions Concentrations --&gt; Surface Emissions\n11. Emissions Concentrations --&gt; Atmospheric Emissions\n12. Emissions Concentrations --&gt; Concentrations\n13. Gas Phase Chemistry\n14. Stratospheric Heterogeneous Chemistry\n15. Tropospheric Heterogeneous Chemistry\n16. Photo Chemistry\n17. Photo Chemistry --&gt; Photolysis \n1. Key Properties\nKey properties of the atmospheric chemistry\n1.1. 
Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of atmospheric chemistry model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of atmospheric chemistry model code.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Chemistry Scheme Scope\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nAtmospheric domains covered by the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"troposhere\" \n# \"stratosphere\" \n# \"mesosphere\" \n# \"mesosphere\" \n# \"whole atmosphere\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.4. Basic Approximations\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasic approximations made in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.5. Prognostic Variables Form\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nForm of prognostic variables in the atmospheric chemistry component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"3D mass/mixing ratio for gas\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.6. Number Of Tracers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nNumber of advected tracers in the atmospheric chemistry model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "1.7. Family Approach\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAtmospheric chemistry calculations (not advection) generalized into families of species?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.family_approach') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "1.8. Coupling With Chemical Reactivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Software Properties\nSoftware properties of atmospheric chemistry code\n2.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Timestep Framework\nTimestepping in the atmospheric chemistry model\n3.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nMathematical method deployed to solve the evolution of a given variable", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Operator splitting\" \n# \"Integrated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "3.2. Split Operator Advection Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemical species advection (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. 
Split Operator Physical Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for physics (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.4. Split Operator Chemistry Timestep\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTimestep for chemistry (in seconds).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.5. Split Operator Alternate Order\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\n?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.6. Integrated Timestep\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTimestep for the atmospheric chemistry model (in seconds)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.7. Integrated Scheme Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the type of timestep scheme", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Implicit\" \n# \"Semi-implicit\" \n# \"Semi-analytic\" \n# \"Impact solver\" \n# \"Back Euler\" \n# \"Newton Raphson\" \n# \"Rosenbrock\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order\n**\n4.1. Turbulence\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.2. Convection\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.3. Precipitation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.4. Emissions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.5. Deposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.6. Gas Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.7. Tropospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for tropospheric heterogeneous phase chemistry scheme. 
This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.8. Stratospheric Heterogeneous Phase Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.9. Photo Chemistry\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "4.10. Aerosols\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCall order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Tuning Applied\nTuning methodology for atmospheric chemistry component\n5.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics of mean state used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.4. 
Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid\nAtmospheric chemistry grid\n6.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the atmospheric chemistry grid", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n* Does the atmospheric chemistry grid match the atmosphere grid?*", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Resolution\nResolution in the atmospheric chemistry grid\n7.1. Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Canonical Horizontal Resolution\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nExpression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Number Of Horizontal Gridpoints\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.4. Number Of Vertical Levels\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nNumber of vertical levels resolved on computational grid.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "7.5. Is Adaptive Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDefault is False. Set true if grid resolution changes during execution.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8. Transport\nAtmospheric chemistry transport\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview of transport implementation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. 
Use Atmospheric Transport\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs transport handled by the atmosphere, rather than within atmospheric chemistry?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "8.3. Transport Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf transport is handled within the atmospheric chemistry scheme, describe it.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.transport.transport_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Emissions Concentrations\nAtmospheric chemistry emissions\n9.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric chemistry emissions", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Emissions Concentrations --&gt; Surface Emissions\n**\n10.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of the chemical species emitted at the surface that are taken into account in the emissions scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Vegetation\" \n# \"Soil\" \n# \"Sea surface\" \n# \"Anthropogenic\" \n# \"Biomass burning\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.2. 
Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted at the surface and specified via any other method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Emissions Concentrations --&gt; Atmospheric Emissions\nTO DO\n11.1. Sources\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Aircraft\" \n# \"Biomass burning\" \n# \"Lightning\" \n# \"Volcanos\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.2. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nMethods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Climatology\" \n# \"Spatially uniform mixing ratio\" \n# \"Spatially uniform concentration\" \n# \"Interactive\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.3. 
Prescribed Climatology Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Prescribed Spatially Uniform Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and prescribed as spatially uniform", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Interactive Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an interactive method", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.6. Other Emitted Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of chemical species emitted in the atmosphere and specified via an &quot;other method&quot;", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. 
Emissions Concentrations --&gt; Concentrations\nTO DO\n12.1. Prescribed Lower Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the lower boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.2. Prescribed Upper Boundary\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of species prescribed at the upper boundary.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Gas Phase Chemistry\nAtmospheric chemistry transport\n13.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview gas phase atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nSpecies included in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HOx\" \n# \"NOy\" \n# \"Ox\" \n# \"Cly\" \n# \"HSOx\" \n# \"Bry\" \n# \"VOCs\" \n# \"isoprene\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.3. 
Number Of Bimolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of bi-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.4. Number Of Termolecular Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of ter-molecular reactions in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.5. Number Of Tropospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.6. Number Of Stratospheric Heterogenous Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.7. Number Of Advected Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of advected species in the gas phase chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.8. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "13.9. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.10. Wet Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "13.11. Wet Oxidation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs wet oxidation included? 
Oxidation describes the loss of electrons or an increase in oxidation state by a molecule", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14. Stratospheric Heterogeneous Chemistry\nAtmospheric chemistry stratospheric heterogeneous chemistry\n14.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview stratospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nGas phase species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Cly\" \n# \"Bry\" \n# \"NOy\" \n# TODO - please enter value(s)\n", "14.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Polar stratospheric ice\" \n# \"NAT (Nitric acid trihydrate)\" \n# \"NAD (Nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particule))\" \n# TODO - please enter value(s)\n", "14.4. 
Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the stratospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.5. Sedimentation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs sedimentation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "14.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the stratospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15. Tropospheric Heterogeneous Chemistry\nAtmospheric chemistry tropospheric heterogeneous chemistry\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview tropospheric heterogeneous atmospheric chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. 
Gas Phase Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList of gas phase species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Aerosol Species\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nAerosol species included in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sulphate\" \n# \"Nitrate\" \n# \"Sea salt\" \n# \"Dust\" \n# \"Ice\" \n# \"Organic\" \n# \"Black carbon/soot\" \n# \"Polar stratospheric ice\" \n# \"Secondary organic aerosols\" \n# \"Particulate organic matter\" \n# TODO - please enter value(s)\n", "15.4. Number Of Steady State Species\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of steady state species in the tropospheric heterogeneous chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.5. Interactive Dry Deposition\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "15.6. Coagulation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs coagulation included in the tropospheric heterogeneous chemistry scheme?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "16. Photo Chemistry\nAtmospheric chemistry photo chemistry\n16.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview atmospheric photo chemistry", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16.2. Number Of Reactions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of reactions in the photo-chemistry scheme.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17. Photo Chemistry --&gt; Photolysis\nPhotolysis scheme\n17.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nPhotolysis scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Offline (clear sky)\" \n# \"Offline (with clouds)\" \n# \"Online\" \n# TODO - please enter value(s)\n", "17.2. 
Environmental Conditions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mne-tools/mne-tools.github.io
0.17/_downloads/0f794e75f963d5793938890d6f3d2513/plot_receptive_field_mtrf.ipynb
bsd-3-clause
[ "%matplotlib inline", "Receptive Field Estimation and Prediction\nThis example reproduces figures from Lalor et al.'s mTRF toolbox in\nmatlab [1]_. We will show how the :class:mne.decoding.ReceptiveField class\ncan perform a similar function along with scikit-learn. We will first fit a\nlinear encoding model using the continuously-varying speech envelope to predict\nactivity of a 128 channel EEG system. Then, we will take the reverse approach\nand try to predict the speech envelope from the EEG (known in the literature\nas a decoding model, or simply stimulus reconstruction).\nReferences\n.. [1] Crosse, M. J., Di Liberto, G. M., Bednar, A. & Lalor, E. C. (2016).\n The Multivariate Temporal Response Function (mTRF) Toolbox:\n A MATLAB Toolbox for Relating Neural Signals to Continuous Stimuli.\n Frontiers in Human Neuroscience 10, 604. doi:10.3389/fnhum.2016.00604\n.. [2] Haufe, S., Meinecke, F., Goergen, K., Daehne, S., Haynes, J.-D.,\n Blankertz, B., & Biessmann, F. (2014). On the interpretation of weight\n vectors of linear models in multivariate neuroimaging. NeuroImage, 87,\n 96-110. doi:10.1016/j.neuroimage.2013.10.067", "# Authors: Chris Holdgraf <choldgraf@gmail.com>\n# Eric Larson <larson.eric.d@gmail.com>\n# Nicolas Barascud <nicolas.barascud@ens.fr>\n#\n# License: BSD (3-clause)\n# sphinx_gallery_thumbnail_number = 3\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.io import loadmat\nfrom os.path import join\n\nimport mne\nfrom mne.decoding import ReceptiveField\nfrom sklearn.model_selection import KFold\nfrom sklearn.preprocessing import scale", "Load the data from the publication\nFirst we will load the data collected in [1]_. In this experiment subjects\nlistened to natural speech. Raw EEG and the speech stimulus are provided.\nWe will load these below, downsampling the data in order to speed up\ncomputation since we know that our features are primarily low-frequency in\nnature. 
Then we'll visualize both the EEG and speech envelope.", "path = mne.datasets.mtrf.data_path()\ndecim = 2\ndata = loadmat(join(path, 'speech_data.mat'))\nraw = data['EEG'].T\nspeech = data['envelope'].T\nsfreq = float(data['Fs'])\nsfreq /= decim\nspeech = mne.filter.resample(speech, down=decim, npad='auto')\nraw = mne.filter.resample(raw, down=decim, npad='auto')\n\n# Read in channel positions and create our MNE objects from the raw data\nmontage = mne.channels.read_montage('biosemi128')\nmontage.selection = montage.selection[:128]\ninfo = mne.create_info(montage.ch_names[:128], sfreq, 'eeg', montage=montage)\nraw = mne.io.RawArray(raw, info)\nn_channels = len(raw.ch_names)\n\n# Plot a sample of brain and stimulus activity\nfig, ax = plt.subplots()\nlns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)\nln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)\nax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)\nax.set(title=\"Sample activity\", xlabel=\"Time (s)\")\nmne.viz.tight_layout()", "Create and fit a receptive field model\nWe will construct an encoding model to find the linear relationship between\na time-delayed version of the speech envelope and the EEG signal. 
This allows\nus to make predictions about the response to new stimuli.", "# Define the delays that we will use in the receptive field\ntmin, tmax = -.2, .4\n\n# Initialize the model\nrf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],\n estimator=1., scoring='corrcoef')\n# We'll have (tmax - tmin) * sfreq delays\n# and an extra 2 delays since we are inclusive on the beginning / end index\nn_delays = int((tmax - tmin) * sfreq) + 2\n\nn_splits = 3\ncv = KFold(n_splits)\n\n# Prepare model data (make time the first dimension)\nspeech = speech.T\nY, _ = raw[:] # Outputs for the model\nY = Y.T\n\n# Iterate through splits, fit the model, and predict/test on held-out data\ncoefs = np.zeros((n_splits, n_channels, n_delays))\nscores = np.zeros((n_splits, n_channels))\nfor ii, (train, test) in enumerate(cv.split(speech)):\n print('split %s / %s' % (ii + 1, n_splits))\n rf.fit(speech[train], Y[train])\n scores[ii] = rf.score(speech[test], Y[test])\n # coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature\n coefs[ii] = rf.coef_[:, 0, :]\ntimes = rf.delays_ / float(rf.sfreq)\n\n# Average scores and coefficients across CV splits\nmean_coefs = coefs.mean(axis=0)\nmean_scores = scores.mean(axis=0)\n\n# Plot mean prediction scores across all channels\nfig, ax = plt.subplots()\nix_chs = np.arange(n_channels)\nax.plot(ix_chs, mean_scores)\nax.axhline(0, ls='--', color='r')\nax.set(title=\"Mean prediction score\", xlabel=\"Channel\", ylabel=\"Score ($r$)\")\nmne.viz.tight_layout()", "Investigate model coefficients\nFinally, we will look at how the linear coefficients (sometimes\nreferred to as beta values) are distributed across time delays as well as\nacross the scalp. 
We will recreate figure 1 and figure 2 from [1]_.", "# Print mean coefficients across all time delays / channels (see Fig 1 in [1])\ntime_plot = 0.180 # For highlighting a specific time.\nfig, ax = plt.subplots(figsize=(4, 8))\nmax_coef = mean_coefs.max()\nax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',\n vmin=-max_coef, vmax=max_coef, shading='gouraud')\nax.axvline(time_plot, ls='--', color='k', lw=2)\nax.set(xlabel='Delay (s)', ylabel='Channel', title=\"Mean Model\\nCoefficients\",\n xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],\n xticks=np.arange(tmin, tmax + .2, .2))\nplt.setp(ax.get_xticklabels(), rotation=45)\nmne.viz.tight_layout()\n\n# Make a topographic map of coefficients for a given delay (see Fig 2C in [1])\nix_plot = np.argmin(np.abs(time_plot - times))\nfig, ax = plt.subplots()\nmne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,\n vmin=-max_coef, vmax=max_coef)\nax.set(title=\"Topomap of model coefficients\\nfor delay %s\" % time_plot)\nmne.viz.tight_layout()", "Create and fit a stimulus reconstruction model\nWe will now demonstrate another use case for the\n:class:mne.decoding.ReceptiveField class as we try to predict the stimulus\nactivity from the EEG data. This is known in the literature as a decoding, or\nstimulus reconstruction model [1]_. A decoding model aims to find the\nrelationship between the speech signal and a time-delayed version of the EEG.\nThis can be useful as we exploit all of the available neural data in a\nmultivariate context, compared to the encoding case which treats each M/EEG\nchannel as an independent feature. Therefore, decoding models might provide a\nbetter quality of fit (at the expense of not controlling for stimulus\ncovariance), especially for low SNR stimuli such as speech.", "# We use the same lags as in [1]. 
Negative lags now index the relationship\n# between the neural response and the speech envelope earlier in time, whereas\n# positive lags would index how a unit change in the amplitude of the EEG would\n# affect later stimulus activity (obviously this should have an amplitude of\n# zero).\ntmin, tmax = -.2, 0.\n\n# Initialize the model. Here the features are the EEG data. We also specify\n# ``patterns=True`` to compute inverse-transformed coefficients during model\n# fitting (cf. next section). We'll use a ridge regression estimator with an\n# alpha value similar to [1].\nsr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,\n estimator=1e4, scoring='corrcoef', patterns=True)\n# We'll have (tmax - tmin) * sfreq delays\n# and an extra 2 delays since we are inclusive on the beginning / end index\nn_delays = int((tmax - tmin) * sfreq) + 2\n\nn_splits = 3\ncv = KFold(n_splits)\n\n# Iterate through splits, fit the model, and predict/test on held-out data\ncoefs = np.zeros((n_splits, n_channels, n_delays))\npatterns = coefs.copy()\nscores = np.zeros((n_splits,))\nfor ii, (train, test) in enumerate(cv.split(speech)):\n print('split %s / %s' % (ii + 1, n_splits))\n sr.fit(Y[train], speech[train])\n scores[ii] = sr.score(Y[test], speech[test])[0]\n # coef_ is shape (n_outputs, n_features, n_delays). 
We have 128 features\n coefs[ii] = sr.coef_[0, :, :]\n patterns[ii] = sr.patterns_[0, :, :]\ntimes = sr.delays_ / float(sr.sfreq)\n\n# Average scores and coefficients across CV splits\nmean_coefs = coefs.mean(axis=0)\nmean_patterns = patterns.mean(axis=0)\nmean_scores = scores.mean(axis=0)\nmax_coef = np.abs(mean_coefs).max()\nmax_patterns = np.abs(mean_patterns).max()", "Visualize stimulus reconstruction\nTo get a sense of our model performance, we can plot the actual and predicted\nstimulus envelopes side by side.", "y_pred = sr.predict(Y[test])\ntime = np.linspace(0, 2., 5 * int(sfreq))\nfig, ax = plt.subplots(figsize=(8, 4))\nax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],\n color='grey', lw=2, ls='--')\nax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)\nax.legend([lns[0], ln1[0]], ['Envelope', 'Reconstruction'], frameon=False)\nax.set(title=\"Stimulus reconstruction\")\nax.set_xlabel('Time (s)')\nmne.viz.tight_layout()", "Investigate model coefficients\nFinally, we will look at how the decoding model coefficients are distributed\nacross the scalp. We will attempt to recreate figure 5 from [1]. The\ndecoding model weights reflect the channels that contribute most toward\nreconstructing the stimulus signal, but are not directly interpretable in a\nneurophysiological sense. 
Here we also look at the coefficients obtained\nvia an inversion procedure [2]_, which have a more straightforward\ninterpretation as their value (and sign) directly relates to the stimulus\nsignal's strength (and effect direction).", "time_plot = (-.140, -.125) # To average between two timepoints.\nix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),\n np.argmin(np.abs(time_plot[1] - times)))\nfig, ax = plt.subplots(1, 2)\nmne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),\n pos=info, axes=ax[0], show=False,\n vmin=-max_coef, vmax=max_coef)\nax[0].set(title=\"Model coefficients\\nbetween delays %s and %s\"\n % (time_plot[0], time_plot[1]))\n\nmne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),\n pos=info, axes=ax[1],\n show=False, vmin=-max_patterns, vmax=max_patterns)\nax[1].set(title=\"Inverse-transformed coefficients\\nbetween delays %s and %s\"\n % (time_plot[0], time_plot[1]))\nmne.viz.tight_layout()\n\nplt.show()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/8eeab316a5e011839687fbb76fb65f29/cwt_sensor_connectivity.ipynb
bsd-3-clause
[ "%matplotlib inline", "Compute seed-based time-frequency connectivity in sensor space\nComputes the connectivity between a seed-gradiometer close to the visual cortex\nand all other gradiometers. The connectivity is computed in the time-frequency\ndomain using Morlet wavelets and the debiased squared weighted phase lag index\n:footcite:VinckEtAl2011 is used as connectivity metric.", "# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\n\nimport mne\nfrom mne import io\nfrom mne.connectivity import spectral_connectivity, seed_target_indices\nfrom mne.datasets import sample\nfrom mne.time_frequency import AverageTFR\n\nprint(__doc__)", "Set parameters", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Add a bad channel\nraw.info['bads'] += ['MEG 2443']\n\n# Pick MEG gradiometers\npicks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,\n exclude='bads')\n\n# Create epochs for left-visual condition\nevent_id, tmin, tmax = 3, -0.2, 0.5\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),\n preload=True)\n\n# Use 'MEG 2343' as seed\nseed_ch = 'MEG 2343'\npicks_ch_names = [raw.ch_names[i] for i in picks]\n\n# Create seed-target indices for connectivity computation\nseed = picks_ch_names.index(seed_ch)\ntargets = np.arange(len(picks))\nindices = seed_target_indices(seed, targets)\n\n# Define wavelet frequencies and number of cycles\ncwt_freqs = np.arange(7, 30, 2)\ncwt_n_cycles = cwt_freqs / 7.\n\n# Run the connectivity analysis using 2 parallel jobs\nsfreq = raw.info['sfreq'] # the sampling frequency\ncon, freqs, times, _, _ = spectral_connectivity(\n epochs, indices=indices,\n 
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,\n cwt_freqs=cwt_freqs, cwt_n_cycles=cwt_n_cycles, n_jobs=1)\n\n# Mark the seed channel with a value of 1.0, so we can see it in the plot\ncon[np.where(indices[1] == seed)] = 1.0\n\n# Show topography of connectivity from seed\ntitle = 'WPLI2 - Visual - Seed %s' % seed_ch\n\nlayout = mne.find_layout(epochs.info, 'meg') # use full layout\n\ntfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))\ntfr.plot_topo(fig_facecolor='w', font_color='k', border='k')", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown" ]
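The seed/target pairing built above with `seed_target_indices` simply matches the single seed channel against every target channel. A minimal pure-numpy sketch of that idea (the helper name `make_seed_target_indices` and the toy channel numbers are our own illustration, not MNE's implementation):

```python
import numpy as np

def make_seed_target_indices(seed, targets):
    # Pair a single seed channel index with every target channel index,
    # mirroring the (seeds, targets) index arrays used for connectivity.
    targets = np.asarray(targets)
    seeds = np.full(len(targets), seed, dtype=int)
    return seeds, targets

# One seed (channel 5) against 8 target channels
seeds, targets = make_seed_target_indices(5, np.arange(8))
print(seeds)    # [5 5 5 5 5 5 5 5]
print(targets)  # [0 1 2 3 4 5 6 7]
```

Each (seed, target) pair then gets one connectivity estimate, which is why the result above has one topography value per gradiometer.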
ConnectedSystems/veneer-py
doc/training/3_Python_Data_Analysis.ipynb
isc
[ "Session 3: Python Data Analysis World\nPython has a very strong community in the data analytics and scientific computing world. There are a lot of great Python packages to support different analyses, but there are a few very key packages:\n\nWorkhorses - numpy, pandas, scipy\nSpatial tools - shapely, ogr/gdal, geopandas, etc\nEnvironments and Visualisation - Jupyter, matplotlib\n\nYou will have access to all of these after installing Anaconda and installing the additional packages described in Session 0. (The additional packages relate to spatial analysis - you can skip them if you don't need them)\nWhere possible, veneer-py functions will accept and return objects that are directly usable by these packages. In particular, time series and other tabular data structures are returned as pandas DataFrame objects.\nThis session gives very brief introductions to most of these packages. In most cases, the links in Session 0 are relevant for more information.\nnumpy\nnumpy represents multi-dimensional arrays and operations on those arrays. The arrays are typed (eg float, double precision float, integer, etc) and are indexed by integers (one per dimension).\nIn veneer-py, we use pandas Data Frames more than numpy arrays, but the basics of the array operations in numpy are the foundations on which pandas is built.\nYou can create an array of random numbers using functions under the np.random namespace. The following example creates 100 random floats using a normal distribution.\nNote: numpy is typically imported as np.", "import numpy as np\nrandom = np.random.normal(size=100)\nrandom", "The functions in np.random return one dimensional arrays. 
You can check this with .shape and change it with .reshape()", "random.shape\n\nthreed = random.reshape(10,5,2)\nthreed", "You can perform basic arithmetic on arrays, using scalars or other arrays.\nFor example, given the following two arrays", "a1 = np.array([20.0,12.0,77.0,77.0])\na2 = np.array([25.0,6.0,80.0,80.0])\n\n# You can add:\n\na1 + a2\n\n# Multiply (element wise):\n\na1 * a2\n\n# Compute a dot product\n\na1.dot(a2)\n\n# You can also perform matrix operations\n# First tell numpy that your array is a matrix,\n# Then transpose to compatible shapes\n# Then multiply\nnp.matrix(a1).transpose() * np.matrix(a2)", "pandas\nPandas DataFrame objects are one of the key data types used in veneer-py.\nA DataFrame is a tabular, two-dimensional data structure, which can be indexed in a range of ways, including a date and date/time index. DataFrames are arranged in named columns, each with a particular type (eg double, string, integer) and in this sense they are more flexible than numpy arrays.\nEach column in a DataFrame is a pandas Series, which is useful in its own right.", "import veneer\nv = veneer.Veneer(port=9876)\ndownstream_flow_vol = v.retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume'})", "Pandas DataFrames have a tabular presentation in the Jupyter notebook.\nIt's also possible to slice subsets of rows", "downstream_flow_vol[0:10] # <-- Look at first 10 rows (timesteps)\n\ndownstream_flow_vol[0::3000] # <-- Look at every 3000th timestep", "You can quickly get stats for each column in a DataFrame", "downstream_flow_vol.mean()", "You can get the same stats along rows:", "downstream_flow_vol.mean(axis=1)", "Jupyter and visualisation\nIt is worth spending some time exploring the capabilities of the Jupyter notebook.\nIn terms of managing your work:\n\nThe Edit and Insert menus have useful functions for rearranging cells, creating new cells, etc\nThe Cell menu has functions for running all cells in a notebook, all cells above a 
particular point and all cells below a point.\nThe Kernel menu controls the execution and lifecycle of the Python session. (In this instance, Kernel refers to an instance of an IPython session that is connected to the notebook. The Restart command clears all variables - even though earlier output is still visible in the notebook)\n\nAt this stage, most visualisation in Python notebooks is handled by matplotlib.\nMatplotlib is powerful, but the learning curve can be steep.", "import matplotlib.pyplot as plt\n%matplotlib inline", "Typically, you'll create a single plot from a single cell", "plt.hist(np.random.normal(size=500))", "... But the matplotlib subplots functionality allows you to create matrices of plots.", "methods=[np.random.uniform,np.random.normal,np.random.exponential]\n\nn=len(methods)\n\n# Create n sets of random numbers, where n is the number of methods specified\nrandom_sets = [method(size=1000) for method in methods]\n\nfor i in range(n):\n # Arrange subplots 2 rows x 3 columns\n # Access the i'th column on the first row\n ax = plt.subplot(2,3,i+1)\n # Plot the random numbers\n ax.plot(random_sets[i])\n # Access the i'th column on the second row\n ax = plt.subplot(2,3,n+i+1)\n # Plot a histogram of the corresponding numbers\n ax.hist(random_sets[i])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
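The numpy and pandas operations walked through in this notebook can be condensed into one self-contained sketch; the toy numbers below are illustrative only, not Source/veneer output:

```python
import numpy as np
import pandas as pd

# Reshape 100 random values into a 10 x 5 x 2 array (10 * 5 * 2 = 100)
random = np.random.normal(size=100)
threed = random.reshape(10, 5, 2)

# Element-wise arithmetic and a dot product on 1-D arrays
a1 = np.array([20.0, 12.0, 77.0, 77.0])
a2 = np.array([25.0, 6.0, 80.0, 80.0])
total = a1 + a2   # element-wise: [45. 18. 157. 157.]
dot = a1.dot(a2)  # 20*25 + 12*6 + 77*80 + 77*80 = 12892.0

# DataFrame stats run per column by default, per row with axis=1
df = pd.DataFrame({'a': [1.0, 2.0, 3.0], 'b': [4.0, 5.0, 6.0]})
col_means = df.mean()        # a -> 2.0, b -> 5.0
row_means = df.mean(axis=1)  # 2.5, 3.5, 4.5
```

The `axis` argument is the same idea in numpy and pandas: it names the dimension that gets collapsed by the reduction.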
tensorflow/recommenders-addons
docs/tutorials/dynamic_embedding_tutorial.ipynb
apache-2.0
[ "Copyright 2021 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Recommending movies: ranking\n<table class=\"tfa-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/recommenders-addons/blob/master/docs/tutorials/dynamic_embedding_tutorial.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n</table>\n\nOverview\nIn this tutorial, we're going to use the rating data to predict the user's rating of other movies. To achieve this goal, we will follow the following steps:\n\nGet our data and do some preprocessing to get the required format.\nImplement a neural collaborative filtering(NeuralCF) model.\nTrain the model.\n\nDifferent from the general recommendation model, the model we implemented replaces tf.nn.embedding_lookup with tfra.dynamic_embedding.embedding_lookup, which can handle super large sparse features.\nImports\nLet's first get our imports out of the way.", "!pip install -q --upgrade tensorflow-recommenders-addons\n!pip install -q --upgrade tensorflow-datasets\n\nimport numpy as np\nimport tensorflow as tf\nfrom tensorflow.keras.layers import Dense\n\nimport tensorflow_datasets as tfds\nimport tensorflow_recommenders_addons as tfra", "Preparing the dataset\nThis tutorial uses movies reviews provided by the MovieLens 100K dataset, a classic dataset from the GroupLens research group at the University of Minnesota. 
In order to facilitate processing, we convert the data type of movie_id and user_id into int64.", "ratings = tfds.load(\"movielens/100k-ratings\", split=\"train\")\n\nratings = ratings.map(lambda x: {\n \"movie_id\": tf.strings.to_number(x[\"movie_id\"], tf.int64),\n \"user_id\": tf.strings.to_number(x[\"user_id\"], tf.int64),\n \"user_rating\": x[\"user_rating\"]\n})\n\ntf.random.set_seed(2021)\nshuffled = ratings.shuffle(100_000, seed=2021, reshuffle_each_iteration=False)\n\ndataset_train = shuffled.take(100_000).batch(256)", "Implementing a model\nThe NCFModel we implemented is very similar to the conventional one, and the main difference lies in the embedding layer. We specify the variable of embedding layer by tfra.dynamic_embedding.get_variable.", "class NCFModel(tf.keras.Model):\n\n def __init__(self):\n super(NCFModel, self).__init__()\n self.embedding_size = 32\n self.d0 = Dense(\n 256,\n activation='relu',\n kernel_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1),\n bias_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1))\n self.d1 = Dense(\n 64,\n activation='relu',\n kernel_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1),\n bias_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1))\n self.d2 = Dense(\n 1,\n kernel_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1),\n bias_initializer=tf.keras.initializers.RandomNormal(0.0, 0.1))\n self.user_embeddings = tfra.dynamic_embedding.get_variable(\n name=\"user_dynamic_embeddings\",\n dim=self.embedding_size,\n initializer=tf.keras.initializers.RandomNormal(-1, 1))\n self.movie_embeddings = tfra.dynamic_embedding.get_variable(\n name=\"moive_dynamic_embeddings\",\n dim=self.embedding_size,\n initializer=tf.keras.initializers.RandomNormal(-1, 1))\n self.loss = tf.keras.losses.MeanSquaredError()\n\n def call(self, batch):\n movie_id = batch[\"movie_id\"]\n user_id = batch[\"user_id\"]\n rating = batch[\"user_rating\"]\n\n user_id_val, user_id_idx = np.unique(user_id, 
return_inverse=True)\n user_id_weights, user_id_trainable_wrapper = tfra.dynamic_embedding.embedding_lookup(\n params=self.user_embeddings,\n ids=user_id_val,\n name=\"user-id-weights\",\n return_trainable=True)\n user_id_weights = tf.gather(user_id_weights, user_id_idx)\n\n movie_id_val, movie_id_idx = np.unique(movie_id, return_inverse=True)\n movie_id_weights, movie_id_trainable_wrapper = tfra.dynamic_embedding.embedding_lookup(\n params=self.movie_embeddings,\n ids=movie_id_val,\n name=\"movie-id-weights\",\n return_trainable=True)\n movie_id_weights = tf.gather(movie_id_weights, movie_id_idx)\n\n embeddings = tf.concat([user_id_weights, movie_id_weights], axis=1)\n dnn = self.d0(embeddings)\n dnn = self.d1(dnn)\n dnn = self.d2(dnn)\n out = tf.reshape(dnn, shape=[-1])\n loss = self.loss(rating, out)\n return loss, [user_id_trainable_wrapper, movie_id_trainable_wrapper]", "Let's instantiate the model, and wrap the optimizer in tfra.dynamic_embedding.DynamicEmbeddingOptimizer.", "model = NCFModel()\noptimizer = tf.keras.optimizers.Adam(learning_rate=0.001)\noptimizer = tfra.dynamic_embedding.DynamicEmbeddingOptimizer(optimizer)", "Training the model\nAfter defining the model, we can train the model and observe the change of loss.", "def train(epoch=1):\n for i in range(epoch):\n total_loss = np.array([])\n for (_, batch) in enumerate(dataset_train):\n with tf.GradientTape() as tape:\n loss, trainable_wrapper_list = model(batch)\n total_loss = np.append(total_loss, loss)\n grads = tape.gradient(loss, model.trainable_variables + trainable_wrapper_list)\n optimizer.apply_gradients(zip(grads, model.trainable_variables + trainable_wrapper_list))\n print(\"epoch:\", i, \"mean_squared_error:\", np.mean(total_loss))\n\nif __name__==\"__main__\":\n train(10)", "As the model trains, the loss is falling. Through the entire model definition and training process, we can find that the interface between tfra and TensorFlow maintains a good consistency. 
We can easily build a recommendation model with tfra by drawing on our existing TensorFlow experience, while effectively handling very large sparse features." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
sheikhomar/ml
matplotlib.ipynb
mit
[ "Matplotlib\nMatplotlib is a popular Python library for graphical plotting. It works well with NumPy and Pandas. The <a href=\"https://matplotlib.org/gallery.html\">gallery on Matplotlib's website</a> shows the kinds of plots it can produce. There is also a good <a href=\"http://www.labri.fr/perso/nrougier/teaching/matplotlib/\">reference site</a> on the web.", "import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt", "To show plots in Jupyter Notebook, we need to execute %matplotlib inline. If we use other editors, we need to run plt.show() at the end to have our plot show up in a window.", "%matplotlib inline\n\n# Generate data\nx = np.linspace(0, 5, 11)\ny = x ** 2\n\nfigure = plt.figure()\naxes = figure.add_axes([.1, .1, .8, .8])\naxes.plot(x, y)\naxes.set_xlabel('X Label')\naxes.set_ylabel('Y Label')\naxes.set_title('Title')\n\n# Generate data for a sine curve between 0 and 4*pi \nsin_x = np.linspace(0, 4*np.pi, 100)\nsin_y = np.sin(sin_x) \n\nfig = plt.figure(figsize=(8,4))\nax = fig.add_axes([0, 0, 1, 1])\nax.set_xlim(0, 4*np.pi) # \nax.set_ylim(-1.5, 1.5)\nax.plot(sin_x, sin_y, 'g')", "Multiple Axes", "figure = plt.figure()\naxes1 = figure.add_axes([0, 0, 1, 1])\naxes1.set_facecolor('#d1ffb8')\naxes1.plot(x, y, 'g.--')\n\naxes2 = figure.add_axes([1.1, 0, 1, 1])\naxes2.set_facecolor('#a6ddff')\naxes2.plot(y, x, 'b*--')", "Automatic Axis Manager\nInstead of creating a figure and calling add_axes(), we can use the subplots() method. It acts like an automatic axis manager.", "# Create an empty canvas with two subplots\nfig, axes = plt.subplots(nrows=1, ncols=2, figsize=(15,4))\naxes[0].plot(x, y, 'g')\naxes[1].plot(y, x, 'b')", "Overlapping Figures\nA common issue with matplotlib is overlapping subplots or figures. The method plt.tight_layout() automatically adjusts subplot parameters so that the subplot fits into the figure canvas. This avoids overlapping content. This is an experimental feature and may not work for some cases. 
It only checks the extents of ticklabels, axis labels, and titles. For more information read the <a href=\"https://matplotlib.org/users/tight_layout_guide.html\">Tight Layout guide</a>.", "fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)\nfor ax in [ax1, ax2, ax3, ax4]:\n ax.plot(x, y, 'g')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('title')\n\nfig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(nrows=2, ncols=2)\nfor ax in [ax1, ax2, ax3, ax4]:\n ax.plot(x, y, 'g')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('title')\nplt.tight_layout()\n\nfig, axes = plt.subplots(nrows=2, ncols=2)\ni = 0\nfor outer in axes:\n for ax in outer:\n i += 1\n ax.plot(x, y, 'g')\n ax.set_xlabel('x')\n ax.set_ylabel('y')\n ax.set_title('Axes No.{}'.format(i))\nplt.tight_layout()", "Colours, Markers and Legends", "fig, ax = plt.subplots(figsize=(12,6))\n\nax.plot(x, x**2, label='Squared', color='purple', linewidth=2, linestyle='-.', marker='o')\nax.plot(x, x**3, label='Cubed', color='green', marker='x', markersize=12)\n\n# loc=0 lets Matplotlib pick the best location; loc=10 pins the legend to the centre\nax.legend(loc=10) ", "Twin axes\nLet two or more plots share the same x-axis using twinx() or the same y-axis using twiny().", "fig = plt.figure()\nax1 = fig.add_axes([0, 0, 1, 1])\nax2 = ax1.twinx() # Note that ax2 shares the x-axis with ax1\nx2 = np.linspace(0., 10., 100)\n\nax1.set_ylabel('Density (cgs)', color='red')\nax2.set_ylabel('Temperature (K)', color='blue')\nax1.set_xlabel('Time (s)')\n\n# Match the line colours to the axis-label colours set above\nax1.plot(x2, x2 ** 2, 'r-')\nax2.plot(x2, 1000 / (x2 + 1), 'b-')\n", "Plot Types", "plt.scatter(x,y)\n\nfrom random import sample\ndata = sample(range(1, 1000), 100)\nplt.hist(data)\n\ndata = [np.random.normal(0, std, 100) for std in range(1, 4)]\nplt.boxplot(data, vert=True, patch_artist=True); " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
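The 2x2 subplot loop with tight_layout from this notebook can be reproduced headlessly; the Agg backend line is our addition so the snippet runs without a display:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend: render without opening a window
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 5, 11)
y = x ** 2

# Build a 2x2 grid, title each axes, then fix label overlap with tight_layout
fig, axes = plt.subplots(nrows=2, ncols=2)
for i, ax in enumerate(axes.ravel(), start=1):
    ax.plot(x, y, 'g')
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_title('Axes No.{}'.format(i))
fig.tight_layout()
```

`axes.ravel()` flattens the 2x2 array of Axes so a single loop can visit them in row-major order, which avoids the nested loop used in the notebook.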
opengeostat/pygslib
doc/source/Ipython_templates/cova3_raw.ipynb
mit
[ "PyGSLIB\nCova3 test\nThis is a simple example of how to use raw cova3 to fit variograms", "#general imports\nimport matplotlib.pyplot as plt \nimport pygslib \nimport numpy as np \n\n\n#make the plots inline\n%matplotlib inline ", "Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.", "#get the data in gslib format into a pandas Dataframe\nmydata= pygslib.gslib.read_gslib_file('../datasets/cluster.dat') \n\n# This is a 2D file, in this GSLIB version we require 3D data and drillhole name or domain code\n# so, we are adding constant elevation = 0 and a dummy BHID = 1 \nmydata['Zlocation']=0\nmydata['bhid']=1\n\n# printing to verify results\n#print (' \\n **** 5 first rows in my datafile \\n\\n ', mydata.head(n=5))", "Get some experimental variograms", "# these are the parameters we need. Note that, unlike GSLIB, this dictionary also stores \n# the actual data (ex, X, Y, etc.). \n\n#important! 
python is case sensitive: 'bhid' is not equal to 'BHID'\n\nparameters_exp = { \n'x' : mydata['Xlocation'] , # X coordinates, array('f') with bounds (nd), nd is number of data points\n'y' : mydata['Ylocation'], # Y coordinates, array('f') with bounds (nd)\n'z' : mydata['Zlocation'], # Z coordinates, array('f') with bounds (nd)\n'bhid' : mydata['bhid'], # bhid for downhole variogram, array('i') with bounds (nd) \n'vr' : mydata['Primary'], # Variables, array('f') with bounds (nd,nv), nv is number of variables\n'tmin' : -1.0e21, # trimming limits, float\n'tmax' : 1.0e21, # trimming limits, float\n'nlag' : 10, # number of lags, int\n'xlag' : 4, # lag separation distance, float \n'xltol' : 2, # lag tolerance, float\n'azm' : [90], # azimuth, array('f') with bounds (ndir)\n'atol' : [22.5], # azimuth tolerance, array('f') with bounds (ndir)\n'bandwh' : [10000], # bandwidth h, array('f') with bounds (ndir)\n'dip' : [0], # dip, array('f') with bounds (ndir)\n'dtol' : [10], # dip tolerance, array('f') with bounds (ndir)\n'bandwd' : [10], # bandwidth d, array('f') with bounds (ndir)\n'isill' : 0, # standardize sills? (0=no, 1=yes), int\n'sills' : [100], # variance used to std the sills, array('f') with bounds (nv)\n'ivtail' : [1], # tail var., array('i') with bounds (nvarg), nvarg is number of variograms\n'ivhead' : [1], # head var., array('i') with bounds (nvarg)\n'ivtype' : [7], # variogram type, array('i') with bounds (nvarg)\n'maxclp' : 50000} # maximum number of variogram point cloud to use, input int\n\n'''\nRemember this is GSLIB... 
use this code to define variograms\ntype 1 = traditional semivariogram\n 2 = traditional cross semivariogram\n 3 = covariance\n 4 = correlogram\n 5 = general relative semivariogram\n 6 = pairwise relative semivariogram\n 7 = semivariogram of logarithms\n 8 = semimadogram\n\n''' \n\n#check the variogram is ok\nassert pygslib.gslib.check_gamv_par(parameters_exp)==1 , 'sorry this parameter file is wrong' \n\n\n#Now we are ready to calculate the variogram\npdis,pgam, phm,ptm,phv,ptv,pnump, cldi, cldj, cldg, cldh = pygslib.gslib.gamv(parameters_exp)\n\n\nnvrg = pdis.shape[0]\nndir = pdis.shape[1]\nnlag = pdis.shape[2]-2", "Plotting results", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\n#plotting the variogram 1 only\nv=0\n\n# in all the directions calculated\nfor d in range(ndir):\n dip=parameters_exp['dip'][d]\n azm=parameters_exp['azm'][d]\n plt.plot (pdis[v, d, 1:], pgam[v, d, 1:], '-o', label=str(dip) + '-->' + str(azm))\n\n# adding nice features to the plot\nplt.legend()\nplt.grid(True)\nplt.show()\n", "Fit the variogram\nWe are using variogram of logarithms...", "#rotation matrices (one per structure)\nrmatrix_d1=pygslib.gslib.setrot(ang1=0,ang2=0,ang3=0,anis1=1,anis2=1,ind=1,maxrot=2) #rotation structure 1\nrmatrix_d2=pygslib.gslib.setrot(ang1=0,ang2=0,ang3=0,anis1=1,anis2=1,ind=2,maxrot=2) #rotation structure 2\n\nrmatrix=rmatrix_d1+rmatrix_d2\n\nprint (rmatrix)\n\nparameters_mod = { \n 'x1' : 0, # X coordinates, point 1\n 'y1' : 0, # Y coordinates, point 1\n 'z1' : 0, # Z coordinates, point 1\n 'x2' : 1, # X coordinates, point 2\n 'y2' : 0, # Y coordinates, point 2\n 'z2' : 0, # Z coordinates, point 2\n 'nst' : 2, # number of nested structures, array('i') with bounds (ivarg), \n # ivarg is variogram number\n 'c0' : [0.01], # nugget, array('f') with bounds (ivarg) \n 'it' : [3, 3], # structure type, array('i') with bounds (ivarg) \n 'cc' : [1, 1.4], # variance, array('f') with bounds (nvarg*nst[0])\n 'aa' : [8., 22.], # parameter a (or range), 
array('f') with bounds (nvarg*nst[0])\n 'irot' : 1, # index of the rotation matrix for the first nested structure\n # the second nested structure will use irot+1, the third irot+2, and so on\n 'rotmat' : rmatrix} # rotation matrices (output of the function setrot)\n\n# this is the covariance between the points x1, x2\ncmax,cova=pygslib.gslib.cova3(parameters_mod)\nprint (cmax, cova)\n\nres=300\nmh=np.linspace(0,40, res)\nmc=np.zeros(res)\nmv=np.zeros(res)\n\nfor i,h in enumerate(mh):\n parameters_mod['x2']=h\n cmax,cova=pygslib.gslib.cova3(parameters_mod)\n mc[i]=cova\n mv[i]=cmax- cova\n \n \n\n#plotting the variogram 1 only\nv=0\n\n# in all the directions calculated\nfor d in range(ndir):\n dip=parameters_exp['dip'][d]\n azm=parameters_exp['azm'][d]\n plt.plot (pdis[v, d, 1:], pgam[v, d, 1:], '-o', label=str(dip) + '-->' + str(azm))\n\n# add model\nplt.plot (mh, mv, '-', label = 'model')\n \n# adding nice features to the plot\nplt.legend()\nplt.grid(True)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
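For intuition about the model being fitted above, a GSLIB-style nested variogram (nugget `c0` plus `cc`/`aa` structures) can be evaluated in plain numpy. This sketch uses spherical structures purely as an illustration of the nesting idea; it is not pygslib's cova3 (the notebook uses a different structure type), though it reuses the same nugget, sill and range numbers:

```python
import numpy as np

def nested_spherical(h, c0, cc, aa):
    # gamma(h) = nugget + sum over structures of c * Sph(h/a), where
    # Sph(r) = 1.5*r - 0.5*r**3 for r <= 1 and 1 beyond the range a
    h = np.asarray(h, dtype=float)
    gamma = np.full_like(h, c0)
    for c, a in zip(cc, aa):
        r = np.minimum(h / a, 1.0)
        gamma += c * (1.5 * r - 0.5 * r ** 3)
    return gamma

# Same nugget/sill/range numbers as the parameters_mod dictionary
g = nested_spherical([0.0, 40.0], c0=0.01, cc=[1.0, 1.4], aa=[8.0, 22.0])
# At h=0 only the nugget remains; past both ranges gamma = 0.01 + 1.0 + 1.4
```

This also shows why the notebook plots `cmax - cova`: the variogram is the total sill minus the covariance at lag h.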
amitkaps/applied-machine-learning
Module-01a-Frame-Regression.ipynb
mit
[ "Regression - Frame, Acquire & Refine\nRaw Data\nYou are provided with the following data: loan_data.csv\nThis is the historical data that the bank has provided. It has the following columns:\nApplication Attributes:\n- years: Number of years the applicant has been employed\n- ownership: Whether the applicant owns a house or not\n- income: Annual income of the applicant\n- age: Age of the applicant \nBehavioural Attributes:\n- grade: Credit grade of the applicant\nOutcome Variable:\n- amount : Amount of Loan provided to the applicant\n- default : Whether the applicant has defaulted or not \n- interest: Interest rate charged for the applicant \nFrame the Problem\n\nWhat are the features?\nWhat is the target?\n\nDiscuss?\nAcquire the Data", "#Load the libraries\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n#Default Variables\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (16,9)\nplt.style.use('fivethirtyeight')\npd.set_option('display.float_format', lambda x: '%.2f' % x)\n\n#Load the dataset\ndf = pd.read_csv('data/loan_data.csv')\n\n#View the first few rows of train\ndf.head()\n\n#View the columns of the train dataset\ndf.columns\n\n#View the data types of the train dataset\ndf.dtypes\n\n#View the number of records in the data\ndf.shape\n\n#View summary of raw data \ndf.describe()", "Refine the Data\nLet's check the dataset for quality and completeness\n1. Missing Values\n2. Outliers\nCheck for Missing Values", "# Find if df has missing values. Hint: There is an isnull() function\ndf.isnull().head()", "One consideration we check here is the number of observations with missing values for those columns that have missing values. If a column has too many missing values, it might make sense to drop the column.", "#let's see how many missing values are present\ndf.isnull().sum()\n\ndf.isnull()\n\ndf[df.isnull().any(axis=1)].head()", "So, we see that two columns have missing values: interest and years. Both the columns are numeric. 
We have three options for dealing with these missing values\nOptions to treat Missing Values\n- REMOVE - NAN rows\n- IMPUTATION - Replace them with something??\n - Mean \n - Median\n - Fixed Number - Domain Relevant\n - High Number (999) - Issue with modelling\n- BINNING - Categorical variable and \"Missing\" becomes a number\n- DOMAIN SPECIFIC* - Entry error, pipeline, etc.", "#Let's replace missing values with the median of the column\ndf.interest.median()\n\n?df.fillna\n\ndf[df.isnull().any(axis=1)].head(20)\n\n#there's a fillna function\ndf.fillna(df.median(), inplace=True)\n\n#Now, let's check if train has missing values or not\ndf.isnull().sum()", "Check for Outlier Values\nLet us first check the categorical variables", "# Which variables are Categorical?\ndf.dtypes\n\ndf.grade.value_counts()\n\n# Create a Crosstab of those variables with another variable\npd.crosstab(df.grade, df.default)\n\n# Check the value counts of the ownership variable\ndf.ownership.value_counts()", "Let us check outliers in the continuous variables\n\nPlotting\nHistogram\nBox-Plot \n\n\nMeasuring \nZ-score > 3\nModified Z-score > 3.5\nwhere modified Z-score = 0.6745 * (x - x_median) / MAD", "# Describe the data set continuous values\ndf.describe()", "Clearly the age variable looks like it has an outlier - Age cannot be greater than 100! 
\nAlso the income variable looks like it may also have an outlier.", "?plt.yscale\n\n# Make a histogram of age\n\ndf.age.hist()\n#plt.xlim(60,80)\n#plt.ylim(0,100)\nplt.yscale('log')\n\nimport seaborn as sns\n\nsns.distplot(df.age)\n\n# Make a histogram of income\ndf.income.hist()\nplt.yscale('log')\n\n# Make Histograms for all other \ndf.years.hist()\nplt.yscale('log')\n\nplt.boxplot(df.income)\n\nplt.boxplot(df.years)\n\n# Make a scatter of age and income\nplt.scatter(df.income, df.age)", "Find the observation which has age = 144 and remove it from the dataframe", "# Find the observation \ndf[df.age == df.age.max()]\n\ndf[df.age == df.age.max()].index\n\ndf.drop?\n\n# Use drop to remove the observation inplace\ndf.drop(19485, inplace=True)\n\n# Find the shape of the df\ndf.shape\n\n# Check again for outliers\nplt.scatter(df.age, df.income)\n\n# Save the new file as cleaned data\ndf.to_csv(\"data/loan_data_clean.csv\", index=False)\n\n#We are good to go to the next step" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
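The modified Z-score rule quoted in this notebook (0.6745 * (x - x_median) / MAD, flag |score| > 3.5) can be applied directly to spot an age outlier like the 144-year-old record; the helper name `modified_z_scores` and the toy ages below are our own illustration, not the loan dataset:

```python
import numpy as np

def modified_z_scores(x):
    # 0.6745 * (x - median) / MAD, where MAD is the median absolute deviation
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return 0.6745 * (x - med) / mad

ages = np.array([22, 25, 27, 30, 31, 33, 35, 144])  # 144 is clearly suspect
scores = modified_z_scores(ages)
outliers = ages[np.abs(scores) > 3.5]
print(outliers)  # [144]
```

Because it is built on medians rather than the mean and standard deviation, this score is far less distorted by the outlier itself than the ordinary Z-score.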
bacilo/facebook-scraper
ScrapingBasics.ipynb
gpl-3.0
[ "facebook-scraper\nThis is a short introduction to using the scraper to fully scrape a public FB page\nRequirements\n\nYou need to register yourself as a developer on Facebook\nYou create an App on your Facebook developer page\nYou go to Graph Explorer to generate an Access Token with the permissions you want (I recommend getting all of them for this purpose to avoid errors later)\n\nNotes\nYou will absolutely need to introduce the ACCESS_TOKEN, but APP_ID and APP_ID_SECRET are only required in order to extend your ACCESS_TOKEN. If you are fine working with a short-lived ACCESS_TOKEN and renewing that ACCESS_TOKEN manually on your Facebook developers page, then you can leave APP_ID and APP_ID_SECRET empty\nPAGE_ID: The ID of the Public page you will scrape (for instance: '1889414787955466'). You will usually see this on the URL on your browser. Sometimes, however, a name is provided. The name WILL NOT work, you need to figure out the ID. (There are plenty of websites that do this, I use https://www.wallflux.com/facebook_id/)", "import fb_scraper.prodcons\n\nAPP_ID = ''\nAPP_ID_SECRET = ''\nACCESS_TOKEN = ''", "Producer/Consumer Manager\nThe prodcons module builds on a Producer/Consumer multithreaded approach to issue batch requests to the FB API and process the corresponding responses, saving them to the respective .CSV files", "mgr = fb_scraper.prodcons.Manager(\n access_token=ACCESS_TOKEN,\n api_key=APP_ID,\n api_secret=APP_ID_SECRET\n )", "Extending ACCESS_TOKEN\n(Must have APP_ID and APP_ID_SECRET setup)\nThis function extends the ACCESS_TOKEN and automatically replaces it in the mgr object\nNOTE: Copy-paste it into your application setup to start using the extended token in the future", "mgr.graph.extend_token()", "Start scraping threads\nJust call the start() function from the Manager and wait until it is completed.\nA line is printed to indicate how far the scraping has reached (i.e. how many posts, reactions, comments, etc... 
have been received and stored in the .CSV file structure)", "mgr.start()", "Add scraping jobs\nFrom the mgr object, just add the group or post (what is available at the moment) that you would like to scrape", "mgr.scrape_post('XXXXXXXXXXXXXX') # Where 'XXXXXXXXXXXXXXX' is the FULL post ID, i.e. GROUPID_POSTID\nmgr.scrape_group('XXXXXXXXXXXXXX') # Where 'XXXXXXXXXXXXXXX' is the Group ID" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eyurtsev/FlowCytometryTools
FlowCytometryTools/tests/notebooks/api_tests.ipynb
mit
[ "Quick Introduction to FlowCytometryTools\n<pre>\nAuthor: Jonathan Friedman & Eugene Yurtsev\nLast Updated Date: 2013-October-06\n</pre>\n\n<pre>\nPlease read the instructions on the webpage (https://bitbucket.org/gorelab) on how to install these packages.\n\nHave fun!\n</pre>\n\nLoad Dependencies", "import matplotlib\n\n%load_ext autoreload\n%autoreload 2\nfrom FlowCytometryTools import FCPlate, FCMeasurement", "<pre>\n(Note: The lines above with the % mark instruct python to reload the dependencies if any of them changed.)\n</pre>\n\nLoading Data\nLoading individual files", "import os, FlowCytometryTools\ndatadir = os.path.join(FlowCytometryTools.__path__[0], 'tests', 'data', 'Plate01')\ndatafile = os.path.join(datadir, 'CFP_Well_A4.fcs')\nprint('Loading file from path: {0}'.format(datafile))\nsample = FCMeasurement(ID='Test Plate', datafile=datafile)\n\nsample.get_data().iloc[:20]", "Loading all fcs files in a folder into a plate (Recommended)", "# You can insert your own directory instead of datadir here.\ndatadir = os.path.join(FlowCytometryTools.__path__[0], 'tests', 'data', 'Plate01')\nprint('Loading files from directory path: {0}'.format(datadir))\n\n# Load the files\nplate = FCPlate.from_dir(ID='Test Plate', path=datadir, parser='name')\n\nprint(plate)", "Transformations", "plate = FCPlate.from_dir(ID='Test Plate', path=datadir,parser='name').transform('hlog', channels=('Y2-A', 'B1-A'))\n# The line above is equivalent to the two steps below.\n# plate = FCPlate.from_dir(ID='Test Plate', path=datadir)\n# plate = plate.transform('hlog', channels=('Y2-A', 'B1-A')) ", "Examine the contents of the plate", "print(plate.dropna())", "Basics\nLet's pick single well", "well = plate['A3']\nprint(well)\n\nwell.get_data()['Y2-A'].median()", "Before we can do any plotting, we need to figure out what information was collected by the flow cytometer.", "print(well.channel_names)\n\nprint(well.channels)", "If you want to understand what the different fields mean, you need 
to read the specification for the FCS format. To check which FCS format was used do the following:", "well.meta['__header__']['FCS format']", "There is more meta data buried inside of each FCS file. This meta data is stored inside of a dictionary called meta.", "print(type(well.meta))\nprint(well.meta.keys())", "For example, one of the keys is \\$TOT (Total number of events) and another is \\$DATE (Date of measurement).", "print(well.meta['$TOT'])\n\nprint(well.meta['$DATE'])", "Plotting\n<pre>\nYou'll find that some basic plotting functionality has already been implemented for you.\nLet's check it out by first plotting 1d and 2d histograms on single wells, and then we'll move on to plots on an entire 96-well plate.\n</pre>\n\n1d histograms", "%pylab inline\n\nfigure(figsize=(10, 10))\nwell.plot('B1-A', bins=100);\n\nfigure(figsize=(10, 10))\nwell.plot('B1-A', bins=100, alpha=0.7, color='green', normed=1);", "2d histograms", "figure(figsize=(10, 10))\nwell.plot(['B1-A', 'Y2-A']);", "<pre>\nThis plot function accepts all the same arguments as does the matplotlib pcolormesh function.\nFor example, we can change the default colormap used for plotting.\n</pre>", "well.plot(['B1-A', 'Y2-A'], cmap=cm.Oranges, colorbar=False);", "<pre>\nAlso, instead of plotting histograms, we can choose to plot a scatter plot by setting the argument <b>kind='scatter'</b>.\nIn this case, the plot function will behave like the matplotlib scatter function and pass all additional arguments to scatter.\n</pre>", "well.plot(['B1-A', 'Y2-A'], kind='scatter', color='red', s=1, alpha=0.3);", "Working with Plates\nYou can check which wells have data loaded in them simply by using the print command", "print(plate)", "Many of the wells in the plate above are empty. 
Use the function dropna() to compactify the plate.", "plate = plate.dropna()\n\nprint(plate)\nplate.shape", "Plotting plates", "figure(figsize=(10, 10))\nplate.plot('Y2-A', bins=100, autolabel=False, ylim=(0, 1000), \n xlim=(-1000, 10000), \n row_labels_kwargs={'size' : 'xx-large'});\n\nfigure(figsize=(10, 10))\nplate.plot(['B1-A', 'Y2-A']);\n\nfigure(figsize=(10, 10))\nplate.plot(['B1-A', 'Y2-A'], ids=['A3', 'C7']);", "Gating", "from FlowCytometryTools import ThresholdGate, PolyGate", "Creating Gates", "y2_gate = ThresholdGate(1000.0, 'Y2-A', region='above')", "GUI for creating gates (alpha release) (NOT YET IMPLEMENTED. SKIP SECTION)\n<B>Important:</B> You must install wx for python in order to use this GUI.", "# plate['A3'].view() # Try if you have wx installed\n\npoly_gate = PolyGate([(7307.6923076923067, 5809.523809523811), (3483.5164835164837, 1455.7823129251719), (5901.0989010989015, -3714.2857142857138), \n(8230.7692307692305, 367.34693877551035), (8230.7692307692305, 367.34693877551035)], ['HDR-T', 'FSC-A'], region='in')\n", "Visualizing Gates", "w = plate['A3']\n\nax = subplot()\nb1_gate = ThresholdGate(2000.0, 'B1-A', region='above')\n\nplate['A3'].gate(y2_gate).plot(('B1-A', 'Y2-A'), ax=ax, cmap=cm.Reds, gates=[y2_gate]);\nplate['A3'].gate(~y2_gate).plot(('B1-A', 'Y2-A'), ax=ax, cmap=cm.gray);\n\n\nylim(-1000, 9000)\n\n\n\nfigure(figsize=(8, 8))\n#suptitle('All gates plotted but not applied')\nplate.plot(('B1-A','Y2-A'), gates=[y2_gate, b1_gate]);", "Applying gates to wells", "original_well = plate['A3']\ngated_well = original_well.gate(y2_gate, )\n\nfigure(figsize=(4, 10))\nax = subplot(2, 1, 1)\ntitle('Original well')\noriginal_well.plot('Y2-A', bins=300, color='gray', ax=ax)\nxlim(-1000, 10000)\ngrid(True)\n\nax = subplot(2, 1, 2)\ntitle('Gated well')\ngated_well.plot('Y2-A', bins=300, color='y', ax=ax)\nylim(0, 400)\ngrid(True)\nxlim(-1000, 10000)", "Calculations\nExample 01: Compute the total number of events\n<pre>\nTo make calculations, you need 
to write a function that accepts as input a well and returns the desired calculation.\nAs an example, we will create a function that counts the total number of events.\n</pre>", "well = plate['A3']", "<pre>\nTo access the underlying data in the well, use the get_data function. \nThe returned data is a pandas data frame.\nThat's right. You have the power of pandas at your hands!\n</pre>", "data = well.get_data()\n\nprint(data)\n\ndata['Y2-A']\n\ndata['Y2-A'].describe()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
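In the gating section of the FlowCytometryTools notebook above, `ThresholdGate(1000.0, 'Y2-A', region='above')` keeps only the events whose Y2-A signal exceeds the cutoff. The same operation can be sketched without the library on synthetic events; the random data and the `threshold_gate` helper below are illustrative stand-ins, and only the cutoff value and the above/below semantics come from the notebook.

```python
import numpy as np

def threshold_gate(events, channel_index, threshold, region="above"):
    """Keep only the events on one side of a threshold in one channel.

    events: (n_events, n_channels) array, one row per detected cell.
    """
    values = events[:, channel_index]
    if region == "above":
        mask = values > threshold
    elif region == "below":
        mask = values < threshold
    else:
        raise ValueError("region must be 'above' or 'below'")
    return events[mask]

# Two-channel toy data standing in for ('B1-A', 'Y2-A') measurements.
rng = np.random.default_rng(0)
events = rng.normal(loc=800.0, scale=600.0, size=(1000, 2))

gated = threshold_gate(events, channel_index=1, threshold=1000.0, region="above")
complement = threshold_gate(events, channel_index=1, threshold=1000.0, region="below")

# Every gated event really is above the cutoff, and the two gates partition
# the data (a tie at exactly 1000.0 is essentially impossible for continuous draws).
assert (gated[:, 1] > 1000.0).all()
assert len(gated) + len(complement) == len(events)
```

FlowCytometryTools composes such gates (for example `~y2_gate` for the complement, as the notebook does); the sketch shows only the core boolean-mask idea behind a threshold gate.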
james-prior/cohpy
20160708-dojo-user-input-loop-with-iter-partial-input-prompt-sentinel.ipynb
mit
[ "The focus of this notebook is refactoring a loop that\n- gets user input\n- quits if that input matches some sentinel value\n- processes the user input\nThe interesting part starts around cell #4.", "from functools import partial\n\ndef convert(s):\n converters = (int, float)\n \n for converter in converters:\n try:\n value = converter(s)\n except ValueError:\n pass\n else:\n return value\n \n return s\n\ndef process_input(s):\n value = convert(s)\n print('%r becomes %r' % (s, value))", "Below is a typical loop for\n- getting user input\n- quitting the loop if the user enters a special value\n- processing the input", "def main():\n prompt = 'gimme: '\n while True:\n s = input(prompt)\n if s == 'quit':\n break\n process_input(s)", "It works as shown below.", "main()", "Below is a different way of writing that loop.\nHow would you apply it to loop at the bottom of\n2016-04/2016-Apr-Gutenberg.py?", "def main():\n prompt = 'gimme: '\n for s in iter(partial(input, prompt), 'quit'):\n process_input(s)\n\nmain()", "It can be reduced to a generator expression.", "prompt = 'gimme: '\nget_values = (convert(s) for s in iter(partial(input, prompt), 'quit'))\nfor value in get_values:\n print(value)", "2017-10-06 More thoughts about partial(input, prompt) and alternatives to it.", "prompt = 'gimme: '\n\ndef get_input():\n return input(prompt)\n\ndef main():\n for s in iter(get_input, 'quit'):\n process_input(s)\n\nmain()\n\ndef main():\n prompt = 'gimme: '\n for s in iter(lambda : input(prompt), 'quit'):\n process_input(s)\n\nmain()\n\ndef my_partial(function, *args, **kwargs):\n def helper():\n return function(*args, **kwargs)\n return helper\n\ndef main():\n prompt = 'gimme: '\n for s in iter(my_partial(input, prompt), 'quit'):\n process_input(s)\n\nmain()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
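The refactoring in the notebook above hinges on the two-argument form of `iter(callable, sentinel)`: the callable is invoked repeatedly until it returns the sentinel, which is consumed but never yielded. A minimal sketch with a canned input source standing in for `input(prompt)` (the `make_input` helper is mine, not from the notebook) makes the behavior easy to test:

```python
def make_input(responses):
    """Return a zero-argument callable that pops canned responses,
    standing in for input(prompt)."""
    it = iter(responses)
    return lambda: next(it)

fake_input = make_input(['12', '3.5', 'hello', 'quit', 'never seen'])

# iter(callable, sentinel) keeps calling fake_input() until it returns 'quit'.
seen = list(iter(fake_input, 'quit'))

# The sentinel stops iteration and is not yielded, so 'never seen' is
# never even requested.
assert seen == ['12', '3.5', 'hello']
```

Note that the sentinel comparison uses `==`, so any object that compares equal to `'quit'` would also stop the loop.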
GoogleCloudPlatform/asl-ml-immersion
notebooks/end-to-end-structured/solutions/3a_bqml_baseline_babyweight.ipynb
apache-2.0
[ "LAB 3a: BigQuery ML Model Baseline.\nLearning Objectives\n\nCreate baseline model with BQML\nEvaluate baseline model\nCalculate RMSE of baseline model\n\nIntroduction\nIn this notebook, we will create a baseline model to predict the weight of a baby before it is born. We will use BigQuery ML to build a linear babyweight prediction model with the base features and no feature engineering yet.\nWe will create a baseline model with BQML, evaluate our baseline model, and calculate its RMSE.\nVerify tables exist\nRun the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.", "%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_train\nLIMIT 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM babyweight.babyweight_data_eval\nLIMIT 0", "Create the baseline model\nNext, we'll create a linear regression baseline model with no feature engineering. We'll use this to compare our later, more complex models against.\nTrain the \"Baseline Model\".\nWhen creating a BQML model, you must specify the model type (in our case linear regression) and the input label (weight_pounds). Note also that we are using the training data table as the data source and we don't need BQML to split the data because we have already split it ourselves.", "%%bigquery\nCREATE OR REPLACE MODEL\n    babyweight.baseline_model\n\nOPTIONS (\n    MODEL_TYPE=\"LINEAR_REG\",\n    INPUT_LABEL_COLS=[\"weight_pounds\"],\n    DATA_SPLIT_METHOD=\"NO_SPLIT\") AS\n\nSELECT\n    weight_pounds,\n    is_male,\n    mother_age,\n    plurality,\n    gestation_weeks\nFROM\n    babyweight.babyweight_data_train", "REMINDER: The query takes several minutes to complete. After the first iteration is complete, your model (baseline_model) appears in the navigation panel of the BigQuery web UI. 
Because the query uses a CREATE MODEL statement to create a model, you do not see query results.\nYou can observe the model as it's being trained by viewing the Model stats tab in the BigQuery web UI. As soon as the first iteration completes, the tab is updated. The stats continue to update as each iteration completes.\nOnce the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nEvaluate the baseline model\nBigQuery can automatically split the data it is given, training on one part and evaluating on the rest. However, to make our comparison with the later custom models completely reproducible, we decided the split ourselves.\nNOTE: The results are also displayed in the BigQuery Cloud Console under the Evaluation tab.", "%%bigquery\n-- Information from model training\nSELECT * FROM ML.TRAINING_INFO(MODEL babyweight.baseline_model)", "Get evaluation statistics for the baseline_model.\nAfter creating your model, you evaluate the performance of the regressor using the ML.EVALUATE function. The ML.EVALUATE function evaluates the predicted values against the actual data.", "%%bigquery\nSELECT\n    *\nFROM\n    ML.EVALUATE(MODEL babyweight.baseline_model,\n    (\n    SELECT\n        weight_pounds,\n        is_male,\n        mother_age,\n        plurality,\n        gestation_weeks\n    FROM\n        babyweight.babyweight_data_eval\n    ))", "Resource for an explanation of the Regression Metrics.\nWrite a SQL query to find the RMSE of the evaluation data\nSince this is regression, we typically use the RMSE, but it is not natively included in the output of our evaluation metrics above. 
However, we can simply take the SQRT() of the mean squared error of our loss metric from evaluation of the baseline_model to get RMSE.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL babyweight.baseline_model,\n (\n SELECT\n weight_pounds,\n is_male,\n mother_age,\n plurality,\n gestation_weeks\n FROM\n babyweight.babyweight_data_eval\n ))", "Lab Summary:\nIn this lab, we created a baseline model with BQML, evaluated our baseline model, and calculated the RMSE of our baseline model.\nCopyright 2020 Google LLC\nLicensed under the Apache License, Version 2.0 (the \"License\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at\n https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software\ndistributed under the License is distributed on an \"AS IS\" BASIS,\nWITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\nSee the License for the specific language governing permissions and\nlimitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
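The lab's final query above wraps the evaluation in `SQRT(mean_squared_error)` because BigQuery ML reports MSE rather than RMSE. The same relationship in plain Python, on made-up `weight_pounds` values rather than the lab's actual predictions:

```python
import math

def mse(actual, predicted):
    """Mean squared error over paired observations."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # RMSE is just the square root of MSE, mirroring SQRT(mean_squared_error).
    return math.sqrt(mse(actual, predicted))

actual = [7.5, 6.8, 8.1, 7.0]     # weight_pounds ground truth (made up)
predicted = [7.2, 7.0, 7.9, 7.4]  # model outputs (made up)

# Squared errors: 0.09, 0.04, 0.04, 0.16 -> sum 0.33 -> MSE 0.0825.
assert abs(mse(actual, predicted) - 0.0825) < 1e-9
assert abs(rmse(actual, predicted) - 0.0825 ** 0.5) < 1e-9
```

RMSE is preferred for reporting here because, unlike MSE, it is in the same units as the label (pounds).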
TimothyHelton/k2datascience
notebooks/Resampling_Exercises.ipynb
bsd-3-clause
[ "Resampling Methods\nTimothy Helton\n\nThe goal of predictive modeling is to create models that make good predictions on new data. We\ndon't have access to this new data at the time of training, so we must use statistical methods to estimate the performance of a model on new data. This class of methods are called resampling methods, as they resampling your available training data.\n\n<br>\n<font color=\"red\">\n NOTE:\n <br>\n This notebook uses code found in the\n <a href=\"https://github.com/TimothyHelton/k2datascience/blob/master/k2datascience/preprocessing.py\">\n <strong>k2datascience.preprocessing</strong></a> module.\n To execute all the cells do one of the following items:\n <ul>\n <li>Install the k2datascience package to the active Python interpreter.</li>\n <li>Add k2datascience/k2datascience to the PYTHON_PATH system variable.</li>\n <li>Create a link to the preprocessing.py file in the same directory as this notebook.</li>\n</font>\n\nImports", "import pandas as pd\nimport numpy as np\nimport scipy as sp\n\nfrom k2datascience import plotting\nfrom k2datascience import preprocessing\n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n%matplotlib inline", "Theory\nExercise 1\nWe will now derive the probability that a given observation is part\nof a bootstrap sample. Suppose that we obtain a bootstrap sample\nfrom a set of n observations.\n(a) What is the probability that the first bootstrap observation is\nnot the jth observation from the original sample? 
Justify your\nanswer.\n(b) What is the probability that the second bootstrap observation\nis not the jth observation from the original sample?\n(c) Argue that the probability that the jth observation is not in the\nbootstrap sample is $(1 − 1/n) ^ n$.\n(d) When n = 5, what is the probability that the jth observation is\nin the bootstrap sample?\n(e) When n = 100, what is the probability that the jth observation\nis in the bootstrap sample?\n(f) When n = 10,000, what is the probability that the jth observation\nis in the bootstrap sample?\n(a) What is the probability that the first bootstrap observation is\nnot the jth observation from the original sample? Justify your\nanswer.\n$$P = \\frac{n-1}{n}$$\n(b) What is the probability that the second bootstrap observation\nis not the jth observation from the original sample?\n\nSamples may be selected multiple times.\nSame probability as (a), since the bootstrap does not remove a sample.\n\n$$P = \\frac{n-1}{n}$$\n(c) Argue that the probability that the jth observation is not in the\nbootstrap sample is $(1 − 1/n) ^ n$.\n\nThe probability that the $j^{th}$ sample is not in the bootstrap sample equals the probability of not drawing it on a single draw, $(n-1)/n$, raised to the power n, since the n draws are independent.\n\n(d) When n = 5, what is the probability that the jth observation is\nin the bootstrap sample?", "print(f'{preprocessing.prob_bootstrap(5):.3f}')", "(e) When n = 100, what is the probability that the jth observation\nis in the bootstrap sample?", "print(f'{preprocessing.prob_bootstrap(100):.3f}')", "(f) When n = 10,000, what is the probability that the jth observation\nis in the bootstrap sample?", "print(f'{preprocessing.prob_bootstrap(1e4):.3f}')", "Exercise 2\nWe now review k-fold cross-validation.\n(a) Explain how k-fold cross-validation is implemented.\n(b) What are the advantages and disadvantages of k-fold cross-validation\nrelative to:\n1. The validation set approach?\n1. 
LOOCV?\n(a) Explain how k-fold cross-validation is implemented.\n\nThe data set is partitioned into k folds.\nA model is fit to k - 1 folds \nThe error is calculated between the predicted values from the model and remaining unused fold.\nRepeat the previous steps k times, so each fold is used as the test sample.\nAverage the results of all the models.\n\n(b) What are the advantages and disadvantages of k-fold crossvalidation\nrelative to:\n1. The validation set approach?\n1. LOOCV?\n\nK-Fold vs Validation set CV\nK-Fold method uses all the data to create a model.\nK-Fold is less likely to overfit.\n\n\nK-Fold vs LOOCV\nK-Fold is faster\nLOOCV has less bias\nK-Fold has less variance\nLOOCV has many sets that are collinear (resulting in higher variance).\n\n\n\nExercise 3\nSuppose that we use some statistical learning method to make a prediction\nfor the response Y for a particular value of the predictor X.\nCarefully describe how we might estimate the standard deviation of\nour prediction.\n\nCalculate the standard deviation of the test metric.\n\nPractical\nExercise 4 - Credit Card Default Data Set\nWe previously used logistic regression to predict the probability of default using income and balance on the Default data set. We will now estimate the test error of this logistic regression model using the validation set approach.\nTask - Fit a logistic regression model that uses income and balance to predict default. Compare the error of the scikit-learn and statsmodel implementations without the validation set.", "loan = preprocessing.LoanDefault()\nloan.data.info()\nloan.data.head()\n\ndata = loan.data\ntitle = 'Loan'\nplotting.correlation_heatmap_plot(data, title=title)\nplotting.correlation_pair_plot(data, title=title)\n\nloan.validation_split()\nloan.logistic_summary()", "Task - Using the validation set approach, estimate the test error of this model. 
In order to do this, you must perform the following steps:\n\nSplit the sample set into a training set and a validation set.\nFit a multiple logistic regression model using only the training observations.\nObtain a prediction of default status for each individual in the validation set by computing the posterior probability of default for that individual, and classifying the individual to the default category if the posterior probability is greater than 0.5.\nCompute the validation set error, which is the fraction of the observations in the validation set that are misclassified.\nRepeat the process in (b) three times, using three different splits of the observations into a training set and a validation set. Comment on the results obtained.\nNow consider a logistic regression model that predicts the probability of default using income, balance, and a dummy variable for student. Estimate the test error for this model using the validation set approach. Comment on whether or not including a dummy variable for student leads to a reduction in the test error rate.", "loan = preprocessing.LoanDefault()\nloan.logistic_bootstrap(3)\n\nloan.features = (pd.concat([loan.data.loc[:, ['balance', 'income']],\n loan.data.student.cat.codes],\n axis=1)\n .rename(columns={0: 'student'}))\nloan.validation_split()\nloan.logistic_summary()", "FINDINGS\n\nThe Logistic Regression models have error rates repeatably below 3%.\nAdding the student variable did not reduce the error rate.\n\nTask - Compute estimates for the standard errors of the income and balance logistic regression coefficients by using the bootrap and logistic regression functions.\n\nUse the summary() method on the logistic regression statsmodel instance.\nImplement your own bootstrap method and run the model 100 times\nComment on the estimated standard errors obtained using statsmodels and your bootstrap.", "loan = preprocessing.LoanDefault()\nloan.logistic_bootstrap(100)", "Exercise 5 - Stock Market Data\nTask - We will 
compute the LOOCV error for a simple logistic regression model on the SMarket data set. \n\nRead in the stock market data set\nFit a logistic regression model that predicts Direction using Lag1 and Lag2.\nFit a logistic regression model that predicts Direction using Lag1 and Lag2 using all but the first observation.\nUse the model from (3) to predict the direction of the first observation. You can do this by predicting that the first observation will go up if $P(\\mbox{direction = Up} | Lag1,Lag2 ) > 0.5$. Was this observation correctly classified?\nWrite a loop from i=1 to i=n, where n is the number of observations in the data set, that performs each of the following steps:\nFit a logistic regression model using all but the ith observation to predict Direction using Lag1 and Lag2.\nCompute the posterior probability of the market moving up for the ith observation.\nUse the posterior probability for the ith observation in order to predict whether or not the market moves up.\nDetermine whether or not an error was made in predicting the direction for the ith observation. If an error was made, then indicate this as a 1, and otherwise indicate it as a 0.\n\n\n\nTake the average of the n numbers obtained in (5) in order to obtain the LOOCV estimate for the test error. Comment on the results.\n\n\nRead in the stock market data set", "sm = preprocessing.StockMarket()\nsm.data.info()\nsm.data.head()\n\ndata = sm.data\ntitle = 'Stock Market'\nplotting.correlation_heatmap_plot(data, title=title)\nplotting.correlation_pair_plot(data, title=title)", "2. Fit a logistic regression model that predicts Direction using Lag1 and Lag2.", "sm.logistic_summary()", "3. Fit a logistic regression model that predicts Direction using Lag1 and Lag2 using all but the first observation.", "sm.data = sm.data.iloc[1:]\nsm.logistic_summary()", "4. Use the model from (3) to predict the direction of the first observation. 
You can do this by predicting that the first observation will go up if $P(\\mbox{direction = Up} | Lag1,Lag2 ) > 0.5$. Was this observation correctly classified?", "sm.data.iloc[0]\nsm.data.direction.cat.categories\nsm.predict[0]", "FINDINGS\n\nThe model correctly predicted the model would go up.\n\n5. Write a loop from i=1 to i=n, where n is the number of observations in the data set, that performs each of the following steps:\n\nFit a logistic regression model using all but the ith observation to predict Direction using Lag1 and Lag2.\nCompute the posterior probability of the market moving up for the ith observation.\nUse the posterior probability for the ith observation in order to predict whether or not the market moves up.\nDetermine whether or not an error was made in predicting the direction for the ith observation. If an error was made, then indicate this as a 1, and otherwise indicate it as a 0.", "sm = preprocessing.StockMarket()\nsm.logistic_leave_one_out()\n\nsm.logistic_leave_one_out()", "6. Take the average of the n numbers obtained in (5) in order to obtain the LOOCV estimate for the test error. Comment on the results.\nFINDINGS\n\nFor this dataset the Leave One Out cross validation did not reduce the error rate.\n\nExercise 6 - Simulated Data\nTask - We will now perform cross-validation on a simulated data set.\n\nCreate a scatterplot of X against Y. Comment on what you find.\nCompute the LOOCV errors that result from fitting the following four models using least squares: Linear, Quadratic, Cubic and Quartic.\nRepeat (2) using another random seed, and report your results. Are your results the same as what you got in (2)? Why?\nWhich of the models in (3) had the smallest LOOCV error? Is this what you expected? Explain your answer.\nComment on the statistical significance of the coefficient estimates that results from fitting each of the models in (2) using least squares. 
Do these results agree with the conclusions drawn based on the cross-validation results?\n\n1. Create a scatterplot of X against Y. Comment on what you find.", "sim = preprocessing.Simulated()\nsim.data.info()\nsim.data.head()\n\nsim.scatter_plot()", "2. Compute the LOOCV errors that result from fitting the following four models using least squares: Linear, Quadratic, Cubic and Quartic.", "for deg in range(1, 5):\n    print('{}\\nPolynomial Model Degree: {}\\n'.format('*' * 80, deg))\n    sim.linear_leave_one_out(degree=deg)", "3. Repeat (2) using another random seed, and report your results. Are your results the same as what you got in (2)? Why?", "sim.random_seed = 2\nsim.load_data()\nsim.validation_split()\nsim.single_feature()\n\nfor deg in range(1, 5):\n    print('{}\\nPolynomial Model Degree: {}\\n'.format('*' * 80, deg))\n    sim.linear_leave_one_out(degree=deg)", "FINDINGS\n\nThe answers are identical.\nUnclear if this is an optimization in scikit-learn or a bug.\n\n\n\n4. Which of the models in (3) had the smallest LOOCV error? Is this what you expected? Explain your answer.\n\nThe Quadratic model has the best fit.\nThis is reasonable, since the data take a quadratic form.\nThe two higher-order models probably suffer from overfitting.\n\n5. Comment on the statistical significance of the coefficient estimates that result from fitting each of the models in (2) using least squares. Do these results agree with the conclusions drawn based on the cross-validation results?\nExercise 7 - Boston Housing Data\nTask - We will now consider the Boston housing data set that we have used previously.\n\nBased on this data set, provide an estimate for the population mean of medv. Call this estimate $\\hat{\\mu}$.\nProvide an estimate of the standard error of $\\hat{\\mu}$. Interpret this result.\nNow estimate the standard error of $\\hat{\\mu}$ using the bootstrap. 
How does this compare to your answer from (2)?\nBased on your bootstrap estimate from (3), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained from a t.test on medv.\nBased on this data set, provide an estimate, $\\hat{\\mu}$ med, for the median value of medv in the population.\nWe now would like to estimate the standard error of $\\hat{\\mu}$ med. Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.\nBased on this data set, provide an estimate for the tenth percentile of medv in Boston suburbs. Call this quantity $\\hat{\\mu}$ 0.1.\nUse the bootstrap to estimate the standard error of $\\hat{\\mu}$ 0.1. Comment on your findings.\n\n1. Based on this data set, provide an estimate for the population mean of medv. Call this estimate $\\hat{\\mu}$.", "bh = preprocessing.BostonHousing()\nmu = bh.data.medv.mean()\nmu", "2. Provide an estimate of the standard error of $\\hat{\\mu}$. Interpret this result.", "mu_se = sp.stats.sem(bh.data.medv)\nmu_se", "3. Now estimate the standard error of $\\hat{\\mu}$ using the bootstrap. How does this compare to your answer from (2)?", "std_errors = []\nsample_size = int(bh.data.shape[0] * 0.7)\nfor n in range(1000):\n new_sample = bh.data.medv.sample(n=sample_size)\n std_errors.append(sp.stats.sem(new_sample))\nse_bootstrap = np.mean(std_errors)\nse_bootstrap", "4. Based on your bootstrap estimate from (3), provide a 95% confidence interval for the mean of medv. Compare it to the results obtained from a t.test on medv.", "offset = 2 * se_bootstrap\nbh.data.medv.mean() - offset, bh.data.medv.mean() + offset\n\nsp.stats.t.interval(0.95, bh.data.shape[0] - 1,\n loc=np.mean(bh.data.medv),\n scale=sp.stats.sem(bh.data.medv))", "5. 
Based on this data set, provide an estimate, $\\hat{\\mu}$ med, for the median value of medv in the population.", "bh.data.medv.median()", "6. We now would like to estimate the standard error of $\\hat{\\mu}$ med. Unfortunately, there is no simple formula for computing the standard error of the median. Instead, estimate the standard error of the median using the bootstrap. Comment on your findings.", "medians = [(bh.data.medv\n .sample(n=bh.data.shape[0], replace=True)\n .median())\n for _ in range(1000)]\nprint(f'Average Median: {np.mean(medians)}')\nprint(f'Standard Error: {np.std(medians)}')", "7. Based on this data set, provide an estimate for the tenth percentile of medv in Boston suburbs. Call this quantity $\\hat{\\mu}$ 0.1.", "bh.data.medv.quantile(0.1)", "8. Use the bootstrap to estimate the standard error of $\\hat{\\mu}$ 0.1. Comment on your findings.", "quantiles = [(bh.data.medv\n .sample(bh.data.shape[0], replace=True)\n .quantile(0.1))\n for _ in range(1000)]\nprint(f'Average 10th Percentile: {np.mean(quantiles):.3f}')\nprint(f'Standard Error: {np.std(quantiles):.3f}')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
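Exercise 1 of the resampling notebook above rests on the identity P(jth observation in the bootstrap sample) = 1 − (1 − 1/n)^n, which approaches 1 − 1/e ≈ 0.632 as n grows. A standalone version of the helper the notebook calls (the function name here is mine, since the source of `preprocessing.prob_bootstrap` isn't shown) evaluates the same quantities for n = 5, 100, and 10,000:

```python
import math

def prob_in_bootstrap(n):
    """Probability that the jth observation appears in a bootstrap sample of size n."""
    return 1.0 - (1.0 - 1.0 / n) ** n

# n = 5: 1 - (4/5)^5 = 1 - 0.32768 = 0.67232
assert abs(prob_in_bootstrap(5) - (1 - 0.8 ** 5)) < 1e-12
# As n grows the value approaches the limit 1 - 1/e (about 0.632).
assert abs(prob_in_bootstrap(10_000) - (1 - math.exp(-1))) < 1e-4
```

This is the reason roughly 63.2% of the original observations show up in a large bootstrap sample, with the rest left out (the "out-of-bag" observations).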
icrtiou/coursera-ML
ex3-neural network/3- feed forward nn.ipynb
mit
[ "%reload_ext autoreload\n%autoreload 2\n\nimport numpy as np\n\nimport sys\nsys.path.append('..')\n\nfrom helper import nn\nfrom helper import logistic_regression as lr\n\nfrom sklearn.metrics import classification_report", "model\n<img style=\"float: left;\" src=\"../img/nn_model.png\">\nload weights and data", "theta1, theta2 = nn.load_weight('ex3weights.mat')\n\ntheta1.shape, theta2.shape", "The original data is 90 degrees off, so the data loading function uses a transpose to fix it.\nHowever, the transposed data is not compatible with the given parameters, because those parameters were trained on the original data. So, for the sake of applying the given parameters, I need to use the original data.", "X, y = nn.load_data('ex3data1.mat', transpose=False)\n\nX = np.insert(X, 0, values=np.ones(X.shape[0]), axis=1)  # intercept\n\nX.shape, y.shape", "feed forward prediction", "a1 = X\n\nz2 = a1 @ theta1.T  # (5000, 401) @ (25, 401).T = (5000, 25)\nz2.shape\n\nz2 = np.insert(z2, 0, values=np.ones(z2.shape[0]), axis=1)\n\na2 = lr.sigmoid(z2)\na2.shape\n\nz3 = a2 @ theta2.T\nz3.shape\n\na3 = lr.sigmoid(z3)\na3\n\ny_pred = np.argmax(a3, axis=1) + 1  # numpy is 0-based; +1 for the MATLAB convention\ny_pred.shape", "accuracy\nRemember: accuracy on the training data does not predict real-world performance.\nAll we can say is that the NN is a very powerful model; overfitting is easy here.", "print(classification_report(y, y_pred))" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
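The forward pass in the notebook above follows a fixed recipe: prepend a bias column of ones, multiply by `theta1.T`, apply the sigmoid, repeat for the output layer, then take `argmax` plus one for MATLAB-style labels. A self-contained sketch with random weights standing in for `ex3weights.mat`; the layer sizes here are scaled-down assumptions, not the 401-25-10 network from the exercise:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(X, theta1, theta2):
    """One-hidden-layer forward pass; bias units are prepended as columns of ones."""
    a1 = np.insert(X, 0, 1.0, axis=1)      # (m, n_in + 1)
    a2 = sigmoid(a1 @ theta1.T)            # (m, n_hidden)
    a2 = np.insert(a2, 0, 1.0, axis=1)     # (m, n_hidden + 1)
    a3 = sigmoid(a2 @ theta2.T)            # (m, n_out)
    return np.argmax(a3, axis=1) + 1       # +1 for MATLAB's 1-based class labels

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))         # 6 samples, 4 raw features
theta1 = rng.normal(size=(3, 5))    # hidden layer: 3 units, 4 features + bias
theta2 = rng.normal(size=(10, 4))   # output layer: 10 classes, 3 hidden units + bias

y_pred = feed_forward(X, theta1, theta2)
assert y_pred.shape == (6,)
assert set(y_pred) <= set(range(1, 11))
```

Unlike the notebook, which inserts the intercept column into X once at load time, this sketch folds the bias insertion into the forward pass itself; the arithmetic is the same.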
zingale/pyreaclib
pynucastro-integration.ipynb
bsd-3-clause
[ "Example of integrating a network\nThis notebook illustrates how to create a python network and integrate\nit with the scipy library.", "import pynucastro as pyna", "We'll start again with the basic CNO network explored earlier.", "files = [\"c12-pg-n13-ls09\", \n \"c13-pg-n14-nacr\",\n \"n13--c13-wc12\",\n \"n13-pg-o14-lg06\",\n \"n14-pg-o15-im05\",\n \"n15-pa-c12-nacr\",\n \"o14--n14-wc12\",\n \"o15--n15-wc12\"]", "A PythonNetwork is based on a RateCollection but has methods to write the RHS of the system of ODEs.", "pynet = pyna.PythonNetwork(files)", "For example, this network knows how to write the full term for a reaction that goes into the $dY/dt$ equation of the ODE system.\nHere we pick one of the rates that is part of the network an explore it.", "r = pynet.rates[1]\nprint(r)\n\nprint(pynet.ydot_string(r))", "and the code needed to evaluate that rate (the T-dependent part) as:", "print(pynet.function_string(r))", "The temperature-dependent rate evaluation functions take a Tfactor object, which precomputes most of the commonly-used temperature factors in the rates.\nThe write_network() method will output the python code needed to define the RHS of a network for integration with the SciPy integrators.\nSince python code can be slow, we use Numba to do just-in-time compilation of the functions to speed things up.", "pynet.write_network(\"cno_test_integrate.py\")\n\n%cat cno_test_integrate.py", "We can now import the network that was just created and integrate it using the SciPy ODE solvers", "import cno_test_integrate as cno", "Integrating the network\nWe can use the stiff ODE integration solvers that are part of Scipy to integrate this system now", "from scipy.integrate import solve_ivp\nimport numpy as np", "Initialize the thermodynamic conditions and initial composition. 
We express the composition as molar fractions, Y0.", "rho = 150\nT = 1.5e7\n\nX0 = np.zeros(cno.nnuc)\nX0[cno.ip] = 0.7\nX0[cno.ihe4] = 0.28\nX0[cno.ic12] = 0.02\n\nY0 = X0/cno.A", "Now we integrate. We use the BDF method, since reaction networks are in general stiff", "tmax = 1.e20\n\nsol = solve_ivp(cno.rhs, [0, tmax], Y0, method=\"BDF\",\n dense_output=True, args=(rho, T), rtol=1.e-6, atol=1.e-6)", "A network plot", "import matplotlib.pyplot as plt\n\nfig = plt.figure()\nax = fig.add_subplot(111)\n\nfor i in range(cno.nnuc):\n ax.loglog(sol.t, sol.y[i,:] * cno.A[i], label=f\"X({cno.names[i].capitalize()})\")\n\nax.set_xlim(1.e10, 1.e20)\nax.set_ylim(1.e-8, 1.0)\nax.legend(fontsize=\"small\")\n\nfig.set_size_inches((10, 8))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
eyaltrabelsi/my-notebooks
Lectures/profiling_python_by_example/Profiling Python by Example.ipynb
mit
[ "Profiling Python by Example\nEyal Trabelsi\nAbout Me 🙈\n\n\nSoftware Engineer at Salesforce 👷\n\n\nBig passion for python, data, and performance optimizations 🐍🤖\n\n\nOnline at medium | twitter 🌐\n\n\nToday ✨\n\n\nProfiling Introduction\n\n\nProfiling Strategies\n\n\nWhat's Profiling 🗣\n\n\nA profile is a set of statistics that describes how our program is executed\n\n\nCan help optimize our code\n\n\nProfiling is not Rocket Science 🚀\nOptimization?! Why? 🤨\n\n\nFast is better than slow 🐇\n\n\nlatency response time 200 milliseconds client roundtrip\n\n\nthroughput successful traffic flow of 200 requests per second\n\n\nMemory efficiency is good 💾\n\n\nSaving money is awesome 💸\n\n\nHardware will only take you so far 💻\n\n\nBefore We Optimize ⏰\n\nIt's actually needed 🚔\n\nremember optimized code is:\n\nharder to write and read\nless maintainable\nbuggier, more brittle\n\nOptimize when\n\ngather requirements, there are some parts you won't be able to touch\n\nestablish percentile SLAs: 50, 95, 99 max\n\n\nOur code is well tested 💯\n\n\nGood work takes cycles 🚲", "%load_ext snakeviz\n%load_ext memory_profiler\n%load_ext line_profiler\n%load_ext autoreload\n\n%autoreload 2", "Amdahl's law, focus on one part at a time\n# Profiling Options 📊\n\n\nResource: CPU / RAM / I/O\n\n\nProfiling Strategy: Offline (Deterministic) 🐞/ Online (Statistical) 🌐\n\n\nProfiling Granularity: Program 📝 / Function 📝📝 / Line level 📝📝📝\n\n\n# Our Example 👾\n\n\nNaive Spelling Corrector\n\n\nPeter Norvig", "import re\nfrom collections import Counter\n\ndef words(text): \n return re.findall(r'\\w+', text.lower())\n\nWORDS = Counter(words(open('big.txt').read()))\n\ndef P(word, N=sum(WORDS.values())): \n return WORDS[word] / N\n\ndef candidates(word): \n return (known([word]) or known(edits1(word)) or known(edits2(word)) or [word])\n\ndef known(words): \n return set(w for w in words if w in WORDS)\n\ndef edits1(word):\n letters = 'abcdefghijklmnopqrstuvwxyz'\n splits = [(word[:i], word[i:]) 
for i in range(len(word) + 1)]\n deletes = [L + R[1:] for L, R in splits if R]\n transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R)>1]\n replaces = [L + c + R[1:] for L, R in splits if R for c in letters]\n inserts = [L + c + R for L, R in splits for c in letters]\n return set(deletes + transposes + replaces + inserts)\n\ndef edits2(word): \n return (e2 for e1 in edits1(word) for e2 in edits1(e1))\n\ndef word_correction(word): \n return max(candidates(word), key=P)\n\ndef sentence_correction(sentence): \n return \" \".join(word_correction(word) for word in sentence.split(\" \"))\n\n\nsentence_correction('grofilingg is not rocet Sgience')", "Casual Profiling 👕👖\n\n\nA sense of how the program runs as a whole\n\n\nLets us understand whether a problem exists\n\n\ntime ⌛\n\n\nMeasures the user/system time for a single run\n\n\nBuilt-in to python with support for ipython magic", "! time python script.py 'grofilingg is not rocet Sgience'\n\n%time sentence_correction('grofilingg is not rocet Sgience')", "timeit ⌛⌛⌛\n\n\nBenchmarks multiple runs of the code snippet.\n\n\nmeasures process CPU\n\n\nBuilt-in to python with support for ipython magic.", "! 
python -m timeit -s \"...\"\n\n%timeit sentence_correction('grofilingg is not rocet Sgience')", "each run does thousands or millions of repetitions, compensating for very fast operations\nit disables garbage collection for consistency\n\nmemit ⌛⌛⌛\n\n\nMeasures process Memory\n\n\nHas 3rd party ipython magic", "%memit sentence_correction('grofilingg is not rocet Sgience')", "Casual Profiling Landscape ⛰️\n| Profiler | Metric | Type | Granularity | Built-in | Output |Compatibility | \n| :-: | :-: | :-: | :-: | :-: |:-: | :-: | \n| time | CPU | Casual | Program | ✅ | Text |🐧 / 🍎 / Windows| \n| timeit | CPU | Casual | Program | ✅ | Text |🐧 / 🍎 / Windows| \n| pyperf | CPU | Casual | Program | ❌ | Text |🐧 / 🍎 / Windows| \n| memory_profiler | Memory | Casual | Program | ❌ | Text |🐧 / 🍎 / Windows| \nCasual Profiling Pros and Cons 👀\n\n\nReally easy 😃\n\n\nLets us understand whether a problem exists 😃\n\n\nCan't pinpoint the bottleneck 😔\n\n\nOffline Profiling 🐞\n\n\nTracks events like function calls, exceptions and line executions\n\n\nDeterministic\n\n\nSignificant overhead\n\n\nMore suitable for local debugging\n\n\nHow Offline Profilers Work 🧠\n\n\nWork inside your process, allowing easy access to its stack.\n\n\nPython lets you specify a callback on specific interpreter events\n\n\nsys.setprofile - triggered on a function call (PyEval_SetProfile)\n\n\nsys.settrace - triggered on a function/line call (PyEval_SetTrace)\n\n\n“PyEval_SetTrace is similar to PyEval_SetProfile, except the tracing function does receive line-number events.”\n\ncProfile 🌊\n\n\nTraces every function call in a program \n\n\nIdentifies time-consuming functions \n\n\nBy default measures process CPU\n\n\nBuilt-in to python with support for ipython magic \n\n\nOnly supports a single process, not distributed systems or C parts", "! 
python -m cProfile script.py 'grofilingg is not rocet Sgience'\n\n%prun sentence_correction('grofilingg is not rocet Sgience')", "Snakeviz 🐍\n\n\nSupports cProfile\n\n\nCreates visualizations\n\n\nHas 3rd party ipython magic", "! snakeviz script.prof\n\n%snakeviz sentence_correction('grofilingg is not rocet Sgience')", "Memory profiler 💾\n\n\nTraces every line in a specific function \n\n\nMeasures process Memory\n\n\nIdentifies high memory footprint lines\n\n\nHas 3rd party ipython magic", "! python -m memory_profiler script.py 'grofilingg is not rocet Sgience'\n\nfrom script import sentence_correction, edits1\n%mprun -f edits1 sentence_correction('grofilingg is not rocet Sgience')", "Offline Profiling Landscape ⛰️\n| Profiler | Metric | Granularity | Built-in | Output |Compatibility | Comments | \n| :-: | :-: | :-: | :-: |:-: | :-: | :-: \n| cProfile | CPU | Function | ✅ | Text |🐧 / 🍎 / Windows| Customizable|\n| yappi | CPU | Function | ❌ | Text |🐧 / 🍎 / Windows|Includes C part |\n| line_profiler | CPU | Line | ❌ | Text |🐧 / 🍎 / Windows| |\n| memory_profiler | Memory | Line | ❌ | Text |🐧 / 🍎/ Windows | | \n| filprofiler | Memory | Function | ❌ | Flame | 🐧 / 🍎 | |\n| snakeviz |Visualizer| | ❌ |Flame |🐧 / 🍎 / Windows| |\nOffline Profiling Pros and Cons 👀\n\n\nCan pinpoint the bottleneck 😃 \n\n\nDeterministic 😃\n\n\nHigh overhead 😔\n\n\nCan be noisy 😔\n\n\nCan't tell you which inputs are slow 😔\n\n\nDistorted results, since only parts of your program are slowed down 😔\n\n\nOnline Profiling 🌐\n\n\nSamples the program execution stack periodically\n\n\nNondeterministic\n\n\nRequires more time to be accurate\n\n\nMarginal overhead\n\n\nMore suitable for continuous production monitoring\n\n\nHow Online Profilers Work 🧠\n\n\nPython lets you specify a signal handler\n\n\nThe setitimer system call sends a signal every X milliseconds\n\n\n\n\n\nSystem calls sometimes take a few milliseconds \n\n\nLimits your ability to sample too frequently\n\n\nUse python interpreter callbacks\n\n\nBut doesn't 
collect stack samples every callback. \n\n\n\npyinstrument 🎷\n\n\nMeasures process CPU\n\n\nSamples the stack every 1ms\n\n\nDoesn't have 3rd party ipython magic \n\n\nIf a function is cumulatively slow it will show up often \n\n\nIf a function is fast we won't see it at all", "! pyinstrument script.py 'grofilingg is not rocet Sgience'", "Online Profiling Landscape ⛰️\n| Profiler | Metric | Granularity | Built-in | Output |Compatibility | Comments | \n| :-: | :-: | :-: | :-: |:-: | :-: | :-: |\n| pyinstrument | CPU | Function | ❌ | Flame/Text |🐧 / 🍎 / Windows| |\n|python-flamegraph| CPU | Function | ❌ | Flame |🐧 / 🍎 / Windows| |\n| pyspy | CPU | Function | ❌ | Text | 🐧 / 🍎 |Works on a running process |\n| vmprof | CPU | Line | ❌ | Text |🐧 / 🍎 / Windows| |\n| austin |CPU/Memory| Function | ❌ | Flame/Text | 🐧 |Hard installation |\n| stacksampler | Memory | Function | ❌ | Flame |🐧 / 🍎 / Windows| \nOnline Profiling Pros and Cons 👀\n\n\nCan pinpoint the bottleneck 😃\n\n\nStill introduces overhead (marginal) 😐\n\n\nCan be noisy (less than offline) 😐\n\n\nLess accurate* 😔\n\n\nNon-deterministic 😔\n\n\nCan't tell you which inputs are slow 😔\n\n\nLogging 📋\n\n\nRecords whatever we want 😃\n\n\nDoesn't add a lot of overhead 😃\n\n\nLogging needs to be added upfront or you are out of luck 😔\n\n\n❗Pro Tip: record hot function inputs and duration\n\n\nLogging Landscape ⛰️\n| Profiler | Built-in | Compatibility | Comments | \n| :-: | :-: | :-: | :-: |\n| logging | ✅ | 🐧 / 🍎 / Windows | |\n| loguru | ❌ | 🐧 / 🍎 / Windows | |\n| eliot | ❌ | 🐧 / 🍎 / Windows | helps understand causes in the code|\n| Pysnooper | ❌ | 🐧 / 🍎 / Windows | Never use print for debugging again|\nCreate your Own Profiler 🤓\n\n\nCreate a custom function for cProfile\n\n\nCreate an offline profiler using sys.settrace and sys.setprofile\n\n\nCreate an online profiler using setitimer and ptrace\n\n\nUse pyrasite to inject python code into a running process\n\n\nTo run it, ptrace has to be configured as \"classic ptrace 
permissions\": \necho 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope, which may be a security risk\n\nThere are non-zero chances that your target Python process will crash\n\nWe Found The Bottleneck, Now What? 🤷\n\n\nAfter we find the bottleneck 🕵\n\n\n\"Fix the problem\" 🔧\n\n\nadd more hardware\n\nrearchitect to divide work\nadopt async\nuse smarter algorithms\nwrite faster python\nuse native python extensions\nuse a library with a faster implementation\n\nuse a different python runtime\n\n\nWatch for performance regressions ↪\n\n\nKey Takeaways 🔑\n\n\nOptimize when it's actually needed 🚔\n\n\nOur code needs to be well tested 💯\n\n\nDifferent tools have different tradeoffs 🔨🔧\n\n\nAdd logs in strategic places 🏰\n\n\nWatch for performance regressions ↪\n\n\nGood work takes cycles 🚲\n\n\nProfiling is not Rocket Science 🚀" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ohgodscience/Python
mousetrackerdata/post2.ipynb
gpl-2.0
[ "Overview\nI'm going to be looking at some pilot data that some colleagues and I collected using Jon Freeman's Mousetracker (http://www.mousetracker.org/). As the name suggests, mousetracker is a program designed to track participants' mouse movements. In social psychology, researchers use it to track participants' implicit decision making processes. Originally, it was developed to study how individuals categorize faces. An example of the paradigm would be a participant having to choose whether a face is male or female, like so: \nThe researcher could then vary the degree to which the face has stereotypically male features, or stereotypically female features, and track not just what participants are categorizing the faces as, but also, how they reach those decisions, by tracking the paths and time course of the mouse movements.\nCurrent project\nAnyway, some friends and I are currently working on distinguishing how individuals allocate resources in the context of a relationship. We hypothesize that at any given time, individuals are concerned with:\n\ntheir self-interest\ntheir partner's interests\nthe interest of the group or dyad, or the relationship, or them as a pair\n\nand these motives affect the way individuals choose to distribute resources. To distinguish between these three motives, we generated three sets of stimuli using poker chips that pit each of these motives against each other.\nThe first set of stimuli pit participants' self-interest against the interests of their partner. For example, if red poker chips were paid out to you and green to your partner, one dilemma would be choosing between these two stacks of poker chips of equal height (i.e., the group receives the same in both cases):\nLeft | Right |\n:------:|:---:|\n | \nThe second set of stimuli pits a participant's concern for the interest of their partner vs. their own self interest and the group's interest. 
This captures participants' \"pure\" altruistic motives in the sense that choosing to favor their partner in this scenario sacrifices both their own interests and the group's interest:\nLeft | Right |\n :--:|:---:|\n | \nFinally, the last set of stimuli pit participants' self-interest against that of their partner and the group. In this case, one set of poker chips results in the participant getting more than the other set of chips, but in the other set of poker chips, his/her partner gets more and so does the pair of them:\nLeft | Right |\n :--:|:---:|\n | \nThe data\nThe data come in a person-period dataset. This is a \"long\" format where each participant has multiple rows that represent each trial of the experiment (there were 60 or so trials). However, each row also contains multiple columns, each representing a bin of average locations of the participant's mouse pointer during that time span. There are ~100 such bins. \nIn other words, each participant made 60 choices, and their mouse positions were averaged into ~100 time points per trial.\nThe first thing we're going to do is to load our data. To do this, we first import Pandas, read our .csv file and print a list of columns. The raw data can be found here: https://raw.githubusercontent.com/bryansim/Python/master/mousetrackerdata/mousetrackercorrected.csv", "import pandas as pd\nimport re\n\ndata = pd.read_csv(\"mousetrackercorrected.csv\")\ndata.columns.values\n\ndata.iloc[0:4, 0:19]", "Descriptives\nIn the above data, the first thing we're going to do is find the mean of participants' reaction time (RT), maximum deviation (MD), and area under curve (AUC). The latter two measures are measures of how much participants were \"attracted\" to the other option despite selecting the option that they did.\nThere are two columns for each (e.g., MD_1 and MD_2 depending on which option participants chose). 
These end up being redundant with one another, and we'll have to combine them.\nx-flips and y-flips, as their names suggest, measure the number of times participants' cursors flipped on the x and y axis.\nTo combine the two MD columns, we create a new column, find all the rows which have data in MD_1, and then fill in the rows which don't have data in MD_1 with the rows that have data in MD_2. We do the same with AUC.", "data['MD'] = data.loc[data['MD_1'].isnull() == False, ['MD_1']]\ndata.loc[data['MD'].isnull() == True,['MD']] = data.loc[data['MD_2'].isnull() == False]['MD_2'] \n#We do this to get a slice instead of data.loc[data['MD_2'].isnull() == False, ['MD_2']] which returns a dataframe\n\ndata['AUC'] = data.loc[data['AUC_1'].isnull() == False, ['AUC_1']]\ndata.loc[data['AUC'].isnull() == True, ['AUC']] = data.loc[data['AUC_2'].isnull() == False]['AUC_2']", "Mean MD and AUC\nNow, we can use the .mean() method to get the mean of the above.", "data['AUC'].mean()\n\ndata['MD'].mean()", "Means by choice type\nThe next thing we want to do is see whether participants differed depending on what the type of choice was (e.g., self vs. other etc.) Eventually, we will have a 3x2 table of means:\nself vs. 
other | group more w/ self less | group more w/ self more |\n---:|:---:|:---:\nchose selfish | chose selfish | chose selfish \nchose selfless | chose selfless | chose selfless \nBecause of the way the conditions were coded (they include trial numbers), we'll use some regex to ignore those numbers:", "sodata = data.loc[data['code'].str.extract(r'(so)', expand = False).isnull() == False]\nsmgldata = data.loc[data['code'].str.extract(r'(smgl)', expand = False).isnull() == False]\nsmgmdata = data.loc[data['code'].str.extract(r'(smgm)', expand = False).isnull() == False]\n\nprint sodata['MD'].mean()\nprint smgldata['MD'].mean()\nprint smgmdata['MD'].mean()\n\nprint sodata['AUC'].mean()\nprint smgldata['AUC'].mean()\nprint smgmdata['AUC'].mean()", "AS IT TURNS OUT, this isn't very helpful, because this analysis collapses over whether or not participants chose the selfish or unselfish option, which is really what we're interested in. So let's look at that next:", "print sodata.loc[sodata['error'] == 0]['MD'].mean()\nprint sodata.loc[sodata['error'] == 1]['MD'].mean()\nprint smgldata.loc[smgldata['error'] == 0]['MD'].mean()\nprint smgldata.loc[smgldata['error'] == 1]['MD'].mean()\nprint smgmdata.loc[smgmdata['error'] == 0]['MD'].mean()\nprint smgmdata.loc[smgmdata['error'] == 1]['MD'].mean()\n\nprint sodata.loc[sodata['error'] == 0]['AUC'].mean()\nprint sodata.loc[sodata['error'] == 1]['AUC'].mean()\nprint smgldata.loc[smgldata['error'] == 0]['AUC'].mean()\nprint smgldata.loc[smgldata['error'] == 1]['AUC'].mean()\nprint smgmdata.loc[smgmdata['error'] == 0]['AUC'].mean()\nprint smgmdata.loc[smgmdata['error'] == 1]['AUC'].mean()", "So, that table above looks like this:\nMD | self vs. other | group more w/ self less | group more w/ self more |\n---:|:---:|:---:|:---:\nchose selfish | .28 | .23 | .18\nchose selfless | .29 | .25 | .39\nAUC | self vs. 
other | group more w/ self less | group more w/ self more |\n---:|:---:|:---:|:---:\nchose selfish | .51 | .50 | .31\nchose selfless | .46 | .37 | .80\nNote to self: I need to check if I have the selfish vs. selfless options coded correctly. I believe error == 0 = selfish." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs
site/en/tutorials/keras/text_classification.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n#@title MIT License\n#\n# Copyright (c) 2017 François Chollet\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the \"Software\"),\n# to deal in the Software without restriction, including without limitation\n# the rights to use, copy, modify, merge, publish, distribute, sublicense,\n# and/or sell copies of the Software, and to permit persons to whom the\n# Software is furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL\n# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER\n# DEALINGS IN THE SOFTWARE.", "Basic text classification\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/keras/text_classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/keras/text_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial demonstrates text classification starting from plain text files stored on disk. You'll train a binary classifier to perform sentiment analysis on an IMDB dataset. 
At the end of the notebook, there is an exercise for you to try, in which you'll train a multi-class classifier to predict the tag for a programming question on Stack Overflow.", "import matplotlib.pyplot as plt\nimport os\nimport re\nimport shutil\nimport string\nimport tensorflow as tf\n\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import losses\n\n\nprint(tf.__version__)", "Sentiment analysis\nThis notebook trains a sentiment analysis model to classify movie reviews as positive or negative, based on the text of the review. This is an example of binary—or two-class—classification, an important and widely applicable kind of machine learning problem.\nYou'll use the Large Movie Review Dataset that contains the text of 50,000 movie reviews from the Internet Movie Database. These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are balanced, meaning they contain an equal number of positive and negative reviews.\nDownload and explore the IMDB dataset\nLet's download and extract the dataset, then explore the directory structure.", "url = \"https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz\"\n\ndataset = tf.keras.utils.get_file(\"aclImdb_v1\", url,\n untar=True, cache_dir='.',\n cache_subdir='')\n\ndataset_dir = os.path.join(os.path.dirname(dataset), 'aclImdb')\n\nos.listdir(dataset_dir)\n\ntrain_dir = os.path.join(dataset_dir, 'train')\nos.listdir(train_dir)", "The aclImdb/train/pos and aclImdb/train/neg directories contain many text files, each of which is a single movie review. Let's take a look at one of them.", "sample_file = os.path.join(train_dir, 'pos/1181_9.txt')\nwith open(sample_file) as f:\n print(f.read())", "Load the dataset\nNext, you will load the data off disk and prepare it into a format suitable for training. 
To do so, you will use the helpful text_dataset_from_directory utility, which expects a directory structure as follows.\nmain_directory/\n...class_a/\n......a_text_1.txt\n......a_text_2.txt\n...class_b/\n......b_text_1.txt\n......b_text_2.txt\nTo prepare a dataset for binary classification, you will need two folders on disk, corresponding to class_a and class_b. These will be the positive and negative movie reviews, which can be found in aclImdb/train/pos and aclImdb/train/neg. As the IMDB dataset contains additional folders, you will remove them before using this utility.", "remove_dir = os.path.join(train_dir, 'unsup')\nshutil.rmtree(remove_dir)", "Next, you will use the text_dataset_from_directory utility to create a labeled tf.data.Dataset. tf.data is a powerful collection of tools for working with data. \nWhen running a machine learning experiment, it is a best practice to divide your dataset into three splits: train, validation, and test. \nThe IMDB dataset has already been divided into train and test, but it lacks a validation set. Let's create a validation set using an 80:20 split of the training data by using the validation_split argument below.", "batch_size = 32\nseed = 42\n\nraw_train_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/train', \n batch_size=batch_size, \n validation_split=0.2, \n subset='training', \n seed=seed)", "As you can see above, there are 25,000 examples in the training folder, of which you will use 80% (or 20,000) for training. As you will see in a moment, you can train a model by passing a dataset directly to model.fit. If you're new to tf.data, you can also iterate over the dataset and print out a few examples as follows.", "for text_batch, label_batch in raw_train_ds.take(1):\n for i in range(3):\n print(\"Review\", text_batch.numpy()[i])\n print(\"Label\", label_batch.numpy()[i])", "Notice the reviews contain raw text (with punctuation and occasional HTML tags like &lt;br/&gt;). 
You will see how to handle these in the following section. \nThe labels are 0 or 1. To see which of these correspond to positive and negative movie reviews, you can check the class_names property on the dataset.", "print(\"Label 0 corresponds to\", raw_train_ds.class_names[0])\nprint(\"Label 1 corresponds to\", raw_train_ds.class_names[1])", "Next, you will create a validation and test dataset. You will use the remaining 5,000 reviews from the training set for validation.\nNote: When using the validation_split and subset arguments, make sure to either specify a random seed, or to pass shuffle=False, so that the validation and training splits have no overlap.", "raw_val_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/train', \n batch_size=batch_size, \n validation_split=0.2, \n subset='validation', \n seed=seed)\n\nraw_test_ds = tf.keras.utils.text_dataset_from_directory(\n 'aclImdb/test', \n batch_size=batch_size)", "Prepare the dataset for training\nNext, you will standardize, tokenize, and vectorize the data using the helpful tf.keras.layers.TextVectorization layer. \nStandardization refers to preprocessing the text, typically to remove punctuation or HTML elements to simplify the dataset. Tokenization refers to splitting strings into tokens (for example, splitting a sentence into individual words, by splitting on whitespace). Vectorization refers to converting tokens into numbers so they can be fed into a neural network. All of these tasks can be accomplished with this layer.\nAs you saw above, the reviews contain various HTML tags like &lt;br /&gt;. These tags will not be removed by the default standardizer in the TextVectorization layer (which converts text to lowercase and strips punctuation by default, but doesn't strip HTML). You will write a custom standardization function to remove the HTML.\nNote: To prevent training-testing skew (also known as training-serving skew), it is important to preprocess the data identically at train and test time. 
To facilitate this, the TextVectorization layer can be included directly inside your model, as shown later in this tutorial.", "def custom_standardization(input_data):\n lowercase = tf.strings.lower(input_data)\n stripped_html = tf.strings.regex_replace(lowercase, '<br />', ' ')\n return tf.strings.regex_replace(stripped_html,\n '[%s]' % re.escape(string.punctuation),\n '')", "Next, you will create a TextVectorization layer. You will use this layer to standardize, tokenize, and vectorize our data. You set the output_mode to int to create unique integer indices for each token.\nNote that you're using the default split function, and the custom standardization function you defined above. You'll also define some constants for the model, like an explicit maximum sequence_length, which will cause the layer to pad or truncate sequences to exactly sequence_length values.", "max_features = 10000\nsequence_length = 250\n\nvectorize_layer = layers.TextVectorization(\n standardize=custom_standardization,\n max_tokens=max_features,\n output_mode='int',\n output_sequence_length=sequence_length)", "Next, you will call adapt to fit the state of the preprocessing layer to the dataset. 
This will cause the model to build an index of strings to integers.\nNote: It's important to only use your training data when calling adapt (using the test set would leak information).", "# Make a text-only dataset (without labels), then call adapt\ntrain_text = raw_train_ds.map(lambda x, y: x)\nvectorize_layer.adapt(train_text)", "Let's create a function to see the result of using this layer to preprocess some data.", "def vectorize_text(text, label):\n text = tf.expand_dims(text, -1)\n return vectorize_layer(text), label\n\n# retrieve a batch (of 32 reviews and labels) from the dataset\ntext_batch, label_batch = next(iter(raw_train_ds))\nfirst_review, first_label = text_batch[0], label_batch[0]\nprint(\"Review\", first_review)\nprint(\"Label\", raw_train_ds.class_names[first_label])\nprint(\"Vectorized review\", vectorize_text(first_review, first_label))", "As you can see above, each token has been replaced by an integer. You can lookup the token (string) that each integer corresponds to by calling .get_vocabulary() on the layer.", "print(\"1287 ---> \",vectorize_layer.get_vocabulary()[1287])\nprint(\" 313 ---> \",vectorize_layer.get_vocabulary()[313])\nprint('Vocabulary size: {}'.format(len(vectorize_layer.get_vocabulary())))", "You are nearly ready to train your model. As a final preprocessing step, you will apply the TextVectorization layer you created earlier to the train, validation, and test dataset.", "train_ds = raw_train_ds.map(vectorize_text)\nval_ds = raw_val_ds.map(vectorize_text)\ntest_ds = raw_test_ds.map(vectorize_text)", "Configure the dataset for performance\nThese are two important methods you should use when loading data to make sure that I/O does not become blocking.\n.cache() keeps data in memory after it's loaded off disk. This will ensure the dataset does not become a bottleneck while training your model. 
If your dataset is too large to fit into memory, you can also use this method to create a performant on-disk cache, which is more efficient to read than many small files.\n.prefetch() overlaps data preprocessing and model execution while training. \nYou can learn more about both methods, as well as how to cache data to disk in the data performance guide.", "AUTOTUNE = tf.data.AUTOTUNE\n\ntrain_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)\nval_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)\ntest_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)", "Create the model\nIt's time to create your neural network:", "embedding_dim = 16\n\nmodel = tf.keras.Sequential([\n layers.Embedding(max_features + 1, embedding_dim),\n layers.Dropout(0.2),\n layers.GlobalAveragePooling1D(),\n layers.Dropout(0.2),\n layers.Dense(1)])\n\nmodel.summary()", "The layers are stacked sequentially to build the classifier:\n\nThe first layer is an Embedding layer. This layer takes the integer-encoded reviews and looks up an embedding vector for each word-index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding). To learn more about embeddings, check out the Word embeddings tutorial.\nNext, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.\nThis fixed-length output vector is piped through a fully-connected (Dense) layer with 16 hidden units. \nThe last layer is densely connected with a single output node.\n\nLoss function and optimizer\nA model needs a loss function and an optimizer for training. 
Since this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), you'll use losses.BinaryCrossentropy loss function.\nNow, configure the model to use an optimizer and a loss function:", "model.compile(loss=losses.BinaryCrossentropy(from_logits=True),\n optimizer='adam',\n metrics=tf.metrics.BinaryAccuracy(threshold=0.0))", "Train the model\nYou will train the model by passing the dataset object to the fit method.", "epochs = 10\nhistory = model.fit(\n train_ds,\n validation_data=val_ds,\n epochs=epochs)", "Evaluate the model\nLet's see how the model performs. Two values will be returned. Loss (a number which represents our error, lower values are better), and accuracy.", "loss, accuracy = model.evaluate(test_ds)\n\nprint(\"Loss: \", loss)\nprint(\"Accuracy: \", accuracy)", "This fairly naive approach achieves an accuracy of about 86%.\nCreate a plot of accuracy and loss over time\nmodel.fit() returns a History object that contains a dictionary with everything that happened during training:", "history_dict = history.history\nhistory_dict.keys()", "There are four entries: one for each monitored metric during training and validation. 
You can use these to plot the training and validation loss for comparison, as well as the training and validation accuracy:", "acc = history_dict['binary_accuracy']\nval_acc = history_dict['val_binary_accuracy']\nloss = history_dict['loss']\nval_loss = history_dict['val_loss']\n\nepochs = range(1, len(acc) + 1)\n\n# \"bo\" is for \"blue dot\"\nplt.plot(epochs, loss, 'bo', label='Training loss')\n# b is for \"solid blue line\"\nplt.plot(epochs, val_loss, 'b', label='Validation loss')\nplt.title('Training and validation loss')\nplt.xlabel('Epochs')\nplt.ylabel('Loss')\nplt.legend()\n\nplt.show()\n\nplt.plot(epochs, acc, 'bo', label='Training acc')\nplt.plot(epochs, val_acc, 'b', label='Validation acc')\nplt.title('Training and validation accuracy')\nplt.xlabel('Epochs')\nplt.ylabel('Accuracy')\nplt.legend(loc='lower right')\n\nplt.show()", "In this plot, the dots represent the training loss and accuracy, and the solid lines are the validation loss and accuracy.\nNotice the training loss decreases with each epoch and the training accuracy increases with each epoch. This is expected when using a gradient descent optimization—it should minimize the desired quantity on every iteration.\nThis isn't the case for the validation loss and accuracy—they seem to peak before the training accuracy. This is an example of overfitting: the model performs better on the training data than it does on data it has never seen before. After this point, the model over-optimizes and learns representations specific to the training data that do not generalize to test data.\nFor this particular case, you could prevent overfitting by simply stopping the training when the validation accuracy is no longer increasing. One way to do so is to use the tf.keras.callbacks.EarlyStopping callback.\nExport the model\nIn the code above, you applied the TextVectorization layer to the dataset before feeding text to the model. 
If you want to make your model capable of processing raw strings (for example, to simplify deploying it), you can include the TextVectorization layer inside your model. To do so, you can create a new model using the weights you just trained.", "export_model = tf.keras.Sequential([\n vectorize_layer,\n model,\n layers.Activation('sigmoid')\n])\n\nexport_model.compile(\n loss=losses.BinaryCrossentropy(from_logits=False), optimizer=\"adam\", metrics=['accuracy']\n)\n\n# Test it with `raw_test_ds`, which yields raw strings\nloss, accuracy = export_model.evaluate(raw_test_ds)\nprint(accuracy)", "Inference on new data\nTo get predictions for new examples, you can simply call model.predict().", "examples = [\n \"The movie was great!\",\n \"The movie was okay.\",\n \"The movie was terrible...\"\n]\n\nexport_model.predict(examples)", "Including the text preprocessing logic inside your model enables you to export a model for production that simplifies deployment, and reduces the potential for train/test skew.\nThere is a performance difference to keep in mind when choosing where to apply your TextVectorization layer. Using it outside of your model enables you to do asynchronous CPU processing and buffering of your data when training on GPU. So, if you're training your model on the GPU, you probably want to go with this option to get the best performance while developing your model, then switch to including the TextVectorization layer inside your model when you're ready to prepare for deployment.\nVisit this tutorial to learn more about saving models.\nExercise: multi-class classification on Stack Overflow questions\nThis tutorial showed how to train a binary classifier from scratch on the IMDB dataset. 
As an exercise, you can modify this notebook to train a multi-class classifier to predict the tag of a programming question on Stack Overflow.\nA dataset has been prepared for you to use containing the body of several thousand programming questions (for example, \"How can I sort a dictionary by value in Python?\") posted to Stack Overflow. Each of these is labeled with exactly one tag (either Python, CSharp, JavaScript, or Java). Your task is to take a question as input, and predict the appropriate tag, in this case, Python. \nThe dataset you will work with contains several thousand questions extracted from the much larger public Stack Overflow dataset on BigQuery, which contains more than 17 million posts.\nAfter downloading the dataset, you will find it has a similar directory structure to the IMDB dataset you worked with previously:\ntrain/\n...python/\n......0.txt\n......1.txt\n...javascript/\n......0.txt\n......1.txt\n...csharp/\n......0.txt\n......1.txt\n...java/\n......0.txt\n......1.txt\nNote: To increase the difficulty of the classification problem, occurrences of the words Python, CSharp, JavaScript, or Java in the programming questions have been replaced with the word blank (as many questions contain the language they're about).\nTo complete this exercise, you should modify this notebook to work with the Stack Overflow dataset by making the following modifications:\n\n\nAt the top of your notebook, update the code that downloads the IMDB dataset with code to download the Stack Overflow dataset that has already been prepared. As the Stack Overflow dataset has a similar directory structure, you will not need to make many modifications.\n\n\nModify the last layer of your model to Dense(4), as there are now four output classes.\n\n\nWhen compiling the model, change the loss to tf.keras.losses.SparseCategoricalCrossentropy. 
This is the correct loss function to use for a multi-class classification problem, when the labels for each class are integers (in this case, they can be 0, 1, 2, or 3). In addition, change the metrics to metrics=['accuracy'], since this is a multi-class classification problem (tf.metrics.BinaryAccuracy is only used for binary classifiers).\n\n\nWhen plotting accuracy over time, change binary_accuracy and val_binary_accuracy to accuracy and val_accuracy, respectively.\n\n\nOnce these changes are complete, you will be able to train a multi-class classifier. \n\n\nLearning more\nThis tutorial introduced text classification from scratch. To learn more about the text classification workflow in general, check out the Text classification guide from Google Developers." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
adamsteer/nci-notebooks
training/py3/Satellite_Imaging.ipynb
apache-2.0
[ "<img src=\"http://nci.org.au/wp-content/themes/nci/img/img-logo-large.png\" width=400>\n\nIn this notebook:\n\n\nHow to use the Python GDAL extensions to open NetCDF files.\nHow to reproject datasets between different coordinate systems.\nHow to combine and merge data from different sources.\n\n\n\n<a href='#part0'>Required libraries</a>\n<a href='#part1'>Himawari Data</a>\n<a href='#part2'>Intro to the GDAL library</a>\n<a href='#part3'>Digital Elevation model</a>\n<a href='#part4'>Data Fusion</a>\n<a href='#part5'>Practice these concepts</a>\n\nCanberra, the mountains or the south coast? Where's the best weather?\nMy aunt lives in Narooma, my brother is in Thredbo and I live in Canberra. Last September we exchanged lots of messages complaining about the weather in our respective places... But where was the weather the worst? Let's see if we can find out...\n<a id='part0'></a>\n0.- To run this notebook on the VDI you'll need to first run:\n```\n$ module purge\n$ module load proj/4.8.0 python/3.5.2 python/3.5.2-matplotlib numpy/1.11.1-py3.5 gdal-python/1.11.1-py3.5 ipython/4.2.0-py3.5\n$ jupyter notebook\n```\n<a id='part1'></a>\n1.- Intro to Himawari8\nHimawari 8 is a geostationary satellite that covers the Asia-Pacific region and takes a new picture every 10 minutes.\nThe Bureau of Meteorology uses this information operationally for their weather forecasts and the historical data is stored at the NCI at: /g/data2/rr5/satellite/obs/himawari8/FLDK/\n1.1.- An index for Himawari8 files", "import datetime\nimport os.path\n\ndef himawari8_path(UTC_datetime, band):\n    base = \"/g/data2/rr5/satellite/obs/himawari8/FLDK/{year:d}/{month:02d}/{day:02d}/{hour:02d}{minute:02d}/\"\n    file_name = \"{year:d}{month:02d}{day:02d}{hour:02d}{minute:02d}00-P1S-ABOM_BRF_B{band:02d}-PRJ_GEOS141_2000-HIMAWARI8-AHI.nc\"\n\n    st = datetime.datetime.strptime(UTC_datetime, \"%Y%m%d%H%M\")\n\n    path = (base.format(year=st.year, month=st.month, day=st.day, 
hour=st.hour, minute=st.minute) +\n            file_name.format(year=st.year, month=st.month, day=st.day, hour=st.hour, minute=st.minute, band=band))\n\n    if not os.path.isfile(path):\n        return None\n    \n    return path\n\n# For example, to get the path to band 1 taken on the 25th Dec 2015 at 00:00 UTC\nhimawari8_path(\"201512250000\", 1)", "1.2.- Plot raw data from a Himawari8 file", "%matplotlib inline\n\nfrom osgeo import gdal\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nds = gdal.Open(himawari8_path(\"201512250000\", 1))\nband1 = ds.GetRasterBand(1)\nraster = band1.ReadAsArray()\nno_data = band1.GetNoDataValue()\n    \nmasked = np.ma.array(raster, mask=(raster==no_data))\nplt.imshow(masked)\n", "1.3.- Create an RGB composite using three bands", "def him_rgb(UTC_datetime, rgb_bands, clip):\n    red_path = himawari8_path(UTC_datetime, rgb_bands[0])\n    ds_r = gdal.Open(red_path)\n    red = ds_r.GetRasterBand(1).ReadAsArray()\n    red = (red.clip(0, clip) / clip * 255).astype(np.uint8)\n\n    green_path = himawari8_path(UTC_datetime, rgb_bands[1])\n    ds_g = gdal.Open(green_path)\n    green = ds_g.GetRasterBand(1).ReadAsArray()\n    green = (green.clip(0, clip) / clip * 255).astype(np.uint8)\n\n    blue_path = himawari8_path(UTC_datetime, rgb_bands[2])\n    ds_b = gdal.Open(blue_path)\n    blue = ds_b.GetRasterBand(1).ReadAsArray()\n    blue = (blue.clip(0, clip) / clip * 255).astype(np.uint8)\n\n    return np.stack((red, green, blue), axis=2)\n\nrgb = him_rgb(\"201512250000\", [3, 2, 1], .65)\nprint(rgb.shape)\nplt.imshow(rgb)", "1.4.- Plotting a portion of the image", "plt.imshow(rgb[3200:5000, 1000:3500, :])", "<a id='part2'></a>\n2.- Intro to GDAL: Different datasets cover different areas with different projections and resolutions...\nHow can we combine data coming from different sources?\n2.1.- Defining a region in GeoJSON\ngeojson.io\nThis website allows us to draw polygons and get their GeoJSON representation. 
For example:\n'{\"type\":\"Polygon\",\"coordinates\":[[[148.5,-36],[148.5,-35],[150.2,-35],[150.2,-36],[148.5,-36]]]}'\nThis represents a box covering the Canberra region.\n\n2.2.- Defining a canvas raster in GDAL: Extent, Size and Projection", "import json\n\nwgs84_wkt = 'GEOGCS[\"WGS 84\",DATUM[\"WGS_1984\",SPHEROID[\"WGS 84\",6378137,298.257223563,AUTHORITY[\"EPSG\",\"7030\"]],AUTHORITY[\"EPSG\",\"6326\"]],PRIMEM[\"Greenwich\",0,AUTHORITY[\"EPSG\",\"8901\"]],UNIT[\"degree\",0.01745329251994328,AUTHORITY[\"EPSG\",\"9122\"]],AUTHORITY[\"EPSG\",\"4326\"]]'\n\ndef get_raster(geojson_poly, pix_size):\n    coords = json.loads(geojson_poly)[\"coordinates\"]\n    \n    min_lat = min(coords[0][1][1], coords[0][0][1])\n    max_lat = max(coords[0][1][1], coords[0][0][1])\n    \n    min_lon = min(coords[0][2][0], coords[0][0][0])\n    max_lon = max(coords[0][2][0], coords[0][0][0])\n    \n    geotrans = [min_lon, pix_size, 0.0, max_lat, 0.0, -1.0*pix_size]\n    size_x = int((max_lon - min_lon) / pix_size)\n    size_y = int((max_lat - min_lat) / pix_size)\n\n    mem_drv = gdal.GetDriverByName('MEM')\n    ds = mem_drv.Create('', size_x, size_y, 1, gdal.GDT_Float32)\n    ds.SetProjection(wgs84_wkt)\n    ds.SetGeoTransform(geotrans)\n\n    return ds", "2.3.- Reprojecting between reference systems.", "def reproject(src_file, dst_ds):\n    src_ds = gdal.Open(src_file)\n    gdal.ReprojectImage(src_ds, dst_ds, src_ds.GetProjection(), dst_ds.GetProjection(), gdal.GRA_NearestNeighbour)\n\n    return dst_ds.GetRasterBand(1).ReadAsArray()", "2.4.- Displaying Himawari 8 for a GeoJSON-defined area.", "canberra_gjson = '{\"type\":\"Polygon\",\"coordinates\":[[[148.5,-36],[148.5,-35],[150.2,-35],[150.2,-36],[148.5,-36]]]}'\nhim_dest = get_raster(canberra_gjson, .01)\nhim_proj = reproject(himawari8_path(\"201512250000\", 1), him_dest)\n\nplt.imshow(him_proj, cmap=\"gray\")", "2.5.- A most probably too simplistic way of getting cloud coverage...", "clouds_proj = him_proj > .30\nplt.imshow(clouds_proj, cmap=\"gray\")", "2.6.- What is the mean cloud 
coverage for that image?", "clouds_proj.mean()", "<a id='part3'></a>\n3.- Intro to Digital Elevation Model (DEM) data\nDEM models produce a regular grid in which each grid point represents a point on a surface and contains the altitude of that location.\nGeoscience Australia publishes DEM data for Australia with a precision of one arc second (about 30 meters). This data is stored at the NCI at: /g/data1/rr1/Elevation/\n3.1.- Using the above functions to subset these data into the same defined region, resolution and reference system.", "dem_file = \"/g/data1/rr1/Elevation/NetCDF/1secSRTM_DEMs_v1.0/DEM-S/Elevation_1secSRTM_DEMs_v1.0_DEM-S_Mosaic_dems1sv1_0.nc\"\n\nelv_dest = get_raster(canberra_gjson, .01)\nelv_proj = reproject(dem_file, elv_dest)\n\nplt.imshow(elv_proj)", "3.2.- An altitude profile of our selected region:", "plt.hist(elv_proj.ravel(), bins=64, range=(0.0, 2000))\nplt.xlabel('Height')\nplt.ylabel('Number of Pixels')\nplt.show()", "3.3.- Defining our regions of interest:", "elv_canberra = elv_proj[10:50, 70:120]\nelv_mountains = elv_proj[50:, :60]\nelv_coast = elv_proj[50:, 130:]\n\nprint(\"The average height in Canberra is {0:.2f} m.\".format(elv_canberra.mean()))\nprint(\"The average height in the mountains is {0:.2f} m.\".format(elv_mountains.mean()))\nprint(\"The average height on the coast is {0:.2f} m.\".format(elv_coast.mean()))", "<a id='part4'></a>\n4.- Back to our original question: Where is the best weather?\n4.1.- Packing cloud coverage in a function", "def give_me_cloud_coverage(utc_datetime, raster_dest):\n\n    him_proj = reproject(himawari8_path(utc_datetime, 1), raster_dest)\n    clouds_proj = him_proj > .30\n    clouds_canberra = clouds_proj[10:50, 70:120]\n    clouds_mountains = clouds_proj[50:, :60]\n    clouds_coast = clouds_proj[50:, 130:]\n\n    return int(clouds_canberra.mean()*100), int(clouds_mountains.mean()*100), int(clouds_coast.mean()*100)\n\ncbr, mtn, cst = give_me_cloud_coverage(\"201609010000\", him_dest)\n\nprint(\"Cloud cover 1st Sept 
2016. Canberra: {0:d}% Mountains: {1:d}% Coast: {2:d}%\".format(cbr, mtn, cst))", "4.2.- Collecting cloud data over a period", "def clouds_in_a_period(UTC_start, UTC_end, raster_dest):\n    start = datetime.datetime.strptime(UTC_start, \"%Y%m%d%H%M\")\n    end = datetime.datetime.strptime(UTC_end, \"%Y%m%d%H%M\")\n    \n    cbr_list = []\n    mtn_list = []\n    cst_list = []\n    while start <= end:\n        cbr, mtn, cst = give_me_cloud_coverage(start.strftime(\"%Y%m%d%H%M\"), raster_dest)\n        cbr_list.append(cbr)\n        mtn_list.append(mtn)\n        cst_list.append(cst)\n        start = start + datetime.timedelta(hours=24)\n\n    return cbr_list, mtn_list, cst_list\n\ncbr_month, mtn_month, cst_month = clouds_in_a_period(\"201609010000\", \"201609300000\", him_dest)\ndays = np.arange(1, 31, 1)\nplt.plot(days, cbr_month, 'ro-', days, mtn_month, 'g^-', days, cst_month, 'bs-')\nplt.xlabel('Day of the month')\nplt.ylabel('Cloud Cover')\nlgnd = ['Canberra', 'Mountains', 'Coast']\nplt.legend(lgnd)\nplt.show()", "4.3.- The verdict", "print(\"Canberra had an average {0:d}% of clouds during September 2016.\".format(int(np.array(cbr_month).mean())))\nprint(\"The mountains had an average {0:d}% of clouds during September 2016.\".format(int(np.array(mtn_month).mean())))\nprint(\"The coast had an average {0:d}% of clouds during September 2016.\".format(int(np.array(cst_month).mean())))", "It looks like the coast was the place to be.\n<a id='part5'></a>\n5.- Have some extra time?\nTry to solve this other question: Last September was very rainy with floods over different parts of NSW. By defining two polygons, one for north NSW and another for south NSW, can you determine which one had more clouds for that month?" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ES-DOC/esdoc-jupyterhub
notebooks/nuist/cmip6/models/sandbox-1/toplevel.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: NUIST\nSource ID: SANDBOX-1\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:34\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'nuist', 'sandbox-1', 'toplevel')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Flux Correction\n3. Key Properties --&gt; Genealogy\n4. Key Properties --&gt; Software Properties\n5. Key Properties --&gt; Coupling\n6. Key Properties --&gt; Tuning Applied\n7. Key Properties --&gt; Conservation --&gt; Heat\n8. Key Properties --&gt; Conservation --&gt; Fresh Water\n9. Key Properties --&gt; Conservation --&gt; Salt\n10. Key Properties --&gt; Conservation --&gt; Momentum\n11. Radiative Forcings\n12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\n13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\n14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\n15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\n16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\n17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\n18. Radiative Forcings --&gt; Aerosols --&gt; SO4\n19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\n20. 
Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\n21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\n22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\n23. Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\n24. Radiative Forcings --&gt; Aerosols --&gt; Dust\n25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\n26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\n27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\n28. Radiative Forcings --&gt; Other --&gt; Land Use\n29. Radiative Forcings --&gt; Other --&gt; Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTop level overview of coupled model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of coupled model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. Key Properties --&gt; Genealogy\nGenealogy and history of the model\n3.1. 
Year Released\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nYear the model was released", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.2. CMIP3 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP3 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.3. CMIP5 Parent\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCMIP5 parent if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3.4. Previous Name\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nPreviously known as", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.4. Components Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.5. Coupler\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nOverarching coupling framework for model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5. Key Properties --&gt; Coupling\n**\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of coupling in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5.2. 
Atmosphere Double Flux\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nWhere are the air-sea fluxes calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "5.4. Atmosphere Relative Winds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "6. Key Properties --&gt; Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. 
In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Global Mean Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.3. Regional Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.4. Trend Metrics Used\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.5. Energy Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.6. Fresh Water Balance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how fresh water balance was obtained in the full system: in the various components independently or at the components coupling stage?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7. Key Properties --&gt; Conservation --&gt; Heat\nGlobal heat conservation properties of the model\n7.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.4. 
Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.6. Land Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8. Key Properties --&gt; Conservation --&gt; Fresh Water\nGlobal fresh water conservation properties of the model\n8.1. Global\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved globally", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Atmos Ocean Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/ocean coupling interface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Atmos Land Interface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.4. Atmos Sea-ice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.5. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.6. Runoff\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how runoff is distributed and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.7. 
Iceberg Calving\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.8. Endoreic Basins\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.9. Snow Accumulation\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Key Properties --&gt; Conservation --&gt; Salt\nGlobal salt conservation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Key Properties --&gt; Conservation --&gt; Momentum\nGlobal momentum conservation properties of the model\n10.1. 
Details\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how momentum is conserved in the model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12. Radiative Forcings --&gt; Greenhouse Gases --&gt; CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Radiative Forcings --&gt; Greenhouse Gases --&gt; CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "13.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14. Radiative Forcings --&gt; Greenhouse Gases --&gt; N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15. Radiative Forcings --&gt; Greenhouse Gases --&gt; Tropospheric O3\nTropospheric ozone forcing\n15.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Radiative Forcings --&gt; Greenhouse Gases --&gt; Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17. Radiative Forcings --&gt; Greenhouse Gases --&gt; CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.2. Equivalence Concentration\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDetails of any equivalence concentrations used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Radiative Forcings --&gt; Aerosols --&gt; SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.2. 
Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19. Radiative Forcings --&gt; Aerosols --&gt; Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Radiative Forcings --&gt; Aerosols --&gt; Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. 
via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "20.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Radiative Forcings --&gt; Aerosols --&gt; Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "21.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Radiative Forcings --&gt; Aerosols --&gt; Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "22.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. 
Radiative Forcings --&gt; Aerosols --&gt; Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.3. RFaci From Sulfate Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "23.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "24. Radiative Forcings --&gt; Aerosols --&gt; Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Radiative Forcings --&gt; Aerosols --&gt; Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "25.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Radiative Forcings --&gt; Aerosols --&gt; Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "26.4. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Radiative Forcings --&gt; Aerosols --&gt; Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "27.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Radiative Forcings --&gt; Other --&gt; Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "28.2. Crop Change Only\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nLand use change represented via crop change only?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.3. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Radiative Forcings --&gt; Other --&gt; Solar\nSolar forcing\n29.1. 
Provision\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nHow solar forcing is provided", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "29.2. Additional Information\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GregDMeyer/dynamite
examples/0-Overview.ipynb
mit
[ "Overview of dynamite: implementing a long-range Ising model\nLet's implement a power law long-range ZZ interaction with open boundary conditions and some uniform field. Our Hamiltonian is\n$$H = \\sum_{i,j} \\frac{J}{\\left| i-j \\right| ^ \\alpha} \\sigma^z_i \\sigma^z_j + \\vec{h} \\cdot \\sum_i \\vec{\\sigma}_i$$\nwhere $J$ is the interaction strength, $\\alpha$ is the power-law decay with distance between sites, and the vector $\\vec{h}$ is the static, uniform field.\nFirst we import the things we will need:", "from dynamite import config\nfrom dynamite.operators import sigmax, sigmay, sigmaz, op_sum, index_sum", "Let's set the spin chain length to 8 globally for the purposes of this example. However, note that you aren't required to set the spin chain length before you start building your operator!", "config.L = 8", "Now we start building up our Hamiltonian. Here is a ZZ interaction between site 0 and site 2:", "sigmaz(0)*sigmaz(2)", "Let's take such an interaction and translate it along the spin chain. Note that the sum is to $i=5$ such that the operator has support on all spins of our length 8 chain (which is indexed from 0).", "index_sum(sigmaz(0)*sigmaz(2))", "Sometimes it's more informative to have a term-by-term look at the operator:", "oper = index_sum(sigmaz(0)*sigmaz(2))\nprint(oper.table())", "Looks good! Let's create our power law. Here we are using op_sum, which takes the sum of the operators in the iterable passed to it. In our case, we will use a python generator as the argument.", "alpha = 1.15\nlong_range_zz = op_sum(1/(d**alpha) * index_sum(sigmaz(0)*sigmaz(d)) for d in range(1,8))", "Now what does the interaction look like?", "long_range_zz", "Nice! 
now that we have our long-range power law interaction, we just need the static, uniform field.", "# the x, y, z components of the field\nh = [0.5, 0.2, 0.1]\nsigma = [sigmax, sigmay, sigmaz]\n\nstatic_field = op_sum(hi*index_sum(sigmai()) for hi,sigmai in zip(h,sigma))\nstatic_field", "Then our Hamiltonian is just the sum of these two:", "H = long_range_zz + static_field", "With that, we can do whatever computations we want! For example, solving for the ground state energy:", "energies = H.eigsolve()\nprint(energies[0])", "or evolve a product state for some time:", "from dynamite.states import State\n\n# specify the initial state as a product state with one domain wall\ninitial_state = State(state='UUUUDDDD')\n\nresult = H.evolve(initial_state, t=5.0)\n\n# compute overlap with initial state\noverlap = abs(initial_state.dot(result))\nprint('overlap:', overlap)", "or do imaginary time evolution from a random state to find a thermal state of some $\\beta$:\n$$\\left| \\psi_\\beta \\right> = e^{-\\beta H} \\left| \\psi_r \\right> = e^{-i (-i t_\\beta) H} \\left| \\psi_r \\right> $$\nwhere $\\left| \\psi_r \\right>$ is a random state.", "beta = 1.5\nimag_time = -1j*beta\n\nrandom_state = State(state='random')\nthermal_state = H.evolve(random_state, t=imag_time)\nthermal_state.normalize()\n\n# print the expectation values of the energy for the two states\nrandom_state_energy = random_state.dot(H*random_state).real\nthermal_state_energy = thermal_state.dot(H*thermal_state).real\n\nprint('E random:', random_state_energy)\nprint('E thermal:', thermal_state_energy)", "As expected, the \"cold\" thermal state has lower energy." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
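The dynamite notebook above builds the long-range Ising Hamiltonian as operator sums and checks that the thermal state has lower energy than a random state. As an independent sanity check (a sketch that does not use dynamite; the dense Kronecker-product construction and the small chain length L=4 are my additions, not part of the notebook), the same Hamiltonian can be assembled with NumPy and diagonalized directly:

```python
import numpy as np
from functools import reduce

# Pauli matrices and the 2x2 identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator `op` at site i of an L-spin chain."""
    ops = [I2] * L
    ops[i] = op
    return reduce(np.kron, ops)

L, alpha = 4, 1.15       # small chain so the dense matrix stays tiny
h = [0.5, 0.2, 0.1]      # uniform field components, as in the notebook

H = np.zeros((2**L, 2**L), dtype=complex)
# long-range ZZ with 1/|i-j|^alpha coupling over all pairs i < j
for i in range(L):
    for j in range(i + 1, L):
        H += site_op(sz, i, L) @ site_op(sz, j, L) / (j - i)**alpha
# uniform field term
for i in range(L):
    for hi, s in zip(h, (sx, sy, sz)):
        H += hi * site_op(s, i, L)

E0 = np.linalg.eigvalsh(H)[0]  # ground-state energy
print(E0)
```

For real problem sizes dynamite's sparse and Krylov machinery is the point; the dense matrix here is only a cross-check at small L.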
andrewzwicky/puzzles
dailyprogrammer/submissions/2017-09-04.ipynb
mit
[ "Description\nIn this challenge, you will be given a set of circles, defined by their centers and radii. Your goal is to find the bounding rectangle which will contain all of the circles completely.\nWrite a program that determines the vertices of the bounding rectangle with sides parallel to the axes.\nInput Description\nEach line will contain a comma separated center and radius for a circle.\nOutput Description\nThe format of the output will be comma separated coordinates, rounded to 3 decimal places.\nChallenge Input\n1,1,2\n2,2,0.5\n-1,-3,2\n5,2,1\n\nChallenge Output\n(-3, -5), (-3, 3), (6, 3), (6, -5)\n\nBonus\nFor the bonus, we will rotate the axis for the bounding rectangle. The first line of input will now be a vector determining the direction of one edge of the bounding rectangle.\nBonus Input\n1,1\n1,1,2\n2,2,0.5\n-1,-3,2\n5,2,1\n\nBonus Output\n(-4.828, -2.0), (2.793, 5.621), (6.621, 1.793), (-1.0, -5.828)\n\nCredit\nThis challenge was suggested by user /u/Preferencesoft, many thanks! 
If you have an idea for a challenge please share it on /r/dailyprogrammer_ideas and there's a good chance we'll use it.", "from matplotlib import pyplot as plt\nfrom matplotlib import patches as patches\nfrom matplotlib import ticker as ticker\nfrom math import atan, degrees, cos, sin\n%matplotlib inline", "Challenge", "circles = [(1,1,2),(2,2,0.5),(-1,-3,2),(5,2,1)]\n\nfig, ax = plt.subplots(figsize=(10,10))\n\nfor x,y,radius in circles:\n ax.add_artist(plt.Circle((x,y), radius, fill=False))\n\nax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\nax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\n\nax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)\n\nplt.xlim(-8,8)\nplt.ylim(-8,8)\nplt.show()\n\nmin_x = max_x = min_y = max_y = None\n\nfor x,y,radius in circles:\n if min_x is None or x - radius < min_x:\n min_x = x - radius\n \n if min_y is None or y - radius < min_y:\n min_y = y - radius\n\n if max_x is None or x + radius > max_x:\n max_x = x + radius\n \n if max_y is None or y + radius > max_y:\n max_y = y + radius\n \nrect_coords = [(min_x,min_y), (min_x,max_y),(max_x,max_y),(max_x,min_y)]\n\nfig, ax = plt.subplots(figsize=(10,10))\n\nfor x,y,radius in circles:\n ax.add_artist(plt.Circle((x,y), radius, fill=False))\n\nax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\nax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\n\nax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)\n\nplt.xlim(-8,8)\nplt.ylim(-8,8)\n\nax.add_patch(patches.Polygon(rect_coords, fill=False, color='r', linewidth=2))\n\nplt.show()\n\nprint(rect_coords)", "Bonus", "vector = (1,1)\n\ntheta = atan(vector[0]/vector[1])\n\ndef rotate_coords(x,y, theta):\n return x*cos(theta) - y*sin(theta), x*sin(theta) + y*cos(theta)\n\nmin_x = max_x = min_y = max_y = None\n\nfor xo,yo,radius in circles:\n x,y = rotate_coords(xo,yo,theta)\n\n if min_x is None or x - radius < min_x:\n min_x = x - radius\n \n if min_y is None or y - radius 
< min_y:\n min_y = y - radius\n\n if max_x is None or x + radius > max_x:\n max_x = x + radius\n \n if max_y is None or y + radius > max_y:\n max_y = y + radius\n \nrect_coords = [rotate_coords(min_x,min_y,-theta),\n rotate_coords(min_x,max_y,-theta),\n rotate_coords(max_x,max_y,-theta),\n rotate_coords(max_x,min_y,-theta)]\n\nfig, ax = plt.subplots(figsize=(10,10))\n\nfor x,y,radius in circles:\n ax.add_artist(plt.Circle((x,y), radius, fill=False))\n\nax.xaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\nax.yaxis.set_major_locator(ticker.MultipleLocator(base=1.0))\n\nax.grid(b=True, which='major', color='k', linestyle='--', alpha=0.3)\n\nplt.xlim(-8,8)\nplt.ylim(-8,8)\n\nax.add_patch(patches.Polygon(rect_coords, fill=False, color='r', linewidth=2))\n\nplt.show()\n\nprint(rect_coords)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
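The plotting aside, the axis-aligned part of this challenge reduces to tracking circle extents. A compact standalone sketch (the function name is mine) using min/max over generator expressions, checked against the challenge output given in the notebook:

```python
def bounding_rect(circles):
    """Axis-aligned bounding rectangle of (x, y, r) circles,
    returned as vertices in the challenge's order."""
    min_x = min(x - r for x, _, r in circles)
    max_x = max(x + r for x, _, r in circles)
    min_y = min(y - r for _, y, r in circles)
    max_y = max(y + r for _, y, r in circles)
    return [(min_x, min_y), (min_x, max_y), (max_x, max_y), (max_x, min_y)]

circles = [(1, 1, 2), (2, 2, 0.5), (-1, -3, 2), (5, 2, 1)]
print(bounding_rect(circles))  # [(-3, -5), (-3, 3), (6, 3), (6, -5)]
```

This matches the challenge output `(-3, -5), (-3, 3), (6, 3), (6, -5)` and avoids the `None`-sentinel bookkeeping of the loop version.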
JasonNK/udacity-dlnd
autoencoder/Convolutional_Autoencoder.ipynb
mit
[ "Convolutional Autoencoder\nSticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.", "%matplotlib inline\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)\n\nimg = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')", "Network Architecture\nThe encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.\n<img src='assets/convolutional_autoencoder.png' width=500px>\nHere our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.\nWhat's going on with the decoder\nOkay, so the decoder has these \"Upsample\" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see transposed convolution layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. 
For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, tf.nn.conv2d_transpose. \nHowever, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.\n\nExercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. 
Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.", "learning_rate = 0.001\n\nimage_size = mnist.train.images.shape[1]\n# Input and target placeholders\ninputs_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name=\"inputs\")\ntargets_ = tf.placeholder(dtype=tf.float32, shape=(None, 28, 28, 1), name=\"targets\")\n\n### Encoder \n'''\ntf.layers.conv2d(inputs,\n filters,\n kernel_size,\n strides=(1, 1), # stride of (1, 1) will not reduce size\n padding='valid',\n data_format='channels_last',\n dilation_rate=(1, 1),\n activation=None,\n use_bias=True,\n kernel_initializer=None,\n bias_initializer=tf.zeros_initializer(),\n kernel_regularizer=None,\n bias_regularizer=None,\n activity_regularizer=None,\n trainable=True,\n name=None,\n reuse=None\n) \n \nmax_pooling2d(\n inputs,\n pool_size,\n strides,\n padding='valid',\n data_format='channels_last',\n name=None\n) \n\n'''\nconv1 = tf.layers.conv2d(inputs_, 16, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 28x28x16\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 14x14x16\nconv2 = tf.layers.conv2d(maxpool1, 8, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 14x14x8\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 7x7x8\nconv3 = tf.layers.conv2d(maxpool2, 8, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 7x7x8\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 4x4x8\n\n### Decoder\nupsample1 = tf.image.resize_images(encoded, (7, 7))\n# Now 7x7x8\nconv4 = tf.layers.conv2d(upsample1, 8, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 7x7x8\nupsample2 = tf.image.resize_images(conv4, (14, 14))\n# Now 14x14x8\nconv5 = tf.layers.conv2d(upsample2, 8, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 
14x14x8\nupsample3 = tf.image.resize_images(conv5, (28, 28))\n# Now 28x28x8\nconv6 = tf.layers.conv2d(upsample3, 16, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 28x28x16\n\nlogits = tf.layers.conv2d(conv6, 1, (5, 5), padding=\"same\", activation=None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name='output')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Training\nAs before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.", "sess = tf.Session()\n\nepochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n imgs = batch[0].reshape((-1, 28, 28, 1))\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))\n\nfig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\n\nfig.tight_layout(pad=0.1)\n\nsess.close()", "Denoising\nAs I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. 
We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.\n\nSince this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.\n\nExercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.", "learning_rate = 0.001\ninputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')\n\n### Encoder\nconv1 = tf.layers.conv2d(inputs_, 32, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 28x28x32\nmaxpool1 = tf.layers.max_pooling2d(conv1, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 14x14x32\nconv2 = tf.layers.conv2d(maxpool1, 32, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 14x14x32\nmaxpool2 = tf.layers.max_pooling2d(conv2, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 7x7x32\nconv3 = tf.layers.conv2d(maxpool2, 16, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 7x7x16\nencoded = tf.layers.max_pooling2d(conv3, (2, 2), strides=(2, 2), padding=\"same\")\n# Now 4x4x16\n\n### Decoder\nupsample1 = tf.image.resize_images(encoded, (7, 7))\n# Now 7x7x16\nconv4 = tf.layers.conv2d(upsample1, 16, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 7x7x16\nupsample2 = tf.image.resize_images(conv4, (14, 14))\n# Now 14x14x16\nconv5 = tf.layers.conv2d(upsample2, 32, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 14x14x32\nupsample3 = tf.image.resize_images(conv5, (28, 28))\n# Now 28x28x32\nconv6 = tf.layers.conv2d(upsample3, 32, (5, 5), padding=\"same\", activation=tf.nn.relu)\n# Now 28x28x32\n\nlogits = tf.layers.conv2d(conv6, 1, (5, 5), padding=\"same\", activation=None)\n#Now 28x28x1\n\n# Pass logits through sigmoid to get reconstructed image\ndecoded = tf.nn.sigmoid(logits, name='output')\n\n# Pass logits through sigmoid and calculate the cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\n\n# Get cost and define the optimizer\ncost = tf.reduce_mean(loss)\nopt = tf.train.AdamOptimizer(learning_rate).minimize(cost)\n\nsess = tf.Session()\n\nepochs = 100\nbatch_size = 200\n# Sets how much noise we're adding to the MNIST images\nnoise_factor = 0.5\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n # Get images from the batch\n 
imgs = batch[0].reshape((-1, 28, 28, 1))\n \n # Add random noise to the input images\n noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)\n # Clip the images to be between 0 and 1\n noisy_imgs = np.clip(noisy_imgs, 0., 1.)\n \n # Noisy images as inputs, original images as targets\n batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,\n targets_: imgs})\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))", "Checking out the performance\nHere I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly great job of removing the noise, even though it's sometimes difficult to tell what the original number is.", "fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nnoisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)\nnoisy_imgs = np.clip(noisy_imgs, 0., 1.)\n\nreconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})\n\nfor images, row in zip([noisy_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
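The decoder in this notebook upsamples with nearest-neighbor resizing before each convolution. Stripped of TensorFlow, that resize step is just pixel repetition; a minimal NumPy sketch (the helper name is mine) of what `tf.image.resize_nearest_neighbor` does for integer scale factors:

```python
import numpy as np

def upsample_nn(img, factor):
    """Nearest-neighbor upsampling by an integer factor along both axes."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

x = np.array([[1, 2],
              [3, 4]])
print(upsample_nn(x, 2))
# [[1 1 2 2]
#  [1 1 2 2]
#  [3 3 4 4]
#  [3 3 4 4]]
```

Following each such upsample with a stride-1 convolution is what avoids the checkerboard artifacts discussed in the Distill article cited above.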
LocalSymmetry/DetectingQuasars
Data Collection.ipynb
gpl-3.0
[ "Data Collection for Quasar Detection\nGoal: Gather images of quasars, non-quasar celestial objects, and quasar candidates.\nWe will use images collected from the <a href=\"http://www.sdss.org/\">Sloan Digital Sky Survey</a>.\nSome useful links for the Sloan Digital Sky Survey:\n<ul>\n <li><a href=\"skyserver.sdss.org/dr12/en/tools/chart/listinfo.aspx\">SDSS DR12 Image List Tool</a></li>\n <li><a href=\"https://dr12.sdss.org/fields/\">SDSS DR12 Simple Image Query</a></li>\n <li><a href=\"https://data.galaxyzoo.org/\">The Galaxy Zoo</a></li>\n <li><a href=\"http://cdsweb.u-strasbg.fr/cgi-bin/Sesame\">Sesame Name Resolver</a></li>\n <li><a href=\"http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx\">DR 12 Image Retrieval Script</a></li>\n <li><a href=\"http://www.sdss.org/wp-content/uploads/2016/08/dr13_boss.png\">A Visual Representation of the DR13 Footprint</a></li>\n <li><a href=\"http://simbad.u-strasbg.fr/simbad/\">SIMBAD Astronomical Database</a></li>\n <li><a href=\"http://simbad.u-strasbg.fr/simbad/sim-help?Page=sim-fsam#Sotypes\">Useful SIMBAD Documentation</a></li>\n</ul>\n\nOther useful links about Quasars\n<ul>\n <li><a href=\"http://www.galaxyzooforum.org/index.php?topic=272689.0\">Understanding QSO and Quasars</a></li>\n <li><a href=\"https://en.wikipedia.org/wiki/Quasar\">Wikipedia Article on Quasars</a></li>\n</ul>", "import urllib\nfrom IPython.display import display, Image\nimport os\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nfrom astropy import units as u\nfrom astropy.coordinates import SkyCoord\nfrom astropy.table import Table\nfrom astroquery.sdss import SDSS\nfrom astroquery.simbad import Simbad", "As a test, we will use AstroPy to get a nice image for the project page. 
We will use SIMBAD to find a random quasar.", "# Limit the number of results we get from our query.\nSimbad.ROW_LIMIT = 1000\n\nresult = Simbad.query_criteria('region(box,180d +30d, 8d +8d)', otype='QSO')\n\nresult\n\n# Choose a random quasar.\nqnumber = np.random.randint(0, 1000)\n# Get that quasar's coordinates, and format them for SkyCoordinate\nresultRA = result[qnumber]['RA'].split()\nRA = '%sh%sm%ss' % (resultRA[0], resultRA[1], resultRA[2])\nresultDEC = result[qnumber]['DEC'].split()\nDEC = '%sd%sm%ss' % (resultDEC[0], resultDEC[1], resultDEC[2])\n# Convert to a SkyCoordinate\nQCoord = SkyCoord(RA, DEC, frame='icrs')\nQCoord", "We will now get an image from the Sloan Digital Sky Survey. The following code follows <a href=\"http://www.astropy.org/astropy-tutorials/Coordinates.html\">this tutorial</a>.", "impix = 1024\nimsize = 12 * u.arcmin\ncutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'\nquery_string = urllib.parse.urlencode(dict(\n ra=QCoord.ra.deg, dec=QCoord.dec.deg, width=impix, height=impix,\n scale=imsize.to(u.arcsec).value / impix))\nurl = cutoutbaseurl + '?' + query_string\nurllib.request.urlretrieve(url, 'Quasar.jpg')\n\ndisplay(Image('Quasar.jpg'))", "Gathering Quasar Images\nThere is a list of 46420 detected quasars from the <a href=\"http://astrostatistics.psu.edu/datasets/SDSS_quasar.html\">Penn State Center for Astrostatistics</a>. 
We will use their <a href=\"http://astrostatistics.psu.edu/datasets/SDSS_quasar.dat\">SDSS_quasar.dat</a> data set and the <a href=\"http://www.astropy.org/\">AstroPy</a> python package.", "Quasars = pd.read_fwf('SDSS_quasar.dat')\n\nQuasars.head()\n\nQuasars.tail() # 46420 rows\n\ncoord = SkyCoord(str(Quasars.iloc[1]['R.A.']) + 'd',\n str(Quasars.iloc[1]['Dec.']) + 'd', frame='icrs')\n\nimpix = 120\nimsize = 1 * u.arcmin\ncutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'\nquery_string = urllib.parse.urlencode(dict(\n ra=coord.ra.deg, dec=coord.dec.deg, width=impix, height=impix,\n scale=imsize.to(u.arcsec).value / impix))\nurl = cutoutbaseurl + '?' + query_string\nurllib.request.urlretrieve(url, 'Quasar_1.jpg')\ndisplay(Image('Quasar_1.jpg'))", "def get_image(coordinate, name, impix=120):\n \"\"\"Downloads the image from the SDSS DR12 release as an impix pixel by impix pixel image.\n\n Parameters\n ----------\n coordinate : coordinate of the celestial object as a Sky Coordinate.\n name: The name string to save the image as. It will be saved as 'name.jpg'.\n\n \"\"\"\n imsize = 1 * u.arcmin\n cutoutbaseurl = 'http://skyservice.pha.jhu.edu/DR12/ImgCutout/getjpeg.aspx'\n query_string = urllib.parse.urlencode(dict(\n ra=coordinate.ra.deg, dec=coordinate.dec.deg, width=impix, height=impix,\n scale=imsize.to(u.arcsec).value / impix))\n url = cutoutbaseurl + '?' 
+ query_string\n urllib.request.urlretrieve(url, './Images/' + name + '.jpg')\n\nget_image(coord, 'test1')\n# Worked successfully\n\n# Some data manipulation to get Sky Coordinates for each entry.\n# The application of the SkyCoord function will take time.\nQuasarLocs = pd.concat([Quasars['R.A.'].apply(lambda x: str(x) + 'd '),\n Quasars['Dec.'].apply(lambda x: str(x) + 'd')], axis=1)\nQuasarLocs['Coords'] = QuasarLocs[['R.A.', 'Dec.']].apply(\n lambda x: SkyCoord(x[0], x[1], frame='icrs'), axis=1)\n\nQuasarLocs.head()\n\n# We will now download these images from SDSS DR12\nfor i in range(46420):\n get_image(QuasarLocs['Coords'].iloc[i], name='Quasar_' + str(i))", "Gathering Non-quasar Celestial Objects\nWe will use SIMBAD to find objects that are not Quasars or Quasar Candidates. We will sample 200 random regions in the SDSS footprint and take 500 objects from each region.", "# Limit the number of results we get from our query.\nSimbad.ROW_LIMIT = 20000\n\n# We will stay in the 8h to 16h +0d to +60 footprint region of SDSS.\n# Note that there are some regions in SDSS DR 12 and DR 13 outside of this range,\n# but this range covers a majority of the footprint.\n# As the box we form is 8d by 8d, we start at 124d and end at 236d for longitude,\n# and start as +4d to +56d in latitude.\nNonQuasars = pd.DataFrame()\nrandcoord = []\nfor i in range(200):\n randcoord.append(str(np.random.randint(128, 237)) +\n 'd +' + str(np.random.randint(4, 57)) + 'd')\n try:\n # For otype, QSO are quasars, Q? are quasar candidates, and LeQ are gravitationally lenses quasars.\n result = Simbad.query_criteria(\n 'region(box,' + randcoord[i] + ', 4d +4d)', 'otype != QSO', 'otype != Q?', 'otype != LeQ')\n sample = result.to_pandas().sample(500)\n NonQuasars = pd.concat([NonQuasars, sample], axis=0)\n if i % 10 == 0:\n print('At attempt %s' % i)\n except:\n print('Attempt Failed... 
retrying')\n i = i - 1\n\nNonQuasars\n\n# Checking for duplicates, which is a possibility in this process.\nNonQuasars[NonQuasars.duplicated()]\n\n# As duplicates were found, we will drop all but the first.\nNonQuasars = NonQuasars.drop_duplicates(keep='first')\n\n# Reindexing\nNonQuasars.reset_index(inplace=True)\nNonQuasars.drop('index', axis=1, inplace=True)\n\n# Saving a copy of the data to accompany the images.\nNonQuasars.to_csv('NonQuasarsData.csv', index=False)\n\nNonQuasars.head()\n\nNonQuasars.tail() # 94670 rows\n\ndef RAtoICRS(RAValue):\n \"\"\"Converts SIMBAD Right Ascent (RA) format to ICRS format.\n\n Parameters\n ----------\n RAValue : A SIMBAD Right Ascent value in \"X Y Z\" format for X hours, Y minutes, and Z seconds.\n\n \"\"\"\n if len(RAValue.split()) == 1:\n return '%sh' % (RAValue.split()[0])\n elif len(RAValue.split()) == 2:\n return '%sh%sm' % (RAValue.split()[0], RAValue.split()[1])\n elif len(RAValue.split()) == 3:\n return '%sh%sm%ss' % (RAValue.split()[0], RAValue.split()[1], RAValue.split()[2])\n else:\n return np.nan()\n\n\ndef DECtoICRS(DECValue):\n \"\"\"Converts SIMBAD Declination (DEC) format to ICRS format.\n\n Parameters\n ----------\n RAValue : A SIMBAD Declination value in \"+X Y Z\" format for X degrees, Y minutes, and Z seconds.\n\n \"\"\"\n if len(DECValue.split()) == 1:\n return '%sd' % (DECValue.split()[0])\n elif len(DECValue.split()) == 2:\n return '%sd%sm' % (DECValue.split()[0], DECValue.split()[1])\n elif len(DECValue.split()) == 3:\n return '%sd%sm%ss' % (DECValue.split()[0], DECValue.split()[1], DECValue.split()[2])\n else:\n return np.nan()\n\n# Data manipulation to get Sky Coordinates for each entry.\n# Note that SIMBAD gives the values separated as hours, minutes, seconds for RA and degrees, minutes, seconds for Dec\nNonQuasarLocs = pd.concat([NonQuasars['RA'].apply(RAtoICRS),\n NonQuasars['DEC'].apply(DECtoICRS)], axis=1)\nNonQuasarLocs['Coords'] = NonQuasarLocs[['RA', 'DEC']].apply(\n lambda x: SkyCoord(x[0], 
x[1], frame='icrs'), axis=1)\n\nNonQuasarLocs.head()\n\n# We will now download these images from SDSS DR12\nfor i in range(94670):\n get_image(NonQuasarLocs['Coords'].iloc[i], name='NonQ_' + str(i))", "Gathering Quasar Candidates\nWe will now use SIMBAD to identify quasar candidates for analysis with our trained model.", "# As with the Non-quasar data, we will stay in the 8h to 16h +0d to +60 footprint\n# region of SDSS. Note that there are some regions in SDSS DR 12 and DR 13 outside\n# of this range, but this range covers a majority of the footprint.\n# Due to timeout issues from SIMBAD, we will use smaller regions\n# of width 10d by +6d to gather the candidates.\nQuasarCandidates = pd.DataFrame()\nfor i in range(12): # Separate longitude into 12 segments of length 10d\n for j in range(10): # Separate latitude into 10 segments of length 6d\n try:\n # For otype Q? are quasar candidates.\n result = Simbad.query_criteria(\n 'region(box,' + str(125 + 10 * i) + 'd +'\n + str(3 + 6 * j) + 'd' + ', 5d +3d)', 'otype = Q?')\n QuasarCandidates = pd.concat(\n [QuasarCandidates, result.to_pandas()], axis=0)\n if (i * 10 + j) % 20 == 0:\n print('At attempt %s' % (i * 10 + j))\n except:\n print('Attempt Failed at i=%s and j=%s.' 
% (i, j))\n\nQuasarCandidates.head()\n\nQuasarCandidates.tail()\n\n# Checking for duplication\nQuasarCandidates[QuasarCandidates.duplicated()]\n\n# Reindexing\nQuasarCandidates.reset_index(inplace=True)\nQuasarCandidates.drop('index', axis=1, inplace=True)\n\nQuasarCandidates.tail() # 5418 rows\n\n# Saving a copy of the data to accompany the images.\nQuasarCandidates.to_csv('QuasarCandidatesData.csv', index=False)\n\n# Data manipulation to get Sky Coordinates for each entry.\n# Note that SIMBAD gives the values separated as hours, minutes, seconds for RA and degrees, minutes, seconds for Dec\nQuasarCandidateLocs = pd.concat([QuasarCandidates['RA'].apply(RAtoICRS),\n QuasarCandidates['DEC'].apply(DECtoICRS)], axis=1)\nQuasarCandidateLocs['Coords'] = QuasarCandidateLocs[['RA', 'DEC']].apply(\n lambda x: SkyCoord(x[0], x[1], frame='icrs'), axis=1)\n\nQuasarCandidateLocs.head()\n\n# We will now download these images from SDSS DR12\nfor i in range(5418):\n get_image(QuasarCandidateLocs['Coords'].iloc[i],\n name='QuasarCandidate_' + str(i))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
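The notebook's RAtoICRS/DECtoICRS helpers branch on the number of space-separated components. The same conversion can be written once with `zip` (a sketch; these lowercase names are mine, not the notebook's), which handles one, two, or three components without the if/elif ladder:

```python
def ra_to_icrs(ra):
    """'12 29 06.7' (SIMBAD RA) -> '12h29m06.7s' (ICRS-style string)."""
    return ''.join(p + u for p, u in zip(ra.split(), 'hms'))

def dec_to_icrs(dec):
    """'+02 03 08.6' (SIMBAD DEC) -> '+02d03m08.6s'."""
    return ''.join(p + u for p, u in zip(dec.split(), 'dms'))

print(ra_to_icrs('12 29 06.7'))    # 12h29m06.7s
print(dec_to_icrs('+02 03 08.6'))  # +02d03m08.6s
```

Because `zip` stops at the shorter iterable, a one- or two-component value like `'12 29'` yields `'12h29m'`, matching the branching behavior of the originals.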
mathinmse/mathinmse.github.io
Lecture-09-Matrices-and-Rotations.ipynb
mit
[ "Laboratory 09: Matrices and Rotations (Crystallography)\nBackground\nThe purpose of this laboratory is to understand and apply the concept of an algebraic group to produce one of the plane group patterns below. In this context, algebraic groups are mathematical objects that represent a collection of geometric transformations and are used in the study of crystallography. For example, when you say that a crystal is \"face centered cubic\" what you are really saying is that an atomic motif, when transformed by all the algebraic elements of a particular group, will be identical to the face centered cubic structure. The group is the mathematical machinery of crystal structures. The group itself is an abstract idea that is embodied in a crystal structure, a geometric design, a geometric series, a set of matrices, or a set of any kind where the rules that govern the set meet the criteria of a group. The idea of a group exists beyond the example I present here.\nWhat Skills Will I Learn?\nUsing the context of two-dimensional plane groups, in this assignment you will practice the following skills:\n\nIdentifying a symmetry operation (rotation, translation, mirror) and determining the associated matrix representation.\nApplying a symmetry operation to a position vector to transform the vector (or collection of vectors).\nSeeing the inner workings of a class object and its attributes for the purpose of simplifying a computing task.\nUsing lists to collect and organize objects (such as transformation matrices)\n\nWhat Steps Should I Take?\n\nReview the idea of \"symmetry operations\" and familiarize yourself with mirror, translational, and rotational symmetry operations.\nReview the idea of matrix transformations and how they can be used to represent symmetry operations and how those symmetry operations can be represented as a matrix/vector dot product.\nRead about the metric tensor below and practice a few calculations of lengths and angles in unit cell coordinates.\nRead 
and practice writing down 2D, 3D and 3D augmented transformation matrices and learn how Euler angles are defined.\nFor each case above, use a position vector pointing in a direction you select and apply a transformation using each matrix in turn. Does the resultant vector match your anticipated transformation? (e.g. If you take the position vector pointing at the [0,0,1] position in a cubic lattice and apply a 90 degree rotation about the x-axis, where should the resultant vector be pointing?) Repeat this for mirror planes and translations.\nReview the concept of a \"class\" in object oriented programming and familiarize yourself with the Point and Triangle classes I have defined below. Practice creating these objects and accessing their attributes.\nReview the code that accesses the Python Imaging Library (PIL) and look at the polygon function call and the Triangle points method. You can call the points method from within the polygon function to simplify the drawing of triangles.\nUsing the Point and Triangle classes and your knowledge of symmetry transformations, reproduce one of the plane groups from the figure below. My suggestion is to start with a single triangle and apply a set of operations (transformation matrices) and collect new triangles into a list. Sketch a flowchart for how you might solve this problem.\nReview the very last block of code in this notebook and modify it to complete the assignment. You will need to use combinations of translations, reflections and rotations to accomplish this. \nPrepare a new notebook (not just modifications to this one) that describes your approach to reproducing one or more of the plane groups.\n\nYou may discover that a unique set of matrices will reproduce the whole pattern. This small set of matrices is an algebraic structure known as a group. So - the \"plane group\" is the group of symmetry operations that reproduces the structure in the plane starting from a single motif. 
The plane group representations are reproduced below for reference; this image comes from Hammond's book on crystallography, cited in the references below.\nA Successful Jupyter Notebook Will\n\nPresent a description of the essential elements of plane group symmetry, matrix algebra, and group theory to understand how these are related;\nIdentify the audience for which the work is intended;\nRun the code necessary to draw one of the plane groups;\nProvide a narrative and equations to explain why your approach is relevant to solving the problem;\nProvide references and citations to any others' work you use to complete the assignment;\nBe checked into your GitHub repository by the due date (one week from assignment).\n\nA high quality communication provides an organized, logically progressing, blend of narrative, equations, and code that teaches the reader a particular topic or idea. You will be assessed on:\n* The functionality of the code (i.e. it should perform the task assigned).\n* The narrative you present. I should be able to read and learn from it. Choose your audience wisely.\n* The supporting equations and figures you choose to include.\nIf your notebook is just computer code your assignment will be marked incomplete.\nReading and Reference\n\nM. De Graef and M. McHenry, Structure of Materials, Cambridge University Press, 2nd ed.\nC. Hammond, The Basics of Crystallography and Diffraction, Oxford Science Publications, 4th ed.\n\nThe Plane Groups\n\nIntroduction\n\nOperations using vector-like data structures are an essential component of numerical computing, mathematics, science and engineering. In the field of crystallography the use of vectors and rotations in real and reciprocal space helps to simplify the description of and quantitative operations on crystal structures. Matrix operations are used in the solution of partial differential equations. The vector algebra and rotations are most easily performed using matrix tools and representations. 
The concepts and their mathematical properties will be reviewed and demonstrated using symbolic algebra and numerical methods. A review of matrix algebra will be helpful.\nRotations\n\nA vector can be transformed by translations, rotations, and stretching/shrinking. A matrix multiplication operation can be used to define each individual operation. We can use matrix multiplication to perform combinations of these operations and then this composite operator can be applied to a vector. In general these types of transformations are called Affine Transformations. A rotation in two dimensions is given by:\n\\begin{equation}\n\\left[\\begin{matrix}\\cos{\\left (\\theta \\right )} & - \\sin{\\left (\\theta \\right )}\\\\ \\sin{\\left (\\theta \\right )} & \\cos{\\left (\\theta \\right )}\\end{matrix}\\right]\n\\end{equation}\nWe can rotate a vector $x\\mathbf{i} + y\\mathbf{j}$ to the position $x'\\mathbf{i} + y'\\mathbf{j}$ using the following matrix operation:\n\\begin{equation}\n\\left( \\begin{matrix} x' \\\\ y' \\end{matrix} \\right) = \\left[\\begin{matrix}\\cos{\\left (\\theta \\right )} & - \\sin{\\left (\\theta \\right )}\\\\ \\sin{\\left (\\theta \\right )} & \\cos{\\left (\\theta \\right )}\\end{matrix}\\right] \\cdot \\left( \\begin{matrix} x \\\\ y \\end{matrix} \\right)\n\\end{equation}", "%matplotlib inline\n\nimport numpy as np\nimport sympy as sp\nsp.init_session(quiet=True)\n\n?sp.rot_axis3\n\nx = sp.symbols('\\\\theta')\nsp.rot_axis3(x)", "We can look up definitions, but we can also do some simple tests to see which way things rotate. Let us take a vector pointing in the $\\mathbf{x}$ direction and rotate it about $\\mathbf{z}$ by 90 degrees and see what happens:", "xUnit = sp.Matrix([1,0,0])\nzRotation = sp.rot_axis3(sp.rad(90))\n\nxUnit\n\n# Each column can be viewed as where the unit vectors are moved to in the new space.\nzRotation\n\nzRotation*xUnit\n\n# This should not work.\nxUnit*zRotation\n\nxUnit.T*zRotation", "What can we learn from this result? 
We now know that:\n\nThe convention for positive angles is a counterclockwise rotation.\nThe rotation axis function in SymPy, as defined, rotates clockwise.\nThere are conventions about active and passive rotations.\nDon't assume module functions will do what you want - always check.\n\nRather than rely on module functions, we can define our own rotation function. Using a function called \"isclose\" or Boolean indexing it is possible to clean up the arrays and remove small numerical values that should be zeros.", "def rotation2D(theta):\n return np.array([[np.cos(theta), np.sin(theta)],\n [-np.sin(theta), np.cos(theta)]])\n\ntestArray = rotation2D(0)\ntestArray\n\nnp.dot(np.array([1,0]),testArray)", "DIY: Computing Rotation Matrices\n\nCompute the following rotation matrices:\n\nA rotation of 0$^\\circ$ about the origin.\nA rotation of 45$^\\circ$ about the origin.\nA rotation of 60$^\\circ$ about the origin.\nA rotation of 90$^\\circ$ about the origin.\nA rotation of 180$^\\circ$ about the origin.\nA rotation of -270$^\\circ$ about the origin.", "# Put your code here.", "Cleaning up the Small Values\n\nSometimes Numpy returns very small numbers instead of zeros.\nOne strategy is to remove small numbers less than some tolerance and set them equal to zero. Algorithms like these where you compare your data to a tolerance and then operate on the entries that meet certain criteria are not uncommon in numerical methods. This is the tradeoff between symbolic computation and numeric computation.", "testArray[np.abs(testArray) < 1e-5] = 0\n\ntestArray", "The key is in the Boolean comparison using the < symbol. The expression returns a numpy array of dtype=bool. Let me say here that it is good to check the results of expressions if you are unsure.", "np.abs(testArray) < 1e-5", "We can write a function to do this that is a bit more robust. 
Modifications are done in-place (by reference) so we just return the array passed to the function after some manipulation that we do by Boolean indexing.", "def zeroArray(testArray):\n testArray[np.isclose(testArray, np.zeros_like(testArray))] = 0.0\n return testArray\n\nmodifiedArray = rotation2D(np.pi/2)\nmodifiedArray\n\nmodifiedArray = zeroArray(rotation2D(np.pi/2))\nmodifiedArray", "Rotations (Continued)\n\nUsing the new rotation2D function and zeroArray function we can now write:", "zeroArray(np.dot(np.array([1,0]),rotation2D(np.pi/2)))", "A collection of functions for performing transformations is available at http://www.lfd.uci.edu/~gohlke/. This can be imported and the functions used in your code. The fastest way to explore what is available is to import the file and then use autocomplete and docstring viewing functions from the Jupyter notebook.", "import transformations as tfm\n\ntfm.rotation_matrix(np.pi/2, [0,1,0])\n\nzeroArray(tfm.rotation_matrix(np.pi/2, [0,1,0]))", "Symmetry Operations and Translations in Crystals\n\nA generalized affine transformation in two dimensions can be thought of as an augmented matrix as:\n$$\\begin{bmatrix}\nr_1 & r_2 & t_x\\\nr_3 & r_4 & t_y\\\n0 & 0 & 1\\\n\\end{bmatrix}$$\nso you could imagine the following:\n$$\\begin{bmatrix} x'\\ y'\\ 1\\ \\end{bmatrix} =\n\\begin{bmatrix} 1 & 0 & t_x\\ 0 & 1 & t_y\\ 0 & 0 & 1\\ \\end{bmatrix} \n\\begin{bmatrix}x\\ y\\ 1\\ \\end{bmatrix} $$\nexpanding to:\n$$x' = x + t_x $$\nand\n$$y' = y + t_y $$\nWe can explicitly write the rotation components as listed earlier:\n$$\\begin{bmatrix} x'\\ y'\\ 1\\ \\end{bmatrix} =\n\\begin{bmatrix} \\cos{\\theta} & \\sin{\\theta} & t_x\\ -\\sin{\\theta} & \\cos{\\theta} & t_y\\ 0 & 0 & 1\\ \\end{bmatrix} \n\\begin{bmatrix}x\\ y\\ 1\\ \\end{bmatrix} $$\nwhere the $r_i$ represent the rotation matrix components and the $t_i$ represent the translations components. This can be thought of as a shearing operation in 3D. 
The Wikipedia article on this topic expands this idea a bit more.\nIn this format we can use a point description that looks like $(x, y, t)$ and matrix algebra to generate our transformed points. Using SymPy:", "alpha, t_x, t_y, x, y = sp.symbols('alpha t_x t_y x y')\n\nsa = sp.sin(alpha)\nca = sp.cos(alpha)\nM = sp.Matrix([[ca, sa, t_x], [-sa, ca, t_y], [0, 0, 1]])\nV = sp.Matrix([x, y, 1])\n\nM*V", "Helper Classes, Drawing and Plane Groups\n\nLet us explore a bit of how we can draw an image - and then we have everything we need to start building the plane group representations. To build pictoral representations of plane groups, two classes have been created to simplify the organization of the motif. \nBelow are the Point and Triangle classes for your use. A Point has storage for an $(x,y)$ position. Rather than building arrays and referencing specific positions within the array a more natural referencing of points is possible. If we define a point p1 = Point(10,20) we can access the points by p1.x and p1.y. The code is more easily read and debugged with this syntax.\nBuilding on the Point class we define a Triangle class. The Triangle permits access of each point and defines an affine() method that will take a Numpy array that represents a transformation matrix. This method returns a new instance of a Triangle and preserves the original points. Writing the code this way avoids explicit handling of the transformation matrices.\nThese two classes and methods are demonstrated in the second and third code blocks. 
Building on this demonstration you will be able to complete the homework.", "# Class definitions\n\nfrom math import sqrt\nimport numpy as np\n\nclass Point:\n \"\"\"\n A Point object to simplify storage of (x,y) positions.\n p1.x, p1.y, etc...\n \"\"\"\n def __init__(self,x_init,y_init):\n self.x = x_init\n self.y = y_init\n\n def shift(self, x, y):\n self.x += x\n self.y += y\n\n def __repr__(self):\n return \"\".join([\"Point(\", str(self.x), \",\", str(self.y), \")\"])\n \nclass Triangle:\n \"\"\"\n A Triangle class constructed from points. Helps organize information on\n triangles. Has a points() method for returning points in a form that \n can be used with polygon drawing from Python Image Library (PIL) and a \n method, affine(), that applies a matrix transformation to the points.\n \"\"\"\n def __init__(self, p1_init, p2_init, p3_init):\n self.p1 = p1_init\n self.p2 = p2_init\n self.p3 = p3_init\n \n def points(self):\n x1, y1 = self.p1.x, self.p1.y\n x2, y2 = self.p2.x, self.p2.y\n x3, y3 = self.p3.x, self.p3.y\n \n return [(x1,y1),(x2,y2),(x3,y3)]\n \n def affine(self, affineMatrix):\n \"\"\"\n Applies an affine transformation to a triangle and changes the \n points of the triangle. This code returns a new triangle. Uses\n Points to simplify augmenting the matrix and dot products.\n \"\"\"\n x1, y1 = self.p1.x, self.p1.y\n x2, y2 = self.p2.x, self.p2.y\n x3, y3 = self.p3.x, self.p3.y\n \n p1Vector = np.array([[x1, y1, 1]])\n p2Vector = np.array([[x2, y2 , 1]])\n p3Vector = np.array([[x3, y3, 1]])\n \n p1New = np.dot(affineMatrix, p1Vector.T)\n p2New = np.dot(affineMatrix, p2Vector.T)\n p3New = np.dot(affineMatrix, p3Vector.T)\n \n # This line needs to be cleaned up.\n newTriangle = Triangle(Point(p1New[0,0],p1New[1,0]),Point(p2New[0,0],p2New[1,0]),Point(p3New[0,0],p3New[1,0]))\n \n return newTriangle", "Using the Python Imaging Library\n\nIn order that our transformations can be visualized, we will use the Python Imaging Library and the polygon() method. 
The polygon() method takes a list of points in the format returned by our Triangle class method points(). The code below is a very simple starting point for the student to begin building a representation of the plane groups.", "# Class demonstrations\n\n%matplotlib inline\n\nimport numpy as np\nimport math\nfrom PIL import Image, ImageDraw\n\nimage = Image.new('RGB', (500, 500), 'white')\ndraw = ImageDraw.Draw(image)\n\nt1 = Triangle(Point(10,10),Point(40,10),Point(10,50))\nt2 = Triangle(Point(10,10),Point(40,10),Point(10,50))\ntranslationMatrix = np.array([[1,0,50],[0,1,0],[0,0,1]])\nreflectionMatrix = np.array([[-1,0,0],[0,1,0],[0,0,1]])\n\n# Compose with the matrix product (np.dot); elementwise * would silently drop the translation column.\nt2 = t1.affine(np.dot(reflectionMatrix, translationMatrix))\n\nprint(t1.points(), t2.points())\n\ndraw.polygon(t1.points(), outline='black', fill='red')\ndraw.polygon(t2.points(), outline='black', fill='green')\n\nimage\n\n# A slightly more advanced demonstration\n\n%matplotlib inline\n\nimport numpy as np\nimport math\nfrom PIL import Image, ImageDraw\n\nimage = Image.new('RGB', (500, 500), 'white')\ndraw = ImageDraw.Draw(image)\n\nt1 = Triangle(Point(70,10),Point(100,10),Point(70,50))\n\ntriangleList = [t1]\n\ntranslationMatrix = np.array([[1,0,10],[0,1,0],[0,0,1]])\nreflectionMatrixY = np.array([[-1,0,0],[0,1,0],[0,0,1]])\nreflectionMatrixX = np.array([[1,0,0],[0,-1,0],[0,0,1]])\n\nr90 = np.array([[0,-1,0],[1,0,0],[0,0,1]])\n\ntriangleList.append(t1.affine(r90))\ntriangleList.append((t1.affine(r90)).affine(r90))\ntriangleList.append((t1.affine(r90)).affine(r90).affine(r90))\n\ntempList = [triangle.affine(reflectionMatrixX) for triangle in triangleList]\ntriangleList.extend(tempList)\n\n# Using an affine transformation to center the triangles in the drawing\n# as canvas coordinates are (0,0) at the top left.\ncenterMatrix = np.array([[1,0,250],[0,1,250],[0,0,1]])\ndrawList = [triangle.affine(centerMatrix) for triangle in triangleList]\nfor triangle in drawList:\n draw.polygon(triangle.points(), outline='black', fill='red')\n\nimage", 
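Composition order matters when chaining these operations. The sketch below uses plain NumPy with a hypothetical `apply_affine` helper (my name, not part of the classes above) to show that reflecting then translating is not the same as translating then reflecting:

```python
import numpy as np

# Hypothetical helper: apply a 3x3 affine matrix to a list of (x, y) points
# using augmented (x, y, 1) coordinates.
def apply_affine(matrix, points):
    out = []
    for (x, y) in points:
        v = matrix @ np.array([x, y, 1.0])
        out.append((v[0], v[1]))
    return out

translation = np.array([[1, 0, 50], [0, 1, 0], [0, 0, 1]], dtype=float)
reflection = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)

tri = [(10, 10), (40, 10), (10, 50)]

# For column vectors, M2 @ M1 applies M1 first, then M2.
# The matrix product (@ or np.dot) is required; elementwise * drops the translation.
reflect_then_translate = translation @ reflection
translate_then_reflect = reflection @ translation

print(apply_affine(reflect_then_translate, tri))  # (10,10) -> (40.0, 10.0)
print(apply_affine(translate_then_reflect, tri))  # (10,10) -> (-60.0, 10.0)
```

The same composed matrices can be passed directly to the Triangle affine() method, since it only needs a 3x3 array.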
"Advanced Topics: The Metric Tensor\nStudents who learn crystallography for the first time are introduced to the topic through study of cubic crystal structures. This permits an appeal to our intuition about orthonormal (Euclidian) coordinate systems. This approach misses out on a more general method for teaching the topic where the d-spacing and the angle between directions can be worked out for any general crystal system. The method for describing distances in a general reference frame involves the metric tensor. The metric tensor defines how distances are measured in every direction and its components are the dot product of every combination of basis vector in the system of interest. We use the standard lattice parameter designations:\n$$\n[a, b, c, \\alpha, \\beta, \\gamma]\n$$\nwhere $\\gamma$ is the angle between $\\mathbf{a}$ and $\\mathbf{b}$, etc. This general system has basis vectors:\n$$\n\\mathbf{a},\\mathbf{b}, \\mathbf{c}\n$$\nThe standard geometric interpretation of an inner product of vectors is:\n$$\n\\mathbf{a} \\cdot \\mathbf{b} = |a| \\; |b| \\; \\cos{\\gamma}\n$$\nso that the metric tensor is:\n$$\ng_{ij} = \n\\begin{bmatrix} \n\\mathbf{a} \\cdot \\mathbf{a} & \n\\mathbf{a} \\cdot \\mathbf{b} & \n\\mathbf{a} \\cdot \\mathbf{c} \\ \n\\mathbf{b} \\cdot \\mathbf{a} & \n\\mathbf{b} \\cdot \\mathbf{b} & \n\\mathbf{b} \\cdot \\mathbf{c} \\ \n\\mathbf{c} \\cdot \\mathbf{a} & \n\\mathbf{c} \\cdot \\mathbf{b} & \n\\mathbf{c} \\cdot \\mathbf{c}\n\\end{bmatrix}\n$$\nCommitting this to memory is simple and it has an intuitive meaning in that each entry measures a different projection of a vector component onto an axis in a general crystal system within lattice coordinates. 
It is possible to write a small function in SymPy to illustrate this.", "def metricTensor(a=1, b=1, c=1, alpha=sp.pi/2, beta=sp.pi/2, gamma=sp.pi/2):\n return sp.Matrix([[a*a, a*b*sp.cos(gamma), a*c*sp.cos(beta)], \\\n [a*b*sp.cos(gamma), b*b, b*c*sp.cos(alpha)], \\\n [a*c*sp.cos(beta), b*c*sp.cos(alpha), c*c]])\n\nsp.var('a b c alpha beta gamma u v w h k l')\n\nM = metricTensor(a=a,\n b=b,\n c=c,\n alpha=alpha,\n beta=beta,\n gamma=gamma\n )\nM", "Two Common Computations\n\nUsing the Einstein summation convention, the square of the distance between two points located at the end of vectors $\\mathbf{p}$ and $\\mathbf{q}$ is given by:\n$$\nD^2 = (\\mathbf{q} - \\mathbf{p})i \\; g{ij} \\; (\\mathbf{q} - \\mathbf{p})_j\n$$\nThe dot product between two vectors is given by:\n$$\n\\mathbf{p} \\cdot \\mathbf{q} = p_i \\; g_{ij} \\; q_j\n$$\nNote that the vectors $\\mathbf{p}$ and $\\mathbf{q}$ are in lattice coordinates.\nDIY: Angle Between Two Vectors\n\nFind the expression for and compute the angle between two vectors in a general coordinate system. Use the function defined above or write a new one. You are encouraged to use SymPy or Numpy in your calculations. Refer to the earlier lectures that cover these topics for the implementation and technical details.", "# Put your code or markdown here.", "DIY: Compute the angle between the $(123)$ plane and the $(112)$ plane in a trigonal crystal system\n\nThe trigonal crystal system has the least symmetry. Refer to standard texts for the pattern of lattice parameters.", "vectorOne = np.array([1,2,3])\nvectorTwo = np.array([1,1,2])\n\n# Put your code here.", "Advanced Topics: Euler Angles\nThis discussion is derived primarily from Arfken, Chapter 3.3. The figure below is from Arfken:\n\nThere are three successive rotations used in the Euler angle formalism - the product of these three rotations is the single operation that transforms one set of coordinates $(x,y,z)$ into another, $(x',y',z')$. 
The order is important and is the difference between active and passive rotations.\nIn steps as shown in Figure 3.7 from Arfken:\n\nThe first rotation is about $x_3$. In this case the $x'_3$ and $x_3$ axes coincide. The angle $\\alpha$ is taken to be positive (counterclockwise). Our new coordinate system is $(x'_1,x'_2,x'_3)$.\nThe coordinates are now rotated through an angle $\\beta$ around the $x'_2$ axis. Our new coordinate system is now $(x''_1,x''_2,x''_3)$.\nThe final rotation is through the angle $\\gamma$ about the $x''_3$ axis. Our coordinate system is now the $(x'''_1,x'''_2,x'''_3)$. In the case pictured above the $x''_3$ and $x'''_3$ axes coincide.\n\nFor example:", "alpha, beta, gamma = sp.symbols('alpha beta gamma')\n\ndef rZ(angle):\n sa = sp.sin(angle)\n ca = sp.cos(angle)\n M = sp.Matrix([[ca, sa, 0],\n [-sa, ca, 0],\n [0, 0, 1]])\n return M\n\ndef rY(angle):\n sb = sp.sin(angle)\n cb = sp.cos(angle)\n M = sp.Matrix([[cb, 0, -sb],\n [0, 1, 0],\n [sb, 0, cb]])\n return M\n\n\n(rZ(alpha), rY(beta), rZ(gamma))", "You'll find that the symbolic triple matrix product matches up with the results in Arfken for the definition of Euler angles $\\alpha$, $\\beta$, $\\gamma$. Note also that this is much easier to compute than by hand! Also - less likely to result in errors.", "rZ(gamma)*rY(beta)*rZ(alpha)", "To convert this symbolic expression to a numerical expression, one option is to use lambdify from SymPy:", "eulerAngles = sp.lambdify((alpha,beta,gamma), rZ(gamma)*rY(beta)*rZ(alpha), \"numpy\")\n\nnp.array([1,0,0]).dot(eulerAngles(np.pi/2.0,0,0))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
weinbe58/QuSpin
examples/notebooks/SSH.ipynb
bsd-3-clause
[ "Single Particle Systems: coding the SSH model in real and momentum space\nThis tutorial shows how to use QuSpin to construct single-particle Hamiltonians in real space and momentum space. To demonstrate this, we use the Su-Schrieffer-Heeger (SSH) model of free spinless fermions on a dimerised chain:\n$$ H = \\sum_{j=0}^{L-1} -(J+(-1)^j\\delta J)\\left(c_jc^\\dagger_{j+1} - c^\\dagger_{j}c_{j+1}\\right) + \\Delta\\sum_{j=0}^{L-1}(-1)^jn_j,$$\nwhere $J$ is the uniform component of the hopping, $\\delta J$ -- the bond dimerisation, and $\\Delta$ -- a staggered potential. \nWe begin by loading the QuSpin libraries and define the model parameters", "from quspin.operators import hamiltonian # Hamiltonians and operators\nfrom quspin.basis import spinless_fermion_basis_1d # Hilbert space fermion basis\nfrom quspin.tools.block_tools import block_diag_hamiltonian # block diagonalisation\nimport numpy as np # generic math functions\n#\n##### define model parameters #####\nL=6 # system size\nJ=1.0 # uniform hopping contribution\ndeltaJ=0.1 # bond dimerisation\nDelta=0.5 # staggered potential", "Next, we construct the fermion basis using the constructor spinless_fermion_basis_1d. Since we are interested in a free model, it suffices to consider a single particle Nf=1.", "##### construct single-particle Hamiltonian #####\n# define basis\nbasis=spinless_fermion_basis_1d(L,Nf=1)\nprint(basis)", "In defining the site-coupling list, we set a convention that the operator indices grow to the right (this is not required by QuSpin, it's merely our choice and we do it for convenience), as written out in the Hamiltonian above. Thus, the fermion hopping operator (unlike bosons) requires two different lists to reflect the sign flip in the hermitian conjugate term.\nThe static and dynamic lists as well as building the real-space Hamiltonian is the same as for the BHM. 
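Before handing these conventions to QuSpin, the site-coupling lists can be built and inspected with plain Python; this sketch simply restates the parameters defined above and checks the alternating bond strengths and the sign flip between the "+-" and "-+" lists:

```python
# Model parameters, restated from the cell above.
L, J, deltaJ = 6, 1.0, 0.1

# "+-" couplings carry -(J + (-1)^i * deltaJ); the "-+" (hermitian conjugate)
# couplings carry the opposite sign.  (i+1)%L closes the chain (PBC).
hop_pm = [[-J - deltaJ * (-1) ** i, i, (i + 1) % L] for i in range(L)]
hop_mp = [[+J + deltaJ * (-1) ** i, i, (i + 1) % L] for i in range(L)]

print(hop_pm[0], hop_mp[0])  # strong bond on site 0
print(hop_pm[1], hop_mp[1])  # weak bond on site 1
print(hop_pm[L - 1])         # last bond wraps back to site 0
```

These are exactly the lists fed to the `static` argument of `hamiltonian` in the next cells.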
Last, we diagonalise the real-space Hamiltonian.", "# define site-coupling lists\nhop_pm=[[-J-deltaJ*(-1)**i,i,(i+1)%L] for i in range(L)] # PBC\nhop_mp=[[+J+deltaJ*(-1)**i,i,(i+1)%L] for i in range(L)] # PBC\nstagg_pot=[[Delta*(-1)**i,i] for i in range(L)]\n# define static and dynamic lists\nstatic=[[\"+-\",hop_pm],[\"-+\",hop_mp],['n',stagg_pot]]\ndynamic=[]\n# build real-space Hamiltonian\nH=hamiltonian(static,dynamic,basis=basis,dtype=np.float64)\nprint(\"FH Hamiltonian in real space is:\")\nprint(H.toarray())\n# diagonalise real-space Hamiltonian\nE,V=H.eigh()", "In momentum space, $k\\in\\mathrm{BZ'}=[-\\pi/2,\\pi/2)$, the Hamiltonian becomes block diagonal:\n$$ H = \\sum_{k\\in\\mathrm{BZ'}} (a^\\dagger_k,b^\\dagger_k)\n\\left(\\begin{array}{cc}\n\\Delta & -(J+\\delta J)\\mathrm e^{-i k} - (J-\\delta J)\\mathrm e^{+i k} \\\\\n-(J+\\delta J)\\mathrm e^{+i k} - (J-\\delta J)\\mathrm e^{-i k} & -\\Delta\n\\end{array}\n\\right)\n\\left( \\begin{array}{c}\na_k\\\\\nb_k\n\\end{array}\n\\right)$$\nFor translation invariant single-particle models, therefore, the user might prefer to use momentum space. This can be achieved using QuSpin's block_tools. The idea behind it is simple: the main purpose is to create the full Hamiltonian in block-diagonal form, where the blocks correspond to pre-defined quantum numbers. In our case, we would like to use momentum or kblock's. Note that the unit cell in the SSH model contains two sites, which we encode using the variable a=2. Thus, we can create a list of dictionaries -- blocks, each element of which defines a single symmetry block. If we combine all blocks, we exhaust the full Hilbert space. All other basis arguments, such as the system size, we store in the variable basis_args. 
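As a quick numerical sanity check (independent of QuSpin), the 2x2 Bloch matrix above can be diagonalised at a grid of momenta with plain NumPy; the helper name `ssh_bloch` is mine, not a QuSpin function:

```python
import numpy as np

def ssh_bloch(k, J=1.0, deltaJ=0.1, Delta=0.5):
    # Off-diagonal element of the two-band Bloch Hamiltonian quoted above.
    rho = -(J + deltaJ) * np.exp(-1j * k) - (J - deltaJ) * np.exp(+1j * k)
    return np.array([[Delta, rho], [np.conj(rho), -Delta]])

ks = np.linspace(-np.pi / 2, np.pi / 2, 7, endpoint=False)
bands = np.array([np.linalg.eigvalsh(ssh_bloch(k)) for k in ks])

# The two bands come in +/- pairs: E(k) = +/- sqrt(Delta^2 + |rho(k)|^2),
# so the gap at the zone edge k = -pi/2 is 2*sqrt(Delta^2 + 4*deltaJ^2).
print(bands)
```

The eigenvalues returned by the QuSpin block Hamiltonian below should lie on these two bands.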
We mention in passing that this procedure is independent of the symmetry, and can be done using all symmetries supported by QuSpin, not only translation.", "# define basis blocks and arguments\nblocks=[dict(Nf=1,kblock=i,a=2) for i in range(L//2)] # only L//2 distinct momenta\nbasis_args = (L,)", "To create the block-diagonal Hamiltonian, we invoke the block_diag_hamiltonian method. It takes both required and optional arguments, and returns the transformation, which block-diagonalises the Hamiltonian (in our case the Fourier transform) and the block-diagonal Hamiltonian object. Required arguments, in order of appearance, are the blocks, the static and dynamic lists, the basis constructor, basis_args, and the data type. Since we expect the Hamiltonian to contain the Fourier factors $\\exp(-ik)$, we know to choose a complex data type. block_diag_hamiltonian also accepts some optional arguments, such as the flags for disabling the automatic built-in symmetry checks.", "# construct block-diagonal Hamiltonian\nFT,Hblock = block_diag_hamiltonian(blocks,static,dynamic,spinless_fermion_basis_1d,basis_args,np.complex128,\n get_proj_kwargs=dict(pcon=True))\nprint(np.around(Hblock.toarray(),2))\n# diagonalise momentum-space Hamiltonian\nEblock,Vblock=Hblock.eigh()", "We now compare the real-space and momentum-space spectra, to check if they match", "##### plot spectra\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.plot(np.arange(H.Ns),E/L,marker='o',color='b',label='real space')\nplt.plot(np.arange(Hblock.Ns),Eblock/L,marker='x',color='r',markersize=2,label='momentum space')\nplt.xlabel('state number',fontsize=16)\nplt.ylabel('energy',fontsize=16)\nplt.xticks(fontsize=16)\nplt.yticks(fontsize=16)\nplt.legend(fontsize=16)\nplt.grid()\nplt.tight_layout()\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GSimas/EEL7045
Aula 18 - Circuitos CA.ipynb
mit
[ "AC Circuits\nJupyter Notebook developed by Gustavo S.S.\nSinusoids and Phasors\nAlternating Current (AC) analysis is the analysis of circuits in which the voltage or current source varies with time. Circuits driven by sinusoidal voltage or current sources are called AC circuits.\nA sinusoid is a signal that has the form of the sine or cosine function.\nConsider the sinusoidal voltage:\n\\begin{align}\n{\\Large v(t) = V_m \\sin(\\omega t)}\n\\end{align}\nwhere\nVm = amplitude of the sinusoid (V)\nω = angular frequency (rad/s)\nωt = argument of the sinusoid (rad)\n\\begin{align}\n{\\Large T = \\frac{2 \\pi}{\\omega}}\n\\end{align}\nThe sinusoid is shown in Figure 9.1a as a function of its argument and in Figure 9.1b as a function of time.\n\nThe fact that v(t) repeats itself every T seconds is shown by replacing t with t + T. Therefore:\n\\begin{align}\n{\\Large v(t + T) = v(t)}\n\\end{align}\nA periodic function is one that satisfies f(t) = f(t + nT), for all t and for all integers n.\nAs mentioned earlier, the period T of the periodic function is the time of one complete cycle, or the number of seconds per cycle. The reciprocal of this quantity is the number of cycles per second, known as the cyclic frequency f of the sinusoid. Consequently,\n\\begin{align}\n{\\Large f = \\frac{1}{T}}\\\\\n{\\Large \\omega = 2 \\pi f}\n\\end{align}\nA more general expression for the sinusoid is:\n\\begin{align}\n{\\Large v(t) = V_m \\sin(\\omega t + \\phi)}\n\\end{align}\nwhere ϕ is the phase.\nConsider two sinusoids:\n\\begin{align}\n{\\Large v_1(t) = V_m \\sin(\\omega t)}\\\\\n{\\Large v_2(t) = V_m \\sin(\\omega t + \\phi)}\n\\end{align}\n\nA sinusoid can be expressed in terms of either sine or cosine. This can be accomplished using the following trigonometric identities:\n\\begin{align}\n{\\Large \\sin(A \\pm B) = \\sin(A)\\cos(B) \\pm \\sin(B)\\cos(A)}\\\\\n{\\Large \\cos(A \\pm B) = \\cos(A)\\cos(B) \\mp \\sin(A)\\sin(B)}\n\\end{align}\nWith these identities, it is easy to show that:\n\\begin{align}\n{\\Large \\sin(\\omega t \\pm 180º) = -\\sin(\\omega t)}\\\\\n{\\Large \\cos(\\omega t \\pm 180º) = -\\cos(\\omega t)}\\\\\n{\\Large \\sin(\\omega t \\pm 90º) = \\pm \\cos(\\omega t)}\\\\\n{\\Large \\cos(\\omega t \\pm 90º) = \\mp \\sin(\\omega t)}\n\\end{align}\nUsing these relations, we can transform a sinusoid from sine form to cosine form, or vice versa.\nThe magnitude and argument of the resulting sinusoid in cosine form are obtained directly from the triangle. Therefore:\n\\begin{align}\n{\\Large A \\cos(\\omega t) + B \\sin(\\omega t) = r \\cos(\\omega t - \\theta)}\\\\\n{\\Large r = \\sqrt{A^2 + B^2}}\\\\\n{\\Large \\theta = \\arctan \\frac{B}{A}}\n\\end{align}\nExample 9.1\nDetermine the amplitude, phase, period, and frequency of the sinusoid\nv(t) = 12 cos(50t + 10º)", "print(\"Exemplo 9.1\")\n\nimport numpy as np\n\nVm = 12\nphi = 10\nomega = 50\nT = 2*np.pi/omega\nf = 1/T\n\nprint(\"Amplitude:\",Vm,\"V\")\nprint(\"Fase:\",phi,\"º\")\nprint(\"Frequência angular:\",omega,\"rad/s\")\nprint(\"Período:\",T,\"s\")\nprint(\"Frequência:\",f,\"Hz\")", "Practice Problem 9.1\nGiven the sinusoid 30 sin(4πt – 75°), calculate its amplitude, phase, angular frequency, period, and frequency.", "print(\"Problema Prático 9.1\")\n\nVm = 30\n#30sin(4*pi*t - 75º) = 30cos(4*pi*t + 165º)\nphi = -75\nomega = 4*np.pi\nT = 2*np.pi/omega\nf = 1/T\n\nprint(\"Amplitude:\",Vm,\"V\")\nprint(\"Fase:\",phi,\"º\")\nprint(\"Frequência angular:\",omega,\"rad/s\")\nprint(\"Período:\",T,\"s\")\nprint(\"Frequência:\",f,\"Hz\")", "Example 9.2\nCalculate the phase angle between v1 = –10 cos(ωt + 50°) and v2 = 12 sin(ωt – 10°). State which sinusoid is leading.", "print(\"Exemplo 9.2\")\n\n#v1 = -10cos(wt + 50º) = 10cos(wt + 50 - 180) = 10cos(wt - 130º)\n#v2 = 12sen(wt - 10º) = 12cos(wt - 100º)\n#-130 - (-100) = -30\n\nphi = 30\n\nprint(\"v2 esta avancada em {}º em relação a v1\".format(phi))", "Practice Problem 9.2\nDetermine the phase angle between \ni1(t) = -4sin(377t + 55º) \nand \ni2(t) = 5cos(377t - 65º)", "print(\"Problema Prático 9.2\")\n\n#i1 = -4sen(377t + 55) = 4sen(377t + 55 + 180) = 4sen(377t + 235) = 4cos(377t + 145)\n#i2 = 5cos(377t - 65)\nphi = 145 - (-65)\n\nprint(\"i1 esta avancada em {}º em relação a i2\".format(phi))", "Phasors\nA phasor is a complex number that represents the amplitude and phase of a sinusoid.\nPhasors provide a simple means of analyzing linear circuits excited by sinusoidal sources; finding the solution to such circuits would be impractical otherwise. The notion of solving AC circuits using phasors was first introduced by Charles Steinmetz in 1893.\nA complex number z can be written in rectangular form as\n\\begin{align}\n{\\Large z = x + jy}\\\\\n{\\Large j = \\sqrt{-1}}\n\\end{align}\nThe complex number z can also be written in polar or exponential form as:\n\\begin{align}\n{\\Large z = r \\angle \\phi = re^{j \\phi}}\\\\\n{\\Large r = \\sqrt{x^2 + y^2}}\\\\\n{\\Large \\phi = \\arctan(\\frac{y}{x})}\n\\end{align}\n\nOn the other hand, if we know r and ϕ, we can obtain x and y as:\n\\begin{align}\n{\\Large x = r\\cos(\\phi)}\\\\\n{\\Large y = r\\sin(\\phi)}\n\\end{align}\nOperations with Complex Numbers\nThe following operations are important.\nAddition\n\\begin{align}\n{\\Large z_1 + z_2 = (x_1 + x_2) + j(y_1 + y_2)}\n\\end{align}\nSubtraction\n\\begin{align}\n{\\Large z_1 - z_2 = (x_1 - x_2) + j(y_1 - y_2)}\n\\end{align}\nMultiplication\n\\begin{align}\n{\\Large z_1 z_2 = r_1 r_2 \\angle (\\phi_1 + \\phi_2)}\n\\end{align}\nDivision\n\\begin{align}\n{\\Large \\frac{z_1}{z_2} = \\frac{r_1}{r_2} \\angle (\\phi_1 - \\phi_2)}\n\\end{align}\nReciprocal\n\\begin{align}\n{\\Large \\frac{1}{z} = \\frac{1}{r} \\angle - \\phi}\n\\end{align}\nSquare Root\n\\begin{align}\n{\\Large \\sqrt{z} = \\sqrt{r} \\angle \\phi / 2}\n\\end{align}\nComplex Conjugate\n\\begin{align}\n{\\Large z* = x - jy = r \\angle -\\phi = re^{-j \\phi}}\n\\end{align}\nThe idea of phasor representation is based on Euler's identity. In general:\n\\begin{align}\n{\\Large e^{\\pm j \\phi} = \\cos(\\phi) \\pm j\\sin(\\phi)}\n\\end{align}\nThus, we can write:\n\\begin{align}\n{\\Large \\cos(\\phi) = Re(e^{j \\phi})}\\\\\n{\\Large \\sin(\\phi) = Im(e^{j \\phi})}\n\\end{align}\nGiven the sinusoid v(t) = Vm cos(ωt + ϕ), we can represent it as:\n\\begin{align}\n{\\Large v(t) = Re(V e^{j \\omega t})}\\\\\n{\\Large V = V_m e^{j \\phi} = V_m \\angle \\phi}\n\\end{align}\nV is, therefore, the phasor representation of the sinusoid v(t).\n\nDerivative and Integral of Phasors\nThe derivative of v(t) transforms to the phasor domain as jωV:\n\\begin{align}\n{\\Large \\frac{dv}{dt} = - \\omega V_m \\sin(\\omega t + \\phi) = \\omega V_m \\cos(\\omega t + \\phi + 90º)}\\\\\n{\\Large = Re(\\omega V_m e^{j \\phi} e^{j 90º} e^{j \\omega t}) = Re(j \\omega V e^{j \\omega t})}\n\\end{align}\nThus:\n\\begin{align}\n{\\Large \\frac{dv}{dt} = j\\omega V}\\\\\n{\\Large \\int v dt = \\frac{V}{j\\omega}}\n\\end{align}\nThe differences between v(t) and V should be emphasized:\n\n\nv(t) is the instantaneous or time-domain representation, while V is the frequency- or phasor-domain representation.\n\n\nv(t) is time dependent, while V is not.\n\n\nv(t) is always real with no complex term, while V is generally complex.\n\n\nExample 9.4\nTransform the following sinusoids into phasors:\n(a) i = 6cos(50t - 40º) A\n(b) v = -4sin(30t + 50º) V", "print(\"Exemplo 9.4\")\n\n#6cos(50t - 40)\n#r = 6\n#phi = -40\n\n#-4sen(30t + 50) = 4sen(30t + 50 + 180) = 4cos(30t + 140)\n#r = 4\n#phi = 140\n\nprint(\"I: 6[-40º]\")\nprint(\"V: 4[140º]\")",
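The conversions in Example 9.4 can be checked with Python's built-in complex numbers; the helper names below are mine, and the bracket notation in the comments mirrors the notebook's `r[phiº]` convention:

```python
import cmath
import math

def to_phasor(amplitude, phase_deg):
    # magnitude and angle (degrees) -> rectangular complex number
    return cmath.rect(amplitude, math.radians(phase_deg))

def from_phasor(z):
    # rectangular complex number -> (magnitude, angle in degrees)
    return abs(z), math.degrees(cmath.phase(z))

# (a) i = 6cos(50t - 40º)  ->  I = 6[-40º]
I = to_phasor(6, -40)
print(from_phasor(I))

# (b) v = -4sin(30t + 50º) = 4cos(30t + 140º)  ->  V = 4[140º]
V = to_phasor(4, 140)
print(from_phasor(V))
```

Sums of equal-frequency sinusoids, as in the examples that follow, then reduce to complex addition of their phasors.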
"Problema Prático 9.4\nExpresse as senoides seguintes na forma de fasores:\n(a) v = 7cos(2t + 40º) V\n(b) i = -4sen(10t + 10º) A", "print(\"Problema Prático 9.4\")\n\n#7cos(2t + 40)\n#r = 7\n#phi = 40\n\n#-4sen(10t + 10) = 4sen(10t + 10 + 180) = 4cos(10t + 100)\n#r = 4\n#phi = 100\n\nprint(\"V: 7[40º]\")\nprint(\"I: 4[100º]\")", "Exemplo 9.5\nDetermine as senoides representadas pelos fasores seguintes:\n(a) I = -3 + j4\n(b) V = j8e^(-j20º)", "print(\"Exemplo 9.5\")\n\nimport numpy as np\n\nr = np.sqrt((-3)**2 + 4**2)\nphi = np.arctan(4/(-3))*180/np.pi + 180\n\nprint(\"I: {}[{}º]\".format(r,phi))\n\n#j = 1[90º]\n#V = 8e^(-j20) = 8[-20º]\n#jV = 1*8 [90 -20] = 8[70º]\n\nprint(\"V: 8[70º]\")\n", "Problema Prático 9.5\nDetermine as senoides correspondentes aos fasores seguintes:\n(a) V = -25[40º]\n(b) I = j(12 - j5)", "print(\"Problema Prático 9.5\")\n\nprint(\"v(t) = 25cos(wt + 220)\")\n\n#j(12 - j5) = 5 + 12j\n\nr = np.sqrt(5**2 + 12**2)\nphi = np.arctan(12/5)*180/np.pi\n\nprint(\"I: {}[{}º]\".format(r,phi))", "Exemplo 9.6\nDados i1(t) = 4 cos(wt + 30°) A e i2(t) = 5 sen(wt + 20°) A, determine sua soma.", "print(\"Exemplo 9.6\")\n\n#4cos(wt + 30) = 4[30]\n#5sen(wt + 20) = 5cos(wt + 70) = 5[-110]\n\nx = 4*np.cos(30*np.pi/180) + 5*np.cos(-110*np.pi/180)\ny = 4*np.sin(30*np.pi/180) + 5*np.sin(-110*np.pi/180)\n\nprint(\"i1 + i2: {} + j{}\".format(x,y))\n\nr = np.sqrt(x**2 + y**2)\nphi = np.arctan(y/x)*180/np.pi\n\nprint(\"I: {}[{}]\".format(r,phi))\nprint(\"i(t): {}cos(wt + {})\".format(r,phi))", "Problema Prático 9.6\nSe v1(t) = –10 sen(vt – 30°) V e v2(t) = 20 cos(vt + 45°), determine v = v1 + v2.", "print(\"Problema Prático 9.6\")\n\n#-10sen(wt - 30) = 10sen(wt + 150) = 10sen(wt + 60) = 10[60]\n#20cos(wt + 45) = 20[45]\n\nx = 10*np.cos(60*np.pi/180) + 20*np.cos(45*np.pi/180)\ny = 10*np.sin(60*np.pi/180) + 20*np.sin(45*np.pi/180)\n\nprint(\"v1 + v2: {} + j{}\".format(x,y))\n\nr = np.sqrt(x**2 + y**2)\nphi = np.arctan(y/x)*180/np.pi\n\nprint(\"V: 
{}[{}]\".format(r,phi))\nprint(\"v(t): {}cos(wt + {})\".format(r,phi))", "Exemplo 9.7\nUsando o método de fasores, determine a corrente i(t) em um circuito descrito pela\nequação diferencial\n\\begin{align}\n{\\Large 4i + 8 \\int idt - 3 \\frac{di}{dt} = 50cos(2t + 75)}\n\\end{align}", "print(\"Exemplo 9.7\")\n\n#4I + 8I/jw - 3jwI = 50[75]\n#4I -4jI - 6jI = 50[75]\n#I = 50[75] / (4 - j10)\n\nr = np.sqrt(4**2 + (-10)**2)\nphi = np.arctan((-10)/4)*180/np.pi\n\nR = 50/r\nPhi = 75 - phi\n\nprint(\"Fasor I: {}[{}]\".format(R,Phi))\nprint(\"i(t) = {}cos(wt + {}º)\".format(R,Phi))", "Problema Prático 9.7\nDetermine a tensão v(t) em um circuito descrito pela equação integro-diferencial a seguir:\n\\begin{align}\n{\\Large 2\\frac{dv}{dt} + 5v + 10 \\int vdt = 50cos(5t - 30º)}\n\\end{align}", "print(\"Problema Prático 9.7\")\n\n#2Vjw + 5V + 10V/jw = 50[-30]\n#5V -2jV + 10jV = 50[-30]\n#V = 50[-30] / (5 + j8)\n\nr = np.sqrt(5**2 + 8**2)\nphi = np.arctan(8/5)*180/np.pi\n\nR = 50/r\nPhi = -30 - phi\n\nprint(\"Fasor V: {}[{}]\".format(R,Phi))\nprint(\"v(t) = {}cos(wt + {}º)\".format(R,Phi))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
diegocavalca/Studies
phd-thesis/REDD - Coleta e Preparação dos Dados.ipynb
cc0-1.0
[ "REDD - Coleta e Preparação dos Dados (Low Frequency)\nNeste documento será realizada a tabulação dos dados para cliassificação de cargas da base REDD.\nA janela considerada será de 5 minutos (3000 segundos).\nProcedimentos experimentais baseados em http://www.sbrt.org.br/sbrt2017/anais/1570359866.pdf", "import warnings\n#warnings.filterwarnings(\"warning\")\nimport traceback\n\nimport time\nimport tensorflow as tf\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.style.use('ggplot')\n\nfrom matplotlib import rcParams\nrcParams['figure.figsize'] = (13, 10)\n\nimport pandas as pd\n\nfrom tqdm import tqdm, tqdm_notebook\n#tf.debugging.set_log_device_placement(True)\n#print(\"GPU Available: \", tf.test.is_gpu_available())", "Carregando os dados da base REDD via NILMTK", "from nilmtk import DataSet\nfrom nilmtk.utils import print_dict\n\nredd = DataSet('datasets/REDD/low_freq.h5')\n\n# Configurações da amostragem\nfrom datetime import datetime, timedelta\n\n# Informações do CS446 Project : Electric Load Identification using Machine Learning (REDD)\nbuilding_idx = 3\nset_sampling_rate = 3 \nstart = datetime(2011, 4, 16, 5, 11, 27)\nend = datetime(2011, 5, 31, 0, 19, 54)\ntime_interval_minutes = 5 # Split de amostra\n\n\n# ... 
6 seconds - Imaging Time Series (UK-DALE)", "Pré-processamento dos dados\n\nDelimitar (intervalo de tempo global e janelas de medições) e exportar os dados.", "building_idx = 1\n\nbuilding = redd.buildings[building_idx]\n\n# Available devices in building\nbuilding.elec\n\n# Defining the fixed time block measurement\nredd.set_window(start='2011-04-18', end='2011-04-20')\n\n# Showing device consumption (inside time block)\nnum_apps = 20\nfig, axes = plt.subplots((num_apps+1)//2,2, figsize=(24, num_apps*2) )\nfor i in range(num_apps):\n e = redd.buildings[1].elec[i+1]\n axes.flat[i].plot(e.power_series_all_data(sample_period=3), alpha = 0.6)\n axes.flat[i].set_title(e.label(), fontsize = '15')\nplt.suptitle('', fontsize = '30')\nfig.tight_layout()\nfig.subplots_adjust(top=0.95)", "Chunking Energy Consumption in time-box (1 box = 5 minutes)", "# Intervalos de geracao dos dados\ndef datetime_range(start, end, delta):\n '''\n Generating a list of datetime intervals (chunks of energy consumption)\n from `start` to `end` at each `delta` units.\n '''\n current = start\n while current < end:\n yield current\n current += delta\n\n# List of datatimes \ndts = [dt.strftime('%Y-%m-%d %H:%M:%S') \n for dt in \n datetime_range(start, \n end, \n timedelta(minutes=time_interval_minutes))\n ]\n\n# Checking chunks list...\nfor idx in range(1, len(dts)):\n print('de', dts[idx-1], ' a ', dts[idx])\n\npower = building.elec[1].power_series_all_data(sample_period=set_sampling_rate)\nmains1 = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\nprint('Mains 1 orginal shape: ', mains1.shape)\npower = building.elec[2].power_series_all_data(sample_period=set_sampling_rate)\nmains2 = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\nprint('Mains 2 orginal shape: ', mains2.shape)\n\npower = building.elec[5].power_series_all_data(sample_period=set_sampling_rate)\nappliance = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\nprint('Appliance shape: 
', appliance.shape)\n\n\ntStart\n\nlen(appliance.index)\n\n# Conjunto de dataframes (chunks de 5 minutos)\ndataframes = []\n\n# Iterando sobre blocos de tempos (5 minutos)\nfor idx in tqdm_notebook(range(1, len(dts))):\n \n tStart = dts[idx-1]\n tEnd = dts[idx]\n\n # Intervalo de treino do modelo\n redd.set_window(start=tStart, end=tEnd)\n try:\n print('- Chunk #',idx,': from ', tStart, 'to', tEnd)\n \n dfs = {}\n _index = []\n for m in building.elec.all_meters():\n\n label = str(m.label()).lower().replace(' ','_') + '_' + str(m.instance())\n power = m.power_series_all_data(sample_period=set_sampling_rate)\n \n dfs[label] = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\n \n if len(_index) < len(power.index) and ('site_meter' not in label):\n _index = power.index\n\n for meter_label in dfs:\n #if 'site_meter' in meter_label:\n # dfs[meter_label] = dfs[meter_label].reindex(index=_index)\n dfs[meter_label] = dfs[meter_label].reindex(index=_index)\n dfs[meter_label] = dfs[meter_label]['Power'].values\n\n df = pd.DataFrame(dfs, index = _index)\n \n# power = building.elec[1].power_series_all_data(sample_period=set_sampling_rate)\n# mains1 = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\n# print('Mains 1 orginal shape: ', mains1.shape)\n# power = building.elec[2].power_series_all_data(sample_period=set_sampling_rate)\n# mains2 = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\n# print('Mains 2 orginal shape: ', mains2.shape)\n\n# power = building.elec[5].power_series_all_data(sample_period=set_sampling_rate)\n# appliance = pd.DataFrame(data = {\"Power\": power.values }, index=power.index)\n# print('Appliance shape: ', appliance.shape)\n\n# # Ajustar timeframes (eletronicos medidos em 3s, contra 1s da rede)\n# mains1 = mains1.reindex(index=appliance.index)\n# print('---\\nMains 1 new shape: ', mains1.shape)\n# mains2 = mains2.reindex(index=appliance.index)\n# print('Mains 2 new shape: ', mains2.shape)\n\n# # 
Dataframe da modelagem\n# df = pd.DataFrame({\n# 'Mains1': mains1[\"Power\"].values,\n# 'Mains2': mains2[\"Power\"].values,\n# 'Appliance': appliance[\"Power\"].values\n# }, index = appliance.index)\n print('\\n---\\nDataframe shape: ', df.shape,'\\n')\n\n dataframes.append(df)\n except Exception as e:\n print(' ----- Error: ', str(e))#str(traceback.format_exc()))\n #print(' ----- Não foi possível extrair dados do intervalo!')\n\nprint('Total Chunks:', len(dataframes))\n\n# Check if the chunk has the valid length\nvalid_chunk_length = (time_interval_minutes*60)/set_sampling_rate\nvalid_chunks = [d for d in dataframes if d.shape[0] == valid_chunk_length]\n\nprint('Valid Chunks:', len(valid_chunks) )\n\n# Plotting 5 chunks\nfor df in tqdm(dataframes):\n #df = valid_chunks[i]\n# if sum(df['Appliance'].values) == 0:\n# fig = plt.figure(figsize=(10,8))\n# plt.plot(df['Mains1'].values)\n# plt.plot(df['Mains2'].values)\n# plt.plot(df['Appliance'].values)\n# plt.gca().legend(('Mains1','Mains2', 'Appliance'))\n fig = plt.figure(figsize=(20,10))\n for column in df.columns:\n plt.plot(df[column].values)\n plt.gca().legend(df.columns) \n break", "Feature Extraction / Label Building", "final_df = pd.DataFrame()\nrows = []\nclasses = [c for c in dataframes[0].columns if 'site_meter' not in c]\nfor df in dataframes:\n attributes = {\n 'mean_1': df['site_meter_1'].mean(),\n 'std_1': df['site_meter_1'].std(),\n 'max_1': df['site_meter_1'].max(),\n 'min_1': df['site_meter_1'].min(),\n 'sum_1': df['site_meter_1'].sum(),\n 'mean_2': df['site_meter_2'].mean(),\n 'std_2': df['site_meter_2'].std(),\n 'max_2': df['site_meter_2'].max(),\n 'min_2': df['site_meter_2'].min(),\n 'sum_2': df['site_meter_2'].sum()\n }\n labels = {}\n for c in classes:\n labels[c] = 1 if df[c].sum() > 0 else 0\n \n final_df = final_df.append({**attributes, **labels}, ignore_index=True)\n\nfinal_df = final_df[ list(attributes.keys()) + list(labels.keys()) 
]\n\nfinal_df.head(10)\n\nfinal_df.describe()\n\nfinal_df.to_csv('df_building_1_statistics_features.csv')\n\n# TODO:\n#- Validar metodologia\n#- Validar chunks gerados (noralizar erros)\n#- Rotular base (labels binários por dispositivo)", "Modelagem", "final_df[final_df.columns[:10]].head()\n\nfrom skmultilearn.adapt import MLkNN\nfrom sklearn import metrics\nfrom sklearn.model_selection import train_test_split, cross_val_score, GridSearchCV\n\nscores = cross_val_score(\n MLkNN(k=3),\n final_df[final_df.columns[:5]].values,\n final_df[final_df.columns[10:]].values,\n scoring = 'f1_micro',\n cv=5,\n n_jobs = 8\n)\n\nscores.mean()\n\nfrom sklearn.metrics import make_scorer, hamming_loss\nhamming_score = make_scorer(hamming_loss)\n\nscores = cross_val_score(\n MLkNN(k=3),\n final_df[final_df.columns[:5]].values,\n final_df[final_df.columns[10:]].values,\n scoring = hamming_score,\n cv=5,\n n_jobs = 8\n)\n\nscores.mean()\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn import metrics\nfrom sklearn.model_selection import train_test_split, cross_val_score\n\nscores = cross_val_score(\n RandomForestClassifier(n_estimators=1000),\n final_df[final_df.columns[:5]].values,\n final_df[final_df.columns[10:]].values,\n scoring = hamming_score,\n cv=5,\n n_jobs = 8\n)\n\nscores.mean()", "Outras análises", "#dates = [str(dt).split(' ')[0] for dt in df.index]\ndates = [str(time)[:10] for time in df.index.values]\ndates = sorted(list(set(dates)))\nprint('Os dados da Residência modelada contém medições de {1} dia(s) (de {2} a {3}).'.format(i,len(dates),dates[0], dates[-1]))\n\n# Split de treino, teste e validação\ndf1_train = df.loc[:dates[10]]\ndf1_val = df.loc[dates[11]:dates[16]]\ndf1_test = df.loc[dates[17]:]\nprint('df_train.shape: ', df1_train.shape)\nprint('df_val.shape: ', df1_val.shape)\nprint('df_test.shape: ', df1_test.shape)\n\n# Usando a corrente 1 e 2 (variaveis independetes) para a previsão do refrigerador (variavel dependente)\nX_train = 
df1_train[['Mains1','Mains2']].values\ny_train = df1_train['Appliance'].values\n\nX_test = df1_test[['Mains1','Mains2']].values\ny_test = df1_test['Appliance'].values\n\nX_val = df1_val[['Mains1','Mains2']].values\ny_val = df1_val['Appliance'].values\n\nprint(\n 'Train: ', X_train.shape, y_train.shape, '\\n',\n 'Test: ', X_val.shape, y_val.shape, '\\n',\n 'Validation: ', X_test.shape, y_test.shape\n)", "MLP", "# Metrcas de avaliação da regressão\ndef mse_loss(y_predict, y):\n return np.mean(np.square(y_predict - y)) \ndef mae_loss(y_predict, y):\n return np.mean(np.abs(y_predict - y)) \n\nfrom tensorflow.keras.layers import Dense, Activation, Dropout, LSTM, Embedding\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.callbacks import ModelCheckpoint\nfrom tensorflow.keras.models import load_model\nfrom tensorflow.keras.optimizers import Adam\nfrom tensorflow.keras.regularizers import l2\nfrom tensorflow.keras.utils import plot_model\n\ndef build_fc_model(layers):\n fc_model = Sequential()\n for i in range(len(layers)-1):\n #fc_model.add( Dense(input_dim=layers[i], output_dim= layers[i+1]) )#, W_regularizer=l2(0.1)) )\n fc_model.add( Dense(input_shape=(layers[i],), units = layers[i+1]) )#, W_regularizer=l2(0.1)) )\n fc_model.add( Dropout(0.5) )\n if i < (len(layers) - 2):\n fc_model.add( Activation('relu') )\n fc_model.build()\n fc_model.summary()\n plot_model(fc_model)\n return fc_model\n\nfc_model_1 = build_fc_model([2, 256, 512, 1024, 1])\n\nadam = Adam(lr = 1e-5)\nfc_model_1.compile(loss='mean_squared_error', optimizer=adam)\nstart = time.time()\nmodel_path = \"./resources/mlp_fridge_h1.hdf5\"\ncheckpointer = ModelCheckpoint(filepath=model_path, verbose=0, save_best_only=True)\nhist_fc_1 = fc_model_1.fit( X_train, y_train,\n batch_size=512, verbose=1, nb_epoch=200,\n validation_split=0.33, callbacks=[checkpointer])\nprint('Tempo total de treinamento do modelo (s):', round(time.time() - start, 0))\n\nimport numpy as np\nfc_model = 
load_model(model_path)\npred_fc = fc_model.predict(X_test).reshape(-1)\nmse_loss_fc = mse_loss(pred_fc, y_test)\nmae_loss_fc = mae_loss(pred_fc, y_test)\nprint('MSE no conjunto de teste: ', mse_loss_fc)\nprint('MAE no conjunto de teste:', mae_loss_fc)\n\ntrain_loss = hist_fc_1.history['loss']\nval_loss = hist_fc_1.history['val_loss']\ndef plot_losses(train_loss, val_loss):\n plt.rcParams[\"figure.figsize\"] = [24,10]\n plt.title('MSE dos conjuntos de treino e teste - Resid. 1')\n plt.plot( range(len(train_loss)), train_loss, color = 'b', alpha = 0.6, label='loss (treino)' )\n plt.plot( range(len( val_loss )), val_loss, color = 'r', alpha = 0.6, label='loss (validação)' )\n plt.xlabel( 'época' )\n plt.ylabel( 'loss' )\n plt.legend()\n\nplot_losses(train_loss, val_loss)\n\n# Plotando os cnsumos REAL e o PREVISTO do refrigerador nos 6 dias dos dados de teste\ndef plot_each_app(df, dates, predict, y_test, title, look_back = 0):\n num_date = len(dates)\n fig, axes = plt.subplots(num_date,1,figsize=(24, num_date*5) )\n plt.suptitle(title, fontsize = '25')\n fig.tight_layout()\n fig.subplots_adjust(top=0.95)\n for i in range(num_date):\n if i == 0: l = 0\n ind = df.ix[dates[i]].index[look_back:]\n axes.flat[i].plot(ind, y_test[l:l+len(ind)], color = 'blue', alpha = 0.6, label = 'REAL')\n axes.flat[i].plot(ind, predict[l:l+len(ind)], color = 'red', alpha = 0.6, label = 'PREVISTO')\n axes.flat[i].legend()\n l = len(ind)\nplot_each_app(df1_test, dates[17:], pred_fc, y_test, \n 'Rede Neural FC: Real e Previsão nos 6 dias do Conjunto de Teste da Resid. 
1', look_back = 50)\n\n# Testar FC mais complexa", "LSTM", "def build_lstm_model(layers):\n# #fc_model.add( Dense(input_dim=layers[i], output_dim= layers[i+1]) )#, W_regularizer=l2(0.1)) )\n# fc_model.add( Dense(input_shape=(layers[i],), units = layers[i+1]) )#, W_regularizer=l2(0.1)) )\n# fc_model.add( Dropout(0.5) )\n# if i < (len(layers) - 2):\n# fc_model.add( Activation('relu') )\n model = Sequential()\n for i in range(len(layers) - 2):\n if i == 0:\n model.add(Embedding(input_dim=layers[i], output_dim=layers[i+1]))\n else:\n model.add(LSTM(\n input_shape=(layers[i],),\n units=layers[i+1],\n return_sequences = True if i < len(layers) - 3 else False ))\n model.add(Dropout(0.3))\n \n model.add(Dense(layers[-1]))\n \n model.build()\n model.summary()\n plot_model(model)\n return model\n\nmodel = build_lstm_model([2,64,128,256, 1])\n\n# Utilizando 50 registros de consumos para retreinar o modelo, e prever o consumo de energia de cada aparelho\ndef process_data(df, dates, x_features, y_features, look_back = 50):\n i = 0\n for date in dates:\n data = df.loc[date]\n len_data = data.shape[0]\n x = np.array([data[x_features].values[i:i+look_back] \n for i in range(len_data - look_back) ]).reshape(-1,look_back, 2)\n y = data[y_features].values[look_back:,:]\n if i == 0:\n X = x\n Y = y\n else:\n X = np.append(X, x, axis=0)\n Y = np.append(Y, y, axis=0)\n i += 1\n return X,Y\n\nstart = time.time()\nX_train, y_train = process_data(df, dates[:17], ['Mains1','Mains2'], df.columns.values[2:])\nX_test, y_test = process_data(df, dates[17:], ['Mains1','Mains2'], df.columns.values[2:])\nprint('Tempo de execução total (s): ', time.time() - start)\nprint(X_train.shape, y_train.shape, X_test.shape, y_test.shape)\n\nstart = time.time()\nadam = Adam(lr = 5e-5)\nlstm_model_path = \"./resources/lstm_model.hdf5\"\nmodel.compile(loss='mean_squared_error', optimizer=adam)\ncheckpointer = ModelCheckpoint(filepath=lstm_model_path, verbose=0, save_best_only=True)\nhist_lstm = model.fit(\n 
X_train,\n y_train[:,2],\n batch_size=512,\n verbose=1,\n nb_epoch=200,\n validation_split=0.3,\n callbacks=[checkpointer])\nprint('Tempo de treino (s): ', time.time() - start)\n\ny_train", "Referências\n\nhttp://nilmtk.github.io/nilmtk" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/training-data-analyst
courses/machine_learning/deepdive2/launching_into_ml/solutions/first_model.ipynb
apache-2.0
[ "First BigQuery ML models for Taxifare Prediction\nIn this notebook, we will use BigQuery ML to build our first models for taxifare prediction.BigQuery ML provides a fast way to build ML models on large structured and semi-structured datasets.\nLearning Objectives\n\nChoose the correct BigQuery ML model type and specify options\nEvaluate the performance of your ML model\nImprove model performance through data quality cleanup\nCreate a Deep Neural Network (DNN) using SQL\n\nEach learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook. \nWe'll start by creating a dataset to hold all the models we create in BigQuery\nImport libraries", "!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst\n\n!pip install --user google-cloud-bigquery==1.25.0", "Restart the kernel before proceeding further (On the Notebook menu - Kernel - Restart Kernel).", "import os", "Set environment variables", "%%bash\nexport PROJECT=$(gcloud config list project --format \"value(core.project)\")\necho \"Your current GCP Project Name is: \"$PROJECT\n\nPROJECT = \"your-gcp-project-here\" # REPLACE WITH YOUR PROJECT NAME\nREGION = \"us-central1\" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# Do not change these\nos.environ[\"BUCKET\"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID\nos.environ[\"REGION\"] = REGION\n\nif PROJECT == \"your-gcp-project-here\":\n print(\"Don't forget to update your PROJECT name! Currently:\", PROJECT)", "Create a BigQuery Dataset and Google Cloud Storage Bucket\nA BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called serverlessml if we have not already done so in an earlier lab. 
We'll do the same for a GCS bucket for our project too.", "%%bash\n\n## Create a BigQuery dataset for serverlessml if it doesn't exist\ndatasetexists=$(bq ls -d | grep -w serverlessml)\n\nif [ -n \"$datasetexists\" ]; then\n echo -e \"BigQuery dataset already exists, let's not recreate it.\"\nelse\n echo \"Creating BigQuery dataset titled: serverlessml\"\n\n bq --location=US mk --dataset \\\n --description 'Taxi Fare' \\\n $PROJECT:serverlessml\n echo \"\\nHere are your current datasets:\"\n bq ls\nfi \n\n## Create GCS bucket if it doesn't exist already...\nexists=$(gsutil ls -d | grep -w gs://${BUCKET}/)\n\nif [ -n \"$exists\" ]; then\n echo -e \"Bucket exists, let's not recreate it.\"\nelse\n echo \"Creating a new GCS bucket.\"\n gsutil mb -l ${REGION} gs://${BUCKET}\n echo \"\\nHere are your current buckets:\"\n gsutil ls\nfi", "Model 1: Raw data\nLet's build a model using just the raw data. It's not going to be very good, but sometimes it is good to actually experience this.\nThe model will take a minute or so to train. When it comes to ML, this is blazing fast.", "%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model1_rawdata\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1", "Once the training is done, visit the BigQuery Cloud Console and look at the model that has been trained. Then, come back to this notebook.\nNote that BigQuery automatically split the data we gave it, and trained on only a part of the data and used the rest for evaluation. 
We can look at eval statistics on that held-out data:", "%%bigquery\nSELECT * FROM ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "Let's report just the error we care about, the Root Mean Squared Error (RMSE)", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata)", "We told you it was not going to be good! Recall that our heuristic got 8.13, and our target is $6.\nNote that the error is going to depend on the dataset that we evaluate it on.\nWe can also evaluate the model on our own held-out benchmark/test dataset, but we shouldn't make a habit of this (we want to keep our benchmark dataset as the final evaluation, not make decisions using it all along the way. If we do that, our test dataset won't be truly independent).", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model1_rawdata, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Model 2: Apply data cleanup\nRecall that we did some data cleanup in the previous lab. 
Let's do those before training.\nThis is a dataset that we will need quite frequently in this notebook, so let's extract it first.", "%%bigquery\nCREATE OR REPLACE TABLE\n serverlessml.cleaned_training_data AS\n\nSELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count*1.0 AS passengers\nFROM\n `nyc-tlc.yellow.trips`\nWHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n\n%%bigquery\n-- LIMIT 0 is a free query; this allows us to check that the table exists.\nSELECT * FROM serverlessml.cleaned_training_data\nLIMIT 0\n\n%%bigquery\nCREATE OR REPLACE MODEL\n serverlessml.model2_cleanup\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='linear_reg') AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model2_cleanup)", "Model 3: More sophisticated models\nWhat if we try a more sophisticated model? 
Let's try Deep Neural Networks (DNNs) in BigQuery:\nDNN\nTo create a DNN, simply specify dnn_regressor for the model_type and add your hidden layers.", "%%bigquery\n-- This training takes on the order of 15 minutes.\nCREATE OR REPLACE MODEL\n serverlessml.model3b_dnn\n\nOPTIONS(input_label_cols=['fare_amount'],\n model_type='dnn_regressor', hidden_units=[32, 8]) AS\n\nSELECT\n *\nFROM\n serverlessml.cleaned_training_data\n\n%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse\nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn)", "Nice!\nEvaluate DNN on benchmark dataset\nLet's use the same validation dataset to evaluate -- remember that evaluation metrics depend on the dataset. You can not compare two models unless you have run them on the same withheld data.", "%%bigquery\nSELECT\n SQRT(mean_squared_error) AS rmse \nFROM\n ML.EVALUATE(MODEL serverlessml.model3b_dnn, (\n SELECT\n (tolls_amount + fare_amount) AS fare_amount,\n pickup_datetime,\n pickup_longitude AS pickuplon,\n pickup_latitude AS pickuplat,\n dropoff_longitude AS dropofflon,\n dropoff_latitude AS dropofflat,\n passenger_count * 1.0 AS passengers,\n 'unused' AS key\n FROM\n `nyc-tlc.yellow.trips`\n WHERE\n ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 10000)) = 2\n AND trip_distance > 0\n AND fare_amount >= 2.5\n AND pickup_longitude > -78\n AND pickup_longitude < -70\n AND dropoff_longitude > -78\n AND dropoff_longitude < -70\n AND pickup_latitude > 37\n AND pickup_latitude < 45\n AND dropoff_latitude > 37\n AND dropoff_latitude < 45\n AND passenger_count > 0\n ))", "Wow! Later in this sequence of notebooks, we will get to below $4, but this is quite good, for very little work.\nIn this notebook, we showed you how to use BigQuery ML to quickly build ML models. We will come back to BigQuery ML when we want to experiment with different types of feature engineering. 
The speed of BigQuery ML is very attractive for development.\nCopyright 2019 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amcdawes/QMlabs
Lab 5 - Two-particle systems.ipynb
mit
[ "Two-particle systems\nAn introduction to multi-particle spaces, starting with photon polarization states. This lab answers the question: How do we describe the state of two photons?", "import matplotlib.pyplot as plt\nfrom numpy import sqrt,pi,sin,cos,arange\nfrom qutip import *", "The polarization states (in the HV-basis):", "H = basis(2,0)\nV = basis(2,1)\nP45 = 1/sqrt(2)*(H+V)\nM45 = 1/sqrt(2)*(H-V)\nL = 1/sqrt(2)*(H+1j*V)\nR = 1/sqrt(2)*(H-1j*V)", "Define two-particle states using the tensor() function:\nMathematically, we are taking the tensor product of two vectors. That product is a larger vector with twice as many entries as the individual state vectors. As long as we take the tensor products in the right order (i.e. always talking about photon 1 and photon 2 in that order) we can also make operators that act on two-photon states). In order to keep a consistent naming scheme, we'll call the first photon the signal photon and the second photon the idler photon. The names aren't particularly important but they come from the process we use in the lab: Spontaneous Parametric Down Conversion \nFirst, look at a generic pair of vectors and their tensor product:", "A = Qobj([[1],[2]])\nB = Qobj([[3],[4]])\nprint(A)\nprint(B)\nprint(tensor(A,B))", "So we see that the tensor product has the following elements: 1*3 = 3, 1*4 = 4, 2*3 = 6, 2*4 = 8. Essentially, we distributed the multiplication of the first vector through the second vector. Using the technical terms of vector spaces, the tensor product exists in a larger Hilbert space (the number of dimensions is the product of the dimensions of the original states). See this with larger initial states: two 3-dim vectors have a tensor product in 9-dim space:", "C = Qobj([[1],[2],[3]])\nD = Qobj([[4],[5],[6]])\nprint(tensor(C,D))", "Now, back to the quantum mechanics. 
Form the four different combinations of two photons:", "HH = tensor(H,H)\nHV = tensor(H,V)\nVH = tensor(V,H)\nVV = tensor(V,V)\n\n# How do we represent HH? It is a vector with four elements.\nHH", "So we interpret the state $|HH\\rangle$ as the vector (1,0,0,0) in a four-dimensional space.\nRecall: The polarization measurement operator (for one photon):", "Phv = H*H.dag() - V*V.dag()\nPhv", "Also, the identity is defined as qeye(n) for n dimensions in qutip:", "qeye(2) # 2-dimensional identity", "The two-photon operator, measuring the signal photon, is formed with the tensor() function. It is the tensor product of the projection operator Phv and the 2-dimensional identity operator qeye(2). The trick is putting them in the correct order. The first element in the tensor product acts on the signal photon, the second acts on the idler photon. So to act on only the signal photon, we create a tensor product with the projection operator first, and the identity second:", "Phv_s = tensor(Phv,qeye(2))\nPhv_s", "It can be hard to interpret these values visually but remember it was constructed by multiplying all the terms between two matrices with only diagonal elements. It makes sense that the result is also diagonal. Also, the sign of the diagonal depends on the state of the signal photon (the first one listed). Recall the states are in the order: HH, HV, VH, VV so the first two states have H signal photons and are therefore 1, and the second two states are V signal photons so -1 for those diagonals.\nNow construct the two-photon operator that measures the idler photon:", "Phv_i = tensor(qeye(2),Phv)\nPhv_i", "Next, construct a projection operator that projects the idler photon to H:", "Ph = H*H.dag()\nPh_i = tensor(qeye(2),Ph) # Ph for idler photon", "And the same but for the signal photon:", "Ph_s = tensor(Ph,qeye(2)) # Ph for signal photon", "You start to see the pattern. 
Build these up from our earlier operators, just apply them to the specific particle by including them in the tensor product at that position.\nNext we will do some example calculations.\nExample: find the probability of measuring a horizontal idler photon if the system is prepared in the state $|HH\\rangle$", "HH.dag()*Ph_i*HH", "Example: find the probability of measuring a horizontal idler photon in the state $|\\psi\\rangle = |H,+45\\rangle$", "psi = tensor(H,P45) # the prepared state\n\npsi.dag()*Ph_i*psi", "Example 8.2 prob. of measuring vertical signal and horizontal idler if $|\\psi\\rangle = |R,+45\\rangle$", "# First, form the prepared state:\npsi = tensor(R,P45)\n\n# Then create the projection operator for the state we are asking about:\nprojection = VH*VH.dag()\n\n# Finally, calculate the probability by computing the bra-ket:\npsi.dag()*projection*psi", "Entangled states:\nA very interesting system can be set up where there are paired photons being created with unknown but correlated polarization. In this case, we can say the state is in a combination of $|HH\\rangle$ and $|VV\\rangle$. If either two-photon state is allowed, then the normalized state is $$\\big|\\phi^+\\big\\rangle = \\frac{1}{\\sqrt{2}}\\big( \\big|HH\\big\\rangle + \\big|VV\\big\\rangle \\big)$$", "phiPlus = 1/sqrt(2)*(HH + VV)\n\nphiPlus.dag()*Ph_i*phiPlus # probability of measuring a horizontal idler photon:", "This is expected, because the HH state has 50% of the probability amplitude. 
Same for a horizontal signal photon:", "phiPlus.dag()*Ph_s*phiPlus # probability of measuring a horizontal signal photon", "Now, find $P(H_s|H_i)$ (Example 8.5)", "# Projection operator for H idler and H signal:\nphh = HH*HH.dag()\nphiPlus.dag()*phh*phiPlus\n\n# Projection operator for H idler\nPih = tensor(qeye(2),H*H.dag())\nphiPlus.dag()*Pih*phiPlus", "$P(H_s|H_i) = \\frac{P(H_s,H_i)}{P(H_i)}$", "0.5/0.5", "Guaranteed to measure a horizontal signal photon whenever a horizontal idler photon is measured. What about vertical? Find the conditional probability of measuring a vertical signal photon if the idler photon is found to be vertical:\nNow, measure a different basis (use the +45 states) to show that the photons are always found in the same polarization even when measured at a different angle:", "# Solution\n\n# Probability that signal is +45 and idler +45\nPp45p45 = tensor(P45,P45) * tensor(P45,P45).dag()\nphiPlus.dag()*Pp45p45*phiPlus\n\n# Solution\n\n# Probability that the idler is +45 regardless of the signal\nPp45i = tensor(qeye(2),P45) * tensor(qeye(2),P45).dag()\nphiPlus.dag()*Pp45i*phiPlus", "Finally, to really drive this odd point home, show that they are never found in the $\\big|+45,-45\\big\\rangle$ state:", "# Solution\n\n# Probability that they are in different 45 states:\nPp45m45 = tensor(P45,M45) * tensor(P45,M45).dag()\n\nphiPlus.dag()*Pp45m45*phiPlus", "Using these states solve problems 8.2, 8.3, 8.7, 8.8" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
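The two-photon probability calculations in the QuTiP notebook above can be cross-checked with plain NumPy. This is a minimal sketch, assuming real-valued state vectors; every name here (`proj`, the basis vectors, the fake use of `kron` in place of `tensor`) is a stand-in re-implementation, not code from the notebook:

```python
import numpy as np

# Single-photon basis states (H, V) and the +/-45 degree superpositions.
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
P45 = (H + V) / np.sqrt(2)
M45 = (H - V) / np.sqrt(2)

def proj(ket):
    """Projection operator |ket><ket| (equivalent of ket*ket.dag() in qutip)."""
    return np.outer(ket, ket)

# Two-photon states via the Kronecker product (numpy's tensor product).
HH = np.kron(H, H)
VV = np.kron(V, V)
phi_plus = (HH + VV) / np.sqrt(2)  # the entangled |phi+> Bell state

I2 = np.eye(2)

# Probability of a horizontal idler photon: <phi+| I (x) |H><H| |phi+>
# The identity acts on the signal (first slot), Ph on the idler (second slot).
p_h_idler = phi_plus @ np.kron(I2, proj(H)) @ phi_plus
print(p_h_idler)  # 0.5

# Probability of finding the pair in |+45, -45>: zero for |phi+>,
# matching the "never found in different 45 states" result above.
p_45_m45 = phi_plus @ proj(np.kron(P45, M45)) @ phi_plus
print(round(p_45_m45, 12))  # 0.0
```

The ordering convention matches the notebook: the first factor in the Kronecker product acts on the signal photon, the second on the idler.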
NicolasHug/Surprise
examples/notebooks/KNNBasic_analysis.ipynb
bsd-3-clause
[ "Analysis of the KNNBasic algorithm\nIn this notebook, we will run a basic neighborhood algorithm on the movielens dataset, dump the results, and use pandas to make some data analysis.", "from __future__ import (absolute_import, division, print_function, \n unicode_literals) \nimport pickle\nimport os\n\nimport pandas as pd\n\nfrom surprise import KNNBasic\nfrom surprise import Dataset \nfrom surprise import Reader \nfrom surprise.model_selection import PredefinedKFold\nfrom surprise import dump\nfrom surprise.accuracy import rmse\n\n# We will train and test on the u1.base and u1.test files of the movielens-100k dataset.\n# if you haven't already, you need to download the movielens-100k dataset\n# You can do it manually, or by running:\n\n#Dataset.load_builtin('ml-100k')\n\n# Now, let's load the dataset\ntrain_file = os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u1.base'\ntest_file = os.path.expanduser('~') + '/.surprise_data/ml-100k/ml-100k/u1.test'\ndata = Dataset.load_from_folds([(train_file, test_file)], Reader('ml-100k'))\n\npkf = PredefinedKFold()\n\n# We'll use a basic nearest neighbor approach, where similarities are computed\n# between users.\nalgo = KNNBasic() \n\nfor trainset, testset in pkf.split(data):\n algo.fit(trainset) \n predictions = algo.test(testset)\n rmse(predictions)\n \n dump.dump('./dump_file', predictions, algo)\n\n# The dump has been saved and we can now use it whenever we want.\n# Let's load it and see what we can do\npredictions, algo = dump.load('./dump_file')\n\ntrainset = algo.trainset\nprint('algo: {0}, k = {1}, min_k = {2}'.format(algo.__class__.__name__, algo.k, algo.min_k))\n\n# Let's build a pandas dataframe with all the predictions\n\ndef get_Iu(uid):\n \"\"\"Return the number of items rated by given user\n \n Args:\n uid: The raw id of the user.\n Returns:\n The number of items rated by the user.\n \"\"\"\n \n try:\n return len(trainset.ur[trainset.to_inner_uid(uid)])\n except ValueError: # user was not part of 
the trainset\n return 0\n \ndef get_Ui(iid):\n \"\"\"Return the number of users that have rated given item\n \n Args:\n iid: The raw id of the item.\n Returns:\n The number of users that have rated the item.\n \"\"\"\n \n try:\n return len(trainset.ir[trainset.to_inner_iid(iid)])\n except ValueError: # item was not part of the trainset\n return 0\n\ndf = pd.DataFrame(predictions, columns=['uid', 'iid', 'rui', 'est', 'details']) \ndf['Iu'] = df.uid.apply(get_Iu)\ndf['Ui'] = df.iid.apply(get_Ui)\ndf['err'] = abs(df.est - df.rui)\n\ndf.head()\n\nbest_predictions = df.sort_values(by='err')[:10]\nworst_predictions = df.sort_values(by='err')[-10:]\n\n# Let's take a look at the best predictions of the algorithm\nbest_predictions", "It's interesting to note that these perfect predictions are actually lucky shots: $|U_i|$ is always very small, meaning that very few users have rated the target item. This implies that the set of neighbors is very small (see the actual_k field)... And, it just happens that all the ratings from the neighbors are the same (and mostly, are equal to that of the target user).\nThis may be a bit surprising but these lucky shots are actually very important to the accuracy of the algorithm... Try running the same algorithm with a value of min_k equal to $10$. This means that if there are fewer than $10$ neighbors, the prediction is set to the mean of all ratings. You'll see your accuracy decrease!", "# Now, let's look at the predictions with the biggest errors\nworst_predictions", "Let's focus first on the last two predictions. Well, we can't do much about them. We should have predicted $5$, but the only available neighbor had a rating of $1$, so we were screwed. The only way to avoid this kind of error would be to increase the min_k parameter, but it would actually worsen the accuracy (see note above).\nHow about the other ones?
It seems that for each prediction, the users are some kind of outliers: they rated their item with a rating of $1$ when most of the ratings for the item were high (or conversely, rated a bad item with a rating of $5$). See the plot below as an illustration for the first rating.\nThese are situations where baseline estimates would be quite helpful, in order to deal with highly biased users (and items).", "from collections import Counter\n\nimport matplotlib.pyplot as plt\nimport matplotlib\n%matplotlib notebook\nmatplotlib.style.use('ggplot')\n\ncounter = Counter([r for (_, r) in trainset.ir[trainset.to_inner_iid('302')]])\npd.DataFrame.from_dict(counter, orient='index').plot(kind='bar', legend=False)\nplt.xlabel('Rating value')\nplt.ylabel('Number of users')\nplt.title('Number of users having rated item 302')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
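The error-analysis columns built in the Surprise notebook above need nothing library-specific. This is a pandas-only sketch of the same `err`/best/worst bookkeeping, with toy prediction tuples invented purely for illustration (the user/item ids and ratings are not from the MovieLens run):

```python
import pandas as pd

# Toy predictions: (user, item, true rating, estimated rating) --
# stand-ins for the tuples that algo.test() would return.
rows = [
    ('u1', 'i1', 5.0, 4.8),
    ('u1', 'i2', 1.0, 4.2),
    ('u2', 'i1', 3.0, 3.0),
    ('u3', 'i3', 4.0, 2.5),
]
df = pd.DataFrame(rows, columns=['uid', 'iid', 'rui', 'est'])

# Absolute error per prediction, as in the notebook's df['err'] column.
df['err'] = (df.est - df.rui).abs()

# Best and worst predictions, sorted by error.
best = df.sort_values(by='err').head(2)
worst = df.sort_values(by='err').tail(2)
print(worst[['uid', 'iid', 'err']])

# RMSE over all predictions, the metric the notebook reports via rmse().
rmse = (df.err ** 2).mean() ** 0.5
print(round(rmse, 3))  # 1.77
```

The same sort-then-slice pattern scales to the notebook's full prediction set; only the DataFrame contents differ.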
plipp/informatica-pfr-2017
nbs/4/1-Classification-Decision-Tree-Primer.ipynb
mit
[ "Classification - Decision Tree Primer\nClassify Iris (flowers) by their sepal/petal width/length to their species: 'setosa' 'versicolor' 'virginica'\nOriginal Image", "from sklearn.datasets import load_iris\nfrom sklearn.tree import DecisionTreeClassifier\nfrom plotting_utilities import plot_decision_tree, plot_feature_importances\nfrom sklearn.model_selection import train_test_split\n\nimport numpy as np\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\niris = load_iris()\niris.DESCR.split('\\n')\n\n# IN: Features aka Predictors\nprint(iris.data.dtype)\nprint(iris.data.shape)\n\nprint(iris.feature_names)\niris.data[:5,:]\n\n# OUT: Target, here: species\nprint(iris.target.dtype)\nprint(iris.target.shape)\n\nprint(iris.target_names)\niris.target[:5]", "Task: Create a Decision Tree\nto be able to classify an unseen Iris by sepal/petal with into its species: 'setosa' 'versicolor' 'virginica'", "X = iris.data\ny = iris.target\n\n# TODO: Try with and without max_depth (setting also avoids overfitting)\n# clf = DecisionTreeClassifier().fit(X, y)\nclf = DecisionTreeClassifier(max_depth = 3).fit(X, y)\nplot_decision_tree(clf, iris.feature_names, iris.target_names)", "Wait, how do I know that the Decision Tree works???\nA: Split your data into test and train and evaluate with the test data.", "X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state = 3)\n\n# Train the classifier only with the trainings data\nclf = DecisionTreeClassifier().fit(X_train, y_train)\n\n# predict for the test data and compare with the actual outcome\ny_pred = clf.predict(X_test)\n\nfrom sklearn.metrics import confusion_matrix\n\nprint(\" ------ Predicted \")\nprint(\" Actual \")\nconfusion_matrix(y_test, y_pred)\n\nprint('Accuracy of Decision Tree classifier on test set == sum(TP)/sum(): {}'.format((15+11+11)/(15+11+11+1)))\nprint('Accuracy of Decision Tree classifier on test set with \"score\"-function: {:.2f}'\n .format(clf.score(X_test, y_test)))", 
"Feature importance\nTODO: Compare with level in Tree", "plt.figure(figsize=(10,4), dpi=80)\nplot_feature_importances(clf, np.array(iris.feature_names))\nplt.show()\n\nprint('Feature names : {}'.format(iris.feature_names))\nprint('Feature importances: {}'.format(clf.feature_importances_))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
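The hand-computed accuracy in the decision-tree notebook, (15+11+11)/(15+11+11+1), is just the trace of the confusion matrix over its total. A small NumPy check, using the diagonal counts printed in the notebook (the position of the single misclassification is assumed for illustration):

```python
import numpy as np

# Confusion matrix shaped as the notebook prints it: rows = actual class,
# columns = predicted class, for setosa / versicolor / virginica.
cm = np.array([[15, 0, 0],
               [0, 11, 0],
               [0, 1, 11]])

# Correct predictions sit on the diagonal, so accuracy = trace / total,
# the same sum(TP)/sum() computation done by hand in the notebook.
accuracy = np.trace(cm) / cm.sum()
print(round(accuracy, 4))  # 0.9737
```

This agrees with the value the notebook's `clf.score(X_test, y_test)` call reports.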
shareactorIO/pipeline
source.ml/jupyterhub.ml/notebooks/zz_old/TensorFlow/HvassLabsTutorials/08_Transfer_Learning.ipynb
apache-2.0
[ "TensorFlow Tutorial #08\nTransfer Learning\nby Magnus Erik Hvass Pedersen\n/ GitHub / Videos on YouTube\nIntroduction\nWe saw in the previous Tutorial #07 how to use the pre-trained Inception model for classifying images. Unfortunately the Inception model seemed unable to classify images of people. The reason was the data-set used for training the Inception model, which had some confusing text-labels for classes.\nThe Inception model is actually quite capable of extracting useful information from an image. So we can instead train the Inception model using another data-set. But it takes several weeks using a very powerful and expensive computer to fully train the Inception model on a new data-set.\nWe can instead re-use the pre-trained Inception model and merely replace the layer that does the final classification. This is called Transfer Learning.\nThis tutorial builds on the previous tutorials so you should be familiar with Tutorial #07 on the Inception model, as well as earlier tutorials on how to build and train Neural Networks in TensorFlow. A part of the source-code for this tutorial is located in the inception.py file.\nFlowchart\nThe following chart shows how the data flows when using the Inception model for Transfer Learning. First we input and process an image with the Inception model. Just prior to the final classification layer of the Inception model, we save the so-called Transfer Values to a cache-file.\nThe reason for using a cache-file is that it takes a long time to process an image with the Inception model. My laptop computer with a Quad-Core 2 GHz CPU can process about 3 images per second using the Inception model. 
If each image is processed more than once then we can save a lot of time by caching the transfer-values.\nThe transfer-values are also sometimes called bottleneck-values, but that is a confusing term so it is not used here.\nWhen all the images in the new data-set have been processed through the Inception model and the resulting transfer-values saved to a cache file, then we can use those transfer-values as the input to another neural network. We will then train the second neural network using the classes from the new data-set, so the network learns how to classify images based on the transfer-values from the Inception model.\nIn this way, the Inception model is used to extract useful information from the images and another neural network is then used for the actual classification.", "from IPython.display import Image, display\nImage('images/08_transfer_learning_flowchart.png')", "Imports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport numpy as np\nimport time\nfrom datetime import timedelta\nimport os\n\n# Functions and classes for loading and using the Inception model.\nimport inception\n\n# We use Pretty Tensor to define the new classifier.\nimport prettytensor as pt", "This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:", "tf.__version__", "Load Data for CIFAR-10", "import cifar10", "The data dimensions have already been defined in the cifar10 module, so we just need to import the ones we need.", "from cifar10 import num_classes", "Set the path for storing the data-set on your computer.", "# cifar10.data_path = \"data/CIFAR-10/\"", "The CIFAR-10 data-set is about 163 MB and will be downloaded automatically if it is not located in the given path.", "cifar10.maybe_download_and_extract()", "Load the class-names.", "class_names = cifar10.load_class_names()\nclass_names", "Load the training-set. 
This returns the images, the class-numbers as integers, and the class-numbers as One-Hot encoded arrays called labels.", "images_train, cls_train, labels_train = cifar10.load_training_data()", "Load the test-set.", "images_test, cls_test, labels_test = cifar10.load_test_data()", "The CIFAR-10 data-set has now been loaded and consists of 60,000 images and associated labels (i.e. classifications of the images). The data-set is split into 2 mutually exclusive sub-sets, the training-set and the test-set.", "print(\"Size of:\")\nprint(\"- Training-set:\\t\\t{}\".format(len(images_train)))\nprint(\"- Test-set:\\t\\t{}\".format(len(images_test)))", "Helper-function for plotting images\nFunction used to plot at most 9 images in a 3x3 grid, and writing the true and predicted classes below each image.", "def plot_images(images, cls_true, cls_pred=None, smooth=True):\n\n assert len(images) == len(cls_true)\n\n # Create figure with sub-plots.\n fig, axes = plt.subplots(3, 3)\n\n # Adjust vertical spacing.\n if cls_pred is None:\n hspace = 0.3\n else:\n hspace = 0.6\n fig.subplots_adjust(hspace=hspace, wspace=0.3)\n\n # Interpolation type.\n if smooth:\n interpolation = 'spline16'\n else:\n interpolation = 'nearest'\n\n for i, ax in enumerate(axes.flat):\n # There may be less than 9 images, ensure it doesn't crash.\n if i < len(images):\n # Plot image.\n ax.imshow(images[i],\n interpolation=interpolation)\n\n # Name of the true class.\n cls_true_name = class_names[cls_true[i]]\n\n # Show true and predicted classes.\n if cls_pred is None:\n xlabel = \"True: {0}\".format(cls_true_name)\n else:\n # Name of the predicted class.\n cls_pred_name = class_names[cls_pred[i]]\n\n xlabel = \"True: {0}\\nPred: {1}\".format(cls_true_name, cls_pred_name)\n\n # Show the classes as the label on the x-axis.\n ax.set_xlabel(xlabel)\n \n # Remove ticks from the plot.\n ax.set_xticks([])\n ax.set_yticks([])\n \n # Ensure the plot is shown correctly with multiple plots\n # in a single Notebook 
cell.\n plt.show()", "Plot a few images to see if data is correct", "# Get the first images from the test-set.\nimages = images_test[0:9]\n\n# Get the true classes for those images.\ncls_true = cls_test[0:9]\n\n# Plot the images and labels using our helper-function above.\nplot_images(images=images, cls_true=cls_true, smooth=False)", "Download the Inception Model\nThe Inception model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.", "# inception.data_dir = 'inception/'", "Download the data for the Inception model if it doesn't already exist in the directory. It is 85 MB.\nSee Tutorial #07 for more details.", "inception.maybe_download()", "Load the Inception Model\nLoad the Inception model so it is ready for classifying images.\nNote the deprecation warning, which might cause the program to fail in the future.", "model = inception.Inception()", "Calculate Transfer-Values\nImport a helper-function for caching the transfer-values of the Inception model.", "from inception import transfer_values_cache", "Set the file-paths for the caches of the training-set and test-set.", "file_path_cache_train = os.path.join(cifar10.data_path, 'inception_cifar10_train.npy')\nfile_path_cache_test = os.path.join(cifar10.data_path, 'inception_cifar10_test.npy')\n\nprint(\"Processing Inception transfer-values for training-images ...\")\n\n# Scale images because Inception needs pixels to be between 0 and 255,\n# while the CIFAR-10 functions return pixels between 0.0 and 1.0\nimages_scaled = images_train * 255.0\n\n# If transfer-values have already been calculated then reload them,\n# otherwise calculate them and save them to a cache-file.\ntransfer_values_train = transfer_values_cache(file_path=file_path_cache_train,\n images=images_scaled,\n model=model)\n\nprint(\"Processing Inception transfer-values for test-images ...\")\n\n# Scale images because Inception needs pixels to be 
between 0 and 255,\n# while the CIFAR-10 functions return pixels between 0.0 and 1.0\nimages_scaled = images_test * 255.0\n\n# If transfer-values have already been calculated then reload them,\n# otherwise calculate them and save them to a cache-file.\ntransfer_values_test = transfer_values_cache(file_path=file_path_cache_test,\n images=images_scaled,\n model=model)", "Check the shape of the array with the transfer-values. There are 50,000 images in the training-set and for each image there are 2048 transfer-values.", "transfer_values_train.shape", "Similarly, there are 10,000 images in the test-set with 2048 transfer-values for each image.", "transfer_values_test.shape", "Helper-function for plotting transfer-values", "def plot_transfer_values(i):\n print(\"Input image:\")\n \n # Plot the i'th image from the test-set.\n plt.imshow(images_test[i], interpolation='nearest')\n plt.show()\n\n print(\"Transfer-values for the image using Inception model:\")\n \n # Transform the transfer-values into an image.\n img = transfer_values_test[i]\n img = img.reshape((32, 64))\n\n # Plot the image for the transfer-values.\n plt.imshow(img, interpolation='nearest', cmap='Reds')\n plt.show()\n\nplot_transfer_values(i=16)\n\nplot_transfer_values(i=17)", "Analysis of Transfer-Values using PCA\nUse Principal Component Analysis (PCA) from scikit-learn to reduce the array-lengths of the transfer-values from 2048 to 2 so they can be plotted.", "from sklearn.decomposition import PCA", "Create a new PCA-object and set the target array-length to 2.", "pca = PCA(n_components=2)", "It takes a while to compute the PCA so the number of samples has been limited to 3000. 
You can try and use the full training-set if you like.", "transfer_values = transfer_values_train[0:3000]", "Check that the array has 3000 samples and 2048 transfer-values for each sample.", "transfer_values.shape", "Use PCA to reduce the transfer-value arrays from 2048 to 2 elements.", "transfer_values_reduced = pca.fit_transform(transfer_values)", "Check that it is now an array with 3000 samples and 2 values per sample.", "transfer_values_reduced.shape", "Helper-function for plotting the reduced transfer-values.", "def plot_scatter(values):\n # Create a color-map with a different color for each class.\n import matplotlib.cm as cm\n cmap = cm.rainbow(np.linspace(0.0, 1.0, num_classes))\n\n # Get the color for each sample.\n colors = cmap[cls_train]\n\n # Extract the x- and y-values.\n x = values[:, 0]\n y = values[:, 1]\n\n # Plot it.\n plt.scatter(x, y, color=colors)\n plt.show()", "Plot the transfer-values that have been reduced using PCA. There are 10 different colors for the different classes in the CIFAR-10 data-set. The colors are grouped together but with very large overlap. This may be because PCA cannot properly separate the transfer-values.", "plot_scatter(transfer_values_reduced)", "Analysis of Transfer-Values using t-SNE", "from sklearn.manifold import TSNE", "Another method for doing dimensionality reduction is t-SNE. Unfortunately, t-SNE is very slow so we first use PCA to reduce the transfer-values from 2048 to 50 elements.", "pca = PCA(n_components=50)\ntransfer_values_50d = pca.fit_transform(transfer_values)", "Create a new t-SNE object for the final dimensionality reduction and set the target to 2-dim.", "tsne = TSNE(n_components=2)", "Perform the final reduction using t-SNE. 
The current implementation of t-SNE in scikit-learn cannot handle data with many samples so this might crash if you use the full training-set.", "transfer_values_reduced = tsne.fit_transform(transfer_values_50d) ", "Check that it is now an array with 3000 samples and 2 transfer-values per sample.", "transfer_values_reduced.shape", "Plot the transfer-values that have been reduced to 2-dim using t-SNE, which shows better separation than the PCA-plot above.\nThis means the transfer-values from the Inception model appear to contain enough information to separate the CIFAR-10 images into classes, although there is still some overlap so the separation is not perfect.", "plot_scatter(transfer_values_reduced)", "New Classifier in TensorFlow\nNow we will create another neural network in TensorFlow. This network will take as input the transfer-values from the Inception model and output the predicted classes for CIFAR-10 images.\nIt is assumed that you are already familiar with how to build neural networks in TensorFlow, otherwise see e.g. Tutorial #03.\nPlaceholder Variables\nFirst we need the array-length for transfer-values which is stored as a variable in the object for the Inception model.", "transfer_len = model.transfer_len", "Now create a placeholder variable for inputting the transfer-values from the Inception model into the new network that we are building. The shape of this variable is [None, transfer_len] which means it takes an input array with an arbitrary number of samples as indicated by the keyword None and each sample has 2048 elements, equal to transfer_len.", "x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')", "Create another placeholder variable for inputting the true class-label of each image. These are so-called One-Hot encoded arrays with 10 elements, one for each possible class in the data-set.", "y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')", "Calculate the true class as an integer. 
This could also be a placeholder variable.", "y_true_cls = tf.argmax(y_true, dimension=1)", "Neural Network\nCreate the neural network for doing the classification on the CIFAR-10 data-set. This takes as input the transfer-values from the Inception model which will be fed into the placeholder variable x. The network outputs the predicted class in y_pred.\nSee Tutorial #03 for more details on how to use Pretty Tensor to construct neural networks.", "# Wrap the transfer-values as a Pretty Tensor object.\nx_pretty = pt.wrap(x)\n\nwith pt.defaults_scope(activation_fn=tf.nn.relu):\n y_pred, loss = x_pretty.\\\n fully_connected(size=1024, name='layer_fc1').\\\n softmax_classifier(class_count=num_classes, labels=y_true)", "Optimization Method\nCreate a variable for keeping track of the number of optimization iterations performed.", "global_step = tf.Variable(initial_value=0,\n name='global_step', trainable=False)", "Method for optimizing the new neural network.", "optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss, global_step)", "Classification Accuracy\nThe output of the network y_pred is an array with 10 elements. 
The class number is the index of the largest element in the array.", "y_pred_cls = tf.argmax(y_pred, dimension=1)", "Create an array of booleans whether the predicted class equals the true class of each image.", "correct_prediction = tf.equal(y_pred_cls, y_true_cls)", "The classification accuracy is calculated by first type-casting the array of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.", "accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))", "TensorFlow Run\nCreate TensorFlow Session\nOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.", "session = tf.Session()", "Initialize Variables\nThe variables for the new network must be initialized before we start optimizing them.", "session.run(tf.initialize_all_variables())", "Helper-function to get a random training-batch\nThere are 50,000 images (and arrays with transfer-values for the images) in the training-set. It takes a long time to calculate the gradient of the model using all these images (transfer-values). 
We therefore only use a small batch of images (transfer-values) in each iteration of the optimizer.\nIf your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to perform more optimization iterations.", "train_batch_size = 64", "Function for selecting a random batch of transfer-values from the training-set.", "def random_batch():\n # Number of images (transfer-values) in the training-set.\n num_images = len(transfer_values_train)\n\n # Create a random index.\n idx = np.random.choice(num_images,\n size=train_batch_size,\n replace=False)\n\n # Use the random index to select random x and y-values.\n # We use the transfer-values instead of images as x-values.\n x_batch = transfer_values_train[idx]\n y_batch = labels_train[idx]\n\n return x_batch, y_batch", "Helper-function to perform optimization\nThis function performs a number of optimization iterations so as to gradually improve the variables of the neural network. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. 
The progress is printed every 100 iterations.", "def optimize(num_iterations):\n # Start-time used for printing time-usage below.\n start_time = time.time()\n\n for i in range(num_iterations):\n # Get a batch of training examples.\n # x_batch now holds a batch of images (transfer-values) and\n # y_true_batch are the true labels for those images.\n x_batch, y_true_batch = random_batch()\n\n # Put the batch into a dict with the proper names\n # for placeholder variables in the TensorFlow graph.\n feed_dict_train = {x: x_batch,\n y_true: y_true_batch}\n\n # Run the optimizer using this batch of training data.\n # TensorFlow assigns the variables in feed_dict_train\n # to the placeholder variables and then runs the optimizer.\n # We also want to retrieve the global_step counter.\n i_global, _ = session.run([global_step, optimizer],\n feed_dict=feed_dict_train)\n\n # Print status to screen every 100 iterations (and last).\n if (i_global % 100 == 0) or (i == num_iterations - 1):\n # Calculate the accuracy on the training-batch.\n batch_acc = session.run(accuracy,\n feed_dict=feed_dict_train)\n\n # Print status.\n msg = \"Global Step: {0:>6}, Training Batch Accuracy: {1:>6.1%}\"\n print(msg.format(i_global, batch_acc))\n\n # Ending time.\n end_time = time.time()\n\n # Difference between start and end-times.\n time_dif = end_time - start_time\n\n # Print the time-usage.\n print(\"Time usage: \" + str(timedelta(seconds=int(round(time_dif)))))", "Helper-Functions for Showing Results\nHelper-function to plot example errors\nFunction for plotting examples of images from the test-set that have been mis-classified.", "def plot_example_errors(cls_pred, correct):\n # This function is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # correct is a boolean array whether the predicted class\n # is equal to the true class for each image in the test-set.\n\n # Negate the boolean array.\n incorrect 
= (correct == False)\n \n # Get the images from the test-set that have been\n # incorrectly classified.\n images = images_test[incorrect]\n \n # Get the predicted classes for those images.\n cls_pred = cls_pred[incorrect]\n\n # Get the true classes for those images.\n cls_true = cls_test[incorrect]\n\n n = min(9, len(images))\n \n # Plot the first n images.\n plot_images(images=images[0:n],\n cls_true=cls_true[0:n],\n cls_pred=cls_pred[0:n])", "Helper-function to plot confusion matrix", "# Import a function from sklearn to calculate the confusion-matrix.\nfrom sklearn.metrics import confusion_matrix\n\ndef plot_confusion_matrix(cls_pred):\n # This is called from print_test_accuracy() below.\n\n # cls_pred is an array of the predicted class-number for\n # all images in the test-set.\n\n # Get the confusion matrix using sklearn.\n cm = confusion_matrix(y_true=cls_test, # True class for test-set.\n y_pred=cls_pred) # Predicted class.\n\n # Print the confusion matrix as text.\n for i in range(num_classes):\n # Append the class-name to each line.\n class_name = \"({}) {}\".format(i, class_names[i])\n print(cm[i, :], class_name)\n\n # Print the class-numbers for easy reference.\n class_numbers = [\" ({0})\".format(i) for i in range(num_classes)]\n print(\"\".join(class_numbers))", "Helper-functions for calculating classifications\nThis function calculates the predicted classes of images and also returns a boolean array whether the classification of each image is correct.\nThe calculation is done in batches because it might use too much RAM otherwise. 
If your computer crashes then you can try and lower the batch-size.", "# Split the data-set in batches of this size to limit RAM usage.\nbatch_size = 256\n\ndef predict_cls(transfer_values, labels, cls_true):\n # Number of images.\n num_images = len(transfer_values)\n\n # Allocate an array for the predicted classes which\n # will be calculated in batches and filled into this array.\n cls_pred = np.zeros(shape=num_images, dtype=np.int)\n\n # Now calculate the predicted classes for the batches.\n # We will just iterate through all the batches.\n # There might be a more clever and Pythonic way of doing this.\n\n # The starting index for the next batch is denoted i.\n i = 0\n\n while i < num_images:\n # The ending index for the next batch is denoted j.\n j = min(i + batch_size, num_images)\n\n # Create a feed-dict with the images and labels\n # between index i and j.\n feed_dict = {x: transfer_values[i:j],\n y_true: labels[i:j]}\n\n # Calculate the predicted class using TensorFlow.\n cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)\n\n # Set the start-index for the next batch to the\n # end-index of the current batch.\n i = j\n \n # Create a boolean array whether each image is correctly classified.\n correct = (cls_true == cls_pred)\n\n return correct, cls_pred", "Calculate the predicted class for the test-set.", "def predict_cls_test():\n return predict_cls(transfer_values = transfer_values_test,\n labels = labels_test,\n cls_true = cls_test)", "Helper-functions for calculating the classification accuracy\nThis function calculates the classification accuracy given a boolean array whether each image was correctly classified. E.g. classification_accuracy([True, True, False, False, False]) = 2/5 = 0.4. 
The function also returns the number of correct classifications.", "def classification_accuracy(correct):\n # When averaging a boolean array, False means 0 and True means 1.\n # So we are calculating: number of True / len(correct) which is\n # the same as the classification accuracy.\n\n # Return the classification accuracy\n # and the number of correct classifications.\n return correct.mean(), correct.sum()", "Helper-function for showing the classification accuracy\nFunction for printing the classification accuracy on the test-set.\nIt takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.", "def print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False):\n\n # For all the images in the test-set,\n # calculate the predicted classes and whether they are correct.\n correct, cls_pred = predict_cls_test()\n \n # Classification accuracy and the number of correct classifications.\n acc, num_correct = classification_accuracy(correct)\n \n # Number of images being classified.\n num_images = len(correct)\n\n # Print the accuracy.\n msg = \"Accuracy on Test-Set: {0:.1%} ({1} / {2})\"\n print(msg.format(acc, num_correct, num_images))\n\n # Plot some examples of mis-classifications, if desired.\n if show_example_errors:\n print(\"Example errors:\")\n plot_example_errors(cls_pred=cls_pred, correct=correct)\n\n # Plot the confusion matrix, if desired.\n if show_confusion_matrix:\n print(\"Confusion Matrix:\")\n plot_confusion_matrix(cls_pred=cls_pred)", "Results\nPerformance before any optimization\nThe classification accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so it just classifies the images randomly.", "print_test_accuracy(show_example_errors=False,\n show_confusion_matrix=False)", "Performance 
after 10,000 optimization iterations\nAfter 10,000 optimization iterations, the classification accuracy is about 90% on the test-set. Compare this to the basic Convolutional Neural Network from Tutorial #06 which had less than 80% accuracy on the test-set.", "optimize(num_iterations=10000)\n\nprint_test_accuracy(show_example_errors=True,\n show_confusion_matrix=True)", "Close TensorFlow Session\nWe are now done using TensorFlow, so we close the session to release its resources. Note that the TensorFlow-session is inside the model-object, so we close the session through that object.", "# This has been commented out in case you want to modify and experiment\n# with the Notebook without having to restart it.\n# model.close()", "Conclusion\nIn the previous Tutorial #06 it took 15 hours on a laptop PC to train a neural network for classifying the CIFAR-10 data-set with an accuracy of about 80% on the test-set.\nIn this tutorial we used the Inception model from Tutorial #07 to achieve a classification accuracy of about 90% on the CIFAR-10 data-set. This was done by feeding all the images from the CIFAR-10 data-set through the Inception model and caching the so-called transfer-values prior to the final classification layer. We then built another neural network that took these transfer-values as input and produced a CIFAR-10 class as output.\nThe CIFAR-10 data-set contains a total of 60,000 images. It took about 6 hours to calculate the transfer-values of the Inception model for all these images, using a laptop PC without a GPU. And training the new classifier on top of these transfer-values only took a few minutes. 
So the combined time-usage of this transfer-learning was less than half the time it took to train a neural network directly for the CIFAR-10 data-set, and it achieved significantly higher classification accuracy.\nSo transfer-learning with the Inception model is useful for building an image classifier for your own data-set.\nExercises\nThese are a few suggestions for exercises that may help improve your skills with TensorFlow. It is important to get hands-on experience with TensorFlow in order to learn how to use it properly.\nYou may want to back up this Notebook and the other files before making any changes.\n\nTry using the full training-set in the PCA and t-SNE plots. What happens?\nTry changing the neural network for doing the new classification. What happens if you remove the fully-connected layer, or add more fully-connected layers?\nWhat happens if you perform fewer or more optimization iterations?\nWhat happens if you change the learning_rate for the optimizer?\nHow would you implement random distortions to the CIFAR-10 images as was done in Tutorial #06? 
You can no longer use the cache because each input image is different.\nTry using the MNIST data-set instead of the CIFAR-10 data-set.\nExplain to a friend how the program works.\n\nLicense (MIT)\nCopyright (c) 2016 by Magnus Erik Hvass Pedersen\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
darkomen/TFG
medidas/Conclusiones/.ipynb_checkpoints/Conclusiones-checkpoint.ipynb
cc0-1.0
[ "Analysis of the collected data\nComparison of three different filaments\n\nBQ filament\nFormfutura filament\nFilastruder filament", "%pylab inline\n# Import the libraries we use\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n# Show the version used for each library\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n# Open the files with the data\nconclusiones = pd.read_csv('Conclusiones.csv')\n\ncolumns=['bq','formfutura','filastruder']\n\n# Show a summary of the collected data\nconclusiones[columns].describe()", "We plot both diameters and the puller speed on the same graph", "conclusiones[columns].plot(figsize=(16,10),ylim=(1.5,2.5)).hlines([1.85,1.65],0,3500,colors='r')\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\nconclusiones[columns].boxplot(return_type='axes')", "Increasing the speed managed to lower the maximum value, but it also lowered the minimum value. 
For the next iteration, we will return to the speeds of 1.5-3.4 and add more rules with smaller speed increments, to avoid saturating the traction speed at either the high or the low end.\nComparison of Diametro X versus Diametro Y to see the filament ratio\nData filtering\nWe assume that samples with $d_x < 0.9$ or $d_y < 0.9$ are sensor errors, so we filter them out of the recorded samples.", "datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')", "Plot of X/Y", "plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')", "We analyse the ratio data", "ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))", "Quality limits\nWe count the number of times we cross the quality limits. \n$Th^+ = 1.85$ and $Th^- = 1.65$", "Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
sony/nnabla
tutorial/debugging.ipynb
apache-2.0
[ "Debugging\nDeep neural networks are going deeper and deeper every year, requiring more components in the networks. Such complexity often leads us to misconfigure a network in ways that can turn out to be critical. Even if we correctly configure a neural network as desired, we may still want to find out its performance bottleneck, e.g., from which layer(s) the computational bottleneck comes.\nIn this debugging tutorial, we introduce the following ways to deal with such cases:\n\nvisit method of a variable\npretty-print\nsimple graph viewer\nprofiling utils\nvalue tracer\n\nWe will go over each technique, but first prepare the following reference model.", "!pip install nnabla-ext-cuda100\n!git clone https://github.com/sony/nnabla.git\n%cd nnabla/tutorial\n\nimport numpy as np\nimport nnabla as nn\nimport nnabla.logger as logger\nimport nnabla.functions as F\nimport nnabla.parametric_functions as PF\nimport nnabla.solvers as S\n\ndef block(x, maps, test=False, name=\"block\"):\n h = x\n with nn.parameter_scope(name):\n with nn.parameter_scope(\"in-block-1\"):\n h = PF.convolution(h, maps, kernel=(3, 3), pad=(1, 1), with_bias=False)\n h = PF.batch_normalization(h, batch_stat=not test)\n h = F.relu(h)\n with nn.parameter_scope(\"in-block-2\"):\n h = PF.convolution(h, maps // 2, kernel=(3, 3), pad=(1, 1), with_bias=False)\n h = PF.batch_normalization(h, batch_stat=not test)\n h = F.relu(h)\n with nn.parameter_scope(\"in-block-3\"):\n h = PF.convolution(h, maps, kernel=(3, 3), pad=(1, 1), with_bias=False)\n h = PF.batch_normalization(h, batch_stat=not test)\n \n if h.shape[1] != x.shape[1]:\n with nn.parameter_scope(\"skip\"):\n s = PF.convolution(x, maps, kernel=(3, 3), pad=(1, 1), with_bias=False)\n s = PF.batch_normalization(s, batch_stat=not test)\n\n return F.relu(h + s)\n\ndef network(x, maps=16, test=False):\n h = x\n h = PF.convolution(h, maps, kernel=(3, 3), pad=(1, 1), name=\"first-conv\", with_bias=False)\n h = PF.batch_normalization(h, batch_stat=not test, 
name=\"first-bn\")\n h = F.relu(h)\n for l in range(4):\n h = block(h, maps * 2 ** (l + 1), name=\"block-{}\".format(l))\n h = F.max_pooling(h, (2, 2))\n h = F.average_pooling(h, h.shape[2:])\n pred = PF.affine(h, 100, name=\"pred\")\n return pred ", "Visit Method\nThe visit method of a variable takes a lambda, function, or other callable object as an argument and calls it on all NNabla functions that the variable can traverse, in the forward order. Its usage is easier to see than to explain.\nFirst of all, define the callable class.", "class PrintFunc(object):\n def __call__(self, nnabla_func):\n print(\"==========\")\n print(nnabla_func.info.type_name)\n print(nnabla_func.inputs)\n print(nnabla_func.outputs)\n print(nnabla_func.info.args)", "This callable object takes an NNabla function, e.g., convolution, relu, etc., so a user can get information about that function.", "nn.clear_parameters() # clear parameters in case you run the following code again\n\nx = nn.Variable.from_numpy_array(np.random.randn(*[4, 3, 128, 128]))\npred = network(x)\npred.visit(PrintFunc())", "This is the low-level API for inspecting graph information by hand, in whatever way you want.\nPPrint\nThe pprint method is one instantiation of the visit method. We can see the graph structure in topological (forward) order in detail. Here is an example of printing detailed information about a graph.", "nn.clear_parameters() # call this in case you want to run the following code again\n\nx = nn.Variable.from_numpy_array(np.random.randn(*[4, 3, 128, 128]))\npred = network(x)\n\n# pprint\nfrom nnabla.utils.inspection import pprint\npprint(pred, summary=True, forward=True, backward=True)", "Simple Graph Viewer\nThe visit method is very useful for getting information about each function\nused in a graph, but it is hard to see the details of the whole network\nstructure, e.g., which variable is connected to which variable. 
So we have a graph viewer that visually shows the whole structure of the network, enabling us to debug more efficiently. Using this graph viewer is straightforward, as shown in the following code:", "nn.clear_parameters() # call this in case you want to run the following code again\n\nx = nn.Variable([4, 3, 128, 128])\npred = network(x)\n\nimport nnabla.experimental.viewers as V\n\ngraph = V.SimpleGraph(verbose=False)\ngraph.view(pred)", "If one would like to see more detailed information, as in the visit method case, change the verbose option to True.", "graph = V.SimpleGraph(verbose=True)\ngraph.view(pred)", "Now one can see the detailed information!\nNote that this viewer is mainly for NNabla users who want to write code in Python; if you would like a more polished network visualization to play with, please use Neural Network Console and visit https://dl.sony.com/.\nProfiling Utils\nBasically, this feature is for developers who want to know the overall speed statistics and which functions could be the bottlenecks. NNabla provides a simple profiling tool. 
Once a network is prepared, we also need the other components used to train the network, such as a loss function and a solver.\nTo create the profiler and see the results, run the following code.", "nn.clear_parameters() # call this in case you want to run the following code again\n\n# Context\nfrom nnabla.ext_utils import get_extension_context\ndevice = \"cudnn\"\nctx = get_extension_context(device)\nnn.set_default_context(ctx)\n\n# Network\nx = nn.Variable.from_numpy_array(np.random.randn(*[4, 3, 128, 128]))\nt = nn.Variable([4, 1])\npred = network(x)\nloss = F.mean(F.softmax_cross_entropy(pred, t))\n\n# Solver\nsolver = S.Momentum()\nsolver.set_parameters(nn.get_parameters())\n\n# Profiler\nfrom nnabla.utils.profiler import GraphProfiler\nB = GraphProfiler(loss, solver=solver, device_id=0, ext_name=device, n_run=100)\nB.run()\nprint(\"Profile finished.\")\n\n# Report\nfrom nnabla.utils.profiler import GraphProfilerCsvWriter\nwith open(\"./profile.csv\", \"w\") as f:\n writer = GraphProfilerCsvWriter(B, file=f)\n writer.write()\nprint(\"Report is prepared.\")", "You can also use TimeProfiler for profiling; it is more fine-grained in measuring execution time.\nWith TimeProfiler, you can attach a callback function to the forward and/or backward methods in the training loop.\nValue Tracer\nWe sometimes want to check whether NaN/Inf values exist. 
NanInfTracer is a convenient way to check whether any layer in a graph has a NaN/Inf value.", "# Create graph again just in case\nnn.clear_parameters() # call this in case you want to run the following code again\n\n# Try to switch these two\nx = nn.Variable.from_numpy_array(np.random.randn(*[4, 3, 64, 64]))\n#x = nn.Variable([4, 3, 64, 64])\npred = network(x)\n\n# NanInfTracer\nfrom nnabla.utils.inspection import NanInfTracer\nnit = NanInfTracer(trace_inf=True, trace_nan=True, need_details=True)\n\nwith nit.trace():\n # Try to comment either of these two or both\n pred.forward(function_post_hook=nit.forward_post_hook)\n pred.backward(function_post_hook=nit.backward_post_hook)\n \nprint(nit.check())" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quietcoolwu/python-playground
notebooks/Module_Inspect.ipynb
mit
[ "Introspection in the Python runtime\nSometimes we run into a requirement like this: we need to call some method of an object, or assign a value to some field of an object, but the method name or field name is not known when the code is written and must be passed in as a string argument.\nThis mechanism is called reflection (conversely, letting the object tell us what it is), or introspection, and is used to obtain information about an unknown object at runtime.\nReflection is an intimidating term that sounds profound. In most programming languages reflection is somewhat complex compared with other concepts and is usually treated as an advanced topic; in Python, however, reflection is very simple, and using it hardly feels different from ordinary code. Functions and methods obtained through reflection can be called directly, as usual, just by adding parentheses, and once a class is obtained an instance can be constructed directly. A field obtained this way, however, cannot be assigned to directly, because what you get is actually another reference pointing to the same place, and assignment would only change this new reference.", "#coding: UTF-8\n\n\nimport sys # module; sys points to this module object\ndef foo(): pass # function; foo points to this function object \nclass Cat(object): # class; Cat points to this class object\n def __init__(self, name='kitty'):\n self.name = name\n def sayHi(self): # instance method; sayHi points to this method object, accessed via Class.sayHi or instance.sayHi\n print self.name, 'says Hi!' # access the field named name, via instance.name\n\ncat = Cat() # cat is an instance object of the Cat class\n\nprint Cat.sayHi # accessed via the class name, the method is unbound\nprint cat.sayHi # accessed via an instance, the method is bound", "Accessing an object's attributes\nBelow are several built-in functions that can be used to inspect or access an object's attributes. These functions can be used on any object, not just the Cat instance in this example; everything in Python is an object.", "cat = Cat('kitty')\n\nprint cat.name # access an instance attribute\ncat.sayHi() # call an instance method\n\nprint dir(cat) # get the instance's attribute names as a list\nif hasattr(cat, 'name'): # check whether the instance has this attribute\n setattr(cat, 'name', 'tiger') # same as: a.name = 'tiger'\nprint getattr(cat, 'name') # same as: print a.name\n\ngetattr(cat, 'sayHi')() # same as: cat.sayHi()", "Code blocks (func_code)\n<pre>\nA code block can be obtained by compiling class source code, function source code, or a simple statement.\nco_argcount: total number of ordinary arguments, excluding the * argument and the ** argument.\nco_names: tuple of all argument names (including the * and ** arguments) and local variable names.\nco_varnames: tuple of all local variable names.\nco_filename: name of the file containing the source code.\nco_flags: a number in which every binary bit carries specific information. The bits of most interest are 0b100 (0x4) and 0b1000 (0x8): if co_flags & 0b100 != 0, a *args argument is used; if co_flags & 0b1000 != 0, a **kwargs argument is used. In addition, if co_flags & 0b100000 (0x20) != 0, this is a generator function.\n</pre>", "co = cat.sayHi.func_code\nprint co\nprint co.co_argcount # 1\nprint co.co_names # ('name',)\nprint co.co_varnames # ('self',)\nprint co.co_flags & 0b100 # 0", "Stack frames (frame)\nA stack frame represents one frame of the function call stack while the program is running. A function has no attribute for getting its frame, because the frame only comes into existence when the function is called; a generator, by contrast, is returned by a function call and therefore does have an attribute pointing to a frame. To obtain the frame related to some function, you must get it while the function has been called and has not yet returned. You can use the sys module's _getframe() function, or the inspect module's currentframe() function, to get the current frame. All the attributes listed here are read-only.\n1. f_back: the previous frame in the call stack.\n2. f_code: the code object corresponding to the frame.\n3. 
f_locals: used on the current frame this is the same as the built-in locals(), but you can first obtain another frame and then use this attribute to get that frame's locals().\n4. f_globals: used on the current frame this is the same as the built-in globals(), but you can first obtain another frame.", "def add(x, y=1):\n f = sys._getframe() # same as inspect.currentframe()\n print locals()\n print f.f_locals # same as locals()\n print f.f_back # <frame object at 0x...>\n return x+y\nadd(2)"
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.21/_downloads/e71fac7e5d7784759a26529dd6e63da5/plot_whitened.ipynb
bsd-3-clause
[ "%matplotlib inline", "Plotting whitened data\nThis tutorial demonstrates how to plot :term:whitened <whitening> evoked\ndata.\nData are whitened for many processes, including dipole fitting, source\nlocalization and some decoding algorithms. Viewing whitened data thus gives\na different perspective on the data that these algorithms operate on.\nLet's start by loading some data and computing a signal (spatial) covariance\nthat we'll consider to be noise.", "import mne\nfrom mne.datasets import sample", "Raw data with whitening\n<div class=\"alert alert-info\"><h4>Note</h4><p>In the :meth:`mne.io.Raw.plot` with ``noise_cov`` supplied,\n you can press the \"w\" key to turn whitening on and off.</p></div>", "data_path = sample.data_path()\nraw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nraw = mne.io.read_raw_fif(raw_fname, preload=True)\n\nevents = mne.find_events(raw, stim_channel='STI 014')\nevent_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'smiley': 5, 'button': 32}\nreject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)\nepochs = mne.Epochs(raw, events, event_id=event_id, reject=reject)\n\n# baseline noise cov, not a lot of samples\nnoise_cov = mne.compute_covariance(epochs, tmax=0., method='shrunk', rank=None,\n verbose='error')\n\n# butterfly mode shows the differences most clearly\nraw.plot(events=events, butterfly=True)\nraw.plot(noise_cov=noise_cov, events=events, butterfly=True)", "Epochs with whitening", "epochs.plot()\nepochs.plot(noise_cov=noise_cov)", "Evoked data with whitening", "evoked = epochs.average()\nevoked.plot(time_unit='s')\nevoked.plot(noise_cov=noise_cov, time_unit='s')", "Evoked data with scaled whitening\nThe :meth:mne.Evoked.plot_white function takes an additional step of\nscaling the whitened plots to show how well the assumption of Gaussian\nnoise is satisfied by the data:", "evoked.plot_white(noise_cov=noise_cov, time_unit='s')", "Topographic plot with whitening", 
"evoked.comment = 'All trials'\nevoked.plot_topo(title='Evoked data')\nevoked.plot_topo(noise_cov=noise_cov, title='Whitened evoked data')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jmhsi/justin_tinker
data_science/lendingclub_bak/dataprep_and_modeling/0.1.0_elastic_net_no_weighting.ipynb
apache-2.0
[ "import modeling_utils.data_prep as data_prep\nimport numpy as np\nimport pandas as pd\nfrom sklearn.linear_model import ElasticNet\nfrom sklearn.externals import joblib\nimport time", "DO NOT FORGET TO DROP ISSUE_D AFTER PREPPING", "platform = 'lendingclub'\n\nstore = pd.HDFStore(\n '/Users/justinhsi/justin_tinkering/data_science/lendingclub/{0}_store.h5'.\n format(platform),\n append=True)\n\nloan_info = store['train_filtered_columns']", "Until I figure out a good imputation method (e.g. Bayesian PCA), still just drop columns with nulls", "standardized, eval_cols, mean_series, std_dev_series = data_prep.process_data_train(\n loan_info)", "Straight-up, out-of-the-box elastic net with a slightly tweaked alpha", "regr = ElasticNet(alpha = .004, random_state=0, max_iter = 1500)\nregr.fit(standardized, eval_cols)\n\n# dump the model\njoblib.dump(regr, 'model_dump/model_0.1.0.pkl')\njoblib.dump((mean_series, std_dev_series), 'model_dump/mean_stddev.pkl')\n\ncoef_dict = {}\nfor index, coef in enumerate(regr.coef_):\n coef_dict[index] = coef\npd.Series(coef_dict).value_counts(dropna=False)\n\nregr.score(standardized, eval_cols)\n\nnow = time.strftime(\"%Y_%m_%d_%Hh_%Mm_%Ss\")\n# info to stick in detailed dataframe describing each model\nmodel_info = {'model_version': '0.1.0',\n 'target': 'npv_roi_10',\n 'weights': 'None',\n 'algo_model': 'elastic_net',\n 'hyperparams': \"alpha:.004, random_state: 0, max_iter: 1500\",\n 'cost_func': 'sklearn default, which I think is mse',\n 'useful_notes': 'R2 score of .0604167 (regr.score())',\n 'date': now}\n\nmodel_info_df = pd.DataFrame(model_info, index = ['0.1.0'])\nstore.open()\nstore.append(\n 'model_info',\n model_info_df,\n data_columns=True,\n index=True,\n append=True,\n min_itemsize={'model_version': 20,\n 'target': 20,\n 'weights': 200,\n 'algo_model': 20,\n 'hyperparams': 500,\n 'cost_func': 300,\n 'useful_notes': 1000,\n 'date': 30}\n)\nstore.close()", "Examine performance on test set", "store.open()\ntest = store['test_filtered_columns']\ntrain = 
store['train_filtered_columns']\nloan_npv_rois = store['loan_npv_rois']\ndefault_series = test['target_strict']\nresults = store['results']\nstore.close()\n\ntrain_X, train_y = data_prep.process_data_test(train)\ntrain_y = train_y['npv_roi_10'].values\ntest_X, test_y = data_prep.process_data_test(test)\ntest_y = test_y['npv_roi_10'].values\nregr = joblib.load('model_dump/model_0.1.0.pkl')\nregr_version = '0.1.0'\ntest_yhat = regr.predict(test_X)\ntrain_yhat = regr.predict(train_X)\n\ntest_mse = np.sum((test_yhat - test_y)**2)/len(test_y)\ntrain_mse = np.sum((train_yhat - train_y)**2)/len(train_y)\n\ndef eval_models(trials, port_size, available_loans, regr, regr_version, test, loan_npv_rois,\n default_series):\n results = {}\n pct_default = {}\n test_copy = test.copy()\n for trial in tqdm_notebook(np.arange(trials)):\n loan_ids = np.random.choice(\n test_copy.index.values, available_loans, replace=False)\n loans_to_pick_from = test_copy.loc[loan_ids, :]\n scores = regr.predict(loans_to_pick_from)\n scores_series = pd.Series(dict(zip(loan_ids, scores)))\n scores_series.sort_values(ascending=False, inplace=True)\n picks = scores_series[:900].index.values\n results[trial] = loan_npv_rois.loc[picks, :].mean().to_dict()\n pct_default[trial] = (default_series.loc[picks].sum()) / port_size\n pct_default_series = pd.Series(pct_default)\n results_df = pd.DataFrame(results).T\n results_df['pct_def'] = pct_default_series\n return results_df\n\n# as per done with baseline models, say 3000 loans available\n# , pick 900 of them\ntrials = 20000\nport_size = 900\navailable_loans = 3000\nmodel_results = eval_models(trials, port_size, available_loans, regr, regr_version, test_X, loan_npv_rois, default_series)\n\nmulti_index = []\nfor col in model_results.columns.values:\n multi_index.append((col,regr_version))\n\nappend_results = model_results\nappend_results.columns = pd.MultiIndex.from_tuples(multi_index, names = ['discount_rate', 'model'])\n\ntry:\n results = 
results.join(append_results)\nexcept ValueError:\n results.loc[:, (slice(None), slice('0.1.0','0.1.0'))] = append_results\nresults.sort_index(axis=1, inplace = True)\n\nstore.open()\nstore['results'] = results\nstore.close()\n\nresults.describe()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ultiyuan/test0
lessons/SeparationPrediction.ipynb
gpl-2.0
[ "Separation prediction on general bodies\nIn this final notebook, we will combine the vortex panel method and the boundary layer solver to predict separation on any 2D shape and make drag predictions.\nBoundaryLayer module\nAs with VortexPanel.py, we've made a python file called BoundaryLayer.py which has the march function inside.\nWhat will we need to interface these two modules? VortexPanel doesn't need anything from BoundaryLayer - it just needs a geometry and angle of attack.", "import numpy\nfrom matplotlib import pyplot\n%matplotlib inline\nfrom VortexPanel import Panel,solve_gamma_kutta,plot_flow,make_jukowski,make_circle\n\nalpha = numpy.pi/16\nN = 64\nfoil = make_jukowski(N)\nsolve_gamma_kutta(foil,alpha)\nplot_flow(foil,alpha)", "From the previous notebook we know the function march doesn't need the details of the geometry, but it does need:\n\n$s$: the distance along the boundary layer\n$u_e(x)$: the velocity on the edge of the boundary layer\n$u_e'(x)$: the tangential derivative of $u_e$\n$\\nu$: the kinematic viscosity\n\nThe viscosity is obvious, but we'll need to get the other variables from the potential flow solution.\nQuiz 1\nWhat is the tangential velocity $u_e = \\vec u\\cdot\\hat s$ on the fluid side of panel $p_i$?\n\n$\\left(\\vec U +\\sum_{j=0}^{N-1} \\gamma_j \\vec f_j(x_i,y_i)\\right)\\cdot \\hat s_i$\n$-\\gamma_i$\n$U_\\infty$\n\nHint: Remember that we have set a boundary condition on the body side of the panel.\n\nNext, let's get $s$. Note that a body will form two boundary layers, one on each side. 
We need to identify the starting point of these two flow regions.\nQuiz 2\nWhere is the starting point of the two boundary layers?\n\nThe first and last panels: foil[0], foil[N-1]\nThe panel where $u_e = 0$\nThe left-most panel, foil[N/2]\n\n\nThis makes it straightforward to split the body into the two boundary layer sections:", "# split panels into two sections based on the flow velocity\ndef split_panels(panels):\n # positive velocity defines `top` BL\n top = [p for p in panels if p.gamma<=0] \n # negative defines the `bottom`\n bottom = [p for p in panels if p.gamma>=0]\n # reverse array so panel[0] is stagnation\n bottom = bottom[::-1]\n\n return top,bottom\n\nfoil_top,foil_bottom = split_panels(foil)", "Note that we changed the direction of the bottom array so that it runs from the stagnation point to the trailing edge, in accordance with the flow direction.\nLets plot them to make sure we got it right:", "# plot panels with labels\ndef plot_segment(panels):\n pyplot.figure(figsize=(10,2))\n pyplot.axis([-1.2,1.2,-.3,.3])\n for i,p_i in enumerate(panels): \n p_i.plot()\n if i%10 == 0:\n pyplot.scatter(p_i.xc,p_i.yc)\n pyplot.text(p_i.xc,p_i.yc+0.05, \n 'panel ['+'%i'%i+']',fontsize=12)\n\nplot_segment(foil_top)\n\nplot_segment(foil_bottom)", "Pohlhausen class\nNow we just need to pull out the distance and velocity data from these Panel arrays and pass it to the march function. 
To keep this clean we define a new class Pohlhausen.", "# Pohlhausen Boundary Layer class\nclass Pohlhausen:\n def __init__(self,panels,nu):\n self.u_e = [abs(p.gamma) for p in panels] # tangential velocity\n self.s = numpy.empty_like(self.u_e) # initialize distance array\n self.s[0] = panels[0].S\n for i in range(len(self.s)-1): # fill distance array\n self.s[i+1] = self.s[i]+panels[i].S+panels[i+1].S \n ds = numpy.gradient(self.s) \n self.du_e = numpy.gradient(self.u_e,ds) # compute velocity gradient\n\n self.nu = nu # kinematic viscosity\n self.xc = [p.xc for p in panels] # x and ...\n self.yc = [p.yc for p in panels] # y locations\n \n def march(self):\n # march down the boundary layer until separation\n from BoundaryLayer import march\n self.delta,self.lam,self.iSep = march(self.s,self.u_e,self.du_e,self.nu)\n\n # interpolate values at the separation point\n def sep_interp(y): return numpy.interp( # interpolate function\n 12,-self.lam[self.iSep:self.iSep+2],y[self.iSep:self.iSep+2])\n self.s_sep = sep_interp(self.s)\n self.u_e_sep = sep_interp(self.u_e)\n self.x_sep = sep_interp(self.xc)\n self.y_sep = sep_interp(self.yc)\n self.delta_sep = sep_interp(self.delta)", "A few implementation notes:\n - The distance from the center of panel $i+1$ to panel $i$ is $\\Delta s_{i+1} = S_i+S_{i+1}$, therefore $s_{i+1} = s_i+S_i+S_{i+1}$.\n - The numpy.gradient function is used to get $u_e'$. \n - Pohlhausen.march calls march from the last notebook and then interpolates linearly to get values at the separation point.\nCircle boundary layer\nLet's test this with the case we tried before, the flow around a circle. 
But this time we'll use the external flow from the vortex panel method instead of the analytic solution.\nQuiz 3\nWhy do I keep testing code on cases we've seen before?\n\nI'm terribly forgetful\nNew examples take work\nI want to validate new code by comparing to known answers\n\n\nNumerical fundamental: Validation\nEvery piece of code must be tested against a nontrivial example with a known solution\nFirst let's check that $s$, $u_e$, and $u_e'$ are computed correctly:", "circle = make_circle(N) # set-up circle\nsolve_gamma_kutta(circle) # solve flow\ntop,bottom = split_panels(circle) # split panels\nnu = 1e-5 # set viscosity\ntop = Pohlhausen(top,nu) # get BL inputs\nu_e = 2.*numpy.sin(top.s) # analytic u_e\ndu_e = 2.*numpy.cos(top.s) # analytic du_e\n\n# compare the boundary layer inputs\npyplot.xlabel(r\"$s$\",fontsize=16)\npyplot.plot(top.s,top.u_e, lw=2, label=r'Panel $u_e$')\npyplot.plot(top.s,u_e, lw=2, label=r'Analytic $u_e$')\npyplot.plot(top.s,top.du_e, lw=2, label=r\"Panel $u_e'$\")\npyplot.plot(top.s,du_e, lw=2, label=r\"Analytic $u_e'$\")\npyplot.legend(loc='lower left')", "Those look very good. Now let's march and look at $\\delta$ and the separation point.", "top.march() # solve the boundary layer flow\ni = top.iSep+2 # last point to plot\n\n# plot the boundary layer thickness and separation point\npyplot.ylabel(r'$\\delta$', fontsize=16)\npyplot.xlabel(r'$s$', fontsize=16)\npyplot.plot(top.s[:i],top.delta[:i],lw=2)\npyplot.scatter(top.s_sep,top.delta_sep, s=100, c='r')\npyplot.text(top.s_sep-0.6,top.delta_sep, \n ' separation \\n s='+'%.2f' % top.s_sep,fontsize=12)", "Same answer as the previous notebook. 
Good.\nNow that we know the code is working, let's write a function to set up, solve, and plot the separation points for the boundary layer flow.", "def solve_plot_boundary_layers(panels,alpha=0,nu=1e-5):\n\n # split the panels\n top_panels,bottom_panels = split_panels(panels)\n \n # Set up and solve the top boundary layer\n top = Pohlhausen(top_panels,nu)\n top.march()\n\n # Set up and solve the bottom boundary layer\n bottom = Pohlhausen(bottom_panels,nu)\n bottom.march()\n \n # plot flow with separation points\n plot_flow(panels,alpha)\n pyplot.scatter(top.x_sep, top.y_sep, s=100, c='r')\n pyplot.scatter(bottom.x_sep, bottom.y_sep, s=100, c='g')\n \n return top,bottom\n\ntop,bottom = solve_plot_boundary_layers(circle)", "The red and green dots mark the separation point for the top and bottom boundary layer, respectively.\nSeparation occurs soon after the flow begins to decelerate. Physically, the boundary layer loses energy to friction as it travels over the front of the body (remember how large $C_F$ was?) and cannot cope with the adverse pressure gradient on the back of the body.\nJukowski foil validation\nNow let's write a function to get the complete flow around a Jukowski foil:", "def predict_jukowski_separation(t_c,alpha=0,N=128):\n # set dx to get the correct t/c\n foil = make_jukowski(N,dx=t_c-0.019)\n\n # find and print t/c\n x0 = foil[N/2].xc\n c = foil[0].xc-x0\n t = 2.*numpy.max([p.yc for p in foil])\n print \"t/c = \"+\"%.3f\"%(t/c)\n\n # solve potential flow and boundary layer evolution\n solve_gamma_kutta(foil,alpha)\n top,bottom = solve_plot_boundary_layers(foil,alpha)\n\n # print message\n print (\"Separation at x/c = \"+\"%.3f\"%\n ((top.x_sep-x0)/c)+\" from the leading edge\")\n\npredict_jukowski_separation(0.2,alpha)", "Quiz 4\nWe know $\nu$ doesn't impact separation. 
How can you move the separation points?\n\nChange the foil thickness\nChange the angle of attack\nChange the resolution\n\n\nWe can make sure the behavior above is correct by validating against the analytic solution for simple geometries. Here is a summary figure from Chapter 3 of Hoerner's Fluid-Dynamic Drag\n\n\n\nThere are two Jukowski examples: $t/c=0.15$ which separates at $x/c\approx0.49$ from the leading edge, and $t/c=0.17$, which separates at $x/c\approx0.39$.", "predict_jukowski_separation(t_c=0.15)", "The $t/c=0.15$ case matches very well with Hoerner's picture.", "predict_jukowski_separation(t_c=0.17)", "Quiz 5\nWhat could be the cause of the ~$15\%$ discrepancy in the $t/c=0.17$ case?\n\nError in Hoerner\nError in Pohlhausen boundary layer ODE\nError in numerical method (VortexPanel, BoundaryLayer, etc)\n\nEllipse validation\nLet's see how we fare in the ellipse cases. From the Hoerner image I estimate:\n$t/c$| 1/2 | 1/4 | 1/8 \n---|---|---|---\n$x/c$| $0.75$ | $0.85$ | $0.92$", "def predict_ellipse_separation(t_c,N=128,alpha=0):\n ellipse = make_circle(N,t_c)\n print \"t/c = \"+\"%.3f\"%(t_c)\n\n # solve potential flow and boundary layer evolution\n solve_gamma_kutta(ellipse,alpha)\n top,bottom = solve_plot_boundary_layers(ellipse,alpha)\n\n # print message\n print (\"Separation at x/c = \"+\"%.3f\"%\n ((top.x_sep+1)/2.)+\" from the leading edge\") \n\npredict_ellipse_separation(t_c=0.5)\n\npredict_ellipse_separation(t_c=0.25)\n\npredict_ellipse_separation(t_c=0.125)", "So I get the feeling Hoerner has a typo... that's the first one I've found.\nPressure force estimates\nNow that we can predict the separation point, we can make non-zero pressure force estimates.\nThe pressure force on the body is\n$$\vec F_p = \oint_{\cal S} p \hat n ds$$\nwhere $\cal S$ is the body surface and $\hat n$ is the normal to the surface. 
\nQuiz 6\nWhat is the equation for the pressure coefficient $c_p(s)$?\n\n$c_p(s) = 1-4\sin^2(s)$\n$c_p(s) = 1-u_e^2(s)/U_\infty^2$\n$c_p(s) = (p(s)-p_\infty)/(\frac 12\rho U_\infty^2)$\n\n\nTherefore, the drag coefficient is\n$$C_D = \frac{-F_x}{\frac 12 \rho U^2_\infty A} = \frac1w\oint_{\cal S} c_p s_y ds$$\nwhere $A$ is the 2D projected area of the body (the width) and $s_y = -n_x$.\nUsing the vortex panel method we can determine the potential flow solution for $c_p$, but what does $c_p$ look like in a real flow with separation? \n\n\n\nI've sketched the results for the flow around a circular cylinder above. The measured pressure at the front of a body in a viscous fluid is fairly well predicted by potential flow. \nHowever, the pressure coefficient completely deviates from the potential flow prediction near the point of separation. Indeed it remains essentially constant in the separated flow region. \nQuiz 7\nWhat would be a simple way to estimate the drag on a body?\n\nIntegrate $c_p$ from the vortex panel method.\nSet $c_p(s) = c_p(s_{sep})$ for $s>s_{sep}$, and then integrate.\n\n\nYour turn\nCompute $C_D$ for the circle and compare to the laminar experimental value of ~$1$.", "# your code here", "Ignore the line below - it just loads the style sheet.", "from IPython.core.display import HTML\ndef css_styling():\n styles = open('../styles/custom.css', 'r').read()\n return HTML(styles)\ncss_styling()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
joekasp/spectro
DEMO.ipynb
mit
[ "Analysis of 2D-IR spectroscopy\nThis first cell loads the packages and code that are required.", "%matplotlib inline\nfrom ipywidgets import *\nfrom IPython.display import display\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.optimize import curve_fit\n\nfrom util import *\nfrom analysis import *\nimport fits\nfrom plot import *\nfrom plot3d import *", "Sample Dataset\nThe sample dataset here contains experimental 2D-IR spectra taken by the Khalil group for the N-O stretch of sodium nitroprusside in a variety of solvents. These data are published in J Phys. Chem. A 2013, 117, 6234-6243. The following is a list of names and the abbreviations used here:\n- 'D2O' (deuterium oxide/deuterated water)\n- 'DMSO' (dimethyl sulfoxide)\n- 'EG' (ethylene glycol)\n- 'EtOH' (ethanol)\n- 'FA' (formamide)\n- 'H2O' (water)\n- 'MeOH' (methanol)\nThe SOLVENT_NAME variable below can be modified to load data from a different solvent.", "SOLVENT_NAME = 'H2O'\ndata,w1,w3,tau2 = loadSolvent(SOLVENT_NAME)", "Plotting the Original Data\nThe original data is 4-dimensional, with two frequency axes, intensity, and time.\nBelow one can view the data as a sequence of surfaces in time. 
Due to computational considerations,\nit is necessary to use the update3d button to show the surface for a new time.", "%matplotlib\n%matplotlib notebook\ndef update3d(TimePoint=1):\n plt.close()\n my_fig = plt.figure(figsize=(9,5),num=SOLVENT_NAME)\n ax = surf3d(w1,w3,data[:,:,TimePoint],window_title=SOLVENT_NAME,ax_title='Time: '+str(tau2[TimePoint,0])+' fs',fig=my_fig,azim=azim,elev=elev)\n\nazim = -50\nelev = 30\ninteract_manual(update3d,TimePoint=IntSlider(min=0,max=len(tau2)))", "Single images at any time point can also be plotted.", "TIME_FS = 0\n%matplotlib inline\nshow_data(data, w1, w3, tau2, TIME_FS)", "A set of 3 images at different time points can be plotted.", "TIMES_FS = [0, 1500, 5000]\n%matplotlib inline\nshow_3_data(data, w1, w3, tau2, TIMES_FS)", "Performing Decompositions\nA variety of decompositions of the data are possible. Here the PCA is explored. Both the NORMALIZE and N_COMPONENTS variables can be changed to affect the behavior. NORMALIZE changes whether the images in the set are normalized prior to performing PCA. Without normalization, the first PCA component tends to reflect the peak intensity; with normalization, all of the components relate to the peak shape. 
N_COMPONENTS represents the number of components used in the decomposition.", "NORMALIZE = True\nN_COMPONENTS = 10\nANALYSIS_TYPE = 'pca' # 'pca', 'fa', or 'ica'\ncomp = get_components(data, normalize=NORMALIZE, n_comp=N_COMPONENTS, analysis_type=ANALYSIS_TYPE)\nproj = get_projections(data, normalize=NORMALIZE, n_comp=N_COMPONENTS, analysis_type=ANALYSIS_TYPE)\n%matplotlib inline\nshow_3_components(comp, w1, w3)", "Individual components can also be visualized.", "SELECTED_COMPONENT = 5\n%matplotlib inline\nshow_component(comp, w1, w3, SELECTED_COMPONENT)", "Three selected components can be visualized using the parameter COMPS, a list of components.", "SELECTED_COMPONENTS = [1, 5, 10]\n%matplotlib inline\nshow_3_components(comp, w1, w3, SELECTED_COMPONENTS)", "Components in Time\nThe time variation of the components provide information about how different modes are involved in the dynamics of the system.", "SELECTED_COMPONENT = 1\n%matplotlib inline\nshow_contribution(tau2,proj,SELECTED_COMPONENT)\nshow_component(comp,w1,w3,SELECTED_COMPONENT)", "Curve Fitting\nNow that we have time series data for each component we can use standard curve fits for linear and non-linear functions to estimate parameters such as time constants. The COMP parameter selects which component to use, and the T_SCALE parameter provides the units of the time (tau2) variable in fs. Thus a value of 1000 will correspond to ps. Adjusting this parameter is sometimes necessary to avoid underflow or overflow conditions. The p0 variable is used to specify the initial guess for a non-linear fit, and can be adjusted if the fit results are not satisfactory.\nThe list of non-linear fits available here are:\n- single exponential ... $A e^{-Bt} + C$\n- double exponential ... $A_1 e^{-B_1t} + A_2 e^{-B_2t} + C$\n- sine .... $A \\sin(Bt) + C$\nFor the exponential decays, the relevant parameter is the characteristic decay time, given by 1/B. 
Single- and double-exponential fits of the first component's dynamics are shown below.", "SELECTED_COMPONENT = 1\nT_SCALE = 1000\n\np0 = np.abs(proj[0,SELECTED_COMPONENT-1] - proj[-1,SELECTED_COMPONENT-1]), 1, proj[-1,SELECTED_COMPONENT-1] # initial guess\npopt, pcov = curve_fit(fits.my_exponential, tau2.ravel()*(1/T_SCALE), proj[:,SELECTED_COMPONENT-1], p0, maxfev=1000)\n#print('A='+str(popt[0]))\n#print('B='+str(popt[1]))\n#print('C='+str(popt[2]))\n#print()\nprint(\"Decay time: \" + str(1/popt[1]) + \" ps\")\n\n%matplotlib inline\nshow_exp_fit(tau2, proj, SELECTED_COMPONENT, popt, T_SCALE)\n\nSELECTED_COMPONENT = 1\nT_SCALE = 1000\n\np0 = 1, 1, 1, 1, 1 # initial guess\npopt, pcov = curve_fit(fits.my_double_exp, tau2.ravel()*(1/T_SCALE), proj[:,SELECTED_COMPONENT-1], p0, maxfev=1000)\nprint(\"Decay time 1: \" + str(1/popt[2]) + \" ps\")\nprint(\"Decay time 2: \" + str(1/popt[3]) + \" ps\")\n\n%matplotlib inline\nshow_exp_fit(tau2, proj, SELECTED_COMPONENT, popt, T_SCALE)", "End of Notebook", "#reset the pyplot backend\n%matplotlib\n%matplotlib notebook" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/vertex-ai-samples
notebooks/community/gapic/custom/showcase_custom_text_binary_classification_online_pipeline.ipynb
apache-2.0
[ "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Vertex client library: Custom training text binary classification model with pipeline for online prediction with training pipeline\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_online_pipeline.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/custom/showcase_custom_text_binary_classification_online_pipeline.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to train and deploy a custom text binary classification model for online prediction, using a training pipeline.\nDataset\nThe dataset used for this tutorial is the IMDB Movie Reviews from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. 
The trained model predicts whether a review is positive or negative in sentiment.\nObjective\nIn this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex client library, and then do a prediction on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex custom job for training a model.\nCreate a TrainingPipeline resource.\nTrain a TensorFlow model with the TrainingPipeline resource.\nRetrieve and load the model artifacts.\nView the model evaluation.\nUpload the model as a Vertex Model resource.\nDeploy the Model resource to a serving Endpoint resource.\nMake a prediction.\nUndeploy the Model resource.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.", "import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG", "Install the latest GA version of google-cloud-storage library as well.", "! pip3 install -U google-cloud-storage $USER_FLAG", "Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.", "if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)", "Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. 
In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.", "PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID", "Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation", "REGION = \"us-central1\" # @param {type: \"string\"}", "Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. 
To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.", "from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")", "Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.", "# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. 
This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''", "Create a Cloud Storage bucket\nThe following steps are required, regardless of your notebook environment.\nWhen you submit a custom training job using the Vertex client library, you upload a Python package\ncontaining your training code to a Cloud Storage bucket. Vertex runs\nthe code from this package. In this tutorial, Vertex also saves the\ntrained model that results from your job in the same bucket. You can then\ncreate an Endpoint resource based on this output in order to serve\nonline predictions.\nSet the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.", "BUCKET_NAME = \"gs://[your-bucket-name]\" # @param {type:\"string\"}\n\nif BUCKET_NAME == \"\" or BUCKET_NAME is None or BUCKET_NAME == \"gs://[your-bucket-name]\":\n BUCKET_NAME = \"gs://\" + PROJECT_ID + \"aip-\" + TIMESTAMP", "Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.", "! gsutil mb -l $REGION $BUCKET_NAME", "Finally, validate access to your Cloud Storage bucket by examining its contents:", "! 
gsutil ls -al $BUCKET_NAME", "Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.", "import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value", "Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.", "# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION", "CustomJob constants\nSet constants unique to CustomJob training:\n\nDataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.", "CUSTOM_TASK_GCS_PATH = (\n \"gs://google-cloud-aiplatform/schema/trainingjob/definition/custom_task_1.0.0.yaml\"\n)", "Hardware Accelerators\nSet the hardware accelerators (e.g., GPU), if any, for training and prediction.\nSet the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. 
For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:\n(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)\n\nFor GPU, available accelerators include:\n - aip.AcceleratorType.NVIDIA_TESLA_K80\n - aip.AcceleratorType.NVIDIA_TESLA_P100\n - aip.AcceleratorType.NVIDIA_TESLA_P4\n - aip.AcceleratorType.NVIDIA_TESLA_T4\n - aip.AcceleratorType.NVIDIA_TESLA_V100\nOtherwise specify (None, None) to use a container image to run on a CPU.\nNote: TF releases before 2.3 for GPU support will fail to load the custom model in this tutorial. It is a known issue and fixed in TF 2.3 -- which is caused by static graph ops that are generated in the serving function. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.", "if os.getenv(\"IS_TESTING_TRAIN_GPU\"):\n TRAIN_GPU, TRAIN_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_TRAIN_GPU\")),\n )\nelse:\n TRAIN_GPU, TRAIN_NGPU = (None, None)\n\nif os.getenv(\"IS_TESTING_DEPOLY_GPU\"):\n DEPLOY_GPU, DEPLOY_NGPU = (\n aip.AcceleratorType.NVIDIA_TESLA_K80,\n int(os.getenv(\"IS_TESTING_DEPOLY_GPU\")),\n )\nelse:\n DEPLOY_GPU, DEPLOY_NGPU = (None, None)", "Container (Docker) image\nNext, we will set the Docker container images for training and prediction\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/training/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/training/tf-cpu.2-1:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/training/tf-cpu.2-2:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-2:latest\nTensorFlow 2.3\ngcr.io/cloud-aiplatform/training/tf-cpu.2-3:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-3:latest\nTensorFlow 
2.4\ngcr.io/cloud-aiplatform/training/tf-cpu.2-4:latest\ngcr.io/cloud-aiplatform/training/tf-gpu.2-4:latest\nXGBoost\ngcr.io/cloud-aiplatform/training/xgboost-cpu.1-1\nScikit-learn\ngcr.io/cloud-aiplatform/training/scikit-learn-cpu.0-23:latest\nPytorch\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-4:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-5:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-6:latest\ngcr.io/cloud-aiplatform/training/pytorch-cpu.1-7:latest\n\nFor the latest list, see Pre-built containers for training.\n\nTensorFlow 1.15\ngcr.io/cloud-aiplatform/prediction/tf-cpu.1-15:latest\ngcr.io/cloud-aiplatform/prediction/tf-gpu.1-15:latest\nTensorFlow 2.1\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-1:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-1:latest\nTensorFlow 2.2\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-2:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-2:latest\nTensorFlow 2.3\ngcr.io/cloud-aiplatform/prediction/tf2-cpu.2-3:latest\ngcr.io/cloud-aiplatform/prediction/tf2-gpu.2-3:latest\nXGBoost\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-2:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.1-1:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-90:latest\ngcr.io/cloud-aiplatform/prediction/xgboost-cpu.0-82:latest\nScikit-learn\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-23:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-22:latest\ngcr.io/cloud-aiplatform/prediction/sklearn-cpu.0-20:latest\n\nFor the latest list, see Pre-built containers for prediction", "if os.getenv(\"IS_TESTING_TF\"):\n TF = os.getenv(\"IS_TESTING_TF\")\nelse:\n TF = \"2-1\"\n\nif TF[0] == \"2\":\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = \"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf2-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf2-cpu.{}\".format(TF)\nelse:\n if TRAIN_GPU:\n TRAIN_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n TRAIN_VERSION = 
\"tf-cpu.{}\".format(TF)\n if DEPLOY_GPU:\n DEPLOY_VERSION = \"tf-gpu.{}\".format(TF)\n else:\n DEPLOY_VERSION = \"tf-cpu.{}\".format(TF)\n\nTRAIN_IMAGE = \"gcr.io/cloud-aiplatform/training/{}:latest\".format(TRAIN_VERSION)\nDEPLOY_IMAGE = \"gcr.io/cloud-aiplatform/prediction/{}:latest\".format(DEPLOY_VERSION)\n\nprint(\"Training:\", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)\nprint(\"Deployment:\", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)", "Machine Type\nNext, set the machine type to use for training and prediction.\n\nSet the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for for training and prediction.\nmachine type\nn1-standard: 3.75GB of memory per vCPU.\nn1-highmem: 6.5GB of memory per vCPU\nn1-highcpu: 0.9 GB of memory per vCPU\n\n\nvCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]\n\nNote: The following is not supported for training:\n\nstandard: 2 vCPUs\nhighcpu: 2, 4 and 8 vCPUs\n\nNote: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.", "if os.getenv(\"IS_TESTING_TRAIN_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_TRAIN_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nTRAIN_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Train machine type\", TRAIN_COMPUTE)\n\nif os.getenv(\"IS_TESTING_DEPLOY_MACHINE\"):\n MACHINE_TYPE = os.getenv(\"IS_TESTING_DEPLOY_MACHINE\")\nelse:\n MACHINE_TYPE = \"n1-standard\"\n\nVCPU = \"4\"\nDEPLOY_COMPUTE = MACHINE_TYPE + \"-\" + VCPU\nprint(\"Deploy machine type\", DEPLOY_COMPUTE)", "Tutorial\nNow you are ready to start creating your own custom model and training for IMDB Movie Reviews.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. 
So set them all up upfront.\n\nModel Service for Model resources.\nPipeline Service for training.\nEndpoint Service for deployment.\nJob Service for batch jobs and custom training.\nPrediction Service for serving.", "# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)", "Train a model\nThere are two ways you can train a custom model using a container image:\n\n\nUse a Google Cloud prebuilt container. If you use a prebuilt container, you will additionally specify a Python package to install into the container image. This Python package contains your code for training a custom model.\n\n\nUse your own custom container image. If you use your own container, the container needs to contain your code for training a custom model.\n\n\nPrepare your custom job specification\nNow that your clients are ready, your first step is to create a Job Specification for your custom training job. 
The job specification will consist of the following:\n\nworker_pool_spec : The specification of the type of machine(s) you will use for training and how many (single or distributed)\npython_package_spec : The specification of the Python package to be installed with the pre-built container.\n\nPrepare your machine specification\nNow define the machine specification for your custom training job. This tells Vertex what type of machine instance to provision for the training.\n - machine_type: The type of GCP instance to provision -- e.g., n1-standard-8.\n - accelerator_type: The type, if any, of hardware accelerator. In this tutorial if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU.\n - accelerator_count: The number of accelerators.", "if TRAIN_GPU:\n machine_spec = {\n \"machine_type\": TRAIN_COMPUTE,\n \"accelerator_type\": TRAIN_GPU,\n \"accelerator_count\": TRAIN_NGPU,\n }\nelse:\n machine_spec = {\"machine_type\": TRAIN_COMPUTE, \"accelerator_count\": 0}", "Prepare your disk specification\n(optional) Now define the disk specification for your custom training job. This tells Vertex what type and size of disk to provision in each machine instance for the training.\n\nboot_disk_type: Either SSD or Standard. SSD is faster, and Standard is less expensive. Defaults to SSD.\nboot_disk_size_gb: Size of disk in GB.", "DISK_TYPE = \"pd-ssd\" # [ pd-ssd, pd-standard]\nDISK_SIZE = 200 # GB\n\ndisk_spec = {\"boot_disk_type\": DISK_TYPE, \"boot_disk_size_gb\": DISK_SIZE}", "Define the worker pool specification\nNext, you define the worker pool specification for your custom training job. 
The worker pool specification will consist of the following:\n\nreplica_count: The number of instances to provision of this machine type.\nmachine_spec: The hardware specification.\n\ndisk_spec : (optional) The disk storage specification.\n\n\npython_package: The Python training package to install on the VM instance(s) and which Python module to invoke, along with command line arguments for the Python module.\n\n\nLet's dive deeper now into the python package specification:\n-executor_image_spec: This is the docker image which is configured for your custom training job.\n-package_uris: This is a list of the locations (URIs) of your python training packages to install on the provisioned instance. The locations need to be in a Cloud Storage bucket. These can be either individual python files or a zip (archive) of an entire package. In the latter case, the job service will unzip (unarchive) the contents into the docker image.\n-python_module: The Python module (script) to invoke for running the custom training job. In this example, you will be invoking trainer.task.py -- note that it was not necessary to append the .py suffix.\n-args: The command line arguments to pass to the corresponding Python module. In this example, you will be setting:\n - \"--model-dir=\" + MODEL_DIR : The Cloud Storage location where to store the model artifacts. There are two ways to tell the training script where to save the model artifacts:\n - direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or\n - indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). 
In this case, you tell the service the model artifact location in the job specification.\n - \"--epochs=\" + EPOCHS: The number of epochs for training.\n - \"--steps=\" + STEPS: The number of steps (batches) per epoch.\n - \"--distribute=\" + TRAIN_STRATEGY : The training distribution strategy to use for single or distributed training.\n - \"single\": single device.\n - \"mirror\": all GPU devices on a single compute instance.\n - \"multi\": all GPU devices on all compute instances.", "JOB_NAME = \"custom_job_\" + TIMESTAMP\nMODEL_DIR = \"{}/{}\".format(BUCKET_NAME, JOB_NAME)\n\nif not TRAIN_NGPU or TRAIN_NGPU < 2:\n TRAIN_STRATEGY = \"single\"\nelse:\n TRAIN_STRATEGY = \"mirror\"\n\nEPOCHS = 20\nSTEPS = 100\n\nDIRECT = True\nif DIRECT:\n CMDARGS = [\n \"--model-dir=\" + MODEL_DIR,\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\nelse:\n CMDARGS = [\n \"--epochs=\" + str(EPOCHS),\n \"--steps=\" + str(STEPS),\n \"--distribute=\" + TRAIN_STRATEGY,\n ]\n\nworker_pool_spec = [\n {\n \"replica_count\": 1,\n \"machine_spec\": machine_spec,\n \"disk_spec\": disk_spec,\n \"python_package_spec\": {\n \"executor_image_uri\": TRAIN_IMAGE,\n \"package_uris\": [BUCKET_NAME + \"/trainer_imdb.tar.gz\"],\n \"python_module\": \"trainer.task\",\n \"args\": CMDARGS,\n },\n }\n]", "Examine the training package\nPackage layout\nBefore you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.\n\nPKG-INFO\nREADME.md\nsetup.cfg\nsetup.py\ntrainer\n__init__.py\ntask.py\n\nThe files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.\nThe file trainer/task.py is the Python script for executing the custom training job.
Note, when we referred to it in the worker pool specification, we replaced the directory slash with a dot (trainer.task) and dropped the file suffix (.py).\nPackage Assembly\nIn the following cells, you will assemble the training package.", "# Make folder for Python training script\n! rm -rf custom\n! mkdir custom\n\n# Add package information\n! touch custom/README.md\n\nsetup_cfg = \"[egg_info]\\n\\ntag_build =\\n\\ntag_date = 0\"\n! echo \"$setup_cfg\" > custom/setup.cfg\n\nsetup_py = \"import setuptools\\n\\nsetuptools.setup(\\n\\n install_requires=[\\n\\n 'tensorflow_datasets==1.3.0',\\n\\n ],\\n\\n packages=setuptools.find_packages())\"\n! echo \"$setup_py\" > custom/setup.py\n\npkg_info = \"Metadata-Version: 1.0\\n\\nName: IMDB Movie Reviews text binary classification\\n\\nVersion: 0.0.0\\n\\nSummary: Demonstration training script\\n\\nHome-page: www.google.com\\n\\nAuthor: Google\\n\\nAuthor-email: aferlitsch@google.com\\n\\nLicense: Public\\n\\nDescription: Demo\\n\\nPlatform: Vertex\"\n! echo \"$pkg_info\" > custom/PKG-INFO\n\n# Make the training subfolder\n! mkdir custom/trainer\n! touch custom/trainer/__init__.py", "Task.py contents\nIn the next cell, you write the contents of the training script task.py. I won't go into detail; it's there for you to browse.
In summary:\n\nGets the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.\nLoads IMDB Movie Reviews dataset from TF Datasets (tfds).\nBuilds a simple RNN model using TF.Keras model API.\nCompiles the model (compile()).\nSets a training distribution strategy according to the argument args.distribute.\nTrains the model (fit()) with epochs specified by args.epochs.\nSaves the trained model (save(args.model_dir)) to the specified model directory.", "%%writefile custom/trainer/task.py\n# Single, Mirror and Multi-Machine Distributed Training for IMDB\n\nimport tensorflow_datasets as tfds\nimport tensorflow as tf\nfrom tensorflow.python.client import device_lib\nimport argparse\nimport os\nimport sys\ntfds.disable_progress_bar()\n\nparser = argparse.ArgumentParser()\nparser.add_argument('--model-dir', dest='model_dir',\n default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')\nparser.add_argument('--lr', dest='lr',\n default=1e-4, type=float,\n help='Learning rate.')\nparser.add_argument('--epochs', dest='epochs',\n default=20, type=int,\n help='Number of epochs.')\nparser.add_argument('--steps', dest='steps',\n default=100, type=int,\n help='Number of steps per epoch.')\nparser.add_argument('--distribute', dest='distribute', type=str, default='single',\n help='distributed training strategy')\nargs = parser.parse_args()\n\nprint('Python Version = {}'.format(sys.version))\nprint('TensorFlow Version = {}'.format(tf.__version__))\nprint('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))\nprint(device_lib.list_local_devices())\n\n# Single Machine, single compute device\nif args.distribute == 'single':\n if tf.test.is_gpu_available():\n strategy = tf.distribute.OneDeviceStrategy(device=\"/gpu:0\")\n else:\n strategy = tf.distribute.OneDeviceStrategy(device=\"/cpu:0\")\n# Single Machine, multiple compute device\nelif args.distribute == 'mirror':\n 
strategy = tf.distribute.MirroredStrategy()\n# Multiple Machine, multiple compute device\nelif args.distribute == 'multi':\n strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()\n\n# Multi-worker configuration\nprint('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))\n\n# Preparing dataset\nBUFFER_SIZE = 10000\nBATCH_SIZE = 64\n\n\ndef make_datasets():\n dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,\n as_supervised=True)\n train_dataset, test_dataset = dataset['train'], dataset['test']\n encoder = info.features['text'].encoder\n\n padded_shapes = ([None],())\n return train_dataset.shuffle(BUFFER_SIZE).padded_batch(BATCH_SIZE, padded_shapes), encoder\n\n\ntrain_dataset, encoder = make_datasets()\n\n# Build the Keras model\ndef build_and_compile_rnn_model(encoder):\n model = tf.keras.Sequential([\n tf.keras.layers.Embedding(encoder.vocab_size, 64),\n tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),\n tf.keras.layers.Dense(64, activation='relu'),\n tf.keras.layers.Dense(1, activation='sigmoid')\n ])\n model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),\n optimizer=tf.keras.optimizers.Adam(args.lr),\n metrics=['accuracy'])\n return model\n\n\nwith strategy.scope():\n # Creation of dataset, and model building/compiling need to be within\n # `strategy.scope()`.\n model = build_and_compile_rnn_model(encoder)\n\n# Train the model\nmodel.fit(train_dataset, epochs=args.epochs, steps_per_epoch=args.steps)\nmodel.save(args.model_dir)", "Store training script on your Cloud Storage bucket\nNext, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.", "! rm -f custom.tar custom.tar.gz\n! tar cvf custom.tar custom\n! gzip custom.tar\n! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_imdb.tar.gz", "Train the model using a TrainingPipeline resource\nNow start training of your custom training job using a training pipeline on Vertex. 
To train your custom model, do the following steps:\n\nCreate a Vertex TrainingPipeline resource for the Dataset resource.\nExecute the pipeline to start the training.\n\nCreate a TrainingPipeline resource\nYou may ask, what do we use a pipeline for? We typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:\n\nBeing reusable for subsequent training jobs.\nCan be containerized and run as a batch job.\nCan be distributed.\nAll the steps are associated with the same pipeline job for tracking progress.\n\nThe training_pipeline specification\nFirst, you need to describe a pipeline specification. Let's look into the minimal requirements for constructing a training_pipeline specification for a custom job:\n\ndisplay_name: A human readable name for the pipeline job.\ntraining_task_definition: The training task schema.\ntraining_task_inputs: A dictionary describing the requirements for the training job.\nmodel_to_upload: A dictionary describing the specification for the (uploaded) Vertex custom Model resource.\ndisplay_name: A human readable name for the Model resource.\nartifact_uri: The Cloud Storage path where the model artifacts are stored in SavedModel format.\ncontainer_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the custom model will serve predictions.", "from google.protobuf import json_format\nfrom google.protobuf.struct_pb2 import Value\n\nMODEL_NAME = \"custom_pipeline-\" + TIMESTAMP\nPIPELINE_DISPLAY_NAME = \"custom-training-pipeline\" + TIMESTAMP\n\ntraining_task_inputs = json_format.ParseDict(\n {\"workerPoolSpecs\": worker_pool_spec}, Value()\n)\npipeline = {\n \"display_name\": PIPELINE_DISPLAY_NAME,\n \"training_task_definition\": CUSTOM_TASK_GCS_PATH,\n \"training_task_inputs\": training_task_inputs,\n \"model_to_upload\": {\n
\"display_name\": PIPELINE_DISPLAY_NAME + \"-model\",\n \"artifact_uri\": MODEL_DIR,\n \"container_spec\": {\"image_uri\": DEPLOY_IMAGE},\n },\n}\n\nprint(pipeline)", "Create the training pipeline\nUse this helper function create_pipeline, which takes the following parameter:\n\ntraining_pipeline: the full specification for the pipeline training job.\n\nThe helper function calls the pipeline client service's create_pipeline method, which takes the following parameters:\n\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ntraining_pipeline: The full specification for the pipeline training job.\n\nThe helper function will return the Vertex fully qualified identifier assigned to the training pipeline, which is saved as pipeline.name.", "def create_pipeline(training_pipeline):\n\n try:\n pipeline = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n )\n print(pipeline)\n except Exception as e:\n print(\"exception:\", e)\n return None\n return pipeline\n\n\nresponse = create_pipeline(pipeline)", "Now save the unique identifier of the training pipeline you created.", "# The full unique ID for the pipeline\npipeline_id = response.name\n# The short numeric ID for the pipeline\npipeline_short_id = pipeline_id.split(\"/\")[-1]\n\nprint(pipeline_id)", "Get information on a training pipeline\nNow get pipeline information for just this training pipeline instance. 
The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter:\n\nname: The Vertex fully qualified pipeline identifier.\n\nWhen the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.", "def get_training_pipeline(name, silent=False):\n response = clients[\"pipeline\"].get_training_pipeline(name=name)\n if silent:\n return response\n\n print(\"pipeline\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" state:\", response.state)\n print(\" training_task_definition:\", response.training_task_definition)\n print(\" training_task_inputs:\", dict(response.training_task_inputs))\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", dict(response.labels))\n return response\n\n\nresponse = get_training_pipeline(pipeline_id)", "Deployment\nTraining the above model may take upwards of 20 minutes.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it.
You can get this from the returned pipeline instance as the field model_to_upload.name.", "while True:\n response = get_training_pipeline(pipeline_id, True)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_to_deploy_id = None\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n raise Exception(\"Training Job Failed\")\n else:\n model_to_deploy = response.model_to_upload\n model_to_deploy_id = model_to_deploy.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(\"model to deploy:\", model_to_deploy_id)\n\nif not DIRECT:\n MODEL_DIR = MODEL_DIR + \"/model\"\nmodel_path_to_deploy = MODEL_DIR", "Load the saved model\nYour model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.\nTo load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.", "import tensorflow as tf\n\nmodel = tf.keras.models.load_model(MODEL_DIR)", "Evaluate the model\nNow let's find out how good the model is.\nLoad evaluation data\nYou will load the IMDB Movie Review test (holdout) data from tfds.datasets, using the method load(). This will return the dataset as a tuple of two elements. The first element is the dataset and the second is information on the dataset, which will contain the predefined vocabulary encoder. The encoder will convert words into a numerical embedding, which was pretrained and used in the custom training script.\nWhen you trained the model, you needed to set a fixed input length for your text.
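The padding idea can be illustrated without TensorFlow. This hypothetical helper right-pads every sequence in a batch with zeros to the length of the longest sequence, which is what per-batch padding does for the text inputs here:

```python
# Hypothetical illustration of per-batch padding: right-pad each
# sequence with 0 to the longest length in the batch.
def pad_batch(sequences):
    width = max(len(s) for s in sequences)
    return [s + [0] * (width - len(s)) for s in sequences]

print(pad_batch([[1, 2], [3, 4, 5]]))  # -> [[1, 2, 0], [3, 4, 5]]
```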
For forward feeding batches, the padded_batch() method of the corresponding tf.dataset was used to pad each input sequence into the same shape for a batch.\nFor the test data, you also need to call the padded_batch() method accordingly.", "import tensorflow_datasets as tfds\n\ndataset, info = tfds.load(\"imdb_reviews/subwords8k\", with_info=True, as_supervised=True)\ntest_dataset = dataset[\"test\"]\nencoder = info.features[\"text\"].encoder\n\nBATCH_SIZE = 64\npadded_shapes = ([None], ())\ntest_dataset = test_dataset.padded_batch(BATCH_SIZE, padded_shapes)", "Perform the model evaluation\nNow evaluate how well the model in the custom job did.", "model.evaluate(test_dataset)", "Upload the model for serving\nNext, you will upload your TF.Keras model from the custom job to Vertex Model service, which will create a Vertex Model resource for your custom model. During upload, you need to define a serving function to convert data to the format your model expects. If you send encoded data to Vertex, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.\nHow does the serving function work\nWhen you send a request to an online prediction server, the request is received by an HTTP server. The HTTP server extracts the prediction request from the HTTP request content body. The extracted prediction request is forwarded to the serving function.
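The request flow just described can be sketched in plain Python. All names here are hypothetical stand-ins (the real serving function is built from TF graph operations), but the shape of the flow is the same: decode the request, run the model, package the response.

```python
# Hypothetical sketch of the serving flow: preprocess -> model -> postprocess.
def preprocess(raw):
    # decode the raw request payload into model inputs
    return [float(x) for x in raw.split(",")]

def model_fn(inputs):
    # stand-in for the underlying model
    return sum(inputs) / len(inputs)

def postprocess(output):
    # package the model output for the HTTP response
    return {"prediction": output}

def serving_fn(raw):
    return postprocess(model_fn(preprocess(raw)))

print(serving_fn("1,2,3"))  # -> {'prediction': 2.0}
```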
For Google pre-built prediction containers, the request content is passed to the serving function as a tf.string.\nThe serving function consists of two parts:\n\npreprocessing function:\nConverts the input (tf.string) to the input shape and data type of the underlying model (dynamic graph).\nPerforms the same preprocessing of the data that was done during training the underlying model -- e.g., normalizing, scaling, etc.\npost-processing function:\nConverts the model output to the format expected by the receiving application -- e.g., compresses the output.\nPackages the output for the receiving application -- e.g., add headings, make JSON object, etc.\n\nBoth the preprocessing and post-processing functions are converted to static graphs which are fused to the model. The output from the underlying model is passed to the post-processing function. The post-processing function passes the converted/packaged output back to the HTTP server. The HTTP server returns the output as the HTTP response content.\nOne consideration when building serving functions for TF.Keras models is that they run as static graphs. That means, you cannot use TF graph operations that require a dynamic graph.
If you do, you will get an error during compilation of the serving function indicating that you are using an EagerTensor, which is not supported.\nGet the serving function signature\nYou can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.\nWhen making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.", "loaded = tf.saved_model.load(model_path_to_deploy)\n\nserving_input = list(\n loaded.signatures[\"serving_default\"].structured_input_signature[1].keys()\n)[0]\nprint(\"Serving function input:\", serving_input)", "Upload the model\nUse this helper function upload_model to upload your model, stored in SavedModel format, up to the Model service, which will instantiate a Vertex Model resource instance for your model. Once you've done that, you can use the Model resource instance in the same way as any other Vertex Model resource instance, such as deploying to an Endpoint resource for serving predictions.\nThe helper function takes the following parameters:\n\ndisplay_name: A human readable name for the Endpoint service.\nimage_uri: The container image for the model deployment.\nmodel_uri: The Cloud Storage path to our SavedModel artifact. For this tutorial, this is the Cloud Storage location where the trainer/task.py saved the model artifacts, which we specified in the variable MODEL_DIR.\n\nThe helper function calls the Model client service's method upload_model, which takes the following parameters:\n\nparent: The Vertex location root path for Dataset, Model and Endpoint resources.\nmodel: The specification for the Vertex Model resource instance.\n\nLet's now dive deeper into the Vertex model specification model.
This is a dictionary object that consists of the following fields:\n\ndisplay_name: A human readable name for the Model resource.\nmetadata_schema_uri: Since your model was built without a Vertex Dataset resource, you will leave this blank ('').\nartifact_uri: The Cloud Storage path where the model is stored in SavedModel format.\ncontainer_spec: This is the specification for the Docker container that will be installed on the Endpoint resource, from which the Model resource will serve predictions. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\n\nUploading a model into a Vertex Model resource returns a long running operation, since it may take a few moments. You call response.result(), which is a synchronous call and will return when the Vertex Model resource is ready.\nThe helper function returns the Vertex fully qualified identifier for the corresponding Vertex Model instance upload_model_response.model. You will save the identifier for subsequent steps in the variable model_to_deploy_id.", "IMAGE_URI = DEPLOY_IMAGE\n\n\ndef upload_model(display_name, image_uri, model_uri):\n model = {\n \"display_name\": display_name,\n \"metadata_schema_uri\": \"\",\n \"artifact_uri\": model_uri,\n \"container_spec\": {\n \"image_uri\": image_uri,\n \"command\": [],\n \"args\": [],\n \"env\": [{\"name\": \"env_name\", \"value\": \"env_value\"}],\n \"ports\": [{\"container_port\": 8080}],\n \"predict_route\": \"\",\n \"health_route\": \"\",\n },\n }\n response = clients[\"model\"].upload_model(parent=PARENT, model=model)\n print(\"Long running operation:\", response.operation.name)\n upload_model_response = response.result(timeout=180)\n print(\"upload_model_response\")\n print(\" model:\", upload_model_response.model)\n return upload_model_response.model\n\n\nmodel_to_deploy_id = upload_model(\"imdb-\" + TIMESTAMP, IMAGE_URI, model_path_to_deploy)", "Get Model resource information\nNow let's get the model information for
just your model. Use this helper function get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.\n\nThis helper function calls the Vertex Model client service's method get_model, with the following parameter:\n\nname: The Vertex unique identifier for the Model resource.", "def get_model(name):\n response = clients[\"model\"].get_model(name=name)\n print(response)\n\n\nget_model(model_to_deploy_id)", "Deploy the Model resource\nNow deploy the trained Vertex custom Model resource. This requires two steps:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nCreate an Endpoint resource\nUse this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nThe helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nCreating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. 
The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.", "ENDPOINT_NAME = \"imdb_endpoint-\" + TIMESTAMP\n\n\ndef create_endpoint(display_name):\n endpoint = {\"display_name\": display_name}\n response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n print(\"Long running operation:\", response.operation.name)\n\n result = response.result(timeout=300)\n print(\"result\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" description:\", result.description)\n print(\" labels:\", result.labels)\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n return result\n\n\nresult = create_endpoint(ENDPOINT_NAME)", "Now get the unique identifier for the Endpoint resource you created.", "# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)", "Compute instance scaling\nYou have several choices on scaling the compute instances for handling your online prediction requests:\n\nSingle Instance: The online prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. 
When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The online prediction requests are split across a scalable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.", "MIN_NODES = 1\nMAX_NODES = 1", "Deploy Model resource to the Endpoint resource\nUse this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:\n\nmodel: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\ndeploy_model_display_name: A human readable name for the deployed model.\nendpoint: The Vertex fully qualified endpoint identifier to deploy the model to.\n\nThe helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:\n\nendpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.\ndeployed_model: The requirements specification for deploying the model.\ntraffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\nIf only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\nIf there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ...
}, where model_id is the model id of an existing model deployed to the endpoint. The percents must add up to 100.\n\nLet's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n\nmodel: The Vertex fully qualified model identifier of the (upload) model to deploy.\ndisplay_name: A human readable name for the deployed model.\ndisable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\ndedicated_resources: This refers to how many compute instances (replicas) are scaled for serving prediction requests.\nmachine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.\nmin_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.\nmax_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.\n\nTraffic Split\nLet's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much (percent) of the traffic goes to each instance.\nWhy would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only gets, say, 10% of the traffic.
That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\nResponse\nThe method returns a long running operation response. We will wait synchronously for the operation to complete by calling the response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.", "DEPLOYED_NAME = \"imdb_deployed-\" + TIMESTAMP\n\n\ndef deploy_model(\n model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n):\n\n if DEPLOY_GPU:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_type\": DEPLOY_GPU,\n \"accelerator_count\": DEPLOY_NGPU,\n }\n else:\n machine_spec = {\n \"machine_type\": DEPLOY_COMPUTE,\n \"accelerator_count\": 0,\n }\n\n deployed_model = {\n \"model\": model,\n \"display_name\": deployed_model_display_name,\n \"dedicated_resources\": {\n \"min_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n \"machine_spec\": machine_spec,\n },\n \"disable_container_logging\": False,\n }\n\n response = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n )\n\n print(\"Long running operation:\", response.operation.name)\n result = response.result()\n print(\"result\")\n deployed_model = result.deployed_model\n print(\" deployed_model\")\n print(\" id:\", deployed_model.id)\n print(\" model:\", deployed_model.model)\n print(\" display_name:\", deployed_model.display_name)\n print(\" create_time:\", deployed_model.create_time)\n\n return deployed_model.id\n\n\ndeployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)", "Make an online prediction request\nNow do an online prediction to your deployed model.\nPrepare the request content\nSince the dataset is a tf.dataset, which acts as a generator, we must use it as an iterator to access the data items in
the test data. We do the following to get a single data item from the test data:\n\nLimit the number of batches to draw per iteration to one using the method take(1).\nIterate once through the test data -- i.e., we do a break within the for loop.\nIn the single iteration, we save the data item which is in the form of a tuple.\nThe data item will be the first element of the tuple, which you then will convert from a tensor to a numpy array -- data[0].numpy().", "import tensorflow_datasets as tfds\n\ndataset, info = tfds.load(\"imdb_reviews/subwords8k\", with_info=True, as_supervised=True)\ntest_dataset = dataset[\"test\"]\n\ntest_dataset = test_dataset.take(1)\nfor data in test_dataset:\n print(data)\n break\n\ntest_item = data[0].numpy()", "Send the prediction request\nOk, now you have a test data item. Use this helper function predict_data, which takes the following parameters:\n\ndata: The test data item, a padded 1D numpy array.\nendpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.\nparameters_dict: Additional parameters for serving.\n\nThis function uses the prediction client service and calls the predict method with the following parameters:\n\nendpoint: The Vertex AI fully qualified identifier for the endpoint where the model was deployed.\ninstances: A list of instances (data items) to predict.\nparameters: Additional parameters for serving.\n\nTo pass the test data to the prediction service, you must package it for transmission to the serving binary as follows:\n1. Convert the data item from a 1D numpy array to a 1D Python list.\n2.
Convert the prediction request to a serialized Google protobuf (`json_format.ParseDict()`)\n\nEach instance in the prediction request is a dictionary entry of the form:\n {input_name: content}\n\n\ninput_name: the name of the input layer of the underlying model.\ncontent: The data item as a 1D Python list.\n\nSince the predict() service can take multiple data items (instances), you will send your single data item as a list of one data item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() service.\nThe response object returns a list, where each element in the list corresponds to the corresponding instance in the request. You will see in the output for each prediction:\n\npredictions -- the predicted binary sentiment between 0 (negative) and 1 (positive).", "def predict_data(data, endpoint, parameters_dict):\n parameters = json_format.ParseDict(parameters_dict, Value())\n\n # The format of each instance should conform to the deployed model's prediction input schema.\n instances_list = [{serving_input: data.tolist()}]\n instances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\n response = clients[\"prediction\"].predict(\n endpoint=endpoint, instances=instances, parameters=parameters\n )\n print(\"response\")\n print(\" deployed_model_id:\", response.deployed_model_id)\n predictions = response.predictions\n print(\"predictions\")\n for prediction in predictions:\n print(\" prediction:\", prediction)\n\n\npredict_data(test_item, endpoint_id, None)", "Undeploy the Model resource\nNow undeploy your Model resource from the serving Endpoint resource.
Use this helper function undeploy_model, which takes the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed.\n\nThis function calls the endpoint client service's method undeploy_model, with the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.\ntraffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.\n\nSince this is the only deployed model on the Endpoint resource, you can simply leave traffic_split empty by setting it to {}.", "def undeploy_model(deployed_model_id, endpoint):\n    response = clients[\"endpoint\"].undeploy_model(\n        endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n    )\n    print(response)\n\n\nundeploy_model(deployed_model_id, endpoint_id)", "Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket", "delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n    if delete_dataset and \"dataset_id\" in globals():\n        clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n    print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n    if delete_pipeline 
and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
OSGeo-live/CesiumWidget
GSOC/notebooks/ipython/exercises/Customization/Magics.ipynb
apache-2.0
[ "Customizing IPython - Magics\nIPython extends Python by adding shell-like commands called magics.", "%lsmagic\n\nimport numpy\n\n%timeit A=numpy.random.random((1000,1000))\n\n%%timeit -n 1\n\nA=numpy.random.random((1000,1000))\nb = A.sum()\n", "Defining your own magic\nAs we have seen already, IPython has cell and line magics. You can define your own magics using any Python function and the register_magic_function method:", "ip = get_ipython()\n\nimport time\n\ndef sleep_magic(line):\n \"\"\"A simple function for sleeping\"\"\"\n t = float(line)\n time.sleep(t)\n\nip.register_magic_function?\n\nip.register_magic_function(sleep_magic, \"line\", \"sleep\")\n\n%sleep 2\n\n%sleep?", "Exercise\nDefine %tic and %toc magics, which can be use for simple timings, e.g. where\npython\nfor p in range(1,4):\n N = 10**p\n print \"N=%i\" % N\n %tic\n A = np.random.random((N,N))\n np.linalg.eigvals(A)\n %toc\neach %toc will print the time since the last %tic. Create separate tic and toc functions that read and write\na global time variable.", "%load soln/tictocf.py\n\nimport numpy as np\nimport sys\nfor p in range(1,4):\n N = 10**p\n print(\"N=%i\" % N)\n sys.stdout.flush()\n %tic\n A = np.random.random((N,N))\n np.linalg.eigvals(A)\n %toc", "Cell Magic\nCell magics take two args:\n\nthe line on the same line of the magic \nthe cell the multiline body of the cell after the first line", "def dummy_cell_magic(line, cell):\n \"\"\"dummy cell magic for displaying the line and cell it is passed\"\"\"\n print(\"line: %r\" % line)\n print(\"cell: %r\" % cell)\n\nip.register_magic_function(dummy_cell_magic, \"cell\", \"dummy\")\n\n%%dummy this is the line\nthis\nis the\ncell\n\ndef parse_magic_line(line):\n \"\"\"parse a magic line into a name and eval'd expression\"\"\"\n name, values_s = line.split(None, 1)\n values = eval(values_s, get_ipython().user_ns)\n return name, values\n\nparse_magic_line(\"x range(5)\")", "Excercise\nCan you write and register a cell magic that automates the 
outer iteration,\ntiming a block for various values of a particular variable:", "%load soln/scalemagic.py\n\n%%scale N [ int(10**p) for p in range(1,4) ]\n\nA = np.random.random((N,N))\nnp.linalg.eigvals(A)\n\n\n%%scale N [ int(2**p) for p in np.linspace(6, 11, 11) ]\n\nA = np.random.random((N,N))\nnp.linalg.eigvals(A)\n", "Executing Notebooks\nWe can load a notebook into memory using IPython.nbformat.", "import io\nimport os\n\nimport IPython.nbformat as nbf\n\ndef load_notebook(filename):\n \"\"\"load a notebook object from a filename\"\"\"\n if not os.path.exists(filename) and not filename.endswith(\".ipynb\"):\n filename = filename + \".ipynb\"\n with io.open(filename) as f:\n return nbf.read(f, as_version=4)\n\n\nnb = load_notebook(\"_Sample\")", "A notebook is just a dictionary with attribute access for convenience.", "nb.keys()\n\ncells = nb.cells\ncells", "We can see all the cells and their type", "for cell in cells:\n print()\n print('----- %s -----' % cell.cell_type)\n print(cell.source)", "Now I can run all of the code cells with get_ipython().run_cell", "for cell in cells:\n ip = get_ipython()\n if cell.cell_type == 'code':\n ip.run_cell(cell.source, silent=True)", "And we can now use the function that was defined in that notebook:", "nb_info(nb)", "Exercise\nCan you write and register an %nbrun line magic to run a notebook?\npython\n%nbrun Sample", "%load soln/nbrun.py\n\n%nbrun _Sample", "The common way to make your magics reusable is to write an Extension, so let's give that a try." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
boffi/boffi.github.io
dati_2017/wt05/MassMatrix.ipynb
mit
[ "Mass Matrix\n<img src=\"figures/trab01_conv.svg\" alt=\"Dynamic System\" style=\"width:95%;\"/>\nThe 2 DOF dynamical system in figure is composed of two massless rigid bodies and a massive one.\nCompute the mass matrix of the system with reference to the degrees of freedom indicated in figure, in the hypotesis of small displacements.\nSolution\nWe are going to use symbols for the relevant quantities.", "m, L, x1, x2 = symbols('m L x_1 x_2')", "Contribution of $x_1$ to the displacements of the massive bar\nWe constrain $x_2$ to zero (i.e., the roller becomes a hinge) and impose a unit displacement $x_1=1$.\n<img src=\"figures/trab02_conv.svg\" alt=\"Dynamic System\" style=\"width:95%;\"/>\nThe Centre of Instantaneous Rotation (CIR) of the massive bar, at the intersection of the dashed lines in figure, coincides with the CIR of the left bar, hence the rotation of the two bars are the same.\nBecause the two rotations are $\\phi_1=1/2L$ the displacements of the centre of mass are $u_{G1} = -\\phi_1\\times L/2 = -1/4$ and $v_{G1} = +\\phi_1\\times L=1/2$.", "ug1, vg1, 𝜙1 = -x1/4, +x1/2, +x1/(2*L)", "Contribution of $x_2$ to the displacements of the massive bar\nWe constrain $x_1$ to zero (i.e., we introduce a roller) and impose a unit displacement $x_2=1$.\n<img src=\"figures/trab03_conv.svg\" alt=\"Dynamic System\" style=\"width:95%;\"/>\nThe left beam can't move, hence the CIR of the massive bar is the top internal hinge.\nThe CIR of the bottom hinge is at an infinite distance in the vertical direction (the bottom bar undergos a horizontal motion) and by continuity we have $\\phi_2=1/L$, $u_{G2}=-\\phi_2\\times(-L/2)=+1/2$ and $v_{G2}=0$.", "ug2, vg2, 𝜙2 = +x2/2, 0, +x2/L", "Total Displacements and Velocities\nThe total displacement components are the sum of the two cuntributions, the total rotation is the sum of the two contributions.\nThe velocities are obtained differentiating w/r to time.", "ug, vg, 𝜙 = ug1+ug2, vg1+vg2, 𝜙1+𝜙2\ndot_u, dot_v, ω = diff_t(ug), 
diff_t(vg), diff_t(𝜙)", "Kinetic Energy", "T = m * (dot_u**2 + dot_v**2 + ω**2*L**2/12) / 2\ndisplay(Latex('$$T=' + latex(T.expand()) + '.$$'))", "Mass Matrix Coefficients\nThe coefficients can be computed as \n$$m_{ij} = \\frac{\\partial^2 T}{\\partial x_i \\partial x_j}.$$", "for i, xi in enumerate((x1, x2), 1):\n    for j, xj in enumerate((x1, x2), 1):\n        display(Latex('$$m_{%d%d}='%(i,j)+latex(T.diff(xi,xj))+'.$$'))", "Initialization", "from sympy import symbols, init_printing, latex\ninit_printing(use_latex=1)\n\nfrom IPython.display import HTML, Latex\ndisplay(HTML(open('01.css').read()))\n\ndef diff_t(expr): return expr" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jmunar/pymc3-kalman
notebooks/HowTo_ProfilingTheanoScan.ipynb
apache-2.0
[ "Let's admit it, Bayesian modeling on time series is slow. In pymc3, it typically implies using the theano scan operation. Here, we will show how to profile one step of the Kalman filter, as well as the scan operation over the time series.\nFirst, load the required packages:", "import numpy as np\nimport theano\nimport theano.tensor as tt\n\nimport kalman", "We will use the same data as in the 01_RandomWalkPlusObservation notebook.", "# True values\nT = 500           # Time steps\nsigma2_eps0 = 3   # Variance of the observation noise\nsigma2_eta0 = 10  # Variance in the update of the mean\n\n# Simulate data\nnp.random.seed(12345)\neps = np.random.normal(scale=sigma2_eps0**0.5, size=T)\neta = np.random.normal(scale=sigma2_eta0**0.5, size=T)\nmu = np.cumsum(eta)\ny = mu + eps", "Next, we create all the tensors required to describe our model:", "# Upon using pymc3, the following theano configuration flag is changed,\n# leading to tensors being required to have test values\n#theano.config.compute_test_value = 'ignore'\n\n# Tensors for the measurement equation\nZ = tt.dmatrix(name='Z')\nd = tt.dvector(name='d')\nH = tt.dmatrix(name='H')\n\n# Tensors for the transition equation\nT = tt.dmatrix(name='T')\nc = tt.dvector(name='c')\nR = tt.dmatrix(name='R')\nQ = tt.dmatrix(name='Q')\n\n# Initial position and uncertainty\na0 = tt.dvector(name='a0')\nP0 = tt.dmatrix(name='P0')", "We will also create some actual values for them:", "ɛ_σ2 = 3.\nη_σ2 = 10.\n\nargs = dict(Z = np.array([[1.]]),\n            d = np.array([0.]),\n            H = np.array([[ɛ_σ2]]),\n            T = np.array([[1.]]),\n            c = np.array([0.]),\n            R = np.array([[1.]]),\n            Q = np.array([[η_σ2]]),\n            a0 = np.array([0.]),\n            P0 = np.array([[1e6]]))", "Let's calculate the likelihood of the observed values, given the parameters above:", "kalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)\n(at, Pt, lliks), updates = kalmanTheano.filter(y[:,None])\n\nf = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks)\n\nllik = f(**args)\nllik[1:].sum()", "Time 
required for the log-likelihood calculation:", "print('Measuring time...')\n%timeit f(**args)", "Profiling a non-scan operation is relatively simple. As an example, let's create a function to calculate the first time step of the Kalman filter:", "Y0 = tt.dvector(name='Y0')\n_,_,llik = kalman.core._oneStep(Y0, Z, d, H, T, c, R, Q, a0, P0)\n\nprofiler = theano.compile.ScanProfileStats()\nf = theano.function([Y0, Z, d, H, T, c, R, Q, a0, P0], llik, profile=profiler)\n\nf(y[0,None], **args);\n\nprofiler.summary()", "Repeating the procedure with a scan operation, we can see that the code inside it is not profiled. It took me a while to make it work (not even Stack Overflow helped!). In the end, this is how I made it work:", "profiler = theano.compile.ScanProfileStats()\n(_,_,llik),_ = kalmanTheano.filter(y[:,None], profile=profiler)\n\nf = theano.function([Z, d, H, T, c, R, Q, a0, P0], llik, profile=profiler)\n\nf(**args);\n\n# Select the node corresponding to the scan operation\nscan_op = next(k for k in profiler.op_nodes()\n               if isinstance(k, theano.scan_module.scan_op.Scan))\nscan_op.profile.summary()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wmvanvliet/neuroscience_tutorials
posthoc/linear_regression.ipynb
bsd-2-clause
[ "# Some housekeeping before we get started\n%matplotlib qt\nimport mne\nmne.set_log_level(False)", "<a href=\"https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb\" target=\"_new\" style=\"float: right\"><img src=\"qr.png\" alt=\"https://mybinder.org/v2/gh/wmvanvliet/neuroscience_tutorials/master?filepath=posthoc%2Flinear_regression.ipynb\"></a>\nMarijn van Vliet\nA deep dive into linear models\ntiny.cc/deepdive\nLoading the data", "import mne\nepochs = mne.read_epochs('subject04-epo.fif')\nepochs.metadata", "Epochs: snippets of EEG data", "epochs.plot(n_channels=32, n_epochs=10);", "Evoked: averaging across epochs", "unrelated = epochs['FAS < 0.1'].average()\nrelated = epochs['FAS > 0.1'].average()\nmne.viz.plot_evoked_topo([related, unrelated]);", "Challenge:\nDeduce the memory priming effect for a word-pair, given the EEG epoch\nNaive approach: average signal in ROI", "ROI = epochs.copy()\nROI.pick_channels(['P3', 'Pz', 'P4'])\nROI.crop(0.3, 0.47)\n\nFAS_pred = ROI.get_data().mean(axis=(1, 2))\n\nfrom scipy.stats import pearsonr\nprint('Performance: %.2f' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])", "Machine learning approach: linear regression", "print(epochs.get_data().shape)\n\nX = epochs.get_data().reshape(200, 32 * 60)\ny = epochs.metadata['FAS'].values\n\nfrom sklearn.preprocessing import normalize\nX = normalize(X)\n\nprint('X:', X.shape)\nprint('y:', y.shape)", "Performing linear regression", "from sklearn.linear_model import LinearRegression\n\nmodel = LinearRegression().fit(X, y)\n\nFAS_pred = model.predict(X)\nprint('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])\n\nfrom sklearn.model_selection import cross_val_predict\n\nFAS_pred = cross_val_predict(model, X, y, cv=10)\nprint('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])", "Inspecting the weights", "model.fit(X, y)\nweights = model.coef_.reshape(32, 60)\n\nev = 
mne.EvokedArray(weights, epochs.info, tmin=epochs.times[0], comment='weights')\nev.plot_topo();", "What's going on here?\nhttps://users.aalto.fi/~vanvlm1/posthoc/regression.html\nThe post-hoc framework\n\nData covariance matrix\nHaufe pattern matrix\nNormalizer", "from posthoc import Workbench\n\nmodel = Workbench(LinearRegression())\nmodel.fit(X, y)\n\ncov_X = X.T @ X / len(X)\npattern = model.pattern_\nnormalizer = model.normalizer_", "The data covariance", "from matplotlib import pyplot as plt\n\nplt.matshow(cov_X, cmap='magma')\n\n# Show channel names\nplt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)\nplt.yticks(range(0, 32 * 60, 60), epochs.ch_names);", "Shrinking the covariance", "import numpy as np\n\n# Amount of shrinkage\nalpha = 0.75\n\n# Shrinkage formula\nshrinkage_target = np.identity(32 * 60) * np.trace(cov_X) / len(cov_X)\ncov_X_mod = alpha * shrinkage_target + (1 - alpha) * cov_X\n\n# Plot shrunk covariance\nplt.matshow(cov_X_mod, cmap='magma')\nplt.xticks(range(0, 32 * 60, 60), epochs.ch_names, rotation=90)\nplt.yticks(range(0, 32 * 60, 60), epochs.ch_names);", "Post-hoc modification of the model", "from posthoc.cov_estimators import ShrinkageKernel\n\nmodel = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))\n\nFAS_pred = cross_val_predict(model, X, y, cv=10)\nprint('Performance: %.2f (to beat: 0.30)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])", "The pattern matrix", "pattern_ev = mne.EvokedArray(pattern.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')\npattern_ev.plot_topo();", "Modifying the pattern matrix\n<img src=\"kernel.png\" width=\"400\">", "import numpy as np\n\ndef pattern_modifier(pattern, X_train=None, y_train=None, mu=0.36, sigma=0.06):\n pattern = pattern.reshape(32, 60)\n \n # Define mu and sigma in samples\n mu = np.searchsorted(epochs.times, mu)\n sigma = sigma * epochs.info['sfreq']\n \n # Formula for Gaussian curve\n kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)\n 
\n return (pattern * kernel).ravel()\n\npattern_mod = pattern_modifier(pattern)\npattern_mod = mne.EvokedArray(pattern_mod.reshape(32, 60), epochs.info, epochs.times[0], comment='pattern')\npattern_mod.plot_topo();", "Post-hoc modifying the pattern in the model", "model = Workbench(LinearRegression(), cov=ShrinkageKernel(0.97), pattern_modifier=pattern_modifier)\nFAS_pred = cross_val_predict(model, X, y, cv=10)\n\nprint('Performance: %.2f (to beat: 0.30, 0.35)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])", "To find out more, read the paper!\nhttps://www.biorxiv.org/content/10.1101/518662v2\nMarijn van Vliet & Riitta Salmelin\nPost-hoc modification of linear models: combining machine learning with domain information to make solid inferences from noisy data\nNeuroImage (2020)\nFor more interactive neuroscience tutorials:\nhttps://github.com/wmvanvliet/neuroscience_tutorials\nThe normalizer", "print(normalizer)", "Automatic optimization", "def scorer(model, X, y):\n return pearsonr(model.predict(X), y)[0]\n\nfrom posthoc import WorkbenchOptimizer\nmodel = WorkbenchOptimizer(LinearRegression(), cov=ShrinkageKernel(0.95),\n pattern_modifier=pattern_modifier, pattern_param_x0=[0.4, 0.05], pattern_param_bounds=[(0, 0.8), (0.01, 0.5)],\n scoring=scorer)\nmodel.fit(X, y)\n\nprint('Optimal parameters: alpha=%.3f, mu=%.3f, sigma=%.3f'\n % tuple(model.cov_params_ + model.pattern_modifier_params_))", "Feature selection vs. 
Pattern modification", "import numpy as np\n\ndef modify_X(X, X_train=None, y_train=None, mu=0.36, sigma=0.06):\n X = X.reshape(200, 32, 60)\n\n # Define mu and sigma in samples\n mu = np.searchsorted(epochs.times, mu)\n sigma = sigma * epochs.info['sfreq']\n \n # Formula for Gaussian curve\n kernel = np.exp(-0.5 * ((np.arange(60) - mu) / sigma) ** 2)\n \n return (X * kernel).reshape(200, -1)\n \nX_mod = modify_X(X)\n\nmodel = LinearRegression()\nFAS_pred = cross_val_predict(model, X_mod, y, cv=10)\nprint('LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])\n\nmodel = Workbench(LinearRegression(), cov=ShrinkageKernel(alpha=0.97))\nFAS_pred = cross_val_predict(model, X_mod, y, cv=10)\nprint('Shrinkage LR performance: %.2f (to beat: 0.30, 0.38)' % pearsonr(epochs.metadata['FAS'], FAS_pred)[0])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
moonbury/pythonanywhere
github/MasteringMatplotlib/mmpl-interaction.ipynb
gpl-3.0
[ "Event Handling and Interactive Plots\nIn the following sections of this IPython Notebook we be looking at the following:\n\nmatplotlib's event loop support\nBasic Event Handling\nList of supported events\nMouse events\nLimitations of the IPython Notebook backend\nKeyboard events\nAxes and Figures events\nObject picking\nCompound Event Handling\nToolbar\nInteractive panning and zooming of figures\n\nWarm-up proceedures:", "import matplotlib\nmatplotlib.use('nbagg')", "Notice that we've left out the following line from our usual notebook prelude:\n%matplotlib inline\nWe've disabled inline so that we get access to the interactive mode. More on that later :-)\nLet's continue with the necessary imports:", "import random\nimport sys\nimport time\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom IPython.display import Image\nfrom typecheck import typecheck\n\nsys.path.append(\"../lib\")\nimport topo", "Let's set up our colors for this notebook:", "pallete_name = \"husl\"\n#colors = sns.color_palette(pallete_name, 8)\n#colors.reverse()\n#cmap = mpl.colors.LinearSegmentedColormap.from_list(pallete_name, colors) \ncmap = mpl.colors.Colormap('Sequential')", "Event Loop Basics\nBefore we look at matplotlib's event loop support, let's do a quick survey of event loops and get a refresher on how they work. Here's a pretty simple \"event\" loop:\npython\nwhile True:\n pass\nThat loop is not going be worth our while to execute in this notebook :-) So let's do another one, almost as simple, that has a good chance of exiting in under a minute:", "x = True\nwhile x:\n time.sleep(1)\n if random.random() < 0.15:\n x = False", "This loop only handles one \"event\": the change of a value from True to False. 
That loop will continue to run until the condition for a false value of x is met (a random float under a particular threshold).\nSo what relation do these simple loops have with the loops that power toolkits like GTK and Qt or frameworks like Twisted and Tornado? Usually event systems have something like the following:\n * a way to start the event loop\n * a way to stop the event loop\n * a means for registering events\n * a means for responding to events\nDuring each run, a loop will usually check a data structure to see if there are any new events that have occurred since the last time it looped. In a network event system, each loop might check to see if any file descriptors are ready for reading or writing. In a GUI toolkit, each loop might check to see if any clicks or button presses had occurred.\nGiven the simple criteria above, let's try building a minimally demonstrative, if not useful, event loop. To keep this small, we're not going to integrate with socket or GUI events. 
The event that our loop will respond to will be quite minimal indeed.", "class EventLoop:\n    def __init__(self):\n        self.command = None\n        self.status = None\n        self.handlers = {\"interrupt\": self.handle_interrupt}\n        self.resolution = 0.1\n\n    def loop(self):\n        self.command = \"loop\"\n        while self.command != \"stop\":\n            self.status = \"running\"\n            time.sleep(self.resolution)\n    \n    def start(self):\n        self.command = \"run\"\n        try:\n            self.loop()\n        except KeyboardInterrupt:\n            self.handle_event(\"interrupt\")\n    \n    def stop(self):\n        self.command = \"stop\"\n\n    @typecheck\n    def add_handler(self, fn: callable, event: str):\n        self.handlers[event] = fn\n\n    @typecheck\n    def handle_event(self, event: str):\n        self.handlers[event]()\n    \n    def handle_interrupt(self):\n        print(\"Stopping event loop ...\")\n        self.stop() ", "Here's what we did:\n\nCreated a class that maintains a data structure for event handlers\nWe also added a default handler for the \"interrupt\" event\nCreated a loop method\nCreated methods for starting and stopping the loop (via an attribute change)\nIn our start method, we check for an interrupt signal, and fire off an interrupt handler for said signal\nCreated a method for adding event handlers to the handler data structure (should we want to add more)\n\nLet's create an instance and start it up:", "el = EventLoop()\nel.start()", "When you evaluate that cell, IPython will display the usual indicator that a cell is continuing to run:\nIn [*]:\nAs soon as you're satisfied that the loop is merrily looping, go up to the IPython Notebook menu and select \"Kernel\" -> \"Interrupt\". The cell with the loop in it should finish: not only will the In prompt show a number instead of an asterisk, but our interrupt handler should have printed a status message as well.\nThough this event loop is fairly different from those that power networking libraries or GUI toolkits, it's very close (both in nature and code) to the default event loops matplotlib provides for its canvas objects. 
As such, this is a perfect starting place for your deeper understanding of matplotlib. To continue in this vein, reading the matplotlib backend source code would serve you well.\nStandard Event Handling in matplotlib\nWith some event loop knowledge under our belts, we're ready to start working with matplotlib events.\nBelow is the list of supported events in matplotlib as of version 1.4:\n| Event name | Class and description |\n|-------------------------|------------------------------------------------------|\n|button_press_event | MouseEvent - mouse button is pressed |\n|button_release_event | MouseEvent - mouse button is released |\n|draw_event | DrawEvent - canvas draw |\n|key_press_event | KeyEvent - key is pressed |\n|key_release_event | KeyEvent - key is released |\n|motion_notify_event | MouseEvent - mouse motion |\n|pick_event | PickEvent - an object in the canvas is selected |\n|resize_event | ResizeEvent - figure canvas is resized |\n|scroll_event | MouseEvent - mouse scroll wheel is rolled |\n|figure_enter_event | LocationEvent - mouse enters a new figure |\n|figure_leave_event | LocationEvent - mouse leaves a figure |\n|axes_enter_event | LocationEvent - mouse enters a new axes |\n|axes_leave_event | LocationEvent - mouse leaves an axes |\nWe'll discuss some of these below in more detail. 
With that information in hand, you should be able to tackle problems with any of the supported events in matplotlib.\nMouse Events\nIn the next cell, we will define a couple of callback functions, and then connect these to specific canvas events.\nGo ahead and render the cell, then click on the displayed plot a couple of times:", "def press_callback(event):\n    event.canvas.figure.text(event.xdata, event.ydata, '<- clicked here')\n    \ndef release_callback(event):\n    event.canvas.figure.show()\n    \n(figure, axes) = plt.subplots()\npress_conn_id = figure.canvas.mpl_connect('button_press_event', press_callback)\nrelease_conn_id = figure.canvas.mpl_connect('button_release_event', release_callback)\nplt.show()", "Our callbacks display a little note close to each $(x, y)$ coordinate where we clicked (the location is not exact due to font-sizing, etc.) If we use a graphical indication as opposed to a textual one, we can get much better precision:", "class Callbacks:\n    def __init__(self):\n        (figure, axes) = plt.subplots()\n        axes.set_aspect(1)\n        figure.canvas.mpl_connect('button_press_event', self.press)\n        figure.canvas.mpl_connect('button_release_event', self.release)\n\n    def start(self):\n        plt.show()\n\n    def press(self, event):\n        self.start_time = time.time()\n\n    def release(self, event):\n        self.end_time = time.time()\n        self.draw_click(event)\n    \n    def draw_click(self, event):\n        size = 4 * (self.end_time - self.start_time) ** 2\n        c1 = plt.Circle([event.xdata, event.ydata], 0.002,)\n        c2 = plt.Circle([event.xdata, event.ydata], 0.02 * size, alpha=0.2)\n        event.canvas.figure.gca().add_artist(c1)\n        event.canvas.figure.gca().add_artist(c2)\n        event.canvas.figure.show()\n\ncbs = Callbacks()\ncbs.start()", "As you can see, we changed the callback to display a circle instead of text. If you choose to press and hold, and then release a bit later, you will see that a second, transparent circle is displayed. 
The longer you hold, the larger the second transparent circle will be.\nLet's try something a little more involved, adapted from the line-drawing example in the \"Event handling and picking\" chapter of the matplotlib Advanced Guide:", "class LineBuilder:\n    def __init__(self, event_name='button_press_event'):\n        (self.figure, self.axes) = plt.subplots()\n        plt.xlim([0, 10])\n        plt.ylim([0, 10])\n        (self.xs, self.ys) = ([5], [5])\n        (self.line,) = self.axes.plot(self.xs, self.ys)\n        self.axes.set_title('Click the canvas to build line segments...')\n        self.canvas = self.figure.canvas\n        self.conn_id = self.canvas.mpl_connect(event_name, self.callback)\n\n    def start(self):\n        plt.show()\n\n    def update_line(self, event):\n        self.xs.append(event.xdata)\n        self.ys.append(event.ydata)\n        self.line.set_data(self.xs, self.ys)\n\n    def callback(self, event):\n        if event.inaxes != self.line.axes:\n            return\n        self.update_line(event)\n        self.canvas.draw()\n\nlb = LineBuilder()\nlb.start()", "For dessert, here's the slider demo from matplotlib:", "from matplotlib import widgets\nfrom matplotlib.backend_bases import MouseEvent\n\ndef get_sine_data(amplitude=5, frequency=3, time=None):\n    return amplitude * np.sin(2 * np.pi * frequency * time)\n\nclass SineSliders:\n    def __init__(self, amplitude=5, frequency=3):\n        (self.figure, _) = plt.subplots()\n        self.configure()\n        self.a0 = amplitude\n        self.f0 = frequency\n        self.time = np.arange(0.0, 1.0, 0.001)\n        self.data = get_sine_data(\n            amplitude=self.a0, frequency=self.f0, time=self.time)\n        (self.line,) = plt.plot(self.time, self.data, lw=2, color='red')\n        self.axes_amp = plt.axes([0.25, 0.15, 0.65, 0.03])\n        self.axes_freq = plt.axes([0.25, 0.1, 0.65, 0.03])\n        self.setup_sliders()\n        self.setup_reset_button()\n        self.setup_color_selector()\n\n    def start(self):\n        plt.show()\n\n    def configure(self):\n        plt.subplots_adjust(left=0.25, bottom=0.25)\n        plt.axis([0, 1, -10, 10])\n\n    def setup_sliders(self):\n        self.slider_amp = widgets.Slider(\n            self.axes_amp, 'Amp', 
0.1, 10.0, valinit=self.a0)\n        self.slider_freq = widgets.Slider(\n            self.axes_freq, 'Freq', 0.1, 30.0, valinit=self.f0)\n        self.slider_freq.on_changed(self.update)\n        self.slider_amp.on_changed(self.update)\n    \n    def setup_reset_button(self):\n        reset_axes = plt.axes([0.8, 0.025, 0.1, 0.04])\n        # Keep references on self: widgets held only in local variables can\n        # be garbage-collected, leaving an unresponsive button.\n        self.reset_button = widgets.Button(reset_axes, 'Reset', hovercolor='0.975')\n        self.reset_button.on_clicked(self.reset)\n    \n    def setup_color_selector(self):\n        radio_axes = plt.axes([0.025, 0.5, 0.15, 0.15], aspect=1)\n        self.radio_select = widgets.RadioButtons(\n            radio_axes, ('red', 'blue', 'green',), active=0)\n        self.radio_select.on_clicked(self.switchcolor)\n    \n    def update(self, val):\n        self.data = get_sine_data(self.slider_amp.val,\n                                  self.slider_freq.val,\n                                  self.time)\n        self.line.set_ydata(self.data)\n        self.figure.canvas.draw()\n\n    def reset(self, event):\n        self.slider_freq.reset()\n        self.slider_amp.reset()\n\n    def switchcolor(self, label):\n        self.line.set_color(label)\n        self.figure.canvas.draw()\n\nsldrs = SineSliders(amplitude=0.5, frequency=20)\nsldrs.start()", "Limitations of nbagg\nThe IPython Notebook AGG backend currently doesn't provide support for the following matplotlib events:\n * key_press\n * scroll_event (mouse scrolling)\n * mouse right click\n * mouse doubleclick\nAlso, mouse movement events can be a little inconsistent (this can be especially true if your browser or other application is running at a significant CPU%, causing events to be missed in matplotlib running in an IPython notebook).\nHowever, we can still use IPython while switching to a new backend for matplotlib. To see which backends are available to you:", "sorted(set(mpl.rcsetup.interactive_bk + mpl.rcsetup.non_interactive_bk + mpl.rcsetup.all_backends))", "Currently keyboard events aren't supported by IPython and the matplotlib nbagg backend. 
So, for this section, we'll switch over to your default platform's GUI toolkit in matplotlib.\nYou have two options for the remainder of this notebook:\n\nUse IPython from a terminal, or\nSwitch backends in this notebook.\n\nFor terminal use, change directory to where you cloned this notebook's git repo and then fire up IPython:\nbash\n$ cd interaction\n$ make repl\nThe repl target is a convenience that uses a Python virtual environment and the downloaded dependencies for this notebook. Once you're at the IPython prompt, you may start entering code with automatically-configured access to the libraries needed by this notebook.\nIf you would like to continue using this notebook instead of switching to the terminal, you'll need to change your backend for the remaining examples. For instance:", "plt.switch_backend('MacOSX')", "Keyboard Events\nLet's prepare for our key event explorations by defining some support functions ahead of time:", "def make_data(n, c):\n r = 4 * c * np.random.rand(n) ** 2\n theta = 2 * np.pi * np.random.rand(n)\n area = 200 * r**2 * np.random.rand(n)\n return (r, area, theta)\n\ndef generate_data(n, c):\n while True:\n yield make_data(n, c)\n \ndef make_plot(radius, area, theta, axes=None):\n scatter = axes.scatter(\n theta, radius, c=theta, s=area, cmap=cmap)\n scatter.set_alpha(0.75)\n\ndef update_plot(radius, area, theta, event):\n figure = event.canvas.figure\n axes = figure.gca()\n make_plot(radius, area, theta, axes)\n event.canvas.draw()", "Now let's make a class which will:\n\ndispatch based upon keys pressed and\nnavigate through our endless data set", "class Carousel:\n def __init__(self, data):\n (self.left, self.right) = ([], [])\n self.gen = data\n self.last_key = None\n\n def start(self, axes):\n make_plot(*self.next(), axes=axes)\n\n def prev(self):\n if not self.left:\n return []\n data = self.left.pop()\n self.right.insert(0, data)\n return data\n\n def next(self):\n if self.right:\n data = self.right.pop(0)\n else:\n data = 
next(self.gen)\n        self.left.append(data)\n        return data\n\n    def reset(self):\n        self.right = self.left + self.right\n        self.left = []\n\n    def dispatch(self, event):\n        if event.key == \"right\":\n            self.handle_right(event)\n        elif event.key == \"left\":\n            self.handle_left(event)\n        elif event.key == \"r\":\n            self.handle_reset(event)\n\n    def handle_right(self, event):\n        print(\"Got right key ...\")\n        if self.last_key == \"left\":\n            self.next()\n        update_plot(*self.next(), event=event)\n        self.last_key = event.key\n\n    def handle_left(self, event):\n        print(\"Got left key ...\")\n        if self.last_key == \"right\":\n            self.prev()\n        data = self.prev()\n        if data:\n            update_plot(*data, event=event)\n        self.last_key = event.key\n\n    def handle_reset(self, event):\n        print(\"Got reset key ...\")\n        self.reset()\n        update_plot(*self.next(), event=event)\n        self.last_key = event.key", "One more class, to help keep things clean:", "class CarouselManager:\n    def __init__(self, density=300, multiplier=1):\n        (figure, self.axes) = plt.subplots(\n            figsize=(12,12), subplot_kw={\"polar\": True})\n        if hasattr(self.axes, \"hold\"):  # Axes.hold was removed in matplotlib 3.0\n            self.axes.hold(False)\n        data = generate_data(density, multiplier)\n        self.carousel = Carousel(data)\n        _ = figure.canvas.mpl_connect(\n            'key_press_event', self.carousel.dispatch)\n\n    def start(self):\n        self.carousel.start(self.axes)\n        plt.show()", "Now we can take it for a spin:", "cm = CarouselManager(multiplier=2)\ncm.start()", "In the GUI canvas, you should see something that looks a bit like this:\nThe plot should have the focus automatically. Press the right and left arrow keys to navigate through your data sets. You can return to the beginning of the data set by typing \"r\", the \"reset\" key. 
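As a design aside, the if/elif chain in Carousel.dispatch can also be written as a lookup table, which scales more gracefully as key bindings accumulate. A minimal sketch of our own (the class and handler names are illustrative, and we drive it with stand-in event objects rather than real key presses):

```python
from types import SimpleNamespace

class KeyDispatcher:
    def __init__(self):
        self.seen = []
        # Map key names to bound methods; unknown keys fall through.
        self.handlers = {
            "right": self.handle_right,
            "left": self.handle_left,
            "r": self.handle_reset,
        }

    def dispatch(self, event):
        handler = self.handlers.get(event.key)
        if handler is not None:
            handler(event)

    def handle_right(self, event):
        self.seen.append("right")

    def handle_left(self, event):
        self.seen.append("left")

    def handle_reset(self, event):
        self.seen.append("reset")

kd = KeyDispatcher()
for key in ("right", "right", "left", "r", "x"):  # "x" has no handler
    kd.dispatch(SimpleNamespace(key=key))
print(kd.seen)  # -> ['right', 'right', 'left', 'reset']
```

The table version keeps dispatch itself unchanged as handlers are added, which is handy once you bind more than a few keys.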
Play with it a bit, to convince yourself that it's really doing what we intended :-)\nAxes and Figure Events", "def enter_axes(event):\n    print('enter_axes', event.inaxes)\n    event.inaxes.patch.set_facecolor('yellow')\n    event.canvas.draw()\n\ndef leave_axes(event):\n    print('leave_axes', event.inaxes)\n    event.inaxes.patch.set_facecolor('white')\n    event.canvas.draw()\n\ndef enter_figure(event):\n    print('enter_figure', event.canvas.figure)\n    event.canvas.figure.patch.set_facecolor('red')\n    event.canvas.draw()\n\ndef leave_figure(event):\n    print('leave_figure', event.canvas.figure)\n    event.canvas.figure.patch.set_facecolor('grey')\n    event.canvas.draw()\n\nclass FigureAndAxesFocus:\n    def __init__(self):\n        (self.figure, (self.axes1, self.axes2)) = plt.subplots(2, 1)\n        title = \"Hover mouse over figure or its axes to trigger events\"\n        self.figure.suptitle(title)\n        self.setup_figure_events()\n        self.setup_axes_events()\n\n    def start(self):\n        plt.show()\n\n    def setup_figure_events(self):\n        self.figure.canvas.mpl_connect(\n            \"figure_enter_event\", enter_figure)\n        self.figure.canvas.mpl_connect(\n            \"figure_leave_event\", leave_figure)\n\n    def setup_axes_events(self):\n        self.figure.canvas.mpl_connect(\n            \"axes_enter_event\", enter_axes)\n        self.figure.canvas.mpl_connect(\n            \"axes_leave_event\", leave_axes)", "Let's try it out:", "faaf = FigureAndAxesFocus()\nfaaf.start()", "Object Picking\nThe next event we will mention is a special one: the event of an object being \"picked\". Every Artist instance (naturally including any subclasses of Artist) has an attribute picker. Setting this attribute is what enables object picking in matplotlib.\nThe definition of picked can vary, depending upon context. 
For instance, setting Artist.picker has the following results:\n * If True, picking is enabled for the artist object and a pick_event will fire any time a mouse event occurs over the artist object in the figure.\n * If a number (e.g., float or int), the value is interpreted as a \"tolerance\"; if the event's data (such as $x$ and $y$ values) is within the value of that tolerance, the pick_event will fire.\n * If a callable, then the provided function or method returns a boolean value which determines whether the pick_event is fired.\n * If None, picking is disabled.\nThe example below is adapted from the matplotlib project's picking exercise in the Advanced User's Guide. In it, we create a data set of 100 arrays, each containing 1000 random numbers. The sample mean and standard deviation of each is determined, and a plot is made of the 100 means vs the 100 standard deviations. We then connect the line created by the plot command to the pick event, and plot the original (randomly generated) time series data corresponding to the \"picked\" points. 
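Stripped to its essentials, the pattern the example uses — set picker on an artist, then connect a pick_event callback — is just this (a minimal sketch of our own, run on the non-interactive Agg backend so it works anywhere; the data is arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# picker=10 sets a 10-pixel tolerance around each plotted point
(line,) = ax.plot([1, 2, 3], [4, 5, 6], "o", picker=10)

def on_pick(event):
    # event.ind holds the indices of the data points within tolerance
    print("picked indices:", event.ind)

cid = fig.canvas.mpl_connect("pick_event", on_pick)
print(line.get_picker())  # -> 10
```

The full example below fleshes this out with real data and a pop-up figure for the picked points.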
If more than one point is within the tolerance of the clicked on point, we display multiple subplots for the time series which fall into our tolerance (in this case, 10 pixels).", "class DataPicker:\n def __init__(self, range):\n self.range = range\n self.figure = self.axes = self.line = None\n self.xs = np.random.rand(*self.range)\n self.means = np.mean(self.xs, axis=1)\n self.stddev = np.std(self.xs, axis=1)\n\n def start(self):\n self.create_main_plot()\n self.figure.canvas.mpl_connect('pick_event', self.handle_pick)\n plt.show()\n\n def create_main_plot(self):\n (self.figure, self.axes) = plt.subplots()\n self.axes.set_title('click on point to plot time series')\n (self.line,) = self.axes.plot(self.means, self.stddev, 'o', picker=10)\n\n def create_popup_plot(self, n, event):\n popup_figure = plt.figure()\n for subplotnum, i in enumerate(event.ind):\n popup_axes = popup_figure.add_subplot(n, 1, subplotnum + 1)\n popup_axes.plot(self.xs[i])\n text_data = (self.means[i], self.stddev[i])\n popup_axes.text(\n 0.05, 0.9,\n '$\\mu$=%1.3f\\n$\\sigma$=%1.3f' % text_data,\n transform=popup_axes.transAxes, va='top')\n popup_axes.set_ylim(-0.5, 1.5)\n popup_figure.show()\n\n def handle_pick(self, event):\n if event.artist != self.line:\n return\n n = len(event.ind)\n if not n:\n return\n self.create_popup_plot(n, event)\n\ndp = DataPicker(range=(100,1000))\ndp.start()", "Compound Event Handling\nThis section discusses the combination of multiple events or other sources of data in order to provide a more highly customized user experience, whether that be for visual plot updates, preparation of data, setting object properties, or updating widgets. This is what we will refer to as \"compound events\".\nNavigation Toolbar\nmatplotlib backends come with a feature we haven't discussed yet: a widget for interactive navigation. This widget is available for all the backends (including the nbagg backend for IPython, when not in \"inline\" mode). 
In brief, the functionality associated with the buttons in the widget is as follows:\n * Home: returns the figure to its originally rendered state\n * Previous: return to the previous view in the plot's history\n * Next: move to the next view in the plot's history\n * Pan/Zoom: pan across the plot by clicking and holding the left mouse button; zoom by clicking and holding the right mouse button (behavior differs between Cartesian and Polar plots)\n * Zoom-to-Rectangle: zoom in on a selected portion of the plot\n * Subplot Configuration: configure the display of subplots via a pop-up widget with various parameters\n * Save: save the plot, in its currently displayed state, to a file\nWhen a toolbar action is engaged, the NavigationToolbar instance sets the current mode. For instance, when the Zoom-to-Rectangle button is clicked, the mode will be set to zoom rect. When in Pan/Zoom, the mode will be set to pan/zoom. These can be used in conjunction with the supported events to fire callbacks in response to toolbar activity.\nIn point of fact, the toolbar class, matplotlib.backend_bases.NavigationToolbar2 is an excellent place to look for examples of \"compound events\". Let's examine the Pan/Zoom button. The class tracks the following via attributes that get set:\n * The connection id for a \"press\" event\n * The connection id for a \"release\" event\n * The connection id for a \"mouse move\" event (correlated to a mouse drag later)\n * Whether the toolbar is \"active\"\n * What the toolbar mode is\n * What the zoom mode is\nDuring toolbar setup, toolbar button events are connected to callbacks. When these buttons are pressed, and the callbacks are fired, old events are disconnected and new ones connected. 
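That press/move/release bookkeeping can be reduced to a small sketch of our own: connect a motion handler when a button is pressed, and disconnect it again on release. The class below is illustrative (it is not toolbar code), and we drive the handlers with synthetic MouseEvents so it runs headlessly:

```python
import matplotlib
matplotlib.use("Agg")  # headless here; the pattern is backend-independent
import matplotlib.pyplot as plt
from matplotlib.backend_bases import MouseEvent

class DragTracker:
    def __init__(self, figure):
        self.canvas = figure.canvas
        self.motion_cid = None  # set only while a button is held
        self.points = []
        self.canvas.mpl_connect("button_press_event", self.on_press)
        self.canvas.mpl_connect("button_release_event", self.on_release)

    def on_press(self, event):
        # Start the chain: only now do we listen for mouse movement.
        self.motion_cid = self.canvas.mpl_connect(
            "motion_notify_event", self.on_motion)

    def on_motion(self, event):
        self.points.append((event.x, event.y))

    def on_release(self, event):
        # End the chain: stop listening for movement.
        if self.motion_cid is not None:
            self.canvas.mpl_disconnect(self.motion_cid)
            self.motion_cid = None

fig, ax = plt.subplots()
tracker = DragTracker(fig)

# Simulate press -> move -> release with synthetic events.
for name, xy in [("button_press_event", (100, 100)),
                 ("motion_notify_event", (110, 105)),
                 ("button_release_event", (110, 105))]:
    event = MouseEvent(name, fig.canvas, *xy, button=1)
    fig.canvas.callbacks.process(name, event)

print(len(tracker.points), tracker.motion_cid)  # -> 1 None
```

Motion events arriving before the press or after the release simply never reach on_motion, because the handler isn't connected then — exactly the chaining the toolbar relies on.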
In this way, chains of events may be set up with a particular sequence of events firing only a particular set of callbacks and in a particular order.\nSpecialized Events\nThe code in matplotlib.backend_bases.NavigationToolbar2 is a great place to go to get some ideas about how you might combine events in your own projects. You might have a workflow that requires responses to plot updates, but only if a series of other events has taken place first. You can accomplish this by connecting events to and disconnecting them from various callbacks.\nInteractive Panning and Zooming\nLet's go back to the toolbar for a practical example of creating a compound event.\nThe problem we want to address is this: when a user pans or zooms out of the range of previously computed data in a plotted area, they are presented with parts of an empty grid with no visualization. It would be nice if we could put our new-found event callback skills to use in order to solve this issue.\nLet's look at an example where it would be useful to have the plot figure refreshed when it is moved: a topographic map. Geophysicist Joe Kington has provided some nice answers on Stack Overflow regarding matplotlib in the context of terrain gradients. In one particular example, he showed how to view the flow of water from random wells on a topographic map. We're going to do a few things with this example:\n\nadd a color map to give it the look of a physical map\ngive altitude in meters, and most importantly,\ncreate a class that can update the map via a method call\n\nOur custom color map and Joe's equations for generating a topographical map have been saved to ./lib/topo.py. We'll need to import those. 
Then we can define TopoFlowMap, our wrapper class that will be used to update the plot when we pan:", "class TopoFlowMap:\n def __init__(self, xrange=None, yrange=None, seed=1):\n self.xrange = xrange or (0,1)\n self.yrange = yrange or (0,1)\n self.seed = seed\n (self.figure, self.axes) = plt.subplots(figsize=(12,8))\n self.axes.set_aspect(1)\n self.colorbar = None\n self.update()\n\n def get_ranges(self, xrange, yrange):\n if xrange:\n self.xrange = xrange\n if yrange:\n self.yrange = yrange\n return (xrange, yrange)\n\n def get_colorbar_axes(self):\n colorbar_axes = None\n if self.colorbar:\n colorbar_axes = self.colorbar.ax\n colorbar_axes.clear()\n return colorbar_axes\n \n def get_filled_contours(self, coords):\n return self.axes.contourf(cmap=topo.land_cmap, *coords.values())\n\n def update_contour_lines(self, filled_contours):\n contours = self.axes.contour(filled_contours, colors=\"black\", linewidths=2)\n self.axes.clabel(contours, fmt=\"%d\", colors=\"#330000\")\n\n def update_water_flow(self, coords, gradient):\n self.axes.streamplot(\n coords.get(\"x\")[:,0],\n coords.get(\"y\")[0,:],\n gradient.get(\"dx\"),\n gradient.get(\"dy\"),\n color=\"0.6\",\n density=1,\n arrowsize=2)\n \n def update_labels(self):\n self.colorbar.set_label(\"Altitude (m)\")\n self.axes.set_title(\"Water Flow across Land Gradients\", fontsize=20)\n self.axes.set_xlabel(\"$x$ (km)\")\n self.axes.set_ylabel(\"$y$ (km)\")\n\n def update(self, xrange=None, yrange=None):\n (xrange, yrange) = self.get_ranges(xrange, yrange)\n (coords, grad) = topo.make_land_map(self.xrange, self.yrange, self.seed)\n self.axes.clear()\n colorbar_axes = self.get_colorbar_axes()\n filled_contours = self.get_filled_contours(coords)\n self.update_contour_lines(filled_contours)\n self.update_water_flow(coords, grad)\n self.colorbar = self.figure.colorbar(filled_contours, cax=colorbar_axes)\n self.update_labels()", "Let's switch back to the IPython Notebook backend, so we have a reference image saved in the 
notebook:", "plt.switch_backend('nbAgg')", "Let's draw the topographical map next, without any ability to update when panning:", "tfm = TopoFlowMap(xrange=(0,1.5), yrange=(0,1.5), seed=1732)\nplt.show()", "If you click the \"pan/zoom\" button on the navigation toolbar, and then click+hold on the figure, you can move it about. Note that, when you do so, nothing gets redrawn.\nSince we do want to redraw, and there is no \"pan event\" to connect to, what are our options? Well, two come to mind:\n * piggyback on the draw_event, which fires each time the canvas is moved, or\n * use the button_release_event, which will fire when the panning is complete\nIf our figure were easy to draw with simple equations, the first option would probably be fine. However, we're doing some multivariate calculus on our simulated topography; as you might have noticed, our plot does not render immediately. So let's go with the second option.\nThere's an added bonus, though, that will make our lives easier: the NavigationToolbar2 instance keeps track of the mode it is in on its mode attribute. 
Let's use that to save some coding!", "class TopoFlowMapManager:\n    def __init__(self, xrange=None, yrange=None, seed=1):\n        self.map = TopoFlowMap(xrange, yrange, seed)\n        _ = self.map.figure.canvas.mpl_connect(\n            'button_release_event', self.handle_pan_zoom_release)\n\n    def start(self):\n        plt.show()\n\n    def handle_pan_zoom_release(self, event):\n        # Ignore releases outside the axes or outside of pan/zoom mode.\n        if event.inaxes is None:\n            return\n        if event.canvas.toolbar.mode != \"pan/zoom\":\n            return\n        self.map.update(event.inaxes.get_xlim(),\n                        event.inaxes.get_ylim())\n        event.canvas.draw()", "Let's switch back to the native backend (in my case, that's MacOSX; you may need Qt5Agg, WXAgg, or GTK3Agg):", "plt.switch_backend('MacOSX')", "Run the next bit of code, and then start panning around and releasing; you should see the new data displayed after the callback fires off the recalculation.", "tfmm = TopoFlowMapManager(xrange=(0,1.5), yrange=(0,1.5), seed=1732)\ntfmm.start()", "This is not a perfect topographical model, so sometimes you will see colors get shifted as the range of altitudes decreases or increases in a given view. Certainly close enough to demonstrate this use case, though :-)\nAnother thing you could do is add support for zoom rect such that the contour lines don't get angled (due to sparse data sample spacing), but stay smoothly curved no matter how far you zoom in. Given the example above, that should be fairly easy to implement, and we leave it as a fun exercise for the motivated reader :-)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]