Dataset columns: repo_name (string, lengths 6 to 77), path (string, lengths 8 to 215), license (string, 15 classes), cells (list), types (list).
msschwartz21/craniumPy
experiments/yot_experiment/landmarks.ipynb
gpl-3.0
[ "Introduction: Landmarks", "import deltascope as ds\nimport deltascope.alignment as ut\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.preprocessing import normalize\nfrom scipy.optimize import minimize\n\nimport os\nimport tqdm\nimport json\nimport time", "Import raw data\nThe user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.", "# --------------------------------\n# -------- User input ------------\n# --------------------------------\n\ndata = {\n # Specify sample type key\n 'wt': {\n # Specify path to data directory\n 'path': './data/Output_wt03-09-21-29/',\n # Specify which channels are in the directory and are of interest\n 'channels': ['AT','ZRF']\n },\n 'you-too': {\n 'path': './data/Output_yot03-09-23-21/',\n 'channels': ['AT','ZRF']\n }\n}", "We'll generate a list of pairs of stypes and channels for ease of use.", "data_pairs = []\nfor s in data.keys():\n for c in data[s]['channels']:\n data_pairs.append((s,c))", "We can now read in all datafiles specified by the data dictionary above.", "D = {}\nfor s in data.keys():\n D[s] = {}\n for c in data[s]['channels']:\n D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)", "Calculate landmark bins\nBased on the analysis above, we can select the optimal value of alpha bins.", "# --------------------------------\n# -------- User input ------------\n# --------------------------------\n\n# Pick an integer value for bin number based on results above\nanum = 25\n\n# Specify the percentiles which will be used to calculate landmarks\npercbins = [50]", "Calculate landmark bins based on user input parameters and the previously specified control sample.", "theta_step = np.pi/4\n\nlm = ds.landmarks(percbins=percbins, rnull=np.nan)\nlm.calc_bins(D['wt']['AT'], anum, theta_step)\n\nprint('Alpha bins')\nprint(lm.acbins)\nprint('Theta bins')\nprint(lm.tbins)", "Calculate landmarks", "lmdf = pd.DataFrame()\n\n# Loop through each pair of stype and channels\nfor s,c in tqdm.tqdm(data_pairs):\n print(s,c)\n # Calculate landmarks for each sample with this data pair\n for k,df in tqdm.tqdm(D[s][c].items()):\n lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)\n \n# Set timestamp for saving data\ntstamp = time.strftime(\"%m-%d-%H-%M\",time.localtime())\n \n# Save completed landmarks to a csv file\nlmdf.to_csv(tstamp+'_landmarks.csv')\nprint('Landmarks saved to csv')\n\n# Save landmark bins to json file\nbins = {\n 'acbins':list(lm.acbins),\n 'tbins':list(lm.tbins)\n}\nwith open(tstamp+'_landmarks_bins.json', 'w') as outfile:\n json.dump(bins, outfile)\nprint('Bins saved to json')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
GoogleCloudPlatform/mlops-on-gcp
on_demand/tfx-caip/lab-04-tfx-metadata/solutions/lab-04.ipynb
apache-2.0
[ "Inspecting TFX metadata\nLearning Objectives\n\nUse a GRPC server to access and analyze pipeline artifacts stored in the ML Metadata service of your AI Platform Pipelines instance.\n\nIn this lab, you will explore TFX pipeline metadata including pipeline and run artifacts. A hosted AI Platform Pipelines instance includes the ML Metadata service. In AI Platform Pipelines, ML Metadata uses MySQL as a database backend and can be accessed using a GRPC server.\nSetup", "import os\nimport json\n\nimport ml_metadata\nimport tensorflow_data_validation as tfdv\nimport tensorflow_model_analysis as tfma\n\n\nfrom ml_metadata.metadata_store import metadata_store\nfrom ml_metadata.proto import metadata_store_pb2\n\nfrom tfx.orchestration import metadata\nfrom tfx.types import standard_artifacts\n\nfrom tensorflow.python.lib.io import file_io\n\n!python -c \"import tfx; print('TFX version: {}'.format(tfx.__version__))\"\n!python -c \"import kfp; print('KFP version: {}'.format(kfp.__version__))\"", "Option 1: Explore metadata from existing TFX pipeline runs from AI Pipelines instance created in lab-02 or lab-03.\n1.1 Configure Kubernetes port forwarding\nTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.\nFrom a JupyterLab terminal, execute the following commands:\ngcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOUR CLUSTER ZONE] \nkubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080\nProceed to the next step, \"Connecting to ML Metadata\".\nOption 2: Create new AI Pipelines instance and evaluate metadata on newly triggered pipeline runs.\nHosted AI Pipelines incurs cost for the duration your Kubernetes cluster is running. If you deleted your previous lab instance, proceed with the 6 steps below to deploy a new TFX pipeline and triggers runs to inspect its metadata.", "import yaml\n\n# Set `PATH` to include the directory containing TFX CLI.\nPATH=%env PATH\n%env PATH=/home/jupyter/.local/bin:{PATH}", "The pipeline source can be found in the pipeline folder. Switch to the pipeline folder and compile the pipeline.", "%cd pipeline", "2.1 Create AI Platform Pipelines cluster\nNavigate to AI Platform Pipelines page in the Google Cloud Console.\nCreate or select an existing Kubernetes cluster (GKE) and deploy AI Platform. Make sure to select \"Allow access to the following Cloud APIs https://www.googleapis.com/auth/cloud-platform\" to allow for programmatic access to your pipeline by the Kubeflow SDK for the rest of the lab. Also, provide an App instance name such as \"TFX-lab-04\".\n2.2 Configure environment settings\nUpdate the below constants with the settings reflecting your lab environment.\n\nGCP_REGION - the compute region for AI Platform Training and Prediction\nARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name starts with the kubeflowpipelines- prefix. Alternatively, you can specify create a new storage bucket to write pipeline artifacts to.", "!gsutil ls", "CUSTOM_SERVICE_ACCOUNT - In the gcp console Click on the Navigation Menu. Navigate to IAM &amp; Admin, then to Service Accounts and use the service account starting with prifix - 'tfx-tuner-caip-service-account'. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. 
Please see the lab setup README for setup instructions.\n\n\nENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.\n\n\nOpen the SETTINGS for your instance\n\nUse the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.", "#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.\nGCP_REGION = 'us-central1'\nARTIFACT_STORE_URI = 'gs://dougkelly-sandbox-kubeflowpipelines-default' #Change\nENDPOINT = '60ff837483ecde05-dot-us-central2.pipelines.googleusercontent.com' #Change\nCUSTOM_SERVICE_ACCOUNT = 'tfx-tuner-caip-service-account@dougkelly-sandbox.iam.gserviceaccount.com' #Change\n\nPROJECT_ID = !(gcloud config get-value core/project)\nPROJECT_ID = PROJECT_ID[0]\n\n# Set your resource settings as environment variables. These override the default values in pipeline/config.py.\n%env GCP_REGION={GCP_REGION}\n%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}\n%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}\n%env PROJECT_ID={PROJECT_ID}", "2.3 Compile pipeline", "PIPELINE_NAME = 'tfx_covertype_lab_04'\nMODEL_NAME = 'tfx_covertype_classifier'\nDATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'\nCUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)\nRUNTIME_VERSION = '2.3'\nPYTHON_VERSION = '3.7'\nUSE_KFP_SA=False\nENABLE_TUNING=False\n\n%env PIPELINE_NAME={PIPELINE_NAME}\n%env MODEL_NAME={MODEL_NAME}\n%env DATA_ROOT_URI={DATA_ROOT_URI}\n%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}\n%env RUNTIME_VERSION={RUNTIME_VERSION}\n%env PYTHON_VERIONS={PYTHON_VERSION}\n%env USE_KFP_SA={USE_KFP_SA}\n%env ENABLE_TUNING={ENABLE_TUNING}\n\n!tfx pipeline compile --engine kubeflow --pipeline_path runner.py", "2.4 Deploy pipeline to AI Platform", "!tfx pipeline create \\\n--pipeline_path=runner.py \\\n--endpoint={ENDPOINT} \\\n--build_target_image={CUSTOM_TFX_IMAGE}", "(optional) If you make local changes to the pipeline, you can update the deployed package on AI Platform with the following command:", "!tfx pipeline update --pipeline_path runner.py --endpoint {ENDPOINT}", "2.5 Create and monitor pipeline run", "!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}", "2.6 Configure Kubernetes port forwarding\nTo enable access to the ML Metadata GRPC server, configure Kubernetes port forwarding.\nFrom a JupyterLab terminal, execute the following commands:\ngcloud container clusters get-credentials [YOUR CLUSTER] --zone [YOURE CLUSTER ZONE] \nkubectl port-forward service/metadata-grpc-service --namespace [YOUR NAMESPACE] 7000:8080\nConnecting to ML Metadata\nConfigure ML Metadata GRPC client", "grpc_host = 'localhost'\ngrpc_port = 7000\nconnection_config = metadata_store_pb2.MetadataStoreClientConfig()\nconnection_config.host = grpc_host\nconnection_config.port = grpc_port", "Connect to ML Metadata service", "store = metadata_store.MetadataStore(connection_config)", "Important\nA full pipeline run without tuning takes about 40-45 minutes to complete. You need to wait until a pipeline run is complete before proceeding with the steps below.\nExploring ML Metadata\nThe Metadata Store uses the following data model:\n\nArtifactType describes an artifact's type and its properties that are stored in the Metadata Store. 
These types can be registered on-the-fly with the Metadata Store in code, or they can be loaded in the store from a serialized format. Once a type is registered, its definition is available throughout the lifetime of the store.\nArtifact describes a specific instances of an ArtifactType, and its properties that are written to the Metadata Store.\nExecutionType describes a type of component or step in a workflow, and its runtime parameters.\nExecution is a record of a component run or a step in an ML workflow and the runtime parameters. An Execution can be thought of as an instance of an ExecutionType. Every time a developer runs an ML pipeline or step, executions are recorded for each step.\nEvent is a record of the relationship between an Artifact and Executions. When an Execution happens, Events record every Artifact that was used by the Execution, and every Artifact that was produced. These records allow for provenance tracking throughout a workflow. By looking at all Events MLMD knows what Executions happened, what Artifacts were created as a result, and can recurse back from any Artifact to all of its upstream inputs.\nContextType describes a type of conceptual group of Artifacts and Executions in a workflow, and its structural properties. For example: projects, pipeline runs, experiments, owners.\nContext is an instances of a ContextType. It captures the shared information within the group. For example: project name, changelist commit id, experiment annotations. It has a user-defined unique name within its ContextType.\nAttribution is a record of the relationship between Artifacts and Contexts.\nAssociation is a record of the relationship between Executions and Contexts.\n\nList the registered artifact types.", "for artifact_type in store.get_artifact_types():\n print(artifact_type.name)", "Display the registered execution types.", "for execution_type in store.get_execution_types():\n print(execution_type.name)", "List the registered context types.", "for context_type in store.get_context_types():\n print(context_type.name)", "Visualizing TFX artifacts\nRetrieve data analysis and validation artifacts", "with metadata.Metadata(connection_config) as store:\n schema_artifacts = store.get_artifacts_by_type(standard_artifacts.Schema.TYPE_NAME) \n stats_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleStatistics.TYPE_NAME)\n anomalies_artifacts = store.get_artifacts_by_type(standard_artifacts.ExampleAnomalies.TYPE_NAME)\n\nschema_file = os.path.join(schema_artifacts[-1].uri, 'schema.pbtxt')\nprint(\"Generated schame file:{}\".format(schema_file))\n\nstats_path = stats_artifacts[-1].uri\ntrain_stats_file = os.path.join(stats_path, 'train', 'stats_tfrecord')\neval_stats_file = os.path.join(stats_path, 'eval', 'stats_tfrecord')\nprint(\"Train stats file:{}, Eval stats file:{}\".format(\n train_stats_file, eval_stats_file))\n\nanomalies_path = anomalies_artifacts[-1].uri\ntrain_anomalies_file = os.path.join(anomalies_path, 'train', 'anomalies.pbtxt')\neval_anomalies_file = os.path.join(anomalies_path, 'eval', 'anomalies.pbtxt')\n\nprint(\"Train anomalies file:{}, Eval anomalies file:{}\".format(\n train_anomalies_file, eval_anomalies_file))", "Visualize schema", "schema = tfdv.load_schema_text(schema_file)\ntfdv.display_schema(schema=schema)", "Visualize statistics\nExercise: looking at the features visualized below, answer the following questions:\n\nWhich feature transformations would you apply to each feature with TF Transform?\nAre there data quality issues with certain 
features that may impact your model performance? How might you deal with it?", "train_stats = tfdv.load_statistics(train_stats_file)\neval_stats = tfdv.load_statistics(eval_stats_file)\ntfdv.visualize_statistics(lhs_statistics=eval_stats, rhs_statistics=train_stats,\n lhs_name='EVAL_DATASET', rhs_name='TRAIN_DATASET')", "Visualize anomalies", "train_anomalies = tfdv.load_anomalies_text(train_anomalies_file)\ntfdv.display_anomalies(train_anomalies)\n\neval_anomalies = tfdv.load_anomalies_text(eval_anomalies_file)\ntfdv.display_anomalies(eval_anomalies)", "Retrieve model artifacts", "with metadata.Metadata(connection_config) as store:\n model_eval_artifacts = store.get_artifacts_by_type(standard_artifacts.ModelEvaluation.TYPE_NAME)\n hyperparam_artifacts = store.get_artifacts_by_type(standard_artifacts.HyperParameters.TYPE_NAME)\n \nmodel_eval_path = model_eval_artifacts[-1].uri\nprint(\"Generated model evaluation result:{}\".format(model_eval_path))\nbest_hparams_path = os.path.join(hyperparam_artifacts[-1].uri, 'best_hyperparameters.txt')\nprint(\"Generated model best hyperparameters result:{}\".format(best_hparams_path))", "Return best hyperparameters", "# Latest pipeline run Tuner search space.\njson.loads(file_io.read_file_to_string(best_hparams_path))['space']\n\n# Latest pipeline run Tuner searched best_hyperparameters artifacts.\njson.loads(file_io.read_file_to_string(best_hparams_path))['values']", "Visualize model evaluations\nExercise: review the model evaluation results below and answer the following questions:\n\nWhich Wilderness Area had the highest accuracy?\nWhich Wilderness Area had the lowest performance? Why do you think that is? What are some steps you could take to improve your next model runs?", "eval_result = tfma.load_eval_result(model_eval_path)\ntfma.view.render_slicing_metrics(\n eval_result, slicing_column='Wilderness_Area')", "Debugging tip: If the TFMA visualization of the Evaluator results do not render, try switching to view in a Classic Jupyter Notebook. You do so by clicking Help &gt; Launch Classic Notebook and re-opening the notebook and running the above cell to see the interactive TFMA results.\nLicense\n<font size=-1>Licensed under the Apache License, Version 2.0 (the \\\"License\\\");\nyou may not use this file except in compliance with the License.\nYou may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \\\"AS IS\\\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.</font>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LimeeZ/phys292-2015-work
assignments/assignment12/FittingModelsEx02.ipynb
mit
[ "Fitting Models Exercise 2\nImports", "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport scipy.optimize as opt", "Fitting a decaying oscillation\nFor this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:\n\ntdata: an array of time values\nydata: an array of y values\ndy: the absolute uncertainties (standard deviations) in y\n\nYour job is to fit the following model to this data:\n$$ y(t) = A e^{-\\lambda t} \\cos{\\omega t + \\delta} $$\nFirst, import the data using NumPy and make an appropriately styled error bar plot of the raw data.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this to grade the data import and raw data plot", "Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters:\n\nPrint the parameters estimates and uncertainties.\nPlot the raw and best fit model.\nYou will likely have to pass an initial guess to curve_fit to get a good fit.\nTreat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.", "# YOUR CODE HERE\nraise NotImplementedError()\n\nassert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
basp/aya
noise_old.ipynb
mit
[ "import numpy as np\nimport matplotlib.pyplot as plt\nimport noise\nimport perlin\n%matplotlib inline", "linear interpolation\nWe need a function ${f}$ that, given values ${v_0}$ and ${v_1}$ and some interval ${t}$ where $0 \\le {t} \\le 1$, returns an interpolated value between ${v_0}$ and ${v_1}$.\nThe best way to start is with linear interpolation and that's what the lerp function does.\nLet's assume we have to values ${v_0}$ and ${v_1}$:", "v0 = 2\nv1 = 5\nplt.plot([0, 1], [v0, v1], '--')\nt = 1.0 / 3\nvt = noise.lerp(v0, v1, t)\nplt.plot(t, vt, 'ro')", "smoothstep", "x = np.linspace(0, 1.0)\ny1 = noise.ss3(x)\ny2 = noise.ss5(x)\nplt.plot(x, y1, label=\"smooth\")\nplt.plot(x, y2, label=\"smoother\")\nplt.legend(loc=2)", "vectors", "class Vector:\n def __init__(self, *components):\n self.components = np.array(components)\n \n def mag(self):\n return np.sqrt(sum(self.components**2))\n \n def __len__(self):\n return len(self.components)\n \n def __iter__(self):\n for c in self.components:\n yield c", "seeding", "np.random.ranf((2,3,2,2)) # seed in n-dimensions", "noise field\nFor instance, to create a hypercube of noise we could do something like this:", "c4 = noise.Field(d=(8,8,8,8), seed = 5)", "We can plot any course through this field, for example:", "q = np.arange(0, 8)\nx = [c4(x, 0, 0, 0) for x in q]\ny = [c4(0, y, 0, 0) for y in q]\nplt.plot(q, x, 'bo')\nplt.plot(q, y, 'ro')", "We could render a graph but that would be like cheating. We would be using the matplotlib linear interpolation instead of our own:", "# a one-dimensional noise field of 8 samples\nc1 = noise.Field(d=(8,))\nx = np.linspace(0, 7, 8)\ny = [c1(x) for x in x]\n# this will use matplotlib interpolation and not ours\nplt.plot(x, y)", "We can do better though by using one of the smoothstep functions. Instead of calculating ${v_t}$ directly we can do some tricks on ${t}$ to modify the outcome.\nFor convience let's start with the ss3 function and plot it so we know what it looks like:", "x = np.linspace(0, 1.0, 100)\ny = noise.ss3(x)\nplt.plot(x, y)", "Now we setup a noise field and define a helper function noise1 in order to get our coherent noise.", "samples = 32\ngen = noise.Field(d=(samples,))\n\ndef noise1(x, curve = lambda x: x):\n xi = int(x)\n xmin = xi % samples\n xmax = 0 if xmin == (samples - 1) else xmin + 1\n t = x - xi\n return noise.lerp(gen(xmin), gen(xmax), curve(t))\n\nx = np.linspace(0, 10, 100)\ny1 = [noise1(x) for x in x]\ny2 = [noise1(x, noise.ss5) for x in x]\nplt.plot(x, y1, '--')\nplt.plot(x, y2)", "fbm noise", "x = np.linspace(0, 4, 100)\ny1 = [1.0 * perlin.noise2d(x, 0) for x in x]\ny2 = [0.5 * perlin.noise2d(x * 2, 0) for x in x]\ny3 = [0.25 * perlin.noise2d(x * 4, 0) for x in x]\ny4 = [0.125 * perlin.noise2d(x * 8, 0) for x in x]\nplt.plot(x, y1)\nplt.plot(x, y2)\nplt.plot(x, y3)\nplt.plot(x, y4)\n\nx = np.linspace(0, 4, 100)\ny = [perlin.fbm(x, 0) for x in x]\nplt.plot(x, y)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ceos-seo/data_cube_notebooks
notebooks/machine_learning/Uruguay_Random_Forest/Random_Forest/1. Data Exploration.ipynb
apache-2.0
[ "import sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))\n\nimport pandas as pd\nimport seaborn as sns\n\nfrom matplotlib import pyplot\nfrom utils.data_cube_utilities import dc_display_map\n\n\n%matplotlib inline", "Load Truth Data\nOur uruguay data comes in a csv format. It contains three attributes: \n\nlatitude\nlongitude\nlandcover class", "df = pd.read_csv('../data.csv')\ndf.head()", "Label distribution\nIn this section, data is binned by landcover and counted. Landcover classes with little to no labels will be unreliable candidates for classification as there may not be enough variance in the training labels to guarantee that the model learns to generalize.", "df.groupby(\"LandUse\").size()\n\nfig, ax = pyplot.subplots(figsize=(15,3))\nsns.countplot(x=\"LandUse\",data=df, palette=\"Greens_d\");", "Re-Labeling\nRelated classes are combined to boost the number of samples in the new classes.", "df_new = df.copy() \ndf_new['LandUse'].update(df_new['LandUse'].map(lambda x: \"Forest\" if x in [\"Forestry\",\"Fruittrees\",\"Nativeforest\"] else x ))\ndf_new['LandUse'].update(df_new['LandUse'].map(lambda x: \"Misc\" if x not in [\"Forest\",\"Prairie\",\"Summercrops\",\"Naturalgrassland\"] else x ))\n\ndf_new.groupby(\"LandUse\").size()\n\nfig, ax = pyplot.subplots(figsize=(15,5))\nsns.countplot(x=\"LandUse\",data=df_new, palette=\"Greens_d\");", "Visualize Label Distribution", "dc_display_map.display_grouped_pandas_rows_as_pins(df_new, group_name= \"LandUse\")", "Export re-labled data", "output_destination_name = \"./relabeled_data.csv\"\n\n## Recap of structure\ndf_new.head()\n\ndf_new.to_csv(output_destination_name)\n\n!ls" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dh7/ML-Tutorial-Notebooks
tf-linear-regression.ipynb
bsd-2-clause
[ "Linear regression with TensorFlow\nThis notebooks is dedicated to Linear regression. It is based on a code from Aymeric Damien.\nCheck the source here\n(please note that I've made several litle change to the code to make is easier to explain as a notebook)\nThe code is super straigh forward, so let's dive in!\nImport\nThe classical import, plus the %mapplotlib magical trick to have matplotlib working with Jupyter", "%matplotlib notebook\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n'''\nA linear regression learning algorithm example using TensorFlow library.\nAuthor: Aymeric Damien\nProject: https://github.com/aymericdamien/TensorFlow-Examples/\n'''\n\nimport tensorflow as tf\nimport numpy as np", "Set up the data\nMachine learning need data!\nSo let's create a training set, and a testing set.\nWe'll train our model using the training set, and we'll use the testing set to check the result.", "# Training Data\ntrain_X = np.asarray([3.3,4.4,5.5,6.71,6.93,4.168,9.779,6.182,7.59,2.167,\n 7.042,10.791,5.313,7.997,5.654,9.27,3.1])\ntrain_Y = np.asarray([1.7,2.76,2.09,3.19,1.694,1.573,3.366,2.596,2.53,1.221,\n 2.827,3.465,1.65,2.904,2.42,2.94,1.3])\n\n# Testing example\ntest_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])\ntest_Y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])\n\nn_samples = train_X.shape[0]\nprint \"size of the training samples:\", n_samples\n\nfig1 = plt.figure(figsize=(12,5))\nplt.plot(train_X, train_Y, 'bo', label='Training data')\nplt.plot(test_X, test_Y, 'ro', label='Testing data')\nplt.legend()\nplt.show()", "Creation of the model\nThe model is a simple linear model:\npred = W*X + b", "# tf Graph Input\nX = tf.placeholder(\"float\")\nY = tf.placeholder(\"float\")\n\n# Set model weights\nW = tf.Variable(tf.ones([1])*0.1, name=\"weight\")\nb = tf.Variable(tf.ones([1])*1.2, name=\"bias\")\n\n# Construct a linear model\npred = tf.add(tf.mul(X, W), b)", "Loss function\nThe loss (or cost) is a key concept in all neural networks trainning. It is a value that describe how bag/good is our model.\nIt is always positive, the closest to zero, the better is our model.\n(A good model is a model where the predicted output is close to the training output)\nDuring the trainning phase we want to minimize the loss. 
\nloss here is the sum of all (prediction-real)^2", "# Mean squared error\ncost = tf.reduce_sum(tf.pow(pred-Y, 2))", "Hyper parameters\nHyper parameters are not parameters of the models.\nWe use them as parameters for the optimisation", "# Parameters\nlearning_rate = 0.001\ntraining_epochs = 1000\ndisplay_step = 100", "Optimisation\nOptimisation will use the Gradient Descent technique.\nhere a great source to understand this technic in detail.", "# Gradient descent\noptimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)", "Launch the graph\nWe are not using the \"#with tf.Session() as sess:\" to make it more convinent to use in a notebook.\nSo we need to add a sess.close() at the end.", "# Initializing the variables\ninit = tf.initialize_all_variables()\n\n# Launch the graph\nsess = tf.Session() \nsess.run(init)", "Do the trainning\nYou can see how cost function is evolving during the training and how the model evolve during the training.", "plt.figure(figsize=(12,5))\nplt.plot(train_X, train_Y, 'bo', label='Testing data')\nplt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='before start ')\n\ncost_optimisation = []\n\n# Fit all training data\nfor epoch in range(training_epochs):\n for (x, y) in zip(train_X, train_Y):\n sess.run(optimizer, feed_dict={X: x, Y: y})\n\n #Display logs per epoch step\n if (epoch+1) % display_step == 0:\n c = sess.run(cost, feed_dict={X: train_X, Y:train_Y})\n print \"Epoch:\", '%04d' % (epoch+1), \"cost=\", \"{:.9f}\".format(c), \\\n \"W=\", sess.run(W), \"b=\", sess.run(b)\n plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='epoch '+ str(epoch+1))\n # save the value of the cost to draw it\n cost_optimisation.append(c)\nplt.legend()\nplt.show()\n\nplt.figure(figsize=(12,5))\nplt.plot(range(len(cost_optimisation)), cost_optimisation, label='cost')\nplt.legend()\nplt.show()", "Check with the trainning data", " print \"Optimization Finished!\"\ntraining_cost = sess.run(cost, feed_dict={X: train_X, Y: train_Y})\nprint \"Training cost=\", training_cost, \"W=\", sess.run(W), \"b=\", sess.run(b), '\\n'\n\nprint \"Testing... (Mean square loss Comparison)\"\ntesting_cost = sess.run(\n tf.reduce_sum(tf.pow(pred - Y, 2)) / (2 * test_X.shape[0]),\n feed_dict={X: test_X, Y: test_Y}) # same function as cost above\nprint \"Testing cost=\", testing_cost\nprint \"Absolute mean square loss difference:\", abs(\n training_cost - testing_cost)\n\n#Graphic display\nplt.figure(figsize=(10,5))\nplt.plot(train_X, train_Y, 'bo', label='Original data')\nplt.plot(test_X, test_Y, 'ro', label='Testing data')\nplt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Fitted line')\nplt.legend()\nplt.show()", "Closing the session\nDon't forget to close the session if you don't use the \"with\" statement", "sess.close()", "Hope this is usefull.\nFeedback welcome @dh7net" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
0u812/nteract
example-notebooks/tutorial.ipynb
bsd-3-clause
[ "<a id=\"topcell\"></a>\nTellurium Notebook Tutorial\nThe Tellurium notebook environment is a self-contained Jupyter-like environment based on the nteract project. Tellurium adds special cells for working with SBML and COMBINE archives by representing these standards in human-readable form.\nTellurium also features a variety of Python packages, such as the libroadrunner simulator, designed to provide a complete biochemical network modeling environment using Python.\nContents:\n\n\nExample 1: A Simple SBML Model\n\n\nExample 2: Advanced SBML Features\n\n\nExample 3: Creating a COMBINE Archive\n\n\n<a id=\"ex1\"></a>\nExample 1: A Simple SBML Model\nThis example generates a very simple SBML model. Reactant S1 is converted to product S2 at a rate k1*S1. Running the following cell will generate an executable version of the model in the variable simple. You can then call the simulate method on this variable (specifying the start time, end time, and number of points), and plot the result.\nBack to top", "model simple()\n S1 -> S2; k1*S1\n k1 = 0.1\n S1 = 10\nend\n\nsimple.simulate(0, 50, 100)\nsimple.plot()", "<a id=\"ex2\"></a>\nExample 2: Advanced SBML Features\nIn this example, we will demonstrate the use of SBML events, compartments, and assignment rules. Events occur at discrete instants in time, and can be used to model the addition of a bolus of ligand etc. to the system. Compartments allow modeling of discrete volumetric spaces within a cell or system. Assignment rules provide a way to explicitly specify a value, as a function of time (as we do here) or otherwise.\n\nThere are two compartments: one containing species A, and one containing species B.\nOne mass unit of A is converted to one mass unit of B, but because B's compartment is half the size, the concentration of B increases at twice the rate as A diminishes.\nHalf-way through the simulation, we add a bolus of A\nSpecies C is neither created nor destroyed in a reaction - it is defined entirely by a rate rule.\n\nBack to top", "model advanced()\n # Create two compartments\n compartment compA=1, compB=0.5 # B is half the volume of A\n species A in compA, B in compB\n # Use the label `J0` for the reaction\n J0: A -> B; k*A\n # C is defined by an assignment rule\n species C\n C := sin(2*time/3.14) # a sine wave\n k = 0.1\n A = 10\n \n # Event: half-way through the simulation,\n # add a bolus of A\n at time>=5: A = A+10\nend\n\nadvanced.simulate(0, 10, 100)\nadvanced.plot()", "<a id=\"ex3\"></a>\nExample 3: Creating a COMBINE Archive\nCOMBINE archives are containers for standards. They enable models encoded in SBML and simulations encoded in SED-ML to be exchanged between different tools. Tellurium displays COMBINE archives in an inline, human-readable form.\nTo convert the SBML model of Example 1 into a COMBINE archive, we need to define four steps in the workflow, which correspond to distinct elements in SED–ML: (1) models, (2) simulations, (3) tasks, and (4) outputs.\nYou can export this cell as a COMBINE archive by clicking on the diskette icon in the upper-right. You should be able to import it using other tools which support COMBINE archives, such as the SED-ML Web Tools or iBioSim.\nBack to top", "model simple()\n S1 -> S2; k1*S1\n k1 = 0.1\n S1 = 10\nend\n\n# Models\nmodel1 = model \"simple\"\n# Simulations\nsim1 = simulate uniform(0, 50, 1000)\n// Tasks\ntask1 = run sim1 on model1\n// Outputs\nplot \"COMBINE Archive Plot\" time vs S1, S2" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
liufuyang/deep_learning_tutorial
jizhi-pytorch-2/03_text_generation/RNNGenerative/MIDIComposer.ipynb
mit
[ "神经莫扎特——MIDI音乐的学习与生成\n在这节课中,我们学习了如何通过人工神经网络学习一个MIDI音乐,并记忆中音符时间序列中的模式,并生成一首音乐\n首先,我们要学习如何解析一个MIDI音乐,将它读如进来;其次,我们用处理后的MIDI序列数据训练一个LSTM网络,并让它预测下一个音符;\n最后,我们用训练好的LSTM生成MIDI音乐\n本程序改造自\n本文件是集智AI学园http://campus.swarma.org 出品的“火炬上的深度学习”第VI课的配套源代码", "# 导入必须的依赖包\n\n# 与PyTorch相关的包\nimport torch\nimport torch.utils.data as DataSet\nimport torch.nn as nn\nfrom torch.autograd import Variable\nimport torch.optim as optim\n\n\n# 导入midi音乐处理的包\nfrom mido import MidiFile, MidiTrack, Message\n\n# 导入计算与绘图必须的包\nimport numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline", "一、导入MIDI文件,并处理成标准形式\n首先,我们从MIDI文件中提取出消息(Message)序列,一个消息包括:音符(note)、速度(velocity)与时间(time,距离上一个音符的时间长度)\n其次,我们要将每一个消息进行编码,根据音符、速度、时间的取值范围,我们分别用长度为89、128与11的one-hot编码得到一个01向量。\n1. 从硬盘读取MIDI文件", "# 从硬盘中读入MIDI音乐文件\n#mid = MidiFile('./music/allegroconspirito.mid') # a Mozart piece\nmid = MidiFile('./music/krebs.mid') # a Mozart piece\n\nnotes = []\n\ntime = float(0)\nprev = float(0)\n\noriginal = [] # original记载了原始message数据,以便后面的比较\n\n# 对MIDI文件中所有的消息进行循环\nfor msg in mid:\n # 时间的单位是秒,而不是帧\n time += msg.time\n \n # 如果当前消息不是描述信息\n if not msg.is_meta:\n # 仅提炼第一个channel的音符\n if msg.channel == 0:\n # 如果当前音符为打开的\n if msg.type == 'note_on':\n # 获得消息中的信息(编码在字节中)\n note = msg.bytes() \n # 我们仅对音符信息感兴趣. 音符消息按如下形式记录 [type, note, velocity]\n note = note[1:3] #操作玩这一步后,note[0]存音符,note[1]存速度(力度)\n # note[2]存据上一个message的时间间隔\n note.append(time - prev)\n prev = time\n # 将音符添加到列表notes中\n notes.append(note)\n # 在原始列表中保留这些音符\n original.append([i for i in note])\n\n# 绘制每一个分量的直方图,方便看出每一个量的取值范围\nplt.figure()\nplt.hist([i[0] for i in notes])\nplt.title('Note')\nplt.figure()\nplt.hist([i[1] for i in notes])\nplt.title('Velocity')\nplt.figure()\nplt.hist([i[2] for i in notes])\nplt.title('Time')\n\n", "2. 将每一个Message进行编码\n原始的数据是形如(78, 0, 0.0108)这样的三元组\n编码后的数据格式为:(00...010..., 100..., 0100...)这样的三个one-hot向量,第一个向量长度89,第二个128,第三个11", "# note和velocity都可以看作是类型变量\n# time为float,我们按照区间将其也化成离散的类型变量\n# 首先,我们找到time变量的取值区间,并进行划分。由于大量msg的time为0,因此我们把0归为了一个特别的类\nintervals = 10\nvalues = np.array([i[2] for i in notes])\nmax_t = np.amax(values) #区间中的最大值\nmin_t = np.amin(values[values > 0]) #区间中的最小值\ninterval = 1.0 * (max_t - min_t) / intervals\n\n# 接下来,我们将每一个message编码成三个one-hot向量,将这三个向量合并到一起就构成了slot向量\ndataset = []\nfor note in notes:\n slot = np.zeros(89 + 128 + 12)\n \n #由于note是介于24-112之间的,因此减24\n ind1 = note[0]-24\n ind2 = note[1]\n # 由于message中有大量的time=0的情况,因此我们将0分为单独的一类,其他的都是按照区间划分\n ind3 = int((note[2] - min_t) / interval + 1) if note[2] > 0 else 0\n slot[ind1] = 1\n slot[89 + ind2] = 1\n slot[89 + 128 + ind3] = 1\n # 将处理后得到的slot数组加入到dataset中\n dataset.append(slot)", "3.生成训练集和校验集,装进数据加载器\n我们将整个音符三元组(note,velocity,time)序列按照31位长度的滑动窗口切分成了len(dataset)-n_prev组\n每一组的前30位作为输入,最后一位作为输出形成了训练数据", "# 生成训练集和校验集\nX = []\nY = []\n# 首先,按照预测的模式,我们将原始数据生成一对一对的训练数据\nn_prev = 30 # 滑动窗口长度为30\n\n# 对数据中的所有数据进行循环\nfor i in range(len(dataset)-n_prev):\n # 往后取n_prev个note作为输入属性\n x = dataset[i:i+n_prev]\n # 将第n_prev+1个note(编码前)作为目标属性\n y = notes[i+n_prev]\n # 注意time要转化成类别的形式\n ind3 = int((y[2] - min_t) / interval + 1) if y[2] > 0 else 0\n y[2] = ind3\n \n # 将X和Y加入到数据集中\n X.append(x)\n Y.append(y)\n \n# 将数据集中的前n_prev个音符作为种子,用于生成音乐的时候用\nseed = dataset[0:n_prev]\n\n# 对所有数据顺序打乱重排\nidx = np.random.permutation(range(len(X)))\n# 形成训练与校验数据集列表\nX = [X[i] for i in idx]\nY = [Y[i] for i in idx]\n\n# 从中切分1/10的数据出来放入校验集\nvalidX = X[: len(X) // 10]\nX = X[len(X) // 10 :]\nvalidY = Y[: len(Y) // 10]\nY = Y[len(Y) // 10 :]\n\n# 将列表再转化为dataset,并用dataloader来加载数据\n# 
dataloader是PyTorch开发采用的一套管理数据的方法。通常数据的存储放在dataset中,而对数据的调用则是通过data loader完成的\n# 同时,在进行预处理的时候,系统已经自动将数据打包成撮(batch),每次调用,我们都提取一整个撮出来(包含多条记录)\n# 从dataloader中吐出的每一个元素都是一个(x,y)元组,其中x为输入的张量,y为标签。x和y的第一个维度都是batch_size大小。\n\nbatch_size = 30 #一撮包含30个数据记录,这个数字越大,系统在训练的时候,每一个周期处理的数据就越多,这样处理越快,但总的数据量会减少\n\n# 形成训练集\ntrain_ds = DataSet.TensorDataset(torch.FloatTensor(np.array(X, dtype = float)), torch.LongTensor(np.array(Y)))\n# 形成数据加载器\ntrain_loader = DataSet.DataLoader(train_ds, batch_size = batch_size, shuffle = True, num_workers=4)\n\n\n# 校验数据\nvalid_ds = DataSet.TensorDataset(torch.FloatTensor(np.array(validX, dtype = float)), torch.LongTensor(np.array(validY)))\nvalid_loader = DataSet.DataLoader(valid_ds, batch_size = batch_size, shuffle = True, num_workers=4)\n", "二、定义一个LSTM网络\n该网络特殊的地方在于它的输出,对于每一个样本,它会输出三个变量x,y,z,它们分别是一个归一化的概率向量\n分别用来预测类型化了的note、velocity和time\n在网络中我们对lstm的输出结果进行dropout的操作,所谓的dropout就是指在训练的截断,系统会随机删除掉一些神经元,\n,而在测试阶段则不会删掉神经元,这样使得模型给出正确的输出会更加困难,从避免了过拟合现象。", "\nclass LSTMNetwork(nn.Module):\n def __init__(self, input_size, hidden_size, out_size, n_layers=1):\n super(LSTMNetwork, self).__init__()\n self.n_layers = n_layers\n \n self.hidden_size = hidden_size\n self.out_size = out_size\n # 一层LSTM单元\n self.lstm = nn.LSTM(input_size, hidden_size, n_layers, batch_first = True)\n # 一个Dropout部件,以0.2的概率Dropout\n self.dropout = nn.Dropout(0.2)\n # 一个全链接层\n self.fc = nn.Linear(hidden_size, out_size)\n # 对数Softmax层\n self.softmax = nn.LogSoftmax()\n\n def forward(self, input, hidden=None):\n # 神经网络的每一步运算\n\n hhh1 = hidden[0] #读如隐含层的初始信息\n \n # 完成一步LSTM运算\n # input的尺寸为:batch_size , time_step, input_size\n output, hhh1 = self.lstm(input, hhh1) #input:batchsize*timestep*3\n # 对神经元输出的结果进行dropout\n output = self.dropout(output)\n # 取出最后一个时刻的隐含层输出值\n # output的尺寸为:batch_size, time_step, hidden_size\n output = output[:, -1, ...]\n # 此时,output的尺寸为:batch_size, hidden_size\n # 喂入一个全链接层\n out = self.fc(output)\n # out的尺寸为:batch_size, output_size\n\n # 将out的最后一个维度分割成三份x, y, z分别对应对note,velocity以及time的预测\n \n x = self.softmax(out[:, :89])\n y = self.softmax(out[:, 89: 89 + 128])\n z = self.softmax(out[:, 89 + 128:])\n \n # x的尺寸为batch_size, 89\n # y的尺寸为batch_size, 128\n # z的尺寸为batch_size, 11\n # 返回x,y,z\n return (x,y,z)\n\n def initHidden(self, batch_size):\n # 对隐含层单元变量全部初始化为0\n # 注意尺寸是: layer_size, batch_size, hidden_size\n out = []\n hidden1=Variable(torch.zeros(1, batch_size, self.hidden_size1))\n cell1=Variable(torch.zeros(1, batch_size, self.hidden_size1))\n out.append((hidden1, cell1))\n return out\n\ndef criterion(outputs, target):\n # 为本模型自定义的损失函数,它由三部分组成,每部分都是一个交叉熵损失函数,\n # 它们分别对应note、velocity和time的交叉熵\n x, y, z = outputs\n loss_f = nn.NLLLoss()\n loss1 = loss_f(x, target[:, 0])\n loss2 = loss_f(y, target[:, 1])\n loss3 = loss_f(z, target[:, 2])\n return loss1 + loss2 + loss3\ndef rightness(predictions, labels):\n \"\"\"计算预测错误率的函数,其中predictions是模型给出的一组预测结果,batch_size行num_classes列的矩阵,labels是数据之中的正确答案\"\"\"\n pred = torch.max(predictions.data, 1)[1] # 对于任意一行(一个样本)的输出值的第1个维度,求最大,得到每一行的最大元素的下标\n rights = pred.eq(labels.data).sum() #将下标与labels中包含的类别进行比较,并累计得到比较正确的数量\n return rights, len(labels) #返回正确的数量和这一次一共比较了多少元素", "开始训练一个LSTM。", "\n# 定义一个LSTM,其中输入输出层的单元个数取决于每个变量的类型取值范围\nlstm = LSTMNetwork(89 + 128 + 12, 128, 89 + 128 + 12)\noptimizer = optim.Adam(lstm.parameters(), lr=0.001)\nnum_epochs = 100\ntrain_losses = []\nvalid_losses = []\nrecords = []\n\n# 开始训练循环\nfor epoch in range(num_epochs):\n train_loss = []\n # 开始遍历加载器中的数据\n for batch, data in enumerate(train_loader):\n # batch为数字,表示已经进行了第几个batch了\n # 
data为一个二元组,分别存储了一条数据记录的输入和标签\n # 每个数据的第一个维度都是batch_size = 30的数组\n \n lstm.train() # 标志LSTM当前处于训练阶段,Dropout开始起作用\n init_hidden = lstm.initHidden(len(data[0])) # 初始化LSTM的隐单元变量\n optimizer.zero_grad()\n x, y = Variable(data[0]), Variable(data[1]) # 从数据中提炼出输入和输出对\n outputs = lstm(x, init_hidden) #喂入LSTM,产生输出outputs\n loss = criterion(outputs, y) #代入损失函数并产生loss\n train_loss.append(loss.data.numpy()[0]) # 记录loss\n loss.backward() #反向传播\n optimizer.step() #梯度更新\n if 0 == 0:\n #在校验集上跑一遍,并计算在校验集上的分类准确率\n valid_loss = []\n lstm.eval() #将模型标志为测试状态,关闭dropout的作用\n rights = []\n # 遍历加载器加载进来的每一个元素\n for batch, data in enumerate(valid_loader):\n init_hidden = lstm.initHidden(len(data[0]))\n #完成LSTM的计算\n x, y = Variable(data[0]), Variable(data[1])\n #x的尺寸:batch_size, length_sequence, input_size\n #y的尺寸:batch_size, (data_dimension1=89+ data_dimension2=128+ data_dimension3=12)\n outputs = lstm(x, init_hidden)\n #outputs: (batch_size*89, batch_size*128, batch_size*11)\n loss = criterion(outputs, y)\n valid_loss.append(loss.data.numpy()[0])\n #计算每个指标的分类准确度\n right1 = rightness(outputs[0], y[:, 0])\n right2 = rightness(outputs[1], y[:, 1])\n right3 = rightness(outputs[2], y[:, 2])\n rights.append((right1[0] + right2[0] + right3[0]) * 1.0 / (right1[1] + right2[1] + right3[1]))\n # 打印结果\n print('第{}轮, 训练Loss:{:.2f}, 校验Loss:{:.2f}, 校验准确度:{:.2f}'.format(epoch, \n np.mean(train_loss),\n np.mean(valid_loss),\n np.mean(rights)\n ))\n records.append([np.mean(train_loss), np.mean(valid_loss), np.mean(rights)])\n\n# 绘制训练过程中的Loss曲线\na = [i[0] for i in records]\nb = [i[1] for i in records]\nc = [i[2] * 10 for i in records]\nplt.plot(a, '-', label = 'Train Loss')\nplt.plot(b, '-', label = 'Validation Loss')\nplt.plot(c, '-', label = '10 * Accuracy')\nplt.legend()", "三、音乐生成\n我们运用训练好的LSTM来生成音符。首先把seed喂给LSTM并产生第n_prev + 1个msg,然后把这个msg加到输入数据的最后面,删除第一个元素\n这就又构成了一个标准的输入序列;然后再得到下一个msg,……,如此循环往复得到音符序列的生成", "# 生成3000步\npredict_steps = 3000\n\n# 初始时刻,将seed(一段种子音符,即我为开始读入的音乐文件)付给x\nx = seed\n# 将数据扩充为合适的形式\nx = np.expand_dims(x, axis = 0)\n# 现在的x的尺寸为:batch=1, time_step =30, data_dim = 229\n\nlstm.eval()\niniti = lstm.initHidden(1)\npredictions = []\n# 开始每一步的迭代\nfor i in range(predict_steps):\n # 根据前n_prev预测后面的一个音符\n xx = Variable(torch.FloatTensor(np.array(x, dtype = float)))\n preds = lstm(xx, initi)\n \n # 返回预测的note,velocity,time的模型预测概率对数\n a,b,c = preds\n # a的尺寸为:batch=1*data_dim=89, b为1*128,c为1*11\n \n # 将概率对数转化为随机的选择\n ind1 = torch.multinomial(a.view(-1).exp()) \n ind2 = torch.multinomial(b.view(-1).exp()) \n ind3 = torch.multinomial(c.view(-1).exp()) \n\n ind1 = ind1.data.numpy()[0] # 0-89中的整数\n ind2 = ind2.data.numpy()[0] # 0-128中的整数\n ind3 = ind3.data.numpy()[0] # 0-11中的整数\n \n # 将选择转换为正确的音符等数值,注意time分为11类,第一类为0这个特殊的类,其余按照区间放回去\n note = [ind1 + 24, ind2, 0 if ind3 ==0 else ind3 * interval + min_t]\n \n # 将预测的内容存储下来\n predictions.append(note)\n \n # 将新的预测内容再次转变为输入数据准备喂给LSTM\n slot = np.zeros(89 + 128 + 12, dtype = int)\n slot[ind1] = 1\n slot[89 + ind2] = 1\n slot[89 + 128 + ind3] = 1\n slot1 = np.expand_dims(slot, axis = 0)\n slot1 = np.expand_dims(slot1, axis = 0)\n \n #slot1的数据格式为:batch=1*time=1*data_dim=229\n \n # x拼接上新的数据\n x = np.concatenate((x, slot1), 1)\n # 现在x的尺寸为: batch_size = 1 * time_step = 31 * data_dim =229\n \n # 滑动窗口往前平移一次\n x = x[:, 1:, :]\n # 现在x的尺寸为:batch_size = 1 * time_step = 30 * data_dim = 229\n\n\n# 将生成的序列转化为MIDI的消息,并保存MIDI音乐\nmid = MidiFile()\ntrack = MidiTrack()\nmid.tracks.append(track)\n\nfor i, note in enumerate(predictions):\n # 在note一开始插入一个147表示打开note_on\n note = np.insert(note, 0, 147)\n # 
将整数转化为字节\n bytes = note.astype(int)\n # 创建一个message\n msg = Message.from_bytes(bytes[0:3]) \n # 0.001025为任意取值,可以调节音乐的速度。由于生成的time都是一系列的间隔时间,转化为msg后时间尺度过小,因此需要调节放大\n time = int(note[3]/0.001025)\n msg.time = time\n # 将message添加到音轨中\n track.append(msg)\n\n#保存文件\nmid.save('music/new_song.mid')\n###########################################" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ml4a/ml4a-guides
examples/reinforcement_learning/Reinforcement_Learning.ipynb
gpl-2.0
[ "Reinforcement Learning\nThis tutorial is a modified version of pytorch's own Reinforcement Learning tutorial.\nSimply put, reinforcement learning is the cyclical process of Reward = State * Action. In psychological terms, the process is most akin to positive reinforcement, in which a subject is given a reward for completing a simple goal in order to increase the likelihood of doing said goal. \nReinforcement learning is similar in that we intend to maximize a reward by promoting a given behavior, although we break down our ultimate goal into smaller actions, giving rewards along the way. Each action is made in response to a state, or the current conditions of the environment. \nBut how do we know the current condtions of the environment? Or put another way, how can we make a statistical model to optimize a reward? We use a neural network. The beauty in this is it also allows the AI to act on inference given knowledge of similar states, as an neural network can approximate any statistical model, including the that of action, state, and reward.\nLet's see this is in action (ba-dum-tss)...\nSetting Up Packages\nWe begin by collecting the pip package gym. Gym is a collection of environments made by OpenAI that can be used to train our neural network. In this case, we will build our model based on their atari clones. Specifically, Atari Breakout.\nIf not in colab, run pip install gym.\nNext we import our libraries:", "import gym\nimport math\nimport random\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom collections import namedtuple\nfrom itertools import count\nfrom PIL import Image\n\nimport torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport torch.nn.functional as F\nimport torchvision.transforms as T\n\n\nenv = gym.make('Breakout-v0').unwrapped\n\n# set up matplotlib\nis_ipython = 'inline' in matplotlib.get_backend()\nif is_ipython:\n from IPython import display\n\nplt.ion()\n\n# if gpu is to be used\ndevice = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", "Replay Memory\nReplay memory is our means of storing information about prior states and what was done to maximize rewards in a given state. This gives us the ability to compute inference of the current state faster, as well as have a successful model faster.\nFor this we need two objects:\n\nTransition: This is a named tuple that demonstrates to the AI what the concequences will be given the state and action.\nReplay Memory: A means of storing a sequence of transitions and sampling them at random for training.", "Transition = namedtuple('Transition',\n ('state', 'action', 'next_state', 'reward'))\n\n\nclass ReplayMemory(object):\n\n def __init__(self, capacity):\n self.capacity = capacity\n self.memory = []\n self.position = 0\n\n def push(self, *args):\n \"\"\"Saves a transition.\"\"\"\n if len(self.memory) < self.capacity:\n self.memory.append(None)\n self.memory[self.position] = Transition(*args)\n self.position = (self.position + 1) % self.capacity\n\n def sample(self, batch_size):\n return random.sample(self.memory, batch_size)\n\n def __len__(self):\n return len(self.memory)\n", "Q-Network\nThis is the main neural network we will be running. The most critical idea to take away from it is the number of outputs must equal the total number of possible number of moves for a game controller. This way, we can predict the action (or move) that gives the highest reward. 
In this case, our moves include left and right.", "class DQN(nn.Module):\n\n def __init__(self, h, w, outputs):\n super(DQN, self).__init__()\n # Input dimension of 1 because color doesn't really matter\n self.conv1 = nn.Conv2d(1, 16, kernel_size=5, stride=2)\n self.conv2 = nn.Conv2d(16, 32, kernel_size=5, stride=2)\n self.conv3 = nn.Conv2d(32, 64, kernel_size=5, stride=2)\n self.fc1 = nn.Linear(1472 * 17, 256)\n self.head = nn.Linear(256, 2)\n def forward(self, x):\n x = F.relu(self.conv1(x))\n x = F.relu(self.conv2(x))\n x = F.relu(self.conv3(x))\n x = (x.view(-1,1472 * 17))\n x = F.relu(self.fc1(x))\n return self.head(x.view(x.size(0), -1))\n", "Scene Extraction\nBelow is the code we use to grab the screen for input into the Q-Network.", "resize = T.Compose([T.ToPILImage(),\n T.Grayscale(),\n T.ToTensor()])\n\ndef get_screen():\n\n screen = env.render(mode=\"rgb_array\").transpose((2, 0, 1))\n # Convert to float, rescale, convert to torch tensor\n # (this doesn't require a copy)\n screen = np.ascontiguousarray(screen, dtype=np.float32) / 255\n screen = torch.from_numpy(screen)\n # Resize, and add a batch dimension (BCHW)\n return resize(screen).unsqueeze(0).to(device)\n\n# Extracted Scene\nenv.reset()\nplt.figure()\nplt.imshow(get_screen().cpu().squeeze(1).squeeze(0).numpy(), cmap=\"gray\")\nplt.title('Example extracted screen')\nplt.show()", "Utilities\nTo train our model, we will rely on both past experience and randomness. The element of randomness allows us to find new methods not previously discovered. This can be useful, but eventually, we want the element of randomness to decay.\nOur measurement of success is the length of time our player stays alive.", "BATCH_SIZE = 128\nGAMMA = 0.999\nEPS_START = 0.9\nEPS_END = 0.05\nEPS_DECAY = 200\nTARGET_UPDATE = 10\n\n# Get screen size so that we can initialize layers correctly based on shape\n# returned from AI gym. Typical dimensions at this point are close to 3x40x90\n# which is the result of a clamped and down-scaled render buffer in get_screen()\ninit_screen = get_screen()\n_, _, screen_height, screen_width = init_screen.shape\n\n# Get number of actions from gym action space\nn_actions = env.action_space.n\n\npolicy_net = DQN(screen_height, screen_width, n_actions).to(device)\ntarget_net = DQN(screen_height, screen_width, n_actions).to(device)\ntarget_net.load_state_dict(policy_net.state_dict())\ntarget_net.eval()\n\noptimizer = optim.RMSprop(policy_net.parameters())\nmemory = ReplayMemory(10000)\n\n\nsteps_done = 0\n\ndef select_action(state):\n global steps_done\n sample = random.random()\n eps_threshold = EPS_END + (EPS_START - EPS_END) * \\\n math.exp(-1. 
* steps_done / EPS_DECAY)\n steps_done += 1\n if sample > eps_threshold:\n with torch.no_grad():\n # t.max(1) will return largest column value of each row.\n # second column on max result is index of where max element was\n # found, so we pick action with the larger expected reward.\n \n return policy_net(state).max(1)[1].view(1, 1)\n else:\n return torch.tensor([[random.randrange(n_actions)]], device=device, dtype=torch.long)\n\n\nepisode_durations = []\n\n\ndef plot_durations():\n plt.figure(2)\n plt.clf()\n durations_t = torch.tensor(episode_durations, dtype=torch.float)\n plt.title('Training...')\n plt.xlabel('Episode')\n plt.ylabel('Duration')\n plt.plot(durations_t.numpy())\n # Take 100 episode averages and plot them too\n if len(durations_t) >= 100:\n means = durations_t.unfold(0, 100, 1).mean(1).view(-1)\n means = torch.cat((torch.zeros(99), means))\n plt.plot(means.numpy())\n\n plt.pause(0.001) # pause a bit so that plots are updated\n if is_ipython:\n display.clear_output(wait=True)\n display.display(plt.gcf())\n", "Training\nAt last, we train and optimize our model. For those curious, we use a different loss function known as Huber Loss as it reduces noise of outliers, behaving as different losses depending on the situation.", "def optimize_model():\n if len(memory) < BATCH_SIZE:\n return\n transitions = memory.sample(BATCH_SIZE)\n # Transpose the batch (see https://stackoverflow.com/a/19343/3343043 for\n # detailed explanation). This converts batch-array of Transitions\n # to Transition of batch-arrays.\n batch = Transition(*zip(*transitions))\n\n # Compute a mask of non-final states and concatenate the batch elements\n # (a final state would've been the one after which simulation ended)\n non_final_mask = torch.tensor(tuple(map(lambda s: s is not None,\n batch.next_state)), device=device, dtype=torch.bool)\n non_final_next_states = torch.cat([s for s in batch.next_state\n if s is not None])\n state_batch = torch.cat(batch.state)\n action_batch = torch.cat(batch.action)\n reward_batch = torch.cat(batch.reward)\n\n # Compute Q(s_t, a) - the model computes Q(s_t), then we select the\n # columns of actions taken. These are the actions which would've been taken\n # for each batch state according to policy_net\n state_action_values = policy_net(state_batch).gather(0, action_batch)\n\n # Compute V(s_{t+1}) for all next states.\n # Expected values of actions for non_final_next_states are computed based\n # on the \"older\" target_net; selecting their best reward with max(1)[0].\n # This is merged based on the mask, such that we'll have either the expected\n # state value or 0 in case the state was final.\n next_state_values = torch.zeros(BATCH_SIZE, device=device)\n next_state_values[non_final_mask] = target_net(non_final_next_states).max(1)[0].detach()\n # Compute the expected Q values\n expected_state_action_values = (next_state_values * GAMMA) + reward_batch\n\n # Compute Huber loss\n loss = F.smooth_l1_loss(state_action_values, expected_state_action_values.unsqueeze(1))\n\n # Optimize the model\n optimizer.zero_grad()\n loss.backward()\n for param in policy_net.parameters():\n param.grad.data.clamp_(-1, 1)\n optimizer.step()\n", "Finally, we enter the main training loop. We run through the episodes, and display the final one. 
Cograts on making it this far!", "# Similar to epochs\nnum_episodes = 10\nfor i_episode in range(num_episodes):\n # Initialize the environment and state\n env.reset()\n last_screen = get_screen()\n current_screen = get_screen()\n state = current_screen - last_screen\n for t in count():\n # Select and perform an action\n action = select_action(state)\n _, reward, done, _ = env.step(action.item())\n reward = torch.tensor([reward], device=device)\n\n # Observe new state\n last_screen = current_screen\n current_screen = get_screen()\n if not done:\n next_state = current_screen - last_screen\n else:\n next_state = None\n\n # Store the transition in memory\n memory.push(state, action, next_state, reward)\n\n # Move to the next state\n state = next_state\n\n # Perform one step of the optimization (on the target network)\n optimize_model()\n if done:\n episode_durations.append(t + 1)\n plot_durations()\n break\n # Update the target network, copying all weights and biases in DQN\n if i_episode % TARGET_UPDATE == 0:\n target_net.load_state_dict(policy_net.state_dict())\n print(i_episode)\nprint('Complete')\n\n\nenv.reset()\n\nfor i in range(120):\n plt.imshow(env.render(mode='rgb_array'))\n display.display(plt.gcf()) \n display.clear_output(wait=True)\n env.step(env.action_space.sample()) # take a random action\n\nenv.close()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
amcdawes/QMlabs
Chapter 6 - Spin.ipynb
mit
[ "Chapter 6 - Spin\nA few new operators (or new names for the same ones!)\nThe three axes, x, y, z spin components can be measured with $SA_x$, $SA_y$, and $SA_z$ devices.\nWe'll use $\\hbar=1$ for numerical results, this is fairly standard practice, but can be tricky to remember.", "from numpy import sin,cos,pi,sqrt\nfrom qutip import *\n\npz = Qobj([[1],[0]])\nmz = Qobj([[0],[1]])\npx = Qobj([[1/sqrt(2)],[1/sqrt(2)]])\nmx = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])\npy = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])\nmy = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])\nSx = 1/2.0*sigmax()\nSy = 1/2.0*sigmay()\nSz = 1/2.0*sigmaz()\n\npy", "Example: determine $P(S_x = \\frac{\\hbar}{2} ||-y\\rangle)$", "((px.dag()*my).norm())**2", "Example: verify the commutation relation: $\\left[\\hat{S}_x,\\hat{S}_z\\right] = -i\\hbar\\hat{S}_y$", "Sx*Sz - Sz*Sx == -1j*Sy # remember, h = 1", "Ex: find $\\langle \\hat{S}_x\\rangle$ for the state $|\\psi\\rangle=|+Z\\rangle$.", "pz.dag()*Sx*pz", "This makes sense given that $S_x$ can be either $\\frac{+\\hbar}{2}$ or $\\frac{-\\hbar}{2}$ with equal probability. Similarly, if the state is $|\\psi\\rangle=|+x\\rangle$.", "px.dag()*Sx*px", "Again, in units of $\\hbar$." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
IST256/learn-python
content/lessons/04-Iterations/SmallGroup-Iterations.ipynb
mit
[ "Now You Code In Class: Gathering Data and Reporting Statistics\nFor this in-class exercise we demonstrate a common programming pattern for reporting statistics on data in real time as it is input. This is used for data collection and real time analyics of time-series data. \nIn this example we will input temperatures ourselves, but same code could be used to read from an actual temperature sensor or any other type of sensor or data stream for that matter.\nWrite a program to input a series of temperatures in Celcius, until quit is entered (sentinel controlled indefinite loop). For each temperature input, we should output:\n\nthe count of temperatures recorded\nthe mean (average) temperature recorded\nthe max temperature recorded\nthe min temperature recorded\nthe count and percentage of temperatures above freezing\nthe count and percentage of temperatures at or below freezing\n\nExample Run:\nEnter temp: 100\n Count : 1\n Min : 100\n Mean : 100\n Max : 100\n Above 0 : 1 (100%)\n At/Below 0 : 0 (0%)\n\nEnter temp: 0\n Count : 2\n Min : 0\n Mean : 50\n Max : 100\n Above 0 : 1 (50%)\n At/Below 0 : 1 (50%)\n\nEnter temp: -100\n Count : 3\n Min : -100\n Mean : 0\n Max : 100\n Above 0 : 1 (33%)\n At/Below 0 : 2 (67%)\n\nEnter temp: quit\n\nSimplifying this problem\nThis can seem like a lot to figure out all at once. Once again we will use problem simplification to make this problem easier to code, taking several iterations and adding more features with each iteration. Typically we use problem simplification to constrain the inputs, but since this problem has only one input, instead we will constrain the outputs. \nFirst Iteration: Problem Analysis\nThe first version of this program will will ONLY output the count (number of temperature readings) for each temperature input.\nExample Run:\nEnter temp: 100\n Count : 1\nEnter temp: 0\n Count : 2\nEnter temp: -100\n Count : 3\nEnter temp: quit\n\nLoop-Building:\nWhat makes this program stop? (sentinel value)\nPROMPT 1\nWhat are we doing in the body of the loop?\nPROMPT 2\nAlgorithm (Steps in Program):\nPROMPT 3", "# PROMPT 4", "Second Iteration: Problem Analysis\nNext let's add a feature which calculates the mean. The mean is the average of the recorded temperatures so far.\nExample Run:\nEnter temp: 100\n Count : 1\n Mean : 100\n\nEnter temp: 0\n Count : 2\n Mean : 50\n\nEnter temp: -100\n Count : 3\n Mean : 0\n\nEnter temp: quit\n\nUnderstanding the Problem\nHow do we calculate a mean? E.g. what is the mean of 10,5,6 ?\nPROMPT 5\nAlgorithm (Steps in Program):\ncopy from PROMPT 3 and revise\nPROMPT 6", "# PROMPT 7 (copy prompt 4 add code)", "Third Iteration: Problem Analysis\nNext let's add a feature which calculates the minimum. 
This is a variable which should keep track of the lowest recorded temperature so far.\nExample Run:\nEnter temp: 100\n    Count      : 1\n    Min        : 100\n    Mean       : 100\n\nEnter temp: 0\n    Count      : 2\n    Min        : 0\n    Mean       : 50\n\nEnter temp: -100\n    Count      : 3\n    Min        : -100\n    Mean       : 0\n\nEnter temp: quit\n\nUnderstanding the Problem\nWhen given a freshly input temp and a current minimum, how do we know when temp should become the new minimum?\nPROMPT 8\nAlgorithm (Steps in Program):\ncopy from PROMPT 6 and revise\nPROMPT 9\nNote: We must initialize minimum to be the largest number possible, so float('inf') see: https://www.geeksforgeeks.org/python-infinity/", "# PROMPT 10 (copy prompt 7 add code )", "Final Iteration: Problem Analysis\nWe should have enough knowledge of how to do this to complete the rest of the program.\nUnderstanding the Problem\nWhen given a freshly input temp and a current maximum, how do we know when temp should become the new maximum?\nPROMPT 11\nHow do we keep a count of temperatures above freezing?\nPROMPT 12\nHow do we keep a count of temperatures at or below freezing?\nPROMPT 13\nAlgorithm (Steps in Program):\ncopy from PROMPT 9 and revise\nPROMPT 14\nNote: We must initialize maximum to be the smallest number possible, so float('-inf') see: https://www.geeksforgeeks.org/python-infinity/", "# PROMPT 15 (copy from prompt 10, final code!!!)\n\n\n# run this code to turn in your work!\nfrom coursetools.submission import Submission\nSubmission().submit_now()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
olgabot/cshl-singlecell-2017
notebooks/1.3_explore_gene_dropout_via_distance_correlation_linkage_clustering.ipynb
mit
[ "How does gene dropout affect my results?\nA key issue in single-cell anything-seq is that you're not capturing every single molecule that you want. Thus you have many more zeros in your data than you truly have. We will discuss:\n\nHow does gene dropout affect your interpretation of the results? \nWhat computational tricks can you use to avoid making conclusions that are a result of lack of data?\n\nTo be able to talk about this, we need to introduce some computational concepts. Here, we will talk about:\n\nCorrelation metrics\nDistance metrics\nClustering linkage method\nHierarchical clustering - agglomerative vs dismissive\n\nLet's get started! In the first code cell, we import modules we'll use", "# Alphabetical order for nonstandard python modules is conventional\n# We're doing \"import superlongname as abbrev\" for our laziness -- \n# -- this way we don't have to type out the whole thing each time.\n\n# Python plotting library\nimport matplotlib.pyplot as plt\n\n# Dataframes in Python\nimport pandas as pd\n\n# Statistical plotting library we'll use\nimport seaborn as sns\n# Use the visual style of plots that I prefer and use the \n# \"notebook\" context, which sets the default font and figure sizes\nsns.set(style='whitegrid')\n\n# This is necessary to show the plotted figures inside the notebook -- \"inline\" with the notebook cells\n%matplotlib inline\n\n# Import figure code for interactive widgets\nimport fig_code", "Correlation metrics\nSpearman correlation\nSpearman correlation`\nanswers the simple question, every time $x$ increases, does $y$ increase also? If yes, then spearman correlation = 1.\nMathematically speaking, Spearman tells you whether $x$ and $y$ increase monotonically together (but not necessarily linearly!!)\n\nPearson correlation\nPearson Correlation answers the question, every time my $x$ decreases by some amount $a$, does $y$ decrease by an amount proportional to that, say $10a$ or $0.5a$, and is this amount constant?\n$\\rho_{x,y} = \\frac{\\mathrm{cov}(\\vec{x}, \\vec{y})}{\\sigma_x, \\sigma_y}$\nMathematically speaking, pearson tells you whether $x$ and $y$ are linear to each other.\n\nSpearman vs Pearson\nSpearman's correlation is related to Pearson because Spearman \nSpearman correlation = Pearson correlation on the ranks of the data.\nAnscombe's quartet\nAnscombe's quartet is a group of four datasets that have nearly identical statistical properties that we'll use for exploring distance and correlation metrics.", "# Read the file - notice it is a URL. pandas can read either URLs or files on your computer\nanscombe = pd.read_csv(\"https://github.com/mwaskom/seaborn-data/raw/master/anscombe.csv\")\n\n# Say the variable name with no arguments to look at the data\nanscombe", "Let's use FacetGrid from seaborn to plot the data onto four axes, and plot the regression line `", "# Make a \"grid\" of plots based on the column name \"dataset\"\ng = sns.FacetGrid(anscombe, col='dataset')\n\n# Make a regression plot (regplot) using 'x' for the x-axis and 'y' for the y-axis\ng.map(sns.regplot, 'x', 'y')", "Below is a widget that calculates different summary statistics or distance metrics using Anscombe's quartet. It shows both a table and a barplot of the values. Play around with the different settings and discuss the questions below with your partner.", "fig_code.interact_anscombe()", "Discussion\nDiscuss the questions below while you play with the widgets.\n\nWhich metrics were nearly the same between all four datasets of Anscombe's quartet? 
Why?\nWhich metrics were different between all four datasets of Anscombe's quartet? Why?\nWhy do we use different summary statistics?\n\nDistance metrics: Euclidean vs Manhattan\nOne important point of how you decide two points are \"near\" each other is which distance metric you use.\n\nEuclidean distance is what you learned in algebra class: $d(x, y) = \\sqrt{x^2 + y^2}$, but all the way to $N$ dimensional vectors ($\\vec{x}, \\vec{y}$ represent $N$-dimensional vectors): $d(\\vec{x}, \\vec{y}) = \\sqrt{\\sum_i^N \\left(x_i -y_i\\right)^2}$\nManhattan distance (also called \"taxicab geometry\") is similar but no squares or square roots: $d(\\vec{x}, \\vec{y}) = \\sum_i^N |x_i - y_i|$\n\n\nClustering linkage methods: Ward, average, single, complete\n\n\nSingle: Compares shortest distance between clusters\nComplete: Compares largest distance between clusters\nAverage: Compares average distance between clusters\nWard: Compares how the addition of a new cluster increases the within-cluster variance\nCentroid: Compares centroid points of clusters\n\nsource: http://www.slideshare.net/neerajkaushik/cluster-analysis\nHierarchical clustering: Agglomerative vs Dismissive\nHierarchical clustering creates a ordered grouping of the cells (or genes, but today we're focusing on cells) based on how close they are. To create this grouping, you can either be \"top-down\" (dismissive) or \"bottom-up\" (agglomerative)\n\n\"Top-down\" means you start with one BIG cluster of all the cells, you remove one cell at a time, and the way you choose that cell is by using the one that is least similar to everyone else. You're basically kicking out the outcasts every time :(\n\"Bottom-up\" is much more inclusive - everyone starts as their own solo cluster, and then the two closest cells get merged together, then the next closest cell gets added to the growing cluster. This way. you're growing a big happy family.\n\nBelow is a diagram showing the steps of \"bottom-up\" (agglomerative) clustering on a small dataset. Notice that as you group points together, you add \"leaves\" to your \"tree\" -- yes these are the real words that are used! The diagram of lines on top of the ordered letters showing the clustering is called a \"dendrogram\" (\"tree diagram\").\n\nsource: https://www.researchgate.net/figure/273456906_fig3_Figure-4-Example-of-hierarchical-clustering-clusters-are-consecutively-merged-with-the\nDropout using Macosko2015 data\nTo explore the concepts of correlation, linkage, and distance metrics, we will be using a 300-cell, 259-gene subset of the Macosko 2015 data that contains 50 cells each from the following clusters:", "for cluster_id, name in fig_code.cluster_id_to_name.items():\n # The 'f' before the string means it's a \"format string,\"\n # which means it will read the variable names that exist\n # in your workspace. This is a very helpful and convient thing that was\n # just released in Python 3.6! (not available in Python 3.5)\n print('---')\n print(f'{cluster_id}: {name}')", "Here is a plot of the colors associated with each group", "fig_code.plot_color_legend()", "Below is another widget for you to play with. It lets you set different gene dropout thresholds (starts at 0 dropout), which will randomly remove (additional!) 
genes from the data.\nWe will be evaluating dropout by looking at the cell-cell correlations, measured by the correlation metric (starts at pearson), then will be clustered using hierarchical clustering using the distance metric and linkage method specified.", "fig_code.plot_dropout_interactive()", "Notes:\n\nThe ends of the branches are called \"leaves\" and this indicates how closely related the pairs of samples (or groups of samples) are related\nBy \"best clustering\", I mean where do you see the most biologically relevant information?\n\nDiscussion\nDiscuss the questions below while you play with the widgets.\nCorrelation methods\n\nCan you tell the difference between the correlations at 0% dropout? What about at 50%?\nWhich correlation method maintains the clustering from the paper even at higher dropouts?\nWhich correlation method made the best clusters, in your opininon?\nWhich one would you use for single-cell RNA seq data with lots of dropout?\n\nDistance metrics\n\nWhich distance metric produced the longest branch lengths? The shortest?\nWhich distance metric was most consistent with their clustering? Least consistent?\nWhich distance metric made the best clusters, in your opininon?\nWhich one would you use for single-cell RNA seq data with lots of dropout?\n\nLinkage methods\n\nWhich linkage method produced the longest branch lengths? The shortest?\nWhich linkage method was most consistent with their clustering? Least consistent?\nWhich linkage method made the best clusters, in your opininon?\nWhich one would you use for single-cell RNA seq data with lots of dropout?\n\nGeneral\n\nDid the cell type clustering from the paper always agree with the different methods/metrics? Why do you think that is?\nDo you think their clustering could have been improved? How?\nWhat influenced the clustering the most?\nGene dropout\nCorrelation method\nDistance metric\nLinkage method\n\n\n\nHow to make the clustered heatmap above\nNow we'll break down how to read the clustered heatmap we made above.\nLet's move on to 1.4_make_clustered_heatmap." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
GoogleCloudPlatform/ai-platform-samples
notebooks/samples/pytorch/lightning/TrainingAndPredictionWithPyTorchLightning.ipynb
apache-2.0
[ "# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "PyTorch Lightning Training\nThis notebook trains a model to predict whether the given sonar signals are bouncing off a metal cylinder or off a cylindrical rock from UCI Machine Learning Repository.\nThis notebook is derived from the PyTorch sample. It demonstrates how to perform the same task using PyTorch Lightning, a lightweight wrapper around PyTorch.\nThe notebook is intended to run within AI Platform Notebooks. The model will be trained within the notebook instance VM, optionally attached to GPUs or TPUs. With the following link, you can directly Open in AI Platform Notebooks. \nDataset\nThe Sonar Signals dataset that this sample uses for training is provided by the UC Irvine Machine Learning Repository. Google has hosted the data on a public GCS bucket gs://cloud-samples-data/ai-platform/sonar/sonar.all-data.\n\nsonar.all-data is split for both training and evaluation\n\nNote: Your typical development process with your own data would require you to upload your data to GCS so that you can access that data from inside your notebook. However, in this case, Google has put the data on GCS to avoid the steps of having you download the data from UC Irvine and then upload the data to GCS.\nDisclaimer\nThis dataset is provided by a third party. Google provides no representation, warranty, or other guarantees about the validity or any other aspects of this dataset.\n(Optional) TPU configuration\nTo use Cloud TPUs, first create a TPU node. Set the TPU software version to a matching PyTorch version (e.g. pytorch-1.7) and the Network to the same network used for your notebook instance (e.g. datalab-network).\nUncomment this section only if you are using TPUs. Note that you must be running this notebook on an XLA image such as pytorch-xla.1-7 for PyTorch to connect to Cloud TPUs. To use an XLA image, you can create a new notebook instance with the Environment set to Custom container and the Docker container image set to the XLA image location.\nIf you need a quota increase for Cloud TPUs, please review the Cloud TPU Quota Policy for more details.\nReview TPU configuration\nRun the gcloud command to review the available TPUs for the one you wish to use.\nMake note of the IP address (from NETWORK_ENDPOINT, without the port), and the # of TPU cores (derived from ACCELERATOR_TYPE). 
An ACCELERATOR_TYPE of v3-8 will indicate 8 TPU cores, for example.", "# !gcloud compute tpus list --zone=YOUR_ZONE_HERE_SUCH_AS_us-central1-b", "Update TPU configuration\nUpdate the IP address and cores variables here", "# tpu_ip_address='10.1.2.3'\n# tpu_cores=8", "Set TPU environment variables", "# # TPU configuration\n# %env XRT_TPU_CONFIG=tpu_worker;0;$tpu_ip_address:8470\n\n# # Use bfloat16\n# %env XLA_USE_BF16=1", "Install and import packages", "!pip install -U pytorch-lightning --quiet\n\nfrom pytorch_lightning.utilities.xla_device_utils import XLADeviceUtils\nif XLADeviceUtils.tpu_device_exists():\n import torch_xla # noqa: F401\n\nimport torch\nimport torch.nn as nn\nfrom torch.nn import functional as F\nfrom torch.utils.data import DataLoader, Dataset, random_split\n\nimport pandas as pd\nfrom google.cloud import storage\n\nfrom pytorch_lightning.core import LightningModule, LightningDataModule\nfrom pytorch_lightning.metrics.functional import accuracy\nfrom pytorch_lightning.trainer.trainer import Trainer", "Environment configuration", "_ = !nproc\ntpu_cores = tpu_cores if 'tpu_cores' in vars() else 0\nnum_cpus = int(_[0])\nnum_gpus = torch.cuda.device_count()\ndevice = torch.device('cuda') if num_gpus else 'cpu'\n\nprint(f'Device: {device}')\nprint(f'CPUs: {num_cpus}')\nprint(f'GPUs: {num_gpus}')\nprint(f'TPUs: {tpu_cores}')", "Download data", "# Public bucket holding data for samples\nBUCKET = 'cloud-samples-data'\n\n# Path to the directory inside the public bucket containing the sample data\nBUCKET_PATH = 'ai-platform/sonar/'\n\n# Sample data file\nFILE = 'sonar.all-data'\n\nbucket = storage.Client().bucket(BUCKET)\n\nblob = bucket.blob(BUCKET_PATH + FILE)\n\nblob.download_to_filename(FILE)", "Define the PyTorch Dataset", "class SonarDataset(Dataset):\n def __init__(self, csv_file):\n self.dataframe = pd.read_csv(csv_file, header=None)\n\n def __len__(self):\n return len(self.dataframe)\n\n def __getitem__(self, idx):\n # When iterating through the dataset get the features and targets\n features = self.dataframe.iloc[idx, :-1].values.astype(dtype='float64')\n\n # Convert the targets to binary values:\n # R = rock --> 0\n # M = mine --> 1\n target = self.dataframe.iloc[idx, -1:].values\n if target[0] == 'R':\n target[0] = 0\n elif target[0] == 'M':\n target[0] = 1\n target = target.astype(dtype='float64')\n\n # Load the data as a tensor\n data = {'features': torch.from_numpy(features),\n 'target': target}\n return data", "Define a data processing module\nIn this step, you will create a custom data module that extends LightningDataModule to encapsulate the data processing steps.", "class SonarDataModule(LightningDataModule):\n\n def __init__(self, bucket=BUCKET, bucket_path=BUCKET_PATH, file=FILE, batch_size=32, num_workers=0):\n super().__init__()\n\n self.batch_size = batch_size\n self.num_workers = num_workers\n self.bucket = bucket\n self.bucket_path = bucket_path\n self.file = file\n\n def prepare_data(self):\n # Public bucket holding the data\n bucket = storage.Client().bucket(self.bucket)\n\n # Path to the data inside the public bucket\n blob = bucket.blob(self.bucket_path + self.file)\n\n # Download the data\n blob.download_to_filename(self.file)\n\n def setup(self, stage=None):\n # Load the data\n sonar_dataset = SonarDataset(self.file)\n\n # Create indices for the split\n dataset_size = len(sonar_dataset)\n test_size = int(0.2 * dataset_size) # Use a test_split of 0.2\n val_size = int(0.2 * dataset_size) # Use a test_split of 0.2\n train_size = dataset_size - 
test_size - val_size\n\n # Assign train/test/val datasets for use in dataloaders\n self.sonar_train, self.sonar_val, self.sonar_test = random_split(sonar_dataset, [train_size, val_size, test_size])\n\n def train_dataloader(self):\n return DataLoader(self.sonar_train, batch_size=self.batch_size, num_workers=self.num_workers)\n\n def val_dataloader(self):\n return DataLoader(self.sonar_val, batch_size=self.batch_size, num_workers=self.num_workers)\n\n def test_dataloader(self):\n return DataLoader(self.sonar_test, batch_size=self.batch_size, num_workers=self.num_workers)\n\n\ndm = SonarDataModule(num_workers=num_cpus)", "Define a model\nNext, you will create a module that extends LightningModule. This module includes your model code and organizes steps of the model-building process.", "class SonarModel(LightningModule):\n\n def __init__(self):\n super().__init__()\n\n # Define PyTorch model\n self.model = nn.Sequential(\n nn.Linear(60, 60),\n nn.ReLU(),\n nn.Dropout(p=0.2),\n nn.Linear(60, 30),\n nn.ReLU(),\n nn.Dropout(p=0.2),\n nn.Linear(30, 1),\n nn.Sigmoid()\n )\n\n def forward(self, x):\n return self.model(x.float())\n\n def training_step(self, batch, batch_idx):\n x, y = batch['features'].float(), batch['target'].float()\n y_hat = self(x)\n\n loss = F.binary_cross_entropy(y_hat, y)\n return loss\n\n def validation_step(self, batch, batch_idx):\n x, y = batch['features'].float(), batch['target'].float()\n y_hat = self(x)\n\n loss = F.binary_cross_entropy(y_hat, y)\n\n # Binarize the output\n y_hat_binary = y_hat.round()\n acc = accuracy(y_hat_binary, y.int())\n\n # Log metrics for TensorBoard\n self.log('val_loss', loss, prog_bar=True)\n self.log('val_acc', acc, prog_bar=True)\n\n return loss\n\n def test_step(self, batch, batch_idx):\n # Reuse validation step\n return self.validation_step(batch, batch_idx)\n\n def configure_optimizers(self):\n return torch.optim.SGD(self.parameters(), lr=0.01, momentum=0.5, nesterov=False)\n\n\nmodel = SonarModel()", "Train and evaluate the model\nFinally, you will create and use a Trainer to build the model and evaluate its accuracy.\nThe trainer is initialized with an accelerator, with different options depending on your environment:\n* TPUs support only ddp, or distributed data parallel, and so accelerator cannot be specified. \n* GPUs support a variety of distributed modes. In this notebook, we are using dp for multiple GPUs on 1 machine.\n* CPUs can support ddp_cpu for multi-node CPU training. For multiple CPUs on one node, there is no speed increase from using this accelerator, and so the default of None is used in this notebook.", "epochs = 100\n\nif tpu_cores:\n trainer = Trainer(tpu_cores=tpu_cores, max_epochs=epochs)\nelif num_gpus:\n trainer = Trainer(gpus=num_gpus, accelerator='dp', max_epochs=epochs)\nelse:\n trainer = Trainer(max_epochs=epochs)\n\ntrainer.fit(model, dm)\n\ntrainer.test(datamodule=dm)", "Save and load a trained model\nThe following steps aren't required, but are shown for use in a production environment.\nFirst, we'll export the model to a file. Then, we'll load the model file (which isn't required in a notebook, because we already have a trained model). 
Finally, we'll set the model to evaluation mode (rather than train mode) for inference.", "torch.save(model.state_dict(), 'model.pt')\n\nmodel.load_state_dict(torch.load('model.pt'))\n\nmodel.eval()", "Predict with the model\nFinally, let's illustrate model inference, with set values as inputs:", "rock_feature = torch.tensor([[3.6800e-02, 4.0300e-02, 3.1700e-02, 2.9300e-02, 8.2000e-02, 1.3420e-01,\n 1.1610e-01, 6.6300e-02, 1.5500e-02, 5.0600e-02, 9.0600e-02, 2.5450e-01,\n 1.4640e-01, 1.2720e-01, 1.2230e-01, 1.6690e-01, 1.4240e-01, 1.2850e-01,\n 1.8570e-01, 1.1360e-01, 2.0690e-01, 2.1900e-02, 2.4000e-01, 2.5470e-01,\n 2.4000e-02, 1.9230e-01, 4.7530e-01, 7.0030e-01, 6.8250e-01, 6.4430e-01,\n 7.0630e-01, 5.3730e-01, 6.6010e-01, 8.7080e-01, 9.5180e-01, 9.6050e-01,\n 7.7120e-01, 6.7720e-01, 6.4310e-01, 6.7200e-01, 6.0350e-01, 5.1550e-01,\n 3.8020e-01, 2.2780e-01, 1.5220e-01, 8.0100e-02, 8.0400e-02, 7.5200e-02,\n 5.6600e-02, 1.7500e-02, 5.8000e-03, 9.1000e-03, 1.6000e-02, 1.6000e-02,\n 8.1000e-03, 7.0000e-03, 1.3500e-02, 6.7000e-03, 7.8000e-03, 6.8000e-03]], dtype=torch.float64, device=device)\nrock_prediction = model(rock_feature)\n\nmine_feature = torch.tensor([[5.9900e-02, 4.7400e-02, 4.9800e-02, 3.8700e-02, 1.0260e-01, 7.7300e-02,\n 8.5300e-02, 4.4700e-02, 1.0940e-01, 3.5100e-02, 1.5820e-01, 2.0230e-01,\n 2.2680e-01, 2.8290e-01, 3.8190e-01, 4.6650e-01, 6.6870e-01, 8.6470e-01,\n 9.3610e-01, 9.3670e-01, 9.1440e-01, 9.1620e-01, 9.3110e-01, 8.6040e-01,\n 7.3270e-01, 5.7630e-01, 4.1620e-01, 4.1130e-01, 4.1460e-01, 3.1490e-01,\n 2.9360e-01, 3.1690e-01, 3.1490e-01, 4.1320e-01, 3.9940e-01, 4.1950e-01,\n 4.5320e-01, 4.4190e-01, 4.7370e-01, 3.4310e-01, 3.1940e-01, 3.3700e-01,\n 2.4930e-01, 2.6500e-01, 1.7480e-01, 9.3200e-02, 5.3000e-02, 8.1000e-03,\n 3.4200e-02, 1.3700e-02, 2.8000e-03, 1.3000e-03, 5.0000e-04, 2.2700e-02,\n 2.0900e-02, 8.1000e-03, 1.1700e-02, 1.1400e-02, 1.1200e-02, 1.0000e-02]], dtype=torch.float64, device=device)\nmine_prediction = model(mine_feature)\n\nprint('Result Values: (Rock: 0) - (Mine: 1)\\n')\nprint(f'Rock Prediction:\\n\\t{\"Rock\" if rock_prediction <= 0.5 else \"Mine\"} - {rock_prediction.item()}')\nprint(f'Mine Prediction:\\n\\t{\"Rock\" if mine_prediction <= 0.5 else \"Mine\"} - {mine_prediction.item()}')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
AEW2015/PYNQ_PR_Overlay
Pynq-Z1/notebooks/examples/tracebuffer_spi.ipynb
bsd-3-clause
[ "Trace Buffer - Tracing SPI Transactions\nThe Trace_Buffer class can monitor the waveform and transations on PMODA, PMODB, and ARDUINO connectors.\nThis demo shows how to use this class to track SPI transactions. For this demo, users have to connect the Pmod OLED to PMODB.\nStep 1: Overlay Management\nUsers have to import all the necessary classes. Make sure to use the right bitstream.", "from pprint import pprint\nfrom time import sleep\nfrom pynq import PL\nfrom pynq import Overlay\nfrom pynq.drivers import Trace_Buffer\nfrom pynq.iop import Pmod_OLED\nfrom pynq.iop import PMODA\nfrom pynq.iop import PMODB\nfrom pynq.iop import ARDUINO\n\nol = Overlay(\"base.bit\")\nol.download()\npprint(PL.ip_dict)", "Step 2: Instantiating OLED\nAlthough this demo can also be done on PMODA, we use PMODB in this demo.", "oled = Pmod_OLED(PMODB)", "Step 3: Tracking Transactions\nInstantiating the trace buffer with SPI protocol. The SPI clock is controlled by the 100MHz IO Processor (IOP). The SPI clock period is 16 times the IOP clock rate based on the settings of the IOP SPI controller. Hence we set the sample rate to 20MHz. \nAfter starting the trace buffer DMA, also start to write some characters. Then stop the trace buffer DMA.", "tr_buf = Trace_Buffer(PMODB,\"spi\",samplerate=20000000)\n\n# Start the trace buffer\ntr_buf.start()\n\n# Write characters\noled.write(\"1 2 3 4 5 6\")\n\n# Stop the trace buffer\ntr_buf.stop()", "Step 4: Parsing and Decoding Transactions\nThe trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script). The input arguments for the parsing method is:\n * start : the starting sample number of the trace.\n * stop : the stopping sample number of the trace.\n * tri_sel: masks for tri-state selection bits.\n * tri_0: masks for pins selected when the corresponding tri_sel = 0.\n * tri_0: masks for pins selected when the corresponding tri_sel = 1.\n * mask: mask for pins selected always.\nFor PMODA, the configuration of the masks can be:\n * tri_sel = [0x80000,0x40000,0x20000,0x10000]\n * tri_0 = [0x8,0x4,0x2,0x1]\n * tri_1 = [0x800,0x400,0x200,0x100]\n * mask = 0x0\nThen the trace buffer object can also decode the transactions using the open-source sigrok decoders. The decoded file (*.pd) is saved into the same folder as this script.\nReference:\nhttps://sigrok.org/wiki/Main_Page", "# Configuration for PMODB\nstart = 20000\nstop = 40000\ntri_sel = [0x80000<<32,0x40000<<32,0x20000<<32,0x10000<<32]\ntri_0 = [0x8<<32,0x4<<32,0x2<<32,0x1<<32]\ntri_1 = [0x800<<32,0x400<<32,0x200<<32,0x100<<32]\nmask = 0x0\n\n# Parsing and decoding\ntr_buf.parse(\"spi_trace.csv\",\n start,stop,mask,tri_sel,tri_0,tri_1)\ntr_buf.set_metadata(['CLK','NC','MOSI','CS'])\ntr_buf.decode(\"spi_trace.pd\",\n options=':wordsize=8:cpol=0:cpha=0')", "Step 5: Displaying the Result\nThe final waveform and decoded transactions are shown using the open-source wavedrom library. The two input arguments (s0 and s1 ) indicate the starting and stopping location where the waveform is shown. \nThe valid range for s0 and s1 is: 0 &lt; s0 &lt; s1 &lt; (stop-start), where start and stop are defined in the last step.\nReference:\nhttps://www.npmjs.com/package/wavedrom", "s0 = 10000\ns1 = 15000\ntr_buf.display(s0,s1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
probml/pyprobml
notebooks/misc/linreg_hierarchical_pymc3.ipynb
mit
[ "<a href=\"https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/linreg_hierarchical_pymc3.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nHierarchical Bayesian Linear Regression in PyMC3\nThe text and code for this notebook are taken directly from this blog post by Thomas Wiecki and Danne Elbers. Original notebook.\nGelman et al.'s (2007) radon dataset is a classic for hierarchical modeling. In this dataset the amount of the radioactive gas radon has been measured among different households in all county's of several states. Radon gas is known to be the highest cause of lung cancer in non-smokers. It is believed to enter the house through the basement. Moreover, its concentration is thought to differ regionally due to different types of soil.\nHere we'll investigate this difference and try to make predictions of radon levels in different countys and where in the house radon was measured. In this example we'll look at Minnesota, a state that contains 85 county's in which different measurements are taken, ranging from 2 till 80 measurements per county. \nFirst, we'll load the data:", "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pymc3 as pm\nimport pandas as pd\n\nurl = \"https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/radon.csv?raw=true\"\ndata = pd.read_csv(url)\n\ncounty_names = data.county.unique()\ncounty_idx = data[\"county_code\"].values\n\n!pip install arviz\nimport arviz", "The relevant part of the data we will model looks as follows:", "data[[\"county\", \"log_radon\", \"floor\"]].head()", "As you can see, we have multiple radon measurements (log-converted to be on the real line) in a county and whether the measurement has been taken in the basement (floor == 0) or on the first floor (floor == 1). Here we want to test the prediction that radon concentrations are higher in the basement.\nThe Models\nPooling of measurements\nNow you might say: \"That's easy! I'll just pool all my data and estimate one big regression to asses the influence of measurement across all counties\". In math-speak that model would be:\n$$radon_{i, c} = \\alpha + \\beta*\\text{floor}_{i, c} + \\epsilon$$ \nWhere $i$ represents the measurement, $c$ the county and floor contains which floor the measurement was made. If you need a refresher on Linear Regressions in PyMC3, check out my previous blog post. Critically, we are only estimating one intercept and one slope for all measurements over all counties.\nSeparate regressions\nBut what if we are interested whether different counties actually have different relationships (slope) and different base-rates of radon (intercept)? Then you might say \"OK then, I'll just estimate $n$ (number of counties) different regresseions -- one for each county\". In math-speak that model would be:\n$$radon_{i, c} = \\alpha_{c} + \\beta_{c}*\\text{floor}_{i, c} + \\epsilon_c$$\nNote that we added the subindex $c$ so we are estimating $n$ different $\\alpha$s and $\\beta$s -- one for each county.\nThis is the extreme opposite model, where above we assumed all counties are exactly the same, here we are saying that they share no similarities whatsoever which ultimately is also unsatisifying.\nHierarchical Regression: The best of both worlds\nFortunately there is a middle ground to both of these extreme views. 
Specifically, we may assume that while $\\alpha$s and $\\beta$s are different for each county, the coefficients all come from a common group distribution:\n$$\\alpha_{c} \\sim \\mathcal{N}(\\mu_{\\alpha}, \\sigma_{\\alpha}^2)$$\n$$\\beta_{c} \\sim \\mathcal{N}(\\mu_{\\beta}, \\sigma_{\\beta}^2)$$\nWe thus assume the intercepts $\\alpha$ and slopes $\\beta$ to come from a normal distribution centered around their respective group mean $\\mu$ with a certain standard deviation $\\sigma^2$, the values (or rather posteriors) of which we also estimate. That's why this is called multilevel or hierarchical modeling.\nHow do we estimate such a complex model with all these parameters you might ask? Well, that's the beauty of Probabilistic Programming -- we just formulate the model we want and press our Inference Button(TM). \nNote that the above is not a complete Bayesian model specification as we haven't defined priors or hyperpriors (i.e. priors for the group distribution, $\\mu$ and $\\sigma$). These will be used in the model implementation below but only distract here.\nProbabilistic Programming\nIndividual/non-hierarchical model\nTo really highlight the effect of the hierarchical linear regression we'll first estimate the non-hierarchical Bayesian model from above (separate regressions). For each county a new estimate of the parameters is initiated. As we have no prior information on what the intercept or regressions could be we are placing a Normal distribution centered around 0 with a wide standard-deviation. We'll assume the measurements are normally distributed with noise $\\epsilon$ on which we place a Half-Cauchy distribution.", "# takes about 45 minutes\nindiv_traces = {}\nfor county_name in county_names:\n # Select subset of data belonging to county\n c_data = data.loc[data.county == county_name]\n c_data = c_data.reset_index(drop=True)\n\n c_log_radon = c_data.log_radon\n c_floor_measure = c_data.floor.values\n\n with pm.Model() as individual_model:\n # Intercept prior\n a = pm.Normal(\"alpha\", mu=0, sigma=1)\n # Slope prior\n b = pm.Normal(\"beta\", mu=0, sigma=1)\n\n # Model error prior\n eps = pm.HalfCauchy(\"eps\", beta=1)\n\n # Linear model\n radon_est = a + b * c_floor_measure\n\n # Data likelihood\n y_like = pm.Normal(\"y_like\", mu=radon_est, sigma=eps, observed=c_log_radon)\n\n # Inference button (TM)!\n trace = pm.sample(progressbar=False)\n\n indiv_traces[county_name] = trace", "Hierarchical Model\nInstead of initiating the parameters separatly, the hierarchical model initiates group parameters that consider the county's not as completely different but as having an underlying similarity. 
These distributions are subsequently used to influence the distribution of each county's $\\alpha$ and $\\beta$.", "with pm.Model() as hierarchical_model:\n    # Hyperpriors\n    mu_a = pm.Normal(\"mu_alpha\", mu=0.0, sigma=1)\n    sigma_a = pm.HalfCauchy(\"sigma_alpha\", beta=1)\n    mu_b = pm.Normal(\"mu_beta\", mu=0.0, sigma=1)\n    sigma_b = pm.HalfCauchy(\"sigma_beta\", beta=1)\n\n    # Intercept for each county, distributed around group mean mu_a\n    a = pm.Normal(\"alpha\", mu=mu_a, sigma=sigma_a, shape=len(data.county.unique()))\n    # Slope for each county, distributed around group mean mu_b\n    b = pm.Normal(\"beta\", mu=mu_b, sigma=sigma_b, shape=len(data.county.unique()))\n\n    # Model error\n    eps = pm.HalfCauchy(\"eps\", beta=1)\n\n    # Expected value\n    radon_est = a[county_idx] + b[county_idx] * data.floor.values\n\n    # Data likelihood\n    y_like = pm.Normal(\"y_like\", mu=radon_est, sigma=eps, observed=data.log_radon)\n\nwith hierarchical_model:\n    hierarchical_trace = pm.sample()\n\npm.traceplot(hierarchical_trace);\n\npm.traceplot(hierarchical_trace, var_names=[\"alpha\", \"beta\"])", "The marginal posteriors in the left column are highly informative. mu_a tells us the group mean (log) radon levels. mu_b tells us that the slope is significantly negative (no mass above zero), meaning that radon concentrations are higher in the basement than first floor. We can also see by looking at the marginals for a that there are quite some differences in radon levels between counties; the different widths are related to how many measurements we have per county, the more, the higher our confidence in that parameter estimate.\n<div class=\"alert alert-warning\">\n\nAfter writing this blog post I found out that the chains here (which look worse after I just re-ran them) are not properly converged, you can see that best for `sigma_beta` but also the warnings about \"diverging samples\" (which are also new in PyMC3). If you want to learn more about the problem and its solution, see my more recent blog post <a href='https://twiecki.github.io/blog/2017/02/08/bayesian-hierchical-non-centered/'>\"Why hierarchical models are awesome, tricky, and Bayesian\"</a>.\n\n</div>\n\nPosterior Predictive Check\nThe Root Mean Square Deviation\nTo find out which of the models works better we can calculate the Root Mean Square Deviation (RMSD). This posterior predictive check revolves around recreating the data based on the parameters found at different moments in the chain. The recreated or predicted values are subsequently compared to the real data points, the model that predicts data points closer to the original data is considered the better one. Thus, the lower the RMSD the better.\nWhen computing the RMSD (code not shown) we get the following result:\n\nindividual/non-hierarchical model: 0.13\nhierarchical model: 0.08\n\nAs can be seen above the hierarchical model performs a lot better than the non-hierarchical model in predicting the radon values. 
Following this, we'll plot some examples of county's showing the true radon values, the hierarchial predictions and the non-hierarchical predictions.", "selection = [\"CASS\", \"CROW WING\", \"FREEBORN\"]\nfig, axis = plt.subplots(1, 3, figsize=(12, 6), sharey=True, sharex=True)\naxis = axis.ravel()\nfor i, c in enumerate(selection):\n c_data = data.loc[data.county == c]\n c_data = c_data.reset_index(drop=True)\n z = list(c_data[\"county_code\"])[0]\n\n xvals = np.linspace(-0.2, 1.2)\n for a_val, b_val in zip(indiv_traces[c][\"alpha\"][::10], indiv_traces[c][\"beta\"][::10]):\n axis[i].plot(xvals, a_val + b_val * xvals, \"b\", alpha=0.05)\n axis[i].plot(\n xvals,\n indiv_traces[c][\"alpha\"][::10].mean() + indiv_traces[c][\"beta\"][::10].mean() * xvals,\n \"b\",\n alpha=1,\n lw=2.0,\n label=\"individual\",\n )\n for a_val, b_val in zip(hierarchical_trace[\"alpha\"][::10][z], hierarchical_trace[\"beta\"][::10][z]):\n axis[i].plot(xvals, a_val + b_val * xvals, \"g\", alpha=0.05)\n axis[i].plot(\n xvals,\n hierarchical_trace[\"alpha\"][::10][z].mean() + hierarchical_trace[\"beta\"][::10][z].mean() * xvals,\n \"g\",\n alpha=1,\n lw=2.0,\n label=\"hierarchical\",\n )\n axis[i].scatter(\n c_data.floor + np.random.randn(len(c_data)) * 0.01,\n c_data.log_radon,\n alpha=1,\n color=\"k\",\n marker=\".\",\n s=80,\n label=\"original data\",\n )\n axis[i].set_xticks([0, 1])\n axis[i].set_xticklabels([\"basement\", \"first floor\"])\n axis[i].set_ylim(-1, 4)\n axis[i].set_title(c)\n if not i % 3:\n axis[i].legend()\n axis[i].set_ylabel(\"log radon level\")", "In the above plot we have the data points in black of three selected counties. The thick lines represent the mean estimate of the regression line of the individual (blue) and hierarchical model (in green). The thinner lines are regression lines of individual samples from the posterior and give us a sense of how variable the estimates are.\nWhen looking at the county 'CASS' we see that the non-hierarchical estimation has huge uncertainty about the radon levels of first floor measurements -- that's because we don't have any measurements in this county. The hierarchical model, however, is able to apply what it learned about the relationship between floor and radon-levels from other counties to CASS and make sensible predictions even in the absence of measurements.\nWe can also see how the hierarchical model produces more robust estimates in 'CROW WING' and 'FREEBORN'. In this regime of few data points the non-hierarchical model reacts more strongly to individual data points because that's all it has to go on. \nHaving the group-distribution constrain the coefficients we get meaningful estimates in all cases as we apply what we learn from the group to the individuals and vice-versa.\nShrinkage\nShrinkage describes the process by which our estimates are \"pulled\" towards the group-mean as a result of the common group distribution -- county-coefficients very far away from the group mean have very low probability under the normality assumption. 
In the non-hierachical model every county is allowed to differ completely from the others by just using each county's data, resulting in a model more prone to outliers (as shown above).", "hier_a = hierarchical_trace[\"alpha\"].mean(axis=0)\nhier_b = hierarchical_trace[\"beta\"].mean(axis=0)\nindv_a = [indiv_traces[c][\"alpha\"].mean() for c in county_names]\nindv_b = [indiv_traces[c][\"beta\"].mean() for c in county_names]\n\nfig = plt.figure(figsize=(10, 10))\nax = fig.add_subplot(\n 111,\n xlabel=\"Intercept\",\n ylabel=\"Floor Measure\",\n title=\"Hierarchical vs. Non-hierarchical Bayes\",\n xlim=(0.25, 2),\n ylim=(-2, 1.5),\n)\n\nax.scatter(indv_a, indv_b, s=26, alpha=0.4, label=\"non-hierarchical\")\nax.scatter(hier_a, hier_b, c=\"red\", s=26, alpha=0.4, label=\"hierarchical\")\nfor i in range(len(indv_b)):\n ax.arrow(\n indv_a[i],\n indv_b[i],\n hier_a[i] - indv_a[i],\n hier_b[i] - indv_b[i],\n fc=\"k\",\n ec=\"k\",\n length_includes_head=True,\n alpha=0.4,\n head_width=0.02,\n )\nax.legend();", "In the shrinkage plot above we show the coefficients of each county's non-hierarchical posterior mean (blue) and the hierarchical posterior mean (red). To show the effect of shrinkage on a single coefficient-pair (alpha and beta) we connect the blue and red points belonging to the same county by an arrow. Some non-hierarchical posteriors are so far out that we couldn't display them in this plot (it makes the axes to wide). Interestingly, all hierarchical posteriors of the floor-measure seem to be around -0.6 confirming out prediction that radon levels are higher in the basement than in the first floor. The differences in intercepts (which we take for type of soil) differs among countys indicating that meaningful regional differences exist in radon concentration. This information would have been difficult to find when just the non-hierarchial model had been used and estimates for individual counties would have been much more noisy.\nSummary\nIn this post, co-authored by Danne Elbers, we showed how a multi-level hierarchical Bayesian model gives the best of both worlds when we have multiple sets of measurements we expect to have similarity. The naive approach either pools all data together and ignores the individual differences, or treats each set as completely separate leading to noisy estimates as shown above. By placing a group distribution on the individual sets we can learn about each set and the group simultaneously. Probabilistic Programming in PyMC then makes Bayesian estimation of this model trivial.\nReferences\n\nThe Inference Button: Bayesian GLMs made easy with PyMC3\nThis world is far from Normal(ly distributed): Bayesian Robust Regression in PyMC3 \nChris Fonnesbeck repo containing a more extensive analysis\nShrinkage in multi-level hierarchical models by John Kruschke\nGelman, A.; Carlin; Stern; and Rubin, D., 2007, \"Replication data for: Bayesian Data Analysis, Second Edition\", \nGelman, A., & Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models (1st ed.). Cambridge University Press.\nGelman, A. (2006). Multilevel (Hierarchical) modeling: what it can and cannot do. Technometrics, 48(3), 432–435." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mercybenzaquen/foundations-homework
foundations_hw/05/.ipynb_checkpoints/Homework5_NYT-checkpoint.ipynb
mit
[ "What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?", "#my IPA key b577eb5b46ad4bec8ee159c89208e220\n#base url http://api.nytimes.com/svc/books/{version}/lists\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-05-10&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\nprint(best_seller.keys())\n\n\nprint(type(best_seller))\n\nprint(type(best_seller['results']))\n\nprint(len(best_seller['results']))\n\nprint(best_seller['results'][0])\n\nmother_best_seller_results_2009 = best_seller['results']\n\nfor item in mother_best_seller_results_2009:\n print(\"This books ranks #\", item['rank'], \"on the list\") #just to make sure they are in order\n for book in item['book_details']:\n print(book['title'])\n \n \n\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2009 were:\")\nfor item in mother_best_seller_results_2009:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-05-09&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller_2010 = response.json()\nprint(best_seller.keys())\n\nprint(best_seller_2010['results'][0])\n\nmother_best_seller_2010_results = best_seller_2010['results']\n\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2010 were:\")\nfor item in mother_best_seller_2010_results:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-06-21&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\nfather_best_seller_results_2009 = best_seller['results']\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2009 were:\")\nfor item in father_best_seller_results_2009:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n \n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-06-20&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\nfather_best_seller_results_2010 = best_seller['results']\nprint(\"The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2010 were:\")\nfor item in father_best_seller_results_2010:\n if item['rank']< 4: #to get top 3 books on the list\n for book in item['book_details']:\n print(book['title'])\n ", "2) What are all the different book categories the NYT ranked in June 6, 2009? 
How about June 6, 2015?", "import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\n\n\nprint(best_seller.keys())\n\nprint(len(best_seller['results']))\n\nbook_categories_2009 = best_seller['results']\n\n\nfor item in book_categories_2009:\n print(item['display_name'])\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2015-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nbest_seller = response.json()\n\nprint(len(best_seller['results']))\n\n\nbook_categories_2015 = best_seller['results']\nfor item in book_categories_2015:\n print(item['display_name'])", "3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names?\nTip: Add \"Libya\" to your search to make sure (-ish) you're talking about the right guy.", "import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\ngadafi = response.json()\n\nprint(gadafi.keys())\nprint(gadafi['response'])\nprint(gadafi['response'].keys())\nprint(gadafi['response']['docs']) #so no results for GADAFI. \n\nprint('The New York times has not used the name Gadafi to refer to Muammar Gaddafi')\n\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\ngaddafi = response.json()\n\nprint(gaddafi.keys())\nprint(gaddafi['response'].keys())\nprint(type(gaddafi['response']['meta']))\nprint(gaddafi['response']['meta'])\n\nprint(\"'The New York times used the name Gaddafi to refer to Muammar Gaddafi\", gaddafi['response']['meta']['hits'], \"times\")\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nkadafi = response.json()\nprint(kadafi.keys())\nprint(kadafi['response'].keys())\nprint(type(kadafi['response']['meta']))\nprint(kadafi['response']['meta'])\n\n\nprint(\"'The New York times used the name Kadafi to refer to Muammar Gaddafi\", kadafi['response']['meta']['hits'], \"times\")\n\nimport requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nqaddafi = response.json()\n\nprint(qaddafi.keys())\nprint(qaddafi['response'].keys())\nprint(type(qaddafi['response']['meta']))\nprint(qaddafi['response']['meta'])\n\nprint(\"'The New York times used the name Qaddafi to refer to Muammar Gaddafi\", qaddafi['response']['meta']['hits'], \"times\")", "4) What's the title of the first story to mention the word 'hipster' in 1995? 
What's the first paragraph?", "import requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19953112&sort=oldest&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nhipster = response.json()\n\n\nprint(hipster.keys())\nprint(hipster['response'].keys())\nprint(hipster['response']['docs'][0])\nhipster_info= hipster['response']['docs']\n\n\nprint('These articles all had the word hipster in them and were published in 1995') #ordered from oldest to newest\nfor item in hipster_info:\n print(item['headline']['main'], item['pub_date'])\n\nfor item in hipster_info:\n if item['headline']['main'] == \"SOUND\":\n \n print(\"This is the first article to mention the word hispter in 1995 and was titled:\", item['headline']['main'],\"and it was publised on:\", item['pub_date'])\n print(\"This is the lead paragraph of\", item['headline']['main'],item['lead_paragraph'])\n ", "5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-2099, 2000-2009, and 2010-present?", "import requests\nresponse = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q=\"gay marriage\"&begin_date=19500101&end_date=19593112&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmarriage_1959 = response.json()\n\nprint(marriage_1959.keys())\nprint(marriage_1959['response'].keys())\nprint(marriage_1959['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1959['response']['meta']['hits'], \"between 1950-1959\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19600101&end_date=19693112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1969 = response.json()\n\nprint(marriage_1969.keys())\nprint(marriage_1969['response'].keys())\nprint(marriage_1969['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1969['response']['meta']['hits'], \"between 1960-1969\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19700101&end_date=19783112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1978 = response.json()\n\nprint(marriage_1978.keys())\nprint(marriage_1978['response'].keys())\nprint(marriage_1978['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1978['response']['meta']['hits'], \"between 1970-1978\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19800101&end_date=19893112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_1989 = response.json()\n\nprint(marriage_1989.keys())\nprint(marriage_1989['response'].keys())\nprint(marriage_1989['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_1989['response']['meta']['hits'], \"between 1980-1989\")\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19900101&end_date=20003112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2000 = response.json()\n\nprint(marriage_2000.keys())\nprint(marriage_2000['response'].keys())\nprint(marriage_2000['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2000['response']['meta']['hits'], \"between 1990-2000\")\n\nimport requests\nresponse = 
requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20000101&end_date=20093112&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2009 = response.json()\n\nprint(marriage_2009.keys())\nprint(marriage_2009['response'].keys())\nprint(marriage_2009['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2009['response']['meta']['hits'], \"between 2000-2009\")\n\n\nimport requests\nresponse = requests.get(\"https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20100101&end_date=20160609&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmarriage_2016 = response.json()\n\nprint(marriage_2016.keys())\nprint(marriage_2016['response'].keys())\nprint(marriage_2016['response']['meta'])\nprint(\"___________\")\nprint(\"Gay marriage was mentioned\", marriage_2016['response']['meta']['hits'], \"between 2010-present\")\n\n", "6) What section talks about motorcycles the most?\nTip: You'll be using facets", "import requests\nresponse = requests.get(\"http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=b577eb5b46ad4bec8ee159c89208e220\")\nmotorcycles = response.json()\n\n\n\nprint(motorcycles.keys())\n\nprint(motorcycles['response'].keys())\n\nprint(motorcycles['response']['facets']['section_name']['terms'])\n\nmotorcycles_info= motorcycles['response']['facets']['section_name']['terms']\nprint(motorcycles_info)\nprint(\"These are the sections that talk the most about motorcycles:\")\nprint(\"_________________\")\nfor item in motorcycles_info:\n print(\"The\",item['term'],\"section mentioned motorcycle\", item['count'], \"times\")\n\n\nmotorcycle_info= motorcycles['response']['facets']['section_name']['terms']\nmost_motorcycle_section = 0\nsection_name = \"\" \nfor item in motorcycle_info:\n if item['count']>most_motorcycle_section:\n most_motorcycle_section = item['count']\n section_name = item['term']\n\nprint(section_name, \"is the sections that talks the most about motorcycles, with\", most_motorcycle_section, \"mentions of the word\")", "7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?\nTip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? 
Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.", "import requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_20 = response.json()\n\nprint(movies_reviews_20.keys())\n\n\n\n\nprint(movies_reviews_20['results'][0])\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in movies_reviews_20['results']:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 20 revies by the NYT\")\n\nimport requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_40 = response.json()\n\nprint(movies_reviews_40.keys())\n\n\nimport requests\nresponse = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=b577eb5b46ad4bec8ee159c89208e220')\nmovies_reviews_60 = response.json()\n\nprint(movies_reviews_60.keys())\n\nnew_medium_list = movies_reviews_20['results'] + movies_reviews_40['results']\n\nprint(len(new_medium_list))\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in new_medium_list:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 40 revies by the NYT\")\n\nnew_big_list = movies_reviews_20['results'] + movies_reviews_40['results'] + movies_reviews_60['results']\n\nprint(new_big_list[0])\n\nprint(len(new_big_list))\n\ncritics_pick = 0\nnot_a_critics_pick = 0\nfor item in new_big_list:\n print(item['display_title'], item['critics_pick'])\n if item['critics_pick'] == 1:\n print(\"-------------CRITICS PICK!\")\n critics_pick = critics_pick + 1\n else:\n print(\"-------------NOT CRITICS PICK!\")\n not_a_critics_pick = not_a_critics_pick + 1\nprint(\"______________________\") \nprint(\"There were\", critics_pick, \"critics picks in the last 60 revies by the NYT\")", "8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?", "medium_list = movies_reviews_20['results'] + movies_reviews_40['results']\nprint(type(medium_list))\nprint(medium_list[0])\nfor item in medium_list:\n print(item['byline'])\n \n\n\nall_critics = []\nfor item in medium_list:\n all_critics.append(item['byline'])\nprint(all_critics)\n\nunique_medium_list = set(all_critics)\nprint(unique_medium_list)\n\nprint(\"___________________________________________________\")\n\nprint(\"This is a list of the authors who have written the NYT last 40 movie reviews, in descending order:\")\nfrom collections import Counter\ncount = Counter(all_critics)\nprint(count)\n\nprint(\"___________________________________________________\")\n\n\nprint(\"This is a list of the top 3 authors who have written the NYT last 40 movie reviews:\")\ncount.most_common(3)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
fluxcapacitor/source.ml
jupyterhub.ml/notebooks/train_deploy/spark/spark_census/01_TrainModel.ipynb
apache-2.0
[ "Train Model\nConfigure Spark for Your Notebook\n\nThis examples uses the local Spark Master --master local[1]\nIn production, you would use the PipelineIO Spark Master --master spark://apachespark-master-2-1-0:7077", "import os\n\nmaster = '--master local[1]'\n#master = '--master spark://apachespark-master-2-1-0:7077'\nconf = '--conf spark.cores.max=1 --conf spark.executor.memory=512m'\npackages = '--packages com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.1'\njars = '--jars /root/lib/jpmml-sparkml-package-1.0-SNAPSHOT.jar'\npy_files = '--py-files /root/lib/jpmml.py'\n\nos.environ['PYSPARK_SUBMIT_ARGS'] = master \\\n + ' ' + conf \\\n + ' ' + packages \\\n + ' ' + jars \\\n + ' ' + py_files \\\n + ' ' + 'pyspark-shell'\n\nprint(os.environ['PYSPARK_SUBMIT_ARGS'])", "Import Spark Libraries", "from pyspark.ml import Pipeline\nfrom pyspark.ml.feature import RFormula\nfrom pyspark.ml.classification import DecisionTreeClassifier\nfrom pyspark import SparkConf, SparkContext\nfrom pyspark.sql.context import SQLContext", "Create Spark Session\nThis may take a minute or two. Please be patient.", "from pyspark.sql import SparkSession\n\nspark_session = SparkSession.builder.getOrCreate()", "Read Data from Public S3 Bucket\n\nAWS credentials are not needed.\nWe're asking Spark to infer the schema\nThe data has a header\nUsing bzip2 because it's a splittable compression file format", "df = spark_session.read.format(\"csv\") \\\n .option(\"inferSchema\", \"true\").option(\"header\", \"true\") \\\n .load(\"s3a://datapalooza/R/census.csv\")\n\ndf.head()\n\nprint(df.count())", "Create and Train Spark ML Pipeline", "formula = RFormula(formula = \"income ~ .\")\nclassifier = DecisionTreeClassifier()\n\npipeline = Pipeline(stages = [formula, classifier])\npipeline_model = pipeline.fit(df)\nprint(pipeline_model)", "Export the Spark ML Pipeline", "from jpmml import toPMMLBytes\n\nmodel = toPMMLBytes(spark_session, df, pipeline_model)\nwith open('model.spark', 'wb') as fh:\n fh.write(model)\n \n\n!ls -al model.spark" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
baifan-wang/structural-bioinformatics_in_python
docs/Introduction.ipynb
gpl-3.0
[ "Overall layout of a Molecule object.\nThe Molecule object has the following architecture:\n* A Molecule is composed of Models (conformations)\n* A Model is composed of Residues\n* A Residue is composed of Atoms \n\nAtom object is the basic component in SBio python, which is a container of an atomic coordinate line in a 3D-coordinates file. For example, when reading a pdb file, each line starting with ‘ATOM’ or ‘HETATM’ will be used to create an Atom object. \nResidue object is used to represent a residue (amino acid or nucleic acid residue, sometimes could be a small molecule.) in a macromolecule. Residue object is composed of several Atom object of atoms belong to a specific residue.\nModel object is composed of Residue objects and used to represent a model (or conformation) of a macromolecule.\nMolecule object is the top container for Atom objects. Molecule can have at least 1 Model object. \n\nAccess\nAtom, Residue and Model objects are stored in a python dict of their parent containers. Access of Atom, Residue and Model objects within a Molecule object is supported by using the properties of python object. Normally a Model object will be assigned with the key of ‘m+number’, such as ‘m1, m100’. Suppose we have a Molecule object name ‘mol’, then the 1st Model object of ‘mol’ is ‘mol.m1’. Residue objects within a model will be assigned with the key of ‘chain id+residue serial’, 1st residue of chain A will have the name of ‘A1’, and can be access by ‘mol.m1.A1’. The name of an atom in its 3D coordinates will be used as the key. Then an Atom object with the name of ‘CA’ in residue ‘A1’ in 1st model can be access by ‘mol.m1.A1.CA’. However some atom has the name end with quotes, in this case the quotes will be replaced by ‘’ (underscore). E.g., the key for “C5’” will be ‘C5’\nCreate a Molecule object\nMolecule objects can be created from PDB files or other formats (to be implemented), for example:", "from SBio import *\nmol = create_molecule('test.pdb')\nmol", "navigate through a Molecule object:", "for m in mol.get_model():\n for r in m.get_residue():\n print(r)", "The \"get_model\", \"get_atom\" and \"get_residue\" are python generators, can be more conveniently used like this:", "atoms=mol.m1.get_atom()\nresidue=mol.m1.get_residue()\nfor r in residue:\n print(r)", "write coordinates to pdb\nBoth Molecule and Model object can be written into pdb file. 
Thus it provides a method to split pdb file with multiple conformations.", "mol.write_pdb('mol_new.pdb') # write all conformation into a single pdb file\ni = 1\nfor m in mol.get_model():\n name = 'mol_m'+str(i)+'.pdb'\n m.write_pdb(name) #write one conformation to a single pdb file\n i+=1", "get information of a molecule\nThe 'Model' module provide several methods for extraction information of a molecule \n get_atom_num: return the number of atoms in a molecule \n get_residue_list: return a list of residue of a molecule \n get_sequence: return the sequence information of a molecule \n write_fasta: write sequence information into a fasta file \n get_mw: return the molecular weight of a molecule \n get_dist_matrix: compute the complete inter-atomic distance matrix\nusage:", "m1 = mol.m1\nprint(m1.get_atom_num())\nprint(m1.get_residue_list())\nprint(m1.get_sequence('A')) #the sequence of chain A\nm1.write_fasta('A', 'test.fasta', comment='test')\nm1.get_mw()", "compute geometry information\nThe 'Geometry' module contains several methods for the measurement of distance, angle and torsion angle among atoms:", "a1 = mol.m1.A2.O4_ # the actual name for this atom is \"O4'\"\na2 = mol.m1.A2.C1_\na3 = mol.m1.A2.N9\na4 = mol.m1.A2.C4\nget_distance(a1, a2)\n\nget_angle(a1, a2, a3)\n\nchi = get_torsion(a1,a2,a3,a4)\nprint('the CHI torsion angle is {}'.format(chi))", "compute the interaction between atoms\nThe 'Interaction' module provides several methods to check the interaction between atoms:\n* get_hydrogen_bond: check whether hydrogen bond formation between given atoms\n* get_polar_interaction: compute the polar interaction between given atoms\n* get_pi_pi_interaction: compute the aromatic pi-pi interaction between given atom groups", "a5 = m1.A2.N6\na6 = m1.A2.H61\na7 = m1.A1.O2\nprint(get_hydrogen_bond(a5, a7, a6)) #arguments order: donor, acceptor, donor_H=None\nprint(get_polar_interaction(a5, a7))", "Structure alignment\nThe 'Structural_alignment' module provide funtion to align a set of molecules. The RMSD values for the strcuture superimpose can be calculated, coordinates of the aligned structure can also be updated.", "m1 = mol.m1\nm2 = mol.m2\nmolecule_list = [m1,m2]\n\nresidue_range = [[1,2],[1,2]]\nsa = Structural_alignment(molecule_list, residue_range, update_coord=False)\nsa.run()\nprint(sa.rmsd)", "Sequence alignment\nThe 'Structural_alignment' module is used to deal with the multiple sequence alignment to mapping residues between different residues, i.e., to get the residue serials of the conserved residues among different molecules. The conserved residue serials can than be used in the structure alignment.", "seq = 'D:\\\\python\\\\structural bioinformatics_in_python\\\\PPO-crystal.clustal'\nalignment=Seq_align_parser(seq)\nalignment.run()\ncon_res = []\ncon_res.append(alignment.align_res_mask['O24164|PPOM_TOBAC '])\ncon_res.append(alignment.align_res_mask['P56601|PPOX_MYXXA '])\nprint(con_res[0]) #conserved residue in 'PPOM_TOBAC'\nprint(con_res[1])", "Structural analysis for nucleic acids and protein\nThe 'Structural_analysis' and 'Plot' modules provide methods for simple structural analysis and visulization for nucleic and protein. For example, the backbone torsion angle, the puckering of the sugar of nucleeotide, can be computed and plotted. (see examples for more detail)\nStandard biomolecular data\nThe 'Data' module provides standard syntax for biomolecule, such as the standard name for amino acid and nucleic acid residue, molecular weights, etc." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kubeflow/pipelines
components/gcp/dataproc/submit_spark_job/sample.ipynb
apache-2.0
[ "Name\nData preparation using Spark on YARN with Cloud Dataproc\nLabel\nCloud Dataproc, GCP, Cloud Storage, Spark, Kubeflow, pipelines, components, YARN\nSummary\nA Kubeflow Pipeline component to prepare data by submitting a Spark job on YARN to Cloud Dataproc.\nDetails\nIntended use\nUse the component to run an Apache Spark job as one preprocessing step in a Kubeflow Pipeline.\nRuntime arguments\nArgument | Description | Optional | Data type | Accepted values | Default |\n:--- | :---------- | :--- | :------- | :------| :------| \nproject_id | The ID of the Google Cloud Platform (GCP) project that the cluster belongs to.|No | GCPProjectID | | |\nregion | The Cloud Dataproc region to handle the request. | No | GCPRegion | | |\ncluster_name | The name of the cluster to run the job. | No | String | | |\nmain_jar_file_uri | The Hadoop Compatible Filesystem (HCFS) URI of the JAR file that contains the main class. | No | GCSPath | | |\nmain_class | The name of the driver's main class. The JAR file that contains the class must be either in the default CLASSPATH or specified in spark_job.jarFileUris.| No | | | | \nargs | The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.| Yes | | | |\nspark_job | The payload of a SparkJob.| Yes | | | |\njob | The payload of a Dataproc job. | Yes | | | |\nwait_interval | The number of seconds to wait between polling the operation. | Yes | | | 30 |\nOutput\nName | Description | Type\n:--- | :---------- | :---\njob_id | The ID of the created job. | String\nCautions & requirements\nTo use the component, you must:\n\nSet up a GCP project by following this guide.\nCreate a new cluster.\nThe component can authenticate to GCP. Refer to Authenticating Pipelines to GCP for details.\nGrant the Kubeflow user service account the role roles/dataproc.editor on the project.\n\nDetailed description\nThis component creates a Spark job from Dataproc submit job REST API.\nFollow these steps to use the component in a pipeline:\n\nInstall the Kubeflow Pipeline SDK:", "%%capture --no-stderr\n\n!pip3 install kfp --upgrade", "Load the component using KFP SDK", "import kfp.components as comp\n\ndataproc_submit_spark_job_op = comp.load_component_from_url(\n 'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataproc/submit_spark_job/component.yaml')\nhelp(dataproc_submit_spark_job_op)", "Sample\nNote: The following sample code works in an IPython notebook or directly in Python code.\nSet up a Dataproc cluster\nCreate a new Dataproc cluster (or reuse an existing one) before running the sample code.\nPrepare a Spark job\nUpload your Spark JAR file to a Cloud Storage bucket. 
In the sample, we use a JAR file that is preinstalled in the main cluster: file:///usr/lib/spark/examples/jars/spark-examples.jar.\nHere is the source code of the sample.\nTo package a self-contained Spark application, follow these instructions.\nSet sample parameters", "PROJECT_ID = '<Please put your project ID here>'\nCLUSTER_NAME = '<Please put your existing cluster name here>'\nREGION = 'us-central1'\nSPARK_FILE_URI = 'file:///usr/lib/spark/examples/jars/spark-examples.jar'\nMAIN_CLASS = 'org.apache.spark.examples.SparkPi'\nARGS = ['1000']\nEXPERIMENT_NAME = 'Dataproc - Submit Spark Job'", "Example pipeline that uses the component", "import kfp.dsl as dsl\nimport json\n@dsl.pipeline(\n name='Dataproc submit Spark job pipeline',\n description='Dataproc submit Spark job pipeline'\n)\ndef dataproc_submit_spark_job_pipeline(\n project_id = PROJECT_ID, \n region = REGION,\n cluster_name = CLUSTER_NAME,\n main_jar_file_uri = '',\n main_class = MAIN_CLASS,\n args = json.dumps(ARGS), \n spark_job=json.dumps({ 'jarFileUris': [ SPARK_FILE_URI ] }), \n job='{}', \n wait_interval='30'\n):\n dataproc_submit_spark_job_op(\n project_id=project_id, \n region=region, \n cluster_name=cluster_name, \n main_jar_file_uri=main_jar_file_uri, \n main_class=main_class,\n args=args, \n spark_job=spark_job, \n job=job, \n wait_interval=wait_interval)\n ", "Compile the pipeline", "pipeline_func = dataproc_submit_spark_job_pipeline\npipeline_filename = pipeline_func.__name__ + '.zip'\nimport kfp.compiler as compiler\ncompiler.Compiler().compile(pipeline_func, pipeline_filename)", "Submit the pipeline for execution", "#Specify pipeline argument values\narguments = {}\n\n#Get or create an experiment and submit a pipeline run\nimport kfp\nclient = kfp.Client()\nexperiment = client.create_experiment(EXPERIMENT_NAME)\n\n#Submit a pipeline run\nrun_name = pipeline_func.__name__ + ' run'\nrun_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments)", "References\n\nComponent Python code\nComponent Docker file\nSample notebook\nDataproc SparkJob\n\nLicense\nBy deploying or using this software you agree to comply with the AI Hub Terms of Service and the Google APIs Terms of Service. To the extent of a direct conflict of terms, the AI Hub Terms of Service will control." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ewulczyn/talk_page_abuse
src/modeling/cv_ngram_architectures.ipynb
apache-2.0
[ "%load_ext autoreload\n%autoreload 2\n%matplotlib inline\n%load_ext autotime\n\nfrom ngram import tune, roc_scorer,spearman_scorer\nfrom baselines import load_comments_and_labels, assemble_data, one_hot\nfrom deep_learning import make_mlp, DenseTransformer\nfrom deep_learning import make_lstm, make_conv_lstm, SequenceTransformer\n\n\nfrom sklearn.pipeline import Pipeline\nfrom sklearn.grid_search import RandomizedSearchCV\nfrom sklearn.linear_model import LinearRegression\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.feature_extraction.text import TfidfTransformer\nfrom sklearn.decomposition import TruncatedSVD\nfrom keras.wrappers.scikit_learn import KerasClassifier\nfrom serialization import save_pipeline, load_pipeline\nimport joblib\nimport copy\nimport pandas as pd\n\nimport keras\nkeras.__version__", "Helpers", "def get_best_estimator(cv):\n params = cv.best_params_\n model = cv.estimator\n model = model.set_params(**params)\n return model\n \ndef save_best_estimator(cv, directory, name):\n model = get_best_estimator(cv)\n save_pipeline(model, directory, name)", "Load Annotated Data", "task = 'attack'\ndata = load_comments_and_labels(task)", "Params", "path = '../../models/cv/'\nn_max = 10000000\nn_iter = 15", "Prep Data", "X_train, y_train_ohv = assemble_data(data, 'comments', 'plurality', splits = ['train'])\nX_dev, y_dev_ohv = assemble_data(data, 'comments', 'plurality', splits = ['dev'])\n\n_, y_train_ed = assemble_data(data, 'comments', 'empirical_dist', splits = ['train'])\n_, y_dev_ed = assemble_data(data, 'comments', 'empirical_dist', splits = ['dev'])\n\ny_train_ohm = one_hot(y_train_ed)\ny_dev_ohm = one_hot(y_dev_ed)\n\nX_train = X_train[:n_max]\nX_dev = X_dev[:n_max]\n\ny_train_ohv = y_train_ohv[:n_max]\ny_dev_ohv = y_dev_ohv[:n_max]\n\ny_train_ed = y_train_ed[:n_max]\ny_dev_ed = y_dev_ed[:n_max]\n\ny_train_ohm = y_train_ohm[:n_max]\ny_dev_ohm = y_dev_ohm[:n_max]\n\nresults_list = []", "Sklearn Experiments\nLets run some quick experiments in sklearn, so that we have baselines for the following models built in keras. We will only be building logistic regressions with one-hot labels. 
This will also help us see if we should use tfidf weighting and normalization.", "max_features = (5000, 10000, 50000, 100000)\nC = (0.0001, 0.001, 0.01, 0.1, 1, 10)", "No tfidf", "alg = Pipeline([\n ('vect', CountVectorizer()),\n ('clf', LogisticRegression()),\n])\n\n# linear char-gram, no tfidf\n\nparam_grid = {\n 'vect__max_features': max_features, \n 'vect__ngram_range': ((1,5),), \n 'vect__analyzer' : ('char',),\n 'clf__C' : C,\n}\n\nm = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)\n\n# linear word-gram, no tfidf\n\nparam_grid = {\n 'vect__max_features': max_features, \n 'vect__ngram_range': ((1,2),), \n 'vect__analyzer' : ('word',),\n 'clf__C' : C,\n}\n\nm = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)", "With tfidf", "alg = Pipeline([\n ('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('clf', LogisticRegression()),\n])\n\n# linear char-gram, tfidf\n\nparam_grid = {\n 'vect__max_features': max_features, \n 'vect__ngram_range': ((1,5),), \n 'vect__analyzer' : ('char',),\n 'tfidf__sublinear_tf' : (True, False),\n 'tfidf__norm' : (None, 'l2'),\n 'clf__C' : C,\n}\n\nm = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)\n\n# linear word-gram, tfidf\n\nparam_grid = {\n 'vect__max_features': max_features, \n 'vect__ngram_range': ((1,2),), \n 'vect__analyzer' : ('word',),\n 'tfidf__sublinear_tf' : (True, False),\n 'tfidf__norm' : (None, 'l2'),\n 'clf__C' : C,\n}\n\nm = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)", "TFIDF improves the ROC score for both types of ngram models although it gives a bigger boost for the char-ngram models.\nTensorflow/Keras\nNow we will cross-validate over model architectures (linear, mlp, lstm), ngram type (word, char), and label type (one hot or empirical distribution)\nLinear and MLP\nThe mlp model class actually includes linear models (just set hidden layers to be empty)", "alg = Pipeline([\n ('vect', CountVectorizer()),\n ('tfidf', TfidfTransformer()),\n ('to_dense', DenseTransformer()), \n ('clf', KerasClassifier(build_fn=make_mlp, output_dim = 2, verbose=False)),\n])\n\ndependencies = [( 'vect__max_features', 'clf__input_dim')]\n\n\nchar_vec_params = {\n 'vect__max_features': (5000, 10000, 30000), \n 'vect__ngram_range': ((1,5),), \n 'vect__analyzer' : ('char',)\n }\n\nword_vect_params = {\n 'vect__max_features': (5000, 10000, 30000), \n 'vect__ngram_range': ((1,2),), \n 'vect__analyzer' : ('word',)\n }\n\ntfidf_params = {\n 'tfidf__sublinear_tf' : (True, False),\n 'tfidf__norm' : ('l2',),\n }\n\nlinear_clf_params = {\n 'clf__alpha' : (0.000000001, 0.0000001, 0.00001, 0.001, 0.01),\n 'clf__hidden_layer_sizes' : ((),),\n 'clf__nb_epoch' : (2,4,8,16),\n 'clf__batch_size': (200,)\n }\n\nmlp_clf_params = {\n 'clf__alpha' : (0.000000001, 0.0000001, 0.00001, 0.001, 0.01),\n 'clf__hidden_layer_sizes' : ((50,), (50, 50), (50, 50, 50)),\n 'clf__nb_epoch' : (2,4,8,16),\n 'clf__batch_size': (200,)\n }\n\n\nfor model in ['linear', 'mlp']:\n for gram in ['word', 'char']:\n for label in ['oh', 'ed']:\n params = {}\n \n if model == 'linear':\n params.update(linear_clf_params)\n else:\n params.update(mlp_clf_params)\n \n params.update(tfidf_params)\n \n if gram == 'char':\n params.update(char_vec_params)\n else:\n params.update(word_vect_params)\n \n if label == 'oh':\n y_train = y_train_ohm\n y_dev = 
y_dev_ohm\n else:\n y_train = y_train_ed\n y_dev = y_dev_ed\n \n print('\\n\\n\\n %s %s %s' % (model, gram, label))\n cv = tune (X_train, y_train, X_dev, y_dev,\n alg, params,\n n_iter,\n roc_scorer,\n n_jobs = 1,\n verbose = True,\n dependencies = dependencies)\n \n save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))\n est = get_best_estimator(cv)\n est.fit(X_train, y_train)\n \n best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100\n print (\"\\n best spearman: \", best_spearman)\n best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100\n print (\"\\n best roc: \", best_roc)\n \n results_list.append({'model_type': model,\n 'ngram_type': gram,\n 'label_type' : label,\n 'cv': cv.grid_scores_,\n 'best_roc': round(best_roc, 3),\n 'best_spearman': round(best_spearman, 3)\n })\n\nresults_df = pd.DataFrame(results_list)\n\nresults_df\n\ngrid_scores[0].mean_validation_score\n\ngrid_scores = results_df['cv'][0]\nmax(grid_scores, key = lambda x: x.mean_validation_score).parameters\n\nimport json\n\ndef get_best_params(grid_scores):\n return json.dumps(max(grid_scores, key = lambda x: x.mean_validation_score).parameters)\n\nresults_df['best_params'] = results_df['cv'].apply(get_best_params)\n\nresults_df.to_csv('cv_results.csv')", "LSTM", "alg = Pipeline([\n ('seq', SequenceTransformer()),\n ('clf', KerasClassifier(build_fn=make_lstm, output_dim = 2, verbose=True)),\n])\n\ndependencies = [( 'seq__max_features', 'clf__max_features'),\n ( 'seq__max_len', 'clf__max_len')]\n\nword_seq_params = {\n 'seq__max_features' : (5000, 10000, 30000),\n 'seq__max_len' : (100, 200, 500),\n 'seq__analyzer' : ('word',)\n}\n\nchar_seq_params = {\n 'seq__max_features' : (100,),\n 'seq__max_len' : (200, 500, 1000),\n 'seq__analyzer' : ('char',)\n}\n\nclf_params = {\n 'clf__dropout' : (0.1, 0.2, 0.4),\n 'clf__embedding_size' : (64, 128),\n 'clf__lstm_output_size': (64, 128),\n 'clf__nb_epoch' : (2,3,4),\n 'clf__batch_size': (200,)\n}\n\nfrom pprint import pprint\n\nmodel = 'lstm'\nfor gram in ['word', 'char']:\n for label in ['oh', 'ed']:\n params = {}\n params.update(clf_params)\n\n if gram == 'char':\n params.update(char_seq_params)\n else:\n params.update(word_seq_params)\n\n if label == 'oh':\n y_train = y_train_ohm\n y_dev = y_dev_ohm\n else:\n y_train = y_train_ed\n y_dev = y_dev_ed\n \n pprint(params)\n\n print('\\n\\n\\n %s %s %s' % (model, gram, label))\n cv = tune (X_train, y_train, X_dev, y_dev,\n alg, params,\n n_iter,\n roc_scorer,\n n_jobs = 1,\n verbose = True,\n dependencies = dependencies)\n\n save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))\n est = get_best_estimator(cv)\n est.fit(X_train, y_train)\n \n best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100\n print (\"\\n best spearman: \", best_spearman)\n best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100\n print (\"\\n best roc: \", best_roc)\n\n results_list.append({'model_type': model,\n 'ngram_type': gram,\n 'label_type' : label,\n 'cv': cv.grid_scores_,\n 'best_roc': round(best_roc, 3),\n 'best_spearman': round(best_spearman, 3)\n })", "Conv LSTM", "alg = Pipeline([\n ('seq', SequenceTransformer()),\n ('clf', KerasClassifier(build_fn=make_conv_lstm, output_dim = 2, verbose=True)),\n])\n\ndependencies = [( 'seq__max_features', 'clf__max_features'),\n ( 'seq__max_len', 'clf__max_len')]\n\nword_seq_params = {\n 'seq__max_features' : (5000, 10000, 30000),\n 'seq__max_len' : (100, 200, 500),\n 'seq__analyzer' : ('word',),\n 'clf__filter_length': (2, 4, 6),\n 'clf__pool_length' : (2, 4, 
6)\n}\n\nchar_seq_params = {\n 'seq__max_features' : (100,),\n 'seq__max_len' : (200, 500, 1000),\n 'seq__analyzer' : ('char',),\n 'clf__filter_length': (5, 10, 15),\n 'clf__pool_length' : (5, 10, 15)\n}\n\nclf_params = {\n 'clf__dropout' : (0.1, 0.2, 0.4),\n 'clf__embedding_size' : (64, 128),\n 'clf__lstm_output_size': (64, 128),\n 'clf__nb_epoch' : (2,3,4),\n 'clf__batch_size': (200,),\n 'clf__nb_filter' : (64, 128),\n \n}\n\nmodel = 'conv_lstm'\nfor gram in ['word', 'char']:\n for label in ['oh', 'ed']:\n params = {}\n params.update(clf_params)\n\n if gram == 'char':\n params.update(char_seq_params)\n else:\n params.update(word_seq_params)\n\n if label == 'oh':\n y_train = y_train_ohm\n y_dev = y_dev_ohm\n else:\n y_train = y_train_ed\n y_dev = y_dev_ed\n \n pprint(params)\n\n print('\\n\\n\\n %s %s %s' % (model, gram, label))\n cv = tune (X_train, y_train, X_dev, y_dev,\n alg, params,\n n_iter,\n roc_scorer,\n n_jobs = 1,\n verbose = True,\n dependencies = dependencies)\n\n save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))\n est = get_best_estimator(cv)\n est.fit(X_train, y_train)\n \n best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100\n print (\"\\n best spearman: \", best_spearman)\n best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100\n print (\"\\n best roc: \", best_roc)\n\n results_list.append({'model_type': model,\n 'ngram_type': gram,\n 'label_type' : label,\n 'cv': cv.grid_scores_,\n 'best_roc': round(best_roc, 3),\n 'best_spearman': round(best_spearman, 3)\n })" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
svdwulp/da-programming-1
week_02_oefeningen_uitwerkingen.ipynb
gpl-2.0
[ "Oefeningen\nOpgave 1. Schrijf een programma dat het algoritme\nuit de slide [Talstelsels (3)](week_02.ipynb#Talstelsels-(3%29) implementeert,\ndat wil zeggen, waarmee een getal uit een willekeurig talstelsel\nomgezet kan worden naar een tientallige representatie.\nJe kunt de index()-functie gebruiken om de decimale waarde van een karakters op te zoeken in een string (zie String indexing (2).\nOm de string number van achter naar voren te doorlopen, heb je een Python functie nodig die strings kan omkeren. Probeer die zelf te vinden, met behulp van internet.", "number = \"3DB\"\nbase = 16\nresult = 0\n\ndigits = \"0123456789ABCDEF\"\n\npower = 0\nfor digit in reversed(number):\n result += digits.index(digit) * base**power\n power += 1\nprint(\"Resultaat: {}\".format(tgt_number))\n\n# Van collega Peter kreeg ik een mooie recursieve oplossing\n# die ik jullie toch niet wil onthouden.\n# In periode 3 gaan we recursieve definities bekijken, \n# maar hier vast een voorproefje.\n# Kun je achterhalen waarom dit werkt?\n\n# To understand recursion you gotta understand recursion\ndef base2dec(number, base):\n digits = \"0123456789ABCDEF\"\n if number < base:\n return digits[number]\n else:\n return base2dec(number // base, base) + digits[number % base]\n\nprint(base2dec(3021, 16))\n\n\n# To understand recursion you gotta understand recursion\ndef dec2base(number, base): \n digits = \"0123456789ABCDEF\"\n if len(number) > 0:\n digit = number.pop(0)\n position = digits.index(digit) \n power = len(number)\n result = position * base**power\n return result + dec2base(number, base)\n else:\n return 0\n\nprint(dec2base(\"BCD\", 16))", "Opgave 2. Nu je de programma's hebt om de conversie van een willekeurig talstelsel\nnaar 10-tallig uit te voeren en tevens de conversie van 10-tallig naar een\nwillekeurig talstelsel, kun je ze samen gebruiken om van het ene\nwillekeurige talstelsel, zeg $n$, naar het andere willekeurige talstelsel $m$ te converteren:\n$x_n \\rightarrow y_{10} \\rightarrow z_m$\nSchrijf een programma dat een getal in een opgegeven talstelsel kan\nconverteren naar een ander opgegeven talstelsel. Je kunt de onderstaande code\ngebruiken om je programma te beginnen:\n```python\norg_base = 8\norg_number = \"4607\"\ntgt_base = 16\nconverteer org_number (basis org_base) naar dec_number (10-tallig),\nconverteer dec_number (10-tallig) naar tgt_number (basis tgt_base),\ndruk tgt_number af\n```", "org_base = 8\norg_number = \"4607\"\ntgt_base = 16\n\ndigits = \"0123456789ABCDEF\"\n\n## converteer org_number (basis org_base) naar dec_number (10-tallig)\ndec_number = 0\npower = 0\nfor digit in reversed(org_number):\n dec_number += digits.index(digit) * org_base**power\n power += 1\n## converteer dec_number (10-tallig) naar tgt_number (basis tgt_base)\ntgt_number = \"\"\nwhile dec_number > 0:\n remainder = dec_number % tgt_base\n tgt_number = str(remainder) + tgt_number\n dec_number = dec_number // tgt_base\n\n## druk tgt_number af\nprint(\"Resultaat: {}\".format(tgt_number))", "Opgave 3. 
Pas het programma uit opgave 2 aan zodat de gebruiker de beide bases\nen het te converteren getal mag opgeven, zoals beschreven\nin de slide [Python User Interaction (1)](week_02.ipynb#Python-user-interaction-(1%29)", "org_base = int(input(\"Geef het talstelsel (2..16) voor het originele getal: \"))\norg_number = input(\"Geef het originele getal: \")\ntgt_base = int(input(\"Geef het gewenste talstelsel: \"))\n\ndigits = \"0123456789ABCDEF\"\n\n## converteer org_number (basis org_base) naar dec_number (10-tallig)\ndec_number = 0\npower = 0\nfor digit in reversed(org_number):\n dec_number += digits.index(digit) * org_base**power\n power += 1\n## converteer dec_number (10-tallig) naar tgt_number (basis tgt_base)\ntgt_number = \"\"\nwhile dec_number > 0:\n remainder = dec_number % tgt_base\n tgt_number = str(remainder) + tgt_number\n dec_number = dec_number // tgt_base\n\n## druk tgt_number af\nprint(\"Resultaat: {}\".format(tgt_number))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
leriomaggio/python-in-a-notebook
01 Introducing the IPython Notebook.ipynb
mit
[ "Introducing the IPython Notebook\nAron Ahmadia (US Army ERDC) and David Ketcheson (KAUST)\nTeaching Numerical Methods with IPython Notebooks, SciPy 2014\n<a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\"><img alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.creativecommons.org/l/by/4.0/80x15.png\" /></a><br /><span xmlns:dct=\"http://purl.org/dc/terms/\" property=\"dct:title\">This lecture</span> by <a xmlns:cc=\"http://creativecommons.org/ns#\" property=\"cc:attributionName\" rel=\"cc:attributionURL\">Aron Ahmadia and David Ketcheson</a> is licensed under a <a rel=\"license\" href=\"http://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0 International License</a>. All code examples are also licensed under the MIT license.\nNOTE: Some changes have been applied to make this notebook compliant with Python 3\nWhat is this?\nThis is a gentle introduction to the IPython Notebook aimed at lecturers who wish to incorporate it in their teaching, written in an IPython Notebook. This presentation adapts material from the IPython official documentation.\nWhat is an IPython Notebook?\nAn IPython Notebook is a:\n[A] Interactive environment for writing and running code\n[B] Weave of code, data, prose, equations, analysis, and visualization\n[C] Tool for prototyping new code and analysis\n[D] Reproducible workflow for scientific research \n[E] All of the above\nWriting and Running Code\nThe IPython Notebook consists of an ordered list of cells. \nThere are four important cell types:\n\nCode\nMarkdown\nHeading\nRaw\n\nWe briefly introduce how Code Cells work here. We will return to the other three cell types later.\nCode Cells", "# This is a code cell made up of Python comments\n# We can execute it by clicking on it with the mouse\n# then clicking the \"Run Cell\" button\n\n# A comment is a pretty boring piece of code\n# This code cell generates \"Hello, World\" when executed\n\nprint(\"Hello, World\")\n\n# Code cells can also generate graphical output\n%matplotlib inline\nimport matplotlib\nmatplotlib.pyplot.hist([0, 1, 2, 2, 3, 3, 3, 4, 4, 4, 10]);", "Modal editor\nStarting with IPython 2.0, the IPython Notebook has a modal user interface. This means that the keyboard does different things depending on which mode the Notebook is in. There are two modes: edit mode and command mode.\nEdit mode\nEdit mode is indicated by a green cell border and a prompt showing in the editor area:\n<img src=\"./files/images/edit_mode.png\">\nWhen a cell is in edit mode, you can type into the cell, like a normal text editor.\n<div class=\"alert alert-success\" style=\"margin: 10px\">\nEnter edit mode by pressing `enter` or using the mouse to click on a cell's editor area.\n</div>\n\n<div class=\"alert alert-success\" style=\"margin: 10px\">\nWhile in edit mode, tab-completion works for variables the kernel knows about from executing previous cells.\n</div>\n\nCommand mode\nCommand mode is indicated by a grey cell border:\n<img src=\"./files/images/command_mode.png\">\nWhen you are in command mode, you are able to edit the notebook as a whole, but not type into individual cells. Most importantly, in command mode, the keyboard is mapped to a set of shortcuts that let you perform notebook and cell actions efficiently. 
For example, if you are in command mode and you press c, you will copy the current cell - no modifier is needed.\n<div class=\"alert alert-error\" style=\"margin: 10px\">\nDon't try to type into a cell in command mode; unexpected things will happen!\n</div>\n\n<div class=\"alert alert-success\" style=\"margin: 10px\">\nEnter command mode by pressing `esc` or using the mouse to click *outside* a cell's editor area.\n</div>\n\nMouse navigation\nAll navigation and actions in the Notebook are available using the mouse through the menubar and toolbar, which are both above the main Notebook area:\n<img src=\"./files/images/menubar_toolbar.png\">\nThe first idea of mouse based navigation is that cells can be selected by clicking on them. The currently selected cell gets a grey or green border depending on whether the notebook is in edit or command mode. If you click inside a cell's editor area, you will enter edit mode. If you click on the prompt or output area of a cell you will enter command mode.\nIf you are running this notebook in a live session (not on http://nbviewer.ipython.org) try selecting different cells and going between edit and command mode. Try typing into a cell.\nThe second idea of mouse based navigation is that cell actions usually apply to the currently selected cell. Thus if you want to run the code in a cell, you would select it and click the \"Play\" button in the toolbar or the \"Cell:Run\" menu item. Similarly, to copy a cell you would select it and click the \"Copy\" button in the toolbar or the \"Edit:Copy\" menu item. With this simple pattern, you should be able to do most everything you need with the mouse.\nMarkdown and heading cells have one other state that can be modified with the mouse. These cells can either be rendered or unrendered. When they are rendered, you will see a nice formatted representation of the cell's contents. When they are unrendered, you will see the raw text source of the cell. To render the selected cell with the mouse, click the \"Play\" button in the toolbar or the \"Cell:Run\" menu item. To unrender the selected cell, double click on the cell.\nKeyboard Navigation\nThe modal user interface of the IPython Notebook has been optimized for efficient keyboard usage. This is made possible by having two different sets of keyboard shortcuts: one set that is active in edit mode and another in command mode.\nThe most important keyboard shortcuts are enter, which enters edit mode, and esc, which enters command mode.\nIn edit mode, most of the keyboard is dedicated to typing into the cell's editor. 
Thus, in edit mode there are relatively few shortcuts:\nIn command mode, the entire keyboard is available for shortcuts:\nHere the rough order in which the IPython Developers recommend learning the command mode shortcuts:\n\nBasic navigation: enter, shift-enter, up/k, down/j\nSaving the notebook: s\nCell types: y, m, 1-6, t\nCell creation and movement: a, b, ctrl+k, ctrl+j\nCell editing: x, c, v, d, z, shift+=\nKernel operations: i, 0\n\nI personally (& humbly) suggest learning h first!\nThe IPython Notebook Architecture\nSo far, we have learned the basics of using IPython Notebooks.\nFor simple demonstrations, the typical user doesn't need to understand how the computations are being handled, but to successfully write and present computational notebooks, you will need to understand how the notebook architecture works.\nA live notebook is composed of an interactive web page (the front end), a running IPython session (the kernel or back end), and a web server responsible for handling communication between the two (the, err..., middle-end)\nA static notebook, as for example seen on NBViewer, is a static view of the notebook's content. The default format is HTML, but a notebook can also be output in PDF or other formats.\nThe centerpiece of an IPython Notebook is the \"kernel\", the IPython instance responsible for executing all code. Your IPython kernel maintains its state between executed cells.", "x = 0\nprint(x)\n\nx += 1\nprint(x)", "There are two important actions for interacting with the kernel. The first is to interrupt it. This is the same as sending a Control-C from the command line. The second is to restart it. This completely terminates the kernel and starts it anew. None of the kernel state is saved across a restart. \nMarkdown cells\nText can be added to IPython Notebooks using Markdown cells. Markdown is a popular markup language that is a superset of HTML. Its specification can be found here:\nhttp://daringfireball.net/projects/markdown/\nMarkdown basics\nText formatting\nYou can make text italic or bold or monospace\nItemized Lists\n\nOne\nSublist\nThis\n\n\n\n\nSublist\n - That\n - The other thing\nTwo\nSublist\nThree\nSublist\n\nEnumerated Lists\n\nHere we go\nSublist\nSublist\n\n\nThere we go\nNow this\n\nHorizontal Rules\n\n\n\nBlockquotes\n\nTo me programming is more than an important practical art. It is also a gigantic undertaking in the foundations of knowledge. -- Rear Admiral Grace Hopper\n\nLinks\nIPython's website\nCode\nThis is a code snippet: \nPython\ndef f(x):\n \"\"\"a docstring\"\"\"\n return x**2\nThis is an example of a Python function\nYou can also use triple-backticks to denote code blocks.\nThis also allows you to choose the appropriate syntax highlighter.\nC\nif (i=0; i&lt;n; i++) {\n printf(\"hello %d\\n\", i);\n x += 4;\n}\nTables\nTime (s) | Audience Interest\n---------|------------------\n 0 | High\n 1 | Medium\n 5 | Facebook\nImages\n\nYouTube", "from IPython.display import YouTubeVideo\nYouTubeVideo('vW_DRAJ0dtc')", "Other HTML\n<strong> Be Bold! </strong>\nMathematical Equations\nCourtesy of MathJax, you can beautifully render mathematical expressions, both inline: \n$e^{i\\pi} + 1 = 0$, and displayed:\n$$e^x=\\sum_{i=0}^\\infty \\frac{1}{i!}x^i$$\nEquation Environments\nYou can also use a number of equation environments, such as align:\n\\begin{align}\n x &= 4 \\\ny+z &= x\n\\end{align}\nA full list of available TeX and LaTeX commands is maintained by Dr. 
Carol Burns.\nOther Useful MathJax Notes\n\ninline math is demarcated by $ $, or \\( \\)\ndisplayed math is demarcated by $$ $$ or \\[ \\]\ndisplayed math environments can also be directly demarcated by \\begin and \\end\n\\newcommand and \\def are supported, within areas MathJax processes (such as in a \\[ \\] block)\nequation numbering is not officially supported, but it can be indirectly enabled\n\nA Note about Notebook Security\nBy default, a notebook downloaded to a new computer is untrusted\n\nHTML and Javascript in Markdown cells is now never executed\nHTML and Javascript code outputs must be explicitly re-executed\nSome of these restrictions can be mitigrated through shared accounts (Sage MathCloud) and secrets\n\nMore information on notebook security is in the IPython Notebook documentation\nMagics\nIPython kernels execute a superset of the Python language. The extension functions, commonly referred to as magics, come in two variants. \nLine Magics\n\nA line magic looks like a command line call. The most important of these is %matplotlib inline, which embeds all matplotlib plot output as images in the notebook itself.", "%matplotlib inline\n\n%whos", "Cell Magics\n\nA cell magic takes its entire cell as an argument. Although there are a number of useful cell magics, you may find %%timeit to be useful for exploring code performance.", "%%timeit\n\nimport numpy as np\nnp.sum(np.random.rand(1000))", "Execute Code as Python 2", "%%python2\n\ni = 10**60\nprint type(i)", "Interacting with the Command Line\nIPython supports one final trick, the ability to interact directly with your shell by using the ! operator.", "!ls\n\nx = !ls\n\nprint(x)", "A Note about Notebook Version Control\nThe IPython Notebook is stored using canonicalized JSON for ease of use with version control systems.\nThere are two things to be aware of:\n\n\nBy default, IPython embeds all content and saves kernel execution numbers. You may want to get in the habit of clearing all cells before committing.\n\n\nAs of IPython 2.0, all notebooks are signed on save. This increases the chances of a commit collision during merge, forcing a manual resolution. Either signature can be safely deleted in this situation." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jvbalen/cover_id
draft_notebooks/paired_data_draft.ipynb
mit
[ "%matplotlib inline\n\nfrom __future__ import division, print_function\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom itertools import combinations\nimport tensorflow\n\nimport SHS_data\nimport main\nimport fingerprints as fp\nimport util\n\nimport paired_data\nreload(paired_data);", "Load some data", "# ratio = (5, 15, 80)\nratio = (1, 9, 90)\nclique_dict, cliques_by_uri = SHS_data.read_cliques()\ntrain_cliques, test_cliques, val_cliques = util.split_train_test_validation(clique_dict, ratio=ratio)", "Load pairs of covers and non-covers\n```Python\ndef get_pairs(clique_dict):\n...\n```", "pairs, non_pairs = paired_data.get_pairs(train_cliques)\n\nassert len(pairs) == len(non_pairs)\nassert np.all([len(pair) == 2 for pair in pairs])\nassert np.all([len(non_pair) == 2 for non_pair in non_pairs])\nassert np.all([cliques_by_uri[pair[0]] == cliques_by_uri[pair[1]] for pair in pairs])\nassert not np.any([cliques_by_uri[non_pair[0]] == cliques_by_uri[non_pair[1]] for non_pair in non_pairs])", "Cut chroma features to fixed-length arrays\n```Python\ndef patchwork(chroma, n_patches=7, patch_len=64):\n...\n```\nStrategy: cuttinging out n_patches equally-spaced (possibly overlapping) patches of length patch_len and stitching them back together.\nNote that this requires some extra attention as there are unusually short chroma files in the dataset:\nAround 30 files are less than 64 beats long.\nHence an exta test in which patch_len &gt; len(chroma).", "reload(paired_data)\n\n# simple array\nlen_x = 10\nn_patch, patch_len = 3, 14\n\nx_test = np.arange(len_x).reshape((-1,1))\n\nx_patches = paired_data.patchwork(x_test, n_patches=n_patch, patch_len=patch_len)\n\nassert x_patches[0] == x_test[0]\nassert x_patches[-1] == x_test[-1]\nassert len(x_patches) == n_patch * patch_len\n\n# real data\ntest_pair = pairs[0]\nchroma_1 = SHS_data.read_chroma(test_pair[0])\nchroma_2 = SHS_data.read_chroma(test_pair[1])\n\npatches_1 = paired_data.patchwork(chroma_1)\npatches_2 = paired_data.patchwork(chroma_2)\n\nassert patches_1.shape == patches_2.shape\n\n# short chroma\nn_patches = 3\npatch_len = min(len(chroma_1), len(chroma_2)) + 10\n\npatches_1 = paired_data.patchwork(chroma_1, n_patches=n_patches, patch_len=patch_len)\npatches_2 = paired_data.patchwork(chroma_2, n_patches=n_patches, patch_len=patch_len)\n \nassert np.all(patches_1.shape == patches_2.shape)\nassert patches_1.shape[0] == n_patches * patch_len", "Align chroma pitch dimension\n```Python\ndef align_pitch(chroma_1, chroma_2):\n...\n```", "a = np.array([[2,0,1,0,0,0],\n [2,0,1,0,0,0]])\n\nb = np.array([[0,0,1,0,3,0],\n [0,0,1,0,3,0]])\n\na_, b_ = paired_data.align_pitch(a, b)\n\nprint(a)\nprint(b)\nprint('\\n', b_)", "Construct a dataset of cover and non-cover 'patchworks'\nPython\ndef dataset_of_pairs(clique_dict, chroma_dict):\n...", "train_uris = util.uris_from_clique_dict(train_cliques)\nchroma_dict = SHS_data.preload_chroma(train_uris)\n\nX_1, X_2, is_cover, _ = paired_data.dataset_of_pairs(train_cliques, chroma_dict)\n\nprint(X_1.shape, X_2.shape, is_cover.shape)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
jeffcarter-github/MachineLearningLibrary
MachineLearningLibrary/NeuralNetworks/CNN_MNIST_Keras_Tensorflow.ipynb
mit
[ "This notebook walks through training a CNN Model on the MNIST data using Keras and Tensorflow...\n\nLoad Data and Reshape\nBuild Model\nTrain / Test\nBuild interactive OpenCV GUI for playing\n\nimport ploting library...", "from __future__ import print_function\nimport matplotlib.pyplot as plt\n%matplotlib notebook", "Import the MNIST dataset using the keras api", "from keras.datasets import mnist\n# load data...\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n# check dimensions...\nprint('Train: ', X_train.shape, y_train.shape)\nprint('Test: ', X_test.shape, y_test.shape)", "Looks like we have 60k images of 28, 28 pixels. These images are single-channel, i.e. black and white... If these were color images, then we would see dimensions of (60000, 28, 28, 3)... 3 channels for Red-Green-Blue (RGB) or Blue-Green-Red (BGR), depending on the order of the color channels...\nShow an image and check data...", "# select a number [0, 60000)...\nidx = 1000\n\n# plot image...\nplt.figure()\nplt.title('Number: %s'%y_train[idx])\nplt.imshow(X_train[idx], cmap='gray')", "Image Processing...\nReshape (28, 28) to (28, 28, 1)... and Normalized Image Data... from uint8 to float32 over the range [0,1]", "X_train = X_train.reshape(X_train.shape[0], X_train.shape[1], X_train.shape[2], 1).astype('float32') / 255.\nX_test = X_test.reshape(X_test.shape[0], X_test.shape[1], X_test.shape[2], 1).astype('float32') / 255.\n\nprint(X_train.shape)", "now we have explicity created a one-channel dataset... and normalized it between [0, 1]... alternatively, you might normalize it more correctly as Gaussian distributed about zero with a variance of one... this would help with training but for this example, as you'll see, it doesn't really matter...\nEncode numbers from 0-9 into 10-dimensional vectors... this is called one-hot encoding... i.e. 0 -> [1, 0, 0, ..., 0] and 1 -> [0, 1, 0, 0, ..., 0], etc.", "# import to_categorial function that does the one-hot encoding...\nfrom keras.utils import to_categorical\n\n# encode both training and testing data...\ny_train = to_categorical(y_train, 10)\ny_test = to_categorical(y_test, 10)\n\ny_train[0]", "Build CNN Model...", "from keras.models import Sequential\nfrom keras.layers import Conv2D, MaxPool2D, Dense, Dropout, Flatten\n\nimg_shape = X_train[0].shape\nprint(img_shape)\n\nmodel = Sequential()\n\n# Convolutional Section...\nmodel.add(Conv2D(32, (3, 3), activation='relu', input_shape=img_shape))\nmodel.add(Conv2D(32, (3, 3), activation='relu'))\nmodel.add(MaxPool2D((2, 2)))\nmodel.add(Dropout(rate=0.25))\nmodel.add(Conv2D(64, (3, 3), activation='relu', input_shape=img_shape))\nmodel.add(Conv2D(64, (3, 3), activation='relu'))\nmodel.add(MaxPool2D((2, 2)))\nmodel.add(Dropout(rate=0.25))\n\n# Fully Connected Section...\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu'))\nmodel.add(Dropout(rate=0.25))\nmodel.add(Dense(10, activation='softmax'))\n\nmodel.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])\n\nmodel.summary()", "Train model...", "n_epochs = 2\nmodel.fit(X_train, y_train, batch_size=32, epochs=n_epochs, verbose=True)\n\nloss, accuracy = model.evaluate(X_test, y_test, batch_size=32)\nprint('Test Accuracy: ', accuracy)\n\n# save model for retrieval at later date...\nmodel.save('./MNIST_CNN')", "Build interactive notepad...", "import cv2\nimport numpy as np", "The code below will create an OpenCV popup window... the window can be closed using the 'esc' key... 
and we can draw in the window by holding the left-mouse button and moving the mouse within the window...", "def record_location(event, x, y, flags, param):\n '''callback function that draws a circle at the point x, y...'''\n if flags == cv2.EVENT_FLAG_LBUTTON and event == cv2.EVENT_MOUSEMOVE:\n cv2.circle(img, (x,y), 10, (255, 255, 255), -1)\n\nimg = np.zeros((256, 256, 3), np.uint8)\n\ncv2.namedWindow('image')\ncv2.setMouseCallback('image', record_location)\nwhile True:\n cv2.imshow('image',img)\n k = cv2.waitKey(1) & 0xFF\n if k == 27: # 'esc' closes the window\n break\ncv2.destroyAllWindows()\n\n# copy one color channel and normalize values...\n_img = img[:,:,0] / 255.0\n# resize image to (28, 28)\n_img = cv2.resize(_img, (28, 28), interpolation=cv2.INTER_AREA).reshape(1, 28, 28, 1)\n\np = model.predict(_img)\nprint(p)\nplt.figure()\nplt.title('Guess: %s' %p.argmax())\nplt.imshow(_img[0][:,:,0], cmap='gray')" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
georgetown-analytics/yelp-classification
.ipynb_checkpoints/filter_user_reviews-checkpoint.ipynb
mit
[ "from pymongo import MongoClient\nfrom datetime import datetime\nimport json\nimport pdb\nimport csv\n\nip = '54.227.180.242'\n\nconn = MongoClient(ip, 27017)\nconn.database_names()\n\ndb = conn.get_database('cleaned_data')\n\nbiz = db.get_collection('academic_biz')\nusers = db.get_collection('academic_users')\nreviews = db.get_collection('academic_reviews')", "The business ID field has already been filtered for only restaurants\nWe want to filter the users collection for the following:\n 1. User must have at least 20 reviews\n 2. For users with 20 reviews, identify the reviews which are for businesses\n 3. For each user, keep only those reviews which are related to a business in \n the list of restaurant business IDs\n 4. Keep only users who have at least 20 reviews after finishing step 3", "#Find a list of users with at least 20 reviews\nuser_list = []\nfor user in users.find():\n if user['review_count'] >= 20:\n user_list.append(user['_id'])\n else:\n pass", "Create a new dictionary with the following structure and then export as a json object:\n{user id: [review, review, review], ..., user id: [review, review, review]}", "user_reviews = dict.fromkeys(user_list, 0)\nfor review in reviews.find():\n try:\n if user_reviews[review['_id']] == 0:\n print review['_id']\n print review\n break\n except KeyError:\n pass\n# user_reviews[review['_id']] = [review]\n# else:\n# user_reviews[review['_id']].append(review)\n \n# except KeyError:\n# pass\n\nuser_reviews[user_reviews.keys()[23]]\n\nfiltered_reviews = {}\nfor user in user_reviews.keys():\n if user_reviews[user] != 0:\n filtered_reviews[user] = user_reviews[user]\n\n#We have this many users after our filtering\nlen(filtered_reviews)\n\n#Dump file of cleaned up user data\nwith open('merged_user_reviews.json', 'w') as fp:\n json.dump(user_reviews, fp)" ]
[ "code", "markdown", "code", "markdown", "code" ]
peteWT/cec_apl
Biomass/ReadFromDBv2-Copy1.ipynb
mit
[ "This notebook is intended to show how to use pandas, and sql alchemy to upload data into DB2-switch and create geospatial coordinate and indexes.\nInstall using pip or any other package manager pandas, sqlalchemy and pg8000. The later one is the driver to connect to the db.", "import pandas as pd\nfrom sqlalchemy import create_engine", "After importing the required packages, first create the engine to connect to the DB. The approach I generally use is to create a string based on the username and password. The code is a function, you just need to fill in with the username, password and the dbname. \nIt allows you to create different engines to connect to serveral dbs.", "def connection(user,passwd,dbname, echo_i=False):\n str1 = ('postgresql+pg8000://' + user +':' + passw + '@switch-db2.erg.berkeley.edu:5432/' \n + dbname + '?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory')\n engine = create_engine(str1,echo=echo_i,isolation_level='AUTOCOMMIT')\n return engine\n\nuser = 'jdlara'\npassw = 'Amadeus-2010'\ndbname = 'apl_cec' \nengine_db= connection(user,passw,dbname)", "Afterwards, use pandas to import the data from Excel files or any other text file format. Make sure that the data in good shape before trying to push it into the server. In this example I use previous knowledge of the structure of the tabs in the excel file to recursively upload each tab and match the name of the table with the tab name. \nIf you are using csv files just change the commands to pd.read_csv() in this link you can find the documentation. \nBefore doing this I already checked that the data is properly organized, crate new cells to explore the data beforehand if needed\nexcel_file = 'substations_table.xlsx'\ntab_name = 'sheet1'\nschema_for_upload = 'geographic_data'\npd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=100)", "#excel_file = 'substations_table.xlsx'\n#tab_name = 'sheet1'\ncsv_name = ['Results_2016_wholestate_noBA.csv']\nschema_for_upload = 'lemma2016'\nfor name in csv_name:\n pd_data = pd.read_csv(name, encoding='UTF-8')\n pd_data.to_sql(name, engine_db, schema=schema_for_upload, if_exists='replace',chunksize=1000)", "Once the data is updated, it is possible to run the SQL commands to properly create geom columns in the tables, this can be done as follows. The ojective is to run an SQL querie like this: \nPGSQL\nset search_path = SCHEMA, public;\nalter table vTABLE drop column if exists geom;\nSELECT AddGeometryColumn ('SCHEMA','vTABLE','geom',4326,'POINT',2);\nUPDATE TABLE set geom = ST_SetSRID(st_makepoint(vTABLE.lon, vTABLE.lat), 4326)::geometry;\nwhere SCHEMA and vTABLE are the variable portions. Also note, that this query assumes that your columns with latitude and longitude are named lat and lon respectively; moreover, it also assumes that the coordinates are in the 4326 projection. 
\nThe following function runs the query for you, considering again that the data is clean and nice.", "def create_geom(table,schema,engine, projection=5070):\n k = engine.connect()\n query = ('set search_path = \"'+ schema +'\"'+ ', public;')\n print query\n k.execute(query)\n query = ('alter table ' + table + ' drop column if exists geom;')\n print query\n k.execute(query)\n query = 'SELECT AddGeometryColumn (\\''+ schema + '\\',\\''+ table + '\\',\\'geom\\''+',5070,\\'POINT\\',2);'\n print query\n k.execute(query)\n query = ('UPDATE ' + table + ' set geom = ST_SetSRID(st_makepoint(' + table + '.x, ' + \n table + '.y),' + str(projection) + ')::geometry;')\n k.execute(query)\n print query\n k = engine.dispose()\n return 'geom column added with SRID ' + str(projection)\n\ntable = 'results_approach2'\nschema = 'lemma2016'\ncreate_geom(table,schema,engine_db)", "The function created the geom column, the next step is to define a function to create the Primary-Key in the db. Remember that the index from the data frame is included as an index in the db, sometimes an index is not really neded and might need to be dropped.", "def create_pk(table,schema,column,engine):\n k = engine.connect()\n query = ('set search_path = \"'+ schema +'\"'+ ', public;')\n print query\n k.execute(query)\n query = ('alter table ' + table + ' ADD CONSTRAINT '+ table +'_pk PRIMARY KEY (' + column + ')')\n print query \n k.execute(query)\n k = engine.dispose()\n return 'Primary key created with column' + column\n\ncol = ''\ncreate_pk(table,schema,col,engine_db)", "The reason why we use postgis is to improve geospatial queries and provide a better data structure for geospatial operations. Many of the ST_ functions have improved performance when a geospatial index is created. The process implemented here comes from this workshop. This re-creates the process using python functions so that it can be easily replicated for many tables.\nThe query to create a geospatial index is as follows: \nSQL\nset search_path = SCHEMA, public;\nCREATE INDEX vTABLE_gix ON vTABLE USING GIST (geom);\nThis assumes that the column name with the geometry is named geom. If the process follows from the previous code, it will work ok.\nThe following step is to run a VACUUM, creating an index is not enough to allow PostgreSQL to use it effectively. VACUUMing must be performed when ever a new index is created or after a large number of UPDATEs, INSERTs or DELETEs are issued against a table. \nSQL\nVACUUM ANALYZE vTABLE;\nThe final step corresponds to CLUSTERING, this process re-orders the table according to the geospatial index we created. This ensures that records with similar attributes have a high likelihood of being found in the same page, reducing the number of pages that must be read into memory for some types of queries. When a query to find nearest neighbors or within a certain are is needed, geometries that are near each other in space are near each other on disk. 
The query to perform this clustering is as follows:\nCLUSTER vTABLE USING vTABLE_gix;\nANALYZE vTABLE;", "def create_gidx(table,schema,engine,column='geom'):\n k = engine.connect()\n query = ('set search_path = \"'+ schema +'\"'+ ', public;')\n k.execute(query)\n print query\n query = ('CREATE INDEX ' + table + '_gix ON ' + table + ' USING GIST (' + column + ');')\n k.execute(query)\n print query\n query = ('VACUUM ' + table + ';')\n k.execute(query)\n print query\n query = ('CLUSTER ' + table + ' USING ' + table + '_gix;')\n k.execute(query)\n print query\n query = ('ANALYZE ' + table + ';')\n k.execute(query)\n print query\n k = engine.dispose()\n return k\n\ncreate_gidx(table,schema,engine_db)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
johntanz/ROP
Old Code/ROPFinal160302.ipynb
gpl-2.0
[ "ROP Exam Analysis for NIRS and Pulse Ox\nFinalized notebook to combine Masimo and NIRS Data into one iPython Notebook.\nSelect ROP Subject Number and Input Times", "from ROP import *\n#Takes a little bit, wait a while.\n#ROP Number syntax: ###\n#Eye Drop syntax: HH MM HH MM HH MM\n#Exam Syntax: HH MM HH MM", "Baseline Average Calculation\nFrom first point of data collection to the first eye drops", "print 'Baseline Averages\\n', 'NIRS :\\t', avg0NIRS, '\\nPI :\\t',avg0PI, '\\nSpO2 :\\t',avg0O2,'\\nPR :\\t',avg0PR,", "First Eye Drop Avg Every 10 Sec For 5 Minutes", "print resultdrops1", "Second Eye Drop Avg Every 10 Sec For 5 Minutes", "print resultdrops2", "Third Eye Drop Avg Every 10 Sec For 5 Minutes", "print resultdrops3", "Average Every 10 Sec During ROP Exam for first 4 minutes", "print result1", "Average Every 5 Mins Hour 1-2 After ROP Exam", "print result2", "Average Every 15 Mins Hour 2-3 After ROP Exam", "print result3", "Average Every 30 Mins Hour 3-4 After ROP Exam", "print result4", "Average Every Hour 4-24 Hours Post ROP Exam", "print result5", "Mild, Moderate, and Severe Desaturation Events", "print \"Desat Counts for X mins\\n\"\nprint \"Pre Mild Desat (85-89) Count: %s\\t\" %above, \"for %s min\" %((a_len*2)/60.)\nprint \"Pre Mod Desat (81-84) Count: %s\\t\" %middle, \"for %s min\" %((m_len*2)/60.)\nprint \"Pre Sev Desat (=< 80) Count: %s\\t\" %below, \"for %s min\\n\" %((b_len*2)/60.)\n\nprint \"Post Mild Desat (85-89) Count: %s\\t\" %above2, \"for %s min\" %((a_len2*2)/60.)\nprint \"Post Mod Desat (81-84) Count: %s\\t\" %middle2, \"for %s min\" %((m_len2*2)/60.)\nprint \"Post Sev Desat (=< 80) Count: %s\\t\" %below2, \"for %s min\\n\" %((b_len2*2)/60.)\n\nprint \"Data Recording Time!\"\nprint '*' * 10\nprint \"Pre-Exam Data Recording Length\\t\", X - Y # start of exam - first data point\nprint \"Post-Exam Data Recording Length\\t\", Q - Z #last data point - end of exam\nprint \"Total Data Recording Length\\t\", Q - Y #last data point - first data point" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
turbomanage/training-data-analyst
courses/machine_learning/deepdive2/feature_engineering/labs/4_keras_adv_feat_eng-lab.ipynb
apache-2.0
[ "LAB04: Advanced Feature Engineering in Keras\nLearning Objectives\n\nProcess temporal feature columns in Keras\nUse Lambda layers to perform feature engineering on geolocation features \nCreate bucketized and crossed feature columns\n\nIntroduction\nIn this notebook, we use Keras to build a taxifare price prediction model and utilize feature engineering to improve the fare amount prediction for NYC taxi cab rides. \nEach learning objective will correspond to a #TODO in the notebook where you will complete the notebook cell's code before running. Refer to the solution for reference. \nSet up environment variables and load necessary libraries\nWe will start by importing the necessary libraries for this lab.", "import datetime\nimport logging\nimport os\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow import feature_column as fc\nfrom tensorflow.keras import layers\nfrom tensorflow.keras import models\n\n# set TF error log verbosity\nlogging.getLogger(\"tensorflow\").setLevel(logging.ERROR)\n\nprint(tf.version.VERSION)", "Load taxifare dataset\nThe Taxi Fare dataset for this lab is 106,545 rows and has been pre-processed and split for use in this lab. Note that the dataset is the same as used in the Big Query feature engineering labs. The fare_amount is the target, the continuous value we’ll train a model to predict. \nFirst, let's download the .csv data by copying the data from a cloud storage bucket.", "if not os.path.isdir(\"../data\"):\n os.makedirs(\"../data\")\n\n!gsutil cp gs://cloud-training-demos/feat_eng/data/*.csv ../data", "Let's check that the files were copied correctly and look like we expect them to.", "!ls -l ../data/*.csv\n\n!head ../data/*.csv", "Create an input pipeline\nTypically, you will use a two step proces to build the pipeline. Step one is to define the columns of data; i.e., which column we're predicting for, and the default values. Step 2 is to define two functions - a function to define the features and label you want to use and a function to load the training data. Also, note that pickup_datetime is a string and we will need to handle this in our feature engineered model.", "CSV_COLUMNS = [\n 'fare_amount',\n 'pickup_datetime',\n 'pickup_longitude',\n 'pickup_latitude',\n 'dropoff_longitude',\n 'dropoff_latitude',\n 'passenger_count',\n 'key',\n]\nLABEL_COLUMN = 'fare_amount'\nSTRING_COLS = ['pickup_datetime']\nNUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',\n 'dropoff_longitude', 'dropoff_latitude',\n 'passenger_count']\nDEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]\nDAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']\n\n# A function to define features and labesl\ndef features_and_labels(row_data):\n for unwanted_col in ['key']:\n row_data.pop(unwanted_col)\n label = row_data.pop(LABEL_COLUMN)\n return row_data, label\n\n\n# A utility method to create a tf.data dataset from a Pandas Dataframe\ndef load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):\n dataset = tf.data.experimental.make_csv_dataset(pattern,\n batch_size,\n CSV_COLUMNS,\n DEFAULTS)\n dataset = dataset.map(features_and_labels) # features, label\n if mode == tf.estimator.ModeKeys.TRAIN:\n dataset = dataset.shuffle(1000).repeat()\n # take advantage of multi-threading; 1=AUTOTUNE\n dataset = dataset.prefetch(1)\n return dataset", "Create a Baseline DNN Model in Keras\nNow let's build the Deep Neural Network (DNN) model in Keras using the functional API. 
Unlike the sequential API, we will need to specify the input and hidden layers. Note that we are creating a linear regression baseline model with no feature engineering. Recall that a baseline model is a solution to a problem without applying any machine learning techniques.", "# Build a simple Keras DNN using its Functional API\ndef rmse(y_true, y_pred): # Root mean square error\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model():\n # input layer\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n\n # feature_columns\n feature_columns = {\n colname: fc.numeric_column(colname)\n for colname in NUMERIC_COLS\n }\n\n # Constructor for DenseFeatures takes a list of numeric columns\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)\n\n # two hidden layers of [32, 8] just in like the BQML DNN\n h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)\n h2 = layers.Dense(8, activation='relu', name='h2')(h1)\n\n # final output is a linear activation because this is regression\n output = layers.Dense(1, activation='linear', name='fare')(h2)\n model = models.Model(inputs, output)\n\n # compile model\n model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])\n\n return model", "We'll build our DNN model and inspect the model architecture.", "model = build_dnn_model()\n\ntf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')", "Train the model\nTo train the model, simply call model.fit(). Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.\nWe start by setting up the environment variables for training, creating the input pipeline datasets, and then train our baseline DNN model.", "TRAIN_BATCH_SIZE = 32 \nNUM_TRAIN_EXAMPLES = 59621 * 5\nNUM_EVALS = 5\nNUM_EVAL_EXAMPLES = 14906\n\ntrainds = load_dataset('../data/taxi-train*',\n TRAIN_BATCH_SIZE,\n tf.estimator.ModeKeys.TRAIN)\nevalds = load_dataset('../data/taxi-valid*',\n 1000,\n tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nhistory = model.fit(trainds,\n validation_data=evalds,\n epochs=NUM_EVALS,\n steps_per_epoch=steps_per_epoch)", "Visualize the model loss curve\nNext, we will use matplotlib to draw the model's loss curves for training and validation. A line plot is also created showing the mean squared error loss over the training epochs for both the train (blue) and test (orange) sets.", "def plot_curves(history, metrics):\n nrows = 1\n ncols = 2\n fig = plt.figure(figsize=(10, 5))\n\n for idx, key in enumerate(metrics): \n ax = fig.add_subplot(nrows, ncols, idx+1)\n plt.plot(history.history[key])\n plt.plot(history.history['val_{}'.format(key)])\n plt.title('model {}'.format(key))\n plt.ylabel(key)\n plt.xlabel('epoch')\n plt.legend(['train', 'validation'], loc='upper left'); \n\nplot_curves(history, ['loss', 'mse'])", "Predict with the model locally\nTo predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for. 
Next we note the fare price at this geolocation and pickup_datetime.", "model.predict({\n 'pickup_longitude': tf.convert_to_tensor([-73.982683]),\n 'pickup_latitude': tf.convert_to_tensor([40.742104]),\n 'dropoff_longitude': tf.convert_to_tensor([-73.983766]),\n 'dropoff_latitude': tf.convert_to_tensor([40.755174]),\n 'passenger_count': tf.convert_to_tensor([3.0]),\n 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),\n}, steps=1)", "Improve Model Performance Using Feature Engineering\nWe now improve our model's performance by creating the following feature engineering types: Temporal, Categorical, and Geolocation. \nTemporal Feature Columns\nWe incorporate the temporal feature pickup_datetime. As noted earlier, pickup_datetime is a string and we will need to handle this within the model. First, you will include the pickup_datetime as a feature and then you will need to modify the model to handle our string feature.", "# TODO 1a\n\n\n\n# TODO 1b\n\n\n\n# TODO 1c\n", "Geolocation/Coordinate Feature Columns\nThe pick-up/drop-off longitude and latitude data are crucial to predicting the fare amount as fare amounts in NYC taxis are largely determined by the distance traveled. As such, we need to teach the model the Euclidean distance between the pick-up and drop-off points.\nRecall that latitude and longitude allows us to specify any location on Earth using a set of coordinates. In our training data set, we restricted our data points to only pickups and drop offs within NYC. New York city has an approximate longitude range of -74.05 to -73.75 and a latitude range of 40.63 to 40.85.\nComputing Euclidean distance\nThe dataset contains information regarding the pickup and drop off coordinates. However, there is no information regarding the distance between the pickup and drop off points. Therefore, we create a new feature that calculates the distance between each pair of pickup and drop off points. We can do this using the Euclidean Distance, which is the straight-line distance between any two coordinate points.", "# TODO 2\n", "Scaling latitude and longitude\nIt is very important for numerical variables to get scaled before they are \"fed\" into the neural network. Here we use min-max scaling (also called normalization) on the geolocation fetures. Later in our model, you will see that these values are shifted and rescaled so that they end up ranging from 0 to 1.\nFirst, we create a function named 'scale_longitude', where we pass in all the longitudinal values and add 78 to each value. Note that our scaling longitude ranges from -70 to -78. Thus, the value 78 is the maximum longitudinal value. The delta or difference between -70 and -78 is 8. We add 78 to each longitidunal value and then divide by 8 to return a scaled value.", "def scale_longitude(lon_column):\n return (lon_column + 78)/8.", "Next, we create a function named 'scale_latitude', where we pass in all the latitudinal values and subtract 37 from each value. Note that our scaling latitude ranges from -37 to -45. Thus, the value 37 is the minimal latitudinal value. The delta or difference between -37 and -45 is 8. We subtract 37 from each latitudinal value and then divide by 8 to return a scaled value.", "def scale_latitude(lat_column):\n return (lat_column - 37)/8.", "Putting it all together\nWe now create two new \"geo\" functions for our model. We create a function called \"euclidean\" to initialize our geolocation parameters. We then create a function called transform. 
The transform function passes our numerical and string column features as inputs to the model, scales geolocation features, then creates the Euclian distance as a transformed variable with the geolocation features. Lastly, we bucketize the latitude and longitude features.", "def transform(inputs, numeric_cols, string_cols, nbuckets):\n print(\"Inputs before features transformation: {}\".format(inputs.keys()))\n\n # Pass-through columns\n transformed = inputs.copy()\n del transformed['pickup_datetime']\n\n feature_columns = {\n colname: tf.feature_column.numeric_column(colname)\n for colname in numeric_cols\n }\n\n # Scaling longitude from range [-70, -78] to [0, 1]\n for lon_col in ['pickup_longitude', 'dropoff_longitude']:\n transformed[lon_col] = layers.Lambda(\n scale_longitude,\n name=\"scale_{}\".format(lon_col))(inputs[lon_col])\n\n # Scaling latitude from range [37, 45] to [0, 1]\n for lat_col in ['pickup_latitude', 'dropoff_latitude']:\n transformed[lat_col] = layers.Lambda(\n scale_latitude,\n name='scale_{}'.format(lat_col))(inputs[lat_col])\n \n\n # TODO 2 - Your code here\n # add Euclidean distance\n \n\n\n\n # TODO 3 - Your code here\n # create bucketized features\n \n \n \n \n # TODO 3\n # create crossed columns\n \n \n \n \n # create embedding columns\n feature_columns['pickup_and_dropoff'] = fc.embedding_column(pd_pair, 100)\n\n print(\"Transformed features: {}\".format(transformed.keys()))\n print(\"Feature columns: {}\".format(feature_columns.keys()))\n return transformed, feature_columns", "Next, we'll create our DNN model now with the engineered features. We'll set NBUCKETS = 10 to specify 10 buckets when bucketizing the latitude and longitude.", "NBUCKETS = 10\n\n\n# DNN MODEL\ndef rmse(y_true, y_pred):\n return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))\n\n\ndef build_dnn_model():\n # input layer is all float except for pickup_datetime which is a string\n inputs = {\n colname: layers.Input(name=colname, shape=(), dtype='float32')\n for colname in NUMERIC_COLS\n }\n inputs.update({\n colname: tf.keras.layers.Input(name=colname, shape=(), dtype='string')\n for colname in STRING_COLS\n })\n\n # transforms\n transformed, feature_columns = transform(inputs,\n numeric_cols=NUMERIC_COLS,\n string_cols=STRING_COLS,\n nbuckets=NBUCKETS)\n dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)\n\n # two hidden layers of [32, 8] just in like the BQML DNN\n h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)\n h2 = layers.Dense(8, activation='relu', name='h2')(h1)\n\n # final output is a linear activation because this is regression\n output = layers.Dense(1, activation='linear', name='fare')(h2)\n model = models.Model(inputs, output)\n\n # Compile model\n model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])\n return model\n\nmodel = build_dnn_model()", "Let's see how our model architecture has changed now.", "tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')\n\ntrainds = load_dataset('../data/taxi-train*',\n TRAIN_BATCH_SIZE,\n tf.estimator.ModeKeys.TRAIN)\nevalds = load_dataset('../data/taxi-valid*',\n 1000,\n tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)\n\nsteps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)\n\nhistory = model.fit(trainds,\n validation_data=evalds,\n epochs=NUM_EVALS+3,\n steps_per_epoch=steps_per_epoch)", "As before, let's visualize the DNN model layers.", "plot_curves(history, ['loss', 'mse'])", "Let's a prediction with this new 
model with engineered features on the example we had above.", "model.predict({\n 'pickup_longitude': tf.convert_to_tensor([-73.982683]),\n 'pickup_latitude': tf.convert_to_tensor([40.742104]),\n 'dropoff_longitude': tf.convert_to_tensor([-73.983766]),\n 'dropoff_latitude': tf.convert_to_tensor([40.755174]),\n 'passenger_count': tf.convert_to_tensor([3.0]),\n 'pickup_datetime': tf.convert_to_tensor(['2010-02-08 09:17:00 UTC'], dtype=tf.string),\n}, steps=1)", "Below we summarize our training results comparing our baseline model with our model with engineered features.\n| Model | Taxi Fare | Description |\n|--------------------|-----------|-------------------------------------------|\n| Baseline | value? | Baseline model - no feature engineering |\n| Feature Engineered | value? | Feature Engineered Model |\nCopyright 2020 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
irockafe/revo_healthcare
notebooks/MTBLS315/exploratory/Old_notebooks/MTBLS315_uhplc_pos_classifer-3.5ppm_data.ipynb
mit
[ "<h2> 3.5ppm </h2>\nNot many retention-correction peaks detected. should switch to linear smoothing...?", "import time\n\nimport pandas as pd\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn import preprocessing\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.cross_validation import StratifiedShuffleSplit\nfrom sklearn.cross_validation import cross_val_score\n#from sklearn.model_selection import StratifiedShuffleSplit\n#from sklearn.model_selection import cross_val_score\n\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.metrics import roc_curve, auc\nfrom sklearn.utils import shuffle\n\nfrom scipy import interp\n\n%matplotlib inline\n\ndef remove_zero_columns(X, threshold=1e-20):\n # convert zeros to nan, drop all nan columns, the replace leftover nan with zeros\n X_non_zero_colum = X.replace(0, np.nan).dropna(how='all', axis=1).replace(np.nan, 0)\n #.dropna(how='all', axis=0).replace(np.nan,0)\n return X_non_zero_colum\n\ndef zero_fill_half_min(X, threshold=1e-20):\n # Fill zeros with 1/2 the minimum value of that column\n # input dataframe. Add only to zero values\n \n # Get a vector of 1/2 minimum values\n half_min = X[X > threshold].min(axis=0)*0.5\n \n # Add the half_min values to a dataframe where everything that isn't zero is NaN.\n # then convert NaN's to 0\n fill_vals = (X[X < threshold] + half_min).fillna(value=0)\n \n # Add the original dataframe to the dataframe of zeros and fill-values\n X_zeros_filled = X + fill_vals\n return X_zeros_filled\n\n\n\ntoy = pd.DataFrame([[1,2,3,0],\n [0,0,0,0],\n [0.5,1,0,0]], dtype=float)\n\ntoy_no_zeros = remove_zero_columns(toy)\ntoy_filled_zeros = zero_fill_half_min(toy_no_zeros)\nprint toy\nprint toy_no_zeros\nprint toy_filled_zeros", "<h2> Import the dataframe and remove any features that are all zero </h2>", "### Subdivide the data into a feature table\ndata_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/processed/MTBLS315/'\\\n'uhplc_pos/xcms_result_3.5.csv'\n## Import the data and remove extraneous columns\ndf = pd.read_csv(data_path, index_col=0)\ndf.shape\ndf.head()\n# Make a new index of mz:rt\nmz = df.loc[:,\"mz\"].astype('str')\nrt = df.loc[:,\"rt\"].astype('str')\nidx = mz+':'+rt\ndf.index = idx\ndf\n# separate samples from xcms/camera things to make feature table\nnot_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax', \n 'npeaks', 'uhplc_pos', \n ]\nsamples_list = df.columns.difference(not_samples)\nmz_rt_df = df[not_samples]\n\n# convert to samples x features\nX_df_raw = df[samples_list].T\n# Remove zero-full columns and fill zeroes with 1/2 minimum values\nX_df = remove_zero_columns(X_df_raw)\nX_df_zero_filled = zero_fill_half_min(X_df)\n\nprint \"original shape: %s \\n# zeros: %f\\n\" % (X_df_raw.shape, (X_df_raw < 1e-20).sum().sum())\nprint \"zero-columns repalced? 
shape: %s \\n# zeros: %f\\n\" % (X_df.shape, \n (X_df < 1e-20).sum().sum())\nprint \"zeros filled shape: %s \\n#zeros: %f\\n\" % (X_df_zero_filled.shape, \n (X_df_zero_filled < 1e-20).sum().sum())\n\n\n# Convert to numpy matrix to play nicely with sklearn\nX = X_df.as_matrix()\nprint X.shape\n", "<h2> Get mappings between sample names, file names, and sample classes </h2>", "# Get mapping between sample name and assay names\npath_sample_name_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\\\n 'MTBLS315/metadata/a_UPLC_POS_nmfi_and_bsi_diagnosis.txt'\n# Index is the sample name\nsample_df = pd.read_csv(path_sample_name_map, \n sep='\\t', index_col=0)\nsample_df = sample_df['MS Assay Name']\nsample_df.shape\nprint sample_df.head(10)\n\n# get mapping between sample name and sample class\npath_sample_class_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\\\n 'MTBLS315/metadata/s_NMFI and BSI diagnosis.txt'\nclass_df = pd.read_csv(path_sample_class_map,\n sep='\\t')\n# Set index as sample name\nclass_df.set_index('Sample Name', inplace=True)\nclass_df = class_df['Factor Value[patient group]']\nprint class_df.head(10)\n\n# convert all non-malarial classes into a single classes \n# (collapse non-malarial febril illness and bacteremia together)\nclass_map_df = pd.concat([sample_df, class_df], axis=1)\nclass_map_df.rename(columns={'Factor Value[patient group]': 'class'}, inplace=True)\nclass_map_df\n\nbinary_class_map = class_map_df.replace(to_replace=['non-malarial febrile illness', 'bacterial bloodstream infection' ], \n value='non-malarial fever')\n\nbinary_class_map\n\n\n# convert classes to numbers\nle = preprocessing.LabelEncoder()\nle.fit(binary_class_map['class'])\ny = le.transform(binary_class_map['class'])", "<h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>\nTurns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUC's instead. You want to give your classifier a lower score if it is really confident and wrong, than vice-versa", "def rf_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,\n n_estimators=1000):\n cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, \n random_state=random_state)\n clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)\n\n scores = cross_val_score(clf, X, y, cv=cross_val_skf)\n\n sns.violinplot(scores,inner='stick')\n\nrf_violinplot(X,y)\n\n\n# TODO - Switch to using caret for this bs..?\n\n# Do multi-fold cross validation for adaboost classifier\ndef adaboost_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,\n n_estimators=200):\n cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)\n clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)\n\n scores = cross_val_score(clf, X, y, cv=cross_val_skf)\n\n sns.violinplot(scores,inner='stick')\n\nadaboost_violinplot(X,y)\n\n# TODO PQN normalization, and log-transformation, \n# and some feature selection (above certain threshold of intensity, use principal components), et\n\ndef pqn_normalize(X, integral_first=False, plot=False):\n '''\n Take a feature table and run PQN normalization on it\n '''\n # normalize by sum of intensities in each sample first. 
Not necessary\n if integral_first: \n sample_sums = np.sum(X, axis=1)\n X = (X / sample_sums[:,np.newaxis])\n \n # Get the median value of each feature across all samples\n mean_intensities = np.median(X, axis=0)\n \n # Divde each feature by the median value of each feature - \n # these are the quotients for each feature\n X_quotients = (X / mean_intensities[np.newaxis,:])\n \n if plot: # plot the distribution of quotients from one sample\n for i in range(1,len(X_quotients[:,1])):\n print 'allquotients reshaped!\\n\\n', \n #all_quotients = X_quotients.reshape(np.prod(X_quotients.shape))\n all_quotients = X_quotients[i,:]\n print all_quotients.shape\n x = np.random.normal(loc=0, scale=1, size=len(all_quotients))\n sns.violinplot(all_quotients)\n plt.title(\"median val: %f\\nMax val=%f\" % (np.median(all_quotients), np.max(all_quotients)))\n plt.plot( title=\"median val: \")#%f\" % np.median(all_quotients))\n plt.xlim([-0.5, 5])\n plt.show()\n\n # Define a quotient for each sample as the median of the feature-specific quotients\n # in that sample\n sample_quotients = np.median(X_quotients, axis=1)\n \n # Quotient normalize each samples\n X_pqn = X / sample_quotients[:,np.newaxis]\n return X_pqn\n\n# Make a fake sample, with 2 samples at 1x and 2x dilutions\nX_toy = np.array([[1,1,1,],\n [2,2,2],\n [3,6,9],\n [6,12,18]], dtype=float)\nprint X_toy\nprint X_toy.reshape(1, np.prod(X_toy.shape))\nX_toy_pqn_int = pqn_normalize(X_toy, integral_first=True, plot=True)\nprint X_toy_pqn_int\n\nprint '\\n\\n\\n'\nX_toy_pqn = pqn_normalize(X_toy)\nprint X_toy_pqn", "<h2> pqn normalize your features </h2>", "X_pqn = pqn_normalize(X)\nprint X_pqn", "<h2>Random Forest & adaBoost with PQN-normalized data</h2>", "rf_violinplot(X_pqn, y)\n\n# Do multi-fold cross validation for adaboost classifier\nadaboost_violinplot(X_pqn, y)", "<h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>\nTurns out a monotonic transformation doesn't really affect any of these things. \nI guess they're already close to unit varinace...?", "X_pqn_nlog = np.log(X_pqn)\nrf_violinplot(X_pqn_nlog, y)\n\nadaboost_violinplot(X_pqn_nlog, y)\n\ndef roc_curve_cv(X, y, clf, cross_val,\n path='/home/irockafe/Desktop/roc.pdf',\n save=False, plot=True): \n t1 = time.time()\n # collect vals for the ROC curves\n tpr_list = []\n mean_fpr = np.linspace(0,1,100)\n auc_list = []\n \n # Get the false-positive and true-positive rate\n for i, (train, test) in enumerate(cross_val):\n clf.fit(X[train], y[train])\n y_pred = clf.predict_proba(X[test])[:,1]\n \n # get fpr, tpr\n fpr, tpr, thresholds = roc_curve(y[test], y_pred)\n roc_auc = auc(fpr, tpr)\n #print 'AUC', roc_auc\n #sns.plt.plot(fpr, tpr, lw=10, alpha=0.6, label='ROC - AUC = %0.2f' % roc_auc,)\n #sns.plt.show()\n tpr_list.append(interp(mean_fpr, fpr, tpr))\n tpr_list[-1][0] = 0.0\n auc_list.append(roc_auc)\n \n if (i % 10 == 0):\n print '{perc}% done! 
{time}s elapsed'.format(perc=100*float(i)/cross_val.n_iter, time=(time.time() - t1))\n \n \n \n \n # get mean tpr and fpr\n mean_tpr = np.mean(tpr_list, axis=0)\n # make sure it ends up at 1.0\n mean_tpr[-1] = 1.0\n mean_auc = auc(mean_fpr, mean_tpr)\n std_auc = np.std(auc_list)\n \n if plot:\n # plot mean auc\n plt.plot(mean_fpr, mean_tpr, label='Mean ROC - AUC = %0.2f $\\pm$ %0.2f' % (mean_auc, \n std_auc),\n lw=5, color='b')\n\n # plot luck-line\n plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',\n label='Luck', alpha=0.5) \n\n # plot 1-std\n std_tpr = np.std(tpr_list, axis=0)\n tprs_upper = np.minimum(mean_tpr + std_tpr, 1)\n tprs_lower = np.maximum(mean_tpr - std_tpr, 0)\n plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.2,\n label=r'$\\pm$ 1 stdev')\n\n plt.xlim([-0.05, 1.05])\n plt.ylim([-0.05, 1.05])\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('ROC curve, {iters} iterations of {cv} cross validation'.format(\n iters=cross_val.n_iter, cv='{train}:{test}'.format(test=cross_val.test_size, train=(1-cross_val.test_size)))\n )\n plt.legend(loc=\"lower right\")\n\n if save:\n plt.savefig(path, format='pdf')\n\n\n plt.show()\n return tpr_list, auc_list, mean_fpr\n\n\n\nrf_estimators = 1000\nn_iter = 3\ntest_size = 0.3\nrandom_state = 1\ncross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)\nclf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)\nrf_graph_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\\\nisaac_feature_tables/uhplc_pos/rf_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=rf_estimators, cv=n_iter)\n\nprint cross_val_rf.n_iter\nprint cross_val_rf.test_size\n\ntpr_vals, auc_vals, mean_fpr = roc_curve_cv(X_pqn, y, clf_rf, cross_val_rf,\n path=rf_graph_path, save=False)\n\n# For adaboosted\nn_iter = 3\ntest_size = 0.3\nrandom_state = 1\nadaboost_estimators = 200\nadaboost_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\\\nisaac_feature_tables/uhplc_pos/adaboost_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=adaboost_estimators, \n cv=n_iter)\n\n\ncross_val_adaboost = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)\nclf = AdaBoostClassifier(n_estimators=adaboost_estimators, random_state=random_state)\nadaboost_tpr, adaboost_auc, adaboost_fpr = roc_curve_cv(X_pqn, y, clf, cross_val_adaboost,\n path=adaboost_path)", "<h2> Great, you can classify things. But make null models and do a sanity check to make \nsure you arent just classifying garbage </h2>", "# Make a null model AUC curve\n\ndef make_null_model(X, y, clf, cross_val, random_state=1, num_shuffles=5, plot=True):\n '''\n Runs the true model, then sanity-checks by:\n \n Shuffles class labels and then builds cross-validated ROC curves from them.\n Compares true AUC vs. 
shuffled auc by t-test (assumes normality of AUC curve)\n '''\n null_aucs = []\n print y.shape\n print X.shape\n tpr_true, auc_true, fpr_true = roc_curve_cv(X, y, clf, cross_val)\n # shuffle y lots of times\n for i in range(0, num_shuffles):\n #Iterate through the shuffled y vals and repeat with appropriate params\n # Retain the auc vals for final plotting of distribution\n y_shuffle = shuffle(y)\n cross_val.y = y_shuffle\n cross_val.y_indices = y_shuffle\n print 'Number of differences b/t original and shuffle: %s' % (y == cross_val.y).sum()\n # Get auc values for number of iterations\n tpr, auc, fpr = roc_curve_cv(X, y_shuffle, clf, cross_val, plot=False)\n \n null_aucs.append(auc)\n \n \n #plot the outcome\n if plot:\n flattened_aucs = [j for i in null_aucs for j in i]\n my_dict = {'true_auc': auc_true, 'null_auc': flattened_aucs}\n df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T\n df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],\n value_name='auc', var_name='AUC_type')\n #print flattened_aucs\n sns.violinplot(x='AUC_type', y='auc',\n inner='points', data=df_tidy)\n # Plot distribution of AUC vals \n plt.title(\"Distribution of aucs\")\n #sns.plt.ylabel('count')\n plt.xlabel('AUC')\n #sns.plt.plot(auc_true, 0, color='red', markersize=10)\n plt.show()\n # Do a quick t-test to see if odds of randomly getting an AUC that good\n return auc_true, null_aucs\n\n\n# Make a null model AUC curve & compare it to null-model\n\n# Random forest magic!\nrf_estimators = 1000\nn_iter = 50\ntest_size = 0.3\nrandom_state = 1\ncross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)\nclf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)\n\ntrue_auc, all_aucs = make_null_model(X_pqn, y, clf_rf, cross_val_rf, num_shuffles=5)\n\n# make dataframe from true and false aucs\nflattened_aucs = [j for i in all_aucs for j in i]\nmy_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}\ndf_poop = pd.DataFrame.from_dict(my_dict, orient='index').T\ndf_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],\n value_name='auc', var_name='AUC_type')\nprint df_tidy.head()\n#print flattened_aucs\nsns.violinplot(x='AUC_type', y='auc',\n inner='points', data=df_tidy, bw=0.7)\nplt.show()\n\n", "<h2> Let's check out some PCA plots </h2>", "from sklearn.decomposition import PCA\n\n# Check PCA of things\ndef PCA_plot(X, y, n_components, plot_color, class_nums, class_names, title='PCA'):\n pca = PCA(n_components=n_components)\n X_pca = pca.fit(X).transform(X)\n\n print zip(plot_color, class_nums, class_names)\n for color, i, target_name in zip(plot_color, class_nums, class_names):\n \n # plot one class at a time, first plot all classes y == 0\n #print color\n #print y == i\n xvals = X_pca[y == i, 0]\n print xvals.shape\n yvals = X_pca[y == i, 1]\n plt.scatter(xvals, yvals, color=color, alpha=0.8, label=target_name)\n\n plt.legend(bbox_to_anchor=(1.01,1), loc='upper left', shadow=False)#, scatterpoints=1)\n plt.title('PCA of Malaria data')\n plt.show()\n\n\nPCA_plot(X_pqn, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])\nPCA_plot(X, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])", "<h2> What about with all thre classes? 
</h2>", "# convert classes to numbers\nle = preprocessing.LabelEncoder()\nle.fit(class_map_df['class'])\ny_three_class = le.transform(class_map_df['class'])\nprint class_map_df.head(10)\nprint y_three_class\nprint X.shape\nprint y_three_class.shape\n\ny_labels = np.sort(class_map_df['class'].unique())\nprint y_labels\ncolors = ['green', 'red', 'blue']\n\nprint np.unique(y_three_class)\nPCA_plot(X_pqn, y_three_class, 2, colors, np.unique(y_three_class), y_labels)\nPCA_plot(X, y_three_class, 2, colors, np.unique(y_three_class), y_labels)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dstrockis/outlook-autocategories
notebooks/2-Trying out classification models.ipynb
apache-2.0
[ "Hypothesis\nNon-linear models will be more accurate than a logistic regression model", "# Load data\nimport pandas as pd\nwith open('./data_files/8lWZYw-u-yNbGBkC4B--ip77K1oVwwyZTHKLeD7rm7k.csv') as data_file:\n df = pd.read_csv(data_file)\ndf.head()", "Comparing classification models\n\n\nDo some preprocessing on the text columns (subject, body, maybe to, cc, from)\nClean NaN's or remove rows of data with NaNs\nDo stuff the Preprocess Text Azure module does for us (stopwords, etc)\nUse scikit learn where possible\n\n\nDo some feature construction using pandas & scikit learn\nOn subject, to, cc, from\nBag of words\nTF/IDF\n\n\nOne-Hot Encode FolderId labels into their own boolean columns (1s & 0s)\nIgnore better features for now, this is good enough for comparisions\nSplit data into training & test sets to be used for all ensemble members\nFor each classifier, train a model on the training data\nEvaluate performance of model on test data, compare to Logistic Regression model\n\nConstructing Subject Feature Matrix", "# Remove messages without a Subject\nprint df.shape\ndf = df.dropna(subset=['Subject'])\nprint df.shape\n\n# Perform bag of words feature extraction\n# TODO: Why are there only 3000 words in the vocabulary?\nfrom sklearn.feature_extraction.text import CountVectorizer\ncount_vect = CountVectorizer(stop_words='english', lowercase=True)\ntrain_counts = count_vect.fit_transform(df['Subject'])\nprint 'Dimensions of vocabulary feature matrix are:'\nprint train_counts.shape\n\n# Add TF/IDF weighting to account for lenght of documents\nfrom sklearn.feature_extraction.text import TfidfTransformer\ntfidf_transformer = TfidfTransformer()\ntrain_tfidf = tfidf_transformer.fit_transform(train_counts)\nprint 'Dimensions of vocabulary feature matrix are:'\nprint train_tfidf.shape\nprint 'But, its a sparse matrix: ' + str(type(train_tfidf))", "Constructing CC, To, and From", "# Merge CC, To, From into one People column\ndf['CcRecipients'].fillna('', inplace=True)\ndf['ToRecipients'].fillna('', inplace=True)\ndf['Sender'].fillna('', inplace=True)\ndf['People'] = df['Sender'] + ';' + df['CcRecipients'] + ';' + df['ToRecipients']\ndf.head(10)\n\n# Convert People to matrix representation\npeople_features = df['People'].str.get_dummies(sep=';')\nprint people_features.shape\npeople_features.head()\n\n# Will need to store people vocabulary for feature construction during predictions\npeople_vocabulary = people_features.columns\nprint people_vocabulary[:2]\nprint len(people_vocabulary)\n\n# Convert to csr_matrix and hstack with Subject feature matrix\nimport scipy\nsparse_people_features = scipy.sparse.csr_matrix(people_features)\nprint people_features.shape\nprint sparse_people_features.shape\n\nprint sparse_people_features.shape\nprint train_tfidf.shape\nfeature_matrix = scipy.sparse.hstack([sparse_people_features, train_tfidf])\nprint feature_matrix.shape", "Train models & compare accuracies", "# Split into test and training data sets\nfrom sklearn.model_selection import train_test_split\nlabels_train, labels_test, features_train, features_test = train_test_split(df['FolderId'], feature_matrix, test_size=0.20, random_state=42)\nprint labels_train.shape\nprint labels_test.shape\nprint features_train.shape\nprint features_test.shape\n\n# Construct a list of classifiers\nfrom sklearn.neural_network import MLPClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, 
AdaBoostClassifier\n\nnames = [\n \"Nearest Neighbors\", \n \"Linear SVM\", \n \"Decision Tree\", \n \"Random Forest\", \n \"Neural Net\", \n \"AdaBoost\",\n]\n\ncandidate_classifiers = [\n KNeighborsClassifier(),\n SVC(kernel='linear', C=0.025),\n DecisionTreeClassifier(max_depth=5),\n RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),\n MLPClassifier(alpha=1),\n AdaBoostClassifier(),\n]\n\n# Train and evaluate models, compare accuracy\nfrom sklearn import metrics\nfor name, clf in zip(names, candidate_classifiers):\n model = clf.fit(features_train, labels_train)\n predictions = model.predict(features_test)\n print name + \": \" + str(metrics.accuracy_score(labels_test, predictions))\n\n# Construct a list of classifiers\nfrom sklearn.svm import SVC\nfrom sklearn.gaussian_process import GaussianProcessClassifier\nfrom sklearn.gaussian_process.kernels import RBF\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis\n\ndense_names = [\n \"RBF SVM\", \n# \"Gaussian Process\", # Taking way too long\n \"Naive Bayes\",\n# \"QDA\" # Didn't work for classes with only one sample\n]\n\ncandidate_dense_classifiers = [\n SVC(gamma=2, C=1),\n# GaussianProcessClassifier(1.0 * RBF(1.0), warm_start=True),\n GaussianNB(),\n# QuadraticDiscriminantAnalysis()\n]\n\n# Train and evaluate models using dense feature matrix, compare accuracy\nfrom sklearn import metrics\ndense_features_train = features_train.toarray()\ndense_features_test = features_test.toarray()\nfor name, clf in zip (dense_names, candidate_dense_classifiers):\n model = clf.fit(dense_features_train, labels_train)\n predictions = model.predict(dense_features_test)\n print name + \": \" + str(metrics.accuracy_score(labels_test, predictions))", "Conclusions\n\nModels which probably deserve more investigation & tuning (in order):\nMultiple logistic regression\nNaive Bayes\nNearest neighbors\nNeural networks\n\n\nDecision trees don't seem to perform well at all (could be my fault though?)\nSupport vector machines are close, but significantly worse than the above\nNext steps: focus on quality of feature construction" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Alexoner/skynet
notebooks/knn.ipynb
mit
[ "k-Nearest Neighbor (kNN) exercise\nComplete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.\nThe kNN classifier consists of two stages:\n\nDuring training, the classifier takes the training data and simply remembers it\nDuring testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples\nThe value of k is cross-validated\n\nIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.", "# Run some setup code for this notebook.\n\nimport random\nimport numpy as np\nfrom skynet.utils.data_utils import load_CIFAR10\nimport matplotlib.pyplot as plt\n\n# This is a bit of magic to make matplotlib figures appear inline in the notebook\n# rather than in a new window.\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# Some more magic so that the notebook will reload external python modules;\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\n# Load the raw CIFAR-10 data.\ncifar10_dir = '../skynet/datasets/cifar-10-batches-py'\nX_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)\n\n# As a sanity check, we print out the size of the training and test data.\nprint('Training data shape: ', X_train.shape)\nprint('Training labels shape: ', y_train.shape)\nprint('Test data shape: ', X_test.shape)\nprint('Test labels shape: ', y_test.shape)\n\n# Visualize some examples from the dataset.\n# We show a few examples of training images from each class.\nclasses = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']\nnum_classes = len(classes)\nsamples_per_class = 7\nfor y, cls in enumerate(classes):\n idxs = np.flatnonzero(y_train == y)\n idxs = np.random.choice(idxs, samples_per_class, replace=False)\n for i, idx in enumerate(idxs):\n plt_idx = i * num_classes + y + 1\n plt.subplot(samples_per_class, num_classes, plt_idx)\n plt.imshow(X_train[idx].astype('uint8'))\n plt.axis('off')\n if i == 0:\n plt.title(cls)\nplt.show()\n\n# Subsample the data for more efficient code execution in this exercise\nnum_training = 5000\nmask = list(range(num_training))\nX_train = X_train[mask]\ny_train = y_train[mask]\n\nnum_test = 500\nmask = list(range(num_test))\nX_test = X_test[mask]\ny_test = y_test[mask]\n\n# Reshape the image data into rows\nX_train = np.reshape(X_train, (X_train.shape[0], -1))\nX_test = np.reshape(X_test, (X_test.shape[0], -1))\nprint(X_train.shape, X_test.shape)\n\nfrom skynet.linear import KNearestNeighbor\n\n# Create a kNN classifier instance. \n# Remember that training a kNN classifier is a noop: \n# the Classifier simply remembers the data and does no further processing \nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)", "We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: \n\nFirst we must compute the distances between all test examples and all train examples. 
\nGiven these distances, for each test example we find the k nearest examples and have them vote for the label\n\nLets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.\nFirst, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.", "# Open cs231n/classifiers/k_nearest_neighbor.py and implement\n# compute_distances_two_loops.\n\n# Test your implementation:\ndists = classifier.compute_distances_two_loops(X_test)\nprint(dists.shape)\n\n# We can visualize the distance matrix: each row is a single test example and\n# its distances to training examples\nplt.imshow(dists, interpolation='none')\nplt.show()", "Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)\n\nWhat in the data is the cause behind the distinctly bright rows?\nWhat causes the columns?\n\nYour Answer: fill this in.\nBright rows indicates that the test data/image is not similar to most of the images in the training set.\nBright columns indicates this column's training data is not similar to most of test data, which means it's not that useful.\nThis is maybe due to noise, or other generalization problem that KNN suffers from.", "# Now implement the function predict_labels and run the code below:\n# We use k = 1 (which is Nearest Neighbor).\ny_test_pred = classifier.predict_labels(dists, k=1)\n\n# Compute and print the fraction of correctly predicted examples\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5:", "y_test_pred = classifier.predict_labels(dists, k=5)\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))", "You should expect to see a slightly better performance than with k = 1.", "# Now lets speed up distance matrix computation by using partial vectorization\n# with one loop. Implement the function compute_distances_one_loop and run the\n# code below:\ndists_one = classifier.compute_distances_one_loop(X_test)\n\n# To ensure that our vectorized implementation is correct, we make sure that it\n# agrees with the naive implementation. There are many ways to decide whether\n# two matrices are similar; one of the simplest is the Frobenius norm. In case\n# you haven't seen it before, the Frobenius norm of two matrices is the square\n# root of the squared sum of differences of all elements; in other words, reshape\n# the matrices into vectors and compute the Euclidean distance between them.\ndifference = np.linalg.norm(dists - dists_one, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! 
The distance matrices are different')\n\n# Now implement the fully vectorized version inside compute_distances_no_loops\n# and run the code\ndists_two = classifier.compute_distances_no_loops(X_test)\n\n# check that the distance matrix agrees with the one we computed before:\ndifference = np.linalg.norm(dists - dists_two, ord='fro')\nprint('Difference was: %f' % (difference, ))\nif difference < 0.001:\n print('Good! The distance matrices are the same')\nelse:\n print('Uh-oh! The distance matrices are different')\n\n# Let's compare how fast the implementations are\ndef time_function(f, *args):\n \"\"\"\n Call a function f with args and return the time (in seconds) that it took to execute.\n \"\"\"\n import time\n tic = time.time()\n f(*args)\n toc = time.time()\n return toc - tic\n\ntwo_loop_time = time_function(classifier.compute_distances_two_loops, X_test)\nprint('Two loop version took %f seconds' % two_loop_time)\n\none_loop_time = time_function(classifier.compute_distances_one_loop, X_test)\nprint('One loop version took %f seconds' % one_loop_time)\n\nno_loop_time = time_function(classifier.compute_distances_no_loops, X_test)\nprint('No loop version took %f seconds' % no_loop_time)\n\n# you should see significantly faster performance with the fully vectorized implementation", "Cross-validation\nWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.", "num_folds = 5\nk_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]\n\nX_train_folds = []\ny_train_folds = []\n################################################################################\n# TODO: #\n# Split up the training data into folds. After splitting, X_train_folds and #\n# y_train_folds should each be lists of length num_folds, where #\n# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #\n# Hint: Look up the numpy array_split function. #\n################################################################################\nX_train_folds = np.array_split(X_train, num_folds)\ny_train_folds = np.array_split(y_train, num_folds)\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# A dictionary holding the accuracies for different values of k that we find\n# when running cross-validation. After running cross-validation,\n# k_to_accuracies[k] should be a list of length num_folds giving the different\n# accuracy values that we found when using that value of k.\nk_to_accuracies = {}\n\n\n################################################################################\n# TODO: #\n# Perform k-fold cross validation to find the best value of k. For each #\n# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #\n# where in each case you use all but one of the folds as training data and the #\n# last fold as a validation set. Store the accuracies for all fold and all #\n# values of k in the k_to_accuracies dictionary. 
#\n################################################################################\nfor k in k_choices:\n classifier = KNearestNeighbor()\n k_to_accuracies[k] = []\n for i in range(num_folds):\n X_train_i = np.vstack(X_train_folds[:i] + X_train_folds[i+1:])\n y_train_i = np.hstack(y_train_folds[:i] + y_train_folds[i+1:])\n X_val_i = X_train_folds[i]\n y_val_i = y_train_folds[i]\n \n classifier.train(X_train_i,y_train_i)\n y_val_pred = classifier.predict(X_val_i, k)\n num_correct = np.sum(y_val_pred == y_val_i)\n accuracy = float(num_correct) / len(y_val_i)\n k_to_accuracies[k].append(accuracy)\n\npass\n################################################################################\n# END OF YOUR CODE #\n################################################################################\n\n# Print out the computed accuracies\nfor k in sorted(k_to_accuracies):\n for accuracy in k_to_accuracies[k]:\n print('k = %d, accuracy = %f' % (k, accuracy))\n\n# plot the raw observations\nfor k in k_choices:\n accuracies = k_to_accuracies[k]\n plt.scatter([k] * len(accuracies), accuracies)\n\n# plot the trend line with error bars that correspond to standard deviation\naccuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])\naccuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])\nplt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)\nplt.title('Cross-validation on k')\nplt.xlabel('k')\nplt.ylabel('Cross-validation accuracy')\nplt.show()\n\n# Based on the cross-validation results above, choose the best value for k, \n# retrain the classifier using all the training data, and test it on the test\n# data. You should be able to get above 28% accuracy on the test data.\nbest_k = 10\n\nclassifier = KNearestNeighbor()\nclassifier.train(X_train, y_train)\ny_test_pred = classifier.predict(X_test, k=best_k)\n\n# Compute and display the accuracy\nnum_correct = np.sum(y_test_pred == y_test)\naccuracy = float(num_correct) / num_test\nprint('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
Danghor/Algorithms
Python/Chapter-04/Quick-Sort-Array.ipynb
gpl-2.0
[ "from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)", "The function logging is used as a decorator. It takes a function \nf as its argument and returns a new function logged_f that returns \nthe same result as the function f, but additionally it prints its arguments \nbefore the function is called and when the function returns, both the function \ncall and the result is printed.\nThe decorator logging is useful for debugging.", "def logging(f):\n def logged_f(*a):\n print(f'{f.__name__}{a}')\n r = f(*a)\n print(f'{f.__name__}{a} = {r}')\n return r\n return logged_f", "An Array-Based Implementation of Quick-Sort\nThe function $\\texttt{sort}(L)$ sorts the list $L$ in place.", "def sort(L):\n quickSort(0, len(L) - 1, L)", "The function quickSort(start, end, L) sorts the sublist L[start:end+1] in place.", "def quickSort(start, end, L):\n if end <= start:\n return # at most one element, nothing to do\n m = partition(start, end, L) # m is the split index\n quickSort(start, m - 1, L)\n quickSort(m + 1, end , L)", "The function $\\texttt{partition}(\\texttt{start}, \\texttt{end}, L)$ returns an index $m$ into the list $L$ and \nregroups the elements of $L$ such that after the function returns the following holds:\n\n$\\forall i \\in {\\texttt{start}, \\cdots, m-1} : L[i] \\leq L[m]$,\n$\\forall i \\in { m+1, \\cdots, \\texttt{end} } : L[m] < L[i]$,\n$L[m] = \\texttt{pivot}$.\n\nHere, pivot is the element that is at the index end at the time of the invocation \nof the function, i.e. we have\n\n$L[\\texttt{end}] = \\texttt{pivot}$\n\nat invocation time.\nThe for-loop of partition maintains the following invariants:\n\n$\\forall i \\in {\\texttt{start}, \\cdots, \\texttt{left} } : L[i] \\leq \\texttt{pivot}$,\n$\\forall i \\in {\\texttt{left}+1, \\cdots, \\texttt{idx}-1} : \\texttt{pivot} < L[i]$,\n$L[\\texttt{end}] = \\texttt{pivot}$.\n\nThese invariants are depicted below:\n\nThis algorithm has been suggested by Nico Lomuto. It is not the most efficient implementation of partition, but\nit is easier to understand than the algorithm given by Tony Hoare that uses two separate loops.", "#@logging\ndef partition(start, end, L):\n pivot = L[end]\n left = start - 1\n for idx in range(start, end):\n if L[idx] <= pivot:\n left += 1\n swap(left, idx, L)\n swap(left + 1, end, L)\n return left + 1", "The function $\\texttt{swap}(x, y, L)$ swaps the elements at index $x$ and $y$ in $L$.", "def swap(x, y, L):\n L[x], L[y] = L[y], L[x]", "Testing", "import random as rnd\n\ndef demo():\n L = [ rnd.randrange(1, 20) for n in range(1, 16) ]\n print(\"L = \", L)\n sort(L)\n print(\"L = \", L)\n\ndemo()\n\ndef isOrdered(L):\n for i in range(len(L) - 1):\n assert L[i] <= L[i+1]\n\nfrom collections import Counter\n\ndef sameElements(L, S):\n assert Counter(L) == Counter(S)", "The function $\\texttt{testSort}(n, k)$ generates $n$ random lists of length $k$, sorts them, and checks whether the output is sorted and contains the same elements as the input.", "def testSort(n, k):\n for i in range(n):\n L = [ rnd.randrange(2*k) for x in range(k) ]\n oldL = L[:]\n sort(L)\n isOrdered(L)\n sameElements(oldL, L)\n assert len(L) == len(oldL)\n print('.', end='')\n print()\n print(\"All tests successful!\")\n\n%%time\ntestSort(100, 20000)", "Next, we sort a million random integers.", "%%timeit\nk = 1_000_000\nL = [ rnd.randrange(1000 * k) for x in range(k) ]\nsort(L)", "Next, we sort a hundred thousand integers. 
This time, many of the integers have the same value.", "L = [ rnd.randrange(100) for x in range(100_000) ]\n\n%%time\nsort(L)", "Finally, we test the worst case and sort 5000 integers that are sorted ascendingly. Since quicksort is recursive, we have to increment the <em style=\"color:blue\">recursion limit</em> of Python, because otherwise we would get an error telling us that we exceed the maximum recursion depth.", "import sys\n\nsys.setrecursionlimit(20000)\nsys.version\n\nL = list(range(5000))\n\n%%time\nsort(L)", "If we shuffle the list that is to be sorted before calling sort, the worst case behaviour disappears.", "rnd.shuffle(L)\n\n%%time\nsort(L)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DS-100/sp17-materials
sp17/labs/lab05/lab05.ipynb
gpl-3.0
[ "Lab 5: Relational Algebra in Pandas", "# Run this cell to set up the notebook.\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\nfrom client.api.notebook import Notebook\nok = Notebook('lab05.ok')", "Boat Club\nThe Berkeley Boat Club wants to better organize their user data, and they've hired you to do it. Your first job is to implement code for relational algebra operators in python (unlike you, they don't know how to use pandas).\nYou may want to refer to these slides, to remember what each operation does. You may also want to refer to the pandas documentation.\nHere are the Boat Club's databases. Your job is to implement a variety of unary and binary relational algebra operators.", "young_sailors = pd.DataFrame({\n \"sid\": [2701, 18869, 63940, 21869, 17436],\n \"sname\": [\"Jerry\", \"Morgan\", \"Danny\", \"Jack\", \"Dustin\"],\n \"rating\": [8, 6, 4, 9, 3],\n \"age\": [25, 26, 21, 27, 22],\n })\nsalty_sailors = pd.DataFrame({\n \"sid\": [2701, 17436, 45433, 22689, 46535],\n \"sname\": [\"Jerry\", \"Dustin\", \"Balon\", \"Euron\", \"Victarion\"],\n \"rating\": [8, 3, 7, 10, 2],\n \"age\": [25, 22, 39, 35, 37],\n })\nboats = pd.DataFrame({\n \"bid\": [41116, 54505, 50041, 35168, 58324],\n \"bname\": [\"The Black Sparrow\", \"The Great Kraken\", \"The Prophetess\", \"Silence\", \"Iron Victory\"],\n \"color\": [\"Black\", \"Orange\", \"Silver\", \"Red\", \"Grey\"],\n })\nreservations = pd.DataFrame({\n \"sid\": [21869, 45433, 18869, 22689, 21869, 17436, 63940, 45433, 21869, 18869],\n \"bid\": [41116, 35168, 50041, 41116, 58324, 50041, 54505, 41116, 50041, 41116],\n \"day\": [\"3/1\", \"3/1\", \"3/2\", \"3/2\", \"3/2\", \"3/3\", \"3/3\", \"3/3\", \"3/3\", \"3/4\"],\n })", "Question 1: Projection\nOur arguments are a dataframe and a list of columns to select. This should be a simple one :)", "def project(df, columns):\n ...\n\nproject(salty_sailors, [\"sname\", \"age\"])\n\n_ = ok.grade('qproject')\n_ = ok.backup()", "Question 2: Selection\nFor selecton, our arguments are a dataframe and a function which determines which rows we select. For instance,\ngood_sailors = select(young_sailors, lambda x: x[\"rating\"] &gt; 6)", "def select(df, condition):\n ...\n\nselect(young_sailors, lambda x: x[\"rating\"] > 6)\n\n_ = ok.grade('qselect')\n_ = ok.backup()", "Question 3: Union\nThis is a binary operator, so we pass in two dataframes as our arguments. You can assume that the two dataframes are union compatible - that is, that they have the same number of columns, and their columns have the same types.", "def union(df1, df2):\n ...\n\nunion(young_sailors, salty_sailors)\n\n_ = ok.grade('qunion')\n_ = ok.backup()", "Question 4: Intersection\nSimilar to Union, this is also a binary operator.", "def intersection(df1, df2):\n ...\n\nintersection(young_sailors, salty_sailors)\n\n_ = ok.grade('qintersection')\n_ = ok.backup()", "Question 5: Set-difference\nThis one is a bit harder. You might just want to convert the rows of the dataframes to tuple, if you're having trouble.", "def difference(df1, df2):\n return df1.where(df1.apply(lambda x: ~x.isin(df2[x.name]))).dropna()\n\ndifference(young_sailors, salty_sailors)\n\n_ = ok.grade('qdifference')\n_ = ok.backup()", "Question 6: Cross-product\nThis one is also tricky, so we've provided some help for you. 
Think about how the new key column could be used...", "def cross_product(df1, df2):\n # add a column \"tmp-key\" of zeros to df1 and df2 \n df1 = pd.concat([df1, pd.Series(0, index=df1.index, name=\"tmp-key\")], axis=1)\n df2 = pd.concat([df2, pd.Series(0, index=df2.index, name=\"tmp-key\")], axis=1)\n # use Pandas merge functionality along with drop \n # to compute outer product and remove extra column\n return (pd\n .merge(df1, df2, on=\"tmp-key\")\n ...\n\ncross_product(young_sailors, salty_sailors)\n\n_ = ok.grade('qcross_product')\n_ = ok.backup()", "Question 7: Theta-Join\nCan you do this by using two other relational operators?", "def theta_join(df1, df2, condition):\n return select(cross_product(df1, df2), condition)\n\ntheta_join(young_sailors, salty_sailors, lambda x: x[\"age_x\"] > x[\"age_y\"])\n\n_ = ok.grade('qtheta_join')\n_ = ok.backup()", "Question 8: Natural Join\nSimilar to above, try to implement this using two relational operators.", "def natural_join(df1, df2, attr):\n return select(cross_product(df1, df2), lambda x: x[attr+\"_x\"] == x[attr+\"_y\"])\n\nall_sailors = union(young_sailors, salty_sailors)\nsailor_reservtions = natural_join(all_sailors, reservations, \"sid\")\nsailors_and_boats = natural_join(sailor_reservtions, boats, \"bid\")\nproject(sailors_and_boats, [\"sname\", \"bname\", \"day\"])\n\n_ = ok.grade('qnatural_join')\n_ = ok.backup()", "Submitting your assignment\nIf you made a good-faith effort to complete the lab, change i_finished_the_lab to True in the cell below. In any case, run the cells below to submit the lab.", "i_finished_the_lab = False\n\n_ = ok.grade('qcompleted')\n_ = ok.backup()\n\n_ = ok.submit()", "Now, run this code in your terminal to make a\ngit commit\nthat saves a snapshot of your changes in git. The last line of the cell\nruns git push, which will send your work to your personal Github repo.\n# Tell git to commit your changes to this notebook\ngit add sp17/lab/lab04/lab04.ipynb\n\n# Tell git to make the commit\ngit commit -m \"lab04 finished\"\n\n# Send your updates to your personal private repo\ngit push origin master" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
namco1992/algorithms_in_python
data_structures/stack.ipynb
mit
[ "Stack\nA stack can be implemented directly with a Python list.\n- Stack() creates a new stack that is empty. It needs no parameters and returns an empty stack.\n- push(item) adds a new item to the top of the stack. It needs the item and returns nothing.\n- pop() removes the top item from the stack. It needs no parameters and returns the item. The stack is modified.\n- peek() returns the top item from the stack but does not remove it. It needs no parameters. The stack is not modified.\n- isEmpty() tests to see whether the stack is empty. It needs no parameters and returns a boolean value.\n- size() returns the number of items on the stack. It needs no parameters and returns an integer.", "class Stack:\n    def __init__(self):\n        self.items = []\n\n    def isEmpty(self):\n        return self.items == []\n\n    def push(self, item):\n        self.items.append(item)\n\n    def pop(self):\n        return self.items.pop()\n\n    def peek(self):\n        return self.items[len(self.items)-1]\n\n    def size(self):\n        return len(self.items)\n\nif __name__ == '__main__':\n    stack = Stack()\n    stack.push(12)\n    print(stack.items)", "Note that here we treat the tail of the list as the 'top' of the stack, so push and pop use the list's append and pop methods. Both of these methods have O(1) time complexity.\nExercise: reverse a string", "def revstring(mystr):\n    # your code here\n    stack = Stack()\n    for x in mystr:\n        stack.push(x)\n    length = len(mystr)\n    reversed_str = ''\n    while not stack.isEmpty():\n        reversed_str += stack.pop()\n    return reversed_str\n\nif __name__ == '__main__':\n    assert revstring('apple') == 'elppa'", "Exercise: parenthesis matching" ]
[ "markdown", "code", "markdown", "code", "markdown" ]
AllenDowney/ModSimPy
notebooks/chap09.ipynb
mit
[ "Modeling and Simulation in Python\nChapter 9\nCopyright 2017 Allen Downey\nLicense: Creative Commons Attribution 4.0 International", "# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\n# import everything from SymPy.\nfrom sympy import *\n\n# Set up Jupyter notebook to display math.\ninit_printing() ", "The following displays SymPy expressions and provides the option of showing results in LaTeX format.", "from sympy.printing import latex\n\ndef show(expr, show_latex=False):\n \"\"\"Display a SymPy expression.\n \n expr: SymPy expression\n show_latex: boolean\n \"\"\"\n if show_latex:\n print(latex(expr))\n return expr", "Analysis with SymPy\nCreate a symbol for time.", "t = symbols('t')", "If you combine symbols and numbers, you get symbolic expressions.", "expr = t + 1", "The result is an Add object, which just represents the sum without trying to compute it.", "type(expr)", "subs can be used to replace a symbol with a number, which allows the addition to proceed.", "expr.subs(t, 2)", "f is a special class of symbol that represents a function.", "f = Function('f')", "The type of f is UndefinedFunction", "type(f)", "SymPy understands that f(t) means f evaluated at t, but it doesn't try to evaluate it yet.", "f(t)", "diff returns a Derivative object that represents the time derivative of f", "dfdt = diff(f(t), t)\n\ntype(dfdt)", "We need a symbol for alpha", "alpha = symbols('alpha')", "Now we can write the differential equation for proportional growth.", "eq1 = Eq(dfdt, alpha*f(t))", "And use dsolve to solve it. The result is the general solution.", "solution_eq = dsolve(eq1)", "We can tell it's a general solution because it contains an unspecified constant, C1.\nIn this example, finding the particular solution is easy: we just replace C1 with p_0", "C1, p_0 = symbols('C1 p_0')\n\nparticular = solution_eq.subs(C1, p_0)", "In the next example, we have to work a little harder to find the particular solution.\nSolving the quadratic growth equation\nWe'll use the (r, K) parameterization, so we'll need two more symbols:", "r, K = symbols('r K')", "Now we can write the differential equation.", "eq2 = Eq(diff(f(t), t), r * f(t) * (1 - f(t)/K))", "And solve it.", "solution_eq = dsolve(eq2)", "The result, solution_eq, contains rhs, which is the right-hand side of the solution.", "general = solution_eq.rhs", "We can evaluate the right-hand side at $t=0$", "at_0 = general.subs(t, 0)", "Now we want to find the value of C1 that makes f(0) = p_0.\nSo we'll create the equation at_0 = p_0 and solve for C1. Because this is just an algebraic identity, not a differential equation, we use solve, not dsolve.\nThe result from solve is a list of solutions. 
In this case, we have reason to expect only one solution, but we still get a list, so we have to use the bracket operator, [0], to select the first one.", "solutions = solve(Eq(at_0, p_0), C1)\ntype(solutions), len(solutions)\n\nvalue_of_C1 = solutions[0]", "Now in the general solution, we want to replace C1 with the value of C1 we just figured out.", "particular = general.subs(C1, value_of_C1)", "The result is complicated, but SymPy provides a method that tries to simplify it.", "particular = simplify(particular)", "Often simplicity is in the eye of the beholder, but that's about as simple as this expression gets.\nJust to double-check, we can evaluate it at t=0 and confirm that we get p_0", "particular.subs(t, 0)", "This solution is called the logistic function.\nIn some places you'll see it written in a different form:\n$f(t) = \\frac{K}{1 + A e^{-rt}}$\nwhere $A = (K - p_0) / p_0$.\nWe can use SymPy to confirm that these two forms are equivalent. First we represent the alternative version of the logistic function:", "A = (K - p_0) / p_0\n\nlogistic = K / (1 + A * exp(-r*t))", "To see whether two expressions are equivalent, we can check whether their difference simplifies to 0.", "simplify(particular - logistic)", "This test only works one way: if SymPy says the difference reduces to 0, the expressions are definitely equivalent (and not just numerically close).\nBut if SymPy can't find a way to simplify the result to 0, that doesn't necessarily mean there isn't one. Testing whether two expressions are equivalent is a surprisingly hard problem; in fact, there is no algorithm that can solve it in general.\nExercises\nExercise: Solve the quadratic growth equation using the alternative parameterization\n$\\frac{df(t)}{dt} = \\alpha f(t) + \\beta f^2(t) $", "# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here\n\n# Solution goes here", "Exercise: Use WolframAlpha to solve the quadratic growth model, using either or both forms of parameterization:\ndf(t) / dt = alpha f(t) + beta f(t)^2\n\nor\ndf(t) / dt = r f(t) (1 - f(t)/K)\n\nFind the general solution and also the particular solution where f(0) = p_0." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
davidzchen/tensorflow
tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Image classification with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/tutorials/model_maker_image_classification\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/tutorials/model_maker_image_classification.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nModel Maker library simplifies the process of adapting and converting a TensorFlow neural-network model to particular input data when deploying this model for on-device ML applications.\nThis notebook shows an end-to-end example that utilizes this Model Maker library to illustrate the adaption and conversion of a commonly-used image classification model to classify flowers on a mobile device.\nPrerequisites\nTo run this example, we first need to install serveral required packages, including Model Maker package that in github repo.", "!pip install tflite-model-maker", "Import the required packages.", "import numpy as np\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\n\nfrom tflite_model_maker import configs\nfrom tflite_model_maker import image_classifier\nfrom tflite_model_maker import ImageClassifierDataLoader\nfrom tflite_model_maker import model_spec\n\nimport matplotlib.pyplot as plt", "Simple End-to-End Example\nGet the data path\nLet's get some images to play with this simple end-to-end example. Hundreds of images is a good start for Model Maker while more data could achieve better accuracy.", "image_path = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)", "You could replace image_path with your own image folders. As for uploading data to colab, you could find the upload button in the left sidebar shown in the image below with the red rectangle. Just have a try to upload a zip file and unzip it. 
The root file path is the current path.\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_image_classification.png\" alt=\"Upload File\" width=\"800\" hspace=\"100\">\nIf you prefer not to upload your images to the cloud, you could try to run the library locally following the guide in github.\nRun the example\nThe example just consists of 4 lines of code as shown below, each of which representing one step of the overall process.\nStep 1. Load input data specific to an on-device ML app. Split it to training data and testing data.", "data = ImageClassifierDataLoader.from_folder(image_path)\ntrain_data, test_data = data.split(0.9)", "Step 2. Customize the TensorFlow model.", "model = image_classifier.create(train_data)", "Step 3. Evaluate the model.", "loss, accuracy = model.evaluate(test_data)", "Step 4. Export to TensorFlow Lite model.\nHere, we export TensorFlow Lite model with metadata which provides a standard for model descriptions.\nYou could download it in the left sidebar same as the uploading part for your own use.", "model.export(export_dir='.')", "After this simple 4 steps, we could further use TensorFlow Lite model file and label file in on-device applications like in image classification reference app.\nDetailed Process\nCurrently, we support several models such as EfficientNet-Lite* models, MobileNetV2, ResNet50 as pre-trained models for image classification. But it is very flexible to add new pre-trained models to this library with just a few lines of code.\nThe following walks through this end-to-end example step by step to show more detail.\nStep 1: Load Input Data Specific to an On-device ML App\nThe flower dataset contains 3670 images belonging to 5 classes. Download the archive version of the dataset and untar it.\nThe dataset has the following directory structure:\n<pre>\n<b>flower_photos</b>\n|__ <b>daisy</b>\n |______ 100080576_f52e8ee070_n.jpg\n |______ 14167534527_781ceb1b7a_n.jpg\n |______ ...\n|__ <b>dandelion</b>\n |______ 10043234166_e6dd915111_n.jpg\n |______ 1426682852_e62169221f_m.jpg\n |______ ...\n|__ <b>roses</b>\n |______ 102501987_3cdb8e5394_n.jpg\n |______ 14982802401_a3dfb22afb.jpg\n |______ ...\n|__ <b>sunflowers</b>\n |______ 12471791574_bb1be83df4.jpg\n |______ 15122112402_cafa41934f.jpg\n |______ ...\n|__ <b>tulips</b>\n |______ 13976522214_ccec508fe7.jpg\n |______ 14487943607_651e8062a1_m.jpg\n |______ ...\n</pre>", "image_path = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)", "Use ImageClassifierDataLoader class to load data.\nAs for from_folder() method, it could load data from the folder. It assumes that the image data of the same class are in the same subdirectory and the subfolder name is the class name. 
Currently, JPEG-encoded images and PNG-encoded images are supported.", "data = ImageClassifierDataLoader.from_folder(image_path)", "Split it to training data (80%), validation data (10%, optional) and testing data (10%).", "train_data, rest_data = data.split(0.8)\nvalidation_data, test_data = rest_data.split(0.5)", "Show 25 image examples with labels.", "plt.figure(figsize=(10,10))\nfor i, (image, label) in enumerate(data.dataset.take(25)):\n plt.subplot(5,5,i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(image.numpy(), cmap=plt.cm.gray)\n plt.xlabel(data.index_to_label[label.numpy()])\nplt.show()", "Step 2: Customize the TensorFlow Model\nCreate a custom image classifier model based on the loaded data. The default model is EfficientNet-Lite0.", "model = image_classifier.create(train_data, validation_data=validation_data)", "Have a look at the detailed model structure.", "model.summary()", "Step 3: Evaluate the Customized Model\nEvaluate the result of the model, get the loss and accuracy of the model.", "loss, accuracy = model.evaluate(test_data)", "We could plot the predicted results in 100 test images. Predicted labels with red color are the wrong predicted results while others are correct.", "# A helper function that returns 'red'/'black' depending on if its two input\n# parameter matches or not.\ndef get_label_color(val1, val2):\n if val1 == val2:\n return 'black'\n else:\n return 'red'\n\n# Then plot 100 test images and their predicted labels.\n# If a prediction result is different from the label provided label in \"test\"\n# dataset, we will highlight it in red color.\nplt.figure(figsize=(20, 20))\npredicts = model.predict_top_k(test_data)\nfor i, (image, label) in enumerate(test_data.dataset.take(100)):\n ax = plt.subplot(10, 10, i+1)\n plt.xticks([])\n plt.yticks([])\n plt.grid(False)\n plt.imshow(image.numpy(), cmap=plt.cm.gray)\n\n predict_label = predicts[i][0][0]\n color = get_label_color(predict_label,\n test_data.index_to_label[label.numpy()])\n ax.xaxis.label.set_color(color)\n plt.xlabel('Predicted: %s' % predict_label)\nplt.show()", "If the accuracy doesn't meet the app requirement, one could refer to Advanced Usage to explore alternatives such as changing to a larger model, adjusting re-training parameters etc.\nStep 4: Export to TensorFlow Lite Model\nConvert the existing model to TensorFlow Lite model format and save the image labels in label file. The default TFLite filename is model.tflite, the default label filename is label.txt.", "model.export(export_dir='.')", "The TensorFlow Lite model file and label file could be used in image classification reference app.\nAs for android reference app as an example, we could add flower_classifier.tflite and flower_label.txt in assets folder. Meanwhile, change label filename in code and TensorFlow Lite file name in code. Thus, we could run the retrained float TensorFlow Lite model on the android app.\nYou can also evalute the tflite model with the evaluate_tflite method.", "model.evaluate_tflite('model.tflite', test_data)", "Advanced Usage\nThe create function is the critical part of this library. It uses transfer learning with a pretrained model similiar to the tutorial.\nThe createfunction contains the following steps:\n\nSplit the data into training, validation, testing data according to parameter validation_ratio and test_ratio. The default value of validation_ratio and test_ratio are 0.1 and 0.1.\nDownload a Image Feature Vector as the base model from TensorFlow Hub. 
The default pre-trained model is EfficientNet-Lite0.\nAdd a classifier head with a Dropout Layer with dropout_rate between head layer and pre-trained model. The default dropout_rate is the default dropout_rate value from make_image_classifier_lib by TensorFlow Hub.\nPreprocess the raw input data. Currently, preprocessing steps including normalizing the value of each image pixel to model input scale and resizing it to model input size. EfficientNet-Lite0 have the input scale [0, 1] and the input image size [224, 224, 3].\nFeed the data into the classifier model. By default, the training parameters such as training epochs, batch size, learning rate, momentum are the default values from make_image_classifier_lib by TensorFlow Hub. Only the classifier head is trained.\n\nIn this section, we describe several advanced topics, including switching to a different image classification model, changing the training hyperparameters etc.\nPost-training quantization on the TensorFLow Lite model\nPost-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator latency, with little degradation in model accuracy. Thus, it's widely used to optimize the model.\nModel Maker supports multiple post-training quantization options. Let's take full integer quantization as an instance. First, define the quantization config to enforce enforce full integer quantization for all ops including the input and output. The input type and output type are uint8 by default. You may also change them to other types like int8 by setting inference_input_type and inference_output_type in config.", "config = configs.QuantizationConfig.create_full_integer_quantization(representative_data=test_data, is_integer_only=True)", "Then we export TensorFlow Lite model with such configuration.", "model.export(export_dir='.', tflite_filename='model_quant.tflite', quantization_config=config)", "In Colab, you can download the model named model_quant.tflite from the left sidebar, same as the uploading part mentioned above.\nChange the model\nChange to the model that's supported in this library.\nThis library supports EfficientNet-Lite models, MobileNetV2, ResNet50 by now. EfficientNet-Lite are a family of image classification models that could achieve state-of-art accuracy and suitable for Edge devices. The default model is EfficientNet-Lite0.\nWe could switch model to MobileNetV2 by just setting parameter model_spec to mobilenet_v2_spec in create method.", "model = image_classifier.create(train_data, model_spec=model_spec.mobilenet_v2_spec, validation_data=validation_data)", "Evaluate the newly retrained MobileNetV2 model to see the accuracy and loss in testing data.", "loss, accuracy = model.evaluate(test_data)", "Change to the model in TensorFlow Hub\nMoreover, we could also switch to other new models that inputs an image and outputs a feature vector with TensorFlow Hub format.\nAs Inception V3 model as an example, we could define inception_v3_spec which is an object of ImageModelSpec and contains the specification of the Inception V3 model.\nWe need to specify the model name name, the url of the TensorFlow Hub model uri. Meanwhile, the default value of input_image_shape is [224, 224]. 
We need to change it to [299, 299] for Inception V3 model.", "inception_v3_spec = model_spec.ImageModelSpec(\n uri='https://tfhub.dev/google/imagenet/inception_v3/feature_vector/1')\ninception_v3_spec.input_image_shape = [299, 299]", "Then, by setting parameter model_spec to inception_v3_spec in create method, we could retrain the Inception V3 model.\nThe remaining steps are exactly same and we could get a customized InceptionV3 TensorFlow Lite model in the end.\nChange your own custom model\nIf we'd like to use the custom model that's not in TensorFlow Hub, we should create and export ModelSpec in TensorFlow Hub.\nThen start to define ImageModelSpec object like the process above.\nChange the training hyperparameters\nWe could also change the training hyperparameters like epochs, dropout_rate and batch_size that could affect the model accuracy. For instance,\n\nepochs: more epochs could achieve better accuracy until it converges but training for too many epochs may lead to overfitting.\ndropout_rate: avoid overfitting.\nbatch_size: number of samples to use in one training step.\nvalidation_data: number of samples to use in one training step.\n\nFor example, we could train with more epochs.", "model = image_classifier.create(train_data, validation_data=validation_data, epochs=10)", "Evaluate the newly retrained model with 10 training epochs.", "loss, accuracy = model.evaluate(test_data)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
zaqwes8811/micro-apps
self_driving/deps/Kalman_and_Bayesian_Filters_in_Python_master/Supporting_Notebooks/Interactions.ipynb
mit
[ "#format the book\nfrom __future__ import division, print_function\n%matplotlib inline\nimport sys\nsys.path.insert(0, '..')\nimport book_format\nbook_format.set_style()", "Interactions\nThis is a collection of interactions, mostly from the book. If you have are reading a print version of the book, or are reading it online via Github or nbviewer you will be unable to run the interactions.\nSo I have created this notebook. Here is how you run an interaction if you do not have IPython installed on your computer.\n\n\nGo to try.juptyer.org in your browser. It will launch a temporary notebook server for you.\n\n\nClick the New button and select Python 3. This will create a new notebook that will run Python 3 for you in your browser.\n\n\nCopy the entire contents of a cell from this notebook and paste it into a 'code' cell in the notebook on your browser. \n\n\nPress CTRL+ENTER to execute the cell.\n\n\nHave fun! Change code. Play. Experiment. Hack.\n\n\nYour server and notebook is not permanently saved. Once you close the session your data is lost. Yes, it says it is saving your file if you press save, and you can see it in the directory. But that is just happening in a Docker container that will be deleted as soon as you close the window. Copy and paste any changes you want to keep to an external file.\nOf course if you have IPython installed you can download this notebook and run it on your own computer. Type\nipython notebook\n\nin a command prompt from the directory where you downloaded this file. Click on the name of this file to open it.\nExperimenting with FPF'\nThe Kalman filter uses the equation $P^- = FPF^\\mathsf{T}$ to compute the prior of the covariance matrix during the prediction step, where P is the covariance matrix and F is the system transistion function. For a Newtonian system $x = \\dot{x}\\Delta t + x_0$ F might look like\n$$F = \\begin{bmatrix}1 & \\Delta t\\0 & 1\\end{bmatrix}$$\n$FPF^\\mathsf{T}$ alters P by taking the correlation between the position ($x$) and velocity ($\\dot{x}$). This interactive plot lets you see the effect of different designs of F has on this value. For example,\n\n\nwhat if $x$ is not correlated to $\\dot{x}$? (set F01 to 0)\n\n\nwhat if $x = 2\\dot{x}\\Delta t + x_0$? (set F01 to 2)\n\n\nwhat if $x = \\dot{x}\\Delta t + 2*x_0$? (set F00 to 2)\n\n\nwhat if $x = \\dot{x}\\Delta t$? 
(set F00 to 0)", "%matplotlib inline\nfrom IPython.html.widgets import interact, interactive, fixed\nimport IPython.html.widgets as widgets\nimport numpy as np\nimport numpy.linalg as linalg\nimport math\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\n\ndef plot_covariance_ellipse(x, P, edgecolor='k'):\n U,s,v = linalg.svd(P)\n angle = math.atan2(U[1,0],U[0,0])\n width = math.sqrt(s[0]) * 2\n height = math.sqrt(s[1]) * 2\n\n ax = plt.gca()\n e = Ellipse(xy=(0, 0), width=width, height=height, angle=angle,\n edgecolor=edgecolor, facecolor='none',\n lw=2, ls='solid')\n ax.add_patch(e)\n ax.set_aspect('equal')\n \n \ndef plot_FPFT(F00, F01, F10, F11, covar):\n \n dt = 1.\n x = np.array((0, 0.))\n P = np.array(((1, covar), (covar, 2)))\n F = np.array(((F00, F01), (F10, F11)))\n\n plot_covariance_ellipse(x, P)\n plot_covariance_ellipse(x, np.dot(F, P).dot(F.T), edgecolor='r')\n #plt.axis('equal')\n plt.xlim(-4, 4)\n plt.ylim(-4, 4)\n plt.title(str(F))\n plt.xlabel('position')\n plt.ylabel('velocity')\n \ninteract(plot_FPFT, \n F00=widgets.IntSlider(value=1, min=0, max=2.), \n F01=widgets.FloatSlider(value=1, min=0., max=2., description='F01(dt)'),\n F10=widgets.FloatSlider(value=0, min=0., max=2.),\n F11=widgets.FloatSlider(value=1, min=0., max=2.),\n covar=widgets.FloatSlider(value=0, min=0, max=1.));", "Covariance Ellipse\nSee the effect of varying the variances and covariance of a covariance matrix of the form\n$$\\begin{bmatrix}\\texttt{var}_x & \\texttt{cov}_xy \\ \\texttt{cov}_xy & \\texttt{var}_y\\end{bmatrix}$$", "%matplotlib inline\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.html.widgets import FloatSlider\nfrom math import cos, sin, pi, atan2, sqrt\nimport numpy.linalg as linalg\nimport matplotlib.pyplot as plt\nfrom matplotlib.patches import Ellipse\n\ndef plot_covariance_ellipse(P):\n U,s,v = linalg.svd(P)\n angle = atan2(U[1,0],U[0,0])\n width = sqrt(s[0]) * 2\n height = sqrt(s[1]) * 2\n\n ax = plt.gca()\n e = Ellipse(xy=(0, 0), width=width, height=height, angle=angle,\n edgecolor='k', facecolor='none',\n lw=2, ls='solid')\n ax.add_patch(e)\n h, w = height/4, width/4\n plt.plot([0, h*cos(angle+pi/2)], [0, h*sin(angle+pi/2)])\n plt.plot([0, w*cos(angle)], [0, w*sin(angle)])\n\ndef plot_covariance(var_x, var_y, cov_xy):\n P = [[var_x, cov_xy], [cov_xy, var_y]]\n plot_covariance_ellipse(P)\n plt.xlim(-6, 6)\n plt.gca().set_aspect('equal')\n plt.ylim(-6, 6)\n plt.show()\n\ninteract (plot_covariance, \n var_x=FloatSlider(value=5., min=0, max=20.), \n var_y=FloatSlider(value=5., min=0., max=20.), \n cov_xy=FloatSlider(value=1.5, min=0.0, max=50, step=.2));\n", "g-h Filter\nExperiment with various values for g-h filter parameters.", "%matplotlib inline\nfrom IPython.html.widgets import interact, interactive, fixed\nimport IPython.html.widgets as widgets\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport numpy.random as random\n\ndef gen_data(x0, dx, count, noise_factor):\n return [x0 + dx*i + random.randn()*noise_factor for i in range (count)]\n\ndef g_h_filter(data, x0, dx, g, h, dt=1., pred=None): \n x = x0\n results = []\n for z in data:\n #prediction step\n x_est = x + (dx*dt)\n dx = dx \n if pred is not None:\n pred.append(x_est)\n \n # update step\n residual = z - x_est\n dx = dx + h * (residual) / dt\n x = x_est + g * residual \n results.append(x) \n return np.array(results)\n\nzs = gen_data(x0=5, dx=5, count=100, noise_factor=50)\n\ndef interactive_gh(x, dx, g, h):\n data = g_h_filter(data=zs, x0=x, dx=dx, 
dt=1.,g=g, h=h)\n plt.plot(zs, color='r')\n plt.plot(data, color='k')\n plt.show()\n\ninteract (interactive_gh, \n x=widgets.FloatSlider(value=0., min=-50, max=50.), \n dx=widgets.FloatSlider(value=5., min=-50., max=50.), \n g=widgets.FloatSlider(value=0.1, min=0.01, max=2, step=.02), \n h=widgets.FloatSlider(value=0.02, min=0.0, max=0.5, step=0.01));" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.23/_downloads/c69e0120935518121b8298ecac72eed8/35_dipole_orientations.ipynb
bsd-3-clause
[ "%matplotlib inline", "The role of dipole orientations in distributed source localization\nWhen performing source localization in a distributed manner\n(MNE/dSPM/sLORETA/eLORETA),\nthe source space is defined as a grid of dipoles that spans a large portion of\nthe cortex. These dipoles have both a position and an orientation. In this\ntutorial, we will look at the various options available to restrict the\norientation of the dipoles and the impact on the resulting source estimate.\nSee inverse_orientation_constraints for related information.\nLoading data\nLoad everything we need to perform source localization on the sample dataset.", "import mne\nimport numpy as np\nfrom mne.datasets import sample\nfrom mne.minimum_norm import make_inverse_operator, apply_inverse\n\ndata_path = sample.data_path()\nevokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')\nleft_auditory = evokeds[0].apply_baseline()\nfwd = mne.read_forward_solution(\n data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')\nmne.convert_forward_solution(fwd, surf_ori=True, copy=False)\nnoise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')\nsubject = 'sample'\nsubjects_dir = data_path + '/subjects'\ntrans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'", "The source space\nLet's start by examining the source space as constructed by the\n:func:mne.setup_source_space function. Dipoles are placed along fixed\nintervals on the cortex, determined by the spacing parameter. The source\nspace does not define the orientation for these dipoles.", "lh = fwd['src'][0] # Visualize the left hemisphere\nverts = lh['rr'] # The vertices of the source space\ntris = lh['tris'] # Groups of three vertices that form triangles\ndip_pos = lh['rr'][lh['vertno']] # The position of the dipoles\ndip_ori = lh['nn'][lh['vertno']]\ndip_len = len(dip_pos)\ndip_times = [0]\nwhite = (1.0, 1.0, 1.0) # RGB values for a white color\n\nactual_amp = np.ones(dip_len) # misc amp to create Dipole instance\nactual_gof = np.ones(dip_len) # misc GOF to create Dipole instance\ndipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)\ntrans = mne.read_trans(trans_fname)\n\nfig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)\ncoord_frame = 'mri'\n\n# Plot the cortex\nmne.viz.plot_alignment(\n subject=subject, subjects_dir=subjects_dir, trans=trans, surfaces='white',\n coord_frame=coord_frame, fig=fig)\n\n# Mark the position of the dipoles with small red dots\nmne.viz.plot_dipole_locations(\n dipoles=dipoles, trans=trans, mode='sphere', subject=subject,\n subjects_dir=subjects_dir, coord_frame=coord_frame, scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)", "Fixed dipole orientations\nWhile the source space defines the position of the dipoles, the inverse\noperator defines the possible orientations of them. One of the options is to\nassign a fixed orientation. Since the neural currents from which MEG and EEG\nsignals originate flows mostly perpendicular to the cortex\n:footcite:HamalainenEtAl1993, restricting the orientation of the dipoles\naccordingly places a useful restriction on the source estimate.\nBy specifying fixed=True when calling\n:func:mne.minimum_norm.make_inverse_operator, the dipole orientations are\nfixed to be orthogonal to the surface of the cortex, pointing outwards. 
Let's\nvisualize this:", "fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nmne.viz.plot_alignment(\n subject=subject, subjects_dir=subjects_dir, trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the dipoles as arrows pointing along the surface normal\nmne.viz.plot_dipole_locations(\n dipoles=dipoles, trans=trans, mode='arrow', subject=subject,\n subjects_dir=subjects_dir, coord_frame='head', scale=7e-4, fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)", "Restricting the dipole orientations in this manner leads to the following\nsource estimate for the sample data:", "# Compute the source estimate for the 'left - auditory' condition in the sample\n# dataset.\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.get_peak(hemi='lh')\nbrain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))\nmne.viz.set_3d_view(figure=brain_fixed, focalpoint=(0., 0., 50))", "The direction of the estimated current is now restricted to two directions:\ninward and outward. In the plot, blue areas indicate current flowing inwards\nand red areas indicate current flowing outwards. Given the curvature of the\ncortex, groups of dipoles tend to point in the same direction: the direction\nof the electromagnetic field picked up by the sensors.\nLoose dipole orientations\nForcing the source dipoles to be strictly orthogonal to the cortex makes the\nsource estimate sensitive to the spacing of the dipoles along the cortex,\nsince the curvature of the cortex changes within each ~10 square mm patch.\nFurthermore, misalignment of the MEG/EEG and MRI coordinate frames is more\ncritical when the source dipole orientations are strictly constrained\n:footcite:LinEtAl2006. To lift the restriction on the orientation of the\ndipoles, the inverse operator has the ability to place not one, but three\ndipoles at each location defined by the source space. These three dipoles are\nplaced orthogonally to form a Cartesian coordinate system. 
Let's visualize\nthis:", "fig = mne.viz.create_3d_figure(size=(600, 400))\n\n# Plot the cortex\nmne.viz.plot_alignment(\n subject=subject, subjects_dir=subjects_dir, trans=trans,\n surfaces='white', coord_frame='head', fig=fig)\n\n# Show the three dipoles defined at each location in the source space\nmne.viz.plot_alignment(\n subject=subject, subjects_dir=subjects_dir, trans=trans, fwd=fwd,\n surfaces='white', coord_frame='head', fig=fig)\n\nmne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)", "When computing the source estimate, the activity at each of the three dipoles\nis collapsed into the XYZ components of a single vector, which leads to the\nfollowing source estimate for the sample data:", "# Make an inverse operator with loose dipole orientations\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=1.0)\n\n# Compute the source estimate, indicate that we want a vector solution\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)\nmne.viz.set_3d_view(figure=brain_mag, focalpoint=(0., 0., 50))", "Limiting orientations, but not fixing them\nOften, the best results will be obtained by allowing the dipoles to have\nsomewhat free orientation, but not stray too far from a orientation that is\nperpendicular to the cortex. The loose parameter of the\n:func:mne.minimum_norm.make_inverse_operator allows you to specify a value\nbetween 0 (fixed) and 1 (unrestricted or \"free\") to indicate the amount the\norientation is allowed to deviate from the surface normal.", "# Set loose to 0.2, the default value\ninv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,\n loose=0.2)\nstc = apply_inverse(left_auditory, inv, pick_ori='vector')\n\n# Visualize it at the moment of peak activity.\n_, time_max = stc.magnitude().get_peak(hemi='lh')\nbrain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,\n time_unit='s', size=(600, 400), overlay_alpha=0)\nmne.viz.set_3d_view(figure=brain_loose, focalpoint=(0., 0., 50))", "Discarding dipole orientation information\nOften, further analysis of the data does not need information about the\norientation of the dipoles, but rather their magnitudes. The pick_ori\nparameter of the :func:mne.minimum_norm.apply_inverse function allows you\nto specify whether to return the full vector solution ('vector') or\nrather the magnitude of the vectors (None, the default) or only the\nactivity in the direction perpendicular to the cortex ('normal').", "# Only retain vector magnitudes\nstc = apply_inverse(left_auditory, inv, pick_ori=None)\n\n# Visualize it at the moment of peak activity\n_, time_max = stc.get_peak(hemi='lh')\nbrain = stc.plot(surface='white', subjects_dir=subjects_dir,\n initial_time=time_max, time_unit='s', size=(600, 400))\nmne.viz.set_3d_view(figure=brain, focalpoint=(0., 0., 50))", "References\n.. footbibliography::" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
Neurosim-lab/netpyne
netpyne/tutorials/saving_loading_tut/saving_tut.ipynb
mit
[ "Saving and Loading Tutorial\nPreparing a virtual environment\nFirst, you need to have Python3 and openmpi installed and running on your machine.\nIn a new directory, here are the steps I took to create a virtual environment for this Jupyter notebook:\necho \"\" \necho \"Preparing a virtual environment for NetPyNE\" \necho \"=============================================================================\"\necho \"Using Python version:\"\npython3 --version\necho \"Using Python from:\"\nwhich python3\n\necho \"\"\necho \"Creating a virtual environment: python3 -m venv env\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m venv env\n\necho \"\"\necho \"Activating virtual environment: source env/bin/activate\"\necho \"-----------------------------------------------------------------------------\"\nsource env/bin/activate\n\necho \"\"\necho \"Updating pip: python3 -m pip install --upgrade pip\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install --upgrade pip\n\necho \"\"\necho \"Installing wheel: python3 -m pip install --upgrade wheel\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install --upgrade wheel\n\necho \"\"\necho \"Installing ipython: python3 -m pip install --upgrade ipython\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install ipython\n\necho \"\"\necho \"Installing NEURON: python3 -m pip install --upgrade neuron\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install --upgrade neuron\n\necho \"\"\necho \"Cloning NetPyNE: git clone https://github.com/Neurosim-lab/netpyne.git\"\necho \"-----------------------------------------------------------------------------\"\ngit clone https://github.com/Neurosim-lab/netpyne.git\n\necho \"\"\necho \"Installing NetPyNE: python3 -m pip install -e netpyne\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install -e netpyne\n\necho \"\"\necho \"Installing ipykernel for Jupyter: python3 -m pip install --upgrade ipykernel\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install --upgrade ipykernel\n\necho \"\"\necho \"Installing Jupyter: python3 -m pip install --upgrade jupyter\"\necho \"-----------------------------------------------------------------------------\"\npython3 -m pip install --upgrade jupyter\n\necho \"\"\necho \"Creating a kernel for Jupyter: ipython kernel install --user --name=env\"\necho \"-----------------------------------------------------------------------------\"\nipython kernel install --user --name=env\n\necho \"\"\necho \"=============================================================================\"\necho \"Your virtual environment is ready for use.\"\necho \"\"\necho \"To deactivate, execute: deactivate\"\necho \"To reactivate, execute: source env/bin/activate\"\necho \"=============================================================================\"\n\nCopying this tutorial\nFor convenience, let's copy this tutorial's directory up to the directory we're working in and then change into that directory.\npwd\ncp -r netpyne/netpyne/tutorials/saving_loading_tut .\ncd saving_loading_tut\npwd\n\nNormal saving\nThen we'll run a simulation with normal saving, using saving_netParams.py (which is used by all simulations in this tutorial), 
saving_normal_cfg.py, and saving_normal_init.py.\nLet's take a look at saving_normal_init.py, to see the standard way to run and save a simulation:\nfrom netpyne import sim\n\ncfg, netParams = sim.readCmdLineArgs(\n simConfigDefault='saving_normal_cfg.py', \n netParamsDefault='saving_netParams.py')\nsim.initialize(simConfig=cfg, netParams=netParams)\nsim.net.createPops()\nsim.net.createCells()\nsim.net.connectCells()\nsim.net.addStims()\nsim.setupRecording()\nsim.runSim()\nsim.gatherData()\nsim.saveData()\nsim.analysis.plotData()\n\nWe could run this on a single core using python3 saving_normal_init.py (if we just want the output) or ipython -i saving_normal_init.py (if we wanted to interact with the simulation afterwards. But we will run this on multiple cores using the following command:", "!mpiexec -n 4 nrniv -python -mpi saving_normal_init.py", "This command does not currently exit to the system prompt, so you will have to restart your kernel. In the menu bar above, click on Kernel, then Restart, then Restart.\nThe whos in the next cell should return Interactive namespace is empty. after the Kernel has been cleared.", "whos", "The simulation should have produced a directory called saving_normal_data with three analysis plots and a data file named saving_normal_data.pkl. We are now going to load the simulation from this file and produce the same plots.", "from netpyne import sim\nsim.loadAll('saving_normal_data/saving_normal_data.pkl')\n\nsim.analysis.plotConn(saveFig='saving_normal_data/saving_normal_plot_conn_pop_strength_matrix_FROMFILE.png');\nsim.analysis.plotRaster(saveFig='saving_normal_data/saving_normal_raster_gid_FROMFILE.png');\nsim.analysis.plotTraces(saveFig='saving_normal_data/saving_normal_traces_FROMFILE.png');", "Compare the plots, they should be identical. Congratulations! You have run a simulation, saved the data, then loaded it later to perform more analysis.\nNow restart your kernel and check the whos.", "whos", "Distributed Saving\nIf you're running large sims, you may want to save the data from each node in a separate file, i.e. 
distributed saving.\nWe'll run a simulation using distributed saving and loading using saving_netParams.py (which is used by all simulations in this tutorial), saving_dist_cfg.py, and saving_dist_init.py.\nThe only changes to the cfg file are renaming the simulation:\ncfg.simLabel = 'saving_dist'\n\nand turning off the saving of the data into one file:\ncfg.savePickle = False #True\n\nOur init file for distributed saving looks like this:\nfrom netpyne import sim\ncfg, netParams = sim.readCmdLineArgs(\n simConfigDefault='saving_dist_cfg.py', \n netParamsDefault='saving_netParams.py')\nsim.initialize(simConfig=cfg, netParams=netParams)\nsim.net.createPops()\nsim.net.createCells()\nsim.net.connectCells()\nsim.net.addStims()\nsim.setupRecording()\nsim.runSim()\n#sim.gatherData()\n#sim.saveData()\n##### new #####\nsim.saveDataInNodes()\nsim.gatherDataFromFiles()\n##### end new #####\nsim.analysis.plotData()\n\nWe turned off gatherData and saveData and replaced those with saveDataInNodes and gatherDataFromFiles.\nLet's run the simulation now.", "!mpiexec -n 4 nrniv -python -mpi saving_dist_init.py", "That should have produced a directory saving_dist_data containing the same three analysis plots and a node_data directory containing a data file from each of the four nodes we used.\nNow restart your kernel so we can load the data from file analyze it again.\nThe whos in the next cell should return Interactive namespace is empty.", "whos\n\nfrom netpyne import sim\nsim.gatherDataFromFiles(simLabel='saving_dist')\n\nsim.analysis.plotConn(saveFig='saving_dist_data/saving_dist_plot_conn_pop_strength_matrix_FROMFILE.png');\nsim.analysis.plotRaster(saveFig='saving_dist_data/saving_dist_raster_gid_FROMFILE.png');\nsim.analysis.plotTraces(saveFig='saving_dist_data/saving_dist_traces_FROMFILE.png');", "Compare the plots, they should be identical except for the connectivity plot, which didn't retain the connectivity for the background inputs.\nNow restart your kernel and check the whos.", "whos", "Interval Saving\nPerhaps you want to save data at intervals in case you have large, long simulations you're worried won't complete.\nWe'll run a simulation using interval saving and loading using saving_netParams.py (which is used by all simulations in this tutorial), saving_int_cfg.py, and saving_int_init.py.\nThe only changes to the cfg file are renaming the simulation:\ncfg.simLabel = 'saving_int'\n\nand turning back on the saving of the data into one file:\ncfg.savePickle = True\n\nOur init file for interval saving looks like this:\nfrom netpyne import sim\nfrom netpyne import sim\n\ncfg, netParams = sim.readCmdLineArgs(\n simConfigDefault='saving_int_cfg.py', \n netParamsDefault='saving_netParams.py')\nsim.initialize(simConfig=cfg, netParams=netParams)\nsim.net.createPops()\nsim.net.createCells()\nsim.net.connectCells()\nsim.net.addStims()\nsim.setupRecording()\n#sim.runSim()\n##### new #####\nsim.runSimIntervalSaving(1000)\n##### end new #####\nsim.gatherData()\nsim.saveData()\nsim.analysis.plotData()\n\nWe turned off runSim and replaced it with runSimIntervalSaving(1000), which will save the simulation every 1000 ms.\nLet's run the simulation now. 
Remember you can run this without MPI using the command python3 saving_int_init.py.", "!mpiexec -n 4 nrniv -python -mpi saving_int_init.py", "That should have produced a directory saving_int_data containing the data file and the same three analysis plots (from the completed simulation) and an interval_data directory containing a data file for each 1000 ms of our 10,000 ms simulation.\nNow restart your kernel so we can load interval data from file.\nThe whos in the next cell should return Interactive namespace is empty.", "whos", "Now, let's assume our simulation timed out, and the last interval save we got was at 5000 ms. We can still analyze that partial data.", "from netpyne import sim\nsim.loadAll('saving_int_data/interval_data/interval_5000.pkl', createNEURONObj=False)\n\nsim.analysis.plotConn(saveFig='saving_int_data/saving_int_plot_conn_pop_strength_matrix_INTERVAL.png');\nsim.analysis.plotRaster(saveFig='saving_int_data/saving_int_raster_gid_INTERVAL.png');\nsim.analysis.plotTraces(saveFig='saving_int_data/saving_int_traces_INTERVAL.png');", "The connectivity plot should be identical and the raster plot is currently failing for interval saving (debugging in progress), but you can see that we recovered partial data from the traces plot.\nCongratulations! You have successfully saved, loaded, and analyzed simulation data in a variety of ways." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
McStasMcXtrace/McCode
Docker/mcstas/mcstasscript/McStasScript_demo.ipynb
gpl-3.0
[ "Demonstration of McStasScript\nThis file demonstrates how McStasScript can be used to run McStas from a python environment in a userfreindly manner.", "import sys\n# Path to McStasScript pythoon file\nsys.path.append('/home/docker/McStasScript')\n\nfrom mcstasscript.interface import instr, plotter, functions\n\n# Creating the instance of the class, insert path to mcrun and to mcstas root directory\nInstr = instr.McStas_instr(\"jupyter_demo\")\n\nInstr.show_components() # Shows available McStas component categories in current installation\n\nInstr.show_components(\"sources\") # Display all McStas source components \n\nInstr.component_help(\"Source_simple\") # Displays help on the Source_simple component\n\nsource = Instr.add_component(\"Source\",\"Source_simple\") # Adds an instance of Source_simple\n\n# Lets add a parameter to the instrument to control the wavelength of the source\nInstr.add_parameter(\"double\", \"wavelength\", value=3,\n comment=\"[AA] Wavelength emmited from source\")\nsource.xwidth = 0.06; source.yheight = 0.08;\nsource.dist = 2; source.focus_xw = 0.05; source.focus_yh = 0.05\nsource.lambda0 = \"wavelength\"; source.dlambda = 0.05; source.flux = 1E8\n\nsource.print_long() # Verify that the information is correct\n\nguide = Instr.add_component(\"Guide\", \"Guide_gravity\", AT=[0,0,2], RELATIVE=\"Source\")\nguide.set_comment=\"Beam extraction and first guide piece\"\n\nguide.show_parameters() # Lets view the parameters available in our guide component\n\nguide.set_parameters({\"w1\" : 0.05, \"w2\" : 0.05, \"h1\" : 0.05, \"h2\" : 0.05,\n \"l\" : 8, \"m\" : 3.5, \"G\" : -9.2})\n\nguide.print_long() # Verify the information on this component is correct\n\n# Add a sample to the instrument\nsample = Instr.add_component(\"sample\", \"PowderN\", AT=[0, 0, 9], RELATIVE=\"Guide\") \n\n# Set parameters corresponding to a copper cylinder\nsample.radius = 0.015; sample.yheight = 0.05; sample.reflections = \"\\\"Cu.laz\\\"\"\n\nInstr.show_components(\"monitors\") # Monitors are needed to record information\n\n# Add 4PI detector to detect all neutrons\nsphere = Instr.add_component(\"PSD_4PI\", \"PSD_monitor_4PI\", RELATIVE=\"sample\")\n\nsphere.nx = 300; sphere.ny = 300\nsphere.radius = 1; sphere.restore_neutron = 1\nsphere.filename = \"\\\"PSD_4PI.dat\\\"\" # filenames need printed quotes, use \\\"\nsphere.print_long() # Verify that monitors have filenames that are strings when printed\n\n# Add PSD monitor to see the direct beam after the sample\nPSD = Instr.add_component(\"PSD\", \"PSD_monitor\", AT=[0,0,1], RELATIVE=\"sample\") \nPSD.xwidth = 0.1; PSD.yheight = 0.1; PSD.nx = 200; PSD.ny = 200\nPSD.filename = \"\\\"PSD.dat\\\"\"; PSD.restore_neutron = 1\n\nL_mon = Instr.add_component(\"L_mon\", \"L_monitor\", RELATIVE=\"PSD\")\n\n# Since the wavelength is an instrument parameter, it can be used when setting parameters\nL_mon.Lmin = \"wavelength - 0.1\"; L_mon.Lmax = \"wavelength + 0.1\"; L_mon.nL = 150\nL_mon.xwidth = 0.1; L_mon.yheight = 0.1\nL_mon.filename = \"\\\"wave.dat\\\"\"; L_mon.restore_neutron = 1\nL_mon.comment = \"Wavelength monitor for narrow range\"\n\nL_mon.print_long()\n\nInstr.print_components() # Lets get an overview of the instrument so far\n\nInstr.show_parameters()", "Running the McStas instrument\nNow we have assembled an instrument and it is time to perform a simulation", "# With increment_folder_name enabled, a new folder with incremented number is created\ndata = Instr.run_full_instrument(foldername=\"jupyter_demo\",\n parameters={\"wavelength\" : 1.5},\n 
mpi=2, ncount=2E7,\n increment_folder_name = True)", "Working with the returned data\nThe returned data object is a list of McStasData objects, each containing the results from a monitor.\nThese data objects also contain preferences for how they should be plotted if this is done automatically.", "wavelength_data = functions.name_search(\"L_mon\", data)\nwavelength_intensity = wavelength_data.Intensity\nwavelength_xaxis = wavelength_data.xaxis\n\nfor index in range(70,75):\n print([wavelength_xaxis[index], wavelength_intensity[index]])", "Plotting the returned data\nThe plot options looks at some metadata in the McStasData for plotting preferences. For this reason these options can be adjusted for individual data files instead of complex syntax for the plotting command.", "# Adjusting PSD_4PI plot\nfunctions.name_plot_options(\"PSD_4PI\", data, log=1, colormap=\"hot\", orders_of_mag=5)\n\nplot = plotter.make_sub_plot(data) # Making subplot of our monitors" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
h-mayorquin/time_series_basic
presentations/2016-01-18(Prediction with Bigrams).ipynb
bsd-3-clause
[ "Prediction with Bigrams\nIn this notebook I will show how to do predictions in text using a very simple scheme. The idea is to get the frequency of the letters for a certain text and for each letter predict the most frequent letter. This method is in my opinion the simplest method that do not use temporal information.\nIn order to obtain a prediction that is as simple and yet includes temporal information to make the prediction we calculate the most frequent bigrams (pair of letters like 'r' and 's' at the end of the word letters). We calculate for every letter what is the letter that is more likely to follow and use that as a prediction.\nSetup", "import nltk\nfrom nltk.book import text7 as text", "First we extract the information from the text.", "letters = ' '.join(text)\nletters = [letter.lower() for letter in letters] # Get the lowercase\nsymbols = set(letters)\nNletters = len(letters)\nNsymbols = len(symbols)\n\nsymbols\n\nprint('Number of letters', Nletters)\nprint('Nymbols', Nsymbols)", "We get the frequency for all the letters and the most common which turns out to be a space. Latter we will analyze how the result changes when we remove space from the whole analysis.", "freq_letters = nltk.FreqDist(letters) # Get the most frequent letters\nmost_common_letter = freq_letters.most_common(1)[0][0]\nprint('most common letter', most_common_letter)\n\nfreq_letters.plot()", "We will the bigrams frequency as well", "bigrams = nltk.bigrams(letters)\nfreq_bigrams = nltk.FreqDist(bigrams)", "Now we want to extract the next most probable letter for every letter. From the bigran frequency first we make a dictionary (master dictionary ) of all the next letters and their frequency for each letter (each symbol here). In particular we use a dictionary where the key is the symbol and the value is a list with a the tuples of the next letter and its frequency. With this in our hand we take for every list the one with the maximun frequency using a lambda function and build the next letter dictionary with it.", "master_dictionary = {}\nnext_letters = {}\n\nfor symbol in symbols:\n master_dictionary[symbol] = [(key[1], value) for key,value in freq_bigrams.items() if key[0]==symbol]\n\nfor symbol in symbols:\n aux = max(master_dictionary[symbol], key=lambda x:x[1]) # Maximize over the second element of the tuple\n next_letters[symbol] = aux[0]", "Predictions", "prediction = 0\nfor letter in letters:\n if letter == most_common_letter:\n prediction += 1 # Get's the result right\n\nprediction /= Nletters\nprint('Predictions using the most comon letter', prediction * 100.0)\n\n# Now we make use of the temporal information\nprediction_temp = 0\nlast_letter = None\nfor index, letter in enumerate(letters):\n if last_letter: # If last_letter is not None\n if next_letters[last_letter] == letter:\n prediction_temp += 1\n # Save the last letter\n last_letter = letter\n\nprediction_temp /= Nletters\nprint('Prediction using bigramsl information', prediction_temp * 100)\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ReactiveX/RxPY
notebooks/reactivex.io/Marble Diagrams.ipynb
mit
[ "Marble Diagrams with RxPY\nThis is a fantastic feature to produce and visualize streams and to verify how various operators work on them.\nHave also a look at rxmarbles for interactive visualisations.\nONE DASH IS <font size=\"40px\">100</font> MILLISECONDS!", "%run startup.py", "Create Streams from Strings: from_marbles", "rst(O.from_marbles)\nts = time.time()\n# producing a stream\ns = O.from_marbles('1-2-3|')\n# mapping into real time:\ns2 = s.to_blocking()\n# adding times\ns3 = s2.map(lambda x: 'val: %s, dt: %s' % (x, time.time()-ts))\n# subscribing to it:\nd = s3.subscribe(print)", "Visualize Streams as Marble Strings: to_marbles", "rst(rx.core.blockingobservable.BlockingObservable.to_marbles)\ns1 = O.from_marbles('1---2-3|')\ns2 = O.from_marbles('-a-b-c-|')\nprint(s1.merge(s2).to_blocking().to_marbles())\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
jgdwyer/nn-convection
notebooks/Code snippets.ipynb
apache-2.0
[ "Test that I am extracting the parameters correctly", "out_test = r_mlp.predict(x3)\nout_test = scaler_y.inverse_transform(out_test)\nw1 = r_mlp.get_parameters()[0].weights\nw2 = r_mlp.get_parameters()[1].weights\nw3 = r_mlp.get_parameters()[2].weights\nb1 = r_mlp.get_parameters()[0].biases\nb2 = r_mlp.get_parameters()[1].biases\nb3 = r_mlp.get_parameters()[2].biases\n\nxscale_min = scaler_x.data_min_\nxscale_max = scaler_x.data_max_\nyscale_absmax = scaler_y.max_abs_\n\nout_test_check = np.dot(x3,w1) + b1\nout_test_check[out_test_check<0] = 0\nout_test_check = np.dot(out_test_check,w2) + b2\nout_test_check[out_test_check<0] = 0\nout_test_check = np.dot(out_test_check,w3) + b3\n\nout_test_check = out_test_check*yscale_absmax\nptl.plot(out_test-out_test_check)\nplt.show()", "Check that I understand what classification is doing", "c_w1=c_mlp.get_parameters()[0].weights\nc_w2=c_mlp.get_parameters()[1].weights\nc_w3=c_mlp.get_parameters()[2].weights\nc_b1=c_mlp.get_parameters()[0].biases\nc_b2=c_mlp.get_parameters()[1].biases\nc_b3=c_mlp.get_parameters()[2].biases\n\nout_test_check = np.dot(x1 ,c_w1) + c_b1\nout_test_check[out_test_check<0] = 0\nout_test_check = np.dot(out_test_check,c_w2) + c_b2\nout_test_check[out_test_check<0] = 0\nout_test_check = np.dot(out_test_check,c_w3) + c_b3\n\nexpo = np.exp(out_test_check)\nexpos = np.sum(expo,axis=1)\n\n#foo=np.empty((x1.shape[0], 2))\nfoo[:,0] = expo[:,0]/expos\nfoo[:,1] = expo[:,1]/expos\nff=np.zeros(x1.shape[0])\nff[foo[:,1]>0.5]=1.\nprint(x1.shape)\nee=c_mlp.predict(x1)\nee=np.squeeze(ee)\nprint(ee.shape)\nprint(ff.shape)\nprint(np.sum(np.logical_and(ff==0,ee==1)))\nprint(np.sum(np.logical_and(ff==1,ee==0)))\n\nrexpo = np.exp(out_test_check[:,1])", "Plot (1-d) histograms at each input and output level", "plt.figure(figsize=(8,40))\n_,ax = plt.subplots(lev.size,2,sharex=True)\nfor i in range(lev.size):\n step=.05\n bins=np.arange(-1,1+step,step)\n n,bins,_ =ax[i,0].hist(unpack(x_train_norm,'T')[:,i],bins=bins,facecolor='yellow',alpha=0.5,normed=True)\n n2,bins2,_=ax[i,1].hist(unpack(y_train_norm,'T')[:,i],bins=bins,facecolor='blue' ,alpha=0.5,normed=True)\n\n\n ax[i,0].set_xlim((-1,1))\n ax[i,0].set_ylim(0,np.amax(n))\n ax[i,1].set_ylim(0,np.amax(n2))\n\n\n #ax[i,0].set_ylim([-1,1])\n print(np.amax(n))\n print(np.amax(n2))\n #ax[i,1].hist(unpack(x_train_norm,'q')[:,i]*step,bins=np.arange(-1,1+step,step),facecolor='yellow',alpha=0.5,normed=True)\n #ax[i,1].hist(unpack(y_train_norm,'q')[:,i]*step,bins=np.arange(-1,1+step,step),facecolor='blue' ,alpha=0.5,normed=True)\n #ax[i,1].set_xlim([-1,1])\n\n\n\n #plt.subplot(lev.size,2,i+1+lev.size)\n #plt.hist(y_train_norm[:,i],100,facecolor='green')\n #ax[i,0].get_yaxis().set_visible(False)\n\n #n, bins, patches = plt.hist(y_train_norm[:,28], 100, normed=1, facecolor='green', alpha=0.75)\nplt.show()", "Calculate out of bag error importance for random forest regressor", "\noob_mlp = RandomForestRegressor(n_estimators=30)\n\noob_mlp.fit(x2,y2)\nindmin=np.argmin(oob_mlp.feature_importances_)\nprint(indmin)\nprint(np.min(oob_mlp.feature_importances_))\nprint(oob_mlp.score(x3,y3))\nx2 = np.delete(x2,indmin,1)\n\nplt.plot(unpack(oob_mlp.feature_importances_,'T'),lev,label='T')\nplt.plot(unpack(oob_mlp.feature_importances_,'q'),lev,label='q')\nplt.ylim((1,0))\nplt.show()\noob_mlp2 = RandomForestRegressor(n_estimators=30)\noob_mlp2.fit(x2[:,oob_mlp.feature_importances_>0.],cv2)\n\n#oob_mlp2.feature_importances_.shape\nnp.argmin(oob_mlp.feature_importances_)#[oob_mlp.feature_importances_>0.]", "Principal 
Component Analysis attempt", "from sklearn.decomposition import PCA\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport matplotlib.pylab as pylab\n%matplotlib inline\nreload(nnload)\nx, y, cv, Pout, lat, lev, dlev, timestep = nnload.loaddata(data_dir + 'nntest.nc', minlev,\n rainonly=rainonly) #,all_lats=False,indlat=8)\nprint(x.shape)\n# pcax = PCA(n_components=10)\n# pcay = PCA(n_components=10)\npcax=\nxpp = preprocessing.StandardScaler()\nypp = preprocessing.StandardScaler()\nx = xpp.fit_transform(x)\ny = ypp.fit_transform(y)\n# x = pcax.fit_transform(x)\n# y = pcay.fit_transform(y)\n# Subsample data\nx1, x2, x3, y1, y2, y3 = nnload.subsample(x, y, N_samples=10000)\n\nprint(pcax.explained_variance_ratio_)\nprint(x2.shape)\nplt.plot(pcax.components_[0,0:15],lev,color='blue')\nplt.plot(pcax.components_[0,15:30],lev,color='red')\n\n\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
synthicity/activitysim
activitysim/examples/example_estimation/notebooks/15_non_mand_tour_freq.ipynb
agpl-3.0
[ "Estimating Non-Mandatory Tour Frequency\nThis notebook illustrates how to re-estimate a single model component for ActivitySim. This process \nincludes running ActivitySim in estimation mode to read household travel survey files and write out\nthe estimation data bundles used in this notebook. To review how to do so, please visit the other\nnotebooks in this directory.\nLoad libraries", "import os\nimport larch # !conda install larch -c conda-forge # for estimation\nimport pandas as pd", "We'll work in our test directory, where ActivitySim has saved the estimation data bundles.", "os.chdir('test')", "Load data and prep model for estimation", "modelname = \"nonmand_tour_freq\"\n\nfrom activitysim.estimation.larch import component_model\nmodel, data = component_model(modelname, return_data=True)", "This component actually has a distinct choice model for each person type, so\ninstead of a single model there's a dict of models.", "type(model)\n\nmodel.keys()", "Review data loaded from the EDB\nWe can review the data loaded as well, similarly there is seperate data \nfor each person type.\nCoefficients", "data.coefficients['PTYPE_FULL']", "Utility specification", "data.spec['PTYPE_FULL']", "Chooser data", "data.chooser_data['PTYPE_FULL']", "Estimate\nWith the model setup for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large enough household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood maximizing solution. Larch has a built-in estimation methods including BHHH, and also offers access to more advanced general purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.", "for k, m in model.items():\n m.estimate(method='SLSQP')", "Estimated coefficients", "model['PTYPE_FULL'].parameter_summary()", "Output Estimation Results", "from activitysim.estimation.larch import update_coefficients\nfor k, m in model.items():\n result_dir = data.edb_directory/k/\"estimated\"\n update_coefficients(\n m, data.coefficients[k], result_dir,\n output_file=f\"{modelname}_{k}_coefficients_revised.csv\",\n );", "Write the model estimation report, including coefficient t-statistic and log likelihood", "for k, m in model.items():\n result_dir = data.edb_directory/k/\"estimated\"\n m.to_xlsx(\n result_dir/f\"{modelname}_{k}_model_estimation.xlsx\", \n data_statistics=False,\n )", "Next Steps\nThe final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.", "result_dir = data.edb_directory/'PTYPE_FULL'/\"estimated\"\npd.read_csv(result_dir/f\"{modelname}_PTYPE_FULL_coefficients_revised.csv\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
cpcloud/ibis
docs/tutorial/04-More-Value-Expressions.ipynb
apache-2.0
[ "More Value Expressions\nLet's walk through some more value expressions.\nSetup", "!curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db'\n\nimport os\nimport tempfile\n\nimport ibis\n\nibis.options.interactive = True\n\nconnection = ibis.sqlite.connect(\n os.path.join(tempfile.gettempdir(), 'geography.db')\n)", "Type casting\nThe Ibis type system supports the most common data types used in analytics, including support for nested types like lists, structs, and maps.\nType names can be used to cast from one type to another.", "countries = connection.table('countries')\ncountries\n\ncountries = connection.table('countries')\ncountries.population.cast('float').sum()\n\ncountries.area_km2.cast('int32').sum()", "Case / if-then-else expressions\nWe support a number of variants of the SQL-equivalent CASE expression, and will add more API functions over time to meet different use cases and enhance the expressiveness of any branching-based value logic.", "expr = (\n countries.continent.case()\n .when('AF', 'Africa')\n .when('AN', 'Antarctica')\n .when('AS', 'Asia')\n .when('EU', 'Europe')\n .when('NA', 'North America')\n .when('OC', 'Oceania')\n .when('SA', 'South America')\n .else_(countries.continent)\n .end()\n .name('continent_name')\n)\n\nexpr.value_counts()", "If the else_ default condition is not provided, any values not matching one of the conditions will be NULL.", "expr = (\n countries.continent.case()\n .when('AF', 'Africa')\n .when('AS', 'Asia')\n .when('EU', 'Europe')\n .when('NA', 'North America')\n .when('OC', 'Oceania')\n .when('SA', 'South America')\n .end()\n .name('continent_name_with_nulls')\n)\n\nexpr.value_counts()", "To test for an arbitrary series of boolean conditions, use the case API method and pass any boolean expressions potentially involving columns of the table:", "expr = (\n ibis.case()\n .when(countries.population > 25_000_000, 'big')\n .when(countries.population < 5_000_000, 'small')\n .else_('medium')\n .end()\n .name('size')\n)\n\ncountries['name', 'population', expr].limit(10)", "Simple ternary-cases (like the Python X if COND else Y) can be written using the ifelse function:", "expr = (countries.continent == 'AS').ifelse('Asia', 'Not Asia').name('is_asia')\n\ncountries['name', 'continent', expr].limit(10)", "Set membership\nThe isin and notin functions are like their pandas counterparts. These can take:\n\nA list of value expressions, either literal values or other column expressions\nAn array/column expression of some kind", "is_america = countries.continent.isin(['NA', 'SA'])\ncountries[is_america].continent.value_counts()", "You can also check for membership in an array. Here is an example of filtering based on the top 3 (ignoring ties) most frequently-occurring values in the string_col column of alltypes:", "top_continents = countries.continent.value_counts().limit(3).continent\ntop_continents_filter = countries.continent.isin(top_continents)\nexpr = countries[top_continents_filter]\n\nexpr.count()", "This is a common enough operation that we provide a special analytical filter function topk:", "countries.continent.topk(3)", "Cool, huh? More on topk later.\nNull Checking\nLike their pandas equivalents, the isnull and notnull functions return True values if the values are null, or non-null, respectively. 
For example:", "expr = (\n countries.continent.case()\n .when('AF', 'Africa')\n .when('EU', 'Europe')\n .when('AS', 'Asia')\n .end()\n .name('top_continent_name')\n)\n\nexpr.isnull().value_counts()", "Functions like isnull can be combined with case expressions or functions like ifelse to replace null values with some other value. ifelse here will use the first value supplied for any True value and the second value for any False value. Either value can be a scalar or array.", "expr2 = expr.isnull().ifelse('Other continent', expr).name('continent')\nexpr2.value_counts()", "Distinct-based operations\nIbis supports using distinct to remove duplicate rows or values on tables or arrays. For example:", "countries[['continent']].distinct()", "This can be combined with count to form a reduction metric:", "metric = countries[['continent']].distinct().count().name('num_continents')\nmetric", "String operations\nWhat's supported is pretty basic right now. We intend to support the full gamut of regular expression munging with a nice API, though in some cases some work will be required on SQLite's backend to support everything.", "countries[['name']].limit(5)", "At the moment, basic substring operations (substr, with conveniences left and right) and Python-like APIs such as lower and upper (for case normalization) are supported. So you could count first letter occurrences in a string column like so:", "expr = countries.name.lower().left(1).name('first_letter')\nexpr.value_counts().sort_by(('count', False)).limit(10)", "For fuzzy and regex filtering/searching, you can use one of the following\n\nlike, works as the SQL LIKE keyword\nrlike, like re.search or SQL RLIKE\ncontains, like x in str_value in Python", "countries[countries.name.like('%GE%')].name\n\ncountries[countries.name.lower().rlike('.*ge.*')].name\n\ncountries[countries.name.lower().contains('ge')].name", "Timestamp operations\nDate and time functionality is relatively limited at present compared with pandas, but we'll get there. The main things we have right now are\n\nField access (year, month, day, ...)\nTimedeltas\nComparisons with fixed timestamps", "independence = connection.table('independence')\n\nindependence[\n independence.independence_date,\n independence.independence_date.month().name('month'),\n].limit(10)", "Somewhat more comprehensively", "def get_field(f):\n return getattr(independence.independence_date, f)().name(f)\n\n\nfields = [\n 'year',\n 'month',\n 'day',\n] # datetime fields can also use: 'hour', 'minute', 'second', 'millisecond'\nprojection = [independence.independence_date] + [get_field(x) for x in fields]\nindependence[projection].limit(10)", "For timestamp arithmetic and comparisons, check out functions in the top level ibis namespace. This include things like day and second, but also the ibis.timestamp function:", "independence[\n independence.independence_date.min(),\n independence.independence_date.max(),\n independence.count().name('nrows'),\n].distinct()\n\nindependence[independence.independence_date > '2000-01-01'].count()", "Some backends support adding offsets. For example:\npython\nindependence.independence_date + ibis.interval(days=1)\nibis.now() - independence.independence_date" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
elmaso/tno-ai
aind2-dl-master/IMDB_In_Keras_Solutions.ipynb
gpl-3.0
[ "Analyzing IMDB Data in Keras - Solution", "# Imports\nimport numpy as np\nimport keras\nfrom keras.datasets import imdb\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.preprocessing.text import Tokenizer\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nnp.random.seed(42)", "1. Loading the data\nThis dataset comes preloaded with Keras, so one simple command will get us training and testing data. There is a parameter for how many words we want to look at. We've set it at 1000, but feel free to experiment.", "# Loading the data (it's preloaded in Keras)\n(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=1000)\n\nprint(x_train.shape)\nprint(x_test.shape)", "2. Examining the data\nNotice that the data has been already pre-processed, where all the words have numbers, and the reviews come in as a vector with the words that the review contains. For example, if the word 'the' is the first one in our dictionary, and a review contains the word 'the', then there is a 1 in the corresponding vector.\nThe output comes as a vector of 1's and 0's, where 1 is a positive sentiment for the review, and 0 is negative.", "print(x_train[0])\nprint(y_train[0])", "3. One-hot encoding the output\nHere, we'll turn the input vectors into (0,1)-vectors. For example, if the pre-processed vector contains the number 14, then in the processed vector, the 14th entry will be 1.", "# Turning the output into vector mode, each of length 1000\ntokenizer = Tokenizer(num_words=1000)\nx_train = tokenizer.sequences_to_matrix(x_train, mode='binary')\nx_test = tokenizer.sequences_to_matrix(x_test, mode='binary')\nprint(x_train.shape)\nprint(x_test.shape)", "And we'll one-hot encode the output.", "# One-hot encoding the output\nnum_classes = 2\ny_train = keras.utils.to_categorical(y_train, num_classes)\ny_test = keras.utils.to_categorical(y_test, num_classes)\nprint(y_train.shape)\nprint(y_test.shape)", "4. Building the model architecture\nBuild a model here using sequential. Feel free to experiment with different layers and sizes! Also, experiment adding dropout to reduce overfitting.", "# Building the model architecture with one layer of length 100\nmodel = Sequential()\nmodel.add(Dense(512, activation='relu', input_dim=1000))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='softmax'))\nmodel.summary()\n\n# Compiling the model using categorical_crossentropy loss, and rmsprop optimizer.\nmodel.compile(loss='categorical_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])", "5. Training the model\nRun the model here. Experiment with different batch_size, and number of epochs!", "# Running and evaluating the model\nhist = model.fit(x_train, y_train,\n batch_size=32,\n epochs=10,\n validation_data=(x_test, y_test), \n verbose=2)", "6. Evaluating the model\nThis will give you the accuracy of the model. Can you get something over 85%?", "score = model.evaluate(x_test, y_test, verbose=0)\nprint(\"accuracy: \", score[1])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
kalugny/pypachy
examples/jupyter/investigate-unexpected-sales.ipynb
mit
[ "Intro\nIn this example, we'll use a combination of Jupyter notebooks, Pandas, and Pachyderm to analyze Citi Bike sales data.", "%matplotlib inline\n\nimport os\nimport datetime\nfrom io import StringIO\n\nimport pandas as pd\nimport python_pachyderm\nfrom python_pachyderm.service import pps_proto", "Insert Data\nFirst, we'll create a couple of repos and populate them:\n\ntrips - This repo is populated with a daily file that records the number of bicycle trips recorded by NYC's citibike bike sharing company on that particular day (data from here).\nweather - This repo is populated daily with a JSON file representing the weather forecast for that day from forecast.io.", "client = python_pachyderm.Client()\n\n# First create the repos/pipelines\nclient.create_repo(\"trips\")\nclient.create_repo(\"weather\")\nclient.create_pipeline(\n \"jupyter\",\n transform=pps_proto.Transform(\n image=\"pachyderm/pachyderm_jupyter:2019\",\n cmd=[\"python3\", \"merge.py\"],\n ),\n input=pps_proto.Input(cross=[\n pps_proto.Input(pfs=pps_proto.PFSInput(glob=\"/\", repo=\"weather\")),\n pps_proto.Input(pfs=pps_proto.PFSInput(glob=\"/\", repo=\"trips\")),\n ])\n)\n\n# Populate the input repos\ndef insert_data(name):\n print(\"Inserting {} data...\".format(name))\n with client.commit(name, \"master\") as c:\n data_dir = \"{}_data\".format(name)\n python_pachyderm.put_files(client, data_dir, c, \"/\")\n \n return c\n \ntrips_commit = insert_data(\"trips\")\nweather_commit = insert_data(\"weather\")\n\n# Wait for the commits to finish\nprint(\"Waiting for commits to finish...\")\nfor commit in [client.wait_commit(c.id)[0] for c in [trips_commit, weather_commit]]:\n print(commit)\n\nfile = client.get_file((\"jupyter\", \"master\"), \"data.csv\")\ncontents = \"\\n\".join([chunk.decode(\"utf8\") for chunk in file])\ndf = pd.read_csv(StringIO(contents), names=[\"Date\", \"Precipitation\", \"Trips\", \"Sales\"], index_col=\"Date\")\ndf.index = pd.to_datetime(df.index)\ndf.sort_index(inplace=True)\n\n# Get just July 2016\ndf = df[datetime.datetime(year=2016, month=7, day=1):datetime.datetime(year=2016, month=7, day=31)]\nprint(df)", "Visualize the sales in the context of weather\nFinally, we confirm our suspicions by visualizing the precipitation probabilities with the sales data:", "ax = df.plot(secondary_y=[\"Precipitation\"], figsize=(10, 8))\nax.set_ylabel(\"Sales ($), # Trips\")\nax.right_ax.set_ylabel(\"Precipitation probability\")\nax.right_ax.legend(loc=\"best\")\nax.legend(loc=\"upper left\")", "We can see that their was a probability of precipitation in NYC above 70% both of the days in question. This is likely to be the explanation for the poor sales. Of course, we can attach our Jupyter notebook other parts of the data to explore other unexpected behavior, develop further analyses, etc." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
mclaughlin6464/pearce
notebooks/Pearce wt integral tests.ipynb
mit
[ "I've implemented the integral of wt in pearce. This notebook verifies it works as I believe it should.", "from pearce.mocks import cat_dict\nimport numpy as np\nfrom os import path\nfrom astropy.io import fits\n\nimport matplotlib\n#matplotlib.use('Agg')\nfrom matplotlib import pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set()", "Load up the tptY3 buzzard mocks.", "fname = '/u/ki/jderose/public_html/bcc/measurement/y3/3x2pt/buzzard/flock/buzzard-2/tpt_Y3_v0.fits'\nhdulist = fits.open(fname)\n\nhdulist.info()\n\nhdulist[0].header\n\nz_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])\nzbin=1\n\na = 0.81120\nz = 1.0/a - 1.0", "Load up a snapshot at a redshift near the center of this bin.", "print z", "This code load a particular snapshot and and a particular HOD model. In this case, 'redMagic' is the Zheng07 HOD with the f_c variable added in.", "cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}\ncat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!\n\ncat.load_catalog(a, particles = True)\n\ncat.load_model(a, 'redMagic')\n\nfrom astropy.cosmology import FlatLambdaCDM\n\ncosmo = FlatLambdaCDM(H0 = 100, Om0 = 0.3, Tcmb0=2.725)\n\n#cat.cosmology = cosmo # set to the \"standard\" one\n#cat.h = cat.cosmology.h", "Take the zspec in our selected zbin to calculate the dN/dz distribution. The below cell calculate the redshift distribution prefactor\n$$ W = \\frac{2}{c}\\int_0^{\\infty} dz H(z) \\left(\\frac{dN}{dz} \\right)^2 $$", "hdulist[8].columns\n\nnz_zspec = hdulist[8]\n\nzbin_edges = [row[0] for row in nz_zspec.data]\nzbin_edges.append(nz_zspec.data[-1][2]) # add the last bin edge\nzbin_edges = np.array(zbin_edges)\nNz = np.array([row[2+zbin] for row in nz_zspec.data])\nN_total = np.sum(Nz)\ndNdz = Nz/N_total\n\nW = cat.compute_wt_prefactor(zbin_edges, dNdz)\n\nprint W", "If we happened to choose a model with assembly bias, set it to 0. 
Leave all parameters as their defaults, for now.", "params = cat.model.param_dict.copy()\n\nparams['mean_occupation_centrals_assembias_param1'] = 0\nparams['mean_occupation_satellites_assembias_param1'] = 0\nparams['logMmin'] = 12.0\nparams['sigma_logM'] = 0.2\nparams['f_c'] = 0.19\nparams['alpha'] = 1.21\nparams['logM1'] = 13.71\nparams['logM0'] = 11.39\n\nprint params\n\ncat.populate(params)\n\nnd_cat = cat.calc_analytic_nd()\nprint nd_cat\n\ncat.cosmology\n\narea = 4635.4 #sq degrees\nfull_sky = 41253 #sq degrees\n\nvolIn, volOut = cat.cosmology.comoving_volume(z_bins[zbin-1]), cat.cosmology.comoving_volume(z_bins[zbin])\n\nfullsky_volume = volOut-volIn\nsurvey_volume = fullsky_volume*area/full_sky\nnd_mock = N_total/survey_volume\nprint nd_mock\n\nvolIn.value, volOut\n\ncorrect_nds = np.array([1e-3, 1e-3, 1e-3, 4e-4, 1e-4])\n\n%%bash\nls ~jderose/public_html/bcc/catalog/redmagic/y3/buzzard/flock/buzzard-0/a/buzzard-0_1.6_y3_run_redmapper_v6.4.20_redmagic_*vlim_area.fit\n\nvol_fname = '/u/ki/jderose/public_html/bcc/catalog/redmagic/y3/buzzard/flock/buzzard-0/a/buzzard-0_1.6_y3_run_redmapper_v6.4.20_redmagic_highlum_1.0_vlim_area.fit'\nvol_hdulist = fits.open(vol_fname)\n\nnd_mock.value/nd_cat\n\n#compute the mean mass\nmf = cat.calc_mf()\nHOD = cat.calc_hod()\nmass_bin_range = (9,16)\nmass_bin_size = 0.01\nmass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )\n\nmean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\\\n np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])\nprint mean_host_mass\n\ntheta_bins = np.logspace(np.log10(2.5), np.log10(2000), 25)/60 #binning used in buzzard mocks\ntpoints = (theta_bins[1:]+theta_bins[:-1])/2\n\nr_bins = np.logspace(-0.5, 1.7, 16)/cat.h\nrpoints = (r_bins[1:]+r_bins[:-1])/2\n\nr_bins\n\nwt = cat.calc_wt(theta_bins, r_bins, W)\n\nwt\n\nr_bins", "Use my code's wrapper for halotools' xi calculator. Full source code can be found here.", "xi = cat.calc_xi(r_bins, do_jackknife=False)", "Interpolate with a Gaussian process. May want to do something else \"at scale\", but this is quick for now.", "import george\nfrom george.kernels import ExpSquaredKernel\nkernel = ExpSquaredKernel(0.05)\ngp = george.GP(kernel)\ngp.compute(np.log10(rpoints))\n\nprint xi\n\nxi[xi<=0] = 1e-2 #ack\n\nfrom scipy.stats import linregress\nm,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))\n\nplt.plot(rpoints, (2.22353827e+03)*(rpoints**(-1.88359)))\n#plt.plot(rpoints, b2*(rpoints**m2))\n\nplt.scatter(rpoints, xi)\nplt.loglog();\n\nplt.plot(np.log10(rpoints), b+(np.log10(rpoints)*m))\n#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))\n#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))\n\nplt.scatter(np.log10(rpoints), np.log10(xi) )\n#plt.loglog();\n\nprint m,b\n\nrpoints_dense = np.logspace(-0.5, 2, 500)\n\nplt.scatter(rpoints, xi)\nplt.plot(rpoints_dense, np.power(10, gp.predict(np.log10(xi), np.log10(rpoints_dense))[0]))\nplt.loglog();", "This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta). \nThis plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. 
The red lines show the maximum value of r for the integral I'm performing.\nPerform the below integral in each theta bin:\n$$ w(\\theta) = W \\int_0^\\infty du \\xi \\left(r = \\sqrt{u^2 + \\bar{x}^2(z)\\theta^2} \\right) $$\nWhere $\\bar{x}$ is the median comoving distance to z.", "print zbin\n\n#a subset of the data from above. I've verified it's correct, but we can look again. \nzbin = 1\nwt_redmagic = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/mcmc/buzzard2_wt_%d%d.npy'%(zbin,zbin))", "The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.", "from scipy.special import gamma\ndef wt_analytic(m,b,t,x):\n return W*b*np.sqrt(np.pi)*(t*x)**(1 + m)*(gamma(-(1./2) - m/2.)/(2*gamma(-(m/2.))) )\n\ntheta_bins_rm = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks\ntpoints_rm = (theta_bins_rm[1:]+theta_bins_rm[:-1])/2\n\nplt.plot(tpoints, wt, label = 'My Calculation')\nplt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')\n#plt.plot(tpoints_rm, W.to(\"1/Mpc\").value*mathematica_calc, label = 'Mathematica Calc')\n#plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )\n\nplt.ylabel(r'$w(\\theta)$')\nplt.xlabel(r'$\\theta \\mathrm{[degrees]}$')\nplt.loglog();\nplt.legend(loc='best')\n\nprint bias2\n\nplt.plot(rpoints, xi/xi_mm)\nplt.plot(rpoints, cat.calc_bias(r_bins))\nplt.plot(rpoints, bias2*np.ones_like(rpoints))\nplt.xscale('log')\n\nplt.plot(rpoints, xi, label = 'Galaxy')\nplt.plot(rpoints, xi_mm, label = 'Matter')\nplt.loglog()\nplt.legend(loc ='best')\n\nfrom astropy import units\nfrom scipy.interpolate import interp1d\n\ncat.cosmology\n\nimport pyccl as ccl\nob = 0.047\nom = cat.cosmology.Om0\noc = om - ob\nsigma_8 = 0.82\nh = cat.h\nns = 0.96\ncosmo = ccl.Cosmology(Omega_c =oc, Omega_b=ob, h=h, n_s=ns, sigma8=sigma_8 )\n\nbig_rbins = np.logspace(1, 2.1, 21)\nbig_rbc = (big_rbins[1:] + big_rbins[:-1])/2.0\nxi_mm2 = ccl.correlation_3d(cosmo, cat.a, big_rbc)\n\nplt.plot(rpoints, xi)\nplt.plot(big_rbc, xi_mm2)\nplt.vlines(30, 1e-3, 1e2)\nplt.loglog()\n\nplt.plot(np.logspace(0,1.5, 20), xi_interp(np.log10(np.logspace(0,1.5,20))))\nplt.plot(np.logspace(1.2,2.0, 20), xi_mm_interp(np.log10(np.logspace(1.2,2.0,20))))\nplt.vlines(30, -3, 2)\n#plt.loglog()\nplt.xscale('log')\n\nxi_interp = interp1d(np.log10(rpoints), np.log10(xi))\nxi_mm_interp = interp1d(np.log10(big_rbc), np.log10(xi_mm2))\n\nprint xi_interp(np.log10(30))/xi_mm_interp(np.log10(30))\n\n\n#xi = cat.calc_xi(r_bins)\n\nxi_interp = interp1d(np.log10(rpoints), np.log10(xi))\nxi_mm_interp = interp1d(np.log10(big_rbc), np.log10(xi_mm2))\n\n#xi_mm = cat._xi_mm#self.calc_xi_mm(r_bins,n_cores='all')\n#if precomputed, will just load the cache\n\nbias2 = np.mean(xi[-3:]/xi_mm[-3:]) #estimate the large scale bias from the box\n#print bias2\n#note i don't use the bias builtin cuz i've already computed xi_gg. \n\n#Assume xi_mm doesn't go below 0; will fail catastrophically if it does. 
but if it does we can't hack around it.\n#idx = -3\n#m,b,_,_,_ =linregress(np.log10(rpoints), np.log10(xi))\n\n#large_scale_model = lambda r: bias2*(10**b)*(r**m) #should i use np.power?\nlarge_scale_model = lambda r: (10**b)*(r**m) #should i use np.power?\n\ntpoints = (theta_bins[1:] + theta_bins[:-1])/2.0\nwt_large = np.zeros_like(tpoints)\nwt_small = np.zeros_like(tpoints)\nx = cat.cosmology.comoving_distance(cat.z)*cat.a/cat.h\n\nassert tpoints[0]*x.to(\"Mpc\").value/cat.h >= r_bins[0]\n #ubins = np.linspace(10**-6, 10**4.0, 1001)\nubins = np.logspace(-6, 3.0, 1001)\nubc = (ubins[1:]+ubins[:-1])/2.0\n\ndef integrate_xi(bin_no):#, w_theta, bin_no, ubc, ubins) \n int_xi = 0\n t_med = np.radians(tpoints[bin_no])\n for ubin_no, _u in enumerate(ubc):\n _du = ubins[ubin_no+1]-ubins[ubin_no]\n u = _u*units.Mpc*cat.a/cat.h\n du = _du*units.Mpc*cat.a/cat.h\n\n r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h\n\n #if r > (units.Mpc)*cat.Lbox/10:\n try:\n int_xi+=du*bias2*(np.power(10, \\\n xi_mm_interp(np.log10(r.value)))) \n except ValueError:\n int_xi+=du*0\n #else:\n #int_xi+=du*0#(np.power(10, \\\n #xi_interp(np.log10(r.value))))\n wt_large[bin_no] = int_xi.to(\"Mpc\").value/cat.h\n\ndef integrate_xi_small(bin_no):#, w_theta, bin_no, ubc, ubins) \n int_xi = 0\n t_med = np.radians(tpoints[bin_no])\n for ubin_no, _u in enumerate(ubc):\n _du = ubins[ubin_no+1]-ubins[ubin_no]\n u = _u*units.Mpc*cat.a/cat.h\n du = _du*units.Mpc*cat.a/cat.h\n\n r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h\n\n #if r > (units.Mpc)*cat.Lbox/10:\n #int_xi+=du*large_scale_model(r.value)\n #else:\n try:\n int_xi+=du*(np.power(10, \\\n xi_interp(np.log10(r.value))))\n except ValueError:\n try:\n int_xi+=du*bias2*(np.power(10, \\\n xi_mm_interp(np.log10(r.value))))\n except ValueError:\n int_xi+=0*du\n wt_small[bin_no] = int_xi.to(\"Mpc\").value/cat.h\n#Currently this doesn't work cuz you can't pickle the integrate_xi function.\n#I'll just ignore for now. 
This is why i'm making an emulator anyway\n#p = Pool(n_cores) \nmap(integrate_xi, range(tpoints.shape[0]));\nmap(integrate_xi_small, range(tpoints.shape[0]));\n\n#wt_large[wt_large<1e-10] = 0\nwt_small[wt_small<1e-10] = 0\n\nwt_large\n\nplt.plot(tpoints, wt, label = 'My Calculation')\nplt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')\n#plt.plot(tpoints, W*wt_large, label = 'LS')\nplt.plot(tpoints, W*wt_small, label = \"My Calculation\")\n#plt.plot(tpoints, wt+W*wt_large, label = \"both\")\n#plt.plot(tpoints_rm, W.to(\"1/Mpc\").value*mathematica_calc, label = 'Mathematica Calc')\n#plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )\n\nplt.ylabel(r'$w(\\theta)$')\nplt.xlabel(r'$\\theta \\mathrm{[degrees]}$')\nplt.loglog();\nplt.legend(loc='best')\n\nwt/wt_redmagic\n\nwt_redmagic/(W.to(\"1/Mpc\").value*mathematica_calc)\n\nimport cPickle as pickle\nwith open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:\n xi_rm = pickle.load(f)\n\nxi_rm.metrics[0].xi.shape\n\nxi_rm.metrics[0].mbins\n\nxi_rm.metrics[0].cbins\n\n#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))\n#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))\n\nplt.scatter(rpoints, xi)\nfor i in xrange(3):\n for j in xrange(3):\n plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])\nplt.loglog();\n\nplt.subplot(211)\nplt.plot(tpoints_rm, wt_redmagic/wt)\nplt.xscale('log')\n#plt.ylim([0,10])\nplt.subplot(212)\nplt.plot(tpoints_rm, wt_redmagic/wt)\nplt.xscale('log')\nplt.ylim([2.0,4])\n\nxi_rm.metrics[0].xi.shape\n\nxi_rm.metrics[0].rbins #Mpc/h", "The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.", "x = cat.cosmology.comoving_distance(z)*a\n#ubins = np.linspace(10**-6, 10**2.0, 1001)\nubins = np.logspace(-6, 2.0, 51)\nubc = (ubins[1:]+ubins[:-1])/2.0\n\n#NLL\ndef liklihood(params, wt_redmagic,x, tpoints):\n #print _params\n #prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])\n #print param_names\n #print prior\n #if not np.all(prior):\n # return 1e9\n #params = {p:v for p,v in zip(param_names, _params)}\n #cat.populate(params)\n #nd_cat = cat.calc_analytic_nd(parmas)\n #wt = np.zeros_like(tpoints_rm[:-5])\n \n #xi = cat.calc_xi(r_bins, do_jackknife=False)\n #m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))\n \n #if np.any(xi < 0):\n # return 1e9\n #kernel = ExpSquaredKernel(0.05)\n #gp = george.GP(kernel)\n #gp.compute(np.log10(rpoints))\n \n #for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):\n # int_xi = 0\n # for ubin_no, _u in enumerate(ubc):\n # _du = ubins[ubin_no+1]-ubins[ubin_no]\n # u = _u*unit.Mpc*a\n # du = _du*unit.Mpc*a\n #print np.sqrt(u**2+(x*t_med)**2)\n # r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h\n #if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model. 
\n # int_xi+=du*0\n #else:\n # the GP predicts in log, so i predict in log and re-exponate\n # int_xi+=du*(np.power(10, \\\n # gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))\n # int_xi+=du*(10**b)*(r.to(\"Mpc\").value**m)\n\n #print (((int_xi*W))/wt_redmagic[0]).to(\"m/m\")\n #break\n # wt[bin_no] = int_xi*W.to(\"1/Mpc\")\n \n wt = wt_analytic(params[0],params[1], tpoints, x.to(\"Mpc\").value) \n chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )\n \n #chi2=0\n #print nd_cat\n #print wt\n #chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)\n \n #mf = cat.calc_mf()\n #HOD = cat.calc_hod()\n #mass_bin_range = (9,16)\n #mass_bin_size = 0.01\n #mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )\n\n #mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\\\n # np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])\n \n #chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)\n print chi2\n return chi2 #nll\n\nprint nd_mock\nprint wt_redmagic[:-5]\n\nimport scipy.optimize as op\n\nresults = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))\n\nresults\n\n#plt.plot(tpoints_rm, wt, label = 'My Calculation')\nplt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')\nplt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to(\"Mpc\").value), label = 'Mathematica Calc')\n\nplt.ylabel(r'$w(\\theta)$')\nplt.xlabel(r'$\\theta \\mathrm{[degrees]}$')\nplt.loglog();\nplt.legend(loc='best')\n\nplt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))\nplt.scatter(np.log10(rpoints), np.log10(xi) )\n\n\nnp.array([v for v in params.values()])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
phobson/statsmodels
examples/notebooks/statespace_cycles.ipynb
bsd-3-clause
[ "Trends and cycles in unemployment\nHere we consider three methods for separating a trend and cycle in economic data. Supposing we have a time series $y_t$, the basic idea is to decompose it into these two components:\n$$\ny_t = \\mu_t + \\eta_t\n$$\nwhere $\\mu_t$ represents the trend or level and $\\eta_t$ represents the cyclical component. In this case, we consider a stochastic trend, so that $\\mu_t$ is a random variable and not a deterministic function of time. Two of methods fall under the heading of \"unobserved components\" models, and the third is the popular Hodrick-Prescott (HP) filter. Consistent with e.g. Harvey and Jaeger (1993), we find that these models all produce similar decompositions.\nThis notebook demonstrates applying these models to separate trend from cycle in the U.S. unemployment rate.", "%matplotlib inline\n\nimport numpy as np\nimport pandas as pd\nimport statsmodels.api as sm\nimport matplotlib.pyplot as plt\n\nfrom pandas.io.data import DataReader\nendog = DataReader('UNRATE', 'fred', start='1954-01-01')", "Hodrick-Prescott (HP) filter\nThe first method is the Hodrick-Prescott filter, which can be applied to a data series in a very straightforward method. Here we specify the parameter $\\lambda=129600$ because the unemployment rate is observed monthly.", "hp_cycle, hp_trend = sm.tsa.filters.hpfilter(endog, lamb=129600)", "Unobserved components and ARIMA model (UC-ARIMA)\nThe next method is an unobserved components model, where the trend is modeled as a random walk and the cycle is modeled with an ARIMA model - in particular, here we use an AR(4) model. The process for the time series can be written as:\n$$\n\\begin{align}\ny_t & = \\mu_t + \\eta_t \\\n\\mu_{t+1} & = \\mu_t + \\epsilon_{t+1} \\\n\\phi(L) \\eta_t & = \\nu_t\n\\end{align}\n$$\nwhere $\\phi(L)$ is the AR(4) lag polynomial and $\\epsilon_t$ and $\\nu_t$ are white noise.", "mod_ucarima = sm.tsa.UnobservedComponents(endog, 'rwalk', autoregressive=4)\n# Here the powell method is used, since it achieves a\n# higher loglikelihood than the default L-BFGS method\nres_ucarima = mod_ucarima.fit(method='powell')\nprint(res_ucarima.summary())", "Unobserved components with stochastic cycle (UC)\nThe final method is also an unobserved components model, but where the cycle is modeled explicitly.\n$$\n\\begin{align}\ny_t & = \\mu_t + \\eta_t \\\n\\mu_{t+1} & = \\mu_t + \\epsilon_{t+1} \\\n\\eta_{t+1} & = \\eta_t \\cos \\lambda_\\eta + \\eta_t^ \\sin \\lambda_\\eta + \\tilde \\omega_t \\qquad & \\tilde \\omega_t \\sim N(0, \\sigma_{\\tilde \\omega}^2) \\\n\\eta_{t+1}^ & = -\\eta_t \\sin \\lambda_\\eta + \\eta_t^ \\cos \\lambda_\\eta + \\tilde \\omega_t^ & \\tilde \\omega_t^* \\sim N(0, \\sigma_{\\tilde \\omega}^2)\n\\end{align}\n$$", "mod_uc = sm.tsa.UnobservedComponents(\n endog, 'rwalk',\n cycle=True, stochastic_cycle=True, damped_cycle=True,\n)\n# Here the powell method gets close to the optimum\nres_uc = mod_uc.fit(method='powell')\n# but to get to the highest loglikelihood we do a\n# second round using the L-BFGS method.\nres_uc = mod_uc.fit(res_uc.params)\nprint(res_uc.summary())", "Graphical comparison\nThe output of each of these models is an estimate of the trend component $\\mu_t$ and an estimate of the cyclical component $\\eta_t$. Qualitatively the estimates of trend and cycle are very similar, although the trend component from the HP filter is somewhat more variable than those from the unobserved components models. 
The HP filter's more variable trend means that relatively more of the movement in the unemployment rate is attributed to changes in the underlying trend rather than to temporary cyclical movements.", "fig, axes = plt.subplots(2, figsize=(13,5));\naxes[0].set(title='Level/trend component')\naxes[0].plot(endog.index, res_uc.level.smoothed, label='UC')\naxes[0].plot(endog.index, res_ucarima.level.smoothed, label='UC-ARIMA(2,0)')\naxes[0].plot(hp_trend, label='HP Filter')\naxes[0].legend(loc='upper left')\naxes[0].grid()\n\naxes[1].set(title='Cycle component')\naxes[1].plot(endog.index, res_uc.cycle.smoothed, label='UC')\naxes[1].plot(endog.index, res_ucarima.autoregressive.smoothed, label='UC-ARIMA(2,0)')\naxes[1].plot(hp_cycle, label='HP Filter')\naxes[1].legend(loc='upper left')\naxes[1].grid()\n\nfig.tight_layout();" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gully/PyKE
docs/source/tutorials/ipython_notebooks/motion-correction/Replicate_Vanderburg_2014_K2SFF.ipynb
mit
[ "Replicate Vanderburg & Johnson 2014 K2SFF Method\nIn this notebook we will replicate the K2SFF method from Vanderburg and Johnson 2014. The paper introduces a method for \"Self Flat Fielding\", by tracking how the lightcurve changes with motion of the spacecraft.", "from pyke import KeplerTargetPixelFile\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nimport numpy as np\nimport matplotlib.pyplot as plt", "Get data", "#! wget https://www.cfa.harvard.edu/~avanderb/k2/ep60021426alldiagnostics.csv\n\nimport pandas as pd\n\ndf = pd.read_csv('ep60021426alldiagnostics.csv',index_col=False)\n\ndf.head()", "Let's use the provided $x-y$ centroids, but we could compute these on our own too.", "col = df[' X-centroid'].values\ncol = col - np.mean(col)\nrow = df[' Y-centroid'].values \nrow = row - np.mean(row)\n\ndef _get_eigen_vectors(centroid_col, centroid_row):\n centroids = np.array([centroid_col, centroid_row])\n eig_val, eig_vec = np.linalg.eigh(np.cov(centroids))\n return eig_val, eig_vec\n\ndef _rotate(eig_vec, centroid_col, centroid_row):\n centroids = np.array([centroid_col, centroid_row])\n return np.dot(eig_vec, centroids)\n\neig_val, eig_vec = _get_eigen_vectors(col, row)\n\nv1, v2 = eig_vec", "The major axis is the last one.", "plt.figure(figsize=(5, 6))\nplt.plot(col*4.0, row*4.0, 'ko', ms=4)\nplt.plot(col*4.0, row*4.0, 'ro', ms=1)\nplt.xticks([-2, -1,0, 1, 2])\nplt.yticks([-2, -1,0, 1, 2])\nplt.xlabel('X position [arcseconds]')\nplt.ylabel('Y position [arcseconds]')\nplt.xlim(-2, 2)\nplt.ylim(-2, 2)\nplt.plot([0, v1[0]], [0, v1[1]], color='blue', lw=3)\nplt.plot([0, v2[0]], [0, v2[1]], color='blue', lw=3);", "Following the form of Figure 2 of Vanderburg & Johsnon 2014.", "rot_colp, rot_rowp = _rotate(eig_vec, col, row)", "You can rotate into the new reference frame.", "plt.figure(figsize=(5, 6))\nplt.plot(rot_rowp*4.0, rot_colp*4.0, 'ko', ms=4)\nplt.plot(rot_rowp*4.0, rot_colp*4.0, 'ro', ms=1)\nplt.xticks([-2, -1,0, 1, 2])\nplt.yticks([-2, -1,0, 1, 2])\nplt.xlabel(\"X' position [arcseconds]\")\nplt.ylabel(\"Y' position [arcseconds]\")\nplt.xlim(-2, 2)\nplt.ylim(-2, 2)\nplt.plot([0, 1], [0, 0], color='blue')\nplt.plot([0, 0], [0, 1], color='blue');", "We need to calculate the arclength using:\n$$s= \\int_{x'_0}^{x'_1}\\sqrt{1+\\left( \\frac{dy'_p}{dx'}\\right)^2} dx'$$\n\nwhere $x^\\prime_0$ is the transformed $x$ coordinate of the point with the smallest $x^\\prime$ position, and $y^\\prime_p$ is the best--fit polynomial function.", "z = np.polyfit(rot_rowp, rot_colp, 5)\np5 = np.poly1d(z)\np5_deriv = p5.deriv()\n\nx0_prime = np.min(rot_rowp)\nxmax_prime = np.max(rot_rowp)\n\nx_dense = np.linspace(x0_prime, xmax_prime, 2000)\n\nplt.plot(rot_rowp, rot_colp, '.')\nplt.plot(x_dense, p5(x_dense));\n\n@np.vectorize\ndef arclength(x):\n '''Input x1_prime, get out arclength'''\n gi = x_dense <x\n s_integrand = np.sqrt(1 + p5_deriv(x_dense[gi]) ** 2)\n s = np.trapz(s_integrand, x=x_dense[gi])\n return s\n\nplt.plot(df[' arclength'], arclength(rot_rowp)*4.0, '.')\nplt.plot([0, 4], [0, 4], 'k--');", "It works!\nNow we apply a high-pass filter. 
We follow the original paper by using BSplines with 1.5 day breakpoints.", "from scipy.interpolate import BSpline\nfrom scipy import interpolate\n\ntt, ff = df['BJD - 2454833'].values, df[' Raw Flux'].values\ntt = tt - tt[0]\n\nknots = np.arange(0, tt[-1], 1.5)\n\nt,c,k = interpolate.splrep(tt, ff, s=0, task=-1, t=knots[1:])\n\nbspl = BSpline(t,c,k)\n\nplt.plot(tt, ff, '.')\nplt.plot(tt, bspl(tt))", "Spline fit looks good, so normalize the flux by the long-term trend.\nPlot the normalized flux versus arclength to see the position-dependent flux.", "norm_ff = ff/bspl(tt)", "Mask the data by keeping only the good samples.", "bi = df[' Thrusters On'].values == 1.0\ngi = df[' Thrusters On'].values == 0.0\nal, gff = arclength(rot_rowp[gi])*4.0, norm_ff[gi]\n\nsorted_inds = np.argsort(al)", "We will follow the paper by interpolating 15 bins of means. This is a piecewise linear fit.", "knots = np.array([np.min(al)]+ \n [np.median(splt) for splt in np.array_split(al[sorted_inds], 15)]+\n [np.max(al)])\n\nbin_means = np.array([gff[sorted_inds][0]]+\n [np.mean(splt) for splt in np.array_split(gff[sorted_inds], 15)]+\n [gff[sorted_inds][-1]])\n\nzz = np.polyfit(al, gff,6)\nsff = np.poly1d(zz)\nal_dense = np.linspace(0, 4, 1000)\ninterp_func = interpolate.interp1d(knots, bin_means)\n\nplt.figure(figsize=(5, 6))\nplt.plot(arclength(rot_rowp)*4.0, norm_ff, 'ko', ms=4)\nplt.plot(arclength(rot_rowp)*4.0, norm_ff, 'o', color='#3498db', ms=3)\nplt.plot(arclength(rot_rowp[bi])*4.0, norm_ff[bi], 'o', color='r', ms=3)\n#plt.plot(al_dense, sff(al_dense), '-', color='#e67e22')\n#plt.plot(knots, bin_means, '-', color='#e67e22')\nplt.plot(np.sort(al), interp_func(np.sort(al)), '-', color='#e67e22')\n\n#plt.xticks([0, 1,2, 3, 4])\nplt.xlabel('Arclength [arcseconds]')\nplt.ylabel('Relative Brightness')\nplt.title('EPIC 60021426, Kp =10.3')\n#plt.xlim(0,4)\nplt.ylim(0.997, 1.002);", "Following Figure 4 of Vanderburg & Johnson 2014.\nApply the Self Flat Field (SFF) correction:", "corr_flux = gff / interp_func(al)\n\nplt.figure(figsize=(10,6))\n\ndy = 0.004\nplt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'ko', ms=4)\nplt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'o', color='#3498db', ms=3)\nplt.plot(df['BJD - 2454833'][bi], df[' Raw Flux'][bi]+dy, 'o', color='r', ms=3)\nplt.plot(df['BJD - 2454833'][gi], corr_flux*bspl(tt[gi]), 'o', color='k', ms = 4)\nplt.plot(df['BJD - 2454833'][gi], corr_flux*bspl(tt[gi]), 'o', color='#e67e22', ms = 3)\n#plt.plot(df['BJD - 2454833'][gi], df[' Corrected Flux'][gi], 'o', color='#00ff00', ms = 4)\n\nplt.xlabel('BJD - 2454833')\nplt.ylabel('Relative Brightness')\n\nplt.xlim(1862, 1870)\nplt.ylim(0.994, 1.008);", "Following Figure 5 of Vanderburg & Johnson 2015.\nLet's compute the CDPP:", "from pyke import LightCurve\n\n#lc = LightCurve(time=df['BJD - 2454833'][gi], flux=corr_flux*bspl(tt[gi]))\nlc = LightCurve(time=df['BJD - 2454833'][gi], flux=df[' Corrected Flux'][gi])\n\nlc.cdpp(savgol_window=201)", "The end.\nUsing PyKE:", "from pyke.lightcurve import SFFCorrector\n\nsff = SFFCorrector()\n\nlc_corrected = sff.correct(df['BJD - 2454833'][gi].values,\n df[' Raw Flux'][gi].values,\n col[gi], row[gi], niters=1, windows=1, polyorder=5)\n\nplt.figure(figsize=(10,6))\n\ndy = 0.004\nplt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'ko', ms=4)\nplt.plot(df['BJD - 2454833'], df[' Raw Flux']+dy, 'o', color='#3498db', ms=3)\nplt.plot(df['BJD - 2454833'][bi], df[' Raw Flux'][bi]+dy, 'o', color='r', ms=3)\nplt.plot(df['BJD - 2454833'][gi], lc_corrected.flux*bspl(tt[gi]), 'o', color='k', ms 
= 4)\nplt.plot(df['BJD - 2454833'][gi], lc_corrected.flux*bspl(tt[gi]), 'o', color='pink', ms = 3)\n\nplt.xlabel('BJD - 2454833')\nplt.ylabel('Relative Brightness')\n\nplt.xlim(1862, 1870)\nplt.ylim(0.994, 1.008);\n\nsff._plot_normflux_arclength()\n\nsff._plot_rotated_centroids()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CodyKochmann/battle_tested
tutorials/hardening_filters.ipynb
mit
[ "%xmode plain", "battle_tested was originally created to harden your safeties.\nIn the example below battle_tested is being used to harden a function that is suppose to give you a predictable list of strings so you can continue with your code knowing the input has already been sanitized.", "def list_of_strings_v1(iterable):\n \"\"\" converts the iterable input into a list of strings \"\"\"\n # build the output\n out = [str(i) for i in iterable]\n # validate the output\n for i in out:\n assert type(i) == str\n # return\n return out", "Here's an example of what many programmers would consider enough of a test.", "list_of_strings_v1(range(10))", "The above proves it works and is pretty clean and understandable right?", "from battle_tested import fuzz\n\nfuzz(list_of_strings_v1)", "And with 2 lines of code, that was proven wrong.\nWhile you could argue that the input of the tests is crazy and would never happen with how you structured your code, lets see how hard it really is to rewrite this function so it actually can reliably act as your input's filter.", "def list_of_strings_v2(iterable):\n \"\"\" converts the iterable input into a list of strings \"\"\"\n try:\n iter(iterable)\n # build the output\n out = [str(i) for i in iterable]\n except TypeError: # raised when input was not iterable\n out = [str(iterable)]\n # validate the output\n for i in out:\n assert type(i) == str\n # return\n return out\n\nfuzz(list_of_strings_v2)", "With the new version, the code not only seems like it can have anything thrown at it, battle_tested prove's its validity by running thousands of tests without a single issue." ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
ogoann/StatisticalMethods
examples/Cepheids/FirstLook.ipynb
gpl-2.0
[ "A First Look at the Periods and Luminosities of Cepheid Stars\n\n\nCepheids are stars whose brightness oscillates with a stable period that appears to be strongly correlated with their luminosity (or absolute magnitude).\n\n\nA lot of monitoring data - repeated imaging and subsequent \"photometry\" of the star - can provide a measurement of the absolute magnitude (if we know the distance to it's host galaxy) and the period of the oscillation.\n\n\nLet's look at some Cepheid measurements reported by Riess et al (2011). Like the correlation function summaries, they are in the form of datapoints with error bars, where it is not clear how those error bars were derived (or what they mean).", "import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (15.0, 8.0) ", "A Look at Each Host Galaxy's Cepheids\nLet's read in all the data, and look at each galaxy's Cepheid measurements separately. Instead of using pandas, we'll write our own simple data structure, and give it a custom plotting method so we can compare the different host galaxies' datasets.", "# First, we need to know what's in the data file.\n\n!head R11ceph.dat\n\nclass Cepheids(object):\n \n def __init__(self,filename):\n # Read in the data and store it in this master array:\n self.data = np.loadtxt(filename)\n self.hosts = self.data[:,1].astype('int').astype('str')\n # We'll need the plotting setup to be the same each time we make a plot:\n colornames = ['red','orange','yellow','green','cyan','blue','violet','magenta','gray']\n self.colors = dict(zip(self.list_hosts(), colornames))\n self.xlimits = np.array([0.3,2.3])\n self.ylimits = np.array([30.0,17.0])\n return\n \n def list_hosts(self):\n # The list of (9) unique galaxy host names:\n return np.unique(self.hosts)\n \n def select(self,ID):\n # Pull out one galaxy's data from the master array:\n index = (self.hosts == str(ID))\n self.m = data[index,2]\n self.merr = data[index,3]\n self.logP = np.log10(data[index,4])\n return\n \n def plot(self,X):\n # Plot all the points in the dataset for host galaxy X.\n ID = str(X)\n self.select(ID)\n plt.rc('xtick', labelsize=16) \n plt.rc('ytick', labelsize=16)\n plt.errorbar(self.logP, self.m, yerr=self.merr, fmt='.', ms=7, lw=1, color=self.colors[ID], label='NGC'+ID)\n plt.xlabel('$\\\\log_{10} P / {\\\\rm days}$',fontsize=20)\n plt.ylabel('${\\\\rm magnitude (AB)}$',fontsize=20)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n plt.title('Cepheid Period-Luminosity (Riess et al 2011)',fontsize=20)\n return\n\n def overlay_straight_line_with(self,m=0.0,c=24.0):\n # Overlay a straight line with gradient m and intercept c.\n x = self.xlimits\n y = m*x + c\n plt.plot(x, y, 'k-', alpha=0.5, lw=2)\n plt.xlim(self.xlimits)\n plt.ylim(self.ylimits)\n return\n \n def add_legend(self):\n plt.legend(loc='upper left')\n return\n\n\nC = Cepheids('R11ceph.dat')\nprint C.colors", "OK, now we are all set up! Let's plot some data.", "C.plot(4258)\n\nC.plot(1309)\n\n# for ID in C.list_hosts():\n# C.plot(ID)\n \nC.overlay_straight_line_with(m=-3.0,c=26.0)\n\nC.add_legend()", "Q: Is the Cepheid Period-Luminosity relation a) a power law and b) universal?\nWith your neighbor, try plotting up the different host galaxy's cepheid datasets. Can you find straight lines that \"fit\" all the data from each host? And do you get the same \"fit\" for each host? Notice that you can plot multiple datasets on the same axes." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
amkatrutsa/MIPT-Opt
Spring2017-2019/12-NumMethods/Seminar12en.ipynb
mit
[ "Seminar 12\nIntroduction to numerical optimization (Y. E. Nesterov Introduction to convex optimization, ch. 1 $\\S$ 1.1)\n\nReview of the spring term topics\nProblem statement\nGeneral scheme of any optimization method\nComparison of optimization methods\nMethods to solve one-dimansional minimization problem\n\nSyllabus of the spring term\nAlso, see on the GitHub course page.\n\nMethods to solve unconstrained optimization problem\nOne-dimensional optimization problem (already today!)\nGradient descent\nNewton method\nQuasi-Newton methods\nConjugate gradient method \nOptional:\nLerast squares problem\nOptimal methods and lower bounds\n\n\n\n\nMethods to solve constrained optimization problem\nLinear programming\nProjected gradient method and conditional gradient method\nBarrier method\nPenalty function methods\nAugmented Lagrangian method\n\n\n\nOrganizational\n\nOne seminar and one lecture per week\nTwo problem sets\nMidterm in the middle of the term\nFinal test at the end of the term\nOral exam at the end of the term (grading for a semester is similar to the fall term)\nMinitests in the deginning of every class\nHomework assignment almost every week: $\\TeX$ or Jupyter Notebook\n\nProblem statement\n\\begin{equation}\n\\begin{split}\n& \\min_{x \\in S} \\; f_0(x)\\\n\\text{s.t. } & f_j(x) = 0, \\; j = 1,\\ldots,m\\\n& g_k(x) \\leq 0, \\; k = 1,\\ldots,p\n\\end{split}\n\\end{equation}\nwhere $S \\subseteq \\mathbb{R}^n$, $f_j: S \\rightarrow \\mathbb{R}, \\; j = 0,\\ldots,m$, $g_k: S \\rightarrow \\mathbb{R}, \\; k=1,\\ldots,p$\nAll functions are at least continuous. \nImportant fact: nonlinear optimization problem in its general form is \nnumerically intractable!\nAnalytical results\n\nFirst order necessary condition: \n\nif $x^$ is a local minimum point of the differentiable function $f(x)$, then \n$$\nf'(x^) = 0\n$$\n- Second order necessary condition: \nif $x^*$ is a local minimum point of the twice differentiable function $f(x)$, then \n$$\nf'(x^) = 0 \\quad \\text{и} \\quad f''(x^) \\succeq 0\n$$\n- Sufficient condition:\nAssume $f(x)$ is twice differentiable function and $x^*$ satisfies the following condition\n$$\nf'(x^) = 0 \\quad f''(x^) \\succ 0,\n$$\nthen $x^*$ is a strict local minimum point of function $f(x)$.\nRemark: check that you can prove these claims!\nFeatures of numerical solutions\n\nExact solution of the given problem is impossible due to precision of machine arithmetic\nIt is necessary to define the way to check if current point is a solution or not\nIt is necessary to define what information about the problem is stored\n\nGeneral iterative scheme\nGiven: initial guess $x$, required tolerance $\\varepsilon$.\n```python\ndef GeneralScheme(x, epsilon):\nwhile StopCriterion(x) &gt; epsilon:\n\n OracleResponse = RequestOracle(x)\n\n UpdateInformation(I, x, OracleResponse)\n\n x = NextPoint(I, x)\n\nreturn x\n\n```\nQuestions\n\nWhat are the possible stopping criteria?\nWhat is an oracle and what is it for?\nWhat is information model?\nHow to get next point?\n\nStopping criteria\n\nConvergence in $x$: \n$$\n\\| x_k - x^* \\|_2 < \\varepsilon\n$$ \nConvergence in $f$: \n$$\n\\| f_k - f^* \\|_2 < \\varepsilon\n$$ \nNecessary condition \n$$\n\\| f'(x_k) \\|_2 < \\varepsilon\n$$\n\nBut we don't know $x^*$!\nThen\n$$\n\\|x_{k+1} - x_k \\| = \\|x_{k+1} - x_k + x^ - x^ \\| \\leq \\|x_{k+1} - x^ \\| + \\| x_k - x^ \\| \\leq 2\\varepsilon\n$$\nThe same is true for the convergence in $f$, but sometimes $f^*$ can be estimated!\nRemark: better practise is to use relative 
difference in argument and functional, for example $\\dfrac{\\|x_{k+1} - x_k \\|_2}{\\| x_k \\|_2}$\nWhat is oracle?\nDefinition: oracle is some abstact machine that responses on the sequaential method requests\nOOP analogy: \n\noracle is a virtual method of the base class\nevery problem is derived class\noracle is defined for every particular problem according to the declaration in the base class\n\nBlack box concept\n1. Iterative method can use only oracle responses\n2. Oracle responses are local\nInformation about the problem\n\nEvery oracle response gives local information about function behaviour in the given point\nAfter aggregation of the oracle responses, we update global information about objective function:\ncurvature\ndescent direction\netc\n\n\n\nCompute next point\n$$\nx_{k+1} = x_{k} + \\alpha_k h_k\n$$\n\nLine search: fix direction $h_k$ and search for this direction the optimal value of $\\alpha_k$\nTrust region method: fix appropriate size of region in some norm $\\| \\cdot \\| \\leq \\alpha$ and model of the objective function, which is a good approximation in the considered region.\n Next, we search direction $h_k$, that minimizes the chosen model of the objective function and does not lead to the point $x_k + h_k$ lying outside of the considered region\n\nQuestions:\n1. How to choose $\\alpha_k$?\n2. How to choose $h_k$?\n3. How to choose model?\n4. How to choose region?\n5. How to choose region size? \n<span style=\"color:red\">\n In this course we consider only line search methods!</span> \nHowever someiemes the concept of trust region methods will be helpful.\nHow to compare optimization methods?\nFor given class of problems one can compare the following quantities:\n1. Complexity\n - analytical: number of the oracle requests to solve the problem with accuracy $\\varepsilon$\n - arithmetic: total number of computations to solve the problem with accuracy $\\varepsilon$\n2. Convergence speed\n3. 
Experiments\nConvergence speed\n\nSublinear\n$$\n\\| x_{k+1} - x^* \\|_2 \\leq C k^{\\alpha},\n$$\nwhere $\\alpha < 0$ и $ 0 < C < \\infty$\n\nLinear (geometric progression)\n$$\n\\| x_{k+1} - x^* \\|_2 \\leq Cq^k, \n$$\nwhere $q \\in (0, 1)$ and $ 0 < C < \\infty$\n\n\nSuperlinear \n$$\n\\| x_{k+1} - x^* \\|_2 \\leq Cq^{k^p}, \n$$\nwhere $q \\in (0, 1)$, $ 0 < C < \\infty$ and $p > 1$\n\nQuadratic\n$$\n\\| x_{k+1} - x^ \\|_2 \\leq C\\| x_k - x^ \\|^2_2, \\qquad \\text{or} \\qquad \\| x_{k+1} - x^* \\|_2 \\leq C q^{2^k}\n$$\nwhere $q \\in (0, 1)$ and $ 0 < C < \\infty$\n\nOptimal methods: can we do better?\n\nAuthors prove lower bounds of the convergence speed for given set of problems and methods of given order\nNext, the methods, for which these lower bounds are tight, were proposed $\\Rightarrow$ optimality is proved\nLater more about convergence theorem", "%matplotlib inline\nimport matplotlib.pyplot as plt\n\nUSE_COLAB = False\nif not USE_COLAB:\n plt.rc(\"text\", usetex=True)\n \nimport numpy as np\nC = 10\nalpha = -0.5\nq = 0.9\nnum_iter = 7\nsublinear = np.array([C * k**alpha for k in range(1, num_iter + 1)])\nlinear = np.array([C * q**k for k in range(1, num_iter + 1)])\nsuperlinear = np.array([C * q**(k**2) for k in range(1, num_iter + 1)])\nquadratic = np.array([C * q**(2**k) for k in range(1, num_iter + 1)])\nplt.figure(figsize=(12,8))\nplt.semilogy(np.arange(1, num_iter+1), sublinear, \n label=r\"Sublinear, $\\alpha = -0.5$\")\nplt.semilogy(np.arange(1, num_iter+1), superlinear, \n label=r\"Superlinear, $q = 0.5, p=2$\")\nplt.semilogy(np.arange(1, num_iter+1), linear, \n label=r\"Linear, $q = 0.5$\")\nplt.semilogy(np.arange(1, num_iter+1), quadratic, \n label=r\"Quadratic, $q = 0.5$\")\nplt.xlabel(\"Number of iteration, $k$\", fontsize=24)\nplt.ylabel(\"Error rate upper bound\", fontsize=24)\nplt.legend(loc=\"best\", fontsize=28)\nplt.xticks(fontsize = 28)\n_ = plt.yticks(fontsize = 28)", "On interpretations of the convergence theorems (B.T. Polyak Introduction to optimization, ch. 
1, $\\S$ 6)\n\n\nWhat answers give convergence theorems?\n\nclass of problems for which the method is applicable (it is important to minimize the number of restrictions!)\nconvexity\nsmoothness\n\n\nqualitative behaviour of the method \nis initial guess significant for convergence of method?\nwhat convergence can be observed?\n\n\nestimate of convergence speed\ntheoretical estimate of the method behavior without experiments\ndetection of properties that affect convergence (condition number, dimension, etc)\nsometimes one can set number of iterations to achieve required accuracy in advance\n\n\n\n\n\nWhat answers do not give convergence theorem?\n\nconvergence of the method does not mean that it has to be used\nconvergence estimates depend on unknown constants - only theoretical interest\ntheorems do not take into account rounding errors and accuracy of solving auxiliary problems\n\n\n\nMain point: it is necessary to use common sense and be careful in usage of convergence theorem!\nClasses of problems\n\nUncontrained optimization\nLipschitz objective function \nLipschitz gradient of the objective function\n\n\nContrained optimization\npolytope\nsets with simple structure\ngeneral form\n\n\n\nClasses of methods\n\n\nZero order method: oracle returns only objective function $f(x)$\n\n\nFist order method : oracle returns objective function $f(x)$ and its gradient $f'(x)$\n\n\nSecond order method: oracle returns objective function $f(x)$, its gradient $f'(x)$ and its hessian $f''(x)$.\n\n\nQ: do methods of higher order exist?\n\nOne-step methods \n$$\nx_{k+1} = \\Phi(x_k)\n$$\nMulti-step methods\n$$\nx_{k+1} = \\Phi(x_k, x_{k-1}, ...)\n$$\n\nOne-dimensional optimization\nDefinition. Funtion $f(x)$ is unimodal in interval $[a, b]$, if there exists such point $x^ \\in [a, b]$, that \n- $f(x_1) > f(x_2)$ for any $a \\leq x_1 < x_2 < x^$, \nand\n\n$f(x_1) < f(x_2)$ for any $x^* < x_1 < x_2 \\leq b$.\n\nQ: what geometry of unimodal function?\nBisection method\nIdea from the first term CS course: divide given interval $[a,b]$ on two equal parts till minimum of the unimodal function is not found\nDenite by $N$ the number of computations of function $f$, then one can perform $K = \\frac{N - 1}{2}$ iterations and the following estimate holds: \n$$\n|x_{K+1} - x^*| \\leq \\frac{b_{K+1} - a_{K+1}}{2} = \\left( \\frac{1}{2} \\right)^{\\frac{N-1}{2}} (b - a) \\approx 0.5^{K} (b - a) \n$$", "def binary_search(f, a, b, epsilon, callback=None):\n c = (a + b) / 2.0\n while abs(b - a) > epsilon:\n# Check left subsegment\n y = (a + c) / 2.0\n if f(y) <= f(c):\n b = c\n c = y\n else:\n# Check right subsegment\n z = (b + c) / 2.0\n if f(c) <= f(z):\n a = y\n b = z\n else:\n a = c\n c = z\n if callback is not None:\n callback(a, b)\n return c\n\ndef my_callback(a, b, left_bound, right_bound, approximation):\n left_bound.append(a)\n right_bound.append(b)\n approximation.append((a + b) / 2.0)\n\n# %matplotlib inline\nimport numpy as np\n# import matplotlib.pyplot as plt\n\nleft_boud_bs = []\nright_bound_bs = []\napproximation_bs = []\n\ncallback_bs = lambda a, b: my_callback(a, b, \n left_boud_bs, right_bound_bs, approximation_bs)\n\n# Target unimodal function on given segment\nf = lambda x: (x - 2) * x * (x + 2)**2 # np.power(x+2, 2)\n# f = lambda x: -np.sin(x)\nx_true = -2\n# x_true = np.pi / 2.0\na = -3\nb = -1.5\nepsilon = 1e-8\nx_opt = binary_search(f, a, b, epsilon, callback_bs)\nprint(np.abs(x_opt - x_true))\nplt.figure(figsize=(10,6))\nplt.plot(np.linspace(a,b), f(np.linspace(a,b)))\nplt.title(\"Objective 
function\", fontsize=20)\nplt.xticks(fontsize=20)\n_ = plt.yticks(fontsize=20)", "Golden search method\nIdea: divide interval $[a,b]$ not on two eqwual parts, but in the golden ratio.\nEstimate convergence speed like in bisection method:\n$$\n|x_{K+1} - x^*| \\leq b_{K+1} - a_{K+1} = \\left( \\frac{1}{\\tau} \\right)^{N-1} (b - a) \\approx 0.618^K(b-a),\n$$\nwhere $\\tau = \\frac{\\sqrt{5} + 1}{2}$.\n\nConstant of linear convergence is higher, than corresponding constant in bisection method\nNumber of function calls is less than for the bisection method", "def golden_search(f, a, b, tol=1e-5, callback=None):\n tau = (np.sqrt(5) + 1) / 2.0\n y = a + (b - a) / tau**2\n z = a + (b - a) / tau\n while b - a > tol:\n if f(y) <= f(z):\n b = z\n z = y\n y = a + (b - a) / tau**2\n else:\n a = y\n y = z\n z = a + (b - a) / tau\n if callback is not None:\n callback(a, b)\n return (a + b) / 2.0\n\nleft_boud_gs = []\nright_bound_gs = []\napproximation_gs = []\n\ncb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)\nx_gs = golden_search(f, a, b, epsilon, cb_gs)\n\nprint(f(x_opt))\nprint(f(x_gs))\nprint(np.abs(x_opt - x_true))", "Comparison of the methods for one-dimensional problem", "plt.figure(figsize=(10,6))\nplt.semilogy(np.arange(1, len(approximation_bs) + 1), np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label=\"Binary search\")\nplt.semilogy(np.arange(1, len(approximation_gs) + 1), np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label=\"Golden search\")\nplt.xlabel(r\"Number of iterations, $k$\", fontsize=24)\nplt.ylabel(\"Error rate upper bound\", fontsize=24)\nplt.legend(loc=\"best\", fontsize=24)\nplt.xticks(fontsize = 24)\n_ = plt.yticks(fontsize = 24)\n\n%timeit binary_search(f, a, b, epsilon)\n%timeit golden_search(f, a, b, epsilon)", "Example of other behaviour of the considered methods\n$$\nf(x) = \\sin(\\sin(\\sin(\\sqrt{x}))), \\; x \\in [2, 60]\n$$", "f = lambda x: np.sin(np.sin(np.sin(np.sqrt(x))))\nx_true = (3 * np.pi / 2)**2\na = 2\nb = 60\nepsilon = 1e-8\nplt.plot(np.linspace(a,b), f(np.linspace(a,b)))\nplt.xticks(fontsize = 24)\n_ = plt.yticks(fontsize = 24)\nplt.title(\"Objective function\", fontsize=20)", "Comparison of convergence speed and execution time\nBisection method", "left_boud_bs = []\nright_bound_bs = []\napproximation_bs = []\n\ncallback_bs = lambda a, b: my_callback(a, b, \n left_boud_bs, right_bound_bs, approximation_bs)\n\nx_opt = binary_search(f, a, b, epsilon, callback_bs)\nprint(np.abs(x_opt - x_true))", "Golden section method", "left_boud_gs = []\nright_bound_gs = []\napproximation_gs = []\n\ncb_gs = lambda a, b: my_callback(a, b, left_boud_gs, right_bound_gs, approximation_gs)\nx_gs = golden_search(f, a, b, epsilon, cb_gs)\n\nprint(np.abs(x_opt - x_true))", "Convergence", "plt.figure(figsize=(8,6))\nplt.semilogy(np.abs(x_true - np.array(approximation_bs, dtype=np.float64)), label=\"Binary\")\nplt.semilogy(np.abs(x_true - np.array(approximation_gs, dtype=np.float64)), label=\"Golden\")\nplt.legend(fontsize=24)\nplt.xticks(fontsize=24)\n_ = plt.yticks(fontsize=24)\nplt.xlabel(r\"Number of iterations, $k$\", fontsize=24)\nplt.ylabel(\"Error rate upper bound\", fontsize=24)", "Execution time", "%timeit binary_search(f, a, b, epsilon)\n%timeit golden_search(f, a, b, epsilon)", "Recap\n\nIntroduction to numerical optimization\nGeneral scheme of any optimization method \nHow to compare optimization methods\nZoo of the optimization methods and problems\nOne-dimensional minimization" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
kadamkaustubh/project-Goldilocks
pymc proof of concept-final.ipynb
mit
[ "Using PyMC for parameter estimation\n\nEstimate parameters of ODE using Bayesian Inference on a smaller model\nMotivation\nI wanted to test whether I could code in PyMC and not have to use R to achieve the same result; i.e do parameter estimation using Bayesian Inference. So I used a smaller ODE model, that I had previously worked on and knew that a solution exists, to explore PyMC as well as develop a strategy to implement the more complex and more time consuming Tunable Bandpass Filter Model.\nI used the data from strain characterization experiment I had done with Christian. the general scheme is glucose uptake by GaLP, followed by phosphorylation by Glk and eventual conversion to biomass. The unknown variables in the ODE system are the velocities of the GalP and Glk reactions which are approximated by Bayesian Inference by the following routine.", "import pymc as pm\nimport numpy as np\nfrom scipy.integrate import odeint\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#importing experimental data\ntimedata=np.genfromtxt(\"data/timedata.txt\", delimiter=',')\nglc_concentration=np.genfromtxt(\"data/GLCexconc.txt\", delimiter=',')", "Defining Priors\nProperly defining priors is crucial. \n* Make sure that distribution covers the entire solution space\n* If the solution space is too big, there's a chance that the entire solution space is not scanned (This happened when I defined the solution space as pm.Uniform(1e3, 1e4)\n* If prior has probability zero for a point then posterior will as well\n* It could be beneficial to bias the prior at some value just to boost convergence. (If we define prior as Normal distribution with $ \\mu$ = expected value and a very large $\\sigma$ for certain parameters rather than Uniform for all", "#defining prior distributions of our unknown variables \nkcat_prob = pm.Uniform('kcat', 0.01, 1.0)\nvmaxf_prob = pm.Uniform('vmaxf', 0.01, 1.0)", "Deterministic variables are completely defined by their parents.\nIf you know the values of it's parents, you can calculate its value as well.", "# deterministic compartmental model\n\n#initial conditions for concentration profile\ntspan = timedata\ninitialBiomass=0.043 #gDwt/l\ninitialGLCexconc=21 #mM\ninitialGLCinconc=0\nrho=564 #gdw/Lcell\nmetabolite0 = [initialGLCexconc, initialGLCinconc, initialBiomass] #Initial conditions\n\n \n\n@pm.deterministic\ndef glk(kcat=kcat_prob, vmaxf=vmaxf_prob,tspan=tspan):\n def glk_model(metabolite, t):\n [GLCex,GLCin,Biomass]=metabolite\n \n #all constants in the model\n km=10.2e-3\n rho=564; #gDW/Lcell\n vmaxHKr=24e-6*1.16*3600 #mM/h\n KmATP=1\n KmG6P=47e-3\n KiATP=1\n KiGLC=47e-3\n KiG6P=47e-3\n KiADP=1\n ATP=1.54 #mM\n ADP=560e-3 #mM \n G6P=0.801\n \n #Reactions in the model\n #denominator of glk reaction\n D=1+ ATP/KiATP+ G6P/KiG6P+ GLCin/KiGLC+ ADP/KiADP+ATP*GLCin/(KmATP*KiGLC)+ADP*G6P/(KiADP*KmG6P)\n #glk reaction\n VHK=(vmaxf*10000*ATP*GLCin/(KmATP*KiGLC)-vmaxHKr*ADP*G6P/(KiADP*KmG6P))/D \n #GalP reaction\n VGalP=kcat*10000*GLCex/(km+GLCex) #GalP reaction\n #growth rate\n mu=(VHK/rho-0.4531)/10.971 \n if mu<=0:\n mu=0\n \n #Differential equations\n dglcex_dt=-VGalP*Biomass/rho #d(GLCex)/dt\n dglcin_dt=VGalP-VHK-mu*GLCin #d(GLCin)/dt\n dx_dt=mu*Biomass #d(biomass)/dt\n dmetabolite=[dglcex_dt,dglcin_dt,dx_dt]\n return dmetabolite\n #ODE solver call\n soln = odeint(glk_model, metabolite0, tspan)\n #Solution return\n GLCconc= soln[:,0]\n return [GLCconc]\n", "Observed data will have the observed=True argument in its definition. 
Observed data must be able to satisfy any value defined by its parent function. \nHigh tau value indicates the confidence of observed data, $\\tau=\\frac{1}{\\sigma^2}$\nFor the Bandpass Model, I might try a different distribution for observed data. I was thinking of maybe defining the observed data as a binomial distribution. Assigning the output of the Model as a probability of growth rather than OD. hence the probability near changeover points will be ~0.5 rather than strictly 0 or strictly 1.", "# data likelihood\nobserved_glc = pm.Normal('observed_glc', mu=glk,tau=1000,value=glc_concentration, observed=True)", "Define the model for Markov Chain Monte Carlo Simulation.\npm.MAP function computes the Maximum a posteriori estimates\nThe model is then sampled to finf posterior distribution. Here we need to define the algorithm for MCMC (Metropolis, Hamiltonian etc.). I've stuck to default for now, but might need improvement if the problem is too complex to solve.\nburn rejects the initial samples as they are biased to the initial position\nthin defines the interval between samples recorded to remove the effects of auto-correlation\nHackett et al. used the burn=8000 and thin=300 taking 200 samples for 10 different starting positions\nI havent figured out how to do multi-start. This workload could be parallelized to reduce computtional time. (have to figure this out as well)", "model = pm.Model([kcat_prob,vmaxf_prob,glk,observed_glc])\n \n# fit the model with mcmc\nmap_ = pm.MAP(model)\nmap_.fit()\nmcmc = pm.MCMC(model)\nmcmc.sample(5000, burn=400, thin=10)\n\n#MCMC samples turned to arrays\nkcat_samples=mcmc.trace('kcat')[:]\nvmaxf_samples=mcmc.trace('vmaxf')[:]\n\n#means will be the estimated values of the variables\nprint('mean of kcat values:',round(kcat_samples.mean(),4),'\\n')\nprint('mean of vmaxf values:',round(vmaxf_samples.mean(),4))\n\n#Histogram of variable 1:kcat\nplt.hist(kcat_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $kcat$\", color=\"#7A68A6\", normed=True);\nplt.vlines(np.median(kcat_samples), 0, 8000, linestyle=\"--\", label=\"mean kcat\")\nplt.legend(loc=\"upper right\")\nplt.title(\"Posterior distribution of kcat\")\n\n#histogram of Variable 2:vmaxf\nplt.hist(vmaxf_samples, histtype='stepfilled', bins=30, alpha=0.85,\n label=\"posterior of $vmaxf$\", color=\"#7A68A6\", normed=True);\nplt.vlines(np.median(vmaxf_samples), 0, 10000, linestyle=\"--\", label=\"mean vmaxf\")\nplt.legend(loc=\"upper left\")\nplt.title(\"Posterior distribution of vmaxf\")\n\nimport json, matplotlib\ns = json.load( open(\"styles/bmh_matplotlibrc.json\") )\nmatplotlib.rcParams.update(s)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tensorflow/docs-l10n
site/ja/tutorials/generative/pix2pix.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "pix2pix: 条件付き GAN による画像から画像への変換\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/generative/pix2pix\"><img src=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/pix2pix.ipynb\">TensorFlow.org で実行</a></td>\n <td> <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\"><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/generative/pix2pix.ipynb\">Google Colab で実行</a> </td>\n <td> <img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"><a target=\"_blank\" href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/generative/pix2pix.ipynb\">GitHub でソースを表示</a> </td>\n <td> <img src=\"https://www.tensorflow.org/images/download_logo_32px.png\"><a href=\"https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/generative/pix2pix.ipynb\">ノートブックをダウンロード</a> </td>\n</table>\n\nこのチュートリアルでは、Isola et al による『Image-to-image translation with conditional adversarial networks』(2017 年)で説明されているように、入力画像から出力画像へのマッピングを学習する pix2pix と呼ばれる条件付き敵対的生成ネットワーク(cGAN)を構築し、トレーニングする方法を説明します。pix2pix はアプリケーションに依存しません。ラベルマップからの写真の合成、モノクロ画像からのカラー写真の生成、Google Maps の写真から航空写真への変換、スケッチ画像から写真への変換など、広範なタスクに適用できます。\nこの例のネットワークは、プラハにあるチェコ工科大学の機械知覚センターが提供する CMP Facade Database を使用して、建築物のファサード(正面部)の画像を生成します。この例を手短に紹介できるように、pix2pix の著者が作成したデータセットの事前処理済みセットを使用します。\npix2pix の cGAN では、入力画像で条件付けを行い、対応する出力画像を生成します。cGANs は『Conditional Generative Adversarial Nets』(2014 年 Mirza and Osindero)おいて初めて言及されました。\nネットワークのアーキテクチャには、以下の項目が含まれます。\n\nU-Net ベースのアーキテクチャを使用したジェネレータ。\n畳みこみ PatchGAN 分類器で表現されたディスクリミネータ(pix2pix 論文で提案)。\n\n単一の V100 GPU で、エポックごとに約 15 秒かかる可能性があります。\n以下は、ファサードデータセットを使って 200 エポックトレーニング(8 万ステップ)した後に pix2pix xGAN が生成した出力の例です。\n \nTensorFlow とその他のライブラリをインポートする", "import tensorflow as tf\n\nimport os\nimport pathlib\nimport time\nimport datetime\n\nfrom matplotlib import pyplot as plt\nfrom IPython import display", "データセットを読み込む\nCMP Facade データベースのデータをダウンロードします(30 MB)。追加のデータセットはこちらから同じ形式で入手できます。Colab では、ドロップダウンメニューから別のデータセットを選択できます。他のデータベースの一部は非常に大きいことに注意してください(edges2handbags は 8 GB)。", "dataset_name = \"facades\" #@param [\"cityscapes\", \"edges2handbags\", \"edges2shoes\", \"facades\", \"maps\", \"night2day\"]\n\n\n_URL = f'http://efrosgans.eecs.berkeley.edu/pix2pix/datasets/{dataset_name}.tar.gz'\n\npath_to_zip = tf.keras.utils.get_file(\n fname=f\"{dataset_name}.tar.gz\",\n origin=_URL,\n extract=True)\n\npath_to_zip = pathlib.Path(path_to_zip)\n\nPATH = path_to_zip.parent/dataset_name\n\nlist(PATH.parent.iterdir())", "それぞれの元の画像のサイズは 256 x 512 で、256 x 256 の画像が 2 つ含まれます。", "sample_image = tf.io.read_file(str(PATH / 'train/1.jpg'))\nsample_image = 
tf.io.decode_jpeg(sample_image)\nprint(sample_image.shape)\n\nplt.figure()\nplt.imshow(sample_image)", "実際の建物のファサードの写真と建築ラベル画像を分離する必要があります。すべてのサイズは 256 x 256 になります。\n画像ファイルを読み込んで 2 つの画像テンソルを出力する関数を定義します。", "def load(image_file):\n # Read and decode an image file to a uint8 tensor\n image = tf.io.read_file(image_file)\n image = tf.io.decode_jpeg(image)\n\n # Split each image tensor into two tensors:\n # - one with a real building facade image\n # - one with an architecture label image \n w = tf.shape(image)[1]\n w = w // 2\n input_image = image[:, w:, :]\n real_image = image[:, :w, :]\n\n # Convert both images to float32 tensors\n input_image = tf.cast(input_image, tf.float32)\n real_image = tf.cast(real_image, tf.float32)\n\n return input_image, real_image", "入力(建築ラベル画像)画像と実際の(建物のファサードの写真)画像のサンプルをプロットします。", "inp, re = load(str(PATH / 'train/100.jpg'))\n# Casting to int for matplotlib to display the images\nplt.figure()\nplt.imshow(inp / 255.0)\nplt.figure()\nplt.imshow(re / 255.0)", "pix2pix 論文に述べられているように、トレーニングセットを前処理するために、ランダムなジッターとミラーリングを適用する必要があります。\n以下を行う関数を定義します。\n\n256 x 256 の画像サイズをそれぞれより大きな高さと幅の 286 x 286 に変更する。\nそれをランダムに 256 x 256 にトリミングする。\nその画像をランダムに横方向(左右)に反転する(ランダムミラーリング)。\nその画像を [-1, 1] の範囲に正規化する。", "# The facade training set consist of 400 images\nBUFFER_SIZE = 400\n# The batch size of 1 produced better results for the U-Net in the original pix2pix experiment\nBATCH_SIZE = 1\n# Each image is 256x256 in size\nIMG_WIDTH = 256\nIMG_HEIGHT = 256\n\ndef resize(input_image, real_image, height, width):\n input_image = tf.image.resize(input_image, [height, width],\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\n real_image = tf.image.resize(real_image, [height, width],\n method=tf.image.ResizeMethod.NEAREST_NEIGHBOR)\n\n return input_image, real_image\n\ndef random_crop(input_image, real_image):\n stacked_image = tf.stack([input_image, real_image], axis=0)\n cropped_image = tf.image.random_crop(\n stacked_image, size=[2, IMG_HEIGHT, IMG_WIDTH, 3])\n\n return cropped_image[0], cropped_image[1]\n\n# Normalizing the images to [-1, 1]\ndef normalize(input_image, real_image):\n input_image = (input_image / 127.5) - 1\n real_image = (real_image / 127.5) - 1\n\n return input_image, real_image\n\n@tf.function()\ndef random_jitter(input_image, real_image):\n # Resizing to 286x286\n input_image, real_image = resize(input_image, real_image, 286, 286)\n\n # Random cropping back to 256x256\n input_image, real_image = random_crop(input_image, real_image)\n\n if tf.random.uniform(()) > 0.5:\n # Random mirroring\n input_image = tf.image.flip_left_right(input_image)\n real_image = tf.image.flip_left_right(real_image)\n\n return input_image, real_image", "前処理された一部の出力を検査することができます。", "plt.figure(figsize=(6, 6))\nfor i in range(4):\n rj_inp, rj_re = random_jitter(inp, re)\n plt.subplot(2, 2, i + 1)\n plt.imshow(rj_inp / 255.0)\n plt.axis('off')\nplt.show()", "読み込みと前処理がうまく機能することを確認したら、トレーニングセットとテストセットを読み込んで前処理するヘルパー関数を 2 つ定義しましょう。", "def load_image_train(image_file):\n input_image, real_image = load(image_file)\n input_image, real_image = random_jitter(input_image, real_image)\n input_image, real_image = normalize(input_image, real_image)\n\n return input_image, real_image\n\ndef load_image_test(image_file):\n input_image, real_image = load(image_file)\n input_image, real_image = resize(input_image, real_image,\n IMG_HEIGHT, IMG_WIDTH)\n input_image, real_image = normalize(input_image, real_image)\n\n return input_image, real_image", "tf.data を使用して入力パイプラインを構築する", "train_dataset = 
tf.data.Dataset.list_files(str(PATH / 'train/*.jpg'))\ntrain_dataset = train_dataset.map(load_image_train,\n num_parallel_calls=tf.data.AUTOTUNE)\ntrain_dataset = train_dataset.shuffle(BUFFER_SIZE)\ntrain_dataset = train_dataset.batch(BATCH_SIZE)\n\ntry:\n test_dataset = tf.data.Dataset.list_files(str(PATH / 'test/*.jpg'))\nexcept tf.errors.InvalidArgumentError:\n test_dataset = tf.data.Dataset.list_files(str(PATH / 'val/*.jpg'))\ntest_dataset = test_dataset.map(load_image_test)\ntest_dataset = test_dataset.batch(BATCH_SIZE)", "ジェネレータを構築する\npix2pix cGAN のジェネレータは、調整済みの U-Net です。U-Net は、エンコーダ(ダウンサンプラー)とデコーダ(アップサンプラー)で構成されています。(これについては、画像のセグメンテーションチュートリアルと U-Net プロジェクトのウェブサイト をご覧ください。)\n\nエンコーダの各ブロック: 畳み込み -&gt; バッチ正規化 -&gt; Leaky ReLU\nデコーダの各ブロック: 転置畳み込み -&gt; バッチ正規化 -&gt; ドロップアウト(最初の 3 つのブロックに適用) -&gt; ReLU\nエンコーダとデコーダ間にはスキップ接続があります(U-Net と同じ)。\n\nダウンサンプラー(エンコーダ)を定義します。", "OUTPUT_CHANNELS = 3\n\ndef downsample(filters, size, apply_batchnorm=True):\n initializer = tf.random_normal_initializer(0., 0.02)\n\n result = tf.keras.Sequential()\n result.add(\n tf.keras.layers.Conv2D(filters, size, strides=2, padding='same',\n kernel_initializer=initializer, use_bias=False))\n\n if apply_batchnorm:\n result.add(tf.keras.layers.BatchNormalization())\n\n result.add(tf.keras.layers.LeakyReLU())\n\n return result\n\ndown_model = downsample(3, 4)\ndown_result = down_model(tf.expand_dims(inp, 0))\nprint (down_result.shape)", "アップサンプラー(デコーダ)を定義します。", "def upsample(filters, size, apply_dropout=False):\n initializer = tf.random_normal_initializer(0., 0.02)\n\n result = tf.keras.Sequential()\n result.add(\n tf.keras.layers.Conv2DTranspose(filters, size, strides=2,\n padding='same',\n kernel_initializer=initializer,\n use_bias=False))\n\n result.add(tf.keras.layers.BatchNormalization())\n\n if apply_dropout:\n result.add(tf.keras.layers.Dropout(0.5))\n\n result.add(tf.keras.layers.ReLU())\n\n return result\n\nup_model = upsample(3, 4)\nup_result = up_model(down_result)\nprint (up_result.shape)", "ダウンサンプラーとアップサンプラーを使用してジェネレータを定義します。", "def Generator():\n inputs = tf.keras.layers.Input(shape=[256, 256, 3])\n\n down_stack = [\n downsample(64, 4, apply_batchnorm=False), # (batch_size, 128, 128, 64)\n downsample(128, 4), # (batch_size, 64, 64, 128)\n downsample(256, 4), # (batch_size, 32, 32, 256)\n downsample(512, 4), # (batch_size, 16, 16, 512)\n downsample(512, 4), # (batch_size, 8, 8, 512)\n downsample(512, 4), # (batch_size, 4, 4, 512)\n downsample(512, 4), # (batch_size, 2, 2, 512)\n downsample(512, 4), # (batch_size, 1, 1, 512)\n ]\n\n up_stack = [\n upsample(512, 4, apply_dropout=True), # (batch_size, 2, 2, 1024)\n upsample(512, 4, apply_dropout=True), # (batch_size, 4, 4, 1024)\n upsample(512, 4, apply_dropout=True), # (batch_size, 8, 8, 1024)\n upsample(512, 4), # (batch_size, 16, 16, 1024)\n upsample(256, 4), # (batch_size, 32, 32, 512)\n upsample(128, 4), # (batch_size, 64, 64, 256)\n upsample(64, 4), # (batch_size, 128, 128, 128)\n ]\n\n initializer = tf.random_normal_initializer(0., 0.02)\n last = tf.keras.layers.Conv2DTranspose(OUTPUT_CHANNELS, 4,\n strides=2,\n padding='same',\n kernel_initializer=initializer,\n activation='tanh') # (batch_size, 256, 256, 3)\n\n x = inputs\n\n # Downsampling through the model\n skips = []\n for down in down_stack:\n x = down(x)\n skips.append(x)\n\n skips = reversed(skips[:-1])\n\n # Upsampling and establishing the skip connections\n for up, skip in zip(up_stack, skips):\n x = up(x)\n x = tf.keras.layers.Concatenate()([x, skip])\n\n x = last(x)\n\n return 
tf.keras.Model(inputs=inputs, outputs=x)", "ジェネレータモデルアーキテクチャを可視化します。", "generator = Generator()\ntf.keras.utils.plot_model(generator, show_shapes=True, dpi=64)", "ジェネレータをテストします。", "gen_output = generator(inp[tf.newaxis, ...], training=False)\nplt.imshow(gen_output[0, ...])", "ジェネレータ損失を定義する\npix2pix 論文によると、GAN はデータに適応する損失を学習するのに対し、cGAN はネットワーク出力とターゲット画像とは異なる可能性のある構造にペナルティを与える構造化損失を学習します。\n\nジェネレータ損失は、生成された画像と 1 の配列のシグモイド交差エントロピー損失です。\npix2pix 論文には、生成された画像とターゲット画像間の MAE(平均絶対誤差)である L1 損失も言及されています。\nこれにより、生成された画像は、構造的にターゲット画像に似るようになります。\n合計ジェネレータ損失の計算式は、gan_loss + LAMBDA * l1_loss で、LAMBDA = 100 です。この値は論文の執筆者が決定したものです。", "LAMBDA = 100\n\nloss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True)\n\ndef generator_loss(disc_generated_output, gen_output, target):\n gan_loss = loss_object(tf.ones_like(disc_generated_output), disc_generated_output)\n\n # Mean absolute error\n l1_loss = tf.reduce_mean(tf.abs(target - gen_output))\n\n total_gen_loss = gan_loss + (LAMBDA * l1_loss)\n\n return total_gen_loss, gan_loss, l1_loss", "以下に、ジェネレータのトレーニング手順を示します。\n\nディスクリミネータを構築する\npix2pix cGAN のディスクリミネータは、畳み込み PatchGAN 分類器です。pix2pix 論文によると、これは、各画像のパッチが本物であるか偽物であるかの分類を試みます。\n\nディスクリミネータの各ブロック: 畳み込み -&gt; バッチ正規化 -&gt; Leaky ReLU\n最後のレイヤーの後の出力の形状: (batch_size, 30, 30, 1)\n出力の各 30 x 30 の画像パッチは入力画像の 70 x 70 の部分を分類します。\nディスクリミネータは 2 つの入力を受け取ります。\n入力画像とターゲット画像。本物として分類する画像です。\n入力画像と生成された画像(ジェネレータの出力)。偽物として分類する画像です。\ntf.concat([inp, tar], axis=-1) を使用して、これら 2 つの入力を連結します。\n\n\n\nディスクリミネータを定義しましょう。", "def Discriminator():\n initializer = tf.random_normal_initializer(0., 0.02)\n\n inp = tf.keras.layers.Input(shape=[256, 256, 3], name='input_image')\n tar = tf.keras.layers.Input(shape=[256, 256, 3], name='target_image')\n\n x = tf.keras.layers.concatenate([inp, tar]) # (batch_size, 256, 256, channels*2)\n\n down1 = downsample(64, 4, False)(x) # (batch_size, 128, 128, 64)\n down2 = downsample(128, 4)(down1) # (batch_size, 64, 64, 128)\n down3 = downsample(256, 4)(down2) # (batch_size, 32, 32, 256)\n\n zero_pad1 = tf.keras.layers.ZeroPadding2D()(down3) # (batch_size, 34, 34, 256)\n conv = tf.keras.layers.Conv2D(512, 4, strides=1,\n kernel_initializer=initializer,\n use_bias=False)(zero_pad1) # (batch_size, 31, 31, 512)\n\n batchnorm1 = tf.keras.layers.BatchNormalization()(conv)\n\n leaky_relu = tf.keras.layers.LeakyReLU()(batchnorm1)\n\n zero_pad2 = tf.keras.layers.ZeroPadding2D()(leaky_relu) # (batch_size, 33, 33, 512)\n\n last = tf.keras.layers.Conv2D(1, 4, strides=1,\n kernel_initializer=initializer)(zero_pad2) # (batch_size, 30, 30, 1)\n\n return tf.keras.Model(inputs=[inp, tar], outputs=last)", "ディスクリミネータモデルアーキテクチャを可視化します。", "discriminator = Discriminator()\ntf.keras.utils.plot_model(discriminator, show_shapes=True, dpi=64)", "ディスクリミネータをテストします。", "disc_out = discriminator([inp[tf.newaxis, ...], gen_output], training=False)\nplt.imshow(disc_out[0, ..., -1], vmin=-20, vmax=20, cmap='RdBu_r')\nplt.colorbar()", "ディスクリミネータ損失を定義する\n\ndiscriminator_loss 関数は、本物の画像と生成された画像の 2 つの入力を取ります。\nreal_loss は 本物の画像と 1 の配列(本物の画像であるため)のシグモイド交差エントロピー損失です。\ngenerated_loss は、生成された画像と 0 の配列(偽物の画像であるため)のシグモイド交差エントロピー損失です。\ntotal_loss は、real_loss と generated_loss の和です。", "def discriminator_loss(disc_real_output, disc_generated_output):\n real_loss = loss_object(tf.ones_like(disc_real_output), disc_real_output)\n\n generated_loss = loss_object(tf.zeros_like(disc_generated_output), disc_generated_output)\n\n total_disc_loss = real_loss + generated_loss\n\n return total_disc_loss", 
"以下に、ディスクリミネータのトレーニング手順を示します。\nこのアーキテクチャとハイパーパラメータについては、pix2pix 論文をご覧ください。\n\nオプティマイザとチェックポイントセーバーを定義する", "generator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)\ndiscriminator_optimizer = tf.keras.optimizers.Adam(2e-4, beta_1=0.5)\n\ncheckpoint_dir = './training_checkpoints'\ncheckpoint_prefix = os.path.join(checkpoint_dir, \"ckpt\")\ncheckpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer,\n discriminator_optimizer=discriminator_optimizer,\n generator=generator,\n discriminator=discriminator)", "画像を生成する\nトレーニング中に画像を描画する関数を記述します。\n\nテストセットの画像をジェネレータに渡します。\nジェネレータは入力画像を出力に変換します。\n最後に、予測をプロットすると、出来上がり!\n\n注意: training=True は、テストデータセットでモデルを実行中にバッチ統計を行うために、ここに意図的に指定されています。training=False を使用した場合、トレーニングデータセットから学習した蓄積された統計が取得されます(ここでは使用したくないデータです)。", "def generate_images(model, test_input, tar):\n prediction = model(test_input, training=True)\n plt.figure(figsize=(15, 15))\n\n display_list = [test_input[0], tar[0], prediction[0]]\n title = ['Input Image', 'Ground Truth', 'Predicted Image']\n\n for i in range(3):\n plt.subplot(1, 3, i+1)\n plt.title(title[i])\n # Getting the pixel values in the [0, 1] range to plot.\n plt.imshow(display_list[i] * 0.5 + 0.5)\n plt.axis('off')\n plt.show()", "関数をテストします。", "for example_input, example_target in test_dataset.take(1):\n generate_images(generator, example_input, example_target)", "トレーニング\n\n各サンプルについて、入力は出力を生成します。\nディスクリミネータは input_image と生成された画像を最初の入力として受け取ります。2 番目の入力は input_image と target_image です。\n次に、ジェネレータとディスクリミネータの損失を計算します。\nさらに、ジェネレータとディスクリミネータの変数(入力)の両方に関して損失の勾配を計算し、これらをオプティマイザに適用します。\n最後に、損失を TensorBoard にログします。", "log_dir=\"logs/\"\n\nsummary_writer = tf.summary.create_file_writer(\n log_dir + \"fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\"))\n\n@tf.function\ndef train_step(input_image, target, step):\n with tf.GradientTape() as gen_tape, tf.GradientTape() as disc_tape:\n gen_output = generator(input_image, training=True)\n\n disc_real_output = discriminator([input_image, target], training=True)\n disc_generated_output = discriminator([input_image, gen_output], training=True)\n\n gen_total_loss, gen_gan_loss, gen_l1_loss = generator_loss(disc_generated_output, gen_output, target)\n disc_loss = discriminator_loss(disc_real_output, disc_generated_output)\n\n generator_gradients = gen_tape.gradient(gen_total_loss,\n generator.trainable_variables)\n discriminator_gradients = disc_tape.gradient(disc_loss,\n discriminator.trainable_variables)\n\n generator_optimizer.apply_gradients(zip(generator_gradients,\n generator.trainable_variables))\n discriminator_optimizer.apply_gradients(zip(discriminator_gradients,\n discriminator.trainable_variables))\n\n with summary_writer.as_default():\n tf.summary.scalar('gen_total_loss', gen_total_loss, step=step//1000)\n tf.summary.scalar('gen_gan_loss', gen_gan_loss, step=step//1000)\n tf.summary.scalar('gen_l1_loss', gen_l1_loss, step=step//1000)\n tf.summary.scalar('disc_loss', disc_loss, step=step//1000)", "実際のトレーニングループ。このチュートリアルは 2 つ以上のデータセットを実行でき、データセットのサイズは非常に大きく異なるため、トレーニングループはエポックではなくステップで動作するようにセットアップされています。\n\nステップの回数をイテレートします。\n10 ステップごとにドット(.)を出力します。\n1000 ステップごとに、表示を消去し、generate_images を実行して進行状況を示します。\n5000 ステップごとに、チェックポイントを保存します。", "def fit(train_ds, test_ds, steps):\n example_input, example_target = next(iter(test_ds.take(1)))\n start = time.time()\n\n for step, (input_image, target) in train_ds.repeat().take(steps).enumerate():\n if (step) % 1000 == 0:\n display.clear_output(wait=True)\n\n if step != 0:\n print(f'Time taken for 1000 steps: 
{time.time()-start:.2f} sec\\n')\n\n start = time.time()\n\n generate_images(generator, example_input, example_target)\n print(f\"Step: {step//1000}k\")\n\n train_step(input_image, target, step)\n\n # Training step\n if (step+1) % 10 == 0:\n print('.', end='', flush=True)\n\n\n # Save (checkpoint) the model every 5k steps\n if (step + 1) % 5000 == 0:\n checkpoint.save(file_prefix=checkpoint_prefix)", "このトレーニングループは、TensorBoard に表示してトレーニングの進行状況を監視できるようにログを保存します。\nローカルマシンで作業する場合は、別の TensorBoard プロセスが起動します。ノートブックで作業する場合は、トレーニングを起動する前にビューアーを起動して、TensorBoard で監視します。\nTo launch the viewer paste the following into a code-cell:", "%load_ext tensorboard\n%tensorboard --logdir {log_dir}", "最後に、トレーニングループを実行します。", "fit(train_dataset, test_dataset, steps=40000)", "TensorBoard の結果を公開することを希望する場合は、以下のコードをコードセルに張り付けて、TensorBoard.dev にログをアップロードできます。\n注意: Google アカウントが必要です。\n!tensorboard dev upload --logdir {log_dir}\n要注意: このコマンドは終了しません。長時間に及ぶ実験の結果を連続的にアップロードするように設計されています。データのアップロードが完了したら、ノートブックツールの \"interrupt execution\" オプションを使って停止する必要があります。\nTensorBoard.dev で、このノートブックの前回の実行の結果を閲覧できます。\nTensorBoard.dev は、ML の実験をホスト、追跡、および共有するための、公開マネージドエクスペリエンスです。\nまた、&lt;iframe&gt; を使用してインラインに含めることもできます。", "display.IFrame(\n src=\"https://tensorboard.dev/experiment/lZ0C6FONROaUMfjYkVyJqw\",\n width=\"100%\",\n height=\"1000px\")", "GAN(または pix2pix のような cGAN)をトレーニングする場合、ログの解釈は、単純な分類または回帰モデルよりも明確ではありません。以下の項目に注目してください。\n\nジェネレータまたはディスクリミネータのいずれのモデルにも \"won\" がないことを確認してください。gen_gan_loss または disc_lossのいずれかが非常に低い場合、そのモデルがもう片方のモデルを上回っていることを示しているため、組み合わされたモデルを正しくトレーニングできていないことになります。\n値 log(2) = 0.69 は、これらの損失に適した基準点です。パープレキシティ(予測性能)が 2 であるということは、ディスクリミネータが、平均して 2 つのオプションについて等しく不確実であることを表します。\ndisc_loss については、値が 0.69 を下回る場合、ディスクリミネータは、本物の画像と生成された画像を組み合わせたセットにおいて、ランダムよりも優れていることを示します。\ngen_gan_loss については、値が 0.69 を下回る場合、ジェネレータがディスクリミネータを騙すことにおいて、ランダムよりも優れていることを示します。\nトレーニングが進行するにつれ、gen_l1_lossは下降します。\n\n最後のチェックポイントを復元してネットワークをテストする", "!ls {checkpoint_dir}\n\n# Restoring the latest checkpoint in checkpoint_dir\ncheckpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))", "テストセットを使用して画像を生成する", "# Run the trained model on a few examples from the test set\nfor inp, tar in test_dataset.take(5):\n generate_images(generator, inp, tar)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
quantopian/research_public
notebooks/lectures/Variance/answers/notebook.ipynb
apache-2.0
[ "Exercises: Variance - Answer Key\nBy Christopher van Hoecke, Maxwell Margenot, and Delaney Mackenzie\nLecture Link :\nhttps://www.quantopian.com/lectures/variance\nIMPORTANT NOTE:\nThis lecture corresponds to the Variance lecture, which is part of the Quantopian lecture series. This homework expects you to rely heavily on the code presented in the corresponding lecture. Please copy and paste regularly from that lecture when starting to work on the problems, as trying to do them from scratch will likely be too difficult.\nPart of the Quantopian Lecture Series:\n\nwww.quantopian.com/lectures\ngithub.com/quantopian/research_public", "# Useful Libraries\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt", "Data:", "X = np.random.randint(100, size = 100)", "Exercise 1:\nUsing the skills aquired in the lecture series, find the following parameters of the list X above:\n- Range\n- Mean Absolute Deviation\n- Variance and Standard Deviation\n- Semivariance and Semideviation\n- Target variance (with B = 60)", "# Range of X\n\nprint 'Range of X: %s' %(np.ptp(X))\n\n# Mean Absolute Deviation\n# First calculate the value of mu (the mean)\n\nmu = np.mean(X)\n\nabs_dispersion = [np.abs(mu - x) for x in X]\nMAD = np.sum(abs_dispersion)/len(abs_dispersion)\n\nprint 'Mean absolute deviation of X:', MAD\n\n# Variance and standard deviation\n\nprint 'Variance of X:', np.var(X)\nprint 'Standard deviation of X:', np.std(X)\n\n# Semivariance and semideviation\n\nlows = [e for e in X if e <= mu]\n\nsemivar = np.sum( (lows - mu) ** 2 ) / len(lows)\n\nprint 'Semivariance of X:', semivar\nprint 'Semideviation of X:', np.sqrt(semivar)\n\n# Target variance\n\nB = 60\nlows_B = [e for e in X if e <= B]\nsemivar_B = sum(map(lambda x: (x - B)**2,lows_B))/len(lows_B)\n\nprint 'Target semivariance of X:', semivar_B\nprint 'Target semideviation of X:', np.sqrt(semivar_B)", "Exercise 2:\nUsing the skills aquired in the lecture series, find the following parameters of prices for AT&T stock over a year:\n- 30 days rolling variance \n- 15 days rolling Standard Deviation", "att = get_pricing('T', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')\n\n# Rolling mean\nvariance = att.rolling(window = 30).var()\n\n# Rolling standard deviation\nstd = att.rolling(window = 15).std()", "Exercise 3 :\nThe portfolio variance is calculated as\n$$\\text{VAR}p = \\text{VAR}{s1} (w_1^2) + \\text{VAR}{s2}(w_2^2) + \\text{COV}{S_1, S_2} (2 w_1 w_2)$$\nWhere $w_1$ and $w_2$ are the weights of $S_1$ and $S_2$.\nFind values of $w_1$ and $w_2$ to have a portfolio variance of 50.", "asset1 = get_pricing('AAPL', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')\nasset2 = get_pricing('XLF', fields='open_price', start_date='2016-01-01', end_date='2017-01-01')\n\ncov = np.cov(asset1, asset2)[0,1]\n\nw1 = 0.87\nw2 = 1 - w1\n\nv1 = np.var(asset1)\nv2 = np.var(asset2)\n\npvariance = (w1**2)*v1+(w2**2)*v2+(2*w1*w2)*cov\n\nprint 'Portfolio variance: ', pvariance", "Congratulations on completing the Variance answer key!\nAs you learn more about writing trading models and the Quantopian platform, enter a daily Quantopian Contest. 
Your strategy will be evaluated for a cash prize every day.\nStart by going through the Writing a Contest Algorithm tutorial.\nThis presentation is for informational purposes only and does not constitute an offer to sell, a solicitation to buy, or a recommendation for any security; nor does it constitute an offer to provide investment advisory or other services by Quantopian, Inc. (\"Quantopian\"). Nothing contained herein constitutes investment advice or offers any opinion with respect to the suitability of any security, and any views expressed herein should not be taken as advice to buy, sell, or hold any security or as an endorsement of any security or company. In preparing the information contained herein, Quantopian, Inc. has not taken into account the investment needs, objectives, and financial circumstances of any particular investor. Any views expressed and data illustrated herein were prepared based upon information, believed to be reliable, available to Quantopian, Inc. at the time of publication. Quantopian makes no guarantees as to their accuracy or completeness. All information is subject to change and may quickly become unreliable for various reasons, including changes in market conditions or economic circumstances." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
variani/study
02-intro-python/projects/ggplot/iris.ipynb
cc0-1.0
[ "%qtconsole\n\n%matplotlib inline", "About\nThis notebook presents plotting examples on the famous iris data set by using the Grammar of Graphics implemented in ggplot package. This package is good for those users coming from R, because of its design goal:\n\nThe goal is to have no difference other than those necessary due to the differences between R and Python.\n\nvia github.com/yhat/ggplot/\nInclude", "from ggplot import *\n\nimport pandas as pd\nfrom sklearn import datasets", "Data", "# import iris data\niris = datasets.load_iris()\n\ndf1 = pd.DataFrame(iris.data, columns = iris.feature_names)\ndf2 = pd.DataFrame(iris.target_names[iris.target])\n\ndf = pd.concat([df1, df2], axis = 1)\ndf.head()\n\ndf.columns = ['sl', 'sw', 'pl', 'pw', 'species']", "geom_point (scatter plot)\nUsing two continious variables sl and sw within the class labels in species variable, one can see whether there are class differences in the 2D space (via scatter plot).", "p1 = ggplot(aes(x = 'sl', y = 'sw', color = 'species'), data = df) + geom_point()\np1", "The plot shows that setosa class can be linearly separated from other two classes.\ngeom_smooth", "p2 = ggplot(aes(x = 'sl', y = 'sw', group = 'species', color = 'species'), data = df) + \\\n geom_point() + geom_smooth(alpha = 0.5) + theme_bw()\np2", "geom_points with subsetting", "p3 = ggplot(aes(x = 'sl', y = 'sw', color = 'species'), data = df[df.species != 'setosa']) + \\\n geom_point() + theme_538()\np3", "geom_histogram with facetting", "p3 = ggplot(aes(x = 'sl'), data = df) + geom_histogram() + facet_wrap('species', ncol = 1)\np3" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ceos-seo/data_cube_notebooks
notebooks/training/ardc_training/Training_TaskE_Transect.ipynb
apache-2.0
[ "# Enable importing of utilities.\nimport sys\nimport os\nsys.path.append(os.environ.get('NOTEBOOK_ROOT'))", "ARDC Training: Python Notebooks\nTask-E: This notebook will demonstrate 2D transect analyses and 3D Hovmoller plots. We will run these for NDVI (land) and TSM (water quality) to show the spatial and temporal variation of data along a line (transect) for a given time slice and for the entire time series. \n\nImport the Datacube Configuration", "import xarray as xr\nimport numpy as np\n\nimport datacube\nimport utils.data_cube_utilities.data_access_api as dc_api \n\nfrom datacube.utils.aws import configure_s3_access\nconfigure_s3_access(requester_pays=True)\n\napi = dc_api.DataAccessApi()\ndc = api.dc", "Browse the available Data Cubes", "list_of_products = dc.list_products()\nnetCDF_products = list_of_products[list_of_products['format'] == 'NetCDF']\nnetCDF_products", "Pick a product\nUse the platform and product names from the previous block to select a Data Cube.", "# Change the data platform and data cube here\n\nplatform = 'LANDSAT_7'\nproduct = 'ls7_usgs_sr_scene'\ncollection = 'c1'\nlevel = 'l2'", "Display Latitude-Longitude and Time Bounds of the Data Cube", "from utils.data_cube_utilities.dc_time import _n64_to_datetime, dt_to_str\n\nextents = api.get_full_dataset_extent(platform = platform, product = product, measurements=[])\n\nlatitude_extents = (min(extents['latitude'].values),max(extents['latitude'].values))\nlongitude_extents = (min(extents['longitude'].values),max(extents['longitude'].values))\ntime_extents = (min(extents['time'].values),max(extents['time'].values))\n\nprint(\"Latitude Extents:\", latitude_extents)\nprint(\"Longitude Extents:\", longitude_extents)\nprint(\"Time Extents:\", list(map(dt_to_str, map(_n64_to_datetime, time_extents))))", "Visualize Data Cube Region", "## The code below renders a map that can be used to orient yourself with the region.\nfrom utils.data_cube_utilities.dc_display_map import display_map\ndisplay_map(latitude = latitude_extents, longitude = longitude_extents)", "Pick a smaller analysis region and display that region\nTry to keep your region to less than 0.2-deg x 0.2-deg for rapid processing. You can click on the map above to find the Lat-Lon coordinates of any location. You will want to identify a region with an inland water body and some vegetation. Pick a time window of several years.", "## Vietnam - Central Lam Dong Province ##\n# longitude_extents = (107.0, 107.2)\n# latitude_extents = (11.7, 12.0)\n\n## Vietnam Ho Tri An Lake\n# longitude_extents = (107.0, 107.2)\n# latitude_extents = (11.1, 11.3)\n\n## Sierra Leone - Delta du Saloum\nlatitude_extents = (13.55, 14.12)\nlongitude_extents = (-16.80, -16.38)\n\ntime_extents = ('2005-01-01', '2005-12-31')\n\ndisplay_map(latitude = latitude_extents, longitude = longitude_extents)", "Load the dataset and the required spectral bands or other parameters\nAfter loading, you will view the Xarray dataset. Notice the dimensions represent the number of pixels in your latitude and longitude dimension as well as the number of time slices (time) in your time series.", "landsat_dataset = dc.load(latitude = latitude_extents,\n longitude = longitude_extents,\n platform = platform,\n time = time_extents,\n product = product,\n measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'pixel_qa']) \n\nlandsat_dataset\n#view the dimensions and sample content from the cube", "Preparing the data\nWe will filter out the clouds and the water using the Landsat pixel_qa information. 
Next, we will calculate the values of NDVI (vegetation index) and TSM (water quality).", "from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask\n\nplt_col_lvl_params = dict(platform=platform, collection=collection, level=level)\nclear_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['clear'], **plt_col_lvl_params)\nwater_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['water'], **plt_col_lvl_params)\nshadow_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['cld_shd'], **plt_col_lvl_params) \ncloud_xarray = landsat_qa_clean_mask(landsat_dataset, cover_types=['cloud'], **plt_col_lvl_params) \n\nclean_xarray = (clear_xarray | water_xarray).rename(\"clean_mask\")\n\ndef NDVI(dataset):\n return ((dataset.nir - dataset.red)/(dataset.nir + dataset.red)).rename(\"NDVI\")\n\nndvi_xarray = NDVI(landsat_dataset) # Vegetation Index\n\nfrom utils.data_cube_utilities.dc_water_quality import tsm\n\ntsm_xarray = tsm(landsat_dataset, clean_mask = water_xarray.values.astype(bool) ).tsm", "Combine everything into one XARRAY for further analysis", "combined_dataset = xr.merge([landsat_dataset,\n clean_xarray,\n clear_xarray,\n water_xarray,\n shadow_xarray,\n cloud_xarray, \n ndvi_xarray,\n tsm_xarray])\n\n# Copy original crs to merged dataset \ncombined_dataset = combined_dataset.assign_attrs(landsat_dataset.attrs)", "Define a path for a transect\nA transect is just a line that will run across our region of interest. Use the display map above to find the end points of your desired line. If you click on the map it will give you precise Lat-Lon positions for a point.\nStart with a line across a mix of water and land", "# Water and Land Mixed Examples\n\nmid_lon = np.mean(longitude_extents)\nmid_lat = np.mean(latitude_extents)\n\n# North-South Path\nstart = (latitude_extents[0], mid_lon)\nend = (latitude_extents[1], mid_lon)\n\n# East-West Path\n# start = (mid_lat, longitude_extents[0])\n# end = (mid_lat, longitude_extents[1])\n\n# East-West Path for Lake Ho Tri An\n# start = ( 11.25, 107.02 )\n# end = ( 11.25, 107.18 )", "Plot the transect line", "import folium\nimport numpy as np \nfrom folium.features import CustomIcon\n\ndef plot_a_path(points , zoom = 15):\n xs,ys = zip(*points)\n \n map_center_point = (np.mean(xs), np.mean(ys))\n the_map = folium.Map(location=[map_center_point[0], map_center_point[1]], zoom_start = zoom, tiles='http://mt1.google.com/vt/lyrs=y&z={z}&x={x}&y={y}', attr = \"Google Attribution\")\n path = folium.PolyLine(locations=points, weight=5, color = 'orange')\n the_map.add_child(path)\n \n start = ( xs[0] ,ys[0] )\n end = ( xs[-1],ys[-1])\n \n return the_map \n\nplot_a_path([start,end]) ", "Find the nearest pixels along the transect path", "from utils.data_cube_utilities.transect import line_scan\n\nimport numpy as np\n\ndef get_index_at(coords, ds):\n '''Returns an integer index pair.'''\n lat = coords[0]\n lon = coords[1]\n \n nearest_lat = ds.sel(latitude = lat, method = 'nearest').latitude.values\n nearest_lon = ds.sel(longitude = lon, method = 'nearest').longitude.values\n \n lat_index = np.where(ds.latitude.values == nearest_lat)[0]\n lon_index = np.where(ds.longitude.values == nearest_lon)[0]\n \n return (int(lat_index), int(lon_index))\n\ndef create_pixel_trail(start, end, ds):\n a = get_index_at(start, ds)\n b = get_index_at(end, ds)\n \n indices = line_scan.line_scan(a, b)\n\n pixels = [ ds.isel(latitude = x, longitude = y) for x, y in indices]\n return pixels\n\nlist_of_pixels_along_segment = create_pixel_trail(start, end, 
landsat_dataset)", "Groundwork for Transect (2-D) and Hovmöller (3-D) Plots", "import xarray\nimport matplotlib.pyplot as plt \nfrom matplotlib.ticker import FuncFormatter \nfrom datetime import datetime \nimport time\n\ndef plot_list_of_pixels(list_of_pixels, band_name, y = None): \n start = (\n \"{0:.2f}\".format(float(list_of_pixels[0].latitude.values )),\n \"{0:.2f}\".format(float(list_of_pixels[0].longitude.values))\n ) \n end = (\n \"{0:.2f}\".format(float(list_of_pixels[-1].latitude.values)),\n \"{0:.2f}\".format(float(list_of_pixels[-1].longitude.values))\n )\n \n def reformat_n64(t):\n return time.strftime(\"%Y.%m.%d\", time.gmtime(t.astype(int)/1000000000)) \n \n def pixel_to_array(pixel):\n return(pixel.values)\n \n def figure_ratio(x,y, fixed_width = 10):\n width = fixed_width\n height = y * (fixed_width / x)\n return (width, height)\n \n pixel_array = np.transpose([pixel_to_array(pix) for pix in list_of_pixels])\n \n #If the data has one acquisition, then plot transect (2-D), else Hovmöller (3-D) \n if y.size == 1:\n plt.figure(figsize = (15,5))\n plt.scatter(np.arange(pixel_array.size), pixel_array)\n plt.title(\"Transect (2-D) \\n Acquisition date: {}\".format(reformat_n64(y)))\n plt.xlabel(\"Pixels along the transect \\n {} - {} \\n \".format(start,end))\n plt.ylabel(band_name)\n\n else:\n m = FuncFormatter(lambda x :x )\n figure = plt.figure(figsize = figure_ratio(len(list_of_pixels),\n len(list_of_pixels[0].values),\n fixed_width = 15))\n number_of_y_ticks = 5 \n\n ax = plt.gca()\n cax = ax.imshow(pixel_array, interpolation='none')\n figure.colorbar(cax,fraction=0.110, pad=0.04)\n\n ax.set_title(\"Hovmöller (3-D) \\n Acquisition range: {} - {} \\n \".format(reformat_n64(y[0]),reformat_n64(y[-1])))\n plt.xlabel(\"Pixels along the transect \\n {} - {} \\n \".format(start,end))\n ax.get_yaxis().set_major_formatter( FuncFormatter(lambda x, p: reformat_n64(list_of_pixels[0].time.values[int(x)]) if int(x) < len(list_of_pixels[0].time) else \"\")) \n plt.ylabel(\"Time\")\n plt.show()\n\ndef transect_plot(start,\n end,\n da):\n if type(da) is not xarray.DataArray and (type(da) is xarray.Dataset) :\n raise Exception('You should be passing in a data-array, not a Dataset')\n\n pixels = create_pixel_trail(start, end,da)\n dates = da.time.values \n\n lats = [x.latitude.values for x in pixels]\n lons = [x.longitude.values for x in pixels]\n plot_list_of_pixels(pixels, da.name, y = dates)\n\npixels = create_pixel_trail(start, end, landsat_dataset)\n\nt = 2\nsubset = list( map(lambda x: x.isel(time = t), pixels))", "Mask Clouds", "from utils.data_cube_utilities.clean_mask import landsat_qa_clean_mask\n\nclean_mask = landsat_qa_clean_mask(landsat_dataset, platform=platform, \n collection=collection, level=level)\n\ncloudless_dataset = landsat_dataset.where(clean_mask)", "Select an acquisition date and then plot a 2D transect without clouds", "# select an acquisition number from the start (t=0) to \"time\" using the array limits above\nacquisition_number = 10\n\n#If plotted will create the 2-D transect\ncloudless_dataset_for_acq_no = cloudless_dataset.isel(time = acquisition_number) \n\n#If Plotted will create the 3-D Hovmoller plot for a portion of the time series (min to max)\nmin_acq = 1\nmax_acq = 4\n\ncloudless_dataset_from_1_to_acq_no = cloudless_dataset.isel(time = slice(min_acq, max_acq)) ", "Select one of the XARRAY parameters for analysis", "band = 'green'", "Create a 2D Transect plot of the \"band\" for one date", "transect_plot(start, end, cloudless_dataset_for_acq_no[band])", 
"Create a 2D Transect plot of NDVI for one date", "transect_plot(start, end, NDVI(cloudless_dataset_for_acq_no))", "Create a 3D Hovmoller plot of NDVI for the entire time series", "transect_plot(start, end, NDVI(cloudless_dataset))", "Create a 2D Transect plot of water existence for one date", "transect_plot(start, end, water_xarray.isel(time = acquisition_number))", "Create a 3D Hovmoller plot of water extent for the entire time series", "transect_plot(start, end, water_xarray)", "Create a 2D Transect plot of water quality (TSM) for one date", "transect_plot(start, end, tsm_xarray.isel(time = acquisition_number))", "Create a 3D Hovmoller plot of water quality (TSM) for one date", "transect_plot(start, end, tsm_xarray)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
bbglab/adventofcode
2017/iker/Ferran AoC 2017.ipynb
mit
[ "In general your solutions are more elegant.\nGreat use of available libraries\nLike your solutions for day3 (spiral memory), day11 (hex grid)\nDay 1\nSmart, clean and elegant.", "digits = '91212129'\n\nL = len(digits)\nsum([int(digits[i]) for i in range(L) if digits[i] == digits[(i+1) % L]])\n\ndef solve(captcha):\n captcha = list(map(int, captcha))\n prev_val = captcha[-1]\n repeated = 0\n for v in captcha:\n if v == prev_val:\n repeated += v\n prev_val = v\n return repeated\nsolve(digits)", "Day 3\nWell done", "import numpy as np\n\ndef number_to_coordinates(n):\n \n q = int(np.sqrt(n))\n r = n - q ** 2\n if q % 2 != 0:\n x = (q - 1) // 2 + min(1, r) + min(q - r + 1, 0)\n y = - (q - 1) // 2 + min(max(r - 1, 0), q)\n else:\n x = 1 - (q // 2) - min(1, r) - min(q - r + 1, 0)\n y = q // 2 - min(max(r - 1, 0), q)\n return x, y\n\ndef spiral_manhattan(n):\n x, y = number_to_coordinates(n)\n return abs(x) + abs(y)\n\nspiral_manhattan(1024)\n\nimport math\n\ndef get_side_size(point):\n side_size = math.ceil(math.sqrt(point))\n if side_size % 2 == 0:\n side_size += 1\n return side_size\n\n\ndef get_displacement(point, ring):\n distances = []\n for i in [1,3,5,7]:\n distances.append(abs(point-i*ring))\n return min(distances)\n\n\ndef distance(point):\n if point == 1:\n return 0\n else:\n side_size = get_side_size(point)\n radius = (side_size - 1) // 2\n rescaled = point - (side_size-2)**2\n displacement = get_displacement(rescaled, radius)\n return displacement + radius\n \ndistance(1024)", "Day 5\nAlthough it can be better to expicitily check, you can use Exceptions to capture exceptional behaviour.\nNote that your code is mor robutes and pos=-1 will work for even even when it shouldn't", "instructions = [0, 3, 0, 1, -3]\n\nclass Maze(object):\n \n def __init__(self, curr_pos, state):\n self.curr_pos = curr_pos\n self.state = state.copy()\n self.length = len(self.state)\n \n def evolve(self):\n self.state[self.curr_pos] += 1\n self.curr_pos += self.state[self.curr_pos] - 1\n \n def outside(self):\n return (self.curr_pos >= self.length) or (self.curr_pos < 0)\n \n\ndef steps_maze(l):\n maze = Maze(0, l)\n count = 0\n while not maze.outside():\n maze.evolve()\n count += 1\n return count, maze.state\n\nsteps_maze(instructions)\n\ndef steps2exit(instructions):\n position = 0\n steps = 0\n try:\n while True:\n jump = instructions[position]\n instructions[position] = jump + 1\n position += jump\n steps += 1\n except IndexError:\n return steps\n \nsteps2exit(instructions)\n\n%%timeit\nsteps_maze(instructions)\n\n%%timeit\nsteps2exit(instructions)", "Day 6\nIt can be difficult to decide when classes are useful", "banks = [0, 2, 7, 0]\n\ndef reallocate(val, pos, n):\n l = [val // n] * n\n r = val % n\n for i in range(r):\n l[(pos + i + 1) % n] += 1\n return l\n\ndef update(b):\n blocks = sorted(list(enumerate(b)), key=lambda v: (v[1], -v[0]), reverse=True)\n pos = blocks[0][0]\n val = blocks[0][1]\n c = [b[i] if i != pos else 0 for i in range(len(b))]\n l = reallocate(val, pos, len(b))\n for i, v in enumerate(c):\n c[i] += l[i]\n return c\n\ndef count_until_loop(b):\n count = 0\n previous = set()\n h = hash(tuple(b))\n while h not in previous:\n previous.add(h)\n count += 1\n b = update(b)\n h = hash(tuple(b))\n return count\n\ncount_until_loop(banks)\n\nclass Memory:\n\n def __init__(self, banks):\n self.banks = banks\n self.states = []\n\n def _find_fullest(self):\n blocks = max(self.banks)\n return self.banks.index(blocks), blocks\n\n def _redistribue(self):\n pos, blocks = self._find_fullest()\n 
self.banks[pos] = 0\n while blocks > 0:\n pos += 1\n if pos >= len(self.banks):\n pos = 0\n self.banks[pos] += 1\n blocks -= 1\n\n def realloate_till_loop(self):\n redistributions = 0\n self.states.append(self.banks.copy())\n while True:\n self._redistribue()\n redistributions += 1\n configuration = self.banks.copy()\n if configuration in self.states:\n break\n else:\n self.states.append(configuration)\n return redistributions\n\nMemory(banks).realloate_till_loop()\n\n%%timeit\ncount_until_loop(banks)\n\n%%timeit\nMemory(banks).realloate_till_loop()", "Day 7\nAssertions should not be used in normal program execution because they can be disabled\nYou can define your own exceptions easily", "def pick_cherry(leaves):\n while leaves:\n leaf = leaves.pop()\n parent = parents[leaf]\n offspring = children[parent]\n try:\n for child in offspring:\n assert(children[child] == [])\n return parent\n except AssertionError:\n pass\n\nclass Unbalanced(Exception):\n pass", "Day 8\nExec is a good idea (and makes the code really simple), but there are also other solutions.\nSame for day 18", "def apply_instructions(registers):\n for reg, leap, cond in instructions:\n bool_str = 'registers[\"{0}\"]'.format(cond[0]) + ''.join(cond[1:])\n update_str = 'if {0}: registers[\"{1}\"] += {2} '.format(bool_str, reg, leap)\n exec(update_str)\n\nimport operator\n\ncomparisons = {'>': operator.gt, '>=': operator.ge, '<': operator.lt,\n '<=': operator.le, '==': operator.eq, '!=': operator.ne}\n\ndef process(instruction):\n reg, operation, val, condition = parse(instruction)\n cond_reg, cond_op, cond_val = condition\n if cond_op(registers[cond_reg], cond_val):\n registers[reg] = operation(registers[reg], val)", "Day 12\nContinue more explicit than pass", "def connected(node, pipes):\n neighbors = pipes[node]\n pending = list(neighbors)\n while pending:\n alice = pending.pop(0)\n for bob in pipes[alice]:\n if bob in neighbors:\n pass # ---> continue\n else:\n neighbors.add(bob)\n pending.append(bob)\n return neighbors", "Day 13\nDo not make complex parsers if not needed", "def parse_scanners(input_file):\n scanners = defaultdict(int)\n with open(input_file, 'rt') as f_input:\n csv_reader = csv.reader(f_input, delimiter=' ')\n for l in csv_reader:\n scanners[int(l[0].rstrip(':'))] = int(l[1].rstrip())\n return scanners\n\ndef parse(lines):\n layers_depth = {}\n for line in lines:\n l = line.strip().split(': ')\n layers_depth[int(l[0])] = int(l[1])\n return layers_depth", "And do not need to do too many things", "test_input = \"\"\"0: 3\n1: 2\n4: 4\n6: 4\"\"\".splitlines()\n\nlayers = parse(test_input)\n\nimport collections\n\ndef tick(lrank, time):\n r = time % (2 * (lrank - 1))\n return (r <= lrank - 1) * r + (r > lrank - 1) * (2 * (lrank - 1) - r)\n\n\ndef get_state(time, scanners): \n state = dict(zip(list(scanners.keys()), [0] * len(scanners)))\n if time == 0:\n return state\n elif time > 0:\n for t in range(time + 1):\n for scanner in scanners:\n state[scanner] = tick(scanners[scanner], t)\n return state\n\ndef trip_severity(scanners):\n severity = 0\n layers = max(list(scanners.keys()))\n for t in range(layers + 1):\n if scanners[t] != 0:\n tick_before = tick(scanners[t], t)\n tick_now = tick(scanners[t], t + 1)\n if (tick_before == 0):\n severity += scanners[t] * t\n return severity\n\nscanners = collections.defaultdict(int)\nscanners.update(layers)\ntrip_severity(scanners)\n\ndef severity(layers_depth, start_time=0):\n severity_ = 0\n for i, depth in layers_depth.items():\n if (start_time + i) % ((depth-1) * 2) == 
0:\n severity_ += i*depth\n return severity_\n\nseverity(layers)", "Day 15\nGenerators are an esaier way to make iterators. In the end the result is similar", "class FancyGen(object):\n \n def __init__(self, start, factor):\n self.start = start\n self.factor = factor\n self.q = 2147483647\n \n def __iter__(self):\n self.a = self.start\n return self\n\n def __next__(self):\n n = (self.a * self.factor) % self.q\n self.a = n\n return n\n\ndef compare_lowest_bits(n, m):\n n = n % (2 ** 16)\n m = m % (2 ** 16)\n return n == m\n\ndef duel(starta, startb):\n N = 40 * 10 ** 6\n count = 0\n gena = iter(FancyGen(starta, 16807))\n genb = iter(FancyGen(startb, 48271))\n for _ in range(N):\n if compare_lowest_bits(next(gena), next(genb)):\n count += 1\n return count\n\n%%timeit\nduel(65, 8921)\n\ndef generator(start_value, factor):\n val = start_value\n while True:\n val = val * factor % 2147483647\n yield val\n\ndef compare(start_A, start_B, rounds):\n matches = 0\n for i, values in enumerate(zip(generator(start_A, 16807), generator(start_B, 48271))):\n if i >= rounds:\n return matches\n else:\n vA, vB = values\n if vA.to_bytes(100, 'big')[-2:] == vB.to_bytes(100, 'big')[-2:]:\n matches += 1\n\n%%timeit\ncompare(65, 8921, 40*10**6)", "Day 16\nRegex are expensive, even if you do not \"precompile\" them", "import re\nimport numpy as np\nimport copy\n\ndef shuffle(p, moves):\n s = copy.copy(p)\n for move in moves:\n spin = re.search('s(\\d+)', move)\n swapx = re.search('x(\\d+)\\/(\\d+)', move)\n swapp = re.search('p(\\w)\\/(\\w)', move)\n if spin:\n s = np.roll(s, int(spin.group(1)))\n if swapx:\n a = int(swapx.group(1))\n b = int(swapx.group(2))\n s[a], s[b] = s[b], s[a]\n if swapp:\n a = swapp.group(1)\n b = swapp.group(2)\n a = ''.join(s).index(a)\n b = ''.join(s).index(b)\n s[a], s[b] = s[b], s[a]\n return ''.join(s)\n\n%%timeit\nshuffle(list('abcde'), ['s1', 'x3/4', 'pe/b'])\n\ndef parse(instruction):\n name = instruction[0]\n params = instruction[1:]\n if name == 's':\n params = [int(params)]\n else:\n params = params.split('/')\n if name == 'x':\n params = list(map(int, params))\n return name, params\n\nclass Programs:\n\n def __init__(self, progs):\n self.progs = progs\n self.length = len(self.progs)\n self.instructions_dict = {'s': self.spin, 'x': self.exchange, 'p': self.partner}\n\n def spin(self, pos):\n pos = pos % self.length\n if pos > 0:\n tmp = self.progs[-pos:]\n progs = tmp + self.progs\n self.progs = progs[:self.length]\n\n def exchange(self, pos1, pos2):\n v1 = self.progs[pos1]\n v2 = self.progs[pos2]\n self.progs = self.progs[:pos1] + v2 + self.progs[pos1+1:]\n self.progs = self.progs[:pos2] + v1 + self.progs[pos2+1:]\n\n def partner(self, prog1, prog2):\n self.exchange(self.progs.index(prog1), self.progs.index(prog2))\n\n def dance(self, instructions):\n for inst, params in instructions:\n self.instructions_dict[inst](*params)\n return p.progs\n\n%%timeit\np = Programs('abcde')\np.dance([parse(inst) for inst in ['s1', 'x3/4', 'pe/b']])\n\nimport re\nimport numpy as np\nimport copy\n\nregex1 = re.compile('s(\\d+)')\nregex2 = re.compile('x(\\d+)\\/(\\d+)')\nregex3 = re.compile('p(\\w)\\/(\\w)')\n\ndef shuffle2(p, moves):\n s = copy.copy(p)\n for move in moves:\n spin = regex1.search(move)\n swapx = regex2.search(move)\n swapp = regex3.search(move)\n if spin:\n s = np.roll(s, int(spin.group(1)))\n if swapx:\n a = int(swapx.group(1))\n b = int(swapx.group(2))\n s[a], s[b] = s[b], s[a]\n if swapp:\n a = swapp.group(1)\n b = swapp.group(2)\n a = ''.join(s).index(a)\n b = 
''.join(s).index(b)\n s[a], s[b] = s[b], s[a]\n return ''.join(s)\n\n%%timeit\nshuffle2(list('abcde'), ['s1', 'x3/4', 'pe/b'])", "Day 22\nUseless day just takes memory and time", "test_input = \"\"\"..#\n#..\n...\"\"\".splitlines()\n\nimport numpy as np\nfrom collections import defaultdict\n\ndef parse_grid(f_input):\n grid = defaultdict(lambda: '.')\n size = 0\n for l in f_input:\n hash_row = {hash(np.array([size, i], dtype=np.int16).tostring()): v for i, v in enumerate(list(l.rstrip()))}\n grid.update(hash_row)\n size += 1\n return grid, size\n\nclass Virus(object):\n def __init__(self, grid, size):\n self.grid = grid # enclosing the hashes and states of infected positions\n self.pos = np.array([(size - 1) // 2, (size - 1) // 2], dtype=np.int16) # initially in the center of a positive grid\n self.facing = np.array([-1, 0], dtype=np.int16) # initially facing up in our coords\n self.count_infect = 0\n def burst(self):\n hash_pos = hash(self.pos.tostring())\n rotation = np.array([[0, -1], [1, 0]], dtype=np.int16)\n self.facing = np.dot(rotation, self.facing)\n if self.grid[hash_pos] == '#':\n self.grid[hash_pos] = '.'\n self.facing *= -1\n else:\n self.grid[hash_pos] = '#'\n self.count_infect += 1\n self.pos += self.facing\n\ndef count_infect(grid, size, n):\n test_virus = Virus(grid, size)\n for _ in range(n):\n test_virus.burst()\n return test_virus.count_infect\n\ngrid, size = parse_grid(test_input)\n\n%%timeit\ncount_infect(grid, size, 10000)\n\ndirections = 'nesw'\ndirections2move = {'n': (0, 1), 's': (0, -1), 'e': (1, 0), 'w': (-1, 0)}\n\ndef parse(lines):\n nodes = []\n size = len(lines[0].strip())\n v = size // 2\n for i, line in enumerate(lines):\n for j, c in enumerate(line.strip()):\n if c == '#':\n nodes.append((j-v, (i-v)*(-1)))\n return set(nodes)\n\n\ndef burst(infected_nodes, pos, direction):\n x, y = pos\n\n # next direction\n if pos in infected_nodes:\n i = (directions.index(direction) + 1) % 4\n\n infected_nodes.remove(pos)\n\n else:\n i = (directions.index(direction) - 1) % 4\n\n infected_nodes.add(pos)\n\n next_direction = directions[i]\n\n # next position\n a, b = directions2move[next_direction]\n next_pos = (x+a, y+b)\n\n return infected_nodes, next_pos, next_direction\n\n\ndef count_infections(initial_status, iterations):\n count = 0\n status = initial_status\n pos = (0,0)\n direction = 'n'\n for _ in range(iterations):\n prev_size = len(status)\n status, pos, direction = burst(status, pos, direction)\n count += 1 if len(status) > prev_size else 0 # should be 0 or 1\n return count\n\nnodes = parse(test_input)\n\n%%timeit\ncount_infections(nodes, 10**4)", "Day 23\nJust different approaches", "def is_prime(x):\n if x >= 2:\n for y in range(2,x):\n if not ( x % y ):\n return False\n else:\n return False\n return True\n\ndef run_coprocessor(alpha):\n loop = False\n a, b = alpha, 79\n c = b\n d, e, f, g, h = 0, 0, 0, 0, 0\n if a != 0:\n b *= 100\n b += 100000\n c = b\n c += 17000\n while (g != 0) or not loop:\n loop = True\n f = 1\n d = 2\n e = 2\n if not is_prime(b):\n f = 0\n e = b\n d = b\n if f == 0:\n h += 1\n g = b\n g = g - c\n if g == 0:\n return a, b, c, d, e, f, g, h\n else:\n b += 17\n\nrun_coprocessor(1)\n\ndef isprime(value):\n for i in range(2, value):\n if (value % i) == 0:\n return False\n return True\n\n\ndef count_primes(init, end, step):\n count = 0\n for i in range(init, end+1, step):\n if isprime(i):\n count += 1\n return count\n\n\ndef part2():\n b = 106500\n c = 123500\n h = (c-b)/17 # each loop b increases 17 until it matches c\n h += 1 # there is 
an extra loop when b == c ??\n h -= count_primes(b, c, 17) # on primes, f is set to 0 and h not increased\n return int(h)\n\npart2()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
DSSatPitt/katz-python-workshop
jupyter-notebooks/Running Code.ipynb
cc0-1.0
[ "Running Code\nFirst and foremost, the Jupyter Notebook is an interactive environment for writing and running code. The notebook is capable of running code in a wide range of languages. However, each notebook is associated with a single kernel. This notebook is associated with the IPython kernel, therefor runs Python code.\nCode cells allow you to enter and run code\nRun a code cell using Shift-Enter or pressing the <button class='btn btn-default btn-xs'><i class=\"icon-step-forward fa fa-step-forward\"></i></button> button in the toolbar above:", "a = 10\n\nprint(a)", "There are two other keyboard shortcuts for running code:\n\nAlt-Enter runs the current cell and inserts a new one below.\nCtrl-Enter run the current cell and enters command mode.\n\nManaging the Kernel\nCode is run in a separate process called the Kernel. The Kernel can be interrupted or restarted. Try running the following cell and then hit the <button class='btn btn-default btn-xs'><i class='icon-stop fa fa-stop'></i></button> button in the toolbar above.", "import time\ntime.sleep(10)", "If the Kernel dies you will be prompted to restart it. Here we call the low-level system libc.time routine with the wrong argument via\nctypes to segfault the Python interpreter:", "import sys\nfrom ctypes import CDLL\n# This will crash a Linux or Mac system\n# equivalent calls can be made on Windows\n\n# Uncomment these lines if you would like to see the segfault\n\n# dll = 'dylib' if sys.platform == 'darwin' else 'so.6'\n# libc = CDLL(\"libc.%s\" % dll) \n# libc.time(-1) # BOOM!!", "Cell menu\nThe \"Cell\" menu has a number of menu items for running code in different ways. These includes:\n\nRun and Select Below\nRun and Insert Below\nRun All\nRun All Above\nRun All Below\n\nRestarting the kernels\nThe kernel maintains the state of a notebook's computations. You can reset this state by restarting the kernel. This is done by clicking on the <button class='btn btn-default btn-xs'><i class='fa fa-repeat icon-repeat'></i></button> in the toolbar above.\nsys.stdout and sys.stderr\nThe stdout and stderr streams are displayed as text in the output area.", "print(\"hi, stdout\")\n\nfrom __future__ import print_function\nprint('hi, stderr', file=sys.stderr)", "Output is asynchronous\nAll output is displayed asynchronously as it is generated in the Kernel. If you execute the next cell, you will see the output one piece at a time, not all at the end.", "import time, sys\nfor i in range(8):\n print(i)\n time.sleep(0.5)", "Large outputs\nTo better handle large outputs, the output area can be collapsed. Run the following cell and then single- or double- click on the active area to the left of the output:", "for i in range(50):\n print(i)", "Beyond a certain point, output will scroll automatically:", "for i in range(50):\n print(2**i - 1)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mohsinhaider/pythonbootcampacm
Objects and Data Structures/Dictionaries.ipynb
mit
[ "Dictionaries\nPython has 3 primary types of data: sequences, sets, and mappings. A dictionary is a mapping, or, in other words, a container for multiple mappings of key-value pairs. In specific, mappings are collections of objects organized by key values. What does this mean, effectively? Well, dictionaries do not retain any specific order - after all, they are organized by \"keys\", not by sequential memory locations like in a sequence. In this lecture we will cover:\n1. Initializing a Dictionary\n &gt; General key-value mappings\n &gt; Varying keys\n &gt; Varying values\n &gt; Multiple values per key?\n2. Accessing and Mutating Dictionaries\n &gt; Notation for access\n &gt; Mutation possibilities\n &gt; Assignment based, deletion, etc\n &gt; Methods\n3. Dictionary Functions and Methods\n4. Nesting Dictionaries\n\nInitializing a Dictionary\nA dictionary is a mapping. To initialize dictionaries, we create these key-value mappings ourselves. The keys in the key object has to be a hashable type, but the value can be any type.\nLet's initialize a dictionary that maps strings to numbers. If we were specific, this dictionary could hold the amount of the shares (integer) a person (string) has in a company.", "# Initializing a Dictionary\nmy_dictionary = {\"Mike\":1, \"John\":5}", "Can we map two integers? Yes, we can. Keys must be hashable types, and integers are hashable.", "# Initializing a dictionary with integer-integer pairs\nanother_dict = {4:1, 8:2, 9:4, 2:6}", "Can we vary the key type across mappings in the dictionary? Yes. The example below creates a string, integer, and float key, and maps them all to integers.", "# Initializing a Dictionary with varying key types\nvarying_dict = {\"John\":4, 9:5, 4.32:8}", "Can we vary the value type? Yes. Also, in most of our examples, we will end up mapping strings to numbers or data structures. Below, we've mapped out string keys to integers and lists.", "# Initializing a Dictionary that maps strings to either to an integer or list\nlast_dict = {\"Micah\":[1, 2, 5], \"Rose\":4, \"John\":9, \"Gwen\":[5, 3]}", "Can we map dictionary to other dictionaries? Yes. Note below that \"Ruth\" and \"Barbara\" are keys of final_dict, and that \"John\" and \"Leslie\" are keys of the dictionary that Ruth is mapped to.", "final_dict = {\"Ruth\":{\"John\":[1, 2, 5], \"Leslie\":6 }, \"Barbara\":4}", "Multiple values cannot be assigned to a key in a dictionary. However, as we experimented above, we can map a key to a data structure that holds multiple values by nature.\nAccessing and Mutating Dictionaries\nLike any other data structure, dictionaries have certain methods of access and mutation.\nAccessing Dictionaries\nThe common notation for access into a dictionary is by specifying the key, which returns the associated value. Consider the notation:\nmy_dict[key_here]\n\nThis notation will return the value(s) that the explicit key is mapped to in the dictionary. Let's try accessing values from dictionaries below.\nFirst, let us create a dictionary.", "# Initializing a Dictionary that maps strings to integers\nshare_holdings_dict = {\"john\":5, \"michael\":4, \"rutherford\":19}", "How do we get the number of shares Michael has in the company? Using the above notation, we write the following.", "michael_shares = share_holdings_dict[\"michael\"]\nprint(michael_shares)", "Sure enough, we get the number of shares Michael has: 4. This notation will allow you to extract a key of any type. Let's try this example again. 
This time, let's map first names to last names.", "# Initializing a Dictionary that maps strings to strings\ncomputer_scientists_dict = {\"Peter\":\"Norvig\", \"Donald\":\"Knuth\", \"Ada\":\"Lovelace\", \"Grace\":\"Hopper\"}", "How do we get Ada's last name? Use the same notation from before.", "last_name = computer_scientists_dict[\"Ada\"]\nprint(last_name)", "Now, let us maps strings to lists. Recall that we can access values using keys. If a string is our key, then our value is a list.", "# Initializaing a Dictionary that maps strings to lists\nconversion_rate = {\"Moe's Pizza\":[5, 13, 4], \"Jeanie's Pub\":[7, 7, 8]}", "How do we access the Moe's conversion rate list? The same notation as always...", "# Retrieving the list\nrates_list = conversion_rate[\"Moe's Pizza\"]\nprint(rates_list)", "We already know how to retrive values from keys - we just accessed the conversion rate list for Moe's Pizza. So, how do we access the the \"things\" inside of a value if it is a data structure (in this case, it is a list). In example: let us assume that the elements in the list represent the conversation rate from the 1st year, 2nd year, and 3rd year of opening business. How we do access Moe's Pizza's third year conversion rates. Here is the shorthand way to do it:", "third_year_conv = conversion_rate[\"Moe's Pizza\"][2]\nprint(third_year_conv)", "The answer, as you can see, is to simply use bracket notation for this list. The expression\nconversion_rate[\"Moe's Pizza\"]\n\ngives us [5, 13, 4]. If we call\n[5, 13, 4][2]\n\nwe would land 4. This is why we can use the bracket notation after getting the list to begin with.\nMutating Dictionaries\nYou can mutate a dictionary multiple ways. We could consider:\nChange value?\nDelete key?\nAdd key?\nChange key?\nWe won't consider deleting a value or adding a value because neither follow the natural state of a dictionary mapping: a key is never value-less.\nTo change the value of a key we could simple reassign the value, or perform some operation on it. Use the notation following to reassign the value:\nmy_dict[key] = new_value\n\nThis is the same notation as retrieving the value, but now with an equal sign to specify reassignment.", "# Initializing a Dictionary\npothole_dict = {\"Morgan St.\":4, \"Tulsen Blvd.\":0, \"Michigan Ave.\":8}", "Now, Tulsen Blvd. has a recorded 9 potholes.", "# Changing a value of a key\npothole_dict[\"Tulsen Blvd.\"] = 9", "Let's change Morgan St. and Mich Ave, too.", "# Change Michigan and Morgan\npothole_dict[\"Michigan Ave.\"] = 30\npothole_dict[\"Morgan St.\"] = 2", "How about deleting a key? Let's imagine that somehow, Michigan Avenue gets all of its potholes fixed. We can now remove it from the dictionary. That means, we want to remove a key. To remove a key, we need to delete the key using the \"del\" keyword. The notation is as follows:\ndel my_dict[key]\n\nThis statement removes your key, and thus its values, from the dictionary.\nLet's remove Michigan Avenue using the notation above.", "# Initializing a Dictionary\npothole_dict = {\"Morgan St.\":4, \"Tulsen Blvd.\":0, \"Michigan Ave.\":8}\n# Delete the reference\ndel pothole_dict[\"Michigan Ave.\"]\n# Verify\nprint(pothole_dict)", "As we can see, Michigan Ave. is no longer in the dictionary.\nAdding a key: To add a key is to simply use the same notation as reassigning a key to a new value. If the dictionary detects that the key you put in does not exist, it will take that key and assign it to the value you assigned. 
The notation remains the same:\nmy_dict[key] = new_value\n\nAs long as key is not already in the dictionary, it will just create it and map it to the value you sent in anyways.", "# Initializing a Dictionary\npothole_dict = {\"Morgan St.\":4, \"Tulsen Blvd.\":0, \"Michigan Ave.\":8}\n# Add a new street (Key)\npothole_dict[\"Wallace St.\"] = 12\n# Verify we added Wallace St.\nprint(pothole_dict)", "We can also change a key. Imagine we accidentally mapped the wrong name to a set of values. How would we do this? While there isn't a single-liner notation for this, we can think methodically. To assign a new key, is to save the values of the old key into a new variable, deleting the old key afterwards (so that we don't lose the values to begin with). Here's the notation:\nmy_dict[new_key] = my_dict[old_key]\ndel my_dict[old_key]\n\nWe create a new key (a key that's not already in the dictionary), and then assign it to the value that is mapped to the old_key (my_dict[old_key]). Afterwards, we no longer need the mapping of the old_key to its values, so we use the del operator to delete it and its values. The new_key and its values remain unscathed.\nLet's change Morgan St. to Hollywood Blvd.", "# Initializing a Dictionary\npothole_dict = {\"Morgan St.\":4, \"Tulsen Blvd.\":0, \"Michigan Ave.\":8}\n# Store the values with new key\npothole_dict[\"Hollywood Blvd.\"] = pothole_dict[\"Morgan St.\"]\n# Delete the old key, and the values it was mapped to\ndel pothole_dict[\"Morgan St.\"]\n# Verify that Hollywood Blvd. replaced Morgan St.\nprint(pothole_dict)", "Instead of mutating by reassignment, we can also mutate by using methods or operators on dictionary values. Below, we will make a dictionary and show a variety of ways to put this into action.", "# Initialize a Dictionary that maps Strings to numbers\ntemp_dict = {\"Tulsa\":79, \"Anchorage\":10, \"Chicago\":65}\n# Add 20 to the current temperature, and then reassign\ntemp_dict[\"Anchorage\"] = temp_dict[\"Anchorage\"] + 20\n# Print result\nprint(temp_dict[\"Anchorage\"])", "Let's use a method now to change a dicitonary's string values using methods.", "# Initialize a Dictionary that maps integer keys to string values\npalindrome_dict = {3:\"wow\", 7:\"rotator\", 4:\"noon\"}\n# Capitalize all of the palidromes, and reassign each as we go\npalindrome_dict[3] = palindrome_dict[3].upper()\npalindrome_dict[7] = palindrome_dict[7].upper()\npalindrome_dict[4] = palindrome_dict[4].upper()\n\nprint(palindrome_dict)", "Dictionary Functions and Methods\nThere are built-in functions that can be used with dictionaries, and methods that belong inherently to the dictionary library class in Python. \nDictionary Functions\nWe will discuss the dict() function. The dict function is documented the following way:\ndict(**kwarg)\ndict(mappings, **kwarg)\ndict(iterable, **kwarg)\n\nThis can look daunting at first, but it is not too difficult to understand. In essense, the dict() function creates a dictionary. We can supply it with mappings, an iterable, keywords, or simply an argument that evaluates to either a mapping or iterable.\nLet's first look at recreating the following dictionary with the 3 types of notations:", "# \"Normal\" Initialization\nbball_wins = {\"Lakers\":20, \"Heat\":24, \"Bulls\":26}\nprint(bball_wins)", "First, we showcase the first notation. Here, the equal sign denotes a \"name=value\" relationship. To make it simple, this notation treats the left variable as the key, and the right as the value.", "# Using keywords... 
dict(**kwargs)\nbball_wins2 = dict(Lakers=20, Heat=24, Bulls=26)\nprint(bball_wins2)", "Now the second notation. Here, we've sent in a mapping as the argument (a dictionary itself) into the dict function. It may look odd at first, but it is another way to write the dictionary.", "# Using mapping... dict(mapping, **kwargs)\nbball_wins3 = dict({\"Lakers\":20, \"Heat\":24, \"Bulls\":26})\nprint(bball_wins3)", "The third notation has a few common variations used. To use it, we need an interable type. However, in the iterable, all objects must also be iterables that contain, at max, two items in themselves.\nLet's consider the list data structure, which is iterable.", "# Using iterable... dict(iterable, **kwargs)\nbball_wins4 = dict([[\"Lakers\",20], [\"Heat\",24], [\"Bulls\",26]])\nprint(bball_wins4)", "So, what happened? Well, the iterable we sent in happened to have 3 items in itself, which also all happened to be iterables themselves (lists). However, each internal iterable must have 2 objects in it. This is what Python uses to map the iterable:\ndict = {}\nfor k, v in iterable:\n d[k] = v\n\nWhile this line(s) of code may seem puzzling, we will discuss it again when we reach for-statements. However, it can still be understoof reasonably if you understand the idea of unpacking, which you can learn about right now if you open the \"Unpacking.ipynb\" iPython notebook (you should right now, before you move on).\nYou should know it because sometimes dict() will be used with the built-in function zip(). Zip() is fully covered in the built-in functions lecture (which you should also review now before moving on). Consider the code below:", "bball_wins5 = dict(zip([\"Lakers\", \"Heat\", \"Bulls\"], (20, 24, 26)))\nprint(bball_wins5)", "What is zip doing? Well, the documentation describes it in this way:\n\"Make an iterator that aggregates elements from each of the iterables.\"\n\nIt makes an iterator. Well, what does that mean? In short, it takes two iterable objects... these could be two lists, two tuples, a list and a tuple, etc. It takes an element from the first iterable, and the same positioned element in the second iterable, and throws them together in a tuple. Here's a diagram:\nzip([\"Lakers\", \"Heat\", \"Bulls\"], (20, 24, 26)]\n---&gt; First take first elements of both iterables, and put them together... (\"Lakers\", 20)\n ---&gt; Now return the tuple... the statement looks like dict([(\"Lakers\", 20)])\n ---&gt; They get mapped together using the dict(mapping, **kwarg) notation\n---&gt; Now take seocnd elements of both iterables, and tuple them together... (\"Heat\", 24)\n ---&gt; refer above\n --&gt; refer above\n---&gt; Lastly, put the last elements into a tuple... (\"Bulls\", 26)\n ---&gt; refer above\n --&gt; refer above\n\nBy the end, all of the lists that were created get added as mappings in the dictionary. The result is the dictionary we expected.\nNested Dictionaries\nWe can nest dictionaries. The access notation gets longer everytime we nest.\nBelow, we've made a dictionary that maps a first name to a dictionary of possible last names and their shareholdings in a company.", "my_dict = {\"John\":{\"Doe\":9, \"Hansen\":13}, \"Mariel\":{\"Stevenson\":11, \"Somers\":2}, \"Rocky\":9}", "How do we get Mariel Somer's shares in the company?\nmy_dict[key][subkey]....\n\nThe same notation access as a list, except now with keys.", "# Nested dictionary access\nprint(my_dict[\"Mariel\"][\"Somers\"])" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ShorensteinCenter/Shorenstein-Center-Notebooks
Shorenstein_Center_Notebook_2.ipynb
mit
[ "Table of Contents:\n0 import libraries\n1 Pull Data from the API\n2 Turn the Data into a pandas DataFrame\n3 Explore the Data\n- 3.1Basic Engagement by Individual User\n- 3.2 Last Active by Individual User\n- 3.3Two-Dimensional Distributions \n- 3.4Time on List for Unsubscribed Users\n0. Import libraries and set global variables <a class=\"anchor\" id=\"0-bullet\"></a>", "# set colors \nc1='#18a45f' # subs \nc2='#ec3038' # unsubs\nc3='#3286ec' # cleaned\nc4='#fecf5f' # pending\nc_ev= '#cccccc'\nc_nev='#000000' \nc12m = '#016d2c'#'12 months' \nc9m ='#31a354' #'9 months '\nc6m = '#74c476' #'6 months' \nc3m= '#bae4b3' #'3 months' \nc1m = '#edf8e9'#'1 month' \n\n# import libraries\n%matplotlib inline\nimport os\nfrom mailchimp3 import MailChimp # import your wrapper of choice for your email service provider - in this case mailchimp3; learn more about mailchimp3:https://github.com/charlesthk/python-mailchimp/blob/master/README.md\nimport pandas as pd # standard code for importing the pandas library and aliasing it as pd - if you want to learn all about pandas read 'Python for Data Analysis' version 2nd Edition by Wes McKinney, the creator of pandas\nimport time # allows you to time things \nimport matplotlib.pyplot as plt # allows you to plot data \nimport seaborn as sns # makes the plots look nicer\nimport numpy as np\nimport os\nimport glob", "1. Pull data from API", "# run this cell to initialize variables to pull data from the API of your email service provider, in this case MailChimp\n# replace the variable values in quotes in red caps with the unique values for your MailChimp account\n# if this request times out, pull via batch request -- which is slower but the recommended method by MailChimp\nLIST_NAME='YOURLISTNAME'\nNAME='YOURUSERNAME'# your MailChimp user name (used at login)\nSECRET_KEY='YOURAPIKEY' # your MailChimp API Key\nLIST_ID='YOURLISTID' # the ID for the individual list you want to look at\n# OUT_FILE='OUTFILENAME.csv'# if you want to export your data, you can speficy the outfile name and type, in this case CSV \n\n# make an output directory to explort the results and images from this notebook\noupt_dir='Shorenstein_Notebook_2_'+str(LIST_NAME)\ntry:\n os.mkdir(oupt_dir)\nexcept:\n \"marvelous!\"\n\n# initalizes client - creates a connection with the API; calling that connection client \nclient=MailChimp(NAME,SECRET_KEY)\n\nlists_endpoint=client.lists.get(LIST_ID)\n\n# read in data from Shorenstein Notebook 1, or pull it again \n# GET request pulling data from the MailChimp API - see documentation\n# you can also read in a pkl or other file type if you already have this information from running Notebook 1\nmember_data=client.lists.members.all(LIST_ID,get_all=True,\n fields='members.status,members.email_address,members.timestamp_opt,members.timestamp_signup,members.member_rating,members.stats,members.id, members.last_changed, members.action, members.timestamp, members.unsubscribe_reason')\n\n# this is a function that gets the last 50 actions for each user on your list\n# if it times out do a batch request\ndef last_user_actions(userid):\n \"\"\"user id is a string that is the md5 hash of the lower case email.\n this function gets the lasy 50 user actions and returns a dataframe of user actions\"\"\"\n member_act_api=client.lists.members.activity.all(list_id=LIST_ID, subscriber_hash=userid)\n member_act=pd.DataFrame(member_act_api['activity'])\n member_act['id']=userid\n return member_act\n\n# create member list of unique member ids in your member data 
frame\nmemb_list=list(pd.DataFrame(member_data['members'])['id'].unique())\n\nmember_actions=pd.concat(map(last_user_actions,memb_list))\n# parse the timestamp\nmember_actions['timestamp']=member_actions.timestamp.apply(pd.to_datetime)", "2. Turn the Data into a pandas Data Frame", "# turns the member_data returned by the API into a pandas data frame \nmember_data_frame=pd.DataFrame(member_data['members'])\n\n# unpack open rate and click rate from stats for each record, add the value to a new column named open and click respectively\n# create a column for those who never opened or clicked\n# false = number of subscribers who have ever opened\n# true = number of subscribers who have never opened \nmember_data_frame['open']=member_data_frame.stats.apply(lambda x: x['avg_open_rate'])\nmember_data_frame['click']=member_data_frame.stats.apply(lambda x: x['avg_click_rate'])\nmember_data_frame['never_opened']=member_data_frame.open.apply(lambda x:x==0)\nmember_data_frame['never_clicked']=member_data_frame.click.apply(lambda x:x==0)\n\n# preparing the data by calculating the month joined, and for each of those months, what % of those people are subscribed, unsubscribed, cleaned or pending. \n# NOTE: There is no output from this cell but you need to run it to see the graphs below. \nmember_data_frame['timestamp_opt']=member_data_frame.timestamp_opt.apply(pd.to_datetime)\nmember_data_frame['timestamp_signup']=member_data_frame.timestamp_signup.apply(pd.to_datetime)\n\n# records missing signup_time\nprint sum(member_data_frame.timestamp_signup.isnull())\n\n# make sure index is unique because we are about to do some manipulations based on it\nmember_data_frame.reset_index(drop=True,inplace=True)\n\n# index of members where we don't know when the signed up but we have opt in time\nguess_time_ix=member_data_frame[(member_data_frame.timestamp_signup.isnull())&\n (member_data_frame.timestamp_opt.isnull()!=True)].index\n\n# when we don't have signup time use opt in time\nmember_data_frame.loc[guess_time_ix,'timestamp_signup']=member_data_frame.loc[guess_time_ix,'timestamp_opt']\n\n# use integer division to break down people in to groups by the month they joined\nmember_data_frame['join_month']=member_data_frame.timestamp_signup.apply(lambda x:pd.to_datetime(2592000*int((x.value/1e9)/2592000),unit='s'))\n \n\n# represent the joined month as an interger of ms since epoch time 0\n# this format is not nice for people but very nice for computers\nmember_data_frame['jv']=member_data_frame.join_month.apply(lambda x: x.value)\nmember_data_frame['jv']=member_data_frame.join_month.apply(lambda x: x.value)\n\nmember_actions['timestamp']=member_actions.timestamp.apply(pd.to_datetime)\n\n\n# slice to only look at opens\nmemb_open=member_actions[member_actions.action=='open']\n\n# get last open\nlast_open=memb_open.groupby('id').timestamp.max().reset_index()\n# get oldest open\nold_open=memb_open.groupby('id').timestamp.min().reset_index()\n# clean name\nlast_open.columns=['id','last']\nold_open.columns=['id','old']\n# merge\nopen_time=pd.merge(last_open,old_open, how='left',on='id')\n# get ms time\nopen_time['lv']=open_time['last'].apply(lambda x: x.value)\nopen_time['ov']=open_time['old'].apply(lambda x: x.value)\n\n# add member open times to member data frame\nmember_data_frame=pd.merge(member_data_frame,open_time, how='left',on='id')\n\nmember_data_frame['latestv']=member_data_frame['last'].apply(lambda x: x.value)\n\nmember_last_not_null=member_data_frame[member_data_frame['last'].isnull()!=True]\n\nimport 
time\n\n# get todays date\na=pd.to_datetime(time.time(),unit='s')\n\n# slices dataframe into 12 months, 9 months, 6 months, 3 months and 1 month active subscribers\nmember_data_frame['12m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('365D')))\nmember_data_frame['9m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('274D')))\nmember_data_frame['6m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('183D')))\nmember_data_frame['3m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('91D')))\nmember_data_frame['1m']=member_data_frame['last'].apply(lambda x: x>(a-pd.to_timedelta('30D')))\n\n# get monthly activity as a fraction of all users who joined that month\nmonthly_act=member_data_frame.groupby('join_month').agg({'12m':sum,\n '9m':sum,\n '6m':sum,\n '3m':sum,\n '1m':sum,\n 'id':lambda x: x.size}).reset_index()\n\nmonthly_act.rename(columns={'id':'tot'},inplace=True)\nmonthly_act['1m_per']=monthly_act.apply(lambda x: x['1m']/float(x['tot']),axis=1)\nmonthly_act['3m_per']=monthly_act.apply(lambda x: x['3m']/float(x['tot']),axis=1)\nmonthly_act['6m_per']=monthly_act.apply(lambda x: x['6m']/float(x['tot']),axis=1)\nmonthly_act['9m_per']=monthly_act.apply(lambda x: x['9m']/float(x['tot']),axis=1)\nmonthly_act['12m_per']=monthly_act.apply(lambda x: x['12m']/float(x['tot']),axis=1)\nunsubscribe_times=member_actions[member_actions.action=='unsub'][['id','timestamp']].copy()\nunsubscribe_times.rename(columns={'timestamp':'unsub_time'},inplace=True)\nunsubscribe_times['unsubv']=unsubscribe_times.unsub_time.apply(lambda x: x.value/1e9)\nmember_data_frame=pd.merge(member_data_frame,unsubscribe_times, how='left',on='id')\nmember_data_frame['life']=member_data_frame.apply(lambda x:x['unsub_time']-x['timestamp_opt'],axis=1)", "3. Explore the data\n3.1 Basic Engagement by Individual User\nWe go from asking, 'What is the (unique) open rate for your list?'", "list_open_rate=lists_endpoint['stats']['open_rate']\nprint list_open_rate", "To asking 'What is the distribution of user unique open rates for current subscribers vs. unsubscribes?'", "# user unique open rate = unique open rate for an individual on your list\n# calculation: (number of unique opens by a user / number of campaigns received by that user) x 100 \nplt.figure(figsize=(20,10))\nplt.hist([member_data_frame[member_data_frame.status=='subscribed'].open,\n member_data_frame[member_data_frame.status=='unsubscribed'].open], stacked=True,\n normed=False,label=['Subscribed',\n 'Unsubscribed'],\n color=[c1,c2])\n\n\nplt.title('Distribution of User Unique Open Rate, Subscribed vs Unsubscibed',fontdict={'fontsize':25})\nplt.xlabel(\"User Unique Open Rate\",fontdict={'fontsize':20})\nplt.ylabel(\"Counts\",fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.legend(loc='best', prop={'size': 20})\nplt.savefig(oupt_dir+'/3.1_dist_open_sub_vs_unsub.png')\nplt.show()", "What is the distribution of user unique click rates for current subscribers vs. 
unsubscribed users?", "plt.figure(figsize=(20,10))\nplt.hist([member_data_frame[member_data_frame.status=='subscribed'].click,\n member_data_frame[member_data_frame.status=='unsubscribed'].click], stacked=True,\n normed=False,label=['Subscribed',\n 'Unsubscribed'],\n color=[c1,c2])\n\nplt.title('Distribution of User Unique Click Rate, Subscribed vs Unsubscirebed',fontdict={'fontsize':25})\nplt.xlabel(\"User Unique Click Rate\",fontdict={'fontsize':20})\nplt.ylabel(\"Counts\",fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.legend(loc='best', prop={'size': 20})\nplt.savefig(oupt_dir+'/3.1_dist_click_sub_vs_unsub.png')\nplt.show()\n\nplt.figure(figsize=(20,10))\n\nax=plt.hist([member_last_not_null[member_last_not_null.status=='subscribed'].latestv,\n member_last_not_null[member_last_not_null.status=='unsubscribed'].latestv], stacked=True,\n normed=False,label=['Subscribed',\n 'Unsubscribed'],\n color=[c1,c2])\n\nplt.xticks(ax[1],map(lambda x: pd.to_datetime(x).date(),ax[1]), rotation=35)\n\nplt.title('Distribution of Time of Last Opened Email, Subscribed vs Unsubscribed',fontdict={'fontsize':25})\n\nplt.ylabel(\"Counts\",fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.xlabel('Last Email Opened',fontdict={'fontsize':20})\nplt.legend(loc='best', prop={'size': 20})\nplt.savefig(oupt_dir+'/3.1_distlast_opened_sub_vs_unsub.png')\nplt.show()\n\nplt.figure(figsize=(20,10))\nplt.title('Distribution of Time of Last Opened Email, Subscribers',fontdict={'fontsize':25})\nmember_data_frame[member_data_frame.status=='subscribed']['last'].hist(label='SUBSCRIBED',color=c1)\nplt.xlabel('Date of Last Email Opened by Subscriber',fontdict={'fontsize':20})\nplt.ylabel(\"Subscriber Counts\",fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.savefig(oupt_dir+'/4_last_opens_sub.png')\n\nplt.figure(figsize=(20,10))\nplt.title('Distribution of Time of Last Opened Email, Unsubscribed',fontdict={'fontsize':25})\nmember_data_frame[member_data_frame.status=='unsubscribed']['last'].hist(label='UNSUBSCRIBED',color=c2)\nplt.xlabel('Date of Last Email Opened by Unsubscribed Users',fontdict={'fontsize':20})\nplt.ylabel(\"Unsubscriber Counts\",fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.savefig(oupt_dir+'/5_last_opens_unsub.png')", "3.2 Last Active by Individual User\nNumber of current subscribers who have opened an email in the last: 12 months, 9 months, 6 months, 3 months", "m12_act=member_data_frame['12m'].sum()\nprint m12_act\n\nm9_act=member_data_frame['9m'].sum()\nprint m9_act\n\nm6_act=member_data_frame['6m'].sum()\nprint m6_act\n\nm3_act=member_data_frame['3m'].sum()\nprint m3_act\n\nm1_act=member_data_frame['1m'].sum()\nprint m1_act\n\n# stacked histogram showing number of members active in the last 12 months, 9 months, 6 months, 3 months, 1 month \nplt.figure(figsize=(20,10))\nmember_data_frame[member_data_frame['12m']==True].join_month.hist(label='12 MONTHS',color=c12m)\nmember_data_frame[member_data_frame['9m']==True].join_month.hist(label='9 MONTHS',color=c9m)\nmember_data_frame[member_data_frame['6m']==True].join_month.hist(label='6 MONTHS',color=c6m)\nmember_data_frame[member_data_frame['3m']==True].join_month.hist(label='3 MONTHS',color=c3m)\nmember_data_frame[member_data_frame['1m']==True].join_month.hist(label='1 MONTH',color=c1m)\nplt.legend(loc='best', prop={'size': 20})\n\nplt.xlabel('Time Joined',fontsize=20)\nplt.ylabel('Counts',fontdict={'fontsize':20})\nplt.title('''Number 
of Members Active in the Last 12 Months, 9 Months, 6 Months, 3 Months 1 Month\n by Time Joined''',fontdict={'fontsize':25})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.savefig(oupt_dir+'/7_memb_active.png')\n\nfig, ax = plt.subplots(figsize=(20,10))\nax.stackplot(list(monthly_act.join_month),monthly_act['1m_per'],\n monthly_act['3m_per']-monthly_act['1m_per'],\n monthly_act['6m_per']-monthly_act['3m_per'],\n monthly_act['9m_per']-monthly_act['6m_per'],\n monthly_act['12m_per']-monthly_act['9m_per'],labels=['1 MONTH','3 MONTHS','6 MONTHS',\n '9 MONTHS',\n '12 MONTHS'],\n colors=[c1m,c3m,c6m,c9m,c12m])\nplt.legend(loc='upper left', prop={'size': 15})\nplt.xlabel('Time Joined',fontdict={'fontsize':20})\nplt.ylabel('Percent Active',fontdict={'fontsize':20})\nplt.title('''Percent Active 12 Months, 9 Months, 6 Months, 3 Months 1 Month\n by Time Joined - for Current Subscribers''',fontdict={'fontsize':25})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.savefig(oupt_dir+'/6_percent_memb_active.png')\nplt.show()\n", "3.3 Two-Dimensional Distributions\nUser unique open rate vs. when they joined, for subscribers", "# open rate vs. when someone joined, for subscribers only \n# x axis = month they joined in miliseconds since 1970\n# y axis is users unique open rate \n\nnxt=3 #number of x ticks\nxrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].jv.min(),\nmember_data_frame[member_data_frame.status=='subscribed'].jv.max(),num=nxt)\ng=sns.jointplot(member_data_frame[member_data_frame.status=='subscribed'].jv/1e9,\n member_data_frame[member_data_frame.status=='subscribed'].open,\n kind=\"kde\",size=10, space=0,ylim=(0,1))\nj=g.ax_joint\nmx=g.ax_marg_x\nmy=g.ax_marg_y\nmx.set_xticks(map(lambda x:x/1e9,xrt))\nmx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))\nmx.set_title(\"Open Rate vs Join Time for Subscribers\",fontdict={'fontsize':25})\nplt.rcParams[\"axes.labelsize\"] = 25\ng.set_axis_labels(xlabel='Time Joined',ylabel='User Unique Open Rate',fontdict={'fontsize':25})\ng.savefig(oupt_dir+'/8_open_vs_join_sub.png')\n", "User unique open rate vs. time joined, for unsubscribers", "# x axis = month they joined in miliseconds since 1970\n# y axis = user unique open rate\n\nnxt=3 #number of x ticks\nxrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(),\nmember_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt)\n\ng = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9,\n member_data_frame[member_data_frame.status=='unsubscribed'].open,\n ylim=(0,1),kind=\"kde\", size=10, space=0)\nj=g.ax_joint\nmx=g.ax_marg_x\nmy=g.ax_marg_y\nmx.set_xticks(map(lambda x:x/1e9,xrt))\nmx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))\nmx.set_title(\"Open Rate vs Join Time for Unsubscribed Users\",fontdict={'fontsize':25})\ng.set_axis_labels(xlabel='Time Joined',ylabel='User Unique Open Rate')\n\ng.savefig(oupt_dir+'/9_open_vs_join_unsub.png')", "Time of the last email opened vs. time joined for subscribers", "# x axis is distribution of when joined - farther to the right is joining more recently. \n# y axis is the oldest email record of opening in last 50 actions. 
\n# upper left is longtime engaged person, person who joined awhile ago who is still active \n\nnxt=3 #number of x ticks\nnyt=3 #number of x ticks\nxrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].jv.min(),\nmember_data_frame[member_data_frame.status=='subscribed'].jv.max(),num=nxt)\nyrt=np.linspace(member_data_frame[member_data_frame.status=='subscribed'].lv.min(),\nmember_data_frame[member_data_frame.status=='subscribed'].lv.max(),num=nxt)\n\ng = sns.jointplot(member_data_frame[member_data_frame.status=='subscribed'].jv/1e9,\n member_data_frame[member_data_frame.status=='subscribed'].lv/1e9, kind=\"kde\", size=10, space=0)\n\nj=g.ax_joint\nmx=g.ax_marg_x\nmy=g.ax_marg_y\nmx.set_xticks(map(lambda x:x/1e9,xrt))\nmx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))\nmy.set_yticks(map(lambda x:x/1e9,yrt))\nmy.set_yticklabels(map(lambda x: pd.to_datetime(x).date(),yrt))\nmx.set_title(\"Latest Opened Email vs Join Time for Subscribers\",fontdict={'fontsize':25})\ng.set_axis_labels(xlabel='Time Joined',ylabel='Latest Open')\ng.savefig(oupt_dir+'/9_last_open_vs_join_sub.png')\n", "Joined Date vs Unsubscribed Date", "# x axis is distribution of when joined - farther to the right is joining more recently. \n# y axis is the oldest email record of opening in last 50 actions. \n# upper left is longtime engaged person, person who joined awhile ago who is still active \n\nnxt=3 # number of x ticks\nnyt=3 # number of y ticks\nxrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(),\nmember_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt)\nyrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].unsubv.min(),\nmember_data_frame[member_data_frame.status=='unsubscribed'].unsubv.max(),num=nyt)\n\ng = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9,\n member_data_frame[member_data_frame.status=='unsubscribed'].unsubv/1e9, kind=\"kde\", size=10, space=0)\n\nj=g.ax_joint\nmx=g.ax_marg_x\nmy=g.ax_marg_y\nmx.set_xticks(map(lambda x:x/1e9,xrt))\nmx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))\nmy.set_yticks(map(lambda x:x/1e9,yrt))\nmy.set_yticklabels(map(lambda x: pd.to_datetime(x,unit='s').date(),yrt))\nmx.set_title(\"Unsubscribe Time vs Join Time\",fontdict={'fontsize':25})\ng.set_axis_labels(xlabel='Time Joined',ylabel='Time Unsubscribed')\ng.savefig(oupt_dir+'/10_unusub_vs_join_unsub.png')", "Time of the last email opened vs. 
time joined for unsubscribers", "# x axis is distribution of when joined - farther to the right is joining more recently\n# y axis is the oldest email record of opening in last 50 actions \n# upper left is longtime engaged user \n# upper right newer user recently opened\n\nnxt=3 #number of x ticks\nnyt=3 #number of x ticks\nxrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].jv.min(),\nmember_data_frame[member_data_frame.status=='unsubscribed'].jv.max(),num=nxt)\nyrt=np.linspace(member_data_frame[member_data_frame.status=='unsubscribed'].lv.min(),\nmember_data_frame[member_data_frame.status=='unsubscribed'].lv.max(),num=nxt)\n\ng = sns.jointplot(member_data_frame[member_data_frame.status=='unsubscribed'].jv/1e9,\n member_data_frame[member_data_frame.status=='unsubscribed'].lv/1e9, kind=\"kde\", size=10, space=0)\n\nj=g.ax_joint\nmx=g.ax_marg_x\nmy=g.ax_marg_y\nmx.set_xticks(map(lambda x:x/1e9,xrt))\nmx.set_xticklabels(map(lambda x: pd.to_datetime(x).date(),xrt))\nmy.set_yticks(map(lambda x:x/1e9,yrt))\nmy.set_yticklabels(map(lambda x: pd.to_datetime(x).date(),yrt))\nmx.set_title(\"Latest Opened Email vs Join Time for Unsubscribed Users\",fontdict={'fontsize':25})\ng.set_axis_labels(xlabel='Time Joined',ylabel='Latest Open')\ng.savefig(oupt_dir+'/12_last_open_vs_join_unsub.png')", "3.4 Time on List for Unsubscribers", "# shortest time on the list before unsubscribing\n# from the individual who unsubscribed the fastest\nmember_data_frame[member_data_frame.life.isnull()!=True].life.min()\n\n# longest time on the list before unsubscribing \n# from the individual who stayed on the longest before unsubscribing \nmember_data_frame[member_data_frame.life.isnull()!=True].life.max()\n\n# histogram of time range each unsubscriber was on the list before they unsubscribed \n# depending on how granular you want to go and the lifetime of your list you may want to update bin size\n\nplt.figure(figsize=(20,10))\nax=plt.hist([member_data_frame.dropna(subset=['life']).life.apply(lambda x:x.value)],label=['life time'],color=c2)\nplt.xticks(ax[1],map(lambda x: pd.to_timedelta(x).floor('D'),ax[1]), rotation=30)\nplt.title('Distribution of Lifetime of Unsubscribed Users',fontdict={'fontsize':25})\nplt.xlabel('Lifetime on List',fontdict={'fontsize':20})\nplt.ylabel('Counts',fontdict={'fontsize':20})\nplt.yticks(fontsize=15)\nplt.xticks(fontsize=15)\nplt.legend(loc='best')\nplt.savefig(oupt_dir+'/11_life_unsub.png')\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
linebp/pandas
doc/source/style.ipynb
bsd-3-clause
[ "Styling\nNew in version 0.17.1\n<span style=\"color: red\">Provisional: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>\nThis document is written as a Jupyter Notebook, and can be viewed or downloaded here.\nYou can apply conditional formatting, the visual styling of a DataFrame\ndepending on the data within, by using the DataFrame.style property.\nThis is a property that returns a Styler object, which has\nuseful methods for formatting and displaying DataFrames.\nThe styling is accomplished using CSS.\nYou write \"style functions\" that take scalars, DataFrames or Series, and return like-indexed DataFrames or Series with CSS \"attribute: value\" pairs for the values.\nThese functions can be incrementally passed to the Styler which collects the styles before rendering.\nBuilding Styles\nPass your style functions into one of the following methods:\n\nStyler.applymap: elementwise\nStyler.apply: column-/row-/table-wise\n\nBoth of those methods take a function (and some other keyword arguments) and applies your function to the DataFrame in a certain way.\nStyler.applymap works through the DataFrame elementwise.\nStyler.apply passes each column or row into your DataFrame one-at-a-time or the entire table at once, depending on the axis keyword argument.\nFor columnwise use axis=0, rowwise use axis=1, and for the entire table at once use axis=None.\nFor Styler.applymap your function should take a scalar and return a single string with the CSS attribute-value pair.\nFor Styler.apply your function should take a Series or DataFrame (depending on the axis parameter), and return a Series or DataFrame with an identical shape where each value is a string with a CSS attribute-value pair.\nLet's see some examples.", "import matplotlib.pyplot\n# We have this here to trigger matplotlib's font cache stuff.\n# This cell is hidden from the output\n\nimport pandas as pd\nimport numpy as np\n\nnp.random.seed(24)\ndf = pd.DataFrame({'A': np.linspace(1, 10, 10)})\ndf = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))],\n axis=1)\ndf.iloc[0, 2] = np.nan", "Here's a boring example of rendering a DataFrame, without any (visible) styles:", "df.style", "Note: The DataFrame.style attribute is a property that returns a Styler object. Styler has a _repr_html_ method defined on it so they are rendered automatically. If you want the actual HTML back for further processing or for writing to file call the .render() method which returns a string.\nThe above output looks very similar to the standard DataFrame HTML representation. But we've done some work behind the scenes to attach CSS classes to each cell. We can view these by calling the .render method.", "df.style.highlight_null().render().split('\\n')[:10]", "The row0_col2 is the identifier for that particular cell. We've also prepended each row/column identifier with a UUID unique to each DataFrame so that the style from one doesn't collide with the styling from another within the same notebook or page (you can set the uuid if you'd like to tie together the styling of two DataFrames).\nWhen writing style functions, you take care of producing the CSS attribute / value pairs you want. 
Pandas matches those up with the CSS classes that identify each cell.\nLet's write a simple style function that will color negative numbers red and positive numbers black.", "def color_negative_red(val):\n \"\"\"\n Takes a scalar and returns a string with\n the css property `'color: red'` for negative\n strings, black otherwise.\n \"\"\"\n color = 'red' if val < 0 else 'black'\n return 'color: %s' % color", "In this case, the cell's style depends only on its own value.\nThat means we should use the Styler.applymap method which works elementwise.", "s = df.style.applymap(color_negative_red)\ns", "Notice the similarity with the standard df.applymap, which operates on DataFrames elementwise. We want you to be able to reuse your existing knowledge of how to interact with DataFrames.\nNotice also that our function returned a string containing the CSS attribute and value, separated by a colon just like in a &lt;style&gt; tag. This will be a common theme.\nFinally, the input shapes matched. Styler.applymap calls the function on each scalar input, and the function returns a scalar output.\nNow suppose you wanted to highlight the maximum value in each column.\nWe can't use .applymap anymore since that operates elementwise.\nInstead, we'll turn to .apply which operates columnwise (or rowwise using the axis keyword). Later on we'll see that something like highlight_max is already defined on Styler so you wouldn't need to write this yourself.", "def highlight_max(s):\n '''\n highlight the maximum in a Series yellow.\n '''\n is_max = s == s.max()\n return ['background-color: yellow' if v else '' for v in is_max]\n\ndf.style.apply(highlight_max)", "In this case the input is a Series, one column at a time.\nNotice that the output shape of highlight_max matches the input shape, an array with len(s) items.\nWe encourage you to use method chains to build up a style piecewise, before finally rendering at the end of the chain.", "df.style.\\\n applymap(color_negative_red).\\\n apply(highlight_max)", "Above we used Styler.apply to pass in each column one at a time.\n<span style=\"background-color: #DEDEBE\">Debugging Tip: If you're having trouble writing your style function, try just passing it into <code style=\"background-color: #DEDEBE\">DataFrame.apply</code>. Internally, <code style=\"background-color: #DEDEBE\">Styler.apply</code> uses <code style=\"background-color: #DEDEBE\">DataFrame.apply</code> so the result should be the same.</span>\nWhat if you wanted to highlight just the maximum value in the entire table?\nUse .apply(function, axis=None) to indicate that your function wants the entire table, not one column or row at a time. Let's try that next.\nWe'll rewrite our highlight_max to handle either Series (from .apply(axis=0 or 1)) or DataFrames (from .apply(axis=None)).
We'll also allow the color to be adjustable, to demonstrate that .apply, and .applymap pass along keyword arguments.", "def highlight_max(data, color='yellow'):\n '''\n highlight the maximum in a Series or DataFrame\n '''\n attr = 'background-color: {}'.format(color)\n if data.ndim == 1: # Series from .apply(axis=0) or axis=1\n is_max = data == data.max()\n return [attr if v else '' for v in is_max]\n else: # from .apply(axis=None)\n is_max = data == data.max().max()\n return pd.DataFrame(np.where(is_max, attr, ''),\n index=data.index, columns=data.columns)", "When using Styler.apply(func, axis=None), the function must return a DataFrame with the same index and column labels.", "df.style.apply(highlight_max, color='darkorange', axis=None)", "Building Styles Summary\nStyle functions should return strings with one or more CSS attribute: value delimited by semicolons. Use\n\nStyler.applymap(func) for elementwise styles\nStyler.apply(func, axis=0) for columnwise styles\nStyler.apply(func, axis=1) for rowwise styles\nStyler.apply(func, axis=None) for tablewise styles\n\nAnd crucially the input and output shapes of func must match. If x is the input then func(x).shape == x.shape.\nFiner Control: Slicing\nBoth Styler.apply, and Styler.applymap accept a subset keyword.\nThis allows you to apply styles to specific rows or columns, without having to code that logic into your style function.\nThe value passed to subset behaves simlar to slicing a DataFrame.\n\nA scalar is treated as a column label\nA list (or series or numpy array)\nA tuple is treated as (row_indexer, column_indexer)\n\nConsider using pd.IndexSlice to construct the tuple for the last one.", "df.style.apply(highlight_max, subset=['B', 'C', 'D'])", "For row and column slicing, any valid indexer to .loc will work.", "df.style.applymap(color_negative_red,\n subset=pd.IndexSlice[2:5, ['B', 'D']])", "Only label-based slicing is supported right now, not positional.\nIf your style function uses a subset or axis keyword argument, consider wrapping your function in a functools.partial, partialing out that keyword.\npython\nmy_func2 = functools.partial(my_func, subset=42)\nFiner Control: Display Values\nWe distinguish the display value from the actual value in Styler.\nTo control the display value, the text is printed in each cell, use Styler.format. Cells can be formatted according to a format spec string or a callable that takes a single value and returns a string.", "df.style.format(\"{:.2%}\")", "Use a dictionary to format specific columns.", "df.style.format({'B': \"{:0<4.0f}\", 'D': '{:+.2f}'})", "Or pass in a callable (or dictionary of callables) for more flexible handling.", "df.style.format({\"B\": lambda x: \"±{:.2f}\".format(abs(x))})", "Builtin Styles\nFinally, we expect certain styling functions to be common enough that we've included a few \"built-in\" to the Styler, so you don't have to write them yourself.", "df.style.highlight_null(null_color='red')", "You can create \"heatmaps\" with the background_gradient method. These require matplotlib, and we'll use Seaborn to get a nice colormap.", "import seaborn as sns\n\ncm = sns.light_palette(\"green\", as_cmap=True)\n\ns = df.style.background_gradient(cmap=cm)\ns", "Styler.background_gradient takes the keyword arguments low and high. Roughly speaking these extend the range of your data by low and high percent so that when we convert the colors, the colormap's entire range isn't used. 
This is useful so that you can actually read the text still.", "# Uses the full color range\ndf.loc[:4].style.background_gradient(cmap='viridis')\n\n# Compress the color range\n(df.loc[:4]\n .style\n .background_gradient(cmap='viridis', low=.5, high=0)\n .highlight_null('red'))", "There's also .highlight_min and .highlight_max.", "df.style.highlight_max(axis=0)", "Use Styler.set_properties when the style doesn't actually depend on the values.", "df.style.set_properties(**{'background-color': 'black',\n 'color': 'lawngreen',\n 'border-color': 'white'})", "Bar charts\nYou can include \"bar charts\" in your DataFrame.", "df.style.bar(subset=['A', 'B'], color='#d65f5f')", "New in version 0.20.0 is the ability to customize further the bar chart: You can now have the df.style.bar be centered on zero or midpoint value (in addition to the already existing way of having the min value at the left side of the cell), and you can pass a list of [color_negative, color_positive].\nHere's how you can change the above with the new align='mid' option:", "df.style.bar(subset=['A', 'B'], align='mid', color=['#d65f5f', '#5fba7d'])", "The following example aims to give a highlight of the behavior of the new align options:", "import pandas as pd\nfrom IPython.display import HTML\n\n# Test series\ntest1 = pd.Series([-100,-60,-30,-20], name='All Negative')\ntest2 = pd.Series([10,20,50,100], name='All Positive')\ntest3 = pd.Series([-10,-5,0,90], name='Both Pos and Neg')\n\nhead = \"\"\"\n<table>\n <thead>\n <th>Align</th>\n <th>All Negative</th>\n <th>All Positive</th>\n <th>Both Neg and Pos</th>\n </thead>\n </tbody>\n\n\"\"\"\n\naligns = ['left','zero','mid']\nfor align in aligns:\n row = \"<tr><th>{}</th>\".format(align)\n for serie in [test1,test2,test3]:\n s = serie.copy()\n s.name=''\n row += \"<td>{}</td>\".format(s.to_frame().style.bar(align=align, \n color=['#d65f5f', '#5fba7d'], \n width=100).render()) #testn['width']\n row += '</tr>'\n head += row\n \nhead+= \"\"\"\n</tbody>\n</table>\"\"\"\n \n\nHTML(head)", "Sharing Styles\nSay you have a lovely style built up for a DataFrame, and now you want to apply the same style to a second DataFrame. Export the style with df1.style.export, and import it on the second DataFrame with df1.style.set", "df2 = -df\nstyle1 = df.style.applymap(color_negative_red)\nstyle1\n\nstyle2 = df2.style\nstyle2.use(style1.export())\nstyle2", "Notice that you're able share the styles even though they're data aware. The styles are re-evaluated on the new DataFrame they've been used upon.\nOther Options\nYou've seen a few methods for data-driven styling.\nStyler also provides a few other options for styles that don't depend on the data.\n\nprecision\ncaptions\ntable-wide styles\n\nEach of these can be specified in two ways:\n\nA keyword argument to Styler.__init__\nA call to one of the .set_ methods, e.g. .set_caption\n\nThe best method to use depends on the context. Use the Styler constructor when building many styled DataFrames that should all share the same properties. 
For interactive use, the.set_ methods are more convenient.\nPrecision\nYou can control the precision of floats using pandas' regular display.precision option.", "with pd.option_context('display.precision', 2):\n html = (df.style\n .applymap(color_negative_red)\n .apply(highlight_max))\nhtml", "Or through a set_precision method.", "df.style\\\n .applymap(color_negative_red)\\\n .apply(highlight_max)\\\n .set_precision(2)", "Setting the precision only affects the printed number; the full-precision values are always passed to your style functions. You can always use df.round(2).style if you'd prefer to round from the start.\nCaptions\nRegular table captions can be added in a few ways.", "df.style.set_caption('Colormaps, with a caption.')\\\n .background_gradient(cmap=cm)", "Table Styles\nThe next option you have are \"table styles\".\nThese are styles that apply to the table as a whole, but don't look at the data.\nCertain sytlings, including pseudo-selectors like :hover can only be used this way.", "from IPython.display import HTML\n\ndef hover(hover_color=\"#ffff99\"):\n return dict(selector=\"tr:hover\",\n props=[(\"background-color\", \"%s\" % hover_color)])\n\nstyles = [\n hover(),\n dict(selector=\"th\", props=[(\"font-size\", \"150%\"),\n (\"text-align\", \"center\")]),\n dict(selector=\"caption\", props=[(\"caption-side\", \"bottom\")])\n]\nhtml = (df.style.set_table_styles(styles)\n .set_caption(\"Hover to highlight.\"))\nhtml", "table_styles should be a list of dictionaries.\nEach dictionary should have the selector and props keys.\nThe value for selector should be a valid CSS selector.\nRecall that all the styles are already attached to an id, unique to\neach Styler. This selector is in addition to that id.\nThe value for props should be a list of tuples of ('attribute', 'value').\ntable_styles are extremely flexible, but not as fun to type out by hand.\nWe hope to collect some useful ones either in pandas, or preferable in a new package that builds on top the tools here.\nCSS Classes\nCertain CSS classes are attached to cells.\n\nIndex and Column names include index_name and level&lt;k&gt; where k is its level in a MultiIndex\nIndex label cells include\nrow_heading\nrow&lt;n&gt; where n is the numeric position of the row\nlevel&lt;k&gt; where k is the level in a MultiIndex\nColumn label cells include\ncol_heading\ncol&lt;n&gt; where n is the numeric position of the column\nlevel&lt;k&gt; where k is the level in a MultiIndex\nBlank cells include blank\nData cells include data\n\nLimitations\n\nDataFrame only (use Series.to_frame().style)\nThe index and columns must be unique\nNo large repr, and performance isn't great; this is intended for summary DataFrames\nYou can only style the values, not the index or columns\nYou can only apply styles, you can't insert new HTML entities\n\nSome of these will be addressed in the future.\nTerms\n\nStyle function: a function that's passed into Styler.apply or Styler.applymap and returns values like 'css attribute: value'\nBuiltin style functions: style functions that are methods on Styler\ntable style: a dictionary with the two keys selector and props. selector is the CSS selector that props will apply to. props is a list of (attribute, value) tuples. A list of table styles passed into Styler.\n\nFun stuff\nHere are a few interesting examples.\nStyler interacts pretty well with widgets. 
If you're viewing this online instead of running the notebook yourself, you're missing out on interactively adjusting the color palette.", "from IPython.html import widgets\n@widgets.interact\ndef f(h_neg=(0, 359, 1), h_pos=(0, 359), s=(0., 99.9), l=(0., 99.9)):\n return df.style.background_gradient(\n cmap=sns.palettes.diverging_palette(h_neg=h_neg, h_pos=h_pos, s=s, l=l,\n as_cmap=True)\n )\n\ndef magnify():\n return [dict(selector=\"th\",\n props=[(\"font-size\", \"4pt\")]),\n dict(selector=\"td\",\n props=[('padding', \"0em 0em\")]),\n dict(selector=\"th:hover\",\n props=[(\"font-size\", \"12pt\")]),\n dict(selector=\"tr:hover td:hover\",\n props=[('max-width', '200px'),\n ('font-size', '12pt')])\n]\n\nnp.random.seed(25)\ncmap = cmap=sns.diverging_palette(5, 250, as_cmap=True)\nbigdf = pd.DataFrame(np.random.randn(20, 25)).cumsum()\n\nbigdf.style.background_gradient(cmap, axis=1)\\\n .set_properties(**{'max-width': '80px', 'font-size': '1pt'})\\\n .set_caption(\"Hover to magnify\")\\\n .set_precision(2)\\\n .set_table_styles(magnify())", "Export to Excel\nNew in version 0.20.0\n<span style=\"color: red\">Experimental: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.</span>\nSome support is available for exporting styled DataFrames to Excel worksheets using the OpenPyXL engine. CSS2.2 properties handled include:\n\nbackground-color\nborder-style, border-width, border-color and their {top, right, bottom, left variants}\ncolor\nfont-family\nfont-style\nfont-weight\ntext-align\ntext-decoration\nvertical-align\nwhite-space: nowrap\n\nOnly CSS2 named colors and hex colors of the form #rgb or #rrggbb are currently supported.", "df.style.\\\n applymap(color_negative_red).\\\n apply(highlight_max).\\\n to_excel('styled.xlsx', engine='openpyxl')", "A screenshot of the output:\n\nExtensibility\nThe core of pandas is, and will remain, its \"high-performance, easy-to-use data structures\".\nWith that in mind, we hope that DataFrame.style accomplishes two goals\n\nProvide an API that is pleasing to use interactively and is \"good enough\" for many tasks\nProvide the foundations for dedicated libraries to build on\n\nIf you build a great library on top of this, let us know and we'll link to it.\nSubclassing\nIf the default template doesn't quite suit your needs, you can subclass Styler and extend or override the template.\nWe'll show an example of extending the default template to insert a custom header before each table.", "from jinja2 import Environment, ChoiceLoader, FileSystemLoader\nfrom IPython.display import HTML\nfrom pandas.io.formats.style import Styler\n\n%mkdir templates", "This next cell writes the custom template.\nWe extend the template html.tpl, which comes with pandas.", "%%file templates/myhtml.tpl\n{% extends \"html.tpl\" %}\n{% block table %}\n<h1>{{ table_title|default(\"My Table\") }}</h1>\n{{ super() }}\n{% endblock table %}", "Now that we've created a template, we need to set up a subclass of Styler that\nknows about it.", "class MyStyler(Styler):\n env = Environment(\n loader=ChoiceLoader([\n FileSystemLoader(\"templates\"), # contains ours\n Styler.loader, # the default\n ])\n )\n template = env.get_template(\"myhtml.tpl\")", "Notice that we include the original loader in our environment's loader.\nThat's because we extend the original template, so the Jinja environment needs\nto be able to find it.\nNow we can use that custom styler. 
Its __init__ takes a DataFrame.", "MyStyler(df)", "Our custom template accepts a table_title keyword. We can provide the value in the .render method.", "HTML(MyStyler(df).render(table_title=\"Extending Example\"))", "For convenience, we provide the Styler.from_custom_template method that does the same as the custom subclass.", "EasyStyler = Styler.from_custom_template(\"templates\", \"myhtml.tpl\")\nEasyStyler(df)", "Here's the template structure:", "with open(\"template_structure.html\") as f:\n structure = f.read()\n \nHTML(structure)", "See the template in the GitHub repo for more details.", "# Hack to get the same style in the notebook as the\n# main site. This is hidden in the docs.\nfrom IPython.display import HTML\nwith open(\"themes/nature_with_gtoc/static/nature.css_t\") as f:\n css = f.read()\n \nHTML('<style>{}</style>'.format(css))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
liganega/Gongsu-DataSci
notebooks/GongSu06_Errors_and_Exception_Handling.ipynb
gpl-3.0
[ "오류 및 예외 처리\n개요\n\n\n코딩할 때 발생할 수 있는 다양한 오류 살펴 보기\n\n\n오류 메시지 정보 확인 방법\n\n\n예외 처리, 즉 오류가 발생할 수 있는 예외적인 상황을 미리 고려하는 방법 소개\n\n\n오늘의 주요 예제\n아래 코드는 input() 함수를 이용하여 사용자로부터 숫자를 입력받아 \n그 숫자의 제곱을 리턴하는 내용을 담고 있다. \n코드를 실행하면 숫자를 입력하라는 창이 나오며, \n여기에 숫자 3을 입력하면 정상적으로 작동한다. \n하지만, 예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.", "input_number = input(\"A number please: \")\nnumber = int(input_number)\n\nprint(\"제곱의 결과는\", number**2, \"입니다.\")\n\ninput_number = input(\"A number please: \")\nnumber = int(input_number)\n\nprint(\"제곱의 결과는\", number**2, \"입니다.\")", "위 코드는 정수들의 제곱을 계산하는 프로그램이다. \n하지만 사용자가 경우에 따라 정수 이외의 값을 입력하면 시스템이 다운된다. \n이에 대한 해결책을 다루고자 한다.\n오류 예제\n먼저 오류의 다양한 예제를 살펴보자.\n다음 코드들은 모두 오류를 발생시킨다.\n예제: 0으로 나누기 오류\npython\n4.6/0\n오류 설명: 0으로 나눌 수 없다.\n예제: 문법 오류\npython\nsentence = 'I am a sentence\n오류 설명: 문자열 양 끝의 따옴표가 짝이 맞아야 한다.\n* 작은 따옴표끼리 또는 큰 따옴표끼리\n예제: 들여쓰기 문법 오류\npython\nfor i in range(3):\n j = i * 2\n print(i, j)\n오류 설명: 2번 줄과 3번 줄의 들여쓰기 정도가 동일해야 한다.\n예제: 자료형 오류\n아래 연산은 모두 오류를 발생시킨다.\n```python\nnew_string = 'cat' - 'dog'\nnew_string = 'cat' * 'dog'\nnew_string = 'cat' / 'dog'\nnew_string = 'cat' + 3\nnew_string = 'cat' - 3\nnew_string = 'cat' / 3\n```\n이유: 문자열 끼리의 합, 문자열과 정수의 곱셈만 정의되어 있다.\n예제: 이름 오류\npython\nprint(party)\n오류 설명: 미리 선언된 변수만 사용할 수 있다.\n예제: 인덱스 오류\npython\na_string = 'abcdefg'\na_string[12]\n오류 설명: 인덱스는 문자열의 길이보다 작은 수만 사용할 수 있다.\n예제: 값 오류\npython\nint(a_string)\n오류 설명: int() 함수는 정수로만 구성된 문자열만 처리할 수 있다.\n예제: 속성 오류\npython\nprint(a_string.len())\n오류 설명: 문자열 자료형에는 len() 메소드가 존재하지 않는다.\n주의: len() 이라는 함수는 문자열의 길이를 확인하지만 문자열 메소드는 아니다. \n이후에 다룰 리스트, 튜플 등에 대해서도 사용할 수 있는 함수이다.\n오류 확인\n앞서 언급한 코드들을 실행하면 오류가 발생하고 어디서 어떤 오류가 발생하였는가에 대한 정보를 \n파이썬 해석기가 바로 알려 준다. \n예제", "sentence = 'I am a sentence", "오류를 확인하는 메시지가 처음 볼 때는 매우 생소하다. \n위 오류 메시지를 간단하게 살펴보면 다음과 같다.\n\n\nFile \"&lt;ipython-input-3-a6097ed4dc2e&gt;\", line 1\n1번 줄에서 오류 발생\n\n\nsentence = 'I am a sentence \n ^\n오류 발생 위치 명시\n\n\nSyntaxError: EOL while scanning string literal\n오류 종류 표시: 문법 오류(SyntaxError)\n\n\n예제\n아래 예제는 0으로 나눌 때 발생하는 오류를 나타낸다.\n오류에 대한 정보를 잘 살펴보면서 어떤 내용을 담고 있는지 확인해 보아야 한다.", "a = 0\n4/a", "오류의 종류\n앞서 예제들을 통해 살펴 보았듯이 다양한 종류의 오류가 발생하며,\n코드가 길어지거나 복잡해지면 오류가 발생할 가능성은 점차 커진다.\n오류의 종류를 파악하면 어디서 왜 오류가 발생하였는지를 보다 쉽게 파악하여\n코드를 수정할 수 있게 된다.\n따라서 코드의 발생원인을 바로 알아낼 수 있어야 하며 이를 위해서는 오류 메시지를 \n제대로 확인할 수 있어야 한다. \n하지만 여기서는 언급된 예제 정도의 수준만 다루고 넘어간다.\n코딩을 하다 보면 어차피 다양한 오류와 마주치게 될 텐데 그때마다\n스스로 오류의 내용과 원인을 확인해 나가는 과정을 통해 \n보다 많은 경험을 쌓는 길 외에는 달리 방법이 없다.\n예외 처리\n코드에 문법 오류가 포함되어 있는 경우 아예 실행되지 않는다. \n그렇지 않은 경우에는 일단 실행이 되고 중간에 오류가 발생하면 바로 멈춰버린다.\n이렇게 중간에 오류가 발생할 수 있는 경우를 미리 생각하여 대비하는 과정을 \n예외 처리(exception handling)라고 부른다. \n예를 들어, 오류가 발생하더라도 오류발생 이전까지 생성된 정보들을 저장하거나, 오류발생 이유를 좀 더 자세히 다루거나, 아니면 오류발생에 대한 보다 자세한 정보를 사용자에게 알려주기 위해 예외 처리를 사용한다. \n사용방식은 다음과 같다.\npython\ntry:\n 코드1\nexcept:\n 코드2\n* 먼저 코드1 부분을 실행한다.\n* 코드1 부분이 실행되면서 오류가 발생하지 않으면 코드2 부분은 무시하고 다음으로 넘어간다.\n* 코드1 부분이 실행되면서 오류가 발생하면 더이상 진행하지 않고 바로 코드2 부분을 실행한다.\n예제\n아래 코드는 input() 함수를 이용하여 사용자로부터 숫자를 입력받아 그 숫자의 제곱을 리턴하고자 하는 내용을 담고 있으며, 코드에는 문법적 오류가 없다. \n그리고 코드를 실행하면 숫자를 입력하라는 창이 나온다. \n여기에 숫자 3을 입력하면 정상적으로 작동하지만 \n예를 들어, 3.2를 입력하면 값 오류(value error)가 발생한다.", "number_to_square = input(\"정수를 입력하세요: \")\n\n# number_to_square 변수의 자료형이 문자열(str)임에 주의하라. \n# 따라서 연산을 하고 싶으면 정수형(int)으로 형변환을 먼저 해야 한다. \n\nnumber = int(number_to_square)\n\nprint(\"제곱의 결과는\", number**2, \"입니다.\")\n\nnumber_to_square = input(\"정수를 입력하세요: \")\n\n# number_to_square 변수의 자료형이 문자열(str)임에 주의하라. \n# 따라서 연산을 하고 싶으면 정수형(int)으로 형변환을 먼저 해야 한다. 
\n\nnumber = int(number_to_square)\n\nprint(\"제곱의 결과는\", number**2, \"입니다.\")", "3.2를 입력했을 때 오류가 발생하는 이유는 int() 함수가 정수 모양의 문자열만 \n처리할 수 있기 때문이다. \n사실 정수들의 제곱을 계산하는 프로그램을 작성하였지만 경우에 따라 \n정수 이외의 값을 입력하는 경우가 발생하게 되며, 이런 경우를 대비해야 한다.\n즉, 오류가 발생할 것을 미리 예상해야 하며, 어떻게 대처해야 할지 준비해야 하는데, \ntry ... except ...문을 이용하여 예외를 처리하는 방식을 활용할 수 있다.", "number_to_square = input(\"정수를 입력하세요: \")\n\ntry: \n number = int(number_to_square)\n print(\"제곱의 결과는\", number ** 2, \"입니다.\")\nexcept:\n print(\"정수를 입력해야 합니다.\")\n", "올바른 값이 들어올 때까지 입력을 요구할 수 있다.", "while True:\n try:\n number = int(input(\"정수를 입력하세요: \"))\n print(\"제곱의 결과는\", number**2, \"입니다.\")\n break\n except:\n print(\"정수를 입력해야 합니다.\")", "오류 종류에 맞추어 다양한 대처를 하기 위해서는 오류의 종류를 명시하여 예외처리를 하면 된다.\n아래 코드는 입력 갑에 따라 다른 오류가 발생하고 그에 상응하는 방식으로 예외처리를 실행한다.\n값 오류(ValueError)의 경우", "number_to_square = input(\"정수를 입력하세요: \")\n\ntry: \n number = int(number_to_square)\n a = 5/(number - 4)\n print(\"결과는\", a, \"입니다.\")\nexcept ValueError:\n print(\"정수를 입력해야 합니다.\")\nexcept ZeroDivisionError:\n print(\"4는 빼고 하세요.\")", "0으로 나누기 오류(ZeroDivisionError)의 경우", "number_to_square = input(\"A number please: \")\n\ntry: \n number = int(number_to_square)\n a = 5/(number - 4)\n print(\"결과는\", a, \"입니다.\")\nexcept ValueError:\n print(\"정수를 입력해야 합니다.\")\nexcept ZeroDivisionError:\n print(\"4는 빼고 하세요.\")", "주의: 이와 같이 발생할 수 예외를 가능한 한 모두 염두하는 프로그램을 구현해야 하는 일은\n매우 어려운 일이다.\n앞서 보았듯이 오류의 종류를 정확히 알 필요가 발생한다. \n다음 예제에서 보듯이 오류의 종류를 틀리게 명시하면 예외 처리가 제대로 작동하지 않는다.", "try:\n a = 1/0\nexcept ValueError:\n print(\"This program stops here.\")", "raise 함수\n강제로 오류를 발생시키고자 하는 경우에 사용한다.\n예제\n어떤 함수를 정확히 정의하지 않은 상태에서 다른 중요한 일을 먼저 처리하고자 할 때 \n아래와 같이 함수를 선언하고 넘어갈 수 있다.\n그런데 아래 함수를 제대로 선언하지 않은 채로 다른 곳에서 호출하면 \n\"아직 정의되어 있지 않음\"\n\n이란 메시지로 정보를 알려주게 된다.", "def to_define():\n \"\"\"아주 복잡하지만 지금 당장 불필요\"\"\"\n raise NotImplementedError(\"아직 정의되어 있지 않음\")\n\nprint(to_define())", "주의: 오류 처리를 사용하지 않으면 오류 메시지가 보이지 않을 수도 있음에 주의해야 한다.", "def to_define1():\n \"\"\"아주 복잡하지만 지금 당장 불필요\"\"\"\n\nprint(to_define1())", "코드의 안전성 문제\n문법 오류 또는 실행 중에 오류가 발생하지 않는다 하더라도 코드의 안전성이 보장되지는 않는다. \n코드의 안정성이라 함은 코드를 실행할 때 기대하는 결과가 산출된다는 것을 보장한다는 의미이다. \n예제\n아래 코드는 숫자의 제곱을 리턴하는 square() 함수를 제대로 구현하지 못한 경우를 다룬다.", "def square(number):\n \"\"\"\n 정수를 인자로 입력 받아 제곱을 리턴한다.\n \"\"\"\n \n square_of_number = number * 2\n \n return square_of_number\n", "위 함수를 아래와 같이 호출하면 오류가 전혀 발생하지 않지만,\n엉뚱한 값을 리턴한다.", "square(3)", "주의: help() 를 이용하여 어떤 함수가 무슨 일을 하는지 내용을 확인할 수 있다.\n단, 함수를 정의할 때 함께 적힌 문서화 문자열(docstring) 내용이 확인된다.\n따라서, 함수를 정의할 때 문서화 문자열에 가능한 유효한 정보를 입력해 두어야 한다.", "help(square)", "오류에 대한 보다 자세한 정보\n파이썬에서 다루는 오류에 대한 보다 자세한 정보는 아래 사이트들에 상세하게 안내되어 있다.\n\n\n파이썬 기본 내장 오류 정보 문서:\n https://docs.python.org/3.4/library/exceptions.html\n\n\n파이썬 예외처리 정보 문서: \n https://docs.python.org/3.4/tutorial/errors.html\n\n\n연습문제\n연습\n아래 코드는 100을 입력한 값으로 나누는 함수이다.\n다만 0을 입력할 경우 0으로 나누기 오류(ZeroDivisionError)가 발생한다.", "number_to_square = input(\"100을 나눌 숫자를 입력하세요: \")\n\nnumber = int(number_to_square)\nprint(\"100을 입력한 값으로 나눈 결과는\", 100/number, \"입니다.\")", "아래 내용이 충족되도록 위 코드를 수정하라.\n\n나눗셈이 부동소수점으로 계산되도록 한다. \n0이 아닌 숫자가 입력될 경우 100을 그 숫자로 나눈다.\n0이 입력될 경우 0이 아닌 숫자를 입력하라고 전달한다. 
\n숫자가 아닌 값이 입력될 경우 숫자를 입력하라고 전달한다.\n\n견본답안:", "number_to_square = input(\"A number to divide 100: \")\n\ntry: \n number = float(number_to_square)\n print(\"100을 입력한 값으로 나눈 결과는\", 100/number, \"입니다.\")\nexcept ZeroDivisionError:\n raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.')\nexcept ValueError:\n raise ValueError('숫자를 입력하세요.') \n\nnumber_to_square = input(\"A number to divide 100: \")\n\ntry: \n number = float(number_to_square)\n print(\"100을 입력한 값으로 나눈 결과는\", 100/number, \"입니다.\")\nexcept ZeroDivisionError:\n raise ZeroDivisionError('0이 아닌 숫자를 입력하세요.')\nexcept ValueError:\n raise ValueError('숫자를 입력하세요.') ", "연습\n두 개의 정수 a와 b를 입력 받아 a/b를 계산하여 출력하는 코드를 작성하라.\n견본답안 1:", "while True:\n try:\n a, b = input(\"정수 두 개를 입력하세요. 쉼표를 사용해야 합니다.\\n\").split(',')\n a, b = int(a), int(b)\n print(\"계산의 결과는\", a/b, \"입니다.\")\n break\n except ValueError:\n print(\"정수 두 개를 쉼표로 구분해서 입력해야 합니다.\\n\")\n except ZeroDivisionError:\n print(\"둘째 수는 0이 아니어야 합니다.\\n\")", "견본답안 2: map 함수를 활용하여 a, b 각각에 int 함수를 자동으로 적용할 수 있다.\nmap 함수에 대한 설명은 여기를 참조하면 된다.", "while True:\n try:\n a, b = map(int, input(\"정수 두 개를 입력하세요. 쉼표를 사용해야 합니다.\\n\").split(','))\n print(\"계산의 결과는\", a/b, \"입니다.\")\n break\n except ValueError:\n print(\"정수 두 개를 쉼표로 구분해서 입력해야 합니다.\\n\")\n except ZeroDivisionError:\n print(\"둘째 수는 0이 아니어야 합니다.\\n\")", "연습\n키와 몸무게를 인자로 받아 체질량지수(BMI)를 구하는 코드를 작성하라.\n아래 사항들을 참고한다. \n$$BMI = \\frac{weight}{height^2}$$\n\n단위:\n몸무게(weight): kg\n키(height): m\n\n\nBMI 수치에 따른 체중 분류\nBMI &lt;= 18.5이면 저체중\n18.5 &lt; BMI &lt;= 23이면 정상\n23 &lt; BMI &lt;= 25이면 과체중\n25 &lt; BMI &lt;= 30이면 비만\nBMI &gt; 30이면 고도비만\n\n\n\n견본답안:", "while True:\n try:\n print(\"키와 몸무게를 입력하세요: \")\n a, b = map(float, input().split(\", \"))\n BMI = b/(a**2)\n if BMI <= 18.5:\n print(\"BMI는\", BMI, \"입니다. 저체중입니다.\")\n elif 18.5 < BMI <= 23:\n print(\"BMI는\", BMI, \"입니다. 정상 체중입니다.\")\n elif 23 < BMI <= 25:\n print(\"BMI는\", BMI, \"입니다. 비만입니다.\")\n elif 25 < BMI <= 30:\n print(\"BMI는\", BMI, \"입니다. 과체중입니다.\")\n else:\n print(\"BMI는\", BMI, \"입니다. 고도비만입니다.\")\n break\n except ValueError:\n print(\"숫자를 입력하세요.\")\n except ZeroDivisionError:\n print(\"0이 아닌 숫자를 입력하세요.\")" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ES-DOC/esdoc-jupyterhub
notebooks/cas/cmip6/models/sandbox-1/land.ipynb
gpl-3.0
[ "ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: CAS\nSource ID: SANDBOX-1\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:45\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook", "# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'cas', 'sandbox-1', 'land')", "Document Authors\nSet document authors", "# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Contributors\nSpecify document contributors", "# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)", "Document Publication\nSpecify document publication status", "# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)", "Document Table of Contents\n1. Key Properties\n2. Key Properties --&gt; Conservation Properties\n3. Key Properties --&gt; Timestepping Framework\n4. Key Properties --&gt; Software Properties\n5. Grid\n6. Grid --&gt; Horizontal\n7. Grid --&gt; Vertical\n8. Soil\n9. Soil --&gt; Soil Map\n10. Soil --&gt; Snow Free Albedo\n11. Soil --&gt; Hydrology\n12. Soil --&gt; Hydrology --&gt; Freezing\n13. Soil --&gt; Hydrology --&gt; Drainage\n14. Soil --&gt; Heat Treatment\n15. Snow\n16. Snow --&gt; Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --&gt; Vegetation\n21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\n22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\n23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\n24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\n25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\n26. Carbon Cycle --&gt; Litter\n27. Carbon Cycle --&gt; Soil\n28. Carbon Cycle --&gt; Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --&gt; Oceanic Discharge\n32. Lakes\n33. Lakes --&gt; Method\n34. Lakes --&gt; Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of land surface model.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.2. Model Name\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.3. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nFluxes exchanged with the atmopshere.", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.5. Atmospheric Coupling Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.6. Land Cover\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTypes of land cover defined in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "1.7. Land Cover Change\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "1.8. Tiling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2. Key Properties --&gt; Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.2. Water\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "2.3. Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "3. 
Key Properties --&gt; Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "3.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "3.3. Timestepping Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4. Key Properties --&gt; Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nLocation of code for this component.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.2. Code Version\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nCode version identifier.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "4.3. Code Languages\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nCode language(s).", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the grid in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6. Grid --&gt; Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "6.2. Matches Atmosphere Grid\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "7. Grid --&gt; Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "7.2. Total Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe total depth of the soil (in metres)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of soil in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.2. Heat Water Coupling\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the coupling between heat and water in the soil", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "8.3. Number Of Soil layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "8.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the soil scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9. Soil --&gt; Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of soil map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.2. Structure\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil structure map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.3. Texture\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil texture map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.4. 
Organic Matter\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil organic matter map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.5. Albedo\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil albedo map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.6. Water Table\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil water table map, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "9.7. Continuously Varying Soil Depth\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDoes the soil properties vary continuously with depth?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "9.8. Soil Depth\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil depth map", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "10. Soil --&gt; Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs snow free albedo prognostic?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "10.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.3. Direct Diffuse\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "10.4. Number Of Wavelength Bands\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11. Soil --&gt; Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of the soil hydrological model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river soil hydrology in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "11.5. Number Of Ground Water Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of soil layers that may contain water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "11.6. Lateral Connectivity\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe the lateral connectivity between tiles", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "11.7. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "12. Soil --&gt; Hydrology --&gt; Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow many soil layers may contain ground ice", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "12.2. 
Ice Storage Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the method of ice storage", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "12.3. Permafrost\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13. Soil --&gt; Hydrology --&gt; Drainage\nTODO\n13.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "13.2. Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDifferent types of runoff represented by the land surface model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14. Soil --&gt; Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nGeneral description of how heat treatment properties are defined", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.2. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of soil heat scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "14.3. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.4. Vertical Discretisation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the typical vertical discretisation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "14.5. Heat Storage\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify the method of heat storage", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "14.6. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe processes included in the treatment of soil heat", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of snow in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the snow tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "15.3. Number Of Snow Layers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "15.4. Density\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow density", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.5. Water Equivalent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the snow water equivalent", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.6. Heat Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of the heat content of snow", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.7. Temperature\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow temperature", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.8. Liquid Water Content\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescription of the treatment of snow liquid water", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.9. Snow Cover Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.10. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSnow related processes in the land surface scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "15.11. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the snow scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "16. Snow --&gt; Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "16.2. Functions\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\n*If prognostic, *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of vegetation in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.2. 
Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of vegetation scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "17.3. Dynamic Vegetation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs there dynamic evolution of vegetation?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.4. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vegetation tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.5. Vegetation Representation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nVegetation classification used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.6. Vegetation Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of vegetation types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.7. Biome Types\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nList of biome types in the classification, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.8. Vegetation Time Variation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.9. 
Vegetation Map\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.10. Interception\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs vegetation interception of rainwater represented?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "17.11. Phenology\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.12. Phenology Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.13. Leaf Area Index\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.14. Leaf Area Index Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of leaf area index", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.15. Biomass\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\n*Treatment of vegetation biomass *", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.16. Biomass Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.17. Biogeography\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTreatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! 
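\n# EXAMPLE (hypothetical, for illustration only): a completed cell might end with DOC.set_value(\"diagnostic\") 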
\nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.18. Biogeography Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.19. Stomatal Resistance\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "17.20. Stomatal Resistance Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "17.21. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the vegetation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of energy balance in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the energy balance tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "18.3. Number Of Surface Temperatures\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "18.4. Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation", "# PROPERTY ID - DO NOT EDIT ! 
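\n# EXAMPLE (hypothetical, for illustration only): a completed cell might end with DOC.set_value(\"beta\") 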
\nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "18.5. Processes\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of carbon cycle in land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "19.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of carbon cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "19.4. Anthropogenic Carbon\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "19.5. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the carbon scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20. Carbon Cycle --&gt; Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "20.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "20.3. 
Forest Stand Dynamics\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "21. Carbon Cycle --&gt; Vegetation --&gt; Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22. Carbon Cycle --&gt; Vegetation --&gt; Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for maintainence respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "22.2. Growth Respiration\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the general method used for growth respiration", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23. Carbon Cycle --&gt; Vegetation --&gt; Allocation\nTODO\n23.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the allocation scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "23.2. Allocation Bins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify distinct carbon bins used in allocation", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "23.3. Allocation Fractions\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe how the fractions of allocation are calculated", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "24. Carbon Cycle --&gt; Vegetation --&gt; Phenology\nTODO\n24.1. 
Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the phenology scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "25. Carbon Cycle --&gt; Vegetation --&gt; Mortality\nTODO\n25.1. Method\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the general principle behind the mortality scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26. Carbon Cycle --&gt; Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "26.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "26.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27. Carbon Cycle --&gt; Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "27.2. Carbon Pools\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the carbon pools used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "27.4. Method\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the general method used", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28. Carbon Cycle --&gt; Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs permafrost included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "28.2. Emitted Greenhouse Gases\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the GHGs emitted", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.3. Decomposition\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nList the decomposition methods used", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "28.4. Impact On Soil Properties\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the impact of permafrost on soil properties", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "29.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of nitrogen cycle in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "29.4. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of river routing in the land surface", "# PROPERTY ID - DO NOT EDIT ! 
\nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.2. Tiling\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the river routing, if any.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of river routing scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.4. Grid Inherited From Land Surface\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs the grid inherited from land surface?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.5. Grid Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.6. Number Of Reservoirs\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nEnter the number of reservoirs", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "30.7. Water Re Evaporation\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nTODO", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.8. Coupled To Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "30.9. Coupled To Land\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the coupling between land and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?", "# PROPERTY ID - DO NOT EDIT ! 
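\n# EXAMPLE (hypothetical, for illustration only): cardinality 0.N allows one or more values, e.g. DOC.set_value(\"water\") 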
\nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.11. Basin Flow Direction Map\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nWhat type of basin flow direction map is being used?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "30.12. Flooding\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the representation of flooding, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "30.13. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the river routing", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "31. River Routing --&gt; Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nSpecify how rivers are discharged to the ocean", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "31.2. Quantities Transported\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nOverview of lakes in the land surface", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.2. Coupling With Rivers\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nAre lakes coupled to the river routing model component?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "32.3. Time Step\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nTime step of lake scheme in seconds", "# PROPERTY ID - DO NOT EDIT ! 
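\n# EXAMPLE (hypothetical, for illustration only): an integer number of seconds, e.g. DOC.set_value(3600) for a one hour time step 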
\nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n", "32.4. Quantities Exchanged With Rivers\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "32.5. Vertical Grid\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the vertical grid of lakes", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "32.6. Prognostic Variables\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nList the prognostic variables of the lake scheme", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "33. Lakes --&gt; Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs lake ice included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.2. Albedo\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nDescribe the treatment of lake albedo", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.3. Dynamics\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n", "33.4. Dynamic Lake Extent\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nIs a dynamic lake extent scheme included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "33.5. Endorheic Basins\nIs Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1\nBasins not flowing to ocean included?", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n", "34. Lakes --&gt; Wetlands\nTODO\n34.1. 
Description\nIs Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1\nDescribe the treatment of wetlands, if any", "# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n", "©2017 ES-DOC" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
LSST-Supernova-Workshops/Pittsburgh-2016
Tutorials/Cadence/Writing A New Metric.ipynb
mit
[ "This notebook assumes sims_maf version >= 1.1 and that you have 'setup sims_maf' in your shell.", "import numpy as np \nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport lsst.sims.maf.db as db\nimport lsst.sims.maf.metrics as metrics\nimport lsst.sims.maf.slicers as slicers\nimport lsst.sims.maf.metricBundles as metricBundles", "Writing a new metric\nMAF provides many 'stock' metrics, and there are more in the sims_maf_contrib library. \nBut at some point, you're likely to want to write your own metric. We have tried to make the process for this simple. \nHere is the code for a very simple (existing) metric which calculates the coadded depth of a set of visits.", "from lsst.sims.maf.metrics import BaseMetric\n\nclass Coaddm5Metric(BaseMetric):\n \"\"\"Calculate the coadded m5 value at this gridpoint.\"\"\"\n def __init__(self, m5Col = 'fiveSigmaDepth', metricName='CoaddM5', **kwargs):\n \"\"\"Instantiate metric.\n m5Col = the column name of the individual visit m5 data.\"\"\"\n self.m5col = m5Col\n super(Coaddm5Metric, self).__init__(col=m5Col, metricName=metricName, **kwargs)\n def run(self, dataSlice, slicePoint=None):\n return 1.25 * np.log10(np.sum(10.**(.8*dataSlice[self.m5col])))", "To understand this, you need to know a little bit about \"classes\" and \"inheritance\". \nBasically, a \"class\" is a Python object which can hold data and methods (like functions) to manipulate that data. The idea is that a class can be a self-encapsulated thing -- the class knows what its data should look like, and then the methods know how to work with that data. \n\"Inheritance\" means that you can create a child version of another class that inherits all of its features - and possibly adds new data or methods or replaces data or methods of the parent. \nThe point here is that the \"framework\" part of MAF is encapsulated in the BaseMetric. By inheriting from the BaseMetric (that's the bit where we said class Coaddm5Metric(BaseMetric) above), we get the column tracking so that MAF knows what columns to query the database for and we get added to the registry of existing metrics. \nBy following the same API (the 'signature' of the methods), we can write a new metric that will plug into the MAF framework seamlessly. This means you write an __init__ method that includes (self, **kwargs) and whatever else your particular metric needs. And then you write a run method that is called as run(self, dataSlice, slicePoint=None). \ndataSlice refers to the visits handed to the metric by the slicer. slicePoint refers to the metadata about the slice (such as its ra/dec in the case of a HealpixSlicer, or its bin information in the case of a OneDSlicer).\nLet's write another example, this time to calculate the Percentile value of a given column in a set of visits.", "# Import BaseMetric, or have it available to inherit from\nfrom lsst.sims.maf.metrics import BaseMetric\n\n# Define our class, inheriting from BaseMetric\nclass OurPercentileMetric(BaseMetric):\n # Add a doc string to describe the metric.\n \"\"\"\n Calculate the percentile value of a data column\n \"\"\"\n # Add our \"__init__\" method to instantiate the class.\n # We will make the 'percentile' value an additional value to be set by the user.\n # **kwargs allows additional values to be passed to the BaseMetric that you \n # may not have been using here and don't want to bother with. 
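\n # (The 'col' argument handed on to BaseMetric below is how MAF learns which database columns this metric needs.) 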
\n def __init__(self, colname, percentile, **kwargs):\n # Set the values we want to keep for our class.\n self.colname = colname\n self.percentile = percentile\n # Now we have to call the BaseMetric's __init__ method, to get the \"framework\" part set up.\n # We currently do this using 'super', which just calls BaseMetric's method.\n # The call to super just basically looks like this .. you must pass the columns you need, and the kwargs.\n super(OurPercentileMetric, self).__init__(col=colname, **kwargs)\n \n # Now write out \"run\" method, the part that does the metric calculation.\n def run(self, dataSlice, slicePoint=None):\n # for this calculation, I'll just call numpy's percentile function.\n result = np.percentile(dataSlice[self.colname], self.percentile)\n return result", "So then how do we use this new metric? Just as before, although you may have to adjust the namespace.", "metric = OurPercentileMetric('airmass', 20)\nslicer = slicers.HealpixSlicer(nside=64)\nsqlconstraint = 'filter = \"r\" and night<365'\nmyBundle = metricBundles.MetricBundle(metric, slicer, sqlconstraint)\n\nopsdb = db.OpsimDatabase('minion_1016_sqlite.db')\nbgroup = metricBundles.MetricBundleGroup({0: myBundle}, opsdb, outDir='newmetric_test', resultsDb=None)\nbgroup.runAll()\n\nmyBundle.setPlotDict({'colorMin':1.0, 'colorMax':1.8})\nbgroup.plotAll(closefigs=False)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
marko911/deep-learning
sentiment-rnn/Sentiment_RNN.ipynb
mit
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.\nThe architecture for this network is shown below.\n<img src=\"assets/network_diagram.png\" width=400px>\nHere, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.\nFrom the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.\nWe don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.", "import numpy as np\nimport tensorflow as tf\n\nwith open('../sentiment-network/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('../sentiment-network/labels.txt', 'r') as f:\n labels = f.read()\n\nreviews[:2000]", "Data preprocessing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. Then I can combined all the reviews back together into one big string.\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\nall_text = ''.join([c for c in reviews if c not in punctuation])\nreviews = all_text.split('\\n')\n\nall_text = ' '.join(reviews)\nwords = all_text.split()\n\nall_text[:2000]\n\nwords[:100]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "# Create your dictionary that maps vocab words to integers here\nvocab_to_int = \n\n# Convert the reviews to integers, same shape as reviews list, but with integers\nreviews_ints = ", "Encoding the labels\nOur labels are \"positive\" or \"negative\". 
To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively.", "# Convert labels to 1s and 0s for 'positive' and 'negative'\nlabels = ", "If you built labels correctly, you should see the next output.", "from collections import Counter\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.\n\nExercise: First, remove the review with zero length from the reviews_ints list.", "# Filter out that review with 0 length\nreviews_ints = ", "Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.\n\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "seq_len = 200\nfeatures = ", "If you build features correctly, it should look like that cell output below.", "features[:10,:100]", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.", "split_frac = 0.8\n\ntrain_x, val_x = \ntrain_y, val_y = \n\nval_x, test_x = \nval_y, test_y = \n\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\nBuild the graph\nHere, we'll build the graph. First up, defining the hyperparameters.\n\nlstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nlstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.\nbatch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.\nlearning_rate: Learning rate", "lstm_size = 256\nlstm_layers = 1\nbatch_size = 500\nlearning_rate = 0.001", "For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. 
We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.\n\nExercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.", "n_words = len(vocab)\n\n# Create the graph object\ngraph = tf.Graph()\n# Add nodes to the graph\nwith graph.as_default():\n inputs_ = \n labels_ = \n keep_prob = ", "Embedding\nNow we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.\n\nExercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].", "# Size of the embedding vectors (number of units in the embedding layer)\nembed_size = 300 \n\nwith graph.as_default():\n embedding = \n embed = ", "LSTM cell\n<img src=\"assets/network_diagram.png\" width=400px>\nNext, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.\nTo create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:\ntf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=&lt;function tanh at 0x109f1ef28&gt;)\nyou can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like \nlstm = tf.contrib.rnn.BasicLSTMCell(num_units)\nto create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like\ndrop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\nMost of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:\ncell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)\nHere, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.\nSo the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. 
But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.\n\nExercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add dropout to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.\n\nHere is a tutorial on building RNNs that will help you out.", "with graph.as_default():\n # Your basic LSTM cell\n lstm = \n \n # Add dropout to the cell\n drop = \n \n # Stack up multiple LSTM layers, for deep learning\n cell = \n \n # Getting an initial state of all zeros\n initial_state = cell.zero_state(batch_size, tf.float32)", "RNN forward pass\n<img src=\"assets/network_diagram.png\" width=400px>\nNow we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.\noutputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)\nAbove I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.\n\nExercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.", "with graph.as_default():\n outputs, final_state = ", "Output\nWe only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.", "with graph.as_default():\n predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)\n cost = tf.losses.mean_squared_error(labels_, predictions)\n \n optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)", "Validation accuracy\nHere we can add a few nodes to calculate the accuracy which we'll use in the validation pass.", "with graph.as_default():\n correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)\n accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))", "Batching\nThis is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].", "def get_batches(x, y, batch_size=100):\n \n n_batches = len(x)//batch_size\n x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]\n for ii in range(0, len(x), batch_size):\n yield x[ii:ii+batch_size], y[ii:ii+batch_size]", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. 
Before you run this, make sure the checkpoints directory exists.", "epochs = 10\n\nwith graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=graph) as sess:\n sess.run(tf.global_variables_initializer())\n iteration = 1\n for e in range(epochs):\n state = sess.run(initial_state)\n \n for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 0.5,\n initial_state: state}\n loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)\n \n if iteration%5==0:\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Train loss: {:.3f}\".format(loss))\n\n if iteration%25==0:\n val_acc = []\n val_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for x, y in get_batches(val_x, val_y, batch_size):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: val_state}\n batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)\n val_acc.append(batch_acc)\n print(\"Val acc: {:.3f}\".format(np.mean(val_acc)))\n iteration +=1\n saver.save(sess, \"checkpoints/sentiment.ckpt\")", "Testing", "test_acc = []\nwith tf.Session(graph=graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n test_state = sess.run(cell.zero_state(batch_size, tf.float32))\n for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):\n feed = {inputs_: x,\n labels_: y[:, None],\n keep_prob: 1,\n initial_state: test_state}\n batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)\n test_acc.append(batch_acc)\n print(\"Test accuracy: {:.3f}\".format(np.mean(test_acc)))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
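The two exercise cells in the notebook above are left blank (the `lstm`, `drop`, `cell`, and `dynamic_rnn` lines). A minimal sketch of one way they could be completed, assuming the `lstm_size`, `lstm_layers`, `keep_prob`, `batch_size`, and `embed` tensors defined in the earlier cells of that notebook:

```python
with graph.as_default():
    def build_cell():
        # One LSTM cell with dropout applied to its outputs
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)

    # Stack several LSTM layers into a single multi-layer cell
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(lstm_layers)])

    # Initial state of all zeros
    initial_state = cell.zero_state(batch_size, tf.float32)

    # Forward pass: the inputs are the embedded word vectors, not the raw word ids
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
```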
gvasold/gdp17
module/modules.ipynb
apache-2.0
[ "Module\nWas sind Module?\nPython organisiert Sourcecode in Modulen. Ein Modul ist nichts anderes,\nals eine Datei mit der Extension .py.\nModule dienen dazu,\n\ngroße Projekte in mehrere kleinere überschaubare (und logisch\n zusammenhängende) Sourcecode-Dateien zu organisieren\nCode besser wiederverwendbar zu machen, da Module selektiv in neuen Code\n importiert werden können\n\nKleine Skripte kann man ohne weiteres in eine einzige Datei packen.\nSobald ein Programm aber mehr als ein paar hundert Zeilen hat, empfiehlt es\nsich, den Code auf mehrere Module aufzuteilen, auch weil der Code dadurch\neinfacher zu pflegen und zu testen ist.\nModule sind überall\nNicht nur der eigene Programmcode lässt sich in Modulen organisieren, auch\nfremder Code wird in Modulen verteilt:\n\nDie Standard-Library von Python ist in Modulen (und Paketen) organisiert\nThird-Party-Libraries sind ebenfalls als Module verfügbar\n\nModule müssen importiert werden\nDamit ein Modul im eigenen Programm verwendet werden kann, muss das Modul zuerst importiert werden:\n~~~\nimport <Modulname>\n~~~\nDanach steht das Modul mit der dort definierten Funktionalität zur Verfügung. Das in der mit Python mitinstallierten Standardlibrary vorhandene Modul random stellt eine Reihe von Zufalls-Funktionen bereit. Damit wird diese verwenden können, müssen wir das Modul zuerst importieren:", "import random", "Danach können wir uns beispielsweise eine Zufallszahl generieren lassen:", "random.randint(0, 1000000)", "Module und Namensräume\nIm letzten Beispiel haben wir nicht irgendeine Funktion mit dem Namen randint verwendet, sondern genau die, die vom Modul random bereit gestellt wird.\nModule strukturieren daher nicht nur den Sourcecode, sondern bilden auch Namensräume,\nwodurch verhindert wird, dass sich beispielsweise zwei in unterschiedlichen\nModulen definierte, gleichnamige Funktionen gegenseitig überlagern. Um das zu verdeutlichen habe ich im Verzeichnis, in dem dieses Notebook liegt, zwei minimale Module angelegt:\n\na.py\nb.py\n\nIn beiden Modulen gibt es eine Funktion serialize(), die wir unter Nutzung der Namensräume beide in unserem Programm nutzen können:", "import a\nimport b\n\na.serialize()\nb.serialize()", "Der Namensraum ist einfach der Name der Datei, die das Modul repräsentiert.\nIm obigen Beispiel gibt es also ein Modul (d.h. eine Datei) a.py und ein\nzweites Modul b.py.\nModule importieren\nWir haben bereits gehört, dass wir Module importieren müssen, ehe wir sie verwenden können. Dazu gibt es verschiedene Möglichkeiten. Die einfachste haben wir bereits kennengelernt: Wir importieren das gesamte Modul unter Beibehaltung des Modulnamens. Als Beispiel verwenden wir wieder ein Modul aus der Standard-Library: sys stellt Informationen zur aktuellen Systemumgebung bereit:", "import sys\nsys.version\n\nsys.platform", "Nur einen Teil eines Moduls importieren\nManchmal sind wir nur an einem kleinen Teil eines Moduls interessiert, zum Beispiel wenn wir nur die aktuelle verwendetet Plattform herausfinden wollen:", "from sys import platform\nplatform", "Achtung: Hier haben wir etwas aus einem Modul in den globalen (bzw. unseren eigenen) Namensraum importiert. 
Wir ersparen uns dadurch zwar Tipparbeit, handeln uns aber auch einige Probleme ein, weil wir den eigenen Namensraum verschmutzt haben:\n\nWir haben die Nachvollziehbarkeit unseres Codes erschwert, weil beim Lesen des Codes erst herausgefunden werden muss, was es mit diesem platform auf sich hat - sys.platform ist hier viel klarer.\nWir können uns Seiteneffekte einhandeln, wenn wir uns u.U. unbeabsichtigt eigene Variablen überlagern.\n\nHier ein Beispiel:", "version = '0.9 beta'\nprint('Sie verwenden MeinProgramm in Version {}'.format(version))\n\n# Imports sollten immer ganz oben passieren, man kann sie aber überall verwenden\nfrom sys import version \nprint('Sie verwenden MeinProgramm in Version {}'.format(version))", "Noch schlimmer ist diese Version:", "version = '0.9 beta'\n\n# Imports sollten immer ganz oben passieren, man kann sie aber überall verwenden\nfrom sys import *\nprint('Sie verwenden MeinProgramm in Version {}'.format(version))", "Hier haben wir alles aus dem Modul sys in unseren eigenen Namespace importiert. Wir waren uns möglicherweise gar nicht bewusst, dass es in sys eine Variable version gibt, die unsere eigene Variable überlagert. Schwer zu findende Fehler sind so vorprogrammiert!\nKleiner Exkurs: wenn Sie feststellen wollen, was in einem Modul vorhanden (und was wir beim letzten Beispiel allen in unseren Namensraum importiert haben) ist, können Sie die dir() Funktion verwenden:", "import sys\ndir(sys)", "Namensräume umdefinieren\nManche Namensräume sind sehr lange und es ist daher mühsam, diese immer einzutippen. Deshalb besteht die Möglichkeit, einem Modul einen eigenen Namen zuzuweisen. pyplot ist ein Modul des mächtigen matplotlib Pakets. (Achtung: Dieses Paket ist nicht in der Standard-Library und muss möglicherweise erst nachinstalliert werden).\nHier zuerst die umständliche Variante:", "import matplotlib.pyplot\nmatplotlib.pyplot.plot([1, 2, 3, 4, 4, 3, 5, 6, 6, 3, 3, 4])\nmatplotlib.pyplot.show()", "Normalerweise schreibt man das allerdings so, um sich Tipparbeit zu sparen:", "import matplotlib.pyplot as plt\nplt.plot([1, 2, 3, 4, 4, 3, 5, 6, 6, 3, 3, 4])\nplt.show()", "Module und Docstrings\nSo wie eine Funktion durch einen Docstring beschrieben werden kann, funktioniert das auch für Module. Dazu muss man einfach direkt am Anfang der Modul-Datei den entsprechenden Docstring schreiben. Im Verzeichnis dieses Notebooks finden sie eine Datei (d.h. ein Modul) mystring.py. Da dieses Modul über einen DocString verfügt, können diesen auslesen:", "import mystring\nhelp(mystring)\n\nmystring.reverse('abc')\n\nmystring.distinct_len('Mississippi')", "Wie werden Module gefunden?\nModule können an unterschiedlichen Stellen im Filesystem liegen. Hier wird kurz beschrieben, wo und wie Python nach Modulen sucht. Dabei kommt eine bestimmte Reihenfolge zum Einsatz. Sobald das (oder zumindest ein gleichnamiges) Modul gefunden wird, wird dieses verwendet. Diese Reihenfolge ist:\n\nDas aktuelle Verzeichnis.\nAlle Verzeichnisse, die in der Umgebungsvariable PYTHONPATH definiert sind.\nAbhängig von der aufgerufenen Python-Version in bestimmten Verzeichnissen, in denen beispielweise die Standard Library liegt.\n\nDas sys-Modul weiß, wo gesucht wird:", "sys.path", "sys.path ist übrigens eine normale Liste, die z.B. 
erweitert werden kann (was aber, wenn Sie Ihr Programm weitergeben wollen, keine besonders gute Idee ist).\nModule und Bytecode\nWenn ein Modul zum ersten Mal von Python geladen wird, übersetzt es den Code in Bytecode und speichert diesen in eine eigene Datei, damit das Modul bei zukünftigen Aufrufen schneller geladen werden kann. Diese Bytecode-Dateien haben die Dateinamenerweiterung .pyc und liegen unter Python3 im Verzeichnis __pycache__. Sowohl dieses Verzeichnis als auch einzelne pyc-Datei können gefahrlos gelöscht werden, weil Sie bei Bedarf automatisch neu erzeugt werden.\nPakete\nWenn man größere Projekte tiefer organisieren will, kann man mehrere Module\n(und sogar Subpakete) zu einem Paket zusammenfassen.\nEin Paket ist nichts anderes, als ein Verzeichnis, das Module enthält. Zu einem\nPaket wird ein solches Verzeichnis allerdings erst, wenn im Verzeichnis\neine Datei __init__.py existiert. Diese Datei kann leer sein.\nEin Modul in einem Paket wird durch den Punkt-Operator getrennt angesprochen:\n~~~\n\n\n\nimport os\nif os.path.exists('daten.csv'):\n...\n~~~\n\n\n\nVirtualenv und Environments\nWenn man parallel an mehreren Projekten arbeitet oder fremde Python-Programme verwendet, kann es passieren, dass diese unterschiedle Bibliotheken benötigen, vielleicht sogar unterschiedliche Versionen derselben Bibliothek. Es ist daher sehr empfehlenswert, unterschiedliche, voneinander isolierte Python-Umgebungen zu verwenden. Diese sind allgemein für Python virtualenv und für Conda bzw. Anaconda environments.\nVirtualenv\nDiese virtuellen Umgebungen isolieren Python weitgehend vom systemweit\ninstallierten Python. Ein Virtualenv verwendet zwar einen der systemweit\ninstallierten Python-Interpreter, alle zusätzlich installierten Pakete und\nModule sind jedoch spezifisch für diese eine Umgebung.\nDadurch ist es möglich, bestimmte Module nur für ein bestimmtes Projekt\nzu installieren oder ein Modul in unterschiedlichen Versionen für verschiedene\nProjekte zu verwenden.\nNicht zuletzt sind Virualenvs praktisch, um erste Experimente mit einem\nZusatzmodul anzustellen ohne deshalb gleich das Modul systemweit installieren\nzu müssen.\nUm die Verwaltung von virtuellen Python-Umgebungen kümmert sich ein\nProgramm mit dem Namen virtualenv. Dieses muss zusätzlich zu\nPython installiert werden. (IDEs wie PyCharm bringen virtualenv\nautomatisch mit).\nhttp://virtualenv.pypa.io/\nMit virtuellen Umgebungen arbeiten\nWichtiger Hinweise: virtualenv ist ein Kommandozeilenprogramm. 
Die hier beschriebenen Befehle können daher nicht in einem Notebook ausgeführt werden!\nZunächst muss man ein Virtualenv anlegen:\n~~~\nvirtualenv zielverzeichnis\nbzw.\nvirtualenv -p c:\\python3.5\\bin\\python zielverzeichnis\n~~~\nDie erste Zeile legt im Verzeichnis zielverzeichnis eine neue virtuelle\nPython-Umgebung an.\nFalls man eine bestimmte Version von Python verwenden will, kann man diese\nexplizit mit der Option -p angeben (letzte Zeile).\nDanach (und zu Beginn jeder Arbeits-Sitzung) muss dieses Virtualenv aktiviert\nwerden:\n~~~\n<venv-verzeichnis>\\Scripts\\activate # Windows\nsource <venv-verzeichnis>/bin/activate # OS X, Linux\n~~~\nMit deactivate kann man das Virualenv wieder deaktivieren.\nVirtualenv in Pycharm\n<!--- if slides\n## Virtualenv mit PyCharm\n-->\n\nIDEs wie Pycharm bietet direkt aus der IDE heraus die Möglichkeit, Virtualenvs\nanzulegen und zu aktivieren.\n\nIn einem virtuellen Environment arbeiten\nSobald eine virtuellen Umgebung aktiviert ist, können Pakete in gewohnter Weise\ninstalliert werden:\n~~~\npip install paketname\n~~~\nDiese werden dann nur in dem aktiven Virtualenv installiert.\nEine virtuelle Umgebung sichern\nMit pip lassen sich virtuelle Umgebungen relativ leicht dokumentieren:\n~~~\npip freeze > requirements.txt\n~~~\nDie Datei requirements.txt enthält nun Informationen über alle\ninstallierten Zusatzmodule incl. Versionsinformation.\nUm genau diese Zusatzmodule in einem neuen Virtualenv wieder zu installieren,\nreicht dieser Befehl:\n~~~\npip install -r requirements.txt\n~~~\nConda Environments\nConda Environments funktioren von der Idee her sehr ähnlich wie virtuelle Environments. \nEin Environment anlegen\nEin neues Environment wird so angelegt:\n~~~\nconda create --name myenv\n~~~\nBei Bedarf kann auch hier eine bestimmte Python-Version angegeben werden:\n~~~\nconda create -n myenv python=3.2\n~~~\nEin Environment aktivieren\nUm ein Environment zu aktivieren muss unter Windows dieser Befehl eingegeben werden:\n~~~\nactivate myenv\n~~~\nUnter OS X und Linux geht das Aktivieren so:\n~~~\nsource activate myenv\n~~~\nEine ausführliche Beschreibung zur Arbeit mit Environments findet sich hier: \nhttps://conda.io/docs/user-guide/tasks/manage-environments.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
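The modules notebook above describes packages only in prose (a directory of modules that becomes a package once it contains an `__init__.py`). A minimal illustration of that layout and of importing from it; the package, module, and function names here are made up for the example:

```python
# Assumed (made-up) layout on disk:
#   textutils/
#       __init__.py   # may be empty; it marks the directory as a package
#       cleaning.py   # defines strip_punctuation()

# A module inside a package is addressed with the dot operator ...
import textutils.cleaning
textutils.cleaning.strip_punctuation('Hallo, Welt!')

# ... or a single name can be pulled into the current namespace
from textutils.cleaning import strip_punctuation
strip_punctuation('Hallo, Welt!')
```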
Kulbear/deep-learning-nano-foundation
DLND-image-classification/dlnd_image_classification.ipynb
mit
[ "Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('cifar-10-python.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n 'cifar-10-python.tar.gz',\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open('cifar-10-python.tar.gz') as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)", "Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.", "%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 10\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)", "Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.", "def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n return x / 255\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)", "One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. 
The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.", "from sklearn.preprocessing import LabelBinarizer\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n # The possible values for labels are 0 to 9. 10 in total\n return np.eye(10)[x]\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)", "Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)", "Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))", "Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. 
Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.", "import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=[None, *image_shape], name = \"x\")\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(tf.float32, shape=[None, n_classes], name = \"y\")\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n return tf.placeholder(tf.float32, name=\"keep_prob\")\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)", "Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. 
You may still use the shortcut option for all the other layers.", "def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n # x_tensor.get_shape()\n # >> (?, 32, 32, 5)\n input_depth = x_tensor.shape[-1].value\n\n weights = tf.Variable(\n tf.truncated_normal(\n shape=[\n conv_ksize[0], # height\n conv_ksize[1], # width\n input_depth, # input_depth\n conv_num_outputs # out_depth\n ], \n mean=0.0,\n stddev=0.1\n ),\n name='weights'\n )\n bias = tf.Variable(tf.zeros(conv_num_outputs), trainable=True)\n \n # Apply a convolution to x_tensor using weight and conv_strides\n conv_layer = tf.nn.conv2d(\n x_tensor, \n weights, \n strides=[1, *conv_strides, 1], \n padding='SAME'\n )\n \n # Add bias\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n \n # Add a nonlinear activation to the convolution\n conv_layer = tf.nn.relu(conv_layer)\n \n # Apply Max Pooling using pool_ksize and pool_strides\n conv_layer = tf.nn.max_pool(\n conv_layer, \n ksize=[1, *pool_ksize, 1], \n strides=[1, *pool_strides, 1], \n padding='SAME'\n )\n \n return conv_layer \n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)", "Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.", "def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n # print(x_tensor.get_shape()[1:4].num_elements())\n return tf.reshape(x_tensor, [-1, x_tensor.get_shape()[1:4].num_elements()])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)", "Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. 
For more of a challenge, only use other TensorFlow packages.", "def fully_conn(x_tensor, num_outputs):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n batch_size = x_tensor.shape[1].value\n weights = tf.Variable(\n tf.truncated_normal(\n [batch_size, num_outputs],\n mean = 0,\n stddev=0.1)\n )\n bias = tf.Variable(tf.zeros(num_outputs))\n\n fully_conn_layer = tf.matmul(x_tensor, weights)\n fully_conn_layer = tf.nn.bias_add(fully_conn_layer, bias)\n fully_conn_layer = tf.nn.relu(fully_conn_layer)\n\n return fully_conn_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)", "Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.", "def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n batch_size = x_tensor.shape[1].value\n weights = tf.Variable(\n tf.truncated_normal(\n [batch_size, num_outputs],\n mean = 0,\n stddev=0.1)\n )\n bias = tf.Variable(tf.zeros(num_outputs))\n \n output_layer = tf.matmul(x_tensor, weights)\n output_layer = tf.nn.bias_add(output_layer, bias)\n \n return output_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)", "Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. 
Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.", "def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n conv_num_outputs_layer1 = 32\n conv_num_outputs_layer2 = 64\n conv_num_outputs_layer3 = 128\n fully_conn_num_outputs_layer_1 = 256\n fully_conn_num_outputs_layer_2 = 512\n conv_ksize = (5, 5)\n conv_strides = (1, 1)\n pool_ksize = (2, 2)\n pool_strides = (2, 2)\n \n common_params = [conv_ksize, conv_strides, pool_ksize, pool_strides]\n \n # Apply 1, 2, or 3 Convolution and Max Pool layers\n conv_layer_1 = conv2d_maxpool(x, conv_num_outputs_layer1, *common_params)\n conv_layer_1 = tf.nn.dropout(conv_layer_1, keep_prob)\n \n conv_layer_2 = conv2d_maxpool(conv_layer_1, conv_num_outputs_layer2, *common_params)\n conv_layer_2 = tf.nn.dropout(conv_layer_2, keep_prob)\n \n conv_layer_3 = conv2d_maxpool(conv_layer_2, conv_num_outputs_layer3, *common_params)\n conv_layer_3 = tf.nn.dropout(conv_layer_3, keep_prob)\n \n # Apply a Flatten Layer\n flatten_layer_1 = flatten(conv_layer_3)\n\n # Apply 1, 2, or 3 Fully Connected Layers\n fully_conn_layer_1 = fully_conn(flatten_layer_1, fully_conn_num_outputs_layer_1)\n fully_conn_layer_1 = tf.nn.dropout(fully_conn_layer_1, keep_prob)\n fully_conn_layer_2 = fully_conn(flatten_layer_1, fully_conn_num_outputs_layer_2)\n fully_conn_layer_2 = tf.nn.dropout(fully_conn_layer_2, keep_prob)\n \n \n # Apply an Output Layer\n num_outputs = 10 # 10 classes\n output_layer = output(fully_conn_layer_2, num_outputs)\n \n return output_layer\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)", "Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. 
This function is only optimizing the neural network.", "def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)", "Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.", "def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n print(\"Loss: {} Accuracy: {}\".format(loss, accuracy))", "Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout", "# TODO: Tune Parameters\nepochs = 35\nbatch_size = 128\nkeep_probability = 0.75", "Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. 
Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)", "Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)", "Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. 
If you don't, keep tweaking the model architecture and parameters.", "\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_training.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for train_feature_batch, train_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: train_feature_batch, loaded_y: train_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()", "Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
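The one-hot encoding cell in the notebook above imports `LabelBinarizer` but then uses `np.eye` instead. A small sketch of the encoder-based alternative, with the fitted encoder kept outside the function so every call returns the same encoding, as the instructions in that cell ask for:

```python
import numpy as np
from sklearn.preprocessing import LabelBinarizer

# Fit once on the full label range (0-9) so the mapping never changes between calls
label_binarizer = LabelBinarizer()
label_binarizer.fit(range(10))

def one_hot_encode(x):
    """Return a (len(x), 10) array with a single 1 per row."""
    return label_binarizer.transform(x)
```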
h-mayorquin/time_series_basic
presentations/2015-11-11(Analyzing text with Nexa, Part 1).ipynb
bsd-3-clause
[ "Analyzing text with Nexa\nThis is an analysis of the text from the Financial Times with the Nexa framework. Here we apply the Nexa machinery to it.", "import numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport h5py\nimport IPython\n\nimport sys\nsys.path.append('../')\n\nfrom inputs.sensors import Sensor, PerceptualSpace\nfrom inputs.lag_structure import LagStructure\nfrom nexa.nexa import Nexa\n\n# First we have to load the signal\n", "Extract the Data\nNow we extract the signal", "signal_location = '../data/wall_street_data_small.hdf5'\n\n# Access the data and load it into signal\nwith h5py.File(signal_location, 'r') as f:\n dset = f['signal']\n signals = np.empty(dset.shape, np.float)\n dset.read_direct(signals)", "Reshape the data for our purposes and take a piece of it", "# Reshape the data and limit it\nNdata = 10000\nsignals = signals.reshape(signals.shape[0], signals.shape[1] * signals.shape[2])\n# signals = signals[:Ndata, ...].astype('float')\nsignals += np.random.uniform(size=signals.shape)\nprint('zeros', np.sum(signals[0] == 0))\nprint('signals shape', signals.shape)", "Perceptual Space", "dt = 1.0\nlag_times = np.arange(0, 10, 1)\nwindow_size = signals.shape[0] - (lag_times[-1] + 1)\nweights = None\n\nlag_structure = LagStructure(lag_times=lag_times, weights=weights, window_size=window_size)\nsensors = [Sensor(signal, dt, lag_structure) for signal in signals.T]\nperceptual_space = PerceptualSpace(sensors, lag_first=True)", "Nexa Machinery", "# Get the nexa machinery right\nNspatial_clusters = 3\nNtime_clusters = 4\nNembedding = 2\n\nnexa_object = Nexa(perceptual_space, Nspatial_clusters, Ntime_clusters, Nembedding)\n\n# Now we calculate the distance matrix\nnexa_object.calculate_distance_matrix()\nnexa_object.calculate_embedding()\n\n# Now we calculate the clustering\nnexa_object.calculate_spatial_clustering()\n\n# We calculate the cluster to index\nnexa_object.calculate_cluster_to_indexes()\n\n# Data clusters\nnexa_object.calculate_time_clusters()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
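The extraction cell above assumes the layout of the HDF5 file (a top-level 'signal' dataset with three axes) is already known. A short sketch of how the file could be inspected with h5py before reading it, using the same path as in the notebook:

```python
import h5py

signal_location = '../data/wall_street_data_small.hdf5'

# Print every top-level dataset in the file with its shape and dtype
with h5py.File(signal_location, 'r') as f:
    for name, dset in f.items():
        print(name, dset.shape, dset.dtype)
```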
mintcloud/deep-learning
batch-norm/Batch_Normalization_Exercises.ipynb
mit
[ "Batch Normalization – Practice\nBatch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.\nThis is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:\n1. Complicated enough that training would benefit from batch normalization.\n2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.\n3. Simple enough that the architecture would be easy to understand without additional resources.\nThis notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.\n\nBatch Normalization with tf.layers.batch_normalization\nBatch Normalization with tf.nn.batch_normalization\n\nThe following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.", "import tensorflow as tf\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets(\"MNIST_data/\", one_hot=True, reshape=False)", "Batch Normalization using tf.layers.batch_normalization<a id=\"example_1\"></a>\nThis version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization \nWe'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connected layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. 
We aren't bothering with pooling layers at all in this network.\nThis version of the function does not include batch normalization.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)\n return conv_layer", "Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions). \nThis cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.", "\"\"\"\nDO NOT MODIFY THIS CELL\n\"\"\"\ndef train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. 
This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)\nUsing batch normalization, you'll be able to train this same network to over 90% in that same number of batches.\nAdd batch normalization\nWe've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference. \nIf you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.\nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.", "def fully_connected(prev_layer, num_units, is_training):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n \n layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)\n layer = tf.layers.batch_normalization(layer, training=is_training)\n layer = tf.nn.relu(layer)\n\n \n return layer", "TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.", "def conv_layer(prev_layer, layer_depth, is_training):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None,use_bias=False)\n conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)\n return tf.nn.relu(conv_layer)", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n is_training = tf.placeholder(tf.bool)\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i, is_training)\n\n\n \n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100, is_training)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n \n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels,is_training:False})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys,is_training:False})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels,is_training:False})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels,is_training:False})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]],is_training:False})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. 
Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.\nBatch Normalization using tf.nn.batch_normalization<a id=\"example_2\"></a>\nMost of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.\nThis version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.\nOptional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization. \nTODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.", "def fully_connected(prev_layer, num_units):\n \"\"\"\n Create a fully connectd layer with the given layer as input and the given number of neurons.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param num_units: int\n The size of the layer. That is, the number of units, nodes, or neurons.\n :returns Tensor\n A new fully connected layer\n \"\"\"\n layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)\n return layer", "TODO: Modify conv_layer to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.\nNote: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.", "def conv_layer(prev_layer, layer_depth):\n \"\"\"\n Create a convolutional layer with the given layer as input.\n \n :param prev_layer: Tensor\n The Tensor that acts as input into this layer\n :param layer_depth: int\n We'll set the strides and number of feature maps based on the layer's depth in the network.\n This is *not* a good way to make a CNN, but it helps us create this example with very little code.\n :returns Tensor\n A new convolutional layer\n \"\"\"\n strides = 2 if layer_depth % 3 == 0 else 1\n\n in_channels = prev_layer.get_shape().as_list()[3]\n out_channels = layer_depth*4\n \n weights = tf.Variable(\n tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))\n \n bias = tf.Variable(tf.zeros(out_channels))\n\n conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')\n conv_layer = tf.nn.bias_add(conv_layer, bias)\n conv_layer = tf.nn.relu(conv_layer)\n\n return conv_layer", "TODO: Edit the train function to support batch normalization. 
You'll need to make sure the network knows whether or not it is training.", "def train(num_batches, batch_size, learning_rate):\n # Build placeholders for the input samples and labels \n inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])\n labels = tf.placeholder(tf.float32, [None, 10])\n \n # Feed the inputs into a series of 20 convolutional layers \n layer = inputs\n for layer_i in range(1, 20):\n layer = conv_layer(layer, layer_i)\n\n # Flatten the output from the convolutional layers \n orig_shape = layer.get_shape().as_list()\n layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])\n\n # Add one fully connected layer\n layer = fully_connected(layer, 100)\n\n # Create the output layer with 1 node for each \n logits = tf.layers.dense(layer, 10)\n \n # Define loss and training operations\n model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))\n train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)\n \n # Create operations to test accuracy\n correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))\n accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))\n \n # Train and test the network\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for batch_i in range(num_batches):\n batch_xs, batch_ys = mnist.train.next_batch(batch_size)\n\n # train this batch\n sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})\n \n # Periodically check the validation or training loss and accuracy\n if batch_i % 100 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n elif batch_i % 25 == 0:\n loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})\n print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))\n\n # At the end, score the final accuracy for both the validation and test sets\n acc = sess.run(accuracy, {inputs: mnist.validation.images,\n labels: mnist.validation.labels})\n print('Final validation accuracy: {:>3.5f}'.format(acc))\n acc = sess.run(accuracy, {inputs: mnist.test.images,\n labels: mnist.test.labels})\n print('Final test accuracy: {:>3.5f}'.format(acc))\n \n # Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.\n correct = 0\n for i in range(100):\n correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],\n labels: [mnist.test.labels[i]]})\n\n print(\"Accuracy on 100 samples:\", correct/100)\n\n\nnum_batches = 800\nbatch_size = 64\nlearning_rate = 0.002\n\ntf.reset_default_graph()\nwith tf.Graph().as_default():\n train(num_batches, batch_size, learning_rate)", "Once again, the model with batch normalization should reach an accuracy over 90%. There are plenty of details that can go wrong when implementing at this low level, so if you got it working - great job! If not, do not worry, just look at the Batch_Normalization_Solutions notebook to see what went wrong." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
relopezbriega/mi-python-blog
content/notebooks/CategoricalPython.ipynb
gpl-2.0
[ "Análisis de datos categóricos con Python\nEsta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Matemáticas, análisis de datos y python. El contenido esta bajo la licencia BSD.\n<img alt=\"Datos categóricos con Python\" title=\"Datos categóricos con Python\" src=\"https://relopezbriega.github.io/images/categorical_data.jpg\" high=400px width=600px>\nIntroducción\nCuando trabajamos con estadísticas, es importante reconocer los diferentes tipos de datos: numéricos (discretos y continuos), categóricos y ordinales. Los datos no son más que observaciones del mundo en que vivimos, por tanto, los mismos pueden venir en diferentes formas, no solo numérica. Por ejemplo, si le preguntáramos a nuestros amigos ¿cuántas mascotas tienen? nos podrían responder: 0, 1, 2, 4, 3, 8; esta información por sí misma puede ser útil, pero para nuestro análisis de mascotas, nos podría servir también otro tipo de información, como por ejemplo el género de cada uno de nuestros amigos; de esta forma obtendríamos la siguiente información: hombre, mujer, mujer, mujer, hombre, mujer. Como vemos, podemos incluir a los datos dentro de tres categorías fundamentales: datos cuantitativos o numéricos, datos cualitativos o categóricos y datos ordinales.\nDatos cuantitativos\nLos datos cuantitativos son representados por números; estos números van a ser significativos si representan la medida o la cantidad observada de cierta característica. Dentro de esta categoría podemos encontrar por ejemplo: cantidades de dólares, cuentas, tamaños, número de empleados, y kilómetros por hora. Con los datos cuantitativos, se puede hacer todo tipo de tareas de procesamiento de datos numéricos, tales como sumarlos, calcular promedios, o medir su variabilidad. Asimismo, vamos a poder dividir a los datos cuantitativos en discretos y continuos, dependiendo de los valores potencialmente observables.\n\n\nLos datos discretos solo van a poder asumir un valor de una lista de números específicos. Representan ítems que pueden ser contados; todos sus posibles valores pueden ser listados. Suele ser relativamente fácil trabajar con este tipo de dato.\n\n\nLos datos continuos representan mediciones; sus posibles valores no pueden ser contados y sólo pueden ser descritos usando intervalos en la recta de los números reales. Por ejemplo, la cantidad de kilómetros recorridos no puede ser medida con exactitud, puede ser que hayamos recorrido 1.7 km o 1.6987 km; en cualquier medida que tomemos del mundo real, siempre pueden haber pequeñas o grandes variaciones. Generalmente, los datos continuos se suelen redondear a un número fijo de decimales para facilitar su manipulación.\n\n\nDatos cualitativos\nSi los datos nos dicen en cual de determinadas categorías no numéricas nuestros ítems van a caer, entonces estamos hablando de datos cualitativos o categóricos; ya que los mismos van a representar determinada cualidad que los ítems poseen. Dentro de esta categoría vamos a encontrar datos como: el sexo de una persona, el estado civil, la ciudad natal, o los tipos de películas que le gustan. Los datos categóricos pueden tomar valores numéricos (por ejemplo, \"1\" para indicar \"masculino\" y \"2\" para indicar \"femenino\"), pero esos números no tienen un sentido matemático.\nDatos ordinales\nUna categoría intermedia entre los dos tipos de datos anteriores, son los datos ordinales. En este tipo de datos, va a existir un orden significativo, vamos a poder clasificar un primero, segundo, tercero, etc. 
es decir, que podemos establecer un ranking para estos datos, el cual posiblemente luego tenga un rol importante en la etapa de análisis. Los datos se dividen en categorías, pero los números colocados en cada categoría tienen un significado. Por ejemplo, la calificación de un restaurante en una escala de 0 (bajo) a 5 (más alta) estrellas representa datos ordinales. Los datos ordinales son a menudo tratados como datos categóricos, en el sentido que se suelen agrupar y ordenar. Sin embargo, a diferencia de los datos categóricos, los números sí tienen un significado matemático.\nEn este artículo me voy a centrar en el segundo grupo, los datos categóricos; veremos como podemos manipular fácilmente con la ayuda de Python estos datos para poder encontrar patrones, relaciones, tendencias y excepciones. \nAnálisis de datos categóricos con Python\nPara ejemplificar el análisis, vamos a utilizar nuestras habituales librerías científicas NumPy, Pandas, Matplotlib y Seaborn. También vamos a utilizar la librería pydataset, la cual nos facilita cargar los diferentes dataset para analizar. \nLa idea es realizar un análisis estadístico sobre los datos de los sobrevivientes a la tragedia del Titanic.\nLa tragedia del Titanic\nEl hundimiento del Titanic es uno de los naufragios más infames de la historia. El 15 de abril de 1912, durante su viaje inaugural, el Titanic se hundió después de chocar con un iceberg, matando a miles de personas. Esta tragedia sensacional conmocionó a la comunidad internacional y condujo a mejores normas de seguridad aplicables a los buques. \nUna de las razones por las que el naufragio dio lugar a semejante cantidad de muertes fue que no había suficientes botes salvavidas para los pasajeros y la tripulación. Aunque hubo algún elemento de suerte involucrada en sobrevivir al hundimiento, algunos grupos de personas tenían más probabilidades de sobrevivir que otros, como las mujeres, los niños y la clase alta.\nEl siguiente dataset proporciona información sobre el destino de los pasajeros en el viaje fatal del trasatlántico Titanic, que se resume de acuerdo con el nivel económico (clase), el sexo, la edad y la supervivencia.", "# <!-- collapse=True -->\n# importando modulos necesarios\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nimport numpy as np \nimport pandas as pd \nimport seaborn as sns \nfrom pydataset import data\n\n# parametros esteticos de seaborn\nsns.set_palette(\"deep\", desat=.6)\nsns.set_context(rc={\"figure.figsize\": (8, 4)})\n\n# importando dataset\ntitanic = data('titanic')\n\n# ver primeros 10 registros\ntitanic.head(10)", "El problema con datos como estos, y en general con la mayoría de las tablas de datos, es que nos presentan mucha información y no nos permiten ver que es lo que realmente sucede o sucedió. Por tanto, deberíamos procesarla de alguna manera para hacernos una imagen de lo que los datos realmente representan y nos quieren decir; y que mejor manera para hacernos una imagen de algo que utilizar visualizaciones. Una buena visualización de los datos puede revelar cosas que es probable que no podamos ver en una tabla de números y nos ayudará a pensar con claridad acerca de los patrones y relaciones que pueden estar escondidos en los datos. También nos va a ayudar a encontrar las características y patrones más importantes o los casos que son realmente excepcionales y no deberíamos de encontrar.\nTablas de frecuencia\nPara hacernos una imagen de los datos, lo primero que tenemos que hacer es agruparlos. 
Al armar diferentes grupos nos vamos acercando a la comprensión de los datos. La idea es ir amontonamos las cosas que parecen ir juntas, para poder ver como se distribuyen a través de las diferentes categorías. Para los datos categóricos, agrupar es fácil; simplemente debemos contar el número de\nítems que corresponden a cada categoría y apilarlos.\nUna forma en la que podemos agrupar nuestro dataset del Titanic es contando las diferentes clases de pasajeros. Podemos organizar estos conteos en una tabla de frecuencia, que registra los totales y los nombres de las categorías utilizando la función value_counts que nos proporciona Pandas del siguiente modo:", "# tabla de frecuencia de clases de pasajeros\npd.value_counts(titanic['class'])", "Contar las cantidad de apariciones de cada categoría puede ser útil, pero a veces puede resultar más útil saber la fracción o proporción de los datos de cada categoría, así que podríamos entonces dividir los recuentos por el total de casos para obtener los porcentajes que representa cada categoría. \nUna tabla de frecuencia relativa muestra los porcentajes, en lugar de los recuentos de los valores en cada categoría. Ambos tipos de tablas muestran cómo los\ncasos se distribuyen a través de las categorías. De esta manera, ellas describen la distribución de una variable categórica, ya que enumeran las posibles categorías y nos dicen con qué frecuencia se produce cada una de ellas.", "# tabla de frecuencia relativa de pasajeros\n100 * titanic['class'].value_counts() / len(titanic['class'])", "Gráficos de tartas y barras\nAhora que ya conocemos a las tablas de frecuencia ya estamos en condiciones de crear visualizaciones que realmente nos den una imagen de los datos, sus propiedades y sus relaciones. En este punto, debemos ser sumamente cuidadosos, ya que una mala visualización puede llegar a distorsionar nuestra comprensión, en lugar de ayudarnos. \nLas mejores visualizaciones de datos siguen un principio fundamental llamado el principio del área. Este principio nos dice que el área ocupada por cada parte del gráfico se debe corresponder con la magnitud del valor que representa. Violaciones del principio de área son una forma común de mentir con estadísticas. Dos gráficos útiles que podemos utilizar para representar nuestros datos y que cumplen con este principio son el gráfico de barras y el gráfico de tarta.\nGráfico de barras\nEl gráfico de barras nos ayuda a darnos una impresión visual más precisa de la distribución de nuestros datos. La altura de cada barra muestra el recuento de su categoría. Los barras tienen el mismo ancho, por lo que sus alturas determinan sus áreas, y estas áreas son proporcionales a los recuentos en cada categoría. De esta forma, podemos ver fácilmente que había más del doble de pasajeros de tercera clase, que de primera o segunda clase. Los gráficos de barras hacen que este tipo de comparaciones sean fáciles y naturales. 
Veamos como podemos crearlos de forma sencilla utilizando el método plot dentro de un DataFrame de Pandas.", "# Gráfico de barras de pasajeros del Titanic\nplot = titanic['class'].value_counts().plot(kind='bar',\n title='Pasajeros del Titanic')", "Si quisiéramos enfocarnos en la proporción relativa de los pasajeros de cada una de las clases, simplemente podemos sustituir a los recuentos con porcentajes y utilizar un gráfico de barras de frecuencias relativas.", "# gráfico de barras de frecuencias relativas.\nplot = (100 * titanic['class'].value_counts() / len(titanic['class'])).plot(\nkind='bar', title='Pasajeros del Titanic %')", "Gráfico de tartas\nEl gráfico de tarta muestra el total de casos como un círculo y luego corta este círculo en piezas cuyos tamaños son proporcionales a la fracción que cada categoría representa sobre el total de casos. Los gráfico de tarta dan una impresión rápida de cómo todo un grupo se divide en grupos más pequeños. Lo podríamos graficar del siguiente modo, también utilizando el método plot:", "# Gráfico de tarta de pasajeros del Titanic\nplot = titanic['class'].value_counts().plot(kind='pie', autopct='%.2f', \n figsize=(6, 6),\n title='Pasajeros del Titanic')", "Como se puede apreciar, con el gráfico de tarta no es tan fácil determinar que los pasajeros de tercera clase son más que el doble que los de primera clase; tampoco es fácil determinar si hay más pasajeros de primera o de segunda clase. Para este tipo de comparaciones, son mucho más útiles los gráficos de barras.\nRelacionando variables categóricas\nAl analizar la tragedia del Titanic, una de las preguntas que podríamos hacer es ¿existe alguna relación entre la clase de pasajeros y la posibilidad de alcanzar un bote salvavidas y sobrevivir a la tragedia? Para poder responder a esta pregunta, vamos a necesitar analizar a las variables class y survived de nuestro dataset en forma conjunta. Una buena forma de analizar dos variables categóricas en forma conjunta, es agrupar los recuentos en una tabla de doble entrada; este tipo de tablas se conocen en estadística con el nombre de tabla de contingencia. Veamos como podemos crear esta tabla utilizando la función crosstab de Pandas.", "# Tabla de contingencia class / survived\npd.crosstab(index=titanic['survived'],\n columns=titanic['class'], margins=True)", "Los márgenes de la tabla, tanto en la derecha y en la parte inferior, nos muestran los totales. La línea inferior de la tabla representa la distribución de frecuencia de la clase de pasajeros. La columna derecha de la tabla es la distribución de frecuencia de la variable supervivencia. \nCuando se presenta la información de este modo, cada celda de cada uno de los márgenes de la tabla representa la distribución marginal de esa variable en particular. 
Cada celda nos va a mostrar el recuento para la combinación de los valores de nuestras dos variables categóricas, en este caso class y survived.\nAl igual de como habíamos visto con las tablas de frecuencia, también nos podría ser útil representar a las tablas de contingencia con porcentajes relativos; esto lo podríamos realizar utilizando el método apply del siguiente modo:", "# tabla de contingencia en porcentajes relativos total\npd.crosstab(index=titanic['survived'], columns=titanic['class'],\n margins=True).apply(lambda r: r/len(titanic) *100,\n axis=1)", "Con esta tabla podemos ver fácilmente que solo el 37.91% de los pasajeros sobrevivió a la tragedia y que este 37% se compone de la siguiente forma: del total de pasajeros sobrevivió un 15.42% de pasajeros que eran de primera clase, un 8.97% que eran de segunda clase y un 13.52% que eran pasajeros de tercera clase.\nVolviendo a nuestra pregunta inicial sobre la posibilidad de sobrevivir según la clase de pasajero, podría ser más útil armar la tabla de porcentajes como un porcentaje relativo sobre el total de cada fila, es decir calcular el porcentaje relativo que cada clase tiene sobre haber sobrevivido o no. Esto lo podemos realizar del siguiente modo:", "# tabla de contingencia en porcentajes relativos segun sobreviviente\npd.crosstab(index=titanic['survived'], columns=titanic['class']\n ).apply(lambda r: r/r.sum() *100,\n axis=1)", "Aquí podemos ver que de los pasajeros que sobrevivieron a la tragedia, el 40.68% correspondían a primera clase, el 35.67% a tercera clase y el 23.65% a segunda clase. Por tanto podríamos inferir que los pasajeros de primera clase tenían más posibilidades de sobrevivir. \nEs más, también podríamos armar la tabla de porcentaje relativos en relación al total de cada clase de pasajero y así podríamos ver que de los pasajeros de primera clase, logró sobrevivir un 62.46%.", "# tabla de contingencia en porcentajes relativos segun clase\npd.crosstab(index=titanic['survived'], columns=titanic['class']\n ).apply(lambda r: r/r.sum() *100,\n axis=0)", "Este último resultado lo podríamos representar visualmente con simples gráfico de barras del siguiente modo:", "# Gráfico de barras de sobreviviviente segun clase\nplot = pd.crosstab(index=titanic['class'],\n columns=titanic['survived']).apply(lambda r: r/r.sum() *100,\n axis=1).plot(kind='bar')\n\n# Gráfico de barras de sobreviviviente segun clase\nplot = pd.crosstab(index=titanic['survived'],\n columns=titanic['class']\n ).apply(lambda r: r/r.sum() *100,\n axis=0).plot(kind='bar', stacked=True)", "Estas mismas manipulaciones las podemos realizar para otro tipo de combinación de variables categóricas, como podría ser el sexo o la edad de los pasajeros, pero eso ya se los dejo a ustedes para que se entretengan y practiquen un rato.\nCon este termina esta artículo, si les gustó y están interesados en la estadísticas, no duden en visitar mi anterior artículo Probabilidad y Estadística con Python y seguir la novedades del blog!\nSaludos!\nEste post fue escrito utilizando Jupyter notebook. Pueden descargar este notebook o ver su version estática en nbviewer." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
neuropsychology/NeuroKit.py
examples/Bio/.ipynb_checkpoints/bio_processing-checkpoint.ipynb
mit
[ "Biosignals Processing in Python\nWelcome to the course for biosignals processing using NeuroKit and python. You'll find the necessary files to run this example in the examples section.\nImport Necessary Packages", "# Import packages\nimport neurokit as nk\nimport pandas as pd\nimport numpy as np\nimport matplotlib\nimport seaborn as sns\n\n# Plotting preferences\n%matplotlib inline\nmatplotlib.rcParams['figure.figsize'] = [14.0, 10.0] # Bigger figures\nsns.set_style(\"whitegrid\") # White background\nsns.set_palette(sns.color_palette(\"colorblind\")) # Better colours", "Block Paradigms\nPreprocessing", "# Download resting-state data\ndf = pd.read_csv(\"https://raw.githubusercontent.com/neuropsychology/NeuroKit.py/master/examples/Bio/data/bio_rest.csv\", index_col=0)\n# Plot it\ndf.plot()", "df contains about 5 minutes of data recorded at 1000Hz. There are 4 channels, EDA, ECG, RSP and the Photosensor used to localize events. In the present case, there is only one event, one sequence of 5 min during which the participant was instructed to to nothing.\nFirst thing that we're gonna do is crop that data according to the photosensor channel, to keep only the sequence of interest.", " # We want to find events on the Photosensor channel, when it goes down (hence, cut is set to lower).\nevents = nk.find_events(df[\"Photosensor\"], cut=\"lower\") \nprint(events)", "find_events returns a dict containing onsets and durations of each event. Here, it correctly detected only one event. Then, we're gonna crop our data according to that event. The create_epochs function returns a list containing epochs of data corresponding to each event. As we have only one event, we're gonna select the 0th element of that list.", "df = nk.create_epochs(df, events[\"onsets\"], duration=events[\"durations\"], onset=0)\ndf = df[0] # Select the first (0th) element of that list.", "Processing\nBiosignals processing can be done quite easily using NeuroKit with the bio_process function. Simply provide the biosignal channels and additional channels that you want to keep (for example, the photosensor). bio_process returns a dict containing a dataframe df, including raw and processed signals, as well as features relevant to each provided signal.", "bio = nk.bio_process(ecg=df[\"ECG\"], rsp=df[\"RSP\"], eda=df[\"EDA\"], add=df[\"Photosensor\"])\n# Plot the processed dataframe\nbio[\"df\"].plot()", "Bio Features Extraction\nAside from this dataframe, bio contains also several features computed signal wise.\nHeart-Rate Variability (HRV)\nMany indices of HRV, a finely tuned measure of heart-brain communication, are computed.", "bio[\"ECG\"][\"HRV\"]", "Respiratory Sinus Arrythmia (RSA)\nTO BE DONE.\nEntropy\nTO BE DONE.\nHeart Beats\nThe processing functions automatically extracts each individual heartbeat, synchronized by their R peak. You can plot all of them.", "bio[\"ECG\"][\"Heart_Beats\"]\n\npd.DataFrame(bio[\"ECG\"][\"Heart_Beats\"]).T.plot(legend=False) # Plot all the heart beats", "Heart Rate Variability (HRV)", "# Print all the HRV indices\nbio[\"ECG_Features\"][\"ECG_HRV\"]", "Event-Related Analysis\nThis experiment consisted of 8 events (when the photosensor signal goes down), which were 2 types of images that were shown to the participant: \"Negative\" vs \"Neutral\". 
The following list is the condition order.", "condition_list = [\"Negative\", \"Negative\", \"Neutral\", \"Neutral\", \"Neutral\", \"Negative\", \"Negative\", \"Neutral\"]", "Find Events\nFirst, we must find events onset within our photosensor's signal using the find_events() function. This function requires a treshold and a cut direction (should it select events that are higher or lower than the treshold).", "events = nk.find_events(df[\"Photosensor\"], treshold = 3, cut=\"lower\")\nevents", "Create Epochs\nThen, we divise our dataframe in epochs, i.e. segments of data around the event. We set our epochs to start at the event start (onset=0) and to last for 5000 data points, in our case equal to 5 s (since the signal is sampled at 1000Hz).", "epochs = nk.create_epochs(bio[\"Bio\"], events[\"onsets\"], duration=5000, onset=0)", "Create Evoked-Data\nWe can then itereate through the epochs and store the interesting results in a new dict that will be, at the end, converted to a dataframe.", "evoked = {} # Initialize an empty dict\nfor epoch in epochs:\n evoked[epoch] = {} # Initialize an empty dict for the current epoch\n evoked[epoch][\"Heart_Rate\"] = epochs[epoch][\"Heart_Rate\"].mean() # Heart Rate mean\n evoked[epoch][\"RSP_Rate\"] = epochs[epoch][\"RSP_Rate\"].mean() # Respiration Rate mean\n evoked[epoch][\"EDA_Filtered\"] = epochs[epoch][\"EDA_Filtered\"].mean() # EDA mean\n evoked[epoch][\"EDA_Max\"] = max(epochs[epoch][\"EDA_Filtered\"]) # Max EDA value\n \n # SRC_Peaks are scored np.nan (NaN values) in the absence of peak. We want to change it to 0\n if np.isnan(epochs[epoch][\"SCR_Peaks\"].mean()):\n evoked[epoch][\"SCR_Peaks\"] = 0\n else:\n evoked[epoch][\"SCR_Peaks\"] = epochs[epoch][\"SCR_Peaks\"].mean()\n\nevoked = pd.DataFrame.from_dict(evoked, orient=\"index\") # Convert to a dataframe\nevoked[\"Condition\"] = condition_list # Add the conditions\nevoked # Print", "Plot Results", "sns.boxplot(x=\"Condition\", y=\"Heart_Rate\", data=evoked)\n\nsns.boxplot(x=\"Condition\", y=\"RSP_Rate\", data=evoked)\n\nsns.boxplot(x=\"Condition\", y=\"EDA_Filtered\", data=evoked)\n\nsns.boxplot(x=\"Condition\", y=\"EDA_Max\", data=evoked)\n\nsns.boxplot(x=\"Condition\", y=\"SCR_Peaks\", data=evoked)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
dwhswenson/openpathsampling
examples/tests/test_snapshot_modifier.ipynb
mit
[ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport openpathsampling as paths\nimport numpy as np\n\nfrom __future__ import print_function", "Test with toy model", "topology = paths.engines.toy.Topology(n_spatial=3, \n n_atoms=2, \n masses=np.array([2.0, 8.0]), \n pes=None)\ninitial_snapshot = paths.engines.toy.Snapshot(\n coordinates=np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),\n velocities=np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),\n engine=paths.engines.toy.Engine({}, topology)\n)", "We'll define the modifier at two different temperatures, and we'll run each for 10000 snapshots. Note also that our two atoms have different masses.", "modifier_1 = paths.RandomVelocities(beta=1.0)\nmodifier_5 = paths.RandomVelocities(beta=1.0/5.0)\n\nsnapshots_1 = [modifier_1(initial_snapshot) for i in range(10000)]\nsnapshots_5 = [modifier_5(initial_snapshot) for i in range(10000)]", "Within each atom, all 3 DOFs will be part of the same distribution. We create a few lists with names of the form v_${BETA}_${ATOM_NUMBER}. These are the results we'll histogram and test.", "v_1_0 = sum([s.velocities[0].tolist() for s in snapshots_1], [])\nv_1_1 = sum([s.velocities[1].tolist() for s in snapshots_1], [])\nv_5_0 = sum([s.velocities[0].tolist() for s in snapshots_5], [])\nv_5_1 = sum([s.velocities[1].tolist() for s in snapshots_5], [])", "We know what the distribution should look like, so we write it down explicitly:", "def expected(beta, mass, v):\n alpha = 0.5*beta*mass\n return np.sqrt(alpha/np.pi)*np.exp(-alpha*v**2)", "Now we take each total distribution, and compare it to the expected distribution. This is where we have to use our eyes to check the correctness.", "v = np.arange(-5.0, 5.0, 0.1)\nbins = np.arange(-8.0, 8.0, 0.2)\nplt.hist(v_1_0, bins=bins, normed=True)\nplt.plot(v, expected(1.0, 2.0, v), 'r');\n\nv = np.arange(-5.0, 5.0, 0.1)\nbins = np.arange(-8.0, 8.0, 0.2)\nplt.hist(v_1_1, bins=bins, normed=True)\nplt.plot(v, expected(1.0, 8.0, v), 'r');\n\nv = np.arange(-5.0, 5.0, 0.1)\nbins = np.arange(-8.0, 8.0, 0.2)\nplt.hist(v_5_0, bins=bins, normed=True)\nplt.plot(v, expected(0.2, 2.0, v), 'r');\n\nv = np.arange(-5.0, 5.0, 0.1)\nbins = np.arange(-8.0, 8.0, 0.2)\nplt.hist(v_5_1, bins=bins, normed=True)\nplt.plot(v, expected(0.2, 8.0, v), 'r');", "If the red lines match the blue histograms, we're good. Otherwise, something has gone terribly wrong.\nTest with OpenMM", "import openmmtools as omt\nimport openpathsampling.engines.openmm as omm_engine\nimport simtk.unit as u\ntest_system = omt.testsystems.AlanineDipeptideVacuum()\ntemplate = omm_engine.snapshot_from_testsystem(test_system)\n\n# just to show that the initial velocities are all 0\ntemplate.velocities\n\ntemperature = 300.0 * u.kelvin\nbeta = 1.0 / (temperature * u.BOLTZMANN_CONSTANT_kB)\n\nfull_randomizer = paths.RandomVelocities(beta)\nfully_randomized_snapshot = full_randomizer(template)\nfully_randomized_snapshot.velocities", "That version randomized all velcoties; we can also create a SnapshotModifier that only modifies certain velocities. 
For example, we might be interested in modifying the velocities of a solvent while ignoring the solute.\nNext we create a little example that only modifies the velocities of the carbon atoms in alanine dipeptide.", "carbon_atoms = template.topology.mdtraj.select(\"element C\")\ncarbon_randomizer = paths.RandomVelocities(beta, subset_mask=carbon_atoms)\ncarbon_randomized_snapshot = carbon_randomizer(template)\ncarbon_randomized_snapshot.velocities", "Note that only the 6 carbon atoms, selected by the subset_mask, have changed velocities from the template's value of 0.0.\nFinally, we'll check that the OpenMM version is giving the right statistics:", "carbon_velocities = [carbon_randomizer(template).velocities[carbon_atoms] for i in range(1000)]\n\nall_dof_values = sum(np.concatenate(carbon_velocities).tolist(), [])\nprint(len(all_dof_values))\n\ndalton_mass = 12.0\n# manually doing conversions here\ncarbon_mass = dalton_mass / (6.02*10**23) * 10**-3 # kg\nboltzmann = 1.38 * 10**-23 # J/K\nm_s__to__nm_ps = 10**-3\n\ntemperature = 300.0 # K\n\nkB_T = boltzmann * temperature * m_s__to__nm_ps**2\n\nv = np.arange(-3.0, 3.0, 0.1)\nbins = np.arange(-3.0, 3.0, 0.1)\nplt.hist(all_dof_values, bins=bins, normed=True);\nplt.plot(v, expected(1.0/kB_T, carbon_mass, v), 'r')" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
KartikKannapur/Programming_Challenges
HackerRank/Intro_to_Statistics/Day_01.ipynb
mit
[ "Day 1: Basic Statistics - A Warmup\nObjective\nIn this challenge, we practice calculating the mean, median, mode, standard deviation, and confidence intervals in statistics. \nTask\nGiven a single line of N space-separated integers describing an array, calculate and print the following:\nMean (m): The average of all the integers.\nArray Median: If the number of integers is odd, display the middle element. Otherwise, display the average of the two middle elements.\nMode: The element(s) that occur most frequently. If multiple elements satisfy this criteria, display the numerically smallest one.\nStandard Deviation (σ)\nOther than the modal values (which should all be integers), the answers should be in decimal form, correct to a single decimal point, 0.0 format. An error margin of ±0.1 will be tolerated for the standard deviation. The mean, mode and median values should match the expected answers exactly.\nAssume the numbers were sampled from a normal distribution. The sample is a reasonable representation of the distribution. A user can approximate that the population standard deviation ≃ standard deviation computed for the given points with the understanding that assumptions of normality are convenient approximations.", "# #Python Library Imports\nimport numpy as np\nfrom scipy import stats\n\n#count = int(raw_input())\n#numbers = raw_input()\ncount = 10\nnumbers = \"64630 11735 14216 99233 14470 4978 73429 38120 51135 67060\"\n\narr_numbers = [int(var_num) for var_num in numbers.split()]\n\n# #MEAN\nprint np.mean(arr_numbers)\n\n# #MEDIAN\nprint np.median(arr_numbers)\n\n# #MODE\nprint int(stats.mode(np.array(arr_numbers))[0])\n\n# #STANDARD DEVIATION\nprint np.std(arr_numbers)", "Day 1: Standard Deviation Puzzles #1\nObjective\nIn this challenge, we practice calculating standard deviation.\nTask\nFind the largest possible value of N where the standard deviation of the values in the set {1,2,3,N} is equal to the standard deviation of the values in the set {1,2,3}.\nOutput the value of N, correct to two decimal places.", "# #Input Sets\nset_original = [1,2,3]\n\nset_original_mean = np.mean(set_original)\nset_original_std = np.std(set_original)\n\nprint set_original_mean, set_original_std\n\nnp.std([1,2,3,2.94])", "Day 1: Standard Deviation Puzzles #2\nObjective\nIn this challenge, we practice calculating standard deviation.\nTask\nThe heights of a group of children are measured. The resulting data has a mean of 0.675 meters, and a standard deviation of 0.065 meters. One particular child is 90.25 centimeters tall. Compute z, the number of standard deviations away from the mean that the particular child is.\nEnter the value of z, correct to a scale of two decimal places.", "var_2_mean = 0.675\nvar_2_std = 0.065\n\nchild = 0.9025\n\n(child - var_2_mean)/var_2_std" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
quantumlib/OpenFermion
docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb
apache-2.0
[ "Copyright 2020 The OpenFermion Developers", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Hamiltonian Time Evolution and Expectation Value Computation\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/openfermion/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/OpenFermion/blob/master/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/OpenFermion/docs/fqe/tutorials/hamiltonian_time_evolution_and_expectation_estimation.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nThis tutorial describes the FQE's capabilities for Hamiltonian time-evolution and expectation value estimation\nWhere possible, LiH will be used as an example molecule for the API.", "try:\n import fqe\nexcept ImportError:\n !pip install fqe --quiet\n\nPrint = True\nfrom openfermion import FermionOperator, MolecularData\nfrom openfermion.utils import hermitian_conjugated\nimport numpy\nimport fqe\n\nnumpy.set_printoptions(floatmode='fixed', precision=6, linewidth=80, suppress=True)\nnumpy.random.seed(seed=409)\n\n!curl -O https://raw.githubusercontent.com/quantumlib/OpenFermion-FQE/master/tests/unittest_data/build_lih_data.py\n\nimport build_lih_data\n\nh1e, h2e, wfn = build_lih_data.build_lih_data('energy')\nlih_hamiltonian = fqe.get_restricted_hamiltonian(([h1e, h2e]))\nlihwfn = fqe.Wavefunction([[4, 0, 6]])\nlihwfn.set_wfn(strategy='from_data', raw_data={(4, 0): wfn})\nif Print:\n lihwfn.print_wfn()", "Application of one- and two-body fermionic gates\nThe API for time propogation can be invoked through the fqe namespace or the wavefunction object", "# dummy geometry\nfrom openfermion.chem.molecular_data import spinorb_from_spatial\nfrom openfermion import jordan_wigner, get_sparse_operator, InteractionOperator, get_fermion_operator\n\nh1s, h2s = spinorb_from_spatial(h1e, numpy.einsum(\"ijlk\", -2 * h2e) * 0.5)\nmol = InteractionOperator(0, h1s, h2s)\nham_fop = get_fermion_operator(mol)\nham_mat = get_sparse_operator(jordan_wigner(ham_fop)).toarray()\n\n\nfrom scipy.linalg import expm\ntime = 0.01\nevolved1 = lihwfn.time_evolve(time, lih_hamiltonian)\nif Print:\n evolved1.print_wfn()\nevolved2 = fqe.time_evolve(lihwfn, time, 
lih_hamiltonian)\nif Print:\n evolved2.print_wfn()\nassert numpy.isclose(fqe.vdot(evolved1, evolved2), 1)\ncirq_wf = fqe.to_cirq(lihwfn)\nevolve_cirq = expm(-1j * time * ham_mat) @ cirq_wf\ntest_evolve = fqe.from_cirq(evolve_cirq, thresh=1.0E-12)\nassert numpy.isclose(fqe.vdot(test_evolve, evolved1), 1)", "Exact evolution implementation of quadratic Hamiltonians\nListed here are examples of evolving the special Hamiltonians.\nDiagonal Hamiltonian evolution is supported.", "wfn = fqe.Wavefunction([[4, 2, 4]])\nwfn.set_wfn(strategy='random')\nif Print:\n wfn.print_wfn()\n\ndiagonal = FermionOperator('0^ 0', -2.0) + \\\n FermionOperator('1^ 1', -1.7) + \\\n FermionOperator('2^ 2', -0.7) + \\\n FermionOperator('3^ 3', -0.55) + \\\n FermionOperator('4^ 4', -0.1) + \\\n FermionOperator('5^ 5', -0.06) + \\\n FermionOperator('6^ 6', 0.5) + \\\n FermionOperator('7^ 7', 0.3)\nif Print:\n print(diagonal)\n \nevolved = wfn.time_evolve(time, diagonal)\nif Print:\n evolved.print_wfn()", "Exact evolution of dense quadratic hamiltonians is supported. Here is an evolution example using a spin restricted Hamiltonian on a number and spin conserving wavefunction", "norb = 4 \nh1e = numpy.zeros((norb, norb), dtype=numpy.complex128) \nfor i in range(norb): \n for j in range(norb): \n h1e[i, j] += (i+j) * 0.02 \n h1e[i, i] += i * 2.0 \n\nhamil = fqe.get_restricted_hamiltonian((h1e,)) \nwfn = fqe.Wavefunction([[4, 0, norb]]) \nwfn.set_wfn(strategy='random') \ninitial_energy = wfn.expectationValue(hamil) \nprint('Initial Energy: {}'.format(initial_energy))\nevolved = wfn.time_evolve(time, hamil) \nfinal_energy = evolved.expectationValue(hamil)\nprint('Final Energy: {}'.format(final_energy))", "The GSO Hamiltonian is for evolution of quadratic hamiltonians that are spin broken and number conserving.", "norb = 4 \nh1e = numpy.zeros((2*norb, 2*norb), dtype=numpy.complex128) \nfor i in range(2*norb): \n for j in range(2*norb): \n h1e[i, j] += (i+j) * 0.02 \n h1e[i, i] += i * 2.0 \n\nhamil = fqe.get_gso_hamiltonian((h1e,)) \nwfn = fqe.get_number_conserving_wavefunction(4, norb) \nwfn.set_wfn(strategy='random') \ninitial_energy = wfn.expectationValue(hamil) \nprint('Initial Energy: {}'.format(initial_energy))\nevolved = wfn.time_evolve(time, hamil) \nfinal_energy = evolved.expectationValue(hamil)\nprint('Final Energy: {}'.format(final_energy))", "The BCS hamiltonian evovles spin conserved and number broken wavefunctions.", "norb = 4\ntime = 0.001\nwfn_spin = fqe.get_spin_conserving_wavefunction(2, norb)\nhamil = FermionOperator('', 6.0)\nfor i in range(0, 2*norb, 2):\n for j in range(0, 2*norb, 2):\n opstring = str(i) + ' ' + str(j + 1)\n hamil += FermionOperator(opstring, (i+1 + j*2)*0.1 - (i+1 + 2*(j + 1))*0.1j)\n opstring = str(i) + '^ ' + str(j + 1) + '^ '\n hamil += FermionOperator(opstring, (i+1 + j)*0.1 + (i+1 + j)*0.1j)\nh_noncon = (hamil + hermitian_conjugated(hamil))/2.0\nif Print:\n print(h_noncon)\n\nwfn_spin.set_wfn(strategy='random')\nif Print:\n wfn_spin.print_wfn()\n\nspin_evolved = wfn_spin.time_evolve(time, h_noncon)\nif Print:\n spin_evolved.print_wfn()", "Exact Evolution Implementation of Diagonal Coulomb terms", "norb = 4\nwfn = fqe.Wavefunction([[5, 1, norb]])\nvij = numpy.zeros((norb, norb, norb, norb), dtype=numpy.complex128)\nfor i in range(norb):\n for j in range(norb):\n vij[i, j] += 4*(i % norb + 1)*(j % norb + 1)*0.21\n \nwfn.set_wfn(strategy='random')\n\nif Print:\n wfn.print_wfn()\n \nhamil = fqe.get_diagonalcoulomb_hamiltonian(vij)\n \nevolved = wfn.time_evolve(time, hamil)\nif Print:\n 
evolved.print_wfn()", "Exact evolution of individual n-body anti-Hermitian gnerators", "norb = 3\nnele = 4\nops = FermionOperator('5^ 1^ 2 0', 3.0 - 1.j)\nops += FermionOperator('0^ 2^ 1 5', 3.0 + 1.j)\nwfn = fqe.get_number_conserving_wavefunction(nele, norb)\nwfn.set_wfn(strategy='random')\nwfn.normalize()\nif Print:\n wfn.print_wfn()\nevolved = wfn.time_evolve(time, ops)\nif Print:\n evolved.print_wfn()\n", "Approximate evolution of sums of n-body generators\nApproximate evolution can be done for dense operators.", "lih_evolved = lihwfn.apply_generated_unitary(time, 'taylor', lih_hamiltonian, accuracy=1.e-8)\nif Print:\n lih_evolved.print_wfn()\n\nnorb = 2\nnalpha = 1\nnbeta = 1\nnele = nalpha + nbeta\ntime = 0.05\nh1e = numpy.zeros((norb*2, norb*2), dtype=numpy.complex128)\nfor i in range(2*norb):\n for j in range(2*norb):\n h1e[i, j] += (i+j) * 0.02\n h1e[i, i] += i * 2.0\nhamil = fqe.get_general_hamiltonian((h1e,))\nspec_lim = [-1.13199078e-03, 6.12720338e+00]\nwfn = fqe.Wavefunction([[nele, nalpha - nbeta, norb]])\nwfn.set_wfn(strategy='random')\nif Print:\n wfn.print_wfn()\nevol_wfn = wfn.apply_generated_unitary(time, 'chebyshev', hamil, spec_lim=spec_lim)\nif Print:\n evol_wfn.print_wfn()", "API for determining desired expectation values", "rdm1 = lihwfn.expectationValue('i^ j')\nif Print:\n print(rdm1)\nval = lihwfn.expectationValue('5^ 3')\nif Print:\n print(2.*val)\ntrdm1 = fqe.expectationValue(lih_evolved, 'i j^', lihwfn)\nif Print:\n print(trdm1)\nval = fqe.expectationValue(lih_evolved, '5 3^', lihwfn)\nif Print:\n print(2*val)", "2.B.1 RDMs \nIn addition to the above API higher order density matrices in addition to hole densities can be calculated.", "rdm2 = lihwfn.expectationValue('i^ j k l^')\nif Print:\n print(rdm2)\nrdm2 = fqe.expectationValue(lihwfn, 'i^ j^ k l', lihwfn)\nif Print:\n print(rdm2)", "2.B.2 Hamiltonian expectations (or any expectation values)", "li_h_energy = lihwfn.expectationValue(lih_hamiltonian)\nif Print:\n print(li_h_energy)\nli_h_energy = fqe.expectationValue(lihwfn, lih_hamiltonian, lihwfn)\nif Print:\n print(li_h_energy)", "2.B.3 Symmetry operations", "op = fqe.get_s2_operator()\nprint(lihwfn.expectationValue(op))\nop = fqe.get_sz_operator()\nprint(lihwfn.expectationValue(op))\nop = fqe.get_time_reversal_operator()\nprint(lihwfn.expectationValue(op))\nop = fqe.get_number_operator()\nprint(lihwfn.expectationValue(op))\n" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
FederatedAI/FATE
doc/tutorial/pipeline/pipeline_tutorial_homo_nn.ipynb
apache-2.0
[ "Pipeline Tutorial\ninstall\nPipeline is distributed along with fate_client.\nbash\npip install fate_client\nTo use Pipeline, we need to first specify which FATE Flow Service to connect to. Once fate_client installed, one can find an cmd enterpoint name pipeline:", "!pipeline --help", "Assume we have a FATE Flow Service in 127.0.0.1:9380(defaults in standalone), then exec", "!pipeline init --ip 127.0.0.1 --port 9380", "homo nn\nThe pipeline package provides components to compose a FATE pipeline.", "from pipeline.backend.pipeline import PipeLine\nfrom pipeline.component import DataTransform\nfrom pipeline.component import Reader\nfrom pipeline.component import HomoNN\nfrom pipeline.interface import Data", "Make a pipeline instance:\n- initiator: \n * role: guest\n * party: 9999\n- roles:\n * guest: 9999\n * host: [10000, 9999]\n * arbiter: 9999", "pipeline = PipeLine() \\\n .set_initiator(role='guest', party_id=9999) \\\n .set_roles(guest=9999, host=[10000], arbiter=10000)", "Define a Reader to load data", "reader_0 = Reader(name=\"reader_0\")\n# set guest parameter\nreader_0.get_party_instance(role='guest', party_id=9999).component_param(\n table={\"name\": \"breast_homo_guest\", \"namespace\": \"experiment\"})\n# set host parameter\nreader_0.get_party_instance(role='host', party_id=10000).component_param(\n table={\"name\": \"breast_homo_host\", \"namespace\": \"experiment\"})", "Add a DataTransform component to parse raw data into Data Instance", "data_transform_0 = DataTransform(name=\"data_transform_0\", with_label=True)\n# set guest parameter\ndata_transform_0.get_party_instance(role='guest', party_id=9999).component_param(\n with_label=True)\ndata_transform_0.get_party_instance(role='host', party_id=[10000]).component_param(\n with_label=True)", "Now, we define the HomoNN component.", "homo_nn_0 = HomoNN(\n name=\"homo_nn_0\", \n max_iter=10, \n batch_size=-1, \n early_stop={\"early_stop\": \"diff\", \"eps\": 0.0001})", "Add single Dense layer:", "from tensorflow.keras.layers import Dense\nhomo_nn_0.add(\n Dense(units=1, input_shape=(10,), activation=\"sigmoid\"))", "Compile:", "from tensorflow.keras import optimizers\nhomo_nn_0.compile(\n optimizer=optimizers.Adam(learning_rate=0.05), \n metrics=[\"accuracy\", \"AUC\"],\n loss=\"binary_crossentropy\")", "Add components to pipeline:\n- data_transform_0 comsume reader_0's output data\n- homo_nn_0 comsume data_transform_0's output data", "pipeline.add_component(reader_0)\npipeline.add_component(data_transform_0, data=Data(data=reader_0.output.data))\npipeline.add_component(homo_nn_0, data=Data(train_data=data_transform_0.output.data))\npipeline.compile();", "Now, submit(fit) our pipeline:", "pipeline.fit()", "Success! Now we can get model summary from homo_nn_0:", "summary = pipeline.get_component(\"homo_nn_0\").get_summary()\nsummary", "And we can use the summary data to draw the loss curve:", "%pylab inline\npylab.plot(summary['loss_history'])", "For more examples about using pipeline to submit HomoNN jobs, please refer to HomoNN Examples" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
bukosabino/btctrading
XGBoost_next_row.ipynb
mit
[ "import numpy as np\nimport pandas as pd\nfrom sklearn.ensemble import *\nimport xgboost as xgb\nimport operator\n\nimport settings\nimport utils\nimport get_data\nfrom ta import *", "Get Data\nAPI: http://bitcoincharts.com/charts\nperiod = ['1-min', '5-min', '15-min', '30-min', 'Hourly', '2-hour', '6-hour', '12-hour', 'Daily', 'Weekly']\nmarket = ['krakenEUR', 'bitstampUSD'] -> list of markets: https://bitcoincharts.com/charts/volumepie/", "# get_data.get('data/datas.csv', period=settings.PERIOD, market=settings.MARKET)", "Load Data", "df = pd.read_csv('data/datas.csv', sep=',')\n\n# add next row\nlast_timestamp = df['Timestamp'].iloc[-1]\nif settings.PERIOD == 'Hourly':\n next_timestamp = last_timestamp + 3600\ndf_next = pd.DataFrame([next_timestamp], columns=['Timestamp'])\ndf = df.append(df_next, ignore_index=True)\ndf.iloc[-1] = df.iloc[-1].fillna(1)\n\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))", "Preprocessing", "df = utils.dropna(df)\n\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))", "Transformation\nCreate column target with class [UP, KEEP, DOWN]", "df['Target'] = 0 # 'KEEP'\ndf.loc[df.Open + (df.Open * settings.PERCENT_UP) < df.Close, 'Target'] = 1 # 'UP'\ndf.loc[df.Open - (df.Open * settings.PERCENT_DOWN) > df.Close, 'Target'] = 2 # 'DOWN'\n\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))\nprint('Number of UP rows: {}, Number of DOWN rows: {}'.format(len(df[df.Target == 1]), len(df[df.Target == 2])))", "Create columns from Timestamp to Date, Year, Month, Hour, etc.\nFeature Engineering", "df['Date'] = df['Timestamp'].apply(utils.timestamptodate)\ndf['Date'] = pd.to_datetime(df['Date'])\n\ndf['Year'] = df['Date'].dt.year\ndf['Month'] = df['Date'].dt.month\ndf['Week'] = df['Date'].dt.weekofyear\ndf['Weekday'] = df['Date'].dt.weekday\ndf['Day'] = df['Date'].dt.day\ndf['Hour'] = df['Date'].dt.hour\n\n# extra dates\n# df[\"yearmonth\"] = df[\"Date\"].dt.year*100 + df[\"Date\"].dt.month\n# df[\"yearweek\"] = df[\"Date\"].dt.year*100 + df[\"Date\"].dt.weekofyear\n# df[\"yearweekday\"] = df[\"Date\"].dt.year*10 + df[\"Date\"].dt.weekday\n\n# shift\ncols = ['Open', 'High', 'Low', 'Close', 'Volume_BTC', 'Volume_Currency', 'Weighted_Price']\nfor col in cols:\n df[col] = df[col].shift(1)\ndf = df.dropna()\n\ndf['High-low'] = df['High'] - df['Low']\ndf['Close-open'] = df['Close'] - df['Open']\n\ndf['Up_or_Down'] = 0 # 'UP' or 'DOWN' if diff > settings.PERCENT_UP\ndf.loc[( df.Open + (df.Open * settings.PERCENT_UP) ) < df.Close, 'Up_or_Down'] = 1 # 'UP'\ndf.loc[( df.Open - (df.Open * settings.PERCENT_DOWN) ) > df.Close, 'Up_or_Down'] = 2 # 'DOWN'\n\ndf['Up_or_Down_2'] = 0 # 'UP' or 'DOWN' if diff > settings.PERCENT_UP * 2\ndf.loc[df.Open + (df.Open * settings.PERCENT_UP * 2 ) < df.Close, 'Up_or_Down_2'] = 1 # 'UP'\ndf.loc[df.Open - (df.Open * settings.PERCENT_DOWN * 2) > df.Close, 'Up_or_Down_2'] = 2 # 'DOWN'\n\ndf['Up_or_Down_3'] = 0 # 'UP' or 'DOWN' if diff > 0\ndf.loc[df.Open < df.Close, 'Up_or_Down_3'] = 1 # 'UP'\ndf.loc[df.Open > df.Close, 'Up_or_Down_3'] = 2 # 'DOWN'\n\ndf['Up_or_Down_4'] = 0 # 'UP' or 'DOWN' if diff > settings.PERCENT_UP / 2\ndf.loc[df.Open + (df.Open * settings.PERCENT_UP / 2 ) < df.Close, 'Up_or_Down_4'] = 1 # 'UP'\ndf.loc[df.Open - (df.Open * settings.PERCENT_DOWN / 2) > df.Close, 'Up_or_Down_4'] = 2 # 'DOWN'\n\n# Fundamental analysis\n\n# daily return\ndf['Daily_return'] = (df['Close'] / df['Close'].shift(1)) - 1\ndf['Daily_return_100'] = ((df['Close'] / df['Close'].shift(1)) - 1) * 100\n\n# 
cumulative return\ndf = df.dropna()\ndf['Cumulative_return'] = (df['Close'] / df['Close'].iloc[0]) - 1\ndf['Cumulative_return_100'] = ((df['Close'] / df['Close'].iloc[0]) - 1) * 100\n\n# TODO: cumulative return week, month, year...\n\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))", "Technical Analysis\nhttps://en.wikipedia.org/wiki/Technical_analysis\nVolume-based indicators", "# Accumulation/Distribution index\ndf['Acc_Dist_Roc_BTC'] = acc_dist_roc(df, 'Volume_BTC', 2)\ndf['Acc_Dist_Roc_Currency'] = acc_dist_roc(df, 'Volume_Currency', 2)\ndf['Acc_Dist_BTC'] = acc_dist_index(df, 'Volume_BTC')\ndf['Acc_Dist_Currency'] = acc_dist_index(df, 'Volume_Currency')\n\n# Chaikin Money Flow\ndf['Chaikin_Money_Flow_1_BTC'] = chaikin_money_flow1(df, 'Volume_BTC')\ndf['Chaikin_Money_Flow_2_BTC'] = chaikin_money_flow2(df, 'Volume_BTC', 20)\ndf['Chaikin_Money_Flow_3_BTC'] = chaikin_money_flow3(df, 'Volume_BTC', 20)\ndf['Chaikin_Money_Flow_1_Currency'] = chaikin_money_flow1(df, 'Volume_Currency')\ndf['Chaikin_Money_Flow_2_Currency'] = chaikin_money_flow2(df, 'Volume_Currency', 20)\ndf['Chaikin_Money_Flow_3_Currency'] = chaikin_money_flow3(df, 'Volume_Currency', 20)\n\n# Money Flow Index\ndf['Money_Flow_BTC'] = money_flow_index(df, 'Volume_BTC', 14)\ndf['Money_Flow_Currency'] = money_flow_index(df, 'Volume_Currency', 14)\n\n# On-balance volume\ndf['OBV_BTC'] = on_balance_volume(df, 'Volume_BTC')\ndf['OBV_BTC_mean'] = on_balance_volume_mean(df, 'Volume_BTC')\ndf['OBV_Currency'] = on_balance_volume(df, 'Volume_Currency')\ndf['OBV_Currency_mean'] = on_balance_volume_mean(df, 'Volume_Currency')\n\n# Force Index\ndf['Force_Index_BTC'] = force(df, 'Volume_BTC', 2)\ndf['Force_Index_Currency'] = force(df, 'Volume_Currency', 2)\n\n# delete intermediate columns\ndf.drop('OBV', axis=1, inplace=True)", "Trend indicators", "# Moving Average Convergence Divergence\ndf[['MACD', 'MACD_sign', 'MACD_diff']] = macd(df, 12, 26, 9)\n\n# Average directional movement index\ndf[['ADX', 'ADX_pos', 'ADX_neg']] = adx(df, 14)\n\n# Vortex indicator\ndf[['Vortex_pos', 'Vortex_neg']] = vortex(df, 14)", "Momentum Indicators", "df['RSI'] = rsi(df, 14)\n\n\"\"\"\nfor c in df.columns:\n print str(c) + u' - ' + str(df[c].isnull().sum())\n\"\"\"", "Price-based indicators", "# Momentum\nfor idx in range(9):\n m = idx+2\n df['Momentum_'+str(m)] = ((df['Close'] / df['Close'].shift(m)) - 1)\n\n# Rollings\nfor idx in range(9):\n m = idx+2\n df['Rolling_mean_'+str(m)] = (df.set_index('Date')['Close'].rolling(window=m).mean()).values\n df['Rolling_std_'+str(m)] = (df.set_index('Date')['Close'].rolling(window=m).std()).values\n df['Rolling_cov_'+str(m)] = (df.set_index('Date')['Close'].rolling(window=m).cov()).values\n\n# Bollinger bands\nfor idx in range(9):\n m = idx+2\n df['Bollinger_band_mean_'+str(m)+'_max'] = df['Rolling_mean_'+str(m)] + (2*df['Rolling_std_'+str(m)])\n df['Bollinger_band_mean_'+str(m)+'_min'] = df['Rolling_mean_'+str(m)] - (2*df['Rolling_std_'+str(m)])\n\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))\ndf = df.dropna()\nprint('Number of rows: {}, Number of columns: {}'.format(*df.shape))", "Split", "train, test = utils.split_df(df)\n\nexcl = ['Target', 'Date', 'Timestamp']\ncols = [c for c in df.columns if c not in excl]", "xgboost", "y_train = train['Target']\ny_mean = np.mean(y_train)\nxgb_params = {\n 'n_trees': 800,\n 'eta': 0.0045,\n 'max_depth': 20,\n 'subsample': 0.95,\n 'colsample_bytree': 0.95,\n 'colsample_bylevel': 0.95,\n 'objective': 'multi:softmax',\n 'num_class' : 3,\n 
'eval_metric': 'mlogloss', # 'merror', # 'rmse',\n 'base_score': 0,\n 'silent': 1\n}\n\ndtrain = xgb.DMatrix(train[cols], y_train)\ndtest = xgb.DMatrix(test[cols])\n\ncv_result = xgb.cv(xgb_params, dtrain)\n\n# xgboost, cross-validation\ncv_result = xgb.cv(xgb_params,\n dtrain,\n num_boost_round=5000,\n early_stopping_rounds=50,\n verbose_eval=50,\n show_stdv=False\n )\nnum_boost_rounds = len(cv_result)\n\n# num_boost_rounds = 1000\n\nprint(num_boost_rounds)\n\n# train\nmodel = xgb.train(xgb_params, dtrain, num_boost_round=num_boost_rounds)\n\n# predict\ny_pred = model.predict(dtest)\ny_true = test['Target']\n\nprediction_value = y_true.tolist()[0]\n\nif prediction_value == 1.0:\n print(\"Prediction: UP\")\nelif prediction_value == 2.0:\n print(\"Prediction: DOWN\")\nelse: # 0.0\n print(\"Prediction: KEEP\")\n\nprint \"\\n \\n \\n \\n \\n \\n ********** WEIGHT ************\"\nimportance = model.get_fscore()\nimportance = sorted(importance.items(), key=operator.itemgetter(1))\nfor i in importance:\n print i\n \nprint \"\\n \\n \\n \\n \\n \\n ********** GAIN ************\"\nimportance = model.get_score(fmap='', importance_type='gain')\nimportance = sorted(importance.items(), key=operator.itemgetter(1))\nfor i in importance:\n print i" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
RoozbehFarhoodi/McNeuron
Features of different class of neurons.ipynb
mit
[ "In this script features of different calss of the neuron are shown. The features are in the form of histogram, density or scalar values.", "import numpy as np\nimport McNeuron\nimport matplotlib.pyplot as plt\n%matplotlib inline", "Class1: Interneuron\nAn indivisual neuron", "#loc1 = \"/Volumes/Arch/Projects/Computational Anatomy/neuron_nmo/poorthuis/CNG version/060110-LII-III.CNG.swc\"\nloc1 = \"../Generative-Models-of-Neuron-Morphology/Data/Pyramidal/poorthuis/CNG version/060110-LV.CNG.swc\"\nloc2 = \"../Generative-Models-of-Neuron-Morphology/Data/Interneuron/allen cell types/CNG version/Pvalb-IRES-Cre-Ai14-475465561.CNG.swc\"\npyramidal = McNeuron.Neuron(file_format = 'swc', input_file=loc1)\ninter = McNeuron.Neuron(file_format = 'swc', input_file=loc2)\na = pyramidal.subsample(20.)\nMcNeuron.visualize.plot_2D(a,show_radius=True)\nprint len(a.nodes_list)\na.show_features(15,17,30)\n\nbtmorph3.visualize.plot_2D(inter,show_radius=False)\n\nlen(inter.nodes_list)", "Morphology of the neurons\nThe first one is pyramidal neuron and second is interneuron", "ax1 = McNeuron.visualize.plot_2D(pyramidal, show_radius=False)\nax2 = McNeuron.visualize.plot_2D(inter, show_radius=False)", "Feature of interneuron", "inter.show_features(15,17,30)", "Feature of Pyramidal", "pyramidal.show_features(15,17,50)", "Sholl Diagram\nFor given real number of $r$, we can calculate how many times a sphere with the radius $r$ with the center of the soma intersects with the neuron. Sholl diagram shows this number for differnt values of $r$.\nFor the pyramidal neuron it usually has two bumps, which represents the basal and apical dendrites, versus interneuron which usually has one.", "f,(ax1, ax2) = plt.subplots(1, 2)\nax1.plot(pyramidal.sholl_r,pyramidal.sholl_n,'g')\nax2.plot(inter.sholl_r,inter.sholl_n,'m')\n\ninter.features", "histogram of diameters\nThe histogram of the diameters of all compartments in the neuron", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.diameter1\nb = pyramidal.distance_from_root\nc = ax1.hist(a[b>20],bins = 30,color = 'g')\nax1.set_xlabel('diameter (um3)')\nax1.set_ylabel('density')\n#ax1.set_title('Histogram of the size of compartments of neuron')\n\na = inter.diameter\nb = inter.distance_from_root\nc = ax2.hist(a[b>20],bins = 15,color = 'm')\nax2.set_xlabel('diameter (um3)')\n#ax2.set_ylabel('density')\n#ax2.set_title('Histogram of the size of compartments of neuron')\n\na = inter.diameter\nb = inter.distance_from_root\nc = plt.hist(a[b>20],bins = 15,color = 'm')\nplt.xlabel('diameter (um3)')", "Histogram of Slope of each segments\nBy looking at the conneceted compartments, we can calculate the slope of the segment by dividing the diffrernce of radius and difference of the location. 
For many of them the slope is zero and we ignore them.", "f, (ax1, ax2) = plt.subplots(1, 2)\ne = pyramidal.slope\nx = ax1.hist(e[e!=0],bins=40,color = 'g')\nax1.set_xlabel('Value of Slope')\nax1.set_ylabel('density')\n\ne = inter.slope\nx = ax2.hist(e[e!=0],bins=40,color = 'm')\nax2.set_xlabel('Value of Slope')\n#ax2.set_ylabel('density')", "Histogram of distance from soma\nFor each compartment of the neuron, the distance from the soma is calculated, and the histogram of these distances is shown.", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.distance_from_root\nb = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')\nax1.set_xlabel('distance (um)')\nax1.set_ylabel('density')\n#plt.title('Histogram of distance from soma for different compartments of neuron')\n\na = inter.distance_from_root\nb = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nax2.set_xlabel('distance (um)')\n#ax2.set_ylabel('density')\n#plt.title('Histogram of distance from soma for different compartments of neuron')", "a = inter.distance_from_root\nb = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nplt.xlabel('distance (um)')", "Local Angle\nLocal angles are the angles between the vector from the starting point of a compartment to the end point of its child, and the vector that connects it to its parent.", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.local_angle\nb = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')\nax1.set_xlabel('angle (radian)')\nax1.set_ylabel('density')\n#plt.title('Histogram of local angles')\na = inter.local_angle\nb = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nax2.set_xlabel('angle (radian)')\n#ax2.set_ylabel('density')", "a = inter.local_angle\nb = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nplt.xlabel('angle (radian)')", "Global Angle\nGlobal angles are the angles between the vector from the starting point of a compartment to the end point of its child, and the vector that connects it to the soma.", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.angle_global\nb = ax1.hist(a[~np.isnan(a)],bins = 50,color = 'g')\nax1.set_xlabel('angle (radian)')\nax1.set_ylabel('density')\n#plt.title('Histogram of global angles')\n\na = inter.angle_global\nb = ax2.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nax2.set_xlabel('angle (radian)')", "a = inter.angle_global\nb = plt.hist(a[~np.isnan(a)],bins = 50,color = 'm')\nplt.xlabel('angle (radian)')", "Angle at the branching point\nAt each branching point in the neuron, we can calculate the angle between the two outward segments.
Here we plot the histogram of these angles for the different branching points.", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.angle_branch[0,:]\nb = ax1.hist(a[~np.isnan(a)],bins = 20,color = 'g')\na = inter.angle_branch[0,:]\nb = ax2.hist(a[~np.isnan(a)],bins = 20,color = 'm')\n\na = inter.angle_branch[0,:]\nb = plt.hist(a[~np.isnan(a)],bins = 10,color = 'm')", "Rall Ratio\nThe Rall ratio at a branching point is defined as the diameter^2/3 of the parent divided by the sum of the diameter^2/3 of its children.", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.rall_ratio\nb = ax1.hist(a[~np.isnan(a)],bins = 20,color = 'g')\n\na = inter.rall_ratio\nb = ax2.hist(a[~np.isnan(a)],bins = 20,color = 'm')", "Slope", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.slope\nb = ax1.hist(a[~np.isnan(a)],bins = 40,color = 'g')\n\na = inter.slope\nb = ax2.hist(a[~np.isnan(a)],bins = 40,color = 'm')\n\na = inter.slope\nb = plt.hist(a[~np.isnan(a)],bins = 40,color = 'm')", "Distance from parent", "f, (ax1, ax2) = plt.subplots(1, 2)\na = pyramidal.length_to_parent\nb = ax1.hist(a[~np.isnan(a)],bins = 40,color = 'g')\n\na = inter.length_to_parent\nb = ax2.hist(a[~np.isnan(a)],bins = 40,color = 'm')\n\na = inter.length_to_parent\nb = a[~np.isnan(a)]\nc = plt.hist(b[np.absolute(b)<4],bins = 70,color = 'm')\n\nnp.absolute(b)<3", "Ratio of neuronal distance over Euclidean distance from root", "f, (ax1, ax2) = plt.subplots(1, 2)\nax1.hist(inter.overall_connectivity_matrix.sum(axis = 1)/inter.distance_from_root,bins = 40,color = 'g')\nax2.hist(pyramidal.overall_connectivity_matrix.sum(axis = 1)/pyramidal.distance_from_root,bins = 40,color = 'g')\n\nplt.hist(inter.features['ratio_euclidian_neuronal'],bins = 40,color = 'g')", "Connectivity matrix", "import matplotlib.pyplot as plt\n%matplotlib inline\nplt.imshow(inter.connection)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
KIPAC/StatisticalMethods
tutorials/demo.ipynb
gpl-2.0
[ "Tutorial: Demo\nThis notebook demonstrates the logistics of completing one of these assignments.\nThe first things to say is that you cannot complete one of these tutorials if you are viewing it as HTML in a web browser! You will need to be able to make changes and run the notebook in Jupyter, either on your own computer or some remote system (see Assignments and Getting Started).\nCode stuff\nThe first executable block of one of these notebooks is always:", "exec(open('tbc.py').read()) # define TBC and TBC_above", "The file tbc.py is located in the same directory as the tutorial notebooks, so you will need to either modify the cell or copy/link the file if you are working elsewhere for some reason. It provides the TBC (to be completed) functions described below, which you will find everywhere you are expected to provide a missing solution. (There is a reason we use this inelegant construction rather than import, but not an interesting one.)\nA notebook typically has a great deal of narrative text, and a great deal of functional code provided. This is because we are not concerned with having you learn obscure python shortcuts or the minutia of making plots with matplotlib.pyplot, for example. Code of that nature will usually be given to you, and you can learn from it if it's useful (or not, if you know better ways of doing something than we do). However, when it comes to implementing the statistical methods we study in this course, you will find lines and blocks of code that need to be completed.\nHere's a simple example of such a cell:", "# For no good reason, let's define x to be 3.14159.\nTBC() # x = ...", "Try to run the cell as it stands, and you'll see an Exception reminding you that there is a TBC line that needs to be completed. Between the textual and psuedo-code comments, it should be clear that you're expected to assign the variable x a numeric value. So, you might edit the code block as follows:", "# For no good reason, let's define x to be 3.14159.\nx = 3.14159", "Feel free to delete/replace the TBC line in the original cell. We only added a second cell here to show the before and after.\nYou'll also see to-be-completed function and class definitions, for example:", "# Define f to be a very stupid function.\ndef f(a, b):\n \"\"\"\n Add a and b.\n Arguments\n a: something to add to b\n b: something to add to a\n Return value: a+b\n \"\"\"\n TBC()\n # In real life, there might be some suggested code or pseudo-code here.\n\nTBC_above()", "Notice that there is a TBC in the function where you are expected to provide a solution, and a TBC_above call at the end of the cell. The latter is just to ensure that you don't accidentally run through the cell without realizing that there's work to be done, since the TBC within the function definition isn't triggered until the function is called. If you comment the TBC_above and run the cell, you'll see what happens when trying to use f below; this might be a useful trick if you want to put off completing the definition of f until absolutely necessary for some reason.", "f(5,-3)", "Naturally, a completed cell would look something like:", "# Define f to be a very stupid function.\ndef f(a, b):\n \"\"\"\n Add a and b.\n Arguments\n a: something to add to b\n b: something to add to a\n Return value: a+b\n \"\"\"\n return a+b", "Finally, each notebook will typically include at least one point where your work can be instantly compared to a known solution. 
Often, this involves reading in previously generated results of some kind from the solutions directory. In the case of the particularly simple tasks above, the correct answers might be directly included in the notebook, as below:", "# Check value of x - should evaluate to zero\nx - 3.14159\n\n# Check f - third entry on each line should be zero\nprint(1, 1, f(1,1)-2)\nprint(9, 4, f(9,4)-13)\nprint(5, -2, f(5,-2)-3)", "Non-code stuff\nYou will also be asked to do things other than writing code. Text responses can be filled in using markdown-mode cells like this one. When executed, simple text in these cells will be displayed nicely as... simple text. You probably will never need to use markdown formatting, but there is a guide here if you want.\nYou will need to write some equations, however. These can be included in markdown cells using LaTeX syntax, between $ characters. Here is an example (double-click this cell in Jupyter to see the underlying code):\n$f(a,b) = a+b = e^{\\ln(a+b)} = \\frac{1}{e^{-\\ln(a+b)}}$\nIf you're not yet a LaTeX magician, don't worry; you'll probably see everything you need for the moment as we go. When in doubt, the appendices of this guide list many helpful math commands.\nFinally, you will be asked make simple sketches of what are called Probabilistic Graphical Models (the free body diagrams of statistics!) and possibly some other cartoons of a physical model. You can produce an image file however you like (I usually resort to exporting from Google Drawings or Slides when the result doesn't need to be pretty). They can then be displayed in a markdown cell using embedded HTML as below:\n<table><tr width=90%>\n<td><img src=\"graphics/demo.png\"></td>\n</tr></table>" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
smeingast/PNICER
notebooks/pnicer.ipynb
gpl-3.0
[ "<h1 align=\"center\">PNICER demonstration notebook</h1>\n\nPreparations\nThe main dependencies of PNICER are astropy, numpy, scipy, matplotlib, and scikit-learn. Here we only import the necessary packages to run this notebook.", "import sys\n\nfrom pnicer import ApparentMagnitudes\nfrom pnicer.utils.auxiliary import get_resource_path\n\n%matplotlib inline", "The easiest way to initialize PNICER is to simply point it to catalog files in FITS format. The package includes a set of test resources which can be loaded with the get_resource_path method that was imported above. PNICER uses a set of measured intrinsic magnitudes (or colors) in an extinction-free control field to de-redden an extincted science field. In our test case we use the Orion A molecular cloud as science field and another field a few degrees away for our control field.", "# ----------------------------------------------------------------------\n# Find the test files included in the package\nscience_path = get_resource_path(package=\"pnicer.tests_resources\", resource=\"Orion_A_2mass.fits\")\ncontrol_path = get_resource_path(package=\"pnicer.tests_resources\", resource=\"CF_2mass.fits\")", "Of course, you can also specify any valid path in your file system that points to a FITS catalog file. Something like the following will do the job for you.\nscience_path = \"/path/to/science.fits\"\ncontrol_path = \"/path/to/control.fits\"\nPNICER data setup\nNow we specify the parameters for PNICER. We need to know the column names in the FITS file for both the magnitudes and the errors. In our case this is J/H/Ks from 2MASS. In addition, we also need to define the extinction law for these bands.", "# ----------------------------------------------------------------------\n# Define feature names and extinction vector\nmag_names = [\"Jmag\", \"Hmag\", \"Kmag\"]\nerr_names = [\"e_Jmag\", \"e_Hmag\", \"e_Kmag\"]\nextvec = [2.5, 1.55, 1.0] # Indebetouw et al. 2015", "PNICER Initialization\nData can be initialized as a ApparentMagnitudes instance. Alternativley also a ApparentColors instance can be used if the data are already photometric colors. While for ApparentMagnitudes, the parameter space will be spanned by magnitudes, Colors calculates the probability density in color space.\nNow we have everything we need to initialize our data. We do this by loading everyting directly from the FITS files and the 'from_fits' method:", "science = ApparentMagnitudes.from_fits(path=science_path, extvec=extvec,\n mag_names=mag_names, err_names=err_names,\n lon_name=\"GLON\", lat_name=\"GLAT\",\n frame=\"galactic\", coo_unit=\"deg\")\n\ncontrol = ApparentMagnitudes.from_fits(path=control_path, extvec=extvec,\n mag_names=mag_names, err_names=err_names, \n lon_name=\"GLON\", lat_name=\"GLAT\",\n frame=\"galactic\", coo_unit=\"deg\")", "Now we have two ApparentMagnitudes instances for our science and control field. We can also create a ApparentColors instance directly from a ApparentMagnitudes instance which calculates consecutive colors for the input data:", "science_color = science.mag2color()\ncontrol_color = control.mag2color()", "Plotting methods\nBefore diving into extinction, let us take a look at the data. PNICER includes a set of plotting methods, which help to quickly visualize your data. 
One, for instance, may first like to look at the density distribution for the feature combinations:", "science.plot_combinations_kde()", "We can do the same plot in color space with the instances created before:", "science.mag2color().plot_combinations_kde()", "Also, it is very useful to look at the spatial distribution of sources. With the following plotting method we can display a kernel density map of all input features.", "science.plot_sources_kde(bandwidth=15/60)", "Estimating extinction\nNow that we have had a look at our data, it is time to calculate the extinction. The software package offers both the new PNICER method and a NICER implementation. Running them is very simple:", "ext_pnicer = science_color.pnicer(control=control_color)", "Or for NICER:", "ext_nicer = science.nicer(control=control)", "We must note a few things here:\n\nNICER can only be run with an ApparentMagnitudes instance, since the individual photometric errors are used.\nPNICER can run on both ApparentMagnitudes and ApparentColors instances as long as you know the extinction vector for the components.\nIn our case, the extinction vector was normalized to 1 in K band. Thus, the results are directly given in A_K.\n\nFurthermore, PNICER returns a probabilistic description of the extinction for each source and for the moment needs to be discretized to get usable values.", "ext_pnicer_discrete = ext_pnicer.discretize()\next_pnicer_discrete.extinction", "NICER returns discrete extinction by design", "ext_nicer.extinction", "Both calculations return an Extinction instance from which further options are available. We can save the data in a FITS table if we want:", "ext_pnicer_discrete.save_fits(path=\"/tmp/temp.fits\")", "Extinction map\nThe Extinction instance we got above can now also be used to calculate an extinction map. For this, various options are available:", "pnicer_emap = ext_pnicer_discrete.build_map(bandwidth=5 / 60, metric=\"gaussian\", sampling=2, use_fwhm=True)", "The method above returns an ExtinctionMap instance, which again can be used for various subsequent tasks. We can save the data as a FITS image:", "pnicer_emap.save_fits(path=\"/tmp/temp.fits\")", "Note that this already includes an automatically calculated astrometric projection. We can also display the results with a convenient plotting method:", "pnicer_emap.plot_map(figsize=10)", "The red background here refers to pixels where not enough sources are available. These are the basics of PNICER/NICER. Happy hunting for extinction! :)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MaxPowerWasTaken/MaxPowerWasTaken.github.io
jupyter_notebooks/Multiprocessing with Pandas.ipynb
gpl-3.0
[ "Processing Multiple Pandas Series in Parallel\nIntroduction\nPython's Pandas library for data processing is great for all sorts of data-processing tasks. However, one thing it doesn't support out of the box is parallel processing across multiple cores. \nI've been wanting a simple way to process Pandas DataFrames in parallel, and recently I found this truly awesome blog post.. It shows how to apply an arbitrary Python function to each object in a sequence, in parallel, using Pool.map from the Multiprocessing library. \nThe author's example involves running urllib2.urlopen() across a list of urls, to scrape html from several web sites in parallel. But the principle applies equally to mapping a function across several columns in a Pandas DataFrame. Here's an example of how useful that can be.\nA simple multiprocessing wrapper\nHere's some code which will accept a Pandas DataFrame and a function, apply the function to each column in the DataFrame, and return the results (as a new dataframe). It also allows the caller to specify the number of processes to run in parallel, but uses a sensible default when not provided.", "from multiprocessing import Pool, cpu_count\n\ndef process_Pandas_data(func, df, num_processes=None):\n ''' Apply a function separately to each column in a dataframe, in parallel.'''\n \n # If num_processes is not specified, default to minimum(#columns, #machine-cores)\n if num_processes==None:\n num_processes = min(df.shape[1], cpu_count())\n \n # 'with' context manager takes care of pool.close() and pool.join() for us\n with Pool(num_processes) as pool:\n \n # we need a sequence to pass pool.map; this line creates a generator (lazy iterator) of columns\n seq = [df[col_name] for col_name in df.columns]\n \n # pool.map returns results as a list\n results_list = pool.map(func, seq)\n \n # return list of processed columns, concatenated together as a new dataframe\n return pd.concat(results_list, axis=1)", "Hopefully the code above looks pretty straightforward, but if it looks a bit confusing at first glance, ultimately the key is these two lines:", "#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n# UNCOMMENT IN MARKDOWN BEFORE PUSHING LIVE\n# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!\n\n# (commented out so can run notebook in one click.)\n#with Pool(num_processes) as pool:\n# ...\n# results_list = pool.map(func, seq)", "the rest was just setting the default number of processes to run in parallel, getting a 'sequence of columns' from our input dataframe, and concatenating the list of results we get back from pool.map\nA function to measure parallel performance gains with\nTo measure the speed boost from wrapping a bit of Pandas processing in this multiprocessing wrapper, I'm going to load the Quora Duplicate Questions dataset, and the vectorized text-tokenizing function from my last blog post on using vectorized Pandas functions.", "import pandas as pd\n\ndf = pd.read_csv('datasets/quora_kaggle.csv')\ndf.head(3)\n\nimport re\nfrom nltk.corpus import stopwords\n\ndef tokenize_column(text_series):\n ''' Accept a series of strings, returns list of words (lowercased) without punctuation or stopwords'''\n\n # lowercase everything\n text_series = text_series.astype(str).str.lower()\n \n # remove punctuation (r'\\W' is regex, matches any non-alphanumeric character)\n text_series = text_series.str.replace(r'\\W', ' ')\n \n # return list of words, without stopwords\n sw = stopwords.words('english')\n \n return text_series.apply(lambda row: [word for word in row.split() if word not in 
sw])", "To see what this does \"tokenizing\" function does, here's a few unprocessed quora questions, followed by their outputs from the tokenizer", "print(df.question1.head(3), '\\n\\n', tokenize_column(df.question1.head(3)))", "Clocking Performance Gains of Using Multiprocessing, 2 Cores\nThe two functions below clock the time elapsed from tokenizing our two question columns in series or in parallel.\nDefining these tests as their own functions means we're not creating any new global-scope variables when we measure performance. All the intermediate results (like the new dataframes of processed questions) are garbage-collected after the function returns its results (an elapsed time). This is important to maintain an apples-to-apples performance comparison; otherwise, performance tests run later in the notebook would have less RAM available than the first test we run.", "from datetime import datetime\n\ndef clock_tokenize_in_series(df): \n '''Calc time to process in series'''\n \n # Initialize dataframe to hold processed questions, and start clock\n qs_processed = pd.DataFrame()\n start = datetime.now()\n\n # process question columns in series\n for col in df.columns:\n qs_processed[col] = tokenize_column(df[col])\n\n # return time elapsed\n return datetime.now() - start\n \n\ndef clock_tokenize_in_parallel(df): \n '''Calc time to process in parallel'''\n \n # Initialize dataframe to hold processed questions, and start clock\n qs_processed = pd.DataFrame()\n start = datetime.now()\n\n # process question columns in parallel\n qs_processed2 = process_Pandas_data(tokenize_column, df)\n\n # return time elapsed\n return datetime.now() - start ", "And now to measure our results:", "# Print Time Results\nno_parallel = clock_tokenize_in_series(df[['question1', 'question2']])\nparallel = clock_tokenize_in_parallel(df[['question1', 'question2']])\n\nprint('Time elapsed for processing 2 questions in series :', no_parallel)\nprint('Time elapsed for processing 2 questions in parallel :', parallel)", "So processing the two columns in parallel cut our processing time from 23.7 seconds down to 14.7 seconds, a decrease of 38%. The theoretical maximum reduction we might have expected with no multiprocessing overhead would of course been a 50% reduction, so this is not bad.\nComparing Performance with 4 Cores\nI have four cores on this laptop, and I'd like to see how the performance gains scale here from two to four cores. Below, I'll make copies of our q1 and q2 so we have four total text columns, then re-run the comparison by passing this new 4-column dataframe to the testing function defined above.", "# Column-bind two questions with copies of themselves for 4 text columns\nfour_qs = pd.concat([df[['question1','question2']], \n df[['question1','question2']]], axis=1) \n\nfour_qs.columns = ['q1', 'q2', 'q1copy', 'q2copy']\nfour_qs.head(2)\n\n# Print Results for running tokenizer on 4 questions in series, then in parallel\nno_parallel = clock_tokenize_in_series(four_qs)\nparallel = clock_tokenize_in_parallel(four_qs)\n\nprint('Time elapsed for processing 4 questions in series :', no_parallel)\nprint('Time elapsed for processing 4 questions in parallel :', parallel)", "Conclusion\n[edit this after nbconvert to markdown, based on final stats]\nmultiprocessing does have to pickle (serialize) each object in seq to send it to a Pool worker, deserialize it to work on it, then serialize/deserialize. 
So I'm guessing this doesn't scale very well for multiprocessing because the text data is relatively large and the processing itself happens relatively quickly, so it's a lot of serializing/deserializing cost to pay for relatively little gain. Maybe if our underlying calculation were more compute-intensive, it would scale better. I'll try and" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
tensorflow/docs-l10n
site/ja/tutorials/estimator/keras_model_to_estimator.ipynb
apache-2.0
[ "Copyright 2019 The TensorFlow Authors.", "#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "Keras モデルから Estimator を作成する\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/tutorials/estimator/keras_model_to_estimator\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\">TensorFlow.org で表示</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a></td>\n <td><a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\">GitHub でソースを表示</a></td>\n <td><a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/estimator/keras_model_to_estimator.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a></td>\n</table>\n\n\n警告: 新しいコードには Estimators は推奨されません。Estimators は v1.Session スタイルのコードを実行しますが、これは正しく記述するのはより難しく、特に TF 2 コードと組み合わせると予期しない動作をする可能性があります。Estimators は、互換性保証の対象となりますが、セキュリティの脆弱性以外の修正は行われません。詳細については、移行ガイドを参照してください。\n\n概要\nTensorFlow Estimator は、TensorFlow でサポートされており、新規または既存の tf.keras モデルから作成することができます。このチュートリアルには、このプロセスの完全な最小限の例が含まれます。\n注意: Keras モデルがある場合は、Estimator に変換せずに、直接 tf.distribute ストラテジーで使用することができます。したがって、model_to_estimator は推奨されなくなりました。\nセットアップ", "import tensorflow as tf\n\nimport numpy as np\nimport tensorflow_datasets as tfds", "単純な Keras モデルを作成する。\nKeras では、レイヤーを組み合わせてモデルを構築します。モデルは(通常)レイヤーのグラフです。最も一般的なモデルのタイプはレイヤーのスタックである tf.keras.Sequential モデルです。\n単純で完全に接続されたネットワーク(多層パーセプトロン)を構築するには、以下を実行します。", "model = tf.keras.models.Sequential([\n tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),\n tf.keras.layers.Dropout(0.2),\n tf.keras.layers.Dense(3)\n])", "モデルをコンパイルして要約を取得します。", "model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),\n optimizer='adam')\nmodel.summary()", "入力関数を作成する\nDatasets API を使用して、大規模なデータセットまたはマルチデバイストレーニングにスケーリングします。\nEstimator には、いつどのように入力パイプラインが構築されるのかを制御する必要があります。これを行えるようにするには、\"入力関数\" または input_fn が必要です。Estimator は引数なしでこの関数を呼び出します。input_fn は、tf.data.Dataset を返す必要があります。", "def input_fn():\n split = tfds.Split.TRAIN\n dataset = tfds.load('iris', split=split, as_supervised=True)\n dataset = dataset.map(lambda features, labels: ({'dense_input':features}, labels))\n dataset = dataset.batch(32).repeat()\n return dataset", "input_fn をテストします。", "for features_batch, labels_batch in input_fn().take(1):\n print(features_batch)\n print(labels_batch)", "tf.keras モデルから Estimator を作成する。\ntf.keras.Model は、tf.estimator API を使って、tf.keras.estimator.model_to_estimator を持つ tf.estimator.Estimator オブジェクトにモデルを変換することで、トレーニングすることができます。", "import tempfile\nmodel_dir = tempfile.mkdtemp()\nkeras_estimator = tf.keras.estimator.model_to_estimator(\n 
keras_model=model, model_dir=model_dir)", "Estimator をトレーニングして評価します。", "keras_estimator.train(input_fn=input_fn, steps=500)\neval_result = keras_estimator.evaluate(input_fn=input_fn, steps=10)\nprint('Eval result: {}'.format(eval_result))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mldbai/mldb
container_files/tutorials/Selecting Columns Programmatically Using Column Expressions Tutorial.ipynb
apache-2.0
[ "Selecting Columns Programmatically Using Column Expressions Tutorial\nMLDB provides a complete implementation of the SQL SELECT statement. Most of the functions you are accustomed to using are available in your queries. \nMLDB is different from traditional SQL databases in that there is no enforced schema on rows, allowing you to work with millions of columns of sparse data. This makes it easy to load and manipulate sparse datasets, even when there are millions of columns. To reduce the size of your dataset or use only specific variables, we may need to select columns based on specific critera. Column Expressions is an MLDB extension that provides additional control over your column selection. With a column expression, you can programmatically return specific columns with a SQL SELECT statement.\nIn this tutorial, we will provide examples of <code>COLUMN EXPR</code> within <code>SELECT</code> statements. This tutorial assumes familiarity with Procedures and Datasets. We suggest going through the Procedures and Functions Tutorial and the Loading Data Tutorial beforehand.\nSetting up\nThe notebook cells below use pymldb's Connection class to make REST API calls. You can check out the Using pymldb Tutorial for more details.", "from pymldb import Connection\nmldb = Connection()", "Basic usage example\nLet's begin by loading and visualizing our data. We will be using the dataset from the Virtual Manipulation of Datasets Tutorial. We had chosen the tokenize function to count the number of words in the Wikipedia descriptions of several Machine Learning concepts (please check out the tutorial for more details).", "print mldb.put(\"/v1/procedures/import_ML_concepts\", {\n \"type\":\"import.text\",\n \"params\": {\n \"dataFileUrl\":\"file://mldb/mldb_test_data/MachineLearningConcepts.csv\",\n \"outputDataset\":{\n \"id\":\"ml_concepts\",\n \"type\": \"sparse.mutable\"\n },\n \"named\": \"Concepts\",\n \"select\": \"\"\" \n tokenize(\n lower(Text), \n {splitChars: ' -''\"?!;:/[]*,().', \n minTokenLength: 4}) AS *\n \"\"\",\n \"runOnCreation\": True\n }\n})", "Each word is represented by a column and each Machine Learning concept by a row. We can run a simple SELECT query to take a quick look at the first 5 rows of our dataset.", "mldb.query(\"SELECT * FROM ml_concepts LIMIT 5\")", "There are 286 columns, some of which may or may not be useful to the data analysis at hand. For example, we may want to rebuild a dataset with:\n* verbs and adverbs that end with \"ing\"\n* words that appear at least twice in each of the descriptions of the Machine Learning concepts.\nThis can be done in a few queries as you will see below.\nUsing column expressions to keep columns that end with \"ing\"\nColumn Expressions provide efficient ways of picking and choosing our columns. For example, we can only choose verbs and adverbs that end with \"ing\" to understand the overall meaning of a description.\nWe use the columnName column expression function along with the LIKE SQL expression, as you will see below.", "mldb.query(\"\"\"\n SELECT COLUMN EXPR (WHERE columnName() LIKE '%ing')\n FROM ml_concepts\n LIMIT 5\n\"\"\")", "This is very powerful because the LIKE statement in Standard SQL is typically found in row operations and more rarely in column operations. MLDB makes it simple to use such SQL expressions on columns.\nUsing column expressions to keep columns that appear in multiple descriptions\nWith Column Expressions, we can select columns based on specific row selection criteria. 
<code/>COLUMN EXPR</code> will allow us for example to choose words that appear in multiple descriptions. In this case, we filter on words that show up at least 4 times.\nTo achieve the desired outcome, we use a Built-in Function available in column expressions called rowCount. rowCount iterates through each column and returns the number of rows that have a value for the specific column.", "mldb.query(\"\"\"\n SELECT COLUMN EXPR (WHERE rowCount() > 4)\n FROM ml_concepts\n\"\"\")", "The results make sense. The words that we found above in the columns are common in Machine Learning concept descriptions. With a plain SQL statement and the rowCount function, we reduced our dataset to include words that appear at least 4 times.\nNested JSON example\nNested JSON objects can have complex schemas, often involving multi-level and multidimensional data structures. In this section we will create a more complex dataset to illustrate ways to simplify data structures and column selection with Built-in Function and Column Expression.\nLet's first create an empty dataset called 'toy_example'.", "# create dataset\nprint mldb.put('/v1/datasets/toy_example', { \"type\":\"sparse.mutable\" })", "We will now create one row in the 'toy_example' dataset with the 'row1' JSON object below.", "import json\n\nrow1 = {\n \"name\": \"Bob\", \n \"address\": {\"city\": \"Montreal\", \"street\": \"Stanley\"}, \n \"sports\": [\"soccer\",\"hockey\"], \n \"friends\": [{\"name\": \"Mich\", \"age\": 25}, {\"name\": \"Jean\", \"age\": 28}]\n}\n\n# update dataset by adding a row\nmldb.post('/v1/datasets/toy_example/rows', {\n \"rowName\": \"row1\",\n \"columns\": [[\"data\", json.dumps(row1), 0]]\n})\n# save changes\nmldb.post(\"/v1/datasets/toy_example/commit\")", "We will check out our data with a SELECT query.", "mldb.query(\"SELECT * FROM toy_example\")", "There are many elements within the cell above. We will need to better structure elements within the nested JSON object.\nWorking with nested JSON objects with built-in functions and column expressions\nTo understand and query nested JSON objects, we will be using a Built-in Function called <code/>parse_json</code> and a Column Expression <code/>columnPathElement</code>.\nThis is where the parse_json function comes in handy. It will help us turn a multidimensional JSON object into a 2D dataset.", "mldb.query(\"\"\"\n SELECT parse_json(data, {arrays: 'parse'}) AS * \n FROM toy_example\n\"\"\")", "parse_json is a powerful feature since we can create 2D representations out of multidimensional data. We can read all of the elements of the JSON object on one line. It is also easier to SQL as we will see below.\ncolumnPathElement makes it convenient to navigate specific parts of the data structure. In the next block of code, we will do the following:\n* use parse_json to parse each data element of the object on one row (same as above)\n* select specific cells using columnPathElement where the the column path name at index = 2 is 'name' (note that 'friends' is at index = 0)", "mldb.query(\"\"\"\n SELECT COLUMN EXPR (WHERE columnPathElement(2) = 'name') \n FROM (\n SELECT parse_json(data, {arrays: 'parse'}) AS * NAMED rowPath() FROM toy_example\n )\n\"\"\")", "We now know the name of Bob's two friends... As you may have noticed, this is very practical if we want to query a specific attribute of a nested object. 
The columnPathElement Column Expression allows us to easily query specific JSON data levels or dimensions.\nColumn operations such as the ones shown in this tutorial can be difficult without column expressions. Column Expressions offer a compact and flexible way to programmatically select columns. It is a great tool to carve out the data that is most needed for your analysis.\nIn this tutorial, we covered three Column Expressions:\n* columnName which returns the name of the columns inside our dataset\n* rowCount which returns the number of non-empty rows for each column\n* columnPathElement which allows us to chose columns at specific sub-levels\nWhere to next?\nCheck out the other Tutorials and Demos." ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
MLWave/kepler-mapper
docs/notebooks/Confidence-Graphs.ipynb
mit
[ "Confidence Graphs: Representing Model Uncertainty in Deep Learning\nHendrik Jacob van Veen <br>\n&#104;&#101;&#110;&#100;&#114;&#105;&#107;&#46;&#118;&#97;&#110;&#118;&#101;&#101;&#110;&#64;&#110;&#117;&#98;&#97;&#110;&#107;&#46;&#99;&#111;&#109;&#46;&#98;&#114; &bull; https://mlwave.com\nMatheus Facure<br>\n&#109;&#97;&#116;&#104;&#101;&#117;&#115;&#46;&#102;&#97;&#99;&#117;&#114;&#101;&#64;&#110;&#117;&#98;&#97;&#110;&#107;&#46;&#99;&#111;&#109;&#46;&#98;&#114; &bull; https://matheusfacure.github.io/\nIntroduction\nVariational inference (MacKay, 2003) gives a computationally tractible measure of uncertainty/confidence/variance for machine learning models, including complex black-box models, like those used in the fields of gradient boosting (Chen et al, 2016) and deep learning (Schmidhuber, 2014).\nThe $MAPPER$ algorithm (Singh et al, 2007) [.pdf] from Topological Data Analysis (Carlsson, 2009) turns any data or function output into a graph (or simplicial complex) which is used for data exploration (Lum et al, 2009), error analysis (Carlsson et al, 2018), serving as input for higher-level machine learning algorithms (Hofer et al, 2017), and more.\nDropout (Srivastava et al, 2014) can be viewed as an ensemble of many different sub-networks inside a single neural network, which, much like bootstrap aggregation of decision trees (Breiman, 1996), aims to combat overfit. Viewed as such, dropout is applicable as a Bayesian approximation (Rubin, 1984) in the variational inference framework (Gal, 2016) (.pdf)\nInterpretability is useful for detecting bias in and debugging errors of machine learning models. Many methods exist, such as tree paths (Saabas, 2014), saliency maps, permutation feature importance (Altmann et al, 2010), locally-fit white box models (van Veen, 2015) (Ribeiro et al, 2016). More recent efforts aim to combine a variety of methods (Korobov et al, 2016) (Olah et al, 2018). \nMotivation\nError analysis surfaces different subsets/types of the data where a model makes fundamental errors. When building policies and making financial decisions based on the output of a model it is not only useful to study the errors of a model, but also the confidence:\n- Correct, but low-confidence, predictions for a cluster of data tells us where to focus our active learning (Dasgupta et al, 2009) - and data collection efforts, so as to make the model more certain. \n- Incorrect, but high-confidence predictions, surface fundamental error types that can more readily be fixed by a correction layer (Schapire, 1999) [.pdf], or redoing feature engineering (Guyon et al, 2006).\n- Every profit-maximizing model has a prediction threshold where a decision is made (Hardt et al, 2016). However, given two equal predictions, the more confident predictions are preferred.\n- Interpretability methods have focussed either on explaining the model in general, or explaining a single sample. To our knowledge, not much focus has gone in a holistic view of modeled data, including explanations for subsets of similar samples (for whatever pragmatic definition of \"similar\", like \"similar age\", \"similar spend\", \"similar transaction behavior\"). 
The combination of interpretability and unsupervised exploratory analysis is attractive, because it catches unexpected behavior early on, as opposed to acting on faulty model output, and digging down to find a cause.\nExperimental setup\nWe will use the MNIST dataset (LeCun et al, 1999), Keras (Chollet et al, 2015) with TensorFlow (Abadi et al, 2016), NumPy (van der Walt et al., 2011), Pandas (McKinney, 2010), Scikit-Learn (Pedregosa et al, 2011), Matplotlib (Hunter, 2007), and KeplerMapper (Saul et al, 2017).\n\n\nTo classify between the digits 3 and 5, we will train a Multi-Layer Perceptron (Ivakhnenko et al, 1965) with 2 hidden layers, Backprop (LeCun et al, 1998) (pdf), RELU activation (Nair et al, 2010), ADAM optimizer (Kingma et al, 2014), dropout of 0.5, and softmax output, to classify between the digits 3 and 5.\n\n\nWe perform a 1000 forward passes to get the standard deviation and variance ratio of our predictions as per (Gal, 2016, page 51) [.pdf].\n\n\nClosely following the $FiFa$ method from (Carlsson et al, 2018, page 4) we then apply $MAPPER$ with the 2D filter function [predicted probability(x), confidence(x)] to project the data. We cover this projection with 10 10% overlapping intervals per dimension. We cluster with complete single-linkage agglomerative clustering (n_clusters=3) (Ward, 1963) and use the penultimate layer as the inverse $X$. To guide exploration, we color the graph nodes by mean absolute error(x).\n\n\nWe also ask predictions for the digit 4 which was never seen during training (Larochelle et al, 2008), to see how this influences the confidence of the network, and to compare the graphs outputted by KeplerMapper.\n\n\nFor every graph node we show the original images. Binary classification on MNIST digits is easy enough to resort to a simple interpretability method to show what distinguishes the cluster from the rest of the data: We order each feature by z-score and highlight the top 10% features (Singh, 2016).", "%matplotlib inline\n\nimport keras\nfrom keras import backend as K\nfrom keras.datasets import mnist\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\nfrom keras.optimizers import Adam\n\nimport kmapper as km\nimport numpy as np\nimport pandas as pd\nfrom sklearn import metrics, cluster, preprocessing\nimport xgboost as xgb\n\nfrom matplotlib import pyplot as plt\nplt.style.use(\"ggplot\")", "Preparing Data\nWe create train and test data sets for the digits 3, 4, and 5.", "# get the data, shuffled and split between train and test sets\n(X_train, y_train), (X_test, y_test) = mnist.load_data()\n\nX_strange = X_train[y_train == 4]\ny_strange = y_train[y_train == 4]\n\nX_train = X_train[np.logical_or(y_train == 3, y_train == 5)]\ny_train = y_train[np.logical_or(y_train == 3, y_train == 5)]\n\nX_test = X_test[np.logical_or(y_test == 3, y_test == 5)]\ny_test = y_test[np.logical_or(y_test == 3, y_test == 5)]\n\nX_strange = X_strange[:X_test.shape[0]]\ny_strange = y_strange[:X_test.shape[0]]\n\nX_train = X_train.reshape(-1, 784)\nX_test = X_test.reshape(-1, 784)\nX_strange = X_strange.reshape(-1, 784)\n\nX_train = X_train.astype('float32')\nX_test = X_test.astype('float32')\nX_strange = X_strange.astype('float32')\n\nX_train /= 255\nX_test /= 255\nX_strange /= 255\n\nprint(X_train.shape[0], 'train samples')\nprint(X_test.shape[0], 'test samples')\nprint(X_strange.shape[0], 'strange samples')\n\n# convert class vectors to binary class matrices\ny_train = (y_train == 3).astype(int)\ny_test = (y_test == 
3).astype(int)\n\ny_mean_test = y_test.mean()\nprint(y_mean_test, 'y test mean')", "Model\nModel is a basic 2-hidden layer MLP with RELU activation, ADAM optimizer, and softmax output. Dropout is applied to every layer but the final.", "batch_size = 128\nnum_classes = 1\nepochs = 10\n\nmodel = Sequential()\nmodel.add(Dropout(0.5, input_shape=(784,)))\nmodel.add(Dense(512, activation='relu', input_shape=(784,)))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(512, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(num_classes, activation='sigmoid'))\n\nmodel.summary()\nmodel.compile(loss='binary_crossentropy',\n optimizer=Adam(),\n metrics=['accuracy'])", "Fitting and evaluation", "history = model.fit(X_train, y_train,\n batch_size=batch_size,\n epochs=epochs,\n verbose=1,\n validation_data=(X_test, y_test))\nscore = model.evaluate(X_test, y_test, verbose=0)\nscore", "Perform 1000 forward passes on test set and calculate Variance Ratio and Standard Dev", "FP = 1000\npredict_stochastic = K.function([model.layers[0].input, K.learning_phase()], [model.layers[-1].output])\n\ny_pred_test = np.array([predict_stochastic([X_test, 1]) for _ in range(FP)])\ny_pred_stochastic_test = y_pred_test.reshape(-1,y_test.shape[0]).T\n\ny_pred_std_test = np.std(y_pred_stochastic_test, axis=1)\ny_pred_mean_test = np.mean(y_pred_stochastic_test, axis=1)\ny_pred_mode_test = (np.mean(y_pred_stochastic_test > .5, axis=1) > .5).astype(int).reshape(-1,1)\n\ny_pred_var_ratio_test = 1 - np.mean((y_pred_stochastic_test > .5) == y_pred_mode_test, axis=1)\n\ntest_analysis = pd.DataFrame({\n \"y_true\": y_test,\n \"y_pred\": y_pred_mean_test,\n \"VR\": y_pred_var_ratio_test,\n \"STD\": y_pred_std_test\n})\n\nprint(metrics.accuracy_score(y_true=y_test, y_pred=y_pred_mean_test > .5))\nprint(test_analysis.describe())", "Plot test set confidence", "prediction_cut_off = (test_analysis.y_pred < .96) & (test_analysis.y_pred > .94)\nstd_diff = test_analysis.STD[prediction_cut_off].max() - test_analysis.STD[prediction_cut_off].min() \nvr_diff = test_analysis.VR[prediction_cut_off].max() - test_analysis.VR[prediction_cut_off].min() \nnum_preds = test_analysis.STD[prediction_cut_off].shape[0]\n\n# STD plot\nplt.figure(figsize=(16,8))\nplt.suptitle(\"Standard Deviation of Test Predictions\", fontsize=18, weight=\"bold\")\nplt.title(\"For the %d predictions between 0.94 and 0.96 the STD varies with %f\"%(num_preds, std_diff),\n style=\"italic\")\nplt.xlabel(\"Standard Deviation\")\nplt.ylabel(\"Predicted Probability\")\nplt.scatter(test_analysis.STD, test_analysis.y_pred, alpha=.3)\nplt.scatter(test_analysis.STD[prediction_cut_off],\n test_analysis.y_pred[prediction_cut_off])\nplt.show()\n\n# VR plot\nplt.figure(figsize=(16,8))\nplt.suptitle(\"Variance Ratio of Test Predictions\", fontsize=18, weight=\"bold\")\nplt.title(\"For the %d predictions between 0.94 and 0.96 the Variance Ratio varies with %f\"%(num_preds, vr_diff),\n style=\"italic\")\nplt.xlabel(\"Variance Ratio\")\nplt.ylabel(\"Predicted Probability\")\nplt.scatter(test_analysis.VR, test_analysis.y_pred, alpha=.3)\nplt.scatter(test_analysis.VR[prediction_cut_off],\n test_analysis.y_pred[prediction_cut_off])\nplt.show()", "Apply $MAPPER$\nTake penultimate layer activations from test set for the inverse $X$", "predict_penultimate_layer = K.function([model.layers[0].input, K.learning_phase()], [model.layers[-2].output])\n\nX_inverse_test = np.array(predict_penultimate_layer([X_test, 1]))[0]\nprint((X_inverse_test.shape, \"X_inverse_test shape\"))", "Take STD and error as 
the projected $X$", "X_projected_test = np.c_[test_analysis.STD, test_analysis.y_true - test_analysis.y_pred]\nprint((X_projected_test.shape, \"X_projected_test shape\"))", "Create the confidence graph $G$", "mapper = km.KeplerMapper(verbose=2)\nG = mapper.map(X_projected_test,\n X_inverse_test,\n clusterer=cluster.AgglomerativeClustering(n_clusters=2),\n overlap_perc=0.8,\n nr_cubes=10)", "Create color function output (absolute error)", "color_function_output = np.sqrt((y_test-test_analysis.y_pred)**2)", "Create image tooltips for samples that are interpretable for humans", "import io\nimport base64\nfrom scipy.misc import toimage, imsave, imresize\n\n# Create z-scores\nhard_predictions = (test_analysis.y_pred > 0.5).astype(int)\n\no = np.std(X_test, axis=0)\nu = np.mean(X_test[hard_predictions == 0], axis=0)\nv = np.mean(X_test[hard_predictions == 1], axis=0)\nz_scores = (u-v)/o\n\nscores_0 = sorted([(score,i) for i, score in enumerate(z_scores) if str(score) != \"nan\"],\n reverse=False)\nscores_1 = sorted([(score,i) for i, score in enumerate(z_scores) if str(score) != \"nan\"],\n reverse=True)\n\n# Fill RGBA image array with top 200 scores for positive and negative\nimg_array_0 = np.zeros((28,28,4))\nimg_array_1 = np.zeros((28,28,4))\n\nfor e, (score, i) in enumerate(scores_0[:200]):\n y = i % 28\n x = int((i - (i % 28))/28)\n img_array_0[x][y] = [255,255,0,205-e]\n \nfor e, (score, i) in enumerate(scores_1[:200]):\n y = i % 28\n x = int((i - (i % 28))/28)\n img_array_1[x][y] = [255,0,0,205-e]\n\nimg_array = (img_array_0 + img_array_1) / 2\n\n# Get base64 encoded version of this\noutput = io.BytesIO()\nimg = imresize(img_array, (64,64))\nimg = toimage(img)\n\nimg.save(output, format=\"PNG\")\ncontents = output.getvalue()\nexplanation_img_encoded = base64.b64encode(contents) \noutput.close()\n\n# Create tooltips for each digit\ntooltip_s = []\nfor ys, image_data in zip(y_test, X_test):\n output = io.BytesIO()\n img = toimage(imresize(image_data.reshape((28,28)), (64,64))) # Data was a flat row of \"pixels\".\n img.save(output, format=\"PNG\")\n contents = output.getvalue()\n img_encoded = base64.b64encode(contents)\n img_tag = \"\"\"<div style=\"width:71px;\n height:71px;\n overflow:hidden;\n float:left;\n position: relative;\">\n <img src=\"data:image/png;base64,%s\" style=\"position:absolute; top:0; right:0\" />\n <img src=\"data:image/png;base64,%s\" style=\"position:absolute; top:0; right:0;\n opacity:0.5; width: 64px; height: 64px;\" />\n <div style=\"position: relative; top: 0; left: 1px; font-size:9px\">%s</div>\n </div>\"\"\"%((img_encoded.decode('utf-8'),\n explanation_img_encoded.decode('utf-8'),\n ys))\n \n tooltip_s.append(img_tag)\n output.close()\ntooltip_s = np.array(tooltip_s)", "Visualize", "_ = mapper.visualize(G,\n lens=X_projected_test,\n lens_names=[\"Uncertainty\", \"Error\"],\n custom_tooltips=tooltip_s,\n color_function=color_function_output.values,\n title=\"Confidence Graph for a MLP trained on MNIST\",\n path_html=\"confidence_graph_output.html\")\n\nfrom kmapper import jupyter\njupyter.display(\"confidence_graph_output.html\")", "Image of output\n\nLink to output\nhttp://mlwave.github.io/tda/confidence-graphs.html" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
metpy/MetPy
v0.9/_downloads/4d64a32e8cfca4a5a78f2d1f68ae3c83/Gradient.ipynb
bsd-3-clause
[ "%matplotlib inline", "Gradient\nUse metpy.calc.gradient.\nThis example demonstrates the various ways that MetPy's gradient function\ncan be utilized.", "import numpy as np\n\nimport metpy.calc as mpcalc\nfrom metpy.units import units", "Create some test data to use for our example", "data = np.array([[23, 24, 23],\n [25, 26, 25],\n [27, 28, 27],\n [24, 25, 24]]) * units.degC\n\n# Create an array of x position data (the coordinates of our temperature data)\nx = np.array([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]]) * units.kilometer\n\ny = np.array([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3],\n [4, 4, 4]]) * units.kilometer", "Calculate the gradient using the coordinates of the data", "grad = mpcalc.gradient(data, coordinates=(y, x))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])", "It's also possible that we do not have the position of data points, but know\nthat they are evenly spaced. We can then specify a scalar delta value for each\naxes.", "x_delta = 2 * units.km\ny_delta = 1 * units.km\ngrad = mpcalc.gradient(data, deltas=(y_delta, x_delta))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])", "Finally, the deltas can be arrays for unevenly spaced data.", "x_deltas = np.array([[2, 3],\n [1, 3],\n [2, 3],\n [1, 2]]) * units.kilometer\ny_deltas = np.array([[2, 3, 1],\n [1, 3, 2],\n [2, 3, 1]]) * units.kilometer\ngrad = mpcalc.gradient(data, deltas=(y_deltas, x_deltas))\nprint('Gradient in y direction: ', grad[0])\nprint('Gradient in x direction: ', grad[1])" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
parthasen/java-R
DS-ML_Nov_train_test.ipynb
gpl-3.0
[ "BS, buy- sell can be predicted with 85% accuracy\n1. Sentiment\n2. Market state change\n3. VWAP spread\n4. regression spread\n5. Velocity\n6. Market State ( 4 states)\n7. Distance\nThese are features generated to find BS and RSb class\nextra: tqqq/sqqq ratio", "import pandas as pd\nimport numpy as np\nimport matplotlib.pylab as plt\nimport csv\nimport glob\n\nfrom statsmodels.tsa.arima_model import ARIMA\nfrom statsmodels.tsa.arima_model import ARIMAResults\n\nimport pickle\n#from sklearn.cross_validation import train_test_split\nfrom sklearn import linear_model\nfrom sklearn.svm import SVR\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\n\n# loading csv file\ndef get_csv_pd(path):\n #spy_pd=pd.read_csv('C:\\\\Users\\Michal\\Dropbox\\IB_data\\SPY.csv',sep=' ',names=['askPrice','askSize','bidPrice','bidSize'],index_col=0,parse_dates=True)\n #spy_pd=pd.read_csv(path+'\\SPY.csv',sep=',',names=['askPrice','askSize','bidPrice','bidSize'],index_col=0,parse_dates=True)\n spy_pd=pd.read_csv(path,sep=',',dtype={'askPrice':np.float32,'askSize':np.float32,\n 'bidPrice':np.float32,'bidSize':np.float32},index_col=0,parse_dates=True)\n #spy_pd = pd.read_csv(path, usecols=['askPrice','askSize','bidPrice','bidSize'], engine='python', skipfooter=3)\n return spy_pd\n\ndef preprocessing(df):\n df.bidPrice=df.loc[:,'bidPrice'].replace(to_replace=0, method='ffill')\n df.bidSize=df.loc[:,'bidSize'].replace(to_replace=0, method='ffill')\n df.askPrice=df.loc[:,'askPrice'].replace(to_replace=0, method='ffill')\n df.askSize=df.loc[:,'askSize'].replace(to_replace=0, method='ffill')\n df=df.dropna()\n # to exclude 0\n df=df[df['bidPrice']>df.bidPrice.mean()-df.bidPrice.std()]\n df=df[df['askPrice']>df.askPrice.mean()-df.askPrice.std()]\n df['mid']=(df.askPrice+df.bidPrice)/2\n df['vwap']=((df.loc[:,'bidPrice']*df.loc[:,'bidSize'])+(df.loc[:,'askPrice']*df.loc[:,'askSize']))/(df.loc[:,'bidSize']+df.loc[:,'askSize'])\n df['spread']=df.vwap-df.mid\n df['v']=(df.mid-df.mid.shift(60))\n df['mom']=np.where(np.logical_and((df.mid-df.mid.shift(12))!=0,df.v!=0),(df.mid-df.mid.shift(12))/df.v,0)\n df['return']=(df.askPrice/df.bidPrice.shift(1))-1\n #df['ret'] = np.log(df.Close/df.Close.shift(1))\n df['sigma']=df.spread.rolling(60).std()\n #df['sigma']=df.Close.rolling(5).std()\n df['high']=df.askPrice.rolling(5).max()\n df['low']=df.bidPrice.rolling(5).min()\n \n #df['mom']=np.where(np.logical_and(df.vel_c==1,df.Close>df.price),1,np.where(np.logical_and(df.vel_c==-1,df.Close<df.price),-1,0))\n #flagD=np.logical_and(np.logical_and(df.Close.shift(10)<df.Close.shift(15),df.Close.shift(15)< df.Close.shift(20)),df.Close< df.Close.shift(10))\n #flagU=np.logical_and(np.logical_and(df.Close.shift(15)>df.Close.shift(20),df.Close.shift(10)> df.Close.shift(15)),df.Close> df.Close.shift(10))\n #df['UD']= np.where(flagU,-1,np.where(flagD,1,0))\n \n #df['P']=(df.High+df.Low+df.Close)/3\n #df['UT']=(pd.rolling_max(df.High,60)+pd.rolling_max(df.P+df.High-df.Low,60))*0.5\n #df['DT']=(pd.rolling_min(df.Low,60)+pd.rolling_min(df.P+df.High-df.Low,60))*0.5\n #df['BA']=np.where(df.Close<=df.DT,-1,np.where(df.Close>=df.UT,1,0))# below or above\n return df\n\n'''\ndef normalise(df,window_length=60):\n dfn=(df-df.rolling(window_length).min())/(df.rolling(window_length).max()-df.rolling(window_length).min())\n return dfn\n\ndef de_normalise(data,df,window_length=60):\n 
dn=(df*(data.rolling(window_length).max()-data.rolling(window_length).min()))+data.rolling(window_length).min()\n return dn\n\n#https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks\ndef chunks(l, n):\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\n for i in range(0, len(l), n):\n yield l[i:i + n]\n\n##### ARIMA \n\nfrom statsmodels.tsa.arima_model import ARIMA\nfrom statsmodels.tsa.arima_model import ARIMAResults\n \n###ARIMA preprocessing\ndef arima_processing(df):\n #data=df[['vwap','mid']]\n df=df.dropna()\n df['Lvwap']=np.log(df.vwap)\n df['Lmid']=np.log(df.mid)\n df['LDvwap']=df.Lvwap-df.Lvwap.shift(60)\n df['LDmid']=df.Lmid-df.Lmid.shift(60)\n df=df.dropna()\n return df \n\n###Model is already saved from \"/Dropbox/DataScience/ARIMA_model_saving.ipynb\". Here loaded and added to \"df_ml\"\ndef ARIMA_(data):\n ### load model\n data=data.dropna()\n predictions_mid=ARIMA_mid(data.LDmid)\n predictions_vwap=ARIMA_vwap(data.LDvwap) \n vwap_arima=np.exp(predictions_vwap+data.Lvwap.shift(60))\n mid_arima=np.exp(predictions_mid+data.Lmid.shift(60))\n df_ml['arima']=data.mid+vwap_arima-mid_arima\n \ndef ARIMA_mid(data):\n ### load model\n mid_arima_loaded = ARIMAResults.load('mid_arima.pkl')\n predictions_mid = mid_arima_loaded.predict()\n return predictions_mid\n\ndef ARIMA_vwap(data):\n ### load model\n vwap_arima_loaded = ARIMAResults.load('vwap_arima.pkl')\n predictions_vwap = vwap_arima_loaded.predict()\n return predictions_vwap\n\n#### KALMAN moving average\n\n##KF moving average\n#https://github.com/pykalman/pykalman\n\n# Import a Kalman filter and other useful libraries\nfrom pykalman import KalmanFilter\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom scipy import poly1d\n\ndef kalman_ma(data):\n #x=data.mid\n x=data.mid\n # Construct a Kalman filter\n kf = KalmanFilter(transition_matrices = [1],\n observation_matrices = [1],\n initial_state_mean = 248,\n initial_state_covariance = 1,\n observation_covariance=1,\n transition_covariance=.01)\n\n # Use the observed values of the price to get a rolling mean\n state_means, _ = kf.filter(x.values)\n state_means = pd.Series(state_means.flatten(), index=x.index)\n df_ml['km']=state_means\n\n### Linear Regression, sklearn, svm:SVR,linear_model\nimport pickle\n#from sklearn.cross_validation import train_test_split\nfrom sklearn import linear_model\nfrom sklearn.svm import SVR\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.svm import SVC\n\n\n## loading model saved from /Dropbox/DataScience/REG_model_saving.ipynb\nfilename_rgr = 'rgr.sav'\nfilename_svr = 'svr.sav'\n# load the model from disk\nloaded_rgr_model = pickle.load(open(filename_rgr, 'rb'))\nloaded_svr_model = pickle.load(open(filename_svr, 'rb'))\n\ndef strat_lr(data,df):\n df=df.dropna()\n data=data.dropna()\n X=df[['askPrice','askSize','bidPrice','bidSize','vwap','spread','v','return','sigma']]\n y=df.mid\n predict_regr=loaded_rgr_model.predict(X)\n predict_svr=loaded_svr_model.predict(X)\n df['predict_regr']=predict_regr\n df['predict_svr']=predict_svr\n df_ml['REG']=de_normalise(data.mid,df.predict_regr)\n df_ml['SVR']=de_normalise(data.mid,df.predict_svr)\n \n#### loading classification model from /Dropbox/DataScience/ML_20Sep\nfilename_svm_model_up = 'svm_model_up.sav'\nfilename_lm_model_up = 'lm_model_up.sav'\nfilename_svm_model_dn = 'svm_model_dn.sav'\nfilename_lm_model_dn = 'lm_model_dn.sav'\n# load the model from 
disk\nloaded_svm_up_model = pickle.load(open(filename_svm_model_up, 'rb'))\nloaded_lm_up_model = pickle.load(open(filename_lm_model_up, 'rb'))\nloaded_svm_dn_model = pickle.load(open(filename_svm_model_dn, 'rb'))\nloaded_lm_dn_model = pickle.load(open(filename_lm_model_dn, 'rb'))\n\ndef classification_up_dn(data):\n X=data[['askPrice','askSize','bidPrice','bidSize','vwap','spread','v','return','sigma']]\n y1=data.U\n y2=data.D\n \n \n predict_svm_up=loaded_svm_up_model.predict(X)\n predict_lm_up=loaded_lm_up_model.predict(X)\n predict_svm_dn=loaded_svm_dn_model.predict(X)\n predict_lm_dn=loaded_lm_dn_model.predict(X)\n \n data['predict_svm_up']=predict_svm_up\n data['predict_lm_up']=predict_lm_up\n data['predict_svm_dn']=predict_svm_dn\n data['predict_lm_dn']=predict_lm_dn\n \n data['predict_svm']=data.predict_svm_up+data.predict_svm_dn\n data['predict_lm']=data.predict_lm_up+data.predict_lm_dn\n \n data['UD']=np.where(np.logical_and(data.predict_svm>0,data.predict_lm>0),1,np.where(np.logical_and(data.predict_svm<0,data.predict_lm<0),-1,0)) \n \n df_ml['UD']=data.UD\n\n### LSTM\n\n#df.loc[:, cols].prod(axis=1)\ndef lstm_processing(df):\n df=df.dropna()\n df_price=df[['mid','vwap','arima','km','REG','SVR']]\n #normalization\n dfn=normalise(df_price,12)\n dfn['UD']=df.UD\n return dfn\n\n\nimport numpy\nimport matplotlib.pyplot as plt\nimport pandas\nimport math\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import mean_squared_error\nimport numpy\nimport matplotlib.pyplot as plt\nimport pandas\nimport math\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.layers import LSTM\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.metrics import mean_squared_error\n\nfrom keras.models import load_model\nmodel = load_model('21sep.h5')\n\n# convert an array of values into a dataset matrix\ndef create_dataset(dataset, look_back=1):\n dataX, dataY = [], []\n for i in range(len(dataset)-look_back-1):\n a = dataset[i:(i+look_back), 0]\n b = dataset[i:(i+look_back), 1]\n c = dataset[i:(i+look_back), 2]\n d = dataset[i:(i+look_back), 3]\n e= dataset[i:(i+look_back), 4]\n f = dataset[i:(i+look_back), 5]\n g= dataset[i:(i+look_back), 6]\n dataX.append(np.c_[b,c,d,e,f,g])\n #dataX.append(b)\n #dataX.append(c)\n #dataX.append(d)\n #dataX.append(e)\n #dataX.concatenate((a,bT,cT,dT,eT),axis=1)\n dataY.append(dataset[i + look_back,0])\n return np.array(dataX), np.array(dataY)\n\n\ndef strat_LSTM(df_ml):\n \n #normalization\n df_lstm=lstm_processing(df_ml)\n df_lstm=df_lstm.dropna()\n dataset=df_lstm.values\n dataset = dataset.astype('float32')\n # reshape into X=t and Y=t+1\n look_back = 3\n X_,Y_ = create_dataset(dataset,look_back)\n \n # reshape input to be [samples, time steps, features]\n X_ = numpy.reshape(X_, (X_.shape[0],X_.shape[1],X_.shape[2]))\n # make predictions\n predict = model.predict(X_)\n df_lstm=df_lstm.tail(len(predict))\n df_lstm['LSTM']=predict\n\n #LSTM=(df_lstm.LSTM*(df_ml.mid.rolling(60).max()-df_ml.midClose.rolling(60).min()))+df_LSTM.Close.rolling(60).min()\n LSTM=de_normalise(df_ml.mid,df_lstm.LSTM,window_length=12)\n df_lstm['pred']=LSTM\n df_lstm=df_lstm.dropna()\n df_lstm=df_lstm.tail(len(df_ml))\n df_ml['LSTM']=df_lstm.pred\n '''\n\n'''\n#https://stackoverflow.com/questions/312443/how-do-you-split-a-list-into-evenly-sized-chunks\ndef chunks(l, n):\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\n for i in range(0, 
len(l), n):\n yield l[i:i + n]\n''' ", "Dataset", "filename = '/home/octo/Dropbox'+ '/SPY7Dec.csv'\n\ndata=get_csv_pd(filename)\n\ndata=preprocessing(data)\n\ndf=data.dropna()\ndf=df[['mid','vwap','spread','v','mom','return','sigma','high','low',]]\n\n# split into train and test sets\ntrain_size = int(len(df) * 0.80)\ntest_size = len(df) - train_size\ntrain= df[0:train_size]\ntest= df[train_size:len(df)]\nprint(len(train), len(test))\n\ntrain_X=train[['mid','vwap','spread','v','return','sigma','high','low',]]\ntrain_y=train['mom']\ntest_X=test[['mid','vwap','spread','v','return','sigma','high','low',]]\ntest_y=test['mom']\n\ntrain_X.head()\n\ntest_y.head()", "Regression", "from sklearn import linear_model\nregr = linear_model.LinearRegression()\n#regr.fit(X.tail(20),y.tail(20))\n#predict=regr.predict(X.tail(5))\nregr.fit(train_X,train_y)\npredict=regr.predict(test_X)\n#X=X.dropna()\n#y=y.dropna()\n#y[y == inf] = 0\ndt=test[['mid']]\ndt['predict']=predict\n#dt['predict']=dt.mid+dt.mid*dt.predict\ndt['predict']=dt.predict*test.v+test.mid.shift(12)\npdf=test\npdf['pREG']=dt.predict\n\npdf.tail()\n\nfrom sklearn.svm import SVR\n# Fit regression model\nsvr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.9) #kernel='linear' #kernel='poly'\npredict_svr = svr_rbf.fit(train_X,train_y).predict(test_X)\ndt1=test[['mid']]\ndt1['predict']=predict_svr\ndt1['predict']=dt1.predict*test.v+test.mid.shift(12)\npdf['pSVR']=dt1.predict\n\npdf.dropna().head()\n\npdf[['mid','high','low','pREG','pSVR']].tail(300).plot(figsize=(15,9))\n#df[['Volume']].tail(5000).plot(figsize=(15,9))\n#data[['AvgVolume']].tail(5000).plot(figsize=(15,9))\nplt.show()\n\n# look at the results\nplt.scatter(pdf['mid'],test_y, c='k', label='data')\nplt.hold('on')\nplt.plot(pdf['pREG'],test_y, c='g', label='pREG')\n#plt.plot(pdf['pSVR'], y, c='g', label='pSVR')\nplt.xlabel('data')\nplt.ylabel('target')\nplt.title('Support Vector Regression')\nplt.legend()\nplt.show()\n\n# look at the results\nplt.scatter(pdf['mid'],pdf['pREG'], c='k', label='pSVR')\nplt.hold('on')\nplt.plot(pdf['mid'],pdf['pSVR'], c='g', label='pREG')\nplt.plot(pdf['mid'], pdf['high'], c='g', label='high')\nplt.xlabel('data')\nplt.ylabel('target')\nplt.title('Support Vector Regression')\nplt.legend()\nplt.show()", "ARCH\n https://www.quantopian.com/posts/some-code-from-ernie-chans-new-book-implemented-in-python\n http://auquan.com/cointegration-stationarity/\n https://www.analyticsvidhya.com/blog/2016/02/time-series-forecasting-codes-python/\n http://machinelearningmastery.com/time-series-forecast-case-study-python-monthly-armed-robberies-boston/\n https://www.quantstart.com/articles/Basics-of-Statistical-Mean-Reversion-Testing-Part-II\n https://datascience.ibm.com/exchange/public/entry/view/815137c868b916821dec777bdc23013c\n http://machinelearningmastery.com/time-series-data-stationary-python/\n\nClassification", "X=df[['mid','vwap','spread','v','return','sigma','high','low',]]\ny=df['mom']\n\nlen(df)\n\nfrom sklearn import linear_model\nregr = linear_model.LinearRegression()\n#regr.fit(X.tail(20),y.tail(20))\n#predict=regr.predict(X.tail(5))\nregr.fit(X.dropna(),y.dropna())\npredict=regr.predict(X)\n#X=X.dropna()\n#y=y.dropna()\n#y[y == inf] = 0\ndt=df[['mid']]\ndt['predict']=predict\n#dt['predict']=dt.mid+dt.mid*dt.predict\ndt['predict']=dt.predict*df.v+df.mid.shift(12)\nclassify_df=df\nclassify_df['pREG']=dt.predict\n\nfrom sklearn.svm import SVR\n# Fit regression model\nsvr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.9) #kernel='linear' #kernel='poly'\npredict_svr = svr_rbf.fit(X, 
y).predict(X)\ndt1=df[['mid']]\ndt1['predict']=predict_svr\ndt1['predict']=dt1.predict*df.v+df.mid.shift(12)\nclassify_df['pSVR']=dt1.predict\n\ndef classification(df):\n mid1=(df.high+df.low)/2\n #flagUD=np.where(np.logical_and(df.mid>df.pREG,df.mid>df.pSVR),1,np.where(np.logical_and(df.mid<df.pREG,df.mid<df.pSVR),-1,0))\n #df['UD']= np.where(np.logical_and(df.mid>mid1,flagUD==1),1,np.where(np.logical_and(df.mid<mid1,flagUD==-1),-1,0))\n flagUD=np.where(np.logical_and(df.mid>df.pREG,df.mid>df.pSVR),1,np.where(np.logical_and(df.mid<df.pREG,df.mid<df.pSVR),-1,0))\n UD= np.where(np.logical_and(df.mid>mid1,flagUD==1),1,np.where(np.logical_and(df.mid<mid1,flagUD==-1),-1,0))\n df['U']= np.where(UD==1,1,0)\n df['D']= np.where(UD==-1,-1,0)\n df['UD']=df.U+df.D\n return df\n\ndata_class=classification(classify_df)\ndata_class=data_class.dropna()\ndf=df.dropna()\n# both df and data_class have U,D,UD\n\ndata_class.head()\n\ndf.head()\n\n# split into train and test sets\ntrain_size = int(len(data_class) * 0.80)\ntest_size = len(data_class) - train_size\ntrain= data_class[0:train_size]\ntest= data_class[train_size:len(data_class)]\nprint(len(train), len(test))\n\ntrain_X=train[['mid','vwap','spread','v','return','sigma','high','low','mom','pREG','pSVR']]\ntrain_y=train['UD']\ntest_X=test[['mid','vwap','spread','v','return','sigma','high','low','mom','pREG','pSVR']]\ntest_y=test['UD']\ntrain_U=train['U']\ntest_U=test['U']\ntrain_D=train['D']\ntest_D=test['D']\n\nprint(len(train_U), len(test_U))", "Logistic Regression", "from sklearn import metrics\nfrom sklearn.linear_model import LogisticRegression\n\nmodel = LogisticRegression()\nmodel.fit(train_X,train_U)\nprint(model)\n\n# make predictions\nexpected =test_U\npredicted = model.predict(test_X)\n\n# summarize the fit of the model\nprint(metrics.classification_report(expected, predicted))\nprint(metrics.confusion_matrix(expected, predicted))\n\nmodel = LogisticRegression()\nmodel.fit(train_X,train_D)\nprint(model)\n\n# make predictions\nexpected =test_D\npredicted = model.predict(test_X)\n\n# summarize the fit of the model\nprint(metrics.classification_report(expected, predicted))\nprint(metrics.confusion_matrix(expected, predicted))", "Decision Trees", "from sklearn import metrics\nfrom sklearn.tree import DecisionTreeClassifier\n\n# fit a CART model to the data\nmodel = DecisionTreeClassifier()\nmodel.fit(train_X,train_U)\nprint(model)\n\n# make predictions\nexpected =test_U\npredicted = model.predict(test_X)\n\n# summarize the fit of the model\nprint(metrics.classification_report(expected, predicted))\nprint(metrics.confusion_matrix(expected, predicted))\n\naccuracy =model.score(test_X,test_U) \naccuracy\n\n# fit a CART model to the data\nmodel = DecisionTreeClassifier()\nmodel.fit(train_X,train_D)\nprint(model)\n\n# make predictions\nexpected =test_D\npredicted = model.predict(test_X)\n\n# summarize the fit of the model\nprint(metrics.classification_report(expected, predicted))\nprint(metrics.confusion_matrix(expected, predicted))\n\naccuracy =model.score(test_X,test_D) \naccuracy", "Random Forest\nThe random forest algorithm can find nonlinearities in data that a linear regression wouldn’t be able to pick up on.", "# Import the random forest model.\nfrom sklearn.ensemble import RandomForestRegressor\n\n# Initialize the model with some parameters.\nmodel = RandomForestRegressor(n_estimators=100, min_samples_leaf=10, random_state=1)\n# Fit the model to the data.\nmodel.fit(train_X,train_U)\nprint(model)\n\n# Make 
predictions.\nexpected=test_U\npredicted = model.predict(test_X)\n\naccuracy =model.score(test_X,test_U) \naccuracy\n\nfrom sklearn.ensemble import RandomForestClassifier\n\nclf = RandomForestClassifier(n_estimators=100, n_jobs=-1)\nclf.fit(train_X,train_U)\naccuracy = clf.score(test_X,test_U)\nprint(accuracy)", "KNN", "from sklearn import neighbors\n\nclf = neighbors.KNeighborsClassifier()\nclf.fit(train_X,train_U)\n\naccuracy = clf.score(test_X,test_U)\nprint(accuracy)", "Ada Boosting binary Classification", "from sklearn.ensemble import AdaBoostClassifier\n\nclf = AdaBoostClassifier()\nclf.fit(train_X,train_U)\naccuracy = clf.score(test_X,test_U)\nprint(accuracy)", "Gradient Tree Boosting binary Classification", "from sklearn.ensemble import GradientBoostingClassifier\n\nclf = GradientBoostingClassifier(n_estimators=100)\nclf.fit(train_X,train_U)\naccuracy = clf.score(test_X,test_U)\nprint(accuracy)", "Quadratic Discriminant Analysis binary Classification", "from sklearn.qda import QDA\n\nclf = QDA()\nclf.fit(train_X,train_U)\naccuracy = clf.score(test_X,test_U)\nprint(accuracy)", "SVM\nSVM (Support Vector Machines) is one of the most popular machine learning algorithms used mainly for the classification problem. As well as logistic regression, SVM allows multi-class classification with the help of the one-vs-all method.", "from sklearn import metrics\nfrom sklearn.svm import SVC\n\n# fit a SVM model to the data\nmodel = SVC()\nmodel.fit(train_X,train_U)\nprint(model)\n\n# Make predictions.\nexpected=test_U\npredicted = model.predict(test_X)\n\naccuracy =model.score(test_X,test_U) \naccuracy\n\n# fit a SVM model to the data\nmodel = SVC()\nmodel.fit(train_X,train_D)\nprint(model)\n\n# Make predictions.\nexpected=test_D\npredicted = model.predict(test_X)\n\naccuracy =model.score(test_X,test_D) \naccuracy", "Saving", " \n if savemodel == True:\n fname_out = '{}-{}.pickle'.format(fout, datetime.now())\n with open(fname_out, 'wb') as f:\n cPickle.dump(clf, f, -1) ", "graphical", "# plotting\nimport matplotlib.pyplot as plt\nimport seaborn as sns; sns.set()\n%matplotlib inline\n#plt.rcParams['figure.figsize'] = 8,6\n\ntest.boxplot(column='v')\n\ntest.boxplot(by='v')\nplt.ylim(245,248)\n\ntest.boxplot(by='UD')\n\n#some descriptive statistics\ntest.describe()\n\ntest['v'].plot(kind='hist', grid=True, title='velocity')\n\ntest['UD'].plot(kind='hist', grid=True, title='up-down')\n\ntest['v'].plot(kind='line', grid=True, title='velocity')\n\ntest['UD'].plot(kind='line', grid=True, title='up-down')\n\n# Find 7, 30, 120 day moving averages (very broadly, rolling week, month and quarter)\nspy_12 = test.rolling(window=12).mean()\nspy_60 = test.rolling(window=60).mean()\nspy_360 = test.rolling(window=360).mean()\n\nfig = plt.figure()\nfig.autofmt_xdate()\nax = fig.add_subplot(1,1,1)\nax.plot(test.index,test, label='SPY')\nax.plot(spy_12.index, spy_12, label='1 min rolling')\nax.plot(spy_60.index, spy_60, label='5 min rolling')\nax.plot(spy_360.index,spy_360, label='30 min rolling')\nax.grid()\nax.legend(loc=2)\nax.set_xlabel('Date')\nplt.title('SPY Closes & Rolling Averages')\nplt.show()\n\n#frequency\nround(test['mom']).value_counts()\n\nround(test['vwap'],1).hist(bins=50)\n\ntest.boxplot(column='mid')\n\n#df for 
datascience\n#signal=df.DataFrame(data=df.mid)\nsignal=df\n#df['time']=df.index.strftime('%H:%M:%S')\ntime=signal.index.strftime('%H:%M:%S')\n\nP=(signal.high+signal.low+signal.mid)/3\nsignal['UT']=(P+signal.high.rolling(60).max()-signal.low.rolling(60).max())\nsignal['DT']=(P-signal.high.rolling(60).min()+signal.low.rolling(60).min())\nsignal['BS']=np.where(signal.mid<=df.DT,\"B\",np.where(signal.mid>=df.UT,\"S\",\"H\"))\nsignal=signal.dropna()\n\nsignal.head()\n\n\n\ndf[['UT','DT','mid','high','low','pREG','pSVR']].tail(100).plot(figsize=(16, 10))\nplt.show()\n\n\n\nsignal.boxplot(column='mid',by ='BS')\n\ntemp1 = round(signal['UD']).value_counts(ascending=True)\ntemp2 = signal.pivot_table(values='UD',index=['BS'],aggfunc=lambda x: x.map({'B':1,'S':-1,'H':0}).mean())\n\nprint ('Frequency Table for spread:')\nprint (temp1)\nprint ('\\nProbility') \nprint (temp2.tail())\n\ntemp3 = pd.crosstab(round(signal['UD']),signal['BS'])\ntemp3.plot(kind='bar', stacked=True, color=['red','blue'], grid=False)\n\n# number of missing values in each column as isnull() returns 1, if the value is null.\nsignal.apply(lambda x: sum(x.isnull()),axis=0)\n\nsignal['BS'].value_counts()\n\nsignal['UD'].value_counts()\n\ntable = signal.pivot_table(values='v', index='BS' ,columns='UD', aggfunc=np.median)\nprint(table)\n\n#Boolean indexing\nsignal.loc[(signal['v']<0) & (signal[\"BS\"]==\"B\") & (signal[\"DT\"]>signal[\"mid\"]), ['mid',\"spread\",\"BS\",\"DT\"]].head()\n\ntrain_X.head()\n\n# Create first network with Keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense\nimport numpy\n\n# create model\nmodel = Sequential()\nmodel.add(Dense(12, input_dim=11, init='uniform', activation='relu'))\nmodel.add(Dense(8, init='uniform', activation='relu'))\nmodel.add(Dense(1, init='uniform', activation='sigmoid'))\n# Compile model\nmodel.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])\n# Fit the model\nmodel.fit(train_X,train_U, nb_epoch=11, batch_size=10)\n# evaluate the model\nscores = model.evaluate(test_X,test_U)\nprint(\"%s: %.2f%%\" % (model.metrics_names[1], scores[1]*100))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
qutip/qutip-notebooks
examples/heom/heom-1a-spin-bath-model-basic.ipynb
lgpl-3.0
[ "Example 1a: Spin-Bath model (basic)\nIntroduction\nThe HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.\nIn this example we show the evolution of a single two-level system in contact with a single Bosonic environment. The properties of the system are encoded in Hamiltonian, and a coupling operator which describes how it is coupled to the environment.\nThe Bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.\nIn the example below we show how to model the overdamped Drude-Lorentz Spectral Density, commonly used with the HEOM. We show how to do this using the Matsubara, Pade and fitting decompositions, and compare their convergence. \nDrude-Lorentz (overdamped) spectral density\nThe Drude-Lorentz spectral density is:\n$$J_D(\\omega)= \\frac{2\\omega\\lambda\\gamma}{{\\gamma}^2 + \\omega^2}$$\nwhere $\\lambda$ scales the coupling strength, and $\\gamma$ is the cut-off frequency. We use the convention,\n\\begin{equation}\nC(t) = \\int_0^{\\infty} d\\omega \\frac{J_D(\\omega)}{\\pi}[\\coth(\\beta\\omega) \\cos(\\omega \\tau) - i \\sin(\\omega \\tau)]\n\\end{equation}\nWith the HEOM we must use an exponential decomposition:\n\\begin{equation}\nC(t)=\\sum_{k=0}^{k=\\infty} c_k e^{-\\nu_k t}\n\\end{equation}\nAs an example, the Matsubara decomposition of the Drude-Lorentz spectral density is given by:\n\\begin{equation}\n \\nu_k = \\begin{cases}\n \\gamma & k = 0\\\n {2 \\pi k} / {\\beta } & k \\geq 1\\\n \\end{cases}\n\\end{equation}\n\\begin{equation}\n c_k = \\begin{cases}\n \\lambda \\gamma (\\cot(\\beta \\gamma / 2) - i) & k = 0\\\n 4 \\lambda \\gamma \\nu_k / {(nu_k^2 - \\gamma^2)\\beta } & k \\geq 1\\\n \\end{cases}\n\\end{equation}\nNote that in the above, and the following, we set $\\hbar = k_\\mathrm{B} = 1$.", "%pylab inline\n%load_ext autoreload\n%autoreload 2\n\nimport contextlib\nimport time\n\nimport numpy as np\n\nfrom qutip import *\nfrom qutip.nonmarkov.heom import HEOMSolver, HSolverDL, BosonicBath, DrudeLorentzBath, DrudeLorentzPadeBath\n\ndef cot(x):\n \"\"\" Vectorized cotangent of x. \"\"\"\n return 1. 
/ np.tan(x)\n\ndef dl_matsubara_params(lam, gamma, T, nk):\n \"\"\" Calculation of the real and imaginary expansions of the Drude-Lorenz correlation functions.\n \"\"\"\n ckAR = [lam * gamma * cot(gamma / (2 * T))]\n ckAR.extend(\n 4 * lam * gamma * T * 2 * np.pi * k * T / ((2 * np.pi * k * T)**2 - gamma**2)\n for k in range(1, nk + 1)\n )\n vkAR = [gamma]\n vkAR.extend(2 * np.pi * k * T for k in range(1, nk + 1))\n\n ckAI = [lam * gamma * (-1.0)]\n vkAI = [gamma]\n \n return ckAR, vkAR, ckAI, vkAI\n\ndef dl_corr_approx(t, nk):\n \"\"\" Drude-Lorenz correlation function approximation.\n \n Approximates the correlation function at each time t to nk exponents.\n \"\"\"\n c = lam * gamma * (-1.0j + cot(gamma / (2 * T))) * np.exp(-gamma * t)\n for k in range(1, nk):\n vk = 2 * np.pi * k * T\n c += (4 * lam * gamma * T * vk / (vk**2 - gamma**2)) * np.exp(-vk * t)\n return c\n\ndef plot_result_expectations(plots, axes=None):\n \"\"\" Plot the expectation values of operators as functions of time.\n \n Each plot in plots consists of (solver_result, measurement_operation, color, label).\n \"\"\"\n if axes is None:\n fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))\n fig_created = True\n else:\n fig = None\n fig_created = False\n\n # add kw arguments to each plot if missing\n plots = [p if len(p) == 5 else p + ({},) for p in plots]\n for result, m_op, color, label, kw in plots:\n exp = np.real(expect(result.states, m_op))\n kw.setdefault(\"linewidth\", 2)\n axes.plot(result.times, exp, color, label=label, **kw)\n\n if fig_created:\n axes.legend(loc=0, fontsize=12)\n axes.set_xlabel(\"t\", fontsize=28)\n\n return fig\n\n@contextlib.contextmanager\ndef timer(label):\n \"\"\" Simple utility for timing functions:\n \n with timer(\"name\"):\n ... code to time ...\n \"\"\"\n start = time.time()\n yield\n end = time.time()\n print(f\"{label}: {end - start}\")\n\n# Defining the system Hamiltonian\neps = .5 # Energy of the 2-level system.\nDel = 1.0 # Tunnelling term\nHsys = 0.5 * eps * sigmaz() + 0.5 * Del* sigmax()\n\n# Initial state of the system.\nrho0 = basis(2,0) * basis(2,0).dag() \n\n# System-bath coupling (Drude-Lorentz spectral density)\nQ = sigmaz() # coupling operator\n\n# Bath properties:\ngamma = .5 # cut off frequency\nlam = .1 # coupling strength\nT = 0.5\nbeta = 1./T\n\n# HEOM parameters\nNC = 5 # cut off parameter for the bath\nNk = 2 # number of exponents to retain in the Matsubara expansion of the correlation function\n\n# Times to solve for\ntlist = np.linspace(0, 50, 1000)\n\n# Define some operators with which we will measure the system\n# 1,1 element of density matrix - corresonding to groundstate\nP11p = basis(2,0) * basis(2,0).dag()\nP22p = basis(2,1) * basis(2,1).dag()\n# 1,2 element of density matrix - corresonding to coherence\nP12p = basis(2,0) * basis(2,1).dag()", "First of all, it is useful to look at the spectral density, to understand its magnitude and width, relative to the system properties:", "def plot_spectral_density():\n \"\"\" Plot the Drude-Lorentz spectral density \"\"\"\n w = np.linspace(0, 5, 1000)\n J = w * 2 * lam * gamma / (gamma**2 + w**2)\n\n fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))\n axes.plot(w, J, 'r', linewidth=2)\n axes.set_xlabel(r'$\\omega$', fontsize=28)\n axes.set_ylabel(r'J', fontsize=28)\n\nplot_spectral_density()", "Next we calculate the exponents using the Matsubara decompositions. 
Here we split them into real and imaginary parts.\nThe HEOM code will optimize these, and reduce the number of exponents when real and imaginary parts have the same\nexponent. This is clearly the case for the first term in the vkAI and vkAR lists.", "ckAR, vkAR, ckAI, vkAI = dl_matsubara_params(nk=Nk, lam=lam, gamma=gamma, T=T)", "Having created the lists which specify the bath correlation functions, we create a BosonicBath from them and pass the bath to the HEOMSolver class.\nThe solver constructs the \"right hand side\" (RHS) determining how the system and auxiliary density operators evolve in time. This can then be used to solve for dynamics or steady-state.\nBelow we create the bath and solver and then solve for the dynamics by calling .run(rho0, tlist).", "options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nwith timer(\"RHS construction time\"):\n bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)\n HEOMMats = HEOMSolver(Hsys, bath, NC, options=options)\n \nwith timer(\"ODE solver time\"):\n resultMats = HEOMMats.run(rho0, tlist)\n\nplot_result_expectations([\n (resultMats, P11p, 'b', \"P11 Mats\"),\n (resultMats, P12p, 'r', \"P12 Mats\"),\n]);", "In practice, one would not perform this laborious expansion for the Drude-Lorentz correlation function, because QuTiP already has a class, DrudeLorentzBath, that can construct this bath for you. Nevertheless, knowing how to perform this expansion will allow you to construct your own baths for other spectral densities.\nBelow we show how to use this built-in functionality:", "# Compare to built-in Drude-Lorentz bath:\n\nwith timer(\"RHS construction time\"):\n bath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)\n HEOM_dlbath = HEOMSolver(Hsys, bath, NC, options=options)\n\nwith timer(\"ODE solver time\"):\n result_dlbath = HEOM_dlbath.run(rho0, tlist) #normal 115\n\nplot_result_expectations([\n (result_dlbath, P11p, 'b', \"P11 (DrudeLorentzBath)\"),\n (result_dlbath, P12p, 'r', \"P12 (DrudeLorentzBath)\"),\n]);", "We also provide a legacy class, HSolverDL, which calculates the Drude-Lorentz correlation functions automatically, to be backwards compatible with the previous HEOM solver in QuTiP:", "# Compare to legacy class:\n\n# The legacy class performs the above collation of coefficients automatically, based upon the\n# parameters for the Drude-Lorentz spectral density.\n\nwith timer(\"RHS construction time\"):\n HEOMlegacy = HSolverDL(Hsys, Q, lam, T, NC, Nk, gamma, options=options)\n\nwith timer(\"ODE solver time\"):\n resultLegacy = HEOMlegacy.run(rho0, tlist) #normal 115\n\nplot_result_expectations([\n (resultLegacy, P11p, 'b', \"P11 Legacy\"),\n (resultLegacy, P12p, 'r', \"P12 Legacy\"),\n]);", "Ishizaki-Tanimura Terminator\nTo speed up convergence (in terms of the number of exponents kept in the Matsubara decomposition), we can treat the $Re[C(t=0)]$ component as a delta-function distribution, and include it as a Lindblad correction. 
This is sometimes known as the Ishizaki-Tanimura Terminator.\nIn more detail, given\n\\begin{equation}\nC(t)=\\sum_{k=0}^{\\infty} c_k e^{-\\nu_k t}\n\\end{equation}\nsince $\\nu_k=\\frac{2 \\pi k}{\\beta }$, if $1/\\nu_k$ is much much smaller than other important time-scales, we can approximate, $ e^{-\\nu_k t} \\approx \\delta(t)/\\nu_k$, and $C(t)=\\sum_{k=N_k}^{\\infty} \\frac{c_k}{\\nu_k} \\delta(t)$\nIt is convenient to calculate the whole sum $C(t)=\\sum_{k=0}^{\\infty} \\frac{c_k}{\\nu_k} = 2 \\lambda / (\\beta \\gamma) - i\\lambda $, and subtract off the contribution from the finite number of Matsubara terms that are kept in the hierarchy, and treat the residual as a Lindblad.\nThis is clearer if we plot the correlation function with a large number of Matsubara terms:", "def plot_correlation_expansion_divergence(): \n \"\"\" We plot the correlation function with a large number of Matsubara terms to show that\n the real part is slowly diverging at t = 0.\n \"\"\"\n t = linspace(0, 2, 100)\n\n # correlation coefficients with 15k and 2 terms\n corr_15k = dl_corr_approx(t, 15_000)\n corr_2 = dl_corr_approx(t, 2)\n\n fig, ax1 = plt.subplots(figsize=(12, 7))\n\n ax1.plot(t, np.real(corr_2), color=\"b\", linewidth=3, label= r\"Mats = 2 real\")\n ax1.plot(t, np.imag(corr_2), color=\"r\", linewidth=3, label= r\"Mats = 2 imag\")\n ax1.plot(t, np.real(corr_15k), \"b--\", linewidth=3, label= r\"Mats = 15000 real\")\n ax1.plot(t, np.imag(corr_15k), \"r--\", linewidth=3, label= r\"Mats = 15000 imag\")\n\n ax1.set_xlabel(\"t\")\n ax1.set_ylabel(r\"$C$\")\n ax1.legend()\n \nplot_correlation_expansion_divergence()", "Let us evaluate the result including this Ishizaki-Tanimura terminator:", "# Run HEOM solver and include the Ishizaki-Tanimura terminator\n\n# Notes:\n#\n# * when using the built-in DrudeLorentzBath, the terminator (L_bnd) is available\n# from bath.terminator().\n#\n# * in the legacy HSolverDL function the terminator is included automatically if\n# the parameter bnd_cut_approx=True is used.\n\nop = -2*spre(Q)*spost(Q.dag()) + spre(Q.dag()*Q) + spost(Q.dag()*Q)\n\napprox_factr = ((2 * lam / (beta * gamma)) - 1j*lam) \n\napprox_factr -= lam * gamma * (-1.0j + cot(gamma / (2 * T)))/gamma\nfor k in range(1,Nk+1):\n vk = 2 * np.pi * k * T\n \n approx_factr -= ((4 * lam * gamma * T * vk / (vk**2 - gamma**2))/ vk)\n\nL_bnd = -approx_factr*op\n\nLtot = -1.0j*(spre(Hsys)-spost(Hsys)) + L_bnd\nLtot = liouvillian(Hsys) + L_bnd\n\noptions = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nwith timer(\"RHS construction time\"):\n bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)\n HEOMMatsT = HEOMSolver(Ltot, bath, NC, options=options)\n\nwith timer(\"ODE solver time\"):\n resultMatsT = HEOMMatsT.run(rho0, tlist)\n\nplot_result_expectations([\n (resultMatsT, P11p, 'b', \"P11 Mats + Term\"),\n (resultMatsT, P12p, 'r', \"P12 Mats + Term\"),\n]);", "Or using the built-in Drude-Lorentz bath we can write simply:", "options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nwith timer(\"RHS construction time\"):\n bath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)\n _, terminator = bath.terminator()\n Ltot = liouvillian(Hsys) + terminator\n HEOM_dlbath_T = HEOMSolver(Ltot, bath, NC, options=options)\n\nwith timer(\"ODE solver time\"):\n result_dlbath_T = HEOM_dlbath_T.run(rho0, tlist)\n\nplot_result_expectations([\n (result_dlbath_T, P11p, 'b', \"P11 Mats (DrudeLorentzBath + Term)\"),\n (result_dlbath_T, P12p, 'r', \"P12 Mats (DrudeLorentzBath + 
Term)\"),\n]);", "We can compare the solution obtained from the QuTiP Bloch-Redfield solver:", "DL = (\n f\"2*pi* 2.0 * {lam} / (pi * {gamma} * {beta}) if (w == 0) else \"\n f\"2*pi*(2.0*{lam}*{gamma} *w /(pi*(w**2+{gamma}**2))) * ((1/(exp((w) * {beta})-1))+1)\"\n)\noptions = Options(nsteps=15000, store_states=True,rtol=1e-12,atol=1e-12)\n\nwith timer(\"ODE solver time\"):\n resultBR = brmesolve(Hsys, rho0, tlist, a_ops=[[sigmaz(), DL]], options=options)\n\nplot_result_expectations([\n (resultMats, P11p, 'b', \"P11 Mats\"),\n (resultMats, P12p, 'r', \"P12 Mats\"),\n (resultMatsT, P11p, 'b--', \"P11 Mats + Term\"),\n (resultMatsT, P12p, 'r--', \"P12 Mats + Term\"),\n (resultBR, P11p, 'g--', \"P11 Bloch Redfield\"),\n (resultBR, P12p, 'g--', \"P12 Bloch Redfield\"),\n]);\n\n# XXX: We should probably remove this at some point and make a separate notebook(s) for\n# generating plots for the paper.\n\nfig = plot_result_expectations([\n (resultMats, P11p, 'b', \"P11 Mats\"),\n (resultMats, P12p, 'r', \"P12 Mats\"),\n]);\n\nfig.savefig(\"figures/docsfig1.png\")", "Padé decomposition\nThe Matsubara decomposition is not the only option. We can also use the faster-converging Pade decomposition.", "def deltafun(j,k):\n if j == k: \n return 1.\n else:\n return 0.\n\ndef pade_eps(lmax):\n Alpha = np.zeros((2 * lmax, 2 * lmax))\n for j in range(2 * lmax):\n for k in range(2 * lmax):\n # fermionic (see other example notebooks):\n # Alpha[j][k] = (deltafun(j,k+1)+deltafun(j,k-1))/sqrt((2*(j+1)-1)*(2*(k+1)-1))\n # bosonic:\n Alpha[j][k] = (deltafun(j,k+1)+deltafun(j,k-1))/sqrt((2*(j+1)+1)*(2*(k+1)+1))\n \n eigvalsA = eigvalsh(Alpha)\n eps = [-2/val for val in eigvalsA[0: lmax]]\n return eps\n\ndef pade_chi(lmax):\n AlphaP = np.zeros((2 * lmax - 1, 2 * lmax - 1))\n for j in range(2 * lmax - 1):\n for k in range(2 * lmax - 1):\n # fermionic:\n # AlphaP[j][k] = (deltafun(j,k+1)+deltafun(j,k-1))/sqrt((2*(j+1)+1)*(2*(k+1)+1))\n # bosonic [this is +3 because +1 (bose) + 2*(+1)(from bm+1)]:\n AlphaP[j][k] = (deltafun(j,k+1)+deltafun(j,k-1))/sqrt((2*(j+1)+3)*(2*(k+1)+3))\n\n eigvalsAP = eigvalsh(AlphaP)\n chi = [-2/val for val in eigvalsAP[0: lmax - 1]]\n return chi\n\ndef pade_kappa_epsilon(lmax):\n eps = pade_eps(lmax)\n chi = pade_chi(lmax)\n \n kappa = [0]\n prefactor = 0.5 * lmax * (2 * (lmax + 1) + 1)\n\n for j in range(lmax):\n term = prefactor\n for k in range(lmax - 1):\n term *= (chi[k]**2 - eps[j]**2) / (eps[k]**2 - eps[j]**2 + deltafun(j, k))\n\n for k in range(lmax-1, lmax):\n term /= (eps[k]**2 - eps[j]**2 + deltafun(j, k))\n\n kappa.append(term)\n \n epsilon = [0] + eps\n\n return kappa, epsilon\n\ndef pade_corr(tlist, lmax):\n kappa, epsilon = pade_kappa_epsilon(lmax)\n \n eta_list = [lam * gamma * (cot(gamma * beta / 2.0) - 1.0j)]\n gamma_list = [gamma]\n \n if lmax > 0:\n for l in range(1, lmax + 1):\n eta_list.append((kappa[l]/beta)*4*lam*gamma*(epsilon[l]/beta)/((epsilon[l]**2/beta**2)-gamma**2))\n gamma_list.append(epsilon[l]/beta)\n \n c_tot = []\n for t in tlist:\n c_tot.append(sum([eta_list[l]*exp(-gamma_list[l]*t) for l in range(lmax+1)]))\n return c_tot, eta_list, gamma_list\n\n\ntlist_corr = linspace(0, 2, 100)\ncppLP, etapLP, gampLP = pade_corr(tlist_corr, 2)\ncorr_15k = dl_corr_approx(tlist_corr, 15_000)\ncorr_2k = dl_corr_approx(tlist_corr, 2)\n\nfig, ax1 = plt.subplots(figsize=(12, 7))\nax1.plot(tlist_corr, real(cppLP), color=\"b\", linewidth=3, label= r\"real pade 2 terms\")\nax1.plot(tlist_corr, real(corr_15k), \"r--\", linewidth=3, label= r\"real mats 15000 
terms\")\n#ax1.plot(tlist_corr, imag(corr_15k), \"r--\", linewidth=3, label= r\"imag mats 15000 terms\")\nax1.plot( tlist_corr,real(corr_2k), \"g--\", linewidth=3, label= r\"real mats 2 terms\")\n#ax1.plot(tlist_corr, imag(cppL), color=\"r\", linewidth=3, label= r\"imag mats 2 terms\")\n\nax1.set_xlabel(\"t\")\nax1.set_ylabel(r\"$C$\")\nax1.legend()\n\nfig, ax1 = plt.subplots(figsize=(12, 7))\n\nax1.plot(tlist_corr, real(cppLP) - real(corr_15k), color=\"b\", linewidth=3, label= r\"pade error\")\nax1.plot(tlist_corr, real(corr_2k) - real(corr_15k),\"r--\", linewidth=3, label= r\"mats error\")\n\nax1.set_xlabel(\"t\")\nax1.set_ylabel(r\"Error\")\nax1.legend();\n\n# put pade parameters in lists for heom solver\nckAR = [real(eta) +0j for eta in etapLP]\nckAI = [imag(etapLP[0]) + 0j]\nvkAR = [gam +0j for gam in gampLP]\nvkAI = [gampLP[0] + 0j]\n\noptions = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nwith timer(\"RHS construction time\"):\n bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)\n HEOMPade = HEOMSolver(Hsys, bath, NC, options=options)\n\nwith timer(\"ODE solver time\"):\n resultPade = HEOMPade.run(rho0, tlist)\n\nplot_result_expectations([\n (resultMats, P11p, 'b', \"P11 Mats\"),\n (resultMats, P12p, 'r', \"P12 Mats\"),\n (resultMatsT, P11p, 'y', \"P11 Mats + Term\"),\n (resultMatsT, P12p, 'g', \"P12 Mats + Term\"),\n (resultPade, P11p, 'b--', \"P11 Pade\"),\n (resultPade, P12p, 'r--', \"P12 Pade\"),\n]);", "The Padé decomposition of the Drude-Lorentz bath is also available via a built-in class, DrudeLorentzPadeBath bath. Like DrudeLorentzBath, the one can obtain the terminator by calling bath.terminator().\nBelow we show how to use the built-in Padé Drude-Lorentz bath and its terminator (although the termintor does not provide much improvement here, because the Padé expansion already fits the correlation function well):", "options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)\n\nwith timer(\"RHS construction time\"):\n bath = DrudeLorentzPadeBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)\n _, terminator = bath.terminator()\n Ltot = liouvillian(Hsys) + terminator\n HEOM_dlpbath_T = HEOMSolver(Ltot, bath, NC, options=options)\n\nwith timer(\"ODE solver time\"):\n result_dlpbath_T = HEOM_dlpbath_T.run(rho0, tlist)\n\nplot_result_expectations([\n (result_dlpbath_T, P11p, 'b', \"P11 Padé (DrudeLorentzBath + Term)\"),\n (result_dlpbath_T, P12p, 'r', \"P12 Padé (DrudeLorentzBath + Term)\"),\n]);", "Next we do fitting of correlation functions, and compare the Matsubara and Pade decompositions\nThis is not efficient for this example, but can be extremely useful in situations where large number of\nexponents are needed (e.g., near zero temperature).\nFirst we collect a large sum of matsubara terms for many time steps:", "tlist2 = np.linspace(0, 2, 10000)\n\ncorr_15k_t10k = dl_corr_approx(tlist2, 15_000)\n\ncorrRana = np.real(corr_15k_t10k)\ncorrIana = np.imag(corr_15k_t10k)", "We then fit this sum with standard least-squares approach:", "from scipy.optimize import curve_fit\ndef wrapper_fit_func(x, N, *args):\n a, b = list(args[0][:N]), list(args[0][N:2*N])\n return fit_func(x, a, b, N)\n\n# actual fitting function\ndef fit_func(x, a, b, N):\n tot = 0\n for i in range(N):\n tot += a[i]*np.exp(b[i]*x)\n return tot\n\n\ndef fitter(ans, tlist, k):\n # the actual computing of fit\n popt = []\n pcov = [] \n # tries to fit for k exponents\n for i in range(k):\n params_0 = [0]*(2*(i+1))\n upper_a = abs(max(ans, key = abs))*10\n #sets initial guess\n guess = []\n 
aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)\n bguess = [0]*(i+1)\n guess.extend(aguess)\n guess.extend(bguess)\n # sets bounds\n b_lower = []\n alower = [-upper_a]*(i+1)\n blower = [-np.inf]*(i+1)\n b_lower.extend(alower)\n b_lower.extend(blower)\n # sets higher bound\n b_higher = []\n ahigher = [upper_a]*(i+1)\n bhigher = [0]*(i+1)\n b_higher.extend(ahigher)\n b_higher.extend(bhigher)\n param_bounds = (b_lower, b_higher)\n p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, \\\n params_0), tlist, ans, p0=guess, sigma=[0.01 for t in tlist2], bounds = param_bounds,maxfev = 1e8)\n popt.append(p1)\n pcov.append(p2)\n return popt\n\n# function that evaluates values with fitted params at\n# given inputs\ndef checker(tlist, vals):\n y = []\n for i in tlist:\n # print(i)\n y.append(wrapper_fit_func(i, int(len(vals)/2), vals))\n return y\n\n# number of exponents to use for real part\nk = 4\npopt1 = fitter(corrRana, tlist2, k)\ncorrRMats = np.real(dl_corr_approx(tlist2, Nk))\n\nfor i in range(k):\n y = checker(tlist2, popt1[i])\n plt.plot(tlist2, corrRana, tlist2, y, tlist2, corrRMats) \n plt.show()\n\n# number of exponents for imaginary part\nk1 = 1\npopt2 = fitter(corrIana, tlist2, k1)\nfor i in range(k1):\n y = checker(tlist2, popt2[i])\n plt.plot(tlist2, corrIana, tlist2, y)\n plt.show() \n\n# Set the exponential coefficients from the fit parameters\n\nckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]\nckAR = [x+0j for x in ckAR1]\n\nvkAR1 = list(popt1[k-1])[len(list(popt1[k-1]))//2:]\nvkAR = [-x+0j for x in vkAR1]\n\nckAI1 = list(popt2[k1-1])[:len(list(popt2[k1-1]))//2]\nckAI = [x+0j for x in ckAI1]\n\nvkAI1 = list(popt2[k1-1])[len(list(popt2[k1-1]))//2:]\nvkAI = [-x+0j for x in vkAI1]\n\n# overwrite imaginary fit with analytical value (not much reason to use the fit for this)\n\nckAI = [lam * gamma * (-1.0) + 0.j]\nvkAI = [gamma+0.j]\n\n# The BDF ODE solver method here is faster because we have a slightly stiff problem\n# We set NC=8 because we are keeping more exponents\n\noptions = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12, method=\"bdf\") \nNC = 8\n\nwith timer(\"RHS construction time\"):\n bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)\n HEOMFit = HEOMSolver(Hsys, bath, NC, options=options)\n \nwith timer(\"ODE solver time\"):\n resultFit = HEOMFit.run(rho0, tlist)\n\nplot_result_expectations([\n (resultFit, P11p, 'b', \"P11 Fit\"),\n (resultFit, P12p, 'r', \"P12 Fit\"),\n]);", "Here we construct a reaction coordinate inspired model to capture the steady-state behavior,\nand compare to the HEOM prediction. This result is more accurate for narrow spectral densities. 
Both the population and coherence from this cell are used in the final plot below.", "dot_energy, dot_state = Hsys.eigenstates()\ndeltaE = dot_energy[1] - dot_energy[0]\n\ngamma2 = deltaE / (2 * np.pi * gamma)\nwa = 2 * np.pi * gamma2 * gamma # reaction coordinate frequency\ng = np.sqrt(np.pi * wa * lam / 2.0) # reaction coordinate coupling\ng = np.sqrt(np.pi * wa * lam / 4.0) # reaction coordinate coupling Factor over 2 because of diff in J(w) (I have 2 lam now)\n#nb = (1 / (np.exp(wa/w_th) - 1))\n\nNRC = 10\n\nHsys_exp = tensor(qeye(NRC), Hsys)\nQ_exp = tensor(qeye(NRC), Q)\na = tensor(destroy(NRC), qeye(2))\n\nH0 = wa * a.dag() * a + Hsys_exp\n# interaction\nH1 = (g * (a.dag() + a) * Q_exp)\n\nH = H0 + H1\n\n#print(H.eigenstates())\nenergies, states = H.eigenstates()\nrhoss = 0*states[0]*states[0].dag()\nfor kk, energ in enumerate(energies):\n rhoss += (states[kk]*states[kk].dag()*exp(-beta*energies[kk])) \n\n#rhoss = (states[0]*states[0].dag()*exp(-beta*energies[0]) + states[1]*states[1].dag()*exp(-beta*energies[1]))\n\nrhoss = rhoss/rhoss.norm()\n\nP12RC = tensor(qeye(NRC), basis(2,0) * basis(2,1).dag())\n\nP12RC = expect(rhoss,P12RC)\n\n\nP11RC = tensor(qeye(NRC), basis(2,0) * basis(2,0).dag())\n\nP11RC = expect(rhoss,P11RC)\n\n# XXX: Decide what to do with this cell\n\nmatplotlib.rcParams['figure.figsize'] = (7, 5)\nmatplotlib.rcParams['axes.titlesize'] = 25\nmatplotlib.rcParams['axes.labelsize'] = 30\nmatplotlib.rcParams['xtick.labelsize'] = 28\nmatplotlib.rcParams['ytick.labelsize'] = 28\nmatplotlib.rcParams['legend.fontsize'] = 28\nmatplotlib.rcParams['axes.grid'] = False\nmatplotlib.rcParams['savefig.bbox'] = 'tight'\nmatplotlib.rcParams['lines.markersize'] = 5\nmatplotlib.rcParams['font.family'] = 'STIXgeneral' \nmatplotlib.rcParams['mathtext.fontset'] = 'stix'\nmatplotlib.rcParams[\"font.serif\"] = \"STIX\"\nmatplotlib.rcParams['text.usetex']=False\n\n# XXX: Decide what to do with this cell\n\nfig, axes = plt.subplots(2, 1, sharex=False, figsize=(12,15))\n\nplt.sca(axes[0])\nplt.yticks([np.real(P11RC), 0.6, 1.0], [0.32, 0.6, 1])\n\nplot_result_expectations([\n (resultBR, P11p, 'y-.', \"Bloch-Redfield\"),\n (resultMats, P11p, 'b', \"Matsubara $N_k=2$\"),\n (resultMatsT, P11p, 'g--', \"Matsubara $N_k=2$ & Terminator\", {\"linewidth\": 3}),\n (resultFit, P11p, 'r', r\"Fit $N_f = 4$, $N_k=15\\times 10^3$\", {\"dashes\": [3,2]}),\n], axes=axes[0])\naxes[0].plot(tlist, [np.real(P11RC) for t in tlist], 'black', ls='--',linewidth=2, label=\"Thermal\")\n\naxes[0].locator_params(axis='y', nbins=4)\naxes[0].locator_params(axis='x', nbins=4)\n\naxes[0].set_ylabel(r\"$\\rho_{11}$\", fontsize=30)\naxes[0].legend(loc=0)\n\naxes[0].text(5, 0.9, \"(a)\", fontsize=30)\naxes[0].set_xlim(0,50)\n\nplt.sca(axes[1])\nplt.yticks([np.real(P12RC), -0.2, 0.0, 0.2], [-0.33, -0.2, 0, 0.2])\n\nplot_result_expectations([\n (resultBR, P12p, 'y-.', \"Bloch-Redfield\"),\n (resultMats, P12p, 'b', \"Matsubara $N_k=2$\"),\n (resultMatsT, P12p, 'g--', \"Matsubara $N_k=2$ & Terminator\", {\"linewidth\": 3}),\n (resultFit, P12p, 'r', r\"Fit $N_f = 4$, $N_k=15\\times 10^3$\", {\"dashes\": [3,2]}),\n], axes=axes[1])\naxes[1].plot(tlist, [np.real(P12RC) for t in tlist], 'black', ls='--', linewidth=2, label=\"Thermal\")\n\naxes[1].locator_params(axis='y', nbins=4)\naxes[1].locator_params(axis='x', nbins=4)\n\naxes[1].text(5, 0.1, \"(b)\", fontsize=30)\n\naxes[1].set_xlabel(r'$t \\Delta$', fontsize=30)\naxes[1].set_ylabel(r'$\\rho_{01}$', 
fontsize=30)\n\naxes[1].set_xlim(0,50)\n\nfig.tight_layout()\n#fig.savefig(\"fig1.pdf\")\n\nfrom qutip.ipynbtools import version_table\n\nversion_table()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
mne-tools/mne-tools.github.io
0.15/_downloads/plot_read_epochs.ipynb
bsd-3-clause
[ "%matplotlib inline", "Reading epochs from a raw FIF file\nThis script shows how to read the epochs from a raw file given\na list of events. For illustration, we compute the evoked responses\nfor both MEG and EEG data by averaging all the epochs.", "# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n# Matti Hamalainen <msh@nmr.mgh.harvard.edu>\n#\n# License: BSD (3-clause)\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()", "Set parameters", "raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,\n exclude='bads')\n\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=picks, baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))\n\nevoked = epochs.average() # average epochs to get the evoked response", "Show result", "evoked.plot()" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
thalesians/tsa
src/jupyter/python/processes.ipynb
apache-2.0
[ "Processes\nIntroduction\nIn simulation and modelling we encounter a wide range of stochastic processes. But most fall into a few common categories: Ito processes, martingales, Markov processes, Gaussian processes, etc. We attempt to take this into account in our treatment of stochastic processes in thalesians.tsa, where we represent different categories of stochastic processes with distinct abstract data types.\nBefore we proceed, we need to enable Matplotlib to inline its graphs in this Jupyter notebook...", "%matplotlib inline", "...and import some Python modules:", "import os, sys\nsys.path.append(os.path.abspath('../../main/python'))\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport thalesians.tsa.numpyutils as npu\nimport thalesians.tsa.processes as proc\nimport thalesians.tsa.randomness as rnd\nimport thalesians.tsa.simulation as sim", "Ito processes\nAn Ito process is defined to be an adapted stochastic process that can be expressed as the sum of an integral with respect to a Wiener process and an integral with respect to time,\n$$X_t = X_0 + \\int_0^t \\mu_s \\, ds + \\int_0^t \\sigma_s \\, dW_s,$$\nor, in differential form,\n$$dX_s = \\mu_s \\, ds + \\sigma_s \\, dW_s,$$\nwhere $W$ is a Wiener process, $\\sigma$ a predictable $W$-integrable process, $\\mu$ predictable and Lebesgue-integrable. The integrability conditions can be expressed as\n$$\\int_0^t (\\sigma_s^2 + |\\mu_s|) \\, ds < \\infty.$$\n$\\mu$ and $\\sigma$ are allowed to depend both on the time and current state, so we can write\n$$X_t = X_0 + \\int_0^t \\mu(s, X_s) \\, ds + \\int_0^t \\sigma(s, X_s) \\, dW_s.$$\nThe function $\\mu$ is referred to as drift, the function $\\sigma$ as diffusion. The ItoProcess can thus be specified by providing these two functions:", "X = proc.ItoProcess(drift=lambda t, x: -x, diffusion=lambda t, x: .25)", "It can then be approximated with a stochastic time discrete approximation, such as the Euler-Maruyama strong Taylor approximation scheme:", "rnd.random_state(np.random.RandomState(seed=42), force=True);\nts = []; xs = []\nfor t, x in sim.EulerMaruyama(process=X, times=sim.xtimes(0., 100.)):\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Since in this particular case the diffusion coefficient is constant, we could have defined X as", "X = proc.ItoProcess(drift=lambda t, x: -x, diffusion=.25)", "Solved Ito processes\nIn case the stochastic differential equation\n$$X_t = X_0 + \\int_0^t \\mu(s, X_s) \\, ds + \\int_0^t \\sigma(s, X_s) \\, dW_s$$\nhas a solution, it may be possible to compute the above integrals analytically. Although the rigorous theory of SDE requires one to specify what exactly one means by a \"solution\", we shall reserve the term solved Ito process for those Ito processes where these integrals can be computed analytically. Such processes should inherit from the SolvedItoProcess class and override its abstract method", "def propagate(self, time0, value0, time, variate=None, state0=None, random_state=None):\n raise NotImplementedError()", "Given the time time0 and the process's value at that time, value0, and (if the process is stateful) the process's state, state0, at time0, as well as the random variate variate corresponding to the actual increment in the driving Brownian motion $W$, the propagate method will return the value of the process at time, time &gt;= time0. 
If propagate is implemented, there is no need to resort to approximate schemes, such as the Euler-Maruyama scheme demonstrated above.\nIn fact, the Ito process in our example above happens to be an Ornstein-Uhlenbeck process, whose solution is well known:", "X = proc.OrnsteinUhlenbeckProcess(transition=1, vol=.25)", "We make sure that we generate it with the same random seed...", "rnd.random_state(np.random.RandomState(seed=42), force=True);", "...and verify that the graph is unchanged when we apply EulerMaruyama to this process, now instantiated as an OrnsteinUhlenbeckProcess, rather than an ItoProcess:", "ts = []; xs = []\nfor t, x in sim.EulerMaruyama(process=X, times=sim.xtimes(0., 100.)):\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Instead of looping explicitly, we could have used the method run:", "rnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, times=sim.xtimes(0., 100.))\ndf = sim.run(em)\nplt.plot(df);", "Now, since", "isinstance(X, proc.SolvedItoProcess)", "we don't need to apply Euler-Maruyama to produce a trajectory of this process and can use the propagate method instead:", "rnd.random_state(np.random.RandomState(seed=42), force=True);\nx = 0.\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(1., 100., 1.), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Markov processes\nInformally, a Markov process models the motion of a particle that moves around in a measurable space in a memoryless way. Such processes are extremely important and merit their own theory. The Wiener process and the Ornstein-Uhlenbeck process illustrated above are special cases. They also deserve special treatment in filtering theory &mdash; both Kalman and particle filtering.\nIn thalesians.tsa, Markov processes inherit from the abstract class MarkovProcess. Thus they provide the method", "def propagate_distr(self, time0, distr0, time):\n pass", "This method represents the transition kernel of the Markov process: given the marginal distribution at time0, distr0, and time &gt;= time0, the method returns the marginal distribution at time.\nSolved Ito Markov processes\nProcesses that are both \"solved Ito\" (and therefore children of SolvedItoProcess) and Markov (and therefore children of MarkovProcess) should inherit from SolvedItoMarkovProcess. 
By default, their propagate is implemented in terms of their propagate_distr using the Dirac delta:", "def propagate(self, time0, value0, time, variate=None, state0=None, random_state=None):\n if self.noisedim != self.processdim:\n raise NotImplementedError('Cannot utilize the propagate_distr of the Markov process in propagate if noisedim != processdim; provide a custom implementation')\n if time == time0: return npu.tondim2(value0, ndim1tocol=True, copy=True)\n value0 = npu.tondim2(value0, ndim1tocol=True, copy=False)\n variate = npu.tondim2(variate, ndim1tocol=True, copy=False)\n distr = self.propagate_distr(time, time0, distrs.NormalDistr.creatediracdelta(value0))\n return distr.mean + np.dot(np.linalg.cholesky(distr.cov), variate)", "Gaussian and Gauss-Markov processes\nGaussian processes play an important role in mathematical finance, machine learning, and in stochastic filtering, where the Kalman filter is the solution of this very special case of the filtering problem &mdash; the linear-Gaussian case.\nSeveral Gauss-Markov processes are implemented in thalesians.tsa, notably the WienerProcess and OrnsteinUhlenbeckProcess, the latter being the only nontrivial stationary Gauss-Markov process. The Ornstein-Uhlenbeck process is ubiquitous in portfolio management and merits special treatment.\nSpecific processes\nWiener process\nUnivariate standard Wiener process", "X = proc.WienerProcess()\nx0 = 0.\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=1., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = [x0]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 1., 1E-3), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Univariate variance-scaled Wiener process with drift", "X = proc.WienerProcess(mean=3., vol=4.)\nx0 = 7.\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=5., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = [x0]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 5., 1E-3), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Multivariate variance-scaled, correlated Wiener process with drift", "X = proc.WienerProcess.create_from_cov(mean=[3., 5.], cov=[[16., -8.], [-8., 16.]])\nx0 = npu.col(7., 8.)\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=5., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = x0\nts = [0.]; xs = [x0.flatten()]\nfor t, v in zip(sim.xtimes(0., 5., 1E-3), rnd.multivariate_normals(ndim=2)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Brownian Bridge\nStandard Brownian bridge", "X = proc.BrownianBridge()\n\nrnd.random_state(np.random.RandomState(seed=42), force=True);\nts = []; xs = []\nfor t, x in sim.EulerMaruyama(process=X, initial_value=0., times=sim.xtimes(0., 1., .005)):\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True);\nx = [0.]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 1., 
.005), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Generalized Brownian bridge", "X = proc.BrownianBridge(10., 15., 0., 10.)\n\nrnd.random_state(np.random.RandomState(seed=42), force=True);\nts = []; xs = []\nfor t, x in sim.EulerMaruyama(process=X, initial_value=10., times=sim.xtimes(0., 10., .005)):\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True);\nx = [10.]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 10., .005), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Multivariate Brownian bridge", "x0 = npu.col(10., 7.)\ncov = [[1., -2.], [-2., 9.]]\nX = proc.BrownianBridge.create_from_cov(x0, npu.col(15., 3.), 0., 10., cov)\n\nrnd.random_state(np.random.RandomState(seed=42), force=True);\nts = []; xs = []\nfor t, x in sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(0., 10., .005)):\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);\n\nxs[-1]\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = x0\nts = [0.]; xs = [x0.flatten()]\nfor t, v in zip(sim.xtimes(0., 10., .005), rnd.multivariate_normals(ndim=2)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);\n\nxs[-1]", "A more efficient method for simulating a Brownian bridge", "start_time = 0.\nend_time = 10.\ntimes = np.linspace(0., 10., 1000)\nstart_value = 10.\nend_value = 15.\n\ntimes_col = npu.to_ndim_2(times, ndim_1_to_col=True)\n\nmean = start_value + (times_col - start_time) / (end_time - start_time) * (end_value - start_value)\n\ncov = np.array([[(end_time - max(times[i], times[j])) * (min(times[i], times[j]) - start_time) / (end_time - start_time) for j in range(len(times))] for i in range(len(times))])\n\nvalues = rnd.multivariate_normal(mean, cov)\n\nall_times = np.concatenate(([start_time], times, [end_time]))\n\nall_values = np.concatenate(([start_value], values, [end_value]))\n\nplt.plot(all_times, all_values);", "Geometric Brownian motion\nUnivariate geometric Brownian motion", "X = proc.GeometricBrownianMotion()\nx0 = .3\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=1., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = [x0]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 1., 1E-3), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);\n\nX = proc.WienerProcess()\nx0 = .3\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=1., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);", "Multivariate variance-scaled, correlated geometric Brownian motion with drift", "X = proc.GeometricBrownianMotion.create_from_pct_cov(pct_drift=[3., 5.], pct_cov=[[16., -8.], [-8., 16.]])\nx0 = npu.col(7., 8.)\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=1., step=1E-3))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = x0\nts = [0.]; xs = [x0.flatten()]\nfor t, v in zip(sim.xtimes(0., 1., 1E-3), rnd.multivariate_normals(ndim=2)):\n x = 
X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Ornstein-Uhlenbeck process\nUnivariate Ornstein-Uhlenbeck process", "X = proc.OrnsteinUhlenbeckProcess(transition=1., vol=1.)\nx0 = 0.\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=5., step=.01))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = [0.]\nts = [0.]; xs = [x]\nfor t, v in zip(sim.xtimes(0., 5., .01), rnd.multivariate_normals(ndim=1)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);", "Multivariate Ornstein-Uhlenbeck process", "X = proc.OrnsteinUhlenbeckProcess.create_from_cov(\n transition=[[10., 0.], [0., 10.]],\n mean=[3., 5.],\n cov=[[9., -7.5], [-7.5, 25.]])\nx0 = npu.col(7., 8.)\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nem = sim.EulerMaruyama(process=X, initial_value=x0, times=sim.xtimes(start=0., stop=5., step=.01))\ndf = sim.run(em)\nplt.plot(df);\n\nrnd.random_state(np.random.RandomState(seed=42), force=True)\nx = x0\nts = [0.]; xs = [x0.flatten()]\nfor t, v in zip(sim.xtimes(0., 5., .01), rnd.multivariate_normals(ndim=2)):\n x = X.propagate(ts[-1], x, t, v)\n ts.append(t); xs.append(x.flatten())\nplt.plot(ts, xs);" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
wenduowang/git_home
python/MSBA/intro/HW3/.ipynb_checkpoints/HW3_wenduowang-checkpoint.ipynb
gpl-3.0
[ "from pandas import Series, DataFrame\nimport pandas as pd\nimport warnings\nwarnings.filterwarnings('ignore')\n%pylab inline", "Question 1: Read in data\nRead in the data from \"gold.txt\" and \"labels.txt\".\nSince there are no headers in the files, names parameter should be set explicitly.\n\nDuplicate records in both dataframes are kept, for repeated test on the same url provides enables more precise information about the turks' discernibility", "gold = pd.read_table(\"gold.txt\", names=[\"url\", \"category\"]).dropna()\nlabels = pd.read_table(\"labels.txt\", names=[\"turk\", \"url\", \"category\"]).dropna()", "Question 2: Split into two DataFrames\nTo determine if a url in labels is in gold, make a list of unique url in gold, and map the lambda expression on the url series in labels.", "url_list = gold[\"url\"].unique()\nlabels_on_gold = labels[labels[\"url\"].map(lambda s: s in url_list)]\nlabels_unknown = labels[labels[\"url\"].map(lambda s: s not in url_list)]", "Question 3: Compute accuracies of turks\n\nSince the computation is all on \"gold\" set url, \"labels_on_gold\" dataframe is used instead of \"labels\"\nMerge \"labels_on_gold\" with \"gold\" on url.\nCreate a new column correct in the new dataframe, and assign True where the \"turk\" rating is the same with the true rating.\nOptional: drop the rating columns to reduce the size of the dataframe\ngroupby on turk, and sum up the True records on correct for each turk, the returned value is a series\nvalue_counts on turk, a series of total rating numbers is returned.\nDivide the previous two series to get the rating accuracy of each turk\nCreate a new dataframe \"rater_goodness\" with the total rating number series and rating accuracy series, index by default set as turk", "rater_merged = pd.merge(\n labels_on_gold,\n gold,\n left_on=\"url\",\n right_on=\"url\",\n suffixes=[\"_1\", \"_2\"]\n )\n\nrater_merged[\"correct\"] = rater_merged[\"category_1\"] == rater_merged[\"category_2\"]\nrater_merged = rater_merged[[\"turk\", \"correct\"]]\ncorrect_counts = rater_merged.groupby(\"turk\")[\"correct\"].sum()\ntotal_counts = rater_merged[\"turk\"].value_counts()\navg_correctness = correct_counts/total_counts\nrater_goodness = pd.DataFrame({\"number_of_ratings\": total_counts, \"average_correctness\": avg_correctness})\nrater_goodness[:10]", "Question 4: Odds ratios\n\nUse \"map\" function on average_correctness to get $\\frac{average\\ correctness}{1 - average\\ correctness}$\nBy definition, when average_correctness = 1, the ratio should be assigned float(\"inf\")", "rater_goodness[\"odds\"] = rater_goodness[\"average_correctness\"].map(lambda x: x/(1.001-x))\nrater_goodness[:20]", "Question 5: Most accurate turks\n\nUse rater_goodness[\"number of ratings\"]&gt;=20 to select turks who rated at least 20 times.\nSort the list by average_correctness in descending order.\n.index.values is optional to return only turks, but for aesthetic reasons it is not applied.", "rater_goodness[rater_goodness[\"number_of_ratings\"]>=20].sort_values(by=\"average_correctness\", ascending=False)[:10]", "Question 6: Rating counts versus accuracy\nPlotting average_correctness against number of ratings makes it easier to have an general idea between the two variables. 
However, from the plot, it is difficult to identify a clear pattern.", "plot(rater_goodness['number_of_ratings'],\n rater_goodness['average_correctness'],\n marker='o',\n color='blue',\n linestyle='None')\nxlabel('number of ratings')\nylabel('average correctness')", "To quantitatively measure the linear correlation between number of ratings and average correctness, linear regression is used to draw insights.\nFrom the model summary, it is still difficult to establish reliable linear correlation between the two variables, since the coefficient of number of ratings is not significantly different from zero.\n\nstatsmodels and patsy modules are imported for linear regression", "import statsmodels.api as sm\nfrom patsy import dmatrices\n\ny, X = dmatrices('average_correctness ~ number_of_ratings', data=rater_goodness, return_type='dataframe')\nmodel = sm.OLS(y, X)\nresult = model.fit()\nprint result.summary()", "Question 7: Overall predicted odds\n\nDefine the cutpoint of top 25% turks in term of number of ratings using quantile(q=.75).\nMake a list of \"turk: number of ratings\"\nMake a mask to select records rated by top 25% turks using map function.\nSelect from the total \"labels\" data set the records rated by top 25% turks.\nMerge this dataframe with \"labels_unknown\" dataframe on url and category, duplicates dropped.\nNext merge the resulting dataframe with \"rater_goodness\" dataframe.\nFirst create a new turk column in \"rater_goodness\" dataframe from the index\nOnly select the records rated by top 25% turks from \"rater_goodness\" dataframe\nMerge the two dataframe on turk\nDrop duplicates and missing values\n\n\ngroupby the resulting dataframe on url and category.\nApply prod() on odds to calculate overall odds by url and category.\nhere odds is the \"overall odds\" as defined in the assignment description", "top_25_cutpoint = labels_on_gold[\"turk\"].value_counts().quantile(q=.75)\nturk_list = labels_on_gold[\"turk\"].value_counts()\n\nmask_1 = labels_unknown[\"turk\"].map(lambda s: turk_list[s]>=top_25_cutpoint if s in turk_list else False)\nlabels_bytop25 = labels_unknown[mask_1]\n\nrater_goodness[\"turk\"] = rater_goodness.index\n\nodds_top25 = rater_goodness[rater_goodness[\"turk\"].map(lambda s: turk_list[s]>=top_25_cutpoint if s in turk_list else False)]\n\noverall_odds = pd.merge(labels_bytop25,\n odds_top25,\n left_on=\"turk\",\n right_on=\"turk\",\n how=\"left\").dropna()\n\noverall_odds.groupby([\"url\", \"category\"])[[\"odds\"]].prod()[:10]", "Question 8: Predicted categories\n\nCreate a dataframe from the groupby object in the last question, containing url, category and overall odds.\nApply unstack to breakdown category from index to columns.\nTranspose the dataframe and get idxmax() on all columns, i.e. 
url, returned value is a series with url as index and np.array (\"odds\", category) as values.\nCreate a dataframe using the returned series, and convert the np.array into a string column \"top category\" by selecting the second element.\nCreate a new \"top odds\" column for the dataframe by max() on the transposed dataframe in step 2.", "overall_odds_df = overall_odds.groupby([\"url\", \"category\"])[[\"odds\"]].prod().unstack(\"category\").T.fillna(0)\nurl_rating = pd.DataFrame(overall_odds_df.idxmax())\nurl_rating[\"top category\"] = url_rating[0].map(lambda s: s[1])\nurl_rating = url_rating.set_index(url_rating.index.values)\nurl_rating[\"top odds\"] = overall_odds_df.max()\nurl_rating = url_rating[[\"top category\", \"top odds\"]]\nurl_rating[:10]", "Question 9: Predicted categories using more turks\n\nRepeat Question\\ 7 and Question\\ 8 to create a dataframe where url are rated by top 75% turks.\n > Here only the \"top category\" column is kept and named result_75\nTake out top category column from the dataframe from Question 8 and rename it result_25, and make it a dataframe.\nMerge the two dataframes on index.\nCreate a crosstab with the two columns as index and columns respectively.\nFrom the crosstab it can be seen that, the most errors are where the top 25% turks rated \"G\" but the top 75% turks rated \"P\" (836 occurences), \"G\" versus \"R\" (285 occurences), and \"P\" versus \"G\" (229 occurences).", "top_75_cutpoint = labels_on_gold[\"turk\"].value_counts().quantile(q=.25)\n\nmask_2 = labels_unknown[\"turk\"].map(lambda s: turk_list[s]>=top_75_cutpoint if s in turk_list else False)\nlabels_bytop75 = labels_unknown[mask_2]\n\nodds_top75 = rater_goodness[rater_goodness[\"turk\"].map(lambda s: turk_list[s]>=top_75_cutpoint if s in turk_list else False)]\n\noverall_odds_75 = pd.merge(labels_bytop75,\n odds_top75,\n left_on=\"turk\",\n right_on=\"turk\",\n how=\"left\").dropna()\n\noverall_odds_df_75 = overall_odds_75.groupby([\"url\", \"category\"])[[\"odds\"]].prod().unstack(\"category\").T.fillna(0)\n\nurl_rating_75 = pd.DataFrame(overall_odds_df_75.idxmax())\nurl_rating_75[\"result_75\"] = url_rating_75[0].map(lambda s: s[1])\nurl_rating_75 = pd.DataFrame(url_rating_75[\"result_75\"])\nurl_rating_75 = url_rating_75.set_index(url_rating_75.index.values)\n\nurl_rating_25 = pd.DataFrame({\"result_25\": url_rating[\"top category\"]})\n\nurl_rating_merged = pd.merge(url_rating_25,\n url_rating_75,\n left_index=True,\n right_index=True,\n ).dropna()\n\nurl_rating_crosstab = pd.crosstab(index=url_rating_merged[\"result_25\"],\n columns=url_rating_merged[\"result_75\"]\n )\n\nurl_rating_crosstab" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
CMU-CREATE-Lab/speck-ml
Speck Grimm comparison.ipynb
mit
[ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\n%matplotlib inline\nsns.set('notebook')\nsns.set_style('whitegrid')", "Import GRIMM data\nConvert epoch to date-times and visualize relationships in data", "grimm = pd.read_csv('data/humexp/Grimm.csv', index_col='EpochTime', header=False, names=['EpochTime','Count','PM1', 'PM10', 'PM2.5'])\ngrimm.index = pd.to_datetime((grimm.index.values*1e9).astype(int))\n\ngrimm.head()\n\nsns.pairplot(grimm)", "Import Speck data\nDrop extra rows and convert epoch timestamps to date-times", "speck1 = pd.read_csv('data/humexp/Speck1.csv', index_col='EpochTime', header=False, names=['EpochTime','Humidity', 'Concentration', 'Count', 'Raw', 'Temp'])\nspeck2 = pd.read_csv('data/humexp/Speck2.csv', index_col='EpochTime', header=False, names=['EpochTime','Humidity', 'Concentration', 'Count', 'Raw', 'Temp'])\n\nspeck1 = speck1.iloc[2:]\nspeck2 = speck2.iloc[1:]\n\nspeck1.index = pd.to_datetime((speck1.index.values*1e9).astype(int))\nspeck2.index = pd.to_datetime((speck2.index.values*1e9).astype(int))\n\nspeck1.head()", "Resample data to common interval of 1 minute", "speck1 = speck1.resample('1Min').dropna()\nspeck2 = speck2.resample('1Min').dropna()\ngrimm = grimm.resample('1Min').dropna()\n\nsns.jointplot(speck1['Concentration'].values, speck2['Concentration'].values)\nsns.jointplot(speck1['Concentration'].values, grimm['PM2.5'].values)\nsns.jointplot(speck2['Concentration'].values, grimm['PM2.5'].values)\n\nplt.subplot(121)\nplt.plot(grimm['PM2.5'])\nplt.plot(speck1['Concentration'], alpha=0.8)\nplt.plot(speck2['Concentration'], alpha=0.8)\nplt.subplot(122)\nplt.plot(speck1['Humidity'])\nplt.plot(speck2['Humidity'])", "Learning a better fit to PM2.5", "from sklearn.preprocessing import StandardScaler, PolynomialFeatures\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.svm import SVR\nfrom sklearn.linear_model import Ridge, LinearRegression", "Compare two predictors, SVM may overfit the training data, linear ridge regression will not be able to overfit if $d<<n$", "predictors = {'Ridge': make_pipeline(StandardScaler(), PolynomialFeatures(2), Ridge()),\n 'RBF SVM': make_pipeline(StandardScaler(), SVR(kernel='rbf', C=1e4, epsilon=1, degree=3))}\n\n# Note, RBF parameters were not tunes with a validation set, but with the test set. \n# This is more of an exploration and is not suitable for publication\n\nresults = {}\nX = speck1.iloc[:500].values\ny = grimm['PM2.5'].iloc[:500]\ntestX = speck1.iloc[500:].values\ntesty = grimm['PM2.5'].iloc[500:]\n\n#X = speck1.iloc[::2].values\n#y = grimm['PM2.5'].iloc[::2]\n#testX = speck1.iloc[1::2].values\n#testy = grimm['PM2.5'].iloc[1::2]\n\nfor label in predictors:\n regressor = predictors[label]\n regressor.fit(X, y)\n results[label] = regressor.predict(testX)\n\nplt.subplot(111)\nplt.plot(testy, label='Grimm')\nfor label in results:\n plt.plot(results[label], label=label, alpha=0.7)\nplt.legend()\n\nprint 'Training data fit scores'\nfor label in predictors:\n print label + ' ' + str(predictors[label].score(speck1.iloc[::2].values, grimm['PM2.5'].iloc[::2]))", "For each feautre (polynomial combination of features), what is the respective weight in the ridge regressor?", "print speck1.columns\nprint zip(predictors['Ridge'].steps[1][1].powers_, predictors['Ridge'].steps[2][1].coef_)" ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
ctroupin/OceanData_NoteBooks
PythonNotebooks/PlatformPlots/plot_CMEMS_profiler.ipynb
gpl-3.0
[ "The objective of this notebook is to show how to read and plot the data obtained with a profiler (or profiling buoy).", "%matplotlib inline\nimport netCDF4\nfrom netCDF4 import num2date\nimport numpy as np\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom matplotlib import colors\nfrom mpl_toolkits.basemap import Basemap", "Data reading\nThe data file is located in the datafiles directory.", "datadir = './datafiles/'\ndatafile = 'GL_PR_PF_3900150.nc'", "We extract only the spatial coordinates:", "with netCDF4.Dataset(datadir + datafile) as nc:\n lon = nc.variables['LONGITUDE'][:]\n lat = nc.variables['LATITUDE'][:]", "Location of the profiles\nIn this first plot we want to see the location of the profiles obtained with the profiler.", "mpl.rcParams.update({'font.size': 16})\nfig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nplt.plot(lon, lat, 'ko', ms=1)\nplt.show()", "The figure can be improved by adding the landmask and coastline.\nPlot on a map\nThe first thing to do is to create a projection using basemap. We can obtain the bounding box from the previous plot:", "lonmin, lonmax = ax.get_xlim()\nlatmin, latmax = ax.get_ylim()", "then we create the projection, slightly enlarging the longitude extension:", "m = Basemap(projection='merc', llcrnrlat=latmin, urcrnrlat=latmax,\n llcrnrlon=lonmin-2., urcrnrlon=lonmax+2., lat_ts=0.5*(lon.min()+lon.max()), resolution='i')", "The new figure is generated similarly to the previous one:", "lon2, lat2 = m(lon, lat)\nfig = plt.figure(figsize=(8,8))\nm.plot(lon2, lat2, 'ko', ms=3)\n\nm.drawcoastlines(linewidth=0.5, zorder=3)\nm.fillcontinents(zorder=2)\n\nm.drawparallels(np.arange(-90.,91.,2.), labels=[1,0,0,0], zorder=1)\nm.drawmeridians(np.arange(-180.,181.,3.), labels=[0,0,1,0], zorder=1)\nplt.show()", "Profile plot\nWe extract the salinity and the vertical coordinate used in the profile, the pressure in this case.<br/>\nFor the x-dimension, we will use the time.", "with netCDF4.Dataset(datadir + datafile) as nc:\n pressure = nc.variables['PRES'][:]\n pressure_name = nc.variables['PRES'].long_name\n pressure_units = nc.variables['PRES'].units\n salinity = nc.variables['PSAL'][:]\n salinity_name = nc.variables['PSAL'].long_name\n salinity_units = nc.variables['PSAL'].units\n time = nc.variables['TIME'][:]\n time_units = nc.variables['TIME'].units\n time_name = nc.variables['TIME'].long_name\n dates = num2date(time, units=time_units)", "We also have to set the colormap and the limits for the salinity.", "cmap = plt.cm.Spectral_r\nnorm = colors.Normalize(vmin=34, vmax=36)", "We specify the coordinates (time and pressure) and the salinity as the arguments of the scatter plot:\n* s=10 indicate the size of the dots,\n* c=salinity indicates which variable is used as the z-dimension (color)\n* edgecolor='None' means that no color is applied around the edge of the marker\n* cmap=cmap sets the colormap to cmap, defined before and\n* norm=norm sets the limits for the color scale.", "fig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nfor ntime in range(0, len(time)):\n plt.scatter(time[ntime]*np.ones_like(pressure[ntime,]), pressure[ntime,:], \n s=20, c=salinity[0,:], edgecolor='None', cmap=cmap, norm=norm)\nplt.ylim(0.0, 1200)\nplt.gca().invert_yaxis()\nplt.colorbar(extend='both')\nfig.autofmt_xdate()\nplt.ylabel(\"%s (%s)\" % (pressure_name, pressure_units))\nplt.xlabel(\"%s (%s)\" % (time_name, time_units))\nplt.show()", "The salinity has it's highest values near surface, but we can also see an increase between 800 and 1000 m 
depth. \nT-S diagram\nFor the temperature-salinity (T-S) diagram we need to load the temperature variable.", "with netCDF4.Dataset(datadir + datafile) as nc:\n temperature = nc.variables['TEMP'][:]\n temperature_name = nc.variables['TEMP'].long_name\n temperature_units = nc.variables['TEMP'].units", "The x and y labels for the plot are directly taken from the netCDF variable attributes.", "fig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nplt.plot(temperature, salinity, 'k.', markersize=2)\nplt.xlabel(\"%s (%s)\" % (temperature_name, temperature_units))\nplt.ylabel(\"%s (%s)\" % (salinity_name, salinity_units))\nplt.ylim(22.5, 42)\nplt.show()", "Adjusted variables\nThe profiler files also contain adjusted variables (pressure, temperature, salinity) which correspond to the variable after the application of a correction.\nWe will repeat the T-S diagram with the adjusted variables.", "with netCDF4.Dataset(datadir + datafile) as nc:\n temperature_adj = nc.variables['TEMP_ADJUSTED'][:]\n temperature_adj_name = nc.variables['TEMP_ADJUSTED'].long_name\n salinity_adj = nc.variables['PSAL_ADJUSTED'][:]\n salinity_adj_name = nc.variables['PSAL_ADJUSTED'].long_name\n\nfig = plt.figure(figsize=(8,8))\nax = plt.subplot(111)\nplt.plot(temperature, salinity, 'r.', markersize=2)\nplt.plot(temperature_adj, salinity_adj, 'k.', markersize=2)\nplt.xlabel(\"%s (%s)\" % (temperature_adj_name, temperature_units))\nplt.ylabel(\"%s (%s)\" % (salinity_adj_name, salinity_units))\nplt.ylim(22.5, 42)\nplt.show()", "3-D plot\nWe illustrate with a simple example how to have a 3-dimensional representation of the profiles.<br/>\nFirst we import the required modules.", "from mpl_toolkits.mplot3d import Axes3D", "Then the plot is easily obtained by specifying the coordinates (x, y, z) and the variables (salinity) to be plotted.", "fig = plt.figure(figsize=(8,8))\nax = fig.add_subplot(111, projection='3d')\nfor ntime in range(0, ntimes):\n plt.scatter(lon[ntime]*np.ones(ndepths), lat[ntime]*np.ones(ndepths), zs=-pressure[ntime,:], zdir='z', \n s=20, c=salinity[ntime,:], edgecolor='None', cmap=cmap, norm=norm)\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arsenovic/clifford
docs/tutorials/cga/clustering.ipynb
bsd-3-clause
[ "This notebook is part of the clifford documentation: https://clifford.readthedocs.io/.\nExample 2 Clustering Geometric Objects\nIn this example we will look at a few of the tools provided by the clifford package for (4,1) conformal geometric algebra (CGA) and see how we can use them in a practical setting to cluster geometric objects via the simple K-means clustering algorithm provided in clifford.tools\nAs before the first step in using the package for CGA is to generate and import the algebra:", "from clifford.g3c import *\nprint('e1*e1 ', e1*e1)\nprint('e2*e2 ', e2*e2)\nprint('e3*e3 ', e3*e3)\nprint('e4*e4 ', e4*e4)\nprint('e5*e5 ', e5*e5)", "The tools submodule of the clifford package contains a wide array of algorithms and tools that can be useful for manipulating objects in CGA. In this case we will be generating a large number of objects and then segmenting them into clusters.\nWe first need an algorithm for generating a cluster of objects in space. We will construct this cluster by generating a random object and then repeatedly disturbing this object by some small fixed amount and storing the result:", "from clifford.tools.g3c import *\nimport numpy as np\n\ndef generate_random_object_cluster(n_objects, object_generator, max_cluster_trans=1.0, max_cluster_rot=np.pi/8):\n \"\"\" Creates a cluster of random objects \"\"\"\n ref_obj = object_generator()\n cluster_objects = []\n for i in range(n_objects):\n r = random_rotation_translation_rotor(maximum_translation=max_cluster_trans, maximum_angle=max_cluster_rot)\n new_obj = apply_rotor(ref_obj, r)\n cluster_objects.append(new_obj)\n return cluster_objects", "We can use this function to create a cluster and then we can visualise this cluster with pyganja.", "from pyganja import *\nclustered_circles = generate_random_object_cluster(10, random_circle)\nsc = GanjaScene()\nfor c in clustered_circles:\n sc.add_object(c, rgb2hex([255,0,0]))\ndraw(sc, scale=0.05)", "This cluster generation function appears in clifford tools by default and it can be imported as follows:", "from clifford.tools.g3c import generate_random_object_cluster", "Now that we can generate individual clusters we would like to generate many:", "def generate_n_clusters( object_generator, n_clusters, n_objects_per_cluster ):\n object_clusters = []\n for i in range(n_clusters):\n cluster_objects = generate_random_object_cluster(n_objects_per_cluster, object_generator,\n max_cluster_trans=0.5, max_cluster_rot=np.pi / 16)\n object_clusters.append(cluster_objects)\n all_objects = [item for sublist in object_clusters for item in sublist]\n return all_objects, object_clusters", "Again this function appears by default in clifford tools and we can easily visualise the result:", "from clifford.tools.g3c import generate_n_clusters\n\nall_objects, object_clusters = generate_n_clusters(random_circle, 2, 5)\nsc = GanjaScene()\nfor c in all_objects:\n sc.add_object(c, rgb2hex([255,0,0]))\ndraw(sc, scale=0.05)", "Given that we can now generate multiple clusters of objects we can test algorithms for segmenting them.\nThe function run_n_clusters below generates a lot of objects distributed into n clusters and then attempts to segment the objects to recover the clusters.", "from clifford.tools.g3c.object_clustering import n_clusters_objects\nimport time\n\ndef run_n_clusters( object_generator, n_clusters, n_objects_per_cluster, n_shotgunning):\n all_objects, object_clusters = generate_n_clusters( object_generator, n_clusters, n_objects_per_cluster ) \n [new_labels, centroids, start_labels, 
start_centroids] = n_clusters_objects(n_clusters, all_objects, \n initial_centroids=None, \n n_shotgunning=n_shotgunning, \n averaging_method='unweighted') \n return all_objects, new_labels, centroids ", "Lets try it!", "def visualise_n_clusters(all_objects, centroids, labels,\n color_1=np.array([255, 0, 0]), \n color_2=np.array([0, 255, 0])):\n \"\"\"\n Utility method for visualising several clusters and their respective centroids\n using pyganja\n \"\"\"\n alpha_list = np.linspace(0, 1, num=len(centroids))\n sc = GanjaScene()\n for ind, this_obj in enumerate(all_objects):\n alpha = alpha_list[labels[ind]]\n cluster_color = (alpha * color_1 + (1 - alpha) * color_2).astype(np.int)\n sc.add_object(this_obj, rgb2hex(cluster_color))\n\n for c in centroids:\n sc.add_object(c, Color.BLACK)\n\n return sc\n\n \n\nobject_generator = random_circle \n\nn_clusters = 3 \nn_objects_per_cluster = 10 \nn_shotgunning = 60 \nall_objects, labels, centroids = run_n_clusters(object_generator, n_clusters, \n n_objects_per_cluster, n_shotgunning)\n \nsc = visualise_n_clusters(all_objects, centroids, labels, \n color_1=np.array([255, 0, 0]), \n color_2=np.array([0, 255, 0])) \ndraw(sc, scale=0.05) " ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
arcyfelix/Courses
18-11-22-Deep-Learning-with-PyTorch/06-Sentiment Prediction with RNNs/Sentiment_analysis_with_RNNs.ipynb
apache-2.0
[ "Sentiment Analysis with an RNN\nIn this notebook, you'll implement a recurrent neural network that performs sentiment analysis. \n\nUsing an RNN rather than a strictly feedforward network is more accurate since we can include information about the sequence of words. \n\nHere we'll use a dataset of movie reviews, accompanied by sentiment labels: positive or negative.\n<img src=\"images/reviews_ex.png\" width=40%>\nNetwork Architecture\nThe architecture for this network is shown below.\n<img src=\"images/network_diagram.png\" width=40%>\n\nFirst, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the Word2Vec lesson. You can actually train an embedding with the Skip-gram Word2Vec model and use those embeddings as input, here. However, it's good enough to just have an embedding layer and let the network learn a different embedding table on its own. In this case, the embedding layer is for dimensionality reduction, rather than for learning semantic representations.\nAfter input words are passed to an embedding layer, the new embeddings will be passed to LSTM cells. The LSTM cells will add recurrent connections to the network and give us the ability to include information about the sequence of words in the movie review data. \nFinally, the LSTM outputs will go to a sigmoid output layer. We're using a sigmoid function because positive and negative = 1 and 0, respectively, and a sigmoid will output predicted, sentiment values between 0-1. \n\nWe don't care about the sigmoid outputs except for the very last one; we can ignore the rest. We'll calculate the loss by comparing the output at the last time step and the training label (pos or neg).\n\nLoad in and visualize the data", "import numpy as np\nfrom tqdm import tqdm_notebook as tqdm\n# read data from text files\nwith open('data/reviews.txt', 'r') as f:\n reviews = f.read()\nwith open('data/labels.txt', 'r') as f:\n labels = f.read()\n\nprint(reviews[:1000])\nprint()\nprint(labels[:20])", "Data pre-processing\nThe first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.\nYou can see an example of the reviews data above. Here are the processing steps, we'll want to take:\n\n\nWe'll want to get rid of periods and extraneous punctuation.\nAlso, you might notice that the reviews are delimited with newline characters \\n. To deal with those, I'm going to split the text into each review using \\n as the delimiter. \nThen I can combined all the reviews back together into one big string.\n\n\nFirst, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.", "from string import punctuation\n\n# get rid of punctuation\nreviews = reviews.lower() # lowercase, standardize\nall_text = ''.join([c for c in reviews if c not in punctuation])\n\n# split by new lines and spaces\nreviews_split = all_text.split('\\n')\nall_text = ' '.join(reviews_split)\n\n# create a list of words\nwords = all_text.split()\n\nwords[:30]", "Encoding the words\nThe embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. 
Then we can convert each of our reviews into integers so they can be passed into the network.\n\nExercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.\nAlso, convert the reviews to integers and store the reviews in a new list called reviews_ints.", "# feel free to use this import \nfrom collections import Counter\n\n## Build a dictionary that maps words to integers\ncounts = Counter(words)\nvocab = sorted(counts, key=counts.get, reverse=True)\nvocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}\n\n## use the dict to tokenize each review in reviews_split\n## store the tokenized reviews in reviews_ints\nreviews_ints = []\nfor review in reviews_split:\n reviews_ints.append([vocab_to_int[word] for word in review.split()])", "Test your code\nAs a test that you've implemented the dictionary correctly, print out the number of unique words in your vocabulary and the contents of the first, tokenized review.", "# Stats about vocabulary\nprint('Unique words: ', len((vocab_to_int))) # should ~ 74000+\nprint()\n\n# Print tokens in first review\nprint('Tokenized review: \\n', reviews_ints[:1])", "Encoding the labels\nOur labels are \"positive\" or \"negative\". To use these labels in our network, we need to convert them to 0 and 1.\n\nExercise: Convert labels from positive and negative to 1 and 0, respectively, and place those in a new list, encoded_labels.", "# 1=positive, 0=negative label conversion\nlabels_split = labels.split('\\n')\nencoded_labels = np.array([1 if label == 'positive' else 0 for label in labels_split])", "Removing Outliers\nAs an additional pre-processing step, we want to make sure that our reviews are in good shape for standard processing. That is, our network will expect a standard input text size, and so, we'll want to shape our reviews into a specific length. We'll approach this task in two main steps:\n\nGetting rid of extremely long or short reviews; the outliers\nPadding/truncating the remaining data so that we have reviews of the same length.\n\nBefore we pad our review text, we should check for reviews of extremely short or long lengths; outliers that may mess with our training.", "# Outlier review stats\nreview_lens = Counter([len(x) for x in reviews_ints])\nprint(\"Zero-length reviews: {}\".format(review_lens[0]))\nprint(\"Maximum review length: {}\".format(max(review_lens)))", "Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. We'll have to remove any super short reviews and truncate super long reviews. 
This removes outliers and should allow our model to train more efficiently.\n\nExercise: First, remove any reviews with zero length from the reviews_ints list and their corresponding label in encoded_labels.", "print('Number of reviews before removing outliers: ', len(reviews_ints))\n\n## Remove any reviews/labels with zero length from the reviews_ints list.\n\n# Get indices of any reviews with length 0\nnon_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]\n\n# Remove 0-length reviews and their labels\nreviews_ints = [reviews_ints[ii] for ii in non_zero_idx]\nencoded_labels = np.array([encoded_labels[ii] for ii in non_zero_idx])\n\nprint('Number of reviews after removing outliers: ', len(reviews_ints))", "Padding sequences\nTo deal with both short and very long reviews, we'll pad or truncate all our reviews to a specific length. For reviews shorter than some seq_length, we'll pad with 0s. For reviews longer than seq_length, we can truncate them to the first seq_length words. A good seq_length, in this case, is 200.\n\nExercise: Define a function that returns an array features that contains the padded data, of a standard size, that we'll pass to the network. \n* The data should come from review_ints, since we want to feed integers to the network. \n* Each row should be seq_length elements long. \n* For reviews shorter than seq_length words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. \n* For reviews longer than seq_length, use only the first seq_length words as the feature vector.\n\nAs a small example, if the seq_length=10 and an input review is: \n[117, 18, 128]\nThe resultant, padded sequence should be: \n[0, 0, 0, 0, 0, 0, 0, 117, 18, 128]\nYour final features array should be a 2D array, with as many rows as there are reviews, and as many columns as the specified seq_length.\nThis isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.", "def pad_features(reviews_ints, seq_length):\n ''' Return features of review_ints, where each review is padded with 0's \n or truncated to the input seq_length.\n '''\n \n # Getting the correct rows x cols shape\n features = np.zeros((len(reviews_ints), seq_length), dtype=int)\n\n # For each review, I grab that review and \n for i, row in enumerate(reviews_ints):\n features[i, -len(row):] = np.array(row)[:seq_length]\n \n return features\n\n# Test your implementation!\n\nseq_length = 200\n\nfeatures = pad_features(reviews_ints, seq_length=seq_length)\nfeatures = features.astype(int)\n\n## Test statements - do not change - ##\nassert len(features) == len(reviews_ints), \"Your features should have as many rows as reviews.\"\nassert len(features[0]) == seq_length, \"Each feature row should contain seq_length values.\"\n\n# Print first 10 values of the first 30 batches \nprint(features[:30,:10])", "Training, Validation, Test\nWith our data in nice shape, we'll split it into training, validation, and test sets.\n\nExercise: Create the training, validation, and test sets. \n* You'll need to create sets for the features and the labels, train_x and train_y, for example. \n* Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. 
\n* Whatever data is left will be split in half to create the validation and testing data.", "split_frac = 0.8\n\n## Split data into training, validation, and test data (features and labels, x and y)\n\nsplit_idx = int(len(features)*0.8)\ntrain_x, remaining_x = features[:split_idx], features[split_idx:]\ntrain_y, remaining_y = encoded_labels[:split_idx], encoded_labels[split_idx:]\n\ntest_idx = int(len(remaining_x)*0.5)\nval_x, test_x = remaining_x[:test_idx], remaining_x[test_idx:]\nval_y, test_y = remaining_y[:test_idx], remaining_y[test_idx:]\n\n## Print out the shapes of your resultant feature data\nprint(\"\\t\\t\\tFeature Shapes:\")\nprint(\"Train set: \\t\\t{}\".format(train_x.shape), \n \"\\nValidation set: \\t{}\".format(val_x.shape),\n \"\\nTest set: \\t\\t{}\".format(test_x.shape))", "Check your work\nWith train, validation, and test fractions equal to 0.8, 0.1, 0.1, respectively, the final, feature data shapes should look like:\nFeature Shapes:\nTrain set: (20000, 200) \nValidation set: (2500, 200) \nTest set: (2500, 200)\n\nDataLoaders and Batching\nAfter creating training, test, and validation data, we can create DataLoaders for this data by following two steps:\n1. Create a known format for accessing our data, using TensorDataset which takes in an input set of data and a target set of data with the same first dimension, and creates a dataset.\n2. Create DataLoaders and batch our training, validation, and test Tensor datasets.\ntrain_data = TensorDataset(torch.from_numpy(train_x), torch.from_numpy(train_y))\ntrain_loader = DataLoader(train_data, batch_size=batch_size)\nThis is an alternative to creating a generator function for batching our data into full batches.", "import torch\nfrom torch.utils.data import TensorDataset, DataLoader\n\n# Create Tensor datasets\ntrain_data = TensorDataset(torch.from_numpy(train_x),\n torch.from_numpy(train_y))\nvalid_data = TensorDataset(torch.from_numpy(val_x),\n torch.from_numpy(val_y))\ntest_data = TensorDataset(torch.from_numpy(test_x),\n torch.from_numpy(test_y))\n\n# Dataloaders\nbatch_size = 50\n\n# Make sure the SHUFFLE your training data\ntrain_loader = DataLoader(dataset=train_data,\n shuffle=True,\n batch_size=batch_size)\nvalid_loader = DataLoader(dataset=valid_data,\n shuffle=True,\n batch_size=batch_size)\ntest_loader = DataLoader(dataset=test_data,\n shuffle=True,\n batch_size=batch_size)\n\n# Obtain one batch of training data\ndataiter = iter(train_loader)\nsample_x, sample_y = dataiter.next()\n\n# batch_size, seq_length\nprint('Sample input size: ', sample_x.size()) \nprint('Sample input: \\n', sample_x)\nprint()\n# batch_size\nprint('Sample label size: ', sample_y.size()) \nprint('Sample label: \\n', sample_y)", "Sentiment Network with PyTorch\nBelow is where you'll define the network.\n<img src=\"assets/network_diagram.png\" width=40%>\nThe layers are as follows:\n1. An embedding layer that converts our word tokens (integers) into embeddings of a specific size.\n2. An LSTM layer defined by a hidden_state size and number of layers\n3. A fully-connected output layer that maps the LSTM layer outputs to a desired output_size\n4. A sigmoid activation layer which turns all outputs into a value 0-1; return only the last sigmoid output as the output of this network.\nThe Embedding Layer\nWe need to add an embedding layer because there are 74000+ words in our vocabulary. It is massively inefficient to one-hot encode that many classes. 
So, instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using Word2Vec, then load it here. But, it's fine to just make a new layer, using it for only dimensionality reduction, and let the network learn the weights.\nThe LSTM Layer(s)\nWe'll create an LSTM to use in our recurrent network, which takes in an input_size, a hidden_dim, a number of layers, a dropout probability (for dropout between multiple layers), and a batch_first parameter.\nMost of the time, you're network will have better performance with more layers; between 2-3. Adding more layers allows the network to learn really complex relationships. \n\nExercise: Complete the __init__, forward, and init_hidden functions for the SentimentRNN model class.\n\nNote: init_hidden should initialize the hidden and cell state of an lstm layer to all zeros, and move those state to GPU, if available.", "# First checking if GPU is available\ntrain_on_gpu=torch.cuda.is_available()\n\nif(train_on_gpu):\n print('Training on GPU.')\nelse:\n print('No GPU available, training on CPU.')\n\ntrain_on_gpu = False\n\nimport torch.nn as nn\n\nclass SentimentRNN(nn.Module):\n \"\"\"\n The RNN model that will be used to perform Sentiment analysis.\n \"\"\"\n\n def __init__(self, vocab_size, output_size, embedding_dim, hidden_dim, n_layers, drop_prob=0.5):\n \"\"\"\n Initialize the model by setting up the layers.\n \"\"\"\n super(SentimentRNN, self).__init__()\n\n self.output_size = output_size\n self.n_layers = n_layers\n self.hidden_dim = hidden_dim\n \n # Embedding and LSTM layers\n self.embedding = nn.Embedding(num_embeddings=vocab_size,\n embedding_dim=embedding_dim)\n self.lstm = nn.LSTM(input_size=embedding_dim,\n hidden_size=hidden_dim,\n num_layers=n_layers, \n dropout=drop_prob,\n batch_first=True)\n \n # Dropout layer\n self.dropout = nn.Dropout(p=0.3)\n \n # Linear and sigmoid layers\n self.fc = nn.Linear(in_features=hidden_dim,\n out_features=output_size)\n self.sig = nn.Sigmoid()\n \n\n def forward(self, x, hidden):\n \"\"\"\n Perform a forward pass of our model on some input and hidden state.\n \"\"\"\n batch_size = x.size(0)\n\n # Embeddings and lstm_out\n embeds = self.embedding(x)\n lstm_out, hidden = self.lstm(embeds, hidden)\n \n # Stack up lstm outputs\n lstm_out = lstm_out.contiguous().view(-1, self.hidden_dim)\n \n # Dropout and fully-connected layer\n out = self.dropout(lstm_out)\n out = self.fc(out)\n # Sigmoid function\n sig_out = self.sig(out)\n \n # Reshape to be batch_size first\n sig_out = sig_out.view(batch_size, -1)\n sig_out = sig_out[:, -1] # get last batch of labels\n \n # Return last sigmoid output and hidden state\n return sig_out, hidden\n \n \n def init_hidden(self, batch_size):\n ''' Initializes hidden state '''\n # Create two new tensors with sizes n_layers x batch_size x hidden_dim,\n # initialized to zero, for hidden state and cell state of LSTM\n weight = next(self.parameters()).data\n \n if (train_on_gpu):\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_().cuda())\n else:\n hidden = (weight.new(self.n_layers, batch_size, self.hidden_dim).zero_(),\n weight.new(self.n_layers, batch_size, self.hidden_dim).zero_())\n \n return hidden\n ", "Instantiate the network\nHere, we'll instantiate the network. 
First up, defining the hyperparameters.\n\nvocab_size: Size of our vocabulary or the range of values for our input, word tokens.\noutput_size: Size of our desired output; the number of class scores we want to output (pos/neg).\nembedding_dim: Number of columns in the embedding lookup table; size of our embeddings.\nhidden_dim: Number of units in the hidden layers of our LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.\nn_layers: Number of LSTM layers in the network. Typically between 1-3\n\n\nExercise: Define the model hyperparameters.", "# Instantiate the model w/ hyperparams\nvocab_size = len(vocab_to_int) + 1 # +1 for the 0 padding + our word tokens\noutput_size = 1\nembedding_dim = 200\nhidden_dim = 32\nn_layers = 2\n\nnet = SentimentRNN(vocab_size, output_size, embedding_dim, hidden_dim, n_layers)\n\nprint(net)", "Training\nBelow is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. You can also add code to save a model by name.\n\nWe'll also be using a new kind of cross entropy loss, which is designed to work with a single Sigmoid output. BCELoss, or Binary Cross Entropy Loss, applies cross entropy loss to a single value between 0 and 1.\n\nWe also have some data and training hyparameters:\n\nlr: Learning rate for our optimizer.\nepochs: Number of times to iterate through the training dataset.\nclip: The maximum gradient value to clip at (to prevent exploding gradients).", "# Loss and optimization functions\nlr = 0.001\n\n# Binary Cross Entropy Loss\ncriterion = nn.BCELoss()\noptimizer = torch.optim.Adam(params=net.parameters(),\n lr=lr)\n\n# Training params\nepochs = 3\n\ncounter = 0\nprint_every = 10\n# Gradient clipping\nclip = 5 \n\n# Move model to GPU, if available\nif(train_on_gpu):\n net.cuda()\n \n\nnet.train()\n# Train for some number of epochs\nfor e in range(epochs):\n # Initialize hidden state\n h = net.init_hidden(batch_size)\n\n # Batch loop\n for inputs, labels in tqdm(train_loader):\n counter += 1\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n # Zero accumulated gradients\n net.zero_grad()\n\n # Get the output from the model\n output, h = net(inputs, h)\n\n # Calculate the loss and perform backprop\n loss = criterion(output.squeeze(), labels.float())\n loss.backward()\n # `clip_grad_norm` helps prevent the exploding gradient problem in RNNs / LSTMs.\n nn.utils.clip_grad_norm_(parameters=net.parameters(),\n max_norm=clip)\n optimizer.step()\n\n # Loss stats\n if counter % print_every == 0:\n # Get validation loss\n val_h = net.init_hidden(batch_size)\n val_losses = []\n net.eval()\n for inputs, labels in valid_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n val_h = tuple([each.data for each in val_h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n\n output, val_h = net(inputs, val_h)\n val_loss = criterion(output.squeeze(), labels.float())\n\n val_losses.append(val_loss.item())\n\n net.train()\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Step: {}...\".format(counter),\n \"Loss: {:.6f}...\".format(loss.item()),\n \"Val Loss: {:.6f}\".format(np.mean(val_losses)))", "Testing\nThere are a few ways to test your network.\n\n\nTest data performance: First, we'll see how our 
trained model performs on all of our defined test_data, above. We'll calculate the average loss and accuracy over the test data.\n\n\nInference on user-generated data: Second, we'll see if we can input just one example review at a time (without a label), and see what the trained model predicts. Looking at new, user input data like this, and predicting an output label, is called inference.", "# Get test data loss and accuracy\ntest_losses = []\nnum_correct = 0\n\n# Initialize hidden state\nh = net.init_hidden(batch_size)\n\nnet.eval()\n# Iterate over test data\nfor inputs, labels in test_loader:\n\n # Creating new variables for the hidden state, otherwise\n # we'd backprop through the entire training history\n h = tuple([each.data for each in h])\n\n if(train_on_gpu):\n inputs, labels = inputs.cuda(), labels.cuda()\n \n # Get predicted outputs\n output, h = net(inputs, h)\n \n # Calculate loss\n test_loss = criterion(output.squeeze(), labels.float())\n test_losses.append(test_loss.item())\n \n # Convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) # rounds to the nearest integer\n \n # Compare predictions to true label\n correct_tensor = pred.eq(labels.float().view_as(pred))\n correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())\n num_correct += np.sum(correct)\n\n\n# -- stats! -- ##\n# avg test loss\nprint(\"Test loss: {:.3f}\".format(np.mean(test_losses)))\n\n# accuracy over all test data\ntest_acc = num_correct/len(test_loader.dataset)\nprint(\"Test accuracy: {:.3f}\".format(test_acc))", "Inference on a test review\nYou can change this test_review to any text that you want. Read it and think: is it pos or neg? Then see if your model predicts correctly!\n\nExercise: Write a predict function that takes in a trained net, a plain text_review, and a sequence length, and prints out a custom statement for a positive or negative review!\n* You can use any functions that you've already defined or define any helper functions you want to complete predict, but it should just take in a trained net, a text review, and a sequence length.", "# Negative test review\ntest_review_neg = 'The worst movie I have seen; acting was terrible and I want my money back. 
This movie had bad acting and the dialogue was slow.'\n\n\nfrom string import punctuation\n\ndef tokenize_review(test_review):\n test_review = test_review.lower() # lowercase\n # get rid of punctuation\n test_text = ''.join([c for c in test_review if c not in punctuation])\n\n # splitting by spaces\n test_words = test_text.split()\n\n # tokens\n test_ints = []\n test_ints.append([vocab_to_int[word] for word in test_words])\n\n return test_ints\n\n# Test code and generate tokenized review\ntest_ints = tokenize_review(test_review_neg)\nprint(test_ints)\n\n# Test sequence padding\nseq_length=200\nfeatures = pad_features(test_ints, seq_length)\n\nprint(features)\n\n# Test conversion to tensor and pass into your model\nfeature_tensor = torch.from_numpy(features)\nprint(feature_tensor.size())\n\ndef predict(net,\n test_review,\n sequence_length=200):\n # Setting the evaluation mode\n net.eval()\n \n # Tokenize review\n test_ints = tokenize_review(test_review)\n \n # Pad tokenized sequence\n seq_length=sequence_length\n features = pad_features(test_ints, seq_length)\n \n # Convert to tensor to pass into your model\n feature_tensor = torch.from_numpy(features)\n \n batch_size = feature_tensor.size(0)\n \n # Initialize hidden state\n h = net.init_hidden(batch_size)\n \n if(train_on_gpu):\n feature_tensor = feature_tensor.cuda()\n \n # Get the output from the model\n output, h = net(feature_tensor, h)\n \n # Convert output probabilities to predicted class (0 or 1)\n pred = torch.round(output.squeeze()) \n # Printing output value, before rounding\n print('Prediction value, pre-rounding: {:.6f}'.format(output.item()))\n \n # Print custom response\n if(pred.item()==1):\n print(\"Positive review detected!\")\n else:\n print(\"Negative review detected.\") \n\n# Positive test review\ntest_review_pos = 'This movie had the best acting and the dialogue was so good. I loved it.'\n\n# Call function\nseq_length=200\npredict(net, test_review_neg, seq_length)", "Try out test_reviews of your own!\nNow that you have a trained model and a predict function, you can pass in any kind of text and this model will predict whether the text has a positive or negative sentiment. Push this model to its limits and try to find what words it associates with positive or negative.\nLater, you'll learn how to deploy a model like this to a production environment so that it can respond to any kind of user data put into a web app!" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
setiQuest/ML4SETI
tutorials/General_move_data_to_from_Object_Storage.ipynb
apache-2.0
[ "How to move data to/from Object Storage.\nThis tutorial shows you how to use the python-swiftclient to move data to/from your Object Storge account. \nThis will be useful in a variety of ways. \nIf you're at the hackathon and running on a PowerAI system, using the swift client will be the best way to move data from your Nimbix cloud machine to an IBM Object Storage account. \nImportant for hackathon participants using the PowerAI systems. When those machines are shut down, all data in your local user space will be lost. So, be sure to save your work somewhere!", "#!pip install --user --upgrade python-keystoneclient\n#!pip install --user --upgrade python-swiftclient", "Find Your Object Storage Credentials\nIf you have an IBM Object Storage account, then you probably have either signed up with IBM Bluemix or IBM Data Science Experience (DSX). If you signed up with DSX, then a Bluemix account was created for you automatically. \nInstructions for finding your Object Storage credentials are found here and also on the DSX Docs page.\nNB: If you follow the instructions that take you through your IBM Bluemix account, when you land on the Object Storage dashboard, you'll be able to create new containers from that interface. You might want to create different containers to hold different kinds of data. IBM Object Storage comes with 5 GB of free storage. \nNB 2: From the Object Storage Dashboard in both DSX and Bluemix, you can download any file there through your web browser. Thus, if you move a .csv file to your Object Storage from Spark, you can download that .csv file to your local machine through the dashboard.", "credentials = {\n 'auth_uri':'',\n 'global_account_auth_uri':'',\n 'username':'xx',\n 'password':\"xx\",\n 'auth_url':'https://identity.open.softlayer.com',\n 'project':'xx',\n 'project_id':'xx',\n 'region':'dallas',\n 'user_id':'xx',\n 'domain_id':'xx',\n 'domain_name':'xx',\n 'tenantId':'xx'\n}\n\nimport swiftclient.client as swiftclient\n\nconn = swiftclient.Connection(\n key=credentials['password'],\n authurl=credentials['auth_url']+\"/v3\",\n auth_version='3',\n os_options={\n \"project_id\": credentials['project_id'],\n \"user_id\": credentials['user_id'],\n \"region_name\": credentials['region']})", "Now use the SwiftClient connection to programmatically\n\nput_object(container, objectname, data)\nget_object(container, objectname)", "examplefile = 'my_team_name_data_folder/zipfiles/classification_1_narrowband.zip'\netag = conn.put_object('some_container', 'classification_1_narrowband.zip', open(examplefile).read())\n\nclassification_results_file = 'my_team_name_data_folder/results/my_final_testset_classes.csv'\netag = conn.put_object('some_container', 'my_final_testset_classes.csv', open(examplefile).read())", "Using SwiftClient from the Command Line.\nWhen you install python-swiftclient on your local machine, this also installs the CLI tool, swift. \nYou can use this to create new containers on your Object Storage account, upload and download data files.\nThe easiest way to use the swift CLI tool is to set the following environment variabls in your bash shell\n```\n\npip install python-keystoneclient\npip install python-swiftclient\n```\n\nCopy and paste this into your bash shell. 
Get your credentials from above.\nexport OS_PROJECT_ID=xx\nexport OS_PASSWORD=xx\nexport OS_USER_ID=xx\nexport OS_AUTH_URL=https://identity.open.softlayer.com/v3\nexport OS_REGION_NAME=dallas\nexport OS_IDENTITY_API_VERSION=3\nexport OS_AUTH_VERSION=3\nThen from the command line, you can\n```\n\nswift list\n```\n\nCreate a new container\n```\n\nswift post new_container_name\n```\n\nUpload a file \n```\n\nswift upload some_container_name some_local_file\n```\n\nDownload a file\n```\n\nswift download some_container_name some_file_in_container\n```\n\nThis tool can be used from the shell prompt of the PowerAI system as well to move data from those instances to and from your Object Storage account" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
jphall663/GWU_data_mining
10_model_interpretability/src/loco.ipynb
apache-2.0
[ "License\n\nCopyright 2017 J. Patrick Hall, jphall@gwu.edu\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\nLocal Feature Importance and Reason Codes using LOCO\n\nBased on: Lei, Jing, G’Sell, Max, Rinaldo, Alessandro, Tibshirani, Ryan J., and Wasserman, Larry. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 2017.\nhttp://www.stat.cmu.edu/~ryantibs/papers/conformal.pdf\n Instead of dropping one variable and retraining a model to understand the importance of that variable in a model, these examples set a variable to missing and rescore this new, corrupted sample with the original model. This is approach may be more appropriate for nonlineaer models in which nonlinear dependencies can allow variables to nearly completely replace one another when a model is retrained. \nPreliminaries: imports, start h2o, load and clean data", "# imports\nimport h2o \nimport numpy as np\nimport pandas as pd\nfrom h2o.estimators.gbm import H2OGradientBoostingEstimator\n\n# start h2o\nh2o.init()\nh2o.remove_all()\nh2o.show_progress()", "Load and prepare data for modeling", "# load clean data\npath = '../../03_regression/data/train.csv'\nframe = h2o.import_file(path=path)\n\n# assign target and inputs\ny = 'SalePrice'\nX = [name for name in frame.columns if name not in [y, 'Id']]", "LOCO is simpler to use with data containing no missing values", "# determine column types\n# impute\nreals, enums = [], []\nfor key, val in frame.types.items():\n if key in X:\n if val == 'enum':\n enums.append(key)\n else: \n reals.append(key)\n \n_ = frame[reals].impute(method='median')\n_ = frame[enums].impute(method='mode')\n\n# split into training and validation\ntrain, valid = frame.split_frame([0.7])", "Understanding linear correlation and nonlinear dependencies are important for LOCO.\n\nIf strong relationships are present, retraining the model after removing an input will simply allow the linearly correlated or nonlinearly dependent variables to make up for the impact of the removed input. This why we will set to missing here, and not drop and retrain.\nIf such relationships are present, models must be regularized to prevent correlation or other dependencies from creating instability in model parameters or rules. (H2O GBM is regularized by column and row sampling.)\nFor H2O GBM, setting a variable to missing causes it to follow the majority path in each decision tree. 
The interpretation of LOCO becomes the numeric difference between the local behavior of the variable and the most common local behavior.\nBecause of linear correlation and nonlinear dependence, LOCO values are valid only for a given data and feature set.", "# print out linearly correlated pairs\ncorr = train[reals].cor().as_data_frame()\nfor i in range(0, corr.shape[0]):\n for j in range(0, corr.shape[1]):\n if i != j:\n if np.abs(corr.iat[i, j]) > 0.7:\n print(corr.columns[i], corr.columns[j])", "It's likely that even more nonlinearly dependent relationships exist between inputs. Nonlinearly relationships can also behave differently at global and local scales.\nRemoving one var from each correlated pair may increase stability in the model and its explanations", "X_reals_decorr = [i for i in reals if i not in ['GarageYrBlt', 'TotRmsAbvGrd', 'TotalBsmtSF', 'GarageCars']]", "Train a predictive model", "# train GBM model\nmodel = H2OGradientBoostingEstimator(ntrees=100,\n max_depth=10,\n distribution='huber',\n learn_rate=0.1,\n stopping_rounds=5,\n seed=12345)\n\nmodel.train(y=y, x=X_reals_decorr, training_frame=train, validation_frame=valid)\n\npreds = valid['Id'].cbind(model.predict(valid))", "Rescore predictive model\n\nEach time leaving one input (covariate) out by setting it to missing\nTo generate local feature importance values for each decision", "h2o.no_progress()\n\nfor k, i in enumerate(X_reals_decorr):\n\n # train and predict with Xi set to missing\n valid_loco = h2o.deep_copy(valid, 'valid_loco')\n valid_loco[i] = np.nan\n preds_loco = model.predict(valid_loco)\n \n # create a new, named column for the LOCO prediction\n preds_loco.columns = [i]\n preds = preds.cbind(preds_loco)\n \n # subtract the LOCO prediction from \n preds[i] = preds[i] - preds['predict']\n \n print('LOCO Progress: ' + i + ' (' + str(k+1) + '/' + str(len(X_reals_decorr)) + ') ...')\n \nprint('Done.') \n\npreds.head()", "The numeric values in each column are an estimate of how much each variable contributed to each decision. These values can tell you how a variable and it's values were weighted in any given decision by the model. These values are crucially important for machine learning interpretability and are often to referred to \"local feature importance\", \"reason codes\", or \"turn-down codes.\" The latter phrases are borrowed from credit scoring. Credit lenders must provide reasons for turning down a credit application, even for automated decisions. Reason codes can be easily extracted from LOCO local feature importance values, by simply ranking the variables that played the largest role in any given decision.\nHelper function for finding quantile indices", "def get_quantile_dict(y, id_, frame):\n\n \"\"\" Returns the percentiles of a column y as the indices for another column id_.\n \n Args:\n y: Column in which to find percentiles.\n id_: Id column that stores indices for percentiles of y.\n frame: H2OFrame containing y and id_. 
\n \n Returns:\n Dictionary of percentile values and index column values.\n \n \"\"\"\n \n quantiles_df = frame.as_data_frame()\n quantiles_df.sort_values(y, inplace=True)\n quantiles_df.reset_index(inplace=True)\n \n percentiles_dict = {}\n percentiles_dict[0] = quantiles_df.loc[0, id_]\n percentiles_dict[99] = quantiles_df.loc[quantiles_df.shape[0]-1, id_]\n inc = quantiles_df.shape[0]//10\n \n for i in range(1, 10):\n percentiles_dict[i * 10] = quantiles_df.loc[i * inc, id_]\n\n return percentiles_dict\n\nquantile_dict = get_quantile_dict('predict', 'Id', preds)\nprint(quantile_dict)", "Plot some reason codes for a representative row", "%matplotlib inline\n\nmedian_loco = preds[preds['Id'] == int(quantile_dict[50]), :].as_data_frame().drop(['Id', 'predict'], axis=1)\nmedian_loco = median_loco.T.sort_values(by=0)[:5]\n_ = median_loco.plot(kind='bar', \n title='Negative Reason Codes for the Median of Predicted Sale Price\\n', \n legend=False)\n\nmedian_loco = preds[preds['Id'] == int(quantile_dict[50]), :].as_data_frame().drop(['Id', 'predict'], axis=1)\nmedian_loco = median_loco.T.sort_values(by=0, ascending=False)[:5]\n_ = median_loco.plot(kind='bar', \n title='Positive Reason Codes for the Median of Predicted Sale Price\\n', \n color='r',\n legend=False)", "Ensembling explantions to reduce local variance\nExplanations derived from high variance machine learning models can be unstable. One general way to decrease variance is to ensemble the results of many models.\nTrain multiple models", "n_models = 10 # select number of models\n\nmodels = []\npred_frames = []\n\nfor i in range(0, n_models):\n\n # store models\n models.append(H2OGradientBoostingEstimator(ntrees=500,\n max_depth=2 * (i + 1),\n distribution='huber',\n learn_rate=0.01 * (i + 1),\n stopping_rounds=5,\n seed=i + 1))\n \n # train models\n models[i].train(y=y, x=X_reals_decorr, training_frame=train, validation_frame=valid)\n \n # store predictions\n pred_frames.append(valid['Id'].cbind(models[i].predict(valid)))\n\n print('Training Progress: model %d/%d ...' % (i + 1, n_models))\n\nprint('Done.')", "Calculate LOCO for each model", "for k, model in enumerate(models):\n\n for i in X_reals_decorr:\n\n # train and predict with Xi set to missing\n valid_loco = h2o.deep_copy(valid, 'valid_loco')\n valid_loco[i] = np.nan\n preds_loco = model.predict(valid_loco)\n\n # create a new, named column for the LOCO prediction\n preds_loco.columns = [i]\n pred_frames[k] = pred_frames[k].cbind(preds_loco)\n\n # subtract the LOCO prediction from \n pred_frames[k][i] = pred_frames[k][i] - pred_frames[k]['predict']\n \n print('LOCO Progress: model %d/%d ...' % (k + 1, n_models))\n\nprint('Done.')", "Collect LOCO values for each model for the median home", "median_loco_frames = []\ncol_names = ['Loco ' + str(i) for i in range(1, n_models + 1)]\n\nfor i in range(0, n_models):\n \n # collect LOCO as a column vector in a Pandas df\n preds = pred_frames[i]\n median_loco_frames.append(preds[preds['Id'] == int(quantile_dict[50]), :]\\\n .as_data_frame()\\\n .drop(['Id', 'predict'], axis=1)\n .T)\n \nloco_ensemble = pd.concat(median_loco_frames, axis=1) \nloco_ensemble.columns = col_names\nloco_ensemble['Mean Local Importance'] = loco_ensemble.mean(axis=1)\nloco_ensemble['Std. Dev. 
Local Importance'] = loco_ensemble.std(axis=1)\nloco_ensemble", "Negative mean reason codes", "median_mean_loco = loco_ensemble['Mean Local Importance'].sort_values()[:5]\n_ = median_mean_loco.plot(kind='bar', \n title='Negative Mean Reason Codes for the Median of Predicted Sale Price\\n', \n legend=False)", "Positive mean reason codes", "median_mean_loco = loco_ensemble['Mean Local Importance'].sort_values(ascending=False)[:5]\n_ = median_mean_loco.plot(kind='bar', \n title='Positive Mean Reason Codes for the Median of Predicted Sale Price\\n', \n color='r',\n legend=False)", "Shutdown H2O", "h2o.cluster().shutdown(prompt=True)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
tmolteno/TART
doc/calibration/phase/Far_Field.ipynb
lgpl-3.0
[ "Far field calculations for phase calibration\nFor a source at distance $r$ from the phase-center of the telescope, and for an antenna\nat a distance $b$ from the phase center, what is the maximum phase error that results if we\ndon't take the finite distance into consideration.\nA bit of geometry. The", "import sympy as sp\nsp.init_printing(use_latex=\"mathjax\") \n\nr = sp.Symbol('r', real=True, positive=True)\nb = sp.Symbol('b', real=True, positive=True)\n\ndistance_error = sp.simplify(r*(1 - sp.cos(sp.asin(b/r))))\n\ndistance_error", "Simulation\nPlot the resulting distance error (measured in degrees) as a function of source radius for a telescope with a maximum radius of 2 meters.", "import matplotlib.pyplot as plt\nimport numpy as np\n\nbaseline = 2.0\nwavelen = 0.19\nr = np.linspace(15,500,200)\ndistance_error = r - np.sqrt(-baseline**2 + r**2)\nphase_error = (distance_error / wavelen)*2*np.pi\nplt.semilogy(r,np.degrees(phase_error))\nplt.grid(True)\nplt.xlabel('Distance to source (m)')\nplt.ylabel('Phase Error (degrees)')\nplt.title('Plane-wave Error vs source distance (b = 4m)')\nplt.show()", "The conclusion is that for all calibration sources, which are likely to be quite close (within 100m of the telescope), we need to take the source distance into consideration to keep the phase error below ten degrees. Even after 200m, the phase error is still too large.\nCorrection per antenna\nTake far-field into account by modifying the el, az of the source for each antenna.", "el_0 = sp.Symbol('theta_0', real=True, positive=True)\naz_0 = sp.Symbol('phi_0', real=True, positive=True)\nr_0 = sp.Symbol('r_0', real=True, positive=True)\n\nx0 = r_0*sp.sin(az_0) * sp.cos(el_0)\ny0 = r_0*sp.cos(az_0) * sp.cos(el_0)\nz0 = r_0*sp.sin(el_0)\n", "The antenna has coordianates $x_a, y_a$.", "x_a = sp.Symbol('x_a', real=True)\ny_a = sp.Symbol('y_a', real=True)\nz_a = sp.Symbol('z_a', real=True)", "The source has position $(x,y,z)$ in the antenna frame of reference.", "x = x0 - x_a\ny = y0 - y_a\nz = z0 - z_a", "The new antenna-relative elevation and azimuth are,", "r = sp.simplify(sp.sqrt(x**2 + y**2 + z**2))\n\nr\n\nprint(sp.python(r))\n\ndr = sp.simplify(r - r_0)\ndr\n\n#r = sp.Symbol('r', real=True, positive=True)\nel = sp.simplify(sp.asin(z/r))\naz = sp.simplify(sp.acos(y / (r*sp.cos(el))))\n\nel\n\naz\n\nprint(sp.python(el))\n\nprint(sp.python(az))" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
gfabieno/SeisCL
Tutorial/unused/Model_Building.ipynb
gpl-3.0
[ "Models generation\nIn this notebook, we will explore various ways of creating a model for our simulation.\nThree-layer model\nFirst, let's start by creating a single layer model with the following characteristics :\n\\begin{array}{|c|c|}\n\\hline V_p \\hspace{2mm} (m / s) & Depth \\hspace{2mm} (m) \\\\hline\n \\hspace{2cm} 3500 \\hspace{2cm} & \\hspace{2cm} 120 \\hspace{2cm} \\\\hline\n 2000 & 200 \\\\hline\n 2500 & 280 \\\\hline\n\\end{array}", "import numpy as np\nimport matplotlib.pyplot as plt\nfrom SeisCL import SeisCL", "Again, we initially have to define basics simulation constants to create the model. Here we're going to use $2 \\, m$ spatial spacing with $300$ grid points deep and $500$ points wide which will give us a $600 \\times 1000 m $ domain.", "seis3Layers = SeisCL()\n\nseis3Layers.csts['ND'] = 2\nseis3Layers.csts['N'] = np.array([300, 500])\nseis3Layers.csts['dt'] = dt = 0.25e-03\nseis3Layers.csts['dh'] = dh = 2\nseis3Layers.csts['NT'] = NT = 1500\nseis3Layers.csts['FDORDER'] = 4\n\nseis3Layers.csts['seisout'] = 1", "We then fill the domain with a source and receivers near the surface.", "seis3Layers.fill_src_rec_reg()", "Now let's create the model over which the simulation will take place. This consists of a matrix $\\small \\left[ N_z \\times N_x \\right]$ where each points got assigned a velocity.\nThe formulation $\\small \\left[N_z, N_x, N_y\\right]$ instead of $\\small \\left[N_x, N_y, N_z\\right]$ in the definition of the domain is extremely useful during the construction of the models since this one allows to create a matrix which corresponds exactly to the desired model, without performing any transpose operation.", "Nz = seis3Layers.csts['N'][0]\nNx = seis3Layers.csts['N'][1]\n\nvp = [3500, 2000, 2500]\nvs = 2000\nrho = 2000\ntaup = 0\ntaus = 0\n\nEp_vp = [Nz//5, Nz//3]\n\nvp_1 = np.zeros((Ep_vp[0], Nx)) + vp[0]\nvp_2 = np.zeros((Ep_vp[1], Nx)) + vp[1]\nvp_3 = np.zeros((Nz-np.sum(Ep_vp), Nx)) + vp[2]\n\nvp_all = np.vstack((vp_1, vp_2, vp_3))\n\nvs_all = np.zeros(seis3Layers.csts['N']) + vs\nrho_all = np.zeros(seis3Layers.csts['N']) + rho\ntaup_all = np.zeros(seis3Layers.csts['N']) + taup\ntaus_all = np.zeros(seis3Layers.csts['N']) + taus\n\n\nModel3Layers = {\"vp\": vp_all, \"rho\": rho_all, \"vs\": vs_all, \"taup\": taup_all, \"taus\": taus_all}", "Before running our simulation, we want to visualize the domain to ensure that the model is correct.\nWe can create a figure using matplotlib, and pass the figure axis to the DrawDomain2D function.", "_, ax = plt.subplots(1, 1, figsize = (12,6))\nseis3Layers.DrawDomain2D(vp_all, ax = ax)", "Finally, we can launch the simulation and visualize the result with matplotlib.", "gsid = seis3Layers.src_pos_all[3]\nseis3Layers.set_forward(gsid, Model3Layers, withgrad=False) #TODO changer dépendance set_foward à jobids\nseis3Layers.execute()\ndata3Layers = seis3Layers.read_data()\n\nclip = 0.1\nextent = [min(seis3Layers.rec_pos_all[0]), max(seis3Layers.rec_pos_all[0]), (data3Layers.shape[0]-1)*dt, 0]\nvmax = np.max(data3Layers) * clip\nvmin = -vmax\n_, ax3 = plt.subplots(1, 1, figsize=[6, 6])\nax3.imshow(data3Layers, aspect='auto', vmax=vmax, vmin=vmin, extent = extent,\n interpolation='bilinear', cmap=plt.get_cmap('Greys'))\nax3.set_title(\"Solution of 3 layers model \\n\", fontsize=16, fontweight='bold')\nax3.set_xlabel(\"Receiver position (m)\")\nax3.set_ylabel(\"time (s)\")\nplt.show()", "Marmousi model", "import os\nfrom urllib.request import urlretrieve\nimport numpy as np\n\nurl = 
\"http://sw3d.cz/software/marmousi/little.bin/velocity.h@\"\nif not os.path.isfile(\"velocity.h@\"):\n urlretrieve(url, filename=\"velocity.h@\")\nvel = np.fromfile(\"velocity.h@\", dtype=np.float32)\nvp = np.transpose(np.reshape(np.array(vel), [2301, 751]))", "For inversion, we often want a coaser grid. We must also pad the model for the absorbing boundary and create the vs and rho paramters.", "vp = vp[::4, ::4]\nvp = np.pad(vp, ((seis.nab, seis.nab), (seis.nab, seis.nab)), mode=\"edge\")\nrho = vp * 0 + 2000\nvs = vp * 0\n\nmodel = {'vp':vp, 'vs':vs, 'rho':rho}\n\nseis = SeisCL()\nseis.N = np.array([vp.shape[0], vp.shape[1]])\nseis.dt = dt = 6 * dh / (7 * np.sqrt(2) * np.max(vp)) * 0.85\nseis.NT = int(2 / seisMarmousi.csts['dt'])\nseis.dh = 16\n\nseisMarmousi.fill_src_rec_reg()\n\n_, ax = plt.subplots(1, 1, figsize = (18, 6))\nseisMarmousi.DrawDomain2D(vp, ax = ax)\n\ngsid = seis.src_pos_all[3]\nseisMarmousi.set_forward(gsid, modelMarmousi, withgrad=False)\nseisMarmousi.execute()\ndataMarmousi = seisMarmousi.read_data()\n\nclip = 0.1\nextent = [min(seisMarmousi.rec_pos_all[0]), max(seisMarmousi.rec_pos_all[0]), (dataMarmousi.shape[0]-1)*dt, 0]\nvmax = np.max(dataMarmousi) * clip\nvmin = -vmax\n_, axR = plt.subplots(1, 1, figsize=[6, 6])\naxR.imshow(dataMarmousi, aspect='auto', vmax=vmax, vmin=vmin, extent = extent,\n interpolation='bilinear', cmap=plt.get_cmap('Greys'))\naxR.set_title(\"Marmousi model solution \\n\", fontsize=16, fontweight='bold')\naxR.set_xlabel(\"Receiver position (m)\")\naxR.set_ylabel(\"time (s)\")\nplt.show()" ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
opengeostat/pygslib
pygslib/Ipython_templates/probplt_html.ipynb
mit
[ "PyGSLIB\nPPplot", "#general imports\nimport pygslib", "Getting the data ready for work\nIf the data is in GSLIB format you can use the function pygslib.gslib.read_gslib_file(filename) to import the data into a Pandas DataFrame.", "#get the data in gslib format into a pandas Dataframe\nmydata= pygslib.gslib.read_gslib_file('../data/cluster.dat') \ntrue= pygslib.gslib.read_gslib_file('../data/true.dat') \ntrue['Declustering Weight'] = 1", "gslib probplot with bokeh", "parameters_probplt = {\n # gslib parameters for histogram calculation \n 'iwt' : 0, # input boolean (Optional: set True). Use weight variable?\n 'va' : mydata['Primary'], # input rank-1 array('d') with bounds (nd). Variable\n 'wt' : mydata['Declustering Weight'], # input rank-1 array('d') with bounds (nd) (Optional, set to array of ones). Declustering weight. \n # visual parameters for figure (if a new figure is created)\n 'figure' : None, # a bokeh figure object (Optional: new figure created if None). Set none or undefined if creating a new figure. \n 'title' : 'Prob blot', # string (Optional, \"Histogram\"). Figure title\n 'xlabel' : 'Primary', # string (Optional, default \"Z\"). X axis label \n 'ylabel' : 'P[Z<c]', # string (Optional, default \"f(%)\"). Y axis label\n 'xlog' : 1, # boolean (Optional, default True). If true plot X axis in log sale.\n 'ylog' : 1, # boolean (Optional, default True). If true plot Y axis in log sale. \n # visual parameter for the probplt\n 'style' : 'circle', # string with valid bokeh chart type \n 'color' : 'blue', # string with valid CSS colour (https://www.w3schools.com/colors/colors_names.asp), or an RGB(A) hex value, or tuple of integers (r,g,b), or tuple of (r,g,b,a) (Optional, default \"navy\")\n 'legend': 'Non declustered', # string (Optional, default \"NA\"). \n 'alpha' : 1, # float [0-1] (Optional, default 0.5). Transparency of the fill colour \n 'lwidth': 0, # float (Optional, default 1). Line width\n # leyend\n 'legendloc': 'bottom_right'} # float (Optional, default 'top_right'). Any of top_left, top_center, top_right, center_right, bottom_right, bottom_center, bottom_left, center_left or center\n \nparameters_probplt_dcl = parameters_probplt.copy()\nparameters_probplt_dcl['iwt']=1\nparameters_probplt_dcl['legend']='Declustered'\nparameters_probplt_dcl['color'] = 'red'\n\nparameters_probplt_true = parameters_probplt.copy()\nparameters_probplt_true['va'] = true['Primary']\nparameters_probplt_true['wt'] = true['Declustering Weight']\nparameters_probplt_true['iwt']=0\nparameters_probplt_true['legend']='True'\nparameters_probplt_true['color'] = 'black'\nparameters_probplt_true['style'] = 'line'\nparameters_probplt_true['lwidth'] = 1\n\n\nresults, fig = pygslib.plothtml.probplt(parameters_probplt)\n\n# add declustered to the plot\nparameters_probplt_dcl['figure']= fig\nresults, fig = pygslib.plothtml.probplt(parameters_probplt_dcl)\n\n# add true CDF to the plot\nparameters_probplt_true['figure']=parameters_probplt_dcl['figure']\nresults, fig = pygslib.plothtml.probplt(parameters_probplt_true)\n\n# show the plot\npygslib.plothtml.show(fig)" ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]