| markdown | code | path | repo_name | license |
|---|---|---|---|---|
33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
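For instance, filling in this property could look like the sketch below; the choice of "prognostic" is purely illustrative.

```python
# Illustrative only: record a (hypothetical) prognostic treatment of lake albedo
DOC.set_id('cmip6.land.lakes.method.albedo')
DOC.set_value("prognostic")  # must be one of the valid choices listed above
```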
33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
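Since this property has cardinality 1.N, more than one choice may apply. A sketch, assuming repeated DOC.set_value calls record multiple selections (the choices themselves are illustrative):

```python
# Illustrative only: a model treating both vertical and horizontal lake dynamics
DOC.set_id('cmip6.land.lakes.method.dynamics')
DOC.set_value("vertical")
DOC.set_value("horizontal")
```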
33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
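Boolean properties take an unquoted value, as in this sketch (the value True is illustrative):

```python
# Illustrative only: declare that a dynamic lake extent scheme is included
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
DOC.set_value(True)
```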
33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/miroc/cmip6/models/sandbox-3/land.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: ... | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter val... | notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value... | notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s). | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area) | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please speci... | notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosph... | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme? | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep. | # PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
| notebooks/nuist/cmip6/models/sandbox-2/landice.ipynb | ES-DOC/esdoc-jupyterhub | gpl-3.0 |
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run i... | import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install -... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Restart the kernel
After you install the additional packages, you need to restart the notebook kernel so it can find the packages. | # Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Before you begin
Select a GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud pr... | import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Otherwise, set your project ID here. | if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"} | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Set gcloud config to your project ID. | !gcloud config set project $PROJECT_ID | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial. | from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S") | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Authenticate your Google Cloud account
If you are using Vertex AI Workbench, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the Cloud Console, go t... | import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also sa... | BUCKET_URI = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
if REGION == "[your-region]":
REGION = "us-central1" | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket. | ! gsutil mb -l $REGION $BUCKET_URI | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Finally, validate access to your Cloud Storage bucket by examining its contents: | ! gsutil ls -al $BUCKET_URI | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Import libraries and define constants
Import required libraries. | import pandas as pd
from google.cloud import aiplatform
from sklearn.metrics import mean_absolute_error, mean_squared_error
from tensorflow.python.keras.utils import data_utils | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Initialize Vertex AI and set an experiment
Define experiment name. | EXPERIMENT_NAME = "" # @param {type:"string"} | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
If EXPERIMENT_NAME is not set, set a default one below: | if EXPERIMENT_NAME == "" or EXPERIMENT_NAME is None:
EXPERIMENT_NAME = "my-experiment-" + TIMESTAMP | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Initialize the client for Vertex AI. | aiplatform.init(
project=PROJECT_ID,
location=REGION,
staging_bucket=BUCKET_URI,
experiment=EXPERIMENT_NAME,
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Tracking parameters and metrics in Vertex AI custom training jobs
This example uses the Abalone Dataset. For more information about this dataset please visit: https://archive.ics.uci.edu/ml/datasets/abalone | !wget https://storage.googleapis.com/download.tensorflow.org/data/abalone_train.csv
!gsutil cp abalone_train.csv {BUCKET_URI}/data/
gcs_csv_path = f"{BUCKET_URI}/data/abalone_train.csv" | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Create a managed tabular dataset from a CSV
A Managed dataset can be used to create an AutoML model or a custom model. | ds = aiplatform.TabularDataset.create(display_name="abalone", gcs_source=[gcs_csv_path])
ds.resource_name | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Write the training script
Run the following cell to create the training script that is used in the sample custom training job. | %%writefile training_script.py
import pandas as pd
import argparse
import os
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
parser = argparse.ArgumentParser()
parser.add_argument('--epochs', dest='epochs',
default=10, type=int,
help='Nu... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Launch a custom training job and track its training parameters on Vertex AI ML Metadata | job = aiplatform.CustomTrainingJob(
display_name="train-abalone-dist-1-replica",
script_path="training_script.py",
container_uri="us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-8:latest",
requirements=["gcsfs==0.7.1"],
model_serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Start a new experiment run to track training parameters and start the training job. Note that this operation will take around 10 mins. | aiplatform.start_run("custom-training-run-1") # Change this to your desired run name
parameters = {"epochs": 10, "num_units": 64}
aiplatform.log_params(parameters)
model = job.run(
ds,
replica_count=1,
model_display_name="abalone-model",
args=[f"--epochs={parameters['epochs']}", f"--num_units={paramet... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Deploy Model and calculate prediction metrics
Deploy model to Google Cloud. This operation will take 10-20 mins. | endpoint = model.deploy(machine_type="n1-standard-4") | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Once the model is deployed, perform online prediction using the abalone_test dataset and calculate prediction metrics.
Prepare the prediction dataset. | def read_data(uri):
dataset_path = data_utils.get_file("abalone_test.data", uri)
col_names = [
"Length",
"Diameter",
"Height",
"Whole weight",
"Shucked weight",
"Viscera weight",
"Shell weight",
"Age",
]
dataset = pd.read_csv(
datas... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Perform online prediction. | prediction = endpoint.predict(test_dataset.tolist())
prediction | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Calculate and track prediction evaluation metrics. | mse = mean_squared_error(test_labels, prediction.predictions)
mae = mean_absolute_error(test_labels, prediction.predictions)
aiplatform.log_metrics({"mse": mse, "mae": mae}) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Extract all parameters and metrics created during this experiment. | aiplatform.get_experiment_df() | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
View data in the Cloud Console
Parameters and metrics can also be viewed in the Cloud Console. | print("Vertex AI Experiments:")
print(
f"https://console.cloud.google.com/ai/platform/experiments/experiments?folder=&organizationId=&project={PROJECT_ID}"
) | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Vertex AI Dataset
Training Job
Model
Endpoint
Cloud Storag... | # Warning: Setting this to true will delete everything in your bucket
delete_bucket = False
# Delete dataset
ds.delete()
# Delete the training job
job.delete()
# Undeploy model from endpoint
endpoint.undeploy_all()
# Delete the endpoint
endpoint.delete()
# Delete the model
model.delete()
if delete_bucket or os.g... | notebooks/official/ml_metadata/sdk-metric-parameter-tracking-for-custom-jobs.ipynb | GoogleCloudPlatform/vertex-ai-samples | apache-2.0 |
Read the list of all sections
A section is the content inside the square brackets []. | s = cf.sections()
print '【Output】'
print s | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Read the list of option keys under a given section
Options are the keys of the key-value pairs under a section. | opt = cf.options('concurrent')
print '【Output】'
print opt | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Get the list of key-value pairs under a given section | items = cf.items('concurrent')
print '【Output】'
print items | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Read configuration values as specific data types
The cf object provides four methods, get(), getint(), getboolean() and getfloat(), for reading configuration values of different data types. | db_host = cf.get('db','db_host')
db_port = cf.getint('db','db_port')
thread = cf.getint('concurrent','thread')
print '【Output】'
print db_host,db_port,thread | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
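The getboolean() and getfloat() methods work the same way. A sketch with a hypothetical [misc] section (not present in sys.conf) holding a boolean flag and a float:

```python
# Hypothetical section and options, for illustration only:
# [misc]
# debug = true
# timeout = 10.5
debug = cf.getboolean('misc','debug')    # -> True
timeout = cf.getfloat('misc','timeout')  # -> 10.5
print '【Output】'
print debug,timeout
```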
Modify the value of a configuration item
For example, to change the database password: | cf.set('db','db_pass','newpass')
# Write the file back for the change to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Add a section | cf.add_section('log')
cf.set('log','name','mylog.log')
cf.set('log','num',100)
cf.set('log','size',10.55)
cf.set('log','auto_save',True)
cf.set('log','info','%(bar)s is %(baz)s!')
# Again, write the file back for the change to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
After running the code above, the sys.conf file contains a new section:
bash
[log]
name = mylog.log
num = 100
size = 10.55
auto_save = True
info = %(bar)s is %(baz)s!
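Note that the info option contains %(...)s interpolation patterns. Reading it with interpolation enabled would fail because bar and baz are not defined, so pass raw to get the literal string. A sketch, assuming cf is a ConfigParser.ConfigParser instance (whose get() accepts a raw argument):

```python
# raw=1 skips interpolation and returns the stored string as-is
info = cf.get('log','info',raw=1)
print '【Output】'
print info  # %(bar)s is %(baz)s!
```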
Remove a section | cf.remove_section('log')
# Again, write the file back for the change to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Remove an option | cf.remove_option('db','db_pass')
# Again, write the file back for the change to take effect
with open('sys.conf','w') as f:
cf.write(f) | libs/ConfigParser/handout.ipynb | dnxbjyj/python-basic | mit |
Setup | !pip install floq_client --quiet
# Imports
import numpy as np
import sympy
import cirq
import floq.client | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
Floq simulation | nrows = 10
ncols = 2
qubits = cirq.GridQubit.rect(nrows, ncols) # 20 qubits
parameters = sympy.symbols([f'a{idx}' for idx in range(nrows * ncols)])
circuit = cirq.Circuit(cirq.HPowGate(exponent=p).on(q) for p, q in zip(parameters, qubits)) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
New observable compatible with Floq
Floq only accepts observables of type cirq.ops.linear_combinations.PauliSum | observables = []
for i in range(nrows):
for j in range(ncols):
if i < nrows - 1:
observables.append(cirq.Z(qubits[i*ncols + j]) * cirq.Z(qubits[(i + 1)*ncols + j]))
# Z[i * ncols + j] * Z[(i + 1) * ncols + j]
if j < ncols - 1:
observables.append(cirq.Z(qubits[i*ncols + j]) * cirq.Z(qubits[i*... | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
Padding qubits
Because Floq's minimum number of qubits is 26, we need to pad it. This will be changed in the future. | def pad_circuit(circ, qubits):
return circ + cirq.Circuit([cirq.I(q) for q in qubits])
def get_pad_qubits(circ):
num = len(circ.all_qubits())
return [cirq.GridQubit(num, pad) for pad in range(26 - num)]
pad_qubits = get_pad_qubits(circuit)
padded_circuit = pad_circuit(circuit, pad_qubits)
padded_circuit
val... | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
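The resolver used below binds a value to each symbolic parameter; its construction is truncated above. A sketch of what it could look like, assuming random parameter values:

```python
# Hypothetical resolver construction (the notebook's actual values are truncated)
param_values = np.random.random(len(parameters))
resolver = cirq.ParamResolver(dict(zip(parameters, param_values)))
```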
Using Floq simulator
Before going further, please FORK THIS COLAB NOTEBOOK, and DO NOT SHARE YOUR API KEY WITH OTHERS.
Create & start a Floq instance | # Please specify your API_KEY
API_KEY = "" #@param {type:"string"}
!floq-client "$API_KEY" worker start
client = floq.client.CirqClient(API_KEY) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
Expectation values from the circuit and measurements | energy = client.simulator.simulate_expectation_values(padded_circuit, measure, resolver)
# energy shows expectation values on each Pauli sum in measure.
energy
# Here is the total energy
sum(energy) | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
Samples from the circuit | niter = 100
samples = client.simulator.run(padded_circuit, resolver, niter)
samples | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
Stop the Floq instance | !floq-client "$API_KEY" worker stop | samples/notebooks/Floq_Client_Colab_Tutorial.ipynb | google/floq-client | apache-2.0 |
We will use mostly TensorFlow functions to open and process images: | def open_image(filename, target_shape = (256, 256)):
""" Load the specified file as a JPEG image, preprocess it and
resize it to the target shape.
"""
image_string = tf.io.read_file(filename)
image = tf.image.decode_jpeg(image_string, channels=3)
image = tf.image.convert_image_dtype(image, tf.fl... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
To generate the list of negative images, let's randomize the list of available images (anchors and positives) and concatenate them together. | import numpy as np
rng = np.random.RandomState(seed=42)
rng.shuffle(anchor_images)
rng.shuffle(positive_images)
negative_images = anchor_images + positive_images
np.random.RandomState(seed=32).shuffle(negative_images)
negative_dataset_files = tf.data.Dataset.from_tensor_slices(negative_images)
negative_dataset_file... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
We can visualize a triplet and display its shape: | anc_batch, pos_batch, neg_batch = next(train_dataset.take(1).as_numpy_iterator())
print(anc_batch.shape, pos_batch.shape, neg_batch.shape)
idx = np.random.randint(0, 32)
visualize([anc_batch[idx], pos_batch[idx], neg_batch[idx]]) | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
Exercise
Build the embedding network, starting from a resnet and adding a few layers. The output should have a dimension $d= 128$ or $d=256$. Edit the following code, and you may use the next cell to test your code.
Bonus: Try to freeze the weights of the ResNet. | from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
output = input_img # change that line and edit this code!
embedding = Model(input_img, output, name="Embedding")
ou... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
Run the following cell to get the same architecture as ours: | from tensorflow.keras import Model, layers
from tensorflow.keras import optimizers, losses, metrics, applications
from tensorflow.keras.applications import resnet
input_img = layers.Input((224,224,3))
base_cnn = resnet.ResNet50(weights="imagenet", input_shape=(224,224,3), include_top=False)
resnet_output = base_cnn(i... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
Exercise
Our goal is now to build the positive and negative distances from 3 input images (the anchor, the positive, and the negative): $‖f(A) - f(P)‖²$ and $‖f(A) - f(N)‖²$. You may define a specific Layer using the Keras subclassing API, or any other method.
You will need to run the Embedding model previously defined,... | anchor_input = layers.Input(name="anchor", shape=(224, 224, 3))
positive_input = layers.Input(name="positive", shape=(224, 224, 3))
negative_input = layers.Input(name="negative", shape=(224, 224, 3))
distances = [anchor_input, positive_input] # TODO: Change this code to actually compute the distances
siamese_network ... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
Solution: run the following cell to get the exact same method as ours. | class DistanceLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
def call(self, anchor, positive, negative):
ap_distance = tf.reduce_sum(tf.square(anchor - positive), -1)
an_distance = tf.reduce_sum(tf.square(anchor - negative), -1)
return (ap_distance... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
The final triplet model
Once we are able to produce the distances, we may wrap it into a new Keras Model which includes the computation of the loss. The following implementation uses a subclassing of the Model class, redefining a few functions used internally during model.fit: call, train_step, test_step | class TripletModel(Model):
"""The Final Keras Model with a custom training and testing loops.
Computes the triplet loss using the three embeddings produced by the
Siamese Network.
The triplet loss is defined as:
L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)
"""
def __in... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
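The loss computed inside train_step and test_step follows the formula above. A minimal standalone sketch of that computation (the margin value of 0.5 is an assumption, not necessarily what this notebook uses):

```python
import tensorflow as tf

def triplet_loss(ap_distance, an_distance, margin=0.5):
    # L(A, P, N) = max(‖f(A) - f(P)‖² - ‖f(A) - f(N)‖² + margin, 0)
    return tf.maximum(ap_distance - an_distance + margin, 0.0)
```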
Find most similar images in test dataset
The negative_images list was built by concatenating all possible images, both anchors and positives. We can reuse these to form a bank of possible images to query from.
We will first compute all embeddings of these images. To do so, we build a tf.Dataset and apply the few functio... | from functools import partial
open_img = partial(open_image, target_shape=(224,224))
all_img_files = tf.data.Dataset.from_tensor_slices(negative_images)
dataset = all_img_files.map(open_img).map(preprocess).take(1024).batch(32, drop_remainder=False).prefetch(8)
all_embeddings = loaded_model.predict(dataset)
all_embed... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
We can build a most_similar function which takes an image path as input and returns the topn most similar images through the embedding representation. It would also be possible to use another metric here, such as cosine similarity (see the sketch after this cell). | random_img = np.random.choice(negative_images)
def most_similar(img, topn=5):
img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
new_emb = loaded_model.predict(preprocess(img_batch))
dists = tf.sqrt(tf.reduce_sum((all_embeddings - new_emb)**2, -1)).numpy()
idxs = np.argsort(dists)[... | labs/09_triplet_loss/triplet_loss_totally_looks_like.ipynb | m2dsupsdlclass/lectures-labs | mit |
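A sketch of the cosine-similarity variant mentioned above, reusing all_embeddings and loaded_model from this notebook (the helper name most_similar_cosine is ours):

```python
def most_similar_cosine(img, topn=5):
    # Hypothetical variant of most_similar that ranks by cosine similarity
    img_batch = tf.expand_dims(open_image(img, target_shape=(224, 224)), 0)
    new_emb = loaded_model.predict(preprocess(img_batch))
    # Normalize the embeddings so a dot product gives the cosine similarity
    norm_all = all_embeddings / np.linalg.norm(all_embeddings, axis=-1, keepdims=True)
    norm_new = new_emb / np.linalg.norm(new_emb, axis=-1, keepdims=True)
    sims = (norm_all @ norm_new.T).ravel()
    # Higher similarity is better, so sort in descending order
    return np.argsort(-sims)[:topn]
```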
Signal-space separation (SSS) and Maxwell filtering
This tutorial covers reducing environmental noise and compensating for head
movement with SSS and Maxwell filtering.
As usual we'll start by importing the modules we need, loading some
example data, and cropping it to save on memory... | import os
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
import mne
from mne.preprocessing import find_bad_channels_maxwell
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
... | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Background on SSS and Maxwell filtering
Signal-space separation (SSS) (Taulu & Kajola, 2005; Taulu & Simola, 2006)
is a technique based on the physics
of electromagnetic fields. SSS separates the measured signal into components
attributable to sources inside the measurement volume of the sensor array
(the internal compone... | fine_cal_file = os.path.join(sample_data_folder, 'SSS', 'sss_cal_mgh.dat')
crosstalk_file = os.path.join(sample_data_folder, 'SSS', 'ct_sparse_mgh.fif') | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Before we perform SSS we'll look for bad channels — MEG 2443 is quite
noisy.
**Warning:** It is critical to mark bad channels in ``raw.info['bads']`` *before* calling `mne.preprocessing.maxwell_filter` in order to prevent bad channel noise from spreading.
... | raw.info['bads'] = []
raw_check = raw.copy()
auto_noisy_chs, auto_flat_chs, auto_scores = find_bad_channels_maxwell(
raw_check, cross_talk=crosstalk_file, calibration=fine_cal_file,
return_scores=True, verbose=True)
print(auto_noisy_chs) # we should find them!
print(auto_flat_chs) # none for this dataset | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
**Note:** `mne.preprocessing.find_bad_channels_maxwell` needs to operate on a signal without line noise or cHPI signals. By default, it simply applies a low-pass filter with a cutoff frequency of 40 Hz to the data, which should remove these artifacts. Y... | bads = raw.info['bads'] + auto_noisy_chs + auto_flat_chs
raw.info['bads'] = bads | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
We called `mne.preprocessing.find_bad_channels_maxwell` with the optional
keyword argument return_scores=True, causing the function to return a
dictionary of all data related to the scoring used to classify channels as
noisy or flat. This information can be used to produce diagnostic figures.
In the following, we will g... | # Only select the data for gradiometer channels.
ch_type = 'grad'
ch_subset = auto_scores['ch_types'] == ch_type
ch_names = auto_scores['ch_names'][ch_subset]
scores = auto_scores['scores_noisy'][ch_subset]
limits = auto_scores['limits_noisy'][ch_subset]
bins = auto_scores['bins'] # The windows that were evaluated.... | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
**Note:** You can use the very same code as above to produce figures for *flat* channel detection. Simply replace the word "noisy" with "flat", and replace ``vmin=np.nanmin(limits)`` with ``vmax=np.nanmax(limits)``.
You can see the un-altered ... | raw.info['bads'] += ['MEG 2313'] # from manual inspection | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
After that, performing SSS and Maxwell filtering is done with a
single call to `mne.preprocessing.maxwell_filter`, with the crosstalk
and fine calibration filenames provided (if available): | raw_sss = mne.preprocessing.maxwell_filter(
raw, cross_talk=crosstalk_file, calibration=fine_cal_file, verbose=True) | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
To see the effect, we can plot the data before and after SSS / Maxwell
filtering. | raw.pick(['meg']).plot(duration=2, butterfly=True)
raw_sss.pick(['meg']).plot(duration=2, butterfly=True) | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Notice that channels marked as "bad" have been effectively repaired by SSS,
eliminating the need to perform interpolation.
The heartbeat artifact has also been substantially reduced.
The `mne.preprocessing.maxwell_filter` function has parameters
int_order and ext_order for setting the orde... | head_pos_file = os.path.join(mne.datasets.testing.data_path(), 'SSS',
'test_move_anon_raw.pos')
head_pos = mne.chpi.read_head_pos(head_pos_file)
mne.viz.plot_head_positions(head_pos, mode='traces') | 0.22/_downloads/243172b1ef6a2d804d3245b8c0a927ef/plot_60_maxwell_filtering_sss.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
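These head positions can be passed to maxwell_filter through its head_pos parameter to compensate for movement. A sketch of the call (shown for the API only; the head positions above come from a different recording than the sample raw):

```python
# Movement-compensated Maxwell filtering (sketch)
raw_sss_mc = mne.preprocessing.maxwell_filter(raw, head_pos=head_pos)
```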
Comparing the time | import timeit
# timeit.timeit() benchmarks an empty statement rather than reading the clock,
# so use timeit.default_timer() to take start/end timestamps
start = timeit.default_timer()
X = range(1000)
pySum = sum([n*n for n in X])
end = timeit.default_timer()
print("Total time taken: ", end - start) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 |
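For comparison, the same computation vectorized with NumPy can be timed in the same way. A sketch, assuming NumPy is available (it is imported as np elsewhere in this notebook):

```python
import numpy as np
import timeit

start = timeit.default_timer()
npSum = np.sum(np.arange(1000) ** 2)  # vectorized equivalent of the list comprehension
end = timeit.default_timer()
print("NumPy total time taken: ", end - start)
```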
Learning Scipy | # reading the web data
data = sp.genfromtxt("data/web_traffic.tsv", delimiter="\t")
print(data[:3])
print(len(data)) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 |
Preprocessing and Cleaning the data | X = data[:, 0]
y = data[:, 1]
# checking for nan values
print(sum(np.isnan(X)))
print(sum(np.isnan(y))) | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 |
Filtering the nan data | X = X[~np.isnan(y)]
y = y[~np.isnan(y)]
# checking for nan values
print(sum(np.isnan(X)))
print(sum(np.isnan(y)))
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(X, y, '.b')
ax.margins(0.2)
plt.xticks([w*24*7 for w in range(0, 6)], ["week %d" %w for w in range(0, 6)])
ax.set_xlabel("Week")
ax.set_ylabel("Hits / Week")... | BMLSwPython/01_GettingStarted_withPython.ipynb | atulsingh0/MachineLearning | gpl-3.0 |