The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to the global minimum.
_dat['predictions']['KKF_zero'] = np.nan
_dat['predictions']['KKF_zero'].loc[25] = t_298 / (365*24*3600)  # convert seconds to years
notebooks/Impurity Prediction Example 1.ipynb
brentjm/Impurity-Predictions
bsd-2-clause
First order (method of King, Kung, Fung). Define the objective function for a first-order reaction.
def error(p):
    t_298 = p[0]
    E = p[1]
    k = np.exp(E/R * (1/298. - 1/T))
    k = np.log(1 - Sp) / t_298 * k
    err = Co * np.exp(k*t) - C
    return np.sum(err**2)
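To see the shallow-gradient behaviour the figures describe, the objective above can be evaluated on a grid of `t_298` values with everything else held fixed. This is a minimal self-contained sketch: all constants and the synthetic "observed" concentrations below are assumed illustration values, not the notebook's data.

```python
import numpy as np

R = 8.314                                 # gas constant, J/(mol*K)
Sp = 0.05                                 # assumed fractional potency-loss specification
T = np.array([313.0, 323.0, 333.0])       # assumed stress temperatures (K)
t = np.array([90.0, 30.0, 10.0]) * 86400  # assumed sample time points (s)
Co = 100.0                                # initial concentration
E = 8.0e4                                 # assumed activation energy (J/mol)

# Synthetic "observed" concentrations generated at a known t_298.
t298_true = 2.0e8
k_true = np.log(1 - Sp) / t298_true * np.exp(E / R * (1 / 298.0 - 1 / T))
C = Co * np.exp(k_true * t)

def error(t_298):
    # Same first-order objective as above, with E held fixed.
    k = np.exp(E / R * (1 / 298.0 - 1 / T))
    k = np.log(1 - Sp) / t_298 * k
    return np.sum((Co * np.exp(k * t) - C) ** 2)

# Near the minimum the objective barely changes: a 10% error in t_298
# moves the sum of squared residuals only slightly.
for t298 in (0.9 * t298_true, t298_true, 1.1 * t298_true):
    print(f"t_298 = {t298:.3g}  SSE = {error(t298):.3e}")
```

With these assumed values, a 10% error in the fitted parameter changes the sum of squared errors by well under one squared concentration unit, which is why a local optimizer can stall far from the true parameters.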
The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to the global minimum.
_dat['predictions']['KKF_first'] = np.nan
_dat['predictions']['KKF_first'].loc[25] = t_298 / (365*24*3600)  # convert seconds to years
Second order (method of King, Kung, Fung). Define the objective function for a second-order reaction.
def error(p):
    t_298 = p[0]
    E = p[1]
    k = np.exp(E/R * (1/298. - 1/T))
    k = 1 / t_298 * (Sp/(1-Sp)) * k
    err = Co / (1 + k*t) - C  # residual against the observed concentrations
    return np.sum(err**2)
The figure of the sum of residual errors illustrates that the objective function has very shallow gradients near the minimum, which makes it difficult to converge to the global minimum.
_dat['predictions']['KKF_second'] = np.nan
_dat['predictions']['KKF_second'].loc[25] = t_298 / (365*24*3600)  # convert seconds to years
_dat['predictions']
This notebook is a revised version of a notebook from Amy Wu and Shen Zhimo. E2E ML on GCP: MLOps stage 6: serving: get started with Vertex AI Matching Engine and the Two-Tower built-in algorithm <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> <td> <a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://raw.githubusercontent.com/GoogleCloudPlatform/vertex-ai-samples/main/notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb"> <img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo"> Run in Vertex Workbench </a> </td> </table> Overview This tutorial demonstrates how to use the Vertex AI Two-Tower built-in algorithm with Vertex AI Matching Engine. Dataset This tutorial uses the movielens_100k sample dataset in the public bucket gs://cloud-samples-data/vertex-ai/matching-engine/two-tower, which was generated from the MovieLens movie rating dataset. For this tutorial, the data only includes the user id feature for users, and the movie id and movie title features for movies. In this example, the user is the query object and the movie is the candidate object, and each training example in the dataset contains a user and a movie they rated (only positive ratings are included in the dataset).
The two-tower model will embed the user and the movie in the same embedding space, so that given a user, the model will recommend movies it thinks the user will like. Objective In this notebook, you will learn how to use the Two-Tower built-in algorithm to generate embeddings for a dataset, for use in generating a Matching Engine index with the Vertex AI Matching Engine service. This tutorial uses the following Google Cloud ML services: Vertex AI Two-Tower built-in algorithm Vertex AI Matching Engine Vertex AI Batch Prediction The tutorial covers the following steps: Train the Two-Tower algorithm to generate embeddings (an encoder) for the dataset. Hyperparameter tune the trained Two-Tower encoder. Make example predictions (embeddings) from the trained encoder. Generate embeddings using the trained Two-Tower built-in algorithm. Store the embeddings in a format supported by Matching Engine. Create a Matching Engine Index for the embeddings. Deploy the Matching Engine Index to an Index Endpoint. Make a Matching Engine prediction request. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Google Cloud SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the packages required for executing this notebook.
import os

# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
    "/opt/deeplearning/metadata/env_version"
)

# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
    USER_FLAG = "--user"

! pip3 install {USER_FLAG} --upgrade tensorflow -q
! pip3 install {USER_FLAG} --upgrade google-cloud-aiplatform tensorboard-plugin-profile -q
! gcloud components update --quiet
notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands. Set your project ID If you do not know your project ID, you may be able to get your project ID using gcloud.
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)

! gcloud config set project $PROJECT_ID
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. Before you submit a training job for the two-tower model, you need to upload your training data and schema to Cloud Storage. Vertex AI trains the model using this input data. In this tutorial, the Two-Tower built-in algorithm also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model and endpoint resources in order to serve online predictions. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
BUCKET_NAME = "[your-bucket-name]"  # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"

if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
    BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
    BUCKET_URI = "gs://" + BUCKET_NAME
Import libraries and define constants
import os

from google.cloud import aiplatform

%load_ext tensorboard
Introduction to the Two-Tower algorithm Two-tower models learn to represent two items of various types (such as user profiles, search queries, web documents, answer passages, or images) in the same vector space, so that similar or related items are close to each other. These two items are referred to as the query and candidate object, since when paired with a nearest neighbor search service such as Vertex Matching Engine, the two-tower model can retrieve candidate objects related to an input query object. These objects are encoded by a query and candidate encoder (the two "towers") respectively, which are trained on pairs of relevant items. This built-in algorithm exports trained query and candidate encoders as model artifacts, which can be deployed in Vertex Prediction for use in a recommendation system. Configure training parameters for the Two-Tower built-in algorithm The following table shows parameters that are common to all Vertex AI Training jobs created using the gcloud ai custom-jobs create command. See the official documentation for all the possible arguments.

| Parameter | Data type | Description | Required |
|--|--|--|--|
| display-name | string | Name of the job. | Yes |
| worker-pool-spec | string | Comma-separated list of arguments specifying a worker pool configuration (see below). | Yes |
| region | string | Region to submit the job to. | No |

The worker-pool-spec flag can be specified multiple times, one for each worker pool. The following table shows the arguments used to specify a worker pool.

| Parameter | Data type | Description | Required |
|--|--|--|--|
| machine-type | string | Machine type for the pool. See the official documentation for supported machines. | Yes |
| replica-count | int | The number of replicas of the machine in the pool. | No |
| container-image-uri | string | Docker image to run on each worker. | No |

The following table shows the parameters for the two-tower model training job:

| Parameter | Data type | Description | Required |
|--|--|--|--|
| training_data_path | string | Cloud Storage pattern where training data is stored. | Yes |
| input_schema_path | string | Cloud Storage path where the JSON input schema is stored. | Yes |
| input_file_format | string | The file format of input. Currently supports jsonl and tfrecord. | No - default is jsonl. |
| job_dir | string | Cloud Storage directory where the model output files will be stored. | Yes |
| eval_data_path | string | Cloud Storage pattern where eval data is stored. | No |
| candidate_data_path | string | Cloud Storage pattern where candidate data is stored. Only used for top_k_categorical_accuracy metrics. If not set, it's generated from training/eval data. | No |
| train_batch_size | int | Batch size for training. | No - Default is 100. |
| eval_batch_size | int | Batch size for evaluation. | No - Default is 100. |
| eval_split | float | Split fraction to use for the evaluation dataset, if eval_data_path is not provided. | No - Default is 0.2. |
| optimizer | string | Training optimizer. The lowercase string name of any TF 2.3 Keras optimizer is supported ('sgd', 'nadam', 'ftrl', etc.). See TensorFlow documentation. | No - Default is 'adagrad'. |
| learning_rate | float | Learning rate for training. | No - Default is the default learning rate of the specified optimizer. |
| momentum | float | Momentum for the optimizer, if specified. | No - Default is the default momentum value for the specified optimizer. |
| metrics | string | Metrics used to evaluate the model. Can be auc, top_k_categorical_accuracy, or precision_at_1. | No - Default is auc. |
| num_epochs | int | Number of epochs for training. | No - Default is 10. |
| num_hidden_layers | int | Number of hidden layers. | No |
| num_nodes_hidden_layer{index} | int | Number of nodes in hidden layer {index}. The range of index is 1 to 20. | No |
| output_dim | int | The output embedding dimension for each encoder tower of the two-tower model. | No - Default is 64. |
| training_steps_per_epoch | int | Number of steps per epoch to run the training for. Only needed if you are using more than one machine or a master machine with more than one GPU. | No - Default is None. |
| eval_steps_per_epoch | int | Number of steps per epoch to run the evaluation for. Only needed if you are using more than one machine or a master machine with more than one GPU. | No - Default is None. |
| gpu_memory_alloc | int | Amount of memory allocated per GPU (in MB). | No - Default is no limit. |
DATASET_NAME = "movielens_100k"  # Change to your dataset name.

# Change to your data and schema paths. These are paths to the movielens_100k
# sample data.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/training_data/*"
INPUT_SCHEMA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/input_schema.json"

# URI of the two-tower training Docker image.
LEARNER_IMAGE_URI = "us-docker.pkg.dev/vertex-ai-restricted/builtin-algorithm/two-tower"

# Change to your output location.
OUTPUT_DIR = f"{BUCKET_URI}/experiment/output"
TRAIN_BATCH_SIZE = 100  # Batch size for training.
NUM_EPOCHS = 3  # Number of epochs for training.

print(f"Dataset name: {DATASET_NAME}")
print(f"Training data path: {TRAINING_DATA_PATH}")
print(f"Input schema path: {INPUT_SCHEMA_PATH}")
print(f"Output directory: {OUTPUT_DIR}")
print(f"Train batch size: {TRAIN_BATCH_SIZE}")
print(f"Number of epochs: {NUM_EPOCHS}")
Train on Vertex AI Training with CPU Submit the Two-Tower training job to Vertex AI Training. The following command uses a single CPU machine for training. When using single-node training, training_steps_per_epoch and eval_steps_per_epoch do not need to be set. Prepare your machine specification Now define the machine specification for your custom training job. This tells Vertex AI what type of machine instance to provision. - machine_type: The type of GCP instance to provision, e.g., n1-standard-8. - accelerator_type: The type, if any, of hardware accelerator. In this tutorial, if you previously set the variable TRAIN_GPU != None, you are using a GPU; otherwise you will use a CPU. - accelerator_count: The number of accelerators.
TRAIN_COMPUTE = "n1-standard-8"

machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}
Define the worker pool specification Next, you define the worker pool specification for your custom hyperparameter tuning job. The worker pool specification will consist of the following: replica_count: The number of instances to provision of this machine type. machine_spec: The hardware specification. disk_spec : (optional) The disk storage specification. container_spec: The training container containing the training package. Let's dive deeper now into the container specification: image_uri: The training image. command: The command to invoke in the training image. Defaults to the command entry point specified for the training image. args: The command line arguments to pass to the corresponding command entry point in training image.
JOB_NAME = "twotowers_cpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

CMDARGS = [
    f"--training_data_path={TRAINING_DATA_PATH}",
    f"--input_schema_path={INPUT_SCHEMA_PATH}",
    f"--job-dir={OUTPUT_DIR}",
    f"--train_batch_size={TRAIN_BATCH_SIZE}",
    f"--num_epochs={NUM_EPOCHS}",
]

worker_pool_spec = [
    {
        "replica_count": 1,
        "machine_spec": machine_spec,
        "disk_spec": disk_spec,
        "container_spec": {
            "image_uri": LEARNER_IMAGE_URI,
            "command": [],
            "args": CMDARGS,
        },
    }
]
Create a custom job Use the class CustomJob to create a custom job, such as for hyperparameter tuning, with the following parameters: display_name: A human readable name for the custom job. worker_pool_specs: The specification for the corresponding VM instances.
job = aiplatform.CustomJob(
    display_name="twotower_cpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)
Execute the custom job Next, execute your custom job using the method run().
job.run()
View output After the job finishes successfully, you can view the output directory.
! gsutil ls {OUTPUT_DIR}

! gsutil rm -rf {OUTPUT_DIR}/*
Train on Vertex AI Training with GPU Next, train the Two-Tower model using a GPU.
JOB_NAME = "twotowers_gpu_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

TRAIN_COMPUTE = "n1-highmem-4"
TRAIN_GPU = "NVIDIA_TESLA_K80"

machine_spec = {
    "machine_type": TRAIN_COMPUTE,
    "accelerator_type": TRAIN_GPU,
    "accelerator_count": 1,
}

CMDARGS = [
    f"--training_data_path={TRAINING_DATA_PATH}",
    f"--input_schema_path={INPUT_SCHEMA_PATH}",
    f"--job-dir={OUTPUT_DIR}",
    "--training_steps_per_epoch=1500",
    "--eval_steps_per_epoch=1500",
]

worker_pool_spec = [
    {
        "replica_count": 1,
        "machine_spec": machine_spec,
        "disk_spec": disk_spec,
        "container_spec": {
            "image_uri": LEARNER_IMAGE_URI,
            "command": [],
            "args": CMDARGS,
        },
    }
]
Create and execute the custom job Next, create and execute the custom job.
job = aiplatform.CustomJob(
    display_name="twotower_gpu_" + TIMESTAMP, worker_pool_specs=worker_pool_spec
)

job.run()
Train on Vertex AI Training with TFRecords Next, train the Two-Tower model using TFRecords.
TRAINING_DATA_PATH = f"gs://cloud-samples-data/vertex-ai/matching-engine/two-tower/{DATASET_NAME}/tfrecord/*"

JOB_NAME = "twotowers_tfrec_" + TIMESTAMP
MODEL_DIR = "{}/{}".format(BUCKET_URI, JOB_NAME)

TRAIN_COMPUTE = "n1-standard-8"

machine_spec = {"machine_type": TRAIN_COMPUTE, "accelerator_count": 0}

CMDARGS = [
    f"--training_data_path={TRAINING_DATA_PATH}",
    f"--input_schema_path={INPUT_SCHEMA_PATH}",
    f"--job-dir={OUTPUT_DIR}",
    f"--train_batch_size={TRAIN_BATCH_SIZE}",
    f"--num_epochs={NUM_EPOCHS}",
    "--input_file_format=tfrecord",
]

worker_pool_spec = [
    {
        "replica_count": 1,
        "machine_spec": machine_spec,
        "disk_spec": disk_spec,
        "container_spec": {
            "image_uri": LEARNER_IMAGE_URI,
            "command": [],
            "args": CMDARGS,
        },
    }
]
View output After the job finishes successfully, you can view the output directory.
! gsutil ls {OUTPUT_DIR}

! gsutil rm -rf {OUTPUT_DIR}
TensorBoard When the training starts, you can view the logs in TensorBoard. Colab users can use the TensorBoard widget below. For Workbench AI Notebooks users, the TensorBoard widget below won't work; we recommend launching TensorBoard through Cloud Shell instead. In your Cloud Shell, launch TensorBoard on port 8080: export TENSORBOARD_DIR=gs://xxxxx/tensorboard tensorboard --logdir=${TENSORBOARD_DIR} --port=8080 Click the "Web Preview" button at the top-right of the Cloud Shell window (it looks like an eye in a rectangle) and select "Preview on port 8080". This launches the TensorBoard webpage in a new browser tab. After the job finishes successfully, you can view the output directory:
try:
    TENSORBOARD_DIR = os.path.join(OUTPUT_DIR, "tensorboard")
    %tensorboard --logdir {TENSORBOARD_DIR}
except Exception as e:
    print(e)
Hyperparameter tuning You may want to optimize the hyperparameters used during training to improve your model's accuracy and performance. For this example, the following command runs a Vertex AI hyperparameter tuning job with 8 trials that attempts to maximize the validation AUC metric. The hyperparameters it optimizes are the number of hidden layers, the size of the hidden layers, and the learning rate. Learn more in the Hyperparameter tuning overview.
from google.cloud.aiplatform import hyperparameter_tuning as hpt

hpt_job = aiplatform.HyperparameterTuningJob(
    display_name="twotowers_" + TIMESTAMP,
    custom_job=job,
    metric_spec={
        "val_auc": "maximize",
    },
    parameter_spec={
        "learning_rate": hpt.DoubleParameterSpec(min=0.0001, max=0.1, scale="log"),
        "num_hidden_layers": hpt.IntegerParameterSpec(min=0, max=2, scale="linear"),
        "num_nodes_hidden_layer1": hpt.IntegerParameterSpec(
            min=1, max=128, scale="log"
        ),
        "num_nodes_hidden_layer2": hpt.IntegerParameterSpec(
            min=1, max=128, scale="log"
        ),
    },
    search_algorithm=None,
    max_trial_count=8,
    parallel_trial_count=1,
)
View output After the job finishes successfully, you can view the output directory.
BEST_MODEL = OUTPUT_DIR + "/trial_" + best[0]

! gsutil ls {BEST_MODEL}
Upload the model to a Vertex AI Model resource Your training job will export two TF SavedModels under gs://&lt;job_dir&gt;/query_model and gs://&lt;job_dir&gt;/candidate_model. These exported models can be used for online or batch prediction in Vertex Prediction. First, import the query (or candidate) model using the upload() method, with the following parameters: display_name: A human-readable name for the model resource. artifact_uri: The Cloud Storage location of the model artifacts. serving_container_image_uri: The deployment container. In this tutorial, you use the prebuilt Two-Tower deployment container. serving_container_health_route: The URL for the service to periodically ping to verify that the serving binary is running. For Two-Tower, this will be /v1/models/[model_name]. serving_container_predict_route: The URL to which the service routes prediction requests. For Two-Tower, this will be /v1/models/[model_name]:predict. serving_container_environment_variables: Preset environment variables to pass into the deployment container. Note: The underlying deployment container is built on TensorFlow Serving.
# The following imports the query (user) encoder model.
MODEL_TYPE = "query"

# Use the following instead to import the candidate (movie) encoder model.
# MODEL_TYPE = 'candidate'

DISPLAY_NAME = f"{DATASET_NAME}_{MODEL_TYPE}"  # The display name of the model.
MODEL_NAME = f"{MODEL_TYPE}_model"  # Used by the deployment container.

model = aiplatform.Model.upload(
    display_name=DISPLAY_NAME,
    artifact_uri=BEST_MODEL,
    serving_container_image_uri="us-central1-docker.pkg.dev/cloud-ml-algos/two-tower/deploy",
    serving_container_health_route=f"/v1/models/{MODEL_NAME}",
    serving_container_predict_route=f"/v1/models/{MODEL_NAME}:predict",
    serving_container_environment_variables={
        "MODEL_BASE_PATH": "$(AIP_STORAGE_URI)",
        "MODEL_NAME": MODEL_NAME,
    },
)
Deploy the model to a Vertex AI Endpoint Deploying the Vertex AI Model resource to a Vertex AI Endpoint for online predictions: Create an Endpoint resource exposing an external interface to users consuming the model. After the Endpoint is ready, deploy one or more instances of a model to the Endpoint. The deployed model runs the custom container image running the Two-Tower encoder to serve embeddings. Refer to the Vertex AI Predictions guide to Deploy a model using the Vertex AI API for more information about the APIs used in the following cells. Create a Vertex AI Endpoint Next, you create the Vertex AI Endpoint, to which you subsequently deploy your Vertex AI Model resource.
endpoint = aiplatform.Endpoint.create(display_name=DATASET_NAME)
Creating embeddings Now that you have deployed the query/candidate encoder model on Vertex AI Prediction, you can call the model to generate embeddings for new data. Make an online prediction with SDK Online prediction is used to synchronously query a model on a small batch of instances with minimal latency. The following function calls the deployed model using Vertex AI SDK for Python. The input data you want predicted embeddings on should be provided as a stringified JSON in the data field. Note that you should also provide a unique key field (of type str) for each input instance so that you can associate each output embedding with its corresponding input.
# Input items for the query model:
input_items = [
    {"data": '{"user_id": ["1"]}', "key": "key1"},
    {"data": '{"user_id": ["2"]}', "key": "key2"},
]

# Input items for the candidate model:
# input_items = [{
#     'data': '{"movie_id": ["1"], "movie_title": ["fake title"]}',
#     'key': 'key1'
# }]

encodings = endpoint.predict(input_items)

print(f"Number of encodings: {len(encodings.predictions)}")
print(encodings.predictions[0]["encoding"])
Make an online prediction with gcloud You can also do online prediction using the gcloud CLI.
import json

request = json.dumps({"instances": input_items})

with open("request.json", "w") as writer:
    writer.write(f"{request}\n")

ENDPOINT_ID = endpoint.resource_name

! gcloud ai endpoints predict {ENDPOINT_ID} \
  --region={REGION} \
  --json-request=request.json
Make a batch prediction Batch prediction is used to asynchronously make predictions on a batch of input data. This is recommended if you have a large input size and do not need an immediate response, such as getting embeddings for candidate objects in order to create an index for a nearest neighbor search service such as Vertex Matching Engine. Create the batch input file Next, you generate the batch input file to generate embeddings for the dataset, which you subsequently use to create an index with Vertex AI Matching Engine. In this example, the dataset contains 1000 unique identifiers (0...999). You will use the trained encoder to generate a predicted embedding for each unique identifier. The input data needs to be on Cloud Storage and in JSONL format. You can use the sample query object file provided below. As with online prediction, it's recommended to include the key field so that you can associate each output embedding with its corresponding input.
QUERY_EMBEDDING_PATH = f"{BUCKET_URI}/embeddings/train.jsonl"

import tensorflow as tf

with tf.io.gfile.GFile(QUERY_EMBEDDING_PATH, "w") as f:
    for i in range(0, 1000):
        query = {"data": '{"user_id": ["' + str(i) + '"]}', "key": f"key{i}"}
        f.write(json.dumps(query) + "\n")

print("\nNumber of embeddings: ")
! gsutil cat {QUERY_EMBEDDING_PATH} | wc -l
Send the prediction request To make a batch prediction request, call the model object's batch_predict method with the following parameters:

- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- prediction_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for the job.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to.

Compute instance scaling You can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1. If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
MIN_NODES = 1
MAX_NODES = 4

batch_predict_job = model.batch_predict(
    job_display_name=f"batch_predict_{DISPLAY_NAME}",
    gcs_source=[QUERY_EMBEDDING_PATH],
    gcs_destination_prefix=f"{BUCKET_URI}/embeddings/output",
    machine_type=DEPLOY_COMPUTE,
    starting_replica_count=MIN_NODES,
    max_replica_count=MAX_NODES,
)
Save the embeddings in JSONL format Next, you store the predicted embeddings as a JSONL formatted file. Each embedding is stored as: { 'id': .., 'embedding': [ ... ] } The format of the embeddings for the index can be in either CSV, JSON, or Avro format. Learn more about Embedding Formats for Indexing
embeddings = []
for result_file in result_files:
    with tf.io.gfile.GFile(result_file, "r") as f:
        instances = list(f)
    for instance in instances:
        instance = instance.replace('\\"', "'")
        result = json.loads(instance)
        prediction = result["prediction"]
        key = prediction["key"][3:]  # strip the 'key' prefix to recover the id
        encoding = prediction["encoding"]
        embedding = {"id": key, "embedding": encoding}
        embeddings.append(embedding)

print("Number of embeddings", len(embeddings))
print("Encoding Dimensions", len(embeddings[0]["embedding"]))
print("Example embedding", embeddings[0])

with open("embeddings.json", "w") as f:
    for i in range(len(embeddings)):
        f.write(json.dumps(embeddings[i]).replace('"', "'"))
        f.write("\n")

! head -n 2 embeddings.json
notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Store the JSONL formatted embeddings in Cloud Storage Next, you upload the embeddings file to your Cloud Storage bucket.
EMBEDDINGS_URI = f"{BUCKET_URI}/embeddings/twotower/" ! gsutil cp embeddings.json {EMBEDDINGS_URI}
notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
# Delete endpoint resource endpoint.delete(force=True) # Delete model resource model.delete() # Force undeployment of indexes and delete endpoint try: index_endpoint.delete(force=True) except Exception as e: print(e) # Delete indexes try: tree_ah_index.delete() brute_force_index.delete() except Exception as e: print(e) # Delete Cloud Storage objects that were created delete_bucket = False if delete_bucket or os.getenv("IS_TESTING"): ! gsutil -m rm -r $OUTPUT_DIR
notebooks/community/ml_ops/stage6/get_started_with_matching_engine_twotowers.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Split the datetime field into two parts: date and time.
# Split the datetime field into date and time components temp = pd.DatetimeIndex(data['datetime']) data['date'] = temp.date data['time'] = temp.time data.head()
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
As for the time component, the finest granularity appears to be the hour, so we simply extract the hour as a more compact feature.
# Create the hour field data['hour'] = pd.to_datetime(data.time, format="%H:%M:%S") data['hour'] = pd.Index(data['hour']).hour data
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
Thinking about it, the data only tells us which day it is. Common sense suggests that the number of people going out differs between weekends and workdays. We add a new field dayofweek for the day of the week, and another field dateDays for how many days have passed since the first day of rentals (guessing that this green, eco-friendly mode of transport spread quickly in Western countries).
# Derive a day-of-week categorical variable from the date data['dayofweek'] = pd.DatetimeIndex(data.date).dayofweek # Derive an elapsed-days variable from the date data['dateDays'] = (data.date - data.date[0]).astype('timedelta64[D]') data
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
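The dateDays idiom above can be checked on a toy example. Here is a minimal sketch (the dates below are made up for illustration) using .dt.days, which gives the same whole-day offsets as the timedelta64[D] cast:

```python
import pandas as pd

# Toy date column, analogous to data.date above
dates = pd.to_datetime(pd.Series(['2011-01-01', '2011-01-02', '2011-01-05']))

# Days elapsed since the first observation
day_offsets = (dates - dates[0]).dt.days
print(day_offsets.tolist())  # [0, 1, 4]
```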
So far we have only been guessing; we don't actually know the real distribution of the date-related data. Let's run a quick tally to see the real distribution: the number of bike rentals on each day of the week, split into registered and unregistered users.
byday = data.groupby('dayofweek') # Tally rentals by casual (unregistered) users byday['casual'].sum().reset_index() # Tally rentals by registered users byday['registered'].sum().reset_index()
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
Since weekends behave differently, we give Saturday its own column and Sunday its own column.
data['Saturday'] = 0 data.loc[data.dayofweek == 5, 'Saturday'] = 1 data['Sunday'] = 0 data.loc[data.dayofweek == 6, 'Sunday'] = 1 data
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
Drop the original raw time fields from the data.
# remove old data features dataRel = data.drop(['datetime', 'count','date','time','dayofweek'], axis=1) dataRel.head()
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
Feature vectorization We plan to build the model with scikit-learn. A pandas DataFrame has methods/functions to convert it directly into a Python dict. Also, here we distinguish categorical features from continuous ones, so that we can apply different processing to each later.
from sklearn.feature_extraction import DictVectorizer # Put the continuous-valued attributes into one dict featureConCols = ['temp','atemp','humidity','windspeed','dateDays','hour'] dataFeatureCon = dataRel[featureConCols] dataFeatureCon = dataFeatureCon.fillna( 'NA' ) #in case I missed any X_dictCon = dataFeatureCon.T.to_dict().values() # Put the categorical attributes into a separate dict featureCatCols = ['season','holiday','workingday','weather','Saturday', 'Sunday'] dataFeatureCat = dataRel[featureCatCols] dataFeatureCat = dataFeatureCat.fillna( 'NA' ) #in case I missed any X_dictCat = dataFeatureCat.T.to_dict().values() # Vectorize the features vec = DictVectorizer(sparse = False) X_vec_cat = vec.fit_transform(X_dictCat) X_vec_con = vec.fit_transform(X_dictCon) dataFeatureCon.head() X_vec_con dataFeatureCat.head() X_vec_cat
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
Standardizing continuous features We need to process the continuous-valued attributes. The most basic step is standardization, so that after processing each continuous attribute has zero mean and unit variance. Feeding data like this into the model helps both training convergence and model accuracy.
from sklearn import preprocessing # Standardize the continuous-valued data scaler = preprocessing.StandardScaler().fit(X_vec_con) X_vec_con = scaler.transform(X_vec_con) X_vec_con
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
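Under the hood, the scaler above subtracts each column's mean and divides by its standard deviation. Here is a numpy-only sketch of the same transform on toy values (the numbers are made up):

```python
import numpy as np

# Toy continuous features: 3 samples, 2 columns
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# What StandardScaler computes: zero mean, unit variance per column
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_std.mean(axis=0))  # ~[0. 0.]
print(X_std.std(axis=0))   # ~[1. 1.]
```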
Encoding categorical features The most common choice is of course one-hot encoding; for example, the colors red, blue, and yellow would be encoded as [1, 0, 0], [0, 1, 0], and [0, 0, 1].
from sklearn import preprocessing # One-hot encoding enc = preprocessing.OneHotEncoder() enc.fit(X_vec_cat) X_vec_cat = enc.transform(X_vec_cat).toarray() X_vec_cat
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
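The red/blue/yellow example can be reproduced with a one-line numpy sketch of what one-hot encoding produces (the integer mapping red=0, blue=1, yellow=2 is just for illustration):

```python
import numpy as np

# red=0, blue=1, yellow=2
labels = np.array([0, 1, 2, 1])

# Row i of the identity matrix is the one-hot vector for class i
one_hot = np.eye(3)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 1. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```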
Concatenate the features Combine the categorical and continuous features together.
import numpy as np # combine cat & con features X_vec = np.concatenate((X_vec_con,X_vec_cat), axis=1) X_vec
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
In the final feature matrix, the first 6 columns are the standardized continuous features, and the remaining columns are the encoded categorical features. Process the target values as well: extract them as floats.
# Vectorize the targets Y_vec_reg = dataRel['registered'].values.astype(float) Y_vec_cas = dataRel['casual'].values.astype(float) Y_vec_reg Y_vec_cas
七月在线机器学习在bat工业中应用项目实战/特征工程练习/feature_engineering.ipynb
qiu997018209/MachineLearning
apache-2.0
<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://quantumai.google/cirq/qcvv/xeb_theory"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/qcvv/xeb_theory.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a> </td> </table>
try: import cirq except ImportError: print("installing cirq...") !pip install --quiet cirq import cirq print("installed cirq.")
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Cross Entropy Benchmarking Theory Cross entropy benchmarking uses the properties of random quantum programs to determine the fidelity of a wide variety of circuits. When applied to circuits with many qubits, XEB can characterize the performance of a large device. When applied to deep, two-qubit circuits it can be used to accurately characterize a two-qubit interaction potentially leading to better calibration.
# Standard imports import numpy as np from cirq.contrib.svg import SVGCircuit
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
The action of random circuits with noise An XEB experiment collects data from the execution of random circuits subject to noise. The effect of applying a random circuit with unitary $U$ is modeled as $U$ followed by a depolarizing channel. The result is that the initial state $|𝜓⟩$ is mapped to a density matrix $ρ_U$ as follows: $$ |𝜓⟩ → ρ_U = f |𝜓_U⟩⟨𝜓_U| + (1 - f) I / D $$ where $|𝜓_U⟩ = U|𝜓⟩$, $D$ is the dimension of the Hilbert space, $I / D$ is the maximally mixed state, and $f$ is the fidelity with which the circuit is applied. For this model to be accurate, we require $U$ to be a random circuit that scrambles errors. In practice, we use a particular circuit ansatz consisting of random single-qubit rotations interleaved with entangling gates. Possible single-qubit rotations Geometrically, we choose 8 axes in the XY plane to perform a quarter-turn (pi/2 rotation) around. This is followed by a rotation around the Z axis of 8 different magnitudes. These 8*8 possible rotations are chosen randomly when constructing the circuit.
exponents = np.linspace(0, 7/4, 8) exponents import itertools SINGLE_QUBIT_GATES = [ cirq.PhasedXZGate(x_exponent=0.5, z_exponent=z, axis_phase_exponent=a) for a, z in itertools.product(exponents, repeat=2) ] SINGLE_QUBIT_GATES[:10], '...'
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
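The depolarizing model from the start of this section can be sanity-checked numerically. This sketch builds $ρ_U$ for a random two-qubit state and verifies that $Tr(ρ_U O_U) = f ⟨𝜓_U|O_U|𝜓_U⟩ + (1 - f) Tr(O_U / D)$ holds for a diagonal observable; the fidelity value and the observable are arbitrary choices made only for the check:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4     # two-qubit Hilbert space dimension
f = 0.9   # assumed circuit fidelity, chosen for the check

# Random normalized state |psi_U>
psi = rng.normal(size=D) + 1j * rng.normal(size=D)
psi /= np.linalg.norm(psi)

# rho_U = f |psi_U><psi_U| + (1 - f) I / D
rho = f * np.outer(psi, psi.conj()) + (1 - f) * np.eye(D) / D

# An arbitrary observable diagonal in the computational basis
O = np.diag(rng.normal(size=D))

lhs = np.trace(rho @ O).real
rhs = f * (psi.conj() @ O @ psi).real + (1 - f) * np.trace(O) / D
print(lhs, rhs)  # the two sides agree
```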
Random circuit We use cirq.experiments.random_quantum_circuit_generation.random_rotations_between_two_qubit_circuit to generate a random two-qubit circuit. Note that we provide the possible single-qubit rotations from above and declare that our two-qubit operation is the $\sqrt{i\mathrm{SWAP}}$ gate.
import cirq_google as cg from cirq.experiments import random_quantum_circuit_generation as rqcg q0, q1 = cirq.LineQubit.range(2) circuit = rqcg.random_rotations_between_two_qubit_circuit( q0, q1, depth=4, two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b), single_qubit_gates=SINGLE_QUBIT_GATES ) SVGCircuit(circuit)
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Estimating fidelity Let $O_U$ be an observable that is diagonal in the computational basis. Then the expectation value of $O_U$ on $ρ_U$ is given by $$ Tr(ρ_U O_U) = f ⟨𝜓_U|O_U|𝜓_U⟩ + (1 - f) Tr(O_U / D). $$ This equation shows how $f$ can be estimated, since $Tr(ρ_U O_U)$ can be estimated from experimental data, and $⟨𝜓_U|O_U|𝜓_U⟩$ and $Tr(O_U / D)$ can be computed. Let $e_U = ⟨𝜓_U|O_U|𝜓_U⟩$, $u_U = Tr(O_U / D)$, and $m_U$ denote the experimental estimate of $Tr(ρ_U O_U)$. We can write the following linear equation (equivalent to the expression above): $$ m_U = f e_U + (1-f) u_U \\ m_U - u_U = f (e_U - u_U) $$
# Make long circuits (which we will truncate) MAX_DEPTH = 100 N_CIRCUITS = 10 circuits = [ rqcg.random_rotations_between_two_qubit_circuit( q0, q1, depth=MAX_DEPTH, two_qubit_op_factory=lambda a, b, _: cirq.SQRT_ISWAP(a, b), single_qubit_gates=SINGLE_QUBIT_GATES) for _ in range(N_CIRCUITS) ] # We will truncate to these lengths cycle_depths = np.arange(1, MAX_DEPTH + 1, 9) cycle_depths
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Execute circuits Cross entropy benchmarking requires sampled bitstrings from the device being benchmarked as well as the true probabilities from a noiseless simulation. We compute these quantities for every (cycle_depth, circuit) combination.
pure_sim = cirq.Simulator() # Pauli Error. If there is an error, it is either X, Y, or Z # with probability E_PAULI / 3 E_PAULI = 5e-3 noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(E_PAULI)) # These two qubit circuits have 2^2 = 4 probabilities DIM = 4 records = [] for cycle_depth in cycle_depths: for circuit_i, circuit in enumerate(circuits): # Truncate the long circuit to the requested cycle_depth circuit_depth = cycle_depth * 2 + 1 assert circuit_depth <= len(circuit) trunc_circuit = circuit[:circuit_depth] # Pure-state simulation psi = pure_sim.simulate(trunc_circuit).final_state_vector pure_probs = np.abs(psi)**2 # Noisy execution meas_circuit = trunc_circuit + cirq.measure(q0, q1) sampled_inds = noisy_sim.sample(meas_circuit, repetitions=10_000).values[:,0] sampled_probs = np.bincount(sampled_inds, minlength=DIM) / len(sampled_inds) # Save the results records += [{ 'circuit_i': circuit_i, 'cycle_depth': cycle_depth, 'circuit_depth': circuit_depth, 'pure_probs': pure_probs, 'sampled_probs': sampled_probs, }] print('.', end='', flush=True)
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
What's the observable What is $O_U$? Let's define it to be the observable that gives the sum of all probabilities, i.e. $$ O_U |x \rangle = p(x) |x \rangle $$ for any bitstring $x$. We can use this to derive expressions for our quantities of interest. $$ e_U = \langle \psi_U | O_U | \psi_U \rangle \\ = \sum_x a_x^* \langle x | O_U | x \rangle a_x \\ = \sum_x p(x) \langle x | O_U | x \rangle \\ = \sum_x p(x) p(x) $$ $e_U$ is simply the sum of squared ideal probabilities. $u_U$ is a normalizing factor that only depends on the operator. Since this operator has the true probabilities in the definition, they show up here anyways. $$ u_U = \mathrm{Tr}[O_U / D] \\ = 1/D \sum_x \langle x | O_U | x \rangle \\ = 1/D \sum_x p(x) $$ For the measured values, we use the definition of an expectation value $$ \langle f(x) \rangle_\rho = \sum_x p(x) f(x) $$ It becomes notationally confusing because remember: our operator on basis states returns the ideal probability of that basis state $p(x)$. The probability of observing a measured basis state is estimated from samples and denoted $p_\mathrm{est}(x)$ here. $$ m_U = \mathrm{Tr}[\rho_U O_U] \\ = \langle O_U \rangle_{\rho_U} = \sum_{x} p_\mathrm{est}(x) p(x) $$
for record in records: e_u = np.sum(record['pure_probs']**2) u_u = np.sum(record['pure_probs']) / DIM m_u = np.sum(record['pure_probs'] * record['sampled_probs']) record.update( e_u=e_u, u_u=u_u, m_u=m_u, )
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Remember: $$ m_U - u_U = f (e_U - u_U) $$ We estimate f by performing least squares minimization of the sum of squared residuals $$ \sum_U \left(f (e_U - u_U) - (m_U - u_U)\right)^2 $$ over different random circuits. The solution to the least squares problem is given by $$ f = \frac{\sum_U (m_U - u_U)(e_U - u_U)}{\sum_U (e_U - u_U)^2} $$
import pandas as pd df = pd.DataFrame(records) df['y'] = df['m_u'] - df['u_u'] df['x'] = df['e_u'] - df['u_u'] df['numerator'] = df['x'] * df['y'] df['denominator'] = df['x'] ** 2 df.head()
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
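The closed-form estimator can be exercised on synthetic (x, y) pairs generated with a known fidelity. A minimal sketch (the true f and the noise level are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
f_true = 0.8

# Synthetic x = e_U - u_U values and noisy y = m_U - u_U measurements
x = rng.uniform(0.05, 0.25, size=50)
y = f_true * x + rng.normal(scale=1e-3, size=50)

# Least-squares slope through the origin:
# f = sum((m_U - u_U)(e_U - u_U)) / sum((e_U - u_U)^2)
f_est = np.sum(x * y) / np.sum(x ** 2)
print(f_est)  # close to f_true = 0.8
```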
Fit We'll plot the linear relationship and least-squares fit while we transform the raw DataFrame into one containing fidelities.
%matplotlib inline from matplotlib import pyplot as plt # Color by cycle depth import seaborn as sns colors = sns.cubehelix_palette(n_colors=len(cycle_depths)) colors = {k: colors[i] for i, k in enumerate(cycle_depths)} _lines = [] def per_cycle_depth(df): fid_lsq = df['numerator'].sum() / df['denominator'].sum() cycle_depth = df.name xx = np.linspace(0, df['x'].max()) l, = plt.plot(xx, fid_lsq*xx, color=colors[cycle_depth]) plt.scatter(df['x'], df['y'], color=colors[cycle_depth]) global _lines _lines += [l] # for legend return pd.Series({'fidelity': fid_lsq}) fids = df.groupby('cycle_depth').apply(per_cycle_depth).reset_index() plt.xlabel(r'$e_U - u_U$', fontsize=18) plt.ylabel(r'$m_U - u_U$', fontsize=18) _lines = np.asarray(_lines) plt.legend(_lines[[0,-1]], cycle_depths[[0,-1]], loc='best', title='Cycle depth') plt.tight_layout()
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Fidelities
plt.plot( fids['cycle_depth'], fids['fidelity'], marker='o', label='Least Squares') xx = np.linspace(0, fids['cycle_depth'].max()) # In XEB, we extract the depolarizing fidelity, which is # related to (but not equal to) the Pauli error. # For the latter, an error involves doing X, Y, or Z with E_PAULI/3 # but for the former, an error involves doing I, X, Y, or Z with e_depol/4 e_depol = E_PAULI / (1 - 1/DIM**2) # The additional factor of four in the exponent is because each layer # involves two moments of two qubits (so each layer has four applications # of a single-qubit single-moment depolarizing channel). plt.plot(xx, (1-e_depol)**(4*xx), label=r'$(1-\mathrm{e\_depol})^{4d}$') plt.ylabel('Circuit fidelity', fontsize=18) plt.xlabel('Cycle Depth $d$', fontsize=18) plt.legend(loc='best') plt.yscale('log') plt.tight_layout() from cirq.experiments.xeb_fitting import fit_exponential_decays # Ordinarily, we'd use this function to fit curves for multiple pairs. # We add our qubit pair as a column. fids['pair'] = [(q0, q1)] * len(fids) fit_df = fit_exponential_decays(fids) fit_row = fit_df.iloc[0] print(f"Noise model fidelity: {(1-e_depol)**4:.3e}") print(f"XEB layer fidelity: {fit_row['layer_fid']:.3e} +- {fit_row['layer_fid_std']:.2e}")
docs/qcvv/xeb_theory.ipynb
quantumlib/Cirq
apache-2.0
Set up data augmentation The first thing we'll do is set up some data augmentation transformations to use during training. We'll use random crops and flips to train the model, and apply basic normalization at both training time and test time. To accomplish these transformations, we use standard torchvision transforms.
normalize = transforms.Normalize(mean=[0.5071, 0.4867, 0.4408], std=[0.2675, 0.2565, 0.2761]) aug_trans = [transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip()] common_trans = [transforms.ToTensor(), normalize] train_compose = transforms.Compose(aug_trans + common_trans) test_compose = transforms.Compose(common_trans)
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Create DataLoaders Next, we create dataloaders for the selected dataset using the built-in torchvision datasets. The cell below will download either the cifar10 or cifar100 dataset, depending on which choice is made. The default here is cifar10; training is just as fast on either dataset. After downloading the datasets, we create standard torch.utils.data.DataLoaders for each dataset that we will be using to get minibatches of augmented data.
dataset = "cifar10" if ('CI' in os.environ): # this is for running the notebook in our testing framework train_set = torch.utils.data.TensorDataset(torch.randn(8, 3, 32, 32), torch.rand(8).round().long()) test_set = torch.utils.data.TensorDataset(torch.randn(4, 3, 32, 32), torch.rand(4).round().long()) train_loader = torch.utils.data.DataLoader(train_set, batch_size=4, shuffle=True) test_loader = torch.utils.data.DataLoader(test_set, batch_size=2, shuffle=False) num_classes = 2 elif dataset == 'cifar10': train_set = dset.CIFAR10('data', train=True, transform=train_compose, download=True) test_set = dset.CIFAR10('data', train=False, transform=test_compose) train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True) test_loader = torch.utils.data.DataLoader(test_set, batch_size=256, shuffle=False) num_classes = 10 elif dataset == 'cifar100': train_set = dset.CIFAR100('data', train=True, transform=train_compose, download=True) test_set = dset.CIFAR100('data', train=False, transform=test_compose) train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True) test_loader = torch.utils.data.DataLoader(test_set, batch_size=256, shuffle=False) num_classes = 100 else: raise RuntimeError('dataset must be one of "cifar100" or "cifar10"')
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Creating the DenseNet Model With the data loaded, we can move on to defining our DKL model. A DKL model consists of three components: the neural network, the Gaussian process layer used after the neural network, and the Softmax likelihood. The first step is defining the neural network architecture. To do this, we use a slightly modified version of the DenseNet available in the standard PyTorch package. Specifically, we modify it to remove the softmax layer, since we'll only be needing the final features extracted from the neural network.
from densenet import DenseNet class DenseNetFeatureExtractor(DenseNet): def forward(self, x): features = self.features(x) out = F.relu(features, inplace=True) out = F.avg_pool2d(out, kernel_size=self.avgpool_size).view(features.size(0), -1) return out feature_extractor = DenseNetFeatureExtractor(block_config=(6, 6, 6), num_classes=num_classes) num_features = feature_extractor.classifier.in_features
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Creating the GP Layer In the next cell, we create the layer of Gaussian process models that are called after the neural network. In this case, we'll be using one GP per feature, as in the SV-DKL paper. The outputs of these Gaussian processes will then be mixed in the softmax likelihood.
class GaussianProcessLayer(gpytorch.models.ApproximateGP): def __init__(self, num_dim, grid_bounds=(-10., 10.), grid_size=64): variational_distribution = gpytorch.variational.CholeskyVariationalDistribution( num_inducing_points=grid_size, batch_shape=torch.Size([num_dim]) ) # Our base variational strategy is a GridInterpolationVariationalStrategy, # which places variational inducing points on a Grid # We wrap it with a IndependentMultitaskVariationalStrategy so that our output is a vector-valued GP variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy( gpytorch.variational.GridInterpolationVariationalStrategy( self, grid_size=grid_size, grid_bounds=[grid_bounds], variational_distribution=variational_distribution, ), num_tasks=num_dim, ) super().__init__(variational_strategy) self.covar_module = gpytorch.kernels.ScaleKernel( gpytorch.kernels.RBFKernel( lengthscale_prior=gpytorch.priors.SmoothedBoxPrior( math.exp(-1), math.exp(1), sigma=0.1, transform=torch.exp ) ) ) self.mean_module = gpytorch.means.ConstantMean() self.grid_bounds = grid_bounds def forward(self, x): mean = self.mean_module(x) covar = self.covar_module(x) return gpytorch.distributions.MultivariateNormal(mean, covar)
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Creating the full SVDKL Model With both the DenseNet feature extractor and GP layer defined, we can put them together in a single module that simply calls one and then the other, much like building any Sequential neural network in PyTorch. This completes defining our DKL model.
class DKLModel(gpytorch.Module): def __init__(self, feature_extractor, num_dim, grid_bounds=(-10., 10.)): super(DKLModel, self).__init__() self.feature_extractor = feature_extractor self.gp_layer = GaussianProcessLayer(num_dim=num_dim, grid_bounds=grid_bounds) self.grid_bounds = grid_bounds self.num_dim = num_dim def forward(self, x): features = self.feature_extractor(x) features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1]) # This next line makes it so that we learn a GP for each feature features = features.transpose(-1, -2).unsqueeze(-1) res = self.gp_layer(features) return res model = DKLModel(feature_extractor, num_dim=num_features) likelihood = gpytorch.likelihoods.SoftmaxLikelihood(num_features=model.num_dim, num_classes=num_classes) # If you run this example without CUDA, I hope you like waiting! if torch.cuda.is_available(): model = model.cuda() likelihood = likelihood.cuda()
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
Defining Training and Testing Code Next, we define the basic optimization loop and testing code. This code is entirely analogous to the standard PyTorch training loop. We create a torch.optim.SGD optimizer with the parameters of the neural network, to which we apply the standard amount of weight decay suggested in the paper; the parameters of the Gaussian process, for which we omit weight decay, as L2 regularization on top of variational inference is not necessary; and the mixing parameters of the Softmax likelihood. We use the standard learning rate schedule from the paper, where we decrease the learning rate by a factor of ten 50% of the way through training, and again at 75% of the way through training.
n_epochs = 1 lr = 0.1 optimizer = SGD([ {'params': model.feature_extractor.parameters(), 'weight_decay': 1e-4}, {'params': model.gp_layer.hyperparameters(), 'lr': lr * 0.01}, {'params': model.gp_layer.variational_parameters()}, {'params': likelihood.parameters()}, ], lr=lr, momentum=0.9, nesterov=True, weight_decay=0) scheduler = MultiStepLR(optimizer, milestones=[0.5 * n_epochs, 0.75 * n_epochs], gamma=0.1) mll = gpytorch.mlls.VariationalELBO(likelihood, model.gp_layer, num_data=len(train_loader.dataset)) def train(epoch): model.train() likelihood.train() minibatch_iter = tqdm.notebook.tqdm(train_loader, desc=f"(Epoch {epoch}) Minibatch") with gpytorch.settings.num_likelihood_samples(8): for data, target in minibatch_iter: if torch.cuda.is_available(): data, target = data.cuda(), target.cuda() optimizer.zero_grad() output = model(data) loss = -mll(output, target) loss.backward() optimizer.step() minibatch_iter.set_postfix(loss=loss.item()) def test(): model.eval() likelihood.eval() correct = 0 with torch.no_grad(), gpytorch.settings.num_likelihood_samples(16): for data, target in test_loader: if torch.cuda.is_available(): data, target = data.cuda(), target.cuda() output = likelihood(model(data)) # This gives us 16 samples from the predictive distribution pred = output.probs.mean(0).argmax(-1) # Taking the mean over all of the sample we've drawn correct += pred.eq(target.view_as(pred)).cpu().sum() print('Test set: Accuracy: {}/{} ({}%)'.format( correct, len(test_loader.dataset), 100. * correct / float(len(test_loader.dataset)) ))
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
We are now ready to train the model. At the end of each epoch we report the current test accuracy, and we save a checkpoint of the model to a file.
for epoch in range(1, n_epochs + 1): with gpytorch.settings.use_toeplitz(False): train(epoch) test() scheduler.step() state_dict = model.state_dict() likelihood_state_dict = likelihood.state_dict() torch.save({'model': state_dict, 'likelihood': likelihood_state_dict}, 'dkl_cifar_checkpoint.dat')
examples/06_PyTorch_NN_Integration_DKL/Deep_Kernel_Learning_DenseNet_CIFAR_Tutorial.ipynb
jrg365/gpytorch
mit
.. _tut_viz_raw: Visualize Raw data
import os.path as op import mne data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample') raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif')) events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The visualization module (:mod:mne.viz) contains all the plotting functions that work in combination with MNE data structures. Usually the easiest way to use them is to call a method of the data container. All of the plotting method names start with plot. If you're using the IPython console, you can just write raw.plot and ask the interpreter for suggestions with the Tab key. To visually inspect your raw data, you can use the Python equivalent of mne_browse_raw.
raw.plot(block=True, events=events)
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The channels are color coded by channel type. Generally MEG channels are colored in different shades of blue, whereas EEG channels are black. The channels are also sorted by channel type by default. If you want to use a custom order for the channels, you can use order parameter of :func:raw.plot. The scrollbar on right side of the browser window also tells us that two of the channels are marked as bad. Bad channels are color coded gray. By clicking the lines or channel names on the left, you can mark or unmark a bad channel interactively. You can use +/- keys to adjust the scale (also = works for magnifying the data). Note that the initial scaling factors can be set with parameter scalings. If you don't know the scaling factor for channels, you can automatically set them by passing scalings='auto'. With pageup/pagedown and home/end keys you can adjust the amount of data viewed at once. To see all the interactive features, hit ? or click help in the lower left corner of the browser window. We read the events from a file and passed it as a parameter when calling the method. The events are plotted as vertical lines so you can see how they align with the raw data. We can check where the channels reside with plot_sensors. Notice that this method (along with many other MNE plotting functions) is callable using any MNE data container where the channel information is available.
raw.plot_sensors(kind='3d', ch_type='mag')
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now let's add some ssp projectors to the raw data. Here we read them from a file and plot them.
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif')) raw.add_proj(projs) raw.plot_projs_topomap()
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Plotting channel-wise power spectra is just as easy. The layout is inferred from the data by default when plotting topo plots. This works for most data, but it is also possible to define the layouts by hand. Here we select a layout with only magnetometer channels and plot it. Then we plot the channel-wise spectra of the first 30 seconds of the data.
layout = mne.channels.read_layout('Vectorview-mag') layout.plot() raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
0.12/_downloads/plot_visualize_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Next, we will set up basic factories for Inception.
def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0), name=None, suffix=''): conv = mx.symbol.Convolution(data=data, num_filter=num_filter, kernel=kernel, stride=stride, pad=pad, name='conv_%s%s' %(name, suffix)) bn = mx.symbol.BatchNorm(data=conv, name='bn_%s%s' %(name, suffix)) act = mx.symbol.LeakyReLU(data=bn, act_type='rrelu', name='rrelu_%s%s' %(name, suffix)) return act def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3, pool, proj, name): # 1x1 c1x1 = ConvFactory(data=data, num_filter=num_1x1, kernel=(1, 1), name=('%s_1x1' % name)) # 3x3 reduce + 3x3 c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce') c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), name=('%s_3x3' % name)) # double 3x3 reduce + double 3x3 cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), name=('%s_double_3x3' % name), suffix='_reduce') cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_0' % name)) cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), name=('%s_double_3x3_1' % name)) # pool + proj pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(1, 1), pad=(1, 1), pool_type=pool, name=('%s_pool_%s_pool' % (pool, name))) cproj = ConvFactory(data=pooling, num_filter=proj, kernel=(1, 1), name=('%s_proj' % name)) # concat concat = mx.symbol.Concat(*[c1x1, c3x3, cd3x3, cproj], name='ch_concat_%s_chconcat' % name) return concat def InceptionFactoryB(data, num_3x3red, num_3x3, num_d3x3red, num_d3x3, name): # 3x3 reduce + 3x3 c3x3r = ConvFactory(data=data, num_filter=num_3x3red, kernel=(1, 1), name=('%s_3x3' % name), suffix='_reduce') c3x3 = ConvFactory(data=c3x3r, num_filter=num_3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_3x3' % name)) # double 3x3 reduce + double 3x3 cd3x3r = ConvFactory(data=data, num_filter=num_d3x3red, kernel=(1, 1), 
name=('%s_double_3x3' % name), suffix='_reduce') cd3x3 = ConvFactory(data=cd3x3r, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(1, 1), name=('%s_double_3x3_0' % name)) cd3x3 = ConvFactory(data=cd3x3, num_filter=num_d3x3, kernel=(3, 3), pad=(1, 1), stride=(2, 2), name=('%s_double_3x3_1' % name)) # pool + proj pooling = mx.symbol.Pooling(data=data, kernel=(3, 3), stride=(2, 2), pool_type="max", name=('max_pool_%s_pool' % name)) # concat concat = mx.symbol.Concat(*[c3x3, cd3x3, pooling], name='ch_concat_%s_chconcat' % name) return concat
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Build Network by using Factories
def inception(nhidden, grad_scale): # data data = mx.symbol.Variable(name="data") # stage 2 in3a = InceptionFactoryA(data, 64, 64, 64, 64, 96, "avg", 32, '3a') in3b = InceptionFactoryA(in3a, 64, 64, 96, 64, 96, "avg", 64, '3b') in3c = InceptionFactoryB(in3b, 128, 160, 64, 96, '3c') # stage 3 in4a = InceptionFactoryA(in3c, 224, 64, 96, 96, 128, "avg", 128, '4a') in4b = InceptionFactoryA(in4a, 192, 96, 128, 96, 128, "avg", 128, '4b') in4c = InceptionFactoryA(in4b, 160, 128, 160, 128, 160, "avg", 128, '4c') in4d = InceptionFactoryA(in4c, 96, 128, 192, 160, 192, "avg", 128, '4d') in4e = InceptionFactoryB(in4d, 128, 192, 192, 256, '4e') # stage 4 in5a = InceptionFactoryA(in4e, 352, 192, 320, 160, 224, "avg", 128, '5a') in5b = InceptionFactoryA(in5a, 352, 192, 320, 192, 224, "max", 128, '5b') # global avg pooling avg = mx.symbol.Pooling(data=in5b, kernel=(7, 7), stride=(1, 1), name="global_pool", pool_type='avg') # linear classifier flatten = mx.symbol.Flatten(data=avg, name='flatten') fc1 = mx.symbol.FullyConnected(data=flatten, num_hidden=nhidden, name='fc') softmax = mx.symbol.SoftmaxOutput(data=fc1, name='softmax') return softmax softmax = inception(100, 1.0)
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Make the data iterators. Note that we convert the original CIFAR-100 dataset into image format and then pack it into RecordIO, so that we can use our built-in image augmentation. For details about RecordIO, please refer to the RecordIO documentation.
batch_size = 64

train_dataiter = mx.io.ImageRecordIter(
    shuffle=True,
    path_imgrec="./data/train.rec",
    mean_img="./data/mean.bin",
    rand_crop=True,
    rand_mirror=True,
    data_shape=(3, 28, 28),
    batch_size=batch_size,
    prefetch_buffer=4,
    preprocess_threads=2)

test_dataiter = mx.io.ImageRecordIter(
    path_imgrec="./data/test.rec",
    mean_img="./data/mean.bin",
    rand_crop=False,
    rand_mirror=False,
    data_shape=(3, 28, 28),
    batch_size=batch_size,
    prefetch_buffer=4,
    preprocess_threads=2,
    round_batch=False)
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Make model
num_epoch = 38
model_prefix = "model/cifar_100"

softmax = inception(100, 1.0)

model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
                             learning_rate=0.05, momentum=0.9, wd=0.0001)
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Fit first stage
model.fit(X=train_dataiter,
          eval_data=test_dataiter,
          eval_metric="accuracy",
          batch_end_callback=mx.callback.Speedometer(batch_size, 200),
          epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Without reducing the learning rate, this model is able to achieve a state-of-the-art result. Let's reduce the learning rate and train a few more rounds.
# load params from the saved model
num_epoch = 38
model_prefix = "model/cifar_100"
tmp_model = mx.model.FeedForward.load(model_prefix, num_epoch)

# create a new model with the loaded params
num_epoch = 6
model_prefix = "model/cifar_100_stage2"
model = mx.model.FeedForward(ctx=mx.gpu(), symbol=softmax, num_epoch=num_epoch,
                             learning_rate=0.01, momentum=0.9, wd=0.0001,
                             arg_params=tmp_model.arg_params,
                             aux_params=tmp_model.aux_params)

model.fit(X=train_dataiter,
          eval_data=test_dataiter,
          eval_metric="accuracy",
          batch_end_callback=mx.callback.Speedometer(batch_size, 200),
          epoch_end_callback=mx.callback.do_checkpoint(model_prefix))
example/notebooks/moved-from-mxnet/cifar-100.ipynb
weleen/mxnet
apache-2.0
Now going the other way: converting DataFrames back into TicDat objects. Since pan_dat just has a collection of DataFrames attached to it, this is easy to show.
dat2 = input_schema.TicDat(foods=pan_dat.foods,
                           categories=pan_dat.categories,
                           nutrition_quantities=pan_dat.nutrition_quantities)
dat2.nutrition_quantities

input_schema._same_data(dat, dat2)
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
That said, if you drop the indices then things don't work, so be sure to set DataFrame indices correctly.
df = pan_dat.nutrition_quantities.reset_index(drop=False)
df
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
So, while this might appear to be OK, the current ticdat implementation does need the index.
input_schema.TicDat(foods=pan_dat.foods,
                    categories=pan_dat.categories,
                    nutrition_quantities=df)
examples/expert_section/notebooks/pandas_and_ticdat_2.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
<H2> Create normally distributed data</H2>
# fake some data
data = norm.rvs(loc=0.0, scale=1.0, size=150)

plt.hist(data, rwidth=0.85, facecolor='black');
plt.ylabel('Number of events');
plt.xlabel('Value');
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2> Obtain the fitting to a normal distribution</H2> <P> This is simply the mean and the standard deviation of the sample data</P>
mean, stdev = norm.fit(data)
print('Mean = %f, Stdev = %f' % (mean, stdev))
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
To adapt the normalized PDF of the normal distribution, we simply have to multiply every value by the area of the obtained histogram. <H2> Get the histogram data from NumPy</H2>
histdata = plt.hist(data, bins=10, color='black', rwidth=.85)  # we set 10 bins

counts, binedge = np.histogram(data, bins=10)
print(binedge)

# Get bin centers from bin edges
bincenter = [0.5 * (binedge[i] + binedge[i + 1]) for i in range(len(binedge) - 1)]
bincenter

binwidth = (max(bincenter) - min(bincenter)) / len(bincenter)
print(binwidth)
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
<H2> Scale the normal PDF to the area of the histogram</H2>
x = np.linspace(start=-4, stop=4, num=100)
mynorm = norm(loc=mean, scale=stdev)

# Scale the normal PDF to the area (binwidth * number of samples) of the histogram
myfit = mynorm.pdf(x) * binwidth * len(data)

# Plot everything together
plt.hist(data, bins=10, facecolor='white', histtype='stepfilled');
plt.fill(x, myfit, 'r', alpha=.5);
plt.ylabel('Number of observations');
plt.xlabel('Value');
Stochastic_systems/Fit_real_histogram.ipynb
JoseGuzman/myIPythonNotebooks
gpl-2.0
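The scaling argument can be checked without SciPy: the histogram's total area is `binwidth * len(data)`, so the scaled PDF should enclose approximately that same area. A minimal pure-Python sketch, with the sample size and bin width hard-coded for illustration:

```python
import math

def normal_pdf(x, mean, stdev):
    # Density of N(mean, stdev^2) at x.
    return math.exp(-0.5 * ((x - mean) / stdev) ** 2) / (stdev * math.sqrt(2 * math.pi))

n = 150          # number of observations, as in the notebook
binwidth = 0.5   # assumed bin width, for illustration only
mean, stdev = 0.0, 1.0

# Riemann sum of the scaled PDF over a wide interval should be close
# to the histogram area, binwidth * n.
step = 0.01
xs = [-6 + i * step for i in range(1200)]
area = sum(normal_pdf(x, mean, stdev) * binwidth * n * step for x in xs)

print(round(area, 2))  # approximately binwidth * n = 75.0
```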
Loading up the data of the LCS Spring 2016 tournament.

Features:

- MPB/MPP = Match Points Blue/Purple
- MR = Match Result (B = Blue won, P = Purple won)
- TRB/TRP = Team Rank Blue/Purple
- FRR = First Round Result (B/P)
- RB/RP = Rating of team Blue/Purple
- W10B/W10P = Wins in the last 10 months by team Blue/Purple
- L10B/L10P = Losses in the last 10 months by team Blue/Purple
- HWB/HWP = Historic Wins by Blue/Purple against this opponent
- GBB/GBP = GosuBet on Blue/Purple
- BO = Best of 1, 3 or 5
- url = URL of the match
dta = pd.read_csv('data/lol-2016/lcs-s-16.csv', parse_dates=[1])
dta
Odds to win.ipynb
Scoppio/League-of-Odds
mit
Separating the data into training and test sets
train_idx = np.array(dta.data < '2016-02-27')   # year-month-day
test_idx = np.array(dta.data >= '2016-02-27')   # year-month-day

results_train = np.array(dta.MR[train_idx])
results_test = np.array(dta.MR[test_idx])

print("Training set:", len(results_train), "Test set:", len(results_test))
dta[test_idx]
Odds to win.ipynb
Scoppio/League-of-Odds
mit
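The date cutoff produces two boolean masks: matches before 2016-02-27 go into training and the rest into test. The same masking logic can be sketched in plain Python, with made-up toy matches instead of the real LCS data:

```python
from datetime import date

# Toy matches: a date and a match result (B = Blue won, P = Purple won).
matches = [
    {"data": date(2016, 1, 15), "MR": "B"},
    {"data": date(2016, 2, 10), "MR": "P"},
    {"data": date(2016, 3, 5),  "MR": "B"},
]
cutoff = date(2016, 2, 27)

# Everything strictly before the cutoff trains; the rest tests.
results_train = [m["MR"] for m in matches if m["data"] < cutoff]
results_test = [m["MR"] for m in matches if m["data"] >= cutoff]

print(len(results_train), len(results_test))  # 2 1
```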
Now the feature columns are set, so we can use them to train the machine-learning algorithm.
feature_columns = ['TRB', 'TRP', 'RB', 'RP', 'W10B', 'L10B',
                   'W10P', 'L10P', 'HWB', 'HWP', 'GBB', 'GBP']
Odds to win.ipynb
Scoppio/League-of-Odds
mit
I don't really know if I needed this many manipulations; probably not. I just needed to concatenate train_arrays and test_arrays in a single place. If you know of a faster way, please point it out to me.
# Column numbers of the features for the two outcomes
cidx_home = [i for i, col in enumerate(dta.columns) if col[-1] in 'B' and col in feature_columns]
cidx_away = [i for i, col in enumerate(dta.columns) if col[-1] in 'P' and col in feature_columns]

# The two feature matrices for training
feature_train_home = dta.ix[train_idx, cidx_home].as_matrix()
feature_train_away = dta.ix[train_idx, cidx_away].as_matrix()

# The two feature matrices for testing
feature_test_home = dta.ix[test_idx, cidx_home].as_matrix()
feature_test_away = dta.ix[test_idx, cidx_away].as_matrix()

train_arrays = [feature_train_home, feature_train_away]
test_arrays = [feature_test_home, feature_test_away]

# merge the arrays
feature_train = np.concatenate(train_arrays, axis=1)
feature_test = np.concatenate(test_arrays, axis=1)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
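The concatenation itself is simple: for each match, the blue-side and purple-side feature vectors are glued into one row. A pure-Python sketch of the same idea, with toy numbers rather than the real LCS features:

```python
# Toy per-side features: [team rank, rating] for three matches.
features_blue = [[1, 1500], [4, 1320], [2, 1480]]
features_purple = [[3, 1400], [1, 1510], [5, 1250]]

# Row-wise concatenation, equivalent to np.concatenate([...], axis=1).
features = [b + p for b, p in zip(features_blue, features_purple)]

print(features[0])  # [1, 1500, 3, 1400]
```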
Now we are finally ready to use the data to train the algorithm. First an AdaBoostClassifier object is created, and here we need to supply a set of arguments for it to work properly. The first argument is the classification algorithm to use, which is the DecisionTreeClassifier algorithm. I have chosen to supply this algorithm with the max_depth=3 argument, which constrains the training algorithm to apply no more than three rules before making a prediction. The n_estimators argument tells the algorithm how many decision trees it should fit, and the learning_rate argument tells the algorithm how much the misclassified matches are up-weighted in the next round of decision tree fitting. These two values are usually something you can experiment with, since there is no definite rule on how they should be set. The rule of thumb is that the lower the learning rate, the more estimators you need. Actually there is more to it: if the learning rate is too large, it will miss the local minimum; if it is too low, it will take forever to reach the minimum. The last argument, random_state, is a random seed. It is necessary if you want to reproduce the model fitting exactly every time; if it is not specified you will end up with a slightly different trained algorithm each time you fit. At last the algorithm is fitted using the fit() method, which is supplied with the feature matrix and the match results.
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

adb = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=3),
    n_estimators=1000,
    learning_rate=0.4,
    random_state=42)

adb = adb.fit(feature_train, results_train)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
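The up-weighting that `learning_rate` controls can be illustrated with the core discrete-AdaBoost weight update: misclassified samples get multiplied by `exp(alpha)`, where `alpha` shrinks with the learning rate, so a smaller learning rate makes each round's correction gentler. A toy sketch of one boosting round (not the sklearn internals):

```python
import math

def reweight(weights, correct, learning_rate):
    # One boosting round: compute the weighted error of the weak learner,
    # then up-weight the samples it got wrong and renormalize.
    err = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = learning_rate * math.log((1 - err) / err)
    new_w = [w if c else w * math.exp(alpha) for w, c in zip(weights, correct)]
    total = sum(new_w)
    return [w / total for w in new_w]

weights = [0.25, 0.25, 0.25, 0.25]
correct = [True, True, True, False]  # the weak learner misses one sample
weights = reweight(weights, correct, learning_rate=0.4)

print(weights[3] > weights[0])  # the misclassified sample now weighs more
```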
We can now see how well the trained algorithm fits the training data.
import sklearn.metrics as skm
from sklearn.cross_validation import train_test_split
from sklearn.metrics import confusion_matrix

training_pred = adb.predict(feature_train)
print(skm.confusion_matrix(list(training_pred), list(results_train)))
Odds to win.ipynb
Scoppio/League-of-Odds
mit
We see that no matches in the training data are misclassified. Usually this is not very good, because it may imply that we have an overfitted model with poor predictive power on new data. This problem may be solved by tweaking features and adding more data. Yes, it may sound strange, but something that predicts the training data with 100% accuracy may actually be a bad thing. Let's try to predict the outcomes of LCS Spring 2016 from March to the end of the tournament.
import matplotlib.pyplot as plt

test_pred = adb.predict(feature_test)

labels = ['Blue Team', 'Purple Team']
y_test = results_test
y_pred = test_pred

def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(labels))
    plt.xticks(tick_marks, labels, rotation=45)
    plt.yticks(tick_marks, labels)
    plt.tight_layout()
    plt.ylabel('True label')
    plt.xlabel('Predicted label')

# Compute confusion matrix
cm = confusion_matrix(y_test, y_pred)
np.set_printoptions(precision=2)
print('Confusion matrix, without normalization')
print(cm)
plt.figure()
plot_confusion_matrix(cm)

# Normalize the confusion matrix by row (i.e. by the number of samples
# in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print('Normalized confusion matrix')
print(cm_normalized)
plt.figure()
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')

plt.show()
Odds to win.ipynb
Scoppio/League-of-Odds
mit
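Row normalization divides each row of the confusion matrix by that true class's sample count, turning raw counts into per-class recall. The same computation in pure Python, on made-up toy labels rather than the real predictions:

```python
labels = ['B', 'P']
y_true = ['B', 'B', 'B', 'P', 'P', 'B']
y_pred = ['B', 'P', 'B', 'P', 'B', 'B']

# Count matrix: rows = true label, columns = predicted label.
cm = [[sum(1 for t, p in zip(y_true, y_pred) if t == ti and p == pi)
       for pi in labels] for ti in labels]

# Normalize each row by the number of true samples of that class.
cm_norm = [[c / sum(row) for c in row] for row in cm]

print(cm)       # [[3, 1], [1, 1]]
print(cm_norm)  # [[0.75, 0.25], [0.5, 0.5]]
```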
Scoring our model and visualizing the data

OK, the first thing we are going to do is compute a score that evaluates our model. There are many ways to evaluate it, but the simple score method shown here is good enough for the model we are dealing with (others would be F1 and accuracy). It is also important to be able to see the predictions in the data: to view each game result according to our model and compare it with the real outcome.
adb.score(feature_test, results_test)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
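The `score` method here is just accuracy: the fraction of test matches whose predicted winner matches the real one. The same number can be computed by hand; the breakdown below is a toy example chosen to reproduce the 19-of-30 (63.3%) figure discussed next, not the model's actual predictions:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true outcome.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy outcomes: 12 of 20 Blue wins and 7 of 10 Purple wins predicted
# correctly -> 19/30 correct overall.
y_true = ['B'] * 20 + ['P'] * 10
y_pred = ['B'] * 12 + ['P'] * 8 + ['P'] * 7 + ['B'] * 3

print(round(accuracy(y_true, y_pred), 3))  # 0.633
```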
A score of 0.633... is not that bad: a little better than a blind guess. To know whether your prediction model is good or not, all you have to do is compare it with blind guesses. In this case we have only two possible outcomes, a victory for team Purple or a victory for team Blue: 50-50! Now we have a model that correctly predicts 63.3% of the games. We saw in the confusion matrix that our model predicts many more victories for the Blue Team than for the Purple Team, which shows some bias towards one of the teams. This may be corrected by adding more features or maybe normalizing them (to be tried some other day). This bias is also what is hurting our model: it correctly predicts 60% of the games won by the Blue Team, but less than 70% of the games won by the Purple Team.

So, if we blind-guess "Blue Team wins" 100% of the time, will we score a higher correct rate than using the model? The answer should be "no", but we are dealing with a skill game, not a purely random one, so there must be important features left out that would make the prediction better (I'm thinking of adding the champions chosen by each team).

We have 20 victories by the Blue Team and 10 victories by the Purple Team, a total of 30 games.
Blue_Team_Win = 20
Purple_Team_Win = 10
total_games = Purple_Team_Win + Blue_Team_Win

All_Blue_Bets = (100 / total_games) * Blue_Team_Win
All_Purple_Bets = (100 / total_games) * Purple_Team_Win

print("Team Blue won:", All_Blue_Bets, "%")
print("Team Purple won:", All_Purple_Bets, "%")
Odds to win.ipynb
Scoppio/League-of-Odds
mit
63.3% does not seem so great anymore. As we can see, the majority-class win rate is higher than our model's accuracy. But we have to keep in mind that the model correctly predicts 70% of the Purple Team wins and 60% of the Blue Team wins. Unfortunately this amounts to only 63.3% of correct predictions: since the Purple Team wins so rarely (only 1 game in 3), it strongly affects the overall accuracy. I believe this has much more to do with pure chance in this tournament, so I should try different tournaments later.
test_data = dta[test_idx]
test_data
Odds to win.ipynb
Scoppio/League-of-Odds
mit
The PR column

Now we add the prediction column to the data (and drop the URL column because it takes too much screen space). The prediction column, named PR, shows the predicted outcome for each match.
pd.options.mode.chained_assignment = None

test_data['PR'] = np.array(adb.predict(feature_test))

try:
    test_data = test_data.drop('url', 1)
except:
    pass

test_data.sort('data', ascending=True)
Odds to win.ipynb
Scoppio/League-of-Odds
mit
3. Affine decomposition In order to obtain an affine decomposition, we recast the problem on a fixed, parameter independent, reference domain $\Omega$. We choose one characterized by $\mu_0=\mu_1=\mu_2=\mu_3=\mu_4=1$ and $\mu_5=0$, which we generate through the generate_mesh notebook provided in the data folder.
@PullBackFormsToReferenceDomain()
@AffineShapeParametrization("data/t_bypass_vertices_mapping.vmp")
class Stokes(StokesProblem):

    # Default initialization of members
    def __init__(self, V, **kwargs):
        # Call the standard initialization
        StokesProblem.__init__(self, V, **kwargs)
        # ... and also store FEniCS data structures for assembly
        assert "subdomains" in kwargs
        assert "boundaries" in kwargs
        self.subdomains, self.boundaries = kwargs["subdomains"], kwargs["boundaries"]
        up = TrialFunction(V)
        (self.u, self.p) = split(up)
        vq = TestFunction(V)
        (self.v, self.q) = split(vq)
        self.dx = Measure("dx")(subdomain_data=self.subdomains)
        self.ds = Measure("ds")(subdomain_data=self.boundaries)
        # ... as well as forcing terms
        self.f = Constant((0.0, -10.0))
        self.g = Constant(0.0)

    # Return custom problem name
    def name(self):
        return "Stokes1RB"

    # Return the lower bound for the inf-sup constant.
    def get_stability_factor_lower_bound(self):
        return 1.

    # Return theta multiplicative terms of the affine expansion of the problem.
    @compute_theta_for_supremizers
    def compute_theta(self, term):
        if term == "a":
            theta_a0 = 1.0
            return (theta_a0, )
        elif term in ("b", "bt"):
            theta_b0 = 1.0
            return (theta_b0, )
        elif term == "f":
            theta_f0 = 1.0
            return (theta_f0, )
        elif term == "g":
            theta_g0 = 1.0
            return (theta_g0, )
        else:
            raise ValueError("Invalid term for compute_theta().")

    # Return forms resulting from the discretization of the affine expansion of the problem operators.
    @assemble_operator_for_supremizers
    def assemble_operator(self, term):
        dx = self.dx
        if term == "a":
            u = self.u
            v = self.v
            a0 = inner(grad(u), grad(v)) * dx
            return (a0, )
        elif term == "b":
            u = self.u
            q = self.q
            b0 = - q * div(u) * dx
            return (b0, )
        elif term == "bt":
            p = self.p
            v = self.v
            bt0 = - p * div(v) * dx
            return (bt0, )
        elif term == "f":
            v = self.v
            f0 = inner(self.f, v) * dx
            return (f0, )
        elif term == "g":
            q = self.q
            g0 = self.g * q * dx
            return (g0, )
        elif term == "dirichlet_bc_u":
            bc0 = [DirichletBC(self.V.sub(0), Constant((0.0, 0.0)), self.boundaries, 3)]
            return (bc0, )
        elif term == "inner_product_u":
            u = self.u
            v = self.v
            x0 = inner(grad(u), grad(v)) * dx
            return (x0, )
        elif term == "inner_product_p":
            p = self.p
            q = self.q
            x0 = inner(p, q) * dx
            return (x0, )
        else:
            raise ValueError("Invalid term for assemble_operator().")
tutorials/12_stokes/tutorial_stokes_1_rb.ipynb
mathLab/RBniCS
lgpl-3.0
1. Definitions and basic concepts.

We will understand an algorithm to be a series of steps that pursue a specific goal. Intuitively we can relate it to a cooking recipe: a series of well-defined steps (leaving no room for confusion on the user's part) that must be carried out in a specific order to obtain a given result. In general, a good algorithm should have the following characteristics:

- It must not be ambiguous in its implementation for any user.
- It must properly define its input data (inputs).
- It must produce specific output data (outputs).
- It must run in a finite number of steps and hence in finite time. (See The Halting Problem.)

On the other hand, we will call code the materialization of a given algorithm, implemented in the appropriate syntax of a given programming language. So, to write good and efficient code, it should respect the ideas above: it must run in a finite number of steps, use the language's own structures appropriately, allow the input data to be entered and manipulated properly, and finally deliver the desired result.

A somewhat less structured idea is the concept of pseudo-code. We will understand it as an informal description of a given algorithm in a given programming language. However, it must not lose the essential characteristics of an algorithm, such as clarity of the steps and well-defined inputs and outputs, so that it still allows a direct implementation on the computer.

Once an algorithm has been implemented comes its review. To do this properly it is recommended to answer the following questions:

1. Does my algorithm work for all possible input data?
2. How long will my algorithm take to run? How much memory does it use on my computer?
3. Now that I know my algorithm works: can it be improved? Can I make it solve my problem faster?

2. A simple example: a program for prime numbers.

Next we study the problem of determining whether an integer $N \geq 2$ is prime or not. Consider the following numbers: 8191 (prime), 8192 (composite), 49979687 (prime), 49979689 (composite).

2.1 First program

Our first algorithm to determine whether a number is prime is: verify that no number between $2$ and $N-1$ is a divisor of $N$. The pseudo-code is:

1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by every number between $2$ and $N-1$. If no remainder is zero, then $N$ is prime. Otherwise, the number is not prime.

The code is the following:
N = int(raw_input("Enter the number you want to test "))

if N <= 1:
    print("N has to be greater than or equal to 2")
elif 2 <= N <= 3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    for i in range(2, N):
        if N % i == 0:
            es_primo = False
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is composite".format(N))
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
2.2 Second program

When using large numbers ($N = 10^7$, for example) we notice that the previous algorithm takes a long time to run, and that it iterates over all the numbers. However, as soon as a divisor is found we already know the number is not prime, so the algorithm can stop immediately. This is achieved with a single extra line: a break statement.

The algorithm to verify whether a number is not prime is: check whether some number between $2$ and $N-1$ is a divisor of $N$. The pseudo-code is:

1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by every number between $2$ and $N-1$. If any remainder is zero, then $N$ is divisible and is not prime.

The code is the following:
N = int(raw_input("Enter the number you want to test "))

if N <= 1:
    print("N has to be greater than or equal to 2")
elif 2 <= N <= 3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    for i in range(2, N):
        if N % i == 0:
            es_primo = False
            break
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is composite".format(N))
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
For large composite numbers, the execution now stops at the first divisor found. However, for large prime numbers it still takes quite a while.

2.3 Third program

One last trick we can use to verify more quickly whether a number is prime is to examine only part of the range of divisors. This is best explained with an example. Consider the number 16: its divisors are 2, 4 and 8. Since the number is composite, our previous algorithm quickly detects that 2 is a factor, stops, and reports that 16 is not prime. Now consider the number 17: it is prime and has no factors, so the algorithm examines the numbers 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16. However, it is only necessary to examine 2, 3, 4, 5 and 6, because for a factor greater than 6 to exist there would simultaneously have to be a factor smaller than 6 such that their product is 17. That is, it suffices to examine the smaller factors, where the bound is given by the integer closest to $\sqrt{17}$ or, in general, to $\sqrt{N}$.

The algorithm to verify whether a number is not prime is: check whether some integer between $2$ and $\sqrt{N}$ is a divisor of $N$. The pseudo-code is:

1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by every number between $2$ and $\sqrt{N}$. If any remainder is zero, then $N$ is divisible and is not prime.

The code is the following:
N = int(raw_input("Enter the number you want to test "))

if N <= 1:
    print("N has to be greater than or equal to 2")
elif 2 <= N <= 3:
    print("{0} is prime".format(N))
else:
    es_primo = True
    # note the + 1: the upper bound itself must be checked,
    # otherwise perfect squares of primes (e.g. 25) are missed
    for i in range(2, int(N**.5) + 1):
        if N % i == 0:
            es_primo = False
            break
    if es_primo:
        print("{0} is prime".format(N))
    else:
        print("{0} is not prime".format(N))
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
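The difference between the second and third versions can be made concrete by counting how many trial divisions each one performs on a prime input (a quick sketch for illustration, not a rigorous benchmark):

```python
def trial_divisions(N, limit):
    # Count iterations of the divisibility loop up to `limit`.
    count = 0
    for i in range(2, limit):
        count += 1
        if N % i == 0:
            break
    return count

N = 8191  # a prime from the examples above

full_range = trial_divisions(N, N)                 # version 2 checks N - 2 candidates
sqrt_range = trial_divisions(N, int(N**.5) + 1)    # version 3 checks about sqrt(N)

print(full_range, sqrt_range)  # 8189 89
```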
3. Measuring complexity

As we said before, after making an algorithm work, one of the most important questions in its review is measuring the time it needs to solve the problem. So the first question is: how can we measure the time an algorithm takes in relation to the size of the problem it solves? This is usually called time complexity or scalability.

However, it is important to note that measuring the time complexity of an algorithm can be a bit tricky, because: (a) the time the computer takes to perform different operations is in general heterogeneous, i.e. performing an addition is much faster than performing a division, and (b) different computers may run the same experiment in different amounts of time.

The standard notation for the complexity of an algorithm uses the capital letter O, so we can express a complexity as O("function"), which we can interpret as the number of operations being proportional to that function times some constant. The most important complexities are:

- O(1): constant time complexity; the number of operations does not really change much as the problem grows.
- O(log(n)): logarithmic complexity.
- O(n): linear complexity; doubling the size of the problem doubles the time required for its solution.
- O($n^2$): quadratic complexity; doubling the size of the problem quadruples the time required for its solution.
- O($2^n$) and, in general, O($a^n$) with $a>1$: exponential complexity.

For our previously developed algorithms:

1. The first algorithm has complexity O($N$): it always takes the same time.
2. The second algorithm has variable complexity: if the number is composite it takes O($1$) in the best case and O($\sqrt{N}$) in the worst case (like 25, or any prime squared), but if the number is prime it takes O($N$), since it checks every possible divisor.
3. The third algorithm takes at most O($\sqrt{N}$) in every case, since it only checks the smaller divisors.

Challenges A, B, C

Functions

When an algorithm is used very often, it is convenient to encapsulate its use in a function. It is worth pointing out that in computing a function does not have the same meaning as in mathematics. A function (in Python) is simply a sequence of actions executed on a set of input variables to produce a set of output variables.

Functions are defined as follows:

def nombre_de_funcion(variable_1, variable_2, variable_opcional_1=valor_por_defecto_1, ...):
    accion_1
    accion_2
    return valor_1, valor_2

Below are some examples.
def sin_inputs_ni_outputs():
    print "Hola mundo"

def sin_inputs():
    return "42"

def sin_outputs(a, b):
    print a
    print b

def con_input_y_output(a, b):
    return a + b

def con_tuti(a, b, c=2):
    return a + b * c
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function sin_inputs_ni_outputs runs without receiving input data and without producing output data (and it is not very useful).
sin_inputs_ni_outputs()
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function sin_inputs runs without receiving input data but does produce output data.
x = sin_inputs()
print("El sentido de la vida, el universo y todo lo demás es: " + x)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function con_input_y_output runs with input data and produces output data. Note that since Python does not declare data types, the same function can be applied to different data types, as long as the logic inside the function makes sense for them (and does not raise errors).
print con_input_y_output("uno", "dos")
print con_input_y_output(1, 2)
print con_input_y_output(1.0, 2)
print con_input_y_output(1.0, 2.0)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The function con_tuti runs with input data and default values, and produces output data.
print con_tuti(1, 2)
print con_tuti("uno", "dos")
print con_tuti(1, 2, c=3)
print con_tuti(1, 2, 3)
print con_tuti("uno", "dos", 3)
04_python_algoritmos_y_funciones/algoritmos_y_funciones.ipynb
usantamaria/ipynb_para_docencia
mit
The network represents friendship relations between users.
friendships = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4),
               (4, 5), (5, 6), (5, 7), (6, 8), (7, 8), (8, 9)]
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
We also added a friends list to each user's dict.
# give each user a friends list
for user in users:
    user["friends"] = []

# and populate it
for i, j in friendships:
    # this works because users[i] is the user whose id is i
    users[i]["friends"].append(users[j])  # add i as a friend of j
    users[j]["friends"].append(users[i])  # add j as a friend of i
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
When we looked at degree centrality in Chapter 1, it was a bit disappointing that the people we intuitively considered the key connectors were not the ones selected. One alternative metric is betweenness centrality, which assigns large values to people who frequently appear on the shortest paths between pairs of other people. Concretely, the betweenness centrality of node $i$ is the fraction of the shortest paths between every other pair of nodes $j, k$ that pass through $i$. Given any two people, we first have to find the shortest paths between them. This book uses the algorithm known as breadth-first search, which is less efficient but much easier to understand.
#
# Betweenness Centrality
#

def shortest_paths_from(from_user):
    # a dict holding all shortest paths from from_user to every other user
    shortest_paths_to = {from_user["id"]: [[]]}

    # a queue of (previous user, next user) pairs to check,
    # starting with every (from_user, friend of from_user) pair
    frontier = deque((from_user, friend) for friend in from_user["friends"])

    # keep going until the queue is empty
    while frontier:
        prev_user, user = frontier.popleft()  # remove the first user
        user_id = user["id"]                  # from the queue

        # because of how we add users to the queue,
        # we may already know some shortest paths to prev_user
        paths_to_prev = shortest_paths_to[prev_user["id"]]
        paths_via_prev = [path + [user_id] for path in paths_to_prev]

        # it's possible we already know a shortest path to user_id
        old_paths_to_here = shortest_paths_to.get(user_id, [])

        # what's the shortest path to here that we've seen so far?
        if old_paths_to_here:
            min_path_length = len(old_paths_to_here[0])
        else:
            min_path_length = float('inf')

        # only keep new paths that aren't too long
        new_paths_to_here = [path_via_prev
                             for path_via_prev in paths_via_prev
                             if len(path_via_prev) <= min_path_length
                             and path_via_prev not in old_paths_to_here]

        shortest_paths_to[user_id] = old_paths_to_here + new_paths_to_here

        # add never-before-seen neighbors to the frontier
        frontier.extend((user, friend)
                        for friend in user["friends"]
                        if friend["id"] not in shortest_paths_to)

    return shortest_paths_to
notebook/ch21_network_analysis.ipynb
rnder/data-science-from-scratch
unlicense
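The breadth-first-search core of the function above can be sketched more simply when only path lengths (not the paths themselves) are needed; on an adjacency-dict toy graph, independent of the users data above:

```python
from collections import deque

def bfs_shortest_path_lengths(adj, start):
    # Standard breadth-first search: distance from `start` to every
    # reachable node, visiting nodes in order of increasing distance.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in adj[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Tiny path graph 0 - 1 - 2: node 1 sits on the only 0-to-2 shortest path,
# which is exactly what gives it high betweenness.
adj = {0: [1], 1: [0, 2], 2: [1]}
distances = bfs_shortest_path_lengths(adj, 0)
print(distances)  # {0: 0, 1: 1, 2: 2}
```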