Dataset columns and string-length ranges:
- markdown: 0 – 1.02M
- code: 0 – 832k
- output: 0 – 1.02M
- license: 3 – 36
- path: 6 – 265
- repo_name: 6 – 127
Vertex constants

Set up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for the dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.

# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
AutoML constants

Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resou...

# Video Dataset type
DATA_SCHEMA = 'gs://google-cloud-aiplatform/schema/dataset/metadata/video_1.0.0.yaml'
# Video Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/video_action_recognition_io_format_1.0.0.yaml"
# Video Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schem...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Hardware Accelerators

Set the hardware accelerators (e.g., GPU), if any, for prediction.

Set the variables `DEPLOY_GPU`/`DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs alloca...

# Note: the environment variable name is misspelled ("DEPOLY") in the source,
# but it is used consistently, so the cell works as written.
if os.getenv("IS_TESTING_DEPOLY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80,
                               int(os.getenv("IS_TESTING_DEPOLY_GPU")))
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Container (Docker) image

For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.

Machine Type

Next, set the machine typ...

if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = 'n1-standard'

VCPU = '4'
DEPLOY_COMPUTE = MACHINE_TYPE + '-' + VCPU
print('Deploy machine type', DEPLOY_COMPUTE)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Tutorial

Now you are ready to start creating your own AutoML video action recognition model.

Set up clients

The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server. You will use different clients...

# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}

def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client

def create_model_client():
    client = aip.ModelServiceClient(client_options=client_optio...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Dataset

Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.

Create `Dataset` resource instance

Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:

1. Us...

TIMEOUT = 90

def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        dataset = aip.Dataset(display_name=name, metadata_schema_uri=schema, labels=labels)
        operation = clients['dataset'].create_dataset(parent=PARENT, dataset=dataset)
        print("Long runni...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
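The helper above is cut off mid-`try`; the next cell reads `result.name`, so the helper presumably blocks on the long-running operation and returns its result. A sketch of the assumed tail, at the helper's indentation:

        # Assumed tail of create_dataset: block on the long-running operation
        # and hand back the resulting Dataset proto (the next cell reads result.name).
        result = operation.result(timeout=timeout)
        return result
    except Exception as e:
        print("exception:", e)
        return None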
Now save the unique dataset identifier for the `Dataset` resource instance you created.
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split('/')[-1]

print(dataset_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Data preparation

The Vertex `Dataset` resource for video has some requirements for your data.

- Videos must be stored in a Cloud Storage bucket.
- Each video file must be in a video format (MPG, AVI, ...).
- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each video.
- T...

IMPORT_FILES = [
    'gs://automl-video-demo-data/hmdb_golf_swing_train.csv',
    'gs://automl-video-demo-data/hmdb_golf_swing_test.csv',
]
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Quick peek at your data

You will use a version of the Golf Swings dataset that is stored in a public Cloud Storage bucket, using a CSV index file.

Start by taking a quick peek at the data. Count the number of examples by counting the number of rows in the CSV index file (`wc -l`), and then peek at the first few rows.

if 'IMPORT_FILES' in globals():
    FILE = IMPORT_FILES[0]
else:
    FILE = IMPORT_FILE

count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $FILE | head
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Import data

Now, import the data into your Vertex Dataset resource. Use the helper function `import_data` to import the data. The function does the following:

- Uses the `Dataset` client.
- Calls the client method `import_data`, with the following parameters:
  - `name`: The human readable name you give to the `Dataset` r...

def import_data(dataset, gcs_sources, schema):
    config = [{
        'gcs_source': {'uris': gcs_sources},
        'import_schema_uri': schema
    }]
    print("dataset:", dataset_id)
    start_time = time.time()
    try:
        operation = clients['dataset'].import_data(name=dataset_id, import_configs=config)
        ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Train the model

Now train an AutoML video action recognition model using your Vertex `Dataset` resource. To train the model, do the following steps:

1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.

Create a training pipeline

You may ask, what do we use a pip...

def create_pipeline(pipeline_name, model_name, dataset, schema, task):
    dataset_id = dataset.split('/')[-1]

    input_config = {
        'dataset_id': dataset_id,
        'fraction_split': {
            'training_fraction': 0.8,
            'test_fraction': 0.2
        }
    }
    ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
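The `create_pipeline` helper is also cut off. A sketch of how such a GAPIC request is typically assembled and submitted, assuming the standard `TrainingPipeline` fields (this mirrors the other helpers in the notebook rather than reproducing the elided code):

    # Assumed continuation: assemble and submit the TrainingPipeline request.
    training_pipeline = {
        "display_name": pipeline_name,
        "training_task_definition": schema,          # training task schema URI
        "training_task_inputs": task,                 # protobuf Struct of task params
        "input_data_config": input_config,
        "model_to_upload": {"display_name": model_name},
    }
    try:
        pipeline = clients['pipeline'].create_training_pipeline(
            parent=PARENT, training_pipeline=training_pipeline)
    except Exception as e:
        print("exception:", e)
        return None
    return pipeline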
Construct the task requirements

Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.

The minimal fields you need ...

PIPE_NAME = "golf_pipe-" + TIMESTAMP
MODEL_NAME = "golf_model-" + TIMESTAMP

task = json_format.ParseDict({'model_type': "CLOUD"}, Value())

response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Now save the unique identifier of the training pipeline you created.
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split('/')[-1]

print(pipeline_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Get information on a training pipeline

Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's `get_training_pipeline` method, with the following parameter:

- `name`: The Vertex fully qualified pipeline...

def get_training_pipeline(name, silent=False):
    response = clients['pipeline'].get_training_pipeline(name=name)
    if silent:
        return response

    print("pipeline")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" state:", response.state)
    print(" training...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Deployment

Training the above model may take upwards of 240 minutes.

Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipel...

while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            ra...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
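The wait loop above is truncated. A minimal self-contained sketch of the full pattern, assuming the pipeline's `model_to_upload` field carries the trained Model resource and that the loop sleeps between polls:

import time

while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            raise Exception("Training job failed")
    else:
        # On success, read the uploaded Model's fully qualified resource name.
        model_to_deploy_id = response.model_to_upload.name
        print("Training Time:", response.end_time - response.start_time)
        break
    time.sleep(60)  # assumed polling interval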
Model information

Now that your model is trained, you can get some information on your model.

Evaluate the Model resource

Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to ev...

def list_model_evaluations(name):
    response = clients['model'].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDic...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Model deployment for batch prediction

Now deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.

For online prediction, you:

1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource ...

import json

import_file = IMPORT_FILES[0]
test_items = ! gsutil cat $import_file | head -n2

cols = str(test_items[0]).split(',')
test_item_1 = str(cols[0])
test_label_1 = str(cols[-1])

cols = str(test_items[1]).split(',')
test_item_2 = str(cols[0])
test_label_2 = str(cols[-1])

print(test_item_1, test_label_1)
print...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Make a batch input file

Now make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:

- `content`: T...

import json
import tensorflow as tf

gcs_input_uri = BUCKET_NAME + '/test.jsonl'
with tf.io.gfile.GFile(gcs_input_uri, 'w') as f:
    data = {
        "content": test_item_1,
        "mimeType": "video/avi",
        "timeSegmentStart": "0.0s",
        'timeSegmentEnd': '5.0s'
    }
    f.write(json.dumps(data) + '\n')
    data = {
        "content": test_item_2...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Compute instance scaling

You have several choices on scaling the compute instances for handling your batch prediction requests (an illustrative sketch of these configurations follows this cell):

- Single Instance: The batch prediction requests are processed on a single compute instance. Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manu...

MIN_NODES = 1
MAX_NODES = 1
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
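For reference, a hedged sketch of how the scaling choices described above map onto these two variables (the values are illustrative, not recommendations):

# Single instance: batch requests run on exactly one node.
MIN_NODES, MAX_NODES = 1, 1

# Manual scaling: a fixed pool of N nodes, e.g. a hypothetical pool of 4.
# MIN_NODES, MAX_NODES = 4, 4

# Auto-scaling: the service scales between the bounds with load.
# MIN_NODES, MAX_NODES = 1, 4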
Make batch prediction request

Now that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:

- `display_name`: The human readable name for the prediction job.
- `model_name`: The Vertex fully qualified identifier for the `...

BATCH_MODEL = "golf_batch-" + TIMESTAMP

def create_batch_prediction_job(display_name, model_name, gcs_source_uri,
                                gcs_destination_output_uri_prefix, parameters=None):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_GPU,
            "accel...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Now get the unique identifier for the batch prediction job you created.
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split('/')[-1]

print(batch_job_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Get information on a batch prediction job

Use this helper function `get_batch_prediction_job`, with the following parameter:

- `job_name`: The Vertex fully qualified identifier for the batch prediction job.

The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:
- ...

def get_batch_prediction_job(job_name, silent=False):
    response = clients['job'].get_batch_prediction_job(name=job_name)
    if silent:
        return response.output_config.gcs_destination.output_uri_prefix, response.state

    print("response")
    print(" name:", response.name)
    print(" display_name:", respons...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Get the predictions

When the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.

Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a JSONL format, which you indicated at the time you made the batch prediction job, under a subfold...

def get_latest_predictions(gcs_out_dir):
    ''' Get the latest prediction subfolder using the timestamp in the subfolder name'''
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split('/')[-2]
        if subfolder.startswith('prediction-'):
            if subf...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
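The helper above is truncated; a hedged usage sketch, assuming it returns the newest `prediction-*` folder under the job's output prefix and that the results are JSONL files:

# Hypothetical usage: locate the job's output, then print the predictions.
predictions_prefix, state = get_batch_prediction_job(batch_job_id, True)
folder = get_latest_predictions(predictions_prefix)

! gsutil ls $folder
! gsutil cat $folder/predictions*.jsonl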
Cleaning up

To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:

- Dataset
- Pipeline
- ...

delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and 'dataset_id' in globals():
        ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_video_action_recognition_batch.ipynb
shenzhimo2/vertex-ai-samples
Tensor analysis using Amazon SageMaker Debugger

Looking at the distributions of activation inputs/outputs, gradients and weights per layer can give useful insights. For instance, it helps to understand whether the model runs into problems like neuron saturation, whether there are layers in your model that are not learn...
! python -m pip install smdebug
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Configuring the inputs for the training job

Now we'll call the SageMaker MXNet Estimator to kick off a training job. The `entry_point_script` points to the MXNet training script. Users can create a custom *SessionHook* in their training script. If they choose not to create such a hook in the training script (similar ...

entry_point_script = 'mnist.py'
bad_hyperparameters = {'initializer': 2, 'lr': 0.001}

import sagemaker
from sagemaker.mxnet import MXNet
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig
import boto3
import os

estimator = MXNet(role=sagemaker.get_execution_role(),
                  base_job_name='mxn...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
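The estimator cell is truncated before the debugger settings. A minimal sketch of a `DebuggerHookConfig` of the kind described above; the collection name, regex and save interval here are illustrative assumptions, not the notebook's actual values:

# Hypothetical hook: save every tensor every 100 steps.
debugger_hook_config = DebuggerHookConfig(
    collection_configs=[
        CollectionConfig(
            name="all_tensors",
            parameters={"include_regex": ".*", "save_interval": "100"},
        )
    ]
)
# It would be passed to the estimator as:
#   MXNet(..., debugger_hook_config=debugger_hook_config)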
Start the training job
estimator.fit(wait=False)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Get S3 location of tensors

We can get information related to the training job:

job_name = estimator.latest_training_job.name
client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
description
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can retrieve the S3 location of the tensors:
path = estimator.latest_job_debugger_artifacts_path()
print('Tensors are stored in: ', path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can check the status of our training job, by executing `describe_training_job`:
job_name = estimator.latest_training_job.name
print('Training job name: {}'.format(job_name))

client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can access the tensors from S3 once the training job is in status `Training` or `Completed`. In the following code cell we check the job status:
import time

if description['TrainingJobStatus'] != 'Completed':
    while description['SecondaryStatus'] not in {'Training', 'Completed'}:
        description = client.describe_training_job(TrainingJobName=job_name)
        primary_status = description['TrainingJobStatus']
        secondary_status = description['Secon...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
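The polling cell above is cut off; presumably the loop finishes by reporting both statuses and sleeping before the next poll. A sketch of that assumed tail, at the loop's indentation:

        # Assumed tail of the truncated polling loop: report and wait.
        print('Current job status: [PrimaryStatus: {}, SecondaryStatus: {}]'.format(
            primary_status, secondary_status))
        time.sleep(15)  # hypothetical polling interval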
Once the job is in status `Training` or `Completed`, we can create the trial that allows us to access the tensors in Amazon S3.
from smdebug.trials import create_trial

trial1 = create_trial(path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can check the available steps. A step represents one forward and backward pass.
trial1.steps()
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
As training progresses, more steps will become available. Next we will access specific tensors like weights, gradients and activation outputs and plot their distributions. We will use Amazon SageMaker Debugger and define custom rules to retrieve certain tensors. Rules are supposed to return True or False. However, in th...

from smdebug.trials import create_trial
from smdebug.rules.rule_invoker import invoke_rule
from smdebug.exceptions import NoMoreData
from smdebug.rules.rule import Rule
import numpy as np
import utils
import collections
import os
from IPython.display import Image

class ActivationOutputs(Rule):
    def __init__(self, ba...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
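The rule definition above is truncated, but the comparison section later in this notebook shows how such rules are run; the same usage applies here:

# Run the custom rule over all steps recorded for the first trial.
rule = ActivationOutputs(trial1)
try:
    invoke_rule(rule)
except NoMoreData:
    print('The training has ended and there is no more data to be analyzed. This is expected behavior.')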
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/activation_outputs.gif')
Image(url='images/activation_outputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Activation Inputs

In this rule we look at the inputs to the activation functions, rather than the outputs. This can be helpful for understanding whether there are extreme negative or positive values that saturate the activation functions.

class ActivationInputs(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*relu_input'):
            if "gradients" not in tname:
                ...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/activation_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
We can see that the second convolutional layer `conv1_relu_input_0` receives only negative input values, which means that all ReLUs in this layer output 0.
Image(url='images/activation_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Gradients

The following code retrieves the gradients and plots their distribution. If the variance is tiny, the model parameters are not being updated effectively with each training step, or the training has converged to a minimum.

class GradientsLayer(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*gradient'):
            try:
                tensor = self.bas...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/gradients.gif')
Image(url='images/gradients.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Check variance across layers

The rule retrieves gradients, but this time we compare the variance of the gradient distribution across layers. We want to identify if there is a large difference between the min and max variance per training step. For instance, very deep neural networks may suffer from vanishing gradients the deep...

class GradientsAcrossLayers(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*gradient'):
            try:
                tensor =...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Let's check min and max values of the gradients across layers:
for step in rule.tensors:
    print("Step", step, "variance of gradients: ", rule.tensors[step][0], " to ", rule.tensors[step][1])
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Distribution of weights

This rule retrieves the weight tensors and checks the variance. If the distribution does not change much across steps, it may indicate that the learning rate is too low, that the gradients are too small, or that the training has converged to a minimum.

class WeightRatio(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*weight'):
            if "gradient" not in tname:
                ...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/weights.gif')
Image(url='images/weights.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Inputs

This rule retrieves layer inputs, excluding activation inputs.

class Inputs(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*input'):
            if "relu" not in tname:
                try:
                    ...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/layer_inputs.gif')
Image(url='images/layer_inputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Layer outputs

This rule retrieves outputs of layers, excluding activation outputs.

class Outputs(Rule):
    def __init__(self, base_trial):
        super().__init__(base_trial)
        self.tensors = collections.OrderedDict()

    def invoke_at_step(self, step):
        for tname in self.base_trial.tensor_names(regex='.*output'):
            if "relu" not in tname:
                try:
                    ...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(rule.tensors, filename='images/layer_outputs.gif')
Image(url='images/layer_outputs.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Comparison

In the previous section we looked at the distributions of gradients, activation outputs and weights of a model that did not train well due to poor initialization. Now we will compare some of these distributions with a model that has been well initialized.

entry_point_script = 'mnist.py'
hyperparameters = {'lr': 0.01}

estimator = MXNet(role=sagemaker.get_execution_role(),
                  base_job_name='mxnet',
                  train_instance_count=1,
                  train_instance_type='ml.m5.xlarge',
                  train_volume_size=400,
                  source...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Start the training job
estimator.fit(wait=False)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Get S3 path where tensors have been stored
path = estimator.latest_job_debugger_artifacts_path()
print('Tensors are stored in: ', path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Check the status of the training job:
job_name = estimator.latest_training_job.name
print('Training job name: {}'.format(job_name))

client = estimator.sagemaker_session.sagemaker_client
description = client.describe_training_job(TrainingJobName=job_name)

if description['TrainingJobStatus'] != 'Completed':
    while description['SecondaryStatus'] not in ...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Now we create a new trial object `trial2`:
from smdebug.trials import create_trial

trial2 = create_trial(path)
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Gradients

Let's compare the distribution of gradients of the convolutional layers of both trials. `trial1` is the trial object of the first training job, `trial2` is the trial object of the second training job. We can now easily compare tensors from both training jobs.

rule = GradientsLayer(trial1)
try:
    invoke_rule(rule)
except NoMoreData:
    print('The training has ended and there is no more data to be analyzed. This is expected behavior.')

dict_gradients = {}
dict_gradients['gradient/conv0_weight_bad_hyperparameters'] = rule.tensors['gradient/conv0_weight']
dict_gradients['gr...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Second trial:
rule = GradientsLayer(trial2)
try:
    invoke_rule(rule)
except NoMoreData:
    print('The training has ended and there is no more data to be analyzed. This is expected behavior.')

dict_gradients['gradient/conv0_weight_good_hyperparameters'] = rule.tensors['gradient/conv0_weight']
dict_gradients['gradient/conv1_weight...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(dict_gradients, filename='images/gradients_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
In the case of the poorly initialized model, the gradients fluctuate a lot, leading to very high variance.
Image(url='images/gradients_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Activation inputs

Let's compare the distribution of activation inputs of both trials.

rule = ActivationInputs(trial1)
try:
    invoke_rule(rule)
except NoMoreData:
    print('The training has ended and there is no more data to be analyzed. This is expected behavior.')

dict_activation_inputs = {}
dict_activation_inputs['conv0_relu_input_0_bad_hyperparameters'] = rule.tensors['conv0_relu_input_0']
dict_a...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Second trial
rule = ActivationInputs(trial2)
try:
    invoke_rule(rule)
except NoMoreData:
    print('The training has ended and there is no more data to be analyzed. This is expected behavior.')

dict_activation_inputs['conv0_relu_input_0_good_hyperparameters'] = rule.tensors['conv0_relu_input_0']
dict_activation_inputs['conv1_rel...
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Plot the histograms
utils.create_interactive_matplotlib_histogram(dict_activation_inputs, filename='images/activation_inputs_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
The distributions of activation inputs into the first activation layer `conv0_relu_input_0` look quite similar in both trials. However, in the case of the second layer they differ drastically.
Image(url='images/activation_inputs_comparison.gif')
_____no_output_____
Apache-2.0
sagemaker-debugger/mnist_tensor_analysis/mnist_tensor_analysis.ipynb
P15241328/amazon-sagemaker-examples
Copyright 2019 The TensorFlow Authors.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under...
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Word embeddings

View on TensorFlow.org | Run in Google Colab | View source on GitHub | Download notebook

Note: These documents were translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this translation is accurate or that it reflects the latest version of the [official English documentation](https://www.tensorflow.org/?hl=en)...

from __future__ import absolute_import, division, print_function, unicode_literals

try:
    # %tensorflow_version only exists in Colab
    !pip install tf-nightly
except Exception:
    pass

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

import tensorflow_datasets as tfds
tfds.disable...
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Using the Embedding layer

Keras makes it easy to use word embeddings. Let's take a look at the [Embedding](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.

The Embedding layer can be understood as a lookup table that maps integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you experiment with to find what works well for your problem, in much the same way you would experiment with the number of neurons in a Dense layer.
embedding_layer = layers.Embedding(1000, 5)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
When you create an Embedding layer, its weights are randomly initialized (just like for any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings roughly encode similarities between words (as they were learned for the specific problem your model was trained on). Passing integers to an Embedding layer replaces each integer with the corresponding vector from the embedding table.

result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
For text or sequence problems, the Embedding layer takes as input a 2D tensor of integers of shape `(samples, sequence_length)`, where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed batches of shape `(32, 10)` (a batch of 32 sequences of length 10) or `(64, 15)` (a batch of 64 sequences of length 15) into the embedding layer above. The returned tensor has one more axis than the input, and the embedding vectors are aligned along the new last axis. Passing it a `(2, 3)` input batch, the output is...

result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Given a batch of sequences as input, an Embedding layer returns a 3D floating-point tensor of shape `(samples, sequence_length, embedding_dimensionality)`. There are a variety of standard approaches for converting such a variable-length sequence into a fixed-length representation: you can use an RNN, attention, or a pooling layer before passing it to a Dense layer. Here we use pooling, because it is the simplest. [Text classification with an RNN](https://github.com/tensorflow/docs/blob/master/site/ja/tutorials/tex...

(train_data, test_data), info = tfds.load(
    'imdb_reviews/subwords8k',
    split=(tfds.Split.TRAIN, tfds.Split.TEST),
    with_info=True, as_supervised=True)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Get the encoder (`tfds.features.text.SubwordTextEncoder`) and take a quick look at the vocabulary. The "\_" in the vocabulary represents spaces. Note how the vocabulary contains whole words (ending with "\_") as well as partial words that can be used to build up longer words.

encoder = info.features['text'].encoder
encoder.subwords[:20]
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Movie reviews will each have a different length. Use the `padded_batch` method to standardize the lengths of the reviews.

train_batches = train_data.shuffle(1000).padded_batch(10)
test_batches = test_data.shuffle(1000).padded_batch(10)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
As imported, the review text is integer-encoded (each integer represents a specific word or word-part in the vocabulary). Note the trailing zeros: the batch is padded to the length of its longest example.

train_batch, train_labels = next(iter(train_batches))
train_batch.numpy()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Build a simple model

We will define our model with the [Keras Sequential API](../../guide/keras). In this case it is a "continuous bag of words"-style model.

* Next is the Embedding layer. It takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array; the resulting dimensions are `(batch, sequence, embedding)`.
* Next, a GlobalAveragePooling1D...

embedding_dim = 16

model = keras.Sequential([
    layers.Embedding(encoder.vocab_size, embedding_dim),
    layers.Dense(16, activation='relu'),
    layers.GlobalAveragePooling1D(),
    layers.Dense(1, activation='sigmoid')
])

model.summary()
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Compile and train the model

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])

history = model.fit(
    train_batches,
    epochs=10,
    validation_data=test_batches,
    validation_steps=20)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
With this approach the model reaches a validation accuracy of around 88% (note that the model is overfitting: the training accuracy is significantly higher).

import matplotlib.pyplot as plt

history_dict = history.history

acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']

epochs = range(1, len(acc) + 1)

plt.figure(figsize=(12,9))
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot...
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Retrieve the learned embeddings

Next, let's retrieve the word embeddings learned during training. This will be a matrix of shape `(vocab_size, embedding-dimension)`.

e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape)  # shape: (vocab_size, embedding_dim)
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
We now write the weights to disk. To use the [Embedding Projector](http://projector.tensorflow.org), we upload two files in tab-separated format: a file of vectors (containing the embeddings) and a file of metadata (containing the words).

import io

encoder = info.features['text'].encoder

out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')

for num, word in enumerate(encoder.subwords):
    vec = weights[num+1]  # skip 0, it's padding
    out_m.write(word + "\n")
    out_v.write('\t'.join([str(x) for x in vec]) + "\...
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
If you are running this tutorial in [Colaboratory](https://colab.research.google.com), you can use the following snippet to download these files to your local machine (or use the file browser: *View -> Table of contents -> Files*).

try:
    from google.colab import files
except ImportError:
    pass
else:
    files.download('vecs.tsv')
    files.download('meta.tsv')
_____no_output_____
Apache-2.0
site/ja/tutorials/text/word_embeddings.ipynb
mulka/docs
Unsupervised learning methods

Principal component analysis

Attention! This assignment assumes that you have the numpy library version 1.16.4 or higher and the scikit-learn library version 0.21.2 or higher installed. We will check this in the next cell. If you have older versions installed, please update them, or...

import numpy as np
import sklearn
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
In this assignment we will apply principal component analysis to high-dimensional data and try to find the optimal feature dimensionality for solving a classification task.

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

%matplotlib inline
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Data preparation

The source [data](http://archive.ics.uci.edu/ml/machine-learning-databases/auslan2-mld/auslan.data.html) are readings from various sensors attached to the hands of a person who communicates in sign language. In this case the task is stated as follows: from the sensor readings (11 se...

# Load the sensor data
df_database = pd.read_csv('sign_database.csv')

# Load the class labels
sign_classes = pd.read_csv('sign_classes.csv', index_col=0, header=0, names=['id', 'class'])

# The id column holds the "word" identifiers
# The time column is the timestamp
# The remaining columns are the sensor readings for word id at m...
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
For each of the "words" we have a set of sensor readings from different parts of the hand at each moment in time. The idea of our approach is the following: for each sensor, let's compute a set of characteristics (for example, the spread of values, the maximum, minimum and mean values, the number of "peaks", and so on) and...

## If you don't want to wait a long time, leave these lines commented out
# from tsfresh.feature_extraction import extract_features
# from tsfresh.feature_selection import select_features
# from tsfresh.utilities.dataframe_functions import impute
# from tsfresh.feature_extraction import ComprehensiveFCParameters, MinimalFCParameters, se...
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
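The tsfresh-based extraction is left commented out above. A minimal hand-rolled sketch of the same idea, assuming the column layout described earlier (`id`, `time`, then one column per sensor):

# For each "word" (id), aggregate every sensor column into a few summary
# statistics; the resulting wide table serves as a feature matrix.
sensor_cols = [c for c in df_database.columns if c not in ('id', 'time')]
features = df_database.groupby('id')[sensor_cols].agg(['min', 'max', 'mean', 'std'])

# Flatten the (sensor, statistic) column MultiIndex into plain names.
features.columns = ['{}_{}'.format(col, stat) for col, stat in features.columns]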
Baseline model

We ended up with a lot of features (a whopping 10,865), so let's apply principal component analysis to obtain a compressed feature representation while preserving the model's predictive power.

from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelE...
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Let's create a baseline without dimensionality reduction. The model hyperparameters were chosen arbitrarily.

# Prepare the model inputs

# features
X = sign_features_filtered.values

# classes
enc = LabelEncoder()
enc.fit(sign_classes.loc[:, 'class'])
sign_classes.loc[:, 'target'] = enc.transform(sign_classes.loc[:, 'class'])
y = sign_classes.target.values

# We'll cross-validate on 5 folds
cv = StratifiedKFo...
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
The quality of the baseline model should be around 92 percent.

Principal component analysis

* Add a step with principal component analysis to the `base_model` pipeline. Starting with version 0.18, sklearn offers different solvers for PCA. Additionally, set the following parameters in the model: `svd_solver="randomized"` and `random_state=123`.
*...

numbers = [i for i in range(9, 19)]
scores = []

for n in numbers:
    base_model1 = Pipeline([
        ('scaler', StandardScaler()),
        ('pca', PCA(n_components=n, svd_solver='randomized', random_state=123)),
        ('clf', KNeighborsClassifier(n_neighbors=9))
    ])
    scores.append(cross_val_score(bas...
_____no_output_____
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
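The answer cell below prints a variable `expl` that is never defined in the visible code. A plausible reconstruction, assuming `expl` is the fraction of variance explained by the first principal component of the standardized features (the assignment text is truncated, so this is only a guess):

# Hypothetical: explained-variance ratio of the first principal component.
pca = PCA(n_components=1, svd_solver='randomized', random_state=123)
pca.fit(StandardScaler().fit_transform(X))
expl = pca.explained_variance_ratio_[0]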
Answer
print('{:.2f}'.format(expl))
0.39
Apache-2.0
pca/pca.ipynb
myusernameisuseless/python_for_data_analysis_mailru_mipt
Load libraries
!pip install -q -r requirements.txt

import sys
import os
import numpy as np
import pandas as pd
from PIL import Image

import torch
import torch.nn as nn
import torch.utils.data as D
from torch.optim.lr_scheduler import ExponentialLR
import torch.nn.functional as F
from torch.autograd import Variable

from torchvision...
_____no_output_____
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Define dataset and model
img_dir = '../input/rxrxairgb512'
path_data = '../input/rxrxaicsv'
device = 'cuda'
batch_size = 32
torch.manual_seed(0)
model_name = 'efficientnet-b3'
jitter = (0.6, 1.4)

class ImagesDS(D.Dataset):
    # taken textbook from https://arxiv.org/pdf/1812.01187.pdf
    transform_train = transforms.Compose([
        transfor...
Loaded pretrained weights for efficientnet-b3
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Evaluate
model.cuda()
eval_model_10(model, tloader, 'models/Model_efficientnet-b3_93.pth', path_data)
_____no_output_____
Apache-2.0
my_notebooks/eval10_experiment5.ipynb
MichelML/ml-aging
Introduction to the Research Environment

The research environment is powered by IPython notebooks, which allow one to perform a great deal of data analysis and statistical validation. We'll demonstrate a few simple techniques here.

Code Cells vs. Text Cells

As you can see, each cell can be either code or text. To select ...
2 + 2
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Sometimes there is no result to be printed, as is the case with assignment.
X = 2
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Remember that only the result from the last line is printed.
2 + 2
3 + 3
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
However, you can print whichever lines you want using the `print` statement.
print(2 + 2)
3 + 3
4
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Knowing When a Cell is Running

While a cell is running, a `[*]` will display on the left. When a cell has yet to be executed, `[ ]` will display. When it has been run, a number will display indicating the order in which it was run during the execution of the notebook, e.g. `[5]`. Try it on this cell and watch it happen.

# Take some time to run something
c = 0
for i in range(10000000 + 1):
    c = c + i
print(c)
50000005000000
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Example 1: Arithmetic progression with common difference 1

$\frac{n \cdot (n+1)}{2} = 1 + 2 + 3 + 4 + 5 + 6 + \cdots + n$

n = 10000000
print(int(n*(n+1)/2))
50000005000000
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Importing Libraries

The vast majority of the time, you'll want to use functions from pre-built libraries. You can't import every library on Quantopian due to security issues, but you can import most of the common scientific ones. Here I import numpy and pandas, the two most common and useful libraries in quant finance. ...

import numpy as np
import pandas as pd

# This is a plotting library for pretty pictures.
import matplotlib.pyplot as plt
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Tab Autocomplete

Pressing tab will give you a list of IPython's best guesses for what you might want to type next. This is incredibly valuable and will save you a lot of time. If there is only one possible option for what you could type next, IPython will fill that in for you. Try pressing tab very frequently, it will s...
np.random
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Getting Documentation Help

Placing a question mark after a function and executing that line of code will give you the documentation IPython has for that function. It's often best to do this in a new cell, as you avoid re-executing other code and running into bugs.
np.random.normal?
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Example 2: Get a prime number between 1 and 100

def is_prime(number):
    if number <= 1:
        return False
    elif number <= 3:
        return True
    if number % 2 == 0 or number % 3 == 0:
        return False
    i = 5
    while i*i <= number:
        if number % i == 0 or number % (i+2) == 0:
            return False
        i += 6  # step through the 6k ± 1 candidates
    return True

n = 0
while True:
    n = np.ran...
49 Es un numero primo
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Sampling

We'll sample some random data using a function from `numpy`.

# Sample 100 points with a mean of 0 and an std of 1. This is a standard normal distribution.
X = np.random.normal(0, 1, 100)
X
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Plotting

We can use the plotting library we imported as follows.
plt.plot(X)
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021
Squelching Line Output

You might have noticed the annoying line of the form `[]` before the plots. This is because the `.plot` function actually produces output. Sometimes we wish not to display output; we can accomplish this with a semicolon, as follows.
plt.plot(X);
_____no_output_____
MIT
Lab01/lmbaeza-lecture1.ipynb
lmbaeza/numerical-methods-2021