Set region and boto3 config
# You can change this to a region of your choice
import boto3
import sagemaker

region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))

boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)

s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")
sagemaker_session = sagemaker.session.Session(
    boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)
sagemaker_role = sagemaker.get_execution_role()

account_id = boto3.client("sts").get_caller_identity()["Account"]

bucket = sagemaker_session.default_bucket()
prefix = "fraud-detect-demo"

claims_fg_name = f"{prefix}-claims"
customers_fg_name = f"{prefix}-customers"

# ======> S3 output paths
training_job_output_path = f"s3://{bucket}/{prefix}/training_jobs"
bias_report_output_path = f"s3://{bucket}/{prefix}/clarify-bias"
explainability_output_path = f"s3://{bucket}/{prefix}/clarify-explainability"

train_data_uri = f"s3://{bucket}/{prefix}/data/train/train.csv"
test_data_uri = f"s3://{bucket}/{prefix}/data/test/test.csv"
train_data_upsampled_s3_path = f"s3://{bucket}/{prefix}/data/train/upsampled/train.csv"
processing_dir = "/opt/ml/processing"
create_dataset_script_uri = f"s3://{bucket}/{prefix}/code/create_dataset.py"
pipeline_bias_output_path = f"s3://{bucket}/{prefix}/clarify-output/pipeline/bias"
deploy_model_script_uri = f"s3://{bucket}/{prefix}/code/deploy_model.py"

# ======> variables used for parameterizing the notebook run
flow_instance_count = 1
flow_instance_type = "ml.m5.4xlarge"
train_instance_count = 1
train_instance_type = "ml.m4.xlarge"
deploy_model_instance_type = "ml.m4.xlarge"
Apache-2.0
end_to_end/fraud_detection/pipeline-e2e.ipynb
qidewenwhen/amazon-sagemaker-examples
Architecture: Create a SageMaker Pipeline to Automate All the Steps from Data Prep to Model Deployment
----
![End to end pipeline architecture](./images/e2e-5-pipeline-v3b.png)

Creating an Automated Pipeline using SageMaker Pipeline

- [Step 1: Claims Data Wrangler Preprocessing Step](Step-1:-Claims-Data-Wrangler-Preprocessing-Step)
- [Step 2: Customers Data Wrangler Preprocessing Step](Step-2:-Customers-Data-Wrangler-Preprocessing-Step)
- [Step 3: Create Dataset and Train/Test Split](Step-3:-Create-Dataset-and-Train/Test-Split)
- [Step 4: Train XGBoost Model](Step-4:-Train-XGBoost-Model)
- [Step 5: Model Pre-Deployment Step](Step-5:-Model-Pre-Deployment-Step)
- [Step 6: Run Bias Metrics with Clarify](Step-6:-Run-Bias-Metrics-with-Clarify)
- [Step 7: Register Model](Step-7:-Register-Model)
- [Step 8: Deploy Model](Step-8:-Deploy-Model)
- [Step 9: Combine and Run the Pipeline Steps](Step-9:-Combine-and-Run-the-Pipeline-Steps)

----
Now that you've manually done each step in our machine learning workflow, you can automate certain steps to allow for faster model experimentation without sacrificing transparency and model tracking. In this section you will create a pipeline which trains a new model, persists the model in SageMaker, and then adds the model to the registry.

Pipeline parameters

An important feature of SageMaker Pipelines is the ability to define the steps ahead of time while still being able to change the parameters to those steps at execution, without having to re-define the pipeline. This can be achieved by using `ParameterInteger`, `ParameterFloat`, or `ParameterString` to define a value upfront which can be modified when you call `pipeline.start(parameters=parameters)` later. Only certain parameters can be defined this way.
train_instance_param = ParameterString(
    name="TrainingInstance",
    default_value="ml.m4.xlarge",
)

model_approval_status = ParameterString(
    name="ModelApprovalStatus", default_value="PendingManualApproval"
)
Step 1: Claims Data Wrangler Preprocessing Step

Upload raw data to S3

Before you can preprocess the raw data with Data Wrangler, it must exist in S3.
s3_client.upload_file(
    Filename="data/claims.csv", Bucket=bucket, Key=f"{prefix}/data/raw/claims.csv"
)
s3_client.upload_file(
    Filename="data/customers.csv", Bucket=bucket, Key=f"{prefix}/data/raw/customers.csv"
)
Update attributes within the `.flow` file

Data Wrangler generates a `.flow` file. It contains a reference to an S3 bucket used during the wrangling, which may be different from your default bucket in this notebook; e.g., if the wrangling was done by someone else, you probably won't have access to their bucket, so you need to point the file to your own S3 bucket before you can load it into Data Wrangler or access the data.

After running the cell below, you can open the `claims.flow` and `customers.flow` files and export the data to S3, or you can continue the guide using the provided `data/claims_preprocessed.csv` and `data/customers_preprocessed.csv` files.
claims_flow_template_file = "claims_flow_template"

with open(claims_flow_template_file, "r") as f:
    variables = {"bucket": bucket, "prefix": prefix}
    template = string.Template(f.read())
    claims_flow = template.substitute(variables)
    claims_flow = json.loads(claims_flow)

with open("claims.flow", "w") as f:
    json.dump(claims_flow, f)

customers_flow_template_file = "customers_flow_template"

with open(customers_flow_template_file, "r") as f:
    variables = {"bucket": bucket, "prefix": prefix}
    template = string.Template(f.read())
    customers_flow = template.substitute(variables)
    customers_flow = json.loads(customers_flow)

with open("customers.flow", "w") as f:
    json.dump(customers_flow, f)
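The rewriting above relies on `string.Template` substitution: `$bucket`-style placeholders in the flow template are replaced with your own values. A minimal, self-contained sketch of that mechanism (the template text and key here are illustrative, not the real flow file):

```python
import json
import string

# Toy stand-in for a flow template file's contents (illustrative only)
template_text = '{"s3Uri": "s3://${bucket}/${prefix}/data/raw/claims.csv"}'

# substitute() replaces each ${name} placeholder with the matching dict value
template = string.Template(template_text)
flow_json = template.substitute({"bucket": "my-bucket", "prefix": "fraud-detect-demo"})

# The result is valid JSON again, now pointing at your bucket
flow = json.loads(flow_json)
print(flow["s3Uri"])  # s3://my-bucket/fraud-detect-demo/data/raw/claims.csv
```

Note that `substitute()` raises `KeyError` if a placeholder has no matching key, which makes missing variables fail loudly rather than silently.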
Upload flow to S3

This will become an input to the first step and, as such, needs to be in S3.
s3_client.upload_file(
    Filename="claims.flow", Bucket=bucket, Key=f"{prefix}/dataprep-notebooks/claims.flow"
)
claims_flow_uri = f"s3://{bucket}/{prefix}/dataprep-notebooks/claims.flow"
print("Claims flow file uploaded to S3")
Define the first Data Wrangler step's inputs
with open("claims.flow", "r") as f:
    claims_flow = json.load(f)

flow_step_inputs = []

# flow file contains the code for each transformation
flow_file_input = sagemaker.processing.ProcessingInput(
    source=claims_flow_uri, destination=f"{processing_dir}/flow", input_name="flow"
)
flow_step_inputs.append(flow_file_input)

# parse the flow file for S3 inputs to the Data Wrangler job
for node in claims_flow["nodes"]:
    if "dataset_definition" in node["parameters"]:
        data_def = node["parameters"]["dataset_definition"]
        name = data_def["name"]
        s3_input = sagemaker.processing.ProcessingInput(
            source=data_def["s3ExecutionContext"]["s3Uri"],
            destination=f"{processing_dir}/{name}",
            input_name=name,
        )
        flow_step_inputs.append(s3_input)
Define outputs for the first Data Wrangler step
claims_output_name = (
    f"{claims_flow['nodes'][-1]['node_id']}.{claims_flow['nodes'][-1]['outputs'][0]['name']}"
)

flow_step_outputs = []

flow_output = sagemaker.processing.ProcessingOutput(
    output_name=claims_output_name,
    feature_store_output=sagemaker.processing.FeatureStoreOutput(feature_group_name=claims_fg_name),
    app_managed=True,
)
flow_step_outputs.append(flow_output)
Define processor and processing step
# You can find the proper image uri by exporting your Data Wrangler flow to a pipeline notebook
# =================================
from sagemaker import image_uris

# Pulls the latest data-wrangler container tag, i.e. "1.x"
# The latest tested container version was "1.11.0"
image_uri = image_uris.retrieve(framework="data-wrangler", region=region)
print("image_uri: {}".format(image_uri))

flow_processor = sagemaker.processing.Processor(
    role=sagemaker_role,
    image_uri=image_uri,
    instance_count=flow_instance_count,
    instance_type=flow_instance_type,
    max_runtime_in_seconds=86400,
)

output_content_type = "CSV"

# Output configuration used as processing job container arguments
claims_output_config = {claims_output_name: {"content_type": output_content_type}}

claims_flow_step = ProcessingStep(
    name="ClaimsDataWranglerProcessingStep",
    processor=flow_processor,
    inputs=flow_step_inputs,
    outputs=flow_step_outputs,
    job_arguments=[f"--output-config '{json.dumps(claims_output_config)}'"],
)
Step 2: Customers Data Wrangler Preprocessing Step
s3_client.upload_file(
    Filename="customers.flow", Bucket=bucket, Key=f"{prefix}/dataprep-notebooks/customers.flow"
)
customers_flow_uri = f"s3://{bucket}/{prefix}/dataprep-notebooks/customers.flow"
print("Customers flow file uploaded to S3")

with open("customers.flow", "r") as f:
    customers_flow = json.load(f)

flow_step_inputs = []

# flow file contains the code for each transformation
flow_file_input = sagemaker.processing.ProcessingInput(
    source=customers_flow_uri, destination=f"{processing_dir}/flow", input_name="flow"
)
flow_step_inputs.append(flow_file_input)

# parse the flow file for S3 inputs to the Data Wrangler job
for node in customers_flow["nodes"]:
    if "dataset_definition" in node["parameters"]:
        data_def = node["parameters"]["dataset_definition"]
        name = data_def["name"]
        s3_input = sagemaker.processing.ProcessingInput(
            source=data_def["s3ExecutionContext"]["s3Uri"],
            destination=f"{processing_dir}/{name}",
            input_name=name,
        )
        flow_step_inputs.append(s3_input)

customers_output_name = (
    f"{customers_flow['nodes'][-1]['node_id']}.{customers_flow['nodes'][-1]['outputs'][0]['name']}"
)

flow_step_outputs = []

flow_output = sagemaker.processing.ProcessingOutput(
    output_name=customers_output_name,
    feature_store_output=sagemaker.processing.FeatureStoreOutput(
        feature_group_name=customers_fg_name
    ),
    app_managed=True,
)
flow_step_outputs.append(flow_output)

output_content_type = "CSV"

# Output configuration used as processing job container arguments
customers_output_config = {customers_output_name: {"content_type": output_content_type}}

customers_flow_step = ProcessingStep(
    name="CustomersDataWranglerProcessingStep",
    processor=flow_processor,
    inputs=flow_step_inputs,
    outputs=flow_step_outputs,
    job_arguments=[f"--output-config '{json.dumps(customers_output_config)}'"],
)
Step 3: Create Dataset and Train/Test Split
s3_client.upload_file(
    Filename="create_dataset.py", Bucket=bucket, Key=f"{prefix}/code/create_dataset.py"
)

create_dataset_processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=sagemaker_role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
    base_job_name="fraud-detection-demo-create-dataset",
    sagemaker_session=sagemaker_session,
)

create_dataset_step = ProcessingStep(
    name="CreateDataset",
    processor=create_dataset_processor,
    outputs=[
        sagemaker.processing.ProcessingOutput(
            output_name="train_data", source="/opt/ml/processing/output/train"
        ),
        sagemaker.processing.ProcessingOutput(
            output_name="test_data", source="/opt/ml/processing/output/test"
        ),
    ],
    job_arguments=[
        "--claims-feature-group-name", claims_fg_name,
        "--customers-feature-group-name", customers_fg_name,
        "--bucket-name", bucket,
        "--bucket-prefix", prefix,
        "--region", region,
    ],
    code=create_dataset_script_uri,
    depends_on=[claims_flow_step.name, customers_flow_step.name],
)
Step 4: Train XGBoost Model

In this step we use the ParameterString `train_instance_param` defined at the beginning of the pipeline.
hyperparameters = {
    "max_depth": "3",
    "eta": "0.2",
    "objective": "binary:logistic",
    "num_round": "100",
}

xgb_estimator = XGBoost(
    entry_point="xgboost_starter_script.py",
    output_path=training_job_output_path,
    code_location=training_job_output_path,
    hyperparameters=hyperparameters,
    role=sagemaker_role,
    instance_count=train_instance_count,
    instance_type=train_instance_param,
    framework_version="1.0-1",
)

train_step = TrainingStep(
    name="XgboostTrain",
    estimator=xgb_estimator,
    inputs={
        "train": sagemaker.inputs.TrainingInput(
            s3_data=create_dataset_step.properties.ProcessingOutputConfig.Outputs[
                "train_data"
            ].S3Output.S3Uri
        )
    },
)
Step 5: Model Pre-Deployment Step
model = sagemaker.model.Model(
    name="fraud-detection-demo-pipeline-xgboost",
    image_uri=train_step.properties.AlgorithmSpecification.TrainingImage,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    sagemaker_session=sagemaker_session,
    role=sagemaker_role,
)

inputs = sagemaker.inputs.CreateModelInput(instance_type="ml.m4.xlarge")

create_model_step = CreateModelStep(name="ModelPreDeployment", model=model, inputs=inputs)
Step 6: Run Bias Metrics with Clarify

Clarify configuration
bias_data_config = sagemaker.clarify.DataConfig(
    s3_data_input_path=create_dataset_step.properties.ProcessingOutputConfig.Outputs[
        "train_data"
    ].S3Output.S3Uri,
    s3_output_path=pipeline_bias_output_path,
    label="fraud",
    dataset_type="text/csv",
)

bias_config = sagemaker.clarify.BiasConfig(
    label_values_or_threshold=[0],
    facet_name="customer_gender_female",
    facet_values_or_threshold=[1],
)

analysis_config = bias_data_config.get_config()
analysis_config.update(bias_config.get_config())
analysis_config["methods"] = {"pre_training_bias": {"methods": "all"}}

clarify_config_dir = pathlib.Path("config")
clarify_config_dir.mkdir(exist_ok=True)
with open(clarify_config_dir / "analysis_config.json", "w") as f:
    json.dump(analysis_config, f)

s3_client.upload_file(
    Filename="config/analysis_config.json",
    Bucket=bucket,
    Key=f"{prefix}/clarify-config/analysis_config.json",
)
Clarify processing step
clarify_processor = sagemaker.processing.Processor(
    base_job_name="fraud-detection-demo-clarify-processor",
    image_uri=sagemaker.clarify.image_uris.retrieve(framework="clarify", region=region),
    role=sagemaker.get_execution_role(),
    instance_count=1,
    instance_type="ml.c5.xlarge",
)

clarify_step = ProcessingStep(
    name="ClarifyProcessor",
    processor=clarify_processor,
    inputs=[
        sagemaker.processing.ProcessingInput(
            input_name="analysis_config",
            source=f"s3://{bucket}/{prefix}/clarify-config/analysis_config.json",
            destination="/opt/ml/processing/input/config",
        ),
        sagemaker.processing.ProcessingInput(
            input_name="dataset",
            source=create_dataset_step.properties.ProcessingOutputConfig.Outputs[
                "train_data"
            ].S3Output.S3Uri,
            destination="/opt/ml/processing/input/data",
        ),
    ],
    outputs=[
        sagemaker.processing.ProcessingOutput(
            source="/opt/ml/processing/output/analysis.json",
            destination=pipeline_bias_output_path,
            output_name="analysis_result",
        )
    ],
)
Step 7: Register Model

In this step you will use the ParameterString `model_approval_status` defined at the outset of the pipeline code.
mpg_name = prefix

model_metrics = demo_helpers.ModelMetrics(
    bias=sagemaker.model_metrics.MetricsSource(
        s3_uri=clarify_step.properties.ProcessingOutputConfig.Outputs[
            "analysis_result"
        ].S3Output.S3Uri,
        content_type="application/json",
    )
)

register_step = RegisterModel(
    name="XgboostRegisterModel",
    estimator=xgb_estimator,
    model_data=train_step.properties.ModelArtifacts.S3ModelArtifacts,
    content_types=["text/csv"],
    response_types=["text/csv"],
    inference_instances=["ml.t2.medium", "ml.m5.xlarge"],
    transform_instances=["ml.m5.xlarge"],
    model_package_group_name=mpg_name,
    approval_status=model_approval_status,
    model_metrics=model_metrics,
)
Step 8: Deploy Model
s3_client.upload_file(
    Filename="deploy_model.py", Bucket=bucket, Key=f"{prefix}/code/deploy_model.py"
)

deploy_model_processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=sagemaker_role,
    instance_type="ml.t3.medium",
    instance_count=1,
    base_job_name="fraud-detection-demo-deploy-model",
    sagemaker_session=sagemaker_session,
)

deploy_step = ProcessingStep(
    name="DeployModel",
    processor=deploy_model_processor,
    job_arguments=[
        "--model-name", create_model_step.properties.ModelName,
        "--region", region,
        "--endpoint-instance-type", deploy_model_instance_type,
        "--endpoint-name", "xgboost-model-pipeline-0120",
    ],
    code=deploy_model_script_uri,
)
Step 9: Combine and Run the Pipeline Steps

Though it's easier to reason about when they are, the parameters and steps don't need to be listed in order: the pipeline service resolves the execution DAG from the dependencies between steps.
pipeline_name = "FraudDetectDemo"
%store pipeline_name

pipeline = Pipeline(
    name=pipeline_name,
    parameters=[train_instance_param, model_approval_status],
    steps=[
        claims_flow_step,
        customers_flow_step,
        create_dataset_step,
        train_step,
        create_model_step,
        clarify_step,
        register_step,
        deploy_step,
    ],
)
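The claim that step order doesn't matter can be illustrated with a tiny dependency resolver. This is not SageMaker's implementation, just a sketch of how an execution order can be recovered from declared dependencies (the step names and `deps` mapping are illustrative):

```python
# Minimal topological sort over a step -> dependencies mapping, illustrating
# how a pipeline can accept steps in any order and still run them correctly.
def topo_order(deps):
    order, seen = [], set()

    def visit(step):
        if step in seen:
            return
        seen.add(step)
        for d in deps.get(step, []):
            visit(d)  # dependencies are emitted before the step itself
        order.append(step)

    for step in deps:
        visit(step)
    return order

# Steps listed "out of order"; the dependencies still resolve.
deps = {
    "Deploy": ["Register"],
    "Register": ["Train"],
    "Train": ["CreateDataset"],
    "CreateDataset": [],
}
print(topo_order(deps))  # ['CreateDataset', 'Train', 'Register', 'Deploy']
```

A real pipeline service additionally detects cycles and runs independent branches (like the two Data Wrangler steps) in parallel.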
Submit the pipeline definition to the SageMaker Pipeline service

Note: If an existing pipeline has the same name, it will be overwritten.
pipeline.upsert(role_arn=sagemaker_role)
View the entire pipeline definition

Viewing the pipeline definition with all the string variables interpolated may help debug pipeline bugs. You may want to skip it due to its length.
json.loads(pipeline.describe()["PipelineDefinition"])
Run the pipeline

Note: this will take about 23 minutes to complete. You can watch the progress of the Pipeline Job on your SageMaker Studio Components panel.

![image.png](attachment:image.png)
# Special pipeline parameters can be defined or changed here
parameters = {"TrainingInstance": "ml.m5.xlarge"}

start_response = pipeline.start(parameters=parameters)
start_response.wait(delay=60, max_attempts=500)
start_response.describe()
After completion, it will look something like this:

![Pipeline](./images/pipeline-success.png)

Clean Up
----
After running the demo, you should remove the resources which were created. You can also delete all the objects in the project's S3 directory by passing the keyword argument `delete_s3_objects=True`.
from demo_helpers import delete_project_resources

delete_project_resources(
    sagemaker_boto_client=sagemaker_boto_client,
    pipeline_name=pipeline_name,
    mpg_name=mpg_name,
    prefix=prefix,
    delete_s3_objects=False,
    bucket_name=bucket,
)
This is an important notebook for building an intuition for how fitting Gaussian mixture models to the latent space could work.
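Before diving in, it may help to recall what sampling from a Gaussian mixture means mechanically: pick a component according to the mixing weights, then draw from that component's Gaussian. A numpy-only sketch (the weights, means, and covariances here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-component mixture in 2D: mixing weights, means, covariances (illustrative values)
weights = np.array([0.3, 0.7])
means = np.array([[-5.0, 0.0], [5.0, 0.0]])
covs = np.array([np.eye(2) * 0.5, np.eye(2) * 2.0])

def sample_gmm(n):
    # 1) choose a component per sample according to the mixing weights
    comps = rng.choice(len(weights), size=n, p=weights)
    # 2) draw each sample from its chosen component's Gaussian
    draws = np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])
    return draws, comps

samples, comps = sample_gmm(1000)
print(samples.shape)  # (1000, 2)
```

`sklearn.mixture.GaussianMixture.sample()` does essentially this after EM has estimated the weights, means, and covariances from data.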
a = np.ones((1, 1, 2, 2))
np.array([a[0]]).shape

%matplotlib notebook
from mpl_toolkits import mplot3d
import torch
import numpy as np
from matplotlib import pyplot as plt
from torch.nn import functional as F
import math
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import GridSearchCV
from torch import nn
import torch.optim as optim
from torch.autograd import Variable
import time
from sklearn.mixture import GaussianMixture

samples = 500
x = np.random.randint(-25, 25, samples) + np.random.random(samples)
y = np.random.normal(0, 3, samples)
line = np.hstack((x.reshape(-1, 1), y.reshape(-1, 1)))

theta = math.radians(-45)
r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
clockwise_line = line @ r

theta = math.radians(45)
r = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
counter_clockwise_line = line @ r

data = np.vstack((clockwise_line, counter_clockwise_line))
data[:, 0] += 0
data[:, 1] += 0

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(data[:, 0], data[:, 1])
fig.subplots_adjust(bottom=0.15)
fig.subplots_adjust(right=0.85)
fig.canvas.draw()
time.sleep(0.1)
plt.close(fig)

n_components = np.arange(1, 50)
params = {'n_components': n_components}
grid = GridSearchCV(GaussianMixture(covariance_type='full', random_state=42), params, cv=5)
grid.fit(data)
gmm = grid.best_estimator_
samples, _ = gmm.sample(data.shape[0])
print(grid.best_params_)

# plot samples
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(samples[:, 0], samples[:, 1])
fig.subplots_adjust(bottom=0.15)
fig.subplots_adjust(right=0.85)
fig.canvas.draw()
time.sleep(0.1)
plt.close(fig)

xx, yy = np.mgrid[-30:30:1, -30:30:1]
sh = xx.shape
zz = np.stack((xx, yy)).transpose(1, 2, 0).reshape(-1, 2)
zz = np.exp(gmm.score_samples(zz)).reshape(sh[0], sh[1]).transpose()

fig = plt.figure(figsize=(12, 10))
ax = fig.add_subplot(1, 1, 1, projection='3d')
surf = ax.plot_surface(xx, yy, zz, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax.view_init(90, 0)
fig.colorbar(surf, shrink=0.5, aspect=5)
fig.canvas.draw()
MIT
Exploring_Gaussian_Mixture_Models.ipynb
PatrickgHayes/gmm-dnn-for-interpretability
Sweet! It doesn't model the original data perfectly, but it does a pretty good job.

Let's model the domains of individual neurons in a neural network! Here is our dataset:
def create_sombreros(size=10): base_x = np.random.randint(-1, 1, size * 2) + np.random.random(size * 2) base_y = np.random.normal(0, 0.1, size * 2) base = np.hstack((base_x.reshape(-1, 1), base_y.reshape(-1, 1))) top_x = np.random.normal(0, 0.1, size) top_y = np.random.randint(1, 2, size) + np.random.random(size) top = np.hstack((top_x.reshape(-1, 1), top_y.reshape(-1, 1))) sombrero = np.vstack((base, top)) theta = math.radians(45) r = np.array([[np.cos(theta), -np.sin(theta)] ,[np.sin(theta), np.cos(theta)]]) right_sombrero = sombrero @ r right_sombrero[:, 0] += 1.5 right_sombrero[:, 1] += 1.5 theta = math.radians(-45) r = np.array([[np.cos(theta), -np.sin(theta)] ,[np.sin(theta), np.cos(theta)]]) left_sombrero = sombrero @ r left_sombrero[:, 0] -= 1.5 left_sombrero[:, 1] += 1.5 theta = math.radians(180) r = np.array([[np.cos(theta), -np.sin(theta)] ,[np.sin(theta), np.cos(theta)]]) upsidedown_sombrero = sombrero @ r upsidedown_sombrero[:, 0] += 0 upsidedown_sombrero[:, 1] += 0 data = np.vstack((left_sombrero, right_sombrero, upsidedown_sombrero)) left_labels = np.array([0] * sombrero.shape[0]) right_labels = np.array([1] * sombrero.shape[0]) upsidedown_labels = np.array([2] * sombrero.shape[0]) labels = np.concatenate((left_labels, right_labels, upsidedown_labels)) return data, labels data, labels = create_sombreros(size=100) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) for i in range(np.max(labels) + 1): idxs = np.argwhere(labels == i) ax.scatter(data[idxs, 0], data[idxs, 1]) ax.axis('equal') fig.canvas.draw() time.sleep(1) plt.close(fig) n_components = np.arange(50, 52) params = {'n_components': n_components} grid = GridSearchCV(GaussianMixture(covariance_type='full', random_state=41), params, cv=5) grid.fit(data) gmm = grid.best_estimator_ print("n_components= " + str(grid.best_params_)) samples, _ = gmm.sample(data.shape[0]) # Let's map the domain of the inputs xx, yy = np.mgrid[-3:3:0.1, -3:3:0.1] sh = xx.shape zz = np.stack((xx, yy)).transpose(1, 2, 
0).reshape(-1, 2) zz = np.exp(gmm.score_samples(zz)).reshape(sh[0], sh[1]).transpose() fig = plt.figure(figsize=(12, 10)) ax = fig.add_subplot(1, 1, 1, projection='3d') surf = ax.plot_surface(xx, yy, zz, rstride=1, cstride=1, cmap='viridis', edgecolor='none') ax.view_init(90, 0) fig.colorbar(surf, shrink=0.5, aspect=5) fig.canvas.draw() class VisualizeGaussiansNet(nn.Module): def __init__(self): super(VisualizeGaussiansNet, self).__init__() self.layer1 = nn.Linear(2, 3) self.layer2 = nn.Linear(3, 3) self.layer3 = nn.Linear(3, 3) self.input_gmm = None self.layer1_gmm = None self.layer2_gmm = None self.layer3_gmm = None return def forward(self, x): outputs = F.leaky_relu(self.layer1(x)) outputs = F.leaky_relu(self.layer2(outputs)) outputs = self.layer3(outputs) return outputs def fit_gmms(self, x, n_components=50): self.input_gmm = GaussianMixture(covariance_type='full', n_components=n_components).fit(x.detach().numpy()) outputs = self.layer1(x) self.layer1_gmm = GaussianMixture(covariance_type='full', n_components=n_components).fit(outputs.detach().numpy()) outputs = self.layer2(F.leaky_relu(outputs)) self.layer2_gmm = GaussianMixture(covariance_type='full', n_components=n_components).fit(outputs.detach().numpy()) outputs = self.layer3(F.leaky_relu(outputs)) self.layer3_gmm = GaussianMixture(covariance_type='full', n_components=n_components).fit(outputs.detach().numpy()) return def sample(self, target=None, sample_size=100, temp=1): # Layer 3 if target is None: target, _ = self.layer3_gmm.sample() else: samples, _ = self.layer3_gmm.sample(sample_size) squashed_samples = F.softmax(torch.tensor(samples).float(), dim=1).detach().numpy() negative_distances = -1 * np.linalg.norm(squashed_samples - target, axis=1) / temp probability_weights = F.softmax(torch.tensor(negative_distances).float(), dim=0).detach().numpy() idx = np.random.choice(np.arange(probability_weights.shape[0]), p=probability_weights) target = samples[idx] #Layer 2 samples, _ = 
self.layer2_gmm.sample(sample_size) squashed_samples = self.layer3(F.leaky_relu(torch.tensor(samples).float())).detach().numpy() negative_distances = -1 * np.linalg.norm(squashed_samples - target, axis=1) / temp probability_weights = F.softmax(torch.tensor(negative_distances).float(), dim=0).detach().numpy() idx = np.random.choice(np.arange(probability_weights.shape[0]), p=probability_weights) target = samples[idx] #Layer 1 samples, _ = self.layer1_gmm.sample(sample_size) squashed_samples = self.layer2(F.leaky_relu(torch.tensor(samples).float())).detach().numpy() negative_distances = -1 * np.linalg.norm(squashed_samples - target, axis=1) / temp probability_weights = F.softmax(torch.tensor(negative_distances).float(), dim=0).detach().numpy() idx = np.random.choice(np.arange(probability_weights.shape[0]), p=probability_weights) target = samples[idx] # input samples, _ = self.input_gmm.sample(sample_size) squashed_samples = self.layer1(torch.tensor(samples).float()).detach().numpy() negative_distances = -1 * np.linalg.norm(squashed_samples - target, axis=1) / temp probability_weights = F.softmax(torch.tensor(negative_distances).float(), dim=0).detach().numpy() idx = np.random.choice(np.arange(probability_weights.shape[0]), p=probability_weights) target = samples[idx] return target def plot_3d(self, outputs, title, labels=None): print(title) data = outputs fig = plt.figure() ax = fig.add_subplot(111, projection='3d') if labels is None: ax.scatter3D(data[:, 0], data[:, 1], data[:, 2]) else: for i in range(labels.max() + 1): idxs = np.argwhere(labels == i) ax.scatter3D(data[idxs, 0], data[idxs, 1], data[idxs, 2]) fig.canvas.draw() return def plot_journey(self, x, labels, epoch): print("------------ Start of Epoch " + str(epoch) + " -------------") print("Inputs") data = x.detach().numpy() fig = plt.figure() ax = fig.add_subplot(1, 1, 1) for i in range(labels.max() + 1): idxs = np.argwhere(labels == i) ax.scatter(data[idxs, 0], data[idxs, 1]) fig.canvas.draw() 
time.sleep(1) plt.close(fig) print("Inputs Sampled") samples, _ = self.input_gmm.sample(data.shape[0]) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.scatter(samples[:, 0], samples[:, 1]) fig.canvas.draw() time.sleep(1) plt.close(fig) outputs = self.layer1(x) samples, _ = self.layer1_gmm.sample(data.shape[0]) self.plot_3d(outputs.detach().numpy(), "Layer1", labels=labels) self.plot_3d(samples, "Layer1 Samples") outputs = F.leaky_relu(outputs) self.plot_3d(outputs.detach().numpy(), "Layer1 Squashed", labels=labels) outputs = self.layer2(outputs) samples, _ = self.layer2_gmm.sample(data.shape[0]) self.plot_3d(outputs.detach().numpy(), "Layer2", labels=labels) self.plot_3d(samples, "Layer2 Samples") outputs = F.leaky_relu(outputs) self.plot_3d(outputs.detach().numpy(), "Layer2 Squashed", labels=labels) outputs = self.layer3(outputs) samples, _ = self.layer3_gmm.sample(data.shape[0]) self.plot_3d(outputs.detach().numpy(), "Layer3", labels=labels) self.plot_3d(samples, "Layer3 Samples") outputs = F.softmax(outputs, dim=1) self.plot_3d(outputs.detach().numpy(), "Layer3 Squashed", labels=labels) print("------------ End of Epoch " + str(epoch) + " -------------") print() return vis_net = VisualizeGaussiansNet() criterion = nn.CrossEntropyLoss() optimizer = optim.Adam(vis_net.parameters(), lr=0.001) data, labels = create_sombreros(size=100) data, labels = Variable(torch.tensor(data).float()), Variable(torch.tensor(labels)) print_every = (5999,) loss_history = list() for epoch in range(6000): running_loss = 0.0 optimizer.zero_grad() outputs = vis_net(data) loss = criterion(outputs, labels) loss.backward() optimizer.step() running_loss = loss.item() loss_history.append(running_loss) if epoch in print_every: print("Loss: " + str(loss_history[-1])) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.plot(loss_history) fig.subplots_adjust(bottom=0.15) fig.canvas.draw() time.sleep(0.2) plt.close(fig) vis_net.fit_gmms(data, n_components=10) vis_net.plot_journey(data, 
labels.detach().numpy().reshape(-1), epoch) print("Run plt.close('all') to close all the plots") plt.close('all') samples = list() for i in range(300): samples.append(vis_net.sample(target=np.array([[0, 0, 1]]), sample_size=300, temp=0.02)) samples = np.vstack(samples) fig = plt.figure() ax = fig.add_subplot(1, 1, 1) ax.scatter(samples[:, 0], samples[:, 1]) fig.subplots_adjust(bottom=0.15) fig.subplots_adjust(right=0.85) fig.canvas.draw() time.sleep(0.2) plt.close(fig)
There were some things about the demo above that I didn't like. It masked the problem of having to collide probability distributions. It masked the problem of having to deal with overlapping convolutions. It masked the problem of having to deal with inputs that aren't drawn from a Gaussian.

Exploring Gaussian Mixture Models 2.0
# Imports assumed from earlier in the notebook (not shown in this excerpt):
import time

import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from sklearn.mixture import GaussianMixture

from sklearn.datasets import make_moons

def create_moons(n_samples=300):
    samples, labels = make_moons(n_samples=n_samples, shuffle=False, noise=0.05)
    ys = (np.random.randint(-2, 2, n_samples)
          + np.random.uniform(0, 1, n_samples)).reshape(-1, 1)
    samples = np.hstack((samples, ys))
    return samples, labels

data, labels = create_moons()
print(data.shape)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
if labels is None:
    ax.scatter3D(data[:, 0], data[:, 1], data[:, 2])
else:
    for i in range(labels.max() + 1):
        idxs = np.argwhere(labels == i)
        ax.scatter3D(data[idxs, 0], data[idxs, 1], data[idxs, 2])
fig.canvas.draw()

class CollisionNet(nn.Module):
    def __init__(self):
        super(CollisionNet, self).__init__()
        self.layer1 = [nn.Linear(1, 1) for i in range(3)]
        self.layer2 = nn.Linear(3, 3)
        self.layer3 = nn.Linear(3, 2)
        self.input_gmm = [None for i in range(3)]
        self.layer1_gmm = None
        self.layer2_gmm = None
        self.layer3_gmm = None

    def forward(self, x):
        outputs = list()
        for i in range(3):
            outputs.append(F.leaky_relu(self.layer1[i](x[:, i].view(-1, 1))))
        outputs = torch.cat(outputs, 1)
        outputs = F.leaky_relu(self.layer2(outputs))
        outputs = self.layer3(outputs)
        return outputs

    def fit_gmms(self, x, n_components=50):
        for i in range(3):
            self.input_gmm[i] = GaussianMixture(
                covariance_type='full', n_components=n_components
            ).fit(x[:, i].view(-1, 1).detach().numpy())
        outputs = list()
        for i in range(3):
            outputs.append(F.leaky_relu(self.layer1[i](x[:, i].view(-1, 1))))
        outputs = torch.cat(outputs, 1)
        self.layer1_gmm = GaussianMixture(
            covariance_type='full', n_components=n_components
        ).fit(outputs.detach().numpy())
        outputs = self.layer2(F.leaky_relu(outputs))
        self.layer2_gmm = GaussianMixture(
            covariance_type='full', n_components=n_components
        ).fit(outputs.detach().numpy())
        outputs = self.layer3(F.leaky_relu(outputs))
        self.layer3_gmm = GaussianMixture(
            covariance_type='full', n_components=n_components
        ).fit(outputs.detach().numpy())

    def sample(self, target=None, sample_size=100, temp=1):
        # Layer 3
        if target is None:
            target, _ = self.layer3_gmm.sample()
        else:
            samples, _ = self.layer3_gmm.sample(sample_size)
            squashed_samples = F.softmax(torch.tensor(samples).float(),
                                         dim=1).detach().numpy()
            negative_distances = -1 * np.linalg.norm(
                squashed_samples - target, axis=1) / temp
            probability_weights = F.softmax(
                torch.tensor(negative_distances).float(), dim=0).detach().numpy()
            idx = np.random.choice(np.arange(probability_weights.shape[0]),
                                   p=probability_weights)
            target = samples[idx]

        # Layer 2
        samples, _ = self.layer2_gmm.sample(sample_size)
        squashed_samples = self.layer3(
            F.leaky_relu(torch.tensor(samples).float())).detach().numpy()
        negative_distances = -1 * np.linalg.norm(
            squashed_samples - target, axis=1) / temp
        probability_weights = F.softmax(
            torch.tensor(negative_distances).float(), dim=0).detach().numpy()
        idx = np.random.choice(np.arange(probability_weights.shape[0]),
                               p=probability_weights)
        target = samples[idx]

        # Layer 1
        samples, _ = self.layer1_gmm.sample(sample_size)
        squashed_samples = self.layer2(
            F.leaky_relu(torch.tensor(samples).float())).detach().numpy()
        negative_distances = -1 * np.linalg.norm(
            squashed_samples - target, axis=1) / temp
        probability_weights = F.softmax(
            torch.tensor(negative_distances).float(), dim=0).detach().numpy()
        idx = np.random.choice(np.arange(probability_weights.shape[0]),
                               p=probability_weights)
        target = samples[idx]

        # Input
        targets = list()
        for i in range(3):
            samples, _ = self.input_gmm[i].sample(sample_size)
            squashed_samples = self.layer1[i](
                torch.tensor(samples).float()).detach().numpy()
            negative_distances = -1 * np.linalg.norm(
                squashed_samples - target[i].reshape(-1, 1), axis=1) / temp
            probability_weights = F.softmax(
                torch.tensor(negative_distances).float(), dim=0).detach().numpy()
            idx = np.random.choice(np.arange(probability_weights.shape[0]),
                                   p=probability_weights)
            targets.append(samples[idx])
        return np.hstack(targets)

    def plot_3d(self, outputs, title, labels=None):
        print(title)
        data = outputs
        fig = plt.figure()
        ax = fig.add_subplot(111, projection='3d')
        if labels is None:
            ax.scatter3D(data[:, 0], data[:, 1], data[:, 2])
        else:
            for i in range(labels.max() + 1):
                idxs = np.argwhere(labels == i)
                ax.scatter3D(data[idxs, 0], data[idxs, 1], data[idxs, 2])
        fig.canvas.draw()

    def plot_journey(self, x, labels, epoch):
        print("------------ Start of Epoch " + str(epoch) + " -------------")
        self.plot_3d(x.detach().numpy(), "Inputs", labels=labels)
        samples = list()
        for i in range(3):
            sample, _ = self.input_gmm[i].sample(data.shape[0])
            samples.append(sample)
        samples = np.hstack(samples)
        self.plot_3d(samples, "Input Samples")
        outputs = list()
        for i in range(3):
            outputs.append(F.leaky_relu(self.layer1[i](x[:, i].view(-1, 1))))
        outputs = torch.cat(outputs, 1)
        samples, _ = self.layer1_gmm.sample(data.shape[0])
        self.plot_3d(outputs.detach().numpy(), "Layer1", labels=labels)
        self.plot_3d(samples, "Layer1 Samples")
        outputs = F.leaky_relu(outputs)
        self.plot_3d(outputs.detach().numpy(), "Layer1 Squashed", labels=labels)
        outputs = self.layer2(outputs)
        samples, _ = self.layer2_gmm.sample(data.shape[0])
        self.plot_3d(outputs.detach().numpy(), "Layer2", labels=labels)
        self.plot_3d(samples, "Layer2 Samples")
        outputs = F.leaky_relu(outputs)
        self.plot_3d(outputs.detach().numpy(), "Layer2 Squashed", labels=labels)
        outputs = self.layer3(outputs)
        outputs = torch.cat((outputs,
                             torch.tensor(np.zeros((outputs.shape[0], 1))).float()),
                            dim=1)
        samples, _ = self.layer3_gmm.sample(data.shape[0])
        samples = np.hstack((samples, np.zeros((samples.shape[0], 1))))
        self.plot_3d(outputs.detach().numpy(), "Layer3", labels=labels)
        self.plot_3d(samples, "Layer3 Samples")
        outputs = F.softmax(outputs, dim=1)
        self.plot_3d(outputs.detach().numpy(), "Layer3 Squashed", labels=labels)
        print("------------ End of Epoch " + str(epoch) + " -------------")
        print()

collision_net = CollisionNet()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(collision_net.parameters(), lr=0.001)

data, labels = create_moons(n_samples=500)
data, labels = Variable(torch.tensor(data).float()), Variable(torch.tensor(labels))

print_every = (2, 100, 5999,)
loss_history = list()
for epoch in range(6000):
    optimizer.zero_grad()
    outputs = collision_net(data)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    running_loss = loss.item()
    loss_history.append(running_loss)
    if epoch in print_every:
        print("Loss: " + str(loss_history[-1]))
        fig = plt.figure()
        ax = fig.add_subplot(1, 1, 1)
        ax.plot(loss_history)
        fig.subplots_adjust(bottom=0.15)
        fig.canvas.draw()
        time.sleep(0.2)
        plt.close(fig)
        collision_net.fit_gmms(data, n_components=100)
        collision_net.plot_journey(data, labels.detach().numpy().reshape(-1), epoch)

print("Run plt.close('all') to close all the plots")
plt.close('all')

samples = list()
for i in range(300):
    samples.append(collision_net.sample(sample_size=300, temp=0.002))
samples = np.vstack(samples)
outputs = collision_net(torch.tensor(samples).float()).detach().numpy()
labels = np.argmax(outputs, axis=1)
data = samples

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
if labels is None:
    ax.scatter3D(data[:, 0], data[:, 1], data[:, 2])
else:
    for i in range(labels.max() + 1):
        idxs = np.argwhere(labels == i)
        ax.scatter3D(data[idxs, 0], data[idxs, 1], data[idxs, 2])
data_, _ = create_moons(n_samples=300)
ax.scatter3D(data_[:, 0], data_[:, 1], data_[:, 2])
fig.canvas.draw()
MIT
Exploring_Gaussian_Mixture_Models.ipynb
PatrickgHayes/gmm-dnn-for-interpretability
# Generalizing Failure Circumstances

One central question in debugging is: _Does this bug occur in other situations, too?_ In this chapter, we present a technique that _generalizes_ the circumstances under which a failure occurs. The DDSET algorithm takes a failure-inducing input and breaks it into individual elements. For each element, it tries to find out whether it can be replaced by others in the same category, and if so, it _generalizes_ the concrete element to that very category. The result is a _pattern_ that characterizes the failure condition: "The failure occurs for all inputs of the form `(<expr> * <expr>)`."
from bookutils import YouTubeVideo YouTubeVideo("PV22XtIQU1s")
MIT
docs/notebooks/DDSetDebugger.ipynb
niMgnoeSeeL/debuggingbook
**Prerequisites**

* You should have read the [chapter on _delta debugging_](DeltaDebugger.ipynb).
import bookutils import DeltaDebugger
## Synopsis

To [use the code provided in this chapter](Importing.ipynb), write

```python
>>> from debuggingbook.DDSetDebugger import <identifier>
```

and then make use of the following features.

This chapter provides a class `DDSetDebugger`, implementing the DDSET algorithm for generalizing failure-inducing inputs. The `DDSetDebugger` is used as follows:

```python
with DDSetDebugger(grammar) as dd:
    function(args...)
dd
```

Here, `function(args...)` is a failing function call (= raises an exception) that takes at least one string argument; `grammar` is an [input grammar in fuzzingbook format](https://www.fuzzingbook.org/html/Grammars.html) that matches the format of this argument.

The result is a call of `function()` with an _abstract failure-inducing input_ – a variant of the concrete input in which parts are replaced by placeholders in the form `<name>`, where `<name>` is a nonterminal in the grammar. The failure has been verified to occur for a number of instantiations of `<name>`.

Here is an example of how `DDSetDebugger` works. The concrete failing input `<foo>"bar</foo>` is generalized to an _abstract failure-inducing input_:

```python
>>> with DDSetDebugger(SIMPLE_HTML_GRAMMAR) as dd:
>>>     remove_html_markup('<foo>"bar</foo>')
>>> dd
remove_html_markup(s='<opening-tag>"bar<closing-tag>')
```

The abstract input tells us that the failure occurs for whatever opening and closing HTML tags, as long as there is a double quote between them.

A programmatic interface is available as well. `generalize()` returns a mapping of argument names to (generalized) values:

```python
>>> dd.generalize()
{'s': '<opening-tag>"bar<closing-tag>'}
```

Using `fuzz()`, the abstract input can be instantiated to further concrete inputs, all set to produce the failure again:

```python
>>> for i in range(10):
>>>     print(dd.fuzz())
remove_html_markup(s='"1')
remove_html_markup(s='"c*C')
remove_html_markup(s='"')
remove_html_markup(s='")')
remove_html_markup(s='"')
remove_html_markup(s='"')
remove_html_markup(s='"\t7')
remove_html_markup(s='"')
remove_html_markup(s='"2')
remove_html_markup(s='"\r~\t\r')
```

`DDSetDebugger` can be customized by passing a subclass of `TreeGeneralizer`, which does the gist of the work; for details, see its constructor.

The full class hierarchy is shown below.

![](PICS/DDSetDebugger-synopsis-1.svg)

## A Failing Program

As with previous chapters, we use `remove_html_markup()` as our ongoing example. This function is set to remove HTML markup tags (like `<em>`) from a given string `s`, returning the plain text only. We use the version from [the chapter on assertions](Assertions.ipynb), using an assertion as a postcondition checker.
def remove_html_markup(s): # type: ignore tag = False quote = False out = "" for c in s: if c == '<' and not quote: tag = True elif c == '>' and not quote: tag = False elif c == '"' or c == "'" and tag: quote = not quote elif not tag: out = out + c # postcondition assert '<' not in out and '>' not in out return out
For most inputs, `remove_html_markup()` works just as expected:
remove_html_markup("Be <em>quiet</em>, he said")
There are inputs, however, for which it fails:
BAD_INPUT = '<foo>"bar</foo>' from ExpectError import ExpectError with ExpectError(AssertionError): remove_html_markup(BAD_INPUT) from bookutils import quiz
In contrast to the other chapters, our aim now is not to immediately go and debug `remove_html_markup()`. Instead, we focus on another important question:

> Under which conditions precisely does `remove_html_markup()` fail?

This question can be generalized to

> What is the set of inputs for which `remove_html_markup()` fails?

Our plan for this is to _generalize_ concrete inputs (such as `BAD_INPUT`) into *abstract failure-inducing inputs*. These are patterns formed from a concrete input, but in which specific _placeholders_ indicate sets of inputs that are permitted. In the abstract failure-inducing input

```html
<opening-tag>"bar<closing-tag>
```

for instance, `<opening-tag>` and `<closing-tag>` are placeholders for opening and closing HTML tags, respectively. The pattern indicates that any opening HTML tag and any closing HTML tag can be present in the input, as long as the enclosed text reads `"bar`.

Given a concrete failure-inducing input, our aim is to _generalize_ it as much as possible to such an abstract failure-inducing input. The resulting pattern should then

* capture the _circumstances_ under which the program fails;
* allow for _test generation_ by instantiating the placeholders;
* help ensure our fix is as _general as possible_.
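To make the notion tangible before building the real machinery, an abstract failure-inducing input can be modeled as a token list in which some tokens are placeholders; instantiating it replaces each placeholder with a concrete element from its category. A minimal sketch — the placeholder names match the pattern above, but the instantiation lists are made up for illustration:

```python
import random

# An abstract input: concrete strings plus nonterminal placeholders.
ABSTRACT_INPUT = ['<opening-tag>', '"bar', '<closing-tag>']

# Illustrative instantiations for each placeholder (not a real grammar).
INSTANTIATIONS = {
    '<opening-tag>': ['<foo>', '<em>', '<a>'],
    '<closing-tag>': ['</foo>', '</em>', '</a>'],
}

def instantiate(abstract_input, rng=random):
    """Replace each placeholder by a random concrete element."""
    return ''.join(rng.choice(INSTANTIATIONS[token])
                   if token in INSTANTIATIONS else token
                   for token in abstract_input)

print(instantiate(ABSTRACT_INPUT))  # e.g. '<em>"bar</a>'
```

Every such instantiation keeps the `"bar` core fixed while varying the surrounding tags — exactly the set of inputs the pattern denotes.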
quiz("If `s = '<foo>\"bar</foo>'` (i.e., `BAD_INPUT`), " "what is the value of `out` such that the assertion fails?", [ '`bar`', '`bar</foo>`', '`"bar</foo>`', '`<foo>"bar</foo>`', ], '9999999 // 4999999')
## Grammars

To determine abstract failure-inducing inputs, we need means to determine and characterize _sets of inputs_ – known in computer science as _languages_. To formally describe languages, the field of *formal languages* has devised a number of *language specifications*. *Regular expressions* represent the simplest class of these: the regular expression `[a-z]*`, for instance, denotes a (possibly empty) sequence of lowercase letters. *Automata theory* connects these languages to automata that accept them; *finite state machines*, for instance, can be used to specify the language of regular expressions.

Regular expressions are great for not-too-complex input formats, and the associated finite state machines have many properties that make them great for reasoning. To specify more complex inputs, though, they quickly encounter limitations. At the other end of the language spectrum, we have *universal grammars* that denote the languages accepted by *Turing machines*. A Turing machine can compute anything that can be computed; and with Python being Turing-complete, this means that we can also use a Python program $p$ to specify or even enumerate legal inputs. But then, computer science theory also tells us that each such program has to be written specifically for the input at hand, which is not the level of automation we want.

The middle ground between regular expressions and Turing machines is covered by *grammars*. Grammars are among the most popular (and best understood) formalisms to formally specify input languages. Using a grammar, one can express a wide range of properties of an input language. Grammars are particularly great for expressing the *syntactical structure* of an input, and are the formalism of choice for nested or recursive inputs. The grammars we use are so-called *context-free grammars*, one of the easiest and most popular grammar formalisms.
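The difference in expressive power is easy to demonstrate: a regular expression handles flat sequences such as `[a-z]*`, but recognizing balanced nesting needs memory beyond a finite state machine. A quick self-contained sketch (not part of the chapter's code):

```python
import re

# Flat sequences: a regular expression suffices.
assert re.fullmatch(r'[a-z]*', 'hello') is not None
assert re.fullmatch(r'[a-z]*', 'Hello') is None  # uppercase not in the language

def balanced(s: str) -> bool:
    """Check balanced parentheses -- a context-free, non-regular property.
    The counter plays the role of a pushdown stack."""
    depth = 0
    for c in s:
        if c == '(':
            depth += 1
        elif c == ')':
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

assert balanced('(()())')
assert not balanced('(()')
```

Context-free grammars capture exactly this kind of nesting naturally, which is why they are the right tool for the structured inputs in this chapter.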
A grammar is defined as a mapping of _nonterminal_ symbols (written in angle brackets, such as `<symbol>`) to lists of alternative _expansions_, which are strings containing _terminal_ symbols and possibly more _nonterminal_ symbols. To make the writing of grammars as simple as possible, we adopt the [fuzzingbook](https://www.fuzzingbook.org/) format that is based on strings and lists.
import fuzzingbook
Fuzzingbook grammars take the format of a _mapping_ between symbol names and expansions, where expansions are _lists_ of alternatives.
# ignore from typing import Any, Callable, Optional, Type, Tuple from typing import Dict, Union, List, cast, Generator Grammar = Dict[str, # A grammar maps strings... List[ Union[str, # to list of strings... Tuple[str, Dict[str, Any]] # or to pairs of strings and attributes. ] ] ]
A one-rule grammar for digits thus takes the form
DIGIT_GRAMMAR: Grammar = { "<start>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] }
which means that the `` symbol can be expanded into any of the digits listed. A full grammar for arithmetic expressions looks like this:
EXPR_GRAMMAR: Grammar = { "<start>": ["<expr>"], "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"], "<term>": ["<factor> * <term>", "<factor> / <term>", "<factor>"], "<factor>": ["+<factor>", "-<factor>", "(<expr>)", "<integer>.<integer>", "<integer>"], "<integer>": ["<digit><integer>", "<digit>"], "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"] }
From such a grammar, one can easily generate inputs that conform to the grammar.
from fuzzingbook.GrammarFuzzer import GrammarFuzzer simple_expr_fuzzer = GrammarFuzzer(EXPR_GRAMMAR) for i in range(10): fuzz_expr = simple_expr_fuzzer.fuzz() print(fuzz_expr)
3.8 + --62.912 - ++4 - +5 * 3.0 * 4 7 * (75.5 - -6 + 5 - 4) + -(8 - 1) / 5 * 2 (-(9) * +6 + 9 / 3 * 8 - 9 * 8 / 7) / -+-65 (9 + 8) * 2 * (6 + 6 + 9) * 0 * 1.9 * 0 (1 * 7 - 9 + 5) * 5 / 0 * 5 + 7 * 5 * 7 -(6 / 9 - 5 - 3 - 1) - -1 / +1 + (9) / (8) * 6 (+-(0 - (1) * 7 / 3)) / ((1 * 3 + 8) + 9 - +1 / --0) - 5 * (-+939.491) +2.9 * 0 / 501.19814 / --+--(6.05002) +-8.8 / (1) * -+1 + -8 + 9 - 3 / 8 * 6 + 4 * 3 * 5 (+(8 / 9 - 1 - 7)) + ---06.30 / +4.39
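Under the hood, such generation is just repeated rule expansion. The following is a minimal, depth-limited expander sketched from scratch — not the fuzzingbook implementation — with `EXPR_GRAMMAR` restated so it runs standalone; once the depth budget is exhausted, it closes off the derivation via the cheapest alternative:

```python
import random
import re

# EXPR_GRAMMAR restated so this sketch runs on its own.
EXPR_GRAMMAR = {
    "<start>": ["<expr>"],
    "<expr>": ["<term> + <expr>", "<term> - <expr>", "<term>"],
    "<term>": ["<factor> * <term>", "<factor> / <term>", "<factor>"],
    "<factor>": ["+<factor>", "-<factor>", "(<expr>)",
                 "<integer>.<integer>", "<integer>"],
    "<integer>": ["<digit><integer>", "<digit>"],
    "<digit>": ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
}

NONTERMINAL = re.compile(r'<[^<> ]+>')

def min_cost(symbol, seen=frozenset()):
    """Fewest expansion steps needed to turn `symbol` into terminals."""
    if symbol in seen:
        return float('inf')
    seen = seen | {symbol}
    return min(1 + sum(min_cost(nt, seen) for nt in NONTERMINAL.findall(alt))
               for alt in EXPR_GRAMMAR[symbol])

def expand(symbol, depth=8):
    """Randomly expand `symbol`; close off cheaply once `depth` is exhausted."""
    alts = EXPR_GRAMMAR[symbol]
    if depth <= 0:
        alts = [min(alts, key=lambda a: sum(min_cost(nt)
                                            for nt in NONTERMINAL.findall(a)))]
    chosen = random.choice(alts)
    return NONTERMINAL.sub(lambda m: expand(m.group(0), depth - 1), chosen)

print(expand("<expr>"))  # e.g. '4 + 3 * 7'
```

The real `GrammarFuzzer` is considerably smarter about choosing expansions, but the shape of the computation is the same.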
Nonterminals as found in the grammar make natural _placeholders_ in abstract failure-inducing inputs. If we know, for instance, that it is not just the concrete failure-inducing input

```python
(2 * 3)
```

but the abstract failure-inducing input

```html
(<expr> * <expr>)
```

that causes the failure, we immediately see that the error is due to the multiplication operator rather than its operands.

Coming back to our `remove_html_markup()` example, let us create a simple grammar for HTML expressions. A `<html>` element is either plain text or tagged text.
SIMPLE_HTML_GRAMMAR: Grammar = { "<start>": ["<html>"], "<html>": ["<plain-text>", "<tagged-text>"], }
Plain text is a simple (possibly empty) sequence of letters, digits, punctuation, and whitespace. (Note how `<plain-text>` is either empty or some character followed by more plain text.) The characters `<` and `>` are not allowed, though.
import string SIMPLE_HTML_GRAMMAR.update({ "<plain-text>": ["", "<plain-char><plain-text>"], "<plain-char>": ["<letter>", "<digit>", "<other>", "<whitespace>"], "<letter>": list(string.ascii_letters), "<digit>": list(string.digits), "<other>": list(string.punctuation.replace('<', '').replace('>', '')), "<whitespace>": list(string.whitespace) })
Tagged text is a bit more complicated. We have opening tags such as `<em>`, followed by some more HTML material, which is then closed by a closing tag such as `</em>`. (We do not insist that the two tags match.) A self-closing tag has the form `<em/>`. For compatibility reasons, we also allow just opening tags without closing tags, as in `<br>`.
SIMPLE_HTML_GRAMMAR.update({ "<tagged-text>": ["<opening-tag><html><closing-tag>", "<self-closing-tag>", "<opening-tag>"], })
Since the characters `<` and `>` are already reserved for denoting nonterminal symbols, we use the special nonterminal symbols `<lt>` and `<gt>` that expand into `<` and `>`, respectively.
SIMPLE_HTML_GRAMMAR.update({ "<opening-tag>": ["<lt><id><gt>", "<lt><id><attrs><gt>"], "<lt>": ["<"], "<gt>": [">"], "<id>": ["<letter>", "<id><letter>", "<id><digit>"], "<closing-tag>": ["<lt>/<id><gt>"], "<self-closing-tag>": ["<lt><id><attrs>/<gt>"], })
Finally, HTML tags can have attributes, which are enclosed in quotes.
SIMPLE_HTML_GRAMMAR.update({ "<attrs>": ["<attr>", "<attr><attrs>" ], "<attr>": [" <id>='<plain-text>'", ' <id>="<plain-text>"'], })
Again, we can generate inputs from the grammar.
simple_html_fuzzer = GrammarFuzzer(SIMPLE_HTML_GRAMMAR) for i in range(10): fuzz_html = simple_html_fuzzer.fuzz() print(repr(fuzz_html))
'<T3 xG="">' '<N9cd U=\'\' y=\'l1\' v0="" tb4ya="" UbD=\'\'>9</R>' '\x0b' ' ea\\\\' '&7' "<c1 o2='' x9661lQo64T=''/>" '<S4>' '<GMS></wAu>' '<j CI=\'\' T98sJ="" DR4=\'\'/>' '<FQc90 Wt=""/>'
Such inputs, of course, are great for systematic testing. Our sister book, [the fuzzing book](https://www.fuzzingbook.org/), covers these and more.

## Derivation Trees

To produce inputs from a grammar, the fuzzingbook `GrammarFuzzer` makes use of a structure called a *derivation tree* (also known as *syntax tree*). A derivation tree encodes the individual expansion steps undertaken while producing the output.
DerivationTree = Tuple[str, Optional[List[Any]]]
Let us illustrate derivation trees by example, using the last HTML output we produced.
fuzz_html
The `GrammarFuzzer` attribute `derivation_tree` holds the last tree used to produced this input. We can visualize the tree as follows:
# ignore from graphviz import Digraph # ignore def display_tree(tree: DerivationTree) -> Digraph: def graph_attr(dot: Digraph) -> None: dot.attr('node', shape='box', color='white', margin='0.0,0.0') dot.attr('node', fontname="'Fira Mono', 'Source Code Pro', 'Courier', monospace") def node_attr(dot: Digraph, nid: str, symbol: str, ann: str) -> None: fuzzingbook.GrammarFuzzer.default_node_attr(dot, nid, symbol, ann) if symbol.startswith('<'): dot.node(repr(nid), fontcolor='#0060a0') else: dot.node(repr(nid), fontcolor='#00a060') dot.node(repr(nid), scale='2') return fuzzingbook.GrammarFuzzer.display_tree(tree, node_attr=node_attr, graph_attr=graph_attr) display_tree(simple_html_fuzzer.derivation_tree)
From top to bottom, we see that the input was constructed from a `<start>` symbol, which then expanded into `<html>`, which then expanded into HTML text, and so on. Multiple children in a tree stand for a concatenation of individual symbols.

Internally, these trees come as pairs `(symbol, children)`, where `symbol` is the name of a node (say, `<html>`), and `children` is a (possibly empty) list of subtrees. Here are the topmost nodes of the above tree:
import pprint pp = pprint.PrettyPrinter(depth=7) pp.pprint(simple_html_fuzzer.derivation_tree)
('<start>', [('<html>', [('<tagged-text>', [('<self-closing-tag>', [...])])])])
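The pair representation is easy to build and flatten by hand. Here is a minimal sketch with its own `tree_to_string()` — a simplified stand-in for the fuzzingbook helper used below — on a hand-built tree whose structure is abbreviated for brevity:

```python
def tree_to_string(tree) -> str:
    """Concatenate the terminal symbols at the leaves, left to right.
    (Simplified stand-in for the fuzzingbook helper.)"""
    symbol, children = tree
    if not children:  # a leaf: terminals contribute their own text
        return symbol if not symbol.startswith('<') else ''
    return ''.join(tree_to_string(c) for c in children)

# "2 + 3" as a hand-built derivation tree (expansion chain abbreviated):
tree = ('<expr>',
        [('<term>', [('<digit>', [('2', [])])]),
         (' + ', []),
         ('<expr>', [('<term>', [('<digit>', [('3', [])])])])])

assert tree_to_string(tree) == '2 + 3'
```

Each inner node records which nonterminal was expanded; only the leaves carry the characters of the final string.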
To produce abstract failure-inducing patterns, we will work on this very structure. The idea is to

1. systematically replace subtrees by other, generated, compatible subtrees (e.g. replace one `<id>` subtree in the concrete input by some other generated `<id>` subtree);
2. see whether these subtrees also result in failures; and
3. if they do, use the nonterminal (`<id>`) as a placeholder in the pattern.

This will involve some subtree manipulation, construction, and finally testing. First of all, though, we need to be able to turn an _existing input_ into a derivation tree.

## Parsing

The activity of creating a structure out of an unstructured input is called _parsing_. Generally speaking, a _parser_ uses a _grammar_ to create a _derivation tree_ (also called *parse tree* in parsing contexts) from a string input. Again, there's a whole body of theory (and practice!) around constructing parsers. We make our life simple by using an existing parser (again, from [the fuzzing book](https://www.fuzzingbook.org/Parser.html)), which does just what we want. The `EarleyParser` is instantiated with a grammar such as `SIMPLE_HTML_GRAMMAR`:
from fuzzingbook.Parser import Parser, EarleyParser # minor dependency simple_html_parser = EarleyParser(SIMPLE_HTML_GRAMMAR)
Its method `parse()` returns an iterator over multiple possible derivation trees. (There can be multiple trees because the grammar could be ambiguous.) We are only interested in the first such tree. Let us parse `BAD_INPUT` and inspect the resulting ~~parse tree~~ ~~syntax tree~~ derivation tree:
bad_input_tree = list(simple_html_parser.parse(BAD_INPUT))[0] display_tree(bad_input_tree)
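The remark about multiple trees is worth a small illustration: for an ambiguous grammar, the number of distinct derivation trees for one and the same string grows rapidly. A counting sketch for the hypothetical grammar `S -> S S | 'a'` (not one of this chapter's grammars):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_parse_trees(n: int) -> int:
    """Distinct parse trees of 'a' * n under the ambiguous grammar S -> S S | 'a'."""
    if n == 1:
        return 1   # the single rule S -> 'a'
    # Split the string between the two S children at every position.
    return sum(num_parse_trees(i) * num_parse_trees(n - i)
               for i in range(1, n))

print([num_parse_trees(n) for n in range(1, 7)])  # [1, 1, 2, 5, 14, 42]
```

These are the Catalan numbers — which is why taking just the first tree from the iterator, as we do here, is the pragmatic choice.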
This derivation tree has the same structure as the one created from our `GrammarFuzzer` above. We see how the `<tagged-text>` is composed of three elements:

1. an `<opening-tag>` (`<foo>`);
2. a `<html>` element which becomes a `<plain-text>` (`"bar`); and
3. a `<closing-tag>` (`</foo>`).

We can easily turn the tree back into a string. The method `tree_to_string()` traverses the tree left-to-right and joins all terminal symbols.
from fuzzingbook.GrammarFuzzer import tree_to_string, all_terminals tree_to_string(bad_input_tree) assert tree_to_string(bad_input_tree) == BAD_INPUT
With this, we can now

* parse an input into a tree structure;
* (re-)create parts of the tree structure; and
* turn the tree back into an input string.

## Mutating the Tree

We introduce a class `TreeMutator` whose job is to mutate a tree. Its constructor takes a grammar and a tree.
from fuzzingbook.Grammars import is_valid_grammar class TreeMutator: """Grammar-based mutations of derivation trees.""" def __init__(self, grammar: Grammar, tree: DerivationTree, fuzzer: Optional[GrammarFuzzer] = None, log: Union[bool, int] = False): """ Constructor. `grammar` is the underlying grammar; `tree` is the tree to work on. `fuzzer` is the grammar fuzzer to use (default: `GrammarFuzzer`) """ assert is_valid_grammar(grammar) self.grammar = grammar self.tree = tree self.log = log if fuzzer is None: fuzzer = GrammarFuzzer(grammar) self.fuzzer = fuzzer
### Referencing Subtrees

To reference individual elements in the tree, we introduce the concept of a _path_. A path is a list of numbers indicating the children (starting with 0) we should follow. A path `[0, 0, 0, ..., 0]` stands for following the leftmost child at every level of the tree.
TreePath = List[int]
The method `get_subtree()` returns the subtree for a given path.
class TreeMutator(TreeMutator): def get_subtree(self, path: TreePath, tree: Optional[DerivationTree] = None) -> DerivationTree: """Access a subtree based on `path` (a list of children numbers)""" if tree is None: tree = self.tree symbol, children = tree if not path or children is None: return tree return self.get_subtree(path[1:], children[path[0]])
Here's `get_subtree()` in action. We instantiate a `TreeMutator` on the `BAD_INPUT` tree as shown above and return the element at the path `[0, 0, 1, 0]` – i.e. follow the leftmost edge twice, then the second child, then the leftmost edge again. This gives us the `<plain-text>` subtree representing the string `"bar`:
def bad_input_tree_mutator() -> TreeMutator: return TreeMutator(SIMPLE_HTML_GRAMMAR, bad_input_tree, log=2) plain_text_subtree = bad_input_tree_mutator().get_subtree([0, 0, 1, 0]) pp.pprint(plain_text_subtree) tree_to_string(plain_text_subtree) # ignore def primes_generator() -> Generator[int, None, None]: # Adapted from https://www.python.org/ftp/python/doc/nluug-paper.ps primes = [2] yield 2 i = 3 while True: for p in primes: if i % p == 0 or p * p > i: break if i % p != 0: primes.append(i) yield i i += 2 # ignore prime_numbers = primes_generator() quiz("In `bad_input_tree`, what is " " the subtree at the path `[0, 0, 2, 1]` as string?", [ f"`{tree_to_string(bad_input_tree_mutator().get_subtree([0, 0, 2, 0]))}`", f"`{tree_to_string(bad_input_tree_mutator().get_subtree([0, 0, 2, 1]))}`", f"`{tree_to_string(bad_input_tree_mutator().get_subtree([0, 0, 2]))}`", f"`{tree_to_string(bad_input_tree_mutator().get_subtree([0, 0, 0]))}`", ], 'next(prime_numbers)', globals() )
### Creating new Subtrees

The method `new_tree()` creates a new subtree for the given `start_symbol` according to the rules of the grammar. It invokes `expand_tree()` on the given `GrammarFuzzer` – a method that takes an initial (empty) tree and expands it until no more expansions are left.
class TreeMutator(TreeMutator): def new_tree(self, start_symbol: str) -> DerivationTree: """Create a new subtree for <start_symbol>.""" if self.log >= 2: print(f"Creating new tree for {start_symbol}") tree = (start_symbol, None) return self.fuzzer.expand_tree(tree)
Here is an example of `new_tree()`:
plain_text_tree = cast(TreeMutator, bad_input_tree_mutator()).new_tree('<plain-text>') display_tree(plain_text_tree) tree_to_string(plain_text_tree)
### Mutating the Tree

With us now being able to

* access a particular path in the tree (`get_subtree()`) and
* create a new subtree (`new_tree()`),

we can mutate the tree at a given path. This is the task of `mutate()`.
class TreeMutator(TreeMutator): def mutate(self, path: TreePath, tree: Optional[DerivationTree] = None) -> DerivationTree: """Return a new tree mutated at `path`""" if tree is None: tree = self.tree assert tree is not None symbol, children = tree if not path or children is None: return self.new_tree(symbol) head = path[0] new_children = (children[:head] + [self.mutate(path[1:], children[head])] + children[head + 1:]) return symbol, new_children
Here is an example of `mutate()` in action. We mutate `bad_input_tree` at the path `[0, 0, 1, 0]` – that is, the `<plain-text>` subtree:
mutated_tree = cast(TreeMutator, bad_input_tree_mutator()).mutate([0, 0, 1, 0]) display_tree(mutated_tree)
Creating new tree for <plain-text>
We see that the `` subtree is now different, which also becomes evident in the string representation.
tree_to_string(mutated_tree)
## Generalizing Trees

Now for the main part – finding out which parts of a tree can be generalized. Our idea is to _test_ a finite number of mutations of a subtree (say, 10). If all of them fail as well, then we assume we can generalize the subtree to a placeholder.

We introduce a class `TreeGeneralizer` for this purpose. On top of the `grammar` and `tree` already used for the `TreeMutator` constructor, the `TreeGeneralizer` also takes a `test` function.
class TreeGeneralizer(TreeMutator): """Determine which parts of a derivation tree can be generalized.""" def __init__(self, grammar: Grammar, tree: DerivationTree, test: Callable, max_tries_for_generalization: int = 10, **kwargs: Any) -> None: """ Constructor. `grammar` and `tree` are as in `TreeMutator`. `test` is a function taking a string that either * raises an exception, indicating test failure; * or not, indicating test success. `max_tries_for_generalization` is the number of times an instantiation has to fail before it is generalized. """ super().__init__(grammar, tree, **kwargs) self.test = test self.max_tries_for_generalization = max_tries_for_generalization
The `test` function is used in `test_tree()`, returning `False` if the test fails (raising an exception), and `True` if the test passes (no exception).
class TreeGeneralizer(TreeGeneralizer): def test_tree(self, tree: DerivationTree) -> bool: """Return True if testing `tree` passes, else False""" s = tree_to_string(tree) if self.log: print(f"Testing {repr(s)}...", end="") try: self.test(s) except Exception as exc: if self.log: print(f"FAIL ({type(exc).__name__})") ret = False else: if self.log: print(f"PASS") ret = True return ret
### Testing for Generalization

The `can_generalize()` method brings the above methods together. It creates a number of tree mutations at the given path and returns `True` if all of them produce a failure. (Note that this is not as sophisticated as our [delta debugger](DeltaDebugger.ipynb) implementation, which also checks that the _same_ error occurs.)
class TreeGeneralizer(TreeGeneralizer): def can_generalize(self, path: TreePath, tree: Optional[DerivationTree] = None) -> bool: """Return True if the subtree at `path` can be generalized.""" for i in range(self.max_tries_for_generalization): mutated_tree = self.mutate(path, tree) if self.test_tree(mutated_tree): # Failure no longer occurs; cannot abstract return False return True
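The note above about checking for the _same_ error can be sketched as a stricter test predicate that only accepts failures of an expected exception type. The helper and the toy test function below are hypothetical, not part of the chapter's code:

```python
def same_failure(test, s: str, expected_exc_type) -> bool:
    """Return True iff `test(s)` raises an exception of `expected_exc_type`.

    A stricter variant of `test_tree()`: generalize only if the *same*
    kind of failure occurs (hypothetical helper for illustration).
    """
    try:
        test(s)
    except expected_exc_type:
        return True
    except Exception:
        return False   # a different error: do not count it
    return False       # no error at all

def remove_quotes(s: str) -> str:   # stand-in test function for the demo
    assert '"' not in s
    return s

assert same_failure(remove_quotes, '"x', AssertionError)
assert not same_failure(remove_quotes, 'x', AssertionError)
```

Plugging such a predicate into `can_generalize()` would prevent a subtree from being generalized just because its mutations trigger some *other* bug.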
Let us put `TreeGeneralizer` into action. We can directly use `remove_html_markup()` as test function.
def bad_input_tree_generalizer(**kwargs: Any) -> TreeGeneralizer: return TreeGeneralizer(SIMPLE_HTML_GRAMMAR, bad_input_tree, remove_html_markup, **kwargs)
On our `BAD_INPUT` (and its tree), can we generalize the root `<html>` node? In other words, does the failure occur for all possible `<html>` inputs?
bad_input_tree_generalizer(log=True).can_generalize([0])
Testing "<l35Gmq W2G571=''>"...PASS
The answer is no. The very first alternative already passes the test; hence, no generalization. How about the `<plain-text>` part in the middle? Can we generalize this?
bad_input_tree_generalizer(log=True).can_generalize([0, 0, 1, 0])
Testing '<foo>8</foo>'...PASS
The answer is no – just as above. How about the closing tag? Can we generalize this one?
bad_input_tree_generalizer(log=True).can_generalize([0, 0, 2])
Testing '<foo>"bar</e7N>'...FAIL (AssertionError) Testing '<foo>"bar</q37A>'...FAIL (AssertionError) Testing '<foo>"bar</W>'...FAIL (AssertionError) Testing '<foo>"bar</z93Q5>'...FAIL (AssertionError) Testing '<foo>"bar</WR>'...FAIL (AssertionError) Testing '<foo>"bar</m>'...FAIL (AssertionError) Testing '<foo>"bar</Uy443wt1h7>'...FAIL (AssertionError) Testing '<foo>"bar</fon2>'...FAIL (AssertionError) Testing '<foo>"bar</M>'...FAIL (AssertionError) Testing '<foo>"bar</g9>'...FAIL (AssertionError)
Yes, we can! All alternate instantiations of `<closing-tag>` result in a failure.
quiz("Is this also true for `<opening-tag>`?", [ "Yes", "No" ], '("Yes" == "Yes") + ("No" == "No")')
Note that the above does not hold for `<opening-tag>`. If the attribute value contains a quote character, the quoted region extends to the end of the input. This is another error, but not one caught by our assertion; hence, the input will be flagged as passing:
BAD_ATTR_INPUT = '<foo attr="\'">bar</foo>' remove_html_markup(BAD_ATTR_INPUT)
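Why exactly does `BAD_ATTR_INPUT` pass the assertion? The sketch below reconstructs the gist of `remove_html_markup()` from the assertion visible in later tracebacks; treat the exact implementation as an assumption. The condition `c == '"' or c == "'" and tag` binds as `c == '"' or (c == "'" and tag)`, so a double quote toggles quote mode even in plain text:

```python
def remove_html_markup_sketch(s):
    """Assumed reconstruction of the buggy remove_html_markup()."""
    tag = False
    quote = False
    out = ""
    for c in s:
        if c == '<' and not quote:
            tag = True
        elif c == '>' and not quote:
            tag = False
        elif c == '"' or c == "'" and tag:
            # Binds as: c == '"' or (c == "'" and tag) --
            # a double quote toggles quote mode even in plain text
            quote = not quote
        elif not tag:
            out = out + c
    return out

print(remove_html_markup_sketch('<foo>"bar</foo>'))
# → bar</foo>  (markup survives; the assertion would fail)
print(repr(remove_html_markup_sketch('<foo attr="\'">bar</foo>')))
# → ''  (content is lost, but no markup remains; the assertion passes)
```

The quote in the attribute value flips the quote state, so the rest of the input is treated as quoted and silently dropped — wrong output, but invisible to the assertion.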
The effect of this is that there are instantiations of `<opening-tag>` which do not cause the failure to occur; hence, `<opening-tag>` is not a fully valid generalization. This, however, becomes apparent only if one of our generated tests includes a quote character in the attribute value. Since quote characters are just as likely (or as unlikely) to appear as other characters, this effect may not become apparent in our default 10 tests:
bad_input_tree_generalizer().can_generalize([0, 0, 0])
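A quick back-of-the-envelope calculation shows why 10 tries can easily miss the quote case. If a random instantiation hits the passing case with probability p (the value 0.03 below is a made-up assumption, not a measured rate), the chance that all n tries miss it is (1 − p)^n:

```python
p = 0.03  # assumed probability of a quote appearing in a random attribute value
for n in (10, 100):
    print(f"{n} tries: chance that all of them miss the passing case = {(1 - p) ** n:.2f}")
# → 0.74 for 10 tries, 0.05 for 100 tries
```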
It will become apparent, however, as we increase the number of tests:
bad_input_tree_generalizer(max_tries_for_generalization=100, log=True).can_generalize([0, 0, 0])
Testing '<Ada np=\'7\' y7v=\'\'>"bar</foo>'...FAIL (AssertionError) Testing '<B V=\'\'>"bar</foo>'...FAIL (AssertionError) Testing '<K v="$" s5F="\x0b" q="" E=\'\'>"bar</foo>'...FAIL (AssertionError) Testing '<Fcdt8 v7A4u=\'.\t\'>"bar</foo>'...FAIL (AssertionError) Testing '<s n="">"bar</foo>'...FAIL (AssertionError) Testing '<W>"bar</foo>'...FAIL (AssertionError) Testing '<ap>"bar</foo>'...FAIL (AssertionError) Testing '<B1>"bar</foo>'...FAIL (AssertionError) Testing '<Q00wY M=\'\r \'>"bar</foo>'...FAIL (AssertionError) Testing '<O6 d7="" H=\'\'>"bar</foo>'...FAIL (AssertionError) Testing '<v1IH w="" ZI="" O="">"bar</foo>'...FAIL (AssertionError) Testing '<T1 w998=\'a\' j=\'z\n7\'>"bar</foo>'...FAIL (AssertionError) Testing '<Dnh1>"bar</foo>'...FAIL (AssertionError) Testing '<D F9="" x4=\'\' Hup=\'7\n\'>"bar</foo>'...FAIL (AssertionError) Testing '<l62E>"bar</foo>'...FAIL (AssertionError) Testing '<k11 P8x5="">"bar</foo>'...FAIL (AssertionError) Testing '<V6LBVu>"bar</foo>'...FAIL (AssertionError) Testing '<k9S>"bar</foo>'...FAIL (AssertionError) Testing '<tU2J913 lQ6N=\'\' f=\'*\' V=\'\' b="" l="" G=\'\'>"bar</foo>'...FAIL (AssertionError) Testing '<X O="U~">"bar</foo>'...FAIL (AssertionError) Testing '<q4 W=\'\' i=\'aA9\' I=\'9\'>"bar</foo>'...FAIL (AssertionError) Testing '<HK>"bar</foo>'...FAIL (AssertionError) Testing '<T>"bar</foo>'...FAIL (AssertionError) Testing '<NJc j32="\x0b">"bar</foo>'...FAIL (AssertionError) Testing '<G>"bar</foo>'...FAIL (AssertionError) Testing '<w B="\r">"bar</foo>'...FAIL (AssertionError) Testing '<Ac1>"bar</foo>'...FAIL (AssertionError) Testing '<vB y2=\'7x\'\'>"bar</foo>'...PASS
We see that our approach may _overgeneralize_ – producing a generalization that may be too lenient. In practice, this is not too much of a problem, as we are interested in characterizing the cases that trigger the failure, rather than a small subset that does not.

### Generalizable Paths

Using `can_generalize()`, we can devise a method `generalizable_paths()` that returns all paths in the tree that can be generalized.
class TreeGeneralizer(TreeGeneralizer): def find_paths(self, predicate: Callable[[TreePath, DerivationTree], bool], path: Optional[TreePath] = None, tree: Optional[DerivationTree] = None) -> List[TreePath]: """ Return a list of all paths for which `predicate` holds. `predicate` is a function `predicate`(`path`, `tree`), where `path` denotes a subtree in `tree`. If `predicate()` returns True, `path` is included in the returned list. """ if path is None: path = [] assert path is not None if tree is None: tree = self.tree assert tree is not None symbol, children = self.get_subtree(path) if predicate(path, tree): return [path] paths = [] if children is not None: for i, child in enumerate(children): child_symbol, _ = child if child_symbol in self.grammar: paths += self.find_paths(predicate, path + [i]) return paths def generalizable_paths(self) -> List[TreePath]: """Return a list of all paths whose subtrees can be generalized.""" return self.find_paths(self.can_generalize)
Here is `generalizable_paths()` in action. We obtain all (paths to) subtrees that can be generalized:
bad_input_generalizable_paths = \ cast(TreeGeneralizer, bad_input_tree_generalizer()).generalizable_paths() bad_input_generalizable_paths
To convert these subtrees into abstract failure-inducing patterns, the method `generalize_path()` returns a copy of the tree with the subtree replaced by a nonterminal without children:
class TreeGeneralizer(TreeGeneralizer): def generalize_path(self, path: TreePath, tree: Optional[DerivationTree] = None) -> DerivationTree: """Return a copy of the tree in which the subtree at `path` is generalized (= replaced by a nonterminal without children)""" if tree is None: tree = self.tree assert tree is not None symbol, children = tree if not path or children is None: return symbol, None # Nonterminal without children head = path[0] new_children = (children[:head] + [self.generalize_path(path[1:], children[head])] + children[head + 1:]) return symbol, new_children
The function `all_terminals()` turns such trees into strings, rendering childless nonterminals as `<...>` placeholders:
all_terminals(cast(TreeGeneralizer, bad_input_tree_generalizer()).generalize_path([0, 0, 0]))
Finally, the method `generalize()` obtains a tree in which all generalizable paths actually are generalized:
class TreeGeneralizer(TreeGeneralizer): def generalize(self) -> DerivationTree: """Returns a copy of the tree in which all generalizable subtrees are generalized (= replaced by nonterminals without children)""" tree = self.tree assert tree is not None for path in self.generalizable_paths(): tree = self.generalize_path(path, tree) return tree abstract_failure_inducing_input = cast(TreeGeneralizer, bad_input_tree_generalizer()).generalize()
This gives us the final generalization of `BAD_INPUT`. In the abstract failure-inducing input, all generalizable elements are generalized.
all_terminals(abstract_failure_inducing_input)
We see that to obtain the failure, it suffices to have an ``, followed by a quote and (any) `` and (any) ``. Clearly, all that it takes to produce the failure is to have a double quote in the plain text. Also note how this diagnosis was reached through _experiments_ only – just as with [delta debugging](DeltaDebugger.ipynb), we could treat the program under test as a black box. In contrast to delta debugging, however, we obtain an _abstraction_ that generalizes the circumstances under which a given failure occurs.

### Fuzzing with Patterns

One neat side effect of abstract failure-inducing patterns is that they can easily be instantiated into further test cases, all set to reproduce the failure in question. This gives us a test suite we can later test our fix against. The method `fuzz_tree()` takes a tree representing an abstract failure-inducing input and instantiates all missing subtrees.
import copy class TreeGeneralizer(TreeGeneralizer): def fuzz_tree(self, tree: DerivationTree) -> DerivationTree: """Return an instantiated copy of `tree`.""" tree = copy.deepcopy(tree) return self.fuzzer.expand_tree(tree) bitg = cast(TreeGeneralizer, bad_input_tree_generalizer()) for i in range(10): print(all_terminals(bitg.fuzz_tree(abstract_failure_inducing_input)))
<UzL3Ct6>"</p> <nw10E6>"</W> <h lV="'">"</x8> <k>"</u> <a0 l0820650g='3'>"</v5t> <zTg>"</o1Z> <yMgT02p s="" g94e='R'>"</P2> <Y>"</S9b> <X2566xS8v2>"</r13> <D48>" </R>
We can take these inputs and see whether they reproduce the failure in question:
successes = 0
failures = 0
trials = 1000

for i in range(trials):
    test_input = all_terminals(
        bitg.fuzz_tree(abstract_failure_inducing_input))

    try:
        remove_html_markup(test_input)
    except AssertionError:
        failures += 1   # the failure is reproduced
    else:
        successes += 1  # the failure no longer occurs

successes, failures
We get an overall failure rate of ~98%, which is not bad at all.
failures / trials
In our case, it is _overgeneralization_ (as discussed above) that is responsible for not reaching a 100% rate. (In all generality, we are trying to approximate the behavior of a Turing machine with a context-free grammar, which is, well, always an approximation.) However, even a lower rate would still be useful, as any additional test case that reproduces a failure helps in ensuring the final fix is complete.

## Putting it all Together

Let us now put all this together in a more convenient package that does not require the user to parse and unparse derivation trees. Our `DDSetDebugger` is modeled after the `DeltaDebugger` from [the chapter on delta debugging](DeltaDebugger.ipynb). It is to be used as

```python
with DDSetDebugger(grammar) as dd:
    some_failing_function(...)
```

After that, evaluating `dd` yields a generalized abstract failure-inducing input as a string. Since `DDSetDebugger` accepts only one grammar, the function to be debugged should have exactly one string argument (besides other arguments); this string must fit the grammar.

### Constructor

The constructor puts together the various components. It allows for customization by subclassing.
from DeltaDebugger import CallCollector class DDSetDebugger(CallCollector): """ Debugger implementing the DDSET algorithm for abstracting failure-inducing inputs. """ def __init__(self, grammar: Grammar, generalizer_class: Type = TreeGeneralizer, parser: Optional[Parser] = None, **kwargs: Any) -> None: """Constructor. `grammar` is an input grammar in fuzzingbook format. `generalizer_class` is the tree generalizer class to use (default: `TreeGeneralizer`) `parser` is the parser to use (default: `EarleyParser(grammar)`). All other keyword args are passed to the tree generalizer, notably: `fuzzer` - the fuzzer to use (default: `GrammarFuzzer`), and `log` - enables debugging output if True. """ super().__init__() self.grammar = grammar assert is_valid_grammar(grammar) self.generalizer_class = generalizer_class if parser is None: parser = EarleyParser(grammar) self.parser = parser self.kwargs = kwargs # These save state for further fuzz() calls self.generalized_args: Dict[str, Any] = {} self.generalized_trees: Dict[str, DerivationTree] = {} self.generalizers: Dict[str, TreeGeneralizer] = {}
### Generalizing Arguments

The method `generalize()` is the main entry point. For all string arguments collected in the first function call, it generalizes the arguments and returns an abstract failure-inducing string.
class DDSetDebugger(DDSetDebugger): def generalize(self) -> Dict[str, Any]: """ Generalize arguments seen. For each function argument, produce an abstract failure-inducing input that characterizes the set of inputs for which the function fails. """ if self.generalized_args: return self.generalized_args self.generalized_args = copy.deepcopy(self.args()) self.generalized_trees = {} self.generalizers = {} for arg in self.args(): def test(value: Any) -> Any: return self.call({arg: value}) value = self.args()[arg] if isinstance(value, str): tree = list(self.parser.parse(value))[0] gen = self.generalizer_class(self.grammar, tree, test, **self.kwargs) generalized_tree = gen.generalize() self.generalizers[arg] = gen self.generalized_trees[arg] = generalized_tree self.generalized_args[arg] = all_terminals(generalized_tree) return self.generalized_args class DDSetDebugger(DDSetDebugger): def __repr__(self) -> str: """Return a string representation of the generalized call.""" return self.format_call(self.generalize())
Here is how `DDSetDebugger` would be used on our `BAD_INPUT`. Simply evaluating the debugger yields a call with a generalized input.
with DDSetDebugger(SIMPLE_HTML_GRAMMAR) as dd: remove_html_markup(BAD_INPUT) dd
### Fuzzing

The `fuzz()` method produces instantiations of the abstract failure-inducing pattern.
class DDSetDebugger(DDSetDebugger): def fuzz_args(self) -> Dict[str, Any]: """ Return arguments randomly instantiated from the abstract failure-inducing pattern. """ if not self.generalized_trees: self.generalize() args = copy.deepcopy(self.generalized_args) for arg in args: if arg not in self.generalized_trees: continue tree = self.generalized_trees[arg] gen = self.generalizers[arg] instantiated_tree = gen.fuzz_tree(tree) args[arg] = all_terminals(instantiated_tree) return args def fuzz(self) -> str: """ Return a call with arguments randomly instantiated from the abstract failure-inducing pattern. """ return self.format_call(self.fuzz_args())
Here are some examples of `fuzz()` in action:
with DDSetDebugger(SIMPLE_HTML_GRAMMAR) as dd: remove_html_markup(BAD_INPUT) dd.fuzz() dd.fuzz() dd.fuzz()
These can be fed into `eval()`, set to produce more failing calls.
with ExpectError(AssertionError): eval(dd.fuzz())
Traceback (most recent call last): File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58133/2072246869.py", line 2, in <module> eval(dd.fuzz()) File "<string>", line 1, in <module> File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58133/2717035104.py", line 17, in remove_html_markup assert '<' not in out and '>' not in out AssertionError (expected)
## More Examples

Let us apply `DDSetDebugger` to more examples.

### Square Root

Our first example is the `square_root()` function from [the chapter on assertions](Assertions.ipynb).
from Assertions import square_root # minor dependency
The `square_root()` function fails on a value of `-1`:
with ExpectError(AssertionError): square_root(-1)
Traceback (most recent call last): File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58133/1205325604.py", line 2, in <module> square_root(-1) File "/Users/zeller/Projects/debuggingbook/notebooks/Assertions.ipynb", line 55, in square_root assert x >= 0 # precondition AssertionError (expected)
We define a grammar for its arguments:
INT_GRAMMAR: Grammar = { "<start>": ["<int>"], "<int>": ["<positive-int>", "-<positive-int>"], "<positive-int>": ["<digit>", "<nonzero-digit><positive-int>"], "<nonzero-digit>": list("123456789"), "<digit>": list(string.digits), }
The test function takes a string and converts it into an integer:
def square_root_test(s: str) -> None: return square_root(int(s))
With this, we can go and see whether we can generalize a failing input:
with DDSetDebugger(INT_GRAMMAR, log=True) as dd_square_root: square_root_test("-1") dd_square_root
Testing '-8'...FAIL (AssertionError) Testing '-316'...FAIL (AssertionError) Testing '8'...PASS Testing '684'...PASS Testing '-3'...FAIL (AssertionError) Testing '-870'...FAIL (AssertionError) Testing '-3'...FAIL (AssertionError) Testing '-3451'...FAIL (AssertionError) Testing '-8'...FAIL (AssertionError) Testing '-63213'...FAIL (AssertionError) Testing '-26'...FAIL (AssertionError) Testing '-4'...FAIL (AssertionError) Testing '-6'...FAIL (AssertionError) Testing '-8'...FAIL (AssertionError)
Success! Using `DDSetDebugger`, we have nicely generalized the failure-inducing input to the pattern `-<positive-int>`, which translates into "any negative number".

### Middle

The `middle()` function from [the chapter on statistical debugging](StatisticalDebugger.ipynb) returns the middle of three numerical values `x`, `y`, and `z`.
from StatisticalDebugger import middle # minor dependency
We set up a test function that evaluates a string – a tuple of three arguments – and then tests `middle()`:
def middle_test(s: str) -> None: x, y, z = eval(s) assert middle(x, y, z) == sorted([x, y, z])[1]
The grammar for the three numbers simply puts three integers together:
XYZ_GRAMMAR: Grammar = { "<start>": ["<int>, <int>, <int>"], "<int>": ["<positive-int>", "-<positive-int>"], "<positive-int>": ["<digit>", "<nonzero-digit><positive-int>"], "<nonzero-digit>": list("123456789"), "<digit>": list(string.digits), }
Here is an example of `middle()` failing:
with ExpectError(AssertionError): middle_test("2, 1, 3")
Traceback (most recent call last): File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58133/982110330.py", line 2, in <module> middle_test("2, 1, 3") File "/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_58133/3079946275.py", line 3, in middle_test assert middle(x, y, z) == sorted([x, y, z])[1] AssertionError (expected)
What happens if we debug this with `DDSetDebugger`? We see that there is no abstraction at the syntax level that could characterize this failure:
with DDSetDebugger(XYZ_GRAMMAR, log=True) as dd_middle: middle_test("2, 1, 3") dd_middle
Testing '7, 4591, -0'...PASS Testing '6, 1, 3'...PASS Testing '7, 1, 3'...PASS Testing '7, 1, 3'...PASS Testing '2, -7, 3'...FAIL (AssertionError) Testing '2, 0, 3'...FAIL (AssertionError) Testing '2, -89, 3'...FAIL (AssertionError) Testing '2, 973, 3'...PASS Testing '2, 11, 3'...PASS Testing '2, 8, 3'...PASS Testing '2, 1, 9'...FAIL (AssertionError) Testing '2, 1, -16'...PASS Testing '2, 1, 35'...FAIL (AssertionError) Testing '2, 1, 6'...FAIL (AssertionError) Testing '2, 1, 53'...FAIL (AssertionError) Testing '2, 1, 5'...FAIL (AssertionError) Testing '2, 1, 737'...FAIL (AssertionError) Testing '2, 1, 28'...FAIL (AssertionError) Testing '2, 1, 3'...FAIL (AssertionError) Testing '2, 1, 5'...FAIL (AssertionError) Testing '2, 1, 5'...FAIL (AssertionError) Testing '2, 1, 56'...FAIL (AssertionError)
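To make the semantic nature of this failure concrete, here is a self-contained brute-force check. The `middle_sketch()` below is a hypothetical reconstruction of the buggy `middle()` (treat the exact implementation as an assumption); enumerating small integers shows that the failing inputs are characterized by an ordering condition, y < x < z, not by any syntactic pattern a grammar abstraction could express:

```python
def middle_sketch(x, y, z):
    # Hypothetical reconstruction of the buggy middle()
    if y < z:
        if x < y:
            return y
        elif x < z:
            return y  # bug: should return x here
    else:
        if x > y:
            return y
        elif x > z:
            return x
    return z

# Enumerate all triples of small integers and collect the failing ones
failing = [(x, y, z)
           for x in range(4) for y in range(4) for z in range(4)
           if middle_sketch(x, y, z) != sorted([x, y, z])[1]]
print(failing)  # → [(1, 0, 2), (1, 0, 3), (2, 0, 3), (2, 1, 3)]
assert all(y < x < z for x, y, z in failing)
```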
So, while there are failures that can be nicely characterized using abstractions of input elements, `middle()` is not one of them. Which is good, because this means that all our other techniques such as [statistical debugging](StatisticalDebugger.ipynb) and [dynamic invariants](DynamicInvariants.ipynb) still have a use case :-)

## Synopsis

This chapter provides a class `DDSetDebugger`, implementing the DDSET algorithm for generalizing failure-inducing inputs. The `DDSetDebugger` is used as follows:

```python
with DDSetDebugger(grammar) as dd:
    function(args...)
dd
```

Here, `function(args...)` is a failing function call (= raises an exception) that takes at least one string argument; `grammar` is an [input grammar in fuzzingbook format](https://www.fuzzingbook.org/html/Grammars.html) that matches the format of this argument.

The result is a call of `function()` with an _abstract failure-inducing input_ – a variant of the concrete input in which parts are replaced by placeholders in the form `<name>`, where `<name>` is a nonterminal in the grammar. The failure has been verified to occur for a number of instantiations of `<name>`.

Here is an example of how `DDSetDebugger` works. The concrete failing input `<foo>"bar</foo>` is generalized to an _abstract failure-inducing input_:
with DDSetDebugger(SIMPLE_HTML_GRAMMAR) as dd: remove_html_markup('<foo>"bar</foo>') dd
The abstract input tells us that the failure occurs for arbitrary opening and closing HTML tags, as long as there is a double quote between them.

A programmatic interface is available as well. `generalize()` returns a mapping of argument names to (generalized) values:
dd.generalize()
Using `fuzz()`, the abstract input can be instantiated to further concrete inputs, all set to produce the failure again:
for i in range(10): print(dd.fuzz())
remove_html_markup(s='<s1d>"1</hF>') remove_html_markup(s='<H>"c*C</l>') remove_html_markup(s='<Ah2>"</v>') remove_html_markup(s='<a7>")</NP>') remove_html_markup(s='<boyIIt640TF>"</b08>') remove_html_markup(s='<dF>"</fay>') remove_html_markup(s='<l2>"\t7</z>') remove_html_markup(s='<ci>"</t>') remove_html_markup(s='<J>"2</t>') remove_html_markup(s='<Fo9g>"\r~\t\r</D>')
`DDSetDebugger` can be customized by passing a subclass of `TreeGeneralizer`, which does the bulk of the work; for details, see its constructor.

The full class hierarchy is shown below.
# ignore from ClassDiagram import display_class_hierarchy # ignore display_class_hierarchy([DDSetDebugger, TreeGeneralizer], public_methods=[ CallCollector.__init__, CallCollector.__enter__, CallCollector.__exit__, CallCollector.function, CallCollector.args, CallCollector.exception, CallCollector.call, # type: ignore DDSetDebugger.__init__, DDSetDebugger.__repr__, DDSetDebugger.fuzz, DDSetDebugger.fuzz_args, DDSetDebugger.generalize, ], project='debuggingbook')
## Lessons Learned

* Generalizing failure-inducing inputs can yield important information about which inputs, and under which circumstances, a failure occurs.
* Generalizing failure-inducing inputs is most useful if the input can be split into multiple elements, of which only a part is relevant for producing the error.
* As they help in _parsing_ and _producing_ input, _grammars_ can play an important role in testing and debugging.

## Next Steps

Our [next chapter](Repairer.ipynb) introduces _automated repair_ of programs, building on the fault localization and generalization mechanisms introduced so far.

## Background

Our `DDSetDebugger` class implements the DDSET algorithm as introduced by Gopinath et al. in \cite{Gopinath2020}. A [full-fledged implementation of DDSET](https://rahul.gopinath.org/post/2020/07/15/ddset/) with plenty of details and experiments is available as a Jupyter Notebook. Our implementation follows the [simplified implementation of DDSET, as described by Gopinath](https://rahul.gopinath.org/post/2020/08/03/simple-ddset/).

The potential for determining how input features relate to bugs is far from fully explored yet. The ALHAZEN work by Kampmann et al. \cite{Kampmann2020} generalizes over DDSET in a different way, by investigating _semantic_ features of input elements such as their numeric interpretation or length and their correlation with failures. Like DDSET, ALHAZEN also uses a feedback loop to strengthen or refute its hypotheses.

In recent work \cite{Gopinath2021}, Gopinath has extended the concept of DDSET further. His work on _evocative expressions_ introduces a _pattern language_ in which arbitrary DDSET-like patterns can be combined into Boolean formulas that even more precisely capture and produce failure circumstances. In particular, evocative expressions can _specialize_ grammars towards Boolean pattern combinations, thus allowing for great flexibility in testing and debugging.
## Exercises

### Exercise 1: Generalization and Specialization

Consider the abstract failure-inducing input for `BAD_INPUT` we determined:
all_terminals(abstract_failure_inducing_input)
1. How does it change if you increase the number of test runs, using `max_tries_for_generalization`?
2. What is the success rate of the new pattern?

**Solution.** We can compute this by increasing `max_tries_for_generalization`:
more_precise_bitg = \ cast(TreeGeneralizer, bad_input_tree_generalizer(max_tries_for_generalization=100)) more_precise_abstract_failure_inducing_input = \ more_precise_bitg.generalize() all_terminals(more_precise_abstract_failure_inducing_input)
We see that we still have an opening tag; however, it no longer assumes attributes. The success rate can be computed as before:
successes = 0
failures = 0
trials = 1000

for i in range(trials):
    test_input = all_terminals(
        more_precise_bitg.fuzz_tree(
            more_precise_abstract_failure_inducing_input))

    try:
        remove_html_markup(test_input)
    except AssertionError:
        failures += 1   # the failure is reproduced
    else:
        successes += 1  # the failure no longer occurs

successes, failures

failures / trials