Dataset columns:
- markdown: string (length 0 – 1.02M)
- code: string (length 0 – 832k)
- output: string (length 0 – 1.02M)
- license: string (length 3 – 36)
- path: string (length 6 – 265)
- repo_name: string (length 6 – 127)
Census aggregation scratchpad
By [Ben Welsh](https://palewi.re/who-is-ben-welsh/)
import math
_____no_output_____
MIT
notebooks/scratchpad.ipynb
nkrishnaswami/census-data-aggregator
Approximation ![](https://assets.documentcloud.org/documents/6162551/pages/20180418-MOE-p50-normal.gif)![](https://assets.documentcloud.org/documents/6162551/pages/20180418-MOE-p51-normal.gif)
males_under_5, males_under_5_moe = 10154024, 3778
females_under_5, females_under_5_moe = 9712936, 3911

total_under_5 = males_under_5 + females_under_5
total_under_5

total_under_5_moe = math.sqrt(males_under_5_moe**2 + females_under_5_moe**2)
total_under_5_moe
_____no_output_____
MIT
notebooks/scratchpad.ipynb
nkrishnaswami/census-data-aggregator
![](https://assets.documentcloud.org/documents/6162551/pages/20180418-MOE-p52-normal.gif?1561126109)
def approximate_margin_of_error(*pairs):
    """
    Returns the approximate margin of error after combining all of the
    provided Census Bureau estimates, taking into account each value's
    margin of error.

    Expects a series of arguments, each a paired list with the estimated
    value first and the margin of error se...
_____no_output_____
MIT
notebooks/scratchpad.ipynb
nkrishnaswami/census-data-aggregator
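The helper truncated above combines margins of error by the root-sum-of-squares rule shown in the Census Bureau pages pictured earlier. A self-contained sketch under that assumption (the pair convention — estimate first, margin of error second — is taken from the docstring):

```python
import math

def approximate_margin_of_error(*pairs):
    # Square root of the sum of the squared margins of error: the Census
    # Bureau's standard approximation for the MOE of a sum of estimates.
    return math.sqrt(sum(moe ** 2 for _, moe in pairs))

# Combining the males/females under 5 example from the cell above
approximate_margin_of_error((10154024, 3778), (9712936, 3911))
```

This reproduces the `total_under_5_moe` computed by hand above, just generalized to any number of (estimate, MOE) pairs.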
Aggregating totals
def total(*pairs):
    """
    Returns the combined value of all the provided Census Bureau estimates,
    along with an approximated margin of error.

    Expects a series of arguments, each a paired list with the estimated
    value first and the margin of error second.
    """
    return sum([p[0] for p in pairs]), appr...
_____no_output_____
MIT
notebooks/scratchpad.ipynb
nkrishnaswami/census-data-aggregator
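The truncated `total` helper pairs the summed estimates with the approximated margin of error. A self-contained sketch, re-defining the root-sum-of-squares MOE helper so the snippet runs on its own:

```python
import math

def approximate_margin_of_error(*pairs):
    # Root-sum-of-squares of the individual margins of error
    return math.sqrt(sum(moe ** 2 for _, moe in pairs))

def total(*pairs):
    # Sum the estimates; approximate the combined margin of error
    return sum(p[0] for p in pairs), approximate_margin_of_error(*pairs)

total((10154024, 3778), (9712936, 3911))
```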
Aggregating medians ![](https://assets.documentcloud.org/documents/6165014/pages/How-to-Recalculate-a-Median-p1-normal.gif?1561138970)![](https://assets.documentcloud.org/documents/6165014/pages/How-to-Recalculate-a-Median-p2-normal.gif?1561138970)![](https://assets.documentcloud.org/documents/6165014/pages/How-to-Rec...
def approximate_median(range_list, design_factor=1.5):
    """
    Returns the estimated median from a set of ranged totals.

    Useful for generating medians for measures like median household income
    and median age when aggregating census geographies.

    Expects a list of dictionaries with three keys:

    min: ...
_____no_output_____
MIT
notebooks/scratchpad.ipynb
nkrishnaswami/census-data-aggregator
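The truncated `approximate_median` estimates a median from binned totals by interpolating within the bin that contains the midpoint observation. A minimal sketch of that interpolation step — the `min`/`max`/`n` key names are assumptions based on the truncated docstring, and the margin-of-error calculation that would use `design_factor` is omitted:

```python
def approximate_median(range_list, design_factor=1.5):
    # Locate the bin containing the midpoint observation, then linearly
    # interpolate within it. design_factor would feed the (omitted) MOE step.
    ranges = sorted(range_list, key=lambda r: r["min"])
    n_total = sum(r["n"] for r in ranges)
    midpoint = n_total / 2.0
    running = 0
    for r in ranges:
        if running + r["n"] >= midpoint:
            within = midpoint - running
            width = r["max"] - r["min"]
            return r["min"] + (within / r["n"]) * width
        running += r["n"]

approximate_median([
    {"min": 0, "max": 10000, "n": 100},
    {"min": 10000, "max": 20000, "n": 300},
    {"min": 20000, "max": 30000, "n": 100},
])  # → 15000.0
```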
Install and setup

- use conda instead (max python version 3.7): `!pip install tensorflow`
- use `conda install graphviz` instead; also requires `conda install python-graphviz`: `!pip install graphviz`
- must use pip here, no conda package: `!pip install hiddenlayer`
import graphviz

d = graphviz.Digraph()
d.edge('hello', 'world')
d

!conda env list
# conda environments:
#
base                     C:\ProgramData\Anaconda3
myenv                 *  C:\Users\Rob.DESKTOP-HBG5EOT\.conda\envs\myenv
tf37                     C:\Users\Rob.DESKTOP-HBG5EOT\.conda\envs\tf37
MIT
pyTorch_PS/PT08-InstallAndSetupTensorflowHiddenLayer.ipynb
rsunderscore/learning
Vertex client library: AutoML tabular binary classification model for batch prediction

Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular binary classification models and do batch prediction using Go...
import os
import sys

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install -U google-cloud-aiplatform $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Install the latest GA version of *google-cloud-storage* library as well.
! pip3 install -U google-cloud-storage $USER_FLAG
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Restart the kernel
Once you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
if not os.getenv("IS_TESTING"):
    # Automatically restart kernel after installs
    import IPython

    app = IPython.Application.instance()
    app.kernel.do_shutdown(True)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Before you begin

GPU runtime
*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU**

Set up your Google Cloud project
**The following steps are required, regardless of your notebook environment.**
1. [Select or create a Google Cloud proje...
PROJECT_ID = "[your-project-id]"  # @param {type:"string"}

if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
    # Get your GCP project id from gcloud
    shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
    PROJECT_ID = shell_output[0]
print("Project ID:",...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Region
You can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
- Americas: `us-central1`
- Europe: `europe-west4`
- Asia Pacific: `asia-east1`
You may not use a multi-regiona...
REGION = "us-central1" # @param {type: "string"}
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, create a timestamp for each instance session and append it to the names of the resources created in this tutorial.
from datetime import datetime

TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Authenticate your Google Cloud account
**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.
**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via OAuth.
**Otherwise**, follow these steps: In the Cloud Cons...
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.

# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Create a Cloud Storage bucket
**The following steps are required, regardless of your notebook environment.**
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have ...
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
! gsutil mb -l $REGION $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Finally, validate access to your Cloud Storage bucket by examining its contents:
! gsutil ls -al $BUCKET_NAME
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Set up variables
Next, set up some variables used throughout the tutorial.

Import libraries and define constants

Import Vertex client library
Import the Vertex client library into our Python environment.
import time

from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.struct_pb2 import Struct, Value
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Vertex constants
Set up the following constants for Vertex:
- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)

# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
AutoML constants
Set constants unique to AutoML datasets and training:
- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.
- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).
- Dataset Training Schemas: Tells the `Pipeline` resou...
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
    "gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/tr...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variables `DEPLOY_GPU`/`DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs alloca...
if os.getenv("IS_TESTING_DEPOLY_GPU"):
    DEPLOY_GPU, DEPLOY_NGPU = (
        aip.AcceleratorType.NVIDIA_TESLA_K80,
        int(os.getenv("IS_TESTING_DEPOLY_GPU")),
    )
else:
    DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.

Machine Type
Next, set the machine typ...
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
    MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
    MACHINE_TYPE = "n1-standard"

VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Tutorial
Now you are ready to start creating your own AutoML tabular binary classification model.

Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different cl...
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}


def create_dataset_client():
    client = aip.DatasetServiceClient(client_options=client_options)
    return client


def create_model_client():
    client = aip.ModelServiceClient(client_options=client_options)
    return client
...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Dataset
Now that your clients are ready, your first step is to create a `Dataset` resource instance. This step differs from Vision, Video and Language. For those products, after the `Dataset` resource is created, one then separately imports the data, using the `import_data` method.
For tabular, importing of the data is ...
IMPORT_FILE = "gs://cloud-ml-tables-data/bank-marketing.csv"
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Quick peek at your data
You will use a version of the Bank Marketing dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few ro...
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))

print("First 10 rows")
! gsutil cat $IMPORT_FILE | head

heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
    rai...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
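The "quick peek" cell above counts rows and reads the label column off the last field of the CSV header. The same logic can be sketched on an inline string, without shelling out to `gsutil` (the three-column CSV below is a hypothetical stand-in for the Bank Marketing index file):

```python
# Hypothetical stand-in for the CSV file fetched with gsutil above
csv_text = "Age,Job,Deposit\n58,management,2\n44,technician,1\n"

rows = csv_text.strip().split("\n")
# First line is the header; the rest are examples
print("Number of Examples", len(rows) - 1)
# The label is the last column in the header
label_column = rows[0].split(",")[-1]
print("Label Column Name", label_column)
```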
Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.

Create `Dataset` resource instance
Use the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:
1. Us...
TIMEOUT = 90


def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
    start_time = time.time()
    try:
        if src_uri.startswith("gs://"):
            metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
        elif src_uri.startswith("bq://"):
            metadata = {"input_...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Now save the unique dataset identifier for the `Dataset` resource instance you created.
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]

print(dataset_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Train the model
Now train an AutoML tabular binary classification model using your Vertex `Dataset` resource. To train the model, do the following steps:
1. Create a Vertex training pipeline for the `Dataset` resource.
2. Execute the pipeline to start the training.

Create a training pipeline
You may ask, what do we use ...
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
    dataset_id = dataset.split("/")[-1]

    input_config = {
        "dataset_id": dataset_id,
        "fraction_split": {
            "training_fraction": 0.8,
            "validation_fraction": 0.1,
            "test_fraction": 0.1,
        },
    ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.
The minimal fields you need ...
TRANSFORMATIONS = [
    {"auto": {"column_name": "Age"}},
    {"auto": {"column_name": "Job"}},
    {"auto": {"column_name": "MaritalStatus"}},
    {"auto": {"column_name": "Education"}},
    {"auto": {"column_name": "Default"}},
    {"auto": {"column_name": "Balance"}},
    {"auto": {"column_name": "Housing"}},
    {"...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Now save the unique identifier of the training pipeline you created.
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]

print(pipeline_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's `get_training_pipeline` method, with the following parameter:
- `name`: The Vertex fully qualified pipeline...
def get_training_pipeline(name, silent=False):
    response = clients["pipeline"].get_training_pipeline(name=name)
    if silent:
        return response

    print("pipeline")
    print(" name:", response.name)
    print(" display_name:", response.display_name)
    print(" state:", response.state)
    print(" training...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeli...
while True:
    response = get_training_pipeline(pipeline_id, True)
    if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
        print("Training job has not completed:", response.state)
        model_to_deploy_id = None
        if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
            ra...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Model information
Now that your model is trained, you can get some information on your model.

Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to ev...
def list_model_evaluations(name):
    response = clients["model"].list_model_evaluations(parent=name)
    for evaluation in response:
        print("model_evaluation")
        print(" name:", evaluation.name)
        print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
        metrics = json_format.MessageToDic...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Model deployment for batch prediction
Now deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.
For online prediction, you:
1. Create an `Endpoint` resource for deploying the `Model` resource to.
2. Deploy the `Model` resource ...
HEADING = "Age,Job,MaritalStatus,Education,Default,Balance,Housing,Loan,Contact,Day,Month,Duration,Campaign,PDays,Previous,POutcome,Deposit"
INSTANCE_1 = (
    "58,managment,married,teritary,no,2143,yes,no,unknown,5,may,261,1,-1,0, unknown"
)
INSTANCE_2 = (
    "44,technician,single,secondary,no,39,yes,no,unknown,5,may...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Make the batch input file
Now make a batch input file, which you will store in your local Cloud Storage bucket. Unlike image, video and text, the batch input file for tabular is only supported for CSV. For the CSV file:
- The first line is the heading with the feature (field) heading names.
- Each remaining line i...
import tensorflow as tf

gcs_input_uri = BUCKET_NAME + "/test.csv"

with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
    f.write(HEADING + "\n")
    f.write(str(INSTANCE_1) + "\n")
    f.write(str(INSTANCE_2) + "\n")

print(gcs_input_uri)
! gsutil cat $gcs_input_uri
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Compute instance scaling
You have several choices on scaling the compute instances for handling your batch prediction requests:
- Single Instance: The batch prediction requests are processed on a single compute instance.
  - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.
- Manu...
MIN_NODES = 1
MAX_NODES = 1
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Make batch prediction request
Now that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:
- `display_name`: The human readable name for the prediction job.
- `model_name`: The Vertex fully qualified identifier for the `...
BATCH_MODEL = "bank_batch-" + TIMESTAMP


def create_batch_prediction_job(
    display_name,
    model_name,
    gcs_source_uri,
    gcs_destination_output_uri_prefix,
    parameters=None,
):
    if DEPLOY_GPU:
        machine_spec = {
            "machine_type": DEPLOY_COMPUTE,
            "accelerator_type": DEPLOY_...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
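The helper truncated above starts by building a machine spec from the deployment constants. A minimal sketch of just that fragment, with hypothetical stand-in values for the constants defined earlier in the notebook and field names assumed to follow the Vertex `MachineSpec` message:

```python
# Hypothetical stand-ins for DEPLOY_COMPUTE / DEPLOY_GPU / DEPLOY_NGPU above
DEPLOY_COMPUTE = "n1-standard-4"
DEPLOY_GPU = "NVIDIA_TESLA_K80"
DEPLOY_NGPU = 1

# When a GPU is configured, attach it to the machine spec
if DEPLOY_GPU:
    machine_spec = {
        "machine_type": DEPLOY_COMPUTE,
        "accelerator_type": DEPLOY_GPU,
        "accelerator_count": DEPLOY_NGPU,
    }
else:
    machine_spec = {"machine_type": DEPLOY_COMPUTE}
```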
Now get the unique identifier for the batch prediction job you created.
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]

print(batch_job_id)
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Get information on a batch prediction job
Use this helper function `get_batch_prediction_job`, with the following parameter:
- `job_name`: The Vertex fully qualified identifier for the batch prediction job.
The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:
- ...
def get_batch_prediction_job(job_name, silent=False):
    response = clients["job"].get_batch_prediction_job(name=job_name)
    if silent:
        return response.output_config.gcs_destination.output_uri_prefix, response.state

    print("response")
    print(" name:", response.name)
    print(" display_name:", respons...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Get Predictions
When the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.
Finally you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in a CSV format, which you indicated at the time you made the batch prediction job, under a subfolder sta...
def get_latest_predictions(gcs_out_dir):
    """Get the latest prediction subfolder using the timestamp in the subfolder name"""
    folders = !gsutil ls $gcs_out_dir
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-"):
            if subf...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
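The truncated helper above picks the newest `prediction-<timestamp>` subfolder from a `gsutil ls` listing; because the timestamps are fixed-width, lexicographic comparison finds the latest one. A self-contained sketch of that selection step, under a hypothetical helper name and taking the folder list as an argument instead of shelling out:

```python
def latest_prediction_folder(folders):
    # gsutil ls returns URIs with a trailing slash, so the subfolder name
    # is the second-to-last path component; timestamped names sort
    # lexicographically, so plain string comparison finds the newest.
    latest = ""
    for folder in folders:
        subfolder = folder.split("/")[-2]
        if subfolder.startswith("prediction-") and subfolder > latest:
            latest = subfolder
    return latest

latest_prediction_folder([
    "gs://bucket/out/prediction-20210101120000/",
    "gs://bucket/out/prediction-20210601120000/",
])
```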
Cleaning up
To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
- Dataset
- Pipeline
- ...
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True

# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
    if delete_dataset and "dataset_id" in globals():
    ...
_____no_output_____
Apache-2.0
notebooks/community/gapic/automl/showcase_automl_tabular_binary_classification_batch.ipynb
nayaknishant/vertex-ai-samples
Project: Linear Regression
Reggie is a mad scientist who has been hired by the local fast food joint to build their newest ball pit in the play area. As such, he is working on researching the bounciness of different balls so as to optimize the pit. He is running an experiment to bounce different sizes of bouncy balls, ...
def get_y(m, b, x):
    y = m*x + b
    return y

get_y(1, 0, 7) == 7
get_y(5, 10, 3) == 25
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Reggie wants to try a bunch of different `m` values and `b` values and see which line produces the least error. To calculate error between a point and a line, he wants a function called `calculate_error()`, which will take in `m`, `b`, and an [x, y] point called `point` and return the distance between the line and the ...
def calculate_error(m, b, point):
    x_point, y_point = point
    y = m*x_point + b
    distance = abs(y - y_point)
    return distance
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Let's test this function!
#this is a line that looks like y = x, so (3, 3) should lie on it. thus, error should be 0:
print(calculate_error(1, 0, (3, 3)))

#the point (3, 4) should be 1 unit away from the line y = x:
print(calculate_error(1, 0, (3, 4)))

#the point (3, 3) should be 1 unit away from the line y = x - 1:
print(calculate_error(1, -1,...
0
1
1
5
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Great! Reggie's datasets will be sets of points. For example, he ran an experiment comparing the width of bouncy balls to how high they bounce:
datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
The first datapoint, `(1, 2)`, means that his 1cm bouncy ball bounced 2 meters. The 4cm bouncy ball bounced 4 meters.
As we try to fit a line to this data, we will need a function called `calculate_all_error`, which takes `m` and `b` that describe a line, and `points`, a set of data like the example above.
`calculate_all...
def calculate_all_error(m, b, points):
    total_error = 0
    for point in points:
        point_error = calculate_error(m, b, point)
        total_error += point_error
    return total_error
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Let's test this function!
#every point in this dataset lies upon y=x, so the total error should be zero:
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(calculate_all_error(1, 0, datapoints))

#every point in this dataset is 1 unit away from y = x + 1, so the total error should be 4:
datapoints = [(1, 1), (3, 3), (5, 5), (-1, -1)]
print(c...
0
4
4
18
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Great! It looks like we now have a function that can take in a line and Reggie's data and return how much error that line produces when we try to fit it to the data.
Our next step is to find the `m` and `b` that minimizes this error, and thus fits the data best!

Part 2: Try a bunch of slopes and intercepts!
The way Regg...
possible_ms = [m * 0.1 for m in range(-100, 100)]
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Now, let's make a list of `possible_bs` to check, with values from -20 to 20 in steps of 0.1:
possible_bs = [b * 0.1 for b in range(-200, 200)]
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
We are going to find the smallest error. First, we will make every possible `y = m*x + b` line by pairing all of the possible `m`s with all of the possible `b`s. Then, we will see which `y = m*x + b` line produces the smallest total error with the set of data stored in `datapoints`.
First, create the variables that we wi...
datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]
best_error = float("inf")
best_m = 0
best_b = 0

for m in possible_ms:
    for b in possible_bs:
        error = calculate_all_error(m, b, datapoints)
        if error < best_error:
            best_m = m
            best_b = b
            best_error = error

print(best_m, best_b,...
0.30000000000000004 1.7000000000000002 4.999999999999999
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
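The grid-search result above can be checked directly: plugging the best line back into the error function should reproduce the reported total error of 5. A self-contained check, re-defining the two helpers so the snippet runs on its own:

```python
def calculate_error(m, b, point):
    # Vertical distance between the line y = m*x + b and the point
    x, y = point
    return abs((m * x + b) - y)

def calculate_all_error(m, b, points):
    # Sum of the per-point errors
    return sum(calculate_error(m, b, p) for p in points)

datapoints = [(1, 2), (2, 0), (3, 4), (4, 4), (5, 3)]
# The search found y = 0.3x + 1.7 with a total error of 5
print(calculate_all_error(0.3, 1.7, datapoints))
```

Up to floating-point noise, this matches the `4.999999999999999` printed by the search cell.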
Part 3: What does our model predict?
Now we have seen that for this set of observations on the bouncy balls, the line that fits the data best has an `m` of 0.3 and a `b` of 1.7:

```
y = 0.3x + 1.7
```

This line produced a total error of 5.
Using this `m` and this `b`, what does your line predict the bounce height of a ball ...
get_y(0.3, 1.7, 6)
_____no_output_____
BSD-2-Clause
Cumulative_Projects/Wk 5 Reggie_Linear_Regression_Solution.ipynb
jfreeman812/Project_ZF
Practical Examples of Interactive Visualizations in JupyterLab with Pixi.js and Jupyter Widgets

PyData Berlin 2018 - 2018-07-08

Jeremy Tuloup [@jtpio](https://twitter.com/jtpio) [github.com/jtpio](https://github.com/jtpio) [jtp.io](https://jtp.io)

![skip](./img/skip.png)

The Python Visualization Landscape (2017)
![Py...
from ipywidgets import IntSlider

slider = IntSlider(min=0, max=10)
slider

slider

slider.value

slider.value = 2
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Tutorial to create your own: https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Custom.html

Libraries

bqplot
![bqplot](./img/bqplot.gif)

ipyleaflet
![ipyleaflet](./img/ipyleaflet.gif)

ipyvolume
![ipyvolume](./img/ipyvolume.gif)

![skip](./img/skip.png)

Motivation: Very Custom Visualizations
![motivation](./img/...
from ipyutils import SimpleShape
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Implementation
- [simple_shape.py](../ipyutils/simple_shape.py): defines the **SimpleShape** Python class
- [widget.ts](../src/simple_shapes/widget.ts): defines the **SimpleShapeModel** and **SimpleShapeView** Typescript classes
square = SimpleShape()
square

square.rotate = True
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Level Up 🚀
from ipyutils import Shapes

shapes = Shapes(n_shapes=100)
shapes

shapes.shape
shapes.shape = 'square'
shapes.rotate = True
shapes.wobble = True
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
![skip](./img/skip.png)

Visualizing Recursion with the Bermuda Triangle Puzzle
![Bermuda Triangle Puzzle](img/bermuda_triangle_puzzle.jpg)

![skip](./img/skip.png)

Motivation
* Solve the puzzle programmatically
* Verify a solution visually
* Animate the process

![skip](./img/skip.png)

BermudaTriangle Widget
from ipyutils import TriangleAnimation, BermudaTriangle

triangles = TriangleAnimation()
triangles
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
![skip](./img/skip.png)

What can we do with this widget?

![skip](./img/skip.png)

Visualize Transitions

From | To
:-------------------------:|:-------------------------:
![from](img/anim_from.png) | ![to](img/anim_to.png)
# states
state_0 = [None] * 16
print(state_0)

state_1 = [[13, 1]] + [None] * 15
print(state_1)

state_2 = [[13, 1], [12, 0]] + [None] * 14
print(state_2)
[[13, 1], [12, 0], None, None, None, None, None, None, None, None, None, None, None, None, None, None]
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Example States and Animation
example_states = TriangleAnimation()
bermuda = example_states.bermuda
bermuda.states = [
    [None] * 16,
    [[7, 0]] + [None] * 15,
    [[7, 1]] + [None] * 15,
    [[7, 2]] + [None] * 15,
    [[7, 2], [0, 0]] + [None] * 14,
    [[7, 2], [0, 1]] + [None] * 14,
    [[i, 0] for i in range(16)],
    [[i, 1] for i in rang...
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
![skip](./img/skip.png) Solver
from copy import deepcopy

class Solver(BermudaTriangle):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.reset_state()

    def reset_state(self):
        self.board = [None] * self.N_TRIANGLES
        self.logs = [deepcopy(self.board)]
        self.it = 0

    def so...
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Valid Permutation - is_valid()
help(Solver.is_valid)
Help on function is_valid in module ipyutils.bermuda: is_valid(self, i) Parameters ---------- i: int Position of the triangle to check, between 0 and 15 (inclusive) Returns ------- valid: bool True if the triangle at position i doesn't have any conflict False o...
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
```python
solver.is_valid(7)
# False
```

![Valid](./img/valid_triangle.png)

![skip](./img/skip.png)

First Try: Random Permutations
import random

class RandomSearch(Solver):
    def solve(self):
        random.seed(42)
        self.reset_state()
        for i in range(200):
            self.board = random.sample(self.permutation, self.N_TRIANGLES)
            self.log()
            if self.found():
                print('Found!')
                ...
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
![skip](./img/skip.png) Better: Brute Force using Recursion
class RecursiveSolver(Solver):
    def solve(self):
        self.used = [False] * self.N_TRIANGLES
        self.reset_state()
        self._place(0)
        return self.board

    def _place(self, i):
        self.it += 1
        if i == self.N_TRIANGLES:
            return True
        for j in range...
_____no_output_____
BSD-3-Clause
examples/presentation.ipynb
jtpio/pixijs-jupyter
Linear regression

**TOC:** In today's class, we will explore the following topics in Python:

- 1) [Introduction](intro)
- 2) [Simple linear regression](reglinear)
- 3) [Multiple linear regression](multireglinear)
- 4) [Bias-variance tradeoff](tradeoff)
# import the main data analysis libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
_____no_output_____
MIT
semana_6/intro_regressao.ipynb
rocabrera/curso_ml
____________

1) **Introduction**

Imagine you want to sell your house. You know your house's attributes: how many rooms it has, how many cars fit in the garage, what its built area is, where it is located, etc. Now, the question is: what would be the best price to list it at, that is, how much is it actually worth? ...
df = pd.read_csv("data/house_prices/house_price.csv")
_____no_output_____
MIT
semana_6/intro_regressao.ipynb
rocabrera/curso_ml
For now, let's not worry about the missing data, since we will use only one feature in our initial model. Feel free to explore the data however you like afterwards! For now, let's take a look at the target column! It is clear that the distribution is skewed to the right. Let's try to change that by ...
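As a sketch of the kind of fix usually applied to a right-skewed price target (a log transform), here is a minimal, self-contained example. The data below is synthetic; with the real dataframe you would apply the same transform to the actual target column, whose name is not shown in this excerpt.

```python
import numpy as np

# Synthetic right-skewed "prices": a log-normal sample stands in for
# the real target column of the house-price dataframe (assumption).
rng = np.random.default_rng(0)
prices = rng.lognormal(mean=12, sigma=0.5, size=10_000)

def skewness(x):
    """Sample skewness: the third standardized moment."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return float((z ** 3).mean())

raw_skew = skewness(prices)
log_skew = skewness(np.log1p(prices))  # log1p also handles zeros gracefully

print(f"skew before: {raw_skew:.2f}, after log1p: {log_skew:.2f}")
```

The log-transformed target is far closer to symmetric, which tends to help linear models whose residuals are assumed roughly normal.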
df_anscombe = sns.load_dataset('anscombe') df_anscombe.groupby("dataset").agg({"mean", "std"})
_____no_output_____
MIT
semana_6/intro_regressao.ipynb
rocabrera/curso_ml
Table of Contents

1  Lambda-calculus implemented in OCaml
1.1  Expressions
1.2  Goal?
1.3  Grammar
1.4  Identity
1.5  Conditionals
1.6  Numbers
1.7  Inequality test
1.8  Successors
1.9  Predecessors
1.10  Additio...
let identite = fun x -> x ;; let vide = fun x -> x ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Conditionals

The conditional is `si cond alors valeur_vraie sinon valeur_fausse` (if cond then valeur_vraie else valeur_fausse).
let si = fun cond valeur_vraie valeur_fausse -> cond valeur_vraie valeur_fausse ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
It is very simple, as long as we make sure that `cond` is either `vrai` or `faux`, as defined by their behavior:

    si vrai e1 e2 == e1
    si faux e1 e2 == e2
let vrai = fun valeur_vraie valeur_fausse -> valeur_vraie ;; let faux = fun valeur_vraie valeur_fausse -> valeur_fausse ;;
File "[14]", line 1, characters 28-41: Warning 27: unused variable valeur_fausse.
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Negation is easy!
let non = fun v x y -> v y x;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
In fact, we will force lazy evaluation, so that if one of the two expressions does not terminate, the evaluation still works.
let vrai_paresseux = fun valeur_vraie valeur_fausse -> valeur_vraie () ;; let faux_paresseux = fun valeur_vraie valeur_fausse -> valeur_fausse () ;;
File "[16]", line 1, characters 38-51: Warning 27: unused variable valeur_fausse.
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
To make a term lazy, nothing could be simpler!
let paresseux = fun f -> fun () -> f ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Numbers

Church's representation consists of writing $n$ as $\lambda f. \lambda z. f^n z$.
type 'a nombres = ('a -> 'a) -> 'a -> 'a;; (* unused *)
type entiers_church = (int -> int) -> int -> int;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
$0$ is trivially $\lambda f. \lambda z. z$:
let zero = fun (f : ('a -> 'a)) (z : 'a) -> z ;;
File "[34]", line 1, characters 16-17: Warning 27: unused variable f.
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
$1$ is $\lambda f. \lambda z. f z$:
let un = fun (f : ('a -> 'a)) -> f ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
With the composition operator, writing the next integers is easy.
let compose = fun f g x -> f (g x);;
let deux = fun f -> compose f f;; (* == compose f (un f) *)
let trois = fun f -> compose f (deux f) ;;
let quatre = fun f -> compose f (trois f) ;;
(* etc *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
We can generalize this with a function that turns a Caml integer (`int`) into a Church integer:
let rec entierChurch (n : int) = fun f z -> if n = 0 then z else f ((entierChurch (n-1)) f z) ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
For example:
(entierChurch 0) (fun x -> x + 1) 0;; (* 0 *)
(entierChurch 7) (fun x -> x + 1) 0;; (* 7 *)
(entierChurch 3) (fun x -> 2*x) 1;;   (* 8 *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
And a function that does the reverse (note: this function is *not* a $\lambda$-term):
let entierNatif c : int = c (fun x -> x + 1) 0 ;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
A quick test:
entierNatif (si vrai zero un);;  (* 0 *)
entierNatif (si faux zero un);;  (* 1 *)
entierNatif (entierChurch 100);; (* 100 *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Inequality test

We actually need to be able to test whether $n \leq 0$ (that is, whether $n = 0$).
(* takes a lambda f lambda z. ... and returns vrai iff n = 0, faux otherwise *)
let estnul = fun n -> n (fun z -> faux) (vrai);;

(* takes a lambda f lambda z. ... and returns vrai iff n > 0, faux otherwise *)
let estnonnul = fun n -> n (fun z -> vrai) (faux);;
File "[44]", line 2, characters 32-33: Warning 27: unused variable z.
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
We can propose this alternative implementation, which "works" the same way (in the sense of computing $\beta$-reductions) but is more complicated:
let estnonnul2 = fun n -> non (estnul n);;

entierNatif (si (estnul zero) zero un);; (* 0 *)
entierNatif (si (estnul un) zero un);;   (* 1 *)
entierNatif (si (estnul deux) zero un);; (* 1 *)
entierNatif (si (estnonnul zero) zero un);; (* 0 *)
entierNatif (si (estnonnul un) zero un);;   (* 1 *)
entierNatif (si (estnonnul...
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Successors

Given the Church representation, $n+1$ consists of applying the argument $f$ one more time: $f^{n+1}(z) = f(f^n(z))$.
let succ = fun n f z -> f ((n f) z) ;;
entierNatif (succ un);; (* 2 *)
deux;;
succ un;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
We notice that they have the same typing, but OCaml indicates that it has less information about the second one: this `'_a` means that the type is *constrained*; it will be fixed at the first use of this function. This is rather mysterious, but the key point is the following: `deux` was written by hand, so the...
let succ_de_un = succ un;;
(succ_de_un) (fun x -> x + 1);;
(succ_de_un) (fun x -> x ^ "0");;
(succ un) (fun x -> x ^ "0");; (* a freshly computed value, without constraint *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Predecessors

Given the Church representation, $\lambda n. n-1$ does not exist... but we can cheat.
let pred = fun n ->
  if (entierNatif n) > 0
  then entierChurch ((entierNatif n) - 1)
  else zero
;;
entierNatif (pred deux);;  (* 1 *)
entierNatif (pred trois);; (* 2 *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Addition

To add $n$ and $m$, we must apply a function $f$ $n$ times and then $m$ times: $f^{n+m}(z) = f^n(f^m(z))$.
let somme = fun n m f z -> n(f)( m(f)(z));;
let cinq = somme deux trois ;;
entierNatif cinq;;
let sept = somme cinq deux ;;
entierNatif sept;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Multiplication

To multiply $n$ and $m$, we must apply the encoding of $n$ exactly $m$ times: $f^{nm}(z) = f^n(f^n(\dots(f^n(z))\dots))$.
let produit = fun n m f z -> m(n(f))(z);;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
We can do even better with the composition operator:
let produit = fun n m -> compose m n;;
let six = produit deux trois ;;
entierNatif six;;
let huit = produit deux quatre ;;
entierNatif huit;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Pairs

We will write a pair constructor, `paire a b`, which behaves like `(a, b)`, and two destructors, `gauche` and `droite`, which satisfy:

    gauche (paire a b) == a
    droite (paire a b) == b
let paire = fun a b -> fun f -> f(a)(b);;
let gauche = fun p -> p(fun a b -> a);;
let droite = fun p -> p(fun a b -> b);;
entierNatif (gauche (paire zero un));;
entierNatif (droite (paire zero un));;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Predecessors, second attempt

There is a long and complicated way ([source](http://gregfjohnson.com/pred/)) of getting there, using pairs.
let pred n suivant premier =
  let pred_suivant = paire vrai premier in
  let pred_premier = fun p ->
    si (gauche p)
      (paire faux premier)
      (paire faux (suivant (droite p)))
  in
  let paire_finale = n pred_suivant pred_premier in
  droite paire_finale
;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Unfortunately, this is not well typed.
entierNatif (pred deux);; (* 1 *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Lists

To build (singly linked) lists, we need a value for the empty list, `listevide`, a list constructor `cons`, an empty-list predicate `estvide`, accessors `tete` and `queue`, with the following constraints (with `vrai`, `faux` defined as above):

    estvi...
let listevide = fun survide surpasvide -> survide;; let cons = fun hd tl -> fun survide surpasvide -> surpasvide hd tl;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
With this construction, `estvide` is quite simple: `survide` is `() -> vrai` and `surpasvide` is `tt qu -> faux`.
let estvide = fun liste -> liste (vrai) (fun tt qu -> faux);;
File "[60]", line 1, characters 45-47: Warning 27: unused variable tt. File "[60]", line 1, characters 48-50: Warning 27: unused variable qu.
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Two tests:
entierNatif (si (estvide (listevide)) un zero);;         (* estvide listevide == vrai *)
entierNatif (si (estvide (cons un listevide)) un zero);; (* estvide (cons un listevide) == faux *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
And the two extractors are very easy with this encoding.
let tete = fun liste -> liste (vide) (fun tt qu -> tt);;
let queue = fun liste -> liste (vide) (fun tt qu -> qu);;
entierNatif (tete (cons un listevide));;
entierNatif (tete (queue (cons deux (cons un listevide))));;
entierNatif (tete (queue (cons trois (cons deux (cons un listevide)))));;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Let us look at the types Caml infers for lists of increasing size:
cons un (cons un listevide);; (* 8 variables for a list of size 2 *)
cons un (cons un (cons un (cons un listevide)));; (* 14 variables for a list of size 4 *)
cons un (cons un (cons un (cons un (cons un (cons un (cons un (cons un listevide)))))));; (* 26 variables for a list of size 7 *)
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
For these reasons, we see that the type Caml gives to a list of size $k$ grows linearly *in size* with $k$! There is therefore no hope (with this encoding) of having a generic type for lists represented in Caml. So we are not surprised to see this attempt fail:
let rec longueur liste =
  liste (zero) (fun t q -> succ (longueur q))
;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Indeed, `longueur` should be well typed, and `liste` and `q` should have the same type, yet the type of `liste` is strictly larger than that of `q`... We can try to write an `ieme` (i-th) function. We want `ieme zero liste = tete` and `ieme n liste = ieme (pred n) (queue liste)`. Writing it at a high level, ...
let pop liste =
  si (estvide liste) (listevide) (queue liste)
;;
let ieme n liste =
  tete (n pop liste)
;;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
The U function

This is the first hint that the $\lambda$-calculus can be used as a model of computation: the term $U : f \to f(f)$ does not terminate when applied to itself. But this is where using Caml shows its weakness: this term cannot be correctly typed!
let u = fun f -> f (f);;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
Note that even in an untyped language (for example Python), one can define $U$, but its execution will fail, either because of a stack overflow or because it does not terminate.

Recursion via the Y function

The Y function finds the fixed point of another function. It is very useful for defining functions ...
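As a quick illustration of the remark about untyped languages (this is an aside, not part of the notebook's OCaml code): in Python the term $U$ can be defined without complaint, but evaluating $U(U)$ reduces to $U(U)$ forever, so CPython aborts with a `RecursionError` when the interpreter stack limit is reached.

```python
import sys

# U : f -> f(f). The definition itself is accepted...
u = lambda f: f(f)

# ...but u(u) calls u(u), which calls u(u), and so on.
sys.setrecursionlimit(200)  # keep the failure fast
try:
    u(u)
except RecursionError:
    print("U(U) does not terminate: RecursionError")
```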
let rec y = fun f -> f (y(f));;
let fact = y(fun f n -> si (estnul n) (un) (produit n (f (pred n))));;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
We use $\eta$-expansion: if $e$ terminates, $e$ is equivalent (i.e., every computation gives the same term) to $\lambda x. e(x)$.
let rec y = fun f -> f (fun x -> y(f)(x));;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2
However, the type checker still fails to see that the following expression should be well defined:
let fact = y(fun f n -> si (estnul n) (un) (produit n (f (pred n))));;
_____no_output_____
MIT
agreg/Lambda_Calcul_en_OCaml.ipynb
doc22940/notebooks-2