Now here's the slope function:
def slope_func1(state, t, system):
    """Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing g, rho, C_d, area, and mass

    returns: derivatives of y and v
    """
    y, v = state
    M, g = system.M, system.g

    a_drag = drag_force(v, system) / M
    a_cord = cord_acc(y, v, system)
    dvdt = -g + a_cord + a_drag

    return v, dvdt
notebooks/jump2.ipynb
AllenDowney/ModSimPy
mit
As always, let's test the slope function with the initial params.
slope_func1(system.init, 0, system)
We'll need an event function to stop the simulation when we get to the end of the cord.
def event_func(state, t, system):
    """Run until y=-L.

    state: position, velocity
    t: time
    system: System object containing g, rho, C_d, area, and mass

    returns: difference between y and -L
    """
    y, v = state
    return y + system.L
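To see the mechanics behind an event function, here is a minimal standalone sketch (not the book's run_ode_solver): integrate free fall with small Euler steps and stop when the event value crosses zero. The values g = 9.8 m/s^2 and L = 25 m are assumed for illustration.

```python
# Minimal sketch of event-based termination, assuming g = 9.8 and L = 25.
G = 9.8   # gravitational acceleration, m/s^2
L = 25.0  # cord length, m

def event(y):
    """Zero when the jumper reaches the end of the cord (y = -L)."""
    return y + L

def simulate(dt=1e-4):
    """Free fall (no drag): dy/dt = v, dv/dt = -g; stop at the event."""
    t, y, v = 0.0, 0.0, 0.0
    while event(y) > 0:   # run until the event value reaches zero
        y += v * dt
        v += -G * dt
        t += dt
    return t, y, v

t_cross, y_cross, v_cross = simulate()
```

The solver used in the text does the same thing more accurately, locating the exact zero crossing rather than overshooting by up to one time step.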
We can test it with the initial conditions.
event_func(system.init, 0, system)
And then run the simulation.
results, details = run_ode_solver(system, slope_func1, events=event_func)
details.message
Here's how long it takes to drop 25 meters.
t_final = get_last_label(results)
Here's the plot of position as a function of time.
def plot_position(results, **options):
    plot(results.y, **options)
    decorate(xlabel='Time (s)', ylabel='Position (m)')

plot_position(results)
We can use min to find the lowest point:
min(results.y)
Here's velocity as a function of time:
def plot_velocity(results):
    plot(results.v, color='C1', label='v')
    decorate(xlabel='Time (s)', ylabel='Velocity (m/s)')

plot_velocity(results)
Velocity when we reach the end of the cord.
min(results.v)
Although we compute acceleration inside the slope function, we don't get acceleration as a result from run_ode_solver. We can approximate it by computing the numerical derivative of v:
a = gradient(results.v)
plot(a)
decorate(xlabel='Time (s)', ylabel='Acceleration (m/$s^2$)')
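The idea of a numerical derivative can be shown with numpy.gradient, which the book's gradient function is assumed to wrap: central differences at interior points, one-sided differences at the ends.

```python
# Sketch of numerical differentiation with numpy.gradient.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
v = t**2                  # velocity samples; the true derivative is 2t
a = np.gradient(v, t)     # central differences inside, one-sided at the ends
# Interior points are exact for a quadratic: a = [1, 2, 4, 6, 7]
```

Note the endpoint estimates (1 and 7) are less accurate than the interior ones (2, 4, 6), which match the true derivative exactly here.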
The maximum downward acceleration, as a multiple of g:
max_acceleration = max(abs(a)) * m/s**2 / params.g
Using Equation (1) from Heck, Uylings, and Kędzierska, we can compute the peak acceleration due to interaction with the cord, neglecting drag.
def max_acceleration(system):
    mu = system.mu
    return 1 + mu * (4 + mu) / 8

max_acceleration(system)
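As a quick sanity check on the formula, here is a self-contained version evaluated at a hypothetical mu (the ratio of cord mass to jumper mass); mu = 1 is an assumed value for illustration, not taken from the text.

```python
def peak_acceleration_factor(mu):
    """Peak acceleration due to the cord, in units of g (Eq. 1, no drag)."""
    return 1 + mu * (4 + mu) / 8

# A massless cord adds nothing; a cord as heavy as the jumper adds 5/8 g.
factor_zero = peak_acceleration_factor(0.0)   # 1.0
factor_one = peak_acceleration_factor(1.0)    # 1 + 5/8 = 1.625
```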
If you set C_d=0, the simulated acceleration approaches the theoretical result, although you might have to reduce max_step to get a good numerical estimate.

Sweeping cord weight

Now let's see how velocity at the crossover point depends on the weight of the cord.
def sweep_m_cord(m_cord_array, params):
    sweep = SweepSeries()
    for m_cord in m_cord_array:
        system = make_system(Params(params, m_cord=m_cord))
        results, details = run_ode_solver(system, slope_func1, events=event_func)
        min_velocity = min(results.v) * m / s
        sweep[m_cord.magnitude] = min_velocity
    return sweep

m_cord_array = linspace(1, 201, 21) * kg
sweep = sweep_m_cord(m_cord_array, params)
Here's what it looks like. As expected, a heavier cord gets the jumper going faster. There's a hitch near 25 kg that seems to be due to numerical error.
plot(sweep)
decorate(xlabel='Mass of cord (kg)',
         ylabel='Fastest downward velocity (m/s)')
Phase 2

Once the jumper falls past the length of the cord, acceleration due to energy transfer from the cord stops abruptly. As the cord stretches, it starts to exert a spring force. So let's simulate this second phase. spring_force computes the force of the cord on the jumper:
def spring_force(y, system):
    """Computes the force of the bungee cord on the jumper:

    y: height of the jumper

    Uses these variables from system:
    y_attach: height of the attachment point
    L: resting length of the cord
    k: spring constant of the cord

    returns: force in N
    """
    L, k = system.L, system.k
    distance_fallen = -y
    extension = distance_fallen - L
    f_spring = k * extension
    return f_spring
The spring force is 0 until the cord is fully extended. When it is extended 1 m, the spring force is 40 N.
spring_force(-25*m, system)
spring_force(-26*m, system)
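The same check works in a self-contained sketch, using L = 25 m and k = 40 N/m as implied by the example values in the text. The max(0, ...) clamp makes explicit the statement that the force is 0 until the cord is fully extended (a cord can pull but not push).

```python
def spring_force_sketch(y, L=25.0, k=40.0):
    """Spring force of the cord (N) at height y (m); 0 while slack."""
    distance_fallen = -y
    extension = distance_fallen - L
    return k * max(0.0, extension)

f_slack = spring_force_sketch(-10.0)       # cord not yet extended: 0 N
f_at_cord_end = spring_force_sketch(-25.0) # cord just fully extended: 0 N
f_one_meter = spring_force_sketch(-26.0)   # extended by 1 m: 40 N
```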
The slope function for Phase 2 includes the spring force, and drops the acceleration due to the cord.
def slope_func2(state, t, system):
    """Compute derivatives of the state.

    state: position, velocity
    t: time
    system: System object containing g, rho, C_d, area, and mass

    returns: derivatives of y and v
    """
    y, v = state
    M, g = system.M, system.g

    a_drag = drag_force(v, system) / M
    a_spring = spring_force(y, system) / M
    dvdt = -g + a_drag + a_spring

    return v, dvdt
I'll run Phase 1 again so we can get the final state.
system1 = make_system(params)
event_func.direction = -1
results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
print(details1.message)
Now I need the final time, position, and velocity from Phase 1.
t_final = get_last_label(results1)
init2 = results1.row[t_final]
And that gives me the starting conditions for Phase 2.
system2 = System(system1, t_0=t_final, init=init2)
Here's how we run Phase 2, setting the direction of the event function so it doesn't stop the simulation immediately.
event_func.direction = +1
results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
print(details2.message)

t_final = get_last_label(results2)
We can plot the results on the same axes.
plot_position(results1, label='Phase 1')
plot_position(results2, label='Phase 2')
And get the lowest position from Phase 2.
min(results2.y)
To see how big the effect of the cord is, I'll collect the previous code in a function.
def simulate_system2(params):
    system1 = make_system(params)

    event_func.direction = -1
    results1, details1 = run_ode_solver(system1, slope_func1, events=event_func)
    t_final = get_last_label(results1)
    init2 = results1.row[t_final]

    system2 = System(system1, t_0=t_final, init=init2)

    results2, details2 = run_ode_solver(system2, slope_func2, events=event_func)
    t_final = get_last_label(results2)

    return TimeFrame(pd.concat([results1, results2]))
Now we can run both phases and get the results in a single TimeFrame.
results = simulate_system2(params)
plot_position(results)

params_no_cord = Params(params, m_cord=1*kg)
results_no_cord = simulate_system2(params_no_cord)

plot_position(results, label='m_cord = 75 kg')
plot_position(results_no_cord, label='m_cord = 1 kg')
savefig('figs/jump.png')

min(results_no_cord.y)
diff = min(results.y) - min(results_no_cord.y)
Categorical Variables
import pandas as pd

df = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],
                   'boro': ['Manhattan', 'Queens', 'Manhattan',
                            'Brooklyn', 'Brooklyn', 'Bronx']})
df

pd.get_dummies(df)

df = pd.DataFrame({'salary': [103, 89, 142, 54, 63, 219],
                   'boro': [0, 1, 0, 2, 2, 3]})
df

pd.get_dummies(df, columns=['boro'])
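The reason for passing columns= in the second call can be seen in a tiny example: get_dummies only encodes string (object) columns by default, so integer-coded categories pass through untouched unless you name them explicitly.

```python
# Sketch of the columns= behavior of pd.get_dummies with integer codes.
import pandas as pd

df = pd.DataFrame({'salary': [103, 89, 142],
                   'boro': [0, 1, 0]})

default = pd.get_dummies(df)                     # numeric columns pass through
explicit = pd.get_dummies(df, columns=['boro'])  # force encoding of 'boro'
```

In `default`, 'boro' is still a single integer column; in `explicit`, it is expanded into one indicator column per category.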
Teaching Materials/Machine Learning/ml-training-intro/notebooks/04 - Preprocessing.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Exercise

Apply dummy encoding and scaling to the "adult" dataset, which consists of income data from the census. Bonus: visualize the data.
data = pd.read_csv("adult.csv", index_col=0)
# %load solutions/load_adult.py
Composing a pipeline from reusable, pre-built, and lightweight components

This tutorial describes how to build a Kubeflow pipeline from reusable, pre-built, and lightweight components. The following summarizes the steps involved in creating and using a reusable component:

- Write the program that contains your component’s logic. The program must use files and command-line arguments to pass data to and from the component.
- Containerize the program.
- Write a component specification in YAML format that describes the component for the Kubeflow Pipelines system.
- Use the Kubeflow Pipelines SDK to load your component, use it in a pipeline, and run that pipeline.

Then, we will compose a pipeline from a reusable component, a pre-built component, and a lightweight component. The pipeline will perform the following steps:

- Train an MNIST model and export it to Google Cloud Storage.
- Deploy the exported TensorFlow model on AI Platform Prediction service.
- Test the deployment by calling the endpoint with test data.

Note: If you want to build the image locally, ensure that you have Docker installed by running the following command:

which docker

The result should be something like:

/usr/bin/docker
import kfp
import kfp.gcp as gcp
import kfp.dsl as dsl
import kfp.compiler as compiler
import kfp.components as comp
import datetime

import kubernetes as k8s

# Required Parameters
PROJECT_ID = '<ADD GCP PROJECT HERE>'
GCS_BUCKET = 'gs://<ADD STORAGE LOCATION HERE>'
courses/machine_learning/deepdive2/production_ml/labs/samples/contrib/mnist/04_Reusable_and_Pre-build_Components_as_Pipeline.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create client

If you run this notebook outside of a Kubeflow cluster, you need the following arguments:

- host: The URL of your Kubeflow Pipelines instance, for example "https://&lt;your-deployment&gt;.endpoints.&lt;your-project&gt;.cloud.goog/pipeline"
- client_id: The client ID used by Identity-Aware Proxy
- other_client_id: The client ID used to obtain the auth codes and refresh tokens.
- other_client_secret: The client secret used to obtain the auth codes and refresh tokens.

client = kfp.Client(host, client_id, other_client_id, other_client_secret)

If you run this notebook within a Kubeflow cluster, run the following command:

client = kfp.Client()

You'll need to create OAuth client ID credentials of type Other to get other_client_id and other_client_secret. Learn more about creating OAuth credentials.
# Optional Parameters, but required for running outside Kubeflow cluster
# The host for 'AI Platform Pipelines' ends with 'pipelines.googleusercontent.com'
# The host for the pipeline endpoint of a 'full Kubeflow deployment' ends with '/pipeline'
# Examples are:
# https://7c021d0340d296aa-dot-us-central2.pipelines.googleusercontent.com
# https://kubeflow.endpoints.kubeflow-pipeline.cloud.goog/pipeline
HOST = '<ADD HOST NAME TO TALK TO KUBEFLOW PIPELINE HERE>'

# For 'full Kubeflow deployment' on GCP, the endpoint is usually protected through IAP,
# therefore the following will be needed to access the endpoint.
CLIENT_ID = '<ADD OAuth CLIENT ID USED BY IAP HERE>'
OTHER_CLIENT_ID = '<ADD OAuth CLIENT ID USED TO OBTAIN AUTH CODES HERE>'
OTHER_CLIENT_SECRET = '<ADD OAuth CLIENT SECRET USED TO OBTAIN AUTH CODES HERE>'

# This is to ensure the proper access token is present to reach the endpoint
# for 'AI Platform Pipelines'.
# If you are not working with 'AI Platform Pipelines', this step is not necessary.
! gcloud auth print-access-token

# Create kfp client
in_cluster = True
try:
    k8s.config.load_incluster_config()
except:
    in_cluster = False

if in_cluster:
    client = kfp.Client()
else:
    if HOST.endswith('googleusercontent.com'):
        CLIENT_ID = None
        OTHER_CLIENT_ID = None
        OTHER_CLIENT_SECRET = None

    client = kfp.Client(host=HOST,
                        client_id=CLIENT_ID,
                        other_client_id=OTHER_CLIENT_ID,
                        other_client_secret=OTHER_CLIENT_SECRET)
Build reusable components

Writing the program code

The following cell creates a file app.py that contains a Python script. The script downloads the MNIST dataset, trains a neural-network classification model, writes the training log, and exports the trained model to Google Cloud Storage.

Your component can create outputs that downstream components can use as inputs. Each output must be a string, and the container image must write each output to a separate local text file. For example, if a training component needs to output the path of the trained model, the component writes the path into a local file, such as /output.txt.
%%bash
# Create folders if they don't exist.
mkdir -p tmp/reuse_components_pipeline/mnist_training

# Create the Python file that trains the model and exports it to GCS.
cat > ./tmp/reuse_components_pipeline/mnist_training/app.py <<HERE
import argparse
from datetime import datetime
import tensorflow as tf

parser = argparse.ArgumentParser()
parser.add_argument(
    '--model_path', type=str, required=True, help='Name of the model file.')
parser.add_argument(
    '--bucket', type=str, required=True, help='GCS bucket name.')
args = parser.parse_args()

bucket = args.bucket
model_path = args.model_path

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
print(model.summary())

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

callbacks = [
    tf.keras.callbacks.TensorBoard(
        log_dir=bucket + '/logs/' + datetime.now().date().__str__()),
    # Interrupt training if val_loss stops improving for over 2 epochs
    tf.keras.callbacks.EarlyStopping(patience=2, monitor='val_loss'),
]
model.fit(x_train, y_train, batch_size=32, epochs=5,
          callbacks=callbacks,
          validation_data=(x_test, y_test))

from tensorflow import gfile

gcs_path = bucket + "/" + model_path
# The export requires that the folder is new
if gfile.Exists(gcs_path):
    gfile.DeleteRecursively(gcs_path)
tf.keras.experimental.export_saved_model(model, gcs_path)

with open('/output.txt', 'w') as f:
    f.write(gcs_path)
HERE
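The output-file contract described above (each output written to its own local text file) can be demonstrated without any Kubeflow machinery. This sketch uses a temporary file standing in for /output.txt, and the GCS path is a hypothetical value for illustration.

```python
# Sketch of the component output contract: write the output value to a
# local text file so the pipeline backend can pick it up.
import os
import tempfile

gcs_path = 'gs://example-bucket/models/mnist'  # hypothetical output value

out_file = os.path.join(tempfile.mkdtemp(), 'output.txt')
with open(out_file, 'w') as f:
    f.write(gcs_path)

# A downstream consumer (here, the pipeline system) reads the file back.
with open(out_file) as f:
    recovered = f.read()
```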
Create a Docker container

Create your own container image that includes your program.

Creating a Dockerfile

Now create a container that runs the script. Start by creating a Dockerfile. A Dockerfile contains the instructions to assemble a Docker image. The FROM statement specifies the Base Image from which you are building. WORKDIR sets the working directory. When you assemble the Docker image, COPY copies the required files and directories (for example, app.py) to the file system of the container. RUN executes a command (for example, install the dependencies) and commits the results.
%%bash
# Create Dockerfile.
# AI Platform only supports TensorFlow 1.14.
cat > ./tmp/reuse_components_pipeline/mnist_training/Dockerfile <<EOF
FROM tensorflow/tensorflow:1.14.0-py3
WORKDIR /app
COPY . /app
EOF
Build the Docker image

Now that we have created our Dockerfile, we need to build the image and push it to a registry that will host it. There are three possible options:

- Use kfp.containers.build_image_from_working_dir to build the image and push it to the Container Registry (GCR). This requires kaniko, which is auto-installed with a 'full Kubeflow deployment' but not with 'AI Platform Pipelines'.
- Use Cloud Build, which requires a GCP project with the corresponding API enabled. If you are working with 'AI Platform Pipelines' in a GCP project, Cloud Build is recommended.
- Use Docker installed locally and push to a registry such as GCR.

Note: If you run this notebook within a Kubeflow cluster, with Kubeflow version >= 0.7, and are exploring the kaniko option, you need to ensure that valid credentials are created within your notebook's namespace.

- With Kubeflow version >= 0.7, the credential is supposed to be copied automatically while creating a notebook through Configurations, which didn't work properly at the time this notebook was created.
- You can also add credentials to the new namespace by either copying credentials from an existing Kubeflow namespace, or by creating a new service account.
- The following cell demonstrates how to copy the default secret to your own namespace.

```bash
%%bash

NAMESPACE=<your notebook name space>
SOURCE=kubeflow
NAME=user-gcp-sa
SECRET=$(kubectl get secrets \${NAME} -n \${SOURCE} -o jsonpath="{.data.\${NAME}.json}" | base64 -D)
kubectl create -n \${NAMESPACE} secret generic \${NAME} --from-literal="\${NAME}.json=\${SECRET}"
```
IMAGE_NAME = "mnist_training_kf_pipeline"
TAG = "latest"  # "v_$(date +%Y%m%d_%H%M%S)"

GCR_IMAGE = "gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{TAG}".format(
    PROJECT_ID=PROJECT_ID,
    IMAGE_NAME=IMAGE_NAME,
    TAG=TAG
)

APP_FOLDER = './tmp/reuse_components_pipeline/mnist_training/'

# In the following, for the purpose of demonstration,
# Cloud Build is chosen for 'AI Platform Pipelines';
# kaniko is chosen for 'full Kubeflow deployment'.
if HOST.endswith('googleusercontent.com'):
    # kaniko is not pre-installed with 'AI Platform Pipelines'
    import subprocess
    # ! gcloud builds submit --tag ${IMAGE_NAME} ${APP_FOLDER}
    cmd = ['gcloud', 'builds', 'submit', '--tag', GCR_IMAGE, APP_FOLDER]
    build_log = (subprocess.run(cmd, stdout=subprocess.PIPE)
                 .stdout[:-1].decode('utf-8'))
    print(build_log)
else:
    if kfp.__version__ <= '0.1.36':
        # kfp 0.1.36+ introduced a breaking change that makes the following code not work
        import subprocess
        builder = kfp.containers._container_builder.ContainerBuilder(
            gcs_staging=GCS_BUCKET + "/kfp_container_build_staging"
        )
        kfp.containers.build_image_from_working_dir(
            image_name=GCR_IMAGE,
            working_dir=APP_FOLDER,
            builder=builder
        )
    else:
        raise RuntimeError("Please build the Docker image using either Docker or Cloud Build")
If you want to use Docker to build the image, run the following in a cell:

```bash
%%bash -s "{PROJECT_ID}"

IMAGE_NAME="mnist_training_kf_pipeline"
TAG="latest" # "v_$(date +%Y%m%d_%H%M%S)"

# Create script to build docker image and push it.
cat > ./tmp/components/mnist_training/build_image.sh <<HERE
PROJECT_ID="${1}"
IMAGE_NAME="${IMAGE_NAME}"
TAG="${TAG}"
GCR_IMAGE="gcr.io/\${PROJECT_ID}/\${IMAGE_NAME}:\${TAG}"
docker build -t \${IMAGE_NAME} .
docker tag \${IMAGE_NAME} \${GCR_IMAGE}
docker push \${GCR_IMAGE}
docker image rm \${IMAGE_NAME}
docker image rm \${GCR_IMAGE}
HERE

cd tmp/components/mnist_training
bash build_image.sh
```
image_name = GCR_IMAGE
Writing your component definition file

To create a component from your containerized program, you must write a component specification in YAML that describes the component for the Kubeflow Pipelines system.

For the complete definition of a Kubeflow Pipelines component, see the component specification. However, for this tutorial you don’t need to know the full schema of the component specification. The notebook provides enough information to complete the tutorial.

Start writing the component definition (component.yaml) by specifying your container image in the component’s implementation section:
%%bash -s "{image_name}"

GCR_IMAGE="${1}"
echo ${GCR_IMAGE}

# Create the component definition YAML.
# The image URI should be changed according to the docker image push output above.
cat > mnist_pipeline_component.yaml <<HERE
name: Mnist training
description: Train a mnist model and save to GCS
inputs:
  - name: model_path
    description: 'Path of the tf model.'
    type: String
  - name: bucket
    description: 'GCS bucket name.'
    type: String
outputs:
  - name: gcs_model_path
    description: 'Trained model path.'
    type: GCSPath
implementation:
  container:
    image: ${GCR_IMAGE}
    command: [
      python, /app/app.py,
      --model_path, {inputValue: model_path},
      --bucket, {inputValue: bucket},
    ]
    fileOutputs:
      gcs_model_path: /output.txt
HERE

import os
mnist_train_op = kfp.components.load_component_from_file(
    os.path.join('./', 'mnist_pipeline_component.yaml'))
mnist_train_op.component_spec
Define deployment operation on AI Platform
mlengine_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml')

def deploy(project_id, model_uri, model_id, runtime_version, python_version):
    return mlengine_deploy_op(
        model_uri=model_uri,
        project_id=project_id,
        model_id=model_id,
        runtime_version=runtime_version,
        python_version=python_version,
        replace_existing_version=True,
        set_default=True)
A Kubeflow serving deployment component is available as an option. Note that the deployed endpoint URI is not available as an output of this component.

```python
kubeflow_deploy_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.4.0/components/gcp/ml_engine/deploy/component.yaml')

def deploy_kubeflow(model_dir, tf_server_name):
    return kubeflow_deploy_op(
        model_dir=model_dir,
        server_name=tf_server_name,
        cluster_name='kubeflow',
        namespace='kubeflow',
        pvc_name='',
        service_type='ClusterIP')
```

Create a lightweight component for testing the deployment
def deployment_test(project_id: str, model_name: str, version: str) -> str:

    model_name = model_name.split("/")[-1]
    version = version.split("/")[-1]

    import googleapiclient.discovery

    def predict(project, model, data, version=None):
        """Run predictions on a list of instances.

        Args:
            project: (str), project where the Cloud ML Engine Model is deployed.
            model: (str), model name.
            data: ([[any]]), list of input instances, where each input
                instance is a list of attributes.
            version: str, version of the model to target.

        Returns:
            Mapping[str: any]: dictionary of prediction results defined by the model.
        """
        service = googleapiclient.discovery.build('ml', 'v1')
        name = 'projects/{}/models/{}'.format(project, model)

        if version is not None:
            name += '/versions/{}'.format(version)

        response = service.projects().predict(
            name=name,
            body={'instances': data}).execute()

        if 'error' in response:
            raise RuntimeError(response['error'])

        return response['predictions']

    import tensorflow as tf
    import json

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    result = predict(
        project=project_id,
        model=model_name,
        data=x_test[0:2].tolist(),
        version=version)
    print(result)
    return json.dumps(result)

# # Test the function with an already deployed version
# deployment_test(
#     project_id=PROJECT_ID,
#     model_name="mnist",
#     version='ver_bb1ebd2a06ab7f321ad3db6b3b3d83e6'  # previously deployed version for testing
# )

deployment_test_op = comp.func_to_container_op(
    func=deployment_test,
    base_image="tensorflow/tensorflow:1.15.0-py3",
    packages_to_install=["google-api-python-client==1.7.8"])
Create your workflow as a Python function

Define your pipeline as a Python function. @kfp.dsl.pipeline is a required decorator, and must include name and description properties. Then compile the pipeline function. After the compilation is completed, a pipeline file is created.
# Define the pipeline
@dsl.pipeline(
    name='Mnist pipeline',
    description='A toy pipeline that performs mnist model training.'
)
def mnist_reuse_component_deploy_pipeline(
        project_id: str = PROJECT_ID,
        model_path: str = 'mnist_model',
        bucket: str = GCS_BUCKET
):
    train_task = mnist_train_op(
        model_path=model_path,
        bucket=bucket
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    deploy_task = deploy(
        project_id=project_id,
        model_uri=train_task.outputs['gcs_model_path'],
        model_id="mnist",
        runtime_version="1.14",
        python_version="3.5"
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    deploy_test_task = deployment_test_op(
        project_id=project_id,
        model_name=deploy_task.outputs["model_name"],
        version=deploy_task.outputs["version_name"],
    ).apply(gcp.use_gcp_secret('user-gcp-sa'))

    return True
Submit a pipeline run
pipeline_func = mnist_reuse_component_deploy_pipeline
experiment_name = 'mnist_kubeflow'

arguments = {"model_path": "mnist_model",
             "bucket": GCS_BUCKET}

run_name = pipeline_func.__name__ + ' run'

# Submit pipeline directly from pipeline function
run_result = client.create_run_from_pipeline_func(pipeline_func,
                                                  experiment_name=experiment_name,
                                                  run_name=run_name,
                                                  arguments=arguments)
Sebastian Raschka
import time
print('Last updated: %s' % time.strftime('%d/%m/%Y'))
notebooks/bubble_sort.ipynb
babraham123/script-runner
mit
Sorting Algorithms Overview
import platform
import multiprocessing

def print_sysinfo():
    print('\nPython version :', platform.python_version())
    print('compiler       :', platform.python_compiler())
    print('\nsystem    :', platform.system())
    print('release   :', platform.release())
    print('machine   :', platform.machine())
    print('processor :', platform.processor())
    print('CPU count :', multiprocessing.cpu_count())
    print('interpreter:', platform.architecture()[0])
    print('\n\n')
Bubble sort

[back to top]

Quick note about Bubble sort: I don't want to get into the details about sorting algorithms here, but there is a great report, "Sorting in the Presence of Branch Prediction and Caches - Fast Sorting on Modern Computers" by Paul Biggar and David Gregg, where they describe and analyze elementary sorting algorithms in very nice detail (see chapter 4). And for a quick reference, this website has a nice animation of this algorithm.

Long story short: the "worst-case" complexity of the Bubble sort algorithm (i.e., "Big-O") $\Rightarrow \pmb O(n^2)$
print_sysinfo()
Bubble sort implemented in (C)Python
def python_bubblesort(a_list):
    """Bubblesort in Python for list objects (sorts in place)."""
    length = len(a_list)
    for i in range(length):
        for j in range(1, length):
            if a_list[j] < a_list[j-1]:
                a_list[j-1], a_list[j] = a_list[j], a_list[j-1]
    return a_list
Below is an improved version that quits early if no further swaps are needed.
def python_bubblesort_improved(a_list):
    """Bubblesort in Python for list objects (sorts in place)."""
    length = len(a_list)
    swapped = 1
    for i in range(length):
        if swapped:
            swapped = 0
            for ele in range(length - i - 1):
                if a_list[ele] > a_list[ele + 1]:
                    temp = a_list[ele + 1]
                    a_list[ele + 1] = a_list[ele]
                    a_list[ele] = temp
                    swapped = 1
    return a_list
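Why the early exit helps can be shown by counting passes over an already-sorted list. These are instrumented re-implementations for illustration, not the functions above: the naive version always makes n passes, while the improved one notices after a single swap-free pass that it is done.

```python
def bubblesort_passes(a_list):
    """Naive bubble sort; returns the number of passes made."""
    length = len(a_list)
    passes = 0
    for i in range(length):
        passes += 1
        for j in range(1, length):
            if a_list[j] < a_list[j-1]:
                a_list[j-1], a_list[j] = a_list[j], a_list[j-1]
    return passes

def bubblesort_improved_passes(a_list):
    """Bubble sort with early exit; returns the number of passes made."""
    length = len(a_list)
    passes = 0
    swapped = True
    for i in range(length):
        if not swapped:
            break
        swapped = False
        passes += 1
        for j in range(length - i - 1):
            if a_list[j] > a_list[j+1]:
                a_list[j], a_list[j+1] = a_list[j+1], a_list[j]
                swapped = True
    return passes

naive_passes = bubblesort_passes(list(range(100)))        # always n passes
improved_passes = bubblesort_improved_passes(list(range(100)))  # one pass
```

On sorted input the improved version is O(n) instead of O(n^2), which is what the benchmark below should reflect after the first %timeit run has sorted the list in place.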
Verifying that all implementations work correctly
import random
import copy

random.seed(4354353)
l = [random.randint(1, 1000) for num in range(1, 1000)]
l_sorted = sorted(l)

for f in [python_bubblesort, python_bubblesort_improved]:
    assert(l_sorted == f(copy.copy(l)))
print('Bubblesort works correctly')
Performance comparison
# small list
l_small = [random.randint(1, 100) for num in range(1, 100)]
l_small_cp = copy.copy(l_small)

%timeit python_bubblesort(l_small)
%timeit python_bubblesort_improved(l_small_cp)

# larger list
l_small = [random.randint(1, 10000) for num in range(1, 10000)]
l_small_cp = copy.copy(l_small)

%timeit python_bubblesort(l_small)
%timeit python_bubblesort_improved(l_small_cp)
Does our model obey the theory?
sm.stats.durbin_watson(arma_mod30.resid)

fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)

resid = arma_mod30.resid
stats.normaltest(resid)

fig = plt.figure(figsize=(12, 4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)

fig = plt.figure(figsize=(12, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)

r, q, p = sm.tsa.acf(resid, fft=True, qstat=True)
data = np.c_[r[1:], q, p]
index = pd.Index(range(1, q.shape[0] + 1), name="lag")
table = pd.DataFrame(data, columns=["AC", "Q", "Prob(>Q)"], index=index)
print(table)
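The Durbin-Watson statistic computed above can also be written out by hand, which makes its interpretation concrete: DW = sum((e_t - e_{t-1})^2) / sum(e_t^2), with values near 2 indicating no first-order autocorrelation, values near 0 strong positive autocorrelation, and values near 4 strong negative autocorrelation. A sketch on toy residuals:

```python
# Hand-rolled Durbin-Watson statistic for illustration.
import numpy as np

def durbin_watson(resid):
    resid = np.asarray(resid)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

# Perfectly alternating residuals (negative autocorrelation) push DW toward 4.
dw_alternating = durbin_watson([1.0, -1.0, 1.0, -1.0])

# Perfectly persistent residuals (positive autocorrelation) push DW toward 0.
dw_constant = durbin_watson([1.0, 1.0, 1.0, 1.0])
```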
v0.13.1/examples/notebooks/generated/statespace_arma_0.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Create random values for x in interval [0,1)
random.seed(98103)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Compute y
y = x.apply(lambda x: math.sin(4*x))
Add random Gaussian noise to y
random.seed(1)
e = graphlab.SArray([random.gauss(0, 1.0/3.0) for i in range(n)])
y = y + e
Put data into an SFrame to manipulate later
data = graphlab.SFrame({'X1': x, 'Y': y})
data
Create a function to plot the data, since we'll do it many times
def plot_data(data):
    plt.plot(data['X1'], data['Y'], 'k.')
    plt.xlabel('x')
    plt.ylabel('y')

plot_data(data)
Define some useful polynomial regression functions Define a function to create our features for a polynomial regression model of any degree:
def polynomial_features(data, deg):
    data_copy = data.copy()
    for i in range(1, deg):
        data_copy['X' + str(i+1)] = data_copy['X' + str(i)] * data_copy['X1']
    return data_copy
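The same construction works with a pandas DataFrame in place of an SFrame, which makes it easy to check: each new column X{i+1} is the previous power multiplied by X1.

```python
# Sketch of the polynomial feature construction using pandas.
import pandas as pd

def polynomial_features_pd(df, deg):
    out = df.copy()
    for i in range(1, deg):
        out['X' + str(i + 1)] = out['X' + str(i)] * out['X1']
    return out

features = polynomial_features_pd(pd.DataFrame({'X1': [1.0, 2.0, 3.0]}), deg=3)
# Columns X1, X2, X3 hold x, x^2, x^3 respectively.
```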
Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
def polynomial_regression(data, deg):
    model = graphlab.linear_regression.create(polynomial_features(data, deg),
                                              target='Y',
                                              l2_penalty=0., l1_penalty=0.,
                                              validation_set=None,
                                              verbose=False)
    return model
Define a function to plot the data and the model's predictions, since we are going to use it many times.
def plot_poly_predictions(data, model):
    plot_data(data)

    # Get the degree of the polynomial
    deg = len(model.coefficients['value']) - 1

    # Create 200 points on the x axis and compute the predicted value for each point
    x_pred = graphlab.SFrame({'X1': [i/200.0 for i in range(200)]})
    y_pred = model.predict(polynomial_features(x_pred, deg))

    # Plot predictions
    plt.plot(x_pred['X1'], y_pred, 'g-', label='degree ' + str(deg) + ' fit')
    plt.legend(loc='upper left')
    plt.axis([0, 1, -1.5, 2])
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Create a function that prints the polynomial coefficients in a pretty way :)
def print_coefficients(model):
    # Get the degree of the polynomial
    deg = len(model.coefficients['value'])-1

    # Get learned parameters as a list
    w = list(model.coefficients['value'])

    # Numpy has a nifty function to print out polynomials in a pretty way
    # (We'll use it, but it needs the parameters in the reverse order)
    print 'Learned polynomial for degree ' + str(deg) + ':'
    w.reverse()
    print numpy.poly1d(w)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
# Fit a degree-2 polynomial

Fit our degree-2 polynomial to the data generated above:
model = polynomial_regression(data, deg=2)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Inspect learned parameters
print_coefficients(model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Form and plot our predictions along a grid of x values:
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Fit a degree-4 polynomial
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Fit a degree-16 polynomial
model = polynomial_regression(data, deg=16)
print_coefficients(model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Woah!!!! Those coefficients are crazy! On the order of 10^6.
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.

# Ridge Regression

Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients $\|w\|_2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").

Define our function to solve the ridge objective for a polynomial regression model of any degree:
def polynomial_ridge_regression(data, deg, l2_penalty):
    model = graphlab.linear_regression.create(polynomial_features(data,deg),
                                              target='Y', l2_penalty=l2_penalty,
                                              validation_set=None, verbose=False)
    return model
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
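The shrinkage effect described above can be seen directly from the ridge closed-form solution $w = (X^TX + \lambda I)^{-1}X^Ty$. Here is a minimal sketch on synthetic data (plain numpy, not GraphLab; the data and helper name are our own):

```python
import numpy as np

# Synthetic regression problem with known coefficients
rng = np.random.RandomState(0)
X = rng.randn(50, 5)
y = X @ np.array([3., -2., 0.5, 0., 1.]) + 0.1 * rng.randn(50)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w_small = ridge_fit(X, y, 1e-6)   # essentially least squares
w_large = ridge_fit(X, y, 1e3)    # heavily shrunk coefficients
```

As lambda grows, the penalty term dominates and the coefficient vector shrinks toward zero — exactly the behavior the degree-16 fits below illustrate.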
Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Let's look at fits for a sequence of increasing lambda values
for l2_penalty in [1e-25, 1e-10, 1e-6, 1e-3, 1e0, 1e2]:
    model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
    print 'lambda = %.2e' % l2_penalty
    print_coefficients(model)
    print '\n'
    plt.figure()
    plot_poly_predictions(data,model)
    plt.title('Ridge, lambda = %.2e' % l2_penalty)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
# Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength

We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
    # Create polynomial features (keep the returned SFrame)
    data = polynomial_features(data, deg)

    # Create as many folds for cross validation as number of data points
    num_folds = len(data)
    folds = graphlab.cross_validation.KFold(data, num_folds)

    # for each value of l2_penalty, fit a model for each fold and compute average MSE
    l2_penalty_mse = []
    min_mse = None
    best_l2_penalty = None
    for l2_penalty in l2_penalty_values:
        next_mse = 0.0
        for train_set, validation_set in folds:
            # train model
            model = graphlab.linear_regression.create(train_set, target='Y',
                                                      l2_penalty=l2_penalty,
                                                      validation_set=None, verbose=False)

            # predict on validation set
            y_test_predicted = model.predict(validation_set)
            # compute squared error
            next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()

        # save squared error in list of MSE for each l2_penalty
        next_mse = next_mse/num_folds
        l2_penalty_mse.append(next_mse)
        if min_mse is None or next_mse < min_mse:
            min_mse = next_mse
            best_l2_penalty = l2_penalty

    return l2_penalty_mse, best_l2_penalty
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Run LOO cross validation for "num" values of lambda, on a log scale
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse, best_l2_penalty = loo(data, 16, l2_penalty_values)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Plot results of estimating LOO for each value of lambda
plt.plot(l2_penalty_values, l2_penalty_mse, 'k-')
plt.xlabel(r'$\lambda$')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
best_l2_penalty

model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
# Lasso Regression

Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients $\|w\|_1$.

Define our function to solve the lasso objective for a polynomial regression model of any degree:
def polynomial_lasso_regression(data, deg, l1_penalty):
    model = graphlab.linear_regression.create(polynomial_features(data,deg),
                                              target='Y', l2_penalty=0.,
                                              l1_penalty=l1_penalty,
                                              validation_set=None,
                                              solver='fista', verbose=False,
                                              max_iterations=3000, convergence_threshold=1e-10)
    return model
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
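The FISTA solver requested above relies on the soft-thresholding operator, which is what produces exact zeros in the lasso solution. A tiny self-contained ISTA sketch (plain numpy, illustrative only — the solver and data here are our own, not GraphLab's):

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 norm: shrink toward 0, exact zeros appear
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal ISTA solver for (1/2)||y - Xw||^2 + lam*||w||_1."""
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    for _ in range(n_iter):
        w = soft_threshold(w + X.T @ (y - X @ w) / L, lam / L)
    return w

rng = np.random.RandomState(1)
X = rng.randn(80, 10)
w_true = np.zeros(10); w_true[:3] = [2., -1.5, 1.]
y = X @ w_true                   # noiseless, sparse ground truth
w_hat = lasso_ista(X, y, lam=5.0)
```

The recovered `w_hat` keeps the three true nonzero coefficients (slightly shrunk by the penalty) and drives the rest to zero — the feature-selection behavior the text describes.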
# Explore the lasso solution as a function of a few different penalty strengths

We refer to lambda in the lasso case below as "l1_penalty".
for l1_penalty in [0.0001, 0.01, 0.1, 10]:
    model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
    print 'l1_penalty = %e' % l1_penalty
    print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
    print_coefficients(model)
    print '\n'
    plt.figure()
    plot_poly_predictions(data,model)
    plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
Chapter8/Overfitting_Demo_Ridge_Lasso.ipynb
dkirkby/astroml-study
mit
# Remap MEG channel types

In this example, MEG data are remapped from one channel type to another. This is useful to:

- visualize combined magnetometers and gradiometers as magnetometers or gradiometers.
- run statistics from both magnetometers and gradiometers while working with a single type of channels.
# Author: Mainak Jas <mainak.jas@telecom-paristech.fr>
# License: BSD-3-Clause

import mne
from mne.datasets import sample

print(__doc__)

# read the evoked
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
fname = meg_path / 'sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory', baseline=(None, 0))
stable/_downloads/709b65f447b790ec915e9d00176f0746/virtual_evoked.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
First, let's remap the gradiometers to magnetometers, and plot the original and remapped topomaps of the magnetometers.
# go from grad + mag to mag and plot original mag
virt_evoked = evoked.as_type('mag')
evoked.plot_topomap(ch_type='mag', title='mag (original)', time_unit='s')

# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='mag', time_unit='s',
                         title='mag (interpolated from mag + grad)')
stable/_downloads/709b65f447b790ec915e9d00176f0746/virtual_evoked.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now, we remap the magnetometers to gradiometers, and plot the original and remapped topomaps of the gradiometers.
# go from grad + mag to grad and plot original grad
virt_evoked = evoked.as_type('grad')
evoked.plot_topomap(ch_type='grad', title='grad (original)', time_unit='s')

# plot interpolated grad + mag
virt_evoked.plot_topomap(ch_type='grad', time_unit='s',
                         title='grad (interpolated from mag + grad)')
stable/_downloads/709b65f447b790ec915e9d00176f0746/virtual_evoked.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Import NGC 2516 low-mass star data.
ngc2516 = np.genfromtxt('data/ngc2516_Christophe_v3.dat')  # data for this study from J&J (2012)
irwin07 = np.genfromtxt('data/irwin2007.phot')  # data from Irwin+ (2007)
jeffr01 = np.genfromtxt('data/jeff_2001.tsv', delimiter=';', comments='#')  # data from Jeffries+ (2001)
jeffr01 = np.array([star for star in jeffr01 if star[9] == 1])  # extract candidate members
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
Jackson et al. (2009) recommend a small correction to I-band magnitudes from Irwin et al. (2007) to place them on the same photometric scale as Jeffries et al. (2001), which they deem to be "better calibrated." Jackson & Jeffries (2012) suggest that the tabulated data (on Vizier) has been transformed to the "better calibrated" system. Key to understanding their results, however, is to also transform $(V-I_C)$ and then calculate a correction to $V$-band magnitudes, as well.
irwinVI = (irwin07[:, 7] - irwin07[:, 8])*(1.0 - 0.153) + 0.300
irwin07[:, 8] = (1.0 - 0.0076)*irwin07[:, 8] + 0.080
irwin07[:, 7] = irwinVI + irwin07[:, 8]
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
~~Note that it is not immediately clear whether this correction should be applied to photometric data cataloged by Jackson & Jeffries (2012).~~ Reading through Irwin et al. (2007) and Jackson & Jeffries (2012), it appears that the transformations are largely performed to transform the Irwin+ photometric system (Johnson $I$-band) into Cousins $I_C$ magnitudes. There may be reasons related to a "better calibration," but the issue is to first and foremost put them in the same photometric system. Why that involves altering the $V$-band magnitudes is not abundantly clear. Now data for the Pleiades.
pleiades_s07 = np.genfromtxt('../pleiades_colors/data/Stauffer_Pleiades_litPhot.txt',
                             usecols=(2, 3, 5, 6, 8, 9, 13, 14, 15))
pleiades_k14 = np.genfromtxt('../pleiades_colors/data/Kamai_Pleiades_cmd.dat',
                             usecols=(0, 1, 2, 3, 4, 5))
iso_emp_k14 = np.genfromtxt('../pleiades_colors/data/Kamai_Pleiades_emp.iso')  # empirical Pleiades isochrone
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
Adopt literature values for reddening, neglecting differential reddening across the Pleiades.
pl_dis = 5.61
pl_ebv = 0.034
pl_evi = 1.25*pl_ebv
pl_evk = 2.78*pl_ebv
pl_eik = pl_evk - pl_evi
pl_av = 3.12*pl_ebv

ng_dis = 7.95
ng_ebv = 0.12
ng_evi = 1.25*ng_ebv
ng_evk = 2.78*ng_ebv
ng_eik = ng_evk - ng_evi
ng_av = 3.12*ng_ebv
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
Overlay the CMDs for each cluster, corrected for reddening and distance.
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharex=True, sharey=True)

for axis in ax:
    axis.grid(True)
    axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
    axis.set_ylim(12., 5.)
    axis.set_xlim(0.5, 3.0)
    axis.set_xlabel('$(V - I_C)$', fontsize=20.)

ax[0].set_ylabel('$M_V$', fontsize=20.)

ax[0].plot(jeffr01[:,5] - ng_evi, jeffr01[:,3] - ng_av - ng_dis, 'o',
           markersize=4.0, c='#555555', alpha=0.2)
ax[0].plot(irwin07[:, 7] - irwin07[:, 8] - ng_evi, irwin07[:, 7] - ng_av - ng_dis, 'o',
           c='#1e90ff', markersize=4.0, alpha=0.6)
ax[0].plot(ngc2516[:, 1] - ngc2516[:, 2] - ng_evi, ngc2516[:, 1] - ng_av - ng_dis, 'o',
           c='#555555', markersize=4.0, alpha=0.8)
ax[0].plot(iso_emp_k14[:, 2] - pl_evi, iso_emp_k14[:, 0] - pl_av - pl_dis,
           dashes=(20., 5.), lw=3, c='#b22222')

ax[1].plot(irwin07[:, 7] - irwin07[:, 8] - ng_evi, irwin07[:, 7] - ng_av - ng_dis, 'o',
           c='#1e90ff', markersize=4.0, alpha=0.6)
ax[1].plot(ngc2516[:, 1] - ngc2516[:, 2] - ng_evi, ngc2516[:, 1] - ng_av - ng_dis, 'o',
           c='#555555', markersize=4.0, alpha=0.6)
ax[1].plot(iso_emp_k14[:, 2] - pl_evi, iso_emp_k14[:, 0] - pl_av - pl_dis,
           dashes=(20., 5.), lw=3, c='#b22222')
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
While the Stauffer et al (2007) and Jackson et al. (2009) samples lie a bit redward of the median sequence in the Jeffries et al. (2001), the former two samples compare well against the empirical cluster sequence (shown as a red dashed line; Kamai et al. 2014) from the Pleiades in a $M_V/(V-I_C)$ CMD. What about $M_V/(V-K)$ and $M_V/(I_C-K)$ CMDs?
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)

for axis in ax:
    axis.grid(True)
    axis.tick_params(which='major', axis='both', length=15., labelsize=16.)
    axis.set_ylim(12., 5.)

ax[0].set_xlim(1.0, 6.0)
ax[0].set_xlabel('$(V - K)$', fontsize=20.)
ax[0].set_ylabel('$M_V$', fontsize=20.)

# include K_CIT --> K_2mass correction for NGC 2516
ax[0].plot(ngc2516[:, 1] - ngc2516[:, 3] - 0.024 - ng_evk, ngc2516[:, 1] - ng_av - ng_dis, 'o',
           c='#555555', markersize=4.0, alpha=0.6)
ax[0].plot(iso_emp_k14[:, 3] - pl_evk, iso_emp_k14[:, 0] - pl_av - pl_dis,
           dashes=(20., 5.), lw=3, c='#b22222')

ax[1].set_xlim(0.5, 3.0)
ax[1].set_xlabel('$(I_C - K)$', fontsize=20.)
ax[1].plot(ngc2516[:, 2] - ngc2516[:, 3] - 0.024 - ng_eik, ngc2516[:, 1] - ng_av - ng_dis, 'o',
           c='#555555', markersize=4.0, alpha=0.6)
ax[1].plot(iso_emp_k14[:, 3] - iso_emp_k14[:, 2] - pl_eik, iso_emp_k14[:, 0] - pl_av - pl_dis,
           dashes=(20., 5.), lw=3, c='#b22222')
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
While data in the $M_V/(V-I_C)$ CMD appears to be bluer for early M-dwarf stars and redder for later M-dwarf stars, we find that M-dwarfs in NGC 2516 appear to be generally bluer than low-mass stars in the Pleiades. An interesting implication is that empirical isochrones based on the Pleiades or NGC 2516 may not reliably fit other clusters. Something is different between the two. Is it magnetic activity, or perhaps chemical composition?

$(B-V)/(V-I_C)$ color-color diagram using data from Jeffries et al. (2001) for NGC 2516.
fig, ax = plt.subplots(1, 1, figsize=(6., 8.))

ax.grid(True)
ax.tick_params(which='major', axis='both', length=15., labelsize=16.)
ax.set_xlim(-0.5, 2.0)
ax.set_ylim( 0.0, 2.5)
ax.set_ylabel('$(V - I_C)$', fontsize=20.)
ax.set_xlabel('$(B - V)$', fontsize=20.)

ax.plot(jeffr01[:,4] - ng_ebv, jeffr01[:,5] - ng_evi, 'o', markersize=4.0, c='#555555', alpha=0.2)
ax.plot(iso_emp_k14[:, 1] - pl_ebv, iso_emp_k14[:, 2] - pl_evi, dashes=(20., 5.), lw=3, c='#b22222')
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
The empirical isochrone from Kamai et al. (2014) agrees well with photometric data for NGC 2516, with both corrected for differential extinction. There may be some small disagreements at various locations along the sequence, but the morphology of the empirical isochrone is broadly consistent with NGC 2516. However, stars in NGC 2516 appear to have bluer $(B-V)$ colors for $(V-I_C) > 1.8$.

Exploring a transformation of the Irwin+ data from Johnson $(V-I)$ to Cousins $(V-I_C)$ from Bessell (1979). This ignores issues related to photometric calibrations, which Jackson & Jeffries posit are important. I'm not claiming JJ are wrong, just wondering how using a more standard photometric transformation would affect the resulting CMDs.

WAIT: The Irwin et al. (2007) data file suggests the I-band photometry is quoted in terms of Cousins $I_C$, not Johnson... unlike what is stated in their paper.
tmp_data = np.genfromtxt('data/irwin2007.phot') # re-load Irwin et al. (2007) data into new array
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
Now, apply the transformation from Bessell (1979):
old_vmi = tmp_data[:, 7] - tmp_data[:, 8]
new_vmi = old_vmi*0.835 - 0.130

fig, ax = plt.subplots(1, 1, figsize=(6., 8.))

ax.grid(True)
ax.tick_params(which='major', axis='both', length=15., labelsize=16.)
ax.set_xlim(2.0, 3.0)
ax.set_ylim(22., 16.)
ax.set_xlabel('$(V - I_C)$', fontsize=20.)
ax.set_ylabel('$M_V$', fontsize=20.)

ax.plot(jeffr01[:, 5], jeffr01[:, 3], 'o', markersize=4.0, c='#555555', alpha=0.2)
ax.plot(old_vmi, tmp_data[:, 7], 'o', c='#b22222', alpha=0.6)
Projects/ngc2516_spots/ngc2516_vs_pleiades.ipynb
gfeiden/Notebook
mit
Visualize the Data
import matplotlib.pyplot as plt
%matplotlib inline

# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy()

# get one image from the batch
img = np.squeeze(images[0])

fig = plt.figure(figsize = (5,5))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
DEEP LEARNING/Pytorch from scratch/TODO/Autoencoders/linear-autoencoder/Simple_Autoencoder_Solution.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
# Linear Autoencoder

We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building a simple autoencoder. The encoder and decoder should be made of one linear layer. The units that connect the encoder and decoder will be the compressed representation.

Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values that match this input value range.

<img src='notebook_ims/simple_autoencoder.png' width=50% />

TODO: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. The encoder and decoder will be made of one linear layer, each. The depth dimensions should change as follows: 784 inputs > encoding_dim > 784 outputs. All layers will have ReLU activations applied except for the final output layer, which has a sigmoid activation.

The compressed representation should be a vector with dimension encoding_dim=32.
import torch.nn as nn
import torch.nn.functional as F

# define the NN architecture
class Autoencoder(nn.Module):
    def __init__(self, encoding_dim):
        super(Autoencoder, self).__init__()
        ## encoder ##
        # linear layer (784 -> encoding_dim)
        self.fc1 = nn.Linear(28 * 28, encoding_dim)

        ## decoder ##
        # linear layer (encoding_dim -> input size)
        self.fc2 = nn.Linear(encoding_dim, 28*28)

    def forward(self, x):
        # add layer, with relu activation function
        x = F.relu(self.fc1(x))
        # output layer (sigmoid for scaling from 0 to 1)
        x = F.sigmoid(self.fc2(x))
        return x

# initialize the NN
encoding_dim = 32
model = Autoencoder(encoding_dim)
print(model)
DEEP LEARNING/Pytorch from scratch/TODO/Autoencoders/linear-autoencoder/Simple_Autoencoder_Solution.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
# Training

Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss and the test loss afterwards.

We are not concerned with labels in this case, just images, which we can get from the train_loader. Because we're comparing pixel values in input and output images, it will be best to use a loss that is meant for a regression task. Regression is all about comparing quantities rather than probabilistic values. So, in this case, I'll use MSELoss, and compare output images and input images as follows:

loss = criterion(outputs, images)

Otherwise, this is pretty straightforward training with PyTorch. We flatten our images, pass them into the autoencoder, and record the training loss as we go.
# specify loss function
criterion = nn.MSELoss()

# specify optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

# number of epochs to train the model
n_epochs = 20

for epoch in range(1, n_epochs+1):
    # monitor training loss
    train_loss = 0.0

    ###################
    # train the model #
    ###################
    for data in train_loader:
        # _ stands in for labels, here
        images, _ = data
        # flatten images
        images = images.view(images.size(0), -1)
        # clear the gradients of all optimized variables
        optimizer.zero_grad()
        # forward pass: compute predicted outputs by passing inputs to the model
        outputs = model(images)
        # calculate the loss
        loss = criterion(outputs, images)
        # backward pass: compute gradient of the loss with respect to model parameters
        loss.backward()
        # perform a single optimization step (parameter update)
        optimizer.step()
        # update running training loss
        train_loss += loss.item()*images.size(0)

    # print avg training statistics
    train_loss = train_loss/len(train_loader)
    print('Epoch: {} \tTraining Loss: {:.6f}'.format(epoch, train_loss))
DEEP LEARNING/Pytorch from scratch/TODO/Autoencoders/linear-autoencoder/Simple_Autoencoder_Solution.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Checking out the results Below I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)

images_flatten = images.view(images.size(0), -1)
# get sample outputs
output = model(images_flatten)
# prep images for display
images = images.numpy()

# output is resized into a batch of images
output = output.view(batch_size, 1, 28, 28)
# use detach when it's an output that requires_grad
output = output.detach().numpy()

# plot the first ten input images and then reconstructed images
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(25,4))

# input images on top row, reconstructions on bottom
for images, row in zip([images, output], axes):
    for img, ax in zip(images, row):
        ax.imshow(np.squeeze(img), cmap='gray')
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
DEEP LEARNING/Pytorch from scratch/TODO/Autoencoders/linear-autoencoder/Simple_Autoencoder_Solution.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
c) set rebound.units:
sim.units = ('yr', 'AU', 'Msun')
print("G = {0}.".format(sim.G))
ipython_examples/Units.ipynb
dtamayo/rebound
gpl-3.0
When you set the units, REBOUND converts G to the appropriate value for the units passed (you must pass exactly 3 units for mass, length and time, but they can be in any order). Note that if you are interested in high precision, you have to be quite particular about the exact units.

As an aside, the reason why G differs from $4\pi^2 \approx 39.47841760435743$ is mostly that we follow the convention of defining a "year" as 365.25 days (a Julian year), whereas the Earth's sidereal orbital period is closer to 365.256 days (and at an even finer level, Venus and Mercury modify the orbital period). G would only equal $4\pi^2$ in units where a "year" was exactly equal to one orbital period at $1\ \mathrm{AU}$ around a $1 M_\odot$ star.

# Adding particles

If you use sim.units at all, you need to set the units before adding any particles. You can then add particles in any of the ways described in WHFast.ipynb. You can also add particles drawing from the horizons database (see Churyumov-Gerasimenko.ipynb). If you don't set the units ahead of time, HORIZONS will return initial conditions in units of AU, $M_\odot$ and yrs/$2\pi$, such that G=1. Above we switched to units of AU, $M_\odot$ and yrs, so when we add Earth:
sim.add('Earth')
ps = sim.particles

import math
print("v = {0}".format(math.sqrt(ps[0].vx**2 + ps[0].vy**2 + ps[0].vz**2)))
ipython_examples/Units.ipynb
dtamayo/rebound
gpl-3.0
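The Julian-year argument above can be checked with a quick back-of-the-envelope calculation from Kepler's third law (approximate values, independent of REBOUND):

```python
import math

# Earth's sidereal period (~365.2564 days) expressed in 365.25-day Julian years
P = 365.2564 / 365.25

# Kepler III with a = 1 AU and total mass ~ 1 Msun: G*M = 4*pi^2 * a^3 / P^2
G = 4 * math.pi**2 / P**2
```

This lands at roughly 39.477, just below $4\pi^2 \approx 39.478$, matching the value REBOUND reports for (yr, AU, Msun) units to within the accuracy of these rounded inputs.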
we see that the velocity is correctly set to approximately $2\pi$ AU/yr. If you'd like to enter the initial conditions in one set of units, and then use a different set for the simulation, you can use the sim.convert_particle_units function, which converts both the initial conditions and G. Since we added Earth above, we restart with a new Simulation instance; otherwise we'll get an error saying that we can't set the units with particles already loaded:
sim = rebound.Simulation()
sim.units = ('m', 's', 'kg')
sim.add(m=1.99e30)
sim.add(m=5.97e24, a=1.5e11)
sim.convert_particle_units('AU', 'yr', 'Msun')
sim.status()
ipython_examples/Units.ipynb
dtamayo/rebound
gpl-3.0
We first set the units to SI, added (approximate values for) the Sun and Earth in these units, and switched to AU, yr, $M_\odot$. You can see that the particle states were converted correctly--the Sun has a mass of about 1, and the Earth has a distance of about 1.

Note that when you pass orbital elements to sim.add, you must make sure G is set correctly ahead of time (through any of the 3 methods above), since it will use the value of sim.G to generate the velocities:
sim = rebound.Simulation()
print("G = {0}".format(sim.G))
sim.add(m=1.99e30)
sim.add(m=5.97e24, a=1.5e11)
sim.status()
ipython_examples/Units.ipynb
dtamayo/rebound
gpl-3.0
1st Step: Construct Computational Graph

Question 1: Prepare the input variables (x,y_label) of the computational graph

Hint: You may use the function tf.placeholder()
# computational graph inputs
batch_size = 100
d = train_data.shape[1]
nc = 10
x = tf.placeholder(tf.float32,[batch_size,d]); print('x=',x,x.get_shape())
y_label = tf.placeholder(tf.float32,[batch_size,nc]); print('y_label=',y_label,y_label.get_shape())
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 2: Prepare the variables (W,b) of the computational graph

Hint: You may use the functions tf.Variable(), tf.truncated_normal()
# computational graph variables
initial = tf.truncated_normal([d,nc], stddev=0.1); W = tf.Variable(initial); print('W=',W.get_shape())
b = tf.Variable(tf.zeros([nc],tf.float32)); print('b=',b.get_shape())
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 3: Compute the classifier such that
$$
y = \textrm{softmax}(Wx + b)
$$

Hint: You may use the functions tf.matmul(), tf.nn.softmax()
# Construct CG / output value
y = tf.matmul(x, W); print('y1=',y,y.get_shape())
y += b; print('y2=',y,y.get_shape())
y = tf.nn.softmax(y); print('y3=',y,y.get_shape())
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 4: Construct the loss of the computational graph such that
$$
\textrm{loss} = \textrm{cross entropy}(y_{label}, y) = \textrm{mean}_{\textrm{all data}}\ \sum_{\textrm{all classes}} -\, y_{label} \cdot \log(y)
$$

Hint: You may use the functions tf.reduce_mean(), tf.reduce_sum(), tf.log()
# Loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
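As a numerical sanity check of the cross-entropy formula above, here is a plain-numpy version on a toy batch (illustrative only; not part of the TensorFlow graph):

```python
import numpy as np

# Toy softmax outputs (rows sum to 1) and one-hot labels
y = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
y_label = np.array([[1, 0, 0],
                    [0, 1, 0]], dtype=float)

# mean over the batch of -sum_classes y_label * log(y)
xent = np.mean(-np.sum(y_label * np.log(y), axis=1))
# xent = -(log 0.7 + log 0.8) / 2 ~ 0.2899
```

This mirrors `tf.reduce_mean(-tf.reduce_sum(y_label * tf.log(y), 1))` exactly: the inner sum picks out the log-probability of the true class, and the outer mean averages over the batch.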
Question 5: Construct the L2 regularization of (W,b) to the computational graph such that
$$
R(W) = \|W\|_2^2, \quad R(b) = \|b\|_2^2
$$

Hint: You may use the function tf.nn.l2_loss()
reg_loss = tf.nn.l2_loss(W)
reg_loss += tf.nn.l2_loss(b)
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 6: Form the total loss $$ total\ loss = cross\ entropy(y_{label},y) + reg_par* (R(W) + R(b)) $$
reg_par = 1e-3
total_loss = cross_entropy + reg_par*reg_loss
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 7: Perform optimization of the total loss for learning weight variables of the computational graph Hint: You may use the function tf.train.GradientDescentOptimizer(learning_rate).minimize(total_loss)
# Update CG variables / backward pass
train_step = tf.train.GradientDescentOptimizer(0.25).minimize(total_loss)
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit
Question 8: Evaluate the accuracy Hint: You may use the function tf.equal(tf.argmax(y,1), tf.argmax(y_label,1)) and tf.reduce_mean()
# Accuracy
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_label,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
algorithms/04_sol_tensorflow.ipynb
mdeff/ntds_2016
mit