The dates are integers representing the number of days from an origin date. The origin date for this dataset is determined from here and here and is "1899-12-30". The Period integers refer to 30-minute intervals in a 24-hour day, hence there are 48 for each day. Let's extract the date and date-time.
df["Date"] = df["Date"].apply(lambda x: pd.Timestamp("1899-12-30") + pd.Timedelta(x, unit="days"))
df["ds"] = df["Date"] + pd.to_timedelta((df["Period"] - 1) * 30, unit="m")
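As a quick sanity check of the conversion above (with made-up serial values, not the real dataset): serial 40909 corresponds to 2012-01-01 relative to the "1899-12-30" origin, and Period 1 / Period 48 map to midnight / 23:30 of that day.

```python
import pandas as pd

# Toy frame mimicking the dataset's Date/Period encoding (values hypothetical).
df = pd.DataFrame({"Date": [40909, 40909], "Period": [1, 48]})
df["Date"] = df["Date"].apply(
    lambda x: pd.Timestamp("1899-12-30") + pd.Timedelta(x, unit="days")
)
df["ds"] = df["Date"] + pd.to_timedelta((df["Period"] - 1) * 30, unit="m")
print(df["ds"].tolist())
# Period 1 is the first half-hour (midnight); Period 48 is the last (23:30).
```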
examples/notebooks/mstl_decomposition.ipynb (bashtage/statsmodels, bsd-3-clause)
We will be interested in OperationalLessIndustrial, which is the electricity demand excluding the demand from certain high-energy industrial users. We will resample the data to hourly and filter the data to the same time period as the original MSTL paper [1], which is the first 149 days of the year 2012.
timeseries = df[["ds", "OperationalLessIndustrial"]]
timeseries.columns = ["ds", "y"]  # Rename OperationalLessIndustrial to y for simplicity.

# Filter for the first 149 days of 2012.
start_date = pd.to_datetime("2012-01-01")
end_date = start_date + pd.Timedelta("149D")
mask = (timeseries["ds"] >= start_date) & (timeseries["ds"] < end_date)
timeseries = timeseries[mask]

# Resample to hourly.
timeseries = timeseries.set_index("ds").resample("H").sum()
timeseries.head()
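The resampling step can be sketched on synthetic data (values made up for illustration): `.resample("H").sum()` adds up the two half-hourly readings that fall in each hour.

```python
import pandas as pd

# Synthetic half-hourly demand: two hours, four 30-minute readings.
ts = pd.DataFrame(
    {
        "ds": pd.date_range("2012-01-01 00:00", periods=4, freq="30min"),
        "y": [10.0, 20.0, 30.0, 40.0],
    }
)
hourly = ts.set_index("ds").resample("H").sum()
print(hourly)
# Each hourly value is the sum of its two half-hour readings.
```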
Decompose electricity demand using MSTL Let's apply MSTL to this dataset. Note: stl_kwargs are set to give results close to [1], which used R and therefore has slightly different default settings for the underlying STL parameters. It would be rare to manually set inner_iter and outer_iter explicitly in practice.
mstl = MSTL(
    timeseries["y"],
    periods=[24, 24 * 7],
    iterate=3,
    stl_kwargs={"seasonal_deg": 0, "inner_iter": 2, "outer_iter": 0},
)
res = mstl.fit()  # Use .fit() to perform and return the decomposition.
ax = res.plot()
plt.tight_layout()
The multiple seasonal components are stored as a pandas dataframe in the seasonal attribute:
res.seasonal.head()
Let's inspect the seasonal components in a bit more detail and look at the first few days and weeks to examine the daily and weekly seasonality.
fig, ax = plt.subplots(nrows=2, figsize=[10, 10])
res.seasonal["seasonal_24"].iloc[:24 * 3].plot(ax=ax[0])
ax[0].set_ylabel("seasonal_24")
ax[0].set_title("Daily seasonality")
res.seasonal["seasonal_168"].iloc[:24 * 7 * 3].plot(ax=ax[1])
ax[1].set_ylabel("seasonal_168")
ax[1].set_title("Weekly seasonality")
plt.tight_layout()
We can see that the daily seasonality of electricity demand is well captured. These are the first few days of January, which falls during the Australian summer, so there is a peak in the afternoon most likely due to air conditioning use. For the weekly seasonality we can see that there is less usage during the weekends. One of the advantages of MSTL is that it allows us to capture seasonality which changes over time. So let's look at the seasonality during the cooler month of May.
fig, ax = plt.subplots(nrows=2, figsize=[10, 10])
mask = res.seasonal.index.month == 5
res.seasonal[mask]["seasonal_24"].iloc[:24 * 3].plot(ax=ax[0])
ax[0].set_ylabel("seasonal_24")
ax[0].set_title("Daily seasonality")
res.seasonal[mask]["seasonal_168"].iloc[:24 * 7 * 3].plot(ax=ax[1])
ax[1].set_ylabel("seasonal_168")
ax[1].set_title("Weekly seasonality")
plt.tight_layout()
Now we can see an additional peak in the evening! This could be related to the heating and lighting now required in the evenings, so this makes sense. We see that the main weekly pattern of lower demand over the weekends continues. The other components can also be extracted from the trend and resid attributes:
display(res.trend.head())  # Trend component
display(res.resid.head())  # Residual component
Build a simple Keras DNN model We will use feature columns to connect our raw data to our Keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide. In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column(). We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. Lab Task #1: Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the INPUT_COLS list, while the values should be numeric feature columns.
INPUT_COLS = [
    'pickup_longitude',
    'pickup_latitude',
    'dropoff_longitude',
    'dropoff_latitude',
    'passenger_count',
]

# Create input layer of feature columns
# TODO 1
feature_columns = # TODO: Your code goes here.
courses/machine_learning/deepdive2/introduction_to_tensorflow/labs/3_keras_sequential_api.ipynb (turbomanage/training-data-analyst, apache-2.0)
Next, we create the DNN model. The Sequential model is a linear stack of layers; when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. Lab Task #2a: Create a deep neural network using Keras's Sequential API. In the cell below, use the tf.keras.layers library to create all the layers for your deep neural network.
# Build a keras DNN model using Sequential API
# TODO 2a
model = # TODO: Your code goes here.
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments: An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function. A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function. We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error. Lab Task #2b: Compile the model you created above. Create a custom loss function called rmse which computes the root mean squared error between y_true and y_pred. Pass this function to the model as an evaluation metric.
# TODO 2b
# Create a custom evaluation metric
def rmse(y_true, y_pred):
    return # TODO: Your code goes here.

# Compile the keras model
# TODO: Your code goes here.
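The `rmse` metric itself is left as the lab exercise. As a sketch of the underlying formula only (a NumPy version, not the Keras solution the lab expects), root mean squared error is the square root of the mean of squared residuals:

```python
import numpy as np

def rmse_np(y_true, y_pred):
    """Root mean squared error: sqrt of the mean of squared residuals."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmse_np([0.0, 0.0], [3.0, 4.0]))  # sqrt((9 + 16) / 2)
```

A common Keras-side equivalent uses the same structure with tensor ops, e.g. `tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))`, though the exact form is up to the lab.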
There are various arguments you can set when calling the .fit method. Here x specifies the input data, which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callbacks argument we specify a TensorBoard callback so we can inspect TensorBoard after training. Lab Task #3: In the cell below, you will train your model. First, define the steps_per_epoch, then train your model using .fit(), saving the model training output to a variable called history.
# TODO 3
%time steps_per_epoch = # TODO: Your code goes here.

LOGDIR = "./taxi_trained"
history = # TODO: Your code goes here.
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. Lab Task #4: Use tf.saved_model.save to export the trained model to a Tensorflow SavedModel format. Reference the documentation for tf.saved_model.save as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command saved_model_cli. You can read more about the command line interface and the show and run commands it supports in the documentation here.
# TODO 4a
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))

tf.saved_model.save(  # TODO: Your code goes here.

# TODO 4b
!saved_model_cli show \
    --tag_set # TODO: Your code goes here.
    --signature_def # TODO: Your code goes here.
    --dir # TODO: Your code goes here.

!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
Deploy our model to AI Platform Finally, we will deploy our trained model to AI Platform and see how we can make online predictions. Lab Task #5a: Complete the code in the cell below to deploy your trained model to AI Platform using the gcloud ai-platform versions create command. Have a look at the documentation for how to create a model version with gcloud.
%%bash
# TODO 5a
PROJECT= #TODO: Change this to your PROJECT
BUCKET=${PROJECT}
REGION=us-east1
MODEL_NAME=taxifare
VERSION_NAME=dnn

if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
    echo "$MODEL_NAME already exists"
else
    echo "Creating $MODEL_NAME"
    gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi

if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
    echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
    echo yes | gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
    echo "Please run this cell again if you don't see a Creating message ... "
    sleep 2
fi

echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create \
    --model= #TODO: Your code goes here.
    --framework= #TODO: Your code goes here.
    --python-version= #TODO: Your code goes here.
    --runtime-version= #TODO: Your code goes here.
    --origin= #TODO: Your code goes here.
    --staging-bucket= #TODO: Your code goes here.

%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104, "dropoff_longitude": -73.983766, "dropoff_latitude": 40.755174, "passenger_count": 3.0}
Lab Task #5b: Complete the code in the cell below to call prediction on your deployed model for the example you just created in the input.json file above.
# TODO 5b
!gcloud ai-platform predict \
    --model #TODO: Your code goes here.
    --json-instances #TODO: Your code goes here.
    --version #TODO: Your code goes here.
.shape is a tuple of the number of rows and the number of columns:
grades.shape
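The `grades` table is built earlier in the notebook and is not shown in this excerpt; a minimal stand-in (with hypothetical subjects and teachers) illustrates what `.shape` reports:

```python
import pandas as pd

# Hypothetical stand-in for the notebook's `grades` table.
grades = pd.DataFrame(
    {
        "subject": ["Calculus 1", "Physics 1i", "System theory"],
        "teacher": ["Alice Smith", "Bob Jones", "Carol White"],
        "grade": [5, 4, 3],
        "semester": [1, 1, 2],
    }
)
print(grades.shape)  # (number of rows, number of columns)
```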
notebooks/Pandas_introduction.ipynb (juditacs/labor, lgpl-3.0)
.head() returns the first 5 rows of a DataFrame, .tail() returns the last ones. These are very useful for manual data inspection. You should always check the contents of your dataframes.
grades.head()
Printing only the last two elements:
grades.tail(2)
Each of these operations returns a new dataframe. We can confirm this via their object identity:
id(grades), id(grades.tail(2))
But these objects are not copied unless we explicitly ask for a copy:
grades.tail(2).copy()
Selecting rows, columns and cells The first boldfaced column of the table is the index column. It's possible to use multiple columns as index (Multiindex). Selecting columns
grades['teacher']
The name of the column is also exposed as an attribute as long as it adheres to the naming limitations of attributes (no spaces, starts with a letter, doesn't clash with a method name):
grades.teacher
The type of a column is pd.Series, which is the type for a vector:
type(grades.teacher)
grades.teacher.shape
We can select multiple columns with a list of column names instead of a column name. Note the double square brackets.
grades[['grade', 'teacher']]
The return type of the operator [] depends on the type of the index. If it's a string, it returns a Series; if it's a list, it returns a DataFrame:
print(type(grades[['grade']]))
grades[['grade']]
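This distinction can be checked directly on a toy frame (hypothetical data): a string key yields a Series, a one-element list key still yields a DataFrame.

```python
import pandas as pd

df = pd.DataFrame({"grade": [5, 4], "semester": [1, 2]})

single = df["grade"]    # string key -> Series
framed = df[["grade"]]  # list key   -> DataFrame (even with one column)
print(type(single).__name__, type(framed).__name__)
```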
Selecting rows Rows can be selected by their index or by their integer location. To demonstrate this, we will use the subject name as the index. Note that it's now in bold:
grades = grades.set_index('subject')
grades
Selecting by index Note that you need to use [] not ():
grades.loc['Physics 1i']
The type of one row is a Series since it's a vector:
type(grades.loc['Physics 1i'])
Selecting by integer location
grades.iloc[2]
We can use ranges here as well. Note that the range is upper-bound exclusive, in other words, .iloc[i:j] will not include element j:
grades.iloc[1:3]
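The upper-bound-exclusive behaviour is easy to demonstrate on a toy frame (hypothetical data): `.iloc[1:3]` keeps positions 1 and 2 only.

```python
import pandas as pd

df = pd.DataFrame({"y": [10, 20, 30, 40]}, index=["a", "b", "c", "d"])

sliced = df.iloc[1:3]  # positions 1 and 2; position 3 is excluded
print(sliced.index.tolist())
```

Note the contrast with label-based slicing: `.loc["b":"c"]` would include the row labelled "c".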
Selecting columns with iloc
grades.iloc[:, [0, 2]]
grades.iloc[:, 1:-1]
grades.iloc[1:5, 1:2]
Selecting a cell There are multiple ways of selecting a single cell, this is perhaps the easiest one:
grades.loc['Physics 1i', 'grade']
Vectorized operations Arithmetic operators are overloaded for DataFrames and Series, allowing vectorized operations:
grades['grade'] + 1
grades[['grade', 'semester']] + 1
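On a toy frame (hypothetical data), the same `+ 1` applies element-wise whether the operand is a Series or a DataFrame:

```python
import pandas as pd

grades = pd.DataFrame({"grade": [3, 4], "semester": [1, 2]})  # hypothetical

bumped = grades["grade"] + 1              # element-wise on a Series
both = grades[["grade", "semester"]] + 1  # element-wise on a DataFrame
print(bumped.tolist())
```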
Comparisons are overloaded as well:
grades['semester'] == 1
The index can be manipulated in a similar way but the return value is different:
grades.index == 'System theory'
It is generally used to override the index:
old_index = grades.index.copy()
grades.index = grades.index.str.upper()
grades
Changing it back:
grades.index = old_index
grades
Vectorized string operations String columns have a .str namespace with many string operations:
grades['teacher'].str
grades['teacher'].str.contains('Smith')
It also provides access to the character array:
grades['teacher'].str[:5]
apply .apply allows running arbitrary functions on each element of a Series (or a DataFrame):
def get_last_name(name):
    return name.split(" ")[1]

grades['teacher'].apply(get_last_name)
The same with a lambda function:
grades['teacher'].apply(lambda name: name.split(" ")[1])
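The named-function and lambda versions produce identical results; a self-contained check on hypothetical names:

```python
import pandas as pd

teachers = pd.Series(["Alice Smith", "Bob Jones"])  # hypothetical names

def get_last_name(name):
    return name.split(" ")[1]

via_func = teachers.apply(get_last_name)
via_lambda = teachers.apply(lambda name: name.split(" ")[1])
print(via_func.tolist())
```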
apply on rows apply also works on full dataframes. Its parameter is then a full row (axis=1) or a full column (axis=0).
def format_grade_and_completion(row):
    grade = row['grade']
    completed = row['completion_date'].strftime("%b %Y")
    return f"grade: {grade}, completed: {completed}"

grades.apply(format_grade_and_completion, axis=1)
Vectorized date manipulation Date columns can be manipulated through the dt namespace:
grades['completion_date'].dt
grades['completion_date'].dt.day_name()
grades['completion_date'].dt.year
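A self-contained example of the `.dt` namespace on made-up dates:

```python
import pandas as pd

dates = pd.Series(pd.to_datetime(["2021-09-01", "2022-01-15"]))  # hypothetical dates
print(dates.dt.year.tolist(), dates.dt.day_name().tolist())
```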
Filtering Comparisons return a Series of True/False values
grades.semester == 1
which can be used for filtering rows:
grades[grades.semester == 1]
We can also use multiple conditions. Note the parentheses:
grades[(grades.semester == 1) & (grades.teacher.str.contains('Smith'))]
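The parentheses matter because `&` binds tighter than `==`; a self-contained check on hypothetical data:

```python
import pandas as pd

grades = pd.DataFrame(  # hypothetical data
    {
        "teacher": ["Alice Smith", "Bob Jones", "Carol Smith"],
        "semester": [1, 1, 2],
    }
)

# & and | bind tighter than ==, so each condition needs its own parentheses.
both = grades[(grades.semester == 1) & (grades.teacher.str.contains("Smith"))]
print(both)
```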
Handling multiple dataframes, merge Let's define a second Dataframe with the credit values of some classes:
credits = pd.DataFrame(
    {
        'subject': ['Calculus 1', 'Physics 1i', 'Physics 2i'],
        'credit': [7, 5, 5],
    }
)
credits
What are the credit values of the classes we have in the grades table?
d = grades.merge(credits, left_index=True, right_on='subject', how='outer')
d
Merge Merge has two operands, a left and a right DataFrame. Parameters: 1. left_index: merge on the index of the left DataFrame 2. right_on: merge on one or more columns of the right DataFrame. This column is subject in this example. 3. how: inner/outer. Exclude/include all rows even if the key of the merge is unmatched. We can merge on two types of columns, index and non-index. left_index=True and right_index=True mean that we merge on the index. left_on and right_on mean that we merge on a column.
grades.merge(credits, left_index=True, right_on='subject', how='inner')
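The inner/outer distinction can be seen on a small self-contained pair of frames (hypothetical data): inner keeps only the matched key, outer keeps the union of keys.

```python
import pandas as pd

grades = pd.DataFrame(
    {"grade": [5, 4]},
    index=pd.Index(["Calculus 1", "Analysis 1"], name="subject"),
)
credits = pd.DataFrame({"subject": ["Calculus 1", "Physics 1i"], "credit": [7, 5]})

inner = grades.merge(credits, left_index=True, right_on="subject", how="inner")
outer = grades.merge(credits, left_index=True, right_on="subject", how="outer")
print(len(inner), len(outer))  # inner: matched keys only; outer: union of keys
```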
We can discard rows with NaN values with dropna. Be careful. It discards all rows with any NaN. This has the same effect as an inner join:
d = d.dropna()
d
Finding min/max rows max and min return the highest and lowest values for each column. The return value is a Series with the column names as indices and the maximum/minimum values as the Series values:
print(type(grades.max()))
grades.max()
The location of the maximum/minimum is often more interesting. idxmax and idxmin return where the maximum is:
# grades.idxmax()  # we get an error because of the string and the date column
grades[['grade', 'semester']].idxmax()
The return value(s) of idxmax can directly be used with loc:
grades.loc[grades[['grade', 'semester']].idxmax()]
idxmax works similarly for Series but the return value is a single scalar, the index of the maximum:
grades.grade.idxmax()
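On a toy frame (hypothetical data), the Series-level `idxmax` returns the single index label of the maximum, which can be fed straight back into `loc`:

```python
import pandas as pd

grades = pd.DataFrame(
    {"grade": [3, 5, 4]},
    index=["Calculus 1", "Physics 1i", "System theory"],  # hypothetical subjects
)
best = grades.grade.idxmax()  # a single scalar: the label of the maximum
print(best)
```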
groupby Groupby allows grouping the rows of the Dataframe along any column(s):
g = credits.groupby('credit')
g.groups
Or on multiple columns:
grades.groupby(['grade', 'semester']).groups
Or even on conditions:
grades.groupby(grades['semester'] % 2).groups
We can perform operations on the groups:
grades.groupby('semester').mean()
grades.groupby('semester').std()
size returns the number of elements in each group:
grades.groupby('semester').size()
stack and unstack Grouping on multiple columns and then aggregating results in a multiindex:
grades.groupby(['grade', 'semester']).size().index
grades.groupby(['grade', 'semester']).size()
unstack moves up the innermost index level to columns:
grades.groupby(['grade', 'semester']).size().unstack()
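A self-contained sketch of `unstack` on hypothetical data: the inner index level (semester) becomes the columns, with NaN for combinations that have no rows.

```python
import pandas as pd

grades = pd.DataFrame({"grade": [5, 5, 4], "semester": [1, 2, 1]})  # hypothetical

sizes = grades.groupby(["grade", "semester"]).size()  # MultiIndex Series
table = sizes.unstack()  # inner level (semester) becomes the columns
print(table)
```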
stack does the opposite:
credits
credits.stack()
Sorting We can sort Dataframes by their index:
grades.sort_index()
Or by one or more columns:
grades.sort_values(['grade', 'semester'])
Or in descending order:
grades.sort_index(ascending=False)
Miscellaneous operations value_counts value_counts returns the frequency of values in a column:
grades['semester'].value_counts()
It can't be used on multiple columns but groupby+size does the same:
grades.groupby(['semester', 'grade']).size()
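The equivalence of `value_counts` and `groupby` + `size` on a single column can be checked directly (hypothetical data); only the sort order differs — `value_counts` sorts by frequency, `groupby` by key.

```python
import pandas as pd

grades = pd.DataFrame({"semester": [1, 1, 2]})  # hypothetical data

vc = grades["semester"].value_counts()  # sorted by frequency
gb = grades.groupby("semester").size()  # sorted by key
print(vc.to_dict(), gb.to_dict())
```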
Histogram We can also plot the histogram of the values:
grades['semester'].hist()
Visualization Pandas is integrated with matplotlib, the main plotting module of Python.
grades.plot(y='grade')
Or as a bar chart:
grades.plot(y='grade', kind='bar')
We can also specify both axes:
grades.plot(x='semester', y='grade', kind='scatter')
Combining groupby and visualization. Plotting the grade averages by semester:
grades.groupby('semester').mean().plot(kind='bar')
Or the number of classes per semester:
grades.groupby('semester').size().plot(kind='pie', title="Classes per semester", ylabel="Classes")
Bert Pipeline: PyTorch BERT News Classification This notebook shows a PyTorch BERT end-to-end news classification example using Kubeflow Pipelines. An example notebook that demonstrates how to: * Get the different tasks needed for the pipeline * Create a Kubeflow pipeline * Include PyTorch KFP components to preprocess, train, visualize and deploy the model in the pipeline * Submit a job for execution * Query (predict and explain) the final deployed model * Interpret the model using Captum Insights
! pip uninstall -y kfp
! pip install --no-cache-dir kfp

import kfp
import json
import os
from kfp.onprem import use_k8s_secret
from kfp import components
from kfp.components import load_component_from_file, load_component_from_url
from kfp import dsl
from kfp import compiler

kfp.__version__
samples/contrib/pytorch-samples/Pipeline-Bert.ipynb (kubeflow/pipelines, apache-2.0)
Enter your gateway and the cookie. Use this extension on Chrome to get the token. Update values for the ingress gateway and auth session.
INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'
AUTH="<enter your token here>"
NAMESPACE="kubeflow-user-example-com"
COOKIE="authservice_session=" + AUTH
EXPERIMENT="Default"
Set Log bucket and Tensorboard Image
MINIO_ENDPOINT="http://minio-service.kubeflow:9000"
LOG_BUCKET="mlpipeline"
TENSORBOARD_IMAGE="public.ecr.aws/pytorch-samples/tboard:latest"

client = kfp.Client(host=INGRESS_GATEWAY + "/pipeline", cookies=COOKIE)
client.create_experiment(EXPERIMENT)
experiments = client.list_experiments(namespace=NAMESPACE)
my_experiment = experiments.experiments[0]
my_experiment
Set Inference parameters
DEPLOY_NAME="bertserve"
MODEL_NAME="bert"

! python utils/generate_templates.py bert/template_mapping.json

prepare_tensorboard_op = load_component_from_file("yaml/tensorboard_component.yaml")
prep_op = components.load_component_from_file("yaml/preprocess_component.yaml")
train_op = components.load_component_from_file("yaml/train_component.yaml")
deploy_op = load_component_from_file("yaml/deploy_component.yaml")
minio_op = components.load_component_from_file("yaml/minio_component.yaml")
Define pipeline
@dsl.pipeline(name="Training pipeline", description="Sample training job test")
def pytorch_bert(  # pylint: disable=too-many-arguments
    minio_endpoint=MINIO_ENDPOINT,
    log_bucket=LOG_BUCKET,
    log_dir=f"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}",
    mar_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store",
    config_prop_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/config",
    model_uri=f"s3://mlpipeline/mar/{dsl.RUN_ID_PLACEHOLDER}",
    tf_image=TENSORBOARD_IMAGE,
    deploy=DEPLOY_NAME,
    namespace=NAMESPACE,
    confusion_matrix_log_dir=f"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/",
    num_samples=1000,
    max_epochs=1
):
    """This method defines the pipeline tasks and operations"""
    prepare_tb_task = prepare_tensorboard_op(
        log_dir_uri=f"s3://{log_bucket}/{log_dir}",
        image=tf_image,
        pod_template_spec=json.dumps({
            "spec": {
                "containers": [{
                    "env": [
                        {
                            "name": "AWS_ACCESS_KEY_ID",
                            "valueFrom": {
                                "secretKeyRef": {
                                    "name": "mlpipeline-minio-artifact",
                                    "key": "accesskey",
                                }
                            },
                        },
                        {
                            "name": "AWS_SECRET_ACCESS_KEY",
                            "valueFrom": {
                                "secretKeyRef": {
                                    "name": "mlpipeline-minio-artifact",
                                    "key": "secretkey",
                                }
                            },
                        },
                        {"name": "AWS_REGION", "value": "minio"},
                        {"name": "S3_ENDPOINT", "value": f"{minio_endpoint}"},
                        {"name": "S3_USE_HTTPS", "value": "0"},
                        {"name": "S3_VERIFY_SSL", "value": "0"},
                    ]
                }]
            }
        }),
    ).set_display_name("Visualization")

    prep_task = (
        prep_op().after(prepare_tb_task).set_display_name("Preprocess & Transform")
    )

    confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}"
    script_args = f"model_name=bert.pth," \
                  f"num_samples={num_samples}," \
                  f"confusion_matrix_url={confusion_matrix_url}"
    # For GPU, set gpus count and accelerator type
    ptl_args = f"max_epochs={max_epochs},profiler=pytorch,gpus=0,accelerator=None"
    train_task = (
        train_op(
            input_data=prep_task.outputs["output_data"],
            script_args=script_args,
            ptl_arguments=ptl_args
        ).after(prep_task).set_display_name("Training")
    )
    # For GPU, uncomment the lines below and set the GPU limit and node selector:
    # ).set_gpu_limit(1).add_node_selector_constraint(
    #     'cloud.google.com/gke-accelerator', 'nvidia-tesla-p4')

    (
        minio_op(
            bucket_name="mlpipeline",
            folder_name=log_dir,
            input_path=train_task.outputs["tensorboard_root"],
            filename="",
        ).after(train_task).set_display_name("Tensorboard Events Pusher")
    )

    minio_mar_upload = (
        minio_op(
            bucket_name="mlpipeline",
            folder_name=mar_path,
            input_path=train_task.outputs["checkpoint_dir"],
            filename="bert_test.mar",
        ).after(train_task).set_display_name("Mar Pusher")
    )

    (
        minio_op(
            bucket_name="mlpipeline",
            folder_name=config_prop_path,
            input_path=train_task.outputs["checkpoint_dir"],
            filename="config.properties",
        ).after(train_task).set_display_name("Config Pusher")
    )

    model_uri = str(model_uri)
    # pylint: disable=unused-variable
    isvc_yaml = """
    apiVersion: "serving.kubeflow.org/v1beta1"
    kind: "InferenceService"
    metadata:
      name: {}
      namespace: {}
    spec:
      predictor:
        serviceAccountName: sa
        pytorch:
          storageUri: {}
          resources:
            requests:
              cpu: 4
              memory: 8Gi
            limits:
              cpu: 4
              memory: 8Gi
    """.format(deploy, namespace, model_uri)

    # For GPU inference use the yaml below with gpu count and accelerator
    gpu_count = "1"
    accelerator = "nvidia-tesla-p4"
    isvc_gpu_yaml = """
    apiVersion: "serving.kubeflow.org/v1beta1"
    kind: "InferenceService"
    metadata:
      name: {}
      namespace: {}
    spec:
      predictor:
        serviceAccountName: sa
        pytorch:
          storageUri: {}
          resources:
            requests:
              cpu: 4
              memory: 8Gi
            limits:
              cpu: 4
              memory: 8Gi
              nvidia.com/gpu: {}
          nodeSelector:
            cloud.google.com/gke-accelerator: {}
    """.format(deploy, namespace, model_uri, gpu_count, accelerator)
    # Update inferenceservice_yaml for GPU inference
    deploy_task = (
        deploy_op(action="apply", inferenceservice_yaml=isvc_yaml
                  ).after(minio_mar_upload).set_display_name("Deployer")
    )

    dsl.get_pipeline_conf().add_op_transformer(
        use_k8s_secret(
            secret_name="mlpipeline-minio-artifact",
            k8s_secret_key_to_env={
                "secretkey": "MINIO_SECRET_KEY",
                "accesskey": "MINIO_ACCESS_KEY",
            },
        )
    )

# Compile pipeline
compiler.Compiler().compile(pytorch_bert, 'pytorch.tar.gz', type_check=True)

# Execute pipeline
run = client.run_pipeline(my_experiment.id, 'pytorch-bert', 'pytorch.tar.gz')
Processing the game image Raw Atari images are large, 210x160x3 by default. However, we don't need that level of detail in order to learn from them. We can thus save a lot of time by preprocessing the game image, including * Resizing to a smaller shape * Converting to grayscale * Cropping irrelevant image parts
from gym.core import ObservationWrapper
from gym.spaces import Box
from scipy.misc import imresize

class PreprocessAtari(ObservationWrapper):
    def __init__(self, env):
        """A gym wrapper that crops, scales image into the desired shapes
        and optionally grayscales it."""
        ObservationWrapper.__init__(self, env)
        self.img_size = (64, 64)
        self.observation_space = Box(0.0, 1.0, self.img_size)

    def _observation(self, img):
        """what happens to each observation"""
        # Here's what you need to do:
        #  * crop image, remove irrelevant parts
        #  * resize image to self.img_size
        #    (use imresize imported above or any library you want,
        #    e.g. opencv, skimage, PIL, keras)
        #  * cast image to grayscale
        #  * convert image pixels to (0,1) range, float32 type
        <Your code here>
        return <...>

import gym
# game maker: consider https://gym.openai.com/envs
def make_env():
    env = gym.make("KungFuMaster-v0")
    return PreprocessAtari(env)

# spawn game instance
env = make_env()
observation_shape = env.observation_space.shape
n_actions = env.action_space.n
obs = env.reset()
plt.imshow(obs[0], interpolation='none', cmap='gray')
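The grayscale-and-rescale part of the exercise can be sketched with NumPy alone (cropping and resizing omitted; the frame here is a fake all-white array, not a real Atari observation):

```python
import numpy as np

def to_gray_unit(img):
    """Grayscale by channel mean, then scale pixels to [0, 1] float32."""
    gray = img.mean(axis=-1)           # (H, W, 3) -> (H, W)
    return (gray / 255.0).astype(np.float32)

frame = np.full((210, 160, 3), 255, dtype=np.uint8)  # fake all-white frame
out = to_gray_unit(frame)
print(out.shape, out.dtype, out.max())
```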
hw12/a2c_kungfu_dmia.ipynb (AndreySheka/dl_ekb, mit)
Basic agent setup Here we define a simple agent that maps game images into a policy using a simple convolutional neural network.
import theano, lasagne
import theano.tensor as T
from lasagne.layers import *
from agentnet.memory import WindowAugmentation

# observation goes here
observation_layer = InputLayer((None,) + observation_shape)

# 4-tick window over images
prev_wnd = InputLayer((None, 4) + observation_shape, name='window from last tick')
new_wnd = WindowAugmentation(observation_layer, prev_wnd, name='updated window')

# reshape to (frame, h, w). If you don't use grayscale, 4 should become 12.
wnd_reshape = reshape(new_wnd, (-1, 4 * observation_shape[0]) + observation_shape[1:])
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Network body Here you will need to build a convolutional network that consists of 4 layers: * 3 convolutional layers with 32 filters, a 5x5 window and 2x2 stride * choose any nonlinearity except softmax * you may want to increase the number of filters for the last layer * a dense layer on top of all convolutions * anywhere between 100 and 512 neurons You may find a template for such a network below
from lasagne.nonlinearities import rectify,elu,tanh,softmax #network body conv0 = Conv2DLayer(wnd_reshape,<...>) conv1 = <another convolutional layer, growing from conv0> conv2 = <yet another layer...> dense = DenseLayer(<what is it's input?>, nonlinearity=tanh, name='dense "neck" layer')
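Before filling in the template, it helps to know the sizes to expect: a "valid" convolution with window f and stride s maps a side of n pixels to floor((n − f)/s) + 1. A quick sanity check, assuming 64x64 inputs and no pooling between the three 5x5, stride-2 layers:

```python
def conv_out_size(n, f=5, stride=2):
    # 'valid' convolution: floor((n - f) / stride) + 1
    return (n - f) // stride + 1

size = 64
for layer in range(3):
    size = conv_out_size(size)
    print('after conv%d: %dx%d' % (layer, size, size))
# 64 -> 30 -> 13 -> 5
```

So the dense "neck" layer sees nkerns * 5 * 5 features per example under these assumptions.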
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Network head You will now need to build the output layers. Since we're building an advantage actor-critic algorithm, our network will require two outputs: * policy, $\pi(a|s)$, defining action probabilities * state value, $V(s)$, defining the expected reward from the given state Both of these layers will grow from the final dense layer of the network body.
#actor head logits_layer = DenseLayer(dense,n_actions,nonlinearity=None) #^^^ separately define pre-softmax policy logits to regularize them later policy_layer = NonlinearityLayer(logits_layer,softmax) #critic head V_layer = DenseLayer(dense,1,nonlinearity=None) #sample actions proportionally to policy_layer from agentnet.resolver import ProbabilisticResolver action_layer = ProbabilisticResolver(policy_layer)
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Finally, the agent We declare that this network is an MDP agent with the given inputs, states and outputs
from agentnet.agent import Agent
#all together
agent = Agent(observation_layers=observation_layer,
              policy_estimators=(logits_layer,V_layer),
              agent_states={new_wnd:prev_wnd},
              action_layers=action_layer)

#Since it's a single lasagne network, one can get its weights, output, etc
weights = lasagne.layers.get_all_params([V_layer,policy_layer],trainable=True)
weights
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Create and manage a pool of atari sessions to play with To make training more stable, we will run an entire batch of game sessions, each happening independently of the others. Why several parallel agents help training: http://arxiv.org/pdf/1602.01783v1.pdf Alternative approach: store more sessions: https://www.cs.toronto.edu/~vmnih/docs/dqn.pdf
from agentnet.experiments.openai_gym.pool import EnvPool

#number of parallel agents
N_AGENTS = 10

pool = EnvPool(agent,make_env, N_AGENTS) #may need to adjust

%%time
#interact for 10 ticks
_,action_log,reward_log,_,_,_ = pool.interact(10)

print('actions:')
print(action_log[0])
print("rewards")
print(reward_log[0])

# batch sequence length (frames)
SEQ_LENGTH = 25

#load first sessions (this function calls interact and remembers sessions)
pool.update(SEQ_LENGTH)
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Advantage actor-critic The agent has a method that produces symbolic environment interaction sessions. Such sessions are sequences of observations, agent memory states, actions, Q-values, etc., so one has to pre-define a maximum session length. SessionPool also stores rewards, alive indicators, etc. Code mostly copied from here
#get agent's Qvalues obtained via experience replay #we don't unroll scan here and propagate automatic updates #to speed up compilation at a cost of runtime speed replay = pool.experience_replay _,_,_,_,(logits_seq,V_seq) = agent.get_sessions( replay, session_length=SEQ_LENGTH, experience_replay=True, unroll_scan=False, ) auto_updates = agent.get_automatic_updates() # compute pi(a|s) and log(pi(a|s)) manually [use logsoftmax] # we can't guarantee that theano optimizes logsoftmax automatically since it's still in dev logits_flat = logits_seq.reshape([-1,logits_seq.shape[-1]]) policy_seq = T.nnet.softmax(logits_flat).reshape(logits_seq.shape) logpolicy_seq = T.nnet.logsoftmax(logits_flat).reshape(logits_seq.shape) # get policy gradient from agentnet.learning import a2c elwise_actor_loss,elwise_critic_loss = a2c.get_elementwise_objective(policy=logpolicy_seq, treat_policy_as_logpolicy=True, state_values=V_seq[:,:,0], actions=replay.actions[0], rewards=replay.rewards/100., is_alive=replay.is_alive, gamma_or_gammas=0.99, n_steps=None, return_separate=True) # (you can change them more or less harmlessly, this usually just makes learning faster/slower) # also regularize to prioritize exploration reg_logits = T.mean(logits_seq**2) reg_entropy = T.mean(T.sum(policy_seq*logpolicy_seq,axis=-1)) #add-up loss components with magic numbers loss = 0.1*elwise_actor_loss.mean() +\ 0.25*elwise_critic_loss.mean() +\ 1e-3*reg_entropy +\ 1e-3*reg_logits # Compute weight updates, clip by norm grads = T.grad(loss,weights) grads = lasagne.updates.total_norm_constraint(grads,10) updates = lasagne.updates.adam(grads, weights,1e-4) #compile train function train_step = theano.function([],loss,updates=auto_updates+updates)
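At the heart of the objective above is the advantage A(s,a) = r + γV(s') − V(s): the actor is pushed along log π(a|s) weighted by A, while the critic shrinks the squared TD error. A one-step toy illustration with made-up numbers (not the agentnet internals):

```python
from math import log

gamma = 0.99
V_s, V_next, r = 1.0, 2.0, 0.5                 # toy critic values and reward
logpolicy = [log(0.2), log(0.5), log(0.3)]     # log pi(a|s) for 3 actions
action = 1                                     # the action that was taken

advantage = r + gamma * V_next - V_s           # A(s, a) = one-step TD error
actor_loss = -logpolicy[action] * advantage    # minimize => ascend on return
critic_loss = 0.5 * advantage ** 2             # squared TD error for V

print(advantage, actor_loss, critic_loss)
```

The real objective sums these elementwise losses over the batch and sequence, masks dead time steps with `is_alive`, and adds the entropy and logit regularizers.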
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Demo run
untrained_reward = np.mean(pool.evaluate(save_path="./records", record_video=True)) #show video from IPython.display import HTML import os video_names = list(filter(lambda s:s.endswith(".mp4"),os.listdir("./records/"))) HTML(""" <video width="640" height="480" controls> <source src="{}" type="video/mp4"> </video> """.format("./records/"+video_names[-1])) #this may or may not be _last_ video. Try other indices
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Training loop
#starting epoch
epoch_counter = 1

#full game rewards
rewards = {}
loss, reward_per_tick, reward = 0, 0, 0

from tqdm import trange
from IPython.display import clear_output

#the algorithm almost converges by 15k iterations, 50k is for full convergence
for i in trange(150000):

    #play
    pool.update(SEQ_LENGTH)

    #train
    loss = 0.95*loss + 0.05*train_step()

    if epoch_counter % 10 == 0:
        #average reward per game tick in current experience replay pool
        reward_per_tick = 0.95*reward_per_tick + 0.05*pool.experience_replay.rewards.get_value().mean()
        print("iter=%i\tloss=%.3f\treward/tick=%.3f" % (epoch_counter, loss, reward_per_tick))

    ##record current learning progress and show learning curves
    if epoch_counter % 100 == 0:
        reward = 0.95*reward + 0.05*np.mean(pool.evaluate(record_video=False))
        rewards[epoch_counter] = reward

        clear_output(True)
        plt.plot(*zip(*sorted(rewards.items(), key=lambda item: item[0])))
        plt.show()

    epoch_counter += 1

# Time to drink some coffee!
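The repeated `0.95*x + 0.05*new` pattern in this loop is an exponential moving average: it smooths the very noisy per-batch loss and reward so the printed curves stay readable. A toy sketch of the effect:

```python
def ema(values, alpha=0.05):
    """Exponentially smoothed running value, as used in the training loop."""
    smoothed, out = 0.0, []
    for v in values:
        smoothed = (1 - alpha) * smoothed + alpha * v
        out.append(smoothed)
    return out

noisy = [10, 0, 10, 0, 10, 0]
print(ema(noisy))  # hovers near the mean instead of jumping between 0 and 10
```

Note the zero initialization biases the average low for the first few dozen updates, which is harmless over a 150k-iteration run.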
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
Evaluating results Here we plot learning curves and record a few sample game sessions
import pandas as pd
plt.plot(*zip(*sorted(rewards.items(), key=lambda k: k[0])))

from agentnet.utils.persistence import save
save(action_layer, "kung_fu.pcl")

###LOAD FROM HERE
from agentnet.utils.persistence import load
load(action_layer, "kung_fu.pcl")

rw = pool.evaluate(n_games=20, save_path="./records", record_video=True)
print("mean session score=%.5f" % np.mean(rw))

#show video
from IPython.display import HTML
import os

video_names = list(filter(lambda s: s.endswith(".mp4"), os.listdir("./records/")))

HTML("""
<video width="640" height="480" controls>
  <source src="{}" type="video/mp4">
</video>
""".format("./records/"+video_names[-1]))  # this may or may not be the _last_ video. Try other indices
hw12/a2c_kungfu_dmia.ipynb
AndreySheka/dl_ekb
mit
2 Exercise Given two dictionaries d1 and d2, write a Python function called fusion that merges the two dictionaries passed as parameters. You can use the update function. Test the function with the dictionaries d1 = {1: 'A', 2:'B', 3:'C'}, d2 = {4: 'Aa', 5:'Ba', 6:'Ca'} Use the len function to retrieve the number of elements of the new dictionary Test the function with the dictionaries d1 = {1: 'A', 2:'B', 3:'C'}, d2 = {2: 'Aa', 3:'Ba'} Use the len function to retrieve the number of elements of the new dictionary
# Sol:
def fusion(d1, d2):
    d1.update(d2)
    return d1

d1 = {1: 'A', 2: 'B', 3: 'C'}
d2 = {4: 'Aa', 5: 'Ba', 6: 'Ca'}
print(len(fusion(d1, d2)))  # disjoint keys: 6 elements

d1 = {1: 'A', 2: 'B', 3: 'C'}
d2 = {2: 'Aa', 3: 'Ba'}
print(len(fusion(d1, d2)))  # keys 2 and 3 collide: 3 elements
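Besides the in-place `update`, a non-destructive merge is possible with dict unpacking (Python 3.5+). When keys collide, the right-hand dictionary wins, which is why the overlapping case from the statement ends up with 3 elements rather than 5:

```python
d1 = {1: 'A', 2: 'B', 3: 'C'}
d2 = {4: 'Aa', 5: 'Ba', 6: 'Ca'}
d3 = {2: 'Aa', 3: 'Ba'}

print(len({**d1, **d2}))   # disjoint keys: 6 elements
print(len({**d1, **d3}))   # keys 2 and 3 collapse: 3 elements
print({**d1, **d3})        # the d3 values win for the shared keys
```

Unlike `update`, this leaves d1 untouched, which is often preferable.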
python/ejercicios/ucm_diccionarios_02_ej.ipynb
xMyrst/BigData
gpl-3.0
3 Exercise Given the list it of the most populated cities in Italy: it = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova', 'Bolonia', 'Florencia', 'Bari', 'Catania'] Create a dictionary whose keys are the positions each city occupies in the list. To do so, follow these steps: Create a sequence of integers with the range function; the sequence starts at zero and ends at the length of the list of Italian cities. Create a list m of tuples of the form (pos, ciudad) using the zip function. Use the dict function to build the dictionary from the list m. Write a Python expression to retrieve the fifth most populated Italian city.
# Sol:
# Define the list of cities from the statement
it = [ 'Roma', 'Milán', 'Nápoles', 'Turín', 'Palermo' , 'Génova', 'Bolonia', 'Florencia', 'Bari', 'Catania', 'Verona']
# Build the position sequence: with no start, range begins at 0 and ends at the
# list length; starting at 1 instead, we add +1 to the length as an offset
pos_ciudad = range(1, len(it)+1)
resultado = list(zip(pos_ciudad, it))
resultado
dic = dict(resultado)
dic
dic[5]  # fifth most populated city: 'Palermo'
python/ejercicios/ucm_diccionarios_02_ej.ipynb
xMyrst/BigData
gpl-3.0
It looks like GDP has the strongest association with happiness (or satisfaction), followed by social support, life expectancy, and freedom. After controlling for those other factors, the parameters of the other factors are substantially smaller, and since the CI for generosity includes 0, it is plausible that generosity is not substantially related to happiness, at least as they were measured in this study. This example demonstrates the power of MCMC to handle models with more than a few parameters. But it does not really demonstrate the power of Bayesian regression. If the goal of a regression model is to estimate parameters, there is no great advantage to Bayesian regression compared to conventional least squares regression. Bayesian methods are more useful if we plan to use the posterior distribution of the parameters as part of a decision analysis process. Summary In this chapter we used PyMC3 to implement two models we've seen before: a Poisson model of goal-scoring in soccer and a simple regression model. Then we implemented a multiple regression model that would not have been possible to compute with a grid approximation. MCMC is more powerful than grid methods, but that power comes with some disadvantages: MCMC algorithms are fiddly. The same model might behave well with some priors and less well with others. And the sampling process often produces warnings about tuning steps, divergences, "r-hat statistics", acceptance rates, and effective samples. It takes some expertise to diagnose and correct these issues. I find it easier to develop models incrementally using grid algorithms, checking intermediate results along the way. With PyMC3, it is not as easy to be confident that you have specified a model correctly. For these reasons, I recommend a model development process that starts with grid algorithms and resorts to MCMC if necessary. As we saw in the previous chapters, you can solve a lot of real-world problems with grid methods. 
But when you need MCMC, it is useful to have a grid algorithm to compare to (even if it is based on a simpler model). All of the models in this book can be implemented in PyMC3, but some of them are easier to translate than others. In the exercises, you will have a chance to practice. Exercises Exercise: As a warmup, let's use PyMC3 to solve the Euro problem. Suppose we spin a coin 250 times and it comes up heads 140 times. What is the posterior distribution of $x$, the probability of heads? For the prior, use a beta distribution with parameters $\alpha=1$ and $\beta=1$. See the PyMC3 documentation for the list of continuous distributions.
# Solution n = 250 k_obs = 140 with pm.Model() as model5: x = pm.Beta('x', alpha=1, beta=1) k = pm.Binomial('k', n=n, p=x, observed=k_obs) trace5 = pm.sample(500, **options) az.plot_posterior(trace5)
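Because the beta prior is conjugate to the binomial likelihood, this posterior is also available in closed form, Beta(α + k, β + n − k), which gives a handy check on the MCMC output:

```python
# Closed-form posterior for the Euro problem: Beta(1 + 140, 1 + 110)
alpha, beta = 1, 1
n, k = 250, 140

alpha_post = alpha + k
beta_post = beta + n - k
posterior_mean = alpha_post / (alpha_post + beta_post)

print(posterior_mean)  # 141/252, about 0.5595
```

The posterior mean of `x` in the PyMC3 trace should land very close to this value.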
soln/chap19.ipynb
AllenDowney/ThinkBayes2
mit
Exercise: Now let's use PyMC3 to replicate the solution to the Grizzly Bear problem in <<_TheGrizzlyBearProblem>>, which is based on the hypergeometric distribution. I'll present the problem with slightly different notation, to make it consistent with PyMC3. Suppose that during the first session, k=23 bears are tagged. During the second session, n=19 bears are identified, of which x=4 had been tagged. Estimate the posterior distribution of N, the number of bears in the environment. For the prior, use a discrete uniform distribution from 50 to 500. See the PyMC3 documentation for the list of discrete distributions. Note: HyperGeometric was added to PyMC3 after version 3.8, so you might need to update your installation to do this exercise.
# Solution k = 23 n = 19 x = 4 with pm.Model() as model6: N = pm.DiscreteUniform('N', 50, 500) y = pm.HyperGeometric('y', N=N, k=k, n=n, observed=x) trace6 = pm.sample(1000, **options) az.plot_posterior(trace6)
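Since HyperGeometric may be missing from older PyMC3 versions, a grid approximation in the spirit of the earlier chapters makes a good cross-check; `math.comb` (Python 3.8+) supplies the PMF with no extra dependencies:

```python
from math import comb

k, n, x = 23, 19, 4   # tagged, second-session sample, recaptured

def hypergeom_pmf(N):
    # probability of x tagged bears in a sample of n, from N bears with k tagged
    if N < k + (n - x):
        return 0.0
    return comb(k, x) * comb(N - k, n - x) / comb(N, n)

Ns = range(50, 501)
likes = [hypergeom_pmf(N) for N in Ns]     # uniform prior -> posterior ∝ likelihood
total = sum(likes)
posterior = [p / total for p in likes]

mode = max(Ns, key=hypergeom_pmf)
mean = sum(N * p for N, p in zip(Ns, posterior))
print(mode, round(mean, 1))
```

The posterior mode sits at the classical estimate floor(k·n/x) = 109, while the long right tail pulls the mean well above it.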
soln/chap19.ipynb
AllenDowney/ThinkBayes2
mit
Exercise: In <<_TheWeibullDistribution>> we generated a sample from a Weibull distribution with $\lambda=3$ and $k=0.8$. Then we used the data to compute a grid approximation of the posterior distribution of those parameters. Now let's do the same with PyMC3. For the priors, you can use uniform distributions as we did in <<_SurvivalAnalysis>>, or you could use HalfNormal distributions provided by PyMC3. Note: The Weibull class in PyMC3 uses different parameters than SciPy. The parameter alpha in PyMC3 corresponds to $k$, and beta corresponds to $\lambda$. Here's the data again:
data = [0.80497283, 2.11577082, 0.43308797, 0.10862644, 5.17334866, 3.25745053, 3.05555883, 2.47401062, 0.05340806, 1.08386395] # Solution with pm.Model() as model7: lam = pm.Uniform('lam', 0.1, 10.1) k = pm.Uniform('k', 0.1, 5.1) y = pm.Weibull('y', alpha=k, beta=lam, observed=data) trace7 = pm.sample(1000, **options) az.plot_posterior(trace7)
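As a sanity check on whatever the sampler returns: the Weibull log-density is log k − k log λ + (k−1) log x − (x/λ)^k, so the generating parameters (λ=3, k=0.8) should score clearly better on this sample than an arbitrary alternative:

```python
from math import log

data = [0.80497283, 2.11577082, 0.43308797, 0.10862644, 5.17334866,
        3.25745053, 3.05555883, 2.47401062, 0.05340806, 1.08386395]

def weibull_loglik(lam, k, xs):
    # log f(x) = log k - k log lam + (k - 1) log x - (x / lam)**k
    return sum(log(k) - k * log(lam) + (k - 1) * log(x) - (x / lam) ** k
               for x in xs)

print(weibull_loglik(3, 0.8, data))   # the generating parameters
print(weibull_loglik(1, 2, data))     # an arbitrary alternative, much worse
```

With only 10 observations the posterior will still be wide, but its high-density region should cover (3, 0.8).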
soln/chap19.ipynb
AllenDowney/ThinkBayes2
mit
Now estimate the parameters for the treated group.
data = responses['Treated'] # Solution with pm.Model() as model8: mu = pm.Uniform('mu', 20, 80) sigma = pm.Uniform('sigma', 5, 30) y = pm.Normal('y', mu, sigma, observed=data) trace8 = pm.sample(500, **options) # Solution with model8: az.plot_posterior(trace8)
soln/chap19.ipynb
AllenDowney/ThinkBayes2
mit
In total, 32 bugs have been discovered:
num_seen = k01 + k10 + k11 num_seen # Solution with pm.Model() as model9: p0 = pm.Beta('p0', alpha=1, beta=1) p1 = pm.Beta('p1', alpha=1, beta=1) N = pm.DiscreteUniform('N', num_seen, 350) q0 = 1-p0 q1 = 1-p1 ps = [q0*q1, q0*p1, p0*q1, p0*p1] k00 = N - num_seen data = pm.math.stack((k00, k01, k10, k11)) y = pm.Multinomial('y', n=N, p=ps, observed=data) # Solution with model9: trace9 = pm.sample(1000, **options) # Solution with model9: az.plot_posterior(trace9)
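The counts k01, k10, k11 come from an earlier cell and are not shown here; with hypothetical counts that also sum to 32, the classical Lincoln–Petersen point estimate N̂ = n1·n2/k11 offers a rough check on the posterior for N:

```python
# Hypothetical counts for illustration -- the notebook defines the real
# k01, k10, k11 in an earlier cell.
k01, k10, k11 = 15, 14, 3      # found by tester 2 only / tester 1 only / both

n1 = k10 + k11                 # total bugs found by tester 1
n2 = k01 + k11                 # total bugs found by tester 2
lincoln_estimate = n1 * n2 / k11

print(n1 + n2 - k11, lincoln_estimate)  # 32 bugs seen, ~102 estimated in total
```

The Bayesian posterior mean for N should be of the same order as this point estimate, with the uniform prior's upper bound of 350 limiting the right tail.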
soln/chap19.ipynb
AllenDowney/ThinkBayes2
mit
Now we can define the actual convolution code. We start with a class representing a single layer that performs a convolution followed by max-pooling over its output. These layers will be stacked in the final model.
from theano.tensor.signal import downsample from theano.tensor.nnet import conv class LeNetConvPoolLayer(object): def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)): assert image_shape[1] == filter_shape[1] self.input = input # there are "num input feature maps * filter height * filter width" # inputs to each hidden unit fan_in = numpy.prod(filter_shape[1:]) # each unit in the lower layer receives a gradient from: # "num output feature maps * filter height * filter width" / pooling size fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) / numpy.prod(poolsize)) # initialize weights with random weights W_bound = numpy.sqrt(6. / (fan_in + fan_out)) self.W = theano.shared( numpy.asarray( rng.uniform(low=-W_bound, high=W_bound, size=filter_shape), dtype=theano.config.floatX ), borrow=True ) # the bias is a 1D tensor -- one bias per output feature map b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX) self.b = theano.shared(value=b_values, borrow=True) # convolve input feature maps with filters conv_out = conv.conv2d( input=input, filters=self.W, filter_shape=filter_shape, image_shape=image_shape ) # downsample each feature map individually, using maxpooling pooled_out = downsample.max_pool_2d( input=conv_out, ds=poolsize, ignore_border=True ) # add the bias term. Since the bias is a vector (1D array), we first # reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will # thus be broadcasted across mini-batches and feature map # width & height self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x')) # store parameters of this layer self.params = [self.W, self.b]
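The `max_pool_2d` call keeps only the largest activation in each non-overlapping 2x2 window (`ignore_border=True` drops any leftover row or column). A pure-Python sketch of the operation on one feature map:

```python
def max_pool_2x2(fmap):
    """Non-overlapping 2x2 max pooling over a 2-D list (ignore_border style)."""
    h, w = len(fmap) // 2, len(fmap[0]) // 2
    return [[max(fmap[2*i][2*j], fmap[2*i][2*j+1],
                 fmap[2*i+1][2*j], fmap[2*i+1][2*j+1])
             for j in range(w)]
            for i in range(h)]

fmap = [[1, 2, 0, 1],
        [3, 4, 1, 0],
        [0, 1, 5, 6],
        [2, 0, 7, 8]]
print(max_pool_2x2(fmap))  # [[4, 1], [2, 8]]
```

This halves each spatial dimension and gives the network a little translation invariance, at the cost of discarding the exact position of each activation.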
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause
This next function stacks two of the convolution layers defined above and adds a hidden layer followed by a logistic regression classification layer on top.
import time import fuel from fuel.streams import DataStream from fuel.schemes import SequentialScheme from fuel.transformers import Cast fuel.config.floatX = theano.config.floatX = 'float32' def evaluate_lenet5(train, test, valid, learning_rate=0.1, n_epochs=200, nkerns=[20, 50], batch_size=500): rng = numpy.random.RandomState(23455) train_stream = DataStream.default_stream( train, iteration_scheme=SequentialScheme(train.num_examples, batch_size)) valid_stream = DataStream.default_stream( valid, iteration_scheme=SequentialScheme(valid.num_examples, batch_size)) test_stream = DataStream.default_stream( test, iteration_scheme=SequentialScheme(test.num_examples, batch_size)) x = T.tensor4('x') yi = T.imatrix('y') y = yi.reshape((yi.shape[0],)) # Construct the first convolutional pooling layer: # filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24) # maxpooling reduces this further to (24/2, 24/2) = (12, 12) # 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12) layer0 = LeNetConvPoolLayer( rng, input=x, image_shape=(batch_size, 1, 28, 28), filter_shape=(nkerns[0], 1, 5, 5), poolsize=(2, 2) ) # Construct the second convolutional pooling layer # filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8) # maxpooling reduces this further to (8/2, 8/2) = (4, 4) # 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4) layer1 = LeNetConvPoolLayer( rng, input=layer0.output, image_shape=(batch_size, nkerns[0], 12, 12), filter_shape=(nkerns[1], nkerns[0], 5, 5), poolsize=(2, 2) ) # the HiddenLayer being fully-connected, it operates on 2D matrices of # shape (batch_size, num_pixels) (i.e matrix of rasterized images). # This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4), # or (500, 50 * 4 * 4) = (500, 800) with the default values. 
layer2_input = layer1.output.flatten(2) # construct a fully-connected sigmoidal layer layer2 = HiddenLayer( rng, input=layer2_input, n_in=nkerns[1] * 4 * 4, n_out=500, activation=T.tanh ) # classify the values of the fully-connected sigmoidal layer layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10) # the cost we minimize during training is the NLL of the model cost = layer3.negative_log_likelihood(y) # create a function to compute the mistakes that are made by the model model_errors = theano.function( [x, yi], layer3.errors(y) ) # create a list of all model parameters to be fit by gradient descent params = layer3.params + layer2.params + layer1.params + layer0.params # create a list of gradients for all model parameters grads = T.grad(cost, params) # train_model is a function that updates the model parameters by # SGD Since this model has many parameters, it would be tedious to # manually create an update rule for each model parameter. We thus # create the updates list by automatically looping over all # (params[i], grads[i]) pairs. updates = [ (param_i, param_i - learning_rate * grad_i) for param_i, grad_i in zip(params, grads) ] train_model = theano.function( [x, yi], cost, updates=updates ) # early-stopping parameters patience = 10000 # look as this many examples regardless patience_increase = 2 # wait this much longer when a new best is found # a relative improvement of this much is considered significant improvement_threshold = 0.995 n_train_batches = (train.num_examples + batch_size - 1) // batch_size # go through this many minibatches before checking the network on # the validation set; in this case we check every epoch validation_frequency = min(n_train_batches, patience / 2) best_validation_loss = numpy.inf best_iter = 0 test_score = 0. 
start_time = time.clock() epoch = 0 iter = 0 done_looping = False while (epoch < n_epochs) and (not done_looping): epoch = epoch + 1 minibatch_index = 0 for minibatch in train_stream.get_epoch_iterator(): iter += 1 minibatch_index += 1 if iter % 100 == 0: print('training @ iter = ', iter) error = train_model(minibatch[0], minibatch[1]) if (iter + 1) % validation_frequency == 0: # compute zero-one loss on validation set validation_losses = [model_errors(vb[0], vb[1]) for vb in valid_stream.get_epoch_iterator()] this_validation_loss = numpy.mean(validation_losses) print('epoch %i, minibatch %i/%i, validation error %f %%' % (epoch, minibatch_index + 1, n_train_batches, this_validation_loss * 100.)) # if we got the best validation score until now if this_validation_loss < best_validation_loss: # improve patience if loss improvement is good enough if this_validation_loss < best_validation_loss * improvement_threshold: patience = max(patience, iter * patience_increase) # save best validation score and iteration number best_validation_loss = this_validation_loss best_iter = iter # test it on the test set test_losses = [ model_errors(tb[0], tb[1]) for tb in test_stream.get_epoch_iterator() ] test_score = numpy.mean(test_losses) print((' epoch %i, minibatch %i/%i, test error of ' 'best model %f %%') % (epoch, minibatch_index + 1, n_train_batches, test_score * 100.)) if patience <= iter: done_looping = True break end_time = time.clock() print('Optimization complete.') print('Best validation score of %f %% obtained at iteration %i, ' 'with test performance %f %%' % (best_validation_loss * 100., best_iter + 1, test_score * 100.)) print('The code ran for %.2fm' % ((end_time - start_time) / 60.)) # This is to make the pretty pictures in the cells below layer0_out = theano.function([x], layer0.output) layer1_out = theano.function([x], layer1.output) return params, layer0_out, layer1_out
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause
This cell runs the model and allows you to play with a few hyperparameters. The ones below take about 1 to 2 minutes to run.
from fuel.datasets import MNIST train = MNIST(which_sets=('train',), subset=slice(0, 50000)) valid = MNIST(which_sets=('train',), subset=slice(50000, 60000)) test = MNIST(which_sets=('test',)) params, layer0_out, layer1_out = evaluate_lenet5(train, test, valid, learning_rate=0.1, n_epochs=10, nkerns=[10, 25], batch_size=50)
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause
For most convolution models it can be interesting to show what the trained filters look like. The code below does that from the parameters returned by the training function above. There isn't much to see in this model since the filters are only 5x5, unfortunately.
%matplotlib inline import matplotlib.pyplot as plt from utils import tile_raster_images filts1 = params[6].get_value() filts2 = params[4].get_value() plt.clf() # Increase the size of the figure plt.gcf().set_size_inches(15, 10) # Make a grid for the two layers gs = plt.GridSpec(1, 2, width_ratios=[1, 25], height_ratios=[1, 1]) a = plt.subplot(gs[0]) b = plt.subplot(gs[1]) # Show the first layer filters (the small column) a.imshow(tile_raster_images(filts1.reshape(10, 25), img_shape=(5, 5), tile_shape=(10, 1), tile_spacing=(1,1)), cmap="Greys", interpolation="none") a.axis('off') # Show the second layer filters (the large block) b.imshow(tile_raster_images(filts2.reshape(250, 25), img_shape=(5, 5), tile_shape=(10, 25), tile_spacing=(1,1)), cmap="Greys", interpolation="none") b.axis('off')
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause
It can also be interesting to draw the outputs of the filters for a sample input. This works somewhat better for this model.
%matplotlib inline import matplotlib.pyplot as plt from utils import tile_raster_images # Grab some input examples from the test set (we cheat a bit here) sample = test.get_data(None, slice(0, 50))[0] # We will print this example amongst the batch example = 7 plt.gcf() # Increase the size of the figure plt.gcf().set_size_inches(15, 10) gs = plt.GridSpec(1, 3, width_ratios=[1, 1, 1], height_ratios=[1, 1, 1]) # Draw the input data a = plt.subplot(gs[0]) a.imshow(sample[example, 0], cmap="Greys", interpolation='none') a.axis('off') # Compute first layer output out0 = layer0_out(sample)[example] # Draw its output b = plt.subplot(gs[1]) b.imshow(tile_raster_images(out0.reshape(10, 144), img_shape=(12, 12), tile_shape=(5, 2), tile_spacing=(1, 1)), cmap="Greys", interpolation='none') b.axis('off') # Compute the second layer output out1 = layer1_out(sample)[example] # Draw it c = plt.subplot(gs[2]) c.imshow(tile_raster_images(out1.reshape(25, 16), img_shape=(4, 4), tile_shape=(5, 5), tile_spacing=(1, 1)), cmap="Greys", interpolation='none') c.axis('off')
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause
Some things you can try with this model: - change the nonlinearity of the convolutions to a rectifier unit. - add an extra MLP layer. If you break the code too much you can get back to the working initial code by loading the lenet.py file with the cell below. (Or just reset the git repo ...)
%load lenet.py
convnets/lenet.ipynb
nouiz/summerschool2015
bsd-3-clause