Fixing a nasty bug

You can notice unusual spikes on the plot that seem to "predict" the value of the main trend:
```python
df[1010000:1025000].plot()
```
*Source: firmware/arduino_due_1MHz/analyze_current.ipynb (repo: yandex-load/volta, license: mpl-2.0)*
The distance between the spikes is 256 samples:
```python
for i in range(5):
    df[10230 + 256*i : 10250 + 256*(i+1)].plot()
```
At the very beginning there are empty samples with spikes, one per buffer. The value is "predicted" exactly that far ahead:
```python
df[:2048].plot()
```
It turned out there is a bug in the source code, described here: https://forum.arduino.cc/index.php?topic=137635.msg2965504#msg2965504. We fix it, try again, and everything works!
```python
df4 = pd.DataFrame(np.fromfile("./output3.bin", dtype=np.uint16).astype(np.float32) * (3300 / 2**12))
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df4[:16384].plot(ax=ax)
```
One millisecond, as we see it:
```python
fig = sns.plt.figure(figsize=(16, 6))
ax = sns.plt.subplot()
df4[6000000:6001000].plot(ax=ax)
```
TF Custom Estimator to Build a NN Autoencoder for Feature Extraction
```python
MODEL_NAME = 'auto-encoder-01'
TRAIN_DATA_FILES_PATTERN = 'data/data-*.csv'
RESUME_TRAINING = False
MULTI_THREADING = True
```
*Source: 05_Autoencoding/03.0 - Dimensionality Reduction - Autoencoding + Normalizer + XEntropy Loss.ipynb (repo: GoogleCloudPlatform/tf-estimator-tutorials, license: apache-2.0)*
1. Define Dataset Metadata
```python
FEATURE_COUNT = 64

HEADER = ['key']
HEADER_DEFAULTS = [[0]]
UNUSED_FEATURE_NAMES = ['key']
CLASS_FEATURE_NAME = 'CLASS'
FEATURE_NAMES = []

for i in range(FEATURE_COUNT):
    HEADER += ['x_{}'.format(str(i+1))]
    FEATURE_NAMES += ['x_{}'.format(str(i+1))]
    HEADER_DEFAULTS += [[0.0]]

HEADER += [CLASS_FEATURE_NAME]
HEADER_DEFAULTS += [['NA']]

print("Header: {}".format(HEADER))
print("Features: {}".format(FEATURE_NAMES))
print("Class Feature: {}".format(CLASS_FEATURE_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
```
2. Define CSV Data Input Function
```python
import multiprocessing

import tensorflow as tf
from tensorflow import data

def parse_csv_row(csv_row):
    columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
    features = dict(zip(HEADER, columns))

    for column in UNUSED_FEATURE_NAMES:
        features.pop(column)

    target = features.pop(CLASS_FEATURE_NAME)
    return features, target

def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
                 skip_header_lines=0, num_epochs=None, batch_size=200):

    shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False

    print("")
    print("* data input_fn:")
    print("================")
    print("Input file(s): {}".format(files_name_pattern))
    print("Batch size: {}".format(batch_size))
    print("Epoch Count: {}".format(num_epochs))
    print("Mode: {}".format(mode))
    print("Shuffle: {}".format(shuffle))
    print("================")
    print("")

    file_names = tf.matching_files(files_name_pattern)
    dataset = data.TextLineDataset(filenames=file_names)
    dataset = dataset.skip(skip_header_lines)

    if shuffle:
        dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)

    num_threads = multiprocessing.cpu_count() if MULTI_THREADING else 1

    dataset = dataset.batch(batch_size)
    dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row),
                          num_parallel_calls=num_threads)
    dataset = dataset.repeat(num_epochs)

    iterator = dataset.make_one_shot_iterator()
    features, target = iterator.get_next()
    return features, target

features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
```
3. Define Feature Columns

a. Load normalization params
```python
df_params = pd.read_csv("data/params.csv", header=0, index_col=0)
len(df_params)
df_params['feature_name'] = FEATURE_NAMES
df_params.head()
```
b. Create normalized feature columns
```python
def standard_scaler(x, mean, stdv):
    return (x - mean) / stdv

def maxmin_scaler(x, max_value, min_value):
    return (x - min_value) / (max_value - min_value)

def get_feature_columns():
    feature_columns = {}

    for feature_name in FEATURE_NAMES:
        feature_max = df_params[df_params.feature_name == feature_name]['max'].values[0]
        feature_min = df_params[df_params.feature_name == feature_name]['min'].values[0]
        # bind the current min/max as default arguments: a bare lambda would be
        # evaluated lazily and capture only the *last* feature's values
        normalizer_fn = lambda x, mx=feature_max, mn=feature_min: maxmin_scaler(x, mx, mn)
        feature_columns[feature_name] = tf.feature_column.numeric_column(
            feature_name, normalizer_fn=normalizer_fn)

    return feature_columns

print(get_feature_columns())
```
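As a quick standalone sanity check (the two scalers are repeated here so the snippet runs on its own, outside the notebook), both behave as expected on plain NumPy arrays:

```python
import numpy as np

# same two scalers as in the cell above
def standard_scaler(x, mean, stdv):
    return (x - mean) / stdv

def maxmin_scaler(x, max_value, min_value):
    return (x - min_value) / (max_value - min_value)

x = np.array([0.0, 5.0, 10.0])
print(maxmin_scaler(x, 10.0, 0.0))   # maps the range [0, 10] onto [0, 1]
print(standard_scaler(x, 5.0, 5.0))  # zero-centers, then divides by the std
```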
4. Define Autoencoder Model Function
```python
def autoencoder_model_fn(features, labels, mode, params):

    feature_columns = list(get_feature_columns().values())
    input_layer_size = len(feature_columns)

    encoder_hidden_units = params.encoder_hidden_units

    # decoder units are the reverse of the encoder units, without the middle layer (redundant)
    decoder_hidden_units = encoder_hidden_units.copy()
    decoder_hidden_units.reverse()
    decoder_hidden_units.pop(0)

    output_layer_size = len(FEATURE_NAMES)

    he_initialiser = tf.contrib.layers.variance_scaling_initializer()
    l2_regulariser = tf.contrib.layers.l2_regularizer(scale=params.l2_reg)

    print("[{}]->{}-{}->[{}]".format(len(feature_columns),
                                     encoder_hidden_units,
                                     decoder_hidden_units,
                                     output_layer_size))

    is_training = (mode == tf.estimator.ModeKeys.TRAIN)

    # input layer
    input_layer = tf.feature_column.input_layer(features=features,
                                                feature_columns=feature_columns)

    # add Gaussian noise to the input layer
    noisy_input_layer = input_layer + (params.noise_level * tf.random_normal(tf.shape(input_layer)))

    # dropout layer
    dropout_layer = tf.layers.dropout(inputs=noisy_input_layer,
                                      rate=params.dropout_rate,
                                      training=is_training)

    # encoder layers stack
    encoding_hidden_layers = tf.contrib.layers.stack(inputs=dropout_layer,
                                                     layer=tf.contrib.layers.fully_connected,
                                                     stack_args=encoder_hidden_units,
                                                     weights_regularizer=l2_regulariser,
                                                     activation_fn=tf.nn.relu)

    # decoder layers stack
    decoding_hidden_layers = tf.contrib.layers.stack(inputs=encoding_hidden_layers,
                                                     layer=tf.contrib.layers.fully_connected,
                                                     stack_args=decoder_hidden_units,
                                                     weights_regularizer=l2_regulariser,
                                                     activation_fn=tf.nn.relu)

    # output (reconstructed) layer
    output_layer = tf.layers.dense(inputs=decoding_hidden_layers,
                                   units=output_layer_size,
                                   activation=None)

    # encoding output (i.e., the extracted features), reshaped
    encoding_output = tf.squeeze(encoding_hidden_layers)

    # reconstruction output, reshaped (for the serving function)
    reconstruction_output = tf.squeeze(tf.nn.sigmoid(output_layer))

    # provide an estimator spec for `ModeKeys.PREDICT`
    if mode == tf.estimator.ModeKeys.PREDICT:
        predictions = {
            'encoding': encoding_output,
            'reconstruction': reconstruction_output
        }
        export_outputs = {
            'predict': tf.estimator.export.PredictOutput(predictions)
        }
        return tf.estimator.EstimatorSpec(mode,
                                          predictions=predictions,
                                          export_outputs=export_outputs)

    # define the loss based on reconstruction and regularisation
    # (an MSE alternative: tf.losses.mean_squared_error(tf.squeeze(input_layer), reconstruction_output))
    reconstruction_loss = tf.losses.sigmoid_cross_entropy(multi_class_labels=tf.squeeze(input_layer),
                                                          logits=tf.squeeze(output_layer))
    loss = reconstruction_loss + tf.losses.get_regularization_loss()

    # create the optimiser and training operation
    optimizer = tf.train.AdamOptimizer(params.learning_rate)
    train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())

    # root mean squared error as an additional eval metric
    eval_metric_ops = {
        "rmse": tf.metrics.root_mean_squared_error(tf.squeeze(input_layer), reconstruction_output)
    }

    # provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes
    return tf.estimator.EstimatorSpec(mode=mode,
                                      loss=loss,
                                      train_op=train_op,
                                      eval_metric_ops=eval_metric_ops)


def create_estimator(run_config, hparams):
    estimator = tf.estimator.Estimator(model_fn=autoencoder_model_fn,
                                       params=hparams,
                                       config=run_config)
    print("")
    print("Estimator Type: {}".format(type(estimator)))
    print("")
    return estimator
```
5. Run Experiment using Estimator train_and_evaluate

a. Set the parameters
```python
TRAIN_SIZE = 2000
NUM_EPOCHS = 1000
BATCH_SIZE = 100
NUM_EVAL = 10
TOTAL_STEPS = (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))

hparams = tf.contrib.training.HParams(
    num_epochs = NUM_EPOCHS,
    batch_size = BATCH_SIZE,
    encoder_hidden_units=[30, 3],
    learning_rate = 0.01,
    l2_reg = 0.0001,
    noise_level = 0.0,
    max_steps = TOTAL_STEPS,
    dropout_rate = 0.05
)

model_dir = 'trained_models/{}'.format(MODEL_NAME)

run_config = tf.contrib.learn.RunConfig(
    save_checkpoints_steps=CHECKPOINT_STEPS,
    tf_random_seed=19830610,
    model_dir=model_dir
)

print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:", TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", TOTAL_STEPS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each", NUM_EPOCHS/NUM_EVAL, "epochs")
print("Save Checkpoint After", CHECKPOINT_STEPS, "steps")
```
b. Define TrainSpec and EvalSpec
```python
train_spec = tf.estimator.TrainSpec(
    input_fn = lambda: csv_input_fn(
        TRAIN_DATA_FILES_PATTERN,
        mode = tf.contrib.learn.ModeKeys.TRAIN,
        num_epochs=hparams.num_epochs,
        batch_size=hparams.batch_size
    ),
    max_steps=hparams.max_steps,
    hooks=None
)

eval_spec = tf.estimator.EvalSpec(
    input_fn = lambda: csv_input_fn(
        TRAIN_DATA_FILES_PATTERN,
        mode=tf.contrib.learn.ModeKeys.EVAL,
        num_epochs=1,
        batch_size=hparams.batch_size
    ),
    # exporters=[tf.estimator.LatestExporter(
    #     name="encode",  # the name of the folder in which the model will be exported to under export
    #     serving_input_receiver_fn=csv_serving_input_fn,
    #     exports_to_keep=1,
    #     as_text=True)],
    steps=None,
    hooks=None
)
```
d. Run Experiment via train_and_evaluate
```python
import shutil
from datetime import datetime

if not RESUME_TRAINING:
    print("Removing previous artifacts...")
    shutil.rmtree(model_dir, ignore_errors=True)
else:
    print("Resuming training...")

tf.logging.set_verbosity(tf.logging.INFO)

time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")

estimator = create_estimator(run_config, hparams)

tf.estimator.train_and_evaluate(
    estimator=estimator,
    train_spec=train_spec,
    eval_spec=eval_spec
)

time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
```
6. Use the trained model to encode data (prediction)
```python
import itertools

DATA_SIZE = 2000

input_fn = lambda: csv_input_fn(
    TRAIN_DATA_FILES_PATTERN,
    mode=tf.contrib.learn.ModeKeys.INFER,
    num_epochs=1,
    batch_size=500
)

estimator = create_estimator(run_config, hparams)

predictions = estimator.predict(input_fn=input_fn)
predictions = itertools.islice(predictions, DATA_SIZE)
predictions = list(map(lambda item: list(item["encoding"]), predictions))
print(predictions[:5])
```
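`itertools.islice` is what caps the prediction stream at `DATA_SIZE` items, since `estimator.predict` returns a generator. A standalone illustration with a stand-in generator (`fake_predictions` is a hypothetical substitute for the real estimator, used here only so the snippet runs on its own):

```python
import itertools

def fake_predictions():
    # stand-in for estimator.predict(), which yields prediction dicts lazily
    i = 0
    while True:
        yield {"encoding": [i, i + 1, i + 2]}
        i += 1

preds = itertools.islice(fake_predictions(), 3)          # take only the first 3
preds = list(map(lambda item: item["encoding"], preds))  # same pattern as above
print(preds)  # [[0, 1, 2], [1, 2, 3], [2, 3, 4]]
```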
Visualise Encoded Data
```python
y = pd.read_csv("data/data-01.csv", header=None, index_col=0)[65]

data_reduced = pd.DataFrame(predictions, columns=['c1', 'c2', 'c3'])
data_reduced['class'] = y
data_reduced.head()

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(15, 10))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(xs=data_reduced.c2/1000000,
           ys=data_reduced.c3/1000000,
           zs=data_reduced.c1/1000000,
           c=data_reduced['class'], marker='o')
plt.show()
```
The next function is used to assign an observation to the centroid that is nearest to it.
```python
import sys
from math import sqrt

def assign_nearest(centroids, point):
    """
    Assigns the point to its nearest centroid.
    params:
        centroids - a list of centroids, each of which has 2 dimensions
        point - a point, which has two dimensions
    returns: the index of the centroid the point is closest to.
    """
    nearest_idx = 0
    nearest_dist = sys.float_info.max  # largest float on your computer
    for i in range(len(centroids)):
        # sqrt((x1-x2)^2 + (y1-y2)^2)
        dist = sqrt((centroids[i][0] - point[0])**2 + (centroids[i][1] - point[1])**2)
        if dist < nearest_dist:  # smallest distance thus far
            nearest_idx = i
            nearest_dist = dist
    return nearest_idx
```
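A quick standalone check of this assignment rule (the function is repeated here so the snippet runs on its own): a point near the origin goes to the first centroid, a point near (10, 10) goes to the second.

```python
import sys
from math import sqrt

def assign_nearest(centroids, point):
    """Return the index of the centroid nearest (Euclidean) to a 2-D point."""
    nearest_idx = 0
    nearest_dist = sys.float_info.max
    for i in range(len(centroids)):
        dist = sqrt((centroids[i][0] - point[0])**2 + (centroids[i][1] - point[1])**2)
        if dist < nearest_dist:
            nearest_idx = i
            nearest_dist = dist
    return nearest_idx

centroids = [(0.0, 0.0), (10.0, 10.0)]
print(assign_nearest(centroids, (1.0, 2.0)))  # nearest to (0, 0)  -> 0
print(assign_nearest(centroids, (9.0, 8.0)))  # nearest to (10, 10) -> 1
```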
*Source: 28_Prelab_ML-I/ML1-prelab.ipynb (repo: greenelab/GCB535, license: bsd-3-clause)*
The next function actually performs k-means clustering. You need to understand how the algorithm works at the level of the video lecture. You don't need to understand every line of this, but you should feel free to dive in if you're interested!
```python
import random
import numpy as np

def kmeans(data, k):
    """
    Performs k-means clustering for two-dimensional data.
    params:
        data - a numpy array of shape (N, 2)
        k - the number of clusters
    returns: a dictionary with three elements
        ['centroids']: a list of the final centroid positions.
        ['members']: a list [one per point] of the index of the centroid each
                     point was assigned to at the conclusion of clustering.
        ['paths']: a list [one per centroid] of lists [one per iteration]
                   containing the points occupied by each centroid.
    """
    # http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.ndarray.shape.html#numpy.ndarray.shape
    # .shape returns the size of the input numpy array in each dimension;
    # if there are not 2 dimensions, we can't handle it here
    if data.shape[1] != 2:
        return 'This implementation only supports two dimensional data.'
    if data.shape[0] < k:
        return 'This implementation requires at least as many points as clusters.'

    # pick random points as initial centroids
    # (sample row indices rather than the array itself, which newer
    # Python versions reject in random.sample)
    centroids = []
    for idx in random.sample(range(data.shape[0]), k):
        # note the use of tuples here
        centroids.append(tuple(data[idx].tolist()))

    paths = []
    for i in range(k):
        paths.append([centroids[i]])

    # we'll store all previous states,
    # so if we ever hit the same point again we know to stop
    previous_states = set()

    # continue until we repeat the same centroid positions
    assignments = None
    while not tuple(centroids) in previous_states:
        previous_states.add(tuple(centroids))
        assignments = []
        for point in data:
            assignments.append(assign_nearest(centroids, point))

        centroids_sum = []  # a list for each centroid to store its position sum
        centroids_n = []    # a list for each centroid to store its member count
        for i in range(k):
            centroids_sum.append((0, 0))
            centroids_n.append(0)

        for i in range(len(assignments)):
            centroid = assignments[i]
            centroids_n[centroid] += 1  # found a new member of this centroid
            # add the point
            centroids_sum[centroid] = (centroids_sum[centroid][0] + data[i][0],
                                       centroids_sum[centroid][1] + data[i][1])

        for i in range(k):
            new_centroid = (centroids_sum[i][0] / centroids_n[i],
                            centroids_sum[i][1] / centroids_n[i])
            centroids[i] = new_centroid
            paths[i].append(new_centroid)

    r_dict = {}
    r_dict['centroids'] = centroids
    r_dict['paths'] = paths
    r_dict['members'] = assignments
    return r_dict
```
This next cell is full of plotting code. It uses something called matplotlib to show kmeans clustering. Specifically it shows the path centroids took, where they ended up, and which points were assigned to them. Feel free to take a look at this, but understanding it goes beyond the scope of the class.
```python
import numpy as np
import matplotlib.pyplot as plt

def plot_km(km, points):
    """
    Plots the results of a kmeans run.
    params:
        km - a kmeans result object that contains centroids, paths, and members
    returns: a matplotlib figure object
    """
    (xmin, ymin) = np.amin(points, axis=0)
    (xmax, ymax) = np.amax(points, axis=0)

    plt.figure(1)
    plt.clf()
    plt.plot(points[:, 0], points[:, 1], 'k.', markersize=2)

    for path in km['paths']:
        nppath = np.asarray(path)
        plt.plot(nppath[:, 0], nppath[:, 1])

    # plot the calculated centroids as a red X
    centroids = np.asarray(km['centroids'])
    plt.scatter(centroids[:, 0], centroids[:, 1],
                marker='x', s=169, linewidths=3, color='r', zorder=10)

    plt.title('K-means clustering of simulated data.\n'
              'estimated (red), path (lines)')
    plt.xlim(xmin, xmax)
    plt.ylim(ymin, ymax)
    plt.xticks(())
    plt.yticks(())
    plt.show()
```
The next line will load a file of data using the numpy function loadtxt. We've created a population of points.
```python
pop = np.loadtxt('kmeans-population.csv', delimiter=',')
```
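loadtxt also accepts any file-like object, so we can see what it returns without the CSV file on disk. A standalone sketch (the in-memory text stands in for kmeans-population.csv, which has the same one-point-per-line shape):

```python
import io
import numpy as np

# simulate a small comma-separated file in memory: one "x,y" point per line
csv_text = "1.0,2.0\n3.0,4.0\n5.0,6.0\n"
pop = np.loadtxt(io.StringIO(csv_text), delimiter=',')
print(pop.shape)  # (3, 2): three points, two dimensions each
```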
Now we can use the k-means function to cluster! In this case, we're saying we want to find three clusters.
```python
km_result = kmeans(pop, 3)
```
Now we can plot the results!
```python
plot_km(km_result, pop)
```
Table: HTML

Now let's try a different output. This time we still want a table format, but instead of plain text we would like it exported as HTML (the markup language used for web pages), displayed vertically with nice colors to highlight the comparison. To achieve this, all you need to do is add the keyword output to the collate command and give it the value html2.
```python
table = collate(collation, output='html2')
```
*Source: unit5/3_collation-outputs.ipynb (repo: DiXiT-eu/collatex-tutorial, license: gpl-3.0)*
Before moving to the other outputs, try to produce the simple HTML output by changing the code above: the value required in the output keyword is html.

Table: JSON

The same alignment table can be exported in a variety of formats, as we have seen, including JSON (JavaScript Object Notation), a format widely used nowadays for storing and interchanging data. In order to produce JSON as output, we need to specify json as the output format.
```python
table = collate(collation, output='json')
print(table)
```
Table: XML and XML/TEI

We can use the same procedure to export the table in XML or XML/TEI (the latter produces a condensed version of the table that only lists witnesses at points of divergence, also called a negative apparatus). To do this you just specify a different output format. Let's start with the XML output (which you can later post-process using XSLT or other tools).
```python
table = collate(collation, output='xml')
print(table)
```
And, finally, you can test the XML/TEI output that produces XML following the TEI parallel segmentation encoding guidelines.
```python
table = collate(collation, output='tei')
print(table)
```
Graph: SVG

And now for something different: try the graph, exported in the SVG format.
```python
graph = collate(collation, output='svg')
```
Set Up

In this first cell, we'll load the necessary libraries.
```python
import math
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf

print(tf.__version__)
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
```
*Source: courses/machine_learning/deepdive/05_artandscience/a_handtuning.ipynb (repo: GoogleCloudPlatform/training-data-analyst, license: apache-2.0)*
In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). Can we use total_rooms as our input feature? What's going on with the values for that feature? This data is at the city-block level, so features like total_rooms and population reflect the total number of rooms in that block, or the total number of people who live in that block, respectively. Let's create a different, more appropriate feature. Because we are predicting the price of a single house, we should try to make all our features correspond to a single house as well.
```python
df['num_rooms'] = df['total_rooms'] / df['households']
df.describe()

# Split into train and eval
np.random.seed(seed=1)  # makes split reproducible
msk = np.random.rand(len(df)) < 0.8
traindf = df[msk]
evaldf = df[~msk]
```
Build the first model In this exercise, we'll be trying to predict median_house_value. It will be our label (sometimes also called a target). We'll use num_rooms as our input feature. To train our model, we'll use the LinearRegressor estimator. The Estimator takes care of a lot of the plumbing, and exposes a convenient way to interact with data, training, and evaluation.
```python
OUTDIR = './housing_trained'

def train_and_evaluate(output_dir, num_train_steps):
    estimator = tf.compat.v1.estimator.LinearRegressor(
        model_dir = output_dir,
        feature_columns = [tf.feature_column.numeric_column('num_rooms')])

    # add RMSE evaluation metric
    def rmse(labels, predictions):
        pred_values = tf.cast(predictions['predictions'], tf.float64)
        return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels, pred_values)}
    estimator = tf.compat.v1.estimator.add_metrics(estimator, rmse)

    train_spec = tf.estimator.TrainSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = traindf[["num_rooms"]],
            y = traindf["median_house_value"],  # note: not yet scaled
            num_epochs = None,
            shuffle = True),
        max_steps = num_train_steps)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = evaldf[["num_rooms"]],
            y = evaldf["median_house_value"],  # note: not yet scaled
            num_epochs = 1,
            shuffle = False),
        steps = None,
        start_delay_secs = 1,  # start evaluating after N seconds
        throttle_secs = 10,    # evaluate every N seconds
    )
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
```
1. Scale the output Let's scale the target values so that the default parameters are more appropriate.
```python
SCALE = 100000
OUTDIR = './housing_trained'

def train_and_evaluate(output_dir, num_train_steps):
    estimator = tf.compat.v1.estimator.LinearRegressor(
        model_dir = output_dir,
        feature_columns = [tf.feature_column.numeric_column('num_rooms')])

    # add RMSE evaluation metric, reported back on the original dollar scale
    def rmse(labels, predictions):
        pred_values = tf.cast(predictions['predictions'], tf.float64)
        return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
    estimator = tf.compat.v1.estimator.add_metrics(estimator, rmse)

    train_spec = tf.estimator.TrainSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = traindf[["num_rooms"]],
            y = traindf["median_house_value"] / SCALE,  # note the scaling
            num_epochs = None,
            shuffle = True),
        max_steps = num_train_steps)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = evaldf[["num_rooms"]],
            y = evaldf["median_house_value"] / SCALE,  # note the scaling
            num_epochs = 1,
            shuffle = False),
        steps = None,
        start_delay_secs = 1,  # start evaluating after N seconds
        throttle_secs = 10,    # evaluate every N seconds
    )
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
```
2. Change learning rate and batch size Can you come up with better parameters?
```python
SCALE = 100000
OUTDIR = './housing_trained'

def train_and_evaluate(output_dir, num_train_steps):
    myopt = tf.compat.v1.train.FtrlOptimizer(learning_rate = 0.2)  # note the learning rate
    estimator = tf.compat.v1.estimator.LinearRegressor(
        model_dir = output_dir,
        feature_columns = [tf.feature_column.numeric_column('num_rooms')],
        optimizer = myopt)

    # add RMSE evaluation metric
    def rmse(labels, predictions):
        pred_values = tf.cast(predictions['predictions'], tf.float64)
        return {'rmse': tf.compat.v1.metrics.root_mean_squared_error(labels*SCALE, pred_values*SCALE)}
    estimator = tf.compat.v1.estimator.add_metrics(estimator, rmse)

    train_spec = tf.estimator.TrainSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = traindf[["num_rooms"]],
            y = traindf["median_house_value"] / SCALE,  # note the scaling
            num_epochs = None,
            batch_size = 512,  # note the batch size
            shuffle = True),
        max_steps = num_train_steps)
    eval_spec = tf.estimator.EvalSpec(
        input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
            x = evaldf[["num_rooms"]],
            y = evaldf["median_house_value"] / SCALE,  # note the scaling
            num_epochs = 1,
            shuffle = False),
        steps = None,
        start_delay_secs = 1,  # start evaluating after N seconds
        throttle_secs = 10,    # evaluate every N seconds
    )
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)

# Run training
shutil.rmtree(OUTDIR, ignore_errors = True)  # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 100)
```
Again we need functions for shuffling the data and calculating classification errors.
```python
import numpy as np

### function for shuffling the data and labels
def shuffle_in_unison(features, labels):
    rng_state = np.random.get_state()
    np.random.shuffle(features)
    np.random.set_state(rng_state)
    np.random.shuffle(labels)

### calculate classification errors
# return a fraction: (number misclassified) / (total number of datapoints)
def calc_classification_error(predictions, class_labels):
    n = predictions.size
    num_of_errors = 0.
    for idx in range(n):
        if (predictions[idx] >= 0.5 and class_labels[idx] == 0) or \
           (predictions[idx] < 0.5 and class_labels[idx] == 1):
            num_of_errors += 1
    return num_of_errors / n
```
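The RNG-state trick is what keeps features and labels aligned: saving the state and restoring it before the second shuffle applies the exact same permutation to both arrays. A quick standalone check (the function is repeated here so the snippet runs on its own):

```python
import numpy as np

def shuffle_in_unison(features, labels):
    # restoring the saved RNG state makes both shuffles use the same permutation
    rng_state = np.random.get_state()
    np.random.shuffle(features)
    np.random.set_state(rng_state)
    np.random.shuffle(labels)

features = np.array([[0, 0], [1, 1], [2, 2], [3, 3]])
labels = np.array([0, 1, 2, 3])
shuffle_in_unison(features, labels)

# row i was built as [label_i, label_i], so pairing survives the shuffle
print(all(features[i][0] == labels[i] for i in range(len(labels))))  # True
```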
*Source: Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb (repo: dinrker/PredictiveModeling, license: mit)*
0.1 Load the dataset of handwritten digits

We are going to use the MNIST dataset throughout this session. Let's load the data...
```python
# mnist = pd.read_csv('../data/mnist_train_100.csv', header=None)  # smaller local alternative

# load the 70,000 x 784 matrix
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original').data

# shuffle, and optionally reduce to 5k instances
np.random.shuffle(mnist)
#mnist = mnist[:5000,:]/255.
print("Dataset size: %d x %d" % (mnist.shape))

# subplot containing first image
ax1 = plt.subplot(1,2,1)
digit = mnist[1,:]
ax1.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')

# subplot containing second image
ax2 = plt.subplot(1,2,2)
digit = mnist[2,:]
ax2.imshow(np.reshape(digit, (28, 28)), cmap='Greys_r')
plt.show()
```
1 Gradient Descent for PCA

Recall the Principal Component Analysis model we covered in the last session. Again, the goal of PCA is: for a given datapoint $\mathbf{x}_{i}$, find a lower-dimensional representation $\mathbf{h}_{i}$ such that $\mathbf{x}_{i}$ can be 'predicted' from $\mathbf{h}_{i}$ using a linear transformation. Again, the loss function can be written as:

$$ \mathcal{L}_{\text{PCA}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{x}_{i}\mathbf{W}\mathbf{W}^{T})^{2}.$$

Instead of using the closed-form solution we discussed in the previous session, here we'll use gradient descent. The reason for doing this will become clear later in the session, as we move on to cover a non-linear version of PCA. To run gradient descent, we of course need the derivative of the loss w.r.t. the parameters, which are in this case the transformation matrix $\mathbf{W}$:

$$ \nabla_{\mathbf{W}} \mathcal{L}_{\text{PCA}} = -4\sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{\tilde x}_{i})^{T}\mathbf{h}_{i} $$

Now let's run our stochastic gradient PCA on the MNIST dataset...

<span style="color:red">Caution: Running the following PCA code could take several minutes or more, depending on your computer's processing power.</span>
```python
import time
import numpy as np

# set the random number generator for reproducibility
np.random.seed(49)

# define the dimensionality of the hidden rep.
n_components = 200

# randomly initialize the weight matrix
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      high=4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      size=(mnist.shape[1], n_components))

# initialize the step-size
alpha = 1e-3

# initialize the gradient
grad = np.infty

# set the tolerance
tol = 1e-8

# initialize error
old_error = 0
error = [np.infty]

batch_size = 250

### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
    for batch_idx in range(mnist.shape[0] // batch_size):
        x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
        h = np.dot(x, W)
        x_recon = np.dot(h, W.T)
        # compute gradient
        diff = x - x_recon
        grad = (-4./batch_size)*np.dot(diff.T, h)
        # update parameters
        W = W - alpha*grad
    # track the error
    if iter_idx % 25 == 0:
        old_error = error[-1]
        diff = mnist - np.dot(np.dot(mnist, W), W.T)
        recon_error = np.mean(np.sum(diff**2, 1))
        error.append(recon_error)
        print("Epoch %d, Reconstruction Error: %.3f" % (iter_idx, recon_error))
    iter_idx += 1
end_time = time.time()

print()
print("Training ended after %i iterations, taking a total of %.2f seconds." % (iter_idx, end_time-start_time))
print("Final Reconstruction Error: %.2f" % (error[-1]))

reduced_mnist = np.dot(mnist, W)
print("Dataset is now of size: %d x %d" % (reduced_mnist.shape))
```
Let's visualize a reconstruction...
```python
img_idx = 2
reconstructed_img = np.dot(reduced_mnist[img_idx,:], W.T)
original_img = mnist[img_idx,:]

# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Image")

# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")
plt.show()
```
We can again visualize the transformation matrix $\mathbf{W}^{T}$. Its rows act as 'filters' or 'feature detectors'. However, without the orthogonality constraint, we've lost the identifiability of the components...
# two components to show
comp1 = 0
comp2 = 150

# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')

# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')

plt.show()
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
2. Nonlinear Dimensionality Reduction with Autoencoders

In the last session (and section) we learned about Principal Component Analysis, a technique that finds some linear projection that reduces the dimensionality of the data while preserving its variance. We looked at it as a form of unsupervised linear regression, where we predict the data itself instead of some associated value (i.e. a label). In this section, we will move on to a nonlinear dimensionality reduction technique called an Autoencoder and derive its optimization procedure.

2.1 Defining the Autoencoder Model

Recall that PCA is comprised of a linear projection step followed by application of the inverse projection. An Autoencoder is the same model but with a non-linear transformation placed on the hidden representation. To reiterate, our goal is: for a datapoint $\mathbf{x}_{i}$, find a lower-dimensional representation $\mathbf{h}_{i}$ such that $\mathbf{x}_{i}$ can be 'predicted' from $\mathbf{h}_{i}$---but this time, not necessarily with a linear transformation. In math, this statement can be written as $$\mathbf{\tilde x}_{i} = \mathbf{h}_{i} \mathbf{W}^{T} \text{ where } \mathbf{h}_{i} = f(\mathbf{x}_{i} \mathbf{W}). $$ $\mathbf{W}$ is a $D \times K$ matrix of parameters that needs to be learned--much like the $\beta$ vector in regression models. $D$ is the dimensionality of the original data, and $K$ is the dimensionality of the compressed representation $\mathbf{h}_{i}$. Lastly, we have the new component, the transformation function $f$. There are many possible functions to choose for $f$; we'll use a familiar one, the logistic function $$f(z) = \frac{1}{1+\exp(-z)}.$$ The graphic below depicts the autoencoder's computation path:

Optimization

Having defined the Autoencoder model, we look to write learning as an optimization process.
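To make the two-step computation concrete, here is a minimal, self-contained sketch of a single forward pass (the names `autoencode` and the toy data are introduced here just for illustration; `logistic` mirrors the helper defined later in the notebook):

```python
import numpy as np

def logistic(z):
    # elementwise logistic (sigmoid) non-linearity f(z) = 1 / (1 + exp(-z))
    return 1. / (1. + np.exp(-z))

def autoencode(x, W):
    # encode: h_i = f(x_i W), shape (N, K)
    h = logistic(np.dot(x, W))
    # decode with the tied (transposed) weights: x~_i = h_i W^T, shape (N, D)
    x_recon = np.dot(h, W.T)
    return h, x_recon

# toy example: N=3 points in D=4 dimensions, compressed to K=2
rng = np.random.RandomState(0)
x = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
h, x_recon = autoencode(x, W)
```

Because $f$ squashes values into $(0, 1)$, the hidden code $\mathbf{h}_{i}$ is bounded even when the inputs are not; this non-linearity is exactly what separates the Autoencoder from PCA.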
Recall that we wish to make a reconstruction of the data, denoted $\mathbf{\tilde x}_{i}$, as close as possible to the original input: $$\mathcal{L}_{\text{AE}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{\tilde x}_{i})^{2}.$$ We can make a substitution for $\mathbf{\tilde x}_{i}$ from the equation above: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - \mathbf{h}_{i}\mathbf{W}^{T})^{2}.$$ And we can make another substitution for $\mathbf{h}_{i}$, bringing us to the final form of the loss function: $$ = \sum_{i=1}^{N} (\mathbf{x}_{i} - f(\mathbf{x}_{i}\mathbf{W})\mathbf{W}^{T})^{2}.$$

<span style="color:red">STUDENT ACTIVITY (15 mins)</span>

Derive an expression for the gradient: $$ \nabla_{W}\mathcal{L}_{\text{AE}} = ? $$ Take $f$ to be the logistic function, which has a derivative of $f'(z) = f(z)(1-f(z))$. Those functions are provided for you below.
def logistic(x):
    return 1./(1+np.exp(-x))

def logistic_derivative(x):
    z = logistic(x)
    return np.multiply(z, 1-z)

def compute_gradient(x, x_recon, h, a):
    # parameters:
    # x: the original data
    # x_recon: the reconstruction of x
    # h: the hidden units (after application of f)
    # a: the pre-activations (before the application of f)
    return #TODO

np.random.seed(39)
# dummy variables for testing
x = np.random.normal(size=(5,3))
x_recon = x + np.random.normal(size=x.shape)
W = np.random.normal(size=(x.shape[1], 2))
a = np.dot(x, W)
h = logistic(a)

compute_gradient(x, x_recon, h, a)
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
Should print

    array([[ 4.70101821,  2.26494039],
           [ 2.86585042,  0.0731302 ],
           [ 0.79869215,  0.15570277]])

Autoencoder (AE) Overview

Data

We observe $\mathbf{x}_{i}$ where \begin{eqnarray} \mathbf{x}_{i} = (x_{i,1}, \dots, x_{i,D}) &:& \mbox{set of $D$ explanatory variables (aka features). No labels.} \end{eqnarray}

Parameters

$\mathbf{W}$: Matrix with dimensionality $D \times K$, where $D$ is the dimensionality of the original data and $K$ the dimensionality of the new features. The matrix encodes the transformation between the original and new feature spaces.

Error Function

\begin{eqnarray} \mathcal{L} = \sum_{i=1}^{N} ( \mathbf{x}_{i} - f(\mathbf{x}_{i} \mathbf{W}) \mathbf{W}^{T})^{2} \end{eqnarray}

2.2 Autoencoder Implementation

Now let's train an Autoencoder...
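If you want to check your `compute_gradient` answer without being handed the formula, you can compare it against a finite-difference approximation of the loss. Below is a self-contained sketch (the helpers `ae_loss` and `numerical_gradient` are ad-hoc names introduced here, not part of the notebook); any correct analytic gradient should match `numerical_gradient` entry-wise, and even the numerical gradient alone must decrease the loss when you step against it:

```python
import numpy as np

def logistic(z):
    return 1. / (1. + np.exp(-z))

def ae_loss(x, W):
    # L = sum over all entries of (x_i - f(x_i W) W^T)^2
    diff = x - np.dot(logistic(np.dot(x, W)), W.T)
    return np.sum(diff ** 2)

def numerical_gradient(x, W, eps=1e-5):
    # central finite differences, one entry of W at a time
    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            W_plus, W_minus = W.copy(), W.copy()
            W_plus[i, j] += eps
            W_minus[i, j] -= eps
            grad[i, j] = (ae_loss(x, W_plus) - ae_loss(x, W_minus)) / (2 * eps)
    return grad

rng = np.random.RandomState(39)
x = rng.normal(size=(5, 3))
W = rng.normal(size=(3, 2))
g = numerical_gradient(x, W)

# a small step against the (numerical) gradient should lower the loss
before = ae_loss(x, W)
after = ae_loss(x, W - 1e-4 * g)
```

A correct analytic gradient should agree with the central-difference estimate to several decimal places for small `eps`.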
# set the random number generator for reproducibility
np.random.seed(39)

# define the dimensionality of the hidden rep.
n_components = 200

# Randomly initialize the transformation matrix
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      high=4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      size=(mnist.shape[1], n_components))

# Initialize the step-size
alpha = .01
# Initialize the gradient
grad = np.infty
# Initialize error
old_error = 0
error = [np.infty]
batch_size = 250

### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
    for batch_idx in xrange(mnist.shape[0]/batch_size):
        x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
        pre_act = np.dot(x, W)
        h = logistic(pre_act)
        x_recon = np.dot(h, W.T)
        # compute gradient
        grad = compute_gradient(x, x_recon, h, pre_act)
        # update parameters
        W = W - alpha/batch_size * grad
    # track the error
    if iter_idx % 25 == 0:
        old_error = error[-1]
        diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)
        recon_error = np.mean( np.sum(diff**2, 1) )
        error.append(recon_error)
        print "Epoch %d, Reconstruction Error: %.3f" %(iter_idx, recon_error)
    iter_idx += 1
end_time = time.time()

print
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Reconstruction Error: %.2f" %(error[-1])

reduced_mnist = np.dot(mnist, W)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)

img_idx = 2
reconstructed_img = np.dot(logistic(reduced_mnist[img_idx,:]), W.T)
original_img = mnist[img_idx,:]

# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Digit")

# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")

plt.show()

# two components to show
comp1 = 0
comp2 = 150

# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')

# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')

plt.show()
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
2.3 SciKit Learn Version We can hack the Scikit-Learn Regression neural network into an Autoencoder by feeding it the data back as the labels...
from sklearn.neural_network import MLPRegressor

# set the random number generator for reproducibility
np.random.seed(39)

# define the dimensionality of the hidden rep.
n_components = 200

# define model
ae = MLPRegressor(hidden_layer_sizes=(n_components,), activation='logistic')

### train Autoencoder
start_time = time.time()
ae.fit(mnist, mnist)
end_time = time.time()

recon_error = np.mean(np.sum((mnist - ae.predict(mnist))**2, 1))
W = ae.coefs_[0]
b = ae.intercepts_[0]
reduced_mnist = logistic(np.dot(mnist, W) + b)

print
print "Training ended after a total of %.2f seconds." %(end_time-start_time)
print "Final Reconstruction Error: %.2f" %(recon_error)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)

img_idx = 5
reconstructed_img = np.dot(reduced_mnist[img_idx,:], ae.coefs_[1]) + ae.intercepts_[1]
original_img = mnist[img_idx,:]

# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Digit")

# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")

plt.show()

# two components to show
comp1 = 0
comp2 = 150

# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')

# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')

plt.show()
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
2.4 Denoising Autoencoder (DAE)

Lastly, we are going to examine an extension to the Autoencoder called a Denoising Autoencoder (DAE). It has the following loss function: $$\mathcal{L}_{\text{DAE}} = \sum_{i=1}^{N} (\mathbf{x}_{i} - f((\hat{\boldsymbol{\zeta}} \odot \mathbf{x}_{i})\mathbf{W})\mathbf{W}^{T})^{2} \ \text{ where } \hat{\boldsymbol{\zeta}} \sim \text{Bernoulli}(p).$$ In words, what we're doing is drawing a Bernoulli (i.e. binary) matrix the same size as the input features, and feeding in a corrupted version of $\mathbf{x}_{i}$. The Autoencoder, then, must try to recreate the original data from a lossy representation. This has the effect of forcing the Autoencoder to use features that generalize better.

Let's make the simple change that implements a DAE below...
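The corruption step on its own is just an elementwise product with a Bernoulli 'keep' mask. Here is a toy sketch (with made-up positive data standing in for a mini-batch of `mnist` pixels):

```python
import numpy as np

rng = np.random.RandomState(0)
# stand-in for a mini-batch of strictly positive pixel intensities
x = rng.rand(4, 10) + 0.5

# draw a Bernoulli(p=0.8) "keep" mask of the same shape as the input,
# so each feature is zeroed out with probability 0.2
keep = rng.binomial(n=1, p=0.8, size=x.shape)
x_corrupt = x * keep

# the corrupted copy is fed to the encoder; the clean x remains the
# reconstruction target in the loss
n_zeroed = np.sum(x_corrupt == 0)
```

Note that only the encoder's input is corrupted: the loss still compares the reconstruction against the clean $\mathbf{x}_{i}$, which is what forces the model to fill in the missing features.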
# set the random number generator for reproducibility
np.random.seed(39)

# define the dimensionality of the hidden rep.
n_components = 200

# Randomly initialize the weight matrix
W = np.random.uniform(low=-4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      high=4 * np.sqrt(6. / (n_components + mnist.shape[1])),
                      size=(mnist.shape[1], n_components))

# Initialize the step-size
alpha = .01
# Initialize the gradient
grad = np.infty
# Set the tolerance
tol = 1e-8
# Initialize error
old_error = 0
error = [np.infty]
batch_size = 250

### train with stochastic gradients
start_time = time.time()
iter_idx = 1
# loop until gradient updates become small
while (alpha*np.linalg.norm(grad) > tol) and (iter_idx < 300):
    for batch_idx in xrange(mnist.shape[0]/batch_size):
        x = mnist[batch_idx*batch_size:(batch_idx+1)*batch_size, :]
        # add noise to features
        x_corrupt = np.multiply(x, np.random.binomial(n=1, p=.8, size=x.shape))
        pre_act = np.dot(x_corrupt, W)
        h = logistic(pre_act)
        x_recon = np.dot(h, W.T)
        # compute gradient
        diff = x - x_recon
        grad = -2.*(np.dot(diff.T, h) + np.dot(np.multiply(np.dot(diff, W), logistic_derivative(pre_act)).T, x_corrupt).T)
        # NOTICE: during the 'backward pass', use the uncorrupted features
        # update parameters
        W = W - alpha/batch_size * grad
    # track the error
    if iter_idx % 25 == 0:
        old_error = error[-1]
        diff = mnist - np.dot(logistic(np.dot(mnist, W)), W.T)
        recon_error = np.mean( np.sum(diff**2, 1) )
        error.append(recon_error)
        print "Epoch %d, Reconstruction Error: %.3f" %(iter_idx, recon_error)
    iter_idx += 1
end_time = time.time()

print
print "Training ended after %i iterations, taking a total of %.2f seconds." %(iter_idx, end_time-start_time)
print "Final Reconstruction Error: %.2f" %(error[-1])

reduced_mnist = np.dot(mnist, W)
print "Dataset is now of size: %d x %d"%(reduced_mnist.shape)

img_idx = 5
reconstructed_img = np.dot(logistic(reduced_mnist[img_idx,:]), W.T)
original_img = mnist[img_idx,:]

# subplot for original image
ax1 = plt.subplot(1,2,1)
ax1.imshow(np.reshape(original_img, (28, 28)), cmap='Greys_r')
ax1.set_title("Original Digit")

# subplot for reconstruction
ax2 = plt.subplot(1,2,2)
ax2.imshow(np.reshape(reconstructed_img, (28, 28)), cmap='Greys_r')
ax2.set_title("Reconstruction")

plt.show()

# two components to show
comp1 = 0
comp2 = 150

# subplot
ax1 = plt.subplot(1,2,1)
filter1 = W[:, comp1]
ax1.imshow(np.reshape(filter1, (28, 28)), cmap='Greys_r')

# subplot
ax2 = plt.subplot(1,2,2)
filter2 = W[:, comp2]
ax2.imshow(np.reshape(filter2, (28, 28)), cmap='Greys_r')

plt.show()
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
When training larger autoencoders, you'll see filters that look like these... Regular Autoencoder: Denoising Autoencoder: <span style="color:red">STUDENT ACTIVITY (until end of session)</span> Your task is to reproduce the faces experiment from the previous session but using an Autoencoder instead of PCA
from sklearn.datasets import fetch_olivetti_faces

faces_dataset = fetch_olivetti_faces(shuffle=True)
faces = faces_dataset.data  # 400 flattened 64x64 images
person_ids = faces_dataset.target  # denotes the identity of person (40 total)

print "Dataset size: %d x %d" %(faces.shape)
print "And the images look like this..."
plt.imshow(np.reshape(faces[200,:], (64, 64)), cmap='Greys_r')
plt.show()
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
This dataset contains 400 64x64 pixel images of 40 people each exhibiting 10 facial expressions. The images are in gray-scale, not color, and therefore flattened vectors contain 4096 dimensions. <span style="color:red">Subtask 1: Run (Regular) Autoencoder</span>
### Your code goes here ###

# train Autoencoder model on 'faces'

###########################

print "Training took a total of %.2f seconds." %(end_time-start_time)
print "Final reconstruction error: %.2f%%" %(recon_error)
print "Dataset is now of size: %d x %d"%(faces_reduced.shape)
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
<span style="color:red">Subtask 2: Reconstruct an image</span>
### Your code goes here ###

# Use learned transformation matrix to project back to the original 4096-dimensional space
# Remember you need to use np.reshape()

###########################
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
<span style="color:red">Subtask 3: Train a Denoising Autoencoder</span>
### Your code goes here ###

###########################
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
<span style="color:red">Subtask 4: Generate a 2D scatter plot from both models</span>
### Your code goes here ###

# Run AE for 2 components
# Generate plot
# Bonus: color the scatter plot according to the person_ids to see if any structure can be seen

###########################
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
<span style="color:red">Subtask 5: Train a denoising version of PCA and test its performance</span>
### Your code goes here ###

# Run PCA but add noise to the input first

###########################
Session 5 - Features_II_NonlinearDimensionalityReduction.ipynb
dinrker/PredictiveModeling
mit
Search for single-stranded RNA motifs

We will now search for single-stranded motifs within a structure/trajectory. This is performed by using the ss_motif function.

```python
results = bb.ss_motif(query,target,threshold=0.6,out=None,bulges=0)
```

query is a PDB file with the structure you want to search for within the file target. If the keyword topology is specified, the query structure is searched in the target trajectory file. threshold is the eRMSD threshold to consider a substructure in target to be significantly similar to query. Typical relevant hits have eRMSD in the 0.6-0.9 range. If you specify the optional string keyword out, PDB structures below the threshold are written with the specified prefix. It is possible to specify the maximum number of allowed inserted or bulged bases with the option bulges. The search is performed without considering the sequence. It is possible to specify a sequence with the sequence option. Abbreviations (i.e. N/R/Y) are accepted.

The function returns a list of hits. Each element in this list is in turn a list containing the following information:
- element 0 is the frame index. This is relevant if the search is performed over a trajectory/multi-model PDB.
- element 1 is the eRMSD distance from the query
- element 2 is the list of residues.

In the following example we search for structures similar to GNRA.pdb in a crystal structure of the H.Marismortui large ribosomal subunit (PDB 1S72).
import barnaba as bb

# find all GNRA tetraloops in H.Marismortui large ribosomal subunit (PDB 1S72)
query = "../test/data/GNRA.pdb"
target = "../test/data/1S72.pdb"

# call function.
results = bb.ss_motif(query,target,threshold=0.6,out='gnra_loops',bulges=1)
examples/example_06_single_strand_motif.ipynb
srnas/barnaba
gpl-3.0
Now we print the fragment residues and their eRMSD distance from the query structure.
for j in range(len(results)):
    #seq = "".join([r.split("_")[0] for r in results[j][2]])
    print("%2d eRMSD:%5.3f " % (j,results[j][1]))
    print(" Sequence: %s" % ",".join(results[j][2]))
    print()
examples/example_06_single_strand_motif.ipynb
srnas/barnaba
gpl-3.0
We can also calculate RMSD distances for the different hits
import glob

pdbs = glob.glob("gnra_loops*.pdb")
dists = [bb.rmsd(query,f)[0] for f in pdbs]

for j in range(len(results)):
    seq = "".join([r.split("_")[0] for r in results[j][2]])
    print("%2d eRMSD:%5.3f RMSD: %6.4f" % (j,results[j][1],10.*dists[j]), end="")
    print(" Sequence: %s" % seq)
    #print "%50s %6.4f AA" % (f,10.*dist[0])
examples/example_06_single_strand_motif.ipynb
srnas/barnaba
gpl-3.0
Note that the first hit has a low eRMSD, but no GNRA sequence. Let's have a look at this structure:
import py3Dmol

query_s = open(query,'r').read()
hit_0 = open(pdbs[0],'r').read()

p = py3Dmol.view(width=900,height=600,viewergrid=(1,2))
p.addModel(query_s,'pdb',viewer=(0,0))
p.addModel(hit_0,'pdb',viewer=(0,1))
p.setStyle({'stick':{}})
p.setBackgroundColor('0xeeeeee')
p.zoomTo()
p.show()
examples/example_06_single_strand_motif.ipynb
srnas/barnaba
gpl-3.0
We can also check hit 14, which has a low eRMSD but a (relatively) high RMSD:
hit_14 = open(pdbs[14],'r').read()

p = py3Dmol.view(width=900,height=600,viewergrid=(1,2))
p.addModel(query_s,'pdb',viewer=(0,0))
p.addModel(hit_14,'pdb',viewer=(0,1))
p.setStyle({'stick':{}})
p.setBackgroundColor('0xeeeeee')
p.zoomTo()
p.show()
examples/example_06_single_strand_motif.ipynb
srnas/barnaba
gpl-3.0
Collation import <a name='import1-2'></a>
#path to the file with json results of the collation
path = 'json-collations/calpurnius-collation-joint-BCMNPH.json'
#path = 'json-collations/calpurnius-collation-joint-BCMNPH-corr.json'

#open the file
with open (path, encoding='utf-8') as jsonfile:
    #transform the json structure (arrays, objects) into python structure (lists, dictionaries)
    data = json.load(jsonfile)

#list of witnesses
witnesses = data["witnesses"]
print(witnesses)

#table of the aligned text versions
collation = data["table"]

#base text: choose a witness which variants are considered true readings (in green)
#for Calpurnius, the most recent edition of Hakanson is used as the base text
#if you do not want a base text, set it as an empty string ''
base_text = 'LH'

#the index of a witness is its position in the witness list:
#for instance B1 has position 0, and P1594 has position 9.
pycoviz.ipynb
enury/collation-viz
mit
FUNCTIONS <a name="part2"></a> ↑ Transform a cell c (list of tokens) into a string of text
#original text
def cell_to_string(c):
    #tokens t are joined together, separated by a space
    string = ' '.join(token['t'] for token in c)
    return string

#text with normalized tokens
def cell_to_string_norm(c):
    string = ''
    #word division is not taken into account when comparing the normalized text
    #for this reason we do not add a space in between tokens
    for token in c:
        if 'n' in token:
            string += token['n']
        elif 't' in token:
            string += token['t']
    return string
pycoviz.ipynb
enury/collation-viz
mit
Compare cells
#compare two cells, original text
def compare_cell(c1,c2):
    return cell_to_string(c1) == cell_to_string(c2)

#compare two cells, normalized text
def compare_cell_norm(c1,c2):
    return cell_to_string_norm(c1) == cell_to_string_norm(c2)

#compare a list of cells, original text
#return true if all the cells are equivalent (they contain the same string of tokens)
def compare_multiple_cell(cell_list):
    #compare each cell to the next
    for c1,c2 in zip(cell_list, cell_list[1:]):
        if compare_cell(c1,c2) is False:
            comparison = False
            break
        else:
            comparison = True
    return comparison

#compare a list of cells, normalized text
#return true if all the cells are equivalent (they contain the same string of tokens)
def compare_multiple_cell_norm(cell_list):
    #compare each cell to the next
    for c1,c2 in zip(cell_list, cell_list[1:]):
        if compare_cell_norm(c1,c2) is False:
            comparison = False
            break
        else:
            comparison = True
    return comparison
pycoviz.ipynb
enury/collation-viz
mit
Find agreements 1
#this function returns rows of the collation table (table) where a list of x witnesses (witlist) agree together.
#we display only variant locations, and not places where all witnesses agree.
def find_agreements(table, witlist):
    result_table = []
    #transform widget tuple into actual list
    witlist = list(witlist)
    #transform the witnesses names (sigla) into indexes
    witindex = [witnesses.index(wit) for wit in witlist]
    nonwitindex = [witnesses.index(wit) for wit in witnesses if wit not in witlist]
    for row in table:
        #get list of cell for the x witnesses
        cell_list = [row[i] for i in witindex]
        #there must be agreement of the x witnesses (normalized tokens)
        if compare_multiple_cell_norm(cell_list) is True:
            for i in nonwitindex:
                #if they disagree with at least one of the others
                if compare_cell_norm(row[witindex[0]],row[i]) is False:
                    #add row to the result
                    result_table.append(row)
                    #and go to next row
                    break
    return result_table
pycoviz.ipynb
enury/collation-viz
mit
Find agreements 2 <a name='func2-4'></a> ↑
#This function is similar to the previous one:
#it returns rows of the collation table (table) where a list of x witnesses (witlist) agree together, but
#do not agree with the witnesses in a second list (nonwitlist).
#By default, the function will return the agreement of the x witnesses, against all the other witnesses.
def compare_witnesses(table, witlist, nonwitlist=[]):
    result_table = []
    #first list of x witnesses which agree together
    witindex = [witnesses.index(wit) for wit in witlist]
    #against all the other witnesses
    if not nonwitlist:
        nonwitindex = [witnesses.index(wit) for wit in witnesses if wit not in witlist]
    #except if a second list of y witnesses is specified
    else:
        nonwitindex = [witnesses.index(wit) for wit in nonwitlist]
    #go through the collation table, row by row
    #to find places where the x witnesses agree together against others
    for row in table:
        #get list of cell for the x witnesses
        cell_list = [row[i] for i in witindex]
        #there must be agreement of the x witnesses (normalised tokens)
        if compare_multiple_cell_norm(cell_list) is True:
            for i in nonwitindex:
                #if they agree with one of the other y witnesses
                if compare_cell_norm(row[witindex[0]],row[i]) is True:
                    #go to next row
                    break
            #but if they do not agree with any of the y witnesses
            else:
                #add row to the result
                result_table.append(row)
    return result_table
pycoviz.ipynb
enury/collation-viz
mit
Find all variants in the collation table
def view_variants(table):
    result_table = []
    #go through the collation table, row by row
    for row in table:
        #if there is a variant in the row (i.e. at least one cell is different from another cell, normalized form)
        if compare_multiple_cell_norm(row) is False:
            #add row to the result
            result_table.append(row)
    return result_table
pycoviz.ipynb
enury/collation-viz
mit
Transform the result table into an html table <a name='func2-6'></a> ↑
#this function returns a minimal HTML table, to display in the notebook.
def table_to_html(collation,table):
    #table in an HTML format
    html_table = ''
    #div is for a better slides view. For notebook use, comment it out
    #html_table += '<div style="overflow: scroll; width:960; height:417px; word-break: break-all;">'
    html_table += '<table border="1" style="width: 100%; border: 1px solid #000000; border-collapse: collapse;" cellpadding="4">'
    #add a header to the table with columns, one for each witness and one for the row ID
    html_table += '<tr>'
    #a column for each witness
    for wit in witnesses:
        html_table += '<th>'+wit+'</th>'
    #optional: column for the declamation number
    #html_table += '<th>Decl</th>'
    #column for the row id
    html_table += '<th>ID</th>'
    html_table += '</tr>'
    for row in table:
        #add a row to the html table
        html_table += '<tr>'
        #optional : a variable to store the declamation number (will not be defined in empty rows)
        #declamation = 0
        #fill row with cell for each witness
        for cell in row:
            #transform the tokens t into a string.
            #we display the original tokens, not the normalized form
            token = cell_to_string(cell)
            #some cells are empty. Thus the declamation number is only available in cells with at least 1 token
            #if len(cell)>0:
            #    declamation = str(cell[0]['decl'])
            #if no base text is selected, background colour will be white
            if not base_text:
                bg = "white"
            #if the tokens are the same as the base text tokens (normalized form)
            #it is displayed as a "true reading" in a green cell
            elif compare_cell_norm(cell,row[witnesses.index(base_text)]):
                bg = "d9ead3"
            #otherwise it is displayed as an "error" in a red cell
            else:
                bg = "ffb1b1"
            html_table += '<td bgcolor="'+bg+'">'+token+'</td>'
        #optional: add declamation number
        #html_table += '<td>'+str(location)+'</td>'
        #add row ID
        html_table += '<td>'+str(collation.index(row))+'</td>'
        #close the row
        html_table += '</tr>'
    #close the table
    html_table += '</table>'
    #html_table += '</div>'
    return html_table

#this function returns a fancier HTML, but can't be displayed in the notebook (yet)
def table_to_html_fancy(collation,table):
    #table in an HTML format
    html_table = '<table>'
    #add a header to the table with columns
    html_table += '<thead><tr>'
    #a column for each witness
    for wit in witnesses:
        html_table += '<th>'+'<p>'+wit+'</p>'+'</th>'
    #a column for the row id
    html_table += '<th><p>ID</p></th>'
    #close header
    html_table += '</tr></thead><tbody>'
    for row in table:
        #add a row to the html table
        html_table += '<tr>'
        for cell in row:
            #transform the tokens t into a string (original token)
            token = cell_to_string(cell)
            #if there is no base text
            if not base_text:
                #arbitrary class for the HTML cells. It will have no effect on the result.
                cl = "foo"
            #if the normalized token is the same as the base text
            #it is displayed as a "true reading" in a cell with green left border
            elif compare_cell_norm(cell,row[witnesses.index(base_text)]):
                cl = "green"
            #otherwise as an "error" in a cell with an orange left border
            else:
                cl = "orange"
            #add token to the table, in a text paragraph
            html_table += '<td class="'+cl+'">'+'<p>'+token
            #if there is a note to display, add a little 'i' to indicate there is more hidden information
            for t in cell:
                #in the cell, if we find a token with a note
                if 'note' in t:
                    #add info indicator
                    html_table += ' <a href="#" class="expander right"><i class="fa fa-info-circle"></i></a>'
                    #then stop (even if there are several notes, we display only one indicator)
                    break
            #close the text paragraph in the cell
            html_table += '</p>'
            #add paragraphs for hidden content (notes. Not limited to notes only: normalized form could be added, etc.)
            for t in cell:
                if 'note' in t:
                    html_table += '<p class="expandable hidden more-info">Note: '+t['note']+'</p>'
            #when the cell is not empty, add hidden info of page/line numbers. Adapted to make 'locus' optional
            if len(cell)>0 and 'locus' in cell[0]:
            #if len(cell)>0 :
                #add link to images when possible
                if 'link' in cell[0]:
                    url = cell[0]['link']
                    html_table += '<p class="expandable-row hidden more-info"><a target="blank" href='+url+'>'+cell[0]['locus']+'</a></p>'
                else:
                    html_table += '<p class="expandable-row hidden more-info">'+cell[0]['locus']+'</p>'
            #close cell
            html_table += '</td>'
        #add row ID with indicator of hidden content
        html_table += '<td>'+'<p>'+str(collation.index(row))+' <a href="#" class="expander-row right"><i class="fa fa-ellipsis-v"></i></a></p>'+'</td>'
        #close the row
        html_table += '</tr>'
    #close the table
    html_table += '</tbody></table>'
    return html_table
pycoviz.ipynb
enury/collation-viz
mit
Print collation in a text orientation To read a short passage quickly. The collation table is reversed so that each witness is displayed on a row instead of a column.
def print_witnesses_text(table):
    reverse_table = [[row[i] for row in table] for i in range(len(witnesses))]
    for index,row in enumerate(reverse_table):
        text = ''
        for cell in row:
            #the row starts and ends with a token, not a space
            if row.index(cell) == 0 or text == '' or not cell:
                text += cell_to_string(cell)
            #if it is not the start of the string or an empty cell, add a space to separate tokens
            else:
                text += ' '+cell_to_string(cell)
        text += ', '+str(witnesses[index])
        print(text)
    #return reverse_table
pycoviz.ipynb
enury/collation-viz
mit
Print information about a reading
def print_info(rowID, wit):
    #select cell
    cell = collation[rowID][witnesses.index(wit)]
    #if cell is empty, there is no token
    if len(cell) == 0:
        print('-')
    else:
        for token in cell:
            #position of token in cell + content
            print(cell.index(token), ':', ', '.join(token[feature] for feature in token))
pycoviz.ipynb
enury/collation-viz
mit
Get the ID of a row
def get_pos(row):
    return collation.index(row)
pycoviz.ipynb
enury/collation-viz
mit
Move a token to the previous (up) or next (down) row <a name='func2-10'></a> ↑
def move_token_up(rowID, wit):
    try:
        #the token cannot be in the first row
        assert rowID > 0
        #select the first token
        token = collation[rowID][witnesses.index(wit)].pop(0)
        #append it at the end of the cell in the previous row
        collation[rowID-1][witnesses.index(wit)].append(token)
        print("Token '"+token['t']+"' moved up!")
    except:
        print("There is no token to move.")

def move_token_down(rowID, wit):
    try:
        #the token cannot be in the last row
        assert rowID < len(collation)-1
        #select the last token
        token = collation[rowID][witnesses.index(wit)].pop()
        #add it at the beginning of the cell in the next row
        collation[rowID+1][witnesses.index(wit)].insert(0, token)
        print("Token '"+token['t']+"' moved down!")
    except:
        print("There is no token to move.")
pycoviz.ipynb
enury/collation-viz
mit
Add / delete a row
def add_row_after(rowID):
    #rowID must be within collation table
    if rowID < 0 or rowID > len(collation)-1:
        print('Row '+str(rowID)+' does not exist.')
    else:
        #create an empty row
        new_row = []
        #for each witness in the collation
        for wit in witnesses:
            #add an empty list of tokens to the row
            new_row.append([])
        #insert new row in the collation, after the row passed in argument (+1)
        collation.insert(rowID+1, new_row)
        print('Row added!')

def delete_row(rowID):
    #rowID must be within collation table
    if rowID < 0 or rowID > len(collation)-1:
        print('Row '+str(rowID)+' does not exist.')
    else:
        collation.pop(rowID)
        print('Row deleted!')
pycoviz.ipynb
enury/collation-viz
mit
Add / Delete a note <a name='func2-11'></a> ↑
#add or modify a note
def add_note(wit, rowID, token, note):
    try:
        #select token
        t = collation[rowID][witnesses.index(wit)][token]
        if note == '':
            print('Your note is empty.')
        elif 'note' in t:
            #add comment to an already existing note
            t['note'] += ' '+note
        else:
            #or create a new note
            t['note'] = note
    except:
        print('This token is not valid.')

#delete completely a token's note
def del_note(wit, rowID, token):
    try:
        #select token
        t = collation[rowID][witnesses.index(wit)][token]
        if 'note' in t:
            #delete note
            t.pop('note')
        else:
            #or print error message
            print('There is no note to delete')
    except:
        print('This token is not valid.')
pycoviz.ipynb
enury/collation-viz
mit
Search <a name='func2-12'></a> ↑
def search(table,text):
    #result table to build
    result_table = []
    #go through each row of the collation table
    for row in table:
        #go through each cell
        for cell in row:
            #if the search text matches the cell text (original or normalized form)
            if text in cell_to_string_norm(cell) or text in cell_to_string(cell):
                #add row to the result table
                result_table.append(row)
                #go to next row
                break
    #if the result table is empty, the text was not found in the collation
    if result_table == []:
        print(text+" was not found!")
    return result_table
pycoviz.ipynb
enury/collation-viz
mit
Save results <a name='func2-13'></a> ↑
#save the json file with updates in the collation table
def save_json(path, table):
    #combine new collation table with witnesses, so as to have one data variable
    data = {'witnesses': witnesses, 'table': table}
    #open a file according to path
    with open(path, 'w') as outfile:
        #write the data in json format
        json.dump(data, outfile)

#save a subset of the collation table into a fancy HTML version, with a small text description
def save_table(descr, table, path):
    #path to template
    template_path = 'alignment-tables/template.html'
    #load the text of the template into a variable html
    with open(template_path, 'r', encoding='utf-8') as infile:
        html = infile.read()
    #add base text to description
    if base_text:
        descr += '<br>Agreement with the base text '+base_text+' is marked in green.'
        descr += ' Variation from '+base_text+' is marked in red.'
    #modify template: replace the comment with descr paragraph
    html = re.sub(r'<!--descr-->', descr, html)
    #modify template: replace the comment with table
    html = re.sub(r'<!--table-->', table, html)
    #save
    with open(path, 'w', encoding='utf-8') as outfile:
        outfile.write(html)
INTERACTIVE EXPLORATION OF THE COLLATION TABLE <a name="part3"></a> ↑
Update the collation results <a name='part3-1'></a> ↑
In this section, we will see how to view a selection of the collation table and update it. Possible updates:
1. Add/delete a row
2. Move one token up/down
3. Add notes
4. Save
Note: it is only possible to move the last token (or word) in a cell down to the first place in the next cell, or vice versa to move the first token of a cell up to the last place in the previous cell. This prevents any change in the word order of a witness and keeps the correct text sequence.
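The token-move constraint described in the note above can be sketched on plain Python lists (a hypothetical two-row, one-witness table; the real `move_token_down` also updates widget state):

```python
# Minimal sketch of the "move token down" constraint: only the LAST token
# of a cell may move, and it becomes the FIRST token of the cell below,
# so the witness's word order is preserved.
def move_last_token_down(table, row, col):
    cell, below = table[row][col], table[row + 1][col]
    if not cell:
        return False  # nothing to move
    below.insert(0, cell.pop())  # last token of this cell -> first of next
    return True

table = [[["in", "principio"]], [["erat"]]]  # two rows, one witness
move_last_token_down(table, 0, 0)
print(table)  # [[['in']], [['principio', 'erat']]]
```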
#select an extract of the collation table with interactive widgets

#widget for HTML display
w1_html = widgets.HTML(value="")

#define the beginning of extract
w_from = widgets.BoundedIntText(
    value=6,
    min=0,
    max=len(collation)-1,
    description='From:',
    continuous_update=True,
)
#define the end of extract
#because of python list slicing, the last number is not included in the result.
#to make it more intuitive, the "to" number is incremented in collation_extract
w_to = widgets.BoundedIntText(
    value=11,
    min=0,
    max=len(collation)-1,
    description='To:',
    continuous_update=True,
)

#binding widgets with table_to_html function
def collation_extract(a, b):
    x = a
    y = b+1
    if y <= x:
        print("The table you have requested does not exist.")
    else:
        w1_html.value = table_to_html(collation, collation[x:y])

#uncomment the next lines to see the widgets
##interactive selection of a collation table extract
#interact(collation_extract, a=w_from, b=w_to)
##display HTML widget (rows 6-11)
#display(w1_html)


#Widgets for:
#move tokens up/down
#add/delete rows
#add/delete notes to a specific token

#widget to select a witness
w_wit = widgets.Dropdown(
    options=witnesses,
    description='Witness:',
)
#widget to select a row
w_rowID = widgets.BoundedIntText(
    min=0,
    max=len(collation)-1,
    description='ID:',
)
#widget to select a specific token (an integer position within the cell)
w_token = widgets.BoundedIntText(
    min=0,
    description='Token position:',
)
#widget to enter text note
w_note = widgets.Text(
    description='Note:',
)

out = widgets.Output()

#link buttons and functions
@out.capture(clear_output=True)
def modif_on_click(b):
    if b.description == 'add row after':
        add_row_after(rowID=w_rowID.value)
    if b.description == 'delete row':
        delete_row(rowID=w_rowID.value)
    if b.description == 'move token down':
        move_token_down(rowID=w_rowID.value, wit=w_wit.value)
    if b.description == 'move token up':
        move_token_up(rowID=w_rowID.value, wit=w_wit.value)

#add row after
b1 = widgets.Button(description="add row after", style=ButtonStyle(button_color='#fae58b'))
b1.on_click(modif_on_click)
#delete row
b2 = widgets.Button(description="delete row", style=b1.style)
b2.on_click(modif_on_click)
#move token down
b3 = widgets.Button(description="move token down", style=b1.style)
b3.on_click(modif_on_click)
#move token up
b4 = widgets.Button(description="move token up", style=b1.style)
b4.on_click(modif_on_click)


#add/delete notes

#link add button and function
@out.capture(clear_output=True)
def add_on_click(b):
    add_note(wit=w_wit.value, rowID=w_rowID.value, token=w_token.value, note=w_note.value)
    #check result
    print('Result:')
    print_info(w_rowID.value, w_wit.value)
    print('\n')

#add a note button
w_add_note = widgets.Button(description='Add note', button_style='success')
w_add_note.on_click(add_on_click)

#link del button and function
@out.capture(clear_output=True)
def del_on_click(b):
    del_note(wit=w_wit.value, rowID=w_rowID.value, token=w_token.value)
    #check result
    print('Result:')
    print_info(w_rowID.value, w_wit.value)

#delete a note button
w_del_note = widgets.Button(description='Delete note', button_style='danger')
w_del_note.on_click(del_on_click)

#display widgets
#uncomment the next lines to see the widgets
#display(w_wit, w_rowID, w_token, w_note)
#display(w_add_note, w_del_note)


#save new json

#path to the new file
path_new_json = 'json-collations/calpurnius-collation-joint-BCMNPH-corr.json'
#alternative path: take the original collation file name, and add a date/time identifier
#file_name = os.path.split(path)[1]
#file_id = datetime.now().strftime('%Y-%m-%d-%H%M%S')
#path_new_json = 'json-collations/'+file_id+'-'+file_name

#save button to click
w_button = widgets.Button(description="Save JSON", button_style='info')

#on click
def on_button_clicked(b):
    #save json of the whole collation
    save_json(path_new_json, collation)

#link button and onclick function
w_button.on_click(on_button_clicked)

#save json
#uncomment the next line to see the widget
#display(w_button)
Find agreements of witnesses against others <a name='part3-2'></a> ↑
In the first drop-down menu, you can choose one or more witnesses which agree together in places where all the other witnesses have a different reading. This means that, if you pick only one witness, you will see its unique variants.
On the other hand, you may want to see the agreements of witnesses against another group of witnesses selected in the second drop-down menu. This allows you, for instance, to ignore modern editors in the comparison. This option also lets you compare groups of witnesses to see whether they share erroneous readings (or innovations), for instance.
Finally, if you choose only one witness in each drop-down menu, you will see the differences between the two witnesses.
#widget for HTML display
w2_html = widgets.HTML(value="")

#selection of a group of witnesses which share the same readings
w1 = widgets.SelectMultiple(
    description="Agreements:",
    options=witnesses
)
#selection of a second group of witnesses
w2 = widgets.SelectMultiple(
    description="Against:",
    options=witnesses
)

def collation_compare(table, a, b):
    #transform widget tuple into actual list
    if isinstance(a, tuple):
        witlist = list(a)
        nonwitlist = list(b)
    else:
        witlist = [a]
        nonwitlist = [b]
    if not a:
        print("No witness selected.")
    else:
        #create the result table
        result = compare_witnesses(table, witlist, nonwitlist)
        #transform table to HTML
        html_table = table_to_html(table, result)
        #add an indication of the number of rows in the result table
        html_table += '<span>Total: '+str(len(result))+' rows in the table.</span>'
        #set HTML display value
        w2_html.value = html_table

#-----------
#save button
w_save = widgets.Button(description="Save Table", button_style='info')
#description of the table
w_descr = widgets.Text(value="Table description")

def on_button_clicked(x):
    #transform widget tuple into actual list
    if isinstance(w1.value, tuple):
        witlist = list(w1.value)
        nonwitlist = list(w2.value)
    else:
        witlist = [w1.value]
        nonwitlist = [w2.value]
    if not w1.value:
        print("No table to save.")
    else:
        #path for new result file
        file_id = datetime.now().strftime('%Y-%m-%d-%H%M%S')
        path_result = 'alignment-tables/collation-'+file_id+'.html'
        #description
        descr = str(w_descr.value)
        #html table
        table = table_to_html_fancy(collation, compare_witnesses(collation, witlist, nonwitlist))
        #save
        save_table(descr, table, path_result)

#link button with saving action
w_save.on_click(on_button_clicked)

#---------------
#find agreements between witnesses or unique readings
#uncomment the next lines to see the widgets
#interact(collation_compare, table=fixed(collation), a=w1, b=w2)
#display(w2_html)
#display(w_descr)
#display(w_save)
Search the collation <a name='part3-3'></a> ↑ Basic search. It will check for token t or normalized form n, and return rows where there is at least one match.
#widget for HTML display
w3_html = widgets.HTML(value="")

#do the search
def search_collation(table, text):
    w3_html.value = table_to_html(table, search(table, text))

#search collation with interactive text input
#uncomment the next lines to see the widgets
#interact(search_collation, table=fixed(collation), text="calpurnius", __manual=True)
#display(w3_html)
Clarify one reading <a name='part3-4'></a> ↑
#Examples: 459/C1, 932/M1, 9/LH
#uncomment the next line to see the widget
#interact(print_info, rowID=w_rowID, wit=w_wit)
Summary of Widgets <a name='summary'></a> ↑
Here you will find all possible interactions gathered into one tab widget, which displays several pages in tabs. Each page is dedicated to a specific interaction:
1. Extract: select a collation extract
2. Modifications: update the collation by adding/deleting rows, moving tokens or adding/deleting notes
3. Agreements: select groups of witnesses to see what readings they have in common, against other witnesses
4. Search: search the collation for a single reading, for which you can see more information in the last page
5. Clarify: see more information about one reading, i.e. all properties of the tokens in one witness, from a specific row
#Using the tab widget, gather all interactions in one place
tab = widgets.Tab()

#page 1 = view extract
w_extract = interactive(collation_extract, a=w_from, b=w_to)
page1 = widgets.VBox(children=[w_extract, w1_html])

#page 2 = modify the collation
w_modif1 = widgets.VBox(children=[w_rowID, b1, out])  #add row
w_modif2 = widgets.VBox(children=[w_rowID, b2, out])  #delete row
w_modif3 = widgets.VBox(children=[w_rowID, w_wit, b3, out])  #move token down
w_modif4 = widgets.VBox(children=[w_rowID, w_wit, b4, out])  #move token up
w_modif5 = widgets.VBox([w_wit, w_rowID, w_token, w_note,
                         widgets.HBox(children=[w_add_note, w_del_note]), out])  #add/del notes
accordion = widgets.Accordion(children=[w_modif1, w_modif2, w_modif3, w_modif4, w_modif5])
accordion.set_title(0, 'Add Row')
accordion.set_title(1, 'Delete Row')
accordion.set_title(2, 'Move Token Down')
accordion.set_title(3, 'Move Token Up')
accordion.set_title(4, 'Notes')
accordion.selected_index = None
page2 = widgets.VBox(children=[accordion, w_button])

#page 3 = find agreements
w_agr = interactive(collation_compare, table=fixed(collation), a=w1, b=w2)
page3 = widgets.VBox(children=[w_agr, w2_html, w_descr, w_save])

#page 4 = search
w_search = interactive(search_collation, {'manual': True, 'manual_name': 'Search'},
                       table=fixed(collation), text="calpurnius")
page4 = widgets.VBox(children=[w_search, w3_html])

#page 5 = clarify
w_clar = interactive(print_info, rowID=w_rowID, wit=w_wit)
page5 = widgets.VBox(children=[w_clar])

tab.children = [page1, page2, page3, page4, page5]
tab.set_title(0, 'Extract')
tab.set_title(1, 'Modifications')
tab.set_title(2, 'Find Agreements')
tab.set_title(3, 'Search')
tab.set_title(4, 'Clarify')
display(tab)
Exercises
1. Simple regression
Given is the example set $\mathcal{D}=\{(x^{(i)},y^{(i)})\}_{i=1}^4 = \{(0,4),(1,1),(2,2),(4,5)\}$. Represent the examples with a matrix $\mathbf{X}$ of dimensions $N\times n$ (here $4\times 1$) and a label vector $\textbf{y}$ of dimensions $N\times 1$ (here $4\times 1$), as follows:
X = np.array([[0], [1], [2], [4]])
y = np.array([4, 1, 2, 5])
X1 = X
y1 = y
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
(a) Study the PolynomialFeatures class from the sklearn library and use it to generate the design matrix $\mathbf{\Phi}$ that does not map into a higher-dimensional space (each example only gets a dummy one prepended; $m=n+1$).
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(1)
phi = poly.fit_transform(X)
print(phi)
(b) Get acquainted with the linalg module. Compute the weights $\mathbf{w}$ of the linear regression model as $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Then verify that the same result is obtained via the pseudoinverse $\mathbf{\Phi}^+$ of the design matrix, i.e. $\mathbf{w}=\mathbf{\Phi}^+\mathbf{y}$, using the pinv function.
from numpy.linalg import inv, pinv

#pseudoinverse directly, and via the normal equations -- both give the same weights
pinverse1 = pinv(phi)
pinverse2 = matmul(inv(matmul(transpose(phi), phi)), transpose(phi))
#print(pinverse1)
#print(pinverse2)
w = matmul(pinverse2, y)
print(w)
For clarity, below the vector $\mathbf{x}$ with the dummy one $x_0=1$ added is denoted $\tilde{\mathbf{x}}$.
(c) Plot the examples from $\mathcal{D}$ and the function $h(\tilde{\mathbf{x}})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$. Compute the training error as $E(h|\mathcal{D})=\frac{1}{2}\sum_{i=1}^N\big(y^{(i)} - h(\tilde{\mathbf{x}}^{(i)})\big)^2$. You may use the mean squared error function mean_squared_error from the sklearn.metrics module.
Q: The error function $E(h|\mathcal{D})$ defined above and the mean squared error function are not quite identical. What is the difference? Which one is more "realistic"?
import sklearn.metrics as mt

wt = w
print(wt)
print(phi)
hx = np.dot(phi, w)
E = mt.mean_squared_error(y, hx)
print(E)
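The Q above asks how $E(h|\mathcal{D})$ differs from sklearn's mean squared error; the two differ only by a constant factor, $E = \frac{N}{2}\,\mathrm{MSE}$, so both are minimized by the same weights. A standalone NumPy check (with made-up predictions):

```python
import numpy as np

# E from the text: half the sum of squared residuals; MSE: their mean.
y_true = np.array([4.0, 1.0, 2.0, 5.0])
y_pred = np.array([3.0, 2.0, 2.0, 5.0])  # hypothetical predictions

sse_half = 0.5 * np.sum((y_true - y_pred) ** 2)
mse = np.mean((y_true - y_pred) ** 2)
N = len(y_true)
print(sse_half, mse, sse_half == (N / 2) * mse)  # 1.0 0.5 True
```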
(d) Verify that for the examples from $\mathcal{D}$ the weights $\mathbf{w}$ cannot be found by solving the system $\mathbf{w}=\mathbf{\Phi}^{-1}\mathbf{y}$, and that we indeed need the pseudoinverse.
Q: Why is that the case? Could the problem be solved by mapping the examples into a higher dimension? If so, would that always work, regardless of the example set $\mathcal{D}$? Show with an example.
from numpy.linalg import LinAlgError

try:
    #phi is 4x2 (not square), so inv() must fail -- we need the pseudoinverse
    w = matmul(inv(phi), y)
    print(w)
except LinAlgError as err:
    print("Exception")
    print(err)
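As a supplement to (d), a standalone sketch (using `numpy.vander` instead of `PolynomialFeatures`) of why plain inversion fails here: the degree-1 design matrix is $4\times 2$ and has no inverse, while the degree-3 mapping gives a square $4\times 4$ Vandermonde matrix that is invertible because the four $x$ values are distinct. So a higher-dimensional mapping can help, but only when $N$ equals the number of features and no $x$ values repeat:

```python
import numpy as np

X = np.array([0.0, 1.0, 2.0, 4.0])
y = np.array([4.0, 1.0, 2.0, 5.0])

phi1 = np.vander(X, 2, increasing=True)  # columns [1, x], shape (4, 2)
phi3 = np.vander(X, 4, increasing=True)  # columns [1, x, x^2, x^3], shape (4, 4)
print(phi1.shape, phi3.shape)            # (4, 2) (4, 4)

# the square matrix is invertible, so the system can be solved exactly
w = np.linalg.solve(phi3, y)
print(np.allclose(phi3 @ w, y))          # True
```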
(e) Study the LinearRegression class from the sklearn.linear_model module. Verify that the weights computed by it (available via the coef_ and intercept_ attributes) equal the ones you computed above. Compute the model predictions (the predict method) and verify that the training error is identical to the one you computed earlier.
from sklearn.linear_model import LinearRegression

lr = LinearRegression().fit(X, y)
#print(lr.score(X, y))
print([lr.intercept_, lr.coef_])
print(wt)
pr = lr.predict(X)
E = mt.mean_squared_error(y, pr)
print(E)
2. Polynomial regression and the effect of noise
(a) Let us now consider regression on a larger number of examples. Define a function make_labels(X, f, noise=0) that takes a matrix of unlabeled examples $\mathbf{X}_{N\times n}$ and generates the vector of their labels $\mathbf{y}_{N\times 1}$. Labels are generated as $y^{(i)} = f(x^{(i)})+\mathcal{N}(0,\sigma^2)$, where $f:\mathbb{R}^n\to\mathbb{R}$ is the true function that generated the data (unknown to us in reality), and $\sigma$ is the standard deviation of the Gaussian noise, defined by the noise parameter. You can use numpy.random.normal to generate the noise.
Generate a training set of $N=50$ examples uniformly distributed on the interval $[-5,5]$ using the function $f(x) = 5 + x -2 x^2 -5 x^3$ with noise $\sigma=200$:
from numpy.random import normal

def make_labels(X, f, noise=0):
    #generate y^(i) = f(x^(i)) + N(0, noise^2)
    return [f(x) + normal(0, noise) for x in X]

def make_instances(x1, x2, N):
    return np.array([np.array([x]) for x in np.linspace(x1, x2, N)])
Plot this set with the scatter function.
N = 50

def f(x):
    return 5 + x - 2*x**2 - 5*x**3

noise = 200
X2 = make_instances(-5, 5, N)
y2 = make_labels(X2, f, noise)
s = scatter(X2, y2)
(b) Train a polynomial regression model of degree $d=3$. On the same plot show the learned model $h(\mathbf{x})=\mathbf{w}^\intercal\tilde{\mathbf{x}}$ and the training examples. Compute the model's training error.
import sklearn.linear_model as lm

def polyX(d):
    p3 = PolynomialFeatures(d).fit_transform(X2)
    l2 = LinearRegression().fit(p3, y2)
    h2 = l2.predict(p3)
    E = mt.mean_squared_error(y2, h2)
    print('d: ' + str(d) + ' E: ' + str(E))
    plot(X2, h2, label=str(d))

scatter(X2, y2)
polyX(3)
3. Model selection
(a) On the dataset from exercise 2, train five linear regression models $\mathcal{H}_d$ of different complexity, where $d$ is the polynomial degree, $d\in\{1,3,5,10,20\}$. Show on the same plot the training set and the functions $h_d(\mathbf{x})$ for all five models (we recommend using plot inside a for loop). Compute the training error of each model.
Q: Which model has the lowest training error, and why?
figure(figsize=(15, 10))
scatter(X2, y2)
for d in [1, 3, 5, 10, 20]:
    polyX(d)
s = plt.legend(loc="center right")
(b) Split the example set from exercise 2 using the model_selection.train_test_split function into a training set and a test set in a 1:1 ratio. Show on a single plot the training error and the test error for polynomial regression models $\mathcal{H}_d$ with polynomial degree $d$ in the range $d\in [1,2,\ldots,20]$. Since the squared error grows quickly for higher polynomial degrees, plot the logarithms of the errors instead of the raw values.
NB: The split into training and test set must be identical for all the models.
Q: Is the result in line with expectations? Which model would you choose, and why?
Q: Run the plotting several times. What is the problem? Would the problem be equally pronounced if we had more examples? Why?
from sklearn.model_selection import train_test_split

xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
testError = []
trainError = []
for d in range(1, 21):
    polyXTrain = PolynomialFeatures(d).fit_transform(xTr)
    polyXTest = PolynomialFeatures(d).fit_transform(xTest)
    l2 = LinearRegression().fit(polyXTrain, yTr)
    testError.append(mt.mean_squared_error(yTest, l2.predict(polyXTest)))
    trainError.append(mt.mean_squared_error(yTr, l2.predict(polyXTrain)))
plot(numpy.log(numpy.array(testError)), label='test')
plot(numpy.log(numpy.array(trainError)), label='train')
legend()
(c) Model accuracy depends on (1) its complexity (polynomial degree $d$), (2) the number of examples $N$, and (3) the amount of noise. To analyze this, draw error plots as in 3b, but for all combinations of the number of examples $N\in\{100,200,1000\}$ and noise levels $\sigma\in\{100,200,500\}$ (9 plots in total). Use the subplots function to arrange the plots neatly in a $3\times 3$ grid. The data is generated in the same way as in exercise 2.
NB: Make sure all plots are generated over comparable datasets, as follows. First generate all 1000 examples and split them into a training set and a test set (two sets of 500 examples each). Then make three different versions of both the training set and the test set, each with a different amount of noise (2x3=6 versions of the data in total). To simulate dataset size, sample one third, two thirds, and all of the data from those 6 datasets. This yields 18 datasets -- a training and a test set for each of the nine plots.
figure(figsize=(15, 15))
N = 1000

def f(x):
    return 5 + x - 2*x**2 - 5*x**3

X3 = make_instances(-5, 5, N)
xAllTrain, xAllTest = train_test_split(X3, test_size=0.5)

j = 0
for N in [100, 200, 1000]:
    for noise in [100, 200, 500]:
        j += 1
        xTrain = xAllTrain[:N]
        xTest = xAllTest[:N]
        yTrain = make_labels(xTrain, f, noise)
        yTest = make_labels(xTest, f, noise)
        trainError = []
        testError = []
        for d in range(1, 21):
            polyXTrain = PolynomialFeatures(d).fit_transform(xTrain)
            polyXTest = PolynomialFeatures(d).fit_transform(xTest)
            l2 = LinearRegression().fit(polyXTrain, yTrain)
            testError.append(mt.mean_squared_error(yTest, l2.predict(polyXTest)))
            trainError.append(mt.mean_squared_error(yTrain, l2.predict(polyXTrain)))
        subplot(3, 3, j, title="N: " + str(N) + " noise: " + str(noise))
        plot(numpy.log(numpy.array(trainError)), label='train')
        plot(numpy.log(numpy.array(testError)), label='test')
        plt.legend(loc="center right")
Q: Are the results as expected? Explain.
4. Regularized regression
(a) In the experiments above we did not use regularization. Let us first return to the example from exercise 1. On the examples from that exercise, compute the weights $\mathbf{w}$ for a polynomial regression model of degree $d=3$ with L2 regularization (so-called ridge regression), according to $\mathbf{w}=(\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. Compute the weights for the regularization factors $\lambda=0$, $\lambda=1$ and $\lambda=10$, and compare the resulting weights.
Q: What are the dimensions of the matrix that needs to be inverted?
Q: How do the obtained weights differ, and is that difference expected? Explain.
phi4 = PolynomialFeatures(3).fit_transform(X1)

def reg2(lambd):
    #closed-form ridge solution: w = (Phi^T Phi + lambda*I)^(-1) Phi^T y
    #(the identity must match the number of columns m of Phi, here m = 4)
    w = matmul(matmul(inv(matmul(transpose(phi4), phi4) + lambd * identity(phi4.shape[1])),
                      transpose(phi4)), y1)
    print(w)

reg2(0)
reg2(1)
reg2(10)
(b) Study the Ridge class from the sklearn.linear_model module, which implements the L2-regularized regression model. The $\alpha$ parameter corresponds to the $\lambda$ parameter. Apply the model to the same examples as in the previous exercise and print the weights $\mathbf{w}$ (the coef_ and intercept_ attributes).
Q: Are the weights identical to those from exercise 4a? If not, explain why and how you would fix it.
from sklearn.linear_model import Ridge

for l in [0, 1, 10]:
    r = Ridge(alpha=l, fit_intercept=False).fit(phi4, y1)
    print(r.coef_)
    print(r.intercept_)
5. Regularized polynomial regression
(a) Let us return to the case of $N=50$ randomly generated examples from exercise 2. Train polynomial regression models $\mathcal{H}_{\lambda,d}$ for $\lambda\in\{0,100\}$ and $d\in\{2,10\}$ (four models in total). Sketch the corresponding functions $h(\mathbf{x})$ and the examples (on a single plot; we recommend using plot inside a for loop).
Q: Are the results as expected? Explain.
N = 50
figure(figsize=(15, 15))
x123 = scatter(X2, y2)
for lambd in [0, 100]:
    for d in [2, 10]:
        phi2 = PolynomialFeatures(d).fit_transform(X2)
        r = Ridge(lambd).fit(phi2, y2)
        h2 = r.predict(phi2)
        plot(X2, h2, label="lambda " + str(lambd) + " d " + str(d))
x321 = plt.legend(loc="center right")
(b) As in exercise 3b, split the examples into a training set and a test set in a 1:1 ratio. Plot the curves of the logarithms of the training error and the test error as a function of $\lambda$ for the model $\mathcal{H}_{d=10,\lambda}$, varying the regularization factor in the range $\lambda\in\{0,1,\dots,50\}$.
Q: Which side of the plot corresponds to overfitting, and which to underfitting? Why?
Q: Which value of $\lambda$ would you choose based on these plots, and why?
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
figure(figsize=(10, 10))
trainError = []
testError = []
for lambd in range(0, 51):
    polyXTrain = PolynomialFeatures(10).fit_transform(xTr)
    polyXTest = PolynomialFeatures(10).fit_transform(xTest)
    l2 = Ridge(lambd).fit(polyXTrain, yTr)
    #store the raw errors; the logarithm is applied once, when plotting
    testError.append(mt.mean_squared_error(yTest, l2.predict(polyXTest)))
    trainError.append(mt.mean_squared_error(yTr, l2.predict(polyXTrain)))
plot(numpy.log(numpy.array(testError)), label="test")
plot(numpy.log(numpy.array(trainError)), label="train")
grid()
legend()
6. L1 regularization and L2 regularization
The purpose of regularization is to push the model weights $\mathbf{w}$ towards zero, to make the model as simple as possible. Model complexity can be characterized by the norm of the corresponding weight vector $\mathbf{w}$, typically its L2 norm or L1 norm. For a trained model we can also compute the number of non-zero features, i.e. the L0 norm, with the following function that takes the weight vector $\mathbf{w}$:
def nonzeroes(coef, tol=1e-6): return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])
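A quick standalone check of what `nonzeroes` computes (the function is restated here so the snippet runs on its own, with a made-up weight vector):

```python
import numpy as np

# nonzeroes counts weights whose magnitude exceeds the tolerance -- an L0 "norm".
def nonzeroes(coef, tol=1e-6):
    return len(coef) - len(coef[np.isclose(0, coef, atol=tol)])

w = np.array([0.0, 1e-9, 0.5, -2.0, 1e-7])
print(nonzeroes(w))  # 2: only 0.5 and -2.0 are treated as non-zero
```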
(a) For this exercise use the training and test sets from exercise 3b. Train L2-regularized polynomial regression models of degree $d=10$, varying the hyperparameter $\lambda$ in the range $\{1,2,\dots,100\}$. For each trained model compute the L{0,1,2} norms of the weight vector $\mathbf{w}$ and plot them as a function of $\lambda$. Pay attention to what exactly you pass to the norm-computing function.
Q: Explain the shape of both curves. Will the curve for $\|\mathbf{w}\|_2$ reach zero? Why? Is that a problem? Why?
Q: For $\lambda=100$, what percentage of the model weights equals zero, i.e. how sparse is the model?
d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(1, 101):
    polyXTrain = PolynomialFeatures(d).fit_transform(xTr)
    r = Ridge(lambd).fit(polyXTrain, yTr)
    r.coef_[0] = r.intercept_  #put the intercept back into the weight vector
    l0.append(nonzeroes(r.coef_))
    l1.append(numpy.linalg.norm(r.coef_, ord=1))
    l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10, 10))
plot(l0, label="l0")
legend()
grid()
figure(figsize=(10, 10))
plot(l1, label="l1")
legend()
grid()
figure(figsize=(10, 10))
plot(l2, label="l2")
legend()
grid()
(b) The main advantage of L1-regularized regression (LASSO regression) over L2-regularized regression is that it yields sparse models, i.e. models in which many weights are pulled to zero. Show that this is indeed the case by repeating the experiment above with L1-regularized regression, implemented in the Lasso class of the sklearn.linear_model module. Ignore the warnings.
from sklearn.linear_model import Lasso

d = 10
l0 = []
l1 = []
l2 = []
xTr, xTest, yTr, yTest = train_test_split(X2, y2, test_size=0.5)
for lambd in range(1, 101):
    polyXTrain = PolynomialFeatures(d).fit_transform(xTr)
    r = Lasso(lambd).fit(polyXTrain, yTr)
    r.coef_[0] = r.intercept_  #put the intercept back into the weight vector
    l0.append(nonzeroes(r.coef_))
    l1.append(numpy.linalg.norm(r.coef_, ord=1))
    l2.append(numpy.linalg.norm(r.coef_, ord=2))
figure(figsize=(10, 10))
plot(l0, label="l0")
legend()
figure(figsize=(10, 10))
plot(l1, label="l1")
legend()
figure(figsize=(10, 10))
plot(l2, label="l2")
legend()
7. Features of different scales
In practice we often encounter data in which not all features have the same magnitude. One example is the regression dataset grades, where the goal is to predict a student's university GPA (1--5) from two features: the entrance exam score (1--3000) and the high school GPA. The university GPA is computed as a weighted sum of these two features with added noise. Use the following code to generate this dataset.
n_data_points = 500
np.random.seed(69)

# Generate entrance exam scores from a normal distribution, clipped to [1, 3000].
exam_score = np.random.normal(loc=1500.0, scale=500.0, size=n_data_points)
exam_score = np.round(exam_score)
exam_score[exam_score > 3000] = 3000
exam_score[exam_score < 0] = 0

# Generate high school grades from a normal distribution, clipped to [1, 5].
grade_in_highschool = np.random.normal(loc=3, scale=2.0, size=n_data_points)
grade_in_highschool[grade_in_highschool > 5] = 5
grade_in_highschool[grade_in_highschool < 1] = 1

# Design matrix.
grades_X = np.array([exam_score, grade_in_highschool]).T

# Finally, generate the target values.
rand_noise = np.random.normal(loc=0.0, scale=0.5, size=n_data_points)
exam_influence = 0.9
grades_y = ((exam_score / 3000.0) * exam_influence
            + (grade_in_highschool / 5.0) * (1.0 - exam_influence)) * 5.0 + rand_noise
grades_y[grades_y < 1] = 1
grades_y[grades_y > 5] = 5
a) Plot the target value (y-axis) against the first and against the second feature (x-axis). Draw two separate plots.
figure(figsize=(10, 10))
scatter(exam_score, grades_y, label="exam score")
legend()
figure(figsize=(10, 10))
scatter(grade_in_highschool, grades_y, label="highschool grade")
legend()
b) Fit an L2-regularized regression model ($\lambda = 0.01$) on the data grades_X and grades_y:
r7b = Ridge(0.01).fit(grades_X, grades_y)
h2 = r7b.predict(grades_X)
E = mt.mean_squared_error(grades_y, h2)
print(E)
Now repeat the experiment above, but first scale the data grades_X and grades_y and store them in the variables grades_X_fixed and grades_y_fixed. For this purpose, use StandardScaler.
from sklearn.preprocessing import StandardScaler

#standardized data (these play the role of grades_X_fixed / grades_y_fixed)
ssX = StandardScaler().fit_transform(grades_X)
ssY = StandardScaler().fit_transform(grades_y.reshape(-1, 1))
r = Ridge(0.01).fit(ssX, ssY)
h2 = r.predict(ssX)
E = mt.mean_squared_error(ssY, h2)
print(E)
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
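To see what StandardScaler does under the hood, here is a minimal sketch (using a toy matrix rather than the grades data) of standardization by hand: each column is shifted to zero mean and rescaled to unit variance, which matches StandardScaler's default behaviour.

```python
import numpy as np

# Toy matrix standing in for grades_X; the real data would work the same way.
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# z = (x - mean) / std, computed per column.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
print(Z.mean(axis=0))  # columns now have (numerically) zero mean
print(Z.std(axis=0))   # and unit standard deviation
```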
Q: Looking at the plots from subtask (a), which feature should have the larger magnitude, i.e. importance, when predicting the university GPA? Do the weights match your intuition? Explain. 8. Multicollinearity and matrix condition a) Create the dataset grades_X_fixed_colinear by duplicating the last column (the high-school grade) of the grades_X_fixed set from task 7b. This effectively introduces perfect multicollinearity.
# Your code here
# Duplicate the last column (the high-school grade) to introduce perfect multicollinearity.
grades_X_fixed_colinear = [[x[0], x[1], x[1]] for x in ssX]
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
Again, fit an L2-regularized regression model ($\lambda = 0.01$) on this set.
# Your code here
r8a = Ridge(0.01).fit(grades_X_fixed_colinear, ssY)
h2 = r8a.predict(grades_X_fixed_colinear)
E = mt.mean_squared_error(h2, ssY)
print(E)
print(r7b.coef_)
print(r8a.coef_)
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
Q: Compare the magnitudes of the weights with those obtained in task 7b. What happened? b) Randomly sample 50% of the elements of grades_X_fixed_colinear and fit two L2-regularized regression models, one with $\lambda=0.01$ and one with $\lambda=1000$. Repeat this experiment 10 times (each time with a different 50% subset). For each model, print the obtained weight vector in all 10 repetitions, and print the standard deviation of each weight (six standard deviations in total, each computed over 10 values).
# Your code here
for lambd in [0.01, 1000]:
    print(lambd)
    ws1, ws2, ws3 = [], [], []
    for i in range(10):
        xTrain, xTest, yTrain, yTest = train_test_split(
            grades_X_fixed_colinear, ssY, test_size=0.5)
        l2 = Ridge(lambd).fit(xTrain, yTrain)
        print(l2.coef_)
        ws1.append(l2.coef_[0][0])
        ws2.append(l2.coef_[0][1])
        ws3.append(l2.coef_[0][2])
    print("std dev: " + str(np.std(ws1)))
    print("std dev: " + str(np.std(ws2)))
    print("std dev: " + str(np.std(ws3)))
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
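The stability effect of $\lambda$ can also be seen directly in the closed-form ridge solution $\mathbf{w} = (\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I})^{-1}\mathbf{\Phi}^\intercal\mathbf{y}$. A minimal sketch (the helper name and toy data are illustrative, and the intercept term, which sklearn's Ridge fits separately by default, is ignored):

```python
import numpy as np

def ridge_weights(Phi, y, lam):
    """Closed-form ridge solution w = (Phi^T Phi + lam*I)^{-1} Phi^T y.

    Sketch only: ignores the intercept, which sklearn's Ridge
    fits separately by default.
    """
    n_features = Phi.shape[1]
    A = Phi.T @ Phi + lam * np.eye(n_features)
    return np.linalg.solve(A, Phi.T @ y)

# Toy design matrix and targets.
Phi = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(ridge_weights(Phi, y, 0.01))
```

As $\lambda$ grows, the added $\lambda\mathbf{I}$ dominates the inversion, shrinking the weights toward zero and making them less sensitive to which subsample was drawn.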
Q: How does regularization affect the stability of the weights? Q: Are the coefficients of the same magnitude as in the previous experiment? Explain why. c) Using numpy.linalg.cond, compute the condition number of the matrix $\mathbf{\Phi}^\intercal\mathbf{\Phi}+\lambda\mathbf{I}$, where $\mathbf{\Phi}$ is the design matrix (grades_X_fixed_colinear). Repeat for both $\lambda=0.01$ and $\lambda=10$.
# Your code here
for l in [0.01, 10]:
    mm = matmul(transpose(grades_X_fixed_colinear), grades_X_fixed_colinear)
    matr = mm + l * identity(len(mm))
    print(matr)
    print(np.linalg.cond(matr))
STRUCE/SU-2019-LAB01-0036477171.ipynb
DominikDitoIvosevic/Uni
mit
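As a self-contained sanity check of the behaviour above (using a toy matrix rather than the grades data), duplicating a column makes $\mathbf{\Phi}^\intercal\mathbf{\Phi}$ rank-deficient, and a larger $\lambda$ dramatically improves the condition number:

```python
import numpy as np

# Toy design matrix whose last column duplicates the second one.
Phi = np.array([[1.0, 2.0, 2.0],
                [2.0, 1.0, 1.0],
                [3.0, 4.0, 4.0]])
G = Phi.T @ Phi  # rank-deficient: one eigenvalue is (numerically) zero

for lam in [0.01, 10.0]:
    # Adding lam*I shifts every eigenvalue up by lam, bounding the condition number.
    print(lam, np.linalg.cond(G + lam * np.eye(3)))
```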
We see that there are five files: four of these are mutants, and one is the reference (original) sample. We will take a look inside one of the files and look at the distribution of read statistics. The reads are stored in text files that have been compressed with gzip, a common practice for storing raw data. You can look inside by decompressing a file and piping the output to a program called head, which will stop after a few lines. You don't want to print the contents of the entire file to the screen, since that would likely crash IPython.
!gunzip -c ../data/reads/mutant1_OIST-2015-03-28.fq.gz | head -8
src/Raw data.ipynb
mikheyev/phage-lab
mit
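The same peek can be done from Python using the standard-library gzip module, which decompresses on the fly without unpacking the whole file. A small sketch (the helper name is illustrative):

```python
import gzip
from itertools import islice

def head_fastq(path, n_lines=8):
    """Return the first n_lines of a gzipped FASTQ file, decompressed on the fly."""
    with gzip.open(path, "rt") as fq:
        return [line.rstrip("\n") for line in islice(fq, n_lines)]

# e.g. head_fastq("../data/reads/mutant1_OIST-2015-03-28.fq.gz")
```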