Instantiate and evaluate this model:
```python
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
                 metrics=[tf.metrics.MeanAbsoluteError()])

val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
```
License: Apache-2.0 — `site/en/tutorials/structured_data/time_series.ipynb` (repo: `jcjveraa/docs-2`)
That printed some performance metrics, but those don't give you a feeling for how well the model is doing. The `WindowGenerator` has a plot method, but the plots won't be very interesting with only a single sample. So, create a wider `WindowGenerator` that generates windows of 24h of consecutive inputs and labels at a time. The `wide_window` doesn't change the way the model operates. The model still makes predictions 1h into the future based on a single input time step. Here the `time` axis acts like the `batch` axis: each prediction is made independently, with no interaction between time steps.
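The slicing that such a window performs can be sketched in plain Python. This is an illustrative stand-in for the tutorial's `WindowGenerator`, not its actual implementation; the function name and defaults are assumptions for the example:

```python
def split_window(series, input_width=24, label_width=24, shift=1):
    """Slice one (inputs, labels) pair from the start of a series.

    The total window is input_width + shift steps long; the label
    window covers its last label_width steps.
    """
    total_width = input_width + shift
    inputs = series[:input_width]
    labels = series[total_width - label_width:total_width]
    return inputs, labels

series = list(range(30))           # a toy hourly series
inputs, labels = split_window(series)
print(len(inputs), len(labels))    # 24 24
print(inputs[0], labels[0])        # 0 1  -> labels are shifted 1 step
```

With `shift=1` each label is the value one step after the corresponding input, which is exactly the relationship the single-step models are trained on.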
```python
wide_window = WindowGenerator(
    input_width=24, label_width=24, shift=1,
    label_columns=['T (degC)'])

wide_window
```
This expanded window can be passed directly to the same `baseline` model without any code changes. This is possible because the inputs and labels have the same number of time steps, and the baseline just forwards the input to the output:

![One prediction 1h into the future, every hour.](images/last_window.png)
```python
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
```
Plotting the baseline model's predictions, you can see that they are simply the labels shifted right by 1h:
```python
wide_window.plot(baseline)
```
In the above plots of three examples, the single-step model is run over the course of 24h. This deserves some explanation:

* The blue "Inputs" line shows the input temperature at each time step. The model receives all features; this plot only shows the temperature.
* The green "Labels" dots show the target prediction value. These dots are shown at the prediction time, not the input time. That is why the range of labels is shifted 1 step relative to the inputs.
* The orange "Predictions" crosses are the model's predictions for each output time step. If the model were predicting perfectly, the predictions would land directly on the "Labels".

## Linear model

The simplest **trainable** model you can apply to this task is to insert a linear transformation between the input and output. In this case the output from a time step only depends on that step:

![A single step prediction](images/narrow_window.png)

A `layers.Dense` with no `activation` set is a linear model. The layer only transforms the last axis of the data from `(batch, time, inputs)` to `(batch, time, units)`; it is applied independently to every item across the `batch` and `time` axes.
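The "applied independently to every item" behavior can be sketched without TensorFlow. This is a toy stand-in for `layers.Dense` on nested Python lists; the function name, kernel, and bias values are illustrative:

```python
def dense_no_activation(x, kernel, bias):
    """Apply y = x @ kernel + bias to the last axis of a
    (batch, time, features) nested list -- independently for every
    batch item and time step, like a Dense layer with no activation."""
    def matvec(vec):
        units = len(kernel[0])
        return [sum(v * kernel[i][j] for i, v in enumerate(vec)) + bias[j]
                for j in range(units)]
    return [[matvec(step) for step in batch_item] for batch_item in x]

# (batch=2, time=3, features=2) -> (batch=2, time=3, units=1)
x = [[[1.0, 2.0]] * 3, [[3.0, 4.0]] * 3]
kernel = [[0.5], [0.25]]   # shape (features, units)
bias = [0.0]
y = dense_no_activation(x, kernel, bias)
print(len(y), len(y[0]), len(y[0][0]))  # 2 3 1
```

Only the last (`features`) axis changes size; the `batch` and `time` axes pass through untouched.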
```python
linear = tf.keras.Sequential([
    tf.keras.layers.Dense(units=1)
])

print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
```
This tutorial trains many models, so package the training procedure into a function:
```python
MAX_EPOCHS = 20

def compile_and_fit(model, window, patience=2):
  early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                                    patience=patience,
                                                    mode='min')

  model.compile(loss=tf.losses.MeanSquaredError(),
                optimizer=tf.optimizers.Adam(),
                metrics=[tf.metrics.MeanAbsoluteError()])

  history = model.fit(window.train, epochs=MAX_EPOCHS,
                      validation_data=window.val,
                      callbacks=[early_stopping])
  return history
```
Train the model and evaluate its performance:
```python
history = compile_and_fit(linear, single_step_window)

val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
```
Like the `baseline` model, the linear model can be called on batches of wide windows. Used this way, the model makes a set of independent predictions on consecutive time steps. The `time` axis acts like another `batch` axis. There are no interactions between the predictions at each time step.

![A single step prediction](images/wide_window.png)
```python
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
```
Here is the plot of its example predictions on the `wide_window`. Note how in many cases the prediction is clearly better than just returning the input temperature, but in a few cases it's worse:
```python
wide_window.plot(linear)
```
One advantage of linear models is that they're relatively simple to interpret. You can pull out the layer's weights and see the weight assigned to each input:
```python
plt.bar(x=range(len(train_df.columns)),
        height=linear.layers[0].kernel[:, 0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
```
Sometimes the model doesn't even place the most weight on the input `T (degC)`. This is one of the risks of random initialization.

## Dense

Before applying models that actually operate on multiple time steps, it's worth checking the performance of deeper, more powerful, single-input-step models. Here's a model similar to the `linear` model, except it stacks a few `Dense` layers between the input and the output:
```python
dense = tf.keras.Sequential([
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dense(units=1)
])

history = compile_and_fit(dense, single_step_window)

val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
## Multi-step dense

A single-time-step model has no context for the current values of its inputs. It can't see how the input features are changing over time. To address this issue, the model needs access to multiple time steps when making predictions:

![Three time steps are used for each prediction.](images/conv_window.png)

The `baseline`, `linear` and `dense` models handled each time step independently. Here the model will take multiple time steps as input to produce a single output. Create a `WindowGenerator` that will produce batches of 3h of inputs and 1h of labels. Note that the `Window`'s `shift` parameter is relative to the end of the two windows.
```python
CONV_WIDTH = 3
conv_window = WindowGenerator(
    input_width=CONV_WIDTH,
    label_width=1,
    shift=1,
    label_columns=['T (degC)'])

conv_window

conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
```
You could train a `dense` model on a multiple-input-step window by adding a `layers.Flatten` as the first layer of the model:
```python
multi_step_dense = tf.keras.Sequential([
    # Shape: (time, features) => (time*features)
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(units=32, activation='relu'),
    tf.keras.layers.Dense(units=32, activation='relu'),
    tf.keras.layers.Dense(units=1),
    # Add back the time dimension.
    # Shape: (outputs) => (1, outputs)
    tf.keras.layers.Reshape([1, -1]),
])

print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)

history = compile_and_fit(multi_step_dense, conv_window)

IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)

conv_window.plot(multi_step_dense)
```
The main downside of this approach is that the resulting model can only be executed on input windows of exactly this shape:
```python
print('Input shape:', wide_window.example[0].shape)
try:
  print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
  print(f'\n{type(e).__name__}:{e}')
```
The convolutional models in the next section fix this problem.

## Convolution neural network

A convolution layer (`layers.Conv1D`) also takes multiple time steps as input to each prediction. Below is the **same** model as `multi_step_dense`, re-written with a convolution. Note the changes:

* The `layers.Flatten` and the first `layers.Dense` are replaced by a `layers.Conv1D`.
* The `layers.Reshape` is no longer necessary since the convolution keeps the time axis in its output.
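The sliding-window behavior of such a layer can be sketched in plain Python. This is a single-channel toy, not `layers.Conv1D` itself; the kernel values are illustrative:

```python
def conv1d(sequence, kernel):
    """A 'valid' 1-D convolution over the time axis: the kernel slides
    across the sequence, so the output has
    len(sequence) - len(kernel) + 1 steps."""
    k = len(kernel)
    return [sum(sequence[t + i] * kernel[i] for i in range(k))
            for t in range(len(sequence) - k + 1)]

temps = [10, 11, 13, 12, 11, 10]
smoothing = [1/3, 1/3, 1/3]          # a 3-step averaging kernel
out = conv1d(temps, smoothing)
print(len(temps), '->', len(out))    # 6 -> 4
```

Because the kernel needs `k` consecutive inputs for each output, the time axis survives but shrinks by `k - 1` steps; the next cells account for exactly this.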
```python
conv_model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=32,
                           kernel_size=(CONV_WIDTH,),
                           activation='relu'),
    tf.keras.layers.Dense(units=32, activation='relu'),
    tf.keras.layers.Dense(units=1),
])
```
Run it on an example batch to see that the model produces outputs with the expected shape:
```python
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
```
Train and evaluate it on the `conv_window`; it should give performance similar to the `multi_step_dense` model.
```python
history = compile_and_fit(conv_model, conv_window)

IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
```
The difference between this `conv_model` and the `multi_step_dense` model is that the `conv_model` can be run on inputs of any length. The convolutional layer is applied to a sliding window of inputs:

![Executing a convolutional model on a sequence](images/wide_conv_window.png)

If you run it on wider input, it produces wider output:
```python
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
```
Note that the output is shorter than the input. To make training or plotting work, you need the labels and predictions to have the same length. So build a `WindowGenerator` that produces wide windows with a few extra input time steps, so the label and prediction lengths match:
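The width arithmetic behind this can be checked with plain Python (the variable names mirror the next cell):

```python
# A 'valid' convolution drops (kernel_width - 1) steps:
#   output_steps = input_steps - (CONV_WIDTH - 1)
CONV_WIDTH = 3
LABEL_WIDTH = 24

# So to get LABEL_WIDTH outputs, feed in the extra steps up front:
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)

output_steps = INPUT_WIDTH - (CONV_WIDTH - 1)
print(INPUT_WIDTH, '->', output_steps)   # 26 -> 24
```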
```python
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
    input_width=INPUT_WIDTH,
    label_width=LABEL_WIDTH,
    shift=1,
    label_columns=['T (degC)'])

wide_conv_window

print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
```
Now you can plot the model's predictions on a wider window. Note the 3 input time steps before the first prediction. Every prediction here is based on the 3 preceding time steps:
```python
wide_conv_window.plot(conv_model)
```
## Recurrent neural network

A Recurrent Neural Network (RNN) is a type of neural network well-suited to time series data. RNNs process a time series step by step, maintaining an internal state from time step to time step. For more details, read the [text generation tutorial](https://www.tensorflow.org/tutorials/text/text_generation) or the [RNN guide](https://www.tensorflow.org/guide/keras/rnn).

In this tutorial, you will use an RNN layer called Long Short-Term Memory ([LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM)).

An important constructor argument for all Keras RNN layers is the `return_sequences` argument. This setting can configure the layer in one of two ways:

1. If `False`, the default, the layer only returns the output of the final time step, giving the model time to warm up its internal state before making a single prediction:

![An lstm warming up and making a single prediction](images/lstm_1_window.png)

2. If `True`, the layer returns an output for each input. This is useful for:
    * Stacking RNN layers.
    * Training a model on multiple time steps simultaneously.

![An lstm making a prediction after every timestep](images/lstm_many_window.png)
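The two `return_sequences` modes can be sketched without TensorFlow. This toy loop stands in for how an RNN layer drives its cell; the running-sum "cell" is purely illustrative:

```python
def run_rnn(inputs, step_fn, state, return_sequences=False):
    """Sketch of how an RNN layer consumes a sequence: the cell is
    applied step by step, threading the state through. With
    return_sequences=True every per-step output is kept; otherwise
    only the final output is returned."""
    outputs = []
    for x in inputs:
        out, state = step_fn(x, state)
        outputs.append(out)
    return outputs if return_sequences else outputs[-1]

# A toy "cell" whose output and state are a running sum of the inputs.
step = lambda x, state: (x + state, x + state)

seq = [1, 2, 3, 4]
print(run_rnn(seq, step, 0))                         # 10
print(run_rnn(seq, step, 0, return_sequences=True))  # [1, 3, 6, 10]
```

Either way the state is threaded through every step; the flag only controls how many of the per-step outputs the caller sees.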
```python
lstm_model = tf.keras.models.Sequential([
    # Shape [batch, time, features] => [batch, time, lstm_units]
    tf.keras.layers.LSTM(32, return_sequences=True),
    # Shape => [batch, time, features]
    tf.keras.layers.Dense(units=1)
])
```
With `return_sequences=True`, the model can be trained on 24h of data at a time.

Note: This will give a pessimistic view of the model's performance. On the first time step the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier.
```python
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)

history = compile_and_fit(lstm_model, wide_window)

IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)

wide_window.plot(lstm_model)
```
## Performance

With this dataset, each of the models typically does slightly better than the one before it:
```python
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]

plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(), rotation=45)
_ = plt.legend()

for name, value in performance.items():
  print(f'{name:12s}: {value[1]:0.4f}')
```
## Multi-output models

The models so far all predicted a single output feature, `T (degC)`, for a single time step. All of these models can be converted to predict multiple features just by changing the number of units in the output layer and adjusting the training windows to include all features in the `labels`.
```python
single_step_window = WindowGenerator(
    # `WindowGenerator` returns all features as labels if you
    # don't set the `label_columns` argument.
    input_width=1, label_width=1, shift=1)

wide_window = WindowGenerator(
    input_width=24, label_width=24, shift=1)

for example_inputs, example_labels in wide_window.train.take(1):
  print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
  print(f'Labels shape (batch, time, features): {example_labels.shape}')
```
Note above that the `features` axis of the labels now has the same depth as the inputs, instead of 1.

### Baseline

The same baseline model can be used here, but this time repeating all features instead of selecting a specific `label_index`:
```python
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
                 metrics=[tf.metrics.MeanAbsoluteError()])

val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)
```
### Dense
```python
dense = tf.keras.Sequential([
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dense(units=64, activation='relu'),
    tf.keras.layers.Dense(units=num_features)
])

history = compile_and_fit(dense, single_step_window)

IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
```
### RNN
```python
%%time
wide_window = WindowGenerator(
    input_width=24, label_width=24, shift=1)

lstm_model = tf.keras.models.Sequential([
    # Shape [batch, time, features] => [batch, time, lstm_units]
    tf.keras.layers.LSTM(32, return_sequences=True),
    # Shape => [batch, time, features]
    tf.keras.layers.Dense(units=num_features)
])

history = compile_and_fit(lstm_model, wide_window)

IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)

print()
```
### Advanced: Residual connections

The `Baseline` model from earlier took advantage of the fact that the sequence doesn't change drastically from time step to time step. Every model trained in this tutorial so far was randomly initialized, and then had to learn that the output is a small change from the previous time step. While you can get around this issue with careful initialization, it's simpler to build this into the model structure.

It's common in time series analysis to build models that, instead of predicting the next value, predict how the value will change in the next time step. Similarly, "residual networks" or "ResNets" in deep learning refer to architectures where each layer adds to the model's accumulating result. That is how you take advantage of the knowledge that the change should be small.

![A model with a residual connection](images/residual.png)

Essentially, this initializes the model to match the `Baseline`. For this task it helps models converge faster, with slightly better performance. This approach can be used in conjunction with any model discussed in this tutorial. Here it is being applied to the LSTM model; note the use of `tf.initializers.zeros` to ensure that the initial predicted changes are small and don't overpower the residual connection. There are no symmetry-breaking concerns for the gradients here, since the `zeros` are only used on the last layer.
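Stripped of the Keras machinery, the residual idea is tiny. This is an illustrative sketch (names and values are made up for the example), not the tutorial's `ResidualWrapper`:

```python
def residual_model(inputs, delta_model):
    """Sketch of a residual connection: the wrapped model predicts
    the *change* at each time step, and the wrapper adds it back
    onto the input. A zero-initialized delta model therefore starts
    out identical to the Baseline (prediction == input)."""
    return [x + delta_model(x) for x in inputs]

zero_delta = lambda x: 0.0            # a freshly zero-initialized model
temps = [14.2, 14.6, 15.1]
print(residual_model(temps, zero_delta))   # [14.2, 14.6, 15.1]
```

Training then only has to learn the (small) deltas rather than reproduce the full signal from scratch.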
```python
class ResidualWrapper(tf.keras.Model):
  def __init__(self, model):
    super().__init__()
    self.model = model

  def call(self, inputs, *args, **kwargs):
    delta = self.model(inputs, *args, **kwargs)

    # The prediction for each time step is the input
    # from the previous time step plus the delta
    # calculated by the model.
    return inputs + delta
```

```python
%%time
residual_lstm = ResidualWrapper(
    tf.keras.Sequential([
        tf.keras.layers.LSTM(32, return_sequences=True),
        tf.keras.layers.Dense(
            num_features,
            # The predicted deltas should start small,
            # so initialize the output layer with zeros.
            kernel_initializer=tf.initializers.zeros)
    ]))

history = compile_and_fit(residual_lstm, wide_window)

IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)

print()
```
### Performance

Here is the overall performance for these multi-output models:
```python
x = np.arange(len(performance))
width = 0.3

metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]

plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(), rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()

for name, value in performance.items():
  print(f'{name:15s}: {value[1]:0.4f}')
```
The above performances are averaged across all model outputs.

## Multi-step models

Both the single-output and multiple-output models in the previous sections made **single time step predictions**, 1h into the future. This section looks at how to expand these models to make **multiple time step predictions**.

In a multi-step prediction, the model needs to learn to predict a range of future values. Thus, unlike a single-step model, where only a single future point is predicted, a multi-step model predicts a sequence of future values. There are two rough approaches to this:

1. Single-shot predictions, where the entire time series is predicted at once.
2. Autoregressive predictions, where the model only makes single-step predictions and its output is fed back as its input.

In this section all the models will predict **all the features across all output time steps**. For the multi-step model, the training data again consists of hourly samples. However, here the models will learn to predict 24h of the future, given 24h of the past. Here is a `Window` object that generates these slices from the dataset:
```python
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
                               label_width=OUT_STEPS,
                               shift=OUT_STEPS)

multi_window.plot()
multi_window
```
### Baselines

A simple baseline for this task is to repeat the last input time step for the required number of output time steps:

![Repeat the last input, for each output step](images/multistep_last.png)
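For a single feature and a single batch item, this baseline is just list repetition. A plain-Python sketch (the function name and values are illustrative) of what `tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])` does below:

```python
def last_baseline(inputs, out_steps=24):
    """Repeat the last observed time step out_steps times -- the
    per-batch-item effect of tiling the last input slice."""
    return [inputs[-1]] * out_steps

history = [11.0, 11.5, 12.0]
forecast = last_baseline(history, out_steps=4)
print(forecast)   # [12.0, 12.0, 12.0, 12.0]
```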
```python
class MultiStepLastBaseline(tf.keras.Model):
  def call(self, inputs):
    return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])

last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
                      metrics=[tf.metrics.MeanAbsoluteError()])

multi_val_performance = {}
multi_performance = {}

multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)
```
Since this task is to predict 24h given 24h, another simple approach is to repeat the previous day, assuming tomorrow will be similar:

![Repeat the previous day](images/multistep_repeat.png)
```python
class RepeatBaseline(tf.keras.Model):
  def call(self, inputs):
    return inputs

repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
                        metrics=[tf.metrics.MeanAbsoluteError()])

multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)
```
### Single-shot models

One high-level approach to this problem is to use a "single-shot" model, where the model makes the entire sequence prediction in a single step. This can be implemented efficiently as a `layers.Dense` with `OUT_STEPS*features` output units. The model just needs to reshape that output to the required `(OUT_STEPS, features)`.

#### Linear

A simple linear model based on the last input time step does better than either baseline, but is underpowered. The model needs to predict `OUT_STEPS` time steps from a single input time step with a linear projection. It can only capture a low-dimensional slice of the behavior, likely based mainly on the time of day and time of year.

![Predict all timesteps from the last time-step](images/multistep_dense.png)
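The reshape at the end of each single-shot model is simple enough to check in plain Python. This is an illustrative stand-in for the final `layers.Reshape([OUT_STEPS, num_features])`:

```python
def single_shot_reshape(flat, out_steps, num_features):
    """Reshape a flat vector of out_steps*num_features dense outputs
    into out_steps rows of num_features values each."""
    assert len(flat) == out_steps * num_features
    return [flat[t * num_features:(t + 1) * num_features]
            for t in range(out_steps)]

flat = list(range(6))                   # out_steps=3, num_features=2
print(single_shot_reshape(flat, 3, 2))  # [[0, 1], [2, 3], [4, 5]]
```

The dense layer emits one flat vector per batch item; the reshape just reinterprets it as a `(time, features)` grid.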
```python
multi_linear_model = tf.keras.Sequential([
    # Take the last time step.
    # Shape [batch, time, features] => [batch, 1, features]
    tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
    # Shape => [batch, 1, out_steps*features]
    tf.keras.layers.Dense(OUT_STEPS*num_features,
                          kernel_initializer=tf.initializers.zeros),
    # Shape => [batch, out_steps, features]
    tf.keras.layers.Reshape([OUT_STEPS, num_features])
])

history = compile_and_fit(multi_linear_model, multi_window)

IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)
```
#### Dense

Adding a `layers.Dense` between the input and output gives the linear model more power, but it is still only based on a single input time step.
```python
multi_dense_model = tf.keras.Sequential([
    # Take the last time step.
    # Shape [batch, time, features] => [batch, 1, features]
    tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
    # Shape => [batch, 1, dense_units]
    tf.keras.layers.Dense(512, activation='relu'),
    # Shape => [batch, out_steps*features]
    tf.keras.layers.Dense(OUT_STEPS*num_features,
                          kernel_initializer=tf.initializers.zeros),
    # Shape => [batch, out_steps, features]
    tf.keras.layers.Reshape([OUT_STEPS, num_features])
])

history = compile_and_fit(multi_dense_model, multi_window)

IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)
```
#### CNN

A convolutional model makes predictions based on a fixed-width history, which may lead to better performance than the dense model since it can see how things are changing over time:

![A convolutional model sees how things change over time](images/multistep_conv.png)
```python
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
    # Shape [batch, time, features] => [batch, CONV_WIDTH, features]
    tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
    # Shape => [batch, 1, conv_units]
    tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
    # Shape => [batch, 1, out_steps*features]
    tf.keras.layers.Dense(OUT_STEPS*num_features,
                          kernel_initializer=tf.initializers.zeros),
    # Shape => [batch, out_steps, features]
    tf.keras.layers.Reshape([OUT_STEPS, num_features])
])

history = compile_and_fit(multi_conv_model, multi_window)

IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)
```
#### RNN

A recurrent model can learn to use a long history of inputs, if it's relevant to the predictions the model is making. Here the model will accumulate internal state for 24h before making a single prediction for the next 24h. In this single-shot format, the LSTM only needs to produce an output at the last time step, so set `return_sequences=False`.

![The lstm accumulates state over the input window, and makes a single prediction for the next 24h](images/multistep_lstm.png)
```python
multi_lstm_model = tf.keras.Sequential([
    # Shape [batch, time, features] => [batch, lstm_units].
    # Adding more `lstm_units` just overfits more quickly.
    tf.keras.layers.LSTM(32, return_sequences=False),
    # Shape => [batch, out_steps*features]
    tf.keras.layers.Dense(OUT_STEPS*num_features,
                          kernel_initializer=tf.initializers.zeros),
    # Shape => [batch, out_steps, features]
    tf.keras.layers.Reshape([OUT_STEPS, num_features])
])

history = compile_and_fit(multi_lstm_model, multi_window)

IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)
```
### Advanced: Autoregressive model

The above models all predict the entire output sequence in a single step. In some cases it may be helpful for the model to decompose this prediction into individual time steps. Then each model output can be fed back into the model at each step, and predictions can be made conditioned on the previous one, as in the classic [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/abs/1308.0850). One clear advantage of this style of model is that it can be set up to produce output with a varying length.

You could take any of the single-step, multi-output models trained in the first half of this tutorial and run it in an autoregressive feedback loop, but here you'll focus on building a model that's been explicitly trained to do that.

![Feedback a model's output to its input](images/multistep_autoregressive.png)

#### RNN

This tutorial only builds an autoregressive RNN model, but this pattern could be applied to any model designed to output a single time step. The model will have the same basic form as the single-step `LSTM` models: an `LSTM` followed by a `layers.Dense` that converts the `LSTM` outputs to model predictions. A `layers.LSTM` is a `layers.LSTMCell` wrapped in the higher-level `layers.RNN` that manages the state and sequence results for you (see [Keras RNNs](https://www.tensorflow.org/guide/keras/rnn) for details). In this case the model has to manually manage the inputs for each step, so it uses `layers.LSTMCell` directly for the lower-level, single-time-step interface.
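Stripped of the network itself, the feedback loop is just this. A plain-Python sketch with a made-up single-step "model" standing in for the trained network:

```python
def autoregress(last_value, step_model, out_steps):
    """Sketch of autoregressive prediction: each output becomes the
    next input, so a single-step model yields a whole sequence."""
    predictions = []
    x = last_value
    for _ in range(out_steps):
        x = step_model(x)
        predictions.append(x)
    return predictions

cool_by_half_degree = lambda t: t - 0.5   # toy single-step "model"
print(autoregress(15.0, cool_by_half_degree, 3))  # [14.5, 14.0, 13.5]
```

Because the loop length is a parameter, the same model can produce forecasts of any horizon; errors, however, compound from step to step.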
```python
class FeedBack(tf.keras.Model):
  def __init__(self, units, out_steps):
    super().__init__()
    self.out_steps = out_steps
    self.units = units
    self.lstm_cell = tf.keras.layers.LSTMCell(units)
    # Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
    self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
    self.dense = tf.keras.layers.Dense(num_features)

feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)
```
The first method this model needs is a `warmup` method to initialize its internal state based on the inputs. Once trained, this state will capture the relevant parts of the input history. This is equivalent to the single-step `LSTM` model from earlier:
```python
def warmup(self, inputs):
  # inputs.shape => (batch, time, features)
  # x.shape => (batch, lstm_units)
  x, *state = self.lstm_rnn(inputs)

  # prediction.shape => (batch, features)
  prediction = self.dense(x)
  return prediction, state

FeedBack.warmup = warmup
```
This method returns a single time-step prediction, and the internal state of the LSTM:
```python
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape
```
With the `RNN`'s state and an initial prediction, you can now continue iterating the model, feeding the prediction at each step back in as the input.

The simplest approach to collecting the output predictions is to use a Python list and `tf.stack` after the loop.

Note: Stacking a Python list like this only works with eager execution, using `Model.compile(..., run_eagerly=True)` for training, or with a fixed-length output. For a dynamic output length you would need to use a `tf.TensorArray` instead of a Python list, and `tf.range` instead of the Python `range`.
def call(self, inputs, training=None):
    # Collect the dynamically unrolled outputs in a Python list.
    predictions = []
    # Initialize the LSTM state.
    prediction, state = self.warmup(inputs)

    # Insert the first prediction.
    predictions.append(prediction)

    # Run the rest of the prediction steps.
    for n in range(1, self.out_steps):
        # Use the last prediction as input.
        x = prediction
        # Execute one lstm step.
        x, state = self.lstm_cell(x, states=state, training=training)
        # Convert the lstm output to a prediction.
        prediction = self.dense(x)
        # Add the prediction to the output.
        predictions.append(prediction)

    # predictions.shape => (time, batch, features)
    predictions = tf.stack(predictions)
    # predictions.shape => (batch, time, features)
    predictions = tf.transpose(predictions, [1, 0, 2])
    return predictions

FeedBack.call = call
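The note above mentions `tf.TensorArray` and `tf.range` for dynamic output lengths. A minimal sketch of that variant (hypothetical — not part of the tutorial's model, with a trivial `+ 1.0` standing in for the `lstm_cell` and `dense` calls) could look like:

```python
import tensorflow as tf

@tf.function
def unroll(step0, n_steps):
    # Accumulate one prediction per step in a TensorArray instead of a list,
    # so the loop can be traced with a dynamic number of steps.
    predictions = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
    prediction = step0
    for n in tf.range(n_steps):
        prediction = prediction + 1.0  # stand-in for lstm_cell + dense
        predictions = predictions.write(n, prediction)
    # (time, batch, features) -> (batch, time, features)
    return tf.transpose(predictions.stack(), [1, 0, 2])
```

Because `n_steps` is a tensor, the same traced function can produce outputs of different lengths.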
Test run this model on the example inputs:
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)
Now train the model:
history = compile_and_fit(feedback_model, multi_window)

IPython.display.clear_output()

multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)
Performance

There are clearly diminishing returns as a function of model complexity on this problem.
x = np.arange(len(multi_performance))
width = 0.3

metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]

plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(), rotation=45)
plt.ylabel('MAE (average over all times and outputs)')
_ = plt.legend()
The metrics for the multi-output models in the first half of this tutorial show the performance averaged across all output features. These performances are similar, but here they are also averaged across output timesteps.
for name, value in multi_performance.items():
    print(f'{name:8s}: {value[1]:0.4f}')
Word Embeddings in MySQL

This example uses the official MySQL Connector within Python 3 to store and retrieve various amounts of word embeddings. We will use a local MySQL database running as a Docker container for testing purposes. To start the database, run:

```
docker run -ti --rm --name ohmysql -e MYSQL_ROOT_PASSWORD=mikolov -e MYSQL_DATABASE=embeddings -p 3306:3306 mysql:5.7
```
import mysql.connector
import io
import time
import numpy
import plotly
from tqdm import tqdm_notebook as tqdm
_____no_output_____
MIT
Word_Embeddings_MySQL.ipynb
krokodilj/word_embedding_storage
Dummy Embeddings

For testing purposes we will use randomly generated numpy arrays as dummy embeddings.
def embeddings(n=1000, dim=300):
    """
    Yield n tuples of random numpy arrays of *dim* length indexed by *n*
    """
    idx = 0
    while idx < n:
        yield (str(idx), numpy.random.rand(dim))
        idx += 1
Conversion Functions

Since we can't just save a NumPy array into the database, we will convert it into a BLOB.
def adapt_array(array):
    """
    Using the numpy.save function to save a binary version of the array,
    and BytesIO to catch the stream of data and convert it into a BLOB.
    """
    out = io.BytesIO()
    numpy.save(out, array)
    out.seek(0)
    return out.read()

def convert_array(blob):
    """
    Using BytesIO to convert the binary version of the array back into a numpy array.
    """
    out = io.BytesIO(blob)
    out.seek(0)
    return numpy.load(out)

connection = mysql.connector.connect(user='root', password='mikolov', host='127.0.0.1', database='embeddings')
cursor = connection.cursor()
cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
connection.commit()

%%time
for key, emb in embeddings():
    arr = adapt_array(emb)
    cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
    connection.commit()

%%time
for key, _ in embeddings():
    cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
    data = cursor.fetchone()
    arr = convert_array(data[0])
CPU times: user 393 ms, sys: 42.7 ms, total: 435 ms Wall time: 1.26 s
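The same store-as-BLOB round-trip can be sanity-checked without a running MySQL server or numpy — a hypothetical, dependency-free stand-in using the standard library's `struct` for serialization and an in-memory `sqlite3` database:

```python
import sqlite3
import struct

def to_blob(vector):
    # Pack a list of floats into raw bytes (stand-in for numpy.save + BytesIO).
    return struct.pack(f'{len(vector)}d', *vector)

def from_blob(blob):
    # Unpack the raw bytes back into a tuple of floats (8 bytes per double).
    return struct.unpack(f'{len(blob) // 8}d', blob)

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE embeddings (key TEXT, embedding BLOB)')
conn.execute('INSERT INTO embeddings VALUES (?, ?)', ('0', to_blob([0.1, 0.2, 0.3])))
row = conn.execute("SELECT embedding FROM embeddings WHERE key = '0'").fetchone()
restored = from_blob(row[0])
```

The same idea carries over directly to the MySQL setup above — only the connector and the placeholder syntax (`?` vs `%s`) differ.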
Sample some data

To test the I/O we will write and read some data from the database. This may take a while.
write_times = []
read_times = []
counts = [500, 1000, 2000, 3000, 4000, 5000]

for c in counts:
    print(c)
    cursor.execute('DROP TABLE IF EXISTS `embeddings`;')
    cursor.execute('CREATE TABLE IF NOT EXISTS `embeddings` (`key` TEXT, `embedding` BLOB);')
    connection.commit()

    start_time_write = time.time()
    for key, emb in tqdm(embeddings(c), total=c):
        arr = adapt_array(emb)
        cursor.execute('INSERT INTO `embeddings` (`key`, `embedding`) VALUES (%s, %s);', (key, arr))
        connection.commit()
    write_times.append(time.time() - start_time_write)

    start_time_read = time.time()
    for key, emb in embeddings(c):
        cursor.execute('SELECT embedding FROM `embeddings` WHERE `key`=%s;', (key,))
        data = cursor.fetchone()
        arr = convert_array(data[0])
    read_times.append(time.time() - start_time_read)

print('DONE')
500
Results
plotly.offline.init_notebook_mode(connected=True)

trace = plotly.graph_objs.Scatter(
    y=write_times,
    x=counts,
    mode='lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Write Times",
                                  yaxis=dict(title='Time in Seconds'),
                                  xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-write')

plotly.offline.init_notebook_mode(connected=True)

trace = plotly.graph_objs.Scatter(
    y=read_times,
    x=counts,
    mode='lines+markers'
)
layout = plotly.graph_objs.Layout(title="MySQL Read Times",
                                  yaxis=dict(title='Time in Seconds'),
                                  xaxis=dict(title='Embedding Count'))
data = [trace]
fig = plotly.graph_objs.Figure(data=data, layout=layout)
plotly.offline.iplot(fig, filename='jupyter-scatter-read')
Prepare df and dict corpus
fields_of_interest = [
    'Id',
    'name',
    'producer',
    'description',
    'price',
    'category',
    'source'
]

amazon_walmart_all_df = pd.read_csv(amazon_walmart_all_path, sep=',', quotechar='"')[fields_of_interest]
amazon_walmart_all_df.dtypes

x = amazon_walmart_all_df[amazon_walmart_all_df['category'].isnull()]
x.head(1)

z = amazon_walmart_all_df[amazon_walmart_all_df['producer'].isnull()]
z.head(1)

y = amazon_walmart_all_df[amazon_walmart_all_df['price'].isnull()]
y.head(1)

h = amazon_walmart_all_df[amazon_walmart_all_df['description'].isnull()]
h.head()

amazon_walmart_all_df[amazon_walmart_all_df['name'].isnull()]

description_corpus = amazon_walmart_all_df['description'].to_list()
description_corpus = [x for x in description_corpus if str(x) != 'nan']
description_corpus[1]

category_corpus = amazon_walmart_all_df.drop_duplicates().to_dict('records')
categories = list(amazon_walmart_all_df['category'].unique())
categories = [x for x in categories if str(x) != 'nan']

producer_corpus = amazon_walmart_all_df.drop_duplicates().to_dict('records')
producers = list(amazon_walmart_all_df['producer'].unique())
producers = [x for x in producers if str(x) != 'nan']
producers.sort()
producers

input_file = amazon_walmart_all_path
output_file = 'amazon_walmart_output3.csv'
settings_file = 'amazon_walmart_learned_settings3'
training_file = 'amazon_walmart_training3.json'

float('1.25')

def preProcess(key, column):
    try:
        # python 2/3 string differences
        column = column.decode('utf8')
    except AttributeError:
        pass
    column = unidecode(column)
    column = re.sub(' +', ' ', column)
    column = re.sub('\n', ' ', column)
    column = column.strip().strip('"').strip("'").lower().strip()
    column = column.lower()
    if not column:
        return None
    if key == 'price':
        column = float(column)
    return column

def readData(filename):
    data_d = {}
    with open(filename) as f:
        reader = csv.DictReader(f)
        for row in reader:
            clean_row = [(k, preProcess(k, v)) for (k, v) in row.items()]
            row_id = int(row['Id'])
            data_d[row_id] = dict(clean_row)
    return data_d

print('importing data ...')
data_d = readData(input_file)

fields = [
    {'field': 'name', 'type': 'Name'},
    {'field': 'name', 'type': 'String'},
    {'field': 'description', 'type': 'Text', 'corpus': description_corpus, 'has_missing': True},
    {'field': 'category', 'type': 'FuzzyCategorical', 'categories': categories, 'corpus': category_corpus, 'has_missing': True},
    {'field': 'producer', 'type': 'FuzzyCategorical', 'categories': producers, 'corpus': producer_corpus, 'has_missing': True},
    {'field': 'price', 'type': 'Price', 'has_missing': True},
]

deduper = dedupe.Dedupe(fields)

# took about 20 min with blocked proportion 0.8
deduper.prepare_training(data_d)

dedupe.consoleLabel(deduper)

data_d[1]

deduper.train()

threshold = deduper.threshold(data_d, recall_weight=1)
threshold

print('clustering...')
clustered_dupes = deduper.match(data_d, threshold)
print('# duplicate sets', len(clustered_dupes))

for key, values in data_d.items():
    values['price'] = str(values['price'])

cluster_membership = {}
cluster_id = 0
for (cluster_id, cluster) in enumerate(clustered_dupes):
    id_set, scores = cluster
    cluster_d = [data_d[c] for c in id_set]
    canonical_rep = dedupe.canonicalize(cluster_d)
    for record_id, score in zip(id_set, scores):
        cluster_membership[record_id] = {
            "cluster id": cluster_id,
            "canonical representation": canonical_rep,
            "confidence": score
        }

singleton_id = cluster_id + 1

with open(output_file, 'w') as f_output, open(input_file) as f_input:
    writer = csv.writer(f_output)
    reader = csv.reader(f_input)

    heading_row = next(reader)
    heading_row.insert(0, 'confidence_score')
    heading_row.insert(0, 'Cluster ID')
    canonical_keys = canonical_rep.keys()
    for key in canonical_keys:
        heading_row.append('canonical_' + key)
    writer.writerow(heading_row)

    for row in reader:
        row_id = int(row[0])
        if row_id in cluster_membership:
            cluster_id = cluster_membership[row_id]["cluster id"]
            canonical_rep = cluster_membership[row_id]["canonical representation"]
            row.insert(0, cluster_membership[row_id]['confidence'])
            row.insert(0, cluster_id)
            for key in canonical_keys:
                row.append(canonical_rep[key].encode('utf8'))
        else:
            row.insert(0, None)
            row.insert(0, singleton_id)
            singleton_id += 1
            for key in canonical_keys:
                row.append(None)
        writer.writerow(row)

fields_of_interest = ['Cluster ID', 'confidence_score', 'Id', 'name', 'producer', 'description', 'price']
amazon_walmart_output = pd.read_csv('amazon_walmart_output2.csv', sep=',', quotechar='"')[fields_of_interest]
amazon_walmart_output[amazon_walmart_output['confidence_score'] == None]
amazon_walmart_output = amazon_walmart_output[fields_of_interest]
amazon_walmart_output[amazon_walmart_output['confidence_score'] > 0.9].sort_values('Cluster ID')
_____no_output_____
MIT
dedupe/Product Data/product_amazon_walmart.ipynb
MauriceStr/StandardizedDI_Clients_SDASP
In the following, we want to partition responses according to how many times a player has miscoordinated before. For a start, we concatenate all three experiments into one dataframe:
#df = pd.concat([
#    df_MTurk.join(pd.Series(['MTurk']*len(df_MTurk), name='experiment')),
#    df_DTU1.join(pd.Series(['DTU1']*len(df_DTU1), name='experiment')),
#    df_DTU2.join(pd.Series(['DTU2']*len(df_DTU2), name='experiment'))],
#    ignore_index=True)
#df.head(), len(df)
_____no_output_____
MIT
python/figS6_logit_twobins_MTurk.ipynb
thomasnicolet/Paper_canteen_dilemma
Now we group the data into response pairs
df = df_MTurk.groupby(['session', 'group', 'round'], as_index=False)[['code', 'id_in_group', 'arrival', 'choice', 'certainty']].agg(lambda x: tuple(x))
df.head()

#df0 = pd.DataFrame(columns=df.columns)
#df1_ = pd.DataFrame(columns=df.columns)
#sessions = df.session.unique()
#for session in sessions:
#    df_session = df[df.session == session]
#    groups = df_session.group.unique()
#    for group in groups:
#        df_group = df_session[df_session.group == group]
#        miss = 0
#        for idx, row in df.iterrows():
#            if sum(row['choice']) != 1:
#                row['miss'] = miss
#                df0 = df0.append(row, ignore_index=True)
#            else:
#                miss += 1
#                df1_ = df1_.append(row, ignore_index=True)
#df.head()

# initialize new dataframes that will hold data with a condition. df0 will contain all choices
# having had zero previous miscoordinations, while df1_ will contain all choices having had one
# or more previous miscoordinations
df0 = pd.DataFrame(columns=df.columns)
df1_ = pd.DataFrame(columns=df.columns)

# partition the dataframe into two bins - the first having 0 and 1, and the other 2 or more
# miscoordinations:
sessions = df.session.unique()
for session in sessions:
    df_session = df[df.session == session]
    groups = df_session.group.unique()
    for group in groups:
        df_group = df_session[df_session.group == group]
        miss = 0
        for idx, row in df_group.iterrows():
            if sum(row['choice']) != 1:
                if miss == 0:
                    df0 = df0.append(row, ignore_index=True)
                else:
                    df1_ = df1_.append(row, ignore_index=True)
            else:
                if miss == 0:
                    df0 = df0.append(row, ignore_index=True)
                else:
                    df1_ = df1_.append(row, ignore_index=True)
                miss += 1
Now a bit of magic: we need to separate the tuples again and create rows for each person (see the reference at https://stackoverflow.com/questions/53218931/how-to-unnest-explode-a-column-in-a-pandas-dataframe):
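The explode/unnest idea can be illustrated without pandas — a hypothetical miniature where tuple-valued columns are zipped back into one row per player:

```python
# One grouped row whose columns hold per-player tuples, mirroring the grouped dataframe.
rows = [{'round': 1, 'arrival': (8.0, 8.1), 'choice': (1, 0)}]

# "Unnest" by zipping the tuple-valued columns back into one record per player.
exploded = [
    {'round': row['round'], 'arrival': a, 'choice': c}
    for row in rows
    for a, c in zip(row['arrival'], row['choice'])
]
```

The pandas version below does the same thing vectorized, repeating the index by tuple length and concatenating the flattened columns.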
def unnesting(df, explode):
    idx = df.index.repeat(df[explode[0]].str.len())
    df1 = pd.concat([
        pd.DataFrame({x: np.concatenate(df[x].values)}) for x in explode], axis=1)
    df1.index = idx
    return df1.join(df.drop(explode, 1), how='left')

dfnew0 = unnesting(df0, ['code', 'id_in_group', 'arrival', 'choice', 'certainty'])
dfnew1_ = unnesting(df1_, ['code', 'id_in_group', 'arrival', 'choice', 'certainty'])

dfnew0['arrival'].replace({9.0: 8.6, 9.1: 8.7}, inplace=True)
dfnew1_['arrival'].replace({9.0: 8.6, 9.1: 8.7}, inplace=True)

len(dfnew0), len(dfnew1_)

dfnew0.head()

sns.regplot(x='arrival', y='choice', data=dfnew0, ci=95, logistic=True)
sns.despine()

sns.regplot(x='arrival', y='choice', data=dfnew1_, ci=95, logistic=True)
sns.despine()

dfall = pd.concat([dfnew0.join(pd.Series(['zero']*len(dfnew0), name='miss')),
                   dfnew1_.join(pd.Series(['one_or_more']*len(dfnew1_), name='miss'))],
                  ignore_index=True)
dfall.head()

pal = dict(zero="blue", one_or_more="orange")
g = sns.lmplot(x="arrival", y="choice", hue="miss", data=dfall, palette=pal,
               logistic=True, ci=95, n_boot=10000,
               x_estimator=np.mean, x_ci="ci",
               y_jitter=.2, legend=False,
               height=3, aspect=1.5)

#g = sns.FacetGrid(dfall, hue='miscoordinations', palette=pal, height=5);
#g.map(plt.scatter, "arrival", "choice", marker='o', s=50, alpha=.7, linewidth=.5, color='white', edgecolor="white")
#g.map(sns.regplot, "arrival", "choice", scatter_kws={"color": "grey"}, ci=95, n_boot=100, logistic=True, height=3, aspect=1.5)

g.set(xlim=(7.98, 8.72))
g.set(ylabel='frequency of canteen choice')

my_ticks = ["8:00", "8:10", "8:20", "8:30", "8:40", "8:50", "9:00", "9:10"]
g.set(xticks=[8, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7], yticks=[0, .1, .2, .3, .4, .5, .6, .7, .8, .9, 1])
g.set(xticklabels=my_ticks)  # , yticklabels = ["office","","","","",".5","","","","","canteen"]

# make my own legend:
name_to_color = {
    ' 0 (n=2762)': 'blue',
    '>0 (n=1498)': 'orange',
}
patches = [patch.Patch(color=v, label=k) for k, v in name_to_color.items()]
plt.legend(title='# miscoordinations', handles=patches, loc='lower left')

plt.title('MTurk', fontsize=16)
plt.tight_layout()
plt.rcParams["font.family"] = "sans-serif"

PLOTS_DIR = '../plots'
if not os.path.exists(PLOTS_DIR):
    os.makedirs(PLOTS_DIR)

plt.savefig(os.path.join(PLOTS_DIR, 'figS6_logit_2bins_MTurk.png'),
            bbox_inches='tight', transparent=True, dpi=300)
plt.savefig(os.path.join(PLOTS_DIR, 'figS6_logit_2bins_MTurk.pdf'),
            transparent=True, dpi=300)

sns.despine()
COVID-MultiVent

The COVID MultiVent System was developed through a collaboration between Dr. Arun Agarwal, pulmonary critical care fellow at the University of Missouri, and a group of medical students at the University of Pennsylvania with backgrounds in various types of engineering. The system allows for monitoring of pressure and volume delivered to each patient and incorporates valve components that allow for peak inspiratory pressure (PIP), positive end expiratory pressure (PEEP), and inspiratory volume to be modulated independently for each patient. We have created open source assembly resources for this system to expedite its adoption in hospitals around the world with dire need of ventilation resources. The system is assembled with components that can be readily found in most hospitals. If you have questions regarding the system please contact us at **covid.multivent@gmail.com**

Dr. Arun Agarwal
Alexander Lee
Feyisope Eweje
Stephen Landy
David Gordon
Ryan Gallagher
Alfredo Lucas
%%html
<iframe src="https://docs.google.com/presentation/d/e/2PACX-1vT2kAsF54vHJoIti5iQkk8zUTkcQuRBFnbtgAhCoHwxrThIzZ14mCpdgNkWrqS8bRf5xFB2ZoeXlfUk/embed?start=false&loop=false&delayms=3000" frameborder="0" width="480" height="299" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>
_____no_output_____
MIT-0
Notebooks/Index.ipynb
covid-multivent/covid-multivent.github.io
Original Video and Facebook Group

Please see Dr. Agarwal's original video below, showing the proposed setup. A link to the MultiVent Facebook group can be found [here](https://facebook.com/COVIDMULTIVENT/).
%%html
<iframe width="560" height="315" src="https://www.youtube.com/embed/odnhnnlBlpM" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
Assembly Guide and Component List

For detailed instructions on how to build the MultiVent setup please click [here](https://docs.google.com/document/d/11Z7MpeGSC6m3ipsMVLe-Ofkw4HeqCN6x1Zhqy8FMRpc/edit?usp=sharing).

Below, see the proposed parts list with the number of each part required for a single- or 2-patient setup. A list of vendors associated with each part is also included; however, note that almost all of these components will be readily available in many hospitals that already have ventilator setups. Click [here](https://docs.google.com/spreadsheets/d/1XikdFKNdgZAywoPw4oIWqJ9e0-a059ucXh8DSOGl4GA/edit?usp=sharing) for a Google Sheets version of this list.
import pandas as pd

df = pd.read_excel('Website Ventilator splitting components.xlsx')
df
Homework 10: `SQL`

Due Date: Thursday, November 16th at 11:59 PM

You will create a database of the NASA polynomial coefficients for each species. **Please turn in your database with your `Jupyter` notebook!**

Question 1: Convert XML to a SQL database

Create two tables named `LOW` and `HIGH`, each corresponding to data given for the low and high temperature range. Each should have the following column names:

- `SPECIES_NAME`
- `TLOW`
- `THIGH`
- `COEFF_1`
- `COEFF_2`
- `COEFF_3`
- `COEFF_4`
- `COEFF_5`
- `COEFF_6`
- `COEFF_7`

Populate the tables using the XML file you created in the last assignment. If you did not complete the last assignment, you may also use the `example_thermo.xml` file.

`TLOW` should refer to the temperature at the low range and `THIGH` should refer to the temperature at the high range. For example, in the `LOW` table, $H$ would have `TLOW` at $200$ and `THIGH` at $1000$, and in the `HIGH` table, $H$ would have `TLOW` at $1000$ and `THIGH` at $3500$.

For both tables, `COEFF_1` through `COEFF_7` should be populated with the corresponding coefficients for the low temperature data and high temperature data.
import xml.etree.ElementTree as ET
import sqlite3
import pandas as pd
from IPython.core.display import display

pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)

# Create and connect to database
db = sqlite3.connect('NASA_coef.sqlite')
cursor = db.cursor()
cursor.execute("DROP TABLE IF EXISTS LOW")
cursor.execute("DROP TABLE IF EXISTS HIGH")
cursor.execute("PRAGMA foreign_keys=1")

# Create the table for low temperature range
cursor.execute('''CREATE TABLE LOW (
               id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
               SPECIES_NAME TEXT,
               TLOW REAL,
               THIGH REAL,
               COEFF_1 REAL,
               COEFF_2 REAL,
               COEFF_3 REAL,
               COEFF_4 REAL,
               COEFF_5 REAL,
               COEFF_6 REAL,
               COEFF_7 REAL)''')
db.commit()

# Create the table for high temperature range
cursor.execute('''CREATE TABLE HIGH (
               id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
               SPECIES_NAME TEXT,
               TLOW REAL,
               THIGH REAL,
               COEFF_1 REAL,
               COEFF_2 REAL,
               COEFF_3 REAL,
               COEFF_4 REAL,
               COEFF_5 REAL,
               COEFF_6 REAL,
               COEFF_7 REAL)''')
db.commit()

# The given helper function (from L18) to visualize tables
def viz_tables(cols, query):
    q = cursor.execute(query).fetchall()
    framelist = []
    for i, col_name in enumerate(cols):
        framelist.append((col_name, [col[i] for col in q]))
    return pd.DataFrame.from_items(framelist)

tree = ET.parse('thermo.xml')
species_data = tree.find('speciesData')
species_list = species_data.findall('species')  # a list of all species

for species in species_list:
    name = species.get('name')
    NASA_list = species.find('thermo').findall('NASA')
    for NASA in NASA_list:
        T_min = float(NASA.get('Tmin'))
        T_max = float(NASA.get('Tmax'))
        coefs = NASA.find('floatArray').text.split(',')
        vals_to_insert = (name, T_min, T_max,
                          float(coefs[0].strip()), float(coefs[1].strip()),
                          float(coefs[2].strip()), float(coefs[3].strip()),
                          float(coefs[4].strip()), float(coefs[5].strip()),
                          float(coefs[6].strip()))
        if T_max > 1000:  # high range temperature
            cursor.execute('''INSERT INTO HIGH
                           (SPECIES_NAME, TLOW, THIGH, COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7)
                           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)
        else:  # low range temperature
            cursor.execute('''INSERT INTO LOW
                           (SPECIES_NAME, TLOW, THIGH, COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7)
                           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)''', vals_to_insert)

LOW_cols = [col[1] for col in cursor.execute("PRAGMA table_info(LOW)")]
display(viz_tables(LOW_cols, '''SELECT * FROM LOW'''))
HIGH_cols = [col[1] for col in cursor.execute("PRAGMA table_info(HIGH)")]
display(viz_tables(HIGH_cols, '''SELECT * FROM HIGH'''))
_____no_output_____
MIT
Homework/HW10/HW10-final.ipynb
JasmineeeeeTONG/CS207_coursework
Question 2: `WHERE` Statements

1. Write a `Python` function `get_coeffs` that returns an array of 7 coefficients. The function should take in two parameters: 1.) `species_name` and 2.) `temp_range`, an indicator variable ('low' or 'high') to indicate whether the coefficients should come from the low or high temperature range. The function should use `SQL` commands and `WHERE` statements on the table you just created in Question 1 (rather than taking data from the XML directly).

```python
def get_coeffs(species_name, temp_range):
    ''' Fill in here'''
    return coeffs
```

2. Write a python function `get_species` that returns all species that have a temperature range above or below a given value. The function should take in two parameters: 1.) `temp` and 2.) `temp_range`, an indicator variable ('low' or 'high'). When temp_range is 'low', we are looking for species with a temperature range lower than the given temperature, and for a 'high' temp_range, we want species with a temperature range higher than the given temperature. This exercise may be useful if different species have different `LOW` and `HIGH` ranges. And as before, you should accomplish this through `SQL` queries and `WHERE` statements.

```python
def get_species(temp, temp_range):
    ''' Fill in here'''
    return species
```
def get_coeffs(species_name, temp_range):
    query = '''SELECT COEFF_1, COEFF_2, COEFF_3, COEFF_4, COEFF_5, COEFF_6, COEFF_7
               FROM {} WHERE SPECIES_NAME = "{}"'''.format(temp_range.upper(), species_name)
    coeffs = list(cursor.execute(query).fetchall()[0])
    return coeffs

get_coeffs('H', 'low')

def get_species(temp, temp_range):
    if temp_range == 'low':
        query = '''SELECT SPECIES_NAME FROM {} WHERE TLOW < {}'''.format(temp_range.upper(), temp)
    else:  # temp_range == 'high'
        query = '''SELECT SPECIES_NAME FROM {} WHERE THIGH > {}'''.format(temp_range.upper(), temp)
    species = []
    for s in cursor.execute(query).fetchall():
        species.append(s[0])
    return species

get_species(500, 'low')
get_species(100, 'low')
get_species(3000, 'high')
get_species(3500, 'high')
Question 3: `JOIN` Statements

Create a table named `ALL_TEMPS` that has the following columns:

- `SPECIES_NAME`
- `TEMP_LOW`
- `TEMP_HIGH`

This table should be created by joining the tables `LOW` and `HIGH` on the value `SPECIES_NAME`.
# Create the table for ALL_TEMPS
cursor.execute("DROP TABLE IF EXISTS ALL_TEMPS")
cursor.execute('''CREATE TABLE ALL_TEMPS (
               id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL,
               SPECIES_NAME TEXT,
               TEMP_LOW REAL,
               TEMP_HIGH REAL)''')
db.commit()

# Insert TEMP_LOW and TEMP_HIGH of all species to ALL_TEMPS
query = '''SELECT LOW.SPECIES_NAME, LOW.TLOW AS TEMP_LOW, HIGH.THIGH AS TEMP_HIGH
           FROM LOW
           INNER JOIN HIGH ON LOW.SPECIES_NAME = HIGH.SPECIES_NAME'''
for record in cursor.execute(query).fetchall():
    cursor.execute('''INSERT INTO ALL_TEMPS (SPECIES_NAME, TEMP_LOW, TEMP_HIGH)
                   VALUES (?, ?, ?)''', record)

ALL_TEMPS_cols = [col[1] for col in cursor.execute("PRAGMA table_info(ALL_TEMPS)")]
display(viz_tables(ALL_TEMPS_cols, '''SELECT * FROM ALL_TEMPS'''))
1. Write a `Python` function `get_range` that returns the range of temperatures for a given species_name. The range should be computed within the `SQL` query (i.e. you should subtract within the `SELECT` statement in the `SQL` query).

```python
def get_range(species_name):
    '''Fill in here'''
    return range
```

Note that `TEMP_LOW` is the lowest temperature in the `LOW` range and `TEMP_HIGH` is the highest temperature in the `HIGH` range.
def get_range(species_name):
    query = '''SELECT (TEMP_HIGH - TEMP_LOW) AS T_range
               FROM ALL_TEMPS
               WHERE SPECIES_NAME = "{}"'''.format(species_name)
    T_range = cursor.execute(query).fetchall()[0][0]
    return T_range

get_range('O')

# Close the Database
db.commit()
db.close()
Results for BERT when applying the syn transformation to both premise and hypothesis in the MNLI dataset
import pandas as pd
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from IPython.display import display, HTML

from lr.analysis.util import get_ts_from_results_folder
from lr.analysis.util import get_rho_stats_from_result_list
from lr.stats.h_testing import get_ks_stats_from_p_values_compared_to_uniform_dist
_____no_output_____
MIT
src/drafts/check_results_bert_base_MNLI.ipynb
lukewanless/looking-for-equivalences
Get Results
all_accs = []
all_transformed_accs = []
all_paired_t_p_values = []
all_dev_plus_diff = []
all_time = []

m_name = "bert_base"
folder = "mnli"
test_repetitions = 5
batchs = range(1, test_repetitions + 1)

for i in tqdm(batchs):
    test_accuracy = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
                                               stat="test_accuracy")
    transformed_test_accuracy = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
                                                           stat="transformed_test_accuracy")
    paired_t_p_value = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
                                                  stat="paired_t_p_value")
    diff = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
                                      stat="dev_plus_accuracy_difference")
    t_time = get_ts_from_results_folder(path="results/{}/{}/syn_p_h/batch{}/".format(folder, m_name, i),
                                        stat="test_time")
    all_accs.append(test_accuracy)
    all_transformed_accs.append(transformed_test_accuracy)
    all_paired_t_p_values.append(paired_t_p_value)
    all_dev_plus_diff.append(diff)
    all_time.append(t_time)

total_time = pd.concat(all_time, 1).sum().sum()
n_params = 109484547
print("Time for all experiments = {:.1f} hours".format(total_time))
print("Number of paramaters for BERT = {}".format(n_params))
Time for all experiments = 204.6 hours Number of paramaters for BERT = 109484547
Accuracy
rhos, mean_acc, error_acc, _ = get_rho_stats_from_result_list(all_accs)
_, mean_acc_t, error_acc_t, _ = get_rho_stats_from_result_list(all_transformed_accs)

fig, ax = plt.subplots(figsize=(12, 6))
ax.errorbar(rhos, mean_acc, yerr=error_acc, fmt='-o', label="original test data");
ax.errorbar(rhos, mean_acc_t, yerr=error_acc_t, fmt='-o', label="transformed test data");
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("accuracy", fontsize=14);
ax.set_title("BERT accuracy\n\ndataset: MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions));
fig.tight_layout()
fig.savefig('figs/bert_base_acc_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
P-values
rhos, mean_p_values, error_p_values, min_p_values = get_rho_stats_from_result_list(all_paired_t_p_values)

alpha = 0.05
alpha_adj = alpha / test_repetitions

rejected_ids = []
remain_ids = []
for i, p in enumerate(min_p_values):
    if p < alpha_adj:
        rejected_ids.append(i)
    else:
        remain_ids.append(i)

rhos_rejected = rhos[rejected_ids]
rhos_remain = rhos[remain_ids]
y_rejected = mean_p_values[rejected_ids]
y_remain = mean_p_values[remain_ids]
error_rejected = error_p_values[rejected_ids]
error_remain = error_p_values[remain_ids]

title_msg = "BERT p-values\n\ndataset: "
title_msg += "MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions)
title_msg += "significance level = {:.1%} \n".format(alpha)
title_msg += "adjusted significance level = {:.2%} \n".format(alpha_adj)

fig, ax = plt.subplots(figsize=(12, 6))
ax.errorbar(rhos_rejected, y_rejected, yerr=error_rejected, fmt='o', linewidth=0.50,
            label="at least one p-value is smaller than {:.2%}".format(alpha_adj));
ax.errorbar(rhos_remain, y_remain, yerr=error_remain, fmt='o', linewidth=0.50,
            label="all p-values are greater than {:.2%}".format(alpha_adj));
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("p-value", fontsize=14);
ax.set_title(title_msg);
fig.tight_layout()
fig.savefig('figs/bert_p_values_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
_____no_output_____
MIT
src/drafts/check_results_bert_base_MNLI.ipynb
lukewanless/looking-for-equivalences
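The multiple-testing adjustment used above — dividing the significance level by the number of test repetitions — is a Bonferroni correction. A minimal sketch with made-up p-values (the values below are hypothetical, not from the experiment):

```python
# Bonferroni correction: to keep the family-wise error rate at alpha across
# n_tests repeated tests, compare each p-value against alpha / n_tests.
alpha = 0.05
n_tests = 10
alpha_adj = alpha / n_tests

p_values = [0.001, 0.004, 0.02, 0.3]  # hypothetical per-repetition p-values
rejected = [p for p in p_values if p < alpha_adj]
print(alpha_adj, rejected)
```

With ten repetitions each individual test must clear 0.005, so only the first two hypothetical p-values lead to rejection.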
Accuracy difference
rhos, diff, _, _ = get_rho_stats_from_result_list(all_dev_plus_diff)
_, test_acc, _, _ = get_rho_stats_from_result_list(all_accs)
_, test_acc_t, _, _ = get_rho_stats_from_result_list(all_transformed_accs)
test_diff = np.abs(test_acc - test_acc_t)

fig, ax = plt.subplots(figsize=(12, 6))
ax.errorbar(rhos, diff, fmt='-o', label="validation");
ax.errorbar(rhos, test_diff, fmt='-o', label="test");
ax.legend(loc="best");
ax.set_xlabel(r"$\rho$", fontsize=14);
ax.set_ylabel("average accuracy difference", fontsize=14);
ax.set_title("BERT accuracy difference\n\ndataset: MNLI\ntransformation: synonym substitution\ntest repetitions: {}\n".format(test_repetitions));
fig.tight_layout()
fig.savefig('figs/bert_acc_diff_mnli_syn_p_h.png', bbox_inches=None, pad_inches=0.5)
_____no_output_____
MIT
src/drafts/check_results_bert_base_MNLI.ipynb
lukewanless/looking-for-equivalences
Selecting the best $\rho$
id_min = np.argmin(diff)
min_rho = rhos[id_min]
min_rho_test_acc = test_acc[id_min]
min_rho_transformed_test_acc = test_acc_t[id_min]
test_accuracy_loss_pct = np.round(((min_rho_test_acc - test_acc[0]) / test_acc[0]) * 100, 1)

analysis = {"dataset": "mnli",
            "model": "BERT",
            "rho": min_rho,
            "test_accuracy_loss_pct": test_accuracy_loss_pct,
            "average_test_accuracy": min_rho_test_acc,
            "average_transformed_test_accuracy": min_rho_transformed_test_acc,
            "combined_accuracy": np.mean([min_rho_test_acc, min_rho_transformed_test_acc])}

analysis = pd.DataFrame(analysis, index=[0])
analysis
_____no_output_____
MIT
src/drafts/check_results_bert_base_MNLI.ipynb
lukewanless/looking-for-equivalences
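The selection above picks the $\rho$ whose validation accuracy difference is smallest via `np.argmin`. A toy sketch with hypothetical arrays (not the experiment's numbers):

```python
import numpy as np

rhos = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
diff = np.array([0.08, 0.05, 0.01, 0.03, 0.06])  # hypothetical validation gaps

id_min = np.argmin(diff)  # index of the smallest gap
min_rho = rhos[id_min]    # the corresponding rho
print(min_rho)            # -> 0.5
```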
[View in Colaboratory](https://colab.research.google.com/github/tompollard/buenosaires2018/blob/master/1_intro_to_python.ipynb) Programming in Python In this part of the workshop, we will introduce some basic programming concepts in Python. We will then explore how these concepts allow us to carry out an analysis that can be reproduced. Working with variables You can get output from Python by typing math into a code cell. Try executing a sum below (for example: 3 + 5).
3+5
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
However, to do anything useful, we will need to assign values to `variables`. Assign a height in cm to a variable in the cell below.
height_cm = 180
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
Now the value has been assigned to our variable, we can print it in the console with `print`.
print('Height in cm is:', height_cm)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
We can also do arithmetic with the variable. Convert the height in cm to metres, then print the new value as before (Warning! In Python 2, dividing an integer by an integer will return an integer.)
height_m = height_cm / 100
print('height in metres:', height_m)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
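The warning above concerns integer division in Python 2. In Python 3, `/` always returns a float, while `//` performs floor division; a quick check:

```python
height_cm = 180
print(height_cm / 100)   # true division -> 1.8 (float)
print(height_cm // 100)  # floor division -> 1 (int)
```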
We can check which variables are available in memory with the special command: `%whos`
%whos
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
We can see that each of our variables has a type (in this case `int` and `float`), describing the type of data held by the variable. We can use `type` to check the data type of a variable.
type(height_cm)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
Another data type is a `list`, which can hold a series of items. For example, we might measure a patient's heart rate several times over a period.
heartrate = [66, 64, 63, 62, 66, 69, 70, 75, 76]
print(heartrate)
type(heartrate)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
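Beyond storing values, a list can be grown and summarised with a few built-in operations not shown above; a short sketch:

```python
heartrate = [66, 64, 63, 62, 66, 69, 70, 75, 76]

heartrate.append(72)      # add a new measurement to the end
n = len(heartrate)        # number of measurements
avg = sum(heartrate) / n  # mean heart rate
print(n, avg)
```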
Repeating actions in loops We can access individual items in a list using an index (note, in Python, indexing begins with 0!). For example, let's view the first `[0]` and second `[1]` heart rate measurements.
print(heartrate[0])
print(heartrate[1])
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
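Indexing also works from the end of the list with negative numbers, and a range of items can be taken with a slice; a short sketch:

```python
heartrate = [66, 64, 63, 62, 66, 69, 70, 75, 76]

print(heartrate[-1])   # last measurement -> 76
print(heartrate[0:3])  # first three measurements -> [66, 64, 63]
```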
We can iterate through a list with the help of a `for` loop. Let's try looping over our list of heart rates, printing each item as we go.
for i in heartrate:
    print('the heart rate is:', i)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
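When the position of each item is needed as well as its value, `enumerate` pairs each item with its index; a sketch using a shortened list:

```python
heartrate = [66, 64, 63]
readings = []
for i, hr in enumerate(heartrate):
    readings.append((i, hr))  # (measurement number, value)
print(readings)
```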
Making choices Sometimes we want to take different actions depending on a set of conditions. We can do this using an `if/else` statement. Let's write a statement to test if a mean arterial pressure (`meanpressure`) is high or low.
meanpressure = 70

if meanpressure < 60:
    print('Low pressure')
elif meanpressure > 100:
    print('High pressure')
else:
    print('Normal pressure')
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
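The same if/elif/else logic can be wrapped up so it can be reused for any pressure value (this uses a function, which is introduced in the next section); a sketch:

```python
def pressure_category(meanpressure):
    # classify a mean arterial pressure into low/high/normal
    if meanpressure < 60:
        return 'Low pressure'
    elif meanpressure > 100:
        return 'High pressure'
    else:
        return 'Normal pressure'

print(pressure_category(70))  # -> Normal pressure
```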
Writing our own functions To help organise our code and to avoid replicating the same code again and again, we can create functions. Let's create a function to convert temperature in fahrenheit to celsius, using the following formula:`celsius = (fahrenheit - 32) * 5/9`
def fahr_to_celsius(temp):
    celsius = (temp - 32) * 5/9
    return celsius
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
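The conversion can be checked by writing the inverse function and round-tripping a value, which should recover the input; a sketch (`celsius_to_fahr` is an illustrative helper, not part of the workshop code):

```python
def fahr_to_celsius(temp):
    celsius = (temp - 32) * 5/9
    return celsius

def celsius_to_fahr(temp):
    # inverse of fahr_to_celsius
    return temp * 9/5 + 32

# round-trip: converting there and back should recover 98.6 (up to float rounding)
print(celsius_to_fahr(fahr_to_celsius(98.6)))
```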
Now we can call the function `fahr_to_celsius` to convert a temperature from celsius to fahrenheit.
body_temp_f = 98.6
body_temp_c = fahr_to_celsius(body_temp_f)
print('Patient body temperature is:', body_temp_c, 'celsius')
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
Reusing code with libraries Python is a popular language for data analysis, so thankfully we can benefit from the hard work of others with the use of libraries. Pandas, a popular library for data analysis, introduces the `DataFrame`, a convenient data structure similar to a spreadsheet. Before using a library, we will need to import it.
# let's assign pandas an alias, pd, for brevity
import pandas as pd
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
We have shared a demo dataset online containing physiological data relating to 1000 patients admitted to an intensive care unit in Boston, Massachusetts, USA. Let's load this data into our new data structure.
url = "https://raw.githubusercontent.com/tompollard/tableone/master/data/pn2012_demo.csv"
data = pd.read_csv(url)
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
The variable `data` should now contain our new dataset. Let's view the first few rows using `head()`. Note: parentheses `"()"` are generally required when we are performing an action/operation. In this case, the action is to select a limited number of rows.
data.head()
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
We can perform other operations on the dataframe. For example, using `mean()` to get an average of the columns.
data.mean()
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
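The same operations work on a DataFrame built by hand, which is handy when the remote CSV isn't available; a small sketch with made-up values (not the demo dataset):

```python
import pandas as pd

# hypothetical mini-dataset with the same flavour as the demo data
data = pd.DataFrame({'Age': [60, 70, 80],
                     'Height': [160, 170, 180]})

print(data.head())   # first rows of the table
print(data.mean())   # mean of each column
```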
If we are unsure of the meaning of a method, we can check by adding `?` after the method. For example, what is `max`?
data.max?
data.max()
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
We can access a single column in the data by specifying the column name after the variable. For example, we can select a list of ages with `data.Age`, and then find the mean for this column in a similar way to before.
print('The mean age of patients is:', data.Age.mean())
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
Pandas also provides a convenient method `plot` for plotting data. Let's plot a distribution of the patient ages in our dataset.
data.Age.plot(kind='kde', title='Age of patients in years')
_____no_output_____
MIT
1_intro_to_python.ipynb
stickbass/buenosaires2018
import numpy as np
import pandas as pd

!pip install rasterio
!pip install xgboost
import rasterio
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix

from google.colab import drive
drive.mount('/content/gdrive')

BASE = '/content/gdrive/My Drive/Urban_monitoring/Urban_/'
FEATURES = ['rad_VIIRS', 'SAVI_landsat7', 'MNDWI_landsat7',
            'NDVI_landsat7', 'NDBI_landsat7', 'LSE_der_landsat7']

def load_dataset_bands(path):
    """Read every raster band into one flat DataFrame and derive target/features."""
    dataset = rasterio.open(path)
    dataset_bands = pd.DataFrame()
    for i in dataset.indexes:
        temp = pd.DataFrame(data=dataset.read(i).flatten()).rename(columns={0: i})
        dataset_bands = temp.join(dataset_bands)
    dataset_bands.rename(columns={1: 'BU_class', 2: 'NDVI_landsat7', 3: 'NDBI_landsat7',
                                  4: 'MNDWI_landsat7', 5: 'SAVI_landsat7',
                                  6: 'LSE_landsat7', 7: 'rad_VIIRS'}, inplace=True)
    # urban (built-up) pixel if the GHSL class is 3 or above
    dataset_bands['U_class'] = dataset_bands.BU_class.apply(lambda y: 1 if y >= 3 else 0)
    # land-surface emissivity derived from NDVI (piecewise formula)
    dataset_bands['LSE_der_landsat7'] = dataset_bands.NDVI_landsat7.apply(
        lambda y: 0.995 if y < -0.185 else (
            0.970 if y < 0.157 else (
                1.0098 + 0.047 * np.log(y) if y < 0.727 else 0.990)))
    return dataset_bands

def balanced_sample(dataset_bands, n=10000):
    """Draw n urban and n non-urban pixels for a balanced training set."""
    return pd.concat([dataset_bands.query('U_class==0').sample(n),
                      dataset_bands.query('U_class==1').sample(n)]).reset_index()

def write_classification(template_path, out_path, pred):
    """Write a one-band uint8 classification raster using the template's profile."""
    temp_ = rasterio.open(template_path)
    temp = temp_.read(1)
    array_class = pred.reshape(temp.shape[0], temp.shape[1]).astype(np.uint8)
    profile = temp_.profile
    profile.update(dtype=array_class.dtype, count=1, compress='lzw')
    with rasterio.open(out_path, 'w', **profile) as dst2:
        dst2.write(array_class, 1)

confusion_matrix_list = []
classification_report_list = []

# XGBoost classifier per metropolitan area. Note: fit and predict now use the
# same FEATURES column order (the original swapped NDVI/NDBI between the
# training frame and the prediction frame, which silently scrambles features
# once .values strips the column names).
for lokasi in ['jabodetabek', 'mebidangro', 'maminasata', 'kedungsepur']:
    print(lokasi)
    train_path = BASE + 'compiled_GHSL_30_train_2014_' + lokasi + '.tif'
    dataset_bands = load_dataset_bands(train_path)
    dataset = balanced_sample(dataset_bands)
    X_train, X_test, y_train, y_test = train_test_split(
        dataset[FEATURES], dataset[['U_class']], test_size=0.20)
    xgclassifier = XGBClassifier(random_state=123, colsample_bytree=0.7,
                                 learning_rate=0.05, max_depth=4, min_child_weight=11,
                                 n_estimators=500, nthread=4, objective='binary:logistic',
                                 seed=123, subsample=0.8)
    xgclassifier.fit(X_train.values, y_train.values)
    classification_report_list.append(
        classification_report(y_test, xgclassifier.predict(X_test.values)))
    # classify the full raster, score it against the GHSL labels, and save it
    class_2014_ = xgclassifier.predict(dataset_bands[FEATURES].values)
    confusion_matrix_list.append(confusion_matrix(dataset_bands[['U_class']], class_2014_))
    write_classification(train_path,
                         BASE + 'GHSL_/GHSL_rev/rev_class_2014_' + lokasi + '.tif',
                         class_2014_)

confusion_matrix_list[3]

# gerbangkertasusila: same pipeline, XGBoost with max_depth=5
lokasi = 'gerbangkertasusila'
print(lokasi)
train_path = BASE + 'compiled_GHSL_30_train_2014_' + lokasi + '.tif'
dataset_bands = load_dataset_bands(train_path)
dataset = balanced_sample(dataset_bands)
X_train, X_test, y_train, y_test = train_test_split(
    dataset[FEATURES], dataset[['U_class']], test_size=0.20)
xgclassifier = XGBClassifier(random_state=123, colsample_bytree=0.7,
                             learning_rate=0.05, max_depth=5, min_child_weight=11,
                             n_estimators=500, nthread=4, objective='binary:logistic',
                             seed=123, subsample=0.8)
xgclassifier.fit(X_train.values, y_train.values)
y_pred = xgclassifier.predict(dataset_bands[FEATURES].values)
confusion_matrix_list.append(confusion_matrix(dataset_bands[['U_class']], y_pred))

confusion_matrix_list[4]

# bandungraya: RandomForest instead of XGBoost
lokasi = 'bandungraya'
print(lokasi)
train_path = BASE + 'compiled_GHSL_30_train_2014_' + lokasi + '.tif'
dataset_bands = load_dataset_bands(train_path)
dataset = balanced_sample(dataset_bands)
X_train, X_test, y_train, y_test = train_test_split(
    dataset[FEATURES], dataset[['U_class']], test_size=0.20)
rfclassifier = RandomForestClassifier(random_state=123, max_depth=6,
                                      n_estimators=500, criterion='entropy')
rfclassifier.fit(X_train.values, y_train.values)
y_pred = rfclassifier.predict(dataset_bands[FEATURES].values)
confusion_matrix_list.append(confusion_matrix(dataset_bands['U_class'], y_pred))

confusion_matrix_list[5]

# The commented-out blocks in the original repeated this same pipeline for the
# 2015 and 2019 rasters (compiled_GHSL_30_train_2015_/2019_<lokasi>.tif),
# writing the corresponding rev_class_2015_/rev_class_2019_<lokasi>.tif
# outputs via the same steps as write_classification() above.
_____no_output_____
MIT
01_ICST2020/03_BUildup_transformation.ipynb
achmfirmansyah/sweet_project
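The piecewise NDVI-to-emissivity mapping repeated in the commented-out cells above can be factored into a named function. This is only a sketch: the thresholds are copied verbatim from the lambdas above, and the function name `lse_from_ndvi` is ours.

```python
import math

def lse_from_ndvi(ndvi):
    """Piecewise land-surface-emissivity estimate from NDVI,
    mirroring the lambda used in the commented-out cells above."""
    if ndvi < -0.185:
        return 0.995
    if ndvi < 0.157:
        return 0.970
    if ndvi < 0.727:
        return 1.0098 + 0.047 * math.log(ndvi)
    return 0.990
```

Applied with `dataset_bands['LSE_der_landsat7'] = dataset_bands.NDVI_landsat7.apply(lse_from_ndvi)`, this keeps the thresholds in one place instead of duplicating the lambda in every cell.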
Sign in to Apple Music and navigate to the required playlist, then locate the songs by their class name.
import json  # needed for the dump below; the selenium `driver` is assumed to be set up earlier

songs_wrap = driver.find_elements_by_class_name("song-name")
songs = [x.text for x in songs_wrap]  # was `song_wrap`, a NameError

song_subdiv = driver.find_elements_by_class_name("song-name-wrapper")
artists_wrap = [x.find_element_by_class_name("dt-link-to") for x in song_subdiv]
artists = [x.text for x in artists_wrap]

len(songs)
len(artists)

'''
Combine the song and artist lists into a single list of dictionaries.
'''
l = []
for x in range(100):  # was range(0, 99), which dropped the last song
    l.append({"artist": artists[x], "song": songs[x]})
l

# Write the list out as a JSON file
with open("Love100++.json", "w") as out_file:
    json.dump(l, out_file)
_____no_output_____
MIT
music-app/Data_Scraping/applescraper.ipynb
DL-ify/Music-app
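The index loop above can be written more compactly with `zip`, which pairs the two lists directly and avoids hard-coding the playlist length. A sketch with placeholder data (the real `songs` and `artists` lists come from the scraper):

```python
# Placeholder data standing in for the scraped lists.
songs = ["Song A", "Song B"]
artists = ["Artist A", "Artist B"]

# zip pairs items positionally and stops at the shorter list,
# so mismatched lengths cannot raise an IndexError.
tracks = [{"artist": artist, "song": song} for artist, song in zip(artists, songs)]
```

`tracks` then has exactly the shape the JSON dump above expects.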
Copyright 2021 Google LLC

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Quantum Neural Networks

This notebook provides an introduction to Quantum Neural Networks (QNNs) using Cirq. The presentation mostly follows [Farhi and Neven](https://arxiv.org/abs/1802.06002). We will construct a simple network for classification to demonstrate its utility on some randomly generated toy data.

First we need to install Cirq, which has to be done each time this notebook is run. Executing the following cell will do that.
# install published dev version # !pip install cirq~=0.4.0.dev # install directly from HEAD: !pip install git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0
Collecting git+https://github.com/quantumlib/Cirq.git@8c59dd97f8880ac5a70c39affa64d5024a2364d0
  Cloning https://github.com/quantumlib/Cirq.git (to revision 8c59dd97f8880ac5a70c39affa64d5024a2364d0) to /tmp/pip-req-build-p85k13_x
Collecting matplotlib~=2.2 (from cirq==0.4.0.dev42)
Collecting sortedcontainers~=2.0 (from cirq==0.4.0.dev42)
Collecting typing_extensions (from cirq==0.4.0.dev42)
Collecting kiwisolver>=1.0.1 (from matplotlib~=2.2->cirq==0.4.0.dev42)
Requirement already satisfied: google-api-python-client~=1.6, networkx~=2.1, numpy~=1.12, protobuf~=3.5, requests~=2.18, scipy (and their dependencies)
Building wheels for collected packages: cirq
  Running setup.py bdist_wheel for cirq ... done
Successfully built cirq
Installing collected packages: kiwisolver, matplotlib, sortedcontainers, typing-extensions, cirq
  Found existing installation: matplotlib 2.1.2
    Uninstalling matplotlib-2.1.2:
      Successfully uninstalled matplotlib-2.1.2
Successfully installed cirq-0.4.0.dev42 kiwisolver-1.0.1 matplotlib-2.2.3 sortedcontainers-2.0.5 typing-extensions-3.6.6
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
To verify that Cirq is installed in your environment, try to `import cirq` and print out a diagram of the Foxtail device. It should produce a 2x11 grid of qubits.
import cirq import numpy as np import matplotlib.pyplot as plt print(cirq.google.Foxtail)
(0, 0)───(0, 1)───(0, 2)───(0, 3)───(0, 4)───(0, 5)───(0, 6)───(0, 7)───(0, 8)───(0, 9)───(0, 10) β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ β”‚ (1, 0)───(1, 1)───(1, 2)───(1, 3)───(1, 4)───(1, 5)───(1, 6)───(1, 7)───(1, 8)───(1, 9)───(1, 10)
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
The QNN Idea

We'll begin by describing here the QNN model we are pursuing. We'll discuss the quantum circuit describing a very simple neuron, and how it can be trained. As in an ordinary neural network, a QNN takes in data, processes that data, and then returns an answer. In the quantum case, the data will be encoded into the initial quantum state, and the processing step is the action of a quantum circuit on that quantum state. At the end we will measure one or more of the qubits, and the statistics of those measurements are the output of the net.

Classical vs Quantum

An ordinary neural network can only handle classical input. The input to a QNN, though, is a quantum state, which consists of $2^n$ complex amplitudes for $n$ qubits. If you attached your quantum computer directly to some physics experiment, for example, then you could have a QNN do some post-processing on the experimental wavefunction in lieu of a more traditional measurement. There are some very exciting possibilities there, but unfortunately we will not be considering them in this Colab. It requires significantly more quantum background to understand what's going on, and it's harder to give examples because the input states themselves can be quite complicated. For recent examples of that kind of network, though, check out [this](https://arxiv.org/abs/1805.08654) paper and [this](https://arxiv.org/abs/1810.03787) paper. The basic ingredients are similar to what we'll cover here.

In this Colab we'll focus on classical inputs, by which I mean the specification of one of the computational basis states as the initial state. There are a total of $2^n$ of these states for $n$ qubits. Note the crucial difference between this case and the quantum case: in the quantum case the input is $2^n$-*dimensional*, while in the classical case there are $2^n$ *possible* inputs.
The quantum neural network can process these inputs in a "quantum" way, meaning that it may be able to evaluate certain functions on these inputs more efficiently than a classical network. Whether the "quantum" processing is actually useful in practice remains to be seen, and in this Colab we will not have time to really get into that aspect of a QNN.

Data Processing

Given the classical input state, what will we do with it? At this stage it's helpful to be more specific and definite about the problem we are trying to solve. The problem we're going to focus on in this Colab is __two-category classification__. That means that after the quantum circuit has finished running, we measure one of the qubits, the *readout* qubit, and the value of that qubit will tell us which of the two categories our classical input state belonged to. Since this is quantum, the output of that qubit is going to be random according to some probability distribution. So really we're going to repeat the computation many times and take a majority vote.

Our classical input data is a bitstring that is converted into a computational basis state. We want to influence the readout qubit in a way that depends on this state. Our main tool for this is a gate we call the $ZX$-gate, which acts on two qubits as
$$\exp(i \pi w Z \otimes X) = \begin{bmatrix}\cos \pi w & i\sin\pi w &0&0\\i\sin\pi w & \cos \pi w &0&0\\0&0& \cos \pi w & -i\sin\pi w \\0&0 & -i\sin\pi w & \cos\pi w \end{bmatrix},$$
where $w$ is a free parameter ($w$ stands for weight). This gate rotates the second qubit around the $X$-axis (on the Bloch sphere) either clockwise or counterclockwise depending on the state of the first qubit as seen in the computational basis. The amount of the rotation is determined by $w$. If we connect each of our input qubits to the readout qubit using one of these gates, then the result is that the readout qubit will be rotated in a way that depends on the initial state in a straightforward way.
This rotation is in the $YZ$ plane, so it will change the statistics of measurements in either the $Z$ basis or the $Y$ basis for the readout qubit. We're going to choose the initial state of the readout qubit to be a standard computational basis state as usual, which is a $Z$ eigenstate but "neutral" with respect to $Y$ (i.e., a 50/50 probability of $Y=+1$ or $Y=-1$). Then after all of the rotations are complete we'll measure the readout qubit in the $Y$ basis. If all goes well, then the net rotation induced by the $ZX$ gates will place the readout qubit near one of the two $Y$ eigenstates in a way that depends on the initial data.

To summarize, here is our strategy for processing the two-category classification problem:

1) Prepare a computational basis state corresponding to the input that should be categorized.

2) Use $ZX$ gates to rotate the state of the readout qubit in a way that depends on the input.

3) Measure the readout qubit in the $Y$ basis to get the predicted label. Take a majority vote after many repetitions.

This is the simplest possible kind of network, and really only corresponds to a single neuron. We'll talk about more complicated possibilities after understanding how to implement this one.

Custom Two-Qubit Gate

Our first task is to code up the $ZX$ gate described above, which is given by the matrix
$$\exp(i \pi w Z \otimes X) = \begin{bmatrix}\cos \pi w & i\sin\pi w &0&0\\i\sin\pi w & \cos \pi w &0&0\\0&0& \cos \pi w & -i\sin\pi w \\0&0 & -i\sin\pi w & \cos\pi w \end{bmatrix}.$$
Just from the form of the gate we can see that it performs a rotation by angle $\pm \pi w$ on the second qubit depending on the value of the first qubit. If we only had one or the other of these two blocks, then this gate would literally be a controlled rotation.
For example, using the Cirq conventions,
$$CR_X(\theta) = \begin{bmatrix}1 & 0 &0&0\\0 & 1 &0&0\\0&0& \cos \theta/2 & -i\sin \theta/2 \\0&0 & -i\sin\theta/2 & \cos\theta/2 \end{bmatrix},$$
which means that setting $\theta = 2\pi w$ should give us (part of) our desired transformation.
a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = .25 # Put your own weight here. angle = 2*np.pi*w circuit = cirq.Circuit.from_ops(cirq.ControlledGate(cirq.Rx(angle)).on(a,b)) print(circuit) circuit.to_unitary_matrix().round(2)
a: ───@────────── β”‚ b: ───Rx(0.5Ο€)───
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
__Question__: The rotation in the upper-left block is by the opposite angle. But how do we get the rotation to happen in the upper-left block of the $4\times 4$ matrix in the first place? What is the circuit?

Solution

Switching the upper-left and lower-right blocks of a controlled gate corresponds to activating when the control qubit is in the $|0\rangle$ state instead of the $|1\rangle$ state. We can arrange for this to happen by taking the controlled gate we already have and conjugating the control qubit by $X$ gates (which implement the NOT operation). Don't forget to also rotate by the opposite angle.
a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = 0.25 # Put your own weight here. angle = 2*np.pi*w circuit = cirq.Circuit.from_ops([cirq.X(a), cirq.ControlledGate(cirq.Rx(-angle)).on(a,b), cirq.X(a)]) print(circuit) circuit.to_unitary_matrix().round(2)
a: ───X───@───────────X─── β”‚ b: ───────Rx(-0.5Ο€)───────
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
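We can check this claim numerically (assuming numpy is available): conjugating the control qubit by $X$ moves the rotation block of the controlled-$R_X$ matrix from the lower-right to the upper-left.

```python
import numpy as np

theta = 0.5 * np.pi  # matches the Rx(0.5*pi) angle used in the cells above
c, s = np.cos(theta / 2), np.sin(theta / 2)

# Controlled-Rx in the Cirq convention: the rotation sits in the lower-right block.
CRx = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, c, -1j * s],
                [0, 0, -1j * s, c]])

X = np.array([[0, 1], [1, 0]])
XI = np.kron(X, np.eye(2))  # X on the control qubit, identity on the target

flipped = XI @ CRx @ XI  # conjugation by X on the control
# The rotation block now sits in the upper-left corner instead:
expected = np.array([[c, -1j * s, 0, 0],
                     [-1j * s, c, 0, 0],
                     [0, 0, 1, 0],
                     [0, 0, 0, 1]])
```

This is exactly the `X(a), controlled-Rx, X(a)` sandwich from the solution circuit, written as matrix algebra.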
The Full $ZX$ Gate

We can put together the two controlled rotations to make the full $ZX$ gate. Having discussed the decomposition already, we can make our own class and specify its action using the `_decompose_` method. Fill in the following code block to implement this gate.
class ZXGate(cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight def _decompose_(self, qubits): a, b = qubits ## YOUR CODE HERE # This lets the weight be a Symbol. Useful for parameterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight)
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
Solution
class ZXGate(cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight def _decompose_(self, qubits): a, b = qubits yield cirq.ControlledGate(cirq.Rx(2*np.pi*self.weight)).on(a,b) yield cirq.X(a) yield cirq.ControlledGate(cirq.Rx(-2*np.pi*self.weight)).on(a,b) yield cirq.X(a) # This lets the weight be a Symbol. Useful for paramterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight)
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
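As a sanity check on the target matrix, note that $(Z \otimes X)^2 = I$, so the exponential collapses to $\exp(i\pi w\, Z\otimes X) = \cos(\pi w)\, I + i \sin(\pi w)\, Z\otimes X$. A short numerical verification (assuming numpy is available):

```python
import numpy as np

w = 0.15
Z = np.array([[1, 0], [0, -1]])
X = np.array([[0, 1], [1, 0]])
ZX = np.kron(Z, X)

# (Z ⊗ X)^2 = I, so exp(i*pi*w * Z⊗X) = cos(pi*w)*I + i*sin(pi*w)*(Z⊗X).
M = np.cos(np.pi * w) * np.eye(4) + 1j * np.sin(np.pi * w) * ZX

# The block matrix from the text, for comparison.
c, s = np.cos(np.pi * w), np.sin(np.pi * w)
target = np.array([[c, 1j * s, 0, 0],
                   [1j * s, c, 0, 0],
                   [0, 0, c, -1j * s],
                   [0, 0, -1j * s, c]])
```

The same identity explains why the decomposition above works: each controlled rotation supplies one of the two diagonal blocks.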
EigenGate Implementation

Another way to specify how a gate works is by an explicit eigen-action. In our case that is also easy, since we know that the gate acts as a phase (the eigenvalue) when the first qubit is in a $Z$ eigenstate (i.e., a computational basis state) and the second qubit is in an $X$ eigenstate.

The way we specify eigen-actions in Cirq is through the `_eigen_components` method, where we need to specify the eigenvalue as a phase together with a projector onto the eigenspace of that phase. We also conventionally specify the gate at $w=1$ and set $w$ internally to be the `exponent` of the gate, which automatically implements other values of $w$ for us.

In the case of the $ZX$ gate with $w=1$, one of our eigenvalues is $\exp(+i\pi)$, which is specified as $1$ in Cirq (because $1$ is the coefficient of $i\pi$ in the exponential). This is the phase when the first qubit is in the $Z=+1$ state and the second qubit is in the $X=+1$ state, or when the first qubit is in the $Z=-1$ state and the second qubit is in the $X=-1$ state. The projector onto these states is
$$\begin{align}P &= |0+\rangle \langle 0{+}| + |1-\rangle \langle 1{-}|\\&= \frac{1}{2}\big(|00\rangle \langle 00| +|00\rangle \langle 01|+|01\rangle \langle 00|+|01\rangle \langle 01|+ |10\rangle \langle 10|-|10\rangle \langle 11|-|11\rangle \langle 10|+|11\rangle \langle 11|\big)\\&=\frac{1}{2}\begin{bmatrix}1 & 1 &0&0\\1 & 1 &0&0\\0&0& 1 & -1 \\0&0 & -1 & 1\end{bmatrix}\end{align}$$
A similar formula holds for the eigenvalue $\exp(-i\pi)$ with the two blocks in the projector flipped.

__Exercise__: Implement the $ZX$ gate as an `EigenGate` using this decomposition. The following code block will get you started.
class ZXGate(cirq.ops.eigen_gate.EigenGate, cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight super().__init__(exponent=weight) # Automatically handles weights other than 1 def _eigen_components(self): return [ (1, np.array([[0.5, 0.5, 0, 0], [ 0.5, 0.5, 0, 0], [0, 0, 0.5, -0.5], [0, 0, -0.5, 0.5]])), (??, ??) # YOUR CODE HERE: phase and projector for the other eigenvalue ] # This lets the weight be a Symbol. Useful for parameterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight)
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
Solution
class ZXGate(cirq.ops.eigen_gate.EigenGate, cirq.ops.gate_features.TwoQubitGate): """ZXGate with variable weight.""" def __init__(self, weight=1): """Initializes the ZX Gate up to phase. Args: weight: rotation angle, period 2 """ self.weight = weight super().__init__(exponent=weight) # Automatically handles weights other than 1 def _eigen_components(self): return [ (1, np.array([[0.5, 0.5, 0, 0], [ 0.5, 0.5, 0, 0], [0, 0, 0.5, -0.5], [0, 0, -0.5, 0.5]])), (-1, np.array([[0.5, -0.5, 0, 0], [ -0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5], [0, 0, 0.5, 0.5]])) ] # This lets the weight be a Symbol. Useful for parameterization. def _resolve_parameters_(self, param_resolver): return ZXGate(weight=param_resolver.value_of(self.weight)) # How should the gate look in ASCII diagrams? def _circuit_diagram_info_(self, args): return cirq.protocols.CircuitDiagramInfo( wire_symbols=('Z', 'X'), exponent=self.weight)
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
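The projector in the eigen-decomposition can also be built directly from the outer products $|0{+}\rangle\langle 0{+}| + |1{-}\rangle\langle 1{-}|$; a quick numerical check (assuming numpy is available) that this reproduces the $4\times 4$ block matrix quoted above:

```python
import numpy as np

zero = np.array([1, 0])
one = np.array([0, 1])
x_plus = np.array([1, 1]) / np.sqrt(2)   # X = +1 eigenstate
x_minus = np.array([1, -1]) / np.sqrt(2)  # X = -1 eigenstate

v1 = np.kron(zero, x_plus)   # |0+>
v2 = np.kron(one, x_minus)   # |1->
P = np.outer(v1, v1) + np.outer(v2, v2)

# The block matrix from the text, for comparison.
expected = 0.5 * np.array([[1, 1, 0, 0],
                           [1, 1, 0, 0],
                           [0, 0, 1, -1],
                           [0, 0, -1, 1]])
```

Since $P$ is a projector, it also satisfies $P^2 = P$, which is what `EigenGate` expects of each eigen-component.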
Testing the Gate

__BEFORE MOVING ON__ make sure you've executed the `EigenGate` solution of the $ZX$ gate implementation. That's the one assumed for the code below, though other implementations may work just as well. In general, the cells in this Colab may depend on previous cells.

Let's test out our gate. First we'll make a simple test circuit to see that the ASCII diagrams are displaying properly:
a = cirq.NamedQubit("a") b = cirq.NamedQubit("b") w = .15 # Put your own weight here. Try using a cirq.Symbol. circuit = cirq.Circuit.from_ops(ZXGate(w).on(a,b)) print(circuit)
a: ───Z──────── β”‚ b: ───X^0.15───
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
We should also check that the matrix is what we expect:
test_matrix = np.array([[np.cos(np.pi*w), 1j*np.sin(np.pi*w), 0, 0], [1j*np.sin(np.pi*w), np.cos(np.pi*w), 0, 0], [0, 0, np.cos(np.pi*w), -1j*np.sin(np.pi*w)], [0, 0, -1j*np.sin(np.pi*w),np.cos(np.pi*w)]]) # Test for five digits of accuracy. Won't work with cirq.Symbol assert (circuit.to_unitary_matrix().round(5) == test_matrix.round(5)).all()
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
Create Circuit

Now we have to create the QNN circuit. We are simply going to let a $ZX$ gate act between each data qubit and the readout qubit. For simplicity, let's share a single weight between all of the gates. You are invited to experiment with making the weights different, but in our example problem below we can set them all equal by symmetry.

__Question__: What about the order of these actions? Which data qubits should interact with the readout qubit first?

Remember that we also want to measure the readout qubit in the $Y$ basis. Fundamentally speaking, all measurements in Cirq are computational basis measurements, and so we have to implement the change of basis by hand.

__Question__: What is the circuit for a basis transformation from the $Y$ basis to the computational basis? We want to choose our transformation so that an eigenstate with $Y=+1$ becomes an eigenstate with $Z=+1$ prior to measurement.

Solutions

* The $ZX$ gates all commute with each other, so the order of implementation doesn't matter!
* We want a transformation that maps $\big(|0\rangle + i |1\rangle\big)/\sqrt{2}$ to $|0\rangle$ and $\big(|0\rangle - i |1\rangle\big)/\sqrt{2}$ to $|1\rangle$. Recall that the phase gate $S$ is given in matrix form by
$$S = \begin{bmatrix}1 & 0 \\0 & i\end{bmatrix},$$
and the Hadamard transform is given by
$$H = \frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1 \\1 & -1\end{bmatrix},$$
so acting with $S^{-1}$ and then $H$ gives what we want. We'll add these two gates to the end of the circuit on the readout qubit so that the final measurement effectively occurs in the $Y$ basis.

Make Circuit

A clean way of making circuits is to define generators for logically-related circuit elements, and then `append` those to the circuit you want to make. Here is a code snippet that initializes our qubits and defines a generator for a single layer of $ZX$ gates:
# Total number of data qubits INPUT_SIZE = 9 data_qubits = cirq.LineQubit.range(INPUT_SIZE) readout = cirq.NamedQubit('r') # Initialize parameters of the circuit params = {'w': 0} def ZX_layer(): """Adds a ZX gate between each data qubit and the readout. All gates are given the same cirq.Symbol for a weight.""" for qubit in data_qubits: yield ZXGate(cirq.Symbol('w')).on(qubit, readout)
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
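Before assembling the circuit, we can verify the basis change numerically (assuming numpy is available): applying $S^{-1}$ and then $H$ sends the $Y=+1$ eigenstate to $|0\rangle$ and the $Y=-1$ eigenstate to $|1\rangle$.

```python
import numpy as np

S_inv = np.array([[1, 0], [0, -1j]])          # S**-1, the inverse phase gate
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard

y_plus = np.array([1, 1j]) / np.sqrt(2)    # Y = +1 eigenstate
y_minus = np.array([1, -1j]) / np.sqrt(2)  # Y = -1 eigenstate

out_plus = H @ (S_inv @ y_plus)    # should be |0>
out_minus = H @ (S_inv @ y_minus)  # should be |1>
```

This is why appending `cirq.S(readout)**-1` followed by `cirq.H(readout)` makes the final computational-basis measurement behave as a $Y$-basis measurement.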
Use this generator to create the QNN circuit. Don't forget to add the basis change for the readout qubit at the end!
qnn = cirq.Circuit() qnn.append(???) # YOUR CODE HERE
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials
Solution
qnn = cirq.Circuit() qnn.append(ZX_layer()) qnn.append([cirq.S(readout)**-1, cirq.H(readout)]) # Basis transformation
_____no_output_____
Apache-2.0
colabs/QNN_hands_on.ipynb
derekchen-2/physics-math-tutorials