markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
previous value = (a, b), new value = (a+b, a) | from functools import reduce
@timed
def fib_reduce(n):
initial = (1, 0)
dummy = range(n)
fib_n = reduce(lambda prev, n: (prev[0] + prev[1], prev[0]), dummy, initial)
return fib_n[0]
fib_reduce(35)
fib_loop(35)
fib_reduce(100)
fib_loop(100)
for i in range(10):
fib_loop(100)
def timed(fn, count):
... | fib_reduce(100) took 0.000020s to run.
| Unlicense | my_classes/ScopesClosuresAndDecorators/decorator_app_timing.ipynb | minefarmer/deep-Dive-1 |
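The `timed` decorator applied above is elided in this excerpt (the header `def timed(fn, count)` suggests the original also takes a `count` argument, presumably to average over repeated runs). Here is a minimal sketch of what such a timing decorator might look like for a single call; the printed format is an assumption modeled on the output shown, and `fib_loop` is a hypothetical stand-in for the loop version used in the comparison:

```python
import time
from functools import wraps

def timed(fn):
    # Hypothetical reimplementation: print the wall-clock time of each call.
    @wraps(fn)
    def inner(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print('{0}({1}) took {2:.6f}s to run.'.format(
            fn.__name__, ', '.join(map(str, args)), elapsed))
        return result
    return inner

@timed
def fib_loop(n):
    # Iterative Fibonacci: the same recurrence as the reduce version above
    a, b = 1, 0
    for _ in range(n):
        a, b = a + b, a
    return b

fib_loop(35)  # prints a timing line and returns fib(35) = 9227465
```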
Train a neural network with very little data. In this notebook, we will use Nobrainer to train a model with limited data. We will start off with a pre-trained model. You can find available pre-trained Nobrainer models at https://github.com/neuronets/nobrainer-models. The pre-trained models can be used to train models for... | !wget -O- http://neuro.debian.net/lists/bionic.us-nh.full | tee /etc/apt/sources.list.d/neurodebian.sources.list \
&& export GNUPGHOME="$(mktemp -d)" \
&& echo "disable-ipv6" >> ${GNUPGHOME}/dirmngr.conf \
&& (apt-key adv --homedir $GNUPGHOME --recv-keys --keyserver hkp://pgpkeys.eu 0xA5D32F012649A5A9 \
|| { curl -s... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Let's make sure the correct version of git-annex is installed. This should be version **8.20210223** or later. | !git-annex version
!pip install --no-cache-dir nilearn datalad datalad-osf nobrainer
import nobrainer | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Get sample features and labels. We use 9 pairs of volumes for training and 1 pair of volumes for evaluation. Many more volumes would be required to train a model for any useful purpose. | csv_of_filepaths = nobrainer.utils.get_data()
filepaths = nobrainer.io.read_csv(csv_of_filepaths)
train_paths = filepaths[:9]
evaluate_paths = filepaths[9:] | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Convert medical images to TFRecords. Remember how many full volumes are in the TFRecords files. This will be necessary to know how many steps are in one training epoch. The default training method needs to know this number, because Datasets don't always know how many items they contain. | # Verify that all volumes have the same shape and that labels are integer-ish.
invalid = nobrainer.io.verify_features_labels(train_paths, num_parallel_calls=2)
assert not invalid
invalid = nobrainer.io.verify_features_labels(evaluate_paths)
assert not invalid
!mkdir -p data
# Convert training and evaluation data to T... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Create Datasets | n_classes = 1
batch_size = 2
volume_shape = (256, 256, 256)
block_shape = (128, 128, 128)
n_epochs = None
augment = False
shuffle_buffer_size = 10
num_parallel_calls = 2
dataset_train = nobrainer.dataset.get_dataset(
file_pattern='data/data-train_shard-*.tfrec',
n_classes=n_classes,
batch_size=batch_size,
... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Load pre-trained model Use datalad to retrieve trained models. | !datalad clone https://github.com/neuronets/trained-models && \
cd trained-models && git-annex enableremote osf-storage && \
datalad get -s osf-storage neuronets/brainy/0.1.0/brain-extraction-unet-128iso-model.h5
import tensorflow as tf
model_path = "trained-models/neuronets/brainy/0.1.0/brain-extraction-unet-128is... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Considerations for transfer learning. Training a neural network changes the model's weights. A pre-trained network has learned weights for a task, and we do not want to forget these weights during training. In other words, we do not want to ruin the pre-trained weights when using our new data. To avoid dramatic changes ... | for layer in model.layers:
layer.kernel_regularizer = tf.keras.regularizers.l2(0.001)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-05)
model.compile(
optimizer=optimizer,
loss=nobrainer.losses.jaccard,
metrics=[nobrainer.metrics.dice],
) | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
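As an aside, one common way to formalize "do not drift too far from the pre-trained weights" is to penalize the distance to those weights rather than to zero (sometimes called L2-SP regularization — note this differs from the plain `l2` regularizer set in the cell above). A framework-free numpy sketch, with all values purely illustrative:

```python
import numpy as np

def penalized_step(w, grad, w_pretrained, lr=1e-5, lam=0.001):
    """One SGD step where a penalty 0.5 * lam * ||w - w_pretrained||^2
    is added to the loss; its gradient lam * (w - w_pretrained)
    pulls w back toward the pre-trained weights."""
    return w - lr * (grad + lam * (w - w_pretrained))

w_pre = np.array([1.0, -2.0, 0.5])            # hypothetical pre-trained weights
w = w_pre.copy()
for _ in range(1000):
    task_grad = np.array([0.3, -0.1, 0.2])    # stand-in for the new-task gradient
    w = penalized_step(w, task_grad, w_pre)

# w has adapted to the new task, but stays close to w_pre
print(np.abs(w - w_pre))
```

With the small learning rate used here (matching the 1e-05 chosen for the Adam optimizer above), the weights move only slightly per epoch, which is exactly the behavior we want when fine-tuning.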
Train and evaluate model. $$steps = \frac{nBlocks}{volume} \times \frac{nVolumes}{batchSize}$$ | steps_per_epoch = nobrainer.dataset.get_steps_per_epoch(
n_volumes=len(train_paths),
volume_shape=volume_shape,
block_shape=block_shape,
batch_size=batch_size)
steps_per_epoch
validation_steps = nobrainer.dataset.get_steps_per_epoch(
n_volumes=len(evaluate_paths),
volume_shape=volume_shape,
... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
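The helper `get_steps_per_epoch` (its implementation is not shown here) presumably evaluates the formula above; a hypothetical plain-Python reimplementation, using the shapes from this notebook — 256³ volumes tiled by 128³ blocks give 8 blocks per volume, so 9 training volumes at batch size 2 give 36 steps per epoch:

```python
import math

def steps_per_epoch(n_volumes, volume_shape, block_shape, batch_size):
    # Number of non-overlapping blocks that tile one volume.
    blocks_per_volume = 1
    for v, b in zip(volume_shape, block_shape):
        blocks_per_volume *= v // b
    # Total blocks across all volumes, divided into batches (rounded up).
    return math.ceil(blocks_per_volume * n_volumes / batch_size)

print(steps_per_epoch(9, (256, 256, 256), (128, 128, 128), 2))  # → 36
print(steps_per_epoch(1, (256, 256, 256), (128, 128, 128), 2))  # → 4
```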
The following step may take about 10 minutes on a standard Colab GPU. | model.fit(
dataset_train,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=dataset_evaluate,
validation_steps=validation_steps) | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
Predict natively without TFRecords | from nobrainer.volume import standardize
image_path = evaluate_paths[0][0]
out = nobrainer.prediction.predict_from_filepath(image_path,
model,
block_shape = block_shape,
b... | _____no_output_____ | Apache-2.0 | guide/transfer_learning.ipynb | richford/nobrainer |
This example demonstrates how to dynamically replace widgets in a layout to create responsive user interfaces (requires a live Python server). | selector = pn.widgets.Select(
value=pn.widgets.ColorPicker, options=[
pn.widgets.ColorPicker,
pn.widgets.DatePicker,
pn.widgets.FileInput,
pn.widgets.FloatSlider,
pn.widgets.RangeSlider,
pn.widgets.Spinner,
pn.widgets.TextInput,
], css_classes=['panel-widget-box'])
row = pn.Row(selector... | _____no_output_____ | BSD-3-Clause | examples/gallery/dynamic/dynamic_ui.ipynb | slamer59/panel |
Serving PyTorch Models with CMLE Custom Prediction Code. Cloud ML Engine Online Prediction now supports custom Python code to apply custom prediction routines, including custom (stateful) pre/post processing, and/or models not created by the standard supported frameworks (TensorFlow, Keras, Scikit-learn, XGBoost). In... | !pip install -U google-cloud
!pip install torch | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
If you are running this notebook in Colab, run the following cell to authenticate your Google Cloud Platform user account | from google.colab import auth
auth.authenticate_user() | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Let's also define the project name, model name, and the GCS bucket name that we'll refer to later. Replace **<YOUR_PROJECT_ID>**, **<YOUR_BUCKET_NAME>**, and **<YOUR_REGION>** with your GCP project ID, your bucket name, and your region, respectively. | PROJECT='<YOUR_PROJECT_ID>'
BUCKET='<YOUR_BUCKET_NAME>'
REGION='<YOUR_REGION>'
!gcloud config set project {PROJECT}
!gcloud config get-value project | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
3. Download iris data. In this example, we want to build a classifier for the simple [iris dataset](https://archive.ics.uci.edu/ml/datasets/iris). So first, we download the data CSV file locally. | !mkdir data
!mkdir models
import urllib
LOCAL_DATA_DIR = "data/iris.csv"
url_opener = urllib.URLopener()
url_opener.retrieve("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data", LOCAL_DATA_DIR) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Part 1: Build a PyTorch NN Classifier. Make sure that the pytorch package is [installed](https://pytorch.org/get-started/locally/). | import torch
from torch.autograd import Variable
print 'PyTorch Version: {}'.format(torch.__version__) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
1. Load Data. In this step, we are going to: 1. Load the data into a Pandas DataFrame. 2. Convert the class feature (species) from string to a numeric indicator. 3. Split the DataFrame into the input features (xtrain) and the target feature (ytrain). | import pandas as pd
CLASS_VOCAB = ['setosa', 'versicolor', 'virginica']
datatrain = pd.read_csv(LOCAL_DATA_DIR, names=['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'species'])
#change string value to numeric
datatrain.loc[datatrain['species']=='Iris-setosa', 'species']=0
datatrain.loc[datatrain['spe... | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
2. Set model parametersYou can try different values for **hidden_units** or **learning_rate**. | hidden_units = 10
learning_rate = 0.1 | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
3. Define the PyTorch NN model. Here, we build a neural network with one hidden layer, and a Softmax output layer for classification. | model = torch.nn.Sequential(
torch.nn.Linear(input_features, hidden_units),
torch.nn.Sigmoid(),
torch.nn.Linear(hidden_units, num_classes),
torch.nn.Softmax()
)
loss_metric = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(),lr=learning_rate) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
4. Train the model. We are going to train the model for **num_epochs** epochs. | num_epochs = 10000
for epoch in range(num_epochs):
x = Variable(torch.Tensor(xtrain).float())
y = Variable(torch.Tensor(ytrain).long())
optimizer.zero_grad()
y_pred = model(x)
loss = loss_metric(y_pred, y)
loss.backward()
optimizer.step()
if (epoch) % 1000 == 0:
pri... | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
5. Save and load the model | LOCAL_MODEL_DIR = "models/model.pt"
torch.save(model, LOCAL_MODEL_DIR)
del model
iris_classifier = torch.load(LOCAL_MODEL_DIR) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
6. Test the loaded model for predictions | def predict_class(instances):
instances = torch.Tensor(instances)
output = iris_classifier(instances)
    _, predicted = torch.max(output, 1)
return predicted | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Get predictions for the first 5 instances in the dataset | predicted = predict_class(xtrain[0:5])
print [CLASS_VOCAB[class_index] for class_index in predicted] | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Get the classification accuracy on the training data | import numpy as np
accuracy = round(sum(np.array(predict_class(xtrain)) == ytrain)/float(len(ytrain))*100,2)
print 'Classification accuracy: {}%'.format(accuracy) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
7. Upload trained model to Cloud Storage | GCS_MODEL_DIR='models/pytorch/iris_classifier/'
!gsutil -m cp -r {LOCAL_MODEL_DIR} gs://{BUCKET}/{GCS_MODEL_DIR}
!gsutil ls gs://{BUCKET}/{GCS_MODEL_DIR} | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Part 2: Prepare the Custom Prediction Package. 1. Implement a model **custom class** for pre/post processing, as well as loading and using your model for prediction. 2. Prepare your **setup.py** file to include all the modules and packages you need in your custom model class. 1. Create the custom model class. In the **f... | %%writefile model.py
import os
import pandas as pd
from google.cloud import storage
import torch
class PyTorchIrisClassifier(object):
def __init__(self, model):
self._model = model
self.class_vocab = ['setosa', 'versicolor', 'virginica']
@classmethod
def from_path(cls, model_... | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
2. Create a setup.py module. Include **pytorch** as a required package, as well as the **model.py** file that includes your custom model class. | %%writefile setup.py
from setuptools import setup
REQUIRED_PACKAGES = ['torch']
setup(
name="iris-custom-model",
version="0.1",
scripts=["model.py"],
install_requires=REQUIRED_PACKAGES
) | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
3. Create the package. This will create a .tar.gz package under the /dist directory. The name of the package will be (name)-(version).tar.gz, where (name) and (version) are the ones specified in setup.py. | !python setup.py sdist | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
4. Upload the package to GCS | GCS_PACKAGE_URI='models/pytorch/packages/iris-custom-model-0.1.tar.gz'
!gsutil cp ./dist/iris-custom-model-0.1.tar.gz gs://{BUCKET}/{GCS_PACKAGE_URI}
!gsutil ls gs://{BUCKET}/{GCS_PACKAGE_URI} | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Part 3: Deploy the Model to CMLE for Online Predictions 1. Create CMLE model | MODEL_NAME='torch_iris_classifier'
!gcloud ml-engine models create {MODEL_NAME} --regions {REGION}
!echo ''
!gcloud ml-engine models list | grep 'torch' | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
2. Create CMLE model version. Once you have your custom package ready, you can specify this as an argument when creating a version resource. Note that you need to provide the path to your package (as package-uris) and also the class name that contains your custom predict method (as model-class). | MODEL_VERSION='v1'
RUNTIME_VERSION='1.10'
MODEL_CLASS='model.PyTorchIrisClassifier'
!gcloud alpha ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} \
--origin=gs://{BUCKET}/{GCS_MODEL_DIR} \
--runtime-version={RUNTIME_VERSION} \
--framework='SCIKIT_LEARN' \
... | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Part 4: Cloud ML Engine Online Prediction | from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials,
discoveryServiceUrl='https://storage.googleapis.com/cloud-ml/discovery/ml_v1_discovery.json'... | _____no_output_____ | Apache-2.0 | pytorch/notebooks/Serving PyTorch Models with CMLE Custom Prediction Code.ipynb | fionalcy/cloudml-samples |
Figure 1 | random.seed(0)
np.random.seed(0)
n = 200
m = 3
repeats = 500
Ks = [1, 2, 3]
filters = {}
for K in Ks:
coeffs = low_pass_poly_coeffs(K)
filters[K] = PolynomialGraphFilter(coeffs)
data = []
for _ in tqdm(range(repeats)):
    # Generate the graph and the perturbed graph
G = nx.barabasi_albert_graph(n, m)... | _____no_output_____ | MIT | figures.ipynb | HenryKenlay/spgf |
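`PolynomialGraphFilter` and `low_pass_poly_coeffs` are elided in this excerpt; here is a plausible numpy sketch of what a polynomial graph filter h(L) = Σ_k c_k L^k does when applied to a signal x. The class name matches the code above, but its interface here is an assumption, and the tiny two-node Laplacian is purely illustrative:

```python
import numpy as np

class PolynomialGraphFilter:
    """Hypothetical sketch: apply h(L) = sum_k coeffs[k] * L^k to a signal x."""
    def __init__(self, coeffs):
        self.coeffs = coeffs

    def apply(self, L, x):
        out = np.zeros_like(x, dtype=float)
        Lk_x = x.astype(float)        # running value of L^k @ x, starting at k=0
        for c in self.coeffs:
            out += c * Lk_x
            Lk_x = L @ Lk_x           # advance to the next power of L
        return out

# Normalized Laplacian of a single-edge graph (both degrees are 1):
L = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
x = np.ones(2)                        # a constant signal is "smooth"
y = PolynomialGraphFilter([1.0]).apply(L, x)   # h(L) = I leaves x unchanged
print(np.allclose(y, x))  # → True
```

A constant signal lies in the nullspace of L (eigenvalue 0), which is why low-pass polynomial filters pass it through while attenuating high-frequency components.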
Figure 2 | n = 500
m = 3
proportion = 0.05
def get_data(k, dataset):
experiment_name = f'BA({n},{m})_{proportion}' if dataset == 'BA' else f'Sensor({n})_{proportion}'
if k is not None:
experiment_name = f'{experiment_name}_{k}'
df = pd.read_csv(f'results/{experiment_name}.csv', index_col=0)
df = df[['repe... | _____no_output_____ | MIT | figures.ipynb | HenryKenlay/spgf |
Plotting critical points. Author: Martin Horvat, June 2018 | %matplotlib notebook
import plot | _____no_output_____ | MIT | res/plot.ipynb | horvatm/misaligned_roche_critical |
Classifying newswires: a multi-class classification example. This notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular furth... | from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000) | Downloading data from https://s3.amazonaws.com/text-datasets/reuters.npz
2113536/2110848 [==============================] - 1s 0us/step
| MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples: | len(train_data)
len(test_data) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
As with the IMDB reviews, each example is a list of integers (word indices): | train_data[10] | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Here's how you can decode it back to words, in case you are curious: | word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for... | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
The label associated with an example is an integer between 0 and 45: a topic index. | train_labels[10] | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Preparing the data. We can vectorize the data with the exact same code as in our previous example: | import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test d... | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Cha... | def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(... | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example: | from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Building our network. This topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the outpu... | from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax')) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a ... | model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy']) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
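A small numpy sketch of what the `softmax` output layer defined above computes — the 46 raw scores become a probability distribution (this is the standard numerically-stable formulation, not the actual Keras kernel):

```python
import numpy as np

def softmax(z):
    # Subtracting the max keeps exp() from overflowing; the shift cancels out.
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # raw scores for 3 (instead of 46) classes
probs = softmax(scores)
print(probs.sum())     # the outputs always sum to 1
print(probs.argmax())  # → 0, the class with the largest raw score
```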
Validating our approach. Let's set apart 1,000 samples in our training data to use as a validation set: | x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:] | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Now let's train our network for 20 epochs: | history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val)) | Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 1s 172us/step - loss: 2.5254 - acc: 0.4932 - val_loss: 1.7216 - val_acc: 0.6140
Epoch 2/20
7982/7982 [==============================] - 1s 71us/step - loss: 1.4463 - acc: 0.6893 - val_loss: 1.3465 - val_acc: 0.7090
E... | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Let's display its loss and accuracy curves: | ## fix ## XP
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.tit... | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
It seems that the network starts overfitting after 8 epochs. Let's train a new network from scratch for 8 epochs, then let's evaluate it on the test set: | model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.f... | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline: | import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data: | predictions = model.predict(x_test) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
Each entry in `predictions` is a vector of length 46: | predictions[0].shape | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
The coefficients in this vector sum to 1: | np.sum(predictions[0]) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
The largest entry is the predicted class, i.e. the class with the highest probability: | np.argmax(predictions[0]) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
A different way to handle the labels and the loss. We mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like so: | y_train = np.array(train_labels)
y_test = np.array(test_labels) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`: | model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc']) | _____no_output_____ | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
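The equivalence of the two losses is easy to check with numpy: cross-entropy against a one-hot target picks out exactly the log-probability that the sparse form indexes directly (an illustrative check, not the Keras implementation):

```python
import numpy as np

probs = np.array([0.1, 0.7, 0.2])  # model output for one sample
label = 1                          # integer-encoded target class

# "sparse" form: index the predicted probability of the true class
sparse_ce = -np.log(probs[label])

# "categorical" form: dot the log-probabilities with a one-hot vector
one_hot = np.zeros_like(probs)
one_hot[label] = 1.0
categorical_ce = -np.sum(one_hot * np.log(probs))

print(np.isclose(sparse_ce, categorical_ce))  # → True
```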
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layers. We mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 h... | model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fi... | Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 0s - loss: 3.1620 - acc: 0.2295 - val_loss: 2.6750 - val_acc: 0.2740
Epoch 2/20
7982/7982 [==============================] - 0s - loss: 2.2009 - acc: 0.3829 - val_loss: 1.7626 - val_acc: 0.5990
Epoch 3/20
7982/7982 [... | MIT | .ipynb_checkpoints/3.6-classifying-newswires-checkpoint.ipynb | ILABUTK/deep-learning-with-python-notebooks |
A MadMiner Example Analysis - Analyzing dim6 operators in $W\gamma$. Preparations: Let us first load all the Python libraries again | import sys
import os
madminer_src_path = "/Users/felixkling/Documents/GitHub/madminer"
sys.path.append(madminer_src_path)
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
from scipy.optimize import curve_fit
% m... | _____no_output_____ | MIT | examples/example1_wgamma/Part2-Generation.ipynb | vischia/madminer |
Please enter here the path to your MG5 root directory **(This needs to be updated by the user)**. This notebook assumes that you installed Delphes and Pythia through MG5 | mg_dir = '/Users/felixkling/work/MG5_aMC_v2_6_2' | _____no_output_____ | MIT | examples/example1_wgamma/Part2-Generation.ipynb | vischia/madminer |
2. Event Generation. 2a) Initialize and load MadMiner. Let us first initialize MadMiner and load our setup again | miner = MadMiner()
miner.load('data/madminer_example.h5') | 21:03
21:03 ------------------------------------------------------------
21:03 | |
21:03 | MadMiner v0.1.0 |
21:03 | |
21:03 | Johann Brehmer, Kyle ... | MIT | examples/example1_wgamma/Part2-Generation.ipynb | vischia/madminer |
2b) Run MadMiner Event Generation. In the next step, MadMiner starts MadGraph and Pythia to generate events and calculate the weights. You have to provide paths to the process card, run card and param card (the entries corresponding to the parameters of interest will be automatically adapted). Log files in the `log_direc... | miner.run(
mg_directory=mg_dir,
mg_process_directory='./mg_processes/wgamma',
log_directory='logs/wgamma',
sample_benchmark='sm',
proc_card_file='cards/proc_card_wgamma.dat',
param_card_template_file='cards/param_card_template.dat',
run_card_file='cards/run_card_wgamma.dat',
pythia8_card... | 21:03 Generating MadGraph process folder from cards/proc_card_wgamma.dat at ./mg_processes/wgamma
21:03 Run 0
21:03 Sampling from benchmark: sm
21:03 Original run card: cards/run_card_wgamma.dat
21:03 Original Pythia8 card: cards/pythia8_card.dat
21:03 Copied run card: /madminer/cards/run_... | MIT | examples/example1_wgamma/Part2-Generation.ipynb | vischia/madminer |
Spark DataFrame Basics | import os
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('Basics').getOrCreate()
data_file = os.path.join(os.curdir, 'data', 'people.json')
data = spark.read.json(data_file)
data.show()
data.printSchema()
data.describe()
data.describe().show()
from pyspark.sql.types import (
StructField, ... | _____no_output_____ | MIT | Spark_Basic.ipynb | prashantfb65/spark-project |
Group by and aggregate | sales_data_file = os.path.join(os.curdir, 'data', 'sales_info.csv')
sales_data = spark.read.csv(sales_data_file,
inferSchema=True,
header=True)
sales_data.printSchema()
sales_data.groupBy("Company")
sales_data.groupBy("Company").mean().show()
sales_data.groupBy("Com... | +----------+
|max(Sales)|
+----------+
| 870.0|
+----------+
| MIT | Spark_Basic.ipynb | prashantfb65/spark-project |
Import spark functions | from pyspark.sql.functions import countDistinct,\
avg, stddev, format_number
sales_data.select(countDistinct('Sales')).show()
sales_data.select(avg('Sales').alias('Average Sales')).show()
std_sales = sales_data.select(stddev('Sales'))
std_sales.select(format_number('stddev_samp(Sales)',2).alias('std.')).show()
sales_d... | +-------+-------+-----+
|Company| Person|Sales|
+-------+-------+-----+
| GOOG|Charlie|120.0|
| MSFT| Amy|124.0|
| APPL| Linda|130.0|
| GOOG| Sam|200.0|
| MSFT|Vanessa|243.0|
| APPL| John|250.0|
| GOOG| Frank|340.0|
| FB| Sarah|350.0|
| APPL| Chris|350.0|
| MSFT| Tina|600.0|
| APPL... | MIT | Spark_Basic.ipynb | prashantfb65/spark-project |
Checkpoint 1 notebook - solutions. Note that these are just example solutions - other solutions are also fine as long as they run sufficiently fast and produce results of the required accuracy. I have added marking cells that we used to automatically evaluate your solutions and check them against correct numerical values. | # add imports here
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.integrate as integrate
plt.rcParams['figure.figsize'] = (10, 6)
plt.rcParams['font.size'] = 14
# Constants (use these)
c1 = 0.0380998 # nm^2 eV
c2 = 1.43996 # nm eV
r0 = 0.0529177 # nm
h = 6.62606896e-34 # J s
c = 29... | _____no_output_____ | MIT | c1/h_atom.ipynb | c-abbott/num-rep |
Task 1. Write code that calculates V(r) numerically for alpha = 0.01 and plots it for r = 0.01...1 nm. Remember to label the axes. | ### TASK 1 plot
def potential_numerical(r, alpha):
def force(r, alpha):
return -c2*np.power(r,-2+alpha)*np.power(r0,-alpha)
    val, err = integrate.quad(force, r, np.inf, args=alpha)
return val
N = 100
rmax = 1
dr = rmax / N
r = np.arange(1, N+1) * dr
v = np.array([potential_numerical(my_r, 0.01) for my_r... | _____no_output_____ | MIT | c1/h_atom.ipynb | c-abbott/num-rep |
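For alpha = 0 the integral defining V(r) has a simple closed form, which makes a handy sanity check on the quadrature (a quick check added here for illustration; it is not part of the marked solution):

```python
import numpy as np
from scipy import integrate

c2 = 1.43996  # nm eV (Coulomb constant used above)
r = 0.1       # nm

# For alpha = 0:  V(r) = int_r^inf (-c2 / r'^2) dr' = -c2 / r
val, err = integrate.quad(lambda rp: -c2 / rp**2, r, np.inf)
print(val)    # should be close to -c2 / r = -14.3996
```

Agreement to many digits here gives confidence that `integrate.quad` handles the improper integral to infinity correctly before the alpha != 0 case is attempted.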
Task 2

In addition to (1), the test below will compare the analytic expression for V(r) with the numerically obtained values for r = 0.01,0.02...1 nm. The biggest absolute difference diff = max |V_{exact}(r) − V_{numerical}(r)| must be smaller than 1e-5 eV. There is nothing else for you to do. | ### TASK 2 marking cell
def _potential_exact(r, alpha):
return c2*np.power(r,-1+alpha)*np.power(r0,-alpha)/(-1+alpha)
for my_r in np.linspace(0.01, 1, 100):
diff = abs(potential_numerical(my_r, 0.01) - _potential_exact(my_r, 0.01))
assert(diff <= 1e-5) | _____no_output_____ | MIT | c1/h_atom.ipynb | c-abbott/num-rep |
Task 3

In addition to (2), calculate the first 2 energy levels (eigenvalues of H) for \alpha = 0, 0.01 and print out the values in eV. The values must be accurate to 0.01 eV. This requires sufficiently large r_{max} and N. Plot the difference \Delta E between the two energies for \alpha = 0, 0.01. Remember to label the... | ### TASK 3 plot
from scipy.sparse.linalg import eigsh
from scipy.sparse import diags
def _energy_levels(alpha, N):
rmax = 1.5 # this value has been found experimentally to give a good tradeoff between speed and accuracy
r = rmax * np.linspace(1/N, 1, N)
dr = rmax / N
diagonals = [np.full((N), -2),
... | alpha = 0.00: (-13.603009365111914, -3.401240417909124)
alpha = 0.01: (-13.805103845839614, -3.5344766180666727)
alpha: 0.00, E(correct): -13.605693, E(student): -13.603009, diff: 0.002684, tol: 0.01.
alpha: 0.00, E(correct): -3.401423, E(student): -3.401240, diff: 0.000183, tol: 0.01.
alpha: 0.01, E(correct): -13.8073... | MIT | c1/h_atom.ipynb | c-abbott/num-rep |
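The same sparse finite-difference machinery can be validated on a problem with known eigenvalues. Below is a sketch (an addition for illustration, not part of the marked solution) for a particle in a box of width L = 1 with hbar = m = 1, where the exact levels are E_n = n^2 pi^2 / 2:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

N = 1000
dx = 1.0 / (N + 1)                      # grid spacing, Dirichlet boundaries
main = np.full(N, 1.0 / dx**2)          # -(1/2) d^2/dx^2, central differences
off = np.full(N - 1, -0.5 / dx**2)
H = diags([off, main, off], [-1, 0, 1], format='csc')

# sigma=0 (shift-invert mode) targets the smallest eigenvalues
E = np.sort(eigsh(H, k=2, sigma=0, return_eigenvectors=False))
print(E)   # close to [pi^2/2, 2*pi^2] ~ [4.9348, 19.7392]
```

If this check passes, remaining inaccuracy in the hydrogen levels comes from the grid (r_max and N), not from the diagonalization itself.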
Task 4

In addition to (3), assuming that the transition between the 1st excited and the ground state corresponds to the wavelength lambda = 121.5 \pm 0.1 nm, what is the maximum value of alpha_{max} > 0 consistent with this measurement (i.e., the largest alpha_{max} > 0 such that the predicted and measured wavelengths ... | ### TASK 4
import scipy.optimize as opt
def wavelength(alpha):
e0, e1 = energy_levels(alpha)
return hc / (e1-e0) # in nm
def diff_wavelength(alpha):
diff = 121.5 - 0.1 - wavelength(alpha) # in nm
return diff
def find_alpha_max():
alpha_max = opt.brentq(diff_wavelength, 0, 0.01)
return alpha_... | alpha_max: 0.0016279327230272736
| MIT | c1/h_atom.ipynb | c-abbott/num-rep |
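`opt.brentq` only needs a bracketing interval on which the function changes sign, which is why `diff_wavelength` is evaluated between 0 and 0.01 above. A minimal standalone illustration of the same root finder (an aside, not part of the solution):

```python
import math
from scipy import optimize

# Solve cos(x) = x; the bracket [0, 1] works because
# f(0) = 1 > 0 and f(1) = cos(1) - 1 < 0
f = lambda x: math.cos(x) - x
root = optimize.brentq(f, 0.0, 1.0)
print(root)   # about 0.7390851332
```

Brent's method combines bisection with inverse quadratic interpolation, so it is both guaranteed to converge on a valid bracket and fast in practice.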
Task 5

Improve the accuracy of the computation of the two energy levels to 0.001 eV and find alpha_{max} assuming the wavelength lambda = 121.503 \pm 0.01 nm. | ### TASK 5
# we will use a slightly different diagonalization method that will enable us to get the higher accuracy originally
# demanded in the checkpoint (0.0001eV), while not being much slower (since much larger N is required)
import scipy.optimize as opt
from scipy.linalg import eigh_tridiagonal
def _energy_leve... | alpha = 0.01: (-13.807386221625073, -3.5346024455156178)
alpha: 0.01, E(correct): -13.807388, E(student): -13.807386, diff: 0.000002, tol: 0.001.
alpha: 0.01, E(correct): -3.534603, E(student): -3.534602, diff: 0.000000, tol: 0.001.
alpha_max: 0.00012515572787387131
| MIT | c1/h_atom.ipynb | c-abbott/num-rep |
Task 6

How would one achieve the accuracy 0.0001 eV with significantly smaller matrices? Hint: can we represent R from Eq. (1) as a linear combination of functions that solve the "unperturbed" equation, and translate this into an eigenproblem for a certain N \times N matrix, with N < 100?

Possible solution

I wanted you t... | ### TASK 6
# this solves the problem to the original accuracy 0.0001eV
import scipy.optimize as opt
import scipy.special as sp
# For this we need orthogonal functions on 0..infty. A quick search (Wikipedia) shows that
# Laguerre polynomials multiplied by exp(-x/2) have the required property
# These functions will b... | alpha = 0.01: (-13.807288943953521, -3.53458949377518)
alpha: 0.01, E(correct): -13.807388, E(student): -13.807289, diff: 0.000099, tol: 0.0001.
alpha: 0.01, E(correct): -3.534603, E(student): -3.534589, diff: 0.000013, tol: 0.0001.
alpha_max: 1.2055081239240464e-05
| MIT | c1/h_atom.ipynb | c-abbott/num-rep |
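The expansion in Task 6 relies on the basis functions being orthogonal on [0, infinity). As a quick numerical illustration of that property (an addition, not part of the solution), Gauss-Laguerre quadrature, which integrates polynomials against the weight e^{-x} exactly, confirms the orthonormality of the first few Laguerre polynomials:

```python
import numpy as np
from scipy.special import roots_laguerre, eval_laguerre

# n-point Gauss-Laguerre quadrature is exact for polynomial integrands
# of degree <= 2n - 1 against the weight exp(-x) on [0, inf)
nodes, weights = roots_laguerre(20)

# Check: int_0^inf exp(-x) L_m(x) L_n(x) dx = delta_{mn}
for m in range(4):
    for n in range(4):
        integral = np.sum(weights * eval_laguerre(m, nodes) * eval_laguerre(n, nodes))
        expected = 1.0 if m == n else 0.0
        assert abs(integral - expected) < 1e-8
print("orthonormality verified")
```

The same quadrature nodes and weights are exactly what make the matrix elements of the Hamiltonian cheap to evaluate in this basis.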
Neural Networks and Deep Learning for Life Sciences and Health Applications - An introductory course about theoretical fundamentals, case studies and implementations in python and tensorflow

(C) Umberto Michelucci 2018 - umberto.michelucci@gmail.com

github repository: https://github.com/michelucci/dlcourse2018_student... | import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.datasets.samples_generator import make_regression | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Dataset | Xdata = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
Ydata = np.array([33, 42, 45, 51, 53, 61, 62])
plt.figure(figsize=(8,5))
plt.scatter(Xdata,Ydata, s = 80)
plt.xlabel('x', fontsize= 16)
plt.tick_params(labelsize=16) | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
The points seem to lie on a line, so a linear regression could actually work.

Building Phase

Let's build our neural network. It will be a single neuron with an identity activation function. | tf.reset_default_graph()
X = tf.placeholder(tf.float32, [1, None]) # Inputs
Y = tf.placeholder(tf.float32, [1, None]) # Labels
learning_rate = tf.placeholder(tf.float32, shape=())
W = tf.Variable(tf.ones([1, 1])) # Weights
b = tf.Variable(tf.zeros(1)) # Bias
init = tf.global_variables_initializer()
y_ = tf.matmul(t... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
We need additional nodes:

- The cost function $J$
- A node that will minimize the cost function | cost = tf.reduce_mean(tf.square(y_-Y))
training_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
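To see exactly what `GradientDescentOptimizer` computes at each step, here is the same single-neuron model written out in plain NumPy (a didactic sketch that mirrors, but is not, the TensorFlow graph above):

```python
import numpy as np

# Same seven data points as in the notebook
x = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
y = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)

w, b, lr = 1.0, 0.0, 1e-2   # same initialization as W=1, b=0 above
for _ in range(15000):
    y_hat = w * x + b
    grad_w = 2.0 * np.mean((y_hat - y) * x)   # dJ/dw for J = mean((y_hat - y)^2)
    grad_b = 2.0 * np.mean(y_hat - y)         # dJ/db
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # should approach roughly 9.5 and -2.68
```

Each iteration moves (w, b) against the gradient of the mean squared error, which is all the TensorFlow `training_step` node does under the hood.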
Reshaping the dataset

Now we expect $X$ and $Y$ as tensors with one row. If we check what we have | print(Xdata.shape)
print(Ydata.shape) | (7,)
(7,)
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
This is not what we need... So we can reshape them in this way | x = Xdata.reshape(1,-1)
y = Ydata.reshape(1,-1) | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
The `-1` simply means that the dimension is unknown and we want numpy to figure it out. Numpy does this by looking at the length of the array and the remaining dimensions, making sure it satisfies the criterion: "The new shape should be compatible with the original shape". | print(x.shape)
print(y.shape) | (1, 7)
(1, 7)
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
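A tiny self-contained example of how numpy infers the `-1` dimension:

```python
import numpy as np

a = np.arange(6)          # shape (6,)
row = a.reshape(1, -1)    # numpy infers 6: shape (1, 6)
mat = a.reshape(2, -1)    # numpy infers 3: shape (2, 3)
print(row.shape, mat.shape)
```

Only one dimension may be `-1` per reshape, and the product of the given dimensions must divide the array size.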
That is better now... To be able to test various learning rates we can define a function to do the training | def run_linear_model(learning_r, training_epochs, train_obs, train_labels, debug = False):
sess = tf.Session()
sess.run(init)
cost_history = np.empty(shape=[0], dtype = float)
for epoch in range(training_epochs+1):
sess.run(training_step, feed_dict = {X: train_obs, Y: train_labels, learnin... | Reached epoch 0 cost J = 59688.492188
Reached epoch 1000 cost J = nan
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
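The `nan` above is the classic signature of a learning rate that is too large. On the toy cost J(w) = w^2 (a simplification used here purely for illustration) the update is w <- (1 - 2*lr) * w, which diverges whenever |1 - 2*lr| > 1:

```python
def descend(lr, steps=50, w=1.0):
    # Gradient descent on J(w) = w**2, whose gradient is 2*w
    for _ in range(steps):
        w = w - lr * 2.0 * w
    return w

stable = descend(0.4)    # factor |1 - 0.8| = 0.2 -> shrinks toward 0
unstable = descend(1.5)  # factor |1 - 3.0| = 2.0 -> blows up
print(stable, unstable)
```

With the real cost the threshold depends on the data (through the curvature of J), which is why 1e-3 converges here while the larger rate overflowed to `nan`.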
Interesting... Let's try a smaller learning rate | sess, ch = run_linear_model(1e-3, 1000, x, y, True)
sess2, ch2 = run_linear_model(1e-3, 5000, x, y, True) | Reached epoch 0 cost J = 1766.197388
Reached epoch 1000 cost J = 3.315676
Reached epoch 2000 cost J = 3.261549
Reached epoch 3000 cost J = 3.213741
Reached epoch 4000 cost J = 3.171512
Reached epoch 5000 cost J = 3.134215
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Check $J$... It is still going down.

Plot of the cost function | plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(1, 1, 1)
ax.plot(ch, ls='solid', color = 'black')
ax.plot(ch2, ls='solid', color = 'red')
ax.set_xlabel('epochs', fontsize = 16)
ax.set_yl... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
You don't see any difference... Let's zoom in | plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(1, 1, 1)
ax.plot(ch2, ls='solid', color = 'red')
ax.plot(ch, ls='solid', color = 'black')
ax.set_ylim(3,3.6)
ax.set_xlabel('epochs', fonts... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Note that the learning rate is small, and so the convergence is slow... Let's try something faster... | sess3, ch3 = run_linear_model(1e-2, 5000, x, y, True)
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(1, 1, 1)
ax.plot(ch3, ls='solid', lw = 3, color = 'blue', label = r"$\gamma = 10^{-2... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Now the cost is getting closer to flattening out... Let's keep trying to find the best parameters | sess5, ch5 = run_linear_model(0.03, 5000, x, y, True)
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(1, 1, 1)
ax.plot(ch5, ls='solid', lw = 3, color = 'green', label = r"$\gamma = 0.03$... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
The value reached by the green and blue lines seems to be the lowest that can be reached, so both are good candidates...

Predictions | pred_y = sess3.run(y_, feed_dict = {X: x, Y: y})
mse_y = sess3.run(mse, feed_dict = {X: x, Y: y})
print(mse_y)
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
plt.tight_layout()
fig = plt.figure(figsize=(10, 7))
ax = fig.add_subplot(1, 1, 1)
ax.scatter(y, ... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
How to find the weights?

Normally one is interested in the parameters of the linear regression. Typically when using NNs only the predictions are of interest, since the number of parameters is too big to be of any use, but it is instructive to see how to get the parameters from our computational graph. Our linear equation i... | W_ = sess3.run(W, feed_dict = {X: x, Y: y})
b_ = sess3.run(b, feed_dict = {X: x, Y: y}) | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
The parameters are then given by | print(W_, b_) | [[9.468025]] [-2.4970763]
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
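As a cross-check on the learned parameters, one-dimensional least squares has a closed form: w = S_xy / S_xx and b = mean(y) - w * mean(x). Applying it to the same seven data points (a sketch added here for reference, not from the course material):

```python
xs = [4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0]
ys = [33, 42, 45, 51, 53, 61, 62]

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
sxx = sum((xi - mx) ** 2 for xi in xs)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))

w = sxy / sxx        # 9.5 exactly for this data
b = my - w * mx      # about -2.67857
print(w, b)
```

Gradient descent should converge toward exactly these values; any remaining gap is optimization error, not model error.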
So we can plot the data with our best fit by first generating the right data to plot | x_ = np.arange(4, 7, 0.05).reshape(1,-1)
yfit_ = W_* x_+b_
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(1, 1, 1)
ax.plot(x_[0], yfit_[0], label = "Linear Regression")
ax.scatter (x,y, color = 'red', s ... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Comparison with classical linear regression

Now let's compare the results with a classical linear regression, as we can do with ```sklearn``` | xt = x.reshape(7,-1)
yt = y.reshape(7,-1)
reg = LinearRegression().fit(xt,yt)
reg.score(xt,yt)
reg.coef_
reg.intercept_
xt_ = x_[0].reshape(60,-1)
yfitsk_ = reg.predict(xt_.reshape(60,-1))
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
fig = plt.figure(fig... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Graphically you cannot see any difference... Let's try to train the network longer. | sess4, ch4 = run_linear_model(1e-2, 15000, x, y, True)
W_ = sess4.run(W, feed_dict = {X: x, Y: y})
b_ = sess4.run(b, feed_dict = {X: x, Y: y}) | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
The parameters are then given by | print(W_, b_) | [[9.499927]] [-2.6781528]
| BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
From a classical linear fit we get ```9.5, -2.67857143```, so now we are very close!

Exercise 1: difficulty medium

Build a network with one neuron (as we did before) and apply it to the following dataset, finding the best parameters of your linear regression equation. | X,y = make_regression(n_samples=50, n_features = 1, noise = 3)
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(1, 1, 1)
ax.scatter(X, y, label = "True Dataset")
ax.set_xlabel('x', fontsize = 16)
ax.set_yl... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Exercise 2: difficulty medium

Build a network with one neuron (as we did before) and apply it to the following dataset with 3 features, finding the best parameters of your linear regression equation. | X,y = make_regression(n_samples=50, n_features = 3, noise = 0.3)
X.shape
plt.rc('font', family='arial')
plt.rc('xtick', labelsize='x-small')
plt.rc('ytick', labelsize='x-small')
fig = plt.figure(figsize=(8, 15))
ax = fig.add_subplot(3, 1, 1)
ax.scatter(X[:,0], y, label = "Feature 0")
ax.set_xlabel('x', fontsize = ... | _____no_output_____ | BSD-4-Clause-UC | Week 4 - One Neuron/Week 4 - One dimensional linear regression with tensorflow.ipynb | zhaw-dl/zhaw-dlcourse-autumn2018 |
Chicago Crime Prediction Pipeline

An example notebook that demonstrates how to:

* Download data from BigQuery
* Create a Kubeflow pipeline
* Include Google Cloud AI Platform components to train and deploy the model in the pipeline
* Submit a job for execution

The model forecasts how many crimes are expected to be reported t... | %%capture
# Install the SDK (Uncomment the code if the SDK is not installed before)
!python3 -m pip install 'kfp>=0.1.31' --quiet
!python3 -m pip install pandas --upgrade -q
import json
import kfp
import kfp.components as comp
import kfp.dsl as dsl
import pandas as pd
import time | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |
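One common reason `time` is imported in pipelines like this is to generate a unique, sortable model version string when redeploying. The original cell is truncated, so the exact usage below is an assumption, shown only as a sketch:

```python
import time

# Hypothetical versioning pattern (not taken from the original notebook):
# a timestamp makes every deployment's version unique and sortable.
model_version = 'chicago_crime_model_' + time.strftime('%Y%m%d_%H%M%S')
print(model_version)
```

AI Platform model versions must be unique within a model, so deriving them from the wall clock avoids collisions on repeated pipeline runs.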
Pipeline Constants | # Required Parameters
project_id = '<ADD GCP PROJECT HERE>'
output = 'gs://<ADD STORAGE LOCATION HERE>' # No ending slash
# Optional Parameters
REGION = 'us-central1'
RUNTIME_VERSION = '1.13'
PACKAGE_URIS=json.dumps(['gs://chicago-crime/chicago_crime_trainer-0.0.tar.gz'])
TRAINER_OUTPUT_GCS_PATH = output + '/train/out... | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |
Download data

Define a download function that uses the BigQuery component | bigquery_query_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/bigquery/query/component.yaml')
QUERY = """
SELECT count(*) as count, TIMESTAMP_TRUNC(date, DAY) as day
FROM `bigquery-public-data.chicago_crime.cr... | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |
Train the model

Run training code that will pre-process the data and then submit a training job to the AI Platform. | mlengine_train_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/ml_engine/train/component.yaml')
def train(project_id,
trainer_args,
package_uris,
trainer_output_gcs_path,
gcs_wor... | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |
Deploy model

Deploy the model with the ID given from the training step | mlengine_deploy_op = comp.load_component_from_url(
'https://raw.githubusercontent.com/kubeflow/pipelines/e4d9e2b67cf39c5f12b9c1477cae11feb1a74dc7/components/gcp/ml_engine/deploy/component.yaml')
def deploy(
project_id,
model_uri,
model_id,
model_version,
runtime_version):
return mlengi... | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |
Define pipeline | @dsl.pipeline(
name=PIPELINE_NAME,
description=PIPELINE_DESCRIPTION
)
def pipeline(
data_gcs_path=DATA_GCS_PATH,
gcs_working_dir=output,
project_id=project_id,
python_module=PYTHON_MODULE,
region=REGION,
runtime_version=RUNTIME_VERSION,
package_uris=PACKAGE_URIS,
trainer_output_... | _____no_output_____ | Apache-2.0 | samples/core/ai_platform/ai_platform.ipynb | magencio/pipelines |