Build the network
Before we use the network object, we need to build() it first. By default, a network object only contains the definitions of the network; we need to call the build() function to construct the computational graph.
netout = net.build()
print netout
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
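The define-then-build behavior described above can be sketched with a toy class (plain Python; this `Network` is a hypothetical stand-in for illustration, not the actual TFS-toolbox implementation):

```python
class Network(object):
    """Toy illustration of a define-then-build network object."""
    def __init__(self, layer_defs):
        self.layer_defs = layer_defs  # definitions only, no graph yet
        self.graph = None

    def build(self):
        # "construct the computational graph" from the stored definitions
        self.graph = ['%s_op' % name for name in self.layer_defs]
        return self.graph

net = Network(['conv1', 'pool1', 'fc1'])
before_build = net.graph   # None: only definitions exist so far
netout = net.build()       # now the graph (here, a list of ops) exists
```

The point is simply that constructing the object is cheap, and the expensive graph construction is deferred until build() is called.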
Explore the network object
We can inspect the network structure easily:
print net
print net.print_shape()
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
Each network object also has the following components bound to it: an initializer, a loss, and an optimizer.
print net.initializer
print net.losser
print net.optimizer
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
Load and explore the data
After we have constructed the model, the next step is to load data. Our package provides some frequently used datasets, such as MNIST and CIFAR-10.
from tfs.dataset import Mnist
dataset = Mnist()
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
We can explore the images inside the MNIST dataset.
import numpy as np
idx = np.random.randint(0,60000) # we have 60000 images in the training dataset
img = dataset.train.data[idx,:,:,0]
lbl = dataset.train.labels[idx]
imshow(img,cmap='gray')
print 'index:',idx,'\t','label:',lbl
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
Train the network
It's very easy to train a network: just use the fit function, which works much like sklearn's. If you want to record information during training, you can define a monitor and plug it into the network. The default monitor only prints some information every 10 steps.
net.monitor
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
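The interval-based behavior of such a monitor can be sketched independently of TFS (hypothetical `IntervalMonitor` class, illustrative only, not the TFS-toolbox API):

```python
class IntervalMonitor(object):
    """Records a value every `interval` training steps."""
    def __init__(self, interval=10):
        self.interval = interval
        self.records = []

    def on_step(self, step, value):
        # fire only on steps that are multiples of the interval
        if step % self.interval == 0:
            self.records.append((step, value))

mon = IntervalMonitor(interval=10)
for step in range(1, 31):
    mon.on_step(step, value=step * 0.1)
# mon.records now holds entries for steps 10, 20, and 30 only
```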
Now we change the print interval to 20, and add a monitor that records the variance of each layer's input and output.
from tfs.core.monitor import *
net.monitor['default'].interval=20
net.monitor['var'] = LayerInputVarMonitor(net,interval=10)
net.fit(dataset,batch_size=200,n_epoch=1)
var_result = net.monitor['var'].results
import pandas as pd
var = pd.DataFrame(var_result,columns=[n.name for n in net.nodes])
var
notebook/0.Train-LeNet.ipynb
crackhopper/TFS-toolbox
mit
Train and deploy the model
For this notebook, we'll build a text classification model using the Hacker News dataset. Each training example consists of an article title and the article source. The model will be trained to classify a given article title as belonging to nytimes, github, or techcrunch.
Load the data
DATASET_NAME = "titles_full.csv"
COLUMNS = ['title', 'source']
titles_df = pd.read_csv(DATASET_NAME, header=None, names=COLUMNS)
titles_df.head()
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
We one-hot encode the label...
CLASSES = {
    'github': 0,
    'nytimes': 1,
    'techcrunch': 2
}
N_CLASSES = len(CLASSES)

def encode_labels(sources):
    classes = [CLASSES[source] for source in sources]
    one_hots = to_categorical(classes, num_classes=N_CLASSES)
    return one_hots

encode_labels(titles_df.source[:4])
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
...and create a train/test split.
N_TRAIN = int(len(titles_df) * 0.80)

titles_train, sources_train = (
    titles_df.title[:N_TRAIN], titles_df.source[:N_TRAIN])

titles_valid, sources_valid = (
    titles_df.title[N_TRAIN:], titles_df.source[N_TRAIN:])

X_train, Y_train = titles_train.values, encode_labels(sources_train)
X_valid, Y_valid = titles_val...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
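The 80/20 slicing logic can be checked on a toy list (a hypothetical `train_test_split_80` helper mirroring the N_TRAIN computation above):

```python
def train_test_split_80(items):
    """Split a sequence into the first 80% and the remaining 20%."""
    n_train = int(len(items) * 0.80)
    return items[:n_train], items[n_train:]

train, test = train_test_split_80(list(range(10)))
# 10 items -> 8 for training, 2 held out
```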
Swivel Model
We'll build a simple text classification model using a TensorFlow Hub embedding module derived from Swivel. Swivel is an algorithm that essentially factorizes word co-occurrence matrices to create word embeddings. TF-Hub hosts the pretrained gnews-swivel-20dim-with-oov 20-dimensional Swivel module.
SWIVEL = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim-with-oov/1"
swivel_module = KerasLayer(SWIVEL, output_shape=[20], input_shape=[],
                           dtype=tf.string, trainable=True)
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
The build_model function is written so that the TF Hub module can easily be exchanged with another module.
def build_model(hub_module, model_name):
    inputs = Input(shape=[], dtype=tf.string, name="text")
    module = hub_module(inputs)
    h1 = Dense(16, activation='relu', name="h1")(module)
    outputs = Dense(N_CLASSES, activation='softmax', name='outputs')(h1)
    model = Model(inputs=inputs, outputs=[outputs], name=m...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Train and evaluate the model
With the model defined and the data set up, next we'll train and evaluate the model.
# set up train and validation data
train_data = (X_train, Y_train)
val_data = (X_valid, Y_valid)
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
For training we'll call train_and_evaluate on txtcls_model.
txtcls_history = train_and_evaluate(train_data, val_data, txtcls_model)
history = txtcls_history

pd.DataFrame(history.history)[['loss', 'val_loss']].plot()
pd.DataFrame(history.history)[['accuracy', 'val_accuracy']].plot()
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Calling predict on the model head produces output from the final dense layer. This final layer is used to compute categorical cross-entropy during training.
txtcls_model.predict(x=["YouTube introduces Video Chapters to make it easier to navigate longer videos"])
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
We can save the model artifacts in the local directory called ./txtcls_swivel.
tf.saved_model.save(txtcls_model, './txtcls_swivel/')
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
...and examine the model's serving default signature. As expected, the model takes as input a text string (e.g. an article title) and returns a 3-dimensional vector of floats (i.e. the softmax output layer).
!saved_model_cli show \
    --tag_set serve \
    --signature_def serving_default \
    --dir ./txtcls_swivel/
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
To simplify the returned predictions, we'll modify the model signature so that the model outputs the predicted article source (either nytimes, techcrunch, or github) rather than the final softmax layer. We'll also return the 'confidence' of the model's prediction. This will be the softmax value corresponding to the pred...
@tf.function(input_signature=[tf.TensorSpec([None], dtype=tf.string)])
def source_name(text):
    labels = tf.constant(['github', 'nytimes', 'techcrunch'], dtype=tf.string)
    probs = txtcls_model(text, training=False)
    indices = tf.argmax(probs, axis=1)
    pred_source = tf.gather(params=labels, indices=indices)
    ...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
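The core of source_name above (argmax over the softmax scores, then a label lookup plus that score as the confidence) can be mirrored in plain Python, assuming the same label order:

```python
LABELS = ['github', 'nytimes', 'techcrunch']

def source_and_confidence(probs):
    """probs: list of per-class softmax scores for one example.
    Returns (predicted label, score of that label)."""
    idx = max(range(len(probs)), key=lambda i: probs[i])  # argmax
    return LABELS[idx], probs[idx]                        # gather + max prob

pred = source_and_confidence([0.1, 0.2, 0.7])
```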
Now, we'll re-save the new Swivel model that has this updated model signature by referencing the source_name function for the model's serving_default.
shutil.rmtree('./txtcls_swivel', ignore_errors=True)
txtcls_model.save('./txtcls_swivel',
                  signatures={'serving_default': source_name})
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Examine the model signature to confirm the changes:
!saved_model_cli show \
    --tag_set serve \
    --signature_def serving_default \
    --dir ./txtcls_swivel/
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Now when we request predictions using the updated serving input function, the model will return the predicted article source as a readable string, along with the model's confidence for that prediction.
title1 = "House Passes Sweeping Policing Bill Targeting Racial Bias and Use of Force"
title2 = "YouTube introduces Video Chapters to make it easier to navigate longer videos"
title3 = "As facebook turns 10 zuckerberg wants to change how tech industry works"

restored = tf.keras.models.load_model('./txtcls_swivel')
inf...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Deploy the model for online serving
Once the model is trained and the assets saved, deploying the model to GCP is straightforward. After some time, you should be able to see your deployed model and its version on the model page of the GCP console.
%%bash
MODEL_NAME="txtcls"
MODEL_VERSION="swivel"
MODEL_LOCATION="./txtcls_swivel/"

gcloud ai-platform models create ${MODEL_NAME}
gcloud ai-platform versions create ${MODEL_VERSION} \
    --model ${MODEL_NAME} \
    --origin ${MODEL_LOCATION} \
    --staging-bucket gs://${BUCKET} \
    --runtime-version=2.1
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Set up the Evaluation job on CAIP
Now that the model is deployed, go to Cloud AI Platform to see the model version you've deployed, and set up an evaluation job by clicking the "Create Evaluation Job" button. You will be asked to provide some relevant information:
- Job description: txtcls_swivel_eval
- Mode...
%load_ext google.cloud.bigquery

%%bigquery --project $PROJECT
SELECT * FROM `txtcls_eval.swivel`
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Now, every time this model version receives an online prediction request, this information will be captured and stored in the BQ table. Note, this happens every time because we set the sampling proportion to 100%.
Send prediction requests to your model
Here are some article titles and their groundtruth sources that we ...
%%writefile input.json
{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}

!gcloud ai-platform predict \
    --model txtcls \
    --json-instances input.json \
    --version swivel

%%writefile input.json
{"text": "A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison"}

!...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Summarizing the results from our model:

| title | groundtruth | predicted |
|---|---|---|
| YouTube introduces Video Chapters to make it easier to navigate longer videos | techcrunch | techcrunch |
| A Filmmaker Put Away for Tax Fraud Takes Us Inside a British Prison | nytimes | techcrunch |
| A native Mac app wrappe...
%%bigquery --project $PROJECT SELECT * FROM `txtcls_eval.swivel`
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Provide the ground truth for the raw prediction input
Notice the groundtruth is missing. We'll update the evaluation table to contain the ground truth.
%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET groundtruth = '{"predictions": [{"source": "techcrunch"}]}'
WHERE raw_data = '{"instances": [{"text": "YouTube introduces Video Chapters to make it easier to navigate longer videos"}]}';

%%bigquery --project $PROJECT
UPDATE `txtcls_eval.swivel`
SET...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
We can confirm that the ground truth has been properly added to the table.
%%bigquery --project $PROJECT SELECT * FROM `txtcls_eval.swivel`
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Compute evaluation metrics
With the raw prediction input, the model output, and the groundtruth in one place, we can evaluate how our model performs, and how it performs across various aspects (e.g. over time, across model versions, across labels, etc.).
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from sklearn.metrics import precision_recall_fscore_support as score
from sklearn.metrics import classification_report
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Using regular expressions, we can extract the model predictions into an easier-to-read format:
%%bigquery --project $PROJECT
SELECT
  model,
  model_version,
  time,
  REGEXP_EXTRACT(raw_data, r'.*"text": "(.*)"') AS text,
  REGEXP_EXTRACT(raw_prediction, r'.*"source": "(.*?)"') AS prediction,
  REGEXP_EXTRACT(raw_prediction, r'.*"confidence": (0.\d{2}).*') AS confidence,
  REGEXP_EXTRACT(groundtruth, r'.*"sourc...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
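The same field extraction can be done client-side with Python's re module; the patterns below are analogous to (though not identical to) the REGEXP_EXTRACT calls above, applied to hypothetical raw record strings:

```python
import re

def extract_fields(raw_data, raw_prediction):
    """Pull the title, predicted source, and confidence out of raw JSON strings."""
    text = re.search(r'"text": "(.*?)"', raw_data).group(1)
    source = re.search(r'"source": "(.*?)"', raw_prediction).group(1)
    confidence = float(re.search(r'"confidence": (0\.\d+)', raw_prediction).group(1))
    return text, source, confidence

fields = extract_fields(
    '{"instances": [{"text": "Some title"}]}',
    '{"predictions": [{"source": "nytimes", "confidence": 0.92}]}')
```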
Or a full classification report from the sklearn library:
print(classification_report(y_true=groundtruth, y_pred=prediction))
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
We can also examine a confusion matrix:
cm = confusion_matrix(groundtruth, prediction, labels=sources)
ax = plt.subplot()
sns.heatmap(cm, annot=True, ax=ax, cmap="Blues")

# labels, title and ticks
ax.set_xlabel('Predicted labels')
ax.set_ylabel('True labels')
ax.set_title('Confusion Matrix')
ax.xaxis.set_ticklabels(sources)
ax.yaxis.set_ticklabels(source...
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Examine eval metrics by model version or timestamp
By specifying the same evaluation table, two different model versions can be evaluated. Also, since the timestamp is captured, it is straightforward to evaluate model performance over time.
now = pd.Timestamp.now(tz='UTC')
one_week_ago = now - pd.DateOffset(weeks=1)
one_month_ago = now - pd.DateOffset(months=1)

df_prev_week = df_results[df_results.time > one_week_ago]
df_prev_month = df_results[df_results.time > one_month_ago]
df_prev_month
05_resilience/continuous_eval.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Count the number of requests per path or address
# First parse the line
# Then get the url component of the line if the line was successfully parsed
# Then map it to one, e.g. ('/', 1)
# Then reduce it to count them
counts = file.map(lambda line: reg.match(line))\
    .map(lambda group: group.group('url') if group else None)\
    .map(lambda url: (url, 1))\
    .redu...
Spark.ipynb
0Rick0/Fontys-DS-GCD
mit
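The map/reduceByKey pipeline above can be mirrored in plain Python to show the shape of the result (hypothetical log lines, with a simplified regex standing in for reg):

```python
import re
from collections import Counter

LOG_RE = re.compile(r'"[A-Z]+ (?P<url>\S+) HTTP')  # simplified access-log pattern

def count_urls(lines):
    """Count requests per url, mirroring the Spark map/reduceByKey steps."""
    counts = Counter()
    for line in lines:
        m = LOG_RE.search(line)               # parse the line
        url = m.group('url') if m else None   # url component, or None
        counts[url] += 1                      # map to one, reduce by key
    return counts

lines = [
    '1.2.3.4 - - "GET / HTTP/1.1" 200',
    '1.2.3.4 - - "GET /about HTTP/1.1" 200',
    '5.6.7.8 - - "GET / HTTP/1.1" 200',
]
counts = count_urls(lines)
```

`counts.most_common(1)` then gives the equivalent of the `sortBy` ordering used in the next cell.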
Ordering
# Ordering is also quite easy to do in spark. For instance for the most commonly requested file:
counts = file.map(lambda line: reg.match(line))\
    .map(lambda group: group.group('url') if group else None)\
    .map(lambda url: (url, 1))\
    .reduceByKey(lambda a,b: a+b)\
    .sortBy(lambda pair: -pair[1]) # this ord...
Spark.ipynb
0Rick0/Fontys-DS-GCD
mit
Map Reduce wordcount in Spark
gutenberg_file = sc.textFile('hdfs://localhost:8020/user/root/gutenberg_total.txt')

import string
import sys
sys.path.insert(0, '.')
sys.path.insert(0, './Portfolio')
from MapReduce_code import allStopWords as stopwords

punc = str.maketrans('', '', string.punctuation)
def rem_punctuation(inp):
    return inp.trans...
Spark.ipynb
0Rick0/Fontys-DS-GCD
mit
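The punctuation-stripping and stopword-filtering steps of such a wordcount can be sketched without Spark (a small stand-in stopword set replaces the imported allStopWords):

```python
import string
from collections import Counter

STOPWORDS = {'the', 'and', 'of'}  # illustrative stand-in for allStopWords
PUNC = str.maketrans('', '', string.punctuation)

def word_count(text):
    """Strip punctuation, lowercase, split, drop stopwords, and count."""
    words = text.translate(PUNC).lower().split()
    return Counter(w for w in words if w not in STOPWORDS)

wc = word_count("The cat, the dog and the bird.")
```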
Now we try to get something like this:
from pyquickhelper import open_html_form
params = {"module":"", "version":"v..."}
open_html_form(params, "fill the fields", "form1")
form1
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
With a password:
from pyquickhelper import open_html_form
params = {"login":"", "password":""}
open_html_form(params, "credential", "credential")
credential
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
To execute an instruction when the Ok button is clicked:
my_address = None

def custom_action(x):
    x["combined"] = x["first_name"] + " " + x["last_name"]
    return str(x)

from pyquickhelper import open_html_form
params = { "first_name":"", "last_name":"" }
open_html_form(params, title="enter your name", key_save="my_address", hook="custom_action(my_address)")
my_address
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
Animated output
from pyquickhelper.ipythonhelper import StaticInteract, RangeWidget, RadioWidget

def show_fib(N):
    sequence = ""
    a, b = 0, 1
    for i in range(N):
        sequence += "{0} ".format(a)
        a, b = b, a + b
    return sequence

StaticInteract(show_fib, N=RangeWidget(1, 100, default=10))
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
In order to have a fast display, the function show_fib is called for each possible value of N. If the output is a graph, all possible graphs will be generated.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

def plot(amplitude, color):
    fig, ax = plt.subplots(figsize=(4, 3),
                           subplot_kw={'axisbg':'#EEEEEE', 'axisbelow':True})
    ax.grid(color='w', linewidth=2, linestyle='solid')
    x ...
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
A form with IPython 3+
Not yet ready, and the form does not show up in the converted notebook; you need to execute the notebook.
from IPython.display import display
from IPython.html.widgets import Text

last_name = Text(description="Last Name")
first_name = Text(description="First Name")
display(last_name)
display(first_name)
first_name.value, last_name.value
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
Automated menu
from jyquickhelper import add_notebook_menu
add_notebook_menu()
_unittests/ut_ipythonhelper/data/having_a_form_in_a_notebook.ipynb
sdpython/pyquickhelper
mit
Account List
r = accounts.AccountList()
client.request(r)
print(r.response)
Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Account Summary
r = accounts.AccountSummary(accountID)
client.request(r)
print(r.response)
pd.Series(r.response['account'])
Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Account Instruments
r = accounts.AccountInstruments(accountID=accountID, params="EUR_USD")
client.request(r)
pd.DataFrame(r.response['instruments'])
Oanda v20 REST-oandapyV20/03.00 Account Information.ipynb
anthonyng2/FX-Trading-with-Python-and-Oanda
mit
Getting a dataset
The first step is going to be to load our data. As our example, we will be using the dataset CalTech-101, which contains around 9000 labeled images belonging to 101 object categories. However, we will exclude 5 of the categories which have the most images. This is in order to keep the class distributi...
!echo "Downloading 101_Object_Categories for image notebooks"
!curl -L -o 101_ObjectCategories.tar.gz --progress-bar http://www.vision.caltech.edu/Image_Datasets/Caltech101/101_ObjectCategories.tar.gz
!tar -xzf 101_ObjectCategories.tar.gz
!rm 101_ObjectCategories.tar.gz
!ls

root = '101_ObjectCategories'
exclude = ['B...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
This function is useful for pre-processing the data into an image and input vector.
# helper function to load image and return it and input vector
def get_image(path):
    img = image.load_img(path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    return img, x
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Load all the images from the root folder.
data = []
for c, category in enumerate(categories):
    images = [os.path.join(dp, f) for dp, dn, filenames in os.walk(category)
              for f in filenames
              if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
    for img_path in images:
        img, x = get_image(img_path)
        data.ap...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Randomize the data order.
random.shuffle(data)
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
create training / validation / test split (70%, 15%, 15%)
idx_val = int(train_split * len(data))
idx_test = int((train_split + val_split) * len(data))

train = data[:idx_val]
val = data[idx_val:idx_test]
test = data[idx_test:]
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
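The two-cutoff split can be verified on a toy list; this sketch uses integer percentages instead of the notebook's train_split/val_split floats, purely to sidestep floating-point rounding in the illustration:

```python
def three_way_split(data, train_pct=70, val_pct=15):
    """Split a sequence into train/val/test by integer percentages."""
    n = len(data)
    idx_val = n * train_pct // 100
    idx_test = n * (train_pct + val_pct) // 100
    return data[:idx_val], data[idx_val:idx_test], data[idx_test:]

train, val, test = three_way_split(list(range(100)))
# 100 items -> 70 train, 15 val, 15 test
```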
Separate the data from the labels.
x_train, y_train = np.array([t["x"] for t in train]), [t["y"] for t in train]
x_val, y_val = np.array([t["x"] for t in val]), [t["y"] for t in val]
x_test, y_test = np.array([t["x"] for t in test]), [t["y"] for t in test]
print(y_test)
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Pre-process the data as before by making sure it's float32 and normalized between 0 and 1.
# normalize data
x_train = x_train.astype('float32') / 255.
x_val = x_val.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.

# convert labels to one-hot vectors
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
y_test = keras.utils.to_categ...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
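The 0-255 to 0-1 scaling can be checked on a few pixel values (a plain-Python sketch of the astype('float32') / 255. step):

```python
def normalize(pixels):
    """Scale 8-bit pixel values (0-255) into floats in [0, 1]."""
    return [p / 255. for p in pixels]

scaled = normalize([0, 51, 255])
# 0 -> 0.0, 51 -> 0.2, 255 -> 1.0
```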
Let's get a summary of what we have.
# summary
print("finished loading %d images from %d categories"%(len(data), num_classes))
print("train / validation / test split: %d, %d, %d"%(len(x_train), len(x_val), len(x_test)))
print("training data shape: ", x_train.shape)
print("training labels shape: ", y_train.shape)
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
If everything worked properly, you should have loaded a bunch of images, and split them into three sets: train, val, and test. The shape of the training data should be (n, 224, 224, 3) where n is the size of your training set, and the labels should be (n, c) where c is the number of classes (97 in the case of 101_Objec...
images = [os.path.join(dp, f) for dp, dn, filenames in os.walk(root)
          for f in filenames
          if os.path.splitext(f)[1].lower() in ['.jpg','.png','.jpeg']]
idx = [int(len(images) * random.random()) for i in range(8)]
imgs = [image.load_img(images[i], target_size=(224, 224)) for i in idx]
concat_image = np.concatenate([np.asa...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
First training a neural net from scratch
Before doing the transfer learning, let's first build a neural network from scratch for doing classification on our dataset. This will give us a baseline to compare to our transfer-learned network later. The network we will construct contains 4 alternating convolutional and max-...
# build the network
model = Sequential()
print("Input dimensions: ", x_train.shape[1:])

model.add(Conv2D(32, (3, 3), input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
We've created a medium-sized network with ~1.2 million weights and biases (the parameters). Most of them are leading into the one pre-softmax fully-connected layer "dense_5". We can now go ahead and train our model for 100 epochs with a batch size of 128. We'll also record its history so we can plot the loss over time ...
# compile the model to use categorical cross-entropy loss and the adam optimizer
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

history = model.fit(x_train, y_train,
                    batch_size=128,
                    epochs=10,
                    ...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Let's plot the validation loss and validation accuracy over time.
fig = plt.figure(figsize=(16,4))
ax = fig.add_subplot(121)
ax.plot(history.history["val_loss"])
ax.set_title("validation loss")
ax.set_xlabel("epochs")

ax2 = fig.add_subplot(122)
ax2.plot(history.history["val_acc"])
ax2.set_title("validation accuracy")
ax2.set_xlabel("epochs")
ax2.set_ylim(0, 1)

plt.show()
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Notice that the validation loss begins to actually rise after around 16 epochs, even though validation accuracy remains roughly between 40% and 50%. This suggests our model begins overfitting around then, and best performance would have been achieved if we had stopped early around then. Nevertheless, our accuracy would...
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Finally, we see that we have achieved a (top-1) accuracy of around 49%. That's not too bad for 6000 images, considering that if we were to use a naive strategy of taking random guesses, we would have only gotten around 1% accuracy.
Transfer learning by starting with existing network
Now we can move on to the main stra...
vgg = keras.applications.VGG16(weights='imagenet', include_top=True)
vgg.summary()
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Notice that VGG16 is much bigger than the network we constructed earlier. It contains 13 convolutional layers and two fully connected layers at the end, and has over 138 million parameters, around 100 times as many as the network we made above. Like our first network, the majority of the parameters are sto...
# make a reference to VGG's input layer
inp = vgg.input

# make a new softmax layer with num_classes neurons
new_classification_layer = Dense(num_classes, activation='softmax')

# connect our new layer to the second to last layer in VGG, and make a reference to it
out = new_classification_layer(vgg.layers[-2].output)
...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
We are going to retrain this network, model_new on the new dataset and labels. But first, we need to freeze the weights and biases in all the layers in the network, except our new one at the end, with the expectation that the features that were learned in VGG should still be fairly relevant to the new image classificat...
# make all layers untrainable by freezing weights (except for last layer)
for l, layer in enumerate(model_new.layers[:-1]):
    layer.trainable = False

# ensure the last layer is trainable/not frozen
for l, layer in enumerate(model_new.layers[-1:]):
    layer.trainable = True

model_new.compile(loss='categorical_cross...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Looking at the summary, we see the network is identical to the VGG model we instantiated earlier, except the last layer, formerly a 1000-neuron softmax, has been replaced by a new 97-neuron softmax. Additionally, we still have roughly 134 million weights, but now the vast majority of them are "non-trainable params" bec...
history2 = model_new.fit(x_train, y_train,
                         batch_size=128,
                         epochs=10,
                         validation_data=(x_val, y_val))
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Our validation accuracy hovers close to 80% towards the end, which is more than 30% improvement on the original network trained from scratch (meaning that we make the wrong prediction on 20% of samples, rather than 50%). It's worth noting also that this network actually trains slightly faster than the original network...
fig = plt.figure(figsize=(16,4))
ax = fig.add_subplot(121)
ax.plot(history.history["val_loss"])
ax.plot(history2.history["val_loss"])
ax.set_title("validation loss")
ax.set_xlabel("epochs")

ax2 = fig.add_subplot(122)
ax2.plot(history.history["val_acc"])
ax2.plot(history2.history["val_acc"])
ax2.set_title("validation a...
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Notice that whereas the original model began overfitting around epoch 16, the new model continued to slowly decrease its loss over time, and likely would have improved its accuracy slightly with more iterations. The new model made it to roughly 80% top-1 accuracy (in the validation set) and continued to improve slowly ...
loss, accuracy = model_new.evaluate(x_test, y_test, verbose=0)
print('Test loss:', loss)
print('Test accuracy:', accuracy)
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
To predict a new image, simply run the following code to get the probabilities for each class.
img, x = get_image('101_ObjectCategories/airplanes/image_0003.jpg')
probabilities = model_new.predict([x])
examples/fundamentals/transfer_learning.ipynb
ml4a/ml4a-guides
gpl-2.0
Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence fr...
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionar...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model

Input
Implement the model_inputs() f...
def model_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate, keep probability)
    """
    input = tf.placeholder(tf.int32, [None, None], name="input")
    targets = tf.placeholder(tf.int32, [None, None], name="targets")
    learning_rat...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
    """
    Preprocess target data for decoding
    :param target_data: Target Placeholder
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :param batch_size: Batch Size
    :return: Preprocessed target data
    ...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
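The slice-and-concat operation described above can be sketched on plain Python lists (GO_ID is an illustrative value here; the real id comes from target_vocab_to_int['<GO>']):

```python
GO_ID = 1  # illustrative; in the notebook this is target_vocab_to_int['<GO>']

def process_decoding_input_py(target_batch):
    """Drop the last id of each row and prepend the GO id."""
    return [[GO_ID] + row[:-1] for row in target_batch]

processed = process_decoding_input_py([[4, 5, 6, 3],
                                       [7, 8, 9, 3]])
```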
Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
    """
    Create encoding layer
    :param rnn_inputs: Inputs for the RNN
    :param rnn_size: RNN Size
    :param num_layers: Number of layers
    :param keep_prob: Dropout keep probability
    :return: RNN state
    """
    # Encoder
    enc_cell = tf....
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for training
    :param encoder_state: Encoder State
    :param dec_cell: Decoder RNN Cell
    :param dec_embed_input: Decoder embedded ...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
                         end_of_sequence_id, maximum_length, vocab_size,
                         decoding_scope, output_fn, keep_prob):
    """
    Create a decoding layer for inference
    :param encoder_state: Encoder state
    :param dec_cell: Decoder R...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer. Create an RNN cell for decoding using rnn_size and num_layers. Create the output function using a lambda to transform its input, logits, to class logits. Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_le...
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size,
                   sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob):
    """
    Create decoding layer
    :param dec_embed_input: Decoder embedded input
    :param dec_embeddings: Decoder embeddings
    :param encoder_...
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: Apply embedding to the input data for the encoder. Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob). Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function...
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
    """
    Build the Sequence-to-Sequence part of the neural network
    :param input_data: Inpu...
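The target-data processing step described above can be sketched in plain Python: drop the last token of each target sequence and prepend the `<GO>` id (the TF version would do this with tf.strided_slice and tf.concat on tensors). The helper name and ids below are illustrative only, not part of the original notebook.

```python
# Plain-Python sketch of the target preprocessing: remove the last token
# of each sequence and prepend the '<GO>' id (here assumed to be 1).
def process_decoding_input_sketch(target_batch, go_id):
    return [[go_id] + seq[:-1] for seq in target_batch]

batch = [[4, 5, 6], [7, 8, 9]]      # hypothetical batches of word ids
decoder_input = process_decoding_input_sketch(batch, go_id=1)
```

The decoder then sees `<GO>` plus the shifted targets, which is exactly what teacher forcing requires.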
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_siz...
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.7
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary to the <UNK> word id.
def sentence_to_seq(sentence, vocab_to_int):
    """
    Convert a sentence to a sequence of ids
    :param sentence: String
    :param vocab_to_int: Dictionary to go from the words to an id
    :return: List of word ids
    """
    ret_val = []
    for word in sentence.split():
        word_lower = word.lower()
        ...
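The same preprocessing can be condensed into a one-line comprehension. This is a sketch, not the notebook's implementation, and the toy vocabulary is hypothetical:

```python
# Lowercase the sentence and map each word to its id, falling back to
# the '<UNK>' id for out-of-vocabulary words.
def sentence_to_seq_sketch(sentence, vocab_to_int):
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split()]

toy_vocab = {'<UNK>': 0, 'hello': 1, 'world': 2}   # hypothetical vocabulary
seq = sentence_to_seq_sketch('Hello brave World', toy_vocab)
```

`dict.get` with a default makes the `<UNK>` fallback explicit without any branching.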
language-translation/dlnd_language_translation.ipynb
xtr33me/deep-learning
mit
We need a way to easily extract the actual data points from the JSON. The data contains multiple operationalLayers (each holding one layer), so if we pass a title, we should return the operationalLayer with that title; otherwise, just return the first one.
import requests

def extract_features(url, title=None):
    r = requests.get(url)
    idx = 0
    found = False
    if title:
        while idx < len(r.json()['operationalLayers']):
            for item in r.json()['operationalLayers'][idx].items():
                if item[0] == 'title' and item[1] == title:
                    ...
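The title lookup can be exercised offline on an already-parsed dict. The payload below is a hypothetical stand-in for the ArcGIS JSON, and `find_layer` is a sketch, not the notebook's function:

```python
# Hypothetical parsed payload shaped like the ArcGIS response.
payload = {'operationalLayers': [
    {'title': 'Removed cameras', 'id': 0},
    {'title': 'Few crashes', 'id': 1},
]}

def find_layer(payload, title=None):
    # Return the layer whose 'title' matches, or the first layer if no
    # title was requested; None when the title is absent.
    layers = payload['operationalLayers']
    if title is None:
        return layers[0]
    for layer in layers:
        if layer.get('title') == title:
            return layer
    return None

layer = find_layer(payload, 'Few crashes')
```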
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Now we need to filter out the bad points from few_crashes - the ones with 0 given as the lat/lon.
filtered_few_crashes = [ point for point in few_crashes if point['attributes']['LONG_X'] != 0 and point['attributes']['LAT_Y'] != 0]
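The same comprehension can be checked on a couple of hypothetical sample points (the real data comes from the ArcGIS feed above):

```python
# Two sample points: one valid, one with the bogus (0, 0) coordinates.
few_crashes_sample = [
    {'attributes': {'LABEL': 'A', 'LONG_X': -87.6, 'LAT_Y': 41.9}},
    {'attributes': {'LABEL': 'B', 'LONG_X': 0, 'LAT_Y': 0}},
]
filtered = [p for p in few_crashes_sample
            if p['attributes']['LONG_X'] != 0 and p['attributes']['LAT_Y'] != 0]
```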
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Now let's build a dictionary of all the cameras, so we can merge all their info.
cameras = {}
for point in all_cameras:
    label = point['attributes']['LABEL']
    if label not in cameras:
        cameras[label] = point
        cameras[label]['attributes']['Few crashes'] = False
        cameras[label]['attributes']['To be removed'] = False
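A self-contained sketch of this merge pattern, using hypothetical intersection labels: key the points by `LABEL`, keep only the first occurrence, and initialize the flags that later steps will set.

```python
# Hypothetical points; a duplicate label is kept only once.
points = [
    {'attributes': {'LABEL': 'Cicero-Fullerton'}},
    {'attributes': {'LABEL': 'Cicero-Fullerton'}},
    {'attributes': {'LABEL': 'Halsted-63rd'}},
]
cameras_sketch = {}
for point in points:
    label = point['attributes']['LABEL']
    if label not in cameras_sketch:
        cameras_sketch[label] = point
        cameras_sketch[label]['attributes']['Few crashes'] = False
        cameras_sketch[label]['attributes']['To be removed'] = False
```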
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Set the 'Few crashes' flag to True for those intersections that show up in filtered_few_crashes.
for point in filtered_few_crashes:
    label = point['attributes']['LABEL']
    if label not in cameras:
        print 'Missing label %s' % label
    else:
        cameras[label]['attributes']['Few crashes'] = True
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Set the 'To be removed' flag to True for those intersections that show up in removed_cameras.
for point in removed_cameras:
    label = point['attributes']['displaylabel'].replace(' and ', '-')
    if label not in cameras:
        print 'Missing label %s' % label
    else:
        cameras[label]['attributes']['To be removed'] = True
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
How many camera locations have few crashes and were slated to be removed?
counter = {
    'both': {'names': [], 'count': 0},
    'crashes only': {'names': [], 'count': 0},
    'removed only': {'names': [], 'count': 0}
}
for camera in cameras:
    if cameras[camera]['attributes']['Few crashes']:
        if cameras[camera]['att...
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
How does this list compare to the one currently published on the Chicago Data Portal?
from csv import DictReader
from StringIO import StringIO

data_portal_url = 'https://data.cityofchicago.org/api/views/thvf-6diy/rows.csv?accessType=DOWNLOAD'
r = requests.get(data_portal_url)
fh = StringIO(r.text)
reader = DictReader(fh)

def cleaner(str):
    filters = [
        ('Stony?Island', 'Stony Island'),
        ...
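The cleaner pattern is just an ordered chain of string replacements. A minimal sketch, where the filter pair is taken from the one visible above and the input string is hypothetical:

```python
# Apply an ordered list of (bad, good) replacements to a string.
def clean(s, filters):
    for bad, good in filters:
        s = s.replace(bad, good)
    return s

fixed = clean('Stony?Island-76th', [('Stony?Island', 'Stony Island')])
```

Order matters when one replacement's output can match a later pattern, so keeping the filters in a list rather than a dict preserves that control.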
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Now we need to compute how much money has been generated at each intersection - assuming a $100 fine for each violation. In order to do that, we need to make the violation data line up with the camera location data. Then, we'll add 3 fields: number of violations overall; number on/after 12/22/2014; number on/after 3/6/20...
import requests
from csv import DictReader
from datetime import datetime
from StringIO import StringIO

data_portal_url = 'https://data.cityofchicago.org/api/views/spqx-js37/rows.csv?accessType=DOWNLOAD'
r = requests.get(data_portal_url)
fh = StringIO(r.text)
reader = DictReader(fh)

def violation_cleaner(str):
    fil...
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Now it's time to ask some specific questions. First: how much money has the program raised overall? (Note that this data only goes back to 7/1/2014, several years after the program began.)
import locale
locale.setlocale(locale.LC_ALL, '')

total = 0
missing_tickets = []
for camera in cameras:
    try:
        total += cameras[camera]['attributes']['total tickets']
    except KeyError:
        missing_tickets.append(camera)
print '%d tickets have been issued since 7/1/2014, raising %s' % (total, locale...
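The totalling loop can be sketched on hypothetical camera records: intersections without ticket data are collected instead of crashing the sum.

```python
# Hypothetical records; 'B' has no 'total tickets' key.
cameras_sample = {
    'A': {'attributes': {'total tickets': 120}},
    'B': {'attributes': {}},
    'C': {'attributes': {'total tickets': 80}},
}
total = 0
missing = []
for name, cam in cameras_sample.items():
    try:
        total += cam['attributes']['total tickets']
    except KeyError:
        missing.append(name)
```

Tracking the misses in a list makes it easy to audit which intersections lack violation data afterwards.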
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Since 12/22/2014, how much money has been generated by low-crash intersections?
total = 0
low_crash_total = 0
for camera in cameras:
    try:
        total += cameras[camera]['attributes']['tickets since 12/22/2014']
        if cameras[camera]['attributes']['Few crashes']:
            low_crash_total += cameras[camera]['attributes']['tickets since 12/22/2014']
    except KeyError:
        continue...
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
How about since 3/6/2015?
total = 0
low_crash_total = 0
slated_for_closure_total = 0
for camera in cameras:
    try:
        total += cameras[camera]['attributes']['tickets since 3/6/2015']
        if cameras[camera]['attributes']['Few crashes']:
            low_crash_total += cameras[camera]['attributes']['tickets since 3/6/2015']
        if c...
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Now let's generate a CSV of the cameras data for export.
from csv import DictWriter

output = []
for camera in cameras:
    data = {
        'intersection': camera,
        'last ticket date': cameras[camera]['attributes'].get('last ticket date', ''),
        'tickets since 7/1/2014': cameras[camera]['attributes'].get('total tickets', 0),
        'revenue since 7/1/2014': ca...
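A self-contained sketch of the export step using `csv.DictWriter`, writing to an in-memory buffer instead of a file (Python 3 idiom; the rows and field names are hypothetical):

```python
import csv
import io

rows = [
    {'intersection': 'Cicero-Fullerton', 'tickets since 7/1/2014': 120},
    {'intersection': 'Halsted-63rd', 'tickets since 7/1/2014': 80},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=['intersection', 'tickets since 7/1/2014'])
writer.writeheader()          # emit the header row from fieldnames
writer.writerows(rows)        # one CSV row per dict
csv_text = buf.getvalue()
```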
Red Light Camera Locations.ipynb
newsapps/public-notebooks
mit
Processing We first need to resolve references among papers in the user's base
papers_data = pickle.load(open('papers_data.pkl', 'rb'))

# let's collect all papers and their references into one array,
# type = 1 means paper is in the user's base, 0 - only in references
all_papers = []
for k in papers_data:
    all_papers.append(papers_data[k]['metadata'])
    all_papers[-1]['type'] = 1
    for ref in...
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
papers_data = pickle.load(open('papers_data_processed.pkl', 'rb'))

import string

# split long strings into lines, also replace non-printable characters with their representations
def split_long(s, split_length=20, clip_size=100):
    s = ''.join(x if x in string.printable else repr(x) for x in s)
    if clip_size i...
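A complete sketch of what split_long appears to do, with stdlib textwrap handling the line wrapping (the function name and exact clipping behavior are assumptions, since the cell above is truncated):

```python
import string
import textwrap

def split_long_sketch(s, split_length=20, clip_size=100):
    # Replace non-printable characters with their repr, clip to
    # clip_size characters, then wrap into fixed-width lines.
    s = ''.join(c if c in string.printable else repr(c) for c in s)
    if clip_size is not None:
        s = s[:clip_size]
    return '\n'.join(textwrap.wrap(s, split_length))

wrapped = split_long_sketch('a' * 50, split_length=20, clip_size=30)
```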
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
create citation graph and plot it
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import seaborn as sns

node_degrees = []
codes = {}
for p in papers_data.values():
    if 'id' in p['metadata']:
        if p['metadata']['id'] not in codes:  # and len(added_nodes) < 20:
            codes[p['metadata']['id']] = len(codes)
            ...
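The incremental-id assignment used for the graph nodes can be written more compactly with `dict.setdefault`, which gives each unseen id the next integer code. The paper ids below are hypothetical:

```python
# Assign consecutive integer codes to paper ids, first come first served.
paper_ids = ['1207.0580', '1409.1556', '1207.0580', '1512.03385']
codes_sketch = {}
for pid in paper_ids:
    codes_sketch.setdefault(pid, len(codes_sketch))
```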
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
try a plot with the spring layout
pos = nx.spring_layout(G, iterations=30, k=0.1)
plt.figure(figsize=(7, 7))
nx.draw(G, pos, node_size=3, width=0.15,
        node_color=[node_colors[added_nodes[node]] for node in G.nodes()],
        with_labels=False, edge_color='gray');
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
Since the spring layout is not good at clustering papers semantically, we will try the node2vec library: https://github.com/aditya-grover/node2vec
import node2vec.src.node2vec as n2v
import node2vec
from gensim.models import Word2Vec

# we make an undirected representation of the directed graph by splitting the original vertices into "in" and "out" ones
graph = nx.Graph()
for i in range(len(list_nodes)):
    graph.add_node(str(added_nodes[list_nodes[i]]))
    graph...
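The in/out splitting can be illustrated without networkx: each directed edge (u, v) becomes an undirected edge between u's "out" copy and v's "in" copy. The suffix convention and edge list are hypothetical:

```python
# Turn directed edges into undirected edges between split vertices.
directed_edges = [('A', 'B'), ('B', 'C')]
undirected_edges = [(u + '_out', v + '_in') for u, v in directed_edges]
```

This keeps citation direction recoverable from the vertex names while letting undirected random walks (as in node2vec) traverse the graph.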
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
interactive visualization with plotly
import plotly.plotly as py
from plotly.graph_objs import *

# you need to set up the connection to the plotly API: https://plot.ly/python/getting-started/#initialization-for-online-plotting
edge_trace = Scatter(
    x=[],
    y=[],
    line=Line(width=0.5, color='#888'),
    hoverinfo='none',
    mode='lines')
for edge in...
fragments/plot_processed_papers.ipynb
gangiman/CiGraphVis
mit
Morse potential The Morse potential is given by $$ V(r) = D_e \left[1 - e^{-a(r-r_e)}\right]^2$$
x, lam = symbols("x lambda")
n = symbols("n", integer=True)

def morse_psi_n(n, x, lam, xe):
    # normalization factor uses 2*lam - 2*n - 1, matching the Laguerre parameter below
    Nn = sqrt((factorial(n)*(2*lam - 2*n - 1))/gamma(2*lam - n))
    z = 2*lam*exp(-(x - xe))
    psi = Nn*z**(lam - n - S(1)/2) * exp(-S(1)/2*z) * L(n, 2*lam - 2*n - 1, z)
    return psi

def morse_E_n(n, lam):
    return ...
QM_analytical solutions.ipynb
nicoguaro/notebooks_examples
mit
Pöschl-Teller potential The Pöschl-Teller potential is given by $$ V(x) = -\frac{\lambda(\lambda + 1)}{2} \operatorname{sech}^2(x)$$
def posch_teller_psi_n(n, x, lam):
    psi = P(lam, n, tanh(x))
    return psi

posch_teller_psi_n(n, x, lam)

def posch_teller_E_n(n, lam):
    if n <= lam:
        return -n**2 / 2
    else:
        raise ValueError("n should not be greater than lambda.")

posch_teller_E_n(5, 6)
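The bound-state energies $E_n = -n^2/2$ of the Pöschl-Teller well only exist for $n \le \lambda$. A plain-Python sanity check of that rule (a sketch with an assumed function name, independent of sympy):

```python
# Bound-state energy of the Poschl-Teller well; valid only for n <= lam.
def posch_teller_E_sketch(n, lam):
    if n > lam:
        raise ValueError("n should not be greater than lambda.")
    return -n**2 / 2

E5 = posch_teller_E_sketch(5, 6)
```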
QM_analytical solutions.ipynb
nicoguaro/notebooks_examples
mit
References Pöschl, G.; Teller, E. (1933). "Bemerkungen zur Quantenmechanik des anharmonischen Oszillators". Zeitschrift für Physik 83 (3–4): 143–151. doi:10.1007/BF01331132. Siegfried Flügge Practical Quantum Mechanics (Springer, 1998) Lekner, John (2007). "Reflectionless eigenstates of the sech2 potential". Americ...
from IPython.core.display import HTML

def css_styling():
    styles = open('./styles/custom_barba.css', 'r').read()
    return HTML(styles)

css_styling()
QM_analytical solutions.ipynb
nicoguaro/notebooks_examples
mit
Note that MPI for Python (mpi4py) is a requirement for shenfun, but the current solver cannot be used with more than one processor. Tensor product spaces <div id="sec:bases"></div> With the Galerkin method we need function spaces for both velocity and pressure, as well as for the nonlinear right hand side. A Dirichlet...
N = (45, 45)
family = 'Legendre'  # or use 'Chebyshev'
quad = 'GL'  # for Chebyshev use 'GC' or 'GL'
D0X = FunctionSpace(N[0], family, quad=quad, bc=(0, 0))
D0Y = FunctionSpace(N[1], family, quad=quad, bc=(0, 0))
binder/drivencavity.ipynb
spectralDNS/shenfun
bsd-2-clause
The two spaces are identical here (as long as $N_0 = N_1$), but we will use D0X in the $x$-direction and D0Y in the $y$-direction. Special attention is required by the moving lid. To get a solution with nonzero boundary condition at $y=1$ we need t...
D1Y = FunctionSpace(N[1], family, quad=quad, bc=(0, 1))
binder/drivencavity.ipynb
spectralDNS/shenfun
bsd-2-clause
where bc=(0, 1) fixes the values at $y=-1$ and $y=1$, respectively. For a regularized lid driven cavity the velocity of the top lid is $(1-x)^2(1+x)^2$ and not unity. To implement this boundary condition instead, we can make use of sympy and quite straightforwardly do
import sympy

x = sympy.symbols('x')
#D1Y = FunctionSpace(N[1], family, quad=quad, bc=(0, (1-x)**2*(1+x)**2))
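The regularized lid profile $(1-x)^2(1+x)^2$ can be checked numerically without shenfun or sympy: it vanishes at both walls and peaks at the centre, which is what makes the corners regular. A quick plain-Python check:

```python
# Regularized lid velocity profile as a plain function of x in [-1, 1].
lid = lambda x: (1 - x)**2 * (1 + x)**2

values = [lid(-1.0), lid(0.0), lid(1.0)]
```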
binder/drivencavity.ipynb
spectralDNS/shenfun
bsd-2-clause
Uncomment the last line to run the regularized boundary conditions. Otherwise, there is no difference at all between the regular and the regularized lid driven cavity implementations. The pressure basis that comes with no restrictions for the boundary is a little trickier. The reason for this has to do with inf-sup sta...
PX = FunctionSpace(N[0], family, quad=quad)
PY = FunctionSpace(N[1], family, quad=quad)
PX.slice = lambda: slice(0, N[0]-2)
PY.slice = lambda: slice(0, N[1]-2)
binder/drivencavity.ipynb
spectralDNS/shenfun
bsd-2-clause