markdown | code | output | license | path | repo_name |
|---|---|---|---|---|---|
Below I'm running images through the VGG network in batches. > **Exercise:** Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values). | # Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None
with tf.Session() as sess:
# TODO: Build the vgg network here
vgg = vgg16.Vgg16()
input_ = tf.placeholder(tf.float32, [None,224,224,3])
with tf.name_scope("content_vgg"):
vgg.build(input_)
for each in classes:
print("Starting {} images".format(each))
class_path = data_dir + each
files = os.listdir(class_path)
for ii, file in enumerate(files, 1):
# Add images to the current batch
# utils.load_image crops the input images for us, from the center
img = utils.load_image(os.path.join(class_path, file))
batch.append(img.reshape((1, 224, 224, 3)))
labels.append(each)
# Running the batch through the network to get the codes
if ii % batch_size == 0 or ii == len(files):
# Image batch to pass to VGG network
images = np.concatenate(batch)
# TODO: Get the values from the relu6 layer of the VGG network
feed_dict = {input_: images}
codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)
# Here I'm building an array of the codes
if codes is None:
codes = codes_batch
else:
codes = np.concatenate((codes, codes_batch))
# Reset to start building the next batch
batch = []
print('{} images processed'.format(ii))
# write codes to file
with open('codes', 'wb') as f:
codes.tofile(f)
# write labels to file
import csv
with open('labels', 'w') as f:
writer = csv.writer(f, delimiter='\n')
writer.writerow(labels) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Building the Classifier
Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work. | # read codes and labels from file
import csv
with open('labels') as f:
reader = csv.reader(f, delimiter='\n')
labels = np.array([each for each in reader if len(each) > 0]).squeeze()
with open('codes', 'rb') as f:
codes = np.fromfile(f, dtype=np.float32)
codes = codes.reshape((len(labels), -1)) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Data prep
As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! > **Exercise:** From scikit-learn, use [LabelBinarizer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html) to create one-hot encoded vectors from the labels. | from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same distribution of classes as the whole data set. The easiest way to accomplish both these goals is to use [`StratifiedShuffleSplit`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) from scikit-learn. You can create the splitter like so: ```ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)``` Then split the data with ```splitter = ss.split(x, y)```. `ss.split` returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use `next(splitter)` to get the indices. Be sure to read the [documentation](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedShuffleSplit.html) and the [user guide](http://scikit-learn.org/stable/modules/cross_validation.html#random-permutations-cross-validation-a-k-a-shuffle-split).> **Exercise:** Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets. | from sklearn.model_selection import StratifiedShuffleSplit
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
X = codes
y = labels_vecs
for train_index, test_index in ss.split(X, y):
train_x, train_y = X[train_index], y[train_index]
test_x, test_y = X[test_index], y[test_index]
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
X = test_x
y = test_y
for train_index, test_index in ss.split(X, y):
test_x, test_y = X[train_index], y[train_index]
val_x, val_y = X[test_index], y[test_index]
print("Train shapes (x, y):", train_x.shape, train_y.shape)
print("Validation shapes (x, y):", val_x.shape, val_y.shape)
print("Test shapes (x, y):", test_x.shape, test_y.shape) | Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
| MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
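A side note on the split above: the exercise text mentions that `ss.split` returns a generator you can also consume with `next(...)`. Here is a minimal, self-contained sketch of that pattern; the `codes_demo`/`labels_demo` arrays are synthetic stand-ins of my own (plain class labels rather than the one-hot `labels_vecs`), so the sizes differ from the real data.

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Synthetic stand-ins: 100 "codes" with 8 features, 5 balanced classes
codes_demo = np.random.rand(100, 8).astype(np.float32)
labels_demo = np.repeat(np.arange(5), 20)

# First split: 80% train, 20% held out
ss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_idx, holdout_idx = next(ss.split(codes_demo, labels_demo))

# Second split: divide the held-out 20% evenly into validation and test
ss_half = StratifiedShuffleSplit(n_splits=1, test_size=0.5)
val_rel, test_rel = next(ss_half.split(codes_demo[holdout_idx], labels_demo[holdout_idx]))
val_idx, test_idx = holdout_idx[val_rel], holdout_idx[test_rel]

print(len(train_idx), len(val_idx), len(test_idx))  # 80 10 10
```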
If you did it right, you should see these sizes for the training sets:
```
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
```
Classifier layers
Once you have the convolutional codes, you just need to build a classifier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network. > **Exercise:** With the codes and labels loaded, build the classifier. Consider the codes as your inputs; each of them is a 4096D vector. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost. | inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])
labels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])
print(labels_vecs.shape)
# TODO: Classifier layers and operations
fc = tf.contrib.layers.fully_connected(inputs_, 256)
logits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Operations for validation/test accuracy
predicted = tf.nn.softmax(logits)
correct_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32)) | (3670, 5)
| MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Batches!
Here is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data. | def get_batches(x, y, n_batches=10):
""" Return a generator that yields batches from arrays x and y. """
batch_size = len(x)//n_batches
for ii in range(0, n_batches*batch_size, batch_size):
# If we're not on the last batch, grab data with size batch_size
if ii != (n_batches-1)*batch_size:
X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size]
# On the last batch, grab the rest of the data
else:
X, Y = x[ii:], y[ii:]
# I love generators
yield X, Y | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Training
Here, we'll train the network.> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the `get_batches` function I wrote before to get your batches like `for x, y in get_batches(train_x, train_y)`. Or write your own! | saver = tf.train.Saver()
epochs = 10
iteration = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in get_batches(train_x, train_y):
feed = {inputs_: x,
labels_: y}
loss, _ = sess.run([cost, optimizer], feed_dict=feed)
print("Epoch: {}/{}".format(e+1, epochs),
"Iteration: {}".format(iteration),
"Training loss: {:.5f}".format(loss))
iteration += 1
if iteration % 5 == 0:
feed = {inputs_: val_x,
labels_: val_y}
val_acc = sess.run(accuracy, feed_dict=feed)
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Validation Acc: {:.4f}".format(val_acc))
saver.save(sess, "checkpoints/flowers.ckpt") | Epoch: 1/10 Iteration: 0 Training loss: 6.09479
Epoch: 1/10 Iteration: 1 Training loss: 19.38938
Epoch: 1/10 Iteration: 2 Training loss: 14.50047
Epoch: 1/10 Iteration: 3 Training loss: 13.24159
Epoch: 1/10 Iteration: 4 Training loss: 7.22328
Epoch: 0/10 Iteration: 5 Validation Acc: 0.6866
Epoch: 1/10 Iteration: 5 Training loss: 4.34417
Epoch: 1/10 Iteration: 6 Training loss: 2.55807
Epoch: 1/10 Iteration: 7 Training loss: 3.47730
Epoch: 1/10 Iteration: 8 Training loss: 2.65929
Epoch: 1/10 Iteration: 9 Training loss: 2.90010
Epoch: 0/10 Iteration: 10 Validation Acc: 0.7030
Epoch: 2/10 Iteration: 10 Training loss: 2.14254
Epoch: 2/10 Iteration: 11 Training loss: 2.04713
Epoch: 2/10 Iteration: 12 Training loss: 2.08653
Epoch: 2/10 Iteration: 13 Training loss: 1.52148
Epoch: 2/10 Iteration: 14 Training loss: 1.14696
Epoch: 1/10 Iteration: 15 Validation Acc: 0.7493
Epoch: 2/10 Iteration: 15 Training loss: 0.90808
Epoch: 2/10 Iteration: 16 Training loss: 0.89916
Epoch: 2/10 Iteration: 17 Training loss: 0.77664
Epoch: 2/10 Iteration: 18 Training loss: 0.71632
Epoch: 2/10 Iteration: 19 Training loss: 0.80467
Epoch: 1/10 Iteration: 20 Validation Acc: 0.7548
Epoch: 3/10 Iteration: 20 Training loss: 0.69502
Epoch: 3/10 Iteration: 21 Training loss: 0.60635
Epoch: 3/10 Iteration: 22 Training loss: 0.86740
Epoch: 3/10 Iteration: 23 Training loss: 0.55340
Epoch: 3/10 Iteration: 24 Training loss: 0.50799
Epoch: 2/10 Iteration: 25 Validation Acc: 0.7875
Epoch: 3/10 Iteration: 25 Training loss: 0.46593
Epoch: 3/10 Iteration: 26 Training loss: 0.41811
Epoch: 3/10 Iteration: 27 Training loss: 0.39395
Epoch: 3/10 Iteration: 28 Training loss: 0.44384
Epoch: 3/10 Iteration: 29 Training loss: 0.43772
Epoch: 2/10 Iteration: 30 Validation Acc: 0.8065
Epoch: 4/10 Iteration: 30 Training loss: 0.37363
Epoch: 4/10 Iteration: 31 Training loss: 0.17821
Epoch: 4/10 Iteration: 32 Training loss: 0.30352
Epoch: 4/10 Iteration: 33 Training loss: 0.27625
Epoch: 4/10 Iteration: 34 Training loss: 0.34593
Epoch: 3/10 Iteration: 35 Validation Acc: 0.8229
Epoch: 4/10 Iteration: 35 Training loss: 0.30728
Epoch: 4/10 Iteration: 36 Training loss: 0.38150
Epoch: 4/10 Iteration: 37 Training loss: 0.35443
Epoch: 4/10 Iteration: 38 Training loss: 0.26565
Epoch: 4/10 Iteration: 39 Training loss: 0.27981
Epoch: 3/10 Iteration: 40 Validation Acc: 0.8365
Epoch: 5/10 Iteration: 40 Training loss: 0.22080
Epoch: 5/10 Iteration: 41 Training loss: 0.13720
Epoch: 5/10 Iteration: 42 Training loss: 0.26349
Epoch: 5/10 Iteration: 43 Training loss: 0.20846
Epoch: 5/10 Iteration: 44 Training loss: 0.21817
Epoch: 4/10 Iteration: 45 Validation Acc: 0.8392
Epoch: 5/10 Iteration: 45 Training loss: 0.21050
Epoch: 5/10 Iteration: 46 Training loss: 0.24346
Epoch: 5/10 Iteration: 47 Training loss: 0.23473
Epoch: 5/10 Iteration: 48 Training loss: 0.19866
Epoch: 5/10 Iteration: 49 Training loss: 0.23902
Epoch: 4/10 Iteration: 50 Validation Acc: 0.8501
Epoch: 6/10 Iteration: 50 Training loss: 0.16664
Epoch: 6/10 Iteration: 51 Training loss: 0.10767
Epoch: 6/10 Iteration: 52 Training loss: 0.17396
Epoch: 6/10 Iteration: 53 Training loss: 0.14594
Epoch: 6/10 Iteration: 54 Training loss: 0.18902
Epoch: 5/10 Iteration: 55 Validation Acc: 0.8692
Epoch: 6/10 Iteration: 55 Training loss: 0.18315
Epoch: 6/10 Iteration: 56 Training loss: 0.19464
Epoch: 6/10 Iteration: 57 Training loss: 0.18242
Epoch: 6/10 Iteration: 58 Training loss: 0.13424
Epoch: 6/10 Iteration: 59 Training loss: 0.18221
Epoch: 5/10 Iteration: 60 Validation Acc: 0.8583
Epoch: 7/10 Iteration: 60 Training loss: 0.12580
Epoch: 7/10 Iteration: 61 Training loss: 0.07224
Epoch: 7/10 Iteration: 62 Training loss: 0.12352
Epoch: 7/10 Iteration: 63 Training loss: 0.11218
Epoch: 7/10 Iteration: 64 Training loss: 0.13097
Epoch: 6/10 Iteration: 65 Validation Acc: 0.8665
Epoch: 7/10 Iteration: 65 Training loss: 0.13078
Epoch: 7/10 Iteration: 66 Training loss: 0.15979
Epoch: 7/10 Iteration: 67 Training loss: 0.13183
Epoch: 7/10 Iteration: 68 Training loss: 0.10843
Epoch: 7/10 Iteration: 69 Training loss: 0.14170
Epoch: 6/10 Iteration: 70 Validation Acc: 0.8719
Epoch: 8/10 Iteration: 70 Training loss: 0.08602
Epoch: 8/10 Iteration: 71 Training loss: 0.05326
Epoch: 8/10 Iteration: 72 Training loss: 0.09561
Epoch: 8/10 Iteration: 73 Training loss: 0.08072
Epoch: 8/10 Iteration: 74 Training loss: 0.10511
Epoch: 7/10 Iteration: 75 Validation Acc: 0.8665
Epoch: 8/10 Iteration: 75 Training loss: 0.10438
Epoch: 8/10 Iteration: 76 Training loss: 0.13204
Epoch: 8/10 Iteration: 77 Training loss: 0.09238
Epoch: 8/10 Iteration: 78 Training loss: 0.08482
Epoch: 8/10 Iteration: 79 Training loss: 0.11027
Epoch: 7/10 Iteration: 80 Validation Acc: 0.8665
Epoch: 9/10 Iteration: 80 Training loss: 0.06720
Epoch: 9/10 Iteration: 81 Training loss: 0.04523
Epoch: 9/10 Iteration: 82 Training loss: 0.07565
Epoch: 9/10 Iteration: 83 Training loss: 0.06168
Epoch: 9/10 Iteration: 84 Training loss: 0.07394
Epoch: 8/10 Iteration: 85 Validation Acc: 0.8747
Epoch: 9/10 Iteration: 85 Training loss: 0.08303
Epoch: 9/10 Iteration: 86 Training loss: 0.10531
Epoch: 9/10 Iteration: 87 Training loss: 0.07123
Epoch: 9/10 Iteration: 88 Training loss: 0.06659
Epoch: 9/10 Iteration: 89 Training loss: 0.08400
Epoch: 8/10 Iteration: 90 Validation Acc: 0.8692
Epoch: 10/10 Iteration: 90 Training loss: 0.05142
Epoch: 10/10 Iteration: 91 Training loss: 0.03468
Epoch: 10/10 Iteration: 92 Training loss: 0.05952
Epoch: 10/10 Iteration: 93 Training loss: 0.04842
Epoch: 10/10 Iteration: 94 Training loss: 0.05592
Epoch: 9/10 Iteration: 95 Validation Acc: 0.8719
Epoch: 10/10 Iteration: 95 Training loss: 0.06398
Epoch: 10/10 Iteration: 96 Training loss: 0.08367
Epoch: 10/10 Iteration: 97 Training loss: 0.05550
Epoch: 10/10 Iteration: 98 Training loss: 0.05315
Epoch: 10/10 Iteration: 99 Training loss: 0.06566
Epoch: 9/10 Iteration: 100 Validation Acc: 0.8719
| MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Testing
Below you see the test accuracy. You can also see the predictions returned for images. | with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: test_x,
labels_: test_y}
test_acc = sess.run(accuracy, feed_dict=feed)
print("Test accuracy: {:.4f}".format(test_acc))
%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Below, feel free to choose images and see how the trained classifier predicts the flowers in them. | test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)
# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
print('"vgg" object already exists. Will not create again.')
else:
#create vgg
with tf.Session() as sess:
input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
vgg = vgg16.Vgg16()
vgg.build(input_)
with tf.Session() as sess:
img = utils.load_image(test_img_path)
img = img.reshape((1, 224, 224, 3))
feed_dict = {input_: img}
code = sess.run(vgg.relu6, feed_dict=feed_dict)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
feed = {inputs_: code}
prediction = sess.run(predicted, feed_dict=feed).squeeze()
plt.imshow(test_img)
plt.barh(np.arange(5), prediction)
_ = plt.yticks(np.arange(5), lb.classes_) | _____no_output_____ | MIT | transfer-learning/Transfer_Learning.ipynb | skagrawal/Deep-Learning-Udacity-ND |
Introduction to Programming
Topics for today will include:
- Mozilla Developer Network [(MDN)](https://developer.mozilla.org/en-US/)
- Python Documentation [(Official Documentation)](https://docs.python.org/3/)
- Importance of Design
- Functions
- Built in Functions
Mozilla Developer Network [(MDN)](https://developer.mozilla.org/en-US/)
---
The Mozilla Developer Network is a great resource for all things web dev. This site is good for learning about standards as well as finding quick information about something that you're trying to do.
Web Dev Wise
This will be a major resource going forward when it comes to doing things with HTML and CSS. You'll often find that you're not the first to try and do something. That being said, you need to start to get comfortable looking for information on your own when things go wrong.
Python Documentation [(Official Documentation)](https://docs.python.org/3/)
---
This section is similar to the one above. Python has a lot of resources out there that we can utilize when we're stuck or need some help with something that we may not have encountered before. Since this is the official documentation page for the language, you may often be given too much information, or something that you wanted but in the wrong form or for the wrong version of the language. It is up to you to learn how to utilize these things and use them to your advantage.
Importance of Design
---
So this is a topic that I didn't learn the importance of until I was in the workforce. Design is a major influence on the way that code is built, and it has a significant effect on the industry. Let's pretend we have a client that wants us to do the following:
- Write a function which will count the number of times any one character appears in a string of characters.
- Write a main function which takes the character to be counted from the user and calls the function, outputting the result to the user.
For example, are you like Android and take the latest and greatest and put them into phones in an unregulated hardware market, thus leaving great variability in the market for your brand? Or are you an Apple, where you control the full stack? Your hardware and software may not be bleeding edge, but it's seamless and uniform. What does the market want? What are you good at? Do you have people around you that can fill your gaps?
Here's a blurb from a friend about the matter:
>Design, often paired with the phrase "design thinking", is an approach and method of problem solving that builds empathy for user(s) of a product, resulting in the creation of a seamless and delightful user experience tailored to the user's needs.
>Design thinks holistically about the experience that a user would go through when encountering and interacting with a product or technology. Design understands the user and their needs in great detail so that the product team can build the product and experience that fits what the user is looking for. We don't want to create products for the sake of creating them, we want to ensure that there is a need for it by a user.
>Design not only focuses on the actual interface design of a product, but can also ensure the actual technology has a seamless experience as well. Anything that blocks potential users from wanting to buy a product or prohibits current users from utilizing the product successfully, design wants to investigate.
We ensure all pieces fit together from the user's standpoint, and we work to build a bridge between the technology and the user, who doesn't need to understand the technical depths of the product.
Sorting Example [(Toptal Sorting Algorithms)](https://www.toptal.com/developers/sorting-algorithms)
---
Hypothetically, a client comes to you and they want you to sort a list of numbers. How do you optimally sort a list? `[2, 5, 6, 1, 4, 3]`
Design Thinking [(IBM Design Thinking)](https://www.ibm.com/design/thinking/)
---
As this idea starts to grow you come to realize that different companies have different design methodologies. IBM has its own version of Design Thinking. You can find more information about that at the site linked in the title. IBM is very focused on being exactly like its customers in most aspects. What we're mostly going to take from this is that there are entire careers birthed from thinking before you act. That being said, we're going to harp on a couple parts of this.
Knowing what your requirements are
---
One of the most common scenarios to come across is a product that is announced that's going to change everything. In the planning phase everyone agrees that the idea is amazing and going to solve all of our problems. We get down the line and things start to fall apart: we run out of time, things ran late or didn't come in in time, pushing everything out. Scope creep ensued. This is typically the result of not agreeing on what our requirements are. Something as basic as agreeing on what needs to be done needs to be discussed and checked on thoroughly. We do this because two people are rarely thinking exactly the same thing. You need to be on the same page as your client and your fellow developers as well. If you don't know, ask.
Planning Things Out
---
We have an idea of what we want to do. So now we just write it? No, not quite. We need to have a rough plan on how we're going to do things. Do we want to use functions, do we need a quick solution, is this going to be verbose and complex? It's important to look at what we can set up for ourselves. We don't need to make things difficult by planning things out poorly. This means allotting time for things like getting stuck and brainstorming.
Breaking things down
---
Personally I like to take my problem and scale it down into an easy example. So in the case of our problem: the client may want to process a text like Moby Dick. We can start with a sentence and work our way up! Taking the time to break things into multiple pieces and figure out what goes where is an art in itself. | def char_finder(character, string):
total = 0
for char in string:
if char == character:
total += 1
return total
if __name__ == "__main__":
output = char_finder('z', 'Quick brown fox jumped over the lazy dog')
print(output)
| 1
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
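The exercise above also asks for a main function that takes the character to be counted from the user and reports the result. One possible sketch of that interactive version (the prompt strings are my own):

```python
def char_finder(character, string):
    # Count how many times `character` appears in `string`
    total = 0
    for char in string:
        if char == character:
            total += 1
    return total

if __name__ == "__main__":
    target = input('Character to count: ')
    text = input('Text to search: ')
    print('Found', char_finder(target, text), 'occurrence(s) of', repr(target))
```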
Functions
---
This is an integral piece of how we do things in any programming language. It allows us to repeat instances of code that we've seen and use them at our preference. We'll often be using functions similarly to how we use variables and our data types.
Making Our Own Functions
---
To make a function we'll be using the `def` keyword followed by a name and then parameters. We've seen this a couple times now in code examples.
```
def exampleName(exampleParameter1, exampleParameter2):
    print(exampleParameter1, exampleParameter2)
```
There are many ways to write functions; we can say that we're going to return a specific data type.
```
def exampleName(exampleParameter1, exampleParameter2) -> any:
    print(exampleParameter1, exampleParameter2)
```
We can also specify the types that the parameters are going to be.
```
def exampleName(exampleParameter1: any, exampleParameter2: any) -> any:
    print(exampleParameter1, exampleParameter2)
```
Writing functions is only one part of the fun. We still have to be able to use them. | def exampleName(exampleParameter1: any, exampleParameter2: any) -> any:
print(exampleParameter1, exampleParameter2)
exampleName("Hello", 5) | Hello 5
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Using functions
---
Using functions is fairly simple. To use a function all we have to do is give the function name followed by parentheses. This should seem familiar.
Functions In Classes
---
Now, we've mentioned classes before; classes can have functions, but they're used a little differently. Functions that stem from classes are often used with dot notation. | class Person:
def __init__(self, weight: int, height: int, name: str):
self.weight = weight
self.height = height
self.name = name
def who_is_this(self):
print("This person's name is " + self.name)
print("This person's weight is " + str(self.weight) + " pounds")
print("This person's height is " + str(self.height) + " inches")
if __name__ == "__main__":
Kipp = Person(225, 70, "Aaron Kippins")
Kipp.who_is_this() | This person's name is Aaron Kippins
This person's weight is 225 pounds
This person's height is 70 inches
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Built in Functions and Modules
---
With the talk of dot notation, that syntax is often used with built-in functions. Built-in functions are functions that come along with the language. These tend to be very useful because, as we start to visit more complex issues, they allow us to do complex things with ease in some cases. We have functions that belong to particular classes, or special things that can be done with things of a certain class type. Alongside those we can also have modules. Modules are classes or functions that other people wrote that we can import into our code to use.
Substrings
--- | string = "I want to go home!"
print(string[0:12], "to Cancun!")
# print(string[0:1]) | I want to go to Cancun!
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
toUpper toLower--- | alpha_sentence = 'Quick brown fox jumped over the lazy dog'
print(alpha_sentence.title())
print(alpha_sentence.upper())
print(alpha_sentence.lower())
if alpha_sentence.lower().islower():
print("sentence is all lowercase")
| Quick Brown Fox Jumped Over The Lazy Dog
QUICK BROWN FOX JUMPED OVER THE LAZY DOG
quick brown fox jumped over the lazy dog
sentence is all lowercase
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Exponents--- | print(2 ** 3) | 8
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
math.sqrt()--- | import math
math.sqrt(4) | _____no_output_____ | MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Integer Division vs Float Division--- | print(4//2)
print(4/2) | 2
2.0
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Abs()--- | abs(-10) | _____no_output_____ | MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
String Manipulation--- | dummy_string = "Hey there I'm just a string for the example about to happen."
print(dummy_string.center(70, "-"))
print(dummy_string.partition(" "))
print(dummy_string.swapcase())
print(dummy_string.split(" ")) | -----Hey there I'm just a string for the example about to happen.-----
('Hey', ' ', "there I'm just a string for the example about to happen.")
hEY THERE i'M JUST A STRING FOR THE EXAMPLE ABOUT TO HAPPEN.
['Hey', 'there', "I'm", 'just', 'a', 'string', 'for', 'the', 'example', 'about', 'to', 'happen.']
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Array Manipulation--- | arr = [2, 5, 6, 1, 4, 3]
arr.sort()
print(arr)
print(arr[3])
# sorted(arr)
print(arr[1:3])
| [1, 2, 3, 4, 5, 6]
4
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
Insert and Pop, Append and Remove--- | arr.append(7)
print(arr)
arr.pop()
print(arr) | [1, 2, 3, 4, 5, 6, 7, 7]
[1, 2, 3, 4, 5, 6, 7]
| MIT | JupyterNotebooks/Lessons/Lesson 4.ipynb | emilekhoury/CMPT-120L-910-20F |
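The heading above also mentions `insert` and `remove`, which the cell doesn't actually demonstrate; a quick sketch of those two list methods for completeness:

```python
arr = [1, 2, 3, 4, 5, 6]

arr.insert(0, 0)   # insert the value 0 at index 0
print(arr)         # [0, 1, 2, 3, 4, 5, 6]

arr.remove(3)      # remove the first occurrence of the value 3
print(arr)         # [0, 1, 2, 4, 5, 6]
```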
Add Mollweide Plotting to gwylm class (L. London, 2017) Related: positive_dev/examples/plotting_spherical_harmonics.ipynb Setup Environment | # Setup ipython environment
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Import usefuls
from nrutils import scsearch,gwylm
from matplotlib.pyplot import *
from numpy import array | The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
| MIT | issues/closed/issue2_add_mollweide_plotting_to_gwylm.ipynb | llondon6/nrutils_dev |
Find a simulation and load data | # Find sim
A = scsearch(q=[10,20],keyword='hr',verbose=True,institute='gt')
# Load data
y = gwylm(A[0],lmax=4,verbose=False,clean=True) | (validate!)>> Multiple catalog directories found. We will scan through the related list, and then store first the catalog_dir that the OS can find.
(validate!)>> Selecting "/Volumes/athena/bradwr/"
| MIT | issues/closed/issue2_add_mollweide_plotting_to_gwylm.ipynb | llondon6/nrutils_dev |
Plot Mollweide |
#
kind = 'strain'
# Make mollweide plot -- NOTE that the time input is relative to the peak in h22
ax0,real_time = y.mollweide_plot(time=0,form='abs',kind=kind)
ax0.set_title('$l_{max} = %i$'%max([l for l,m in y.lm]),size=20)
# Make time domain plot for reference
axarr,fig = y.lm[2,2][kind].plot()
for ax in axarr:
sca( ax )
axvline( real_time, linestyle='--', color='k' )
| _____no_output_____ | MIT | issues/closed/issue2_add_mollweide_plotting_to_gwylm.ipynb | llondon6/nrutils_dev |
Try to put everything in the same figure |
#
R,C = 6,3
#
fig = figure( figsize=3*array([C,1.0*R]) )
#
ax4 = subplot2grid( (R,C), (0, 0), colspan=C, rowspan=3, projection='mollweide' )
ax1 = subplot2grid( (R,C), (3, 0), colspan=C)
ax2 = subplot2grid( (R,C), (4, 0), colspan=C, sharex=ax1)
ax3 = subplot2grid( (R,C), (5, 0), colspan=C, sharex=ax1)
#
kind = 'strain'
# Make mollweide plot -- NOTE that the time input is relative to the peak in h22
_,real_time = y.mollweide_plot(time=0,ax=ax4,form='abs',kind=kind,colorbar_shrink=0.8)
ax4.set_title('$l_{max} = %i$'%max([l for l,m in y.lm]),size=20)
#
wfax,_ = y.lm[2,2][kind].plot(ax=[ax1,ax2,ax3],tlim=[100,800])
for a in wfax:
sca( a ); axvline( real_time, linestyle='-', color='k' )
#
subplots_adjust(hspace = 0.1)
| _____no_output_____ | MIT | issues/closed/issue2_add_mollweide_plotting_to_gwylm.ipynb | llondon6/nrutils_dev |
Now perhaps write an external script that animates frames for select time samples? | from os.path import join
range(0,100,10) | _____no_output_____ | MIT | issues/closed/issue2_add_mollweide_plotting_to_gwylm.ipynb | llondon6/nrutils_dev |
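A rough sketch of what that external frame-writing script could look like, assuming `y` is the `gwylm` object loaded above and that `mollweide_plot` accepts the same arguments used earlier; the time range, frame count and output directory are placeholders of my own.

```python
from os import makedirs
from os.path import join
from numpy import linspace
from matplotlib.pyplot import savefig, close

frame_dir = 'frames'                      # placeholder output directory
makedirs(frame_dir, exist_ok=True)

# Sample times relative to the h22 peak, as in the single-frame example above
for k, t in enumerate(linspace(-100, 50, 16)):
    ax, real_time = y.mollweide_plot(time=t, form='abs', kind='strain')
    ax.set_title('$t = %1.1f$' % real_time)
    savefig(join(frame_dir, 'frame_%03d.png' % k))
    close('all')
```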
Variables | x = 2
y = '3'
print(x+int(y))
z = [1, 2, 3] #List
w = (2, 3, 4) #Tuple
import numpy as np
q = np.array([1, 2, 3]) #numpy.ndarray
type(q) | 5
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Console input and output | MyName = input('My name is: ')
print('Hello, '+MyName) | My name is: david
Hello, david
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
File input and output | fid = open('msg.txt','w')
fid.write('demo of writing.\n')
fid.write('Second line')
fid.close()
fid = open('msg.txt','r')
msg = fid.readline()
print(msg)
msg = fid.readline()
print(msg)
fid.close()
fid = open('msg.txt','r')
msg = fid.readlines()
print(msg)
fid = open('msg.txt','r')
msg = fid.read()
print(msg)
import numpy as np
x = np.linspace(0, 2*np.pi,4)
y = np.cos(x)
#Stack arrays in sequence vertically (row wise).
data = np.vstack((x,y)) # stack x and y vertically (x on top, y below)
dataT = data.T #Transpose
np.savetxt('data.txt', data, delimiter=',')
z = np.loadtxt('data.txt', delimiter=',')
print(x)
print(y)
print(data)
print(dataT)
print(z)
import numpy as np
x = np.linspace(0, 2*np.pi,20)
y = np.cos(x)
z = np.sin(x)
%matplotlib inline
import matplotlib.pyplot as plt
# use help(plt.plot) to see all the available plotting options
plt.plot(x,y,'b')
plt.plot(x,y,'go', label = 'cos(x)')
plt.plot(x,z,'r')
plt.plot(x,z,'go', label = 'sin(x)')
plt.legend(loc='best') # place the legend at the best location
plt.xlim([0, 2*np.pi])
import numpy as np
x = np.linspace(0, 2*np.pi,20)
y = np.cos(x)
z = np.sin(x)
%matplotlib inline
import matplotlib.pyplot as plt
# use help(plt.plot) to see all the available plotting options
plt.subplot(2,1,1) # split into two subplots; arguments are (rows, columns, index)
plt.plot(x,y,'b')
plt.plot(x,y,'go', label = 'cos(x)')
plt.legend(loc='best') # place the legend at the best location
plt.subplot(2,1,2) # the second of the two subplots
plt.plot(x,z,'r')
plt.plot(x,z,'go', label = 'sin(x)')
plt.legend(loc='best') # place the legend at the best location
plt.xlim([0, 2*np.pi]) | _____no_output_____ | MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Functions, Conditions, Loop | import numpy as np
def f(x):
return x**2
x = np.linspace(0,5,10)
y = f(x)
print(y)
import numpy as np
def f(x): # a deliberately odd function, just for practice
res = x
if res < 3:
res = np.nan # return NaN (Not a Number) when x is less than 3
elif res < 15:
res = x**3
else:
res = x**4
return res
x = np.linspace(0,10,20)
y = np.empty_like(x)
#Return a new array with the same shape and type as a given array.
# i.e. returns a new array shaped like x
i = 0
for xi in x:
y[i] = f(xi)
i = i + 1
print(y)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,y,'bp')
plt.xlim([0,11]) | [ nan nan nan nan nan
nan 31.49147106 50.00728969 74.64644992 106.28371483
145.7938475 194.05161102 251.93176848 320.30908296 400.05831754
492.05423531 597.17159936 716.28517277 850.26971862 1000. ]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Matrices, linear equations | A = np.array([[1,2],[3,2]])
B = np.array([1,0])
# x = A^-1 * b
sol1 = np.dot(np.linalg.inv(A),B)
print(sol1)
sol2 = np.linalg.solve(A,B)
print(sol2)
import sympy as sym
sym.init_printing()
#This will automatically enable the best printer available in your environment.
x,y = sym.symbols('x y')
z = sym.linsolve([3*x+2*y-1,x+2*y],(x,y))
z
#sym.pprint(z) The ASCII pretty printer | [-0.5 0.75]
[-0.5 0.75]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Non-linear equation | from scipy.optimize import fsolve
def f(z): # z packs x and y into one argument; evaluate both equations
x = z[0]
y = z[1]
return [x+2*y, x**2+y**2-1]
z0 = [0,1]
z = fsolve(f,z0)
print(z)
print(f(z)) | [-0.89442719 0.4472136 ]
[0.0, -1.1102230246251565e-16]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Integration | from scipy.integrate import quad
def f(x):
return x**2
quad(f,0,2) # compute the definite integral from 0 to 2
import sympy as sym
sym.init_printing()
x = sym.Symbol('x')
f = sym.integrate(x**2,x)
f.subs(x,2) # substitute x = 2 into the expression
f | _____no_output_____ | MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Derivative | from scipy.misc import derivative
def f(x):
return x**2
print(derivative(f,2,dx=0.01)) # dx sets the step size (precision)
import sympy as sym
sym.init_printing()
x = sym.Symbol('x')
f = sym.diff(x**3,x)
f.subs(x,2) # substitute x = 2 into the expression to get the value
f | 4.0
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Interpolation | from scipy.interpolate import interp1d # note: interp1d contains the digit 1, not a lowercase L
x = np.arange(0,6,1)
y = np.array([0.2,0.3,0.5,1.0,0.9,1.1])
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,y,'bo')
xp = np.linspace(0,5,100) # use more points so the differences between methods are visible
y1 = interp1d(x,y,kind='linear') # first order
plt.plot(xp,y1(xp),'r-')
y2 = interp1d(x,y,kind='quadratic') # second order
plt.plot(xp,y2(xp),'k--')
y3 = interp1d(x,y,kind='cubic') # third order
plt.plot(xp,y3(xp),'g--')
| _____no_output_____ | MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Linear regression | import numpy as np
x = np.array([0,1,2,3,4,5])
y = np.array([0.1,0.2,0.3,0.5,0.8,2.0 ])
# polynomial fits; choose the degree
p1 = np.polyfit(x,y,1)
print(p1)
p2 = np.polyfit(x,y,2)
print(p2)
p3 = np.polyfit(x,y,3)
print(p3)
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,y,'ro')
# np.polyval evaluates the polynomial with coefficients p_ at the points x
xp = np.linspace(0,5,100)
plt.plot(xp, np.polyval(p1,xp), 'b-', label='linear') # note: the function name is polyval
plt.plot(xp, np.polyval(p2,xp), 'g--', label='quadratic')
plt.plot(xp, np.polyval(p3,xp), 'k:', label='cubic')
plt.legend(loc='best') | [ 0.32857143 -0.17142857]
[ 0.1125 -0.23392857 0.20357143]
[ 0.04166667 -0.2 0.33690476 0.07857143]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Nonlinear regression | import numpy as np
from scipy.optimize import curve_fit
x = np.array([0,1,2,3,4,5])
y = np.array([0.1,0.2,0.3,0.5,0.8,2.0 ])
# polynomial fits; choose the degree
p1 = np.polyfit(x,y,1)
print(p1)
p2 = np.polyfit(x,y,2)
print(p2)
p3 = np.polyfit(x,y,3)
print(p3)
# use an exponential model instead
def f(x,a):
return 0.1 * np.exp(a*x)
a = curve_fit(f,x,y)[0] # use non-linear least squares to fit the function; take element 0 (the fitted parameters)
print('a='+str(a))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(x,y,'ro')
# np.polyval evaluates the polynomial with coefficients p_ at the points x
xp = np.linspace(0,5,100)
plt.plot(xp, np.polyval(p1,xp), 'b-', label='linear') #這個字是polyvaL喔!!
plt.plot(xp, np.polyval(p2,xp), 'g--', label='quadratic')
plt.plot(xp, np.polyval(p3,xp), 'k:', label='cubic')
plt.plot(xp, f(xp,a), 'c', label='nonlinear')
plt.legend(loc='best') | [ 0.32857143 -0.17142857]
[ 0.1125 -0.23392857 0.20357143]
[ 0.04166667 -0.2 0.33690476 0.07857143]
a=[ 0.58628748]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Differential equation | from scipy.integrate import odeint
def dydt(y,t,a):
return -a * y
a = 0.5
t = np.linspace(0,20)
y0 = 5.0
y = odeint(dydt,y0,t,args=(a,))
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(t,y)
plt.xlabel('time')
plt.ylabel('y') | _____no_output_____ | MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
Nonlinear optimization | # idea: define an objective and constraints, then supply an initial guess
import numpy as np
from scipy.optimize import minimize
def objective(x): # the function to be minimized
x1 = x[0]
x2 = x[1]
x3 = x[2]
x4 = x[3]
return x1*x4*(x1+x2+x3)+x3
# written as a difference for comparison against zero
def constraint1(x):
return x[0]*x[1]*x[2]*x[3] - 25.0
# written as a difference for comparison against zero
def constraint2(x):
sum_sq = 40.0
for i in range(0,4):
sum_sq = sum_sq - x[i]**2
return sum_sq
# initial guess
x0 = [1,5,5,1]
print(objective(x0))
# set the variable bounds
b = (1.0,5.0) # bounds for each x
bnds = (b,b,b,b) # the same bounds b for all four variables
con1 = {'type':'ineq','fun': constraint1} # the first constraint is an inequality
con2 = {'type':'eq','fun': constraint2} # the second constraint must hold with equality
cons = [con1,con2] # combine the constraints into a single list
sol = minimize(objective,x0,method='SLSQP',\
bounds = bnds, constraints = cons)
print(sol)
print(sol.fun)
print(sol.x) | [ 1. 4.7429961 3.82115462 1.37940765]
| MIT | Numeric and scientific python.ipynb | Pytoddler/Data-analysis-and-visualization |
PyFunc Model + Transformer Example
This notebook demonstrates how to deploy a Python function based model and a custom transformer. This type of model is useful as the user is able to define their own logic inside the model as long as it satisfies the contract given in `merlin.PyFuncModel`. If the pre/post-processing steps could be implemented in Python, it's encouraged to write them in the PyFunc model code instead of separating them into another transformer.
The model we are going to develop and deploy is a cifar10 model that accepts a tensor input. The transformer has a preprocessing step that allows the user to send raw image data and converts it to a tensor input.
Requirements
- Authenticated to gcloud (```gcloud auth application-default login```) | !pip install --upgrade -r requirements.txt > /dev/null
import warnings
warnings.filterwarnings('ignore') | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
1. Initialize Merlin 1.1 Set Merlin Server | import merlin
MERLIN_URL = "<MERLIN_HOST>/api/merlin"
merlin.set_url(MERLIN_URL) | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
1.2 Set Active Project
`project` represents a project in real life. You may have multiple models within a project. `merlin.set_project()` will set the active project to the name matched by the argument. You can only set it to an existing project. If you would like to create a new project, please do so from the MLP UI. | PROJECT_NAME = "sample"
merlin.set_project(PROJECT_NAME) | /Users/ariefrahmansyah/.pyenv/versions/3.7.3/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
| Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
1.3 Set Active Model
`model` represents an abstract ML model. Conceptually, `model` in Merlin is similar to a class in a programming language. To instantiate a `model` you'll have to create a `model_version`.
Each `model` has a type; currently the model types supported by Merlin are: sklearn, xgboost, tensorflow, pytorch, and user defined model (i.e. pyfunc model).
`model_version` represents a snapshot of a particular `model` iteration. You'll be able to attach information such as metrics and tags to a given `model_version`, as well as deploy it as a model service.
`merlin.set_model(<model_name>, <model_type>)` will set the active model to the name given by the parameter; if a model with the given name is not found, a new model will be created. | from merlin.model import ModelType
MODEL_NAME = "transformer-pyfunc"
merlin.set_model(MODEL_NAME, ModelType.PYFUNC) | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
2. Train Model
In this step, we are going to train a cifar10 model using PyTorch, and create a PyFunc model class that does the prediction using the trained PyTorch model.
2.1 Prepare Training Data | import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2) | /Users/ariefrahmansyah/.pyenv/versions/3.7.3/lib/python3.7/site-packages/torchvision/datasets/lsun.py:8: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
from collections import Iterable
0it [00:00, ?it/s] | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
2.2 Create PyTorch Model | import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
def __init__(self):
super(PyTorchModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x | /Users/ariefrahmansyah/.pyenv/versions/3.7.3/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
| Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
2.3 Train Model | import torch.optim as optim
net = PyTorchModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0 | 170500096it [03:10, 1240089.84it/s] | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
2.4 Check Prediction | dataiter = iter(trainloader)
inputs, labels = dataiter.next()
predict_out = net(inputs[0:1])
predict_out | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
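As a small optional check (my own addition, reusing `predict_out` and `labels` from the cells above), the raw logits can be reduced to a class index with `argmax` and compared against the ground-truth label:

```python
# Index of the highest-scoring class for the first image in the batch
predicted_class = predict_out.argmax(dim=1)
print("predicted:", predicted_class.item(), "actual:", labels[0].item())
```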
2.5 Serialize Model | import os
model_dir = "pytorch-model"
model_path = os.path.join(model_dir, "model.pt")
model_class_path = os.path.join(model_dir, "model.py")
torch.save(net.state_dict(), model_path) | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
2.6 Save PyTorchModel Class
We also need to save the PyTorchModel class and upload it to Merlin alongside the serialized trained model. The next cell will write the PyTorchModel we defined above to the `pytorch-model/model.py` file. | %%file pytorch-model/model.py
import torch.nn as nn
import torch.nn.functional as F
class PyTorchModel(nn.Module):
def __init__(self):
super(PyTorchModel, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x | Overwriting pytorch-model/model.py
| Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
3. Create PyFunc Model
To create a PyFunc model you'll have to extend the `merlin.PyFuncModel` class and implement its `initialize` and `infer` methods.
`initialize` will be called once during model initialization. The argument to `initialize` is a dictionary containing key-value pairs of artifact names and their URLs. The artifact keys are the same values as received by `log_pyfunc_model`.
`infer` is the prediction method that needs to be implemented. It accepts a dictionary argument which represents the incoming request body, and it should return a dictionary object which corresponds to the response body of the prediction result.
In the following example we are creating a PyFunc model called `CifarModel`. In its `initialize` method we expect 2 artifacts called `model_path` and `model_class_path`; those 2 artifacts point to the serialized model and the PyTorch model class file. The `infer` method simply runs the prediction with the model and returns the result. | import importlib
import sys
from merlin.model import PyFuncModel
MODEL_CLASS_NAME="PyTorchModel"
class CifarModel(PyFuncModel):
def initialize(self, artifacts):
model_path = artifacts["model_path"]
model_class_path = artifacts["model_class_path"]
# Load the python class into memory
sys.path.append(os.path.dirname(model_class_path))
modulename = os.path.basename(model_class_path).split('.')[0].replace('-', '_')
model_class = getattr(importlib.import_module(modulename), MODEL_CLASS_NAME)
# Make sure the model weights are loaded onto the right device on this machine
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self._pytorch = model_class().to(device)
self._pytorch.load_state_dict(torch.load(model_path, map_location=device))
self._pytorch.eval()
def infer(self, request, **kwargs):
inputs = torch.tensor(request["instances"])
result = self._pytorch(inputs)
return {"predictions": result.tolist()} | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
Now, let's test it locally. | import json
with open(os.path.join("input-tensor.json"), "r") as f:
tensor_req = json.load(f)
m = CifarModel()
m.initialize({"model_path": model_path, "model_class_path": model_class_path})
m.infer(tensor_req) | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
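For reference, the `infer` contract implemented above implies a request body with an `instances` key holding nested lists shaped like a batch of 3×32×32 CIFAR images. A tiny hand-rolled request (random values, purely illustrative) can be fed to the same local model object:

```python
# Build a dummy request matching the expected {"instances": [...]} shape
dummy_req = {"instances": torch.rand(1, 3, 32, 32).tolist()}
print(m.infer(dummy_req)["predictions"])
```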
4. Deploy Model
To deploy the model, we will have to create an iteration of the model (by creating a `model_version`), upload the serialized model to MLP, and then deploy.
4.1 Create Model Version and Upload
`merlin.new_model_version()` is a convenient method to create a model version and start its development process. It is equivalent to the following code:
```
v = model.new_model_version()
v.start()
v.log_pyfunc_model(model_instance=EnsembleModel(), conda_env="env.yaml", artifacts={"xgb_model": model_1_path, "sklearn_model": model_2_path})
v.finish()
```
To upload a PyFunc model you have to provide the following arguments:
1. `model_instance` is the instance of the PyFunc model; the model has to extend `merlin.PyFuncModel`.
2. `conda_env` is the path to a conda environment yaml file. The environment yaml file must contain all dependencies required by the PyFunc model.
3. (Optional) `artifacts` is any additional artifact that you want to include in the model.
4. (Optional) `code_path` is a list of directories containing python code that will be loaded during model initialization; this is required when `model_instance` depends on a local python package. | with merlin.new_model_version() as v:
merlin.log_pyfunc_model(model_instance=CifarModel(),
conda_env="env.yaml",
artifacts={"model_path": model_path, "model_class_path": model_class_path}) | 2021/06/23 05:41:28 WARNING mlflow.models.model: Logging model metadata to the tracking server has failed, possibly due older server version. The model artifacts have been logged successfully under gs://<MERLIN_BUCKET>/mlflow/604/7b57180c051842fe815adbacfa282541/artifacts. In addition to exporting model artifacts, MLflow clients 1.7.0 and above attempt to record model metadata to the tracking store. If logging to a mlflow server via REST, consider upgrading the server version to MLflow 1.7.0 or above.
| Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
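Had `CifarModel` depended on a local Python package, the optional `code_path` argument described above would also be passed. A hypothetical variant (the `my_local_package` directory is made up purely for illustration):

```python
with merlin.new_model_version() as v:
    merlin.log_pyfunc_model(model_instance=CifarModel(),
                            conda_env="env.yaml",
                            artifacts={"model_path": model_path,
                                       "model_class_path": model_class_path},
                            code_path=["my_local_package"])  # hypothetical local package dir
```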
4.2 Deploy Model and Transformer
To deploy a model and its transformer, you must pass a `transformer` object to the `deploy()` function. Each deployed model version will have its own generated URL. | from merlin.resource_request import ResourceRequest
from merlin.transformer import Transformer
# Create a transformer object and its resources requests
resource_request = ResourceRequest(min_replica=1, max_replica=1,
cpu_request="100m", memory_request="200Mi")
transformer = Transformer("gcr.io/kubeflow-ci/kfserving/image-transformer:latest",
resource_request=resource_request)
endpoint = merlin.deploy(v, transformer=transformer) | /Users/ariefrahmansyah/.pyenv/versions/3.7.3/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
Deploying model transformer-pyfunc version 2
0% [##############################] 100% | ETA: 00:00:00 | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
4.3 Send Test Request | import json
import requests
with open(os.path.join("input-raw-image.json"), "r") as f:
req = json.load(f)
resp = requests.post(endpoint.url, json=req)
resp.text | _____no_output_____ | Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
5. Clean Up 5.1 Delete Deployment | merlin.undeploy(v) | Deleting deployment of model transformer-pyfunc version 2 from enviroment id-staging
| Apache-2.0 | examples/transformer/custom-transformer/PyFunc-Transformer.ipynb | Omrisnyk/merlin |
CCI501 - Machine Learning Project
Name: Samuel Mwamburi Mghendi
Admission Number: P52/37621/2020
Email: mghendi@students.uonbi.ac.ke
Course: Machine Learning – CCI 501
Applying Logistic Regression to Establish a Good Pricing Model for Mobile Phone Manufacturers in the Current Market Landscape using Technical Specifications and User Preference.
This report is organised as follows.
1. Problem Statement
2. Data Description
 * Data Loading and Preparation
 * Exploratory Data Analysis
3. Data Preprocessing
4. Data Modelling
5. Performance Evaluation
6. Conclusion
1. Problem Statement
To determine the price of a mobile phone in the current market using specifications, i.e. screen size, screen and camera resolution, internal storage and battery capacity, and user preference.
Traditionally, and rightfully so, consumers have been forced to part with a premium to own a mobile phone with top-of-the-line features and specifications. Some smartphone manufacturers in 2020 still charge upwards of KES 100,000 for a mobile phone that has a large screen, good battery, fast processor and sufficient storage capacity. However, according to a December article on Android Central, mobile phones with great features are getting significantly more affordable. (Johnson, 2020)
A phone's specifications are a logical way of determining which class it falls under. With the emergence of cheaper manufacturing techniques and parts, however, phone pricing models have become blurrier and it is possible for consumers to purchase more powerful smartphones at cheaper prices. This study intends to explore this hypothesis and predict the relationship between these features and the price of a mobile phone in the current landscape, using phone specification, product rating and price data scraped from a Kenyan e-commerce site.
Why Logistic Regression?
A supervised learning approach would be useful for this experiment since the data being explored has price labels and categories. Logistic regression is used to classify data by considering outcome variables on extreme ends and consequently forms a line to distinguish them.
2. Data Description
Data Loading and Preparation
Initialization | import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import os
from tqdm import trange | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
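Since the "Why Logistic Regression?" discussion above has no accompanying code yet, here is a minimal, generic sketch of the kind of classifier it describes, fitted on synthetic stand-in data of the same shape (numeric specification features and a price-category label). This is not the project's final model.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X_demo = rng.random((200, 5))           # stand-ins for screen, camera, storage, RAM, battery
y_demo = rng.integers(0, 3, size=200)   # stand-in price categories, e.g. budget/mid-range/premium

X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```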
Import Data | df = pd.read_csv("productdata.csv")
df
from sklearn import preprocessing | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
Exploratory Data Analysis
Gathering more information about the dataset in order to better understand it. The relationships and distributions of screen size, screen resolution, camera resolution, storage space, memory, rating and likes against the resultant price charged for each phone sold were plotted and analyzed. | df.describe()
df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 1148 entries, 0 to 1147
Data columns (total 13 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Phone 1148 non-null object
1 Screen (inches) 1148 non-null float64
2 Resolution (pixels) 1148 non-null object
3 Camera (MP) 1148 non-null float64
4 OS 1131 non-null object
5 Storage (GB) 1148 non-null int64
6 RAM (GB) 1148 non-null int64
7 Battery (mAh) 1148 non-null int64
8 Battery Type 1148 non-null object
9 Price(Kshs) 1148 non-null int64
10 Price Category 1148 non-null object
11 Rating 1148 non-null float64
12 Likes 1148 non-null int64
dtypes: float64(3), int64(5), object(5)
memory usage: 116.7+ KB
| MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
The feature OS has missing values. | # check shape
df.shape | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
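To back up the note above about missing values (a quick check of my own; per `df.info()`, OS has 1148 - 1131 = 17 missing entries):

```python
# Count missing values per column, largest first
print(df.isnull().sum().sort_values(ascending=False).head())
```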
The dataset has 1,148 records and 12 features. | # remove duplicates, if any
df.drop_duplicates(inplace = True)
df.shape | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
No duplicate records available in the dataset. Mobile Phones by Screen Size Contrasted by User Rating | # previewing distribution of screen size by rating
df['Round Rating'] = df['Rating'].round(decimals=0)
plt.figure(figsize = (20, 6))
ax = sns.histplot(df, x="Screen (inches)", stat="count", hue="Round Rating", multiple="dodge", shrink=0.8)
for p in ax.patches:# histogram bar label
h = p.get_height()
if (h != 0): ax.text(x = p.get_x()+(p.get_width()/2), y = h+1, s = "{:.0f}".format(h),ha = "center")
plt.xlabel('Screen Size (inches)')
plt.title("Screen Size of Mobile Phones contrasted by User Rating", fontsize=12, fontweight="bold");
plt.show()
print("Screen Size: values count=" + str(df['Screen (inches)'].count()) + ", min=" + str(df['Screen (inches)'].min()) + ", max=" + str(df['Screen (inches)'].max()) + ", mean=" + str(df['Screen (inches)'].mean())) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
The chart can be used to draw a high-level inference about consumer purchase preferences in the phone industry. Phones with a larger screen size, between 5 and 7 inches (and hence a larger overall footprint), are seen to be rated higher. | # changing the datatype of the 'OS' variable
df['OS'] = df['OS'].astype('str') | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
Mobile Phones by Camera Resolution contrasted by User Rating | # previewing distribution of camera resolution by rating
plt.figure(figsize = (20, 6))
ax = sns.histplot(df, x="Camera (MP)", hue="Round Rating", multiple="dodge", shrink=0.8)
for p in ax.patches:# label each bar in histogram
h = p.get_height()
if (h != 0): ax.text(x = p.get_x()+(p.get_width()/2), y = h+1, s = "{:.0f}".format(h),ha = "center")
plt.xlabel('Camera (MP)')
plt.title("Distribution of Camera Resolution by User Rating", fontsize=12, fontweight="bold");
plt.show()
print("Camera (MP): values count=" + str(df['Camera (MP)'].count()) + ", min=" + str(df['Camera (MP)'].min()) + ", max=" + str(df['Camera (MP)'].max()) + ", mean=" + str(df['Camera (MP)'].mean())) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
Mobile phones with cameras at high resolutions, 15 and 32 Megapixels, have significantly better relative ratings in the current market offering than mid-tier models between 20 and 30 Megapixels and low-tier models below 5 Megapixels. | # previewing distribution of Storage Capacity by rating
plt.figure(figsize = (20, 6))
ax = sns.histplot(df, x="Storage (GB)", hue="Round Rating", multiple="dodge", shrink=0.8)
for p in ax.patches:# label each bar in histogram
h = p.get_height()
if (h != 0): ax.text(x = p.get_x()+(p.get_width()/2), y = h+1, s = "{:.0f}".format(h),ha = "center")
plt.xlabel('Storage (GB)')
plt.title("Distribution of Storage Capacity by User Rating", fontsize=12, fontweight="bold");
plt.show()
print("Storage (GB): values count=" + str(df['Storage (GB)'].count()) + ", min=" + str(df['Storage (GB)'].min()) + ", max=" + str(df['Storage (GB)'].max()) + ", mean=" + str(df['Storage (GB)'].mean())) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
As anticipated, mobile phones with higher internal storage capacities, greater than or equal to 256 Gigabytes, receive significantly better relative ratings than models with less than 128 Gigabytes. Additionally, there are very few purchases of mobile phones with 512 Gigabytes of storage or more. Mobile Phone Specifications by User Preference (Likes) In the e-commerce store from which the data was retrieved, users can also add a product to their wishlist after a high-level assessment of the product features and pricing. The number of likes a product has received refers to the number of users who have added the given product to their wishlist. | # pairplot to investigate the relationship between all the variables
sns.pairplot(df)
plt.show() | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
In reference to the pair plot above, mid-tier phone models are significantly better rated and better received than their much more expensive and budget counterparts in the current local market. Phones with mid-tier features, such as a large 5 to 7 inch display, around 128 Gigabytes of storage, 4 Gigabytes of RAM, 3000 to 5000 milliampere-hour battery capacity and a 10 to 30 Megapixel camera resolution, receive more likes. There appears to be a direct correlation between the number of likes a product receives beforehand and the user ratings after purchase. Mobile phones that received an average rating of 4 had roughly 300 likes from users based on the specifications and price point provided. This also implies that the likes a product receives translate directly into purchases of the product in the long term. 3. Data Preprocessing Converting Text to Numerical Vectors The features Price Category and Battery Type contain the dependent and independent values respectively that are key to the experiment. These values need to be converted into numerical values in order to be used by the algorithm. | # creating categorical variables for the battery type feature
df["Battery Type"].replace({"Li-Po": "0", "Li-Ion": "1"}, inplace=True)
print(df)
# creating categorical variables for the price category feature
df["Price Category"].replace({"Budget": "0", "Mid-Tier": "1", "Flagship": "2"}, inplace=True)
print(df)
df["Price Category"].value_counts(normalize= True)
import nltk
import string
import math
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn import metrics
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.stem.wordnet import WordNetLemmatizer
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=0, lowercase=False)
vectorizer.fit(df["OS"])
vectorizer.vocabulary_
vectorizer = CountVectorizer(min_df=0, lowercase=False)
vectorizer.fit(df["Resolution (pixels)"])
vectorizer.vocabulary_ | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
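As an aside, a hedged alternative to the replace() calls above (illustrative only, not applied in this notebook) is to map the categories straight to integer codes so the encoded columns end up with a numeric dtype rather than strings:

```python
# Sketch: explicit integer encoding with pandas map (assumes the original
# string category values are still present in df).
battery_map = {"Li-Po": 0, "Li-Ion": 1}
price_map = {"Budget": 0, "Mid-Tier": 1, "Flagship": 2}
df["Battery Type"] = df["Battery Type"].map(battery_map)
df["Price Category"] = df["Price Category"].map(price_map)
```

Either way, scikit-learn's LogisticRegression can also handle string class labels directly for the target column.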
Creating Bag of Words models | df["OS"] = vectorizer.transform(df["OS"]).toarray()
print(df)
df["Resolution (pixels)"] = vectorizer.transform(df["Resolution (pixels)"]).toarray()
print (df) | Phone Screen (inches) Resolution (pixels) \
0 Gionee M7 Power 6.00 0
1 Gionee M7 6.01 0
2 Samsung Galaxy M21 6GB/128GB 6.40 0
3 Samsung Galaxy M21 4GB/64GB 6.40 0
4 Samsung Galaxy A31 6GB/128GB 6.40 0
... ... ... ...
1143 Nokia 105 (2019) 1.77 0
1144 Nokia 220 4G 2.40 0
1145 Nokia X71 6.39 0
1146 Nokia 2.2 3GB/32GB 5.71 0
1147 Nokia 2.2 2GB/16GB 5.71 0
Camera (MP) OS Storage (GB) RAM (GB) Battery (mAh) Battery Type \
0 8.0 0 64 4 4000 0
1 8.0 0 64 6 4000 0
2 20.0 0 128 6 6000 0
3 20.0 0 64 4 6000 0
4 20.0 0 128 6 5000 0
... ... .. ... ... ... ...
1143 8.0 0 4 8 800 1
1144 8.0 0 24 8 1200 1
1145 16.0 0 128 6 3500 0
1146 5.0 0 32 3 3000 1
1147 5.0 0 16 2 3000 1
Price(Kshs) Price Category Rating Likes Round Rating
0 15880 1 4.0 13 4.0
1 15880 1 4.5 8 4.0
2 21590 1 4.3 30 4.0
3 22499 1 3.8 31 4.0
4 24999 1 3.8 31 4.0
... ... ... ... ... ...
1143 1900 0 3.0 11 3.0
1144 1900 0 4.0 8 4.0
1145 1900 0 3.0 53 3.0
1146 1900 0 3.4 50 3.0
1147 9500 0 3.9 39 4.0
[1148 rows x 14 columns]
| MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
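For readers unfamiliar with CountVectorizer, here is a small self-contained demonstration (the strings are invented for illustration and are not part of the project data) of what fit and transform produce:

```python
# Toy bag-of-words demo: each distinct token becomes a column,
# each row counts how often that token occurs in the sample.
from sklearn.feature_extraction.text import CountVectorizer

samples = ["Android 10", "Android 11", "KaiOS"]   # invented example values
vec = CountVectorizer(min_df=0, lowercase=False)
counts = vec.fit_transform(samples)

print(vec.vocabulary_)    # token -> column index
print(counts.toarray())   # one count vector per sample
```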
4. Data Modelling Data Modelling for Logistic Regression Feature Selection For this experiment, the mobile phone's technical specifications will be used as the independent variables. The ratings and likes, which are subjective assessments, will be dropped. Variables such as the phone name are not important for price point predictability in this particular endeavour and will therefore be dropped. | X = df.drop(columns = ['Phone','Price(Kshs)', 'Rating', 'Likes', 'OS', 'Battery Type', 'Resolution (pixels)', 'Round Rating']).values
y = df['Price Category'].values | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
Splitting Data | # splitting into 75% training and 25% test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1000) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
Feature Scaling | scaler = preprocessing.StandardScaler().fit(X_train)
scaler
scaler.mean_
scaler.scale_
X_scaled = scaler.transform(X_train)
X_scaled
X_scaled.mean(axis=0)
X_scaled.std(axis=0) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
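For reference, a minimal sketch of how the fitted scaler could feed a logistic model on the phone features themselves (this assumes X_train, X_test, y_train, y_test and scaler from the cells above are still in scope; it is an illustration, not a result reported here):

```python
# Sketch: fit and score logistic regression on the scaled phone features.
from sklearn.linear_model import LogisticRegression

X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)   # reuse the scaler fitted on training data

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_scaled, y_train)
print(clf.score(X_test_scaled, y_test))
```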
Logistic Regression | from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
# NOTE: make_classification generates a synthetic toy dataset; these two lines
# overwrite the phone-specification X, y (and the earlier split) with it, so the
# pipeline and classifier below are fitted and scored on that synthetic data.
X, y = make_classification(random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train) # apply scaling on training data
# apply scaling on testing data, without leaking training data.
pipe.score(X_test, y_test)
classifier = LogisticRegression()
classifier.fit(X_train, y_train)
score = classifier.score(X_test, y_test) | _____no_output_____ | MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
5. Performance Evaluation | print("Accuracy:", (score)*100, "%") | Accuracy: 100.0 %
| MIT | CCI_501_ML_Project.ipynb | mghendi/smartphonepriceclassifier |
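Accuracy alone hides per-class behaviour. Below is a short sketch of how per-class metrics could be inspected for whichever classifier and test split are in scope at this point (illustrative only; no new results are claimed):

```python
# Sketch: confusion matrix and per-class precision/recall for the fitted model.
from sklearn.metrics import classification_report, confusion_matrix

y_pred = classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```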
CIFAR-10 PROJECT CARLOS CABAÑÓ 1. Libraries We load the libraries, including the Keras array/image preprocessing utilities
| from tensorflow import keras as ks
from matplotlib import pyplot as plt
import numpy as np
import time
import datetime
import random
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
2. Model network architecture We adopt the architecture of model 11 with the adjustments to Batch Normalization, Kernel Regularizer and Kernel Initializer. We add Batch Normalization to the convolution layers. | model = ks.Sequential()
model.add(ks.layers.Conv2D(64, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same', input_shape=(32,32,3)))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(64, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D((2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(128, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(128, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(256, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Dropout(0.2))
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.MaxPooling2D(pool_size=(2, 2)))
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Conv2D(512, (3, 3), strides=1, activation='relu', kernel_regularizer=l2(0.0005), kernel_initializer="he_uniform", padding='same'))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.3))
model.add(ks.layers.Flatten())
model.add(ks.layers.Dense(512, activation='relu', kernel_regularizer=l2(0.001), kernel_initializer="he_uniform"))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.4))
model.add(ks.layers.Dense(512, activation='relu', kernel_regularizer=l2(0.001), kernel_initializer="he_uniform"))
model.add(ks.layers.BatchNormalization())
model.add(ks.layers.Dropout(0.5))
model.add(ks.layers.Dense(10, activation='softmax'))
model.summary() | Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 32, 32, 64) 1792
_________________________________________________________________
batch_normalization (BatchNo (None, 32, 32, 64) 256
_________________________________________________________________
conv2d_1 (Conv2D) (None, 32, 32, 64) 36928
_________________________________________________________________
batch_normalization_1 (Batch (None, 32, 32, 64) 256
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 16, 16, 64) 0
_________________________________________________________________
dropout (Dropout) (None, 16, 16, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 16, 16, 128) 73856
_________________________________________________________________
batch_normalization_2 (Batch (None, 16, 16, 128) 512
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 128) 147584
_________________________________________________________________
batch_normalization_3 (Batch (None, 16, 16, 128) 512
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 8, 8, 128) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 8, 8, 128) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 8, 8, 256) 295168
_________________________________________________________________
batch_normalization_4 (Batch (None, 8, 8, 256) 1024
_________________________________________________________________
conv2d_5 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
batch_normalization_5 (Batch (None, 8, 8, 256) 1024
_________________________________________________________________
conv2d_6 (Conv2D) (None, 8, 8, 256) 590080
_________________________________________________________________
batch_normalization_6 (Batch (None, 8, 8, 256) 1024
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 4, 4, 256) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 4, 4, 256) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 4, 4, 512) 1180160
_________________________________________________________________
batch_normalization_7 (Batch (None, 4, 4, 512) 2048
_________________________________________________________________
conv2d_8 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
batch_normalization_8 (Batch (None, 4, 4, 512) 2048
_________________________________________________________________
conv2d_9 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
batch_normalization_9 (Batch (None, 4, 4, 512) 2048
_________________________________________________________________
conv2d_10 (Conv2D) (None, 4, 4, 512) 2359808
_________________________________________________________________
batch_normalization_10 (Batc (None, 4, 4, 512) 2048
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 2, 2, 512) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
batch_normalization_11 (Batc (None, 2, 2, 512) 2048
_________________________________________________________________
conv2d_12 (Conv2D) (None, 2, 2, 512) 2359808
_________________________________________________________________
batch_normalization_12 (Batc (None, 2, 2, 512) 2048
_________________________________________________________________
dropout_3 (Dropout) (None, 2, 2, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 2048) 0
_________________________________________________________________
dense (Dense) (None, 512) 1049088
_________________________________________________________________
batch_normalization_13 (Batc (None, 512) 2048
_________________________________________________________________
dropout_4 (Dropout) (None, 512) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 262656
_________________________________________________________________
batch_normalization_14 (Batc (None, 512) 2048
_________________________________________________________________
dropout_5 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 5130
=================================================================
Total params: 16,052,554
Trainable params: 16,042,058
Non-trainable params: 10,496
_________________________________________________________________
| MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
3. Optimizer and loss function We add the learning rate to the optimizer | from keras.optimizers import SGD
model.compile(optimizer=SGD(lr=0.001, momentum=0.9),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
4. Preparing the data | cifar10 = ks.datasets.cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
cifar10_labels = [
'airplane', # id 0
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck',
]
print('Number of labels: %s' % len(cifar10_labels)) | Number of labels: 10
| MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
Let's plot a sample of the images from the CIFAR-10 dataset: | # Plot a sample of the images from the CIFAR-10 dataset
print('Train: X=%s, y=%s' % (x_train.shape, y_train.shape))
print('Test: X=%s, y=%s' % (x_test.shape, y_test.shape))
for i in range(9):
plt.subplot(330 + 1 + i)
plt.imshow(x_train[i], cmap=plt.get_cmap('gray'))
plt.title(cifar10_labels[y_train[i,0]])
plt.subplots_adjust(hspace = 1)
plt.show() | Train: X=(50000, 32, 32, 3), y=(50000, 1)
Test: X=(10000, 32, 32, 3), y=(10000, 1)
| MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
We split off a validation set so that validation runs alongside training: | x_val = x_train[-10000:]
y_val = y_train[-10000:]
x_train = x_train[:-10000]
y_train = y_train[:-10000]
| _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
We encode the class labels for classification (integer label encoding rather than one-hot, to match the sparse loss used above): | le = LabelEncoder()
le.fit(y_train.ravel())
y_train_encoded = le.transform(y_train.ravel())
y_val_encoded = le.transform(y_val.ravel())
y_test_encoded = le.transform(y_test.ravel()) | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
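A minimal check (with invented toy labels, not the CIFAR-10 arrays) of what LabelEncoder does: it maps the observed class values to consecutive integers 0..K-1, which is the format sparse_categorical_crossentropy expects, so no one-hot step is needed here:

```python
# Toy demo: LabelEncoder produces consecutive integer codes.
import numpy as np
from sklearn.preprocessing import LabelEncoder

toy = np.array([3, 0, 9, 3])
le_demo = LabelEncoder().fit(toy)
print(le_demo.transform(toy))   # [1 0 2 1]
```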
5. Tuning: Early Stopping We define early stopping based on the validation loss (patience 5) and on the validation accuracy (patience 10), to allow some margin. With Early Stopping we stop training at the optimal point and avoid continuing to train once overfitting sets in. | callback_val_loss = EarlyStopping(monitor="val_loss", patience=5)
callback_val_accuracy = EarlyStopping(monitor="val_accuracy", patience=10) | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
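One optional refinement, shown here only as a hedged variant and not used in this run, is to ask the callback to roll back to the best epoch's weights when it fires:

```python
# Variant (not used above): restore the best weights when early stopping triggers.
from tensorflow.keras.callbacks import EarlyStopping

callback_val_loss_best = EarlyStopping(monitor="val_loss", patience=5,
                                       restore_best_weights=True)
```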
6. Image transformer 6.1 Training images | train_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
train_generator = train_datagen.flow(
x_train,
y_train_encoded,
batch_size=64
) | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
6.2 Validation and test images | validation_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
validation_generator = validation_datagen.flow(
x_val,
y_val_encoded,
batch_size=64
)
test_datagen = ImageDataGenerator(
horizontal_flip=True,
width_shift_range=0.2,
height_shift_range=0.2,
)
test_generator = test_datagen.flow(
x_test,
y_test_encoded,
batch_size=64
) | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
6.3 Data generator | sample = random.choice(range(0,1457))
image = x_train[sample]
plt.imshow(image, cmap=plt.cm.binary)
sample = random.choice(range(0,1457))
example_generator = train_datagen.flow(
x_train[sample:sample+1],
y_train_encoded[sample:sample+1],
batch_size=64
)
plt.figure(figsize=(12, 12))
for i in range(0, 15):
plt.subplot(5, 3, i+1)
for X, Y in example_generator:
image = X[0]
plt.imshow(image)
break
plt.tight_layout()
plt.show() | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
7. Training | t = time.perf_counter()
steps=int(x_train.shape[0]/64)
history = model.fit(train_generator, epochs=100, use_multiprocessing=False, batch_size= 64, validation_data=validation_generator, steps_per_epoch=steps, callbacks=[callback_val_loss, callback_val_accuracy])
elapsed_time = datetime.timedelta(seconds=(time.perf_counter() - t))
print('Tiempo de entrenamiento:', elapsed_time) | Tiempo de entrenamiento: 0:52:12.653343
| MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
8. Evaluating the results | _, acc = model.evaluate(x_test, y_test_encoded, verbose=0)
print('> %.3f' % (acc * 100.0))
plt.title('Cross Entropy Loss')
plt.plot(history.history['loss'], color='blue', label='train')
plt.plot(history.history['val_loss'], color='orange', label='test')
plt.show()
plt.title('Classification Accuracy')
plt.plot(history.history['accuracy'], color='blue', label='train')
plt.plot(history.history['val_accuracy'], color='orange', label='test')
plt.show()
predictions = model.predict(x_test)
def plot_image(i, predictions_array, true_label, img):
predictions_array, true_label, img = predictions_array, true_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(predictions_array)
if predicted_label == true_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(predicted_label,
100*np.max(predictions_array),
true_label[0]),
color=color)
def plot_value_array(i, predictions_array, true_label):
predictions_array, true_label = predictions_array, true_label[i]
plt.grid(False)
plt.xticks(range(10))
plt.yticks([])
thisplot = plt.bar(range(10), predictions_array, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(predictions_array)
thisplot[predicted_label].set_color('red')
thisplot[true_label[0]].set_color('blue') | _____no_output_____ | MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
We plot the first images: | i = 0
for l in cifar10_labels:
print(i, l)
i += 1
num_rows = 5
num_cols = 4
start = 650
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i+start, predictions[i+start], y_test, x_test)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i+start, predictions[i+start], y_test)
plt.tight_layout()
plt.show() | 0 airplane
1 automobile
2 bird
3 cat
4 deer
5 dog
6 frog
7 horse
8 ship
9 truck
| MIT | Project Portfolio/cnn-cifar10-tf2-v12_Notebook_CarlosCabano.ipynb | CarlosCabano/carloscabano.github.io |
Summary About the Dataset The data files train.csv and test.csv contain gray-scale images of hand-drawn digits, from zero through nine. Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels. Each pixel has a single pixel-value associated with it, indicating the lightness or darkness of that pixel, with higher numbers meaning darker. This pixel-value is an integer between 0 and 255, inclusive. The training data set, (train.csv), has 785 columns. The first column, called "label", is the digit that was drawn by the user. The rest of the columns contain the pixel-values of the associated image. Each pixel column in the training set has a name like pixelx, where x is an integer between 0 and 783, inclusive. To locate this pixel on the image, suppose that we have decomposed x as x = i * 28 + j, where i and j are integers between 0 and 27, inclusive. Then pixelx is located on row i and column j of a 28 x 28 matrix, (indexing by zero). For example, pixel31 indicates the pixel that is in the fourth column from the left, and the second row from the top. Visually, the pixels make up the image row by row: 000 001 002 003 ... 026 027 / 028 029 030 031 ... 054 055 / 056 057 058 059 ... 082 083 / ... / 728 729 730 731 ... 754 755 / 756 757 758 759 ... 782 783. The test data set, (test.csv), is the same as the training set, except that it does not contain the "label" column. [More about MNIST Dataset can be found here](http://yann.lecun.com/exdb/mnist/) [Wiki Link](https://en.wikipedia.org/wiki/MNIST_database) Method In this post I will be describing my solution to classify handwritten digits (MNIST dataset). Below is a deep neural network (convolutional neural network) consisting of convolution and fully connected layers. Go ahead and use the TensorBoard summaries saved from the model for detailed visualization. A small indexing check follows the imports below. | import numpy as np
import pandas as pd
import tensorflow as tf
import keras.preprocessing.image
import sklearn.preprocessing
import sklearn.model_selection
import sklearn.metrics
import sklearn.linear_model
import sklearn.naive_bayes
import sklearn.tree
import sklearn.ensemble
import os;
import datetime
import cv2
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm as cm
%matplotlib inline
import platform
print("Platform deatils {0} \nPython version {1}".format(
platform.platform(), platform.python_version())) | Platform deatils Windows-10-10.0.15063-SP0
Python version 3.6.2
| BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
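A quick sanity check of the pixel indexing described above (a tiny sketch on a stand-in array, not the competition data): for a flat index x, the row is x // 28 and the column is x % 28, so x = i * 28 + j:

```python
# Pixel index <-> (row, column) check for 28x28 images.
import numpy as np

x = 31
i, j = divmod(x, 28)
print(i, j)                    # -> 1 3 (second row, fourth column)

flat = np.arange(784)          # stand-in for one image's 784 pixel values
img = flat.reshape(28, 28)
print(img[i, j] == flat[x])    # True: reshape preserves this mapping
```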
Additional info: I am going to use the Kaggle CSV based data set, but an image data set can also be downloaded and extracted with helper functions like the ones below (as written, the URL and filenames point to the notMNIST archives). Function to download and extract the dataset | url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
#Load the input file from the folder
if os.path.isfile('MNISTdatacsv/train.csv'):
data_df = pd.read_csv('MNISTdatacsv/train.csv')
print('train.csv loaded: data_df({0[0]},{0[1]})'.format(data_df.shape))
else:
print('Error: train.csv not found')
## read test data
# read test data from CSV file
if os.path.isfile('MNISTdatacsv/test.csv'):
test_df = pd.read_csv('MNISTdatacsv/test.csv')
print('test.csv loaded: test_df{0}'.format(test_df.shape))
else:
print('Error: test.csv not found')
# transform and normalize test data
x_test = test_df.iloc[:,0:].values.reshape(-1,28,28,1) # (28000,28,28,1) array
x_test = x_test.astype(np.float)
x_test = normalize_data(x_test)
print('x_test.shape = ', x_test.shape)
# for saving results
y_test_pred = {}
y_test_pred_labels = {} | train.csv loaded: data_df(42000,785)
test.csv loaded: test_df(28000, 784)
x_test.shape = (28000, 28, 28, 1)
| BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
Preprocessing Normalize data and split into training and validation sets. A few options: - To scale features in a way that is robust to outliers you can use sklearn.preprocessing.RobustScaler(): rtoo = sklearn.preprocessing.RobustScaler(); rtoo.fit(data); data = rtoo.transform(data) - Or you can standardize using the mean and standard deviation: data = (data - data.mean()) / data.std() (try different normalization techniques; these are just a few) - Another idea is to convert the pixel range from [0, 255] to [-1, 1]: data = ((data / 255.) - 0.5) * 2. - Here I am converting the range to [0, 1]; a short comparison sketch follows this cell. [One hot encoding my notes](http://jp.jithinjp.in/2018/Representing-Categorical-values-in-Machine-learning) | # function to normalize data
def normalize_data(data):
data = data / data.max() # convert from [0:255] to [0.:1.]
return data
# class labels to one-hot vectors e.g. 1 => [0 1 0 0 0 0 0 0 0 0]
def dense_to_one_hot(labels_dense, num_classes):
num_labels = labels_dense.shape[0]
index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes))
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
# one-hot encodings into labels
def one_hot_to_dense(labels_one_hot):
return np.argmax(labels_one_hot,1)
# accuracy of dense-label predictions
def accuracy_from_dense_labels(y_target, y_pred):
y_target = y_target.reshape(-1,)
y_pred = y_pred.reshape(-1,)
return np.mean(y_target == y_pred)
# accuracy of one-hot encoded predictions
def accuracy_from_one_hot_labels(y_target, y_pred):
y_target = one_hot_to_dense(y_target).reshape(-1,)
y_pred = one_hot_to_dense(y_pred).reshape(-1,)
return np.mean(y_target == y_pred)
# extract and normalize images
x_train_valid = data_df.iloc[:,1:].values.reshape(-1,28,28,1) # (42000,28,28,1) array
x_train_valid = x_train_valid.astype(np.float) # convert from int64 to float32
x_train_valid = normalize_data(x_train_valid)
image_width = image_height = 28
image_size = 784
# extract image labels
y_train_valid_labels = data_df.iloc[:,0].values # (42000,1) array
labels_count = np.unique(y_train_valid_labels).shape[0]; # number of different labels = 10
#plot some images and labels
plt.figure(figsize=(15,9))
for i in range(50):
plt.subplot(5,10,1+i)
plt.title(y_train_valid_labels[i])
plt.imshow(x_train_valid[i].reshape(28,28), cmap=cm.inferno)
# labels in one hot representation
y_train_valid = dense_to_one_hot(y_train_valid_labels, labels_count).astype(np.uint8)
# dictionaries for saving results
y_valid_pred = {}
y_train_pred = {}
y_test_pred = {}
train_loss, valid_loss = {}, {}
train_acc, valid_acc = {}, {}
print('x_train_valid.shape = ', x_train_valid.shape)
print('y_train_valid_labels.shape = ', y_train_valid_labels.shape)
print('image_size = ', image_size )
print('image_width = ', image_width)
print('image_height = ', image_height)
print('labels_count = ', labels_count) | x_train_valid.shape = (42000, 28, 28, 1)
y_train_valid_labels.shape = (42000,)
image_size = 784
image_width = 28
image_height = 28
labels_count = 10
| BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
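A quick side-by-side of the scaling options mentioned at the top of this section, on a made-up pixel array (illustration only):

```python
# Compare [0,1] scaling (used in this notebook), [-1,1] scaling and standardization.
import numpy as np

pixels = np.array([0., 64., 128., 255.])
print(pixels / 255.)                              # [0, 1] range
print((pixels / 255. - 0.5) * 2.)                 # [-1, 1] range
print((pixels - pixels.mean()) / pixels.std())    # zero mean, unit variance
```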
Data augmentation Let's stick to basics like rotations, translations and zoom using Keras | def generate_images(imgs):
# rotations, translations, zoom
image_generator = keras.preprocessing.image.ImageDataGenerator(
rotation_range = 10, width_shift_range = 0.1 , height_shift_range = 0.1,
zoom_range = 0.1)
# get transformed images
imgs = image_generator.flow(imgs.copy(), np.zeros(len(imgs)),
batch_size=len(imgs), shuffle = False).next()
return imgs[0]
# Visualizing the image augmentation
fig,axs = plt.subplots(5,10, figsize=(15,9))
for i in range(5):
n = np.random.randint(0,x_train_valid.shape[0]-2)
axs[i,0].imshow(x_train_valid[n:n+1].reshape(28,28),cmap=cm.inferno)
for j in range(1,10):
axs[i,j].imshow(generate_images(x_train_valid[n:n+1]).reshape(28,28), cmap=cm.inferno)
| _____no_output_____ | BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
Benchmarking on some basic ML models As we have our training data ready, let's run a couple of basic machine learning models. I consider these a baseline that will help me later to put the performance of my main model in context; in simple words, they give me data points to compare performance across models. Let's use logistic regression, extra trees and random forest classifiers along with cross-validation for benchmarking. | logistic_regression = sklearn.linear_model.LogisticRegression(verbose=0, solver='lbfgs',multi_class='multinomial')
extra_trees = sklearn.ensemble.ExtraTreesClassifier(verbose=0)
random_forest = sklearn.ensemble.RandomForestClassifier(verbose=0)
bench_markingDict = {'logistic_regression': logistic_regression,
'extra_trees': extra_trees,
'random_forest': random_forest }
bench_marking = ['logistic_regression', 'extra_trees','random_forest']
for bm_model in bench_marking:
train_acc[bm_model] = []
valid_acc[bm_model] = []
cv_num = 10 # cross validations default = 20 => 5% validation set
kfold = sklearn.model_selection.KFold(cv_num, shuffle=True, random_state=123)
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# start timer
start = datetime.datetime.now();
# train and validation data of original images
x_train = x_train_valid[train_index].reshape(-1,784)
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index].reshape(-1,784)
y_valid = y_train_valid[valid_index]
for bm_model in bench_marking:
# create cloned model from base models
model = sklearn.base.clone(bench_markingDict[bm_model])
model.fit(x_train, one_hot_to_dense(y_train))
# predictions
y_train_pred[bm_model] = model.predict_proba(x_train)
y_valid_pred[bm_model] = model.predict_proba(x_valid)
train_acc[bm_model].append(accuracy_from_one_hot_labels(y_train_pred[bm_model], y_train))
valid_acc[bm_model].append(accuracy_from_one_hot_labels(y_valid_pred[bm_model], y_valid))
print(i+1,': '+bm_model+' train/valid accuracy = %.3f/%.3f'%(train_acc[bm_model][-1],
valid_acc[bm_model][-1]))
# only one iteration
if False:
break;
print(bm_model+': averaged train/valid accuracy = %.3f/%.3f'%(np.mean(train_acc[bm_model]),
np.mean(valid_acc[bm_model])))
| 1 : logistic_regression train/valid accuracy = 0.940/0.920
1 : extra_trees train/valid accuracy = 1.000/0.947
1 : random_forest train/valid accuracy = 0.999/0.941
2 : logistic_regression train/valid accuracy = 0.940/0.922
2 : extra_trees train/valid accuracy = 1.000/0.949
2 : random_forest train/valid accuracy = 0.999/0.941
3 : logistic_regression train/valid accuracy = 0.939/0.928
3 : extra_trees train/valid accuracy = 1.000/0.944
3 : random_forest train/valid accuracy = 0.999/0.945
4 : logistic_regression train/valid accuracy = 0.939/0.924
4 : extra_trees train/valid accuracy = 1.000/0.945
4 : random_forest train/valid accuracy = 0.999/0.941
5 : logistic_regression train/valid accuracy = 0.940/0.920
5 : extra_trees train/valid accuracy = 1.000/0.939
5 : random_forest train/valid accuracy = 0.999/0.941
6 : logistic_regression train/valid accuracy = 0.939/0.919
6 : extra_trees train/valid accuracy = 1.000/0.948
6 : random_forest train/valid accuracy = 0.999/0.941
7 : logistic_regression train/valid accuracy = 0.941/0.916
7 : extra_trees train/valid accuracy = 1.000/0.943
7 : random_forest train/valid accuracy = 0.999/0.937
8 : logistic_regression train/valid accuracy = 0.941/0.911
8 : extra_trees train/valid accuracy = 1.000/0.942
8 : random_forest train/valid accuracy = 0.999/0.933
9 : logistic_regression train/valid accuracy = 0.940/0.925
9 : extra_trees train/valid accuracy = 1.000/0.950
9 : random_forest train/valid accuracy = 0.999/0.941
10 : logistic_regression train/valid accuracy = 0.940/0.918
10 : extra_trees train/valid accuracy = 1.000/0.945
10 : random_forest train/valid accuracy = 0.999/0.937
random_forest: averaged train/valid accuracy = 0.999/0.940
| BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
Neural network - Let's get to the fun part: the neural network | class nn_class:
# class that implements the neural network
# constructor
def __init__(self, nn_name = 'nn_1'):
# hyperparameters
self.s_f_conv1 = 3; # filter size of first convolution layer (default = 3)
self.n_f_conv1 = 36; # number of features of first convolution layer (default = 36)
self.s_f_conv2 = 3; # filter size of second convolution layer (default = 3)
self.n_f_conv2 = 36; # number of features of second convolution layer (default = 36)
self.s_f_conv3 = 3; # filter size of third convolution layer (default = 3)
self.n_f_conv3 = 36; # number of features of third convolution layer (default = 36)
self.n_n_fc1 = 576; # number of neurons of first fully connected layer (default = 576)
# hyperparameters for training
self.mb_size = 50 # mini batch size
self.keep_prob = 0.33 # keeping probability with dropout regularization
self.learn_rate_array = [10*1e-4, 7.5*1e-4, 5*1e-4, 2.5*1e-4, 1*1e-4, 1*1e-4,
1*1e-4,0.75*1e-4, 0.5*1e-4, 0.25*1e-4, 0.1*1e-4,
0.1*1e-4, 0.075*1e-4,0.050*1e-4, 0.025*1e-4, 0.01*1e-4,
0.0075*1e-4, 0.0050*1e-4,0.0025*1e-4,0.001*1e-4]
self.learn_rate_step_size = 3 # in terms of epochs
# parameters
self.learn_rate = self.learn_rate_array[0]
self.learn_rate_pos = 0 # current position pointing to current learning rate
self.index_in_epoch = 0
self.current_epoch = 0
self.log_step = 0.2 # log results in terms of epochs
self.n_log_step = 0 # counting current number of mini batches trained on
self.use_tb_summary = False # True = use tensorboard visualization
self.use_tf_saver = False # True = use saver to save the model
self.nn_name = nn_name # name of the neural network
# permutation array
self.perm_array = np.array([])
# get the next mini batch
def next_mini_batch(self):
start = self.index_in_epoch
self.index_in_epoch += self.mb_size
self.current_epoch += self.mb_size/len(self.x_train)
# adapt length of permutation array
if not len(self.perm_array) == len(self.x_train):
self.perm_array = np.arange(len(self.x_train))
# shuffle once at the start of epoch
if start == 0:
np.random.shuffle(self.perm_array)
# at the end of the epoch
if self.index_in_epoch > self.x_train.shape[0]:
np.random.shuffle(self.perm_array) # shuffle data
start = 0 # start next epoch
self.index_in_epoch = self.mb_size # set index to mini batch size
if self.train_on_augmented_data:
# use augmented data for the next epoch
self.x_train_aug = normalize_data(self.generate_images(self.x_train))
self.y_train_aug = self.y_train
end = self.index_in_epoch
if self.train_on_augmented_data:
# use augmented data
x_tr = self.x_train_aug[self.perm_array[start:end]]
y_tr = self.y_train_aug[self.perm_array[start:end]]
else:
# use original data
x_tr = self.x_train[self.perm_array[start:end]]
y_tr = self.y_train[self.perm_array[start:end]]
return x_tr, y_tr
# generate new images via rotations, translations, zoom using keras
def generate_images(self, imgs):
print('generate new set of images')
# rotations, translations, zoom
image_generator = keras.preprocessing.image.ImageDataGenerator(
rotation_range = 10, width_shift_range = 0.1 , height_shift_range = 0.1,
zoom_range = 0.1)
# get transformed images
imgs = image_generator.flow(imgs.copy(), np.zeros(len(imgs)),
batch_size=len(imgs), shuffle = False).next()
return imgs[0]
# weight initialization
def weight_variable(self, shape, name = None):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial, name = name)
# bias initialization
def bias_variable(self, shape, name = None):
initial = tf.constant(0.1, shape=shape) # positive bias
return tf.Variable(initial, name = name)
# 2D convolution
def conv2d(self, x, W, name = None):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME', name = name)
# max pooling
def max_pool_2x2(self, x, name = None):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
padding='SAME', name = name)
# attach summaries to a tensor for TensorBoard visualization
def summary_variable(self, var, var_name):
with tf.name_scope(var_name):
mean = tf.reduce_mean(var)
stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.summary.scalar('mean', mean)
tf.summary.scalar('stddev', stddev)
tf.summary.scalar('max', tf.reduce_max(var))
tf.summary.scalar('min', tf.reduce_min(var))
tf.summary.histogram('histogram', var)
# function to create the graph
def create_graph(self):
# reset default graph
tf.reset_default_graph()
# variables for input and output
self.x_data_tf = tf.placeholder(dtype=tf.float32, shape=[None,28,28,1],
name='x_data_tf')
self.y_data_tf = tf.placeholder(dtype=tf.float32, shape=[None,10], name='y_data_tf')
# 1.layer: convolution + max pooling
self.W_conv1_tf = self.weight_variable([self.s_f_conv1, self.s_f_conv1, 1,
self.n_f_conv1],
name = 'W_conv1_tf') # (5,5,1,32)
self.b_conv1_tf = self.bias_variable([self.n_f_conv1], name = 'b_conv1_tf') # (32)
self.h_conv1_tf = tf.nn.relu(self.conv2d(self.x_data_tf,
self.W_conv1_tf) + self.b_conv1_tf,
name = 'h_conv1_tf') # (.,28,28,32)
self.h_pool1_tf = self.max_pool_2x2(self.h_conv1_tf,
name = 'h_pool1_tf') # (.,14,14,32)
# 2.layer: convolution + max pooling
self.W_conv2_tf = self.weight_variable([self.s_f_conv2, self.s_f_conv2,
self.n_f_conv1, self.n_f_conv2],
name = 'W_conv2_tf')
self.b_conv2_tf = self.bias_variable([self.n_f_conv2], name = 'b_conv2_tf')
self.h_conv2_tf = tf.nn.relu(self.conv2d(self.h_pool1_tf,
self.W_conv2_tf) + self.b_conv2_tf,
name ='h_conv2_tf') #(.,14,14,32)
self.h_pool2_tf = self.max_pool_2x2(self.h_conv2_tf, name = 'h_pool2_tf') #(.,7,7,32)
# 3.layer: convolution + max pooling
self.W_conv3_tf = self.weight_variable([self.s_f_conv3, self.s_f_conv3,
self.n_f_conv2, self.n_f_conv3],
name = 'W_conv3_tf')
self.b_conv3_tf = self.bias_variable([self.n_f_conv3], name = 'b_conv3_tf')
self.h_conv3_tf = tf.nn.relu(self.conv2d(self.h_pool2_tf,
self.W_conv3_tf) + self.b_conv3_tf,
name = 'h_conv3_tf') #(.,7,7,32)
self.h_pool3_tf = self.max_pool_2x2(self.h_conv3_tf,
name = 'h_pool3_tf') # (.,4,4,32)
# 4.layer: fully connected
self.W_fc1_tf = self.weight_variable([4*4*self.n_f_conv3,self.n_n_fc1],
name = 'W_fc1_tf') # (4*4*32, 1024)
self.b_fc1_tf = self.bias_variable([self.n_n_fc1], name = 'b_fc1_tf') # (1024)
self.h_pool3_flat_tf = tf.reshape(self.h_pool3_tf, [-1,4*4*self.n_f_conv3],
name = 'h_pool3_flat_tf') # (.,1024)
self.h_fc1_tf = tf.nn.relu(tf.matmul(self.h_pool3_flat_tf,
self.W_fc1_tf) + self.b_fc1_tf,
name = 'h_fc1_tf') # (.,1024)
# add dropout
self.keep_prob_tf = tf.placeholder(dtype=tf.float32, name = 'keep_prob_tf')
self.h_fc1_drop_tf = tf.nn.dropout(self.h_fc1_tf, self.keep_prob_tf,
name = 'h_fc1_drop_tf')
# 5.layer: fully connected
self.W_fc2_tf = self.weight_variable([self.n_n_fc1, 10], name = 'W_fc2_tf')
self.b_fc2_tf = self.bias_variable([10], name = 'b_fc2_tf')
self.z_pred_tf = tf.add(tf.matmul(self.h_fc1_drop_tf, self.W_fc2_tf),
self.b_fc2_tf, name = 'z_pred_tf')# => (.,10)
# cost function
self.cross_entropy_tf = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(
labels=self.y_data_tf, logits=self.z_pred_tf), name = 'cross_entropy_tf')
# optimisation function
self.learn_rate_tf = tf.placeholder(dtype=tf.float32, name="learn_rate_tf")
self.train_step_tf = tf.train.AdamOptimizer(self.learn_rate_tf).minimize(
self.cross_entropy_tf, name = 'train_step_tf')
# predicted probabilities in one-hot encoding
self.y_pred_proba_tf = tf.nn.softmax(self.z_pred_tf, name='y_pred_proba_tf')
# tensor of correct predictions
self.y_pred_correct_tf = tf.equal(tf.argmax(self.y_pred_proba_tf, 1),
tf.argmax(self.y_data_tf, 1),
name = 'y_pred_correct_tf')
# accuracy
self.accuracy_tf = tf.reduce_mean(tf.cast(self.y_pred_correct_tf, dtype=tf.float32),
name = 'accuracy_tf')
# tensors to save intermediate accuracies and losses during training
self.train_loss_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='train_loss_tf', validate_shape = False)
self.valid_loss_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='valid_loss_tf', validate_shape = False)
self.train_acc_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='train_acc_tf', validate_shape = False)
self.valid_acc_tf = tf.Variable(np.array([]), dtype=tf.float32,
name='valid_acc_tf', validate_shape = False)
# number of weights and biases
num_weights = (self.s_f_conv1**2*self.n_f_conv1
+ self.s_f_conv2**2*self.n_f_conv1*self.n_f_conv2
+ self.s_f_conv3**2*self.n_f_conv2*self.n_f_conv3
+ 4*4*self.n_f_conv3*self.n_n_fc1 + self.n_n_fc1*10)
num_biases = self.n_f_conv1 + self.n_f_conv2 + self.n_f_conv3 + self.n_n_fc1
print('num_weights =', num_weights)
print('num_biases =', num_biases)
return None
def attach_summary(self, sess):
# create summary tensors for tensorboard
self.use_tb_summary = True
self.summary_variable(self.W_conv1_tf, 'W_conv1_tf')
self.summary_variable(self.b_conv1_tf, 'b_conv1_tf')
self.summary_variable(self.W_conv2_tf, 'W_conv2_tf')
self.summary_variable(self.b_conv2_tf, 'b_conv2_tf')
self.summary_variable(self.W_conv3_tf, 'W_conv3_tf')
self.summary_variable(self.b_conv3_tf, 'b_conv3_tf')
self.summary_variable(self.W_fc1_tf, 'W_fc1_tf')
self.summary_variable(self.b_fc1_tf, 'b_fc1_tf')
self.summary_variable(self.W_fc2_tf, 'W_fc2_tf')
self.summary_variable(self.b_fc2_tf, 'b_fc2_tf')
tf.summary.scalar('cross_entropy_tf', self.cross_entropy_tf)
tf.summary.scalar('accuracy_tf', self.accuracy_tf)
# merge all summaries for tensorboard
self.merged = tf.summary.merge_all()
# initialize summary writer
timestamp = datetime.datetime.now().strftime('%d-%m-%Y_%H-%M-%S')
filepath = os.path.join(os.getcwd(), 'logs', (self.nn_name+'_'+timestamp))
self.train_writer = tf.summary.FileWriter(os.path.join(filepath,'train'), sess.graph)
self.valid_writer = tf.summary.FileWriter(os.path.join(filepath,'valid'), sess.graph)
def attach_saver(self):
# initialize tensorflow saver
self.use_tf_saver = True
self.saver_tf = tf.train.Saver()
# train
def train_graph(self, sess, x_train, y_train, x_valid, y_valid, n_epoch = 1,
train_on_augmented_data = False):
# train on original or augmented data
self.train_on_augmented_data = train_on_augmented_data
# training and validation data
self.x_train = x_train
self.y_train = y_train
self.x_valid = x_valid
self.y_valid = y_valid
# use augmented data
if self.train_on_augmented_data:
print('generate new set of images')
self.x_train_aug = normalize_data(self.generate_images(self.x_train))
self.y_train_aug = self.y_train
# parameters
mb_per_epoch = self.x_train.shape[0]/self.mb_size
train_loss, train_acc, valid_loss, valid_acc = [],[],[],[]
# start timer
start = datetime.datetime.now();
print(datetime.datetime.now().strftime('%d-%m-%Y %H:%M:%S'),': start training')
print('learnrate = ',self.learn_rate,', n_epoch = ', n_epoch,
', mb_size = ', self.mb_size)
# looping over mini batches
for i in range(int(n_epoch*mb_per_epoch)+1):
# adapt learn_rate
self.learn_rate_pos = int(self.current_epoch // self.learn_rate_step_size)
if not self.learn_rate == self.learn_rate_array[self.learn_rate_pos]:
self.learn_rate = self.learn_rate_array[self.learn_rate_pos]
print(datetime.datetime.now()-start,': set learn rate to %.6f'%self.learn_rate)
# get new batch
x_batch, y_batch = self.next_mini_batch()
# run the graph
sess.run(self.train_step_tf, feed_dict={self.x_data_tf: x_batch,
self.y_data_tf: y_batch,
self.keep_prob_tf: self.keep_prob,
self.learn_rate_tf: self.learn_rate})
# store losses and accuracies
if i%int(self.log_step*mb_per_epoch) == 0 or i == int(n_epoch*mb_per_epoch):
self.n_log_step += 1 # for logging the results
feed_dict_train = {
self.x_data_tf: self.x_train[self.perm_array[:len(self.x_valid)]],
self.y_data_tf: self.y_train[self.perm_array[:len(self.y_valid)]],
self.keep_prob_tf: 1.0}
feed_dict_valid = {self.x_data_tf: self.x_valid,
self.y_data_tf: self.y_valid,
self.keep_prob_tf: 1.0}
# summary for tensorboard
if self.use_tb_summary:
train_summary = sess.run(self.merged, feed_dict = feed_dict_train)
valid_summary = sess.run(self.merged, feed_dict = feed_dict_valid)
self.train_writer.add_summary(train_summary, self.n_log_step)
self.valid_writer.add_summary(valid_summary, self.n_log_step)
train_loss.append(sess.run(self.cross_entropy_tf,
feed_dict = feed_dict_train))
train_acc.append(self.accuracy_tf.eval(session = sess,
feed_dict = feed_dict_train))
valid_loss.append(sess.run(self.cross_entropy_tf,
feed_dict = feed_dict_valid))
valid_acc.append(self.accuracy_tf.eval(session = sess,
feed_dict = feed_dict_valid))
print('%.2f epoch: train/val loss = %.4f/%.4f, train/val acc = %.4f/%.4f'%(
self.current_epoch, train_loss[-1], valid_loss[-1],
train_acc[-1], valid_acc[-1]))
# concatenate losses and accuracies and assign to tensor variables
tl_c = np.concatenate([self.train_loss_tf.eval(session=sess), train_loss], axis = 0)
vl_c = np.concatenate([self.valid_loss_tf.eval(session=sess), valid_loss], axis = 0)
ta_c = np.concatenate([self.train_acc_tf.eval(session=sess), train_acc], axis = 0)
va_c = np.concatenate([self.valid_acc_tf.eval(session=sess), valid_acc], axis = 0)
sess.run(tf.assign(self.train_loss_tf, tl_c, validate_shape = False))
sess.run(tf.assign(self.valid_loss_tf, vl_c , validate_shape = False))
sess.run(tf.assign(self.train_acc_tf, ta_c , validate_shape = False))
sess.run(tf.assign(self.valid_acc_tf, va_c , validate_shape = False))
print('running time for training: ', datetime.datetime.now() - start)
return None
# save summaries
def save_model(self, sess):
# tf saver
if self.use_tf_saver:
#filepath = os.path.join(os.getcwd(), 'logs' , self.nn_name)
filepath = os.path.join(os.getcwd(), self.nn_name)
self.saver_tf.save(sess, filepath)
# tb summary
if self.use_tb_summary:
self.train_writer.close()
self.valid_writer.close()
return None
# prediction
def forward(self, sess, x_data):
y_pred_proba = self.y_pred_proba_tf.eval(session = sess,
feed_dict = {self.x_data_tf: x_data,
self.keep_prob_tf: 1.0})
return y_pred_proba
# load tensors from a saved graph
def load_tensors(self, graph):
# input tensors
self.x_data_tf = graph.get_tensor_by_name("x_data_tf:0")
self.y_data_tf = graph.get_tensor_by_name("y_data_tf:0")
# weights and bias tensors
self.W_conv1_tf = graph.get_tensor_by_name("W_conv1_tf:0")
self.W_conv2_tf = graph.get_tensor_by_name("W_conv2_tf:0")
self.W_conv3_tf = graph.get_tensor_by_name("W_conv3_tf:0")
self.W_fc1_tf = graph.get_tensor_by_name("W_fc1_tf:0")
self.W_fc2_tf = graph.get_tensor_by_name("W_fc2_tf:0")
self.b_conv1_tf = graph.get_tensor_by_name("b_conv1_tf:0")
self.b_conv2_tf = graph.get_tensor_by_name("b_conv2_tf:0")
self.b_conv3_tf = graph.get_tensor_by_name("b_conv3_tf:0")
self.b_fc1_tf = graph.get_tensor_by_name("b_fc1_tf:0")
self.b_fc2_tf = graph.get_tensor_by_name("b_fc2_tf:0")
# activation tensors
self.h_conv1_tf = graph.get_tensor_by_name('h_conv1_tf:0')
self.h_pool1_tf = graph.get_tensor_by_name('h_pool1_tf:0')
self.h_conv2_tf = graph.get_tensor_by_name('h_conv2_tf:0')
self.h_pool2_tf = graph.get_tensor_by_name('h_pool2_tf:0')
self.h_conv3_tf = graph.get_tensor_by_name('h_conv3_tf:0')
self.h_pool3_tf = graph.get_tensor_by_name('h_pool3_tf:0')
self.h_fc1_tf = graph.get_tensor_by_name('h_fc1_tf:0')
self.z_pred_tf = graph.get_tensor_by_name('z_pred_tf:0')
# training and prediction tensors
self.learn_rate_tf = graph.get_tensor_by_name("learn_rate_tf:0")
self.keep_prob_tf = graph.get_tensor_by_name("keep_prob_tf:0")
self.cross_entropy_tf = graph.get_tensor_by_name('cross_entropy_tf:0')
self.train_step_tf = graph.get_operation_by_name('train_step_tf')
self.z_pred_tf = graph.get_tensor_by_name('z_pred_tf:0')
self.y_pred_proba_tf = graph.get_tensor_by_name("y_pred_proba_tf:0")
self.y_pred_correct_tf = graph.get_tensor_by_name('y_pred_correct_tf:0')
self.accuracy_tf = graph.get_tensor_by_name('accuracy_tf:0')
# tensor of stored losses and accuricies during training
self.train_loss_tf = graph.get_tensor_by_name("train_loss_tf:0")
self.train_acc_tf = graph.get_tensor_by_name("train_acc_tf:0")
self.valid_loss_tf = graph.get_tensor_by_name("valid_loss_tf:0")
self.valid_acc_tf = graph.get_tensor_by_name("valid_acc_tf:0")
return None
# get losses of training and validation sets
def get_loss(self, sess):
train_loss = self.train_loss_tf.eval(session = sess)
valid_loss = self.valid_loss_tf.eval(session = sess)
return train_loss, valid_loss
# get accuracies of training and validation sets
def get_accuracy(self, sess):
train_acc = self.train_acc_tf.eval(session = sess)
valid_acc = self.valid_acc_tf.eval(session = sess)
return train_acc, valid_acc
# get weights
def get_weights(self, sess):
W_conv1 = self.W_conv1_tf.eval(session = sess)
W_conv2 = self.W_conv2_tf.eval(session = sess)
W_conv3 = self.W_conv3_tf.eval(session = sess)
W_fc1_tf = self.W_fc1_tf.eval(session = sess)
W_fc2_tf = self.W_fc2_tf.eval(session = sess)
return W_conv1, W_conv2, W_conv3, W_fc1_tf, W_fc2_tf
# get biases
def get_biases(self, sess):
b_conv1 = self.b_conv1_tf.eval(session = sess)
b_conv2 = self.b_conv2_tf.eval(session = sess)
b_conv3 = self.b_conv3_tf.eval(session = sess)
b_fc1_tf = self.b_fc1_tf.eval(session = sess)
b_fc2_tf = self.b_fc2_tf.eval(session = sess)
return b_conv1, b_conv2, b_conv3, b_fc1_tf, b_fc2_tf
# load session from file, restore graph, and load tensors
def load_session_from_file(self, filename):
tf.reset_default_graph()
filepath = os.path.join(os.getcwd(), filename + '.meta')
#filepath = os.path.join(os.getcwd(),'logs', filename + '.meta')
saver = tf.train.import_meta_graph(filepath)
print(filepath)
sess = tf.Session()
saver.restore(sess, filename)  # restore from the filename argument (was the global 'instance')
graph = tf.get_default_graph()
self.load_tensors(graph)
return sess
# receive activations given the input
def get_activations(self, sess, x_data):
feed_dict = {self.x_data_tf: x_data, self.keep_prob_tf: 1.0}
h_conv1 = self.h_conv1_tf.eval(session = sess, feed_dict = feed_dict)
h_pool1 = self.h_pool1_tf.eval(session = sess, feed_dict = feed_dict)
h_conv2 = self.h_conv2_tf.eval(session = sess, feed_dict = feed_dict)
h_pool2 = self.h_pool2_tf.eval(session = sess, feed_dict = feed_dict)
h_conv3 = self.h_conv3_tf.eval(session = sess, feed_dict = feed_dict)
h_pool3 = self.h_pool3_tf.eval(session = sess, feed_dict = feed_dict)
h_fc1 = self.h_fc1_tf.eval(session = sess, feed_dict = feed_dict)
h_fc2 = self.z_pred_tf.eval(session = sess, feed_dict = feed_dict)
return h_conv1,h_pool1,h_conv2,h_pool2,h_conv3,h_pool3,h_fc1,h_fc2
## train the neural network graph
Model_instance_list = ['CNN1'] # useful when you want to run different
# instances of the same model with different parameters
# (we won't be doing that here, but you can try; we just have one)
# cross validations
cv_num = 10 # number of cross-validation folds (10 folds => 10% validation set; 20 => 5%)
kfold = sklearn.model_selection.KFold(cv_num, shuffle=True, random_state=123)
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# start timer
start = datetime.datetime.now();
# train and validation data of original images
x_train = x_train_valid[train_index]
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index]
y_valid = y_train_valid[valid_index]
# create neural network graph
nn_graph = nn_class(nn_name = Model_instance_list[i]) # instance of nn_class
nn_graph.create_graph() # create graph
nn_graph.attach_saver() # attach saver tensors
# start tensorflow session
with tf.Session() as sess:
# attach summaries
nn_graph.attach_summary(sess)
# variable initialization of the default graph
sess.run(tf.global_variables_initializer())
# training on original data
nn_graph.train_graph(sess, x_train, y_train, x_valid, y_valid, n_epoch = 1.0)
# training on augmented data
nn_graph.train_graph(sess, x_train, y_train, x_valid, y_valid, n_epoch = 14.0,
train_on_augmented_data = True)
# save tensors and summaries of model
nn_graph.save_model(sess)
# only one iteration
if True:
break;
print('total running time for training: ', datetime.datetime.now() - start)
instance = Model_instance_list[0]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
y_valid_pred[instance] = nn_graph.forward(sess, x_valid)
sess.close()
cnf_matrix = sklearn.metrics.confusion_matrix(
    one_hot_to_dense(y_valid), one_hot_to_dense(y_valid_pred[instance])).astype(np.float32)
labels_array = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
fig, ax = plt.subplots(1,figsize=(10,10))
ax = sns.heatmap(cnf_matrix, ax=ax, cmap=plt.cm.PuBuGn, annot=True)
ax.set_xticklabels(labels_array)
ax.set_yticklabels(labels_array)
plt.title('Confusion matrix of validation set')
plt.ylabel('True digit')
plt.xlabel('Predicted digit')
plt.show();
## loss and accuracy curves
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
train_loss[instance], valid_loss[instance] = nn_graph.get_loss(sess)
train_acc[instance], valid_acc[instance] = nn_graph.get_accuracy(sess)
sess.close()
print('final train/valid loss = %.4f/%.4f, train/valid accuracy = %.4f/%.4f'%(
train_loss[instance][-1], valid_loss[instance][-1], train_acc[instance][-1], valid_acc[instance][-1]))
plt.figure(figsize=(10, 5));
plt.subplot(1,2,1);
plt.plot(np.arange(0,len(train_acc[instance])), train_acc[instance],'-b', label='Training')
plt.plot(np.arange(0,len(valid_acc[instance])), valid_acc[instance],'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 1.1, ymin = 0.0)
plt.ylabel('accuracy')
plt.xlabel('log steps');
plt.subplot(1,2,2)
plt.plot(np.arange(0,len(train_loss[instance])), train_loss[instance],'-b', label='Training')
plt.plot(np.arange(0,len(valid_loss[instance])), valid_loss[instance],'-g', label='Validation')
plt.legend(loc='lower right', frameon=False)
plt.ylim(ymax = 3.0, ymin = 0.0)
plt.ylabel('loss')
plt.xlabel('log steps');
## visualize weights
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
W_conv1, W_conv2, W_conv3, _, _ = nn_graph.get_weights(sess)
sess.close()
print('W_conv1: min = ' + str(np.min(W_conv1)) + ' max = ' + str(np.max(W_conv1))
+ ' mean = ' + str(np.mean(W_conv1)) + ' std = ' + str(np.std(W_conv1)))
print('W_conv2: min = ' + str(np.min(W_conv2)) + ' max = ' + str(np.max(W_conv2))
+ ' mean = ' + str(np.mean(W_conv2)) + ' std = ' + str(np.std(W_conv2)))
print('W_conv3: min = ' + str(np.min(W_conv3)) + ' max = ' + str(np.max(W_conv3))
+ ' mean = ' + str(np.mean(W_conv3)) + ' std = ' + str(np.std(W_conv3)))
s_f_conv1 = nn_graph.s_f_conv1
s_f_conv2 = nn_graph.s_f_conv2
s_f_conv3 = nn_graph.s_f_conv3
# tile each layer's 36 filters into a 6x6 grid so they can be shown as a single image
W_conv1 = np.reshape(W_conv1,(s_f_conv1,s_f_conv1,1,6,6))
W_conv1 = np.transpose(W_conv1,(3,0,4,1,2))
W_conv1 = np.reshape(W_conv1,(s_f_conv1*6,s_f_conv1*6,1))
W_conv2 = np.reshape(W_conv2,(s_f_conv2,s_f_conv2,6,6,36))
W_conv2 = np.transpose(W_conv2,(2,0,3,1,4))
W_conv2 = np.reshape(W_conv2,(6*s_f_conv2,6*s_f_conv2,6,6))
W_conv2 = np.transpose(W_conv2,(2,0,3,1))
W_conv2 = np.reshape(W_conv2,(6*6*s_f_conv2,6*6*s_f_conv2))
W_conv3 = np.reshape(W_conv3,(s_f_conv3,s_f_conv3,6,6,36))
W_conv3 = np.transpose(W_conv3,(2,0,3,1,4))
W_conv3 = np.reshape(W_conv3,(6*s_f_conv3,6*s_f_conv3,6,6))
W_conv3 = np.transpose(W_conv3,(2,0,3,1))
W_conv3 = np.reshape(W_conv3,(6*6*s_f_conv3,6*6*s_f_conv3))
plt.figure(figsize=(15,5))
plt.subplot(1,3,1)
plt.gca().set_xticks(np.arange(-0.5, s_f_conv1*6, s_f_conv1), minor = False);
plt.gca().set_yticks(np.arange(-0.5, s_f_conv1*6, s_f_conv1), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv1 ' + str(W_conv1.shape))
plt.colorbar(plt.imshow(W_conv1[:,:,0], cmap=cm.inferno));
plt.subplot(1,3,2)
plt.gca().set_xticks(np.arange(-0.5, 6*6*s_f_conv2, 6*s_f_conv2), minor = False);
plt.gca().set_yticks(np.arange(-0.5, 6*6*s_f_conv2, 6*s_f_conv2), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv2 ' + str(W_conv2.shape))
plt.colorbar(plt.imshow(W_conv2[:,:], cmap=cm.inferno));
plt.subplot(1,3,3)
plt.gca().set_xticks(np.arange(-0.5, 6*6*s_f_conv3, 6*s_f_conv3), minor = False);
plt.gca().set_yticks(np.arange(-0.5, 6*6*s_f_conv3, 6*s_f_conv3), minor = False);
plt.grid(which = 'minor', color='b', linestyle='-', linewidth=1)
plt.title('W_conv3 ' + str(W_conv3.shape))
plt.colorbar(plt.imshow(W_conv3[:,:], cmap=cm.inferno));
## visualize activations
img_no = 143;
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
(h_conv1, h_pool1, h_conv2, h_pool2,h_conv3, h_pool3, h_fc1,
h_fc2) = nn_graph.get_activations(sess, x_train_valid[img_no:img_no+1])
sess.close()
# original image
plt.figure(figsize=(15,9))
plt.subplot(2,4,1)
plt.imshow(x_train_valid[img_no].reshape(28,28),cmap=cm.inferno);
# 1. convolution
plt.subplot(2,4,2)
plt.title('h_conv1 ' + str(h_conv1.shape))
h_conv1 = np.reshape(h_conv1,(-1,28,28,6,6))
h_conv1 = np.transpose(h_conv1,(0,3,1,4,2))
h_conv1 = np.reshape(h_conv1,(-1,6*28,6*28))
plt.imshow(h_conv1[0], cmap=cm.inferno);
# 1. max pooling
plt.subplot(2,4,3)
plt.title('h_pool1 ' + str(h_pool1.shape))
h_pool1 = np.reshape(h_pool1,(-1,14,14,6,6))
h_pool1 = np.transpose(h_pool1,(0,3,1,4,2))
h_pool1 = np.reshape(h_pool1,(-1,6*14,6*14))
plt.imshow(h_pool1[0], cmap=cm.inferno);
# 2. convolution
plt.subplot(2,4,4)
plt.title('h_conv2 ' + str(h_conv2.shape))
h_conv2 = np.reshape(h_conv2,(-1,14,14,6,6))
h_conv2 = np.transpose(h_conv2,(0,3,1,4,2))
h_conv2 = np.reshape(h_conv2,(-1,6*14,6*14))
plt.imshow(h_conv2[0], cmap=cm.inferno);
# 2. max pooling
plt.subplot(2,4,5)
plt.title('h_pool2 ' + str(h_pool2.shape))
h_pool2 = np.reshape(h_pool2,(-1,7,7,6,6))
h_pool2 = np.transpose(h_pool2,(0,3,1,4,2))
h_pool2 = np.reshape(h_pool2,(-1,6*7,6*7))
plt.imshow(h_pool2[0], cmap=cm.inferno);
# 3. convolution
plt.subplot(2,4,6)
plt.title('h_conv3 ' + str(h_conv3.shape))
h_conv3 = np.reshape(h_conv3,(-1,7,7,6,6))
h_conv3 = np.transpose(h_conv3,(0,3,1,4,2))
h_conv3 = np.reshape(h_conv3,(-1,6*7,6*7))
plt.imshow(h_conv3[0], cmap=cm.inferno);
# 3. max pooling
plt.subplot(2,4,7)
plt.title('h_pool3 ' + str(h_pool3.shape))
h_pool3 = np.reshape(h_pool3,(-1,4,4,6,6))
h_pool3 = np.transpose(h_pool3,(0,3,1,4,2))
h_pool3 = np.reshape(h_pool3,(-1,6*4,6*4))
plt.imshow(h_pool3[0], cmap=cm.inferno);
# 4. FC layer
plt.subplot(2,4,8)
plt.title('h_fc1 ' + str(h_fc1.shape))
h_fc1 = np.reshape(h_fc1,(-1,24,24))
plt.imshow(h_fc1[0], cmap=cm.inferno);
# 5. FC layer
np.set_printoptions(precision=2)
print('h_fc2 = ', h_fc2)
## show misclassified images
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
y_valid_pred[instance] = nn_graph.forward(sess, x_valid)
sess.close()
y_valid_pred_label = one_hot_to_dense(y_valid_pred[instance])
y_valid_label = one_hot_to_dense(y_valid)
y_val_false_index = []
for i in range(y_valid_label.shape[0]):
if y_valid_pred_label[i] != y_valid_label[i]:
y_val_false_index.append(i)
print('# false predictions: ', len(y_val_false_index),'out of', len(y_valid))
plt.figure(figsize=(10,15))
for j in range(0,5):
for i in range(0,10):
if j*10+i<len(y_val_false_index):
plt.subplot(10,10,j*10+i+1)
plt.title('%d/%d'%(y_valid_label[y_val_false_index[j*10+i]],
y_valid_pred_label[y_val_false_index[j*10+i]]))
plt.imshow(x_valid[y_val_false_index[j*10+i]].reshape(28,28),cmap=cm.inferno)
nn_graph = nn_class() # create instance
sess = nn_graph.load_session_from_file(instance) # receive session
y_test_pred = {}
y_test_pred_labels = {}
# split evaluation of test predictions into batches
kfold = sklearn.model_selection.KFold(40, shuffle=False)
for i,(train_index, valid_index) in enumerate(kfold.split(x_test)):
if i==0:
y_test_pred[instance] = nn_graph.forward(sess, x_test[valid_index])
else:
y_test_pred[instance] = np.concatenate([y_test_pred[instance],
nn_graph.forward(sess, x_test[valid_index])])
sess.close()
y_test_pred_labels[instance] = one_hot_to_dense(y_test_pred[instance])
print(instance +': y_test_pred_labels[instance].shape = ', y_test_pred_labels[instance].shape)
unique, counts = np.unique(y_test_pred_labels[instance], return_counts=True)
print(dict(zip(unique, counts)))
plt.figure(figsize=(10,15))
for j in range(0,5):
for i in range(0,10):
plt.subplot(10,10,j*10+i+1)
plt.title('%d'%y_test_pred_labels[instance][j*10+i])
plt.imshow(x_test[j*10+i].reshape(28,28), cmap=cm.inferno)
# Suppose I have 4 models, how would I stack them up
Model_instance_list = ['CNN1', 'CNN2', 'CNN3', 'CNN4']
# cross validations
# choose the same seed as was done for training the neural nets
kfold = sklearn.model_selection.KFold(len(Model_instance_list), shuffle=True, random_state = 123)
# train and test data for meta model
x_train_meta = np.array([]).reshape(-1,10)
y_train_meta = np.array([]).reshape(-1,10)
x_test_meta = np.zeros((x_test.shape[0], 10))
print('Out-of-folds predictions:')
# make out-of-folds predictions from base models
for i,(train_index, valid_index) in enumerate(kfold.split(x_train_valid)):
# training and validation data
x_train = x_train_valid[train_index]
y_train = y_train_valid[train_index]
x_valid = x_train_valid[valid_index]
y_valid = y_train_valid[valid_index]
# load neural network and make predictions
instance = Model_instance_list[i]
nn_graph = nn_class()
sess = nn_graph.load_session_from_file(instance)
y_train_pred[instance] = nn_graph.forward(sess, x_train[:len(x_valid)])
y_valid_pred[instance] = nn_graph.forward(sess, x_valid)
y_test_pred[instance] = nn_graph.forward(sess, x_test)
sess.close()
# collect train and test data for meta model
x_train_meta = np.concatenate([x_train_meta, y_valid_pred[instance]])
y_train_meta = np.concatenate([y_train_meta, y_valid])
x_test_meta += y_test_pred[instance]
    print(Model_instance_list[i],': train/valid accuracy = %.4f/%.4f'%(
accuracy_from_one_hot_labels(y_train_pred[instance], y_train[:len(x_valid)]),
accuracy_from_one_hot_labels(y_valid_pred[instance], y_valid)))
if False:
break;
# take average of test predictions
x_test_meta = x_test_meta/(i+1)
y_test_pred['stacked_models'] = x_test_meta
print('Stacked models: valid accuracy = %.4f'%accuracy_from_one_hot_labels(x_train_meta,
y_train_meta))
| _____no_output_____ | BSD-3-Clause | MNIST-image-classification-using-TF.ipynb | jpnevrones/Digit-Recognizer |
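The stacking cell above only assembles the out-of-fold training matrix (`x_train_meta`, `y_train_meta`) and an averaged test prediction; it never actually fits a level-2 model. A minimal sketch of one possible meta classifier is shown below; the choice of logistic regression is an illustrative assumption, not part of the original notebook.

```python
# Hypothetical level-2 (meta) model fitted on the stacked out-of-fold predictions.
# x_train_meta: base-model class probabilities, y_train_meta: one-hot labels.
import sklearn.linear_model

meta_model = sklearn.linear_model.LogisticRegression(max_iter=1000)
meta_model.fit(x_train_meta, one_hot_to_dense(y_train_meta))

# predict digits for the test set from the averaged base-model probabilities
y_test_meta_labels = meta_model.predict(x_test_meta)

# training-set accuracy of the meta model, just as a sanity check
print('meta-model training accuracy = %.4f' % meta_model.score(
    x_train_meta, one_hot_to_dense(y_train_meta)))
```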
plan_pole_transect
Visualize pole locations on Pea Island beach transect. Profiles were extracted from SfM maps by Jenna on 31 August 2021 - Provisional Data.
Read in profiles
Use pandas to read profiles; pull out arrays of x, y (UTM meters, same for all profiles) and z (m NAVD88). Calculate distance along profile from arbitrary starting point. | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
fnames = ['crossShore_profile_2019_preDorian.xyz', 'crossShore_profile_2019_postDorian.xyz',
'crossShore_profile_2020_Sep.xyz', 'crossShore_profile_2021_Apr.xyz']
df0 = pd.read_csv(fnames[0],skiprows=1,sep=',',header=None,names=['x','y','z'])
df1 = pd.read_csv(fnames[1],skiprows=1,sep=',',header=None,names=['x','y','z'])
df2 = pd.read_csv(fnames[2],skiprows=1,sep=',',header=None,names=['x','y','z'])
df3 = pd.read_csv(fnames[3],skiprows=1,sep=',',header=None,names=['x','y','z'])
df0.describe()
x = df0['x'].values
y = df0['y'].values
z0 = df0['z'].values
z1 = df1['z'].values
z2 = df2['z'].values
z3 = df3['z'].values
dist = np.sqrt((x - x[0])**2+(y-y[0])**2) | _____no_output_____ | CC0-1.0 | plan_pole_transect.ipynb | csherwood-usgs/DUNEX |
Use Stockdon equation to calculate runup for slope on upper beach and offshore waves | def calcR2(H,T,slope,igflag=0):
"""
%
% [R2,S,setup, Sinc, SIG, ir] = calcR2(H,T,slope,igflag);
%
% Calculated 2% runup (R2), swash (S), setup (setup), incident swash (Sinc)
% and infragravity swash (SIG) elevations based on parameterizations from runup paper
% also Iribarren (ir)
% August 2010 - Included 15% runup (R16) statistic that, for a Guassian distribution,
% represents mean+sigma. It is calculated as R16 = setup + swash/4.
% In a wave tank, Palmsten et al (2010) found this statistic represented initiation of dune erosion.
%
%
% H = significant wave height, reverse shoaled to deep water
% T = deep-water peak wave period
% slope = radians
% igflag = 0 (default)use full equation for all data
% = 1 use dissipative-specific calculations when dissipative conditions exist (Iribarren < 0.3)
% = 2 use dissipative-specific (IG energy) calculation for all data
%
% based on:
% Stockdon, H. F., R. A. Holman, P. A. Howd, and J. Sallenger A. H. (2006),
% Empirical parameterization of setup, swash, and runup,
% Coastal Engineering, 53, 573-588.
% author: hstockdon@usgs.gov
# Converted to Python by csherwood@usgs.gov
"""
g = 9.81
# make slopes positive!
slope = np.abs(slope)
# compute wavelength and Iribarren
L = (g*T**2) / (2.*np.pi)
sqHL = np.sqrt(H*L)
ir = slope/np.sqrt(H/L)
if igflag == 2: # use dissipative equations (IG) for ALL data
R2 = 1.1*(0.039 * sqHL)
S = 0.046*sqHL
setup = 0.016*sqHL
elif igflag == 1 and ir < 0.3: # if dissipative site use diss equations
R2 = 1.1*(0.039 * sqHL)
S = 0.046*sqHL
setup = 0.016*sqHL
else: # if int/ref site, use full equations
setup = 0.35*slope*sqHL
Sinc = 0.75*slope*sqHL
SIG = 0.06*sqHL
S = np.sqrt(Sinc**2 + SIG**2)
R2 = 1.1*(setup + S/2.)
R16 = 1.1*(setup + S/4.)
return R2, S, setup, Sinc, SIG, ir, R16
H = 2.
T = 17.
slp = .05
R2, S, setup, Sinc, SIG, ir, R16 = calcR2(H,T,slp,igflag=0)
mllw = -0.6 #NAVD88
high_water = 1.6 + mllw # high water estimates from Duck and Jenettes
maxHW = R2 + high_water
print('R2: {:.2f}, max HW: {:.2f}'.format(R2, maxHW)) | R2: 1.75, max HW: 2.75
| CC0-1.0 | plan_pole_transect.ipynb | csherwood-usgs/DUNEX |
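For reference, the intermediate/reflective branch implemented in `calcR2` above corresponds to the Stockdon et al. (2006) parameterization, with deep-water wavelength $L_0 = gT^2/2\pi$ and foreshore slope $\beta$ (radians):

$$\langle\eta\rangle = 0.35\,\beta\sqrt{H_0 L_0},\qquad S_{inc} = 0.75\,\beta\sqrt{H_0 L_0},\qquad S_{IG} = 0.06\sqrt{H_0 L_0}$$

$$S = \sqrt{S_{inc}^2 + S_{IG}^2},\qquad R_2 = 1.1\left(\langle\eta\rangle + \tfrac{S}{2}\right)$$

For dissipative conditions (Iribarren number below 0.3, depending on `igflag`), the code instead uses $R_2 = 1.1\,(0.039\sqrt{H_0 L_0})$.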
Plot profiles and pole locations
Apply an arbitrary vertical offset to each profile to collapse them; the range of these offsets suggests fairly big uncertainty in the elevation data. Define a function to plot a pole at ground level, with 2 m embedded and 3 m above ground, and make the plot with a vertical exaggeration of 2.1 bazillion.
`edist` - Horizontal retreat of the hypothetical eroded profile.
`pole_locations` - Locations of the poles along the transect...fiddle with this.
`polz` - Function to plot the poles at the specified locations, with 2 m buried below local ground elevation and 3 m proud. | # eyeball offsets to make plot easier to interpret (note this elevates May profile)
ioff1 = -.25
ioff2 = +.3
ioff3 = +.25
mhw = 0.77 # estimated from VDatum
edist = -5 # distance to offset eroded profile
#pole_locations = [96, 89, 82, 75, 68, 55, 42] # Chris's original
pole_locations = [104, 95, 84, 76, 68, 55, 42] # Katherine's idea to stretch the array seaward; less overlap
lidar_res_left = 6 # m, depends on orientation
lidar_res_right = 4 # m, depends on orientation
# function to plot pole at ground level, given a distance (pdist) along a profile (dist and z)
def polz(pdist,dist,z,x,y):
idx = (dist>=pdist).argmax()
plt.plot([dist[idx],dist[idx]],[z[idx]-2.,z[idx]+3],'-',c='gray',linewidth=3)
print('dist, z: {:.1f}, {:.1f} utmx, utmy: {:.1f}, {:.1f}'.format(dist[idx],z[idx],x[idx],y[idx]))
    plt.hlines(np.min(z), pdist-lidar_res_left, pdist+lidar_res_right, alpha=0.5)
plt.figure(figsize=(12,3))
plt.plot([dist[0],dist[-1]],[mhw,mhw],'--k',alpha=0.3,label='MHW')
plt.plot([dist[0],dist[-1]],[maxHW,maxHW],'--r',alpha=0.3,label='Max HW')
plt.plot(dist,z0,alpha=0.3,label='pre Dorian')
plt.plot(dist,z1+ioff1,alpha=0.3,label='Post Dorian')
plt.plot(dist,z2+ioff2,alpha=0.3,label='Sep 2020')
plt.plot(dist,z3+ioff3,'-k',linewidth=2,label='May 2021')
plt.plot(dist[500:]+edist,z3[500:]+ioff3,'--r',linewidth=2,label='Eroded')
for pz in pole_locations:
polz(pz,dist,z3+ioff3,x,y)
plt.grid()
plt.legend()
plt.ylabel('Elevation (m NAVD88)')
_ = plt.xlabel('Distance along transect (m)')
| dist, z: 104.0, 0.9 utmx, utmy: 456574.3, 3948281.3
dist, z: 95.0, 1.4 utmx, utmy: 456565.4, 3948279.8
dist, z: 84.0, 2.2 utmx, utmy: 456554.6, 3948278.0
dist, z: 76.0, 3.5 utmx, utmy: 456546.7, 3948276.7
dist, z: 68.0, 4.5 utmx, utmy: 456538.8, 3948275.4
dist, z: 55.0, 4.9 utmx, utmy: 456526.0, 3948273.2
dist, z: 42.0, 2.6 utmx, utmy: 456513.1, 3948271.1
| CC0-1.0 | plan_pole_transect.ipynb | csherwood-usgs/DUNEX |
**Comments from Katherine here:** How much overlap do we really need? Why is this important? Are there severe edge effects? It seems to me that we should either 1) try to cover as much of the profile as we can with the LiDARs since you're interested in runup (i.e., minimal to no overlap) or 2) cluster poles in areas where we expect high gradients in bed-level changes or impacts (i.e., where interpolations in bed-level change between sensors may be a bad assumption: around the "dune toe"(100 m?) and near the dune face). The whole profile looks steeper right now than pre-Dorian, so maybe we'll get more erosion/collision at the dune?I plotted the horizontal lidar resolution because I was having a hard time visualizing. | # plot beach slope
slope = np.diff(z3)/np.diff(dist)
plt.plot(dist,0.1*(z3+ioff3),'-k',linewidth=2,label='May 2021')
plt.plot(dist[1:],slope)
plt.ylim((-.5,.5))
# plot smoothed slope v. index
def running_mean(x, N):
return np.convolve(x, np.ones((N,))/N)[(N-1):]
N = int(2/.12478)
print(N)
sslope = running_mean(slope,N)
plt.plot(0.1*(z3+ioff3),'-k',linewidth=2,label='May 2021')
plt.plot(sslope)
print(np.median(sslope[690:700]))
print(np.std(sslope[690:700])) | -0.05149328538733515
0.0008375655872101676
| CC0-1.0 | plan_pole_transect.ipynb | csherwood-usgs/DUNEX |
We are trying to predict whether the classification is normal or abnormal. | dataset.info()
dataset.describe()
import seaborn as sns
sns.set_style("whitegrid");
sns.FacetGrid(dataset, hue="class", size=5.5) \
.map(plt.scatter, "pelvic_incidence", "pelvic_tilt numeric") \
.add_legend();
plt.show();
sns.set_style("whitegrid");
sns.pairplot(dataset, hue="class", size=3);
plt.show()
for name in dataset.columns.values[:-1]:
sns.FacetGrid(dataset, hue="class", size=5).map(sns.distplot, name).add_legend()
plt.show()
X = dataset.iloc[:, :-1]
display(X)
Y = dataset.iloc[:, -1]
display(Y)
# class labels to be encoded below: Abnormal -> 1, Normal -> 0
symptom_class = ['Abnormal:1', 'Normal:0']
dataset.head()
from sklearn import preprocessing
label_encoder=preprocessing.LabelEncoder()
dataset['symptom_class']=label_encoder.fit_transform(dataset['class'])
dataset.head()
dataset= dataset.drop('class', axis=1)
dataset.head()
from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, test_size=0.20,random_state = 1)
train_x = train.drop(['symptom_class'], axis = 1)
train_y = train['symptom_class']
test_x = test.drop(['symptom_class'],axis = 1)
test_y = test['symptom_class']
print('Dimension of train_x :',train_x.shape)
print('Dimension of train_y :',train_y.shape)
print('Dimension of test_x :',test_x.shape)
print('Dimension of test_y :',test_y.shape)
from sklearn.neighbors import KNeighborsClassifier
KNN = KNeighborsClassifier(n_neighbors=3)
KNN.fit(train_x, train_y)
pred = KNN.predict(test_x)
pred
from sklearn.metrics import accuracy_score
print('The accuracy of the KNN with K=3 is {}%'.format(round(accuracy_score(pred,test_y)*100,2)))
# refit with K=5 before reporting its accuracy
KNN5 = KNeighborsClassifier(n_neighbors=5).fit(train_x, train_y)
pred5 = KNN5.predict(test_x)
print('The accuracy of the KNN with K=5 is {}%'.format(round(accuracy_score(pred5,test_y)*100,2)))
train_accuracy =[]
test_accuracy = []
for k in range(1,15):
KNN = KNeighborsClassifier(n_neighbors=k)
KNN.fit(train_x, train_y)
train_pred = KNN.predict(train_x)
train_score = accuracy_score(train_pred, train_y)
train_accuracy.append(train_score)
test_pred = KNN.predict(test_x)
test_score = accuracy_score(test_pred, test_y)
test_accuracy.append(test_score)
print("Best accuracy is {} with K = {}".format(max(test_accuracy),1+test_accuracy.index(max(test_accuracy))))
plt.figure(figsize=[8,5]) #Accuracy Plot
plt.plot(range(1,15), test_accuracy, label = 'Testing Accuracy')
plt.plot(range(1,15), train_accuracy, label = 'Training Accuracy')
plt.legend()
plt.title('\nTrain Accuracy Vs Test Accuracy\n',fontsize=15)
plt.xlabel('Value of K',fontsize=15)
plt.ylabel('Accuracy',fontsize=15)
plt.xticks(range(1,15))
plt.grid()
plt.show()
from sklearn.model_selection import GridSearchCV
knn_params = {"n_neighbors": list(range(1,15,1)), 'metric': ['euclidean','manhattan']}
grid_knn = GridSearchCV(KNeighborsClassifier(), knn_params, cv=5)
grid_knn.fit(train_x, train_y)
knn_besthypr = grid_knn.best_estimator_ #KNN best estimator
knn_besthypr
print("Tuned hyperparameter: {}".format(grid_knn.best_params_))
print("Best score: {}".format(grid_knn.best_score_))
knn = knn_besthypr.fit(train_x,train_y) #Using best hyperparameter
y_pred = knn.predict(test_x)
acc = accuracy_score(y_pred,test_y)
print('The accuracy of the KNN with K = {} is {}%'.format(knn_besthypr.n_neighbors,round(acc*100,2)))
test = test.reset_index(drop = True) #actual value and predicted value
test["pred_value"] = y_pred
test
from sklearn.naive_bayes import GaussianNB
nvclassifier = GaussianNB()
nvclassifier.fit(train_x, train_y)
y_pred = nvclassifier.predict(test_x)
print(y_pred)
test = test.reset_index(drop = True)
test["pred_value"] = y_pred
test.head()
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(test_y, y_pred)
plt.figure(figsize=(6,5))
sns.heatmap(cm, annot=True)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
a = cm.shape
corrPred = 0
falsePred = 0
for row in range(a[0]):
for c in range(a[1]):
if row == c:
corrPred += cm[row,c]
else:
falsePred += cm[row,c]
print("*"*70)
print('Correct predictions: ', corrPred)
print('False predictions', falsePred)
print("*"*70)
acc = corrPred/cm.sum()
print ('Accuracy of the Naive Bayes Clasification is {}% '.format(round(acc*100,2)))
print("*"*70)
from sklearn.metrics import accuracy_score
print('The accuracy of the NB is {}%'.format(round(accuracy_score(y_pred,test_y)*100,2)))
nvclassifier.predict_proba(test_x)[:10] | _____no_output_____ | Unlicense | Project/KNN AND NB Project.ipynb | foday1989/FODAY-DS.Portfolio.io |
Analysis of how mentions of a stock on WSB relate to stock prices
WallStreetBets is a popular forum on reddit known for going to the moon, apes and stonks. Jokes aside, despite all of the ridiculous bad trades, undecipherable jargon and love for memes, its effect on the stock market is undeniable. Therefore, in this project we want to investigate how the reactions of reddit users on the forum relate to actual changes in the stock market. | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
import os
import tensorflow as tf
from datetime import datetime
warnings.filterwarnings('ignore')
from google.colab import drive
drive.mount('/content/drive') | Mounted at /content/drive
| MIT | wsb_sentiment.ipynb | kenzeng24/wsb-analysis |
Reddit Post Data
Source: https://huggingface.co/datasets/SocialGrep/reddit-wallstreetbets-aug-2021 | # TODO: add shortcut from shared drive for:
# wsb-aug-2021-comments.csv
def load_data(filename, path="/content/drive/MyDrive/"):
    # read the csv file and drop rows with missing values
df = pd.read_csv(os.path.join(path, filename))
df = df.dropna(axis=0)
# convert utc to datetime format
df["date"] = pd.to_datetime(df["created_utc"],unit="s").dt.date
return df
filename = "wsb-aug-2021-comments.csv"
df = load_data(filename) | _____no_output_____ | MIT | wsb_sentiment.ipynb | kenzeng24/wsb-analysis |
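Because the project aims to relate mentions of a specific stock to its price, a natural next step is counting daily mentions of a ticker. The sketch below assumes the comments CSV also contains a `body` text column (only `created_utc` and `sentiment` are used elsewhere in this notebook), and the ticker is just an example.

```python
# Hypothetical helper: daily mention counts for one ticker (assumes a 'body' column).
def daily_mentions(df, ticker="GME"):
    # case-insensitive substring match on the comment text
    mask = df["body"].str.contains(ticker, case=False, na=False)
    return df.loc[mask].groupby("date").size().rename(f"{ticker}_mentions")

# example usage:
# gme_mentions = daily_mentions(df, "GME")
# gme_mentions.plot(title="Daily GME mentions on r/wallstreetbets")
```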
Overall Sentiment on the Subreddit | def sentiment_bins(df):
# extract sentiment
sent_df = df[["date","sentiment"]]
bins = {}
bins["positive"] = sent_df.loc[sent_df["sentiment"] > 0.25,:]
bins["negative"] = sent_df.loc[sent_df["sentiment"] < -0.25,:]
bins["neutral"] = sent_df.loc[sent_df["sentiment"].between(-0.25,0.25),:]
# count the posts in each bin for each day
for name in bins:
bins[name] = bins[name].groupby(['date']).count()
counts = sent_df.groupby(['date']).count()
return bins, counts
def plot_sentiment(df,normalize=True, title=None):
# collect sentiment into three bins
bins, counts = sentiment_bins(df)
# plot counts of each bin every day
colours = ["lightgreen", "coral", "grey"]
for i, name in enumerate(["positive", "negative", "neutral"]):
dates = bins[name].index
total_counts = counts.loc[dates,:].values.reshape(-1)
bin_counts = bins[name]["sentiment"].values
if not normalize:
total_counts = 1
plt.plot(dates, bin_counts / total_counts,
alpha=0.7, c=colours[i])
plt.legend(["positive", "negative", "neutral"])
if title:
plt.title(title)
plt.xticks(rotation=20)
plt.show()
plot_sentiment(df,normalize=False, title="overall-unnormalized")
plot_sentiment(df,normalize=True, title="overall-normalized")
# distribution of sentiment of the posts
plt.hist(df["sentiment"], color="coral",
alpha=0.5)
plt.show() | _____no_output_____ | MIT | wsb_sentiment.ipynb | kenzeng24/wsb-analysis |
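To connect the sentiment counts back to the market, the daily bins can be joined with a daily closing-price series for a ticker of interest. The sketch below is not part of the original analysis and assumes prices are available as a CSV with `Date` and `Close` columns.

```python
# Hypothetical: join daily positive/negative comment counts with closing prices.
def join_with_prices(df, price_csv):
    bins, counts = sentiment_bins(df)
    daily = pd.DataFrame({
        "positive": bins["positive"]["sentiment"],
        "negative": bins["negative"]["sentiment"],
        "total": counts["sentiment"],
    })
    prices = pd.read_csv(price_csv, parse_dates=["Date"])
    prices["date"] = prices["Date"].dt.date
    # inner join on calendar date
    return daily.merge(prices[["date", "Close"]],
                       left_index=True, right_on="date", how="inner")

# example usage:
# merged = join_with_prices(df, "gme_prices.csv")
# print(merged[["positive", "negative", "Close"]].corr())
```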