**Expected Output**:

**v["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]]
**v["db1"]** [[ 0.] [ 0.]]
**v["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]
**v["db2"]** [[ 0.] [ 0.] [ 0.]]
**s["dW1"]** [[ 0. 0. 0.] [ 0. 0. 0.]]
**s["db1"]** [[ 0.] [ 0.]]
**s["dW2"]** [[ 0. 0. 0.] [ 0. 0. 0.] [ 0. 0. 0.]]
**s["db2"]** [[ 0.] [ 0.] [ 0.]]

**Exercise**: Now implement the parameter update with Adam. Recall that the general update rule is, for $l = 1, ..., L$:

$$\begin{cases}v_{W^{[l]}} = \beta_1 v_{W^{[l]}} + (1 - \beta_1) \frac{\partial J }{ \partial W^{[l]} } \\v^{corrected}_{W^{[l]}} = \frac{v_{W^{[l]}}}{1 - (\beta_1)^t} \\s_{W^{[l]}} = \beta_2 s_{W^{[l]}} + (1 - \beta_2) (\frac{\partial J }{\partial W^{[l]} })^2 \\s^{corrected}_{W^{[l]}} = \frac{s_{W^{[l]}}}{1 - (\beta_2)^t} \\W^{[l]} = W^{[l]} - \alpha \frac{v^{corrected}_{W^{[l]}}}{\sqrt{s^{corrected}_{W^{[l]}}}+\varepsilon}\end{cases}$$

The same updates, with $\frac{\partial J}{\partial b^{[l]}}$, apply to $b^{[l]}$.

**Note** that the iterator `l` starts at 0 in the `for` loop while the first parameters are $W^{[1]}$ and $b^{[1]}$. You need to shift `l` to `l+1` when coding.

# GRADED FUNCTION: update_parameters_with_adam
def update_parameters_with_adam(parameters, grads, v, s, t, learning_rate = 0.01,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8):
"""
Update parameters using Adam
Arguments:
parameters -- python dictionary containing your parameters:
parameters['W' + str(l)] = Wl
parameters['b' + str(l)] = bl
grads -- python dictionary containing your gradients for each parameter:
grads['dW' + str(l)] = dWl
grads['db' + str(l)] = dbl
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
t -- Adam update counter (number of update steps taken so far), scalar
learning_rate -- the learning rate, scalar.
beta1 -- Exponential decay hyperparameter for the first moment estimates
beta2 -- Exponential decay hyperparameter for the second moment estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
Returns:
parameters -- python dictionary containing your updated parameters
v -- Adam variable, moving average of the first gradient, python dictionary
s -- Adam variable, moving average of the squared gradient, python dictionary
"""
L = len(parameters) // 2 # number of layers in the neural networks
v_corrected = {} # Initializing first moment estimate, python dictionary
s_corrected = {} # Initializing second moment estimate, python dictionary
# Perform Adam update on all parameters
for l in range(L):
# Moving average of the gradients. Inputs: "v, grads, beta1". Output: "v".
### START CODE HERE ### (approx. 2 lines)
v["dW" + str(l+1)] = beta1 * v["dW" + str(l+1)] + (1 - beta1) * grads["dW" + str(l+1)]
v["db" + str(l+1)] = beta1 * v["db" + str(l+1)] + (1 - beta1) * grads["db" + str(l+1)]
### END CODE HERE ###
# Compute bias-corrected first moment estimate. Inputs: "v, beta1, t". Output: "v_corrected".
### START CODE HERE ### (approx. 2 lines)
v_corrected["dW" + str(l+1)] = v["dW" + str(l+1)] / (1 - beta1 ** t)
v_corrected["db" + str(l+1)] = v["db" + str(l+1)] / (1 - beta1 ** t)
### END CODE HERE ###
# Moving average of the squared gradients. Inputs: "s, grads, beta2". Output: "s".
### START CODE HERE ### (approx. 2 lines)
s["dW" + str(l+1)] = beta2 * s["dW" + str(l+1)] + (1 - beta2) * np.square(grads["dW" + str(l+1)])
s["db" + str(l+1)] = beta2 * s["db" + str(l+1)] + (1 - beta2) * np.square(grads["db" + str(l+1)])
### END CODE HERE ###
# Compute bias-corrected second raw moment estimate. Inputs: "s, beta2, t". Output: "s_corrected".
### START CODE HERE ### (approx. 2 lines)
s_corrected["dW" + str(l+1)] = s["dW" + str(l+1)] / (1 - beta2 ** t)
s_corrected["db" + str(l+1)] = s["db" + str(l+1)] / (1 - beta2 ** t)
### END CODE HERE ###
# Update parameters. Inputs: "parameters, learning_rate, v_corrected, s_corrected, epsilon". Output: "parameters".
### START CODE HERE ### (approx. 2 lines)
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - ( learning_rate * v_corrected["dW" + str(l+1)] / ( np.sqrt(s_corrected["dW" + str(l+1)]) + epsilon ) )
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - ( learning_rate * v_corrected["db" + str(l+1)] / ( np.sqrt(s_corrected["db" + str(l+1)]) + epsilon ) )
### END CODE HERE ###
return parameters, v, s
parameters, grads, v, s = update_parameters_with_adam_test_case()
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s, t = 2)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
print("v[\"dW1\"] = " + str(v["dW1"]))
print("v[\"db1\"] = " + str(v["db1"]))
print("v[\"dW2\"] = " + str(v["dW2"]))
print("v[\"db2\"] = " + str(v["db2"]))
print("s[\"dW1\"] = " + str(s["dW1"]))
print("s[\"db1\"] = " + str(s["db1"]))
print("s[\"dW2\"] = " + str(s["dW2"]))
print("s[\"db2\"] = " + str(s["db2"]))

W1 = [[ 1.63178673 -0.61919778 -0.53561312]
[-1.08040999 0.85796626 -2.29409733]]
b1 = [[ 1.75225313]
[-0.75376553]]
W2 = [[ 0.32648046 -0.25681174 1.46954931]
[-2.05269934 -0.31497584 -0.37661299]
[ 1.14121081 -1.09244991 -0.16498684]]
b2 = [[-0.88529979]
[ 0.03477238]
[ 0.57537385]]
v["dW1"] = [[-0.11006192 0.11447237 0.09015907]
[ 0.05024943 0.09008559 -0.06837279]]
v["db1"] = [[-0.01228902]
[-0.09357694]]
v["dW2"] = [[-0.02678881 0.05303555 -0.06916608]
[-0.03967535 -0.06871727 -0.08452056]
[-0.06712461 -0.00126646 -0.11173103]]
v["db2"] = [[ 0.02344157]
[ 0.16598022]
[ 0.07420442]]
s["dW1"] = [[ 0.00121136 0.00131039 0.00081287]
[ 0.0002525 0.00081154 0.00046748]]
s["db1"] = [[ 1.51020075e-05]
[ 8.75664434e-04]]
s["dW2"] = [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04]
[ 1.57413361e-04 4.72206320e-04 7.14372576e-04]
[ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
s["db2"] = [[ 5.49507194e-05]
[ 2.75494327e-03]
[ 5.50629536e-04]]
| Apache-2.0 | Week2/Optimization+methods.ipynb | softwarebrahma/Deep-Learning-Specialization-Improving-Deep-Neural-Network-Hyperparam-TuneRegularizationOptimization |
**Expected Output**:

**W1** [[ 1.63178673 -0.61919778 -0.53561312] [-1.08040999 0.85796626 -2.29409733]]
**b1** [[ 1.75225313] [-0.75376553]]
**W2** [[ 0.32648046 -0.25681174 1.46954931] [-2.05269934 -0.31497584 -0.37661299] [ 1.14121081 -1.09245036 -0.16498684]]
**b2** [[-0.88529978] [ 0.03477238] [ 0.57537385]]
**v["dW1"]** [[-0.11006192 0.11447237 0.09015907] [ 0.05024943 0.09008559 -0.06837279]]
**v["db1"]** [[-0.01228902] [-0.09357694]]
**v["dW2"]** [[-0.02678881 0.05303555 -0.06916608] [-0.03967535 -0.06871727 -0.08452056] [-0.06712461 -0.00126646 -0.11173103]]
**v["db2"]** [[ 0.02344157] [ 0.16598022] [ 0.07420442]]
**s["dW1"]** [[ 0.00121136 0.00131039 0.00081287] [ 0.0002525 0.00081154 0.00046748]]
**s["db1"]** [[ 1.51020075e-05] [ 8.75664434e-04]]
**s["dW2"]** [[ 7.17640232e-05 2.81276921e-04 4.78394595e-04] [ 1.57413361e-04 4.72206320e-04 7.14372576e-04] [ 4.50571368e-04 1.60392066e-07 1.24838242e-03]]
**s["db2"]** [[ 5.49507194e-05] [ 2.75494327e-03] [ 5.50629536e-04]]

You now have three working optimization algorithms (mini-batch gradient descent, Momentum, Adam). Let's implement a model with each of these optimizers and observe the difference.

## 5 - Model with different optimization algorithms

Let's use the following "moons" dataset to test the different optimization methods. (The dataset is named "moons" because the data from each of the two classes looks a bit like a crescent-shaped moon.)

train_X, train_Y = load_dataset()
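Before wiring these optimizers into a full model, the Adam bias correction can be sanity-checked on a tiny standalone example (made-up scalar values in plain NumPy, not part of the graded code). With a constant gradient $g$, the uncorrected moments are exactly $(1-\beta_1^t)g$ and $(1-\beta_2^t)g^2$, so the corrected moments recover $g$ and $g^2$ and every step has size close to the learning rate:

```python
import numpy as np

# Toy scalar Adam: one parameter w, constant gradient g (made-up values).
beta1, beta2, eps, lr = 0.9, 0.999, 1e-8, 0.01
w, g = 1.0, 0.5
v = s = 0.0
for t in range(1, 4):                       # three Adam steps; t starts at 1
    v = beta1 * v + (1 - beta1) * g         # first moment
    s = beta2 * s + (1 - beta2) * g ** 2    # second moment
    v_hat = v / (1 - beta1 ** t)            # bias correction
    s_hat = s / (1 - beta2 ** t)
    w -= lr * v_hat / (np.sqrt(s_hat) + eps)

print(round(1.0 - w, 4))  # three steps of size ~lr -> 0.03
```

Inside `update_parameters_with_adam` the same arithmetic is simply applied elementwise to each `dW`/`db` array.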
We have already implemented a 3-layer neural network. You will train it with:

- Mini-batch **Gradient Descent**: it will call your function:
    - `update_parameters_with_gd()`
- Mini-batch **Momentum**: it will call your functions:
    - `initialize_velocity()` and `update_parameters_with_momentum()`
- Mini-batch **Adam**: it will call your functions:
    - `initialize_adam()` and `update_parameters_with_adam()`

def model(X, Y, layers_dims, optimizer, learning_rate = 0.0007, mini_batch_size = 64, beta = 0.9,
beta1 = 0.9, beta2 = 0.999, epsilon = 1e-8, num_epochs = 10000, print_cost = True):
"""
3-layer neural network model which can be run in different optimizer modes.
Arguments:
X -- input data, of shape (2, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (1, number of examples)
layers_dims -- python list, containing the size of each layer
learning_rate -- the learning rate, scalar.
mini_batch_size -- the size of a mini batch
beta -- Momentum hyperparameter
beta1 -- Exponential decay hyperparameter for the past gradients estimates
beta2 -- Exponential decay hyperparameter for the past squared gradients estimates
epsilon -- hyperparameter preventing division by zero in Adam updates
num_epochs -- number of epochs
print_cost -- True to print the cost every 1000 epochs
Returns:
parameters -- python dictionary containing your updated parameters
"""
L = len(layers_dims) # number of layers in the neural networks
costs = [] # to keep track of the cost
t = 0 # initializing the counter required for Adam update
seed = 10 # For grading purposes, so that your "random" minibatches are the same as ours
# Initialize parameters
parameters = initialize_parameters(layers_dims)
# Initialize the optimizer
if optimizer == "gd":
pass # no initialization required for gradient descent
elif optimizer == "momentum":
v = initialize_velocity(parameters)
elif optimizer == "adam":
v, s = initialize_adam(parameters)
# Optimization loop
for i in range(num_epochs):
# Define the random minibatches. We increment the seed to reshuffle the dataset differently after each epoch
seed = seed + 1
minibatches = random_mini_batches(X, Y, mini_batch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# Forward propagation
a3, caches = forward_propagation(minibatch_X, parameters)
# Compute cost
cost = compute_cost(a3, minibatch_Y)
# Backward propagation
grads = backward_propagation(minibatch_X, minibatch_Y, caches)
# Update parameters
if optimizer == "gd":
parameters = update_parameters_with_gd(parameters, grads, learning_rate)
elif optimizer == "momentum":
parameters, v = update_parameters_with_momentum(parameters, grads, v, beta, learning_rate)
elif optimizer == "adam":
t = t + 1 # Adam counter
parameters, v, s = update_parameters_with_adam(parameters, grads, v, s,
t, learning_rate, beta1, beta2, epsilon)
# Print the cost every 1000 epochs
if print_cost and i % 1000 == 0:
print ("Cost after epoch %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('epochs (per 100)')
plt.title("Learning rate = " + str(learning_rate))
plt.show()
return parameters
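The model above delegates shuffling and batching to `random_mini_batches` (implemented earlier in the course notebook). A minimal standalone sketch of the same shuffle-and-partition idea, with made-up data (the real helper's details may differ):

```python
import numpy as np

def random_mini_batches_sketch(X, Y, mini_batch_size=64, seed=0):
    """Shuffle (X, Y) column-wise, then partition into mini-batches.
    Simplified stand-in for the notebook's random_mini_batches helper."""
    np.random.seed(seed)
    m = X.shape[1]                            # number of examples (columns)
    perm = list(np.random.permutation(m))     # one shared shuffle for X and Y
    X_s, Y_s = X[:, perm], Y[:, perm]
    return [(X_s[:, k:k + mini_batch_size], Y_s[:, k:k + mini_batch_size])
            for k in range(0, m, mini_batch_size)]  # last batch may be smaller

X = np.arange(20).reshape(2, 10)   # 10 toy examples with 2 features each
Y = np.arange(10).reshape(1, 10)
batches = random_mini_batches_sketch(X, Y, mini_batch_size=4)
print([b[0].shape[1] for b in batches])   # -> [4, 4, 2]
```

The `seed = seed + 1` line in `model()` exists precisely so that each epoch gets a different permutation while runs stay reproducible.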
You will now run this 3-layer neural network with each of the 3 optimization methods.

### 5.1 - Mini-batch Gradient Descent

Run the following code to see how the model does with mini-batch gradient descent.

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "gd")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Gradient Descent optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

Cost after epoch 0: 0.690736
Cost after epoch 1000: 0.685273
Cost after epoch 2000: 0.647072
Cost after epoch 3000: 0.619525
Cost after epoch 4000: 0.576584
Cost after epoch 5000: 0.607243
Cost after epoch 6000: 0.529403
Cost after epoch 7000: 0.460768
Cost after epoch 8000: 0.465586
Cost after epoch 9000: 0.464518
### 5.2 - Mini-batch gradient descent with momentum

Run the following code to see how the model does with momentum. Because this example is relatively simple, the gains from using momentum are small; but for more complex problems you might see bigger gains.

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, beta = 0.9, optimizer = "momentum")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Momentum optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

Cost after epoch 0: 0.690741
Cost after epoch 1000: 0.685341
Cost after epoch 2000: 0.647145
Cost after epoch 3000: 0.619594
Cost after epoch 4000: 0.576665
Cost after epoch 5000: 0.607324
Cost after epoch 6000: 0.529476
Cost after epoch 7000: 0.460936
Cost after epoch 8000: 0.465780
Cost after epoch 9000: 0.464740
### 5.3 - Mini-batch with Adam mode

Run the following code to see how the model does with Adam.

# train 3-layer model
layers_dims = [train_X.shape[0], 5, 2, 1]
parameters = model(train_X, train_Y, layers_dims, optimizer = "adam")
# Predict
predictions = predict(train_X, train_Y, parameters)
# Plot decision boundary
plt.title("Model with Adam optimization")
axes = plt.gca()
axes.set_xlim([-1.5,2.5])
axes.set_ylim([-1,1.5])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)

Cost after epoch 0: 0.690552
Cost after epoch 1000: 0.185567
Cost after epoch 2000: 0.150852
Cost after epoch 3000: 0.074454
Cost after epoch 4000: 0.125936
Cost after epoch 5000: 0.104235
Cost after epoch 6000: 0.100552
Cost after epoch 7000: 0.031601
Cost after epoch 8000: 0.111709
Cost after epoch 9000: 0.197648
# YOLOv3 - Refactoring the pipeline into functions to make the implementation easier

import os.path
import cv2
import numpy as np
import requests
yolo_config = 'yolov3.cfg'
if not os.path.isfile(yolo_config):
url = 'https://raw.githubusercontent.com/pjreddie/darknet/master/cfg/yolov3.cfg'
r = requests.get(url)
with open(yolo_config, 'wb') as f:
f.write(r.content)
# Download YOLO net weights
# We'll download it from the YOLO author's website
yolo_weights = 'yolov3.weights'
if not os.path.isfile(yolo_weights):
url = 'https://pjreddie.com/media/files/yolov3.weights'
r = requests.get(url)
with open(yolo_weights, 'wb') as f:
f.write(r.content)
net = cv2.dnn.readNet(yolo_weights, yolo_config)
classes_file = 'coco.names'
if not os.path.isfile(classes_file):
url = 'https://raw.githubusercontent.com/pjreddie/darknet/master/data/coco.names'
r = requests.get(url)
with open(classes_file, 'wb') as f:
f.write(r.content)
# load class names
with open(classes_file, 'r') as f:
classes = [line.strip() for line in f.readlines()]
image_file = 'C:/Users/Billi/repos/Computer_Vision/OpenCV/bdd100k/seg/images/train/00d79c0a-23bea078.jpg'
image = cv2.imread(image_file)
cv2.imshow('img', image)
cv2.waitKey(0)
def get_image(image):
blob = cv2.dnn.blobFromImage(image, 1 / 255, (416, 416), (0, 0, 0), True, crop=False)
return blob
def get_prediction(blob):
# set as input to the net
net.setInput(blob)
# get network output layers
layer_names = net.getLayerNames()
# note: newer OpenCV releases return a flat array from getUnconnectedOutLayers(),
# in which case layer_names[i - 1] is needed instead of layer_names[i[0] - 1]
output_layers = [layer_names[i[0] - 1] for i in net.getUnconnectedOutLayers()]
# inference
# the network outputs multiple lists of anchor boxes
# one for each detected class
outs = net.forward(output_layers)
return outs | _____no_output_____ | MIT | Research Papers/YOLOv3_scripts.ipynb | Hira63S/DeepLearningResearch |
After we get the network outputs, we have to post-process them and apply non-maximum suppression to produce the final set of detected objects.

def get_boxes(outs):
class_ids = []
confidences = []
boxes = []
for out in outs:
# iterate over the anchor boxes for each class
for detection in out:
center_x = int(detection[0] * image.shape[1])
center_y = int(detection[1] * image.shape[0])
w, h = int(detection[2] * image.shape[1]), int(detection[3] * image.shape[0])
x, y = center_x - w // 2, center_y - h // 2
boxes.append([x, y, w, h])
# confidence
confidences.append(float(detection[4]))
# class
class_ids.append(np.argmax(detection[5:]))
return boxes, confidences, class_ids
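The center/size-to-corner conversion inside `get_boxes` is easy to check in isolation, with a made-up image size and detection vector:

```python
# Hypothetical detection centered in the image, spanning half of each dimension.
img_h, img_w = 416, 832
detection = [0.5, 0.5, 0.5, 0.5]             # (cx, cy, w, h), normalized to [0, 1]
center_x = int(detection[0] * img_w)         # 416
center_y = int(detection[1] * img_h)         # 208
w, h = int(detection[2] * img_w), int(detection[3] * img_h)  # 416, 208
x, y = center_x - w // 2, center_y - h // 2  # top-left corner
print([x, y, w, h])                          # -> [208, 104, 416, 208]
```

This `[x, y, w, h]` corner-plus-size format is exactly what `cv2.dnn.NMSBoxes` expects next.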
def get_ids(boxes, confidences):
ids = cv2.dnn.NMSBoxes(boxes, confidences, score_threshold=0.75, nms_threshold=0.5)
return ids
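`cv2.dnn.NMSBoxes` does the suppression for us. For intuition, here is a minimal pure-Python version of the same greedy idea (toy boxes and assumed thresholds, not OpenCV's exact algorithm):

```python
import numpy as np

def iou(a, b):
    # intersection-over-union for [x, y, w, h] boxes
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def nms_sketch(boxes, scores, score_threshold=0.75, nms_threshold=0.5):
    keep = []
    for i in np.argsort(scores)[::-1]:        # highest confidence first
        if scores[i] < score_threshold:
            continue
        if all(iou(boxes[i], boxes[j]) <= nms_threshold for j in keep):
            keep.append(int(i))               # survives: overlaps no kept box
    return keep

toy_boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 10, 10]]
toy_scores = [0.9, 0.8, 0.85]
print(nms_sketch(toy_boxes, toy_scores, score_threshold=0.5))  # -> [0, 2]
```

The two heavily overlapping boxes collapse into the single highest-scoring one, while the distant box survives.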
def colors(image):
# note: this helper relies on the globals ids, boxes, confidences and class_ids computed above
colors = np.random.uniform(0, 255, size=(len(classes), 3))
# iterate over all boxes
for i in ids:
i = i[0]
x, y, w, h = boxes[i]
class_id = class_ids[i]
color = colors[class_id]
cv2.rectangle(img=image,
pt1=(round(x), round(y)),
pt2=(round(x + w), round(y + h)),
color=color,
thickness=3)
cv2.putText(img=image,
text=f"{classes[class_id]}: {confidences[i]:.2f}",
org=(x - 10, y - 10),
fontFace=cv2.FONT_HERSHEY_SIMPLEX,
fontScale=0.8,
color=color,
thickness=2)
return image
image = cv2.imread(image_file)
blob = get_image(image)
outs = get_prediction(blob)
boxes, confidences, class_ids = get_boxes(outs)
ids = get_ids(boxes, confidences)
final = colors(image)
import matplotlib.pyplot as plt
plt.imshow(image)
plt.imshow(final)
cv2.imshow('img', image)
cv2.waitKey(0)
image = cv2.imread(image_file)
blob = get_image(image)
outs = get_prediction(blob)
boxes, confidences, class_ids = get_boxes(outs)
ids = get_ids(boxes, confidences)
final = colors(image)
import matplotlib.pyplot as plt
plt.imshow(final)
# 2 Protein Visualization

For the purposes of this tutorial, we will use the HIV-1 protease structure (PDB ID: 1HSG). It is a homodimer with two chains of 99 residues each. Before starting to perform any simulations and data analysis, we need to observe and familiarize ourselves with the protein of interest.

There are various software packages for visualizing molecular systems, but here we will guide you through using two of those, NGLView and VMD:

* [NGLView](http://nglviewer.org/nglview): An IPython/Jupyter widget to interactively view molecular structures and trajectories.
* [VMD](https://www.ks.uiuc.edu/Research/vmd/): VMD is a molecular visualization program for displaying, animating, and analyzing large biomolecular systems using 3-D graphics and built-in scripting.

You could either take your time to familiarize yourself with both, or select which one you prefer to delve into. NGLView is great for looking at things directly within a Jupyter notebook, but VMD can be a more powerful tool for visualizing, generating high-quality images and videos, and also analysing simulation trajectories.

## 2.0 Obtain the protein structure

The first step is to obtain the crystal structure of the HIV-1 protease. Start your web browser and go to the [protein data bank](https://www.rcsb.org/). Enter the PDB code 1HSG in the site search box at the top and hit the site search button. The protein should come up. Select download from the top right-hand menu and save the .pdb file to the current working directory.

## 2.1 VMD (optional)

You can now open the pdb structure with VMD (the following file name might be uppercase depending on how you downloaded it):

`% vmd 1hsg.pdb`

You should experiment with the menu system and try various representations of the protein such as `Trace`, `NewCartoon` and `Ribbons` for example. Go to `Graphics` and then `Graphical Representations` and from the `Drawing Method` drop-down list, select `Trace`. Similarly, you can explore other drawing methods.

 
**Questions**

* Can you find the indinavir drug? *Hint: At the `Graphical Representations` menu, click `Create Rep` and type "all and not protein" and hit Enter. Change the `Drawing Method` to `Licorice`.*
* Give the protein the Trace representation and then make the polar residues in vdw format as an additional representation. Repeat with the hydrophobic residues. What do you notice? *Hint: Explore the `Selections` tab and the options provided as singlewords.* *Hint: To hide a representation, double-click on it. Double-click again if you want to make it reappear.*

Take your time to explore the features of VMD and to observe the protein. Once you are happy, you can exit VMD, either by clicking on `File` and then `Quit` or by typing `quit` in the terminal box.

## 2.2 NGLView

You have already been introduced to NGLView during the Python tutorial. You can now spend more time to navigate through its features.

# Import NGLView
import nglview
# Select as your protein the 1HSG pdb entry
protein_view = nglview.show_pdbid('1hsg')
protein_view.gui_style = 'ngl'
#Uncomment the command below to add a hyperball representation of the crystal water oxygens in grey
#protein_view.add_hyperball('HOH', color='grey', opacity=1.0)
#Uncomment the command below to color the protein according to its secondary structure with opacity 0.6
#protein_view.update_cartoon(color='sstruc', opacity=0.6)
# Let's change the display a little bit
protein_view.parameters = dict(camera_type='orthographic', clip_dist=0)
# Set the background colour to black
protein_view.background = 'black'
# Call protein_view to visualise the trajectory
protein_view | _____no_output_____ | BSD-3-Clause | tutorials/MD/02_Protein_Visualization.ipynb | bigginlab/OxCompBio |
# Image features exercise

*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*

We have seen that we can achieve reasonable performance on an image classification task by training a linear classifier on the pixels of the input image. In this exercise we will show that we can improve our classification performance by training linear classifiers not on raw pixels but on features that are computed from the raw pixels.

All of your work for this exercise will be done in this notebook.

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2 | _____no_output_____ | MIT | assignments2016/assignment1/features.ipynb | janlukasschroeder/Stanford-cs231n |
## Load data

Similar to previous exercises, we will load CIFAR-10 data from disk.

from cs231n.features import color_histogram_hsv, hog_feature
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
return X_train, y_train, X_val, y_val, X_test, y_test
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data() | _____no_output_____ | MIT | assignments2016/assignment1/features.ipynb | janlukasschroeder/Stanford-cs231n |
## Extract Features

For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors.

Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section.

The `hog_feature` and `color_histogram_hsv` functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.

from cs231n.features import *
num_color_bins = 10 # Number of bins in the color histogram
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]
X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)
# Preprocessing: Subtract the mean feature
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
X_train_feats -= mean_feat
X_val_feats -= mean_feat
X_test_feats -= mean_feat
# Preprocessing: Divide by standard deviation. This ensures that each feature
# has roughly the same scale.
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
X_train_feats /= std_feat
X_val_feats /= std_feat
X_test_feats /= std_feat
# Preprocessing: Add a bias dimension
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
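The standardization above can be illustrated on a tiny made-up feature matrix (3 "images" by 2 "features", plain NumPy):

```python
import numpy as np

feats = np.array([[1.0, 10.0],
                  [2.0, 20.0],
                  [3.0, 30.0]])                 # made-up features, one row per image
feats -= np.mean(feats, axis=0, keepdims=True)  # zero-center each feature
feats /= np.std(feats, axis=0, keepdims=True)   # unit variance per feature
feats = np.hstack([feats, np.ones((feats.shape[0], 1))])  # append bias dimension
print(feats.shape)                              # -> (3, 3)
```

Without this per-feature scaling, the HOG and color-histogram dimensions would live on very different numeric scales and contribute unevenly to the SVM loss.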
## Train SVM on features

Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.

# Use the validation set to tune the learning rate and regularization strength
from cs231n.classifiers.linear_classifier import LinearSVM
learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]
results = {}
best_val = -1
best_svm = None
pass
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained classifer in best_svm. You might also want to play #
# with different numbers of bins in the color histogram. If you are careful #
# you should be able to get accuracy of near 0.44 on the validation set. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print 'lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy)
print 'best validation accuracy achieved during cross-validation: %f' % best_val
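One possible shape for the TODO tuning loop above, shown here with a dummy accuracy function standing in for actually training a `LinearSVM` so the sketch runs on its own (the real loop would call `svm.train(...)` and score predictions on `X_val_feats`):

```python
import numpy as np

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [1e5, 1e6, 1e7]

def dummy_val_accuracy(lr, reg):
    # hypothetical stand-in for: train a LinearSVM with (lr, reg),
    # then return np.mean(y_val == svm.predict(X_val_feats))
    return 1.0 / (1.0 + abs(np.log10(lr * reg)))

results, best_val, best_params = {}, -1.0, None
for lr in learning_rates:
    for reg in regularization_strengths:
        acc = dummy_val_accuracy(lr, reg)
        results[(lr, reg)] = acc
        if acc > best_val:
            best_val, best_params = acc, (lr, reg)

print(best_params)
```

In the real exercise the best trained classifier (not just its hyperparameters) would be kept in `best_svm`.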
# Evaluate your trained SVM on the test set
y_test_pred = best_svm.predict(X_test_feats)
test_accuracy = np.mean(y_test == y_test_pred)
print test_accuracy
# An important way to gain intuition about how an algorithm works is to
# visualize the mistakes that it makes. In this visualization, we show examples
# of images that are misclassified by our current system. The first column
# shows images that our system labeled as "plane" but whose true label is
# something other than "plane".
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
idxs = np.random.choice(idxs, examples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
plt.imshow(X_test[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls_name)
plt.show()
### Inline question 1:

Describe the misclassification results that you see. Do they make sense?

## Neural Network on image features

Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.

print X_train_feats.shape
from cs231n.classifiers.neural_net import TwoLayerNet
input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10
net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None
################################################################################
# TODO: Train a two-layer neural network on image features. You may want to #
# cross-validate various parameters as in previous sections. Store your best #
# model in the best_net variable. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Run your neural net classifier on the test set. You should be able to
# get more than 55% accuracy.
test_acc = (net.predict(X_test_feats) == y_test).mean()
print test_acc
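The TODO above asks you to cross-validate the course's `TwoLayerNet`. As a standalone illustration of the underlying forward/loss/backward/update loop, here is a toy two-layer network in plain NumPy (made-up data and hyperparameters, not the course API):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 5)
y = (X @ rng.randn(5, 1) > 0).astype(float)  # toy separable labels

W1, b1 = 0.1 * rng.randn(5, 8), np.zeros(8)  # hidden layer of 8 ReLU units
W2, b2 = 0.1 * rng.randn(8, 1), np.zeros(1)
lr, losses = 0.5, []
for _ in range(300):
    h = np.maximum(0, X @ W1 + b1)               # forward: ReLU hidden layer
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    losses.append(float(np.mean((p - y) ** 2)))  # squared-error loss
    dp = 2 * (p - y) / len(X) * p * (1 - p)      # backward through loss + sigmoid
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = dp @ W2.T
    dh[h <= 0] = 0                               # backward through ReLU
    dW1, db1 = X.T @ dh, dh.sum(0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

print(losses[0], losses[-1])  # the loss should shrink over training
```

The cross-validation pattern from the SVM section applies unchanged here, with hidden size, learning rate, and regularization as the knobs to search over.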
Nuclear Fuel Cycle OverviewThe nuclear fuel cycle is the technical and economic system traversed by nuclear fuel during the generation of nuclear power. Learning ObjectivesBy the end of this lesson, you should be able to:- Categorize types of fission reactors by their fuels and coolants.- Summarize the history and key characteristics of reactor technology generations.- Weigh and compare advanced nuclear reactor types.- Name fuel cycle facilities and technologies that contribute to open and closed cycles.- Identify categories of nuclear fuel cycle strategies (open, closed, etc.)- Associate categories of fuel cycle with nations that implement them (USA, France, etc.)- Order the stages of such fuel cycle from mining to disposal, including reprocessing.- Identify the chemical and physical states of nuclear material passed between stages. Fission Reactor TypesLet's see what you know already.[pollev.com/katyhuff](pollev.com/katyhuff) | from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/YWUBNMDynR0yeiu?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/rhvKnG3a6nKaNdU?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/zdDog6JmDGOQ1hJ?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/YE5bPL6KecA5M3A?controls=none&short_poll=true", width="1000", height="700", frameBorder="0")
from IPython.display import IFrame
IFrame("https://embed.polleverywhere.com/free_text_polls/BLojIJiKtPULpmw?controls=none&short_poll=true", width="1000", height="700", frameBorder="0") | _____no_output_____ | CC-BY-4.0 | introduction/02-nfc-overview.ipynb | abachma2/npre412 |
A really good summary, with images, can be found [here](https://www.theiet.org/media/1275/nuclear-reactors.pdf). |
from IPython.display import IFrame
IFrame("https://www.theiet.org/media/1275/nuclear-reactors.pdf", width=1000, height=700) | _____no_output_____ | CC-BY-4.0 | introduction/02-nfc-overview.ipynb | abachma2/npre412 |
What about fusion?Fusion devices can use Tritium, Deuterium, Protium, $^3He$, $^4He$. Fuel Cycle Strategies Once throughAlso known as an open fuel cycle, this is the fuel cycle currently underway in the United States. There is no reprocessing or recycling of any kind and all high level spent nuclear fuel is eventually destined for a geologic repository. | try:
import graphviz
except ImportError:
!y | conda install graphviz
!pip install graphviz
from graphviz import Digraph
dot = Digraph(comment='The Round Table')
dot.node('A', 'Mine')
dot.node('B', 'Mill')
dot.node('C', 'Conversion')
dot.node('D', 'Enrichment')
dot.node('E', 'Fuel Fabrication')
dot.node('F', 'Reactor')
dot.node('G', 'Wet Storage')
dot.node('H', 'Dry Storage')
dot.node('I', 'Repository')
dot.edge('A', 'B', label='Natural U Ore')
dot.edge('B', 'C', label='U3O8')
dot.edge('C', 'D', label='UF6')
dot.edge('D', 'E', label='Enriched UF6')
dot.edge('E', 'F', label='Fresh Fuel')
dot.edge('F', 'G', label='Spent Fuel')
dot.edge('G', 'H', label='Cooled SNF')
dot.edge('H', 'I', label='Cooled SNF')
dot | _____no_output_____ | CC-BY-4.0 | introduction/02-nfc-overview.ipynb | abachma2/npre412 |
Single Pass or Multi Pass RecycleTo add single- or multi-pass recycling, we introduce a reprocessing facility; in this cycle, all high level spent nuclear fuel is still eventually destined for a geologic repository | dot.node('Z', 'Reprocessing')
dot.edge('H', 'Z', label='Cooled SNF')
dot.edge('G', 'Z', label='Cooled SNF')
dot.edge('Z', 'E', label='Pu')
dot.edge('Z', 'E', label='U')
dot.edge('Z', 'I', label='FP')
dot | _____no_output_____ | CC-BY-4.0 | introduction/02-nfc-overview.ipynb | abachma2/npre412 |
Namespace* To turn the code you write into an executable program, Python uses an interpreter to read it. The interpreter sorts the variables you name into different scopes, and these scopes are called namespaces.* Every time you create a variable, the interpreter records in the namespace the variable's name and the memory address of the object it stores. When a new variable name appears, the interpreter first checks whether the value to be stored is already in its records; if it is, the new variable is simply pointed at that address, for example:```pythona = 2a = a + 1b = 2```* The scopes the interpreter recognizes fall roughly into three kinds: * Built-in: present as soon as the interpreter starts, containing the built-in functions and the data structures introduced below. * Module: functions, variables, and so on brought in via ```import```. * Function: usually the variables and functions defined by the user. | a = 2
print(id(a))
b = 2
print(id(b)) | 10919456
10919456
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
--- Data Structures* Data structures are the various "containers" used to store data, and they let us operate on that data efficiently. [Sequence](Sequence)> _immutable v.s. mutable_ * [Lists](Lists): mutable = contents can be changed * [Tuples](Tuples): immutable = contents cannot be changed * [Range](Range): immutable [Dictionary](Dictionary) [Set](Set)--- SequenceBasic operations:* Check whether something is in the sequence```pythonx in seqx not in seq```* Join sequences end to end (concatenation)```pythona + b # a and b must be the same kind of sequencea * n # repeat n times```* Take things out of the sequence```pythonseq[i]seq[i:j] # items i through j-1seq[i:j:k] # every k-th item between i and j```* Sequence length, max/min, occurrence counts, and item positions```pythonlen(seq), max(seq), min(seq)seq.index(x)seq.count(x)```* More here: https://docs.python.org/3.6/library/stdtypes.html#typesseq-common --- Lists``` list = [item1, item2, item3, ...] ```* Usually used to store many items of the same kind, similar to an array (to the computer, a row of contiguous lockers).| Locker 0 | Locker 1 | Locker 2 | |:---|:----|:---|| ㄏ | ㄏ | ㄏ | * What it actually looks like: the computer keeps an array recording the index of each item, so each item's contents can be found from its index. [Image source](https://www.hackerrank.com/challenges/variable-sized-arrays/problem)| Locker 0 | Locker 1 | Locker 2 | |:---|:----|:---|| note: "item on the 3rd floor" | note: "nothing here" | note: "item in the basement" | | marvel_hero = ["Steve Rogers", "Tony Stark", "Thor Odinson"]
print(type(marvel_hero), marvel_hero)
marvel_hero.append("Hulk")
marvel_hero.insert(2, "Bruce Banner") # insert "Bruce Banner" into index 2
print(marvel_hero)
print(marvel_hero.pop()) # default: pop last item
marvel_hero[0] = "Captain America"
print(marvel_hero[1:-1]) | Hulk
['Captain America', 'Tony Stark', 'Bruce Banner', 'Thor Odinson']
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
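The common sequence operations listed above can be sketched in one self-contained snippet (the `marvel_hero` list is re-created here so the example runs on its own):

```python
marvel_hero = ["Steve Rogers", "Tony Stark", "Thor Odinson"]

# membership tests
print("Tony Stark" in marvel_hero)      # True
print("Loki" not in marvel_hero)        # True

# concatenation and repetition
combo = marvel_hero + ["Hulk"]
echo = ["Hi"] * 3                       # ['Hi', 'Hi', 'Hi']

# indexing and slicing
print(combo[0])                         # first item
print(combo[1:3])                       # items at indices 1 and 2
print(combo[::2])                       # every 2nd item

# length, min/max (lexicographic for strings), counts and positions
print(len(combo), min(combo))
print(combo.count("Hulk"), combo.index("Tony Stark"))
```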
List comprehension: lets you process the items directly inside the list literal, so no separate for-loop is needed (the running time is about the same, though) | %timeit list_hero = [i.lower() if i.startswith('T') else i.upper() for i in marvel_hero]
print(list_hero)
%%timeit
list_hero = []
for i in marvel_hero:
if i.startswith('T'):
list_hero.append(i.lower())
else:
list_hero.append(i.upper())
print(list_hero) | ['BRUCE BANNER', 'CAPTAIN AMERICA', 'thor odinson', 'tony stark']
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
Lists can be sorted; the time sorting takes grows with the length of the list (roughly n·log n) | marvel_hero.sort(reverse=False) # sort in-place
marvel_hero
list_hero_sorted = sorted(list_hero) # return a sorted list
print(list_hero_sorted) | ['BRUCE BANNER', 'CAPTAIN AMERICA', 'thor odinson', 'tony stark']
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
**Note! If you want to copy a list, you cannot just assign it to a new variable — that only gives the same list another name.*** More precisely, plain assignment creates an alias; a shallow copy (e.g. slicing or ```list.copy()```) is needed to get a new list. | a = [1, 2, 3, 4, 5]
b = a
print(id(a), id(b), id(a) == id(b))
b[0] = 8888
print(a) | 140032783977672 140032783977672 True
[8888, 2, 3, 4, 5]
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
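To get an actual copy rather than a second name for the same list, use a slice, ```list.copy()```, or — when the list contains other mutable objects — ```copy.deepcopy()``` (a small sketch):

```python
import copy

a = [1, 2, [3, 4]]
b = a[:]              # shallow copy: new outer list, inner objects still shared
c = copy.deepcopy(a)  # deep copy: the inner list is copied as well

b[0] = 8888           # does not affect a
b[2][0] = 999         # inner list is shared, so a sees this change
c[2][1] = 777         # deep copy, so a does not see this

print(a)  # [1, 2, [999, 4]]
print(b)  # [8888, 2, [999, 4]]
print(c)  # [1, 2, [3, 777]]
```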
--- Tuples``` tuples = item1, item2, item3, ...```* Usually used to store related pieces of data of different kinds.* The ',' is what makes a tuple, but () are usually added to distinguish a tuple from a function call. For example:```pythondef f(a, b=0): return a[0]*a[1] + bf((87, 2)) # returns 87*2 + 0``` | love_iron = ("Iron Man", 3000)
cap = "Captain America",
print(type(love_iron), love_iron)
print(love_iron + cap)
print("Does {} in the \"love_iron\" tuples?: {}".format("Iron", 'Iron' in love_iron))
print("Length of cap: {}".format(len(cap)))
max(love_iron) | _____no_output_____ | MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
* ```enumerate()``` used in a for-loop yields (i, i-th item) pairs in turn, so you don't need a separate counter to track which item you are on | for e in enumerate(love_iron + cap):
print(e, type(e)) | (0, 'Iron Man') <class 'tuple'>
(1, 3000) <class 'tuple'>
(2, 'Captain America') <class 'tuple'>
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
--- Range* Produces a sequence of **integers**, usually used in for-loops to count iterations or serve as indices.* To produce a sequence of floats, use numpy.arange().```range(start, stop[, step])``` | even_number = [x for x in range(0, 30, 2)]
for i in range(2, 10, 2):
print("The {}th even number is {}".format(i, even_number[i-1])) | The 2th even number is 2
The 4th even number is 6
The 6th even number is 10
The 8th even number is 14
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
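Since `range` only yields integers, a floating-point sequence needs `numpy.arange`, as noted above (a sketch, assuming NumPy is installed):

```python
import numpy as np

ints = list(range(0, 10, 2))        # range: integers only
floats = np.arange(0.0, 1.0, 0.25)  # arange: fractional steps allowed

print(ints)    # [0, 2, 4, 6, 8]
print(floats)  # [0.   0.25 0.5  0.75]
```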
--- Dictionary``` {key1:value1, key2:value2, key3:value3, ...}```* Used to store data with key-to-value correspondences.* A ```key``` cannot repeat, and it must be hashable * Two conditions: 1. its value never changes after creation (immutable) 2. it can be compared with other objects for equality* What it actually looks like: a hash table * The computer runs each key through a function called a hash to encode it as a fixed-length number, then uses that number as the index of the value. * Ideally no two of these numbers collide, so the average lookup time is unaffected by how many items the dictionary holds.* [Image source](https://en.wikipedia.org/wiki/Hash_table) | hero_id = {"Steve Rogers": 1,
"Tony Stark": 666,
"Thor Odinson": 999
}
hero_code = dict(zip(hero_id.keys(), ["Captain America", "God of Thunder", "Iron Man"]))
print(type(hero_code), hero_code)
# dict[key]: returns the corresponding value; raises KeyError if key not in dict
# dict.get(key, default=None): returns the corresponding value; returns default if key not in dict
hero_name = "Steve Rogers"
print("The codename of hero_id {} is {}".format(hero_id.get(hero_name), hero_code[hero_name]))
hero_id.update({"Bruce Banner": 87})
print(hero_id) | {'Bruce Banner': 87, 'Thor Odinson': 999, 'Steve Rogers': 1, 'Tony Stark': 666}
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
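The hashability requirement for keys can be checked directly: immutable objects such as strings and tuples work, while a mutable list raises `TypeError` (a sketch):

```python
hero_id = {("Steve", "Rogers"): 1}  # tuple key: hashable, OK
hero_id["Tony Stark"] = 666         # string key: hashable, OK

unhashable = False
try:
    hero_id[["Bruce", "Banner"]] = 87  # list key: mutable, not hashable
except TypeError:
    unhashable = True

print(unhashable)  # True
```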
Dictionary View* Used to see what is currently in a dict; each view can be iterated in a for-loop, one item at a time: * dict.keys() yields the keys * dict.values() yields the values * dict.items() yields (key, value) tuples> **Note! The output order does not necessarily reflect the order items were added to the dictionary!**> The correspondence between keys and values stays consistent, though.* If you want a fixed output order, use a list or [collections.OrderedDict](https://docs.python.org/3.6/library/collections.html#collections.OrderedDict). | print(hero_id.keys())
print(hero_id.values())
print(hero_id.items())
for name, code in hero_code.items():
print("{} is {}".format(name, code)) | Steve Rogers is Captain America
Thor Odinson is God of Thunder
Tony Stark is Iron Man
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
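When the iteration order must be fixed, the `collections.OrderedDict` mentioned above guarantees insertion order (plain dicts only gained this guarantee in Python 3.7, newer than the Python 3.6 docs this notebook links to; a sketch):

```python
from collections import OrderedDict

codes = OrderedDict()
codes["Steve Rogers"] = "Captain America"
codes["Thor Odinson"] = "God of Thunder"
codes["Tony Stark"] = "Iron Man"

print(list(codes.keys()))  # insertion order is preserved
```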
--- Set``` set = {item1, item2, item3, ...}```* Used to store data without duplicates; inserting a duplicate keeps only one copy.* A set's contents can be changed; a frozenset's cannot (immutable).* The supported operations closely mirror mathematical sets: ([image source](https://www.learnbyexample.org/python-set/)) * union (```A | B```) * intersection (```A & B```) * difference (```A - B```) * symmetric difference (```A ^ B```) * subset (```A < B```) * super-set (```A > B```) | set_example = {"o", 7, 7, 7, 7, 7, 7, 7}
print(type(set_example), set_example)
A = set("Shin Shin ba fei jai")
B = set("Ni may yo may may")
print(A)
print(B)
A ^ B
a = set([[1],2,3,3,3,3])
a | _____no_output_____ | MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
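The set operations listed above, sketched on two small sets (note that the last cell above actually fails: a list is mutable and therefore unhashable, so it cannot be a set element):

```python
A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A | B)       # union: {1, 2, 3, 4, 5}
print(A & B)       # intersection: {3, 4}
print(A - B)       # difference: {1, 2}
print(A ^ B)       # symmetric difference: {1, 2, 5}
print({3, 4} < A)  # proper subset: True
```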
----- Numpy Array* For data analysis, the most common form of data is the matrix (array), and Python has a package for this called numpy.* This package lets us work with matrices faster and more conveniently.* Full documentation: https://docs.scipy.org/doc/* Quick tutorial: http://cs231n.github.io/python-numpy-tutorial/#numpy-arrays```bash # installing it takes a single step:pip3 install numpy``` | import numpy as np
b = np.array([[1,2,3],[4,5,6]]) # 2D array
print(b)
print(b.shape, b.dtype)
print(b[0, 0], b[0, 1], b[1, 0]) # array[row_index, col_index] | [[1 2 3]
[4 5 6]]
(2, 3) int64
1 2 4
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
There are many convenience functions for creating arrays* For instance all zeros, all ones, the identity matrix, and so on; more functions are listed [here](https://docs.scipy.org/doc/numpy/reference/routines.array-creation.html#routines-array-creation). | z = np.zeros((8,7))
z | _____no_output_____ | MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
Extracting rows or columns from an array* You can use ```1:5``` slice syntax just like with lists.* You can also select a subset of values with a boolean mask. | yeee = np.fromfile(join("data", "numpy_sample.txt"), sep=' ')
yeee = yeee.reshape((4,4))
print(yeee)
print(yeee[:2,0]) # first two rows of the first column
print(yeee[-1,:]) # the last row
yeee[yeee > 6] | _____no_output_____ | MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
Matrix operations* ``` + - * / ``` are all element-wise, i.e. each value in the matrix is operated on independently.* For matrix multiplication use ```dot```* More mathematical operations are listed [here](https://docs.scipy.org/doc/numpy/reference/routines.math.html). | x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
print(x.dot(y) == np.dot(x, y)) | [[ True True]
[ True True]]
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
Broadcasting* In numpy, when we operate on arrays of different shapes, the operation is applied directly wherever the shapes are compatible (the smaller array is virtually repeated along the missing dimensions). | x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v
print(y) # v is added to each row of x | [[ 2 2 4]
[ 5 5 7]
[ 8 8 10]
[11 11 13]]
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
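Broadcasting works along the other axis too, once the shapes are made compatible with `reshape` (a sketch):

```python
import numpy as np

x = np.array([[1, 2, 3], [4, 5, 6]])
col = np.array([10, 20]).reshape(2, 1)  # shape (2, 1): one value per row

y = x + col  # col is broadcast across the columns of x
print(y)     # [[11 12 13]
             #  [24 25 26]]
```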
If you want two arrays with the same values, beware of the shallow-copy problem.* ```x[:]``` gives you a View of x; in other words, what you see is still x's data, and modifying the View modifies x.* To make a genuine copy, use ```.copy()```. | x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
shallow = x[:]
deep = x.copy()
shallow[0, 0] = 9487
deep[0, 0] = 5566
print(x) | [[9487 2 3]
[ 4 5 6]
[ 7 8 9]
[ 10 11 12]]
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
---- Files* Now that you know how to store data inside a program, you also need to know how to save it to and read it from files.* Whenever you open a file, you must tell the computer which mode to use:| Access mode | access_flag | detail ||:------------|:-----|:-------|| Read only | r | read from the beginning of the file || Write only | w | writes start from the beginning; any existing content is overwritten || Append only | a | writes are appended at the end of the file || Enhance | + | read+write or read+append |* f is the handler in charge of that file; you act on the file through f. * f keeps track of where in the file you currently are. ```python f = open(filepath, access_flag)f.close()``` | from os.path import join
with open(join("data", "heyhey.txt"), 'r', encoding='utf-8') as f:
print(f.read() + '\n')
print(f.readlines())
f.seek(0) # go back to the beginning of the file
readlines = f.readlines(1)
print(readlines, type(readlines)) # reads line by line, splitting on '\n'
single_line = f.readline()
print(single_line, type(single_line)) # reads a single line at a time
print()
# you can also iterate over the file in a for-loop, one line at a time
for i, line in enumerate(f):
print("Line: {}: {}".format(i, line))
break
with open(join("data", "test.txt"), 'w+', encoding='utf-8') as f:
f.write("Shin Shin ba fei jai")
f.seek(0)
print(f.read()) | Shin Shin ba fei jai
| MIT | assets/images/pandas/Python3_data_structure.ipynb | haochunchang/haochunchang.github.io |
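The append mode (`'a'`) from the table above can be sketched with a temporary file (the `tempfile` path here is just an assumption, so the snippet does not depend on the notebook's data files):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "append_demo.txt")

with open(path, 'w', encoding='utf-8') as f:  # 'w': truncate/create
    f.write("line 1\n")

with open(path, 'a', encoding='utf-8') as f:  # 'a': write at the end
    f.write("line 2\n")

with open(path, 'r', encoding='utf-8') as f:
    content = f.read()

print(content)
```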
Perform fCUBT on the data | # Load packages
import matplotlib
import matplotlib.pyplot as plt
import pickle
from FDApy.clustering.fcubt import Node, FCUBT
from FDApy.representation.functional_data import DenseFunctionalData
from FDApy.preprocessing.dim_reduction.fpca import UFPCA
from matplotlib import colors as mcolors
COLORS = ['#377eb8', '#ff7f00', '#4daf4a',
'#f781bf', '#a65628', '#984ea3',
'#999999', '#e41a1c', '#dede00']
#matplotlib.use("pgf")
matplotlib.rcParams.update({
"pgf.texsystem": "pdflatex",
'font.family': 'serif',
'text.usetex': True,
'pgf.rcfonts': False,
})
# Load data
with open('./data/scenario_1_review.pkl', 'rb') as f:
data_fd = pickle.load(f)
with open('./data/labels_review.pkl', 'rb') as f:
labels = pickle.load(f)
# Do UFPCA on the data
fpca = UFPCA(n_components=0.99)
fpca.fit(data_fd, method='GAM')
# Compute scores
simu_proj = fpca.transform(data_fd, method='NumInt')
plt.scatter(simu_proj[:, 0], simu_proj[:, 1], c=labels)
plt.show()
# Build the tree
root_node = Node(data_fd, is_root=True)
fcubt = FCUBT(root_node=root_node)
# Growing
fcubt.grow(n_components=0.95, min_size=10)
fcubt.mapping_grow
# Joining
fcubt.join(n_components=0.95)
fcubt.mapping_join | _____no_output_____ | MIT | scenario_1/021-fcubt_review.ipynb | StevenGolovkine/fcubt |
Plotting very large datasets meaningfully, using `datashader`There are a variety of approaches for plotting large datasets, but most of them are very unsatisfactory. Here we first show some of the issues, then demonstrate how the `datashader` library helps make large datasets truly practical. We'll use part of the well-studied [NYC Taxi trip database](http://www.nyc.gov/html/tlc/html/about/trip_record_data.shtml), with the locations of all NYC taxi pickups and dropoffs from the month of January 2015. Although we know what the data is, let's approach it as if we are doing data mining, and see what it takes to understand the dataset from scratch. Load NYC Taxi data (takes 10-20 seconds, since it's in the inefficient but widely supported CSV file format...) | import pandas as pd
%time df = pd.read_csv('../data/nyc_taxi.csv',usecols= \
['pickup_x', 'pickup_y', 'dropoff_x','dropoff_y', 'passenger_count','tpep_pickup_datetime'])
df.tail() | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
As you can see, this file contains about 12 million pickup and dropoff locations (in Web Mercator coordinates), with passenger counts. Define a simple plot | from bokeh.models import BoxZoomTool
from bokeh.plotting import figure, output_notebook, show
output_notebook()
NYC = x_range, y_range = ((-8242000,-8210000), (4965000,4990000))
plot_width = int(750)
plot_height = int(plot_width//1.2)
def base_plot(tools='pan,wheel_zoom,reset',plot_width=plot_width, plot_height=plot_height, **plot_args):
p = figure(tools=tools, plot_width=plot_width, plot_height=plot_height,
x_range=x_range, y_range=y_range, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0, **plot_args)
p.axis.visible = False
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None
p.add_tools(BoxZoomTool(match_aspect=True))
return p
options = dict(line_color=None, fill_color='blue', size=5) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
1000-point scatterplot: undersamplingAny plotting program should be able to handle a plot of 1000 datapoints. Here the points are initially overplotting each other, but if you hit the Reset button (top right of plot) to zoom in a bit, nearly all of them should be clearly visible in the following Bokeh plot of a random 1000-point sample. If you know what to look for, you can even see the outline of Manhattan Island and Central Park from the pattern of dots. We've included geographic map data here to help get you situated, though for a genuine data mining task in an abstract data space you might not have any such landmarks. In any case, because this plot is discarding 99.99% of the data, it reveals very little of what might be contained in the dataset, a problem called *undersampling*. | %%time
from bokeh.tile_providers import STAMEN_TERRAIN
samples = df.sample(n=1000)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
10,000-point scatterplot: overplottingWe can of course plot more points to reduce the amount of undersampling. However, even if we only try to plot 0.1% of the data, ignoring the other 99.9%, we will find major problems with *overplotting*, such that the true density of dropoffs in central Manhattan is impossible to see due to occlusion: | %%time
samples = df.sample(n=10000)
p = base_plot()
p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)
show(p) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
Overplotting is reduced if you zoom in on a particular region (may need to click to enable the wheel-zoom tool in the upper right of the plot first, then use the scroll wheel). However, then the problem switches to back to serious undersampling, as the too-sparsely sampled datapoints get revealed for zoomed-in regions, even though much more data is available. 100,000-point scatterplot: saturationIf you make the dot size smaller, you can reduce the overplotting that occurs when you try to combat undersampling. Even so, with enough opaque data points, overplotting will be unavoidable in popular dropoff locations. So you can then adjust the alpha (opacity) parameter of most plotting programs, so that multiple points need to overlap before full color saturation is achieved. With enough data, such a plot can approximate the probability density function for dropoffs, showing where dropoffs were most common: ```python%%timeoptions = dict(line_color=None, fill_color='blue', size=1, alpha=0.1)samples = df.sample(n=100000)p = base_plot(webgl=True)p.circle(x=samples['dropoff_x'], y=samples['dropoff_y'], **options)show(p)``` [*Here we've shown static output as a PNG rather than a live Bokeh plot, to reduce the file size for distributing full notebooks and because some browsers will have trouble with plots this large. The above cell can be converted into code and executed to get the full interactive plot.*]However, it's very tricky to set the size and alpha parameters. How do we know if certain regions are saturating, unable to show peaks in dropoff density? Here we've manually set the alpha to show a clear structure of streets and blocks, as one would intuitively expect to see, but the density of dropoffs still seems approximately the same on nearly all Manhattan streets (just wider in some locations), which is unlikely to be true. 
We can of course reduce the alpha value to reduce saturation further, but there's no way to tell when it's been set correctly, and it's already low enough that nothing other than Manhattan and La Guardia is showing up at all. Plus, this alpha value will only work even reasonably well at the one zoom level shown. Try zooming in (may need to enable the wheel zoom tool in the upper right) to see that at higher zooms, there is less overlap between dropoff locations, so that the points *all* start to become transparent due to lack of overlap. Yet without setting the size and alpha to a low value in the first place, the structure is invisible when zoomed out, due to overplotting. Thus even though Bokeh provides rich support for interactively revealing structure by zooming, it is of limited utility for large data; either the data is invisible when zoomed in, or there's no large-scale structure when zoomed out, which is necessary to indicate where zooming would be informative.Moreover, we're still ignoring 99% of the data. Many plotting programs will have trouble with plots even this large, but Bokeh can handle 100-200,000 points in most browsers. Here we've enabled Bokeh's WebGL support, which gives smoother zooming behavior, but the non-WebGL mode also works well. Still, for such large sizes the plots become slow due to the large HTML file sizes involved, because each of the data points is encoded as text in the web page, and for even larger samples the browser will fail to render the page at all. 10-million-point datashaded plots: auto-ranging, but limited dynamic rangeTo let us work with truly large datasets without discarding most of the data, we can take an entirely different approach. 
Instead of using a Bokeh scatterplot, which encodes every point into JSON and stores it in the HTML file read by the browser, we can use the [datashader](https://github.com/bokeh/datashader) library to render the entire dataset into a pixel buffer in a separate Python process, and then provide a fixed-size image to the browser containing only the data currently visible. This approach decouples the data processing from the visualization. The data processing is then limited only by the computational power available, while the visualization has much more stringent constraints determined by your display device (a web browser and your particular monitor, in this case). This approach works particularly well when your data is in a far-off server, but it is also useful whenever your dataset is larger than your display device can render easily.Because the number of points involved is no longer a limiting factor, you can now use the entire dataset (including the full 150 million trips that have been made public, if you download that data separately). Most importantly, because datashader allows computation on the intermediate stages of plotting, you can easily define operations like auto-ranging (which is on by default), so that we can be sure there is no overplotting or saturation and no need to set parameters like alpha.The steps involved in datashading are (1) create a Canvas object with the shape of the eventual plot (i.e. having one storage bin for collecting points, per final pixel), (2) aggregating all points into that set of bins, incrementally counting them, and (3) mapping the resulting counts into a visible color from a specified range to make an image: | import datashader as ds
from datashader import transfer_functions as tf
from datashader.colors import Greys9
Greys9_r = list(reversed(Greys9))[:-2]
%%time
cvs = ds.Canvas(plot_width=plot_width, plot_height=plot_height, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg, cmap=["white", 'darkblue'], how='linear') | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
The resulting image is similar to the 100,000-point Bokeh plot above, but (a) makes use of all 12 million datapoints, (b) is computed in only a tiny fraction of the time, (c) does not require any magic-number parameters like size and alpha, and (d) automatically ensures that there is no saturation or overplotting: | img | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
This plot renders the count at every pixel as a color from the specified range (here from white to dark blue), mapped linearly. If your display device were linear, and the data were distributed evenly across this color range, then the result of such linear, auto-ranged processing would be an effective, parameter-free way to visualize your dataset.However, real display devices are not typically linear, and more importantly, real data is rarely distributed evenly. Here, it is clear that there are "hotspots" in dropoffs, with a very high count for areas around Penn Station and Madison Square Garden, relatively low counts for the rest of Manhattan's streets, and apparently no dropoffs anywhere else but La Guardia airport. NYC taxis definitely cover a larger geographic range than this, so what is the problem? To see, let's look at the histogram of counts for the above image: | import numpy as np
def histogram(x,colors=None):
hist,edges = np.histogram(x, bins=100)
p = figure(y_axis_label="Pixels",
tools='', height=130, outline_line_color=None,
min_border=0, min_border_left=0, min_border_right=0,
min_border_top=0, min_border_bottom=0)
p.quad(top=hist[1:], bottom=0, left=edges[1:-1], right=edges[2:])
print("min: {}, max: {}".format(np.min(x),np.max(x)))
show(p)
histogram(agg.values) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
Clearly, most of the pixels have very low counts (under 3000), while a very few pixels have much larger counts (up to 22000, in this case). When these values are mapped into colors for display, nearly all of the pixels will end up being colored with the lowest colors in the range, i.e. white or nearly white, while the other colors in the available range will be used for only a few dozen pixels at most. Thus most of the pixels in this plot convey very little information about the data, wasting nearly all of dynamic range available on your display device. It's thus very likely that we are missing a lot of the structure in this data that we could be seeing. 10-million-point datashaded plots: high dynamic rangeFor the typical case of data that is distributed nonlinearly over the available range, we can use nonlinear scaling to map the data range into the visible color range. E.g. first transforming the values via a log function will help flatten out this histogram and reveal much more of the structure of this data: | histogram(np.log1p(agg.values))
tf.shade(agg, cmap=Greys9_r, how='log') | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
We can now see that there is rich structure throughout this dataset -- geographic features like streets and buildings are clearly modulating the values in both the high-dropoff regions in Manhattan and the relatively low-dropoff regions in the surrounding areas. Still, this choice is arbitrary -- why the log function in particular? It clearly flattened the histogram somewhat, but it was just a guess. We can instead explicitly equalize the histogram of the data before building the image, making structure visible at every data level (and thus at all the geographic locations covered) in a general way: | histogram(tf.eq_hist(agg.values))
tf.shade(agg, cmap=Greys9_r, how='eq_hist') | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
The histogram is now fully flat (apart from the spacing of bins caused by the discrete nature of integer counting). Effectively, the visualization now shows a rank-order or percentile distribution of the data. I.e., pixels are now colored according to where their corresponding counts fall in the distribution of all counts, with one end of the color range for the lowest counts, one end for the highest ones, and every colormap step in between having similar numbers of counts. Such a visualization preserves the ordering between count values, faithfully displaying local differences in these counts, but discards absolute magnitudes (as the top 1% of the color range will be used for the top 1% of the data values, whatever those may be).Now that the data is visible at every level, we can immediately see that there are some clear problems with the quality of the data -- there is a surprising number of trips that claim to drop off in the water or in the roadless areas of Central park, as well as in the middle of most of the tallest buildings in central Manhattan. These locations are likely to be GPS errors being made visible, perhaps partly because of poor GPS performance in between the tallest buildings.Histogram equalization does not require any magic parameters, and in theory it should convey the maximum information available about the relative values between pixels, by mapping each of the observed ranges of values into visibly discriminable colors. And it's clearly a good start in practice, because it shows both low values (avoiding undersaturation) and relatively high values clearly, without arbitrary settings. Even so, the results will depend on the nonlinearities of your visual system, your specific display device, and any automatic compensation or calibration being applied to your display device. 
Thus in practice, the resulting range of colors may not map directly into a linearly perceivable range for your particular setup, and so you may want to further adjust the values to more accurately reflect the underlying structure, by adding additional calibration or compensation steps.Moreover, at this point you can now bring in your human-centered goals for the visualization -- once the overall structure has been clearly revealed, you can select specific aspects of the data to highlight or bring out, based on your own questions about the data. These questions can be expressed at whatever level of the pipeline is most appropriate, as shown in the examples below. For instance, histogram equalization was done on the counts in the aggregate array, because if we waited until the image had been created, we would have been working with data truncated to the 256 color levels available per channel in most display devices, greatly reducing precision. Or you may want to focus specifically on the highest peaks (as shown below), which again should be done at the aggregate level so that you can use the full color range of your display device to represent the narrow range of data that you are interested in. Throughout, the goal is to map from the data of interest into the visible, clearly perceptible range available on your display device. 10-million-point datashaded plots: interactiveAlthough the above plots reveal the entire dataset at once, the full power of datashading requires an interactive plot, because a big dataset will usually have structure at very many different levels (such as different geographic regions). Datashading allows auto-ranging and other automatic operations to be recomputed dynamically for the specific selected viewport, automatically revealing local structure that may not be visible from a global view. Here we'll embed the generated images into a Bokeh plot to support fully interactive zooming. 
For the highest detail on large monitors, you should increase the plot width and height above. | import datashader as ds
from datashader.bokeh_ext import InteractiveImage
from functools import partial
from datashader.utils import export_image
from datashader.colors import colormap_select, Greys9, Hot, inferno
background = "black"
export = partial(export_image, export_path="export", background=background)
cm = partial(colormap_select, reverse=(background=="black"))
def create_image(x_range, y_range, w=plot_width, h=plot_height):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg, cmap=Hot, how='eq_hist')
return tf.dynspread(img, threshold=0.5, max_px=4)
p = base_plot(background_fill_color=background)
export(create_image(*NYC),"NYCT_hot")
InteractiveImage(p, create_image) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
You can now zoom in interactively to this plot, seeing all the points available in that viewport, without ever needing to change the plot parameters for that specific zoom level. Each time you zoom or pan, a new image is rendered (which takes a few seconds for large datasets), and displayed overlaid on any other plot elements, providing full access to all of your data. Here we've used the optional `tf.dynspread` function to automatically enlarge the size of each datapoint once you've zoomed in so far that datapoints no longer have nearby neighbors. Customizing datashader. One of the most important features of datashading is that each of the stages of the datashader pipeline can be modified or replaced, either for personal preferences or to highlight specific aspects of the data. Here we'll use a high-level `Pipeline` object that encapsulates the typical series of steps in the above `create_image` function, and then we'll customize it. The default values of this pipeline are the same as the plot above, but here we'll add a special colormap to make the values stand out against an underlying map, and only plot hotspots (defined here as pixels (aggregation bins) that are in the 90th percentile by count): | import numpy as np
from functools import partial
def create_image90(x_range, y_range, w=plot_width, h=plot_height):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
img = tf.shade(agg.where(agg>np.percentile(agg,90)), cmap=inferno, how='eq_hist')
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot()
p.add_tile(STAMEN_TERRAIN)
export(create_image(*NYC),"NYCT_90th")
InteractiveImage(p, create_image90) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
If you zoom in to the plot above, you can see that the 90th-percentile criterion at first highlights the most active areas in the entire dataset, and then highlights the most active areas in each subsequent viewport. Here yellow has been chosen to highlight the strongest peaks, and if you zoom in on one of those peaks you can see the most active areas in that particular geographic region, according to this dynamically evaluated definition of "most active". The above plots each followed a roughly standard series of steps useful for many datasets, but you can instead fully customize the computations involved. This capability lets you do novel operations on the data once it has been aggregated into pixel-shaped bins. For instance, you might want to plot all the pixels where there were more dropoffs than pickups in blue, and all those where there were more pickups than dropoffs in red. To do this, just write your own function that will create an image, when given x and y ranges, a resolution (w x h), and any optional arguments needed. You can then either call the function yourself, or pass it to `InteractiveImage` to make an interactive Bokeh plot: | def merged_images(x_range, y_range, w=plot_width, h=plot_height, how='log'):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
picks = cvs.points(df, 'pickup_x', 'pickup_y', ds.count('passenger_count'))
drops = cvs.points(df, 'dropoff_x', 'dropoff_y', ds.count('passenger_count'))
drops = drops.rename({'dropoff_x': 'x', 'dropoff_y': 'y'})
picks = picks.rename({'pickup_x': 'x', 'pickup_y': 'y'})
more_drops = tf.shade(drops.where(drops > picks), cmap=["darkblue", 'cornflowerblue'], how=how)
more_picks = tf.shade(picks.where(picks > drops), cmap=["darkred", 'orangered'], how=how)
img = tf.stack(more_picks, more_drops)
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
export(merged_images(*NYC),"NYCT_pickups_vs_dropoffs")
InteractiveImage(p, merged_images) | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
Now you can see that pickups are more common on major roads, as you'd expect, and dropoffs are more common on side streets. In Manhattan, roads running along the island are more common for pickups. If you zoom in to any location, the data will be re-aggregated to the new resolution automatically, again calculating for each newly defined pixel whether pickups or dropoffs were more likely in that pixel. The interactive features of Bokeh are now fully usable with this large dataset, allowing you to uncover new structure at every level. We can also use other columns in the dataset as additional dimensions in the plot. For instance, we might want to see whether certain areas are more likely to have pickups at certain hours (e.g. areas with bars and restaurants might have pickups in the evening, while apartment buildings may have pickups in the morning). One way to do this is to use the hour of the day as a category, and then colorize each hour: | df['hour'] = pd.to_datetime(df['tpep_pickup_datetime']).dt.hour.astype('category')
colors = ["#FF0000","#FF3F00","#FF7F00","#FFBF00","#FFFF00","#BFFF00","#7FFF00","#3FFF00",
"#00FF00","#00FF3F","#00FF7F","#00FFBF","#00FFFF","#00BFFF","#007FFF","#003FFF",
"#0000FF","#3F00FF","#7F00FF","#BF00FF","#FF00FF","#FF00BF","#FF007F","#FF003F",]
def colorized_images(x_range, y_range, w=plot_width, h=plot_height, dataset="pickup"):
cvs = ds.Canvas(plot_width=w, plot_height=h, x_range=x_range, y_range=y_range)
agg = cvs.points(df, dataset+'_x', dataset+'_y', ds.count_cat('hour'))
img = tf.shade(agg, color_key=colors)
return tf.dynspread(img, threshold=0.3, max_px=4)
p = base_plot(background_fill_color=background)
#p.add_tile(STAMEN_TERRAIN)
export(colorized_images(*NYC, dataset="pickup"),"NYCT_pickup_times")
InteractiveImage(p, colorized_images, dataset="pickup")
export(colorized_images(*NYC, dataset="dropoff"),"NYCT_dropoff_times")
p = base_plot(background_fill_color=background)
InteractiveImage(p, colorized_images, dataset="dropoff") | _____no_output_____ | MIT | datashader-work/datashader-examples/topics/nyc_taxi.ipynb | ventureBorbot/Data-Analysis |
During Private Multiplication, we require the parties to be able to communicate with each other. We make sure that our Actions are **Idempotent** and **Atomic**, such that when a given action is not able to execute, it requeues itself at the back of the queue. We set a maximum number of retries, eventually failing when one of the parties' nodes is not able to send its intermediate results. We also create proxy clients with minimal permissions such that the parties are able to communicate with each other. | out = tensor_1 + tensor_2
out2 = out > 3
out2.block.reconstruct()
mpc_1 = tensor_1 * tensor_2
mpc_2 = tensor_2 * tensor_3
mpc = mpc_1 * mpc_2 * 3
mpc.block.reconstruct()
mpc_1 = tensor_1 + tensor_2
mpc_2 = tensor_2 + tensor_3
mpc3 = mpc_1 + mpc_2 + 3
mpc3.block.reconstruct() | _____no_output_____ | Apache-2.0 | notebooks/smpc/Private Mul Tensor Abstraction.ipynb | Noob-can-Compile/PySyft |
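The requeue-with-retries behaviour described above can be sketched with a simple action queue; the names here (`process_queue`, `MAX_RETRIES`) are illustrative assumptions, not PySyft's actual API:

```python
from collections import deque

MAX_RETRIES = 3  # illustrative retry budget

def process_queue(actions, execute):
    """Run actions in order; a failing action is requeued at the back of
    the queue until it succeeds or exhausts MAX_RETRIES. Because actions
    are idempotent, re-executing one after a partial failure is safe."""
    queue = deque((action, 0) for action in actions)
    failed = []
    while queue:
        action, attempts = queue.popleft()
        try:
            execute(action)
        except Exception:
            if attempts + 1 >= MAX_RETRIES:
                failed.append(action)                 # give up after the budget
            else:
                queue.append((action, attempts + 1))  # requeue at the back
    return failed
```

An action that fails because a peer's intermediate result has not arrived yet simply gets retried later, while a permanently unreachable peer makes the action fail after the retry budget is spent.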
mlconfig | from mlrun import mlconf
import os
mlconf.dbpath = mlconf.dbpath or 'http://mlrun-api:8080'
mlconf.artifact_path = mlconf.artifact_path or os.path.abspath('./') | _____no_output_____ | Apache-2.0 | describe/describe.ipynb | michaelk-igz/functions |
save | from mlrun import code_to_function
# create job function object from notebook code
fn = code_to_function("describe", handler="summarize",
description="describe and visualizes dataset stats",
categories=["analysis"],
labels = {"author": "yjb"},
code_output='.')
fn.export() | > 2020-07-23 07:46:39,543 [info] function spec saved to path: function.yaml
| Apache-2.0 | describe/describe.ipynb | michaelk-igz/functions |
tests | from mlrun.platforms import auto_mount
fn.apply(auto_mount())
from mlrun import NewTask, run_local
#DATA_URL = "https://iguazio-sample-data.s3.amazonaws.com/datasets/classifier-data.csv"
DATA_URL = 'https://iguazio-sample-data.s3.amazonaws.com/datasets/iris_dataset.csv'
task = NewTask(
name="tasks-describe",
handler=summarize,
inputs={"table": DATA_URL}, params={'update_dataset': True, 'label_column': 'label'}) | _____no_output_____ | Apache-2.0 | describe/describe.ipynb | michaelk-igz/functions |
run locally | run = run_local(task) | > 2020-07-22 09:00:32,582 [debug] Validating field against patterns: {'field_name': 'run.metadata.name', 'field_value': 'tasks-describe', 'pattern': ['^.{0,63}$', '^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$']}
> 2020-07-22 09:00:32,598 [info] starting run tasks-describe uid=f30656601819462c892a9365dd175f72 -> http://mlrun-api:8080
> 2020-07-22 09:00:37,475 [debug] log artifact histograms at /User/functions/describe/plots/hist.html, size: 140127, db: N
> 2020-07-22 09:00:38,377 [debug] log artifact violin at /User/functions/describe/plots/violin.html, size: 54096, db: N
> 2020-07-22 09:00:38,680 [debug] log artifact imbalance at /User/functions/describe/plots/imbalance.html, size: 10045, db: Y
> 2020-07-22 09:00:38,697 [debug] log artifact imbalance-weights-vec at /User/functions/describe/plots/imbalance-weights-vec.csv, size: 65, db: N
> 2020-07-22 09:00:38,702 [debug] log artifact correlation-matrix at /User/functions/describe/plots/correlation-matrix.csv, size: 324, db: N
> 2020-07-22 09:00:38,877 [debug] log artifact correlation at /User/functions/describe/plots/corr.html, size: 12052, db: N
| Apache-2.0 | describe/describe.ipynb | michaelk-igz/functions |
run remotely | fn.run(task, inputs={"table": DATA_URL}) | > 2020-07-22 09:00:39,154 [debug] Validating field against patterns: {'field_name': 'run.metadata.name', 'field_value': 'tasks-describe', 'pattern': ['^.{0,63}$', '^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$']}
> 2020-07-22 09:00:39,161 [info] starting run tasks-describe uid=d8edd5e4b8004437927f0810d2ad1658 -> http://mlrun-api:8080
> 2020-07-22 09:00:39,287 [info] Job is running in the background, pod: tasks-describe-vmgv8
> 2020-07-22 09:00:45,175 [debug] Validating field against patterns: {'field_name': 'run.metadata.name', 'field_value': 'tasks-describe', 'pattern': ['^.{0,63}$', '^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])?$']}
> 2020-07-22 09:00:45,291 [debug] starting local run: main.py # summarize
> 2020-07-22 09:00:50,598 [debug] log artifact histograms at /User/functions/describe/plots/hist.html, size: 238319, db: N
> 2020-07-22 09:00:51,652 [debug] log artifact violin at /User/functions/describe/plots/violin.html, size: 86708, db: N
> 2020-07-22 09:00:51,908 [debug] log artifact imbalance at /User/functions/describe/plots/imbalance.html, size: 21657, db: Y
> 2020-07-22 09:00:51,914 [debug] log artifact imbalance-weights-vec at /User/functions/describe/plots/imbalance-weights-vec.csv, size: 65, db: N
> 2020-07-22 09:00:51,919 [debug] log artifact correlation-matrix at /User/functions/describe/plots/correlation-matrix.csv, size: 324, db: N
> 2020-07-22 09:00:52,104 [debug] log artifact correlation at /User/functions/describe/plots/corr.html, size: 26392, db: N
> 2020-07-22 09:00:52,272 [info] run executed, status=completed
final state: succeeded
| Apache-2.0 | describe/describe.ipynb | michaelk-igz/functions |
In-Person Workshop --- Programming in Python=== Hadoop's MapReduce algorithm is presented in the following figure. We want to write a program that performs a word count using the MapReduce algorithm. | #
# The following creates the folders /tmp/input and /tmp/output and three test files
#
!rm -rf /tmp/input /tmp/output
!mkdir /tmp/input
!mkdir /tmp/output
%%writefile /tmp/input/text0.txt
Analytics is the discovery, interpretation, and communication of meaningful patterns
in data. Especially valuable in areas rich with recorded information, analytics relies
on the simultaneous application of statistics, computer programming and operations research
to quantify performance.
Organizations may apply analytics to business data to describe, predict, and improve business
performance. Specifically, areas within analytics include predictive analytics, prescriptive
analytics, enterprise decision management, descriptive analytics, cognitive analytics, Big
Data Analytics, retail analytics, store assortment and stock-keeping unit optimization,
marketing optimization and marketing mix modeling, web analytics, call analytics, speech
analytics, sales force sizing and optimization, price and promotion modeling, predictive
science, credit risk analysis, and fraud analytics. Since analytics can require extensive
computation (see big data), the algorithms and software used for analytics harness the most
current methods in computer science, statistics, and mathematics
%%writefile /tmp/input/text1.txt
The field of data analysis. Analytics often involves studying past historical data to
research potential trends, to analyze the effects of certain decisions or events, or to
evaluate the performance of a given tool or scenario. The goal of analytics is to improve
the business by gaining knowledge which can be used to make improvements or changes.
%%writefile /tmp/input/text2.txt
Data analytics (DA) is the process of examining data sets in order to draw conclusions
about the information they contain, increasingly with the aid of specialized systems
and software. Data analytics technologies and techniques are widely used in commercial
industries to enable organizations to make more-informed business decisions and by
scientists and researchers to verify or disprove scientific models, theories and
hypotheses.
#
# Write the function load_input, which receives a folder as a parameter and
# returns a list of tuples where the first element of each tuple is the file
# name and the second is a line of the file. The function converts every line
# of each file into a tuple. The function is generic and must read all the
# files in the folder passed as a parameter.
#
# For example:
# [
# ('text0.txt', 'Analytics is the discovery, inter ...'),
# ('text0.txt', 'in data. Especially valuable in ar...'),
# ...
# ('text2.txt', 'hypotheses.')
# ]
#
def load_input(input_directory):
pass
#
# Write a function called mapper that receives a list of tuples from the
# previous function and returns a list of (key, value) tuples. In this case,
# the key is each word and the value is 1, since a count is being performed.
#
# [
# ('Analytics', 1),
# ('is', 1),
# ...
# ]
#
def mapper(sequence):
pass
#
# Write the function shuffle_and_sort, which receives the list of tuples
# produced by the mapper and returns a list with the same content sorted by
# key.
#
# [
# ('Analytics', 1),
# ('Analytics', 1),
# ...
# ]
#
def shuffle_and_sort(sequence):
pass
#
# Write the function reducer, which receives the result of shuffle_and_sort
# and reduces the values associated with each key by summing them. As a
# result, for example, the reduction indicates how many times the word
# analytics appears in the text.
#
def reducer(sequence):
pass
#
# Write the function save_output, which takes the list returned by the
# reducer and writes the files 'part-0.txt', 'part-1.txt', etc. to the
# folder /tmp/output/. The first file contains the first 20 counted words,
# the second words 21 to 40, and so on. Each line of each file contains the
# word and the number of times it appears, separated by a tab.
#
def save_output(sequence, output_directory):
pass | _____no_output_____ | MIT | notebooks/ciencia_de_los_datos/taller_presencial-programacion_en_python.ipynb | driverava/datalabs |
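One possible reference implementation of the middle stages described in the exercise (many other solutions are valid; `load_input` is simulated here with an in-memory list, and the function names carry a `word_count_` prefix so they do not collide with the graded stubs above):

```python
def word_count_mapper(sequence):
    # Emit a (word, 1) pair for every word of every (filename, line) tuple.
    return [(word, 1) for _, line in sequence for word in line.split()]

def word_count_shuffle_and_sort(sequence):
    # Group equal keys together by sorting on the key.
    return sorted(sequence, key=lambda pair: pair[0])

def word_count_reducer(sequence):
    # Sum the values associated with each key.
    counts = {}
    for key, value in sequence:
        counts[key] = counts.get(key, 0) + value
    return sorted(counts.items())

# Simulated output of load_input: (filename, line) tuples.
sample = [('text0.txt', 'big data'), ('text1.txt', 'big big analytics')]
result = word_count_reducer(word_count_shuffle_and_sort(word_count_mapper(sample)))
```

Chaining the three stages reproduces Hadoop's map, shuffle/sort, and reduce flow on a single machine.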
_Lambda School Data Science Unit 2_ Classification & Validation Sprint Challenge Follow the instructions for each numbered part to earn a score of 2. See the bottom of the notebook for a list of ways you can earn a score of 3. For this Sprint Challenge, you'll predict whether a person's income exceeds $50k/yr, based on census data.You can read more about the Adult Census Income dataset at the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/adult Run this cell to load the data: | !pip install category_encoders
import category_encoders as ce
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler, OrdinalEncoder
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
columns = ['age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income']
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data',
header=None, names=columns)
df['income'] = df['income'].str.strip() | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Part 1 — Begin with baselines. Split the data into an **X matrix** (all the features) and **y vector** (the target). (You _don't_ need to split the data into train and test sets here. You'll be asked to do that at the _end_ of Part 1.) | df['income'].value_counts()
X = df.drop(columns='income')
Y = df['income'].replace({'<=50K':0, '>50K':1}) | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
What **accuracy score** would you get here with a **"majority class baseline"?** (You can answer this question either with a scikit-learn function or with a pandas function.) | majority_class = Y.mode()[0]
majority_class_prediction = [majority_class] * len(Y)
accuracy_score(Y, majority_class_prediction) | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
What **ROC AUC score** would you get here with a **majority class baseline?** (You can answer this question either with a scikit-learn function or with no code, just your understanding of ROC AUC.) | roc_auc_score(Y, majority_class_prediction)
In this Sprint Challenge, you will use **"Cross-Validation with Independent Test Set"** for your model validation method. First, **split the data into `X_train, X_test, y_train, y_test`**. You can include 80% of the data in the train set, and hold out 20% for the test set. | X_train, X_test, Y_train, Y_test = train_test_split(X,Y, test_size=.2, random_state=42)
Part 2 — Modeling with Logistic Regression! - You may do exploratory data analysis and visualization, but it is not required.- You may **use all the features, or select any features** of your choice, as long as you select at least one numeric feature and one categorical feature.- **Scale your numeric features**, using any scikit-learn [Scaler](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.preprocessing) of your choice.- **Encode your categorical features**. You may use any encoding (One-Hot, Ordinal, etc) and any library (category_encoders, scikit-learn, pandas, etc) of your choice.- You may choose to use a pipeline, but it is not required.- Use a **Logistic Regression** model.- Use scikit-learn's [**cross_val_score**](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) function. For [scoring](https://scikit-learn.org/stable/modules/model_evaluation.htmlthe-scoring-parameter-defining-model-evaluation-rules), use **accuracy**.- **Print your model's cross-validation accuracy score.** | pipeline = make_pipeline(ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
LogisticRegression(solver='lbfgs', max_iter=1000))
scores = cross_val_score(pipeline, X_train, Y_train, scoring='accuracy', cv=10, n_jobs=1, verbose=10)
print('Cross-Validation accuracy scores:', scores,'\n\n')
print('Average:', scores.mean()) | Cross-Validation accuracy scores: [0.8452975 0.85105566 0.84683301 0.85412668 0.84683301 0.85143954
0.84952015 0.85143954 0.84754224 0.86059908]
Average: 0.8504686426610766
| MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Part 3 — Modeling with Tree Ensembles! Part 3 is the same as Part 2, except this time, use a **Random Forest** or **Gradient Boosting** classifier. You may use scikit-learn, xgboost, or any other library. Then, print your model's cross-validation accuracy score. | pipeline = make_pipeline(ce.OneHotEncoder(use_cat_names=True),
StandardScaler(),
RandomForestClassifier(max_depth=2, n_estimators=40))
scores = cross_val_score(pipeline, X_train, Y_train, scoring='accuracy', cv=10, n_jobs=1, verbose=10)
print('Cross-Validation accuracy scores:', scores,'\n\n')
print('Average:', scores.mean()) | Cross-Validation accuracy scores: [0.76084453 0.77044146 0.76506718 0.76852207 0.7596929 0.77044146
0.76583493 0.77274472 0.76228879 0.77688172]
Average: 0.7672759758351984
| MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Part 4 — Calculate classification metrics from a confusion matrix. Suppose this is the confusion matrix for your binary classification model:
                  Predicted Negative   Predicted Positive
Actual Negative           85                   58
Actual Positive            8                   36       | true_neg = 85
true_pos = 36
false_neg = 8
false_pos = 58
pred_neg = true_neg + false_neg
pred_pos = true_pos + false_pos
actual_pos = false_neg + true_pos
actual_neg = false_pos + true_neg | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Calculate accuracy | accuracy = (true_neg + true_pos)/ (pred_neg + pred_pos)
accuracy | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Calculate precision | precision = (true_pos) / (pred_pos)
precision | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
Calculate recall | recall = true_pos / actual_pos
recall | _____no_output_____ | MIT | DS_Unit_2_Sprint_Challenge_3_Classification_Validation.ipynb | mkirby1995/DS-Unit-2-Sprint-3-Classification-Validation |
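With precision and recall in hand, the F1 score (their harmonic mean) follows from the same matrix; as a sanity check, it equals 2·TP / (2·TP + FP + FN):

```python
# Values from the confusion matrix above.
true_pos, false_pos, false_neg = 36, 58, 8

precision = true_pos / (true_pos + false_pos)
recall = true_pos / (true_pos + false_neg)

# Harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
```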
**author**: lukethompson@gmail.com **date**: 7 Oct 2017 **language**: Python 3.5 **license**: BSD3 alpha_diversity_90bp_100bp_150bp.ipynb | import pandas as pd
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from empcolors import get_empo_cat_color
%matplotlib inline | _____no_output_____ | BSD-3-Clause | code/05-alpha-diversity/alpha_diversity_90bp_100bp_150bp.ipynb | justinshaffer/emp |
*** Choose 2k or qc-filtered subset (one or the other) *** | path_map = '../../data/mapping-files/emp_qiime_mapping_subset_2k.tsv' # already has 90bp alpha-div data
version = '2k'
path_map = '../../data/mapping-files/emp_qiime_mapping_qc_filtered.tsv' # already has 90bp alpha-div data
version = 'qc' | _____no_output_____ | BSD-3-Clause | code/05-alpha-diversity/alpha_diversity_90bp_100bp_150bp.ipynb | justinshaffer/emp |
*** Merged mapping file and alpha-div *** | path_adiv100 = '../../data/alpha-div/emp.100.min25.deblur.withtax.onlytree_5000.txt'
path_adiv150 = '../../data/alpha-div/emp.150.min25.deblur.withtax.onlytree_5000.txt'
df_map = pd.read_csv(path_map, sep='\t', index_col=0)
df_adiv100 = pd.read_csv(path_adiv100, sep='\t', index_col=0)
df_adiv150 = pd.read_csv(path_adiv150, sep='\t', index_col=0)
df_adiv100.columns = ['adiv_chao1_100bp', 'adiv_observed_otus_100bp', 'adiv_faith_pd_100bp', 'adiv_shannon_100bp']
df_adiv150.columns = ['adiv_chao1_150bp', 'adiv_observed_otus_150bp', 'adiv_faith_pd_150bp', 'adiv_shannon_150bp']
df_merged = pd.concat([df_adiv100, df_adiv150, df_map], axis=1, join='outer') | _____no_output_____ | BSD-3-Clause | code/05-alpha-diversity/alpha_diversity_90bp_100bp_150bp.ipynb | justinshaffer/emp |
*** Removing all samples without 150bp alpha-div results *** | df1 = df_merged[['empo_3', 'adiv_observed_otus', 'adiv_observed_otus_100bp', 'adiv_observed_otus_150bp']]
df1.columns = ['empo_3', 'observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']
df1.dropna(axis=0, inplace=True)
g = sns.PairGrid(df1, hue='empo_3', palette=get_empo_cat_color(returndict=True))
g = g.map(plt.scatter, alpha=0.5)
for i in [0, 1, 2]:
for j in [0, 1, 2]:
g.axes[i][j].set_xscale('log')
g.axes[i][j].set_yscale('log')
g.axes[i][j].set_xlim([1e0, 1e4])
g.axes[i][j].set_ylim([1e0, 1e4])
g.savefig('adiv_%s_scatter.pdf' % version)
sns.lmplot(x='observed_tag_sequences_90bp', y='observed_tag_sequences_150bp', col='empo_3', hue="empo_3", data=df1,
col_wrap=4, palette=get_empo_cat_color(returndict=True), size=3, markers='o',
scatter_kws={"s": 20, "alpha": 1}, fit_reg=True)
plt.xlim([0, 3000])
plt.ylim([0, 3000])
plt.savefig('adiv_%s_lmplot.pdf' % version)
df1melt = pd.melt(df1, id_vars='empo_3')
empo_list = list(set(df1melt.empo_3))
empo_list = [x for x in empo_list if type(x) is str]
empo_list.sort()
empo_colors = [get_empo_cat_color(returndict=True)[x] for x in empo_list]
for var in ['observed_tag_sequences_90bp', 'observed_tag_sequences_100bp', 'observed_tag_sequences_150bp']:
list_of = [0] * len(empo_list)
df1melt2 = df1melt[df1melt['variable'] == var].drop('variable', axis=1)
for empo in np.arange(len(empo_list)):
list_of[empo] = list(df1melt2.pivot(columns='empo_3')['value'][empo_list[empo]].dropna())
fig, ax = plt.subplots(nrows=1, ncols=1, figsize=(2.5,2.5))
plt.hist(list_of, color=empo_colors,
bins=np.logspace(np.log10(1e0),np.log10(1e4), 20),
stacked=True)
plt.xscale('log')
fig.savefig('adiv_%s_hist_%s.pdf' % (version, var)) | _____no_output_____ | BSD-3-Clause | code/05-alpha-diversity/alpha_diversity_90bp_100bp_150bp.ipynb | justinshaffer/emp |
Value Investing Indicators from SEC Filings Data Source: https://www.sec.gov/dera/data/financial-statement-data-sets.html | import pandas as pd
import os
import shutil
import glob
import sys
import warnings
import functools
from functools import reduce
import os
pd.set_option('display.max_columns', 999)
warnings.simplefilter("ignore")
os.chdir("..")
dir_root = os.getcwd()
dir_raw = dir_root + u"/sec_filings/Raw"
cik_lookup = pd.read_csv(dir_root + u"/sec_filings/cik_ticker.csv", usecols=['CIK', 'Ticker'], sep="|")
# num_tags = ["PreferredStockValue", "AssetsCurrent", "Liabilities", "EarningsPerShareBasic", "CommonStockSharesOutstanding", "LiabilitiesCurrent", "EarningsPerShareBasic", "SharePrice", "StockholdersEquity", "PreferredStockValue", "CommonStockSharesOutstanding", "NetIncomeLoss", "GrossProfit", "SalesRevenueNet","StockRepurchasedAndRetiredDuringPeriodShares"]
files_num = sorted(glob.glob(dir_raw + u'/*_num.txt'))
files_sub = sorted(glob.glob(dir_raw + u'/*_sub.txt'))
files_pre = sorted(glob.glob(dir_raw + u'/*_pre.txt'))
sec_df = pd.DataFrame([])
for num, sub, pre in zip(files_num, files_sub, files_pre):
df_num = pd.read_csv(num, sep='\t', dtype=str, encoding = "ISO-8859-1")
df_sub = pd.read_csv(sub, sep='\t', dtype=str, encoding = "ISO-8859-1")
df_pre = pd.read_csv(pre, sep='\t', dtype=str, encoding = "ISO-8859-1")
# df_num = df_num[df_num['tag'].isin(num_tags)]
df_pre = df_pre.merge(df_num, on=['adsh', 'tag', 'version'], sort=True)
sec_merge = df_sub.merge(df_pre, on='adsh', how="inner", sort=True)
sec_merge = sec_merge[sec_merge['form'] == "10-Q"]
sec_merge = sec_merge[sec_merge['stmt'] == "BS"]
sec_df = sec_df.append(sec_merge)
sec_curated = sec_df.sort_values(by='ddate').drop_duplicates(subset=['adsh', 'tag', 'version'], keep='last')
sec_curated = sec_curated[['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'instance', 'period', 'tag', 'uom', 'value']]
sec_curated.to_csv(dir_root + r'/test.csv', index=False)
sec_curated = sec_curated.drop_duplicates(subset=['instance'])
sec_curated[sec_curated['tag'] == 'Share'].dropna()
drop_cols = ['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'period', 'tag', 'uom']
PreferredStockValue = sec_curated[sec_curated["tag"] == "PreferredStockValue"].rename(columns={"value": "PreferredStockValue"}).drop(columns=drop_cols)
AssetsCurrent = sec_curated[sec_curated["tag"] == "AssetsCurrent"].rename(columns={'value': 'AssetsCurrent'}).drop(columns=drop_cols)
Liabilities = sec_curated[sec_curated["tag"] == "Liabilities"].rename(columns={'value': 'Liabilities'}).drop(columns=drop_cols)
EarningsPerShareBasic = sec_curated[sec_curated["tag"] == "EarningsPerShareBasic"].rename(columns={'value': 'EarningsPerShareBasic'}).drop(columns=drop_cols)
CommonStockSharesOutstanding = sec_curated[sec_curated["tag"] == "CommonStockSharesOutstanding"].rename(columns={'value': 'CommonStockSharesOutstanding'}).drop(columns=drop_cols)
LiabilitiesCurrent = sec_curated[sec_curated["tag"] == "LiabilitiesCurrent"].rename(columns={'value': 'LiabilitiesCurrent'}).drop(columns=drop_cols)
EarningsPerShareBasic = sec_curated[sec_curated["tag"] == "EarningsPerShareBasic"].rename(columns={'value': 'EarningsPerShareBasic'}).drop(columns=drop_cols)
SharePrice = sec_curated[sec_curated["tag"] == "SharePrice"].rename(columns={'value': 'SharePrice'}).drop(columns=drop_cols)
StockholdersEquity = sec_curated[sec_curated["tag"] == "StockholdersEquity"].rename(columns={'value': 'StockholdersEquity'}).drop(columns=drop_cols)
PreferredStockValue = sec_curated[sec_curated["tag"] == "PreferredStockValue"].rename(columns={'value': 'PreferredStockValue'}).drop(columns=drop_cols)
CommonStockSharesOutstanding = sec_curated[sec_curated["tag"] == "CommonStockSharesOutstanding"].rename(columns={'value': 'CommonStockSharesOutstanding'}).drop(columns=drop_cols)
NetIncomeLoss = sec_curated[sec_curated["tag"] == "NetIncomeLoss"].rename(columns={'value': 'NetIncomeLoss'}).drop(columns=drop_cols)
GrossProfit = sec_curated[sec_curated["tag"] == "GrossProfit"].rename(columns={'value': 'GrossProfit'}).drop(columns=drop_cols)
SalesRevenueNet = sec_curated[sec_curated["tag"] == "SalesRevenueNet"].rename(columns={'value': 'SalesRevenueNet'}).drop(columns=drop_cols)
StockRepurchased = sec_curated[sec_curated["tag"] == "StockRepurchasedAndRetiredDuringPeriodShares"].rename(columns={'value': 'StockRepurchased'}).drop(columns=drop_cols)
cols = ['adsh', 'ddate','version','filed', 'form', 'fp', 'fy', 'fye', 'instance', 'period', 'tag', 'uom']
sec_final = sec_curated[cols]
dfs = [sec_curated, PreferredStockValue, AssetsCurrent, Liabilities, EarningsPerShareBasic,
CommonStockSharesOutstanding, LiabilitiesCurrent, EarningsPerShareBasic,
SharePrice, StockholdersEquity, PreferredStockValue, CommonStockSharesOutstanding,
NetIncomeLoss, GrossProfit, SalesRevenueNet,StockRepurchased]
df_final = reduce(lambda left, right: pd.merge(left, right, on='instance'), dfs)
dfs_final = [df.set_index('instance') for df in dfs]
x = pd.concat(dfs_final, axis=1)
x['SharePrice'].dropna()
| _____no_output_____ | BSD-3-Clause | explore/tags_filter.ipynb | cpc-azimuths/azimuth-rain |
Benjamin Graham Formulas:* NCAVPS = CurrentAssets - (Total Liabilities + Preferred Stock) ÷ Shares Outstanding * Less than 1.10 * Debt to Assets = Current Assets / Current Liabilities * Greater than 1.50 * Price / Earnings per Share ratio * Less than 9.0 * PRICE TO BOOK VALUE = (P/BV) * Where BV = (Total Shareholder Equity−Preferred Stock)/ Total Outstanding Shares * Less than 1.20. P/E ratios References: Benjamin Graham rules: https://cabotwealth.com/daily/value-investing/benjamin-grahams-value-stock-criteria/ Benjamin Graham rules Modified: https://www.netnethunter.com/16-benjamin-graham-rules/ |
sec_df['NCAVPS'] = (sec_df['AssetsCurrent'] - (sec_df['Liabilities'] + sec_df[
    'PreferredStockValue'])) / sec_df['CommonStockSharesOutstanding']
sec_df['DebtToAssets'] = sec_df['AssetsCurrent'] / sec_df['LiabilitiesCurrent']
sec_df['PE'] = sec_df['SharePrice'] / sec_df['EarningsPerShareBasic']
sec_df['PBV'] = sec_df['SharePrice'] / ((sec_df['StockholdersEquity'] - sec_df['PreferredStockValue']) / sec_df['CommonStockSharesOutstanding'])
| _____no_output_____ | BSD-3-Clause | explore/tags_filter.ipynb | cpc-azimuths/azimuth-rain |
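A screen applying Graham's thresholds above could look like this sketch; the column names mirror the notebook's, but the toy values and the reading of the NCAVPS rule (share price under 1.10 × NCAVPS) are assumptions:

```python
import pandas as pd

def graham_screen(df):
    return df[
        (df['SharePrice'] / df['NCAVPS'] < 1.10)  # price under 1.1x NCAVPS
        & (df['DebtToAssets'] > 1.50)             # current ratio above 1.5
        & (df['PE'] < 9.0)
        & (df['PBV'] < 1.20)
    ]

toy = pd.DataFrame({
    'SharePrice':   [10.0, 50.0],
    'NCAVPS':       [12.0, 20.0],
    'DebtToAssets': [2.0, 1.2],
    'PE':           [7.0, 15.0],
    'PBV':          [0.9, 2.5],
}, index=['cheap_co', 'rich_co'])

passed = graham_screen(toy)  # only cheap_co clears every bar
```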
Warren Buffett Rules Formulas* Debt/Equity = Total Liabilities / Total Shareholders’ Equity * Less than 1 and ROE is greater than 10%* Return on Earnings (i.e., Return on Equity) = Net Income / Stockholders' Equity * Is Positive* Gross Profit Margin = Gross Profit / Revenue * Greater than 40% * Quarter-over-quarter EPS * Greater than 10* Stock Buybacks * Greater than last period References: https://www.oldschoolvalue.com/tutorial/this-is-how-buffett-interprets-financial-statements/ |
sec_df['DebtEquity'] = sec_df['Liabilities'] / sec_df['StockholdersEquity']
sec_df['ReturnEarnings'] = sec_df['NetIncomeLoss'] / sec_df['StockholdersEquity']
sec_df["GrossProfitMargin"] = sec_df['GrossProfit'] / sec_df['SalesRevenueNet']
sec_df["EPS"] = sec_df["EarningsPerShareBasic"]
sec_df["StockBuybacks"] = sec_df["StockRepurchased"]  # column was renamed from the raw tag above | _____no_output_____ | BSD-3-Clause | explore/tags_filter.ipynb | cpc-azimuths/azimuth-rain |
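The Buffett-style thresholds above can be turned into a screen the same way; the toy values are invented and the column names follow the notebook's:

```python
import pandas as pd

def buffett_screen(df):
    return df[
        (df['DebtEquity'] < 1.0)
        & (df['ReturnEarnings'] > 0.10)      # ROE above 10%
        & (df['GrossProfitMargin'] > 0.40)   # margin above 40%
    ]

toy = pd.DataFrame({
    'DebtEquity':        [0.5, 1.8],
    'ReturnEarnings':    [0.15, 0.05],
    'GrossProfitMargin': [0.55, 0.30],
}, index=['moat_co', 'levered_co'])

picks = buffett_screen(toy)  # only moat_co survives
```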
Modeling@Author: Bruno VieiraGoals: Create a classification model able to identify a BOT account on twitter, using only profile-based features. | # Libs
import os
import numpy as np
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.metrics import classification_report, precision_score, recall_score, roc_auc_score, average_precision_score, f1_score
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder, MinMaxScaler, StandardScaler, FunctionTransformer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_validate, StratifiedKFold, train_test_split
import cloudpickle
from sklearn.model_selection import learning_curve
import matplotlib.pyplot as plt
from sklearn.svm import SVC
import utils.dev.model as mdl
import importlib
import warnings
warnings.filterwarnings('ignore')
pd.set_option('display.max_columns', 100)
# Paths and Filenames
DATA_INPUT_PATH = 'data/interim'
DATA_INPUT_TRAIN_NAME = 'train_selected_features.csv'
DATA_INPUT_TEST_NAME = 'test.csv'
MODEL_OUTPUT_PATH = 'models'
MODEL_NAME = 'model_bot_classifier_v0.pkl'
df_twitter_train = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TRAIN_NAME))
df_twitter_test = pd.read_csv(os.path.join('..',DATA_INPUT_PATH, DATA_INPUT_TEST_NAME))
df_twitter_train.replace({False:'FALSE', True:'TRUE'}, inplace=True)
df_twitter_test.replace({False:'FALSE', True:'TRUE'}, inplace=True) | _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
1) Training | X_train = df_twitter_train.drop('label', axis=1)
y_train = df_twitter_train['label']
cat_columns = df_twitter_train.select_dtypes(include=['bool', 'object']).columns.tolist()
num_columns = df_twitter_train.select_dtypes(include=['int32','int64','float32', 'float64']).columns.tolist()
num_columns.remove('label')
skf = StratifiedKFold(n_splits=10)
cat_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='most_frequent')),
('encoder', OneHotEncoder(handle_unknown='ignore'))])
num_preprocessor = Pipeline(steps=[('imputer', SimpleImputer(strategy='constant', fill_value=0)),
('scaler', StandardScaler())])
pipe_transformer = ColumnTransformer(transformers=[('num_pipe_preprocessor', num_preprocessor, num_columns),
('cat_pipe_preprocessor', cat_preprocessor, cat_columns)])
pipe_model = Pipeline(steps=[('pre_processor', pipe_transformer),
('model', SVC(random_state=23, kernel='rbf', gamma='scale', C=1, probability=True))])
cross_validation_results = cross_validate(pipe_model, X=X_train, y=y_train, scoring=['average_precision', 'roc_auc', 'precision', 'recall'], cv=skf, n_jobs=-1, verbose=0, return_train_score=True)
pipe_model.fit(X_train, y_train) | _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
2) Evaluation 2.1) Cross Validation | cross_validation_results = pd.DataFrame(cross_validation_results)
cross_validation_results
1.96*cross_validation_results['train_average_precision'].std()
print(f"Avg Train Avg Precision:{np.round(cross_validation_results['train_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_average_precision'].std(), 2)}")
print(f"Avg Test Avg Precision:{np.round(cross_validation_results['test_average_precision'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_average_precision'].std(), 2)}")
print(f"ROC - AUC Train:{np.round(cross_validation_results['train_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['train_roc_auc'].std(), 2)}")
print(f"ROC - AUC Test:{np.round(cross_validation_results['test_roc_auc'].mean(), 2)} +/- {np.round(1.96*cross_validation_results['test_roc_auc'].std(), 2)}") | ROC - AUC Train:0.69 +/- 0.01
ROC - AUC Test:0.68 +/- 0.11
| MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
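The cells above report each cross-validation metric as mean ± 1.96·σ over the folds. A minimal sketch of that interval with invented fold scores (the 1.96 factor assumes the fold scores are roughly normal, a simplification at k = 10):

```python
import numpy as np

# invented fold scores, stand-ins for a cross_validation_results column
fold_scores = np.array([0.62, 0.70, 0.68, 0.74, 0.66])

mean = fold_scores.mean()
half_width = 1.96 * fold_scores.std()   # same formula as the prints above
lo, hi = mean - half_width, mean + half_width
```

With only a handful of folds the normal approximation is rough; a t-based interval would be wider.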
2.2) Test Set | def build_features(df):
list_columns_colors = df.filter(regex='color').columns.tolist()
df = df.replace({'false':'FALSE', 'true':'TRUE', False:'FALSE', True:'TRUE'})
df['name'] = df['name'].apply(lambda x: len(x) if x is not np.nan else 0)
df['profile_location'] = df['profile_location'].apply(lambda x: 'TRUE' if x is not np.nan else 'FALSE')
df['rate_friends_followers'] = df['friends_count']/df['followers_count']
df['rate_friends_followers'] = df['rate_friends_followers'].replace({np.inf:0, np.nan:0})  # .map would null out every unmapped value
df['unique_colors'] = df[list_columns_colors].stack().groupby(level=0).nunique()
return df
df_twitter_test = build_features(df_twitter_test)
columns_to_predict = df_twitter_train.columns.tolist()
df_twitter_test = df_twitter_test.loc[:,columns_to_predict]
X_test = df_twitter_test.drop('label', axis=1)
y_test = df_twitter_test['label']
| _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
2.3) Metrics | y_train_predict = pipe_model.predict_proba(X_train)
y_test_predict = pipe_model.predict_proba(X_test)
df_metrics_train = mdl.eval_thresh(y_real = y_train, y_proba = y_train_predict[:,1])
df_metrics_test = mdl.eval_thresh(y_real = y_test, y_proba = y_test_predict[:,1])
importlib.reload(mdl)
mdl.plot_metrics(df_metrics_train)
mdl.plot_metrics(df_metrics_test) | _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
2.4) Learning Curve | train_sizes, train_scores, validation_scores = learning_curve(estimator = pipe_model,
X = X_train,
y = y_train,
cv = 5,
train_sizes=np.linspace(0.1, 1, 10),
scoring = 'neg_log_loss')
train_scores_mean = train_scores.mean(axis=1)
validation_scores_mean = validation_scores.mean(axis=1)
plt.style.use('seaborn')
plt.plot(train_sizes, train_scores_mean, label = 'Training error')
plt.plot(train_sizes, validation_scores_mean, label = 'Validation error')
plt.ylabel('Negative Log Loss', fontsize = 14)  # matches scoring='neg_log_loss' above
plt.xlabel('Training set size', fontsize = 14)
plt.title('Learning curves', fontsize = 18, y = 1.03)
plt.legend()
plt.show() | _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
2.5) Ordering 2.6) Callibration 3) Saving the Model | with open(os.path.join('..', MODEL_OUTPUT_PATH, MODEL_NAME), 'wb') as f:
cloudpickle.dump(pipe_model, f) | _____no_output_____ | MIT | notebooks/modeling.ipynb | brunocvs7/bot_detection_twitter_profile_features |
Load Data | def mnist_data():
compose = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((.5, .5, .5), (.5, .5, .5))
])
out_dir = '{}/dataset'.format(DATA_FOLDER)
return datasets.MNIST(root=out_dir, train=True, transform=compose, download=True)
data = mnist_data()
batch_size = 100
data_loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
num_batches = len(data_loader) | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
Networks | class DiscriminativeNet(torch.nn.Module):
"""
A two hidden-layer discriminative neural network
"""
def __init__(self):
super(DiscriminativeNet, self).__init__()
n_features = 784
n_out = 1
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 1024),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden1 = nn.Sequential(
nn.Linear(1024, 512),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 256),
nn.LeakyReLU(0.2),
nn.Dropout(0.3)
)
self.out = nn.Sequential(
torch.nn.Linear(256, n_out),
torch.nn.Sigmoid()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
def images_to_vectors(images):
return images.view(images.size(0), 784)
def vectors_to_images(vectors):
return vectors.view(vectors.size(0), 1, 28, 28)
class GenerativeNet(torch.nn.Module):
"""
A three hidden-layer generative neural network
"""
def __init__(self):
super(GenerativeNet, self).__init__()
n_features = 100
n_out = 784
self.hidden0 = nn.Sequential(
nn.Linear(n_features, 256),
nn.LeakyReLU(0.2)
)
self.hidden1 = nn.Sequential(
nn.Linear(256, 512),
nn.LeakyReLU(0.2)
)
self.hidden2 = nn.Sequential(
nn.Linear(512, 1024),
nn.LeakyReLU(0.2)
)
self.out = nn.Sequential(
nn.Linear(1024, n_out),
nn.Tanh()
)
def forward(self, x):
x = self.hidden0(x)
x = self.hidden1(x)
x = self.hidden2(x)
x = self.out(x)
return x
# Noise
def noise(size):
n = Variable(torch.randn(size, 100))
if torch.cuda.is_available(): return n.cuda()
return n
discriminator = DiscriminativeNet()
generator = GenerativeNet()
if torch.cuda.is_available():
discriminator.cuda()
generator.cuda() | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
Optimization | # Optimizers
d_optimizer = Adam(discriminator.parameters(), lr=0.0002)
g_optimizer = Adam(generator.parameters(), lr=0.0002)
# Loss function
loss = nn.BCELoss()
# Number of steps to apply to the discriminator
d_steps = 1 # In Goodfellow et al. 2014 this variable is set to 1
# Number of epochs
num_epochs = 200 | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
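Both training steps below rely on `nn.BCELoss` with target 1 for real samples and 0 for fakes; a hand-computed sketch of that per-sample loss in plain Python (no torch needed):

```python
import math

def bce(prediction, target):
    # per-element binary cross-entropy, as nn.BCELoss computes it before averaging
    return -(target * math.log(prediction) + (1 - target) * math.log(1 - prediction))

loss_real_good = bce(0.9, 1.0)  # confident, correct score on a real sample
loss_fake_good = bce(0.1, 0.0)  # confident, correct score on a fake sample
loss_fake_bad = bce(0.9, 0.0)   # the discriminator was fooled: much larger loss
```

The generator's update reuses the same loss but with target 1 on fake data, so it descends in the direction that makes the discriminator score fakes as real.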
Training | def real_data_target(size):
'''
Tensor containing ones, with shape = size
'''
data = Variable(torch.ones(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def fake_data_target(size):
'''
Tensor containing zeros, with shape = size
'''
data = Variable(torch.zeros(size, 1))
if torch.cuda.is_available(): return data.cuda()
return data
def train_discriminator(optimizer, real_data, fake_data):
# Reset gradients
optimizer.zero_grad()
# 1.1 Train on Real Data
prediction_real = discriminator(real_data)
# Calculate error and backpropagate
error_real = loss(prediction_real, real_data_target(real_data.size(0)))
error_real.backward()
# 1.2 Train on Fake Data
prediction_fake = discriminator(fake_data)
# Calculate error and backpropagate
error_fake = loss(prediction_fake, fake_data_target(real_data.size(0)))
error_fake.backward()
# 1.3 Update weights with gradients
optimizer.step()
# Return error
return error_real + error_fake, prediction_real, prediction_fake
def train_generator(optimizer, fake_data):
# 2. Train Generator
# Reset gradients
optimizer.zero_grad()
# Score the fake data with the discriminator
prediction = discriminator(fake_data)
# Calculate error and backpropagate
error = loss(prediction, real_data_target(prediction.size(0)))
error.backward()
# Update weights with gradients
optimizer.step()
# Return error
return error | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
Generate Samples for Testing | num_test_samples = 16
test_noise = noise(num_test_samples) | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
Start training | logger = Logger(model_name='VGAN', data_name='MNIST')
for epoch in range(num_epochs):
for n_batch, (real_batch,_) in enumerate(data_loader):
# 1. Train Discriminator
real_data = Variable(images_to_vectors(real_batch))
if torch.cuda.is_available(): real_data = real_data.cuda()
# Generate fake data
fake_data = generator(noise(real_data.size(0))).detach()
# Train D
d_error, d_pred_real, d_pred_fake = train_discriminator(d_optimizer,
real_data, fake_data)
# 2. Train Generator
# Generate fake data
fake_data = generator(noise(real_batch.size(0)))
# Train G
g_error = train_generator(g_optimizer, fake_data)
# Log error
logger.log(d_error, g_error, epoch, n_batch, num_batches)
# Display Progress
if (n_batch) % 100 == 0:
display.clear_output(True)
# Display Images
test_images = vectors_to_images(generator(test_noise)).data.cpu()
logger.log_images(test_images, num_test_samples, epoch, n_batch, num_batches);
# Display status Logs
logger.display_status(
epoch, num_epochs, n_batch, num_batches,
d_error, g_error, d_pred_real, d_pred_fake
)
# Model Checkpoints
logger.save_models(generator, discriminator, epoch) | _____no_output_____ | MIT | Online-Courses/gans/Vanilla GAN PyTorch.ipynb | gopala-kr/ds-notebooks |
Correlation MatrixThis notebook shows how to calculate a correlation matrix to explore if there are correlations any pair of columns in a datafram. The correlation coefficient can be Pearson's, Kendall's, or Spearmans. | # import some modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import seaborn as sns
# read in some data
data = pd.read_csv('corr.csv')
print(data.shape)
data.head(6)
# you can find the correlation coefficients between columns on a data frame using the .corr method
data.corr(method='pearson')
# alternatively, you can find the correlation coeff and the pvalue between individual columns using stats.pearsonr
stats.pearsonr(data.x1,data.x2)
# let's define a function that returns two dataframes, one with correlation coefficients and the other with p-values
def calc_corr_matrix(data, dims):
cmatrix = pd.DataFrame()
pmatrix = pd.DataFrame()
for row in dims:
for col in dims:
corrcoef ,pvalue = stats.pearsonr(data[row],data[col])
cmatrix.loc[row,col] = corrcoef
pmatrix.loc[row,col] = pvalue
for each in dims:
cmatrix.loc[each,each] = np.nan
pmatrix.loc[each,each] = np.nan
return cmatrix, pmatrix
# use the function
cmatrix, pmatrix = calc_corr_matrix(data,data.keys().values)
# look at results
cmatrix
pmatrix
# plot a heatmap to show correlation coefficients
plt.figure(figsize=(8,5))
ax = sns.heatmap(cmatrix, center = 0, cmap='coolwarm', annot = True, vmin=-1, vmax=1)
ax.tick_params(axis='both', labelsize=12)
plt.show() | _____no_output_____ | MIT | notebooks/demos/correlation_matrix.ipynb | prof-groff/evns-462 |
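The introduction mentions Pearson, Kendall, and Spearman, but the cells above only use Pearson. A small sketch with invented data shows how the rank-based methods differ on a monotonic but nonlinear relationship:

```python
import numpy as np
import pandas as pd

x = np.arange(1, 11, dtype=float)
df = pd.DataFrame({'x': x, 'y': x ** 3})  # monotonic, but far from linear

pearson = df['x'].corr(df['y'], method='pearson')
spearman = df['x'].corr(df['y'], method='spearman')  # rank-based: exactly 1
kendall = df['x'].corr(df['y'], method='kendall')    # rank-based: exactly 1
```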
Self-Driving Car Engineer Nanodegree Project: **Finding Lane Lines on the Road** ***In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below. Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right. In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/!/rubrics/322/view) for this project.---Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.**Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**--- **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection.
You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**--- Your output should look something like this (above) after detecting line segments using the helper functions below Your goal is to connect/average/extrapolate line segments to get output like this **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.** Import Packages | #importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Read in an Image | #reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray') | This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
| MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Ideas for Lane Detection Pipeline **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**`cv2.inRange()` for color selection `cv2.fillPoly()` for regions selection `cv2.line()` to draw lines on an image given endpoints `cv2.addWeighted()` to coadd / overlay two images `cv2.cvtColor()` to grayscale or change color `cv2.imwrite()` to output images to file `cv2.bitwise_and()` to apply a mask to an image**Check out the OpenCV documentation to learn about these and discover even more awesome functionality!** Helper Functions Below are some helper functions to help get you started. They should look familiar from the lesson! | import math
def grayscale(img):
"""Applies the Grayscale transform
This will return an image with only one color channel
but NOTE: to see the returned image as grayscale
(assuming your grayscaled image is called 'gray')
you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
"""Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
"""Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
"""
Applies an image mask.
Only keeps the region of the image defined by the polygon
formed from `vertices`. The rest of the image is set to black.
"""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[255, 0, 0], thickness=15):
"""
NOTE: this is the function you might want to use as a starting point once you want to
average/extrapolate the line segments you detect to map out the full
extent of the lane (going from the result shown in raw-lines-example.mp4
to that shown in P1_example.mp4).
Think about things like separating line segments by their
slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
line vs. the right line. Then, you can average the position of each of
the lines and extrapolate to the top and bottom of the lane.
This function draws `lines` with `color` and `thickness`.
Lines are drawn on the image inplace (mutates the image).
If you want to make the lines semi-transparent, think about combining
this function with the weighted_img() function below
"""
# initialize the accumulators
imshape = img.shape
left_slopes = []
left_x = 0
left_y = 0
right_slopes = []
right_x = 0
right_y = 0
# loop over the lines
for line in lines:
for x1,y1,x2,y2 in line:
slope = (y2 - y1)/(x2 - x1)
if slope < -.5 and slope > -1: #left
left_slopes.append(slope)
left_x += x1 + x2
left_y += y1 + y2
elif slope > .5 and slope < .8: #right
right_slopes.append(slope)
right_x += x1 + x2
right_y += y1 + y2
left_slope = sum(left_slopes) / len(left_slopes)
avg_left_x = int(left_x / (2 * len(left_slopes)))
avg_left_y = int(left_y / (2 * len(left_slopes)))
y_left_1 = imshape[0]
x_left_1 = int((y_left_1 - avg_left_y) / left_slope + avg_left_x)
y_left_2 = int((34/54) * imshape[0])
x_left_2 = int((y_left_2 - avg_left_y) / left_slope + avg_left_x)
right_slope = sum(right_slopes) / len(right_slopes)
avg_right_x = int(right_x / (2 * len(right_slopes)))
avg_right_y = int(right_y / (2 * len(right_slopes)))
y_right_1 = imshape[0]
x_right_1 = int((y_right_1 - avg_right_y) / right_slope + avg_right_x)
y_right_2 = int((34/54) * imshape[0])
x_right_2 = int((y_right_2 - avg_right_y) / right_slope + avg_right_x)
cv2.line(img, (x_left_1, y_left_1), (x_left_2, y_left_2), color, thickness)
cv2.line(img, (x_right_1, y_right_1), (x_right_2, y_right_2), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
"""
`img` should be the output of a Canny transform.
Returns an image with hough lines drawn.
"""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
"""
`img` is the output of the hough_lines(), An image with lines drawn on it.
Should be a blank image (all black) with lines drawn on it.
`initial_img` should be the image before any processing.
The result image is computed as follows:
initial_img * α + img * β + γ
NOTE: initial_img and img must be the same shape!
"""
return cv2.addWeighted(initial_img, α, img, β, γ) | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
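Among the suggested OpenCV functions, `cv2.inRange()` plus `cv2.bitwise_and()` implements color selection; the idea can be sketched in plain NumPy on a toy 2×2 image (the near-white thresholds are an assumption):

```python
import numpy as np

img = np.array([[[250, 250, 250], [40, 40, 40]],
                [[255, 255, 255], [120, 120, 120]]], dtype=np.uint8)

lower, upper = 200, 255
# equivalent of cv2.inRange: True where every channel falls inside [lower, upper]
mask = np.all((img >= lower) & (img <= upper), axis=-1)

# equivalent of cv2.bitwise_and with that mask: black out everything else
selected = img.copy()
selected[~mask] = 0
```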
Test ImagesBuild your pipeline to work on the images in the directory "test_images" **You should make sure your pipeline works well on these images before you try the videos.** | import os
os.listdir("test_images/") | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Build a Lane Finding Pipeline Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters. | def draw(image):
gray=grayscale(image)
blur=gaussian_blur(gray,5)
edge=canny(blur,50,150)
imshape=image.shape
vertices = np.array([[(0, imshape[0]),((43/96) * imshape[1], (33/54) * imshape[0]), ((54/96) * imshape[1], (33/54) * imshape[0]), (imshape[1],imshape[0])]], dtype=np.int32)
masked_edges = region_of_interest(edge, vertices)
rho = 1
theta = np.pi/180
threshold = 5
min_line_length = 3
max_line_gap = 1
lines = hough_lines(masked_edges, rho, theta, threshold, min_line_length, max_line_gap)
lines_overlayed = weighted_img(lines, image)
return lines_overlayed
for i in os.listdir("test_images/"):
image = mpimg.imread('test_images/'+i)
out=draw(image)
plt.imsave(i[:-4]+"out.jpg",out) | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
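The exercise asks you to tune the Canny and Hough parameters. One common heuristic (an assumption here, not something this notebook states) keeps the high Canny threshold at 2–3× the low one; a small helper for enumerating candidate pairs to try in `draw`:

```python
def canny_threshold_grid(lows=(30, 50, 70), ratios=(2, 3)):
    # (low, high) pairs honoring the recommended high:low ratio of 2-3
    return [(low, low * r) for low in lows for r in ratios]

pairs = canny_threshold_grid()  # includes the (50, 150) pair used above
```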
Test on VideosYou know what's cooler than drawing lanes over images? Drawing lanes over video!We can test our solution on two provided videos:`solidWhiteRight.mp4``solidYellowLeft.mp4`**Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.****If you get an error that looks like this:**```NeedDownloadError: Need ffmpeg exe. You can download it by calling: imageio.plugins.ffmpeg.download()```**Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.** | # Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
# TODO: put your pipeline here,
# you should return the final output (image where lines are drawn on lanes)
result=draw(image)
return result | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Let's try the one with the solid white lane on the right first ... | white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False) | t: 1%|▏ | 3/221 [00:00<00:09, 22.45it/s, now=None] | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice. | HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output)) | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Improve the draw_lines() function**At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".****Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.** Now for the one with the solid yellow lane on the left. This one's more tricky! | yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output)) | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Writeup and SubmissionIf you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file. Optional ChallengeTry your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output)) | _____no_output_____ | MIT | P1.ipynb | Abhishek-Balaji/CarND-LaneLines-P1 |
Read the data | X, y = load_svmlight_file('data/heart')
X = X.toarray().astype(np.float32)
y[y==-1] = 0
features = [f'f{i+1}' for i in range(X.shape[1])]
df = pd.DataFrame(X, columns=features)
df.head() | _____no_output_____ | BSD-3-Clause | examples/bias/Bias.ipynb | dabrze/similarity_forest |