| markdown (string, 0–37k chars) | code (string, 1–33.3k chars) | path (string, 8–215 chars) | repo_name (string, 6–77 chars) | license (15 classes) |
|---|---|---|---|---|
Examining the survival statistics, the majority of males younger than 10 survived the sinking, whereas most males aged 10 or older did not. Let's continue to build on our previous prediction: if a passenger was female, then we will predict they survived. If a passenger was male and younger than 10, then we will also predict they survived. Otherwise, we will predict they did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
|
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# Prediction conditions: females survive, as do males younger than 10
if passenger['Sex'] == "female":
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
|
udacity/working_titanic_survival_exploration/Titanic_Survival_Exploration.ipynb
|
harish-garg/Machine-Learning
|
mit
|
Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin over using the feature Sex alone. Now it's your turn: find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
|
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "SibSp == 1"])
|
udacity/working_titanic_survival_exploration/Titanic_Survival_Exploration.ipynb
|
harish-garg/Machine-Learning
|
mit
|
After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
|
def predictions_3(data):
""" Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
# Prediction conditions for the final model (targeting at least 80% accuracy)
if passenger['Sex'] == "female":
if passenger['Sex'] == "female" and passenger['Age'] > 40 and passenger['Age'] < 50 and passenger['Pclass'] == 3:
predictions.append(0)
elif passenger['Sex'] == "female" and passenger['SibSp'] > 2 and passenger['Pclass'] == 3:
predictions.append(0)
else:
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] < 10:
predictions.append(1)
elif passenger['Sex'] == "male" and passenger['Age'] > 20 and passenger['Age'] < 40 and passenger['Pclass'] == 1:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
|
udacity/working_titanic_survival_exploration/Titanic_Survival_Exploration.ipynb
|
harish-garg/Machine-Learning
|
mit
|
When iteratively removing weak features, the choice of model is important. We will discuss the different models available for regression and classification next week, but there are a few points relevant to feature selection worth covering here.
A linear model is useful and easily interpreted, and when it is used for feature selection, L1 regularization should be preferred. L1 regularization penalizes large coefficients based on their absolute values, which favors a sparse model in which weak features have coefficients close to zero. In contrast, L2 regularization penalizes large coefficients based on their squared values and tends to favor many small coefficients rather than a smaller set of larger ones.
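To make the contrast concrete, here is a minimal sketch (an addition to these notes, using an artificial dataset and arbitrary alpha values) comparing how many coefficients Lasso (L1) and Ridge (L2) drive towards zero:

```python
# Illustrative sketch: L1 (Lasso) vs L2 (Ridge) regularization and sparsity.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso, Ridge

# Toy data: 50 features, only 5 of which are actually informative
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)   # L1 penalty
ridge = Ridge(alpha=1.0).fit(X, y)   # L2 penalty

# Count coefficients that are (numerically) zero under each penalty
print("Lasso coefficients near zero:", np.sum(np.abs(lasso.coef_) < 1e-3))
print("Ridge coefficients near zero:", np.sum(np.abs(ridge.coef_) < 1e-3))
```

With an L1 penalty most of the uninformative features end up with coefficients at or near zero, which is the sparsity property exploited for feature selection.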
|
from sklearn import linear_model
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target
# Create the RFE object and rank each pixel
clf = linear_model.LogisticRegression(C=1.0, penalty='l1', tol=1e-6)
rfe = RFE(estimator=clf, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)
# Plot pixel ranking
plt.matshow(ranking)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
|
Wk10/Wk10-dimensionality-reduction-clustering.ipynb
|
briennakh/BIOF509
|
mit
|
Exercises
Apply feature selection to the Olivetti faces dataset, identifying the most important 25% of features.
Apply PCA and LDA to the digits dataset used above
Clustering
In clustering we attempt to group observations in such a way that observations assigned to the same cluster are more similar to each other than to observations in other clusters.
Although labels may be known, clustering is usually performed on unlabeled data as a step in exploratory data analysis.
Previously we looked at the Otsu thresholding method as a basic example of clustering. This is very closely related to k-means clustering. A variety of other methods are available with different characteristics.
The best method to use will vary depending on the particular problem.
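As a small added sketch of the Otsu/k-means relationship mentioned above (not part of the original notebook), running 2-cluster k-means on the raw pixel intensities of the same camera image gives a decision boundary comparable to the Otsu threshold:

```python
# Sketch: 2-cluster k-means on pixel intensities behaves much like Otsu thresholding.
import numpy as np
from sklearn.cluster import KMeans
from skimage.data import camera
from skimage.filters import threshold_otsu

image = camera()
pixels = image.reshape(-1, 1).astype(float)  # one intensity value per sample

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
centers = np.sort(km.cluster_centers_.ravel())
kmeans_threshold = centers.mean()            # 1-D k-means boundary: midpoint of the two centroids

print("Otsu threshold:   ", threshold_otsu(image))
print("k-means threshold:", kmeans_threshold)
```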
|
import matplotlib
import matplotlib.pyplot as plt
from skimage.data import camera
from skimage.filters import threshold_otsu
matplotlib.rcParams['font.size'] = 9
image = camera()
thresh = threshold_otsu(image)
binary = image > thresh
#fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(8, 2.5))
fig = plt.figure(figsize=(8, 2.5))
ax1 = plt.subplot(1, 3, 1, adjustable='box-forced')
ax2 = plt.subplot(1, 3, 2)
ax3 = plt.subplot(1, 3, 3, sharex=ax1, sharey=ax1, adjustable='box-forced')
ax1.imshow(image, cmap=plt.cm.gray)
ax1.set_title('Original')
ax1.axis('off')
ax2.hist(image)
ax2.set_title('Histogram')
ax2.axvline(thresh, color='r')
ax3.imshow(binary, cmap=plt.cm.gray)
ax3.set_title('Thresholded')
ax3.axis('off')
plt.show()
|
Wk10/Wk10-dimensionality-reduction-clustering.ipynb
|
briennakh/BIOF509
|
mit
|
Different clustering algorithms
Cluster comparison
The following algorithms are provided by scikit-learn
K-means
Affinity propagation
Mean Shift
Spectral clustering
Ward
Agglomerative Clustering
DBSCAN
Birch
K-means clustering divides samples between clusters by attempting to minimize the within-cluster sum of squares. It is an iterative algorithm repeatedly updating the position of the centroids (cluster centers), re-assigning samples to the best cluster and repeating until an optimal solution is reached. The clusters will depend on the starting position of the centroids so k-means is often run multiple times with random initialization and then the best solution chosen.
Affinity Propagation operates by passing messages between the samples, updating a record of the exemplar samples. These are samples that best represent other samples. The algorithm functions on an affinity matrix that can be either user-supplied or computed by the algorithm. Two matrices are maintained. One matrix records how well each sample represents other samples in the dataset; when the algorithm finishes, the highest-scoring samples are chosen to represent the clusters. The second matrix records which other samples best represent each sample, so that the entire dataset can be assigned to a cluster when the algorithm terminates.
Mean Shift iteratively updates candidate centroids to represent the clusters. The algorithm attempts to find areas of higher density.
Spectral clustering operates on an affinity matrix that can be user supplied or computed by the model. The algorithm functions by minimizing the value of the links cut in a graph created from the affinity matrix. By focusing on the relationships between samples this algorithm performs well for non-convex clusters.
Ward is a type of agglomerative clustering using minimization of the within-cluster sum of squares to join clusters together until the specified number of clusters remain.
Agglomerative clustering starts with every sample in its own cluster and then progressively joins clusters together, minimizing some performance measure. In addition to minimizing the variance, as seen with Ward, other options are 1) minimizing the average distance between samples in each cluster, and 2) minimizing the maximum distance between observations in each cluster.
DBSCAN is another algorithm that attempts to find regions of high density and then expands the clusters from there.
Birch is a tree-based clustering algorithm that assigns samples to nodes on a tree.
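As an illustrative sketch (added here, with arbitrarily chosen parameters), several of the algorithms listed above can be run on the same toy blobs to compare the clusterings they produce:

```python
# Sketch: run a few of the scikit-learn clustering algorithms listed above
# on the same toy blobs and compare the number of clusters each one finds.
from sklearn import cluster, datasets

X, true_labels = datasets.make_blobs(n_samples=200, n_features=2,
                                     centers=3, cluster_std=0.5, random_state=0)

algorithms = {
    "KMeans": cluster.KMeans(n_clusters=3, random_state=0),
    "Agglomerative (Ward)": cluster.AgglomerativeClustering(n_clusters=3, linkage="ward"),
    "DBSCAN": cluster.DBSCAN(eps=0.5),   # eps chosen by eye for this data
    "Birch": cluster.Birch(n_clusters=3),
}

for name, algo in algorithms.items():
    labels = algo.fit_predict(X)
    # DBSCAN labels noise points as -1, so exclude that label from the count
    print(name, "found", len(set(labels) - {-1}), "clusters")
```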
|
from sklearn import cluster, datasets
dataset, true_labels = datasets.make_blobs(n_samples=200, n_features=2, random_state=0,
centers=3, cluster_std=0.1)
fig, ax = plt.subplots(1,1)
ax.scatter(dataset[:,0], dataset[:,1], c=true_labels)
plt.show()
# Clustering algorithm can be used as a class
means = cluster.KMeans(n_clusters=2)
prediction = means.fit_predict(dataset)
|
Wk10/Wk10-dimensionality-reduction-clustering.ipynb
|
briennakh/BIOF509
|
mit
|
Since dimple only accepts the month in two-digit format, change this
|
df['month'] = df['month'].map(lambda x: '0' + str(x) if len(str(x)) < 2 else x)
df.month.unique()
|
p6-datviz/analysis.ipynb
|
napjon/ds-nd
|
mit
|
We want to have total number of operations and total minutes delay. So we're going to aggregate it per month.
|
agg_month_sum = df.groupby('month',as_index=False).sum()
not_ontime_flights = ['arr_cancelled','arr_diverted','arr_del15']
agg_month_sum['on_time_flights'] = agg_month_sum['arr_flights'] - agg_month_sum[not_ontime_flights].sum(axis=1)
delayed_columns = agg_month_sum.columns[agg_month_sum.columns.str.endswith('_delay')]
agg_month_sum[delayed_columns] = agg_month_sum[delayed_columns].applymap(lambda x: x/60)
agg_month_sum.to_csv('agg_month_sum_airlines_2015.csv_',index=False)
%matplotlib inline
df[df.month == 6].groupby('carrier_name').sum().T.plot()
df['delay_minutes_per_delayed_flight'] = (df.carrier_delay / df.carrier_ct)
date_df = df.groupby(['carrier_name','month'],as_index=False).delay_minutes_per_delayed_flight.mean()
date_df.to_csv('carr_delay_2015.csv_',index=False)
%matplotlib inline
# The next two lines are exploratory scratch work; the groupby below references
# a filename rather than a column and would raise a KeyError if run.
# df.groupby('carr_delay_2015.csv_')
# 1206011 / 19579
|
p6-datviz/analysis.ipynb
|
napjon/ds-nd
|
mit
|
At this point in time, the model is still empty
|
print('%i reactions in initial model' % len(cobra_model.reactions))
print('%i metabolites in initial model' % len(cobra_model.metabolites))
print('%i genes in initial model' % len(cobra_model.genes))
|
documentation_builder/building_model.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
We will add the reaction to the model, which will also add all associated metabolites and genes
|
cobra_model.add_reaction(reaction)
# Now there are things in the model
print('%i reaction in model' % len(cobra_model.reactions))
print('%i metabolites in model' % len(cobra_model.metabolites))
print('%i genes in model' % len(cobra_model.genes))
|
documentation_builder/building_model.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
We can iterate through the model objects to observe the contents
|
# Iterate through the the objects in the model
print("Reactions")
print("---------")
for x in cobra_model.reactions:
print("%s : %s" % (x.id, x.reaction))
print("Metabolites")
print("-----------")
for x in cobra_model.metabolites:
print('%s : %s' % (x.id, x.formula))
print("Genes")
print("-----")
for x in cobra_model.genes:
reactions_list_str = "{" + ", ".join((i.id for i in x.reactions)) + "}"
print("%s is associated with reactions: %s" % (x.id, reactions_list_str))
|
documentation_builder/building_model.ipynb
|
jerkos/cobrapy
|
lgpl-2.1
|
TODO: Examine all the reviews. For each word in a positive review, increase the count for that word in both your positive counter and the total words counter; likewise, for each word in a negative review, increase the count for that word in both your negative counter and the total words counter.
Note: Throughout these projects, you should use split(' ') to divide a piece of text (such as a review) into individual words. If you use split() instead, you'll get slightly different results than what the videos and solutions show.
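A quick illustration (added here) of why the two calls differ: split(' ') keeps empty strings wherever there are repeated spaces, while split() collapses runs of whitespace.

```python
# Why split(' ') and split() give slightly different word counts
text = "this  movie  was   great"
print(text.split(' '))  # ['this', '', 'movie', '', 'was', '', '', 'great'] -- empty strings kept
print(text.split())     # ['this', 'movie', 'was', 'great']
```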
|
# TODO: Loop over all the words in all the reviews and increment the counts in the appropriate counter objects
for i,r in enumerate(reviews):
words = r.split(' ')
positive = labels[i] == 'POSITIVE'
for w in words:
if positive:
positive_counts[w] += 1
else:
negative_counts[w] += 1
total_counts[w] += 1
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
marko911/deep-learning
|
mit
|
As you can see, common words like "the" appear very often in both positive and negative reviews. Instead of finding the most common words in positive or negative reviews, what you really want are the words found in positive reviews more often than in negative reviews, and vice versa. To accomplish this, you'll need to calculate the ratios of word usage between positive and negative reviews.
TODO: Check all the words you've seen and calculate the ratio of positive to negative uses and store that ratio in pos_neg_ratios.
Hint: the positive-to-negative ratio for a given word can be calculated with positive_counts[word] / float(negative_counts[word]+1). Notice the +1 in the denominator – that ensures we don't divide by zero for words that are only seen in positive reviews.
|
# Create Counter object to store positive/negative ratios
pos_neg_ratios = Counter()
# TODO: Calculate the ratios of positive and negative uses of the most common words
# Consider words to be "common" if they've been used at least 100 times
for w in list(total_counts):
pos_neg_ratios[w] = positive_counts[w] / float(negative_counts[w]+1)
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
marko911/deep-learning
|
mit
|
Looking closely at the values you just calculated, we see the following:
Words that you would expect to see more often in positive reviews – like "amazing" – have a ratio greater than 1. The more skewed a word is toward positive, the farther from 1 its positive-to-negative ratio will be.
Words that you would expect to see more often in negative reviews – like "terrible" – have positive values that are less than 1. The more skewed a word is toward negative, the closer to zero its positive-to-negative ratio will be.
Neutral words, which don't really convey any sentiment because you would expect to see them in all sorts of reviews – like "the" – have values very close to 1. A perfectly neutral word – one that was used in exactly the same number of positive reviews as negative reviews – would be almost exactly 1. The +1 we suggested you add to the denominator slightly biases words toward negative, but it won't matter because it will be a tiny bias and later we'll be ignoring words that are too close to neutral anyway.
Ok, the ratios tell us which words are used more often in positive or negative reviews, but the specific values we've calculated are a bit difficult to work with. A very positive word like "amazing" has a value above 4, whereas a very negative word like "terrible" has a value around 0.18. Those values aren't easy to compare for a couple of reasons:
Right now, 1 is considered neutral, but the absolute value of the positive-to-negative ratios of very positive words is larger than the absolute value of the ratios for the very negative words. So there is no way to directly compare two numbers and see if one word conveys the same magnitude of positive sentiment as another word conveys negative sentiment. Instead, we should center all the values around neutral, so that a word's distance from neutral indicates how much sentiment (positive or negative) that word conveys.
When comparing absolute values it's easier to do that around zero than one.
To fix these issues, we'll convert all of our ratios to new values using logarithms.
TODO: Go through all the ratios you calculated and convert their values using the following formulas:
For any positive words, convert the ratio using np.log(ratio)
For any negative words, convert the ratio using -np.log(1/(ratio + 0.01))
That second equation may look strange, but what it's doing is dividing one by a very small number, which will produce a larger positive number. Then, it takes the log of that, which produces numbers similar to the ones for the positive words. Finally, we negate the values by adding that minus sign up front. In the end, extremely positive and extremely negative words will have positive-to-negative ratios with similar magnitudes but opposite signs.
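As a quick worked check (using the approximate example ratios quoted above, not values computed from the data), the two formulas give results of similar magnitude but opposite sign:

```python
# Worked example of the log conversion using the approximate ratios quoted above
import numpy as np

ratio_amazing = 4.0    # example of a very positive word
ratio_terrible = 0.18  # example of a very negative word

print(np.log(ratio_amazing))                  # ~ +1.39
print(-np.log(1 / (ratio_terrible + 0.01)))   # ~ -1.66
```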
|
# TODO: Convert ratios to logs
for w,ratio in pos_neg_ratios.items():
if ratio <1:
pos_neg_ratios[w] = -np.log(1/(ratio+0.01))
elif ratio >1:
pos_neg_ratios[w] = np.log(ratio)
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
marko911/deep-learning
|
mit
|
TODO: Complete the implementation of update_input_layer. It should count
how many times each word is used in the given review, and then store
those counts at the appropriate indices inside layer_0.
|
def update_input_layer(review):
""" Modify the global layer_0 to represent the vector form of review.
The element at a given index of layer_0 should represent
how many times the given word occurs in the review.
Args:
review(string) - the string of the review
Returns:
None
"""
global layer_0
# clear out previous state by resetting the layer to be all 0s
layer_0 *= 0
# TODO: count how many times each word is used in the given review and store the results in layer_0
for w in review.split(' '):
index = word2index[w]
layer_0[0][index] += 1
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
marko911/deep-learning
|
mit
|
End of Project 2.
Watch the next video to see Andrew's solution, then continue on to the next lesson.
Project 3: Building a Neural Network<a id='project_3'></a>
TODO: We've included the framework of a class called SentimentNetwork. Implement all of the items marked TODO in the code. These include doing the following:
- Create a basic neural network much like the networks you've seen in earlier lessons and in Project 1, with an input layer, a hidden layer, and an output layer.
- Do not add a non-linearity in the hidden layer. That is, do not use an activation function when calculating the hidden layer outputs.
- Re-use the code from earlier in this notebook to create the training data (see TODOs in the code)
- Implement the pre_process_data function to create the vocabulary for our training data generating functions
- Ensure train trains over the entire corpus
Where to Get Help if You Need it
Re-watch earlier Udacity lectures
Chapters 3-5 - Grokking Deep Learning - (Check inside your classroom for a discount code)
|
import time
import sys
import numpy as np
from collections import Counter
# Encapsulate our neural network in a class
class SentimentNetwork:
def __init__(self, reviews, labels, hidden_nodes = 10, learning_rate = 0.1):
"""Create a SentimenNetwork with the given settings
Args:
reviews(list) - List of reviews used for training
labels(list) - List of POSITIVE/NEGATIVE labels associated with the given reviews
hidden_nodes(int) - Number of nodes to create in the hidden layer
learning_rate(float) - Learning rate to use while training
"""
# Assign a seed to our random number generator to ensure we get
# reproducable results during development
np.random.seed(1)
# process the reviews and their associated labels so that everything
# is ready for training
self.pre_process_data(reviews, labels)
# Build the network to have the number of hidden nodes and the learning rate that
# were passed into this initializer. Make the same number of input nodes as
# there are vocabulary words and create a single output node.
self.init_network(len(self.review_vocab),hidden_nodes, 1, learning_rate)
def pre_process_data(self, reviews, labels):
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i,r in enumerate(reviews):
words = r.split(' ')
positive = labels[i] == 'POSITIVE'
for w in words:
if positive:
positive_counts[w] += 1
else:
negative_counts[w] += 1
total_counts[w] += 1
review_vocab = set(total_counts)
# TODO: populate review_vocab with all of the words in the given reviews
# Remember to split reviews into individual words
# using "split(' ')" instead of "split()".
# Convert the vocabulary set to a list so we can access words via indices
self.review_vocab = list(review_vocab)
label_vocab = set(labels)
# TODO: populate label_vocab with all of the words in the given labels.
# There is no need to split the labels because each one is a single word.
# Convert the label vocabulary set to a list so we can access labels via indices
self.label_vocab = list(label_vocab)
# Store the sizes of the review and label vocabularies.
self.review_vocab_size = len(self.review_vocab)
self.label_vocab_size = len(self.label_vocab)
# Create a dictionary of words in the vocabulary mapped to index positions
self.word2index = {}
for i in range(len(self.review_vocab)):
self.word2index[self.review_vocab[i]] = i
# TODO: populate self.word2index with indices for all the words in self.review_vocab
# like you saw earlier in the notebook
# Create a dictionary of labels mapped to index positions
self.label2index = {}
for i in range(len(self.label_vocab)):
self.label2index[self.label_vocab[i]]= i
# TODO: do the same thing you did for self.word2index and self.review_vocab,
# but for self.label2index and self.label_vocab instead
def init_network(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Store the number of nodes in input, hidden, and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Store the learning rate
self.learning_rate = learning_rate
# Initialize weights
# TODO: initialize self.weights_0_1 as a matrix of zeros. These are the weights between
# the input layer and the hidden layer.
self.weights_0_1 = np.zeros((input_nodes,hidden_nodes))
# TODO: initialize self.weights_1_2 as a matrix of random values.
# These are the weights between the hidden layer and the output layer.
self.weights_1_2 = np.random.normal(scale=1 / input_nodes ** .5,
size=hidden_nodes)
# TODO: Create the input layer, a two-dimensional matrix with shape
# 1 x input_nodes, with all values initialized to zero
self.layer_0 = np.zeros((1,input_nodes))
def update_input_layer(self,review):
# TODO: You can copy most of the code you wrote for update_input_layer
# earlier in this notebook.
#
# However, MAKE SURE YOU CHANGE ALL VARIABLES TO REFERENCE
# THE VERSIONS STORED IN THIS OBJECT, NOT THE GLOBAL OBJECTS.
# For example, replace "layer_0 *= 0" with "self.layer_0 *= 0"
self.layer_0 *=0
for w in review.split(' '):
index = self.word2index[w]
self.layer_0[0][index] += 1
pass
def get_target_for_label(self,label):
# TODO: Copy the code you wrote for get_target_for_label
# earlier in this notebook.
if label == 'POSITIVE':
return 1
else:
return 0
pass
def sigmoid(self,x):
# TODO: Return the result of calculating the sigmoid activation function
# shown in the lectures
return 1 / (1 + np.exp(-x))
pass
def sigmoid_output_2_derivative(self,output):
# TODO: Return the derivative of the sigmoid activation function,
# where "output" is the original output from the sigmoid fucntion
return output*(1-output)
pass
def train(self, training_reviews, training_labels):
# make sure we have a matching number of reviews and labels
assert(len(training_reviews) == len(training_labels))
# Keep track of correct predictions to display accuracy during training
correct_so_far = 0
# Remember when we started for printing time statistics
start = time.time()
# loop through all the given reviews and run a forward and backward pass,
# updating weights for every item
for i in range(len(training_reviews)):
# TODO: Get the next review and its correct label
review, label = (training_reviews[i], training_labels[i])
# TODO: Implement the forward pass through the network.
# That means use the given review to update the input layer,
# then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Do not use an activation function for the hidden layer,
# but use the sigmoid activation function for the output layer.
self.update_input_layer(review)
input_hidden = np.dot(self.layer_0[0], self.weights_0_1)
hidden_output = np.dot(input_hidden, self.weights_1_2)
output = self.sigmoid(hidden_output)
# TODO: Implement the back propagation pass here.
# That means calculate the error for the forward pass's prediction
# and update the weights in the network according to their
# contributions toward the error, as calculated via the
# gradient descent and back propagation algorithms you
# learned in class.
error = self.get_target_for_label(label) - output
output_error_term = error*self.sigmoid_output_2_derivative(output)
# No activation function on the hidden layer, so the hidden error term is simply
# the output error term propagated back through the hidden-to-output weights.
hidden_error_term = output_error_term*self.weights_1_2
# Weight updates: hidden->output uses the hidden activations, and input->hidden
# uses the outer product of the input layer with the hidden error term.
self.weights_1_2 += self.learning_rate*output_error_term*input_hidden
self.weights_0_1 += self.learning_rate*np.outer(self.layer_0[0], hidden_error_term)
# TODO: Keep track of correct predictions. To determine if the prediction was
# correct, check that the absolute value of the output error
# is less than 0.5. If so, add one to the correct_so_far count.
if np.absolute(error) < 0.5:
correct_so_far += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the training process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(training_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct_so_far) + " #Trained:" + str(i+1) \
+ " Training Accuracy:" + str(correct_so_far * 100 / float(i+1))[:4] + "%")
if(i % 2500 == 0):
print("")
def test(self, testing_reviews, testing_labels):
"""
Attempts to predict the labels for the given testing_reviews,
and uses the test_labels to calculate the accuracy of those predictions.
"""
# keep track of how many correct predictions we make
correct = 0
# we'll time how many predictions per second we make
start = time.time()
# Loop through each of the given reviews and call run to predict
# its label.
for i in range(len(testing_reviews)):
pred = self.run(testing_reviews[i])
if(pred == testing_labels[i]):
correct += 1
# For debug purposes, print out our prediction accuracy and speed
# throughout the prediction process.
elapsed_time = float(time.time() - start)
reviews_per_second = i / elapsed_time if elapsed_time > 0 else 0
sys.stdout.write("\rProgress:" + str(100 * i/float(len(testing_reviews)))[:4] \
+ "% Speed(reviews/sec):" + str(reviews_per_second)[0:5] \
+ " #Correct:" + str(correct) + " #Tested:" + str(i+1) \
+ " Testing Accuracy:" + str(correct * 100 / float(i+1))[:4] + "%")
def run(self, review):
"""
Returns a POSITIVE or NEGATIVE prediction for the given review.
"""
# TODO: Run a forward pass through the network, like you did in the
# "train" function. That means use the given review to
# update the input layer, then calculate values for the hidden layer,
# and finally calculate the output layer.
#
# Note: The review passed into this function for prediction
# might come from anywhere, so you should convert it
# to lower case prior to using it.
self.update_input_layer(review.lower())
input_hidden = np.dot(self.layer_0[0], self.weights_0_1)
hidden_output = np.dot(input_hidden, self.weights_1_2)
output = self.sigmoid(hidden_output)
if output >= 0.5:
return 'POSITIVE'
else:
return 'NEGATIVE'
# TODO: The output layer should now contain a prediction.
# Return `POSITIVE` for predictions greater-than-or-equal-to `0.5`,
# and `NEGATIVE` otherwise.
pass
|
sentiment-network/Sentiment_Classification_Projects.ipynb
|
marko911/deep-learning
|
mit
|
Now do some waveform creation:
|
N = 40
x = numpy.array(range(40))
data = 0.99*numpy.exp(-((x-20)/4)**2)
plt.plot(data,"o-")
handle.write(":DATA:POINTS VOLATILE,{}".format(N))
handle.ask(":DATA:POINTS? VOLATILE")
|
Rigol Arb Gen Waveform Creation.ipynb
|
DawesLab/Instruments
|
gpl-3.0
|
Now to reverse engineer the VCA response:
|
def VfromI(Intensity):
"""Implement the inverted response function. See data fit in google drive AOM folder."""
V = (.0039757327 + (.0039757327 ** 2 + 4 *.0078826605 * Intensity) ** (1/2))/(2*.0078826605)
return V
voltages = VfromI(data)
floats = voltages/voltages.max() # values scaled to 0-1.0
floats
handle.write(":DATA:POINTS VOLATILE,40")
numPoints = int(handle.ask(":DATA:POINTS? VOLATILE"))
numPoints
for i in range(len(floats)):
command_string = ":DATA:VAL VOLATILE," + str(i+1) + "," + str(int(0.9*16383*floats[i]))
check_string = ":DATA:VAL? VOLATILE," + str(i+1)
#print(command_string)
handle.write(command_string)
#print(handle.ask(check_string))
# Check what the instrument memory holds
# For some reason, it can only pull 38 values.
wave = []
# add 1 to numPoints to account for range function
for i in range(1,numPoints+1):
#sleep(0.2)
wave.append( handle.ask(":DATA:VALUE? VOLATILE,{}".format(i)) )
print(wave)
print(len(wave))
plt.plot(wave,"o-")
handle.close()
|
Rigol Arb Gen Waveform Creation.ipynb
|
DawesLab/Instruments
|
gpl-3.0
|
Running TPOTClassifier
In the interest of time, we'll only use a 500,000 row sample of this file. 500,000 rows is more than enough for this example.
|
NROWS = 500_000
X_train, X_test, y_train, y_test = prepare_higgs(nrows=NROWS)
|
tutorials/Higgs_Boson.ipynb
|
weixuanfu/tpot
|
lgpl-3.0
|
Note that for cuML to work correctly, you must set n_jobs=1 (the default setting).
|
%%time
# cuML TPOT setup
SEED = 12
GENERATIONS = 10
POP_SIZE = 10
CV = 2
tpot = TPOTClassifier(
generations=GENERATIONS,
population_size=POP_SIZE,
random_state=SEED,
config_dict="TPOT cuML",
n_jobs=1, # cuML requires n_jobs=1, the default
cv=CV,
verbosity=2,
)
tpot.fit(X_train, y_train)
%%time
preds = tpot.predict(X_test)
print(accuracy_score(y_test, preds))
%%time
# Default TPOT setup with same params
tpot = TPOTClassifier(
generations=GENERATIONS,
population_size=POP_SIZE,
random_state=SEED,
n_jobs=-1,
cv=CV,
verbosity=2,
)
tpot.fit(X_train, y_train)
%%time
preds = tpot.predict(X_test)
print(accuracy_score(y_test, preds))
|
tutorials/Higgs_Boson.ipynb
|
weixuanfu/tpot
|
lgpl-3.0
|
Loading the data set
Data in GUODA is stored on a clustered file system called HDFS. The Jupyter notebooks are all configured to read and write to HDFS automatically so all file paths are in the HDFS system.
You can read more about working with files and how to see what data sets are available on the Jupyter service wiki.
This line will load the contents of the file that contains a 100,000 record sub-set of iDigBio into a Spark data frame. Then we can look at how many records are in the data frame to confirm that we are working with the 100k subset.
|
df = sqlContext.read.load("/guoda/data/idigbio-20190612T171757.parquet")
df.count()
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Examining the data
Now that the data is in memory, let's look at some of the methods available to examine it before we move on to summarizing it. This will let you see how data is represented both in Spark and Python, as well as what kind of data is available in the iDigBio data frames.
Data frame structure
First we can look at the columns in the data frame. This is all of iDigBio so there are a lot of them.
Also printed by Python is the data type for each column and if a column contains a nested structure (like the "data" structure which has the raw data originally sent to iDigBio) then it is indented.
|
df.printSchema()
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Next we can look at the first row of data. The (1) after head tells Python how many rows to print. Since this is all iDigBio data, the rows are pretty big so we'll only show one.
|
df.head(1)
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Summarizing the data
That's certainly more data than we need to make the graph. Since there is one row in the data frame for each specimen record, what we need to do is group the records by the year they were collected and then count the number of records in each group and associate that with the year. The data frame we want to have as a result should have two columns, one for year and one for the count of the records collected in that year.
This is a common chain of operations, often referred to as select, group by, and count, which comes from the SQL syntax for doing this operation.
Working with the year is complicated by the fact that iDigBio has a datecollected field and not a yearcollected field. While we are often provided a year in the raw data, we assemble and convert all the date information from the Darwin Core fields into a date-type object and store that as datecollected. Because this object is a date type we can sort it and search for ranges. (Consider what would happen if we tried that with raw data strings like "2004-01-14" and "March 15, 2015".)
We need to extract the year part of datecollected and convert it to a number so we can sort on it.
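A small illustration (added here, not from the original notebook) of why mixed-format date strings are awkward compared with real date objects:

```python
# Mixed-format date strings do not sort or range-search meaningfully,
# while real date objects do.
from datetime import date

raw_strings = ["2004-01-14", "March 15, 2015", "April 2, 1999"]
print(sorted(raw_strings))   # lexicographic order: the 1999 date does not come first

real_dates = [date(2004, 1, 14), date(2015, 3, 15), date(1999, 4, 2)]
print(sorted(real_dates))            # true chronological order
print([d.year for d in real_dates])  # the year part is easy to extract
```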
|
# The outer "(" and ")" surround the chain of Python method calls to allow them to
# span lines. This is a common convention and makes the data processing pipeline
# easy to read and modify.
#
# The persist() function tells Spark to store the data frame in memory so it can be
# accessed repeatedly without having to be reloaded.
year_summary = (df
.groupBy(year("datecollected").cast("integer").alias("yearcollected"))
.count()
.orderBy("yearcollected")
.persist()
)
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Let's take a look at this new data frame using some of the commands from above:
|
year_summary.count()
year_summary.printSchema()
year_summary.head(10)
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Now that our data is both much smaller and mostly numeric, we can use the describe() method to quickly make summary statistics. This method returns a data frame so we have to use show() to actually print the whole contents of the data frame.
|
year_summary.describe().show()
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Spark data frames, Pandas data frames, and filtering
The term "data frame" is a concept for how data is arranged. Different programming languages and even libraries in a single programming language have different implimentations of this idea.
We have been working with a Spark data frame. Now we want to do some graphing and the Python graphing libraries know how to work with a Pandas data frame. Fortunately this is such a common conversion that there is a built-in method to do it.
One thing to be aware of is that Pandas data frames are not stored on our computation cluster the way Spark data frames are. This means they need to be small and you should not do too much computation on them. Since our year_summary data frame is only 2 columns and about 220 rows, this isn't a problem.
While converting to a Pandas data frame, we will also reduce the years to the range 1817 - 2017. From the output of describe() we could see that there were some years that didn't make sense.
|
pandas_year_summary = (year_summary
.filter(year_summary.yearcollected >= 1817)
.filter(year_summary.yearcollected <= 2017)
.orderBy("yearcollected")
.toPandas()
)
pandas_year_summary.head()
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
(Notice that the display of the first rows looks different from when we ran head() on the Spark data frame? That's because we're looking at the display generated by the Pandas library instead of the Spark library.)
Making a graph
The number of specimens collected in a year is discrete data, so a bar graph is one appropriate way to display it.
|
plt.bar(pandas_year_summary["yearcollected"],
pandas_year_summary["count"],
edgecolor='none', width=1.0
)
plt.title("Specimens in iDigBio by Collection Year and Continent")
plt.ylabel("Number of Specimen Records")
plt.xlabel("Year")
|
01_iDigBio_Specimens_Collected_Over_Time.ipynb
|
bio-guoda/guoda-examples
|
mit
|
Vt variation per cell of 0.0004%
|
%matplotlib inline
import math
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pylab as plb
def matrix(m_length,m_width):
"Return matrix with no homogeneus resitivity"
m = np.zeros((m_length,m_width))
return m
Material_length=963e-6
scmos_process = 3.e-6
wafer_thickness = 0.05e-6
Vt_n_base = 800e-3
Vt_p_base = 900e-3
size_m = Material_length/scmos_process
delta_Vt=0.32e-3
plt.style.use('ggplot')
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Ideal Vt matrix Vt_n_Ideal
|
Vt_n_Ideal = matrix(int(size_m),int(size_m))
for i in range(0,int(math.sqrt(Vt_n_Ideal.size))):
for j in range(0,int(math.sqrt(Vt_n_Ideal.size))):
Vt_n_Ideal[i][j]= Vt_n_base
plt.matshow(Vt_n_Ideal)
plt.show()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Ideal Vt matrix Vt_p_Ideal
|
Vt_p_Ideal = matrix(int(size_m),int(size_m))
for i in range(0,int(math.sqrt(Vt_p_Ideal.size))):
for j in range(0,int(math.sqrt(Vt_p_Ideal.size))):
Vt_p_Ideal[i][j]= Vt_p_base
plt.matshow(Vt_p_Ideal)
plt.show()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
OPTION 1: Create the material matrices with Vt_n and Vt_p variations starting from the corner
|
Vt_n = matrix(int(size_m),int(size_m))
def set_corner_Vt_n():
for i in range(0,int(math.sqrt(Vt_n.size))):
for j in range(0,int(math.sqrt(Vt_n.size))):
Vt_n[i][j]= Vt_n_base+(i+j)*delta_Vt
set_corner_Vt_n()
plt.matshow(Vt_n)
plt.show()
Vt_p = matrix(int(size_m),int(size_m))
def set_corner_Vt_p():
for i in range(0,int(math.sqrt(Vt_p.size))):
for j in range(0,int(math.sqrt(Vt_p.size))):
Vt_p[i][j]= Vt_p_base+(i+j)*delta_Vt
set_corner_Vt_p()
plt.matshow(Vt_p)
plt.show()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
OPTION 2: Create the material matrices with Vt_n and Vt_p variations from the center outward
|
Vt_n = matrix(int(size_m),int(size_m))
def centroid_Vt_n(center,i,j):
"Return a value of Vt_n for a single shape"
difJ=abs(center-j)
difI=abs(center-i)
xy=0
if difJ > difI:
xy=difJ
else:
xy=difI
Vtn= Vt_n_base+(xy)*delta_Vt
return Vtn
def set_centroid_Vt_n():
for i in range(0,int(math.sqrt(Vt_n.size))):
for j in range(0,int(math.sqrt(Vt_n.size))):
Vt_n[i][j]= centroid_Vt_n((int(math.sqrt(Vt_n.size))-1)/2,i,j)
set_centroid_Vt_n()
plt.matshow(Vt_n)
plt.show()
Vt_p = matrix(int(size_m),int(size_m))
def centroid_Vt_p(center,i,j):
"Return a value of Vt_p for a single shape"
difJ=abs(center-j)
difI=abs(center-i)
xy=0
if difJ > difI:
xy=difJ
else:
xy=difI
Vtp= Vt_p_base+(xy)*delta_Vt
return Vtp
def set_centroid_Vt_p():
for i in range(0,int(math.sqrt(Vt_p.size))):
for j in range(0,int(math.sqrt(Vt_p.size))):
Vt_p[i][j]= centroid_Vt_p((int(math.sqrt(Vt_p.size))-1)/2,i,j)
set_centroid_Vt_p()
plt.matshow(Vt_p)
plt.show()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Enter the LxW dimensions of the transistors
|
display(Image(url='images/WLCMOS.png'))
W_base=2
WTA=60
WTB=60
WTC=60
WTD=60
lenght_active_si_transistor = 6
L=2
display(Image(url='images/Transistor.png'))
print("W_base de 2 unidades")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
OPTION 1: Transistor layout optimized for area with a common centroid on the wafer; two of these transistors are used in the current-mirror simulation
|
paint_matrix = matrix(int(size_m),int(size_m))
common_source=9
def opcion1():
M_center=int((int(math.sqrt(paint_matrix.size))-1)/2)
TA = matrix(W_base,int((WTA/W_base+1)*(lenght_active_si_transistor/2)))
for i in range(M_center-2-TA.shape[0],M_center-2):
for j in range(M_center-2-TA.shape[1],M_center-2):
paint_matrix[i][j]=1
TB = matrix(W_base,int((WTB/W_base+1)*(lenght_active_si_transistor/2)))
for i in range(M_center-2-TB.shape[0],M_center-2):
for j in range(M_center+3,TB.shape[1]+M_center+3):
paint_matrix[i][j]=2
TC = matrix(W_base,int((WTC/W_base+1)*(lenght_active_si_transistor/2)))
for i in range(M_center+3,TC.shape[0]+M_center+3):
for j in range(M_center-2-TC.shape[1],M_center-2):
paint_matrix[i][j]=3
TD = matrix(W_base,int((WTD/W_base+1)*(lenght_active_si_transistor/2)))
for i in range(M_center+3,TD.shape[0]+M_center+3):
for j in range(M_center+3,TD.shape[1]+M_center+3):
paint_matrix[i][j]=4
opcion1()
plt.matshow(paint_matrix)
plt.show()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Calculation of the average Vt value for each transistor
|
def prom_Vt_n_for_transistor(transistor_num):
"Return a prom value of Vt_n for a single transistor in the wafer"
Vtn_sum=0
Vtn_found=0
for i in range(0,int(math.sqrt(paint_matrix.size))):
for j in range(0,int(math.sqrt(paint_matrix.size))):
if paint_matrix[i][j] == transistor_num:
Vtn_sum += Vt_n[i][j]
Vtn_found += 1
if paint_matrix[i][j] == common_source:
Vtn_sum += Vt_n[i][j]
Vtn_found += 1
Vtn_prom=Vtn_sum/(Vtn_found)
return Vtn_prom
def prom_Vt_p_for_transistor(transistor_num):
"Return a prom value of Vt_n for a single transistor in the wafer"
Vtp_sum=0
Vtp_found=0
for i in range(0,int(math.sqrt(paint_matrix.size))):
for j in range(0,int(math.sqrt(paint_matrix.size))):
if paint_matrix[i][j] == transistor_num:
Vtp_sum += Vt_p[i][j]
Vtp_found += 1
if paint_matrix[i][j] == common_source:
Vtp_sum += Vt_p[i][j]
Vtp_found += 1
Vtp_prom=Vtp_sum/(Vtp_found)
return Vtp_prom
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
With Vt variation across the wafer starting from the corner
|
print("Para la opcion diseno 1: Transistores ahorro de espacio con centroide: \n")
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_corner_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(1)
Vtn_T2=prom_Vt_n_for_transistor(2)
Vtn_T3=prom_Vt_n_for_transistor(3)
Vtn_T4=prom_Vt_n_for_transistor(4)
print("Vt_n Transistor 1: "+str(Vtn_T1))
print("Vt_n Transistor 2: "+str(Vtn_T2))
print("Vt_n Transistor 3: "+str(Vtn_T3))
print("Vt_n Transistor 4: "+str(Vtn_T4))
set_corner_Vt_p()
Vtp_T1=prom_Vt_p_for_transistor(1)
Vtp_T2=prom_Vt_p_for_transistor(2)
Vtp_T3=prom_Vt_p_for_transistor(3)
Vtp_T4=prom_Vt_p_for_transistor(4)
print("Vt_p Transistor 1: "+str(Vtp_T1))
print("Vt_p Transistor 2: "+str(Vtp_T2))
print("Vt_p Transistor 3: "+str(Vtp_T3))
print("Vt_p Transistor 4: "+str(Vtp_T4))
print("Para la opcion diseno 2: Transistores con source comun en centroide: \n")
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_corner_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(4)
Vtn_T2=prom_Vt_n_for_transistor(5)
print("Vt_n Transistor 1: "+str(Vtn_T1))
print("Vt_n Transistor 2: "+str(Vtn_T2))
set_corner_Vt_p()
Vtp_T1=prom_Vt_p_for_transistor(4)
Vtp_T2=prom_Vt_p_for_transistor(5)
print("Vt_p Transistor 1: "+str(Vtp_T1))
print("Vt_p Transistor 2: "+str(Vtp_T2))
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
With Vt variation from the center of the wafer outward
|
print("Para la opcion diseno 1: Transistores ahorro de espacio con centroide: \n")
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_centroid_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(1)
Vtn_T2=prom_Vt_n_for_transistor(2)
Vtn_T3=prom_Vt_n_for_transistor(3)
Vtn_T4=prom_Vt_n_for_transistor(4)
print("Vt_n Transistor 1: "+str(Vtn_T1))
print("Vt_n Transistor 2: "+str(Vtn_T2))
print("Vt_n Transistor 3: "+str(Vtn_T3))
print("Vt_n Transistor 4: "+str(Vtn_T4))
set_centroid_Vt_p()
Vtp_T1=prom_Vt_p_for_transistor(1)
Vtp_T2=prom_Vt_p_for_transistor(2)
Vtp_T3=prom_Vt_p_for_transistor(3)
Vtp_T4=prom_Vt_p_for_transistor(4)
print("Vt_p Transistor 1: "+str(Vtp_T1))
print("Vt_p Transistor 2: "+str(Vtp_T2))
print("Vt_p Transistor 3: "+str(Vtp_T3))
print("Vt_p Transistor 4: "+str(Vtp_T4))
print("Para la opcion diseno 2: Transistores con source comun en centroide: \n")
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_centroid_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(4)
Vtn_T2=prom_Vt_n_for_transistor(5)
print("Vt_n Transistor 1: "+str(Vtn_T1))
print("Vt_n Transistor 2: "+str(Vtn_T2))
set_centroid_Vt_p()
Vtp_T1=prom_Vt_p_for_transistor(4)
Vtp_T2=prom_Vt_p_for_transistor(5)
print("Vt_p Transistor 1: "+str(Vtp_T1))
print("Vt_p Transistor 2: "+str(Vtp_T2))
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Function to edit the simulation files for the NMOS mirror ('espejoNmosPythonFile.cir') and the PMOS mirror ('espejoPmosPythonFile.cir')
|
import sys
import fileinput
def modificar_cir_Espejo_NMOS(W,L,Vt_T1,Vt_T2):
text="* Simulación Circuito Espejo de Corriente con Ncmos, valores reales de Kp_n y Vt"+"\n"+ \
"* Universidad Nacional de Colombia 2016"+"\n"+ \
"* CMOS Analógico"+"\n"+ \
"* Grupo Jorge Garzón, Esteban Iafrancesco A"+"\n"+ \
"\n"+\
"VDD VDD 0 DC 10 AC 0"+"\n"+\
"V2 VR 0 DC 10 AC 0"+"\n"+\
"VRD RDN VR DC 0 AC 0"+"\n"+\
"RD RDN DRAIN 1000"+"\n"+\
"RP VDD GATE 2000"+"\n"+\
"M1 DRAIN GATE 0 0 nmosideal W="+str(W)+" L="+str(L)+"\n"+\
"M2 GATE GATE 0 0 nmosideal W="+str(W)+" L="+str(L)+"\n"+\
"\n"+\
"VRD2 RDN2 VR DC 0 AC 0"+"\n"+\
"RD2 RDN2 DRAIN2 1000"+"\n"+\
"RP2 VDD GATE2 2000"+"\n"+\
"M3 DRAIN2 GATE2 0 0 nmos1 W="+str(W)+" L="+str(L)+"\n"+\
"M4 GATE2 GATE2 0 0 nmos2 W="+str(W)+" L="+str(L)+"\n"+\
"\n"+\
".model nmosideal nmos LEVEL=1 Vto=0.8 KP=120u LAMBDA=0.01 U0=650"+"\n"+\
".model nmos1 nmos LEVEL=1 Vto="+str(Vt_T1)+" KP=120u LAMBDA=0.01 U0=650"+"\n"+\
".model nmos2 nmos LEVEL=1 Vto="+str(Vt_T2)+" KP=120u LAMBDA=0.01 U0=650"+"\n"+\
"\n"+\
".control"+"\n"+\
"set color0 =white"+"\n"+\
"set color1=black"+"\n"+\
"op"+"\n"+\
"show all"+"\n"+\
"dc vdd 0.7 12 0.01"+"\n"+\
"plot i(vrd) i(vrd2)"+"\n"+\
".endc"+"\n"
for i, line in enumerate(fileinput.input('../spice-simulations/espejoNmosPythonFile.cir', inplace=1)):
if i == 1: sys.stdout.write(text) # replace 'sit' and write
fileinput.close()
def modificar_cir_Espejo_PMOS(W,L,Vt_T1,Vt_T2):
text="* Simulación Circuito Espejo de Corriente con Ncmos, valores reales de Kp_n y Vt"+"\n"+ \
"* Universidad Nacional de Colombia 2016"+"\n"+ \
"* CMOS Analógico"+"\n"+ \
"* Grupo Jorge Garzón, Esteban Iafrancesco A"+"\n"+ \
"\n"+\
"VDD VDD 0 DC -10 AC 0"+"\n"+\
"V2 VR 0 DC -10 AC 0"+"\n"+\
"VRD RDN VR DC 0 AC 0"+"\n"+\
"RD RDN DRAIN 1000"+"\n"+\
"RP VDD GATE 2000"+"\n"+\
"M1 DRAIN GATE 0 0 pmosideal W="+str(W)+" L="+str(L)+"\n"+\
"M2 GATE GATE 0 0 pmosideal W="+str(W)+" L="+str(L)+"\n"+\
"\n"+\
"VRD2 RDN2 VR DC 0 AC 0"+"\n"+\
"RD2 RDN2 DRAIN2 1000"+"\n"+\
"RP2 VDD GATE2 2000"+"\n"+\
"M3 DRAIN2 GATE2 0 0 pmos1 W="+str(W)+" L="+str(L)+"\n"+\
"M4 GATE2 GATE2 0 0 pmos2 W="+str(W)+" L="+str(L)+"\n"+\
"\n"+\
".model pmosideal pmos LEVEL=1 Vto=-0.9 KP=40u LAMBDA=0.0125 U0=250"+"\n"+\
".model pmos1 pmos LEVEL=1 Vto=-"+str(Vt_T1)+" KP=40u LAMBDA=0.0125 U0=250"+"\n"+\
".model pmos2 pmos LEVEL=1 Vto=-"+str(Vt_T2)+" KP=40u LAMBDA=0.0125 U0=250"+"\n"+\
"\n"+\
".control"+"\n"+\
"set color0 =white"+"\n"+\
"set color1=black"+"\n"+\
"op"+"\n"+\
"show all"+"\n"+\
"dc vdd -0.8 -12 -0.01"+"\n"+\
"plot i(vrd) i(vrd2)"+"\n"+\
".endc"+"\n"
for i, line in enumerate(fileinput.input('../spice-simulations/espejoPmosPythonFile.cir', inplace=1)):
if i == 1: sys.stdout.write(text) # replace 'sit' and write
fileinput.close()
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
NMOS mirror with Vt_n variations across the wafer
Simulation with Vt_n variation starting from the corner of the wafer and a common centroid between 2 transistors (of the 4 available) to form the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_corner_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(1)
Vtn_T2=prom_Vt_n_for_transistor(2)
Vtn_T3=prom_Vt_n_for_transistor(3)
Vtn_T4=prom_Vt_n_for_transistor(4)
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T1,Vtn_T2)
display(Image(url='images/corner_Vt_TA_TB_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Transistores TA y TB")
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T1,Vtn_T3)
display(Image(url='images/corner_Vt_TA_TC_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Transistores TA y TC")
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T2,Vtn_T3)
display(Image(url='images/corner_Vt_TB_TC_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Transistores TB y TC")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_n variation from the center of the wafer and a common centroid between 2 transistors (of the 4 available) to form the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_centroid_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(1)
Vtn_T2=prom_Vt_n_for_transistor(2)
Vtn_T3=prom_Vt_n_for_transistor(3)
Vtn_T4=prom_Vt_n_for_transistor(4)
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T1,Vtn_T2)
display(Image(url='images/centroid_Vt_TA_TB_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Transistores TA y TB")
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T2,Vtn_T3)
display(Image(url='images/centroid_Vt_TB_TC_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Transistores TB y TC")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_n variation from the corner of the wafer and design option 2 (shared source), with a common centroid between the 2 transistors forming the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_corner_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(4)
Vtn_T2=prom_Vt_n_for_transistor(5)
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T1,Vtn_T2)
display(Image(url='images/corner_Vt_CS_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Diseno de source compartido")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_n variation from the center of the wafer and design option 2 (shared source), with a common centroid between the 2 transistors forming the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_centroid_Vt_n()
plt.matshow(Vt_n)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtn_T1=prom_Vt_n_for_transistor(4)
Vtn_T2=prom_Vt_n_for_transistor(5)
#modificar_cir_Espejo_NMOS(WTA,L,Vtn_T1,Vtn_T2)
display(Image(url='images/center_Vt_CS_n.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_n de la oblea. Diseno de source compartido")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
PMOS mirror with variations across the wafer starting from the corner
Simulation with Vt_p variation from the corner of the wafer and a common centroid between 2 transistors (of the 4 available) to form the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_corner_Vt_p()
plt.matshow(Vt_p)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtp_T1=prom_Vt_p_for_transistor(1)
Vtp_T2=prom_Vt_p_for_transistor(2)
Vtp_T3=prom_Vt_p_for_transistor(3)
Vtp_T4=prom_Vt_p_for_transistor(4)
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T1,Vtp_T2)
display(Image(url='images/corner_Vt_TA_TB_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal PMOS, Azul Espejo Con variaciones en Vt_p de la oblea. Transistores TA y TB")
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T1,Vtp_T3)
display(Image(url='images/corner_Vt_TA_TC_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal PMOS, Azul Espejo Con variaciones en Vt_p de la oblea. Transistores TA y TC")
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T2,Vtp_T3)
display(Image(url='images/corner_Vt_TB_TC_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal PMOS, Azul Espejo Con variaciones en Vt_p de la oblea. Transistores TB y TC")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_p variation from the center of the wafer and a common centroid between 2 transistors (of the 4 available) to form the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion1()
set_centroid_Vt_p()
plt.matshow(Vt_p)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtp_T1=prom_Vt_p_for_transistor(1)
Vtp_T2=prom_Vt_p_for_transistor(2)
Vtp_T3=prom_Vt_p_for_transistor(3)
Vtp_T4=prom_Vt_p_for_transistor(4)
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T1,Vtp_T2)
display(Image(url='images/center_Vt_LS_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_p de la oblea. Transistores TA y TB")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_p variation from the corner of the wafer and design option 2 (shared source), with a common centroid between the 2 transistors forming the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_corner_Vt_p()
plt.matshow(Vt_p)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtp_T1=prom_Vt_p_for_transistor(4)
Vtp_T2=prom_Vt_p_for_transistor(5)
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T1,Vtp_T2)
display(Image(url='images/corner_Vt_CS_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en Vt_p de la oblea. Diseno de source compartido")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Simulation with Vt_p variation from the center of the wafer and design option 2 (shared source), with a common centroid between the 2 transistors forming the current mirror.
|
paint_matrix = matrix(int(size_m),int(size_m))
opcion2()
set_centroid_Vt_p()
plt.matshow(Vt_p)
plt.show()
plt.matshow(paint_matrix)
plt.show()
Vtp_T1=prom_Vt_p_for_transistor(4)
Vtp_T2=prom_Vt_p_for_transistor(5)
#modificar_cir_Espejo_PMOS(WTA,L,Vtp_T1,Vtp_T2)
display(Image(url='images/center_Vt_CS_p.png'))
print("VDD vs -Iout. Rojo Espejo ideal, Azul Espejo Con variaciones en KP_p de la oblea. Diseno de source compartido")
|
notebooks/Cmos - Transistors (variant Vt property).ipynb
|
jagarzone6/cmos
|
mit
|
Linear Regression
Prepare train and test data.
|
data_original = np.loadtxt('stanford_dl_ex/ex1/housing.data')
data = np.insert(data_original, 0, 1, axis=1)
np.random.shuffle(data)
train_X = data[:400, :-1]
train_y = data[:400, -1]
test_X = data[400:, :-1]
test_y = data[400:, -1]
m, n = train_X.shape
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Define functions for Linear Regression (using Gradient Descent).
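For reference (a formula block added to these notes rather than taken from the original), the functions below implement the standard least-squares cost and its gradient:

$$J(\theta) = \frac{1}{2}\sum_{i=1}^{m}\left(\theta^{\top}x^{(i)} - y^{(i)}\right)^{2}, \qquad \nabla_{\theta}J(\theta) = X^{\top}\left(X\theta - y\right)$$

which is exactly what cost_function and gradient compute.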
|
def cost_function(theta, X, y):
squared_errors = (X.dot(theta) - y) ** 2
J = 0.5 * squared_errors.sum()
return J
def gradient(theta, X, y):
errors = X.dot(theta) - y
return errors.dot(X)
def train(X, y):
return scipy.optimize.minimize(
fun=cost_function,
x0=np.random.rand(n),
args=(X, y),
method='bfgs',
jac=gradient,
options={'maxiter': 200, 'disp': False},
).x
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Learning Curve
Define the number of equally sized parts the training data should be split up into. This equals the number of models that will be created.
|
fractions = 20
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
In every iteration:
1. Add one fraction of the full train set to the current train set.
2. Train a model.
3. Compute and save its Root Mean Square Error for both the current train set and the test set.
|
rms_train = np.zeros(fractions)
rms_test = np.zeros(fractions)
for i, fraction in enumerate(np.linspace(m / fractions, m, fractions)):
    n_samples = int(fraction)  # slice indices must be integers, not floats
    optimal_theta = train(train_X[:n_samples], train_y[:n_samples])
    rms_train[i] = np.sqrt(np.mean((train_X[:n_samples].dot(optimal_theta) - train_y[:n_samples]) ** 2))
    rms_test[i] = np.sqrt(np.mean((test_X.dot(optimal_theta) - test_y) ** 2))
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Now we can plot the Learning Curve.
|
plt.figure(figsize=(8, 6))
plt.xlim(0, m + m / fractions)
plt.xlabel('Training Instances Used')
plt.ylabel('RMS Error')
plt.plot(np.linspace(m / fractions, m, fractions), rms_train, c='g', marker='o', label='Train')
plt.plot(np.linspace(m / fractions, m, fractions), rms_test, c='b', marker='o', label='Test')
plt.legend()
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
3D Plots
Scatter Plots
The column with index 6 corresponds to the "Average Number of Rooms".<br>
The column with index 13 corresponds to the "% lower status of the population".<br>
See https://archive.ics.uci.edu/ml/datasets/Housing for more information.
|
fig = plt.figure(figsize=(17, 5))
ax = fig.add_subplot(131)
ax.scatter(train_X[:40, 6], train_y[:40], c='b', marker='o')
ax.set_xlabel('Avg Number of Rooms')
ax.set_ylabel('House Price (in thousands)')
ax = fig.add_subplot(132, projection='3d')
ax.scatter(train_X[:40, 6], train_X[:40, 13], train_y[:40], c='r', marker='o')
ax.set_xlabel('Avg Number of Rooms')
ax.set_ylabel('% lower status of the population')
ax.set_zlabel('House Price (in thousands)')
ax.view_init(30, 70)
ax = fig.add_subplot(133)
ax.scatter(train_X[:40, 13], train_y[:40], c='b', marker='o')
ax.set_xlabel('% lower status of the population')
ax.set_ylabel('House Price (in thousands)')
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Contour Plots
First, let's define a new training function which also returns the theta values of every optimization step.
|
def train_with_history(X, y):
theta = np.random.rand(X.shape[1])
history = [theta]
optimal_theta = scipy.optimize.minimize(
fun=cost_function,
x0=theta,
args=(X, y),
method='bfgs',
jac=gradient,
options={'maxiter': 200, 'disp': False},
callback=lambda x: history.append(x),
).x
history.append(optimal_theta)
return optimal_theta, np.vstack(history)
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Also, we need to reduce our training data to two features so the cost surface can be visualized.
|
contour_train_X = train_X[:, (6, 13)]
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Gather optimal theta as well as all theta values on the way there.
|
optimal_theta, history = train_with_history(contour_train_X, train_y)
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Define a grid of potentially relevant theta values and compute J for all of them.
|
theta0 = np.linspace(optimal_theta[0] - 5, optimal_theta[0] + 5, 100)
theta1 = np.linspace(optimal_theta[1] - 5, optimal_theta[1] + 5, 100)
J_values = np.zeros(shape=(theta0.size, theta1.size))
for i, theta0_val in enumerate(theta0):
for j, theta1_val in enumerate(theta1):
J_values[i, j] = cost_function(np.array([theta0_val, theta1_val]), contour_train_X, train_y)
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Finally, reward ourselves with some high class contour plots.
|
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(121)
ax.contour(theta0, theta1, J_values, cmap=cm.coolwarm)
ax.scatter(optimal_theta[0], optimal_theta[1])
ax.plot(history[:, 0], history[:, 1], 'k-', marker='o')
ax = fig.add_subplot(122, projection='3d')
ax.contourf(theta0, theta1, J_values, cmap=cm.coolwarm)
ax.scatter(optimal_theta[0], optimal_theta[1])
ax.plot(history[:, 0], history[:, 1], 'k-', marker='o')
ax.view_init(30, 150)
|
Learning_Curve_and_Other_Visualizations.ipynb
|
HaFl/ufldl-tutorial-python
|
mit
|
Compute source power spectral density (PSD) of VectorView and OPM data
Here we compute the resting-state source power spectrum from raw data recorded using
a Neuromag VectorView system and a custom OPM system.
The pipeline is meant to mostly follow the Brainstorm :footcite:TadelEtAl2011
OMEGA resting tutorial pipeline <bst_omega_>_.
The steps we use are:
Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.
Preprocessing
|
# Authors: Denis Engemann <denis.engemann@gmail.com>
# Luke Bloy <luke.bloy@gmail.com>
# Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD-3-Clause
import os.path as op
from mne.filter import next_fast_len
import mne
print(__doc__)
data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans = mne.transforms.Transform('head', 'mri') # use identity transform
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
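# --- Hedged sketch (an addition, not part of the original example): the "downsample
# heavily" and SSP artifact-removal steps described above might look roughly like this
# for the VectorView recording. The function names are standard MNE-Python API, but the
# exact parameters here are assumptions.
raw = mne.io.read_raw_fif(vv_fname, verbose='error')
raw.load_data().resample(90)  # heavy downsampling for resting-state PSD
ecg_projs, _ = mne.preprocessing.compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=0)
eog_projs, _ = mne.preprocessing.compute_proj_eog(raw, n_grad=1, n_mag=1, n_eeg=0)
raw.add_proj(ecg_projs + eog_projs)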
|
0.24/_downloads/b36af73820a7a52a4df3c42b66aef8a5/source_power_spectrum_opm.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
Exercise 1 - basic data manipulation
Duration: 10 minutes
Import the database of players from the 2014 World Cup
Determine the number of players in each team and build a dictionary { team : number of players }
Determine which 3 players covered the most distance. Is there a selection bias?
Among the players in the top decile of fastest players, who spent most of their time running without the ball?
Importing the file
|
import pandas as pd
data_players = pd.read_excel("Players_WC2014.xlsx", engine='openpyxl')
data_players.head()
|
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees_correction_a.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Number of players per team
|
data_players.groupby(['Team']).size().to_dict()
|
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees_correction_a.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Players who covered the most distance
|
## Which players covered the most distance?
data_players['Distance Covered'] = data_players['Distance Covered'].str.replace('km','')
data_players['Distance Covered'] = pd.to_numeric(data_players['Distance Covered'])
data_players.sort_values(['Distance Covered'], ascending=False).head(n=3)
|
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees_correction_a.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
There is a clear selection effect on this variable: the players whose teams went furthest in the competition are the ones who covered the most distance.
Who was the most efficient?
We need to make the Top Speed variable numeric, and to create a new variable with the share of distance covered while in possession of the ball.
|
## Who was the fastest?
data_players['Top Speed'] = data_players['Top Speed'].str.replace('km/h','')
data_players['Top Speed'] = pd.to_numeric(data_players['Top Speed'])
data_players.sort_values(['Top Speed'], ascending=False).head(n=3)
## Among those in the top decile of fastest players, who spent most of their time running without the ball?
data_players['Distance Covered In Possession'] = data_players['Distance Covered In Possession'].str.replace('km','')
data_players['Distance Covered In Possession'] = pd.to_numeric(data_players['Distance Covered In Possession'])
data_players['Share of Possession'] = data_players['Distance Covered In Possession']/data_players['Distance Covered']
data_players[data_players['Top Speed'] > data_players['Top Speed'].quantile(.90)].sort_values(['Share of Possession'], ascending=False).head()
|
_doc/notebooks/td2a_eco/td2a_eco_exercices_de_manipulation_de_donnees_correction_a.ipynb
|
sdpython/ensae_teaching_cs
|
mit
|
Accessing rows and columns
You can use loc and iloc as a chain to access the elements. Go from outer index to inner index
|
#access columns as usual
df['A']
#access rows
df.loc['G1']
#access a single row from the inner index
df.loc['G1'].loc[1]
#access a single cell
df.loc['G2'].loc[3]['B']
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Naming indices
Indices can have names (appear similar to column names)
|
df.index.names
df.index.names = ['Group', 'Serial']
df
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Accessing rows and columns using cross section
The xs method lets you get a cross section. The advantage is that it can penetrate a multilevel index in a single step. Now that we have named the indices, we can use cross sections effectively.
|
# Get all rows with Serial 1
df.xs(1, level='Serial')
# Get rows with serial 2 in group 1 (pass the multi-index key as a tuple)
df.xs(('G1', 2))
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Missing data
You can either drop rows/cols with missing values using dropna() or fill those cells with values using the fillna() methods.
dropna
Use dropna(axis, thresh, ...) where axis is 0 for rows, 1 for cols, and thresh is the minimum number of non-NA values a row/column must contain to be kept.
|
d = {'a':[1,2,np.nan], 'b':[np.nan, 5, np.nan], 'c':[6,7,8]}
dfna = pd.DataFrame(d)
dfna
# dropping rows with one or more na values
dfna.dropna()
# dropping cols with one or more na values
dfna.dropna(axis=1)
# Keep rows with at least 2 non-NA values (thresh=2); here rows with 2 or more NaNs are dropped
dfna.dropna(axis=0, thresh=2)
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
fillna
|
dfna.fillna(value=999)
# filling with mean value of entire dataframe
dfna.fillna(value = dfna.mean())
# fill a single column with that column's mean
dfna['a'].fillna(value = dfna['a'].mean())
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Data aggregation
Pandas allows SQL-like control over DataFrames. You can treat each DF as a table and perform SQL-style aggregation.
groupby
Format is: df.groupby('col_name').aggregation()
|
comp_data = {'Company':['GOOG','GOOG','MSFT','MSFT','FB','FB'],
'Person':['Sam','Charlie','Amy','Vanessa','Carl','Sarah'],
'Sales':[200,120,340,124,243,350]}
comp_df = pd.DataFrame(comp_data)
comp_df
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
mean min max
|
# mean sales by company - automatically only applies mean on numerical columns
comp_df.groupby('Company').mean()
# standard deviation in sales by company
comp_df.groupby('Company').std()
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
You can run other aggregation functions like mean, min, max, std, count etc. Let's look at describe, which does all of them at once.
describe
|
comp_df.groupby('Company').describe()
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
transpose
Long overdue: you can transpose a DF (swap rows and columns) by calling the transpose() method.
|
comp_df.groupby('Company').describe().transpose()
comp_df.groupby('Company').describe().index
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Combining DataFrames
You can concatenate, merge and join data frames.
Let's take a look at two example DataFrames
|
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['C0', 'C1', 'C2', 'C3'],'D': ['D0', 'D1', 'D2', 'D3']}, index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'], 'B': ['B4', 'B5', 'B6', 'B7'],
'C': ['C4', 'C5', 'C6', 'C7'],'D': ['D4', 'D5', 'D6', 'D7']}, index=[4, 5, 6, 7])
df1
df2
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
concat
pd.concat([df1, df2, ...], axis=0) extends a DataFrame along rows (axis=0) or columns (axis=1). The DataFrames should share column names (for axis=0) or index labels (for axis=1) to line up cleanly; mismatches are filled with NaN.
|
# extend along rows
pd.concat([df1, df2]) # flows well because the index is sequential and the columns match
# extend along columns
pd.concat([df1, df2], axis=1) # fills NaN where the indices don't match
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
merge
merge lets you do a sql merge with inner, outer, right and left joins.
pd.merge(left, right, how='outer', on='key') where left and right are your two DataFrames (tables) and on refers to the foreign key
|
left = pd.DataFrame({'key1': ['K0', 'K1', 'K2', 'K3'],'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K2', 'K3'],'B': ['C0', 'C1', 'C2', 'C3'],
'C': ['D0', 'D1', 'D2', 'D3']})
left
right
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
inner merge
Inner join keeps only the intersection.
|
#merge along key1
pd.merge(left, right, how='inner', on='key1')
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
When both tables have the same column names that are not used for merging (on), pandas appends the suffixes _x and _y to their names to differentiate them.
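As a quick illustration (this snippet is an addition, not from the original sheet), the suffixes argument of pd.merge lets you replace the default _x / _y suffixes:
```
# column 'B' exists in both frames and is not part of the join key
pd.merge(left, right, how='inner', on='key1', suffixes=('_left', '_right'))
```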
merge on multiple columns
Sometimes, your foreign key is composite. Then you can merge on multiple keys by passing a list to the on argument.
Now lets add a key2 column to both the tables.
|
left['key2'] = ['K0', 'K1', 'K0', 'K1']
left
right['key2'] = ['K0', 'K0', 'K0', 'K0']
right
pd.merge(left, right, how='inner', on=['key1', 'key2'])
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
inner merge will only keep the intersection, thus only 2 rows.
outer merge
Use how='outer' to keep the union of both the tables. pandas fills NaN when a cell has no values.
|
om = pd.merge(left, right, how='outer', on=['key1', 'key2'])
om
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Sorting
Use DataFrame.sort_values(by=columns, inplace=False, ascending=True) to sort the table.
|
om.sort_values(by=['key1', 'key2']) #now you got the merge sorted by columns.
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
right merge
how='right' keeps all the rows of the right table and drops the rows of the left table that don't have a matching key.
|
pd.merge(left, right, how='right', on=['key1', 'key2']).sort_values(by='key1')
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
left merge
how='left' will similarly keep all rows of left and those rows of right that have a matching foreign key.
|
pd.merge(left, right, how='left', on=['key1', 'key2']).sort_values(by='key1')
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
join
Joins are like merges but work on the index instead of columns. By default, DataFrame.join performs a left join; you can pass how='right', 'inner' or 'outer'. See the example below:
|
df_a = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['K0', 'K1', 'K2'])
df_b = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
'D': ['D0', 'D2', 'D3']},
index=['K0', 'K2', 'K3'])
df_a
df_b
#join b to a, default mode = keep all rows of a and matching rows of b (left join)
df_a.join(df_b)
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Thus we keep all rows of df_a and the matching rows of df_b. Where df_b does not have that index, the values are NaN.
|
#join b to a
df_b.join(df_a)
#outer join - union of outputs
df_b.join(df_a, how='outer')
|
python_crash_course/pandas_cheat_sheet_2.ipynb
|
AtmaMani/pyChakras
|
mit
|
Running dynamic presentations
You need to install the RISE IPython library from Damián Avila for dynamic presentations.
To convert and run this as a static presentation run the following command:
|
# I can't see notes in this - Not sure what the issue is. Try this in a solely python 2 environment
#ORIGINAL !ipython nbconvert 2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData.ipynb --to slides --post serve
!ipython nbconvert 2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData.ipynb --to slides --reveal-prefix https://cdn.jsdelivr.net/reveal.js/2.6.2 --post serve
#ipython nbconvert --to slides Analysis-scheme1.ipynb --reveal-prefix https://cdn.jsdelivr.net/reveal.js/2.6.2
!ipython nbconvert 2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData.ipynb --to slides --reveal-prefix https://cdn.jsdelivr.net/reveal.js/2.6.2
|
20160202_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData/.ipynb_checkpoints/2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData-checkpoint.ipynb
|
AntArch/Presentations_Github
|
cc0-1.0
|
\newpage
About me
Honorary Research Fellow, University of Nottingham: orcid
Director, Geolytics Limited - A spatial data analytics consultancy
About this presentation
Available on GitHub - https://github.com/AntArch/Presentations_Github/
Fully referenced PDF
\newpage
Contribution to GIScience learning outcomes
This presentation contributes to the following learning outcomes for this course.
Knowledge and Understanding:
Appreciate the importance of standards for Geographic Information and the role of the Open Geospatial Consortium.
Understand the term 'interoperability'.
Appreciate the different models for database design.
Understand the basis of Linked Data.
Find UK government open data and understand some of the complexities in the use of this data.
Appreciate the data issues involved in managing large distributed databases, Location-Based Services and the emergence of real-time data gathering through the 'Sensor-Web'.
Understand the different models for creating international Spatial Data Infrastructures.
Intellectual Skills:
Evaluate the role of standards and professional bodies in GIS.
Articulate the meaning and importance of interoperability, semantics and ontologies.
Assess the technical and organisational issues which come into play when attempting to design large distributed geographic databases aimed at supporting 'real-world' problems.
Suggested Questions:
What do you think the challenges for National Mapping Agencies and major Volunteer Geographic Data initiatives (OSM) are over the next 5 to 10 years?
This question aims to get the student to think about how technology, policy and practice frameworks will impact upon the GI landscape. Will, for example, OSM make the Ordnance Survey irrelevant? What would then happen to organisations like the Land Registry that require credible data products to support their activities? How will things like the semantic web impact on GI? Can the aspirations of Open Data be realised and bring a wealth of new products and services? What impact will the Internet of Things have on GI?
Describe the issues surrounding interoperability. How have the OGC, OSGeo and INSPIRE helped and what challenges remain.
This should be a simple description of data silos, technical interoperability (syntactic, semantic, schematic), standards, standards bodies and INSPIRE. Full discussions will consider non-technical interoperability.
Is cartography relevant to modern GI? Discuss this in relation to map representations, the range of available open data and the needs of different users.
This is a somewhat open ended question with the purpose of getting the student to consider the differences between mapping as a broadly static data representation and GI as a dynamic data rich discipline. Obviously cartography or visualization is important - this is where issues like clutter, colour, visualization, symbology, generalisation can all be discussed. The open data point is to get the students to consider the range of available geo and attribute data and to think of GI as a spatial data stack. This should lead into the student being able to discuss generic products (providing similar service to the masses (i.e. maps)) and bespoke services (providing specific services to niche communities) which both use the same underlying data sources. Good answers will bring in some of the non-technical interoperability issues - particularly licencing.
|
# THINGS TO DISCUSS
## Data needs to be credible, accessible, properly licensed. Bring in Internet of Things
|
20160202_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData/.ipynb_checkpoints/2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData-checkpoint.ipynb
|
AntArch/Presentations_Github
|
cc0-1.0
|
A potted history of mapping
In the beginning was the geoword
and the word was cartography
\newpage
Cartography was king.
Static representations of spatial knowledge with the cartographer deciding what to represent.
Hence, maps are domain specific knowledge repositories for spatial data
\newpage
And then there was data .........
\newpage
Restrictive data
\newpage
Disconnected data with different:
Standards
Quality
Databases
Semantics
\newpage
Why is this an issue: INSPIRE
describe inspire
\newpage
Making data interoperable and open
\newpage
Interoperability
Put in interoperability quote here (plus references)
Defense interoperability
As defined by DoD policy, interoperability is the ability of systems, units, or forces to provide data, information, material, and services to, and accept the same from, other systems, units, or forces; and to use the data, information, materiel, and services so exchanged to enable them to operate effectively together. IT and NSS interoperability includes both the technical exchange of information and the end-to-end operational effectiveness of that exchanged information as required for mission accomplishment. Interoperability is more than just information exchange; it includes systems, processes, procedures, organizations, and missions over the life cycle and must be balanced with information assurance.
\newpage
Technical interoperability - levelling the field
\newpage
Discuss each interoperability and its resolution.
Bring in:
OGC - standards body (remember the standards must be open (read data/write data)
GML (interoperable format) - interchange
\newpage
What did technical interoperability facilitate
From Map to Model: the changing paradigm of map creation from cartography to data-driven visualization
\newpage
\newpage
\newpage
\newpage
The world was a happy place.......
Our data was interoperable!
Our data became open data
\newpage
Background - open..... nesss
in communities
in government
in academia
\newpage
\newpage
Open..... nesss -- in government
The Shakespeare review [-@shakespeare_shakespeare_2013] indicates that the amount of government Open Data, at least in the UK, is only going to grow.
Open data has the potential to trigger a revolution in how governments think about providing services to citizens and how they measure their success: this produces societal impact.
This will require an understanding of citizen needs, behaviours, and mental models, and how to use data to improve services.
\newpage
A McKinsey Global Institute report examines the economic impact of Open Data [@mckinsey_open_2013] and estimates that globally open data could be worth a minimum of $3 trillion annually.
\newpage
Background - open..... nesss
in academia
Open inquiry is at the heart of the scientific enterprise..... Science’s powerful capacity for self-correction comes from this openness to scrutiny and challenge.
Science as an open enterprise [@royal_society_science_2012 p. 7].
The Royal Society’s report Science as an open enterprise [-@royal_society_science_2012] identifies how 21^st^ century communication technologies are changing the ways in which scientists conduct, and society engages with, science. The report recognises that ‘open’ enquiry is pivotal for the success of science, both in research and in society.
This goes beyond open access to publications (Open Access), to include access to data and other research outputs (Open Data), and the process by which data is turned into knowledge (Open Science).
The next generation open data in academia
Zenodo is a DATA REPOSITORY which offers:
* accreditation
* different licences
* different exposure (private (closed), public (open) and embargoed (timestamped))
* DOIs
* is free at the point of use
* is likely to be around for a long time
* supported by Horizon 2020 and delivered by CERN
\newpage
The underlying rationale of Open Data is:
unfettered access to large amounts of ‘raw’ data
enables patterns of re-use and knowledge creation that were previously impossible.
improves transparency and efficiency
encourages innovative service delivery
introduces a range of data-mining and visualisation challenges,
which require multi-disciplinary collaboration across domains
catalyst to research and industry
supports the generation of new products, services and markets
the prize for succeeding is improved knowledge-led policy and practice that transforms
communities,
practitioners,
science and
society
the Open Data Landscape
Formal and informal data. OS versus VGI (OpenStreetMap)
Conclude with Linked Data + Open Data = Linked Open Data
So where are these new data products?
Data, data everywhere - but where are the new derivatives and services?
\newpage
Non-technical interoperability issues?
Issues surrounding non-technical interoperability include:
Policy interoperability
Licence interoperability
Legal interoperability
Social interoperability
We will focus on licence interoperability
\newpage
Policy Interoperability
Access control
Are you allowed to use data from this country?
From international journal of digital curation
Social Interoperability
Social interoperability is concerned about the environment and the human processes involved in the information exchange.
see here
It's a bit woolly
Legal Interoperability
[The purpose of the Case Studies is to:](https://rd-alliance.org/groups/rdacodata-legal-interoperability-ig.html)
Provide more specific information on best practices, as well as barriers and constraints in a number of different scientific domains and communities of practice (lessons learned);
Illustrate the variety of legal frameworks that govern research data, different approaches to intellectual property and copyright across jurisdictions, different disciplinary expectations and norms, and alternative mechanisms to address the legal interoperability of data that have been tried in practice; and
Identify opportunities for cross-disciplinary, cross-domain fertilization and collaboration and for new initiatives to address key barriers and constraints.
Also look here
Chatham house paper
Licence Interoperability
A specific form of legal interoperability
Example of applying the semantic web to licence interoperability
There is a multitude of formal and informal data.
\newpage
What is a licence?
Wikipedia state:
A license may be granted by a party ("licensor") to another party ("licensee") as an element of an agreement between those parties.
A shorthand definition of a license is "an authorization (by the licensor) to use the licensed material (by the licensee)."
Each of these data objects can be licenced in a different way. This shows some of the licences described by the RDFLicence ontology
\newpage
Concepts (derived from Formal Concept Analysis) surrounding licences
\newpage
Two lead organisations have developed legal frameworks for content licensing:
Creative Commons (CC) and
Open Data Commons (ODC).
Until the release of CC version 4, published in November 2013, the CC licence did not cover data. Between them, CC and ODC licences can cover all forms of digital work.
There are many other licence types
Many are bespoke
Bespoke licences are difficult to manage
Many legacy datasets have bespoke licences
I'll describe CC in more detail
\newpage
Creative Commons Zero
Creative Commons Zero (CC0) is essentially public domain which allows:
Reproduction
Distribution
Derivations
Constraints on CC0
The following clauses constrain CC0:
Permissions
ND – No derivatives: the licensee can not derive new content from the resource.
Requirements
BY – By attribution: the licensee must attribute the source.
SA – Share-alike: if the licensee adapts the resource, it must be released under the same licence.
Prohibitions
NC – Non commercial: the licensee must not use the work commercially without prior approval.
CC license combinations
License|Reproduction|Distribution|Derivation|BY|SA|NC
----|----|----|----|----|----|----
CC0|X|X|X|||
CC-BY-ND|X|X||X||
CC-BY-NC-ND|X|X||X||X
CC-BY|X|X|X|X||
CC-BY-SA|X|X|X|X|X|
CC-BY-NC|X|X|X|X||X
CC-BY-NC-SA|X|X|X|X|X|X
Table: Creative Commons license combinations
\newpage
Why are licenses important?
They tell you what you can and can't do with 'stuff'
Very significant when multiple datasets are combined
It then becomes an issue of license compatibility
\newpage
Which is important when we mash up data
Certain licences when combined:
Are incompatible
Creating data islands
Inhibit commercial exploitation (NC)
Force the adoption of certain licences
If you want people to commercially exploit your stuff don't incorporate CC-BY-NC-SA data!
Stops the derivation of new works
A conceptual licence processing workflow. The licence processing service analyses the incoming licence metadata and determines if the data can be legally integrated and any resulting licence implications for the derived product.
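To make the idea concrete, here is a toy sketch (an addition to the original slides, and deliberately simplified: it ignores share-alike conflicts between different SA licences) of how such a licence check could be expressed in code:
```
# Model each licence by the clauses it carries; combining datasets unions the clauses.
CLAUSES = {
    'CC0':         set(),
    'CC-BY':       {'BY'},
    'CC-BY-SA':    {'BY', 'SA'},
    'CC-BY-NC':    {'BY', 'NC'},
    'CC-BY-NC-SA': {'BY', 'NC', 'SA'},
    'CC-BY-ND':    {'BY', 'ND'},
}

def combine(*licences):
    """Return the clauses a derived product must honour, or None if derivation is blocked."""
    merged = set()
    for name in licences:
        clauses = CLAUSES[name]
        if 'ND' in clauses:
            return None  # no-derivatives content cannot be reused in a new work
        merged |= clauses
    return merged

print(combine('CC-BY', 'CC-BY-NC-SA'))  # {'BY', 'NC', 'SA'}: attribution, non-commercial, share-alike
print(combine('CC-BY', 'CC-BY-ND'))     # None: integration into a derived product is not permitted
```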
\newpage
A rudimentary logic example
```
Data1 hasDerivedContentIn NewThing.
Data1 hasLicence a cc-by-sa.
What hasLicence a cc-by-sa? #reason here
If X hasDerivedContentIn Y and hasLicence Z then Y hasLicence Z. #reason here
Data2 hasDerivedContentIn NewThing.
Data2 hasLicence a cc-by-nc-sa.
What hasLicence a cc-by-nc-sa? #reason here
Nothing hasLicence a cc-by-nc-sa and hasLicence a cc-by-sa. #reason here
```
And processing this within the Protege reasoning environment
|
from IPython.display import YouTubeVideo
YouTubeVideo('jUzGF401vLc')
|
20160202_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData/.ipynb_checkpoints/2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData-checkpoint.ipynb
|
AntArch/Presentations_Github
|
cc0-1.0
|
\newpage
A more robust logic
Would need to decouple licence incompatibility from licence name into licence clause (see table below)
Deal with all licence types
Provide recommendations based on desired derivative licence type
Link this through to the type of process in a workflow:
data derivation is, from a licence position, very different to contextual display
License|Reproduction|Distribution|Derivation|BY|SA|NC
----|----|----|----|----|----|----
CC0|X|X|X|||
CC-BY-ND|X|X||X||
CC-BY-NC-ND|X|X||X||X
CC-BY|X|X|X|X||
CC-BY-SA|X|X|X|X|X|
CC-BY-NC|X|X|X|X||X
CC-BY-NC-SA|X|X|X|X|X|X
ODC-PDDL|X|X|X|||
ODC-BY|X|X|X|X||
ODC-ODbL|X|X|X|X|X|
OGL 2.0|X|X|X|X||
OS OpenData|X|X|X|X|?|
Table: Creative Commons license combinations
\newpage
OGC and Licence interoperability
The geo business landscape is increasingly based on integrating heterogeneous data to develop new products
Licence heterogeneity is a barrier to data integration and interoperability
A licence calculus can help resolve and identify heterogeneities, leading to
legal compliance
confidence
Use of standards and collaboration with organisations is crucial
Open Data Licensing ontology
The Open Data Institute
Failure to do this could lead to breaches in data licenses
and we all know where that puts us........
\newpage
Ontologies, semantics and linked data
We've already been introduced to ontologies in the licence calculus example.
Slides to describe ontologies, semantics and linked data
Include geo-logic from Brandon
Geo example:
```
Leeds is a city.
Yorkshire is a county.
Sheffield is a city.
Lancaster is a city.
Lancashire is a county.
Lancaster has a port.
What is Leeds?
Leeds isIn Yorkshire.
Sheffield isIn Yorkshire.
Lancaster isIn Lancashire.
What isIn Yorkshire?
If X isIn Y then Y contains X.
What contains Leeds?
Yorkshire borders Lancashire.
If X borders Y then Y borders X.
What borders Lancashire?
Yorkshire isIn UnitedKingdom.
Lancashire isIn UnitedKingdom.
Transitivity
If X isIn Y and Y isIn Z then X isIn Z.
If X contains Y and Y contains Z then X contains Z
```
using proper isIn
```
Leeds is a city.
Yorkshire is a county.
Sheffield is a city.
Lancaster is a city.
Lancashire is a county.
Lancaster has a port.
What is Leeds?
Leeds is spatiallyWithin Yorkshire.
Sheffield is spatiallyWithin Yorkshire.
Lancaster is spatiallyWithin Lancashire.
What is spatiallyWithin Yorkshire?
If X is spatiallyWithin Y then Y spatiallyContains X.
What spatiallyContains Leeds?
Yorkshire borders Lancashire.
If X borders Y then Y borders X.
What borders Lancashire?
Yorkshire is spatiallyWithin UnitedKingdom.
Lancashire is spatiallyWithin UnitedKingdom.
Transitivity
If X is spatiallyWithin Y and Y is spatiallyWithin Z then X is spatiallyWithin Z.
If X spatiallyContains Y and Y spatiallyContains Z then X spatiallyContains Z
What is spatiallyWithin UnitedKingdom?
```
Adding more......
```
Pudsey is spatiallyWithin Leeds.
Kirkstall is spatiallyWithin Leeds.
Meanwood is spatiallyWithin Leeds.
Roundhay is spatiallyWithin Leeds.
Scarcroft is spatiallyWithin Leeds.
```
and more
```
UnitedKingdom isPartOf Europe.
UnitedKingdom is a country.
If X isPartOf Y and X spatiallyContains Z then Z isPartOf Y.
What isPartOf Europe?
```
|
and more
```
If X spatiallyContains Y and X is a city then Y is a place and Y is a cityPart.
Every city is a place.
What is a place.
```
and more
```
UK isPartOf Europe.
UK is sameAs UnitedKingdom.
If X has a port then X borders Water.
What borders Water?
```
|
20160202_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData/.ipynb_checkpoints/2016_Nottingham_GIServices_Lecture3_Beck_InteroperabilitySemanticsAndOpenData-checkpoint.ipynb
|
AntArch/Presentations_Github
|
cc0-1.0
|
Now we apply the $k$-means algorithm. Therefore we:
Set the maximum number of iterations.
Set random initial center positions.
Iteratively:
1. Assignment step: Calculate Euclidean distances between
each data point and each center position and assign to nearest one.
2. Plot the assigned data points (just for visualization).
3. Update step: Move each center point to the average position of the data points
which are currently assigned to this center. If a center point has no assigned data points at all, define a new random position.
(This is not part of the original algorithm. You can disable this feature at the beginning of the following code.)
4. Stop if the movement of the center positions is below a certain threshold
or if the maximum number of iterations is reached.
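For reference (this formula block is an addition to the original text), the assignment and update steps can be written as

$$c^{(i)} \leftarrow \arg\min_{k}\,\lVert \mathbf{x}^{(i)} - \boldsymbol{\mu}_k \rVert_2,
\qquad
\boldsymbol{\mu}_k \leftarrow \frac{1}{\lvert S_k \rvert}\sum_{i \in S_k} \mathbf{x}^{(i)},$$

where $S_k$ is the set of data points currently assigned to center $k$.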
|
max_iter = 16 # Set the maximum number of iterations.
update_empty_centers = True # If True, a center point with no assigned data points gets a new random position.
# Set random init center positions in range of training_data.
def get_rand_centers(num):
centers_x1 = (np.max(training_set[:,0])-np.min(training_set[:,0]))*np.random.rand(num)+np.min(training_set[:,0])
centers_x2 = (np.max(training_set[:,1])-np.min(training_set[:,1]))*np.random.rand(num)+np.min(training_set[:,1])
return np.stack((centers_x1, centers_x2), axis=1)
centers = get_rand_centers(m)
new_centers = np.empty([m,2])
center_history = np.array([centers])
# Prepare plot.
fig = plt.figure(1, figsize=(16,4*(-(-max_iter//4))))
if m > 6:
cmap=plt.cm.Paired
else:
cmap = mpl.colors.ListedColormap(['royalblue', 'red', 'green', 'm', 'darkorange', 'gray'][:m])
boundaries = np.arange(0,m,1.0)
norm = mpl.colors.BoundaryNorm(np.arange(0,m+1,1), cmap.N)
# Start iteration.
for n in range(0,max_iter):
# Calculate the Euclidean distance from each data point to each center point.
distances = np.sqrt(np.subtract(training_set[:,0,None],np.repeat(np.array([centers[:,0]]), repeats=N, axis=0))**2 + np.subtract(training_set[:,1,None],np.repeat(np.array([centers[:,1]]), repeats=N, axis=0))**2)
# Assignment step. Identify the closest center point to each data point.
argmin = np.argmin(distances, axis=1)
# Plot current center positions and center assignments of the data points.
ax = fig.add_subplot(-(-max_iter//4), 4, n+1)
plt.scatter(training_set[:,0], training_set[:,1], marker='o', s=30, c=argmin, cmap=cmap, norm=norm, alpha=0.5)
plt.scatter(centers[:,0], centers[:,1], marker='X', c=np.arange(0,m,1), cmap=cmap, norm=norm, edgecolors='black', s=200)
plt.tick_params(axis='both', labelsize=12)
plt.title('Iteration %d' % n, fontsize=12)
# Update step.
for i in range(0,m):
new_centers[i] = np.sum(training_set[argmin==i], axis=0) / len(argmin[argmin==i]) if len(argmin[argmin==i])>0 else get_rand_centers(1) if update_empty_centers else centers[i]
# Calc the movement of all center points as a stopping criterion.
center_movement = np.sum(np.sqrt((new_centers[:,0]-centers[:,0])**2 + (new_centers[:,1]-centers[:,1])**2))
if center_movement < 0.0001:
print("Finished early after %d iterations." % n)
break
centers = np.array(new_centers, copy=True)
center_history = np.append(center_history, [centers], axis=0)
fig.subplots_adjust(hspace=0.3)
plt.show()
|
mloc/ch6_Unsupervised_Learning/KMeans_Illustration.ipynb
|
kit-cel/wt
|
gpl-2.0
|
Python tutorials and references
Following are some resources to learn Python
Article with reviews about various tutorials http://noeticforce.com/best-free-tutorials-to-learn-python-pdfs-ebooks-online-interactive
user voted list of tutorials on quora: https://www.quora.com/What-is-the-best-online-resource-to-learn-Python
Google's Python class https://developers.google.com/edu/python/
https://www.learnpython.org/
Python reference documentation https://docs.python.org/3/
A list of Python libraries for various applications: https://github.com/vinta/awesome-python
Data structures
Lists
l1 = list()
l2 = [] #both empty lists
l3 = [1,2,3]
|
l1 = list()
type(l1)
l2 = []
len(l2)
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
list slicing
|
l3 = [1,2,3,4,5,6,7,8,9]
l3[:] #prints all
l3[0]
l3[:4] #prints first 4. the : is slicing operator
l3[4:7] #upto 1 less than highest index
a = len(l3)
l3[a-1] #negative index for traversing in opposite dir
l3[-4:] #to pick the last 4 elements
l3.reverse() #happens inplace
l3
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
append and extend
|
l3.append(10) #to add new values
l3
a1 = ['a','b','c']
l3.append(a1)
l3[-1]
a1 = ['a','b','c']
l3.extend(a1) # to concatenate two lists; elements need not be the same data type
l3
lol = [[1,2,3],[4,5,6]] #lol - list of lists
len(lol)
lol[1].reverse()
lol[1]
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
mutability of lists
list elements are mutable and can be changed
|
l3
l3[-1] = 'solar fare' #modify the last element
l3
#list.insert(index, object) to insert a new value
print(str(len(l3))) #before insertion
l3.insert(1,'two')
l3
# l3.pop(index) remove item at index and give that item
l3.pop(-3) # remove the 3rd item from the end and return it
l3
# l3.clear() to empty a list
lol.clear()
lol
l3 = [9, 8, 7, 6, 5, 4, 3, 2, 1, 10, ['a', 'b', 'c'], 'a', 'b', 'c', 10,10,10]
l3
# l3.count(value) counts the number of occurrences of a value
l3.count(10)
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
Lists and indices
|
# l3.index(value, <start, <stop>>) returns the first occurrence of element
l3.index(10)
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
Find all the indices of an element
|
# indices = [i for i, x in enumerate(my_list) if x == "whatever"]
#find all occurrence of 10
indices_of_10 = [i for i, x in enumerate(l3) if x == 10]
indices_of_10
list(enumerate(l3))
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
Dictionaries
Key value pairs
d1 = dict()
d2 = {'key1':value,
'key2':value2}
|
d1 = dict()
d2 = {}
len(d2)
d3 = {'day':'Thursday',
'day_of_week':5,
'start_of_week':'Sunday',
'day_of_year':123,
'dod':{'month_of_year':'Feb',
'year':2017},
'list1':[8,7,66]}
len(d3)
d3.keys()
d3['start_of_week']
type(d3['dod'])
# now that dod is a dict, get its keys
d3['dod'].keys()
d3['dod']['year']
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
mutability of dicts
dicts like lists are mutable
|
d3['day_of_year'] = -48
d3
# insert new values just by adding kvp (key value pair)
d3['workout_of_the_week']='bungee jumping'
d3
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
dict exploration
what happens when you look up a key that's not present
|
d3['dayyy']
# safe way to get elements is to use get()
d3.get('day')
d3.get('dayyy') # returns None
# use items() to get a list of tuples of key value pairs
d3.items()
# use values() to get only the values
d3.values()
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
Tuple
A tuple is an immutable list.
|
t1 = tuple()
t2 = ()
len(t1)
type(t2)
t3 = (3,4,5,'t','g','b')
t3[0]
#use it just like a list
t3[-1]
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
mutability of tuples
You cannot modify tuples; the assignment below raises a TypeError.
|
t3[0] = 'good evening'
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|
Sets
A set is a sequence of unique values.
s1 = set(<sequence>)
s2 = {value1, value2, ...} (note: empty braces {} create a dict, not an empty set)
|
s1 = set([1,1,1,2,2,2,4,4,4,4,4,4,4,5])
s1
s2 = {1,2,2,2,2,3}
s2
|
python_crash_course/python_cheat_sheet_1.ipynb
|
AtmaMani/pyChakras
|
mit
|