Now let’s do a describe and plot it again.
df.plot(x='Miles', y='Minutes', kind='scatter')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Let’s plot Miles and Minutes together in a scatter plot. Wow, that’s linear. Let’s see how correlated they are. We do this with the corr method. We can see that Miles and Minutes are very tightly correlated (using the Pearson standard correlation coefficient). There are two other correlation methods you can use: Kendall’s tau and Spearman rank correlation.
df.corr()
df.corr(method='kendall')
df.corr(method='spearman')
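As a quick illustration (a toy sketch with hypothetical data, not the notebook's running log), the correlation methods all agree when one column is an exact linear function of the other:

```python
import pandas as pd

# Toy data (hypothetical): Minutes scales linearly with Miles
toy = pd.DataFrame({'Miles': [1.0, 2.0, 3.0, 4.0],
                    'Minutes': [8.0, 16.0, 24.0, 32.0]})

pearson = toy.corr().loc['Miles', 'Minutes']                    # default method
spearman = toy.corr(method='spearman').loc['Miles', 'Minutes']  # rank correlation
# toy.corr(method='kendall') works the same way (it needs scipy installed)
```

Both coefficients come out as exactly 1.0 here, since the relationship is perfectly linear (and therefore perfectly monotonic).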
Now let’s see a box plot. With these two we get a much better idea of the data. We can see that most of my runs are below an hour, except for a couple that are much longer.
df.boxplot('Minutes', return_type='axes')
Now let’s add minutes per mile; we can just divide our two series to get those numbers.
df['Minutes'] / df['Miles']
df['Min_per_mile'] = df['Minutes'] / df['Miles']
df.describe()
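Dividing two Series works elementwise and aligns on the index, so each run's minutes are divided by that same run's miles. A minimal sketch with made-up numbers:

```python
import pandas as pd

# Hypothetical two-run frame
toy = pd.DataFrame({'Miles': [2.0, 4.0], 'Minutes': [16.0, 30.0]})

# Elementwise, index-aligned division; assigning creates the new column
toy['Min_per_mile'] = toy['Minutes'] / toy['Miles']
```

Here the first run works out to 8.0 minutes per mile and the second to 7.5.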
We can see that at shorter distances, my speed can vary a lot.
df.plot(x='Miles', y='Min_per_mile', kind='scatter')
plt.ylabel("Minutes / Mile")
Let’s see a histogram of my speeds. Histograms are a great way of representing frequency data, or how often certain values are occurring.
df.hist('Min_per_mile')
Seems pretty centered in that 7 to 7.5 minute range. Let’s see if we can get more information with more bins, which we specify with the bins argument.
df.hist('Min_per_mile', bins=20)
That’s interesting. Under 7 and then at 7.5 are the most popular. I bet that has something to do with my running distances too, or the courses I choose to run.
df.hist('Min_per_mile', bins=20, figsize=(10,8))
plt.xlim((5, 11))
plt.ylim((0, 12))
plt.title("Minutes Per Mile Histogram")
plt.grid(False)
plt.savefig('../assets/minutes_per_mile_histogram.png')
df['Miles']
Now another cool thing you can do with time series is compute the rolling mean, rolling sum, or even rolling correlations. There are a lot of different “rolling” operations you can do.
df['Miles'].plot()
So here’s a standard plot of our Miles again, just a line over time. To add another line to the same plot, we just plot again in the same cell. As I was saying about rolling values, let’s talk about the rolling average. To compute it, I pass it a Series or a DataFrame.
df['Miles'].plot()
pd.rolling_mean(df['Miles'], 7).plot()
I can do the same with the rolling standard deviation or sum.
df['Miles'].plot()
pd.rolling_std(df['Miles'], 7).plot()

df['Miles'].plot()
pd.rolling_sum(df['Miles'], 7).plot()
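A note on the API: the `pd.rolling_mean` / `pd.rolling_std` / `pd.rolling_sum` functions used here were removed in later pandas releases; the equivalent today is the `.rolling` accessor on a Series or DataFrame. A minimal sketch with a toy series:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0])

# Modern equivalent of pd.rolling_mean(s, 3): NaN until the window fills
roll = s.rolling(window=3).mean()
```

The first two entries are NaN because fewer than three observations are available; from the third entry on, each value is the mean of the trailing three. `.rolling(3).std()` and `.rolling(3).sum()` work the same way.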
Now on a last note, one thing that’s cool about datetime indexes is that you can query them very naturally. If I want to get all my runs in October of 2014, I just enter that as a string.
df.index
If I want to get from November to December, I can do that as a Series.
df['2014-11':'2014-12']
How do you think we might go from November to January 1, 2015? Go ahead and give it a try and see if you can figure it out.
df['2014-11':'2015-1-1']['Miles'].plot()
Now we can specify a range this way, but we can’t select a specific date the same way. Let’s try to get a specific date’s run.
df['2014-8-12']
To do that we need to use loc.
df.loc['2014-8-12']
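The slicing and `.loc` lookups above can be sketched on a small hypothetical frame with a DatetimeIndex (the dates below are made up to mirror the ones in the text):

```python
import pandas as pd

idx = pd.to_datetime(['2014-08-12', '2014-11-03', '2014-12-20', '2015-01-01'])
runs = pd.DataFrame({'Miles': [3.0, 5.0, 4.0, 6.0]}, index=idx)

# Partial-string slicing: everything from November through December 2014
nov_dec = runs['2014-11':'2014-12']

# A single date needs .loc
one_day = runs.loc['2014-8-12']
```

The slice picks up the two runs that fall inside the range; the month strings are expanded to cover the whole month on each end.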
Now that we’ve done all this work, we should save it so that we don’t have to remember what our operations were or what stage we did them at. We could save it to CSV like we did our other one, but I wanted to illustrate all the different ways you can save this file. Let’s save our CSV, but we can also save it as an HTML page (which will give us a table view) or a JSON file.
df.head()
df.to_csv('../data/date_fixed_running_data_with_time.csv')
df.to_html('../data/date_fixed_running_data_with_time.html')
One thing to note with JSON files is that they want unique indexes (because they’re going to become the keys), so we’ve got to give it a new index. We can do this by resetting our index or setting our index to a column.
df.to_json('../data/date_fixed_running_data_with_time.json')
df.reset_index()
df['Date'] = df.index
df.index = range(df.shape[0])
df.head()
df.to_json('../data/date_fixed_running_data_with_time.json')
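The unique-index requirement can be demonstrated on a tiny hypothetical frame: with duplicate dates in the index, `to_json` refuses, but after `reset_index` the rows get a fresh unique RangeIndex and serialization works:

```python
import pandas as pd

# Hypothetical frame with a duplicate date in the index
idx = pd.to_datetime(['2014-08-12', '2014-08-12'])
runs = pd.DataFrame({'Miles': [3.0, 4.0]}, index=idx)

# Move the old index into a column; a unique 0..n-1 index remains
runs = runs.reset_index()
json_str = runs.to_json()
```

The former index survives as an ordinary column (named 'index' here), so no information is lost.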
Now there’s a LOT more you can do with datetime indexing, but this is about all I wanted to cover in this video. We will get into more specifics later. By now you should be getting a lot more familiar with pandas and the IPython + pandas workflow.
df.Date[0]
Homework 14 (or so): TF-IDF text analysis and clustering

Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat. No, just kidding, we're professionals now.

Investigating the Congressional Record

The Congressional Record is more or less what happened in Congress every single day. Speeches and all that. A good large source of text data, maybe? Let's pretend it's totally secret but we just got it leaked to us in a data dump, and we need to check it out. It was leaked from this page here.
# If you'd like to download it through the command line...
#!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
#!tar -zxf convote_v1.1.tar.gz
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
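The `paths` list used in the next cell presumably comes from a `glob` call over the extracted speech files (the exact directory layout is elided here, so the sketch below builds a stand-in directory just to show the pattern):

```python
import glob
import os
import tempfile

# Hypothetical stand-in for the extracted speech directory
speech_dir = tempfile.mkdtemp()
for name in ('speech_a.txt', 'speech_b.txt'):
    with open(os.path.join(speech_dir, name), 'w') as f:
        f.write('mr chairman , i rise today ...')

# One path per speech file, in a stable order
paths = sorted(glob.glob(os.path.join(speech_dir, '*.txt')))
```

In the homework the same pattern over the real convote dump yields the 702 paths mentioned below.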
So great, we have 702 of them. Now let's import them.
speeches = []
for path in paths:
    with open(path) as speech_file:
        speech = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': speech_file.read()
        }
    speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
#speeches_df.head()
speeches_df['pathname'][0]
In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list. Take a look at the contents of the first 5 speeches.
texts = speeches_df['content']
texts[:5]
Doing our analysis

Use the sklearn package and a plain boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's ok! Make a dataframe with those terms as columns. Be sure to use English-language stopwords.
from sklearn.feature_extraction.text import CountVectorizer

count_vectorizer = CountVectorizer(stop_words='english')
Xc = count_vectorizer.fit_transform(texts)
Xc
Xc.toarray()
pd.DataFrame(Xc.toarray()).head(3)
Xc_feature = pd.DataFrame(Xc.toarray(), columns=count_vectorizer.get_feature_names())
Xc_feature.head(3)
Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
import re
from nltk.stem.porter import PorterStemmer

porter_stemmer = PorterStemmer()

def stemming_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z]", " ", str_input).lower().split()
    words = [porter_stemmer.stem(word) for word in words]
    return words

count_vectorizer = CountVectorizer(stop_words='english', tokenizer=stemming_tokenizer, max_features=100)
Xc100 = count_vectorizer.fit_transform(texts)
print(count_vectorizer.get_feature_names())
Now let's push all of that into a dataframe with nicely named columns.
df_Xc = pd.DataFrame(Xc100.toarray(), columns=count_vectorizer.get_feature_names())
df_Xc.head(3)
Everyone seems to start their speeches with "mr chairman" - how many speeches are there in total, how many don't mention "chairman", and how many mention neither "mr" nor "chairman"?
df_Xc['act'].count()
(df_Xc["chairman"] == 0).sum()
(df_Xc["mr"] == 0).sum()
# "neither" means both counts are zero; adding the two separate counts would double-count
neither = ((df_Xc["mr"] == 0) & (df_Xc["chairman"] == 0)).sum()
print(neither, "speeches in total mention neither 'mr' nor 'chairman'")
What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
thank = df_Xc[df_Xc["thank"] != 0]
thank.head(3)
thank_column = thank['thank']
thank_column.sort_values(ascending=False).head(1)  # Series.sort was removed; sort_values is the modern call
If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectorizer?
china_trade = df_Xc['china'] + df_Xc['trade']
china_trade.sort_values(ascending=False).head(3)
Now what if I'm using a TfidfVectorizer?
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l1', max_features=100)
Xt = tfidf_vectorizer.fit_transform(texts)
pd.DataFrame(Xt.toarray(), columns=tfidf_vectorizer.get_feature_names()).head(3)
print(tfidf_vectorizer.get_feature_names())

# checking the idf-weighted version with the default l2 norm
l2_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True, max_features=100)
Xl2 = l2_vectorizer.fit_transform(texts)
l2_df = pd.DataFrame(Xl2.toarray(), columns=l2_vectorizer.get_feature_names())
print(l2_vectorizer.get_feature_names())
What's the content of the speeches? Here's a way to get them:
# index 0 is the first speech, which was the first one imported.
paths[0]
# Pass that into 'cat' using { }, which lets you put variables in shell commands;
# that way you can pass the path to cat
!cat {paths[0]}
Now search for something else! Another two terms that might show up - elections and chaos? Whatever you think might be interesting.
df_Xc.columns
congress_lawsuit = df_Xc['lawsuit'] + df_Xc['congress']
congress_lawsuit.sort_values(ascending=False).head(5)
pd.DataFrame([df_Xc['lawsuit'], df_Xc['congress'], df_Xc['lawsuit'] + df_Xc['congress']],
             index=["lawsuit", "congress", "lawsuit + congress"]).T
Enough of this garbage, let's cluster

Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency inverse document frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category.
from sklearn.cluster import KMeans

# count vectorization: Xc100 holds the counts for the 100 top words
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(Xc100)

print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))

results = pd.DataFrame()
results['text'] = texts
results['category'] = km.labels_
results

# term frequency vectorization: Xt holds the l1-normalized term frequencies for the 100 top words
kmt = KMeans(n_clusters=number_of_clusters)
kmt.fit(Xt)

print("Top terms per cluster:")
order_centroids = kmt.cluster_centers_.argsort()[:, ::-1]
terms_t = tfidf_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms_t[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))

# tf-idf vectorization
kml2 = KMeans(n_clusters=number_of_clusters)
kml2.fit(Xl2)

print("Top terms per cluster:")
order_centroids = kml2.cluster_centers_.argsort()[:, ::-1]  # was km.cluster_centers_, a bug
terms_r = l2_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms_r[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))
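The "top terms per cluster" trick - sorting each cluster center's coordinates and mapping the largest ones back to vocabulary words - can be seen on a tiny hand-made term matrix (two obvious groups, two made-up terms):

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy document-term count matrix: two clear groups of documents
X = np.array([[5, 0], [4, 1], [0, 5], [1, 4]], dtype=float)
terms = ['war', 'tax']

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Each row of cluster_centers_ is sorted descending; indices map back to terms
order = km.cluster_centers_.argsort()[:, ::-1]
top_terms = [terms[order[i, 0]] for i in range(2)]
```

One cluster's center sits near (4.5, 0.5) and the other near (0.5, 4.5), so the top term of one cluster is 'war' and of the other 'tax'.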
Which one do you think works the best?

Harry Potter time

I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip. I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction. What is your hypothesis?
import glob

paths = glob.glob('hp/hp/*')
paths[:5]
len(paths)

reviews = []
for path in paths:
    with open(path) as review_file:
        review = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': review_file.read()
        }
    reviews.append(review)
reviews_df = pd.DataFrame(reviews)
reviews_df.head()
texts = reviews_df['content']
texts
Vectorize

Count Vectorization
import re
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from nltk.stem.porter import PorterStemmer

porter_stemmer = PorterStemmer()

def stemming_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z]", " ", str_input).lower().split()
    words = [porter_stemmer.stem(word) for word in words]
    return words

count_vectorizer = CountVectorizer(stop_words="english", tokenizer=stemming_tokenizer, max_features=1000)
Zc1000 = count_vectorizer.fit_transform(texts)
#print(count_vectorizer.get_feature_names())

# count vectorization: Zc1000 holds the counts for the 1000 top words
number_of_clusters = 8
kmzc = KMeans(n_clusters=number_of_clusters)
kmzc.fit(Zc1000)

print("Top terms per cluster:")
order_centroids = kmzc.cluster_centers_.argsort()[:, ::-1]
terms_zc = count_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms_zc[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))

# term frequency vectorization (no idf)
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=False, norm='l2')
Zt = tfidf_vectorizer.fit_transform(texts)
print(tfidf_vectorizer.get_feature_names())

kmt = KMeans(n_clusters=number_of_clusters)
kmt.fit(Zt)

print("Top terms per cluster:")
order_centroids = kmt.cluster_centers_.argsort()[:, ::-1]
terms_zt = tfidf_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms_zt[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))

# tf-idf vectorization (use_idf=True this time; the original repeated use_idf=False)
tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer, use_idf=True, norm='l2')
Ztr = tfidf_vectorizer.fit_transform(texts)
print(tfidf_vectorizer.get_feature_names())

kmtr = KMeans(n_clusters=number_of_clusters)
kmtr.fit(Ztr)

print("Top terms per cluster:")
order_centroids = kmtr.cluster_centers_.argsort()[:, ::-1]
terms_ztr = tfidf_vectorizer.get_feature_names()
for i in range(number_of_clusters):
    top_words = [terms_ztr[ind] for ind in order_centroids[i, :5]]
    print("Cluster {}: {}".format(i, ' '.join(top_words)))

# The clusters split largely along stemmed character names: 'hermion', 'lili', 'jame', etc.
Time to build the network

Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                        (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
                                                         (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

        # Activation function is the sigmoid function
        self.activation_function = lambda x: 1 / (1 + np.exp(-x))

    def train(self, inputs_list, targets_list):
        # Convert inputs list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        ### Forward pass ###
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)          # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)              # signals from hidden layer
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs                                          # identity activation on the output layer

        ### Backward pass ###
        # Output layer error is the difference between desired target and actual output.
        output_errors = targets - final_outputs
        # Errors propagated back to the hidden layer
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
        hidden_grad = hidden_outputs * (1.0 - hidden_outputs)                 # sigmoid gradient

        # Update the weights with a gradient descent step
        self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)          # signals into hidden layer
        hidden_outputs = self.activation_function(hidden_inputs)              # signals from hidden layer
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)  # signals into final output layer
        final_outputs = final_inputs                                          # signals from final output layer
        return final_outputs


def MSE(y, Y):
    return np.mean((y - Y)**2)
dlnd-your-first-neural-network.ipynb
luiscapo/DLND-your-first-neural-network
gpl-3.0
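The forward pass can be checked by hand in plain NumPy, using the same fixed test weights that the unit tests in this notebook use (a standalone sketch, independent of the class definition):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Same shapes as the class: weights are (hidden, input) and (output, hidden),
# inputs are column vectors
w_ih = np.array([[0.1, 0.4, -0.3],
                 [-0.2, 0.5, 0.2]])
w_ho = np.array([[0.3, -0.1]])
x = np.array([[0.5], [-0.2], [0.1]])

hidden = sigmoid(w_ih.dot(x))   # sigmoid activation on the hidden layer
output = w_ho.dot(hidden)       # identity activation on the output layer
```

Working it through: the hidden inputs are -0.06 and -0.18, the sigmoids come out near 0.485 and 0.455, and the final output is about 0.09999, matching the expected value in the notebook's test_run unit test.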
Training the network

Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of epochs

This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.

Choose the learning rate

This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes

The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance.
If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
import sys

### Set the hyperparameters here ###
epochs = 4000
learning_rate = 0.01
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.loc[batch].values,      # .ix was removed; .loc is the modern call
                              train_targets.loc[batch]['cnt']):
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4]
                     + "% ... Training loss: " + str(train_loss)[:5]
                     + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(top=0.5)  # the ymax keyword was removed from matplotlib
Thinking about your results

Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.

Your answer below

With these parameters - epochs = 4000, learning_rate = 0.01, hidden_nodes = 30, output_nodes = 1 - my model predicts with the following losses: training loss 0.050, which measures how well the network predicts on training data, and validation loss 0.129, which measures how well it predicts on new data. Validation loss will always be higher than training loss; an excessive difference would mean the model has overfitted the training set and cannot predict new data. These values seem good, judging by the forums and Slack and by comparison with tests using different parameters. The model has problems on days of low occupancy - it overestimates in those cases (see Dec 23rd through Dec 26th) - but on high-occupancy days it is quite accurate. I don't yet know for sure why the network performs better on those days (I am asking in the forums), but I think the model overreacts to small variations in the data because of overfitting: it fits the training data so well that it tracks variations very quickly, so when the variations are small the model overreacts, as can be seen in the graphs.

Unit tests

Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                       [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[0.37275328, -0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[0.10562014, 0.39775194, -0.29887597],
                                              [-0.20185996, 0.50074398, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
From the output above we can see, at a first pass, that the program itself executes quickly and spends most of its time waiting on writes. This only gives a rough picture, though; it cannot pinpoint specific lines of code.

contextmanager

Use Python's context-manager protocol to time a block of code: - __enter__: record the start time - __exit__: record the end time
%%writefile timer.py
import time

class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.secs = self.end - self.start
        self.msecs = self.secs * 1000  # millisecs
        if self.verbose:
            print('elapsed time: %f ms' % self.msecs)  # was a comma, which would print the tuple

from timer import Timer
with Timer() as t:
    for i in range(1000000):
        pass
print('elapsed time %s s' % t.secs)
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
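The same idea can be written with `contextlib.contextmanager` instead of a class: the code before `yield` plays the role of `__enter__` and the `finally` block the role of `__exit__` (a minimal sketch; the `timed` helper name is made up here):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(result):
    # result is a mutable dict the caller reads after the block finishes
    start = time.perf_counter()
    try:
        yield result
    finally:
        result['secs'] = time.perf_counter() - start

measured = {}
with timed(measured):
    for i in range(100000):
        pass
```

After the with block, `measured['secs']` holds the elapsed wall-clock time; `perf_counter` is used because it is the recommended clock for interval timing.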
You can write these timings to a log: wrap the key operations (database, network, etc.) as above while writing code, then diagnose performance problems by analyzing the logs. You could also extend this to write each measurement to a database for analysis.

line_profiler

line_profiler reports the execution time of every single line of code. To use it, install it with pip install line_profiler. After a successful install you get an executable called kernprof. When profiling code with this tool, decorate the functions of interest with @profile (no explicit import is needed; kernprof injects it automatically).
%%writefile slow_app_for_profiler.py
import sys
import time

@profile
def mock_download():
    for i in range(5):
        time.sleep(1)

@profile
def mock_database():
    for i in range(20):
        time.sleep(0.1)

@profile
def main():
    mock_download()
    mock_database()

if __name__ == "__main__":
    sys.exit(main())

!pip install line_profiler
!kernprof -l -v slow_app_for_profiler.py
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
The -l option tells kernprof to inject @profile into the script; -v tells kernprof to print the results to the console. Line #: the line number. Hits: how many times the line was executed. Time: the total time spent on the line. Per Hit: the time per single execution of the line. % Time: the line's share of the total (function) time. Line Contents: the source code. The results clearly show the cost of each line, which is very handy for ordinary scripts, but what about a Django project: - django-devserver: good for spotting performance problems in development, but many issues only show up in production. http://djangotricks.blogspot.com/2015/01/performance-bottlenecks-in-django-views.html - django-debug-toolbar - yet-another-django-profiler A quick look shows that django-devserver uses LineProfiler under the hood, so we can also hook LineProfiler in at the code level. memory_profiler memory_profiler reports the memory consumption of each line.
!pip install memory_profiler psutil !python -m memory_profiler slow_app_for_profiler.py
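Beyond decorator-based tools like line_profiler and memory_profiler, profiling can also be driven entirely from code. As a rough, stdlib-only sketch of that idea (slow_function below is a made-up placeholder, not from the scripts above), Python's built-in cProfile can wrap a call and report where time went:

```python
import cProfile
import io
import pstats

def slow_function():
    # Placeholder workload standing in for real application code
    total = 0
    for i in range(100000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_function()
profiler.disable()

# Render the stats into a string instead of stdout
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats('cumulative').print_stats(5)
report = stream.getvalue()
print('slow_function' in report)
```

The same pattern works inside a web view or any long-running process, which is one way to get line- or function-level timings without routing everything through kernprof.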
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
<img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%"> Problem 2 For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature data (train_features/valid_features/test_features) - labels - Placeholder tensor for label data (train_labels/valid_labels/test_labels) - weights - Variable Tensor with random numbers from a truncated normal distribution. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help. - biases - Variable Tensor with all zeros. - See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help. If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
features_count = 784 labels_count = 10 # TODO: Set the features and labels tensors # features = # labels = # TODO: Set the weights and biases tensors # weights = # biases = ### DON'T MODIFY ANYTHING BELOW ### #Test Cases from tensorflow.python.ops.variables import Variable assert features._op.name.startswith('Placeholder'), 'features must be a placeholder' assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder' assert isinstance(weights, Variable), 'weights must be a TensorFlow variable' assert isinstance(biases, Variable), 'biases must be a TensorFlow variable' assert features._shape == None or (\ features._shape.dims[0].value is None and\ features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect' assert labels._shape == None or (\ labels._shape.dims[0].value is None and\ labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect' assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect' assert biases._variable._shape == (10), 'The shape of biases is incorrect' assert features._dtype == tf.float32, 'features must be type float32' assert labels._dtype == tf.float32, 'labels must be type float32' # Feed dicts for training, validation, and test session train_feed_dict = {features: train_features, labels: train_labels} valid_feed_dict = {features: valid_features, labels: valid_labels} test_feed_dict = {features: test_features, labels: test_labels} # Linear Function WX + b logits = tf.matmul(features, weights) + biases prediction = tf.nn.softmax(logits) # Cross entropy cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1) # Training loss loss = tf.reduce_mean(cross_entropy) # Create an operation that initializes all variables init = tf.global_variables_initializer() # Test Cases with tf.Session() as session: session.run(init) session.run(loss, feed_dict=train_feed_dict) session.run(loss, feed_dict=valid_feed_dict) session.run(loss, 
feed_dict=test_feed_dict) biases_data = session.run(biases) assert not np.count_nonzero(biases_data), 'biases must be zeros' print('Tests Passed!') # Determine if the predictions are correct is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1)) # Calculate the accuracy of the predictions accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32)) print('Accuracy function created.')
Term_1/TensorFlow_3/TensorFlow_Lab/lab.ipynb
akshaybabloo/Car-ND
mit
These are our observations: The maximum numbers of survivors are in the first and third classes, respectively. With respect to the total number of passengers in each class, first class has the maximum survivors at around 61%. With respect to the total number of passengers in each class, third class has the minimum number of survivors at around 25%. This is our key takeaway: There was clearly a preference toward saving those from the first class as the ship was sinking; first class also had the maximum percentage of survivors. What is the distribution of survivors based on gender among the various classes?
# Checking for any null values df['Sex'].isnull().value_counts() # Male passengers survived in each class male_survivors = df[df['Sex'] == 'male'].groupby('Pclass')['Survived'].agg(sum) male_survivors # Total Male Passengers in each class male_total_passengers = df[df['Sex'] == 'male'].groupby('Pclass')['PassengerId'].count() male_total_passengers male_survivor_percentage = male_survivors / male_total_passengers male_survivor_percentage # Female Passengers survived in each class female_survivors = df[df['Sex'] == 'female'].groupby('Pclass')['Survived'].agg(sum) female_survivors # Total Female Passengers in each class female_total_passengers = df[df['Sex'] == 'female'].groupby('Pclass')['PassengerId'].count() female_survivor_percentage = female_survivors / female_total_passengers female_survivor_percentage # Plotting the total passengers who survived based on Gender fig = plt.figure() ax = fig.add_subplot(111) index = np.arange(male_survivors.count()) bar_width = 0.35 rect1 = ax.bar(index, male_survivors, bar_width, color='blue',label='Men') rect2 = ax.bar(index + bar_width, female_survivors, bar_width, color='y', label='Women') ax.set_ylabel('Survivor Numbers') ax.set_title('Male and Female survivors based on class') xTickMarks = male_survivors.index.values.tolist() ax.set_xticks(index + bar_width) xtickNames = ax.set_xticklabels(xTickMarks) plt.setp(xtickNames, fontsize=20) plt.legend() plt.tight_layout() plt.show() # Plotting the percentage of passengers who survived based on Gender fig = plt.figure() ax = fig.add_subplot(111) index = np.arange(male_survivor_percentage.count()) bar_width = 0.35 rect1 = ax.bar(index, male_survivor_percentage, bar_width, color='blue', label='Men') rect2 = ax.bar(index + bar_width, female_survivor_percentage, bar_width, color='y', label='Women') ax.set_ylabel('Survivor Percentage') ax.set_title('Percentage Male and Female of survivors based on class') xTickMarks = male_survivor_percentage.index.values.tolist() ax.set_xticks(index + 
bar_width) xtickNames = ax.set_xticklabels(xTickMarks) plt.setp(xtickNames, fontsize=20) plt.legend() plt.tight_layout() plt.show()
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
These are our observations: The majority of survivors are female in all the classes. More than 90% of female passengers in first and second class survived. The percentage of male passengers who survived in first and third class, respectively, is comparable. This is our key takeaway: Female passengers were given preference for lifeboats and the majority were saved. What is the distribution of non-survivors among the various classes who had family aboard the ship?
# Checking for the null values
df['SibSp'].isnull().value_counts()

# Checking for the null values
df['Parch'].isnull().value_counts()

# Total number of non-survivors with family in each class
# Note: & binds tighter than |, so the family condition must be grouped
# before intersecting with Survived == 0
non_survivors = df[((df['SibSp'] > 0) | (df['Parch'] > 0)) & (df['Survived'] == 0)].groupby('Pclass')['Survived'].agg('count')
non_survivors

# Total passengers in each class
total_passengers = df.groupby('Pclass')['PassengerId'].count()
total_passengers

non_survivor_percentage = non_survivors / total_passengers
non_survivor_percentage

# Total number of non survivors with family based on class
fig = plt.figure()
ax = fig.add_subplot(111)
rect = ax.bar(non_survivors.index.values.tolist(), non_survivors, color='blue', width=0.5)
ax.set_ylabel('No. of non survivors')
ax.set_title('Total number of non survivors with family based on class')
xTickMarks = non_survivors.index.values.tolist()
ax.set_xticks(non_survivors.index.values.tolist())
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, fontsize=20)
plt.show()

# Plot of percentage of non survivors with family based on class
fig = plt.figure()
ax = fig.add_subplot(111)
rect = ax.bar(non_survivor_percentage.index.values.tolist(), non_survivor_percentage, color='blue', width=0.5)
ax.set_ylabel('Non Survivor Percentage')
ax.set_title('Percentage of non survivors with family based on class')
xTickMarks = non_survivor_percentage.index.values.tolist()
ax.set_xticks(non_survivor_percentage.index.values.tolist())
xtickNames = ax.set_xticklabels(xTickMarks)
plt.setp(xtickNames, fontsize=20)
plt.show()
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
These are our observations: There are a lot of non-survivors in the third class. Second class has the fewest non-survivors with relatives. With respect to the total number of passengers, first class, among those who had relatives aboard, has the maximum non-survivor percentage and third class has the least. This is our key takeaway: Even though third class has the highest number of non-survivors with relatives aboard, it primarily had passengers who did not have relatives on the ship, whereas in first class most of the people had relatives aboard the ship. What was the survival percentage among different age groups?
# Checking for null values df['Age'].isnull().value_counts() # Defining the age binning interval age_bin = [0, 18, 25, 40, 60, 100] # Creating the bins df['AgeBin'] = pd.cut(df.Age, bins=age_bin) d_temp = df[np.isfinite(df['Age'])] # Number of survivors based on Age bin survivors = d_temp.groupby('AgeBin')['Survived'].agg(sum) survivors # Total passengers in each bin total_passengers = d_temp.groupby('AgeBin')['Survived'].agg('count') total_passengers list(total_passengers.index.values) # Plotting the pie chart of total passengers in each bin plt.pie(total_passengers, labels=list(total_passengers.index.values), autopct='%1.1f%%', shadow=True, startangle=90) plt.title('Total Passengers in different age groups') plt.show() # Plotting the pie chart of percentage passengers in each bin plt.pie(survivors, labels=list(total_passengers.index.values), autopct='%1.1f%%', shadow=True, startangle=90) plt.title('Survivors in different age groups') plt.show()
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
Load Dataset
IB = pd.read_csv("india-batting.csv") IB.head(5) IB.columns
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Split the year from the 'Start Date' column and create a new column named 'year'
year=[] for i in range(len(IB)): x = IB['Start Date'][i].split(" ")[-1] year.append(x) year= pd.DataFrame(year,columns=["year"]) mr = [IB,year] df=pd.concat(mr,axis=1) df.head(5)
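The loop above works, but pandas can do the same split in one vectorized call. A minimal sketch on a small hypothetical frame (the dates below are invented and merely mimic the 'Start Date' format):

```python
import pandas as pd

# Hypothetical frame standing in for the 'Start Date' column of IB
demo = pd.DataFrame({'Start Date': ['15 Jan 2016', '3 Feb 2016', '20 Dec 2015']})

# Take the last whitespace-separated token of every date string
demo['year'] = demo['Start Date'].str.split(' ').str[-1]
print(demo['year'].tolist())
```

The `.str.split(' ').str[-1]` chain avoids the explicit Python loop and the extra `pd.concat` step.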
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Find all rows for the year 2016 in the new dataframe and remove the 'DNB' (did not bat) rows that appear in the 'Runs' column
df_16 = df[df["year"]=="2016"] df_16=df_16.reset_index(drop=True) df_16.columns Runs = np.array(df_16["Runs"]) np.squeeze(np.where(Runs=="DNB")) ndf_16=df_16[0:88] ndf_16.head(5)
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Create a DataFrame of unique player names with each player's maximum and total score
ndf_16.Player.unique()
playernames = ndf_16.Player.unique()

# Convert 'Runs' to integers; not-out scores like "82*" need the '*' stripped
runs = []
for i in range(len(ndf_16)):
    try:
        r = int(ndf_16['Runs'][i])
    except ValueError:
        r = int(ndf_16['Runs'][i].split("*")[0])
    runs.append(r)

modRun = pd.DataFrame(runs, columns=["modRun"])
modDf = pd.concat([ndf_16, modRun], axis=1)

def PlayerMaxRun(playername):
    tmpPlayer = modDf[modDf["Player"] == playername]
    tmpPlayer = tmpPlayer.reset_index(drop=True)
    maxrun = np.max(np.array(tmpPlayer["modRun"]))
    totalrun = sum(np.array(tmpPlayer["modRun"]))
    return (maxrun, totalrun)

tb1 = []
rnn = []
for i in playernames:
    [mxrn, trn] = PlayerMaxRun(i)
    tb1.append([i, mxrn, trn])
    rnn.append(mxrn)
tb1
dfx = pd.DataFrame(tb1, columns=['player_name', 'max_run', 'total_run'])
dfx
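The per-player loop above can also be replaced by a single groupby aggregation. A sketch on hypothetical data (the player names and run values below are invented for illustration):

```python
import pandas as pd

# Hypothetical frame standing in for modDf above (invented players and runs)
demo_runs = pd.DataFrame({
    'Player': ['V Kohli', 'V Kohli', 'MS Dhoni', 'MS Dhoni'],
    'modRun': [82, 54, 30, 47],
})

# Named aggregation (pandas >= 0.25) computes both stats in one pass
summary = (demo_runs.groupby('Player')['modRun']
           .agg(max_run='max', total_run='sum')
           .reset_index())
print(summary)
```

One groupby call replaces the `PlayerMaxRun` helper and both accumulation lists.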
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Visualize in Plotly
import plotly
import plotly.plotly as py
import plotly.graph_objs as go

# API key redacted; substitute your own Plotly credentials
plotly.tools.set_credentials_file(username='ayon.mi1', api_key='YOUR_API_KEY')

data = [go.Bar(
    x=np.array(dfx['player_name']),
    y=np.array(dfx['max_run'])
)]
layout = go.Layout(
    title='Maximum Score per player',
    xaxis=dict(
        title='Players_name',
        titlefont=dict(
            family='Courier New, monospace',
            size=18,
            color='#7f7f7f'
        )
    ),
    yaxis=dict(
        title='Max_Run',
        titlefont=dict(
            family='Courier New, monospace',
            size=18,
            color='#7f7f7f'
        )
    )
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='basic-bar')

from IPython.display import Image
Image(filename='f1.png')

data = [go.Bar(
    x=np.array(dfx['player_name']),
    y=np.array(dfx['total_run'])
)]
layout = go.Layout(
    title='Total_Run per player',
    xaxis=dict(
        title='Players_name',
        titlefont=dict(
            family='Courier New, monospace',
            size=18,
            color='#7f7f7f'
        )
    ),
    yaxis=dict(
        title='Total_run',
        titlefont=dict(
            family='Courier New, monospace',
            size=18,
            color='#7f7f7f'
        )
    )
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='basic-bar')

from IPython.display import Image
Image(filename='f2.png')
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Expected output: <table> <tr> <td> **gradients["dWaa"][1][2] ** </td> <td> 10.0 </td> </tr> <tr> <td> **gradients["dWax"][3][1]** </td> <td> -10.0 </td> </td> </tr> <tr> <td> **gradients["dWya"][1][2]** </td> <td> 0.29713815361 </td> </tr> <tr> <td> **gradients["db"][4]** </td> <td> [ 10.] </td> </tr> <tr> <td> **gradients["dby"][1]** </td> <td> [ 8.45833407] </td> </tr> </table> 2.2 - Sampling Now assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: <img src="images/dinos3.png" style="width:500;height:300px;"> <caption><center> Figure 3: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network then sample one character at a time. </center></caption> Exercise: Implement the sample function below to sample characters. You need to carry out 4 steps: Step 1: Pass the network the first "dummy" input $x^{\langle 1 \rangle} = \vec{0}$ (the vector of zeros). This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$ Step 2: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations: $$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$ $$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$ $$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$ Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. We have provided a softmax() function that you can use. 
Step 3: Carry out sampling: Pick the next character's index according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. To implement it, you can use np.random.choice. Here is an example of how to use np.random.choice(): python np.random.seed(0) p = np.array([0.1, 0.0, 0.7, 0.2]) index = np.random.choice([0, 1, 2, 3], p = p.ravel()) This means that you will pick the index according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$. Step 4: The last step to implement in sample() is to overwrite the variable x, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character you've chosen as your prediction. You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating you've reached the end of the dinosaur name.
# GRADED FUNCTION: sample def sample(parameters, char_to_ix, seed): """ Sample a sequence of characters according to a sequence of probability distributions output of the RNN Arguments: parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b. char_to_ix -- python dictionary mapping each character to an index. seed -- used for grading purposes. Do not worry about it. Returns: indices -- a list of length n containing the indices of the sampled characters. """ # Retrieve parameters and relevant shapes from "parameters" dictionary Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b'] vocab_size = by.shape[0] n_a = Waa.shape[1] ### START CODE HERE ### # Step 1: Create the one-hot vector x for the first character (initializing the sequence generation). (≈1 line) x = np.zeros((vocab_size, 1)) # Step 1': Initialize a_prev as zeros (≈1 line) a_prev = np.zeros((n_a, 1)) # Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line) indices = [] # Idx is a flag to detect a newline character, we initialize it to -1 idx = -1 # Loop over time-steps t. At each time-step, sample a character from a probability distribution and append # its index to "indices". We'll stop if we reach 50 characters (which should be very unlikely with a well # trained model), which helps debugging and prevents entering an infinite loop. 
counter = 0 newline_character = char_to_ix['\n'] while (idx != newline_character and counter != 50): # Step 2: Forward propagate x using the equations (1), (2) and (3) a = np.tanh(np.matmul(Wax, x) + np.matmul(Waa, a_prev) + b) z = np.matmul(Wya, a) + by y = softmax(z) # for grading purposes np.random.seed(counter+seed) # Step 3: Sample the index of a character within the vocabulary from the probability distribution y idx = np.random.choice(range(vocab_size), p = y.ravel()) # Append the index to "indices" indices.append(idx) # Step 4: Overwrite the input character as the one corresponding to the sampled index. x = np.zeros((vocab_size, 1)) x[idx] = 1 # Update "a_prev" to be "a" a_prev = a # for grading purposes seed += 1 counter +=1 ### END CODE HERE ### if (counter == 50): indices.append(char_to_ix['\n']) return indices np.random.seed(2) _, n_a = 20, 100 Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a) b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1) parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by} indices = sample(parameters, char_to_ix, 0) print("Sampling:") print("list of sampled indices:", indices) print("list of sampled characters:", [ix_to_char[i] for i in indices])
coursera/deep-neural-network/quiz and assignments/RNN/Dinosaurus+Island+--+Character+level+language+model+final+-+v3.ipynb
jinntrance/MOOC
cc0-1.0
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method.
class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. #self.activation_function = lambda x : sigmoid(x) # Replace 0 with your sigmoid calculation. self.activation_function = lambda x: 1/(1+np.exp(-x)) ### If the lambda code above is not something you're familiar with, # You can uncomment out the following three lines and put your # implementation there instead. # #def sigmoid(x): # return 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation here #self.activation_function = sigmoid #def sigmoid(x): # return 1/(1+np.exp(-x)) def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### ### Forward pass ### # TODO: Hidden layer - Replace these values with your calculations. 
hidden_inputs =np.dot(X,self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer #delta=output_error_term*hidden_outputs # TODO: Output layer - Replace these values with your calculations. final_inputs = np.matmul(hidden_outputs,self.weights_hidden_to_output) final_outputs = final_inputs # signals from final output layer #### Implement the backward pass here #### ### Backward pass ### # TODO: Output error - Replace this value with your calculations. error = y-final_outputs # Output layer error is the difference between desired target and actual output. # TODO: Calculate the hidden layer's contribution to the error hidden_error = error*(self.weights_hidden_to_output.T) # TODO: Backpropagated error terms - Replace these values with your calculations. output_error_term = error hidden_error_term = hidden_error*hidden_outputs*(1-hidden_outputs) # Weight step (input to hidden) delta_weights_i_h += hidden_error_term*X[:,None] # Weight step (hidden to output) delta_weights_h_o += output_error_term*hidden_outputs[:,None] # TODO: Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr*delta_weights_h_o/ n_records# update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(features,self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs)# signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. 
final_inputs = np.dot(hidden_outputs,self.weights_hidden_to_output) # signals into final output layer final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2)
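The hidden_error_term above relies on the sigmoid derivative identity σ'(x) = σ(x)(1 − σ(x)). A quick numerical sanity check of that identity (not part of the original notebook), comparing it against a central-difference estimate:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-5, 5, 11)

# Analytic derivative via the identity sigma'(x) = sigma(x) * (1 - sigma(x))
analytic = sigmoid(x) * (1 - sigmoid(x))

# Central-difference numerical derivative for comparison
h = 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)

print(np.allclose(analytic, numeric, atol=1e-6))  # → True
```

Checks like this are a cheap way to catch sign and term errors in hand-written backpropagation code.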
first-neural-network/Your_first_neural_network.ipynb
tanmay987/deepLearning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make.
Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.5
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    # .ix is deprecated in modern pandas; .loc does the label-based selection
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']

    network.train(X, y)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
    sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
                     + "% ... Training loss: " + str(train_loss)[:5] \
                     + " ... Validation loss: " + str(val_loss)[:5])
    sys.stdout.flush()

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
first-neural-network/Your_first_neural_network.ipynb
tanmay987/deepLearning
mit
<a id='wrangling'></a> Data Wrangling General Properties
# Load TMDb data and print out a few lines. Perform operations to inspect data # types and look for instances of missing or possibly errant data. tmdb_movies = pd.read_csv('tmdb-movies.csv') tmdb_movies.head() tmdb_movies.describe()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Data Cleaning As evident from the data, the cast of each movie is stored as a string separated by the | symbol. This needs to be converted into a suitable type in order to consume it properly later.
# Pandas reads empty string values as NaN; make them empty strings
tmdb_movies.cast.fillna('', inplace=True)
tmdb_movies.genres.fillna('', inplace=True)
tmdb_movies.director.fillna('', inplace=True)
tmdb_movies.production_companies.fillna('', inplace=True)

def string_to_array(data):
    """
    Split the given string by the separator `|` and return the result as an array.
    """
    return data.split('|')
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Convert cast, genres, director and production_companies columns to array
tmdb_movies.cast = tmdb_movies.cast.apply(string_to_array) tmdb_movies.genres = tmdb_movies.genres.apply(string_to_array) tmdb_movies.director = tmdb_movies.director.apply(string_to_array) tmdb_movies.production_companies = tmdb_movies.production_companies.apply(string_to_array)
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
<a id='eda'></a> Exploratory Data Analysis Research Question 1: What is the yearly revenue change? It's evident from the observations below that there is no clear trend in the change in mean revenue over the years; mean revenue from year to year is quite unstable. This can be attributed to the number of movies and the number of movies having high or low revenue. The gap between budget and revenue has widened after 2000; this can be attributed to the worldwide circulation of movies compared to earlier days. There seems to be a correlation between gross budget and gross revenue over the years. When the log of revenue_adj is plotted against the log of budget_adj, we can see a clear correlation between a movie's revenue and its budget.
def yearly_growth(mean_revenue): return mean_revenue - mean_revenue.shift(1).fillna(0) # Show change in mean revenue over years, considering only movies for which we have revenue data movies_with_budget = tmdb_movies[tmdb_movies.budget_adj > 0] movies_with_revenue = movies_with_budget[movies_with_budget.revenue_adj > 0] revenues_over_years = movies_with_revenue.groupby('release_year').sum() revenues_over_years.apply(yearly_growth)['revenue'].plot() revenues_over_years[['budget_adj', 'revenue_adj']].plot() def log(data): return np.log(data) movies_with_revenue[['budget_adj', 'revenue_adj']].apply(log) \ .sort_values(by='budget_adj').set_index('budget_adj')['revenue_adj'].plot(figsize=(20,6))
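The log–log relationship mentioned above can be quantified with a correlation coefficient rather than just eyeballed from the plot. A sketch on an invented four-row frame (the budget and revenue figures below are hypothetical, not taken from the dataset):

```python
import numpy as np
import pandas as pd

# Invented budget/revenue figures, standing in for the adjusted columns
demo_movies = pd.DataFrame({
    'budget_adj': [1e6, 5e6, 2e7, 1e8],
    'revenue_adj': [3e6, 1.2e7, 9e7, 4e8],
})

# Pearson correlation of the logged columns quantifies the log-log trend
logged = demo_movies[['budget_adj', 'revenue_adj']].apply(np.log)
corr = logged['budget_adj'].corr(logged['revenue_adj'])
print(corr)
```

A value close to 1 on the real data would support the claim of a strong budget–revenue relationship on the log scale.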
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
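The `yearly_growth` helper above subtracts the previous year's value via `shift`. A minimal sketch (with made-up revenue numbers, not the TMDb data) showing it is equivalent to pandas' built-in `diff`, once the first year's value is filled back in:

```python
import pandas as pd

# Toy yearly mean revenues (hypothetical numbers, just to illustrate the idea)
mean_revenue = pd.Series([100.0, 120.0, 90.0, 150.0],
                         index=[2000, 2001, 2002, 2003])

# The shift-based growth used above...
growth_shift = mean_revenue - mean_revenue.shift(1).fillna(0)

# ...matches diff() with the first year's value filled back in
growth_diff = mean_revenue.diff().fillna(mean_revenue.iloc[0])

assert growth_shift.equals(growth_diff)
```

Using `diff` avoids the explicit `shift`/`fillna` pair and states the intent (year-over-year change) directly.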
Research Question 2: Which genres are most popular from year to year? Since the popularity column indicates the all-time popularity of a movie, it might not be the right metric to measure popularity over years. Instead we can measure the popularity of a movie based on its average vote; here a movie is considered popular if vote_average >= 7. On analyzing the popular movies since 1960 (check the illustrations below), the following observations can be made: Almost all popular movies have the Drama genre. Over the years Comedy, Action and Adventure got popular. In recent years, Documentary, Action and Animation movies got more popularity.
def popular_movies(movies):
    return movies[movies['vote_average'] >= 7]

def group_by_genre(data):
    """
    Take a Series indexed by (release_year, position) whose values are
    genre lists, and return a dictionary keyed by release_year whose
    values are dictionaries mapping each genre to its frequency that year.
    """
    genres_by_year = {}
    for (year, position), genres in data.items():
        for genre in genres:
            if year in genres_by_year:
                if genre in genres_by_year[year]:
                    genres_by_year[year][genre] += 1
                else:
                    genres_by_year[year][genre] = 1
            else:
                genres_by_year[year] = {}
                genres_by_year[year][genre] = 1
    return genres_by_year

def plot(genres_by_year):
    """
    Plot genre frequencies, keeping only years divisible by 5
    to avoid plotting too many graphs.
    """
    for year, genres in genres_by_year.items():
        if year % 5 == 0:
            pd.DataFrame(genres_by_year[year], index=[year]).plot(kind='bar', figsize=(20, 6))

# Group popular movies by genre for each year and look at how genre
# popularity changes over the years.
grouped_genres = group_by_genre(tmdb_movies.groupby('release_year').apply(popular_movies).genres)
plot(grouped_genres)
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
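The nested-dictionary bookkeeping in `group_by_genre` can be expressed more compactly with the standard library's `Counter`. A sketch on toy data (the year/genre values here are invented for illustration):

```python
from collections import Counter, defaultdict

# Toy (year, genres) pairs standing in for the grouped Series iterated above
rows = [(1995, ['Drama', 'Comedy']),
        (1995, ['Drama']),
        (2000, ['Action', 'Drama'])]

# Counter handles the "increment or initialize" logic automatically
genres_by_year = defaultdict(Counter)
for year, genres in rows:
    genres_by_year[year].update(genres)

assert genres_by_year[1995]['Drama'] == 2
```

Each `Counter` behaves like the inner frequency dictionaries built by hand above, so the result can still be fed to `pd.DataFrame` for plotting.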
Research Question 3: What kinds of properties are associated with movies that have high revenues? We can consider those movies with at least 1 billion revenue and see what the common properties among them are. Using this criterion and based on the illustrations below, we can make the following observations about the highest grossing movies: Adventure and Action are the most common genres among these movies, followed by Science Fiction, Fantasy and Family. Most of the movies have more than a 7 average vote; some movies have less than 7, but that is because of a low number of total votes. This means the highest grossing movies are popular as well. Steven Spielberg and Peter Jackson are the directors with the highest number of movies having at least 1 billion revenue. Most directors have only one movie with at least a billion in revenue, hence there seems to be no correlation between highest grossing movies and directors. Most of the cast have one movie with at least a billion in revenue. Warner Bros., Walt Disney, Fox Film and Universal Pictures seem to have figured out the secret of highest grossing movies: they have the highest number of movies with at least a billion in revenue. This does not mean all their movies have pretty high revenue.
highest_grossing_movies = tmdb_movies[tmdb_movies['revenue_adj'] >= 1000000000] \
    .sort_values(by='revenue_adj', ascending=False)
highest_grossing_movies.head()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Find common genres in highest grossing movies
def count_frequency(data):
    frequency_count = {}
    for items in data:
        for item in items:
            if item in frequency_count:
                frequency_count[item] += 1
            else:
                frequency_count[item] = 1
    return frequency_count

highest_grossing_genres = count_frequency(highest_grossing_movies.genres)
print(highest_grossing_genres)
pd.DataFrame(highest_grossing_genres, index=['Genres']).plot(kind='bar', figsize=(20, 8))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
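`count_frequency` above reimplements what `collections.Counter` already provides. A small sketch (with made-up genre lists, not the TMDb data) showing the equivalence:

```python
from collections import Counter
from itertools import chain

# Toy lists of genres, one list per movie
genres_lists = [['Adventure', 'Action'], ['Action'], ['Adventure', 'Fantasy']]

# Equivalent to count_frequency above: flatten the lists, then count
freq = Counter(chain.from_iterable(genres_lists))

assert freq == {'Adventure': 2, 'Action': 2, 'Fantasy': 1}
```

`Counter` also offers `most_common(n)`, which would replace the `sorted(..., key=operator.itemgetter(1), reverse=True)[:n]` pattern used later in this notebook.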
Popularity of highest grossing movies
highest_grossing_movies.vote_average.hist()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Directors of highest grossing movies
def list_to_dict(data, label):
    """
    Build the data and index for a DataFrame from a list of
    (key, value) pairs, storing the values under `label`.
    """
    statistics = {label: []}
    index = []
    for item in data:
        statistics[label].append(item[1])
        index.append(item[0])
    return statistics, index

import operator

high_grossing_dirs = count_frequency(highest_grossing_movies.director)
revenues, indexes = list_to_dict(sorted(high_grossing_dirs.items(),
                                        key=operator.itemgetter(1),
                                        reverse=True)[:20], 'revenue')
pd.DataFrame(revenues, index=indexes).plot(kind='bar', figsize=(20, 5))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Cast of highest grossing movies
high_grossing_cast = count_frequency(highest_grossing_movies.cast)
revenues, index = list_to_dict(sorted(high_grossing_cast.items(),
                                      key=operator.itemgetter(1),
                                      reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Production companies of highest grossing movies
high_grossing_prod_comps = count_frequency(highest_grossing_movies.production_companies)
revenues, index = list_to_dict(sorted(high_grossing_prod_comps.items(),
                                      key=operator.itemgetter(1),
                                      reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Highest grossing budget Research Question 4: Who are the top 15 highest grossing directors? We can see the top 15 highest grossing directors in the bar chart below. It seems Steven Spielberg surpasses the other directors in gross revenue.
def grossing(movies, by):
    """
    Return the movies' revenues grouped by the key passed as the
    `by` argument.
    """
    revenues = {}
    for id, movie in movies.iterrows():
        for key in movie[by]:
            if key in revenues:
                revenues[key].append(movie.revenue_adj)
            else:
                revenues[key] = [movie.revenue_adj]
    return revenues

def gross_revenue(data):
    """
    Sum the lists of values of the dictionary and return a new
    dictionary with the same keys but cumulative values.
    """
    gross = {}
    for key, revenues in data.items():
        gross[key] = np.sum(revenues)
    return gross

gross_by_dirs = grossing(movies=movies_with_revenue, by='director')
director_gross_revenue = gross_revenue(gross_by_dirs)
top_15_directors = sorted(director_gross_revenue.items(),
                          key=operator.itemgetter(1), reverse=True)[:15]
revenues, index = list_to_dict(top_15_directors, 'director')
pd.DataFrame(data=revenues, index=index).plot(kind='bar', figsize=(15, 9))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Research Question 5: Who are the top 15 highest grossing actors? We can find the top 15 actors based on gross revenue as shown in the subsequent sections below. As we can see, Harrison Ford tops the chart with the highest grossing.
gross_by_actors = grossing(movies=tmdb_movies, by='cast')
actors_gross_revenue = gross_revenue(gross_by_actors)
top_15_actors = sorted(actors_gross_revenue.items(),
                       key=operator.itemgetter(1), reverse=True)[:15]
revenues, indexes = list_to_dict(top_15_actors, 'actors')
pd.DataFrame(data=revenues, index=indexes).plot(kind='bar', figsize=(15, 9))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Step 1: Truncate the series to the interval that has observations. Outside this interval the interpolation blows up.
print('Original bounds: ', t[0], t[-1])
t_obs = t[D['T_flag'] != -1]
D = D[t_obs[0]:t_obs[-1]]  # Truncate dataframe so it is sandwiched between observed values
t = D.index
T = D['T']
print('New bounds: ', t[0], t[-1])

t_obs = D.index[D['T_flag'] != -1]
t_interp = D.index[D['T_flag'] == -1]
T_obs = D.loc[t_obs, 'T']
T_interp = D.loc[t_interp, 'T']

c = ['b' if flag != -1 else 'orange' for flag in D['T_flag']]  # added the missing closing ']'
plt.scatter(t, T, c=c, alpha=0.5, s=0.5)
plt.title('T')
#obs = plt.scatter(t_obs, T_obs, marker='.', alpha=0.5, s=0.5, color='blue');
#interp = plt.scatter(t_interp, T_interp, marker='.', alpha=0.5, s=0.5, color='red');
# If I plot one after the other, the red is much more prominent... Very annoying
#plt.legend((obs, interp), ('Observed', 'Interpolated'), markerscale=15);
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
Orange dots are interpolated values.
# Centre the data
mu = D['T'].mean()
D.loc[:, 'T'] = D.loc[:, 'T'] - mu
T = D['T']
print('E[T] = ', mu)
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
We want to obtain a stationary "feature" from the data; first differences are an easy place to start.
T0 = T[0]
dT = T.diff()
dT = dT - dT.mean()  # Centre the differences
dT_obs = dT[t_obs]
dT_interp = dT[t_interp]
plt.scatter(t, dT, marker='.', alpha=0.5, s=0.5, c=c)
#obs = plt.scatter(t_obs, dT_obs, marker='.', alpha=0.5, s=0.5, color='blue');
#interp = plt.scatter(t_interp, dT_interp, marker='.', alpha=0.5, s=0.5, color='red');
#plt.legend((obs, interp), ('Observed', 'Interpolated'), markerscale=15);
plt.title('dT')
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
It appears that early temperature sensors had rather imprecise readings. It also appears as though the interpolation introduces some systematic errors. I used pchip interpolation, which tries to avoid overshoot, so we may be seeing the effects of clipping. This would particularly make sense if missing data was from regular periods, e.g. at night when the temperature was reaching a minimum.
rolling1w_dT = dT.rolling(window=7*24)    # 1 week rolling window of dT
rolling1m_dT = dT.rolling(window=30*24)   # 1 month rolling window of dT
rolling1y_dT = dT.rolling(window=365*24)  # 1 year rolling window of dT

fig, axes = plt.subplots(3, 1)
axes[0].plot(rolling1w_dT.var())
axes[1].plot(rolling1m_dT.var())
axes[2].plot(rolling1y_dT.var())
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
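Rolling statistics like those above only produce values once a full window of points is available, and a rolling variance is a simple change-of-regime detector. A sketch with synthetic data (not the CWEEDS series):

```python
import numpy as np
import pandas as pd

# Toy hourly series: a quiet first half, then a much noisier second half
rng = np.random.RandomState(0)
s = pd.Series(np.concatenate([rng.normal(0, 0.1, 200),
                              rng.normal(0, 2.0, 200)]))

# Rolling variance needs `window` points before producing a value
rv = s.rolling(window=50).var()
```

The first `window - 1` entries are NaN (by default `min_periods` equals the window length), and the variance estimate jumps once the window slides into the noisy regime.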
It looks like there is still some nonstationarity in the first differences.
from itertools import product

t_days = [t[np.logical_and(t.month == m, t.day == d)]
          for m, d in product(range(1, 13), range(1, 32))]
day_vars = pd.Series(dT[ti].var() for ti in t_days)
day_vars = day_vars.dropna()
plt.scatter(day_vars.index, day_vars)
r = day_vars.rolling(window=20, center=True)
plt.plot(day_vars.index, r.mean(), color='red', linewidth=2)
plt.title('Variance of dT, folded by days')
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
Generating equations for fully contracted terms In the previous notebook, we computed the coupled cluster energy expression \begin{equation} E = \langle \Phi | e^{-\hat{T}} \hat{H} e^{\hat{T}} | \Phi \rangle = E_0 + \sum_{i}^\mathbb{O} \sum_{a}^\mathbb{V} f^{a}_{i} t^{i}_{a} + \frac{1}{4} \sum_{ij}^\mathbb{O} \sum_{ab}^\mathbb{V} (t^{ij}_{ab} + 2 t^{i}_{a} t^{j}_{b}) v^{ab}_{ij} \end{equation} with the following code
E0 = w.op("E_0", [""])
F = w.utils.gen_op('f', 1, 'ov', 'ov')
V = w.utils.gen_op('v', 2, 'ov', 'ov')
H = E0 + F + V
T = w.op("t", ["v+ o", "v+ v+ o o"])
Hbar = w.bch_series(H, T, 2)
expr = wt.contract(Hbar, 0, 0)
expr
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
First we convert the derived expression into a set of equations. You get back a dictionary that shows all the components of the equations. The vertical bar (|) in the key separates the lower (left) and upper (right) indices in the resulting expression
mbeq = expr.to_manybody_equations('r')
mbeq
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Converting equations to code From the equations generated above, you can get tensor contractions by calling the compile function on each individual term in the equations. Here we generate Python code that uses numpy's einsum function to evaluate contractions. To use this code you will need to import einsum (`from numpy import einsum`) and you will need to define a dictionary of tensors (f["vo"], v["vvoo"], t["ov"], ...) of appropriate dimensions:
for eq in mbeq['|']: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
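The generated `einsum` lines contract tensor blocks keyed by space labels. A sketch of how such a line is evaluated, using random tensors and a hypothetical term with the same shape as the energy expression (this is an illustration, not actual Wick&d output):

```python
import numpy as np
from numpy import einsum

no, nv = 3, 4  # numbers of occupied/virtual orbitals (arbitrary)

# Tensor blocks keyed by space labels, as the generated code expects
f = {"vo": np.random.rand(nv, no)}
t = {"ov": np.random.rand(no, nv)}

# A hypothetical generated term: E += f^a_i t^i_a
E = einsum("ai,ia->", f["vo"], t["ov"])

# The same contraction written as an explicit double sum
E_loop = sum(f["vo"][a, i] * t["ov"][i, a]
             for a in range(nv) for i in range(no))
assert abs(E - E_loop) < 1e-12
```

The subscript string `"ai,ia->"` mirrors the upper/lower index pattern of the term; a fully contracted term has an empty output specification.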
Many-body equations Suppose we want to compute the contributions to the coupled cluster residual equations \begin{equation} r^{i}_{a} = \langle \Phi| { \hat{a}^\dagger_{i} \hat{a}_a } [\hat{F},\hat{T}_1] | \Phi \rangle \end{equation} Wick&d can compute this quantity using the corresponding many-body representation of the operator $[\hat{F},\hat{T}_1]$. If you expand the operator $[\hat{F},\hat{T}_1]$ into its second quantized operator components we can identify a particle-hole excitation term: \begin{equation} [\hat{F},\hat{T}_1] = g^{j}_{b} { \hat{a}^\dagger_{b} \hat{a}_j } + \cdots \end{equation} From this expression we see that the residual $r^{i}_{a}$ is precisely the quantity we need to evaluate since \begin{equation} r^{i}_{a} = \langle \Phi| { \hat{a}^\dagger_{i} \hat{a}_a } [\hat{F},\hat{T}_1] | \Phi \rangle = g^{j}_{b} \langle \Phi| { \hat{a}^\dagger_{i} \hat{a}_a } { \hat{a}^\dagger_{b} \hat{a}_j } | \Phi \rangle = g^{i}_{a} \end{equation} where in the last step we applied Wick's theorem to evaluate the expectation value. Let's start by computing $[\hat{F},\hat{T}_1]$ with Wick's theorem:
F = w.utils.gen_op('f', 1, 'ov', 'ov')
T1 = w.op("t", ["v+ o"])
expr = wt.contract(w.commutator(F, T1), 2, 2)
latex(expr)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Next, we call to_manybody_equations to generate many-body equations
mbeq = expr.to_manybody_equations('g')
print(mbeq)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Out of all the terms, we select the terms that multiply the excitation operator ${ \hat{a}^\dagger_{a} \hat{a}_i }$ ("o|v")
mbeq_ov = mbeq["o|v"]
for eq in mbeq_ov:
    latex(eq)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Lastly, we can compile these equations into code
for eq in mbeq_ov: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Antisymmetrization of uncontracted operator indices To gain efficiency, Wick&d treats contractions involving inequivalent lines in a special way. Consider the following term contributing to the CCSD doubles amplitude equations that arises from $[\hat{V}_\mathrm{ovov},\hat{T}_2]$ (see the sixth term in Eq. (153) of Crawford and Schaefer, https://doi.org/10.1002/9780470125915.ch2) \begin{equation} r^{ij}_{ab} \leftarrow \langle \Phi| { \hat{a}^\dagger_{i}\hat{a}^\dagger_{j} \hat{a}_b \hat{a}_a } [\hat{V}_\mathrm{ovov},\hat{T}_2] | \Phi \rangle = - P(ij)P(ab) \sum_{kc} \langle kb \| jc \rangle t^{ik}_{ac} \end{equation} where $P(pq)$ is an antisymmetric permutation operator [$P(pq)f(p,q) = f(p,q) - f(q,p)$]. This expression corresponds to a single diagram, but algebraically it consists of four terms obtained by the index permutations $i \leftrightarrow j$ and $a \leftrightarrow b$, so that the residual is antisymmetric with respect to separate permutations of upper and lower indices. Let's first take a look at what happens when we apply Wick's theorem with Wick&d to the quantity $[\hat{V}_\mathrm{ovov},\hat{T}_2]$
T2 = w.op("t", ["v+ v+ o o"])
Vovov = w.op("v", ["o+ v+ v o"])
expr = wt.contract(w.commutator(Vovov, T2), 4, 4)
latex(expr)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
In Wick&d the two-body part of $[\hat{V}_\mathrm{ovov},\hat{T}_2]$ gives us only a single term \begin{equation} [\hat{V}_\mathrm{ovov},\hat{T}_2]_\text{2-body} = - \sum_{abcijk} \langle kb \| jc \rangle t^{ik}_{ac} { \hat{a}^{ab}_{ij} } = \sum_{abij} g^{ij}_{ab} { \hat{a}^{ab}_{ij} } \end{equation} where the tensor $g^{ij}_{ab}$ is defined as \begin{equation} g^{ij}_{ab} = -\sum_{kc} \langle kb \| jc \rangle t^{ik}_{ac} \end{equation} Note that contrary to $r^{ij}_{ab}$, the tensor $g^{ij}_{ab}$ does not have any specific index symmetry. In other words, you need to enforce the antisymmetry. This quantity is related to the CCSD residual contribution reported above in the following way \begin{equation} r^{ij}_{ab} \leftarrow \langle \Phi| { \hat{a}^\dagger_{i}\hat{a}^\dagger_{j} \hat{a}_b \hat{a}_a } [\hat{V}_\mathrm{ovov},\hat{T}_2] | \Phi \rangle = g^{ij}_{ab} - g^{ji}_{ab} - g^{ij}_{ba} + g^{ji}_{ba} = P(ij)P(ab) g^{ij}_{ab} \end{equation} Therefore, this example shows an important distinction between the traditional projective equation (which yields $P(ij)P(ab) g^{ij}_{ab}$) vs. the many-body approach (which yields $g^{ij}_{ab}$). How is the difference between these two approaches reconciled in practice? When you solve the many-body equations, you must enforce the antisymmetry of the equations, which means that the residual contribution should be written as \begin{equation} \sum_{abij} g^{ij}_{ab} { \hat{a}^{ab}_{ij} } = \frac{1}{4} \sum_{abij} (P(ij)P(ab) g^{ij}_{ab}) { \hat{a}^{ab}_{ij} } \end{equation} The factor $\frac{1}{4}$ now brings this term into a form consistent with the prefactor we associate with the operator ${ \hat{a}^{ab}_{ij} }$. When you ask Wick&d to compile the many-body equation we again get a single term
for eq in expr.to_manybody_equations('g')['oo|vv']: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Experimental Options The Options class allows the download of options data from Google Finance. The get_options_data method downloads options data for a specified expiry date and provides a formatted DataFrame with a hierarchical index, so it's easy to get to the specific option you want. Available expiry dates can be accessed from the expiry_dates property.
from pandas_datareader.data import Options

fb_options = Options('FB', 'google')
data = fb_options.get_options_data(expiry=fb_options.expiry_dates[0])
data.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/06-Data-Sources/01 - Pandas-Datareader.ipynb
arcyfelix/Courses
apache-2.0
FRED
import pandas_datareader.data as web
import datetime

start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2017, 1, 1)
gdp = web.DataReader("GDP", "fred", start, end)
gdp.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/06-Data-Sources/01 - Pandas-Datareader.ipynb
arcyfelix/Courses
apache-2.0
Split into training and testing Next we split the data into training and testing data sets
(training, test) = ratingsRDD.randomSplit([0.8, 0.2])
numTraining = training.count()
numTest = test.count()

# verify row counts for each dataset
print("Total: {0}, Training: {1}, test: {2}".format(ratingsRDD.count(), numTraining, numTest))
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Build the recommendation model using ALS on the training data I've chosen some values for the ALS parameters. You should probably experiment with different values.
from pyspark.mllib.recommendation import ALS

rank = 50
numIterations = 20
lambdaParam = 0.1
model = ALS.train(training, rank, numIterations, lambdaParam)
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Extract the product (movie) features
import numpy as np

pf = model.productFeatures().cache()
pf_keys = pf.sortByKey().keys().collect()
pf_vals = pf.sortByKey().map(lambda x: list(x[1])).collect()
Vt = np.matrix(np.asarray(pf.values().collect()))
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Simulate a new user rating a movie
full_u = np.zeros(len(pf_keys))
full_u.itemset(1, 5)  # user has rated product_id:1 = 5
recommendations = full_u * Vt * Vt.T
print("predicted rating value", np.sort(recommendations)[:, -10:])
top_ten_recommended_product_ids = np.where(recommendations >= np.sort(recommendations)[:, -10:].min())[1]
print("predict rating prod_id", np.array_repr(top_ten_recommended_product_ids))
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
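The fold-in trick above scores every product by projecting the new user's rating vector through the item-factor matrix and back out. A minimal numpy sketch with a made-up rank-2 factor matrix (not the trained ALS model):

```python
import numpy as np

# Made-up item-factor matrix: 4 products, rank-2 factors
Vt = np.array([[1.0, 0.0],
               [0.9, 0.1],
               [0.0, 1.0],
               [0.1, 0.9]])

# New user rated product 0 with a 5
full_u = np.zeros(4)
full_u[0] = 5.0

# Project the rating vector into factor space and back out to all products
scores = full_u @ Vt @ Vt.T

# Products whose factors resemble product 0's score highest
assert int(np.argmax(scores)) == 0
assert scores[1] > scores[2]
```

Product 1 shares factors with the rated product 0 and scores much higher than the unrelated product 2, which is exactly why this projection works as a quick recommender for a user the model was never trained on.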
Volume Distribution The volume distribution function $V(\sigma_0)$ is normalized by the bin size, giving a result that is independent of the choice of density bin spacing.
def calc_volume(ds, rholevs, zrange=slice(0, -6000)):
    vol = ds.HFacC * ds.drF * ds.rA
    delta_rho = rholevs[1] - rholevs[0]
    ds['volume_rho'] = xgcm.regrid_vertical(vol.sel(Z=zrange), ds.TRAC01[0].sel(Z=zrange),
                                            rholevs, 'Z') / delta_rho

for ds in dsets.values():
    calc_volume(ds, rholevs)

fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(111)
for k in atlases:
    ds = dsets[k]
    # net transformation in Southern Ocean
    vol_net = ds.volume_rho.sel(Y=slice(-80, -30)).sum(dim=('X', 'Y'))
    vol_net.plot.line(marker='.')
plt.legend(atlases, loc='upper left')
plt.title(r'Southern Ocean Volume Density [m$^3$ / (kg m$^{-3}$)]')
ax.set_xlabel(r'$\sigma_0$')
ax.grid()
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
Cumulative Distribution By integrating $$ \int_{\sigma_{min}}^\sigma V(\sigma) d\sigma $$ we obtain the cumulative distribution function.
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(111)
for k in atlases:
    delta_rho = rholevs[1] - rholevs[0]
    ds = dsets[k]
    vol_net = ds.volume_rho.sel(Y=slice(-80, -30)).sum(dim=('X', 'Y'))
    vol_cum = vol_net.values.cumsum(axis=0)
    plt.plot(rholevs[1:], vol_cum, '.-')
plt.legend(atlases, loc='upper left')
plt.title(r'Southern Ocean Volume Cumulative Density (m$^3$)')
ax.set_xlabel(r'$\sigma_0$')
ax.grid()
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
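Since $V(\sigma)$ was normalized by the bin width, the cumulative integral is just a cumulative sum times that width, and it recovers the raw binned volume. A sketch with toy numbers (not the MITgcm output):

```python
import numpy as np

# Toy density bins and the raw volume in each bin
rholevs = np.array([24.0, 24.5, 25.0, 25.5, 26.0])
delta_rho = rholevs[1] - rholevs[0]
vol = np.array([2.0, 6.0, 8.0, 4.0])   # raw volume per bin
vol_density = vol / delta_rho          # V(sigma), normalized by bin width

# Cumulative distribution: integral of V(sigma) d(sigma) up to each level
vol_cum = np.cumsum(vol_density) * delta_rho

assert abs(vol_cum[-1] - vol.sum()) < 1e-12
```

The final value of the cumulative curve equals the total volume, and the curve is non-decreasing by construction, which is a quick sanity check on the regridding step.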
Vertical Diffusive Fluxes
rk_sign = -1
plt.figure(figsize=(14, 6))
for n, (name, k) in enumerate(zip(['THETA', 'SALT', 'SIGMA0'],
                                  ['_TH', '_SLT', 'Tr01'])):
    ax = plt.subplot(1, 3, n + 1)
    for aname in dsets:
        ds = dsets[aname]
        net_vflux = rk_sign * (ds['DFrI' + k] + ds['DFrE' + k])[0] \
            .sel(Y=slice(-80, -30)).sum(dim=('X', 'Y'))
        plt.plot(net_vflux, net_vflux.Zl)
    ax.set_ylim([-2000, 0])
    vmax = abs(net_vflux[5:]).max()
    ax.set_xlim([-vmax, vmax])
    plt.grid()
    plt.legend(atlases, loc='lower right')
    plt.title('Net SO vertical flux of %s' % name)
    plt.xlabel(r'%s units * m$^{3}$ / s' % name)
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
Vertical Diffusive Heat Flux
import gsw

rho0 = 1030
plt.figure(figsize=(4.2, 6))
k = '_TH'
ax = plt.subplot(111)
for aname in dsets:
    ds = dsets[aname]
    net_vflux = rk_sign * rho0 * gsw.cp0 * (ds['DFrI' + k] + ds['DFrE' + k])[0] \
        .sel(Y=slice(-80, -30)).sum(dim=('X', 'Y'))
    plt.plot(net_vflux / 1e12, net_vflux.Zl)
ax.set_ylim([-2000, 0])
vmax = 2000
ax.set_xlim([-vmax, vmax])
plt.grid()
plt.legend(atlases, loc='lower right')
plt.title('Net SO vertical heat flux')
plt.xlabel('TW')
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
Symmetric Difference https://www.hackerrank.com/challenges/symmetric-difference/problem Task Given 2 sets of integers, print their symmetric difference in ascending order. The term symmetric difference indicates those values that exist in either set but do not exist in both. Input Format The first line of input contains an integer, M. The second line contains M space-separated integers. The third line contains an integer, N. The fourth line contains N space-separated integers. Output Format Output the symmetric difference integers in ascending order, one per line. Sample Input 4 2 4 5 9 4 2 4 11 12 Sample Output 5 9 11 12
M = int(input())
m = set(map(int, input().split()))
N = int(input())
n = set(map(int, input().split()))
m ^ n

S = 'add 5 6'
method, *args = S.split()
print(method)
print(*map(int, args))
# method, (*map(int, args))  # broken: bare (*iterable) is a syntax error
# methods
# command = 'add'.split()
# method, args = command[0], list(map(int, command[1:]))
# method, args

for _ in range(2):
    met, *args = input().split()
    print(met, args)
    try:
        pass
        # methods[met](*list(map(int, args)))
    except:
        pass

class Stack:
    def __init__(self):
        self.data = []
    def is_empty(self):
        return self.data == []
    def size(self):
        return len(self.data)
    def push(self, val):
        self.data.append(val)
    def clear(self):
        self.data.clear()
    def pop(self):
        return self.data.pop()
    def __repr__(self):
        return "Stack(" + str(self.data) + ")"

def sum_list(ls):
    if len(ls) == 0:
        return 0
    elif len(ls) == 1:
        return ls[0]
    else:
        return ls[0] + sum_list(ls[1:])

def max_list(ls):
    print(ls)
    if len(ls) == 0:
        return None
    elif len(ls) == 1:
        return ls[0]
    else:
        m = max_list(ls[1:])
        return ls[0] if ls[0] > m else m

def reverse_list(ls):
    if len(ls) < 2:
        return ls
    return reverse_list(ls[1:]) + ls[0:1]

def is_ana(s=''):
    if len(s) < 2:
        return True
    return s[0] == s[-1] and is_ana(s[1:len(s) - 1])

print(is_ana("abc"))

import turtle
myTurtle = turtle.Turtle()
myWin = turtle.Screen()

def drawSpiral(myTurtle, lineLen):
    if lineLen > 0:
        myTurtle.forward(lineLen)
        myTurtle.right(90)
        drawSpiral(myTurtle, lineLen - 5)

drawSpiral(myTurtle, 100)
# myWin.exitonclick()
myTurtle.forward(100)

from itertools import combinations_with_replacement
list(combinations_with_replacement([1, 1, 3, 3, 3], 2))

hash((1, 2))

# 4
# a a c d
# 2
from itertools import combinations
# N = int(input())
# s = input().split()
# k = int(input())
s = 'a a c d'.split()
k = 2
combs = list(combinations(s, k))
print('{:.4f}'.format(len([x for x in combs if 'a' in x]) / len(combs)))

# ------------------------------------------
import random
num_trials = 10000
num_found = 0
for i in range(num_trials):
    if 'a' in random.sample(s, k):
        num_found += 1
print('{:.4f}'.format(num_found / num_trials))

dir(5)
coding/hacker rank.ipynb
vadim-ivlev/STUDY
mit
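For the symmetric-difference task itself, the whole solution reduces to the `^` operator plus a sort. A sketch with the sample input hard-coded instead of read from `input()`:

```python
# Sets from the sample input
m = {2, 4, 5, 9}
n = {2, 4, 11, 12}

# Symmetric difference, printed in ascending order, one per line
result = sorted(m ^ n)
print(*result, sep='\n')

assert result == [5, 9, 11, 12]
```

Equivalently, `m.symmetric_difference(n)` gives the same set; `^` is just its operator form.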
Load house value vs. crime rate data Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
regressionDir = '/home/weenkus/workspace/Machine Learning - University of Washington/Regression'
sales = pa.read_csv(regressionDir + '/datasets/Philadelphia_Crime_Rate_noNA.csv')
sales

# Show plots in jupyter
%matplotlib inline
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Exploring the data The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
plt.scatter(sales.CrimeRate, sales.HousePrice, alpha=0.5)
plt.ylabel('House price')
plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Fit the regression model using crime as the feature
# Check the type and shape
X = sales[['CrimeRate']]
print(type(X))
print(X.shape)

y = sales['HousePrice']
print(type(y))
print(y.shape)

crime_model = linear_model.LinearRegression()
crime_model.fit(X, y)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
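For a single feature, scikit-learn's `LinearRegression.fit` reduces to the textbook closed-form solution, slope = cov(x, y)/var(x). A numpy sketch on noise-free toy data (not the Philadelphia dataset):

```python
import numpy as np

# Toy data following y = 3x + 2 exactly
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 2.0

# Closed-form least squares for simple linear regression
slope = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
intercept = y.mean() - slope * x.mean()

assert abs(slope - 3.0) < 1e-12
assert abs(intercept - 2.0) < 1e-12
```

Seeing the closed form also explains the leverage discussion later in this notebook: the slope is a ratio of sums of deviations from the x mean, so a single extreme-x point can dominate both sums.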
Let's see what our fit looks like
plt.plot(sales.CrimeRate, sales.HousePrice, '.',
         X, crime_model.predict(X), '-', linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Remove Center City and redo the analysis Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
sales_noCC = sales[sales['MilesPhila'] != 0.0]

plt.scatter(sales_noCC.CrimeRate, sales_noCC.HousePrice, alpha=0.5)
plt.ylabel('House price')
plt.xlabel('Crime rate')

crime_model_noCC = linear_model.LinearRegression()
crime_model_noCC.fit(sales_noCC[['CrimeRate']], sales_noCC['HousePrice'])

plt.plot(sales_noCC.CrimeRate, sales_noCC.HousePrice, '.',
         sales_noCC[['CrimeRate']], crime_model_noCC.predict(sales_noCC[['CrimeRate']]),
         '-', linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Compare coefficients for full-data fit versus no-Center-City fit Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
print('slope: ', crime_model.coef_)
print('intercept: ', crime_model.intercept_)

print('slope: ', crime_model_noCC.coef_)
print('intercept: ', crime_model_noCC.intercept_)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different! High leverage points: Center City is said to be a "high leverage" point because it is at an extreme x value where there are no other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line, since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can strongly influence the resulting fit. Influential observations: An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value). Remove high-value outlier neighborhoods and redo analysis Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]

crime_model_nohighhend = linear_model.LinearRegression()
crime_model_nohighhend.fit(sales_nohighend[['CrimeRate']], sales_nohighend['HousePrice'])

plt.plot(sales_nohighend.CrimeRate, sales_nohighend.HousePrice, '.',
         sales_nohighend[['CrimeRate']], crime_model_nohighhend.predict(sales_nohighend[['CrimeRate']]),
         '-', linewidth=3)
plt.ylabel('House price')
plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
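The "high leverage" idea above can be quantified: for simple regression the leverage of point i is h_i = 1/n + (x_i - x̄)² / Σⱼ(xⱼ - x̄)². A numpy sketch with one extreme-x point standing in for Center City (the numbers are invented):

```python
import numpy as np

# One extreme-x point among otherwise clustered x values
x = np.array([1.0, 2.0, 3.0, 4.0, 50.0])
n = len(x)

# Leverage for simple linear regression
h = 1.0 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)

# Leverages sum to p = 2 (intercept + slope); the outlier hogs almost all of it
assert abs(h.sum() - 2.0) < 1e-12
assert h[-1] > 0.9
```

The outlying point carries leverage near 1, far above the average p/n = 0.4, which is exactly why removing it can swing the fitted slope so dramatically.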
Do the coefficients change much?
print('slope: ', crime_model_noCC.coef_)
print('intercept: ', crime_model_noCC.intercept_)

print('slope: ', crime_model_nohighhend.coef_)
print('intercept: ', crime_model_nohighhend.intercept_)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Reading in data to a dataframe For 1D analysis we are generally thinking about data that varies in time, i.e. time series analysis. The pandas package is particularly suited to this type of data, having very convenient methods for interpreting, searching through, and using time representations. Let's start with the example we started the class with: taxi rides in New York City.
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0])
materials/4_pandas.ipynb
hetland/python4geosciences
mit
What do all these (and other) input keyword arguments do?

- header: tells which row of the data file is the header, from which it will extract column names
- parse_dates: try to interpret the values in [col] or [[col1, col2]] as dates, to convert them into datetime objects.
- index_col: if no index column is given, an index counting from 0 is given to the rows. By inputting index_col=[column integer], that column will be used as the index instead. This is usually done with the time information for the dataset.
- skiprows: can skip specific rows, skiprows=[list of rows to skip numbered from start of file with 0], or a number of rows to skip, skiprows=N.

We can check to make sure the date/time information has been read in as the index, which allows us to reference the other columns using this time information really easily:
df.index
materials/4_pandas.ipynb
hetland/python4geosciences
mit
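The same keyword arguments can be tried on a tiny in-memory CSV (the data here is invented, not the taxi file):

```python
import io
import pandas as pd

csv = io.StringIO(
    "pickup,fare\n"
    "2016-05-01 00:00:01,7.5\n"
    "2016-05-01 00:10:12,12.0\n"
)

# parse_dates converts column 0 to datetimes; index_col makes it the index
df = pd.read_csv(csv, parse_dates=[0], index_col=[0])

print(df.index.dtype)  # a datetime64 index, not plain strings
```

With a datetime index in place, rows can be selected by timestamp strings or partial dates (e.g. `df['2016-05-01']`), which is the convenience the section above is pointing at.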