Now let’s do a describe and plot it again.
df.plot(x='Miles', y='Minutes', kind='scatter')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Let’s plot Miles and Minutes together in a scatter plot. Wow, that’s linear. Let’s see how correlated they are. We do this with the corr method. We can see that Miles and time are very tightly correlated (using the Pearson standard correlation coefficient). There are two other correlation methods you can use, Kendall Tau...
df.corr()
df.corr(method='kendall')
df.corr(method='spearman')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
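The three methods can be compared on a toy stand-in for the running log (the values below are made up for illustration; since the toy data is strictly increasing and nearly linear, Pearson is close to 1 and Spearman is exactly 1):

```python
import pandas as pd

# Hypothetical running log: miles and minutes are almost perfectly linear
toy = pd.DataFrame({'Miles': [3.0, 4.0, 5.0, 6.0, 8.0],
                    'Minutes': [22.0, 29.0, 36.0, 44.0, 60.0]})

pearson = toy.corr()                    # default: Pearson
spearman = toy.corr(method='spearman')  # rank-based, monotonic association

r = pearson.loc['Miles', 'Minutes']
```

Kendall Tau works the same way via `method='kendall'` (it needs scipy installed).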
Now let’s see a box plot. With these two plots we get a much better idea of the data. We can see that most of my runs are below an hour, except for a couple that are much longer.
df.boxplot('Minutes', return_type='axes')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now let’s add minutes per mile; we can just divide our two series to get those numbers.
df['Minutes'] / df['Miles']
df['Min_per_mile'] = df['Minutes'] / df['Miles']
df.describe()
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
We can see that at shorter distances, my speed can vary a lot.
df.plot(x='Miles', y='Min_per_mile', kind='scatter')
plt.ylabel("Minutes / Mile")
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Let’s see a histogram of my speeds. Histograms are a great way of representing frequency data, or how often certain things occur.
df.hist('Min_per_mile')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Seems pretty centered in that 7 to 7.5 minute range. Let’s see if we can get more information with more bins, which we specify with the bins argument.
df.hist('Min_per_mile',bins=20)
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
That’s interesting. Under 7 and then around 7.5 are the most common. I bet that has something to do with my running distances or the courses I choose to run.
df.hist('Min_per_mile', bins=20, figsize=(10,8))
plt.xlim((5, 11))
plt.ylim((0, 12))
plt.title("Minutes Per Mile Histogram")
plt.grid(False)
plt.savefig('../assets/minutes_per_mile_histogram.png')
df['Miles']
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now another cool thing you can do with time series is look at the rolling mean, rolling sum, or even rolling correlations. There are a lot of different “rolling” operations you can do.
df['Miles'].plot()
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
So here’s a standard plot of our Miles again, just a line over time. To add another line to the same plot, we just call plot again. As I was touching on rolling values, let’s talk about the rolling average. To compute one, I pass it a Series or a DataFrame.
df['Miles'].plot()
df['Miles'].rolling(7).mean().plot()  # pd.rolling_mean was removed in pandas 0.20
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
I can do the same with the rolling standard deviation or sum.
df['Miles'].plot()
df['Miles'].rolling(7).std().plot()

df['Miles'].plot()
df['Miles'].rolling(7).sum().plot()
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
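The `pd.rolling_*` functions used in older pandas were deprecated in 0.18 and removed in 0.20; the same operations now live on a `Rolling` object. A minimal sketch with made-up mileage values:

```python
import numpy as np
import pandas as pd

# Hypothetical daily mileage values 1.0 .. 14.0
miles = pd.Series(np.arange(1.0, 15.0))

# Modern equivalents of pd.rolling_mean / pd.rolling_std / pd.rolling_sum:
roll_mean = miles.rolling(7).mean()  # NaN until the window has 7 values
roll_std = miles.rolling(7).std()
roll_sum = miles.rolling(7).sum()
```

The first six entries are NaN because the window is incomplete; `min_periods` can relax that.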
Now, on a last note, one thing that’s cool about datetime indexes is that you can query them very naturally. If I want to get all my runs in October 2014, I just enter that as a string.
df.index
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
If I want to get everything from November to December, I can do that with a slice.
df['2014-11':'2014-12']
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
How do you think we might go from October to January 1, 2015? Go ahead and give it a try and see if you can figure it out.
df['2014-11':'2015-1-1']['Miles'].plot()
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Now we can specify a range this way, but we can’t specify a specific date the same way. Trying to get a specific date’s run:
df['2014-8-12']
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
To do that we need to use loc.
df.loc['2014-8-12']
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
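Both behaviors can be sketched on a small hypothetical date-indexed frame: partial strings select whole periods, slices are inclusive, and a single day goes through `.loc`.

```python
import pandas as pd

# Hypothetical date-indexed mileage log
idx = pd.to_datetime(['2014-08-12', '2014-10-05', '2014-11-20', '2014-12-31'])
runs = pd.DataFrame({'Miles': [3.1, 5.0, 4.2, 6.0]}, index=idx)

october = runs.loc['2014-10']          # partial string selects the whole month
fall = runs.loc['2014-11':'2014-12']   # inclusive month-range slice
one_day = runs.loc['2014-08-12']       # exact date returns that day's row
```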
Now that we’ve done all this work, we should save it so that we don’t have to remember what our operations were or what stage we did them at. We could save it to CSV like we did the other one, but I want to illustrate all the different ways you can save this file. Let’s save our CSV, but we can also save it as an ...
df.head()
df.to_csv('../data/date_fixed_running_data_with_time.csv')
df.to_html('../data/date_fixed_running_data_with_time.html')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
One thing to note with JSON files is that they want unique indexes (because they’re going to become the keys), so we’ve got to give it a new index. We can do this by resetting our index or setting our index to a column.
df.to_json('../data/date_fixed_running_data_with_time.json')
# Note: reset_index() returns a new frame; called bare it leaves df unchanged
df.reset_index()
df['Date'] = df.index
df.index = range(df.shape[0])
df.head()
df.to_json('../data/date_fixed_running_data_with_time.json')
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
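A self-contained sketch of the fix (with hypothetical duplicate dates): move the datetime index into a column and renumber the rows, so every JSON key is unique.

```python
import pandas as pd

dup = pd.DataFrame({'Miles': [3.0, 4.0]},
                   index=pd.to_datetime(['2014-08-12', '2014-08-12']))

# reset_index returns a new frame, so reassign it; the unnamed index
# becomes a column called 'index', which we rename to 'Date'
dup = dup.reset_index().rename(columns={'index': 'Date'})
json_text = dup.to_json()
```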
Now there’s a LOT more you can do with datetime indexing, but this is about all I wanted to cover in this video. We will get into more specifics later. By now you should be getting a lot more familiar with pandas and the IPython + pandas workflow.
df.Date[0]
4 - pandas Basics/4-6 pandas DataFrame Renaming Cols, Handling NaN Values, Maps, Intermediate Plotting, + Rolling Values, + Basic Date Indexing.ipynb
mitchshack/data_analysis_with_python_and_pandas
apache-2.0
Homework 14 (or so): TF-IDF text analysis and clustering Hooray, we kind of figured out how text analysis works! Some of it is still magic, but at least the TF and IDF parts make a little sense. Kind of. Somewhat. No, just kidding, we're professionals now. Investigating the Congressional Record The Congressional Record...
# If you'd like to download it through the command line...
#!curl -O http://www.cs.cornell.edu/home/llee/data/convote/convote_v1.1.tar.gz
# And then extract it through the command line...
#!tar -zxf convote_v1.1.tar.gz
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
So great, we have 702 of them. Now let's import them.
speeches = []
for path in paths:
    with open(path) as speech_file:
        speech = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': speech_file.read()
        }
    speeches.append(speech)
speeches_df = pd.DataFrame(speeches)
#speeches_df.head()
speeches_df['pathnam...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
In class we had the texts variable. For the homework you can just do speeches_df['content'] to get the same sort of list. Take a look at the contents of the first 5 speeches.
texts = speeches_df['content']
texts[:5]
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Doing our analysis: use the sklearn package and a plain, boring CountVectorizer to get a list of all of the tokens used in the speeches. If it won't list them all, that's OK! Make a DataFrame with those terms as columns. Be sure to use English-language stopwords.
from sklearn.feature_extraction.text import CountVectorizer
count_vectorizer = CountVectorizer(stop_words='english')
Xc = count_vectorizer.fit_transform(texts)
Xc
Xc.toarray()
pd.DataFrame(Xc.toarray()).head(3)
Xc_feature = pd.DataFrame(Xc.toarray(), columns=count_vectorizer.get_feature_names())
Xc_feature.head(3)
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Okay, it's far too big to even look at. Let's try to get a list of features from a new CountVectorizer that only takes the top 100 words.
from nltk.stem.porter import PorterStemmer
porter_stemmer = PorterStemmer()

def stemming_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z]", " ", str_input).lower().split()
    words = [porter_stemmer.stem(word) for word in words]
    #print(words)
    return words

coun...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Now let's push all of that into a dataframe with nicely named columns.
df_Xc = pd.DataFrame(Xc100.toarray(), columns=count_vectorizer.get_feature_names())
df_Xc.head(3)
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Everyone seems to start their speeches with "mr chairman" - how many speeches are there in total, how many don't mention "chairman", and how many mention neither "mr" nor "chairman"?
df_Xc['act'].count()
df_Xc[df_Xc["chairman"] == 0]['chairman'].count()
df_Xc[df_Xc["mr"] == 0]['mr'].count()
# "neither" needs both conditions at once, not the sum of the two counts
total = df_Xc[(df_Xc["mr"] == 0) & (df_Xc["chairman"] == 0)].shape[0]
print(total, "speeches in total mention neither 'mr' nor 'chairman'")
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
What is the index of the speech that is the most thankful, a.k.a. includes the word 'thank' the most times?
thank = df_Xc[df_Xc["thank"] != 0]
thank.head(3)
thank_column = thank['thank']
thank_column.sort_values(ascending=False).head(1)  # Series.sort was removed; use sort_values
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
If I'm searching for China and trade, what are the top 3 speeches to read according to the CountVectoriser?
china_trade = df_Xc['china'] + df_Xc['trade']
china_trade.sort_values(ascending=False).head(3)
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Now what if I'm using a TfidfVectorizer?
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf_vectorizer = TfidfVectorizer(stop_words='english', tokenizer=stemming_tokenizer,
                                   use_idf=False, norm='l1', max_features=100)
Xt = tfidf_vectorizer.fit_transform(texts)
pd.DataFrame(Xt.toarray(), columns=tfidf_vectorizer.get_feature_names()).head(3)
print(tfidf_vectorizer.get_feature_names())
# checking inverse t...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
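With `use_idf=False, norm='l1'`, each row becomes raw term counts divided by the row total, so every document's weights sum to 1. A tiny sketch with a made-up two-document corpus:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["china trade china", "trade policy"]

# use_idf=False + norm='l1' == plain term frequencies (counts / row sum)
tf = TfidfVectorizer(use_idf=False, norm='l1')
Xt = tf.fit_transform(docs).toarray()
# Vocabulary is sorted alphabetically: china, policy, trade
```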
What's the content of the speeches? Here's a way to get them:
# index 0 is the first speech, which was the first one imported.
paths[0]
# Pass that into 'cat' using { } which lets you put variables in shell commands
# that way you can pass the path to cat
!echo {paths[0]}
!type a.text
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Now search for something else! Another two terms that might show up - elections and chaos? Whatever you think might be interesting.
df_Xc.columns
congress_lawsuit = df_Xc['lawsuit'] + df_Xc['congress']
congress_lawsuit.sort_values(ascending=False).head(5)
pd.DataFrame([df_Xc['lawsuit'], df_Xc['congress'], df_Xc['lawsuit'] + df_Xc['congress']],
             index=["lawsuit", "congress", "lawsuit + congress"]).T
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Enough of this garbage, let's cluster Using a simple counting vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency vectorizer, cluster the documents into eight categories, telling me what the top terms are per category. Using a term frequency in...
from sklearn.cluster import KMeans

# count vectorization: Xc100 is the normalized top-100-words matrix
number_of_clusters = 8
km = KMeans(n_clusters=number_of_clusters)
km.fit(Xc100)

# count vectorization
print("Top terms per cluster:")
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
terms = count_vectorizer...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Which one do you think works the best? Harry Potter time I have a scraped collection of Harry Potter fanfiction at https://github.com/ledeprogram/courses/raw/master/algorithms/data/hp.zip. I want you to read them in, vectorize them and cluster them. Use this process to find out the two types of Harry Potter fanfiction....
import glob
paths = glob.glob('hp/hp/*')
paths[:5]
len(paths)

reviews = []
for path in paths:
    with open(path) as review_file:
        review = {
            'pathname': path,
            'filename': path.split('/')[-1],
            'content': review_file.read()
        }
    reviews.append(review)
reviews_df = pd...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Vectorize Count Vectorization
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem.porter import PorterStemmer

porter_stemmer = PorterStemmer()

def stemming_tokenizer(str_input):
    words = re.sub(r"[^A-Za-z]", " ", str_input).lower().split()
    words = [porter_stemmer.stem(word) for word in words]
    #print(words)
    ...
homework13/14 - TF-IDF Homework.ipynb
radhikapc/foundation-homework
mit
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. The network has two layers, a hid...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        def sigmoid(x):
            # Sigmoid function
            return 1 / (1 + np.exp(-x))

        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nod...
dlnd-your-first-neural-network.ipynb
luiscapo/DLND-your-first-neural-network
gpl-3.0
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
epochs = 4000
learning_rate = 0.01
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for e in range(epochs):
    # Go through a random batch of...
dlnd-your-first-neural-network.ipynb
luiscapo/DLND-your-first-neural-network
gpl-3.0
Thinking about your results Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter Your answer below With this Pa...
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                     [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):
    ##########
    # Unit tests for data loading
    ##########
    def test_data_path(self...
dlnd-your-first-neural-network.ipynb
luiscapo/DLND-your-first-neural-network
gpl-3.0
From the output above we can see, roughly, that the program itself executes quickly and spends most of its time waiting. But this only gives a coarse picture; it cannot pinpoint specific lines of code. contextmanager: use Python's context-manager mechanism to time code: - __enter__: record the start time - __exit__: record the end time
%%writefile timer.py
import time

class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose

    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.secs = self.end - self.start
        ...
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
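Used as a context manager, the timing happens automatically around the `with` block. A minimal self-contained version of the same idea (the `verbose` handling from the cell above is omitted):

```python
import time

class Timer:
    """Minimal context-manager timer: __enter__ records the start time,
    __exit__ records the end time and the elapsed seconds."""
    def __enter__(self):
        self.start = time.time()
        return self

    def __exit__(self, *args):
        self.end = time.time()
        self.secs = self.end - self.start

with Timer() as t:
    time.sleep(0.05)  # stand-in for a database or network call
# After the block, t.secs holds the elapsed wall-clock time
```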
You can write these timings to a log: wrap the key sections of your code (database calls, network calls, etc.) as above, then analyze the log to track down performance problems. You could also extend this to write each run's performance data to a database for analysis. line_profiler: line_profiler reports the execution time of every line of code. Install it with pip install line_profiler; after a successful install you will have an executable called kernprof. When profiling, decorate the functions under test with @profile (no explicit import is needed; kernprof injects it automatically).
%%writefile slow_app_for_profiler.py
import sys
import time

@profile
def mock_download():
    for i in range(5):
        time.sleep(1)

@profile
def mock_database():
    for i in range(20):
        time.sleep(0.1)

@profile
def main():
    mock_download()
    mock_database()

if __name__ == "__main__":
    sys.exit(...
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
The -l option tells kernprof to inject @profile into the script; -v tells it to print the results to the console. Line #: the line number. Hits: how many times the line ran. Time: total time spent on the line. Per Hit: time per execution of the line. % Time: the line's share of the total (function) time. Line Contents: the code itself. The results clearly show the cost of each line, which is very handy for ordinary scripts. But what about a Django project? - Use django-devserver: good for finding performance problems in development, though many problems only surface in production. http://djangotricks.blogspot.com/2015/01/performance-...
!pip install memory_profiler psutil !python -m memory_profiler slow_app_for_profiler.py
books/optimization/performance-analysis.ipynb
510908220/python-toolbox
mit
<img src="image/weight_biases.png" style="height: 60%;width: 60%; position: relative; right: 10%"> Problem 2 For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors: - features - Placeholder tensor for feature ...
features_count = 784
labels_count = 10

# TODO: Set the features and labels tensors
# features =
# labels =

# TODO: Set the weights and biases tensors
# weights =
# biases =

### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
from tensorflow.python.ops.variables import Variable

assert features._op.name.startswith...
Term_1/TensorFlow_3/TensorFlow_Lab/lab.ipynb
akshaybabloo/Car-ND
mit
These are our observations:
- The maximum number of survivors are in the first and third class, respectively
- With respect to the total number of passengers in each class, first class has the maximum survivors at around 61%
- With respect to the total number of passengers in each class, third class has the minimum number o...
# Checking for any null values
df['Sex'].isnull().value_counts()

# Male passengers survived in each class
male_survivors = df[df['Sex'] == 'male'].groupby('Pclass')['Survived'].agg(sum)
male_survivors

# Total male passengers in each class
male_total_passengers = df[df['Sex'] == 'male'].groupby('Pclass')['PassengerId'...
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
These are our observations:
- The majority of survivors are females in all the classes
- More than 90% of female passengers in first and second class survived
- The percentages of male passengers who survived in first and third class, respectively, are comparable

This is our key takeaway: Female passengers were given pre...
# Checking for the null values
df['SibSp'].isnull().value_counts()

# Checking for the null values
df['Parch'].isnull().value_counts()

# Total number of non-survivors in each class
# (parenthesize the OR: `&` binds tighter than `|` in Python)
non_survivors = df[((df['SibSp'] > 0) | (df['Parch'] > 0)) & (df['Survived'] == 0)].groupby('Pclass')['Survived'].agg('count')
non_survivors...
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
These are our observations:
- There are a lot of nonsurvivors in the third class
- Second class has the least number of nonsurvivors with relatives
- With respect to the total number of passengers, the first class, who had relatives aboard, has the maximum nonsurvivor percentage and the third class has the least

This is our ...
# Checking for null values
df['Age'].isnull().value_counts()

# Defining the age binning interval
age_bin = [0, 18, 25, 40, 60, 100]

# Creating the bins
df['AgeBin'] = pd.cut(df.Age, bins=age_bin)
d_temp = df[np.isfinite(df['Age'])]

# Number of survivors based on Age bin
survivors = d_temp.groupby('AgeBin')['Survived...
_oldnotebooks/Titanic_Data_Mining.ipynb
eneskemalergin/OldBlog
mit
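`pd.cut` assigns each value to a half-open interval; values outside the bin edges and NaN come back as NaN. A quick sketch with made-up ages and the same bin edges as above:

```python
import numpy as np
import pandas as pd

ages = pd.Series([5, 17, 22, 35, 50, 70, np.nan])
age_bin = [0, 18, 25, 40, 60, 100]

# Each age lands in an interval like (0, 18]; the NaN stays NaN
binned = pd.cut(ages, bins=age_bin)
```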
Load Dataset
IB = pd.read_csv("india-batting.csv")
IB.head(5)
IB.columns
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Split the year out of the 'Start Date' column and create a new column named 'year'.
year = []
for i in range(len(IB)):
    x = IB['Start Date'][i].split(" ")[-1]
    year.append(x)
year = pd.DataFrame(year, columns=["year"])
mr = [IB, year]
df = pd.concat(mr, axis=1)
df.head(5)
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
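The loop can also be written as one vectorized line with the `.str` accessor; a sketch with made-up dates in the same "day month year" shape:

```python
import pandas as pd

demo = pd.DataFrame({'Start Date': ['12 Jan 2016', '3 Feb 2015']})

# Split each string on spaces and keep the last token (the year)
demo['year'] = demo['Start Date'].str.split(' ').str[-1]
```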
Find all rows from the year 2016 in the new dataframe and remove the 'DNB' (did not bat) rows that appear in the 'Runs' column.
df_16 = df[df["year"] == "2016"]
df_16 = df_16.reset_index(drop=True)
df_16.columns
Runs = np.array(df_16["Runs"])
np.squeeze(np.where(Runs == "DNB"))
ndf_16 = df_16[0:88]
ndf_16.head(5)
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Create a DataFrame of unique player names and their maximum scores.
ndf_16.Player.unique()
playernames = ndf_16.Player.unique()

runs = []
for i in range(len(ndf_16)):
    try:
        r = int(ndf_16['Runs'][i])
    except:
        r = int(ndf_16.Runs.unique()[0].split("*")[0])  # np.int was removed from NumPy
    runs.append(r)

modRun = pd.DataFrame(runs, columns=["modRun"])
modDf = pd.concat([ndf_16, modRun], axis...
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Visualize in Plotly
import plotly
import plotly.graph_objs as go

plotly.tools.set_credentials_file(username='ayon.mi1', api_key='iIBYMNu0RVcR1GmQSeD0')

data = [go.Bar(
    x=np.array(dfx['player_name']),
    y=np.array(dfx['max_run'])
)]
layout = go.Layout(
    title='Maximum_Score per player',
    xaxis=dict(
        title='Pla...
India’s_batting_performance_2016.ipynb
erayon/India-Australia-Cricket-Analysis
gpl-3.0
Expected output:

| | |
|---|---|
| **gradients["dWaa"][1][2]** | 10.0 |
| **gradients["dWax"][3][1]** | -10.0 |
| **gradients["dWya"][1][2]** | 0.29713815361 |
| ... | ... |
# GRADED FUNCTION: sample

def sample(parameters, char_to_ix, seed):
    """
    Sample a sequence of characters according to a sequence of probability
    distributions output of the RNN

    Arguments:
    parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
    char_to_ix -- python dictio...
coursera/deep-neural-network/quiz and assignments/RNN/Dinosaurus+Island+--+Character+level+language+model+final+-+v3.ipynb
jinntrance/MOOC
cc0-1.0
Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.p...
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize we...
first-neural-network/Your_first_neural_network.ipynb
tanmay987/deepLearning
mit
Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training se...
import sys

### Set the hyperparameters here ###
iterations = 5000
learning_rate = 0.5
hidden_nodes = 30
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for ii in range(iterations):
    # Go through a random ...
first-neural-network/Your_first_neural_network.ipynb
tanmay987/deepLearning
mit
<a id='wrangling'></a> Data Wrangling General Properties
# Load TMDb data and print out a few lines. Perform operations to inspect data
# types and look for instances of missing or possibly errant data.
tmdb_movies = pd.read_csv('tmdb-movies.csv')
tmdb_movies.head()
tmdb_movies.describe()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Data Cleaning: As evident from the data, the cast of the movie is stored as a string separated by the | symbol. This needs to be converted into a suitable type in order to consume it properly later.
# Pandas reads empty string values as NaN; make them empty strings
tmdb_movies.cast.fillna('', inplace=True)
tmdb_movies.genres.fillna('', inplace=True)
tmdb_movies.director.fillna('', inplace=True)
tmdb_movies.production_companies.fillna('', inplace=True)

def string_to_array(data):
    """
    This function returns gi...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Convert cast, genres, director and production_companies columns to array
tmdb_movies.cast = tmdb_movies.cast.apply(string_to_array)
tmdb_movies.genres = tmdb_movies.genres.apply(string_to_array)
tmdb_movies.director = tmdb_movies.director.apply(string_to_array)
tmdb_movies.production_companies = tmdb_movies.production_companies.apply(string_to_array)
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
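The split-on-pipe pattern can be sketched in isolation. The helper below mirrors the idea of the notebook's `string_to_array` (whose body is truncated above), using made-up cast strings:

```python
import pandas as pd

# Hypothetical cast column after fillna(''): pipe-separated names
cast = pd.Series(['Tom Hanks|Tim Allen', '', None]).fillna('')

def split_pipes(value):
    # Split on '|'; an empty string maps to an empty list
    return value.split('|') if value else []

cast_lists = cast.apply(split_pipes)
```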
<a id='eda'></a> Exploratory Data Analysis. Research Question 1: What is the yearly revenue change? It's evident from the observations below that there is no clear trend in the change in mean revenue over the years; mean revenue is quite unstable from year to year. This can be attributed to the number of movies and the number of movies h...
def yearly_growth(mean_revenue):
    return mean_revenue - mean_revenue.shift(1).fillna(0)

# Show change in mean revenue over years, considering only movies for which we have revenue data
movies_with_budget = tmdb_movies[tmdb_movies.budget_adj > 0]
movies_with_revenue = movies_with_budget[movies_with_budget.revenue_ad...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Research Question 2: Which genres are most popular from year to year? Since the popularity column indicates the all-time popularity of the movie, it might not be the right metric to measure popularity over years. We can instead measure the popularity of a movie based on average vote. I think a movie is popular if vote_average >= 7. On a...
def popular_movies(movies):
    return movies[movies['vote_average'] >= 7]

def group_by_genre(data):
    """
    This function takes a DataFrame and returns a dictionary having release_year
    as key and, as value, a dictionary keyed by the movie's genre with the
    frequency of the genre that year...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Research Question 3: What kinds of properties are associated with movies that have high revenues? We can consider movies with at least 1 billion in revenue and see what properties they have in common. With this criterion, and based on the illustrations below, we can make the following observations about the highest gross...
highest_grossing_movies = tmdb_movies[tmdb_movies['revenue_adj'] >= 1000000000]\ .sort_values(by='revenue_adj', ascending=False) highest_grossing_movies.head()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Find common genres in highest grossing movies
def count_frequency(data):
    frequency_count = {}
    for items in data:
        for item in items:
            if item in frequency_count:
                frequency_count[item] += 1
            else:
                frequency_count[item] = 1
    return frequency_count

highest_grossing_genres = count_frequency(highe...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Popularity of highest grossing movies
highest_grossing_movies.vote_average.hist()
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Directors of highest grossing movies
def list_to_dict(data, label):
    """
    This function returns statistics and indices for a data frame
    from a list of (label, value) pairs.
    """
    statistics = {label: []}
    index = []
    for item in data:
        statistics[label].append(item[1])
        index.append(item[0])
    return st...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Cast of highest grossing movies
high_grossing_cast = count_frequency(highest_grossing_movies.cast)
revenues, index = list_to_dict(sorted(high_grossing_cast.items(), key=operator.itemgetter(1),
                                   reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', figsize=(20, 5))
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Production companies of highest grossing movies
high_grossing_prod_comps = count_frequency(highest_grossing_movies.production_companies)
revenues, index = list_to_dict(sorted(high_grossing_prod_comps.items(), key=operator.itemgetter(1),
                                   reverse=True)[:30], 'number of movies')
pd.DataFrame(revenues, index=index).plot(kind='bar', f...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Highest grossing budget. Research Question 4: Who are the top 15 highest grossing directors? We can see the top 30 highest grossing directors in the bar chart below. It seems Steven Spielberg surpasses the other directors in gross revenue.
def grossing(movies, by):
    """
    This function returns the movies' revenues over the key passed as the `by` argument.
    """
    revenues = {}
    for id, movie in movies.iterrows():
        for key in movie[by]:
            if key in revenues:
                revenues[key].append(movie.revenue_adj)
            ...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Research Question 5: Who are the top 15 highest grossing actors? We can find the top 15 actors based on gross revenue, as shown in the sections below. As we can see, Harrison Ford tops the chart with the highest gross revenue.
gross_by_actors = grossing(movies=tmdb_movies, by='cast')
actors_gross_revenue = gross_revenue(gross_by_actors)
top_15_actors = sorted(actors_gross_revenue.items(), key=operator.itemgetter(1), reverse=True)[:15]
revenues, indexes = list_to_dict(top_15_actors, 'actors')
pd.DataFrame(data=revenues, index=indexes).plot(k...
mlfoundation/istat/project/investigate-a-dataset-template.ipynb
vikashvverma/machine-learning
mit
Step 1: Truncate the series to the interval that has observations. Outside this interval the interpolation blows up.
print('Original bounds: ', t[0], t[-1]) t_obs = t[D['T_flag'] != -1] D = D[t_obs[0]:t_obs[-1]] # Truncate dataframe so it is sandwiched between observed values t = D.index T = D['T'] print('New bounds: ', t[0], t[-1]) t_obs = D.index[D['T_flag'] != -1] t_interp = D.index[D['T_flag'] == -1] T_obs = D.loc[t_obs, 'T'] ...
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
Red dots are interpolated values.
# Centre the data mu = D['T'].mean() D.loc[:, 'T'] = D.loc[:, 'T'] - mu T = D['T'] print('E[T] = ', mu)
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
We want to obtain a stationary "feature" from the data; first differences are an easy place to start.
T0 = T[0] dT = T.diff() dT = dT - dT.mean() # Center the differences dT_obs = dT[t_obs] dT_interp = dT[t_interp] plt.scatter(t, dT, marker = '.', alpha = 0.5, s = 0.5, c = c) #obs = plt.scatter(t_obs, dT_obs, marker = '.', alpha = 0.5, s = 0.5, color = 'blue'); #interp = plt.scatter(t_interp, dT_interp, marker = '.'...
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
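As a toy illustration of why first differences help, differencing removes a linear trend entirely while leaving only the (stationary) noise increments. A sketch with synthetic data, not the temperature series itself:

```python
import numpy as np
import pandas as pd

# Synthetic hourly series: linear trend plus white noise
rng = np.random.default_rng(0)
t = pd.date_range("2000-01-01", periods=1000, freq="h")
x = pd.Series(0.01 * np.arange(1000) + rng.normal(0, 0.5, 1000), index=t)

dx = x.diff().dropna()
# The trend dominates the raw variance but vanishes after differencing;
# what remains is the constant drift (0.01 per step) plus noise
print(x.var(), dx.var(), dx.mean())
```

Centering `dx` by its mean, as in the cell above, then removes the residual drift term.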
It appears that early temperature sensors had rather imprecise readings. It also appears as though the interpolation introduces some systematic errors. I used pchip interpolation, which tries to avoid overshoot, so we may be seeing the effects of clipping. This would particularly make sense if missing data was from r...
rolling1w_dT = dT.rolling(window = 7*24) # 1 week rolling window of dT rolling1m_dT = dT.rolling(window = 30*24) # 1 month rolling window of dT rolling1y_dT = dT.rolling(window = 365*24) # 1 year rolling dindow of dT fig, axes = plt.subplots(3, 1) axes[0].plot(rolling1w_dT.var()) axes[1].plot(rolling1m_dT.var()) ax...
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
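The overshoot-avoiding behaviour of pchip mentioned above can be seen directly: unlike an ordinary cubic spline, `PchipInterpolator` keeps the interpolant within the range of the data, which is consistent with the clipping effect described. A sketch, assuming SciPy is available:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # step-like data

xi = np.linspace(0, 4, 200)
yi = PchipInterpolator(x, y)(xi)

# Monotone (pchip) interpolation never overshoots the data range
print(yi.min(), yi.max())
```

A cubic spline on the same step-like data would dip below 0 and rise above 1 near the jump; pchip's shape preservation is exactly what can clip interpolated extremes.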
It looks like there is still some nonstationarity in the first differences.
from itertools import product t_days = [t[np.logical_and(t.month == m, t.day == d)] for m, d in product(range(1,13), range(1, 32))] day_vars = pd.Series(dT[ti].var() for ti in t_days) day_vars = day_vars.dropna() plt.scatter(day_vars.index, day_vars) r = day_vars.rolling(window = 20, center = True) plt.plot(day_vars.i...
notebooks/clean_data.ipynb
RJTK/dwglasso_cweeds
mit
Generating equations for fully contracted terms In the previous notebook, we computed the coupled cluster energy expression \begin{equation} E = \langle \Phi | e^{-\hat{T}} \hat{H} e^{\hat{T}} | \Phi \rangle = E_0 + \sum_{i}^\mathbb{O} \sum_{a}^\mathbb{V} f^{a}_{i} t^{i}_{a} + \frac{1}{4} \sum_{ij}^\mathbb{O} \sum_{ab}^...
E0 = w.op("E_0",[""]) F = w.utils.gen_op('f',1,'ov','ov') V = w.utils.gen_op('v',2,'ov','ov') H = E0 + F + V T = w.op("t",["v+ o", "v+ v+ o o"]) Hbar = w.bch_series(H,T,2) expr = wt.contract(Hbar,0,0) expr
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
First we convert the derived expression into a set of equations. You get back a dictionary that shows all the components of the equations. The vertical bar (|) in the key separates the lower (left) and upper (right) indices in the resulting expression.
mbeq = expr.to_manybody_equations('r') mbeq
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Converting equations to code From the equations generated above, you can get tensor contractions by calling the compile function on each individual term in the equations. Here we generate python code that uses numpy's einsum function to evaluate contractions. To use this code you will need to import einsum python from ...
for eq in mbeq['|']: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
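The generated `einsum` code needs actual tensors for the Fock matrix and amplitudes to evaluate. A toy sketch of the kind of contraction such a backend emits (the fully contracted energy term $f^{a}_{i} t^{i}_{a}$), with random arrays standing in for real integrals:

```python
import numpy as np

nocc, nvir = 3, 5
rng = np.random.default_rng(1)
f = rng.normal(size=(nvir, nocc))   # f[a, i] — hypothetical Fock block
t = rng.normal(size=(nocc, nvir))   # t[i, a] — hypothetical amplitudes

# An einsum contraction of the shape wicked's 'einsum' backend produces
e = np.einsum('ai,ia->', f, t)

# The same contraction written as an explicit double sum
e_ref = sum(f[a, i] * t[i, a] for a in range(nvir) for i in range(nocc))
print(e, e_ref)
```

The index string `'ai,ia->'` (all indices contracted, empty output) is what distinguishes a fully contracted scalar term from a residual with free indices.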
Many-body equations Suppose we want to compute the contributions to the coupled cluster residual equations \begin{equation} r^{i}_{a} = \langle \Phi| \{ \hat{a}^\dagger_{i} \hat{a}_{a} \} [\hat{F},\hat{T}_1] | \Phi \rangle \end{equation} Wick&d can compute this quantity using the corresponding many-body representation of the ...
F = w.utils.gen_op('f',1,'ov','ov') T1 = w.op("t",["v+ o"]) expr = wt.contract(w.commutator(F,T1),2,2) latex(expr)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Next, we call to_manybody_equations to generate many-body equations
mbeq = expr.to_manybody_equations('g') print(mbeq)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Out of all the terms, we select those that multiply the excitation operator $\{ \hat{a}^\dagger_{a} \hat{a}_{i} \}$ ("o|v")
mbeq_ov = mbeq["o|v"] for eq in mbeq_ov: latex(eq)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Lastly, we can compile these equations into code
for eq in mbeq_ov: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Antisymmetrization of uncontracted operator indices To gain efficiency, Wick&d treats contractions involving inequivalent lines in a special way. Consider the following term contributing to the CCSD doubles amplitude equations that arises from $[\hat{V}_\mathrm{ovov},\hat{T}_2]$ (see the sixth term in Eq. (153) of Crawf...
T2 = w.op("t", ["v+ v+ o o"]) Vovov = w.op("v", ["o+ v+ v o"]) expr = wt.contract(w.commutator(Vovov, T2), 4, 4) latex(expr)
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
In wick&d the two-body part of $[\hat{V}_\mathrm{ovov},\hat{T}_2]$ gives us only a single term \begin{equation} [\hat{V}_\mathrm{ovov},\hat{T}_2]_\text{2-body} = - \sum_{abcijk} \langle kb \| jc \rangle t^{ik}_{ac} \{ \hat{a}^{ab}_{ij} \} = \sum_{abij} g^{ij}_{ab} \{ \hat{a}^{ab}_{ij} \} \end{equation} where the tensor $g^{ij}_{ab...
for eq in expr.to_manybody_equations('g')['oo|vv']: print(eq.compile('einsum'))
tutorials/04-GeneratingCode.ipynb
fevangelista/wicked
mit
Experimental Options The Options class allows the download of options data from Google Finance. The get_options_data method downloads options data for a specified expiry date and provides a formatted DataFrame with a hierarchical index, so it's easy to get to the specific option you want. Available expiry dates can be acc...
from pandas_datareader.data import Options fb_options = Options('FB', 'google') data = fb_options.get_options_data(expiry = fb_options.expiry_dates[0]) data.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/06-Data-Sources/01 - Pandas-Datareader.ipynb
arcyfelix/Courses
apache-2.0
FRED
import pandas_datareader.data as web import datetime start = datetime.datetime(2010, 1, 1) end = datetime.datetime(2017, 1, 1) gdp = web.DataReader("GDP", "fred", start, end) gdp.head()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/06-Data-Sources/01 - Pandas-Datareader.ipynb
arcyfelix/Courses
apache-2.0
Split into training and testing Next, we split the data into training and test sets.
(training, test) = ratingsRDD.randomSplit([0.8, 0.2]) numTraining = training.count() numTest = test.count() # verify row counts for each dataset print("Total: {0}, Training: {1}, test: {2}".format(ratingsRDD.count(), numTraining, numTest))
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Build the recommendation model using ALS on the training data I've chosen some values for the ALS parameters. You should probably experiment with different values.
from pyspark.mllib.recommendation import ALS rank = 50 numIterations = 20 lambdaParam = 0.1 model = ALS.train(training, rank, numIterations, lambdaParam)
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Extract the product (movie) features
import numpy as np pf = model.productFeatures().cache() pf_keys = pf.sortByKey().keys().collect() pf_vals = pf.sortByKey().map(lambda x: list(x[1])).collect() Vt = np.matrix(np.asarray(pf.values().collect()))
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Simulate a new user rating a movie
full_u = np.zeros(len(pf_keys)) full_u.itemset(1, 5) # user has rated product_id:1 = 5 recommendations = full_u*Vt*Vt.T print("predicted rating value", np.sort(recommendations)[:,-10:]) top_ten_recommended_product_ids = np.where(recommendations >= np.sort(recommendations)[:,-10:].min())[1] print("predict rating prod...
notebooks/Step 04 - Realtime Recommendations.ipynb
snowch/movie-recommender-demo
apache-2.0
Volume Distribution The volume distribution function $V(\sigma_0)$ is normalized by the bin size, giving a result that is independent of the choice of density bin spacing.
def calc_volume(ds, rholevs, zrange=slice(0,-6000)): vol = ds.HFacC * ds.drF * ds.rA delta_rho = rholevs[1] - rholevs[0] ds['volume_rho'] = xgcm.regrid_vertical(vol.sel(Z=zrange), ds.TRAC01[0].sel(Z=zrange), rholevs, 'Z') / delta_rho for ...
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
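The same normalization convention can be checked on a plain numpy histogram: dividing the per-bin volume by the bin width makes the distribution's height independent of the bin spacing. A toy sketch with synthetic density values, not the model data:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = rng.normal(27.0, 0.5, 100_000)  # fake density (sigma_0) values

def density_dist(samples, nbins):
    counts, edges = np.histogram(samples, bins=nbins, range=(25, 29))
    width = edges[1] - edges[0]
    return counts / width  # normalize by bin size, as for V(sigma_0)

coarse = density_dist(sigma, 20)
fine = density_dist(sigma, 80)
# Peak heights agree once normalized, despite 4x different bin spacing
print(coarse.max(), fine.max())
```

Without the division by `width`, the coarse histogram's peak would be roughly four times taller than the fine one.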
Cumulative Distribution By integrating $$ \int_{\sigma_{min}}^\sigma V(\sigma) d\sigma $$ we obtain the cumulative distribution function.
fig = plt.figure(figsize=(14,6)) ax = fig.add_subplot(111) for k in atlases: delta_rho = rholevs[1] - rholevs[0] ds = dsets[k] vol_net = ds.volume_rho.sel(Y=slice(-80,-30)).sum(dim=('X','Y')) vol_cum = vol_net.values.cumsum(axis=0) plt.plot(rholevs[1:], vol_cum, '.-') plt.legend(atlases,...
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
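Numerically, the integral above is just a cumulative sum of $V(\sigma)\,\Delta\sigma$ over bins, which is what `cumsum` computes in the cell. A minimal sketch showing that the cumulative distribution recovers the known total (here, a sample count in place of ocean volume):

```python
import numpy as np

rholevs = np.linspace(25, 29, 41)
delta_rho = rholevs[1] - rholevs[0]

rng = np.random.default_rng(3)
samples = rng.uniform(25.5, 28.5, 10_000)  # all inside the bin range
V, _ = np.histogram(samples, bins=rholevs)
V = V / delta_rho  # distribution normalized by bin size

# Cumulative distribution: integral of V(sigma) d(sigma) up to each level
cum = np.cumsum(V * delta_rho)
print(cum[-1])  # recovers the total (~10000)
```

The final value of the cumulative curve equals the total volume, which is a useful sanity check on the regridding.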
Vertical Diffusive Fluxes
rk_sign = -1 plt.figure(figsize=(14,6)) for n, (name, k) in enumerate(zip( ['THETA', 'SALT', 'SIGMA0'], ['_TH', '_SLT', 'Tr01'])): ax = plt.subplot(1,3,n+1) for aname in dsets: ds = dsets[aname] net_vflux = rk_sign*( ds['DFrI' + k]...
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
Vertical Diffusive Heat Flux
import gsw rho0 = 1030 plt.figure(figsize=(4.2,6)) k = '_TH' ax = plt.subplot(111) for aname in dsets: ds = dsets[aname] net_vflux = rk_sign*rho0*gsw.cp0*( ds['DFrI' + k] + ds['DFrE' + k] )[0].sel(Y=slice(-80,-30)).sum(dim=('X','Y')) plt.plot(net_vflux/1e12, net_vflux....
MITgcm_WOA13_mixing.ipynb
rabernat/mitgcm-xray
mit
Symmetric Difference https://www.hackerrank.com/challenges/symmetric-difference/problem Task Given two sets of integers, M and N, print their symmetric difference in ascending order. The term symmetric difference indicates those values that exist in either M or N but do not exist in both. Input Format The first line of input ...
M = int(input()) m =set((map(int,input().split()))) N = int(input()) n =set((map(int,input().split()))) m ^ n S='add 5 6' method, *args = S.split() print(method) print(*map(int,args)) method,(*map(int,args)) # methods # (*map(int,args)) # command='add'.split() # method, args = command[0], list(map(int,command[1:]))...
coding/hacker rank.ipynb
vadim-ivlev/STUDY
mit
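Stripped of the scratch work above, the symmetric difference itself is a single operator. A minimal sketch with fixed inputs in place of `input()`:

```python
m = {2, 4, 5, 9}
n = {2, 4, 11, 12}

# Values in exactly one of the two sets, printed in ascending order
result = sorted(m ^ n)
print(*result, sep='\n')
```

`m ^ n` is shorthand for `m.symmetric_difference(n)`; the shared elements 2 and 4 are excluded, leaving 5, 9, 11, 12.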
Load house value vs. crime rate data Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
regressionDir = '/home/weenkus/workspace/Machine Learning - University of Washington/Regression' sales = pa.read_csv(regressionDir + '/datasets/Philadelphia_Crime_Rate_noNA.csv') sales # Show plots in jupyter %matplotlib inline
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Exploring the data The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
plt.scatter(sales.CrimeRate, sales.HousePrice, alpha=0.5) plt.ylabel('House price') plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Fit the regression model using crime as the feature
# Check the type and shape X = sales[['CrimeRate']] print (type(X)) print (X.shape) y = sales['HousePrice'] print (type(y)) print (y.shape) crime_model = linear_model.LinearRegression() crime_model.fit(X, y)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
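The slope and intercept that `LinearRegression` returns can be cross-checked against the closed-form ordinary least squares solution. A sketch with synthetic data standing in for the Philadelphia dataset:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 100, 200)                      # stand-in for crime rate
y = 180000 - 600 * x + rng.normal(0, 5000, 200)   # stand-in for house price

# OLS closed form: slope = cov(x, y) / var(x); intercept from the means
slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
intercept = y.mean() - slope * x.mean()
print(slope, intercept)  # close to -600 and 180000
```

For simple (one-feature) regression, this closed form matches `crime_model.coef_[0]` and `crime_model.intercept_` to floating-point precision.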
Let's see what our fit looks like
plt.plot(sales.CrimeRate, sales.HousePrice, '.', X, crime_model.predict(X), '-', linewidth=3) plt.ylabel('House price') plt.xlabel('Crime rate')
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Remove Center City and redo the analysis Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's rem...
sales_noCC = sales[sales['MilesPhila'] != 0.0] plt.scatter(sales_noCC.CrimeRate, sales_noCC.HousePrice, alpha=0.5) plt.ylabel('House price') plt.xlabel('Crime rate') crime_model_noCC = linear_model.LinearRegression() crime_model_noCC.fit(sales_noCC[['CrimeRate']], sales_noCC['HousePrice']) plt.plot(sales_noCC.Crime...
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Compare coefficients for full-data fit versus no-Center-City fit Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
print ('slope: ', crime_model.coef_) print ('intercept: ', crime_model.intercept_) print ('slope: ', crime_model_noCC.coef_) print ('intercept: ', crime_model_noCC.intercept_)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different! High leverage points: Center City is said to be a "high leverage" p...
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000] crime_model_nohighhend = linear_model.LinearRegression() crime_model_nohighhend.fit(sales_nohighend[['CrimeRate']], sales_nohighend['HousePrice']) plt.plot(sales_nohighend.CrimeRate, sales_nohighend.HousePrice, '.', sales_nohighend[['CrimeRate']], cr...
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
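The sensitivity to a single high-leverage point is easy to reproduce synthetically: one extreme x-value that sits off the trend can drag the fitted slope far from the trend of the remaining data. A numpy-only sketch, not the housing data:

```python
import numpy as np

def ols_slope(x, y):
    """Closed-form OLS slope: cov(x, y) / var(x)."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 50)
y = -2.0 * x + rng.normal(0, 1, 50)   # true slope is -2

# One high-leverage point: far out in x, well off the trend in y
x_lev = np.append(x, 100.0)
y_lev = np.append(y, 0.0)

print(ols_slope(x, y), ols_slope(x_lev, y_lev))
# The single point pulls the slope from about -2 toward 0
```

Because the slope estimate weights points by their squared distance from the mean of x, the lone point at x = 100 dominates the fit, just as Center City does above.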
Do the coefficients change much?
print ('slope: ', crime_model_noCC.coef_) print ('intercept: ', crime_model_noCC.intercept_) print ('slope: ', crime_model_nohighhend.coef_) print ('intercept: ', crime_model_nohighhend.intercept_)
Regression/assignments/Simple Linear Regression slides.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Reading in data to a dataframe For 1D analysis, we are generally thinking about data that varies in time, so time series analysis. The pandas package is particularly suited to deal with this type of data, having very convenient methods for interpreting, searching through, and using time representations. Let's start wit...
df = pd.read_csv('../data/yellow_tripdata_2016-05-01_decimated.csv', parse_dates=[0, 2], index_col=[0])
materials/4_pandas.ipynb
hetland/python4geosciences
mit
What do all these (and other) input keyword arguments do? header: tells which row of the data file is the header, from which it will extract column names parse_dates: try to interpret the values in [col] or [[col1, col2]] as dates, to convert them into datetime objects. index_col: if no index column is given, an index...
df.index
materials/4_pandas.ipynb
hetland/python4geosciences
mit
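The effect of `parse_dates` and `index_col` can be demonstrated on an inline example without the taxi file, using `io.StringIO` in place of a path (the column names here are made up for illustration):

```python
import io
import pandas as pd

csv = """pickup,fare,dropoff
2016-05-01 00:00:01,12.5,2016-05-01 00:11:30
2016-05-01 00:02:45,8.0,2016-05-01 00:09:02
"""

# Parse columns 0 and 2 as datetimes; use column 0 as the index
df = pd.read_csv(io.StringIO(csv), parse_dates=[0, 2], index_col=[0])
print(type(df.index))    # a DatetimeIndex, not plain strings
print(df['dropoff'].dtype)  # datetime64[ns]
```

With a `DatetimeIndex` in place, time-based selection like `df['2016-05-01']` and resampling become available, which is the payoff of parsing dates at read time.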