Now let's use the `flatten` method, which applies a Savitzky-Golay filter, to remove long-term variability that we are not interested in. We'll use the `return_trend` keyword so that it returns both the corrected `KeplerLightCurve` object and a new `KeplerLightCurve` object called 'trend'. This contains only the long term variability. | flat, trend = lc.flatten(window_length=301, return_trend=True) | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
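Conceptually, `flatten` smooths the flux with a Savitzky-Golay filter and divides it out. A minimal sketch of the same idea with `scipy.signal.savgol_filter` on synthetic data (the light curve below is made up for illustration; it is not the Kepler data used in this tutorial):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic light curve: slow sinusoidal trend plus small white noise
time = np.linspace(0, 10, 1001)
flux = (1.0 + 0.05 * np.sin(2 * np.pi * time / 10)
        + 0.001 * np.random.default_rng(0).normal(size=time.size))

# Estimate the long-term trend with a Savitzky-Golay filter
trend = savgol_filter(flux, window_length=301, polyorder=2)

# Dividing by the trend flattens the light curve around 1.0
flat = flux / trend
```

The flattened flux should scatter around 1.0 with much less variability than the raw flux.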
Let's plot the trend estimated by the Savitzky-Golay filter: | ax = lc.plot() #plot() returns a matplotlib axis
trend.plot(ax, color='red'); #which we can pass to the next plot() to use the same plotting window | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
and the flat lightcurve: | flat.plot(); | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
Now, let's run a period search using the Box Least Squares (BLS) algorithm (http://adsabs.harvard.edu/abs/2002A%26A...391..369K). A built-in BLS implementation will be available shortly, but until then you can download and install it separately from lightkurve using `pip install git+https://github.com/mirca/transit-periodogram.git` | from transit_periodogram import transit_periodogram
import numpy as np
import matplotlib.pyplot as plt
periods = np.arange(0.3, 1.5, 0.0001)
durations = np.arange(0.005, 0.15, 0.001)
power, _, _, _, _, _, _ = transit_periodogram(time=flat.time,
flux=flat.flux,
flux_err=flat.flux_err,
periods=periods,
durations=durations)
best_fit = periods[np.argmax(power)]
print('Best Fit Period: {} days'.format(best_fit))
flat.fold(best_fit).plot(alpha=0.4); | _____no_output_____ | MIT | docs/source/tutorials/2.02-recover-a-planet.ipynb | ceb8/lightkurve |
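The `fold` call above wraps the time stamps onto the orbital phase of the best-fit period. Independent of lightkurve's API, phase folding is just modular arithmetic; a minimal numpy sketch (the `fold` helper here is ours for illustration, not lightkurve's implementation):

```python
import numpy as np

def fold(time, period, t0=0.0):
    """Return the orbital phase in [-0.5, 0.5) for each time stamp."""
    phase = ((time - t0) / period) % 1.0
    phase[phase >= 0.5] -= 1.0
    return phase

time = np.arange(0.0, 10.0, 0.1)
phase = fold(time, period=1.0)
```

Plotting flux against `phase` stacks all transits on top of each other, which is what makes the dip visible.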
**Note**: Click on "*Kernel*" > "*Restart Kernel and Run All*" in [JupyterLab](https://jupyterlab.readthedocs.io/en/stable/) *after* finishing the exercises to ensure that your solution runs top to bottom *without* any errors. If you cannot run this file on your machine, you may want to open it [in the cloud ](https://mybinder.org/v2/gh/webartifex/intro-to-python/develop?urlpath=lab/tree/01_elements/01_exercises.ipynb). Chapter 1: Elements of a Program (Coding Exercises) The exercises below assume that you have read the [first part ](https://nbviewer.jupyter.org/github/webartifex/intro-to-python/blob/develop/01_elements/00_content.ipynb) of Chapter 1. The `...`'s in the code cells indicate where you need to fill in code snippets. The number of `...`'s within a code cell gives you a rough idea of how many lines of code are needed to solve the task. You should not need to create any additional code cells for your final solution. However, you may want to use temporary code cells to try out some ideas. Printing Output **Q1**: *Concatenate* `greeting` and `audience` below with the `+` operator and print out the resulting message `"Hello World"` with only *one* call of the built-in [print() ](https://docs.python.org/3/library/functions.html#print) function! Hint: You may have to "add" a space character in between `greeting` and `audience`. | greeting = "Hello"
audience = "World"
print(...) | _____no_output_____ | MIT | 01_elements/01_exercises.ipynb | ramsaut/intro-to-python |
**Q2**: How is your answer to **Q1** an example of the concept of **operator overloading**? **Q3**: Read the documentation on the built-in [print() ](https://docs.python.org/3/library/functions.html#print) function! How can you print the above message *without* concatenating `greeting` and `audience` first in *one* call of [print() ](https://docs.python.org/3/library/functions.html#print)? Hint: The `*objects` in the documentation implies that we can put several *expressions* (i.e., variables) separated by commas within the same call of the [print() ](https://docs.python.org/3/library/functions.html#print) function. | print(...) | _____no_output_____ | MIT | 01_elements/01_exercises.ipynb | ramsaut/intro-to-python |
**Q4**: What does the `sep=" "` mean in the documentation on the built-in [print() ](https://docs.python.org/3/library/functions.html#print) function? Adjust and use it to print out the three names referenced by `first`, `second`, and `third` on *one* line separated by *commas* with only *one* call of the [print() ](https://docs.python.org/3/library/functions.html#print) function! | first = "Anthony"
second = "Berta"
third = "Christian"
print(...) | _____no_output_____ | MIT | 01_elements/01_exercises.ipynb | ramsaut/intro-to-python |
**Q5**: Lastly, what does the `end="\n"` mean in the documentation? Adjust and use it within the `for`-loop to print the numbers `1` through `10` on *one* line with only *one* call of the [print() ](https://docs.python.org/3/library/functions.html#print) function! | for number in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
print(...) | _____no_output_____ | MIT | 01_elements/01_exercises.ipynb | ramsaut/intro-to-python |
Introduction to Machine Learning (The examples in this notebook were inspired by my work for EmergentAlliance, the Scikit-Learn documentation and Jason Brownlee's "Machine Learning Mastery with Python") In this short intro course we will focus on predictive modeling: we want to use models to make predictions, e.g. of a system's future behaviour or a system's response to specific inputs, aka classification and regression. So of the various machine learning categories we will look at **supervised learning**, i.e. we will train a model based on labelled training data. For example, when training an image recognition model to distinguish cats from dogs, you need to label a lot of pictures for training purposes upfront. The other categories cover **unsupervised learning**, e.g. clustering, and **reinforcement learning**, e.g. DeepMind's AlphaGo. Datasets: We will look at two different datasets: 1. Iris Flower Dataset 2. Boston Housing Prices. These are so-called toy datasets, well-known machine learning examples, and already included in the Python machine learning library scikit-learn https://scikit-learn.org/stable/datasets/toy_dataset.html. The Iris Flower dataset is an example of a classification problem, whereas the Boston Housing Prices dataset is a regression example. What does an ML project always look like? * Idea --> Problem Definition / Hypothesis formulation * Analyze and Visualize your data - Understand your data (dimensions, data types, class distributions (bias!), data summary, correlations, skewness) - Visualize your data (box and whisker / violin / distribution / scatter matrix) * Data Preprocessing including data cleansing, data wrangling, data compilation, normalization, standardization * Apply algorithms and make predictions * Improve, validate and present results. Let's get started: Load some libraries | import pandas as pd # data analysis
import numpy as np # math operations on arrays and vectors
import matplotlib.pyplot as plt # plotting
# display plots directly in the notebook
%matplotlib inline
import sklearn # the library we use for all ML related functions, algorithms | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Example 1: Iris flower dataset https://scikit-learn.org/stable/datasets/toy_dataset.html#iris-dataset 4 numeric, predictive attributes (sepal length in cm, sepal width in cm, petal length in cm, petal width in cm) and the class (Iris-Setosa, Iris-Versicolour, Iris-Virginica). **Hypothesis:** One can predict the class of an Iris flower based on its attributes. Here this is just one sentence, but formulating this hypothesis is a non-trivial, iterative task, which is the basis for data and feature selection and extremely important for the overall success! 1. Load the data | # check here again with autocompletion --> then you can see all available datasets
# https://scikit-learn.org/stable/datasets/toy_dataset.html
from sklearn.datasets import load_iris
(data, target) = load_iris(return_X_y=True, as_frame=True)
data
target | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
We will combine this now into one dataframe and check the classes | data["class"]=target
data | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
2. Understand your data | data.describe() | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
This is a classification problem, so we will check the class distribution. This is important to avoid bias due to over- or underrepresentation of classes. Well-known examples of this problem are predictive maintenance (very few failure cases compared to normal runs) and Amazon's hiring AI https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G | class_counts = data.groupby('class').size()
class_counts | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Now let's check for correlationsCorrelation means the relationship between two variables and how they may or may not change together.There are different methods available (--> check with ?data.corr) | correlations = data.corr(method='pearson')
correlations | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
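Under the hood, the Pearson coefficient used by `corr` is just the covariance of two columns normalized by the product of their standard deviations. A small self-contained sketch (the toy arrays are made up for illustration):

```python
import numpy as np

def pearson(x, y):
    # Pearson r: covariance normalized by the product of standard deviations
    x = x - x.mean()
    y = y - y.mean()
    return (x * y).sum() / np.sqrt((x * x).sum() * (y * y).sum())

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.0, 6.0, 8.0])  # perfectly correlated with a
c = np.array([4.0, 3.0, 2.0, 1.0])  # perfectly anti-correlated with a
```

`pearson(a, b)` is 1 and `pearson(a, c)` is -1, which is exactly what the extreme cells of a correlation heatmap mean.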
Let's do a heatmap plot for the correlation matrix (pandas built-in) | correlations.style.background_gradient(cmap='coolwarm').set_precision(2) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Now we will also check the skewness of the distributions, assuming a normal Gaussian distribution. The skew results show a positive (right) or negative (left) skew. Values closer to zero show less skew. | skew=data.skew()
skew | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
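As a sanity check, the skew statistic can be computed by hand as the third standardized moment. Note that pandas' `.skew()` additionally applies a small-sample bias correction, so its values differ slightly from this population version (the toy arrays are invented for illustration):

```python
import numpy as np

def skewness(x):
    # Population (Fisher-Pearson) skewness: the third standardized moment
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

symmetric = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # skew 0
right_skewed = np.array([1.0, 1.0, 1.0, 2.0, 10.0])  # positive skew
```

A positive result indicates a longer right tail, a negative one a longer left tail.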
2. Visualize your data - Histogram - Pairplot - Density | data.hist()
data.plot(kind="density", subplots=True, layout=(3,2),sharex=False) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Another nice plot is the box and whisker plot, visualizing the quartiles of a distribution | data.plot(kind="box", subplots=True, layout=(3,2),sharex=False) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Another option is seaborn's violin plot, which gives a more intuitive feeling for the distribution of values | import seaborn as sns
sns.violinplot(data=data,x="class", y="sepal length (cm)") | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
And last but not least a scatterplot matrix, similar to the pairplot we did in the last session. This should also give insights about correlations. | sns.pairplot(data)
3. Data Preprocessing. For this dataset, there are some steps we don't need to take, such as combining multiple data sources into one table (including adapting formats and granularities), and we don't need to handle missing values or NaNs. But preprocessing also includes - Rescaling - Normalization. The goal of these transformations is to bring the data into the format that is most beneficial for the algorithms applied later. For example, optimization algorithms for multivariate optimizations perform better when all attributes / parameters have the same scale, and other methods assume that input variables have a Gaussian distribution, so it is better to transform the input parameters to meet these requirements. First we look at **rescaling**, which brings all attributes (parameters) into the same range, most of the time [0,1]. To apply these preprocessing steps, we first need to transform the dataframe into an array and split the array into input and output values, here the descriptive parameters and the class. | # transform into array
array = data.values
array
# separate array into input and output components
X = array[:,0:4]
Y = array[:,4]
# Now we apply the MinMaxScaler with a range of [0,1], so that afterwards all columns have a min of 0 and a max of 1.
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
rescaledX | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
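The same rescaling can be written in plain numpy, which makes the formula explicit; a sketch on a tiny made-up matrix:

```python
import numpy as np

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [4.0, 40.0]])

# Column-wise rescaling to [0, 1]: (x - min) / (max - min),
# which is what MinMaxScaler computes for each attribute
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```

After the transform, every column has minimum 0 and maximum 1.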
Now we will apply Normalization by using the Standard Scaler, which means that each column (each attribute / parameter) will be transformed, such that afterwards each attribute has a standard distribution with mean = 0 and std. dev. = 1.Given the distribution of the data, each value in the dataset will have the mean value subtracted, and then divided by the standard deviation of the whole dataset (or feature in the multivariate case) | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
rescaledX | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
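Again the transformation itself is a one-liner in numpy: StandardScaler subtracts the column mean and divides by the column standard deviation. A sketch on a tiny made-up matrix:

```python
import numpy as np

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])

# Column-wise standardization: subtract the mean, divide by the
# (population) standard deviation
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```

Each column of the result has mean 0 and standard deviation 1.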
4. Feature Selection (Parameter Sensitivity). Now we come to an extremely interesting part: finding out which parameters really have an impact on the outputs. This is the first time we can validate our assumptions, as we get a qualitative and a quantitative answer to the question of which parameters are important. This also matters because having irrelevant features in your data can decrease the accuracy of many models and increase the training time. | # Feature Extraction with Univariate Statistical Tests (Chi-squared for classification)
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
# feature extraction
test = SelectKBest(score_func=chi2, k=3)
fit = test.fit(X, Y)
# summarize scores
print(fit.scores_)
features = fit.transform(X)
# summarize selected features
print(features[0:5,:]) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Here we can see the scores of the features. The higher the score, the more impact they have. As we selected k=3, the transform keeps the values of the three highest-scoring of the four attributes (sepal length (cm), sepal width (cm), petal length (cm), petal width (cm)). This result also makes sense when remembering the correlation heatmap... Another very interesting transformation, which fulfills the same job as feature extraction in terms of data reduction, is the PCA. Here the complete dataset is transformed into a reduced dataset (you set the number of resulting principal components). A Singular Value Decomposition of the data is performed to project it to a lower-dimensional space. | from sklearn.decomposition import PCA
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
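The SVD behind PCA can be sketched directly in numpy. The synthetic data below is deliberately stretched along one axis so that the first component dominates (all numbers are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# 2D Gaussian cloud, stretched: variance ~9 along axis 0, ~0.25 along axis 1
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.5]])

Xc = X - X.mean(axis=0)                          # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
explained_variance_ratio = S**2 / (S**2).sum()   # like fit.explained_variance_ratio_
components = Vt                                  # rows are the principal directions
```

The ratios sum to 1 and are sorted in decreasing order, mirroring sklearn's `explained_variance_ratio_`.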
Of course there are even more possibilities, especially when you consider that the application of ML algorithms itself will give the feature importance. So there are multiple built-in methods available in sklearn. 5. Apply ML algorithms - The first step is to split our data into **training and testing data**. We need a separate testing dataset that was not used for training to validate the performance and accuracy of our trained model. - **Which algorithm to take?** There is no simple answer to that. Based on your problem (classification vs regression), there are different classes of algorithms, but you cannot know beforehand which algorithm will perform best on your data. So it is always a good idea to try different algorithms and check the performance. - How to evaluate the performance? There are different metrics available to check the **performance of an ML model** | # specifying the size of the testing data set
# seed: reproducible random split --> especially important when comparing different algorithms with each other.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
test_size = 0.33
seed = 7 # we set a seed to get a reproducible split - especially important when you want to compare diff. algorithms with each other
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=test_size,
random_state=seed)
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
result = model.score(X_test, Y_test)
print("Accuracy: %.3f%%" % (result*100.0))
# Let's compare the accuracy, when we use the same data for training and testing
model = LogisticRegression(solver='liblinear')
model.fit(X, Y)
result = model.score(X, Y)
print("Accuracy: %.3f%%" % (result*100.0))
# get importance
model = LogisticRegression(solver='liblinear')
model.fit(X_train, Y_train)
importance = model.coef_[0]
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# print("Feature: "+str(i)+", Score: "+str(v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance)
# decision tree for feature importance on a regression problem
from sklearn.datasets import make_regression
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor()
# fit the model
model.fit(X_train, Y_train)
# get importance
importance = model.feature_importances_
# summarize feature importance
for i,v in enumerate(importance):
print('Feature: %0d, Score: %.5f' % (i,v))
# plot feature importance
plt.bar([x for x in range(len(importance))], importance) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Test-Train-Splits. Performing just one test-train split and checking the performance or feature importance might not be good enough, as the result could be very good or very bad by coincidence due to this specific split. The easiest solution is to repeat this process several times and check the averaged accuracy, or use some of the ready-to-use built-in tools in scikit-learn, like KFold, cross_val_score, LeaveOneOut, ShuffleSplit. Which ML model to use? Here is just a tiny overview of some models one can use for classification and regression problems. For more models, many of which are built into scikit-learn, please refer to https://scikit-learn.org/stable/index.html and https://machinelearningmastery.com - Logistic / Linear Regression - k-nearest neighbour - Classification and Regression Trees - Support Vector Machines - Neural Networks. In the following we will just use logistic regression (https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression) for our classification example and linear regression (https://scikit-learn.org/stable/modules/linear_model.html#generalized-linear-regression) for our regression example. ML model evaluation. For evaluating the model performance, there are different metrics available, depending on your type of problem (classification vs regression). For classification, there are for example: - Classification accuracy - Logistic Loss - Confusion Matrix - ... For regression, there are for example: - Mean Absolute Error - Mean Squared Error / (R)MSE - R^2. So accuracy alone does by far not tell you the whole story; you need to check other metrics as well! The confusion matrix is a handy presentation of the accuracy of a model with two or more classes. The table presents predictions on the x-axis and true outcomes on the y-axis --> false negatives, false positives: https://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/ | from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.metrics import confusion_matrix
#Lets have a look at our classification problem:
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LogisticRegression(solver='liblinear')
# Classification accuracy:
scoring = 'accuracy'
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy: %.3f (%.3f)" % (results.mean(), results.std()))
# Logistic Loss
scoring = 'neg_log_loss'
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Logloss: %.3f (%.3f)" % (results.mean(), results.std()))
# Confusion Matrix
model.fit(X_train, Y_train)
predicted = model.predict(X_test)
matrix = confusion_matrix(Y_test, predicted)
print(matrix) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
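To demystify the confusion matrix, here is a hand-rolled version on tiny made-up labels (the `confusion` helper is ours for illustration; in practice use `sklearn.metrics.confusion_matrix`):

```python
import numpy as np

def confusion(y_true, y_pred, labels):
    # m[i, j] = number of samples whose true class is labels[i]
    # and whose predicted class is labels[j]
    m = np.zeros((len(labels), len(labels)), dtype=int)
    index = {label: i for i, label in enumerate(labels)}
    for t, p in zip(y_true, y_pred):
        m[index[t], index[p]] += 1
    return m

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]
cm = confusion(y_true, y_pred, labels=[0, 1])  # [[1, 1], [1, 2]]
```

The diagonal holds the correct predictions, so the trace divided by the total count is the accuracy; the off-diagonal cells are the false positives and false negatives.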
Regression Example: Boston Housing Prices | import sklearn
from sklearn.datasets import load_boston
data = load_boston(return_X_y=False)
print(data.DESCR)
df=pd.DataFrame(data.data)
df.columns=data.feature_names
df
df["MEDV"]=data.target
df | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
Now we start again with our procedure: * Hypothesis * Understand and visualize the data * Preprocessing * Feature Selection * Apply Model * Evaluate Results. Our **hypothesis** here is that we can actually predict the price of a house based on attributes of the geographic area, the population and the property. | df.describe()
sns.pairplot(df[["DIS","RM","CRIM","LSTAT","MEDV"]])
from sklearn.linear_model import LinearRegression
# Now we do the
# preprocessing
# feature selection
# training-test-split
# ML model application
# evaluation
array = df.values
X = array[:,0:13]
Y = array[:,13]
# preprocessing
scaler = StandardScaler().fit(X)
rescaledX = scaler.transform(X)
# feature selection
test = SelectKBest(k=6)
fit = test.fit(rescaledX, Y)
features = fit.transform(X)
# train-test-split
X_train, X_test, Y_train, Y_test = train_test_split(features, Y, test_size=0.3,
random_state=5)
# build model
kfold = KFold(n_splits=10, random_state=7, shuffle=True)
model = LinearRegression()
model.fit(X_train,Y_train)
acc = model.score(X_test, Y_test)
# evaluate model
model = LinearRegression()
scoring = 'neg_mean_squared_error'
results = cross_val_score(model, X, Y, cv=kfold, scoring=scoring)
print("Accuracy: %.3f%%" % (acc*100.0))
print("MSE: %.3f (%.3f)" % (results.mean(), results.std()))
# And now:
# Make predictions
# make predictions
# model.predict(new_data) | _____no_output_____ | MIT | ML/3_ML.ipynb | astridwalle/python_jupyter_basics |
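`LinearRegression` fits ordinary least squares; the same fit can be sketched with `np.linalg.lstsq` on synthetic data (all numbers below are made up for illustration, not the Boston data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
true_coef = np.array([2.0, -1.0])
# Targets: linear combination plus intercept 3.0 plus small noise
y = X @ true_coef + 3.0 + 0.01 * rng.normal(size=100)

# Append a column of ones so the intercept is fitted as an extra coefficient
A = np.hstack([X, np.ones((100, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The recovered coefficients land very close to the true values (2.0, -1.0) and intercept (3.0), which is exactly what `model.coef_` and `model.intercept_` report in sklearn.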
Introduction to Pandas | %matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 8 | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
1. Let's start with a showcase Case 1: titanic survival data | df = pd.read_csv("data/titanic.csv")
df.head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Starting from reading this dataset, to answering questions about this data in a few lines of code: **What is the age distribution of the passengers?** | df['Age'].hist() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
**How does the survival rate of the passengers differ between sexes?** | df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x)) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
**Or how does it differ between the different classes?** | df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar') | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
All the needed functionality for the above examples will be explained throughout this tutorial. Case 2: air quality measurement timeseries AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from EuropeStarting from these hourly data for different stations: | data = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True)
data.head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
to answering questions about this data in a few lines of code:**Does the air pollution show a decreasing trend over the years?** | data['1999':].resample('M').mean().plot(ylim=[0,120])
data['1999':].resample('A').mean().plot(ylim=[0,100]) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
**What is the difference in diurnal profile between weekdays and weekend?** | data['weekday'] = data.index.weekday
data['weekend'] = data['weekday'].isin([5, 6])
data_weekend = data.groupby(['weekend', data.index.hour])['BASCH'].mean().unstack(level=0)
data_weekend.plot() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
We will come back to these examples and build them up step by step. 2. Pandas: data analysis in Python. For data-intensive work in Python the [Pandas](http://pandas.pydata.org) library has become essential. What is `pandas`? * Pandas can be thought of as *NumPy arrays with labels* for rows and columns, and better support for heterogeneous data types, but it's also much, much more than that. * Pandas can also be thought of as `R`'s `data.frame` in Python. * Powerful for working with missing data, working with time series data, for reading and writing your data, for reshaping, grouping, merging your data, ... Its documentation: http://pandas.pydata.org/pandas-docs/stable/ **When do you need pandas?** When working with **tabular or structured data** (like an R dataframe, SQL table, Excel spreadsheet, ...): - Import data - Clean up messy data - Explore data, gain insight into data - Process and prepare your data for analysis - Analyse your data (together with scikit-learn, statsmodels, ...). ATTENTION!: Pandas is great for working with heterogeneous and tabular 1D/2D data, but not all types of data fit in such structures! When working with array data (e.g. images, numerical algorithms): just stick with numpy. When working with multidimensional labeled data (e.g. climate data): have a look at [xarray](http://xarray.pydata.org/en/stable/). 2. The pandas data structures: `DataFrame` and `Series`. A `DataFrame` is a **tabular data structure** (a multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index. | df
Attributes of the DataFrameA DataFrame has besides a `index` attribute, also a `columns` attribute: | df.index
df.columns | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
To check the data types of the different columns: | df.dtypes | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
An overview of that information can be given with the `info()` method: | df.info() | <class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
PassengerId 891 non-null int64
Survived 891 non-null int64
Pclass 891 non-null int64
Name 891 non-null object
Sex 891 non-null object
Age 714 non-null float64
SibSp 891 non-null int64
Parch 891 non-null int64
Ticket 891 non-null object
Fare 891 non-null float64
Cabin 204 non-null object
Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 66.2+ KB
| MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Also a DataFrame has a `values` attribute, but attention: when you have heterogeneous data, all values will be upcasted: | df.values | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Apart from importing your data from an external source (text file, excel, database, ..), one of the most common ways of creating a dataframe is from a dictionary of arrays or lists.Note that in the IPython notebook, the dataframe will display in a rich HTML view: | data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
df_countries = pd.DataFrame(data)
df_countries | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
One-dimensional data: `Series` (a column of a DataFrame)A Series is a basic holder for **one-dimensional labeled data**. | df['Age']
age = df['Age'] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Attributes of a Series: `index` and `values`. The Series also has an `index` and a `values` attribute, but no `columns` | age.index
You can access the underlying numpy array representation with the `.values` attribute: | age.values[:10] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
We can access series values via the index, just like for NumPy arrays: | age[0] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Unlike the NumPy array, though, this index can be something other than integers: | df = df.set_index('Name')
df
age = df['Age']
age
age['Dooley, Mr. Patrick'] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
but with the power of numpy arrays: many things you can do with numpy arrays can also be applied to DataFrames / Series. E.g. element-wise operations: | age * 1000
A range of methods: | age.mean() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Fancy indexing, like indexing with a list or boolean indexing: | age[age > 70] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
But also a lot of pandas specific methods, e.g. | df['Embarked'].value_counts() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: What is the maximum Fare that was paid? And the median? | df["Fare"].max()
df["Fare"].median() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Calculate the average survival ratio for all passengers (note: the 'Survived' column indicates whether someone survived (1) or not (0)). | survived_0 = df[df["Survived"] == 0]["Survived"].count()
survived_1 = df[df["Survived"] == 1]["Survived"].count()
total = df["Survived"].count()
survived_0_ratio = survived_0/total
survived_1_ratio = survived_1/total
print(survived_0_ratio)
print(survived_1_ratio)
# Method 2
print(df["Survived"].mean()) | 0.6161616161616161
0.3838383838383838
0.3838383838383838
| MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
3. Data import and export. A wide range of input/output formats is natively supported by pandas: * CSV, text * SQL database * Excel * HDF5 * json * html * pickle * sas, stata * (parquet) * ... | #pd.read
#df.to | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Very powerful csv reader: | pd.read_csv? | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Luckily, if we have a well formed csv file, we don't need many of those arguments: | df = pd.read_csv("data/titanic.csv")
df.head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Read the `data/20000101_20161231-NO2.csv` file into a DataFrame `no2`. Some aspects about the file: Which separator is used in the file? The second row includes unit information and should be skipped (check `skiprows` keyword) For missing values, it uses the `'n/d'` notation (check `na_values` keyword) We want to parse the 'timestamp' column as datetimes (check the `parse_dates` keyword) | no2 = pd.read_csv("./data/20000101_20161231-NO2.csv", sep=";", skiprows=[1],
index_col =[0], na_values=["n/d"], parse_dates=True )
no2 | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
4. Exploration Some useful methods:`head` and `tail` | no2.head(3)
no2.tail() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
`info()` | no2.info() | <class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 149039 entries, 2000-01-01 01:00:00 to 2016-12-31 23:00:00
Data columns (total 4 columns):
BASCH 139949 non-null float64
BONAP 136493 non-null float64
PA18 142259 non-null float64
VERS 143813 non-null float64
dtypes: float64(4)
memory usage: 5.7 MB
| MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Getting some basic summary statistics about the data with `describe`: | no2.describe() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Quickly visualizing the data | no2.plot(kind='box', ylim=[0,250])
no2['BASCH'].plot(kind='hist', bins=50) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Plot the age distribution of the titanic passengers | df["Age"].hist() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
The default plot (when not specifying `kind`) is a line plot of all columns: | no2.plot(figsize=(12,6)) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
This does not tell us much. We can select part of the data (e.g. the last 500 data points): | no2[-500:].plot(figsize=(12,6)) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Or we can use some more advanced time series features -> see further in this notebook! 5. Selecting and filtering data ATTENTION!: One of pandas' basic features is the labeling of rows and columns, but this also makes indexing a bit more complex compared to numpy. We now have to distinguish between: selection by **label** selection by **position** | df = pd.read_csv("data/titanic.csv") | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
`df[]` provides some convenience shortcuts For a DataFrame, basic indexing selects the columns. Selecting a single column: | df['Age'] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
or multiple columns: | df[['Age', 'Fare']] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
But, slicing accesses the rows: | df[10:15] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Systematic indexing with `loc` and `iloc`When using `[]` like above, you can only select from one axis at once (rows or columns, not both). For more advanced indexing, you have some extra attributes: * `loc`: selection by label* `iloc`: selection by position | df = df.set_index('Name')
df.loc['Bonnell, Miss. Elizabeth', 'Fare']
df.loc['Bonnell, Miss. Elizabeth':'Andersson, Mr. Anders Johan', :] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Selecting by position with `iloc` works similar as indexing numpy arrays: | df.iloc[0:2,1:3] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
The different indexing methods can also be used to assign data: | df.loc['Braund, Mr. Owen Harris', 'Survived'] = 100
df | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Boolean indexing (filtering) Often, you want to select rows based on a certain condition. This can be done with 'boolean indexing' (like a WHERE clause in SQL) and is comparable to numpy. The indexer (or boolean mask) should be 1-dimensional and the same length as the thing being indexed. | df['Fare'] > 50
df[df['Fare'] > 50] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
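Multiple conditions can be combined element-wise with `&` (and) and `|` (or); each condition needs its own parentheses because of operator precedence. A short self-contained sketch on a toy frame with the same column names:

```python
import pandas as pd

df = pd.DataFrame({'Sex': ['male', 'female', 'female', 'male'],
                   'Fare': [7.25, 71.28, 8.05, 53.10]})

# Combine two boolean masks; each one wrapped in parentheses
subset = df[(df['Sex'] == 'female') & (df['Fare'] > 50)]
print(subset)
```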
EXERCISE: Based on the titanic data set, select all rows for male passengers and calculate the mean age of those passengers. Do the same for the female passengers | df = pd.read_csv("data/titanic.csv")
# %load snippets/01-pandas_introduction63.py
male_mean_age = df[df["Sex"] == "male"]["Age"].mean()
female_mean_age = df[df["Sex"] == "female"]["Age"].mean()
print(male_mean_age)
print(female_mean_age)
print(male_mean_age == female_mean_age)
# by loc
male_mean_age = df.loc[df["Sex"] == "male", "Age"].mean()
female_mean_age = df.loc[df["Sex"] == "female", "Age"].mean()
print(male_mean_age)
print(female_mean_age)
print(male_mean_age == female_mean_age) | 30.72664459161148
27.915708812260537
False
| MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Based on the titanic data set, how many passengers older than 70 were on the Titanic? | len(df[df["Age"] > 70]) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
6. The group-by operation Some 'theory': the groupby operation (split-apply-combine) | df = pd.DataFrame({'key':['A','B','C','A','B','C','A','B','C'],
'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})
df | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Recap: aggregating functions When analyzing data, you often calculate summary statistics (aggregations like the mean, max, ...). As we have seen before, we can easily calculate such a statistic for a Series or column using one of the many available methods. For example: | df['data'].sum() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
However, in many cases your data has certain groups in it, and in that case, you may want to calculate this statistic for each of the groups.For example, in the above dataframe `df`, there is a column 'key' which has three possible values: 'A', 'B' and 'C'. When we want to calculate the sum for each of those groups, we could do the following: | for key in ['A', 'B', 'C']:
print(key, df[df['key'] == key]['data'].sum()) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
This becomes very verbose when having multiple groups. You could make the above a bit easier by looping over the different values, but still, it is not very convenient to work with.What we did above, applying a function on different groups, is a "groupby operation", and pandas provides some convenient functionality for this. Groupby: applying functions per group The "group by" concept: we want to **apply the same function on subsets of your dataframe, based on some key to split the dataframe in subsets**This operation is also referred to as the "split-apply-combine" operation, involving the following steps:* **Splitting** the data into groups based on some criteria* **Applying** a function to each group independently* **Combining** the results into a data structureSimilar to SQL `GROUP BY` Instead of doing the manual filtering as above df[df['key'] == "A"].sum() df[df['key'] == "B"].sum() ...pandas provides the `groupby` method to do exactly this: | df.groupby('key').sum()
df.groupby('key').aggregate(np.sum) # 'sum' | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
And many more methods are available. | df.groupby('key')['data'].sum() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
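For instance, `.agg()` accepts a list of functions, computing several statistics per group at once. A minimal sketch on the same toy frame as above:

```python
import pandas as pd

df = pd.DataFrame({'key': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C'],
                   'data': [0, 5, 10, 5, 10, 15, 10, 15, 20]})

# A single aggregation returns a Series; a list of them returns a
# DataFrame with one column per function
summary = df.groupby('key')['data'].agg(['sum', 'mean', 'max'])
print(summary)
```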
Application of the groupby concept on the titanic data We go back to the titanic passengers survival data: | df = pd.read_csv("data/titanic.csv")
df.head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Calculate the average age for each sex again, but now using groupby. | # %load snippets/01-pandas_introduction76.py
df.groupby("Sex")["Age"].mean() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Calculate the average survival ratio for all passengers. | # df.groupby("Survived")["Survived"].count()
df["Survived"].mean()
# %load snippets/01-pandas_introduction77.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Calculate this survival ratio for all passengers younger that 25 (remember: filtering/boolean indexing). | # %load snippets/01-pandas_introduction78.py
df[df["Age"] <= 25]["Survived"].mean()
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived']) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: What is the difference in the survival ratio between the sexes? | # %load snippets/01-pandas_introduction79.py
df.groupby("Sex")["Survived"].mean() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Or how does it differ between the different classes? Make a bar plot visualizing the survival ratio for the 3 classes. | # %load snippets/01-pandas_introduction80.py
df.groupby("Pclass")["Survived"].mean().plot(kind = "bar") | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: Make a bar plot to visualize the average Fare paid by people depending on their age. The age column is divided into separate classes using the `pd.cut` function as provided below. | df['AgeClass'] = pd.cut(df['Age'], bins=np.arange(0,90,10))
# %load snippets/01-pandas_introduction82.py
df.groupby("AgeClass")["Fare"].mean().plot(kind="bar") | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
7. Working with time series data | no2 = pd.read_csv('data/20000101_20161231-NO2.csv', sep=';', skiprows=[1], na_values=['n/d'], index_col=0, parse_dates=True) | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
When we ensure the DataFrame has a `DatetimeIndex`, time-series related functionality becomes available: | no2.index | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Indexing a time series works with strings: | no2["2010-01-01 09:00": "2010-01-01 12:00"] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
A nice feature is "partial string" indexing, so you don't need to provide the full datetime string. E.g. all data of January up to March 2012: | no2['2012-01':'2012-03'] | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Time and date components can be accessed from the index: | no2.index.hour
no2.index.year | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Converting your time series with `resample` A very powerful method is **`resample`: converting the frequency of the time series** (e.g. from hourly to daily data). Remember the air quality data: | no2.plot() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
The time series has a frequency of 1 hour. I want to change this to daily: | no2.head()
no2.resample('D').mean().head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Above I take the mean, but as with `groupby` I can also specify other methods: | no2.resample('D').max().head() | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases These strings can also be combined with numbers, e.g. `'10D'`. Further exploring the data: | no2.resample('M').mean().plot() # 'A'
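A combined frequency string such as `'10D'` means buckets of ten days. A self-contained sketch on a synthetic hourly series (the real `no2` data is hourly as well):

```python
import numpy as np
import pandas as pd

# Synthetic hourly series over exactly 30 days
idx = pd.date_range('2012-01-01', periods=30 * 24, freq='h')
s = pd.Series(np.arange(len(idx), dtype=float), index=idx)

# '10D' = ten-day bins; take the mean of each bin -> 3 values
ten_daily = s.resample('10D').mean()
print(ten_daily)
```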
# no2['2012'].resample('D').plot()
# %load snippets/01-pandas_introduction95.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: The evolution of the yearly averages, and the overall mean of all stations. Use `resample` and `plot` to plot the yearly averages for the different stations. The overall mean of all stations can be calculated by taking the mean of the different columns (`.mean(axis=1)`). | # %load snippets/01-pandas_introduction96.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
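One possible solution, sketched on a synthetic frame with the same shape as `no2` (hourly data, one column per station; the station names and values are made up) — on the real data the same two `resample`/`mean` lines apply:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range('2012-01-01', '2014-12-31 23:00', freq='h')
no2 = pd.DataFrame(rng.uniform(10, 60, size=(len(idx), 2)),
                   index=idx, columns=['BASCH', 'BONAP'])

# Yearly averages per station ('YS' = year-start frequency)
yearly = no2.resample('YS').mean()

# Overall mean of all stations: average over the columns
yearly['overall'] = yearly.mean(axis=1)
print(yearly)
```

On the real data you would finish with `yearly.plot()`.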
EXERCISE: What does the *typical monthly profile* look like for the different stations? Add a 'month' column to the dataframe. Group by the month to obtain the typical monthly averages over the different years. First, we add a column to the dataframe that indicates the month (integer value of 1 to 12): | # %load snippets/01-pandas_introduction97.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
Now, we can calculate the mean of each month over the different years: | # %load snippets/01-pandas_introduction98.py
# %load snippets/01-pandas_introduction99.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: The typical diurnal profile for the different stations As for the month, you can now group by the hour of the day. | # %load snippets/01-pandas_introduction100.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |
EXERCISE: What is the difference in the typical diurnal profile between weekdays and weekend days for the 'BASCH' station? Add a column 'weekday' defining the different days in the week. Add a column 'weekend' defining if a day is in the weekend (i.e. days 5 and 6) or not (True/False). You can group by multiple items at the same time. In this case you would need to group by both weekend/weekday and hour of the day. Add a column indicating the weekday: | no2.index.weekday?
# %load snippets/01-pandas_introduction102.py | _____no_output_____ | MIT | pandas/01-pandas_introduction.ipynb | jai-singhal/data_science |