Convert numpy.datetime64 objects (which are the indices of our DataFrame) to matplotlib floating point numbers. These numbers represent the number of days (fraction part represents hours, minutes, seconds) since 0001-01-01 00:00:00 UTC (assuming Gregorian calendar).
# mdt = mdates.date2num(df.index.astype(pd.datetime))
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
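A minimal, self-contained illustration of the conversion described above (the DataFrame here is made up; the original cell is left commented out):

import pandas as pd
import matplotlib.dates as mdates

# A toy DataFrame with a DatetimeIndex, standing in for the real df
df_demo = pd.DataFrame({'value': [1.0, 2.0, 3.0]},
                       index=pd.date_range('2000-01-01', periods=3, freq='D'))

# date2num converts datetime-like values into matplotlib day numbers
mdt = mdates.date2num(df_demo.index.to_pydatetime())
print(mdt)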
Append the new data to the original DataFrame:
# df['mpl_date'] = mdt
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Create a scatter plot
# ax = df.plot(kind='scatter', x='OLR', y='SOI', c='mpl_date',
#              colormap='viridis', colorbar=False, edgecolors='none')
# plt.colorbar(ax.collections[0], ticks=mdates.YearLocator(5),
#              format=mdates.DateFormatter('%Y'))
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
Exercise: rolling functions

1. Subset data

Start by subsetting the SOI DataFrame. Use either numerical indices or, even better, datetime indices.
# your code here
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
2. Plot the subset data

You can create a figure and axis using matplotlib.pyplot, or just use the plot() method of the pandas DataFrame.
# your code here
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
3. Explore what the rolling() method does

What does this method return?
# df.rolling?
# your code here
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
4. Plot the original series and the smoothed series
# your code here
Day_2/17-Pandas-Intro.ipynb
ueapy/enveast_python_course_materials
mit
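A hedged sketch of one possible solution to the four steps of the exercise above. It assumes the SOI data lives in a DataFrame called df with a datetime index and an 'SOI' column; the date range and window size are arbitrary choices:

import matplotlib.pyplot as plt

# 1. Subset the data using datetime indices
subset = df.loc['1990':'2000', 'SOI']

# 2. Plot the subset using the pandas plot() method
ax = subset.plot(label='SOI')

# 3. rolling() returns a Rolling object; aggregate it (e.g. with mean) to get a smoothed series
smoothed = subset.rolling(window=12, center=True).mean()

# 4. Plot the original and the smoothed series together
smoothed.plot(ax=ax, label='12-month rolling mean')
ax.legend()
plt.show()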
EXERCISE: Three Variables Exploration. Explore the relationship between age, income and default.

5. Transform
# Let's again revisit the data types in the dataset
df.dtypes
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
Two of the columns are categorical in nature: grade and ownership. To build models, we need all of the features to be numeric. There are a number of ways to convert categorical variables to numeric values. We will use one of the popular options: label encoding (LabelEncoder).
from sklearn.preprocessing import LabelEncoder

# Let's not modify the original dataset.
# Let's transform it in another dataset
df_encoded = df.copy()

# instantiate label encoder
le_grade = LabelEncoder()

# fit label encoder
le_grade = le_grade.fit(df_encoded["grade"])
df_encoded.grade = le_grade.transform(df.grade)

df_encoded.head()
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
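The same pattern can be applied to the ownership column. The next cell poses exactly this as an exercise, so treat the following only as a hedged sketch (it reuses LabelEncoder and df_encoded from the cell above):

# Hedged sketch for the ownership-encoding exercise below
le_ownership = LabelEncoder()
le_ownership = le_ownership.fit(df_encoded["ownership"])
df_encoded.ownership = le_ownership.transform(df.ownership)

df_encoded.head()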
EXERCISE: Do label encoding on ownership.

6. Model

Common approaches:
- Linear models
- Tree-based models
- Neural Networks
- ...

Some choices to consider:
- Interpretability
- Run-time
- Model complexity
- Scalability

For the purpose of this workshop, we will use tree-based models. We will do the following two:
- Decision Tree
- Random Forest

Decision Trees

Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.

Let's first build a model using just two features to build some intuition around decision trees.

Step 1 - Create features matrix and target vector
X_2 = df_encoded.loc[:, ('age', 'amount')]
y = df_encoded.loc[:, 'default']
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
Step 3 - Visualize the decision tree
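Step 2 (fitting the classifier) is not shown in this excerpt. Since the next cell references clf_dt_2, here is a minimal sketch of how it could have been created, assuming a shallow decision tree trained on X_2 and y:

from sklearn import tree

# Assumed step 2: fit a shallow decision tree on the two features
clf_dt_2 = tree.DecisionTreeClassifier(max_depth=2)
clf_dt_2 = clf_dt_2.fit(X_2, y)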
import pydotplus
from IPython.display import Image

dot_data = tree.export_graphviz(clf_dt_2, out_file='tree.dot',
                                feature_names=X_2.columns,
                                class_names=['no', 'yes'],
                                filled=True, rounded=True,
                                special_characters=True)

# In case you don't have graphviz installed
# txt = open("tree_3.dot").read().replace("\\n", "\n ").replace(";", ";\n")
# print(txt)

graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
EXERCISE: Change the depth of the Decision Tree classifier to 10 and plot the decision boundaries again.

Let's first understand the difference between class predictions and class probabilities.
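The cell below uses clf_dt_10, the classifier from the exercise above. A minimal sketch of how it might be created, assuming the same two features:

# Assumed: a deeper decision tree on the same two features
clf_dt_10 = tree.DecisionTreeClassifier(max_depth=10)
clf_dt_10 = clf_dt_10.fit(X_2, y)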
pred_class = clf_dt_10.predict(X_2)
pred_proba = clf_dt_10.predict_proba(X_2)

plt.hist(pred_class);

import seaborn as sns
sns.kdeplot(pred_proba[:, 1], shade=True)
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
EXERCISE: Build a decision tree classifier with max_depth = 10 and plot the confusion_matrix & auc.

Cross-validation

Now that we have chosen the error metric, how do we find the generalization error? We do this using cross-validation ([source](https://en.wikipedia.org/wiki/Cross-validation_(statistics))).

From the wiki: One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.

We will use StratifiedKFold. This ensures that in each fold, the proportions of the positive class and the negative class remain similar to the original dataset.

This is the process we will follow to get the mean cv-score:
1. Generate k folds
2. Train the model using k-1 folds
3. Predict for the kth fold
4. Find the accuracy
5. Append it to the array
6. Repeat 2-5 for different validation folds
7. Report the mean cross-validation score
from sklearn.model_selection import StratifiedKFold

def cross_val(clf, k):
    # Instantiate stratified k fold.
    kf = StratifiedKFold(n_splits=k)

    # Let's use an array to store the results of cross-validation
    kfold_auc_score = []

    # Run kfold CV
    for train_index, test_index in kf.split(X, y):
        clf = clf.fit(X.iloc[train_index], y.iloc[train_index])
        proba = clf.predict_proba(X.iloc[test_index])[:, 1]
        auc_score = roc_auc_score(y.iloc[test_index], proba)
        print(auc_score)
        kfold_auc_score.append(auc_score)
    print("Mean K Fold CV:", np.mean(kfold_auc_score))

cross_val(clf_dt, 3)
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
EXERCISE: Build a classifier with max_depth = 10 and run a 5-fold CV to get the auc score. Build a classifier with max_depth = 20 and run a 5-fold CV to get the auc score.

Bagging

Decision trees in general have low bias and high variance. We can think about it like this: given a training set, we can keep asking questions until we are able to distinguish between ALL examples in the data set. We could keep asking questions until there is only a single example in each leaf. Since this allows us to correctly classify all elements in the training set, the tree is unbiased. However, there are many possible trees that could distinguish between all elements, which means higher variance.

How do we reduce variance? In order to reduce the variance of a single decision tree, we usually place a restriction on the number of questions asked in the tree, which is what we did for the single decision trees in the previous notebooks. The other way to reduce variance is to build an ensemble of decision trees. The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.

How to ensemble?

Averaging: Build several estimators independently and then average their predictions. On average, the combined estimator is usually better than any single base estimator because its variance is reduced. Examples:
- Bagging
- Random Forest
- Extremely Randomized Trees

Boosting: Build base estimators sequentially and then try to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble. Examples:
- AdaBoost
- Gradient Boosting (e.g. xgboost)

Random Forest

In random forests, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. In addition, when splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features. As a result of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree), but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.

Random Forest Model

The advantage of the scikit-learn API is that the syntax remains fairly consistent across all the classifiers. If we change the DecisionTreeClassifier to RandomForestClassifier in the above code, we should be good to go :-)
from sklearn.ensemble import RandomForestClassifier

clf_rf = RandomForestClassifier(n_estimators=10)
cross_val(clf_rf, 5)
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
EXERCISE: Change the number of trees from 10 to 100 and make it 5-fold. Report the cross-validation error. (Hint: you should get ~0.74.)

A more detailed version of bagging and random forest can be found in the speakers' introductory machine learning workshop material: bagging, random forest.

Model Selection

We choose the model and hyper-parameters with the best cross-validation score on the chosen error metric. In our case, it is the random forest. Now, how do we get the model? We need to run the model with the chosen hyper-parameters on all of the training data, and serialize it.
final_model = RandomForestClassifier(n_estimators=100)
final_model = final_model.fit(X, y)
credit-risk/notebooks/1-ML-Model.ipynb
amitkaps/full-stack-data-science
mit
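The serialization step mentioned above is not shown in this excerpt. A minimal sketch using joblib (the filename is a placeholder):

import joblib

# Persist the trained model so it can be loaded later for serving
joblib.dump(final_model, 'credit-risk-model.pkl')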
BigQuery query magic Jupyter magics are notebook-specific shortcuts that allow you to run commands with minimal syntax. Jupyter notebooks come with many built-in commands. The BigQuery client library, google-cloud-bigquery, provides a cell magic, %%bigquery. The %%bigquery magic runs a SQL query and returns the results as a pandas DataFrame. Run a query on a public dataset The following example queries the BigQuery usa_names public dataset. usa_names is a Social Security Administration dataset that contains all names from Social Security card applications for births that occurred in the United States after 1879. The following example shows how to invoke the magic (%%bigquery), and how to pass in a standard SQL query in the body of the code cell. The results are displayed below the input cell as a pandas DataFrame.
%%bigquery
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
notebooks/official/template_notebooks/bigquery_magic.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
Explicitly specify a project By default, the %%bigquery magic command uses your default project to run the query. You may also explicitly provide a project ID using the --project flag. Note that your credentials must have permissions to create query jobs in the project you specify.
project_id = "your-project-id"

%%bigquery --project $project_id
SELECT name, SUM(number) as count
FROM `bigquery-public-data.usa_names.usa_1910_current`
GROUP BY name
ORDER BY count DESC
LIMIT 10
notebooks/official/template_notebooks/bigquery_magic.ipynb
GoogleCloudPlatform/bigquery-notebooks
apache-2.0
The class below implements all the logic you need to run the momentum backtester. Go through it and make sure you understand each part. You can run it first and make changes later to see if you made any improvements over the naive strategy.

There are 6 functions within the class:
- __init__: Initializes the class.
- getSymbolsToTrade: This is where we can select which stocks we want to test our strategy on. Here we're using just AAPL, as it is the only ticker returned.
- getInstrumentFeatureConfigDicts: This is the way that the toolbox creates features that we want to use in our logic. It's really important for resource optimisation at scale but can look a little daunting at first. We've created the features you'll need for you. If you're interested in learning more, you can do so here: https://blog.quant-quest.com/toolbox-breakdown-getfeatureconfigdicts-function/
- getPrediction: This again is fairly straightforward. We've included a few notes here, but for more detail see: https://blog.quant-quest.com/toolbox-breakdown-getprediction-function/ Once you've calculated the Hurst exponent, this should contain the logic to use it and make profitable trades.
- hurst_f: This is your time to shine! This is where you will need to implement the Hurst exponent as shown in the previous lecture. There are several different ways of calculating the Hurst exponent, so we recommend you use the method shown in the lecture to allow other people to easily help you, if needed!
- updateCount: A counter.
class MyTradingFunctions(): def __init__(self): self.count = 0 # When to start trading self.start_date = '2015/01/02' # When to end trading self.end_date = '2017/08/31' self.params = {} def getSymbolsToTrade(self): ''' Specify the stock names that you want to trade. ''' return ['AAPL'] def getInstrumentFeatureConfigDicts(self): ''' Specify all Features you want to use by creating config dictionaries. Create one dictionary per feature and return them in an array. Feature config Dictionary have the following keys: featureId: a str for the type of feature you want to use featureKey: {optional} a str for the key you will use to call this feature If not present, will just use featureId params: {optional} A dictionary with which contains other optional params if needed by the feature msDict = { 'featureKey': 'ms_5', 'featureId': 'moving_sum', 'params': { 'period': 5, 'featureName': 'basis' } } return [msDict] You can now use this feature by in getPRediction() calling it's featureKey, 'ms_5' ''' ma1Dict = { 'featureKey': 'ma_90', 'featureId': 'moving_average', 'params': { 'period': 90, 'featureName': 'adjClose' } } mom30Dict = { 'featureKey': 'mom_30', 'featureId': 'momentum', 'params': { 'period': 30, 'featureName': 'adjClose' } } mom10Dict = { 'featureKey': 'mom_10', 'featureId': 'momentum', 'params': { 'period': 10, 'featureName': 'adjClose' } } return [ma1Dict, mom10Dict, mom30Dict] def getPrediction(self, time, updateNum, instrumentManager, predictions): ''' Combine all the features to create the desired predictions for each stock. 'predictions' is Pandas Series with stock as index and predictions as values We first call the holder for all the instrument features for all stocks as lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures() Then call the dataframe for a feature using its feature_key as ms5Data = lookbackInstrumentFeatures.getFeatureDf('ms_5') This returns a dataFrame for that feature for ALL stocks for all times upto lookback time Now you can call just the last data point for ALL stocks as ms5 = ms5Data.iloc[-1] You can call last datapoint for one stock 'ABC' as value_for_abs = ms5['ABC'] Output of the prediction function is used by the toolbox to make further trading decisions and evaluate your score. ''' self.updateCount() # uncomment if you want a counter # holder for all the instrument features for all instruments lookbackInstrumentFeatures = instrumentManager.getLookbackInstrumentFeatures() def hurst_f(input_ts, lags_to_test=20): # interpretation of return value # hurst < 0.5 - input_ts is mean reverting # hurst = 0.5 - input_ts is effectively random/geometric brownian motion # hurst > 0.5 - input_ts is trending tau = [] lagvec = [] # Step through the different lags for lag in range(2, lags_to_test): # produce price difference with lag pp = np.subtract(input_ts[lag:].values, input_ts[:-lag].values) # Write the different lags into a vector lagvec.append(lag) # Calculate the variance of the differnce vector tau.append(np.sqrt(np.std(pp))) # linear fit to double-log graph (gives power) m = np.polyfit(np.log10(lagvec), np.log10(tau), 1) # calculate hurst hurst = m[0]*2 print(hurst) return hurst # dataframe for a historical instrument feature (ma_90 in this case). The index is the timestamps # of upto lookback data points. The columns of this dataframe are the stock symbols/instrumentIds. 
mom10Data = lookbackInstrumentFeatures.getFeatureDf('mom_10') mom30Data = lookbackInstrumentFeatures.getFeatureDf('mom_30') ma90Data = lookbackInstrumentFeatures.getFeatureDf('ma_90') # Here we are making predictions on the basis of Hurst exponent if enough data is available, otherwise # we simply get out of our position if len(ma90Data.index)>20: mom30 = mom30Data.iloc[-1] mom10 = mom10Data.iloc[-1] ma90 = ma90Data.iloc[-1] # Calculate Hurst Exponent hurst = ma90Data.apply(hurst_f, axis=0) # Go long if Hurst > 0.5 and both long term and short term momentum are positive predictions[(hurst > 0.5) & (mom30 > 0) & (mom10 > 0)] = 1 # Go short if Hurst > 0.5 and both long term and short term momentum are negative predictions[(hurst > 0.5) & (mom30 <= 0) & (mom10 <= 0)] = 0 # Get out of position if Hurst > 0.5 and long term momentum is positive while short term is negative predictions[(hurst > 0.5) & (mom30 > 0) & (mom10 <= 0)] = 0.5 # Get out of position if Hurst > 0.5 and long term momentum is negative while short term is positive predictions[(hurst > 0.5) & (mom30 <= 0) & (mom10 > 0)] = 0.5 # Get out of position if Hurst < 0.5 predictions[hurst <= 0.5] = 0.5 else: # If no sufficient data then don't take any positions predictions.values[:] = 0.5 return predictions def updateCount(self): self.count = self.count + 1
courses/ai-for-finance/solution/momentum_using_hurst_solution.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Create an example sales report

TODO(you): Write a query that shows key sales stats for each item sold from the Catalog and execute it here:
- total orders
- total unit quantity
- total revenue
- total profit
- sorted by total orders highest to lowest, limit 10
%%bigquery --verbose
--TODO:
SELECT
FROM `qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
LIMIT 10
quests/bq-teradata/01_teradata_bq_essentials/labs/bigquery_essentials_for_teradata_users.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
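For reference, a hedged sketch of one possible answer to the TODO above. The column names are assumed to follow the standard TPC-DS catalog_sales schema (cs_item_sk, cs_order_number, cs_quantity, cs_ext_sales_price, cs_net_profit) and may need adjusting for this dataset:

%%bigquery --verbose
SELECT
  cs_item_sk AS item,
  COUNT(DISTINCT cs_order_number) AS total_orders,
  SUM(cs_quantity) AS total_quantity,
  SUM(cs_ext_sales_price) AS total_revenue,
  SUM(cs_net_profit) AS total_profit
FROM `qwiklabs-resources.tpcds_2t_baseline.catalog_sales`
GROUP BY item
ORDER BY total_orders DESC
LIMIT 10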
Figure 1. Behavior of the median filter with given window length and different S/N ratio.
# Calculate and save all values. # Because the for loop doesn't count from 1 to 10 for example, # we need a counter to iterate through the array. # The counter is assigne to -1, so we can iterate from 0 to len(values) count = -1 count2 = -1 values = np.zeros((len(sn), len(wl))) for w in wl[:11]: count = count + 1 for x in sn: count2 = count2 + 1 for i in range (len(diff_noise)): # Create different noises, with x we change the signal to noise # ratio from 10 to 1. diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data)) # Add noise to each sine wave, to create a realisitc signal. noised_sines[i, :] = data + diff_noise[i, :] # Filter the all noised sine waves. medfilter[i, :] = medfilt(noised_sines[i, :], w) # Subtract the filtered wave from the noised sine waves. filtered_sines[i, :] = noised_sines[i, :] - medfilter[i, :] # Calculate the root mean square (RMS) of each sine wave behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :]))) # Calculate the mean of the bahvior, so we can see how # the signal to noise ratio effects the median filter # with different window lengths. mean = np.mean(behav) # Save the result in the 'values' array values[count2:count2+1:,count] = mean # Set coun2 back to -1, so we can iterate again from 0 to len(values). # Otherwise the counter would get higher and is out of range. count2 = - 1 # Save the array, because the calculation take some time. # Load the array with "values = np.loadtxt('values.txt')". np.savetxt("values.txt", values) values = np.loadtxt("values.txt") viridis_data = np.loadtxt('viridis_data.txt') plasma_data = np.loadtxt('plasma_data.txt') # viris_data and plasma_data taken from # https://github.com/BIDS/colormap/blob/master/colormaps.py fig = plt.figure(figsize=(20, 7)) for p in range(0,11): ax = plt.subplot(2, 5, p) plt.axis([0, 11, 0, 1.5]) plt.plot(sn,values[:,p], 'o-', color=viridis_data[(p*25)-25,:]) plt.savefig('Behavior with given SN ratio and different wl.png',dpi=300) fig = plt.figure() values3 = np.zeros((len(sn),len(wl))) for p in range(6): ax = plt.subplot() values3[:,p] = values[::,p]/0.7069341 plt.axis([0, 11, 0, 2]) plt.ylabel('Normalized RMS', size = 14) plt.xlabel('S/N Ratio', size = 14) plt.hlines(1,1,10, color = 'b', linestyle = '--') plt.plot(sn,values3[:,p], color=plasma_data[(p*40),:]) plt.savefig('Behavior with given SN ratio and different wl3.png',dpi=300)
MedianFilter/Python/06. Behavior of the median filter with white noise/.ipynb_checkpoints/Behavior of the median filter with a noised sine wave-checkpoint.ipynb
ktakagaki/kt-2015-DSPHandsOn
gpl-2.0
Figure 2.1: Behavior of the median filter with given window length and different S/N ratio
# Alternative values = np.zeros((len(wl), len(sn))) count = -1 count2 = -1 for x in sn: count = count + 1 for w in wl: count2 = count2 + 1 for i in range (len(diff_noise)): diff_noise[i, :] = np.random.normal(0, 0.706341266/np.sqrt(x), len(data)) noised_sines[i, :] = data + diff_noise[i, :] medfilter[i, :] = medfilt(noised_sines[i, :], w) filtered_sines[i, :] = data - medfilter[i, :] behav[i] = np.sqrt(np.mean(np.square(filtered_sines[i, :]))) mean = np.mean(behav) values[count2:count2+1:,-count] = mean count2 = -1 np.savetxt("values2A.txt", values) values2A = np.loadtxt("values2A.txt") fig = plt.figure() values4 = np.zeros((len(wl), len(sn))) for i in range (11): # Normalize the RMS with the RMS of a normal sine wave values4[::,i] = values2A[::,i]/0.7069341 ax = plt.subplot() plt.axis([0, 450, 0, 2]) # Set xticks at each 64th point xticks = np.arange(0, max(wl) + 1, 64) ax.set_xticks(xticks) # x_labe = pi at each 64th point x_label = [r"${%s\pi}$" % (v) for v in range(len(xticks))] ax.set_xticklabels(x_label) plt.ylabel('Normalized RMS', size = 14) plt.xlabel('Window length', size = 14) plt.plot(wl,values4[::,i], color=viridis_data[(i*25)-25,:]) plt.hlines(1,1,max(wl), color = 'b', linestyle = '--') plt.savefig('Behavior with given wl and different SN ratio2A.png',dpi=300)
MedianFilter/Python/06. Behavior of the median filter with white noise/.ipynb_checkpoints/Behavior of the median filter with a noised sine wave-checkpoint.ipynb
ktakagaki/kt-2015-DSPHandsOn
gpl-2.0
Uniform random variables are super important because they are the basis from which we generate other random variables, such as binomial, normal, exponential etc. Discussion We could generate real random numbers by accessing, for example, noise on the ethernet network device but that noise might not be uniformly distributed. We typically generate pseudorandom numbers that aren't really random but look like they are. From Ross' Simulation book, we see a very easy recursive mechanism (recurrence relation) that generates values in $[0,1)$ using the modulo (remainder) operation: $x_{i+1} = a x_i$ modulo $m$ That is recursive (or iterative and not closed form) because $x_i$ is a function of a prior value: $x_1 = ax_0$ modulo $m$<br> $x_2 = ax_1$ modulo $m$<br> $x_3 = ax_2$ modulo $m$<br> $x_4 = ax_3$ modulo $m$<br> $...$ Ross indicates that the $x_i$ values are in [0,m-1] but setting any $x_i=0$ renders all subsequent $x_i=0$, so we should avoid that. Practically speaking, then, this method returns values in (0,1). To get random numbers in [0,1) from $x_i$, we use $x_i / m$. We must pick a value for $a$ and $m$ that make $x_i$ seem random. Ross suggests choosing a large prime number for $m$ that fits in our integer word size, e.g., $m = 2^{31} - 1$, and $a = 7^5 = 16807$. Initially we set a value for $x_0$, called the random seed (it is not the first random number). Every seed leads to a different sequence of pseudorandom numbers. (In Python, you can set the seed of the standard library by using random.seed([x]).) Python implementation Our goal is to take that simple recursive formula and use it to generate uniform random numbers. Function runif01() returns a new random value for every call. Use $m = 2^{31} - 1$, $a = 7^5 = 16807$, and an initial seed of $x_0 = 666$.
a = 16807
m = pow(2,31)-1
DFLT_SEED = 666
x_i = DFLT_SEED  # this is our x_i that changes each runif01() call

def runif01():
    "Return a random value in U(0,1)"
    global x_i
    x_i = a * x_i % m
    # display(callsviz(varnames=['a','m','x_i']))
    return x_i / float(m)
notes/random-uniform.ipynb
parrt/msan501
mit
Notice that x_i lives in the global scope, not inside the runif01() function's scope.
from lolviz import callsviz

runif01()
notes/random-uniform.ipynb
parrt/msan501
mit
Let's try it out:
[runif01() for i in range(4)]
notes/random-uniform.ipynb
parrt/msan501
mit
Exercise

Define a new function, runif(a,b), that generates a random number in [a,b) instead of [0,1). Hint: we need to scale and shift a random uniform value in [0,1). Note: you can't use random.random() or any other built-in random number generators for this lab.

```python
def runif(a,b):
    "Return a random value in U(a,b)"
    ...
```
def runif(a,b):
    "Return a random value in U(a,b)"
    if b<a: # swap
        t = a
        a = b
        b = t
    return runif01()*(b-a) + a

print([runif(0,10) for i in range(3)])
print([runif(5,6) for i in range(3)])
notes/random-uniform.ipynb
parrt/msan501
mit
Exercise

Define a new function, setseed(x), that updates the seed global variable.

```python
def setseed(s):
    "Update the seed global variable but ensure seed > 0"
    ...
```

This test sequence:

```python
setseed(501)
print(runif01())
print(runif01())
print(runif01())
```

should generate:

```
0.00392101099897
0.900431859726
0.558266419712
```
def setseed(s):
    "Update the seed global variable but ensure seed > 0"
    global x_i
    if s <= 0:
        s = 666
    x_i = s

setseed(501)
print([runif01() for i in range(3)])
print([runif(5,6) for i in range(3)])
notes/random-uniform.ipynb
parrt/msan501
mit
Random variable density function estimate Jumping ahead a bit, we can use the histogram plotting example from Manipulating and Visualizing Data as a crude form of density estimation to verify that the distribution of random values is approximately uniform:
import matplotlib.pyplot as plt
# jupyter notebook command (ignore)
%matplotlib inline

sample = [runif01() for i in range(5000)]  # Get 5000 random variables

plt.figure(figsize=(4, 1.5))
plt.hist(sample, bins=10, density=True, alpha=0.3)
plt.xlabel('Random value from U(0,1)')
plt.ylabel('Probability')
plt.show()
notes/random-uniform.ipynb
parrt/msan501
mit
Pivot tables ~ pivot_table
df = pd.read_csv('data/_all_atlogger2.csv')
df.head()
#df.columns
df.plot(kind='area', stacked=False)
df.plot(kind='kde', stacked=False)

import pandas as pd
import matplotlib.pyplot as plt

ls data/pom_*

df = pd.read_csv('data/pom_1302.txt')
df.head()
R2Py/chaos.ipynb
OpenBookProjects/ipynb
mit
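The heading above mentions pivot_table, but the cell never actually calls it. A minimal, hypothetical sketch of pandas pivot_table usage (the DataFrame and column names here are made up):

import pandas as pd

demo = pd.DataFrame({
    'month': ['Jan', 'Jan', 'Feb', 'Feb'],
    'site':  ['A', 'B', 'A', 'B'],
    'value': [1.0, 2.0, 3.0, 4.0],
})

# One row per month, one column per site, mean of the values in each cell
pivot = demo.pivot_table(index='month', columns='site', values='value', aggfunc='mean')
print(pivot)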
Step 2. Reading the time series The next step is to import the time series data. Three series are used in this example; the observed groundwater head, the rainfall and the evaporation. The data can be read using different methods, in this case the Pandas read_csv method is used to read the csv files. Each file consists of two columns; a date column called 'Date' and a column containing the values for the time series. The index column is the first column and is read as a date format. The heads series are stored in the variable obs, the rainfall in rain and the evaporation in evap. All variables are transformed to SI-units.
obs = pd.read_csv('obs.csv', index_col='Date', parse_dates=True) * 0.3048
rain = pd.read_csv('rain.csv', index_col='Date', parse_dates=True) * 0.3048
rain = rain.asfreq("D", fill_value=0.0)  # There are some nan-values present
evap = pd.read_csv('evap.csv', index_col='Date', parse_dates=True) * 0.3048
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
7. Diagnosing the noise series The diagnostics plot can be used to assess how well the noise follows a normal distribution and whether it suffers from autocorrelation.
ml.plots.diagnostics()
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make plots for publication

In the next code blocks the figures used in the Pastas paper are created. The first code block sets the matplotlib parameters to obtain publication-quality figures. The following figures are created:
- Figure of the impulse and step response for the scaled Gamma response function
- Figure of the stresses used in the model
- Figure of the model fit and the step response
- Figure of the model fit as returned by Pastas
- Figure of the model residuals and noise
- Figure of the autocorrelation function
# Set matplotlib params to create publication figures
params = {
    'axes.labelsize': 18,
    'axes.grid': True,
    'font.size': 16,
    'font.family': 'serif',
    'legend.fontsize': 16,
    'xtick.labelsize': 16,
    'ytick.labelsize': 16,
    'text.usetex': False,
    'figure.figsize': [8.2, 5],
    'lines.linewidth': 2
}
plt.rcParams.update(params)

# Save figures or not
savefig = True
figpath = "figures"
if not os.path.exists(figpath):
    os.mkdir(figpath)
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make a plot of the impulse and step response for the Gamma and Hantush functions
rfunc = ps.Gamma(cutoff=0.999) p = [100, 1.5, 15] b = np.append(0, rfunc.block(p)) s = rfunc.step(p) rfunc2 = ps.Hantush(cutoff=0.999) p2 = [-100, 4, 15] b2 = np.append(0, rfunc2.block(p2)) s2 = rfunc2.step(p2) # Make a figure of the step and block response fig, [ax1, ax2] = plt.subplots(1, 2, sharex=True, figsize=(8, 4)) ax1.plot(b) ax1.plot(b2) ax1.set_ylabel("block response") ax1.set_xlabel("days") ax1.legend(["Gamma", "Hantush"], handlelength=1.3) ax1.axhline(0.0, linestyle="--", c="k") ax2.plot(s) ax2.plot(s2) ax2.set_xlim(0,100) ax2.set_ylim(-105, 105) ax2.set_ylabel("step response") ax2.set_xlabel("days") ax2.axhline(0.0, linestyle="--", c="k") ax2.annotate('', xy=(95, 100), xytext=(95, 0), arrowprops={'arrowstyle': '<->'}) ax2.annotate('A', xy=(95, 100), xytext=(85, 50)) ax2.annotate('', xy=(95, -100), xytext=(95, 0), arrowprops={'arrowstyle': '<->'}) ax2.annotate('A', xy=(95, 100), xytext=(85, -50)) plt.tight_layout() if savefig: path = os.path.join(figpath, "impuls_step_response.eps") plt.savefig(path, dpi=300, bbox_inches="tight")
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make a plot of the stresses used in the model
fig, [ax1, ax2, ax3] = plt.subplots(3,1, sharex=True, figsize=(8, 7)) ax1.plot(obs, 'k.',label='obs', markersize=2) ax1.set_ylabel('head (m)', labelpad=0) ax1.set_yticks([-4, -3, -2]) plot_rain = ax2.plot(rain * 1000, color='k', label='prec', linewidth=1) ax2.set_ylabel('rain (mm/d)', labelpad=-5) ax2.set_xlabel('Date'); ax2.set_ylim([0,150]) ax2.set_yticks(np.arange(0, 151, 50)) plot_evap = ax3.plot(evap * 1000,'k', label='evap', linewidth=1) ax3.set_ylabel('evap (mm/d)') ax3.tick_params('y') ax3.set_ylim([0,8]) plt.xlim(['2003','2019']) plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center') ax2.set_xlabel("") ax3.set_xlabel("year") if savefig: path = os.path.join(figpath, "data_example_1.eps") plt.savefig(path, bbox_inches='tight', dpi=300)
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make a custom figure of the model fit and the estimated step response
# Create the main plot fig, ax = plt.subplots(figsize=(16,5)) ax.plot(obs, marker=".", c="grey", linestyle=" ") ax.plot(obs.loc[:"2013":14], marker="x", markersize=7, c="C3", linestyle=" ", mew=2) ax.plot(ml.simulate(tmax="2019"), c="k") plt.ylabel('head (m)') plt.xlabel('year') plt.title("") plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center') plt.xlim('2003', '2019') plt.ylim(-4.7, -1.6) plt.yticks(np.arange(-4, -1, 1)) # Create the arrows indicating the calibration and validation period ax.annotate("calibration period", xy=("2003-01-01", -4.6), xycoords='data', xytext=(300, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("", xy=("2014-01-01", -4.6), xycoords='data', xytext=(-230, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("validation", xy=("2014-01-01", -4.6), xycoords='data', xytext=(150, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") ax.annotate("", xy=("2019-01-01", -4.6), xycoords='data', xytext=(-85, 0), textcoords='offset points', arrowprops=dict(arrowstyle="->"), va="center", ha="center") plt.legend(["observed head", "used for calibration","simulated head"], loc=2, numpoints=3) # Create the inset plot with the step response ax2 = plt.axes([0.66, 0.65, 0.22, 0.2]) s = ml.get_step_response("recharge") ax2.plot(s, c="k") ax2.set_ylabel("response") ax2.set_xlabel("days", labelpad=-15) ax2.set_xlim(0, s.index.size) ax2.set_xticks([0, 300]) if savefig: path = os.path.join(figpath, "results.eps") plt.savefig(path, bbox_inches='tight', dpi=300)
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make a figure of the fit report
from matplotlib.font_manager import FontProperties

font = FontProperties()
#font.set_size(10)
font.set_weight('normal')
font.set_family('monospace')
font.set_name("courier new")

plt.text(-1, -1, str(ml.fit_report()), fontproperties=font)
plt.axis('off')
plt.tight_layout()

if savefig:
    path = os.path.join(figpath, "fit_report.eps")
    plt.savefig(path, bbox_inches='tight', dpi=600)
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Make a Figure of the noise, residuals and autocorrelation
fig, ax1 = plt.subplots(1,1, figsize=(8, 3)) ml.residuals(tmax="2019").plot(ax=ax1, c="k") ml.noise(tmax="2019").plot(ax=ax1, c="C0") plt.xticks([str(x) for x in np.arange(2004, 2019, 2)], rotation=0, horizontalalignment='center') ax1.set_ylabel('(m)') ax1.set_xlabel('year') ax1.legend(["residuals", "noise"], ncol=2) if savefig: path = os.path.join(figpath, "residuals.eps") plt.savefig(path, bbox_inches='tight', dpi=300) fig, ax2 = plt.subplots(1,1, figsize=(9, 2)) n =ml.noise() conf = 1.96 / np.sqrt(n.index.size) acf = ps.stats.acf(n) ax2.axhline(conf, linestyle='--', color="dimgray") ax2.axhline(-conf, linestyle='--', color="dimgray") ax2.stem(acf.index, acf.values) ax2.set_ylabel('ACF (-)') ax2.set_xlabel('lag (days)') plt.xlim(0, 370) plt.ylim(-0.25, 0.25) plt.legend(["95% confidence interval"]) if savefig: path = os.path.join(figpath, "acf.eps") plt.savefig(path, bbox_inches='tight', dpi=300) h, test = ps.stats.ljung_box(ml.noise()) print("The hypothesis that there is significant autocorrelation is:", h) test
examples/groundwater_paper/Ex1_simple_model/Example1.ipynb
gwtsa/gwtsa
mit
Get Field and Sample Blocks AOIs
def load_geojson(filename):
    with open(filename, 'r') as f:
        return json.load(f)

# this feature comes from within the sacramento_crops aoi
# it is the first feature in 'ground-truth-test.geojson', which
# was prepared in crop-classification/datasets-prepare.ipynb
field_filename = os.path.join('pre-data', 'field.geojson')
field = load_geojson(field_filename)
pprint(field)

# visualize field and determine size in acres
print('{} acres'.format(field['properties']['ACRES']))
field_aoi = field['geometry']
shape(field_aoi)

# visualize field and sample blocks
# these blocks were drawn by hand randomly for this demo
# they don't actually represent test field blocks
blocks = load_geojson(os.path.join('pre-data', 'blocks.geojson'))
block_aois = [b['geometry'] for b in blocks]
MultiPolygon([shape(a) for a in [field_aoi] + block_aois])
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Part 2: Get Field NDVI In this section, we use the Data and Orders APIs to find images that overlap the field AOI in the specified time period and then to download the NDVI values of pixels within the field for all of the images. Once the images are downloaded, we use the UDM2 asset to filter to images that have no unusable pixels within the AOI. Finally, we get to check out what the NDVI of the field looks like! Step 1: Search Data API The goal of this step is to get the scene ids that meet the search criteria for this use case.
# if your Planet API Key is not set as an environment variable, you can paste it below API_KEY = os.environ.get('PL_API_KEY', 'PASTE_YOUR_KEY_HERE') client = api.ClientV1(api_key=API_KEY) # create an api request from the search specifications # relax the cloud cover requirement as filtering will be done within the aoi def build_request(aoi_geom, start_date, stop_date): '''build a data api search request for clear PSScene imagery''' query = filters.and_filter( filters.geom_filter(aoi_geom), filters.date_range('acquired', gt=start_date), filters.date_range('acquired', lt=stop_date) ) return filters.build_search_request(query, ['PSScene']) def search_data_api(request, client, limit=500): result = client.quick_search(request) # this returns a generator return result.items_iter(limit=limit) # define test data for the filter test_start_date = datetime.datetime(year=2019,month=4,day=1) test_stop_date = datetime.datetime(year=2019,month=5,day=1) request = build_request(field_aoi, test_start_date, test_stop_date) print(request) items = list(search_data_api(request, client)) print('{} images match the search criteria.'.format(len(items))) # uncomment to see what an item looks like # pprint(items[0])
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Now that we have found the images that match the search criteria, let's make sure all of the images fully contain the field AOI (we don't want to just get half of the field) and then let's see what the image footprint and the AOI look like together.
footprints = [shape(i['geometry']) for i in items]

# make sure all footprints contain the field aoi (that is, no partial overlaps)
for f in footprints:
    assert f.contains(shape(field_aoi))

# visualize aoi and footprint
MultiPolygon([shape(field_aoi), footprints[0]])
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Whoa look! That AOI is tiny relative to the image footprint. We don't want to wrangle all those pixels outside of the AOI. We definitely want to clip the imagery footprints to the AOI.

Step 2: Submit Order

Now that we have the scene ids, we can create the order. The output of this step is a single zip file that contains all of the scenes that meet our criteria. The tools we want to apply are: clip imagery to AOI and convert imagery to NDVI.

Step 2.1: Define Toolchain Tools
def get_tools(aoi_geom):
    # clip to AOI
    clip_tool = {'clip': {'aoi': aoi_geom}}

    # convert to NDVI
    ndvi_tool = {'bandmath': {
        "pixel_type": "32R",
        "b1": "(b4 - b3) / (b4+b3)"
    }}

    tools = [clip_tool, ndvi_tool]
    return tools
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 2.2: Build Order Requests
def build_order(ids, name, aoi_geom): # specify the PSScene 4-Band surface reflectance product # make sure to get the *_udm2 bundle so you get the udm2 product # note: capitalization really matters in item_type when using planet client orders api item_type = 'PSScene' bundle = 'analytic_sr_udm2' orders_request = { 'name': name, 'products': [{ 'item_ids': ids, 'item_type': item_type, 'product_bundle': bundle }], 'tools': get_tools(aoi_geom), 'delivery': { 'single_archive': True, 'archive_filename':'{{name}}_{{order_id}}.zip', 'archive_type':'zip' }, 'notifications': { 'email': False }, } return orders_request # uncomment to see what an order request would look like # pprint(build_order(['id'], 'demo', test_aoi_geom), indent=4) ids = [i['id'] for i in items] name = 'pixels_to_tabular' order_request = build_order(ids, name, field_aoi)
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 2.3: Submit Order
def create_order(order_request, client):
    orders_info = client.create_order(order_request).get()
    return orders_info['id']

order_id = create_order(order_request, client)
order_id
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 3: Download Orders Step 3.1: Wait Until Orders are Successful Before we can download the orders, they have to be prepared on the server.
def poll_for_success(order_id, client, num_loops=50):
    count = 0
    while(count < num_loops):
        count += 1
        order_info = client.get_individual_order(order_id).get()
        state = order_info['state']
        print(state)
        success_states = ['success', 'partial']
        if state == 'failed':
            # raise with the order info (the original referenced an undefined name here)
            raise Exception(order_info)
        elif state in success_states:
            break
        time.sleep(10)

poll_for_success(order_id, client)
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 3.2: Run Download For this step we will use the planet python orders API because the CLI doesn't do a complete download with large orders.
data_dir = os.path.join('data', 'field_statistical_analysis') # make the download directory if it doesn't exist Path(data_dir).mkdir(parents=True, exist_ok=True) def poll_for_download(dest, endswith, num_loops=50): count = 0 while(count < num_loops): count += 1 matched_files = (f for f in os.listdir(dest) if os.path.isfile(os.path.join(dest, f)) and f.endswith(endswith)) match = next(matched_files, None) if match: match = os.path.join(dest, match) print('downloaded') break else: print('waiting...') time.sleep(10) return match def download_order(order_id, dest, client, limit=None): '''Download an order by given order ID''' # this returns download stats but they aren't accurate or informative # so we will look for the downloaded file on our own. dl = downloader.create(client, order=True) urls = client.get_individual_order(order_id).items_iter(limit=limit) dl.download(urls, [], dest) endswith = '{}.zip'.format(order_id) filename = poll_for_download(dest, endswith) return filename downloaded_file = download_order(order_id, data_dir, client) downloaded_file
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 4: Unzip Order In this section, we will unzip the order into a directory named after the downloaded zip file.
def unzip(filename, overwrite=False): location = Path(filename) zipdir = location.parent / location.stem if os.path.isdir(zipdir): if overwrite: print('{} exists. overwriting.'.format(zipdir)) shutil.rmtree(zipdir) else: raise Exception('{} already exists'.format(zipdir)) with ZipFile(location) as myzip: myzip.extractall(zipdir) return zipdir zipdir = unzip(downloaded_file) zipdir def get_unzipped_files(zipdir): filedir = zipdir / 'files' filenames = os.listdir(filedir) return [filedir / f for f in filenames] file_paths = get_unzipped_files(zipdir) file_paths[0]
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 5: Filter by Cloudiness In this section, we will filter images that have any clouds within the AOI. We use the Unusable Data Mask (UDM2) to determine cloud pixels.
udm2_files = [f for f in file_paths if 'udm2' in str(f)] # we want to find pixels that are inside the footprint but cloudy # the easiest way to do this is is the udm values (band 8) # https://developers.planet.com/docs/data/udm-2/ # the UDM values are given in # https://assets.planet.com/docs/Combined-Imagery-Product-Spec-Dec-2018.pdf # Bit 0: blackfill (footprint) # Bit 1: cloud covered def read_udm(udm2_filename): with rasterio.open(udm2_filename) as img: # band 8 is the udm band return img.read(8) def get_cloudy_percent(udm_band): blackfill = udm_band == int('1', 2) footprint_count = udm_band.size - np.count_nonzero(blackfill) cloudy = udm_band.size - udm_band == int('10', 2) cloudy_count = np.count_nonzero(cloudy) return (cloudy_count / footprint_count) get_cloudy_percent(read_udm(udm2_files[0])) clear_udm2_files = [f for f in udm2_files if get_cloudy_percent(read_udm(f)) < 0.00001] print(len(clear_udm2_files)) def get_id(udm2_filename): return udm2_filename.name.split('_3B')[0] clear_ids = [get_id(f) for f in clear_udm2_files] clear_ids[0]
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 6: Get Clear Images
def get_img_path(img_id, file_paths):
    filename = '{}_3B_AnalyticMS_SR_clip_bandmath.tif'.format(img_id)
    return next(f for f in file_paths if f.name == filename)

def read_ndvi(img_filename):
    with rasterio.open(img_filename) as img:
        # ndvi is a single-band image
        band = img.read(1)
    return band

plot.show(read_ndvi(get_img_path(clear_ids[0], file_paths)))
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
The field AOI isn't an exact square so there are some blank pixels. Let's mask those out. We can use the UDM for that.
def get_udm2_path(img_id, file_paths): filename = '{}_3B_udm2_clip.tif'.format(img_id) return next(f for f in file_paths if f.name == filename) def read_blackfill(udm2_filename): with rasterio.open(udm2_filename) as img: # the last band is the udm band udm_band = img.read(8) blackfill = udm_band == int('1', 2) return blackfill plot.show(read_blackfill(get_udm2_path(clear_ids[0], file_paths))) # there is an issue where some udms aren't the same size as the images # to deal with this just cut off any trailing rows/columns # this isn't ideal as it can result in up to one pixel shift in x or y direction def crop(img, shape): return img[:shape[0], :shape[1]] def read_masked_ndvi(img_filename, udm2_filename): ndvi = read_ndvi(img_filename) blackfill = read_blackfill(udm2_filename) # crop image and mask to same size img_shape = min(ndvi.shape, blackfill.shape) ndvi = np.ma.array(crop(ndvi, img_shape), mask=crop(blackfill, img_shape)) return ndvi plot.show(read_masked_ndvi(get_img_path(clear_ids[0], file_paths), get_udm2_path(clear_ids[0], file_paths)))
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
That looks better! We now have the NDVI values for the pixels within the field AOI. Now, let's make that a little easier to generate.
def read_masked_ndvi_by_id(iid, file_paths):
    return read_masked_ndvi(get_img_path(iid, file_paths),
                            get_udm2_path(iid, file_paths))

plot.show(read_masked_ndvi_by_id(clear_ids[0], file_paths))
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
In the images above, we are just using the default visualization for the imagery. But this is NDVI imagery, with values between -1 and 1. Let's see how this looks if we use a visualization specific to NDVI.
# we demonstrated visualization in the best practices tutorial
# here, we save space by just importing the functionality
from visual import show_ndvi

# and here's what it looks like when we visualize as ndvi
# (data range -1 to 1). it actually looks worse because the
# pixel value range is so small
show_ndvi(read_masked_ndvi_by_id(clear_ids[0], file_paths))
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Well, the contrast has certainly gone down. This is because the NDVI values within the field are pretty uniform. That's what we would expect for a uniform field! So it is actually good news. The NDVI values are pretty low, ranging from 0.16 to just above 0.22. The time range used for this search is basically the month of April. This is pretty early in the growth season and so likely the plants are still tiny seedlings. So even the low NDVI value makes sense here.

Part 3: Sample Field Blocks

Ok, here is where we convert pixels to tabular data. We do this for one image, then we expand to doing this for all images in the time series. In this section, we want to sample the pixel values within each field block and put the values into a table. For this, we first need to identify the field block pixels. Next, we calculate the median, mean, variance, and random point value for each field block. We put those into a table. And at the end we visualize the results.

Step 1: Get Field Block Pixels

In this step, we find the pixel values that are associated with each field block. To get the field block pixels, we have to project the block geometries into the image coordinates. Then we create masks that just pull the field block pixels from the AOI.
def block_aoi_masks(block_aois, ref_img_path):
    # find the coordinate reference system of the image
    with rasterio.open(ref_img_path) as src:
        dst_crs = src.crs

        # geojson features (the field block geometries)
        # are always given in WGS84
        # project these to the image coordinates
        wgs84 = pyproj.CRS('EPSG:4326')
        project = pyproj.Transformer.from_crs(wgs84, dst_crs, always_xy=True).transform
        proj_block_aois = [transform(project, shape(b)) for b in block_aois]

        masks = [raster_geometry_mask(src, [b], crop=False)[0]
                 for b in proj_block_aois]
    return masks

ref_img_path = get_img_path(clear_ids[0], file_paths)
block_masks = block_aoi_masks(block_aois, ref_img_path)  # was called with an undefined `img` variable

ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths)

fig, ax = plt.subplots(2, 3, figsize=(15, 10))
axf = ax.flatten()
fig.delaxes(axf[-1])
for i, mask in enumerate(block_masks):
    ndvi.mask = mask
    plot.show(ndvi, ax=axf[i])
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Step 2: Random Sampling Summary statistics such as mean, mode, and variance will be easy to calculate with the numpy python package. We need to do a little work to get random sampling, however.
np.random.seed(0) # 0 - make random sampling repeatable, no arg - nonrepeatable def random_mask_sample(mask, count): # get shape of unmasked pixels unmasked = mask == False unmasked_shape = mask[unmasked].shape # uniformly sample pixel indices num_unmasked = unmasked_shape[0] idx = np.random.choice(num_unmasked, count, replace=False) # assign uniformly sampled indices to False (unmasked) random_mask = np.ones(unmasked_shape, dtype=np.bool) random_mask[idx] = False # reshape back to image shape and account for image mask random_sample_mask = np.ones(mask.shape, dtype=np.bool) random_sample_mask[unmasked] = random_mask return random_sample_mask # lets just check out how our random sampling performs ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths) ndvi.mask = random_mask_sample(ndvi.mask, 13) plot.show(ndvi) ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths) ndvi.mask = random_mask_sample(ndvi.mask, 1300) plot.show(ndvi)
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Ok, great! The first image shows what would result from sampling 13 pixels. The second image is for nearly all the pixels and demonstrates that the mask is taken into account with sampling. Now let's get down to calculating the summary statistics and placing them in a table entry.

Step 3: Prepare Table of Summary Statistics

Now that we have all the tools we need, we are ready to calculate summary statistics for each field block and put them into a table. We will calculate the median, mean, variance, and a single random point value for each field block.
def get_stats(ndvi, masks): def _get_stats(mask, block_number): block = np.ma.array(ndvi, mask=mask) mean = np.ma.mean(block) median = np.ma.median(block) var = np.ma.var(block) random_mask = random_mask_sample(block.mask, 1) random_val = np.ma.mean(np.ma.array(block, mask=random_mask)) return {'block': block_number, 'mean': mean, 'median': median, 'variance': var, 'random': random_val} data = [_get_stats(m, i) for i, m in enumerate(masks)] df = pd.DataFrame(data) return df ndvi = read_masked_ndvi_by_id(clear_ids[0], file_paths) get_stats(ndvi, block_masks)
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Okay! We have statistics for each block in a table. Yay! Okay, now let's move on to running this across a time series.

Step 4: Perform Time Series Analysis
def get_stats_by_id(iid, block_masks, file_paths): ndvi = read_masked_ndvi_by_id(iid, file_paths) ndvi_stats = get_stats(ndvi, block_masks) acquired = get_acquired(iid) ndvi_stats['acquired'] = [acquired]*len(block_masks) return ndvi_stats def get_acquired(iid): metadata_path = get_metadata(iid, file_paths) with open(metadata_path) as src: md = json.load(src) return md['properties']['acquired'] def get_metadata(img_id, file_paths): filename = '{}_metadata.json'.format(img_id) return next(f for f in file_paths if f.name == filename) get_stats_by_id(clear_ids[0], block_masks, file_paths) dfs = [get_stats_by_id(i, block_masks, file_paths) for i in clear_ids] all_stats = pd.concat(dfs) all_stats
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Okay! We have 165 rows, which is (number of blocks) x (number of images). It all checks out. Let's check out these stats in some plots! In these plots, color indicates the blocks. The blocks are colored red, blue, green, black, and purple. The x axis is acquisition time. So each 'column' of colored dots is the block statistic value for a given image.
colors = {0: 'red', 1: 'blue', 2: 'green', 3: 'black', 4: 'purple'}
df = all_stats
stats = ['mean', 'median', 'random', 'variance']

fig, axes = plt.subplots(2, 2, sharex=True, figsize=(15, 15))
# print(dir(axes[0][0]))
for stat, ax in zip(stats, axes.flatten()):
    ax.scatter(df['acquired'], df[stat],
               c=df['block'].apply(lambda x: colors[x]))
    ax.set_title(stat)
    plt.sca(ax)
    plt.xticks(rotation=90)
plt.show()
jupyter-notebooks/pixels-to-tabular-data/field_statistical_analysis.ipynb
planetlabs/notebooks
apache-2.0
Fetch the data and load it in pandas
local_filename = 'data/train.csv'

# Open file and print the first 3 lines
with open(local_filename) as fid:
    for line in fid.readlines()[:3]:
        print(line)

data = pd.read_csv(local_filename)
data.head()
data.shape
data.describe()
data.hist(figsize=(10, 10), bins=50, layout=(3, 2));
sns.pairplot(data);
rampwf/tests/kits/iris_old/iris_old_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
Local testing (before submission) It is <b><span style="color:red">important that you test your submission files before submitting them</span></b>. For this we provide a unit test. Note that the test runs on your files in submissions/starting_kit. First pip install ramp-workflow or install it from the github repo. Make sure that the python file classifier.py is in the submissions/starting_kit folder, and the data train.csv and test.csv are in data. Then run ramp_test_submission. If it runs and prints training and test errors on each fold, then you can submit the code.
!ramp_test_submission
rampwf/tests/kits/iris_old/iris_old_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
Alternatively, load and execute rampwf.utils.testing.py, and call assert_submission. This may be useful if you would like to understand how we instantiate the workflow, the scores, the data connectors, and the cross validation scheme defined in problem.py, and how we insert and train/test your submission.
# %load https://raw.githubusercontent.com/paris-saclay-cds/ramp-workflow/master/rampwf/utils/testing.py
# assert_submission()
rampwf/tests/kits/iris_old/iris_old_starting_kit.ipynb
paris-saclay-cds/ramp-workflow
bsd-3-clause
All the above commands use the pandas module for Python. I still don't know how to plot against time yet. Another option to look at is Yahoo's finance package.
goog? goog.T goog.High goog goog.columns goog.plot.im_self plt.plot(goog.index,goog['High']); %matplotlib qt plt.plot(goog.index,goog['High']); plt.plot(goog.index,goog['Low']); %matplotlib inline plt.plot(goog.index,goog['High'],goog.index,goog['Low']) plt.grid() goog.High[0] a=[]; for i in range(1,len(goog)): if(goog.High[i]<goog.High[i-1]): a.append(i) plt.plot(a) a[0:10] b=[]; for i in range(1,len(goog)): if(goog.High[i]<=goog.High[i-1]): b.append(i) b[0:10] len(a) len(b) len(a) len(b) a=np.array(a) b=np.array(b) plt.plot(a-b)
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
So both outputs, whether I use <= or <, are the same for this set of data.

Experiment with Drawing a Darvas Box on Google Data
c=[]; for i in range(1,len(goog)): if((goog.High[i]<=goog.High[i-1]) and (goog.Low[i]>=goog.Low[i-1])): c.append(i) len(c) a_diff = np.diff(a) %matplotlib qt plt.plot(a_diff) idx = 0; d = [] for i in range(1,len(goog)): if(goog.High[i]<=goog.High[i-1]): idx = idx + 1; else: idx = 0; if idx==3: d.append(i) len(d) np.transpose(b) goog.High[4:7] np.transpose(d) goog.High[17:20] goog.High[8] goog.Low[4:7] class box(object): high = [] low = [] box1 = box() box1.high.append(300) box1.high xx = goog.High[4:7] xx.max() class box(object): high = [] low = [] box1 = box() idx = 0; d = [] index = 0; for i in range(1,len(goog)): if(goog.High[i]<=goog.High[i-1]): idx = idx + 1; else: idx = 0; if idx==3: d.append(i) high_vals = goog.High[i-3:i] low_vals = goog.Low[i-3:i] box1.high.append(high_vals.max()) box1.low.append(low_vals.min()) box1 box1.high len(box1.high)
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
So, the good thing here is that the number of elements in box1.high is the same as we expect
len(box1.low)
np.argmin(goog.Low[4:7])
goog.Low[4:7]
class box(object):
    high = []
    low = []
    date_high = []
    date_low = []
box1 = box()
idx = 0;
d = []
index = 0;
for i in range(1,len(goog)):
    if(goog.High[i]<=goog.High[i-1]):
        idx = idx + 1;
    else:
        idx = 0;
    if idx==3:
        d.append(i)
        high_vals = goog.High[i-3:i]
        low_vals = goog.Low[i-3:i]
        box1.high.append(high_vals.max())
        box1.low.append(low_vals.min())
        box1.date_high.append(goog.index[i-3])
        box1.date_low.append(goog.index[i-3+np.argmin(low_vals)])
high = np.array(box1.high)
low = np.array(box1.low)
date_high = np.array(box1.date_high)
date_low = np.array(box1.date_low)
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
import matplotlib.patches as patches
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
So, clearly the rectangle is not working yet. Need to work that out
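For reference, here is a minimal sketch of one way such a box could be drawn on date axes. It assumes the box1 lists built above, uses a plain Rectangle patch rather than the FancyBboxPatch tried below, and converts the pandas Timestamps to matplotlib date numbers first (the conversion method name may differ slightly across pandas versions):

import matplotlib.dates as mdates
from matplotlib.patches import Rectangle

ax = plt.subplot(111)
ax.plot(goog.index, goog.High, goog.index, goog.Low)
x0 = mdates.date2num(box1.date_high[1].to_pydatetime())   # left edge as a matplotlib date number
x1 = mdates.date2num(box1.date_low[1].to_pydatetime())
rect = Rectangle((min(x0, x1), box1.low[1]),               # (x, y) of the lower-left corner
                 abs(x1 - x0), box1.high[1] - box1.low[1], # width in days, height in price
                 fill=False, edgecolor='k')
ax.add_patch(rect)
ax.autoscale()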
from matplotlib.patches import FancyBboxPatch

def draw_bbox(ax, bb):
    # boxstyle=square with pad=0, i.e. bbox itself.
    p_bbox = FancyBboxPatch((bb.xmin, bb.ymin),
                            abs(bb.width), abs(bb.height),
                            boxstyle="square,pad=0.",
                            ec="k", fc="none", zorder=10.,
                            )
    ax.add_patch(p_bbox)

import matplotlib.transforms as mtransforms
bb = mtransforms.Bbox([[0.3, 0.4], [0.7, 0.6]])
ax = plt.subplot(111)
draw_bbox(ax,bb)
plt.draw()
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
plt.clf()
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
plt.close()
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
plt.subplot(111)
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
plt.draw()
box1
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
Things to figure out about plotting:
- How to close/clf a plot successfully so that I can draw a new plot?
- How to get current axes for a given plot?
- How to do hold on and hold off?
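For reference, a minimal sketch of the usual answers to these questions with the standard pyplot API (the plotted values are placeholders):

import matplotlib.pyplot as plt

plt.plot([1, 2, 3])      # successive plot calls draw into the same axes,
plt.plot([3, 2, 1])      # i.e. "hold on" is the default behaviour in matplotlib
ax = plt.gca()           # get the current axes of the current figure
fig = plt.gcf()          # get the current figure
plt.cla()                # clear only the current axes ("hold off" and start fresh)
plt.clf()                # clear the whole current figure
plt.close(fig)           # close a specific figure window
plt.close('all')         # close every open figure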
%matplotlib inline
bb = mtransforms.Bbox([[0.3, 0.4], [0.7, 0.6]])
ax = plt.subplot(111)
draw_bbox(ax,bb)
bb.corners
ax = plt.subplot(111)
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
draw_bbox(ax,bb)
%matplotlib qt
ax = plt.subplot(111)
plt.plot(date_high,high,'o',goog.index,goog.High,date_low,low,'ro',goog.index,goog.Low);
draw_bbox(ax,bb)
bb = mtransforms.Bbox([[box1.date_high[1], box1.high[1]], [box1.date_low[1], box1.low[1]]])
box1.date_high[1]
datetime.now()
datetime.ctime?
datetime.ctime(datetime.now())
datetime.ctime(box1.date_high[1])
import matplotlib.dates as mdates
mdates.date2num(datetime.fromtimestamp(box1.high[1]))
mdates.num2date(719162.7978846341)
datetime.timetuple(box1.date_high[1])
datetime.fromtimestamp(box1.high[1])
datetime.fromtimestamp(1)
mdates.datestr2num(datetime.ctime(box1.date_high[1]))
mdates.num2date(735345)
mdates.datestr2num(datetime.ctime(box1.date_high[]))
x = box1.date_high[1]
x.to_datetime()
mdates.date2num(x.to_datetime())
mdates.num2date(mdates.date2num(x.to_datetime()))
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
Time Stamps and Drawing Rectangles
The output of the read from finance.yahoo.com is the timestamp class. To work with rectangles, I need x values that indicate time, but rectangles and bboxes work only with numbers. So, I need to convert the timestamps I have into numbers. The way to do it is as follows:
- Pick out a timestamp from the list that I form from box1
- The timestamp itself has a function to_datetime() that needs to be used to convert from the timestamp type to a datetime object
- matplotlib has a module called dates that allows for converting the datetime type into a number
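A compact sketch of that conversion chain (assuming box1 holds pandas Timestamps as built in the cells above; note that on recent pandas versions the method is to_pydatetime() rather than to_datetime()):

import matplotlib.dates as mdates

ts = box1.date_high[1]               # a pandas Timestamp
dt = ts.to_pydatetime()              # Timestamp -> datetime.datetime
num = mdates.date2num(dt)            # datetime -> matplotlib date number (a float)
print(num, mdates.num2date(num))     # and back again, to sanity-check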
x_high = box1.date_high[1]
x_high = x_high.to_datetime()
x_high
mdates.date2num(x_high)
x_low = box1.date_low[1]
x_low = x_low.to_datetime()
mdates.date2num(x_low)
box1.low[1]
box1.high[1]
bb = mtransforms.Bbox([[np.float(x_high), np.float(box1.low[1])], [np.float(x_low), np.float(box1.high[1])]])
ax = plt.subplot(111)
draw_bbox(ax,bb)
type(box1.low[1])
type(np.float(box1.low[1]))
type(x_high)
x_low = mdates.date2num(x_low)
x_high = mdates.date2num(x_high)
bb = mtransforms.Bbox([[np.float(x_high), np.float(box1.low[1])], [np.float(x_low), np.float(box1.high[1])]])
ax = plt.subplot(111)
draw_bbox(ax,bb)
plt.show()
%matplotlib inline
draw_bbox(ax,bb)
bb
plt.show()
ax=plt.subplot(111)
bb = mtransforms.Bbox([[0.3,1.4],[0.4,1.5]])
bb = mtransforms.Bbox([[np.float(x_high), np.float(box1.low[1])], [np.float(x_low), np.float(box1.high[1])]])
draw_bbox(ax,bb)
ax.autoscale()
analysis/notebooks/darvasBoxExperiments.ipynb
vakilp/darvasBox
gpl-2.0
So now we know overfitting is a problem, but how do we fix it? One way is to throw more data at the problem. A simple rule of thumb, as presented by Yaser Abu-Mostafa in his excellent machine learning course available from Caltech, is that you should have at least 10 times as many examples as the degrees of freedom in your model. For us, since we have 9 weights that can change, we would need 90 observations, which we certainly don't have. Link to course: https://work.caltech.edu/telecourse.html Another popular and effective way to mitigate overfitting is to use a technique called regularization. One way to implement regularization is to add a term to our cost function that penalizes overly complex models. A simple but effective way to do this is to add the sum of the squares of our weights to our cost function; this way, models with larger weight magnitudes cost more. We'll need to normalize the other part of our cost function to ensure that the ratio of the two error terms does not change with respect to the number of examples. We'll introduce a regularization hyperparameter, lambda, that will allow us to tune the relative cost: higher values of lambda will impose bigger penalties for high model complexity.
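Written out, the regularized cost and its gradients (matching the code in the next cell, with N the number of training examples) look roughly like this:

$$J = \frac{1}{2N}\sum\left(y - \hat{y}\right)^2 + \frac{\lambda}{2}\left(\sum \big(W^{(1)}\big)^2 + \sum \big(W^{(2)}\big)^2\right)$$

$$\frac{\partial J}{\partial W^{(2)}} = \frac{(a^{(2)})^{T}\,\delta^{(3)}}{N} + \lambda W^{(2)}, \qquad \frac{\partial J}{\partial W^{(1)}} = \frac{X^{T}\,\delta^{(2)}}{N} + \lambda W^{(1)}$$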
#Regularization Parameter:
Lambda = 0.0001

#Need to make changes to costFunction and costFunctionPrim:
def costFunction(self, X, y):
    #Compute cost for given X,y, use weights already stored in class.
    self.yHat = self.forward(X)
    J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.lambd/2)*(sum(self.W1**2)+sum(self.W2**2))
    return J

def costFunctionPrime(self, X, y):
    #Compute derivative with respect to W and W2 for a given X and y:
    self.yHat = self.forward(X)

    delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
    #Add gradient of regularization term:
    dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.lambd*self.W2

    delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
    #Add gradient of regularization term:
    dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.lambd*self.W1

    return dJdW1, dJdW2

#New complete class, with changes:
class Neural_Network(object):
    def __init__(self, Lambda=0):
        #Define Hyperparameters
        self.inputLayerSize = 2
        self.outputLayerSize = 1
        self.hiddenLayerSize = 3

        #Weights (parameters)
        self.W1 = np.random.randn(self.inputLayerSize,self.hiddenLayerSize)
        self.W2 = np.random.randn(self.hiddenLayerSize,self.outputLayerSize)

        #Regularization Parameter:
        self.Lambda = Lambda

    def forward(self, X):
        #Propogate inputs though network
        self.z2 = np.dot(X, self.W1)
        self.a2 = self.sigmoid(self.z2)
        self.z3 = np.dot(self.a2, self.W2)
        yHat = self.sigmoid(self.z3)
        return yHat

    def sigmoid(self, z):
        #Apply sigmoid activation function to scalar, vector, or matrix
        return 1/(1+np.exp(-z))

    def sigmoidPrime(self,z):
        #Gradient of sigmoid
        return np.exp(-z)/((1+np.exp(-z))**2)

    def costFunction(self, X, y):
        #Compute cost for given X,y, use weights already stored in class.
        self.yHat = self.forward(X)
        J = 0.5*sum((y-self.yHat)**2)/X.shape[0] + (self.Lambda/2)*(sum(self.W1**2)+sum(self.W2**2))
        return J

    def costFunctionPrime(self, X, y):
        #Compute derivative with respect to W and W2 for a given X and y:
        self.yHat = self.forward(X)

        delta3 = np.multiply(-(y-self.yHat), self.sigmoidPrime(self.z3))
        #Add gradient of regularization term:
        dJdW2 = np.dot(self.a2.T, delta3)/X.shape[0] + self.Lambda*self.W2

        delta2 = np.dot(delta3, self.W2.T)*self.sigmoidPrime(self.z2)
        #Add gradient of regularization term:
        dJdW1 = np.dot(X.T, delta2)/X.shape[0] + self.Lambda*self.W1

        return dJdW1, dJdW2

    #Helper functions for interacting with other methods/classes
    def getParams(self):
        #Get W1 and W2 Rolled into vector:
        params = np.concatenate((self.W1.ravel(), self.W2.ravel()))
        return params

    def setParams(self, params):
        #Set W1 and W2 using single parameter vector:
        W1_start = 0
        W1_end = self.hiddenLayerSize*self.inputLayerSize
        self.W1 = np.reshape(params[W1_start:W1_end], \
                             (self.inputLayerSize, self.hiddenLayerSize))
        W2_end = W1_end + self.hiddenLayerSize*self.outputLayerSize
        self.W2 = np.reshape(params[W1_end:W2_end], \
                             (self.hiddenLayerSize, self.outputLayerSize))

    def computeGradients(self, X, y):
        dJdW1, dJdW2 = self.costFunctionPrime(X, y)
        return np.concatenate((dJdW1.ravel(), dJdW2.ravel()))
Python/ML_DL/DL/Neural-Networks-Demystified-master/.ipynb_checkpoints/Part 7 Overfitting, Testing, and Regularization-checkpoint.ipynb
vbsteja/code
apache-2.0
The Events and Annotations data structures ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Generally speaking, both the Events and :class:~mne.Annotations data structures serve the same purpose: they provide a mapping between times during an EEG/MEG recording and a description of what happened at those times. In other words, they associate a when with a what. The main differences are: Units: the Events data structure represents the when in terms of samples, whereas the :class:~mne.Annotations data structure represents the when in seconds. Limits on the description: the Events data structure represents the what as an integer "Event ID" code, whereas the :class:~mne.Annotations data structure represents the what as a string. How duration is encoded: Events in an Event array do not have a duration (though it is possible to represent duration with pairs of onset/offset events within an Events array), whereas each element of an :class:~mne.Annotations object necessarily includes a duration (though the duration can be zero if an instantaneous event is desired). Internal representation: Events are stored as an ordinary :class:NumPy array &lt;numpy.ndarray&gt;, whereas :class:~mne.Annotations is a :class:list-like class defined in MNE-Python. What is a STIM channel? ^^^^^^^^^^^^^^^^^^^^^^^ A :term:STIM channel (short for "stimulus channel") is a channel that does not receive signals from an EEG, MEG, or other sensor. Instead, STIM channels record voltages (usually short, rectangular DC pulses of fixed magnitudes sent from the experiment-controlling computer) that are time-locked to experimental events, such as the onset of a stimulus or a button-press response by the subject (those pulses are sometimes called TTL_ pulses, event pulses, trigger signals, or just "triggers"). In other cases, these pulses may not be strictly time-locked to an experimental event, but instead may occur in between trials to indicate the type of stimulus (or experimental condition) that is about to occur on the upcoming trial. The DC pulses may be all on one STIM channel (in which case different experimental events or trial types are encoded as different voltage magnitudes), or they may be spread across several channels, in which case the channel(s) on which the pulse(s) occur can be used to encode different events or conditions. Even on systems with multiple STIM channels, there is often one channel that records a weighted sum of the other STIM channels, in such a way that voltage levels on that channel can be unambiguously decoded as particular event types. On older Neuromag systems (such as that used to record the sample data) this "summation channel" was typically STI 014; on newer systems it is more commonly STI101. You can see the STIM channels in the raw data file here:
raw.copy().pick_types(meg=False, stim=True).plot(start=3, duration=6)
0.19/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
You can see that STI 014 (the summation channel) contains pulses of different magnitudes whereas pulses on other channels have consistent magnitudes. You can also see that every time there is a pulse on one of the other STIM channels, there is a corresponding pulse on STI 014. .. TODO: somewhere in prev. section, link out to a table of which systems have STIM channels vs. which have marker files or embedded event arrays (once such a table has been created). Converting a STIM channel signal to an Events array ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ If your data has events recorded on a STIM channel, you can convert them into an events array using :func:mne.find_events. The sample number of the onset (or offset) of each pulse is recorded as the event time, the pulse magnitudes are converted into integers, and these pairs of sample numbers plus integer codes are stored in :class:NumPy arrays &lt;numpy.ndarray&gt; (usually called "the events array" or just "the events"). In its simplest form, the function requires only the :class:~mne.io.Raw object, and the name of the channel(s) from which to read events:
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5])  # show the first 5
0.19/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
.. sidebar:: The middle column of the Events array MNE-Python events are actually *three* values: in between the sample number and the integer event code is a value indicating what the event code was on the immediately preceding sample. In practice, that value is almost always `0`, but it can be used to detect the *endpoint* of an event whose duration is longer than one sample. See the documentation of :func:`mne.find_events` for more details. If you don't provide the name of a STIM channel, :func:~mne.find_events will first look for MNE-Python config variables &lt;tut-configure-mne&gt; for variables MNE_STIM_CHANNEL, MNE_STIM_CHANNEL_1, etc. If those are not found, channels STI 014 and STI101 are tried, followed by the first channel with type "STIM" present in raw.ch_names. If you regularly work with data from several different MEG systems with different STIM channel names, setting the MNE_STIM_CHANNEL config variable may not be very useful, but for researchers whose data is all from a single system it can be a time-saver to configure that variable once and then forget about it. :func:~mne.find_events has several options, including options for aligning events to the onset or offset of the STIM channel pulses, setting the minimum pulse duration, and handling of consecutive pulses (with no return to zero between them). For example, you can effectively encode event duration by passing output='step' to :func:mne.find_events; see the documentation of :func:~mne.find_events for details. More information on working with events arrays (including how to plot, combine, load, and save event arrays) can be found in the tutorial tut-event-arrays. Reading embedded events as Annotations ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Some EEG/MEG systems generate files where events are stored in a separate data array rather than as pulses on one or more STIM channels. For example, the EEGLAB format stores events as a collection of arrays in the :file:.set file. When reading those files, MNE-Python will automatically convert the stored events into an :class:~mne.Annotations object and store it as the :attr:~mne.io.Raw.annotations attribute of the :class:~mne.io.Raw object:
testing_data_folder = mne.datasets.testing.data_path()
eeglab_raw_file = os.path.join(testing_data_folder, 'EEGLAB', 'test_raw.set')
eeglab_raw = mne.io.read_raw_eeglab(eeglab_raw_file)
print(eeglab_raw.annotations)
0.19/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
More information on working with :class:~mne.Annotations objects, including how to add annotations to :class:~mne.io.Raw objects interactively, and how to plot, concatenate, load, save, and export :class:~mne.Annotations objects can be found in the tutorial tut-annotate-raw. Converting between Events arrays and Annotations objects ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ Once your experimental events are read into MNE-Python (as either an Events array or an :class:~mne.Annotations object), you can easily convert between the two formats as needed. You might do this because, e.g., an Events array is needed for epoching continuous data, or because you want to take advantage of the "annotation-aware" capability of some functions, which automatically omit spans of data if they overlap with certain annotations. To convert an :class:~mne.Annotations object to an Events array, use the function :func:mne.events_from_annotations on the :class:~mne.io.Raw file containing the annotations. This function will assign an integer Event ID to each unique element of raw.annotations.description, and will return the mapping of descriptions to integer Event IDs along with the derived Event array. By default, one event will be created at the onset of each annotation; this can be modified via the chunk_duration parameter of :func:~mne.events_from_annotations to create equally spaced events within each annotation span (see chunk-duration, below, or see fixed-length-events for direct creation of an Events array of equally-spaced events).
events_from_annot, event_dict = mne.events_from_annotations(eeglab_raw)
print(event_dict)
print(events_from_annot[:5])
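If you need control over the integer codes rather than the automatic assignment shown above, events_from_annotations also accepts an explicit mapping. The dictionary below is only an illustration; its keys must match descriptions actually present in your annotations:

custom_mapping = {'rt': 1, 'square': 2}  # description -> desired event id
events_custom, event_dict_custom = mne.events_from_annotations(eeglab_raw, event_id=custom_mapping)
print(event_dict_custom)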
0.19/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Making multiple events per annotation ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ As mentioned above, you can generate equally-spaced events from an :class:~mne.Annotations object using the chunk_duration parameter of :func:~mne.events_from_annotations. For example, suppose we have an annotation in our :class:~mne.io.Raw object indicating when the subject was in REM sleep, and we want to perform a resting-state analysis on those spans of data. We can create an Events array with a series of equally-spaced events within each "REM" span, and then use those events to generate (potentially overlapping) epochs that we can analyze further.
# create the REM annotations
rem_annot = mne.Annotations(onset=[5, 41],
                            duration=[16, 11],
                            description=['REM'] * 2)
raw.set_annotations(rem_annot)
(rem_events,
 rem_event_dict) = mne.events_from_annotations(raw, chunk_duration=1.5)
0.19/_downloads/6684371ec2bc8e72513b3bdbec0d3a9f/plot_20_events_from_raw.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
🎯 Nomenclature First off, let's clarify a bit the nomenclature we are going to use, introducing the following terms: Datasets, Scenarios, Benchmarks and Generators.
- By Dataset we mean a collection of examples that can be used for training or testing purposes but not already organized to be processed as a stream of batches or tasks. Since Avalanche is based on Pytorch, our Datasets are torch.utils.data.Dataset objects.
- By Scenario we mean a particular setting, i.e. specificities about the continual stream of data a continual learning algorithm will face.
- By Benchmark we mean a well-defined and carefully thought-out combination of a scenario with one or multiple datasets that we can use to assess our continual learning algorithms.
- By Generator we mean a function that, given a specific scenario and a dataset, can generate a Benchmark.
📚 The Benchmarks Module The benchmarks module offers 3 types of utils:
- Datasets: all the Pytorch datasets plus additional ones prepared by our community and particularly interesting for continual learning.
- Classic Benchmarks: classic benchmarks used in the CL literature, ready to be used with great flexibility.
- Benchmarks Generators: a set of functions you can use to create your own benchmark starting from any kind of data and scenario. In particular, we distinguish two types of generators: Specific and Generic. The first will let you create a benchmark based on a clear scenario and Pytorch dataset(s); the latter, instead, are more generic and flexible, both in terms of scenario definition and in terms of the type of data they can manage.
Specific:
- nc_benchmark: given one or multiple datasets it creates a benchmark instance based on scenarios where New Classes (NC) are encountered over time. Notable scenarios that can be created using this utility include Class-Incremental, Task-Incremental and Task-Agnostic scenarios.
- ni_benchmark: it creates a benchmark instance based on scenarios where New Instances (NI), i.e. new examples of the same classes, are encountered over time. Notable scenarios that can be created using this utility include Domain-Incremental scenarios.
Generic:
- filelist_benchmark: it creates a benchmark instance given a list of filelists.
- paths_benchmark: it creates a benchmark instance given a list of file paths and class labels.
- tensors_benchmark: it creates a benchmark instance given a list of tensors.
- dataset_benchmark: it creates a benchmark instance given a list of pytorch datasets.
But let's see how we can use this module in practice! 🖼️ Datasets Let's start with the Datasets. As we previously hinted, in Avalanche you'll find all the standard Pytorch Datasets available in the torchvision package as well as a few others that are useful for continual learning but not already officially available within the Pytorch ecosystem.
import torch
import torchvision
from avalanche.benchmarks.datasets import MNIST, FashionMNIST, KMNIST, EMNIST, \
QMNIST, FakeData, CocoCaptions, CocoDetection, LSUN, ImageNet, CIFAR10, \
CIFAR100, STL10, SVHN, PhotoTour, SBU, Flickr8k, Flickr30k, VOCDetection, \
VOCSegmentation, Cityscapes, SBDataset, USPS, Kinetics400, HMDB51, UCF101, \
CelebA, CORe50Dataset, TinyImagenet, CUB200, OpenLORIS

# As we would simply do with any Pytorch dataset we can create the train and
# test sets from it. We could use any of the above imported Datasets, but let's
# just try to use the standard MNIST.
train_MNIST = MNIST(
    './data/mnist', train=True, download=True, transform=torchvision.transforms.ToTensor()
)
test_MNIST = MNIST(
    './data/mnist', train=False, download=True, transform=torchvision.transforms.ToTensor()
)

# Given these two sets we can simply iterate them to get the examples one by one
for i, example in enumerate(train_MNIST):
    pass
print("Num. examples processed: {}".format(i))

# or use a Pytorch DataLoader
train_loader = torch.utils.data.DataLoader(
    train_MNIST, batch_size=32, shuffle=True
)
for i, (x, y) in enumerate(train_loader):
    pass
print("Num. mini-batch processed: {}".format(i))
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Of course, the basic utilities ImageFolder and DatasetFolder can also be used. These are two classes that you can use to create a Pytorch Dataset directly from your files (following a particular structure). You can read more about these in the Pytorch official documentation here. We also provide the additional FilelistDataset and AvalancheDataset classes. The former can construct a dataset from a filelist (caffe style) pointing to files anywhere on the disk. The latter augments the basic Pytorch Dataset functionalities with an extension to better deal with a stack of transformations to be used during train and test.
from avalanche.benchmarks.utils import ImageFolder, DatasetFolder, FilelistDataset, AvalancheDataset
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
🛠️ Benchmarks Basics The Avalanche benchmarks (instances of the Scenario class) contain several attributes that characterize the benchmark. However, the most important ones are the train and test streams. In Avalanche we often suppose to have access to these two parallel streams of data (even though some benchmarks may not provide such a feature, but contain just a unique test set). Each of these streams is an iterable, indexable and sliceable object composed of unique experiences. Experiences are batches of data (or "tasks") that can be provided with or without a specific task label. Efficiency It is worth mentioning that the data belonging to a stream is not loaded into RAM beforehand. Avalanche actually loads the data when specific mini-batches are requested at training/test time, based on the policy defined by each Dataset implementation. This means that memory requirements are very low, while speed is guaranteed by a multi-processing data loading system based on the one defined in Pytorch. Scenarios So, as we have seen, each scenario object in Avalanche has several useful attributes that characterize the benchmark, including the two important train and test streams. Let's check what you can get from a scenario object in more detail:
from avalanche.benchmarks.classic import SplitMNIST

split_mnist = SplitMNIST(n_experiences=5, seed=1)

# Original train/test sets
print('--- Original datasets:')
print(split_mnist.original_train_dataset)
print(split_mnist.original_test_dataset)

# A list describing which training patterns are assigned to each experience.
# Patterns are identified by their id w.r.t. the dataset found in the
# original_train_dataset field.
print('--- Train patterns assignment:')
print(split_mnist.train_exps_patterns_assignment)

# A list describing which test patterns are assigned to each experience.
# Patterns are identified by their id w.r.t. the dataset found in the
# original_test_dataset field
print('--- Test patterns assignment:')
print(split_mnist.test_exps_patterns_assignment)

# the task label of each experience.
print('--- Task labels:')
print(split_mnist.task_labels)

# train and test streams
print('--- Streams:')
print(split_mnist.train_stream)
print(split_mnist.test_stream)

# A list that, for each experience (identified by its index/ID),
# stores a set of the (optionally remapped) IDs of classes of patterns
# assigned to that experience.
print('--- Classes in each experience:')
print(split_mnist.original_classes_in_exp)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Train and Test Streams The train and test streams can be used for training and testing purposes, respectively. This is what you can do with these streams:
# each stream has a name: "train" or "test"
train_stream = split_mnist.train_stream
print(train_stream.name)

# we have access to the scenario from which the stream was taken
train_stream.benchmark

# we can slice and reorder the stream as we like!
substream = train_stream[0]
substream = train_stream[0:2]
substream = train_stream[0,2,1]

len(substream)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Experiences Each stream can in turn be treated as an iterator that produces a unique experience, containing all the useful data regarding a batch or task in the continual stream our algorithms will face. Check out how you can use these experiences below:
# we get the first experience
experience = train_stream[0]

# task label and dataset are the main attributes
t_label = experience.task_label
dataset = experience.dataset

# but you can recover additional info
experience.current_experience
experience.classes_in_this_experience
experience.classes_seen_so_far
experience.previous_classes
experience.future_classes
experience.origin_stream
experience.benchmark

# As always, we can iterate over it normally or with a pytorch
# data loader.
# For instance, we can use tqdm to add a progress bar.
from tqdm import tqdm
for i, data in enumerate(tqdm(dataset)):
    pass
print("\nNumber of examples:", i + 1)
print("Task Label:", t_label)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
🏛️ Classic Benchmarks Now that we know how our benchmarks work in general through scenarios, streams and experiences objects, in this section we are going to explore common benchmarks already available to you with one line of code, yet flexible enough to allow proper tuning based on your needs:
from avalanche.benchmarks.classic import CORe50, SplitTinyImageNet, \
SplitCIFAR10, SplitCIFAR100, SplitCIFAR110, SplitMNIST, RotatedMNIST, \
PermutedMNIST, SplitCUB200, SplitImageNet

# creating PermutedMNIST (Task-Incremental)
perm_mnist = PermutedMNIST(
    n_experiences=2,
    seed=1234,
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Many of the classic benchmarks will download the original datasets they are based on automatically and put it under the "~/.avalanche/data" directory. How to Use the Benchmarks Let's see now how we can use the classic benchmark or the ones that you can create through the generators (see next section). For example, let's try out the classic PermutedMNIST benchmark (Task-Incremental scenario).
# creating the benchmark instance (scenario object)
perm_mnist = PermutedMNIST(
    n_experiences=3,
    seed=1234,
)

# recovering the train and test streams
train_stream = perm_mnist.train_stream
test_stream = perm_mnist.test_stream

# iterating over the train stream
for experience in train_stream:
    print("Start of task ", experience.task_label)
    print('Classes in this task:', experience.classes_in_this_experience)

    # The current Pytorch training set can be easily recovered through the
    # experience
    current_training_set = experience.dataset
    # ...as well as the task_label
    print('Task {}'.format(experience.task_label))
    print('This task contains', len(current_training_set), 'training examples')

    # we can recover the corresponding test experience in the test stream
    current_test_set = test_stream[experience.current_experience].dataset
    print('This task contains', len(current_test_set), 'test examples')
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
🐣 Benchmarks Generators What if we want to create a new benchmark that is not present in the "Classic" ones? Well, in that case Avalanche offers a number of utilities that you can use to create your own benchmark with maximum flexibility: the benchmarks generators! Specific Generators The specific scenario generators are useful when, starting from one or multiple Pytorch datasets, you want to create a "New Instances" or "New Classes" benchmark: i.e. they support the easy and flexible creation of Domain-Incremental, Class-Incremental or Task-Incremental scenarios, among others. For the New Classes scenario you can use the following function: nc_benchmark; for the New Instances: ni_benchmark
from avalanche.benchmarks.generators import nc_benchmark, ni_benchmark
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Let's start by creating the MNIST dataset object as we would normally do in Pytorch:
from torchvision.transforms import Compose, ToTensor, Normalize, RandomCrop

train_transform = Compose([
    RandomCrop(28, padding=4),
    ToTensor(),
    Normalize((0.1307,), (0.3081,))
])

test_transform = Compose([
    ToTensor(),
    Normalize((0.1307,), (0.3081,))
])

mnist_train = MNIST(
    './data/mnist', train=True, download=True, transform=train_transform
)
mnist_test = MNIST(
    './data/mnist', train=False, download=True, transform=test_transform
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Then we can, for example, create a new benchmark based on MNIST and the classic Domain-Incremental scenario:
scenario = ni_benchmark(
    mnist_train, mnist_test, n_experiences=10, shuffle=True, seed=1234,
    balance_experiences=True
)

train_stream = scenario.train_stream

for experience in train_stream:
    t = experience.task_label
    exp_id = experience.current_experience
    training_dataset = experience.dataset
    print('Task {} batch {} -> train'.format(t, exp_id))
    print('This batch contains', len(training_dataset), 'patterns')
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Or, we can create a benchmark based on MNIST and the Class-Incremental scenario (what's commonly referred to as the "Split-MNIST" benchmark):
scenario = nc_benchmark(
    mnist_train, mnist_test, n_experiences=10, shuffle=True, seed=1234,
    task_labels=False
)

train_stream = scenario.train_stream

for experience in train_stream:
    t = experience.task_label
    exp_id = experience.current_experience
    training_dataset = experience.dataset
    print('Task {} batch {} -> train'.format(t, exp_id))
    print('This batch contains', len(training_dataset), 'patterns')
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Generic Generators Finally, if you cannot create your ideal benchmark since it does not fit well in the aforementioned new classes or new instances scenarios, you can always use our generic generators: filelist_benchmark paths_benchmark dataset_benchmark tensors_benchmark
from avalanche.benchmarks.generators import filelist_benchmark, dataset_benchmark, \
                                             tensors_benchmark, paths_benchmark
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Let's start with the filelist_benchmark utility. This function is particularly useful when it is important to preserve a particular order of the patterns to be processed (for example if they are frames of a video), or in general if we have data scattered around our drive and we want to create a sequence of batches/tasks providing only a txt file containing the list of their paths. For Avalanche we follow the same format of the Caffe filelists ("path class_label"):

/path/to/a/file.jpg 0
/path/to/another/file.jpg 0
...
/path/to/another/file.jpg M
/path/to/another/file.jpg M
...
/path/to/another/file.jpg N
/path/to/another/file.jpg N

So let's download the classic "Cats vs Dogs" dataset as an example:
!wget -N --no-check-certificate \
  https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip
!unzip -q -o cats_and_dogs_filtered.zip
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
You can now see, in the content directory on Colab, the images we downloaded. We are now going to create the filelists and then use the filelist_benchmark function to create our benchmark:
import os

# let's create the filelists since we don't have it
dirpath = "cats_and_dogs_filtered/train"

for filelist, rel_dir, t_label in zip(
        ["train_filelist_00.txt", "train_filelist_01.txt"],
        ["cats", "dogs"],
        [0, 1]):
    # First, obtain the list of files
    filenames_list = os.listdir(os.path.join(dirpath, rel_dir))

    # Create the text file containing the filelist
    # Filelists must be in Caffe-style, which means
    # that they must define path in the format:
    #
    # relative_path_img1 class_label_first_img
    # relative_path_img2 class_label_second_img
    # ...
    #
    # For instance:
    # cat/cat_0.png 1
    # dog/dog_54.png 0
    # cat/cat_3.png 1
    # ...
    #
    # Paths are relative to a root path
    # (specified when calling filelist_benchmark)
    with open(filelist, "w") as wf:
        for name in filenames_list:
            wf.write(
                "{} {}\n".format(os.path.join(rel_dir, name), t_label)
            )

# Here we create a GenericCLScenario ready to be iterated
generic_scenario = filelist_benchmark(
    dirpath,
    ["train_filelist_00.txt", "train_filelist_01.txt"],
    ["train_filelist_00.txt"],
    task_labels=[0, 0],
    complete_test_set_only=True,
    train_transform=ToTensor(),
    eval_transform=ToTensor()
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
In the previous cell we created a benchmark instance starting from file lists. However, paths_benchmark is a better choice if you already have the list of paths directly loaded in memory:
train_experiences = []
for rel_dir, label in zip(
        ["cats", "dogs"],
        [0, 1]):
    # First, obtain the list of files
    filenames_list = os.listdir(os.path.join(dirpath, rel_dir))

    # Don't create a file list: instead, we create a list of
    # paths + class labels
    experience_paths = []
    for name in filenames_list:
        instance_tuple = (os.path.join(dirpath, rel_dir, name), label)
        experience_paths.append(instance_tuple)
    train_experiences.append(experience_paths)

# Here we create a GenericCLScenario ready to be iterated
generic_scenario = paths_benchmark(
    train_experiences,
    [train_experiences[0]],  # Single test set
    task_labels=[0, 0],
    complete_test_set_only=True,
    train_transform=ToTensor(),
    eval_transform=ToTensor()
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Let us see how we can use the dataset_benchmark utility, where we can use several PyTorch datasets as different batches or tasks. This utility expects a list of datasets for the train, test (and other custom) streams. Each dataset will be used to create an experience:
train_cifar10 = CIFAR10(
    './data/cifar10', train=True, download=True
)
test_cifar10 = CIFAR10(
    './data/cifar10', train=False, download=True
)

generic_scenario = dataset_benchmark(
    [train_MNIST, train_cifar10],
    [test_MNIST, test_cifar10]
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
Adding task labels can be achieved by wrapping each datasets using AvalancheDataset. Apart from task labels, AvalancheDataset allows for more control over transformations and offers an ever growing set of utilities (check the documentation for more details).
# Alternatively, task labels can also be a list (or tensor)
# containing the task label of each pattern
train_MNIST_task0 = AvalancheDataset(train_MNIST, task_labels=0)
test_MNIST_task0 = AvalancheDataset(test_MNIST, task_labels=0)

train_cifar10_task1 = AvalancheDataset(train_cifar10, task_labels=1)
test_cifar10_task1 = AvalancheDataset(test_cifar10, task_labels=1)

scenario_custom_task_labels = dataset_benchmark(
    [train_MNIST_task0, train_cifar10_task1],
    [test_MNIST_task0, test_cifar10_task1]
)

print('Without custom task labels:',
      generic_scenario.train_stream[1].task_label)

print('With custom task labels:',
      scenario_custom_task_labels.train_stream[1].task_label)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
And finally, the tensors_benchmark generator:
pattern_shape = (3, 32, 32)

# Definition of training experiences
# Experience 1
experience_1_x = torch.zeros(100, *pattern_shape)
experience_1_y = torch.zeros(100, dtype=torch.long)

# Experience 2
experience_2_x = torch.zeros(80, *pattern_shape)
experience_2_y = torch.ones(80, dtype=torch.long)

# Test experience
# For this example we define a single test experience,
# but "tensors_benchmark" allows you to define even more than one!
test_x = torch.zeros(50, *pattern_shape)
test_y = torch.zeros(50, dtype=torch.long)

generic_scenario = tensors_benchmark(
    train_tensors=[(experience_1_x, experience_1_y),
                   (experience_2_x, experience_2_y)],
    test_tensors=[(test_x, test_y)],
    task_labels=[0, 0],  # Task label of each train exp
    complete_test_set_only=True
)
notebooks/from-zero-to-hero-tutorial/03_benchmarks.ipynb
ContinualAI/avalanche
mit
As you can see, it doesn't generate the complete list like the listcomp above. Genexps don't produce entire lists in memory. You need to iterate over them, and then you will get the items one by one. Nothing keeps the entire list in memory!
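A quick illustrative comparison of the memory footprint of a listcomp versus a genexp (exact numbers will vary by platform, so treat them as indicative only):

import sys

squares_list = [n * n for n in range(1_000_000)]   # materializes every element up front
squares_gen = (n * n for n in range(1_000_000))    # lazily yields one item at a time

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # a couple hundred bytes, regardless of the range
print(next(squares_gen))            # items are produced only on demand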
for article in couleurs_et_tailles:
    print(article)
Chapter 2 - Iterables Consistency.ipynb
rayxi2dot71828/FluentPython
mit
Next, it's about tuples (which, in short, are not just immutable lists) and slicing. More examples:
jours = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"] all_slice = slice(None, None, None) print(jours[all_slice]) every_other_slice = slice(0, None, 2) print(jours[every_other_slice]) reverse_slice = slice(None, None, -1) print(jours[reverse_slice])
Chapter 2 - Iterables Consistency.ipynb
rayxi2dot71828/FluentPython
mit
Next, the chapter moves on to list.sort() vs. sorted(). I kinda like sorted because it works with everything including generators, and it returns a new list, whereas list.sort() is more limited because it is an in-place sort which returns None. Next is bisect, which is interesting, but one can read the documentation easily and experiment with it. Also obviously important is knowing which data structure meets your needs, a bit like knowing the Java Collections Framework again. What I don't really like about Python here is that the nicer and supposedly basic things are not part of the standard library. For example, SortedContainers is by all accounts an excellent package. This is something we've been taking for granted in Java, for example. One notable thing about sorting that's different (and, I agree with the author, nicer) than in other languages is using the "key" argument instead of the usual "Comparator" function/object that returns the standard -1, 0, 1.
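To make that last point concrete, here is a small side-by-side sketch of the two styles; the old-style comparator has to be wrapped with functools.cmp_to_key, since Python 3 sort functions only accept a key:

from functools import cmp_to_key

mots = ["pomme", "kiwi", "banane"]

# Pythonic style: a key function maps each item to a sort value
print(sorted(mots, key=len))

# Comparator style (returns -1 / 0 / 1), adapted for Python 3 via cmp_to_key
def compare_len(a, b):
    return (len(a) > len(b)) - (len(a) < len(b))

print(sorted(mots, key=cmp_to_key(compare_len)))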
jours = ["lundi", "mardi", "mercredi", "jeudi", "vendredi", "samedi", "dimanche"] # already ordered print(sorted(jours)) print(sorted(jours, reverse=True)) print(sorted(jours, key=len, reverse=True)) # by length of string, descend print(sorted(jours, reverse=True, key=lambda x: jours.index(x))) # by actual order of week, descending
Chapter 2 - Iterables Consistency.ipynb
rayxi2dot71828/FluentPython
mit
Performance Test One:
timeit(get_subStrs_of_uniqueChars(longString, 6623))

# hackerrank, most voted solution from discussion board (for Python)
# S, N = input(), int(input())
def hackrRank_getSubsOfUniqueChars_v1(S, N):
    # modified slightly to integrate for this timeit test:
    rtnVal = []  # changed print() to rtnVal
    for part in zip(*[iter(S)] * N):
        d = dict()
        rtnVal.append(''.join([ d.setdefault(c, c) for c in part if c not in d ]))
    return rtnVal

# tests to show it works the same as earlier examples
test_funcs_thisNB(hackrRank_getSubsOfUniqueChars_v1)
hackrRank_getSubsOfUniqueChars_v1(longString, 6623)  # should only ouput one string
PY_Basics/TMWP_PerfTesting_in_JupyterNBs.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Performance Test Two: This one runs faster by a noticeable factor (6x faster). But the next one is even more extreme.
timeit(hackrRank_getSubsOfUniqueChars_v1(longString, 6623))

# a solution from the discussion board of hackerrank
def without_repetition(t):
    s = set()
    o = []
    for c in t:
        if c not in s:
            s.add(c)
            o.append(c)
    return ''.join(o)

def hackrRank_getSubsOfUniqueChars_v2(s, w):
    n = len(s)
    rtnVal = []
    for t in [s[i:i+w] for i in range(0, n, w)]:
        rtnVal.append(without_repetition(t))
    return rtnVal

# tests to show it works the same as earlier examples
test_funcs_thisNB(hackrRank_getSubsOfUniqueChars_v2)
hackrRank_getSubsOfUniqueChars_v2(longString, 6623)  # should only ouput one string
PY_Basics/TMWP_PerfTesting_in_JupyterNBs.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Performance Test Three: This one runs faster by a much more extreme multiplier. Note that we have switched from ms to µs in the output metric. If this function were called on big data, the performance boost would be significant (roughly three orders of magnitude faster).
timeit(hackrRank_getSubsOfUniqueChars_v2(longString, 6623))

from collections import OrderedDict

def hackrRank_getSubsOfUniqueChars_v3(string, k):
    rtnVal = []
    for x in range(0,len(string),k):
        rtnVal.append(''.join(list(OrderedDict.fromkeys(string[x:x+k]))))
    return rtnVal

# tests to show it works the same as earlier examples
test_funcs_thisNB(hackrRank_getSubsOfUniqueChars_v3)
hackrRank_getSubsOfUniqueChars_v3(longString, 6623)  # should only ouput one string
PY_Basics/TMWP_PerfTesting_in_JupyterNBs.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Performance Test Four: Ironically, though this one also tests in microseconds, this ordered-dictionary solution, which also looks "more Pythonic" than the previous example, is not able to beat its performance. It only comes in second place.
timeit(hackrRank_getSubsOfUniqueChars_v3(longString, 6623))
PY_Basics/TMWP_PerfTesting_in_JupyterNBs.ipynb
TheMitchWorksPro/DataTech_Playground
mit
Another Simple Test This test shows how two lines perform better than one, since splitting the expression avoids doing the set conversion twice
def averageUniqueElements(array):
    return sum(set(array)) / len(set(array))

def averageUniqueElements_v2(array):
    arrSet = set(array)
    return sum(arrSet) / len(arrSet)

timeit(averageUniqueElements([456789, 456789, 11111111, 111111111, 12111111111111, 78787878, 7878, 99999999, 99999999, 6]))
timeit(averageUniqueElements_v2([456789, 456789, 11111111, 111111111, 12111111111111, 78787878, 7878, 99999999, 99999999, 6]))
PY_Basics/TMWP_PerfTesting_in_JupyterNBs.ipynb
TheMitchWorksPro/DataTech_Playground
mit
In this example, we will get the page for Berkeley, California and count the most commonly used words in the article. I'm using nltk, which is a nice library for natural language processing (although it is probably overkill for this).
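Since nltk is arguably overkill for a plain word count, here is a lighter-weight sketch using only the standard library (it assumes the bky page object created in the next cell):

from collections import Counter

word_counts = Counter(bky.content.split())
print(word_counts.most_common(10))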
bky = wikipedia.page("Berkeley, California")
bky

bk_split = bky.content.split()
bk_split[:10]

!pip install nltk
import nltk

fdist1 = nltk.FreqDist(bk_split)
fdist1.most_common(10)
scraping_wikipedia/wikipedia-api-query.ipynb
katyhuff/berkeley
bsd-3-clause
There are many functions in a Wikipedia page object. We can also get all the Wikipedia articles that are linked from a page, all the URL links in the page, or all the geographical coordinates in the page. There was a study about which domains were most popular in Wikipedia articles.
print(bky.references[:10])
print(bky.links[:10])
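As a small illustrative follow-up to the domain-popularity remark above, the reference URLs for a single page can be tallied by domain with the standard library alone:

from urllib.parse import urlparse
from collections import Counter

domains = Counter(urlparse(url).netloc for url in bky.references)
print(domains.most_common(5))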
scraping_wikipedia/wikipedia-api-query.ipynb
katyhuff/berkeley
bsd-3-clause
Querying using pywikibot pywikibot is one of the most well-developed and widely used libraries for querying the Wikipedia API. It does need a configuration script (user-config.py) in the directory where you are running the python script. It is often used by bots that edit, so there are many features that are not available unless you login with a Wikipedia account. Register an account on Wikipedia If you don't have one, register an account on Wikipedia. Then modify the string below so that the usernames line reads u'YourUserName'. You are not inputting your password, because you are not logging in with this account. This is just so that there is a place to contact you if your script goes out of control. This is not required to use pywikibot, but it is part of the rules for accessing Wikipedia's API. In this tutorial, I'm not going to tell you how to set up OAuth so that you can login and edit. But if you are interested in this, I'd love to talk to you about it. Note: you can edit pages with pywikibot (even when not logged in), but please don't! You have to get approval from Wikipedia's bot approval group, or else your IP address is likely to be banned.
user_config=""" family = 'wikipedia' mylang = 'en' usernames['wikipedia']['en'] = u'REPLACE THIS WITH YOUR USERNAME' """ f = open('user-config.py', 'w') f.write(user_config) f.close() !pip install pywikibot import pywikibot site = pywikibot.Site() bky_page = pywikibot.Page(site, "Berkeley, California") bky_page # page text with all the wikimarkup and templates bky_page_text = bky_page.text # page text expanded to HTML bky_page.expand_text() # All the geographical coordinates linked in a page (may have multiple per article) bky_page.coordinates()
scraping_wikipedia/wikipedia-api-query.ipynb
katyhuff/berkeley
bsd-3-clause
Generators Generators are a way of querying for a kind of page, and then iterating through those pages. Generators are frequently used with categories, but you can also use a generator for things like a search, or all pages linking to a page.
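As a hedged illustration of those other kinds of generators (the exact helper names and signatures are worth double-checking against the pywikibot docs for your version), a full-text search generator and a backlinks generator might look like this:

from itertools import islice
from pywikibot import pagegenerators

# Pages matching a full-text search (assumed helper: SearchPageGenerator)
for page in islice(pagegenerators.SearchPageGenerator('Berkeley', site=site), 5):
    print(page.title())

# Pages that link to a given page
target = pywikibot.Page(site, "Berkeley, California")
for page in islice(target.backlinks(), 5):
    print(page.title())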
from pywikibot import pagegenerators

cat = pywikibot.Category(site,'Category:Cities in Alameda County, California')
gen = cat.members()
gen

# create an empty list
coord_d = []

for page in gen:
    print(page.title(), page.coordinates())
    pc = page.coordinates()
    for coord in pc:
        # If the page is not a category
        if(page.isCategory()==False):
            coord_d.append({'label':page.title(), 'latitude':coord.lat, 'longitude':coord.lon})

coord_d[:3]

import pandas as pd
coord_df = pd.DataFrame(coord_d)
coord_df
scraping_wikipedia/wikipedia-api-query.ipynb
katyhuff/berkeley
bsd-3-clause