The following function plots the original data and the fitted curve.
def PlotLinearModel(daily, name):
    """Plots a linear fit to a sequence of prices, and the residuals.

    daily: DataFrame of daily prices
    name: string
    """
    model, results = RunLinearModel(daily)
    PlotFittedValues(model, results, label=name)
    thinkplot.Config(title='Fitted values', ...
code/chap12soln.ipynb
smorton2/think-stats
gpl-3.0
Here are results for the high quality category:
name = 'high'
daily = dailies[name]
PlotLinearModel(daily, name)
Moving averages

As a simple example, I'll show the rolling average of the numbers from 0 to 9.
series = np.arange(10)
With a "window" of size 3, we get the average of the previous 3 elements, or nan where fewer than 3 are available.
pd.rolling_mean(series, 3)
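`pd.rolling_mean` was removed in later versions of pandas; the same computation now goes through the `Series.rolling` window object. A minimal sketch of the equivalent call:

```python
import numpy as np
import pandas as pd

series = pd.Series(np.arange(10))
# window of 3: nan until 3 values are available, then the mean of the last 3
roll = series.rolling(3).mean()
print(roll.tolist())
# [nan, nan, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

The window object also exposes `.sum()`, `.std()`, and other aggregations with the same nan-padding behavior.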
The following function plots the rolling mean.
def PlotRollingMean(daily, name):
    """Plots rolling mean.

    daily: DataFrame of daily prices
    """
    dates = pd.date_range(daily.index.min(), daily.index.max())
    reindexed = daily.reindex(dates)
    thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
    roll_mean = pd.rolling_mean(reindexed.ppg...
The exponentially-weighted moving average gives more weight to more recent points.
def PlotEWMA(daily, name):
    """Plots exponentially-weighted moving average.

    daily: DataFrame of daily prices
    """
    dates = pd.date_range(daily.index.min(), daily.index.max())
    reindexed = daily.reindex(dates)
    thinkplot.Scatter(reindexed.ppg, s=15, alpha=0.2, label=name)
    roll_mean = pd.ewma(reindexed.ppg, 30)
    think...
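`pd.ewma` is likewise gone from modern pandas; `Series.ewm` replaces it. A sketch of the equivalent span-based call:

```python
import numpy as np
import pandas as pd

series = pd.Series(np.arange(10, dtype=float))
# span=3 weights recent points more heavily; unlike rolling, there is no nan padding
ewma = series.ewm(span=3).mean()
```

Because every point gets some weight, the EWMA is defined from the very first element, which is one reason the book uses it to fill gaps.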
We can use resampling to fill missing values with the right amount of noise.
def FillMissing(daily, span=30):
    """Fills missing values with an exponentially weighted moving average.

    Resulting DataFrame has new columns 'ewma' and 'resid'.

    daily: DataFrame of daily prices
    span: window size (sort of) passed to ewma

    returns: new DataFrame of daily prices
    """
    dates = pd...
Even if the correlations between consecutive days are weak, there might be correlations across intervals of one week, one month, or one year.
rows = []
for lag in [1, 7, 30, 365]:
    print(lag, end='\t')
    for name, filled in filled_dailies.items():
        corr = SerialCorr(filled.resid, lag)
        print('%.2g' % corr, end='\t')
    print()
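SerialCorr itself is not shown in this excerpt; a minimal version, assuming it is the Pearson correlation between the series and a lagged copy of itself:

```python
import numpy as np
import pandas as pd

def serial_corr(series, lag=1):
    """Pearson correlation between a series and itself shifted by `lag`."""
    xs = series[lag:]
    ys = series.shift(lag)[lag:]
    return np.corrcoef(xs, ys)[0, 1]

# a linear ramp is perfectly correlated with its own lag
ramp = pd.Series(np.arange(100, dtype=float))
ramp_corr = serial_corr(ramp, 1)
```

On residuals from a fitted model, values near zero mean the model has captured most of the day-to-day structure.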
The strongest correlation is a weekly cycle in the medium quality category.

Autocorrelation

The autocorrelation function is the serial correlation computed for all lags. We can use it to replicate the results from the previous section.
import statsmodels.tsa.stattools as smtsa

filled = filled_dailies['high']
acf = smtsa.acf(filled.resid, nlags=365, unbiased=True)
print('%0.2g, %.2g, %0.2g, %0.2g, %0.2g' %
      (acf[0], acf[1], acf[7], acf[30], acf[365]))
To get a sense of how much autocorrelation we should expect by chance, we can resample the data (which eliminates any actual autocorrelation) and compute the ACF.
def SimulateAutocorrelation(daily, iters=1001, nlags=40):
    """Resample residuals, compute autocorrelation, and plot percentiles.

    daily: DataFrame
    iters: number of simulations to run
    nlags: maximum lags to compute autocorrelation
    """
    # run simulations
    t = []
    for _ in range(iters):
        ...
The following function plots the actual autocorrelation for lags up to 40 days. The flag add_weekly indicates whether we should add a simulated weekly cycle.
def PlotAutoCorrelation(dailies, nlags=40, add_weekly=False):
    """Plots autocorrelation functions.

    dailies: map from category name to DataFrame of daily prices
    nlags: number of lags to compute
    add_weekly: boolean, whether to add a simulated weekly pattern
    """
    thinkplot.PrePlot(3)
    daily = dai...
To show what a strong weekly cycle would look like, we have the option of adding a price increase of 1-2 dollars on Fridays and Saturdays.
def AddWeeklySeasonality(daily):
    """Adds a weekly pattern.

    daily: DataFrame of daily prices

    returns: new DataFrame of daily prices
    """
    fri_or_sat = (daily.index.dayofweek == 4) | (daily.index.dayofweek == 5)
    fake = daily.copy()
    fake.ppg.loc[fri_or_sat] += np.random.uniform(0, 2, fri_or_sat.sum...
Here's what the real ACFs look like. The gray regions indicate the levels we expect by chance.
axis = [0, 41, -0.2, 0.2]

PlotAutoCorrelation(dailies, add_weekly=False)
thinkplot.Config(axis=axis, loc='lower right',
                 ylabel='correlation', xlabel='lag (day)')
Here's what it would look like if there were a weekly cycle.
PlotAutoCorrelation(dailies, add_weekly=True)
thinkplot.Config(axis=axis, loc='lower right', xlabel='lag (days)')
Prediction

The simplest way to generate predictions is to use statsmodels to fit a model to the data, then use the predict method from the results.
def GenerateSimplePrediction(results, years):
    """Generates a simple prediction.

    results: results object
    years: sequence of times (in years) to make predictions for

    returns: sequence of predicted values
    """
    n = len(years)
    inter = np.ones(n)
    d = dict(Intercept=inter, years=years, years2=...
Here's what the prediction looks like for the high quality category, using the linear model.
name = 'high'
daily = dailies[name]

_, results = RunLinearModel(daily)
years = np.linspace(0, 5, 101)
PlotSimplePrediction(results, years)
To visualize predictions, I show a darker region that quantifies modeling uncertainty and a lighter region that quantifies predictive uncertainty.
def PlotPredictions(daily, years, iters=101, percent=90, func=RunLinearModel):
    """Plots predictions.

    daily: DataFrame of daily prices
    years: sequence of times (in years) to make predictions for
    iters: number of simulations
    percent: what percentile range to show
    func: function that fits a model ...
Here are the results for the high quality category.
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
                 xlabel='Years',
                 xlim=xlim,
                 ylabel='Price per gram ($)')
But there is one more source of uncertainty: how much past data should we use to build the model? The following function generates a sequence of models based on different amounts of past data.
def SimulateIntervals(daily, iters=101, func=RunLinearModel):
    """Run simulations based on different subsets of the data.

    daily: DataFrame of daily prices
    iters: number of simulations
    func: function that fits a model to the data

    returns: list of result objects
    """
    result_seq = []
    starts...
And this function plots the results.
def PlotIntervals(daily, years, iters=101, percent=90, func=RunLinearModel):
    """Plots predictions based on different intervals.

    daily: DataFrame of daily prices
    years: sequence of times (in years) to make predictions for
    iters: number of simulations
    percent: what percentile range to show
    func: ...
Here's what the high quality category looks like if we take into account uncertainty about how much past data to use.
name = 'high'
daily = dailies[name]

thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
PlotIntervals(daily, years)
PlotPredictions(daily, years)
xlim = years[0]-0.1, years[-1]+0.1
thinkplot.Config(title='Predictions',
                 xlabel='Years',
                 xlim=xlim,
                 ylabel='P...
Exercises

Exercise: The linear model I used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. We can add flexibility to the model by adding a quadratic term, as we did in Section 11.3.

Use a quadratic model to fit the time series of daily...
# Solution

def RunQuadraticModel(daily):
    """Runs a quadratic model of prices versus years.

    daily: DataFrame of daily prices

    returns: model, results
    """
    daily['years2'] = daily.years**2
    model = smf.ols('ppg ~ years + years2', data=daily)
    results = model.fit()
    return model, results

# Solu...
Exercise: Write a definition for a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation. Use this class to test whether the seri...
# Solution

class SerialCorrelationTest(thinkstats2.HypothesisTest):
    """Tests serial correlations by permutation."""

    def TestStatistic(self, data):
        """Computes the test statistic.

        data: tuple of xs and ys
        """
        series, lag = data
        test_stat = abs(SerialCorr(series, lag))
        ...
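The class above depends on thinkstats2.HypothesisTest, which isn't shown here. A self-contained sketch of the same permutation idea: shuffling the series destroys any serial structure, so the p-value is the fraction of shuffles whose |correlation| reaches the observed one.

```python
import numpy as np

def serial_corr(series, lag=1):
    """Correlation between the series and itself shifted by `lag`."""
    return np.corrcoef(series[lag:], series[:-lag])[0, 1]

def serial_corr_pvalue(series, lag=1, iters=1001, seed=0):
    """Permutation test: p-value of the observed serial correlation."""
    rng = np.random.default_rng(seed)
    actual = abs(serial_corr(series, lag))
    permuted = np.array(series, dtype=float)
    hits = 0
    for _ in range(iters):
        rng.shuffle(permuted)  # shuffling breaks serial dependence
        if abs(serial_corr(permuted, lag)) >= actual:
            hits += 1
    return hits / iters

# a random walk has strong lag-1 correlation, so its p-value should be near zero
walk = np.cumsum(np.random.default_rng(1).standard_normal(300))
p_value = serial_corr_pvalue(walk, lag=1, iters=201)
```

This mirrors the HypothesisTest pattern: TestStatistic corresponds to the `abs(serial_corr(...))` call, and RunModel to the shuffle.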
Worked example: There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:

1. Compute the EWMA of the time series and use the last point as an intercept, inter.
2. Compute the EWMA of differences between successive elements in the time series and use the last poin...
name = 'high'
daily = dailies[name]

filled = FillMissing(daily)
diffs = filled.ppg.diff()

thinkplot.plot(diffs)
plt.xticks(rotation=30)
thinkplot.Config(ylabel='Daily change in price per gram ($)')

filled['slope'] = pd.ewma(diffs, span=365)
thinkplot.plot(filled.slope[-365:])
plt.xticks(rotation=30)
thinkplot.Config...
Plotting with parameters

Write a plot_sine1(a, b) function that plots $\sin(ax+b)$ over the interval $[0, 4\pi]$. Customize your visualization to make it effective and beautiful. Customize the box, grid, spines and ticks to match the requirements of this data. Use enough points along the x-axis to get a smooth plot. For ...
def plot_sine1(a, b):
    x = np.arange(0, 4.01 * np.pi, 0.01 * np.pi)
    plt.plot(x, np.sin(a * x + b))
    plt.xlim(0, 4 * np.pi)
    plt.xticks([0, np.pi, 2 * np.pi, 3 * np.pi, 4 * np.pi],
               ['0', r'$\pi$', r'$2\pi$', r'$3\pi$', r'$4\pi$'])

plot_sine1(5, 3.4)
assignments/assignment05/InteractEx02.ipynb
geoneill12/phys202-2015-work
mit
Then use interact to create a user interface for exploring your function: a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$. b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
interact(plot_sine1, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1))

assert True  # leave this for grading the plot_sine1 exercise
In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:

- dashed red: 'r--'
- blue circles: 'bo'
- dotted black: 'k.'

Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue ...
def plot_sine2(a, b, style='b.'):
    # style defaults to a blue dotted line, as the exercise requires
    x = np.arange(0, 4.01 * np.pi, 0.01 * np.pi)
    plt.plot(x, np.sin(a * x + b), style)
    plt.xlim(0, 4 * np.pi)
    plt.xticks([0, np.pi, 2 * np.pi, 3 * np.pi, 4 * np.pi],
               ['0', r'$\pi$', r'$2\pi$', r'$3\pi$', r'$4\pi$'])

plot_sine2(4.0, -1.0, 'r--')
Use interact to create a UI for plot_sine2. Use a slider for a and b as above. Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
interact(plot_sine2, a=(0.0, 5.0, 0.1), b=(-5.0, 5.0, 0.1),
         style=('b.', 'ko', 'r^'))

assert True  # leave this for grading the plot_sine2 exercise
Chinmai Raman

Homework 3

A.4 Solving a system of difference equations

Computes the development of a loan over time. The function below calculates the amount paid per month (the first array) and the amount left to be paid (the second array) at each month of the year, for a principal of $10,000 to be paid over 1 year at an...
p1.loan(6, 10000, 12)
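p1.loan is defined in the homework module and isn't shown here; what follows is a sketch (with hypothetical names) of the standard difference-equation scheme such a function typically implements: each month you pay the interest accrued on the balance plus a fixed 1/N share of the principal.

```python
def loan(p, x0, N):
    """Loan development by difference equations.

    p: annual interest rate in percent, x0: principal, N: number of months.
    Returns (paid, left): amount paid each month, balance after each month.
    """
    rate = p / (12 * 100.0)  # monthly interest rate as a fraction
    paid = [0.0] * (N + 1)
    left = [0.0] * (N + 1)
    left[0] = x0
    for n in range(1, N + 1):
        paid[n] = rate * left[n - 1] + x0 / N            # interest + principal share
        left[n] = left[n - 1] + rate * left[n - 1] - paid[n]
    return paid, left

paid, left = loan(6, 10000, 12)
```

With these equations the balance drops by exactly x0/N per month, so it reaches zero after the final payment.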
HW3Notebook.ipynb
chapman-phys227-2016s/hw-3-ChinmaiRaman
mit
A.11 Testing different methods of root finding

$f(x) = \sin(x)$
p2.graph(p2.f1, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f1, p2.f1prime, -4)
p2.bisect(p2.f1, -4, -2)
p2.secant(p2.f1, -4.5, -3.5)
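p2.Newton and p2.bisect live in the homework module; here are minimal sketches of the two methods (hypothetical, simplified signatures) to show what the calls above are doing.

```python
import numpy as np

def newton(f, fprime, x0, tol=1e-10, maxit=100):
    """Newton's method: follow the tangent line to the next estimate."""
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

def bisect(f, a, b, tol=1e-10):
    """Bisection: keep the half-interval where the sign changes."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# both find the root of sin(x) at -pi from the starting values used above
root_newton = newton(np.sin, np.cos, -4)
root_bisect = bisect(np.sin, -4, -2)
```

Newton converges quadratically near a simple root but needs the derivative and a good start; bisection is slower but guaranteed once the interval brackets a sign change.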
$f(x) = x - \sin(x)$
p2.graph(p2.f2, 100, -np.pi, np.pi)
p2.Newton(p2.f2, p2.f2prime, 1)
p2.bisect(p2.f2, -1, 1)
p2.secant(p2.f2, -2, -1)
$f(x) = x^5 - \sin x$
p2.graph(p2.f3, 100, -np.pi / 2, np.pi / 2)
p2.Newton(p2.f3, p2.f3prime, -1)
p2.bisect(p2.f3, -1, 1)
p2.secant(p2.f3, -1, -0.5)
$f(x) = x^4 \sin x$
p2.graph(p2.f4, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f4, p2.f4prime, -4)
p2.bisect(p2.f4, -4, -2)
p2.secant(p2.f4, -5, -4)
$f(x) = x^4 - 16$
p2.graph(p2.f5, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f5, p2.f5prime, -3)
p2.bisect(p2.f5, -3, -1)
p2.secant(p2.f5, -4, -3)
$f(x) = x^{10} - 1$
p2.graph(p2.f6, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f6, p2.f6prime, 2)
p2.bisect(p2.f6, 0, 2)
p2.secant(p2.f6, 3, 2)
$f(x) = \tanh(x) - x^{10}$
p2.graph(p2.f7, 100, -2 * np.pi, 2 * np.pi)
p2.Newton(p2.f7, p2.f7prime, 1)
p2.bisect(p2.f7, 0.5, 2)
p2.secant(p2.f7, 3, 2)
A.13 Computing the arc length of a curve
x = sp.Symbol('x')
h1 = -4 * x**2
h2 = sp.exp(h1)
h3 = 1 / np.sqrt(2 * np.pi) * h2
length = p3.arclength(h3, -2, 2, 10)
print(length)
The arc length of the function f(x) from -2 to 2 is approximately 4.18.
fig = plt.figure(1)
x = np.linspace(-2, 2, 100)
y = 1 / np.sqrt(2 * np.pi) * np.exp(-4 * x**2)
x1 = length[0]
y1 = length[1]
plt.plot(x, y, 'r-', x1, y1, 'b-')
plt.xlabel('x')
plt.ylabel('y')
plt.title('1/sqrt(2pi) * e^(-4t^2)')
plt.show()
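p3.arclength is also in the homework module; a sketch of the underlying idea (hypothetical signature, numeric f rather than a sympy expression): approximate the curve by a polyline on n+1 points and sum the segment lengths.

```python
import numpy as np

def arclength(f, a, b, n):
    """Approximates the arc length of f on [a, b] with an n-segment polyline."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    # each segment contributes sqrt(dx^2 + dy^2)
    return np.sum(np.sqrt(np.diff(x) ** 2 + np.diff(y) ** 2))

# sanity check: a straight line from (0,0) to (1,1) has length sqrt(2)
line_len = arclength(lambda t: t, 0.0, 1.0, 10)

# the Gaussian-shaped curve above, with a fine polyline
curve_len = arclength(lambda t: np.exp(-4 * t ** 2) / np.sqrt(2 * np.pi),
                      -2.0, 2.0, 2000)
```

With enough segments the polyline estimate for the curve above lands near the 4.18 quoted in the text.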
A.14 Finding difference equations for computing sin(x)

The accuracy of a Taylor polynomial improves as x decreases (moves closer to zero).
x = [-3 * np.pi / 4, -np.pi / 4, np.pi / 4, 3 * np.pi / 4]
N = [5, 5, 5, 5]
Sn = []
for n in range(4):
    Sn.append(p4.sin_Taylor(x[n], N[n])[0])
print(Sn)
The accuracy of a Taylor polynomial also improves as n increases.
x = [np.pi / 4, np.pi / 4, np.pi / 4, np.pi / 4]
N = [1, 3, 5, 10]
Sn = []
for n in range(4):
    Sn.append(p4.sin_Taylor(x[n], N[n])[0])
print(Sn)
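p4.sin_Taylor is defined in the homework module; a sketch (hypothetical name and signature) of the difference-equation idea the section title refers to: each Taylor term of sin(x) follows from the previous one by a simple recurrence, so no factorials or powers need to be recomputed.

```python
import numpy as np

def sin_taylor(x, N):
    """Sums N+1 terms of the Taylor series of sin(x) using the recurrence
    a_{j+1} = -a_j * x**2 / ((2j + 2) * (2j + 3)), starting from a_0 = x."""
    term = x
    total = term
    for j in range(N):
        term *= -x ** 2 / ((2 * j + 2) * (2 * j + 3))
        total += term
    return total

# ten terms at x = pi/4 agree with np.sin to machine precision;
# two terms (N=1) are only good to about 1e-3
approx = sin_taylor(np.pi / 4, 10)
rough = sin_taylor(np.pi / 4, 1)
```

The recurrence comes from the ratio of consecutive terms $(-1)^j x^{2j+1}/(2j+1)!$ of the sine series.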
Motivating Random Forests: Decision Trees

Random forests are an example of an ensemble learner built on decision trees. For this reason we'll start by discussing decision trees themselves. Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero in ...
import fig_code
fig_code.plot_example_decision_tree()
present/bi2/2020/ubb/az_en_jupyter2_mappam/sklearn_tutorial/03.2-Regression-Forests.ipynb
csaladenes/csaladenes.github.io
mit
The binary splitting makes this extremely efficient. As always, though, the trick is to ask the right questions. This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information. Creating a ...
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=4,
                  random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
We have some convenience functions in the repository that help
from fig_code import visualize_tree, plot_tree_interactive
Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:
plot_tree_interactive(X, y);
Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class. The result is a very fast non-parametric classification, and can be extremely useful in practice.

Question: Do you see any problems with this?

Decision Trees and over-fitting

One issue with decision t...
from sklearn.tree import DecisionTreeClassifier

clf = DecisionTreeClassifier()

plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
The details of the classifications are completely different! That is an indication of over-fitting: when you predict the value for a new point, the result is more reflective of the noise in the model rather than the signal.

Ensembles of Estimators: Random Forests

One possible way to address over-fitting is to use an En...
def fit_randomized_tree(random_state=0):
    X, y = make_blobs(n_samples=300, centers=4,
                      random_state=0, cluster_std=2.0)
    clf = DecisionTreeClassifier(max_depth=15)

    rng = np.random.RandomState(random_state)
    i = np.arange(len(y))
    rng.shuffle(i)
    visualize_tree(clf, X[i[:250...
See how the details of the model change as a function of the sample, while the larger characteristics remain the same! The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!

(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the scikit-learn documentation)

Quick Exam...
from sklearn.ensemble import RandomForestRegressor

x = 10 * np.random.rand(100)

def model(x, sigma=0.3):
    fast_oscillation = np.sin(5 * x)
    slow_oscillation = np.sin(0.5 * x)
    noise = sigma * np.random.randn(len(x))
    return slow_oscillation + fast_oscillation + noise

y = model(x)
plt.errorbar(x, y, 0.3,...
As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!

Example: Random Forest for Classifying Digits

We previously saw the hand-written digits data. Let's use that here to test the efficacy of the SVM and Random Forest cla...
from sklearn.datasets import load_digits

digits = load_digits()
digits.keys()

X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
To remind us what we're looking at, we'll visualize the first few data points:
# set up the figure
fig = plt.figure(figsize=(6, 6))  # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1,
                    hspace=0.05, wspace=0.05)

# plot the digits: each image is 8x8 pixels
for i in range(64):
    ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
    ax.imshow(digits.images[i], cmap=...
We can quickly classify the digits using a decision tree as follows:
from sklearn.model_selection import train_test_split
from sklearn import metrics

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
We can check the accuracy of this classifier:
metrics.accuracy_score(ypred, ytest)
and for good measure, plot the confusion matrix:
plt.imshow(metrics.confusion_matrix(ytest, ypred),
           interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
Dataset

I used the training dataset provided by Udacity. I used all 3 camera positions with a correction of 0.25, i.e. adding 0.25 to the steering angle for the left-positioned camera and subtracting 0.25 for the right-positioned camera. I could have produced more data myself, but due to time constraints I only used the Udacity datas...
'''Read data'''
import csv

image_path = '../../../data'  # row in log path is IMG/<name>
driving_log_path = '../../../data/driving_log.csv'

rows = []
with open(driving_log_path) as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        rows.append(row)
Behavior Cloning/notebook/Behavior Cloning.ipynb
tranlyvu/autonomous-vehicle-projects
apache-2.0
Model Architecture and Training Strategy

First Model

In my first attempt, I used the 9-layer network from "End to End Learning for Self-Driving Cars" by NVIDIA.

Pre-processing pipeline:

- Data augmentation: flipping the image horizontally (from function append_data)
- Cropping the image
- Normalization and mean centering

NVIDIA or...
def append_data(col, images, measurement, steering_measurements):
    current_path = image_path + '/' + col.strip()
    image = cv2.imread(current_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    images.append(np.asarray(image))
    steering_measurements.append(measurement)
    # random flipping
    ...
Model Architecture Definition
model = Sequential()
# The cameras in the simulator capture 160 pixel by 320 pixel images;
# after cropping, it is 66x200
model.add(Cropping2D(cropping=((74, 20), (60, 60)), input_shape=(160, 320, 3)))
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(66, 200, 3)))
model.add(Convolution2D(24, 5, 5, subsample=(2, 2)...
Model Training

For every round of training and parameter tuning, the model was tested by running it through the simulator and ensuring that the vehicle could stay on the track. At an epoch count of 10, the training and validation loss both went down fast, and I think they would have converged if I had increased the number of epochs (gra...
print('Training model')

samples = list(zip(X_total, y_total))
train_samples, validation_samples = train_test_split(samples, test_size=0.2)

train_generator = generator(train_samples, batch_size=32)
validation_generator = generator(validation_samples, batch_size=32)

history_object = model.fit...
Note: Restart your kernel to use updated packages. Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage. Import necessary libraries.
from google.cloud import bigquery
import pandas as pd
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import tensorflow as tf

print(tf.__version__)
courses/machine_learning/deepdive2/end_to_end_ml/solutions/keras_dnn_babyweight.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Set environment variables so that we can use them throughout the notebook.
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT

# In a separate Python cell (a %%bash cell cannot set Python variables):
PROJECT = "cloud-training-demos"  # Replace with your PROJECT
Now that we know our splitting values produce a good global split on our data, here's a way to get a well-distributed portion of the data such that the train, eval, and test sets do not overlap, while taking a subsample of our global splits.
# every_n allows us to subsample from each of the hash values
# This helps us get approximately the record counts we want
every_n = 1000

splitting_string = "ABS(MOD(hash_values, {0} * {1}))".format(every_n, modulo_divisor)

def create_data_split_sample_df(query_string, splitting_string, lo, up):
    """Creates a dataf...
It is always crucial to clean raw data before using it in machine learning, so we have a preprocessing step. We'll define a preprocess function below. Note that the mother's age is an input to our model, so users will have to provide it; otherwise, our service won't work. The features we use for our model wer...
def preprocess(df):
    """Preprocess pandas dataframe for augmented babyweight data.

    Args:
        df: Dataframe containing raw babyweight data.
    Returns:
        Pandas dataframe containing preprocessed raw babyweight data as
        well as simulated no ultrasound data masking some of the original d...
Write to .csv files

In the final versions, we want to read from files, not Pandas dataframes. So, we write the Pandas dataframes out as CSV files. Using CSV files gives us the advantage of shuffling during read. This is important for distributed training because some workers might be slower than others, and shuffling t...
# Define columns
columns = ["weight_pounds",
           "is_male",
           "mother_age",
           "plurality",
           "gestation_weeks"]

# Write out CSV files
train_df.to_csv(
    path_or_buf="train.csv", columns=columns, header=False, index=False)
eval_df.to_csv(
    path_or_buf="eval.csv", columns=columns, ...
Create Keras model

Set CSV Columns, label column, and column defaults.

Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.

- CSV_COLUMNS is going to be our header name of our column. Make sure that they are in the same order as in the CSV files
- ...
# Determine CSV, label, and key columns
# Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds",
               "is_male",
               "mother_age",
               "plurality",
               "gestation_weeks"]

# Add string name for label column
LABEL_COLUMN = "weight_pounds"...
Make dataset of features and label from CSV files.

Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use tf.data.experimental.make_csv_dataset. This will create a CSV dataset object. However, we will need to divide the co...
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label

# ...
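Since make_csv_dataset yields a dictionary per example, features_and_labels just pops the label key out. The same behavior is easy to see on a plain Python dict (illustrative values only):

```python
LABEL_COLUMN = "weight_pounds"

def features_and_labels(row_data):
    """Pops the label out of the feature dictionary; the rest are features."""
    label = row_data.pop(LABEL_COLUMN)
    return row_data, label

# a hypothetical row with the five CSV columns
row = {"weight_pounds": 7.5, "is_male": "True", "mother_age": 29.0,
       "plurality": "Single(1)", "gestation_weeks": 38.0}
features, label = features_and_labels(row)
```

Note that `dict.pop` mutates the dictionary in place, which is fine here because each row dictionary is consumed once by the dataset pipeline.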
Create input layers for raw features.

We'll need to get the data to read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers (tf.Keras.layers.Input) by defining:

- shape: A shape tuple (integers), not including the batch size. For instance, shap...
# TODO 1
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.Keras.layers.Input` layers for each feature.
    """
    inputs = {
        colname: tf.keras.layers.Input(
            name=colname, shape=(), dtype="float32")
        for colname in ...
Create feature columns for inputs. Next, define the feature columns. mother_age and gestation_weeks should be numeric. The others, is_male and plurality, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
# TODO 2
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.

    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Indicator column of categorical feature.
    """
    cat_column = tf.feature...
Create DNN dense hidden layers and output layer.

So we've figured out how to get our inputs ready for machine learning, but now we need to connect them to our desired output. Our model architecture is what links the two together. Let's create some hidden dense layers beginning with our inputs and end with a dense output...
# TODO 3
def get_model_outputs(inputs):
    """Creates model architecture and returns outputs.

    Args:
        inputs: Dense tensor used as inputs to model.
    Returns:
        Dense tensor output from the model.
    """
    # Create two hidden layers of [64, 32] just like the BQML DNN
    h1 = tf.keras.layers.D...
Create custom evaluation metric.

We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own by using ...
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.

    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
    return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
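The same metric in plain NumPy, handy for sanity-checking the TensorFlow version without building tensors (a sketch; the Keras metric receives batches of tensors in practice):

```python
import numpy as np

def rmse_np(y_true, y_pred):
    """Root mean squared error of two arrays."""
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

# errors of 0, 0, and 2 -> sqrt(4/3)
val = rmse_np(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 5.0]))
```

Because Keras computes metrics per batch and averages them, the streamed RMSE over an epoch can differ slightly from the RMSE over the whole dataset.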
Build DNN model tying all of the pieces together.

Excellent! We've assembled all of the pieces; now we just need to tie them all together into a Keras Model. This is a simple feedforward model with no branching, side inputs, etc., so we could have used Keras' Sequential Model API, but just for fun we're going to use Kera...
# TODO 4 def build_dnn_model(): """Builds simple DNN using Keras Functional API. Returns: `tf.keras.models.Model` object. """ # Create input layer inputs = create_input_layers() # Create feature columns feature_columns = create_feature_columns() # The constructor for DenseFeat...
We can visualize the DNN using the Keras plot_model utility.
tf.keras.utils.plot_model( model=model, to_file="dnn_model.png", show_shapes=False, rankdir="LR")
Run and evaluate model Train and evaluate. We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now train the model parameters, periodically running an evaluation to track how well we are doing on held-out data as training goes on. We'll need to l...
# TODO 5 TRAIN_BATCH_SIZE = 32 NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around NUM_EVALS = 5 # how many times to evaluate # Enough to get a reasonable sample, but not so much that it slows down NUM_EVAL_EXAMPLES = 10000 trainds = load_dataset( pattern="train*", batch_size=TRAIN_B...
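The pacing arithmetic behind these constants is worth spelling out: since the training dataset repeats indefinitely, `steps_per_epoch` is typically chosen so that `NUM_TRAIN_EXAMPLES` is spread evenly across `NUM_EVALS` evaluation checkpoints. A sketch of that calculation, assuming the usual `train_examples // (batch_size * num_evals)` recipe (the truncated cell may differ):

```python
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5  # dataset repeats, it'll wrap around
NUM_EVALS = 5                   # how many evaluation checkpoints

# Each "epoch" is one slice of the total training budget, so that
# Keras runs an evaluation NUM_EVALS times over the full budget.
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
print(steps_per_epoch)  # 312, i.e. 312 * 32 = 9984 examples per "epoch"
```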
Visualize loss curve
# Plot import matplotlib.pyplot as plt nrows = 1 ncols = 2 fig = plt.figure(figsize=(10, 5)) for idx, key in enumerate(["loss", "rmse"]): ax = fig.add_subplot(nrows, ncols, idx+1) plt.plot(history.history[key]) plt.plot(history.history["val_{}".format(key)]) plt.title("model {}".format(key)) plt.yl...
Save the model
OUTPUT_DIR = "babyweight_trained" shutil.rmtree(OUTPUT_DIR, ignore_errors=True) EXPORT_PATH = os.path.join( OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S")) tf.saved_model.save( obj=model, export_dir=EXPORT_PATH) # with default serving function print("Exported trained model to {}".format(EXPORT_PA...
Create evoked objects in delayed SSP mode This script shows how to apply SSP projectors delayed, that is, at the evoked stage. This is particularly useful to support decisions related to the trade-off between denoising and preserving signal. We first will extract Epochs and create evoked objects with the required setti...
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr> # Denis Engemann <denis.engemann@gmail.com> # # License: BSD (3-clause) import matplotlib.pyplot as plt import mne from mne import io from mne.datasets import sample print(__doc__) data_path = sample.data_path()
0.14/_downloads/plot_evoked_delayed_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Set parameters
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif' event_id, tmin, tmax = 1, -0.2, 0.5 # Setup for reading the raw data raw = io.Raw(raw_fname, preload=True) raw.filter(1, 40, method='iir') events = mne.read_events(event_fna...
Interactively select / deselect the SSP projection vectors
# Here we expose the details of how to apply SSPs reversibly title = 'Incremental SSP application' # let's first move the proj list to another location projs, evoked.info['projs'] = evoked.info['projs'], [] fig, axes = plt.subplots(2, 2) # create 4 subplots for our four vectors # As the bulk of projectors was extrac...
cross_val_score uses the KFold or StratifiedKFold strategies by default
# define cross_val func def xVal_score(clf, X, y, K): # creating K folds using KFold cv = KFold(n_splits=K) # Can use shuffle as well # cv = ShuffleSplit(n_splits=3, test_size=0.3, random_state=0) # doing cross validation scores = cross_val_score(clf, X, y, cv=cv) print(scores) ...
Sklearn_MLPython/cross_validation-0.18.ipynb
atulsingh0/MachineLearning
gpl-3.0
Cross Validation Iterator K-Fold - KFold divides all the samples into k groups of samples, called folds (if k = n, this is equivalent to the Leave One Out strategy), of equal size where possible. The prediction function is learned using k - 1 folds, and the fold left out is used for testing.
X = [1,2,3,4,5] kf = KFold(n_splits=2) print(kf) for i in kf.split(X): print(i)
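To see what KFold does under the hood, here is a plain-Python sketch of the same index pattern (the `kfold_indices` helper is illustrative, not part of scikit-learn); like KFold, it gives the first n % k folds one extra sample:

```python
def kfold_indices(n_samples, n_splits):
    """Yields (train, test) index lists; each fold serves as the test set once."""
    fold_sizes = [n_samples // n_splits + (1 if i < n_samples % n_splits else 0)
                  for i in range(n_splits)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i not in test]
        yield train, test
        start += size

for train, test in kfold_indices(5, 2):
    print(train, test)
# [3, 4] [0, 1, 2]
# [0, 1, 2] [3, 4]
```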
Leave One Out (LOO) - LeaveOneOut (or LOO) is a simple cross-validation. Each learning set is created by taking all the samples except one, the test set being the sample left out. Thus, for n samples, we have n different training sets and n different test sets. This cross-validation procedure does not waste much data...
X = [1,2,3,4,5] loo = LeaveOneOut() print(loo) for i in loo.split(X): print(i)
Leave P Out (LPO) - LeavePOut is very similar to LeaveOneOut as it creates all the possible training/test sets by removing p samples from the complete set. For n samples, this produces {n \choose p} train-test pairs. Unlike LeaveOneOut and KFold, the test sets will overlap for p > 1.
X = [1,2,3,4,5] loo = LeavePOut(p=3) print(loo) for i in loo.split(X): print(i)
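We can confirm the {n \choose p} count directly: with n = 5 and p = 3, the loop above should yield C(5, 3) = 10 splits. A quick check using only the standard library:

```python
import math
from itertools import combinations

n, p = 5, 3
# Each test set is one of the p-sized subsets of the n samples,
# so the number of train/test pairs is the binomial coefficient.
num_splits = sum(1 for _ in combinations(range(n), p))
print(num_splits, math.comb(n, p))  # 10 10
```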
Random permutations cross-validation a.k.a. Shuffle & Split - The ShuffleSplit iterator will generate a user-defined number of independent train / test dataset splits. Samples are first shuffled and then split into a pair of train and test sets. It is possible to control the randomness for reproducibility of the resul...
X = [1,2,3,4,5] loo = ShuffleSplit(n_splits=3, test_size=0.25,random_state=0) print(loo) for i in loo.split(X): print(i)
Some classification problems can exhibit a large imbalance in the distribution of the target classes: for instance there could be several times more negative samples than positive samples. In such cases it is recommended to use stratified sampling as implemented in StratifiedKFold and StratifiedShuffleSplit to ensure t...
X = np.ones(10) y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1] skf = StratifiedKFold(n_splits=3) for i in skf.split(X, y): print(i)
Stratified Shuffle Split StratifiedShuffleSplit is a variation of ShuffleSplit, which returns stratified splits, i.e. splits created by preserving the same percentage for each target class as in the complete set.
X = np.ones(10) y = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1] skf = StratifiedShuffleSplit(n_splits=3, test_size=0.25, random_state=33) for i in skf.split(X, y): print(i)
Cross-validation iterators for grouped data The i.i.d. assumption is broken if the underlying generative process yields groups of dependent samples. Such a grouping of data is domain-specific. An example would be when there is medical data collected from multiple patients, with multiple samples taken from each patient. ...
X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10] y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"] groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3] gkf = GroupKFold(n_splits=3) for train, test in gkf.split(X, y, groups=groups): print("%s %s" % (train, test))
LeaveOneGroupOut LeaveOneGroupOut is a cross-validation scheme which holds out the samples according to a third-party provided array of integer groups. This group information can be used to encode arbitrary domain specific pre-defined cross-validation folds. Each training set is thus constituted by all the samples exc...
X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10] y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"] groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3] gkf = LeaveOneGroupOut() for train, test in gkf.split(X, y, groups=groups): print("%s %s" % (train, test))
Leave P Groups Out LeavePGroupsOut is similar to LeaveOneGroupOut, but removes samples related to P groups for each training/test set.
X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10] y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"] groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3] gkf = LeavePGroupsOut(n_groups=2) for train, test in gkf.split(X, y, groups=groups): print("%s %s" % (train, test))
Group Shuffle Split The GroupShuffleSplit iterator behaves as a combination of ShuffleSplit and LeavePGroupsOut, and generates a sequence of randomized partitions in which a subset of groups are held out for each split.
X = [0.1, 0.2, 2.2, 2.4, 2.3, 4.55, 5.8, 8.8, 9, 10] y = ["a", "b", "b", "b", "c", "c", "c", "d", "d", "d"] groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 3] gkf = GroupShuffleSplit(n_splits=4, test_size=0.5, random_state=33) for train, test in gkf.split(X, y, groups=groups): print("%s %s" % (train, test))
Time Series Split TimeSeriesSplit is a variation of k-fold which returns the first k folds as the train set and the (k+1)th fold as the test set. Note that unlike standard cross-validation methods, successive training sets are supersets of those that come before them. Also, it adds all surplus data to the first training partitio...
X = np.array([[1, 2], [3, 4], [1, 2], [3, 4], [1, 2], [3, 4]]) y = np.array([1, 2, 3, 4, 5, 6]) tscv = TimeSeriesSplit(n_splits=3) print(tscv) for train, test in tscv.split(X): print("%s %s" % (train, test))
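The expanding-window behaviour can be sketched in plain Python (the `time_series_splits` helper below is illustrative; it only mimics the default TimeSeriesSplit index pattern):

```python
def time_series_splits(n_samples, n_splits):
    """Yields (train, test) index lists with an expanding train window.

    Each split's train set is a superset of the previous split's,
    and every test fold contains the samples immediately after it.
    """
    test_size = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        cut = n_samples - (n_splits - i + 1) * test_size
        yield list(range(cut)), list(range(cut, cut + test_size))

for train, test in time_series_splits(6, 3):
    print(train, test)
# [0, 1, 2] [3]
# [0, 1, 2, 3] [4]
# [0, 1, 2, 3, 4] [5]
```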
"Equalize": Make rhythms the same length
Use LCM to "equalize" rhythms so that they're of equal length, e.g.
a = [1,0,0,1]
b = [1,1,0]
become
equalized_a = [1,-,-,0,-,-,0,-,-,1,-,-]
equalized_b = [1,-,-,-,1,-,-,-,0,-,-,-]
# "Equalize" (i.e. scale rhythms so they're of equal length) def equalize_rhythm_subdivisions(original_rhythm, target_rhythm, delimiter="-"): original_length = len(original_rhythm) target_length = len(target_rhythm) lcm = (original_length*target_length) / fibonaccistretch.euclid(original_length, target_leng...
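Since the cell above is truncated, here is a self-contained sketch of the same idea: stretch each rhythm by LCM(len_a, len_b) / len so both land on a common grid (the `stretch` and `equalize` names are illustrative, not the notebook's):

```python
from math import gcd

def stretch(rhythm, factor, delimiter="-"):
    """Expands each step into `factor` slots, padding with the delimiter."""
    out = []
    for step in rhythm:
        out.append(str(step))
        out.extend(delimiter * (factor - 1))
    return out

def equalize(a, b):
    """Stretches both rhythms to their common LCM length."""
    lcm = len(a) * len(b) // gcd(len(a), len(b))
    return stretch(a, lcm // len(a)), stretch(b, lcm // len(b))

ea, eb = equalize([1, 0, 0, 1], [1, 1, 0])
print("".join(ea))  # 1--0--0--1--
print("".join(eb))  # 1---1---0---
```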
nbs/format_rhythms_for_rtstretch.ipynb
usdivad/fibonaccistretch
mit
Get pulse indices so we can see how the equalized original and target relate. In particular, our goal is to create a relationship such that the original pulse indices always come first (so that they're bufferable in real-time)
def get_pulse_indices_for_rhythm(rhythm, pulse_symbols=[1]): pulse_symbols = [str(s) for s in pulse_symbols] rhythm = [str(x) for x in rhythm] pulse_indices = [i for i,symbol in enumerate(rhythm) if symbol in pulse_symbols] return pulse_indices equalized_original_pulse_indices = get_pulse_indices_for_r...
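The usage at the end of the cell above is cut off, so here is a minimal standalone check of how `get_pulse_indices_for_rhythm` (reproduced from the visible source) behaves:

```python
def get_pulse_indices_for_rhythm(rhythm, pulse_symbols=[1]):
    """Returns the indices of steps whose symbol counts as a pulse."""
    pulse_symbols = [str(s) for s in pulse_symbols]
    rhythm = [str(x) for x in rhythm]
    return [i for i, symbol in enumerate(rhythm) if symbol in pulse_symbols]

print(get_pulse_indices_for_rhythm([1, 0, 0, 1]))                # [0, 3]
# On an equalized rhythm, treating both 1s and 0s as pulses:
print(get_pulse_indices_for_rhythm(list("1--0--"), ["1", "0"]))  # [0, 3]
```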
For the original we'll actually use ALL the steps instead of just pulses though. So:
equalized_original_pulse_indices = get_pulse_indices_for_rhythm(equalized_original_rhythm, [1,0]) equalized_target_pulse_indices = get_pulse_indices_for_rhythm(equalized_target_rhythm, [1]) print(equalized_original_pulse_indices, equalized_target_pulse_indices)
Now we can check to see if all the original pulse indices come first (this is our goal):
for i in range(len(equalized_original_pulse_indices)): opi = equalized_original_pulse_indices[i] tpi = equalized_target_pulse_indices[i] if (opi > tpi): print("Oh no; original pulse at {} comes after target pulse at {} (diff={})".format(opi, tpi, opi-tpi))
Oh no... how do we fix this?? One solution is to just nudge them over, especially since they only differ by 1/104 to 2/104ths of a measure in this case. Another solution would be to use the same data from the original pulse if there's not a new pulse available. Hmmmm Or use as much of the original buffer as we can......
# Format original and target rhythms for real-time manipulation def format_rhythms(original_rhythm, target_rhythm): # Equalize rhythm lengths and get pulse indices eor, etr = equalize_rhythm_subdivisions(original_rhythm, target_rhythm) eopi = get_pulse_indices_for_rhythm(eor, pulse_symbols=[1,0]) etpi =...
Alright let's try this out:
# len(original) > len(target) formatted = format_rhythms([1,0,0,1,0,0,1,0], [1,0,1]) # len(original) < len(target) formatted = format_rhythms([1,0,0,1,0,0,1,0], [1,0,0,1,1,0,0,1,1,1,1]) # Trying [1,0,1,0] and [1,1] as originals, with the same target formatted = format_rhythms([1,0,1,0], [1,0,0,1,0,0,1,0,0,0]) print("...
To make things a bit clearer, maybe we'll try the abcd format for rtos()
# Rhythm to string # Method: "str", "alphabet" def rtos(rhythm, format_method="str", pulse_symbols=["1"]): pulse_symbols = [str(s) for s in pulse_symbols] if format_method == "str": return "".join(rhythm) elif format_method == "alphabet": alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ") ...
Exploring adjustment options Let's use this example to explore adjustment options:
print("Original rhythm: {}\nTarget rhythm: {}\n".format(original_rhythm, target_rhythm)) formatted = format_rhythms(original_rhythm, target_rhythm, format_method="alphabet")
In all the following cases only the target changes, not the original: 1. For every problematic pulse (e.g. C), just re-use the previous pulse Orig: A------------B------------C------------D------------E------------F------------G------------H------------ Trgt: A-------0-------B-------C-------0-------D-------0-------E----...
formatted = format_rhythms(original_rhythm, target_rhythm, format_method="alphabet") print("\n--------\n") formatted = format_rhythms([1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0], target_rhythm, format_method="alphabet")
Needs more work. 5. Method 4, but use 0s of target rhythm as pulses too Orig: A------------B------------C------------D------------E------------F------------G------------H------------I------------J------------K------------L------------M------------N------------O------------P------------ Trgt: A---------------0----------...
# Rhythm to string # Method: "str", "alphabet" def rtos(rhythm, format_method="str", pulse_symbols=["1"]): pulse_symbols = [str(s) for s in pulse_symbols] rhythm = [str(x) for x in rhythm] if format_method == "str": return "".join(rhythm) elif format_method == "alphabet": alphabet =...
That looks more like it! Let's try a few more:
formatted = format_rhythms([1,0,1,1], [1,0,1,0,1], format_method="alphabet") formatted = format_rhythms([1,0,1,1], [1,0,1,0,1,1], format_method="alphabet") formatted = format_rhythms([1,0,1,1,0,1], [1,0,1,1,0,1,0], format_method="alphabet") formatted = format_rhythms([1,0,1,1,0,1], [0], format_method="alphabet")
In the previous two examples, the fixed rhythm doesn't resemble the target rhythm at all... I guess this is because we're using the Bjorklund fix too, to fix pulses. But if we're going through all this trouble to "fix" rhythms, maybe we should just place constraints on what the user can do in the first place? I guess w...
# TODO: Try with method 7 vs. method 10