Residuals takes sequences xs and ys and estimated parameters inter and slope. It returns the differences between the actual values and the fitted line.
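The definition of Residuals is not shown in this cell; a minimal sketch consistent with the description above (assuming the fitted line is inter + slope * x):

```python
import numpy as np

def Residuals(xs, ys, inter, slope):
    """Return the differences between the actual ys and the fitted line."""
    xs = np.asarray(xs)
    ys = np.asarray(ys)
    return ys - (inter + slope * xs)

# With a perfect fit the residuals are all zero:
print(Residuals([1, 2, 3], [3, 5, 7], 1, 2))  # -> [0. 0. 0.]
```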
residuals = list(Residuals(regression_data["YearsExperience"], regression_data["Salary"], inter, slope))
regression_data["Residuals"] = residuals
bins = np.arange(0, 15, 2)
indices = np.digitize(regression_data.YearsExperience, bins)
groups = regression_data.groupby(indices)
for i, group in groups:
    print(i, len(g...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Ideally these lines should be flat, indicating that the residuals are random, and parallel, indicating that the variance of the residuals is the same for all age groups. In fact, the lines are close to parallel, so that’s good; but they have some curvature, indicating that the relationship is nonlinear. Nevertheless, t...
def CoefDetermination(ys, res):
    return 1 - pd.Series(res).var() / pd.Series(ys).var()
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Var(res) is the MSE of your guesses using the model, Var(ys) is the MSE without it. So their ratio is the fraction of MSE that remains if you use the model, and R2 is the fraction of MSE the model eliminates. There is a simple relationship between the coefficient of determination and Pearson’s coefficient of correlatio...
# IQ scores are normalized with Std(ys) = 15, so:
var_ys = 15**2
rho = 0.72
r2 = rho**2
var_res = (1 - r2) * var_ys
std_res = math.sqrt(var_res)
print(std_res)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
So using SAT score to predict IQ reduces RMSE from 15 points to 10.4 points. A correlation of 0.72 yields a reduction in RMSE of only 31%. If you see a correlation that looks impressive, remember that R2 is a better indicator of reduction in MSE, and reduction in RMSE is a better indicator of predictive power. Testing ...
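As a quick arithmetic check of those numbers:

```python
import math

rho = 0.72
r2 = rho**2                    # fraction of variance explained
std_ratio = math.sqrt(1 - r2)  # residual RMSE as a fraction of original RMSE

print(round(15 * std_ratio, 1))       # residual RMSE in IQ points -> 10.4
print(round((1 - std_ratio) * 100))   # percent reduction in RMSE -> 31
```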
Corr(regression_data["YearsExperience"], regression_data["Salary"])

def TestStatistic(data):
    exp, sal = data
    _, slope = LeastSquares(exp, sal)
    return slope

def MakeModel(data):
    _, sals = data
    ybar = sals.mean()
    res = sals - ybar
    return ybar, res

def RunModel(data):
    exp, _ = data
    sal...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The data are represented as sequences of exp and sals. The test statistic is the slope estimated by LeastSquares. The model of the null hypothesis is represented by the mean sals of all employees and the deviations from the mean. To generate simulated data, we permute the deviations and add them to the mean.
data = regression_data.YearsExperience.values, regression_data.Salary.values
actual_diff = TestStatistic(data)

def calculate_pvalue(data, iters=1000):
    test_stats = [TestStatistic(RunModel(data)) for _ in range(iters)]
    count = sum(1 for x in test_stats if x >= actual_diff)
    return...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The p-value is less than 0.001, so although the estimated slope is small, it is unlikely to be due to chance. Weighted Resampling As an example, if you survey 100,000 people in a country of 300 million, each respondent represents 3,000 people. If you oversample one group by a factor of 2, each person in the oversampled...
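The weighting idea above can be sketched with np.random.choice; the DataFrame and its weights column here are illustrative, not from the original survey data:

```python
import numpy as np
import pandas as pd

def ResampleRowsWeighted(df, column='weights'):
    """Resample rows of df with probability proportional to the given
    weights column (column name is illustrative)."""
    weights = df[column]
    probs = weights / weights.sum()
    indices = np.random.choice(df.index, len(df), replace=True, p=probs)
    return df.loc[indices]

# A row with weight 98 represents far more people, so it dominates the sample:
df = pd.DataFrame({'x': [1, 2, 3], 'weights': [1, 1, 98]})
sample = ResampleRowsWeighted(df)
print(len(sample))  # -> 3
```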
import statsmodels.formula.api as smf

formula = 'Salary ~ YearsExperience'
model = smf.ols(formula, data=regression_data)
results = model.fit()
results
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
statsmodels provides two interfaces (APIs); the “formula” API uses strings to identify the dependent and explanatory variables. It uses a syntax called patsy; in this example, the ~ operator separates the dependent variable on the left from the explanatory variables on the right. smf.ols takes the formula string and th...
inter = results.params['Intercept']
slope = results.params['YearsExperience']
slope_pvalue = results.pvalues['YearsExperience']
print(slope_pvalue)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
pvalues is a Series that maps from variable names to the associated p-values, so we can check whether the estimated slope is statistically significant. (In the Think Stats example, the p-value associated with agepreg is 1.14e-20, which is less than 0.001, as expected.)
print(results.summary())
print(results.rsquared)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
results.rsquared contains R2, which is 0.0047. results also provides f_pvalue, which is the p-value associated with the model as a whole, similar to testing whether R2 is statistically significant. And results provides resid, a sequence of residuals, and fittedvalues, a sequence of fitted values corresponding to agepre...
y = np.array([0, 1, 0, 1])
x1 = np.array([0, 0, 0, 1])
x2 = np.array([0, 1, 1, 1])

# And we start with the initial guesses β0 = -1.5, β1 = 2.8, β2 = 1.1
beta = [-1.5, 2.8, 1.1]

# Then for each row we can compute log_o:
log_o = beta[0] + beta[1] * x1 + beta[2] * x2

# convert from log odds to probabilities:
o = np.exp...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Notice that when log_o is greater than 0, o is greater than 1 and p is greater than 0.5. The likelihood of an outcome is p when y==1 and 1-p when y==0. For example, if we think the probability of a boy is 0.8 and the outcome is a boy, the likelihood is 0.8; if the outcome is a girl, the likelihood is 0.2. We can comput...
likes = y * p + (1 - y) * (1 - p)
print(likes)

# The overall likelihood of the data is the product of likes:
like = np.prod(likes)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
For these values of beta, the likelihood of the data is 0.18. The goal of logistic regression is to find parameters that maximize this likelihood. To do that, most statistics packages use an iterative solver like Newton’s method (see https://en.wikipedia.org/wiki/Logistic_regression#Model_fitting). Note I have skippe...
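To make the fitting step concrete, here is a sketch that maximizes the likelihood for the tiny dataset above with scipy.optimize.minimize (a generic optimizer, not the Newton solver a statistics package would use):

```python
import numpy as np
from scipy.optimize import minimize

y = np.array([0, 1, 0, 1])
x1 = np.array([0, 0, 0, 1])
x2 = np.array([0, 1, 1, 1])

def neg_log_like(beta):
    # log odds -> probability, then likelihood of each observed outcome
    log_o = beta[0] + beta[1] * x1 + beta[2] * x2
    p = 1 / (1 + np.exp(-log_o))
    likes = y * p + (1 - y) * (1 - p)
    return -np.sum(np.log(likes))

# Starting from the guesses above, the likelihood is about 0.18:
print(round(np.exp(-neg_log_like([-1.5, 2.8, 1.1])), 2))  # -> 0.18

# A bounded number of solver steps improves (or at worst matches) it:
result = minimize(neg_log_like, x0=[-1.5, 2.8, 1.1],
                  method='Nelder-Mead', options={'maxiter': 200})
print(np.exp(-result.fun) >= 0.18)  # -> True
```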
# parse_dates tells read_csv to interpret values in column 5 as dates and convert them to NumPy datetime64 objects.
mj_clean = pd.read_csv('mj-clean.csv', engine='python', parse_dates=[5])
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The DataFrame has a row for each reported transaction and the following columns:
- city: string city name.
- state: two-letter state abbreviation.
- price: price paid in dollars.
- amount: quantity purchased in grams.
- quality: high, medium, or low quality, as reported by the purchaser.
- date: date of report, presumed to be s...
def GroupByQualityAndDay(transactions):
    groups = transactions.groupby('quality')
    dailies = {}
    for name, group in groups:
        dailies[name] = GroupByDay(group)
    return dailies
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
groupby is a DataFrame method that returns a GroupBy object, groups; used in a for loop, it iterates over the names of the groups and the DataFrames that represent them. Since the values of quality are low, medium, and high, we get three groups with those names. The loop iterates through the groups and calls GroupByDay, whi...
def GroupByDay(transactions, func=np.mean):
    grouped = transactions[['date', 'ppg']].groupby('date')
    daily = grouped.aggregate(func)
    daily['date'] = daily.index
    start = daily.date[0]
    one_year = np.timedelta64(1, 'Y')
    daily['years'] = (daily.date - start) / one_year
    return daily
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The parameter, transactions, is a DataFrame that contains columns date and ppg. We select these two columns, then group by date. The result, grouped, is a map from each date to a DataFrame that contains prices reported on that date. aggregate is a GroupBy method that iterates through the groups and applies a function t...
dailies = GroupByQualityAndDay(mj_clean)

plt.figure(figsize=(6, 8))
plt.subplot(3, 1, 1)
for i, (k, v) in enumerate(dailies.items()):
    plt.subplot(3, 1, i + 1)
    plt.title(k)
    plt.scatter(dailies[k].index, dailies[k].ppg, s=10)
    plt.xticks(rotation=30)
    plt.ylabel("Price per gram")
    plt.xlabel("Months")
    ...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
One apparent feature in these plots is a gap around November 2013. It’s possible that data collection was not active during this time, or the data might not be available. We will consider ways to deal with this missing data later. Visually, it looks like the price of high quality cannabis is declining during this perio...
def RunLinearModel(daily):
    model = smf.ols('ppg ~ years', data=daily)
    results = model.fit()
    return model, results

def SummarizeResults(results):
    """Prints the most important parts of linear regression results.

    results: RegressionResults object
    """
    for name, param in results.params.items():
        ...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The estimated slopes indicate that the price of high quality cannabis dropped by about 71 cents per year during the observed interval; for medium quality it increased by 28 cents per year, and for low quality it increased by 57 cents per year. These estimates are all statistically significant with very small p-values. ...
# The following code plots the observed prices and the fitted values:
def PlotFittedValues(model, results, label=''):
    years = model.exog[:, 1]
    values = model.endog
    plt.scatter(years, values, s=15, label=label)
    plt.plot(years, results.fittedvalues, label='model')
    plt.xlabel("years")
    plt.ylabel("ppg...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
PlotFittedValues makes a scatter plot of the data points and a line plot of the fitted values. The plot shows the results for high quality cannabis. The model seems like a good linear fit for the data; nevertheless, linear regression is not the most appropriate choice for this data: First, there is no reason to expect the...
series = pd.Series(np.arange(10))
moving_avg = series.rolling(3).mean()
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The first two values are nan; the next value is the mean of the first three elements, 0, 1, and 2. The next value is the mean of 1, 2, and 3. And so on. Before we can apply rolling mean to the cannabis data, we have to deal with missing values. There are a few days in the observed interval with no reported transactions...
dates = pd.date_range(dailies["high"].index.min(), dailies["high"].index.max())
reindexed = dailies["high"].reindex(dates)
# dailies["high"].index
reindexed.shape
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The first line computes a date range that includes every day from the beginning to the end of the observed interval. The second line creates a new DataFrame with all of the data from daily, but including rows for all dates, filled with nan.
# Now we can plot the rolling mean like this:
# The window size is 30, so each value in roll_mean is the mean of 30 values from reindexed.ppg.
roll_mean = reindexed.ppg.rolling(30).mean()
plt.plot(roll_mean.index, roll_mean)
plt.xticks(rotation=30)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The rolling mean seems to do a good job of smoothing out the noise and extracting the trend. The first 29 values are nan, and wherever there’s a missing value, it’s followed by another 29 nans. There are ways to fill in these gaps, but they are a minor nuisance. An alternative is the exponentially-weighted moving avera...
# The first positional argument of ewm is com, so span must be passed by name:
ewma = reindexed.ppg.ewm(span=30).mean()
plt.plot(ewma.index, ewma)
plt.xticks(rotation=30)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
The span parameter corresponds roughly to the window size of a moving average; it controls how fast the weights drop off, so it determines the number of points that make a non-negligible contribution to each average. The plot above shows the EWMA for the same data. It is similar to the rolling mean, where they are both def...
reindexed.ppg.fillna(ewma, inplace=True)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Wherever reindexed.ppg is nan, fillna replaces it with the corresponding value from ewma. The inplace flag tells fillna to modify the existing Series rather than create a new one. A drawback of this method is that it understates the noise in the series. We can solve that problem by adding in resampled residuals:
def Resample(xs, n=None):
    """Draw a sample from xs with the same length as xs.

    xs: sequence
    n: sample size (default: len(xs))

    returns: NumPy array
    """
    if n is None:
        n = len(xs)
    return np.random.choice(xs, n, replace=True)

resid = (reindexed.ppg - ewma).dropna()
fake_data = ewma + Re...
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
resid contains the residual values, not including days when ppg is nan. fake_data contains the sum of the moving average and a random sample of residuals. Finally, fillna replaces nan with values from fake_data. The filled data is visually similar to the actual values. Since the resampled residuals are random, the resu...
def SerialCorr(series, lag=1):
    xs = series[lag:]
    ys = series.shift(lag)[lag:]
    corr = Corr(xs, ys)
    return corr
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
After the shift, the first lag values are nan, so I use a slice to remove them before computing Corr. If we apply SerialCorr to the raw price data with lag 1, we find serial correlation 0.48 for the high quality category, 0.16 for medium and 0.10 for low. In any time series with a long-term trend, we expect to see stro...
ewma = reindexed.ppg.ewm(span=30).mean()
resid = reindexed.ppg - ewma
corr = SerialCorr(resid, 1)
print(corr)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
With lag=1, the serial correlations for the de-trended data are -0.022 for high quality, -0.015 for medium, and 0.036 for low. These values are small, indicating that there is little or no one-day serial correlation in this series.
ewma = reindexed.ppg.ewm(span=30).mean()
resid = reindexed.ppg - ewma
corr = SerialCorr(resid, 7)
print(corr)

ewma = reindexed.ppg.ewm(span=30).mean()
resid = reindexed.ppg - ewma
corr = SerialCorr(resid, 30)
print(corr)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
At this point we can tentatively conclude that there are no substantial seasonal patterns in these series, at least not with these lags. Autocorrelation: If you think a series might have some serial correlation, but you don’t know which lags to test, you can test them all! The autocorrelation function is a function that...
import statsmodels.tsa.stattools as smtsa

acf = smtsa.acf(resid, nlags=120, unbiased=True)
acf[0], acf[1], acf[45], acf[60]
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
With lag=0, acf computes the correlation of the series with itself, which is always 1.
plt.plot(range(len(acf)),acf)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Prediction: Time series analysis can be used to investigate, and sometimes explain, the behavior of systems that vary in time. It can also make predictions. Linear regressions can be used for prediction. The RegressionResults class provides predict, which takes a DataFrame containing the explanatory variables and re...
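A minimal sketch of predict, on illustrative data rather than the cannabis series:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data: Salary is an exact linear function of YearsExperience.
df = pd.DataFrame({'YearsExperience': [1.0, 2.0, 3.0, 4.0],
                   'Salary': [30.0, 40.0, 50.0, 60.0]})
results = smf.ols('Salary ~ YearsExperience', data=df).fit()

# predict takes a DataFrame of explanatory variables and returns predictions:
new = pd.DataFrame({'YearsExperience': [5.0]})
print(results.predict(new))  # predicted Salary for 5 years, about 70
```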
def EvalNormalCdfInverse(p, mu=0, sigma=1):
    return scipy.stats.norm.ppf(p, loc=mu, scale=sigma)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Central limit theorem If we add values drawn from normal distributions, the distribution of the sum is normal. Most other distributions don’t have this property; if we add values drawn from other distributions, the sum does not generally have an analytic distribution. But if we add up n values from almost any distribut...
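A quick numerical illustration of the convergence, summing exponential variates (sample sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(17)
n = 100  # number of summed values per sample

# Sums of n Exponential(1) variates: skewed individually, near-normal in aggregate.
sums = rng.exponential(scale=1.0, size=(10_000, n)).sum(axis=1)

# Each summand has mean 1 and variance 1, so the sum should have
# mean close to n and standard deviation close to sqrt(n).
print(round(sums.mean(), 1), round(sums.std(), 1))
```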
def StudentCdf(n):
    ts = np.linspace(-3, 3, 101)
    ps = scipy.stats.t.cdf(ts, df=n-2)
    rs = ts / np.sqrt(n - 2 + ts**2)
    return thinkstats2.Cdf(rs, ps)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
ts is a NumPy array of values for t, the transformed correlation. ps contains the corresponding probabilities, computed using the CDF of the Student’s t-distribution implemented in SciPy. The parameter of the t-distribution, df, stands for “degrees of freedom.” I won’t explain that term, but you can read about it at ht...
def ChiSquaredCdf(n):
    xs = np.linspace(0, 25, 101)
    ps = scipy.stats.chi2.cdf(xs, df=n-1)
    return thinkstats2.Cdf(xs, ps)
Statistics/Pandas and ThinkStat.ipynb
Mayurji/Machine-Learning
gpl-3.0
Violations of graphical excellence and integrity: Find a data-focused visualization on one of the following websites that is a negative example of the principles that Tufte describes in The Visual Display of Quantitative Information.
- CNN
- Fox News
- Time

Upload the image for the visualization to this directory and displa...
# Add your filename and uncomment the following line: # Image(filename='yourfile.png')
assignments/assignment04/TheoryAndPracticeEx02.ipynb
aschaffn/phys202-2015-work
mit
In order to store sequences from the DICOM, we created a JSON. We will load in that JSON now.
# load json
with gzip.open('dicom-metadata.json.gz', 'r') as fp:
    tmp = json.load(fp)

dcm_metadata = dict()
# convert from a list of 1-item dicts to a single dict
for d in tmp:
    for k, v in d.items():
        dcm_metadata[k] = v
del tmp

# figure out how many unique top level meta-data fields in the ...
mimic-iv-cxr/dcm/create-mimic-cxr-jpg-metadata.ipynb
MIT-LCP/mimic-code
mit
There are three very useful items in this sequence that we'd like to have in an easier form for all images: the procedure code sequence ('528434'), the coded view position ('5505568'), and the coded patient orientation ('5506064'). For convenience, we will pull the textual description of each ('524548'), rather than the ...
cols = ['528434', '5505568', '5506064']
dcm_metadata_simple = {}
for k, v in dcm_metadata.items():
    dcm_metadata_simple[k] = [v[c][0]['524548'] for c in cols if c in v and len(v[c]) > 0]
dcm_metadata_simple = pd.DataFrame.from_dict(dcm_metadata_simple, orient...
mimic-iv-cxr/dcm/create-mimic-cxr-jpg-metadata.ipynb
MIT-LCP/mimic-code
mit
You have:
- a numpy array (matrix) X that contains your features (x1, x2)
- a numpy array (vector) Y that contains your labels (red: 0, blue: 1).

Let's first get a better sense of what our data is like. Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y? Hin...
### START CODE HERE ### (≈ 3 lines of code)
shape_X = np.shape(X)
shape_Y = np.shape(Y)
m = shape_Y[1]  # training set size
### END CODE HERE ###

print('The shape of X is: ' + str(shape_X))
print('The shape of Y is: ' + str(shape_Y))
print('I have m = %d training examples!' % (m))
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output: <table style="width:50%"> <tr> <td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td> </tr> </table> Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows: $$J = - \frac{1}{m} ...
# GRADED FUNCTION: compute_cost

def compute_cost(A2, Y, parameters):
    """
    Computes the cross-entropy cost given in equation (13)

    Arguments:
    A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
    Y -- "true" labels vector of shape (1, number of examples)
    paramete...
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output: <table style="width:20%"> <tr> <td>**cost**</td> <td> 0.693058761... </td> </tr> </table> Using the cache computed during forward propagation, you can now implement backward propagation. Question: Implement the function backward_propagation(). Instructions: Backpropagation is usually the...
# GRADED FUNCTION: backward_propagation

def backward_propagation(parameters, cache, X, Y):
    """
    Implement the backward propagation using the instructions above.

    Arguments:
    parameters -- python dictionary containing our parameters
    cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
    ...
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected output: <table style="width:80%"> <tr> <td>**dW1**</td> <td> [[ 0.00301023 -0.00747267] [ 0.00257968 -0.00641288] [-0.00156892 0.003893 ] [-0.00652037 0.01618243]] </td> </tr> <tr> <td>**db1**</td> <td> [[ 0.00176201] [ 0.00150995] [-0.00091736] [-0.00381422]] </td> </tr> ...
# GRADED FUNCTION: update_parameters

def update_parameters(parameters, grads, learning_rate=1.2):
    """
    Updates parameters using the gradient descent update rule given above

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradie...
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output: <table style="width:80%"> <tr> <td>**W1**</td> <td> [[-0.00643025 0.01936718] [-0.02410458 0.03978052] [-0.01653973 -0.02096177] [ 0.01046864 -0.05990141]]</td> </tr> <tr> <td>**b1**</td> <td> [[ -1.02420756e-06] [ 1.27373948e-05] [ 8.32996807e-07] [ -3.20136836e-06]]<...
# GRADED FUNCTION: nn_model

def nn_model(X, Y, n_h, num_iterations=10000, print_cost=False):
    """
    Arguments:
    X -- dataset of shape (2, number of examples)
    Y -- labels of shape (1, number of examples)
    n_h -- size of the hidden layer
    num_iterations -- Number of iterations in gradient descent loo...
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Expected Output: <table style="width:90%"> <tr> <td> **cost after iteration 0** </td> <td> 0.692739 </td> </tr> <tr> <td> <center> $\vdots$ </center> </td> <td> <center> $\vdots$ </center> </td> </tr> <tr> <td>**W1**</td> <td> [[-0.65848...
# GRADED FUNCTION: predict

def predict(parameters, X):
    """
    Using the learned parameters, predicts a class for each example in X

    Arguments:
    parameters -- python dictionary containing your parameters
    X -- input data of size (n_x, m)

    Returns
    predictions -- vector of predictions of o...
deeplearning.ai/C1.NN_DL/week3/Planar+data+classification+with+one+hidden+layer+v4.ipynb
jinzishuai/learn2deeplearn
gpl-3.0
Subsampling Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ i...
from collections import Counter
import random

drop_threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(drop_threshold / freqs[word]) for word in word_counts}
train_words = [word for word in int_word...
embeddings/Skip-Gram_word2vec.ipynb
seifip/udacity-deep-learning-nanodegree
mit
Making batches Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. From Mikolov et al.: "Since the more distant words are usually l...
def get_target(words, idx, window_size=5):
    ''' Get a list of words in a window around an index. '''
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx + 1:stop + 1])
    return list(target_words)
embeddings/Skip-Gram_word2vec.ipynb
seifip/udacity-deep-learning-nanodegree
mit
Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train the hidden lay...
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')
embeddings/Skip-Gram_word2vec.ipynb
seifip/udacity-deep-learning-nanodegree
mit
Embedding The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the n...
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)
embeddings/Skip-Gram_word2vec.ipynb
seifip/udacity-deep-learning-nanodegree
mit
Square pulse: Let
$$
f(x) := \left\{
\begin{matrix}
1, & \text{if}\quad |x| < a \\
0, & \text{if}\quad |x| > a
\end{matrix}
\right. ,
$$
with Fourier transform
$$
{\cal F}f = 2\frac{\sin(k a)}{k}.
$$
def p(x, a):
    if abs(x) < a:
        return 1.
    else:
        return 0.

pulso = np.vectorize(p)  # vectorizing the pulse function
Notebooks/Ejemplos-Transformada-Fourier.ipynb
gfrubi/FM2
gpl-3.0
We define 1000 points in the interval $[-10,10]$:
x = np.linspace(-10, 10, 1000)
k = np.linspace(-10, 10, 1000)

def p(a=1):
    plt.figure(figsize=(12, 5))
    plt.subplot(1, 2, 1)
    # fig, ej = subplots(1, 2, figsize=(14, 5))
    plt.plot(x, pulso(x, a), lw=2)
    plt.xlim(-10, 10)
    plt.ylim(-.1, 1.1)
    plt.grid(True)
    plt.xlabel(r'$x$', fontsize=15)
    plt.ylabel(r'$f(x)...
Notebooks/Ejemplos-Transformada-Fourier.ipynb
gfrubi/FM2
gpl-3.0
Gaussian function: Let
$$
f(x) := e^{-\alpha x^2}, \qquad \alpha > 0,
$$
with Fourier transform
$$
{\cal F}f = \sqrt{\frac{\pi}{\alpha}}\, e^{-k^2/(4\alpha)}.
$$
def gaussina(alpha=1):
    plt.figure(figsize=(12, 5))
    plt.subplot(1, 2, 1)
    plt.plot(x, np.exp(-alpha * x**2), lw=2)
    plt.xlim(-3, 3)
    plt.grid(True)
    plt.xlabel('$x$', fontsize=15)
    plt.ylabel('$f(x)$', fontsize=15)
    plt.subplot(1, 2, 2)
    plt.plot(k, np.sqrt(np.pi / alpha) * np.exp(-k**2 / (4. * alpha)), lw=2)
    ...
Notebooks/Ejemplos-Transformada-Fourier.ipynb
gfrubi/FM2
gpl-3.0
```python class Class_Name(base_classes_if_any): """optional documentation string""" static_member_declarations = 1 def method_declarations(self): """ documentation """ pass ```
# first.py
class First:
    pass

fr = First()
print(type(fr))
print(type(First))
print(type(int))

# first.py
class First(object):
    pass

fr = First()
print(type(fr))
print(type(First))
print(type(int))

# first.py
# Class with its methods
class Second:
    def set_name(self, name):
        self.fullname ...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
NOTE: `sec` and the `self` inside the method are the same object, as their ids are the same
class Second:
    def __init__(self, name=""):
        self.fullname = name

    def set_name(self, name):
        print(id(self))
        self.fullname = name

    def get_name(self):
        return self.fullname

sec = Second("Vishal Saxena")
print(sec.get_name())

# first.py
class Second:
    def ...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
The magic of mutables
class Bridge:
    fullname = ["Mayank", "Johri"]
    age = 33

    def name(self, name):
        self.fullname.append(name)

    def get_name(self):
        return self.fullname

rs_1 = Bridge()
rs_2 = Bridge()
print(rs_1.fullname == rs_2.fullname)
print(id(rs_1.fullname) == id(rs_2.fullname))
rs_1.na...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
So, we can't create an object with fewer than two parameters. Let's comment out the first three object creations and try again.
# Create Class Instances
try:
    # user = User()
    # arya = User("Arya")
    # gupta = User(surname="Gupta")
    gupta = User(surname="Gupta", firstname="Manish")
    arya.showName()
except Exception as e:
    print(e)

class PrivateVariables():
    __version = 1.0
    _vers = 11.0
    ver = 10.0

    def show_v...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
static / class variables. Reference: https://stackoverflow.com/questions/68645/are-static-class-variables-possible. Variables declared inside the class definition, but not inside a method, are class or static variables. But before you go all "Yahooooo" about your understanding of static variables, ple...
class Static_Test(object):
    val = "Rajeev Chaturvedi"

s = Static_Test()
print(s.val, "\b,", id(s.val))
print(Static_Test.val, "\b,", id(Static_Test.val))
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
So far so good: val and id(val) from both the instance and the class are the same, so they point to the same memory location containing the value. Now let's try to update it at the class level.
Static_Test.val = "राजीव चतुर्वेदी"
print(s.val, "\b,", id(s.val))
print(Static_Test.val, "\b,", id(Static_Test.val))
s_new = Static_Test()
print(s_new.val, "\b,", id(s.val))
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
So, if we update the value at the class level, the change is reflected in all the instances as well. Now let's try to update the value on an instance and check its effect.
s.val = "Sachin"
print(s.val, "\b,", id(s.val))
print(Static_Test.val, "\b,", id(Static_Test.val))
s_new = Static_Test()
print(s_new.val, "\b,", id(s_new.val))
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Once the instance value has been changed, it remains changed and is not affected by later changes to the class variable, as shown in the code below.
Static_Test.val = "Sachin Shah"
print(s.val, "\b,", id(s.val))
print(Static_Test.val, "\b,", id(Static_Test.val))
s_new = Static_Test()
print(s_new.val, "\b,", id(s_new.val))
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Static and Class Methods: Python provides the decorators @classmethod and @staticmethod. @staticmethod: A static method does not receive an implicit first argument (self or cls). To declare a static method, the staticmethod decorator is used, as shown in the example below.
class Circle(object):
    PI = 3.14

    @staticmethod
    def area_circle(radius):
        area = 0
        try:
            area = PI * radius * radius
        except Exception as e:
            print(e)
        return area

c = Circle()
print(c.area_circle(10))
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
As shown in the above example, static methods do not have access to any class or instance attributes. We tried to access the class attribute PI and received an error message that the variable is not defined. Static methods for all intents and purposes act as normal functions, but are called from within an object or class. Static method...
Many times, we have to
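For contrast with @staticmethod, a minimal @classmethod sketch (reusing the Circle example; cls gives access to the class attribute PI):

```python
class Circle(object):
    PI = 3.14

    @classmethod
    def area_circle(cls, radius):
        # cls is passed implicitly, so class attributes are reachable,
        # unlike in the staticmethod version above
        return cls.PI * radius * radius

print(Circle.area_circle(10))    # callable from the class
print(Circle().area_circle(10))  # and from an instance
```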
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Attributes: In Python, an attribute is everything contained inside an object. In Python there is no real distinction between plain data and functions, both being objects. The following example represents a book with a title and an author. It also provides a get_entry() method which returns a...
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

    def get_entry(self):
        return f"{self.title} by {self.author}"
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Every instance of this class will contain three attributes, namely title, author, and get_entry, in addition to the standard attributes provided by the object ancestor.
b = Book(title="Akme", author="Mayank") print(dir(b)) print(b.title) b.title = "Lets Go" print(b.title) print(b.get_entry()) data = b.get_entry print(data) print(data()) print(type(b.__dict__)) print(b.__dict__) #print(b.nonExistAttribute()) def testtest(func): print(func()) testtest(data)
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Instead of using the normal statements to access attributes, you can use the following functions. The getattr(obj, name[, default]) : to access an attribute of the object, returning default if it is missing. The hasattr(obj, name) : to check if an attribute exists or not. The setattr(obj, name, value) : to s...
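A quick sketch of these built-ins in action, reusing the Book class from above (delattr, the deletion counterpart, is included as well):

```python
class Book:
    def __init__(self, title, author):
        self.title = title
        self.author = author

b = Book(title="Akme", author="Mayank")

print(getattr(b, "title"))             # same as b.title
print(getattr(b, "isbn", "unknown"))   # default returned for a missing attribute
print(hasattr(b, "author"))            # True
setattr(b, "title", "Lets Go")         # same as b.title = "Lets Go"
print(b.title)
delattr(b, "author")                   # same as del b.author
print(hasattr(b, "author"))            # False
```

These functions are most useful when the attribute name is only known at runtime, e.g. held in a string variable.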
class Book(object): def __init__(self, title, author): self.title = title self.author = author def get_entry(self): return "{0} by {1}".format(self.title, self.author) entry = property(get_entry) b = Book(title="Pawn of Prophecy", author="David Eddings") print(b.entry)
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Properties also allow you to specify a write method (a setter), which is automatically called when you try to change the value of the property. NOTE: Don't worry too much about properties; we have an entire chapter dedicated to them.
class User(): def __init__(self, name): self.name = name def getname(self): return "User's full name is: {0}".format(self.name) def setname(self, name): self.name = name fullname = property(getname, setname) user = User("Roshan Musheer") print(user.fullna...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
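The same User class can also be written with the @property decorator syntax instead of the property(getname, setname) call — a sketch, equivalent to the example above:

```python
class User:
    def __init__(self, name):
        self._name = name

    @property
    def fullname(self):
        # read accessor, called on `user.fullname`
        return "User's full name is: {0}".format(self._name)

    @fullname.setter
    def fullname(self, name):
        # write accessor, called on `user.fullname = ...`
        self._name = name

user = User("Roshan Musheer")
print(user.fullname)
user.fullname = "Mohan Shah"
print(user.fullname)
```

The underscore in _name is a convention marking the backing attribute as internal, so it does not collide with the property name.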
__new__ __new__ is called to create a new instance of a class. Overriding the __new__ method As per "https://www.python.org/download/releases/2.2/descrintro/#new", here are some rules for __new__: __new__ is a static method. When defining it, you don't need to (but may!) use the phrase "__new__ = staticmethod(__new__)", because this is im...
class MyTest: def __new__(self): print("in new") def __init__(self): print("in init") mnt = MyTest() class MyNewTest: def __new__(self): print("in new") def __new__(self, name): print("in new", name) def __init__(self, name): print("in ini...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Let's look at another example; we have removed the __new__ method from the above class and created an object.
class MyNewTest: def __init__(self, name): print("in init", name) mnt = MyNewTest("Hari Hari")
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Now let's check where it is a good idea to use __init__ and where __new__. One rule of thumb is to avoid using __new__ and let Python handle it, because almost everything you wish to do in a constructor can be done in __init__. Still, if you wish to do so, the examples below will show you how to do it correctly. In the first ...
class MyNewTest: def __init__(self, name): print("in init", name) self.name = name def print_name(self): print(self.name) mnt = MyNewTest("Hari Hari") mnt.print_name()
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
We saw that everything worked without any issue. Now let's try to replace __init__ with __new__.
# -----------------# # Very Bad Example # # -----------------# class MyNewTest: def __new__(cls, name): print("in init", name) cls.name = name def print_name(self): print(self.name) try: mnt = MyNewTest("Hari Hari") mnt.print_name() except Exception as e: pr...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Now, since we have not returned anything from __new__, mnt is None. __new__ must return the new object itself. To overcome this issue, we need to return an instance of our class. We can do that using instance = super(&lt;class&gt;, cls).__new__(cls), as shown in the example below.
class MyNewTest(object): def __new__(cls, name): print("in __new__:\n\t{0}".format(name)) instance = super(MyNewTest, cls).__new__(cls) instance.name = name return instance def print_name(self): print("print_name:\n\t{0}".format(self.name)) mnt = MyN...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Or, we can create the instance using the following code: instance = object.__new__(cls). As object is the parent, we are calling it directly instead of using super.
class MyNewTest(object): def __new__(cls, name): print("in __new__", name) instance = object.__new__(cls) instance.name = name print("exiting __new__", name) return instance # __init__ is redundent in this example. def __init__(self, name): ...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Both super(MyNewTest, cls).__new__(cls) and object.__new__(cls) produce the desired instance, as shown in the examples above. If __new__ returns anything other than an instance of the class, the __init__ function will never be called, as shown in the example below.
class Distance(float): def __new__(cls, dist): print("in __new__", dist) return dist*0.0254 # __init__ is redundent in this example, # as it will never be called. def __init__(self, dist): print("in __init__", dist) def print_dist(self): print(...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
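For contrast, here is a hedged sketch of a Distance variant where __new__ does return an instance of cls (via float's own __new__), so the result keeps the Distance type; the 0.0254 inches-to-metres factor is kept from the original example:

```python
class Distance(float):
    def __new__(cls, inches):
        # returning an instance of cls preserves the Distance type,
        # so __init__ (if defined) would now be called
        return super(Distance, cls).__new__(cls, inches * 0.0254)

d = Distance(100)
print(d)                   # the converted value in metres
print(isinstance(d, Distance), isinstance(d, float))
```

Because the returned object is an instance of cls, Python goes on to call __init__ on it, unlike in the version above that returned a plain float.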
Where can we use __new__ Creating a singleton class In the singleton pattern, we create one instance of the class, and all subsequent objects of that class point to the first instance. Let's try to create a singleton class using the __new__ constructor.
class Godlike(object): def __new__(cls, name): it = cls.__dict__.get("__it__") if it is not None: return it cls.__it__ = it = object.__new__(cls) it.init(name) return it def init(self, name): self.name = name def print_name(self): ...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
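Since the cell above is truncated, here is a self-contained sketch of the same singleton pattern; the class and variable names are assumed from the visible fragment:

```python
class Godlike(object):
    def __new__(cls, name):
        it = cls.__dict__.get("__it__")
        if it is not None:
            return it                      # hand back the one stored instance
        cls.__it__ = it = object.__new__(cls)
        it.name = name                     # initialise only on first creation
        return it

ohm = Godlike("ohm")
shiv = Godlike("shivoham")
ram = Godlike("ram")
print(ohm is shiv is ram)   # True: all three names refer to one instance
print(ram.name)             # ohm — later names are ignored
```

Storing the instance under cls.__dict__["__it__"] keeps one singleton per class, even across subclasses.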
Note that in the above example all three variables point to the same object (ohm), meaning all three are the same instance. We might have situations where we need to raise an exception if creation of more than one instance is attempted, as shown in the example below.
class SingletonError(Exception): pass class HeadMaster(object): def __new__(cls, name): it = cls.__dict__.get("__it__") if it is not None: raise SingletonError(f"Could not create new instance for value {name}") cls.__it__ = it = object.__new__(cls) ...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
Regulating the number of objects created We are going to tweak the previous example so that only a finite number of objects can be created for the class.
class HeadMaster(object): _instances = [] # Keep track of instance reference limit = 2 def __new__(cls, *args, **kwargs): if len(cls._instances) >= cls.limit: raise RuntimeError("Creation Limit %s reached" % cls.limit) instance = object.__new__(cls) cls._instances.appe...
Section 1 - Core Python/Chapter 09 - Classes & OOPS/01_Classes_and_OOPS.ipynb
mayankjohri/LetsExplorePython
gpl-3.0
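The cell above is truncated, so here is a complete, corrected sketch of the same pattern (the instance names are assumptions; note that __new__ must call object.__new__, not object.__init__):

```python
class HeadMaster(object):
    _instances = []   # keeps a reference to every created instance
    limit = 2

    def __new__(cls, *args, **kwargs):
        if len(cls._instances) >= cls.limit:
            raise RuntimeError("Creation Limit %s reached" % cls.limit)
        instance = object.__new__(cls)   # note: __new__, not __init__
        cls._instances.append(instance)
        return instance

    def __init__(self, name):
        self.name = name

first = HeadMaster("Anand")
second = HeadMaster("Prashant")
try:
    third = HeadMaster("Rajeev")
except RuntimeError as e:
    print(e)   # Creation Limit 2 reached
```

One caveat of this sketch: keeping instances in _instances prevents them from ever being garbage-collected; a production version might store weak references instead.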
When analyzing data, I usually use the following three modules. I use pandas for data management, filtering, grouping, and processing. I use numpy for basic array math. I use toyplot for rendering the charts.
import pandas import numpy import toyplot import toyplot.pdf import toyplot.png import toyplot.svg print('Pandas version: ', pandas.__version__) print('Numpy version: ', numpy.__version__) print('Toyplot version: ', toyplot.__version__)
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
Load in the "auto" dataset. This is a fun collection of data on cars manufactured between 1970 and 1982. The source for this data can be found at https://archive.ics.uci.edu/ml/datasets/Auto+MPG. The data are stored in a text file containing columns of data. We use the pandas.read_table() method to parse the data and l...
column_names = ['MPG', 'Cylinders', 'Displacement', 'Horsepower', 'Weight', 'Acceleration', 'Model Year', 'Origin Index', 'Car Name'] data = pandas.read_table('auto-mpg.data', ...
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
The origin column indicates the country of origin for the car manufacture. It has three numeric values, 1, 2, or 3. These indicate USA, Europe, or Japan, respectively. Replace the origin column with a string representing the country name.
country_map = pandas.Series(index=[1,2,3], data=['USA', 'Europe', 'Japan']) data['Origin'] = numpy.array(country_map[data['Origin Index']])
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
In this plot we are going to show the trend of the average miles per gallon (MPG) rating for subsequent model years separated by country of origin. This time period saw a significant increase in MPG driven by the U.S. fuel crisis. We can use the pivot_table feature of pandas to get this information from the data. (Exce...
average_mpg_per_year = data.pivot_table(index='Model Year', columns='Origin', values='MPG', aggfunc='mean') average_mpg_per_year
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
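The same table can also be built with groupby/unstack instead of pivot_table. A sketch on a tiny hand-made frame (the column names are taken from the dataset above; this assumes pandas is installed):

```python
import pandas

# a tiny stand-in for the auto data
df = pandas.DataFrame({
    'Model Year': [70, 70, 71, 71],
    'Origin':     ['USA', 'Japan', 'USA', 'Japan'],
    'MPG':        [15.0, 24.0, 17.0, 26.0],
})

# equivalent to:
# df.pivot_table(index='Model Year', columns='Origin',
#                values='MPG', aggfunc='mean')
table = df.groupby(['Model Year', 'Origin'])['MPG'].mean().unstack()
print(table)
```

pivot_table is the more direct spelling for this case, but the groupby form generalizes to multiple aggregations at once.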
Use toyplot to make a plot of the MPG for every car in the database organized by year and colored by origin.
canvas = toyplot.Canvas('4in', '2.6in') axes = canvas.cartesian(bounds=(41,-1,6,-43), xlabel = 'Model Year', ylabel = 'MPG') colormap = toyplot.color.CategoricalMap() axes.scatterplot(data['Model Year'] + 1900 + 0.2*(data['Origin Index']-2), data['MPG'...
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
Now use toyplot to plot this data along with trend lines.
canvas = toyplot.Canvas('4in', '2.6in') axes = canvas.cartesian(bounds=(41,-1,6,-43), xlabel = 'Model Year', ylabel = 'MPG') colormap = toyplot.color.CategoricalMap() axes.scatterplot(data['Model Year'] + 1900 + 0.2*(data['Origin Index']-2), data['MPG'...
images/better-plots/Detail_MultiSeries.ipynb
kmorel/kmorel.github.io
mit
This lesson covers the mouse-controlling functions in this module. The module treats the screen as Cartesian coordinates of pixels, with x referencing a point on the horizontal axis, and y referencing a point on the vertical axis. We can examine the available screen size using the size() function, with var...
pyautogui.size()
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
We can store this tuple in two variables, width and height:
width, height = pyautogui.size()
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
Similarly, the position() function returns the current position of the mouse cursor.
pyautogui.position()
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
The rightmost and bottommost coordinates are 1 less than the width and height, because coordinates start at 0.
# left corner print(pyautogui.position()) # right corner print(pyautogui.position())
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
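A quick arithmetic sketch of the corner coordinates, assuming a hypothetical 1920×1080 screen (on a real machine the width and height would come from pyautogui.size()):

```python
# assumed screen size; on a real machine use: width, height = pyautogui.size()
width, height = 1920, 1080

top_left = (0, 0)
bottom_right = (width - 1, height - 1)   # one less than the reported size
print(top_left, bottom_right)            # (0, 0) (1919, 1079)
```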
The first function to control the mouse is the moveTo() function, which moves the mouse immediately to an absolute location.
pyautogui.moveTo(10,10)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
We can pass the duration parameter to this function to slow down the movement, simulating human activity over the defined duration.
pyautogui.moveTo(10,10, duration=2)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
We can also use the moveRel() function to move the mouse relative to a certain position.
pyautogui.moveRel(200, 0, duration=2)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
We can also pass in y coordinates to move up or down, but we have to use negative values to move 'up' in a relative way.
pyautogui.moveRel(0, -100, duration=1.5)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
Now that we have mastered movement, we can now use click() functions to interact with objects.
# Find the 'Help' button in the top Jupyter Navigation # helpCoordinates = pyautogui.position() helpCoordinates = (637, 126) # Click at those coordinates pyautogui.click(helpCoordinates)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
We can use functions like rightClick(), doubleClick(), or middleClick() for similar behavior. We can even run these functions without any coordinates, which will click at the mouse's current location. We also have dragRel() and dragTo() functions, which can be used to click and drag; used here to draw in a paint program. ...
pyautogui.moveRel(0, -100, duration=1.5)
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
To make pyautogui useful, you need to know the coordinates on the screen at any given time and interact with them. The module contains a useful subprogram called displayMousePosition, which can be run from the terminal to track the mouse position in real time (it also reports RGB values).
# Not run here, just keeps spitting out print functions. Much more useful in the terminal. pyautogui.displayMousePosition()
AutomateTheBoringStuffWithPython/lesson48.ipynb
Vvkmnn/books
gpl-3.0
First, grab a FontAwesome instance which knows about all of the icons.
fa = FontAwesome()
docs/Icon.ipynb
bollwyvl/ip-bootstrap
bsd-3-clause
fa exposes Python-friendly, autocompletable names for all of the FontAwesome icons, and you can preview them immediately.
fa.space_shuttle
docs/Icon.ipynb
bollwyvl/ip-bootstrap
bsd-3-clause
You can apply effects like rotation and scaling.
fa.space_shuttle.rotate_270 * 3
docs/Icon.ipynb
bollwyvl/ip-bootstrap
bsd-3-clause
The actual widget supports the stack case, such that you can display a single icon...
icon = Icon(fa.space_shuttle) icon
docs/Icon.ipynb
bollwyvl/ip-bootstrap
bsd-3-clause
Or several icons stacked together...
icon = Icon(fa.square * 2, fa.empire.context_inverse, size=Size.x3) icon
docs/Icon.ipynb
bollwyvl/ip-bootstrap
bsd-3-clause
1.2) Loops in Python Briefly explain how the two main loops in Python (for and while) work, specifying in particular: what changes at each iteration; when the loop stops.
for it in range(10): print(it) it=0 while it < 10: print(it) it +=1
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
1.3) Computer-science concept What is genericity (the fact that a piece of code is generic)?
def Ajoute_dix(inputData): if isinstance(inputData, int): return inputData + 10 elif isinstance(inputData, str): return int(inputData) + 10 else: return 10 # application with an INT a=15 print("The type of a is: {}".format(type(a))) b=Ajoute_dix(a) print("By applying the function ...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2) Part 2: Arithmetic Operators
a=5%2 print('a=5%2 is a {} of type {}'.format(a,type(a))) a=[1]+3 print('a=[1]+3 is a {} of type {}'.format(a,type(a))) a=[1]*3 print('a=a=[1]*3 is a {} of type {}'.format(a,type(a))) a=9.0*9 print('a=9.0*9 is a {} of type {}'.format(a,type(a))) a=9*"9" print("a=9*“9” is a {} of type {}".format(a,type(a))) a = 4 =...
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2.2) Question 2
x=3 y=5 z=11 a = True b = False (x < y ) and (a or b) (x < y < z) != (a != b) (a and (not b) and ( -y <= -z ) ) (not (a and b)) or ((y**2 < z**2))
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2.3 Question 3
liste1 = ["a", "b", "c", "d", "e"]
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2.3.1 Question 3.1
liste1[ len(liste1) ] liste1[ - len(liste1) ]
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2.4 Question 4
a=2 while a <= 7: a = a*a + a + 1 print(a)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0
2.5 Question 5
a=1 for it in range(5): a = a + it print(a)
Cours09_DILLMANN_ISEP2016.ipynb
DillmannFrench/Intro-PYTHON
gpl-3.0