Now plot stocks
plt.figure(figsize=[10,9])
plt.subplot(3,1,1)
plt.plot(X_path[:,0])
plt.title(r'Employment')
plt.subplot(3,1,2)
plt.plot(X_path[:,1])
plt.title(r'Unemployment')
plt.subplot(3,1,3)
plt.plot(X_path.sum(1))
plt.title(r'Labor Force')
solutions/lakemodel_solutions.ipynb
gxxjjj/QuantEcon.py
bsd-3-clause
And how the rates evolve:
plt.figure(figsize=[10,6])
plt.subplot(2,1,1)
plt.plot(x_path[:,0])
plt.hlines(xbar[0], 0, T, 'r', '--')
plt.title(r'Employment Rate')
plt.subplot(2,1,2)
plt.plot(x_path[:,1])
plt.hlines(xbar[1], 0, T, 'r', '--')
plt.title(r'Unemployment Rate')
We see that it takes 20 periods for the economy to converge to its new steady state levels.

Exercise 2

This next exercise has the economy experiencing a boom in entrances to the labor market and then later returning to the original levels. For 20 periods the economy has a new entry rate into the labor market
bhat = 0.003
T_hat = 20
LM1 = LakeModel(lamb, alpha, bhat, d)
We simulate for 20 periods at the new parameters
X_path1 = np.vstack(LM1.simulate_stock_path(x0*N0, T_hat))  # simulate stocks
x_path1 = np.vstack(LM1.simulate_rate_path(x0, T_hat))  # simulate rates
Now, using the state after 20 periods as the new initial conditions, we simulate for the additional 30 periods
X_path2 = np.vstack(LM0.simulate_stock_path(X_path1[-1,:2], T-T_hat+1))  # simulate stocks
x_path2 = np.vstack(LM0.simulate_rate_path(x_path1[-1,:2], T-T_hat+1))  # simulate rates
Finally we combine these two paths and plot
x_path = np.vstack([x_path1, x_path2[1:]])  # note [1:] to avoid doubling period 20
X_path = np.vstack([X_path1, X_path2[1:]])  # note [1:] to avoid doubling period 20

plt.figure(figsize=[10,9])
plt.subplot(3,1,1)
plt.plot(X_path[:,0])
plt.title(r'Employment')
plt.subplot(3,1,2)
plt.plot(X_path[:,1])
plt.title(r'Unemployment')
plt.subplot(3,1,3)
plt.plot(X_path.sum(1))
plt.title(r'Labor Force')
And the rates:
plt.figure(figsize=[10,6])
plt.subplot(2,1,1)
plt.plot(x_path[:,0])
plt.hlines(x0[0], 0, T, 'r', '--')
plt.title(r'Employment Rate')
plt.subplot(2,1,2)
plt.plot(x_path[:,1])
plt.hlines(x0[1], 0, T, 'r', '--')
plt.title(r'Unemployment Rate')
This is a very similar problem to the prediction intervals we had before. We know that $p(\mu - \bar{x})$ follows a $T(0, \sigma_x /\sqrt{N}, N - 1)$ distribution, and we can use the same idea as the $Z$-scores we used for prediction intervals: $$T(y) = \frac{y - 0}{\sigma_x / \sqrt{N}}$$ The mean of the error-in-population-mean distribution is 0, because our error in the population mean is always centered around 0. After taking 5 samples, we've found that the sample mean is 45 and the sample standard deviation, $\sigma_x$, is 3. What is the 95% confidence interval for the true mean, $\mu$? We can write this as: $$P(- y < \mu - \bar{x} < +y) = 0.95$$ Our interval will go from 2.5% to 97.5% (95% of the probability), so let's find the $T$-values for $-\infty$ to 2.5% and for 97.5% to $\infty$. Remember that the $T$-value depends on the degrees of freedom, $N - 1$.
import scipy.stats

# The lower T value. Note you must give the degrees of freedom (N - 1)
print(scipy.stats.t.ppf(0.025, 4))
print(scipy.stats.t.ppf(0.975, 4))
unit_8/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
$$T_{low} = \frac{-y - 0}{\sigma_x / \sqrt{N}}$$ $$T_{low} = -\frac{y}{\sigma_x / \sqrt{N}}$$ $$y = -T_{low}\frac{\sigma_x}{\sqrt{N}}$$
print(-scipy.stats.t.ppf(0.025, 4) * 3 / np.sqrt(5))
The final answer is $P(45 - 3.72 < \mu < 45 + 3.72) = 0.95$, or $45 \pm 3.72$.

Computing Confidence Interval for Error in Population Mean

Steps:

1. Is the sample size greater than 25, OR do you know the true (population) standard deviation? If so, use the standard normal ($Z$); otherwise use the $t$-distribution for your sample size ($T$).
2. Build your interval in probability. For example, a double-sided 95% interval goes from 2.5% to 97.5%.
3. Find the $Z$ or $T$ values that match your interval. For example, $Z_{low} = -1.96$ to $Z_{high} = 1.96$ is a double-sided 95% confidence interval. Use the scipy.stats.t.ppf or scipy.stats.norm.ppf function to find them.
4. Use the $y = Z \sigma / \sqrt{N}$ or $y = T \sigma_x / \sqrt{N}$ equation to find the interval values in your particular distribution, where $y$ is the interval width.
5. Report your answer either as an interval or with the $\bar{x} \pm y$ notation.

Shortcut Method For Normal

Here's how to quickly do these steps in Python for sample sizes greater than 25:
# DO NOT COPY, JUST GENERATING DATA FOR EXAMPLE
data = scipy.stats.norm.rvs(size=100, scale=15, loc=50)

# Check if sample size is big enough.
# This code will cause an error if it's not
assert len(data) > 25

CI = 0.95
sample_mean = np.mean(data)
# The second argument specifies what the denominator should be (N - x),
# where x is 1 in this case
sample_var = np.var(data, ddof=1)
Z = scipy.stats.norm.ppf((1 - CI) / 2)
y = -Z * np.sqrt(sample_var / len(data))
print('{} +/- {}'.format(sample_mean, y))
Is that low? Well, remember that the error in the mean scales as the standard deviation divided by the square root of the number of samples.

Shortcut Method For $t$-Distribution

Here's how to quickly do these steps in Python for sample sizes less than 25:
# DO NOT COPY, THIS JUST GENERATES DATA FOR EXAMPLE
data = scipy.stats.norm.rvs(size=4, scale=15, loc=50)

CI = 0.95
sample_mean = np.mean(data)
sample_var = np.var(data, ddof=1)
T = scipy.stats.t.ppf((1 - CI) / 2, df=len(data) - 1)
y = -T * np.sqrt(sample_var / len(data))
print('{} +/- {}'.format(sample_mean, y))
Example of Prediction Intervals

I know that the thickness of a metal slab is distributed according to ${\cal N}(3.4, 0.75)$. Construct a prediction interval such that a randomly chosen metal slab will lie within it with 95% confidence. $$P( \mu - y < x < \mu + y) = 0.95$$ This is a prediction interval, so we're computing an interval on the distribution itself, and we know everything about it. $$Z(\mu + y) = \frac{\mu + y - \mu}{\sigma} \Rightarrow y = \sigma Z$$ $$Z = 1.96$$ $$x = \mu \pm 1.96 \sigma = 3.4 \pm 1.40$$ A randomly chosen slab will have a thickness of $3.4 \pm 1.40$ 95% of the time.

Example 1 of error in population mean with known $\sigma$

I measure the thickness of 35 metal slabs and find that $\bar{x}$, the sample mean, is 3.38. If I know that $\sigma = 0.75$, construct a confidence interval that will contain the true mean $\mu$ with 95% confidence. We know that $p(\bar{x} - \mu)$ is normally distributed with ${\cal N}(0, \sigma / \sqrt{N})$. We want to find $$ P(-y < \bar{x} - \mu < +y) = 0.95$$ $$ Z(+y) = \frac{y - 0}{\sigma_e} = \frac{y}{\sigma / \sqrt{N}} \Rightarrow y = \frac{\sigma}{\sqrt{N}} Z$$ $$y = \frac{0.75}{\sqrt{35}}\,1.96 = 0.248$$ $$ \mu - \bar{x} = 0 \pm 0.248$$ $$ \mu = 3.38 \pm 0.248$$ At a 95% confidence level, the true mean is $3.38 \pm 0.248$.

Example 2 of error in population mean with known $\sigma$

I measure the thickness of 11 metal slabs and find that $\bar{x}$, the sample mean, is 5.64. If I know that $\sigma = 1.2$, construct a confidence interval that will contain the true mean $\mu$ with 99% confidence. Again, we know that $p(\bar{x} - \mu)$ is normally distributed with ${\cal N}(0, \sigma / \sqrt{N})$.
We want to find $$ P(-y < \bar{x} - \mu < +y) = 0.99$$ $$ Z(+y) = \frac{y - 0}{\sigma_e} = \frac{y}{\sigma / \sqrt{N}} \Rightarrow y = \frac{\sigma}{\sqrt{N}} Z$$ $$y = \frac{1.2}{\sqrt{11}}\,2.575 = 0.932$$ $$ \mu - \bar{x} = 0 \pm 0.932$$ $$ \mu = 5.64 \pm 0.932$$

Example 1 of error in population mean with unknown $\sigma$

I measure the thickness of 6 metal slabs and find that $\bar{x}$, the sample mean, is 3.65 and the sample standard deviation is $1.25$. Construct a confidence interval that will contain the true mean $\mu$ with 90% confidence. We know that $p(\bar{x} - \mu)$ follows a $t$-distribution because $N$ is small; it is distributed as $T(0, \sigma_x / \sqrt{N})$. $$T(+y) = \frac{y - 0}{\sigma_x / \sqrt{N}} \Rightarrow y = \frac{\sigma_x}{\sqrt{N}} T$$ We want to find $$ P(-y < \bar{x} - \mu < +y) = 0.90$$
# Notice it is 95%: the interval goes from
# 5% to 95%, containing 90% of the probability
T = scipy.stats.t.ppf(0.95, df=6-1)
print(T)
$$ y = \frac{1.25}{\sqrt{6}}\, 2.015 = 1.028 $$ $$\mu = 3.65 \pm 1.028$$ The population mean of the slabs is $3.65 \pm 1.028$ with 90% confidence.

Example 2 of error in population mean with unknown $\sigma$

I measure the thickness of 25 metal slabs and find that $\bar{x}$, the sample mean, is 3.42 and the sample standard deviation is 0.85. Construct a confidence interval that will contain the true mean $\mu$ with 90% confidence. Unlike the last example, $P(\bar{x} - \mu)$ is a normal distribution because $N$ is large enough for the central limit theorem to apply. It is distributed as ${\cal N}(0, \sigma_x / \sqrt{N})$. We want to find $$ P(-y < \bar{x} - \mu < +y) = 0.90$$ $$Z(+y) = \frac{y - 0}{\sigma_x / \sqrt{N}} \Rightarrow y = \frac{\sigma_x}{\sqrt{N}} Z$$ $$ y = \frac{0.85}{\sqrt{25}}\, 1.645 = 0.28$$ $$\mu = 3.42 \pm 0.28$$

Single-Sided Confidence Intervals

Sometimes you want to bound the population mean on only one side.

Upper Interval (Lower-Bound)

An upper interval covers the upper x% of the probability mass and can be defined as an interval $(y, \infty)$, where $y$ acts as a lower bound. A visual is shown below for an upper 90% confidence interval.
import scipy.stats as ss

# make some points for plot
N = 5
x = np.linspace(-5, 5, 1000)
T = ss.t.ppf(0.10, df=N-1)
y = ss.t.pdf(x, df=N-1)
plt.plot(x, y)
plt.fill_between(x, y, where=x > T)
plt.text(0, np.max(y) / 3, 'Area=0.90', fontdict={'size': 14}, horizontalalignment='center')
plt.axvline(T, linestyle='--', color='orange')
plt.xticks([T], ['lower-bound'])
plt.yticks([])
plt.ylabel(r'$p(\mu - \bar{x})$')
plt.xlabel(r'$\mu - \bar{x}$')
plt.show()
Lower Interval (Upper-Bound) A lower interval covers the lower x% of probability mass. It is defined with an upper bound like so: $(-\infty, y)$. An example is below:
# make some points for plot
N = 5
x = np.linspace(-5, 5, 1000)
T = ss.t.ppf(0.90, df=N-1)
y = ss.t.pdf(x, df=N-1)
plt.plot(x, y)
plt.fill_between(x, y, where=x < T)
plt.text(0, np.max(y) / 3, 'Area=0.90', fontdict={'size': 14}, horizontalalignment='center')
plt.axvline(T, linestyle='--', color='orange')
plt.xticks([T], ['upper-bound'])
plt.yticks([])
plt.ylabel(r'$p(\mu - \bar{x})$')
plt.xlabel(r'$\mu - \bar{x}$')
plt.show()
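The single-sided bounds pictured above can also be computed directly from summary statistics. Below is a minimal sketch using hypothetical values (sample size 5, sample mean 3.5, sample standard deviation 1.0) that are not from the lecture:

```python
import numpy as np
import scipy.stats

# Hypothetical sample summary statistics, for illustration only
N = 5
sample_mean = 3.5
sample_sd = 1.0  # sample standard deviation (computed with ddof=1)
CI = 0.90

# Upper interval (lower-bound): exclude the lower 10% of probability,
# so the bound sits at the 10th percentile of the t-distribution
T_low = scipy.stats.t.ppf(1 - CI, df=N - 1)
lower_bound = sample_mean + T_low * sample_sd / np.sqrt(N)

# Lower interval (upper-bound): exclude the upper 10%
T_high = scipy.stats.t.ppf(CI, df=N - 1)
upper_bound = sample_mean + T_high * sample_sd / np.sqrt(N)

print('mu > {:.3f} with 90% confidence'.format(lower_bound))
print('mu < {:.3f} with 90% confidence'.format(upper_bound))
```

Because the $t$-distribution is symmetric, the two bounds sit the same distance on either side of the sample mean.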
Concepts
import pandas as pd

concepts = pd.read_csv('data/concepts.csv')
concepts.head()
analysis/demo/demo.ipynb
adaptive-learning/flocs
gpl-2.0
Blocks
blocks = pd.read_csv('data/blocks.csv')
blocks.head()
Instructions
instructions = pd.read_csv('data/instructions.csv')
instructions.head()
Tasks
tasks = pd.read_csv('data/tasks.csv')
tasks.head(3)
Students
students = pd.read_csv('data/students.csv')
students.head()
Task Instances
task_instances = pd.read_csv('data/task-instances.csv')
task_instances.head()
Attempts
attempts = pd.read_csv('data/attempts.csv')
attempts.head()
Analysis Example

Problem: find the median task-solving time for each programming concept.
programming_concepts = concepts[concepts.type == 'programming']
programming_concepts

solved_instances = task_instances[task_instances.solved]
instances_concepts = pd.merge(solved_instances, tasks, on='task_id')[['time_spent', 'concepts_ids']]
instances_concepts.head()

# unpack concepts IDs
from ast import literal_eval
concepts_lists = [literal_eval(c) for c in instances_concepts.concepts_ids]
times = instances_concepts.time_spent
concepts_times = pd.DataFrame(
    [(times[i], concept_id)
     for i, concepts_list in enumerate(concepts_lists)
     for concept_id in concepts_list],
    columns=['time', 'concept_id'])
concepts_times.head()
# (If you know how to do this better (ideally a function to unpack any column), let me know.)

# filter programming concepts
programming_concepts_times = pd.merge(concepts_times, programming_concepts)
programming_concepts_times.head()

# calculate median for each programming concept
medians = programming_concepts_times.groupby(['concept_id', 'name']).median()
medians

# plot
programming_concepts_times['concept'] = programming_concepts_times['name'].apply(lambda x: x.split('_')[-1].lower())
programming_concepts_times[['concept', 'time']].boxplot(by='concept')
Let's start with a string lightning round to warm up. For each of the five strings below, predict what len() would return when passed that string. Use the variable length to record your answer, then run the cell to check whether you were right.

0a.
a = ""
length = ____
q0.a.check()
notebooks/python/raw/ex_6.ipynb
Kaggle/learntools
apache-2.0
0b.
b = "it's ok"
length = ____
q0.b.check()
0c.
c = 'it\'s ok'
length = ____
q0.c.check()
0d.
d = """hey"""
length = ____
q0.d.check()
0e.
e = '\n'
length = ____
q0.e.check()
1. There is a saying that "Data scientists spend 80% of their time cleaning data, and 20% of their time complaining about cleaning data." Let's see if you can write a function to help clean US zip code data. Given a string, it should return whether or not that string represents a valid zip code. For our purposes, a valid zip code is any string consisting of exactly 5 digits. HINT: str has a method that will be useful here. Use help(str) to review a list of string methods.
def is_valid_zip(zip_code):
    """Returns whether the input string is a valid (5 digit) zip code
    """
    pass

# Check your answer
q1.check()

#%%RM_IF(PROD)%%
def is_valid_zip(zip_code):
    """Returns whether the input string is a valid (5 digit) zip code
    """
    return len(zip_code) == 5 and zip_code.isdigit()

q1.assert_check_passed()

#%%RM_IF(PROD)%%
def is_valid_zip(zip_code):
    """Returns whether the input string is a valid (5 digit) zip code
    """
    return len(zip_code) == 5

q1.assert_check_failed()

#_COMMENT_IF(PROD)_
q1.hint()
#_COMMENT_IF(PROD)_
q1.solution()
2. A researcher has gathered thousands of news articles. But she wants to focus her attention on articles including a specific word. Complete the function below to help her filter her list of articles. Your function should meet the following criteria: Do not include documents where the keyword string shows up only as a part of a larger word. For example, if she were looking for the keyword “closed”, you would not include the string “enclosed.” She does not want you to distinguish upper case from lower case letters. So the phrase “Closed the case.” would be included when the keyword is “closed” Do not let periods or commas affect what is matched. “It is closed.” would be included when the keyword is “closed”. But you can assume there are no other types of punctuation.
def word_search(doc_list, keyword):
    """
    Takes a list of documents (each document is a string) and a keyword.
    Returns list of the index values into the original list for all documents
    containing the keyword.

    Example:
    doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
    >>> word_search(doc_list, 'casino')
    >>> [0]
    """
    pass

# Check your answer
q2.check()

#_COMMENT_IF(PROD)_
q2.hint()
#_COMMENT_IF(PROD)_
q2.solution()
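The criteria above can be satisfied with simple token normalization: split on whitespace, strip periods and commas, and lowercase before comparing. The sketch below is one possible approach, not the notebook's official answer (that lives behind q2.solution()); the name word_search_sketch is mine:

```python
def word_search_sketch(doc_list, keyword):
    """Indices of documents containing keyword as a whole word,
    case-insensitively, ignoring periods and commas."""
    indices = []
    for i, doc in enumerate(doc_list):
        # Normalize each token: strip '.' and ',' from the ends, lowercase
        tokens = [t.strip('.,').lower() for t in doc.split()]
        if keyword.lower() in tokens:
            indices.append(i)
    return indices

doc_list = ["The Learn Python Challenge Casino.", "They bought a car", "Casinoville"]
print(word_search_sketch(doc_list, 'casino'))  # → [0]
```

Note that whole-word matching is what rules out "Casinoville" for the keyword "casino", and "enclosed" for "closed".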
3. Now the researcher wants to supply multiple keywords to search for. Complete the function below to help her. (You're encouraged to use the word_search function you just wrote when implementing this function. Reusing code in this way makes your programs more robust and readable - and it saves typing!)
def multi_word_search(doc_list, keywords):
    """
    Takes list of documents (each document is a string) and a list of keywords.
    Returns a dictionary where each key is a keyword, and the value is a list of indices
    (from doc_list) of the documents containing that keyword

    >>> doc_list = ["The Learn Python Challenge Casino.", "They bought a car and a casino", "Casinoville"]
    >>> keywords = ['casino', 'they']
    >>> multi_word_search(doc_list, keywords)
    {'casino': [0, 1], 'they': [1]}
    """
    pass

# Check your answer
q3.check()

#_COMMENT_IF(PROD)_
q3.solution()
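One possible shape for this function is sketched below. It is not the official solution, and the whole-word matching logic is inlined (rather than calling the earlier exercise's function) so the example is self-contained; the name multi_word_search_sketch is mine:

```python
def multi_word_search_sketch(doc_list, keywords):
    """Map each keyword to the indices of documents that contain it
    as a whole word, case-insensitively, ignoring periods and commas."""
    def contains(doc, kw):
        tokens = [t.strip('.,').lower() for t in doc.split()]
        return kw.lower() in tokens
    return {kw: [i for i, doc in enumerate(doc_list) if contains(doc, kw)]
            for kw in keywords}

doc_list = ["The Learn Python Challenge Casino.",
            "They bought a car and a casino",
            "Casinoville"]
print(multi_word_search_sketch(doc_list, ['casino', 'they']))
# → {'casino': [0, 1], 'they': [1]}
```

In the exercise itself, delegating to your word_search function is the cleaner design, since it avoids duplicating the matching logic.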
Synthesize the dataset

Create 1000 random integers between 0 and 100 for X, and create y such that $$ y = \beta_{0} + \beta_{1}X + \epsilon $$ where $$ \beta_{0} = 30, \quad \beta_{1} = 1.8, \quad \epsilon = \text{standard normal error} $$
import numpy as np

rand_1kx = np.random.randint(0, 100, 1000)
x_mean = np.mean(rand_1kx)
x_sd = np.std(rand_1kx)
x_mean

pop_intercept = 30
pop_slope = 1.8
error_boost = 10
# I added an error booster since without it, the correlation was too high.
pop_error = np.random.standard_normal(size=rand_1kx.size) * error_boost

y = pop_intercept + pop_slope*rand_1kx + pop_error
y_mean = np.mean(y)
y_sd = np.std(y)
y_mean
islr/verifying_clt_in_regression.ipynb
AtmaMani/pyChakras
mit
Make a scatter plot of X and y variables.
sns.jointplot(rand_1kx, y)
X follows a uniform distribution, but the error $\epsilon$ is generated from a standard normal distribution with a boosting factor. Let us plot its histogram to verify the distribution.
sns.distplot(pop_error)
Predict using population

Let us predict the coefficients and intercept using the whole dataset. We will compare this approach with the CLT approach of breaking the data into multiple subsets and averaging the coefficients and intercepts.

Using whole population
from sklearn.linear_model import LinearRegression

X_train_full = rand_1kx.reshape(-1,1)
y_train_full = y.reshape(-1,1)
y_train_full.shape

lm = LinearRegression()
lm.fit(X_train_full, y_train_full)

# print the linear model built
predicted_pop_slope = lm.coef_[0][0]
predicted_pop_intercept = lm.intercept_[0]
print("y = " + str(predicted_pop_slope) + "*X" + " + " + str(predicted_pop_intercept))
Prediction with 66% of data
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(rand_1kx, y, test_size=0.33)
print(X_train.size)

from sklearn.linear_model import LinearRegression
lm = LinearRegression()

X_train = X_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
y_train.shape

lm.fit(X_train, y_train)

# print the linear model built
predicted_subset_slope = lm.coef_[0][0]
predicted_subset_intercept = lm.intercept_[0]
print("y = " + str(predicted_subset_slope) + "*X" + " + " + str(predicted_subset_intercept))
Perform predictions and plot the charts
y_predicted = lm.predict(X_test)
residuals = y_test - y_predicted
Fitted vs Actual scatter
jax = sns.jointplot(y_test, y_predicted)
jax.set_axis_labels(xlabel='Y', ylabel='Predicted Y')

dax = sns.distplot(residuals)
dax.set_title('Distribution of residuals')

jax = sns.jointplot(y_predicted, residuals)
jax.set_axis_labels(xlabel='Predicted Y', ylabel='Residuals')

jax = sns.jointplot(y_test, residuals)
jax.set_axis_labels(xlabel='Y', ylabel='Residuals')
Predict using multiple samples
import pandas as pd

pop_df = pd.DataFrame(data={'x': rand_1kx, 'y': y})
pop_df.head()
pop_df.shape
Select 50 samples of size 50 each and perform a regression on each
sample_slopes = []
sample_intercepts = []

for i in range(0, 50):
    # perform a choice on dataframe index
    sample_index = np.random.choice(pop_df.index, size=50)
    # select the subset using that index
    sample_df = pop_df.iloc[sample_index]

    # convert to numpy and reshape the matrix for lm.fit
    sample_x = np.array(sample_df['x']).reshape(-1,1)
    sample_y = np.array(sample_df['y']).reshape(-1,1)

    lm.fit(X=sample_x, y=sample_y)
    sample_slopes.append(lm.coef_[0][0])
    sample_intercepts.append(lm.intercept_[0])
Plot the distribution of sample slopes and intercepts
mean_sample_slope = np.mean(sample_slopes)
mean_sample_intercept = np.mean(sample_intercepts)

fig, ax = plt.subplots(1, 2, figsize=(15,6))

# plot sample slopes
sns.distplot(sample_slopes, ax=ax[0])
ax[0].set_title('Distribution of sample slopes. Mean: ' + str(round(mean_sample_slope, 2)))
ax[0].axvline(mean_sample_slope, color='black')

# plot sample intercepts
sns.distplot(sample_intercepts, ax=ax[1])
ax[1].set_title('Distribution of sample intercepts. Mean: ' + str(round(mean_sample_intercept, 2)))
ax[1].axvline(mean_sample_intercept, color='black')
Conclusion

Here we compare the coefficients and intercepts obtained by the different methods to see how the CLT approach holds up.
print("Predicting using population")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_pop_intercept))
print("Error in slope: {}".format(pop_slope - predicted_pop_slope))

print("\n\nPredicting using subset")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_subset_intercept))
print("Error in slope: {}".format(pop_slope - predicted_subset_slope))

print("\n\nPredicting using a number of smaller samples")
print("------------------------------------------------")
print("Error in intercept: {}".format(pop_intercept - mean_sample_intercept))
print("Error in slope: {}".format(pop_slope - mean_sample_slope))
As before, we need to create an event path function and a severity level function.
event_names = ["event_%i" % i for i in range(n_events)]

def event_path(x):
    # Returns a list of strings with 3 elements
    return ["Type_%i" % (x/N) for N in [50, 10]] + [event_names[x]]

def severity_level(x):
    # returns 3 different severity levels: 0, 1, 2
    return x - (x/3)*3
docs/visISC_query_dialog_example.ipynb
STREAM3/visisc
bsd-3-clause
Next, we need to make a subclass or an instance of visisc.EventSelectionQuery. This class uses the <a href="http://docs.enthought.com/traits">Traits</a> library, which is also used by <a href="http://docs.enthought.com/mayavi/mayavi/">Mayavi</a>, the 3D visualization library that we use for visualizing the data. In the initialization of an instance, we need to set four Trait lists: list_of_source_ids, list_of_source_classes, list_of_event_names, and list_of_event_severity_levels. In addition, we need to set period_start_date and period_end_date. In the current version, we also need to programmatically set selected_list_of_source_ids. We also need to implement the execute_query method, as shown below. The execute_query method can access the user's selection from selected_list_of_source_ids, selected_list_of_source_classes, selected_list_of_event_names, and selected_list_of_event_severity_levels.
class MySelectionQuery(visisc.EventSelectionQuery):
    def __init__(self):
        self.list_of_source_ids = [i for i in range(n_sources*n_classes)]
        # Below: a list of pairs with id and name, where the name is shown in the GUI
        # while the id is put into the selection.
        self.list_of_source_classes = [(i, "class_%i" % i) for i in range(n_source_classes)]
        self.list_of_event_names = event_names
        # Below: a list of pairs with id and name, where the name is shown in the GUI
        # while the id is put into the selection.
        self.list_of_event_severity_levels = [(i, "Level %i" % i) for i in range(3)]
        self.period_start_date = data.T[date_column].min()
        self.period_end_date = data.T[date_column].max()

    def execute_query(self):
        query = self
        query.selected_list_of_source_ids = query.list_of_source_ids

        data_query = np.array(
            [
                data[i] for i in range(len(data)) if
                data[i][source_column] in query.selected_list_of_source_ids and
                data[i][class_column] in query.selected_list_of_source_classes and
                data[i][date_column] >= query.period_start_date and
                data[i][date_column] <= query.period_end_date
            ]
        )

        event_columns = [first_event_column + event_names.index(e)
                         for e in query.selected_list_of_event_names
                         if severity_level(first_event_column + event_names.index(e)) in
                         query.selected_list_of_event_severity_levels]

        model = visisc.EventDataModel.hierarchical_model(
            event_columns=event_columns,
            get_event_path=event_path,
            get_severity_level=severity_level,
            num_of_severity_levels=3
        )

        data_object = model.data_object(
            data_query,
            source_column=source_column,
            class_column=class_column,
            period_column=period_column,
            date_column=date_column
        )

        anomaly_detector = model.fit_anomaly_detector(data_object, poisson_onesided=True)

        vis = visisc.EventVisualization(
            model,
            13.8,
            start_day=query.period_end_date,  # yes, confusing: start_day in EventVisualization is backward looking
            precompute_cache=True  # precompute all anomaly calculations in order to speed up visualization
        )
Given that we have the query class, we can now create and open a query selection dialog where it is possible to customize the labels for source classes and the severity levels.
query = MySelectionQuery()

dialog = visisc.EventSelectionDialog(
    query,
    source_class_label="Select Machine Types",
    severity_level_label="Select Event Severity Types"
)
To open the window, we call dialog.configure_traits(). However, similarly to the previous visualization examples, we have to run it outside the Jupyter notebook by calling ipython directly.
!ipython --matplotlib=wx --gui=wx -i visISC_query_dialog_example.py
It's in a zip format, so unzip it:
!unzip 2013-Q1-Trips-History-Data.zip
lectures/week-07-20151027-more-sql.ipynb
dchud/warehousing-course
cc0-1.0
How big is it?
!wc 2013-Q1-Trips-History-Data.csv
What are its columns?
!csvcut -n 2013-Q1-Trips-History-Data.csv
Okay, let's have a look.
!head -5 2013-Q1-Trips-History-Data.csv | csvlook
Ah, that's kinda wordy. Let's cut out that first column, which we can compute for ourselves later.
!head 2013-Q1-Trips-History-Data.csv | csvcut -C1 | csvlook
That's a little bit cleaner, and the rest of the data should be useful. Let's clean up the data by removing that column and renaming the headers so they're a little easier to query.
!csvcut -C1 2013-Q1-Trips-History-Data.csv | \
    header -r "start_date,end_date,start_station,end_station,bike_id,sub_type" \
    > bikeshare.csv
Make sure you haven't lost anything!
!wc bikeshare.csv
Prepping and loading data into the database

Alright, then, let's get loading.
%load_ext sql
NOTE: See a bunch of ShimWarnings with a pink background? That's normal. It's just a heads-up about ongoing changes to IPython/Jupyter code. You can keep going. First, we create a database in mysql. Note: you can do the same thing on the command line: the CREATE DATABASE statement before the pipe is what you would issue within the mysql shell, and the mysql command after the pipe is how you get to that shell. Here we pipe the one into the other so it reads well in the notebook.
!echo "CREATE DATABASE bikedb" | mysql --user=mysqluser --password=mysqlpass
Here's how we connect the notebook up to the mysql database using a username and password. Remember that this shorthand version is possible thanks to the excellent ipython-sql Jupyter extension that we're using, otherwise you'd have to establish the connection, get a cursor, etc., like you've done explicitly in python in your other class. Not that there's anything wrong with that.
%sql mysql://mysqluser:mysqlpass@localhost/bikedb
Very easy, no? First, clean up if we're not running this for the first time.
%%sql DROP TABLE IF EXISTS bikeshare;
Next, create a table schema using DDL.
%%sql
CREATE TABLE bikeshare (
    start_date DATETIME,
    end_date DATETIME,
    start_station VARCHAR(100),
    end_station VARCHAR(100),
    bike_id CHAR(7),
    sub_type CHAR(10)
)
Just to verify it worked:
%%sql SELECT COUNT(*) FROM bikeshare
It worked! We just don't have any data in there yet. Now we load the data using LOAD DATA INFILE. You can do pretty much the same thing from the bash shell using mysqlimport and a bunch of options. It'll read better here in the notebook with the options spelled out. Docs for LOAD DATA INFILE are available at https://dev.mysql.com/doc/refman/5.1/en/load-data.html. Note: this assumes you've placed your bikeshare file in the directory /vagrant. Note also: I had to look up the mysql date formatting docs to get this date format conversion correct. It took me a few trials and errors before I got it right. This is an extremely common thing to have to do if you ever spend time wrangling data - every system handles dates in its own way.
%%sql
LOAD DATA INFILE '/vagrant/bikeshare.csv'
REPLACE
INTO TABLE bikeshare
FIELDS TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
IGNORE 1 LINES
(@start_date, @end_date, start_station, end_station, bike_id, sub_type)
SET start_date = STR_TO_DATE(@start_date, '%c/%e/%Y %k:%i'),
    end_date = STR_TO_DATE(@end_date, '%c/%e/%Y %k:%i')
Note: if the above command fails for you with a "file not found" error, please read these notes about apparmor. Follow that advice, and add a line like it shows, e.g.: /vagrant/* r ...to the file, or whatever path you have your data on, reload apparmor, and try again. I had to do this, and it worked perfectly after I made that change.

Exploring your data

Now that we've loaded our data, or we think we have, let's just verify it. Should be the same row count as what csvkit and wc gave us.
%%sql SELECT COUNT(*) FROM bikeshare
Looks good! Let's look at the data a little.
%%sql SELECT * FROM bikeshare LIMIT 5
How does MySQL construct this query, or more specifically, what's its execution plan? We can find out with EXPLAIN. For more about how to read MySQL 5.5's query plan, see https://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html.
%%sql EXPLAIN SELECT COUNT(*) FROM bikeshare LIMIT 5
This says "using no keys, we're going to just scan roughly 395,390 rows, sans indexes, to answer this query."
%%sql
SELECT MAX(start_date) FROM bikeshare

%%sql
EXPLAIN SELECT MAX(start_date) FROM bikeshare
Pretty much the same thing. You can't get the max without looking at all of the values if there is no index.
%%sql
SELECT COUNT(*) FROM bikeshare WHERE start_station LIKE "%dupont%"

%%sql
EXPLAIN SELECT COUNT(*) FROM bikeshare WHERE start_station LIKE "%dupont%"
Now we see "using where" under "extra", so we know there's a filter operation, but that's about the only change. What if we add more things to filter on?
%%sql
EXPLAIN SELECT start_station, end_station, COUNT(*)
FROM bikeshare
WHERE start_station LIKE "%dupont%"
  AND end_station LIKE "%21st%"
  AND start_date LIKE "2013-02-14%"
GROUP BY start_station, end_station
ORDER BY start_station, end_station
Ah, some more info - it looks like it's using a temporary relation to store intermediate results, perhaps for the GROUP BY, then a sort to handle ORDER BY. Still no indexes, though. Let's change that.
%%sql
CREATE INDEX idx_start_station ON bikeshare (start_station)

%%sql
EXPLAIN SELECT start_station, end_station, COUNT(*)
FROM bikeshare
WHERE start_station LIKE "21st%"
  AND start_date LIKE "2013-02-14%"
GROUP BY start_station, end_station
ORDER BY start_station, end_station
I changed the query a little bit to use the index; do you see the difference? It found search keys in the index, and the estimated row count went down by an order of magnitude. That's the power of indexes: they help even on simple queries like this.
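To build intuition for why the prefix form `LIKE "21st%"` can use the index while `LIKE "%dupont%"` cannot: a B-tree index keeps keys sorted, so a prefix match becomes a small range scan, while a substring match still has to touch every key. A rough stand-in using a sorted Python list (the station names here are made up):

```python
import bisect

stations = ["14th & V St NW", "21st & I St NW", "21st & M St NW", "Dupont Circle"]
index = sorted(stations)  # stand-in for a B-tree index on start_station

def prefix_range(index, prefix):
    """Find all keys matching LIKE 'prefix%' with two binary
    searches instead of scanning every row."""
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_left(index, prefix + "\uffff")
    return index[lo:hi]

print(prefix_range(index, "21st"))  # ['21st & I St NW', '21st & M St NW']
```

A leading wildcard like `"%dupont%"` gives the binary search nothing to anchor on, which is why the earlier query still scanned every row even with the index in place.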
%%sql EXPLAIN SELECT DISTINCT start_station FROM bikeshare ORDER BY start_station
What's that 201 value for rows? Maybe the actual count of distinct values. We can test that:
%%sql
SELECT COUNT(*)
FROM (
    SELECT DISTINCT start_station FROM bikeshare
) made_up_subquery_alias_name
There you go, that's exactly the answer. How about that MAX() query we tried a little while back?
%%sql
SELECT MAX(start_date) FROM bikeshare

%%sql
EXPLAIN SELECT MAX(start_date) FROM bikeshare
Let's create another index on start_date to see what the effect on the query plan will be.
%%sql
CREATE INDEX idx_start_date ON bikeshare (start_date)

%%sql
SELECT MAX(start_date) FROM bikeshare
Same result, but...
%%sql EXPLAIN SELECT MAX(start_date) FROM bikeshare
That's new! In this case it doesn't have to look at any rows, it can just look at one end of the index. We've optimized away the need to even look at the table. Let's go back to COUNT() and try a few more things before we move on.
%%sql
EXPLAIN SELECT COUNT(*) FROM bikeshare

%%sql
EXPLAIN SELECT COUNT(start_date) FROM bikeshare

%%sql
EXPLAIN SELECT COUNT(end_date) FROM bikeshare
Do you see what happened there? COUNT(start_date) can scan the much smaller idx_start_date index instead of the table, while COUNT(end_date) has no index to use.

Normalizing attributes

Let's look at a few tasks you might need to perform if you were normalizing this dataset. Remember that in normalization, we reduce redundancy with the goal of consistency. What's redundant? Well, the station names, for one.
%%sql
SELECT COUNT(DISTINCT start_station) FROM bikeshare

%%sql
SELECT COUNT(DISTINCT end_station) FROM bikeshare
Hmm, they're different. Let's put them together.
%%sql
SELECT COUNT(DISTINCT station)
FROM (
    SELECT start_station AS station FROM bikeshare
    UNION
    SELECT end_station AS station FROM bikeshare
) a
We'll create a table to hold the names of stations. Each station name should be represented once, and we'll assign a primary key to each in the form of a unique integer.
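The plan we're about to execute in SQL can be sketched in a few lines of Python: collect the distinct names, hand each one a surrogate integer key, and rewrite the trips to carry keys instead of repeated strings. The trip data here is made up for illustration:

```python
# Miniature version of the normalization below: each distinct name gets a
# surrogate integer id (like AUTO_INCREMENT), and the fact rows store
# the id instead of repeating the string.
trips = [
    ("Dupont Circle", "21st & I St NW"),
    ("Dupont Circle", "14th & V St NW"),
    ("21st & I St NW", "Dupont Circle"),
]

names = sorted({station for trip in trips for station in trip})
station_id = {name: i + 1 for i, name in enumerate(names)}

normalized = [(station_id[a], station_id[b]) for a, b in trips]
print(normalized)  # [(3, 2), (3, 1), (2, 3)]
```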
%%sql
CREATE TABLE station (
    id SMALLINT NOT NULL AUTO_INCREMENT,
    name VARCHAR(100),
    PRIMARY KEY (id)
)

%%sql
SELECT COUNT(*) FROM station
Looks good. Now we can load the data with an INSERT that draws from our previous query. We can skip specifying the id because MySQL will do that for us. Note: every database handles this issue its own way. This is a nice convenience in MySQL; other database backends require more work.
%%sql
INSERT INTO station (name)
SELECT DISTINCT station AS name
FROM (
    SELECT start_station AS station FROM bikeshare
    UNION
    SELECT end_station AS station FROM bikeshare
) a

%%sql
SELECT * FROM station LIMIT 10
It worked. Now we can update the bikeshare table to add columns for station identifiers.
%%sql ALTER TABLE bikeshare ADD COLUMN start_station_id SMALLINT AFTER start_station
Looks good. But what exactly just happened?
%%sql
DESCRIBE bikeshare

%%sql
SELECT * FROM bikeshare LIMIT 5
What just happened? Why are all the start_station_id values None? Let's fill in those values with our new identifiers from the station table.
%%sql
UPDATE bikeshare
INNER JOIN station ON bikeshare.start_station = station.name
SET bikeshare.start_station_id = station.id

%%sql
SELECT * FROM bikeshare LIMIT 5

%%sql
SELECT * FROM station WHERE id = 161
Great, now we can drop start_station from bikeshare and save a lot of space.
%%sql
ALTER TABLE bikeshare DROP COLUMN start_station

%%sql
DESCRIBE bikeshare

%%sql
SELECT * FROM bikeshare LIMIT 5
Worked! And we can repeat the process for end_station.
%%sql
ALTER TABLE bikeshare ADD COLUMN end_station_id SMALLINT AFTER end_station

%%sql
UPDATE bikeshare
INNER JOIN station ON bikeshare.end_station = station.name
SET bikeshare.end_station_id = station.id

%%sql
ALTER TABLE bikeshare DROP COLUMN end_station

%%sql
SELECT * FROM bikeshare LIMIT 5
A lot leaner, right?

JOINs and indexes

Now let's look at queries that return station names, thus requiring a JOIN across the two tables. Keep in mind our two-table schema.
%%sql
DESCRIBE station

%%sql
DESCRIBE bikeshare
Let's try a basic query that looks for the most busy station pairs.
%%sql
SELECT COUNT(*) AS c, start_station_id, end_station_id
FROM bikeshare
GROUP BY start_station_id, end_station_id
ORDER BY c DESC
LIMIT 5
Now let's liven it up by joining to station and including station names. We'll need to join twice, using two aliases. Worked just fine. Let's look under the hood, though.
%%sql
SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
  AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Looks good, and it's in my neighborhood. :) Let's look at the query plan for all this:
%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS start_station, station_2.name AS end_station
FROM station AS station_1, station AS station_2, bikeshare
WHERE bikeshare.start_station_id = station_1.id
  AND bikeshare.end_station_id = station_2.id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Not bad, but it's doing a full table scan on bikeshare. Let's see if some indexes would help with the two joins.
%%sql
CREATE INDEX idx_start_station_id ON bikeshare (start_station_id)

%%sql
CREATE INDEX idx_end_station_id ON bikeshare (end_station_id)

%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
  AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Well, it's hard to say how much better this will perform without a lot more data. A COUNT operation simply needs to be able to count everything, if the level of granularity it's counting doesn't already have an easy lookup like we saw before. Sometimes you just don't feel the pain of scale until you hit a scaling threshold that varies with the shape of your data. But see the possible_keys in the first row? That means the optimizer sees the indexes present and will attempt to use those to at least organize the query a little better than it would be able to do without them. Let's try one more thing: we can create an index on multiple columns that matches our query more precisely. It's inefficient to look up one column, then another; after all, we're looking for combinations of both. A multiple-column index can precompute that.
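The benefit of a composite key can be illustrated outside SQL: counting trips by keying on the (start, end) pair directly takes one lookup per row, rather than one lookup per column. A toy sketch with made-up station ids:

```python
from collections import Counter

# Made-up (start_station_id, end_station_id) pairs.
trips = [(31201, 31200), (31201, 31200), (31200, 31623), (31201, 31200)]

# Keying on the tuple is the dictionary analogue of an index on
# (start_station_id, end_station_id): each pair is one composite key.
pair_counts = Counter(trips)
print(pair_counts.most_common(1))  # [((31201, 31200), 3)]
```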
%%sql
CREATE INDEX idx_stations ON bikeshare (start_station_id, end_station_id)

%%sql
EXPLAIN SELECT COUNT(*) AS c, station_1.name AS s1_name, station_2.name AS s2_name
FROM bikeshare, station AS station_1, station AS station_2
WHERE station_1.id = bikeshare.start_station_id
  AND station_2.id = bikeshare.end_station_id
GROUP BY bikeshare.start_station_id, bikeshare.end_station_id
ORDER BY c DESC
LIMIT 5
Let's put the CSMF Accuracy calculation right at the top
def measure_prediction_quality(csmf_pred, y_test):
    """Calculate population-level prediction quality (CSMF Accuracy)

    Parameters
    ----------
    csmf_pred : pd.Series, predicted distribution of causes
    y_test : array-like, labels for test dataset

    Returns
    -------
    csmf_acc : float
    """
    csmf_true = pd.Series(y_test).value_counts() / float(len(y_test))
    csmf_acc = 1 - np.sum(np.absolute(csmf_pred - csmf_true)) / (2 * (1 - csmf_true.min()))
    # cccsmf_acc = (csmf_acc - 0.632) / (1 - 0.632)
    return csmf_acc
2-tutorial-notebook-solutions/4-va_csmf.ipynb
aflaxman/siaman16-va-minitutorial
gpl-3.0
How can I test this?
csmf_pred = pd.Series({'cause_1': .5, 'cause_2': .5})
y_test = ['cause_1', 'cause_2']
measure_prediction_quality(csmf_pred, y_test)

csmf_pred = pd.Series({'cause_1': 0., 'cause_2': 1.})
y_test = ['cause_1']*1000 + ['cause_2']
measure_prediction_quality(csmf_pred, y_test)
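To double-check what those two calls should return, here is a standalone recomputation of CSMF accuracy without pandas. In the first case the prediction matches the true 50/50 split exactly, so accuracy should be 1; in the second, all predicted mass sits on the rarest cause, which is the worst case and should give 0:

```python
# CSMF accuracy = 1 - sum|pred - true| / (2 * (1 - min(true))),
# recomputed here without pandas as a sanity check.
def csmf_accuracy(pred, true):
    total = sum(abs(pred[c] - true[c]) for c in true)
    return 1 - total / (2 * (1 - min(true.values())))

# Perfect prediction of a 50/50 split.
print(csmf_accuracy({'cause_1': .5, 'cause_2': .5}, {'cause_1': .5, 'cause_2': .5}))  # 1.0

# Worst case: all predicted mass on the rarest cause.
true = {'cause_1': 1000 / 1001, 'cause_2': 1 / 1001}
worst = csmf_accuracy({'cause_1': 0., 'cause_2': 1.}, true)
print(abs(worst) < 1e-9)  # True
```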
Things we don't have time for

An approach to really do the cross-validation out of sample:
import sklearn.base
import sklearn.model_selection

val = {}
module = 'Adult'
val[module] = pd.read_csv('../3-data/phmrc_cleaned.csv')

def get_data(module):
    X = np.array(val[module].filter(regex='(^s[0-9]+|age|sex)').fillna(0))
    y = np.array(val[module].gs_text34)
    site = np.array(val[module].site)
    return X, y, site

X, y, site = get_data(module)
X.shape

def my_resample(X, y, N2, csmf_new):
    """Randomly resample X and y so that resampled cause distribution
    follows csmf_new and there are N2 samples total

    Parameters
    ----------
    X : array-like, feature vectors
    y : array-like, corresponding labels
    N2 : int, number of samples in resampled results
    csmf_new : pd.Series, distribution of resampled data

    Returns
    -------
    X_new : array-like, resampled feature vectors
    y_new : array-like, corresponding resampled labels
    """
    N, I = X.shape
    assert len(y) == N, 'X and y must have same length'

    causes = csmf_new.index
    J, = causes.shape  # trailing comma for sneaky numpy reasons

    # generate count of examples for each cause according to csmf_new
    cnt_new = np.random.multinomial(N2, csmf_new)

    # replace y_new with original values
    y_new = []
    for cnt, cause in zip(cnt_new, causes):
        for n_j in range(cnt):
            y_new.append(cause)
    y_new = np.array(y_new)

    # resample rows of X appropriately
    X_new = np.zeros((len(y_new), I))
    for j in causes:
        new_rows, = np.where(y_new == j)  # trailing comma for sneaky numpy reasons
        candidate_rows, = np.where(y == j)  # trailing comma for sneaky numpy reasons
        assert len(candidate_rows) > 0, 'must have examples of each resampled cause'
        old_rows = np.random.choice(candidate_rows, size=len(new_rows), replace=True)
        X_new[new_rows,] = X[old_rows,]
    return X_new, y_new

def random_allocation(X_train, y_train):
    """make predictions by random allocation"""
    clf = sklearn.base.BaseEstimator()

    def my_predict(X_test):
        N = len(X_test)
        J = len(np.unique(y_train))
        y_pred = np.ones((N, J)) / float(J)
        csmf_pred = pd.Series(y_pred.sum(axis=0), index=np.unique(y_train)) / N
        return csmf_pred
    clf.my_predict = my_predict
    return clf

def my_key(module, clf):
    return '{}-{}'.format(module, clf)

results = []

def measure_csmf_acc(my_fit_predictor, replicates=10):
    """
    my_fit_predictor : function that takes X, y,
        returns clf object with my_predict method;
        clf.my_predict takes X_test, returns csmf_pred

    Returns
    -------
    stores calculation in results dict, returns calc for adults
    """
    X, y, site = get_data(module)

    acc = []
    np.random.seed(12345)  # set seed for reproducibility
    cv = sklearn.model_selection.StratifiedShuffleSplit(n_splits=replicates, test_size=0.25)

    for train_index, test_index in cv.split(X, y):
        # make train test split
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]

        # resample train set for equal class weights
        J = len(np.unique(y))
        csmf_flat = pd.Series(np.ones(J)/J, index=np.unique(y))
        X_train, y_train = my_resample(X_train, y_train, J*100, csmf_flat)

        clf = my_fit_predictor(X_train, y_train)

        # resample test set to have uninformative cause distribution
        csmf_rand = pd.Series(np.random.dirichlet(np.ones(J)), index=np.unique(y))
        X_test_resamp, y_test_resamp = my_resample(X_test, y_test, J*100, csmf_rand)

        # make predictions
        csmf_pred = clf.my_predict(X_test_resamp)

        # test predictions
        csmf_acc = measure_prediction_quality(csmf_pred, y_test_resamp)

        results.append({'csmf_acc': csmf_acc, 'key': my_key(module, clf)})

    df = pd.DataFrame(results)
    g = df.groupby('key')
    return g.csmf_acc.describe().unstack()

baseline_csmf_acc = measure_csmf_acc(random_allocation)
baseline_csmf_acc

import sklearn.naive_bayes

def nb_pr_allocation(X_train, y_train):
    clf = sklearn.naive_bayes.BernoulliNB()
    clf.fit(X_train, y_train)

    def my_predict(X_test):
        y_pred = clf.predict_proba(X_test)
        csmf_pred = pd.Series(y_pred.sum(axis=0), index=clf.classes_) / float(len(y_pred))
        return csmf_pred
    clf.my_predict = my_predict
    return clf

measure_csmf_acc(nb_pr_allocation)
Introduction

Classical electromagnetism is most often described using Maxwell's equations. Instead, we can also describe it using a Lagrange density and an action, which is the spacetime integral over the Lagrange density. The field is represented by a 4-vector in the spacetime algebra, where the first component is the electric potential and the last three components are the magnetic vector potential. Such a 4-vector is given at every point in spacetime. The Lagrangian density at a spacetime point $X = (t, x, y, z)$ for such a 4-vector field $A(X)$, with the speed of light $c = 1$ and without external sources, is given by the following equation:

$\mathcal{L}(A, X) = \langle \nabla_X A(X) * \widetilde{\nabla_X A}(X) \rangle_0$

The principle of stationary action then states that the classical solution of the field is achieved when the action $S(A) = \int_{X}{\mathcal{L}(A, X) dX}$ does not change anymore, that is, $\delta S(A) = 0$.

Goal

Below we will obtain an entire spacetime field configuration $A(X)$ given only some boundary conditions and a function for the action given $A$. We will then use TensorFlow's optimizer to find a field configuration that makes the action stationary.

Create the spacetime algebra

Here we initialize a tfga.GeometricAlgebra instance with bases $e_0=e_t, e_1=e_x, e_2=e_y, e_3=e_z$ and corresponding metric $[-1, 1, 1, 1]$. We will use this when calculating the action later, as we need the geometric product and reversion operations.
from tfga import GeometricAlgebra

ga = GeometricAlgebra([-1, 1, 1, 1])
notebooks/em.ipynb
RobinKa/tfga
mit
Calculate the action

Now we create a function which returns the action $S$ given a field configuration $A(X)$ on a discretized spacetime lattice of size $[N, N, N, N]$. We use the following boundary conditions for $A(X)$:

$A_{t=-1} = 0, A_{t=N} = 0$

$A_{x=-1} = 10 \sin(4 \pi t / N) e_0, A_{x=N} = -5 e_0$

$A_{y=-1} = 0, A_{y=N} = 0$

$A_{z=-1} = 0, A_{z=N} = 0$

As a reminder, $e_0$ is the electric potential part of the 4-vector, so we have a periodic sine electric potential that changes over time (two periods in total) with amplitude 10 at the lower x boundary, and a constant negative electric potential of -5 at the upper x boundary.
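The padded finite differences used in the function below can be sketched in one dimension with plain Python, mirroring the small worked example in the code comments (zero boundary, grid spacing 1):

```python
# Padding turns the derivative into a single elementwise subtraction:
# pad the field with its boundary value (zero here) on each side, then
# subtract the left-padded copy from the right-padded copy.
a = [1.0, 2.0, 3.0]

padded_right = a + [0.0]      # [1, 2, 3, 0] value beyond the upper boundary
padded_left = [0.0] + a       # [0, 1, 2, 3] value beyond the lower boundary
diff = [r - l for r, l in zip(padded_right, padded_left)]

print(diff)  # [1.0, 1.0, 1.0, -3.0]
```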
def get_action(config_a_variable):
    # config_a_variable will be of shape [N, N, N, N, 4].
    # The last axis' values are the e0, e1, e2, e3 parts of the multivector.

    # Finite differences in each direction using padding.
    # Example with zero padding (ie. zeros on the boundary):
    #     1 2 3
    #     1 2 3 0  padded right
    #   - 0 1 2 3  padded left
    #   = 1 1 1 -3 padded right - padded left
    # As spacing we use 1 so we don't need to divide by anything here.
    # Also use the boundary conditions in the padded values here.
    # This gets a bit verbose because of the pad syntax, especially since we
    # only want to pad the first index of the last axis with non-zeros.

    # Create time-varying boundary condition. Start with sine of shape [N].
    # Then reshape to [N, 1, N, N, 1] which we can concatenate with the
    # original values.
    pad_values = 10.0 * tf.sin(2.0 * tf.range(grid_size[0], dtype=tf.float32) *
                               2.0 * tf.constant(np.pi, dtype=tf.float32) / grid_size[0])
    pad_values = tf.expand_dims(pad_values, axis=-1)
    pad_values = tf.expand_dims(pad_values, axis=-1)
    pad_values = tf.expand_dims(pad_values, axis=-1)
    pad_values = tf.expand_dims(pad_values, axis=-1)
    pad_values = tf.tile(pad_values, [1, 1, grid_size[0], grid_size[0], 1])

    config_left_pad_x = tf.concat([
        tf.concat([pad_values, config_a_variable[..., :1]], axis=1),
        tf.pad(config_a_variable[..., 1:], [[0, 0], [1, 0], [0, 0], [0, 0], [0, 0]]),
    ], axis=-1)

    config_right_pad_x = tf.concat([
        tf.pad(config_a_variable[..., :1], [[0, 0], [0, 1], [0, 0], [0, 0], [0, 0]],
               constant_values=-5),
        tf.pad(config_a_variable[..., 1:], [[0, 0], [0, 1], [0, 0], [0, 0], [0, 0]]),
    ], axis=-1)

    config_left_pad_y = tf.concat([
        tf.pad(config_a_variable[..., :1], [[0, 0], [0, 0], [1, 0], [0, 0], [0, 0]]),
        tf.pad(config_a_variable[..., 1:], [[0, 0], [0, 0], [1, 0], [0, 0], [0, 0]]),
    ], axis=-1)

    config_dt_a = (
        tf.pad(config_a_variable, [[0, 1], [0, 0], [0, 0], [0, 0], [0, 0]]) -
        tf.pad(config_a_variable, [[1, 0], [0, 0], [0, 0], [0, 0], [0, 0]])
    )

    config_dx_a = config_right_pad_x - config_left_pad_x

    config_dy_a = (
        tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 1], [0, 0], [0, 0]]) -
        config_left_pad_y
    )

    config_dz_a = (
        tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 0], [0, 1], [0, 0]]) -
        tf.pad(config_a_variable, [[0, 0], [0, 0], [0, 0], [1, 0], [0, 0]])
    )

    # Convert to multivectors so we can use GA ops we need in the Lagrangian:
    # the geometric product and reversion.
    config_dt_a = ga.from_tensor_with_kind(config_dt_a, "vector")
    config_dx_a = ga.from_tensor_with_kind(config_dx_a, "vector")
    config_dy_a = ga.from_tensor_with_kind(config_dy_a, "vector")
    config_dz_a = ga.from_tensor_with_kind(config_dz_a, "vector")

    # Sum all the derivatives according to the action / Lagrangian
    # and return a single scalar value
    return (
        tf.reduce_sum(ga.geom_prod(config_dt_a, ga.reversion(config_dt_a))[..., 0]) +
        tf.reduce_sum(ga.geom_prod(config_dx_a, ga.reversion(config_dx_a))[..., 0]) +
        tf.reduce_sum(ga.geom_prod(config_dy_a, ga.reversion(config_dy_a))[..., 0]) +
        tf.reduce_sum(ga.geom_prod(config_dz_a, ga.reversion(config_dz_a))[..., 0])
    )
Initialize the 4-vector field variable randomly
grid_size = [16, 16, 16, 16]
config_a_variable = tf.Variable(tf.random.normal([*grid_size, 4], seed=0))
Optimize the 4-vector field variable to make the action stationary

In order to make the action stationary we use a loss function that is minimal when the action is stationary (i.e. the gradient of the action with respect to the field configuration is 0). We use the mean squared error to create such a loss function, although other functions such as the absolute value would work too. We use TensorFlow's Adam optimizer to find a field configuration which minimizes the loss.
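The idea of minimizing the squared gradient of the action, rather than the action itself, can be demonstrated with a one-dimensional toy example in plain Python; everything here (the function $S(x) = (x-3)^2$, the learning rate) is made up for illustration:

```python
# For S(x) = (x - 3)**2 the stationary point is x = 3.
# Instead of minimizing S, minimize L(x) = S'(x)**2.
# Its gradient is L'(x) = 2 * S'(x) * S''(x) = 8 * (x - 3),
# worked out analytically here in place of the nested GradientTape.
x = 10.0
lr = 0.05
for _ in range(200):
    x -= lr * 8.0 * (x - 3.0)

print(round(x, 4))  # 3.0
```

Each step shrinks the distance to the stationary point by a constant factor, so gradient descent on the squared gradient converges to the same point a direct minimization of $S$ would find.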
optimizer = tf.optimizers.Adam(0.01)

@tf.function
def train_step(config_a_variable):
    # Principle of stationary action:
    # Minimize the distance of the gradient of the action to zero
    # with respect to our field
    with tf.GradientTape() as tape_outer:
        tape_outer.watch(config_a_variable)
        with tf.GradientTape() as tape:
            tape.watch(config_a_variable)
            loss = get_action(config_a_variable)
        grads = tape.gradient(loss, [config_a_variable])
        grads_mse = tf.reduce_mean(tf.square(grads))
    grads2 = tape_outer.gradient(grads_mse, [config_a_variable])
    optimizer.apply_gradients(zip(grads2, [config_a_variable]))

for i in range(3000):
    train_step(config_a_variable)
Extract and visualize the optimized electric field

Now we can take the result, that is, the $A$ at every spacetime point, and visualize it. Obviously we can't visualize a 4-dimensional 4-vector field. However, we can look at individual 2D slices of the electric potential field, which is the first component of the 4-vector, where the other two coordinates take on a specific value.
# Plot electric potential slices. We are not plotting the boundaries here.
plt.figure(figsize=(7, 7))
plt.imshow(config_a_variable[..., 0, 0, 0])
plt.colorbar()
plt.title("Electric potential in TX plane Y=0, Z=0")
plt.xlabel("X")
plt.ylabel("T")
plt.show()

plt.figure(figsize=(7, 7))
plt.imshow(config_a_variable[..., 0, 5, :, :, 0])
plt.colorbar()
plt.title("Electric potential in YZ plane T=0, X=5")
plt.xlabel("Z")
plt.ylabel("Y")
plt.show()

plt.figure(figsize=(7, 7))
plt.imshow(config_a_variable[..., 2, :, :, 0, 0])
plt.colorbar()
plt.title("Electric potential in XY plane T=2, Z=0")
plt.xlabel("Y")
plt.ylabel("X")
plt.show()
In the first figure we can see the potential close to X=0 (where we applied the sine boundary condition) changing over time. The second figure shows the YZ slice at T=0, X=5, where the potential is almost constant but still has a radial symmetry. The last figure shows the XY slice at T=2, Z=0, where (judging by the first figure) the potential takes its maximum value around X=0. We can also see a negative potential on the upper X boundary, where we applied the constant negative electric potential boundary condition. We can also visualize the XY slices over time in a video. For this I saved the XY slices at all times and converted them to a webm using ffmpeg. Here we can see the electric potential close to X=0 changing over time, as we expected from the boundary condition. (Direct link: em_output/electric_potential.webm)
Video("./em_output/electric_potential.webm")
Next we can look at the electric vector field corresponding to the electric potential: $E = -\nabla_{x,y,z} \langle A(X) \rangle_{e0} - \nabla_t \langle A(X) \rangle_{e1,e2,e3}$
def draw_electric_field_xy(t, z):
    # Extract XY slice of electric potential [T=t, X, Y, Z=z, 0]
    electric_potential = config_a_variable[t, :, :, z, 0]
    magnetic_potential_t = config_a_variable[t, :, :, z, 1:]
    magnetic_potential_t2 = config_a_variable[t+1, :, :, z, 1:]

    # The electric field can be obtained from the 4-vector potential:
    # E = - (d/dx, d/dy, d/dz) <A>_e0 - d/dt <A>_e1,e2,e3
    # We can use finite differences again to approximate the derivatives.
    # We also need to get rid of the last element of the respective other axis,
    # since we couldn't calculate the last finite difference as that would
    # require using the boundary condition (which is possible, but would
    # require extra code).

    # Start with -(d/dx, d/dy, d/dz) <A>_e0
    ex = -(electric_potential[1:, :-1] - electric_potential[:-1, :-1])
    ey = -(electric_potential[:-1, 1:] - electric_potential[:-1, :-1])

    # Calculate -d/dt <A>_e1,e2,e3 and add it to the previous calculation
    dt_mag_a = -(magnetic_potential_t2[:-1, :-1] - magnetic_potential_t[:-1, :-1])
    ex += dt_mag_a[..., 0]
    ey += dt_mag_a[..., 1]

    ys, xs = np.meshgrid(np.arange(ex.shape[0]), np.arange(ex.shape[1]))

    plt.figure(figsize=(10, 10))
    plt.quiver(ys, xs, ey, ex, scale=10, scale_units="inches")
    plt.xlabel("Y")
    plt.ylabel("X")
    plt.title("Electric field XY at T=%d, Z=%d" % (t, z))

draw_electric_field_xy(t=2, z=0)
plt.show()
And again I made a video showing all the time slices. (Direct link: em_output/electric_field.webm)
Video("./em_output/electric_field.webm")
Load music data
song_data = graphlab.SFrame('song_data.gl/')
songrecommender/.ipynb_checkpoints/Song recommender-checkpoint.ipynb
anilcs13m/MachineLearning_Mastering
gpl-2.0
Explore data

Music data shows how many times a user listened to a song, as well as the details of the song.
song_data.head(5)