Columns: markdown, code, path, repo_name, license
<img src="inner_join.png" width="50%" /> We can utilize the pandas merge method to join our members DataFrame and our rsvps DataFrame:
joined_with_rsvps_df = pd.merge(members_df, rsvps_df, left_on='anon_id', right_on='member_id') joined_with_rsvps_df.head(3) joined_with_rsvps_df.columns
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Now that we have a ton of data, let's see what interesting things we can discover. Let's look at some stats on male attendees vs. female attendees. First we can use the isin method to make DataFrames for male and female members.
male_attendees = joined_with_rsvps_df[joined_with_rsvps_df['gender'].isin(['male', 'mostly_male'])] male_attendees.tail(3) female_attendees = joined_with_rsvps_df[joined_with_rsvps_df['gender'].isin(['female', 'mostly_female'])] female_attendees.tail(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next we can use the sum method to count the number of male and female attendees per event and create a Series for each.
event_ids = [ '102502622', '106043892', '107740582', '120425212', '133803672', '138415912', '144769822', '149515412', '160323532', '168747852', '175993712', '182860422', '206754182', '215265722', '219055217', '219840555', '220526799', '221245827', '225488147', '89769502', '98833672' ] male_attendees[event_i...
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
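A minimal sketch of the per-event counting described above, assuming each event-ID column holds a 0/1 attendance indicator (the original cell is truncated, so that layout is an assumption):
# Sum each event-ID column to count attendees of each gender per event.
male_counts = male_attendees[event_ids].sum()      # Series indexed by event id
female_counts = female_attendees[event_ids].sum()
male_counts.head(3)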
We can then recombine the male and female Series into a new DataFrame.
gender_attendance = pd.DataFrame({'male': male_attendees[event_ids].sum(), 'female': female_attendees[event_ids].sum()}) gender_attendance.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
And then we can use merge again to combine this with our events DataFrame.
events_with_gender_df = pd.merge(events_df, gender_attendance, left_on='id', right_index=True) events_with_gender_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Then we can plot the attendance by gender over time.
gender_df = events_with_gender_df[['female', 'male']] gender_df.plot(title='Attendance by gender over time')
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
This might be easier to interpret by looking at the percentage of females in attendance. We can use the div (divide) method to calculate this.
female_ratio = gender_df['female'].div(gender_df['male'] + gender_df['female']) female_ratio.plot(title='Percentage female attendance over time', ylim=(0.0, 1.0))
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
The members DataFrame also has some other interesting stuff in it. Let's take a look at the topics column.
members_df['topics'].iloc[0]
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Let's see if we can identify any trends in members' topics. Let's start off by identifying the most common topics:
from collections import Counter topic_counter = Counter() for m in members_df['topics']: topic_counter.update([t['name'] for t in m]) topic_counter.most_common(20)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next let's create a new DataFrame where each column is one of the top 100 topics and each row is a member. We'll set each cell's value to either 0 or 1 to indicate whether or not that member has that topic.
top_100_topics = set([t[0] for t in topic_counter.most_common(100)]) topic_member_map = {} for i, m in members_df.iterrows(): if m['topics']: top_topic_count = {} for topic in m['topics']: if topic['name'] in top_100_topics: top_topic_count[topic['name']] = 1 topi...
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
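The notebook's own cell above is truncated; here is a minimal, self-contained alternative sketch of building that 0/1 membership table (topic_matrix is a hypothetical variable name):
# One row per member, one column per top topic; 1 means the member lists that topic.
top_100_topics = set(t[0] for t in topic_counter.most_common(100))
rows = []
for _, m in members_df.iterrows():
    member_topics = {t['name'] for t in m['topics']} if m['topics'] else set()
    rows.append({topic: int(topic in member_topics) for topic in top_100_topics})
topic_matrix = pd.DataFrame(rows, index=members_df.index)
topic_matrix.head(3)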
Okay, for what I'm going to do next, I want the rows to be the members and the columns to be the topics. We can use the T (transpose) method to fix this.
top_topic_df = top_topic_df.T top_topic_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next we can use the fillna method to fill in the missing values with zeros.
top_topic_df.fillna(0, inplace=True) top_topic_df.head(3)
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next let's use a clustering algorithm to see if there are any patterns in the topics members are interested in. A clustering algorithm groups a set of data points so that similar objects are in the same group. This is a classic type of unsupervised machine learning. Below you can find visualisations of how different cl...
from sklearn.cluster import MiniBatchKMeans as KMeans X = top_topic_df.as_matrix() n_clusters = 3 k_means = KMeans(init='k-means++', n_clusters=n_clusters, n_init=10, random_state=47) k_means.fit(X) k_means.labels_
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
We've grouped our members into 3 clusters; let's see how many members are in each cluster.
Counter(list(k_means.labels_)).most_common()
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
Next let's see which topics are most popular in each cluster:
from collections import defaultdict cluster_index_map = defaultdict(list) for i in range(k_means.labels_.shape[0]): cluster_index_map[k_means.labels_[i]].append(top_topic_df.index[i]) for cluster_num in range(n_clusters): print 'Cluster {}'.format(cluster_num) f = top_topic_df[top_topic_df.index.isin(clus...
DataPhilly_Analysis.ipynb
mdbecker/daa_philly_2015
mit
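The cell above is truncated and uses Python 2 print statements; a Python 3 sketch of the same idea, assuming the rows of top_topic_df are members aligned with k_means.labels_:
# For each cluster, sum the 0/1 topic columns over its members and show the top topics.
for cluster_num in range(n_clusters):
    members_in_cluster = top_topic_df[k_means.labels_ == cluster_num]
    print('Cluster {}'.format(cluster_num))
    print(members_in_cluster.sum().sort_values(ascending=False).head(10))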
Exploring TIMIT Data <a id='timit'></a> We will start off by exploring TIMIT data taken from 8 different regions. These measurements are taken at the midpoint of vowels, where vowel boundaries were determined automatically using forced alignment. Uploading the data Prior to being able to work with the data, we have to ...
timit = pd.read_csv('data/timitvowels.csv') timit.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute; see the following table for more information.
|Column Name|Details|
|---|---|
|speaker|unique speaker ID|
|gender|Speaker’s self-reported gender|
|region|Speaker dialect region number|
|word|...
IPAdict = {"AO" : "ɔ", "AA" : "ɑ", "IY" : "i", "UW" : "u", "EH" : "ɛ", "IH" : "ɪ", "UH":"ʊ", "AH": "ʌ", "AX" : "ə", "AE":"æ", "EY" :"eɪ", "AY": "aɪ", "OW":"oʊ", "AW":"aʊ", "OY" :"ɔɪ", "ER":"ɚ"} timit['vowel'] = [IPAdict[x] for x in timit['vowel']] timit.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker.
timit_avg = timit.groupby(['speaker', 'vowel', 'gender', 'region']).mean().reset_index() timit_avg.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Splitting on Gender Using the same dataframe from above, timit_avg, we are going to split it into dataframes grouped by gender. To identify the possible values in the gender column, we can use the .unique method on the column.
timit_avg.gender.unique()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
You can see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based on gender. We'll split timit_avg into two separate dataframes, one for females, timit_female, and one for males, timit_male. Creating these subset dataframes ...
timit_female = timit_avg[timit_avg['gender'] == 'female'] timit_male = timit_avg[timit_avg['gender'] == 'male']
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Distribution of Formants We want to inspect the distributions of F1, F2, and F3 for those that self-report as male and those that self-report as female to identify possible trends or relationships. Having our two split dataframes, timit_female and timit_male, eases the plotting process. Run the cell below to see the d...
sns.distplot(timit_female['F1'], kde_kws={"label": "female"}) sns.distplot(timit_male['F1'], kde_kws={"label": "male"}) plt.title('F1') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Does there seem to be a notable difference between male and female distributions of F1? Next, we plot F2.
sns.distplot(timit_female['F2'], kde_kws={"label": "female"}) sns.distplot(timit_male['F2'], kde_kws={"label": "male"}) plt.title('F2') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Finally, we create the same visualization, but for F3.
sns.distplot(timit_female['F3'], kde_kws={"label": "female"}) sns.distplot(timit_male['F3'], kde_kws={"label": "male"}) plt.title('F3') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Do you see a more pronounced difference across the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations? An additional question: How do you think the fact that we average each vowel together first for each individual affects the shape of the histograms? ...
# reading in the data class_data = pd.read_csv('data/110_formants.csv') class_data.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured.
# translating the vowel column class_data['vowel'] = [IPAdict[x] for x in class_data['vowel']] class_data.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Splitting on Gender As we did with the TIMIT data, we are going to split class_data based on self-reported gender. We need to figure out what the possible responses for the column were.
class_data['Gender'].unique()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male.
class_female = class_data[class_data['Gender'] == 'Female'] class_male = class_data[class_data['Gender'] == 'Male']
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Comparing Distributions The following visualizations compare the distribution of formants for males and females, as we did for the TIMIT data. First, we'll start with F1.
sns.distplot(class_female['F1'], kde_kws={"label": "female"}) sns.distplot(class_male['F1'], kde_kws={"label": "male"}) plt.title('F1') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Next is F2.
sns.distplot(class_female['F2'], kde_kws={"label": "female"}) sns.distplot(class_male['F2'], kde_kws={"label": "male"}) plt.title('F2') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
And finally F3.
sns.distplot(class_female['F3'], kde_kws={"label": "female"}) sns.distplot(class_male['F3'], kde_kws={"label": "male"}) plt.title('F3') plt.xlabel("Hz") plt.ylabel('Proportion per Hz');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Does the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data? Vowel Spaces <a id='vs'></a> Run the cell below to define some functions that we will be using.
def plot_blank_vowel_chart(): im = plt.imread('images/blankvowel.png') plt.imshow(im, extent=(plt.xlim()[0], plt.xlim()[1], plt.ylim()[0], plt.ylim()[1])) def plot_vowel_space(avgs_df): plt.figure(figsize=(10, 8)) plt.gca().invert_yaxis() plt.gca().invert_xaxis() vowels = ['eɪ', 'i', 'oʊ',...
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
We are going to be recreating the following graphic from this website. Before we can get to creating it, we need to get a single value for each column for each of the vowels (so we can create coordinate pairs). To do this, we are going to find the average formant values for each of the vowels in our dataframes. We'll d...
class_vowel_avgs = class_data.drop('ID', axis=1).groupby('vowel').mean() class_vowel_avgs.head() timit_vowel_avgs = timit.groupby('vowel').mean() timit_vowel_avgs.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Each of these new tables has a row for each vowel, which consists of the values averaged across all speakers. Plotting the Vowel Space Run the cell below to construct a vowel space for the class's data, in which we plot F1 on F2. Note that both axes are descending.
plot_vowel_space(class_vowel_avgs) plt.xlabel('F2 (Hz)') plt.ylabel('F1 (Hz)');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Using Logarithmic Axes In our visualization above, we use linear axes in order to construct our vowel space. The chart we are trying to recreate has logged axes (though the picture does not indicate it). Below we log-transform all of the values in our dataframes.
log_timit_vowels = timit_vowel_avgs.apply(np.log) log_class_vowels = class_vowel_avgs.apply(np.log) class_data['log(F1)'] = np.log(class_data['F1']) class_data['log(F2)'] = np.log(class_data['F2']) log_class_vowels.head()
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Below we plot the vowel space using these new values.
plot_vowel_space(log_class_vowels) plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas. Overlaying a Vowel Space Chart Finally, we are going to overlay a blank vowel space chart outline to see how close our data reflects the theoretical vowel chart...
plot_vowel_space(log_class_vowels) plot_blank_vowel_chart() plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
How well does it match the original? Below we generate the same graph, except using the information from the TIMIT dataset.
plot_vowel_space(log_timit_vowels) plot_blank_vowel_chart() plt.xlabel('log(F2) (Hz)') plt.ylabel('log(F1) (Hz)');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause for any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or any points that seem off? Variation in Vowel Spaces <a id='vvs'></a> In the following visualizat...
sns.lmplot('log(F2)', 'log(F1)', hue='vowel', data=class_data, fit_reg=False, size=8, scatter_kws={'s':30}) plt.xlim(8.2, 6.7) plt.ylim(7.0, 5.7);
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together.
plt.figure(figsize=(10, 12)) pick_vowel = lambda v: class_data[class_data['vowel'] == v] colors = ['Greys_r', 'Purples_r', 'Blues_r', 'Greens_r', 'Oranges_r', \ 'Reds_r', 'GnBu_r', 'PuRd_r', 'winter_r', 'YlOrBr_r', 'pink_r', 'copper_r'] for vowel, color in list(zip(class_data.vowel.unique(), colors)): vo...
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Formants vs Height <a id='fvh'></a> We are going to compare each of the formants and height to see if there is a relationship between the two. To help visualize that, we are going to plot a regression line, which is also referred to as the line of best fit. We are going to use the maximum of each formant to compare to ...
genders = class_data['Gender'] plotting_data = class_data.drop('vowel', axis=1)[np.logical_or(genders == 'Male', genders == 'Female')] maxes = plotting_data.groupby(['ID', 'Gender']).max().reset_index()[plotting_data.columns[:-2]] maxes.columns = ['ID', 'Language', 'Gender', 'Height', 'Max F1', 'Max F2', 'Max F3'] maxe...
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
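The cell above is truncated, but the plots that follow use maxes_male and maxes_female; a plausible sketch under the assumption that they are simple gender splits of maxes:
# Split the per-speaker maxima by self-reported gender for the separate scatter layers below.
maxes_male = maxes[maxes['Gender'] == 'Male']
maxes_female = maxes[maxes['Gender'] == 'Female']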
First we will plot Max F1 against Height. Note: Each gender has a different color dot, but the line represents the line of best fit for ALL points.
sns.regplot('Height', 'Max F1', data=maxes) sns.regplot('Height', 'Max F1', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F1', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F1 (Hz)') print('female: green') print('male: orange')
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Is there a general trend for the data that you notice? What do you notice about the different color dots? Next, we plot Max F2 on Height.
sns.regplot('Height', 'Max F2', data=maxes) sns.regplot('Height', 'Max F2', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F2', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F2 (Hz)') print('female: green') print('male: orange')
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Finally, Max F3 vs Height.
sns.regplot('Height', 'Max F3', data=maxes) sns.regplot('Height', 'Max F3', data=maxes_male, fit_reg=False) sns.regplot('Height', 'Max F3', data=maxes_female, fit_reg=False) plt.xlabel('Height (cm)') plt.ylabel('Max F3 (Hz)') print('female: green') print('male: orange')
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Do you notice a difference between the trends for the three formants? Now we are going to plot two lines of best fit -- one for males, one for females. Before, we plotted one line for all of the values, but now we are separating by gender to see if gender explains some of the difference in formant values. For now, we'...
sns.lmplot('Height', 'Max F1', data=maxes, hue='Gender') plt.xlabel('Height (cm)') plt.ylabel('Max F1 (Hz)');
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
Is there a noticeable difference between the two? Did you expect this result? We're going to repeat the above graph, plotting a different regression line for males and females, but this time, using timit -- having a larger sample size may help expose patterns. Before we do that, we have to repeat the process of calculat...
timit_maxes = timit.groupby(['speaker', 'gender']).max().reset_index() timit_maxes.columns = ['speaker', 'gender', 'region', 'height', 'word', 'vowel', 'Max duration', 'Max F1', 'Max F2', 'Max F3', 'Max f0'] plt.xlim(140, 210) plt.ylim(500, 1400) sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] ==...
FormantsUpdated/Assignment.ipynb
ds-modules/LINGUIS-110
mit
We want to be able to march forward in time from our starting point (just like the picture above) where $\theta = \theta_0$ to obtain the value of $\theta$ at later times. To do this, we need to approximate the original differential equation, and, in particular, the value of the time derivative at each time. There are ...
steps = 10 theta_0 = 1.0 const_k = 10.0 delta_t = 1.0 / steps theta_values = np.zeros(steps) time_values = np.zeros(steps) theta_values[0] = theta_0 time_values[0] = 0.0 for i in range(1, steps): theta_values[i] = theta_values[i-1] * (1 - const_k * delta_t) time_values[i] = time_values[i-1] + delta_t exa...
Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb
lmoresi/UoM-VIEPS-Intro-to-Python
mit
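A self-contained sketch of the forward-Euler march described above, assuming the equation being integrated is $\dot\theta = -k\theta$ (consistent with the update in the cell), compared against the exact solution $\theta(t) = \theta_0 e^{-kt}$:
import numpy as np

steps = 10
theta_0, const_k = 1.0, 10.0
delta_t = 1.0 / steps

theta = np.zeros(steps)
time = np.zeros(steps)
theta[0] = theta_0
for i in range(1, steps):
    # theta_n = theta_{n-1} + delta_t * (-k * theta_{n-1})
    theta[i] = theta[i - 1] * (1.0 - const_k * delta_t)
    time[i] = time[i - 1] + delta_t

exact = theta_0 * np.exp(-const_k * time)
print(np.abs(theta - exact).max())  # largest error for this step size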
Higher order expansion First we try fitting the local expansion for \(\theta\) through an additional point. This time we assume that the variation in \(\theta(t)\) is quadratic, i.e. $$ \theta(t') = \theta_{n-1} + \beta t' + \gamma {t'}^2 $$ The local time coordinate is $t' = t - (n-1)\Delta t$, and when we diffe...
steps = 100 theta_0 = 1.0 const_k = 10.0 delta_t = 1.0 / steps theta_values = np.zeros(steps) time_values = np.zeros(steps) theta_values[0] = theta_0 time_values[0] = 0.0 theta_values[1] = theta_values[0] * (1 - const_k * delta_t) time_values[1] = delta_t for i in range(2, steps): theta_values[i] = theta_value...
Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb
lmoresi/UoM-VIEPS-Intro-to-Python
mit
The results are more accurate when a smaller timestep is used although it requires more computation to achieve the greater accuracy. Higher order expansion also increases the accuracy and may be more efficient in terms of the number of computations required for a given level of accuracy. Note, however, that the suppose...
steps = 100 theta_0 = 1.0 const_k = 10.0 delta_t = 1.0 / steps theta_values = np.zeros(steps) time_values = np.zeros(steps) theta_values[0] = theta_0 time_values[0] = 0.0 for i in range(1, steps): theta_values[i] = theta_values[i-1] * (1 - const_k * delta_t + const_k**2 * delta_t**2 / 2.0) time_values[i] ...
Notebooks/SolveMathProblems/0 - IntroductionToNumericalSolutions.ipynb
lmoresi/UoM-VIEPS-Intro-to-Python
mit
Surface Analysis using Declarative Syntax The MetPy declarative syntax allows for a simplified interface to creating common meteorological analyses including surface observation plots.
from datetime import datetime, timedelta import cartopy.crs as ccrs import pandas as pd from metpy.cbook import get_test_data import metpy.plots as mpplots
dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb
metpy/MetPy
bsd-3-clause
Getting the data In this example, data is originally from the Iowa State ASOS archive (https://mesonet.agron.iastate.edu/request/download.phtml) downloaded through a separate Python script. The data are pre-processed to determine sky cover and weather symbols from text output.
data = pd.read_csv(get_test_data('SFC_obs.csv', as_file_obj=False), infer_datetime_format=True, parse_dates=['valid'])
dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb
metpy/MetPy
bsd-3-clause
Plotting the data Use the declarative plotting interface to plot surface observations over the state of Georgia.
# Plotting the Observations using a 15 minute time window for surface observations obs = mpplots.PlotObs() obs.data = data obs.time = datetime(1993, 3, 12, 13) obs.time_window = timedelta(minutes=15) obs.level = None obs.fields = ['tmpf', 'dwpf', 'emsl', 'cloud_cover', 'wxsym'] obs.locations = ['NW', 'SW', 'NE', 'C', '...
dev/_downloads/9041777e133eed610f5b243c688e89f9/surface_declarative.ipynb
metpy/MetPy
bsd-3-clause
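The cell above is truncated after the PlotObs attributes; a hedged sketch of how the declarative interface is typically completed (MapPanel and PanelContainer are documented MetPy classes, but the specific area and layer choices here are illustrative assumptions):
# Place the observation plot on a map panel over Georgia and render it.
panel = mpplots.MapPanel()
panel.area = 'ga'                       # named region for the state of Georgia
panel.layers = ['states', 'coastline']
panel.plots = [obs]

pc = mpplots.PanelContainer()
pc.size = (10, 10)
pc.panels = [panel]
pc.show()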
Homework 2 (DUE: Thursday February 16) Instructions: Complete the instructions in this notebook. You may work together with other students in the class and you may take full advantage of any internet resources available. You must provide thorough comments in your code so that it's clear that you understand what your co...
# Question 1
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
letsgoexploring/teaching
mit
Question 2 For each of the following first-difference processes, compute the values of $y$ from $t=0$ through $t = 12$. For each, assume that $y_0 = 0$. $y_t = 1 + 0.5y_{t-1}$ $y_t = 0.5y_{t-1}$ $y_t = -1 + 0.5y_{t-1}$ Plot the simulated values for each process on the same axes and be sure to include a legend. S...
# Question 2
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
letsgoexploring/teaching
mit
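A minimal sketch of one way to simulate and plot the three processes (an illustrative approach, not the assigned solution):
import numpy as np
import matplotlib.pyplot as plt

T = 12
for intercept in (1.0, 0.0, -1.0):
    y = np.zeros(T + 1)                  # y_0 = 0
    for t in range(1, T + 1):
        y[t] = intercept + 0.5 * y[t - 1]
    plt.plot(y, label='$y_t = {} + 0.5y_{{t-1}}$'.format(intercept))
plt.legend()
plt.show()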
Question 3 Download a file called Econ129_US_Production_A_Data.csv from the link "Production data for the US" under the "Data" section on the course website. The file contains annual production data for the US economy including output, consumption, investment, and labor hours, among others. The capital stock of the US i...
# Question 3.1 # Question 3.2 # Question 3.3
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
letsgoexploring/teaching
mit
Question 4: The Solow model with exogenous population and TFP growth Suppose that the aggregate production function is given by: \begin{align} Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1} \end{align} where $Y_t$ denotes output, $K_t$ denotes the capital stock, $L_t$ denotes the labor supply, and $A_t$ denotes total ...
# Initialize parameters for the simulation (A, s, T, delta, alpha, g, n, K0, A0, L0) # Initialize a variable called tfp as a (T+1)x1 array of zeros and set first value to A0 # Compute all subsequent tfp values by iterating over t from 0 through T # Plot the simulated tfp series # Initialize a variable ...
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
letsgoexploring/teaching
mit
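A minimal sketch of the TFP recursion $A_{t+1} = (1+g)A_t$ outlined in the comments above (parameter values are placeholders, not the assignment's):
import numpy as np
import matplotlib.pyplot as plt

T, g, A0 = 100, 0.02, 1.0
tfp = np.zeros(T + 1)
tfp[0] = A0
for t in range(T):
    tfp[t + 1] = (1 + g) * tfp[t]       # A_{t+1} = (1 + g) * A_t

plt.plot(tfp)
plt.title('Simulated TFP')
plt.show()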
Question 5 Recall the Solow growth model with exogenous growth in labor and TFP: \begin{align} Y_t & = A_tK_t^{\alpha} L_t^{1-\alpha}, \tag{1}\\ C_t & = (1-s)Y_t \tag{2}\\ Y_t & = C_t + I_t \tag{3}\\ K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\\ L_{t+1} & = (1+n) L_t \tag{5}\\ A_{t+1} & = (1+g) A_t. \tag{6} \end{align} Suppo...
# Question 5.1 # Question 5.2
winter2017/econ129/python/Econ129_Winter2017_Homework2.ipynb
letsgoexploring/teaching
mit
The function above can be used to generate three-dimensional datasets with the shape of a Swiss roll, the letter S, or a helix. These are three examples of datasets which have been extensively used to compare different dimension reduction algorithms. As an illustrative exercise of what dimensionality reduction can do,...
%matplotlib inline import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D %matplotlib inline def plot(data, embedded_data, colors='m'): fig = plt.figure() fig.set_facecolor('white') ax = fig.add_subplot(121,projection='3d') ax.scatter(data[0],data[1],data[2],c=colors,cmap=plt.cm.Spectral) plt.ax...
doc/ipython-notebooks/converter/Tapkee.ipynb
besser82/shogun
bsd-3-clause
As can be seen from the figure above, Isomap has been able to "unroll" the data, reducing its dimension from three to two. At the same time, points with similar colours in the input space are close to points with similar colours in the output space. That is, a new representation of the data has been obtained; this n...
# wrap data into Shogun features data, colors = generate_data('helix') features = sg.features(data) # create StochasticProximityEmbedding instance converter = sg.transformer('StochasticProximityEmbedding') converter.put('target_dim', 2) # embed helix data embedded_features = converter.transform(features) embedded_data = embedded_features.get...
doc/ipython-notebooks/converter/Tapkee.ipynb
besser82/shogun
bsd-3-clause
Use pd.read_excel to open the file. If it says the file is not found, make sure your directory is correct. Assign the file to a variable so it doesn't have to be read every time.
table = pd.read_excel("GASISData.xls")
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
Let's say we want to see the first 10 rows of the data to make sure it is the correct file (Google "pandas data preview"). Note: table.tail shows the end of the data.
table.head()
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
What if I want to look at just one column of data?
table['PLAYNAME']
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
What if I want to create a new column?
table['NEW COLUMN'] = 5 table['NEW COLUMN']
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
What if I want to find data in a certain set, such as only in Texas? (Google "pandas find rows where value is".)
texasTable = table.loc[table['STATE'] == "TEXAS"] print(texasTable)
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
Run the following to get the shape of the table.
sizeTable = table.shape print(sizeTable)
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
This is done to find the number of rows and the number of columns.
num_rows = sizeTable[0] num_cols = sizeTable[1] print(num_rows) print(num_cols)
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
Find rows that satisfy some preset condition, such as where latitude is greater than 80 (Google the same thing as above).
table.loc[table['LATITUDE'] > 10]
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
Exercise: Make them find out how to rename columns. Exercise: (Usually we use Excel equations; now we are going to practice this.) Google how to add two columns together, and then create a new column with all the added values. Give them 5 mins for each exercise, and help anyone around you. If you want to learn more, look it up...
pd.DataFrame.hist(table,"AVPERM")
media/f16-scientific-python/week2/Scientific Python Workshop 2.ipynb
texaspse/blog
mit
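A hedged sketch for the two exercises above; AVG_PERM and the summed depth columns are hypothetical names, so substitute columns that actually exist in GASISData.xls:
# Rename a column, then create a new column as the sum of two existing ones.
table = table.rename(columns={'AVPERM': 'AVG_PERM'})        # 'AVPERM' appears in the histogram call above
table['DEPTH_SUM'] = table['UDEPTHT'] + table['LDEPTHT']    # hypothetical column names
table.head()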
Learn about overfitting and underfitting <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>...
import tensorflow.compat.v1 as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print(tf.__version__)
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Downloading the IMDB dataset Instead of the embedding used in the previous notebook, here we will multi-hot encode the sentences. This model will quickly overfit the training set; we will use it to demonstrate when overfitting happens and how to fight it. Multi-hot encoding a list means turning it into a vector of 0s and 1s. Concretely, it means converting, for example, the sequence [3, 5] into a 10,000-dimensional vector whose values at indices 3 and 5 are 1 and whose other values are all 0.
NUM_WORDS = 10000 (train_data, train_labels), (test_data, test_labels) = keras.datasets.imdb.load_data(num_words=NUM_WORDS) def multi_hot_sequences(sequences, dimension): # create an all-zeros matrix of shape (len(sequences), dimension) results = np.zeros((len(sequences), dimension)) for i, word_indices in enumerate(sequences...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
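A self-contained sketch of the multi-hot encoding described above (the notebook's own multi_hot_sequences cell is truncated); it converts [3, 5] into a vector with ones at indices 3 and 5:
import numpy as np

def multi_hot(sequences, dimension):
    # Create an all-zeros matrix of shape (len(sequences), dimension),
    # then set the positions named in each sequence to 1.
    results = np.zeros((len(sequences), dimension))
    for i, word_indices in enumerate(sequences):
        results[i, word_indices] = 1.0
    return results

print(multi_hot([[3, 5]], dimension=10))  # 1.0 at indices 3 and 5, 0.0 elsewhere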
Let's look at one of the resulting multi-hot vectors. The word indices are sorted by frequency, so we expect more 1s to appear near index 0. Let's look at the distribution.
plt.plot(train_data[0])
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Demonstrating overfitting The simplest way to prevent overfitting is to reduce the size of the model, i.e. the number of learnable parameters in the model (which is determined by the number of layers and the number of units per layer). In deep learning, the number of learnable parameters in a model is often referred to as the model's "capacity". Intuitively, a model with more parameters has more "memorization capacity" and can easily learn a perfect dictionary-like mapping between training samples and their targets; such a mapping has no generalization ability at all and is useless when predicting on data it has never seen before. Deep learning models tend to fit the training data readily, but...
baseline_model = keras.Sequential([ # `input_shape` is needed here so that `.summary` works keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(16, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) baseline_model.compile(optimizer='adam', 
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Building a smaller model Let's create a model with fewer hidden units than the baseline model we just created.
smaller_model = keras.Sequential([ keras.layers.Dense(4, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(4, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) smaller_model.compile(optimizer='adam', loss='binary_crossentropy', met...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Train it using the same data.
smaller_history = smaller_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Building a bigger model As an exercise, you can create an even larger model and see how quickly it begins to overfit. Next, let's add to this benchmark a network with far more capacity than the problem warrants.
bigger_model = keras.models.Sequential([ keras.layers.Dense(512, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(512, activation=tf.nn.relu), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) bigger_model.compile(optimizer='adam', loss='binary_crossentropy', ...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Train this model using the same data as well.
bigger_history = bigger_model.fit(train_data, train_labels, epochs=20, batch_size=512, validation_data=(test_data, test_labels), verbose=2)
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Plotting the training and validation loss <!--TODO(markdaoust): This should be a one-liner with tensorboard --> The solid lines show the loss on the training set and the dashed lines show the loss on the validation set (a lower validation loss indicates a better model). Here, the smaller network starts overfitting later than the baseline model (after 6 epochs rather than 4), and its performance degrades more slowly once it starts overfitting.
def plot_history(histories, key='binary_crossentropy'): plt.figure(figsize=(16,10)) for name, history in histories: val = plt.plot(history.epoch, history.history['val_'+key], '--', label=name.title()+' Val') plt.plot(history.epoch, history.history[key], color=val[0].get_color(), ...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
Notice that the bigger network starts overfitting almost immediately, after just one epoch, and overfits much more severely. The more capacity a network has, the faster it can model the training data (resulting in a lower training loss), but the more susceptible it is to overfitting (resulting in a large gap between the training and validation loss). Strategies Add weight regularization You may be familiar with the principle of Occam's razor: given two explanations for something, the explanation most likely to be correct is the "simplest" one, the one that makes the fewest assumptions. This also applies to models learned by neural networks: given some training data and a network architecture, ...
l2_model = keras.models.Sequential([ keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dense(16, kernel_regularizer=keras.regularizers.l2(0.001), activation=tf.nn.relu), keras.l...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
l2(0.001) means that every coefficient in a layer's weight matrix will add 0.001 * weight_coefficient_value**2 to the network's total loss. Note that because this penalty is only added at training time, the loss for this network will be higher during training than at test time. Let's look at the effect of L2 regularization.
plot_history([('baseline', baseline_history), ('l2', l2_model_history)])
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
As you can see, the L2-regularized model is much less prone to overfitting than the baseline model, even though both models have the same number of parameters. Adding dropout Dropout is one of the most commonly used regularization techniques for neural networks; it was developed by Hinton and his students at the University of Toronto. Dropout is applied to a layer and consists of randomly "dropping out" (i.e. setting to zero) some of the features output by the layer during training. Say a given layer would normally output the vector [0.2, 0.5, 1.3, 0.8, 1.1] for a given input sample during training; after applying dropout, ...
dpt_model = keras.models.Sequential([ keras.layers.Dense(16, activation=tf.nn.relu, input_shape=(NUM_WORDS,)), keras.layers.Dropout(rate=0.5), keras.layers.Dense(16, activation=tf.nn.relu), keras.layers.Dropout(rate=0.5), keras.layers.Dense(1, activation=tf.nn.sigmoid) ]) dpt_model.compile(optimize...
site/ja/r1/tutorials/keras/overfit_and_underfit.ipynb
tensorflow/docs-l10n
apache-2.0
We want to train the model using the training set, then evaluate it on the test set. As our evaluation metric we will use the ROC AUC, averaged over the 12 tasks included in the dataset. First let's see how to do this with the DeepChem API.
model.fit(train_dataset, nb_epoch=100) metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean) print(model.evaluate(test_dataset, [metric]))
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
Simple enough. Now let's see how to do the same thing with the Tensorflow APIs. Fair warning: this is going to take a lot more code! To begin with, Tensorflow doesn't allow a dataset to be passed directly to a model. Instead, you need to write an "input function" to construct a particular set of tensors and return t...
def input_fn(dataset, epochs): x, y, weights = dataset.make_iterator(batch_size=100, epochs=epochs).get_next() return {'x': x, 'weights': weights}, y
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
Next, you have to use the functions in the tf.feature_column module to create an object representing each feature and weight column (but curiously, not the label column—don't ask me why!). These objects describe the data type and shape of each column, and give each one a name. The names must match the keys in the dic...
x_col = tf.feature_column.numeric_column('x', shape=(n_features,)) weight_col = tf.feature_column.numeric_column('weights', shape=(n_tasks,))
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
Unlike DeepChem models, which allow arbitrary metrics to be passed to evaluate(), estimators require all metrics to be defined up front when you create the estimator. Unfortunately, Tensorflow doesn't have very good support for multitask models. It provides an AUC metric, but no easy way to average this metric over t...
def mean_auc(labels, predictions, weights): metric_ops = [] update_ops = [] for i in range(n_tasks): metric, update = tf.metrics.auc(labels[:,i], predictions[:,i], weights[:,i]) metric_ops.append(metric) update_ops.append(update) mean_metric = tf.reduce_mean(tf.stack(metric_ops))...
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
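The mean_auc cell above is truncated; a hedged sketch of how such a metric function is typically completed for a TF1 Estimator, which must return a (value, update_op) pair (this is an assumption about the missing lines, not their exact contents):
def mean_auc(labels, predictions, weights):
    # Compute a per-task AUC, then average the task metrics into one value.
    metric_ops, update_ops = [], []
    for i in range(n_tasks):
        metric, update = tf.metrics.auc(labels[:, i], predictions[:, i], weights[:, i])
        metric_ops.append(metric)
        update_ops.append(update)
    mean_metric = tf.reduce_mean(tf.stack(metric_ops))
    return mean_metric, tf.group(*update_ops)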
Now we create our Estimator by calling make_estimator() on the DeepChem model. We provide as arguments the objects created above to represent the feature and weight columns, as well as our metric function.
estimator = model.make_estimator(feature_columns=[x_col], weight_column=weight_col, metrics={'mean_auc': mean_auc}, model_dir='estimator')
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
We are finally ready to train and evaluate it! Notice how the input function passed to each method is actually a lambda. This allows us to write a single function, then use it with different datasets and numbers of epochs.
estimator.train(input_fn=lambda: input_fn(train_dataset, 100)) print(estimator.evaluate(input_fn=lambda: input_fn(test_dataset, 1)))
examples/notebooks/Estimators.ipynb
ktaneishi/deepchem
mit
Natural Neighbor Verification Walks through the steps of Natural Neighbor interpolation to validate that the algorithmic approach taken in MetPy is correct. Find natural neighbors visual test A triangle is a natural neighbor for a point if the circumscribed circle <https://en.wikipedia.org/wiki/Circumscribed_circle>...
import matplotlib.pyplot as plt import numpy as np from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d from scipy.spatial.distance import euclidean from metpy.interpolate import geometry from metpy.interpolate.points import natural_neighbor_point
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
For a test case, we generate 10 random points and observations, where the observation values are just the x coordinate value times the y coordinate value divided by 1000. We then create two test points (grid 0 & grid 1) at which we want to estimate a value using natural neighbor interpolation. The locations of these ob...
np.random.seed(100) pts = np.random.randint(0, 100, (10, 2)) xp = pts[:, 0] yp = pts[:, 1] zp = (pts[:, 0] * pts[:, 0]) / 1000 tri = Delaunay(pts) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility delaunay_plot_2d(tri, ax=ax) for i, zval in e...
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Using the circumcenter and circumcircle radius information from :func:metpy.interpolate.geometry.find_natural_neighbors, we can visually examine the results to see if they are correct.
def draw_circle(ax, x, y, r, m, label): th = np.linspace(0, 2 * np.pi, 100) nx = x + r * np.cos(th) ny = y + r * np.sin(th) ax.plot(nx, ny, m, label=label) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility delaunay_plot_2d(tri, ...
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
What?....the circle from triangle 8 looks pretty darn close. Why isn't grid 0 included in that circle?
x_t, y_t = circumcenters[8] r = geometry.circumcircle_radius(*tri.points[tri.simplices[8]]) print('Distance between grid0 and Triangle 8 circumcenter:', euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]])) print('Triangle 8 circumradius:', r)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Let's do a manual check of the above interpolation value for grid 0 (southernmost grid). Grab the circumcenters and radii for the natural neighbors.
cc = np.array(circumcenters) r = np.array([geometry.circumcircle_radius(*tri.points[tri.simplices[m]]) for m in members[0]]) print('circumcenters:\n', cc) print('radii\n', r)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram <https://docs.scipy.org/doc/scipy/reference/tutorial/spatial.html#voronoi-diagrams> which serves as a complementary (but not necessary) spatial data structure that we use here simply to show areal ratios. Notice that the two...
vor = Voronoi(list(zip(xp, yp))) fig, ax = plt.subplots(1, 1, figsize=(15, 10)) ax.ishold = lambda: True # Work-around for Matplotlib 3.0.0 incompatibility voronoi_plot_2d(vor, ax=ax) nn_ind = np.array([0, 5, 7, 8]) z_0 = zp[nn_ind] x_0 = xp[nn_ind] y_0 = yp[nn_ind] for x, y, z in zip(x_0, y_0, z_0): ax.annotat...
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Put all of the generated polygon areas and their affiliated values in arrays. Calculate the total area of all of the generated polygons.
areas = np.array([60.434, 448.296, 25.916, 70.647]) values = np.array([0.064, 1.156, 2.809, 0.225]) total_area = np.sum(areas) print(total_area)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
For each polygon area, calculate its percent of total area.
proportions = areas / total_area print(proportions)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Multiply the percent of total area by the respective values.
contributions = proportions * values print(contributions)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
The sum of this array is the interpolation value!
interpolation_value = np.sum(contributions) function_output = natural_neighbor_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], circumcenters) print(interpolation_value, function_output)
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
The values are slightly different due to truncating the area values in the above visual example to the 3rd decimal place.
plt.show()
v0.12/_downloads/e5685967297554788de3cf5858571b23/Natural_Neighbor_Verification.ipynb
metpy/MetPy
bsd-3-clause
Loading the images into Python
def cv_image_vers_vecteur(image): # to convert an image into a vector; this will be used by the following operations return ravel(image) def charge_l_image(nom_de_fichier): return misc.imread(nom_de_fichier, flatten=True, mode = "L")/255. # with this we convert the image into a sequence of 0s and 1s def charge_l_image_...
README.ipynb
konkam/perceptron_guide
gpl-3.0
We can see that an image is made up of 50x50 = 2500 values, each of which can be 0 or 1.
charge_l_image_sous_forme_de_vecteur("training_set_perceptron/A1.png") shape(charge_l_image_sous_forme_de_vecteur("training_set_perceptron/A1.png")) charge_l_image_et_trace("training_set_perceptron/A1.png")
README.ipynb
konkam/perceptron_guide
gpl-3.0
Can I extract just the sentence that belongs to the replied comment?
html = commentsHTML[0] comms = html.findAll('comment') first_comm_s = html.findAll('s', recursive=False) first_comm_s
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit
Can I extract all the comment tags, including the nested ones? Turns out the findAll is recursive and gets me every comment. From there, getting the parents is easy.
for c in commentsHTML: if c['id'] == "c4": print c print [p['id'] for p in c.findParents("comment")] break
testdataextractor/TestDataExtractor.ipynb
betoesquivel/onforums-application
mit