So tweets about Jeb Bush, on average, aren't as positive as tweets about the other candidates, but the people tweeting about Bush get more retweets and followers.
I used the formula influence = sqrt(followers + 1) * sqrt(retweets + 1). You can experiment with different functions if you like [preprocess.py:influence]; a sketch of this function appears below.
We can look a... | jeb = candidate_groupby.get_group('Jeb Bush')
jeb_influence = jeb.sort_values('influence', ascending=False)
jeb_influence[['influence', 'polarity', 'influenced_polarity', 'user_name', 'text', 'created_at']].head(5) | arrows.ipynb | savioabuga/arrows | mit |
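As referenced above, here is a minimal sketch of that influence function; the actual implementation in preprocess.py may differ:

```python
import numpy as np

def influence(followers, retweets):
    # square-root damping keeps a handful of mega-accounts from dominating;
    # the +1 keeps tweets with zero followers or retweets finite
    return np.sqrt(followers + 1) * np.sqrt(retweets + 1)
```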
Side note: you can see that sentiment analysis isn't perfect - the last tweet is certainly negative toward Jeb Bush, but it was actually assigned a positive polarity. Over a large number of tweets, though, sentiment analysis is more meaningful.
As to the high influence of tweets about Bush: it looks like Donald Trump (... | df[df.user_name == 'Donald J. Trump'].groupby('candidate').size() | arrows.ipynb | savioabuga/arrows | mit |
Looks like our favorite toupéed candidate hasn't even been tweeting about anyone else!
What else can we do? We know the language each tweet was (tweeted?) in. | language_groupby = df.groupby(['candidate', 'lang'])
language_groupby.size() | arrows.ipynb | savioabuga/arrows | mit |
That's a lot of languages! Let's try plotting to get a better idea, but first, I'll remove smaller language/candidate groups.
By the way, each lang value is an IANA language tag - you can look them up at https://www.iana.org/assignments/language-subtag-registry/language-subtag-registry. | largest_languages = language_groupby.filter(lambda group: len(group) > 10) | arrows.ipynb | savioabuga/arrows | mit |
I'll also remove English, since it would just dwarf all the other languages. | non_english = largest_languages[largest_languages.lang != 'en']
non_english_groupby = non_english.groupby(['lang', 'candidate'], as_index = False)
sizes = non_english_groupby.text.agg(np.size)
sizes = sizes.rename(columns={'text': 'count'})
sizes_pivot = sizes.pivot_table(index='lang', columns='candidate', values='cou... | arrows.ipynb | savioabuga/arrows | mit |
Looks like Spanish and Portuguese speakers mostly tweet about Jeb Bush, while Francophones lean more liberal, and Clinton tweeters span the largest range of languages.
We also have the time-of-tweet information - I'll plot influenced polarity over time for each candidate. I'm also going to resample the influenced_polar... | mean_polarities = df.groupby(['candidate', 'created_at']).influenced_polarity.mean()
plot = mean_polarities.unstack('candidate').resample('60min').mean().plot()
plot.set_title('Influenced Polarity over Time by Candidate', family='Ubuntu')
plot.set_ylabel('influenced polarity', family='Ubuntu')
plot.set_xlabel('time', family='... | arrows.ipynb | savioabuga/arrows | mit |
Since I only took the last 20,000 tweets for each candidate, the tweets about Clinton (a candidate with many, many tweeters) cover a shorter timespan than those about Rand Paul.
But we can still analyze the data in terms of hour-of-day. I'd like to know when tweeters in each language tweet each day, and I'm going to use pe... | language_sizes = df.groupby('lang').size()
threshold = language_sizes.quantile(.75)
top_languages_df = language_sizes[language_sizes > threshold]
top_languages = set(top_languages_df.index) - {'und'}
top_languages
df['hour'] = df.created_at.apply(lambda ts: ts.hour)
for language_code in top_languages:
... | arrows.ipynb | savioabuga/arrows | mit |
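The plotting part of this cell is truncated; below is a hedged sketch of what a per-language hour-of-day plot could look like (matplotlib is assumed, the hour column is assumed to be UTC, and the original's styling may differ):

```python
import matplotlib.pyplot as plt

for language_code in top_languages:
    hour_counts = df[df.lang == language_code].groupby('hour').size()
    fractions = hour_counts / hour_counts.sum()  # compare shapes, not raw volumes
    plt.plot(fractions.index, fractions.values, label=language_code)
plt.xlabel('hour of day (UTC)')
plt.ylabel('fraction of tweets')
plt.legend()
plt.show()
```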
Note that English, French, and Spanish are significantly flatter than the other languages - this means that there's a large spread of speakers all over the globe.
But why is Portuguese spiking at 11pm Brasilia time / 3 am Lisbon time? Let's find out!
My first guess was that maybe there's a single person making a ton of... | df_of_interest = df[(df.hour == 2) & (df.lang == 'pt')]
print('Number of tweets:', df_of_interest.text.count())
print('Number of unique users:', df_of_interest.user_name.unique().size) | arrows.ipynb | savioabuga/arrows | mit |
So that's not it. Maybe there was a major event everyone was retweeting? | df_of_interest.text.head(25).unique() | arrows.ipynb | savioabuga/arrows | mit |
Seems to be a lot of these 'Jeb Bush diz que foi atingido...' tweets. How many? We can't just count unique ones because they all differ slightly, but we can check for a large-enough substring. | df_of_interest[df_of_interest.text.str.contains('Jeb Bush diz que foi atingido')].text.count() | arrows.ipynb | savioabuga/arrows | mit |
That's it!
Looks like there was a news article from a Brazilian website (http://jconline.ne10.uol.com.br/canal/mundo/internacional/noticia/2015/07/05/jeb-bush-diz-que-foi-atingido-por-criticas-de-trump-a-mexicanos-188801.php) that happened to get a lot of retweets at that time period.
A similar article in English is a... | tz_df = english_df.dropna(subset=['user_time_zone'])
us_tz_df = tz_df[tz_df.user_time_zone.str.contains("US & Canada")]
us_tz_candidate_groupby = us_tz_df.groupby(['candidate', 'user_time_zone'])
us_tz_candidate_groupby.influenced_polarity.mean() | arrows.ipynb | savioabuga/arrows | mit |
That's our raw data: now to plot it on a map. I got the timezone Shapefile from http://efele.net/maps/tz/world/. First, I read in the Shapefile with Cartopy. | tz_shapes = cartopy.io.shapereader.Reader('arrows/world/tz_world_mp.shp')
tz_records = list(tz_shapes.records())
tz_translator = {
'Eastern Time (US & Canada)': 'America/New_York',
'Central Time (US & Canada)': 'America/Chicago',
'Mountain Time (US & Canada)': 'America/Denver',
'Pacific Time (US & C... | arrows.ipynb | savioabuga/arrows | mit |
Next, I have to choose a projection and plot it (again using Cartopy). The Albers Equal-Area projection is a good choice for maps of the U.S. I'll also download some feature sets from the Natural Earth dataset to display state borders. | albers_equal_area = cartopy.crs.AlbersEqualArea(-95, 35)
plate_carree = cartopy.crs.PlateCarree()
states_and_provinces = cartopy.feature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none'
)
cmaps = [matplotlib.cm.Blues, matplotlib.cm.Greens, ... | arrows.ipynb | savioabuga/arrows | mit |
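The cell above is cut off after the colormap list; here is a hedged sketch of how the timezone polygons might be drawn on that projection (the map extent, colors, and attribute name are assumptions, not the notebook's exact code):

```python
import matplotlib.pyplot as plt

ax = plt.axes(projection=albers_equal_area)
ax.set_extent([-125, -66, 22, 50], crs=plate_carree)
ax.add_feature(states_and_provinces, edgecolor='gray')
ax.coastlines('50m')

shaded_zones = set(tz_translator.values())
for record in tz_records:
    # the tz_world shapefile stores the Olson zone name in the TZID attribute
    if record.attributes.get('TZID') in shaded_zones:
        ax.add_geometries([record.geometry], crs=plate_carree,
                          facecolor='steelblue', alpha=0.5, edgecolor='none')
plt.show()
```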
My friend Gabriel Wang pointed out that U.S. timezones other than Pacific don't mean much since each timezone covers both blue and red states, but the data is still interesting.
As expected, midwestern states lean toward Jeb Bush. I wasn't expecting Jeb Bush's highest-polarity tweets to come from the East; this is pro... | american_timezones = ('US & Canada|Canada|Arizona|America|Hawaii|Indiana|Alaska'
'|New_York|Chicago|Los_Angeles|Detroit|CST|PST|EST|MST')
foreign_tz_df = tz_df[~tz_df.user_time_zone.str.contains(american_timezones)]
foreign_tz_groupby = foreign_tz_df.groupby('user_time_zone')
foreign_tz_groupby.s... | arrows.ipynb | savioabuga/arrows | mit |
I also want to look at polarity, so I'll only use English tweets.
(Sorry, Central/South Americans - my very rough method of filtering out American timezones gets rid of some of your timezones too. Let me know if there's a better way to do this.) | foreign_english_tz_df = foreign_tz_df[foreign_tz_df.lang == 'en'] | arrows.ipynb | savioabuga/arrows | mit |
Now we have a dataframe containing (mostly) world cities as time zones. Let's get the top cities by number of tweets for each candidate, then plot polarities. | foreign_tz_groupby = foreign_english_tz_df.groupby(['candidate', 'user_time_zone'])
top_foreign_tz_df = foreign_tz_groupby.filter(lambda group: len(group) > 40)
top_foreign_tz_groupby = top_foreign_tz_df.groupby(['user_time_zone', 'candidate'], as_index = False)
mean_influenced_polarities = top_foreign_tz_groupby.inf... | arrows.ipynb | savioabuga/arrows | mit |
Exercise for the reader: why is Rand Paul disliked in Athens? You can probably guess, but the actual tweets causing this are rather amusing.
Greco-libertarian relations aside, the data shows that London and Amsterdam are among the most influential of cities, with the former leaning toward Jeb Bush and the latter about ... | df_place = df.dropna(subset=['place'])
mollweide = cartopy.crs.Mollweide()
plot = plt.axes(projection=mollweide)
plot.set_global()
plot.add_feature(cartopy.feature.LAND)
plot.add_feature(cartopy.feature.COASTLINE)
plot.add_feature(cartopy.feature.BORDERS)
plot.scatter(
list(df_place.longitude),
list(df_place... | arrows.ipynb | savioabuga/arrows | mit |
Model Inputs
First we need to create the inputs for our graph. We need two inputs, one for the discriminator and one for the generator. Here we'll call the discriminator input inputs_real and the generator input inputs_z. We'll assign them the appropriate sizes for each of the networks. | def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
Generator network
Here we'll build the generator network. To make this network a universal function approximator, we'll need at least one hidden layer. We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero ... | def generator(z, out_dim, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('generator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(z, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
# Logits and tanh output
logits = ... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
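The generator cell is truncated at its final comment; following that comment ('Logits and tanh output'), the standard completion looks like this sketch (not necessarily the verbatim solution code):

```python
        # Logits and tanh output
        logits = tf.layers.dense(h1, out_dim, activation=None)
        out = tf.tanh(logits)  # tanh bounds outputs to [-1, 1], matching rescaled MNIST pixels
        return out
```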
Discriminator
The discriminator network is almost exactly the same as the generator network, except that we're using a sigmoid output layer. | def discriminator(x, n_units=128, reuse=False, alpha=0.01):
with tf.variable_scope('discriminator', reuse=reuse):
# Hidden layer
h1 = tf.layers.dense(x, n_units, activation=None)
# Leaky ReLU
h1 = tf.maximum(alpha * h1, h1)
logits = tf.layers.dense(h1, 1, activation=... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
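This cell is likewise cut off at the output layer; a hedged completion with the sigmoid head the text describes, returning the raw logits as well since the later build cell unpacks two values:

```python
        logits = tf.layers.dense(h1, 1, activation=None)
        out = tf.sigmoid(logits)
        # the raw logits let the loss use the numerically stable
        # tf.nn.sigmoid_cross_entropy_with_logits
        return out, logits
```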
Hyperparameters | # Size of input image to discriminator
input_size = 784
# Size of latent vector to generator
z_size = 100
# Sizes of hidden layers in generator and discriminator
g_hidden_size = 128
d_hidden_size = 128
# Leak factor for leaky ReLU
alpha = 0.01
# Smoothing
smooth = 0.1 | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
Build network
Now we're building the network from the functions defined above.
First, we get our inputs, input_real and input_z, from model_inputs using the sizes of the input and z.
Then, we'll create the generator, generator(input_z, input_size). This builds the generator with the appropriate input and output sizes.
Th... | tf.reset_default_graph()
# Create our input placeholders
input_real, input_z = model_inputs(input_size, z_size)
# Build the model
g_model = generator(input_z, input_size, n_units=g_hidden_size, alpha=alpha)
# g_model is the generator output
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
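The build cell is truncated mid-call; a hedged completion of the two discriminator passes (reuse=True on the fake pass shares weights with the real pass):

```python
d_model_real, d_logits_real = discriminator(input_real, n_units=d_hidden_size, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True,
                                            n_units=d_hidden_size, alpha=alpha)
```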
Discriminator and Generator Losses
Now we need to calculate the losses, which is a little tricky. For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_loss_real + d_loss_fake. The losses will be sigmoid cross-entropies, which we can get with tf.nn.sigmoid_cross_entropy_with... | # Calculate losses
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real,
labels=tf.ones_like(d_logits_real) * (1 - smooth)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
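The loss cell stops inside d_loss_fake; assuming d_logits_fake from the (truncated) build cell, the remaining losses typically read:

```python
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_logits_fake)))
d_loss = d_loss_real + d_loss_fake

# the generator is rewarded when the discriminator labels its fakes as real
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_logits_fake)))
```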
Optimizers
We want to update the generator and discriminator variables separately. So we need to get the variables for each part and build separate optimizers for the two parts. To get all the trainable variables, we use tf.trainable_variables(). This creates a list of all the variables we've defined in our graph.
For the generator... | # Optimizers
learning_rate = 0.002
# Get the trainable_variables, split into G and D parts
t_vars = tf.trainable_variables()
g_vars = [var for var in t_vars if var.name.startswith('generator')]
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
d_train_opt = tf.train.AdamOptimizer(learning_rate)... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
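The optimizer cell is cut off after the first AdamOptimizer; the usual continuation restricts each minimize call to its own variable list so the two networks are updated independently:

```python
d_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate).minimize(g_loss, var_list=g_vars)
```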
Training | batch_size = 100
epochs = 100
samples = []
losses = []
# Only save generator variables
saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.t... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
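The training loop is truncated right after fetching a batch; here is a hedged sketch of the inner step (the rescaling and the alternation of updates are assumptions consistent with the tanh generator above):

```python
            batch_images = batch[0].reshape((batch_size, 784))
            batch_images = batch_images * 2 - 1  # rescale [0, 1] pixels into tanh's [-1, 1] range
            batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))

            # alternate one discriminator update with one generator update
            _ = sess.run(d_train_opt, feed_dict={input_real: batch_images, input_z: batch_z})
            _ = sess.run(g_train_opt, feed_dict={input_z: batch_z})
```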
Training loss
Here we'll check out the training losses for the generator and discriminator. | fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend() | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
Generator samples from training
Here we can view samples of images from the generator. First we'll look at images taken while training. | def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make. | _ = view_samples(-1, samples) | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
Below I'm showing the generated images as the network was training, every 10 epochs. With bonus optical illusion! | rows, cols = 10, 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number-like structures appear out of the noise, like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We j... | saver = tf.train.Saver(var_list=g_vars)
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
sample_z = np.random.uniform(-1, 1, size=(16, z_size))
gen_samples = sess.run(
generator(input_z, input_size, n_units=g_hidden_size, reuse=True, alpha=alpha),
... | gan_mnist/Intro_to_GANs_Solution.ipynb | flaviocordova/udacity_deep_learn_project | mit |
definitions | # define symbol
x = sym.Symbol('x')
# function to be approximated
f = sym.cos( x )
f = sym.exp( x )
#f = sym.sqrt( x )
# define lower and upper bound for L[a,b]
# -> may need to be changed if you adapt the function to be approximated
a = -1
b = 1 | sigNT/tutorial/approximation.ipynb | kit-cel/wt | gpl-2.0 |
Define Gram-Schmidt | # basis and their number of functions
M = [ x**c for c in range( 0, 4 ) ]
n = len( M )
print(M)
# apply Gram-Schmidt for user-defined set M
# init ONB
ONB = [ ]
# loop for new functions and apply Gram-Schmidt
for _n in range( n ):
# get function
f_temp = M[ _n ]
# subtract influence of past ON... | sigNT/tutorial/approximation.ipynb | kit-cel/wt | gpl-2.0 |
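The loop body is truncated at its last comment; a sketch of how the Gram-Schmidt step typically finishes, projecting out the earlier basis functions and then normalizing in the L2 inner product on [a, b]:

```python
    # subtract the projections onto the ONB functions found so far
    for _m in range(_n):
        f_temp -= sym.integrate(M[_n] * ONB[_m], (x, a, b)) * ONB[_m]

    # normalize with respect to the L2 inner product on [a, b]
    norm = sym.sqrt(sym.integrate(f_temp * f_temp, (x, a, b)))
    ONB.append(sym.expand(f_temp / norm))
```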
now approximate a function | # init approx and extend successively
approx = 0
# add the next ONB function with its corresponding coefficient
for _n in range( n ):
coeff = sym.integrate( f * ONB[ _n ], (x,a,b) )
approx += coeff * ONB[ _n ]
# if you like to see the function
print( approx )
p = plot( f, (x,a,b), show=False)
p.extend( plot( appro... | sigNT/tutorial/approximation.ipynb | kit-cel/wt | gpl-2.0 |
Load the data. | dataset = 'SPARC'
if dataset == 'HCP':
subject_path = conf['HCP']['data_paths']['mgh_1007']
loader = get_HCP_loader(subject_path)
small_data_path = '{}/mri/small_data.npy'.format(subject_path)
loader.update_filename_data(small_data_path)
data = loader.data
gtab = loader.gtab
voxel_size = ... | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
Fit a MAPL model to the data. | map_model_laplacian_aniso = mapmri.MapmriModel(gtab, radial_order=6,
laplacian_regularization=True,
laplacian_weighting='GCV')
mapfit_laplacian_aniso = map_model_laplacian_aniso.fit(data) | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
We want to use an FA image as background, this requires us to fit a DTI model. | tenmodel = dti.TensorModel(gtab)
tenfit = tenmodel.fit(data)
fitted = {'MAPL': mapfit_laplacian_aniso.predict(gtab)[:, :, 0],
'DTI': tenfit.predict(gtab)[:, :, 0]} | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
Fit GP without mean and with DTI and MAPL as mean. | kern = get_default_kernel(n_max=6, spatial_dims=2)
gp_model = GaussianProcessModel(gtab, spatial_dims=2, kernel=kern, verbose=False)
gp_fit = gp_model.fit(np.squeeze(data), mean=None, voxel_size=voxel_size[0:2], retrain=True)
kern = get_default_kernel(n_max=2, spatial_dims=2)
gp_dti_model = GaussianProcessModel(gtab, ... | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
gp_model = GaussianProcessModel(gtab, spatial_dims=2, q_magnitude_transform=np.sqrt, verbose=False)
gp_fit = gp_model.fit(np.squeeze(data), mean=None, voxel_size=voxel_size[0:2], retrain=True)
gp_dti_fit = gp_model.fit(np.squeeze(data), mean=fitted['DTI'], voxel_size=voxel_size[0:2], retrain=True)
gp_mapl_fit = gp_mode... | pred = {'MAPL': mapfit_laplacian_aniso.predict(gtab_dsi)[:, :, 0],
'DTI': tenfit.predict(gtab_dsi)[:, :, 0]} | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
Compute the ODFs
Load an odf reconstruction sphere | sphere = get_sphere('symmetric724').subdivide(1) | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
The radial order $s$ can be increased to sharpen the results, but it might
also make the ODFs noisier. Note that a "proper" ODF corresponds to $s=0$. | odf = {'MAPL': mapfit_laplacian_aniso.odf(sphere, s=0),
'DTI': tenfit.odf(sphere)}
odf['GP'] = gp_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=None)[:, :, None, :]
odf['DTI_GP'] = gp_dti_fit.odf(sphere, gtab_dsi=gtab_dsi, mean=pred['DTI'])[:, :, None, :]
odf['MAPL_GP'] = gp_mapl_fit.odf(sphere, gtab_dsi=gtab_dsi, me... | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
Display the ODFs | for name, _odf in odf.items():
ren = window.Renderer()
ren.background((1, 1, 1))
odf_actor = actor.odf_slicer(_odf, sphere=sphere, scale=0.5, colormap='jet')
background_actor = actor.slicer(tenfit.fa, opacity=1)
odf_actor.display(z=0)
odf_actor.RotateZ(90)
background_actor.display(z=0)
... | notebooks/show_ODFs.ipynb | jsjol/GaussianProcessRegressionForDiffusionMRI | bsd-3-clause |
Load data
Let us load training data and store features, labels and other data into numpy arrays. | # Load data from file
data = pd.read_csv('../facies_vectors.csv')
# Store features and labels
X = data[feature_names].values # features
y = data['Facies'].values # labels
# Store well labels and depths
well = data['Well Name'].values
depth = data['Depth'].values | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
Data inspection
Let us inspect the features we are working with. This step is useful to understand how to normalize them and how to devise a correct cross-validation strategy. Specifically, it is possible to observe that:
- Some features seem to be affected by a few outlier measurements.
- Only a few wells contain samp... | # Define function for plotting feature statistics
def plot_feature_stats(X, y, feature_names, facies_colors, facies_names):
# Remove NaN
nan_idx = np.any(np.isnan(X), axis=1)
X = X[np.logical_not(nan_idx), :]
y = y[np.logical_not(nan_idx)]
# Merge features and labels into a single DataFram... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
Feature imputation
Let us fill missing PE values. This is the only cell that differs from the approach of Paolo Bestagini. Currently no feature engineering is used, but this should be explored in the future. | def make_pe(X, seed):
reg = RandomForestRegressor(max_features='sqrt', n_estimators=50, random_state=seed)
DataImpAll = data[feature_names].copy()
DataImp = DataImpAll.dropna(axis = 0, inplace=False)
Ximp=DataImp.loc[:, DataImp.columns != 'PE']
Yimp=DataImp.loc[:, 'PE']
reg.fit(Ximp, Yimp)
X... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
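The function is cut off after fitting the regressor; a hedged sketch of the remaining imputation step (the index handling is an assumption, and feature_names is assumed to be a plain list):

```python
    # predict PE for the rows where it is missing, then write it back into X
    missing = DataImpAll.PE.isnull()
    pe_idx = feature_names.index('PE')
    X[missing.values, pe_idx] = reg.predict(
        DataImpAll.loc[missing, DataImpAll.columns != 'PE'])
    return X
```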
Feature augmentation
Our guess is that facies do not abruptly change from a given depth layer to the next one. Therefore, we consider features at neighboring layers to be somewhat correlated. To possibly exploit this fact, let us perform feature augmentation by:
- Aggregating features at neighboring depths.
- Computing f... | # Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_nei... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
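The second augmentation mentioned above, computing feature gradients along depth, is not visible in the truncated cell; here is a hedged sketch modeled on Paolo Bestagini's published approach:

```python
def augment_features_gradient(X, depth):
    # depth differences between consecutive samples
    d_diff = np.diff(depth).reshape((-1, 1))
    d_diff[d_diff == 0] = 0.001  # guard against division by zero at repeated depths
    X_grad = np.diff(X, axis=0) / d_diff
    # repeat the last gradient row so the output aligns with X
    return np.concatenate((X_grad, X_grad[-1:, :]), axis=0)
```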
Generate training, validation and test data splits
The choice of training and validation data is paramount in order to avoid overfitting and find a solution that generalizes well on new data. For this reason, we generate a set of training-validation splits so that:
- Features from each well belong to training or valid... | # Initialize model selection methods
lpgo = LeavePGroupsOut(2)
# Generate splits
split_list = []
for train, val in lpgo.split(X, y, groups=data['Well Name']):
hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)
hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)
if ... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
Classification parameters optimization
Let us perform the following steps for each set of parameters:
- Select a data split.
- Normalize features using a robust scaler.
- Train the classifier on training data.
- Test the trained classifier on validation data.
- Repeat for all splits and average the F1 scores.
At the en... | # Parameters search grid (uncomment parameters for full grid search... may take a lot of time)
N_grid = [100] # [50, 100, 150]
M_grid = [10] # [5, 10, 15]
S_grid = [25] # [10, 25, 50, 75]
L_grid = [5] # [2, 3, 4, 5, 10, 25]
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
fo... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
Predict labels on test data
Let us now apply the selected classification technique to test data. | param_best = {'S': 25, 'M': 10, 'L': 5, 'N': 100}
# Load data from file
test_data = pd.read_csv('../validation_data_nofacies.csv')
# Prepare test data
well_ts = test_data['Well Name'].values
depth_ts = test_data['Depth'].values
X_ts = test_data[feature_names].values
y_pred = []
print('o' * 100)
for seed in range(100... | ar4/ar4_submission2_VALIDATION.ipynb | esa-as/2016-ml-contest | apache-2.0 |
Reading the flow-direction and elevation maps:
Basin and stream network delineation | cuCap = wmf.SimuBasin(0,0,0,0,rute='/media/nicolas/discoGrande/01_SIATA/nc_cuencas/Picacha_Abajo.nc')
# Save the basin vector to a shapefile
cuCap.Save_Basin2Map('/media/nicolas/discoGrande/01_SIATA/vector/Cuenca_AltaVista2.shp')
cuCap.Save_Net2Map('/media/nicolas/discoGrande/01_SIATA/vector/Red_Altavista_Abajo.shp',dx = 12.7, ... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Travel time | cuCap.GetGeo_Parameters()
cuCap.Tc
# Geomorphological parameters of the basins
cuCap.GetGeo_Parameters(rutaParamASC=ruta_images+'param_cap.txt',
plotTc=True,
rutaTcPlot=ruta_images+'Tc_cap.png') | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The Campo y Múnera and Giandotti concentration times are not taken into account; the remaining estimates are, giving a mean concentration time of $T_c = 2.69$ hrs | 0.58*60.0
# Mean time and travel-time maps
TcCap = np.array(cuCap.Tc.values()).mean()
# Compute travel times
cuCap.GetGeo_IsoChrones(TcCap, Niter= 6)
# Travel-time figure
cuCap.Plot_basin(cuCap.CellTravelTime,
ruta = '/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/IsoCronas.... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
This map should be recomputed with a larger number of iterations; we leave that for later, since it takes time. For now it is poor. | ruta_images = '/media/nicolas/discoGrande/01_SIATA/ParamCuencas/AltaVistaAbajo/'
cuCap.Plot_Travell_Hist(ruta=ruta_images + 'Histogram_IsoCronas.png') | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Hypsometric curve and main channel | cuCap.GetGeo_Cell_Basics()
cuCap.GetGeo_Ppal_Hipsometric(intervals=50)
cuCap.Plot_Hipsometric(normed=True,ventana=10, ruta=ruta_images+'Hipsometrica_Captacion.png')
cuCap.PlotPpalStream(ruta=ruta_images+'Perfil_cauce_ppal_Capta.png') | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The main channel shows the typical development of a medium-to-large basin: a sediment-production zone is clearly visible between 0 and 10 km, and from 10 km onward there is a transport and deposition zone with slopes ranging between 0.0 and 0.8% | cuCap.PlotSlopeHist(bins=[0,2,0.2],ruta=ruta_images+'Slope_hist_cap.png') | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The slope histogram shows that most of the slopes are below 0.6, so the basin's main channel is considered to develop mainly within a valley.
Geomorphological variable maps | cuCap.GetGeo_HAND()
cuCap.Plot_basin(cuCap.CellHAND_class, ruta=ruta_images+'Map_HAND_class.png', lines_spaces=0.01)
cuCap.Plot_basin(cuCap.CellSlope, ruta=ruta_images + 'Map_Slope.png', lines_spaces=0.01) | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The slope map shows that the steepest slopes in the basin occur in its upper part; toward the lower zone of the basin, the slopes clearly become gentle. | IT = cuCap.GetGeo_IT()
cuCap.Plot_basin(IT, ruta = ruta_images+'Indice_topografico.png', lines_spaces= 0.01) | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Precipitation
The precipitation in the area is analyzed next; this way the climatic conditions of the region are characterized.
Procedure for disaggregating rainfall (obtaining IDF curves)
Read the EPM station with hourly data
Streamflows
Calculation of long-term mean streamflows from the prec... | Precip = 1650
cuCap.GetQ_Balance(Precip)
cuCap.Plot_basin(cuCap.CellETR, ruta=ruta_images+'Map_ETR_Turc.png', lines_spaces=0.01)
cuCap.GetQ_Balance(Precip)
print 'Captacion discharge:', cuCap.CellQmed[-1]
cuCap.Plot_basin(Precip - cuCap.CellETR, ruta = ruta_images+'Map_RunOff_mm_ano.png',
lines_spaces=0.01,
colo... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Extreme flows by regionalization
The maximum and minimum extreme flows are computed for the return periods:
- 2.33, 5, 10, 25, 50 and 100 years
The following methodologies are used:
Regionalization with Gumbel and lognormal distributions. | #Return periods for obtaining the maxima and minima
Tr=[2.33, 5, 10, 25, 50, 100]
QmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])
QmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])
QminRegLog = cuCap.... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The map with the mean flow, together with the maxima and minima for the basin's entire stream network, is saved | Dict = {'Qmed':cuCap.CellQmed}
for t,q in zip([2.33,5,10,25,50,100],QminRegGum):
Dict.update({'min_g'+str(t):q})
for t,q in zip([2.33,5,10,25,50,100],QminRegLog):
Dict.update({'min_l'+str(t):q})
for t,q in zip([2.33,5,10,25,50,100],QmaxRegGum):
Dict.update({'max_g'+str(t):q})
for t,q in... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Maximum flows
In addition to the maximum flows estimated by regionalization, maximum flows are estimated with the following synthetic unit hydrograph methods:
Snyder.
SCS.
Williams | cuCap.GetGeo_Parameters()
#Parameters for the maxima
TcCap = np.median(cuCap.Tc.values())
#CN = 50
CN=80
print 'Mean travel time, Captacion:', TcCap
print 'Curve number:', CN
# Obtain the design rainfall.
Intensidad = [40.9, 49.5, 55.5, 60.6, 67.4, 75.7]
# Effective rainfall
lluviaTr,lluvEfect,S = cuCap.Ge... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
The figure shows how, for the different return periods, part of the rainfall amount is lost before becoming effective rainfall | #Compute the unit hydrographs at the outlet
Tscs,Qscs,HU=cuCap.GetHU_SCS(cuCap.GeoParameters['Area[km2]'],
TcCap,)
Tsnyder,Qsnyder,HU,Diferencia=cuCap.GetHU_Snyder(cuCap.GeoParameters['Area[km2]'],
TcCap,
Cp=0.8,
Fc=2.9)
#Cp=1.65/(np.sqrt(PendCauce)**0.38))
Twilliam,Qwilliam,HU=cuCap.GetHU_Williams(cuCap.GeoParameters['Ar... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Unit hydrographs calibrated for the basin; in this case Williams shows a lag relative to the other methodologies | #QmaxRegGum = cuCap.GetQ_Max(cuCap.CellQmed, Dist='gumbel', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])
#QmaxRegLog = cuCap.GetQ_Max(cuCap.CellQmed, Dist='lognorm', Tr= Tr, Coef = [6.71, 3.29], Expo = [1.2, 0.9])
#Convolve the synthetic hydrographs with the design storm
HidroSnyder,QmaxSnyd... | Examples/Ejemplo_Hidrologia_Maximos.ipynb | nicolas998/wmf | gpl-3.0 |
Pandas
Pandas is an excellent library for handling tabular data and quickly performing data analysis on it. It can handle many text file formats. | import pandas as pd
df = pd.read_csv('../Pandas/Jan17_CO_ASOS.txt', sep='\t')
df.head() | notebooks/Python_Ecosystem/Scientific_Python_Ecosystem_Overview.ipynb | julienchastang/unidata-python-workshop | mit |
xarray
xarray is a Python library meant to handle N-dimensional arrays with metadata (think netCDF files). With the Dask library, it can work with Big Data efficiently in a Python framework. | import xarray as xr
ds = xr.open_dataset('../../data/NARR_19930313_0000.nc')
ds | notebooks/Python_Ecosystem/Scientific_Python_Ecosystem_Overview.ipynb | julienchastang/unidata-python-workshop | mit |
Dask
Dask is a parallel-computing library in Python. You can use it on your laptop, cloud environment, or on a high-performance computer (NCAR's Cheyenne for example). It allows for lazy evaluations so that computations only occur after you've chained all of your operations together. Additionally, it has a built-in sch... | import numpy as np
import matplotlib.pyplot as plt

# demo data for the plot, so the snippet is self-contained
x = np.linspace(0, 2 * np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
plt.title('Demo of Matplotlib')
plt.show() | notebooks/Python_Ecosystem/Scientific_Python_Ecosystem_Overview.ipynb | julienchastang/unidata-python-workshop | mit |
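Since the Dask paragraph's own code cell is missing (the cell shown above is the Matplotlib demo), here is a minimal Dask illustration that is not from the original notebook:

```python
import dask.array as da

# a 10000 x 10000 random array split into 1000 x 1000 chunks
arr = da.random.random((10000, 10000), chunks=(1000, 1000))
col_means = arr.mean(axis=0)   # builds a lazy task graph; nothing runs yet
print(col_means.compute())     # .compute() hands the graph to the scheduler
```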
Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll cal... | # Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True) | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | nehal96/Deep-Learning-ND-Exercises | mit |
Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training... | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | nehal96/Deep-Learning-ND-Exercises | mit |
Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the ou... | # Define the neural network
def build_model(learning_rate):
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
# Input layer
net = tflearn.input_dat... | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | nehal96/Deep-Learning-ND-Exercises | mit |
Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You ca... | # Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=10) | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | nehal96/Deep-Learning-ND-Exercises | mit |
Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been kno... | # Compare the labels that our model predicts with the actual labels
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
# Calculate the accuracy, which is the percentage of times the predicated labels matched the actual labels
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
# Print o... | Sentiment Analysis/Handwritten Digit Recognition with TFLearn and MNIST/handwritten-digit-recognition-with-tflearn.ipynb | nehal96/Deep-Learning-ND-Exercises | mit |
Let us test the above function on a simple example: the full triangle with values 0, 1, and 2 on the vertices labeled 1, 2, and 3. | K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
for v in (1, 2, 3):
print "{0}: {1}".format((v,), lower_link((v,), K, f)) | 2015_2016/lab13/Extending values on vertices.ipynb | gregorjerse/rt2 | gpl-3.0 |
Now let us implement an extension algorithm. We are leaving out the cancelling step for clarity. | def join(a, b):
"""Return the join of 2 simplices a and b."""
return tuple(sorted(set(a).union(b)))
def extend(K, f):
"""Extend the field to the complex K.
Function on vertices is given in f.
Returns the pair V, C, where V is the dictionary containing discrete gradient vector field
and C is the... | 2015_2016/lab13/Extending values on vertices.ipynb | gregorjerse/rt2 | gpl-3.0 |
Let us test the algorithm on the example from the previous step (full triangle). | K = closure([(1, 2, 3)])
f = {1: 0, 2: 1, 3: 2}
extend(K, f)
K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 0}
extend(K, f)
K = closure([(1, 2, 3), (2, 3, 4)])
f = {1: 0, 2: 1, 3: 2, 4: 3}
extend(K, f) | 2015_2016/lab13/Extending values on vertices.ipynb | gregorjerse/rt2 | gpl-3.0 |
Preparation | # for VGG, ResNet, and MobileNet
INPUT_SHAPE = (224, 224)
# for InceptionV3, InceptionResNetV2, Xception
# INPUT_SHAPE = (299, 299)
import os
import skimage.data
import skimage.transform
from keras.utils.np_utils import to_categorical
import numpy as np
def load_data(data_dir, type=".ppm"):
num_categories = 6
... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Uncomment the next three cells if you want to train on the augmented image set
Otherwise overfitting cannot be avoided, because the image set is simply too small | # !curl -O https://raw.githubusercontent.com/DJCordhose/speed-limit-signs/master/data/augmented-signs.zip
# from zipfile import ZipFile
# zip = ZipFile('augmented-signs.zip')
# zip.extractall('.')
data_dir = os.path.join(ROOT_PATH, "augmented-signs")
augmented_images, augmented_labels = load_data(data_dir, type=".png"... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Split test and train data 80% to 20% | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
X_train.shape, y_train.shape | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Training Xception
A slightly optimized version of Inception: https://keras.io/applications/#xception
Inception V3 no longer uses the non-sequential tower architecture; it uses shortcuts instead: https://keras.io/applications/#inceptionv3
Uses Batch Normalization:
https://keras.io/layers/normalization/#batchnormalization
http://cs23... | from keras.applications.xception import Xception
model = Xception(classes=6, weights=None)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
# !rm -rf ./tf_log
# https://keras.io/callbacks/#tensorboard
tb_callback = keras.callbacks.Ten... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
This is a truly complex model
The batch size needs to be small, otherwise the model does not fit in memory
It will take a long time to train, even on a GPU
On the augmented dataset: 4 minutes per epoch on a K80, i.e. 400 minutes for 100 epochs = 6-7 hours | # Depends on the GPU hardware architecture; the model is really complex, so the batch needs to be small (this works well on a K80)
BATCH_SIZE = 25
early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, verbose=1)
%time model.fit(X_train, y_train, epochs=50, validation_split=0.2, callbacks=[tb_callback,... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Each epoch takes a very long time
It is extremely impressive how fast it converges: almost 100% validation accuracy starting from epoch 25
TODO: Metrics for Augmented Data
Accuracy
Validation Accuracy | train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Alternative: ResNet
basic ideas
depth does matter
8x deeper than VGG
made possible by using shortcuts and skipping the final FC layer
https://keras.io/applications/#resnet50
https://medium.com/towards-data-science/neural-network-architectures-156e5bad51ba
http://arxiv.org/abs/1512.03385 | from keras.applications.resnet50 import ResNet50
model = ResNet50(classes=6, weights=None)
model.summary()
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
early_stopping_callback = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10, verbo... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
Results are somewhat worse
Maybe it needs to train longer?
Batches can be larger and training is faster, even though it takes more epochs
Metrics for Augmented Data
Accuracy
Validation Accuracy | train_loss, train_accuracy = model.evaluate(X_train, y_train, batch_size=BATCH_SIZE)
train_loss, train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test, batch_size=BATCH_SIZE)
test_loss, test_accuracy
original_loss, original_accuracy = model.evaluate(original_images, original_labels, batch_size=BATCH... | notebooks/workshops/tss/cnn-standard-architectures.ipynb | DJCordhose/ai | mit |
1 - The data | # We load our data from some available ones shipped with dcgpy.
# In this particular case we use the problem sinecosine from the paper:
# Vladislavleva, Ekaterina J., Guido F. Smits, and Dick Den Hertog.
# "Order of nonlinearity as a complexity measure for models generated by symbolic regression via pareto genetic
# pr... | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
2 - The symbolic regression problem | # We define our kernel set, that is the mathematical operators we will
# want our final model to possibly contain. What to choose in here is left
# to the competence and knowledge of the user. A list of kernels shipped with dcgpy
# can be found on the online docs. The user can also define its own kernels (see the corr... | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
3 - The search algorithm | # We instantiate here the evolutionary strategy we want to use to
# search for models. Note we specify we want the evolutionary operators
# to be applied also to the constants via the kwarg *learn_constants*
uda = dcgpy.momes4cgp(gen = 250, max_mut = 4)
algo = pg.algorithm(uda)
algo.set_verbosity(10) | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
4 - The search | # We use a population of 100 individuals
pop = pg.population(prob, 100)
# Here is where we run the actual evolution. Note that the screen output
# will show in the terminal (not on your Jupyter notebook in case
# you are using it). Note you will have to run this a few times before
# solving the problem entirely.
pop... | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
5 - Inspecting the non dominated front | # Compute here the non dominated front.
ndf = pg.non_dominated_front_2d(pop.get_f())
# Inspect the front and print the proposed expressions.
print("{: >20} {: >30}".format("Loss:", "Model:"), "\n")
for idx in ndf:
x = pop.get_x()[idx]
f = pop.get_f()[idx]
a = parse_expr(udp.prettier(x))[0]
print("{: >2... | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
6 - Let's have a look at the log content | # Here we get the log of the latest call to evolve
log = algo.extract(dcgpy.momes4cgp).get_log()
gen = [it[0] for it in log]
loss = [it[2] for it in log]
compl = [it[4] for it in log]
# And here we plot, for example, the generations against the best loss
_ = plt.plot(gen, loss)
_ = plt.title('last call to evolve')... | doc/sphinx/notebooks/symbolic_regression_3.ipynb | darioizzo/d-CGP | gpl-3.0 |
Open your dataset up using pandas in a Jupyter notebook | df = pd.read_csv("congress.csv", error_bad_lines=False) | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
Do a .head() to get a feel for your data | df.head()
#bioguide: The alphanumeric ID for legislators in http://bioguide.congress.gov. | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
Write down 12 questions to ask your data, or 12 things to hunt for in the data
1) How many senators and how many representatives in total since 1947? | df['chamber'].value_counts() #sounds like a lot. We might have repetitions.
df['bioguide'].describe() #we count the bioguide, which is unique to each legislator.
#There are only 3188 unique values, hence only 3188 senators and representatives in total. | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
2) How many from each party in total? | total_democrats = (df['party'] == 'D').value_counts()
total_democrats
total_republicans =(df['party'] == 'R').value_counts()
total_republicans | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
3) What is the average age for people that have worked in congress (both Senators and Representatives) | df['age'].describe() | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
4) What is the average age of Senators that have worked in the Senate? And for Representatives in the house? | df.groupby("chamber")['age'].describe() | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
5) How many in total from each state? | df['state'].value_counts() | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
6) How many Senators in total from each state? How many Representatives? | df.groupby("state")['chamber'].value_counts() | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
7) How many terms are recorded in this dataset? | df['termstart'].describe() #here we would look at unique. | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
8) Who has been the oldest serving in the US, a senator or a representative? How old was he/she? | df.sort_values(by='age').tail(1) #A senator! | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
9) Who have been the oldest and youngest serving Representative in the US? | representative = df[df['chamber'] == 'house']
representative.sort_values(by='age').tail(1)
representative.sort_values(by='age').head(2) | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
10) Who have been the oldest and youngest serving Senator in the US? | senator = df[df['chamber'] == 'senate']
senator.sort_values(by='age')
senator.sort_values(by='age').head(2) | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |
11) Who has served for more periods (in this question I am not paying attention to the period length)? | # Store a new column
df['complete_name'] = df['firstname']+ " "+ df['middlename'] + " "+df['lastname']
df.head()
period_count = df.groupby('complete_name')['termstart'].value_counts().sort_values(ascending=False)
pd.DataFrame(period_count)
#With the help of Stephan we figured out that term-start is every 2 years
#(... | foundations_hw/08/Homework8_benzaquen_congress_data.ipynb | mercybenzaquen/foundations-homework | mit |