While including the noise from our intrinsic likelihoods appears to substantially increase our error budget, it didn't actually shift our mean prediction closer to the truth. What gives? The issue is that we aren't accounting for the fact that we are able to get an estimate of the true (expected) log-likelihood from ou...
# compute sample mean and std(sample mean)
logls = np.array([[loglikelihood2(s) for s in dres2.samples]
                  for i in range(Nmc)])
logls_est = logls.mean(axis=0)  # sample mean

logls_bt = []
for i in range(Nmc * 10):
    idx = rstate.choice(Nmc, size=Nmc)
    logls_bt.append(logls[idx].mean(axis=0))  # bootstrapped mean log...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
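In isolation, the sample-mean-plus-bootstrap estimate looks like this (a self-contained toy sketch with synthetic Gaussian noise standing in for the noisy log-likelihood calls; all names here are illustrative, not dynesty's API):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for Nmc noisy log-likelihood evaluations at 5 sample positions.
Nmc = 100
logls = rng.normal(loc=-10.0, scale=0.5, size=(Nmc, 5))

logls_est = logls.mean(axis=0)  # sample mean at each position

# Bootstrap: resample the Nmc realizations with replacement and re-average.
boot_means = np.array([
    logls[rng.integers(0, Nmc, size=Nmc)].mean(axis=0)
    for _ in range(1000)
])
logls_err = boot_means.std(axis=0)  # bootstrap std of the mean
```

With noise of scale 0.5 averaged over 100 realizations, the bootstrap errors come out near 0.5 / sqrt(100) = 0.05, as expected for a standard error of the mean.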
We see that reweighting using our mean likelihoods (with bootstrapped errors) now properly shifts the mean while leaving us with uncertainties slightly larger than in the noiseless case. This is what we'd expect given that we only have a noisy estimate of the true log-likelihood at a given position.
# initialize figure
fig, axes = plt.subplots(3, 7, figsize=(35, 15))
axes = axes.reshape((3, 7))
[a.set_frame_on(False) for a in axes[:, 3]]
[a.set_xticks([]) for a in axes[:, 3]]
[a.set_yticks([]) for a in axes[:, 3]]

# plot noiseless run (left)
fg, ax = dyplot.cornerplot(dres, color='blue', truths=[0., 0., 0.], trut...
demos/Examples -- Noisy Likelihoods.ipynb
joshspeagle/dynesty
mit
In a previous example, we modeled the interaction between the Earth and the Sun, simulating what would happen if the Earth stopped in its orbit and fell straight into the Sun. Now let's extend the model to two dimensions and simulate one revolution of the Earth around the Sun, that is, one year. At perihelion, the dist...
r_0 = 147.09e9  # initial distance, m
v_0 = 30.29e3   # initial velocity, m/s
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
Here are the other constants we'll need, all with about 4 significant digits.
G = 6.6743e-11   # gravitational constant, N * m**2 / kg**2
m1 = 1.989e30    # mass of the Sun, kg
m2 = 5.972e24    # mass of the Earth, kg
t_end = 3.154e7  # one year in seconds
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: Put the initial conditions in a State object with variables x, y, vx, and vy. Create a System object with variables init and t_end.
# Solution
init = State(x=r_0, y=0, vx=0, vy=-v_0)

# Solution
system = System(init=init, t_end=t_end)
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: Write a function called universal_gravitation that takes a State and a System and returns the gravitational force of the Sun on the Earth as a Vector. Test your function with the initial conditions; the result should be a Vector with approximate components: x -3.66e+22 y 0
# Solution
def universal_gravitation(state, system):
    """Computes gravitational force.

    state: State object with distance r
    system: System object with m1, m2, and G

    returns: Vector
    """
    x, y, vx, vy = state
    R = Vector(x, y)
    mag = G * m1 * m2 / vector_mag(R)**2
    direction ...
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
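As a cross-check on the expected value, Newton's law F = G m1 m2 / r**2 can be evaluated with plain numpy (a standalone sketch; it does not use ModSim's State/Vector types, which the solution above relies on):

```python
import numpy as np

G = 6.6743e-11   # gravitational constant, N * m**2 / kg**2
m1 = 1.989e30    # mass of the Sun, kg
m2 = 5.972e24    # mass of the Earth, kg
r_0 = 147.09e9   # Earth-Sun distance at perihelion, m

def gravity(pos):
    """Force of the Sun (at the origin) on the Earth at position pos."""
    r = np.asarray(pos, dtype=float)
    mag = G * m1 * m2 / np.dot(r, r)       # |F| = G m1 m2 / r**2
    direction = -r / np.linalg.norm(r)     # unit vector toward the Sun
    return mag * direction

F = gravity([r_0, 0.0])  # x component near -3.66e22 N, y component 0
```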
Exercise: Write a slope function that takes a timestamp, a State, and a System and computes the derivatives of the state variables. Test your function with the initial conditions. The result should be a sequence of four values, approximately 0.0, -30290.0, -0.006, 0.0
# Solution
def slope_func(t, state, system):
    x, y, vx, vy = state
    F = universal_gravitation(state, system)
    A = F / m2
    return vx, vy, A.x, A.y

# Solution
slope_func(0, init, system)
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
Exercise: Use run_solve_ivp to run the simulation. Save the return values in variables called results and details.
# Solution
results, details = run_solve_ivp(system, slope_func)
details.message
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
You can use the following function to plot the results.
from matplotlib.pyplot import plot

def plot_trajectory(results):
    x = results.x / 1e9
    y = results.y / 1e9
    make_series(x, y).plot(label='orbit')
    plot(0, 0, 'yo')
    decorate(xlabel='x distance (million km)',
             ylabel='y distance (million km)')

plot_trajectory(results)
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
You will probably see that the Earth does not end up back where it started, as we expect it to after one year. The following cells compute the error, which is the distance between the initial and final positions.
error = results.iloc[-1] - system.init
error

offset = Vector(error.x, error.y)
vector_mag(offset) / 1e9
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
The problem is that the algorithm used by run_solve_ivp does not work very well with systems like this. There are two ways we can improve it. run_solve_ivp takes a keyword argument, rtol, that specifies the "relative tolerance", which determines the size of the time steps in the simulation. Lower values of rtol requir...
details.nfev
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
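Since run_solve_ivp wraps scipy's solve_ivp, the effect of rtol can be seen with scipy alone (a standalone sketch on a simple harmonic oscillator, not the orbit system): tightening rtol forces smaller steps and therefore more right-hand-side evaluations (nfev).

```python
from scipy.integrate import solve_ivp

# Simple harmonic oscillator as a stand-in for the orbit problem.
def slope(t, state):
    x, v = state
    return [v, -x]

# Same problem solved twice with different relative tolerances.
loose = solve_ivp(slope, (0, 100), [1.0, 0.0], rtol=1e-3)
tight = solve_ivp(slope, (0, 100), [1.0, 0.0], rtol=1e-8)

# tight.nfev exceeds loose.nfev: smaller tolerance, more function calls.
```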
Animation

You can use the following draw function to animate the results, if you want to see what the orbit looks like (not in real time).
xlim = results.x.min(), results.x.max()
ylim = results.y.min(), results.y.max()

def draw_func(t, state):
    x, y, vx, vy = state
    plot(x, y, 'b.')
    plot(0, 0, 'yo')
    decorate(xlabel='x distance (million km)',
             ylabel='y distance (million km)',
             xlim=xlim, ylim=ylim)

# an...
python/soln/examples/orbit_soln.ipynb
AllenDowney/ModSim
gpl-2.0
Creating Isochrones

To use the isochrone module, you must have the isochrone library installed (see instructions here). The isochrone module provides an API to create isochrones and calculate various characteristics. The easiest way to create an isochrone is through the general factory interface shown below.
def plot_iso(iso):
    plt.scatter(iso.mag_1 - iso.mag_2, iso.mag_1 + iso.distance_modulus,
                marker='o', c='k')
    plt.gca().invert_yaxis()
    plt.xlabel('%s - %s' % (iso.band_1, iso.band_2))
    plt.ylabel(iso.band_1)

iso1 = isochrone.factory(name='Padova',
                         age=12,  # Gyr
                         metallici...
notebooks/isochrone_example.ipynb
kadrlica/ugali
mit
Modifying Isochrones

Once you create an isochrone, you can modify its parameters on the fly.
iso = isochrone.factory(name='Padova',
                        age=12,             # Gyr
                        metallicity=0.0002, # Z
                        distance_modulus=17)

# You can set the age, metallicity, and distance modulus
iso.age = 11
iso.distance_modulus = 20
iso.metallicity = 0.00015
pr...
notebooks/isochrone_example.ipynb
kadrlica/ugali
mit
Advanced Methods

The Isochrone class wraps several more complicated functions related to isochrones. A few examples are shown below.
# Draw a regular grid of points from the isochrone with associated IMF
initial_mass, mass_pdf, actual_mass, mag_1, mag_2 = iso1.sample(mass_steps=1e2)
plt.scatter(mag_1 - mag_2, mag_1 + iso1.distance_modulus, c=mass_pdf,
            marker='o', facecolor='none', vmax=0.001)
plt.colorbar()
plt.gca().invert_yaxis()
plt.xlabel('%s - %s' % (iso.ban...
notebooks/isochrone_example.ipynb
kadrlica/ugali
mit
Table 2 - Probable Low Mass and Substellar Mass Members of rho Oph, with MOIRCS Spectroscopy Follow-up
names = ["No.", "R.A. (J2000)", "Decl. (J2000)", "i (mag)", "J (mag)",
         "K_s (mag)", "T_eff (K)", "A_V", "Notes"]
tbl2 = pd.read_csv("http://iopscience.iop.org/0004-637X/726/1/23/suppdata/apj373191t2_ascii.txt",
                   sep="\t", skiprows=[0, 1, 2, 3], na_values="sat",
                   names=names)
tbl2.dropna(how="all", inplace=True)...
notebooks/Geers2011.ipynb
BrownDwarf/ApJdataFrames
mit
Save the data
! mkdir ../data/Geers2011
tbl2.to_csv("../data/Geers2011/tb2.csv", index=False, sep='\t')
notebooks/Geers2011.ipynb
BrownDwarf/ApJdataFrames
mit
Acquire data

The Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
train_df = pd.read_csv('data/titanic-kaggle/train.csv')
test_df = pd.read_csv('data/titanic-kaggle/test.csv')
combine = [train_df, test_df]
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Analyze by describing data

Pandas also helps describe the datasets, answering the following questions early in our project. Which features are available in the dataset? Noting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle data page here.
print(train_df.columns.values)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
What is the distribution of numerical feature values across the samples? This helps us determine, among other early insights, how representative the training dataset is of the actual problem domain. Total samples are 891, or 40% of the actual number of passengers on board the Titanic (2,224). Survived is a categorical ...
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
What is the distribution of categorical features? Names are unique across the dataset (count=unique=891). The Sex variable has two possible values with 65% male (top=male, freq=577/count=891). Cabin values have several duplicates across samples; alternatively, several passengers shared a cabin. Embarked takes three possible v...
train_df.describe(include=['O'])
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Assumptions based on data analysis

We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions. Correlating. We want to know how well each feature correlates with Survival. We want to do this early in our project and match these q...
pivot = train_df[['Pclass', 'Survived']]
pivot = pivot.groupby(['Pclass'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)

pivot = train_df[["Sex", "Survived"]]
pivot = pivot.groupby(['Sex'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)

pivot = train_df[["SibSp", "...
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Analyze by visualizing data

Now we can continue confirming some of our assumptions using visualizations for analyzing the data.

Correlating numerical features

Let us start by understanding correlations between numerical features and our solution goal (Survived). A histogram chart is useful for analyzing continuous numer...
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Correlating numerical and ordinal features

We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values. Observations. Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption...
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Correlating categorical features

Now we can correlate categorical features with our solution goal. Observations. Female passengers had a much better survival rate than males. Confirms classifying (#1). Exception in Embarked=C, where males had a higher survival rate. This could be a correlation between Pclass and Embarked a...
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Correlating categorical and numerical features

We may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (categorical non-numeric), Sex (categorical non-numeric), and Fare (numeric continuous) with Survived (categorical numeric). Observations. ...
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Wrangle data

We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for the correcting, creating, and completing goals.

Correcting by dropping featur...
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)

train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]

"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Creating new feature extracting from existing

We want to analyze whether the Name feature can be engineered to extract titles and test the correlation between titles and survival, before dropping the Name and PassengerId features. In the following code we extract the Title feature using regular expressions. The RegEx pattern (\w+\.) matche...
for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)

pd.crosstab(train_df['Title'], train_df['Sex'])
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can replace many titles with a more common name or classify them as Rare.
for dataset in combine:
    dataset['Title'] = dataset['Title'].replace([
        'Lady', 'Countess', 'Capt', 'Col', 'Don', 'Dr', 'Major',
        'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].repla...
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can convert the categorical titles to ordinal.
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)

train_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]

train_df.shape, test_df.shape
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Converting a categorical feature

Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal. Let us start by converting the Sex feature to numerical values, where female=1 and male=0.
for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map({'female': 1, 'male': 0}).astype(int)

train_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Completing a numerical continuous feature

Now we should start estimating and completing features with missing or null values. We will first do this for the Age feature. We can consider three methods to complete a numerical continuous feature. A simple way is to generate random numbers between mean and standard deviat...
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Sex combinations.
guess_ages = np.zeros((2, 3))
guess_ages
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
for dataset in combine:
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) &
                               (dataset['Pclass'] == j+1)]['Age'].dropna()

            # age_mean = guess_df.mean()
            # age_std = guess_df.std()
            # age_guess ...
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
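For reference, the same group-wise imputation can be done in one step with a grouped transform (a standalone pandas sketch on an invented toy frame; only the column names mirror the notebook, and the median is used as the group statistic):

```python
import numpy as np
import pandas as pd

# Toy frame with missing ages in two Sex x Pclass groups.
df = pd.DataFrame({
    'Sex':    [0, 0, 1, 1, 0, 1],
    'Pclass': [1, 1, 2, 2, 1, 2],
    'Age':    [40.0, np.nan, 30.0, 20.0, 50.0, np.nan],
})

# Fill each missing Age with the median of its Sex x Pclass group.
df['Age'] = df['Age'].fillna(
    df.groupby(['Sex', 'Pclass'])['Age'].transform('median')
)
```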
Let us create Age bands and determine correlations with Survived.
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)

pivot = train_df[['AgeBand', 'Survived']]
pivot = pivot.groupby(['AgeBand'], as_index=False).mean()
pivot.sort_values(by='AgeBand', ascending=True)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Let us replace Age with ordinals based on these bands.
for dataset in combine:
    dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ ...
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can now remove the AgeBand feature.
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Create new feature combining existing features

We can create a new feature for FamilySize, which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1

pivot = train_df[['FamilySize', 'Survived']]
pivot = pivot.groupby(['FamilySize'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can create another feature called IsAlone based on the FamilySize feature we just created.
for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1

train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]

train_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can also create an artificial feature combining Pclass and Age.
for dataset in combine:
    dataset['Age*Class'] = dataset.Age * dataset.Pclass

train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Completing a categorical feature

The Embarked feature takes S, Q, and C values based on port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port

for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)

pivot = train_df[['Embarked', 'Survived']]
pivot = pivot.groupby(['Embarked'], as_index=False).mean()
pivot.sort_values(by='Survived', ascending=False)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Converting categorical feature to numeric

We can now convert the Embarked feature to a new numeric feature.
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map({'S': 0, 'C': 1, 'Q': 2}).astype(int)

train_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Quick completing and converting a numeric feature

We can now complete the Fare feature for the single missing value in the test dataset using the median value for this feature. We do this in a single line of code. Note that we are not creating an intermediate new feature or doing any further an...
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can now create a FareBand temporary reference feature.
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)

pivot = train_df[['FareBand', 'Survived']]
pivot = pivot.groupby(['FareBand'], as_index=False).mean()
pivot.sort_values(by='FareBand', ascending=True)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Convert the Fare feature to ordinal values based on the FareBand.
for dataset in combine:
    dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[ dataset['Fare'] > 31, '...
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
And the test dataset.
test_df.head(10)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function. Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coef...
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])

coeff_df.sort_values(by='Correlation', ascending=False)
.ipynb_checkpoints/titanic-data-science-solutions-refactor-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
The next cell shows the start of how to set up something like the WISE app.
from bokeh.models.widgets import Slider, RadioGroup, Button
from bokeh.io import output_file, show, vform
from bokeh.plotting import figure

output_file("queryWise.html")

band = RadioGroup(labels=["3.5 microns", "4.5 microns", "12 microns", "22 microns"],
                  active=0)
fov = Slider(start=5, en...
Data_apps.ipynb
stargaser/advancedviz2016
mit
The rest of the notebook is not currently working

An attempt at a simple app

Since the slider does not display, there seems to be some problem with my installation for using widgets in the notebook.
from ipywidgets import *
from IPython.display import display

fov = FloatSlider(value=5.0, min=5.0, max=15.0, step=0.25)
display(fov)
Data_apps.ipynb
stargaser/advancedviz2016
mit
Example from the blog post
%matplotlib notebook
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import *
from IPython.display import display
##from jnotebook import display
from IPython.html import widgets

plt.style.use('ggplot')

NUMBER_OF_PINGS = 4

# displaying the text widget
text = widgets.Text(description="Domain to...
Data_apps.ipynb
stargaser/advancedviz2016
mit
TF-Keras Image Classification Distributed Multi-Worker Training on CPU using Vertex Training with Custom Container
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"

content_name = "tf-keras-img-cls-dist-multi-worker-cpu-cust-cont"
community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_cpu_with_custom_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Vertex Training using Vertex SDK and Custom Container

Build Custom Container
hostname = "gcr.io"
image_name = content_name
tag = "latest"
custom_container_image_uri = f"{hostname}/{PROJECT_ID}/{image_name}:{tag}"

! cd trainer && docker build -t $custom_container_image_uri -f cpu.Dockerfile .
! docker run --rm $custom_container_image_uri --epochs 2 --local-mode
! docker push $custom_containe...
community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_cpu_with_custom_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Option: Use a Previously Created Vertex Tensorboard Instance

tensorboard_name = "Your Tensorboard Resource Name or Tensorboard ID"
tensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)

Run a Vertex SDK CustomContainerTrainingJob
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"
replica_count = 4
machine_type = "n1-standard-4"
container_args = [
    "--epochs",
    "50",
    "--batch-size",
    "32",
]

custom_container_training_job = aiplatform.CustomContainerTrainingJob(
    display_name=display_name,
    c...
community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_cpu_with_custom_container.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Implement Preprocessing Functions

The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation

Lookup Table

To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dict...
import numpy as np
import problem_unittests as tests

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    vocab = set(text)
    ...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
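Under the stated contract, a minimal create_lookup_tables (assuming text is already a list of words, as the docstring above says) is just two dict comprehensions over enumerate:

```python
def create_lookup_tables(text):
    """Return (vocab_to_int, int_to_vocab) for a list of words."""
    vocab = set(text)
    vocab_to_int = {word: i for i, word in enumerate(vocab)}
    int_to_vocab = {i: word for word, i in vocab_to_int.items()}
    return vocab_to_int, int_to_vocab

v2i, i2v = create_lookup_tables(['to', 'be', 'or', 'not', 'to', 'be'])
```

The two dictionaries are exact inverses of each other, which is what the later generation step relies on when mapping predicted ids back to words.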
Tokenize Punctuation

We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to token...
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    punctuation_dict = {
        '.' : '||Period||',
        ',' : '||Comma||',
        '"' : '||Quotation_mark||...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
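A complete token_lookup covering the usual punctuation set might look like the following; the exact token spellings are a convention, not mandated by the source, beyond the three visible above:

```python
def token_lookup():
    """Map each punctuation symbol to a whitespace-free token."""
    return {
        '.':  '||Period||',
        ',':  '||Comma||',
        '"':  '||Quotation_mark||',
        ';':  '||Semicolon||',
        '!':  '||Exclamation_mark||',
        '?':  '||Question_mark||',
        '(':  '||Left_Parentheses||',
        ')':  '||Right_Parentheses||',
        '--': '||Dash||',
        '\n': '||Return||',
    }

tokens = token_lookup()
```

Tokens contain no spaces, so each one survives the later split-on-whitespace step as a single vocabulary entry.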
Input

Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder

Return the placeholders in the following tuple (...
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs = tf.placeholder(tf.int32, shape=[None, None], name='input')
    targets = tf.placeholder(tf.int32, shape=[None, None], na...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Build RNN Cell and Initialize

Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()

Return the cell and initial state in the follo...
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    LSTM = tf.contrib.rnn.BasicLSTMCell(rnn_size, state_is_tuple=Tr...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Build RNN

You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()

Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.ide...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Build the Neural Network

Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build the RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number...
def build_nn(cell, rnn_size, input_data, vocab_size):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    embe...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Batches

Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- Th...
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Func...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
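The shape contract above can be met with plain numpy (a sketch; it assumes targets are the inputs shifted by one word and that leftover words not filling a whole batch are dropped, both common conventions for this exercise rather than requirements stated here):

```python
import numpy as np

def get_batches(int_text, batch_size, seq_length):
    """Return batches shaped (n_batches, 2, batch_size, seq_length)."""
    words_per_batch = batch_size * seq_length
    n_batches = len(int_text) // words_per_batch

    # Keep only whole batches; targets are inputs shifted by one word.
    xdata = np.array(int_text[: n_batches * words_per_batch])
    ydata = np.roll(xdata, -1)

    x = xdata.reshape(batch_size, -1)
    y = ydata.reshape(batch_size, -1)

    return np.array([
        (x[:, i * seq_length:(i + 1) * seq_length],
         y[:, i * seq_length:(i + 1) * seq_length])
        for i in range(n_batches)
    ])

batches = get_batches(list(range(20)), batch_size=2, seq_length=5)
```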
Neural Network Training

Hyperparameters

Tune the following parameters:
- Set num_epochs to the number of epochs.
- Set batch_size to the batch size.
- Set rnn_size to the size of the RNNs.
- Set seq_length to the length of sequence.
- Set learning_rate to the learning rate.
- Set show_every_n_batches to the number of batches the ...
# Number of Epochs
num_epochs = 5
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 200
# Sequence Length
seq_length = 15
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 30

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Implement Generate Functions

Get Tensors

Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"

Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTen...
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    # TODO: Implement Function
    in...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    # TODO: Implement Function
    ...
tv-script-generation/olds_ipnbs/dlnd_tv_script_generation.ipynb
blua/deep-learning
mit
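One common way to implement pick_word() is to sample from the probability distribution rather than always taking the argmax, which keeps the generated text varied. A sketch assuming NumPy; this is one possible approach, not the project's reference solution:

```python
import numpy as np

def pick_word(probabilities, int_to_vocab):
    """Sample the id of the next word according to its probability."""
    probabilities = np.asarray(probabilities, dtype=np.float64)
    probabilities = probabilities / probabilities.sum()  # guard against rounding drift
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[word_id]
```

np.random.choice draws index i with probability probabilities[i]; replacing the call with probabilities.argmax() gives deterministic but repetitive scripts.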
Condensing the Trip Data The first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table.
sample_data = pd.read_csv('201309_trip_data.csv')
display(sample_data.head())
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. We...
# Display the first few rows of the station data file.
station_info = pd.read_csv('201402_station_data.csv')
display(station_info.head())

# This function will be called by another function later on to create the mapping.
def create_station_mapping(station_data):
    """
    Create a mapping from station IDs to cities,...
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the summarise_data() function below. As part of this function, the datetime module is used to parse the timestamp strings from the original data file as datetime objects (strptime), which can then be out...
def summarise_data(trip_in, station_data, trip_out):
    """
    This function takes trip and station information and outputs a new
    data file with a condensed summary of major trip information. The
    trip_in and station_data arguments will be lists of data files for
    the trip and station information, respectiv...
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
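The strptime parsing mentioned above can be sketched with the standard datetime module. The format string below is an assumption about how the raw timestamps look (e.g. 8/29/2013 14:13) and may need adjusting for the actual files:

```python
from datetime import datetime

# Parse a trip start time into the components used by the summary
stamp = '8/29/2013 14:13'  # hypothetical timestamp in the assumed format
trip_date = datetime.strptime(stamp, '%m/%d/%Y %H:%M')

print(trip_date.year, trip_date.month, trip_date.hour)  # year, month, hour columns
print(trip_date.strftime('%A'))                         # weekday name for the summary
```

strftime can then turn the parsed object back into whatever string columns the summary file needs.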
Question 3: Run the below code block to call the summarise_data() function you finished in the above cell. It will take the data contained in the files listed in the trip_in and station_data variables, and write a new file at the location specified in the trip_out variable. If you've performed the data wrangling correc...
# Process the data by running the function we wrote above.
station_data = ['201402_station_data.csv']
trip_in = ['201309_trip_data.csv']
trip_out = '201309_trip_summary.csv'
summarise_data(trip_in, station_data, trip_out)

# Load in the data file and print out the first few rows
sample_data = pd.read_csv(trip_out)
disp...
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Tip: If you save a Jupyter notebook, the output from running code blocks will also be saved. However, the state of your workspace is reset once a new session is started. Make sure you run all of the necessary code blocks from your previous session to re-establish variables and functions before picking up where...
trip_data = pd.read_csv('201309_trip_summary.csv')
usage_stats(trip_data)
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than 75% of trip durations, i.e. it exceeds the 75th percentile. This will be interesting to look at later on. Let's ...
usage_plot(trip_data, 'subscription_type')
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
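The gap between mean and median described above is characteristic of right-skewed data: a few very long trips drag the mean up while barely moving the median. A toy illustration with made-up durations, assuming pandas:

```python
import pandas as pd

# Five short trips plus one very long one, mimicking skewed trip durations
durations = pd.Series([5, 7, 8, 10, 12, 300])

print(durations.median())        # 9.0  -- half of the trips are shorter than this
print(durations.mean())          # 57.0 -- dragged upward by the single outlier
print(durations.quantile(0.75))  # 11.5 -- the mean even exceeds the 75th percentile
```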
It looks like subscribers made about 50% more trips than customers in the first month. Let's try a different variable now. What does the distribution of trip durations look like?
usage_plot(trip_data, 'duration')
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of us...
usage_plot(trip_data, 'duration', ['duration < 60'])
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left-hand bar is slightly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will loo...
usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
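The boundary and bin_width arguments line the histogram bins up with round numbers. usage_plot itself is defined elsewhere in the project, but the edge computation might look like this hypothetical helper, assuming NumPy:

```python
import numpy as np

def make_bin_edges(values, boundary=0, bin_width=5):
    """Bin edges starting at `boundary`, spaced `bin_width` apart,
    extended far enough to cover the largest value."""
    top = boundary + bin_width * np.ceil((max(values) - boundary) / bin_width)
    return np.arange(boundary, top + bin_width, bin_width)

edges = make_bin_edges([2, 7, 13, 29, 58], boundary=0, bin_width=5)
print(edges)  # edges run 0, 5, 10, ..., 60
```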
Question 4: Which five-minute trip duration bin contains the greatest number of trips? Approximately how many trips were made in this range? Answer: Approximately 9,000 trips were made in the 5-10 minute range. Visual adjustments like this might be small, but they can go a long way in helping you understand the data and convey y...
station_data = ['201402_station_data.csv',
                '201408_station_data.csv',
                '201508_station_data.csv']
trip_in = ['201402_trip_data.csv',
           '201408_trip_data.csv',
           '201508_trip_data.csv']
trip_out = 'babs_y1_y2_summary.csv'

# This function will take in the station data a...
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Since the summarise_data() function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there.
trip_data = pd.read_csv('babs_y1_y2_summary.csv')
display(trip_data.head())
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Now it's your turn to explore the new dataset with usage_stats() and usage_plot() and report your findings! Here's a refresher on how to use the usage_plot() function:
- first argument (required): loaded dataframe from which data will be analyzed.
- second argument (required): variable on which trip counts will be divided...
usage_stats(trip_data)
usage_plot(trip_data, 'start_city', boundary = 0, bin_width = 1)
usage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1)
usage_plot(trip_data, 'start_month', boundary = 0, bin_width = 1)
usage_plot(trip_data, 'weekday', boundary = 0, bin_width = 1)
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways. Tip: In order to add additional cells to a notebook, you can use the "Insert Cell Above" and "Insert Cell Below" opti...
# Final Plot 1
usage_plot(trip_data, 'start_hour', boundary = 0, bin_width = 1)
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Question 5a: What is interesting about the above visualization? Why did you select it? Answer: The plot shows the usage of the service during the day. We can see that the busiest hours are 7-10AM and 4-7PM, the times when people commute to and from work or school. A fair share of trips also happen during the day...
# Final Plot 2
usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)
Project0/.ipynb_checkpoints/Bay_Area_Bike_Share_Analysis-checkpoint.ipynb
katie-truong/Udacity-DA
mit
Set working directory to 'data'
os.chdir('./data')
Lesson 9/Excercise 9.ipynb
jornvdent/WUR-Geo-Scripting-Course
gpl-3.0
Interactive input system
layername = raw_input("Name of Layer: ")
pointnumber = raw_input("How many points do you want to insert? ")
pointcoordinates = []
for number in range(1, (int(pointnumber)+1)):
    x = raw_input(("What is the Latitude (WGS 84) of Point %s ? " % str(number)))
    y = raw_input(("What is the Longitude (WGS 84) of Point ...
Lesson 9/Excercise 9.ipynb
jornvdent/WUR-Geo-Scripting-Course
gpl-3.0
Create shapefile from input
# Set filename
fn = layername + ".shp"
ds = drv.CreateDataSource(fn)

# Set spatial reference
spatialReference = osr.SpatialReference()
spatialReference.ImportFromEPSG(4326)

## Create Layer
layer = ds.CreateLayer(layername, spatialReference, ogr.wkbPoint)

# Get layer Definition
layerDefinition = layer.GetLayerDefn()
...
Lesson 9/Excercise 9.ipynb
jornvdent/WUR-Geo-Scripting-Course
gpl-3.0
Convert shapefile to KML with bash
bashcommand = 'ogr2ogr -f KML -t_srs crs:84 points.kml points.shp'
os.system(bashcommand)
Lesson 9/Excercise 9.ipynb
jornvdent/WUR-Geo-Scripting-Course
gpl-3.0
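os.system works but discards errors silently; the subprocess module is a safer alternative. A Python 3 sketch of the same conversion, only attempted when the GDAL tools and the input file are actually present:

```python
import os
import shutil
import subprocess

# Build the ogr2ogr command as an argument list -- no shell quoting issues
cmd = ['ogr2ogr', '-f', 'KML', '-t_srs', 'crs:84', 'points.kml', 'points.shp']

if shutil.which('ogr2ogr') and os.path.exists('points.shp'):
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure
else:
    print('ogr2ogr or points.shp not available, skipping conversion')
```

With check=True a non-zero exit status raises an exception instead of being ignored, which is the main advantage over os.system here.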
Numbers
abs(-1)

import math
math.floor(4.5)
math.exp(1)
math.log(1)
math.log10(10)
math.sqrt(9)

round(4.54, 1)
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
If this does not make sense, you can pull up the documentation:
round?
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Strings
string = 'Hello World!'
string2 = "This is also allowed, helps if you want 'this' in a string and vice versa"
len(string)
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Slicing
print(string)
print(string[0])
print(string[2:5])
print(string[2:])
print(string[:5])
print(string * 2)
print(string + 'TEST')
print(string[-1])
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
String Operations
print(string / 2)        # raises TypeError -- division is not defined for strings
print(string - 'TEST')   # raises TypeError -- neither is subtraction
print(string ** 2)       # raises TypeError -- use string * 2 to repeat instead
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
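Division, subtraction, and exponentiation are not defined for strings, so each operation above raises a TypeError. A small sketch that catches the errors so the cell still runs to completion:

```python
string = 'Hello World!'
errors = []

# Each expression below is undefined for strings and raises TypeError
for expr in ('string / 2', "string - 'TEST'", 'string ** 2'):
    try:
        eval(expr)
    except TypeError as err:
        errors.append(type(err).__name__)
        print(expr, 'raised', type(err).__name__)
```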
Some common string methods:
x = 'test'
x.capitalize()
x.find('e')
x = 'TEST'
x.lower()
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Environments like Jupyter and Spyder let you explore the available methods (like .capitalize() or .upper()) by typing x. and pressing Tab.

Formatting

You can also format strings, e.g. to display rounded numbers:
print('Pi is {:06.2f}'.format(3.14159))
print('Space can be filled using {:_>10}'.format(x))
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
With Python 3.6, f-strings made this even more readable:
print(f'{x} 1 2 3')
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Lists
x_list
x_list[0]
x_list.append('III')
x_list
x_list.append('III')
x_list
del x_list[-1]
x_list

y_list = ['john', '2.', '1']
y_list + x_list
x_list * 2

z_list = [4, 78, 3]
max(z_list)
min(z_list)
sum(z_list)
z_list.count(4)
z_list.append(4)
z_list.count(4)
z_list.sort()
z_list
z_list.reverse()
z_list
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Tuples Tuples are immutable and can be thought of as read-only lists.
y_tuple = ('john', '2.', '1')
type(y_tuple)
y_list
y_list[0] = 'Erik'   # lists are mutable, so this works
y_list
y_tuple[0] = 'Erik'  # raises TypeError -- tuples are immutable
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
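The final assignment above fails because tuples cannot be modified after creation. A small demonstration, including the usual workaround of building a new tuple:

```python
y_tuple = ('john', '2.', '1')

try:
    y_tuple[0] = 'Erik'  # tuples do not support item assignment
except TypeError as err:
    print('TypeError:', err)

# To "change" a tuple, build a new one instead
y_tuple = ('Erik',) + y_tuple[1:]
print(y_tuple)  # ('Erik', '2.', '1')
```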
Dictionaries

Dictionaries are collections of named entries (key-value pairs). There are also named tuples, which behave like immutable dictionaries. Use OrderedDict from collections if you need to preserve the insertion order.
tinydict = {'name': 'john', 'code': 6734, 'dept': 'sales'}
type(tinydict)
print(tinydict)
print(tinydict.keys())
print(tinydict.values())
tinydict['code']
tinydict['surname']            # raises KeyError -- not defined yet
tinydict['dept'] = 'R&D'       # update existing entry
tinydict['surname'] = 'Sloan'  # Add new entry
tinydict['surname']
del tinydict['code'] #...
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
When duplicate keys are encountered during assignment, the last assignment wins:
dic = {'Name': 'Zara', 'Age': 7, 'Name': 'Manni'}
dic
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Finding the total number of items in the dictionary:
len(dic)
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Produces a printable string representation of a dictionary:
str(dic)
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Functions
def mean(mylist):
    """Calculate the mean of the elements in mylist"""
    number_of_items = len(mylist)
    sum_of_items = sum(mylist)
    return sum_of_items / number_of_items

type(mean)
z_list
mean(z_list)
help(mean)
mean?
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
Flow Control

In general, statements are executed sequentially: the first statement in a function is executed first, followed by the second, and so on. Sometimes you need to execute a block of code several times. In Python, a block is delimited by indentation, i.e. all lines starting at th...
count = 0
while (count < 9):
    print('The count is: ' + str(count))
    count += 1

print('Good bye!')
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0
A loop becomes an infinite loop if its condition never becomes False, so use caution when writing while loops: a condition that never resolves to False produces a loop that never ends. An infinite loop might be useful in client/server p...
fruits = ['banana', 'apple', 'mango']
for fruit in fruits:  # Second Example
    print('Current fruit :', fruit)
3_Types_Functions_FlowControl.ipynb
apozas/BIST-Python-Bootcamp
gpl-3.0