Here we transformed our Tf-Idf corpus via Latent Semantic Indexing into a latent 2-D space (2-D because we set num_topics=2). Now you’re probably wondering: what do these two latent dimensions stand for? Let’s inspect with models.LsiModel.print_topics(): | lsi.print_topics(2) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
(the topics are printed to log – see the note at the top of this page about activating logging)
It appears that according to LSI, “trees”, “graph” and “minors” are all related words (and contribute the most to the direction of the first topic), while the second topic practically concerns itself with all the other words. As expected, the first five documents are more strongly related to the second topic while the remaining four documents to the first topic: | for doc in corpus_lsi: # both bow->tfidf and tfidf->lsi transformations are actually executed here, on the fly
print(doc)
lsi.save('/tmp/model.lsi') # same for tfidf, lda, ...
lsi = models.LsiModel.load('/tmp/model.lsi') | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
The next question might be: just how exactly similar are those documents to each other? Is there a way to formalize the similarity, so that for a given input document, we can order some other set of documents according to their similarity? Similarity queries are covered in the next tutorial.
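Similarity queries are the subject of the next tutorial, but the underlying idea is simple: once documents live in the same vector space, similarity is typically measured as the cosine of the angle between their vectors. A minimal NumPy sketch (illustrative only; gensim's similarities module handles this at scale):

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two dense vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Two documents in a 2-D LSI space, plus a query vector
doc_a = np.array([0.9, 0.1])
doc_b = np.array([0.1, 0.9])
query = np.array([0.8, 0.2])

# The query is far more similar to doc_a than to doc_b
print(cosine_similarity(query, doc_a))  # close to 1.0
print(cosine_similarity(query, doc_b))
```

Ranking a document collection against a query then amounts to sorting by this score.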
Available transformations
Gensim implements several popular Vector Space Model algorithms:
Term Frequency * Inverse Document Frequency
Tf-Idf expects a bag-of-words (integer values) training corpus during initialization. During transformation, it will take a vector and return another vector of the same dimensionality, except that features which were rare in the training corpus will have their value increased. It therefore converts integer-valued vectors into real-valued ones, while leaving the number of dimensions intact. It can also optionally normalize the resulting vectors to (Euclidean) unit length. | model = models.TfidfModel(corpus, normalize=True) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
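The weighting itself is easy to illustrate. The sketch below uses the textbook idf formula idf(t) = log(N / df(t)); this is for intuition only, since gensim's TfidfModel has its own weighting and normalization options:

```python
import math

# Toy corpus: each document is a bag of words {term: count}
corpus = [
    {'human': 1, 'interface': 1},
    {'graph': 1, 'trees': 1},
    {'graph': 1, 'minors': 1},
]
N = len(corpus)

# Document frequency of each term (number of documents containing it)
df = {}
for doc in corpus:
    for term in doc:
        df[term] = df.get(term, 0) + 1

def tfidf(doc):
    """Weight a bag-of-words vector: rare terms gain weight, common terms lose it."""
    return {t: count * math.log(N / df[t]) for t, count in doc.items()}

print(tfidf(corpus[0]))
# 'human' appears in a single document, so it gets weight log(3); 'graph' appears
# in two documents, so in its documents it only gets log(3/2)
```

Note how the dimensionality (the vocabulary) is untouched; only the values change.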
Latent Semantic Indexing, LSI (or sometimes LSA)
LSI transforms documents from either bag-of-words or (preferably) TfIdf-weighted space into a latent space of a lower dimensionality. For the toy corpus above we used only 2 latent dimensions, but on real corpora, target dimensionality of 200–500 is recommended as a "gold standard" [1]. | model = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=300) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
LSI training is unique in that we can continue “training” at any point, simply by providing more training documents. This is done by incremental updates to the underlying model, in a process called online training. Because of this feature, the input document stream may even be infinite – just keep feeding LSI new documents as they arrive, while using the computed transformation model as read-only in the meanwhile!
<b>Example</b>
model.add_documents(another_tfidf_corpus) # now LSI has been trained on tfidf_corpus + another_tfidf_corpus
lsi_vec = model[tfidf_vec] # convert some new document into the LSI space, without affecting the model
model.add_documents(more_documents) # tfidf_corpus + another_tfidf_corpus + more_documents
lsi_vec = model[tfidf_vec]
See the gensim.models.lsimodel documentation for details on how to make LSI gradually “forget” old observations in infinite streams. If you want to get dirty, there are also parameters you can tweak that affect speed vs. memory footprint vs. numerical precision of the LSI algorithm.
gensim uses a novel online incremental streamed distributed training algorithm (quite a mouthful!), which I published in [5]. gensim also executes a stochastic multi-pass algorithm from Halko et al. [4] internally, to accelerate the in-core part of the computations. See also
Experiments on the English Wikipedia for further speed-ups by distributing the computation across a cluster of computers.
Random Projections
RP aim to reduce vector space dimensionality. This is a very efficient (both memory- and CPU-friendly) approach to approximating TfIdf distances between documents, by throwing in a little randomness. Recommended target dimensionality is again in the hundreds/thousands, depending on your dataset. | model = models.RpModel(corpus_tfidf, num_topics=500) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
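Under the hood, RP is just multiplication by a random matrix: by the Johnson-Lindenstrauss lemma, pairwise distances survive the projection approximately, with high probability. A NumPy sketch using a dense Gaussian projection (gensim's actual construction may differ; sparse sign matrices are also common in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_terms, k = 100, 10000, 500

X = rng.random((n_docs, n_terms))   # stand-in for a dense tf-idf matrix

# Random Gaussian projection, scaled so Euclidean distances are preserved in expectation
R = rng.normal(0.0, 1.0, (n_terms, k)) / np.sqrt(k)
X_proj = X @ R                      # each document is now a k-dimensional vector

d_orig = np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(X_proj[0] - X_proj[1])
print(d_orig, d_proj)               # approximately equal
```

The appeal is that R needs no training at all, which is why RP is so cheap compared to LSI.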
Latent Dirichlet Allocation, LDA
LDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA’s topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA). | model = models.LdaModel(corpus, id2word=dictionary, num_topics=100) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
gensim uses a fast implementation of online LDA parameter estimation based on [2], modified to run in distributed mode on a cluster of computers.
Hierarchical Dirichlet Process, HDP
HDP is a non-parametric Bayesian method (note the missing number of requested topics): | model = models.HdpModel(corpus, id2word=dictionary) | docs/notebooks/Topics_and_Transformations.ipynb | Kreiswolke/gensim | lgpl-2.1 |
Getting and inspecting the data
Downloading the data. At the moment, only data from June 2015 is considered. | import os
import urllib

# list containing the link(s) to the csv file(s)
data_links = ['https://storage.googleapis.com/tlc-trip-data/2015/yellow_tripdata_2015-06.csv']
filenames = []
for link in data_links:
filenames.append(link.split('/')[-1])
if not os.path.isfile(filenames[-1]): # do not download file if it already exists
urllib.urlretrieve(link, filenames[-1]) | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Loading the data into a pandas data frame and looking at it: | df = pd.DataFrame()
for filename in filenames:
df = df.append(pd.read_csv(filename), ignore_index=True)
df.head()
df.info()
df.describe() | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Note that some of the numerical features like tip amount and fare amount actually contain negative values. Those invalid values will be deleted in the next section.
Data cleaning
1) Retaining only trips paid by credit card
Only the credit card tips are recorded in the data set. Therefore, let's only retain trips with credit card payment. This might introduce some bias (as credit card payers may have a different tipping behaviour than others).
As seen below, most trips are paid by credit card anyway (label "1"), followed by cash payment (label "2"). | df.groupby('payment_type').size().plot(kind='bar'); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
For some trips, people actually tipped with credit card, even though they did not pay with credit card: | np.sum((df.payment_type != 1) & (df.tip_amount != 0)) | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
However, the number of those trips is negligible, so I ignore them here and only retain credit card trips. Then, the column "payment_type" can be removed: | df = df[df.payment_type == 1]
df.drop('payment_type', axis=1, inplace=True)
df.shape | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
2) Checking for unfeasible values in numerical features
As seen above, some of the numerical features contained negative values. Let's have a closer look... | (df < 0).sum() | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
...and remove the corresponding rows where negative values do not make any sense: | col_names = ['total_amount', 'improvement_surcharge', 'tip_amount', 'mta_tax', 'extra', 'fare_amount']
# this removes all rows where at least one value of the columns in col_names is < 0
rows_to_keep = (df[col_names] >= 0).sum(axis=1) == len(col_names)
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep]
(df[col_names] < 0).sum() # check if it worked | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
3) Deleting "invalid" trips
Inspecting trip distance | ax = df.loc[sample(df.index, 30000)].plot(y='trip_distance',kind='hist', bins=200)
ax.set_xlim([0,25]); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Delete trips that are longer than 50 miles... | rows_to_keep = df.trip_distance <= 50
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
...and shorter than 0.1 miles: | rows_to_keep = df.trip_distance >= 0.1
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Inspecting trip fare | ax = df.loc[sample(df.index, 300000)].plot(y='fare_amount',kind='hist', bins=200)
ax.set_xlim([0,102]); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
There seems to be a decent number of trips with a fixed rate of 50 USD (see spike above).
Now let's remove rows where the fare is below 1 USD: | rows_to_keep = df.fare_amount >= 1
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Inspecting trip duration | df.tpep_pickup_datetime = pd.to_datetime(df.tpep_pickup_datetime)
df.tpep_dropoff_datetime = pd.to_datetime(df.tpep_dropoff_datetime)
df['trip_duration'] = df.tpep_dropoff_datetime - df.tpep_pickup_datetime
df['trip_duration_minutes'] = df.trip_duration.dt.total_seconds()/60 # total_seconds avoids day wrap-around for very long trips
ax = df.loc[sample(df.index, 300000)].plot(y='trip_duration_minutes', kind='hist', bins=500)
ax.set_xlim([0,150]); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Remove trips that took less than half a minute... | rows_to_keep = df.trip_duration_minutes>0.5
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
...as well as trips with a duration of more than 2 hours: | rows_to_keep = df.trip_duration_minutes<=2*60
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Inspecting passenger count | df.plot(y='passenger_count', kind='hist', bins=30); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Remove trips with zero passenger count: | rows_to_keep = df.passenger_count > 0
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Remove trips with a passenger count of more than 6: | rows_to_keep = df.passenger_count <= 6
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Removing invalid location coordinates
Remove trips that obviously did not start in NY: | within_NY = (df.pickup_latitude > 40) & (df.pickup_latitude < 40.9) & \
(df.pickup_longitude > -74.4) & (df.pickup_longitude < -73.4)
print 'removing '+ str((~within_NY).sum()) + ' rows...'
df = df[within_NY] | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Plot the pickup locations to check if they look good. Choose a random sample of all trips, since plotting all trips would take quite a while. | fig, ax = plt.subplots(figsize=(15, 10))
df.loc[sample(df.index, 200000)].plot(x='pickup_longitude', y='pickup_latitude',
kind='scatter', ax=ax, alpha=0.3, s=3)
ax.set_xlim([-74.2, -73.7])
ax.set_ylim([40.6, 40.9]); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
The above plot looks reasonable: you can clearly identify the geometry of New York. Let's plot a small subset of data points on a map. Besides central NY, one can identify small hotspots at the surrounding airports. | subdf = df.loc[sample(df.index, 10000)] # subsample df
data = subdf[['pickup_latitude', 'pickup_longitude']].values
mapa = folium.Map([40.7, -73.9], zoom_start=11, tiles='stamentoner') # create heatmap
mapa.add_children(plugins.HeatMap(data, min_opacity=0.005, max_zoom=18,
max_val=0.01, radius=3, blur=3))
mapa | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Inspecting the tip
As the tip distribution below shows, people tend to tip whole numbers of dollars (see peaks at e.g. 1 and 2 dollars). | fig, ax = plt.subplots(figsize=(12,4))
ax = df.loc[sample(df.index, 100000)].plot(y='tip_amount', kind='hist',bins=1500, ax=ax)
ax.set_xlim([0,10.5])
ax.set_xticks(np.arange(0, 11, 0.5)); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
A useful metric for a taxi driver to compare tips is the percentage of tip given with respect to the total fare amount. | # check if the fares and fees sum up to total_amount
print pd.concat([df.tip_amount + df.fare_amount + df.tolls_amount + \
df.extra + df.mta_tax + df.improvement_surcharge, \
df.total_amount], axis=1).head()
# calculate tip percentage
df['total_fare'] = df.total_amount - df.tip_amount
df['tip_percentage'] = df.tip_amount / df.total_fare * 100 | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
The tip percentage distribution below shows that people mostly seem to tip 0, 20, 25 or 30%. | data = df.loc[sample(df.index, 100000)].tip_percentage.values
plt.hist(data, np.arange(min(data)-0.5, max(data)+1.5))
plt.gca().set_xlim([0,35])
plt.gca().set_xticks(np.arange(0, 51, 5));
plt.legend(['tip_percentage']); | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Remove trips where a tip of more than 100% was recorded, regarding them as invalid outliers. | rows_to_keep = df.tip_percentage <= 100
print 'removing '+ str((~rows_to_keep).sum()) + ' rows...'
df = df[rows_to_keep]
df.tip_percentage.mean()
df.tip_percentage.median()
df.tip_percentage.mode()
df.tip_percentage.quantile(0.25)
# fig, ax = plt.subplots(figsize=(14,5))
# ax = df.loc[sample(df.index, 100000)].tip_percentage.plot(kind='hist',bins=2000, cumulative=True)
# ax.set_xlim([0,200]) | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Tip percentage by day of the week (Monday=0, Sunday=6). People tend to tip a little less on weekends (days 5 and 6). | fig, ax = plt.subplots(figsize=(12, 6))
for i in range(7):
df[df.pickup_weekday==i].groupby('pickup_hour').mean().plot(y='tip_percentage', ax=ax)
plt.legend(['day ' + str(x) for x in range(7)])
ax.set_ylabel('average tip percentage') | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Let's look at the number of trips per hour and day: | fig, ax = plt.subplots(figsize=(12, 6))
for i in range(7):
df[df.pickup_weekday==i].groupby('pickup_hour').size().plot(ax=ax)
plt.legend(['day ' + str(x) for x in range(7)])
ax.set_ylabel('number of trips') | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
The tip percentage does not seem to depend much on the number of passengers: | fig, ax = plt.subplots(figsize=(8,7))
df.boxplot('tip_percentage', by='passenger_count', showmeans=True, ax=ax)
ax.set_ylim([15,21]) | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Save the cleaned data frame to a file: | df.to_pickle('df.pickle') | data_preparation.ipynb | jdoepfert/nyc_taxi_tips | apache-2.0 |
Data | # https://github.com/probml/pyprobml/blob/master/scripts/schools8_pymc3.py
# Data of the Eight Schools Model
J = 8
y = np.array([28.0, 8.0, -3.0, 7.0, -1.0, 1.0, 18.0, 12.0])
sigma = np.array([15.0, 10.0, 16.0, 11.0, 9.0, 11.0, 10.0, 18.0])
print(np.mean(y))
print(np.median(y))
names = []
for t in range(8):
names.append("{}".format(t))
# Plot raw data
fig, ax = plt.subplots()
y_pos = np.arange(8)
ax.errorbar(y, y_pos, xerr=sigma, fmt="o")
ax.set_yticks(y_pos)
ax.set_yticklabels(names)
ax.invert_yaxis() # labels read top-to-bottom
plt.title("8 schools")
plt.savefig("../figures/schools8_data.png")
plt.show() | notebooks/book2/03/schools8_pymc3.ipynb | probml/pyprobml | mit |
Centered model | # Centered model
with pm.Model() as Centered_eight:
mu_alpha = pm.Normal("mu_alpha", mu=0, sigma=5)
sigma_alpha = pm.HalfCauchy("sigma_alpha", beta=5)
alpha = pm.Normal("alpha", mu=mu_alpha, sigma=sigma_alpha, shape=J)
obs = pm.Normal("obs", mu=alpha, sigma=sigma, observed=y)
log_sigma_alpha = pm.Deterministic("log_sigma_alpha", tt.log(sigma_alpha))
np.random.seed(0)
with Centered_eight:
trace_centered = pm.sample(1000, chains=4, return_inferencedata=False)
pm.summary(trace_centered).round(2)
# PyMC3 gives multiple warnings about divergences
# Also, see r_hat ~ 1.01, ESS << nchains*1000, especially for sigma_alpha
# We can solve these problems below by using a non-centered parameterization.
# In practice, for this model, the results are very similar.
# Display the total number and percentage of divergent samples
diverging = trace_centered["diverging"]
print("Number of Divergent Samples: {}".format(diverging.nonzero()[0].size))
diverging_pct = diverging.nonzero()[0].size / len(diverging) * 100
print("Percentage of Divergent Samples: {:.1f}".format(diverging_pct))
dir(trace_centered)
trace_centered.varnames
with Centered_eight:
# fig, ax = plt.subplots()
az.plot_autocorr(trace_centered, var_names=["mu_alpha", "sigma_alpha"], combined=True)
plt.savefig("schools8_centered_acf_combined.png", dpi=300)
with Centered_eight:
# fig, ax = plt.subplots()
az.plot_autocorr(trace_centered, var_names=["mu_alpha", "sigma_alpha"])
plt.savefig("schools8_centered_acf.png", dpi=300)
with Centered_eight:
az.plot_forest(trace_centered, var_names="alpha", hdi_prob=0.95, combined=True)
plt.savefig("schools8_centered_forest_combined.png", dpi=300)
with Centered_eight:
az.plot_forest(trace_centered, var_names="alpha", hdi_prob=0.95, combined=False)
plt.savefig("schools8_centered_forest.png", dpi=300) | notebooks/book2/03/schools8_pymc3.ipynb | probml/pyprobml | mit |
Non-centered | # Non-centered parameterization
with pm.Model() as NonCentered_eight:
mu_alpha = pm.Normal("mu_alpha", mu=0, sigma=5)
sigma_alpha = pm.HalfCauchy("sigma_alpha", beta=5)
alpha_offset = pm.Normal("alpha_offset", mu=0, sigma=1, shape=J)
alpha = pm.Deterministic("alpha", mu_alpha + sigma_alpha * alpha_offset)
# alpha = pm.Normal('alpha', mu=mu_alpha, sigma=sigma_alpha, shape=J)
obs = pm.Normal("obs", mu=alpha, sigma=sigma, observed=y)
log_sigma_alpha = pm.Deterministic("log_sigma_alpha", tt.log(sigma_alpha))
np.random.seed(0)
with NonCentered_eight:
trace_noncentered = pm.sample(1000, chains=4)
pm.summary(trace_noncentered).round(2)
# Samples look good: r_hat = 1, ESS ~= nchains*1000
with NonCentered_eight:
az.plot_autocorr(trace_noncentered, var_names=["mu_alpha", "sigma_alpha"], combined=True)
plt.savefig("schools8_noncentered_acf_combined.png", dpi=300)
with NonCentered_eight:
az.plot_forest(trace_noncentered, var_names="alpha", combined=True, hdi_prob=0.95)
plt.savefig("schools8_noncentered_forest_combined.png", dpi=300)
az.plot_forest(
[trace_centered, trace_noncentered],
model_names=["centered", "noncentered"],
var_names="alpha",
combined=True,
hdi_prob=0.95,
)
plt.axvline(np.mean(y), color="k", linestyle="--")
az.plot_forest(
[trace_centered, trace_noncentered],
model_names=["centered", "noncentered"],
var_names="alpha",
kind="ridgeplot",
combined=True,
hdi_prob=0.95,
); | notebooks/book2/03/schools8_pymc3.ipynb | probml/pyprobml | mit |
Funnel of hell | # Plot the "funnel of hell"
# Based on
# https://github.com/twiecki/WhileMyMCMCGentlySamples/blob/master/content/downloads/notebooks/GLM_hierarchical_non_centered.ipynb
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True)
x = pd.Series(trace_centered["mu_alpha"], name="mu_alpha")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log_sigma_alpha")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", xlabel="µ", ylabel="log(sigma)")
# axs[0].axhline(0.01)
x = pd.Series(trace_noncentered["mu_alpha"], name="mu")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log_sigma_alpha")
axs[1].plot(x, y, ".")
axs[1].set(title="NonCentered", xlabel="µ", ylabel="log(sigma)")
# axs[1].axhline(0.01)
plt.savefig("schools8_funnel.png", dpi=300)
xlim = axs[0].get_xlim()
ylim = axs[0].get_ylim()
x = pd.Series(trace_centered["mu_alpha"], name="mu")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log sigma_alpha")
sns.jointplot(x, y, xlim=xlim, ylim=ylim)
plt.suptitle("centered")
plt.savefig("schools8_centered_joint.png", dpi=300)
x = pd.Series(trace_noncentered["mu_alpha"], name="mu")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log sigma_alpha")
sns.jointplot(x, y, xlim=xlim, ylim=ylim)
plt.suptitle("noncentered")
plt.savefig("schools8_noncentered_joint.png", dpi=300)
group = 0
fig, axs = plt.subplots(ncols=2, sharex=True, sharey=True, figsize=(10, 5))
x = pd.Series(trace_centered["alpha"][:, group], name=f"alpha {group}")
y = pd.Series(trace_centered["log_sigma_alpha"], name="log_sigma_alpha")
axs[0].plot(x, y, ".")
axs[0].set(title="Centered", xlabel=r"$\alpha_0$", ylabel=r"$\log(\sigma_\alpha)$")
x = pd.Series(trace_noncentered["alpha"][:, group], name=f"alpha {group}")
y = pd.Series(trace_noncentered["log_sigma_alpha"], name="log_sigma_alpha")
axs[1].plot(x, y, ".")
axs[1].set(title="NonCentered", xlabel=r"$\alpha_0$", ylabel=r"$\log(\sigma_\alpha)$")
xlim = axs[0].get_xlim()
ylim = axs[0].get_ylim()
plt.savefig("schools8_funnel_group0.png", dpi=300) | notebooks/book2/03/schools8_pymc3.ipynb | probml/pyprobml | mit |
Fetch and parse the HTML | # use the `get()` method to fetch a copy of the IRE home page
ire_page = requests.get('http://ire.org')
# feed the text of the web page to a BeautifulSoup object
soup = BeautifulSoup(ire_page.text, 'html.parser') | completed/13. Web scraping (Part 3).ipynb | ireapps/cfj-2017 | mit |
Target the headlines
View source on the IRE homepage and find the headlines. What's the pattern? | # get a list of headlines we're interested in
heds = soup.find_all('h1', {'class': 'title1'}) | completed/13. Web scraping (Part 3).ipynb | ireapps/cfj-2017 | mit |
Loop over the heds, printing out the text
You can drill down into a nested tag using a period. | for hed in heds:
print(hed.a.string) | completed/13. Web scraping (Part 3).ipynb | ireapps/cfj-2017 | mit |
Introduction
In Data Science it is common to start with data and develop a model of that data. Such models can help to explain the data and make predictions about future observations. In fields like Physics, these models are often given in the form of differential equations, whose solutions explain and predict the data. In most other fields, such differential equations are not known. Often, models have to include sources of uncertainty and randomness. Given a set of data, fitting a model to the data is the process of tuning the parameters of the model to best explain the data.
When a model has a linear dependence on its parameters, such as $a x^2 + b x + c$, this process is known as linear regression. When a model has a non-linear dependence on its parameters, such as $ a e^{bx} $, this process is known as non-linear regression. Thus, fitting data to a straight line model of $m x + b $ is linear regression, because of its linear dependence on $m$ and $b$ (rather than $x$).
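Linearity in the parameters is what makes linear regression tractable in closed form: the model can be written as a matrix of known basis functions times the unknown coefficient vector, and solved with ordinary least squares. A quick sketch (not part of the original notebook) for the quadratic example $a x^2 + b x + c$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 2.0 * x**2 - 1.0 * x + 0.5 + rng.normal(0.0, 0.1, x.size)

# a*x**2 + b*x + c is non-linear in x but LINEAR in (a, b, c):
# stack the basis functions as columns and solve a single least-squares problem.
X = np.column_stack([x**2, x, np.ones_like(x)])
theta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta)  # close to the true values [2.0, -1.0, 0.5]
```

A model like $a e^{bx}$ admits no such design matrix in $(a, b)$, which is why non-linear regression needs iterative optimization.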
Fitting a straight line
A classical example of fitting a model is finding the slope and intercept of a straight line that goes through a set of data points ${x_i,y_i}$. For a straight line the model is:
$$
y_{model}(x) = mx + b
$$
Given this model, we can define a metric, or cost function, that quantifies the error the model makes. One commonly used metric is $\chi^2$, which depends on the deviation of the model from each data point ($y_i - y_{model}(x_i)$) and the measured uncertainty of each data point $ \sigma_i$:
$$
\chi^2 = \sum_{i=1}^N \left(\frac{y_i - y_{model}(x_i)}{\sigma_i}\right)^2
$$
When $\chi^2$ is small, the model's predictions will be close to the data points. Likewise, when $\chi^2$ is large, the model's predictions will be far from the data points. Given this, our task is to minimize $\chi^2$ with respect to the model parameters $\theta = [m, b]$ in order to find the best fit.
To illustrate linear regression, let's create a synthetic data set with a known slope and intercept, but random noise that is additive and normally distributed. | N = 50
m_true = 2
b_true = -1
dy = 2.0 # uncertainty of each point
np.random.seed(0)
xdata = 10 * np.random.random(N) # don't use regularly spaced data
ydata = b_true + m_true * xdata + np.random.normal(0.0, dy, size=N) # our errors are additive
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Fitting by hand
It is useful to see visually how changing the model parameters changes the value of $\chi^2$. By using IPython's interact function, we can create a user interface that allows us to pick a slope and intercept interactively and see the resulting line and $\chi^2$ value.
Here is the function we want to minimize. Note how we have combined the two parameters into a single parameters vector $\theta = [m, b]$, which is the first argument of the function: | def chi2(theta, x, y, dy):
# theta = [b, m]
return np.sum(((y - theta[0] - theta[1] * x) / dy) ** 2)
def manual_fit(b, m):
modely = m*xdata + b
plt.plot(xdata, modely)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y')
plt.text(1, 15, 'b={0:.2f}'.format(b))
plt.text(1, 12.5, 'm={0:.2f}'.format(m))
plt.text(1, 10.0, '$\chi^2$={0:.2f}'.format(chi2([b,m],xdata,ydata, dy)))
interact(manual_fit, b=(-3.0,3.0,0.01), m=(0.0,4.0,0.01)); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Go ahead and play with the sliders and try to:
Find the lowest value of $\chi^2$
Find the "best" line through the data points.
You should see that these two conditions coincide.
Minimize $\chi^2$ using scipy.optimize.minimize
Now that we have seen how minimizing $\chi^2$ gives the best parameters in a model, let's perform this minimization numerically using scipy.optimize.minimize. We have already defined the function we want to minimize, chi2, so we only have to pass it to minimize along with an initial guess and the additional arguments (the raw data): | theta_guess = [0.0,1.0]
result = opt.minimize(chi2, theta_guess, args=(xdata,ydata,dy)) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Here are the values of $b$ and $m$ that minimize $\chi^2$: | theta_best = result.x
print(theta_best) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
These values are close to the true values of $b=-1$ and $m=2$. The reason our values are different is that our data set has a limited number of points. In general, we expect that as the number of points in our data set increases, the model parameters will converge to the true values. But having a limited number of data points is not a problem - it is a reality of most data collection processes.
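This convergence can be checked directly by refitting with increasing sample sizes; a quick experiment (not part of the original notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
m_true, b_true, dy = 2.0, -1.0, 2.0

def fit_slope(n):
    """Generate n noisy points from the true line and return the fitted slope."""
    x = 10 * rng.random(n)
    y = b_true + m_true * x + rng.normal(0.0, dy, n)
    X = np.column_stack([x, np.ones(n)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

for n in (50, 500, 50000):
    print(n, fit_slope(n))  # estimates tighten around m_true = 2 as n grows
```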
We can plot the raw data and the best fit line: | xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Minimize $\chi^2$ using scipy.optimize.leastsq
Performing regression by minimizing $\chi^2$ is known as least squares regression, because we are minimizing the sum of squares of the deviations. The linear version of this is known as linear least squares. For this case, SciPy provides a purpose built function, scipy.optimize.leastsq. Instead of taking the $\chi^2$ function to minimize, leastsq takes a function that computes the deviations: | def deviations(theta, x, y, dy):
return (y - theta[0] - theta[1] * x) / dy
result = opt.leastsq(deviations, theta_guess, args=(xdata, ydata, dy), full_output=True) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Here we have passed the full_output=True option. When this is passed the covariance matrix $\Sigma_{ij}$ of the model parameters is also returned. The uncertainties (as standard deviations) in the parameters are the square roots of the diagonal elements of the covariance matrix:
$$ \sigma_i = \sqrt{\Sigma_{ii}} $$
A proof of this is beyond the scope of the current notebook. | theta_best = result[0]
theta_cov = result[1]
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
We can again plot the raw data and best fit line: | yfit = theta_best[0] + theta_best[1] * xfit
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray');
plt.plot(xfit, yfit, '-b'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Fitting using scipy.optimize.curve_fit
SciPy also provides a general curve fitting function, curve_fit, that can handle both linear and non-linear models. This function:
Allows you to directly specify the model as a function, rather than the cost function (it assumes $\chi^2$).
Returns the covariance matrix for the parameters that provides estimates of the errors in each of the parameters.
Let's apply curve_fit to the above data. First we define a model function. The first argument should be the independent variable of the model. | def model(x, b, m):
return m*x+b | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Then call curve_fit passing the model function and the raw data. The uncertainties of each data point are provided with the sigma keyword argument. If there are no uncertainties, this can be omitted. By default the uncertainties are treated as relative. To treat them as absolute, pass the absolute_sigma=True argument. | theta_best, theta_cov = opt.curve_fit(model, xdata, ydata, sigma=dy) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Again, display the optimal values of $b$ and $m$ along with their uncertainties: | print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('m = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
We can again plot the raw data and best fit line: | xfit = np.linspace(0,10.0)
yfit = theta_best[1]*xfit + theta_best[0]
plt.plot(xfit, yfit)
plt.errorbar(xdata, ydata, dy,
fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Non-linear models
So far we have been using a linear model $y_{model}(x) = m x +b$. Remember, this model was linear not because of its dependence on $x$, but because of its linear dependence on $b$ and $m$. A non-linear model will have a non-linear dependence on the model parameters. Examples are $A e^{B x}$, $A \cos{B x}$, etc. In this section we will generate data for the following non-linear model:
$$y_{model}(x) = Ae^{Bx}$$
and fit that data using curve_fit. Let's start out by using this model to generate a data set to use for our fitting: | npoints = 20
Atrue = 10.0
Btrue = -0.2
xdata = np.linspace(0.0, 20.0, npoints)
dy = np.random.normal(0.0, 0.1, size=npoints)  # additive noise for each point
ydata = Atrue*np.exp(Btrue*xdata) + dy | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Plot the raw data: | plt.plot(xdata, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Let's see if we can use non-linear regression to recover the true values of our model parameters. First define the model: | def exp_model(x, A, B):
return A*np.exp(x*B) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Then use curve_fit to fit the model: | theta_best, theta_cov = opt.curve_fit(exp_model, xdata, ydata) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Our optimized parameters are close to the true values of $A=10$ and $B=-0.2$: | print('A = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('B = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1]))) | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
Plot the raw data and fitted model: | xfit = np.linspace(0,20)
yfit = exp_model(xfit, theta_best[0], theta_best[1])
plt.plot(xfit, yfit)
plt.plot(xdata, ydata, 'k.')
plt.xlabel('x')
plt.ylabel('y'); | days/day19/FittingModels.ipynb | bjshaw/phys202-2015-work | mit |
First, we make a query to the datacube to find out what datasets we have. | dc_a = AnalyticsEngine()
dc_e = ExecutionEngine()
dc_api = API()
print(dc_api.list_field_values('product')) # 'LEDAPS' should be in the list
print(dc_api.list_field_values('platform')) # 'LANDSAT_5' should be in the list | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) is a NASA-funded project to map North American forest disturbance since 1975. We have datasets in the same format for Australia.
Let's find out what kind of datasets we have for Landsat 5. | query = {
'product': 'LEDAPS',
'platform': 'LANDSAT_5',
}
descriptor = dc_api.get_descriptor(query, include_storage_units=False)
pprint(descriptor) | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
For Landsat 5, Bands 1-3 are the Blue, Green, and Red visible-light spectrum bands.
To set up the Engine, we first need to instantiate the modules and set up query parameters.
create_array sets up the platform and product we are interested in querying, as well as the bands (variables) of the satellite data set. We also limit the amount of data processed by a long-lat boundary and time.
apply_expression binds the variables into a generic string to execute, in this case an average of two bands.
execute_plan is when the computation is actually run and returned. | dimensions = {
'x': {
'range': (140, 141)
},
'y': {
'range': (-35.5, -36.5)
},
'time': {
'range': (datetime(2011, 10, 17), datetime(2011, 10, 18))
}
}
red = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band3'], dimensions, 'red')
green = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band2'], dimensions, 'green')
blue = dc_a.create_array(('LANDSAT_5', 'LEDAPS'), ['band1'], dimensions, 'blue') | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
Now we have created references to the green and blue bands, we can do simple band maths. | blue_result = dc_a.apply_expression([blue], 'array1', 'blue')
dc_e.execute_plan(dc_a.plan)
plot(dc_e.cache['blue'])
turbidity = dc_a.apply_expression([blue, green, red], '(array1 + array2 - array3) / 2', 'turbidity')
dc_e.execute_plan(dc_a.plan)
plot(dc_e.cache['turbidity']) | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
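The expression `'(array1 + array2 - array3) / 2'` is evaluated element-wise across the bands, with `array1`, `array2`, `array3` bound to blue, green, and red in the argument order above. A toy pure-Python version of that band maths on hypothetical 2×2 rasters (the pixel values are made up for illustration):

```python
# toy 2x2 band rasters (hypothetical reflectance values)
blue  = [[10, 20], [30, 40]]
green = [[ 8, 18], [28, 38]]
red   = [[ 6, 16], [26, 36]]

# element-wise evaluation of '(array1 + array2 - array3) / 2'
turbidity = [[(b + g - r) / 2 for b, g, r in zip(brow, grow, rrow)]
             for brow, grow, rrow in zip(blue, green, red)]
print(turbidity)  # → [[6.0, 11.0], [16.0, 21.0]]
```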
geo_xarray.reproject reprojects northings and eastings to longitude and latitude units.
Let's reproject the file into the common longitude-latitude projection, and save it to a picture. | result = dc_e.cache['turbidity']['array_result']['turbidity']
reprojected = datacube.api.geo_xarray.reproject(result.isel(time=0), 'EPSG:3577', 'WGS84')
pprint(reprojected)
reprojected.plot.imshow()
matplotlib.image.imsave('turbidity.png', reprojected) | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
The boundaries in long-lat are as follows: | map(float, (reprojected.x[0], reprojected.x[-1], reprojected.y[0], reprojected.y[-1])) | agdc-v2/contrib/notebooks/CSIRO Water Quality Analysis using Turbidity .ipynb | ceos-seo/Data_Cube_v2 | apache-2.0 |
That looks like any other Python code! But this example is a bit silly.
How do we leverage Noodles to earn an honest living? Here's a slightly less
silly example (but only just!). We will build a small translation engine
that translates sentences by submitting each word to an online dictionary
over a REST API. To do this we make loops ("For thou shalt make loops of
blue"). First we build the program as you would do in Python, then we
sprinkle some Noodles magic and make it work in parallel! Furthermore, we'll
see how to:
make more loops
cache results for reuse
Making loops
That's all swell, but how do we make a parallel loop? Let's look at a map operation; in Python there are several ways to apply a function to all elements in an array. For this example, we will translate some words using the Glosbe service, which has a nice REST interface. We first build some functionality to use this interface. | import urllib.request
import json
import re
class Translate:
"""Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster."""
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
def word(self, phrase):
translation = self.query_phrase(phrase)
#translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return space.format(*map(self.word, words)) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
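The trick in the `sentence` method is worth seeing in isolation: one regex strips out the words, another keeps the punctuation skeleton with `{}` placeholders, and `format` pours the translated words back in. Here uppercasing stands in for translation:

```python
import re

phrase = "If music be the food of love, play on,"
words = re.sub(r"[^\w]", " ", phrase).split()        # just the words
space = re.sub(r"[\w]+", "{}", phrase)               # punctuation skeleton: '{} {} ... {},'
rebuilt = space.format(*(w.upper() for w in words))  # stand-in for translation
print(rebuilt)  # → IF MUSIC BE THE FOOD OF LOVE, PLAY ON,
```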
We start with a list of strings that desperately need translation, and add a little
routine to print it in a gracious manner. | shakespeare = [
"If music be the food of love, play on,",
"Give me excess of it; that surfeiting,",
"The appetite may sicken, and so die."]
def print_poem(intro, poem):
print(intro)
for line in poem:
print(" ", line)
print()
print_poem("Original:", shakespeare) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Beginning Python programmers like to append things; this is not how you are
supposed to program in Python; if you do, please go and read Jeff Knupp's Writing Idiomatic Python. | shakespeare_auf_deutsch = []
for line in shakespeare:
shakespeare_auf_deutsch.append(
Translate('en', 'de').sentence(line))
print_poem("Auf Deutsch:", shakespeare_auf_deutsch) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Rather use a comprehension like so: | shakespeare_ynt_frysk = \
(Translate('en', 'fy').sentence(line) for line in shakespeare)
print_poem("Yn it Frysk:", shakespeare_ynt_frysk) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Or use map: | shakespeare_pa_dansk = \
map(Translate('en', 'da').sentence, shakespeare)
print_poem("På Dansk:", shakespeare_pa_dansk) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Noodlify!
If your connection is a bit slow, you may find that the translations take a while to process. Wouldn't it be nice to do it in parallel? How much code would we have to change to get there in Noodles? Let's take the slow part of the program and add a @schedule decorator, and run! Sadly, it is not that simple. We can add @schedule to the word method. This means that it will return a promise.
Rule: Functions that take promises need to be scheduled functions, or refer to a scheduled function at some level.
We could write
return schedule(space.format)(*(self.word(w) for w in words))
in the last line of the sentence method, but the string format method doesn't support wrapping. We rely on getting the signature of a function by calling inspect.signature. In some cases of built-in functions this raises an exception. We may find a workaround for these cases in future versions of Noodles. For the moment we'll have to define a little wrapper function. | from noodles import schedule
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
import urllib.request
import json
import re
class Translate:
"""Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster."""
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
@schedule
def word(self, phrase):
#translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
translation = self.query_phrase(phrase)
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return format_string(space, *map(self.word, words))
def __str__(self):
return "[{} -> {}]".format(self.src, self.tgt)
def __serialize__(self, pack):
return pack({'src_lang': self.src,
'tgt_lang': self.tgt})
@classmethod
def __construct__(cls, msg):
return cls(**msg) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
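Why does the Rule above hold? A promise is a placeholder for a value that doesn't exist yet, so a plain function that receives one cannot compute with it. The following toy stand-in (deliberately *not* Noodles' real implementation — Noodles builds a workflow graph and runs it concurrently) makes the point:

```python
class Promise:
    """Toy placeholder for a value computed later (a stand-in, not Noodles' real type)."""
    def __init__(self, fn, args):
        self.fn, self.args = fn, args

    def run(self):
        # force any promised arguments first, then call the wrapped function
        args = [a.run() if isinstance(a, Promise) else a for a in self.args]
        return self.fn(*args)

def schedule(fn):
    """Return a wrapper that builds a Promise instead of calling fn directly."""
    return lambda *args: Promise(fn, args)

@schedule
def word_size(w):
    return len(w)

sizes = [word_size(w) for w in ["oote", "boe"]]
# len(sizes[0]) would fail here: a Promise is not an int yet
print([p.run() for p in sizes])  # → [4, 3]
```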
Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script. | from noodles import gather, run_parallel
from noodles.tutorial import get_workflow_graph
shakespeare_en_esperanto = \
map(Translate('en', 'eo').sentence, shakespeare)
wf = gather(*shakespeare_en_esperanto)
workflow_graph = get_workflow_graph(wf._workflow)
result = run_parallel(wf, n_threads=8)
print_poem("Shakespeare en Esperanto:", result) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
The last peculiar thing that you may notice, is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple:
@schedule
def gather(*lst):
return lst
The workflow graph of the Esperanto translator script looks like this: | workflow_graph.attr(size='10')
workflow_graph | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Dealing with repetition
In the following example we have a line with some repetition. | from noodles import (schedule, gather_all)
import re
@schedule
def word_size(word):
return len(word)
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(word_size, words)
return format_string(space, *word_lengths)
from noodles.tutorial import display_workflows, run_and_print_log
display_workflows(
prefix='poetry',
sizes=word_size_phrase("Oote oote oote, Boe")) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Let's run the example workflows now, but focus on the actions taken, looking at the logs. The function run_and_print_log in the tutorial module runs our workflow with four parallel threads and caches results in a Sqlite3 database.
To see how this program is being run, we monitor the job submission, retrieval and result storage. First, should you have run this tutorial before, remove the database file. | # remove the database if it already exists
!rm -f tutorial.db | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Running the workflow, we can now see that at the second occurrence of the word 'oote', the function call is attached to the first job that asked for the same result. The job word_size('oote') is run only once. | run_and_print_log(word_size_phrase("Oote oote oote, Boe"), highlight=range(4, 8)) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Now, running a similar workflow again, notice that previous results are retrieved from the database. | run_and_print_log(word_size_phrase("Oe oe oote oote oote"), highlight=range(5, 10)) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Although the result of every single job is retrieved, we still had to go through the trouble of looking up the results of word_size('Oote'), word_size('oote'), and word_size('Boe') to find out that we wanted the result from the format_string. If you want to cache the result of an entire workflow, pack the workflow in another scheduled function!
Versioning
We may add a version string to a function. This version is taken into account when looking up results in the database. | @schedule(version='1.0')
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(word_size, words)
return format_string(space, *word_lengths)
run_and_print_log(
word_size_phrase("Kneu kneu kneu kneu ote kneu eur"),
highlight=[1, 17]) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
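When no version string is given, Noodles derives one automatically from the function's source. One can picture that automatic version as a fingerprint of the source text — the sketch below illustrates the idea only, and is not Noodles' actual scheme:

```python
import hashlib
import inspect

def source_version(fn):
    """Fingerprint a function by hashing its source text (a sketch of the idea)."""
    src = inspect.getsource(fn)
    return hashlib.sha256(src.encode()).hexdigest()[:12]

def word_size(word):
    return len(word)

v1 = source_version(word_size)
v2 = source_version(word_size)
print(v1 == v2)  # → True: identical source, identical version
```

Edit the body of `word_size` and the fingerprint changes, so cached results from the old definition would no longer be used.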
See how the first job is evaluated to return a new workflow. Note that if the version is omitted, it is automatically generated from the source of the function. For example, let's say we decided the function word_size_phrase should return a dictionary of all word sizes instead of a string. Here we use the function called lift to transform a dictionary containing promises to a promise of a dictionary. lift can handle lists, dictionaries, sets, tuples and objects that are constructible from their __dict__ member. | from noodles import lift
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
return lift({word: word_size(word) for word in words})
display_workflows(prefix='poetry', lift=word_size_phrase("Kneu kneu kneu kneu ote kneu eur"))
run_and_print_log(word_size_phrase("Kneu kneu kneu kneu ote kneu eur")) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Be careful with versions! Noodles will believe you upon your word! If we lie about the version, it will go ahead and retrieve the result belonging to the old function: | @schedule(version='1.0')
def word_size_phrase(phrase):
words = re.sub("[^\w]", " ", phrase).split()
return lift({word: word_size(word) for word in words})
run_and_print_log(
word_size_phrase("Kneu kneu kneu kneu ote kneu eur"),
highlight=[1]) | notebooks/poetry_tutorial.ipynb | NLeSC/noodles | apache-2.0 |
Parse results | pr, eigen, bet = parse_results('test_rdbg.txt') | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
PageRank Seeds Percentage
How many times the "Top X" nodes from PageRank have led to the max infection | draw_pie(get_percentage(pr)) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Avg adopters per seed comparison | draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(pr)+[(0, np.mean(pr[:,1]))])) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Eigenvector Seeds Percentage
How many times the "Top X" nodes from Eigenvector have led to the max infection | draw_pie(get_percentage(eigen)) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Avg adopters per seed comparison | draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(eigen)+[(0, np.mean(eigen[:,1]))])) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Betweenness Seeds Percentage
How many times the "Top X" nodes from Betweenness have led to the max infection | draw_pie(get_percentage(bet)) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Avg adopters per seed comparison | draw_bars_comparison('Avg adopters per seeds', 'Avg adopters', np.array(get_avg_per_seed(bet)+[(0, np.mean(bet[:,1]))])) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
100 runs adopters comparison | draw_bars(np.sort(pr.view('i8,i8'), order=['f0'], axis=0).view(np.int),
np.sort(eigen.view('i8,i8'), order=['f0'], axis=0).view(np.int),
np.sort(bet.view('i8,i8'), order=['f0'], axis=0).view(np.int)) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Centrality Measures Averages
PageRank avg adopters and seed | pr_mean = np.mean(pr[:,1])
pr_mean_seed = np.mean(pr[:,0])
print 'Avg Seed:',pr_mean_seed, 'Avg adopters:', pr_mean | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Eigenv avg adopters and seed | eigen_mean = np.mean(eigen[:,1])
eigen_mean_seed = np.mean(eigen[:,0])
print 'Avg Seed:',eigen_mean_seed, 'Avg adopters:',eigen_mean | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Betweenness avg adopters and seed | bet_mean = np.mean(bet[:,1])
bet_mean_seed = np.mean(bet[:,0])
print 'Avg Seed:',bet_mean_seed, 'Avg adopters:',bet_mean
draw_avgs([pr_mean, eigen_mean, bet_mean]) | results/RandomGraph Results Analysis.ipynb | indiependente/Social-Networks-Structure | mit |
Import the file into pandas, and drop all rows without a GPS fix | dname = '/Users/astyler/projects/torquedata/'
trips = []
fnames = glob.glob(dname+'*.csv')
for fname in fnames:
trip = pd.read_csv(fname, na_values=['-'],encoding ='U8',index_col=False, header=False, names=['GPSTime','Time','Longitude','Latitude','GPSSpeed','GPSError','Altitude','Bearing','Gx','Gy','Gz','G','Az','Ay','Ax','A','Power','Accuracy','Satellites','GPSAltitude','GPSBearing','Lat2','Lon2','OBDSpeed','GPSSpeedkmhr'])
trip = trip.dropna(subset = ['Longitude','Latitude'])
trips.append(trip)
fnames | torque plotting.ipynb | astyler/scratch | mit |
Find the Lat/Lon bounding box and create a new map from the osmapping library | buffr = 0.01
mins=[(min(trip.Longitude) -buffr,min(trip.Latitude)-buffr) for trip in trips]
maxs=[(max(trip.Longitude) + buffr,max(trip.Latitude)+buffr) for trip in trips]
ll = map(min,zip(*mins))
ur = map(max,zip(*maxs))
print ll
print ur
mymap = osmapping.MLMap(ll,ur)
for trip in trips:
trip['x'], trip['y'] = mymap.convert_coordinates(trip[['Longitude','Latitude']].values).T | torque plotting.ipynb | astyler/scratch | mit |
Import the shapefiles from Mapzen for Boston | reload(osmapping)
mymap.load_shape_file('./shapefiles/boston/line.shp')
mymap.load_shape_file('./shapefiles/boston/polygon.shp')
mymap.shapes.shape
coords = [(79,80),(15,24)]
print zip(*coords)
print zip(*[(1,1),(2,2)])
#print mymap.basemap([79,15],[80,24])
print mymap.basemap(79,80)
print mymap.basemap(15,24)
print zip(*mymap.basemap(*zip(*coords)))
| torque plotting.ipynb | astyler/scratch | mit |
Select most road types and some parks for plotting | mymap.clear_selected_shapes()
road = {'edgecolor':'white','lw':3, 'facecolor':'none','zorder':6};
mymap.select_shape('highway','motorway',**road)
mymap.select_shape('highway','trunk',**road)
mymap.select_shape('highway','primary',**road)
mymap.select_shape('highway','secondary',**road)
mymap.select_shape('highway','tertiary',**road)
mymap.select_shape('highway','residential',**road)
mymap.select_shape('leisure','park',facecolor='#BBDDBB',edgecolor='none',zorder=4)
mymap.select_shape('waterway','riverbank',facecolor='#0044CC', edgecolor='none', zorder=5)
mymap.select_shape('natural','water',facecolor='#CCCCEE', edgecolor='none', zorder=5)
bselect = lambda x: x['building'] in ['yes', 'apartments', 'commercial', 'house', 'residential', 'university', 'church', 'garage']
bldg = {'facecolor':'none', 'edgecolor':'#dedede', 'hatch':'////','zorder':7}
mymap.select_shapes(bselect, **bldg) | torque plotting.ipynb | astyler/scratch | mit |
Plot the basemap and then overlay the trip trace | for trip in trips:
trip.loc[trip.Satellites < 5,'Satellites'] = None
trip.loc[trip.Accuracy > 20,'Accuracy'] = None
trip.dropna(subset=['Accuracy'], inplace=True)
fig = plt.figure(figsize=(12,12))
ax = fig.add_subplot(111)
mymap.draw_map(ax, map_fill='#eeeeee')
for (idx,trip) in enumerate(trips):
ax.plot(trip.x, trip.y, lw=2, alpha=1,zorder=99, label=str(idx))
plt.legend() | torque plotting.ipynb | astyler/scratch | mit |
Model Architecture
http://cs231n.github.io/neural-networks-1/#power
Layout of a typical CNN
http://cs231n.github.io/convolutional-networks/
Classic VGG-like Architecture
we use a VGG-like architecture
based on https://arxiv.org/abs/1409.1556
basic idea: sequential, deep, small convolutional filters, use dropouts to reduce overfitting
16/19 layers are typical
we choose fewer layers, because we have limited resources
Convolutional Blocks: cascading many convolutional layers with downsampling in between
http://cs231n.github.io/convolutional-networks/#conv
Example of a Convolution
Original Image
Many convolutional filters applied over all channels
http://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html
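In 2D, each filter slides over the image, multiplying its weights against the patch underneath and summing — here without padding ("valid" mode) and with stride 1. A minimal pure-Python sketch (technically cross-correlation, which is what CNN layers compute; a true convolution would flip the kernel first):

```python
def conv2d_valid(image, kernel):
    """2D cross-correlation, no padding, stride 1."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal difference filter
print(conv2d_valid(image, edge))  # → [[-1, -1], [-1, -1], [-1, -1]]
```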
Downsampling Layer: reduces data size and the risk of overfitting
http://cs231n.github.io/convolutional-networks/#pool
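The most common downsampling layer is 2×2 max pooling with stride 2: each non-overlapping 2×2 block is replaced by its strongest activation, quartering the spatial size. A small sketch:

```python
def max_pool_2x2(x):
    """2x2 max pooling with stride 2: keep the strongest activation per block."""
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, len(x[0]), 2)]
            for i in range(0, len(x), 2)]

act = [[1, 3, 2, 1],
       [4, 6, 5, 0],
       [7, 2, 9, 8],
       [1, 0, 3, 4]]
print(max_pool_2x2(act))  # → [[6, 5], [7, 9]]
```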
Activation Functions | def centerAxis(uses_negative=False):
# http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.plot
ax = plt.gca()
ax.spines['left'].set_position('center')
if uses_negative:
ax.spines['bottom'].set_position('center')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left') | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
Sigmoid
This is the classic activation function
A continuous version of the step function | def np_sigmoid(X):
return 1 / (1 + np.exp(X * -1))
x = np.arange(-10,10,0.01)
y = np_sigmoid(x)
centerAxis()
plt.plot(x,y,lw=3) | notebooks/workshops/tss/cnn-intro.ipynb | DJCordhose/ai | mit |
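One reason the sigmoid was long the default: its derivative, needed in backpropagation, comes almost for free via the identity $\sigma'(x) = \sigma(x)\,(1 - \sigma(x))$. A scalar sketch using the standard library instead of numpy:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# identity used in backpropagation: sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x))
def sigmoid_prime(x):
    s = sigmoid(x)
    return s * (1.0 - s)

print(round(sigmoid(0.0), 4), round(sigmoid_prime(0.0), 4))  # → 0.5 0.25
```

Its downside is also visible here: for large |x| the derivative is nearly zero, which causes vanishing gradients in deep networks.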