Load the movie ids and titles for querying embeddings
|
!gsutil cp gs://cloud-samples-data/vertex-ai/matching-engine/swivel/movielens_25m/movies.csv ./movies.csv
movies = pd.read_csv("movies.csv")
print(f"Movie count: {len(movies.index)}")
movies.head()
# Change to your favourite movies.
query_movies = [
"Lion King, The (1994)",
"Aladdin (1992)",
"Star Wars: Episode IV - A New Hope (1977)",
"Star Wars: Episode VI - Return of the Jedi (1983)",
"Terminator 2: Judgment Day (1991)",
"Aliens (1986)",
"Godfather, The (1972)",
"Goodfellas (1990)",
]
def get_movie_id(title):
return list(movies[movies.title == title].movieId)[0]
input_items = [str(get_movie_id(title)) for title in query_movies]
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Look up embedding by making an online prediction request
|
predictions = endpoint.predict(instances=input_items)
embeddings = predictions.predictions
print(len(embeddings))
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Explore movie embedding similarities:
|
for idx1 in range(0, len(input_items) - 1, 2):
item1 = input_items[idx1]
title1 = query_movies[idx1]
print(title1)
print("==================")
embedding1 = embeddings[idx1]
for idx2 in range(0, len(input_items)):
item2 = input_items[idx2]
embedding2 = embeddings[idx2]
similarity = round(cosine_similarity([embedding1], [embedding2])[0][0], 5)
title2 = query_movies[idx2]
print(f" - Similarity to '{title2}' = {similarity}")
print()
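The `cosine_similarity` call above reduces to a normalized dot product. As a minimal sketch (the `cosine_sim` helper is illustrative, not part of the notebook), a NumPy equivalent looks like this:

```python
import numpy as np

def cosine_sim(u, v):
    """Cosine of the angle between two embedding vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Parallel vectors score ~1.0; orthogonal vectors score 0.0.
print(cosine_sim([1.0, 2.0], [2.0, 4.0]))  # ≈ 1.0
print(cosine_sim([1.0, 0.0], [0.0, 1.0]))  # → 0.0
```

Values near 1 mean the two embeddings point in nearly the same direction, which is why similar movies score higher above.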
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Create the Swivel job for Wikipedia text embedding (Optional)
This section shows you how to create embeddings for the items in the Wikipedia dataset using Swivel. Complete the following steps:
1. Configure the Swivel template (using the text input_type) and create a pipeline job.
2. Run the item embedding exploration code that follows.
The following cell overwrites swivel_pipeline_template.json; the new pipeline template file is almost identical, but it is labeled with your new pipeline suffix to distinguish it. This job takes a few hours.
|
# Copy the wikipedia sample dataset
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/wikipedia/* {SOURCE_DATA_PATH}/wikipedia
YOUR_PIPELINE_SUFFIX = "my-first-pipeline-wiki" # @param {type:"string"}
!./swivel_template_configuration.sh -pipeline_suffix {YOUR_PIPELINE_SUFFIX} -project_id {PROJECT_ID} -machine_type {MACHINE_TYPE} -accelerator_count {ACCELERATOR_COUNT} -accelerator_type {ACCELERATOR_TYPE} -pipeline_root {BUCKET}
# wikipedia text embedding sample
PARAMETER_VALUES = {
"embedding_dim": 100, # <---CHANGE THIS (OPTIONAL)
"input_base": "{}/wikipedia".format(SOURCE_DATA_PATH),
"input_type": "text", # For wikipedia sample
"max_vocab_size": 409600, # <---CHANGE THIS (OPTIONAL)
"num_epochs": 20, # <---CHANGE THIS (OPTIONAL)
}
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Submit the pipeline job through the aiplatform.PipelineJob object.
After the job finishes successfully (after a few hours), you can view the trained model in your Cloud Storage browser. The model path has the following format:
{BUCKET_NAME}/{PROJECT_NUMBER}/swivel-{TIMESTAMP}/EmbTrainerComponent_-{SOME_NUMBER}/model/
You may copy this path for the MODELOUTPUT_DIR below. For demo purposes, you can download a pretrained model to {SOURCE_DATA_PATH}/wikipedia_model and proceed. This pretrained model is for demonstration only and is not optimized for production use.
|
! gsutil -m cp -r gs://cloud-samples-data/vertex-ai/matching-engine/swivel/models/wikipedia/model {SOURCE_DATA_PATH}/wikipedia_model
SAVEDMODEL_DIR = os.path.join(SOURCE_DATA_PATH, "wikipedia_model/model")
embedding_model = tf.saved_model.load(SAVEDMODEL_DIR)
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
Explore the trained text embeddings
Load the SavedModel to lookup embeddings for items. Note the following:
* The SavedModel expects a list of string inputs.
* Each string input is treated as a list of space-separated tokens.
* If the input is text, the string input is lowercased with punctuation removed.
* An embedding is generated for each input by looking up the embedding of each token in the input and computing the average embedding per string input.
* The embedding of an out-of-vocabulary (OOV) token is a vector of zeros.
For example, if the input is ['horror', 'film', 'HORROR! Film'], the output will be three embedding vectors, where the third is the average of the first two.
|
input_items = ["horror", "film", '"HORROR! Film"', "horror-film"]
output_embeddings = embedding_model(input_items)
horror_film_embedding = tf.math.reduce_mean(output_embeddings[:2], axis=0)
# Average of embeddings for 'horror' and 'film' equals that for '"HORROR! Film"'
# since preprocessing cleans punctuation and lowercases.
assert tf.math.reduce_all(tf.equal(horror_film_embedding, output_embeddings[2])).numpy()
# Embedding for '"HORROR! Film"' does not equal that for 'horror-film', since the
# latter contains a hyphen and is thus treated as a single separate token.
assert not tf.math.reduce_all(
tf.equal(output_embeddings[2], output_embeddings[3])
).numpy()
# Change input_items with your own item tokens
input_items = ["apple", "orange", "hammer", "nails"]
output_embeddings = embedding_model(input_items)
for idx1 in range(len(input_items)):
item1 = input_items[idx1]
embedding1 = output_embeddings[idx1].numpy()
for idx2 in range(idx1 + 1, len(input_items)):
item2 = input_items[idx2]
embedding2 = output_embeddings[idx2].numpy()
similarity = round(cosine_similarity([embedding1], [embedding2])[0][0], 5)
print(f"Similarity between '{item1}' and '{item2}' = {similarity}")
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
You can use the TensorBoard Embedding Projector to graphically represent high-dimensional embeddings, which can be helpful in examining and understanding your embeddings.
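As a minimal sketch (file names and sample data are illustrative), embeddings can be exported as tab-separated files that the standalone Embedding Projector at projector.tensorflow.org accepts: one vector per row, plus a parallel metadata file of labels:

```python
import csv

# Hypothetical data: a handful of item embeddings and their labels.
labels = ["apple", "orange", "hammer"]
vectors = [[0.1, 0.2, 0.3], [0.1, 0.25, 0.28], [0.9, 0.8, 0.1]]

# One embedding per line, components separated by tabs.
with open("vectors.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(vectors)

# One label per line, in the same order as the vectors.
with open("metadata.tsv", "w") as f:
    f.write("\n".join(labels) + "\n")
```

Loading both files in the projector lets you inspect nearest neighbours and PCA/t-SNE views of the embedding space interactively.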
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
|
# Delete endpoint resource
# If force is set to True, all deployed models on this Endpoint will be undeployed first.
endpoint.delete(force=True)
# Delete model resource
MODEL_RESOURCE_NAME = model.resource_name
! gcloud ai models delete $MODEL_RESOURCE_NAME --region $REGION --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $SOURCE_DATA_PATH
|
notebooks/official/matching_engine/intro-swivel.ipynb
|
GoogleCloudPlatform/vertex-ai-samples
|
apache-2.0
|
1.9 A brief introduction to interferometry and its history
1.9.1 The double-slit experiment
The basics of interferometry date back to Thomas Young's double-slit experiment ⤴ of 1801. In this experiment, a plate pierced by two parallel slits is illuminated by a monochromatic source of light. Due to the wave-like nature of light, the waves passing through the two slits interfere, resulting in an interference pattern, or fringe, projected onto a screen behind the slits:
<img src="figures/514px-Doubleslit.svg.png" width="50%"/>
Figure 1.9.1: Schematic diagram of Young's double-slit experiment. Credit: Unknown.
The position on the screen $P$ determines the phase difference between the two arriving wavefronts. Waves arriving in phase interfere constructively and produce bright strips in the interference pattern. Waves arriving out of phase interfere destructively and result in dark strips in the pattern.
In this section we'll construct a toy model of a dual-slit experiment. Note that this model is not really physically accurate; it is literally just a "toy" to help us build some intuition for what's going on. A proper description of interfering electromagnetic waves will follow later.
Firstly, a monochromatic electromagnetic wave of wavelength $\lambda$ can be described at each point in time and space as a complex quantity, i.e. having an amplitude and a phase, $A\mathrm{e}^{\imath\phi}$. For simplicity, let us assume a constant amplitude $A$ but allow the phase to vary as a function of time and position.
Now if the same wave travels along two paths of different lengths and recombines at point $P$, the resulting electric field is a sum:
$E=E_1+E_2 = A\mathrm{e}^{\imath\phi}+A\mathrm{e}^{\imath(\phi-\phi_0)},$
where the phase delay $\phi_0$ corresponds to the pathlength difference $\tau_0$:
$\phi_0 = 2\pi\tau_0/\lambda.$
What is actually "measured" on the screen, the brightness, is, physically, a time-averaged electric field intensity $EE^*$, where the $^*$ represents complex conjugation (this is exactly what our eyes, or a photographic plate, or a detector in a camera perceive as "brightness"). We can work this out as
$
EE^* = (E_1+E_2)(E_1+E_2)^* = E_1 E_1^* + E_2 E_2^* + E_1 E_2^* + E_2 E_1^* = A^2 + A^2
+ A^2 \mathrm{e}^{\imath\phi_0}
+ A^2 \mathrm{e}^{-\imath\phi_0} =
2A^2 + 2A^2 \cos{\phi_0}.
$
Note how the phase itself has dropped out, and the only thing that's left is the phase delay $\phi_0$. The first part of the sum is constant, while the second, interfering term varies with the phase difference $\phi_0$, which in turn depends on the position $P$ on the screen. It is easy to see that the resulting intensity $EE^*$ is a purely real quantity that varies from $0$ to $4A^2$. This is exactly what produces the alternating bright and dark stripes on the screen.
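The derivation can be checked numerically. A small sketch comparing $|E_1+E_2|^2$ against the closed form $2A^2 + 2A^2\cos\phi_0$ for a few phase delays:

```python
import numpy as np

A = 1.5      # wave amplitude
phi = 0.7    # arbitrary common phase; it drops out of the intensity
for phi0 in np.linspace(0, 2 * np.pi, 7):
    # Sum of the two waves arriving with a relative delay phi0
    E = A * np.exp(1j * phi) + A * np.exp(1j * (phi - phi0))
    intensity = (E * np.conj(E)).real
    closed_form = 2 * A**2 + 2 * A**2 * np.cos(phi0)
    assert np.isclose(intensity, closed_form)
```

The intensity is maximal ($4A^2$) at $\phi_0 = 0$ and vanishes at $\phi_0 = \pi$, exactly the bright and dark fringes described above.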
1.9.2 A toy double-slit simulator
Let us write a short Python function to (very simplistically) simulate a double-slit experiment. Note that understanding the code presented here is not required to follow the experiment; those not interested in the implementation should feel free to look only at the results.
|
def double_slit (p0=[0],a0=[1],baseline=1,d1=5,d2=5,wavelength=.1,maxint=None):
"""Renders a toy dual-slit experiment.
'p0' is a list or array of source positions (drawn along the vertical axis)
'a0' is an array of source intensities
'baseline' is the distance between the slits
'd1' and 'd2' are distances between source and plate and plate and screen
'wavelength' is wavelength
'maxint' is the maximum intensity scale used to render the fringe pattern. If None, the pattern
is auto-scaled. 'maxint' is useful if you want to render fringes from multiple invocations
of double_slit() on the same intensity scale, i.e. for comparison.
"""
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2)
plt.ylim(-1, 1)
plt.xticks([])
plt.yticks([])
plt.axhline(0, ls=':')
baseline /= 2.
## draw representation of slits
plt.arrow(0, 1,0, baseline-1, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0,-1,0, 1-baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, -baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
## draw representation of lightpath from slits to centre of screen
plt.arrow(0, baseline,d2,-baseline, length_includes_head=True)
plt.arrow(0,-baseline,d2, baseline, length_includes_head=True)
## draw representation of sinewave from the central position
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/2
plt.plot(xw,yw,'b')
## 'xs' is a vector of x coordinates on the screen
## and we accumulate the interference pattern for each source into 'pattern'
xs = np.arange(-1, 1, .01)
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for p,a in np.broadcast(p0,a0):
plt.plot(-d1, p, marker='o', ms=10, mfc='red', mew=0)
total_intensity += a
if p == p0[0] or p == p0[-1]:
plt.arrow(-d1, p, d1, baseline-p, length_includes_head=True)
plt.arrow(-d1, p, d1,-baseline-p, length_includes_head=True)
# compute the two pathlengths
path1 = np.sqrt(d1**2 + (p-baseline)**2) + np.sqrt(d2**2 + (xs-baseline)**2)
path2 = np.sqrt(d1**2 + (p+baseline)**2) + np.sqrt(d2**2 + (xs+baseline)**2)
diff = path1 - path2
# accumulate the interference pattern from this source
pattern = pattern + a*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
# show pattern for one source at 0
double_slit(p0=[0])
|
1_Radio_Science/01_09_a_brief_introduction_to_interferometry.ipynb
|
landmanbester/fundamentals_of_interferometry
|
gpl-2.0
|
A.1 The Betelgeuse size measurement
For fun, let us use our toy to re-create the Betelgeuse size measurement of 1920 by A.A. Michelson and F.G. Pease. Their experiment was set up as follows. The interferometer they constructed had movable outside mirrors, giving it a baseline that could be adjusted downwards from a maximum of 6m. Red light has a wavelength of ~650nm; this gave them a maximum baseline of about 10 million wavelengths.
For the experiment, they started with a baseline of 1m (1.5 million wavelengths), and verified that they could see fringes from Betelgeuse with the naked eye. They then increased the baseline in small increments, until at 3m the fringes disappeared. From this, they inferred the diameter of Betelgeuse to be about 0.05".
You can repeat the experiment using the sliders below. You will probably find your toy Betelgeuse to be somewhat larger than 0.05". This is because our simulator is too simplistic -- in particular, it assumes a monochromatic source of light, which makes the fringes a lot sharper.
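The diameter estimate follows from the standard result that fringes from a uniform disc of angular diameter $\theta$ first vanish when $\theta \approx 1.22\,\lambda/B$. A hedged back-of-the-envelope check, using the round numbers quoted above (the exact wavelength and baseline Michelson and Pease used differ slightly):

```python
import math

wavelength = 650e-9   # m, red light as quoted above
baseline = 3.0        # m, baseline at which the fringes vanished
theta_rad = 1.22 * wavelength / baseline
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"estimated angular diameter: {theta_arcsec:.3f} arcsec")
```

This gives roughly 0.055 arcsec, consistent with the ~0.05" figure quoted above.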
|
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline:
michelson(p0=[0], a0=[1], extent=extent_arcsec*arcsec, maxint=1,
baseline=baseline,fov=1*arcsec),
extent_arcsec=(0,0.1,0.001),
baseline=(1e+4,1e+7,1e+4)
) and None
|
1_Radio_Science/01_09_a_brief_introduction_to_interferometry.ipynb
|
landmanbester/fundamentals_of_interferometry
|
gpl-2.0
|
To take a glimpse into the kinds of patterns that the network learned to recognize, we will try to generate images that maximize the sum of activations of a particular channel of a particular convolutional layer of the neural network. The network we explore contains many convolutional layers, each of which outputs tens to hundreds of feature channels, so we have plenty of patterns to explore.
Lei Mao: The weights are essentially what the neural network has learned. An image that maximizes the sum of activations of a particular channel of a particular convolutional layer represents the specific pattern that that layer has learned. This may be tricky for beginners to understand, but you can think of it this way: whenever you feed an image to a convolutional neural network, each layer produces activations. If the image contains a pattern that matches the "flavor" of a certain convolutional layer, that layer's output contains many activations, at least in some regions, but those activations are not maximal. You can think of the image that maximizes the sum of activations in a particular convolutional layer as the image that layer "likes" most. The job here is to find or generate that image.
|
layers = [op.name for op in graph.get_operations() if op.type=='Conv2D' and 'import/' in op.name]
feature_nums = [int(graph.get_tensor_by_name(name+':0').get_shape()[-1]) for name in layers]
print('Number of layers', len(layers))
print('Total number of feature channels:', sum(feature_nums))
# Helper functions for TF Graph visualization
def strip_consts(graph_def, max_const_size=32):
"""Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = tf.compat.as_bytes("<stripped %d bytes>"%size)
return strip_def
def rename_nodes(graph_def, rename_func):
res_def = tf.GraphDef()
for n0 in graph_def.node:
n = res_def.node.add()
n.MergeFrom(n0)
n.name = rename_func(n.name)
for i, s in enumerate(n.input):
n.input[i] = rename_func(s) if s[0]!='^' else '^'+rename_func(s[1:])
return res_def
def show_graph(graph_def, max_const_size=32):
"""Visualize TensorFlow graph."""
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code = """
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
""".format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe = """
<iframe seamless style="width:800px;height:620px;border:0" srcdoc="{}"></iframe>
""".format(code.replace('"', '"'))
display(HTML(iframe))
# Visualizing the network graph. Be sure to expand the "mixed" nodes to see their
# internal structure. We are going to visualize "Conv2D" nodes.
tmp_def = rename_nodes(graph_def, lambda s:"/".join(s.split('_',1)))
show_graph(tmp_def)
|
downloads/articles/2017-09-14-Google-DeepDream-Python/deepdream.ipynb
|
leimao/leimao.github.io
|
gpl-2.0
|
<a id='naive'></a>
Naive feature visualization
Let's start with a naive way of visualizing these. Image-space gradient ascent!
Lei Mao: Unlike loss minimization in classification tasks, here we perform maximization to generate the pattern that a certain convolutional layer "likes" most. The basic principles of minimization and maximization are the same, and both are easily implemented in TensorFlow given an appropriate objective function.
|
# Picking some internal layer. Note that we use outputs before applying the ReLU nonlinearity
# to have non-zero gradients for features with negative initial activations.
layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 139 # picking some feature channel to visualize
# start with a gray image with a little noise
img_noise = np.random.uniform(size=(224,224,3)) + 100.0
def showarray(a, fmt='jpeg'):
a = np.uint8(np.clip(a, 0, 1)*255)
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display(Image(data=f.getvalue()))
def visstd(a, s=0.1):
'''Normalize the image range for visualization'''
return (a-a.mean())/max(a.std(), 1e-4)*s + 0.5
def T(layer):
'''Helper for getting layer output tensor'''
return graph.get_tensor_by_name("import/%s:0"%layer)
def render_naive(t_obj, img0=img_noise, iter_n=20, step=1.0):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for i in range(iter_n):
g, score = sess.run([t_grad, t_score], {t_input:img})
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print(score, end = ' ')
clear_output()
showarray(visstd(img))
render_naive(T(layer)[:,:,:,channel])
|
downloads/articles/2017-09-14-Google-DeepDream-Python/deepdream.ipynb
|
leimao/leimao.github.io
|
gpl-2.0
|
<a id="multiscale"></a>
Multiscale image generation
Looks like the network wants to show us something interesting! Let's help it. We are going to apply gradient ascent on multiple scales. Details formed on smaller scale will be upscaled and augmented with additional details on the next scale.
With multiscale image generation it may be tempting to set the number of octaves to some high value to produce wallpaper-sized images. Storing network activations and backprop values will quickly run out of GPU memory in this case. There is a simple trick to avoid this: split the image into smaller tiles and compute each tile gradient independently. Applying random shifts to the image before every iteration helps avoid tile seams and improves the overall image quality.
Lei Mao: Somehow this random shift trick works very well at blurring the tile boundaries.
|
def tffunc(*argtypes):
'''Helper that transforms TF-graph generating function into a regular one.
See "resize" function below.
'''
placeholders = list(map(tf.placeholder, argtypes))
def wrap(f):
out = f(*placeholders)
def wrapper(*args, **kw):
return out.eval(dict(zip(placeholders, args)), session=kw.get('session'))
return wrapper
return wrap
# Helper function that uses TF to resize an image
def resize(img, size):
img = tf.expand_dims(img, 0)
return tf.image.resize_bilinear(img, size)[0,:,:,:]
resize = tffunc(np.float32, np.int32)(resize)
def calc_grad_tiled(img, t_grad, tile_size=512):
'''Compute the value of tensor t_grad over the image in a tiled way.
Random shifts are applied to the image to blur tile boundaries over
multiple iterations.'''
sz = tile_size
h, w = img.shape[:2]
sx, sy = np.random.randint(sz, size=2)
img_shift = np.roll(np.roll(img, sx, 1), sy, 0)
grad = np.zeros_like(img)
for y in range(0, max(h-sz//2, sz),sz):
for x in range(0, max(w-sz//2, sz),sz):
sub = img_shift[y:y+sz,x:x+sz]
g = sess.run(t_grad, {t_input:sub})
grad[y:y+sz,x:x+sz] = g
return np.roll(np.roll(grad, -sx, 1), -sy, 0)
def render_multiscale(t_obj, img0=img_noise, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
t_score = tf.reduce_mean(t_obj) # defining the optimization objective
t_grad = tf.gradients(t_score, t_input)[0] # behold the power of automatic differentiation!
img = img0.copy()
for octave in range(octave_n):
if octave>0:
hw = np.float32(img.shape[:2])*octave_scale
img = resize(img, np.int32(hw))
for i in range(iter_n):
g = calc_grad_tiled(img, t_grad)
# normalizing the gradient, so the same step size should work
g /= g.std()+1e-8 # for different layers and networks
img += g*step
print('.', end = ' ')
clear_output()
showarray(visstd(img))
render_multiscale(T(layer)[:,:,:,channel])
|
downloads/articles/2017-09-14-Google-DeepDream-Python/deepdream.ipynb
|
leimao/leimao.github.io
|
gpl-2.0
|
Is there a difference in the precision-recall values of different models?
|
query_dict = {'expansions__vectors__rep': 0,
'expansions__k':3,
'labelled':'amazon_grouped-tagged',
'expansions__use_similarity': 0,
'expansions__neighbour_strategy':'linear',
'expansions__vectors__dimensionality': 100,
'document_features_ev': 'AN+NN',
'document_features_tr': 'J+N+AN+NN',
'expansions__allow_overlap': False,
'expansions__entries_of': None,
'expansions__vectors__algorithm': 'glove',
'expansions__vectors__composer__in': ['Left'],
'expansions__vectors__unlabelled': 'wiki',
'expansions__vectors__unlabelled_percentage':100,
'expansions__decode_handler': 'SignifiedOnlyFeatureHandler',
'expansions__noise': 0}
ids = Experiment.objects.filter(**query_dict).order_by('expansions__vectors__unlabelled_percentage',
'expansions__vectors__composer').values_list('id', flat=True)
ids
get_ci(ids[0])[:-1]
results = Results.objects.get(id=ids[0], classifier='MultinomialNB')
pred = results.predictions
gold = results.gold
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.utils.multiclass import unique_labels
print(classification_report(gold, pred))
sns.set_style('white')
plot_confusion_matrix(gold, pred)
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Overall, precision and recall are balanced and roughly equal. Better models are better in both P and R.
What's in each cluster when using VQ?
|
path = '../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_100percent-rep0_Add.events.filtered.strings.kmeans2000'
df = pd.read_hdf(path, key='clusters')
counts = df.clusters.value_counts()
g = sns.distplot(counts.values, kde_kws={'cut':True})
g.set(xlim=(0, None))
plt.title('Distribution of cluster sizes, k=2000');
counts.describe()
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Find the smallest cluster and print phrases it contains
|
df[df.clusters==counts.argmin()].head(20)
df[df.clusters == 5]
# cluster 5 (negative sentiment), 2 (royalty), 8 (cheap, expensive) are very sensible
# clusters 3 ('arm'), 1 ('product'), 15 (hot), 16 (playing) are dominated by a single word (may contain multiple senses, e.g. hot water, hot waiter)
# cluster 6 (grand slam, grand prix, door slam) is dominated by a few words and a polysemous word bridging senses
# cluster 10 - film characters + misc
# 11 - sentiment, mix of positive and negative
# 13 - named entities
# 14 - arch, tower, veranda + related words + other senses (arch enemy)
from collections import Counter
Counter([str(x).split('_')[0] for x in df[df.clusters == 5].index]).most_common(10)
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Large clusters form (this one contains 934 unigrams and NPs) that share a word (e.g. bad occurs in 726 of them), and even though they are not pure (e.g. good also appears in that cluster 84 times), the vast majority of "bad" phrases end up in cluster 5, which starts to correspond to negative sentiment. This may be something the classifier picks up on.
For comparison, nearest neighbours using a KDTree
|
from discoutils.thesaurus_loader import Vectors as vv
# not quite the same vectors (15% vs 100%), but that's all I've got on this machine
v = vv.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/composed/AN_NN_word2vec-wiki_15percent-rep0_Add.events.filtered.strings')
v.init_sims(n_neighbors=30)
v.get_nearest_neighbours('bad/J')[:5]
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Let's find if there is a positive sentiment cluster
|
cluster_num = df.ix['good/J_guy/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[0] for x in df[df.clusters == cluster_num].index]).most_common(10)
cluster_num = df.ix['good/J_movie/N'][0]
print(cluster_num)
df[df.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in df[df.clusters == cluster_num].index]).most_common(10)
df[df.clusters == counts.argmax()] # these appear to be names, they are 99% unigrams
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Does the same hold for Turian vectors?
|
path = '../FeatureExtractionToolkit/socher_vectors/composed/AN_NN_turian_Socher.events.filtered.strings.kmeans2000'
ddf = pd.read_hdf(path, key='clusters')
cluster_num = ddf.ix['bad/J_guy/N'][0]
print(cluster_num)
ddf[ddf.clusters == cluster_num]
Counter([str(x).split('_')[1] for x in ddf[ddf.clusters == cluster_num].index]).most_common(10)
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Is it OK to use accuracy instead of Averaged F1 score?
|
gaps = []
for r in Results.objects.filter(classifier=CLASSIFIER):
gap = r.accuracy_mean - r.macrof1_mean
if abs(gap) > 0.1:
print(r.id.id)
gaps.append(gap)
plt.hist(gaps);
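To see why the two metrics can diverge, here is a self-contained sketch (synthetic labels, plain Python, not the notebook's database results) of a degenerate classifier that scores high accuracy but poor macro-averaged F1 on imbalanced data:

```python
gold = [0] * 90 + [1] * 10
pred = [0] * 100  # always predicts the majority class

accuracy = sum(g == p for g, p in zip(gold, pred)) / len(gold)

def f1_for(label):
    """Per-class F1, returning 0 when the class is never predicted correctly."""
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label and g != label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

macro_f1 = (f1_for(0) + f1_for(1)) / 2
print(accuracy, macro_f1)  # accuracy 0.9, macro-F1 ≈ 0.47 -- a large gap
```

The histogram of gaps above asks the same question on the real experiments: if the distribution is tight around zero, the classes are balanced enough that accuracy is a safe proxy.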
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Are neighbours of words other words, and is there grouping by PoS tag?
|
from discoutils.thesaurus_loader import Vectors
from discoutils.tokens import DocumentFeature
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-15perc.unigr.strings.rep0')
from random import sample
sampled_words = sample(list(v.keys()), 5000)
v.init_sims(n_neighbors=100)
data = []
for w in sampled_words:
doc_feat = DocumentFeature.from_string(w)
if doc_feat.tokens[0].pos == 'N' and np.random.uniform() < 0.8:
# too many nouns, ignore some of them
continue
neigh = v.get_nearest_neighbours(w)
for rank, (n, sim) in enumerate(neigh):
pospos = doc_feat.tokens[0].pos + DocumentFeature.from_string(n).tokens[0].pos
data.append([''.join(pospos), sim, rank])
df = pd.DataFrame(data, columns='pospos sim rank'.split())
mask = df.pospos.str.len() == 2
df = df[mask]
df.pospos.value_counts(), df.shape
g = sns.FacetGrid(df, col='pospos', col_wrap=3);
g.map(plt.hist, 'sim');
g = sns.FacetGrid(df, col='pospos', col_wrap=3);
g.map(plt.hist, 'rank');
|
notebooks/error_analysis.ipynb
|
mbatchkarov/ExpLosion
|
bsd-3-clause
|
Preliminary Report
Read the following results/report. While you are reading it, think about if the conclusions are correct, incorrect, misleading or unfounded. Think about what you would change or what additional analyses you would perform.
A. Initial observations based on the plot above
+ Overall, rate of readmissions is trending down with increasing number of discharges
+ With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)
+ With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green)
B. Statistics
+ In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1
+ In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1
C. Conclusions
+ There is a significant correlation between hospital capacity (number of discharges) and readmission rates.
+ Smaller hospitals/facilities may be lacking necessary resources to ensure quality care and prevent complications that lead to readmissions.
D. Regulatory policy recommendations
+ Hospitals/facilities with small capacity (< 300) should be required to demonstrate upgraded resource allocation for quality care to continue operation.
+ Directives and incentives should be provided for consolidation of hospitals and facilities to have a smaller number of them with higher capacity and number of discharges.
|
# A. Do you agree with the above analysis and recommendations? Why or why not?
import seaborn as sns
relevant_columns = clean_hospital_read_df[['Excess Readmission Ratio', 'Number of Discharges']][81:-3]
sns.regplot(relevant_columns['Number of Discharges'], relevant_columns['Excess Readmission Ratio'])
|
Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb
|
RyanAlberts/Springbaord-Capstone-Project
|
mit
|
<div class="span5 alert alert-info">
### Exercise
Include your work on the following **in this notebook and submit to your Github account**.
A. Do you agree with the above analysis and recommendations? Why or why not?
B. Provide support for your arguments and your own recommendations with a statistically sound analysis:
1. Setup an appropriate hypothesis test.
2. Compute and report the observed significance value (or p-value).
3. Report statistical significance for $\alpha$ = .01.
4. Discuss statistical significance and practical significance. Do they differ here? How does this change your recommendation to the client?
5. Look at the scatterplot above.
- What are the advantages and disadvantages of using this plot to convey information?
- Construct another plot that conveys the same information in a more direct manner.
You can compose in notebook cells using Markdown:
+ In the control panel at the top, choose Cell > Cell Type > Markdown
+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
</div>
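One way to set up steps 1–3 is a significance test on the correlation between discharges and excess readmission ratio. A hedged sketch on synthetic stand-in data (the real analysis would use `relevant_columns` from the notebook; here a permutation test avoids any distributional assumptions):

```python
import numpy as np

# Synthetic stand-in for the hospital data: a weak negative relationship plus noise.
rng = np.random.RandomState(0)
discharges = rng.uniform(10, 2000, size=500)
excess_ratio = 1.02 - 3e-5 * discharges + rng.normal(0, 0.05, size=500)

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

observed = pearson_r(discharges, excess_ratio)

# Permutation test for H0: no association (shuffling y breaks any real link).
perm_rs = np.array([pearson_r(discharges, rng.permutation(excess_ratio))
                    for _ in range(2000)])
p_value = np.mean(np.abs(perm_rs) >= abs(observed))
print(f"observed r = {observed:.3f}, permutation p-value = {p_value:.4f}")
# At alpha = 0.01, reject H0 when p_value < 0.01.
```

Statistical significance at $\alpha = .01$ says only that the correlation is unlikely to be zero; whether an effect of this size matters for policy (practical significance) is a separate judgment.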
Overall, rate of readmissions is trending down with increasing number of discharges
Agree, according to regression trend line shown above
With lower number of discharges, there is a greater incidence of excess rate of readmissions (area shaded red)
Agree
With higher number of discharges, there is a greater incidence of lower rates of readmissions (area shaded green)
Agree
|
rv = relevant_columns
print(rv[rv['Number of Discharges'] < 100][['Excess Readmission Ratio']].mean())
# Note: float division is needed here; under Python 2 integer division this ratio would silently be 0.
print('\nPercent of subset with excess readmission rate > 1: ', len(rv[(rv['Number of Discharges'] < 100) & (rv['Excess Readmission Ratio'] > 1)]) / float(len(rv[rv['Number of Discharges'] < 100])))
print('\n', rv[rv['Number of Discharges'] > 1000][['Excess Readmission Ratio']].mean())
print('\nPercent of subset with excess readmission rate > 1: ', len(rv[(rv['Number of Discharges'] > 1000) & (rv['Excess Readmission Ratio'] > 1)]) / float(len(rv[rv['Number of Discharges'] > 1000])))
|
Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb
|
RyanAlberts/Springbaord-Capstone-Project
|
mit
|
In hospitals/facilities with number of discharges < 100, mean excess readmission rate is 1.023 and 63% have excess readmission rate greater than 1
Accurate
In hospitals/facilities with number of discharges > 1000, mean excess readmission rate is 0.978 and 44% have excess readmission rate greater than 1
Correction: mean excess readmission rate is 0.979, and 44.565% have excess readmission rate > 1
|
np.corrcoef(rv['Number of Discharges'], rv['Excess Readmission Ratio'])
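For part B of the exercise, one statistically sound route is a significance test on the Pearson correlation itself. The notebook's `rv` frame is not reproduced here, so this sketch runs `scipy.stats.pearsonr` on made-up discharge/ratio data with a weak negative trend; swap in `rv['Number of Discharges']` and `rv['Excess Readmission Ratio']` for the real test.

```python
# Hedged sketch: synthetic stand-ins for the notebook's columns.
import numpy as np
from scipy import stats

rng = np.random.RandomState(0)
discharges = rng.uniform(25, 2500, size=500)                        # stand-in for 'Number of Discharges'
ratio = 1.0 - 0.00002 * discharges + rng.normal(0, 0.05, size=500)  # stand-in readmission ratio

# H0: no linear correlation (rho = 0); Ha: rho != 0
r, p_value = stats.pearsonr(discharges, ratio)
print(f"r = {r:.4f}, p = {p_value:.2e}")
alpha = 0.01
print("Reject H0" if p_value < alpha else "Fail to reject H0")
```

With the real data, comparing `p_value` against $\alpha = 0.01$ answers the statistical-significance question; practical significance still has to be judged from the size of `r`.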
|
Statistics_Exercises/sliderule_dsi_inferential_statistics_exercise_3.ipynb
|
RyanAlberts/Springbaord-Capstone-Project
|
mit
|
You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise.
Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
|
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Use the next code cell to print the first several rows of the data.
|
X.head()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility.
|
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
|
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Step 1: Write a useful function
In this exercise, you'll use cross-validation to select parameters for a machine learning model.
Begin by writing a function get_score() that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:
- the data in X and y to create folds,
- SimpleImputer() (with all parameters left as default) to replace missing values, and
- RandomForestRegressor() (with random_state=0) to fit a random forest model.
The n_estimators parameter supplied to get_score() is used when setting the number of trees in the random forest model.
|
def get_score(n_estimators):
"""Return the average MAE over 3 CV folds of random forest model.
Keyword argument:
n_estimators -- the number of trees in the forest
"""
# Replace this body with your own code
pass
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
def get_score(n_estimators):
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators, random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Step 2: Test different parameter values
Now, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.
Store your results in a Python dictionary results, where results[i] is the average MAE returned by get_score(i).
|
results = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
results = {}
for i in range(1,9):
results[50*i] = get_score(50*i)
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Use the next cell to visualize your results from Step 2. Run the code without changes.
|
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Step 3: Find the best parameter value
Given the results, which value for n_estimators seems best for the random forest model? Use your answer to set the value of n_estimators_best.
|
n_estimators_best = ____
# Check your answer
step_3.check()
#%%RM_IF(PROD)%%
n_estimators_best = min(results, key=results.get)
step_3.assert_check_passed()
#%%RM_IF(PROD)%%
n_estimators_best = 200
step_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.hint()
#_COMMENT_IF(PROD)_
step_3.solution()
|
notebooks/ml_intermediate/raw/ex5.ipynb
|
Kaggle/learntools
|
apache-2.0
|
Now let us write a custom function to run the xgboost model.
|
def runXGB(train_X, train_y, test_X, test_y=None, feature_names=None, seed_val=0, num_rounds=1000):
    param = {}
    param['objective'] = 'multi:softprob'
    param['eta'] = 0.1
    param['max_depth'] = 6
    param['silent'] = 1
    param['num_class'] = 3
    param['eval_metric'] = "mlogloss"
    param['min_child_weight'] = 1
    param['subsample'] = 0.7
    param['colsample_bytree'] = 0.7
    param['seed'] = seed_val
    plst = list(param.items())
    xgtrain = xgb.DMatrix(train_X, label=train_y)
    if test_y is not None:
        xgtest = xgb.DMatrix(test_X, label=test_y)
        watchlist = [ (xgtrain,'train'), (xgtest, 'test') ]
        model = xgb.train(plst, xgtrain, num_rounds, watchlist, early_stopping_rounds=20)
    else:
        xgtest = xgb.DMatrix(test_X)
        model = xgb.train(plst, xgtrain, num_rounds)
    pred_test_y = model.predict(xgtest)
    return pred_test_y, model
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Let us read the train and test files and store it.
|
data_path = "../input/"
train_file = data_path + "train.json"
test_file = data_path + "test.json"
train_df = pd.read_json(train_file)
test_df = pd.read_json(test_file)
print(train_df.shape)
print(test_df.shape)
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
We do not need any pre-processing for numerical features and so create a list with those features.
|
features_to_use = ["bathrooms", "bedrooms", "latitude", "longitude", "price"]
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Now let us create some new features from the given features.
|
# count of photos #
train_df["num_photos"] = train_df["photos"].apply(len)
test_df["num_photos"] = test_df["photos"].apply(len)
# count of "features" #
train_df["num_features"] = train_df["features"].apply(len)
test_df["num_features"] = test_df["features"].apply(len)
# count of words present in description column #
train_df["num_description_words"] = train_df["description"].apply(lambda x: len(x.split(" ")))
test_df["num_description_words"] = test_df["description"].apply(lambda x: len(x.split(" ")))
# convert the created column to datetime object so as to extract more features
train_df["created"] = pd.to_datetime(train_df["created"])
test_df["created"] = pd.to_datetime(test_df["created"])
# Let us extract some features like year, month, day, hour from date columns #
train_df["created_year"] = train_df["created"].dt.year
test_df["created_year"] = test_df["created"].dt.year
train_df["created_month"] = train_df["created"].dt.month
test_df["created_month"] = test_df["created"].dt.month
train_df["created_day"] = train_df["created"].dt.day
test_df["created_day"] = test_df["created"].dt.day
train_df["created_hour"] = train_df["created"].dt.hour
test_df["created_hour"] = test_df["created"].dt.hour
# adding all these new features to use list #
features_to_use.extend(["num_photos", "num_features", "num_description_words","created_year", "created_month", "created_day", "listing_id", "created_hour"])
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
We have 4 categorical features in our data
display_address
manager_id
building_id
street_address
So let us label encode these features.
|
categorical = ["display_address", "manager_id", "building_id", "street_address"]
for f in categorical:
if train_df[f].dtype=='object':
#print(f)
lbl = preprocessing.LabelEncoder()
lbl.fit(list(train_df[f].values) + list(test_df[f].values))
train_df[f] = lbl.transform(list(train_df[f].values))
test_df[f] = lbl.transform(list(test_df[f].values))
features_to_use.append(f)
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
We have features column which is a list of string values. So we can first combine all the strings together to get a single string and then apply count vectorizer on top of it.
|
train_df['features'] = train_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
test_df['features'] = test_df["features"].apply(lambda x: " ".join(["_".join(i.split(" ")) for i in x]))
print(train_df["features"].head())
tfidf = CountVectorizer(stop_words='english', max_features=200)
tr_sparse = tfidf.fit_transform(train_df["features"])
te_sparse = tfidf.transform(test_df["features"])
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Now let us stack both the dense and sparse features into a single dataset and also get the target variable.
|
train_X = sparse.hstack([train_df[features_to_use], tr_sparse]).tocsr()
test_X = sparse.hstack([test_df[features_to_use], te_sparse]).tocsr()
target_num_map = {'high':0, 'medium':1, 'low':2}
train_y = np.array(train_df['interest_level'].apply(lambda x: target_num_map[x]))
print(train_X.shape, test_X.shape)
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Now let us do some cross validation to check the scores.
Please run it locally to get the full cv scores; to save time here, the loop below breaks after the first fold.
|
cv_scores = []
kf = model_selection.KFold(n_splits=5, shuffle=True, random_state=2016)
for dev_index, val_index in kf.split(range(train_X.shape[0])):
dev_X, val_X = train_X[dev_index,:], train_X[val_index,:]
dev_y, val_y = train_y[dev_index], train_y[val_index]
preds, model = runXGB(dev_X, dev_y, val_X, val_y)
cv_scores.append(log_loss(val_y, preds))
print(cv_scores)
break
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Now let us build the final model and get the predictions on the test set.
|
preds, model = runXGB(train_X, train_y, test_X, num_rounds=400)
out_df = pd.DataFrame(preds)
out_df.columns = ["high", "medium", "low"]
out_df["listing_id"] = test_df.listing_id.values
out_df.to_csv("xgb_starter2.csv", index=False)
|
xgboost.ipynb
|
shengqiu/renthop
|
gpl-2.0
|
Collate and output the results as a plain-text alignment table, as JSON, and as colored HTML
|
collationText = collate(json_input,output='table',layout='vertical')
print(collationText)
collationJSON = collate(json_input,output='json')
print(collationJSON)
collationHTML2 = collate(json_input,output='html2')
|
unit8/unit8-collatex-and-XML/CollateX and XML, Part 2.ipynb
|
DiXiT-eu/collatex-tutorial
|
gpl-3.0
|
Different forms of the network.
The node-link network that we get from Source includes topological information, in addition to the geometries of the various nodes, links and catchments, and their attributes, such as node names.
When we initially retrieve the network with v.network(), we get an object that includes a number of queries based on this topology.
Note: These queries are not implemented on the dataframe of the network, created with v.network().as_dataframe(). However you can call as_dataframe() on the result of some of the topological queries.
|
network = v.network()
|
doc/examples/network/TopologicalQueries.ipynb
|
flowmatters/veneer-py
|
isc
|
e.g., find all outlet nodes
|
outlets = network.outlet_nodes().as_dataframe()
outlets[:10]
|
doc/examples/network/TopologicalQueries.ipynb
|
flowmatters/veneer-py
|
isc
|
Feature id
Other topological queries are based on the id attribute of features in the network. For example /network/nodes/187
|
upstream_features = network.upstream_features('/network/nodes/214').as_dataframe()
upstream_features
upstream_features.plot()
|
doc/examples/network/TopologicalQueries.ipynb
|
flowmatters/veneer-py
|
isc
|
Partitioning the network
The network.partition method can be very useful for a range of parameterisation and reporting needs.
partition groups all features (nodes, links and catchments) in the network based on which of a series of key nodes those features drain through.
partition adds a new property to each feature, naming the relevant key node (or the outlet node if none of the key nodes are downstream of a particular feature).
Note: You can name the property used to identify the key nodes, which means you can run partition multiple times to identify different groupings within the network
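veneer-py handles this internally; purely to illustrate the idea, here is a toy sketch (the downstream map, the feature names, and the partition_feature helper are all hypothetical, not the veneer-py API) of attributing each feature to the first key node met walking downstream:

```python
# Made-up network: each feature points to the next feature downstream;
# None marks the end of the network below the outlet.
downstream = {
    'catchment_A': 'node_1', 'catchment_B': 'node_2',
    'node_1': 'gauge_1', 'node_2': 'gauge_1',
    'gauge_1': 'outlet', 'outlet': None,
}
key_nodes = {'gauge_1'}

def partition_feature(feature):
    """Walk downstream until a key node is hit; fall back to the last feature seen."""
    current = feature
    last = feature
    while current is not None:
        if current in key_nodes:
            return current
        last = current
        current = downstream[current]
    return last  # no key node downstream: attribute the feature to the outlet

downstream_gauge = {f: partition_feature(f) for f in downstream}
print(downstream_gauge)
```

Conceptually this is what the new property (here named `downstream_gauge`) records for every node, link and catchment.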
|
network.partition?
gauge_names = network['features'].find_by_icon('/resources/GaugeNodeModel')._select(['name'])
gauge_names
network.partition(gauge_names,'downstream_gauge')
dataframe = network.as_dataframe()
dataframe[:10]
## Path between two features
network.path_between?
network.path_between('/network/catchments/20797','/network/nodes/56').as_dataframe()
|
doc/examples/network/TopologicalQueries.ipynb
|
flowmatters/veneer-py
|
isc
|
The (College Student) Diet Problem
Consider the canonical college student. After a hard afternoon's work of solving way too many partial differential equations, she emerges from her room to obtain sustenance for the day.
She has a choice between getting chicken over rice (\$5) from the halal cart on her street ($r$), or subs (\$7) from the deli ($s$). She's a poor college student, so she will obviously want to get her money's worth. This is an optimisation problem: she wants to find the amount of chicken over rice and subs she has to buy in order to minimise the total cost she spends on food.
$$
\text{minimise} \quad 5r + 7s
$$
In optimisation, we like to call this expression the objective function.
Well, it's not as simple as that. A girl's got to get her fill of daily nutrients. Fibre, protein, and carbohydrates are all important, and however far away food pyramids are from the quotidian thoughts of college students, a girl can still dream of a pseudo-healthy diet with at least 4 servings of fibre, 3 servings of protein, and 6 servings of carbohydrates.
A chicken over rice has 2 servings of fibre, 3 servings of protein, and 3 servings of carbohydrates, while a sub has 1 serving of fibre, 3 servings of protein, and 4 servings of carbohydrates. To find the combination of meals that satisfies the daily nutritional requirements, we impose the following constraints:
\begin{align}
\text{Fibre: } &2r + s \geq 4 \\
\text{Protein: } &3r + 3s \geq 3 \\
\text{Carbohydrates: } &3r + 4s \geq 6
\end{align}
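As a sanity check on the formulation, the whole problem can be handed to an off-the-shelf solver. This is a minimal sketch with scipy.optimize.linprog (an assumption: scipy is not used elsewhere in this notebook); since linprog expects constraints of the form $A_{ub} x \leq b_{ub}$, each $\geq$ constraint is negated.

```python
from scipy.optimize import linprog

# minimise 5r + 7s  subject to  2r+s >= 4, 3r+3s >= 3, 3r+4s >= 6, r,s >= 0.
# linprog wants A_ub @ x <= b_ub, so each >= constraint is multiplied by -1.
c = [5, 7]
A_ub = [[-2, -1],
        [-3, -3],
        [-3, -4]]
b_ub = [-4, -3, -6]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimum at r=2, s=0 with cost 10
```

This matches the graphical solution derived below: 2 chicken over rice, 0 subs, \$10.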
Visualising the Problem
|
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
r_min = 0.0
r_max = 3.0
s_min = 0.0
s_max = 5.0
res = 50
r = numpy.linspace(r_min, r_max, res)
# plot axes
axes.axhline(0, color='#B3B3B3', linewidth=5)
axes.axvline(0, color='#B3B3B3', linewidth=5)
# plot constraints
c_1 = lambda x: 4 - 2*x
c_2 = lambda x: 1 - x
c_3 = lambda x: 0.25 * ( 6 - 3*x )
c_1_line = axes.plot( r, c_1(r), label='Fibre' ) # 2r + s \geq 4
c_2_line = axes.plot( r, c_2(r), label='Protein' ) # 3r + 3s \geq 3
c_3_line = axes.plot( r, c_3(r), label='Carbohydrate' ) # 3r + 4s \geq 6
# plot objective
s = numpy.linspace(s_min, s_max, res)
c = numpy.empty([r.size, s.size])
for i, r_i in enumerate(r):
    c[:,i] = 5 * r_i + 7 * s  # objective 5r + 7s
axes.contourf(r, s, c, res, cmap='Oranges', alpha=0.5)
r_cut = numpy.linspace(0.0, 2.0, 100)
axes.fill_between(r_cut, c_1(r_cut), color='w')
# plot cost minimising point
axes.plot(2.0, 0, 'o')
# label graph
axes.set_title('Visualising the Diet Problem')
axes.set_xlabel('Chicken Over Rice')
axes.set_ylabel('Sub')
axes.legend()
plt.show()
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
We can visualise our diet problem on a graph of "Number of Subs vs. Number of Chicken Over Rice", where each line represents a constraint, and our cost function is represented in shades of orange: the deeper the orange, the more we will spend on meals.
The regions where we will satisfy our constraints will be the regions above our constraint lines, since we want more than or equal to the number of minimum servings. Obviously, we can't buy a negative number of subs or chicken over rice, so we have the implicit constraints that $r \geq 0$ and $s \geq 0$.
The intersection of all the regions that satisfy each of our constraints is what we call the feasible region, or the feasible set: the region of solutions that satisfy all constraints. In the graph, this is the region with the orange gradient fill.
So our problem of deciding how much of what food to buy has been essentially reduced to finding the point in the feasible set with the minimum cost (i.e. the lightest shade of orange.) With one glance, we can tell that this point is $(2, 0)$, so we should buy 2 chicken over rice, and 0 subs. Interestingly, our feasible region is determined largely by the fibre constraint—read from this what you want.
Well, you think to yourself, that was easy; I can stop reading now!
That's true, if you only have 2 foods to choose between. But in general, life isn't as simple as this; if, say, you're a functioning adult and actually cook, you'll want to choose between the 1000's of grocery items available to you at the local supermarket. In that case, you'll have to draw out one axis for each food item (how you'll do that, I don't know), and then compare the colors across this unvisualisable space. This shit gets real, and fast.
Linear Programming
Well, luckily for us, a clever guy by the name of George Dantzig managed to solve exactly this type of problem for us while he was working for the U.S. Air Force in WWII, when computers were just starting to come out of the realm of science fiction. They faced a similar problem then, as many do now: they only had a set amount of men and resources, and wanted to maximise the amount of work they could do in winning the war.
In other areas, you could also imagine say, a furniture manufacturer wanting to find the most efficient way of using the manpower and planks, screws, tools, and whatever they use to build furniture these days, to produce the combination of furniture that will maximise their profits. Or, on Wall Street, a trader wanting to find the best combination of differently priced assets that maximises projected profits, or minimises risk (or something along those lines; I know nuts about finance).
We call these sorts of problems, wherein we want to maximise (or minimise!) some linear objective function subject to a set of linear constraints linear optimisation problems, and the methods we use to solve these problems linear programming.
Standard Form and Duality
Linear optimisation problems can always be expressed as
\begin{align}
\text{maximise} \quad & b_1 x_1 + b_2 x_2 + \ldots + b_m x_m \\
\text{subject to} \quad & a_{11} x_{1} + a_{21} x_{2} + \ldots + a_{m1} x_{m} \leq c_1 \\
& a_{12} x_{1} + a_{22} x_{2} + \ldots + a_{m2} x_{m} \leq c_2 \\
& \vdots \\
& a_{1n} x_{1} + a_{2n} x_{2} + \ldots + a_{mn} x_{m} \leq c_n
\end{align}
In fewer symbols, this is
\begin{align}
\text{maximise} \quad & b^T x \\
\text{subject to} \quad & Ax \leq c
\end{align}
This is what is commonly known as the dual form of the problem. Well, so if there is a dual, then there must actually be 2 problems, right? So what was the first?
Turns out, we call the "first" problem the primal problem, and surprisingly (or not), the solution of the primal problem will give us an upper bound on the corresponding solution of the dual problem. It looks like this:
\begin{align}
\text{minimise} \quad & c_1 y_1 + c_2 y_2 + \ldots + c_n y_n \\
\text{subject to} \quad & a_{11} y_{1} + a_{12} y_{2} + \ldots + a_{1n} y_{n} = b_1 \\
& a_{21} y_{1} + a_{22} y_{2} + \ldots + a_{2n} y_{n} = b_2 \\
& \vdots \\
& a_{m1} y_{1} + a_{m2} y_{2} + \ldots + a_{mn} y_{n} = b_m \\
\text{and} \quad & \{ y_i \geq 0 \}_{i=1}^{n}
\end{align}
aka
\begin{align}
\text{minimise} \quad & c^T y \\
\text{subject to} \quad & A^T y = b \\
\text{and} \quad & y \geq 0
\end{align}
We basically interchange the constraints' constants and the coefficients in our objective function, and turn the inequalities into equalities. The nice thing about the dual problem and its primal is that if one has an optimal solution $x^*$, then the other also has an optimal solution $y^*$, related by $b^Tx^* = c^Ty^*$, i.e. the two problems have the same optimum value!
The dual problem for linear optimisation problems was first conjectured by von Neumann, who was then working on game theory. We can think of the fact that any linear programme has a dual problem as 2 players playing a zero-sum game; any gains on the part of one player must necessarily result in losses for the other player. When you maximise utility for one player, you are at the same time minimising utility for the other.
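Strong duality is easy to check numerically. The sketch below uses scipy.optimize.linprog on a small made-up pair in the notation above (the matrix $A$ and vectors $b$, $c$ are arbitrary choices for illustration, not the diet problem) and confirms the two optima coincide.

```python
import numpy as np
from scipy.optimize import linprog

# A small made-up primal/dual pair in the matrix notation above:
#   maximise b^T x  subject to  A x <= c          (x free)
#   minimise c^T y  subject to  A^T y = b, y >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])   # rows are the <= constraints on x
b = np.array([1.0, 1.0])
c = np.array([4.0, 6.0])

# linprog minimises, so maximise b^T x by minimising -b^T x
dual = linprog(-b, A_ub=A, b_ub=c, bounds=[(None, None)] * 2)
primal = linprog(c, A_eq=A.T, b_eq=b, bounds=[(0, None)] * 2)
print(-dual.fun, primal.fun)   # both ~2.8: the optima coincide
```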
So what does our college student diet problem look like in the standard form (and its primal?)
Since minimising a function is just maximising the negative of that function, the problem becomes
\begin{align}
\text{maximise} \quad & - 5r - 7s \\
\text{subject to} \quad & - 2r - s \leq - 4 \\
& - 3r - 3s \leq - 3 \\
& - 3r - 4s \leq - 6
\end{align}
|
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
c_1 = lambda x: 4 - 2*x
c_2 = lambda x: 1 - x
c_3 = lambda x: - 0.25 * ( - 6 + 3*x )
c_1_line = axes.plot( r, c_1(r), label='Fibre' ) # 2r + s \geq 4
c_2_line = axes.plot( r, c_2(r), label='Protein' ) # 3r + 3s \geq 3
c_3_line = axes.plot( r, c_3(r), label='Carbohydrate' ) # 3r + 4s \geq 6
# plot objective
s = numpy.linspace(s_min, s_max, res)
c = numpy.empty([r.size, s.size])
for i, r_i in enumerate(r):
    c[:,i] = - 5 * r_i - 7 * s  # standard-form objective -5r - 7s
axes.contourf(r, s, c, res, cmap='Oranges', alpha=0.5)
r_cut = numpy.linspace(0.0, 2.0, 100)
axes.fill_between(r_cut, c_1(r_cut), color='w')
# plot cost minimising point
axes.plot(2.0, 0, 'o')
# label graph
axes.set_title('Visualising the Diet Problem, Standard Form')
axes.set_xlabel('Chicken Over Rice')
axes.set_ylabel('Sub')
axes.legend(loc=1)
plt.show()
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
In primal form, this is
\begin{align}
\text{minimise} \quad & - 4y_1 - 3y_2 - 6y_3 \\
\text{subject to} \quad & 2y_1 + 3y_2 + 3y_3 = 5 \\
& y_1 + 3y_2 + 4y_3 = 7 \\
\text{and} \quad & \{ y_i \geq 0 \}_{i=1}^{3}
\end{align}
Which can be seen as minimising the objective function on the line segment formed by intersecting the 2 constraint planes. We can also interpret this as wanting to maximise the nutritional value of our meals, given that trying to increase the quantity of one nutrient will necessarily mean that we have to give up some amount of another nutrient.
The Simplex Method
Standard Form (for the Simplex Method)
\begin{align}
\text{maximise} \quad & c_1 x_1 + c_2 x_2 + \ldots + c_m x_m \\
\text{subject to} \quad & a_{11} x_1 + a_{12} x_2 + \ldots + a_{1m} x_m \leq b_1 \\
& a_{21} x_1 + a_{22} x_2 + \ldots + a_{2m} x_m \leq b_2 \\
& \vdots \\
& a_{n1} x_1 + a_{n2} x_2 + \ldots + a_{nm} x_m \leq b_n \\
\text{and} \quad & \{ x_i \geq 0 \}_{i=1}^{m} \text{ and } \{ b_j \geq 0 \}_{j=1}^{n}
\end{align}
If you are currently trying to minimise the objective function, turn it into a maximisation problem by taking the negative of the expression
Turn all the inequality constraints into equality constraints by adding slack variables
If these transformation still don't allow your system of equations to fit the form, solve the dual form of the problem!
System of Constraint Equations
\begin{align}
\text{maximise} \quad & c_1 x_1 + c_2 x_2 + \ldots + c_m x_m = z \\
\text{subject to} \quad & a_{11} x_1 + a_{12} x_2 + \ldots + a_{1m} x_m + s_1 = b_1 \\
& a_{21} x_1 + a_{22} x_2 + \ldots + a_{2m} x_m + s_2 = b_2 \\
& \vdots \\
& a_{n1} x_1 + a_{n2} x_2 + \ldots + a_{nm} x_m + s_n = b_n \\
\text{and} \quad & \{ x_i \geq 0 \}_{i=1}^{m}, ~ \{ s_j \geq 0 \}_{j=1}^{n}, ~ \text{ and } \{ b_j \geq 0 \}_{j=1}^{n}
\end{align}
Taking another look at our diet problem, we can put this problem
\begin{align}
\text{maximise} \quad & - 5r - 7s \\
\text{subject to} \quad & - 2r - s \leq - 4 \\
& - 3r - 3s \leq - 3 \\
& - 3r - 4s \leq - 6 \\
\text{and} \quad & r, s \geq 0
\end{align}
into standard form for the simplex method by putting it into its dual form:
\begin{align}
\text{maximise} \quad & 6y_1 + 3y_2 + 4y_3 \\
\text{subject to} \quad & 3y_1 + 3y_2 + 2y_3 \leq 5 \\
& 4y_1 + 3y_2 + y_3 \leq 7 \\
\text{and} \quad & \{ y_i \geq 0 \}_{i=1}^{3}
\end{align}
Hence, the constraint equations are
\begin{align}
\text{maximise} \quad & 6y_1 + 3y_2 + 4y_3 = z \\
\text{subject to} \quad & 3y_1 + 3y_2 + 2y_3 + s_1 = 5 \\
& 4y_1 + 3y_2 + y_3 + s_2 = 7 \\
\text{and} \quad & \{ y_i \geq 0 \}_{i=1}^{3} \text{ and } \{ s_i \geq 0 \}_{i=1}^{2}
\end{align}
The Algorithm
|
import pandas as pd
pd.set_option('display.notebook_repr_html', True)

def pivot(departing, entering, tab):
    dpi = tab[tab['basic_variable'] == departing].index[0]  # index of the departing row
    cols = tab.columns[0:-1]  # numeric columns (everything but 'basic_variable')
    # update basic variable
    tab.loc[dpi, 'basic_variable'] = entering
    # normalise departing row
    tab.loc[dpi, cols] = tab.loc[dpi, cols] / tab.loc[dpi, entering]
    departing_row = tab.loc[dpi, cols]
    # do gauss-jordan on entering variable column
    for row in tab.index[tab.index != dpi]:
        tab.loc[row, cols] = tab.loc[row, cols] - tab.loc[row, entering] * departing_row

# Bland's rule
def calculate_ratio(entering, tab):
    ratios = tab['value'].iloc[0:-1] * 0 - 1
    for index, is_valid in enumerate(tab[entering].iloc[0:-1] > 0):
        if is_valid:
            ratios.iloc[index] = tab['value'].iloc[index] / tab[entering].iloc[index]
    return ratios

def find_entering(tab):
    return tab.loc['z'].iloc[0:-2].idxmin()

def find_departing(ratios, tab):
    return tab.loc[ratios[ratios >= 0].idxmin(), 'basic_variable']

def update_stats(tab):
    print("Basic variables: ")
    basic_variables = tab['basic_variable'].iloc[0:-1].values
    print(basic_variables)
    print("Non-basic variables: ")
    non_basic_variables = numpy.setdiff1d(tab.columns[0:-2], basic_variables)
    print(non_basic_variables)
    print("Entering variable: ")
    entering_variable = find_entering(tab)
    print(entering_variable)
    print("Ratios: ")
    ratios = calculate_ratio(entering_variable, tab)
    print(ratios)
    print("Departing variable: ")
    departing_variable = find_departing(ratios, tab)
    print(departing_variable)
    return departing_variable, entering_variable

def is_optimum(tab):
    return (tab.loc['z'].iloc[0:-2] >= 0).all()

def run_simplex(tableau_dict, tableau_orig, max_iterations=10, force_iterations=0):
    if force_iterations == 0:
        for i in range(max_iterations):
            tableau_dict[i] = tableau_orig.copy()
            display(tableau_orig)
            if is_optimum(tableau_orig):
                break
            departing_variable, entering_variable = update_stats(tableau_orig)
            pivot(departing_variable, entering_variable, tableau_orig)
    else:
        for i in range(force_iterations):
            tableau_dict[i] = tableau_orig.copy()
            display(tableau_orig)
            departing_variable, entering_variable = update_stats(tableau_orig)
            pivot(departing_variable, entering_variable, tableau_orig)

c_1 = numpy.array([[ 3, 3, 2, 1, 0, 5, 's_1']])
c_2 = numpy.array([[ 4, 3, 1, 0, 1, 7, 's_2']])
z = numpy.array([[-6, -3, -4, 0, 0, 0, '']])
rows = numpy.concatenate((c_1, c_2, z), axis=0)
tableau = pd.DataFrame(rows, columns=['y_1','y_2','y_3','s_1','s_2','value', 'basic_variable'], index=['c_1','c_2','z'])
tableau.iloc[:,0:-1] = tableau.iloc[:,0:-1].astype('float')
tableaux = dict()
run_simplex(tableaux, tableau)
from ipywidgets import interact
def diet_problem(step):
    fig = plt.figure()
    axes = fig.add_subplot(1,1,1)
    # plot axes
    axes.axhline(0, color='k')
    axes.axvline(0, color='k')
    # plot constraints
    c_1 = lambda x: 4 - 2*x
    c_2 = lambda x: 1 - x
    c_3 = lambda x: - 0.25 * ( - 6 + 3*x )
    c_1_line = axes.plot( r, c_1(r), label='Fibre' )        # 2r + s \geq 4
    c_2_line = axes.plot( r, c_2(r), label='Protein' )      # 3r + 3s \geq 3
    c_3_line = axes.plot( r, c_3(r), label='Carbohydrate' ) # 3r + 4s \geq 6
    # plot objective (standard-form objective -5r - 7s)
    for i, r_i in enumerate(r):
        c[:,i] = - 5 * r_i - 7 * s
    axes.contourf(r, s, c, res, cmap='Oranges', alpha=0.5)
    axes.fill_between(r_cut, c_1(r_cut), color='w')
    step_coords = numpy.array([[0.0, 0.0], [2.0, 0.0]])
    # plot point
    axes.plot(step_coords[step][0], step_coords[step][1], 'ro', markersize=10)
    # label graph
    axes.set_title('Simplex Method on the College Diet Problem, Iteration ' + str(step))
    axes.set_xlabel('Chicken Over Rice')
    axes.set_ylabel('Sub')
    axes.legend(loc=1)
    plt.show()
    display(tableaux[step])
interact(diet_problem, step=(0,1));
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
Bland's Rule
This seemingly arbitrary rule will seem less arbitrary in just a while.
Multiple Optimal Solutions
So, given the graphical intuition we now have for how the simplex method works, do we know if there is ever a time when we would encounter more than 1 optimal solution for a given problem?
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & 2x_1 + x_2 \leq 4 \\
& 10x_1 + 14x_2 \leq 30 \\
\text{and} \quad & x_1, x_2 \geq 0
\end{align}
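Before plotting it, note why this problem is suspicious: the objective coefficients (5, 7) are proportional to those of the second constraint (10, 14), so the objective's level sets are parallel to that constraint's boundary. A quick numerical check with scipy.optimize.linprog (an assumption: scipy is not used elsewhere in this notebook):

```python
import numpy as np
from scipy.optimize import linprog

# The objective (5, 7) is parallel to constraint 2's coefficients (10, 14),
# so every feasible point on the edge 10*x1 + 14*x2 = 30 is optimal.
res = linprog([-5, -7], A_ub=[[2, 1], [10, 14]], b_ub=[4, 30],
              bounds=[(0, None)] * 2)
print(-res.fun)   # 15.0: the solver reports just one optimum on that edge

# Two distinct vertices of the feasible region achieve the same value:
v1 = np.array([0.0, 30.0 / 14.0])         # constraint 2 meets the x2 axis
v2 = np.array([13.0 / 9.0, 10.0 / 9.0])   # constraint 1 meets constraint 2
for v in (v1, v2):
    print(v, 5 * v[0] + 7 * v[1])         # both evaluate to 15
```

Any convex combination of `v1` and `v2` is also optimal, which is exactly the multiple-solutions situation explored below.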
|
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
res = 50
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 4.0 - 2.0*x
c_2 = lambda x: (30.0 - 10.0*x)/14.0
c_1_line = axes.plot( x_1, c_1(x_1), label='Constraint 1' ) # 2x_1 + x_2 \leq 4
c_2_line = axes.plot( x_1, c_2(x_1), label='Constraint 2' ) # 10x_1 + 14x_2 \leq 30
# plot objective
x_2 = numpy.linspace(x_2_min, x_2_max, res)
c = numpy.empty([x_1.size, x_2.size])
for i, x_1_i in enumerate(x_1):
c[:,i] = 5 * x_1_i + 7 * x_2
axes.contourf(x_1, x_2, c, res, cmap='Oranges', alpha=0.5)
# shade feasible region
c_1_bottom = numpy.linspace(0.0, 2.0, res)
c_2_bottom = numpy.linspace(0.0, 3.0, res)
axes.fill_between(c_1_bottom, c_1(c_1_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][0], alpha=0.5)
axes.fill_between(c_2_bottom, c_2(c_2_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][1], alpha=0.5)
# label graph
axes.set_title('How many solutions?')
axes.set_xlabel(r'$x_1$')
axes.set_ylabel(r'$x_2$')
axes.legend(loc=1)
plt.show()
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & 2x_1 + x_2 + s_1 = 4 \\
& 10x_1 + 14x_2 + s_2 = 30 \\
\text{and} \quad & x_1, x_2, s_1, s_2 \geq 0
\end{align}
|
c_1 = numpy.array([[ 2, 1, 1, 0, 4, 's_1']])
c_2 = numpy.array([[10, 14, 0, 1, 30, 's_2']])
z = numpy.array([[-5, -7, 0, 0, 0, '']])
rows= numpy.concatenate((c_1, c_2, z), axis=0)
tableau_multiple = pd.DataFrame(rows, columns=['x_1','x_2','s_1','s_2','value', 'basic_variable'], index=['c_1','c_2','z'])
tableau_multiple.ix[:,0:-1] = tableau_multiple.ix[:,0:-1].astype('float')
tableaux_multiple = dict()
run_simplex(tableaux_multiple, tableau_multiple, force_iterations=3)
step_coords = numpy.array([[0.0, 0.0], [0.0, 2.14286], [tableaux_multiple[2].ix['c_1','value'], tableaux_multiple[2].ix['c_2','value']]])
step_value = numpy.array([tableaux_multiple[0].ix['z','value'], tableaux_multiple[1].ix['z','value'], tableaux_multiple[2].ix['z','value']])
def multiple_problem(step):
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 4.0 - 2.0*x
c_2 = lambda x: (30.0 - 10.0*x)/14.0
c_1_line = axes.plot( x_1, c_1(x_1), label='Constraint 1' ) # 2x_1 + x_2 \leq 4
c_2_line = axes.plot( x_1, c_2(x_1), label='Constraint 2' ) # 10x_1 + 14x_2 \leq 30
# plot objective
x_2 = numpy.linspace(x_2_min, x_2_max, res)
c = numpy.empty([x_1.size, x_2.size])
for i, x_1_i in enumerate(x_1):
c[:,i] = 5 * x_1_i + 7 * x_2
# color map of objective function values
axes.contourf(x_1, x_2, c, res, cmap='Oranges', alpha=0.5)
# shade feasible region
c_1_bottom = numpy.linspace(0.0, 2.0, res)
c_2_bottom = numpy.linspace(0.0, 3.0, res)
axes.fill_between(c_1_bottom, c_1(c_1_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][0], alpha=0.5)
axes.fill_between(c_2_bottom, c_2(c_2_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][1], alpha=0.5)
# plot point
axes.plot(step_coords[step][0], step_coords[step][1], 'ro', markersize=10)
axes.text(step_coords[step][0]+0.1, step_coords[step][1], step_value[step])
# label graph
axes.set_title('How many solutions?')
axes.set_xlabel(r'$x_1$')
axes.set_ylabel(r'$x_2$')
axes.legend(loc=1)
plt.show()
display(tableaux_multiple[step])
interact(multiple_problem, step=(0,2));
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
Unbounded Optima
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 \leq 5 \\
& -\frac{1}{2}x_1 + x_2 \leq 7 \\
\text{and} \quad & x_1, x_2 \geq 0
\end{align}
|
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
x_1_min = 0.0
x_1_max = 10.0
x_2_min = 0.0
x_2_max = 15.0
# res = 100
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 5.0 + x
c_2 = lambda x: 7 + 0.5*x
c_1_line = axes.plot( x_1, c_1(x_1), label='Constraint 1' ) # -x_1 + x_2 \leq 5
c_2_line = axes.plot( x_1, c_2(x_1), label='Constraint 2' ) # -\frac{1}{2}x_1 + x_2 \leq 7
# plot objective
x_2 = numpy.linspace(x_2_min, x_2_max, res)
c = numpy.empty([x_1.size, x_2.size])
for i, x_1_i in enumerate(x_1):
c[:,i] = 5 * x_1_i + 7 * x_2
axes.contourf(x_1, x_2, c, res, cmap='Oranges', alpha=0.5)
# shade feasible region
# c_1_bottom = numpy.linspace(0.0, 2.0, res)
# c_2_bottom = numpy.linspace(0.0, 3.0, res)
axes.fill_between(x_1, c_1(x_1), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][0], alpha=0.5)
axes.fill_between(x_1, c_2(x_1), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][1], alpha=0.5)
# label graph
axes.set_title('Unbounded Optima')
axes.set_xlabel(r'$x_1$')
axes.set_ylabel(r'$x_2$')
axes.legend(loc=2)
plt.show()
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
\begin{align}
\text{maximise} \quad & 5x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 + s_1 = 5 \\
& -\frac{1}{2}x_1 + x_2 + s_2 = 7 \\
\text{and} \quad & x_1, x_2, s_1, s_2 \geq 0
\end{align}
|
c_1 = numpy.array([[ -1, 1, 1, 0, 5, 's_1']])
c_2 = numpy.array([[-0.5, 1, 0, 1, 7, 's_2']])
z = numpy.array([[ -5, -7, 0, 0, 0, '']])
rows= numpy.concatenate((c_1, c_2, z), axis=0)
tableau_unbounded = pd.DataFrame(rows, columns=['x_1','x_2','s_1','s_2','value', 'basic_variable'], index=['c_1','c_2','z'])
tableau_unbounded.ix[:,0:-1] = tableau_unbounded.ix[:,0:-1].astype('float')
tableaux_unbounded = dict()
run_simplex(tableaux_unbounded, tableau_unbounded)
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
We got an error!
ValueError: attempt to get argmin of an empty sequence
Usually, errors are bad things, but in this case, the error is trying to tell us something
In the code:
return tab.ix[ratios[ratios>=0].idxmin(),'basic_variable']
Which is telling us that no non-negative ratio was found! Why is this a problem for us? Let's take a look at our tableau and our equations at this point in time:
|
display(tableau_unbounded)
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
\begin{gather}
z = 83 + 17 s_1 - 24 s_2 \\
x_1 = 4 + 2 s_1 - 2 s_2 \\
x_2 = 9 + s_1 - 2 s_2
\end{gather}
At this point, we want to pick $s_1$ as our entering variable because it has the most negative coefficient, and increasing the value of $s_1$ would most increase the value of $z$.
Usually, increasing the value of $s_1$ would also mean that we have to decrease the value of one of the basic variables to 0 (so that we stay within our feasible region).
Here, however, increasing the value of $s_1$ increases the value of both basic variables, which means the objective function can increase without bound.
So the simplex method can tell us when our problem is unbounded: the negative coefficients in the tableau indicate that we have not yet attained the optimum, yet no positive ratio exists for choosing a departing variable.
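That stopping condition is easy to capture in code. Here is a minimal standalone sketch of the ratio test on plain lists rather than the tableau DataFrame used above (the helper name `ratio_test` is hypothetical):

```python
import numpy as np

def ratio_test(column, values):
    """Return the index of the departing row, or None when the
    problem is unbounded in this direction (no positive pivot)."""
    ratios = [v / c if c > 0 else np.inf for c, v in zip(column, values)]
    best = int(np.argmin(ratios))
    if ratios[best] == np.inf:
        return None  # every pivot candidate is <= 0: unbounded
    return best

# The s_1 column above: both constraint-row entries are negative.
assert ratio_test([-2.0, -1.0], [4.0, 9.0]) is None
# A bounded column picks the row with the smallest ratio.
assert ratio_test([2.0, 1.0], [4.0, 9.0]) == 0
```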
Degeneracy and Cycling
Disclaimer: this example was stolen from here.
\begin{align}
\text{maximise} \quad & 2x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 \leq 3 \\
& x_1 - x_2 \leq 3 \\
& x_2 \leq 2 \\
\text{and} \quad & x_1, x_2 \geq 0
\end{align}
|
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
# res = 100
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 3.0 - x
c_2 = lambda x: -3.0 + x
c_3 = lambda x: 2.0 * numpy.ones(x.size)
c_1_line = axes.plot( x_1, c_1(x_1), label='Constraint 1' ) # x_2 = 3 - x_1
c_2_line = axes.plot( x_1, c_2(x_1), label='Constraint 2' ) # x_2 = x_1 - 3
c_3_line = axes.plot( x_1, c_3(x_1), label='Constraint 3' ) # x_2 = 2
# plot objective
x_2 = numpy.linspace(x_2_min, x_2_max, res)
c = numpy.empty([x_1.size, x_2.size])
for i, x_1_i in enumerate(x_1):
c[:,i] = 2.0 * x_1_i + x_2
axes.contourf(x_1, x_2, c, res, cmap='Oranges', alpha=0.5)
# shade feasible region
c_1_bottom = numpy.linspace(0.0, 3.0, res)
c_2_bottom = numpy.linspace(0.0, 3.0, res)
c_3_bottom = numpy.linspace(0.0, 3.0, res)
axes.fill_between(c_1_bottom, c_1(c_1_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][0], alpha=0.5)
axes.fill_between(c_2_bottom, c_2(c_2_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][1], alpha=0.5)
axes.fill_between(c_3_bottom, c_3(c_3_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][2], alpha=0.5)
# label graph
axes.set_title('Degeneracy and Cycling')
axes.set_xlabel(r'$x_1$')
axes.set_ylabel(r'$x_2$')
axes.legend(loc=1)
plt.show()
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
\begin{align}
\text{maximise} \quad & 2x_1 + 7x_2 \\
\text{subject to} \quad & -x_1 + x_2 + s_1 = 3 \\
& x_1 - x_2 + s_2 = 3 \\
& x_2 + s_3 = 2 \\
\text{and} \quad & \{x_i\}_{i=1}^2, \{s_j\}_{j=1}^3 \geq 0
\end{align}
|
c_1 = numpy.array([[ 3, 1, 1, 0, 0, 6, 's_1']])
c_2 = numpy.array([[ 1, -1, 0, 1, 0, 2, 's_2']])
c_3 = numpy.array([[ 0, 1, 0, 0, 1, 3, 's_3']])
z = numpy.array([[-2, -1, 0, 0, 0, 0, '']])
rows= numpy.concatenate((c_1, c_2, c_3, z), axis=0)
tableau_degenerate = pd.DataFrame(rows, columns=['x_1','x_2','s_1','s_2','s_3','value', 'basic_variable'], index=['c_1','c_2','c_3','z'])
tableau_degenerate.ix[:,0:-1] = tableau_degenerate.ix[:,0:-1].astype('float')
tableaux_degenerate = dict()
run_simplex(tableaux_degenerate, tableau_degenerate)
step_coords = numpy.transpose([numpy.zeros(len(tableaux_degenerate)), 2.0*numpy.ones(len(tableaux_degenerate))])
step_coords[0][1] = 0.0
def degeneracy_plot(step):
fig = plt.figure()
axes = fig.add_subplot(1,1,1)
# define view
x_1_min = 0.0
x_1_max = 3.0
x_2_min = 0.0
x_2_max = 5.0
# res = 100
# plot axes
axes.axhline(0, color='k')
axes.axvline(0, color='k')
# plot constraints
x_1 = numpy.linspace(x_1_min, x_1_max, res)
c_1 = lambda x: 3.0 - x
c_2 = lambda x: -3.0 + x
c_3 = lambda x: 2.0 * numpy.ones(x.size)
c_1_line = axes.plot( x_1, c_1(x_1), label='Constraint 1' ) # x_2 = 3 - x_1
c_2_line = axes.plot( x_1, c_2(x_1), label='Constraint 2' ) # x_2 = x_1 - 3
c_3_line = axes.plot( x_1, c_3(x_1), label='Constraint 3' ) # x_2 = 2
# plot objective
x_2 = numpy.linspace(x_2_min, x_2_max, res)
c = numpy.empty([x_1.size, x_2.size])
for i, x_1_i in enumerate(x_1):
c[:,i] = 2.0 * x_1_i + x_2
axes.contourf(x_1, x_2, c, res, cmap='Oranges', alpha=0.5)
# shade feasible region
c_1_bottom = numpy.linspace(0.0, 3.0, res)
c_2_bottom = numpy.linspace(0.0, 3.0, res)
c_3_bottom = numpy.linspace(0.0, 3.0, res)
axes.fill_between(c_1_bottom, c_1(c_1_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][0], alpha=0.5)
axes.fill_between(c_2_bottom, c_2(c_2_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][1], alpha=0.5)
axes.fill_between(c_3_bottom, c_3(c_3_bottom), color=plt.rcParams['axes.prop_cycle'].by_key()['color'][2], alpha=0.5)
# plot point
axes.plot(step_coords[step][0], step_coords[step][1], 'ro', markersize=10)
# label graph
axes.set_title('Degeneracy and Cycling, Iteration ' + str(step))
axes.set_xlabel(r'$x_1$')
axes.set_ylabel(r'$x_2$')
axes.legend(loc=1)
plt.show()
display(tableaux_degenerate[step])
interact(degeneracy_plot, step=(0,len(tableaux_degenerate)-1))
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
You think you're moving, but you get nowhere. — Stop and Stare, OneRepublic
As its name suggests, degeneracy is when a basic variable (which is supposed to take a non-zero value) ends up with a value of 0, so that the simplex method can pivot to a new basis without actually moving to a new vertex or changing the value of the objective function.
In general, predicting when degeneracy will occur is non-trivial; one source claims that it is NP-complete. You can read more about it here.
Bland's Rule
Choose the lowest-indexed non-basic variable with a negative coefficient as the entering variable
If several rows tie for the smallest non-negative value/pivot ratio, choose the basic variable with the lowest index as the departing variable
Using Bland's Rule, the Simplex Method will never cycle, even if it encounters degeneracy (i.e. it halts on all inputs)
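Both choices can be sketched in a few lines. This is a standalone illustration on plain lists (the function names are hypothetical, and variable indices are just column positions):

```python
def bland_entering(z_row):
    """Bland's rule: the *lowest-indexed* variable with a negative
    reduced cost enters (not necessarily the most negative one)."""
    for j, coeff in enumerate(z_row):
        if coeff < 0:
            return j
    return None  # optimal: no negative coefficients remain

def bland_departing(column, values):
    """Among rows attaining the minimum non-negative ratio,
    pick the lowest-indexed basic variable."""
    candidates = [(v / c, i) for i, (c, v) in enumerate(zip(column, values)) if c > 0]
    if not candidates:
        return None  # unbounded
    best_ratio = min(r for r, _ in candidates)
    return min(i for r, i in candidates if r == best_ratio)

assert bland_entering([-2.0, -7.0, 0.0, 0.0]) == 0   # x_1 enters, not the more negative x_2
assert bland_departing([2.0, 2.0], [4.0, 4.0]) == 0  # tie broken by index
```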
So what is cycling?
|
tableaux_degenerate[1]
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
Without Bland's Rule, one could potentially choose to pivot on $s_2$, which will give us
|
pivot('s_2', 'x_2', tableaux_degenerate[1])
tableaux_degenerate[1]
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
Choosing $x_2$ to pivot back to seems like a good idea, right? Nope.
|
pivot('x_2', 's_2', tableaux_degenerate[1])
tableaux_degenerate[1]
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
Cycling, ladies and gentlemen, aka a slow spiral into insanity.
$\epsilon$-perturbations
Another earlier (and nowadays less popular) method for avoiding degeneracy is by introducing $\epsilon$-perturbations into the problem. Recall that the standard system goes like
\begin{align}
\text{maximise} \quad & c^T x \\
\text{subject to} \quad & Ax = b \\
\text{and} \quad & x \geq 0
\end{align}
With $\epsilon$-perturbations, we will instead solve
\begin{align}
\text{maximise} \quad & c^T x \\
\text{subject to} \quad & Ax = b + \epsilon \\
\text{and} \quad & x \geq 0
\end{align}
which will give us a close enough answer to the original problem, and help us avoid the problem with the 0's. This kind of happens automatically as a bonus if you're running the simplex algorithm on a computer; as the program runs, errors from truncation, etc. build up, and you eventually get out of the cycle because your computer is doing floating point arithmetic.
Which is just about the one good thing about floating point arithmetic, I guess.
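The perturbation itself is a one-liner. Here is a sketch under the common convention of geometrically decreasing epsilons (the values are illustrative, not tuned):

```python
import numpy as np

def perturb_rhs(b, eps=1e-3):
    """Replace b with b + (eps, eps**2, ...): each right-hand side moves
    off its original value by a distinct amount, so the ties in the
    ratio test that cause degeneracy disappear."""
    b = np.asarray(b, dtype=float)
    return b + eps ** np.arange(1, len(b) + 1)

b0 = np.array([3.0, 3.0, 2.0])
b = perturb_rhs(b0)
assert (b > b0).all()          # every constraint nudged outward
assert len(set(b - b0)) == 3   # each by a different power of epsilon
```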
Time Complexity of the Simplex Method
The Klee-Minty Cube
\begin{align}
\text{maximise} \quad & 100x_1 + 10x_2 + x_3 \\
\text{subject to} \quad & x_1 \leq 1 \\
& 20x_1 + x_2 \leq 100 \\
& 200x_1 + 20x_2 + x_3 \leq 10000 \\
\text{and} \quad & x_1, x_2, x_3 \geq 0
\end{align}
|
c_1 = numpy.array([[ 1, 0, 0, 1, 0, 0, 1, 's_1']])
c_2 = numpy.array([[ 20, 1, 0, 0, 1, 0, 100, 's_2']])
c_3 = numpy.array([[ 200, 20, 1, 0, 0, 1, 10000, 's_3']])
z = numpy.array([[-100, -10, -1, 0, 0, 0, 0, '']])
rows= numpy.concatenate((c_1, c_2, c_3, z), axis=0)
tableau_klee_minty = pd.DataFrame(rows, columns=['x_1','x_2', 'x_3','s_1','s_2','s_3','value', 'basic_variable'], index=['c_1','c_2','c_3','z'])
tableau_klee_minty.ix[:,0:-1] = tableau_klee_minty.ix[:,0:-1].astype('float')
tableaux_klee_minty = dict()
run_simplex(tableaux_klee_minty, tableau_klee_minty)
|
.ipynb_checkpoints/lpsm-checkpoint.ipynb
|
constellationcolon/simplexity
|
mit
|
The rows contain the electricity used in each hour over a one-year period.
Each row gives the usage for the hour starting at the specified time, so 1/1/13 0:00 indicates the usage for the first hour of January 1st.
Working with datetime data
Let's take a closer look at our data:
|
nrg.head()
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
Both pandas and NumPy use the concept of dtypes (data types), and if no argument is specified, date_time will take on the generic object dtype.
|
nrg.dtypes
# https://docs.python.org/3/library/functions.html#type
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iat.html
type(nrg.iat[0,0])
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
This will be an issue with any column that can't neatly fit into a single data type.
Working with dates as strings is also an inefficient use of memory and programmer time (not to mention patience).
This exercise will work with time series data, and the date_time column will be formatted as an array of datetime objects called a pandas.Timestamp.
|
nrg['date_time'] = pd.to_datetime(nrg['date_time'])
# https://stackoverflow.com/questions/29206612/difference-between-data-type-datetime64ns-and-m8ns
nrg['date_time'].dtype
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
If you're curious about alternatives to the code above, check out pandas.PeriodIndex, which can store ordinal values indicating regular time periods.
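As a hedged sketch of that alternative, on a small synthetic range rather than the nrg data:

```python
import pandas as pd

# Hourly periods for the first day of 2013. Unlike a Timestamp (an
# instant), each Period represents a whole interval of time.
periods = pd.period_range(start='2013-01-01 00:00', periods=24, freq='H')
print(periods[0])            # the first hour-long period
print(periods[0].end_time)   # the last instant inside that hour
```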
We now have a pandas.DataFrame called nrg that contains the data from our .csv file.
Notice how the time is displayed differently in the date_time column.
|
nrg.head()
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
Time for a timing decorator
The code above is pretty straightforward, but how fast does it run?
Let's find out by using a timing decorator called @timeit (an homage to Python's timeit).
This decorator behaves like timeit.repeat(), but it also allows you to return the result of the function itself as well as get the average runtime from multiple trials.
When you create a function and place the @timeit decorator above it, the function will be timed every time it is called.
Keep in mind that the decorator runs an outer and an inner loop.
|
from timer import timeit
@timeit(repeat=3, number=10)
def convert_with_format(nrg, column_name):
return pd.to_datetime(nrg[column_name], format='%d/%m/%y %H:%M')
nrg['date_time'] = convert_with_format(nrg, 'date_time')
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
One easily overlooked detail is that the datetimes in the energy_consumption.csv file are not in ISO 8601 format.
You need YYYY-MM-DD HH:MM.
If you don’t specify a format, Pandas will use the dateutil package to convert each string to a date.
Conversely, if the raw datetime data is already in ISO 8601 format, pandas can immediately take a fast route to parsing the dates.
This is one reason why being explicit about the format is so beneficial here.
Another option is to pass the infer_datetime_format=True parameter, but it generally pays to be explicit.
Also, remember that pandas' read_csv() method allows you to parse dates as part of the file I/O using the parse_dates, infer_datetime_format, and date_parser parameters.
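A sketch of that I/O-time parsing, using a hypothetical stand-in for the file (these rows are in ISO 8601 format, so the fast parsing route applies):

```python
import io
import pandas as pd

# A stand-in for energy_consumption.csv with ISO-formatted datetimes.
raw = io.StringIO(
    "date_time,energy_kwh\n"
    "2013-01-01 00:00,0.586\n"
    "2013-01-01 01:00,0.580\n"
)
# parse_dates converts the column during file I/O, so no separate
# to_datetime() pass is needed afterwards.
sample = pd.read_csv(raw, parse_dates=['date_time'])
print(sample.dtypes)
```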
Simple Looping Over Pandas Data
Now that dates and times are in a tidy format, we can begin calculating electricity costs.
Cost varies by hour, so a cost factor is conditionally applied for each hour of the day:
| Usage | Cents per kWh | Time Range |
|-------------|----------------|----------------|
| Peak | 28 | 17:00 to 24:00 |
| Shoulder | 20 | 7:00 to 17:00 |
| Off-Peak | 12 | 0:00 to 7:00 |
If costs were a flat rate of 28 cents per kilowatt hour every hour, we could just do this:
|
nrg['cost_cents'] = nrg['energy_kwh'] * 28; nrg.head()
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
However, our hourly costs depend on the time of day.
If you use a loop to do the conditional calculation, you are not using pandas the way it was intended.
For the rest of this tutorial, you'll start with a sub-optimal solution and work your way up to a Pythonic approach that leverages the full power of pandas.
Take a look at a loop approach and see how it performs using our timing decorator.
|
# Create a function to apply the appropriate rate to the given hour:
def apply_rate(kwh, hour):
"""
Calculates the cost of electricity for a given hour.
"""
if 0 <= hour < 7:
rate = 12
elif 7 <= hour < 17:
rate = 20
elif 17 <= hour < 24:
rate = 28
else:
# +1 for error handling:
raise ValueError(f'Invalid datetime entry: {hour}')
return rate * kwh
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
Now for a computationally expensive and non-Pythonic loop:
|
# Not the best way:
@timeit(repeat=2, number = 10)
def apply_rate_loop(nrg):
"""
Calculate the costs using a loop, and modify `nrg` dataframe in place.
"""
energy_cost_list = []
for i in range(len(nrg)):
# Get electricity used and the corresponding rate.
energy_used = nrg.iloc[i]['energy_kwh']
hour = nrg.iloc[i]['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_loop(nrg)
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
You can consider the above an "antipattern" in pandas for several reasons.
First, it initializes a list in which the outputs will be recorded.
Second, it uses the opaque object range(len(nrg)) to loop through nrg, then applies apply_rate() and appends each result to the list used to build the new DataFrame column.
Third, chained indexing with nrg.iloc[i]['date_time'] may lead to unintended results.
Each of these increases the time cost of the calculations.
On my machine, this loop took about 3 seconds for 8760 rows of data.
Next, you’ll look at some improved solutions for iteration over Pandas structures.
Looping with .itertuples() and .iterrows()
Instead of looping through a range of objects, you can use generator methods that yield one row at a time.
.itertuples() yields a namedtuple() for each row, with the row’s index value as the first element of the tuple.
A namedtuple() is a data structure from Python’s collections module that behaves like a Python tuple but has fields accessible by attribute lookup.
.iterrows() yields pairs (tuples) of (index, Series) for each row in the DataFrame.
While .itertuples() tends to be a bit faster, let’s focus on pandas and use .iterrows() in this example.
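For comparison, the .itertuples() version of the same loop looks like this (a sketch on a tiny stand-in frame with simplified two-tier rates, not the full rate table):

```python
import pandas as pd

# A tiny stand-in frame; real code would iterate over nrg instead.
df = pd.DataFrame({'energy_kwh': [0.5, 1.0], 'hour': [3, 18]})

costs = []
for row in df.itertuples():   # row is a namedtuple; fields via attributes
    rate = 12 if row.hour < 7 else 28   # simplified two-tier rates
    costs.append(rate * row.energy_kwh)

df['cost_cents'] = costs
```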
|
@timeit(repeat=2, number=10)
def apply_rate_iterrows(nrg):
energy_cost_list = []
for index, row in nrg.iterrows():
energy_used = row['energy_kwh']
hour = row['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_iterrows(nrg)
|
NevadaDashboard/pythonic_pandas.ipynb
|
bgroveben/python3_machine_learning_projects
|
mit
|
Clean Raw Annotations
Load raw annotations
|
"""
# v4_annotated
user_blocked = [
'annotated_onion_layer_5_rows_0_to_5000_raters_20',
'annotated_onion_layer_5_rows_0_to_10000',
'annotated_onion_layer_5_rows_0_to_10000_raters_3',
'annotated_onion_layer_5_rows_10000_to_50526_raters_10',
'annotated_onion_layer_10_rows_0_to_1000',
'annotated_onion_layer_20_rows_0_to_1000',
'annotated_onion_layer_30_rows_0_to_1000',
]
user_random = [
'annotated_random_data_rows_0_to_5000_raters_20',
'annotated_random_data_rows_5000_to_10000',
'annotated_random_data_rows_5000_to_10000_raters_3',
'annotated_random_data_rows_10000_to_20000_raters_10',
]
article_blocked = ['article_onion_layer_5_all_rows_raters_10',]
article_random = ['article_random_data_all_rows_raters_10',]
"""
user_blocked = [
'user_blocked',
'user_blocked_2',
'user_blocked_3',
'user_blocked_4',
'user_blocked_layer_10',
'user_blocked_layer_20',
'user_blocked_layer_30',
]
user_random = [
'user_random',
'user_random_2',
'user_random_3',
'user_random_4',
'user_random_extra_baselines',
]
article_blocked = [ 'article_blocked',
'article_blocked_layer_5_extra_baselines' ]
article_random = ['article_random',
'article_random_extra_baselines']
files = {
'user': {'blocked': user_blocked, 'random': user_random},
'article': {'blocked': article_blocked, 'random': article_random}
}
dfs = []
for ns, d in files.items():
for sample, fnames in d.items():  # renamed from `files` to avoid shadowing the outer dict
for f in fnames:
df = pd.read_csv('../../data/annotations/raw/%s/%s.csv' % (ns,f))
df['src'] = f
df['ns'] = ns
df['sample'] = sample
dfs.append(df)
df = pd.concat(dfs)
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Make random and blocked samples disjoint
|
df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts().value_counts()
df.index = df.rev_id
df['sample_count'] = df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts()
df['sample_count'].value_counts()
# just set them all to random
df.loc[df['sample_count'] == 2, 'sample'] = 'random'
df.drop_duplicates(subset=['rev_id', 'sample'])['rev_id'].value_counts().value_counts()
del df['sample_count']
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Tidy is_harassment_or_attack column
|
df = tidy_labels(df)
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Remap aggression score
|
df['aggression'] = df['aggression_score'].apply(map_aggression_score_to_2class)
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Remove answers to test questions
|
df = df.query('_golden == False')
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Remove annotations where revision could not be read
|
# remove all annotations for a revisions where more than 50% of annotators for that revision could not read the comment
df = remove_na(df)
print('# annotations: ', df.shape[0])
# remove all annotations where the annotator could not read the comment
df = df.query('na==False')
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Examine aggression_score or is_harassment_or_attack input
|
df['aggression_score'].value_counts(dropna=False)
df['is_harassment_or_attack'].value_counts(dropna=False)
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Drop NAs in aggression_score or is_harassment_or_attack input
|
df = df.dropna(subset = ['aggression_score', 'is_harassment_or_attack'])
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Remove ambivalent is_harassment_or_attack annotations
An annotation is ambivalent if it was labeled as both an attack and not an attack
|
# remove all annotations from users who are ambivalent in 10% or more of revisions
# we consider these users unreliable
def ambivalent(s):
return 'not_attack' in s and s!= 'not_attack'
df['ambivalent'] = df['is_harassment_or_attack'].apply(ambivalent)
non_ambivalent_workers = df.groupby('_worker_id', as_index = False)['ambivalent'].mean().query('ambivalent < 0.1')
df = df.merge(non_ambivalent_workers[['_worker_id']], how = 'inner', on = '_worker_id')
print('# annotations: ', df.shape[0])
# remove all other ambivalent annotations
df = df.query('ambivalent==False')
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Make sure that each rev was only annotated by the same worker once
|
df.groupby(['rev_id', '_worker_id']).size().value_counts()
df = df.drop_duplicates(subset = ['rev_id', '_worker_id'])
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Filter out annotations for revisions with duplicated diff content
|
comments = df.drop_duplicates(subset = ['rev_id'])
print(comments.shape[0])
u_comments = comments.drop_duplicates(subset = ['clean_diff'])
print(u_comments.shape[0])
comments[comments.duplicated(subset = ['clean_diff'])].head(5)
df = df.merge(u_comments[['rev_id']], how = 'inner', on = 'rev_id')
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Check that labels are not None
|
df['recipient'].value_counts(dropna=False)
df['attack'].value_counts(dropna=False)
df['aggression'].value_counts(dropna=False)
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Remove annotations from all revisions that were annotated less than 8 times
|
counts = df['rev_id'].value_counts().to_frame()
counts.columns = ['n']
counts['rev_id'] = counts.index
counts.shape
counts['n'].value_counts().head()
counts_enough = counts.query("n>=8")
counts_enough.shape
df = df.merge(counts_enough[['rev_id']], how = 'inner', on = 'rev_id')
print('# annotations: ', df.shape[0])
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Discard nuisance columns
|
df.columns
cols = ['rev_id', '_worker_id', 'ns', 'sample', 'src','clean_diff', 'diff', 'insert_only', 'page_id',
'page_title', 'rev_comment', 'rev_timestamp',
'user_id', 'user_text', 'not_attack', 'other', 'quoting', 'recipient',
'third_party', 'attack', 'aggression', 'aggression_score']
df = df[cols]
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
Summary Stats
|
df.groupby(['ns', 'sample']).size()
df.to_csv('../../data/annotations/clean/annotations.tsv', index=False, sep='\t')
pd.read_csv('../../data/annotations/clean/annotations.tsv', sep='\t').shape
|
src/modeling/Clean Annotations.ipynb
|
ewulczyn/talk_page_abuse
|
apache-2.0
|
2) Depth-01 term, GO:0019012 (virion) has dcnt=0 through is_a relationships (default)
GO:0019012, virion, has no GO terms below it through the is_a relationship, so the default value of dcnt will be zero, even though it is very high in the DAG at depth=01.
|
virion = 'GO:0019012'
from goatools.gosubdag.gosubdag import GoSubDag
gosubdag_r0 = GoSubDag(go_leafs, godag)
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
Notice that dcnt=0 for GO:0019012, virion, even though it is very high in the DAG hierarchy (depth=1). This is because there are no GO IDs under GO:0019012 (virion) using the is_a relationship.
|
nt_virion = gosubdag_r0.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
3) Depth-01 term, GO:0019012 (virion) dcnt value is higher using all relationships
Load all relationships into GoSubDag using relationships=True
|
gosubdag_r1 = GoSubDag(go_leafs, godag, relationships=True)
nt_virion = gosubdag_r1.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
4) Depth-01 term, GO:0019012 (virion) dcnt value is higher using part_of relationships
Load all relationships into GoSubDag using relationships={'part_of'}
|
gosubdag_partof = GoSubDag(go_leafs, godag, relationships={'part_of'})
nt_virion = gosubdag_partof.go2nt[virion]
print(nt_virion)
print('THE VALUE OF dcnt IS: {dcnt}'.format(dcnt=nt_virion.dcnt))
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
5) Descendants under GO:0019012 (virion)
|
virion_descendants = gosubdag_partof.rcntobj.go2descendants[virion]
print('{N} descendants of virion were found'.format(N=len(virion_descendants)))
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
6) Plot descendants of virion
|
from goatools.gosubdag.plot.gosubdag_plot import GoSubDagPlot
# Limit plot of descendants to get a smaller plot
virion_capsid_fiber = {'GO:0098033', 'GO:0098032'}
nts = gosubdag_partof.prt_goids(virion_capsid_fiber,
'{NS} {GO} dcnt({dcnt}) D-{depth:02} {GO_name}')
# Limit plot size by choosing just two virion descendants
# Get a subset containing only a couple virion descendants and their ancestors
pltdag = GoSubDag(virion_capsid_fiber, godag, relationships={'part_of'})
pltobj = GoSubDagPlot(pltdag)
pltobj.plt_dag('virion_capsid_fiber.png')
|
notebooks/relationships_change_dcnt_values.ipynb
|
tanghaibao/goatools
|
bsd-2-clause
|
Download
|
resp = requests.get('https://www.indiegogo.com/explore?filter_title=dayton')
|
scraping.ipynb
|
centaurustech/crowdfunding937
|
mit
|
Parse
The BeautifulSoup library converts a raw string of HTML into a highly searchable object
|
soup = bs4.BeautifulSoup(resp.text)
|
scraping.ipynb
|
centaurustech/crowdfunding937
|
mit
|
Inspecting the HTML, it looks like each project is described in a div of class i-project-card. For example:
|
proj0 = soup.find_all('div', class_='i-project-card')[0]
proj0
|
scraping.ipynb
|
centaurustech/crowdfunding937
|
mit
|
We may want to drill into each individual project page for more details.
|
detail_link = proj0.find('a', class_='i-project')
detail_link['href']
detail_url = 'https://www.indiegogo.com' + detail_link['href']
detail_url
detail_resp = requests.get(detail_url)
detail_soup = bs4.BeautifulSoup(detail_resp.text)
detail_soup
|
scraping.ipynb
|
centaurustech/crowdfunding937
|
mit
|
Kickstarter
There's an undocumented API that can give us JSON.
|
kicks_raw = requests.get('http://www.kickstarter.com/projects/search.json?search=&term=dayton')
import json
data = json.loads(kicks_raw.text)
data['projects'][0]
|
scraping.ipynb
|
centaurustech/crowdfunding937
|
mit
|
Regression Project
We have learned about regression and how to build regression models using both scikit-learn and TensorFlow. Now we'll build a regression model from start to finish. We will acquire data and perform exploratory data analysis and data preprocessing. We'll build and tune our model and measure how well our model generalizes.
Framing the Problem
Overview
Friendly Insurance, Inc. has requested we do a study for them to help predict the cost of their policyholders. They have provided us with sample anonymous data about some of their policyholders for the previous year. The dataset includes the following information:
Column | Description
---------|-------------
age | age of primary beneficiary
sex | gender of the primary beneficiary (male or female)
bmi | body mass index of the primary beneficiary
children | number of children covered by the plan
smoker | is the primary beneficiary a smoker (yes or no)
region | geographic region of the beneficiaries (northeast, southeast, southwest, or northwest)
charges | costs to the insurance company
We have been asked to create a model that, given the first six columns, can predict the charges the insurance company might incur.
The company wants to see how accurate we can get with our predictions. If we can make a case for our model, they will provide us with the full dataset of all of their customers for the last ten years to see if we can improve on our model and possibly even predict cost per client year over year.
Exercise 1: Thinking About the Data
Before we dive in to looking closely at the data, let's think about the problem space and the dataset. Consider the questions below.
Question 1
Is this problem actually a good fit for machine learning? Why or why not?
Student Solution
Please Put Your Answer Here
Question 2
If we do build the machine learning model, what biases might exist in the data? Is there anything that might cause the model to have trouble generalizing to other data? If so, how might we make the model more resilient?
Student Solution
Please Put Your Answer Here
Question 3
We have been asked to take input features about people who are insured and predict costs, but we haven't been given much information about how these predictions will be used. What effect might our predictions have on decisions made by the insurance company? How might this affect the insured?
Student Solution
Please Put Your Answer Here
Exploratory Data Analysis
Now that we have considered the societal implications of our model, we can start looking at the data to get a better understanding of what we are working with.
The data we'll be using for this project can be found on Kaggle. Upload your kaggle.json file and run the code block below.
|
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
|
content/03_regression/09_regression_project/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Exercise 2: EDA and Data Preprocessing
Using as many code and text blocks as you need, download the dataset, explore it, and do any model-independent preprocessing that you think is necessary. Feel free to use any of the tools for data analysis and visualization that we have covered in this course so far. Be sure to do individual column analysis and cross-column analysis. Explain your findings.
Student Solution
|
# Add code and text blocks to explore the data and explain your work
|
content/03_regression/09_regression_project/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
Modeling
Now that we understand our data a little better, we can build a model. Since 'charges' is a continuous variable, we'll use a regression model to predict it.
Exercise 3: Modeling
Using as many code and text blocks as you need, build a model that can predict 'charges' given the features that we have available. To do this, feel free to use any of the toolkits and models that we have explored so far.
You'll be expected to:
1. Prepare the data for the model (or models) that you choose. Remember that some of the data is categorical. In order for your model to use it, you'll need to convert the data to some numeric representation.
1. Build a model or models and adjust parameters.
1. Validate your model with holdout data. Hold out some percentage of your data (10-20%), and use it as a final validation of your model. Print the root mean squared error. We were able to get an RMSE between 3500 and 4000, but your final RMSE will likely be different.
Student Solution
|
# Add code and text blocks to build and validate a model and explain your work
|
content/03_regression/09_regression_project/colab.ipynb
|
google/applied-machine-learning-intensive
|
apache-2.0
|
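The categorical-encoding and holdout-validation steps described in the exercise above can be sketched as follows. The tiny synthetic frame (reusing the column names from the brief) and the plain least-squares fit are illustrative stand-ins, not the real insurance data or a tuned model:

```python
import numpy as np
import pandas as pd

# Tiny synthetic stand-in for the insurance data, using the columns from the brief.
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "sex": rng.choice(["male", "female"], n),
    "bmi": rng.normal(28, 5, n),
    "children": rng.integers(0, 4, n),
    "smoker": rng.choice(["yes", "no"], n),
    "region": rng.choice(["northeast", "southeast", "southwest", "northwest"], n),
})
df["charges"] = 250 * df["age"] + 20000 * (df["smoker"] == "yes") + rng.normal(0, 1000, n)

# One-hot encode the categorical columns so a numeric model can use them.
X = pd.get_dummies(df.drop(columns="charges"), drop_first=True).astype(float)
y = df["charges"].to_numpy()

# Hold out roughly 20% of the rows for final validation.
test = rng.random(n) < 0.2
X_tr, X_te = X[~test].to_numpy(), X[test].to_numpy()
y_tr, y_te = y[~test], y[test]

# Ordinary least squares via lstsq, with a bias column prepended.
A_tr = np.c_[np.ones(len(X_tr)), X_tr]
w, *_ = np.linalg.lstsq(A_tr, y_tr, rcond=None)
pred = np.c_[np.ones(len(X_te)), X_te] @ w
rmse = np.sqrt(np.mean((pred - y_te) ** 2))
print(f"holdout RMSE: {rmse:.1f}")
```

Your actual solution will likely use a richer toolkit (scikit-learn or TensorFlow), but the shape of the work — encode, split, fit, score on held-out data — is the same.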
The majority of machine learning algorithms assume that they are operating on a fully observed data set. In contrast, a great many real-world data sets are missing some values. Sometimes values are missing at random (MAR), meaning there is no important pattern to the missingness, and sometimes the missingness itself can be interpreted as a feature. For example, in the Titanic data set, males were more likely to have missing records than females were, and those without children were more likely to have missing records.
A common approach to bridging this gap is to impute the missing values and then treat the entire data set as observed. For continuous features this is commonly done by replacing the missing values with the mean or median of the column. For categorical variables it is commonly done by replacing the missing values with the most common category observed in that column. While these techniques are simple and allow almost any ML algorithm to be run, they are frequently suboptimal. Consider the following simple example of continuous data that is bimodally distributed:
|
import numpy
import matplotlib.pyplot as plt

X = numpy.concatenate([numpy.random.normal(0, 1, size=(1000)), numpy.random.normal(6, 1, size=(1250))])

plt.title("Bimodal Distribution", fontsize=14)
plt.hist(X, bins=numpy.arange(-3, 9, 0.1), alpha=0.6)
plt.ylabel("Count", fontsize=14)
plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14)
plt.xticks(fontsize=12)
plt.vlines(numpy.mean(X), 0, 80, color='r', label="Mean")
plt.vlines(numpy.median(X), 0, 80, color='b', label="Median")
plt.legend(fontsize=14)
plt.show()
|
tutorials/old/Tutorial_9_Missing_Values.ipynb
|
jmschrei/pomegranate
|
mit
|
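A minimal sketch of the common imputation strategies just described — column mean for continuous features, most frequent category for categorical ones — using pandas; the small frame here is made up purely for illustration:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "height": [1.70, np.nan, 1.82, 1.65, np.nan],
    "color": ["red", "blue", None, "blue", "blue"],
})

# Continuous: replace missing values with the column mean (or median).
df["height"] = df["height"].fillna(df["height"].mean())

# Categorical: replace missing values with the most common category.
df["color"] = df["color"].fillna(df["color"].mode()[0])

print(df)
```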
Even when the data is all drawn from a single Gaussian distribution, mean imputation is not a great idea. We can see that the standard deviation of the learned distribution is significantly smaller than the true standard deviation (of 1), whereas if the missing data is ignored the estimate is closer.
This may be intuitive for a single variable. However, the same concept — collecting sufficient statistics only from the values that are present and ignoring the missing ones — extends to much more complicated, multivariate models. Let's take a look at how well one can estimate the covariance matrix of a multivariate Gaussian distribution using these two strategies.
|
n, d, steps = 1000, 10, 50

diffs1 = numpy.zeros(int(steps*0.86))
diffs2 = numpy.zeros(int(steps*0.86))

X = numpy.random.normal(6, 3, size=(n, d))

for k, size in enumerate(range(0, int(n*d*0.86), n*d // steps)):
    idxs = numpy.random.choice(numpy.arange(n*d), replace=False, size=size)
    i, j = idxs // d, idxs % d

    cov_true = numpy.cov(X, rowvar=False, bias=True)

    X_nan = X.copy()
    X_nan[i, j] = numpy.nan

    # Mean imputation: fill each column's missing entries with the column mean.
    X_mean = X_nan.copy()
    for col in range(d):
        mask = numpy.isnan(X_mean[:, col])
        X_mean[mask, col] = X_mean[~mask, col].mean()

    diffs1[k] = numpy.abs(numpy.cov(X_mean, rowvar=False, bias=True) - cov_true).sum()

    # Ignoring: fit only to the observed values.
    dist = MultivariateGaussianDistribution.from_samples(X_nan)
    diffs2[k] = numpy.abs(numpy.array(dist.parameters[1]) - cov_true).sum()

plt.title("Error in Multivariate Gaussian Covariance Matrix", fontsize=16)
plt.plot(diffs1, label="Mean")
plt.plot(diffs2, label="Ignore")
plt.xlabel("Percentage Missing", fontsize=14)
plt.ylabel("L1 Errors", fontsize=14)
plt.xticks(range(0, 51, 10), numpy.arange(0, 5001, 1000) / 5000.)
plt.xlim(0, 50)
plt.legend(fontsize=14)
plt.show()
|
tutorials/old/Tutorial_9_Missing_Values.ipynb
|
jmschrei/pomegranate
|
mit
|
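The variance deflation seen above is easy to reproduce with plain numpy: estimating the standard deviation from only the observed entries recovers the true value, while mean imputation shrinks it (a minimal sketch; pomegranate itself is not required):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 10_000)
x[rng.random(10_000) < 0.5] = np.nan   # hide roughly half the values

observed = x[~np.isnan(x)]
imputed = np.where(np.isnan(x), observed.mean(), x)

std_ignore = observed.std()   # close to the true value of 1
std_impute = imputed.std()    # deflated: imputed points contribute no spread
print(std_ignore, std_impute)
```

With about half the values replaced by the mean, the imputed standard deviation collapses toward sqrt(0.5) of the true value.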
In even the simplest case of Gaussian distributed data with a diagonal covariance matrix, it is more accurate to use the ignoring strategy than to impute the mean. When the data set is mostly unobserved, the mean imputation strategy tends to do better, but only because there is so little data for the ignoring strategy to actually train on. The deflation of the variance benefits mean imputation here because all of the off-diagonal elements should be 0, but are likely to be artificially high when there are only a few examples of pairs of variables co-occurring in the data set. This weakness also makes the ignoring strategy more likely to encounter linear algebra errors, such as a non-invertible covariance matrix.
This long introduction is a way of saying that pomegranate uses a strategy of ignoring missing values instead of attempting to impute them, followed by fitting to the newly complete data set. There are other imputation strategies, such as those based on EM, that would be a natural fit with the types of probabilistic models implemented in pomegranate. While those have not yet been added, they would be a good addition that I hope to get to this year.
Let's now take a look at how to use missing values in some pomegranate models!
1. Distributions
We've seen some examples of fitting distributions to missing data. For univariate distributions, the missing values are simply ignored when fitting to the data.
|
X = numpy.random.randn(100)
X_nan = numpy.concatenate([X, [numpy.nan]*100])

print("Fitting only to observed values:")
print(NormalDistribution.from_samples(X))
print()
print("Fitting to observed and missing values:")
print(NormalDistribution.from_samples(X_nan))
|
tutorials/old/Tutorial_9_Missing_Values.ipynb
|
jmschrei/pomegranate
|
mit
|