| markdown | code | path | repo_name | license |
|---|---|---|---|---|
Download Model | opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd()) | research/object_detection/object_detection_tutorial.ipynb | jiaphuan/models | apache-2.0 |
Load a (frozen) Tensorflow model into memory. | detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='') | research/object_detection/object_detection_tutorial.ipynb | jiaphuan/models | apache-2.0 |
Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine. | label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories) | research/object_detection/object_detection_tutorial.ipynb | jiaphuan/models | apache-2.0 |
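For illustration, any plain Python dictionary with this structure works in place of the utility output; a minimal hypothetical example (the ids and names below are made up, not the actual label map):

```python
# Hypothetical stand-in for category_index: integer class ids mapped to display names.
category_index_example = {
    1: {'id': 1, 'name': 'person'},
    5: {'id': 5, 'name': 'airplane'},
}
print(category_index_example[5]['name'])  # -> airplane
```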
Helper code | def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8) | research/object_detection/object_detection_tutorial.ipynb | jiaphuan/models | apache-2.0 |
Detection | # For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your own images, just add their paths to TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'image{}.jpg'.format(i)) for i in ran... | research/object_detection/object_detection_tutorial.ipynb | jiaphuan/models | apache-2.0 |
Masked soft attention | def sequence_mask(X, valid_len, value=0):
"""Mask irrelevant entries in sequences."""
maxlen = X.size(1)
mask = torch.arange((maxlen), dtype=torch.float32, device=X.device)[None, :] < valid_len[:, None]
X[~mask] = value
return X
def masked_softmax(X, valid_lens):
"""Perform softmax operation b... | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
Example. Batch size 2, feature size 2, sequence length 4.
The valid lengths are 2,3. So the output has size (2,2,4),
but the length dimension is full of 0s in the invalid locations. | Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([2, 3]))
print(Y) | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
Example. Batch size 2, feature size 2, sequence length 4.
The valid lengths are (1,3) for batch 1, and (2,4) for batch 2. | Y = masked_softmax(torch.rand(2, 2, 4), torch.tensor([[1, 3], [2, 4]]))
print(Y) | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
Additive attention
$$
\alpha(q,k) = w_v^\top \tanh(W_q q + W_k k)
$$ | class AdditiveAttention(nn.Module):
def __init__(self, key_size, query_size, num_hiddens, dropout, **kwargs):
super(AdditiveAttention, self).__init__(**kwargs)
self.W_k = nn.Linear(key_size, num_hiddens, bias=False)
self.W_q = nn.Linear(query_size, num_hiddens, bias=False)
self.w_v =... | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
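The AdditiveAttention class is truncated above; as a self-contained sketch of just the scoring formula $\alpha(q,k) = w_v^\top \tanh(W_q q + W_k k)$ with plain tensors (shapes and values are purely illustrative):

```python
import torch

batch, num_q, num_k, q_dim, k_dim, h = 2, 1, 10, 20, 2, 8
queries = torch.normal(0, 1, (batch, num_q, q_dim))
keys = torch.ones((batch, num_k, k_dim))
W_q = torch.normal(0, 1, (q_dim, h))
W_k = torch.normal(0, 1, (k_dim, h))
w_v = torch.normal(0, 1, (h, 1))

# broadcast queries against keys: (batch, num_q, num_k, h)
features = torch.tanh((queries @ W_q).unsqueeze(2) + (keys @ W_k).unsqueeze(1))
scores = (features @ w_v).squeeze(-1)  # (batch, num_q, num_k)
print(scores.shape)                    # torch.Size([2, 1, 10])
```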
The heatmap is uniform across the keys, since the keys are all 1s.
However, the support is truncated to the valid length. | def show_heatmaps(matrices, xlabel, ylabel, titles=None, figsize=(2.5, 2.5), cmap="Reds"):
display.set_matplotlib_formats("svg")
num_rows, num_cols = matrices.shape[0], matrices.shape[1]
fig, axes = plt.subplots(num_rows, num_cols, figsize=figsize, sharex=True, sharey=True, squeeze=False)
for i, (row_ax... | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
Dot-product attention
$$
A = \text{softmax}(Q K^T/\sqrt{d}) V
$$ | class DotProductAttention(nn.Module):
"""Scaled dot product attention."""
def __init__(self, dropout, **kwargs):
super(DotProductAttention, self).__init__(**kwargs)
self.dropout = nn.Dropout(dropout)
# Shape of `queries`: (`batch_size`, no. of queries, `d`)
# Shape of `keys`: (`batch_s... | notebooks/book1/15/attention_torch.ipynb | probml/pyprobml | mit |
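As with the additive case, here is a minimal, self-contained sketch of the scaled dot-product formula $A = \text{softmax}(QK^T/\sqrt{d})V$ on toy tensors (not the class above):

```python
import torch

batch, num_q, num_kv, d, v_dim = 2, 1, 10, 2, 4
Q = torch.normal(0, 1, (batch, num_q, d))
K = torch.ones((batch, num_kv, d))
V = torch.normal(0, 1, (batch, num_kv, v_dim))

scores = torch.bmm(Q, K.transpose(1, 2)) / (d ** 0.5)  # (batch, num_q, num_kv)
weights = torch.softmax(scores, dim=-1)
output = torch.bmm(weights, V)                          # (batch, num_q, v_dim)
print(output.shape)  # torch.Size([2, 1, 4])
```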
SSFS files
We'll read the key and data files used in the test case suite and use them as example: | with open("../../tests/data/ssfs_hdb_dat", "rb") as fd:
data = fd.read()
ssfs_data = SAPSSFSData(data)
with open("../../tests/data/ssfs_hdb_key", "rb") as fd:
key = fd.read()
ssfs_key = SAPSSFSKey(key) | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
SSFS files are comprised of the following main structures:
SSFS Data | ssfs_data.show() | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
As can be observed, a SSFS Data file contains multiple records with different key/value pairs, as well as associated meta data.
Some records contain values stored in plaintext, while others are stored in an encrypted fashion. We'll see a password record, which is stored encrypted: | ssfs_data.records[-1].canvas_dump() | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
Additionally, each SSFS record contains an HMAC-SHA1 value calculated using a fixed key. The intent of this value is to provide integrity validation as well as ensure that an authentic tool was used to generate the files: | ssfs_data.records[-1].valid | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
SSFS Key content | ssfs_key.canvas_dump() | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
SSFS Value access
The values contained in SSFS Data records can be accessed by providing the key name: | ssfs_data.get_value('HDB/KEYNAME/DB_USER') | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
SSFS Data content decryption
For those records that are stored encrypted, it's possible to access the right value by providing the key name and the proper SSFS decryption key structure: | ssfs_data.get_value('HDB/KEYNAME/DB_PASSWORD', ssfs_key) | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
SSFS Decrypted Payload structure
The decryption mechanism can be used to obtain the raw data that is stored encrypted: | decrypted_blob = rsec_decrypt(ssfs_data.get_record('HDB/KEYNAME/DB_PASSWORD').data, ssfs_key.key)
decrypted_blob | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
It's also possible to parse that raw data and obtain the underlying structures and associated metadata: | payload = SAPSSFSDecryptedPayload(decrypted_blob)
payload.canvas_dump() | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
The decrypted payload contains a hash calculated using the SHA-1 algorithm, and that can be used to validate integrity of the entire payload: | payload.valid | docs/fileformats/SAPSSFS.ipynb | CoreSecurity/pysap | gpl-2.0 |
Part 1: Featurize categorical data using one-hot-encoding
(1a) One-hot-encoding
We would like to develop code to convert categorical features to numerical ones. To build intuition, we will work with a sample unlabeled dataset with three data points, each representing an animal. The first feature ... | # Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampl... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
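For reference, the manual OHE dictionary that Part (1a) asks for simply maps each distinct (featureID, category) pair to a column index. One consistent assignment (chosen to match the sparse vectors used later, e.g. (0, 'mouse') -> 2 and (1, 'black') -> 3) could be:

```python
# One possible manual OHE dictionary: 7 distinct (featureID, category) pairs -> indices 0..6.
# The exact assignment is a convention; this one matches SparseVector(7, [(2, 1), (3, 1)]) below.
sampleOHEDictManual = {
    (0, 'bear'): 0, (0, 'cat'): 1, (0, 'mouse'): 2,
    (1, 'black'): 3, (1, 'tabby'): 4,
    (2, 'mouse'): 5, (2, 'salmon'): 6,
}
numSampleOHEFeats = len(sampleOHEDictManual)  # 7
```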
(1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a f... | import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4,[(1,3),(3,4)])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4,[(3,1)])
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
pr... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should ... | # Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = SparseVector(7,[(2,1),(3,1)])
sampleTwoOHEFeatManual =... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(1d) Define an OHE function
Next we will use the OHE dictionary from Part (1a) to programmatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first ... | print sampleOHEDictManual
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
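The oneHotEncoding cell is cut off above. A minimal sketch of a body that satisfies the docstring (one 1.0 at the dictionary index of every (featureID, value) pair, with sorted indices) might look like this; it is an assumption, not the graded solution:

```python
from pyspark.mllib.linalg import SparseVector

def oneHotEncodingSketch(rawFeats, OHEDict, numOHEFeats):
    """Look up each (featureID, value) pair in OHEDict and set that index to 1.0."""
    indices = sorted(OHEDict[feat] for feat in rawFeats)
    return SparseVector(numOHEFeats, indices, [1.0] * len(indices))
```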
(1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset. | # TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.map(lambda point: oneHotEncoding(point,sampleOHEDictManual, numSampleOHEFeats))
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sa... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
Part 2: Construct an OHE dictionary
(2a) Pair RDD of (featureID, category)
To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appea... | print sampleDataRDD.flatMap(lambda x: x).distinct().collect()
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = sampleDataRDD.flatMap(lambda x: x).distinct()
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, '... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap acti... | # TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex().collectAsMap())
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'),... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b). | # TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
"""Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
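createOneHotDict is also truncated; a sketch that just chains the steps from Parts (2a) and (2b) (distinct pairs, zipWithIndex, collectAsMap) would be:

```python
def createOneHotDictSketch(inputData):
    """Map each distinct (featureID, value) pair in inputData to a unique integer index."""
    return (inputData
            .flatMap(lambda x: x)
            .distinct()
            .zipWithIndex()
            .collectAsMap())
```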
Part 3: Parse CTR data and generate OHE features
Before we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable.
Below is Criteo's data sharing agreement. After you accept... | # Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
impor... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these ... | # TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nV... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which ... | point = '0,1,1,5,0,1382,4,15,2,181,1,2,,2,68fd1e64,80e26c9b,fb936136,7b4723c4,25c83c98,7e0ccccf,de7995b8,1f89b562,a73ee510,a8cd5504,b2cb9c98,37c9c164,2824a5f6,1adce6ef,8ba8b39a,891b62e7,e5ba7672,f54016b9,21ddcdc9,b1252a9d,07b5194c,,3a171ecb,c5c50484,e8b83407,9727dd16'
print parsePoint(point)
# TODO: Replace <FILL IN> ... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
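parsePoint itself is not visible in this dump. Given the comma-delimited sample point above (label first, then the raw features), a plausible sketch, consistent with the indexing used in Part (5b) later, is:

```python
def parsePointSketch(point):
    """Drop the label (first field) and pair each remaining field with its feature index."""
    fields = point.split(',')
    return [(i, value) for i, value in enumerate(fields[1:])]
```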
(3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note th... | # TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDi... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) an... | from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
"""Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
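A hedged sketch of parseOHEPoint, assuming it reuses the parsing logic above plus the notebook's oneHotEncoding function (both names come from the lab; the body here is an assumption):

```python
from pyspark.mllib.regression import LabeledPoint

def parseOHEPointSketch(point, OHEDict, numOHEFeats):
    """Parse a raw CSV line into a LabeledPoint with one-hot-encoded features."""
    fields = point.split(',')
    label = float(fields[0])
    rawFeats = [(i, value) for i, value in enumerate(fields[1:])]
    return LabeledPoint(label, oneHotEncoding(rawFeats, OHEDict, numOHEFeats))
```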
Visualization 1: Feature frequency
We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to feat... | def bucketFeatByCount(featCount):
"""Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, u... | # TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
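The only change Part (3e) asks for relative to Part (1d) is to skip pairs that are missing from the dictionary; a sketch of that variant (an assumption consistent with the docstring note above):

```python
from pyspark.mllib.linalg import SparseVector

def oneHotEncodingUnseenSketch(rawFeats, OHEDict, numOHEFeats):
    """Same as before, but silently ignore (featureID, value) pairs absent from OHEDict."""
    indices = sorted(OHEDict[feat] for feat in rawFeats if feat in OHEDict)
    return SparseVector(numOHEFeats, indices, [1.0] * len(indices))
```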
Part 4: CTR prediction and logloss evaluation
(4a) Logistic regression
We are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare ... | from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = LogisticRegressionWithSGD.train(OHETrainData, iterations=numIters,
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 a... | # TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
"""Calculates the value of log loss for a given probabilty and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a smal... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
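A hedged sketch of computeLogLoss implementing the cases above, with a small epsilon to keep log() defined at p = 0 or p = 1 (the epsilon value is an assumption):

```python
from math import log

def computeLogLossSketch(p, y, epsilon=1e-12):
    """Log loss of predicted probability p against binary label y, with p clamped away from 0 and 1."""
    p = min(max(p, epsilon), 1 - epsilon)
    return -log(p) if y == 1 else -log(1 - p)
```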
(4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training p... |
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = 1.0*OHETrainData.filter(lambda point: point.label==1).count()/OHETrainData.count()
print classOneFracTrai... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sig... | # TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
"""Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
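A sketch of getP, following the note about bounding the raw prediction to [-20, 20] before applying the sigmoid (the body is an assumption):

```python
from math import exp

def getPSketch(x, w, intercept):
    """Sigmoid of the raw linear prediction x.dot(w) + intercept, clipped for numerical stability."""
    rawPrediction = x.dot(w) + intercept
    rawPrediction = min(rawPrediction, 20)
    rawPrediction = max(rawPrediction, -20)
    return 1.0 / (1.0 + exp(-rawPrediction))
```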
(4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss. | # TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
"""Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Retur... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
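evaluateResults is truncated as well; a sketch that combines the notebook's getP and computeLogLoss into a mean log loss over the RDD (an assumption):

```python
def evaluateResultsSketch(model, data):
    """Mean log loss of the model's predicted probabilities over an RDD of LabeledPoints."""
    return (data
            .map(lambda lp: computeLogLoss(getP(lp.features, model.weights, model.intercept), lp.label))
            .mean())
```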
(4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset. | # TODO: Replace <FILL IN> with appropriate code
logLossValBase = OHEValidationData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLo... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
Visualization 2: ROC curve
We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model ... | labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeigh... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
Part 5: Reduce feature dimension via feature hashing
(5a) Hash function
As we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training dat... | from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
"""Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
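The body of hashFunction is cut off. A simplified sketch of the idea (hash each (featureID, value) pair into one of numBuckets buckets with md5 and count how many features land in each bucket; the exact string that is hashed is an assumption):

```python
import hashlib
from collections import defaultdict

def hashFunctionSketch(numBuckets, rawFeats):
    """Map each (featureID, value) pair to a bucket and count collisions per bucket."""
    sparseFeatures = defaultdict(float)
    for featureID, value in rawFeats:
        key = '{0}:{1}'.format(value, featureID)
        bucket = int(hashlib.md5(key.encode('utf-8')).hexdigest(), 16) % numBuckets
        sparseFeatures[bucket] += 1.0
    return dict(sparseFeatures)
```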
(5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this... | print 2**15
point = rawTrainData.take(1)[0]
feats = point.split(',')
featlist= []
for i,feat in enumerate(feats):
#print i, feat
if i==0:
label=float(feat)
else:
featlist.append((i-1,feat))
print label, featlist
print hashFunction(2**15, featlist, printMapping=False)
# TODO: Replace <FILL ... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of... | # TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
"""Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
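computeSparsity is truncated; a sketch computing the mean fraction of non-zero entries per observation, using the .indices attribute of each SparseVector (since, as the note above suggests, len() of a SparseVector does not count only the non-zeros):

```python
def computeSparsitySketch(data, d, n):
    """Average fraction of non-zero features per observation in an RDD of LabeledPoints."""
    return data.map(lambda lp: len(lp.features.indices)).sum() / float(d * n)
```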
(5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1... | numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = [1.0,10.0]
regParams = [1.0e-6,1.0e-3]
for stepSize in stepSizes:
for regParam in regParams:
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
Visualization 3: Hyperparameter heat map
We will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss.
The search was run using six step sizes and six values for regularization, which required the trai... | from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
(5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f). | # TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
model_5dbest = LogisticRegressionWithSGD.train(hashTrainData, iterations=numIters,
... | week13/MIDS-MLS-Project-Criteo-CTR.ipynb | kradams/MIDS-W261-2015-Adams | mit |
imports for Python, Pandas | import json
from pandas.io.json import json_normalize | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization | # define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
... | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/ | # load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Creat... | bank = pd.read_json('data/world_bank_projects.json')
bank.head() | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
1. Find the 10 countries with most projects | bank.countryname.value_counts().head(10) | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
2. Find the top 10 major project themes (using column 'mjtheme_namecode') | names = []
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
names.extend(list(json_normalize(namecode)['name']))
pd.Series(names).value_counts().head(10).drop('', axis=0) | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in. | codes_names = pd.DataFrame(columns=['code', 'name'])
for i in bank.index:
namecode = bank.loc[i,'mjtheme_namecode']
codes_names = pd.concat([codes_names, json_normalize(namecode)])
codes_names_dict = (codes_names[codes_names.name != '']
.drop_duplicates()
... | exercises/data_wrangling_json/sliderule_dsi_json_exercise.ipynb | RoebideBruijn/datascience-intensive-course | mit |
Examples | testing = (__name__ == "__main__")
if testing:
import numpy as np
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
| src/normalize.ipynb | robertoalotufo/ia898 | mit |
Example 1 | if testing:
f = np.array([100., 500., 1000.])
g1 = ia.normalize(f, [0,255])
print(g1)
if testing:
g2 = ia.normalize(f, [-1,1])
print(g2)
if testing:
g3 = ia.normalize(f, [0,1])
print(g3)
if testing:
#
f = np.array([-100., 0., 100.])
g4 = ia.normalize(f, [0,255])
print(g4)
... | src/normalize.ipynb | robertoalotufo/ia898 | mit |
Equation
$$ g = f\big|_{g_{min}}^{g_{max}} $$
$$ g(p) = \frac{g_{max} - g_{min}}{f_{max}-f_{min}} \, (f(p) - f_{min}) + g_{min} $$
See Also
iaimginfo - Print image size and pixel data type information
iait - Illustrate the contrast transform function
References
Wikipedia - Normalization
Contributions
Marcos Fernandes, cour... | if testing:
print('testing normalize')
print(repr(ia.normalize(np.array([-100., 0., 100.]), [0,255])) == repr(np.array([ 0. , 127.5, 255. ]))) | src/normalize.ipynb | robertoalotufo/ia898 | mit |
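For readers without the ia898 package, a standalone sketch of the normalization formula above; it reproduces the [-100, 0, 100] -> [0, 127.5, 255] test case shown in this cell:

```python
import numpy as np

def normalize_sketch(f, g_range=(0, 255)):
    """Linearly map the intensity range [f_min, f_max] of f onto [g_min, g_max]."""
    f = np.asarray(f, dtype=float)
    g_min, g_max = g_range
    f_min, f_max = f.min(), f.max()
    if f_max == f_min:               # constant image: avoid division by zero
        return np.full_like(f, g_min)
    return (g_max - g_min) / (f_max - f_min) * (f - f_min) + g_min

print(normalize_sketch(np.array([-100., 0., 100.]), (0, 255)))  # [   0.   127.5  255. ]
```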
Notice below that
the current values of year, month, and day
are ignored when evaluating date_formats['iso']. | year, month, day = 2017, 3, 27
print(year, month, day)
print(date_formats['iso']) | 20170327-cohpy-defer-fstring-evaluation.ipynb | james-prior/cohpy | mit |
A solution is to use lambdas in the dictionary, and call them later,
as shown below. | year, month, day = 'hello', -1, 0
# year, month, and day do not have to be defined when creating dictionary.
del year # Test that with one of them.
date_formats = {
'iso': (lambda: f'{year}-{month:02d}-{day:02d}'),
'us': (lambda: f'{month}/{day}/{year}'),
'other': (lambda: f'{day}.{month}.{year}'),
}
dates... | 20170327-cohpy-defer-fstring-evaluation.ipynb | james-prior/cohpy | mit |
Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a full... | def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
x1 = tf.reshape(x1, (-1,4,4,512))
x1 = tf.layers.batch_normalization(x1,training=training)
x1 ... | dcgan-svhn/DCGAN_Exercises.ipynb | tanmay987/deepLearning | mit |
Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll n... | def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')
relu1 = tf.maximum(alpha*x,x)
# 16*16*64
x2 = tf.layers.conv2d(relu1, 128, 5, stride... | dcgan-svhn/DCGAN_Exercises.ipynb | tanmay987/deepLearning | mit |
Install BAZEL with Baselisk | # Download Latest version of Bazelisk
!wget https://github.com/bazelbuild/bazelisk/releases/latest/download/bazelisk-linux-amd64
# Make script executable
!chmod +x bazelisk-linux-amd64
# Adding to the path
!sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
# Extract bazel info
!bazel
# Clone TensorFlow Lite Support... | third_party/tflite_support/src/tensorflow_lite_support/tools/Build_TFLite_Support_Targets.ipynb | nwjs/chromium.src | bsd-3-clause |
Build .aar files | #@title Select library. { display-mode: "form" }
library = 'Support library' #@param ["Support library", "Task Vision library", "Task Text library", "Task Audio library","Metadata library","C++ image_classifier","C++ image_objector","C++ image_segmenter","C++ image_embedder","C++ nl_classifier","C++ bert_nl_classifier... | third_party/tflite_support/src/tensorflow_lite_support/tools/Build_TFLite_Support_Targets.ipynb | nwjs/chromium.src | bsd-3-clause |
Iterate over all "true" emission measure distributions and time-average them over the given interval. | time_averaged_ems = {'{}'.format(freq):None for freq in frequencies}
for freq in frequencies:
print('tn = {} s'.format(freq))
if type(freq) == int:
base = base1
else:
base = base2
# setup field and observer objects
field = synthesizAR.Skeleton.restore(os.path.join(base.format(freq),... | notebooks/time_average_em.ipynb | wtbarnes/loops-workshop-2017-talk | mit |
Visualize the results to make sure we've averaged correctly. | fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspace=0.1)
for i in range(time_averaged_ems['250'].temperature_bin_edges.shape[0]-1):
# apply a filter to the
tmp = time_averaged_ems['250'][i].submap(u.Quantity([250,500],u.arcs... | notebooks/time_average_em.ipynb | wtbarnes/loops-workshop-2017-talk | mit |
Now save the results to our local temporary data folder. | for key in time_averaged_ems:
time_averaged_ems[key].save('../data/em_cubes_true_tn{}_t7500-12500.h5'.format(key))
foo = EMCube.restore('../data/em_cubes_tn250_t7500-12500.h5')
fig = plt.figure(figsize=(20,15))
plt.subplots_adjust(right=0.87)
cax = fig.add_axes([0.88, 0.12, 0.025, 0.75])
plt.subplots_adjust(hspac... | notebooks/time_average_em.ipynb | wtbarnes/loops-workshop-2017-talk | mit |
Overview
Please see Chapter 3 for more details on logistic regression.
Implementing logistic regression in Python
The following implementation is similar to the Adaline implementation in Chapter 2 except that we replace the sum of squared errors cost function with the logistic cost function
$$J(\mathbf{w}) = \sum_{i=1... | class LogisticRegression(object):
"""LogisticRegression classifier.
Parameters
------------
eta : float
Learning rate (between 0.0 and 1.0)
n_iter : int
Passes over the training dataset.
Attributes
-----------
w_ : 1d-array
Weights after fitting.
cost_ : lis... | code/bonus/logistic_regression.ipynb | wei-Z/Python-Machine-Learning | mit |
Reading-in the Iris data | import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/'
'machine-learning-databases/iris/iris.data', header=None)
df.tail()
import numpy as np
# select setosa and versicolor
y = df.iloc[0:100, 4].values
y = np.where(y == 'Iris-setosa', 1, 0)
# extract sepal length and petal length
X = df.iloc... | code/bonus/logistic_regression.ipynb | wei-Z/Python-Machine-Learning | mit |
A function for plotting decision regions | from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the... | code/bonus/logistic_regression.ipynb | wei-Z/Python-Machine-Learning | mit |
Next, we'll import the Sequential model type from Keras. This is simply a linear stack of neural network layers, and it's perfect for the type of feed-forward CNN we're building in this tutorial. | from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras.datasets import mnist
# load pre-shuffled MNIST data into train and test sets
(X_train, y_train), (X_test, y_test) = mnist.... | kerasTutorial/FirstMNISTProgram.ipynb | benkoo/fast_ai_coursenotes | apache-2.0 |
Step 7: Define model architecture.
Now we're ready to define our model architecture. In actual R&D work, researchers will spend a considerable amount of time studying model architectures.
To keep this tutorial moving along, we're not going to discuss the theory or math here. This alone is a rich and meaty field, and w... | model = Sequential()
model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))
print(model.output_shape)
model.add(Convolution2D(32, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2,2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(10, activation='relu'))
model.add(D... | kerasTutorial/FirstMNISTProgram.ipynb | benkoo/fast_ai_coursenotes | apache-2.0 |
Inverse Distance Verification: Cressman and Barnes
Compare inverse distance interpolation methods
Two popular interpolation schemes that use inverse distance weighting of observations are the
Barnes and Cressman analyses. The Cressman analysis is relatively straightforward and uses
the ratio between distance of an obse... | import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import cdist
from metpy.interpolate.geometry import dist_2
from metpy.interpolate.points import barnes_point, cressman_point
from metpy.interpolate.tools import calc_kappa
def draw_circle(ax, x, y, r, m, ... | v0.9/_downloads/52c3e3d710569bed83f26e14e23bb356/Inverse_Distance_Verification.ipynb | metpy/MetPy | bsd-3-clause |
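For intuition, a hedged sketch of the classic Cressman weighting described in the text: observations at distance d inside the radius of influence r get weight (r^2 - d^2) / (r^2 + d^2), and the analyzed value is their weighted average. MetPy's cressman_point is what the notebook actually uses; the helpers below are only illustrative.

```python
import numpy as np

def cressman_weights_sketch(dists, radius):
    """Cressman weights for observation distances within a radius of influence; zero outside it."""
    dists = np.asarray(dists, dtype=float)
    weights = (radius ** 2 - dists ** 2) / (radius ** 2 + dists ** 2)
    return np.clip(weights, 0, None)

def cressman_value_sketch(dists, values, radius):
    """Weighted average of the observations using Cressman weights."""
    w = cressman_weights_sketch(dists, radius)
    return np.sum(w * np.asarray(values)) / np.sum(w)
```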
For grid 1, we will use barnes to interpolate its value.
We need to calculate kappa--the average distance between observations over the domain. | x2, y2 = obs_tree.data[indices[1]].T
barnes_dist = dist_2(sim_gridx[1], sim_gridy[1], x2, y2)
barnes_obs = zp[indices[1]]
ave_spacing = np.mean((cdist(list(zip(xp, yp)), list(zip(xp, yp)))))
kappa = calc_kappa(ave_spacing)
barnes_val = barnes_point(barnes_dist, barnes_obs, kappa) | v0.9/_downloads/52c3e3d710569bed83f26e14e23bb356/Inverse_Distance_Verification.ipynb | metpy/MetPy | bsd-3-clause |
Plot all of the affiliated information and interpolation values. | fig, ax = plt.subplots(1, 1, figsize=(15, 10))
for i, zval in enumerate(zp):
ax.plot(pts[i, 0], pts[i, 1], '.')
ax.annotate(str(zval) + ' F', xy=(pts[i, 0] + 2, pts[i, 1]))
ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.plot(x1, y1, 'ko', fillstyle='none', markersize=10, label='grid 0 matches')
ax.plot(... | v0.9/_downloads/52c3e3d710569bed83f26e14e23bb356/Inverse_Distance_Verification.ipynb | metpy/MetPy | bsd-3-clause |
For each point, we will do a manual check of the interpolation values by doing a step by
step and visual breakdown.
Plot the grid point, observations within radius of the grid point, their locations, and
their distances from the grid point. | fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate('grid 0: ({}, {})'.format(sim_gridx[0], sim_gridy[0]),
xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(sim_gridx[0], sim_gridy[0], '+', markersize=10)
mx, my = obs_tree.data[indices[0]].T
mz = zp[indices[0]]
for x, y, z in zip(mx, my, mz):
d = np... | v0.9/_downloads/52c3e3d710569bed83f26e14e23bb356/Inverse_Distance_Verification.ipynb | metpy/MetPy | bsd-3-clause |
Now repeat for grid 1, except use barnes interpolation. | fig, ax = plt.subplots(1, 1, figsize=(15, 10))
ax.annotate('grid 1: ({}, {})'.format(sim_gridx[1], sim_gridy[1]),
xy=(sim_gridx[1] + 2, sim_gridy[1]))
ax.plot(sim_gridx[1], sim_gridy[1], '+', markersize=10)
mx, my = obs_tree.data[indices[1]].T
mz = zp[indices[1]]
for x, y, z in zip(mx, my, mz):
d = np... | v0.9/_downloads/52c3e3d710569bed83f26e14e23bb356/Inverse_Distance_Verification.ipynb | metpy/MetPy | bsd-3-clause |
Get draft data
First, get the sample drafts for a range of dates. | window = {
'since' : "2018-01-01T00:00:00",
'until' : "2018-01-10T00:00:00"
}
dt = DataTracker()
def extract_data(doc):
data = {}
## TODO: Add document UID?
data['title'] = doc.title
data['time'] = doc.time
data['group-acronym'] = dt.group(doc.group).acronym
data['type'] = doc.type.uri... | examples/datatracker/Working Group Affiliations.ipynb | datactive/bigbang | mit |
Create a table for the group and affiliation links in particular. | link_df = draft_df[['group-acronym', 'affiliation','time']]
link_df[:5] | examples/datatracker/Working Group Affiliations.ipynb | datactive/bigbang | mit |
Entity resolution on the affiliations
Using containment distance, collapse the entities in the affiliation column and remove suspected duplicates. | all_affiliations = link_df.groupby('affiliation').size()
ents = process.resolve_entities(all_affiliations,
process.containment_distance,
threshold=.25)
replacements = {}
for r in [{name: ent for name in ents[ent]} for ent in ents]:
replacements.updat... | examples/datatracker/Working Group Affiliations.ipynb | datactive/bigbang | mit |
Plot the network links between working groups and affiliations | edges = [
(row[1]['group-acronym'], row[1]['affiliation'])
for row
in link_df[['group-acronym','affiliation']].iterrows()
]
G = nx.Graph()
G.add_nodes_from([x[0]
for x
in link_df[['group-acronym']].drop_duplicates().values],
category=0)
G.add_nodes_from(all_affi... | examples/datatracker/Working Group Affiliations.ipynb | datactive/bigbang | mit |
Look at categories of affiliations | import bigbang.datasets.organizations as organizations
cat = organizations.load_data()
replacements = {x[1]['name'] : x[1]['category']
for x
in cat.iterrows()
if not pd.isna(x[1]['category'])}
link_cat_df = link_df.replace(to_replace=replacements)
link_cat_df.groupby('... | examples/datatracker/Working Group Affiliations.ipynb | datactive/bigbang | mit |
Q1. Solution
3-shingles for "hello world":
hel, ell, llo, lo_, o_w, _wo, wor, orl, rld => 9 in total
Q2. Solution | ## Q2 Solution.
def hash(x):
return math.fmod(3 * x + 2, 11)
for i in xrange(1,12):
print hash(i) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
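A quick programmatic check of the Q1 shingle count above (a small helper added here for illustration):

```python
def shingles(s, k=3):
    """All k-character shingles of the string s."""
    return [s[i:i + k] for i in range(len(s) - k + 1)]

print(len(shingles("hello world")))  # 9
```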
Q3.
This question involves three different Bloom-filter-like scenarios. Each scenario involves setting to 1 certain bits of a 10-bit array, each bit of which is initially 0.
Scenario A:
we use one hash function that randomly, and with equal probability, selects one of the ten bits of the array. We apply this hash... | ## Q3 Solution.
prob = 1.0 / 10
a = (1 - prob)**4
print a
b = (1 - ( 1 - (1 - prob)**2) )**2
print b
c = (1 - (1.0 /10 * 1.0 / 9))
print c | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q4.
In this market-basket problem, there are 99 items, numbered 2 to 100. There is a basket for each prime number between 2 and 100. The basket for p contains all and only the items whose numbers are a multiple of p. For example, the basket for 17 contains the following items: {17, 34, 51, 68, 85}. What is the support... | ## Q5 Solution.
vec1 = np.array([2, 1, 1])
vec2 = np.array([10, -7, 1])
print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q6.
In this question we use six minhash functions, organized as three bands of two rows each, to identify sets of high Jaccard similarity. If two sets have Jaccard similarity 0.6, what is the probability (to two decimal places) that this pair will become a candidate pair? | ## Q6 Solution.
# probability that they agree at one particular band
p1 = 0.6**2
# the pair becomes a candidate if at least one band agrees
print 1 - (1 - p1)**3 | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q7.
Suppose we have a (.4, .6, .9, .1)-sensitive family of functions. If we apply a 3-way OR construction to this family, we get a new family of functions whose sensitivity is: | ## Q7 Solution.
p1 = 1 - (1 - .9)**3
p2 = 1 - (1 - .1)**3
print "new LSH is (.4, .6, {}, {})-sensitive family".format(p1, p2) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q8.
Suppose we have a database of (Class, Student, Grade) facts, each giving the grade the student got in the class. We want to estimate the fraction of students who have gotten A's in at least 10 classes, but we do not want to examine the entire relation, just a sample of 10% of the tuples. We shall hash tuples to 10... | ## Q9 Solution.
M = np.array([[0, 0, 0, .25],
[1, 0, 0, .25],
[0, 1, 0, .25],
[0, 0, 1, .25]])
r = np.array([.25, .25, .25, .25])
for i in xrange(30):
r = M.dot(r)
print r | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q10.
Suppose in the AGM model we have four individuals (A,B,C,D} and two communities. Community 1 consists of {A,B,C} and Community 2 consists of {B,C,D}. For Community 1 there is a 30% chance- it will cause an edge between any two of its members. For Community 2 there is a 40% chance it will cause an edge between any... | ## Q10 Solution.
print 1 - (1 - .3)*(1 - .4) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q11.
X is a dataset of n columns on which we train a supervised Machine Learning algorithm. e is the error of the model measured against a validation dataset. Unfortunately, e is too high because the model has overfitted on the training data X and it doesn't generalize well. We now decide to reduce the model variance by ... | ##Q12
L = np.array([[-.25, -.5, -.76, -.29, -.03, -.07, -.01],
[-.05, -.1, -.15, .20, .26, .51, .77 ]]).T
print L
V = np.array([[6.74, 0],[0, 5.44]])
print V
R = np.array([[-.57, -.11, -.57, -.11, -.57],
[-.09, 0.70, -.09, .7, -.09]])
print R
print L.dot(V).dot(R) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q13.
Recall that the power iteration does r=X·r until convergence, where X is an n×n matrix and n is the number of nodes in the graph.
Using the power iteration notation above, what is the value of matrix X when solving topic-sensitive PageRank with teleport set {0,1} for the following graph? Use beta=0.8. (Recall that the... | X = 0.8 * np.array([[1.0/3, 0, 0],
[1.0/3, 0, 0],
[1.0/3, 1, 0]])
X += 0.2 * np.array([[.5, .5, .5],
[.5, .5, .5],
[ 0, 0, 0]])
print X | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q14.
Here are two sets of integers S = {1,2,3,4} and T = {1,2,5,6,x}, where x stands for some integer. For how many different integer values of x are the Jaccard similarity and the Jaccard distance of S and T the same? (Note: x can be one of 1, 2, 5, or 6, but in that case T, being a set, will contain x only once and ... | from scipy.cluster.vq import kmeans,vq
def L1(p1, p2):
return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])
c1 = [(1,1), (4,4)]
points = np.array([[1, 1], [2, 1], [2, 2], [3, 3], [4, 2], [2, 4], [4, 4]]
)
# for point in points:
# minDist = 9999
# minIdx = None
# for point in points:
# print points | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q20.
Consider an execution of the BALANCE algorithm with 4 advertisers, A1, A2, A3, A4, and 4 kinds of queries, Q1, Q2, Q3, Q4.
Advertiser A1 bids on queries Q1 and Q2;
A2 bids on queries Q2 and Q3;
A3 on queries Q3 and Q4;
and A4 on queries Q1 and Q4.
All bids are equal to 1, and all clickthrough rates are equal... | ## Solution
vec1 = np.array([0, 1, -1, 0, 0])
vec2 = np.array([0, 1, 0, 0, -1])
print vec1.dot(vec2) / (np.linalg.norm(vec1) * np.linalg.norm(vec2)) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
Q23.
The Utility Matrix below captures the ratings of 5 users (A,B,C,D,E) for 5 movies (P,Q,R,S,T). Each known rating is a number between 1 and 5, and blanks represent unknown ratings. Let (U,M) denote the rating of movie M by user U. We evaluate a Recommender System by withholding the ratings (A,P), (B,Q), and (C,S).... | ## Solution
print "RMSE = {}".format(math.sqrt(2.0 / 3)) | final/Final-basic.ipynb | Hexiang-Hu/mmds | mit |
What kind of task can we pose for this dataset? | %matplotlib inline
import matplotlib.pyplot as plt
n = 19
print("Каждая цифра представлена матрицей формы ", digits.data[n, :].shape) | labs/demos-1.ipynb | 1x0r/pspis | mit |
To display it on screen, we need to apply the reshape method. The target shape is $8 \times 8$. | digit = 255 - digits.data[n, :].reshape(8, 8)
plt.imshow(digit, cmap='gray', interpolation='none')
plt.title("This is " + str(digits.target[n]))
plt.show() | labs/demos-1.ipynb | 1x0r/pspis | mit |
Let's take one of the methods from the previous lecture, for example the tree-based classification method (CART). | from sklearn.tree import DecisionTreeClassifier | labs/demos-1.ipynb | 1x0r/pspis | mit |
Almost all classes implementing classification methods in scikit-learn have the following methods:
- fit: train the model;
- predict: classify an example with the trained classifier;
- score: evaluate the classification quality according to some criterion.
To create a tree classifier, it is enough to ... | clf = DecisionTreeClassifier(random_state=0) | labs/demos-1.ipynb | 1x0r/pspis | mit |
Let's train the classifier on all digits except the last 10. | clf.fit(digits.data[:-10], digits.target[:-10]) | labs/demos-1.ipynb | 1x0r/pspis | mit |
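A natural next step, not shown in the truncated source, is to use predict and score on the 10 held-out digits (hedged example):

```python
# Predict the 10 held-out digits and report accuracy on them.
print(clf.predict(digits.data[-10:]))
print(digits.target[-10:])
print(clf.score(digits.data[-10:], digits.target[-10:]))
```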