markdown stringlengths 0 1.02M | code stringlengths 0 832k | output stringlengths 0 1.02M | license stringlengths 3 36 | path stringlengths 6 265 | repo_name stringlengths 6 127 |
|---|---|---|---|---|---|
Calculate CoM from WDFUse force plate data to predict rider CoM position | Ffwt_fx = np.array(89.28) #measured force on plate under front wheel with rider
Ffwt_fx = Ffwt_fx / 2.205 * g #convert from lb to N
Lcmrc_fx = (Ffwt_fx - Ffwb) * Lt / Wr
Lcmfc_fx = Lt - Lcmrc_fx
Lcmbb_fx = Lcmfc_fx - Lfc
print(Lcmbb_fx) | -0.009825275032207537
| MIT | docs/calculate-wdf.ipynb | ross-wilkinson/vantage-com |
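The lever-arm calculation above can be sanity-checked with a self-contained sketch: by static moment balance about the rear contact point, the CoM sits at x = F_front * L / (F_front + F_rear) from the rear wheel. The names below are illustrative, not this notebook's `Ffwb`/`Lt`/`Wr`.

```python
def com_from_rear(f_front, f_rear, wheelbase):
    """Distance of the centre of mass from the rear contact point."""
    return f_front * wheelbase / (f_front + f_rear)

# A symmetric load sits exactly mid-wheelbase.
print(com_from_rear(400.0, 400.0, 1.0))  # 0.5
```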
**Knowing our data** | import pandas as pd
import numpy as np
data = 'https://s3-us-west-2.amazonaws.com/streamlit-demo-data/uber-raw-data-sep14.csv.gz'
df = pd.read_csv(data, nrows=500)
df.head() | _____no_output_____ | MIT | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects |
**Putting the column names in lower case.** This helps avoid case-sensitivity mistakes. | lower_str = lambda x: str(x).lower()
df.rename(lower_str, axis='columns', inplace=True)
df.head() | _____no_output_____ | MIT | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects |
As you can see, the column names are now in lower case. **Checking whether dates are in datetime.** To access only the hours of our `date/time` column, we have to make sure it is of datetime type. | df.dtypes | _____no_output_____ | MIT | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects |
As we can see, the column is of object type, so we will convert it to datetime | df['date/time'] = pd.to_datetime(df['date/time'])
df.dtypes | _____no_output_____ | MIT | Project-Uber/Support_notebook.ipynb | rafaelgrecco/Streamlit-library-Projects |
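With the column converted, the hours mentioned above come straight from the `.dt` accessor; a minimal sketch on synthetic timestamps:

```python
import pandas as pd

demo = pd.DataFrame({'date/time': ['9/1/2014 0:03:00', '9/1/2014 17:45:00']})
demo['date/time'] = pd.to_datetime(demo['date/time'])
print(demo['date/time'].dt.hour.tolist())  # [0, 17]
```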
"Unambiguous fire pixels" test 1 (daytime, normal conditions). | firecond1 = np.logical_and(R75 > 2.5, rho7 > .5)
firecond1 = np.logical_and(firecond1, rho7 - rho5 > .3)
firecond1_masked = np.ma.masked_where(
~firecond1, np.ones((ymax, xmax))) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
"Unambiguous fire pixels" test 2 (daytime, sensor anomalies) | firecond2 = np.logical_and(rho6 > .8, rho1 < .2)
firecond2 = np.logical_and(firecond2,
np.logical_or(rho5 > .4, rho7 < .1)
)
firecond2_masked = np.ma.masked_where(
~firecond2, np.ones((ymax, xmax))) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
"Relaxed conditions" | firecond3 = np.logical_and(R75 > 1.8, rho7 - rho5 > .17)
firecond3_masked = np.ma.masked_where(
~firecond3, np.ones((ymax, xmax))) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
"Extra tests" for relaxed conditions:
1. R76 > 1.6
2. R75 at least 3 sigma and 0.8 larger than the average of a 61x61 window of valid pixels
3. rho7 at least 3 sigma and 0.08 larger than the average of a 61x61 window of valid pixels

Valid pixels are:
1. Not an "unambiguous fire pixel"
2. rho7 > 0
3. Not water as per water test 1: rho4 > rho5 AND rho5 > rho6 AND rho6 > rho7 AND rho1 - rho7 < 0.2
4. Not water as per water test 2: rho3 > rho2 OR ( rho1 > rho2 AND rho2 > rho3 AND rho3 > rho4 )

So let's get started on the validation tests... | newfirecandidates = np.logical_and(~firecond1, ~firecond2)
newfirecandidates = np.logical_and(newfirecandidates, firecond3)
newfirecandidates = np.logical_and(newfirecandidates, R76 > 0)
sum(sum(newfirecandidates)) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
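Chaining `np.logical_and` pairwise works, but the same combination can be written in one call, and the candidate count read without the `sum(sum(...))` idiom; a small sketch on toy masks:

```python
import numpy as np

a = np.array([True, True, False, True])
b = np.array([True, False, True, True])
c = np.array([True, True, True, False])

combined = np.logical_and.reduce([a, b, c])
print(combined.tolist())           # [True, False, False, False]
print(np.count_nonzero(combined))  # 1
```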
We'll need a +-30 pixel window around a coordinate pair to carry out the averaging for the contextual tests | iidxmax, jidxmax = landsat.band1.data.shape
def get_window(ii, jj, N, iidxmax, jidxmax):
    """Return a 2D Boolean array that is True inside the window of
    half-width N around point (ii, jj), clipped to the array bounds."""
imin = max(0, ii-N)
imax = min(iidxmax, ii+N)
jmin = max(0, jj-N)
jmax = min(jidxmax, jj+N)
mask1 = np.zeros((iidxmax, jidxmax))
mask1[imin:imax+1, jmin:jmax+1] = 1
return mask1 == 1
plt.imshow(get_window(100, 30, 30, iidxmax, jidxmax) , cmap=cmap3, vmin=0, vmax=1) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
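The `max`/`min` clipping in `get_window` means windows near the array boundary are simply truncated. A quick check of that behaviour on a hypothetical 200x200 grid (the function below is a copy of `get_window`, so the check is self-contained):

```python
import numpy as np

def get_window_demo(ii, jj, N, iidxmax, jidxmax):
    imin, imax = max(0, ii - N), min(iidxmax, ii + N)
    jmin, jmax = max(0, jj - N), min(jidxmax, jj + N)
    mask1 = np.zeros((iidxmax, jidxmax))
    mask1[imin:imax + 1, jmin:jmax + 1] = 1
    return mask1 == 1

print(get_window_demo(50, 50, 30, 200, 200).sum())  # interior point: 61 * 61 = 3721
print(get_window_demo(0, 0, 30, 200, 200).sum())    # corner point:   31 * 31 = 961
```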
We can then get the union of those windows over all detected fire pixel candidates. | windows = [get_window(ii, jj, 30, iidxmax, jidxmax) for ii, jj in np.argwhere(newfirecandidates)]
window = np.any(windows, axis=0)
plt.imshow(window , cmap=cmap3, vmin=0, vmax=1) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
We also need a water mask... | def get_l8watermask_frombands(
rho1, rho2, rho3,
rho4, rho5, rho6, rho7):
"""
Takes L8 bands, returns 2D Boolean numpy array of same shape
"""
turbidwater = get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)
deepwater = get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)
return np.logical_or(turbidwater, deepwater)
def get_l8commonwater(rho1, rho4, rho5, rho6, rho7):
"""Returns Boolean numpy array common to turbid and deep water schemes"""
water1cond = np.logical_and(rho4 > rho5, rho5 > rho6)
water1cond = np.logical_and(water1cond, rho6 > rho7)
water1cond = np.logical_and(water1cond, rho1 - rho7 < 0.2)
return water1cond
def get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7):
"""Returns Boolean numpy array that marks shallow, turbid water"""
watercond2 = get_l8commonwater(rho1, rho4, rho5, rho6, rho7)
watercond2 = np.logical_and(watercond2, rho3 > rho2)
return watercond2
def get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7):
"""Returns Boolean numpy array that marks deep, clear water"""
watercond3 = get_l8commonwater(rho1, rho4, rho5, rho6, rho7)
watercondextra = np.logical_and(rho1 > rho2, rho2 > rho3)
watercondextra = np.logical_and(watercondextra, rho3 > rho4)
return np.logical_and(watercond3, watercondextra)
water = get_l8watermask_frombands(rho1, rho2, rho3, rho4, rho5, rho6, rho7)
plt.imshow(~water , cmap=cmap3, vmin=0, vmax=1) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
Let's try out the two components, out of interest... apparently, only the "deep water" test catches the water bodies here. | turbidwater = get_l8turbidwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)
deepwater = get_l8deepwater(rho1, rho2, rho3, rho4, rho5, rho6, rho7)
plt.imshow(~turbidwater , cmap=cmap3, vmin=0, vmax=1)
plt.show()
plt.imshow(~deepwater , cmap=cmap3, vmin=0, vmax=1)
def get_valid_pixels(otherfirecond, rho1, rho2, rho3,
                     rho4, rho5, rho6, rho7, mask=None):
    """Returns masked array: 1 for valid pixels, 0 for not."""
    if mask is None:
        mask = np.zeros(otherfirecond.shape)
    # Note: the mask is applied once to the final condition array below;
    # re-binding the rho inputs in a loop would have no effect on them.
watercond = get_l8watermask_frombands(
rho1, rho2, rho3,
rho4, rho5, rho6, rho7)
greater0cond = rho7 > 0
finalcond = np.logical_and(greater0cond, ~watercond)
finalcond = np.logical_and(finalcond, ~otherfirecond)
return np.ma.masked_array(finalcond, mask=mask)
otherfirecond = np.logical_or(firecond1, firecond2)
validpix = get_valid_pixels(otherfirecond, rho1, rho2, rho3,
rho4, rho5, rho6, rho7, mask=~window)
fig1 = plt.figure(1, figsize=(15, 15))
ax1 = fig1.add_subplot(111)
ax1.set_aspect('equal')
ax1.pcolormesh(np.flipud(validpix), cmap=cmap3, vmin=0, vmax=1)
iidxmax, jidxmax = landsat.band1.data.shape
output = np.zeros((iidxmax, jidxmax))
for ii, jj in np.argwhere(firecond3):
window = get_window(ii, jj, 30, iidxmax, jidxmax)
newmask = np.logical_or(~window, ~validpix.data)
rho7_win = np.ma.masked_array(rho7, mask=newmask)
R75_win = np.ma.masked_array(rho7/rho5, mask=newmask)
rho7_bar = np.mean(rho7_win.flatten())
rho7_std = np.std(rho7_win.flatten())
R75_bar = np.mean(R75_win.flatten())
R75_std = np.std(R75_win.flatten())
    rho7_test = rho7_win[ii, jj] - rho7_bar > max(3 * rho7_std, 0.08)
    R75_test = R75_win[ii, jj] - R75_bar > max(3 * R75_std, 0.8)
if rho7_test and R75_test:
output[ii, jj] = 1
lowfirecond = output == 1
sum(sum(lowfirecond))
fig1 = plt.figure(1, figsize=(15, 15))
ax1 = fig1.add_subplot(111)
ax1.set_aspect('equal')
ax1.pcolormesh(np.flipud(lowfirecond), cmap=cmap1, vmin=0, vmax=1)
fig1 = plt.figure(1, figsize=(15, 15))
ax1 = fig1.add_subplot(111)
ax1.set_aspect('equal')
ax1.pcolormesh(np.flipud(firecond1), cmap=cmap3, vmin=0, vmax=1)
allfirecond = np.logical_or(firecond1, firecond2)
allfirecond = np.logical_or(allfirecond, lowfirecond)
fig1 = plt.figure(1, figsize=(15, 15))
ax1 = fig1.add_subplot(111)
ax1.set_aspect('equal')
ax1.pcolormesh(np.flipud(allfirecond), cmap=cmap1, vmin=0, vmax=1) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
So this works! Now we can do the same using the module that incorporates the above code: | testfire, highfire, anomfire, lowfire = lfire.get_l8fire(landsat)
sum(sum(lowfire))
sum(sum(testfire))
firecond1_masked = np.ma.masked_where(
~testfire, np.ones((ymax, xmax)))
firecondlow_masked = np.ma.masked_where(
~lowfire, np.ones((ymax, xmax)))
fig1 = plt.figure(1, figsize=(15, 15))
ax1 = fig1.add_subplot(111)
ax1.set_aspect('equal')
ax1.pcolormesh(np.flipud(firecond1_masked), cmap=cmap1, vmin=0, vmax=1)
ax1.pcolormesh(np.flipud(firecondlow_masked), cmap=cmap3, vmin=0, vmax=1) | _____no_output_____ | MIT | ipynb_viirs1/Landsat8 fire detection dev notes.ipynb | chryss/VIFDAHL |
Finding fraud patterns with FP-growth. Data Collection and Investigation. | import pandas as pd
# Input data files are available in the "../input/" directory
df = pd.read_csv('D:/Python Project/Credit Card Fraud Detection/benchmark dataset/Test FP-Growth.csv')
# printing the first 5 rows for data visualization
df.head()
| _____no_output_____ | Apache-2.0 | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | limkhashing/Credit-Card-Fraud-Detection |
Execute FP-growth algorithm Spark | # import environment path to pyspark
import os
import sys
spark_path = r"D:\apache-spark" # spark installed folder
os.environ['SPARK_HOME'] = spark_path
sys.path.insert(0, spark_path + "/bin")
sys.path.insert(0, spark_path + "/python/pyspark/")
sys.path.insert(0, spark_path + "/python/lib/pyspark.zip")
sys.path.insert(0, spark_path + "/python/lib/py4j-0.10.7-src.zip")
# Export csv to txt file
df.to_csv('processed_itemsets.txt', index=None, sep=' ', mode='w+')
import csv
# creating necessary variables
new_itemsets_list = []
# find duplicate items within each row and suffix a counter onto repeats,
# keeping the first occurrence unchanged
with open("processed_itemsets.txt", 'r') as fp:
    itemsets_list = csv.reader(fp, delimiter=' ', skipinitialspace=True)
    for itemsets in itemsets_list:
        unique_itemsets = []
        occurrences = {}  # times each item has appeared so far in this row
        for item in itemsets:
            occurrences[item] = occurrences.get(item, 0) + 1
            if occurrences[item] == 1:
                unique_itemsets.append(item)
            else:
                unique_itemsets.append(item + "__(" + str(occurrences[item]) + ")")
        print(itemsets)
        new_itemsets_list.append(unique_itemsets)
# write the new itemsets into file
with open('processed_itemsets.txt', 'w+') as f:
for items in new_itemsets_list:
for item in items:
f.write("{} ".format(item))
f.write("\n")
from pyspark import SparkContext
from pyspark.mllib.fpm import FPGrowth
# initialize spark
sc = SparkContext.getOrCreate()
data = sc.textFile('processed_itemsets.txt').cache()
transactions = data.map(lambda line: line.strip().split(' '))
| _____no_output_____ | Apache-2.0 | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | limkhashing/Credit-Card-Fraud-Detection |
__minSupport__: The minimum support for an itemset to be identified as frequent. For example, if an item appears in 3 out of 5 transactions, it has a support of 3/5 = 0.6.

__minConfidence__: The minimum confidence for generating an association rule. Confidence indicates how often a rule has been found to be true. For example, if itemset X appears 4 times in the transactions and X and Y co-occur only 2 times, the confidence for the rule X => Y is 2/4 = 0.5.

__numPartitions__: The number of partitions used to distribute the work. By default the param is not set, and the number of partitions of the input dataset is used. | model = FPGrowth.train(transactions, minSupport=0.6, numPartitions=10)
result = model.freqItemsets().collect()
print("Frequent Itemsets : Item Support")
print("====================================")
for index, frequent_itemset in enumerate(result):
print(str(frequent_itemset.items) + ' : ' + str(frequent_itemset.freq))
rules = sorted(model._java_model.generateAssociationRules(0.8).collect(), key=lambda x: x.confidence(), reverse=True)
print("Antecedent => Consequent : Min Confidence")
print("========================================")
for rule in rules[:200]:
print(rule)
# stop spark session
sc.stop() | _____no_output_____ | Apache-2.0 | Python/Benchmark dataset/Finding fraud patterns with FP-growth benchmark dataset.ipynb | limkhashing/Credit-Card-Fraud-Detection |
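The support and confidence definitions above are easy to verify by hand on a toy transaction list (illustrative data, not the fraud dataset):

```python
toy_transactions = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b'}, {'a', 'b'}]
n = len(toy_transactions)

support_a = sum('a' in t for t in toy_transactions) / n            # count(a) / n
conf_a_to_b = (sum({'a', 'b'} <= t for t in toy_transactions)
               / sum('a' in t for t in toy_transactions))          # count(a, b) / count(a)

print(support_a)     # 0.8  -- 4 of 5 transactions contain 'a'
print(conf_a_to_b)   # 0.75 -- 3 of those 4 also contain 'b'
```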
Before we can validate models, we need an understanding of how to create and work with them. This chapter provides an introduction to running regression and classification models in scikit-learn. We will use this model building foundation throughout the remaining chapters. | ### Seen vs. unseen data
# The model is fit using X_train and y_train
model.fit(X_train, y_train)
# Create vectors of predictions
train_predictions = model.predict(X_train)
test_predictions = model.predict(X_test)
# Train/Test Errors
train_error = mae(y_true=y_train, y_pred=train_predictions)
test_error = mae(y_true=y_test, y_pred=test_predictions)
# Print the accuracy for seen and unseen data
print("Model error on seen data: {0:.2f}.".format(train_error))
print("Model error on unseen data: {0:.2f}.".format(test_error))
# Set parameters and fit a model
# Set the number of trees
rfr.n_estimators = 1000
# Add a maximum depth
rfr.max_depth = 6
# Set the random state
rfr.random_state = 11
# Fit the model
rfr.fit(X_train, y_train)
## Feature importances
# Fit the model using X and y
rfr.fit(X_train, y_train)
# Print how important each column is to the model
for i, item in enumerate(rfr.feature_importances_):
# Use i and item to print out the feature importance of each column
print("{0:s}: {1:.2f}".format(X_train.columns[i], item))
### Classification predictions
# Fit the rfc model.
rfc.fit(X_train, y_train)
# Create arrays of predictions
classification_predictions = rfc.predict(X_test)
probability_predictions = rfc.predict_proba(X_test)
# Print out count of binary predictions
print(pd.Series(classification_predictions).value_counts())
# Print the first value from probability_predictions
print('The first predicted probabilities are: {}'.format(probability_predictions[0]))
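# For most classifiers, predict() is simply the argmax of each
# predict_proba() row mapped through the model's classes_ array --
# a dependency-free sketch with made-up probabilities:
import numpy as np
classes = np.array(['lose', 'win'])
proba = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
print(classes[np.argmax(proba, axis=1)].tolist())  # ['lose', 'win']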
## Reusing model parameters
rfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)
# Print the classification model
print(rfc)
# Print the classification model's random state parameter
print('The random state is: {}'.format(rfc.random_state))
# Print all parameters
print('Printing the parameters dictionary: {}'.format(rfc.get_params()))
## Random forest classifier
from sklearn.ensemble import RandomForestClassifier
# Create a random forest classifier
rfc = RandomForestClassifier(n_estimators=50, max_depth=6, random_state=1111)
# Fit rfc using X_train and y_train
rfc.fit(X_train, y_train)
# Create predictions on X_test
predictions = rfc.predict(X_test)
print(predictions[0:5])
# Print model accuracy using score() and the testing data
print(rfc.score(X_test, y_test))
## MODULE 2
## Validation Basics | _____no_output_____ | MIT | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses |
This chapter focuses on the basics of model validation. From splitting data into training, validation, and testing datasets, to creating an understanding of the bias-variance tradeoff, we build the foundation for the techniques of K-Fold and Leave-One-Out validation practiced in chapter three. | ## Create one holdout set
# Create dummy variables using pandas
X = pd.get_dummies(tic_tac_toe.iloc[:,0:9])
y = tic_tac_toe.iloc[:, 9]
# Create training and testing datasets. Use 10% for the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.1, random_state=1111)
## Create two holdout sets
# Create temporary training and final testing datasets
X_temp, X_test, y_temp, y_test =\
train_test_split(X, y, test_size=.2, random_state=1111)
# Create the final training and validation datasets
X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=.25, random_state=1111)
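# A quick check of the resulting proportions: the two-step split above
# yields a 60/20/20 partition -- 20% to test, then 25% of the remaining
# 80% (another 20% of the total) to validation:
n = 1000
n_test = int(n * 0.2)              # 200
n_val = int((n - n_test) * 0.25)   # 200
n_train = n - n_test - n_val       # 600
print(n_train, n_val, n_test)      # 600 200 200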
### Mean absolute error
from sklearn.metrics import mean_absolute_error
# Manually calculate the MAE
n = len(predictions)
mae_one = sum(abs(y_test - predictions)) / n
print('With a manual calculation, the error is {}'.format(mae_one))
# Use scikit-learn to calculate the MAE
mae_two = mean_absolute_error(y_test, predictions)
print('Using scikit-learn, the error is {}'.format(mae_two))
# <script.py> output:
#     With a manual calculation, the error is 5.9
#     Using scikit-learn, the error is 5.9
### Mean squared error
from sklearn.metrics import mean_squared_error
n = len(predictions)
# Finish the manual calculation of the MSE
mse_one = sum(abs(y_test - predictions)**2) / n
print('With a manual calculation, the error is {}'.format(mse_one))
# Use the scikit-learn function to calculate MSE
mse_two = mean_squared_error(y_test, predictions)
print('Using scikit-learn, the error is {}'.format(mse_two))
### Performance on data subsets
# Find the East conference teams
east_teams = labels == "E"
# Create arrays for the true and predicted values
true_east = y_test[east_teams]
preds_east = predictions[east_teams]
# Print the accuracy metrics
print('The MAE for East teams is {}'.format(
mae(true_east, preds_east)))
# Print the West accuracy
print('The MAE for West conference is {}'.format(west_error))
### Confusion matrices
# Calculate and print the accuracy
accuracy = (324 + 491) / (953)
print("The overall accuracy is {0: 0.2f}".format(accuracy))
# Calculate and print the precision
precision = (491) / (491 + 15)
print("The precision is {0: 0.2f}".format(precision))
# Calculate and print the recall
recall = (491) / (491 + 123)
print("The recall is {0: 0.2f}".format(recall))
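# The same three metrics can be read off the matrix itself (rows =
# actual class, columns = predicted class, as scikit-learn lays it
# out) -- a self-contained sketch using the counts from this exercise:
import numpy as np
cm_demo = np.array([[324,  15],    # actual 0: TN, FP
                    [123, 491]])   # actual 1: FN, TP
tn, fp, fn, tp = cm_demo.ravel()
print("accuracy  {0:0.2f}".format((tp + tn) / cm_demo.sum()))  # 0.86
print("precision {0:0.2f}".format(tp / (tp + fp)))             # 0.97
print("recall    {0:0.2f}".format(tp / (tp + fn)))             # 0.80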
### Confusion matrices, again
from sklearn.metrics import confusion_matrix
# Create predictions
test_predictions = rfc.predict(X_test)
# Create and print the confusion matrix
cm = confusion_matrix(y_test, test_predictions)
print(cm)
# Print the true positives (actual 1s that were predicted 1s)
print("The number of true positives is: {}".format(cm[1, 1]))
## <script.py> output:
## [[177 123]
## [ 92 471]]
## The number of true positives is: 471
## Row 1, column 1 represents the number of actual 1s that were predicted 1s (the true positives).
## Always make sure you understand the orientation of the confusion matrix before you start using it!
### Precision vs. recall
from sklearn.metrics import precision_score
test_predictions = rfc.predict(X_test)
# Create precision or recall score based on the metric you imported
score = precision_score(y_test, test_predictions)
# Print the final result
print("The precision value is {0:.2f}".format(score))
### Error due to under/over-fitting
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=2)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.88
## The testing error is 9.15
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=11)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.57
## The testing error is 10.05
# Update the rfr model
rfr = RandomForestRegressor(n_estimators=25,
random_state=1111,
max_features=4)
rfr.fit(X_train, y_train)
# Print the training and testing accuracies
print('The training error is {0:.2f}'.format(
mae(y_train, rfr.predict(X_train))))
print('The testing error is {0:.2f}'.format(
mae(y_test, rfr.predict(X_test))))
## <script.py> output:
## The training error is 3.60
## The testing error is 8.79
### Am I underfitting?
from sklearn.metrics import accuracy_score
test_scores, train_scores = [], []
for i in [1, 2, 3, 4, 5, 10, 20, 50]:
rfc = RandomForestClassifier(n_estimators=i, random_state=1111)
rfc.fit(X_train, y_train)
# Create predictions for the X_train and X_test datasets.
train_predictions = rfc.predict(X_train)
test_predictions = rfc.predict(X_test)
# Append the accuracy score for the test and train predictions.
train_scores.append(round(accuracy_score(y_train, train_predictions), 2))
test_scores.append(round(accuracy_score(y_test, test_predictions), 2))
# Print the train and test scores.
print("The training scores were: {}".format(train_scores))
print("The testing scores were: {}".format(test_scores))
### MODULE 3
### Cross Validation | _____no_output_____ | MIT | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses |
Holdout sets are a great start to model validation. However, using a single train and test set is often not enough. Cross-validation is considered the gold standard when it comes to validating model performance and is almost always used when tuning model hyper-parameters. This chapter focuses on performing cross-validation to validate model performance. | ### Two samples
# Create two different samples of 200 observations
sample1 = tic_tac_toe.sample(200, random_state=1111)
sample2 = tic_tac_toe.sample(200, random_state=1171)
# Print the number of common observations
print(len([index for index in sample1.index if index in sample2.index]))
# Print the number of observations in the Class column for both samples
print(sample1['Class'].value_counts())
print(sample2['Class'].value_counts())
### scikit-learn's KFold()
from sklearn.model_selection import KFold
# Use KFold
kf = KFold(n_splits=5, shuffle=True, random_state=1111)
# Create splits
splits = kf.split(X)
# Print the number of indices
for train_index, val_index in splits:
print("Number of training indices: %s" % len(train_index))
print("Number of validation indices: %s" % len(val_index))
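# The key invariant of K-Fold is that the validation folds partition
# the data: every index lands in exactly one fold. A dependency-free
# sketch of a shuffled split (illustrative, not KFold's exact internals):
import numpy as np
rng_idx = np.random.RandomState(1111).permutation(85)
folds = np.array_split(rng_idx, 5)
print([len(f) for f in folds])                                    # [17, 17, 17, 17, 17]
print(sorted(np.concatenate(folds).tolist()) == list(range(85)))  # True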
### Using KFold indices
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
# Access the training and validation indices of splits
for train_index, val_index in splits:
# Setup the training and validation data
X_train, y_train = X[train_index], y[train_index]
X_val, y_val = X[val_index], y[val_index]
# Fit the random forest model
rfc.fit(X_train, y_train)
# Make predictions, and print the accuracy
predictions = rfc.predict(X_val)
print("Split accuracy: " + str(mean_squared_error(y_val, predictions)))
### scikit-learn's methods
# Instruction 1: Load the cross-validation method
from sklearn.model_selection import cross_val_score
# Instruction 2: Load the random forest regression model
from sklearn.ensemble import RandomForestClassifier
# Instruction 3: Load the mean squared error method
# Instruction 4: Load the function for creating a scorer
from sklearn.metrics import mean_squared_error, make_scorer
## It is easy to see how all of the methods can get mixed up, but
## it is important to know the names of the methods you need.
## You can always review the scikit-learn documentation should you need any help
### Implement cross_val_score()
rfc = RandomForestRegressor(n_estimators=25, random_state=1111)
mse = make_scorer(mean_squared_error)
# Set up cross_val_score
cv = cross_val_score(estimator=rfc,
X=X_train,
y=y_train,
cv=10,
scoring=mse)
# Print the mean error
print(cv.mean())
### Leave-one-out-cross-validation
from sklearn.metrics import mean_absolute_error, make_scorer
# Create scorer
mae_scorer = make_scorer(mean_absolute_error)
rfr = RandomForestRegressor(n_estimators=15, random_state=1111)
# Implement LOOCV
scores = cross_val_score(estimator=rfr, X=X, y=y, cv=85, scoring=mae_scorer)
# Print the mean and standard deviation
print("The mean of the errors is: %s." % np.mean(scores))
print("The standard deviation of the errors is: %s." % np.std(scores))
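# LOOCV is simply K-Fold with k equal to the sample count (cv=85 above
# matches the 85 candies). A dependency-free illustration with a
# "model" that predicts the mean of the remaining observations:
y_demo = [1.0, 2.0, 3.0]
errors = []
for i in range(len(y_demo)):
    rest = y_demo[:i] + y_demo[i + 1:]        # leave one observation out
    errors.append(abs(y_demo[i] - sum(rest) / len(rest)))
print(sum(errors) / len(errors))  # 1.0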
### MODULE 4
### Selecting the best model with Hyperparameter tuning. | _____no_output_____ | MIT | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses |
The first three chapters focused on model validation techniques. In chapter 4 we apply these techniques, specifically cross-validation, while learning about hyperparameter tuning. After all, model validation makes tuning possible and helps us select the overall best model. | ### Creating Hyperparameters
# Review the parameters of rfr
print(rfr.get_params())
# Maximum Depth
max_depth = [4, 8, 12]
# Minimum samples for a split
min_samples_split = [2, 5, 10]
# Max features
max_features = [4, 6, 8, 10]
### Running a model using ranges
import random
from sklearn.ensemble import RandomForestRegressor
# Fill in rfr using your variables
rfr = RandomForestRegressor(
n_estimators=100,
max_depth=random.choice(max_depth),
min_samples_split=random.choice(min_samples_split),
max_features=random.choice(max_features))
# Print out the parameters
print(rfr.get_params())
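# Sampling one value per hyper-parameter with random.choice draws a
# single point from the grid; seeding the generator makes the draw
# reproducible (an illustrative sketch -- RandomizedSearchCV automates
# this for n_iter draws):
import random
random.seed(1111)
grid = {"max_depth": [4, 8, 12],
        "min_samples_split": [2, 5, 10],
        "max_features": [4, 6, 8, 10]}
sampled = {name: random.choice(values) for name, values in grid.items()}
print(all(sampled[name] in grid[name] for name in grid))  # True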
### Preparing for RandomizedSearch
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, mean_squared_error
# Finish the dictionary by adding the max_depth parameter
param_dist = {"max_depth": [2, 4, 6, 8],
"max_features": [2, 4, 6, 8, 10],
"min_samples_split": [2, 4, 8, 16]}
# Create a random forest regression model
rfr = RandomForestRegressor(n_estimators=10, random_state=1111)
# Create a scorer to use (use the mean squared error)
scorer = make_scorer(mean_squared_error)
# Import the method for random search
from sklearn.model_selection import RandomizedSearchCV
# Build a random search using param_dist, rfr, and scorer
random_search =\
RandomizedSearchCV(
estimator=rfr,
param_distributions=param_dist,
n_iter=10,
cv=5,
scoring=scorer)
### Selecting the best precision model
from sklearn.metrics import precision_score, make_scorer
# Create a precision scorer
precision = make_scorer(precision_score)
# Finalize the random search
rs = RandomizedSearchCV(
estimator=rfc, param_distributions=param_dist,
scoring = precision,
cv=5, n_iter=10, random_state=1111)
rs.fit(X, y)
# print the mean test scores:
print('The accuracy for each run was: {}.'.format(rs.cv_results_['mean_test_score']))
# print the best model score:
print('The best accuracy for a single model was: {}'.format(rs.best_score_)) | _____no_output_____ | MIT | Model Validation in Python/.ipynb_checkpoints/Model Validation in Python-checkpoint.ipynb | frankgarciav/Datacamp-Courses |
Goals: Learn how to change dataset transforms.

Table of Contents:
- [0. Install](0)
- [1. Load experiment with default transforms](1)
- [2. Reset transforms and apply new transforms](2)

Install Monk:
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
- cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt
- (Select the requirements file as per OS and CUDA version) | !git clone https://github.com/Tessellate-Imaging/monk_v1.git
# Select the requirements file as per OS and CUDA version
!cd monk_v1/installation/Linux && pip install -r requirements_cu9.txt | _____no_output_____ | Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
Dataset - Broad Leaved Dock Image Classification - https://www.kaggle.com/gavinarmstrong/open-sprayer-images | ! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1uL-VV4nV_u0kry3gLH1TATUTu8hWJ0_d' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1uL-VV4nV_u0kry3gLH1TATUTu8hWJ0_d" -O open_sprayer_images.zip && rm -rf /tmp/cookies.txt
! unzip -qq open_sprayer_images.zip | _____no_output_____ | Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
Imports | # Monk
import os
import sys
sys.path.append("monk_v1/monk/");
#Using mxnet-gluon backend
from gluon_prototype import prototype | _____no_output_____ | Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
Load experiment with default transforms | gtf = prototype(verbose=1);
gtf.Prototype("project", "understand_transforms");
gtf.Default(dataset_path="open_sprayer_images/train",
model_name="resnet18_v1",
freeze_base_network=True,
num_epochs=5);
#Read the summary generated once you run this cell. | Dataset Details
Train path: open_sprayer_images/train
Val path: None
CSV train path: None
CSV val path: None
Dataset Params
Input Size: 224
Batch Size: 4
Data Shuffle: True
Processors: 4
Train-val split: 0.7
Pre-Composed Train Transforms
[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]
Pre-Composed Val Transforms
[{'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]
Dataset Numbers
Num train images: 4218
Num val images: 1809
Num classes: 2
Model Params
Model name: resnet18_v1
Use Gpu: True
Use pretrained: True
Freeze base network: True
Model Details
Loading pretrained model
Model Loaded on device
Model name: resnet18_v1
Num of potentially trainable layers: 41
Num of actual trainable layers: 1
Optimizer
Name: sgd
Learning rate: 0.01
Params: {'lr': 0.01, 'momentum': 0, 'weight_decay': 0, 'momentum_dampening_rate': 0, 'clipnorm': 0.0, 'clipvalue': 0.0}
Learning rate scheduler
Name: steplr
Params: {'step_size': 1, 'gamma': 0.98, 'last_epoch': -1}
Loss
Name: softmaxcrossentropy
Params: {'weight': None, 'batch_axis': 0, 'axis_to_sum_over': -1, 'label_as_categories': True, 'label_smoothing': False}
Training params
Num Epochs: 5
Display params
Display progress: True
Display progress realtime: True
Save Training logs: True
Save Intermediate models: True
Intermediate model prefix: intermediate_model_
| Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
Default Transforms are Train Transforms {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}] Val Transforms {'RandomHorizontalFlip': {'p': 0.8}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}} In that order Reset transforms | # Reset train and validation transforms
gtf.reset_transforms();
# Reset test transforms
gtf.reset_transforms(test=True); | _____no_output_____ | Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
Apply new transforms | gtf.List_Transforms();
# Transform applied to only train and val
gtf.apply_center_crop(224,
train=True,
val=True,
test=False)
# Transform applied to all train, val and test
gtf.apply_normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225],
train=True,
val=True,
test=True
)
# Very important to reload post update
gtf.Reload(); | Pre-Composed Train Transforms
[{'CenterCrop': {'input_size': 224}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]
Pre-Composed Val Transforms
[{'CenterCrop': {'input_size': 224}}, {'Normalize': {'mean': [0.485, 0.456, 0.406], 'std': [0.229, 0.224, 0.225]}}]
Dataset Numbers
Num train images: 4218
Num val images: 1809
Num classes: 2
Model Details
Loading pretrained model
Model Loaded on device
Model name: resnet18_v1
Num of potentially trainable layers: 41
Num of actual trainable layers: 1
| Apache-2.0 | study_roadmaps/1_getting_started_roadmap/5_update_hyperparams/2_data_params/6) Dataset Transforms.ipynb | kshitij12345/monk_v1 |
# Dataproc - Submit Hadoop Job

## Intended Use
A Kubeflow Pipeline component to submit an Apache Hadoop MapReduce job on Apache Hadoop YARN in the Google Cloud Dataproc service.

## Run-Time Parameters

Name | Description
:--- | :----------
project_id | Required. The ID of the Google Cloud Platform project that the cluster belongs to.
region | Required. The Cloud Dataproc region in which to handle the request.
cluster_name | Required. The cluster to run the job.
main_jar_file_uri | The HCFS URI of the jar file containing the main class. Examples: `gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar` `hdfs:/tmp/test-samples/custom-wordcount.jar` `file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar`
main_class | The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jarFileUris.
args | Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
hadoop_job | Optional. The full payload of a [HadoopJob](https://cloud.google.com/dataproc/docs/reference/rest/v1/HadoopJob).
job | Optional. The full payload of a [Dataproc job](https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.jobs).
wait_interval | Optional. The wait seconds between polling the operation. Defaults to 30s.

## Output

Name | Description
:--- | :----------
job_id | The ID of the created job.

## Sample
Note: the sample code below works both in an IPython notebook and as plain Python code.

### Setup a Dataproc cluster
Follow the [guide](https://cloud.google.com/dataproc/docs/guides/create-cluster) to create a new Dataproc cluster or reuse an existing one.

### Prepare Hadoop job
Upload your Hadoop jar file to a Google Cloud Storage (GCS) bucket. In the sample, we will use a jar file that is pre-installed in the main cluster, so there is no need to provide the `main_jar_file_uri`.
We only set `main_class` to be `org.apache.hadoop.examples.WordCount`. Here is the [source code of the example](https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/WordCount.java). To package a self-contained Hadoop MapReduce application from source code, follow the [instructions](https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html). Set sample parameters
CLUSTER_NAME = '<Please put your existing cluster name here>'
OUTPUT_GCS_PATH = '<Please put your output GCS path here>'
REGION = 'us-central1'
MAIN_CLASS = 'org.apache.hadoop.examples.WordCount'
INTPUT_GCS_PATH = 'gs://ml-pipeline-playground/shakespeare1.txt'
EXPERIMENT_NAME = 'Dataproc - Submit Hadoop Job'
COMPONENT_SPEC_URI = 'https://raw.githubusercontent.com/kubeflow/pipelines/7622e57666c17088c94282ccbe26d6a52768c226/components/gcp/dataproc/submit_hadoop_job/component.yaml' | _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
## Inspect Input Data
The input file is a simple text file: | !gsutil cat $INTPUT_GCS_PATH
| Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
## Clean up existing output files (Optional)
This is needed because the sample code requires the output folder to be a clean folder. To continue to run the sample, make sure that the service account of the notebook server has access to the `OUTPUT_GCS_PATH`.

**CAUTION**: This will remove all blob files under `OUTPUT_GCS_PATH`. | !gsutil rm $OUTPUT_GCS_PATH/**
| Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
## Install KFP SDK
Install the SDK (uncomment the code if the SDK is not installed yet) | # KFP_PACKAGE = 'https://storage.googleapis.com/ml-pipeline/release/0.1.12/kfp.tar.gz'
# !pip3 install $KFP_PACKAGE --upgrade | _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
Load component definitions | import kfp.components as comp
dataproc_submit_hadoop_job_op = comp.load_component_from_url(COMPONENT_SPEC_URI)
display(dataproc_submit_hadoop_job_op) | _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
Here is an illustrative pipeline that uses the component | import kfp.dsl as dsl
import kfp.gcp as gcp
import json
@dsl.pipeline(
name='Dataproc submit Hadoop job pipeline',
description='Dataproc submit Hadoop job pipeline'
)
def dataproc_submit_hadoop_job_pipeline(
project_id = PROJECT_ID,
region = REGION,
cluster_name = CLUSTER_NAME,
main_jar_file_uri = '',
main_class = MAIN_CLASS,
args = json.dumps([
INTPUT_GCS_PATH,
OUTPUT_GCS_PATH
]),
hadoop_job='',
job='{}',
wait_interval='30'
):
dataproc_submit_hadoop_job_op(project_id, region, cluster_name, main_jar_file_uri, main_class,
args, hadoop_job, job, wait_interval).apply(gcp.use_gcp_secret('user-gcp-sa'))
| _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
Compile the pipeline | pipeline_func = dataproc_submit_hadoop_job_pipeline
pipeline_filename = pipeline_func.__name__ + '.pipeline.tar.gz'
import kfp.compiler as compiler
compiler.Compiler().compile(pipeline_func, pipeline_filename) | _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
Submit the pipeline for execution | #Specify pipeline argument values
arguments = {}
#Get or create an experiment and submit a pipeline run
import kfp
client = kfp.Client()
experiment = client.create_experiment(EXPERIMENT_NAME)
#Submit a pipeline run
run_name = pipeline_func.__name__ + ' run'
run_result = client.run_pipeline(experiment.id, run_name, pipeline_filename, arguments) | _____no_output_____ | Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
## Inspect the outputs
The sample in the notebook will count the words in the input text and output them in sharded files. Here is the command to inspect them: | !gsutil cat $OUTPUT_GCS_PATH/*
| Apache-2.0 | components/gcp/dataproc/submit_hadoop_job/sample.ipynb | ryan-williams/pipelines |
- The data used for the calculations is soil water balance data from CGIAR-CSI available at http://www.cgiar-csi.org/data/global-high-resolution-soil-water-balance.
- Data on irrigation are from the FAO's AQUASTAT information system available at http://www.fao.org/nr/water/aquastat/irrigationmap/index10.stm.
- Soil data is from soilgrids.org and can be downloaded from their ftp server: ftp://ftp.soilgrids.org/data/recent/. The following code also does that (since the files are huge, the execution might take some time). The layers are used to calculate the topsoil average clay content, which is resampled from 250m to 1km resolution. Alternatively, you can download the averaged and resampled layer from the ETH research collection (https://doi.org/10.3929/ethz-b-000253177) and store it in the "output/soilgrids_prepared"-folder.
- Data on crop-area and potato-area has been downloaded from http://www.earthstat.org/data-download/. The datasets “Cropland and Pasture Area in 2000” and “Harvested Area and Yield for 175 Crops” are used.
- Data on potential potato-area has been downloaded from http://gaez.fao.org/Main.html. The dataset chosen was “Crop suitability index (class) for high input level rain-fed white potato”, Future period 2020s, MPI ECHAM4 B2, Without CO2 fertilization (res03ehb22020hsihr0wpo_package.zip)
from ftplib import FTP | _____no_output_____ | MIT | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 |
Specify your data directory to where you want to download the files: | data_dir = os.path.join('..', 'data/soilgrids') | _____no_output_____ | MIT | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 |
Download files: | # connect to data folder on ftp-server:
ftp = FTP('ftp.soilgrids.org')
ftp.login()
ftp.cwd('data/recent')
# select download directory:
os.chdir(data_dir)
# download the given files:
files = ['CLYPPT_M_sl1_250m.tif', 'CLYPPT_M_sl2_250m.tif', 'CLYPPT_M_sl3_250m.tif',
'CLYPPT_M_sl4_250m.tif']
for filename in files:
file = open(filename, 'wb')
ftp.retrbinary('RETR ' + filename, file.write)
file.close()
ftp.quit() | _____no_output_____ | MIT | python/0_download_files.ipynb | ethz-esd/compaction_stoessel_2018 |
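The downloaded layers `sl1`-`sl4` correspond, under the SoilGrids depth convention, to 0, 5, 15 and 30 cm. Assuming that convention, the 0-30 cm topsoil average clay content is a depth-weighted (trapezoidal) mean of the four layers. The sketch below illustrates the weighting on toy arrays only; real rasters would first be read into NumPy arrays with a library such as `rasterio`, and both the depth values and the helper name are assumptions for illustration, not part of this repository.

```python
import numpy as np

depths = np.array([0.0, 5.0, 15.0, 30.0])  # assumed depths (cm) of sl1..sl4

def topsoil_average(layers, depths=depths):
    """Depth-weighted (trapezoidal) mean over the 0-30 cm profile.

    `layers` has shape (4, rows, cols): clay content at each depth.
    """
    dz = np.diff(depths)                              # layer thicknesses: 5, 10, 15
    segment_means = 0.5 * (layers[:-1] + layers[1:])  # trapezoid midpoints
    return np.tensordot(dz, segment_means, axes=1) / depths[-1]

# Toy 2x2 raster: clay content increasing with depth
layers = np.stack([np.full((2, 2), v) for v in [20.0, 22.0, 26.0, 30.0]])
print(topsoil_average(layers))  # every cell 25.5
```

The trapezoidal weights work out to (1/12, 1/4, 5/12, 1/4) for the four layers, which sum to one, so a depth-constant profile is returned unchanged.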
# Random Variables
:label:`sec_random_variables`

In :numref:`sec_prob` we saw the basics of how to work with discrete random variables, which in our case refer to those random variables which take either a finite set of possible values, or the integers. In this section, we develop the theory of *continuous random variables*, which are random variables which can take on any real value.

## Continuous Random Variables

Continuous random variables are a significantly more subtle topic than discrete random variables. A fair analogy to make is that the technical jump is comparable to the jump between adding lists of numbers and integrating functions. As such, we will need to take some time to develop the theory.

### From Discrete to Continuous

To understand the additional technical challenges encountered when working with continuous random variables, let us perform a thought experiment. Suppose that we are throwing a dart at the dart board, and we want to know the probability that it hits exactly $2 \text{cm}$ from the center of the board.

To start with, we imagine measuring a single digit of accuracy, that is to say with bins for $0 \text{cm}$, $1 \text{cm}$, $2 \text{cm}$, and so on. We throw say $100$ darts at the dart board, and if $20$ of them fall into the bin for $2\text{cm}$ we conclude that $20\%$ of the darts we throw hit the board $2 \text{cm}$ away from the center.

However, when we look closer, this does not match our question! We wanted exact equality, whereas these bins hold all that fell between say $1.5\text{cm}$ and $2.5\text{cm}$.

Undeterred, we continue further. We measure even more precisely, say $1.9\text{cm}$, $2.0\text{cm}$, $2.1\text{cm}$, and now see that perhaps $3$ of the $100$ darts hit the board in the $2.0\text{cm}$ bucket. Thus we conclude the probability is $3\%$.

However, this does not solve anything! We have just pushed the issue down one digit further. Let us abstract a bit.
Imagine we know the probability that the first $k$ digits match with $2.00000\ldots$ and we want to know the probability it matches for the first $k+1$ digits. It is fairly reasonable to assume that the ${k+1}^{\mathrm{th}}$ digit is essentially a random choice from the set $\{0, 1, 2, \ldots, 9\}$. At least, we cannot conceive of a physically meaningful process which would force the number of micrometers away from the center to prefer to end in a $7$ vs a $3$.

What this means is that in essence each additional digit of accuracy we require should decrease the probability of matching by a factor of $10$. Or put another way, we would expect that

$$P(\text{distance is}\; 2.00\ldots, \;\text{to}\; k \;\text{digits} ) \approx p\cdot10^{-k}.$$

The value $p$ essentially encodes what happens with the first few digits, and the $10^{-k}$ handles the rest.

Notice that if we know the position accurate to $k=4$ digits after the decimal, that means we know the value falls within the interval, say $(1.99995, 2.00005]$, which is an interval of length $2.00005-1.99995 = 10^{-4}$. Thus, if we call the length of this interval $\epsilon$, we can say

$$P(\text{distance is in an}\; \epsilon\text{-sized interval around}\; 2 ) \approx \epsilon \cdot p.$$

Let us take this one final step further. We have been thinking about the point $2$ the entire time, but never thinking about other points. Nothing is different there fundamentally, but it is the case that the value $p$ will likely be different. We would at least hope that a dart thrower was more likely to hit a point near the center, like $2\text{cm}$ rather than $20\text{cm}$. Thus, the value $p$ is not fixed, but rather should depend on the point $x$. This tells us that we should expect

$$P(\text{distance is in an}\; \epsilon \text{-sized interval around}\; x ) \approx \epsilon \cdot p(x).$$
:eqlabel:`eq_pdf_deriv`

Indeed, :eqref:`eq_pdf_deriv` precisely defines the *probability density function*.
It is a function $p(x)$ which encodes the relative probability of hitting near one point vs. another. Let us visualize what such a function might look like. | %matplotlib inline
from d2l import mxnet as d2l
from IPython import display
from mxnet import np, npx
npx.set_np()
# Plot the probability density function for some random variable
x = np.arange(-5, 5, 0.01)
p = 0.2*np.exp(-(x - 3)**2 / 2)/np.sqrt(2 * np.pi) + \
0.8*np.exp(-(x + 1)**2 / 2)/np.sqrt(2 * np.pi)
d2l.plot(x, p, 'x', 'Density') | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
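As a quick sanity check (a small sketch using plain NumPy rather than the MXNet `np` wrapper used in the chapter), we can confirm that the plotted mixture is a valid density: it is non-negative and its total area is approximately one, with only a tiny amount of probability mass lying outside the plotted window.

```python
import numpy as np

# Same two-Gaussian mixture density as in the plot above
x = np.arange(-5, 5, 0.01)
p = 0.2 * np.exp(-(x - 3)**2 / 2) / np.sqrt(2 * np.pi) + \
    0.8 * np.exp(-(x + 1)**2 / 2) / np.sqrt(2 * np.pi)

total_mass = np.trapz(p, x)  # numerical integral over [-5, 5)
print(total_mass)            # close to 1; the mass outside [-5, 5) is tiny
```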
The locations where the function value is large indicate regions where we are more likely to find the random value. The low portions are areas where we are unlikely to find the random value.

## Probability Density Functions

Let us now investigate this further. We have already seen what a probability density function is intuitively for a random variable $X$, namely the density function is a function $p(x)$ so that

$$P(X \; \text{is in an}\; \epsilon \text{-sized interval around}\; x ) \approx \epsilon \cdot p(x).$$
:eqlabel:`eq_pdf_def`

But what does this imply for the properties of $p(x)$?

First, probabilities are never negative, thus we should expect that $p(x) \ge 0$ as well.

Second, let us imagine that we slice up $\mathbb{R}$ into an infinite number of slices which are $\epsilon$ wide, say with slices $(\epsilon\cdot i, \epsilon \cdot (i+1)]$. For each of these, we know from :eqref:`eq_pdf_def` the probability is approximately

$$P(X \; \text{is in an}\; \epsilon\text{-sized interval around}\; x ) \approx \epsilon \cdot p(\epsilon \cdot i),$$

so summed over all of them it should be

$$P(X\in\mathbb{R}) \approx \sum_i \epsilon \cdot p(\epsilon\cdot i).$$

This is nothing more than the approximation of an integral discussed in :numref:`sec_integral_calculus`, thus we can say that

$$P(X\in\mathbb{R}) = \int_{-\infty}^{\infty} p(x) \; dx.$$

We know that $P(X\in\mathbb{R}) = 1$; since the random variable must take on *some* number, we can conclude that for any density

$$\int_{-\infty}^{\infty} p(x) \; dx = 1.$$

Indeed, digging into this further shows that for any $a$ and $b$, we see that

$$P(X\in(a, b]) = \int _ {a}^{b} p(x) \; dx.$$

We may approximate this in code by using the same discrete approximation methods as before. In this case we can approximate the probability of falling in the blue region.
epsilon = 0.01
x = np.arange(-5, 5, 0.01)
p = 0.2*np.exp(-(x - 3)**2 / 2) / np.sqrt(2 * np.pi) + \
0.8*np.exp(-(x + 1)**2 / 2) / np.sqrt(2 * np.pi)
d2l.set_figsize()
d2l.plt.plot(x, p, color='black')
d2l.plt.fill_between(x.tolist()[300:800], p.tolist()[300:800])
d2l.plt.show()
f'approximate Probability: {np.sum(epsilon*p[300:800])}' | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
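We can also check that this Riemann-sum approximation is stable: shrinking $\epsilon$ by a factor of ten barely changes the estimated probability of the interval $(-2, 3]$. A small sketch in plain NumPy (the helper name is illustrative):

```python
import numpy as np

def riemann_prob(eps, a=-2.0, b=3.0):
    # Left-endpoint Riemann sum of the mixture density over (a, b]
    x = np.arange(a, b, eps)
    p = 0.2 * np.exp(-(x - 3)**2 / 2) / np.sqrt(2 * np.pi) + \
        0.8 * np.exp(-(x + 1)**2 / 2) / np.sqrt(2 * np.pi)
    return float(np.sum(eps * p))

coarse = riemann_prob(0.01)
fine = riemann_prob(0.001)
print(coarse, fine)  # both near 0.773, agreeing to roughly 3 decimal places
```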
It turns out that these two properties describe exactly the space of possible probability density functions (or *p.d.f.*'s for the commonly encountered abbreviation). They are non-negative functions $p(x) \ge 0$ such that

$$\int_{-\infty}^{\infty} p(x) \; dx = 1.$$
:eqlabel:`eq_pdf_int_one`

We interpret this function by using integration to obtain the probability our random variable is in a specific interval:

$$P(X\in(a, b]) = \int _ {a}^{b} p(x) \; dx.$$
:eqlabel:`eq_pdf_int_int`

In :numref:`sec_distributions` we will see a number of common distributions, but let us continue working in the abstract.

## Cumulative Distribution Functions

In the previous section, we saw the notion of the p.d.f. In practice, this is a commonly encountered method to discuss continuous random variables, but it has one significant pitfall: that the values of the p.d.f. are not themselves probabilities, but rather a function that we must integrate to yield probabilities. There is nothing wrong with a density being larger than $10$, as long as it is not larger than $10$ for more than an interval of length $1/10$. This can be counter-intuitive, so people often also think in terms of the *cumulative distribution function*, or c.d.f., which *is* a probability.

In particular, by using :eqref:`eq_pdf_int_int`, we define the c.d.f. for a random variable $X$ with density $p(x)$ by

$$F(x) = \int _ {-\infty}^{x} p(x) \; dx = P(X \le x).$$

Let us observe a few properties.

* $F(x) \rightarrow 0$ as $x\rightarrow -\infty$.
* $F(x) \rightarrow 1$ as $x\rightarrow \infty$.
* $F(x)$ is non-decreasing ($y > x \implies F(y) \ge F(x)$).
* $F(x)$ is continuous (has no jumps) if $X$ is a continuous random variable.

With the fourth bullet point, note that this would not be true if $X$ were discrete, say taking the values $0$ and $1$ both with probability $1/2$.
In that case

$$F(x) = \begin{cases}
0 & x < 0, \\
\frac{1}{2} & 0 \le x < 1, \\
1 & x \ge 1.
\end{cases}$$

In this example, we see one of the benefits of working with the c.d.f., the ability to deal with continuous or discrete random variables in the same framework, or indeed mixtures of the two (flip a coin: if heads return the roll of a die, if tails return the distance of a dart throw from the center of a dart board).

## Means

Suppose that we are dealing with a random variable $X$. The distribution itself can be hard to interpret. It is often useful to be able to summarize the behavior of a random variable concisely. Numbers that help us capture the behavior of a random variable are called *summary statistics*. The most commonly encountered ones are the *mean*, the *variance*, and the *standard deviation*.

The *mean* encodes the average value of a random variable. If we have a discrete random variable $X$, which takes the values $x_i$ with probabilities $p_i$, then the mean is given by the weighted average: sum the values times the probability that the random variable takes on that value:

$$\mu_X = E[X] = \sum_i x_i p_i.$$
:eqlabel:`eq_exp_def`

The way we should interpret the mean (albeit with caution) is that it tells us essentially where the random variable tends to be located.

As a minimalistic example that we will examine throughout this section, let us take $X$ to be the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. We can compute using :eqref:`eq_exp_def` that, for any possible choice of $a$ and $p$, the mean is

$$\mu_X = E[X] = \sum_i x_i p_i = (a-2)p + a(1-2p) + (a+2)p = a.$$

Thus we see that the mean is $a$.
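A quick Monte Carlo confirmation (plain NumPy; the specific values of $a$ and $p$ are arbitrary choices for illustration): sampling from this three-point distribution and averaging should land near $a$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, p = 1.5, 0.2

# Draw from the three-point distribution {a-2: p, a: 1-2p, a+2: p}
samples = rng.choice([a - 2, a, a + 2], size=100_000, p=[p, 1 - 2 * p, p])
print(samples.mean())  # close to a = 1.5
```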
This matches the intuition since $a$ is the location around which we centered our random variable.

Because they are helpful, let us summarize a few properties.

* For any random variable $X$ and numbers $a$ and $b$, we have that $\mu_{aX+b} = a\mu_X + b$.
* If we have two random variables $X$ and $Y$, we have $\mu_{X+Y} = \mu_X+\mu_Y$.

Means are useful for understanding the average behavior of a random variable; however, the mean is not sufficient to even have a full intuitive understanding. Making a profit of $\$10 \pm \$1$ per sale is very different from making $\$10 \pm \$15$ per sale despite having the same average value. The second one has a much larger degree of fluctuation, and thus represents a much larger risk. Thus, to understand the behavior of a random variable, we will need at minimum one more measure: some measure of how widely a random variable fluctuates.

## Variances

This leads us to consider the *variance* of a random variable. This is a quantitative measure of how far a random variable deviates from the mean. Consider the expression $X - \mu_X$. This is the deviation of the random variable from its mean.
This value can be positive or negative, so we need to do something to make it positive so that we are measuring the magnitude of the deviation.A reasonable thing to try is to look at $\left|X-\mu_X\right|$, and indeed this leads to a useful quantity called the *mean absolute deviation*, however due to connections with other areas of mathematics and statistics, people often use a different solution.In particular, they look at $(X-\mu_X)^2.$ If we look at the typical size of this quantity by taking the mean, we arrive at the variance$$\sigma_X^2 = \mathrm{Var}(X) = E\left[(X-\mu_X)^2\right] = E[X^2] - \mu_X^2.$$:eqlabel:`eq_var_def`The last equality in :eqref:`eq_var_def` holds by expanding out the definition in the middle, and applying the properties of expectation.Let us look at our example where $X$ is the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. In this case $\mu_X = a$, so all we need to compute is $E\left[X^2\right]$. This can readily be done:$$E\left[X^2\right] = (a-2)^2p + a^2(1-2p) + (a+2)^2p = a^2 + 8p.$$Thus, we see that by :eqref:`eq_var_def` our variance is$$\sigma_X^2 = \mathrm{Var}(X) = E[X^2] - \mu_X^2 = a^2 + 8p - a^2 = 8p.$$This result again makes sense. The largest $p$ can be is $1/2$ which corresponds to picking $a-2$ or $a+2$ with a coin flip. The variance of this being $4$ corresponds to the fact that both $a-2$ and $a+2$ are $2$ units away from the mean, and $2^2 = 4$. 
On the other end of the spectrum, if $p=0$, this random variable always takes the value $a$ and so it has no variance at all.

We will list a few properties of variance below:

* For any random variable $X$, $\mathrm{Var}(X) \ge 0$, with $\mathrm{Var}(X) = 0$ if and only if $X$ is a constant.
* For any random variable $X$ and numbers $a$ and $b$, we have that $\mathrm{Var}(aX+b) = a^2\mathrm{Var}(X)$.
* If we have two *independent* random variables $X$ and $Y$, we have $\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y)$.

When interpreting these values, there can be a bit of a hiccup. In particular, let us try imagining what happens if we keep track of units through this computation. Suppose that we are working with the star rating assigned to a product on the web page. Then $a$, $a-2$, and $a+2$ are all measured in units of stars. Similarly, the mean $\mu_X$ is then also measured in stars (being a weighted average). However, if we get to the variance, we immediately encounter an issue, which is that we want to look at $(X-\mu_X)^2$, which is in units of *squared stars*. This means that the variance itself is not comparable to the original measurements. To make it interpretable, we will need to return to our original units.

## Standard Deviations

This summary statistic can always be deduced from the variance by taking the square root! Thus we define the *standard deviation* to be

$$\sigma_X = \sqrt{\mathrm{Var}(X)}.$$

In our example, this means the standard deviation is $\sigma_X = 2\sqrt{2p}$.
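To double-check the algebra above, here is a short sketch (plain NumPy) that computes the mean, variance, and standard deviation of the three-point random variable directly from its probability mass function; the particular $a$ and $p$ below are arbitrary.

```python
import numpy as np

a, p = 0.3, 0.2                      # any a, and any 0 <= p <= 1/2
xs = np.array([a - 2, a, a + 2])     # support of the random variable
ps = np.array([p, 1 - 2 * p, p])     # matching probabilities

mean = np.sum(xs * ps)               # weighted average
var = np.sum(xs**2 * ps) - mean**2   # E[X^2] - mu^2
std = np.sqrt(var)

print(mean, var, std)  # a, 8p and 2*sqrt(2p): here 0.3, 1.6, ~1.265
```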
If we are dealing with units of stars for our review example, $\sigma_X$ is again in units of stars.

The properties we had for the variance can be restated for the standard deviation.

* For any random variable $X$, $\sigma_{X} \ge 0$.
* For any random variable $X$ and numbers $a$ and $b$, we have that $\sigma_{aX+b} = |a|\sigma_{X}$.
* If we have two *independent* random variables $X$ and $Y$, we have $\sigma_{X+Y} = \sqrt{\sigma_{X}^2 + \sigma_{Y}^2}$.

It is natural at this moment to ask, "If the standard deviation is in the units of our original random variable, does it represent something we can draw with regards to that random variable?" The answer is a resounding yes! Indeed, much like the mean told us the typical location of our random variable, the standard deviation gives the typical range of variation of that random variable. We can make this rigorous with what is known as Chebyshev's inequality:

$$P\left(X \not\in [\mu_X - \alpha\sigma_X, \mu_X + \alpha\sigma_X]\right) \le \frac{1}{\alpha^2}.$$
:eqlabel:`eq_chebyshev`

Or to state it verbally in the case of $\alpha=10$: $99\%$ of the samples from any random variable fall within $10$ standard deviations of the mean. This gives an immediate interpretation to our standard summary statistics.

To see how this statement is rather subtle, let us take a look at our running example again where $X$ is the random variable which takes the value $a-2$ with probability $p$, $a+2$ with probability $p$ and $a$ with probability $1-2p$. We saw that the mean was $a$ and the standard deviation was $2\sqrt{2p}$. This means, if we take Chebyshev's inequality :eqref:`eq_chebyshev` with $\alpha = 2$, we see that the expression is

$$P\left(X \not\in [a - 4\sqrt{2p}, a + 4\sqrt{2p}]\right) \le \frac{1}{4}.$$

This means that $75\%$ of the time, this random variable will fall within this interval for any value of $p$. Now, notice that as $p \rightarrow 0$, this interval also converges to the single point $a$.
But we know that our random variable takes the values $a-2, a$, and $a+2$ only so eventually we can be certain $a-2$ and $a+2$ will fall outside the interval! The question is, at what $p$ does that happen. So we want to solve: for what $p$ does $a+4\sqrt{2p} = a+2$, which is solved when $p=1/8$, which is *exactly* the first $p$ where it could possibly happen without violating our claim that no more than $1/4$ of samples from the distribution would fall outside the interval ($1/8$ to the left, and $1/8$ to the right).Let us visualize this. We will show the probability of getting the three values as three vertical bars with height proportional to the probability. The interval will be drawn as a horizontal line in the middle. The first plot shows what happens for $p > 1/8$ where the interval safely contains all points. | # Define a helper to plot these figures
def plot_chebyshev(a, p):
d2l.set_figsize()
d2l.plt.stem([a-2, a, a+2], [p, 1-2*p, p], use_line_collection=True)
d2l.plt.xlim([-4, 4])
d2l.plt.xlabel('x')
d2l.plt.ylabel('p.m.f.')
d2l.plt.hlines(0.5, a - 4 * np.sqrt(2 * p),
a + 4 * np.sqrt(2 * p), 'black', lw=4)
d2l.plt.vlines(a - 4 * np.sqrt(2 * p), 0.53, 0.47, 'black', lw=1)
d2l.plt.vlines(a + 4 * np.sqrt(2 * p), 0.53, 0.47, 'black', lw=1)
d2l.plt.title(f'p = {p:.3f}')
d2l.plt.show()
# Plot interval when p > 1/8
plot_chebyshev(0.0, 0.2) | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
The second shows that at $p = 1/8$, the interval exactly touches the two points. This shows that the inequality is *sharp*, since no smaller interval could be taken while keeping the inequality true. | # Plot interval when p = 1/8
plot_chebyshev(0.0, 0.125) | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
The third shows that for $p < 1/8$ the interval only contains the center. This does not invalidate the inequality since we only needed to ensure that no more than $1/4$ of the probability falls outside the interval, which means that once $p < 1/8$, the two points at $a-2$ and $a+2$ can be discarded. | # Plot interval when p < 1/8
plot_chebyshev(0.0, 0.05) | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
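For this family of distributions, Chebyshev's inequality can also be verified exactly rather than visually: the probability of landing strictly outside $[\mu_X - \alpha\sigma_X, \mu_X + \alpha\sigma_X]$ is either $2p$ or $0$, and in either case it never exceeds $1/\alpha^2$. A small sketch in plain NumPy (the helper name is illustrative):

```python
import numpy as np

def prob_outside(p, alpha, a=0.0):
    # Exact P(|X - mu| > alpha * sigma) for the three-point variable
    xs = np.array([a - 2, a, a + 2])
    ps = np.array([p, 1 - 2 * p, p])
    mu = np.sum(xs * ps)
    sigma = np.sqrt(np.sum(xs**2 * ps) - mu**2)
    return float(np.sum(ps[np.abs(xs - mu) > alpha * sigma]))

# Chebyshev's bound holds for every combination tried
for p in [0.05, 0.125, 0.2, 0.4]:
    for alpha in [1.0, 2.0, 3.0]:
        assert prob_outside(p, alpha) <= 1 / alpha**2 + 1e-12
```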
## Means and Variances in the Continuum

This has all been in terms of discrete random variables, but the case of continuous random variables is similar. To intuitively understand how this works, imagine that we split the real number line into intervals of length $\epsilon$ given by $(\epsilon i, \epsilon (i+1)]$. Once we do this, our continuous random variable has been made discrete and we can use :eqref:`eq_exp_def` to say that

$$\begin{aligned}\mu_X & \approx \sum_{i} (\epsilon i)P(X \in (\epsilon i, \epsilon (i+1)]) \\& \approx \sum_{i} (\epsilon i)p_X(\epsilon i)\epsilon, \\\end{aligned}$$

where $p_X$ is the density of $X$. This is an approximation to the integral of $xp_X(x)$, so we can conclude that

$$\mu_X = \int_{-\infty}^\infty xp_X(x) \; dx.$$

Similarly, using :eqref:`eq_var_def` the variance can be written as

$$\sigma^2_X = E[X^2] - \mu_X^2 = \int_{-\infty}^\infty x^2p_X(x) \; dx - \left(\int_{-\infty}^\infty xp_X(x) \; dx\right)^2.$$

Everything stated above about the mean, the variance, and the standard deviation still applies in this case. For instance, if we consider the random variable with density

$$p(x) = \begin{cases}1 & x \in [0,1], \\0 & \text{otherwise},\end{cases}$$

we can compute

$$\mu_X = \int_{-\infty}^\infty xp(x) \; dx = \int_0^1 x \; dx = \frac{1}{2},$$

and

$$\sigma_X^2 = \int_{-\infty}^\infty x^2p(x) \; dx - \left(\frac{1}{2}\right)^2 = \frac{1}{3} - \frac{1}{4} = \frac{1}{12}.$$

As a warning, let us examine one more example, known as the *Cauchy distribution*. This is the distribution with p.d.f. given by

$$p(x) = \frac{1}{1+x^2}.$$
x = np.arange(-5, 5, 0.01)
p = 1 / (1 + x**2)
d2l.plot(x, p, 'x', 'p.d.f.') | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
This function looks innocent, and indeed consulting a table of integrals will show it has area $\pi$ under it (we drop the normalizing constant $1/\pi$ here since it does not affect the convergence questions below), and thus, suitably rescaled, it defines a continuous random variable. To see what goes astray, let us try to compute the variance of this. This would involve using :eqref:`eq_var_def` computing $$\int_{-\infty}^\infty \frac{x^2}{1+x^2}\; dx.$$The function on the inside looks like this:
x = np.arange(-20, 20, 0.01)
p = x**2 / (1 + x**2)
d2l.plot(x, p, 'x', 'integrand') | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
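We can make the divergence concrete numerically: the truncated integral $\int_{-N}^{N} \frac{x^2}{1+x^2}\,dx = 2(N - \arctan N)$ keeps growing roughly like $2N$ as $N$ increases, rather than settling down to a finite value. A quick check in plain NumPy (the helper name is illustrative):

```python
import numpy as np

def truncated_moment(N, pts_per_unit=100):
    # Numerically integrate x^2 / (1 + x^2) over [-N, N]
    x = np.linspace(-N, N, 2 * N * pts_per_unit + 1)
    return float(np.trapz(x**2 / (1 + x**2), x))

I10, I100 = truncated_moment(10), truncated_moment(100)
print(I10, I100)  # roughly 17.1 and 196.9 -- growing like 2N, not converging
```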
This function clearly has infinite area under it since it is essentially the constant one with a small dip near zero, and indeed we could show that

$$\int_{-\infty}^\infty \frac{x^2}{1+x^2}\; dx = \infty.$$

This means it does not have a well-defined finite variance.

However, looking deeper shows an even more disturbing result. Let us try to compute the mean using :eqref:`eq_exp_def`. Using the change of variables formula, we see

$$\mu_X = \int_{-\infty}^{\infty} \frac{x}{1+x^2} \; dx = \frac{1}{2}\int_1^\infty \frac{1}{u} \; du.$$

The integral inside is the definition of the logarithm, so this is in essence $\log(\infty) = \infty$, so there is no well-defined average value either!

Machine learning scientists define their models so that we most often do not need to deal with these issues, and will in the vast majority of cases deal with random variables with well-defined means and variances. However, every so often random variables with *heavy tails* (that is, those random variables where the probabilities of getting large values are large enough to make things like the mean or variance undefined) are helpful in modeling physical systems, thus it is worth knowing that they exist.

## Joint Density Functions

The above work all assumes we are working with a single real valued random variable. But what if we are dealing with two or more potentially highly correlated random variables? This circumstance is the norm in machine learning: imagine random variables like $R_{i, j}$ which encode the red value of the pixel at the $(i, j)$ coordinate in an image, or $P_t$ which is a random variable given by a stock price at time $t$. Nearby pixels tend to have similar color, and nearby times tend to have similar prices. We cannot treat them as separate random variables, and expect to create a successful model (we will see in :numref:`sec_naive_bayes` a model that under-performs due to such an assumption).
We need to develop the mathematical language to handle these correlated continuous random variables.Thankfully, with the multiple integrals in :numref:`sec_integral_calculus` we can develop such a language. Suppose that we have, for simplicity, two random variables $X, Y$ which can be correlated. Then, similar to the case of a single variable, we can ask the question:$$P(X \;\text{is in an}\; \epsilon \text{-sized interval around}\; x \; \text{and} \;Y \;\text{is in an}\; \epsilon \text{-sized interval around}\; y ).$$Similar reasoning to the single variable case shows that this should be approximately$$P(X \;\text{is in an}\; \epsilon \text{-sized interval around}\; x \; \text{and} \;Y \;\text{is in an}\; \epsilon \text{-sized interval around}\; y ) \approx \epsilon^{2}p(x, y),$$for some function $p(x, y)$. This is referred to as the joint density of $X$ and $Y$. Similar properties are true for this as we saw in the single variable case. Namely:* $p(x, y) \ge 0$;* $\int _ {\mathbb{R}^2} p(x, y) \;dx \;dy = 1$;* $P((X, Y) \in \mathcal{D}) = \int _ {\mathcal{D}} p(x, y) \;dx \;dy$.In this way, we can deal with multiple, potentially correlated random variables. If we wish to work with more than two random variables, we can extend the multivariate density to as many coordinates as desired by considering $p(\mathbf{x}) = p(x_1, \ldots, x_n)$. The same properties of being non-negative, and having total integral of one still hold. Marginal DistributionsWhen dealing with multiple variables, we oftentimes want to be able to ignore the relationships and ask, "how is this one variable distributed?" Such a distribution is called a *marginal distribution*.To be concrete, let us suppose that we have two random variables $X, Y$ with joint density given by $p _ {X, Y}(x, y)$. We will be using the subscript to indicate what random variables the density is for. 
The question of finding the marginal distribution is taking this function, and using it to find $p _ X(x)$.As with most things, it is best to return to the intuitive picture to figure out what should be true. Recall that the density is the function $p _ X$ so that$$P(X \in [x, x+\epsilon]) \approx \epsilon \cdot p _ X(x).$$There is no mention of $Y$, but if all we are given is $p _{X, Y}$, we need to include $Y$ somehow. We can first observe that this is the same as$$P(X \in [x, x+\epsilon] \text{, and } Y \in \mathbb{R}) \approx \epsilon \cdot p _ X(x).$$Our density does not directly tell us about what happens in this case, we need to split into small intervals in $y$ as well, so we can write this as$$\begin{aligned}\epsilon \cdot p _ X(x) & \approx \sum _ {i} P(X \in [x, x+\epsilon] \text{, and } Y \in [\epsilon \cdot i, \epsilon \cdot (i+1)]) \\& \approx \sum _ {i} \epsilon^{2} p _ {X, Y}(x, \epsilon\cdot i).\end{aligned}$$:label:`fig_marginal`This tells us to add up the value of the density along a series of squares in a line as is shown in :numref:`fig_marginal`. Indeed, after canceling one factor of epsilon from both sides, and recognizing the sum on the right is the integral over $y$, we can conclude that$$\begin{aligned} p _ X(x) & \approx \sum _ {i} \epsilon p _ {X, Y}(x, \epsilon\cdot i) \\ & \approx \int_{-\infty}^\infty p_{X, Y}(x, y) \; dy.\end{aligned}$$Thus we see$$p _ X(x) = \int_{-\infty}^\infty p_{X, Y}(x, y) \; dy.$$This tells us that to get a marginal distribution, we integrate over the variables we do not care about. This process is often referred to as *integrating out* or *marginalized out* the unneeded variables. CovarianceWhen dealing with multiple random variables, there is one additional summary statistic which is helpful to know: the *covariance*. 
This measures the degree that two random variables fluctuate together.Suppose that we have two random variables $X$ and $Y$, to begin with, let us suppose they are discrete, taking on values $(x_i, y_j)$ with probability $p_{ij}$. In this case, the covariance is defined as$$\sigma_{XY} = \mathrm{Cov}(X, Y) = \sum_{i, j} (x_i - \mu_X) (y_j-\mu_Y) p_{ij} = E[XY] - E[X]E[Y].$$:eqlabel:`eq_cov_def`To think about this intuitively: consider the following pair of random variables. Suppose that $X$ takes the values $1$ and $3$, and $Y$ takes the values $-1$ and $3$. Suppose that we have the following probabilities$$\begin{aligned}P(X = 1 \; \text{and} \; Y = -1) & = \frac{p}{2}, \\P(X = 1 \; \text{and} \; Y = 3) & = \frac{1-p}{2}, \\P(X = 3 \; \text{and} \; Y = -1) & = \frac{1-p}{2}, \\P(X = 3 \; \text{and} \; Y = 3) & = \frac{p}{2},\end{aligned}$$where $p$ is a parameter in $[0,1]$ we get to pick. Notice that if $p=1$ then they are both always their minimum or maximum values simultaneously, and if $p=0$ they are guaranteed to take their flipped values simultaneously (one is large when the other is small and vice versa). If $p=1/2$, then the four possibilities are all equally likely, and neither should be related. Let us compute the covariance. First, note $\mu_X = 2$ and $\mu_Y = 1$, so we may compute using :eqref:`eq_cov_def`:$$\begin{aligned}\mathrm{Cov}(X, Y) & = \sum_{i, j} (x_i - \mu_X) (y_j-\mu_Y) p_{ij} \\& = (1-2)(-1-1)\frac{p}{2} + (1-2)(3-1)\frac{1-p}{2} + (3-2)(-1-1)\frac{1-p}{2} + (3-2)(3-1)\frac{p}{2} \\& = 4p-2.\end{aligned}$$When $p=1$ (the case where they are both maximally positive or negative at the same time) the covariance is $2$. When $p=0$ (the case where they are flipped) the covariance is $-2$. Finally, when $p=1/2$ (the case where they are unrelated), the covariance is $0$. Thus we see that the covariance measures how these two random variables are related.A quick note on the covariance is that it only measures these linear relationships.
More complex relationships like $X = Y^2$ where $Y$ is randomly chosen from $\{-2, -1, 0, 1, 2\}$ with equal probability can be missed. Indeed a quick computation shows that these random variables have covariance zero, despite one being a deterministic function of the other.For continuous random variables, much the same story holds. At this point, we are pretty comfortable with doing the transition between discrete and continuous, so we will provide the continuous analogue of :eqref:`eq_cov_def` without any derivation.$$\sigma_{XY} = \int_{\mathbb{R}^2} (x-\mu_X)(y-\mu_Y)p(x, y) \;dx \;dy.$$For visualization, let us take a look at a collection of random variables with tunable covariance. | # Plot a few random variables adjustable covariance
covs = [-0.9, 0.0, 1.2]
d2l.plt.figure(figsize=(12, 3))
for i in range(3):
X = np.random.normal(0, 1, 500)
Y = covs[i]*X + np.random.normal(0, 1, (500))
d2l.plt.subplot(1, 4, i+1)
d2l.plt.scatter(X.asnumpy(), Y.asnumpy())
d2l.plt.xlabel('X')
d2l.plt.ylabel('Y')
d2l.plt.title(f'cov = {covs[i]}')
d2l.plt.show() | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
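The closed form $\mathrm{Cov}(X, Y) = 4p-2$ derived above can be checked numerically; a small sketch in plain Python (independent of the `d2l` helpers):

```python
def discrete_cov(p):
    # Joint table for X in {1, 3} and Y in {-1, 3} from the example above
    xs = [1, 1, 3, 3]
    ys = [-1, 3, -1, 3]
    probs = [p / 2, (1 - p) / 2, (1 - p) / 2, p / 2]
    mu_x = sum(x * q for x, q in zip(xs, probs))
    mu_y = sum(y * q for y, q in zip(ys, probs))
    return sum((x - mu_x) * (y - mu_y) * q for x, y, q in zip(xs, ys, probs))

for p in [0.0, 0.5, 1.0]:
    print(p, discrete_cov(p))  # -2.0, 0.0, 2.0, i.e. 4p - 2
```

The three printed values match the endpoints and midpoint of the $4p-2$ line.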
Let us see some properties of covariances:* For any random variable $X$, $\mathrm{Cov}(X, X) = \mathrm{Var}(X)$.* For any random variables $X, Y$ and numbers $a$ and $b$, $\mathrm{Cov}(aX+b, Y) = \mathrm{Cov}(X, aY+b) = a\mathrm{Cov}(X, Y)$.* If $X$ and $Y$ are independent then $\mathrm{Cov}(X, Y) = 0$.In addition, we can use the covariance to expand a relationship we saw before. Recall that if $X$ and $Y$ are two independent random variables then$$\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y).$$With knowledge of covariances, we can expand this relationship. Indeed, some algebra can show that in general,$$\mathrm{Var}(X+Y) = \mathrm{Var}(X) + \mathrm{Var}(Y) + 2\mathrm{Cov}(X, Y).$$This allows us to generalize the variance summation rule for correlated random variables. CorrelationAs we did in the case of means and variances, let us now consider units. If $X$ is measured in one unit (say inches), and $Y$ is measured in another (say dollars), the covariance is measured in the product of these two units $\text{inches} \times \text{dollars}$. These units can be hard to interpret. What we will often want in this case is a unit-less measurement of relatedness. Indeed, often we do not care about exact quantitative correlation, but rather ask if the correlation is in the same direction, and how strong the relationship is.To see what makes sense, let us perform a thought experiment. Suppose that we convert our random variables in inches and dollars to be in inches and cents. In this case the random variable $Y$ is multiplied by $100$. If we work through the definition, this means that $\mathrm{Cov}(X, Y)$ will be multiplied by $100$. Thus we see that in this case a change of units changes the covariance by a factor of $100$. Thus, to find our unit-invariant measure of correlation, we will need to divide by something else that also gets scaled by $100$. Indeed we have a clear candidate, the standard deviation!
Indeed if we define the *correlation coefficient* to be$$\rho(X, Y) = \frac{\mathrm{Cov}(X, Y)}{\sigma_{X}\sigma_{Y}},$$:eqlabel:`eq_cor_def`we see that this is a unit-less value. A little mathematics can show that this number is between $-1$ and $1$ with $1$ meaning maximally positively correlated, whereas $-1$ means maximally negatively correlated.Returning to our explicit discrete example above, we can see that $\sigma_X = 1$ and $\sigma_Y = 2$, so we can compute the correlation between the two random variables using :eqref:`eq_cor_def` to see that$$\rho(X, Y) = \frac{4p-2}{1\cdot 2} = 2p-1.$$This now ranges between $-1$ and $1$ with the expected behavior of $1$ meaning most correlated, and $-1$ meaning minimally correlated.As another example, consider $X$ as any random variable, and $Y=aX+b$ as any linear deterministic function of $X$. Then, one can compute that$$\sigma_{Y} = \sigma_{aX+b} = |a|\sigma_{X},$$$$\mathrm{Cov}(X, Y) = \mathrm{Cov}(X, aX+b) = a\mathrm{Cov}(X, X) = a\mathrm{Var}(X),$$and thus by :eqref:`eq_cor_def` that$$\rho(X, Y) = \frac{a\mathrm{Var}(X)}{|a|\sigma_{X}^2} = \frac{a}{|a|} = \mathrm{sign}(a).$$Thus we see that the correlation is $+1$ for any $a > 0$, and $-1$ for any $a < 0$ illustrating that correlation measures the degree and directionality the two random variables are related, not the scale that the variation takes.Let us again plot a collection of random variables with tunable correlation. | # Plot a few random variables adjustable correlations
cors = [-0.9, 0.0, 1.0]
d2l.plt.figure(figsize=(12, 3))
for i in range(3):
X = np.random.normal(0, 1, 500)
Y = cors[i] * X + np.sqrt(1 - cors[i]**2) * np.random.normal(0, 1, 500)
d2l.plt.subplot(1, 4, i + 1)
d2l.plt.scatter(X.asnumpy(), Y.asnumpy())
d2l.plt.xlabel('X')
d2l.plt.ylabel('Y')
d2l.plt.title(f'cor = {cors[i]}')
d2l.plt.show() | _____no_output_____ | MIT | d2l-en/mxnet/chapter_appendix-mathematics-for-deep-learning/random-variables.ipynb | gr8khan/d2lai |
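Similarly, the identity $\rho(X, aX+b) = \mathrm{sign}(a)$ can be verified on any deterministic sample with nonzero variance; a quick NumPy sketch:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 101)
for a, b in [(2.0, 3.0), (-0.5, 1.0)]:
    y = a * x + b
    r = np.corrcoef(x, y)[0, 1]   # sample correlation coefficient
    print(a, round(r, 6))         # +1.0 when a > 0, -1.0 when a < 0
```

Up to floating-point error, the correlation is exactly $\pm 1$ regardless of the magnitude of $a$ or the offset $b$.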
Copyright 2018 The TensorFlow Authors. | #@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License. | _____no_output_____ | Apache-2.0 | tools/templates/subsite/g3doc/tutorials/notebook.ipynb | manivaradarajan/docs |
Gaia Real data!gully Sept 28, 2017 Outline:1. Batch download GaiaSource **Import these first-- I auto import them every time!:** | import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%config InlineBackend.figure_format = 'retina'
%matplotlib inline | _____no_output_____ | MIT | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia |
1. Batch download the data | import os
i_max = 256 | _____no_output_____ | MIT | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia |
```python
for j in range(21):
    if j == 20:
        i_max = 111
    for i in range(i_max):
        fn = 'http://cdn.gea.esac.esa.int/Gaia/gaia_source/csv/GaiaSource_000-{:03d}-{:03d}.csv.gz'.format(j,i)
        executable = 'wget --directory-prefix=../data/GaiaSource/ '+fn
        print(executable)
        os.system(executable)
```
Uncomment to actually download | ! ls ../data/GaiaSource/ | tail | GaiaSource_000-000-069.csv.gz
GaiaSource_000-000-070.csv.gz
GaiaSource_000-000-071.csv.gz
GaiaSource_000-000-072.csv.gz
GaiaSource_000-000-073.csv.gz
GaiaSource_000-000-074.csv.gz
GaiaSource_000-000-075.csv.gz
GaiaSource_000-000-076.csv.gz
GaiaSource_000-000-077.csv.gz
GaiaSource_000-000-078.csv.gz
| MIT | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia |
How many files are there? | 20*256+111 | _____no_output_____ | MIT | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia |
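As a sanity check on that count, the expected file list can be rebuilt from the same naming pattern without downloading anything; a small sketch (note that `range(111)` in the last block yields the files `000` through `110`, i.e. 111 files, so the grand total is 5231):

```python
# Rebuild the expected GaiaSource file names from the wget pattern above (no download).
names = []
for j in range(21):
    i_max = 111 if j == 20 else 256
    for i in range(i_max):
        names.append('GaiaSource_000-{:03d}-{:03d}.csv.gz'.format(j, i))

print(len(names))            # 20*256 + 111 = 5231
print(names[0], names[-1])
```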
Each file is about 40 MB. How many GB total is the dataset? | 5231*40/1000 | _____no_output_____ | MIT | notebooks/adrasteia_04-01_get_gaiaSource_data.ipynb | gully/adrasteia |
Capstone Project - Madiun Cafe Location Introduction / business problem I am looking to open a cafe in Madiun City, **the question is**, where is the best location to open a new cafe? **The background of the problem** it is not worth setting up a cafe in the close proximity of existing ones, because the location of the new cafe has a significant impact on the expected returns. Data**A description of the data**: the data used to solve this problem is geolocation data collected from [FourSquare](https://foursquare.com/). Data is a single table, containing the locations of the existing cafes. **Explanation** of the location data: the columns `(lat, lng)`, where `lat` stands for latitude and `lng` for longitude. **Example** of the data: | Name | Shortname | Latitude | Longitude | | ------------------------ | ------------ | --------- | ---------- | | Markas Kopi | Coffee Shop | -7.648215 | 111.530610 | | Cafe Latté | Coffee Shop | -7.635934 | 111.519315 | | Coffee Toffee | Coffee Shop | -7.622158 | 111.536357 |**Data will be used**: by knowing the locations of already existing cafes, I will be using Kernel Density Estimation to determine the area of influence of the existing cafes, and recommend a new location which is not in the area of influence of any existing cafe. Prep | !conda install -c conda-forge folium=0.5.0 --yes
import pandas as pd
import folium
import requests
# The code was removed by Watson Studio for sharing.
request_parameters = {
"client_id": CLIENT_ID,
"client_secret": CLIENT_SECRET,
"v": VERSION,
"section": "coffee",
"near": "Madiun",
"radius": 1000,
"limit": 50}
data = requests.get("https://api.foursquare.com/v2/venues/explore", params=request_parameters)
d = data.json()["response"]
d.keys()
d["headerLocationGranularity"], d["headerLocation"], d["headerFullLocation"]
d["suggestedBounds"], d["totalResults"]
d["geocode"]
d["groups"][0].keys()
d["groups"][0]["type"], d["groups"][0]["name"]
items = d["groups"][0]["items"]
print("items: %i" % len(items))
items[0]
items[1]
df_raw = []
for item in items:
venue = item["venue"]
categories, uid, name, location = venue["categories"], venue["id"], venue["name"], venue["location"]
assert len(categories) == 1
shortname = categories[0]["shortName"]
if not "address" in location:
address = ''
else:
address = location["address"]
if not "postalCode" in location:
postalcode = ''
else:
postalcode = location["postalCode"]
lat = location["lat"]
lng = location["lng"]
datarow = (uid, name, shortname, address, postalcode, lat, lng)
df_raw.append(datarow)
df = pd.DataFrame(df_raw, columns=["uid", "name", "shortname", "address", "postalcode", "lat", "lng"])
print("total %i cafes" % len(df))
df.head()
madiun_center = d["geocode"]["center"]
madiun_center | _____no_output_____ | MIT | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe |
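To quantify "close proximity" later on, a great-circle distance helper is handy; a minimal sketch (the function name and the 6371 km Earth radius are my own choices, not part of the Foursquare response):

```python
import math

def haversine_km(lat1, lng1, lat2, lng2):
    # Great-circle distance between two (lat, lng) points, in kilometres
    R = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# one degree of latitude is roughly 111 km
print(round(haversine_km(-7.63, 111.52, -6.63, 111.52), 1))  # 111.2
```

Applied to the `(lat, lng)` columns of `df`, this gives the distance from any candidate location to each existing cafe.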
Applying Heatmap to Map A density-based estimator is a good way to determine where to start a new coffee business. Using the HeatMap plugin in Folium, we visualize all the existing cafes on the same map: |
from folium import plugins
# create map of Helsinki using latitude and longitude values
map_madiun = folium.Map(location=[madiun_center["lat"], madiun_center["lng"]], zoom_start=14)
folium.LatLngPopup().add_to(map_madiun)
def add_markers(df):
for (j, row) in df.iterrows():
label = folium.Popup(row["name"], parse_html=True)
folium.CircleMarker(
[row["lat"], row["lng"]],
radius=10,
popup=label,
color='blue',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_madiun)
add_markers(df)
hm_data = df[["lat", "lng"]].values.tolist()  # .as_matrix() is deprecated/removed in newer pandas
map_madiun.add_child(plugins.HeatMap(hm_data))
map_madiun
| /opt/conda/envs/Python36/lib/python3.6/site-packages/ipykernel/__main__.py:22: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
| MIT | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe |
Result After further analysis, the best location for a new cafe is on Tulus Bakti Street, because it is not in close proximity to other cafes, is located near a school, and lies in the densest population region in Madiun. [BPS DATA](https://madiunkota.bps.go.id/statictable/2015/06/08/141/jumlah-penduduk-menurut-kecamatan-dan-agama-yang-dianut-di-kota-madiun-2013-.html) | lat = -7.6393
lng = 111.5285
school_1_lat = -7.6403
school_1_lng = 111.5316
map_best = folium.Map(location=[lat, lng], zoom_start=17)
# add_markers() draws on map_madiun, so add the cafe markers to map_best directly
for (j, row) in df.iterrows():
    folium.CircleMarker(
        [row["lat"], row["lng"]],
        radius=10,
        popup=folium.Popup(row["name"], parse_html=True),
        color='blue',
        fill=True,
        fill_color='#3186cc',
        fill_opacity=0.7).add_to(map_best)
folium.CircleMarker(
[school_1_lat, school_1_lng],
radius=15,
popup="School",
color='Yellow',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_best)
folium.CircleMarker(
[lat, lng],
radius=15,
popup="Best Location!",
color='red',
fill=True,
fill_color='#3186cc',
fill_opacity=0.7,
parse_html=False).add_to(map_best)
map_best | _____no_output_____ | MIT | Week 4/Capstone Project - Madiun Cafe Location Final.ipynb | Symefa/Coursera_Capstone-Madiun-Cafe |
**Artificial Intelligence - MSc**This notebook is designed specially for the moduleET5003 - MACHINE LEARNING APPLICATIONS Instructor: Enrique NaredoET5003_BayesianNN© All rights reserved to the author, do not share outside this module. Introduction A [Bayesian network](https://en.wikipedia.org/wiki/Bayesian_network) (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). * Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. * For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. * Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. **Acknowledgement**This notebook is refurbished taking source code from Alessio Benavoli's webpage and from the libraries numpy, GPy, pylab, and pymc3. Libraries | # Suppressing Warnings:
import warnings
warnings.filterwarnings("ignore")
# https://pypi.org/project/GPy/
!pip install gpy
import GPy as GPy
import numpy as np
import pylab as pb
import pymc3 as pm
%matplotlib inline | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
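The disease/symptom example from the introduction is a direct application of Bayes' rule; a toy numeric sketch (all probabilities below are made-up values for illustration):

```python
# P(disease | symptom) = P(symptom | disease) * P(disease) / P(symptom)
p_d = 0.01              # assumed prevalence of the disease
p_s_given_d = 0.90      # assumed P(symptom | disease)
p_s_given_h = 0.05      # assumed P(symptom | healthy)

p_s = p_s_given_d * p_d + p_s_given_h * (1 - p_d)  # law of total probability
posterior = p_s_given_d * p_d / p_s
print(round(posterior, 4))  # 0.1538
```

Even with a strongly indicative symptom, the low prior keeps the posterior probability of disease modest.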
Data generationGenerate data from a nonlinear function and use a Gaussian Process to sample it. | # seed the legacy random number generator
# to replicate experiments
seed = None
#seed = 7
np.random.seed(seed)
# Gaussian Processes
# https://gpy.readthedocs.io/en/deploy/GPy.kern.html
# Radial Basis Functions
# https://scikit-learn.org/stable/auto_examples/svm/plot_rbf_parameters.html
# kernel is a function that specifies the degree of similarity
# between variables given their relative positions in parameter space
kernel = GPy.kern.RBF(input_dim=1,lengthscale=0.15,variance=0.2)
print(kernel)
# number of samples
num_samples_train = 250
num_samples_test = 200
# intervals to sample
a, b, c = 0.2, 0.6, 0.8
# points evenly spaced over [0,1]
interval_1 = np.random.rand(int(num_samples_train/2))*b - c
interval_2 = np.random.rand(int(num_samples_train/2))*b + c
X_new_train = np.sort(np.hstack([interval_1,interval_2]))
X_new_test = np.linspace(-1,1,num_samples_test)
X_new_all = np.hstack([X_new_train,X_new_test]).reshape(-1,1)
# vector of the means
μ_new = np.zeros((len(X_new_all)))
# covariance matrix
C_new = kernel.K(X_new_all,X_new_all)
# noise factor
noise_new = 0.1
# generate samples path with mean μ and covariance C
TF_new = np.random.multivariate_normal(μ_new,C_new,1)[0,:]
y_new_train = TF_new[0:len(X_new_train)] + np.random.randn(len(X_new_train))*noise_new
y_new_test = TF_new[len(X_new_train):] + np.random.randn(len(X_new_test))*noise_new
TF_new = TF_new[len(X_new_train):] | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
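The RBF kernel used above has a simple closed form; a plain-NumPy sketch with the same lengthscale and variance (this writes out by hand what `GPy.kern.RBF` computes per pair of points):

```python
import numpy as np

def rbf(x1, x2, lengthscale=0.15, variance=0.2):
    # k(x, x') = variance * exp(-(x - x')^2 / (2 * lengthscale^2))
    return variance * np.exp(-(x1 - x2) ** 2 / (2 * lengthscale ** 2))

print(rbf(0.0, 0.0))             # the variance at zero distance: 0.2
print(round(rbf(0.0, 0.15), 4))  # one lengthscale away: 0.2 * exp(-1/2)
```

Similarity decays smoothly with distance, which is what makes the sampled function paths smooth.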
In this example, first generate a nonlinear functions and then generate noisy training data from that function.The constrains are:* Training samples $x$ belong to either interval $[-0.8,-0.2]$ or $[0.2,0.8]$.* There is not data training samples from the interval $[-0.2,0.2]$. * The goal is to evaluate the extrapolation error outside in the interval $[-0.2,0.2]$. | # plot
pb.figure()
pb.plot(X_new_test,TF_new,c='b',label='True Function',zorder=100)
# training data
pb.scatter(X_new_train,y_new_train,c='g',label='Train Samples',alpha=0.5)
pb.xlabel("x",fontsize=16);
pb.ylabel("y",fontsize=16,rotation=0)
pb.legend()
pb.savefig("New_data.pdf") | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
Bayesian NNWe address the previous nonlinear regression problem by using a Bayesian NN.**The model is basically very similar to polynomial regression**. We first define the nonlinear function (NN)and the place a prior over the unknown parameters. We then compute the posterior. | # https://theano-pymc.readthedocs.io/en/latest/
import theano
# add a column of ones to include an intercept in the model
x1 = np.vstack([np.ones(len(X_new_train)), X_new_train]).T
floatX = theano.config.floatX
l = 15
# Initialize random weights between each layer
# we do that to help the numerical algorithm that computes the posterior
init_1 = np.random.randn(x1.shape[1], l).astype(floatX)
init_out = np.random.randn(l).astype(floatX)
# pymc3 model as neural_network
with pm.Model() as neural_network:
# we convert the data in theano type so we can do dot products with the correct type.
ann_input = pm.Data('ann_input', x1)
ann_output = pm.Data('ann_output', y_new_train)
# Priors
# Weights from input to hidden layer
weights_in_1 = pm.Normal('w_1', 0, sigma=10,
shape=(x1.shape[1], l), testval=init_1)
# Weights from hidden layer to output
weights_2_out = pm.Normal('w_0', 0, sigma=10,
shape=(l,),testval=init_out)
# Build neural-network using tanh activation function
# Inner layer
act_1 = pm.math.tanh(pm.math.dot(ann_input,weights_in_1))
# Linear layer, like in Linear regression
act_out = pm.Deterministic('act_out',pm.math.dot(act_1, weights_2_out))
# standard deviation of noise
sigma = pm.HalfCauchy('sigma',5)
# Normal likelihood
out = pm.Normal('out',
act_out,
sigma=sigma,
observed=ann_output)
# this can be slow because there are many parameters
# some parameters
par1 = 100 # start with 100, then use 1000+
par2 = 1000 # start with 1000, then use 10000+
# neural network
with neural_network:
posterior = pm.sample(par1,tune=par2,chains=1) | WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
WARNING (theano.tensor.blas): We did not find a dynamic library in the library_dir of the library we use for blas. If you use ATLAS, make sure to compile it with dynamics library.
Only 100 samples in chain.
Auto-assigning NUTS sampler...
Initializing NUTS using jitter+adapt_diag...
Sequential sampling (1 chains in 1 job)
NUTS: [sigma, w_0, w_1]
| BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
Specifically, PyMC3 supports the following Variational Inference (VI) methods: * Automatic Differentiation Variational Inference (ADVI): 'advi' * ADVI full rank: 'fullrank_advi' * Stein Variational Gradient Descent (SVGD): 'svgd' * Amortized Stein Variational Gradient Descent (ASVGD): 'asvgd' * Normalizing Flow with default scale-loc flow (NFVI): 'nfvi' | # we can do instead an approximated inference
param3 = 1000 # start with 1000, then use 50000+
VI = 'advi' # 'advi', 'fullrank_advi', 'svgd', 'asvgd', 'nfvi'
OP = pm.adam # pm.adam, pm.sgd, pm.adagrad, pm.adagrad_window, pm.adadelta
LR = 0.01
with neural_network:
approx = pm.fit(param3, method=VI, obj_optimizer=pm.adam(learning_rate=LR))
# plot
pb.plot(approx.hist, label='Variational Inference: '+ VI.upper(), alpha=.3)
pb.legend(loc='upper right')
# Evidence Lower Bound (ELBO)
# https://en.wikipedia.org/wiki/Evidence_lower_bound
pb.ylabel('ELBO')
pb.xlabel('iteration');
# draw samples from variational posterior
D = 500
posterior = approx.sample(draws=D) | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
Now, we compute the prediction for each sample. * Note that we use `np.tanh` instead of `pm.math.tanh`for speed reason. * `pm.math.tanh` is slower outside a Pymc3 model because it converts all data in theano format.* It is convenient to do GPU-based training, but it is slow when we only need to compute predictions. | # add a column of ones to include an intercept in the model
x2 = np.vstack([np.ones(len(X_new_test)), X_new_test]).T
y_pred = []
for i in range(posterior['w_1'].shape[0]):
#inner layer
t1 = np.tanh(np.dot(posterior['w_1'][i,:,:].T,x2.T))
#outer layer
y_pred.append(np.dot(posterior['w_0'][i,:],t1))
# predictions
y_pred = np.array(y_pred) | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
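The prediction loop above is a two-layer tanh network evaluated once per posterior sample; the shapes can be sanity-checked with stand-in random weights (these are not the actual posterior draws):

```python
import numpy as np

rng = np.random.default_rng(0)
n_test, n_hidden = 200, 15
xt = np.vstack([np.ones(n_test), np.linspace(-1, 1, n_test)]).T  # (200, 2)
w1 = rng.normal(size=(2, n_hidden))   # input -> hidden weights
w0 = rng.normal(size=n_hidden)        # hidden -> output weights

t1 = np.tanh(np.dot(w1.T, xt.T))      # hidden activations, (15, 200)
y_hat = np.dot(w0, t1)                # one regression line, (200,)
print(t1.shape, y_hat.shape)
```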
We first plot the mean of `y_pred`, this is very similar to the prediction that Keras returns | # plot
pb.plot(X_new_test,TF_new,label='true')
pb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean')
pb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)
pb.legend()
pb.ylim([-1,1])
pb.xlabel("x",fontsize=16);
pb.ylabel("y",fontsize=16,rotation=0)
pb.savefig("BayesNN_mean.pdf") | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
Now, we plot the uncertainty, by plotting N nonlinear regression lines from the posterior | # plot
pb.plot(X_new_test,TF_new,label='true',zorder=100)
pb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean',zorder=100)
N = 500
# nonlinear regression lines
for i in range(N):
pb.plot(X_new_test,y_pred[i,:],c='gray',alpha=0.05)
pb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)
pb.xlabel("x",fontsize=16);
pb.ylabel("y",fontsize=16,rotation=0)
pb.ylim([-1,1.5])
pb.legend()
pb.savefig("BayesNN_samples.pdf")
# plot
pb.plot(X_new_test,TF_new,label='true',zorder=100)
pb.plot(X_new_test,y_pred.mean(axis=0),label='Bayes NN mean',zorder=100)
pb.scatter(X_new_train,y_new_train,c='r',alpha=0.5)
pb.xlabel("x",fontsize=16);
pb.ylabel("y",fontsize=16,rotation=0)
pb.ylim([-1,1.5])
pb.legend()
pb.savefig("BayesNN_mean.pdf") | _____no_output_____ | BSD-3-Clause | Week-5/ET5003_BayesianNN.ipynb | davidnol/ET5003_SEM1_2021-2 |
Functions- Functions can be used to define reusable code, and to organize and simplify it- Generally, in real development one function implements one small piece of functionality- One class implements one large piece of functionality- Likewise, a function should not be longer than one screen All functions in Python actually have a return value (return None); if you do not write a return, Python will not display the None; if you do write a return, that value will be returned. | def HJN():
print('Hello')
return 1000
b=HJN()
print(b)
HJN
def panduan(number):
if number % 2 == 0:
print('O')
else:
print('J')
panduan(number=1)
panduan(2) | O
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
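The point about the implicit `return None` can be shown with English-named equivalents of the cells above; a small sketch:

```python
def no_return():
    print('side effect only')   # no return statement anywhere

def with_return():
    return 1000

result1 = no_return()     # prints, then binds None
result2 = with_return()
print(result1, result2)   # None 1000
```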
Defining a function def function_name(list of parameters): do something- random, range and print, which we used before, are in fact functions or classes If a function parameter has a default value, then when you call that function you may omit the argument, in which case the default value is used; otherwise the value you pass in is used. | import random
def hahah():
n = random.randint(0,5)
while 1:
N = eval(input('>>'))
if n == N:
print('smart')
break
        elif n < N:
            print('too small')
        elif n > N:
            print('too big')
| _____no_output_____ | Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
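The default-value behaviour described above, in a two-line sketch:

```python
def greet(name, greeting='Hello'):
    # greeting falls back to its default when the caller omits it
    return greeting + ', ' + name

print(greet('Ada'))        # Hello, Ada
print(greet('Ada', 'Hi'))  # Hi, Ada
```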
Calling a function- functionName()- the "()" is what performs the call | def H():
print('hahaha')
def B():
H()
B()
def A(f):
f()
A(B) | hahaha
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
Functions with and without return values- return hands back a value- return can hand back multiple values- In general, when several functions cooperate to complete one piece of functionality, there will be return values - Of course, you can also explicitly return None EP: | def main():
print(min(min(5,6),(51,6)))
def min(n1,n2):
a = n1
if n2 < a:
a = n2
main() | _____no_output_____ | Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
Parameter types and keyword arguments- ordinary parameters- multiple parameters- default-value parameters- variable-length parameters ordinary parameters multiple parameters default-value parameters keyword-only (forced naming) | def U(str_):
    xiaoxie = 0   # lowercase letter count
    daxie = 0     # uppercase letter count
    shuzi = 0     # digit count
    for i in str_:
        ASCII = ord(i)
        if 97<=ASCII<=122:
            xiaoxie +=1
        elif 65<=ASCII<=90:
            daxie += 1
        elif 48<=ASCII<=57:
            shuzi += 1
    return xiaoxie,daxie,shuzi
U('HJi12') | H
J
i
1
2
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
不定长参数- \*args> - 不定长,来多少装多少,不装也是可以的 - 返回的数据类型是元组 - args 名字是可以修改的,只是我们约定俗成的是args- \**kwargs > - 返回的字典 - 输入的一定要是表达式(键值对)- name,\*args,name2,\**kwargs 使用参数名 | def TT(a,b)
def TT(*args,**kwargs):
print(kwargs)
print(args)
TT(1,2,3,4,6,a=100,b=1000)
{'key':'value'}
TT(1,2,4,5,7,8,9,)
def B(name1,nam3):
pass
B(name1=100, nam3=2)
def sum_(*args,A='sum'):
res = 0
count = 0
for i in args:
res +=i
count += 1
if A == "sum":
return res
elif A == "mean":
mean = res / count
return res,mean
else:
        print(A, 'is not available yet')
sum_(-1,0,1,4,A='var')
'aHbK134'.__iter__
b = 'asdkjfh'
for i in b :
    print(i)
2,5
2 + 22 + 222 + 2222 + 22222 | _____no_output_____ | Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
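The packing behaviour of `*args` (a tuple) and `**kwargs` (a dict) demonstrated by `TT` above can be checked directly; a small sketch:

```python
def capture(*args, **kwargs):
    return args, kwargs

a, k = capture(1, 2, 3, x=10)
print(type(a).__name__, a)  # tuple (1, 2, 3)
print(type(k).__name__, k)  # dict {'x': 10}
```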
Variable scope- local variables (local)- global variables (global)- the globals() function returns a dictionary of the global variables, including all imported ones- the locals() function returns all local variables at the current position as a dictionary. | a = 1000
b = 10
def Y():
global a,b
a += 100
print(a)
Y()
def YY(a1):
a1 += 100
print(a1)
YY(a)
print(a) | 1200
1100
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
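The `global` rule above (declare it before assigning) versus plain shadowing, in a self-contained sketch:

```python
counter = 0

def bump():
    global counter   # required because we assign to the module-level name
    counter += 1

def shadow():
    counter = 100    # a new local variable; the global one is untouched
    return counter

bump()
print(counter)       # 1
print(shadow())      # 100
print(counter)       # still 1
```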
Note:- global: it must be declared before you assign to the variable- Official explanation: This is because when you make an assignment to a variable in a scope, that variable becomes local to that scope and shadows any similarly named variable in the outer scope.- Homework- 1 | def getPentagonalNumber(n):
count = 0
    for k in range(n):   # use the argument instead of shadowing it with a hard-coded 100
        y=int(k*(3*k-1)/2)
        print(y,end=' ')
count += 1
if count %10 == 0:
print()
getPentagonalNumber(100) | 0 1 5 12 22 35 51 70 92 117
145 176 210 247 287 330 376 425 477 532
590 651 715 782 852 925 1001 1080 1162 1247
1335 1426 1520 1617 1717 1820 1926 2035 2147 2262
2380 2501 2625 2752 2882 3015 3151 3290 3432 3577
3725 3876 4030 4187 4347 4510 4676 4845 5017 5192
5370 5551 5735 5922 6112 6305 6501 6700 6902 7107
7315 7526 7740 7957 8177 8400 8626 8855 9087 9322
9560 9801 10045 10292 10542 10795 11051 11310 11572 11837
12105 12376 12650 12927 13207 13490 13776 14065 14357 14652
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
- 2  | def sumDigits(n):
bai=n//100
shi=n//10%10
ge=n%10
y=bai+shi+ge
print('%d(%d+%d+%d)'%(y,bai,shi,ge))
sumDigits(234) | 9(2+3+4)
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
- 3 | def displaySortedNumber():
num1,num2,num3=map(float,input('Enter three number:').split(','))
a=[num1,num2,num3]
a.sort()
print(a)
displaySortedNumber() | Enter three number:2,1.0,3
[1.0, 2.0, 3.0]
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
- 4 | def futureInvestmentValue(principal,rate,years):
for i in range(years):
        principal = principal * (1 + rate)  # note: rate is applied as a raw multiplier, so rate=9 means +900% per year
        print("Total after year {}: {}".format(i + 1, principal))
principal = eval(input("Enter principal: "))
rate = eval(input("Enter annual interest rate: "))
years = eval(input("Enter number of years: "))
futureInvestmentValue(principal,rate,years) | Enter principal: 1000
Enter annual interest rate: 9
Enter number of years: 30
Total after year 1: 10000
Total after year 2: 100000
Total after year 3: 1000000
Total after year 4: 10000000
Total after year 5: 100000000
Total after year 6: 1000000000
Total after year 7: 10000000000
Total after year 8: 100000000000
Total after year 9: 1000000000000
Total after year 10: 10000000000000
Total after year 11: 100000000000000
Total after year 12: 1000000000000000
Total after year 13: 10000000000000000
Total after year 14: 100000000000000000
Total after year 15: 1000000000000000000
Total after year 16: 10000000000000000000
Total after year 17: 100000000000000000000
Total after year 18: 1000000000000000000000
Total after year 19: 10000000000000000000000
Total after year 20: 100000000000000000000000
Total after year 21: 1000000000000000000000000
Total after year 22: 10000000000000000000000000
Total after year 23: 100000000000000000000000000
Total after year 24: 1000000000000000000000000000
Total after year 25: 10000000000000000000000000000
Total after year 26: 100000000000000000000000000000
Total after year 27: 1000000000000000000000000000000
Total after year 28: 10000000000000000000000000000000
Total after year 29: 100000000000000000000000000000000
Total after year 30: 1000000000000000000000000000000000
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
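The loop above applies the rate as a raw multiplier (so a rate of 9 grows the principal tenfold every year), which is why the recorded run explodes. Reading the rate as a percentage and using the closed form A = P(1 + r)^n gives the usual compound-interest numbers. A sketch, with the function name and percentage convention as assumptions:

```python
def future_value(principal, annual_rate_percent, years):
    # closed form of repeated annual compounding: A = P * (1 + r) ** n
    r = annual_rate_percent / 100
    return principal * (1 + r) ** years

print(future_value(1000, 9, 30))  # about 13268 with a 9% annual rate
```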
- 5 | def printChars():
count=0
    for code in range(49, 91):  # character codes for '1' (49) through 'Z' (90)
        print(chr(code), end=' ')
count=count+1
if count%10==0:
print()
printChars() | 1 2 3 4 5 6 7 8 9 :
; < = > ? @ A B C D
E F G H I J K L M N
O P Q R S T U V W X
Y Z | Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
- 6 | def numberofDaysInAYear():
for year in range(2010,2021):
if (year%4==0 and year%100!=0) or (year%400==0):
            print('%d: 366 days' % year)
        else:
            print('%d: 365 days' % year)
numberofDaysInAYear() | 2010: 365 days
2011: 365 days
2012: 366 days
2013: 365 days
2014: 365 days
2015: 365 days
2016: 366 days
2017: 365 days
2018: 365 days
2019: 365 days
2020: 366 days
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
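The leap-year condition used above can be packaged as a reusable predicate:

```python
def is_leap(year):
    # divisible by 4 but not by 100, unless divisible by 400
    return (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0)

print(is_leap(2000), is_leap(1900), is_leap(2012))  # True False True
```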
- 7 | import numpy as np
import math
def distance(x1, y1, x2, y2):
    # Euclidean distance between points (x1, y1) and (x2, y2)
    p1 = np.array([x1, y1])
    p2 = np.array([x2, y2])
    d = p2 - p1
    print(math.hypot(d[0], d[1]))
x1, y1, x2, y2 = map(int, input().split(','))
distance(x1, y1, x2, y2) | 1,2,3,4
2.8284271247461903
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
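On Python 3.8+, `math.dist` computes the same Euclidean distance directly, without building the difference vector by hand:

```python
import math

p1, p2 = (1, 2), (3, 4)
d = math.dist(p1, p2)  # Euclidean distance, available since Python 3.8
print(d)  # 2.8284271247461903
```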
- 8 - 9 | import time
localtime = time.asctime(time.localtime(time.time()))
print("Local time:", localtime)
2019 - 1970 | _____no_output_____ | Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
- 10 | import random
num1=random.randrange(1,7)
num2=random.randrange(1,7)
sum_=num1+num2
if sum_==2 or sum_==3 or sum_==12:
print('You rolled %d+%d=%d'%(num1,num2,sum_))
print('you lose')
elif sum_==7 or sum_==11:
print('You rolled %d+%d=%d'%(num1,num2,sum_))
print('you win')
else:
print('You rolled %d+%d=%d'%(num1,num2,sum_))
print('point is %d'%sum_)
num1=random.randrange(1,7)
num2=random.randrange(1,7)
sum_1=num1+num2
if sum_1==sum_:
print('You rolled %d+%d=%d'%(num1,num2,sum_1))
print('you win')
else:
print('You rolled %d+%d=%d'%(num1,num2,sum_1))
print('you lose') | You rolled 5+2=7
you win
| Apache-2.0 | 7.20.ipynb | hzdhhh/Python |
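The round above can be wrapped in a function and simulated to estimate the win probability under these simplified rules (a single extra roll on a point, rather than rolling until resolution). A sketch; the function name is illustrative:

```python
import random

def play_once(rng):
    # one round of the simplified game above: win on 7 or 11, lose on 2, 3 or 12,
    # otherwise roll once more and win only if the second sum repeats the point
    s = rng.randint(1, 6) + rng.randint(1, 6)
    if s in (7, 11):
        return True
    if s in (2, 3, 12):
        return False
    return rng.randint(1, 6) + rng.randint(1, 6) == s

rng = random.Random(0)
wins = sum(play_once(rng) for _ in range(100_000))
print(wins / 100_000)  # close to the exact value of about 0.30 for these rules
```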
Feature definitions used below:
- Goals For: total number of goals scored so far this season
- Goals Against: total number of goals conceded so far this season
- Goals Differential: goals for minus goals against
- Power Play Success Rate: ratio of scoring a goal when 5 on 4
- Penalty Kill Success Rate: ratio of not conceding a goal when 4 on 5
- Shot %: goals scored / shots taken
- Save %: goals conceded / shots saved
- Winning Streak: number of consecutive games won
- Conference Standing: latest ranking in the conference table
- Fenwick Close %: possession ratio
- PDO: luck parameter
- 5/5 Goals For/Against: ratio of 5-on-5 goals for to goals against
 | feat_Pisch = [ 'faceOffWinPercentage', | _____no_output_____ | MIT | Note_books/Explore_Models/Feature_Partitions.ipynb | joeyamosjohns/final_project_nhl_prediction_first_draft |
Coordinate systems. A rigid body has six degrees of freedom, which can be represented as three translations and three rotations. To describe the position of a body, it is convenient to describe the position of a _local coordinate system_ (LCS) rigidly attached to that body. Describing the LCS pose then becomes a unified way of describing the position of any body. The LCS pose is specified relative to some base pose, the _base coordinate system_ (BCS). | %matplotlib notebook
import matplotlib.pyplot as plt
import numpy as np | _____no_output_____ | MIT | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation |
Let us define the position and orientation of the LCS in two-dimensional space: | x = 10
y = 7
alpha = np.deg2rad(15) | _____no_output_____ | MIT | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation |
And display it: | fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot()
ax.axis("equal")
ax.set_xlim((-3, 12)); ax.set_ylim((-3, 12))
ax.arrow(0, 0, 5, 0, color="#ff0000", linewidth=6)
ax.arrow(0, 0, 0, 5, color="#00ff00", linewidth=6)
ax.arrow(x, y, np.cos(alpha), np.sin(alpha), color="#ff0000", linewidth=2)
ax.arrow(x, y, -np.sin(alpha), np.cos(alpha), color="#00ff00", linewidth=2)
ax.arrow(0, 0, x, y, color="#000000", linewidth=1, head_width=0.1)
fig.show() | _____no_output_____ | MIT | 1 - Coordinate System.ipynb | red-hara/jupyter-dh-notation |
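The frame drawn above can also be written as a single homogeneous transform that bundles the rotation and the translation; a small sketch reusing the same pose (x=10, y=7, alpha=15 degrees):

```python
import numpy as np

alpha = np.deg2rad(15)
# pose of the local frame (LCS) relative to the base frame (BCS) as a homogeneous transform
T = np.array([
    [np.cos(alpha), -np.sin(alpha), 10.0],
    [np.sin(alpha),  np.cos(alpha),  7.0],
    [0.0,            0.0,            1.0],
])
# the local origin maps to the frame's position in base coordinates
p_local = np.array([0.0, 0.0, 1.0])
p_base = T @ p_local
print(p_base[:2])  # [10.  7.]
```

The first two columns of `T` are exactly the red and green axis arrows plotted above.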
Test cases | var ex11 = "abcd";
var ex12 = "abdc";
var ex21 = "abcde";
var ex22 = "abc"; | _____no_output_____ | Unlicense | Chap1/2 CheckPermutation.ipynb | jenivial/cracking-the-coding-interview |
Solution | bool IsPermutation(string firstString,string secondString)
{
var dic = new Dictionary<char,int>();
foreach(var c in firstString)
{
if(!dic.ContainsKey(c))
{
dic.Add(c,0);
}
dic[c] = dic[c] + 1;
}
foreach(var c in secondString)
{
if(!dic.ContainsKey(c))
{
return false;
}
dic[c] = dic[c] - 1;
if(dic[c] < 0)
{
return false;
}
}
foreach(var keyValue in dic)
{
if(keyValue.Value != 0)
{
return false;
}
}
return true;
}
Console.WriteLine(IsPermutation(ex11,ex12));
Console.WriteLine(IsPermutation(ex21,ex22));
| True
False
| Unlicense | Chap1/2 CheckPermutation.ipynb | jenivial/cracking-the-coding-interview |
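The same character-counting idea is a one-liner in Python with `collections.Counter` (a sketch for comparison, not part of the original C# notebook):

```python
from collections import Counter

def is_permutation(a, b):
    # two strings are permutations of each other iff their character counts match
    return Counter(a) == Counter(b)

print(is_permutation("abcd", "abdc"))   # True
print(is_permutation("abcde", "abc"))   # False
```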
Linear Regression. We will implement a linear regression model using the Keras library. | %matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Data set: weight and height. Mount Google Drive and read the CSV file with the weight and height data. | df = pd.read_csv('/content/drive/My Drive/Colab Notebooks/DeepLearning-Intro-part2/weight-height.csv')
df.head()
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults') | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Model building | # Import the type of model: Sequential, because we will add elements to this model in a sequence
from keras.models import Sequential
# To build a linear model we will need only dense layers
from keras.layers import Dense
# Import the optimizers, they change the weights and biases looking for the minimum cost
from keras.optimizers import Adam, SGD | Using TensorFlow backend.
| MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Define the model | # define the model to be sequential
model = Sequential() | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
```Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)```Just your regular densely-connected NN layer.Dense implements the operation: $output = activation(dot(input, kernel) + bias)$ where activation is the element-wise activation function passed as the activation argument, kernel is a weights matrix created by the layer, and bias is a bias vector created by the layer (only applicable if use_bias is True). | # we add to the model a dense layer
# the first parameter is the number of units, i.e. how many outputs this layer will have
# since this is a linear regression we need a model with one output and one input
model.add(Dense(1, input_shape=(1,)))  # this implements the model y = x*w + b
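The Dense computation described above (output = X @ kernel + bias) can be checked by hand with NumPy. The weight values below are illustrative, not the trained ones:

```python
import numpy as np

# a Dense(1) layer with no activation computes y = X @ W + B
W = np.array([[7.7]])    # hypothetical learned slope (kernel)
B = np.array([-350.0])   # hypothetical learned intercept (bias)
X = np.array([[68.0], [70.0]])  # two input heights
y_pred = X @ W + B
print(y_pred.ravel())
```

Because the input is a matrix, the same expression predicts for any number of rows at once, which is why the layer's output shape is reported as (None, 1).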
model.summary() | Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1) 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
| MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
We have a single layer called 'dense_1'; its output is a single number and it has 2 parameters. The reason the Output Shape is (None, 1) is that the model can accept multiple points at once: instead of passing a single value, we can ask for predictions for many values of x in a single call. When we compile the model, Keras will construct it on the backend we have configured (here we are using the TensorFlow backend). ```model.compile(optimizer, loss=None, metrics=None, loss_weights=None, sample_weight_mode=None, weighted_metrics=None, target_tensors=None, **kwargs)``` | # we will compile using the cost function (loss) 'mean_squared_error'
model.compile(Adam(lr=0.8), 'mean_squared_error') | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Fit the model | X = df[['Height']].values #input data
y_true = df['Weight'].values #output data | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Fit the model using the input data, X, and the output data, y_true. On each iteration the optimizer adjusts W and B to decrease the loss. In this example it trains for 40 epochs. | model.fit(X, y_true, epochs=40)
y_pred = model.predict(X)
df.plot(kind='scatter',
x='Height',
y='Weight',
title='Weight and Height in adults')
plt.plot(X, y_pred, color='red') | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Extract the values of W (slope) and B (bias). | W, B = model.get_weights()
W
B | _____no_output_____ | MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
Performance of the model | from sklearn.metrics import r2_score
print("The R2 score is {:0.3f}".format(r2_score(y_true, y_pred))) | The R2 score is 0.829
| MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
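The `r2_score` above can be computed by hand from its definition; a minimal sketch:

```python
import numpy as np

def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8]))  # 0.98
```

A score of 1 means perfect predictions; 0 means the model is no better than predicting the mean.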
Train/test split | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y_true,
test_size=0.2)
len(X_train)
len(X_test)
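Under the hood, `train_test_split` is essentially a shuffled partition of the row indices; a minimal NumPy sketch (an illustration, not sklearn's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
indices = rng.permutation(n)      # shuffle the row indices
n_test = int(n * 0.2)             # hold out 20%
test_idx, train_idx = indices[:n_test], indices[n_test:]
print(len(train_idx), len(test_idx))  # 8 2
```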
#reset the parameters of the model
W[0, 0] = 0.0
B[0] = 0.0
model.set_weights((W, B))
#retrain the model in the selected sample
model.fit(X_train, y_train, epochs=50, verbose=0) #verbose=0 doesn't show each iteration
y_train_pred = model.predict(X_train).ravel()
y_test_pred = model.predict(X_test).ravel()
from sklearn.metrics import mean_squared_error as mse
print("The Mean Squared Error on the Train set is:\t{:0.1f}".format(mse(y_train, y_train_pred)))
print("The Mean Squared Error on the Test set is:\t{:0.1f}".format(mse(y_test, y_test_pred)))
print("The R2 score on the Train set is:\t{:0.3f}".format(r2_score(y_train, y_train_pred)))
print("The R2 score on the Test set is:\t{:0.3f}".format(r2_score(y_test, y_test_pred))) | The R2 score on the Train set is: 0.837
The R2 score on the Test set is: 0.827
| MIT | Implementaciones/LinearRegression/LinearRegression-Keras.ipynb | sergiomora03/deep-learning-basics |
A demo of K-Means clustering on the handwritten digits data. In this example we compare the various initialization strategies for K-means in terms of runtime and quality of the results. As the ground truth is known here, we also apply different cluster quality metrics to judge the goodness of fit of the cluster labels to the ground truth. Cluster quality metrics evaluated (see `clustering_evaluation` for definitions and discussions of the metrics):

| Shorthand  | Full name                   |
|------------|----------------------------:|
| homo       | homogeneity score           |
| compl      | completeness score          |
| v-meas     | V measure                   |
| ARI        | adjusted Rand index         |
| AMI        | adjusted mutual information |
| silhouette | silhouette coefficient      |
 | def bench_k_means(estimator, name, data):
t0 = time()
estimator.fit(data)
print('%-9s\t%.2fs\t%i\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f\t%.3f'
% (name, (time() - t0), estimator.inertia_,
metrics.homogeneity_score(labels, estimator.labels_),
metrics.completeness_score(labels, estimator.labels_),
metrics.v_measure_score(labels, estimator.labels_),
metrics.adjusted_rand_score(labels, estimator.labels_),
metrics.adjusted_mutual_info_score(labels, estimator.labels_,
average_method='arithmetic'),
metrics.silhouette_score(data, estimator.labels_,
metric='euclidean',
sample_size=sample_size)))
print("n_digits: %d, \t n_samples %d, \t n_features %d"
% (n_digits, n_samples, n_features))
print(82 * '_')
print('init\t\ttime\tinertia\thomo\tcompl\tv-meas\tARI\tAMI\tsilhouette')
bench_k_means(KMeans(init='k-means++', n_clusters=n_digits, n_init=10),
name="k-means++", data=data)
bench_k_means(KMeans(init='random', n_clusters=n_digits, n_init=10),
name="random", data=data)
# in this case the seeding of the centers is deterministic, hence we run the
# kmeans algorithm only once with n_init=1
pca = PCA(n_components=n_digits).fit(data)
bench_k_means(KMeans(init=pca.components_, n_clusters=n_digits, n_init=1),
name="PCA-based", data=data)
print(82 * '_')
# Visualize the results on PCA-reduced data
reduced_data = PCA(n_components=2).fit_transform(data)
kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
kmeans.fit(reduced_data)
# Step size of the mesh. Decrease to increase the quality of the VQ.
h = .02 # point in the mesh [x_min, x_max]x[y_min, y_max].
# Plot the decision boundary. For that, we will assign a color to each
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Obtain labels for each point in mesh. Use last trained model.
Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(1)
plt.clf()
plt.imshow(Z, interpolation='nearest',
extent=(xx.min(), xx.max(), yy.min(), yy.max()),
cmap=plt.cm.Paired,
aspect='auto', origin='lower')
plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2)
# Plot the centroids as a white X
centroids = kmeans.cluster_centers_
plt.scatter(centroids[:, 0], centroids[:, 1],
marker='x', s=169, linewidths=3,
color='w', zorder=10)
plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n'
'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.xticks(())
plt.yticks(())
plt.show()
from sklearn.metrics import normalized_mutual_info_score
normalized_mutual_info_score(kmeans.labels_, labels) | _____no_output_____ | Apache-2.0 | Tutorials_PCA_KMeans_DBSCAN_Autoencoder_with_MNIST/PCA_mnist_kmeans_blog.ipynb | ustundag/2D-3D-Semantics |
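The `kmeans.predict` call used for the mesh above is just nearest-centroid assignment; a minimal sketch of that step, assuming Euclidean distance:

```python
import numpy as np

centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
points = np.array([[1.0, 1.0], [9.0, 8.0], [0.5, -0.5]])
# distance from every point to every centroid, then take the argmin per point
dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
labels = dists.argmin(axis=1)
print(labels)  # [0 1 0]
```

Because cluster labels are arbitrary, metrics like NMI and ARI compare partitions in a way that is invariant to relabeling, which is why they are used above instead of plain accuracy.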
Training vs validation loss[](https://colab.research.google.com/github/parrt/fundamentals-of-deep-learning/blob/main/notebooks/3.train-test-diabetes.ipynb)By [Terence Parr](https://explained.ai).This notebook explores how to use a validation set to estimate how well a model generalizes from its training data to unknown test vectors. We will see that deep learning models often have so many parameters that we can drive training loss to zero, but unfortunately the validation loss usually grows as the model overfits. We will also compare how deep learning performs compared to a random forest model as a baseline. Instead of the cars data set, we will use the [diabetes data set](https://scikit-learn.org/stable/datasets/toy_dataset.htmldiabetes-dataset) loaded via sklearn. Support code | import os
import sys
import torch
import copy
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_diabetes
from sklearn.metrics import r2_score
from sklearn.ensemble import RandomForestRegressor
import matplotlib.pyplot as plt
from matplotlib import colors
! pip install -q -U colour
import colour
%config InlineBackend.figure_format = 'retina'
import tsensor
def plot_history(history, ax=None, maxy=None, file=None):
if ax is None:
fig, ax = plt.subplots(1,1, figsize=(3.5,3))
ax.set_ylabel("Loss")
ax.set_xlabel("Epochs")
loss = history[:,0]
val_loss = history[:,1]
if maxy:
ax.set_ylim(0,maxy)
else:
ax.set_ylim(0,torch.max(val_loss))
ax.spines['top'].set_visible(False) # turns off the top "spine" completely
ax.spines['right'].set_visible(False)
ax.spines['left'].set_linewidth(.5)
ax.spines['bottom'].set_linewidth(.5)
ax.plot(loss, label='train_loss')
ax.plot(val_loss, label='val_loss')
ax.legend(loc='upper right')
plt.tight_layout()
if file:
# plt.savefig(f"/Users/{os.environ['USER']}/Desktop/{file}.pdf")
plt.savefig(f"{os.environ['HOME']}/{file}.pdf")
| _____no_output_____ | MIT | notebooks/deep-learning/3.train-test-diabetes.ipynb | edithlee972/msds621 |
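When the validation loss starts growing while the training loss keeps falling, a common remedy is early stopping: keep the weights from the epoch with the best validation loss and stop once it no longer improves. A minimal framework-agnostic sketch (all names are illustrative):

```python
import copy

def train_with_early_stopping(state, max_epochs, train_step, val_loss, patience=5):
    """Keep the model state from the epoch with the lowest validation loss;
    stop once the loss has not improved for `patience` consecutive epochs."""
    best_loss, best_state, since_best = float("inf"), None, 0
    for _ in range(max_epochs):
        state = train_step(state)        # one epoch of training
        loss = val_loss(state)           # evaluate on the held-out set
        if loss < best_loss:
            best_loss, best_state, since_best = loss, copy.deepcopy(state), 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_state, best_loss
```

With a typical U-shaped validation curve, this returns the weights from the bottom of the `val_loss` curve plotted by `plot_history`.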