**Chapter 10 – Introduction to Artificial Neural Networks** _This notebook contains all the sample code and solutions to the exercises in chapter 10._ Setup First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function...
# To support both python 2 and python 3 from __future__ import division, print_function, unicode_literals # Common imports import numpy as np import os # to make this notebook's output stable across runs def reset_graph(seed=42): tf.reset_default_graph() tf.set_random_seed(seed) np.random.seed(seed) # To...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Perceptrons
import numpy as np from sklearn.datasets import load_iris from sklearn.linear_model import Perceptron iris = load_iris() X = iris.data[:, (2, 3)] # petal length, petal width y = (iris.target == 0).astype(int) # np.int was removed in NumPy 1.24; plain int works everywhere per_clf = Perceptron(max_iter=100, random_state=42) per_clf.fit(X, y) y_pred = per_clf.predict([[2, 0.5]...
Saving figure perceptron_iris_plot
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Activation functions
def logit(z): return 1 / (1 + np.exp(-z)) def relu(z): return np.maximum(0, z) def derivative(f, z, eps=0.000001): return (f(z + eps) - f(z - eps))/(2 * eps) z = np.linspace(-5, 5, 200) plt.figure(figsize=(11,4)) plt.subplot(121) plt.plot(z, np.sign(z), "r-", linewidth=2, label="Step") plt.plot(z, logit...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
FNN for MNIST Using the Estimator API (formerly `tf.contrib.learn`)
import tensorflow as tf
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
**Warning**: `tf.examples.tutorials.mnist` is deprecated. We will use `tf.keras.datasets.mnist` instead. Moreover, the `tf.contrib.learn` API was promoted to `tf.estimator` and `tf.feature_column`, and it has changed considerably. In particular, there is no `infer_real_valued_columns_from_input()` function or `SKComp...
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data() X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0 X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0 y_train = y_train.astype(np.int32) y_test = y_test.astype(np.int32) X_valid, X_train = X_train[:5000], X_train[5000:] y...
INFO:tensorflow:Calling model_fn. INFO:tensorflow:Done calling model_fn. INFO:tensorflow:Graph was finalized. INFO:tensorflow:Restoring parameters from /tmp/tmpuflzeb_h/model.ckpt-44000 INFO:tensorflow:Running local_init_op. INFO:tensorflow:Done running local_init_op.
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Using plain TensorFlow
import tensorflow as tf n_inputs = 28*28 # MNIST n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") def neuron_layer(X, n_neurons, name, activation=None): with tf.name_scope(name): ...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Using `dense()` instead of `neuron_layer()` Note: previous releases of the book used `tensorflow.contrib.layers.fully_connected()` rather than `tf.layers.dense()` (which did not exist when this chapter was written). It is now preferable to use `tf.layers.dense()`, because anything in the contrib module may change or b...
n_inputs = 28*28 # MNIST n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", ...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Exercise solutions 1. to 8. See appendix A. 9. _Train a deep MLP on the MNIST dataset and see if you can get over 98% precision. Just like in the last exercise of chapter 9, try adding all the bells and whistles (i.e., save checkpoints, restore the last checkpoint in case of an interruption, add summaries, plot lear...
n_inputs = 28*28 # MNIST n_hidden1 = 300 n_hidden2 = 100 n_outputs = 10 reset_graph() X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") y = tf.placeholder(tf.int32, shape=(None), name="y") with tf.name_scope("dnn"): hidden1 = tf.layers.dense(X, n_hidden1, name="hidden1", ...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Now we need to define the directory to write the TensorBoard logs to:
from datetime import datetime def log_dir(prefix=""): now = datetime.utcnow().strftime("%Y%m%d%H%M%S") root_logdir = "tf_logs" if prefix: prefix += "-" name = prefix + "run-" + now return "{}/{}/".format(root_logdir, name) logdir = log_dir("mnist_dnn")
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Now we can create the `FileWriter` that we will use to write the TensorBoard logs:
file_writer = tf.summary.FileWriter(logdir, tf.get_default_graph())
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
Hey! Why don't we implement early stopping? For this, we are going to need to use the validation set.
m, n = X_train.shape n_epochs = 10001 batch_size = 50 n_batches = int(np.ceil(m / batch_size)) checkpoint_path = "/tmp/my_deep_mnist_model.ckpt" checkpoint_epoch_path = checkpoint_path + ".epoch" final_model_path = "./my_deep_mnist_model" best_loss = np.infty epochs_without_progress = 0 max_epochs_without_progress = ...
_____no_output_____
Apache-2.0
10_introduction_to_artificial_neural_networks.ipynb
leoluyi/handson-ml
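The early-stopping bookkeeping above (`best_loss`, `epochs_without_progress`, `max_epochs_without_progress`) can be sketched framework-free; `train_one_epoch` and `validation_loss` below are hypothetical stand-ins for the notebook's TensorFlow training and evaluation steps, not its actual code.

```python
import numpy as np

def train_with_early_stopping(train_one_epoch, validation_loss,
                              n_epochs=100, max_epochs_without_progress=5):
    best_loss = np.inf
    epochs_without_progress = 0
    for epoch in range(n_epochs):
        train_one_epoch(epoch)
        loss = validation_loss()
        if loss < best_loss:
            best_loss = loss           # new best: this is where a checkpoint would be saved
            epochs_without_progress = 0
        else:
            epochs_without_progress += 1
            if epochs_without_progress >= max_epochs_without_progress:
                break                  # stop: no validation progress for too long
    return best_loss, epoch

# Toy usage: a validation "loss" that improves for a while, then plateaus
losses = iter([1.0, 0.8, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.9, 0.5])
best, last_epoch = train_with_early_stopping(lambda e: None, lambda: next(losses))
```

Note that the loop never reaches the later values in the toy sequence: training halts as soon as five epochs pass without improving on the best validation loss.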
Define a function for which we'd like to find the roots
def function_for_roots(x): a = 1.01 b = -3.04 c = 2.07 return a*x**2 + b*x + c #get the roots of ax^2 + bx + c
_____no_output_____
MIT
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
We need a function to check whether our initial values are valid
def check_initial_values(f, x_min, x_max, tol): #check our initial guesses y_min = f(x_min) y_max = f(x_max) #check that x_min and x_max contain a zero crossing if(y_min*y_max>=0.0): print("No zero crossing found in the range = ",x_min,x_max) s = "f(%f) = %f, f(%f) = %f" % ...
_____no_output_____
MIT
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
Now we will define the main work function that actually performs the iterative search
def bisection_root_finding(f, x_min_start, x_max_start, tol): #this function uses bisection search to find a root x_min = x_min_start #minimum x in bracket x_max = x_max_start #maximum x in bracket x_mid = 0.0 #mid point y_min = f(x_min) #function value at x...
_____no_output_____
MIT
astr-119-session-7/bisection_search_demo.ipynb
spaceghst007/astro-119
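The bisection search truncated above can be sketched end-to-end as follows. This is a condensed reimplementation (same function and bracketing logic, simplified bookkeeping), not the notebook's exact code.

```python
def function_for_roots(x):
    # same quadratic as above: a = 1.01, b = -3.04, c = 2.07
    return 1.01 * x**2 - 3.04 * x + 2.07

def bisection_root_finding(f, x_min, x_max, tol=1e-6, imax=10000):
    y_min, y_max = f(x_min), f(x_max)
    if y_min * y_max >= 0.0:
        raise ValueError("No zero crossing found in the range [%f, %f]" % (x_min, x_max))
    for _ in range(imax):
        x_mid = 0.5 * (x_min + x_max)
        y_mid = f(x_mid)
        if abs(y_mid) < tol:
            return x_mid                  # converged
        # keep the half-interval that still brackets the root
        if y_mid * y_min > 0:
            x_min, y_min = x_mid, y_mid
        else:
            x_max, y_max = x_mid, y_mid
    raise RuntimeError("bisection did not converge after %d iterations" % imax)

# f(0) > 0 and f(1.5) < 0, so [0, 1.5] brackets the smaller root
root = bisection_root_finding(function_for_roots, 0.0, 1.5)
```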
*This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).* *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/l...
%matplotlib inline import numpy as np import matplotlib.pyplot as plt plt.style.use('seaborn-white') data = np.random.randn(1000) plt.hist(data);
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
The ``hist()`` function has many options to tune both the calculation and the display; here's an example of a more customized histogram:
plt.hist(data, bins=30, density=True, alpha=0.5, histtype='stepfilled', color='steelblue', edgecolor='none');  # 'normed' was removed in Matplotlib 3.1; 'density' is the replacement
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
The ``plt.hist`` docstring has more information on other customization options available. I find this combination of ``histtype='stepfilled'`` along with some transparency ``alpha`` to be very useful when comparing histograms of several distributions:
x1 = np.random.normal(0, 0.8, 1000) x2 = np.random.normal(-2, 1, 1000) x3 = np.random.normal(3, 2, 1000) kwargs = dict(histtype='stepfilled', alpha=0.3, density=True, bins=40)  # 'density' replaces the removed 'normed' plt.hist(x1, **kwargs) plt.hist(x2, **kwargs) plt.hist(x3, **kwargs);
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
If you would like to simply compute the histogram (that is, count the number of points in a given bin) and not display it, the ``np.histogram()`` function is available:
counts, bin_edges = np.histogram(data, bins=5) print(counts)
[ 12 190 468 301 29]
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
Two-Dimensional Histograms and Binnings Just as we create histograms in one dimension by dividing the number line into bins, we can also create histograms in two dimensions by dividing points among two-dimensional bins. We'll take a brief look at several ways to do this here. We'll start by defining some data—an ``x`` an...
mean = [0, 0] cov = [[1, 1], [1, 2]] x, y = np.random.multivariate_normal(mean, cov, 10000).T
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
``plt.hist2d``: Two-dimensional histogram One straightforward way to plot a two-dimensional histogram is to use Matplotlib's ``plt.hist2d`` function:
plt.hist2d(x, y, bins=30, cmap='Blues') cb = plt.colorbar() cb.set_label('counts in bin')
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
Just as with ``plt.hist``, ``plt.hist2d`` has a number of extra options to fine-tune the plot and the binning, which are nicely outlined in the function docstring. Further, just as ``plt.hist`` has a counterpart in ``np.histogram``, ``plt.hist2d`` has a counterpart in ``np.histogram2d``, which can be used as follows:
counts, xedges, yedges = np.histogram2d(x, y, bins=30)
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
For the generalization of this histogram binning in dimensions higher than two, see the ``np.histogramdd`` function. ``plt.hexbin``: Hexagonal binnings The two-dimensional histogram creates a tessellation of squares across the axes. Another natural shape for such a tessellation is the regular hexagon. For this purpose, Mat...
plt.hexbin(x, y, gridsize=30, cmap='Blues') cb = plt.colorbar(label='count in bin')
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
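The `np.histogramdd` function mentioned above generalizes this binning to any number of dimensions; a quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(size=(1000, 3))          # 1000 points in 3-D

# bin the points into a 5 x 5 x 5 grid spanning the data range
H, edges = np.histogramdd(sample, bins=(5, 5, 5))
print(H.shape)        # one count array per 3-D bin
print(int(H.sum()))   # every point lands in some bin
```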
``plt.hexbin`` has a number of interesting options, including the ability to specify weights for each point, and to change the output in each bin to any NumPy aggregate (mean of weights, standard deviation of weights, etc.). Kernel density estimation Another common method of evaluating densities in multiple dimensions ...
from scipy.stats import gaussian_kde # fit an array of size [Ndim, Nsamples] data = np.vstack([x, y]) kde = gaussian_kde(data) # evaluate on a regular grid xgrid = np.linspace(-3.5, 3.5, 40) ygrid = np.linspace(-6, 6, 40) Xgrid, Ygrid = np.meshgrid(xgrid, ygrid) Z = kde.evaluate(np.vstack([Xgrid.ravel(), Ygrid.ravel(...
_____no_output_____
Apache-2.0
matplotlib/04.05-Histograms-and-Binnings.ipynb
purushothamgowthu/data-science-ipython-notebooks
The effect of a given mutation on antibody binding was represented by the apparent affinity (avidity) relative to that of wild-type (WT) gp120, calculated as ((EC50_WT/EC50_mutant)/(EC50_WT for 2G12/EC50_mutant for 2G12)) × 100.
# Test data VIH_final = pd.read_csv('../data/VIH_Test15.csv',index_col=0) # original info data vih_data = pd.read_csv("../data/HIV_escape_mutations.csv",sep="\t") #vih_data["pred_ddg2EC50"] = vih_data["mCSM-AB_Pred"].apply(deltaG_to_Kd)*100 vih_original = vih_data.loc[vih_data["Mutation_type"]=="ORIGINAL"].copy() vih...
_____no_output_____
MIT
notebooks/benchmark_vih.ipynb
victorfica/Master-thesis
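For illustration, the relative-avidity formula quoted above can be written directly as a function. The EC50 values in the usage line are made-up numbers purely for illustration, not data from the thesis.

```python
def relative_avidity(ec50_wt, ec50_mut, ec50_wt_2g12, ec50_mut_2g12):
    """((EC50_WT / EC50_mutant) / (EC50_WT for 2G12 / EC50_mutant for 2G12)) * 100"""
    return (ec50_wt / ec50_mut) / (ec50_wt_2g12 / ec50_mut_2g12) * 100

# a mutation that doubles EC50 (weaker binding), with the 2G12 control unaffected,
# gives 50% relative avidity
print(relative_avidity(1.0, 2.0, 1.0, 1.0))
```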
Example of extracting features from dataframes with Datetime indices Assuming that time-varying measurements are taken at regular intervals can be sufficient for many situations. However, for a large number of tasks it is important to take into account **when** a measurement is made. An example can be healthcare, where...
import pandas as pd from tsfresh.feature_extraction import extract_features # TimeBasedFCParameters contains all functions that use the Datetime index of the timeseries container from tsfresh.feature_extraction.settings import TimeBasedFCParameters
_____no_output_____
MIT
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
Build a time series container with Datetime indices Let's build a dataframe with a datetime index. The format must be with a `value` and a `kind` column, since each measurement has its own timestamp - i.e. measurements are not assumed to be simultaneous.
df = pd.DataFrame({"id": ["a", "a", "a", "a", "b", "b", "b", "b"], "value": [1, 2, 3, 1, 3, 1, 0, 8], "kind": ["temperature", "temperature", "pressure", "pressure", "temperature", "temperature", "pressure", "pressure"]}, index=pd.Date...
_____no_output_____
MIT
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
Right now `TimeBasedFCParameters` only contains `linear_trend_timewise`, which performs a calculation of a linear trend, but using the time difference in hours between measurements in order to perform the linear regression. As always, you can add your own functions in `tsfresh/feature_extraction/feature_calculators.py`...
settings_time = TimeBasedFCParameters() settings_time
_____no_output_____
MIT
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
We extract the features as usual, specifying the value, kind, and id columns.
X_tsfresh = extract_features(df, column_id="id", column_value='value', column_kind='kind', default_fc_parameters=settings_time) X_tsfresh.head()
Feature Extraction: 100%|██████████| 4/4 [00:00<00:00, 591.10it/s]
MIT
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
The output looks exactly like usual. If we compare it with the 'regular' `linear_trend` feature calculator, we can see that the intercept, p and R values are the same, as we'd expect – only the slope is now different.
settings_regular = {'linear_trend': [ {'attr': 'pvalue'}, {'attr': 'rvalue'}, {'attr': 'intercept'}, {'attr': 'slope'}, {'attr': 'stderr'} ]} X_tsfresh = extract_features(df, column_id="id", column_value='value', column_kind='kind', default_fc_parameters=settings_regular) X_tsfres...
Feature Extraction: 100%|██████████| 4/4 [00:00<00:00, 2517.59it/s]
MIT
notebooks/feature_extraction_with_datetime_index.ipynb
hoesler/tsfresh
Gymnasium
gym = pd.read_csv('../Results/Gym_Rating.csv') del gym['Unnamed: 0'] gym.replace('NAN', value=0, inplace=True) gym = gym.rename(columns={'gym Total Count':'Total Count', 'Facility gym':'Gymnasium Facility'}) gym['Rating']=gym['Rating'].astype(float) gym['Total Count']=gym['Total Count'].astype(int) gym.head() new_gym =...
======================================== ==================TEST==================== City Name Total Count 0 New York 20.0 1 Chicago 20.0 2 Boston 17.0 3 Washington DC 13.5 4 Los Angeles 13.0 5 Austin 7.5 6 Raleigh ...
MIT
Amenities_Niyati/Plots/Amazon_nearby_Amenities_Fitness_Ranking.ipynb
gvo34/BC_Project1
TalkingData: Fraudulent Click Prediction In this notebook, we will apply various boosting algorithms to solve an interesting classification problem from the domain of 'digital fraud'. The analysis is divided into the following sections: - Understanding the business problem - Understanding and exploring the data - Feature ...
import numpy as np import pandas as pd import sklearn import matplotlib.pyplot as plt import seaborn as sns from sklearn.model_selection import train_test_split from sklearn.model_selection import KFold from sklearn.model_selection import GridSearchCV from sklearn.model_selection import cross_val_score from sklearn....
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Reading the Data The code below reads the train_sample.csv file if you set testing = True, else reads the full train.csv file. You can read the sample while tuning the model etc., and then run the model on the full data once done. Important Note: Save memory when the data is huge. Since the training data is quite huge,...
# reading training data # specify column dtypes to save memory (by default pandas reads some columns as floats) dtypes = { 'ip' : 'uint16', 'app' : 'uint16', 'device' : 'uint16', 'os' : 'uint16', 'channel' : 'uint16', 'is_attr...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Exploring the Data - Univariate Analysis Let's now understand and explore the data. Let's start with understanding the size and data types of the train_sample data.
# look at non-null values, number of entries etc. # there are no missing values train_sample.info() # Basic exploratory analysis # Number of unique values in each column def fraction_unique(x): return len(train_sample[x].unique()) number_unique_vals = {x: fraction_unique(x) for x in train_sample.columns} number_...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
There are certain 'apps' which have a disproportionately high number of instances/rows (each row is a click). The plot below shows this.
# # distribution of 'app' # # some 'apps' have a disproportionately high number of clicks (>15k), and some are very rare (3-4) plt.figure(figsize=(14, 8)) sns.countplot(x="app", data=train_sample) # # distribution of 'device' # # this is expected because a few popular devices are used heavily plt.figure(figsize=(14, ...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Let's now look at the distribution of the target variable 'is_attributed'.
# # target variable distribution 100*(train_sample['is_attributed'].astype('object').value_counts()/len(train_sample.index))
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Only **about 0.2% of clicks are 'fraudulent'**, which is expected in a fraud detection problem. Such high class imbalance is probably going to be the toughest challenge of this problem. Exploring the Data - Segmented Univariate Analysis Let's now look at how the target variable varies with the various predictors.
# plot the average of 'is_attributed', or 'download rate' # with app (clearly this is non-readable) app_target = train_sample.groupby('app').is_attributed.agg(['mean', 'count']) app_target
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
This is clearly non-readable, so let's first get rid of all the apps that are very rare (say, those whose click counts fall below the 80th percentile) and plot the rest.
frequent_apps = train_sample.groupby('app').size().reset_index(name='count') frequent_apps = frequent_apps[frequent_apps['count']>frequent_apps['count'].quantile(0.80)] frequent_apps = frequent_apps.merge(train_sample, on='app', how='inner') frequent_apps.head() plt.figure(figsize=(10,10)) sns.countplot(y="app", hue="i...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
You can do lots of other interesting analysis with the existing features. For now, let's create some new features which will probably improve the model. Feature Engineering Let's now derive some new features from the existing ones. There are a number of features one can extract from ```click_time``` itself, and by gr...
# Creating datetime variables # takes in a df, adds date/time based columns to it, and returns the modified df def timeFeatures(df): # Derive new features using the click_time column df['datetime'] = pd.to_datetime(df['click_time']) df['day_of_week'] = df['datetime'].dt.dayofweek df["day_of_year"] = df[...
Training dataset uses 1.812103271484375 MB
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
IP Grouping Based Features Let's now create some important features by grouping IP addresses with features such as os, channel, hour, day etc. The count of each IP address will also be a feature. Note that though we are deriving new features by grouping IP addresses, using the IP address itself as a feature is not a good...
# number of clicks by count of IP address # note that we are explicitly asking pandas to re-encode the aggregated features # as 'int16' to save memory ip_count = train_sample.groupby('ip').size().reset_index(name='ip_count').astype('int16') ip_count.head()
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
We can now merge this dataframe with the original training df. Similarly, we can create combinations of various features such as ip_day_hour (count of ip-day-hour combinations), ip_hour_channel, ip_hour_app, etc. The following function takes in a dataframe and creates these features.
# creates groupings of IP addresses with other features and appends the new features to the df def grouped_features(df): # ip_count ip_count = df.groupby('ip').size().reset_index(name='ip_count').astype('uint16') ip_day_hour = df.groupby(['ip', 'day_of_week', 'hour']).size().reset_index(name='ip_day_hour')....
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
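The grouping-and-merge logic described above can be sketched on a tiny made-up frame (the real notebook applies it to `train_sample`, and encodes the counts as small integer dtypes to save memory):

```python
import pandas as pd

df = pd.DataFrame({
    "ip":          [1, 1, 1, 2, 2],
    "day_of_week": [0, 0, 1, 0, 0],
    "hour":        [9, 9, 9, 9, 10],
})

# count of clicks per IP, and per (IP, day, hour) combination
ip_count = df.groupby("ip").size().reset_index(name="ip_count")
ip_day_hour = (df.groupby(["ip", "day_of_week", "hour"])
                 .size().reset_index(name="ip_day_hour"))

# append the aggregated counts back onto the original rows
df = df.merge(ip_count, on="ip", how="left")
df = df.merge(ip_day_hour, on=["ip", "day_of_week", "hour"], how="left")
print(df)
```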
Modelling Let's now build models to predict the variable ```is_attributed``` (downloaded). We'll try several variants of boosting (AdaBoost, gradient boosting and XGBoost), tune the hyperparameters in each model and choose the one which gives the best performance. In the original Kaggle competition, the metric for m...
# create x and y train X = train_sample.drop('is_attributed', axis=1) y = train_sample[['is_attributed']] # split data into train and test/validation sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=101) print(X_train.shape) print(y_train.shape) print(X_test.shape) print(y_te...
is_attributed 0.002275 dtype: float64 is_attributed 0.00225 dtype: float64
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
AdaBoost
# adaboost classifier with max 600 decision trees of depth=2 # learning_rate/shrinkage=1.5 # base estimator tree = DecisionTreeClassifier(max_depth=2) # adaboost with the tree as base estimator adaboost_model_1 = AdaBoostClassifier( base_estimator=tree, n_estimators=600, learning_rate=1.5, algorithm="...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
AdaBoost - Hyperparameter Tuning Let's now tune the hyperparameters of the AdaBoost classifier. In this case, we have two types of hyperparameters - those of the component trees (max_depth etc.) and those of the ensemble (n_estimators, learning_rate etc.). We can tune both using the following technique - the keys of th...
# parameter grid param_grid = {"base_estimator__max_depth" : [2, 5], "n_estimators": [200, 400, 600] } # base estimator tree = DecisionTreeClassifier() # adaboost with the tree as base estimator # learning rate is arbitrarily set to 0.6, we'll discuss learning_rate below ABC = AdaBoostClassi...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
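A minimal runnable sketch of that double-underscore technique on synthetic data. Note that scikit-learn ≥ 1.2 renamed AdaBoost's `base_estimator` parameter to `estimator`, which changes the grid key; the code probes which name the installed version exposes, so this is a version-robust sketch rather than the notebook's exact grid.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

# tiny synthetic binary-classification problem
rng = np.random.RandomState(0)
X = rng.randn(60, 3)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# AdaBoost over shallow decision trees (passed positionally, which works
# under both the old and new parameter name)
abc = AdaBoostClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)

# nested hyperparameters use "<param>__<subparam>" keys
param_name = ("estimator__max_depth" if "estimator" in abc.get_params()
              else "base_estimator__max_depth")
grid = GridSearchCV(abc, {param_name: [1, 2]}, cv=3, scoring="roc_auc")
grid.fit(X, y)
print(grid.best_params_)
```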
The results above show that: - The ensemble with max_depth=5 is clearly overfitting (training auc is almost 1, while the test score is much lower) - At max_depth=2, the model performs slightly better (approx 95% AUC) with a higher test score. Thus, we should go ahead with ```max_depth=2``` and ```n_estimators=200```. Note ...
# model performance on test data with chosen hyperparameters # base estimator tree = DecisionTreeClassifier(max_depth=2) # adaboost with the tree as base estimator # learning rate is arbitrarily set, we'll discuss learning_rate below ABC = AdaBoostClassifier( base_estimator=tree, learning_rate=0.6, n_esti...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Gradient Boosting Classifier Let's now try the gradient boosting classifier. We'll experiment with two main hyperparameters now - ```learning_rate``` (shrinkage) and ```subsample```. By adjusting the learning rate to less than 1, we can regularize the model. A model with a higher learning_rate learns fast, but is prone t...
# parameter grid param_grid = {"learning_rate": [0.2, 0.6, 0.9], "subsample": [0.3, 0.6, 0.9] } # adaboost with the tree as base estimator GBC = GradientBoostingClassifier(max_depth=2, n_estimators=200) # run grid search folds = 3 grid_search_GBC = GridSearchCV(GBC, ...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
It is clear from the plot above that the model with a lower subsample ratio performs better, while those with higher subsamples tend to overfit. Also, a lower learning rate results in less overfitting. XGBoost Let's finally try XGBoost. The hyperparameters are the same, some important ones being ```subsample```, ```lea...
# fit model on training data with default hyperparameters model = XGBClassifier() model.fit(X_train, y_train) # make predictions for test data # use predict_proba since we need probabilities to compute auc y_pred = model.predict_proba(X_test) y_pred[:10] # evaluate predictions roc = metrics.roc_auc_score(y_test, y_pred...
AUC: 94.85%
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
The roc_auc in this case is about 0.95 with default hyperparameters. Let's try changing the hyperparameters - an exhaustive list of XGBoost hyperparameters is here: http://xgboost.readthedocs.io/en/latest/parameter.html Let's now try tuning the hyperparameters using k-fold CV. We'll then use grid search CV to find the...
# hyperparameter tuning with XGBoost # creating a KFold object folds = 3 # specify range of hyperparameters param_grid = {'learning_rate': [0.2, 0.6], 'subsample': [0.3, 0.6, 0.9]} # specify model xgb_model = XGBClassifier(max_depth=2, n_estimators=200) # set up GridSearchCV() model_cv = G...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
The results show that a subsample size of 0.6 and learning_rate of about 0.2 seems optimal. Also, XGBoost has resulted in the highest ROC AUC obtained (across various hyperparameters). Let's build a final model with the chosen hyperparameters.
# chosen hyperparameters # 'objective':'binary:logistic' outputs probability rather than label, which we need for auc params = {'learning_rate': 0.2, 'max_depth': 2, 'n_estimators':200, 'subsample':0.6, 'objective':'binary:logistic'} # fit model on training data model = XGBClass...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
The first column in y_pred is P(0), i.e. P(not fraud), and the second column is P(1), i.e. P(fraud).
# roc_auc auc = sklearn.metrics.roc_auc_score(y_test, y_pred[:, 1]) auc
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Finally, let's also look at the feature importances.
# feature importance importance = dict(zip(X_train.columns, model.feature_importances_)) importance # plot plt.bar(range(len(model.feature_importances_)), model.feature_importances_) plt.show()
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Predictions on Test Data Since this problem is hosted on Kaggle, you can choose to make predictions on the test data and submit your results. Please note the following points and recommendations if you go ahead with Kaggle: Recommendations for training: - We have used only a fraction of the training set (train_sample, 1...
# # read submission file #sample_sub = pd.read_csv(path+'sample_submission.csv') #sample_sub.head() # # predict probability of test data # test_final = pd.read_csv(path+'test.csv') # test_final.head() # # predictions on test data # test_final = timeFeatures(test_final) # test_final.head() # test_final.drop(['click_time...
_____no_output_____
MIT
TalkingData+Click+Fraud+.ipynb
gyanadata/TalkingData-Fraudulent-Click-Prediction
Topic 2: Neural network Lesson 1: Introduction to Neural Networks 1. AND perceptron Complete the cell below:
import pandas as pd # TODO: Set weight1, weight2, and bias weight1 = 0.0 weight2 = 0.0 bias = 0.0 # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [False, False, False, True] outputs = [] # Generate and check output for test_input, correct_output in...
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
My answer:
import pandas as pd # TODO: Set weight1, weight2, and bias k = 100 weight1 = k * 1.0 weight2 = k * 1.0 bias = k * (-2.0) # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [False, False, False, True] outputs = [] # Generate and check output for test_i...
Nice! You got it all correct. Input 1 Input 2 Linear Combination Activation Output Is Correct 0 0 -200.0 0 Yes 0 1 -100.0 0 Yes 1 0 -100.0 ...
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
2. OR Perceptron Complete the cell below:
import pandas as pd # TODO: Set weight1, weight2, and bias weight1 = 0.0 weight2 = 0.0 bias = 0.0 # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [False, True, True, True] outputs = [] # Generate and check output for test_input, correct_output in z...
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
My answer:
import pandas as pd # TODO: Set weight1, weight2, and bias k = 100 weight1 = k * 1.0 weight2 = k * 1.0 bias = k * (-1.0) # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [False, True, True, True] outputs = [] # Generate and check output for test_inp...
Nice! You got it all correct. Input 1 Input 2 Linear Combination Activation Output Is Correct 0 0 -100.0 0 Yes 0 1 0.0 1 Yes 1 0 0.0 ...
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
2 ways to transform an AND perceptron to an OR perceptron: * Increase the weights $w$ * Decrease the magnitude of the bias $|b|$ 3. NOT Perceptron Complete the code below. Only consider the second number in ```test_inputs``` as the input; ignore the first number.
import pandas as pd # TODO: Set weight1, weight2, and bias weight1 = 0.0 weight2 = 0.0 bias = 0.0 # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [True, False, True, False] outputs = [] # Generate and check output for test_input, correct_output in ...
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
My answer:
import pandas as pd # TODO: Set weight1, weight2, and bias k = 100 weight1 = 0.0 weight2 = k * (-1.0) bias = 0.0 # DON'T CHANGE ANYTHING BELOW # Inputs and outputs test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)] correct_outputs = [True, False, True, False] outputs = [] # Generate and check output for test_input, cor...
Nice! You got it all correct. Input 1 Input 2 Linear Combination Activation Output Is Correct 0 0 0.0 1 Yes 0 1 -100.0 0 Yes 1 0 0.0 ...
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
4. XOR Perceptron An XOR Perceptron can be built from an AND Perceptron, an OR Perceptron and a NOT Perceptron. (image source: Udacity) ```NAND``` consists of an AND perceptron followed by a NOT perceptron. 5. Perceptron algorithm Complete the cell below:
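The composition described above can be sketched as follows; the helper names and threshold values are my own illustrative choices:

```python
# XOR(a, b) = AND(NAND(a, b), OR(a, b)) -- built entirely from
# linear-threshold units, since XOR alone is not linearly separable.
def step(z):
    return 1 if z >= 0 else 0

def AND(a, b):  return step(a + b - 1.5)
def OR(a, b):   return step(a + b - 0.5)
def NOT(a):     return step(-a + 0.5)
def NAND(a, b): return NOT(AND(a, b))
def XOR(a, b):  return AND(NAND(a, b), OR(a, b))

print([XOR(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```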
import numpy as np # Setting the random seed, feel free to change it and see different solutions. np.random.seed(42) def stepFunction(t): if t >= 0: return 1 return 0 def prediction(X, W, b): return stepFunction((np.matmul(X,W)+b)[0]) # TODO: Fill in the code below to implement the perceptron tri...
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
This is data.csv: ```0.78051,-0.063669,10.28774,0.29139,10.40714,0.17878,10.2923,0.4217,10.50922,0.35256,10.27785,0.10802,10.27527,0.33223,10.43999,0.31245,10.33557,0.42984,10.23448,0.24986,10.0084492,0.13658,10.12419,0.33595,10.25644,0.42624,10.4591,0.40426,10.44547,0.45117,10.42218,0.20118,10.49563,0.21445,10.30848,0...
import numpy as np X = np.array([ [0.78051,-0.063669], [0.28774,0.29139], [0.40714,0.17878], [0.2923,0.4217], [0.50922,0.35256], [0.27785,0.10802], [0.27527,0.33223], [0.43999,0.31245], [0.33557,0.42984], [0.23448,0.24986], [0.0084492,0.13658], [0.12419,0.33595], [0....
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
Solution:```def perceptronStep(X, y, W, b, learn_rate = 0.01): for i in range(len(X)): y_hat = prediction(X[i],W,b) if y[i]-y_hat == 1: W[0] += X[i][0]*learn_rate W[1] += X[i][1]*learn_rate b += learn_rate elif y[i]-y_hat == -1: W[0] -= X[i][0]*learn_r...
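The solution above can be exercised end-to-end on a toy two-point dataset; the data here is made up for illustration and is not the course's `data.csv`:

```python
import numpy as np

np.random.seed(42)

def prediction(X, W, b):
    # Step activation on the linear combination, as in the course code
    return int((np.matmul(X, W) + b)[0] >= 0)

def perceptronStep(X, y, W, b, learn_rate=0.01):
    for i in range(len(X)):
        y_hat = prediction(X[i], W, b)
        if y[i] - y_hat == 1:     # false negative: nudge boundary toward point
            W[0] += X[i][0] * learn_rate
            W[1] += X[i][1] * learn_rate
            b += learn_rate
        elif y[i] - y_hat == -1:  # false positive: nudge it away
            W[0] -= X[i][0] * learn_rate
            W[1] -= X[i][1] * learn_rate
            b -= learn_rate
    return W, b

X = np.array([[0.2, 0.8], [0.8, 0.2]])  # toy, linearly separable points
y = np.array([1, 0])
W, b = np.array([[0.5], [0.5]]), np.array([0.0])
for _ in range(25):
    W, b = perceptronStep(X, y, W, b, learn_rate=0.1)

print([prediction(x, W, b) for x in X])  # [1, 0]
```

Once both points are classified correctly, `perceptronStep` makes no further updates, so the loop is a fixed point after convergence.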
import numpy as np # Write a function that takes as input a list of numbers, and returns # the list of values given by the softmax function. def softmax(L): pass
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
My answer:
import numpy as np # Write a function that takes as input a list of numbers, and returns # the list of values given by the softmax function. def softmax(L): return [(np.exp(L[i]) / np.sum(np.exp(L))) for i in range(len(L))] L = [0, 2, 1] softmax(L)
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
7. Cross-Entropy Formula: $$\text{Cross Entropy} = - \sum_{i=1}^{|X|} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]$$ where * $y_i$ is the true label for the $i^{th}$ instance * $p_i$ is the probability that the $i^{th}$ instance is positive. Complete the code below
import numpy as np # Write a function that takes as input two lists Y, P, # and returns the float corresponding to their cross-entropy. def cross_entropy(Y, P): pass
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
My answer:
import numpy as np # Write a function that takes as input two lists Y, P, # and returns the float corresponding to their cross-entropy. def cross_entropy(Y, P): return -np.sum([Y[i] * np.log(P[i]) + (1 - Y[i]) * np.log(1 - P[i]) for i in range(len(Y))]) Y = np.array([1, 0, 1, 1]) P = np.array([0.4, 0.6, 0.1, 0.5])...
_____no_output_____
MIT
exercise_notebooks_my_solutions/2. Neural Networks/1. Introduction to Neural Networks.ipynb
Yixuan-Lee/udacity-deep-learning-nanodegree
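The answer above returns `inf` or `nan` if any probability is exactly 0 or 1; a clipped variant (a sketch, not part of the exercise) keeps the logs finite:

```python
import numpy as np

def cross_entropy(Y, P, eps=1e-12):
    Y = np.asarray(Y, dtype=float)
    # Clip probabilities away from 0 and 1 so both logs stay finite
    P = np.clip(np.asarray(P, dtype=float), eps, 1 - eps)
    return float(-np.sum(Y * np.log(P) + (1 - Y) * np.log(1 - P)))

print(cross_entropy([1, 0, 1, 1], [0.4, 0.6, 0.1, 0.5]))
print(cross_entropy([1, 0], [1.0, 0.0]))  # finite thanks to clipping, ~0
```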
AHDB wheat lodging risk and recommendationsThis example notebook was inspired by the [AHDB lodging practical guidelines](https://ahdb.org.uk/knowledge-library/lodging): we evaluate the lodging risk for a field and output practical recommendations. We then adjust the estimated risk according to the Leaf Area Index (LAI...
# Input AHDB factors for evaluating lodging risks def sns_index_score(sns_index): return 3 - 6 * sns_index / 4 # Sowing dates and associated lodging resistance score sowing_date_scores = {'Mid Sept': -2, 'End Sept': -1, 'Mid Oct': 0, 'End Oct': 1, 'Nov onwards': 2} # Density ranges and associated lodging resistan...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
AHDB practical adviceAHDB provides practical advice for managing the risk of stem and root lodging. This advice depends on the resistance score calculated specifically for a field. AHDB recommends fertilizer and PGR actions for managing stem lodging risk. For root lodging, AHDB also advises if the crop needs to be rol...
# Nitrogen fertiliser advice for stem risk stem_risk_N_advice = { 'below 5': 'Delay & reduce N', '5-6.8': 'Delay & reduce N', '7-8.8': 'Delay N', } # PGR advice for stem risk stem_risk_PGR_advice = { 'below 5': 'Full PGR', '5-6.8': 'Full PGR', '7-8.8': 'Single PGR', '9-10': 'PGR if high yie...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
AHDB standard lodging risk management recommendationsUsing the definitions above, we can calculate the AHDB recommendation according to individual factors:
import pandas as pd from ipywidgets import widgets from pandas.plotting import register_matplotlib_converters register_matplotlib_converters() style = {'description_width': 'initial'} def ahdb_lodging_recommendation(resistance_score, sns_index, sowing_date, sowing_density): category = lodging_resistance_category(...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
[Widget image](img/lodging/recommendations_slider.png) Adjusting recommendations based on remote sensing information The same practical guidelines from AHDB explain that crop conditions in Spring can indicate future lodging risk. In particular, Green Area Index (GAI) greater than 2 or Ground Cover Fraction (GCF) above...
import os import requests GRAPHQL_ENDPOINT = "https://api.agrimetrics.co.uk/graphql/v1/" if "API_KEY" in os.environ: API_KEY = os.environ["API_KEY"] else: API_KEY = input("Query API Subscription Key: ").strip()
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
We will also need a short function to help catch and report errors from making GraphQL queries.
def check_results(result): if result.status_code != 200: raise Exception(f"Request failed with code {result.status_code}.\n{result.text}") errors = result.json().get("errors", []) if errors: for err in errors: print(f"{err['message']}:") print( " at", " and ".join([f...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
A GraphQL query is posted to the GraphQL endpoint in a json body. With our first query, we retrieve the Agrimetrics field id at a given location.
graphql_url = 'https://api.agrimetrics.co.uk/graphql' headers = { 'Ocp-Apim-Subscription-Key': API_KEY, 'Content-Type': "application/json", 'Accept-Encoding': "gzip, deflate, br", } centroid = (-0.929365345, 51.408374978) response = requests.post(graphql_url, headers=headers, json={ 'query': ''' ...
Agrimetrics field id: https://data.agrimetrics.co.uk/fields/BZwCrEVaXO62NTX_Jfl1yw
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
The GraphQL API supports filtering by object IDs. Here, we retrieve the sown-crop information associated with the field id obtained in our first query.
# Verify field was a wheat crop in 2018 response = requests.post(graphql_url, headers=headers, json={ 'query': ''' query getSownCrop($fieldId: [ID!]!) { fields(where: {id: {EQ: $fieldId}}) { sownCrop { cropType harvestYear }...
[{'cropType': 'WHEAT', 'harvestYear': 2016}, {'cropType': 'MAIZE', 'harvestYear': 2017}, {'cropType': 'WHEAT', 'harvestYear': 2018}]
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
We must register to access Verde crop observations for our field of interest. LAI is a crop-specific attribute, so `cropType` must be provided when registering.
# Register for CROP_SPECIFIC verde data on our field response = requests.post(graphql_url, headers=headers, json={ 'query': ''' mutation registerCropObservations($fieldId: ID!) { account { premiumData { addCropObservationRegistrations(registrations: {fieldId: ...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
GCF is not crop-specific, so we also need to register for access to non-crop-specific attributes.
# Register for NON_CROP_SPECIFIC verde data on our field response = requests.post(graphql_url, headers=headers, json={ 'query': ''' mutation registerCropObservations($fieldId: ID!) { account { premiumData { addCropObservationRegistrations(registrations: {field...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
Once Verde data for this field is available, we can easily retrieve it, for instance:
response = requests.post(graphql_url, headers=headers, json={ 'query': ''' query getCropObservations($fieldId: [ID!]!) { fields(where: {id: {EQ: $fieldId}}) { cropObservations { leafAreaIndex { dateTime mean } } } } ''',...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
The data can be loaded as a pandas DataFrame:
results = response.json() leafAreaIndex = pd.io.json.json_normalize( results['data']['fields'], record_path=['cropObservations', 'leafAreaIndex'], ) leafAreaIndex['date_time'] = pd.to_datetime(leafAreaIndex['dateTime']) leafAreaIndex['value'] = leafAreaIndex['mean'] leafAreaIndex = leafAreaIndex[['date_time', '...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
[Table image](img/lodging/lai_for_field.png) We proceed to a second similar query to obtain green vegetation cover fraction:
response = requests.post(graphql_url, headers=headers, json={ 'query': ''' query getCropObservations($fieldId: [ID!]!) { fields(where: {id: {EQ: $fieldId}}) { cropObservations { greenVegetationCoverFraction { dateTime mean } } } ...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
A year of observations was retrieved:
import matplotlib.pyplot as plt plt.plot(leafAreaIndex['date_time'], leafAreaIndex['value'], label='LAI') plt.plot(greenCoverFraction['date_time'], greenCoverFraction['value'], label='GCF') plt.legend() plt.show()
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
[Graph image](img/lodging/lai_gfc.png) Adjusting recommendationGS31 marks the beginning of the stem elongation and generally occurs around mid April. Let's filter our LAI and GCF around this time of year:
from datetime import datetime, timezone from_date = datetime(2018, 4, 7, tzinfo=timezone.utc) to_date = datetime(2018, 4, 21, tzinfo=timezone.utc) leafAreaIndex_mid_april = leafAreaIndex[(leafAreaIndex['date_time'] > from_date) & (leafAreaIndex['date_time'] < to_date)] greenCoverFraction_mid_april = greenCoverFraction[...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
Check if LAI or GCF are above their respective thresholds:
(leafAreaIndex_mid_april['value'] > 2).any() | (greenCoverFraction_mid_april['value'] > 0.6).any()
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
Our field has an LAI below 2 in the two weeks around mid-April and no GCF reading close enough to be taken into account. But we now have the basis for adjusting our recommendation using Agrimetrics Verde crop observations. Let's broaden our evaluation to nearby Agrimetrics fields with a wheat crop in 2018.
response = requests.post(graphql_url, headers=headers, json={ 'query': ''' query getFieldsWithinRadius($centroid: CoordinateScalar!, $distance: Float!) { fields(geoFilter: {location: {type: Point, coordinates: $centroid}, distance: {LE: $distance}}) { id sownCrop ...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
Using the same approach as above, we implement the retrieval of Verde LAI and GCF for the selected fields:
def register(field_id): # Register for CROP_SPECIFIC verde data on our field response = requests.post(graphql_url, headers=headers, json={ 'query': ''' mutation registerCropObservations($fieldId: ID!) { account { premiumData { addCr...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
We then revisit the recommendation algorithm:
def adjusted_lodging_recommendation(field_id, resistance_score, sns_index, sowing_date, sowing_density): register(field_id) leafAreaIndex = crop_observations(field_id, 'leafAreaIndex') greenCoverFraction = crop_observations(field_id, 'greenVegetationCoverFraction') high_LAI = has_high_LAI(field_id,...
_____no_output_____
MIT
verde-examples/lodging.ipynb
markbneal/api-examples
iloc (position-based)
data_df.head() data_df.iloc[0, 0] # The line below raises an error: iloc only accepts integer positions data_df.iloc['Name', 0] data_df.reset_index()
_____no_output_____
MIT
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
loc (label-based)
data_df data_df.loc['one', 'Name'] data_df_reset.loc[1, 'Name'] data_df_reset.loc[0, 'Name']
_____no_output_____
MIT
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
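The position-based/label-based distinction can be shown side by side on a small frame of made-up data:

```python
import pandas as pd

# Tiny illustrative frame (my own example, not the notebook's data_df)
df = pd.DataFrame({'Name': ['A', 'B', 'C'], 'Score': [10, 20, 30]},
                  index=['one', 'two', 'three'])

# iloc is purely position-based: integer row/column offsets
print(df.iloc[0, 0])          # 'A'

# loc is purely label-based: index labels and column names
print(df.loc['one', 'Name'])  # 'A'

# After reset_index() the row labels become integers 0..n-1,
# so loc and iloc happen to accept the same row keys:
df_reset = df.reset_index()
print(df_reset.loc[1, 'Name'])  # 'B'
```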
Boolean Indexing
titanic_df = pd.read_csv('titanic_train.csv') titanic_boolean = titanic_df[titanic_df['Age'] > 60] titanic_boolean var1 = titanic_df['Age'] > 60 print('Result:\n', var1) print(type(var1)) titanic_df[titanic_df['Age'] > 60][['Name', 'Age']].head(3) titanic_df[['Name', 'Age']][titanic_df['Age'] > 60].head(3) titanic_df['Age_...
_____no_output_____
MIT
self/pandas_basic_2.ipynb
Karmantez/Tensorflow_Practice
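A related pattern worth knowing, sketched here on made-up data rather than the Titanic set: combining a boolean mask with `loc` selects rows and columns in one step and avoids chained indexing:

```python
import pandas as pd

# Illustrative frame standing in for the Titanic data
df = pd.DataFrame({'Name': ['A', 'B', 'C', 'D'],
                   'Age': [62, 35, 70, 28]})

mask = df['Age'] > 60            # boolean Series, one flag per row
print(df.loc[mask, ['Name', 'Age']])

# Multiple conditions need parentheses and &/| (not `and`/`or`):
print(df.loc[(df['Age'] > 60) & (df['Name'] != 'C'), 'Name'].tolist())  # ['A']
```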
Settings
%load_ext autoreload %autoreload 2 %env TF_KERAS = 1 import os sep_local = os.path.sep import sys sys.path.append('..'+sep_local+'..') print(sep_local) os.chdir('..'+sep_local+'..'+sep_local+'..'+sep_local+'..'+sep_local+'..') print(os.getcwd()) import tensorflow as tf print(tf.__version__)
2.1.0
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Dataset loading
dataset_name='Dstripes' images_dir = 'C:\\Users\\Khalid\\Documents\\projects\\Dstripes\\DS06\\' validation_percentage = 20 valid_format = 'png' from training.generators.file_image_generator import create_image_lists, get_generators imgs_list = create_image_lists( image_dir=images_dir, validation_pct=validation_p...
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Model's Layers definition
units=20 c=50 menc_lays = [ tf.keras.layers.Conv2D(filters=units//2, kernel_size=3, strides=(2, 2), activation='relu'), tf.keras.layers.Conv2D(filters=units*9//2, kernel_size=3, strides=(2, 2), activation='relu'), tf.keras.layers.Flatten(), # No activation tf.keras.layers.Dense(latents_dim) ] venc_...
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Model definition
model_name = dataset_name+'VAE_Convolutional_reconst_1ell_1psnr' experiments_dir='experiments'+sep_local+model_name from training.autoencoding_basic.autoencoders.VAE import VAE as AE inputs_shape=image_size variables_params = \ [ { 'name': 'inference_mean', 'inputs_shape':inputs_shape, 'out...
Model: "pokemonAE_Dense_reconst_1ell_1ssmi" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= inference_inputs (InputLayer [(None, 200, 200, 3)] 0 ____________...
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Callbacks
from training.callbacks.sample_generation import SampleGeneration from training.callbacks.save_model import ModelSaver es = tf.keras.callbacks.EarlyStopping( monitor='loss', min_delta=1e-12, patience=12, verbose=1, restore_best_weights=False ) ms = ModelSaver(filepath=_restore) csv_dir = os.pa...
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Model Training
ae.fit( x=train_ds, input_kw=None, steps_per_epoch=int(1e4), epochs=int(1e6), verbose=2, callbacks=[ es, ms, csv_log, sg, gts_mertics, gtu_mertics], workers=-1, use_multiprocessing=True, validation_data=test_ds, validation_steps=int(1e4) )
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Model Evaluation inception_score
from evaluation.generativity_metrics.inception_metrics import inception_score is_mean, is_sigma = inception_score(ae, tolerance_threshold=1e-6, max_iteration=200) print(f'inception_score mean: {is_mean}, sigma: {is_sigma}')
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Frechet_inception_distance
from evaluation.generativity_metrics.inception_metrics import frechet_inception_distance fis_score = frechet_inception_distance(ae, training_generator, tolerance_threshold=1e-6, max_iteration=10, batch_size=32) print(f'frechet inception distance: {fis_score}')
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
perceptual_path_length_score
from evaluation.generativity_metrics.perceptual_path_length import perceptual_path_length_score ppl_mean_score = perceptual_path_length_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200, batch_size=32) print(f'perceptual path length score: {ppl_mean_score}')
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
precision score
from evaluation.generativity_metrics.precision_recall import precision_score _precision_score = precision_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200) print(f'precision score: {_precision_score}')
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
recall score
from evaluation.generativity_metrics.precision_recall import recall_score _recall_score = recall_score(ae, training_generator, tolerance_threshold=1e-6, max_iteration=200) print(f'recall score: {_recall_score}')
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Image Generation image reconstruction Training dataset
%load_ext autoreload %autoreload 2 from training.generators.image_generation_testing import reconstruct_from_a_batch from utils.data_and_files.file_utils import create_if_not_exist save_dir = os.path.join(experiments_dir, 'reconstruct_training_images_like_a_batch_dir') create_if_not_exist(save_dir) reconstruct_from_a_...
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
with Randomness
from training.generators.image_generation_testing import generate_images_like_a_batch from utils.data_and_files.file_utils import create_if_not_exist save_dir = os.path.join(experiments_dir, 'generate_training_images_like_a_batch_dir') create_if_not_exist(save_dir) generate_images_like_a_batch(ae, training_generator, ...
_____no_output_____
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Complete Randomness
from training.generators.image_generation_testing import generate_images_randomly from utils.data_and_files.file_utils import create_if_not_exist save_dir = os.path.join(experiments_dir, 'random_synthetic_dir') create_if_not_exist(save_dir) generate_images_randomly(ae, save_dir) from training.generators.image_generati...
100%|██████████| 15/15 [00:00<00:00, 19.90it/s]
MIT
notebooks/losses_evaluation/Dstripes/basic/ellwlb/convolutional/VAE/DstripesVAE_Convolutional_reconst_1ellwlb_1psnr.ipynb
Fidan13/Generative_Models
Decision tree
data.columns = ["price", "maintenance", "n_doors", "capacity", "size_lug", "safety", "class"] data.sample(10) data.price.replace(("vhigh", "high", "med", "low"), (4, 3, 2, 1), inplace = True) data.maintenance.replace(("vhigh", "high", "med", "low"), (4, 3, 2, 1), inplace = True) data.n_doors.replace(("2", "3", "4", "5m...
Precisión: 0.9682
MIT
car.ipynb
karvaroz/CarEvaluation
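As a sketch of an alternative to the repeated per-column `replace` calls above (assuming the same four ordinal levels), a single mapping dict can be applied to several columns at once:

```python
import pandas as pd

# One shared ordinal encoding for every price-like column
levels = {'vhigh': 4, 'high': 3, 'med': 2, 'low': 1}

# Toy frame standing in for the car-evaluation data
data = pd.DataFrame({'price': ['low', 'vhigh', 'med'],
                     'maintenance': ['high', 'low', 'med']})
data[['price', 'maintenance']] = data[['price', 'maintenance']].replace(levels)
print(data)
```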
Modeling and Simulation in Python Case study. Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
# Configure Jupyter so figures appear in the notebook %matplotlib inline # Configure Jupyter to display the assigned value after an assignment %config InteractiveShell.ast_node_interactivity='last_expr_or_assign' # import functions from the modsim.py module from modsim import *
_____no_output_____
MIT
soln/oem_soln.ipynb
pmalo46/ModSimPy