Testing the database connection. We have a lookup table containing the FRED series along with their values. Let's export the connection parameters and test the connection by running a SELECT query against the lookup table.
cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'], host='127.0.0.1', database=fred_sql['database']) cursor = cn.cursor() query = ("SELECT frd_cd,frd_val FROM frd_lkp") cursor.execute(query) for (frd_cd,frd_val) in cursor: ...
UMCSENT - University of Michigan Consumer Sentiment Index
GDPC1 - Real Gross Domestic Product
UNRATE - US Civilian Unemployment Rate
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
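The connection-test cell above is truncated in this export; a minimal sketch of the full test, assuming the frd_lkp table and a fred_sql dict holding the credentials as in the notebook, might look like this:

import mysql.connector

# Connect using the exported parameters and list the FRED series in the lookup table.
cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                              host='127.0.0.1', database=fred_sql['database'])
cursor = cn.cursor()
cursor.execute("SELECT frd_cd, frd_val FROM frd_lkp")
for (frd_cd, frd_val) in cursor:
    print(f"{frd_cd} - {frd_val}")
cursor.close()
cn.close()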
Helper functions. We are doing this exercise with minimal modeling; hence, just one target table stores the observations for all series. Let's create a few helper functions to make this process easier. db_max_count - we are adding a surrogate key to the table to make general querying operations and loads easier. C...
def db_max_count(): cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'], host='127.0.0.1', database=fred_sql['database']) cursor = cn.cursor() dbquery = ("SELECT COALESCE(max(idfrd_srs),0) FROM frd_srs_data") curso...
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
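The helper above is cut off; a minimal sketch of db_max_count under the same assumptions (a frd_srs_data table with an idfrd_srs surrogate key, and the fred_sql credentials dict) could be:

import mysql.connector

def db_max_count():
    # Return the current maximum surrogate key, or 0 for an empty table.
    cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                                  host='127.0.0.1', database=fred_sql['database'])
    cursor = cn.cursor()
    cursor.execute("SELECT COALESCE(MAX(idfrd_srs), 0) FROM frd_srs_data")
    (max_id,) = cursor.fetchone()
    cursor.close()
    cn.close()
    return max_id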
Main functions. We are creating the main functions to support the process. Here are the steps: 1) Get the data from the FRED API (helper function created above). 2) Validate and transform the observations data from the API. 3) Create tuples according to the table structure. 4) Load the tuples into the relational dat...
def dbload(tuple_list): try: cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'], host='127.0.0.1', database=fred_sql['database']) cursor = cn.cursor() insert_query = ("INSERT INTO...
_____no_output_____
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
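The dbload function is truncated above; a hedged sketch of loading a list of tuples with executemany follows. The target table frd_srs_data and the key columns come from the notebook, but the observation column names used here are illustrative, not confirmed by the source.

import mysql.connector

def dbload(tuple_list):
    # Bulk-insert observation tuples; obs_dt/obs_val column names are assumed.
    try:
        cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'],
                                      host='127.0.0.1', database=fred_sql['database'])
        cursor = cn.cursor()
        insert_query = ("INSERT INTO frd_srs_data (idfrd_srs, frd_cd, obs_dt, obs_val) "
                        "VALUES (%s, %s, %s, %s)")
        cursor.executemany(insert_query, tuple_list)
        cn.commit()
        cursor.close()
        cn.close()
    except mysql.connector.Error as err:
        print(f"Load failed: {err}")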
Starting point... So, we have all functions created based on a few assumptions (that the data is all good, with very minimal or no issues).
sr_list = ['UMCSENT', 'GDPC1', 'UNRATE'] for series in sr_list: fred_data(series) cn = mysql.connector.connect(user=fred_sql['user'], password=fred_sql['password'], host='127.0.0.1', database=fred_sql['database']) cursor = cn.cursor() quizquer...
(1980, 7.175000000000001) (1981, 7.616666666666667) (1982, 9.708333333333332) (1983, 9.6) (1984, 7.508333333333334) (1985, 7.191666666666666) (1986, 7.0) (1987, 6.175000000000001) (1988, 5.491666666666666) (1989, 5.258333333333333) (1990, 5.616666666666666) (1991, 6.849999999999999) (1992, 7.491666666666667) (1993, 6.9...
MIT
Fred API.ipynb
Anandkarthick/API_Stuff
Arize Tutorial: Surrogate Model Feature Importance. A surrogate model is an interpretable model trained to predict the predictions of a black-box model. The goal is to approximate the predictions of the black-box model as closely as possible and generate feature importance values from the interpretable surrogate mode...
!pip install -q interpret==0.2.7 interpret-community==0.22.0 from interpret_community.mimic.mimic_explainer import ( MimicExplainer, LGBMExplainableModel, )
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Classification Example. Generate example. In this example we'll use a support vector machine (SVM) as our black-box model. Only the prediction outputs of the SVM model are needed to train the surrogate model; feature importances are generated from the surrogate model and sent to Arize.
import pandas as pd import numpy as np from sklearn.datasets import load_breast_cancer from sklearn.svm import SVC bc = load_breast_cancer() feature_names = bc.feature_names target_names = bc.target_names data, target = bc.data, bc.target df = pd.DataFrame(data, columns=feature_names) model = SVC(probability=True)....
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
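The cell above is truncated; a minimal sketch of the rest of the classification setup, assuming the goal is simply to obtain the SVM's prediction outputs (the variable names prediction_score and prediction_label are assumptions here):

import pandas as pd
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

bc = load_breast_cancer()
df = pd.DataFrame(bc.data, columns=bc.feature_names)

# Fit the black-box model and keep only its predicted probabilities.
model = SVC(probability=True).fit(df, bc.target)
prediction_score = model.predict_proba(df)[:, 1]            # probability of the positive class
prediction_label = (prediction_score > 0.5).astype(int)     # hard labels derived from the scores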
Generate feature importance values. Note that the model itself is not used here; only its prediction outputs are used.
def model_func(_): return np.array(list(map(lambda p: [1 - p, p], prediction_score))) explainer = MimicExplainer( model_func, df, LGBMExplainableModel, augment_data=False, is_function=True, ) feature_importance_values = pd.DataFrame( explainer.explain_local(df).local_importance_values, co...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send data to Arize. Set up the Arize client. We'll be using the Pandas Logger. First copy the Arize `API_KEY` and `ORG_KEY` from your admin page linked below! [![Button_Open.png](https://storage.googleapis.com/arize-assets/fixtures/Button_Open.png)](https://app.arize.com/admin)
!pip install -q arize from arize.pandas.logger import Client, Schema from arize.utils.types import ModelTypes, Environments ORGANIZATION_KEY = "ORGANIZATION_KEY" API_KEY = "API_KEY" arize_client = Client(organization_key=ORGANIZATION_KEY, api_key=API_KEY) if ORGANIZATION_KEY == "ORGANIZATION_KEY" or API_KEY == "API_...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Helper functions to simulate prediction IDs and timestamps.
import uuid from datetime import datetime, timedelta # Prediction ID is required for logging any dataset def generate_prediction_ids(df): return pd.Series((str(uuid.uuid4()) for _ in range(len(df))), index=df.index) # OPTIONAL: We can directly specify when inferences were made def simulate_production_timestamps(...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
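The simulate_production_timestamps helper is cut off above; one way such a helper could be written (the 30-day spread and the exact return format are assumptions, not taken from the notebook) is:

import numpy as np
import pandas as pd
from datetime import datetime, timedelta

def simulate_production_timestamps(df, days=30):
    # Spread synthetic inference timestamps evenly over the past `days` days,
    # returned as epoch seconds aligned with the DataFrame index.
    t_end = datetime.now()
    t_start = t_end - timedelta(days=days)
    timestamps = np.linspace(t_start.timestamp(), t_end.timestamp(), num=len(df))
    return pd.Series(timestamps.astype(int), index=df.index)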
Assemble Pandas DataFrame as a production dataset with prediction IDs and timestamps.
feature_importance_values_column_names_mapping = { f"{feat}": f"{feat} (feature importance)" for feat in feature_names } production_dataset = pd.concat( [ pd.DataFrame( { "prediction_id": generate_prediction_ids(df), "prediction_ts": simulate_production_times...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send dataframe to Arize
# Define a Schema() object for Arize to pick up data from the correct columns for logging production_schema = Schema( prediction_id_column_name="prediction_id", # REQUIRED timestamp_column_name="prediction_ts", prediction_label_column_name="prediction_label", prediction_score_column_name="prediction_sc...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Regression Example. Generate example. In this example we'll use a support vector machine (SVM) as our black-box model. Only the prediction outputs of the SVM model are needed to train the surrogate model; feature importances are generated from the surrogate model and sent to Arize.
import pandas as pd import numpy as np from sklearn.datasets import fetch_california_housing housing = fetch_california_housing() # Use only 1,000 data point for a speedier example data_reg = housing.data[:1000] target_reg = housing.target[:1000] feature_names_reg = housing.feature_names df_reg = pd.DataFrame(data_r...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Generate feature importance values. Note that the model itself is not used here; only its prediction outputs are used.
def model_func_reg(_): return np.array(prediction_label_reg) explainer_reg = MimicExplainer( model_func_reg, df_reg, LGBMExplainableModel, augment_data=False, is_function=True, ) feature_importance_values_reg = pd.DataFrame( explainer_reg.explain_local(df_reg).local_importance_values, ...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Assemble Pandas DataFrame as a production dataset with prediction IDs and timestamps.
feature_importance_values_column_names_mapping_reg = { f"{feat}": f"{feat} (feature importance)" for feat in feature_names_reg } production_dataset_reg = pd.concat( [ pd.DataFrame( { "prediction_id": generate_prediction_ids(df_reg), "prediction_ts": simulate_...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Send DataFrame to Arize.
# Define a Schema() object for Arize to pick up data from the correct columns for logging production_schema_reg = Schema( prediction_id_column_name="prediction_id", # REQUIRED timestamp_column_name="prediction_ts", prediction_label_column_name="prediction_label", actual_label_column_name="actual_label"...
_____no_output_____
BSD-3-Clause
arize/examples/tutorials/Arize_Tutorials/SHAP/Surrogate_Model_Feature_Importance.ipynb
Arize-ai/client_python
Plotting
ts = pd.Series(np.random.randn(1000), index=pd.date_range('1/1/2000', periods=1000)) ts = ts.cumsum() ts.plot();
_____no_output_____
MIT
01_pandas_basics/10_pandas_plotting.ipynb
markumreed/data_management_sp_2021
On a DataFrame, the plot() method is a convenience to plot all of the columns with labels:
df = pd.DataFrame(np.random.randn(1000, 4), index=ts.index, columns=['A', 'B', 'C', 'D']) df = df.cumsum() df.plot();
_____no_output_____
MIT
01_pandas_basics/10_pandas_plotting.ipynb
markumreed/data_management_sp_2021
!pip show tensorflow !git clone https://github.com/MingSheng92/AE_denoise.git from google.colab import drive drive.mount('/content/drive') %load /content/AE_denoise/scripts/utility.py %load /content/AE_denoise/scripts/Denoise_NN.py from AE_denoise.scripts.utility import load_data, faceGrid, ResultGrid, subsample, AddN...
_____no_output_____
MIT
DL_Example.ipynb
MingSheng92/AE_denoise
The Rosenblatt Perceptron. An example on the MNIST database. Import
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist_data = input_data.read_data_sets('MNIST_data', one_hot=True)
Extracting MNIST_data/train-images-idx3-ubyte.gz Extracting MNIST_data/train-labels-idx1-ubyte.gz Extracting MNIST_data/t10k-images-idx3-ubyte.gz Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
Model Parameters
input_size = 784 no_classes = 10 batch_size = 100 total_batches = 200 x_input = tf.placeholder(tf.float32, shape=[None, input_size]) y_input = tf.placeholder(tf.float32, shape=[None, no_classes]) weights = tf.Variable(tf.random_normal([input_size, no_classes])) bias = tf.Variable(tf.random_normal([no_classes])) logits ...
_____no_output_____
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
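The model-parameters cell is truncated after the logits. A hedged sketch of how the loss and optimizer are typically wired up in this TF1-style perceptron follows; the learning rate and the choice of plain gradient descent are assumptions, but the names optimiser and loss_operation match their use in the next cell.

# Linear logits over the flattened MNIST pixels.
logits = tf.matmul(x_input, weights) + bias

# Softmax cross-entropy loss averaged over the batch.
softmax_cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_input, logits=logits)
loss_operation = tf.reduce_mean(softmax_cross_entropy)

# Plain gradient descent; the learning rate is an assumption.
optimiser = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss_operation)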
Run the model
session = tf.Session()
session.run(tf.global_variables_initializer())
for batch_no in range(total_batches):
    mnist_batch = mnist_data.train.next_batch(batch_size)
    _, loss_value = session.run([optimiser, loss_operation],
                                feed_dict={x_input: mnist_batch[0], y_input: mnist_batch[1]})
    print(loss_...
/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters /anaconda3/lib/p...
MIT
docs/content/perceptron/Rosenblatt.ipynb
yiyulanghuan/deeplearning
PyTorch: nn. A fully-connected ReLU network with one hidden layer, trained to predict y from x by minimizing squared Euclidean distance. This implementation uses the nn package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can ...
import torch from torch.autograd import Variable # N is batch size; D_in is input dimension; # H is hidden dimension; D_out is output dimension. N, D_in, H, D_out = 64, 1000, 100, 10 # Create random Tensors to hold inputs and outputs, and wrap them in Variables. x = Variable(torch.randn(N, D_in)) y = Variable(torch.r...
_____no_output_____
MIT
two_layer_net_nn.ipynb
asapypy/mokumokuTorch
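The cell above is truncated; a minimal sketch of the rest of the nn-package example, written with the current tensor API rather than the deprecated Variable wrapper (learning rate and iteration count are assumptions):

import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

# Define the model as a sequence of layers and use summed MSE as the loss.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')

learning_rate = 1e-4
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    model.zero_grad()
    loss.backward()
    # Manually update the weights with gradient descent.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad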
Understanding Data Types in Python Effective data-driven science and computation requires understanding how data is stored and manipulated. This section outlines and contrasts how arrays of data are handled in the Python language itself, and how NumPy improves on this. Understanding this difference is fundamental to u...
x = 4 x = "four"
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
This sort of flexibility is one piece that makes Python and other dynamically-typed languages convenient and easy to use. 1.1. Data Types. We have several data types in Python:
* None
* Numeric (int, float, complex, bool)
* List
* Tuple
* Set
* String
* Range
* Dictionary (Map)
# NoneType
a = None
type(a)

# int
a = 1 + 1
print(a)
type(a)

# complex
c = 1.5 + 0.5j
type(c)
c.real
c.imag

# boolean
d = 2 > 3
print(d)
type(d)
False
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Python Lists Let's consider now what happens when we use a Python data structure that holds many Python objects. The standard mutable multi-element container in Python is the list. We can create a list of integers as follows:
L = list(range(10)) L type(L[0])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Or, similarly, a list of strings:
L2 = [str(c) for c in L] L2 type(L2[0])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Because of Python's dynamic typing, we can even create heterogeneous lists:
L3 = [True, "2", 3.0, 4] [type(item) for item in L3]
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Python Dictionaries
keys = [1, 2, 3, 4, 5]
values = ['monday', 'tuesday', 'wednesday', 'friday']
dictionary = dict(zip(keys, values))
dictionary
dictionary.get(1)
dictionary[1]
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Fixed-Type Arrays in Python
import numpy as np
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
First, we can use np.array to create arrays from Python lists:
# integer array: np.array([1, 4, 2, 5, 3])
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Unlike Python lists, NumPy arrays are constrained to contain elements of the same type. If we want to explicitly set the data type of the resulting array, we can use the dtype keyword:
np.array([1, 2, 3, 4], dtype='float32')
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
Creating Arrays from Scratch Especially for larger arrays, it is more efficient to create arrays from scratch using routines built into NumPy. Here are several examples:
# Create a length-10 integer array filled with zeros np.zeros(10, dtype=int) # Create a 3x5 floating-point array filled with ones np.ones((3, 5), dtype=float) # Create a 3x5 array filled with 3.14 np.full((3, 5), 3.14) # Create an array filled with a linear sequence np.arange(1, 10) # Starting at 0, ending at 20, stepp...
_____no_output_____
MIT
#2 Introduction to Numpy/Python-Data-Types.ipynb
Sphincz/dsml
GlobalMaxPooling1D **[pooling.GlobalMaxPooling1D.0] input 6x6**
data_in_shape = (6, 6) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(260) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in...
in shape: (6, 6) in: [-0.806777, -0.564841, -0.481331, 0.559626, 0.274958, -0.659222, -0.178541, 0.689453, -0.028873, 0.053859, -0.446394, -0.53406, 0.776897, -0.700858, -0.802179, -0.616515, 0.718677, 0.303042, -0.080606, -0.850593, -0.795971, 0.860487, -0.90685, 0.89858, 0.617251, 0.334305, -0.351137, -0.642574, 0.1...
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
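As a sanity check on what the layer computes, global max pooling over the time axis can be reproduced directly in NumPy; a small sketch, independent of Keras, for the 6x6 input above (using the same seed as the cell above):

import numpy as np

np.random.seed(260)
data_in = 2 * np.random.random((6, 6)) - 1

# GlobalMaxPooling1D takes the maximum over the time (first) axis,
# leaving one value per feature channel.
expected = data_in.max(axis=0)
print(expected.shape)  # (6,)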
**[pooling.GlobalMaxPooling1D.1] input 3x7**
data_in_shape = (3, 7) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(261) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in...
in shape: (3, 7) in: [0.601872, -0.028379, 0.654213, 0.217731, -0.864161, 0.422013, 0.888312, -0.714141, -0.184753, 0.224845, -0.221123, -0.847943, -0.511334, -0.871723, -0.597589, -0.889034, -0.544887, -0.004798, 0.406639, -0.35285, 0.648562] out shape: (7,) out: [0.601872, -0.028379, 0.654213, 0.217731, 0.406639, 0....
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
**[pooling.GlobalMaxPooling1D.2] input 8x4**
data_in_shape = (8, 4) L = GlobalMaxPooling1D() layer_0 = Input(shape=data_in_shape) layer_1 = L(layer_0) model = Model(inputs=layer_0, outputs=layer_1) # set weights to random (use seed for reproducibility) np.random.seed(262) data_in = 2 * np.random.random(data_in_shape) - 1 result = model.predict(np.array([data_in...
in shape: (8, 4) in: [-0.215694, 0.441215, 0.116911, 0.53299, 0.883562, -0.535525, -0.869764, -0.596287, 0.576428, -0.689083, -0.132924, -0.129935, -0.17672, -0.29097, 0.590914, 0.992098, 0.908965, -0.170202, 0.640203, 0.178644, 0.866749, -0.545566, -0.827072, 0.420342, -0.076191, 0.207686, -0.908472, -0.795307, -0.94...
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
export for Keras.js tests
import os
import json

filename = '../../../test/data/layers/pooling/GlobalMaxPooling1D.json'
if not os.path.exists(os.path.dirname(filename)):
    os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
    json.dump(DATA, f)
print(json.dumps(DATA))
{"pooling.GlobalMaxPooling1D.0": {"input": {"data": [-0.806777, -0.564841, -0.481331, 0.559626, 0.274958, -0.659222, -0.178541, 0.689453, -0.028873, 0.053859, -0.446394, -0.53406, 0.776897, -0.700858, -0.802179, -0.616515, 0.718677, 0.303042, -0.080606, -0.850593, -0.795971, 0.860487, -0.90685, 0.89858, 0.617251, 0.334...
MIT
notebooks/layers/pooling/GlobalMaxPooling1D.ipynb
HimariO/keras-js
Water quality Setup software libraries
# Import and initialize the Earth Engine library. import ee ee.Initialize() ee.__version__ # Folium setup. import folium print(folium.__version__) # Skydipper library. import Skydipper print(Skydipper.__version__) import matplotlib.pyplot as plt import numpy as np import pandas as pd import functools import json import...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Composite image. **Variables**
collection = 'Lake-Water-Quality-100m' init_date = '2019-01-21' end_date = '2019-01-31' # Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' composite = ee_collection_specifics.Composite(collection)(init_date, end...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Geostore. We select the areas from which we will export the training data. **Variables**
def polygons_to_multipoligon(polygons): multipoligon = [] MultiPoligon = {} for polygon in polygons.get('features'): multipoligon.append(polygon.get('geometry').get('coordinates')) MultiPoligon = { "type": "FeatureCollection", "features": [ { ...
Number of training polygons: 2
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display Polygons**
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' composite = ee_collection_specifics.Composite(collection)(init_date, end_date) mapid = composite.getMapId(ee_collection_specifics.vizz_params_rgb(collection)) ...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Data pre-processing. We normalize the composite images to have values from 0 to 1. **Variables**
input_dataset = 'Sentinel-2-Top-of-Atmosphere-Reflectance'
output_dataset = 'Lake-Water-Quality-100m'
init_date = '2019-01-21'
end_date = '2019-01-31'
scale = 100  # scale in meters
collections = [input_dataset, output_dataset]
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Normalize images**
def min_max_values(image, collection, scale, polygons=None): normThreshold = ee_collection_specifics.ee_bands_normThreshold(collection) num = 2 lon = np.linspace(-180, 180, num) lat = np.linspace(-90, 90, num) features = [] for i in range(len(lon)-1): for j in range(len(la...
{'B11_max': 10857.5, 'B11_min': 7.0, 'B12_max': 10691.0, 'B12_min': 1.0, 'B1_max': 6806.0, 'B1_min': 983.0, 'B2_max': 6406.0, 'B2_min': 685.0, 'B3_max': 6182.0, 'B3_min': 412.0, 'B4_max': 7485.5, 'B4_min': 229.0, 'B5_max': 8444.0, 'B5_min': 186.0, 'B6_max': 9923.0, 'B6_min': 153.0, 'B7_max': 11409.0, 'B7_min': 128.0, '...
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
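With those per-band minimum and maximum values, the normalization itself is a straightforward rescale. A hedged sketch for a single Earth Engine image follows; the function name and the dictionary key format are assumptions based on the printed output above, not the notebook's exact code.

def normalize_image(image, band_min_max, bands):
    # Rescale each band to [0, 1] using the precomputed min/max dictionary,
    # e.g. band_min_max = {'B2_min': 685.0, 'B2_max': 6406.0, ...}.
    normalized = []
    for band in bands:
        lo = band_min_max[f'{band}_min']
        hi = band_min_max[f'{band}_max']
        normalized.append(image.select(band).subtract(lo).divide(hi - lo))
    return ee.Image.cat(normalized)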
**Display composite**
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' map = folium.Map(location=[39.31, 0.302], zoom_start=6) for n, collection in enumerate(collections): for params in ee_collection_specifics.vizz_params(collec...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Create TFRecords for training. Export pixels. **Variables**
input_bands = ['B2', 'B3', 'B4', 'B5', 'ndvi', 'ndwi']
output_bands = ['turbidity_blended_mean']
bands = [input_bands, output_bands]
dataset_name = 'Sentinel2_WaterQuality'
base_names = ['training_pixels', 'eval_pixels']
bucket = env.bucket_name
folder = 'cnn-models/' + dataset_name + '/data'
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Select the bands**
# Select the bands we want c = images[0].select(bands[0])\ .addBands(images[1].select(bands[1])) pprint(c.getInfo())
{'bands': [{'crs': 'EPSG:4326', 'crs_transform': [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], 'data_type': {'max': 1.0, 'min': 0.0, 'precision': 'double', 'type': 'PixelType'}, 'id': 'B2'}, {'crs': 'EPSG:4326...
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Sample pixels**
sr = c.sample(region = trainFeatures, scale = scale, numPixels=20000, tileScale=4, seed=999) # Add random column sr = sr.randomColumn(seed=999) # Partition the sample approximately 70-30. train_dataset = sr.filter(ee.Filter.lt('random', 0.7)) eval_dataset = sr.filter(ee.Filter.gte('random', 0.7)) # Print the first co...
{'training': 8091} {'testing': 3508}
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Export the training and validation data**
def export_TFRecords_pixels(datasets, base_names, bucket, folder, selectors): # Export all the training/evaluation data filePaths = [] for n, dataset in enumerate(datasets): filePaths.append(bucket+ '/' + folder + '/' + base_names[n]) # Create the tasks. task ...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Inspect data. Inspect pixels. Load the data exported from Earth Engine into a tf.data.Dataset. **Helper functions**
# Tensorflow setup. import tensorflow as tf if tf.__version__ == '1.15.0': tf.enable_eager_execution() print(tf.__version__) def parse_function(proto): """The parsing function. Read a serialized example into the structure defined by FEATURES_DICT. Args: example_proto: a serialized Example. Re...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
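The parsing code above is truncated; a minimal sketch of a TFRecord parsing function and dataset loader for the pixel samples, assuming each record stores one float per band with shape [1, 1] (consistent with the (4, 1, 1, 6) batch shape printed further below):

import tensorflow as tf

features = input_bands + output_bands
columns = [tf.io.FixedLenFeature(shape=[1, 1], dtype=tf.float32) for _ in features]
FEATURES_DICT = dict(zip(features, columns))

def parse_function(proto):
    # Split a serialized Example into (inputs, label) tensors.
    parsed = tf.io.parse_single_example(proto, FEATURES_DICT)
    inputs = tf.stack([parsed[b] for b in input_bands], axis=-1)
    label = tf.stack([parsed[b] for b in output_bands], axis=-1)
    return inputs, label

def get_dataset(glob, buffer_size, batch_size):
    files = tf.data.Dataset.list_files(glob)
    # Add compression_type='GZIP' if the records were exported gzipped.
    dataset = tf.data.TFRecordDataset(files)
    dataset = dataset.map(parse_function, num_parallel_calls=tf.data.experimental.AUTOTUNE)
    return dataset.shuffle(buffer_size).batch(batch_size)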
**Variables**
buffer_size = 100
batch_size = 4
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Dataset**
glob = 'gs://' + bucket + '/' + folder + '/' + base_names[0] + '*' dataset = get_dataset(glob, buffer_size, batch_size) dataset
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Check the first record**
arr = iter(dataset.take(1)).next()
input_arr = arr[0].numpy()
print(input_arr.shape)
output_arr = arr[1].numpy()
print(output_arr.shape)
(4, 1, 1, 6) (4, 1, 1, 1)
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Training the model locally. **Variables**
job_dir = 'gs://' + bucket + '/' + 'cnn-models/'+ dataset_name +'/trainer' logs_dir = job_dir + '/logs' model_dir = job_dir + '/model' shuffle_size = 2000 batch_size = 4 epochs=50 train_size=train_size eval_size=eval_size output_activation=''
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Training/evaluation data** The following code loads the training/evaluation data.
import tensorflow as tf def parse_function(proto): """The parsing function. Read a serialized example into the structure defined by FEATURES_DICT. Args: example_proto: a serialized Example. Returns: A tuple of the predictors dictionary and the labels. """ # Define your tfrecor...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Model**
from tensorflow.python.keras import Model # Keras model module from tensorflow.python.keras.layers import Input, Dense, Dropout, Activation def create_keras_model(inputShape, nClasses, output_activation='linear'): inputs = Input(shape=inputShape, name='vector') x = Dense(32, input_shape=inputShape, act...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
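The model definition is truncated; a hedged sketch of a small fully connected regression head consistent with the signature above (the layer sizes beyond the first Dense(32), the dropout rate, and the optimizer are assumptions):

from tensorflow.python.keras import Model
from tensorflow.python.keras.layers import Input, Dense, Dropout, Activation

def create_keras_model(inputShape, nClasses, output_activation='linear'):
    # Small fully connected network over per-pixel band vectors.
    inputs = Input(shape=inputShape, name='vector')
    x = Dense(32, activation='relu')(inputs)
    x = Dropout(0.2)(x)
    x = Dense(16, activation='relu')(x)
    outputs = Dense(nClasses)(x)
    outputs = Activation(output_activation)(outputs)
    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='mse', metrics=['mse'])
    return model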
**Training task** The following will get the training and evaluation data, train the model, and save it to a Cloud Storage bucket when it's done.
import tensorflow as tf import time import os def train_and_evaluate(): """Trains and evaluates the Keras model. Uses the Keras model defined in model.py and trains on data loaded and preprocessed in util.py. Saves the trained model in TensorFlow SavedModel format to the path defined in part...
Train for 2022 steps, validate for 877 steps Epoch 1/50 1/2022 [..............................] - ETA: 36:44 - loss: 0.0110 - mean_squared_error: 0.0110WARNING:tensorflow:Method (on_train_batch_end) is slow compared to the batch update (3.539397). Check your callbacks. 2022/2022 [==============================] - 15...
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Evaluate model**
evaluation_dataset = get_evaluation_dataset()
model.evaluate(evaluation_dataset, steps=int(eval_size / batch_size))
877/877 [==============================] - 1s 1ms/step - loss: 4.6872 - mean_squared_error: 4.6872
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Read pretrained model
job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer' model_dir = job_dir + '/model' PROJECT_ID = env.project_id # Pick the directory with the latest timestamp, in case you've trained multiple times exported_model_dirs = ! gsutil ls {model_dir} saved_model_path = exported_model_dirs[-1] ...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Predict in Earth Engine. Prepare the model for making predictions in Earth Engine. Before we can use the model in Earth Engine, it needs to be hosted by AI Platform. But before we can host the model on AI Platform, we need to *EEify* (a new word!) it. The EEification process merely appends some extra operations to th...
dataset_name = 'Sentinel2_WaterQuality' job_dir = 'gs://' + env.bucket_name + '/' + 'cnn-models/' + dataset_name + '/trainer' model_dir = job_dir + '/model' project_id = env.project_id # Pick the directory with the latest timestamp, in case you've trained multiple times exported_model_dirs = ! gsutil ls {model_dir} sav...
Running command using Cloud API. Set --no-use_cloud_api to go back to using the API Successfully saved project id Running command using Cloud API. Set --no-use_cloud_api to go back to using the API Success: model at 'gs://skydipper_materials/cnn-models/Sentinel2_WaterQuality/trainer/eeified/1580824709' is ready to ...
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Deploy the model to AI Platform
from googleapiclient import discovery
from googleapiclient import errors
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Authenticate your GCP account** Enter the path to your service account key as the `GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
%env GOOGLE_APPLICATION_CREDENTIALS {env.privatekey_path} model_name = 'water_quality_test' version_name = 'v' + folder_name project_id = env.project_id
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Create model**
print('Creating model: ' + model_name) # Store your full project ID in a variable in the format the API needs. project = 'projects/{}'.format(project_id) # Build a representation of the Cloud ML API. ml = discovery.build('ml', 'v1') # Create a dictionary with the fields from the request body. request_dict = {'name':...
Creating model: water_quality_test There was an error creating the model. Check the details: Field: model.name Error: A model with the same name already exists.
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Create version**
ml = discovery.build('ml', 'v1') request_dict = { 'name': version_name, 'deploymentUri': EEIFIED_DIR, 'runtimeVersion': '1.14', 'pythonVersion': '3.5', 'framework': 'TENSORFLOW', 'autoScaling': { "minNodes": 10 }, 'machineType': 'mls1-c4-m2' } request = ml.projects().models().ver...
{'name': 'projects/skydipper-196010/operations/create_water_quality_test_v1580824709-1580824821325', 'metadata': {'@type': 'type.googleapis.com/google.cloud.ml.v1.OperationMetadata', 'createTime': '2020-02-04T14:00:22Z', 'operationType': 'CREATE_VERSION', 'modelName': 'projects/skydipper-196010/models/water_quality_tes...
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Check deployment status**
def check_status_deployment(model_name, version_name): desc = !gcloud ai-platform versions describe {version_name} --model={model_name} return desc.grep('state:')[0].split(':')[1].strip() print(check_status_deployment(model_name, version_name))
READY
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Load the trained model and use it for prediction in Earth Engine. **Variables**
# polygon where we want to display de predictions geometry = { "type": "FeatureCollection", "features": [ { "type": "Feature", "properties": {}, "geometry": { "type": "Polygon", "coordinates": [ [ [ -2.63671875, 34.5608593670838...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Input image** Select the bands and convert them to float.
image = images[0].select(bands[0]).float()
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Output image**
# Load the trained model and use it for prediction. model = ee.Model.fromAiPlatformPredictor( projectName = project_id, modelName = model_name, version = version_name, inputTileSize = [1, 1], inputOverlapSize = [0, 0], proj = ee.Projection('EPSG:4326').atScale(scale), fixInputProj = True, ...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Clip the prediction area with the polygon
# Clip the prediction area with the polygon polygon = ee.Geometry.Polygon(geometry.get('features')[0].get('geometry').get('coordinates')) predictions = predictions.clip(polygon) # Get centroid centroid = polygon.centroid().getInfo().get('coordinates')[::-1]
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display** Use folium to visualize the input imagery and the predictions.
# Define the URL format used for Earth Engine generated map tiles. EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' mapid = image.getMapId({'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 1}) map = folium.Map(location=centroid, zoom_start=8) folium.TileLayer( tiles=EE_TILES...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
*** Make predictions of an image outside Earth Engine. Export the imagery. We export the imagery using TFRecord format. **Variables**
#Input image image = images[0].select(bands[0]) dataset_name = 'Sentinel2_WaterQuality' file_name = 'image_pixel' bucket = env.bucket_name folder = 'cnn-models/'+dataset_name+'/data' # polygon where we want to display de predictions geometry = { "type": "FeatureCollection", "features": [ { "type": "Feat...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Read the JSON mixer file** The mixer contains metadata and georeferencing information for the exported patches, each of which is in a different file. Read the mixer to get some information needed for prediction.
json_file = f'gs://{bucket}' + '/' + folder + '/' + file_name +'.json' # Load the contents of the mixer file to a JSON object. json_text = !gsutil cat {json_file} # Get a single string w/ newlines from the IPython.utils.text.SList mixer = json.loads(json_text.nlstr) pprint(mixer)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Read the image files into a dataset** The input needs to be preprocessed differently from the training and testing data. Mainly, this is because the pixels are written into the records as patches; we need to read each patch in as one big tensor (one patch per band) and then flatten them into lots of little tensors.
# Get relevant info from the JSON mixer file. PATCH_WIDTH = mixer['patchDimensions'][0] PATCH_HEIGHT = mixer['patchDimensions'][1] PATCHES = mixer['totalPatches'] PATCH_DIMENSIONS_FLAT = [PATCH_WIDTH * PATCH_HEIGHT, 1] features = bands[0] glob = f'gs://{bucket}' + '/' + folder + '/' + file_name +'.tfrecord.gz' # Note...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
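The cell above is truncated; a minimal sketch of reading the exported patches and stacking the bands into per-pixel tensors, reusing the mixer variables defined above (the exact parsing dictionary and mapping details are assumptions):

import tensorflow as tf

# One FixedLenFeature per band, each holding a full flattened patch.
image_columns = [tf.io.FixedLenFeature(shape=PATCH_DIMENSIONS_FLAT, dtype=tf.float32)
                 for _ in features]
image_features_dict = dict(zip(features, image_columns))

def parse_image(proto):
    return tf.io.parse_single_example(proto, image_features_dict)

def to_pixels(inputs):
    # Stack the band patches into shape (pixels, 1, n_bands) so the
    # per-pixel model sees the same layout as during training.
    return tf.stack([inputs[b] for b in features], axis=-1)

image_dataset = (tf.data.TFRecordDataset(glob, compression_type='GZIP')
                 .map(parse_image)
                 .map(to_pixels))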
**Check the first record**
arr = iter(image_dataset.take(1)).next()
input_arr = arr[0].numpy()
print(input_arr.shape)
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
**Display the input channels**
def display_channels(data, nChannels, titles = False): if nChannels == 1: plt.figure(figsize=(5,5)) plt.imshow(data[:,:,0]) if titles: plt.title(titles[0]) else: fig, axs = plt.subplots(nrows=1, ncols=nChannels, figsize=(5*nChannels,5)) for i in range(nChannel...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Generate predictions for the image pixels. To get predictions for each pixel, run the image dataset through the trained model using model.predict(). Print the first prediction to check the per-pixel output values. Running all the predictions might take a while.
predictions = model.predict(image_dataset, steps=PATCHES, verbose=1) output_arr = predictions.reshape((PATCHES, PATCH_WIDTH, PATCH_HEIGHT, len(bands[1]))) output_arr.shape display_channels(output_arr[9,:,:,:], output_arr.shape[3], titles=bands[1])
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Write the predictions to a TFRecord file. We need to write the pixels into the file as patches, in the same order they came out. The records are written as serialized `tf.train.Example` protos.
dataset_name = 'Sentinel2_WaterQuality' bucket = env.bucket_name folder = 'cnn-models/'+dataset_name+'/data' output_file = 'gs://' + bucket + '/' + folder + '/predicted_image_pixel.TFRecord' print('Writing to file ' + output_file) # Instantiate the writer. writer = tf.io.TFRecordWriter(output_file) patch = [[]] nPatc...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
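The writer loop is truncated above; a hedged sketch of serializing one Example per patch, with the single predicted band stored as a float list keyed by the output band name (how the notebook actually batches patches may differ):

import tensorflow as tf

writer = tf.io.TFRecordWriter(output_file)
for i in range(PATCHES):
    # Flatten the i-th predicted patch back into a single list of pixel values.
    patch_values = output_arr[i, :, :, 0].flatten()
    example = tf.train.Example(
        features=tf.train.Features(feature={
            bands[1][0]: tf.train.Feature(
                float_list=tf.train.FloatList(value=patch_values))
        }))
    writer.write(example.SerializeToString())
writer.close()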
**Verify the existence of the predictions file**
!gsutil ls -l {output_file}
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Upload the predicted image to an Earth Engine asset
asset_id = 'projects/vizzuality/skydipper-water-quality/predicted-image' print('Writing to ' + asset_id) # Start the upload. !earthengine upload image --asset_id={asset_id} {output_file} {json_file}
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
View the predicted image
# Get centroid polygon = ee.Geometry.Polygon(geometry.get('features')[0].get('geometry').get('coordinates')) centroid = polygon.centroid().getInfo().get('coordinates')[::-1] EE_TILES = 'https://earthengine.googleapis.com/map/{mapid}/{{z}}/{{x}}/{{y}}?token={token}' map = folium.Map(location=centroid, zoom_start=8) fo...
_____no_output_____
MIT
notebooks/water_quality.ipynb
Skydipper/CNN-tests
Calculating Area and Center Coordinates of a Polygon
%load_ext lab_black
%load_ext autoreload
%autoreload 2
import geopandas as gpd
import pandas as pd

%aimport src.utils
from src.utils import show_df
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
[Table of Contents](table-of-contents)
0. [About](about)
1. [User Inputs](user-inputs)
2. [Load Chicago Community Areas GeoData](load-chicago-community-areas-geodata)
3. [Calculate Area of each Community Area](calculate-area-of-each-community-area)
4. [Calculate Coordinates of Midpoint of each Community Area](calculate-coo...
ca_url = "https://data.cityofchicago.org/api/geospatial/cauq-8yn6?method=export&format=GeoJSON"
convert_sqm_to_sqft = 10.7639
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
2. [Load Chicago Community Areas GeoData](load-chicago-community-areas-geodata) Load the boundaries geodata for the [Chicago community areas](https://data.cityofchicago.org/Facilities-Geographic-Boundaries/Boundaries-Community-Areas-current-/cauq-8yn6)
%%time
gdf_ca = gpd.read_file(ca_url)
print(gdf_ca.crs)
gdf_ca.head(2)
epsg:4326 CPU times: user 209 ms, sys: 11.7 ms, total: 221 ms Wall time: 1.1 s
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
3. [Calculate Area of each Community Area](calculate-area-of-each-community-area) To get the area, we need to
- project the geometry into a Cylindrical Equal-Area (CEA) format, an equal-area projection that preserves area ([1](https://learn.arcgis.com/en/projects/choose-the-right-projection/))
- calculate the area...
%%time gdf_ca["cea_area_square_feet"] = gdf_ca.to_crs({"proj": "cea"}).area * convert_sqm_to_sqft gdf_ca["diff_sq_feet"] = gdf_ca["shape_area"].astype(float) - gdf_ca["cea_area_square_feet"] gdf_ca["diff_pct"] = gdf_ca["diff_sq_feet"] / gdf_ca["shape_area"].astype(float) * 100 show_df(gdf_ca.drop(columns=["geometry"]))...
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
**Observations** 1. It is reassuring that the CEA projection has given us areas in square feet that are within less than 0.01 percent of the areas provided with the Chicago community areas dataset. We'll use this approach to calculate shape areas. 4. [Calculate Coordinates of Midpoint of each Community Area](calculate-...
%%time centroid_cea = gdf_ca["geometry"].to_crs("+proj=cea").centroid.to_crs(gdf_ca.crs) centroid_3395 = gdf_ca["geometry"].to_crs(epsg=3395).centroid.to_crs(gdf_ca.crs) centroid_32663 = gdf_ca["geometry"].to_crs(epsg=32663).centroid.to_crs(gdf_ca.crs) centroid_4087 = gdf_ca["geometry"].to_crs(epsg=4087).centroid.to_cr...
_____no_output_____
MIT
0_1_calculate_area_centroid.ipynb
edesz/chicago-bikeshare
PTN Template. This notebook serves as a template for single-dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started ...
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wr...
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run1_limited/trials/8/trial.ipynb
stevester94/csc500-notebooks
Required Parameters. These are allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if they are not present). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
required_parameters = { "experiment_name", "lr", "device", "seed", "dataset_seed", "labels_source", "labels_target", "domains_source", "domains_target", "num_examples_per_domain_per_label_source", "num_examples_per_domain_per_label_target", "n_shot", "n_way", "n_q...
_____no_output_____
MIT
experiments/tuned_1v2/oracle.run1_limited/trials/8/trial.ipynb
stevester94/csc500-notebooks
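A small sketch of the check described above, validating that every required parameter was injected by papermill; looking the values up in globals() is an assumption about how this notebook collects them, not its actual code.

# Fail fast if any required parameter was not injected by papermill.
supplied_parameters = {k: v for k, v in globals().items() if k in required_parameters}
missing = required_parameters - set(supplied_parameters)
if missing:
    raise RuntimeError(f"Missing required parameters: {sorted(missing)}")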
Batch Normalization – Practice Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be fam...
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`. from ._conv import register_converters as _register_converters
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
Batch Normalization using `tf.layers.batch_normalization`. This version of the network uses `tf.layers` for almost everything, and expects you to implement batch normalization using [`tf.layers.batch_normalization`](https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization). We'll use the following functi...
""" DO NOT MODIFY THIS CELL """ def fully_connected(prev_layer, num_units): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the ...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network. This ve...
""" DO NOT MODIFY THIS CELL """ def conv_layer(prev_layer, layer_depth): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps b...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**Run the following cell**, along with the earlier cells (to load the dataset and define the necessary functions). This cell builds the network **without** batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
""" DO NOT MODIFY THIS CELL """ def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Feed the inputs into a series of 20 convolutional layers...
Batch: 0: Validation loss: 0.69052, Validation accuracy: 0.10020 Batch: 25: Training loss: 0.41248, Training accuracy: 0.07812 Batch: 50: Training loss: 0.32848, Training accuracy: 0.04688 Batch: 75: Training loss: 0.32555, Training accuracy: 0.07812 Batch: 100: Validation loss: 0.32519, Validation accuracy: 0.11000 B...
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.) Using batch normalization...
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
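The cell above is cut off; a hedged sketch of one way to add batch normalization to fully_connected with tf.layers.batch_normalization, following the pattern the exercise describes (linear layer without bias, batch norm driven by the training flag, then the activation). This is a sketch of a possible solution, not the notebook's answer.

def fully_connected(prev_layer, num_units, is_training):
    # Linear transform without activation or bias, then batch norm, then ReLU.
    layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
    layer = tf.layers.batch_normalization(layer, training=is_training)
    return tf.nn.relu(layer)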
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Add placeholder to indicate whether or not we're training the model is_training = tf.pla...
Batch: 0: Validation loss: 0.69111, Validation accuracy: 0.08680 Batch: 25: Training loss: 0.58037, Training accuracy: 0.17188 Batch: 50: Training loss: 0.46359, Training accuracy: 0.10938 Batch: 75: Training loss: 0.39624, Training accuracy: 0.07812 Batch: 100: Validation loss: 0.35559, Validation accuracy: 0.10020 B...
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
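For the train function, "updates and uses its population statistics correctly" usually comes down to making the training op depend on the batch-norm update ops. A sketch of the relevant lines, assuming the loss tensor is named model_loss and using Adam as an illustrative optimizer:

# Batch normalization keeps its moving-average update ops in UPDATE_OPS,
# so the optimizer step must run only after those updates are applied.
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
    train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)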
With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: `Accuracy on 100 samples`. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population...
def fully_connected(prev_layer, num_units, is_training): """ Create a fully connectd layer with the given layer as input and the given number of neurons. :param prev_layer: Tensor The Tensor that acts as input into this layer :param num_units: int The size of the layer. That is, the...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Modify `conv_layer` to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps. **Note:** Unlike in the previous example that used `tf.layers`, adding batch normalization to these convolutional layers _does_ require some slight differences to ...
def conv_layer(prev_layer, layer_depth, is_training): """ Create a convolutional layer with the given layer as input. :param prev_layer: Tensor The Tensor that acts as input into this layer :param layer_depth: int We'll set the strides and number of feature maps based on the layer's...
_____no_output_____
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
**TODO:** Edit the `train` function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
def train(num_batches, batch_size, learning_rate): # Build placeholders for the input samples and labels inputs = tf.placeholder(tf.float32, [None, 28, 28, 1]) labels = tf.placeholder(tf.float32, [None, 10]) # Add placeholder to indicate whether or not we're training the model is_training = tf.pla...
Batch: 0: Validation loss: 0.69128, Validation accuracy: 0.09580 Batch: 25: Training loss: 0.58242, Training accuracy: 0.07812 Batch: 50: Training loss: 0.46814, Training accuracy: 0.07812 Batch: 75: Training loss: 0.40309, Training accuracy: 0.17188 Batch: 100: Validation loss: 0.36373, Validation accuracy: 0.09900 B...
MIT
Deep.Learning/5.Generative-Adversial-Networks/2.Deep-Convolutional-Gan/Batch_Normalization_Exercises.ipynb
Scrier/udacity
PyTorch on GPU: first steps Put tensor to GPU
import torch

device = torch.device("cuda:0")
my_tensor = torch.Tensor([1., 2., 3., 4., 5.])
mytensor = my_tensor.to(device)
mytensor
my_tensor
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
Put model to GPU
from torch import nn class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, output_size) def forward(self, input): output = self.fc(input) print("\tIn Model: input size", input.size(), "ou...
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
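The cell above is truncated; moving the model and a batch of data to the GPU typically looks like the sketch below. The input/output sizes and batch size are assumptions, and the CPU fallback is added so the sketch also runs without a GPU.

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = Model(input_size=5, output_size=2).to(device)   # move the parameters to the device
batch = torch.randn(8, 5).to(device)                    # move the data as well
output = model(batch)
print("Outside: input size", batch.size(), "output size", output.size())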
Data parallelism
from torch.nn import DataParallel

torch.cuda.is_available()
torch.cuda.device_count()
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
Part on CPU, part on GPU
device = torch.device("cuda:0") class Model(nn.Module): def __init__(self, input_size, output_size): super(Model, self).__init__() self.fc = nn.Linear(input_size, 100) self.fc2 = nn.Linear(100, output_size).to(device) def forward(self, x): # Compute first layer on CPU ...
_____no_output_____
MIT
Section_1/Video_1_5.ipynb
PacktPublishing/-Dynamic-Neural-Network-Programming-with-PyTorch
Country Economic Conditions for Cargo Carriers. This report is written from the point of view of a data scientist preparing a report for the Head of Analytics for a logistics company. The company needs information on economic and financial conditions in different countries, including data on their international trade, t...
#Import required packages import numpy as np import pandas as pd from sklearn import linear_model from scipy import stats import math from sklearn import datasets, linear_model from sklearn.linear_model import LinearRegression import statsmodels.api as sm #Import IMF World Economic Outlook Data from GitHub WEO = pd.re...
4289 ['CountryCode', 'Country', 'Indicator', 'Notes', 'Units', 'Scale', '2000', '2001', '2002', '2003', '2004', '2005', '2006', '2007', '2008', '2009', '2010', '2011', '2012', '2013', '2014', '2015', '2016', '2017', '2018', '2019'] CountryCode object Country object Indicator object Notes obj...
CC0-1.0
Country_Economic_Conditions_for_Cargo_Carriers.ipynb
jamiemfraser/machine_learning