Made a mistake in the last loop above: the penultimate batch -- the last full 4096-image batch -- was appended to the end of the predictions array twice, and predictions for the final 2194 images were never run. Easy enough to fix: modify the code above to work correctly, then either:

* create entirely new predictions from scratch (~1 hour), or
* remove the last increment (4096) of predictions from the array and append the last batch.

Going with option 2. EDIT: actually, option 1. preds was stored in memory, which was erased when I closed this machine for the night. So this time I'll just build the predictions array properly. Below is the testing/debugging output from the night before:
print(81920 - 79726)
print(79726 % 4096)
print(81920 % 4096)  # <-- that's yeh problem right there, kid
x = preds[len(preds) - 4096]
print(preds[-1])
print(x)
preds[0]
# ??image.ImageDataGenerator.flow_from_directory
# ??Sequential.predict()
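The arithmetic above is the tell: the predictions array came out a multiple of 4096 long even though the test set isn't. A minimal sketch of the fix (standalone, with the same hypothetical sizes) is to derive each batch's bounds once and cover the range exactly:

```python
# Minimal sketch (hypothetical sizes) of chunking N items into batches of
# size `inc`, making sure the final partial batch is processed exactly once.
def batch_bounds(n, inc):
    """Yield (start, stop) index pairs covering range(n) exactly once."""
    idx = 0
    while idx < n:
        yield idx, min(idx + inc, n)
        idx += inc

bounds = list(batch_bounds(79726, 4096))
covered = sum(stop - start for start, stop in bounds)
print(covered)     # 79726: every item covered exactly once
print(bounds[-1])  # (77824, 79726): the final partial batch
```

Summing the chunk sizes back to N is a cheap invariant check, which is exactly what the `len(preds) != len(bcolz.open(fname))` guard below does.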
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Redoing predictions here:
fname = path + 'results/conv_test_feat.dat'
idx, inc = 4096, 4096
preds = []
conv_test_feat = bcolz.open(fname)[:idx]
preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
while idx < test_batches.n - inc:
    conv_test_feat = bcolz.open(fname)[idx:idx+inc]
    idx += inc
    next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
    preds = np.concatenate([preds, next_preds])
conv_test_feat = bcolz.open(fname)[idx:]
next_preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
preds = np.concatenate([preds, next_preds])
print(len(preds))
if len(preds) != len(bcolz.open(fname)):
    print("Ya done fucked up, son.")
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Oh, I forgot: predictions through a fully-connected NN are fast; it's the CNNs that take a long time. This is just a quick test that it works. The full/polished version will be in the reworked statefarm-codealong (or just statefarm) JNB:
def do_clip(arr, mx):
    return np.clip(arr, (1-mx)/9, mx)

subm = do_clip(preds, 0.93)
subm_name = path + 'results/subm01.gz'
trn_batches = get_batches(path + 'train', batch_size=batch_size, shuffle=False)
# make sure training batches defined before this:
classes = sorted(trn_batches.class_indices, key=trn_batches.class_indices.get)

import pandas as pd
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [f[8:] for f in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')

from IPython.display import FileLink
FileLink(subm_name)
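The clipping step is worth a closer look: for a 10-class problem, clipping to `[(1-mx)/9, mx]` caps a wrong-but-confident prediction's log loss while keeping each row summing to roughly one. A standalone sketch:

```python
import numpy as np

# Sketch of the clipping trick: for 10 classes, clip confident predictions
# into [(1-mx)/9, mx] so a wrong-but-confident guess can't blow up log loss.
def do_clip(arr, mx):
    return np.clip(arr, (1 - mx) / 9, mx)

preds = np.array([[1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]])
clipped = do_clip(preds, 0.93)
print(clipped[0][0])  # 0.93
print(clipped[0][1])  # 0.07 / 9, the floor for the nine other classes
print(clipped.sum())  # row still sums to ~1
```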
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Configuring logging: save DEBUG to a file and INFO to stdout
import logging
import sys

log = logging.getLogger()
log.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fileHandler = logging.FileHandler("debug.log", 'a')
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(formatter)
log.addHandler(fileHandler)

cformat = logging.Formatter("[%(levelname)8s] : %(message)s")
consoleHandler = logging.StreamHandler(sys.stdout)
consoleHandler.setFormatter(cformat)
consoleHandler.setLevel(logging.INFO)
log.addHandler(consoleHandler)
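A quick way to convince yourself the split works: with the handler levels set as above, a DEBUG record lands only in the file while an INFO record goes to both. A self-contained sketch (using a throwaway logger name and file so it doesn't touch the real config):

```python
import logging
import sys

# Sketch: one logger, DEBUG+ to a file, INFO+ to stdout.
log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)

fh = logging.FileHandler("debug_demo.log", "w")
fh.setLevel(logging.DEBUG)
ch = logging.StreamHandler(sys.stdout)
ch.setLevel(logging.INFO)
log.addHandler(fh)
log.addHandler(ch)

log.debug("file only")       # written to debug_demo.log, filtered from stdout
log.info("file and stdout")  # written to both destinations
fh.close()

contents = open("debug_demo.log").read()
print("file only" in contents)  # True
```

The filtering happens per handler: the logger's own level gates what is created at all, then each handler's level gates what it emits.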
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Getting resources. IoT-LAB provider configuration: reserve M3 nodes on the saclay site. Note: it uses the following M3 images: border-router.iotlab-m3 and er-example-server.iotlab-m3. More details on how to generate these images: https://www.iot-lab.info/legacy/tutorials/contiki-coap-m3/index.html
job_name = "iotlab_g5k-ipv6"

iotlab_dict = {
    "walltime": "01:00",
    "job_name": job_name,
    "resources": {
        "machines": [
            {
                "roles": ["border_router"],
                "archi": "m3:at86rf231",
                "site": "saclay",
                "number": 1,
                "image": "border-router.iotlab-m3",
            },
            {
                "roles": ["sensor"],
                "archi": "m3:at86rf231",
                "site": "saclay",
                "number": 2,
                "image": "er-example-server.iotlab-m3",
            },
        ]
    },
}
iotlab_conf = IotlabConf.from_dictionary(iotlab_dict)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grid'5000 provider configuration: reserve nodes in grenoble
g5k_dict = {
    "job_type": "allow_classic_ssh",
    "job_name": job_name,
    "resources": {
        "machines": [
            {
                "roles": ["client"],
                "cluster": "yeti",
                "nodes": 1,
                "primary_network": "default",
                "secondary_networks": [],
            },
        ],
        "networks": [
            {"id": "default", "type": "prod", "roles": ["my_network"], "site": "grenoble"}
        ],
    },
}
g5k_conf = G5kConf.from_dictionnary(g5k_dict)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
We still need a Static provider to interact with the IoT-LAB frontend machine
import iotlabcli.auth

iotlab_user, _ = iotlabcli.auth.get_user_credentials()

iotlab_frontend_conf = (
    StaticConf()
    .add_machine(
        roles=["frontend"],
        address="saclay.iot-lab.info",
        alias="saclay",
        user=iotlab_user,
    )
    .finalize()
)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
IoT-LAB: getting resources
iotlab_provider = Iotlab(iotlab_conf)
iotlab_roles, _ = iotlab_provider.init()
print(iotlab_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grid'5000: getting resources
g5k_provider = G5k(g5k_conf)
g5k_roles, g5knetworks = g5k_provider.init()
print(g5k_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Static: getting resources
frontend_provider = Static(iotlab_frontend_conf)
frontend_roles, _ = frontend_provider.init()
print(frontend_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Configuring network connectivity. Enabling IPv6 on the Grid'5000 nodes (https://www.grid5000.fr/w/IPv6):
result = run_command("dhclient -6 br0", roles=g5k_roles)
result = run_command("ip address show dev br0", roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Starting the tunslip command on the frontend. Redirect its output to a file so we can read it later.
iotlab_ipv6_net = "2001:660:3207:4c0::"
tun_cmd = "sudo tunslip6.py -v2 -L -a %s -p 20000 %s1/64 > tunslip.output 2>&1" % (
    iotlab_roles["border_router"][0].alias, iotlab_ipv6_net)
result = run_command(tun_cmd, roles=frontend_roles, asynch=3600, poll=0)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Resetting the border router
iotlab_roles["border_router"][0].reset()
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Get the Border Router IPv6 address from tunslip output
result = run_command("cat tunslip.output", roles=frontend_roles)
print(result['ok'])

import re

out = result['ok']['saclay']['stdout']
print(out)
match = re.search(
    rf'Server IPv6 addresses:\n.+({iotlab_ipv6_net}\w{{4}})',
    out, re.MULTILINE | re.DOTALL)
br_ipv6 = match.groups()[0]
print("Border Router IPv6 address from tunslip output: %s" % br_ipv6)
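To see what that regex is doing, here is the extraction in isolation against a hypothetical snippet of tunslip6 banner output (the sample text below is made up; only the prefix matches the tutorial's):

```python
import re

# Hedged sketch: tunslip6 prints the server's addresses after a
# "Server IPv6 addresses:" header; the regex grabs the address on the
# IoT-LAB IPv6 prefix, i.e. the border router's global address.
iotlab_ipv6_net = "2001:660:3207:4c0::"
out = (
    "Server IPv6 addresses:\n"
    " 2001:660:3207:4c0::b36f\n"
    " fe80::b36f\n"
)
match = re.search(
    rf'Server IPv6 addresses:\n.+({iotlab_ipv6_net}\w{{4}})',
    out, re.MULTILINE | re.DOTALL)
print(match.groups()[0])  # the global address on the IoT-LAB prefix
```

`re.DOTALL` lets `.+` span lines, so the capture group finds the prefixed address wherever it appears after the header.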
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Checking ping from Grid'5000 to border router node
result = run_command("ping6 -c3 %s" % br_ipv6, pattern_hosts="client*", roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Installing and using CoAP clients. Install the aiocoap client and lynx on the Grid'5000 nodes.
with play_on(roles=g5k_roles) as p:
    p.apt(name=["python3-aiocoap", "lynx"], state="present")
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grab the CoAP server node’s IPv6 address from the BR’s web interface
result = run_command("lynx -dump http://[%s]" % br_ipv6, roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
For a CoAP server, GET light sensor
out = result['ok'][g5k_roles["client"][0].address]['stdout']
print(out)
match = re.search(r'fe80::(\w{4})', out, re.MULTILINE | re.DOTALL)
node_uid = match.groups()[0]
print(node_uid)

result = run_command(
    "aiocoap-client coap://[%s%s]:5683/sensors/light" % (iotlab_ipv6_net, node_uid),
    roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
GET pressure for the same sensor
result = run_command(
    "aiocoap-client coap://[%s%s]:5683/sensors/pressure" % (iotlab_ipv6_net, node_uid),
    roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Clean-up phase. Stop tunslip on the frontend node.
result = run_command("pgrep tunslip6 | xargs kill", roles=frontend_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Destroy jobs in testbeds
g5k_provider.destroy()
iotlab_provider.destroy()
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Now we make some plots and calculate some figures. For example, here's how the correlation (Pearson's r) changes by game number:
# Drop short season and current season, and get correlations
data2 = data[(data.Season != 2012) & (data.Season != 2017)].dropna()
corrs = data2.drop({'Season', 'Team'}, axis=1).groupby('GameNum').corr().drop('ROY_CF%', axis=1)
corrs = corrs[corrs['YTD_CF%'] < 1]
corrs = corrs.reset_index().drop('level_1', axis=1).rename(columns={'YTD_CF%': 'r'})

plot(corrs.GameNum, corrs.r)
xlabel('Game number')
ylabel('R')
title('Correlation between YTD CF% and ROY CF%')
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
Here's how the slope changes by game:
# Now look at the slope
from scipy.stats import linregress

def get_slope(df):
    m, b, r, p, e = linregress(df['YTD_CF%'], df['ROY_CF%'])
    return m

plot(data2.groupby('GameNum').apply(get_slope), label='Slope')
xlabel('Game Number')
plot(corrs.r, label='R')
title('Correlation between YTD CF% and ROY CF%')
legend(loc=2)
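For intuition on why the slope matters separately from r, here is `linregress` on synthetic data (hypothetical numbers) generated around a known line with slope 0.5: the fit recovers a slope well below 1 even when the correlation is strong, which is the signature of regression toward the mean.

```python
import numpy as np
from scipy.stats import linregress

# Sketch: noisy y around a known line y = 0.5x + 25 (made-up data).
# A fitted slope well below 1 means a team's season-to-date number
# should still be regressed toward the mean when projecting forward.
rng = np.random.default_rng(0)
x = np.linspace(40.0, 60.0, 200)
y = 0.5 * x + 25.0 + rng.normal(0, 0.5, x.size)

m, b, r, p, e = linregress(x, y)
print(round(m, 1))  # ~0.5: the generating slope, not 1
```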
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
You'll note that although the predictivity (as measured by r) is best at 40 games, the slope is still below 1, meaning we still expect some regression toward the mean. You can see that in the scatterplot below (with the best-fit and 1:1 lines included for reference):
tmp = data2[data2.GameNum == 40]
x = 'YTD_CF%'
y = 'ROY_CF%'
scatter(tmp[x], tmp[y], label='_nolegend')
xlabel('Season to date CF%')
ylabel('Rest of season CF%')
title('YTD and ROY CF% at game 40')

m, b, r, p, e = linregress(tmp[x], tmp[y])
xs = arange(0, 1, 0.01)
ys = m * xs + b
xlimits = xlim()
ylimits = ylim()
plot(xs, ys, color='k', ls='--', label='Best slope')
plot(xs, xs, color='k', ls=':', label='Slope=1')
xlim(*xlimits)
ylim(*ylimits)
legend(loc=2)
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
Vertex SDK: AutoML training text classification model for online prediction

<table align="left">
<td>
  <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_online.ipynb">
    <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
  </a>
</td>
<td>
  <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_online.ipynb">
    <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub
  </a>
</td>
<td>
  <a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_online.ipynb">
    Open in Google Cloud Notebooks
  </a>
</td>
</table>
<br/><br/><br/>

Overview

This tutorial demonstrates how to use the Vertex SDK to create text classification models and do online prediction using a Google Cloud AutoML model.

Dataset

The dataset used for this tutorial is the Happy Moments dataset from Kaggle Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.

Objective

In this tutorial, you create an AutoML text classification model and deploy it for online prediction from a Python script using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.

The steps performed include:

* Create a Vertex Dataset resource.
* Train the model.
* View the model evaluation.
* Deploy the Model resource to a serving Endpoint resource.
* Make a prediction.
* Undeploy the Model.
Costs

This tutorial uses billable components of Google Cloud:

* Vertex AI
* Cloud Storage

Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage.

Set up your local development environment

If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following:

* The Cloud Storage SDK
* Git
* Python 3
* virtualenv
* Jupyter notebook running in a virtual environment with Python 3

The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:

1. Install and initialize the SDK.
2. Install Python 3.
3. Install virtualenv and create a virtual environment that uses Python 3.
4. Activate the virtual environment.
5. To install Jupyter, run pip3 install jupyter on the command line in a terminal shell.
6. To launch Jupyter, run jupyter notebook on the command line in a terminal shell.
7. Open this notebook in the Jupyter Notebook Dashboard.

Installation

Install the latest version of Vertex SDK for Python.
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_text_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send an online prediction request. Send an online prediction to your deployed model.

Get test item. You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
test_item = ! gsutil cat $IMPORT_FILE | head -n1

# The import line may or may not carry a leading ML-use field.
if len(str(test_item[0]).split(",")) == 3:
    _, test_item, test_label = str(test_item[0]).split(",")
else:
    test_item, test_label = str(test_item[0]).split(",")

print(test_item, test_label)
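The branch above handles the two shapes an import line can take. A standalone sketch (the helper name and the sample lines/labels below are hypothetical, for illustration only):

```python
# Hedged sketch: an AutoML text import row is either "text,label" or
# "ml_use,text,label"; counting comma-separated fields tells them apart.
def parse_import_line(line):
    parts = line.split(",")
    if len(parts) == 3:
        _, text, label = parts
    else:
        text, label = parts
    return text, label

print(parse_import_line("I went for a run,affection_no"))
print(parse_import_line("TRAIN,I went for a run,affection_no"))
```

Note this simple split assumes the text itself contains no commas; quoted CSV fields would need the csv module instead.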
notebooks/community/sdk/sdk_automl_text_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
We're going to use one convolutional layer. To avoid config-fu with Caffe, the data will go into an LMDB store. But first, let's clean up the environment to avoid costly debugging.
rm -r digits_train_lmdb digits_test_lmdb
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Caffe uses a fairly simple key/value format. The code below converts the sklearn dataset into LMDB.
import lmdb
import sys
sys.path.append('./caffe/python')
import caffe

def write_lmdb(db_name, X, y):
    """Write an (N, 64) digits array and its labels into an LMDB store."""
    env = lmdb.open(db_name, map_size=1000000000)
    k = 0
    for i in range(X.shape[0]):
        x_in = X[i].reshape((8, 8)).astype(np.uint8)

        # add original digit
        datum = caffe.proto.caffe_pb2.Datum()
        datum.channels = 1
        datum.height = 8
        datum.width = 8
        datum.data = x_in.tostring()
        datum.label = int(y[i])

        str_id = '{:08}'.format(k)
        k += 1
        with env.begin(write=True) as txn:
            txn.put(str_id.encode('ascii'), datum.SerializeToString())

    print(env.stat())
    env.close()

write_lmdb('digits_train_lmdb', X_train, y_train)
write_lmdb('digits_test_lmdb', X_test, y_test)
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Here we define the solver parameters and the network structure. The original MNIST reference net included with Caffe had two convolutional layers; given the small size of the input picture (8x8), we're only going to use one convolutional layer.
solver_txt = """
# The train/test net protocol buffer definition
net: "digits_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# the batch is 64 images
test_iter: 100
# Carry out testing every 100 training iterations.
test_interval: 100
# The base learning rate, momentum and the weight decay of the network.
base_lr: 0.01
momentum: 0.95
weight_decay: 0.0005
# The learning rate policy
lr_policy: "inv"
gamma: 0.0001
power: 0.75
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 1000
# snapshot intermediate results
snapshot: 5000
snapshot_prefix: "digits"
# solver mode: CPU or GPU
solver_mode: GPU
"""
open('digits_solver.prototxt', 'wt').write(solver_txt)

digits_train_test = """
name: "digits"
layer {
  name: "digits_data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param { scale: 0.0625 }
  data_param {
    source: "digits_train_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "digits_data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param { scale: 0.0625 }
  data_param {
    source: "digits_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 3
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool1"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
"""
open('digits_train_test.prototxt', 'wt').write(digits_train_test)
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Let's start the training using GPU.
!./caffe/build/tools/caffe train -solver digits_solver.prototxt 2> digits_train.log
!python ./caffe/tools/extra/parse_log.py digits_train.log .

import pandas as pd

# columns: NumIters,Seconds,TrainingLoss,LearningRate
df_train = pd.read_csv('digits_train.log.train')
# columns: NumIters,Seconds,TestAccuracy,TestLoss
df_test = pd.read_csv('digits_train.log.test')

fig, (ax1, ax2) = plt.subplots(2, sharex=True, figsize=(10, 12))
ax1.plot(df_train.NumIters, df_train.TrainingLoss, 'b-')
ax1.set_xlabel('Iterations')
ax1.set_ylabel('Loss')
ax3 = ax1.twinx()
ax3.plot(df_train.NumIters, df_train.LearningRate, 'g-')
ax3.set_ylabel('Learning Rate')
ax2.plot(df_test.NumIters, df_test.TestAccuracy, 'r-')
ax2.set_xlabel('Iterations')
ax2.set_ylabel('Test Accuracy')
plt.show()

print('Final accuracy: {0}'.format(df_test.tail(1).TestAccuracy.iloc[0]))
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Time-varying Convex Optimization

Copyright 2018 Google LLC.

Contents: Install Dependencies; Time Varying Semi-definite Programs; Some Polynomial Tools; Examples: To Add.

Install Dependencies
!pip install cvxpy
!pip install sympy

import numpy as np
import scipy as sp
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Time Varying Semi-definite Programs. The TV-SDP framework for CVXPY imposes constraints of the form $$A(t) \succeq 0 \; \forall t \in [0, 1],$$ where $A(t)$ is a polynomial symmetric matrix, i.e. a symmetric matrix whose entries are polynomial functions of time, and $A(t) \succeq 0$ means that all the eigenvalues of the matrix $A(t)$ are nonnegative.
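The code below represents a polynomial $p(t) = p_0 + p_1 t + \dots$ as its coefficient list, so multiplying two polynomials is a convolution of coefficient lists. A tiny standalone check of that convention:

```python
import numpy as np

# Coefficient convention: index i holds the coefficient of t^i, so
# polynomial multiplication is convolution of the coefficient arrays.
p = np.array([1, 1])   # 1 + t
q = np.array([1, -1])  # 1 - t
print(np.convolve(p, q))  # [ 1  0 -1], i.e. 1 - t^2
```

This is exactly what `_mult_poly_matrix_poly` below applies entry-wise along the last axis of a polynomial matrix.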
def _mult_poly_matrix_poly(p, mat_y):
    """Multiplies the polynomial matrix mat_y by the polynomial p entry-wise.

    Args:
      p: list of size d1+1 representing the polynomial sum p[i] t^i.
      mat_y: (m, m, d2+1) tensor representing a polynomial matrix
        Y_ij(t) = sum mat_y[i, j, k] t^k.

    Returns:
      (m, m, d1+d2+1) tensor representing the polynomial matrix p(t)*Y(t).
    """
    mult_op = lambda q: np.convolve(p, q)
    p_times_y = np.apply_along_axis(mult_op, 2, mat_y)
    return p_times_y


def _make_zero(p):
    """Returns the constraints p_i == 0.

    Args:
      p: list of cvxpy expressions.

    Returns:
      A list of cvxpy constraints [pi == 0 for pi in p].
    """
    return [pi == 0 for pi in p]


def _lambda(m, d, Q):
    """Returns the mxm polynomial matrix of degree d whose Gram matrix is Q.

    Args:
      m: size of the polynomial matrix to be returned.
      d: degree of the polynomial matrix to be returned.
      Q: (m*d/2, m*d/2) Gram matrix of the polynomial matrix to be returned.

    Returns:
      (m, m, d+1) tensor representing the polynomial whose Gram matrix is Q,
      i.e. $$Y_ij(t) == sum_{r, s s.t. r+s == k} Q_{y_i t^r, y_j t^s} t^k$$.
    """
    d_2 = int(d / 2)

    def y_i_j(i, j):
        poly = list(np.zeros((d + 1, 1)))
        for k in range(d_2 + 1):
            for l in range(d_2 + 1):
                poly[k + l] += Q[i + k * m, j + l * m]
        return poly

    mat_y = [[y_i_j(i, j) for j in range(m)] for i in range(m)]
    mat_y = np.array(mat_y)
    return mat_y


def _alpha(m, d, Q):
    """Returns t*Lambda(Q) if d odd, Lambda(Q) o.w.

    Args:
      m: size of the polynomial matrix to be returned.
      d: degree of the polynomial matrix to be returned.
      Q: Gram matrix of the polynomial matrix.

    Returns:
      t*Lambda(Q) if d odd, Lambda(Q) o.w.
    """
    if d % 2 == 1:
        w1 = np.array([0, 1])  # t
    else:
        w1 = np.array([1])  # 1
    mat_y = _lambda(m, d + 1 - len(w1), Q)
    return _mult_poly_matrix_poly(w1, mat_y)


def _beta(m, d, Q):
    """Returns (1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.

    Args:
      m: size of the polynomial matrix to be returned.
      d: degree of the polynomial matrix to be returned.
      Q: Gram matrix of the polynomial matrix.

    Returns:
      (1-t)*Lambda(Q) if d odd, t(1-t)*Lambda(Q) o.w.
    """
    if d % 2 == 1:
        w2 = np.array([1, -1])  # 1 - t
    else:
        w2 = np.array([0, 1, -1])  # t - t^2
    mat_y = _lambda(m, d + 1 - len(w2), Q)
    return _mult_poly_matrix_poly(w2, mat_y)


def make_poly_matrix_psd_on_0_1(mat_x):
    """Returns the constraint X(t) psd on [0, 1].

    Args:
      mat_x: (m, m, d+1) tensor representing a mxm polynomial matrix of degree d.

    Returns:
      A list of cvxpy constraints imposing that X(t) psd on [0, 1].
    """
    m, m2, d = len(mat_x), len(mat_x[0]), len(mat_x[0][0]) - 1

    # square matrix
    assert m == m2

    # build constraints: X == alpha(Q1) + beta(Q2) with Q1, Q2 >> 0
    d_2 = int(d / 2)
    size_Q1 = m * (d_2 + 1)
    size_Q2 = m * d_2 if d % 2 == 0 else m * (d_2 + 1)
    Q1 = cvxpy.Variable((size_Q1, size_Q1))
    Q2 = cvxpy.Variable((size_Q2, size_Q2))
    diff = mat_x - _alpha(m, d, Q1) - _beta(m, d, Q2)
    diff = diff.reshape(-1)
    const = _make_zero(diff)
    const += [Q1 >> 0, Q2 >> 0, Q1.T == Q1, Q2.T == Q2]
    return const
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Some Polynomial Tools
def integ_poly_0_1(p):
    """Return the integral of p(t) between 0 and 1."""
    return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))


def spline_regression(x, y, num_parts, deg=3, alpha=.01, smoothness=1):
    """Fits splines with `num_parts` pieces to data `(x, y)`.

    Finds a piecewise polynomial function `p` of degree `deg` with `num_parts`
    pieces that minimizes the fitting error sum |y_i - p(x_i)| + alpha |p|_1.

    Args:
      x: [N] ndarray of input data. Must be increasing.
      y: [N] ndarray, same size as `x`.
      num_parts: int, number of pieces of the piecewise polynomial function `p`.
      deg: int, degree of each polynomial piece of `p`.
      alpha: float, regularizer.
      smoothness: int, the desired degree of smoothness of `p`, e.g.
        `smoothness==0` corresponds to a continuous `p`.

    Returns:
      [num_parts, deg+1] ndarray representing the piecewise polynomial `p`.
      Entry (i, j) contains the j^th coefficient of the i^th piece of `p`.
    """
    # coefficients of the polynomial of p
    p = cvxpy.Variable((num_parts, deg + 1), name='p')

    # convert to numpy format because it is easier to work with
    numpy_p = np.array([[p[i, j] for j in range(deg + 1)]
                        for i in range(num_parts)])

    regularizer = alpha * cvxpy.norm(p, 1)
    num_points_per_part = int(len(x) / num_parts)
    smoothness_constraints = []

    # cutoff values
    t = []
    fitting_value = 0

    # split the data into equal `num_parts` pieces
    for i in range(num_parts):
        # the part of the data that the current piece fits
        sub_x = x[num_points_per_part * i:num_points_per_part * (i + 1)]
        sub_y = y[num_points_per_part * i:num_points_per_part * (i + 1)]

        # compute p(sub_x)
        sub_p = eval_poly_from_coefficients(numpy_p[i], sub_x)

        # fitting value of the current part of p, equal to
        # sqrt(sum |p(x_i) - y_i|^2), where the sum is over data
        # (x_i, y_i) in the current piece
        fitting_value += cvxpy.norm(cvxpy.vstack(sub_p - sub_y), 1)

        # glue things together by ensuring smoothness of p at x1
        if i > 0:
            x1 = x[num_points_per_part * i]

            # computes the derivatives p'(x1) from the left and from the right
            # of x1; x1_deriv is the 2D matrix k!/(k-j)! x1^(k-j) indexed by (j, k)
            x1_deriv = np.array(
                [[np.prod(range(k - j, k)) * x1**(k - j) for k in range(deg + 1)]
                 for j in range(smoothness + 1)]).T
            p_deriv_left = numpy_p[i - 1].dot(x1_deriv)
            p_deriv_right = numpy_p[i].dot(x1_deriv)
            smoothness_constraints += [
                cvxpy.vstack(p_deriv_left - p_deriv_right) == 0
            ]
            t.append(x1)

    min_loss = cvxpy.Minimize(fitting_value + regularizer)
    prob = cvxpy.Problem(min_loss, smoothness_constraints)
    prob.solve(verbose=False)
    return _piecewise_polynomial_as_function(p.value, t)


def _piecewise_polynomial_as_function(p, t):
    """Returns the piecewise polynomial `p` as a function.

    Args:
      p: [N, d+1] array of coefficients of p.
      t: [N] array of cutoffs.

    Returns:
      The function f s.t. f(x) = p_i(x) if t[i] < x < t[i+1].
    """
    def evaluate_p_at(x):
        """Returns p(x)."""
        pieces = [x < t[0]] + \
                 [(x >= ti) & (x < ti_plusone)
                  for ti, ti_plusone in zip(t[:-1], t[1:])] + \
                 [x >= t[-1]]
        # pylint: disable=unused-variable
        func_list = [
            lambda u, pi=pi: eval_poly_from_coefficients(pi, u) for pi in p
        ]
        return np.piecewise(x, pieces, func_list)

    return evaluate_p_at


def eval_poly_from_coefficients(coefficients, x):
    """Evaluates the polynomial whose coefficients are `coefficients` at `x`."""
    return coefficients.dot([x**i for i in range(len(coefficients))])
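The integration helper works because the integral of $t^i$ over $[0,1]$ is $1/(i+1)$, so integrating a coefficient list is a dot product with $[1, 1/2, 1/3, \dots]$. A quick standalone check:

```python
import numpy as np

# Integral over [0, 1] of p(t) = p[0] + p[1] t + ... is sum p[i] / (i + 1).
def integ_poly_0_1(p):
    return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))

print(integ_poly_0_1([1, 1]))     # integral of 1 + t is 3/2
print(integ_poly_0_1([0, 0, 3]))  # integral of 3 t^2 is 1
```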
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Paste existing code into model.py

A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code from the previous notebook into a single file. In the cell below, we write the contents of the cell into model.py, packaging the model we developed in the previous labs so that we can deploy it to AI Platform Training Service.

Exercise. There are two places to fill in TODOs in model.py:

* in the build_dnn_model function, add code to use an optimizer with a custom learning rate.
* in the train_and_evaluate function, add code to define variables using the hparams dictionary.
%%writefile ./taxifare/trainer/model.py #TODO 1 import datetime import logging import os import shutil import numpy as np import tensorflow as tf from tensorflow.keras import activations from tensorflow.keras import callbacks from tensorflow.keras import layers from tensorflow.keras import models from tensorflow import feature_column as fc logging.info(tf.version.VERSION) CSV_COLUMNS = [ 'fare_amount', 'pickup_datetime', 'pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count', 'key', ] LABEL_COLUMN = 'fare_amount' DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']] DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'] def features_and_labels(row_data): for unwanted_col in ['key']: row_data.pop(unwanted_col) label = row_data.pop(LABEL_COLUMN) return row_data, label def load_dataset(pattern, batch_size, num_repeat): dataset = tf.data.experimental.make_csv_dataset( file_pattern=pattern, batch_size=batch_size, column_names=CSV_COLUMNS, column_defaults=DEFAULTS, num_epochs=num_repeat, ) return dataset.map(features_and_labels) def create_train_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=None) return dataset.prefetch(1) def create_eval_dataset(pattern, batch_size): dataset = load_dataset(pattern, batch_size, num_repeat=1) return dataset.prefetch(1) def parse_datetime(s): if type(s) is not str: s = s.numpy().decode('utf-8') return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z") def euclidean(params): lon1, lat1, lon2, lat2 = params londiff = lon2 - lon1 latdiff = lat2 - lat1 return tf.sqrt(londiff*londiff + latdiff*latdiff) def get_dayofweek(s): ts = parse_datetime(s) return DAYS[ts.weekday()] @tf.function def dayofweek(ts_in): return tf.map_fn( lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string), ts_in ) @tf.function def fare_thresh(x): return 60 * activations.relu(x) def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets): # Pass-through columns 
    transformed = inputs.copy()
    del transformed['pickup_datetime']

    feature_columns = {
        colname: fc.numeric_column(colname)
        for colname in NUMERIC_COLS
    }

    # Scaling longitude from range [-70, -78] to [0, 1]
    for lon_col in ['pickup_longitude', 'dropoff_longitude']:
        transformed[lon_col] = layers.Lambda(
            lambda x: (x + 78)/8.0,
            name='scale_{}'.format(lon_col)
        )(inputs[lon_col])

    # Scaling latitude from range [37, 45] to [0, 1]
    for lat_col in ['pickup_latitude', 'dropoff_latitude']:
        transformed[lat_col] = layers.Lambda(
            lambda x: (x - 37)/8.0,
            name='scale_{}'.format(lat_col)
        )(inputs[lat_col])

    # Adding Euclidean dist (no need to be accurate: NN will calibrate it)
    transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
        inputs['pickup_longitude'],
        inputs['pickup_latitude'],
        inputs['dropoff_longitude'],
        inputs['dropoff_latitude']
    ])
    feature_columns['euclidean'] = fc.numeric_column('euclidean')

    # hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
    transformed['hourofday'] = layers.Lambda(
        lambda x: tf.strings.to_number(
            tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
        name='hourofday'
    )(inputs['pickup_datetime'])
    feature_columns['hourofday'] = fc.indicator_column(
        fc.categorical_column_with_identity(
            'hourofday', num_buckets=24))

    latbuckets = np.linspace(0, 1, nbuckets).tolist()
    lonbuckets = np.linspace(0, 1, nbuckets).tolist()
    b_plat = fc.bucketized_column(
        feature_columns['pickup_latitude'], latbuckets)
    b_dlat = fc.bucketized_column(
        feature_columns['dropoff_latitude'], latbuckets)
    b_plon = fc.bucketized_column(
        feature_columns['pickup_longitude'], lonbuckets)
    b_dlon = fc.bucketized_column(
        feature_columns['dropoff_longitude'], lonbuckets)
    ploc = fc.crossed_column(
        [b_plat, b_plon], nbuckets * nbuckets)
    dloc = fc.crossed_column(
        [b_dlat, b_dlon], nbuckets * nbuckets)
    pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
    feature_columns['pickup_and_dropoff'] = fc.embedding_column(
        pd_pair, 100)

    return transformed, feature_columns


def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))


def build_dnn_model(nbuckets, nnsize, lr):
    # input layer is all float except for pickup_datetime which is a string
    STRING_COLS = ['pickup_datetime']
    NUMERIC_COLS = (
        set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
    )
    inputs = {
        colname: layers.Input(name=colname, shape=(), dtype='float32')
        for colname in NUMERIC_COLS
    }
    inputs.update({
        colname: layers.Input(name=colname, shape=(), dtype='string')
        for colname in STRING_COLS
    })

    # transforms
    transformed, feature_columns = transform(
        inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
    dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)

    x = dnn_inputs
    for layer, nodes in enumerate(nnsize):
        x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
    output = layers.Dense(1, name='fare')(x)

    model = models.Model(inputs, output)

    lr_optimizer = # TODO: Your code goes here
    model.compile(  # TODO: Your code goes here

    return model


def train_and_evaluate(hparams):
    batch_size = # TODO: Your code goes here
    nbuckets = # TODO: Your code goes here
    lr = # TODO: Your code goes here
    nnsize = hparams['nnsize']
    eval_data_path = hparams['eval_data_path']
    num_evals = hparams['num_evals']
    num_examples_to_train_on = hparams['num_examples_to_train_on']
    output_dir = hparams['output_dir']
    train_data_path = hparams['train_data_path']

    timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
    savedmodel_dir = os.path.join(output_dir, 'export/savedmodel')
    model_export_path = os.path.join(savedmodel_dir, timestamp)
    checkpoint_path = os.path.join(output_dir, 'checkpoints')
    tensorboard_path = os.path.join(output_dir, 'tensorboard')

    if tf.io.gfile.exists(output_dir):
        tf.io.gfile.rmtree(output_dir)

    model = build_dnn_model(nbuckets, nnsize, lr)
    logging.info(model.summary())

    trainds = create_train_dataset(train_data_path, batch_size)
    evalds = create_eval_dataset(eval_data_path, batch_size)

    steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)

    checkpoint_cb = callbacks.ModelCheckpoint(
        checkpoint_path,
        save_weights_only=True,
        verbose=1
    )
    tensorboard_cb = callbacks.TensorBoard(tensorboard_path)

    history = model.fit(
        trainds,
        validation_data=evalds,
        epochs=num_evals,
        steps_per_epoch=max(1, steps_per_epoch),
        verbose=2,  # 0=silent, 1=progress bar, 2=one line per epoch
        callbacks=[checkpoint_cb, tensorboard_cb]
    )

    # Exporting the model with default serving function.
    tf.saved_model.save(model, model_export_path)
    return history
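The `steps_per_epoch` calculation in `train_and_evaluate` spreads the total training budget across `num_evals` calls to a Keras "epoch"; a quick sanity check in plain Python (the numbers below are hypothetical, not from the lab):

```python
def steps_per_epoch(num_examples_to_train_on, batch_size, num_evals):
    # Same integer division as in train_and_evaluate above.
    return num_examples_to_train_on // (batch_size * num_evals)

# With 100,000 examples, a batch size of 50, and 100 evaluations, each
# "epoch" runs 20 steps of 50 examples: 20 * 50 * 100 = 100,000.
print(steps_per_epoch(100000, 50, 100))  # -> 20

# With a small example budget the integer division can hit zero, which is
# why model.fit above guards with max(1, steps_per_epoch).
print(steps_per_epoch(100, 50, 100))  # -> 0
```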
courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb
turbomanage/training-data-analyst
apache-2.0
Modify code to read data from and write checkpoint files to GCS If you look closely above, you'll notice a new function, train_and_evaluate, that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g., batch_size, num_examples_to_train_on, train_data_path, etc.) This is useful because the output directory, data paths, and number of train steps will be different depending on whether we're training locally or in the cloud. Parametrizing allows us to use the same code for both. We specify these parameters at run time via the command line, which means we need to add code to parse command-line parameters and invoke train_and_evaluate() with those params. This is the job of the task.py file. Exercise. Fill in the TODOs in the code below to parse three additional command-line arguments for the batch size, the learning rate, and the number of buckets (i.e., batch_size, lr and nbuckets).
%%writefile taxifare/trainer/task.py
# TODO 1
import argparse

from trainer import model


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here
    parser.add_argument(
        "--eval_data_path",
        help="GCS location pattern of eval files",
        required=True
    )
    parser.add_argument(
        "--nnsize",
        help="Hidden layer sizes (provide space-separated sizes)",
        nargs="+",
        type=int,
        default=[32, 8]
    )
    parser.add_argument(
        "--num_evals",
        help="Number of times to evaluate model on eval data training.",
        type=int,
        default=5
    )
    parser.add_argument(
        "--num_examples_to_train_on",
        help="Number of examples to train on.",
        type=int,
        default=100
    )
    parser.add_argument(
        "--output_dir",
        help="GCS location to write checkpoints and export models",
        required=True
    )
    parser.add_argument(
        "--train_data_path",
        help="GCS location pattern of train files containing eval URLs",
        required=True
    )
    parser.add_argument(
        "--job-dir",
        help="this model ignores this field, but it is required by gcloud",
        default="junk"
    )

    args = parser.parse_args()
    hparams = args.__dict__
    hparams.pop("job-dir", None)

    model.train_and_evaluate(hparams)
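One possible way to fill in the three TODOs is sketched below. This is a sketch, not the official lab solution; the flag names must match the keys that `train_and_evaluate` reads from `hparams` (`batch_size`, `lr`, `nbuckets`), and the defaults are illustrative:

```python
import argparse

parser = argparse.ArgumentParser()
# The three exercise arguments: note the numeric `type` conversions,
# so the hparams dict carries ints/floats rather than strings.
parser.add_argument(
    "--batch_size",
    help="Batch size for training steps",
    type=int,
    default=32
)
parser.add_argument(
    "--lr",
    help="Learning rate for the optimizer",
    type=float,
    default=0.001
)
parser.add_argument(
    "--nbuckets",
    help="Number of buckets to divide lat/lon grid into",
    type=int,
    default=10
)

# Simulate a command line to check the parsing behaves as expected.
args = parser.parse_args(["--batch_size", "50", "--lr", "0.01"])
print(args.batch_size, args.lr, args.nbuckets)  # -> 50 0.01 10
```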
Run your training package on Cloud AI Platform Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service: - jobid: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness - region: Cloud region to train in. See here for supported AI Platform Training Service regions The arguments before -- \ are for AI Platform Training Service. The arguments after -- \ are sent to our task.py. Because this is on the entire dataset, it will take a while. You can monitor the job from the GCP console in the Cloud AI Platform section. Exercise. Fill in the TODOs in the code below to submit your job for training on Cloud AI Platform.
%%bash
# TODO 2

# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}

# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKETS=10
LR=0.001
NNSIZE="32 8"

# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*

gcloud ai-platform jobs submit training $JOBID \
    --module-name= # TODO: Your code goes here
    --package-path= # TODO: Your code goes here
    --staging-bucket= # TODO: Your code goes here
    --python-version= # TODO: Your code goes here
    --runtime-version= # TODO: Your code goes here
    --region= # TODO: Your code goes here
    -- \
    --eval_data_path # TODO: Your code goes here
    --output_dir # TODO: Your code goes here
    --train_data_path # TODO: Your code goes here
    --batch_size # TODO: Your code goes here
    --num_examples_to_train_on # TODO: Your code goes here
    --num_evals # TODO: Your code goes here
    --nbuckets # TODO: Your code goes here
    --lr # TODO: Your code goes here
    --nnsize # TODO: Your code goes here
2) Setup the Request Extract the models from the cache with this request and upload the model object files to the configured S3 bucket. Please make sure the environment variables are set correctly and the S3 bucket exists: ENV_AWS_KEY=&lt;AWS API Key&gt; ENV_AWS_SECRET=&lt;AWS API Secret&gt; For docker containers, make sure to set these keys in the correct Jupyter env file and restart the container: &lt;repo base dir&gt;/justredis/redis-labs.env &lt;repo base dir&gt;/local/jupyter.env &lt;repo base dir&gt;/test/jupyter.env What's the dataset name?
ds_name = "iris_regressor"
examples/ML-IRIS-Redis-Labs-Extract-From-Cache.ipynb
jay-johnson/sci-pype
apache-2.0
3) Build and Run the Extract + Upload Request
cache_req = {
    "RAName": "CACHE",        # Redis endpoint name holding the models
    "DSName": str(ds_name),   # Dataset name for pulling out of the cache
    "S3Loc": str(s3_loc),     # S3 location to store the model file
    "DeleteAfter": False,     # Optional delete after upload
    "SaveDir": data_dir,      # Optional dir to save the model file - default is ENV_DATA_DST_DIR
    "TrackingID": ""          # Future support for using the tracking id
}

upload_results = core.ml_upload_cached_dataset_to_s3(cache_req, core.get_rds(), core.get_dbs(), debug)

if upload_results["Status"] == "SUCCESS":
    lg("Done Uploading Model and Analysis DSName(" + str(ds_name) + ") S3Loc(" + str(cache_req["S3Loc"]) + ")", 6)
else:
    lg("", 6)
    lg("ERROR: Failed Upload Model and Analysis Caches as file for DSName(" + str(ds_name) + ")", 6)
    lg(upload_results["Error"], 6)
    lg("", 6)
    sys.exit(1)
# end of if extract + upload worked

lg("", 6)
lg("Extract and Upload Completed", 5)
lg("", 6)
The right-hand side (RHS) of ODEs can be implemented in either python or C. Although not absolutely necessary for this example, we here show how to implement the RHS in C. This is often significantly faster than using a python callback function. We use a simple spin model consisting of one second-order ODE, or equivalently a set of two coupled first-order ODEs. For more details on the physics behind this model, see Danby (1962), Goldreich and Peale (1966), and Wisdom and Peale (1983). The RHS of this set of ODEs implemented in C is:
%%writefile rhs.c
#include "rebound.h"

void derivatives(struct reb_ode* const ode, double* const yDot, const double* const y, const double t){
    struct reb_orbit o = reb_tools_particle_to_orbit(ode->r->G, ode->r->particles[1], ode->r->particles[0]);
    double omega2 = 3.*0.26;
    yDot[0] = y[1];
    yDot[1] = -omega2/(2.*o.d*o.d*o.d)*sin(2.*(y[0]-o.f));
}
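In equation form, the pair of first-order ODEs above is the standard spin-orbit equation, with $y_0 = \theta$ the spin angle, $f$ the true anomaly, and $d$ the instantaneous orbital separation:

```latex
\dot{y}_0 = y_1, \qquad
\dot{y}_1 = -\frac{\omega_0^2}{2\,d^3}\,\sin\!\bigl(2\,(y_0 - f)\bigr),
\qquad \omega_0^2 = 3\,\frac{B-A}{C} \approx 3 \times 0.26
```

The identification of the constant 0.26 with Hyperion's asphericity $(B-A)/C$ is our reading of the `omega2` line in the C code, following the spin-orbit literature cited above; treat it as an interpretation, not something stated in the original text.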
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
We now compile this into a shared library. We need the REBOUND headers and library for this. The following is a bit of a hack: we just copy the files into the current folder. This works if you've installed REBOUND from the git repository. Otherwise, you'll need to find these files manually (which might depend on your python environment).
!cp ../src/librebound.so .
!cp ../src/rebound.h .
!gcc -c -O3 -fPIC rhs.c -o rhs.o
!gcc -L. -shared rhs.o -o rhs.so -lrebound
Using ctypes, we can load the library into python
from ctypes import cdll

clibrhs = cdll.LoadLibrary("rhs.so")
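ctypes can load any shared library this way. As a minimal self-contained illustration (using the C math library rather than the `rhs.so` we just built, so it runs anywhere), the same loading mechanism looks like this:

```python
import ctypes
import ctypes.util

# Locate the C math library; on Linux find_library("m") typically
# returns "libm.so.6". Passing the resolved name to CDLL loads it
# exactly as cdll.LoadLibrary("rhs.so") does above.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declaring restype/argtypes tells ctypes how to convert values at the
# C boundary; without it, doubles would be mangled.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # -> 1.0
```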
The following function sets up the N-body simulation as well as the ODE system that governs the spin evolution. Note that we set the derivatives function pointer to the C function we've just compiled. You could also set this function pointer to a python function and avoid all the C complications.
def setup():
    sim = rebound.Simulation()
    sim.add(m=1)                # Saturn
    sim.add(a=1, e=0.123233)    # Hyperion, massless, semi-major axis of 1
    sim.integrator = "BS"
    sim.ri_bs.eps_rel = 1e-12   # tolerance
    sim.ri_bs.eps_abs = 1e-12
    ode_spin = sim.create_ode(length=2, needs_nbody=True)
    ode_spin.y[0] = 0.01        # initial conditions that lead to chaos
    ode_spin.y[1] = 1
    ode_spin.derivatives = clibrhs.derivatives
    return sim, ode_spin
We will create two simulations that are slightly offset from each other.
sim, ode_spin = setup()
sim2, ode_spin2 = setup()
ode_spin2.y[0] += 1e-8  # small perturbation
With these two simulations, we can measure the growing divergence of nearby trajectories, a key feature of chaos.
times = 2.*np.pi*np.linspace(0,30,100)  # a couple of orbits
obliq = np.zeros((len(times)))
obliq2 = obliq.copy()
for i, t in enumerate(times):
    sim.integrate(t, exact_finish_time=1)
    sim2.integrate(t, exact_finish_time=1)
    obliq[i] = ode_spin.y[0]
    obliq2[i] = ode_spin2.y[0]
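The exponential divergence being measured here is the hallmark of a positive Lyapunov exponent. A toy illustration with the fully chaotic logistic map (not the spin model itself — just a stand-in that shows the same behavior) makes the effect concrete:

```python
# Two logistic-map trajectories (r = 4, fully chaotic) started 1e-10 apart.
x, x2 = 0.3, 0.3 + 1e-10
sep = []
for _ in range(40):
    x = 4.0 * x * (1.0 - x)
    x2 = 4.0 * x2 * (1.0 - x2)
    sep.append(abs(x - x2))

# The separation grows by many orders of magnitude before saturating
# near O(1), just as the obliquity difference does below.
print(sep[0], max(sep))
```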
Finally, let us plot the divergence as a function of time.
fig, ax = plt.subplots(1,1)
ax.set_xlabel("time [orbits]")
ax.set_ylabel("obliquity difference [degrees]")
ax.set_yscale("log")
ax.set_ylim([1e-8,1e2])
o1 = np.remainder((obliq-obliq2)*180/np.pi,360.)
o2 = np.remainder((obliq2-obliq)*180/np.pi,360.)
ax.scatter(times/np.pi/2., np.minimum(o1,o2))
ax.plot(times/np.pi/2., 1e-8/np.pi*180.*np.exp(times/(np.pi*2.)/1.2), color="black");
Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Model Analysis (tfma), and the What-If tool (WIT):
%tensorflow_version 2.x

import os
import tempfile
import apache_beam as beam
import numpy as np
import pandas as pd
from datetime import datetime

import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairness_indicators
from tensorflow_model_analysis.addons.fairness.view import widget_view
from witwidget.notebook.visualization import WitConfigBuilder
from witwidget.notebook.visualization import WitWidget
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Run the following code to download and import the training and validation datasets. By default, the following code will load the preprocessed data (see Fairness Exercise 1: Explore the Model for more details). If you prefer, you can enable the download_original_data checkbox at right to download the original dataset and preprocess it as described in the previous section (this may take 5-10 minutes).
download_original_data = False #@param {type:"boolean"}

if download_original_data:
    train_tf_file = tf.keras.utils.get_file('train_tf.tfrecord',
                                            'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
    validate_tf_file = tf.keras.utils.get_file('validate_tf.tfrecord',
                                               'https://storage.googleapis.com/civil_comments_dataset/validate_tf.tfrecord')

    # The identity terms list will be grouped together by their categories
    # (see 'IDENTITY_COLUMNS') on threshold 0.5. Only the identity term column,
    # text column and label column will be kept after processing.
    train_tf_file = util.convert_comments_data(train_tf_file)
    validate_tf_file = util.convert_comments_data(validate_tf_file)

else:
    train_tf_file = tf.keras.utils.get_file('train_tf_processed.tfrecord',
                                            'https://storage.googleapis.com/civil_comments_dataset/train_tf_processed.tfrecord')
    validate_tf_file = tf.keras.utils.get_file('validate_tf_processed.tfrecord',
                                               'https://storage.googleapis.com/civil_comments_dataset/validate_tf_processed.tfrecord')
Next, train the original model from Fairness Exercise 1: Explore the Model, which we'll use as the baseline model for this exercise:
#@title Run this cell to train the baseline model from Exercise 1

TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'

FEATURE_MAP = {
    # Label:
    LABEL: tf.io.FixedLenFeature([], tf.float32),
    # Text:
    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),

    # Identities:
    'sexual_orientation': tf.io.VarLenFeature(tf.string),
    'gender': tf.io.VarLenFeature(tf.string),
    'religion': tf.io.VarLenFeature(tf.string),
    'race': tf.io.VarLenFeature(tf.string),
    'disability': tf.io.VarLenFeature(tf.string),
}

def train_input_fn():
    def parse_function(serialized):
        parsed_example = tf.io.parse_single_example(
            serialized=serialized, features=FEATURE_MAP)
        # Adds a weight column to deal with unbalanced classes.
        parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)
        return (parsed_example, parsed_example[LABEL])
    train_dataset = tf.data.TFRecordDataset(
        filenames=[train_tf_file]).map(parse_function).batch(512)
    return train_dataset

BASE_DIR = tempfile.gettempdir()

model_dir = os.path.join(BASE_DIR, 'train', datetime.now().strftime(
    "%Y%m%d-%H%M%S"))

embedded_text_feature_column = hub.text_embedding_column(
    key=TEXT_FEATURE,
    module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')

classifier = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    weight_column='weight',
    feature_columns=[embedded_text_feature_column],
    optimizer=tf.optimizers.Adagrad(learning_rate=0.003),
    loss_reduction=tf.losses.Reduction.SUM,
    n_classes=2,
    model_dir=model_dir)

classifier.train(input_fn=train_input_fn, steps=1000)
In the next section, we'll apply bias-remediation techniques on our data and then train a revised model on the updated data. Remediate Bias To remediate bias in our model, we'll first need to define the remediation metrics we'll use to gauge success and choose an appropriate remediation technique. Then we'll retrain the model using the technique we've selected. Define the remediation metrics Before we can apply bias-remediation techniques to our model, we first need to define what successful remediation looks like in the context of our particular problem. As we saw in Fairness Exercise 1: Explore the Model, there are often tradeoffs that come into play when optimizing a model (for example, adjustments that decrease false positives may increase false negatives), so we need to choose the evaluation metrics that best align with our priorities. For our toxicity classifier, we've identified that our primary concern is ensuring that gender-related comments are not disproportionately misclassified as toxic, which could result in constructive discourse being suppressed. So here, we will define successful remediation as a decrease in the FPR (false-positive rate) for gender subgroups relative to the overall FPR. Choose a remediation technique To mitigate false-positive rate for gender subgroups, we want to help the model "unlearn" any false correlations it's learned between gender-related terminology and toxicity. We've determined that this false correlation likely stems from an insufficient number of training examples in which gender terminology was used in nontoxic contexts. One excellent way to remediate this issue would be to add more nontoxic examples to each gender subgroup to balance out the dataset, and then retrain on the amended data. However, we've already trained on all the data we have, so what can we do? This is a common problem ML engineers face. 
Collecting additional data can be costly, resource-intensive, and time-consuming, and as a result, it may just not be feasible in certain circumstances. One alternative solution is to simulate additional data by upweighting the existing examples in the disproportionately underrepresented group (increasing the loss penalty for errors on these examples) so they carry more weight and are not as easily overwhelmed by the rest of the data. Let's update the input function of our model to implement upweighting for nontoxic examples belonging to one or more gender subgroups. In the UPDATES FOR UPWEIGHTING section of the code below, we've increased the weight values for nontoxic examples that contain a gender value of transgender, female, or male:
def train_input_fn_with_remediation():
    def parse_function(serialized):
        parsed_example = tf.io.parse_single_example(
            serialized=serialized, features=FEATURE_MAP)
        # Adds a weight column to deal with unbalanced classes.
        parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)

        # BEGIN UPDATES FOR UPWEIGHTING
        # Up-weighting non-toxic examples to balance toxic and non-toxic
        # examples for gender slice.
        values = parsed_example['gender'].values

        # 'toxicity' label zero represents the example is non-toxic.
        if tf.equal(parsed_example[LABEL], 0):
            # We tuned the upweighting hyperparameters, and found we got good
            # results by setting `weight`s of 0.4 for `transgender`,
            # 0.5 for `female`, and 0.7 for `male`.
            # NOTE: `other_gender` is not upweighted separately, because all
            # examples tagged with `other_gender` were also tagged with one
            # of the other values below
            if tf.greater(tf.math.count_nonzero(tf.equal(values, 'transgender')), 0):
                parsed_example['weight'] = tf.constant(0.4)
            if tf.greater(tf.math.count_nonzero(tf.equal(values, 'female')), 0):
                parsed_example['weight'] = tf.constant(0.5)
            if tf.greater(tf.math.count_nonzero(tf.equal(values, 'male')), 0):
                parsed_example['weight'] = tf.constant(0.7)

        return (parsed_example, parsed_example[LABEL])
        # END UPDATES FOR UPWEIGHTING

    train_dataset = tf.data.TFRecordDataset(
        filenames=[train_tf_file]).map(parse_function).batch(512)
    return train_dataset
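Conceptually, the weight column scales each example's contribution to the total loss. A plain-Python sketch with made-up loss values (the 0.1 and 0.5 weights mirror the baseline `label + 0.1` weighting and the upweighted `female` value above; the per-example losses are hypothetical):

```python
def weighted_loss(per_example_losses, weights):
    # Sum-reduction of per-example losses scaled by their weights,
    # the same effect tf.losses.Reduction.SUM has with a weight column.
    return sum(l * w for l, w in zip(per_example_losses, weights))

losses = [0.4, 0.4]                        # two identical per-example losses
before = weighted_loss(losses, [0.1, 0.1])  # both nontoxic, baseline weight
after = weighted_loss(losses, [0.1, 0.5])   # second example upweighted

# The upweighted example now contributes five times as much to the
# penalty, so the optimizer works harder to classify it correctly.
print(before, after)
```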
Retrain the model Now, let's retrain the model with our upweighted examples:
BASE_DIR = tempfile.gettempdir()

model_dir_with_remediation = os.path.join(BASE_DIR, 'train', datetime.now().strftime(
    "%Y%m%d-%H%M%S"))

embedded_text_feature_column = hub.text_embedding_column(
    key=TEXT_FEATURE,
    module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')

classifier_with_remediation = tf.estimator.DNNClassifier(
    hidden_units=[500, 100],
    weight_column='weight',
    feature_columns=[embedded_text_feature_column],
    n_classes=2,
    optimizer=tf.optimizers.Adagrad(learning_rate=0.003),
    loss_reduction=tf.losses.Reduction.SUM,
    model_dir=model_dir_with_remediation)

classifier_with_remediation.train(input_fn=train_input_fn_with_remediation, steps=1000)
Recompute fairness metrics Now that we've retrained the model, let's recompute our fairness metrics. First, export the model:
def eval_input_receiver_fn():
    serialized_tf_example = tf.compat.v1.placeholder(
        dtype=tf.string, shape=[None], name='input_example_placeholder')
    receiver_tensors = {'examples': serialized_tf_example}
    features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)
    features['weight'] = tf.ones_like(features[LABEL])

    return tfma.export.EvalInputReceiver(
        features=features,
        receiver_tensors=receiver_tensors,
        labels=features[LABEL])

tfma_export_dir_with_remediation = tfma.export.export_eval_savedmodel(
    estimator=classifier_with_remediation,
    export_dir_base=os.path.join(BASE_DIR, 'tfma_eval_model_with_remediation'),
    eval_input_receiver_fn=eval_input_receiver_fn)
Next, run the fairness evaluation using TFMA:
tfma_eval_result_path_with_remediation = os.path.join(BASE_DIR, 'tfma_eval_result_with_remediation')

slice_selection = 'gender'
compute_confidence_intervals = False

# Define slices that you want the evaluation to run on.
slice_spec = [
    tfma.slicer.SingleSliceSpec(),  # Overall slice
    tfma.slicer.SingleSliceSpec(columns=['gender']),
]

# Add the fairness metrics.
add_metrics_callbacks = [
    tfma.post_export_metrics.fairness_indicators(
        thresholds=[0.1, 0.3, 0.5, 0.7, 0.9],
        labels_key=LABEL
    )
]

eval_shared_model_with_remediation = tfma.default_eval_shared_model(
    eval_saved_model_path=tfma_export_dir_with_remediation,
    add_metrics_callbacks=add_metrics_callbacks)

validate_dataset = tf.data.TFRecordDataset(filenames=[validate_tf_file])

# Run the fairness evaluation.
with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | 'ReadData' >> beam.io.ReadFromTFRecord(validate_tf_file)
        | 'ExtractEvaluateAndWriteResults' >>
        tfma.ExtractEvaluateAndWriteResults(
            eval_shared_model=eval_shared_model_with_remediation,
            slice_spec=slice_spec,
            compute_confidence_intervals=compute_confidence_intervals,
            output_path=tfma_eval_result_path_with_remediation)
    )

eval_result_with_remediation = tfma.load_eval_result(output_path=tfma_eval_result_path_with_remediation)
Load evaluation results Run the following two cells to load results in the What-If tool and Fairness Indicators. In the What-If tool, we'll load 1,000 examples with the corresponding predictions returned from both the baseline model and the remediated model. WARNING: When you launch the What-If tool widget below, the left panel will display the full text of individual comments from the Civil Comments dataset. Some of these comments include profanity, offensive statements, and offensive statements involving identity terms. If this is a concern, run the Alternative cell at the end of this section instead of the two code cells below, and skip question #4 in the following Exercise.
DEFAULT_MAX_EXAMPLES = 1000

# Load 100000 examples in memory. When first rendered, What-If Tool only
# displays 1000 of these examples to ensure data loads successfully for most
# browser/machine configurations.
def wit_dataset(file, num_examples=100000):
    dataset = tf.data.TFRecordDataset(
        filenames=[train_tf_file]).take(num_examples)
    return [tf.train.Example.FromString(d.numpy()) for d in dataset]

wit_data = wit_dataset(train_tf_file)

# Configure WIT with 1000 examples, the FEATURE_MAP we defined above, and
# a label of 1 for positive (toxic) examples and 0 for negative (nontoxic)
# examples
config_builder = WitConfigBuilder(wit_data[:DEFAULT_MAX_EXAMPLES]).set_estimator_and_feature_spec(
    classifier, FEATURE_MAP).set_compare_estimator_and_feature_spec(
    classifier_with_remediation, FEATURE_MAP).set_label_vocab(['0', '1']).set_target_feature(LABEL)
wit = WitWidget(config_builder)
In Fairness Indicators, we'll display the remediated model's evaluation results on the validation set.
# Link Fairness Indicators widget with WIT widget above,
# so that clicking a slice in FI below will load its data in WIT above.
event_handlers = {'slice-selected':
                  wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}
widget_view.render_fairness_indicator(eval_result=eval_result_with_remediation,
                                      slicing_column=slice_selection,
                                      event_handlers=event_handlers)

#@title Alternative: Run this cell only if you intend to skip the What-If tool exercises (see Warning above)

# Link Fairness Indicators widget with WIT widget above,
# so that clicking a slice in FI below will load its data in WIT above.
widget_view.render_fairness_indicator(eval_result=eval_result_with_remediation,
                                      slicing_column=slice_selection)
We can instantiate the tool using an API key.
cii = CancerImageInterface(api_key=YOUR_API_KEY_HERE)
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Note: The number of studies (collections) provided here is somewhat smaller than what is provided on The Cancer Imaging Archive's website. This is because certain studies, such as those with restricted access, are excluded. With this notice out of the way, let's go ahead and perform a search.
cii.search(cancer_type='breast')
Next, we can easily download this data. First, we define a small helper function to abbreviate DataFrames for display in this tutorial.
import numpy as np

def simplify_df(df):
    """This function simplifies dataframes for the purposes of this tutorial."""
    data_frame = df.copy()
    for c in ('cached_dicom_images_path', 'cached_images_path'):
        data_frame[c] = data_frame[c].map(
            lambda x: tuple(['path_to_image'] * len(x)) if isinstance(x, tuple) else x)
    return data_frame[0:5].replace({np.NaN: ''})
The code below will download the data in our search results, but with three noteworthy restrictions. <br> First, patient_limit=5 will limit the number of patients/subjects downloaded to the first 5. <br> Second, collections_limit will limit the number of collections downloaded to one (in this case, 'TCGA-COAD'). <br> Third, session_limit=1 will limit the results returned to the first time the patient/subject was scanned, e.g., before surgical intervention to remove diseased tissue. <br> Additionally, the save_dicom parameter will enable us to save the raw DICOM image files that the Cancer Imaging Archive provides. By default, pull() only generates DICOM files. However, the save_png argument also gives you the option to convert the DICOM files to PNG images.
pull_df = cii.pull(patient_limit=5, collections_limit=1, session_limit=1)
Let's take a look at the data we've downloaded. We could view the pull_df object above, or the identical records_db attribute of cii, e.g., cii.records_db. However, both of those DataFrames contain several columns which are not typically relevant for everyday use. So, instead, we can view an abbreviated DataFrame, records_db_short.
simplify_df(cii.records_db_short)
Notes: The 'cached_dicom_images_path' and 'cached_images_path' columns refer to multiple images. The number of converted images may differ from the number of raw DICOM images because 3D DICOM images are saved as individual frames when they are converted to PNG. The 'image_count_converted_cache' column provides an account of how many images resulted from any given DICOM $\rightarrow$ PNG conversion. Working With DICOMs The Cancer Imaging Archive stores images in a format known as Digital Imaging and Communications in Medicine (DICOM). If you have experience working with this file format, you can safely skip this section. In python, we can manipulate DICOM files using the pydicom library. This tool will allow us to extract the image data as ndarrays.
import dicom # in the future you will have to use `import pydicom as dicom`
We can also go ahead and import matplotlib to allow us to visualize the ndarrays we extract. We can start by extracting a list of DICOMs from the images we downloaded above.
sample_dicoms = cii.records_db['cached_dicom_images_path'].iloc[1]
We can load these images in as ndarrays using the dicom (pydicom) library.
dicoms_arrs = [dicom.read_file(f).pixel_array for f in sample_dicoms]
DICOM represents imaging sessions with a tag known as SeriesInstanceUID, that is, the unique ID of the series. If multiple DICOM files/images share the same SeriesInstanceUID, it means they are part of the same 'series'. If we get the length of dicoms_arrs, we see that multiple DICOM files share the same SeriesInstanceUID,
len(dicoms_arrs) # cii.records_db['series_instance_uid'].iloc[1]
thus suggesting that this particular series is either a 3D volume or a time series. So we can go ahead and stack these images on top of one another as a way of representing the relationship between them.
stacked_dicoms = np.stack(dicoms_arrs)
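np.stack adds a new leading axis to a sequence of equal-shaped arrays. With toy frames standing in for the real DICOM pixel arrays (same sizes as in this tutorial, but synthetic data):

```python
import numpy as np

# Fifteen fake 256x256 "slices" in place of dicom.read_file(...).pixel_array.
frames = [np.zeros((256, 256)) for _ in range(15)]

# Stacking yields one 3D array with the slice index as the first axis.
volume = np.stack(frames)
print(volume.shape)  # -> (15, 256, 256)
```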
If we check the shape of stacked_dicoms, we can see that we have indeed stacked 15 256x256 images on top of one another.
stacked_dicoms.shape
We can also go ahead and define a small function which will enable us to visualize this 'stack' of images.
import matplotlib.pyplot as plt

def sample_stack(stack, rows=3, cols=5, start_with=0, show_every=1):
    """Function to display stacked ndarray.
    Source: https://www.raddq.com/dicom-processing-segmentation-visualization-in-python
    Note: this code has been slightly modified."""
    if rows*cols != stack.shape[0]:
        raise ValueError("The product of `rows` and `cols` does not equal number of images.")
    fig, ax = plt.subplots(rows, cols, figsize=[12, 12])
    for i in range(rows*cols):
        ind = start_with + i*show_every
        ax[int(i/cols), int(i % cols)].set_title('slice {0}'.format(str(ind + 1)))
        ax[int(i/cols), int(i % cols)].imshow(stack[ind], cmap='gray')
        ax[int(i/cols), int(i % cols)].axis('off')
    plt.show()
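The subplot indexing in sample_stack maps a flat slice index i to a (row, column) pair via integer division and modulo; a quick standalone check of that arithmetic:

```python
# Same 3x5 grid as sample_stack's defaults.
rows, cols = 3, 5
positions = [(i // cols, i % cols) for i in range(rows * cols)]

# Index 0 is the top-left cell, index 5 wraps to the second row,
# and the last index lands at the bottom-right corner.
print(positions[0], positions[5], positions[14])  # -> (0, 0) (1, 0) (2, 4)
```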
If you're curious to see what these images look like, you can uncomment the line below to view the stack of images.
# sample_stack(stacked_dicoms, rows=3)
Note: Ordering DICOM images in space is tricky. Currently, this class uses a somewhat reliable, but far from ideal, means of ordering images. <br> Errors are possible. <br> <small> For the Medical Imaging Folks: <br> Images in the 'series_instance_uid' column are ordered against the InstanceNumber tag instead of actually working out the geometry required to sort the images in space. This is obviously not ideal because, among other reasons, InstanceNumber is a type 2 tag. Hopefully, in the future, this is something that will be improved. </small> Train, Validation and Test Splitting images obtained from the Cancer Imaging Archive into training, validation and/or testing sets is nearly identical to doing so with an instance of the OpeniInterface class introduced in the prior tutorial. Accordingly, the instructions provided here will be condensed. If you would like more detail, please review this earlier tutorial. First, we import the image_divvy tool.
from biovida.images import image_divvy
Next, we can define a 'divvy_rule'.
def my_divvy_rule(row):
    if row['modality_full'] == 'Magnetic Resonance Imaging (MRI)':
        return 'mri'
    if row['modality_full'] == 'Segmentation':
        return 'seg'
This rule will keep only MRI and segmentation images, sorting them into 'mri' and 'seg' categories, respectively. All other images will be excluded.
train_test = image_divvy(instance=cii,
                         divvy_rule=my_divvy_rule,
                         db_to_extract='records_db',
                         action='ndarray',
                         train_val_test_dict={'train': 0.8, 'test': 0.2})

train_mri, test_mri = train_test['train']['mri'], train_test['test']['mri']
train_seg, test_seg = train_test['train']['seg'], train_test['test']['seg']
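The train_val_test_dict above allocates images by proportion. A minimal numpy sketch of what an 80/20 shuffled split amounts to (illustrative only — image_divvy performs this internally; the array of integers stands in for image arrays):

```python
import numpy as np

rng = np.random.RandomState(0)     # fixed seed for reproducibility
items = np.arange(100)             # stand-ins for 100 image arrays

# Shuffle indices, then cut at the 80% mark.
perm = rng.permutation(len(items))
cut = int(0.8 * len(items))
train, test = items[perm[:cut]], items[perm[cut:]]

print(len(train), len(test))  # -> 80 20
```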
One important thing to point out is that some of the image arrays returned will, in fact, be stacked arrays of images. <br> For example:
train_seg[10].shape
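Since some entries are stacked 3D arrays while others are single 2D images, a small helper can normalize them into a flat list of 2D frames before feeding them to a model. This is a hypothetical sketch (the helper name and toy shapes are not from BioVida itself), assuming any 3D element stacks frames along its first axis:

```python
import numpy as np

def flatten_image_stacks(arrays):
    """Split any stacked (n, h, w) arrays into individual (h, w) frames.

    Hypothetical helper: assumes each element is either a 2D image or a
    3D stack whose first axis indexes frames.
    """
    frames = []
    for a in arrays:
        a = np.asarray(a)
        if a.ndim == 3:
            frames.extend(a)   # unpack each frame of the stack
        else:
            frames.append(a)   # already a single 2D image
    return frames

# toy demonstration: one stack of 4 frames plus one single image
demo = [np.zeros((4, 8, 8)), np.ones((8, 8))]
flat = flatten_image_stacks(demo)
print(len(flat))  # 5 frames in total
```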
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Loading FFT routines
gridDIM = 64
size = gridDIM * gridDIM

axes0 = 0
axes1 = 1

makeC2C = 0
makeR2C = 1
makeC2R = 1

axesSplit_0 = 0
axesSplit_1 = 1

m = size

segment_axes0 = 0
segment_axes1 = 0

DIR_BASE = "/home/robert/Documents/new1/FFT/mycode/"

# FAFT
_faft128_2D = ctypes.cdll.LoadLibrary(DIR_BASE + 'FAFT128_2D_R2C.so')
_faft128_2D.FAFT128_2D_R2C.restype = int
_faft128_2D.FAFT128_2D_R2C.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
                                       ctypes.c_float, ctypes.c_float,
                                       ctypes.c_int, ctypes.c_int,
                                       ctypes.c_int, ctypes.c_int]
cuda_faft = _faft128_2D.FAFT128_2D_R2C

# Inverse FAFT
_ifaft128_2D = ctypes.cdll.LoadLibrary(DIR_BASE + 'IFAFT128_2D_C2R.so')
_ifaft128_2D.IFAFT128_2D_C2R.restype = int
_ifaft128_2D.IFAFT128_2D_C2R.argtypes = [ctypes.c_void_p, ctypes.c_void_p,
                                         ctypes.c_float, ctypes.c_float,
                                         ctypes.c_int, ctypes.c_int,
                                         ctypes.c_int, ctypes.c_int]
cuda_ifaft = _ifaft128_2D.IFAFT128_2D_C2R


def fftGaussian(p, sigma):
    return np.exp(-p**2 * sigma**2 / 2.)
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Initializing Data Gaussian
def Gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / sigma**2 / 2.) / (sigma * np.sqrt(2 * np.pi))

def fftGaussian(p, mu, sigma):
    return np.exp(-1j * mu * p) * np.exp(-p**2 * sigma**2 / 2.)

# Gaussian parameters
mu_x = 1.5
sigma_x = 1.
mu_y = 1.5
sigma_y = 1.

# Grid parameters
x_amplitude = 7.
p_amplitude = 5.   # with the traditional FFT, the p amplitude is fixed to 2*np.pi/(2*x_amplitude)

dx = 2 * x_amplitude / float(gridDIM)   # this is dx in Bailey's paper
dp = 2 * p_amplitude / float(gridDIM)   # this is gamma in Bailey's paper

delta = dx * dp / (2 * np.pi)

x_range = np.linspace(-x_amplitude, x_amplitude - dx, gridDIM)
p_range = np.linspace(-p_amplitude, p_amplitude - dp, gridDIM)

x = x_range[np.newaxis, :]
y = x_range[:, np.newaxis]

f = Gaussian(x, mu_x, sigma_x) * Gaussian(y, mu_y, sigma_y)

plt.imshow(f, extent=[-x_amplitude, x_amplitude - dx, -x_amplitude, x_amplitude - dx], origin='lower')
axis_font = {'size': '24'}
plt.text(0., 7.1, '$W$', **axis_font)
plt.colorbar()

print(' Amplitude x = ', x_amplitude)
print(' Amplitude p = ', p_amplitude)
print('')
print('mu_x = ', mu_x)
print('mu_y = ', mu_y)
print('sigma_x = ', sigma_x)
print('sigma_y = ', sigma_y)
print('')
print('n = ', x.size)
print('dx = ', dx)
print('dp = ', dp)
print(' standard fft dp = ', 2 * np.pi / (2 * x_amplitude))
print('')
print('delta = ', delta)
print('')
print('The Gaussian extends to the numerical error in single precision:')
print(' min = ', np.min(f))
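As a sanity check on the analytic formula used above, the continuous Fourier transform of the Gaussian can be approximated directly with a Riemann sum and compared against the closed form. This is a quick numerical sketch independent of the GPU code (grid size and evaluation point are illustrative choices):

```python
import numpy as np

sigma = 1.0
mu = 1.5
x = np.linspace(-7.0, 7.0, 2048)
dx = x[1] - x[0]
g = np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

p = 0.75  # evaluate the transform at one momentum value
# Riemann-sum approximation of \int g(x) e^{-i p x} dx
numeric = np.sum(g * np.exp(-1j * p * x)) * dx
# closed form matching fftGaussian(p, mu, sigma) above
analytic = np.exp(-1j * mu * p) * np.exp(-p ** 2 * sigma ** 2 / 2)
print(abs(numeric - analytic))  # should be ~0
```

Because the Gaussian has effectively decayed to zero at the grid edges, the rectangle rule converges very quickly here.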
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Forward Transform
# Executing FFT
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta,
          segment_axes0, axes0, makeR2C, axesSplit_0)
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta,
          segment_axes1, axes1, makeC2C, axesSplit_0)

plt.imshow(f_gpu.get())

plt.plot(f33_gpu.get().real.reshape(64))

def ReconstructFFT2D_axesSplit_0(f, f65):
    n = f.shape[0]

    freal_half = f_gpu.get()[:n//2, :]
    freal = np.append(freal_half, f65.real.reshape(1, f65.size), axis=0)
    freal = np.append(freal, freal_half[:0:-1, :], axis=0)

    fimag_half = f_gpu.get()[n//2:, :]
    fimag = np.append(fimag_half, f65.imag.reshape(1, f65.size), axis=0)
    fimag = np.append(fimag, -fimag_half[:0:-1, :], axis=0)

    return freal + 1j * fimag

plt.imshow(ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get()).real / float(size),
           extent=[-p_amplitude, p_amplitude - dp, -p_amplitude, p_amplitude - dp],
           origin='lower')
plt.colorbar()
axis_font = {'size': '24'}
plt.text(-2, 6.2, '$Re \\mathcal{F}(W)$', **axis_font)
plt.xlim(-p_amplitude, p_amplitude - dp)
plt.ylim(-p_amplitude, p_amplitude - dp)
plt.xlabel('$p_x$', **axis_font)
plt.ylabel('$p_y$', **axis_font)

plt.imshow(ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get()).imag / float(size),
           extent=[-p_amplitude, p_amplitude - dp, -p_amplitude, p_amplitude - dp],
           origin='lower')
plt.colorbar()
plt.text(-2, 6.2, '$Imag\\, \\mathcal{F}(W)$', **axis_font)
plt.xlim(-p_amplitude, p_amplitude - dp)
plt.ylim(-p_amplitude, p_amplitude - dp)
plt.xlabel('$p_x$', **axis_font)
plt.ylabel('$p_y$', **axis_font)
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Central Section: $p_y =0$
plt.figure(figsize=(10, 10))

plt.plot(p_range,
         ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get())[32, :].real / float(size),
         'o-', label='Real')
plt.plot(p_range,
         ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get())[32, :].imag / float(size),
         'ro-', label='Imag')
plt.xlabel('$p_x$', **axis_font)

plt.plot(p_range, 4 * fftGaussian(p_range, mu_x, sigma_x).real, 'bx')
plt.plot(p_range, 4 * fftGaussian(p_range, mu_x, sigma_x).imag, 'rx')

plt.legend(loc='upper left')
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Inverse Transform
# Executing iFFT
cuda_ifaft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta,
           segment_axes1, axes1, makeC2C, axesSplit_0)
cuda_ifaft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta,
           segment_axes0, axes0, makeC2R, axesSplit_0)

plt.imshow(f_gpu.get() / float(size * size),
           extent=[-x_amplitude, x_amplitude - dx, -x_amplitude, x_amplitude - dx],
           origin='lower')
plt.colorbar()
axis_font = {'size': '24'}
plt.text(-1, 7.2, '$W$', **axis_font)
plt.xlim(-x_amplitude, x_amplitude - dx)
plt.ylim(-x_amplitude, x_amplitude - dx)
plt.xlabel('$x$', **axis_font)
plt.ylabel('$y$', **axis_font)
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
$W$ TRANSFORM FROM AXES-1 After the transform, f_gpu[:, :32] contains real values and f_gpu[:, 32:] contains imaginary values. f33_gpu contains the 33rd column of complex values.
f = Gaussian(x, mu_x, sigma_x) * Gaussian(y, mu_y, sigma_y)

plt.imshow(f, extent=[-x_amplitude, x_amplitude - dx, -x_amplitude, x_amplitude - dx], origin='lower')

f33 = np.zeros([64, 1], dtype=np.complex64)

# One gpu array.
f_gpu = gpuarray.to_gpu(np.ascontiguousarray(f, dtype=np.float32))
f33_gpu = gpuarray.to_gpu(np.ascontiguousarray(f33, dtype=np.complex64))
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Forward Transform
# Executing FFT
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta,
          segment_axes1, axes1, makeR2C, axesSplit_1)
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta,
          segment_axes0, axes0, makeC2C, axesSplit_1)

plt.imshow(f_gpu.get())

plt.plot(f33_gpu.get().real.reshape(64))

def ReconstructFFT2D_axesSplit_1(f, f65):
    n = f.shape[0]

    freal_half = f_gpu.get()[:, :n//2]
    freal = np.append(freal_half, f65.real.reshape(f65.size, 1), axis=1)
    freal = np.append(freal, freal_half[:, :0:-1], axis=1)

    fimag_half = f_gpu.get()[:, n//2:]
    fimag = np.append(fimag_half, f65.imag.reshape(f65.size, 1), axis=1)
    fimag = np.append(fimag, -fimag_half[:, :0:-1], axis=1)

    return freal + 1j * fimag

ReconstructFFT2D_axesSplit_1(f_gpu.get(), f33_gpu.get()).shape

plt.imshow(ReconstructFFT2D_axesSplit_1(f_gpu.get(), f33_gpu.get()).real / float(size),
           extent=[-p_amplitude, p_amplitude - dp, -p_amplitude, p_amplitude - dp])
plt.colorbar()
axis_font = {'size': '24'}
plt.text(-3.0, 6.2, '$Re \\mathcal{F}(W)$', **axis_font)
plt.xlim(-p_amplitude, p_amplitude - dp)
plt.ylim(-p_amplitude, p_amplitude - dp)
plt.xlabel('$p_x$', **axis_font)
plt.ylabel('$p_y$', **axis_font)

plt.imshow(ReconstructFFT2D_axesSplit_1(f_gpu.get(), f33_gpu.get()).imag / float(size),
           extent=[-p_amplitude, p_amplitude - dp, -p_amplitude, p_amplitude - dp])
plt.colorbar()
plt.text(-3.0, 6.2, '$Imag \\mathcal{F}(W)$', **axis_font)
plt.xlim(-p_amplitude, p_amplitude - dp)
plt.ylim(-p_amplitude, p_amplitude - dp)
plt.xlabel('$p_x$', **axis_font)
plt.ylabel('$p_y$', **axis_font)

plt.figure(figsize=(10, 10))
plt.plot(p_range,
         ReconstructFFT2D_axesSplit_1(f_gpu.get(), f33_gpu.get())[32, :].real / float(size),
         'o-', label='Real')
plt.plot(p_range,
         ReconstructFFT2D_axesSplit_1(f_gpu.get(), f33_gpu.get())[32, :].imag / float(size),
         'ro-', label='Imag')
plt.xlabel('$p_x$', **axis_font)

plt.plot(p_range, 4 * fftGaussian(p_range, mu_x, sigma_x).real, 'bx')
plt.plot(p_range, 4 * fftGaussian(p_range, mu_x, sigma_x).imag, 'rx')
plt.legend(loc='upper left')
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Inverse Transform
# Executing iFFT
cuda_ifaft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta,
           segment_axes0, axes0, makeC2C, axesSplit_1)
cuda_ifaft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta,
           segment_axes1, axes1, makeC2R, axesSplit_1)

plt.imshow(f_gpu.get() / float(size)**2,
           extent=[-x_amplitude, x_amplitude - dx, -x_amplitude, x_amplitude - dx],
           origin='lower')
axis_font = {'size': '24'}
plt.text(0., 7.1, '$W$', **axis_font)
plt.colorbar()
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
User

- ~~Products purchased (total_distinct_items)~~
- ~~Orders made (nb_orders)~~
- ~~frequency and recency of orders (average_days_between_orders)~~
- Aisle purchased from
- Department purchased from
- ~~frequency and recency of reorders~~
- tenure
- ~~mean order size (average basket)~~
- etc.
usr = pd.DataFrame()
o_grouped = orders.groupby('user_id')
p_grouped = priors.groupby('user_id')

usr['average_days_between_orders'] = o_grouped.days_since_prior_order.mean()
usr['max_days_between_orders'] = o_grouped.days_since_prior_order.max()
usr['min_days_between_orders'] = o_grouped.days_since_prior_order.min()
usr['std_days_between_orders'] = o_grouped.days_since_prior_order.std()
usr["period"] = o_grouped.days_since_prior_order.fillna(0).sum()
usr['nb_orders'] = o_grouped.size().astype(np.int16)

users = pd.DataFrame()
users['total_items'] = p_grouped.size().astype(np.int16)
users['all_products'] = p_grouped['product_id'].apply(set)
users['total_distinct_items'] = (users.all_products.map(len)).astype(np.int16)
users['reorders'] = p_grouped["reordered"].sum()
users['reorder_rate'] = (users.reorders / usr.nb_orders)

users = users.join(usr)
del usr, o_grouped, p_grouped
users['average_basket'] = (users.total_items / users.nb_orders).astype(np.float32)
gc.collect()
print('user f', users.shape)

def merge_user_features(df):
    df['user_total_orders'] = df.user_id.map(users.nb_orders)
    df['user_total_items'] = df.user_id.map(users.total_items)
    df['user_total_distinct_items'] = df.user_id.map(users.total_distinct_items)
    df['user_average_days_between_orders'] = df.user_id.map(users.average_days_between_orders)
    df['user_max_days_between_orders'] = df.user_id.map(users.max_days_between_orders)
    df['user_min_days_between_orders'] = df.user_id.map(users.min_days_between_orders)
    df['user_std_days_between_orders'] = df.user_id.map(users.std_days_between_orders)
    df['user_average_basket'] = df.user_id.map(users.average_basket)
    df['user_period'] = df.user_id.map(users.period)
    return df
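The several per-user statistics above can also be computed in a single groupby pass with pandas named aggregation, which avoids repeating the grouping work. A sketch on toy data (column names mirror the real `orders` frame, but the values are made up):

```python
import numpy as np
import pandas as pd

# toy orders table standing in for the real one
orders = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "days_since_prior_order": [np.nan, 7.0, 14.0, np.nan, 3.0],
})

# one groupby pass instead of four separate .mean()/.max()/... calls
usr = orders.groupby("user_id")["days_since_prior_order"].agg(
    average_days_between_orders="mean",
    max_days_between_orders="max",
    min_days_between_orders="min",
    nb_orders="size",
)
print(usr.loc[1, "average_days_between_orders"])  # (7 + 14) / 2 = 10.5
```

Named aggregation requires pandas 0.25 or later.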
instacart/Model.ipynb
bataeves/kaggle
unlicense
Product

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean/std add_to_cart_order~~
- shelf life / typical time between repurchases
- etc.
prods = pd.DataFrame()
p_grouped = priors.groupby(priors.product_id)

prods['orders'] = p_grouped.size().astype(np.float32)
prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique()))
prods['users'] = p_grouped.user_id.unique().apply(len)
prods['add_to_cart_order_mean'] = p_grouped.add_to_cart_order.mean().astype(np.float32)
prods['add_to_cart_order_std'] = p_grouped.add_to_cart_order.std().astype(np.float32)
prods['reorders'] = p_grouped['reordered'].sum().astype(np.float32)
prods['reorder_rate'] = (prods.reorders / prods.orders).astype(np.float32)

products = products.join(prods, on='product_id')
products.set_index('product_id', drop=False, inplace=True)
del prods, p_grouped
gc.collect()

def merge_product_features(df):
    df['product_orders'] = df.product_id.map(products.orders).astype(np.float32)
    df['product_users'] = df.product_id.map(products.users).astype(np.float32)
    df['product_order_freq'] = df.product_id.map(products.order_freq).astype(np.float32)
    df['product_reorders'] = df.product_id.map(products.reorders).astype(np.float32)
    df['product_reorder_rate'] = df.product_id.map(products.reorder_rate).astype(np.float32)
    df['product_add_to_cart_order_mean'] = df.product_id.map(products.add_to_cart_order_mean).astype(np.float32)
    df['product_add_to_cart_order_std'] = df.product_id.map(products.add_to_cart_order_std).astype(np.float32)
    return df
instacart/Model.ipynb
bataeves/kaggle
unlicense
Aisle

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean/std add_to_cart_order~~
- shelf life / typical time between repurchases
- etc.
prods = pd.DataFrame()
p_grouped = priors.groupby(priors.aisle_id)

prods['orders'] = p_grouped.size().astype(np.float32)
prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique())).astype(np.float32)
prods['users'] = p_grouped.user_id.unique().apply(len).astype(np.float32)
prods['add_to_cart_order_mean'] = p_grouped.add_to_cart_order.mean().astype(np.float32)
prods['add_to_cart_order_std'] = p_grouped.add_to_cart_order.std().astype(np.float32)
prods['reorders'] = p_grouped['reordered'].sum().astype(np.float32)
prods['reorder_rate'] = (prods.reorders / prods.orders).astype(np.float32)

aisles.set_index('aisle_id', drop=False, inplace=True)
aisles = aisles.join(prods)
del prods, p_grouped

def merge_aisle_features(df):
    df['aisle_orders'] = df.aisle_id.map(aisles.orders)
    df['aisle_users'] = df.aisle_id.map(aisles.users)
    df['aisle_order_freq'] = df.aisle_id.map(aisles.order_freq)
    df['aisle_reorders'] = df.aisle_id.map(aisles.reorders)
    df['aisle_reorder_rate'] = df.aisle_id.map(aisles.reorder_rate)
    df['aisle_add_to_cart_order_mean'] = df.aisle_id.map(aisles.add_to_cart_order_mean)
    df['aisle_add_to_cart_order_std'] = df.aisle_id.map(aisles.add_to_cart_order_std)
    return df
instacart/Model.ipynb
bataeves/kaggle
unlicense
Department

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean add_to_cart_order~~
- shelf life / typical time between repurchases
- etc.
prods = pd.DataFrame()
p_grouped = priors.groupby(priors.department_id)

prods['orders'] = p_grouped.size().astype(np.float32)
prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique())).astype(np.float32)
prods['users'] = p_grouped.user_id.unique().apply(len).astype(np.float32)
prods['add_to_cart_order_mean'] = p_grouped.add_to_cart_order.mean().astype(np.float32)
prods['add_to_cart_order_std'] = p_grouped.add_to_cart_order.std().astype(np.float32)
prods['reorders'] = p_grouped['reordered'].sum().astype(np.float32)
prods['reorder_rate'] = (prods.reorders / prods.orders).astype(np.float32)

departments.set_index('department_id', drop=False, inplace=True)
departments = departments.join(prods)
del prods, p_grouped

def merge_department_features(df):
    df['department_orders'] = df.department_id.map(departments.orders)
    df['department_users'] = df.department_id.map(departments.users)
    df['department_order_freq'] = df.department_id.map(departments.order_freq)
    df['department_reorders'] = df.department_id.map(departments.reorders)
    df['department_reorder_rate'] = df.department_id.map(departments.reorder_rate)
    df['department_add_to_cart_order_mean'] = df.department_id.map(departments.add_to_cart_order_mean)
    df['department_add_to_cart_order_std'] = df.department_id.map(departments.add_to_cart_order_std)
    return df
instacart/Model.ipynb
bataeves/kaggle
unlicense
User Product Interaction (UP)

- ~~purchases (nb_orders)~~
- ~~purchases ratio~~
- ~~reorders~~
- ~~Average position in cart~~
- days since last purchase
- average/min/max days between purchases
- days since last purchase vs. shelf life
- ~~orders since last purchase (UP_orders_since_last)~~
- latest one/two/three/four week features
- etc.
%%cache userXproduct.pkl userXproduct
priors['z'] = priors.product_id + priors.user_id * 100000
d = dict()
for row in tqdm_notebook(priors.itertuples(), total=len(priors)):
    z = row.z
    if z not in d:
        d[z] = (1,
                (row.order_number, row.order_id),
                row.add_to_cart_order,
                row.reordered)
    else:
        d[z] = (d[z][0] + 1,
                max(d[z][1], (row.order_number, row.order_id)),
                d[z][2] + row.add_to_cart_order,
                d[z][3] + row.reordered)

priors.drop(['z'], axis=1, inplace=True)

print('to dataframe (less memory)')
userXproduct = pd.DataFrame.from_dict(d, orient='index')
del d
gc.collect()
userXproduct.columns = ['nb_orders', 'last_order_id', 'sum_pos_in_cart', 'reorders']
userXproduct.nb_orders = userXproduct.nb_orders.astype(np.int16)
userXproduct.last_order_id = userXproduct.last_order_id.map(lambda x: x[1]).astype(np.int32)
userXproduct.sum_pos_in_cart = userXproduct.sum_pos_in_cart.astype(np.int16)
userXproduct.reorders = userXproduct.reorders.astype(np.int16)
print('user X product f', len(userXproduct))

def merge_user_X_product_features(df):
    df['z'] = df.product_id + df.user_id * 100000
    df['UP_orders'] = df.z.map(userXproduct.nb_orders)
    df['UP_orders_ratio'] = (df.UP_orders / df.user_total_orders)
    df['UP_last_order_id'] = df.z.map(userXproduct.last_order_id)
    df['UP_average_pos_in_cart'] = (df.z.map(userXproduct.sum_pos_in_cart) / df.UP_orders)
    df['UP_reorders'] = df.z.map(userXproduct.reorders)
    df['UP_orders_since_last'] = df.user_total_orders - df.UP_last_order_id.map(orders.order_number)
    #df['UP_days_since_last'] =
    # df['UP_delta_hour_vs_last'] = abs(
    #     df.order_hour_of_day - df.UP_last_order_id.map(orders.order_hour_of_day)
    # ).map(lambda x: min(x, 24-x)).astype(np.int8)
    #df['UP_same_dow_as_last_order'] = df.UP_last_order_id.map(orders.order_dow) == \
    #                                  df.order_id.map(orders.order_dow)
    df.drop(['UP_last_order_id', 'z'], axis=1, inplace=True)
    return df
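Most of the row-by-row dictionary pass above can usually be replaced with a single groupby aggregation, which is far faster than iterating with `itertuples`. A hedged sketch on toy data (column names mirror the real `priors` frame; the last-order lookup is omitted for brevity):

```python
import pandas as pd

priors = pd.DataFrame({
    "user_id": [1, 1, 1],
    "product_id": [10, 10, 20],
    "order_id": [100, 101, 100],
    "order_number": [1, 2, 1],
    "add_to_cart_order": [3, 1, 2],
    "reordered": [0, 1, 0],
})

# one vectorized pass over (user, product) pairs
uXp = priors.groupby(["user_id", "product_id"]).agg(
    nb_orders=("order_id", "size"),
    sum_pos_in_cart=("add_to_cart_order", "sum"),
    reorders=("reordered", "sum"),
)
print(uXp.loc[(1, 10), "nb_orders"])  # user 1 bought product 10 twice
```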
instacart/Model.ipynb
bataeves/kaggle
unlicense
User aisle interaction (UA)

- purchases
- reorders
- days since last purchase
- average/min/max days between purchases
- orders since last purchase
- etc.

User department interaction (UD)

- purchases
- reorders
- days since last purchase
- average days between purchases
- orders since last purchase
- etc.

User time interaction (UT)

- user preferred day of week
- user preferred time of day
- similar features for products and aisles

Combine
### build list of candidate products to reorder, with features ###
train_index = set(op_train.index)

def features(selected_orders, labels_given=False):
    order_list = []
    product_list = []
    labels = []
    for row in tqdm_notebook(selected_orders.itertuples(), total=len(selected_orders)):
        order_id = row.order_id
        user_id = row.user_id
        user_products = users.all_products[user_id]
        product_list += user_products
        order_list += [order_id] * len(user_products)
        if labels_given:
            labels += [(order_id, product) in train_index for product in user_products]

    df = pd.DataFrame({'order_id': order_list, 'product_id': product_list})
    df.order_id = df.order_id.astype(np.int32)
    df.product_id = df.product_id.astype(np.int32)
    del order_list
    del product_list

    df['user_id'] = df.order_id.map(orders.user_id)
    df['aisle_id'] = df.product_id.map(products.aisle_id).astype(np.int8)
    df['department_id'] = df.product_id.map(products.department_id).astype(np.int8)
    labels = np.array(labels, dtype=np.int8)

    print('user related features')
    df = merge_user_features(df)

    print('order related features')
    df['dow'] = df.order_id.map(orders.order_dow)
    df['order_hour_of_day'] = df.order_id.map(orders.order_hour_of_day)
    df['days_since_prior_order'] = df.order_id.map(orders.days_since_prior_order)
    df['days_since_ratio'] = df.days_since_prior_order / df.user_average_days_between_orders

    print('product related features')
    df = merge_product_features(df)

    print('aisle related features')
    df = merge_aisle_features(df)

    print('department related features')
    df = merge_department_features(df)

    print('user_X_product related features')
    df = merge_user_X_product_features(df)

    return (df, labels)

# %%cache dataset.pkl df_train df_test labels
### train / test orders ###
print('split orders : train, test')
test_orders = orders[orders.eval_set == 2]
train_orders = orders[orders.eval_set == 1]

df_train, labels = features(train_orders, labels_given=True)
del train_orders
gc.collect()

df_test, _ = features(test_orders, labels_given=False)
del test_orders
gc.collect()
instacart/Model.ipynb
bataeves/kaggle
unlicense
Train
f_to_use = ['user_total_orders', 'user_total_items', 'user_total_distinct_items',
            'user_average_days_between_orders', 'user_average_basket',
            'order_hour_of_day', 'days_since_prior_order', 'days_since_ratio',
            'aisle_id', 'department_id', 'product_orders', 'product_reorders',
            'product_reorder_rate', 'UP_orders', 'UP_orders_ratio',
            'UP_average_pos_in_cart', 'UP_reorders', 'UP_orders_since_last',
            'UP_delta_hour_vs_last']

def feature_select(df):
    # return df[f_to_use]
    return df.drop(["user_id", "order_id", "product_id"], axis=1, errors="ignore")

params = {
    'task': 'train',
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': {'binary_logloss'},
    'num_leaves': 96,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.95,
    'bagging_freq': 5
}
ROUNDS = 98

def train(traindf, y):
    d_train = lgb.Dataset(feature_select(traindf),
                          label=y,
                          categorical_feature=['aisle_id', 'department_id'])
    model = lgb.train(params, d_train, ROUNDS)
    return model

model = train(df_train, labels)
instacart/Model.ipynb
bataeves/kaggle
unlicense
Predict
def predict(model, df_test, TRESHOLD=0.19):
    ### build candidates list for test ###
    df_test['pred'] = model.predict(feature_select(df_test))
    # TODO: add https://www.kaggle.com/mmueller/f1-score-expectation-maximization-in-o-n/code
    d = dict()
    for row in df_test.itertuples():
        # Here we could cut not at a fixed threshold, but instead use a model
        # to predict the number of purchases per order
        if row.pred > TRESHOLD:
            try:
                d[row.order_id] += ' ' + str(row.product_id)
            except KeyError:
                d[row.order_id] = str(row.product_id)

    for order_id in df_test.order_id:
        if order_id not in d:
            d[order_id] = 'None'

    sub = pd.DataFrame.from_dict(d, orient='index')
    sub.reset_index(inplace=True)
    sub.columns = ['order_id', 'products']
    return sub

sub = predict(model, df_test)
sub.to_csv('sub.csv', index=False)
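The dictionary loop that assembles the submission can also be written as a groupby on the thresholded rows. A sketch with hypothetical toy predictions (the real frame has many more rows):

```python
import pandas as pd

TRESHOLD = 0.19
df_test = pd.DataFrame({
    "order_id": [1, 1, 2],
    "product_id": [11, 12, 13],
    "pred": [0.5, 0.1, 0.3],
})

# keep only confident rows, then join product ids per order
kept = df_test[df_test.pred > TRESHOLD]
grouped = kept.groupby("order_id").product_id.apply(
    lambda s: " ".join(map(str, s)))

# orders with no kept products get the literal 'None'
all_orders = pd.Index(df_test.order_id.unique(), name="order_id")
sub = (grouped.reindex(all_orders, fill_value="None")
              .rename("products").reset_index())
print(sub.loc[0, "products"])  # "11"
```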
instacart/Model.ipynb
bataeves/kaggle
unlicense
CV https://www.kaggle.com/happycube/validation-demo-325-cv-3276-lb/notebook
lgb.cv(params, d_train, ROUNDS, nfold=5, verbose_eval=10)

%%cache df_train_gt.pkl df_train_gt
from functools import partial

products_raw = pd.read_csv(IDIR + 'products.csv')
# combine aisles, departments and products (left joined to products)
goods = pd.merge(left=pd.merge(left=products_raw, right=departments, how='left'),
                 right=aisles, how='left')
# to retain '-' and make product names more "standard"
goods.product_name = goods.product_name.str.replace(' ', '_').str.lower()

# retype goods to reduce memory usage
goods.product_id = goods.product_id.astype(np.int32)
goods.aisle_id = goods.aisle_id.astype(np.int16)
goods.department_id = goods.department_id.astype(np.int8)

# initialize it with train dataset
train_details = pd.merge(
    left=op_train,
    right=orders,
    how='left',
    on='order_id'
).apply(partial(pd.to_numeric, errors='ignore', downcast='integer'))

# add order hierarchy
train_details = pd.merge(
    left=train_details,
    right=goods[['product_id', 'aisle_id', 'department_id']].apply(
        partial(pd.to_numeric, errors='ignore', downcast='integer')),
    how='left',
    on='product_id'
)

train_gtl = []
for uid, subset in train_details.groupby('user_id'):
    subset1 = subset[subset.reordered == 1]
    oid = subset.order_id.values[0]
    if len(subset1) == 0:
        train_gtl.append((oid, 'None'))
        continue
    ostr = ' '.join([str(int(e)) for e in subset1.product_id.values])
    # .strip is needed because join can have a padding space at the end
    train_gtl.append((oid, ostr.strip()))

del train_details
del goods
del products_raw
gc.collect()

df_train_gt = pd.DataFrame(train_gtl)
df_train_gt.columns = ['order_id', 'products']
df_train_gt.set_index('order_id', inplace=True)
df_train_gt.sort_index(inplace=True)

from sklearn.model_selection import GroupKFold

def f1_score(cvpred):
    joined = df_train_gt.join(cvpred, rsuffix="_cv", how="inner")
    lgts = joined.products.replace("None", "-1").apply(lambda x: x.split(" ")).values
    lpreds = joined.products_cv.replace("None", "-1").apply(lambda x: x.split(" ")).values
    f1 = []
    for lgt, lpred in zip(lgts, lpreds):
        rr = np.intersect1d(lgt, lpred)
        precision = float(len(rr)) / len(lpred)
        recall = float(len(rr)) / len(lgt)
        denom = precision + recall
        f1.append(((2 * precision * recall) / denom) if denom > 0 else 0)
    return np.mean(f1)

def cv(threshold=0.22):
    gkf = GroupKFold(n_splits=5)
    scores = []
    for train_idx, test_idx in gkf.split(df_train.index, groups=df_train.user_id):
        dftrain = df_train.iloc[train_idx]
        dftest = df_train.iloc[test_idx]
        y = labels[train_idx]
        model = train(dftrain, y)
        pred = predict(model, dftest, threshold).set_index("order_id")
        f1 = f1_score(pred)
        print(f1)
        scores.append(f1)
        del dftrain
        del dftest
        gc.collect()
    return np.mean(scores), np.std(scores)

cv()
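The per-order score used above is the standard set-based F1 (harmonic mean of precision and recall over predicted vs. true product sets). A self-contained toy check of that formula:

```python
import numpy as np

def order_f1(truth, pred):
    """Set-based F1 for one order: products are compared as sets of ids."""
    rr = np.intersect1d(truth, pred)
    precision = len(rr) / len(pred)
    recall = len(rr) / len(truth)
    denom = precision + recall
    return (2 * precision * recall / denom) if denom > 0 else 0.0

# 2 of 3 predictions correct and 2 of 3 truths recovered -> F1 = 2/3
score = order_f1(["1", "2", "3"], ["2", "3", "4"])
print(score)
```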
instacart/Model.ipynb
bataeves/kaggle
unlicense
Tuning the threshold
for th in np.arange(0.18, 0.22, 0.01):
    print(th)
    print(cv(threshold=th))
    print()
instacart/Model.ipynb
bataeves/kaggle
unlicense
A model for predicting the number of items per order
prior_orders_count = priors[["order_id", "product_id"]].groupby("order_id").count()
prior_orders_count = prior_orders_count.rename(columns={"product_id": "product_counts"})

train_orders_count = op_train.drop(["product_id", "order_id"], axis=1, errors="ignore")
train_orders_count = train_orders_count.reset_index()[["order_id", "product_id"]].groupby("order_id").count()
train_orders_count = train_orders_count.rename(columns={"product_id": "product_counts"})

prior_orders_count = orders.join(prior_orders_count, how='inner')
train_orders_count = orders.join(train_orders_count, how='inner')
prior_orders_count.head(15)

from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error

def get_order_count(order, alpha=0.5):
    user_id = order["user_id"]
    df = prior_orders_count[prior_orders_count["user_id"] == user_id]
    feats = ["order_number", "order_dow", "order_hour_of_day", "days_since_prior_order"]
    X = df[feats].fillna(0).values
    y = df["product_counts"].values

    # create dataset for lightgbm
    # lgb_train = lgb.Dataset(X, y)
    # params = {
    #     'task': 'train',
    #     'boosting_type': 'gbdt',
    #     'objective': 'regression',
    #     'metric': {'rmse'},
    #     'num_leaves': 100,
    #     'learning_rate': 0.01,
    #     'feature_fraction': 0.9,
    #     'bagging_fraction': 0.8,
    #     'bagging_freq': 5,
    #     'verbose': 0,
    # }
    # clf = lgb.train(params, lgb_train, num_boost_round=40)

    xgb_params = {
        'max_depth': 5,
        'n_estimators': 200,
        'learning_rate': 0.05,
        'objective': 'reg:linear',
        'eval_metric': 'rmse',
        'silent': 1
    }
    dtrain_all = xgb.DMatrix(X, y)
    clf = xgb.train(xgb_params, dtrain_all, num_boost_round=400)

    # clf = Lasso(alpha=0.01)
    # clf.fit(X, y)

    Xpred = np.array([order[f] or 0 for f in feats]).reshape(1, -1)
    Xpred = np.nan_to_num(Xpred, nan=0)
    Xpred = xgb.DMatrix(Xpred)
    return clf.predict(Xpred)[0]

df = train_orders_count.head(10000)
df["pred_products_count"] = df.apply(get_order_count, axis=1)
print(mean_squared_error(df["product_counts"], df["pred_products_count"]))
instacart/Model.ipynb
bataeves/kaggle
unlicense
Let's look at the tables and columns we have for analysis. Please query the INFORMATION_SCHEMA. Learning objective 1.
%%with_globals
%%bigquery --project {PROJECT}
SELECT table_name, column_name, data_type
FROM `asl-ml-immersion.stock_src.INFORMATION_SCHEMA.COLUMNS`
ORDER BY table_name, ordinal_position
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Price History TODO: Visualize stock symbols from the dataset.
%%with_globals
%%bigquery --project {PROJECT}
SELECT * FROM `asl-ml-immersion.stock_src.price_history`
LIMIT 10

def query_stock(symbol):
    return bq.query('''
    # TODO: query a specific stock
    '''.format(symbol)).to_dataframe()

df_stock = query_stock('GOOG')
df_stock.Date = pd.to_datetime(df_stock.Date)
ax = df_stock.plot(x='Date', y='Close', title='price')

# Add smoothed plot.
df_stock['Close_smoothed'] = df_stock.Close.rolling(100, center=True).mean()
df_stock.plot(x='Date', y='Close_smoothed', ax=ax);
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
TODO 2: Compare individual stocks to the S&P 500.
SP500_SYMBOL = 'gspc'
df_sp = query_stock(SP500_SYMBOL)

# TODO: visualize S&P 500 price
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Let's see how the price of stocks change over time on a yearly basis. Using the LAG function we can compute the change in stock price year-over-year. Let's compute average close difference for each year. This line could, of course, be done in Pandas. Often times it's useful to use some combination of BigQuery and Pandas for exploration analysis. In general, it's most effective to let BigQuery do the heavy-duty processing and then use Pandas for smaller data and visualization. Learning objective 1, 2
%%with_globals
%%bigquery --project {PROJECT} df
WITH with_year AS
(
    SELECT
        symbol,
        EXTRACT(YEAR FROM date) AS year,
        close
    FROM
        `asl-ml-immersion.stock_src.price_history`
    WHERE
        symbol in (SELECT symbol FROM `asl-ml-immersion.stock_src.snp500`)
),
year_aggregated AS
(
    SELECT
        year,
        symbol,
        AVG(close) as avg_close
    FROM with_year
    WHERE year >= 2000
    GROUP BY year, symbol
)
SELECT
    year,
    symbol,
    avg_close as close,
    (LAG( --# TODO: compute a year lag on avg_close
    )) AS next_yr_close
FROM year_aggregated
ORDER BY symbol, year
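The windowed lag used in the query has a direct pandas equivalent: `groupby(...).shift(-1)` looks one row ahead within each symbol once the frame is sorted by year. A small sketch on made-up prices:

```python
import pandas as pd

df = pd.DataFrame({
    "symbol": ["IBM", "IBM", "IBM", "GOOG", "GOOG"],
    "year": [2000, 2001, 2002, 2001, 2002],
    "close": [100.0, 110.0, 99.0, 50.0, 60.0],
})

df = df.sort_values(["symbol", "year"])
# next year's close within each symbol; the last year per symbol gets NaN
df["next_yr_close"] = df.groupby("symbol")["close"].shift(-1)
print(df.loc[df.symbol.eq("IBM") & df.year.eq(2000), "next_yr_close"].item())  # 110.0
```

In practice it is still usually best to let BigQuery compute this over the full table and pull only the aggregated result into pandas.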
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Stock splits can also impact our data - causing a stock price to rapidly drop. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of IBM, for example, all stock splits occurred before the year 2000. Learning objective 2 TODO: Query the IBM stock history and to visualize how the stock splits affect our data. A stock split occurs when there is a sudden drop in price.
stock_symbol = 'IBM'

%%with_globals
%%bigquery --project {PROJECT} df
SELECT
    date,
    close
FROM
    `asl-ml-immersion.stock_src.price_history`
WHERE
    symbol='{stock_symbol}'
ORDER BY
    date

# TODO: can you visualize when the major stock splits occurred?
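One quick way to flag likely split dates is to look for single-day relative drops far beyond normal volatility. A hedged sketch on synthetic prices (the −40% cutoff is an illustrative choice, not a rule from the lab):

```python
import pandas as pd

close = pd.Series([100.0, 101.0, 50.5, 51.0, 51.5])  # a 2:1 split after day 1
ret = close.pct_change()
# a drop of roughly 50% in one step is far outside normal daily moves
suspected_splits = ret[ret < -0.4].index.tolist()
print(suspected_splits)  # [2]
```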
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Making Predictions First let's generate our own sentences to see how the model classifies them.
def string_to_array(s, separator=' '):
    return s.split(separator)

def generate_data_row(sentence, label, max_length):
    sequence = np.zeros((max_length), dtype='int32')
    for i, word in enumerate(string_to_array(sentence)):
        sequence[i] = word_list.index(word)
    return sequence, label, max_length

def generate_data(sentences, labels, max_length):
    data = []
    for s, l in zip(sentences, labels):
        data.append(generate_data_row(s, l, max_length))
    return np.asarray(data)

sentences = ['i thought the movie was incredible and inspiring',
             'this is a great movie',
             'this is a good movie but isnt the best',
             'it was fine i guess',
             'it was definitely bad',
             'its not that bad',
             'its not that bad i think its a good movie',
             'its not bad i think its a good movie']
labels = [[1, 0], [1, 0], [1, 0], [0, 1], [0, 1], [1, 0], [1, 0], [1, 0]]
# [1, 0]: positive, [0, 1]: negative

my_test_data = generate_data(sentences, labels, 10)

estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=tf.contrib.learn.RunConfig(model_dir='tensorboard/batch_32'))

preds = estimator.predict(input_fn=get_input_fn(my_test_data, 1, 1, shuffle=False))

print()
for p, s in zip(preds, sentences):
    print('sentence:', s)
    print('good review:', p[0], 'bad review:', p[1])
    print('-' * 10)
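As written, `generate_data_row` raises `ValueError` on any word missing from `word_list` and overruns `sequence` for sentences longer than `max_length`. A more defensive variant, as a hedged sketch (the function name and the convention that index 0 means unknown are assumptions, not part of the original workshop code):

```python
import numpy as np

def encode_sentence(sentence, word_list, max_length, unk_index=0):
    """Map words to vocabulary indices, truncating long sentences and
    falling back to unk_index for out-of-vocabulary words."""
    sequence = np.zeros(max_length, dtype="int32")
    for i, word in enumerate(sentence.split()[:max_length]):
        try:
            sequence[i] = word_list.index(word)
        except ValueError:
            sequence[i] = unk_index  # word not in vocabulary
    return sequence

vocab = ["<unk>", "good", "movie"]
print(encode_sentence("good movie is unseen", vocab, 3))  # [1 2 0]
```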
code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-New-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Data CoTeDe comes with a few datasets for demonstration. Here we will use a CTD cast from the PIRATA hydrographic collection, i.e. measurements from the Tropical Atlantic Ocean. If curious about this data, check CoTeDe's documentation for more details. Let's start by loading this dataset.
data = cotede.datasets.load_ctd()

print("There is a total of {} observed levels.\n".format(len(data["TEMP"])))
print("The variables are: ", list(data.keys()))
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
This CTD was equipped with backup sensors to provide more robustness. Measurements from the secondary sensor are identified by a 2 at the end of the name, such as "TEMP2" for the secondary temperature sensor. Let's focus on the primary sensors. To visualize this profile we will use Bokeh, so you can explore the data and results. For instance, zoom in on the following plot to better see the profiles of temperature and salinity.
p1 = figure(plot_width=420, plot_height=600)
p1.circle(data['TEMP'], -data['PRES'], size=8,
          line_color="seagreen", fill_color="mediumseagreen", fill_alpha=0.3)
p1.xaxis.axis_label = "Temperature [C]"
p1.yaxis.axis_label = "Depth [m]"

p2 = figure(plot_width=420, plot_height=600)
p2.y_range = p1.y_range
p2.circle(data['PSAL'], -data['PRES'], size=8,
          line_color="seagreen", fill_color="mediumseagreen", fill_alpha=0.3)
p2.xaxis.axis_label = "Salinity"
p2.yaxis.axis_label = "Depth [m]"

p = row(p1, p2)
show(p)
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
Considering the unusual magnitudes and variability near the bottom, there are clearly bad measurements in this profile. Let's start with a traditional QC, and then we'll include the Anomaly Detection.
Traditional QC with CoTeDe framework
NOTE: If you are not familiar with CoTeDe, it might be helpful to first check the profile_CTD notebook and come back after that.
Let's start with the procedure recommended by EuroGOOS for non-realtime data, which includes the climatology test comparison with the World Ocean Atlas (WOA). If interested, check CoTeDe's manual for more details on the EuroGOOS recommendations, including the reference.
pqc = cotede.ProfileQC(data, cfg="eurogoos")
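One of the checks in procedures like the EuroGOOS recommendations is a spike test, which compares each measurement with its two vertical neighbors. Below is a minimal, self-contained numpy sketch of that idea; it is an illustration, not CoTeDe's implementation, and the 2.0 threshold is an arbitrary value chosen for the example:

```python
import numpy as np

def spike(x):
    """Spike amplitude for each interior point of a profile.

    spike_i = |x_i - (x_{i-1} + x_{i+1}) / 2| - |(x_{i+1} - x_{i-1}) / 2|
    The two endpoints have no pair of neighbors and are returned as NaN.
    """
    s = np.full_like(x, np.nan, dtype=float)
    s[1:-1] = (np.abs(x[1:-1] - (x[:-2] + x[2:]) / 2.0)
               - np.abs((x[2:] - x[:-2]) / 2.0))
    return s

# Synthetic temperature profile with one obvious spike at index 3.
temp = np.array([25.3, 25.2, 25.1, 18.0, 24.9, 24.8])
flagged = spike(temp) > 2.0  # hypothetical threshold for this example
print(flagged)
```

A point sitting far from the midpoint of its neighbors, relative to the local gradient, gets a large spike value; in the example only the 18.0 reading exceeds the threshold.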
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
That's it: the temperature and salinity from the primary and secondary sensors were all evaluated. Which criteria were flagged for the primary sensors?
print("Flags for temperature:\n {}\n".format(list(pqc.flags["TEMP"].keys())))
print("Flags for salinity:\n {}\n".format(list(pqc.flags["PSAL"].keys())))
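CoTeDe reports its flags following the IOC convention (1 = good, 2 = probably good, 3 = probably bad, 4 = bad, 9 = missing value). A self-contained sketch of summarizing such flags with numpy, using made-up flag values rather than this profile's actual output:

```python
import numpy as np
from collections import Counter

# Hypothetical per-level flags in the IOC convention:
# 1 = good, 3 = probably bad, 4 = bad, 9 = missing value.
flags = np.array([1, 1, 1, 3, 1, 4, 4, 9, 1, 1])

counts = Counter(flags.tolist())
n_good = counts.get(1, 0)
n_bad = counts.get(3, 0) + counts.get(4, 0)
print("good: {}, bad: {}, missing: {}".format(n_good, n_bad, counts.get(9, 0)))

# A common downstream step: keep only the measurements flagged good.
good_mask = flags == 1
```

Masking with `good_mask` on the corresponding data array is how one would drop the rejected levels before analysis.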
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause