Made a mistake on the last loop above. The penultimate batch -- the last full 4096-image batch -- was added onto the end of the predictions array twice. The final 2194 image predictions were never run. Easy enough to fix: modify the above code to work perfectly. Then either: * create entirely new predictions from scrat...
print(81920 - 79726)
print(79726 % 4096)
print(81920 % 4096)  # <-- that's yeh problem right there, kid

x = preds[len(preds) - 4096]
print(preds[-1])
print(x)
preds[0]

# ??image.ImageDataGenerator.flow_from_directory
# ??Sequential.predict()
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Redoing predictions here:
fname = path + 'results/conv_test_feat.dat'
idx, inc = 4096, 4096
preds = []
conv_test_feat = bcolz.open(fname)[:idx]
preds = bn_model.predict(conv_test_feat, batch_size=batch_size, verbose=0)
while idx < test_batches.n - inc:
    conv_test_feat = bcolz.open(fname)[idx:idx+inc]
    idx += inc
    next_preds = bn_model...
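A corrected chunking loop can be sketched in plain Python; `predict_in_chunks` and the lambda below are illustrative stand-ins for `bn_model.predict`, not the author's exact fix. The point is that `range()` naturally yields a final short slice, so the last partial batch is predicted exactly once, never skipped or doubled:

```python
import numpy as np

def predict_in_chunks(data, predict, chunk=4096):
    """Run predict() over `data` in fixed-size chunks and concatenate.

    The last slice may be shorter than `chunk`; it is still predicted
    exactly once.
    """
    preds = [predict(data[start:start + chunk])
             for start in range(0, len(data), chunk)]
    return np.concatenate(preds)

# Toy check: 79726 "images" = 19 full 4096-chunks plus a 1902-image remainder.
out = predict_in_chunks(np.arange(79726), lambda x: x * 2)
assert out.shape == (79726,)
```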
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Oh I forgot, predictions through a FC NN are fast. CNNs are where it takes a long time. This is just quick testing that it works. Full/polished will be in the reworked statefarm-codealong (or just statefarm) JNB:
def do_clip(arr, mx):
    return np.clip(arr, (1-mx)/9, mx)

subm = do_clip(preds, 0.93)
subm_name = path + 'results/subm01.gz'
trn_batches = get_batches(path + 'train', batch_size=batch_size, shuffle=False)
# make sure training batches defined before this:
classes = sorted(trn_batches.class_indices, key=trn_batches.cla...
FAI_old/conv_test_Asus.ipynb
WNoxchi/Kaukasos
mit
Configuring logging: save DEBUG to a file and INFO to stdout
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

fileHandler = logging.FileHandler("debug.log", 'a')
fileHandler.setLevel(logging.DEBUG)
fileHandler.setFormatter(formatter)
log.addHandler(fileHandler)

cformat = logging.Formatte...
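For reference, a complete minimal version of this split-handler setup might look like the following; the logger name, file name, and messages are illustrative. The logger itself passes everything through, and each handler filters at its own level:

```python
import logging
import sys

log = logging.getLogger("demo")
log.setLevel(logging.DEBUG)  # the logger passes everything; handlers filter

formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# DEBUG and above go to the file
file_handler = logging.FileHandler("debug.log", 'a')
file_handler.setLevel(logging.DEBUG)
file_handler.setFormatter(formatter)
log.addHandler(file_handler)

# INFO and above go to stdout
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setLevel(logging.INFO)
console_handler.setFormatter(formatter)
log.addHandler(console_handler)

log.debug("file only")
log.info("file and stdout")
```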
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Getting resources

IoT-LAB provider configuration: reserve M3 nodes in the saclay site.

Note: it uses the following M3 images: border-router.iotlab-m3 and er-example-server.iotlab-m3. More details on how to generate these images in: https://www.iot-lab.info/legacy/tutorials/contiki-coap-m3/index.html
job_name = "iotlab_g5k-ipv6"

iotlab_dict = {
    "walltime": "01:00",
    "job_name": job_name,
    "resources": {
        "machines": [
            {
                "roles": ["border_router"],
                "archi": "m3:at86rf231",
                "site": "saclay",
                "number": 1,
                "image"...
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grid'5000 provider configuration: reserve nodes in Grenoble
g5k_dict = {
    "job_type": "allow_classic_ssh",
    "job_name": job_name,
    "resources": {
        "machines": [
            {
                "roles": ["client"],
                "cluster": "yeti",
                "nodes": 1,
                "primary_network": "default",
                "secondary_networks": [],
                ...
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
We still need a Static provider to interact with the IoT-LAB frontend machine
import iotlabcli.auth

iotlab_user, _ = iotlabcli.auth.get_user_credentials()

iotlab_frontend_conf = (
    StaticConf()
    .add_machine(
        roles=["frontend"],
        address="saclay.iot-lab.info",
        alias="saclay",
        user=iotlab_user
    )
    .finalize()
)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
IoT-LAB: getting resources
iotlab_provider = Iotlab(iotlab_conf)
iotlab_roles, _ = iotlab_provider.init()
print(iotlab_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grid'5000: getting resources
g5k_provider = G5k(g5k_conf)
g5k_roles, g5knetworks = g5k_provider.init()
print(g5k_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Static: getting resources
frontend_provider = Static(iotlab_frontend_conf)
frontend_roles, _ = frontend_provider.init()
print(frontend_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Configuring network connectivity Enabling IPv6 on Grid'5000 nodes (https://www.grid5000.fr/w/IPv6)
result = run_command("dhclient -6 br0", roles=g5k_roles)
result = run_command("ip address show dev br0", roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Starting the tunslip command on the frontend. We redirect the tunslip output to a file so we can read it later.
iotlab_ipv6_net = "2001:660:3207:4c0::"
tun_cmd = "sudo tunslip6.py -v2 -L -a %s -p 20000 %s1/64 > tunslip.output 2>&1" % (
    iotlab_roles["border_router"][0].alias, iotlab_ipv6_net)
result = run_command(tun_cmd, roles=frontend_roles, asynch=3600, poll=0)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Resetting the border router
iotlab_roles["border_router"][0].reset()
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Get the Border Router IPv6 address from tunslip output
result = run_command("cat tunslip.output", roles=frontend_roles)
print(result['ok'])

import re

out = result['ok']['saclay']['stdout']
print(out)
match = re.search(rf'Server IPv6 addresses:\n.+({iotlab_ipv6_net}\w{{4}})',
                  out, re.MULTILINE | re.DOTALL)
br_ipv6 = match.groups()[0]
print("Border Router IPv6 address from tu...
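The regex can be exercised against a fabricated tunslip6 banner; the sample text below is made up for illustration and real output may differ, but it shows how the pattern plucks the first address on the IoT-LAB prefix:

```python
import re

iotlab_ipv6_net = "2001:660:3207:4c0::"

# Fabricated tunslip6 banner, for illustration only.
sample_output = (
    "Server IPv6 addresses:\n"
    " 2001:660:3207:4c0::9db9\n"
)

match = re.search(rf'Server IPv6 addresses:\n.+({iotlab_ipv6_net}\w{{4}})',
                  sample_output, re.MULTILINE | re.DOTALL)
br_ipv6 = match.groups()[0]
assert br_ipv6 == "2001:660:3207:4c0::9db9"
```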
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Checking ping from Grid'5000 to border router node
result = run_command("ping6 -c3 %s" % br_ipv6, pattern_hosts="client*", roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Installing and using CoAP clients

Install the aiocoap client and lynx on the Grid'5000 nodes
with play_on(roles=g5k_roles) as p:
    p.apt(name=["python3-aiocoap", "lynx"], state="present")
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Grab the CoAP server node’s IPv6 address from the BR’s web interface
result = run_command("lynx -dump http://[%s]" % br_ipv6, roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
From the CoAP server, GET the light sensor reading
out = result['ok'][g5k_roles["client"][0].address]['stdout']
print(out)

match = re.search(r'fe80::(\w{4})', out, re.MULTILINE | re.DOTALL)
node_uid = match.groups()[0]
print(node_uid)

result = run_command("aiocoap-client coap://[%s%s]:5683/sensors/light" % (iotlab_ipv6_net, node_uid), roles=g5k_roles)
print(result['ok']...
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
GET pressure for the same sensor
result = run_command("aiocoap-client coap://[%s%s]:5683/sensors/pressure" % (iotlab_ipv6_net, node_uid), roles=g5k_roles)
print(result['ok'])
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Clean-up phase

Stop tunslip on the frontend node
result = run_command("pgrep tunslip6 | xargs kill", roles=frontend_roles)
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Destroy jobs in testbeds
g5k_provider.destroy()
iotlab_provider.destroy()
docs/tutorials/iotlab/tuto_iotlab_g5k_ipv6.ipynb
BeyondTheClouds/enoslib
gpl-3.0
Now we make some plots and calculate some figures. For example, here's how the correlation (Pearson's r) changes by game number:
# Drop short season and current season, and get correlations
data2 = data[(data.Season != 2012) & (data.Season != 2017)].dropna()
corrs = data2.drop({'Season', 'Team'}, axis=1).groupby('GameNum').corr().drop('ROY_CF%', axis=1)
corrs = corrs[corrs['YTD_CF%'] < 1]
corrs = corrs.reset_index().drop('level_1', axis=1).renam...
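The quantity being tracked for each game number is just Pearson's r between season-to-date and rest-of-year CF%. A minimal numpy sketch on synthetic data (all numbers made up; `np.corrcoef` stands in for the pandas `.corr()` call):

```python
import numpy as np

rng = np.random.default_rng(0)
ytd = rng.uniform(0.4, 0.6, size=200)            # fake season-to-date CF%
noise = rng.normal(0, 0.01, size=200)
roy = 0.5 + 0.5 * (ytd - 0.5) + noise           # fake rest-of-year CF%, partly regressed

# Pearson's r is the off-diagonal entry of the 2x2 correlation matrix.
r = np.corrcoef(ytd, roy)[0, 1]
assert 0.0 < r < 1.0
```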
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
Here's how the slope changes by game:
# Now look at the slope
from scipy.stats import linregress

def get_slope(df):
    m, b, r, p, e = linregress(df['YTD_CF%'], df['ROY_CF%'])
    return m

plot(data2.groupby('GameNum').apply(get_slope), label='Slope')
xlabel('Game Number')
plot(corrs.r, label='R')
title('Correlation between YTD CF% and ROY CF%')
lege...
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
You'll note that although the predictivity (as measured by r) is best at 40 games, the slope is still not 1, meaning we still expect some regression to the mean. You can see that in the scatterplot below (with the best-fit and 1:1 lines included for reference):
tmp = data2[data2.GameNum == 40]
x = 'YTD_CF%'
y = 'ROY_CF%'

scatter(tmp[x], tmp[y], label='_nolegend')
xlabel('Season to date CF%')
ylabel('Rest of season CF%')
title('YTD and ROY CF% at game 40')

m, b, r, p, e = linregress(tmp[x], tmp[y])
xs = arange(0, 1, 0.01)
ys = m * xs + b
xlimits = xlim()
ylimits = ylim()
pl...
examples/YTD and ROY CF%.ipynb
muneebalam/scrapenhl2
mit
Vertex SDK: AutoML training text classification model for online prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_automl_text_classification_online.ipynb"> <img src="https://cloud.google.com...
import os

# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
    USER_FLAG = "--user"
else:
    USER_FLAG = ""

! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/community/sdk/sdk_automl_text_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send an online prediction request

Send an online prediction request to your deployed model.

Get test item

You will use an arbitrary example from the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
test_item = ! gsutil cat $IMPORT_FILE | head -n1

# Split on commas; the line has either two fields (item, label)
# or three (an extra leading field before item and label).
fields = str(test_item[0]).split(",")
if len(fields) == 3:
    _, test_item, test_label = fields
else:
    test_item, test_label = fields
print(test_item, test_label)
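The field-splitting logic can be exercised on both line shapes with hypothetical sample lines (the bucket paths and labels below are made up):

```python
def split_test_item(line):
    """Split a CSV line that has either (item, label) fields
    or an extra leading field before them."""
    fields = line.split(",")
    if len(fields) == 3:
        _, item, label = fields
    else:
        item, label = fields
    return item, label

# Two-field and three-field shapes both resolve to (item, label).
assert split_test_item("gs://bucket/doc.txt,happy") == ("gs://bucket/doc.txt", "happy")
assert split_test_item("TEST,gs://bucket/doc.txt,happy") == ("gs://bucket/doc.txt", "happy")
```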
notebooks/community/sdk/sdk_automl_text_classification_online.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
We're going to use one convolutional layer. To avoid config-fu with caffe, the data will go into an LMDB store. But first, let's clean up the environment to avoid costly debugging.
rm -r digits_train_lmdb digits_test_lmdb
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Caffe uses a fairly simple key/value format. The code below converts the sklearn dataset into LMDB.
import lmdb
import sys
sys.path.append('./caffe/python')
import caffe

env = lmdb.open('digits_train_lmdb', map_size=1000000000L)

k = 0
for i in range(X_train.shape[0]):
    x_in = X_train[i].reshape((8, 8)).astype(np.uint8)

    # add original digit
    datum = caffe.proto.caffe_pb2.Datum()
    datum.c...
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
We're going to define the solver parameters and network structure here. The original MNIST reference net included with caffe had two convolutional layers. Given the small size of the input picture (8x8), we're only going to use one convolutional layer.
solver_txt = """
# The train/test net protocol buffer definition
net: "digits_train_test.prototxt"
# test_iter specifies how many forward passes the test should carry out.
# the batch is 64 images
test_iter: 100
# Carry out testing every 100 training iterations.
test_interval: 100
# The base learning rate, momentum and...
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Let's start the training using GPU.
!./caffe/build/tools/caffe train -solver digits_solver.prototxt 2> digits_train.log
!python ./caffe/tools/extra/parse_log.py digits_train.log .

import pandas as pd

# columns: NumIters,Seconds,TrainingLoss,LearningRate
df_train = pd.read_csv('digits_train.log.train')

# columns: NumIters,Seconds,TestAccuracy,TestLos...
Classification_With_NN_1.ipynb
marcino239/notebooks
gpl-2.0
Time-varying Convex Optimization

Copyright 2018 Google LLC.

Contents: Install Dependencies, Time Varying Semi-definite Programs, Some Polynomial Tools, Examples: To Add.

Install Dependencies
!pip install cvxpy
!pip install sympy

import numpy as np
import scipy as sp
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Time Varying Semi-definite Programs

The TV-SDP framework for CVXPY imposes constraints of the form $$A(t) \succeq 0 \; \forall t \in [0, 1],$$ where $A(t)$ is a polynomial symmetric matrix, i.e. a symmetric matrix whose entries are polynomial functions of time, and $A(t) \succeq 0$ means that all the eigenva...
def _mult_poly_matrix_poly(p, mat_y):
  """Multiplies the polynomial matrix mat_y by the polynomial p entry-wise.

  Args:
    p: list of size d1+1 representing the polynomial sum p[i] t^i.
    mat_y: (m, m, d2+1) tensor representing a polynomial matrix
      Y_ij(t) = sum mat_y[i, j, k] t^k.

  Returns:
    (m, m, d...
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Some Polynomial Tools
def integ_poly_0_1(p):
  """Return the integral of p(t) between 0 and 1."""
  return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))


def spline_regression(x, y, num_parts, deg=3, alpha=.01, smoothness=1):
  """Fits splines with `num_parts` to data `(x, y)`.

  Finds a piecewise polynomial function `p` of degree ...
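A quick sanity check of the coefficient-weighting trick in `integ_poly_0_1`: for p(t) = sum p[i] t^i, the integral on [0, 1] is sum p[i]/(i+1), which is exactly the dot product with 1/linspace(1, n, n). The function is repeated here so the sketch is self-contained:

```python
import numpy as np

def integ_poly_0_1(p):
    """Return the integral of p(t) = sum p[i] t^i between 0 and 1."""
    return np.array(p).dot(1 / np.linspace(1, len(p), len(p)))

# p(t) = 1 + 2t + 3t^2  =>  integral on [0, 1] is 1 + 1 + 1 = 3
assert abs(integ_poly_0_1([1, 2, 3]) - 3.0) < 1e-12
```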
time_varying_optimization/tvsdp.ipynb
google-research/google-research
apache-2.0
Paste existing code into model.py

A Python package requires our code to be in a .py file, as opposed to notebook cells. So, we simply copy and paste our existing code from the previous notebook into a single file. In the cell below, we write the contents of the cell into model.py, packaging the model we developed in the...
%%writefile ./taxifare/trainer/model.py
# TODO 1
import datetime
import logging
import os
import shutil

import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow imp...
courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb
turbomanage/training-data-analyst
apache-2.0
Modify code to read data from and write checkpoint files to GCS

If you look closely above, you'll notice a new function, train_and_evaluate, that wraps the code that actually trains the model. This allows us to parametrize the training by passing a dictionary of parameters to this function (e.g., batch_size, num_examples...
%%writefile taxifare/trainer/task.py
# TODO 1
import argparse

from trainer import model

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    # TODO: Your code goes here
    # TODO: Your code goes here
    # TODO: Your code goes here
    parser.add_argument(
        "--eval_data_path",
        ...
courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb
turbomanage/training-data-analyst
apache-2.0
Run your training package on Cloud AI Platform

Once the code works in standalone mode locally, you can run it on Cloud AI Platform. To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:

- jobid: A unique identifier...
%%bash
# TODO 2
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}

# Model and training hyperparameters
BATCH_SIZE=50
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=100
NBUCKE...
courses/machine_learning/deepdive2/building_production_ml_systems/labs/1_training_at_scale.ipynb
turbomanage/training-data-analyst
apache-2.0
2) Setup the Request

Extract the models from the cache with this request and upload the object files to the configured S3 bucket. Please make sure the environment variables are set correctly and that the S3 bucket exists:

ENV_AWS_KEY=<AWS API Key>
ENV_AWS_SECRET=<AWS API Secret>

For docker containers, make su...
ds_name = "iris_regressor"
examples/ML-IRIS-Redis-Labs-Extract-From-Cache.ipynb
jay-johnson/sci-pype
apache-2.0
3) Build and Run the Extract + Upload Request
cache_req = {
    "RAName": "CACHE",       # Redis endpoint name holding the models
    "DSName": str(ds_name),  # Dataset name for pulling out of the cache
    "S3Loc": str(s3_loc),    # S3 location to store the model file
    ...
examples/ML-IRIS-Redis-Labs-Extract-From-Cache.ipynb
jay-johnson/sci-pype
apache-2.0
The right-hand side of ODEs can be implemented in either Python or C. Although not absolutely necessary for this example, we here show how to implement the RHS in C. This is often significantly faster than using a Python callback function. We use a simple spin model, which is one second-order ODE, or a set of two co...
%%writefile rhs.c
#include "rebound.h"

void derivatives(struct reb_ode* const ode, double* const yDot, const double* const y, const double t){
    struct reb_orbit o = reb_tools_particle_to_orbit(ode->r->G, ode->r->particles[1], ode->r->particles[0]);
    double omega2 = 3.*0.26;
    yDot[0] = y[1];
    yDot[1] =...
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
We now compile this into a shared library. We need the REBOUND headers and library for this. The following is a bit of a hack: we just copy the files into the current folder. This works if you've installed REBOUND from the git repository. Otherwise, you'll need to find these files manually (which might depend on your pyt...
!cp ../src/librebound.so .
!cp ../src/rebound.h .
!gcc -c -O3 -fPIC rhs.c -o rhs.o
!gcc -L. -shared rhs.o -o rhs.so -lrebound
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
Using ctypes, we can load the library into Python.
from ctypes import cdll

clibrhs = cdll.LoadLibrary("rhs.so")
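The same loading mechanism works for any shared library. As a stand-in for rhs.so, here is a sketch that loads the system C math library and calls a function from it; library name resolution is platform-dependent (a Linux fallback name is shown), and setting `restype`/`argtypes` is what tells ctypes how to marshal the arguments:

```python
from ctypes import CDLL, c_double
from ctypes.util import find_library

# Load the C math library the same way rhs.so is loaded above.
libm = CDLL(find_library("m") or "libm.so.6")

# Declare the C signature: double cos(double).
libm.cos.restype = c_double
libm.cos.argtypes = [c_double]

assert abs(libm.cos(0.0) - 1.0) < 1e-12
```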
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
The following function sets up the N-body simulation as well as the ODE system that governs the spin evolution. Note that we set the derivatives function pointer to the C function we've just compiled. You could also set this function pointer to a Python function and avoid all the C complications.
def setup():
    sim = rebound.Simulation()
    sim.add(m=1)              # Saturn
    sim.add(a=1, e=0.123233)  # Hyperion, massless, semi-major axis of 1
    sim.integrator = "BS"
    sim.ri_bs.eps_rel = 1e-12  # tolerance
    sim.ri_bs.eps_abs = 1e-12
    ode_spin = sim.create_ode(length=2, needs_nbody=True)
    ...
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
We will create two simulations that are slightly offset from each other.
sim, ode_spin = setup()
sim2, ode_spin2 = setup()
ode_spin2.y[0] += 1e-8  # small perturbation
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
With these two simulations, we can measure the growing divergence of nearby trajectories, a key feature of chaos.
times = 2.*np.pi*np.linspace(0, 30, 100)  # a couple of orbits
obliq = np.zeros((len(times)))
obliq2 = obliq.copy()
for i, t in enumerate(times):
    sim.integrate(t, exact_finish_time=1)
    sim2.integrate(t, exact_finish_time=1)
    obliq[i] = ode_spin.y[0]
    obliq2[i] = ode_spin2.y[0]
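The divergence measurement itself is generic. As a cheap stand-in for the two REBOUND simulations (purely illustrative, unrelated to Hyperion), a pair of logistic-map trajectories offset by a tiny perturbation shows the same exponential separation of nearby trajectories:

```python
import numpy as np

def logistic_trajectory(x0, n, r=4.0):
    """Iterate the chaotic logistic map x -> r x (1 - x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        xs[i] = x
        x = r * x * (1.0 - x)
    return xs

a = logistic_trajectory(0.3, 50)
b = logistic_trajectory(0.3 + 1e-10, 50)  # tiny perturbation, like ode_spin2.y[0] += 1e-8
div = np.abs(a - b)

# nearby trajectories separate roughly exponentially before saturating
assert div[0] < 1e-9
assert div.max() > 1e-4
```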
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
Finally, let us plot the divergence as a function of time.
fig, ax = plt.subplots(1, 1)
ax.set_xlabel("time [orbits]")
ax.set_ylabel("obliquity difference [degrees]")
ax.set_yscale("log")
ax.set_ylim([1e-8, 1e2])

o1 = np.remainder((obliq - obliq2) * 180 / np.pi, 360.)
o2 = np.remainder((obliq2 - obliq) * 180 / np.pi, 360.)
ax.scatter(times / np.pi / 2., np.minimum(o1, o2))
ax.plot(times/np.pi/2.,1e...
ipython_examples/ChaoticHyperion.ipynb
hannorein/rebound
gpl-3.0
Next, import all the dependencies we'll use in this exercise, which include Fairness Indicators, TensorFlow Model Analysis (tfma), and the What-If tool (WIT):
%tensorflow_version 2.x
import os
import tempfile
from datetime import datetime

import apache_beam as beam
import numpy as np
import pandas as pd
import tensorflow_hub as hub
import tensorflow as tf
import tensorflow_model_analysis as tfma
from tensorflow_model_analysis.addons.fairness.post_export_metrics import fairn...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Run the following code to download and import the training and validation datasets. By default, the following code will load the preprocessed data (see Fairness Exercise 1: Explore the Model for more details). If you prefer, you can enable the download_original_data checkbox at right to download the original dataset an...
download_original_data = False  #@param {type:"boolean"}

if download_original_data:
    train_tf_file = tf.keras.utils.get_file(
        'train_tf.tfrecord',
        'https://storage.googleapis.com/civil_comments_dataset/train_tf.tfrecord')
    validate_tf_file = tf.keras.utils.get_file('validate_t...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Next, train the original model from Fairness Exercise 1: Explore the Model, which we'll use as the baseline model for this exercise:
#@title Run this cell to train the baseline model from Exercise 1
TEXT_FEATURE = 'comment_text'
LABEL = 'toxicity'

FEATURE_MAP = {
    # Label:
    LABEL: tf.io.FixedLenFeature([], tf.float32),
    # Text:
    TEXT_FEATURE: tf.io.FixedLenFeature([], tf.string),
    # Identities:
    'sexual_orientation': tf.io.VarLen...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
In the next section, we'll apply bias-remediation techniques on our data and then train a revised model on the updated data. Remediate Bias To remediate bias in our model, we'll first need to define the remediation metrics we'll use to gauge success and choose an appropriate remediation technique. Then we'll retrain th...
def train_input_fn_with_remediation():
  def parse_function(serialized):
    parsed_example = tf.io.parse_single_example(
        serialized=serialized, features=FEATURE_MAP)
    # Adds a weight column to deal with unbalanced classes.
    parsed_example['weight'] = tf.add(parsed_example[LABEL], 0.1)
    # BEGIN U...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Retrain the model Now, let's retrain the model with our upweighted examples:
BASE_DIR = tempfile.gettempdir()
model_dir_with_remediation = os.path.join(
    BASE_DIR, 'train', datetime.now().strftime("%Y%m%d-%H%M%S"))

embedded_text_feature_column = hub.text_embedding_column(
    key=TEXT_FEATURE,
    module_spec='https://tfhub.dev/google/nnlm-en-dim128/1')

classifier_with_remediation = tf....
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Recompute fairness metrics Now that we've retrained the model, let's recompute our fairness metrics. First, export the model:
def eval_input_receiver_fn():
  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_placeholder')
  receiver_tensors = {'examples': serialized_tf_example}
  features = tf.io.parse_example(serialized_tf_example, FEATURE_MAP)
  features['weight'] = tf.ones_like(fea...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Next, run the fairness evaluation using TFMA:
tfma_eval_result_path_with_remediation = os.path.join(BASE_DIR, 'tfma_eval_result_with_remediation')

slice_selection = 'gender'
compute_confidence_intervals = False

# Define slices that you want the evaluation to run on.
slice_spec = [
    tfma.slicer.SingleSliceSpec(),  # Overall slice
    tfma.slicer.SingleSliceSpec...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
Load evaluation results

Run the following two cells to load results in the What-If tool and Fairness Indicators. In the What-If tool, we'll load 1,000 examples with the corresponding predictions returned from both the baseline model and the remediated model.

WARNING: When you launch the What-If tool widget below, the ...
DEFAULT_MAX_EXAMPLES = 1000

# Load 100000 examples in memory. When first rendered, What-If Tool only
# displays 1000 of these examples to ensure data loads successfully for most
# browser/machine configurations.
def wit_dataset(file, num_examples=100000):
    dataset = tf.data.TFRecordDataset(
        filenames=[train_tf...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
In Fairness Indicators, we'll display the remediated model's evaluation results on the validation set.
# Link Fairness Indicators widget with WIT widget above,
# so that clicking a slice in FI below will load its data in WIT above.
event_handlers = {'slice-selected':
                  wit.create_selection_callback(wit_data, DEFAULT_MAX_EXAMPLES)}
widget_view.render_fairness_indicator(eval_result=eval_result_with_remediation,
                                        ...
ml/pc/exercises/fairness_text_toxicity_part2.ipynb
google/eng-edu
apache-2.0
We can instantiate the tool using an API key.
cii = CancerImageInterface(api_key=YOUR_API_KEY_HERE)
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Note: The number of studies (collections) provided here is somewhat smaller than what is listed on The Cancer Imaging Archive's website. This is because certain studies, such as those with restricted access, are excluded. With this notice out of the way, let's go ahead and perform a search.
cii.search(cancer_type='breast')
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Next, we can easily download this data.
import numpy as np

def simplify_df(df):
    """This function simplifies dataframes for the purposes of this tutorial."""
    data_frame = df.copy()
    for c in ('cached_dicom_images_path', 'cached_images_path'):
        data_frame[c] = data_frame[c].map(
            lambda x: tuple(['path_to_image'] * len(x)) if i...
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
The code below will download the data in our search results, but with two noteworthy restrictions. <br> First, patient_limit=5 will limit the number of patients/subjects downloaded to the first 5. <br> Second, collections_limit will limit the number of collections downloaded to one (in this case, 'TCGA-COAD'). <br> Thi...
pull_df = cii.pull(patient_limit=5, collections_limit=1, session_limit=1)
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Let's take a look at the data we've downloaded. We could view the pull_df object above, or the identical records_db attribute of cii, e.g., cii.records_db. However, both of those DataFrames contain several columns which are not typically relevant for every data use. So, instead, we can view an abbreviated DataFrame, rec...
simplify_df(cii.records_db_short)
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Notes: The 'cached_dicom_images_path' and 'cached_images_path' columns refer to multiple images. The number of converted images may differ from the number of raw DICOM images because 3D DICOM images are saved as individual frames when they are converted to PNG. The 'image_count_converted_cache' column provides an acco...
import dicom # in the future you will have to use `import pydicom as dicom`
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
We can also go ahead and import matplotlib to allow us to visualize the ndarrays we extract. We can start by extracting a list of DICOMs from the images we downloaded above.
sample_dicoms = cii.records_db['cached_dicom_images_path'].iloc[1]
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
We can load these images in as ndarrays using the dicom (pydicom) library.
dicoms_arrs = [dicom.read_file(f).pixel_array for f in sample_dicoms]
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
DICOM represents imaging sessions with a tag known as SeriesInstanceUID. That is, the unique ID of the series. If multiple DICOM files/images share the same SeriesInstanceUID, it means they are part of the same 'series'. If we get the length of dicoms_arrs we see that multiple DICOM files share the same SeriesInstance...
len(dicoms_arrs) # cii.records_db['series_instance_uid'].iloc[1]
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
This suggests that this particular series is either a 3D volume or a time series. So we can go ahead and stack these images on top of one another as a way of representing this relationship between the images.
stacked_dicoms = np.stack(dicoms_arrs)
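`np.stack`'s behavior can be sanity-checked on synthetic frames standing in for the DICOM pixel arrays; it adds a new leading axis, so N same-shaped 2D frames become one (N, rows, cols) array with frame order preserved:

```python
import numpy as np

# Fifteen fake 256x256 "frames", like the pixel arrays read from DICOM files.
frames = [np.full((256, 256), i, dtype=np.uint8) for i in range(15)]

stacked = np.stack(frames)  # new leading axis: (frame, row, col)
assert stacked.shape == (15, 256, 256)
assert stacked[7, 0, 0] == 7  # frame order is preserved
```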
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
If we check the shape of stacked_dicoms, we can see that we have indeed stacked 15 256x256 images on top of one another.
stacked_dicoms.shape
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
We can also go ahead and define a small function which will enable us to visualize this 'stack' of images.
import matplotlib.pyplot as plt

def sample_stack(stack, rows=3, cols=5, start_with=0, show_every=1):
    """Function to display a stacked ndarray.
    Source: https://www.raddq.com/dicom-processing-segmentation-visualization-in-python
    Note: this code has been slightly modified."""
    if rows*cols != stack.shape[0]:
        ...
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
If you're curious to see what these images look like, you can uncomment the line below to view the stack of images.
# sample_stack(stacked_dicoms, rows=3)
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Note: Ordering DICOM images in space is tricky. Currently, this class uses a somewhat reliable, but far from ideal, means of ordering images. <br> Errors are possible. <br> <small> For the Medical Imaging Folks: <br> Images in the 'series_instance_uid' column are ordered against the InstanceNumber tag instead of actual...
from biovida.images import image_divvy
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Next, we can define a 'divvy_rule'.
def my_divvy_rule(row):
    if row['modality_full'] == 'Magnetic Resonance Imaging (MRI)':
        return 'mri'
    if row['modality_full'] == 'Segmentation':
        return 'seg'
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
This rule sorts images into MRIs ('mri') and segmentations ('seg'). All other images will be excluded.
train_test = image_divvy(instance=cii,
                         divvy_rule=my_divvy_rule,
                         db_to_extract='records_db',
                         action='ndarray',
                         train_val_test_dict={'train': 0.8, 'test': 0.2})

train_mri, test_mri = train_test['train']['mri'], train_tes...
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
One important thing to point out is that some of the image arrays returned will, in fact, be stacked arrays of images. <br> For example:
train_seg[10].shape
tutorials/2_cancer_imaging_archive.ipynb
TariqAHassan/BioVida
bsd-3-clause
Loading FFT routines
gridDIM = 64
size = gridDIM*gridDIM

axes0 = 0
axes1 = 1

makeC2C = 0
makeR2C = 1
makeC2R = 1

axesSplit_0 = 0
axesSplit_1 = 1

m = size

segment_axes0 = 0
segment_axes1 = 0

DIR_BASE = "/home/robert/Documents/new1/FFT/mycode/"

# FAFT
_faft128_2D = ctypes.cdll.LoadLibrary(DIR_BASE + 'FAFT128_2D_R2C.so')
_faft128_2D....
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Initializing Data Gaussian
def Gaussian(x, mu, sigma):
    return np.exp(-(x-mu)**2/sigma**2/2.)/(sigma*np.sqrt(2*np.pi))

def fftGaussian(p, mu, sigma):
    return np.exp(-1j*mu*p)*np.exp(-p**2*sigma**2/2.)

# Gaussian parameters
mu_x = 1.5
sigma_x = 1.

mu_y = 1.5
sigma_y = 1.

# Grid parameters
x_amplitude = 7.
p_amplitude = 5.
...
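As a sanity check on the definition above, the Gaussian should integrate to 1; a quick Riemann-sum check (the grid bounds and parameter values below are illustrative):

```python
import numpy as np

def Gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / sigma**2 / 2.) / (sigma * np.sqrt(2 * np.pi))

# Riemann sum over a grid wide enough to capture the tails.
x = np.linspace(-10., 10., 2001)
dx = x[1] - x[0]
area = (Gaussian(x, 1.5, 1.) * dx).sum()

assert abs(area - 1.0) < 1e-4  # normalized to unit area
```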
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Forward Transform
# Executing FFT
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeR2C, axesSplit_0)
cuda_faft(int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeC2C, axesSplit_0)

plt.imshow(f_gpu.get())
plt.plot(f33_gpu.get().real.reshape(64))

def Recons...
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Central Section: $p_y =0$
plt.figure(figsize=(10, 10))

plt.plot(p_range, ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get())[32, :].real/float(size), 'o-', label='Real')
plt.plot(p_range, ReconstructFFT2D_axesSplit_0(f_gpu.get(), f33_gpu.get())[32, :].imag/float(size), 'ro-', label='Imag')

plt.xlabel('$p_x$', **axis_f...
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Inverse Transform
# Executing iFFT cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2C, axesSplit_0 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2R, axesSplit_0 ) plt.imshow( f_gpu.get()/(float(size*size)) , extent=[-x_amplitude ,...
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
$W$ TRANSFORM FROM AXES-1 After the transform, f_gpu[:, :32] contains the real values and f_gpu[:, 32:] contains the imaginary values. f33_gpu holds the 33rd complex values.
f = Gaussian(x,mu_x,sigma_x)*Gaussian(y,mu_y,sigma_y) plt.imshow( f, extent=[-x_amplitude , x_amplitude-dx, -x_amplitude , x_amplitude-dx] , origin='lower') f33 = np.zeros( [64, 1], dtype = np.complex64 ) # One gpu array. f_gpu = gpuarray.to_gpu( np.ascontiguousarray( f , dtype = np.float32 ) ) f33_gpu = gpuarr...
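The packed layout described above can be illustrated with plain numpy (hypothetical data, not the notebook's actual `ReconstructFFT2D` helpers): the first 32 columns carry real parts, the next 32 carry imaginary parts, and `f33` supplies the 33rd complex coefficient of each row:

```python
import numpy as np

N = 64
real_part = np.random.rand(N, 32).astype(np.float32)
imag_part = np.random.rand(N, 32).astype(np.float32)
packed = np.hstack([real_part, imag_part])   # shape (64, 64), like f_gpu.get()
f33 = (np.random.rand(N) + 1j * np.random.rand(N)).astype(np.complex64)

def unpack_axis1(packed, f33):
    """Rebuild the 33 complex R2C coefficients per row from the packed array."""
    half = packed.shape[1] // 2
    spectrum = packed[:, :half] + 1j * packed[:, half:]
    return np.hstack([spectrum, f33[:, None]])   # shape (64, 33)

spec = unpack_axis1(packed, f33)
assert spec.shape == (64, 33)
assert np.allclose(spec[:, :32].real, real_part)
```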
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Forward Transform
# Executing FFT cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes1, axes1, makeR2C, axesSplit_1 ) cuda_faft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, delta, segment_axes0, axes0, makeC2C, axesSplit_1 ) plt.imshow( f_gpu.get() ) plt.plot( f33_gpu.get().real.reshape(64) ) def Reconst...
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
Inverse Transform
# Executing iFFT cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes0, axes0, makeC2C, axesSplit_1 ) cuda_ifaft( int(f_gpu.gpudata), int(f33_gpu.gpudata), dx, -delta, segment_axes1, axes1, makeC2R, axesSplit_1 ) plt.imshow( f_gpu.get()/float(size)**2 , extent=[-x_amplitude , x_a...
FAFT_64-points_R2C/nbFAFT128_offset_xy_2D.ipynb
robertclf/FAFT
bsd-3-clause
User

- ~~Products purchased (total_distinct_items)~~
- ~~Orders made (nb_orders)~~
- ~~frequency and recency of orders (average_days_between_orders)~~
- Aisle purchased from
- Department purchased from
- ~~frequency and recency of reorders~~
- tenure
- ~~mean order size (average basket)~~
- etc.
usr = pd.DataFrame() o_grouped = orders.groupby('user_id') p_grouped = priors.groupby('user_id') usr['average_days_between_orders'] = o_grouped.days_since_prior_order.mean() usr['max_days_between_orders'] = o_grouped.days_since_prior_order.max() usr['min_days_between_orders'] = o_grouped.days_since_prior_order.min() u...
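For the unchecked "tenure" item above, one hedged sketch (toy frame, not the real `orders` table) is the total span of a user's observed history, i.e. the sum of gaps between their orders:

```python
import pandas as pd

orders = pd.DataFrame({
    'user_id': [1, 1, 1, 2, 2],
    'order_number': [1, 2, 3, 1, 2],
    'days_since_prior_order': [None, 7.0, 14.0, None, 30.0],
})
# The first order of each user has NaN days_since_prior_order; sum() skips it
tenure = orders.groupby('user_id')['days_since_prior_order'].sum()
print(tenure.to_dict())  # {1: 21.0, 2: 30.0}
```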
instacart/Model.ipynb
bataeves/kaggle
unlicense
Product

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean/std add_to_cart_order~~
- shelf life / repurchase interval
- etc.
prods = pd.DataFrame() p_grouped = priors.groupby(priors.product_id) prods['orders'] = p_grouped.size().astype(np.float32) prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique())) prods['users'] = p_grouped.user_id.unique().apply(len) prods['add_to_cart_order_mean'] = p_grouped.add_to_cart_order.mean()....
instacart/Model.ipynb
bataeves/kaggle
unlicense
Aisle

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean/std add_to_cart_order~~
- shelf life / repurchase interval
- etc.
prods = pd.DataFrame() p_grouped = priors.groupby(priors.aisle_id) prods['orders'] = p_grouped.size().astype(np.float32) prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique())).astype(np.float32) prods['users'] = p_grouped.user_id.unique().apply(len).astype(np.float32) prods['add_to_cart_order_mean'] =...
instacart/Model.ipynb
bataeves/kaggle
unlicense
Department

- ~~users~~
- ~~orders~~
- ~~order frequency~~
- ~~reorder rate~~
- recency
- ~~mean add_to_cart_order~~
- shelf life / repurchase interval
- etc.
prods = pd.DataFrame() p_grouped = priors.groupby(priors.department_id) prods['orders'] = p_grouped.size().astype(np.float32) prods['order_freq'] = (prods['orders'] / len(priors.order_id.unique())).astype(np.float32) prods['users'] = p_grouped.user_id.unique().apply(len).astype(np.float32) prods['add_to_cart_order_mea...
instacart/Model.ipynb
bataeves/kaggle
unlicense
User Product Interaction (UP)

- ~~purchases (nb_orders)~~
- ~~purchases ratio~~
- ~~reorders~~
- ~~Average position in cart~~
- day since last purchase
- average/min/max days between purchase
- day since last purchase - shelf life
- ~~order since last purchase (UP_orders_since_last)~~
- Latest one/two/three/four week features
- etc.
%%cache userXproduct.pkl userXproduct priors['z'] = priors.product_id + priors.user_id * 100000 d = dict() for row in tqdm_notebook(priors.itertuples(), total=len(priors)): z = row.z if z not in d: d[z] = ( 1, (row.order_number, row.order_id), row.add_to_cart_order,...
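The dict in the loop above is keyed by the composite integer `z = product_id + user_id * 100000`, which stays collision-free only while every `product_id` is below 100000. A small sketch of the encoding and its inverse:

```python
def make_key(user_id, product_id, base=100000):
    # product_id must stay below `base` for the key to be collision-free
    assert product_id < base
    return product_id + user_id * base

def split_key(z, base=100000):
    # Recover (user_id, product_id) from the composite key
    return z // base, z % base

z = make_key(42, 13)
assert z == 4200013
assert split_key(z) == (42, 13)
```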
instacart/Model.ipynb
bataeves/kaggle
unlicense
User aisle interaction (UA)

- purchases
- reorders
- day since last purchase
- average/min/max days between purchase
- order since last purchase
- etc.

User department interaction (UD)

- purchases
- reorders
- day since last purchase
- average days between purchase
- order since last purchase
- etc.

User time interaction (UT)

- user prefer...
### build list of candidate products to reorder, with features ### train_index = set(op_train.index) def features(selected_orders, labels_given=False): order_list = [] product_list = [] labels = [] for row in tqdm_notebook( selected_orders.itertuples(), total=len(selected_orders) )...
instacart/Model.ipynb
bataeves/kaggle
unlicense
Train
f_to_use = [ 'user_total_orders', 'user_total_items', 'user_total_distinct_items', 'user_average_days_between_orders', 'user_average_basket', 'order_hour_of_day', 'days_since_prior_order', 'days_since_ratio', 'aisle_id', 'department_id', 'product_orders', 'product_reorders', 'product_reorder_rate', ...
instacart/Model.ipynb
bataeves/kaggle
unlicense
Predict
def predict(model, df_test, TRESHOLD=0.19): ### build candidates list for test ### df_test['pred'] = model.predict(feature_select(df_test)) # Need to add https://www.kaggle.com/mmueller/f1-score-expectation-maximization-in-o-n/code d = dict() for row in df_test.itertuples(): # Right here ...
instacart/Model.ipynb
bataeves/kaggle
unlicense
CV https://www.kaggle.com/happycube/validation-demo-325-cv-3276-lb/notebook
lgb.cv(params, d_train, ROUNDS, nfold=5, verbose_eval=10) %%cache df_train_gt.pkl df_train_gt from functools import partial products_raw = pd.read_csv(IDIR + 'products.csv') # combine aisles, departments and products (left joined to products) goods = pd.merge( left=pd.merge( left=products_raw, r...
instacart/Model.ipynb
bataeves/kaggle
unlicense
Tuning the threshold
for th in np.arange(0.18, 0.22, 0.01): print(th) print(cv(threshold=th)) print()
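What this sweep optimizes can be sketched with a self-contained F1 helper on toy data (hypothetical probabilities; the real pipeline scores `cv` over the full feature set):

```python
import numpy as np

def f1_at_threshold(y_true, y_prob, threshold):
    """F1 score of the binary predictions obtained by cutting y_prob at threshold."""
    y_pred = (y_prob >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example, swept over the same grid as above
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.25, 0.8, 0.15, 0.21, 0.6, 0.05])
for th in np.arange(0.18, 0.22, 0.01):
    print(round(th, 2), f1_at_threshold(y_true, y_prob, th))
```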
instacart/Model.ipynb
bataeves/kaggle
unlicense
Model for predicting the number of purchases
prior_orders_count = priors[["order_id", "product_id"]].groupby("order_id").count() prior_orders_count = prior_orders_count.rename(columns={"product_id": "product_counts"}) train_orders_count = op_train.drop(["product_id", "order_id"], axis=1, errors="ignore") train_orders_count = train_orders_count.reset_index()[["or...
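A hedged baseline for this order-size model (toy frames, not the real `priors`/`op_train` tables): predict each user's next basket size as the rounded mean of their prior order sizes:

```python
import pandas as pd

priors = pd.DataFrame({
    'order_id': [1, 1, 1, 2, 2, 3, 3, 3, 3],
    'user_id':  [7, 7, 7, 7, 7, 9, 9, 9, 9],
})
order_sizes = priors.groupby('order_id').size()            # items per order
order_user = priors.groupby('order_id')['user_id'].first() # who placed each order
# Mean prior basket size per user; note round() is banker's rounding (2.5 -> 2)
per_user = order_sizes.groupby(order_user).mean().round().astype(int)
print(per_user.to_dict())  # {7: 2, 9: 4}
```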
instacart/Model.ipynb
bataeves/kaggle
unlicense
Let's look at the tables and columns available for analysis. Please query the INFORMATION_SCHEMA (learning objective 1).
%%with_globals %%bigquery --project {PROJECT} SELECT table_name, column_name, data_type FROM `asl-ml-immersion.stock_src.INFORMATION_SCHEMA.COLUMNS` ORDER BY table_name, ordinal_position
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Price History TODO: Visualize stock symbols from the dataset.
%%with_globals %%bigquery --project {PROJECT} SELECT * FROM `asl-ml-immersion.stock_src.price_history` LIMIT 10 def query_stock(symbol): return bq.query(''' # TODO: query a specific stock '''.format(symbol)).to_dataframe() df_stock = query_stock('GOOG') df_stock.Date = pd.to_datetime(df_stock.Date) ax = ...
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
TODO 2: Compare individual stocks to the S&P 500.
SP500_SYMBOL = 'gspc' df_sp = query_stock(SP500_SYMBOL) # TODO: visualize S&P 500 price
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Let's see how stock prices change over time on a yearly basis. Using the LAG function we can compute the change in stock price year-over-year. Let's compute the average close difference for each year. This could, of course, be done in Pandas. Oftentimes it's useful to use some combination of BigQuery and Pand...
%%with_globals %%bigquery --project {PROJECT} df WITH with_year AS ( SELECT symbol, EXTRACT(YEAR FROM date) AS year, close FROM `asl-ml-immersion.stock_src.price_history` WHERE symbol in (SELECT symbol FROM `asl-ml-immersion.stock_src.snp500`) ), year_aggregated AS ( SELECT year, s...
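As noted, the same LAG logic works in Pandas via `groupby` + `shift`; a hedged sketch on a toy frame shaped like the query output:

```python
import pandas as pd

# Toy frame shaped like (symbol, year, avg_close) from the query above
df = pd.DataFrame({
    'symbol': ['IBM'] * 3 + ['GOOG'] * 3,
    'year':   [2001, 2002, 2003] * 2,
    'avg_close': [100., 110., 99., 200., 220., 242.],
})
df = df.sort_values(['symbol', 'year'])
# Equivalent of SQL LAG(avg_close) OVER (PARTITION BY symbol ORDER BY year)
df['prev_close'] = df.groupby('symbol')['avg_close'].shift(1)
df['yoy_change'] = df['avg_close'] - df['prev_close']
print(df)
```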
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Stock splits can also impact our data, causing a stock price to drop rapidly. In practice, we would need to clean all of our stock data to account for this. This would be a major effort! Fortunately, in the case of IBM, for example, all stock splits occurred before the year 2000. Learning objective 2 TODO: Query the I...
stock_symbol = 'IBM' %%with_globals %%bigquery --project {PROJECT} df SELECT date, close FROM `asl-ml-immersion.stock_src.price_history` WHERE symbol='{stock_symbol}' ORDER BY date # TODO: can you visualize when the major stock splits occurred?
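One hedged way to spot split candidates (toy prices, not the actual IBM series): flag any day whose close falls more than ~45% versus the prior day, since an unadjusted 2:1 split halves the price:

```python
import pandas as pd

def find_split_candidates(df, drop_ratio=0.45):
    """Flag dates where close fell by more than `drop_ratio` vs the prior day,
    which is what an unadjusted 2:1 (or larger) split looks like."""
    df = df.sort_values('date').copy()
    pct = df['close'].pct_change()
    return df.loc[pct < -drop_ratio, 'date'].tolist()

# Toy series with one 2:1-split-like halving
toy = pd.DataFrame({
    'date': pd.to_datetime(['1999-05-24', '1999-05-25', '1999-05-26', '1999-05-27']),
    'close': [230.0, 232.0, 116.0, 118.0],
})
print(find_split_candidates(toy))  # the halving on 1999-05-26
```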
courses/machine_learning/deepdive2/time_series_prediction/labs/optional_1_data_exploration.ipynb
turbomanage/training-data-analyst
apache-2.0
Making Predictions First let's generate our own sentences to see how the model classifies them.
def string_to_array(s, separator=' '): return s.split(separator) def generate_data_row(sentence, label, max_length): sequence = np.zeros((max_length), dtype='int32') for i, word in enumerate(string_to_array(sentence)): sequence[i] = word_list.index(word) return sequence, label, max_length ...
code_samples/RNN/sentiment_analysis/.ipynb_checkpoints/SentimentAnalysis-New-checkpoint.ipynb
mari-linhares/tensorflow-workshop
apache-2.0
Data CoTeDe comes with a few datasets for demonstration. Here we will use a CTD cast from the PIRATA hydrographic collection, i.e. measurements from the Tropical Atlantic Ocean. If curious about this data, check CoTeDe's documentation for more details. Let's start by loading this dataset.
data = cotede.datasets.load_ctd() print("There is a total of {} observed levels.\n".format(len(data["TEMP"]))) print("The variables are: ", list(data.keys()))
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
This CTD was equipped with backup sensors to provide more robustness. Measurements from the secondary sensor are identified by a 2 at the end of the name, such as "TEMP2" for the secondary temperature sensor. Let's focus on the primary sensors. To visualize this profile we will use Bokeh so you can explore the data and...
p1 = figure(plot_width=420, plot_height=600) p1.circle( data['TEMP'], -data['PRES'], size=8, line_color="seagreen", fill_color="mediumseagreen", fill_alpha=0.3) p1.xaxis.axis_label = "Temperature [C]" p1.yaxis.axis_label = "Depth [m]" p2 = figure(plot_width=420, plot_height=600) p2.y_range = p1.y_range p2.circ...
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
Considering the unusual magnitudes and variability near the bottom, there are clearly bad measurements on this profile. Let's start with a traditional QC and then we'll include the Anomaly Detection. Traditional QC with CoTeDe framework NOTE: If you are not familiar with CoTeDe, it might be helpful to first check the ...
pqc = cotede.ProfileQC(data, cfg="eurogoos")
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause
That's it, the temperature and salinity from the primary and secondary sensors were all evaluated. Which criteria were flagged for the primary sensors?
print("Flags for temperature:\n {}\n".format(list(pqc.flags["TEMP"].keys()))) print("Flags for salinity:\n {}\n".format(list(pqc.flags["PSAL"].keys())))
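Each flag array follows the IOC convention (1 = good, 2 = probably good, 3 = probably bad, 4 = bad, 9 = missing value), so a quick summary over any of the arrays above can be sketched like this (hypothetical flag values, not this profile's actual output):

```python
import numpy as np

# Hypothetical flag array, standing in for e.g. pqc.flags["TEMP"]["overall"]
flags = np.array([1, 1, 1, 3, 4, 1, 9])
good = np.sum(flags == 1)
bad = np.sum((flags == 3) | (flags == 4))
print(good, bad)  # 4 2
```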
docs/notebooks/anomaly_detection/anomaly_detection_profile.ipynb
castelao/CoTeDe
bsd-3-clause