Inlining apps in the notebook Instead of displaying our app in a new browser window, we can also display an app inline in the notebook simply by using the .app method on a Panel object. The server app will be killed whenever you rerun or delete the cell that contains the output. Additionally, if your Jupyter Notebook serv...
pn.panel(dmap).app('localhost:8888')
examples/user_guide/Deploying_Bokeh_Apps.ipynb
ioam/holoviews
bsd-3-clause
<img width='80%' src='https://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_inline_simple.gif'></img> Periodic callbacks One of the most important features of deploying apps is the ability to attach asynchronous, periodic callbacks, which update the plot. The simplest way of achieving th...
def sine(counter):
    phase = counter * 0.1 % np.pi * 2
    xs = np.linspace(0, np.pi * 4)
    return hv.Curve((xs, np.sin(xs + phase))).opts(width=800)

counter = hv.streams.Counter()
dmap = hv.DynamicMap(sine, streams=[counter])

dmap_pane = pn.panel(dmap)
dmap_pane.app('localhost:8891')
<img width='80%' src='https://assets.holoviews.org/gifs/guides/user_guide/Deploying_Bokeh_Apps/bokeh_server_periodic.gif'></img> Once we have created a Panel object we can call the add_periodic_callback method to set up a periodic callback. The first argument to the method is the callback and the second argument period...
def update():
    counter.event(counter=counter.counter + 1)

cb = dmap_pane.add_periodic_callback(update, period=200)
Once started, we can stop and start it at will using the .stop and .start methods:
cb.stop()
Combining Bokeh Application and Flask Application While Panel and Bokeh are great ways to create an application often we want to leverage the simplicity of a Flask server. With Flask we can easily embed a HoloViews, Bokeh and Panel application in a regular website. The main idea for getting Bokeh and Flask to work toge...
def sine(frequency, phase, amplitude):
    xs = np.linspace(0, np.pi * 4)
    return hv.Curve((xs, np.sin(frequency * xs + phase) * amplitude)).options(width=800)

ranges = dict(frequency=(1, 5), phase=(-np.pi, np.pi), amplitude=(-2, 2), y=(-2, 2))
dmap = hv.DynamicMap(sine, kdims=['frequency', 'phase', 'amplitude']).redim.ran...
We now load our dynamic map into a Bokeh Application, passing the parameter allow_websocket_origin=["localhost:5000"]:

```python
from bokeh.client import pull_session
from bokeh.embed import server_session
from flask import Flask, render_template
from flask import send_from_directory

app = Flask(__name__)

# locally creates a p...
```
import holoviews as hv
import numpy as np
import panel as pn

# Create the holoviews app again
def sine(phase):
    xs = np.linspace(0, np.pi * 4)
    return hv.Curve((xs, np.sin(xs + phase))).opts(width=800)

stream = hv.streams.Stream.define('Phase', phase=0.)()
dmap = hv.DynamicMap(sine, streams=[stream])

start, end = ...
If instead we want to deploy this, we could add .servable as discussed before or use pn.serve. Note, however, that when using pn.serve all sessions will share the same state; it is therefore best to wrap the creation of the app in a function which we can then provide to pn.serve. For more detail on deploying Panel applica...
import numpy as np
import holoviews as hv

from bokeh.io import show, curdoc
from bokeh.layouts import layout
from bokeh.models import Slider, Button

renderer = hv.renderer('bokeh').instance(mode='server')

# Create the holoviews app again
def sine(phase):
    xs = np.linspace(0, np.pi * 4)
    return hv.Curve((xs, np.s...
Simple slicing example Slicing uses 3 parameters, placed where the array index would go. The 3 parameters are separated by colons ":". All 3 parameters are optional; the defaults are a start value of 0, an end value equal to the array length, and a step of 1. Remember...
a = np.arange(20)
print('Result of the operation a[1:15:2]')
print(a[1:15:2])
master/tutorial_numpy_1_2.ipynb
robertoalotufo/ia898
mit
Slicing example with negative indices Accessing the last element with a negative index The code below accesses the odd-indexed elements up to (but not including) the last element:
a = np.arange(20)
print('Result of the operation a[1:-1:2]')
print(a[1:-1:2])
print('Note that the slice stops before the last element (-1)')
Reversing the array with a negative step (step = -1)
a = np.arange(20)
print('Result of the operation a[-3:2:-1]')
print(a[-3:2:-1])
print('Note that the slice returns the array reversed')
print('From the antepenultimate element down to the third, with step = -1')
Advanced slicing It is possible to slice using the 3 explicit parameters (the lower bound, the upper bound, and the step), or we can omit some of these parameters. In those cases the default values are used: lower bound = first element, upper bound = last element, and step = 1. It is possible...
a = np.arange(20)
print('Result of the operation a[:15:2]')
print(a[:15:2])
print('Note that the slice starts from the first element')
print('First element up to before the 15th, with step 2')
Omitting the upper-bound index When the upper-bound index is omitted, it implicitly runs to the last element:
a = np.arange(20)
print('Result of the operation a[1::2]')
print(a[1::2])
print('Note that the slice runs through the last element')
print('From index 1 up to the last element, with step 2')
Omitting the step index The step is optional; when it is not given, its value is 1:
a = np.arange(20)
print('Result of the operation a[1:15]')
print(a[1:15])
print('Note that the slice has a unit step')
print('From index 1 up to before the 15th element, with step 1')
All elements with unit step
a = np.arange(20)
print('Result of the operation a[:]')
print(a[:])
print('All elements with unit step')
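Combining the omitted bounds with a negative step gives the common idiom for reversing an array, an extra example in the same spirit as the ones above:

```python
import numpy as np

a = np.arange(20)
# Omitting all three parameters and using step -1 yields the reversed array
print('Result of the operation a[::-1]')
print(a[::-1])
```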
Explaining simpler models Here we'll use the learned coefficients from a linear regression model as an explainability approach. Note: be cautious when drawing conclusions from learned weights, see the Explainability section in the book for more details.
!gsutil cp gs://ml-design-patterns/auto-mpg.csv .

data = pd.read_csv('auto-mpg.csv', na_values='?')
data = data.dropna()
data = data.drop(columns=['car name'])
data = pd.get_dummies(data, columns=['origin'])
data.head()

labels = data['mpg']
data = data.drop(columns=['mpg', 'cylinders'])
x, y = data, labels
x_train,...
07_responsible_ai/explainability.ipynb
GoogleCloudPlatform/ml-design-patterns
apache-2.0
Train a Scikit-learn linear regression model on the data and print the learned coefficients
model = LinearRegression().fit(x_train, y_train)
coefficients = model.coef_

coefdf = pd.DataFrame(coefficients, index=data.columns.tolist(), columns=['Learned coefficients'])
coefdf
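One reason for the caution mentioned above is that raw coefficients are scale-dependent: a feature measured in large units gets a small coefficient even when its effect is strong. A minimal, self-contained sketch of this caveat with synthetic data (the feature names and values here are hypothetical, not from auto-mpg):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
weight = rng.normal(3000, 500, n)  # large-scale feature (e.g. pounds)
accel = rng.normal(15, 2, n)        # small-scale feature (e.g. seconds)
y = -0.005 * weight + 1.0 * accel + rng.normal(0, 1, n)

# Ordinary least squares on the raw features (plus an intercept column)
X = np.column_stack([weight, accel, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("raw coefficients:", coef[:2])

# After standardizing, coefficients are on a comparable scale
Xs = np.column_stack([(weight - weight.mean()) / weight.std(),
                      (accel - accel.mean()) / accel.std(),
                      np.ones(n)])
coef_s, *_ = np.linalg.lstsq(Xs, y, rcond=None)
print("standardized coefficients:", coef_s[:2])
```

The standardized coefficients show that the large-unit feature actually dominates, even though its raw coefficient looks tiny.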
Feature attributions with SHAP Using the same dataset, we'll train a deep neural net with TensorFlow and use the SHAP library to get feature attributions.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=[len(x_train.iloc[0])]),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(1)
])

optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse', optimizer=optimizer, metr...
Install Pipeline SDK
!python3 -m pip install 'kfp>=0.1.31' --quiet
samples/core/dataflow/dataflow.ipynb
kubeflow/pipelines
apache-2.0
Load the component using KFP SDK
import kfp.deprecated.components as comp

dataflow_python_op = comp.load_component_from_url(
    'https://raw.githubusercontent.com/kubeflow/pipelines/1.7.0-rc.3/components/gcp/dataflow/launch_python/component.yaml')
help(dataflow_python_op)
Use the wordcount python sample In this sample, we run a wordcount sample code in a Kubeflow Pipeline. The output will be stored in a Cloud Storage bucket. Here is the sample code:
!gsutil cat gs://ml-pipeline/sample-pipeline/word-count/wc.py
Example pipeline that uses the component
import kfp.deprecated as kfp
from kfp.deprecated import dsl, Client
import json

@dsl.pipeline(
    name='dataflow-launch-python-pipeline',
    description='Dataflow launch python pipeline'
)
def pipeline(
    python_file_path = 'gs://ml-pipeline/sample-pipeline/word-count/wc.py',
    project_id = project,
    region = ...
Submit the pipeline for execution
Client().create_run_from_pipeline_func(pipeline, arguments={})
Inspect the output
!gsutil cat $output/wc/wordcount.out
Data preparation Load raw data
filename = 'FIWT_Exp015_20150601145005.dat.npz'

def loadData():
    # Read and parse raw data
    global exp_data
    exp_data = np.load(filename)

    # Select columns
    global T_cmp, da_cmp
    T_cmp = exp_data['data33'][:,0]
    da_cmp = np.average(exp_data['data33'][:,3:11:2], axis=1)

    global T_rig, phi_rig
    ...
workspace_py/RigRollId-Copy5.ipynb
matthewzhenggong/fiwt
lgpl-3.0
Check time sequence and inputs/outputs Click the 'Check data' button to show the raw data. Click on curves to select a time point and push it into the queue; click the 'T/s' text to pop the last point off the queue; and click the 'Output' text to print the time sequence table.
def checkInputOutputData():
    # check inputs/outputs
    fig, ax = plt.subplots(2, 1, True)
    ax[0].plot(T_cmp, da_cmp, 'r', picker=1)
    ax[1].plot(T_rig, phi_rig, 'b', picker=2)
    ax[0].set_ylabel('$\delta \/ / \/ ^o$')
    ax[1].set_ylabel('$\phi \/ / \/ ^o/s$')
    ax[1].set_xlabel('$T \/ / \/ s$', picker=True)
    ...
Input $\delta_T$ and focused time ranges

For each section,
* Select time range and shift it to start from zero;
* Resample Time, Inputs, Outputs in unique $\delta_T$;
* Smooth Input/Observe data if flag bit0 is set;
* Take derivatives of observe data if flag bit1 is set.
# Pick up focused time ranges
time_marks = [[1501.28, 1505.50, "doublet u1"],
              [1507.40, 1511.80, "doublet u2"],
              [1513.55, 1517.87, "doublet u3"],
              [1519.70, 1523.50, "doublet u4"],
              [1537.60, 1541.64, "doublet d1"],
              [1543.76, 1547.90, "doublet d2"],
              ...
Resample and filter data in sections
resample(True);
Define dynamic model to be estimated

$$\left\{\begin{matrix}
\ddot{\phi}_{rig} = \frac{M_{x,rig}}{I_{xx,rig}} \\
M_{x,rig} = M_{x,a} + M_{x,f} + M_{x,cg} \\
M_{x,a} = \frac{1}{2} \rho V^2 S_c b_c \left( C_{la,cmp}\delta_{a,cmp} + C_{lp,cmp} \frac{b_c}{2V} \dot{\phi}_{rig} \right) \\
M_{x,f} = -F_c \, \mathrm{sign}(\dot{\phi}_{rig}) - f\d...
%%px --local
# update common const parameters in all engines

# problem size
Nx = 2
Nu = 1
Ny = 2
Npar = 9

# reference
S_c = 0.1254  # S_c(m2)
b_c = 0.7     # b_c(m)
g = 9.81      # g(m/s2)
V = 30        # V(m/s)

# other parameters
v_th = 0.5/57.3   # v_th(rad/s)
v_th2 = 0.5/57.3  # v_th(rad/s)

# for short
qbarSb = 0.5*1.2...
Initial guess

* Input default values and ranges for parameters
* Select sections for training
* Adjust parameters based on simulation results
* Decide start values of parameters for optimization
# initial guess
param0 = [
    -0.3,             # Cla_cmp(1/rad)
    -0.5,             # Clp_cmp
    0.199141909329,   # Ixx(kg*m2)
    0.0580817418532,  # F_c(N*m)
    0.0407466009837,  # f(N*m/(rad/s))
    7.5588,           # m_T(kg)
    0.0444,           # l_z_T(m)
    1.01,             # kBrk
    0,                # phi0(rad)
]
param_name = ['$Cla_{cmp}$'...
Optimize using ML
display_preopt_params()

if True:
    InfoMat = None
    method = 'trust-ncg'
    def hessian(opt_params, index):
        global InfoMat
        return InfoMat
    dview['enable_infomat'] = True
    options = {'gtol': 1}
    opt_bounds = None
else:
    method = 'L-BFGS-B'
    hessian = None
    dview['enable_infomat'] = False
    ...
Show and test results
display_opt_params()

# show result
idx = random.sample(range(8), 2) \
    + random.sample(range(8, 16), 2) \
    + random.sample(range(16, 19), 2)
display_data_for_test();
update_guess();
toggle_inputs()
button_qtconsole()
Load and clean data Load CLIWOC ship logs
# extract data from zip file
cliwoc_data = extract_logbook_data('CLIWOC15.csv')

label_encoding = preprocessing.LabelEncoder().fit(cliwoc_data['LogbookIdent']).classes_
cliwoc_data['LogbookIdent'] = preprocessing.LabelEncoder().fit_transform(cliwoc_data['LogbookIdent'])
scripts/classifier-notebook.ipynb
clarka34/exploring-ship-logbooks
mit
Find definite slave data in CLIWOC data set These logs will be used to test the classifier
# extract logs that mention slaves
slave_mask = wc.count_key_words(cliwoc_data, text_columns, slave_words)
print('Found ', len(slave_mask[slave_mask]), ' logs that mention slaves')
Clean CLIWOC data
# find indices of ship names that are "non-slave" ships before dropping ship name column
non_slave_log_locations = isolate_training_data(cliwoc_data, {'ShipName': non_slave_ships})
print('Found ', len(non_slave_log_locations[non_slave_log_locations==True]), ' logs that are non-slave ships')

cliwoc_data['slave_logs'] =...
* cliwoc_data (unclassified) = 0
* cliwoc_data (no slaves) = 1
* cliwoc_data (slaves) = 2
* slave_data = 3
cliwoc_data.loc[non_slave_log_locations, 'slave_logs'] = 1
cliwoc_data.loc[slave_log_locations, 'slave_logs'] = 2

cliwoc_data = cliwoc_data.sort_values('LogbookIdent', ascending=True)
cliwoc_data_all = cliwoc_data.set_index('LogbookIdent', drop=False).copy()
cliwoc_data = cliwoc_data.set_index('LogbookIdent', drop=Fa...
Load Slave Voyages data
data_path = op.join(exploringShipLogbooks.__path__[0], 'data')
file_name = data_path + '/tastdb-exp-2010'
slave_voyage_logs = pd.read_pickle(file_name)

year_ind = ~(slave_voyage_logs['yeardep'].isnull())
slave_voyage_logs = slave_voyage_logs[year_ind]

cliwoc_ind = (slave_voyage_logs['yeardep'] > cliwoc_data['Year'].min...
Clean Slave voyages data
slave_voyage_desired_cols = list(slave_voyage_conversions.keys())
slave_voyage_logs = isolate_columns(slave_voyage_logs, slave_voyage_desired_cols)
slave_voyage_logs.rename(columns=slave_voyage_conversions, inplace=True)
#slave_voyage_logs.columns = ['Nationality', 'ShipType', 'VoyageFrom', 'VoyageTo', 'Year']

slave_...
Join data sets
all_data = pd.concat([cliwoc_data, slave_voyage_logs])
#all_data = cliwoc_data.append(slave_voyage_logs)

all_data = clean_data(all_data)  # cleanup
#del cliwoc_data, slave_voyage_logs

all_data.head()
Test of fuzzywuzzy method
all_data_test = all_data.copy()
fuzz_columns = ['Nationality', 'ShipType', 'VoyageFrom', 'VoyageTo']

for col in fuzz_columns:
    all_data = fuzzy_wuzzy_classification(all_data, col)
Encode data We must encode the data before separating it; otherwise values that do not occur in a subset will be encoded differently.
from sklearn.preprocessing import LabelEncoder

class MultiColumnLabelEncoder:
    def __init__(self, columns=None):
        self.columns = columns  # array of column names to encode

    def fit(self, X, y=None):
        return self  # not relevant here

    def transform(self, X):
        '''
        Transforms columns of...
Extract training data, and create list of classes
unclassified_logs = all_data[all_data['slave_logs']==0]
#unclassified_logs = unclassified_logs.drop('slave_logs', axis=1)

validation_set_1 = all_data[all_data['slave_logs']==2]
#validation_set_1 = validation_set_1.drop('slave_logs', axis=1)

# reserve first 20% of slave_voyage_logs as validation set
validation_set_2_i...
We left this code in so we can check whether there are any null values in each dataframe:
def finding_null_values(df):
    return df.isnull().sum()[df.isnull().sum() > 0]

repeat_multiplier = round(len(training_logs_pos)/len(training_logs_neg))

# create list of classes for training data (0 is for non-slave, 1 is for slave)
# index matches training_data
classes = np.zeros(len(training_logs_neg)).repeat(repeat...
Fit training data to classifier. Note: the first column of the numpy array is the index; do not include it in classification!
if classifier_algorithm == "Decision Tree":
    classifier = tree.DecisionTreeClassifier()
    classifier.fit(training_data[::,1::], classes)
elif classifier_algorithm == "Naive Bayes":
    classifier = MultinomialNB(alpha=1.0, class_prior=None, fit_prior=True)
    classifier.fit(training_data[::,1::], classes)
e...
Test classifier Check whether the slave logs from the CLIWOC data were classified correctly (we want most of them classified as 1); compare the first column with slave_index.
def validation_test(classifier, validation_set, expected_class):
    """
    Input classifier object, validation set (data frame), and expected class
    of the validation set (i.e. 1 or 0). Prints successful classification rate.
    """
    columns = list(validation_set.columns)
    columns.remove('slave_logs')
    valida...
Try plotting decision trees. The following lines of code do not currently work; we need to install graphviz.
# export PDF with decision tree
from sklearn.externals.six import StringIO
import os
import pydot

dot_data = StringIO()
tree.export_graphviz(new_classifier, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("test.pdf")
We model our distribution as a mixture of a normal distribution (parameters mu and sigma and mixture weight f) and an exponential distribution (parameter lamb and mixture weight 1 - f). This model can be translated into TensorProb as follows:
with Model() as model:
    mu = Parameter()
    sigma = Parameter(lower=0)
    lamb = Parameter(lower=0)
    f = Parameter(lower=0.0, upper=1)

    X = Mix2(f,
             Normal(mu, sigma, lower=0, upper=50),
             Exponential(lamb, lower=0, upper=50),
             lower=0, upper=50,
    )
examples/example1_particle_decays.ipynb
ibab/tensorprob
mit
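For intuition, the mixture density itself can be sketched in plain NumPy. This is a hand-rolled sketch: the truncation of both components to [0, 50] and the renormalization that Mix2 performs are deliberately omitted here, so the curve is only approximately a probability density on that interval.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    # Gaussian density
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def exponential_pdf(x, lamb):
    # exponential density with rate lamb
    return lamb * np.exp(-lamb * x)

def mixture_pdf(x, f, mu, sigma, lamb):
    # weight f on the normal component, (1 - f) on the exponential
    return f * normal_pdf(x, mu, sigma) + (1 - f) * exponential_pdf(x, lamb)

xs = np.linspace(0, 50, 5)
print(mixture_pdf(xs, f=0.2, mu=25, sigma=2, lamb=0.03))
```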
We declare X as an observed variable and set suitable initial parameter values:
model.observed(X)
model.initialize({
    mu: 25,
    sigma: 2,
    lamb: 0.03,
    f: 0.2
})
The dataset is generated with numpy:
np.random.seed(0)

exp_data = np.random.exponential(40, 10000)
exp_data = exp_data[(0 < exp_data) & (exp_data < 50)]
norm_data = np.random.normal(20, 2, 500)
data = np.concatenate([exp_data, norm_data])
Now we perform a fit of the model using the default optimizer:
result = model.fit(data)
print(result)
The fit converged successfully and we can visualize the distribution:
xs = np.linspace(0, 50, 200)
x, N, w = histpoints(data, bins=60, color='k', ms=3, capsize=0)
plt.plot(xs, w * model.pdf(xs), 'b-', lw=2)
plt.xlabel('mass')
plt.ylabel('candidates')
The sock problem Yuzhong Huang There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair (same color), but we don't know the color of these socks. What is the chance ...
# Solution

pmf = Pmf(['drawer 1', 'drawer 2'])
pmf['drawer 1'] *= (40/50)**2 + (10/50)**2
pmf['drawer 2'] *= (30/50)**2 + (20/50)**2
pmf.Normalize()
pmf.Print()

# Solution

pmf = Pmf(['drawer 1', 'drawer 2'])
pmf['drawer 1'] *= (40/50)*(39/49) + (10/50)*(9/49)
pmf['drawer 2'] *= (30/50)*(29/49) + (20/50)*(19/49)
pmf....
examples/btp01soln.ipynb
AllenDowney/ThinkBayes2
mit
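The with-replacement version of the update can also be checked with plain Python, without the Pmf class:

```python
# Prior: each drawer equally likely. Likelihood: probability of drawing
# a same-color pair (with replacement) from each drawer.
p1 = (40/50)**2 + (10/50)**2   # drawer 1: 0.68
p2 = (30/50)**2 + (20/50)**2   # drawer 2: 0.52

# Posterior probability of drawer 1 given a same-color pair
posterior_drawer1 = p1 / (p1 + p2)
print(posterior_drawer1)
```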
Chess-playing twins Allen Downey Two identical twins are members of my chess club, but they never show up on the same day; in fact, they strictly alternate the days they show up. I can't tell them apart except that one is a better player than the other: Avery beats me 60% of the time and I beat Blake 70% of the time....
# Solution

pmf = Pmf(['AB', 'BA'])
pmf['AB'] = 0.4 * 0.3
pmf['BA'] = 0.7 * 0.6
pmf.Normalize()
pmf.Print()

# Solution

class Chess(Suite):
    prob_I_beat = dict(A=0.4, B=0.7)

    def Likelihood(self, data, hypo):
        """Probability of data under hypo.

        data: sequence of 'W' and 'L'
        ...
1984 by Katerina Zoltan The place: Airstrip One. The reason: thoughtcrime. The time: ??? John's parents were taken by the Thought Police and erased from all records. John is being initiated into the Youth League and must pass a test. He is asked whether his parents are good comrades. It is not clear what John's admissi...
# Solution

officer = {'everything': 0.15, 'something': 0.25, 'nothing': 0.6}

class ThoughtPolice(Suite):
    def Likelihood(self, data, hypo):
        if data == 'gave away':
            if hypo == 'everything':
                return 0
            elif hypo == 'something':
                return 1
            else:
                ...
Where Am I? - The Robot Localization Problem by Kathryn Hite Bayes's Theorem proves to be extremely useful when building mobile robots that need to know where they are within an environment at any given time. Because of the error in motion and sensor systems, a robot's knowledge of its location in the world is based o...
# Solution

colors = 'GRRGGG'
locs = range(len(colors))
data = 'R'

pmf = Pmf(locs)
for hypo in pmf:
    if colors[hypo] == data:
        pmf[hypo] *= 0.8
    else:
        pmf[hypo] *= 0.2
pmf.Normalize()
pmf.Print()

# Solution

class Robot(Suite):
    colors = 'GRRGGG'

    def Likelihood(self, data, hypo):...
Part B: This becomes an extremely useful tool as we begin to move around the map. Let's try to get a more accurate knowledge of where the robot falls in the world by telling it to move forward one cell. The robot moves forward one cell from its previous position and the sensor reads green, again with an 80% accuracy r...
# Solution

class Robot2(Suite):
    colors = 'GRRGGG'

    def Likelihood(self, data, hypo):
        """
        data: tuple (offset, 'R' or 'G')
        hypo: index of starting location
        """
        offset, color = data
        index = (hypo + offset) % len(self.colors)
        if self.colors...
Red Dice problems Suppose I have a six-sided die that is red on 2 sides and blue on 4 sides, and another die that's the other way around, red on 4 sides and blue on 2. I choose a die at random and roll it, and I tell you it came up red. What is the probability that I rolled the second die (red on 4 sides)?
# Solution

from fractions import Fraction

d1 = Pmf({'Red': Fraction(2), 'Blue': Fraction(4)}, label='d1 (bluish)')
d1.Print()

# Solution

d2 = Pmf({'Red': Fraction(4), 'Blue': Fraction(2)}, label='d2 (reddish)')
d2.Print()

# Solution

dice = Pmf({d1: Fraction(1), d2: Fraction(1)})
dice.Print()

# Solution

class Dice(Su...
Scenario B Suppose I roll the same die again. What is the probability I get red?
# Solution

from thinkbayes2 import MakeMixture

predictive = MakeMixture(posterior)
predictive.Print()
Scenario A Instead of rolling the same die, suppose I choose a die at random and roll it. What is the probability that I get red?
# Solution

from thinkbayes2 import MakeMixture

predictive = MakeMixture(prior)
predictive.Print()
Scenario C Now let's run a different experiment. Suppose I choose a die and roll it. If the outcome is red, I report the outcome. Otherwise I choose a die again and roll again, and repeat until I get red. What is the probability that the last die I rolled is the reddish one?
# Solution

# On each roll, there are four possible results, with these probabilities:
#
#   d1, red    1/2 * 1/3
#   d1, blue   1/2 * 2/3
#   d2, red    1/2 * 2/3
#   d2, blue   1/2 * 1/3
#
# On the last roll, I tell you that the outcome is red, so we are left
# with two possibilities:
#
#   d1, red    1...
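The arithmetic sketched in the comments above can be verified directly with exact fractions:

```python
from fractions import Fraction

# Joint probabilities of (die, red) on any single roll
p_d1_red = Fraction(1, 2) * Fraction(1, 3)
p_d2_red = Fraction(1, 2) * Fraction(2, 3)

# Condition on the last roll being red
posterior_d2 = p_d2_red / (p_d1_red + p_d2_red)
print(posterior_d2)  # → 2/3
```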
Scenario D Finally, suppose I choose a die and roll it over and over until I get red, then report the outcome. What is the probability that the die I rolled is the reddish one?
# Solution

# In this case, the likelihood of the data is the same regardless of
# which die I rolled, so the posterior is the same as the prior.

posterior = prior.Copy()
posterior.Print()

# Solution

# In summary, each of the four scenarios yields a different pair of posterior
# and predictive distributions.
#
# Scena...
The bus problem Allen Downey Two buses routes run past my house, headed for Arlington and Billerica. In theory, the Arlington bus runs every 20 minutes and the Billerica bus every 30 minutes, but by the time they get to me, the time between buses is well-modeled by exponential distributions with means 20 and 30. Part ...
# Solution

def generate_times(lam, n=10):
    gaps = np.random.exponential(lam, n)
    times = np.cumsum(gaps)
    for time in times:
        yield time

# Solution

for time in generate_times(20, 10):
    print(time)

# Solution

def generate_buses(names, lams, n):
    buses = [generate_times(lam, n) for lam in lams]...
Working with preprocessing layers
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7],])
layer = layers.Normalization()
layer.adapt(data)
normalized_data = layer(data)

print("Features mean: %.2f" % (normalized_data.numpy().mean()))
print("Features std: %.2f"...
site/en-snapshot/guide/keras/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
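The same per-feature standardization can be reproduced with plain NumPy, which is a useful sanity check on what adapt() computes (assuming the layer's default axis, which normalizes each feature over the batch):

```python
import numpy as np

data = np.array([[0.1, 0.2, 0.3], [0.8, 0.9, 1.0], [1.5, 1.6, 1.7]])
# Standardize each column using the mean and standard deviation of the batch
normalized = (data - data.mean(axis=0)) / data.std(axis=0)
print("Features mean: %.2f" % normalized.mean())  # → 0.00
print("Features std: %.2f" % normalized.std())    # → 1.00
```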
The adapt() method takes either a Numpy array or a tf.data.Dataset object. In the case of StringLookup and TextVectorization, you can also pass a list of strings:
data = [
    "ξεῖν᾽, ἦ τοι μὲν ὄνειροι ἀμήχανοι ἀκριτόμυθοι",
    "γίγνοντ᾽, οὐδέ τι πάντα τελείεται ἀνθρώποισι.",
    "δοιαὶ γάρ τε πύλαι ἀμενηνῶν εἰσὶν ὀνείρων:",
    "αἱ μὲν γὰρ κεράεσσι τετεύχαται, αἱ δ᾽ ἐλέφαντι:",
    "τῶν οἳ μέν κ᾽ ἔλθωσι διὰ πριστοῦ ἐλέφαντος,",
    "οἵ ῥ᾽ ἐλεφαίρονται, ἔπε᾽ ἀκράαντα φέροντες:"...
In addition, adaptable layers always expose an option to directly set state via constructor arguments or weight assignment. If the intended state values are known at layer construction time, or are calculated outside of the adapt() call, they can be set without relying on the layer's internal computation. For instance,...
vocab = ["a", "b", "c", "d"]
data = tf.constant([["a", "c", "d"], ["d", "z", "b"]])
layer = layers.StringLookup(vocabulary=vocab)
vectorized_data = layer(data)
print(vectorized_data)
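The lookup itself is simple enough to sketch in plain Python, which mirrors StringLookup's default indexing (index 0 reserved for out-of-vocabulary tokens, vocabulary entries starting at index 1):

```python
# Plain-Python sketch of the lookup; matches StringLookup defaults
# as I understand them, not the layer's internal implementation.
vocab = ["a", "b", "c", "d"]
table = {tok: i + 1 for i, tok in enumerate(vocab)}  # 0 is the OOV index

data = [["a", "c", "d"], ["d", "z", "b"]]
indexed = [[table.get(tok, 0) for tok in row] for row in data]
print(indexed)  # → [[1, 3, 4], [4, 0, 2]]
```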
Preprocessing data before the model or inside the model There are two ways you could be using preprocessing layers: Option 1: Make them part of the model, like this:

```python
inputs = keras.Input(shape=input_shape)
x = preprocessing_layer(inputs)
outputs = rest_of_the_model(x)
model = keras.Model(inputs, outputs)
```

With th...
from tensorflow import keras
from tensorflow.keras import layers

# Create a data augmentation stage with horizontal flipping, rotations, zooms
data_augmentation = keras.Sequential(
    [
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ]
)

# Load some da...
You can see a similar setup in action in the example image classification from scratch. Normalizing numerical features
# Load some data
(x_train, y_train), _ = keras.datasets.cifar10.load_data()
x_train = x_train.reshape((len(x_train), -1))
input_shape = x_train.shape[1:]
classes = 10

# Create a Normalization layer and set its internal state using the training data
normalizer = layers.Normalization()
normalizer.adapt(x_train)

# Creat...
Encoding string categorical features via one-hot encoding
# Define some toy data
data = tf.constant([["a"], ["b"], ["c"], ["b"], ["c"], ["a"]])

# Use StringLookup to build an index of the feature values and encode output.
lookup = layers.StringLookup(output_mode="one_hot")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.cons...
Note that, here, index 0 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can see the StringLookup in action in the Structured data classification from scratch example. Encoding integer categorical features via one-hot encoding
# Define some toy data
data = tf.constant([[10], [20], [20], [10], [30], [0]])

# Use IntegerLookup to build an index of the feature values and encode output.
lookup = layers.IntegerLookup(output_mode="one_hot")
lookup.adapt(data)

# Convert new test data (which includes unknown feature values)
test_data = tf.constant(...
Note that index 0 is reserved for missing values (which you should specify as the value 0), and index 1 is reserved for out-of-vocabulary values (values that were not seen during adapt()). You can configure this by using the mask_token and oov_token constructor arguments of IntegerLookup. You can see the IntegerLookup...
# Sample data: 10,000 random integers with values between 0 and 100,000
data = np.random.randint(0, 100000, size=(10000, 1))

# Use the Hashing layer to hash the values to the range [0, 64]
hasher = layers.Hashing(num_bins=64, salt=1337)

# Use the CategoryEncoding layer to multi-hot encode the hashed values
encoder =...
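The idea behind the hashing trick can be sketched without TensorFlow: any stable hash reduced modulo the number of bins gives a bounded index without storing a vocabulary. This sketch uses zlib.crc32 purely for illustration; Keras's Hashing layer uses a different, salted hash, so the bin assignments will not match the layer's.

```python
import zlib
import numpy as np

num_bins = 64
data = np.random.randint(0, 100000, size=(10000, 1))

def hash_bin(value, num_bins):
    # Stable hash of the value's string form, folded into [0, num_bins)
    return zlib.crc32(str(value).encode()) % num_bins

bins = np.array([hash_bin(int(v), num_bins) for v in data[:, 0]])
print(bins.min(), bins.max())  # all bins fall in [0, 64)
```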
Encoding text as a sequence of token indices This is how you should preprocess text to be passed to an Embedding layer.
# Define some text data to adapt the layer
adapt_data = tf.constant(
    [
        "The Brain is wider than the Sky",
        "For put them side by side",
        "The one the other will contain",
        "With ease and You beside",
    ]
)

# Create a TextVectorization layer
text_vectorizer = layers.TextVectorization(...
site/en-snapshot/guide/keras/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
You can see the TextVectorization layer in action, combined with an Embedding layer, in the example text classification from scratch. Note that when training such a model, for best performance, you should always use the TextVectorization layer as part of the input pipeline. Encoding text as a dense matrix of ngrams with...
# Define some text data to adapt the layer adapt_data = tf.constant( [ "The Brain is wider than the Sky", "For put them side by side", "The one the other will contain", "With ease and You beside", ] ) # Instantiate TextVectorization with "multi_hot" output_mode # and ngrams=2 (in...
site/en-snapshot/guide/keras/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
Encoding text as a dense matrix of ngrams with TF-IDF weighting This is an alternative way of preprocessing text before passing it to a Dense layer.
# Define some text data to adapt the layer adapt_data = tf.constant( [ "The Brain is wider than the Sky", "For put them side by side", "The one the other will contain", "With ease and You beside", ] ) # Instantiate TextVectorization with "tf-idf" output_mode # (multi-hot with TF-...
site/en-snapshot/guide/keras/preprocessing_layers.ipynb
tensorflow/docs-l10n
apache-2.0
<a id='fit'></a> 1. Fit a Topic Model using LDA Now we're ready to fit the model. This requires the use of CountVectorizer, which we've already used, and the scikit-learn function LatentDirichletAllocation. See here for more information about this function.
####Adopted From: #Author: Olivier Grisel <olivier.grisel@ensta.org> # Lars Buitinck # Chyi-Kwei Yau <chyikwei.yau@gmail.com> # License: BSD 3 clause from sklearn.feature_extraction.text import CountVectorizer from sklearn.decomposition import LatentDirichletAllocation n_samples = 2000 n_topics = 4 n...
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='dtd'></a> 2. Document by Topic Distribution One thing we may want to do with the output is find the most representative texts for each topic. A simple way to do this (but not memory efficient), is to merge the topic distribution back into the Pandas dataframe. First get the topic distribution array.
topic_dist = lda.transform(tf) topic_dist
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
Merge back in with the original dataframe.
topic_dist_df = pandas.DataFrame(topic_dist) df_w_topics = topic_dist_df.join(df_lit) df_w_topics
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
Now we can sort the dataframe for the topic of interest, and view the top documents for the topics. Below we sort the documents first by Topic 0 (looking at the top words for this topic I think it's about family, health, and domestic activities), and next by Topic 1 (again looking at the top words I think this topic is...
print(df_w_topics[['title', 'author gender', 0]].sort_values(by=[0], ascending=False)) print(df_w_topics[['title', 'author gender', 1]].sort_values(by=[1], ascending=False)) #EX: What is the average topic weight by author gender, for each topic? ### Graph these results #Hint: You can use the python 'range' function ...
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
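The merge-and-sort workflow above can be sketched with toy data (the weights and titles here are hypothetical, shaped like what `lda.transform(tf)` returns):

```python
import numpy as np
import pandas as pd

# hypothetical doc-topic weights: one row per document,
# one column per topic, each row summing to 1
topic_dist = np.array([[0.8, 0.2],
                       [0.1, 0.9],
                       [0.5, 0.5]])
df_lit = pd.DataFrame({'title': ['a', 'b', 'c']})
df_w_topics = pd.DataFrame(topic_dist).join(df_lit)

# most representative document for topic 0: sort descending on that column
top_doc = df_w_topics.sort_values(by=[0], ascending=False)['title'].iloc[0]
```

Because the topic columns keep their integer names (0, 1, ...) after the join, they sit alongside the string-named metadata columns in one dataframe.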
<a id='words'></a> 3. Words Aligned with each Topic Following DiMaggio et al., we can calculate the total number of words aligned with each topic, and compare by author gender.
#first create word count column df_w_topics['word_count'] = df_w_topics['text'].apply(lambda x: len(str(x).split())) df_w_topics['word_count'] #multiple topic weight by word count df_w_topics['0_wc'] = df_w_topics[0] * df_w_topics['word_count'] df_w_topics['0_wc'] #create a for loop to do this for every topic topi...
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
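The word-count weighting above reduces to a single multiply per topic column; a minimal sketch with hypothetical data:

```python
import pandas as pd

df_w_topics = pd.DataFrame({
    0: [0.2, 0.7],
    1: [0.8, 0.3],
    'text': ['one two three', 'four five'],
})
# words per document
df_w_topics['word_count'] = df_w_topics['text'].apply(lambda x: len(str(x).split()))

# words aligned with each topic = topic weight * document length
topic_columns = [0, 1]
for t in topic_columns:
    df_w_topics[f'{t}_wc'] = df_w_topics[t] * df_w_topics['word_count']
```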
Question: Why might we prefer one calculation over the other? The average topic weight per document versus the average number of words aligned with each topic? This brings us to... <a id='prev'></a> 4. Topic Prevalence
###EX: # Find the most prevalent topic in the corpus. # Find the least prevalent topic in the corpus. # Hint: How do we define prevalence? What are different ways of measuring this, # and the benefits/drawbacks of each? for e in col_list: print(e) print(df_w_topics[e].sum()/df...
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
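The two candidate measures of prevalence hinted at above can be computed side by side on hypothetical data: the mean per-document topic weight treats every document equally, while the word-aligned share weights long documents more heavily.

```python
import pandas as pd

# hypothetical topic weights and document lengths
df = pd.DataFrame({0: [0.2, 0.7], 1: [0.8, 0.3], 'word_count': [3, 2]})

# measure 1: mean topic weight per document (documents count equally)
mean_weight = {t: df[t].mean() for t in (0, 1)}

# measure 2: share of all words aligned with each topic (longer docs count more)
word_share = {t: (df[t] * df['word_count']).sum() / df['word_count'].sum()
              for t in (0, 1)}
```

Here the two measures disagree on which topic is most prevalent, which is exactly why the choice matters.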
<a id='time'></a> 4. Prevalence over time We can do the same as above, but by year, to graph the prevalence of each topic over time.
grouped_year = df_w_topics.groupby('year') fig3 = plt.figure() chrt = 0 for e in col_list: chrt += 1 ax2 = fig3.add_subplot(2,3, chrt) (grouped_year[e].sum()/grouped_year['word_count'].sum()).plot(kind='line', title=e) fig3.tight_layout() plt.show()
05-TextExploration/00-IntroductionToTopicModeling_ExerciseSolutions.ipynb
lknelson/text-analysis-2017
bsd-3-clause
NetworKIN takes two input files: + fasta_file: A file containing all sequences of the proteins of interest + site_file: A file listing all phosphosites in the format: ID tab position tab residue With the function prepare_networkin_files, the needed files with the right layout are produced in a specif...
kinact.networkin.prepare_networkin_files(phospho_sites=data_log2.index.tolist(), output_dir='./networkin_example_files/', organism='human')
doc/networkin_example.ipynb
saezlab/kinact
gpl-3.0
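The site_file layout described above (ID tab position tab residue) can be written by hand when needed; the protein IDs and sites below are hypothetical placeholders:

```python
import os
import tempfile

# hypothetical phosphosites: (protein ID, position, residue)
sites = [('P12345', 15, 'S'), ('Q67890', 203, 'T')]

# one tab-separated line per site, as NetworKIN expects
site_file = os.path.join(tempfile.mkdtemp(), 'site_file.txt')
with open(site_file, 'w') as fh:
    for protein_id, position, residue in sites:
        fh.write(f'{protein_id}\t{position}\t{residue}\n')

lines = open(site_file).read().splitlines()
```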
Usage of NetworKIN Web-Interface NetworKIN can be used via the high-throughput version of the web interface. In order to do so, select 'Human - UniProt' or 'Yeast - Uniprot' from the drop-down menu and paste the contents of the file 'site_file.txt' into the dedicated field. It is possible that several phosphosites can...
adjacency_matrix = kinact.networkin.get_kinase_targets_from_networkin('./networkin_example_files/output.txt', add_omnipath=False, score_cut_off=1) scores, p_values = kinact.networkin.weighted_mean(data_fc=data_log2['5min'], ...
doc/networkin_example.ipynb
saezlab/kinact
gpl-3.0
Sparse 2d interpolation In this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain: The square domain covers the region $x\in[-5,5]$ and $y\in[-5,5]$. The values of $f(x,y)$ are zero on the boundary of the square at integer spaced points. The value of $f$ is know...
# YOUR CODE HERE raise NotImplementedError()
assignments/assignment08/InterpolationEx02.ipynb
aschaffn/phys202-2015-work
mit
Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain: xnew and ynew should be 1d arrays with 100 points between $[-5,5]$. Xnew and Ynew should be 2d versions of xnew and ynew created by meshgrid. Fnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xne...
# YOUR CODE HERE raise NotImplementedError() assert xnew.shape==(100,) assert ynew.shape==(100,) assert Xnew.shape==(100,100) assert Ynew.shape==(100,100) assert Fnew.shape==(100,100)
assignments/assignment08/InterpolationEx02.ipynb
aschaffn/phys202-2015-work
mit
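One way the meshgrid + griddata step above can be sketched is shown below. The known points here are an assumed toy version of the exercise data (zeros at the square's corners, a single nonzero value at the center), not the actual assignment values:

```python
import numpy as np
from scipy.interpolate import griddata

# assumed known points: zeros at the four corners, 1.0 at the center
x = np.array([-5.0, 5.0, -5.0, 5.0, 0.0])
y = np.array([-5.0, -5.0, 5.0, 5.0, 0.0])
f = np.array([0.0, 0.0, 0.0, 0.0, 1.0])

xnew = np.linspace(-5, 5, 100)        # 1d, shape (100,)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)  # 2d, shape (100, 100)

# interpolate the scattered (x, y, f) samples onto the dense grid
Fnew = griddata((x, y), f, (Xnew, Ynew), method='linear')
```

With `method='linear'` the interpolant is defined on the convex hull of the known points; since the corners are included here, that hull covers the whole square.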
Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
# YOUR CODE HERE raise NotImplementedError() assert True # leave this to grade the plot
assignments/assignment08/InterpolationEx02.ipynb
aschaffn/phys202-2015-work
mit
Quickstart: a simple NN in TensorFlow
n_input_nodes = 2 n_output_nodes = 1 x = tf.placeholder(tf.float32, (None, n_input_nodes)) W = tf.Variable(tf.ones((n_input_nodes, n_output_nodes)), dtype=tf.float32) b = tf.Variable(tf.zeros(n_output_nodes), dtype=tf.float32) z = tf.matmul(x, W) + b out = tf.sigmoid(z) test_input = [[0.5, 0.5]] with tf.Session() as...
notebooks/07 - LSTM classifier tensorflow.ipynb
delsner/dl-exploration
mit
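The graph above can be checked by hand with NumPy, using the same initial values (W all ones, b zero, one input row of [0.5, 0.5]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# same shapes and initial values as the TensorFlow graph
W = np.ones((2, 1))
b = np.zeros(1)
x = np.array([[0.5, 0.5]])

# z = 0.5*1 + 0.5*1 + 0 = 1.0, so the output is sigmoid(1.0) ≈ 0.731
out = sigmoid(x @ W + b)
```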
LSTMs for Tweet Sentiment Classification see https://github.com/nicholaslocascio/bcs-lstm/blob/master/Lab.ipynb Sentiment classification will be done based on words, not on characters! Model Parameters
# set variables tweet_size = 20 hidden_size = 100 vocab_size = 7597 # number of words in our vocabulary batch_size = 64 # this just makes sure that all our following operations will be placed in the right graph. tf.reset_default_graph() # create a session variable that we can run later. session = tf.Session()
notebooks/07 - LSTM classifier tensorflow.ipynb
delsner/dl-exploration
mit
Placeholders for input
# batch_size x tweet_size (each word in tweet) x one_hot_vector of size vocab_size tweets = tf.placeholder(dtype=tf.float32, shape=[None, tweet_size, vocab_size]) # 1d vector of size batch_size as we predict one value per tweet in batch labels = tf.placeholder(dtype=tf.float32, shape=[None])
notebooks/07 - LSTM classifier tensorflow.ipynb
delsner/dl-exploration
mit
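The shape of the `tweets` placeholder above can be made concrete with NumPy; the vocabulary size and word indices below are small stand-ins for illustration (the real vocabulary has 7597 words):

```python
import numpy as np

vocab_size = 5                     # small stand-in vocabulary
word_ids = np.array([[1, 3, 2]])   # one "tweet" of three word indices

# indexing into an identity matrix one-hot encodes each word,
# giving shape (batch_size, tweet_size, vocab_size)
one_hot = np.eye(vocab_size, dtype=np.float32)[word_ids]
```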
Build LSTM layers We want to feed the input sequence, word by word, into an LSTM layer, or multiple LSTM layers (we could also call this an LSTM encoder). At each "timestep", we feed in the next word, and the LSTM updates its cell state. The final LSTM cell state can then be fed through a final classification layer(s) ...
# create 2 LSTM cells -> creates a layer of LSTM cells not just a single one lstm_cell_1 = tf.contrib.rnn.LSTMCell(hidden_size) lstm_cell_2 = tf.contrib.rnn.LSTMCell(hidden_size) # create multiple LSTM layers by wrapping the two lstm cells in MultiRNNCell multi_lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, ls...
notebooks/07 - LSTM classifier tensorflow.ipynb
delsner/dl-exploration
mit
Classification layer Once we have the final state of the LSTM layers after feeding in the tweet word by word, we can feed it into a classification layer.
# function to create a weight matrix + bias parameters and matrix multiplication def linear(input_, output_size, name, init_bias=0.0): shape = input_.get_shape().as_list() with tf.variable_scope(name): W = tf.get_variable( name='weights', shape=[shape[-1], output_size], ...
notebooks/07 - LSTM classifier tensorflow.ipynb
delsner/dl-exploration
mit
Install TFX Note: In Google Colab, because of package updates, the first time you run this cell you must restart the runtime (Runtime > Restart runtime ...).
# Install the TensorFlow Extended library !pip install -U tfx
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Restart the kernel Please ignore any incompatibility warnings and errors. Restart the kernel to use updated packages. (On the Notebook menu, select Kernel > Restart Kernel > Restart). Import packages You import necessary packages, including standard TFX component classes.
import os import pprint import tempfile import urllib import absl import tensorflow as tf import tensorflow_model_analysis as tfma tf.get_logger().propagate = False pp = pprint.PrettyPrinter() from tfx import v1 as tfx from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext %loa...
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's check the library versions.
print('TensorFlow version: {}'.format(tf.__version__)) print('TFX version: {}'.format(tfx.__version__))
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Set up pipeline paths
# This is the root directory for your TFX pip package installation. _tfx_root = tfx.__path__[0] # This is the directory containing the TFX Chicago Taxi Pipeline example. _taxi_root = os.path.join(_tfx_root, 'examples/chicago_taxi_pipeline') # This is the path where your model will be pushed for serving. # TODO: Your ...
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Download example data You download the example dataset for use in your TFX pipeline. The dataset you're using is the Taxi Trips dataset released by the City of Chicago. The columns in this dataset are: <table> <tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr> <tr><td>trip_start_hour</td><td>...
_data_root = tempfile.mkdtemp(prefix='tfx-data') DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv' _data_filepath = # TODO: Your code goes here urllib.request.urlretrieve(DATA_PATH, _data_filepath)
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Take a quick look at the CSV file.
# print first ten lines of the file !head {_data_filepath}
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Disclaimer: This site provides applications using data that has been modified for use from its original source, www.cityofchicago.org, the official website of the City of Chicago. The City of Chicago makes no claims as to the content, accuracy, timeliness, or completeness of any of the data provided at this site. The d...
# Here, you create an InteractiveContext using default parameters. This will # use a temporary directory with an ephemeral ML Metadata database instance. # To use your own pipeline root or database, the optional properties # `pipeline_root` and `metadata_connection_config` may be passed to # InteractiveContext. Calls t...
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Run TFX components interactively In the cells that follow, you create TFX components one-by-one, run each of them, and visualize their output artifacts. ExampleGen The ExampleGen component is usually at the start of a TFX pipeline. It will: Split data into training and evaluation sets (by default, 2/3 training + 1/3 e...
example_gen = tfx.components.CsvExampleGen(input_base=_data_root) context.run(example_gen, enable_cache=True)
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's examine the output artifacts of ExampleGen. This component produces two artifacts, training examples and evaluation examples:
artifact = example_gen.outputs['examples'].get()[0] print(artifact.split_names, artifact.uri)
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
You can also take a look at the first three training examples:
# Get the URI of the output artifact representing the training examples, which is a directory train_uri = os.path.join(example_gen.outputs['examples'].get()[0].uri, 'Split-train') # Get the list of files in this directory (all compressed TFRecord files) tfrecord_filenames = [os.path.join(train_uri, name) ...
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Now that ExampleGen has finished ingesting the data, the next step is data analysis. StatisticsGen The StatisticsGen component computes statistics over your dataset for data analysis, as well as for use in downstream components. It uses the TensorFlow Data Validation library. StatisticsGen takes as input the dataset yo...
statistics_gen = tfx.components.StatisticsGen( examples=example_gen.outputs['examples']) context.run(statistics_gen, enable_cache=True)
courses/machine_learning/deepdive2/tensorflow_extended/labs/components_keras.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0