Run it on each test image and use the output detection features and metadata to build up a context feature bank:
def run_inference(model, image_path, date_captured, resize_image=True):
  """Runs inference over a single input image and extracts contextual features.

  Args:
    model: A tensorflow saved_model object.
    image_path: Absolute path to the input image.
    date_captured: A datetime string of format '%Y-%m-%d %H:%M:%S...
research/object_detection/colab_tutorials/context_rcnn_tutorial.ipynb
tombstone/models
apache-2.0
Run Detection With Context Load a Context R-CNN object detection model:
context_rcnn_model_name = 'context_rcnn_resnet101_snapshot_serengeti_2020_06_10'
context_rcnn_model = load_model(context_rcnn_model_name)
We need to define the expected context padding size for the model; this must match the definition in the model config (max_num_context_features).
context_padding_size = 2000
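As a minimal sketch of what this padding means (the helper name and the feature dimension below are illustrative, not from the tutorial), each image's context feature matrix can be zero-padded along the first axis up to the fixed size the model expects, while keeping track of how many rows are real:

```python
import numpy as np

def pad_context_features(features, context_padding_size):
    """Zero-pad a (num_features, feature_dim) matrix along axis 0."""
    num_valid = features.shape[0]
    padding = context_padding_size - num_valid
    padded = np.pad(features, ((0, padding), (0, 0)), mode='constant')
    return padded, num_valid

# Three valid context features with an arbitrary feature dimension.
features = np.ones((3, 2057), dtype=np.float32)
padded, valid_context_size = pad_context_features(features, 2000)
print(padded.shape)        # (2000, 2057)
print(valid_context_size)  # 3
```

The valid count is what gets passed to the model as valid_context_size, so the padded rows can be ignored.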
Check the model's input signature. It expects a batch of 3-channel images of type uint8, plus context_features padded to the maximum context feature size for this model (2000), and valid_context_size to represent the number of non-padded context features:
context_rcnn_model.inputs
And returns several outputs:
context_rcnn_model.output_dtypes
context_rcnn_model.output_shapes

def run_context_rcnn_inference_for_single_image(
    model, image, context_features, context_padding_size):
  '''Run single image through a Context R-CNN saved_model.

  This function runs a saved_model on a (single) provided image and provided cont...
Define Matplotlib parameters for pretty visualizations
%matplotlib inline
plt.rcParams['axes.grid'] = False
plt.rcParams['xtick.labelsize'] = False
plt.rcParams['ytick.labelsize'] = False
plt.rcParams['xtick.top'] = False
plt.rcParams['xtick.bottom'] = False
plt.rcParams['ytick.left'] = False
plt.rcParams['ytick.right'] = False
plt.rcParams['figure.figsize'] = [15, 10]
Run Context R-CNN inference and compare results to Faster R-CNN
for image_path in TEST_IMAGE_PATHS:
  image_id = image_path_to_id[str(image_path)]
  faster_rcnn_output_dict = faster_rcnn_results[image_id]
  context_rcnn_image, faster_rcnn_image = show_context_rcnn_inference(
      context_rcnn_model, image_path, context_features_matrix,
      faster_rcnn_output_dict, context_paddin...
Hacking into Evolutionary Dynamics! This Jupyter notebook implements some of the ideas in the following two books, specifically chapters 1-5 of Evolutionary Dynamics. For a better understanding of the equations and code, please consult the books and the relevant papers. This notebook contains interactive contents using Javascript...
Nature-Cooperation.ipynb
btabibian/misc_notebooks
mit
Evolution Basic model \begin{align} \dot{x} = \frac{dx}{dt} = (r-d)x(1-x/K) \end{align} $r$: reproduction rate $d$: hazard rate $K$: Maximum capacity
fig = plt.figure()
plt.close(fig)

def oneCell(r, d, max_x):
    clear_output(wait=True)
    t_f = 10
    dt = 0.1

    def int_(t, x):
        dev = x*(r-d)
        if max_x != None:
            dev *= (1-x/max_x)
        #print("dev",dev,x)
        return dev

    integ = integrate.ode(int_)
    y = np.zeros(int(t_f/dt)+1)
    x = np.zeros(int(t_f/dt)+1)
    ...
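The same basic model can be sketched without the ODE machinery: a plain forward-Euler loop over $\dot{x} = (r-d)\,x\,(1-x/K)$, with illustrative parameter values (not the notebook's interactive defaults):

```python
import numpy as np

# Forward-Euler integration of logistic growth dx/dt = (r - d) * x * (1 - x / K).
# Parameter values are assumed for illustration.
r, d, K = 2.0, 0.5, 100.0
dt, t_f = 0.01, 20.0
x = 1.0  # initial population
for _ in range(int(t_f / dt)):
    x += dt * (r - d) * x * (1 - x / K)
print(round(x, 2))  # approaches the carrying capacity K = 100
```

Whatever the positive starting value, the population saturates at the carrying capacity $K$.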
Selection-Mutation Selection operates whenever different types of individuals reproduce at different rates. \begin{align} \dot{\vec{x}} =\vec{x}Q-\phi\vec{x}. \end{align} $\vec{x}$: population ratio of type $i$. $Q$: Mutation matrix. $\phi$: average fitness
fig = plt.figure()
plt.close(fig)

def twoCell(init_, rate):
    clear_output(wait=True)
    t_f = 10
    dt = 0.1
    update_rate = np.asarray(rate)

    def int_(t, x):
        dev = x.T.dot(update_rate) - x
        return dev

    integ = integrate.ode(int_)
    y = np.zeros((int(t_f/dt)+1, update_rate.shape[0]))
    x = np.zeros((int(t_f/dt)+1,...
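A minimal sketch of the flow $\dot{\vec{x}} = \vec{x}Q - \phi\vec{x}$, with an assumed 2-type mutation matrix $Q$ (rows sum to 1, so $\phi = 1$ keeps the total frequency constant); the numbers are illustrative, not from the notebook:

```python
import numpy as np

# Euler steps of x_dot = x Q - x for a row-stochastic mutation matrix Q.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
x = np.array([1.0, 0.0])  # start with all individuals of type 0
dt = 0.01
for _ in range(5000):
    x += dt * (x @ Q - x)
print(x.round(3))  # converges to the stationary distribution of Q: [2/3, 1/3]
```

Since each row of $Q$ sums to 1, the total frequency stays at 1 and the dynamics relax to the stationary distribution of the mutation matrix.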
Multiple species.
objects_1 = [] status_label_1 = widgets.Label() _ = call_back_mute(3,objects_1,status_label_1,lambda x:updateplot(x,objects_1,status_label_1))
Genomes are Sequences Quasispecies equation \begin{align} \dot{x_i} =\sum_{j=0}^{n} x_j ~ f_j ~ q_{ji} - \phi x_i. \end{align} $x_i$: population ratio of type $i$. $f_i$: fitness of type $i$. $q_{ji}$: probability of mutation from type $j$ to type $i$, $q_{ji} = u^{h_{ij}}(1-u)^{L-h_{ij}}$. $~L:$ Length of genome. $~u:$ mutati...
fig = plt.figure()
plt.close(fig)

def genomeSequence(N, drich_alpha, point_mut):
    np.random.seed(0)
    clear_output(wait=True)
    if point_mut is not None:
        L, u = point_mut
    t_f = 10
    dt = 0.1
    x_ = np.random.uniform(size=(N))
    x_ = x_/x_.sum()
    f = np.random.lognormal(size=(N))
    if drich_alpha is not No...
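The mutation kernel above can be built explicitly. This sketch (assumed, tiny parameters) constructs $q_{ji} = u^{h_{ij}}(1-u)^{L-h_{ij}}$ for all binary genomes of length $L$, where $h_{ij}$ is the Hamming distance:

```python
import itertools
import numpy as np

# Quasispecies mutation matrix for binary genomes of length L with
# per-site mutation rate u (illustrative values).
L, u = 3, 0.05
genomes = list(itertools.product([0, 1], repeat=L))
n = len(genomes)
q = np.zeros((n, n))
for j in range(n):
    for i in range(n):
        h = sum(a != b for a, b in zip(genomes[j], genomes[i]))  # Hamming distance
        q[j, i] = u**h * (1 - u)**(L - h)
print(q.shape)        # (8, 8)
print(q.sum(axis=1))  # each row sums to 1
```

Each row is a probability distribution over offspring types, since summing over Hamming distances reproduces the binomial expansion of $(u + (1-u))^L = 1$.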
Fitness Landscape \begin{align} \dot{x}_0 &= x_0(f_0 q-\phi) \\ \dot{x}_1 &= x_0 f_0 (1-q) + x_1 - \phi x_1 \end{align} $q = (1-u)^L$: probability of an exact copy of the master genome. $u$: probability of a mutation on one gene. $L$: length of genome.
fig = plt.figure()
plt.close(fig)

def genomeSequenceQ(f_0, u, L):
    np.random.seed(0)
    clear_output(wait=True)
    t_f = 10
    dt = 0.1
    x_ = np.random.uniform(size=2)
    x_ = x_/x_.sum()
    f = np.array([f_0, 1])
    q = (1-u)**L

    def int_(t, x):
        mean = f[0]*x[0] + f[1]*x[1]
        dev = np.zeros(x.shape[0])
        dev[0...
Evolutionary Games Two player games \begin{align} \dot{x}_A &= x_A ~ [f_A(\vec{x}) - \phi ] \\ \dot{x}_B &= x_B ~ [f_B(\vec{x}) - \phi ] \end{align} \begin{align} f_A(\vec{x}) &= a~x_A+b~x_B \\ f_B(\vec{x}) &= c~x_A+d~x_B \end{align} Payoff matrix: \begin{align} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \end{al...
fig = plt.figure()
plt.close(fig)

def evolutionaryGame(x_, f, labels=None):
    np.random.seed(0)
    clear_output(wait=True)
    t_f = 10
    dt = 0.1
    x_ = np.asarray(x_)
    x_ = np.atleast_2d(x_).T
    f = np.asarray(f)

    def int_(t, x):
        mean = x.T.dot(f.dot(x))
        dev = x*(f.dot(x) - mean)
        return dev

    integ = in...
Prisoner's Dilemma Payoff matrix: \begin{align} \begin{pmatrix} & C & D \\ C & 3 & 0 \\ D & 5 & 1 \end{pmatrix} \end{align} The Nash equilibrium in this game is to always defect (D,D).
R = 3
S = 0
T = 5
P = 1
payoff = [[R, S], [T, P]]
evolutionaryGame([0.6, 0.4], payoff, ["Cooperate", "Defect"])
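A self-contained replicator-dynamics sketch of this game (plain Euler steps, not the notebook's evolutionaryGame helper) shows defection taking over, as the Nash analysis predicts:

```python
import numpy as np

# Replicator dynamics x_dot = x * (f(x) - phi) for the Prisoner's Dilemma
# payoff matrix [[3, 0], [5, 1]] (rows: C, D).
payoff = np.array([[3.0, 0.0],
                   [5.0, 1.0]])
x = np.array([0.6, 0.4])  # initial shares of cooperators and defectors
dt = 0.01
for _ in range(10000):
    f = payoff @ x       # fitness of each strategy
    phi = x @ f          # average fitness
    x += dt * x * (f - phi)
print(x.round(3))  # defectors dominate: approximately [0, 1]
```

Because defection strictly dominates, cooperators die out from any mixed starting point.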
Direct Reciprocity vs. Always Defect. Tomorrow never dies! Payoff matrix: \begin{align} \begin{pmatrix} & GRIM & ALLD \\ GRIM & 3m & 0+(m-1)\cdot 1 \\ ALLD & 5+(m-1)\cdot 1 & m\cdot 1 \end{pmatrix} \end{align} where $m$ is the expected number of rounds for which the game will be repeated. If $3m > 5+(m-1)$ then GRIM is a strict Nash equilibrium w...
def _EvolutionaryGamesProb(v):
    R = 3
    S = 0
    T = 5
    P = 1
    m_ = prob_tomorrow.value
    payoff = [[R*m_, S+(m_-1)*P], [T+(m_-1)*P, m_*P]]
    return evolutionaryGame([0.99, 0.01], payoff, ["GRIM", "ALLD"])

prob_tomorrow = widgets.FloatSlider(
    value=1,
    min=0,
    max=10.0,
    ...
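The Nash condition above can be checked directly. This tiny helper (illustrative, not from the notebook) evaluates $3m > 5 + (m-1)$, which simplifies to $m > 2$: GRIM becomes a strict Nash equilibrium once the game is expected to repeat for more than two rounds.

```python
# Check whether GRIM is a strict Nash equilibrium against ALLD after
# m expected rounds, using the standard payoffs R=3, S=0, T=5, P=1.
def grim_is_strict_nash(m, R=3, S=0, T=5, P=1):
    return R * m > T + (m - 1) * P

print(grim_is_strict_nash(1))  # False: one-shot game, defection wins
print(grim_is_strict_nash(3))  # True: 9 > 7
```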
Reactive strategies Tit-for-Tat. Payoff matrix: \begin{align} \begin{pmatrix} & CC & CD & DC & DD \\ CC & p_1p_2 & p_1(1-p_2) & (1-p_1)p_2 & (1-p_1)(1-p_2) \\ CD & q_1p_2 & q_1(1-p_2) & (1-q_1)p_2 & (1-q_1)(1-p_2) \\ DC & p_1q_2 & p_1(1-q_2) & (1-p_1)q_2 & (1-p_1)(1-q_2) \\ DD & q_1q_2 & q_1(1-q_2) & (1-q_1)q...
p_1 = widgets.FloatSlider(
    value=0.5,
    min=0,
    max=1.0,
    description="p_1",
    layout=widgets.Layout(width='100%', height='80px'))
q_1 = widgets.FloatSlider(
    value=0.5,
    min=0,
    max=1.0,
    description="q...
Refine the Data Let's check the dataset for quality and completeness: 1. Missing Values 2. Outliers Check for Missing Values
# Find if df has missing values. Hint: There is a isnull() function
df.isnull().head()
reference/Module-01a-reference.ipynb
amitkaps/applied-machine-learning
mit
One consideration we check here is the number of observations with missing values for those columns that have missing values. If a column has too many missing values, it might make sense to drop the column.
# Let's see how many missing values are present
df.isnull().sum()
So, we see that two columns have missing values: interest and years. Both columns are numeric. We have three options for dealing with these missing values. Options to treat Missing Values - REMOVE - NAN rows - IMPUTATION - Replace them with something?? - Mean - Median - Fixed Number - Domain Relevant ...
# Let's replace missing values with the median of the column
df.describe()

# There's a fillna function
df = df.fillna(df.median())

# Now, let's check if train has missing values or not
df.isnull().any()
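On a toy frame (hypothetical numbers, not the loan dataset) the median-imputation step looks like this:

```python
import numpy as np
import pandas as pd

# Median imputation: each NaN is replaced by its column's median.
df = pd.DataFrame({'interest': [10.0, np.nan, 14.0],
                   'years': [3.0, 5.0, np.nan]})
df = df.fillna(df.median())
print(df.isnull().any().any())  # False: no missing values remain
print(df.loc[1, 'interest'])    # 12.0, the median of [10, 14]
```

The median is preferred over the mean when the column may contain outliers, since the median is robust to them.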
Check for Outlier Values Let us check first the categorical variables
# Which variables are Categorical?
df.dtypes

# Create a Crosstab of those variables with another variable
pd.crosstab(df.default, df.grade)

# Create a Crosstab of those variables with another variable
pd.crosstab(df.default, df.ownership)
Let us now check for outliers in the continuous variables. Plotting: Histogram, Box-Plot. Measuring: Z-score > 3, or modified Z-score > 3.5, where modified Z-score = 0.6745 * (x - x_median) / MAD
# Describe the data set continuous values
df.describe()
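The modified Z-score rule quoted above can be sketched directly (the sample ages below are made up for illustration):

```python
import numpy as np

# Modified Z-score: 0.6745 * (x - median) / MAD, where MAD is the
# median absolute deviation; values above 3.5 are flagged as outliers.
def modified_zscore(x):
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return 0.6745 * (x - med) / mad

ages = np.array([22, 25, 28, 31, 35, 40, 144])
scores = modified_zscore(ages)
print(np.abs(scores) > 3.5)  # only the age-144 entry is flagged
```

Unlike the plain Z-score, this statistic uses the median and MAD, so a single extreme value cannot inflate the spread estimate and mask itself.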
Clearly the age variable looks like it has an outlier - age cannot be greater than 100! The income variable also looks like it may have an outlier.
# Make a histogram of age
df.age.hist(bins=100)

# Make a histogram of income
df.income.hist(bins=100)

# Make Histograms for all other variables

# Make a scatter of age and income
plt.scatter(df.age, df.income)
Find the observation which has age = 144 and remove it from the dataframe
# Find the observation
df[df.age == 144]
df[df.age == 144].index

# Use drop to remove the observation inplace
df.drop(df[df.age == 144].index, axis=0, inplace=True)

# Find the shape of the df
df.shape

# Check again for outliers
df.describe()

# Save the new file as cleaned data
df.to_csv("data/loan_data_clean.csv"...
Problem Setup This tutorial is based on an example that appeared in a TLE tutorial (Louboutin et al., 2017), in which one shot is modeled over a 2-layer velocity model.
# This cell sets up the problem that is already explained in the first TLE tutorial.
#NBVAL_IGNORE_OUTPUT
#%%flake8
from examples.seismic import Receiver
from examples.seismic import RickerSource
from examples.seismic import Model, plot_velocity, TimeAxis
from devito import TimeFunction
from devito import Eq, solve
fr...
examples/seismic/tutorials/08_snapshotting.ipynb
opesci/devito
mit
Saving snaps to disk - naive approach We want to get equally spaced snaps out of the nt-2 time steps saved in u.data. The user defines the total number of snaps nsnaps, which determines a factor by which nt is divided.
nsnaps = 100
factor = round(u.shape[0] / nsnaps)  # Get approx nsnaps, for any nt

ucopy = u.data.copy(order='C')
filename = "naivsnaps.bin"
file_u = open(filename, 'wb')
for it in range(0, nsnaps):
    file_u.write(ucopy[it*factor, :, :])
file_u.close()
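The index arithmetic behind this subsampling can be checked in isolation (the nt value below is illustrative, not the tutorial's actual time-step count):

```python
# With nt time steps and a desired nsnaps, factor = round(nt / nsnaps)
# yields roughly equally spaced snapshot indices it * factor.
nt, nsnaps = 1000, 100
factor = round(nt / nsnaps)
indices = [it * factor for it in range(nsnaps)]
print(factor)       # 10
print(indices[:3])  # [0, 10, 20]
print(indices[-1])  # 990, still within the nt time steps
```

When nt is not an exact multiple of nsnaps, the rounded factor keeps the last index inside the array at the cost of slightly uneven coverage near the end.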
Check u.data, spaced by factor, using matplotlib:
#NBVAL_IGNORE_OUTPUT
plt.rcParams['figure.figsize'] = (20, 20)  # Increases figure size
imcnt = 1  # Image counter for plotting
plot_num = 5  # Number of images to plot
for i in range(0, nsnaps, int(nsnaps/plot_num)):
    plt.subplot(1, plot_num+1, imcnt+1)
    imcnt = imcnt + 1
    plt.imshow(np.transpose(u.data[i * f...
Or from the saved file:
#NBVAL_IGNORE_OUTPUT
fobj = open("naivsnaps.bin", "rb")
snaps = np.fromfile(fobj, dtype=np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz))  # reshape vec2mtx, devito format. nx first
fobj.close()

plt.rcParams['figure.figsize'] = (20, 20)  # Increases figure size
imcnt = 1  # Image counter for plotting
plot_num ...
This C/FORTRAN way of saving snaps is clearly not optimal when using Devito; the wavefield object u is specified to save all snaps, and a memory copy is done at every op time step. Given that we don't want all the snaps saved, this process is wasteful; only the selected snapshots should be copied during execution. To...
#NBVAL_IGNORE_OUTPUT
from devito import ConditionalDimension

nsnaps = 103  # desired number of equally spaced snaps
factor = round(nt / nsnaps)  # subsequent calculated factor
print(f"factor is {factor}")

# Part 1 #############
time_subsampled = ConditionalDimension(
    't_sub', parent=model.grid.time_dim,...
As usave.data has the desired snaps, no extra variable copy is required. The snaps can then be visualized:
#NBVAL_IGNORE_OUTPUT
fobj = open("snaps2.bin", "rb")
snaps = np.fromfile(fobj, dtype=np.float32)
snaps = np.reshape(snaps, (nsnaps, vnx, vnz))
fobj.close()

plt.rcParams['figure.figsize'] = (20, 20)  # Increases figure size
imcnt = 1  # Image counter for plotting
plot_num = 5  # Number of images to plot
for i in range(0...
About Part 1 Here a subsampled version (time_subsampled) of the full time Dimension (model.grid.time_dim) is created with the ConditionalDimension. time_subsampled is then used to define an additional symbolic wavefield usave, which will store in usave.data only the predefined number of snapshots (see Part 2). Further ...
def print2file(filename, thingToPrint):
    import sys
    orig_stdout = sys.stdout
    f = open(filename, 'w')
    sys.stdout = f
    print(thingToPrint)
    f.close()
    sys.stdout = orig_stdout

# print2file("op1.c", op1)  # uncomment to print to file
# print2file("op2.c", op2)  # uncomment to print to file
# p...
To run snaps as a movie (outside Jupyter Notebook), run the code below, altering filename, nsnaps, nx, nz accordingly:
#NBVAL_IGNORE_OUTPUT
#NBVAL_SKIP
from IPython.display import HTML
import matplotlib.pyplot as plt
import matplotlib.animation as animation

filename = "naivsnaps.bin"
nsnaps = 100
fobj = open(filename, "rb")
snapsObj = np.fromfile(fobj, dtype=np.float32)
snapsObj = np.reshape(snapsObj, (nsnaps, vnx, vnz))
fobj.close()
...
DICS for power mapping In this tutorial, we'll simulate two signals originating from two locations on the cortex. These signals will be sinusoids, so we'll be looking at oscillatory activity (as opposed to evoked activity). We'll use dynamic imaging of coherent sources (DICS) :footcite:GrossEtAl2001 to map out spectral...
# Author: Marijn van Vliet <w.m.vanvliet@gmail.com>
#
# License: BSD (3-clause)
0.23/_downloads/b89584de6ec99a847868d7b80a32cf50/80_dics.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
We create an :class:mne.Epochs object containing two trials: one with both noise and signal, and one with just noise.
events = mne.find_events(raw, initial_event=True)
tmax = (len(stc_signal.times) - 1) / sfreq
epochs = mne.Epochs(raw, events, event_id=dict(signal=1, noise=2),
                    tmin=0, tmax=tmax, baseline=None, preload=True)
assert len(epochs) == 2  # ensure that we got the two expected events

# Plot some of the ch...
We will now compute the cortical power map at 10 Hz using a DICS beamformer. A beamformer constructs, for each vertex, a spatial filter that aims to pass activity originating from the vertex while dampening activity from other sources as much as possible. The :func:mne.beamformer.make_dics function has many switche...
# Estimate the cross-spectral density (CSD) matrix on the trial containing the
# signal.
csd_signal = csd_morlet(epochs['signal'], frequencies=[10])

# Compute the spatial filters for each vertex, using two approaches.
filters_approach1 = make_dics(
    info, fwd, csd_signal, reg=0.05, pick_ori='max-power', depth=1., ...
Plot the DICS power maps for both approaches, starting with the first:
def plot_approach(power, n):
    """Plot the results on a brain."""
    title = 'DICS power map, approach %d' % n
    brain = power.plot(  # was power_approach1.plot, which ignored the power argument
        'sample', subjects_dir=subjects_dir, hemi='both', size=600,
        time_label=title, title=title)
    # Indicate the true locations of the source activity on the p...
Now the second:
brain2 = plot_approach(power_approach2, 2)
Toy Network Instantiate Add nodes Add edges Draw
G = nx.Graph()
G.add_nodes_from(['A','B','C','D','E','F','G'])
G.add_edges_from([('A','B'),('A','C'),
                  ('A','D'),('A','F'),
                  ('B','E'),('C','E'),
                  ('F','G')])
nx.draw_networkx(G, with_labels=True)
talks/MDI3/.ipynb_checkpoints/networkslides-checkpoint.ipynb
lwahedi/CurrentPresentation
mit
Centrality Way to measure the nature of the connectedness of a group Many centrality measures Use theory to pick one. Some common measures: Degree Centrality Number of ties Sum of rows In-degree: number of edges to a node Out-degree: number of edges from a node
deg = nx.degree_centrality(G)
print(deg)
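For intuition, degree centrality can be computed by hand for the toy graph above: each node's degree divided by n - 1, the maximum possible number of ties. This sketch hard-codes the same adjacency, without NetworkX:

```python
# Toy graph from above: edges A-B, A-C, A-D, A-F, B-E, C-E, F-G.
adjacency = {'A': {'B', 'C', 'D', 'F'}, 'B': {'A', 'E'}, 'C': {'A', 'E'},
             'D': {'A'}, 'E': {'B', 'C'}, 'F': {'A', 'G'}, 'G': {'F'}}
n = len(adjacency)
# Degree centrality: degree / (n - 1).
deg_cent = {node: len(nbrs) / (n - 1) for node, nbrs in adjacency.items()}
print(deg_cent['A'])  # 4/6, the best-connected node
print(deg_cent['G'])  # 1/6, a peripheral node
```

These values should match nx.degree_centrality(G) for the same graph.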
Eigenvector Centrality Connectedness to other well-connected nodes Theoretical Implication: A lot of work to maintain ties to everyone, sometimes just as good to know someone who knows everyone. Finding a job Rumors Supply Requires connected network Cannot compare across networks When might eigenvector centrality...
eig_c = nx.eigenvector_centrality_numpy(G)
toy_adj = nx.adjacency_matrix(G)
print(eig_c)
val, vec = np.linalg.eig(toy_adj.toarray())
print(val)
vec[:,0]
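The idea behind eigenvector centrality can be sketched with power iteration on a tiny assumed graph (a triangle with a pendant node, not the toy graph above): repeatedly multiplying by the adjacency matrix and renormalizing converges to the leading eigenvector, whose entries are the centrality scores.

```python
import numpy as np

# Power iteration on a 4-node graph: triangle 0-1-2 plus pendant edge 2-3.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
v = np.ones(4)
for _ in range(200):
    v = A @ v
    v /= np.linalg.norm(v)
print(v.round(3))  # node 2, tied to all others, scores highest
```

Node 2 is connected to every other node, including the otherwise well-connected triangle, so it gets the largest entry in the leading eigenvector.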
Betweenness Centrality Proportional to the number of shortest paths that pass through a given node How important is the node in connecting other nodes? The Medici family was not well connected, but strategically connected.
betw = nx.betweenness_centrality(G)
print(betw)
Centrality Measures Are Different Select based on theory you want to capture Take a minute to play around with the network and see how the relationships change
cent_scores = pd.DataFrame({'deg': deg, 'eig_c': eig_c, 'betw': betw})
print(cent_scores.corr())
cent_scores
Transitivity Extent to which friends have friends in common Probability two nodes are tied given that they have a partner in common Make a more transitive network:
G_trans = G.copy()
G_trans.add_edge('A','E')
G_trans.add_edge('F','D')
nx.draw_networkx(G_trans, with_labels=True)
Measure Transitivity Whole network: Transitivity: Proportion of possible triangles present in the network Individual nodes: Count the triangles
print("Transitivity:")
print(nx.transitivity(G))
print(nx.transitivity(G_trans))
print("Triangles:")
print(nx.triangles(G))
print(nx.triangles(G_trans))
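Counting triangles has a neat linear-algebra form worth knowing: the trace of the cubed adjacency matrix counts each triangle six times (three starting vertices, two directions). A sketch on an assumed 4-node graph (one triangle plus a pendant edge):

```python
import numpy as np

# Triangle 0-1-2 plus pendant edge 2-3; trace(A^3) / 6 counts triangles.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
n_triangles = np.trace(np.linalg.matrix_power(A, 3)) // 6
print(n_triangles)  # 1
```

The diagonal of A^3 gives the per-node triangle counts (before dividing by 2), which is what nx.triangles reports.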
Clustering Coefficient Individual Nodes: Proportion of possible triangles through a given node Whole Network Average clustering across whole network
print("Clustering coefficient")
print(nx.clustering(G))
print(nx.clustering(G_trans))
print("Average Clustering")
print(nx.average_clustering(G))
print(nx.average_clustering(G_trans))
Community Detection Divide the network into subgroups using different algorithms Examples Percolation: find communities with fully connected cores Minimum cuts (nodes): Find the minimum number of nodes that, if removed, break the network into multiple components. Progressively remove them. <strong>Girvan Newman Algo...
coms = nx.algorithms.community.centrality.girvan_newman(G)
i = 2
for com in itertools.islice(coms, 4):
    print(i, ' communities')
    i += 1
    print(tuple(c for c in com))
Real Network: Senate co-sponsorship Nodes: Senators Links: Sponsorship of the same piece of legislation. Weighted <h4>Download here:</h4> https://dataverse.harvard.edu/file.xhtml;jsessionid=e627083a7d8f43616bbe7d4ada3e?fileId=615937&version=RELEASED&version=.0 <h4> Start with the cosponsors.txt file</h4> Similar to...
edges = []
with open('cosponsors.txt') as d:
    for line in d:
        edges.append(line.split())
Subset the Data: Year <h3> 2004</h3> Download dates.txt Each row is the date Year, month, day separated by "-"
dates = pd.read_csv('Dates.txt', sep='-', header=None)
dates.columns = ['year','month','day']
index_loc = np.where(dates.year==2004)
edges_04 = [edges[i] for i in index_loc[0]]
Subset the Data: Senate Download senate.csv Gives the ids for senators Filter down to the rows for the 108th congress (2003-2004, matching the 2004 edge subset) <h3> This gives us our nodes </h3> Instantiate adjacency matrix of size nxn Create an ordinal index so we can index the matrix Add an attribute
# Get nodes
senate = pd.read_csv('senate.csv')
senators = senate.loc[senate.congress==108, ['id','party']]

# Create adjacency matrix
adj_mat = np.zeros([len(senators), len(senators)])
senators = pd.DataFrame(senators)
senators['adj_ind'] = range(len(senators))

# Create Graph Object
senateG = nx.Graph()
senateG.add_nodes_from...
Create the network (two ways) Loop through bills Check that there's data, and that it's a senate bill Create pairs for every combination of cosponsors Add directly to NetworkX graph object Add edges from the list of combinations Not weighted Add to adjacency matrix using new index Identify index for each pair Add ...
for bill in edges_04:
    if bill[0] == "NA":
        continue
    bill = [int(i) for i in bill]
    if bill[0] not in list(senators.id):
        continue
    combos = list(itertools.combinations(bill, 2))
    senateG.add_edges_from(combos)
    for pair in combos:
        i = senators.loc[senators.id == int(pair[0]), 'adj_ind']
        ...
Set edge weights for Network Object
for row in range(len(adj_mat)):
    cols = np.where(adj_mat[row, :])[0]
    i = senators.loc[senators.adj_ind==row, 'id']
    i = int(i)
    for col in cols:
        j = senators.loc[senators.adj_ind==col, 'id']
        j = int(j)
        senateG[i][j]['bills'] = adj_mat[row, col]
Thresholding Some bills have everyone as a sponsor These popular bills are less informative, end up with complete network Threshold: Take edges above a certain weight (more than n cosponsorships) Try different numbers
bill_dict = nx.get_edge_attributes(senateG, 'bills')
elarge = [(i, j) for (i, j) in bill_dict if bill_dict[(i, j)] > 40]
Look at the network Different layouts possible: <br> https://networkx.github.io/documentation/networkx-1.10/reference/drawing.html
nx.draw_spring(senateG, edgelist = elarge,with_labels=True)
Take out the singletons to get a clearer picture:
senateGt = nx.Graph()
senateGt.add_nodes_from(senateG.nodes)
senateGt.add_edges_from(elarge)
deg = senateGt.degree()
rem = [n[0] for n in deg if n[1]==0]
senateGt_all = senateGt.copy()
senateGt.remove_nodes_from(rem)
nx.draw_spring(senateGt, with_labels=True)
Look at the degree distribution Degree is a tuple listing the group name and the number of partnerships Add to a dataframe Separate the column into two columns using .apply Plot a histogram
deg = senateGt.degree()
foo = pd.DataFrame({'tup': deg})  # one column of (node, degree) tuples
foo[['grp','deg']] = foo['tup'].apply(pd.Series)  # split the tuples into two columns
foo.deg.plot.hist()
Look at party in the network Extract the party information Democrats coded as 100, republicans as 200
party = nx.get_node_attributes(senateG, 'party')
dems = []
gop = []
for i in party:
    if party[i]==100:
        dems.append(i)
    else:
        gop.append(i)
Prepare the Visualization Create positional coordinates for the groups with ties, and without ties Instantiate dictionaries to hold different sets of coordinates Loop through party members If they have no parters, add calculated position to the lonely dictionary If they have partners, add calculated position to the pa...
pos = nx.spring_layout(senateGt)
pos_all = nx.circular_layout(senateG)
dem_dict = {}
gop_dict = {}
dem_lone = {}
gop_lone = {}
for n in dems:
    if n in rem:
        dem_lone[n] = pos_all[n]
    else:
        dem_dict[n] = pos[n]
for n in gop:
    if n in rem:
        gop_lone[n] = pos_all[n]
    else:
        gop_dict[n] = pos[n]
Visualize the network by party Create lists of the party members who have ties Draw nodes in four categories using the position dictionaries we created party members, untied party members
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist=dems, node_color='b', node_size=100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist=gop, node_color='r', node_size=100)
nx.draw_networkx_nodes(senateG, pos=dem_lone, nodelist=list(dem_lo...
Do it again with a lower threshold:
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist=dems, node_color='b', node_size=100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist=gop, node_color='r', node_size=100)
nx.draw_networkx_nodes(senateGt_all, pos=dem_lone, nodelist=list(d...
Modularity: the fraction of edges within a community minus the expected fraction if edges were distributed randomly across the whole network. Modularity is high (> 0) when there are more connections within communities than between them. Different algorithms try to maximize this; here we use a newer one from NetworkX. Run cell at e...
colors = greedy_modularity_communities(senateGt, weight = 'bills')
Visualize the Communities Calculate a position for all nodes Separate network by the communities Draw the first set as red Draw the second set as blue Add the edges
pos = nx.spring_layout(senateGt)
pos0 = {}
pos1 = {}
for n in colors[0]:
    pos0[n] = pos[n]
for n in colors[1]:
    pos1[n] = pos[n]
nx.draw_networkx_nodes(senateGt, pos=pos0, nodelist=colors[0], node_color='r')
nx.draw_networkx_nodes(senateGt, pos=pos1, nodelist=colors[1], node_color='b')
nx.draw_networkx_edges(senateG...
How did we do? How many were misclassified? Note: the community labeling is arbitrary, so you may need to flip the comparison by switching colors[0] and colors[1]. Did pretty well!
print('gop misclassification')
for i in colors[1]:
    if i in dems:
        print(i, len(senateGt[i]))
print('dem misclassification')
for i in colors[0]:
    if i in gop:
        print(i, len(senateGt[i]))
Pretty, but now what? Structure is interesting itself Is polarization changing over time? What attributes of a senator or environment lead to more in-party cosponsorship. Use ERGM or Latent Space Model Beyond what we'll cover today, but check out: Edward's implementation of Latent Space Models: http://edwardlib.org/...
sh = pd.read_csv('SH.tab', sep='\t')
sh['dem'] = sh.party==100
sh['dem'] = sh.dem*1
model_data = sh.loc[
    (sh.congress == 108) & (sh.chamber=='S'),
    ['ids','dem','pb','pa']
]
model_data['passed'] = model_data.pb + model_data.pa
model_data.set_index('ids', inplace=True)
Merge in some network data Remember: The merge works because they have the same index
bet_cent = nx.betweenness_centrality(senateG, weight='bills')
bet_cent = pd.Series(bet_cent)
deg_cent = nx.degree_centrality(senateGt)
deg_cent = pd.Series(deg_cent)
model_data['between'] = bet_cent
model_data['degree'] = deg_cent
Degree is not significant
y = model_data.loc[:, 'passed']
x = model_data.loc[:, ['degree','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y, x, missing='drop')
results = ols_model1.fit()
print(results.summary())
Betweenness is! It's not how many bills you cosponsor that matters, it's who you cosponsor with.
y = model_data.loc[:, 'passed']
x = model_data.loc[:, ['between','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y, x, missing='drop')
results = ols_model1.fit()
print(results.summary())
Questions? Add functions from NetworkX The NetworkX documentation is buggy, and the version that comes with Anaconda is incomplete. Below I pasted a community detection function from their source code. Don't worry about what it's doing, just run it to add the functions.
# Some functions from the NetworkX package
import heapq

class MappedQueue(object):
    """The MappedQueue class implements an efficient minimum heap.

    The smallest element can be popped in O(1) time, new elements can be
    pushed in O(log n) time, and any element can be removed or updated in
    O(log n) time. The...
TensorFlow Constrained Optimization Example Using CelebA Dataset <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study"><img src="https://www.tensorflow.org/images/tf_logo_3...
#@title Pip installs !pip install -q -U pip==20.2 !pip install git+https://github.com/google-research/tensorflow_constrained_optimization !pip install -q tensorflow-datasets tensorflow !pip install fairness-indicators \ "absl-py==0.12.0" \ "apache-beam<3,>=2.38" \ "avro-python3==1.9.1" \ "pyzmq==17.0.0"
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.
#@title Import Modules import os import sys import tempfile import urllib import tensorflow as tf from tensorflow import keras import tensorflow_datasets as tfds tfds.disable_progress_bar() import numpy as np import tensorflow_constrained_optimization as tfco from tensorflow_metadata.proto.v0 import schema_pb2 fro...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Additionally, we add a few imports that are specific to Fairness Indicators which we will use to evaluate and visualize the model's performance.
#@title Fairness Indicators related imports import tensorflow_model_analysis as tfma import fairness_indicators as fi from google.protobuf import text_format import apache_beam as beam
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Although TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default as it is in TensorFlow 2.x. To ensure that nothing breaks, eager execution will be enabled in the cell below.
#@title Enable Eager Execution and Print Versions if tf.__version__ < "2.0.0": tf.compat.v1.enable_eager_execution() print("Eager execution enabled.") else: print("Eager execution enabled by default.") print("TensorFlow " + tf.__version__) print("TFMA " + tfma.VERSION_STRING) print("TFDS " + tfds.version.__versi...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
CelebA Dataset CelebA is a large-scale face attributes dataset with more than 200,000 celebrity images, each with 40 attribute annotations (such as hair type, fashion accessories, facial features, etc.) and 5 landmark locations (eyes, mouth and nose positions). For more details take a look at the paper. With the permis...
gcs_base_dir = "gs://celeb_a_dataset/" celeb_a_builder = tfds.builder("celeb_a", data_dir=gcs_base_dir, version='2.0.0') celeb_a_builder.download_and_prepare() num_test_shards_dict = {'0.3.0': 4, '2.0.0': 2} # Used because we download the test dataset separately version = str(celeb_a_builder.info.version) print('Cele...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Caveats Before moving forward, there are several considerations to keep in mind in using CelebA: * Although in principle this notebook could use any dataset of face images, CelebA was chosen because it contains public domain images of public figures. * All of the attribute annotations in CelebA are operationalized ...
#@title Define Variables ATTR_KEY = "attributes" IMAGE_KEY = "image" LABEL_KEY = "Smiling" GROUP_KEY = "Young" IMAGE_SIZE = 28 #@title Define Preprocessing Functions def preprocess_input_dict(feat_dict): # Separate out the image and target variable from the feature dictionary. image = feat_dict[IMAGE_KEY] label ...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Then, we build out the data functions we need in the rest of the colab.
# Train data returning either 2 or 3 elements (the third element being the group) def celeb_a_train_data_wo_group(batch_size): celeb_a_train_data = celeb_a_builder.as_dataset(split='train').shuffle(1024).repeat().batch(batch_size).map(preprocess_input_dict) return celeb_a_train_data.map(get_image_and_label) def cel...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Build a simple DNN Model Because this notebook focuses on TFCO, we will assemble a simple, unconstrained tf.keras.Sequential model. We may be able to greatly improve model performance by adding some complexity (e.g., more densely-connected layers, exploring different activation functions, increasing image size), but th...
def create_model(): # For this notebook, accuracy will be used to evaluate performance. METRICS = [ tf.keras.metrics.BinaryAccuracy(name='accuracy') ] # The model consists of: # 1. An input layer that represents the 28x28x3 image flatten. # 2. A fully connected layer with 64 units activated by a ReLU f...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
We also define a function to set seeds to ensure reproducible results. Note that this colab is meant as an educational tool and does not have the stability of a finely tuned production pipeline. Running without setting a seed may lead to varied results.
def set_seeds(): np.random.seed(121212) tf.compat.v1.set_random_seed(212121)
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Fairness Indicators Helper Functions Before training our model, we define a number of helper functions that will allow us to evaluate the model's performance via Fairness Indicators. First, we create a helper function to save our model once we train it.
def save_model(model, subdir): base_dir = tempfile.mkdtemp(prefix='saved_models') model_location = os.path.join(base_dir, subdir) model.save(model_location, save_format='tf') return model_location
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Next, we define functions used to preprocess the data in order to correctly pass it through to TFMA.
#@title Data Preprocessing functions for def tfds_filepattern_for_split(dataset_name, split): return f"{local_test_file_full_prefix()}*" class PreprocessCelebA(object): """Class that deserializes, decodes and applies additional preprocessing for CelebA input.""" def __init__(self, dataset_name): builder = t...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Finally, we define a function that evaluates the results in TFMA.
def get_eval_results(model_location, eval_subdir): base_dir = tempfile.mkdtemp(prefix='saved_eval_results') tfma_eval_result_path = os.path.join(base_dir, eval_subdir) eval_config_pbtxt = """ model_specs { label_key: "%s" } metrics_specs { metrics { class_n...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Train & Evaluate Unconstrained Model With the model now defined and the input pipeline in place, we’re now ready to train our model. To cut back on the amount of execution time and memory, we will train the model by slicing the data into small batches with only a few repeated iterations. Note that running this notebook...
BATCH_SIZE = 32 # Set seeds to get reproducible results set_seeds() model_unconstrained = create_model() model_unconstrained.fit(celeb_a_train_data_wo_group(BATCH_SIZE), epochs=5, steps_per_epoch=1000)
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Evaluating the model on the test data should result in a final accuracy score of just over 85%. Not bad for a simple model with no fine-tuning.
print('Overall Results, Unconstrained') celeb_a_test_data = celeb_a_builder.as_dataset(split='test').batch(1).map(preprocess_input_dict).map(get_image_label_and_group) results = model_unconstrained.evaluate(celeb_a_test_data)
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
However, performance evaluated across age groups may reveal some shortcomings. To explore this further, we evaluate the model with Fairness Indicators (via TFMA). In particular, we are interested in seeing whether there is a significant gap in performance between "Young" and "Not Young" categories when evaluated on fal...
model_location = save_model(model_unconstrained, 'model_export_unconstrained') eval_results_unconstrained = get_eval_results(model_location, 'eval_results_unconstrained')
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
As mentioned above, we are concentrating on the false positive rate. The current version of Fairness Indicators (0.1.2) selects false negative rate by default. After running the line below, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.
tfma.addons.fairness.view.widget_view.render_fairness_indicator(eval_results_unconstrained)
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
As the results show above, we do see a disproportionate gap between "Young" and "Not Young" categories. This is where TFCO can help by constraining the false positive rate to be within a more acceptable criterion. Constrained Model Set Up As documented in TFCO's library, there are several helpers that will make it easi...
# The batch size is needed to create the input, labels and group tensors. # These tensors are initialized with all 0's. They will eventually be assigned # the batch content to them. A large batch size is chosen so that there are # enough number of "Young" and "Not Young" examples in each batch. set_seeds() model_constr...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
The model is now set up and ready to be trained with the false positive rate constraint across age group. Now, because the last iteration of the constrained model may not necessarily be the best performing model in terms of the defined constraint, the TFCO library comes equipped with tfco.find_best_candidate_index() th...
# Obtain train set batches. NUM_ITERATIONS = 100 # Number of training iterations. SKIP_ITERATIONS = 10 # Print training stats once in this many iterations. # Create temp directory for saving snapshots of models. temp_directory = tempfile.mkdtemp() # List of objective and constraints across ...
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
After having applied the constraint, we evaluate the results once again using Fairness Indicators.
model_location = save_model(model_constrained, 'model_export_constrained') eval_result_constrained = get_eval_results(model_location, 'eval_results_constrained')
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
As with the previous time we used Fairness Indicators, deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in. Note that to fairly compare the two versions of our model, it is important to use thresholds that set the overall false positive rate to be roughly equal. This e...
eval_results_dict = { 'constrained': eval_result_constrained, 'unconstrained': eval_results_unconstrained, } tfma.addons.fairness.view.widget_view.render_fairness_indicator(multi_eval_results=eval_results_dict)
g3doc/tutorials/Fairness_Indicators_TFCO_CelebA_Case_Study.ipynb
tensorflow/fairness-indicators
apache-2.0
Initialize
folder = '../twoGaussians/' metric = get_metric()
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Define the variables
# Initialization the_vars = {}
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Define manufactured solutions We have that $$JS = J\nabla\cdot(S_n\nabla_\perp\phi) = JS_n\nabla_\perp^2\phi + J\nabla S_n\cdot \nabla_\perp \phi = JS_n\nabla_\perp^2\phi + J\nabla_\perp S_n\cdot \nabla_\perp \phi$$ We will use the Delp2 operator for the perpendicular Laplace operator (as the y-derivatives vanish in ...
# We need Lx from boututils.options import BOUTOptions myOpts = BOUTOptions(folder) Lx = eval(myOpts.geom['Lx']) # Two normal gaussians # The gaussian # In cartesian coordinates we would like # f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2)) # In cylindrical coordinates, this translates to # f = exp(-(1/(2*w^2))*(x^2 + y...
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
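The product-rule expansion above, $J\nabla\cdot(S_n\nabla_\perp\phi) = JS_n\nabla_\perp^2\phi + J\nabla_\perp S_n\cdot\nabla_\perp\phi$, can be sanity-checked numerically in 1D with central finite differences (an illustrative sketch with made-up profiles, independent of the BOUT++ operators):

```python
import math

def d(f, x, h=1e-4):
    """Central finite-difference approximation of df/dx."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Made-up smooth profiles standing in for S_n and phi.
S = lambda x: math.exp(-x ** 2)
phi = lambda x: math.sin(x)

x0 = 0.7
# Left-hand side: d/dx(S * dphi/dx)
lhs = d(lambda x: S(x) * d(phi, x), x0)
# Right-hand side of the product rule: S * phi'' + S' * phi'
rhs = S(x0) * d(lambda x: d(phi, x), x0) + d(S, x0) * d(phi, x0)
```

The two sides agree to within the finite-difference truncation error, which is the same consistency the manufactured-solution code relies on in higher dimensions.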
Calculate the solution
the_vars['S'] = metric.J*( the_vars['S_n']*Delp2(the_vars['phi'], metric=metric)\ + metric.g11*DDX(the_vars['S_n'], metric=metric)*DDX(the_vars['phi'], metric=metric)\ + metric.g33*DDZ(the_vars['S_n'], metric=metric)*DDZ(the_vars['phi'], metric=metric)\ ...
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Plot
make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
Print the variables in BOUT++ format
BOUT_print(the_vars, rational=False)
MES/divOfScalarTimesVector/2b-JTimesDivSource/calculations/exactSolutions.ipynb
CELMA-project/CELMA
lgpl-3.0
2. Scikit-learn Scikit-learn is a machine learning library for Python built upon NumPy and SciPy. It provides functions for classification, regression, clustering and other common analytics tasks.
from sklearn.cluster import KMeans kmeans = KMeans(n_clusters=3) results = kmeans.fit_predict(data[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]) data_kmeans=pd.concat([data, pd.Series(results, name="ClusterId")], axis=1) data_kmeans.head()
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
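The assignment step that KMeans performs internally (each point is labelled with the index of its nearest centroid) can be sketched in plain Python with hypothetical 2-D data:

```python
def assign(points, centroids):
    """Label each point with the index of its nearest centroid."""
    def sqdist(p, q):
        # Squared Euclidean distance; the square root is not needed
        # when only comparing distances.
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(centroids)), key=lambda i: sqdist(p, centroids[i]))
            for p in points]

labels = assign([(0, 0), (9, 9), (1, 1)], [(0, 0), (10, 10)])
```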
In the following we evaluate the resulting fit (commonly referred to as the model), using the sum of squared errors and a pair plot. The following pair plot shows the scatter-plot between each of the four features. Clusters for the different species are indicated by different colors.
print("Sum of squared error: %.1f" % kmeans.inertia_) sns.pairplot(data_kmeans, vars=["SepalLength", "SepalWidth", "PetalLength", "PetalWidth"], hue="ClusterId");
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
3. Pilot Approach We will now use RADICAL-Pilot to compute the distance function, as a simple representation of how the above example can be executed as a task-parallel application.
import os, sys import ast import commands import radical.pilot as rp os.environ["RADICAL_PILOT_DBURL"]="mongodb://ec2-54-221-194-147.compute-1.amazonaws.com:24242/giannis" def print_details(detail_object): if type(detail_object)==str: detail_object = ast.literal_eval(detail_object) for i in detail_object: ...
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
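print_details relies on ast.literal_eval, which safely parses a string containing a Python literal without executing arbitrary code (a small sketch with a made-up detail string):

```python
import ast

# A hypothetical compute-unit detail string, as it might arrive as text.
raw = "{'state': 'Done', 'cores': 8}"
# literal_eval accepts only Python literals (dicts, lists, numbers, strings,
# booleans, None), unlike eval(), which would run arbitrary expressions.
detail = ast.literal_eval(raw)
```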
In the following, we will partition the data and distribute it to a set of compute units (CUs) for fast processing.
number_clusters = 3 clusters = data.sample(number_clusters) clusters clusters.to_csv("clusters.csv") data.to_csv("points.csv")
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Helper Function for computing new centroids as mean of all points assigned to a cluster
def compute_new_centroids(distances): df = pd.DataFrame(distances) df[4] = df[4].astype(int) df = df.groupby(4)[[0, 1, 2, 3]].mean() centroids_np = df.values return centroids_np
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
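The same centroid update can be sketched without pandas: group the rows by their cluster id (the last column) and average the coordinates (illustrative, with made-up points):

```python
def new_centroids(rows):
    """rows: iterable of (x1, ..., xn, cluster_id) tuples.
    Returns {cluster_id: mean coordinate vector}."""
    sums, counts = {}, {}
    for row in rows:
        *coords, cid = row
        cid = int(cid)
        counts[cid] = counts.get(cid, 0) + 1
        acc = sums.setdefault(cid, [0.0] * len(coords))
        for i, c in enumerate(coords):
            acc[i] += c
    # Divide each accumulated sum by the cluster's point count.
    return {cid: [s / counts[cid] for s in sums[cid]] for cid in sums}

centroids = new_centroids([(0, 0, 0), (2, 2, 0), (10, 10, 1)])
```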
Running Mapper Function as an External Process
for i in range(10): distances =!/opt/anaconda/bin/python mapper.py points.csv clusters.csv distances_np = np.array(eval(" ".join(distances))) new_centroids = compute_new_centroids(distances_np) new_centroids_df = pd.DataFrame(new_centroids, columns=["SepalLength", "SepalWidth", "PetalLength", "PetalWid...
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
Running Mapper Function inside RADICAL-Pilot Helper function to read the output of completed compute units after they have been executed inside the Pilot.
import urlparse def get_output(compute_unit): working_directory=compute_unit.as_dict()['working_directory'] path = urlparse.urlparse(working_directory).path output=open(os.path.join(path, "STDOUT")).read() return output
03_analytics/Kmeans.ipynb
radical-cybertools/supercomputing2015-tutorial
apache-2.0
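The path extraction in get_output works because urlparse splits a URL into its components; in Python 3 the same call lives in urllib.parse (shown here with a hypothetical working-directory URL):

```python
from urllib.parse import urlparse  # Python 2: from urlparse import urlparse

# A made-up working-directory URL of the general shape a Pilot might report.
url = "file://localhost/tmp/unit.000000"
# .path drops the scheme and host, leaving a filesystem path that can be
# joined with "STDOUT" to locate the unit's captured output.
path = urlparse(url).path
```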