Validate loading
|
# Compare per-table record counts from the metadata CSV against the
# counts loaded into the database (`res` comes from the earlier count query).
df = pd.read_csv("./parcel-meta.csv")
for r in res:
    row = df[df.Filename == r['table']].iloc[0]
    if int(row['Records']) != r['count']:
        display('Mismatch on parcel: {} csv count {} != db count of {}'.format(
            row['Filename'], row['Records'], r['count']))
|
sources/parcels/notebooks/parcel-loading.ipynb
|
FireCARES/data
|
mit
|
Consolidate into a single table
Create the destination parcels table
|
with conn.cursor() as c:
c.execute('CREATE TABLE public.parcels_2018 AS TABLE public.parcels WITH NO DATA;')
conn.commit()
sql = """
insert into public.parcels_2018 (ogc_fid, wkb_geometry, parcel_id, state_code, cnty_code, apn,
apn2, addr, city, state, zip, plus, std_addr, std_city, std_state,
std_zip, std_plus, fips_code, unfrm_apn, apn_seq_no, frm_apn,
orig_apn, acct_no, census_tr, block_nbr, lot_nbr, land_use, m_home_ind,
prop_ind, own_cp_ind, tot_val, lan_val, imp_val, tot_val_cd,
lan_val_cd, assd_val, assd_lan, assd_imp, mkt_val, mkt_lan, mkt_imp,
appr_val, appr_lan, appr_imp, tax_amt, tax_yr, assd_yr, ubld_sq_ft,
bld_sq_ft, liv_sq_ft, gr_sq_ft, yr_blt, eff_yr_blt, bedrooms,
rooms, bld_code, bld_imp_cd, condition, constr_typ, ext_walls,
quality, story_nbr, bld_units, units_nbr)
select {}, shape, parcel_id, state_code, cnty_code, apn,
apn2, addr, city, state, zip, plus, std_addr, std_city, std_state,
std_zip, std_plus, fips_code, unfrm_apn, apn_seq_no, frm_apn,
orig_apn, acct_no, census_tr, block_nbr, lot_nbr, land_use, m_home_ind,
prop_ind, own_cp_ind, tot_val, lan_val, imp_val, tot_val_cd,
lan_val_cd, assd_val, assd_lan, assd_imp, mkt_val, mkt_lan, mkt_imp,
appr_val, appr_lan, appr_imp, tax_amt, tax_yr, assd_yr, ubld_sq_ft,
bld_sq_ft, liv_sq_ft, gr_sq_ft, yr_blt, eff_yr_blt, bedrooms,
rooms, bld_code, bld_imp_cd, condition, constr_typ, ext_walls,
quality, story_nbr, bld_units, units_nbr
from core_logic_2018.{}
"""
try:
for r in res[50:]:
with conn.cursor() as c:
c.execute("select count(1) from information_schema.columns where table_name = %(table)s and table_schema = 'core_logic_2018' and column_name = 'objectid'",
r)
id_col = 'objectid' if c.fetchone()[0] == 1 else 'ogc_fid'
c.execute(sql.format(id_col, r['table']))
conn.commit()
except Exception as e:
print(e)
conn.rollback()
with conn.cursor() as c:
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (census_tr COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree ("substring"(census_tr::text, 0, 7) COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (city COLLATE pg_catalog."default", state COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (fips_code COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (land_use COLLATE pg_catalog."default");""")
c.execute("""CREATE UNIQUE INDEX ON public.parcels_2018 USING btree (parcel_id);""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING gist (wkb_geometry);""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (state_code COLLATE pg_catalog."default", "substring"(census_tr::text, 0, 7) COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (state COLLATE pg_catalog."default");""")
c.execute("""CREATE INDEX ON public.parcels_2018 USING btree (story_nbr);""")
|
sources/parcels/notebooks/parcel-loading.ipynb
|
FireCARES/data
|
mit
|
You may see a warning message in the Kubeflow Pipelines logs saying "Insufficient nvidia.com/gpu". If so, this probably means that your GPU-enabled node is still spinning up; wait a few minutes. You can check the current nodes in your cluster like this:
kubectl get nodes -o wide
If everything runs as expected, the nvidia-smi command should list the CUDA version, GPU type, usage, etc. (See the logs panel in the pipeline UI to view output.)
You may also notice that after the pipeline step's GKE pod has finished, the new GPU cluster node is still there. The GKE autoscaler will free that node after a period of no usage.
Multiple GPU pools in one cluster
You may want one cluster to support more than one type of GPU.
There are several GPU types, and each region typically supports only a subset of them (see the documentation).
Since we can set --num-nodes=0 on a GPU node pool to save costs when there is no workload, we can create multiple node pools for different GPU types.
Add additional GPU nodes to your cluster
In a previous section, we added a node pool for P100s. Here we add another pool for V100s.
```shell
# You may customize these parameters.
export GPU_POOL_NAME=v100pool
export CLUSTER_NAME=existingClusterName
export CLUSTER_ZONE=us-west1-a
export GPU_TYPE=nvidia-tesla-v100
export GPU_COUNT=1
export MACHINE_TYPE=n1-highmem-8
# Node pool creation may take several minutes.
gcloud container node-pools create ${GPU_POOL_NAME} \
--accelerator type=${GPU_TYPE},count=${GPU_COUNT} \
--zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \
--num-nodes=0 --machine-type=${MACHINE_TYPE} --min-nodes=0 --max-nodes=5 --enable-autoscaling
```
Consume certain GPU via Kubeflow Pipelines SDK
If your cluster has multiple GPU node pools, you can explicitly specify that a given pipeline step should use a particular type of accelerator.
This example shows how to use P100s for one pipeline step, and V100s for another.
|
import kfp
from kfp import dsl
def gpu_p100_op():
return dsl.ContainerOp(
name='check_p100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')
def gpu_v100_op():
return dsl.ContainerOp(
name='check_v100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')
@dsl.pipeline(
name='GPU smoke check',
description='Smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
gpu_p100 = gpu_p100_op()
gpu_v100 = gpu_v100_op()
if __name__ == '__main__':
kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
|
samples/tutorials/gpu/gpu.ipynb
|
kubeflow/kfp-tekton-backend
|
apache-2.0
|
You should see different "nvidia-smi" logs from the two pipeline steps.
Using Preemptible GPUs
A preemptible GPU resource is cheaper, but using these instances means that a pipeline step may be aborted and then retried. Pipeline steps run on preemptible instances must therefore be idempotent (the step gives the same results if run again) or must checkpoint their work so they can pick up where they left off. To use preemptible GPUs, create a node pool as follows; then, when specifying a pipeline, you can indicate that a step should use the preemptible node pool.
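As a minimal sketch of what "idempotent or checkpointed" means in practice (the `idempotent_step` helper and file layout here are illustrative, not part of the KFP API): the step checks for a previously produced output before doing any work, and writes its result atomically so a preemption mid-write cannot leave a corrupt "done" marker.

```python
import json
import os


def idempotent_step(output_path, compute):
    """Skip work if a previous (possibly preempted-and-retried) run already
    produced the output; otherwise compute and persist it."""
    if os.path.exists(output_path):      # an earlier attempt finished
        with open(output_path) as f:
            return json.load(f)
    result = compute()                   # do the actual work
    tmp = output_path + ".tmp"
    with open(tmp, "w") as f:            # write to a temp file first...
        json.dump(result, f)
    os.replace(tmp, output_path)         # ...then atomically rename
    return result
```

Re-running such a step after a preemption simply returns the stored result instead of redoing the work.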
The only difference in the following node-pool creation example is that the --preemptible and --node-taints=preemptible=true:NoSchedule parameters have been added.
```
export GPU_POOL_NAME=v100pool-preemptible
export CLUSTER_NAME=existingClusterName
export CLUSTER_ZONE=us-west1-a
export GPU_TYPE=nvidia-tesla-v100
export GPU_COUNT=1
export MACHINE_TYPE=n1-highmem-8
gcloud container node-pools create ${GPU_POOL_NAME} \
--accelerator type=${GPU_TYPE},count=${GPU_COUNT} \
--zone ${CLUSTER_ZONE} --cluster ${CLUSTER_NAME} \
--preemptible \
--node-taints=preemptible=true:NoSchedule \
--num-nodes=0 --machine-type=${MACHINE_TYPE} --min-nodes=0 --max-nodes=5 --enable-autoscaling
```
Then, you can define a pipeline as follows (note the use of use_preemptible_nodepool()).
|
import kfp
import kfp.gcp as gcp
from kfp import dsl
def gpu_p100_op():
return dsl.ContainerOp(
name='check_p100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-p100')
def gpu_v100_op():
return dsl.ContainerOp(
name='check_v100',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi']
).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')
def gpu_v100_preemptible_op():
v100_op = dsl.ContainerOp(
name='check_v100_preemptible',
image='tensorflow/tensorflow:latest-gpu',
command=['sh', '-c'],
arguments=['nvidia-smi'])
v100_op.set_gpu_limit(1)
v100_op.add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla-v100')
v100_op.apply(gcp.use_preemptible_nodepool(hard_constraint=True))
return v100_op
@dsl.pipeline(
name='GPU smoke check',
description='Smoke check as to whether GPU env is ready.'
)
def gpu_pipeline():
gpu_p100 = gpu_p100_op()
gpu_v100 = gpu_v100_op()
gpu_v100_preemptible = gpu_v100_preemptible_op()
if __name__ == '__main__':
kfp.compiler.Compiler().compile(gpu_pipeline, 'gpu_smoking_check.yaml')
|
samples/tutorials/gpu/gpu.ipynb
|
kubeflow/kfp-tekton-backend
|
apache-2.0
|
First, I'm going to pull out a small subset to work with:
|
csub = cbar.loc[3323:3324,1:2]
csub
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
I happen to like the way that's organized, but let's say that I want to have the item descriptions in columns and the mode IDs and element numbers in rows. To do that, I'll first move the element IDs up to the columns using .unstack(level=0) and then transpose the result:
|
csub.unstack(level=0).T
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
unstack requires unique row indices, so I can't work with CQUAD4 stresses as they're currently output; I'll work with CHEXA stresses instead. Let's pull out the first two elements and first two modes:
|
chs = isat.chexa_stress[1].data_frame.loc[3684:3685,1:2]
chs
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
Now I want to put ElementID and the Node ID in the rows along with the Load ID, and have the items in the columns:
|
cht = chs.unstack(level=[0,1]).T
cht
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
Maybe I'd like my rows organized with the modes on the inside; I can do that by swapping levels.
First, we need to get rid of the extra rows using dropna():
|
cht = cht.dropna()
cht
# mode, eigr, freq, rad, eids, nids # initial
# nids, eids, eigr, freq, rad, mode # final
cht.swaplevel(0,4).swaplevel(1,5).swaplevel(2,5).swaplevel(4, 5)
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
Alternatively, I can first use reset_index to move all the index levels into data columns, and then use set_index to define the order of columns I want as my index:
|
cht.reset_index().set_index(['ElementID','NodeID','Mode','Freq']).sort_index()
|
docs/quick_start/demo/op2_pandas_unstack.ipynb
|
saullocastro/pyNastran
|
lgpl-3.0
|
Support vector machines: training
Next, we train an SVM classifier on our data.
The first line creates our classifier using the SVC() function. For now we can ignore the parameter kernel='linear'; this just means the decision boundaries will be straight lines. The second line uses the fit() method to train the classifier on the features in X_small, using the labels in y.
It is safe to ignore the parameter decision_function_shape. It is not important for this tutorial, but including it prevents warnings from scikit-learn about future changes to the default.
|
# Create an instance of SVM and fit the data.
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_small, y)
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Plot the classification boundaries
Now that we have our classifier, let's visualize what it's doing.
First we plot the decision spaces, or the areas assigned to the different labels (species of iris). Then we plot our examples onto the space, showing where each point lies and the corresponding decision boundary.
The colored background shows the areas that are considered to belong to a certain label. If we took sepal measurements from a new flower, we could plot it in this space and use the color to determine which type of iris our classifier believes it to be.
|
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (SVM)")
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Making predictions
Now, let's say we go out and measure the sepals of two new iris plants, and want to know what species they are. We're going to use our classifier to predict the flowers with the following measurements:
Plant | Sepal length | Sepal width
------|--------------|------------
A |4.3 |2.5
B |6.3 |2.1
We can use our classifier's predict() function to predict the label for our input features. We pass the variable examples to the predict() function; examples is a list in which each element is another list containing the features (measurements) for a particular example. The output is a list of labels corresponding to the input examples.
|
# Add our new data examples
examples = [[4.3, 2.5], # Plant A
[6.3, 2.1]] # Plant B
# Create an instance of SVM and fit the data
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_small, y)
# Predict the labels for our new examples
labels = clf.predict(examples)
# Print the predicted species names
print('A: ' + labelNames[labels[0]])
print('B: ' + labelNames[labels[1]])
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Plotting our predictions
Now let's plot our predictions to see why they were classified that way.
|
# Now plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (SVM)")
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
What are the support vectors in this example?
Below, we define a function to plot the solid decision boundary and corresponding dashed lines, as shown in the introductory picture. Because there are three classes to separate, there will now be three sets of lines.
|
def plot_svc_decision_function(clf):
"""Plot the decision function for a 2D SVC"""
x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30)
y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30)
Y, X = np.meshgrid(y, x)
P = np.zeros((3,X.shape[0],X.shape[1]))
for i, xi in enumerate(x):
for j, yj in enumerate(y):
P[:, i,j] = clf.decision_function([[xi, yj]])[0]
for ind in range(3):
plt.contour(X, Y, P[ind,:,:], colors='k',
levels=[-1, 0, 1],
linestyles=['--', '-', '--'])
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
And now we plot the lines on top of our previous plot
|
# Now plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (SVM)")
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plot_svc_decision_function(clf) # Plot the decision function
plt.show()
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
This plot is much more visually cluttered than our previous toy example. There are a few points worth noticing if you take a closer look.
First, notice how each of the three solid lines runs right along one of the decision boundaries. These lines determine the boundaries between the classification areas (where the colors change).
Additionally, while the parallel dashed lines still pass through one or more support vectors, there are now data points located between the decision boundary and the dashed line (and even on the wrong side of the decision boundary!). This happens when our data is not "perfectly separable". A perfectly separable dataset is one where the classes can be separated completely with a single, straight (or at least simple) line. While it makes for nice examples, real-world machine learning problems are almost never perfectly separable.
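Beyond reading them off the plot, scikit-learn exposes the fitted support vectors directly on the trained classifier. A minimal sketch on toy 2-D data (X_toy and y_toy are stand-ins for the tutorial's X_small and y):

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated clusters of three points each
X_toy = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0],
                  [4.0, 4.0], [4.0, 5.0], [5.0, 4.0]])
y_toy = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_toy, y_toy)

print(clf.support_vectors_)  # coordinates of the support vectors
print(clf.n_support_)        # number of support vectors per class
```

Inspecting these attributes on the iris classifier is a quick way to confirm which training points actually constrain the decision boundaries.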
Kernels: Changing The Decision Boundary Lines
In our previous example, all of the decision boundaries were straight lines. But what if our data is grouped into more circular clusters? A curved line might separate the data better.
SVMs use something called kernels to determine the shape of the decision boundary. Remember that when we called the SVC() function we gave it the parameter kernel='linear', which made the boundaries straight. A different kernel, the radial basis function (RBF), groups data into circular clusters instead of dividing it with straight lines.
Below we show the same example as before, but with an RBF kernel.
|
# Create an instance of SVM and fit the data.
clf = SVC(kernel='rbf', decision_function_shape='ovo') # Use the RBF kernel this time
clf.fit(X_small, y)
# Now plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (SVM)")
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plt.show()
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
The boundaries are very similar to before, but now they're curved instead of straight. Next, let's add the decision function lines.
|
# Create an instance of SVM and fit the data.
clf = SVC(kernel='rbf', decision_function_shape='ovo') # Use the RBF kernel this time
clf.fit(X_small, y)
# Now plot the results
h = .02 # step size in the mesh
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1
y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) # Make a prediction at every point
# in the mesh in order to find the
# classification areas for each label
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure(figsize=(8, 6))
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot the training points
plt.scatter(X_small[:, 0], X_small[:, 1], c=y, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (SVM)")
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
# Display the new examples as labeled text on the graph
plt.text(examples[0][0], examples[0][1],'A', fontsize=14)
plt.text(examples[1][0], examples[1][1],'B', fontsize=14)
# Plot the legend
plt.legend(labelList, labelNames)
plot_svc_decision_function(clf) # Plot the decision function
plt.show()
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Now the plot looks very different from before! The solid black lines are now all curves, but each decision boundary still falls along one part of those lines. And instead of having dotted lines parallel to the solid, there are smaller ellipsoids on either side of the solid line.
What other kernels exist?
Scikit-learn comes with two other default kernels: polynomial and sigmoid. Advanced users can also create their own kernels, but we will stick to the defaults for now.
Below, modify our previous examples to try out the other kernels. How do they change the decision boundaries?
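As a hedged starting point for the exercise (the variable names clf_poly and clf_sigmoid are my own), the only change needed is the kernel argument passed to SVC():

```python
from sklearn.svm import SVC

# Same setup as before, just different kernels
clf_poly = SVC(kernel='poly', degree=3, decision_function_shape='ovo')
clf_sigmoid = SVC(kernel='sigmoid', decision_function_shape='ovo')

# After fitting each one (e.g. clf_poly.fit(X_small, y)), reuse the
# earlier mesh-plotting code to visualize its decision areas.
```

Fit each classifier on X_small and y, then rerun the plotting cell with it.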
|
# Your code here!
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
What about my other features?
We've been looking at two features: the length and width of the plant's sepal. But what about the other two features, petal length and width? What does the graph look like when trained on the petal length and width? How does it change when you change the SVM kernel?
How would you plot our two new plants, A and B, on these new plots? Assume we have all four measurements for each plant, as shown below.
Plant | Sepal length | Sepal width| Petal length | Petal width
------|--------------|------------|--------------|------------
A |4.3 |2.5 | 1.5 | 0.5
B |6.3 |2.1 | 4.8 | 1.5
|
# Your code here!
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Using more than two features
Sticking to two features is great for visualization, but is less practical for solving real machine learning problems. If you have time, you can experiment with using more features to train your classifier. It gets much harder to visualize the results with 3 features, and nearly impossible with 4 or more. There are techniques that can help you visualize the data in some form, and there are also ways to reduce the number of features you use while still retaining (hopefully) the same amount of information. However, those techniques are beyond the scope of this class.
Evaluating The Classifier
In order to evaluate a classifier, we need to split our dataset into training data, which we'll show to the classifier so it can learn, and testing data, which we will hold back from training and use to test its predictions.
Below, we create the training and testing datasets, using all four features. We then train our classifier on the training data, and get the predictions for the test data.
|
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
# Create an instance of SVM and fit the data
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_train, y_train)
# Predict the labels of the test data
predictions = clf.predict(X_test)
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Next, we evaluate how well the classifier did. The easiest way to do this is to compute the fraction of correct predictions, usually referred to as the accuracy.
|
accuracy = np.mean(predictions == y_test )*100
print('The accuracy is %.2f' % accuracy + '%')
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Comparing Models with Cross-Validation
To select the kernel to use in our model, we use cross-validation. We can then get our final result using our test data.
First we choose the kernels we want to investigate, then divide our training data into folds. We loop through the sets of training and validation folds. Each time, we train each model on the training data and evaluate on the validation data. We store the accuracy of each classifier on each fold so we can look at them later.
|
# Choose our kernels
kernels = ['linear', 'rbf']
# Create a dictionary of arrays to store accuracies
accuracies = {}
for kernel in kernels:
accuracies[kernel] = []
# Loop through 5 folds
kf = KFold(n_splits=5)
for trainInd, valInd in kf.split(X_train):
X_tr = X_train[trainInd,:]
y_tr = y_train[trainInd]
X_val = X_train[valInd,:]
y_val = y_train[valInd]
# Loop through each kernel
for kernel in kernels:
# Create the classifier
clf = SVC(kernel=kernel, decision_function_shape='ovo')
# Train the classifier
clf.fit(X_tr, y_tr)
# Make our predictions
pred = clf.predict(X_val)
# Calculate the accuracy
accuracies[kernel].append(np.mean(pred == y_val))
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Select a Model
To select a model, we look at the average accuracy across all folds.
|
for kernel in kernels:
print('%s: %.2f' % (kernel, np.mean(accuracies[kernel])))
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Final Evaluation
The linear kernel gives us the highest accuracy, so we select it as our best model. Now we can evaluate it on our test set and get our final accuracy rating.
|
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)
accuracy = np.mean(predictions == y_test) * 100
print('The final accuracy is %.2f' % accuracy + '%')
|
SVM_Tutorial.ipynb
|
abatula/MachineLearningIntro
|
gpl-2.0
|
Lab Task #1: Set environment variables.
Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
|
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "${PROJECT}
# TODO: Change these to try this notebook out
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = PROJECT # defaults to PROJECT
REGION = "us-central1" # Replace with your REGION
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.0"
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
|
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder.
Lab Task #2: Create trainer module's task.py to hold hyperparameter argparsing code.
The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is where we determine which parameters of our model to pass as flags during training using argparse. Look at how batch_size is passed to the model in the code below. Use this as an example to parse arguments for the following variables:
- nnsize which represents the hidden layer sizes to use for DNN feature columns
- nembeds which represents the embedding size of a cross of n key real-valued parameters
- train_examples which represents the number of examples (in thousands) to run the training job
- eval_steps which represents the positive number of steps for which to evaluate model
Be sure to include a default value for each parsed argument above and specify the type if necessary.
|
%%writefile babyweight/trainer/task.py
import argparse
import json
import os
from babyweight.trainer import model
import tensorflow as tf
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--job-dir",
help="this model ignores this field, but it is required by gcloud",
default="junk"
)
parser.add_argument(
"--train_data_path",
help="GCS location of training data",
required=True
)
parser.add_argument(
"--eval_data_path",
help="GCS location of evaluation data",
required=True
)
parser.add_argument(
"--output_dir",
help="GCS location to write checkpoints and export models",
required=True
)
parser.add_argument(
"--batch_size",
help="Number of examples to compute gradient over.",
type=int,
default=512
)
# TODO: Add nnsize argument
# TODO: Add nembeds argument
# TODO: Add num_epochs argument
# TODO: Add train_examples argument
# TODO: Add eval_steps argument
# Parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# Unused args provided by service
arguments.pop("job_dir", None)
arguments.pop("job-dir", None)
# Modify some arguments
arguments["train_examples"] *= 1000
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
arguments["output_dir"] = os.path.join(
arguments["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
# Run the training job
model.train_and_evaluate(arguments)
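As a hedged illustration of the pattern the TODOs ask for (the defaults below are my own assumptions, not the lab's official values), a list-valued flag like nnsize can be parsed with nargs="+", and the scalar flags follow the same shape as batch_size:

```python
import argparse

parser = argparse.ArgumentParser()
# nnsize: hidden layer sizes for the DNN, e.g. --nnsize 128 32 4
parser.add_argument(
    "--nnsize",
    help="Hidden layer sizes for DNN -- provide space-separated layers",
    nargs="+",
    type=int,
    default=[128, 32, 4]  # assumed default
)
# nembeds: embedding size of a cross of n key real-valued parameters
parser.add_argument(
    "--nembeds",
    help="Embedding size of a cross of n key real-valued parameters",
    type=int,
    default=3  # assumed default
)
# train_examples: number of examples (in thousands) for the training job
parser.add_argument(
    "--train_examples",
    help="Number of examples (in thousands) to run the training job over",
    type=int,
    default=5000  # assumed default
)
# eval_steps: positive number of steps for which to evaluate the model
parser.add_argument(
    "--eval_steps",
    help="Positive number of steps for which to evaluate model",
    type=int,
    default=None  # None here means evaluate on the full eval set
)

args = parser.parse_args(["--nnsize", "64", "16"])
print(args.nnsize)  # [64, 16]
```

With nargs="+" and type=int, a command line like --nnsize 64 16 arrives as the Python list [64, 16].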
|
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
In the same way, we can write the model we developed in the previous notebooks to the file model.py.
Lab Task #3: Create trainer module's model.py to hold Keras model code.
Complete the TODOs in the code cell below to create our model.py. We'll use the code we wrote for the Wide & Deep model. Look back at your 9_keras_wide_and_deep_babyweight notebook and copy/paste the necessary code from that notebook into its place in the cell below.
|
%%writefile babyweight/trainer/model.py
import datetime
import os
import shutil
import numpy as np
import tensorflow as tf
# Determine CSV, label, and key columns
# TODO: Add CSV_COLUMNS and LABEL_COLUMN
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
# TODO: Add DEFAULTS
def features_and_labels(row_data):
# TODO: Add your code here
pass
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
# TODO: Add your code here
pass
def create_input_layers():
# TODO: Add your code here
pass
def categorical_fc(name, values):
# TODO: Add your code here
pass
def create_feature_columns(nembeds):
# TODO: Add your code here
pass
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
# TODO: Add your code here
pass
def rmse(y_true, y_pred):
# TODO: Add your code here
pass
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
# TODO: Add your code here
pass
def train_and_evaluate(args):
model = build_wide_deep_model(args["nnsize"], args["nembeds"])
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
trainds = load_dataset(
args["train_data_path"],
args["batch_size"],
tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
args["eval_data_path"], 1000, tf.estimator.ModeKeys.EVAL)
if args["eval_steps"]:
evalds = evalds.take(count=args["eval_steps"])
num_batches = args["batch_size"] * args["num_epochs"]
steps_per_epoch = args["train_examples"] // num_batches
checkpoint_path = os.path.join(args["output_dir"], "checkpoints/babyweight")
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path, verbose=1, save_weights_only=True)
history = model.fit(
trainds,
validation_data=evalds,
epochs=args["num_epochs"],
steps_per_epoch=steps_per_epoch,
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[cp_callback])
EXPORT_PATH = os.path.join(
args["output_dir"], datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
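One of the simpler TODOs above is the `rmse` metric. As a sketch under the assumption that it computes root-mean-squared error (shown with NumPy so it runs standalone; in `model.py` the Keras version would use `tf.sqrt(tf.reduce_mean(tf.square(...)))` on tensors):

```python
import numpy as np

def rmse(y_true, y_pred):
    # Root-mean-squared error; the TF version replaces np.* with tf.* ops.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(np.square(y_pred - y_true))))

print(rmse([1.0, 2.0], [1.0, 4.0]))  # sqrt(mean([0, 4])) = sqrt(2)
```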
|
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
Lab Task #5: Train on Cloud AI Platform.
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section. Complete the __#TODO__s to make sure you have the necessary user_args for our task.py's parser.
|
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
IMAGE=gcr.io/${PROJECT}/babyweight_training_container
gcloud ai-platform jobs submit training ${JOBID} \
--staging-bucket=gs://${BUCKET} \
--region=${REGION} \
--master-image-uri=${IMAGE} \
--master-machine-type=n1-standard-4 \
--scale-tier=CUSTOM \
-- \
--train_data_path=# TODO: Add path to training data in GCS
--eval_data_path=# TODO: Add path to evaluation data in GCS
--output_dir=${OUTDIR} \
--num_epochs=# TODO: Add the number of epochs to train for
--train_examples=# TODO: Add the number of examples to train each epoch for
--eval_steps=# TODO: Add the number of evaluation batches to run
--batch_size=# TODO: Add batch size
--nembeds=# TODO: Add number of embedding dimensions
|
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
|
turbomanage/training-data-analyst
|
apache-2.0
|
2. Myria: Connections, Relations, and Queries (and Schemas and Plans)
|
# How many datasets are there on the server?
print(len(connection.datasets()))
# Let's look at the first dataset...
dataset = connection.datasets()[0]
print(dataset['relationKey']['relationName'])
print(dataset['created'])
# View data stored in this relation
MyriaRelation(dataset['relationKey'])
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Uploading data
|
%%query
-- Load from S3
florida = load("https://s3-us-west-2.amazonaws.com/myria-demo-data/fl_insurance_sample_2.csv",
csv(schema(
id:int,
geo:string,
granularity:int,
deductable:float,
policyID:int,
construction:string,
line:string,
county:string,
state:string,
longitude:float,
latitude:float,
fl_site_deductible:float,
hu_site_deductible:float,
eq_site_deductible:float,
tiv_2012:float,
tiv_2011:float,
fr_site_limit:float,
fl_site_limit:float,
hu_site_limit:float,
eq_site_limit:float), skip=1));
clay_county = [from florida where county = 'CLAY COUNTY' emit *];
store(clay_county, insurance);
# Alternatively, you can upload directly from a Python string
name = {'userName': 'Brandon', 'programName': 'Demo', 'relationName': 'Books'}
schema = { "columnNames" : ["name", "pages"],
"columnTypes" : ["STRING_TYPE","LONG_TYPE"] }
data = """Brave New World,288
Nineteen Eighty-Four,376
We,256"""
result = connection.upload_file(
name, schema, data, delimiter=',', overwrite=True)
MyriaRelation(result['relationKey'], connection=connection)
#Or, load using the myria_upload command-line utility
!wget https://s3-us-west-2.amazonaws.com/myria-demo-data/books.csv
!myria_upload --hostname demo.myria.cs.washington.edu --port 8753 --no-ssl --user Brandon --program Demo --relation Demo --overwrite books.csv
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Working with relations
|
# Using the previously-stored insurance relation
MyriaRelation("insurance")
# View details about this relation
relation = MyriaRelation("insurance")
print(len(relation))
print(relation.created_date)
print(relation.schema.names)
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Working Locally with Relations
|
# 1: Download as a Python dictionary
d = MyriaRelation("insurance").to_dict()
print('First entry returned: %s' % d[0]['county'])
# 2: Download as a Pandas DataFrame
df = MyriaRelation("insurance").to_dataframe()
print('%d entries with nonzero deductible' % len(df[df.eq_site_deductible > 0]))
# 3: Download as a DataFrame and convert to a numpy array
array = MyriaRelation("insurance").to_dataframe().to_numpy()
print('Mean site limit = %d' % array[:,4].mean())
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Working with queries
|
%%query --Embed MyriaL in Jupyter notebook by using the "%%query" prefix
insurance = scan(insurance);
descriptives = [from insurance emit min(eq_site_deductible) as min_deductible,
max(eq_site_deductible) as max_deductible,
avg(eq_site_deductible) as mean_deductible,
stdev(eq_site_deductible) as stdev_deductible];
store(descriptives, descriptives);
# Grab the results of the most recent execution
query = _
or_this_works_too = _45
query
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Single-line queries may be treated like Python expressions
|
query = %datalog Just500(column0, 500) :- TwitterK(column0, 500)%
print(query.status)
query
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
5. Variable Binding
|
low, high, destination = 543, 550, 'BoundRelation'
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
The tokens @low, @high, and @destination are bound to their values:
|
%%query
T1 = scan(TwitterK);
T2 = [from T1 where $0 > @low and $0 < @high emit $1 as x];
store(T2, @destination);
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Deploying Myria in an Amazon Cluster!
1. Installing the Myria CLI
```
From the command line, execute:
sudo pip install myria-cluster
```
2. Launching Clusters
|
!myria-cluster create my-cluster
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
3. Connecting to the Cluster via Python
You can connect to the new cluster by using the MyriaX REST endpoint URL. In the example above, this is listed as http://ec2-50-112-33-121.us-west-2.compute.amazonaws.com:8753.
|
# Substitute your MyriaX REST URL here!
%connect http://ec2-52-1-38-182.compute-1.amazonaws.com:8753
|
ipnb examples/myria.ipynb
|
uwescience/myria-python
|
bsd-3-clause
|
Random weights and biases
train() allows us to shrink the output of neuron C, which has 10 inputs/fan-in weights
there is no need to show that the weights are asymmetric; this is the usual case
|
initialize()
print(train())
print(train())
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
Random biases, No randomness in weights
we can break symmetry in weights
|
initialize()
for layer in [a,b,c]:
layer.weight.data = torch.ones_like(layer.weight)
print(c.weight)
train(), print(c.weight)
train(), print(c.weight)
train(), print(c.weight)
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
fan-ins are almost identical across neurons, elements within one neuron's fan-in are different
|
b.weight #there's a small amount of symmetry breaking, but the fan-ins are pretty similar
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
after 25 epochs: fan-ins are different across neurons, elements within one neuron's fan-in are different
|
for _ in range(25):
train()
b.weight #now the fan-ins are pretty different
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
No randomness in bias/weights
we can't break symmetry
|
initialize()
for layer in [a,b,c]:
layer.weight.data = torch.ones_like(layer.weight)
layer.bias.data = torch.ones_like(layer.bias)
print(c.weight, c.bias)
train(), print(c.weight, c.bias)
train(), print(c.weight, c.bias)
train(), print(c.weight, c.bias)
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
fan-ins are identical across neurons, elements within one neuron's fan-in are identical
|
b.weight
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
after 25 epochs: fan-ins are identical across neurons, elements within one neuron's fan-in are identical
|
for _ in range(25):
train()
b.weight
|
Breaking Symmetry/breaking_symmetry.ipynb
|
bbartoldson/examples
|
mit
|
DFF
The fundamental stateful element is the D flip-flop. If a flip-flop has a clock enable, its state is updated only when the enable is true. Similarly, if a flip-flop has a reset signal, it is reset to its initial value when reset is true.
|
from mantle import DFF
dff = DFF()
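The enable/reset semantics described above can be sketched behaviorally in plain Python (this models the semantics only; it is not mantle's API):

```python
class BehavioralDFF:
    # Behavioral model of a D flip-flop with optional clock enable and reset:
    # on each clock step, the state updates to D only when CE is high,
    # and RESET forces the state back to its initial value.
    def __init__(self, init=0):
        self.init = init
        self.q = init

    def step(self, d, ce=1, reset=0):
        # One rising clock edge.
        if reset:
            self.q = self.init
        elif ce:
            self.q = d
        return self.q

ff = BehavioralDFF(init=0)
print(ff.step(1))           # 1: enabled, latches D
print(ff.step(0, ce=0))     # 1: enable low, state held
print(ff.step(1, reset=1))  # 0: reset dominates, back to init
```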
|
notebooks/tutorial/coreir/Register.ipynb
|
phanrahan/magmathon
|
mit
|
Register
A register is simply an array of flip-flops.
To create an instance of a register, call Register
with the number of bits n in the register.
|
from mantle import DefineRegister
|
notebooks/tutorial/coreir/Register.ipynb
|
phanrahan/magmathon
|
mit
|
Registers and DFFs are very similar to each other.
The only difference is that the input and output to a DFF
are Bit values,
whereas the inputs and the outputs to registers are Bits[n].
Registers with Enables and Resets
Flip-flops and registers can have clock enables and resets.
If a flip-flop has a clock enable, its state is updated only
when the enable is true.
Similarly, if a flip-flop has a reset signal,
it is reset to its initial value when reset is true.
To create registers with these additional inputs,
set the optional arguments has_ce and/or has_reset
when instancing the register.
|
Register4 = DefineRegister(4, init=5, has_ce=True, has_reset=True)
|
notebooks/tutorial/coreir/Register.ipynb
|
phanrahan/magmathon
|
mit
|
To wire the optional clock inputs, clock enable and reset,
use named arguments (ce and reset) when you call the register with its inputs.
In Magma, clock signals are handled differently than other signals.
|
from magma.simulator import PythonSimulator
from fault import PythonTester
tester = PythonTester(Register4, Register4.CLK)
tester.poke(Register4.RESET, 1) # reset
tester.step(2)
tester.poke(Register4.RESET, 0)
print(f"Reset Val = {tester.peek(Register4.O)}")
tester.poke(Register4.CE, 1) # set enable
for i in range(5):
tester.poke(Register4.I, i)
tester.step(2)
print(f"Register4.I = {tester.peek(Register4.I)}, Register4.O = {tester.peek(Register4.O)}")
print("Lowering enable")
tester.poke(Register4.CE, 0)
for i in range(5):
tester.poke(Register4.I, i)
tester.step(2)
print(f"Register4.I = {tester.peek(Register4.I)}, Register4.O = {tester.peek(Register4.O)}")
|
notebooks/tutorial/coreir/Register.ipynb
|
phanrahan/magmathon
|
mit
|
Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
|
def lorentz_derivs(yvec, t, sigma, rho, beta):
"""Compute the derivatives for the Lorenz system at yvec(t)."""
# YOUR CODE HERE
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y - x)
dy = x*(rho - z) - y
dz = x*y - beta*z
return np.array([dx, dy, dz])
print(lorentz_derivs(np.array([0.0, 1.0, 0.0]), 1, 1, 1, 1))
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
|
assignments/assignment10/ODEsEx02.ipynb
|
enbanuel/phys202-2015-work
|
mit
|
Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
|
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
"""
# YOUR CODE HERE
t = np.linspace(0, max_time, int(250*max_time))
soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta), atol=1e-9, rtol=1e-8)
return np.array(soln), np.array(t)
print(solve_lorentz(np.array([0.0, 1.0, 0.0]), 2, 1, 1, 1))
assert True # leave this to grade solve_lorenz
|
assignments/assignment10/ODEsEx02.ipynb
|
enbanuel/phys202-2015-work
|
mit
|
Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
|
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
"""Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
"""
# YOUR CODE HERE
plt.figure(figsize = (15,8))
np.random.seed(1)
k= []
for i in range(N):
data = (np.random.random(3)-0.5)*30
k.append(solve_lorentz(data, max_time, sigma, rho, beta))
for j in k:
x = [p[0] for p in j[0]]
z = [p[2] for p in j[0]]
color = plt.cm.hot((x[0] + z[0])/60+0.5)
plt.plot(x, z, color=color)
plt.xlabel('$x(t)$')
plt.ylabel('$z(t)$')
plt.title('Lorenz System')
# print(plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0))
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
|
assignments/assignment10/ODEsEx02.ipynb
|
enbanuel/phys202-2015-work
|
mit
|
Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
|
# YOUR CODE HERE
interact(plot_lorentz, max_time = [1,10], N = [1,50], sigma=[0.0,50.0], rho=[0.0,50.0], beta=fixed(8/3));
|
assignments/assignment10/ODEsEx02.ipynb
|
enbanuel/phys202-2015-work
|
mit
|
Set parameters
|
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
label_name = 'Aud-rh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
tmin, tmax, event_id = -0.2, 0.5, 2
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Picks MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# Load epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject,
preload=True)
# Compute a source estimate per frequency band including and excluding the
# evoked response
frequencies = np.arange(7, 30, 2) # define frequencies of interest
label = mne.read_label(fname_label)
n_cycles = frequencies / 3. # different number of cycle per frequency
# subtract the evoked response in order to exclude evoked activity
epochs_induced = epochs.copy().subtract_evoked()
plt.close('all')
for ii, (this_epochs, title) in enumerate(zip([epochs, epochs_induced],
['evoked + induced',
'induced only'])):
# compute the source space power and phase lock
power, phase_lock = source_induced_power(
this_epochs, inverse_operator, frequencies, label, baseline=(-0.1, 0),
baseline_mode='percent', n_cycles=n_cycles, n_jobs=1)
power = np.mean(power, axis=0) # average over sources
phase_lock = np.mean(phase_lock, axis=0) # average over sources
times = epochs.times
##########################################################################
# View time-frequency plots
plt.subplots_adjust(0.1, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 2, 2 * ii + 1)
plt.imshow(20 * power,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=0., vmax=30., cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('Power (%s)' % title)
plt.colorbar()
plt.subplot(2, 2, 2 * ii + 2)
plt.imshow(phase_lock,
extent=[times[0], times[-1], frequencies[0], frequencies[-1]],
aspect='auto', origin='lower', vmin=0, vmax=0.7,
cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('Phase-lock (%s)' % title)
plt.colorbar()
plt.show()
|
0.12/_downloads/plot_source_label_time_frequency.ipynb
|
mne-tools/mne-tools.github.io
|
bsd-3-clause
|
$A_n(p) = \sum_{k=n}^{2 n - 1} \binom{2 n - 1}{k} p^k (1 - p)^{2 n - 1 - k}$
|
def how_many_games(p=0.6, initial_amount=1_000_000, win_subtraction=10_000):
expected_winnings = []
for num_wins in range(1,50):
series_length = 2*num_wins - 1
prize = initial_amount - (win_subtraction*num_wins)
win_p = 0
for k in range(num_wins, series_length + 1):
win_p += binom(series_length, k) * (p**k) * ((1-p)**(series_length - k))
expected_winning = prize * win_p
expected_winnings.append((series_length, expected_winning))
return expected_winnings
expected_winnings = how_many_games()
plt.plot(*zip(*expected_winnings))
plt.show()
sorted_by_winnings = sorted(expected_winnings, key=lambda x: x[1], reverse=True)
sorted_by_winnings[0:5]
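The inner loop above is exactly the series-win probability $A_n(p)$ from the formula; isolating it makes it easy to sanity-check (the helper name is mine, and `math.comb` stands in for scipy's `binom`):

```python
from math import comb

def series_win_prob(p, n):
    # P(at least n wins in a best-of-(2n-1) series), i.e. A_n(p) above.
    m = 2 * n - 1
    return sum(comb(m, k) * p**k * (1 - p)**(m - k) for k in range(n, m + 1))

print(round(series_win_prob(0.6, 1), 3))  # single game: 0.6
print(round(series_win_prob(0.6, 2), 3))  # best of 3: 0.648
```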
|
FiveThirtyEightRiddler/2017-07-14/classic/championship.ipynb
|
andrewzwicky/puzzles
|
mit
|
To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
|
# Create serving input function to be able to serve predictions later using provided inputs
def serving_input_fn():
feature_placeholders = {
'is_male': tf.compat.v1.placeholder(tf.string, [None]),
'mother_age': tf.compat.v1.placeholder(tf.float32, [None]),
'plurality': tf.compat.v1.placeholder(tf.string, [None]),
'gestation_weeks': tf.compat.v1.placeholder(tf.float32, [None])
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
# Create estimator to train and evaluate
def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = tf.estimator.DNNRegressor(
model_dir = output_dir,
feature_columns = get_cols(),
hidden_units = [64, 32],
config = run_config)
train_spec = tf.estimator.TrainSpec(
input_fn = read_dataset('train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = TRAIN_STEPS)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = read_dataset('eval.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 60, # start evaluating after N seconds
throttle_secs = EVAL_INTERVAL, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
|
courses/machine_learning/deepdive/06_structured/3_tensorflow_dnn.ipynb
|
GoogleCloudPlatform/training-data-analyst
|
apache-2.0
|
Alternative sea-salt correction methodology for Ireland
Our current sea-salt correction methodology for Mg, Ca and SO4 assumes (i) that all chloride is marine, and (ii) that no fractionation takes place between evaporation and deposition. These coarse assumptions work OK in many regions, but lead to negative values for "corrected" Mg* and SO4* in some of the Irish lakes. See the e-mail from Julian received 20.06.2019 at 16.22 for further details.
Apparently some labs tend to significantly overestimate chloride and Julian has suggested using Na as a tracer instead (subject to the caveats in Julian's e-mail). This notebook compares results obtained using correction methods based on Cl versus Na.
As a starting point, I've extracted all the ICPW data for the Irish lakes from 1990 to the present for Ca, Mg and SO4 (the three parameters we want to correct), plus Cl and Na.
1. Raw dataset
|
# Read data
xl_path = r'../../../Thematic_Trends_Report_2019/ireland_high_chloride.xlsx'
df = pd.read_excel(xl_path, sheet_name='DATA')
# Suspect that values of *exactly* zero are errors
df[df==0] = np.nan
df.head()
|
ireland_seasalt_correction.ipynb
|
JamesSample/icpw
|
mit
|
2. Reference values for sea water
The numbers below are taken from the World Data Centre for Precipitation Chemistry (WDCPC; PDF here), except for Ca, which I've taken from here.
|
data = {'par': ['SO4', 'Mg', 'Ca', 'Na', 'Cl'],
'mol_mass': [96.06, 24.31, 40.08, 22.99, 35.45],
'valency': [2, 2, 2, 1, 1],
'sw_mgpl': [2700, 1290, 400, 10800, 19374]}
sw_df = pd.DataFrame(data)
sw_df['sw_ueqpl'] = 1000 * sw_df['valency'] * sw_df['sw_mgpl'] / sw_df['mol_mass']
sw_df
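The unit conversion in the cell above is worth making explicit. As a standalone check (plain Python, values copied from the table above):

```python
# ueq/l = 1000 * valency * concentration (mg/l) / molar mass (g/mol)
def to_ueqpl(mgpl, valency, mol_mass):
    return 1000 * valency * mgpl / mol_mass

cl = to_ueqpl(19374, 1, 35.45)   # chloride
na = to_ueqpl(10800, 1, 22.99)   # sodium
# Na and Cl are roughly comparable in ueq/l but the ratio is not exactly 1:1
print(round(cl), round(na), round(na / cl, 2))
```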
|
ireland_seasalt_correction.ipynb
|
JamesSample/icpw
|
mit
|
As expected, the $\mu eq/l$ concentrations of Na and Cl in sea water are roughly the same, but the ratio is not exactly 1:1. For the sea-salt correction, we need the ratio of SO4, Ca and Mg to Na and to Cl.
|
corr_facs = {}
for par in ['SO4', 'Ca', 'Mg']:
sw_par = sw_df.query('par == @par')['sw_ueqpl'].values[0]
sw_cl = sw_df.query('par == "Cl"')['sw_ueqpl'].values[0]
sw_na = sw_df.query('par == "Na"')['sw_ueqpl'].values[0]
ratio_cl = sw_par / sw_cl
ratio_na = sw_par / sw_na
corr_facs['%s2cl' % par.lower()] = ratio_cl
corr_facs['%s2na' % par.lower()] = ratio_na
print(f'{par:3}:Cl {ratio_cl:.3f}')
print(f'{par:3}:Na {ratio_na:.3f}')
print('')
|
ireland_seasalt_correction.ipynb
|
JamesSample/icpw
|
mit
|
The ratios above for chloride are virtually the same as documented in our current workflow (here), so I assume the new ratios for Na should also be compatible. Note that the ratio of SO4:Na is about 20% higher than SO4:Cl, which might actually exacerbate the problem of negative values.
3. Compare Cl to Na in Irish lakes
|
fig, axes = plt.subplots(nrows=5, ncols=4, figsize=(20,20))
axes = axes.flatten()
for idx, stn in enumerate(df['station_code'].unique()):
df_stn = df.query('station_code == @stn')
axes[idx].plot(df_stn['ECl'], df_stn['ENa'], 'ro', label='Raw data')
axes[idx].plot(df_stn['ECl'], df_stn['ECl'], 'k-', label='1:1 line')
axes[idx].set_title(stn)
axes[idx].set_xlabel('ECl (ueq/l)')
axes[idx].set_ylabel('ENa (ueq/l)')
axes[idx].legend(loc='best')
plt.tight_layout()
|
ireland_seasalt_correction.ipynb
|
JamesSample/icpw
|
mit
|
Based on these plots, I'd say concentrations of Na are also pretty high in these lakes (although in most cases Cl is even higher). I suspect that using Na as a marine "tracer" instead of Cl will still lead to negative values.
4. Sea-salt correction for SO4
Using our original methodology, the corrected series for SO4* has the most negative values. The code below compares boxplots of SO4* calculated using Cl (red boxes) versus Na (blue boxes) for each site.
|
# Par of interest
par = 'ESO4'
df['%s*_Na' % par] = df[par] - (corr_facs['%s2na' % par[1:].lower()] * df['ENa'])
df['%s*_Cl' % par] = df[par] - (corr_facs['%s2cl' % par[1:].lower()] * df['ECl'])
df2 = df[['station_code', '%s*_Cl' % par, '%s*_Na' % par]]
df2 = df2.melt(id_vars=['station_code'], var_name='corr_method')
df2 = df2.dropna(how='any').reset_index(drop=True)
g = sn.catplot(x='corr_method',
y='value',
data=df2,
col='station_code',
col_wrap=4,
kind='box',
sharex=False,
sharey=False,
)
g.map(plt.axhline, y=0, lw=2, ls='--', c='k', alpha=0.4)
g.set(ylabel='%s* (ueq/l)' % par)
plt.tight_layout()
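The correction applied above is just a linear subtraction. As a toy worked example (the lake concentrations here are invented for illustration; the seawater values are from the WDCPC table above):

```python
# Sea-salt correction: non-marine SO4* = total SO4 - (SO4:Na in seawater) * Na
so4_sw = 1000 * 2 * 2700 / 96.06    # seawater SO4 in ueq/l
na_sw = 1000 * 1 * 10800 / 22.99    # seawater Na in ueq/l
so4_na_ratio = so4_sw / na_sw       # ~0.12

eso4, ena = 60.0, 400.0             # invented lake concentrations in ueq/l
so4_star = eso4 - so4_na_ratio * ena
print(round(so4_na_ratio, 3), round(so4_star, 1))
```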
|
ireland_seasalt_correction.ipynb
|
JamesSample/icpw
|
mit
|
For simplicity, we use a constant forward rate as the given interest rate term structure. The method discussed here would work with any market yield curve as well.
|
sigma = 0.01
a = 0.001
timestep = 360
length = 30 # in years
forward_rate = 0.05
day_count = ql.Thirty360()
todays_date = ql.Date(15, 1, 2015)
ql.Settings.instance().evaluationDate = todays_date
yield_curve = ql.FlatForward(
todays_date,
ql.QuoteHandle(ql.SimpleQuote(forward_rate)),
day_count)
spot_curve_handle = ql.YieldTermStructureHandle(yield_curve)
|
content/extra/notebooks/moment_matching.ipynb
|
gouthambs/karuth-source
|
artistic-2.0
|
Here, I set up the Monte Carlo simulation of the Hull-White process. The result of the generate_paths function below is the time grid and a matrix of short rates generated by the model. This is discussed in detail in the Hull-White simulation post.
|
hw_process = ql.HullWhiteProcess(spot_curve_handle, a, sigma)
rng = ql.GaussianRandomSequenceGenerator(
ql.UniformRandomSequenceGenerator(
timestep,
ql.UniformRandomGenerator(125)))
seq = ql.GaussianPathGenerator(hw_process, length, timestep, rng, False)
def generate_paths(num_paths, timestep):
arr = np.zeros((num_paths, timestep+1))
for i in range(num_paths):
sample_path = seq.next()
path = sample_path.value()
time = [path.time(j) for j in range(len(path))]
value = [path[j] for j in range(len(path))]
arr[i, :] = np.array(value)
return np.array(time), arr
|
content/extra/notebooks/moment_matching.ipynb
|
gouthambs/karuth-source
|
artistic-2.0
|
Here is a plot of the generated short rates.
|
num_paths = 128
time, paths = generate_paths(num_paths, timestep)
for i in range(num_paths):
plt.plot(time, paths[i, :], lw=0.8, alpha=0.6)
plt.title("Hull-White Short Rate Simulation")
plt.show()
|
content/extra/notebooks/moment_matching.ipynb
|
gouthambs/karuth-source
|
artistic-2.0
|
The model zero coupon bond price $B(0, T)$ is given as:
$$B(0, T) = E\left[\exp\left(-\int_0^T r(t)dt \right) \right]$$
where $r(t)$ is the short rate generated by the model. The expectation of the stochastic discount factor at time $T$ is the price of the zero coupon bond at that time. In a simulation the paths are generated in a time grid and the discretization introduces some error. The empirical estimation of the zero coupon bond price from a Monte Carlo simulation $\hat{B}(0, t_m)$ maturing at time $t_m$ is given as:
$$\hat{B}(0, t_m) = \frac{1}{N}\sum_{i=1}^{N} \exp\left(-\sum_{j=0}^{m-1} \hat{r}_i(t_j)[t_{j+1}-t_j] \right)$$
where $\hat{r}_i(t_j)$ is the short rate for path $i$ at time $t_j$ on the time grid. The expression for the moment-matched short rates is given as [1]:
$$ r^c_i(t_j) = \hat{r}_i(t_j) + \frac{\log \hat{B}(0, t_{j+1}) - \log \hat{B}(0, t_{j})}{t_{j+1} - t_j}
- \frac{\log B(0, t_{j+1}) - \log B(0, t_{j})}{t_{j+1} - t_j}$$
|
def stoch_df(paths, time):
return np.mean(
np.exp(-cumtrapz(paths, time, initial=0.)),axis=0
)
B_emp = stoch_df(paths, time)
logB_emp = np.log(B_emp)
B_yc = np.array([yield_curve.discount(t) for t in time])
logB_yc = np.log(B_yc)
deltaT = time[1:] - time[:-1]
deltaB_emp = logB_emp[1:]-logB_emp[:-1]
deltaB_yc = logB_yc[1:] - logB_yc[:-1]
new_paths = paths.copy()
new_paths[:,1:] += (deltaB_emp/deltaT - deltaB_yc/deltaT)
|
content/extra/notebooks/moment_matching.ipynb
|
gouthambs/karuth-source
|
artistic-2.0
|
The plots below show the zero coupon bond price and mean of short rates with and without the moment matching.
|
plt.plot(time,
stoch_df(paths, time),"r-.",
label="Original", lw=2)
plt.plot(time,
stoch_df(new_paths, time),"g:",
label="Corrected", lw=2)
plt.plot(time,B_yc, "k--",label="Market", lw=1)
plt.title("Zero Coupon Bond Price")
plt.legend()
def alpha(forward, sigma, a, t):
return forward + 0.5* np.power(sigma/a*(1.0 - np.exp(-a*t)), 2)
avg = [np.mean(paths[:, i]) for i in range(timestep+1)]
new_avg = [np.mean(new_paths[:, i]) for i in range(timestep+1)]
plt.plot(time, avg, "r-.", lw=3, alpha=0.6, label="Original")
plt.plot(time, new_avg, "g:", lw=3, alpha=0.6, label="Corrected")
plt.plot(time,alpha(forward_rate, sigma, a, time), "k--", lw=2, alpha=0.6, label="Model")
plt.title("Mean of Short Rates")
plt.legend(loc=0)
|
content/extra/notebooks/moment_matching.ipynb
|
gouthambs/karuth-source
|
artistic-2.0
|
Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument, net, is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated; in this example, categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
|
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 500, activation='ReLU')
net = tflearn.fully_connected(net, 250, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
|
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
|
mdiaz236/DeepLearningFoundations
|
mit
|
I also define a plotting function, for use with the interact function, to visualize the behavior of the stars when the disrupting galaxy orbits close to the main galaxy.
|
def plotter(ic,sol,n=0):
"""Plots the positions of the stars and disrupting galaxy at each t in the time array
Parameters
--------------
ic : initial conditions
sol : solution array
n : integer
Returns
-------------
"""
plt.figure(figsize=(10,10))
y = np.linspace(-150,150,100)
plt.plot(-.01*y**2+25,y,color='k',label='S path')
plt.scatter(0,0,color='y',label='Galaxy M')
plt.scatter(sol[n][0],sol[n][1],color='b',label='Galaxy S')
for i in range(1,int(len(ic)/4)):
a = plt.scatter(sol[n][4*i],sol[n][4*i+1],color='r')
a.set_label('Star')
plt.legend()
plt.ylim(-50,50)
plt.xlim(-50,50)
|
galaxy_project/F) Plotting_function.ipynb
|
bjshaw/phys202-project
|
mit
|
Defining a plotting function that will help with plotting static images at certain times:
|
def static_plot(ic,sol,n=0):
"""Plots the positions of the stars and disrupting galaxy at a certain t in the time array
Parameters
--------------
ic : initial conditions
sol : solution array
n : integer
Returns
-------------
"""
plt.scatter(0,0,color='y',label='Galaxy M')
plt.scatter(sol[n][0],sol[n][1],color='b',label='Galaxy S')
for i in range(1,int(len(ic)/4)):
a = plt.scatter(sol[n][4*i],sol[n][4*i+1],color='r')
plt.ylim(-50,50)
plt.xlim(-50,50)
plt.tick_params(right=False,left=False,top=False,bottom=False)
ax=plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.tick_params(axis='x',labelbottom='off')
plt.tick_params(axis='y',labelleft='off')
|
galaxy_project/F) Plotting_function.ipynb
|
bjshaw/phys202-project
|
mit
|
Defining a plotting function that will help with plotting positions relative to the center of mass between the two galaxies:
|
def com_plot(ic,sol,M,S,n=0):
"""Plots the positions of the stars, main, and disrupting galaxy relative to the center of mass at a certain t in the time
array
Parameters
---------------
ic : initial condition
sol : solution array
M : mass of main galaxy
S : mass of disrupting galaxy
n : integer
Returns
--------------
"""
plt.figure(figsize=(10,10))
cm_x = (S*sol[n][0])/(M+S)
cm_y = (S*sol[n][1])/(M+S)
plt.scatter(0,0,color='g',label='Center of Mass')
plt.scatter(0-cm_x,0-cm_y,color='y',label='Galaxy M')
plt.scatter(sol[n][0]-cm_x,sol[n][1]-cm_y,color='b',label='Galaxy S')
for i in range(1,int(len(ic)/4)):
a = plt.scatter(sol[n][4*i]-cm_x,sol[n][4*i+1]-cm_y,color='r')
a.set_label('Star')
plt.legend()
plt.ylim(-100,100)
plt.xlim(-100,100)
|
galaxy_project/F) Plotting_function.ipynb
|
bjshaw/phys202-project
|
mit
|
Static plotting function around center of mass:
|
def static_plot_com(ic,sol,M,S,n=0):
"""Plots the positions of the stars, main, and disrupting galaxy relative to the center of mass at a certain t in the time
array
Parameters
--------------
ic : initial conditions
sol : solution array
M : mass of main galaxy
S : mass of disrupting galaxy
n : integer
Returns
-------------
"""
cm_x = (S*sol[n][0])/(M+S)
cm_y = (S*sol[n][1])/(M+S)
plt.scatter(0,0,color='g',label='Center of Mass')
plt.scatter(0-cm_x,0-cm_y,color='y',label='Galaxy M')
plt.scatter(sol[n][0]-cm_x,sol[n][1]-cm_y,color='b',label='Galaxy S')
for i in range(1,int(len(ic)/4)):
a = plt.scatter(sol[n][4*i]-cm_x,sol[n][4*i+1]-cm_y,color='r')
plt.ylim(-100,100)
plt.xlim(-100,100)
plt.tick_params(right=False,left=False,top=False,bottom=False)
ax=plt.gca()
ax.spines['top'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.tick_params(axis='x',labelbottom='off')
plt.tick_params(axis='y',labelleft='off')
|
galaxy_project/F) Plotting_function.ipynb
|
bjshaw/phys202-project
|
mit
|
Exercise 1:
Generate 1000 samples from a uniform distribution that spans from 2 to 5. Print the sample mean and check that it approximates its expected value.
Hint: check out the random.uniform() function
|
print('Exercise 1:\n')
# n = <FILL IN>
n = 1000
# x_unif = <FILL IN>
x_unif = np.random.uniform(2, 5, n)
# print('Sample mean = ', <FILL IN>)
print('Sample mean = ', x_unif.mean())
plt.hist(x_unif, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Uniform distribution between 2 and 5')
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 2.: Generate 1000 samples from a Gaussian distribution with mean 3 and variance 2. Print the sample mean and variance and check that they approximate their expected values.
Hint: check out the random.normal() function. Also, think about the changes you need to apply to a standard normal distribution to modify its mean and variance and try to obtain the same results using the random.randn() function.
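As a quick cross-check (an illustrative aside, not part of the exercise), the two approaches suggested in the hint are statistically equivalent — just note that random.normal takes the standard deviation, not the variance:

```python
import numpy as np

np.random.seed(0)
# random.normal takes the *standard deviation* (sqrt of the variance).
x_a = np.random.normal(loc=3, scale=np.sqrt(2), size=100000)
# The same distribution, obtained by shifting and scaling standard normals.
x_b = np.random.randn(100000) * np.sqrt(2) + 3
print(x_a.mean(), x_a.var())  # both close to 3 and 2
print(x_b.mean(), x_b.var())
```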
|
print('\nExercise 2:\n')
# n = <FILL IN>
n = 1000
# x_gauss = <FILL IN>
x_gauss = np.random.randn(n)*np.sqrt(2) + 3
# print('Sample mean = ', <FILL IN>)
print('Sample mean = ', x_gauss.mean())
# print('Sample variance = ', <FILL IN>)
print('Sample variance = ', x_gauss.var())
plt.hist(x_gauss, bins=100)
plt.xlabel('Value')
plt.ylabel('Frequency')
plt.title('Gaussian distribution with mean = 3 and variance = 2')
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 3.: Generate 100 samples of a sine signal between -5 and 5 and add uniform noise with mean 0 and amplitude 1.
|
print('\nExercise 3:\n')
# n = <FILL IN>
n = 100
# x = <FILL IN>
x = np.linspace(-5, 5, n)
# y = <FILL IN>
y = np.sin(x)
# noise = <FILL IN>
noise = np.random.uniform(-0.5, 0.5, 100)
# y_noise = <FILL IN>
y_noise = y + noise
plt.plot(x, y_noise, color='green', label='Noisy signal')
plt.plot(x, y, color='black', linestyle='--', label='Clean signal')
plt.legend(loc=3, fontsize='large')
plt.title('Sine signal with added uniform noise')
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 4.: Generate 1000 samples from a 2 dimensional Gaussian distribution with mean [2, 3] and covariance matrix [[2, 0], [0, 2]].
Hint: check out the random.multivariate_normal() function.
|
print('\nExercise 4:\n')
# n = <FILL IN>
n = 1000
# mean = <FILL IN>
mean = np.array([2, 3])
# cov = <FILL IN>
cov = np.array([[2, 0], [0, 2]])
# x_2d_gauss = <FILL IN>
x_2d_gauss = np.random.multivariate_normal(mean=mean, cov=cov, size=n)
plt.scatter(x_2d_gauss[:, 0], x_2d_gauss[:, 1], )
plt.title('2d Gaussian Scatter Plot')
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
In just a single example we have seen a lot of Matplotlib functionality that can be easily tuned. You have all you need to draw decent figures. However, those of you who want to learn more about Matplotlib can take a look at AnatomyOfMatplotlib, a collection of notebooks in which you will explore Matplotlib in more depth.
Now, try to solve the following exercises:
Exercise 5: Generate a random vector x, taking 200 samples of a uniform distribution, defined in the [-2,2] interval.
|
# x = <FILL IN>
x = 4*np.random.rand(200) - 2
# Create a weights vector w, in which w[0] = 2.4, w[1] = -0.8 and w[2] = 1.
# w = <FILL IN>
w = np.array([2.4,-0.8,2])
print('x shape:\n',x.shape)
print('\nw:\n', w)
print('w shape:\n', w.shape)
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 6: Obtain the vector y whose samples are given by the polynomial $w_2 x^2 + w_1 x + w_0$
|
# y = <FILL IN>
y = w[0] + w[1]*x + w[2]*(x**2)
print('y shape:\n',y.shape)
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 7: You probably obtained the previous vector as a sum of different terms. If so, try to obtain y again (and name it y2) as a product of a matrix X and a vector w. Then, check that both methods lead to the same result (be careful with shapes).
Hint: w will remain the same, but now X has to be constructed in a way that the dot product of X and w is consistent.
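One alternative way to build such a design matrix (a sketch, not the required approach) is np.vander, whose increasing=True ordering gives columns $[1, x, x^2]$:

```python
import numpy as np

x = np.linspace(-2, 2, 5)
w = np.array([2.4, -0.8, 2.0])
# Columns of X are x**0, x**1, x**2, so X @ w evaluates
# w[0] + w[1]*x + w[2]*x**2 as a single matrix-vector product.
X = np.vander(x, N=3, increasing=True)
y2 = X @ w
assert np.allclose(y2, w[0] + w[1]*x + w[2]*(x**2))
```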
|
# X = <FILL IN>
X = np.array([np.ones((len(x),)), x, x**2]).T
# y2 = <FILL IN>
y2 = X @ w
print('y shape:\n',y.shape)
print('y2 shape:\n',y2.shape)
if(np.sum(np.abs(y-y2))<1e-10):
print('\ny and y2 are the same, well done!')
else:
print('\nOops, something went wrong, try again!')
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 8: Define x2 as a range vector, going from -1 to 2, in steps of 0.05. Then, obtain y3 as the output of polynomial $w_2 x^2 + w_1 x + w_0$ for input x2 and plot the result using a red dashed line (--).
|
# x2 = <FILL IN>
x2 = np.arange(-1,2,0.05)
# y3 = <FILL IN>
y3 = w[0] + w[1]*x2 + w[2]*(x2**2)
# Plot
# <SOL>
fig1 = plt.figure()
plt.plot(x2,y3,'r--')
plt.title('y3 = f(x2)')
plt.ylabel('y3')
plt.xlabel('x2')
plt.show()
# </SOL>
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Now it's your turn!
Exercise 9: Obtain x_exp as 1000 samples of an exponential distribution with scale parameter of 10. Then, plot the corresponding histogram for the previous set of samples, using 50 bins. Obtain the empirical mean and make it appear in the histogram legend. Does it coincide with the theoretical one?
|
# x_exp = <FILL IN>
x_exp = np.random.exponential(10,1000)
# plt.hist(<FILL IN>)
plt.hist(x_exp,bins=50,label="Emp. mean: "+str(np.mean(x_exp)))
plt.legend(loc='best')
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Exercise 10: Taking into account that the exponential density can be expressed as:
$f(x;\beta) = \frac{1}{\beta} e^{-\frac{x}{\beta}}; x>=0$.
where $\beta$ is the scale factor, fill the variable density by applying the theoretical exponential density to the vector x. Then, take a look at the plot. Do the histogram and the density look alike? How does the number of samples affect the final result?
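As a side sanity check on the formula (assuming $\beta = 10$ as in the exercise), the density should integrate to approximately 1 over the sampled range:

```python
import numpy as np

beta = 10
dx = 0.05
x = np.arange(0, 100, dx)
density = (1 / beta) * np.exp(-x / beta)
# Riemann-sum approximation of the integral of the pdf over [0, 100);
# the tail beyond 100 carries negligible mass (about exp(-10)).
total = np.sum(density) * dx
print(total)  # close to 1
```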
|
np.random.seed(4) # Keep the same result
x_exp = np.random.exponential(10,10000) # exponential samples
x = np.arange(np.min(x_exp),np.max(x_exp),0.05)
# density = <FILL IN>
density = (1/10)*np.exp(-x/10)
w_n = np.zeros_like(x_exp) + 1. / x_exp.size
plt.hist(x_exp, weights=w_n,label='Histogram.',bins=75)
plt.plot(x,density,'r--',label='Theoretical density.')
plt.legend()
plt.show()
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
Let's now try to apply this knowledge about dictionaries with the following exercise:
Exercise 11: Create a dictionary with your name and a colleague's, and create a dictionary for each of you with what you are wearing. Then print the whole list to see what each of you is wearing.
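A non-interactive sketch of the same idea (with hypothetical names and clothes), useful if you want to try nested dictionaries without typing answers into input():

```python
# Nested dictionaries: one entry per person, each mapping garment -> item.
alumnos = {
    'Pedro Picapiedra': {'Shirt': 'orange', 'Shoes': 'none'},
    'Clark Kent': {'Shirt': 'blue', 'Glasses': 'yes'},
}
for alumno, outfit in alumnos.items():
    print(alumno, '->', outfit)
```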
|
# alumnos = <FILL IN>
alumnos = {'Pedro Picapiedra':{},'Clark Kent':{}}
clothes = ['Shirt','Dress','Glasses','Shoes']
for alumno in alumnos:
print(alumno)
# <SOL>
for element in clothes:
alumnos[alumno][element] = input(element+': ')
print(alumnos)
# </SOL>
|
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
|
ML4DS/ML4all
|
mit
|
You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca[automl].
|
# Install latest pre-release version of BigDL Orca
# Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies.
!pip install --pre --upgrade bigdl-orca[automl]
# Install xgboost
!pip install xgboost
|
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
|
intel-analytics/BigDL
|
apache-2.0
|
Step 1: Init Orca Context
|
# import necessary libraries and modules
from __future__ import print_function
import os
import argparse
from bigdl.orca import init_orca_context, stop_orca_context
from bigdl.orca import OrcaContext
# It is recommended to set this to True when running BigDL in a Jupyter notebook.
OrcaContext.log_output = True # (this will display terminal's stdout and stderr in the Jupyter notebook).
cluster_mode = "local"
if cluster_mode == "local":
init_orca_context(cores=6, memory="2g", init_ray_on_spark=True) # run in local mode
elif cluster_mode == "k8s":
init_orca_context(cluster_mode="k8s", num_nodes=2, cores=4, init_ray_on_spark=True) # run on K8s cluster
elif cluster_mode == "yarn":
init_orca_context(
cluster_mode="yarn-client", cores=4, num_nodes=2, memory="2g", init_ray_on_spark=True,
driver_memory="10g", driver_cores=1) # run on Hadoop YARN cluster
|
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
|
intel-analytics/BigDL
|
apache-2.0
|
This is the only place where you need to specify local or distributed mode. View Orca Context for more details.
Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on a Hadoop YARN cluster.
### Step 2: Define Search space
You should define a dictionary as your hyper-parameter search space for XGBRegressor. The keys are hyper-parameter names you want to search for XGBRegressor, and you can specify how you want to sample each hyper-parameter in the values of the search space. See automl.hp for more details.
|
from bigdl.orca.automl import hp
search_space = {
"n_estimators": hp.grid_search([50, 100, 200]),
"max_depth": hp.choice([2, 4, 6]),
}
|
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
|
intel-analytics/BigDL
|
apache-2.0
|
Step 3: Automatically fit and search with Orca AutoXGBoost
We will then fit AutoXGBoost automatically on Boston Housing dataset.
First create an AutoXGBRegressor.
You could also pass the sklearn XGBRegressor parameters to AutoXGBRegressor. Note that the XGBRegressor parameters shouldn't include the hyper-parameters in search_space or n_jobs, which is the same as cpus_per_trial.
|
from bigdl.orca.automl.xgboost import AutoXGBRegressor
auto_xgb_reg = AutoXGBRegressor(cpus_per_trial=2,
name="auto_xgb_classifier",
min_child_weight=3,
random_state=2)
|
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
|
intel-analytics/BigDL
|
apache-2.0
|
Pairwise Regression Fairness
We will be training a linear scoring function $f(x) = w^\top x$ where $x \in \mathbb{R}^d$ is the input feature vector. Our goal is to train the regression model subject to pairwise fairness constraints.
Specifically, for the regression model $f$, we denote:
- $sqerr(f)$ as the squared error for model $f$.
$$
sqerr(f) = \mathbf{E}\big[\big(f(x) - y\big)^2\big]
$$
$err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$.
$$
err_{i, j}(f) = \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\big]
$$
<br>
We then wish to solve the following constrained problem:
$$min_f\; sqerr(f)$$
$$\text{ s.t. } |err_{i,j}(f) - err_{k,\ell}(f)| \leq \epsilon \;\;\; \forall ((i,j), (k,\ell)) \in \mathcal{G},$$
where $\mathcal{G}$ contains the pairs we are interested in constraining.
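To make the $err_{i,j}$ definition concrete, here is a toy sketch with made-up scores and labels (illustrative only; the library computes these rates internally):

```python
import numpy as np

# Toy sketch (not the notebook's implementation) of the empirical pairwise
# error err_{i,j}: among pairs where the higher-labeled example belongs to
# group i and the lower-labeled example to group j, the fraction that the
# scores mis-order.
def pairwise_error(scores, labels, groups, i, j):
    num_pairs, num_errors = 0, 0
    for a in range(len(scores)):
        for b in range(len(scores)):
            if labels[a] > labels[b] and groups[a] == i and groups[b] == j:
                num_pairs += 1
                if scores[a] < scores[b]:
                    num_errors += 1
    return num_errors / num_pairs if num_pairs else 0.0

scores = np.array([0.9, 0.2, 0.5, 0.1])
labels = np.array([1.0, 0.0, 0.8, 0.3])
groups = np.array([0, 1, 1, 0])
err_01 = pairwise_error(scores, labels, groups, 0, 1)
print(err_01)
```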
Load Communities & Crime Data
We will use the benchmark Communities and Crime dataset from the UCI Machine Learning repository for our illustration. This dataset contains various demographic and racial distribution details (aggregated from census and law enforcement data sources) about different communities in the US, along with the per capita crime rate in each community. Our goal is to predict the crime rate for a community, a regression problem. We consider communities where the percentage of black population is above the 70-th percentile as the protected group.
|
# We will divide the data into 25 minibatches and refer to them as 'queries'.
num_queries = 25
# List of column names in the dataset.
column_names = ["state", "county", "community", "communityname", "fold", "population", "householdsize", "racepctblack", "racePctWhite", "racePctAsian", "racePctHisp", "agePct12t21", "agePct12t29", "agePct16t24", "agePct65up", "numbUrban", "pctUrban", "medIncome", "pctWWage", "pctWFarmSelf", "pctWInvInc", "pctWSocSec", "pctWPubAsst", "pctWRetire", "medFamInc", "perCapInc", "whitePerCap", "blackPerCap", "indianPerCap", "AsianPerCap", "OtherPerCap", "HispPerCap", "NumUnderPov", "PctPopUnderPov", "PctLess9thGrade", "PctNotHSGrad", "PctBSorMore", "PctUnemployed", "PctEmploy", "PctEmplManu", "PctEmplProfServ", "PctOccupManu", "PctOccupMgmtProf", "MalePctDivorce", "MalePctNevMarr", "FemalePctDiv", "TotalPctDiv", "PersPerFam", "PctFam2Par", "PctKids2Par", "PctYoungKids2Par", "PctTeen2Par", "PctWorkMomYoungKids", "PctWorkMom", "NumIlleg", "PctIlleg", "NumImmig", "PctImmigRecent", "PctImmigRec5", "PctImmigRec8", "PctImmigRec10", "PctRecentImmig", "PctRecImmig5", "PctRecImmig8", "PctRecImmig10", "PctSpeakEnglOnly", "PctNotSpeakEnglWell", "PctLargHouseFam", "PctLargHouseOccup", "PersPerOccupHous", "PersPerOwnOccHous", "PersPerRentOccHous", "PctPersOwnOccup", "PctPersDenseHous", "PctHousLess3BR", "MedNumBR", "HousVacant", "PctHousOccup", "PctHousOwnOcc", "PctVacantBoarded", "PctVacMore6Mos", "MedYrHousBuilt", "PctHousNoPhone", "PctWOFullPlumb", "OwnOccLowQuart", "OwnOccMedVal", "OwnOccHiQuart", "RentLowQ", "RentMedian", "RentHighQ", "MedRent", "MedRentPctHousInc", "MedOwnCostPctInc", "MedOwnCostPctIncNoMtg", "NumInShelters", "NumStreet", "PctForeignBorn", "PctBornSameState", "PctSameHouse85", "PctSameCity85", "PctSameState85", "LemasSwornFT", "LemasSwFTPerPop", "LemasSwFTFieldOps", "LemasSwFTFieldPerPop", "LemasTotalReq", "LemasTotReqPerPop", "PolicReqPerOffic", "PolicPerPop", "RacialMatchCommPol", "PctPolicWhite", "PctPolicBlack", "PctPolicHisp", "PctPolicAsian", "PctPolicMinor", "OfficAssgnDrugUnits", "NumKindsDrugsSeiz", "PolicAveOTWorked", "LandArea", "PopDens", 
"PctUsePubTrans", "PolicCars", "PolicOperBudg", "LemasPctPolicOnPatr", "LemasGangUnitDeploy", "LemasPctOfficDrugUn", "PolicBudgPerPop", "ViolentCrimesPerPop"]
dataset_url = "http://archive.ics.uci.edu/ml/machine-learning-databases/communities/communities.data"
# Read dataset from the UCI web repository and assign column names.
data_df = pd.read_csv(dataset_url, sep=",", names=column_names,
na_values="?")
# Make sure that there are no missing values in the "ViolentCrimesPerPop" column.
assert(not data_df["ViolentCrimesPerPop"].isna().any())
# Real-valued label: "ViolentCrimesPerPop".
labels_df = data_df["ViolentCrimesPerPop"]
# Now that we have extracted the labels,
# we drop the "ViolentCrimesPerPop" column from the data frame.
data_df.drop(columns="ViolentCrimesPerPop", inplace=True)
# Group features.
race_black_70_percentile = data_df["racepctblack"].quantile(q=0.7)
groups_df = (data_df["racepctblack"] >= race_black_70_percentile)
# Drop categorical features.
data_df.drop(columns=["state", "county", "community", "communityname", "fold"],
inplace=True)
# Handle missing features.
feature_names = data_df.columns
for feature_name in feature_names:
missing_rows = data_df[feature_name].isna() # Which rows have missing values?
if missing_rows.any(): # Check if at least one row has a missing value.
data_df[feature_name].fillna(0.0, inplace=True) # Fill NaN with 0.
missing_rows.rename(feature_name + "_is_missing", inplace=True)
data_df = data_df.join(missing_rows) # Append boolean "is_missing" feature.
labels = labels_df.values.astype(np.float32)
groups = groups_df.values.astype(np.float32)
features = data_df.values.astype(np.float32)
# Set random seed so that the results are reproducible.
np.random.seed(123456)
# We randomly divide the examples into 'num_queries' queries.
queries = np.random.randint(0, num_queries, size=features.shape[0])
# Train and test indices.
train_indices, test_indices = model_selection.train_test_split(
range(features.shape[0]), test_size=0.4)
# Train features, labels and protected groups.
train_set = {
'features': features[train_indices, :],
'labels': labels[train_indices],
'groups': groups[train_indices],
'queries': queries[train_indices],
'dimension': features.shape[-1],
'num_queries': num_queries
}
# Test features, labels and protected groups.
test_set = {
'features': features[test_indices, :],
'labels': labels[test_indices],
'groups': groups[test_indices],
'queries': queries[test_indices],
'dimension': features.shape[-1],
'num_queries': num_queries
}
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Evaluation Metrics
We will need functions to convert labeled data into paired data.
|
def pair_high_low_docs(data):
# Returns a DataFrame of pairs of higher-labeled/lower-labeled regression
# examples from the given DataFrame: forms all pairs of docs and removes
# the rows that are not needed.
pos_docs = data.copy()
neg_docs = data.copy()
# Include a merge key.
pos_docs.insert(0, "merge_key", 0)
neg_docs.insert(0, "merge_key", 0)
# Merge docs and drop merge key and label column.
pairs = pos_docs.merge(neg_docs, on="merge_key", how="outer",
suffixes=("_pos", "_neg"))
# Only retain rows where label_pos > label_neg.
pairs = pairs[pairs.label_pos > pairs.label_neg]
# Drop merge_key.
pairs.drop(columns=["merge_key"], inplace=True)
return pairs
def convert_labeled_to_paired_data(data_dict, index=None):
# Forms pairs of examples from each batch/query.
# Converts data arrays to pandas DataFrame with required column names and
# makes a call to pair_high_low_docs and returns a dictionary.
features = data_dict['features']
labels = data_dict['labels']
groups = data_dict['groups']
queries = data_dict['queries']
if index is not None:
data_df = pd.DataFrame(features[queries == index, :])
data_df = data_df.assign(label=pd.DataFrame(labels[queries == index]))
data_df = data_df.assign(group=pd.DataFrame(groups[queries == index]))
data_df = data_df.assign(query_id=pd.DataFrame(queries[queries == index]))
else:
data_df = pd.DataFrame(features)
data_df = data_df.assign(label=pd.DataFrame(labels))
data_df = data_df.assign(group=pd.DataFrame(groups))
data_df = data_df.assign(query_id=pd.DataFrame(queries))
# Forms pairs of positive-negative docs for each query in given DataFrame
# if the DataFrame has a query_id column. Otherwise forms pairs from all rows
# of the DataFrame.
data_pairs = data_df.groupby('query_id').apply(pair_high_low_docs)
# Create groups ndarray.
pos_groups = data_pairs['group_pos'].values.reshape(-1, 1)
neg_groups = data_pairs['group_neg'].values.reshape(-1, 1)
group_pairs = np.concatenate((pos_groups, neg_groups), axis=1)
# Create queries ndarray.
query_pairs = data_pairs['query_id_pos'].values.reshape(-1,)
# Create features ndarray.
feature_names = data_df.columns
feature_names = feature_names.drop(['query_id', 'label'])
feature_names = feature_names.drop(['group'])
pos_features = data_pairs[[str(s) + '_pos' for s in feature_names]].values
pos_features = pos_features.reshape(-1, 1, len(feature_names))
neg_features = data_pairs[[str(s) + '_neg' for s in feature_names]].values
neg_features = neg_features.reshape(-1, 1, len(feature_names))
feature_pairs = np.concatenate((pos_features, neg_features), axis=1)
# Paired data dict.
paired_data = {
'feature_pairs': feature_pairs,
'group_pairs': group_pairs,
'query_pairs': query_pairs,
'features': features,
'labels': labels,
'queries': queries,
'dimension': data_dict['dimension'],
'num_queries': data_dict['num_queries']
}
return paired_data
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
We will also need functions to evaluate the pairwise error rates for a linear model.
|
def get_mask(groups, pos_group, neg_group=None):
# Returns a boolean mask selecting positive-negative document pairs where
# the protected group for the positive document is pos_group and
# the protected group for the negative document (if specified) is neg_group.
# Column 0 of `groups` holds the positive doc's group membership; column 1
# holds the negative doc's.
mask_pos = groups[:, 0] == pos_group
if neg_group is None:
return mask_pos
else:
mask_neg = groups[:, 1] == neg_group
return mask_pos & mask_neg
def mean_squared_error(model, dataset):
# Returns mean squared error for Keras model on dataset.
scores = model.predict(dataset['features'])
labels = dataset['labels']
return np.mean((scores - labels) ** 2)
def group_error_rate(model, dataset, pos_group, neg_group=None):
# Returns error rate for Keras model on data set, considering only document
# pairs where the protected group for the positive document is pos_group, and
# the protected group for the negative document (if specified) is neg_group.
d = dataset['dimension']
scores0 = model.predict(dataset['feature_pairs'][:, 0, :].reshape(-1, d))
scores1 = model.predict(dataset['feature_pairs'][:, 1, :].reshape(-1, d))
mask = get_mask(dataset['group_pairs'], pos_group, neg_group)
diff = scores0 - scores1
diff = diff[mask > 0].reshape((-1))
return np.mean(diff < 0)
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Create Linear Model
We then write a function to create the linear scoring model.
|
def create_scoring_model(feature_pairs, features, dimension):
# Returns a linear Keras scoring model, along with nullary functions that
# return the prediction differences on pairs and the predictions on features.
# Linear scoring model with no hidden layers.
layers = []
# Input layer takes `dimension` inputs.
layers.append(tf.keras.Input(shape=(dimension,)))
layers.append(tf.keras.layers.Dense(1))
scoring_model = tf.keras.Sequential(layers)
# Create a nullary function that applies the linear model to the feature
# pairs and returns the tensor of prediction differences on the pairs.
def prediction_diffs():
scores0 = scoring_model(feature_pairs()[:, 0, :].reshape(-1, dimension))
scores1 = scoring_model(feature_pairs()[:, 1, :].reshape(-1, dimension))
return scores0 - scores1
# Create a nullary function that returns the predictions on individual
# examples.
predictions = lambda: scoring_model(features())
return scoring_model, prediction_diffs, predictions
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Formulate Optimization Problem
We are ready to formulate the constrained optimization problem using the TFCO library.
|
def group_mask_fn(groups, pos_group, neg_group=None):
# Returns a nullary function returning group mask.
group_mask = lambda: np.reshape(
get_mask(groups(), pos_group, neg_group), (-1))
return group_mask
def formulate_problem(
feature_pairs, group_pairs, features, labels, dimension,
constraint_groups=[], constraint_slack=None):
# Formulates a constrained problem that optimizes the squared error for a linear
# model on the specified dataset, subject to pairwise fairness constraints
# specified by the constraint_groups and the constraint_slack.
#
# Args:
# feature_pairs: Nullary function returning paired features
# group_pairs: Nullary function returning paired groups
# features: Nullary function returning features
# labels: Nullary function returning labels
# dimension: Input dimension for scoring model
# constraint_groups: List containing tuples of the form
# ((pos_group0, neg_group0), (pos_group1, neg_group1)), specifying the
# group memberships for the document pairs to compare in the constraints.
# constraint_slack: slackness '\epsilon' allowed in the constraints.
# Returns:
# A RateMinimizationProblem object, and a Keras scoring model.
# Create linear scoring model: we get back a Keras model and a nullary
# function returning predictions on the features.
scoring_model, prediction_diffs, predictions = create_scoring_model(
feature_pairs, features, dimension)
# Context for the optimization objective.
context = tfco.rate_context(prediction_diffs)
# Squared loss objective.
squared_loss = lambda: tf.reduce_mean((predictions() - labels()) ** 2)
# Constraint set.
constraint_set = []
# Context for the constraints.
for ((pos_group0, neg_group0), (pos_group1, neg_group1)) in constraint_groups:
# Context for group 0.
group_mask0 = group_mask_fn(group_pairs, pos_group0, neg_group0)
context_group0 = context.subset(group_mask0)
# Context for group 1.
group_mask1 = group_mask_fn(group_pairs, pos_group1, neg_group1)
context_group1 = context.subset(group_mask1)
# Add constraints to constraint set.
constraint_set.append(
tfco.negative_prediction_rate(context_group0) <= (
tfco.negative_prediction_rate(context_group1) + constraint_slack))
constraint_set.append(
tfco.negative_prediction_rate(context_group1) <= (
tfco.negative_prediction_rate(context_group0) + constraint_slack))
# Formulate constrained minimization problem.
problem = tfco.RateMinimizationProblem(
tfco.wrap_rate(squared_loss), constraint_set)
return problem, scoring_model
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Train Model
The following function then trains the linear model by solving the above constrained optimization problem. We first provide a training function with minibatch gradient updates. We handle three types of pairwise fairness criteria (specified by 'constraint_type'), and assign the (pos_group, neg_group) pairs to compare accordingly.
|
def train_model(train_set, params):
# Trains the model with stochastic updates (one query per update).
#
# Args:
# train_set: Dictionary of "paired" training data.
# params: Dictionary of hyper-parameters for training.
#
# Returns:
# The best model snapshot selected over the course of training.
# Set random seed for reproducibility.
random.seed(333333)
np.random.seed(121212)
tf.random.set_seed(212121)
# Set up problem and model.
if params['constrained']:
# Constrained optimization.
if params['constraint_type'] == 'marginal_equal_opportunity':
constraint_groups = [((0, None), (1, None))]
elif params['constraint_type'] == 'cross_group_equal_opportunity':
constraint_groups = [((0, 1), (1, 0))]
else:
constraint_groups = [((0, 1), (1, 0)), ((0, 0), (1, 1))]
else:
# Unconstrained optimization.
constraint_groups = []
# Dictionary that will hold the feature pairs, group pairs, features and
# labels for the current batch. We include one query per batch.
paired_batch = {}
batch_index = 0 # Index of current query.
# Data functions.
feature_pairs = lambda: paired_batch['feature_pairs']
group_pairs = lambda: paired_batch['group_pairs']
features = lambda: paired_batch['features']
labels = lambda: paired_batch['labels']
# Create scoring model and constrained optimization problem.
problem, scoring_model = formulate_problem(
feature_pairs, group_pairs, features, labels, train_set['dimension'],
constraint_groups, params['constraint_slack'])
# Create a loss function for the problem.
lagrangian_loss, update_ops, multipliers_variables = (
tfco.create_lagrangian_loss(problem, dual_scale=params['dual_scale']))
# Create optimizer
optimizer = tf.keras.optimizers.Adagrad(learning_rate=params['learning_rate'])
# List of trainable variables.
var_list = (
scoring_model.trainable_weights + problem.trainable_variables +
[multipliers_variables])
# List of objectives, group constraint violations, and snapshots of the
# model during the course of training.
objectives = []
group_violations = []
models = []
feature_pair_batches = train_set['feature_pairs']
group_pair_batches = train_set['group_pairs']
query_pairs = train_set['query_pairs']
feature_batches = train_set['features']
label_batches = train_set['labels']
queries = train_set['queries']
print()
# Run loops * iterations_per_loop full batch iterations.
for ii in range(params['loops']):
for jj in range(params['iterations_per_loop']):
# Populate paired_batch dict with all pairs for current query. The batch
# index is the same as the current query index.
paired_batch = {
'feature_pairs': feature_pair_batches[query_pairs == batch_index],
'group_pairs': group_pair_batches[query_pairs == batch_index],
'features': feature_batches[queries == batch_index],
'labels': label_batches[queries == batch_index]
}
# Optimize loss.
update_ops()
optimizer.minimize(lagrangian_loss, var_list=var_list)
# Update batch_index, and cycle back once last query is reached.
batch_index = (batch_index + 1) % train_set['num_queries']
  # Snapshot the current model.
model_copy = tf.keras.models.clone_model(scoring_model)
model_copy.set_weights(scoring_model.get_weights())
models.append(model_copy)
# Evaluate metrics for snapshotted model.
error, gerr, group_viol = evaluate_results(
scoring_model, train_set, params)
objectives.append(error)
group_violations.append(
[x - params['constraint_slack'] for x in group_viol])
sys.stdout.write(
'\r Loop %d: error = %.3f, max constraint violation = %.3f' %
(ii, objectives[-1], max(group_violations[-1])))
print()
if params['constrained']:
  # Find model iterate that trades off between objective and group violations.
best_index = tfco.find_best_candidate_index(
np.array(objectives), np.array(group_violations), rank_objectives=False)
else:
# Find model iterate that achieves lowest objective.
best_index = np.argmin(objectives)
return models[best_index]
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Summarize and Plot Results
Having trained a model, we will need functions to summarize the various evaluation metrics.
|
def evaluate_results(model, test_set, params):
  # Returns squared error, group error rates, and group-level constraint violations.
if params['constraint_type'] == 'marginal_equal_opportunity':
g0_error = group_error_rate(model, test_set, 0)
g1_error = group_error_rate(model, test_set, 1)
group_violations = [g0_error - g1_error, g1_error - g0_error]
return (mean_squared_error(model, test_set), [g0_error, g1_error],
group_violations)
else:
g00_error = group_error_rate(model, test_set, 0, 0)
g01_error = group_error_rate(model, test_set, 0, 1)
    g10_error = group_error_rate(model, test_set, 1, 0)
g11_error = group_error_rate(model, test_set, 1, 1)
group_violations_offdiag = [g01_error - g10_error, g10_error - g01_error]
group_violations_diag = [g00_error - g11_error, g11_error - g00_error]
if params['constraint_type'] == 'cross_group_equal_opportunity':
return (mean_squared_error(model, test_set),
[[g00_error, g01_error], [g10_error, g11_error]],
group_violations_offdiag)
else:
return (mean_squared_error(model, test_set),
[[g00_error, g01_error], [g10_error, g11_error]],
group_violations_offdiag + group_violations_diag)
def display_results(
model, test_set, params, method, error_type, show_header=False):
# Prints evaluation results for model on test data.
error, group_error, diffs = evaluate_results(model, test_set, params)
if params['constraint_type'] == 'marginal_equal_opportunity':
if show_header:
print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0\t\tGroup 1\t\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0], group_error[1],
np.max(diffs)))
elif params['constraint_type'] == 'cross_group_equal_opportunity':
if show_header:
print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0/1\tGroup 1/0\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0][1], group_error[1][0],
np.max(diffs)))
else:
if show_header:
      print('\nMethod\t\t\tError\t\tMSE\t\tGroup 0/1\tGroup 1/0\t' +
'Group 0/0\tGroup 1/1\tDiff')
print('%s\t%s\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f\t\t%.3f' % (
method, error_type, error, group_error[0][1], group_error[1][0],
group_error[0][0], group_error[1][1], np.max(diffs)))
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Experimental Results
We now run experiments with two types of pairwise fairness criteria: (1) marginal_equal_opportunity and (2) pairwise equal opportunity. In each case, we compare an unconstrained model trained to optimize just the squared error and a constrained model trained with pairwise fairness constraints.
|
# Convert train/test set to paired data for later evaluation.
paired_train_set = convert_labeled_to_paired_data(train_set)
paired_test_set = convert_labeled_to_paired_data(test_set)
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
(1) Marginal Equal Opportunity
For a scoring model $f: \mathbb{R}^d \rightarrow \mathbb{R}$, recall:
- $sqerr(f)$ as the squared error for scoring function $f$.
and we additionally define:
$err_i(f)$ as the row-marginal pairwise error over example pairs where the higher-label example is from group $i$ and the lower-label example is from either group:
$$
err_i(f) = \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i\big]
$$
The constrained optimization problem we solve constrains the row-marginal pairwise errors to be similar:
$$\min_f\;sqerr(f)$$
$$\text{s.t. }\;|err_0(f) - err_1(f)| \leq 0.02$$
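The quantity $err_i(f)$ can be computed directly from scores, labels, and group ids. As a small illustrative sketch (not part of the notebook, using hypothetical toy arrays):

```python
import numpy as np

def marginal_pairwise_error(scores, labels, groups, i):
    """err_i: fraction of pairs (x, x') with y > y' and grp(x) = i
    that the model mis-orders, i.e. f(x) < f(x')."""
    errors, total = 0, 0
    for a in range(len(scores)):
        for b in range(len(scores)):
            if labels[a] > labels[b] and groups[a] == i:
                total += 1
                errors += int(scores[a] < scores[b])
    return errors / total if total else 0.0

# Toy data: four examples with scores f(x), labels y, and group ids.
scores = np.array([0.9, 0.2, 0.3, 0.4])
labels = np.array([1.0, 0.0, 0.8, 0.3])
groups = np.array([0, 1, 0, 1])

err0 = marginal_pairwise_error(scores, labels, groups, 0)  # 1 of 5 pairs mis-ordered
err1 = marginal_pairwise_error(scores, labels, groups, 1)
violation = abs(err0 - err1)  # the quantity constrained to be at most 0.02
```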
|
# Model hyper-parameters.
model_params = {
'loops': 10,
'iterations_per_loop': 250,
'learning_rate': 0.1,
'constraint_type': 'marginal_equal_opportunity',
'constraint_slack': 0.02,
'dual_scale': 1.0}
# Unconstrained optimization.
model_params['constrained'] = False
model_unc = train_model(paired_train_set, model_params)
display_results(model_unc, paired_train_set, model_params, 'Unconstrained ',
'Train', show_header=True)
display_results(model_unc, paired_test_set, model_params, 'Unconstrained ',
'Test')
# Constrained optimization with TFCO.
model_params['constrained'] = True
model_con = train_model(paired_train_set, model_params)
display_results(model_con, paired_train_set, model_params, 'Constrained ',
'Train', show_header=True)
display_results(model_con, paired_test_set, model_params, 'Constrained ',
'Test')
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
(2) Pairwise Equal Opportunity
Recall that we denote
$err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$.
$$
err_{i, j}(f) ~=~ \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\big]
$$
We first constrain only the cross-group errors, highlighted below.
<br>
<table border='1' bordercolor='black'>
<tr >
<td bgcolor='white'> </td>
<td bgcolor='white'> </td>
<td bgcolor='white' colspan=2 align=center><b>Negative</b></td>
</tr>
<tr>
<td bgcolor='white'></td>
<td bgcolor='white'></td>
<td>Group 0</td>
<td>Group 1</td>
</tr>
<tr>
<td bgcolor='white' rowspan=2><b>Positive</b></td>
<td bgcolor='white'>Group 0</td>
<td bgcolor='white'>$err_{0,0}$</td>
<td bgcolor='white'>$\mathbf{err_{0,1}}$</td>
</tr>
<tr>
<td>Group 1</td>
<td bgcolor='white'>$\mathbf{err_{1,0}}$</td>
<td bgcolor='white'>$err_{1,1}$</td>
</tr>
</table>
<br>
The optimization problem we solve constrains the cross-group pairwise errors to be similar:
$$\min_f\; sqerr(f)$$
$$\text{s.t. }\;\; |err_{0,1}(f) - err_{1,0}(f)| \leq 0.02$$
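The cross-group errors $err_{i,j}(f)$ condition on the groups of both examples in a pair. A small illustrative sketch (again on hypothetical toy arrays, not part of the notebook):

```python
import numpy as np

def cross_group_pairwise_error(scores, labels, groups, i, j):
    """err_{i,j}: fraction of pairs with y > y', grp(x) = i, grp(x') = j
    that are mis-ordered by the model (f(x) < f(x'))."""
    errors, total = 0, 0
    for a in range(len(scores)):
        for b in range(len(scores)):
            if labels[a] > labels[b] and groups[a] == i and groups[b] == j:
                total += 1
                errors += int(scores[a] < scores[b])
    return errors / total if total else 0.0

# Toy data: hypothetical scores f(x), labels y, and group ids.
scores = np.array([0.9, 0.2, 0.3, 0.4])
labels = np.array([1.0, 0.0, 0.8, 0.3])
groups = np.array([0, 1, 0, 1])

err01 = cross_group_pairwise_error(scores, labels, groups, 0, 1)
err10 = cross_group_pairwise_error(scores, labels, groups, 1, 0)
violation = abs(err01 - err10)  # the quantity constrained to be at most 0.02
```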
|
# Model hyper-parameters.
model_params = {
'loops': 10,
'iterations_per_loop': 250,
'learning_rate': 0.1,
'constraint_type': 'cross_group_equal_opportunity',
'constraint_slack': 0.02,
'dual_scale': 1.0}
# Unconstrained optimization.
model_params['constrained'] = False
model_unc = train_model(paired_train_set, model_params)
display_results(model_unc, paired_train_set, model_params, 'Unconstrained ',
'Train', show_header=True)
display_results(model_unc, paired_test_set, model_params, 'Unconstrained ',
'Test')
# Constrained optimization with TFCO.
model_params['constrained'] = True
model_con = train_model(paired_train_set, model_params)
display_results(model_con, paired_train_set, model_params, 'Constrained ',
'Train', show_header=True)
display_results(model_con, paired_test_set, model_params, 'Constrained ',
'Test')
|
pairwise_fairness/regression_crime.ipynb
|
google-research/google-research
|
apache-2.0
|
Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice]
Comparing to random initialization
Create variables for randomly initializing the EM algorithm. Complete the following code block.
|
np.random.seed(5)
num_clusters = len(means)
num_docs, num_words = tf_idf.shape
random_means = []
random_covs = []
random_weights = []
for k in range(num_clusters):
# Create a numpy array of length num_words with random normally distributed values.
# Use the standard univariate normal distribution (mean 0, variance 1).
# YOUR CODE HERE
mean = ...
# Create a numpy array of length num_words with random values uniformly distributed between 1 and 5.
# YOUR CODE HERE
cov = ...
# Initially give each cluster equal weight.
# YOUR CODE HERE
weight = ...
random_means.append(mean)
random_covs.append(cov)
random_weights.append(weight)
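For reference, one plausible initialization consistent with the comments above might look like the following sketch. The small `num_clusters`/`num_words` values here are stand-ins (in the assignment they come from `means` and `tf_idf`), and the graded answer may differ, e.g. using `1.0` rather than `1/num_clusters` for the equal weights:

```python
import numpy as np

np.random.seed(5)
num_clusters, num_words = 3, 5  # small stand-in sizes for illustration

random_means, random_covs, random_weights = [], [], []
for k in range(num_clusters):
    mean = np.random.randn(num_words)          # standard normal draws
    cov = np.random.uniform(1, 5, num_words)   # uniform on [1, 5)
    weight = 1.0 / num_clusters                # equal cluster weights
    random_means.append(mean)
    random_covs.append(cov)
    random_weights.append(weight)
```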
|
Clustering_&_Retrieval/Week4/Assignment2/.ipynb_checkpoints/4_em-with-text-data_blank-checkpoint.ipynb
|
rashikaranpuria/Machine-Learning-Specialization
|
mit
|
pyNCS analysis
This is basically a copy of NCS/python3-6/NCSdemo_simulation.py.
|
# create normalized ideal image
fpath1 = os.path.join(py_ncs_path, "../randwlcposition.mat")
imgsz = 128
zoom = 8
Pixelsize = 0.1
NA = 1.4
Lambda = 0.7
t = time.time()
res = ncs.genidealimage(imgsz,Pixelsize,zoom,NA,Lambda,fpath1)
elapsed = time.time()-t
print('Elapsed time for generating ideal image:', elapsed)
imso = res[0]
pyplot.imshow(imso,cmap="gray")
# select variance map from calibrated map data
fpath = os.path.join(py_ncs_path, "../gaincalibration_561_gain.mat")
noisemap = ncs.gennoisemap(imgsz,fpath)
varsub = noisemap[0]*10 # increase the readout noise by 10 to demonstrate the effect of NCS algorithm
gainsub = noisemap[1]
# generate simulated data
I = 100
bg = 10
offset = 100
N = 1
dataimg = ncs.gendatastack(imso,varsub,gainsub,I,bg,offset,N)
imsd = dataimg[1]
print(imsd.shape)
alpha = 0.1
|
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
|
HuanglabPurdue/NCS
|
gpl-3.0
|
pyCNCS analysis
Mixed C and Python NCS analysis.
|
# Get the OTF mask that NCSDemo_simulation.py used.
rcfilter = ncs.genfilter(128,Pixelsize,NA,Lambda,'OTFweighted',1,0.7)
print(rcfilter.shape)
pyplot.imshow(rcfilter, cmap = "gray")
pyplot.show()
# Calculate gamma and run Python/C NCS.
gamma = varsub/(gainsub*gainsub)
# This takes ~100ms on my laptop.
out_c = ncsC.pyReduceNoise(imsd[0], gamma, rcfilter, alpha)
|
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
|
HuanglabPurdue/NCS
|
gpl-3.0
|