Validate loading
import pandas as pd

df = pd.read_csv("./parcel-meta.csv")
for r in res:
    row = df[df.Filename == r['table']]
    if int(row['Records']) != r['count']:
        display('Mismatch on parcel: {} csv count {} != db count of {}'.format(row['Filename'], row['Records'], r['count']))
sources/parcels/notebooks/parcel-loading.ipynb
FireCARES/data
mit
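A self-contained sketch of the validation loop above, with a hypothetical in-memory metadata frame standing in for ./parcel-meta.csv and a hypothetical `res` list standing in for the database counts (using `.iloc[0]` rather than `int(Series)`, which newer pandas rejects):

```python
import pandas as pd

# Hypothetical stand-ins for ./parcel-meta.csv and the DB query results
df = pd.DataFrame({"Filename": ["a.csv", "b.csv"], "Records": [10, 20]})
res = [{"table": "a.csv", "count": 10}, {"table": "b.csv", "count": 19}]

mismatches = []
for r in res:
    row = df[df.Filename == r["table"]]
    if int(row["Records"].iloc[0]) != r["count"]:
        mismatches.append(
            "Mismatch on parcel: {} csv count {} != db count of {}".format(
                row["Filename"].iloc[0], row["Records"].iloc[0], r["count"]
            )
        )
print(mismatches)
```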
Consolidate into a single table Create the destination parcels table
with conn.cursor() as c: c.execute('CREATE TABLE public.parcels_2018 AS TABLE public.parcels WITH NO DATA;') conn.commit() sql = """ insert into public.parcels_2018 (ogc_fid, wkb_geometry, parcel_id, state_code, cnty_code, apn, apn2, addr, city, state, zip, plus, std_addr, std_city, std_state, ...
sources/parcels/notebooks/parcel-loading.ipynb
FireCARES/data
mit
You may see a warning message from Kubeflow Pipeline logs saying "Insufficient nvidia.com/gpu". If so, this probably means that your GPU-enabled node is still spinning up; please wait a few minutes. You can check the current nodes in your cluster like this: kubectl get nodes -o wide If everything runs as expected, th...
import kfp from kfp import dsl def gpu_p100_op(): return dsl.ContainerOp( name='check_p100', image='tensorflow/tensorflow:latest-gpu', command=['sh', '-c'], arguments=['nvidia-smi'] ).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accelerator', 'nvidia-tesla...
samples/tutorials/gpu/gpu.ipynb
kubeflow/kfp-tekton-backend
apache-2.0
You should see different "nvidia-smi" logs from the two pipeline steps. Using Preemptible GPUs A Preemptible GPU resource is cheaper, but use of these instances means that a pipeline step has the potential to be aborted and then retried. This means that pipeline steps used with preemptible instances must be idempotent...
import kfp import kfp.gcp as gcp from kfp import dsl def gpu_p100_op(): return dsl.ContainerOp( name='check_p100', image='tensorflow/tensorflow:latest-gpu', command=['sh', '-c'], arguments=['nvidia-smi'] ).set_gpu_limit(1).add_node_selector_constraint('cloud.google.com/gke-accel...
samples/tutorials/gpu/gpu.ipynb
kubeflow/kfp-tekton-backend
apache-2.0
First I'm going to pull out a small subset to work with
csub = cbar.loc[3323:3324, 1:2]
csub
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
I happen to like the way that's organized, but let's say that I want to have the item descriptions in columns and the mode IDs and element numbers in rows. To do that, I'll first move the element IDs up to the columns using .unstack(level=0) and then transpose the result:
csub.unstack(level=0).T
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
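The same unstack(level=0).T pattern on a toy two-level frame, so the reshape is easy to follow (the index names and values here are invented for the illustration, not pyNastran's):

```python
import pandas as pd

# Toy frame: rows indexed by (ElementID, Item), columns are load/mode IDs
idx = pd.MultiIndex.from_product(
    [[3323, 3324], ["Mode1", "Mode2"]], names=["ElementID", "Item"]
)
df = pd.DataFrame({1: [10, 11, 12, 13], 2: [20, 21, 22, 23]}, index=idx)

# Move ElementID (level 0) up into the columns, then transpose:
# rows become (original column, ElementID), columns become Item
out = df.unstack(level=0).T
print(out)
```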
unstack requires unique row indices, so I can't work with CQUAD4 stresses as they're currently output; I'll work with CHEXA stresses instead. Let's pull out the first two elements and first two modes:
chs = isat.chexa_stress[1].data_frame.loc[3684:3685, 1:2]
chs
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
Now I want to put ElementID and the Node ID in the rows along with the Load ID, and have the items in the columns:
cht = chs.unstack(level=[0, 1]).T
cht
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
Maybe I'd like my rows organized with the modes on the inside. I can do that by swapping levels. First, though, we need to get rid of the extra rows using dropna():
cht = cht.dropna()
cht
# mode, eigr, freq, rad, eids, nids  # initial
# nids, eids, eigr, freq, rad, mode  # final
cht.swaplevel(0, 4).swaplevel(1, 5).swaplevel(2, 5).swaplevel(4, 5)
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
Alternatively I can do that by first using reset_index to move all the index columns into data, and then using set_index to define the order of columns I want as my index:
cht.reset_index().set_index(['ElementID','NodeID','Mode','Freq']).sort_index()
docs/quick_start/demo/op2_pandas_unstack.ipynb
saullocastro/pyNastran
lgpl-3.0
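A minimal illustration of the reset_index / set_index reordering described above, on a toy frame (the index names mirror the notebook's, the values are invented):

```python
import pandas as pd

# Toy frame indexed by (Mode, ElementID)
idx = pd.MultiIndex.from_product(
    [[1, 2], [3684, 3685]], names=["Mode", "ElementID"]
)
df = pd.DataFrame({"oxx": [1.0, 2.0, 3.0, 4.0]}, index=idx)

# Flatten the index into ordinary columns, then rebuild it in the desired order
out = df.reset_index().set_index(["ElementID", "Mode"]).sort_index()
print(out)
```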
Support vector machines: training Next, we train an SVM classifier on our data. The first line creates our classifier using the SVC() function. For now we can ignore the parameter kernel='linear'; this just means the decision boundaries should be straight lines. The second line uses the fit() method to train the classi...
# Create an instance of SVM and fit the data.
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_small, y)
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Plot the classification boundaries Now that we have our classifier, let's visualize what it's doing. First we plot the decision spaces, or the areas assigned to the different labels (species of iris). Then we plot our examples onto the space, showing where each point lies and the corresponding decision boundary. The c...
h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Making predictions Now, let's say we go out and measure the sepals of two new iris plants, and want to know what species they are. We're going to use our classifier to predict the flowers with the following measurements: Plant | Sepal length | Sepal width ------|--------------|------------ A |4.3 |2.5 B ...
# Add our new data examples examples = [[4.3, 2.5], # Plant A [6.3, 2.1]] # Plant B # Create an instance of SVM and fit the data clf = SVC(kernel='linear', decision_function_shape='ovo') clf.fit(X_small, y) # Predict the labels for our new examples labels = clf.predict(examples) # Print the predicted sp...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Plotting our predictions Now let's plot our predictions to see why they were classified that way.
# Now plot the results h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.m...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
What are the support vectors in this example? Below, we define a function to plot the solid decision boundary and corresponding dashed lines, as shown in the introductory picture. Because there are three classes to separate, there will now be three sets of lines.
def plot_svc_decision_function(clf): """Plot the decision function for a 2D SVC""" x = np.linspace(plt.xlim()[0], plt.xlim()[1], 30) y = np.linspace(plt.ylim()[0], plt.ylim()[1], 30) Y, X = np.meshgrid(y, x) P = np.zeros((3,X.shape[0],X.shape[1])) for i, xi in enumerate(x): for j, yj in ...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
And now we plot the lines on top of our previous plot
# Now plot the results h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_min, y_max]. x_min, x_max = X_small[:, 0].min() - 1, X_small[:, 0].max() + 1 y_min, y_max = X_small[:, 1].min() - 1, X_small[:, 1].max() + 1 xx, yy = np.m...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
This plot is much more visually cluttered than our previous toy example. There are a few points worth noticing if you take a closer look. First, notice how the three solid lines run right along one of the decision boundaries. These are used to determine the boundaries between the classification areas (where the colors...
# Create an instance of SVM and fit the data. clf = SVC(kernel='rbf', decision_function_shape='ovo') # Use the RBF kernel this time clf.fit(X_small, y) # Now plot the results h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_m...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
The boundaries are very similar to before, but now they're curved instead of straight. Now let's add the decision boundaries.
# Create an instance of SVM and fit the data. clf = SVC(kernel='rbf', decision_function_shape='ovo') # Use the RBF kernel this time clf.fit(X_small, y) # Now plot the results h = .02 # step size in the mesh # Plot the decision boundary. For that, we will assign a color to each # point in the mesh [x_min, m_max]x[y_m...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Now the plot looks very different from before! The solid black lines are now all curves, but each decision boundary still falls along one part of those lines. And instead of having dotted lines parallel to the solid, there are smaller ellipsoids on either side of the solid line. What other kernels exist? Scikit-learn c...
# Your code here!
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
What about my other features? We've been looking at two features: the length and width of the plant's sepal. But what about the other two features, petal length and width? What does the graph look like when you train on the petal length and width? How does it change when you change the SVM kernel? How would you plot our two...
# Your code here!
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Using more than two features Sticking to two features is great for visualization, but is less practical for solving real machine learning problems. If you have time, you can experiment with using more features to train your classifier. It gets much harder to visualize the results with 3 features, and nearly impossible ...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Create an instance of SVM and fit the data
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_train, y_train)

# Predict the labels of the test data
predictions = clf.predict(X_test)
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Next, we evaluate how well the classifier did. The easiest way to do this is to compute the fraction of correct predictions, usually referred to as the accuracy.
accuracy = np.mean(predictions == y_test) * 100
print('The accuracy is %.2f' % accuracy + '%')
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
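The accuracy computation is just the fraction of matching labels scaled to a percentage; a quick numpy check with made-up predictions:

```python
import numpy as np

# Fabricated labels for illustration: one wrong prediction out of five
y_test = np.array([0, 1, 2, 1, 0])
predictions = np.array([0, 1, 1, 1, 0])

accuracy = np.mean(predictions == y_test) * 100
print('The accuracy is %.2f' % accuracy + '%')  # 80.00%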
Comparing Models with Cross-Validation To select the kernel to use in our model, we need to use cross-validation. We can then get our final result using our test data. First we choose the kernels we want to investigate, then divide our training data into folds. We loop through the sets of training and validation folds. E...
# Choose our kernels kernels = ['linear', 'rbf'] # Create a dictionary of arrays to store accuracies accuracies = {} for kernel in kernels: accuracies[kernel] = [] # Loop through 5 folds kf = KFold(n_splits=5) for trainInd, valInd in kf.split(X_train): X_tr = X_train[trainInd,:] y_tr = y_train[trainInd] ...
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Select a Model To select a model, we look at the average accuracy across all folds.
for kernel in kernels:
    print('%s: %.2f' % (kernel, np.mean(accuracies[kernel])))
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
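Selecting a model this way reduces to comparing per-kernel mean accuracies; a sketch with fabricated per-fold numbers in the same dict-of-lists shape the KFold loop builds:

```python
import numpy as np

# Fabricated per-fold accuracies, for illustration only
accuracies = {'linear': [96.0, 94.0, 98.0, 95.0, 97.0],
              'rbf':    [93.0, 92.0, 95.0, 94.0, 91.0]}

# Average across folds, then pick the kernel with the highest mean
means = {k: np.mean(v) for k, v in accuracies.items()}
best = max(means, key=means.get)
print(best, means[best])
```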
Final Evaluation The linear kernel gives us the highest accuracy, so we select it as our best model. Now we can evaluate it on our test set and get our final accuracy rating.
clf = SVC(kernel='linear', decision_function_shape='ovo')
clf.fit(X_train, y_train)
predictions = clf.predict(X_test)

accuracy = np.mean(predictions == y_test) * 100
print('The final accuracy is %.2f' % accuracy + '%')
SVM_Tutorial.ipynb
abatula/MachineLearningIntro
gpl-2.0
Lab Task #1: Set environment variables. Set environment variables so that we can use them throughout the entire lab. We will be using our project name for our bucket, so you only need to change your project and region.
%%bash export PROJECT=$(gcloud config list project --format "value(core.project)") echo "Your current GCP Project Name is: "${PROJECT} # TODO: Change these to try this notebook out PROJECT = "cloud-training-demos" # Replace with your PROJECT BUCKET = PROJECT # defaults to PROJECT REGION = "us-central1" # Replace wi...
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
We then use the %%writefile magic to write the contents of the cell below to a file called task.py in the babyweight/trainer folder. Lab Task #2: Create trainer module's task.py to hold hyperparameter argparsing code. The cell below writes the file babyweight/trainer/task.py which sets up our training job. Here is wher...
%%writefile babyweight/trainer/task.py import argparse import json import os from babyweight.trainer import model import tensorflow as tf if __name__ == "__main__": parser = argparse.ArgumentParser() parser.add_argument( "--job-dir", help="this model ignores this field, but it is required by ...
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
In the same way we can write to the file model.py the model that we developed in the previous notebooks. Lab Task #3: Create trainer module's model.py to hold Keras model code. Complete the TODOs in the code cell below to create our model.py. We'll use the code we wrote for the Wide & Deep model. Look back at your 9_k...
%%writefile babyweight/trainer/model.py import datetime import os import shutil import numpy as np import tensorflow as tf # Determine CSV, label, and key columns # TODO: Add CSV_COLUMNS and LABEL_COLUMN # Set default values for each CSV column. # Treat is_male and plurality as strings. # TODO: Add DEFAULTS def fea...
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
Lab Task #5: Train on Cloud AI Platform. Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section. Complet...
%%bash OUTDIR=gs://${BUCKET}/babyweight/trained_model JOBID=babyweight_$(date -u +%y%m%d_%H%M%S) echo ${OUTDIR} ${REGION} ${JOBID} gsutil -m rm -rf ${OUTDIR} IMAGE=gcr.io/${PROJECT}/babyweight_training_container gcloud ai-platform jobs submit training ${JOBID} \ --staging-bucket=gs://${BUCKET} \ --region=${...
courses/machine_learning/deepdive2/structured/labs/5a_train_keras_ai_platform_babyweight.ipynb
turbomanage/training-data-analyst
apache-2.0
2. Myria: Connections, Relations, and Queries (and Schemas and Plans)
# How many datasets are there on the server?
print len(connection.datasets())

# Let's look at the first dataset...
dataset = connection.datasets()[0]
print dataset['relationKey']['relationName']
print dataset['created']

# View data stored in this relation
MyriaRelation(dataset['relationKey'])
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Uploading data
%%query -- Load from S3 florida = load("https://s3-us-west-2.amazonaws.com/myria-demo-data/fl_insurance_sample_2.csv", csv(schema( id:int, geo:string, granularity:int, deductable:float, policyID:int, construction:string, line:string, ...
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Working with relations
# Using the previously-stored insurance relation
MyriaRelation("insurance")

# View details about this relation
relation = MyriaRelation("insurance")
print len(relation)
print relation.created_date
print relation.schema.names
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Working Locally with Relations
# 1: Download as a Python dictionary d = MyriaRelation("insurance").to_dict() print 'First entry returned: %s' % d[0]['county'] # 2: Download as a Pandas DataFrame df = MyriaRelation("insurance").to_dataframe() print '%d entries with nonzero deductable' % len(df[df.eq_site_deductible > 0]) # 3: Download as a DataFra...
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Working with queries
%%query --Embed MyriaL in Jupyter notebook by using the "%%query" prefix insurance = scan(insurance); descriptives = [from insurance emit min(eq_site_deductible) as min_deductible, max(eq_site_deductible) as max_deductible, avg(eq_site_deducti...
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Single-line queries may be treated like Python expressions
query = %datalog Just500(column0, 500) :- TwitterK(column0, 500)%
print query.status
query
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
5. Variable Binding
low, high, destination = 543, 550, 'BoundRelation'
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
The tokens @low, @high, and @destination are bound to their values:
%%query
T1 = scan(TwitterK);
T2 = [from T1 where $0 > @low and $0 < @high emit $1 as x];
store(T2, @destination);
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Deploying Myria in an Amazon Cluster! 1. Installing the Myria CLI ``` From the command line, execute: sudo pip install myria-cluster ``` 2. Launching Clusters
!myria-cluster create my-cluster
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
3. Connecting to the Cluster via Python You can connect to the new cluster by using the MyriaX REST endpoint URL. In the example above, this is listed as http://ec2-50-112-33-121.us-west-2.compute.amazonaws.com:8753.
# Substitute your MyriaX REST URL here!
%connect http://ec2-52-1-38-182.compute-1.amazonaws.com:8753
ipnb examples/myria.ipynb
uwescience/myria-python
bsd-3-clause
Random weights and biases train() allows us to shrink the output of neuron C, which has 10 inputs (fan-in weights). There is no need to show that the weights are asymmetric; this is the usual case.
initialize()
print(train())
print(train())
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
Random biases, no randomness in weights: we can still break symmetry in the weights
initialize()
for layer in [a, b, c]:
    layer.weight.data = torch.ones_like(layer.weight)
print(c.weight)
train(), print(c.weight)
train(), print(c.weight)
train(), print(c.weight)
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
fan-ins are almost identical across neurons, elements within one neuron's fan-in are different
b.weight #there's a small amount of symmetry breaking, but the fan-ins are pretty similar
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
after 25 epochs: fan-ins are different across neurons, elements within one neuron's fan-in are different
for _ in range(25):
    train()
b.weight  # now the fan-ins are pretty different
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
No randomness in biases or weights: we can't break symmetry
initialize()
for layer in [a, b, c]:
    layer.weight.data = torch.ones_like(layer.weight)
    layer.bias.data = torch.ones_like(layer.bias)
print(c.weight, c.bias)
train(), print(c.weight, c.bias)
train(), print(c.weight, c.bias)
train(), print(c.weight, c.bias)
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
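The claim above can be demonstrated without PyTorch: when a layer's weights and biases are all identical, its neurons compute identical outputs, so backprop hands every neuron the same gradient and they stay identical forever. A numpy sketch of a few plain gradient steps on a tiny linear net (all names, shapes, and values are invented for the illustration):

```python
import numpy as np

x = np.array([0.5, -0.2])   # one training input
t = 1.0                     # target, squared-error loss

W1 = np.ones((2, 2))        # both hidden neurons start identical
b1 = np.ones(2)
w2 = np.ones(2)

for _ in range(3):
    h = W1 @ x + b1         # linear activations keep the algebra simple
    y = w2 @ h
    dy = y - t              # d(0.5*(y-t)**2)/dy
    dW1 = np.outer(dy * w2, x)      # gradient w.r.t. W1
    W1 -= 0.1 * dW1
    b1 -= 0.1 * (dy * w2)
    w2 -= 0.1 * dy * h

# The two rows of W1 (the two fan-ins) are still identical to each other,
# even though the elements within each fan-in now differ (x differs per input)
print(W1[0], W1[1])
```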
fan-ins are identical across neurons, elements within one neuron's fan-in are identical
b.weight
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
after 25 epochs: fan-ins are identical across neurons, elements within one neuron's fan-in are identical
for _ in range(25):
    train()
b.weight
Breaking Symmetry/breaking_symmetry.ipynb
bbartoldson/examples
mit
DFF The fundamental stateful element is a D-flip-flop. If the flip-flop has a clock enable, its state will only be updated if the clock enable is true. Similarly, if a flip-flop has a reset signal, it will be reset to its initial value if reset is true.
from mantle import DFF
dff = DFF()
notebooks/tutorial/coreir/Register.ipynb
phanrahan/magmathon
mit
Register A register is simply an array of flip-flops. To create an instance of a register, call DefineRegister with the number of bits n in the register.
from mantle import DefineRegister
notebooks/tutorial/coreir/Register.ipynb
phanrahan/magmathon
mit
Registers and DFFs are very similar to each other. The only difference is that the input and output of a DFF are Bit values, whereas the inputs and outputs of registers are Bits[n]. Registers with Enables and Resets Flip-flops and registers can have clock enables and resets. The flip-flop has a clock enable, i...
Register4 = DefineRegister(4, init=5, has_ce=True, has_reset=True)
notebooks/tutorial/coreir/Register.ipynb
phanrahan/magmathon
mit
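The semantics being simulated above can be sketched as a behavioral model in plain Python: a register whose state only changes on a clocked update, gated by a clock enable, and returned to its initial value by reset. This is a stand-in for intuition, not mantle/magma code:

```python
class Register:
    """Behavioral model: n-bit register with clock-enable and synchronous reset."""

    def __init__(self, n, init=0):
        self.n = n
        self.init = init
        self.value = init

    def clock(self, d, ce=True, reset=False):
        if reset:                        # reset wins: back to the initial value
            self.value = self.init
        elif ce:                         # only latch d when clock-enable is high
            self.value = d % (1 << self.n)
        return self.value


reg = Register(4, init=5)
print(reg.clock(9))                      # 9 (ce defaults to True)
print(reg.clock(3, ce=False))            # 9 (held: enable is low)
print(reg.clock(0, reset=True))          # 5 (back to init value)
```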
To wire the optional clock inputs, clock enable and reset, use named arguments (ce and reset) when you call the register with its inputs. In Magma, clock signals are handled differently than other signals.
from magma.simulator import PythonSimulator from fault import PythonTester tester = PythonTester(Register4, Register4.CLK) tester.poke(Register4.RESET, 1) # reset tester.step(2) tester.poke(Register4.RESET, 0) print(f"Reset Val = {tester.peek(Register4.O)}") tester.poke(Register4.CE, 1) # set enable for i in range(5...
notebooks/tutorial/coreir/Register.ipynb
phanrahan/magmathon
mit
Lorenz system The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read: $$ \frac{dx}{dt} = \sigma(y-x) $$ $$ \frac{dy}{dt} = x(\rho-z) ...
def lorentz_derivs(yvec, t, sigma, rho, beta): """Compute the the derivatives for the Lorentz system at yvec(t).""" # YOUR CODE HERE x = yvec[0] y = yvec[1] z = yvec[2] dx = sigma*(y - x) dy = x*(rho - z) - y dz = x*y - beta*z return np.array([dx, dy, dz]) print(lorentz_derivs(np.arr...
assignments/assignment10/ODEsEx02.ipynb
enbanuel/phys202-2015-work
mit
Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0): """Solve the Lorenz system for a single initial condition. Parameters ---------- ic : array, list, tuple Initial conditions [x,y,z]. max_time: float The max time to use. Integrate with 250 points per time u...
assignments/assignment10/ODEsEx02.ipynb
enbanuel/phys202-2015-work
mit
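Without scipy's odeint, the system can be integrated with a simple fixed-step RK4 sketch; the derivative function mirrors the one above, but the helper names, step count, and initial condition here are my own, not the assignment's:

```python
import numpy as np

def lorenz_derivs(yvec, t, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Derivatives of the Lorenz system at state yvec."""
    x, y, z = yvec
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_solve(ic, max_time=4.0, steps=1000):
    """Integrate the Lorenz system with classic fixed-step RK4."""
    t = np.linspace(0, max_time, steps + 1)
    h = t[1] - t[0]
    sol = np.empty((steps + 1, 3))
    sol[0] = ic
    for i in range(steps):
        y0 = sol[i]
        k1 = lorenz_derivs(y0, t[i])
        k2 = lorenz_derivs(y0 + h * k1 / 2, t[i] + h / 2)
        k3 = lorenz_derivs(y0 + h * k2 / 2, t[i] + h / 2)
        k4 = lorenz_derivs(y0 + h * k3, t[i] + h)
        sol[i + 1] = y0 + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return sol, t

sol, t = rk4_solve([1.0, 1.0, 1.0])
print(sol.shape)  # (1001, 3)
```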
Write a function plot_lorentz that: Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time. Plot $[x(t),z(t)]$ u...
N = 5 colors = plt.cm.hot(np.linspace(0,1,N)) for i in range(N): # To use these colors with plt.plot, pass them as the color argument print(colors[i]) def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0): """Plot [x(t),z(t)] for the Lorenz system. Parameters ---------- ...
assignments/assignment10/ODEsEx02.ipynb
enbanuel/phys202-2015-work
mit
Use interact to explore your plot_lorenz function with: max_time an integer slider over the interval $[1,10]$. N an integer slider over the interval $[1,50]$. sigma a float slider over the interval $[0.0,50.0]$. rho a float slider over the interval $[0.0,50.0]$. beta fixed at a value of $8/3$.
# YOUR CODE HERE
interact(plot_lorentz, max_time=[1, 10], N=[1, 50], sigma=[0.0, 50.0], rho=[0.0, 50.0], beta=fixed(8/3));
assignments/assignment10/ODEsEx02.ipynb
enbanuel/phys202-2015-work
mit
Set parameters
data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif' fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif' label_name = 'Aud-rh' fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name tmin, tmax, event_id = -0.2, 0.5, 2 # Setup for reading the ra...
0.12/_downloads/plot_source_label_time_frequency.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
$A_n(p) = \sum_{k=n}^{2 n - 1} \binom{2 n - 1}{k} p^k (1 - p)^{2 n - 1 - k}$
def how_many_games(p=0.6, initial_amount=1_000_000, win_subtraction=10_000): expected_winnings = [] for num_wins in range(1,50): series_length = 2*num_wins - 1 prize = initial_amount - (win_subtraction*num_wins) win_p = 0 for k in range(num_wins, series_length + 1): ...
FiveThirtyEightRiddler/2017-07-14/classic/championship.ipynb
andrewzwicky/puzzles
mit
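The binomial sum above, $A_n(p)$, is the probability of winning at least $n$ of $2n - 1$ independent games, i.e. of taking a best-of-$(2n-1)$ series. A direct implementation (the function name is mine):

```python
from math import comb

def series_win_prob(p, n):
    """A_n(p): probability of winning at least n of 2n - 1 independent games."""
    g = 2 * n - 1
    return sum(comb(g, k) * p**k * (1 - p)**(g - k) for k in range(n, g + 1))

print(series_win_prob(0.6, 1))  # 0.6: best-of-1 is just a single game
print(series_win_prob(0.5, 3))  # 0.5: a fair coin, whatever the series length
```

A longer series amplifies the stronger player's edge, which is why the riddle trades prize money against series length.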
To predict with the TensorFlow model, we also need a serving input function. We will want all the inputs from our user.
# Create serving input function to be able to serve predictions later using provided inputs def serving_input_fn(): feature_placeholders = { 'is_male': tf.compat.v1.placeholder(tf.string, [None]), 'mother_age': tf.compat.v1.placeholder(tf.float32, [None]), 'plurality': tf.compat.v1.placehold...
courses/machine_learning/deepdive/06_structured/3_tensorflow_dnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Alternative sea-salt correction methodology for Ireland Our current sea-salt correction methodology for Mg, Ca and SO4 assumes (i) that all chloride is marine, and (ii) that no fractionation takes place between evaporation and deposition. These coarse assumptions work OK in many regions, but lead to negative values for...
# Read data
xl_path = r'../../../Thematic_Trends_Report_2019/ireland_high_chloride.xlsx'
df = pd.read_excel(xl_path, sheet_name='DATA')

# Suspect that values of *exactly* zero are errors
df[df == 0] = np.nan
df.head()
ireland_seasalt_correction.ipynb
JamesSample/icpw
mit
2. Reference values for sea water The numbers below are taken from the World Data Centre for Precipitation Chemistry (WDCPC; PDF here), except for Ca, which I've taken from here.
data = {'par': ['SO4', 'Mg', 'Ca', 'Na', 'Cl'],
        'mol_mass': [96.06, 24.31, 40.08, 22.99, 35.45],
        'valency': [2, 2, 2, 1, 1],
        'sw_mgpl': [2700, 1290, 400, 10800, 19374]}
sw_df = pd.DataFrame(data)
sw_df['sw_ueqpl'] = 1000 * sw_df['valency'] * sw_df['sw_mgpl'] / sw_df['mol_mass']
sw_df
ireland_seasalt_correction.ipynb
JamesSample/icpw
mit
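The conversion in the cell above is $\mu eq/l = 1000 \times valency \times (mg/l) / molar\ mass$; the same computation without pandas, using the reference values from the cell, makes the rough 1:1 Na:Cl equivalence easy to check:

```python
# Sea-water reference values from the cell above
pars = {  # name: (molar mass g/mol, valency, concentration mg/l)
    'SO4': (96.06, 2, 2700),
    'Mg':  (24.31, 2, 1290),
    'Ca':  (40.08, 2, 400),
    'Na':  (22.99, 1, 10800),
    'Cl':  (35.45, 1, 19374),
}

# ueq/l = 1000 * valency * (mg/l) / molar mass
ueqpl = {p: 1000 * v * mgpl / mm for p, (mm, v, mgpl) in pars.items()}
print(round(ueqpl['Na']), round(ueqpl['Cl']))  # roughly equal, as expected
```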
As expected, the $\mu eq/l$ concentrations of Na and Cl in sea water are roughly the same, but the ratio is not exactly 1:1. For the sea-salt correction, we need the ratio of SO4, Ca and Mg to Na and to Cl.
corr_facs = {} for par in ['SO4', 'Ca', 'Mg']: sw_par = sw_df.query('par == @par')['sw_ueqpl'].values[0] sw_cl = sw_df.query('par == "Cl"')['sw_ueqpl'].values[0] sw_na = sw_df.query('par == "Na"')['sw_ueqpl'].values[0] ratio_cl = sw_par / sw_cl ratio_na = sw_par / sw_na corr_facs['%s2c...
ireland_seasalt_correction.ipynb
JamesSample/icpw
mit
The ratios above for chloride are virtually the same as documented in our current workflow (here), so I assume the new ratios for Na should also be compatible. Note that the ratio of SO4:Na is about 20% higher than SO4:Cl, which might actually exacerbate the problem of negative values. 2. Compare Cl to Na in Irish lake...
fig, axes = plt.subplots(nrows=5, ncols=4, figsize=(20,20)) axes = axes.flatten() for idx, stn in enumerate(df['station_code'].unique()): df_stn = df.query('station_code == @stn') axes[idx].plot(df_stn['ECl'], df_stn['ENa'], 'ro', label='Raw data') axes[idx].plot(df_stn['ECl'], df_stn['ECl'], 'k-', label='...
ireland_seasalt_correction.ipynb
JamesSample/icpw
mit
Based on these plots, I'd say concentrations of Na are also pretty high in these lakes (although in most cases Cl is even higher). I suspect that using Na as a marine "tracer" instead of Cl will still lead to negative values. 4. Sea-salt correction for SO4 Using our original methodology, the corrected series for SO4* h...
# Par of interest par = 'ESO4' df['%s*_Na' % par] = df[par] - (corr_facs['%s2na' % par[1:].lower()] * df['ENa']) df['%s*_Cl' % par] = df[par] - (corr_facs['%s2cl' % par[1:].lower()] * df['ECl']) df2 = df[['station_code', '%s*_Cl' % par, '%s*_Na' % par]] df2 = df2.melt(id_vars=['station_code'], var_name='corr_method'...
ireland_seasalt_correction.ipynb
JamesSample/icpw
mit
For simplicity, we use a constant forward rate as the given interest rate term structure. The method discussed here would work with any market yield curve as well.
sigma = 0.01 a = 0.001 timestep = 360 length = 30 # in years forward_rate = 0.05 day_count = ql.Thirty360() todays_date = ql.Date(15, 1, 2015) ql.Settings.instance().evaluationDate = todays_date yield_curve = ql.FlatForward( todays_date, ql.QuoteHandle(ql.SimpleQuote(forward_rate)), day_count) spot_curv...
content/extra/notebooks/moment_matching.ipynb
gouthambs/karuth-source
artistic-2.0
Here, I set up the Monte Carlo simulation of the Hull-White process. The result of the generate_paths function below is the time grid and a matrix of short rates generated by the model. This is discussed in detail in the Hull-White simulation post.
hw_process = ql.HullWhiteProcess(spot_curve_handle, a, sigma) rng = ql.GaussianRandomSequenceGenerator( ql.UniformRandomSequenceGenerator( timestep, ql.UniformRandomGenerator(125))) seq = ql.GaussianPathGenerator(hw_process, length, timestep, rng, False) def generate_paths(num_paths, timestep): ...
content/extra/notebooks/moment_matching.ipynb
gouthambs/karuth-source
artistic-2.0
Here is a plot of the generated short rates.
num_paths = 128
time, paths = generate_paths(num_paths, timestep)
for i in range(num_paths):
    plt.plot(time, paths[i, :], lw=0.8, alpha=0.6)
plt.title("Hull-White Short Rate Simulation")
plt.show()
content/extra/notebooks/moment_matching.ipynb
gouthambs/karuth-source
artistic-2.0
The model zero coupon bond price $B(0, T)$ is given as: $$B(0, T) = E\left[\exp\left(-\int_0^T r(t)dt \right) \right]$$ where $r(t)$ is the short rate generated by the model. The expectation of the stochastic discount factor at time $T$ is the price of the zero coupon bond at that time. In a simulation the paths are ge...
def stoch_df(paths, time): return np.mean( np.exp(-cumtrapz(paths, time, initial=0.)),axis=0 ) B_emp = stoch_df(paths, time) logB_emp = np.log(B_emp) B_yc = np.array([yield_curve.discount(t) for t in time]) logB_yc = np.log(B_yc) deltaT = time[1:] - time[:-1] deltaB_emp = logB_emp[1:]-logB_emp[:-1] del...
content/extra/notebooks/moment_matching.ipynb
gouthambs/karuth-source
artistic-2.0
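Estimating $B(0, T)$ from simulated paths amounts to integrating each short-rate path, exponentiating, and averaging across paths. A numpy-only sketch (trapezoidal rule instead of scipy's cumtrapz), sanity-checked on flat 5% paths where the exact answer is $e^{-rT}$:

```python
import numpy as np

def zcb_price(paths, time):
    """Monte Carlo estimate of B(0, T): mean over paths of exp(-integral of r dt)."""
    dt = np.diff(time)
    # Trapezoidal integral of each short-rate path over [0, T]
    integrals = np.sum(0.5 * (paths[:, 1:] + paths[:, :-1]) * dt, axis=1)
    return np.mean(np.exp(-integrals))

# Sanity check: constant 5% paths must reproduce exp(-0.05 * T)
time = np.linspace(0, 30, 361)
paths = np.full((128, 361), 0.05)
print(zcb_price(paths, time), np.exp(-0.05 * 30))
```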
The plots below show the zero coupon bond price and mean of short rates with and without the moment matching.
plt.plot(time, stoch_df(paths, time),"r-.", label="Original", lw=2) plt.plot(time, stoch_df(new_paths, time),"g:", label="Corrected", lw=2) plt.plot(time,B_yc, "k--",label="Market", lw=1) plt.title("Zero Coupon Bond Price") plt.legend() def alpha(forward, sigma, a, t): return f...
content/extra/notebooks/moment_matching.ipynb
gouthambs/karuth-source
artistic-2.0
Building the network TFLearn lets you build the network by defining the layers in that network. For this example, you'll define: The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. Hidden layers, which recognize patterns in data and connect the input to the ou...
# Define the neural network def build_model(): # This resets all parameters and variables, leave this here tf.reset_default_graph() #### Your code #### # Include the input layer, hidden layer(s), and set how you want to train the model net = tflearn.input_data([None, 784]) net = tflearn.ful...
intro-to-tflearn/TFLearn_Digit_Recognition.ipynb
mdiaz236/DeepLearningFoundations
mit
I also define a plotting function to use with the interact function to visualize the behavior of the stars when the disrupting galaxy orbits close to the main galaxy
def plotter(ic,sol,n=0): """Plots the positions of the stars and disrupting galaxy at each t in the time array Parameters -------------- ic : initial conditions sol : solution array n : integer Returns ------------- """ plt.figure(figsize=(10,10)) y = np.linspa...
galaxy_project/F) Plotting_function.ipynb
bjshaw/phys202-project
mit
Defining a plotting function that will help with plotting static images at certain times:
def static_plot(ic,sol,n=0): """Plots the positions of the stars and disrupting galaxy at a certain t in the time array Parameters -------------- ic : initial conditions sol : solution array n : integer Returns ------------- """ plt.scatter(0,0,color='y',label='Gal...
galaxy_project/F) Plotting_function.ipynb
bjshaw/phys202-project
mit
Defining a plotting function that will help with plotting positions relative to the center of mass between the two galaxies:
def com_plot(ic,sol,M,S,n=0): """Plots the positions of the stars, main, and disrupting galaxy relative to the center of mass at a certain t in the time array Parameters --------------- ic : initial condition sol : solution array M : mass of main galaxy S : mass of disruptin...
galaxy_project/F) Plotting_function.ipynb
bjshaw/phys202-project
mit
Static plotting function around center of mass:
def static_plot_com(ic,sol,M,S,n=0): """Plots the positions of the stars, main, and disrupting galaxy relative to the center of mass at a certain t in the time array Parameters -------------- ic : initial conditions sol : solution array M : mass of main galaxy S : mass of disrupting ...
galaxy_project/F) Plotting_function.ipynb
bjshaw/phys202-project
mit
Exercise 1: Generate 1000 samples from a uniform distribution that spans from 2 to 5. Print the sample mean and check that it approximates its expected value. Hint: check out the random.uniform() function
print('Exercise 1:\n') # n = <FILL IN> n = 1000 # x_unif = <FILL IN> x_unif = np.random.uniform(2, 5, n) # print('Sample mean = ', <FILL IN>) print('Sample mean = ', x_unif.mean()) plt.hist(x_unif, bins=100) plt.xlabel('Value') plt.ylabel('Frequency') plt.title('Uniform distribution between 2 and 5') plt.show()
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 2: Generate 1000 samples from a Gaussian distribution with mean 3 and variance 2. Print the sample mean and variance and check that they approximate their expected values. Hint: check out the random.normal() function. Also, think about the changes you need to apply to a standard normal distribution to modify ...
print('\nExercise 2:\n') # n = <FILL IN> n = 1000 # x_gauss = <FILL IN> x_gauss = np.random.randn(n)*np.sqrt(2) + 3 # print('Sample mean = ', <FILL IN>) print('Sample mean = ', x_gauss.mean()) # print('Sample variance = ', <FILL IN>) print('Sample variance = ', x_gauss.var()) plt.hist(x_gauss, bins=100) plt.xlabel('Va...
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 3: Generate 100 samples of a sine signal between -5 and 5 and add uniform noise with mean 0 and amplitude 1.
print('\nExercise 3:\n') # n = <FILL IN> n = 100 # x = <FILL IN> x = np.linspace(-5, 5, n) # y = <FILL IN> y = np.sin(x) # noise = <FILL IN> noise = np.random.uniform(-0.5, 0.5, 100) # y_noise = <FILL IN> y_noise = y + noise plt.plot(x, y_noise, color='green', label='Noisy signal') plt.plot(x, y, color='black', linesty...
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 4: Generate 1000 samples from a 2-dimensional Gaussian distribution with mean [2, 3] and covariance matrix [[2, 0], [0, 2]]. Hint: check out the random.multivariate_normal() function.
print('\nExercise 4:\n') # n = <FILL IN> n = 1000 # mean = <FILL IN> mean = np.array([2, 3]) # cov = <FILL IN> cov = np.array([[2, 0], [0, 2]]) # x_2d_gauss = <FILL IN> x_2d_gauss = np.random.multivariate_normal(mean=mean, cov=cov, size=n) plt.scatter(x_2d_gauss[:, 0], x_2d_gauss[:, 1], ) plt.title('2d Gaussian Scatter...
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
In just a single example we have seen a lot of Matplotlib functionality that can be easily tuned. You have all you need to draw decent figures. However, those of you who want to learn more about Matplotlib can take a look at AnatomyOfMatplotlib, a collection of notebooks in which you will explore more in depth Matplo...
# x = <FILL IN> x = 4*np.random.rand(200) - 2 # Create a weights vector w, in which w[0] = 2.4, w[1] = -0.8 and w[2] = 1. # w = <FILL IN> w = np.array([2.4,-0.8,2]) print('x shape:\n',x.shape) print('\nw:\n', w) print('w shape:\n', w.shape)
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 6: Obtain the vector y whose samples are obtained by the polynomial $w_2 x^2 + w_1 x + w_0$
# y = <FILL IN> y = w[0] + w[1]*x + w[2]*(x**2) print('y shape:\n',y.shape)
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 7: You probably obtained the previous vector as a sum of different terms. If so, try to obtain y again (and name it y2) as a product of a matrix X and a vector w. Then, check that both methods lead to the same result (be careful with shapes). Hint: w will remain the same, but now X has to be constructed in a w...
# X = <FILL IN> X = np.array([np.ones((len(x),)), x, x**2]).T # y2 = <FILL IN> y2 = X @ w print('y shape:\n',y.shape) print('y2 shape:\n',y2.shape) if(np.sum(np.abs(y-y2))<1e-10): print('\ny and y2 are the same, well done!') else: print('\nOops, something went wrong, try again!')
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 8: Define x2 as a range vector, going from -1 to 2, in steps of 0.05. Then, obtain y3 as the output of polynomial $w_2 x^2 + w_1 x + w_0$ for input x2 and plot the result using a red dashed line (--).
# x2 = <FILL IN> x2 = np.arange(-1,2,0.05) # y3 = <FILL IN> y3 = w[0] + w[1]*x2 + w[2]*(x2**2) # Plot # <SOL> fig1 = plt.figure() plt.plot(x2,y3,'r--') plt.title('y3 = f(x2)') plt.ylabel('y3') plt.xlabel('x2') plt.show() # </SOL>
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Now it's your turn! Exercise 9: Obtain x_exp as 1000 samples of an exponential distribution with scale parameter of 10. Then, plot the corresponding histogram for the previous set of samples, using 50 bins. Obtain the empirical mean and make it appear in the histogram legend. Does it coincide with the theoretical one?
# x_exp = <FILL IN> x_exp = np.random.exponential(10,1000) # plt.hist(<FILL IN>) plt.hist(x_exp,bins=50,label="Emp. mean: "+str(np.mean(x_exp))) plt.legend(loc='best') plt.show()
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Exercise 10: Taking into account that the exponential density can be expressed as $f(x;\beta) = \frac{1}{\beta} e^{-\frac{x}{\beta}},\; x \geq 0$, where $\beta$ is the scale factor, fill the variable density by evaluating the theoretical exponential density at the points of the vector x. Then, take a look at the plo...
np.random.seed(4) # Keep the same result x_exp = np.random.exponential(10,10000) # exponential samples x = np.arange(np.min(x_exp),np.max(x_exp),0.05) # density = <FILL IN> density = (1/10)*np.exp(-x/10) w_n = np.zeros_like(x_exp) + 1. / x_exp.size plt.hist(x_exp, weights=w_n,label='Histogram.',bins=75) plt.plot(x,den...
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
Let's now try to apply this knowledge about dictionaries with the following exercise: Exercise 11: Create a dictionary with your name and a colleague's, and create a dictionary for each of you with what you are wearing. Then print the whole dictionary to see what each of you is wearing.
# alumnos = <FILL IN> alumnos = {'Pedro Picapiedra':{},'Clark Kent':{}} clothes = ['Shirt','Dress','Glasses','Shoes'] for alumno in alumnos: print(alumno) # <SOL> for element in clothes: alumnos[alumno][element] = input(element+': ') print(alumnos) # </SOL>
P3.Python_datos/Intro3_Working_with_Data_professor.ipynb
ML4DS/ML4all
mit
You can install the latest pre-release version using pip install --pre --upgrade bigdl-orca[automl].
# Install latest pre-release version of BigDL Orca # Installing BigDL Orca from pip will automatically install pyspark, bigdl, and their dependencies. !pip install --pre --upgrade bigdl-orca[automl] # Install xgboost !pip install xgboost
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
intel-analytics/BigDL
apache-2.0
Step 1: Init Orca Context
# import necessary libraries and modules from __future__ import print_function import os import argparse from bigdl.orca import init_orca_context, stop_orca_context from bigdl.orca import OrcaContext # recommended to set it to True when running BigDL in Jupyter notebook. OrcaContext.log_output = True # (this will dis...
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
intel-analytics/BigDL
apache-2.0
This is the only place where you need to specify local or distributed mode. View Orca Context for more details. Note: You should export HADOOP_CONF_DIR=/path/to/hadoop/conf/dir when you run on Hadoop YARN cluster. ### Step 2: Define Search space You should define a dictionary as your hyper-parameter search space for XG...
from bigdl.orca.automl import hp search_space = { "n_estimators": hp.grid_search([50, 100, 200]), "max_depth": hp.choice([2, 4, 6]), }
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
intel-analytics/BigDL
apache-2.0
Step 3: Automatically fit and search with Orca AutoXGBoost We will then fit AutoXGBoost automatically on Boston Housing dataset. First create an AutoXGBRegressor. You could also pass the sklearn XGBRegressor parameters to AutoXGBRegressor. Note that the XGBRegressor parameters shouldn't include the hyper-parameters in...
from bigdl.orca.automl.xgboost import AutoXGBRegressor auto_xgb_reg = AutoXGBRegressor(cpus_per_trial=2, name="auto_xgb_classifier", min_child_weight=3, random_state=2)
python/orca/colab-notebook/quickstart/autoxgboost_regressor_sklearn_boston.ipynb
intel-analytics/BigDL
apache-2.0
Pairwise Regression Fairness We will be training a linear scoring function $f(x) = w^\top x$ where $x \in \mathbb{R}^d$ is the input feature vector. Our goal is to train the regression model subject to pairwise fairness constraints. Specifically, for the regression model $f$, we denote: - $sqerr(f)$ as the squared erro...
# We will divide the data into 25 minibatches and refer to them as 'queries'. num_queries = 25 # List of column names in the dataset. column_names = ["state", "county", "community", "communityname", "fold", "population", "householdsize", "racepctblack", "racePctWhite", "racePctAsian", "racePctHisp", "agePct12t21", "ag...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
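As a toy illustration of the pairwise error just defined (the fraction of higher-label / lower-label example pairs that the scoring function mis-ranks), here is a small sketch with made-up scores and labels, not the Communities and Crime data itself:

```python
import numpy as np

# Toy sketch (made-up data) of the pairwise error
#   err(f) = E[ I(f(x) < f(x')) | y > y' ].
scores = np.array([0.9, 0.2, 0.7, 0.5])   # f(x) for four examples
labels = np.array([3.0, 1.0, 2.0, 2.5])   # regression labels y

# All ordered pairs whose first example has the strictly higher label.
hi, lo = np.where(labels[:, None] > labels[None, :])
pair_err = np.mean(scores[hi] < scores[lo])   # fraction of mis-ranked pairs
print(pair_err)   # 1 mis-ranked pair out of 6
```

The group-conditioned errors used in the fairness constraints are the same quantity computed over subsets of these pairs.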
Evaluation Metrics We will need functions to convert labeled data into paired data.
def pair_high_low_docs(data): # Returns a DataFrame of pairs of larger-smaller labeled regression examples # given in DataFrame. # For all pairs of docs, and remove rows that are not needed. pos_docs = data.copy() neg_docs = data.copy() # Include a merge key. pos_docs.insert(0, "merge_key", 0) neg_docs...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
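The merge-key device in pair_high_low_docs (a constant column used to cross-join a table with itself, then filtering to higher-lower label pairs) can be sketched on toy data as follows; the column names here are invented, not the ones in the crime dataset:

```python
import pandas as pd

# Minimal sketch (toy data) of the merge-key trick for forming all
# higher-lower label pairs via a self cross-join.
docs = pd.DataFrame({"label": [3.0, 1.0, 2.0], "feat": [0.9, 0.2, 0.5]})
pos, neg = docs.copy(), docs.copy()
pos.insert(0, "merge_key", 0)   # constant key => cross join on merge
neg.insert(0, "merge_key", 0)
pairs = pos.merge(neg, on="merge_key", suffixes=("_hi", "_lo"))
pairs = pairs[pairs["label_hi"] > pairs["label_lo"]].drop(columns="merge_key")
print(len(pairs))   # 3 ordered higher-lower pairs from 3 examples
```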
We will also need functions to evaluate the pairwise error rates for a linear model.
def get_mask(groups, pos_group, neg_group=None): # Returns a boolean mask selecting positive-negative document pairs where # the protected group for the positive document is pos_group and # the protected group for the negative document (if specified) is neg_group. # Repeat group membership positive docs as m...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
Create Linear Model We then write a function to create the linear scoring model.
def create_scoring_model(feature_pairs, features, dimension): # Returns a linear Keras scoring model, and returns a nullary function # returning predictions on the features. # Linear scoring model with no hidden layers. layers = [] # Input layer takes `dimension` inputs. layers.append(tf.keras.Input(shape...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
Formulate Optimization Problem We are ready to formulate the constrained optimization problem using the TFCO library.
def group_mask_fn(groups, pos_group, neg_group=None): # Returns a nullary function returning group mask. group_mask = lambda: np.reshape( get_mask(groups(), pos_group, neg_group), (-1)) return group_mask def formulate_problem( feature_pairs, group_pairs, features, labels, dimension, constraint_gr...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
Train Model The following function then trains the linear model by solving the above constrained optimization problem. We first provide a training function with minibatch gradient updates. There are three types of pairwise fairness criterion we handle (specified by 'constraint_type'), and assign the (pos_group, neg_gro...
def train_model(train_set, params): # Trains the model with stochastic updates (one query per update). # # Args: # train_set: Dictionary of "paired" training data. # params: Dictionary of hyper-parameters for training. # # Returns: # Trained model, list of objectives, list of group constraint viol...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
Summarize and Plot Results Having trained a model, we will need functions to summarize the various evaluation metrics.
def evaluate_results(model, test_set, params): # Returns squared error, group error rates, group-level constraint violations. if params['constraint_type'] == 'marginal_equal_opportunity': g0_error = group_error_rate(model, test_set, 0) g1_error = group_error_rate(model, test_set, 1) group_violations = [...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
Experimental Results We now run experiments with two types of pairwise fairness criteria: (1) marginal_equal_opportunity and (2) pairwise equal opportunity. In each case, we compare an unconstrained model trained to optimize just the squared error and a constrained model trained with pairwise fairness constraints.
# Convert train/test set to paired data for later evaluation. paired_train_set = convert_labeled_to_paired_data(train_set) paired_test_set = convert_labeled_to_paired_data(test_set)
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
(1) Marginal Equal Opportunity For a scoring model $f: \mathbb{R}^d \rightarrow \mathbb{R}$, recall: - $sqerr(f)$ as the squared error for scoring function $f$. and we additionally define: $err_i(f)$ as the row-marginal pairwise error over example pairs where the higher label example is from group $i$, and the lower l...
# Model hyper-parameters. model_params = { 'loops': 10, 'iterations_per_loop': 250, 'learning_rate': 0.1, 'constraint_type': 'marginal_equal_opportunity', 'constraint_slack': 0.02, 'dual_scale': 1.0} # Unconstrained optimization. model_params['constrained'] = False model_unc = train_model(p...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
(2) Pairwise Equal Opportunity Recall that we denote $err_{i,j}(f)$ as the pairwise error over example pairs where the higher label example is from group $i$, and the lower label example is from group $j$. $$ err_{i, j}(f) ~=~ \mathbf{E}\big[\mathbb{I}\big(f(x) < f(x')\big) \,\big|\, y > y',~ grp(x) = i, ~grp(x') = j\...
# Model hyper-parameters. model_params = { 'loops': 10, 'iterations_per_loop': 250, 'learning_rate': 0.1, 'constraint_type': 'cross_group_equal_opportunity', 'constraint_slack': 0.02, 'dual_scale': 1.0} # Unconstrained optimization. model_params['constrained'] = False model_unc = train_mode...
pairwise_fairness/regression_crime.ipynb
google-research/google-research
apache-2.0
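On toy data, $err_{i,j}(f)$ can be computed by additionally masking pairs on the groups of the higher-label and lower-label examples; everything below (scores, labels, group assignments) is invented for illustration only:

```python
import numpy as np

# Hypothetical toy sketch of the cross-group pairwise error err_{i,j}(f):
# the mis-ranking rate over pairs whose higher-label example is in group i
# and whose lower-label example is in group j.
scores = np.array([0.9, 0.2, 0.7, 0.5])
labels = np.array([3.0, 1.0, 2.0, 2.5])
groups = np.array([0, 1, 0, 1])

hi, lo = np.where(labels[:, None] > labels[None, :])  # higher-label first
mis = scores[hi] < scores[lo]                         # mis-ranked pairs

def err(i, j):
    mask = (groups[hi] == i) & (groups[lo] == j)
    return mis[mask].mean()

print(err(0, 1), err(1, 0))
```

The cross_group_equal_opportunity constraint above asks these two quantities to be close, which this toy model badly violates.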
Quiz Question. Select all the topics that have a cluster in the model created above. [multiple choice] Comparing to random initialization Create variables for randomly initializing the EM algorithm. Complete the following code block.
np.random.seed(5) num_clusters = len(means) num_docs, num_words = tf_idf.shape random_means = [] random_covs = [] random_weights = [] for k in range(num_clusters): # Create a numpy array of length num_words with random normally distributed values. # Use the standard univariate normal distribution (mean 0...
Clustering_&_Retrieval/Week4/Assignment2/.ipynb_checkpoints/4_em-with-text-data_blank-checkpoint.ipynb
rashikaranpuria/Machine-Learning-Specialization
mit
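A minimal sketch of what such a random initialization might look like for a diagonal-covariance mixture; the dimensions and distributions below are stand-ins for illustration and do not reproduce the assignment's exact specification:

```python
import numpy as np

# Hypothetical sketch of random EM initialization for a diagonal-covariance
# mixture model; num_clusters and num_words are made-up stand-ins.
np.random.seed(5)
num_clusters, num_words = 3, 10

# One random mean vector per cluster, standard normal entries.
random_means = [np.random.randn(num_words) for _ in range(num_clusters)]
# Strictly positive diagonal variances per cluster.
random_covs = [np.random.rand(num_words) + 1.0 for _ in range(num_clusters)]
# Uniform initial mixture weights summing to one.
random_weights = [1.0 / num_clusters] * num_clusters
print(sum(random_weights))
```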
pyNCS analysis This is basically a copy of NCS/python3-6/NCSdemo_simulation.py
# create normalized ideal image fpath1 = os.path.join(py_ncs_path, "../randwlcposition.mat") imgsz = 128 zoom = 8 Pixelsize = 0.1 NA = 1.4 Lambda = 0.7 t = time.time() res = ncs.genidealimage(imgsz,Pixelsize,zoom,NA,Lambda,fpath1) elapsed = time.time()-t print('Elapsed time for generating ideal image:', elapsed) imso =...
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0
pyCNCS analysis Mixed C and Python NCS analysis.
# Get the OTF mask that NCSDemo_simulation.py used. rcfilter = ncs.genfilter(128,Pixelsize,NA,Lambda,'OTFweighted',1,0.7) print(rcfilter.shape) pyplot.imshow(rcfilter, cmap = "gray") pyplot.show() # Calculate gamma and run Python/C NCS. gamma = varsub/(gainsub*gainsub) # This takes ~100ms on my laptop. out_c = ncs...
clib/jupyter_notebooks/ncs_tensorflow_simulation.ipynb
HuanglabPurdue/NCS
gpl-3.0