Our target variable y has two unique values, 0 and 1: 0 means the patient doesn't have heart disease, while 1 unfortunately means they do. We split the dataset into a 70% training set and a 30% test set.
from sklearn.model_selection import train_test_split  # sklearn.cross_validation is deprecated
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=1)
X_train.shape, y_train.shape
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Feature importances with forests of trees This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
%matplotlib inline import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import ExtraTreesClassifier # Build a classification task using 3 informative features # Build a forest and compute the feature importances forest = ExtraTreesClassifier(n_estimators=250, random_...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
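The cell above is truncated. A minimal, self-contained sketch of the technique, using a hypothetical `make_classification` task with 3 informative features (a stand-in suggested by the truncated comment, not the notebook's actual data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic classification task with 3 informative features (stand-in data)
X, y = make_classification(n_samples=500, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)

importances = forest.feature_importances_
# Inter-tree variability: std of each feature's importance across the forest
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)

# Rank features by importance, most important first
for idx in np.argsort(importances)[::-1]:
    print("feature %d: %.4f (+/- %.4f)" % (idx, importances[idx], std[idx]))
```

The per-tree standard deviation is what the red error bars in the plot represent.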
Decision Tree accuracy and time elapsed calculation
from time import time

t0 = time()
print("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20, random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20, max_depth=5, random_state=99)
clf_dt = dt.fit(X_train, y_train)
print("Accuracy: ", clf_dt.score(X_test, y_test))
t1 = time()
print("time elapsed: ", ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Tuning our hyperparameters using GridSearch
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search is deprecated
from sklearn.metrics import classification_report

pipeline = Pipeline([
    ('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
    'clf__max_depth': (15, 20, 25),
    'clf__min_samples_leaf': (3, 5, 10)
}
grid_search = GridSearchCV(pipeline, parameter...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
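The grid-search cell is truncated. A complete sketch of the same pattern on hypothetical stand-in data (the notebook uses its heart-disease features instead), with the modern `sklearn.model_selection` import:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Stand-in data; replace with the heart-disease X and y
X, y = make_classification(n_samples=300, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1)

pipeline = Pipeline([('clf', DecisionTreeClassifier(criterion='entropy'))])
parameters = {
    'clf__max_depth': (15, 20, 25),          # prefixed with the pipeline step name
    'clf__min_samples_leaf': (3, 5, 10),
}

grid_search = GridSearchCV(pipeline, parameters, cv=5, scoring='accuracy')
grid_search.fit(X_train, y_train)
print(grid_search.best_params_)
print('Best CV accuracy: %.3f' % grid_search.best_score_)
```

The `clf__` prefix routes each parameter to the named pipeline step.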
KNN accuracy and time elapsed calculation
t6 = time()
print("KNN")
knn = KNeighborsClassifier()
clf_knn = knn.fit(X_train, y_train)
print("Accuracy: ", clf_knn.score(X_test, y_test))
t7 = time()
print("time elapsed: ", t7 - t6)
tt6 = time()
print("cross result========")
scores = cross_validation.cross_val_score(knn, X, y, cv=5)
print(scores)
print(scores.mean())
...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
SVM accuracy and time elapsed calculation
t7 = time()
print("SVM")
svc = SVC()
clf_svc = svc.fit(X_train, y_train)
print("Accuracy: ", clf_svc.score(X_test, y_test))
t8 = time()
print("time elapsed: ", t8 - t7)
tt7 = time()
print("cross result========")
scores = cross_validation.cross_val_score(svc, X, y, cv=5)
print(scores)
print(scores.mean())
tt8 = time()
print ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Using the training dataset, we now will train two different classifiers, a decision tree classifier and a k-nearest neighbors classifier, and look at their individual performances via a 10-fold cross-validation on the training dataset before we combine them into an ensemble classifier:
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation is deprecated
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import numpy as np

clf2 = DecisionTreeClassifier(max_depth=1, ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As you can see, the accuracies of our individual classifiers are almost the same and are on the high side. Now let's move on to the more exciting part and combine the individual classifiers for majority rule voting in our MajorityVoteClassifier:
# Majority Rule (hard) Voting mv_clf = MajorityVoteClassifier( classifiers=[clf2, pipe3]) clf_labels += ['Majority Voting'] all_clf = [clf2, pipe3, mv_clf] for clf, label in zip(all_clf, clf_labels): scores = cross_val_score(estimator=clf, X=X_train, ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
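MajorityVoteClassifier here is a custom class defined earlier in the notebook; scikit-learn ships a close built-in analogue, VotingClassifier. A hedged sketch of hard (majority rule) voting with the built-in class, on hypothetical stand-in data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, random_state=1)  # stand-in data

clf2 = DecisionTreeClassifier(max_depth=1, random_state=0)
pipe3 = Pipeline([('sc', StandardScaler()),
                  ('clf', KNeighborsClassifier(n_neighbors=5))])

# voting='hard' = majority rule over predicted class labels
mv_clf = VotingClassifier(estimators=[('dt', clf2), ('knn', pipe3)],
                          voting='hard')

for clf, label in zip([clf2, pipe3, mv_clf],
                      ['Decision tree', 'KNN', 'Majority voting']):
    scores = cross_val_score(estimator=clf, X=X, y=y, cv=10, scoring='accuracy')
    print("Accuracy: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
```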
As we can see, the performance of the MajorityVotingClassifier has substantially improved over the individual classifiers in the 10-fold cross-validation evaluation. Evaluating and tuning the ensemble classifier In this section, we are going to compute the ROC curves from the test set to check if the MajorityVoteClassi...
%matplotlib inline import matplotlib.pyplot as plt from sklearn.metrics import roc_curve from sklearn.metrics import auc colors = ['black', 'orange', 'blue', 'green'] linestyles = [':', '--', '-.', '-'] for clf, label, clr, ls \ in zip(all_clf, clf_labels, colors, linestyles): # assuming t...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As we can see in the resulting ROC, the ensemble classifier also performs well on the test set (ROC AUC = 0.86). Before we tune the individual classifier parameters for ensemble classification, let's call the get_params method to get a basic idea of how we can access the individual parameters inside a GridSearch object...
mv_clf.get_params()
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Based on the values returned by the get_params method, we now know how to access the individual classifier's attributes. Let's now tune the decision tree depth via a grid search for demonstration purposes. The code is as follows:
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search is deprecated

params = {'decisiontreeclassifier__max_depth': [1, 2],
          'pipeline__clf__n_neighbors': [5, 15, 20]}
grid = GridSearchCV(estimator=mv_clf,
                    param_grid=params,
                    cv=10,
                    scoring='roc_auc')
grid.fit(X_train, y_tr...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
After the grid search has completed, we can print the different hyperparameter value combinations and the average ROC AUC scores computed via 10-fold cross-validation. The code is as follows:
print('Best parameters: %s' % grid.best_params_) print('Accuracy: %.2f' % grid.best_score_)
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As we can see, we get the best cross-validation results when we choose a higher n_neighbors (n = 20) whereas the tree depth does not seem to affect the performance at all, suggesting that a decision stump is sufficient to separate the data. To remind ourselves that it is a bad practice to use the test dataset more than...
X = df[features] y = df['num'] X.shape , y.shape
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Next we encode the class labels into binary format and split the dataset into 60 percent training and 40 percent test set, respectively:
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split  # sklearn.cross_validation is deprecated

le = LabelEncoder()
y = le.fit_transform(y)
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.40, random_state=1)
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
A BaggingClassifier algorithm is already implemented in scikit-learn, which we can import from the ensemble submodule. Here, we will use an unpruned decision tree as the base classifier and create an ensemble of 500 decision trees fitted on different bootstrap samples of the training dataset:
from sklearn.ensemble import BaggingClassifier from sklearn.tree import DecisionTreeClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=None) bag = BaggingClassifier(base_estimator=tree, n_estimators=500, max_samples=1...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
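The bagging cell is truncated. A self-contained sketch of the described ensemble (500 unpruned trees on bootstrap samples), using hypothetical stand-in data; note the first constructor argument is passed positionally because its keyword was renamed from `base_estimator` to `estimator` in scikit-learn 1.2:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=1)  # stand-in data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.4, random_state=1)

tree = DecisionTreeClassifier(criterion='entropy', max_depth=None)  # unpruned

# base estimator passed positionally (keyword renamed across sklearn versions)
bag = BaggingClassifier(tree,
                        n_estimators=500,
                        max_samples=1.0,   # each bootstrap sample as large as X_train
                        bootstrap=True,
                        n_jobs=-1,
                        random_state=1)
bag = bag.fit(X_train, y_train)
print('Bagging train/test accuracies %.3f/%.3f'
      % (bag.score(X_train, y_train), bag.score(X_test, y_test)))
```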
Next we will calculate the accuracy score of the prediction on the training and test dataset to compare the performance of the bagging classifier to the performance of a single unpruned decision tree. Based on the accuracy values, the unpruned decision tree predicts all class labels of the training samples correctly; h...
from sklearn.metrics import accuracy_score tree = tree.fit(X_train, y_train) y_train_pred = tree.predict(X_train) y_test_pred = tree.predict(X_test) tree_train = accuracy_score(y_train, y_train_pred) tree_test = accuracy_score(y_test, y_test_pred) print('Decision tree train/test accuracies %.3f/%.3f' % (tree_tr...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Although the training accuracies of the decision tree and bagging classifier are similar on the training set (both 1.0), we can see that the bagging classifier has a slightly better generalization performance as estimated on the test set. In practice, more complex classification tasks and datasets' high dimensionality ...
from sklearn.ensemble import AdaBoostClassifier tree = DecisionTreeClassifier(criterion='entropy', max_depth=1) ada = AdaBoostClassifier(base_estimator=tree, n_estimators=500, learning_rate=0.1, random_state=0) ...
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
As we can see, the decision tree stump seems to overfit the training data in contrast with the unpruned decision tree that we saw in the previous section.
ada = ada.fit(X_train, y_train) y_train_pred = ada.predict(X_train) y_test_pred = ada.predict(X_test) ada_train = accuracy_score(y_train, y_train_pred) ada_test = accuracy_score(y_test, y_test_pred) print('AdaBoost train/test accuracies %.3f/%.3f' % (ada_train, ada_test))
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Simple notebook on performing a 1D MT problem.
#from IPython.display import Latex
notebooks/MT1DfwdProblem.ipynb
simpeg/simpegmt
mit
Maxwell's equations in 1D are as follows $i \omega b = - \partial_z e $ $ s = \partial_z(\mu^{-1} b) - \sigma(z) e$ $b(0) = 1 \hspace{1cm} ;\hspace{1cm} b(-\infty) = 0$ where $e = \widehat{\overrightarrow{E}}_x $ and $b = \widehat{\overrightarrow{B}}_y$ In weak form the equations become $i \omega(b,f) = - (\partial_...
import sys sys.path.append('C:/GudniWork/Codes/python/simpeg') import SimPEG as simpeg, numpy as np, scipy, scipy.sparse as sp
notebooks/MT1DfwdProblem.ipynb
simpeg/simpegmt
mit
We have $ i \omega b = - \partial_z e \hspace{1cm} ; \hspace{1cm} \partial_z(\mu^{-1} b) - \sigma(z) e \hspace{1cm} ;\hspace{1cm} b(0) = 1 \hspace{1cm} ;\hspace{1cm} b(-\infty) = 0 $ To deal with boundary: we assume that below depth L both $ \sigma $ and $ \mu $ are constants ($ z < - L $). At the boundary we have that...
# Set up the problem mu = 4*np.pi*1e-7 eps0 = 8.85e-12 # Frequency fr = np.array([1e1]) #np.logspace(0,5,200) #np.array([2000]) #np.logspace(-4,5,82) omega = 2*np.pi*fr # Mesh sig0 = 1e-2 #L = 3*np.sqrt(2/(mu*omega[0]*sig0)) #nn=np.ceil(np.log(0.3*L + 1)/np.log(1.3)) #h = 5*(1.3**(np.arange(nn+1))) h = np.ones(18) x...
notebooks/MT1DfwdProblem.ipynb
simpeg/simpegmt
mit
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to ...
from __future__ import division, print_function, unicode_literals try: # %tensorflow_version only exists in Colab. %tensorflow_version 1.x except Exception: pass import numpy as np import tensorflow as tf from tensorflow import keras
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Checklist Do not run TensorFlow on the GPU. Beware of multithreading, and make TensorFlow single-threaded. Set all the random seeds. Eliminate any other source of variability. Do Not Run TensorFlow on the GPU Some operations (like tf.reduce_sum()) favor performance over precision, and their outputs may vary slig...
import os os.environ["CUDA_VISIBLE_DEVICES"]=""
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Beware of Multithreading Because floats have limited precision, the order of execution matters:
2. * 5. / 7. 2. / 7. * 5.
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
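The two flattened expressions above evaluate the same product in different orders. A runnable illustration of this order-dependence, adding the classic 0.1 + 0.2 + 0.3 example (an assumption of this sketch, not from the notebook) where the difference is guaranteed:

```python
# Floating-point arithmetic is not associative: the order of operations
# can change the last bits of the result.
a = 2. * 5. / 7.   # multiply first, then divide
b = 2. / 7. * 5.   # divide first, then multiply
print(a, b)        # the two values may differ in the last bit

# A classic example where the orderings are guaranteed to disagree:
left_to_right = (0.1 + 0.2) + 0.3
right_to_left = 0.1 + (0.2 + 0.3)
print(left_to_right == right_to_left)  # False
```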
You should make sure TensorFlow runs your ops on a single thread:
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)

with tf.Session(config=config) as sess:
    # ... this will run single threaded
    pass
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
with tf.Session() as sess:
    # ... also single-threaded!
    pass
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Set all the random seeds! Python's built-in hash() function
print(set("Try restarting the kernel and running this again")) print(set("Try restarting the kernel and running this again"))
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Since Python 3.3, the result will be different every time, unless you start Python with the PYTHONHASHSEED environment variable set to 0: `PYTHONHASHSEED=0 python`, then `print(set("Now the output is stable across runs"))` gives `{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}`, then `exi...`
if os.environ.get("PYTHONHASHSEED") != "0":
    raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter "
                    "server to get reproducible results.")
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Python Random Number Generators (RNGs)
import random random.seed(42) print(random.random()) print(random.random()) print() random.seed(42) print(random.random()) print(random.random())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
NumPy RNGs
import numpy as np np.random.seed(42) print(np.random.rand()) print(np.random.rand()) print() np.random.seed(42) print(np.random.rand()) print(np.random.rand())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
TensorFlow RNGs TensorFlow's behavior is more complex for two reasons: you create a graph and then execute it, so the random seed must be set before you create the random operations; and there are two seeds, one at the graph level and one at the individual random operation level.
import tensorflow as tf tf.set_random_seed(42) rnd = tf.random_uniform(shape=[]) with tf.Session() as sess: print(rnd.eval()) print(rnd.eval()) print() with tf.Session() as sess: print(rnd.eval()) print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Every time you reset the graph, you need to set the seed again:
tf.reset_default_graph() tf.set_random_seed(42) rnd = tf.random_uniform(shape=[]) with tf.Session() as sess: print(rnd.eval()) print(rnd.eval()) print() with tf.Session() as sess: print(rnd.eval()) print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
If you create your own graph, it will ignore the default graph's seed:
tf.reset_default_graph() tf.set_random_seed(42) graph = tf.Graph() with graph.as_default(): rnd = tf.random_uniform(shape=[]) with tf.Session(graph=graph): print(rnd.eval()) print(rnd.eval()) print() with tf.Session(graph=graph): print(rnd.eval()) print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
You must set the new graph's own seed:
graph = tf.Graph() with graph.as_default(): tf.set_random_seed(42) rnd = tf.random_uniform(shape=[]) with tf.Session(graph=graph): print(rnd.eval()) print(rnd.eval()) print() with tf.Session(graph=graph): print(rnd.eval()) print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
If you set the seed after the random operation is created, the seed has no effect:
tf.reset_default_graph() rnd = tf.random_uniform(shape=[]) tf.set_random_seed(42) # BAD, NO EFFECT! with tf.Session() as sess: print(rnd.eval()) print(rnd.eval()) print() tf.set_random_seed(42) # BAD, NO EFFECT! with tf.Session() as sess: print(rnd.eval()) print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
A note about operation seeds You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works: | Graph seed | Op seed | Resulting seed | |------------|---------|---------------...
tf.reset_default_graph() rnd1 = tf.random_uniform(shape=[], seed=42) rnd2 = tf.random_uniform(shape=[], seed=42) rnd3 = tf.random_uniform(shape=[]) with tf.Session() as sess: print(rnd1.eval()) print(rnd2.eval()) print(rnd3.eval()) print(rnd1.eval()) print(rnd2.eval()) print(rnd3.eval()) prin...
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
In the following example, you may think that all random ops will have the same random seed, but rnd3 will actually have a different seed:
tf.reset_default_graph() tf.set_random_seed(42) rnd1 = tf.random_uniform(shape=[], seed=42) rnd2 = tf.random_uniform(shape=[], seed=42) rnd3 = tf.random_uniform(shape=[]) with tf.Session() as sess: print(rnd1.eval()) print(rnd2.eval()) print(rnd3.eval()) print(rnd1.eval()) print(rnd2.eval()) ...
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Estimators API Tip: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
random.seed(42) np.random.seed(42) tf.set_random_seed(42)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
If you use the Estimators API, make sure to create a RunConfig and set its tf_random_seed, then pass it to the constructor of your estimator:
my_config = tf.estimator.RunConfig(tf_random_seed=42) feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])] dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10, feature_columns=feature_cols, config=my_con...
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Let's try it on MNIST:
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data() X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0 y_train = y_train.astype(np.int32)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Unfortunately, numpy_input_fn does not allow us to set the seed when shuffle=True, so we must shuffle the data ourselves and set shuffle=False.
indices = np.random.permutation(len(X_train)) X_train_shuffled = X_train[indices] y_train_shuffled = y_train[indices] input_fn = tf.estimator.inputs.numpy_input_fn( x={"X": X_train_shuffled}, y=y_train_shuffled, num_epochs=10, batch_size=32, shuffle=False) dnn_clf.train(input_fn=input_fn)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
The final loss should be exactly 0.46282205. Instead of using the numpy_input_fn() function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
def create_dataset(X, y=None, n_epochs=1, batch_size=32, buffer_size=1000, seed=None): dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y)) dataset = dataset.repeat(n_epochs) dataset = dataset.shuffle(buffer_size, seed=seed) return dataset.batch(batch_size) input_fn=lambda: cr...
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
The final loss should be exactly 1.0556093.
keras.backend.clear_session() random.seed(42) np.random.seed(42) tf.set_random_seed(42) model = keras.models.Sequential([ keras.layers.Dense(300, activation="relu"), keras.layers.Dense(100, activation="relu"), keras.layers.Dense(10, activation="softmax"), ]) model.compile(loss="sparse_categorical_crossent...
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
You should get exactly 97.16% accuracy on the training set at the end of training. Eliminate other sources of variability For example, os.listdir() returns file names in an order that depends on how the files were indexed by the file system:
for i in range(10): with open("my_test_foo_{}".format(i), "w"): pass [f for f in os.listdir() if f.startswith("my_test_foo_")] for i in range(10): with open("my_test_bar_{}".format(i), "w"): pass [f for f in os.listdir() if f.startswith("my_test_bar_")]
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
You should sort the file names before you use them:
filenames = os.listdir() filenames.sort() [f for f in filenames if f.startswith("my_test_foo_")] for f in os.listdir(): if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"): os.remove(f)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Scatter plots Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot. Generate random data using np.random.randn. Style the markers (color, size, shape, alpha) appropriately. Include an x and y label and title.
import numpy as np
import matplotlib.pyplot as plt

a = np.random.randn(2, 10)
x = a[0, :]
y = a[1, :]
plt.scatter(x, y, color='red')
plt.grid(True)
plt.box(False)
plt.xlabel('random x values')
plt.ylabel('random y values')
plt.title('TITLE')
assignments/assignment04/MatplotlibExercises.ipynb
phungkh/phys202-2015-work
mit
Histogram Learn how to use Matplotlib's plt.hist function to make a 1d histogram. Generate random data using np.random.randn. Figure out how to set the number of histogram bins and other style options. Include an x and y label and title.
import numpy as np
import matplotlib.pyplot as plt

a = np.random.randn(1, 10)
x = a[0, :]
plt.hist(x, bins=15, histtype='bar', color='green')
plt.title('MY HISTOGRAM')
plt.xlabel('RANDOM X VALUES')
plt.ylabel('FREQUENCY')
assignments/assignment04/MatplotlibExercises.ipynb
phungkh/phys202-2015-work
mit
Section 1: Magnetic Elements Modeling The very first step needed to push forward is to correctly model the physical elements one by one. In the beamline package, magnet component classes can be found in the element module, e.g. a quadrupole is abstracted in the ElementQuad class, charge in ElementCharge, etc.; they are all...
#commdinfo = {'DATE': '2016-03-22', 'AUTHOR': 'Tong Zhang'} comminfo = 'DATE = 2016-03-24, AUTHOR = Tong Zhang' beamline.MagBlock.setCommInfo(comminfo)
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
STEP 2: create elements
# charge, this is visual element for the real accelerator, but is a must for elegant tracking chconf = {'total':1e-9} q = beamline.ElementCharge(name = 'q', config = chconf) # csrcsben, use elegant element name # simconf is complementary configurations for elegant tracking, # should set with setConf(simconf, type='si...
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
STEP 3: make lattice beamline
# METHOD 1: CANNOT get all configurations # use 'ElementBeamline' class of 'element' module # # beamline latele = [obj.name for obj in [q, D0, Q1, D0, B1, D0, B2, D0, D0, B3, D0, B4, D0, Q1, D0]] latstr = '(' + ' '.join(latele) + ')' bl = beamline.ElementBeamline(name = 'bl', config = {'lattice':latstr}) #bl = beamli...
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
Section 2: Lattice modeling STEP 4: create Lattice instance, make simulation required input files
eleb1.setConf('angle=0.1', type = 'simu') # e.g. '.lte' for elegant tracking, require all configurations latins = beamline.Lattice(latline_online.getAllConfig()) latfile = os.path.join(os.getcwd(), 'tracking/test.lte') latins.generateLatticeFile(latline_online.name, latfile) latins.dumpAllElements()
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
STEP 5: simulation with generated lattice file
simpath = os.path.join(os.getcwd(), 'tracking') elefile = os.path.join(simpath, 'test.ele') h5out = os.path.join(simpath, 'tpout.h5') elesim = beamline.Simulator() elesim.setMode('elegant') elesim.setScript('runElegant.sh') elesim.setExec('elegant') elesim.setPath(simpath) elesim.setInputfiles(ltefile = latfile, elef...
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
visualize data
import matplotlib.pyplot as plt # %matplotlib inline plt.plot(data_tp[:,0],data_tp[:,1],'.') plt.xlabel('$t\,[s]$') plt.ylabel('$\gamma$') plt.plot(data_sSx[:,0],data_sSx[:,1],'-') plt.ylabel('$\sigma_x\,[\mu m]$') plt.xlabel('$s\,[m]$') plt.plot(data_setax[:,0],data_setax[:,1],'-') plt.ylabel('$\eta_{x}\,[m]$') plt...
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
Lattice layout visualization
for e in latline_online._lattice_eleobjlist:
    print(e.name, e.__class__.__name__)
ptches, xr, yr = latline_online.draw(showfig=True)
ptches
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train mode...
# course-hardcoded url
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None

def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download.

    This is mostly intended for users with slow internet connections.
    Reports every 5% change in dow...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
num_classes = 10 np.random.seed(133) def maybe_extract(filename, force=False): # get new dir name dirn = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz # Create data directory path dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) dpath = os.path.join(dpath, 'data'...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
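The extraction cell is truncated. A minimal hedged sketch of the pattern (the function name mirrors the notebook's, but the directory layout and arguments are assumptions):

```python
import os
import tarfile

def maybe_extract(filename, data_root='.', force=False):
    """Extract a .tar.gz archive into data_root unless it is already extracted.

    Returns the sorted list of class sub-directories (e.g. A through J).
    """
    # 'notMNIST_large.tar.gz' -> 'notMNIST_large' (strip both extensions)
    root = os.path.splitext(os.path.splitext(filename)[0])[0]
    if os.path.isdir(root) and not force:
        print('%s already present - skipping extraction of %s.' % (root, filename))
    else:
        print('Extracting %s...' % filename)
        with tarfile.open(filename) as tar:
            tar.extractall(data_root)
    return sorted(d for d in os.listdir(root)
                  if os.path.isdir(os.path.join(root, d)))
```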
Problem 1 Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
def show_random_images(numi): """ Function to display a specified number of random images from the extracted dataset. Arguments: numi: Integer, how many images to show. """ # First let's create a list of all the files. # Create data directory path dpath = os.path.abspath(os.path...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the ...
### Image preprocessing happening in this step !!! ### image_size = 28 # Pixel width and height. pixel_depth = 255.0 # Number of levels per pixel. def load_letter(folder, min_num_images): """Load the data for a single letter label.""" image_files = os.listdir(folder) dataset = np.ndarray(shape=(len(image...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 2 Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
## Open a random image from the pickled files. def show_rnd_pkl_image(): """Function that shows a random pickled image.""" # First let's create a list of all the files. # Create data directory path dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) dpath = os.path.join(dpath, 'data') ...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 3 Another check: we expect the data to be balanced across classes. Verify that.
def disp_number_images(data_folders): for folder in data_folders: pickle_filename = folder + '.pickle' try: with open(pickle_filename, 'rb') as f: dataset = pickle.load(f) except Exception as e: print('Unable to read data from', pickle_filename, ':', e...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
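A simpler sketch of the same balance check, assuming the merged labels are available as an integer ndarray (the stand-in label array below is hypothetical):

```python
import numpy as np

def class_counts(labels):
    """Return the number of samples per class for an integer label array."""
    return np.bincount(labels)

# Hypothetical stand-in for train_labels / test_labels
labels = np.random.randint(0, 10, size=1000)

counts = class_counts(labels)
for cls, count in enumerate(counts):
    print('class %d: %d samples' % (cls, count))
# Ratio of the rarest to the most common class: close to 1.0 means balanced
print('min/max ratio: %.3f' % (counts.min() / counts.max()))
```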
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning.
def make_arrays(nb_rows, img_size): if nb_rows: dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32) labels = np.ndarray(nb_rows, dtype=np.int32) else: dataset, labels = None, None return dataset, labels def merge_datasets(pickle_files, train_size, valid_size=0): ...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
def randomize(dataset, labels): permutation = np.random.permutation(labels.shape[0]) shuffled_dataset = dataset[permutation,:,:] shuffled_labels = labels[permutation] return (shuffled_dataset, shuffled_labels) train_dataset, train_labels = randomize(train_dataset, train_labels) test_dataset, test_labels...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 4 Convince yourself that the data is still good after shuffling! To be sure that the data are still fine after the merger and the randomization, we will select one item and display the image alongside the label. Note: 0 = A, 1 = B, 2 = C, 3 = D, 4 = E, 5 = F, 6 = G, 7 = H, 8 = I, 9 = J.
pretty_labels = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E', 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J'} def disp_sample_dataset(dataset, labels): items = random.sample(range(len(labels)), 8) for i, item in enumerate(items): plt.subplot(2, 4, i+1) plt.axis('off') plt.title(pretty_labels[labels[item]]) plt....
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Finally, let's save the data for later reuse:
# Create data directory path dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir)) dpath = os.path.join(dpath, 'data') # create pickle data file path pickle_file = os.path.join(dpath,'notMNIST.pickle') # save data if they aren't already saved or forced. def maybe_save_data(filepath, force=False): # Downloa...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 5 By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok ...
def display_overlap(overlap, source_dataset, target_dataset): item = random.choice(list(overlap.keys())) imgs = np.concatenate(([source_dataset[item]], target_dataset[overlap[item][0:7]])) plt.suptitle(item) for i, img in enumerate(imgs): plt.subplot(2, 4, i+1) plt.axis('off') pl...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 6 Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it. Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: y...
def tryLogRegr(sample_size): """ Arguments: sample_size: Integer to determine sample size """ regr = LogisticRegression() X_test = test_dataset.reshape(test_dataset.shape[0], 28 * 28) y_test = test_labels X_train = train_dataset[:sample_size].reshape(sample_size, 784)...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Looks like ~85% is the limit for a linear method. Let's try it on the whole dataset.
def tryLogRegrAll(): """ Function to perform Logistic Regression on our whole dataset. """ # the sag solver works better for bigger datasets # n_jobs = -1 uses all available cores (-2 would leave one free) regr = LogisticRegression(solver='sag', n_jobs = -1) X_test = test_dataset...
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Download the sequence data Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. Project SRA: SRP065811 BioProject ID: PRJNA299298 SRA link: http://trace.ncbi.nlm.nih.gov/...
%%bash ## make a new directory for this analysis mkdir -p empirical_7/fastq/
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
For each ERS (individuals) get all of the ERR (sequence file accessions).
## IPython code import pandas as pd import numpy as np import urllib2 import os ## open the SRA run table from github url url = "https://raw.githubusercontent.com/"+\ "dereneaton/RADmissing/master/empirical_7_SraRunTable.txt" intable = urllib2.urlopen(url) indata = pd.read_table(intable, sep="\t") ## print firs...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s): wget_download(SRR, "empirical_7/fastq/", ID) %%bash ## convert sra files to fastq using fastq-dump tool ## output as gzipped into the fastq directory fastq-dump --gzip -O empirical_7/fastq/ empirical_7/fastq/*.sra ## remove .sra files rm empirical_7/fastq/*...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Note: The data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33, so we use that in the params file below.
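The difference between the two phred conventions is just a constant shift of the ASCII quality characters. A small illustrative example (not part of the pipeline) showing the same quality values rendered under both offsets:

```python
# The same per-base quality values, Q = [30, 30, 40, 2], encoded under the
# two common phred offsets.
quals = [30, 30, 40, 2]

phred33 = "".join(chr(q + 33) for q in quals)  # Sanger / Illumina >= 1.8
phred64 = "".join(chr(q + 64) for q in quals)  # older Illumina (< 1.8), as here

print(phred33)  # "??I#"
print(phred64)  # "^^hB"

# Converting an old-style string back to offset-33 is a constant shift:
converted = "".join(chr(ord(c) - 64 + 33) for c in phred64)
assert converted == phred33
```

This is why the assembler must be told which offset the files use: interpreting offset-64 characters as offset-33 would inflate every quality score by 31.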
%%bash ## substitute new parameters into file sed -i '/## 1. /c\empirical_7/ ## 1. working directory ' params.txt sed -i '/## 6. /c\TGCAGG ## 6. cutters ' params.txt sed -i '/## 7. /c\20 ## 7. N processors ' params.txt sed -i '/## 9. /c\6 ## 9. NQu...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Assemble in pyrad
%%bash pyrad -p params.txt -s 234567 >> log.txt 2>&1 %%bash sed -i '/## 12./c\2 ## 12. MinCov ' params.txt sed -i '/## 14./c\empirical_7_m2 ## 14. output name ' params.txt %%bash pyrad -p params.txt -s 7 >> log.txt 2>&1
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Results We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples. Raw data amounts The average number of raw reads per sample is 1.36M.
import pandas as pd ## read in the data s2dat = pd.read_table("empirical_7/stats/s2.rawedit.txt", header=0, nrows=42) ## print summary stats print s2dat["passed.total"].describe() ## find which sample has the most raw data maxraw = s2dat["passed.total"].max() print "\nmost raw data in sample:" print s2dat['sample '][...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Look at distributions of coverage pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
## read in the s3 results s7dat = pd.read_table("empirical_7/stats/s3.clusters.txt", header=0, nrows=42) ## print summary stats print "summary of means\n==================" print s7dat['dpt.me'].describe() ## print summary stats print "\nsummary of std\n==================" print s7dat['dpt.sd'].describe() ## print s...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Plot the coverage for the sample with the highest mean coverage. Green shows the loci that were discarded and orange the loci that were retained. The majority of loci were discarded for having too low coverage.
import toyplot import toyplot.svg import numpy as np ## read in the depth information for this sample with open("empirical_7/clust.85/Dkya_str_m.depths", 'rb') as indat: depths = np.array(indat.read().strip().split(","), dtype=int) ## make a barplot in Toyplot canvas = toyplot.Canvas(width=350, height=300) ax...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Print final stats table
cat empirical_7/stats/empirical_7_m4.stats %%bash head -n 20 empirical_7/stats/empirical_7_m2.stats
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Infer ML phylogeny in raxml as an unrooted tree
%%bash ## raxml arguments w/ ... raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \ -w /home/deren/Documents/RADmissing/empirical_7/ \ -n empirical_7_m4 -s empirical_7/outfiles/empirical_7_m4.phy %%bash ## raxml arguments w/ ... ...
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Plot the tree in R using ape
%load_ext rpy2.ipython %%R -h 800 -w 800 library(ape) tre <- read.tree("empirical_7/RAxML_bipartitions.empirical_7_m4") ltre <- ladderize(tre) par(mfrow=c(1,2)) plot(ltre, use.edge.length=F) nodelabels(ltre$node.label) plot(ltre, type='u')
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Examine a single patient
patientunitstayid = 2704494 query = query_schema + """ select * from apacheapsvar where patientunitstayid = {} """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head()
notebooks/apache.ipynb
mit-eicu/eicu-code
mit
Missing data represented by -1 apacheScore present but prediction is -1 Ventilation flags Unable to assess GCS (meds column) Hospitals with data available
query = query_schema + """ select pt.hospitalid , count(pt.patientunitstayid) as number_of_patients , count(a.patientunitstayid) as number_of_patients_with_tbl from patient pt left join apacheapsvar a on pt.patientunitstayid = a.patientunitstayid group by pt.hospitalid """ df = pd.re...
notebooks/apache.ipynb
mit-eicu/eicu-code
mit
Configure and build the neural network. Each layer has the 4 inputs $\mathbf{H}^{T}\mathbf{r}$, $\mathbf{H}^{T}\mathbf{H}$, $\mathbf{t}_k$ and $\mathbf{v}_k$. The index $k$ denotes the layer. The layers can also be interpreted as iterations of an optimization algorithm [1]. The nonlinear operation $ \begin{align} &\qua...
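The nonlinear operation referenced above is, in the original DetNet paper, a piecewise-linear "soft sign" built from two ReLUs. A NumPy sketch (here $t$ is treated as a fixed scalar for illustration, whereas in the network it is a trainable per-layer parameter):

```python
import numpy as np

def psi(x, t=0.5):
    """Piecewise-linear soft sign used in DetNet:
    psi_t(x) = -1 + relu(x + t)/|t| - relu(x - t)/|t|.
    Linear between -t and t, saturating at -1 below -t and at +1 above t."""
    relu = lambda z: np.maximum(z, 0.0)
    return -1.0 + relu(x + t) / abs(t) - relu(x - t) / abs(t)

x = np.linspace(-2.0, 2.0, 9)
print(psi(x))  # ramps from -1 through 0 at x=0 up to +1
```

The saturation at $\pm 1$ pushes the layer outputs toward valid BPSK symbols while keeping the operation differentiable almost everywhere, which is what makes end-to-end training feasible.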
# DetNet config layers = 3*K v_len = 2*K z_len = 8*K # Training params training_steps = 10000 batch_size_train = 5000 snr_var_train = 3.0 # Maximum absolute deviation of the SNR from its mean in logarithmic scale. # Test params test_steps= 1000 batch_size_test = 5000 snr_range = np.arange(8, 14, 1) # Definition of t...
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
Training of the network. The loss function takes into account the output of all the layers $\mathcal{L}$ and is normalized to the loss of a zero-forcing equalizer $\Vert \mathbf{t}-\tilde{\mathbf{t}}\Vert^{2}$. The loss function is defined as: \begin{align} &L(\mathbf{t};\hat{\mathbf{t}}_{\theta}(\mathbf{H}, \mathbf{r})...
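The structure of such a multi-layer loss can be sketched in plain NumPy: each layer's squared error is divided by the zero-forcing residual, and layers are combined with increasing weights. The specific weighting `log(k + 1)` below is an illustrative choice, not necessarily the exact one used in this notebook:

```python
import numpy as np

def detnet_loss(t_true, t_hat_layers, t_zf):
    """Sketch of a layer-weighted loss normalized by the residual of a
    zero-forcing estimate t_zf. Later layers receive larger weights so the
    network is pushed to improve with depth (weighting log(k+1) is assumed)."""
    zf_err = np.sum((t_true - t_zf) ** 2, axis=-1)          # per-sample ZF residual
    loss = 0.0
    for k, t_hat in enumerate(t_hat_layers, start=1):
        layer_err = np.sum((t_true - t_hat) ** 2, axis=-1)  # per-sample layer residual
        loss += np.log(k + 1) * np.mean(layer_err / zf_err)
    return loss

# Sanity check: if every layer only reaches ZF quality, each ratio is exactly 1,
# so the loss reduces to the sum of the weights.
t_true = np.zeros((5, 4))
t_zf = np.ones((5, 4))
print(detnet_loss(t_true, [t_zf, t_zf, t_zf], t_zf))  # log(2)+log(3)+log(4)
```

Normalizing by the ZF residual makes the loss scale-free across SNRs: a value below the sum of weights means the network beats the zero-forcing baseline.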
model = DetNet(layers, K, v_len, z_len) model.to(device) # Adam Optimizer optimizer = optim.Adam(model.parameters(), eps=1e-07) results = [] ber = [] for i in range(training_steps): # Generate a batch of training data. r_train, Hr_train, HH_train, snr_train, t_train = data_generation_VC(K, N, batch_size_tra...
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
Visualize
fig = plt.figure(1,figsize=(15,15)) plt.rcParams.update({'font.size': 18}) color=iter(cm.viridis_r(np.linspace(0,1,len(results)))) # Plot loss. plt.subplot(211) for i in range(0, len(results)): c=next(color) plt.semilogy(range(0, len(results[0])-1), results[i][1:], color=c) plt.grid(True) plt.title("Loss Functi...
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
After initialization, the k-means algorithm iterates between the following two steps: 1. Assign each data point to the closest centroid. $$ z_i \gets \mathrm{argmin}_j \|\mu_j - \mathbf{x}_i\|^2 $$ 2. Revise centroids as the mean of the assigned data points. $$ \mu_j \gets \frac{1}{n_j}\sum_{i:z_i=j} \mathbf{x}_i $$ In p...
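The two update steps above can be sketched in a few lines of NumPy (dense data, squared Euclidean distance; the helper name is hypothetical):

```python
import numpy as np

def kmeans_step(X, centroids):
    """One k-means iteration: assign points, then recompute centroids.
    X: (n, d) data, centroids: (k, d). Empty clusters keep their centroid."""
    # Step 1: assign each point to its closest centroid.
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    z = np.argmin(dists, axis=1)
    # Step 2: revise each centroid as the mean of its assigned points.
    new_centroids = centroids.copy()
    for j in range(centroids.shape[0]):
        members = X[z == j]
        if len(members) > 0:
            new_centroids[j] = members.mean(axis=0)
    return new_centroids, z

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
centroids = np.array([[0.0, 0.0], [10.0, 10.0]])
centroids, z = kmeans_step(X, centroids)
print(z)          # [0 0 1 1]
print(centroids)  # [[ 0.   0.5] [10.  10.5]]
```

For the sparse TF-IDF matrix used below, the dense broadcasting trick is replaced by `pairwise_distances`, but the logic is the same.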
from sklearn.metrics import pairwise_distances # Get the TF-IDF vectors for documents 100 and 101 (rows 100:102). queries = tf_idf[100:102,:] # Compute pairwise distances from every data point to each query vector. dist = pairwise_distances(tf_idf, queries, metric='euclidean') print dist print queries.shape print tf_idf.shape...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
More formally, dist[i,j] is assigned the distance between the ith row of X (i.e., X[i,:]) and the jth row of Y (i.e., Y[j,:]). Checkpoint: For a moment, suppose that we initialize three centroids with the first 3 rows of tf_idf. Write code to compute distances from each of the centroids to all data points in tf_idf. Th...
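The contract `dist[i, j] == ||X[i, :] - Y[j, :]||` is easy to verify on a small dense example:

```python
import numpy as np
from sklearn.metrics import pairwise_distances

X = np.arange(12, dtype=float).reshape(4, 3)  # 4 points in R^3
Y = np.arange(6, dtype=float).reshape(2, 3)   # 2 query points

dist = pairwise_distances(X, Y, metric='euclidean')
print(dist.shape)  # (4, 2): one row per data point, one column per query
# Entry (2, 1) is the distance between X[2,:] and Y[1,:].
assert np.isclose(dist[2, 1], np.linalg.norm(X[2, :] - Y[1, :]))
```

The same shape convention holds for the sparse `tf_idf` matrix: rows index data points, columns index the chosen centroids.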
# Students should write code here first_3_centroids = tf_idf[:3,:] distances = pairwise_distances(tf_idf, first_3_centroids, metric='euclidean') dist = distances[430, 1] print dist '''Test cell''' if np.allclose(dist, pairwise_distances(tf_idf[430,:], tf_idf[1,:])): print('Pass') else: print('Check your code ...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Next, given the pairwise distances, we take the minimum of the distances for each data point. Fittingly, NumPy provides an argmin function. See this documentation for details. Read the documentation and write code to produce a 1D array whose i-th entry indicates the centroid that is the closest to the i-th ...
# Students should write code here distances = distances.copy() closest_cluster = np.argmin(distances, axis=1) print closest_cluster print closest_cluster.shape '''Test cell''' reference = [list(row).index(min(row)) for row in distances] if np.allclose(closest_cluster, reference): print('Pass') else: print('Ch...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Let's put these steps together. First, initialize three centroids with the first 3 rows of tf_idf. Then, compute distances from each of the centroids to all data points in tf_idf. Finally, use these distance calculations to compute cluster assignments and assign them to cluster_assignment.
# Students should write code here first_3_centroids = tf_idf[:3,:] distances = pairwise_distances(tf_idf, first_3_centroids, metric='euclidean') cluster_assignment = np.argmin(distances, axis=1) if len(cluster_assignment)==59071 and \ np.array_equal(np.bincount(cluster_assignment), np.array([23061, 10086, 25924])):...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Let's consider running k-means with K=3 clusters for a maximum of 400 iterations, recording cluster heterogeneity at every step. Then, let's plot the heterogeneity over iterations using the plotting function above.
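The heterogeneity being recorded is the k-means objective itself: the sum of squared distances from each point to the centroid of its assigned cluster. A minimal standalone sketch (this version is a dense-data assumption, not the assignment's exact helper):

```python
import numpy as np

def compute_heterogeneity(data, centroids, cluster_assignment):
    """K-means objective: sum of squared distances of every point
    to the centroid of the cluster it is assigned to."""
    total = 0.0
    for j in range(centroids.shape[0]):
        members = data[cluster_assignment == j]
        if len(members) > 0:
            total += ((members - centroids[j]) ** 2).sum()
    return total

data = np.array([[0.0], [2.0], [10.0]])
centroids = np.array([[1.0], [10.0]])
assignment = np.array([0, 0, 1])
print(compute_heterogeneity(data, centroids, assignment))  # 1 + 1 + 0 = 2.0
```

Because each of the two k-means steps can only decrease (or keep) this quantity, the recorded curve should be non-increasing over iterations.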
k = 3 heterogeneity = [] initial_centroids = get_initial_centroids(tf_idf, k, seed=0) centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400, record_heterogeneity=heterogeneity, verbose=True) plot_heterogeneity(heterogeneity, k) print heterogeneity
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. (True/False) The clustering objective (heterogeneity) is non-increasing for this example. - True Quiz Question. Let's step back from this particular example. If the clustering objective (heterogeneity) would ever increase when running k-means, that would indicate: (choose one) k-means algorithm got stuc...
print np.bincount(cluster_assignment)
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Beware of local minima One weakness of k-means is that it tends to get stuck in a local minimum. To see this, let us run k-means multiple times, with different initial centroids created using different random seeds. Note: Again, in practice, you should set different seeds for every run. We give you a list of seeds for ...
k = 10 heterogeneity = {} cluster_assignment_dict = {} import time start = time.time() for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]: initial_centroids = get_initial_centroids(tf_idf, k, seed) centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400, ...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Notice the variation in heterogeneity for different initializations. This indicates that k-means sometimes gets stuck at a bad local minimum. Quiz Question. Another way to capture the effect of changing initialization is to look at the distribution of cluster assignments. Add a line to the code above to compute the siz...
for k, v in cluster_assignment_dict.iteritems(): print k, np.bincount(v).max()
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Maximum values: 18132 Minimum values: 15779 One effective way to counter this tendency is to use k-means++ to provide a smart initialization. This method tries to spread out the initial set of centroids so that they are not too close together. It is known to improve the quality of local optima and lower average runtim...
def smart_initialize(data, k, seed=None): '''Use k-means++ to initialize a good set of centroids''' if seed is not None: # useful for obtaining consistent results np.random.seed(seed) centroids = np.zeros((k, data.shape[1])) # Randomly choose the first centroid. # Since we have no prior...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
A few things to notice from the box plot: * Random initialization results in a worse clustering than k-means++ on average. * The best result of k-means++ is better than the best result of random initialization. In general, you should run k-means at least a few times with different initializations and then return the ru...
def kmeans_multiple_runs(data, k, maxiter, num_runs, seed_list=None, verbose=False): heterogeneity = {} min_heterogeneity_achieved = float('inf') best_seed = None final_centroids = None final_cluster_assignment = None for i in xrange(num_runs): # Use UTC time if no see...
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. Which of the 10 clusters above contains the greatest number of articles? Cluster 0: artists, poets, writers, environmentalists Cluster 4: track and field athletes Cluster 5: composers, songwriters, singers, music producers Cluster 7: baseball players Cluster 9: lawyers, judges, legal scholars
np.argmax(np.bincount(cluster_assignment[10]))
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. Which of the 10 clusters contains the least number of articles? Cluster 1: film directors Cluster 3: politicians Cluster 6: soccer (football) players Cluster 7: baseball players Cluster 9: lawyers, judges, legal scholars
np.argmin(np.bincount(cluster_assignment[10]))
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
The class of rugby players has been broken into two clusters (11 and 72). The same goes for soccer (football) players (clusters 6, 21, 40, and 87), although some may like the benefit of having a separate category for the Australian Football League. The class of baseball players has also been broken into two clusters (18 and ...
cluster_assignment_bincount = np.bincount(cluster_assignment[100]) len(cluster_assignment_bincount[cluster_assignment_bincount <= 236])
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Objectives: with this notebook you will understand the very basics of a sensitivity analysis and the specifics of an RBD-FAST analysis use the python library SALib on your own to perform an RBD-FAST analysis NOTE: for more detail on sensitivity analysis in general, refer to the café focus by Jeanne Goffart (april...
# STATE THE PROBLEM DICTIONARY # what will be varying (= inputs)? within what bounds? problem = { 'num_vars': 3, 'names': ['x1', 'x2', 'x3'], 'bounds': [[-3.14159265359, 3.14159265359], [-3.14159265359, 3.14159265359], [-3.14159265359, 3.14159265359]] }
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
Draw a number of samples from your problem definition
# say we want to draw 150 samples num_samples = 150 # we draw a Latin Hypercube Sampling (LHS) that is fitted for an RBD FAST analysis # (other sampling methods are available in the library though) from SALib.sample.latin import sample all_samples = sample(problem, num_samples)
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
Run your own model/workflow to get the output of interest Here you plug in your own code: it can be python code like here, or a call to any other program (dymola, energy-plus, etc.). We use a lumped thermal model, aka RC model. The scientific question behind it (it's not just a stupid example ^^): can the model be entirely cali...
# this is where you define your own model, procedure, experiment... from SALib.test_functions import Ishigami import numpy as np def run_model(x1, x2, x3): """ COPY HERE YOUR OWN CODE the function takes as input 1 sample returns 1 or more output """ # Delete from HERE ================...
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
NOTE : you could also use outputs from a different program, you would then have to upload your data into a python numpy array.
# run your model, procedure, experiment for each sample of your sampling # unpack all_samples into 3 vectors x1, x2, x3 x1, x2, x3 = all_samples.T # run the model, all samples at the same time ishigami_results = run_model(x1, x2, x3)
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0