Our target variable y has two unique values, 0 and 1: 0 means the patient does not have heart disease; 1 means, unfortunately, he/she does. We split our dataset into a 70% training set and a 30% testing set.
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.preprocessing import StandardScaler

X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.3, random_state=1)
X_train.shape, y_train.shape
Ensemble Learning/Ensemble Learning to Classify Patients with Heart Disease.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Feature importances with forests of trees

This example shows the use of forests of trees to evaluate the importance of features on a classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier

# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]

for f in range(5):
    print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))

best_features = [features[i] for i in indices[:5]]

# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5], color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
Decision Tree accuracy and time elapsed calculation
from time import time
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

t0 = time()
print("DecisionTree")
dt = DecisionTreeClassifier(min_samples_split=20, random_state=99)
# dt = DecisionTreeClassifier(min_samples_split=20, max_depth=5, random_state=99)
clf_dt = dt.fit(X_train, y_train)
print("Accuracy: ", clf_dt.score(X_test, y_test))
t1 = time()
print("time elapsed: ", t1 - t0)

tt0 = time()
print("cross result========")
scores = cross_val_score(dt, X, y, cv=5)
print(scores)
print(scores.mean())
tt1 = time()
print("time elapsed: ", tt1 - tt0)
Tuning our hyperparameters using GridSearch
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

pipeline = Pipeline([
    ('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
    'clf__max_depth': (15, 20, 25),
    'clf__min_samples_leaf': (3, 5, 10)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)

print('Best score: %0.3f' % grid_search.best_score_)
print('Best parameters set:')
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
    print('\t%s: %r' % (param_name, best_parameters[param_name]))

predictions = grid_search.predict(X_test)
print(classification_report(y_test, predictions))
KNN accuracy and time elapsed calculation
from sklearn.neighbors import KNeighborsClassifier

t6 = time()
print("KNN")
knn = KNeighborsClassifier()
clf_knn = knn.fit(X_train, y_train)
print("Accuracy: ", clf_knn.score(X_test, y_test))
t7 = time()
print("time elapsed: ", t7 - t6)

tt6 = time()
print("cross result========")
scores = cross_val_score(knn, X, y, cv=5)
print(scores)
print(scores.mean())
tt7 = time()
print("time elapsed: ", tt7 - tt6)
SVM accuracy and time elapsed calculation
from sklearn.svm import SVC

t7 = time()
print("SVM")
svc = SVC()
clf_svc = svc.fit(X_train, y_train)
print("Accuracy: ", clf_svc.score(X_test, y_test))
t8 = time()
print("time elapsed: ", t8 - t7)

tt7 = time()
print("cross result========")
scores = cross_val_score(svc, X, y, cv=5)
print(scores)
print(scores.mean())
tt8 = time()
print("time elapsed: ", tt8 - tt7)  # was tt7 - tt6, which timed the wrong interval
Using the training dataset, we will now train two different classifiers, a decision tree classifier and a k-nearest neighbors classifier, and look at their individual performances via 10-fold cross-validation on the training dataset before we combine them into an ensemble classifier:
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import numpy as np

clf2 = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski')
pipe3 = Pipeline([['sc', StandardScaler()], ['clf', clf3]])

clf_labels = ['Decision Tree', 'KNN']
print('10-fold cross validation:\n')
for clf, label in zip([clf2, pipe3], clf_labels):
    scores = cross_val_score(estimator=clf, X=X_train, y=y_train,
                             cv=10, scoring='roc_auc')
    print("ROC AUC: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
As you can see, the accuracies of our individual classifiers are almost the same and are on the high side. Now let's move on to the more exciting part and combine the individual classifiers for majority-rule voting in our MajorityVoteClassifier:
# Majority Rule (hard) Voting
mv_clf = MajorityVoteClassifier(classifiers=[clf2, pipe3])

clf_labels += ['Majority Voting']
all_clf = [clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
    scores = cross_val_score(estimator=clf, X=X_train, y=y_train,
                             cv=10, scoring='roc_auc')
    print("ROC AUC: %0.2f (+/- %0.2f) [%s]" % (scores.mean(), scores.std(), label))
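scikit-learn ships a built-in equivalent of the custom MajorityVoteClassifier: VotingClassifier from sklearn.ensemble. A minimal sketch on a synthetic dataset (the heart-disease X and y are not assumed here; all data below is generated for illustration). With voting='soft' the ensemble averages predicted class probabilities, which is what allows roc_auc scoring:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the notebook's heart-disease data
X, y = make_classification(n_samples=200, n_features=10, random_state=1)

clf_tree = DecisionTreeClassifier(max_depth=1, criterion='entropy', random_state=0)
clf_knn = Pipeline([('sc', StandardScaler()),
                    ('clf', KNeighborsClassifier(n_neighbors=1))])

# 'soft' voting averages predict_proba outputs across the estimators
voting = VotingClassifier(estimators=[('dt', clf_tree), ('knn', clf_knn)],
                          voting='soft')
scores = cross_val_score(voting, X, y, cv=10, scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [Voting]" % (scores.mean(), scores.std()))
```

With voting='hard' the ensemble would instead take a majority class-label vote, but hard voting exposes no probabilities, so roc_auc scoring would not work.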
As we can see, the performance of the MajorityVoteClassifier has substantially improved over the individual classifiers in the 10-fold cross-validation evaluation.

Evaluating and tuning the ensemble classifier

In this section, we are going to compute the ROC curves from the test set to check whether the MajorityVoteClassifier generalizes well to unseen data. We should remember that the test set is not to be used for model selection; its only purpose is to report an unbiased estimate of the generalization performance of a classifier system. The code is as follows:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls in zip(all_clf, clf_labels, colors, linestyles):
    # assuming the label of the positive class is 1
    y_pred = clf.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    fpr, tpr, thresholds = roc_curve(y_true=y_test, y_score=y_pred)
    roc_auc = auc(x=fpr, y=tpr)
    plt.plot(fpr, tpr, color=clr, linestyle=ls,
             label='%s (auc = %0.2f)' % (label, roc_auc))

plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], linestyle='--', color='gray', linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
As we can see in the resulting ROC, the ensemble classifier also performs well on the test set (ROC AUC = 0.86). Before we tune the individual classifier parameters for ensemble classification, let's call the get_params method to get a basic idea of how we can access the individual parameters inside a GridSearch object:
mv_clf.get_params()
Based on the values returned by the get_params method, we now know how to access the individual classifier's attributes. Let's now tune the decision tree depth via a grid search for demonstration purposes. The code is as follows:
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed in scikit-learn 0.20

params = {'decisiontreeclassifier__max_depth': [1, 2],
          'pipeline__clf__n_neighbors': [5, 15, 20]}
grid = GridSearchCV(estimator=mv_clf, param_grid=params,
                    cv=10, scoring='roc_auc')
grid.fit(X_train, y_train)

# grid.grid_scores_ was removed in scikit-learn 0.20; use cv_results_ instead
for mean_score, std_score, params_ in zip(grid.cv_results_['mean_test_score'],
                                          grid.cv_results_['std_test_score'],
                                          grid.cv_results_['params']):
    print("%0.3f+/-%0.2f %r" % (mean_score, std_score / 2, params_))
After the grid search has completed, we can print the different hyperparameter value combinations and the average ROC AUC scores computed via 10-fold cross-validation. The code is as follows:
print('Best parameters: %s' % grid.best_params_)
print('ROC AUC: %.2f' % grid.best_score_)  # the grid searched with scoring='roc_auc', not accuracy
As we can see, we get the best cross-validation results when we choose a higher n_neighbors (n = 20), whereas the tree depth does not seem to affect performance at all, suggesting that a decision stump is sufficient to separate the data. To remind ourselves that it is bad practice to use the test dataset more than once for model evaluation, we are not going to estimate the generalization performance of the tuned hyperparameters in this section. We will move on swiftly to an alternative approach for ensemble learning: bagging.

Bagging -- Building an ensemble of classifiers from bootstrap samples

Bagging is an ensemble learning technique that is closely related to the MajorityVoteClassifier that we implemented in the previous section, as illustrated in the following diagram:

<img src="images/bagging.png">

However, instead of using the same training set to fit the individual classifiers in the ensemble, we draw bootstrap samples (random samples with replacement) from the initial training set, which is why bagging is also known as bootstrap aggregating.
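The bootstrap idea is easy to see in a toy example. The sketch below (illustrative names, plain NumPy) draws a random sample with replacement from ten pretend training-sample indices; some indices repeat while others are left out entirely, and the left-out "out-of-bag" samples are the ones bagging can use for free validation:

```python
import numpy as np

rng = np.random.RandomState(1)
idx = np.arange(10)          # pretend these are training-sample indices

# A bootstrap sample: same size as the original, drawn WITH replacement
boot = rng.choice(idx, size=idx.shape[0], replace=True)
print(boot)                  # some indices repeat, others never appear

# "Out-of-bag" samples: indices never drawn into this bootstrap sample
oob = np.setdiff1d(idx, boot)
print(oob)
```

Each tree in a bagging ensemble is fitted on its own such bootstrap sample, which decorrelates the individual learners and reduces the variance of their combined prediction.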
X = df[features]
y = df['num']
X.shape, y.shape
Next we encode the class labels into binary format and split the dataset into a 60 percent training set and a 40 percent test set, respectively:
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20

le = LabelEncoder()
y = le.fit_transform(y)
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.40, random_state=1)
A BaggingClassifier algorithm is already implemented in scikit-learn, which we can import from the ensemble submodule. Here, we will use an unpruned decision tree as the base classifier and create an ensemble of 500 decision trees fitted on different bootstrap samples of the training dataset:
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

tree = DecisionTreeClassifier(criterion='entropy', max_depth=None)
bag = BaggingClassifier(base_estimator=tree,
                        n_estimators=500,
                        max_samples=1.0,
                        max_features=1.0,
                        bootstrap=True,
                        bootstrap_features=False,
                        n_jobs=1,
                        random_state=1)
Next we will calculate the accuracy score of the prediction on the training and test dataset to compare the performance of the bagging classifier to the performance of a single unpruned decision tree. Based on the accuracy values, the unpruned decision tree predicts all class labels of the training samples correctly; however, the substantially lower test accuracy indicates high variance (overfitting) of the model:
from sklearn.metrics import accuracy_score

tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test))

bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('Bagging train/test accuracies %.3f/%.3f' % (bag_train, bag_test))
Although the training accuracies of the decision tree and the bagging classifier are similar on the training set (both 1.0), we can see that the bagging classifier has slightly better generalization performance as estimated on the test set. In practice, more complex classification tasks and high-dimensional datasets can easily lead to overfitting in single decision trees, and this is where the bagging algorithm can really play to its strengths. Finally, we should note that the bagging algorithm can be an effective approach to reduce the variance of a model. However, bagging is ineffective in reducing model bias, which is why we want to choose an ensemble of classifiers with low bias, for example, unpruned decision trees.

Leveraging weak learners via adaptive boosting

In this section about ensemble methods, we will discuss boosting with a special focus on its most common implementation, AdaBoost (short for Adaptive Boosting). In boosting, the ensemble consists of very simple base classifiers, also often referred to as weak learners, that have only a slight performance advantage over random guessing. A typical example of a weak learner is a decision tree stump. The key concept behind boosting is to focus on training samples that are hard to classify, that is, to let the weak learners subsequently learn from misclassified training samples to improve the performance of the ensemble. In the original formulation of boosting, in contrast to bagging, the algorithm uses random subsets of training samples drawn from the training dataset without replacement. The original boosting procedure is summarized in four key steps as follows:

1. Draw a random subset of training samples d1 without replacement from the training set D to train a weak learner C1.
2. Draw a second random training subset d2 without replacement from the training set and add 50 percent of the samples that were previously misclassified to train a weak learner C2.
3. Find the training samples d3 in the training set D on which C1 and C2 disagree to train a third weak learner C3.
4. Combine the weak learners C1, C2, and C3 via majority voting.

Boosting can lead to a decrease in bias as well as variance compared to bagging models. In practice, however, boosting algorithms such as AdaBoost are also known for their high variance, that is, the tendency to overfit the training data. In contrast to the original boosting procedure described here, AdaBoost uses the complete training set to train the weak learners, where the training samples are reweighted in each iteration to build a strong classifier that learns from the mistakes of the previous weak learners in the ensemble. Take a look at the following figure to get a better grasp of the basic concept behind AdaBoost:

<img src="images/adaboost.png">

To walk through the AdaBoost illustration step by step, we start with subfigure 1, which represents a training set for binary classification where all training samples are assigned equal weights. Based on this training set, we train a decision stump (shown as a dashed line) that tries to classify the samples of the two classes (triangles and circles) as well as possible by minimizing the cost function (or the impurity score in the special case of decision tree ensembles). For the next round (subfigure 2), we assign a larger weight to the two previously misclassified samples (circles). Furthermore, we lower the weight of the correctly classified samples. The next decision stump will now focus more on the training samples that have the largest weights, that is, the training samples that are supposedly hard to classify. The weak learner shown in subfigure 2 misclassifies three different samples from the circle class, which are then assigned a larger weight, as shown in subfigure 3.
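The reweighting step described above can be sketched numerically. This is a toy illustration of a single AdaBoost round (not the scikit-learn implementation; all arrays are made up) using the classic exponential update: misclassified samples are multiplied by exp(+alpha) and correctly classified ones by exp(-alpha), then the weights are renormalized:

```python
import numpy as np

y_true = np.array([1, 1, -1, -1, 1])          # class labels in {-1, +1}
y_pred = np.array([1, -1, -1, 1, 1])          # a weak learner's predictions
w = np.full(y_true.shape, 1.0 / len(y_true))  # start from uniform weights

err = w[y_true != y_pred].sum()               # weighted error rate (0.4 here)
alpha = 0.5 * np.log((1.0 - err) / err)       # the weak learner's vote weight

# y_true * y_pred is +1 for correct predictions and -1 for mistakes,
# so mistakes get multiplied by exp(+alpha) and up-weighted
w = w * np.exp(-alpha * y_true * y_pred)
w = w / w.sum()                               # renormalize to sum to 1
print(w)
```

After this round the two misclassified samples each carry weight 1/4 while the three correct ones carry 1/6, so the next weak learner concentrates on the mistakes.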
Assuming that our AdaBoost ensemble consists of only three rounds of boosting, we would then combine the three weak learners trained on different reweighted training subsets by a weighted majority vote, as shown in subfigure 4. Skipping to the more practical part, let's now train an AdaBoost ensemble classifier via scikit-learn. We will use the same training subset that we used in the previous section to train the bagging meta-classifier. Via the base_estimator attribute, we will train the AdaBoostClassifier on 500 decision tree stumps:
from sklearn.ensemble import AdaBoostClassifier

tree = DecisionTreeClassifier(criterion='entropy', max_depth=1)
ada = AdaBoostClassifier(base_estimator=tree,
                         n_estimators=500,
                         learning_rate=0.1,
                         random_state=0)

tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f' % (tree_train, tree_test))
As we can see, the decision tree stump seems to underfit the training data, in contrast with the unpruned decision tree that we saw in the previous section.
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('AdaBoost train/test accuracies %.3f/%.3f' % (ada_train, ada_test))
A simple notebook on solving the 1D MT (magnetotelluric) forward problem.
#from IPython.display import Latex
notebooks/MT1DfwdProblem.ipynb
simpeg/simpegmt
mit
Maxwell's equations in 1D are as follows

$i \omega b = - \partial_z e $

$ s = \partial_z(\mu^{-1} b) - \sigma(z) e$

$b(0) = 1 \hspace{1cm} ;\hspace{1cm} b(-\infty) = 0$

where $e = \widehat{\overrightarrow{E}}_x $ and $b = \widehat{\overrightarrow{B}}_y$. In weak form the equations become

$i \omega(b,f) = - (\partial_z e,f)$

$(s,w) = -(\mu^{-1} b, \partial_z w) - (\sigma(z) e, w)$

where $f$ and $w$ are arbitrary functions living in the same discretization space as $b$ and $e$, respectively. We consider $e$ on nodes and $b$ on cell centers. This way the derivative of any nodal function becomes $ (\partial_z u)_k \approx h_{k}^{-1} ( u_{k+\frac{1}{2}} - u_{k-\frac{1}{2}}) + O(h^2) $. In matrix form

$ \partial_z e \approx \textbf{L}^{-1} \textbf{G} \textbf{e} = \begin{bmatrix} h_1^{-1} & & & \\ & h_2^{-1} & & \\ & & \ddots & \\ & & & h_n^{-1} \end{bmatrix}^{(n,n)} \begin{bmatrix} -1 & 1 & & & & \\ & -1 & 1 & & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \end{bmatrix}^{(n,n+1)} \begin{bmatrix} e_1 \\ \\ \vdots \\ \\ e_{n+1} \end{bmatrix}^{(n+1,1)} $

where $ \textbf{L} = diag(\textbf{h}) $ holds the cell sizes and $ \textbf{G}$ is the gradient operator, with the $-1, 1$ entries representing the topology of the mesh, taking the difference between adjacent cells. We need to compute two inner products: one on cell centers and one from nodes to cell centers. The cell-center inner product is $ (b,f) \approx \sum\limits_k h_k \textbf{b}_k \textbf{f}_k + O(h^2) $, and in matrix form $ (b,f) \approx \textbf{b}^T \textbf{M}^f \textbf{f}$ and $ (\mu^{-1} b,f) \approx \textbf{b}^T \textbf{M}_{\mu}^f \textbf{f}$, where $ \textbf{M}_{\mu}^f = diag(\textbf{h} \odot \mu^{-1}) $ and $ \textbf{M}^f = diag(\textbf{h}) $ are the mass matrices.
The nodes-to-cell-centers inner product is

$ (\sigma e, w) \approx \sum\limits_k \frac{h_k \sigma_k}{2} \left( e_{k-\frac{1}{2}} w_{k-\frac{1}{2}} + e_{k+\frac{1}{2}} w_{k+\frac{1}{2}} \right)$

and in matrix form

$ (\sigma e, w ) \approx (\textbf{h} \odot \sigma )^T \textbf{A}_v (\textbf{e} \odot \textbf{w}) = \textbf{w}^T diag(\textbf{A}_v^T (\textbf{h} \odot \sigma)) \textbf{e} $

Here $\odot$ is the pointwise Hadamard product and $\textbf{A}_v$ is the averaging operator/matrix from nodes to cell centers

$ \textbf{A}_v = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} & & \\ & \ddots & \ddots & \\ & & \frac{1}{2} & \frac{1}{2} \end{bmatrix} ^{(n, n+1)} $

The sigma mass matrix is defined as $ \textbf{M}_{\sigma}^{e} = diag(\textbf{A}_v^T (\textbf{h} \odot \sigma)) $. In the MT problem there is no source in the domain, so $ s = 0 $. However, the boundary conditions provide the right-hand side, where

$ (\partial_z (\mu^{-1} b), w ) = - (\mu^{-1} b, \partial_z w ) + (\mu^{-1} b w )\big|_0^{end} $

with $ (\mu^{-1} b w )\big|_0^{end} = \textbf{bc}^T (\textbf{BC}\, \textbf{w}) $. Here $\textbf{BC}$ is a matrix operator that extracts the boundary elements from $ \textbf{w}$ and $\textbf{bc} $ holds the known boundary conditions.
For the 1D case with homogeneous boundary conditions we have

$ \textbf{BC} = \begin{bmatrix} -1 & 0 \\ \vdots & \vdots \\ 0 & 1 \end{bmatrix} $

The weak form is

$ (\mu^{-1} b, \partial_z w) + (\sigma e, w) = (\mu^{-1} b w )\big|_0^{end} $

$ (i \omega b,f) + (\partial_z e , f) = 0 $

Using the above matrix representation we get Maxwell's equations in the following form

$ i \omega \textbf{f}^T \textbf{M}^f \textbf{b} + \textbf{f}^T \textbf{M}^f \textbf{L}^{-1} \textbf{G} \textbf{e} = 0 $

$ \textbf{w}^T \textbf{G}^T \textbf{L}^{-1} \textbf{M}^f_{\mu} \textbf{b} + \textbf{w}^T \textbf{M}_{\sigma}^e \textbf{e} = \textbf{w}^T \textbf{BC}^T \textbf{bc} $

Here we use that $ (\textbf{b},\textbf{f}) \approx \textbf{b}^T \textbf{M}^f \textbf{f} = \textbf{f}^T \textbf{M}^f \textbf{b} $, since $\textbf{M}^f$ is a symmetric diagonal matrix of size $n \times n$ and $\textbf{b}$ and $\textbf{f}$ are vectors of length $n$. We eliminate the testing vectors and get the system of equations to solve

$ i \omega \textbf{b} + \textbf{L}^{-1} \textbf{G} \textbf{e} = 0 $

$ \textbf{G}^T \textbf{L}^{-1} \textbf{M}^f_{\mu} \textbf{b} + \textbf{M}_{\sigma}^e \textbf{e} = \textbf{BC}^T \textbf{bc} $

which we write as the $ \textbf{A} \textbf{x} = \textbf{rhs} $ system that we will solve, where

$ \textbf{A} = \begin{bmatrix} \textbf{G}^T \textbf{L}^{-1} \textbf{M}^f_{\mu} & \textbf{M}_{\sigma}^e \\ i \omega \textbf{I} & \textbf{L}^{-1} \textbf{G} \end{bmatrix} \hspace{1cm} \textbf{x} = \begin{bmatrix} \textbf{b} \\ \textbf{e} \end{bmatrix} \hspace{1cm} \textbf{rhs} = \begin{bmatrix} \textbf{BC}^T \textbf{bc} \\ \textbf{0} \end{bmatrix} $
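The discrete operators above are easy to build by hand. This is a minimal sketch in plain NumPy/SciPy (independent of SimPEG; the mesh size n and cell widths h are made up) of the gradient G, the cell-size matrix L, and the node-to-cell-center averaging matrix Av; note that Av is shaped (n, n+1) so that it maps the n+1 nodal values to the n cell centers:

```python
import numpy as np
import scipy.sparse as sp

n = 4
h = np.ones(n)                      # cell widths h_k
L = sp.diags(h)                     # L = diag(h)
Li = sp.diags(1.0 / h)              # L^{-1}

# G: (n, n+1) difference matrix with -1, 1 on each row
G = sp.diags([-np.ones(n), np.ones(n)], [0, 1], shape=(n, n + 1))

# Av: (n, n+1) averaging from the n+1 nodes to the n cell centers
Av = sp.diags([0.5 * np.ones(n), 0.5 * np.ones(n)], [0, 1], shape=(n, n + 1))

# Sanity check: the discrete derivative of a linear nodal function is constant
e = np.linspace(0.0, 1.0, n + 1)    # nodal values of e(z) = z / n
print(Li @ G @ e)                   # constant slope 1/n = 0.25 everywhere
```

The same G, L, and Av are what SimPEG's mesh.nodalGrad, mesh.hx, and mesh.aveN2CC provide in the code cell below.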
import sys
sys.path.append('C:/GudniWork/Codes/python/simpeg')
import SimPEG as simpeg
import numpy as np
import scipy
import scipy.sparse as sp
We have

$ i \omega b = - \partial_z e \hspace{1cm} ; \hspace{1cm} \partial_z(\mu^{-1} b) - \sigma(z) e = s \hspace{1cm} ;\hspace{1cm} b(0) = 1 \hspace{1cm} ;\hspace{1cm} b(-\infty) = 0 $

To deal with the boundary, we assume that below depth $L$ both $ \sigma $ and $ \mu $ are constant ($ z < - L $). At the boundary we have $ e = c \exp(ikz) $ where $ k = \sqrt{i\omega\mu\sigma} $. Therefore for $ z < - L $ we have that $\omega b - k e = 0 $. We discretize the $e$ field on the nodes and the $b$ field at the cell centers. The system we want to solve is

$\begin{bmatrix} i \omega & \frac{\partial}{\partial z} \\ \frac{1}{\mu} \frac{\partial}{\partial z} & -\sigma \end{bmatrix} \begin{bmatrix} b \\ e \end{bmatrix} = \begin{bmatrix} s_1 \\ s_2 \end{bmatrix}$
# Set up the problem
mu = 4*np.pi*1e-7
eps0 = 8.85e-12

# Frequency
fr = np.array([1e1])  # np.logspace(0, 5, 200) # np.array([2000]) # np.logspace(-4, 5, 82)
omega = 2*np.pi*fr

# Mesh
sig0 = 1e-2
# L = 3*np.sqrt(2/(mu*omega[0]*sig0))
# nn = np.ceil(np.log(0.3*L + 1)/np.log(1.3))
# h = 5*(1.3**(np.arange(nn+1)))
h = np.ones(18)
x0 = np.array([0])
# sig = sig0*np.ones((len(h), 1))
# sig[0:50] = 0.1
# sig[50:100] = 1

# Make the mesh
mesh = simpeg.Mesh.TensorMesh([h], x0)
sig = np.zeros(mesh.nC) + 1e-8
sig[mesh.vectorCCx <= 0] = 1e-2
fr, omega

# Make the operators
G = mesh.nodalGrad
Av = mesh.aveN2CC
Li = scipy.sparse.spdiags(1/mesh.hx, 0, mesh.nNx, mesh.nNx)
Mmu = scipy.sparse.spdiags(mesh.hx/mu, 0, mesh.nCx, mesh.nCx)
Msig = scipy.sparse.spdiags(Av.T.dot(mesh.hx*sig.ravel()), 0, mesh.nNx, mesh.nNx)

# The boundaries
bc_b = np.zeros((mesh.nCx, 1))
bc_b[0] = -1  # Set the top b field to 1
bc_e = np.zeros((mesh.nNx, 1))
# Make the sparse right-hand side
bc = sp.vstack((bc_b, bc_e))

b = np.empty((mesh.nCx, len(omega)), dtype=np.complex64)
e = np.empty((mesh.nNx, len(omega)), dtype=np.complex64)
# Loop over all the frequencies
for nrOm, om in enumerate(omega):
    # Left-hand side
    A = sp.vstack((sp.hstack((-G.conj().T.dot(Mmu), -Msig)),
                   sp.hstack((1j*om*scipy.sparse.identity(mesh.nCx), G))))
    # A = A.tocsr
    # Solve the system
    bef = scipy.sparse.linalg.spsolve(A, bc)
    # Sort the output
    b[:, nrOm] = bef[0:mesh.nCx]
    e[:, nrOm] = bef[mesh.nCx::]

import matplotlib.pyplot as plt
# Plot the solution
z = e[0, :]/(b[0, :]/mu)
app_res = ((1./(8e-7*np.pi**2))/fr)*np.abs(z)**2
app_phs = np.arctan(z.imag/z.real)*(180/np.pi)
ax_res = plt.subplot(2, 1, 1)
ax_res.loglog(fr, app_res)
ax_phs = plt.subplot(2, 1, 2)
ax_phs.semilogx(fr, app_phs)
plt.show()

# Calculate the impedance
z = e[0, :]/(b[0, :]/mu)
z
app_res
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
from __future__ import division, print_function, unicode_literals

try:
    # %tensorflow_version only exists in Colab.
    %tensorflow_version 1.x
except Exception:
    pass

import numpy as np
import tensorflow as tf
from tensorflow import keras
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Checklist

* Do not run TensorFlow on the GPU.
* Beware of multithreading, and make TensorFlow single-threaded.
* Set all the random seeds.
* Eliminate any other source of variability.

Do Not Run TensorFlow on the GPU

Some operations (like tf.reduce_sum()) favor performance over precision, and their outputs may vary slightly across runs. To get reproducible results, make sure TensorFlow runs on the CPU:
import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Beware of Multithreading

Because floats have limited precision, the order of execution matters:
2. * 5. / 7.
2. / 7. * 5.
You should make sure TensorFlow runs your ops on a single thread:
config = tf.ConfigProto(intra_op_parallelism_threads=1,
                        inter_op_parallelism_threads=1)

with tf.Session(config=config) as sess:
    # ... this will run single threaded
    pass
The thread pools for all sessions are created when you create the first session, so all sessions in the rest of this notebook will be single-threaded:
with tf.Session() as sess:
    # ... also single-threaded!
    pass
Set all the random seeds!

Python's built-in hash() function
print(set("Try restarting the kernel and running this again"))
print(set("Try restarting the kernel and running this again"))
Since Python 3.3, the result will be different every time, unless you start Python with the PYTHONHASHSEED environment variable set to 0:

```shell
PYTHONHASHSEED=0 python
```

```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
>>> exit()
```

```shell
PYTHONHASHSEED=0 python
```

```pycon
>>> print(set("Now the output is stable across runs"))
{'n', 'b', 'h', 'o', 'i', 'a', 'r', 't', 'p', 'N', 's', 'c', ' ', 'l', 'e', 'w', 'u'}
```

Alternatively, you could set this environment variable system-wide, but that's probably not a good idea, because this automatic randomization was introduced for security reasons.

Unfortunately, setting the environment variable from within Python (e.g., using os.environ["PYTHONHASHSEED"]="0") will not work, because Python reads it upon startup. For Jupyter notebooks, you have to start the Jupyter server like this:

```shell
PYTHONHASHSEED=0 jupyter notebook
```
if os.environ.get("PYTHONHASHSEED") != "0":
    raise Exception("You must set PYTHONHASHSEED=0 when starting the Jupyter server to get reproducible results.")
Python Random Number Generators (RNGs)
import random

random.seed(42)
print(random.random())
print(random.random())
print()
random.seed(42)
print(random.random())
print(random.random())
NumPy RNGs
import numpy as np

np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
print()
np.random.seed(42)
print(np.random.rand())
print(np.random.rand())
TensorFlow RNGs

TensorFlow's behavior is more complex because of two things:

* you create a graph, and then you execute it. The random seed must be set before you create the random operations.
* there are two seeds: one at the graph level, and one at the individual random operation level.
import tensorflow as tf

tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])

with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())

print()

with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())
Every time you reset the graph, you need to set the seed again:
tf.reset_default_graph()
tf.set_random_seed(42)
rnd = tf.random_uniform(shape=[])

with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())

print()

with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())
If you create your own graph, it will ignore the default graph's seed:
tf.reset_default_graph()
tf.set_random_seed(42)

graph = tf.Graph()
with graph.as_default():
    rnd = tf.random_uniform(shape=[])

with tf.Session(graph=graph):
    print(rnd.eval())
    print(rnd.eval())

print()

with tf.Session(graph=graph):
    print(rnd.eval())
    print(rnd.eval())
You must set its own seed:
graph = tf.Graph()
with graph.as_default():
    tf.set_random_seed(42)
    rnd = tf.random_uniform(shape=[])

with tf.Session(graph=graph):
    print(rnd.eval())
    print(rnd.eval())

print()

with tf.Session(graph=graph):
    print(rnd.eval())
    print(rnd.eval())
If you set the seed after the random operation is created, the seed has no effect:
tf.reset_default_graph()

rnd = tf.random_uniform(shape=[])

tf.set_random_seed(42)  # BAD, NO EFFECT!
with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())

print()

tf.set_random_seed(42)  # BAD, NO EFFECT!
with tf.Session() as sess:
    print(rnd.eval())
    print(rnd.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
A note about operation seeds

You can also set a seed for each individual random operation. When you do, it is combined with the graph seed into the final seed used by that op. The following table summarizes how this works:

| Graph seed | Op seed | Resulting seed                 |
|------------|---------|--------------------------------|
| None       | None    | Random                         |
| graph_seed | None    | f(graph_seed, op_index)        |
| None       | op_seed | f(default_graph_seed, op_seed) |
| graph_seed | op_seed | f(graph_seed, op_seed)         |

f() is a deterministic function, and op_index = graph._last_id, so when there is a graph seed, different random ops without op seeds will have different outputs. However, each of them will have the same sequence of outputs at every run.

In eager mode, there is a global seed instead of a graph seed (since there is no graph in eager mode).
tf.reset_default_graph()

rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])

with tf.Session() as sess:
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())

print()

with tf.Session() as sess:
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
In the following example, you may think that all random ops will have the same random seed, but rnd3 will actually have a different seed:
tf.reset_default_graph()
tf.set_random_seed(42)

rnd1 = tf.random_uniform(shape=[], seed=42)
rnd2 = tf.random_uniform(shape=[], seed=42)
rnd3 = tf.random_uniform(shape=[])

with tf.Session() as sess:
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())

print()

with tf.Session() as sess:
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
    print(rnd1.eval())
    print(rnd2.eval())
    print(rnd3.eval())
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Estimators API

Tip: in a Jupyter notebook, you probably want to set the random seeds regularly so that you can come back and run the notebook from there (instead of from the beginning) and still get reproducible outputs.
random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
If you use the Estimators API, make sure to create a RunConfig and set its tf_random_seed, then pass it to the constructor of your estimator:
my_config = tf.estimator.RunConfig(tf_random_seed=42)

feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
                                     feature_columns=feature_cols,
                                     config=my_config)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Let's try it on MNIST:
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data()
X_train = X_train.astype(np.float32).reshape(-1, 28 * 28) / 255.0
y_train = y_train.astype(np.int32)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Unfortunately, numpy_input_fn does not allow us to set the seed when shuffle=True, so we must shuffle the data ourselves and set shuffle=False.
indices = np.random.permutation(len(X_train))
X_train_shuffled = X_train[indices]
y_train_shuffled = y_train[indices]

input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"X": X_train_shuffled}, y=y_train_shuffled,
    num_epochs=10, batch_size=32, shuffle=False)
dnn_clf.train(input_fn=input_fn)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
The final loss should be exactly 0.46282205. Instead of using the numpy_input_fn() function (which cannot reproducibly shuffle the dataset at each epoch), you can create your own input function using the Data API and set its shuffling seed:
def create_dataset(X, y=None, n_epochs=1, batch_size=32,
                   buffer_size=1000, seed=None):
    dataset = tf.data.Dataset.from_tensor_slices(({"X": X}, y))
    dataset = dataset.repeat(n_epochs)
    dataset = dataset.shuffle(buffer_size, seed=seed)
    return dataset.batch(batch_size)

input_fn = lambda: create_dataset(X_train, y_train, seed=42)

random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)

my_config = tf.estimator.RunConfig(tf_random_seed=42)
feature_cols = [tf.feature_column.numeric_column("X", shape=[28 * 28])]
dnn_clf = tf.estimator.DNNClassifier(hidden_units=[300, 100], n_classes=10,
                                     feature_columns=feature_cols,
                                     config=my_config)
dnn_clf.train(input_fn=input_fn)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
The final loss should be exactly 1.0556093.

Keras API

If you use the Keras API, all you need to do is set the random seed any time you clear the session:
keras.backend.clear_session()

random.seed(42)
np.random.seed(42)
tf.set_random_seed(42)

model = keras.models.Sequential([
    keras.layers.Dense(300, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(loss="sparse_categorical_crossentropy", optimizer="sgd",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
You should get exactly 97.16% accuracy on the training set at the end of training.

Eliminate other sources of variability

For example, os.listdir() returns file names in an order that depends on how the files were indexed by the file system:
for i in range(10):
    with open("my_test_foo_{}".format(i), "w"):
        pass

[f for f in os.listdir() if f.startswith("my_test_foo_")]

for i in range(10):
    with open("my_test_bar_{}".format(i), "w"):
        pass

[f for f in os.listdir() if f.startswith("my_test_bar_")]
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
You should sort the file names before you use them:
filenames = os.listdir()
filenames.sort()
[f for f in filenames if f.startswith("my_test_foo_")]

for f in os.listdir():
    if f.startswith("my_test_foo_") or f.startswith("my_test_bar_"):
        os.remove(f)
extra_tensorflow_reproducibility.ipynb
ageron/ml-notebooks
apache-2.0
Scatter plots

Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
* Generate random data using np.random.randn.
* Style the markers (color, size, shape, alpha) appropriately.
* Include an x and y label and title.
a = np.random.randn(2, 10)
x = a[0, :]
y = a[1, :]

plt.scatter(x, y, color='red')
plt.grid(True)
plt.box(False)
plt.xlabel('random x values')
plt.ylabel('random y values')
plt.title('TITLE')
assignments/assignment04/MatplotlibExercises.ipynb
phungkh/phys202-2015-work
mit
Histogram

Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
* Generate random data using np.random.randn.
* Figure out how to set the number of histogram bins and other style options.
* Include an x and y label and title.
a = np.random.randn(1, 10)
x = a[0, :]

plt.hist(x, bins=15, histtype='bar', color='green')
plt.title('MY HISTOGRAM')
plt.xlabel('RANDOM X VALUES')
plt.ylabel('FREQUENCY')
assignments/assignment04/MatplotlibExercises.ipynb
phungkh/phys202-2015-work
mit
Section 1: Magnetic Elements Modeling

The very first step is to correctly model the physical elements, one by one. In the beamline package, magnet component classes can be found in the element module; e.g. a quadrupole is abstracted in the ElementQuad class, a charge in ElementCharge, etc. They all inherit from MagBlock. Common or shared information/configuration for all these elements can be predefined in the MagBlock class, e.g. facility name, time stamp, author, etc. Common information is presumed not to change, so define it in the first step (see STEP 1).

To set an element's configuration, the method setConf(config, type) can be used, in which 'config' is either a configuration string with a format like "k1=10.0,l=0.1" or a python dictionary like "{'k1': 10.0, 'l': 0.1}", and 'type' is the configuration type to be configured: 'comm' [common configuration], 'ctrl' [control configuration], 'simu' [simulation configuration], 'misc' [miscellaneous configuration] or 'all' [all configurations].

Unit conversions between EPICS PV values and real physical variables are usually required, so in the design stage the method unitTrans(inval, direction='+', transfun=None) was created to handle this issue. One could define the conversion function at the class level, but that approach is limited to the case where all elements of the same type share the same conversion function, which is not proper in the real situation. Thus, transfun was created as an input function parameter for the unitTrans method: a user-defined function for each element object.

STEP 1: define common information
# comminfo = {'DATE': '2016-03-22', 'AUTHOR': 'Tong Zhang'}
comminfo = 'DATE = 2016-03-24, AUTHOR = Tong Zhang'
beamline.MagBlock.setCommInfo(comminfo)
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
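As an aside, a "k1=10.0,l=0.1"-style string accepted by setConf could be turned into a dictionary with a few lines of Python. The parse_conf helper below is purely illustrative — it is not the beamline package's actual parser:

```python
# Hypothetical helper illustrating how a setConf()-style configuration
# string such as "k1 = 10, l = 0.1" could be parsed into a dict.
# Values that cannot be read as numbers (e.g. "pi 4 /") stay as strings.
def parse_conf(config):
    if isinstance(config, dict):   # dicts are accepted as-is
        return dict(config)
    conf = {}
    for item in config.split(','):
        key, _, value = item.partition('=')
        try:
            conf[key.strip()] = float(value)
        except ValueError:
            conf[key.strip()] = value.strip()
    return conf

print(parse_conf("k1 = 10, l = 0.1"))
```

Either form (string or dict) then yields the same internal representation.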
STEP 2: create elements
# charge: a virtual element for the real accelerator, but a must for elegant tracking
chconf = {'total': 1e-9}
q = beamline.ElementCharge(name='q', config=chconf)

# csrcsben, use elegant element name
# simconf holds complementary configurations for elegant tracking;
# set them with the setConf(simconf, type='simu') method
simconf = {"edge1_effects": 1, "edge2_effects": 1, "hgap": 0.015,
           "csr": 0, "nonlinear": 1, "n_kicks": 100,
           "integration_order": 4, "bins": 512, "sg_halfwidth": 1,
           "block_csr": 0, 'l': 0.5}
angle = 0.1  # rad

B1 = beamline.ElementCsrcsben(name='b1', config={'angle': angle, 'e1': 0, 'e2': angle})
B1.setConf(simconf, type='simu')
B2 = beamline.ElementCsrcsben(name='b2', config={'angle': -angle, 'e1': -angle, 'e2': 0})
B3 = beamline.ElementCsrcsben(name='b3', config={'angle': -angle, 'e1': 0, 'e2': -angle})
B4 = beamline.ElementCsrcsben(name='b4', config={'angle': angle, 'e1': angle, 'e2': 0})
B2.setConf(simconf, type='simu')
B3.setConf(simconf, type='simu')
B4.setConf(simconf, type='simu')

# drift
D0 = beamline.ElementDrift(name='D0', config="l=1.0")

# quad
# user-defined unit conversion function:
# direction '+' converts from EPICS PV value to physical value,
# direction '-' converts from physical value to EPICS PV value
def fUnitTrans(val, direction):
    if direction == '+':
        return val * 4.0
    else:
        return val * 0.25

# create instance and apply user-defined unit conversion function
Q1 = beamline.ElementQuad(name='Q1', config="k1 = 10, l = 0.1")
simuconf = {'tilt': "pi 4 /"}
Q1.setConf(simuconf, type='simu')

# control configurations for Q1
ctrlconf = {"k1": {'pv': "sxfel:lattice:Q09", 'val': ''}}
Q1.setConf(ctrlconf, type='ctrl')
Q1.transfun = fUnitTrans  # apply unit conversion function

# print 'online' configuration; 'online' replaces simulation fields with control fields
print Q1.dumpConfig(type='online')
#Q1.printConfig(type='simu')
Q1.printConfig(type='all')
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
STEP 3: make lattice beamline
# METHOD 1: CANNOT get all configurations
# uses the 'ElementBeamline' class of the 'element' module

# beamline
latele = [obj.name for obj in [q, D0, Q1, D0, B1, D0, B2, D0, D0, B3, D0, B4, D0, Q1, D0]]
latstr = '(' + ' '.join(latele) + ')'
bl = beamline.ElementBeamline(name='bl', config={'lattice': latstr})
#bl = beamline.ElementBeamline(name='bl1', config="lattice = (q d0 q1)")
#bl.setConf("lattice = (d,q,b)", type='simu')
#print bl

# METHOD 2: CAN get all configurations
# uses the 'Models' class of the 'models' module
# change mode to 'simu' to start simulation mode;
# 'online' mode will trigger EPICS get/put processes when control
# configurations are found in the elements' configuration
latline_online = beamline.Models(name='blchi', mode='online')
qline = (D0, Q1, D0)
chi = (B1, D0, B2, D0, D0, B3, D0, B4)
latline_online.addElement(q, qline, chi, qline)

# show defined elements number
#print beamline.MagBlock.sumObjNum()

# get 'b1' element from created model
eleb1 = latline_online.getElementsByName('b1')
print eleb1.name

# change b1 configuration, e.g. angle
eleb1.setConf('angle=0.5', type='simu')
eleb1.printConfig()

# print out all added elements
latline_online.printAllElements()

# get configuration of 'Q1'
print latline_online.getAllConfig(fmt='dict')['Q1']

eleb1.printConfig()
eleQ1 = latline_online.getElementsByName('Q1')
eleQ1.printConfig(type='all')

# update Q1's EPICS PV value
latline_online.putCtrlConf(eleQ1, 'k1', 2.5, type='real')
eleQ1.printConfig(type='all')

latline_online.getAllConfig(fmt='dict')
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
Section 2: Lattice modeling

STEP 4: create a Lattice instance and generate the input files required for simulation
eleb1.setConf('angle=0.1', type='simu')

# e.g. '.lte' for elegant tracking; requires all configurations
latins = beamline.Lattice(latline_online.getAllConfig())
latfile = os.path.join(os.getcwd(), 'tracking/test.lte')
latins.generateLatticeFile(latline_online.name, latfile)

latins.dumpAllElements()
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
STEP 5: simulation with generated lattice file
simpath = os.path.join(os.getcwd(), 'tracking')
elefile = os.path.join(simpath, 'test.ele')
h5out = os.path.join(simpath, 'tpout.h5')

elesim = beamline.Simulator()
elesim.setMode('elegant')
elesim.setScript('runElegant.sh')
elesim.setExec('elegant')
elesim.setPath(simpath)
elesim.setInputfiles(ltefile=latfile, elefile=elefile)
elesim.doSimulation()

# data columns can be extracted from simulation output files, to memory or h5 files
data_tp = elesim.getOutput(file='test.out', data=('t', 'p'))  # , dump=h5out)
data_sSx = elesim.getOutput(file='test.sig', data=('s', 'Sx'))
data_setax = elesim.getOutput(file='test.twi', data=('s', 'etax'))
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
visualize data
import matplotlib.pyplot as plt
# %matplotlib inline

plt.plot(data_tp[:, 0], data_tp[:, 1], '.')
plt.xlabel('$t\,[s]$')
plt.ylabel('$\gamma$')

plt.plot(data_sSx[:, 0], data_sSx[:, 1], '-')
plt.ylabel('$\sigma_x\,[\mu m]$')
plt.xlabel('$s\,[m]$')

plt.plot(data_setax[:, 0], data_setax[:, 1], '-')
plt.ylabel('$\eta_{x}\,[m]$')
plt.xlabel('$s\,[m]$')

# Scan parameter: final Dx v.s. angle of B1
import numpy as np

dx = []
thetaArray = np.linspace(0.05, 0.3, 20)
for theta in thetaArray:
    eleb1.setConf({'angle': theta}, type='simu')
    latins = beamline.Lattice(latline_online.getAllConfig())
    latins.generateLatticeFile(latline_online.name, latfile)
    elesim.doSimulation()
    data = elesim.getOutput(file='test.twi', data=(['etax']))
    dx.append(data[-1])

dxArray = np.array(dx)
plt.plot(thetaArray, dxArray, 'r')
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
Lattice layout visualization
for e in latline_online._lattice_eleobjlist:
    print e.name, e.__class__.__name__

ptches, xr, yr = latline_online.draw(showfig=True)
ptches
tests/Usage Demo for Python Package beamline.ipynb
Archman/beamline
mit
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19000. Given these sizes, it should be possible to train models quickly on any machine.
# course-hardcoded url
url = 'http://commondatastorage.googleapis.com/books1000/'

last_percent_reported = None

def download_progress_hook(count, blockSize, totalSize):
    """A hook to report the progress of a download. This is mostly intended
    for users with slow internet connections. Reports every 5% change in
    download progress.
    """
    global last_percent_reported
    percent = int(count * blockSize * 100 / totalSize)

    if last_percent_reported != percent:
        if percent % 5 == 0:
            sys.stdout.write("%s%%" % percent)
            sys.stdout.flush()
        else:
            sys.stdout.write(".")
            sys.stdout.flush()
        last_percent_reported = percent

def maybe_download(filename, expected_bytes, force=False):
    """Download a file if not present, and make sure it's the right size."""
    # Create path where the data file will be stored;
    # get working directory and go to the parent - distro-agnostic code
    dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
    # go to data directory
    dpath = os.path.join(dpath, 'data')
    # get filepath
    fpath = os.path.join(dpath, filename)

    # Download file if needed
    if force or not os.path.exists(fpath):
        print('Attempting to download:', filename)
        filename, _ = urlretrieve(url + filename, filename,
                                  reporthook=download_progress_hook)
        print('\nDownload Complete!')
        # move new file from working directory to data location
        cpath = os.path.join(os.getcwd(), filename)
        os.rename(cpath, fpath)

    # check file (existing if it exists, new if not)
    statinfo = os.stat(fpath)

    # Verify file size.
    if statinfo.st_size == expected_bytes:
        print('Found and verified', filename)
    else:
        raise Exception(
            'Failed to verify ' + filename +
            '. Can you get to it with a browser or download it again?')
    return (filename)

train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Extract the dataset from the compressed .tar.gz file. This should give you a set of directories, labelled A through J.
num_classes = 10
np.random.seed(133)

def maybe_extract(filename, force=False):
    # get new dir name
    dirn = os.path.splitext(os.path.splitext(filename)[0])[0]  # remove .tar.gz
    # Create data directory path
    dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
    dpath = os.path.join(dpath, 'data')
    # Create dir to unzip data
    dirn = os.path.join(dpath, dirn)
    # Create zipped file path
    fpath = os.path.join(dpath, filename)

    if os.path.isdir(dirn) and not force:
        # You may override by setting force=True.
        print('%s already present - Skipping extraction of %s.' % (dirn, filename))
    else:
        print('Extracting data for %s. This may take a while. Please wait.' % dirn)
        tar = tarfile.open(fpath)
        sys.stdout.flush()
        # set path so data are extracted within the data folder
        tar.extractall(path=dpath)
        tar.close()

    data_folders = [
        os.path.join(dirn, d) for d in sorted(os.listdir(dirn))
        if os.path.isdir(os.path.join(dirn, d))]
    if len(data_folders) != num_classes:
        raise Exception(
            'Expected %d folders, one per class. Found %d instead.' % (
                num_classes, len(data_folders)))
    print("Data folders list:")
    print(data_folders)
    return (data_folders)

train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 1

Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
def show_random_images(numi):
    """ Function to display a specified number of random images
    from the extracted dataset.

    Arguments:
        numi: Integer, how many images to show.
    """
    # First let's create a list of all the files.
    # Create data directory path
    dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
    dpath = os.path.join(dpath, 'data')
    # notMNIST_small directory:
    dsmall = os.path.join(dpath, 'notMNIST_small')
    # notMNIST_large directory:
    dlarge = os.path.join(dpath, 'notMNIST_large')

    # create $numi random image paths
    name1 = []
    it1 = 0
    while it1 < numi:
        # select random notMNIST
        rpath0 = random.choice([dlarge, dsmall])
        # select random letter
        rpath1 = random.choice(["A/", "B/", "C/", "D/", "E/",
                                "F/", "G/", "H/", "I/", "J/"])
        # join them
        rpath = os.path.join(rpath0, rpath1)
        # select random image from files
        onlyfiles = [fi for fi in os.listdir(rpath)
                     if os.path.isfile(os.path.join(rpath, fi))]
        name2 = random.choice(onlyfiles)
        # add that random name to its path
        name2 = os.path.join(rpath, name2)
        # add it to the list of images
        name1.append(name2)
        it1 += 1

    for it2 in name1:
        print("Showing Image from path:\n" + it2)
        im1 = Image(filename=(it2))
        display(im1)

# show me 10 images
show_random_images(10)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size. We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable; we'll just skip them.
### Image preprocessing happening in this step !!! ###

image_size = 28  # Pixel width and height.
pixel_depth = 255.0  # Number of levels per pixel.

def load_letter(folder, min_num_images):
    """Load the data for a single letter label."""
    image_files = os.listdir(folder)
    dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
                         dtype=np.float32)
    print(folder)
    num_images = 0
    for image in image_files:
        image_file = os.path.join(folder, image)
        try:
            # read image as array:
            # https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.ndimage.imread.html
            # the code below also shifts the mean towards 0 by scaling pixel
            # values into [-0.5, 0.5]; this assumes the 255 pixel levels are
            # roughly uniformly populated
            image_data = (ndimage.imread(image_file).astype(float) -
                          pixel_depth / 2) / pixel_depth
            if image_data.shape != (image_size, image_size):
                raise Exception('Unexpected image shape: %s' % str(image_data.shape))
            dataset[num_images, :, :] = image_data
            num_images = num_images + 1
        except IOError as e:
            print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')

    dataset = dataset[0:num_images, :, :]
    if num_images < min_num_images:
        raise Exception('Many fewer images than expected: %d < %d' %
                        (num_images, min_num_images))

    print('Full dataset tensor:', dataset.shape)
    print('Mean:', np.mean(dataset))
    print('Standard deviation:', np.std(dataset))
    return (dataset)

def maybe_pickle(data_folders, min_num_images_per_class, force=False):
    dataset_names = []
    for folder in data_folders:
        set_filename = folder + '.pickle'
        dataset_names.append(set_filename)
        if os.path.exists(set_filename) and not force:
            # You may override by setting force=True.
            print('%s already present - Skipping pickling.' % set_filename)
        else:
            print('Pickling %s.' % set_filename)
            dataset = load_letter(folder, min_num_images_per_class)
            try:
                with open(set_filename, 'wb') as f:
                    pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
            except Exception as e:
                print('Unable to save data to', set_filename, ':', e)
    return (dataset_names)

train_datasets = maybe_pickle(train_folders, 50000)
test_datasets = maybe_pickle(test_folders, 1800)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
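A quick standalone check of the normalization arithmetic used above: the formula (x - pixel_depth/2) / pixel_depth maps pixel values from [0, 255] into [-0.5, 0.5]. Note the mean is only exactly zero when all pixel levels are equally populated, which real images are not — hence "approximately zero mean":

```python
import numpy as np

pixel_depth = 255.0
x = np.arange(256.0)                          # all possible pixel values
scaled = (x - pixel_depth / 2) / pixel_depth  # same formula as in load_letter

# for a uniform spread of pixel values the range is [-0.5, 0.5] and the mean is ~0
print(scaled.min(), scaled.max(), scaled.mean())
```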
Problem 2

Let's verify that the data still looks good. Display a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
## Open a random image from the pickled files.

def show_rnd_pkl_image():
    """Function that shows a random pickled image."""
    # First let's create a list of all the files.
    # Create data directory path
    dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
    dpath = os.path.join(dpath, 'data')
    # notMNIST_small directory:
    dsmall = os.path.join(dpath, 'notMNIST_small')
    # notMNIST_large directory:
    dlarge = os.path.join(dpath, 'notMNIST_large')

    # Find all pickle files in each directory.
    # http://stackoverflow.com/a/3215392
    # Create a list of all notMNIST_small pickles
    lsmall = glob.glob(dsmall + '/*.pickle')
    # Create a list of all notMNIST_large pickles
    llarge = glob.glob(dlarge + '/*.pickle')

    # Pick a random pickle to load (either large or small!)
    rpklfile = random.choice([lsmall, llarge])
    rpklfile = random.choice(rpklfile)
    # verify randomness
    print(rpklfile)

    with open(rpklfile, 'rb') as rf:
        imgPkl = pickle.load(rf)
    plt.imshow(random.choice(list(imgPkl)))

show_rnd_pkl_image()
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 3

Another check: we expect the data to be balanced across classes. Verify that.
def disp_number_images(data_folders):
    for folder in data_folders:
        pickle_filename = folder + '.pickle'
        try:
            with open(pickle_filename, 'rb') as f:
                dataset = pickle.load(f)
        except Exception as e:
            print('Unable to read data from', pickle_filename, ':', e)
            return
        print('Number of images in ', folder, ' : ', len(dataset))

disp_number_images(train_folders)
disp_number_images(test_folders)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9. Also create a validation dataset for hyperparameter tuning.
def make_arrays(nb_rows, img_size):
    if nb_rows:
        dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
        labels = np.ndarray(nb_rows, dtype=np.int32)
    else:
        dataset, labels = None, None
    return dataset, labels

def merge_datasets(pickle_files, train_size, valid_size=0):
    num_classes = len(pickle_files)
    valid_dataset, valid_labels = make_arrays(valid_size, image_size)
    train_dataset, train_labels = make_arrays(train_size, image_size)
    vsize_per_class = valid_size // num_classes
    tsize_per_class = train_size // num_classes

    start_v, start_t = 0, 0
    end_v, end_t = vsize_per_class, tsize_per_class
    end_l = vsize_per_class + tsize_per_class
    for label, pickle_file in enumerate(pickle_files):
        try:
            with open(pickle_file, 'rb') as f:
                letter_set = pickle.load(f)
                # let's shuffle the letters to have random validation and training sets
                np.random.shuffle(letter_set)
                if valid_dataset is not None:
                    valid_letter = letter_set[:vsize_per_class, :, :]
                    valid_dataset[start_v:end_v, :, :] = valid_letter
                    valid_labels[start_v:end_v] = label
                    start_v += vsize_per_class
                    end_v += vsize_per_class

                train_letter = letter_set[vsize_per_class:end_l, :, :]
                train_dataset[start_t:end_t, :, :] = train_letter
                train_labels[start_t:end_t] = label
                start_t += tsize_per_class
                end_t += tsize_per_class
        except Exception as e:
            print('Unable to process data from', pickle_file, ':', e)
            raise

    return valid_dataset, valid_labels, train_dataset, train_labels

train_size = 500000
valid_size = 29000
test_size = 18000

valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
    train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)

print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
def randomize(dataset, labels):
    permutation = np.random.permutation(labels.shape[0])
    shuffled_dataset = dataset[permutation, :, :]
    shuffled_labels = labels[permutation]
    return (shuffled_dataset, shuffled_labels)

train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 4

Convince yourself that the data is still good after shuffling!

To be sure that the data are still fine after the merger and the randomization, we will select one item and display the image alongside the label. Note: 0 = A, 1 = B, 2 = C, 3 = D, 4 = E, 5 = F, 6 = G, 7 = H, 8 = I, 9 = J.
pretty_labels = {0: 'A', 1: 'B', 2: 'C', 3: 'D', 4: 'E',
                 5: 'F', 6: 'G', 7: 'H', 8: 'I', 9: 'J'}

def disp_sample_dataset(dataset, labels):
    items = random.sample(range(len(labels)), 8)
    for i, item in enumerate(items):
        plt.subplot(2, 4, i + 1)
        plt.axis('off')
        plt.title(pretty_labels[labels[item]])
        plt.imshow(dataset[item])

disp_sample_dataset(train_dataset, train_labels)
disp_sample_dataset(train_dataset, train_labels)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Finally, let's save the data for later reuse:
# Create data directory path
dpath = os.path.abspath(os.path.join(os.getcwd(), os.pardir))
dpath = os.path.join(dpath, 'data')
# create pickle data file path
pickle_file = os.path.join(dpath, 'notMNIST.pickle')

# save data if not already saved, or if forced
def maybe_save_data(filepath, force=False):
    if force or not os.path.exists(filepath):
        print('Attempting to save data at:\n', filepath)
        try:
            f = open(pickle_file, 'wb')
            save = {
                'train_dataset': train_dataset,
                'train_labels': train_labels,
                'valid_dataset': valid_dataset,
                'valid_labels': valid_labels,
                'test_dataset': test_dataset,
                'test_labels': test_labels,
            }
            pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
            f.close()
            print('Data saved.')
        except Exception as e:
            print('Unable to save data to', pickle_file, ':', e)
            raise
    else:
        print('Data has been processed and saved in a previous run.')
        # Note: the previous run's reshuffling will likely be different
        # from the current run's!
    statinfo = os.stat(pickle_file)
    print('Compressed pickle size:', statinfo.st_size)

maybe_save_data(pickle_file)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Problem 5

By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but it is actually ok if you expect to see training samples recur when you use it. Measure how much overlap there is between training, validation and test samples.

Optional questions:
- What about near duplicates between datasets? (images that are almost identical)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.

In this part, we will explore the datasets and better understand the overlap cases. There are overlaps, but there are also duplicates within the same dataset! Processing time is also critical. We will first use nested loops and matrix comparison, which is slow. Potential enhancement: use a hash function to accelerate and process the whole dataset.
def display_overlap(overlap, source_dataset, target_dataset):
    item = random.choice(list(overlap.keys()))
    imgs = np.concatenate(([source_dataset[item]],
                           target_dataset[overlap[item][0:7]]))
    plt.suptitle(item)
    for i, img in enumerate(imgs):
        plt.subplot(2, 4, i + 1)
        plt.axis('off')
        plt.imshow(img)

def extract_overlap(dataset_1, dataset_2):
    overlap = {}
    for i, img_1 in enumerate(dataset_1):
        for j, img_2 in enumerate(dataset_2):
            if np.array_equal(img_1, img_2):
                if not i in overlap.keys():
                    overlap[i] = []
                overlap[i].append(j)
    return overlap

%time overlap_test_train = extract_overlap(test_dataset[:200], train_dataset)
print('Number of overlaps:', len(overlap_test_train.keys()))
display_overlap(overlap_test_train, test_dataset[:200], train_dataset)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
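The optional hash-based enhancement mentioned above could look roughly like this: hash every image once and compare digests instead of comparing every pair of arrays, turning O(n·m) array comparisons into O(n + m) hashes. This is only a sketch on synthetic data, and hashing raw bytes catches exact duplicates only, not near-duplicates:

```python
import hashlib
import numpy as np

def extract_overlap_hashed(dataset_1, dataset_2):
    """Exact-duplicate detection via per-image SHA-1 digests."""
    # index dataset_2 by digest: digest -> list of row indices
    hashes_2 = {}
    for j, img in enumerate(dataset_2):
        hashes_2.setdefault(hashlib.sha1(img.tobytes()).hexdigest(), []).append(j)
    # look up each image of dataset_1 in that index
    overlap = {}
    for i, img in enumerate(dataset_1):
        h = hashlib.sha1(img.tobytes()).hexdigest()
        if h in hashes_2:
            overlap[i] = hashes_2[h]
    return overlap

# synthetic demo: the second dataset contains an exact copy of image 0
a = np.random.RandomState(0).randn(5, 28, 28).astype(np.float32)
b = np.random.RandomState(1).randn(5, 28, 28).astype(np.float32)
b[3] = a[0]
print(extract_overlap_hashed(a, b))
```

The same function could be dropped in place of extract_overlap over the full datasets; it returns the same {source_index: [target_indices]} shape expected by display_overlap.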
Problem 6

Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.

Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.

Optional question: train an off-the-shelf model on all the data!
def tryLogRegr(sample_size):
    """
    Arguments:
        sample_size: Integer to determine sample size
    """
    regr = LogisticRegression()
    X_test = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
    y_test = test_labels
    X_train = train_dataset[:sample_size].reshape(sample_size, 784)
    y_train = train_labels[:sample_size]
    %time regr.fit(X_train, y_train)
    rscore = regr.score(X_test, y_test)
    print("Mean accuracy of the logistic regression model is: {}".format(rscore))
    pred_labels = regr.predict(X_test)
    disp_sample_dataset(test_dataset, pred_labels)

tryLogRegr(50)
tryLogRegr(100)
tryLogRegr(1000)
tryLogRegr(5000)
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Looks like ~85% is the limit for a linear method Let's try on all of the dataset.
def tryLogRegrAll():
    """
    Function to perform logistic regression on our whole dataset.
    """
    # sag solver works better for bigger datasets
    # n_jobs = -1 uses all available cores (use -2 to leave one core free)
    regr = LogisticRegression(solver='sag', n_jobs = -1)
    X_test = test_dataset.reshape(test_dataset.shape[0], 28 * 28)
    y_test = test_labels
    X_train = train_dataset.reshape(train_dataset.shape[0], 784)
    y_train = train_labels
    %time regr.fit(X_train, y_train)
    rscore = regr.score(X_test, y_test)
    print("Mean accuracy of the logistic regression model is: {}"
          .format(rscore))
    pred_labels = regr.predict(X_test)
    disp_sample_dataset(test_dataset, pred_labels)

tryLogRegrAll()
assignments/1_notmnist.ipynb
Iolaum/ud370
gpl-3.0
Download the sequence data Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. Project SRA: SRP065811 BioProject ID: PRJNA299298 SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP065811
%%bash ## make a new directory for this analysis mkdir -p empirical_7/fastq/
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
For each ERS (individual), get all of the ERR (sequence file accessions).
## IPython code import pandas as pd import numpy as np import urllib2 import os ## open the SRA run table from github url url = "https://raw.githubusercontent.com/"+\ "dereneaton/RADmissing/master/empirical_7_SraRunTable.txt" intable = urllib2.urlopen(url) indata = pd.read_table(intable, sep="\t") ## print first few rows print indata.head() def wget_download(SRR, outdir, outname): """ Python function to get sra data from ncbi and write to outdir with a new name using bash call wget """ ## get output name output = os.path.join(outdir, outname+".sra") ## create a call string call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\ "ftp://ftp-trace.ncbi.nlm.nih.gov/"+\ "sra/sra-instant/reads/ByRun/sra/SRR/"+\ "{}/{}/{}.sra;".format(SRR[:6], SRR, SRR) ## call bash script ! $call
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names.
for ID, SRR in zip(indata.Library_Name_s, indata.Run_s): wget_download(SRR, "empirical_7/fastq/", ID) %%bash ## convert sra files to fastq using fastq-dump tool ## output as gzipped into the fastq directory fastq-dump --gzip -O empirical_7/fastq/ empirical_7/fastq/*.sra ## remove .sra files rm empirical_7/fastq/*.sra %%bash ls -lh empirical_7/fastq/
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Note: The data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33, so we use that in the params file below.
%%bash ## substitute new parameters into file sed -i '/## 1. /c\empirical_7/ ## 1. working directory ' params.txt sed -i '/## 6. /c\TGCAGG ## 6. cutters ' params.txt sed -i '/## 7. /c\20 ## 7. N processors ' params.txt sed -i '/## 9. /c\6 ## 9. NQual ' params.txt sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt sed -i '/## 12./c\4 ## 12. MinCov ' params.txt sed -i '/## 13./c\10 ## 13. maxSH ' params.txt sed -i '/## 14./c\empirical_7_m4 ## 14. output name ' params.txt sed -i '/## 18./c\empirical_7/fastq/*.gz ## 18. data location ' params.txt sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt cat params.txt
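The offset note above can be sanity-checked in a line of Python: a FASTQ quality character decodes to a phred score by subtracting the encoding offset (the helper `phred` is mine):

```python
def phred(ch, offset):
    """Convert one FASTQ quality character to a phred score for a given offset."""
    return ord(ch) - offset

# The same Q40 base is encoded as 'I' under offset 33 (Casava >= 1.8)
# and as 'h' under offset 64 (older Casava, as in this dataset).
q_new = phred('I', 33)
q_old = phred('h', 64)
```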
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Assemble in pyrad
%%bash pyrad -p params.txt -s 234567 >> log.txt 2>&1 %%bash sed -i '/## 12./c\2 ## 12. MinCov ' params.txt sed -i '/## 14./c\empirical_7_m2 ## 14. output name ' params.txt %%bash pyrad -p params.txt -s 7 >> log.txt 2>&1
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Results We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples. Raw data amounts The average number of raw reads per sample is 1.36M.
import pandas as pd ## read in the data s2dat = pd.read_table("empirical_7/stats/s2.rawedit.txt", header=0, nrows=42) ## print summary stats print s2dat["passed.total"].describe() ## find which sample has the most raw data maxraw = s2dat["passed.total"].max() print "\nmost raw data in sample:" print s2dat['sample '][s2dat['passed.total']==maxraw]
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Look at distributions of coverage pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others.
## read in the s3 results s7dat = pd.read_table("empirical_7/stats/s3.clusters.txt", header=0, nrows=42) ## print summary stats print "summary of means\n==================" print s7dat['dpt.me'].describe() ## print summary stats print "\nsummary of std\n==================" print s7dat['dpt.sd'].describe() ## print summary stats print "\nsummary of proportion lowdepth\n==================" print pd.Series(1-s7dat['d>5.tot']/s7dat["total"]).describe() ## find which sample has the greatest depth of retained loci max_hiprop = (s7dat["d>5.tot"]/s7dat["total"]).max() print "\nhighest coverage in sample:" print s7dat['taxa'][s7dat['d>5.tot']/s7dat["total"]==max_hiprop] maxprop =(s7dat['d>5.tot']/s7dat['total']).max() print "\nhighest prop coverage in sample:" print s7dat['taxa'][s7dat['d>5.tot']/s7dat['total']==maxprop] import numpy as np ## print mean and std of coverage for the highest coverage sample with open("empirical_7/clust.85/Dkya_str_m.depths", 'rb') as indat: depths = np.array(indat.read().strip().split(","), dtype=int) print "Means for sample Dkya_str_m" print depths.mean(), depths.std() print depths[depths>5].mean(), depths[depths>5].std()
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Plot the coverage for the sample with highest mean coverage Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage.
import toyplot import toyplot.svg import numpy as np ## read in the depth information for this sample with open("empirical_7/clust.85/Dkya_str_m.depths", 'rb') as indat: depths = np.array(indat.read().strip().split(","), dtype=int) ## make a barplot in Toyplot canvas = toyplot.Canvas(width=350, height=300) axes = canvas.axes(xlabel="Depth of coverage (N reads)", ylabel="N loci", label="dataset7/sample=Dkya_str_m") ## select the loci with depth > 5 (kept) keeps = depths[depths>5] ## plot kept and discarded loci edat = np.histogram(depths, range(30)) # density=True) kdat = np.histogram(keeps, range(30)) #, density=True) axes.bars(edat) axes.bars(kdat) #toyplot.svg.render(canvas, "empirical_7_depthplot.svg")
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Print final stats table
cat empirical_7/stats/empirical_7_m4.stats %%bash head -n 20 empirical_7/stats/empirical_7_m2.stats
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Infer ML phylogeny in raxml as an unrooted tree
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
                      -w /home/deren/Documents/RADmissing/empirical_7/ \
                      -n empirical_7_m4 -s empirical_7/outfiles/empirical_7_m4.phy
%%bash
## raxml arguments w/ ...
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \
                      -w /home/deren/Documents/RADmissing/empirical_7/ \
                      -n empirical_7_m2 -s empirical_7/outfiles/empirical_7_m2.phy
%%bash
head -n 20 empirical_7/RAxML_info.empirical_7_m4
%%bash
head -n 20 empirical_7/RAxML_info.empirical_7_m2
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Plot the tree in R using ape
%load_ext rpy2.ipython %%R -h 800 -w 800 library(ape) tre <- read.tree("empirical_7/RAxML_bipartitions.empirical_7_m4") ltre <- ladderize(tre) par(mfrow=c(1,2)) plot(ltre, use.edge.length=F) nodelabels(ltre$node.label) plot(ltre, type='u')
emp_nb_Danio.ipynb
dereneaton/RADmissing
mit
Examine a single patient
patientunitstayid = 2704494 query = query_schema + """ select * from apacheapsvar where patientunitstayid = {} """.format(patientunitstayid) df = pd.read_sql_query(query, con) df.head()
notebooks/apache.ipynb
mit-eicu/eicu-code
mit
A few things to note:
- Missing data is represented by -1
- apacheScore is present but the prediction is -1
- Ventilation flags
- Unable to assess GCS (meds column)

Hospitals with data available
query = query_schema + """
select pt.hospitalid
  , count(pt.patientunitstayid) as number_of_patients
  , count(a.patientunitstayid) as number_of_patients_with_tbl
from patient pt
left join apacheapsvar a
  on pt.patientunitstayid = a.patientunitstayid
group by pt.hospitalid
"""

df = pd.read_sql_query(query, con)
df['data completion'] = df['number_of_patients_with_tbl'] / df['number_of_patients'] * 100.0
df.sort_values('number_of_patients_with_tbl', ascending=False, inplace=True)
df.head(n=10)

df[['data completion']].vgplot.hist(bins=10,
                                    var_name='Number of hospitals',
                                    value_name='Percent of patients with data')
notebooks/apache.ipynb
mit-eicu/eicu-code
mit
Configure and build the neural network. Each layer has the 4 inputs $\mathbf{H}^{T}\mathbf{r}$, $\mathbf{H}^{T}\mathbf{H}$, $\mathbf{t}_k$ and $\mathbf{v}_k$. The index $k$ denotes the layer. The layers can also be interpreted as iterations of an optimization algorithm [1]. The nonlinear operation

$$
\begin{align}
&\quad \mathbf{z}_{k} = \rho\left(\mathbf{W}_{1k}\begin{bmatrix} \mathbf{H}^{T}\mathbf{r}\\ \hat{\mathbf{t}}_{k}\\ \mathbf{H}^{T}\mathbf{H}\hat{\mathbf{t}}_{k}\\ \mathbf{v}_{k} \end{bmatrix}+\mathbf{b}_{1k}\right)\\
&\hat{\mathbf{t}}_{k+1} = \psi_{t_{k}}(\mathbf{W}_{2k}\mathbf{z}_{k}+\mathbf{b}_{2k})\\
&\hat{\mathbf{v}}_{k+1} = \mathbf{W}_{3k}\mathbf{z}_{k}+\mathbf{b}_{3k}\\
&\qquad\hat{\mathbf{t}}_{1} = \mathbf{0}\tag{10}
\end{align}
$$

is applied to the input. $\mathbf{t}_0$ is the received data vector. Summarized, each layer roughly performs the following steps:
* Concatenate the inputs.
* Apply a linear transformation.
* Apply the ReLU function.
* Calculate $\mathbf{v}_{k+1}$ as a linear trafo of the ReLU output and use the ResNet feature.
* Calculate $\hat{\mathbf{t}}_{k+1}$ as a linear trafo of the ReLU output, which is then fed to the linear soft sign function.
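The linear soft sign $\psi_{t}$ in Eq. (10) is a piecewise-linear, tanh-like saturation built from two ReLUs, with a trainable breakpoint $t$. A pure-Python sketch (the function names are mine; this mirrors the ReLU identity used in the PyTorch forward pass below):

```python
def relu(x):
    return max(0.0, x)

def linear_soft_sign(x, t):
    """-1 + relu(x + t)/|t| - relu(x - t)/|t|: linear with slope 1/|t| on [-t, t], clipped to [-1, 1]."""
    return -1.0 + relu(x + t) / abs(t) - relu(x - t) / abs(t)

# With t = 0.5 the function is x/t inside [-0.5, 0.5] and saturates at +/-1 outside.
outputs = [linear_soft_sign(x, 0.5) for x in (-2.0, -0.25, 0.0, 0.25, 2.0)]
```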
# DetNet config
layers = 3*K
v_len = 2*K
z_len = 8*K

# Training params
training_steps = 10000
batch_size_train = 5000
snr_var_train = 3.0 # Maximum absolute deviation of the SNR from its mean in logarithmic scale.

# Test params
test_steps = 1000
batch_size_test = 5000
snr_range = np.arange(8, 14, 1)

# Definition of the loss function
def own_loss(t, t_train, t_ZF):
    loss_l = torch.zeros(len(t), 1, device=device) # Denotes the loss in layer l
    for layer in range(1,len(t)+1):
        loss_l[layer-1] = torch.log(torch.Tensor([layer+1]).to(device))*torch.mean(torch.mean(torch.square(t_train - t[layer-1]),1)/torch.mean(torch.square(t_train - t_ZF),1))
    return loss_l

# Definition of the DetNet
class DetNet(nn.Module):
    # Build DetNet
    def __init__(self, layers, K, v_len, z_len):
        # Here we define the trainable parameters (the net)
        super(DetNet, self).__init__()

        # We have to use nn.ModuleList here instead of a Python list. (Otherwise you'll get an error saying
        # that your model has no parameters, because PyTorch does not see the parameters of the layers stored
        # in a Python list.)
        # Furthermore, we initialize the linear trafos with normalized weights.

        # Linear trafos W_1l, W_2l, W_3l
        self.linear_trafo_1_l = nn.ModuleList()
        self.linear_trafo_1_l.extend([nn.Linear(3*K + v_len, z_len) for i in range(1, layers+1)])
        for i in range(0, layers):
            nn.init.normal_(self.linear_trafo_1_l[i].weight, std = 0.01)
            nn.init.normal_(self.linear_trafo_1_l[i].bias, std = 0.01)

        self.linear_trafo_2_l = nn.ModuleList()
        self.linear_trafo_2_l.extend([nn.Linear(z_len, K) for i in range(1, layers+1)])
        for i in range(0, layers):
            nn.init.normal_(self.linear_trafo_2_l[i].weight, std = 0.01)
            nn.init.normal_(self.linear_trafo_2_l[i].bias, std = 0.01)

        self.linear_trafo_3_l = nn.ModuleList()
        self.linear_trafo_3_l.extend([nn.Linear(z_len , v_len) for i in range(1, layers+1)])
        for i in range(0, layers):
            nn.init.normal_(self.linear_trafo_3_l[i].weight, std = 0.01)
            nn.init.normal_(self.linear_trafo_3_l[i].bias, std = 0.01)
        # Linear soft sign function
        self.kappa_l = nn.ParameterList()
        self.kappa_l.extend([nn.Parameter(torch.rand(1, requires_grad=True, device=device)) for i in range(1, layers+1)])

        # ReLU as activation function
        self.relu = nn.ReLU()

    def forward(self, Hr, HH):
        v = torch.zeros(len(Hr), v_len, device=device)    # Internal memory (state) that is passed to the next layer
        t = torch.zeros(1, len(Hr), K, device=device)     # Transmit vector we want to estimate -> initialized as zero
        t_tilde = torch.zeros(len(Hr), K, device=device)  # Transmit vector we want to estimate -> initialized as zero

        # Send the data through the stacked DetNet
        for l in range(1,layers+1):
            # Concatenate the 4 inputs Hr, v, t and HHt.
            concat = torch.cat((Hr, v, t[-1,:,:], torch.squeeze(torch.matmul(torch.unsqueeze(t[-1,:,:], 1), HH))), 1)
            # Apply linear transformation and rectified linear unit (ReLU).
            z = self.relu(self.linear_trafo_1_l[l-1](concat))
            # Generate the new t iterate with a final linear trafo followed by the linear soft sign.
            t_tilde = self.linear_trafo_2_l[l-1](z)
            t = torch.cat((t, torch.unsqueeze(-1+self.relu(t_tilde+self.kappa_l[l-1])/torch.abs(self.kappa_l[l-1])-self.relu(t_tilde-self.kappa_l[l-1])/torch.abs(self.kappa_l[l-1]),0)), 0)
            # Generate the new v iterate with a final linear trafo.
            v = self.linear_trafo_3_l[l-1](z)
            del concat, z
            torch.cuda.empty_cache()
        del v, t_tilde
        torch.cuda.empty_cache()
        return t[1:,:,:]
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
Training of the network. The loss function takes into account the outputs of all $\mathcal{L}$ layers and is normalized by the loss of a zero-forcing equalizer $\Vert \mathbf{t}-\tilde{\mathbf{t}}\Vert^{2}$. The loss function is defined as:

$$
\begin{align}
&L(\mathbf{t};\hat{\mathbf{t}}_{\theta}(\mathbf{H}, \mathbf{r}))=\sum_{l=1}^{\mathcal{L}}\log(l)\frac{\Vert \mathbf{t}^{[train]}-\hat{\mathbf{t}}_{l}\Vert ^{2}}{\Vert \mathbf{t}^{[train]}-\hat{\mathbf{t}}_{ZF}\Vert^{2}},\tag{13}\\
\text{where}\\
&\qquad\qquad \hat{\mathbf{t}}_{ZF}=(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{r} \tag{14}
\end{align}
$$

is the estimated transmit vector of the ZF decoder.
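The logarithmic weighting gives later layers more influence on the loss. A stand-alone numeric check of that weighting (the toy per-layer errors are arbitrary; the `log(l + 1)` factor matches the `own_loss` implementation above):

```python
import math

def layer_weights(num_layers):
    """Weight for layer l (1-based): log(l + 1), as in the own_loss implementation."""
    return [math.log(l + 1) for l in range(1, num_layers + 1)]

def weighted_loss(per_layer_mse, zf_mse):
    """Sum of log-weighted per-layer errors, each normalized by the ZF baseline error."""
    w = layer_weights(len(per_layer_mse))
    return sum(wi * mse / zf_mse for wi, mse in zip(w, per_layer_mse))

# Three layers whose error shrinks with depth, normalized by a ZF error of 0.2
loss = weighted_loss([0.4, 0.2, 0.1], zf_mse=0.2)
```

Note that the last (deepest) layer gets the largest weight, so training pushes hardest on the final estimate while still supervising the intermediate ones.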
model = DetNet(layers, K, v_len, z_len)
model.to(device)

# Adam optimizer
optimizer = optim.Adam(model.parameters(), eps=1e-07)

results = []
ber = []
for i in range(training_steps):
    # Generate a batch of training data.
    r_train, Hr_train, HH_train, snr_train, t_train = data_generation_VC(K, N, batch_size_train, snr, snr_var_train, device)
    # Feed the training data to the network and update the weights.
    t = model(Hr_train, HH_train)
    # Compute the loss.
    # Calculate the zero-forcing (decorrelation) decoder to normalize the loss function.
    t_ZF = torch.squeeze(torch.matmul(torch.unsqueeze(Hr_train,1),torch.inverse(HH_train)), 1)
    loss = torch.sum(own_loss(t, t_train, t_ZF))
    # Compute gradients.
    loss.backward()
    # Adapt weights.
    optimizer.step()
    # Reset gradients.
    optimizer.zero_grad()
    # Print the current progress of the training (loss and BER).
    # Note that we print the loss/BER on the training dataset here.
    # For a real evaluation, the model should be tested on the test dataset.
    if i%500 == 0:
        results.append(own_loss(t, t_train, t_ZF).detach().cpu().numpy())
        ber.append(1 - torch.mean(t_train.eq(torch.sign(t)).float(),[1,2]).detach().cpu().numpy())
        print('Train step ', i, ', current loss: ', results[-1][-1], ', current ber: ', ber[-1][-1])
    del r_train, Hr_train, HH_train, snr_train, t_train, t
    torch.cuda.empty_cache()

# # Save the trained model
# torch.save(model.state_dict(), 'Deep_MIMO_Detection_net')
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
Visualize
fig = plt.figure(1,figsize=(15,15)) plt.rcParams.update({'font.size': 18}) color=iter(cm.viridis_r(np.linspace(0,1,len(results)))) # Plot loss. plt.subplot(211) for i in range(0, len(results)): c=next(color) plt.semilogy(range(0, len(results[0])-1), results[i][1:], color=c) plt.grid(True) plt.title("Loss Function of DetNet over Layers and Iterations") plt.xlabel("Layer") plt.ylabel(r"$l(\mathbf{x};\hat{\mathbf{x}}(\mathbf{H}, \mathbf{y}))$") # Plot BER. plt.subplot(212) color=iter(cm.viridis_r(np.linspace(0,1,len(results)))) for i in range(0, len(results)): c=next(color) plt.semilogy(range(0, len(results[0])), ber[i], color=c) plt.grid(True) plt.title("BER at 13 dB of DetNet over Layers and Iterations") plt.xlabel("Layer") plt.ylabel("BER") plt.show() fig.savefig("DetNet_layers.pdf", format='pdf')
mloc/ch5_Algorithm_Unfolding/Deep_MIMO_Detection.ipynb
kit-cel/wt
gpl-2.0
After initialization, the k-means algorithm iterates between the following two steps: 1. Assign each data point to the closest centroid. $$ z_i \gets \mathrm{argmin}_j \|\mu_j - \mathbf{x}_i\|^2 $$ 2. Revise centroids as the mean of the assigned data points. $$ \mu_j \gets \frac{1}{n_j}\sum_{i:z_i=j} \mathbf{x}_i $$ In pseudocode, we iteratively do the following: cluster_assignment = assign_clusters(data, centroids) centroids = revise_centroids(data, k, cluster_assignment) Assigning clusters How do we implement Step 1 of the main k-means loop above? First import the pairwise_distances function from scikit-learn, which calculates Euclidean distances between rows of given arrays. See this documentation for more information. For the sake of demonstration, let's look at documents 100 through 102 as query documents and compute the distances between each of these documents and every other document in the corpus. In the k-means algorithm, we will have to compute pairwise distances between the set of centroids and the set of documents.
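The two-step loop above can be sketched end-to-end on toy 1-D data with the standard library only (dense lists instead of the sparse TF-IDF matrix; the function names follow the pseudocode, but the toy data and the fixed iteration count are mine, and empty clusters are not handled):

```python
def assign_clusters(data, centroids):
    """Step 1: index of the closest centroid for each point."""
    return [min(range(len(centroids)), key=lambda j: (x - centroids[j]) ** 2)
            for x in data]

def revise_centroids(data, k, assignment):
    """Step 2: each centroid becomes the mean of its assigned points."""
    new = []
    for j in range(k):
        members = [x for x, z in zip(data, assignment) if z == j]
        new.append(sum(members) / len(members))
    return new

data = [0.0, 1.0, 10.0, 11.0]
centroids = [0.0, 1.0]            # deliberately poor initialization
for _ in range(10):
    assignment = assign_clusters(data, centroids)
    centroids = revise_centroids(data, len(centroids), assignment)
```

Even from the poor start, the loop separates the two obvious groups after a couple of iterations.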
from sklearn.metrics import pairwise_distances # Get the TF-IDF vectors for documents 100 through 102. queries = tf_idf[100:102,:] # Compute pairwise distances from every data point to each query vector. dist = pairwise_distances(tf_idf, queries, metric='euclidean') print dist print queries.shape print tf_idf.shape print dist.shape
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
More formally, dist[i,j] is assigned the distance between the ith row of X (i.e., X[i,:]) and the jth row of Y (i.e., Y[j,:]). Checkpoint: For a moment, suppose that we initialize three centroids with the first 3 rows of tf_idf. Write code to compute distances from each of the centroids to all data points in tf_idf. Then find the distance between row 430 of tf_idf and the second centroid and save it to dist.
# Students should write code here first_3_centroids = tf_idf[:3,:] distances = pairwise_distances(tf_idf, first_3_centroids, metric='euclidean') dist = distances[430, 1] print dist '''Test cell''' if np.allclose(dist, pairwise_distances(tf_idf[430,:], tf_idf[1,:])): print('Pass') else: print('Check your code again')
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Next, given the pairwise distances, we take the minimum of the distances for each data point. Fittingly, NumPy provides an argmin function. See this documentation for details. Read the documentation and write code to produce a 1D array whose i-th entry indicates the centroid that is the closest to the i-th data point. Use the list of distances from the previous checkpoint and save them as distances. The value 0 indicates closeness to the first centroid, 1 indicates closeness to the second centroid, and so forth. Save this array as closest_cluster. Hint: the resulting array should be as long as the number of data points.
# Students should write code here distances = distances.copy() closest_cluster = np.argmin(distances, axis=1) print closest_cluster print closest_cluster.shape '''Test cell''' reference = [list(row).index(min(row)) for row in distances] if np.allclose(closest_cluster, reference): print('Pass') else: print('Check your code again')
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Checkpoint: Let's put these steps together. First, initialize three centroids with the first 3 rows of tf_idf. Then, compute distances from each of the centroids to all data points in tf_idf. Finally, use these distance calculations to compute cluster assignments and assign them to cluster_assignment.
# Students should write code here first_3_centroids = tf_idf[:3,:] distances = pairwise_distances(tf_idf, first_3_centroids, metric='euclidean') cluster_assignment = np.argmin(distances, axis=1) if len(cluster_assignment)==59071 and \ np.array_equal(np.bincount(cluster_assignment), np.array([23061, 10086, 25924])): print('Pass') # count number of data points for each cluster else: print('Check your code again.')
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Let's consider running k-means with K=3 clusters for a maximum of 400 iterations, recording cluster heterogeneity at every step. Then, let's plot the heterogeneity over iterations using the plotting function above.
k = 3 heterogeneity = [] initial_centroids = get_initial_centroids(tf_idf, k, seed=0) centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400, record_heterogeneity=heterogeneity, verbose=True) plot_heterogeneity(heterogeneity, k) print heterogeneity
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. (True/False) The clustering objective (heterogeneity) is non-increasing for this example. - True Quiz Question. Let's step back from this particular example. If the clustering objective (heterogeneity) would ever increase when running k-means, that would indicate: (choose one) k-means algorithm got stuck in a bad local minimum There is a bug in the k-means code All data points consist of exact duplicates Nothing is wrong. The objective should generally go down sooner or later. Quiz Question. Which of the cluster contains the greatest number of data points in the end? Hint: Use np.bincount() to count occurrences of each cluster label. 1. Cluster #0 2. Cluster #1 3. Cluster #2
print np.bincount(cluster_assignment)
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Beware of local minima One weakness of k-means is that it tends to get stuck in a local minimum. To see this, let us run k-means multiple times, with different initial centroids created using different random seeds. Note: Again, in practice, you should set different seeds for every run. We give you a list of seeds for this assignment so that everyone gets the same answer. This may take several minutes to run.
k = 10 heterogeneity = {} cluster_assignment_dict = {} import time start = time.time() for seed in [0, 20000, 40000, 60000, 80000, 100000, 120000]: initial_centroids = get_initial_centroids(tf_idf, k, seed) centroids, cluster_assignment = kmeans(tf_idf, k, initial_centroids, maxiter=400, record_heterogeneity=None, verbose=False) # To save time, compute heterogeneity only once in the end heterogeneity[seed] = compute_heterogeneity(tf_idf, k, centroids, cluster_assignment) cluster_assignment_dict[seed] = np.bincount(cluster_assignment) print('seed={0:06d}, heterogeneity={1:.5f}, cluster_distribution={2}'.format(seed, heterogeneity[seed], cluster_assignment_dict[seed])) sys.stdout.flush() end = time.time() print(end-start)
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Notice the variation in heterogeneity for different initializations. This indicates that k-means sometimes gets stuck at a bad local minimum. Quiz Question. Another way to capture the effect of changing initialization is to look at the distribution of cluster assignments. Add a line to the code above to compute the size (# of member data points) of clusters for each run of k-means. Look at the size of the largest cluster (most # of member data points) across multiple runs, with seeds 0, 20000, ..., 120000. How much does this measure vary across the runs? What is the minimum and maximum values this quantity takes?
for k, v in cluster_assignment_dict.iteritems(): print k, np.max(v)
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Maximum values: 18132 Minimum values: 15779 One effective way to counter this tendency is to use k-means++ to provide a smart initialization. This method tries to spread out the initial set of centroids so that they are not too close together. It is known to improve the quality of local optima and lower average runtime.
def smart_initialize(data, k, seed=None):
    '''Use k-means++ to initialize a good set of centroids'''
    if seed is not None: # useful for obtaining consistent results
        np.random.seed(seed)
    centroids = np.zeros((k, data.shape[1]))

    # Randomly choose the first centroid.
    # Since we have no prior knowledge, choose uniformly at random
    idx = np.random.randint(data.shape[0])
    centroids[0] = data[idx,:].toarray()
    # Compute distances from the first centroid chosen to all the other data points
    distances = pairwise_distances(data, centroids[0:1], metric='euclidean').flatten()

    for i in xrange(1, k):
        # Choose the next centroid randomly, so that the probability for each data point to be chosen
        # is directly proportional to its squared distance from the nearest centroid.
        # Roughly speaking, a new centroid should be as far from the other centroids as possible.
        idx = np.random.choice(data.shape[0], 1, p=distances/sum(distances))
        centroids[i] = data[idx,:].toarray()
        # Now compute distances from the centroids to all data points
        distances = np.min(pairwise_distances(data, centroids[0:i+1], metric='euclidean'),axis=1)

    return centroids
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
A few things to notice from the box plot: * Random initialization results in a worse clustering than k-means++ on average. * The best result of k-means++ is better than the best result of random initialization. In general, you should run k-means at least a few times with different initializations and then return the run resulting in the lowest heterogeneity. Let us write a function that runs k-means multiple times and picks the best run that minimizes heterogeneity. The function accepts an optional list of seed values to be used for the multiple runs; if no such list is provided, the current UTC time is used as seed values.
def kmeans_multiple_runs(data, k, maxiter, num_runs, seed_list=None, verbose=False): heterogeneity = {} min_heterogeneity_achieved = float('inf') best_seed = None final_centroids = None final_cluster_assignment = None for i in xrange(num_runs): # Use UTC time if no seeds are provided if seed_list is not None: seed = seed_list[i] np.random.seed(seed) else: seed = int(time.time()) np.random.seed(seed) # Use k-means++ initialization # YOUR CODE HERE initial_centroids = smart_initialize(data, k, seed=seed) # Run k-means # YOUR CODE HERE centroids, cluster_assignment = kmeans(data, k, initial_centroids, maxiter=400, record_heterogeneity=None, verbose=False) # To save time, compute heterogeneity only once in the end # YOUR CODE HERE heterogeneity[seed] = compute_heterogeneity(data, k, centroids, cluster_assignment) if verbose: print('seed={0:06d}, heterogeneity={1:.5f}'.format(seed, heterogeneity[seed])) sys.stdout.flush() # if current measurement of heterogeneity is lower than previously seen, # update the minimum record of heterogeneity. if heterogeneity[seed] < min_heterogeneity_achieved: min_heterogeneity_achieved = heterogeneity[seed] best_seed = seed final_centroids = centroids final_cluster_assignment = cluster_assignment # Return the centroids and cluster assignments that minimize heterogeneity. return final_centroids, final_cluster_assignment
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. Which of the 10 clusters above contains the greatest number of articles? Cluster 0: artists, poets, writers, environmentalists Cluster 4: track and field athletes Cluster 5: composers, songwriters, singers, music producers Cluster 7: baseball players Cluster 9: lawyers, judges, legal scholars
np.argmax(np.bincount(cluster_assignment[10]()))
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Quiz Question. Which of the 10 clusters contains the least number of articles? Cluster 1: film directors Cluster 3: politicians Cluster 6: soccer (football) players Cluster 7: baseball players Cluster 9: lawyers, judges, legal scholars
np.argmin(np.bincount(cluster_assignment[10]()))
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
The class of rugby players has been broken into two clusters (11 and 72). Same goes for soccer (football) players (clusters 6, 21, 40, and 87), although some may like the benefit of having a separate category for the Australian Football League. The class of baseball players has also been broken into two clusters (18 and 95). A high value of K encourages pure clusters, but we cannot keep increasing K. For large enough K, related documents end up going to different clusters. That said, the result for K=100 is not entirely bad. After all, it gives us separate clusters for such categories as Scotland, Brazil, LGBT, computer science and the Mormon Church. If we set K somewhere between 25 and 100, we should be able to avoid breaking up clusters while discovering new ones. Also, we should ask ourselves how much granularity we want in our clustering. If we want a rough sketch of Wikipedia, we don't want overly detailed clusters. On the other hand, having many clusters can be valuable when we are zooming into a certain part of Wikipedia. There is no golden rule for choosing K. It all depends on the particular application and domain we are in. Another heuristic people use, which does not rely on so much visualization (which can be hard in many applications, including here!), is as follows: track heterogeneity versus K and look for the "elbow" of the curve, where the heterogeneity decreases rapidly before this value of K but only gradually for larger values of K. This naturally trades off between minimizing heterogeneity and reducing model complexity. In the heterogeneity versus K plot made above, we did not yet really see a flattening out of the heterogeneity, which might indicate that K=100 is indeed "reasonable" and we only see real overfitting for larger values of K (which are even harder to visualize using the methods we attempted above). Quiz Question. Another sign of too large a K is having lots of small clusters. 
Look at the distribution of cluster sizes (by number of member data points). How many of the 100 clusters have fewer than 236 articles, i.e. 0.4% of the dataset? Hint: Use cluster_assignment[100](), with the extra pair of parentheses for delayed loading.
cluster_assignment_bincount = np.bincount(cluster_assignment[100]()) len(cluster_assignment_bincount[cluster_assignment_bincount <= 236])
machine_learning/4_clustering_and_retrieval/assigment/week3/2_kmeans-with-text-data_graphlab.ipynb
tuanavu/coursera-university-of-washington
mit
Objectives: with this notebook you will

- understand the very basics of a sensitivity analysis and the specificity of an RBD-FAST analysis
- use on your own the Python library SALib to perform an RBD-FAST analysis

NOTE: for more detail on sensitivity analysis in general, refer to the café focus by Jeanne Goffart (April 2017); the presentation is on the server.

CAUTION: sensitivity analysis tools should be used and analyzed with care. Without proper design, you won't be able to draw any conclusions at all, or worse, you will draw wrong conclusions...

Introduction: how RBD-FAST (and sensitivity analysis in general) works

<img src="Principle SAnalysis.jpg" alt="oops! missing picture" title="basic principle" />

RBD-FAST principle explained in Goffart, Rabouille and Mendes (2015)

<img src="RBD-FAST chez Goffart et al 2015.png" alt="oops! missing picture" title="RBD-FAST method in more detail" />

GENERAL WORKFLOW

1. define the "problem" dictionary that states the number of inputs, their names and their bounds: a prerequisite to SALib
2. sample values for each parameter in the space you want to study (through the LHS sampler of SALib)
3. run your model/workflow and save the output(s) you're interested in (your own code)
4. use an SA method (from the SALib analyze tools) and... surpriiiise!

Define the "problem" dictionary

It is a regular Python dictionary object with mandatory keys:
- 'num_vars'
- 'names'
- 'bounds'

TRICKY ALERT Defining the bounds is one of the trickiest parts of a sensitivity analysis. Do read the literature on the subject, what your fellow researchers have set as bounds for their studies, etc. Here we have chosen to focus on a specific area of all the possible values taken by the parameters: ± 10% around a nominal value.
# STATE THE PROBLEM DICTIONARY
# what will be varying (= the inputs)? within what bounds?
problem = {
    'num_vars': 3,
    'names': ['x1', 'x2', 'x3'],
    'bounds': [[-3.14159265359, 3.14159265359],
               [-3.14159265359, 3.14159265359],
               [-3.14159265359, 3.14159265359]]
}
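For the ± 10% case mentioned above, the bounds can be generated from nominal values rather than typed by hand. The parameter names and nominal values below are purely hypothetical (they are not taken from the actual RC model); only the ± 10% construction matters:

```python
# hypothetical nominal parameter values (NOT from the actual RC model)
nominal = {'R_wall': 2.5, 'C_indoor': 1.8e7, 'U_window': 1.4}

# build the problem dictionary with bounds at +/- 10% around each nominal value
problem_10pct = {
    'num_vars': len(nominal),
    'names': list(nominal),
    'bounds': [[0.9 * v, 1.1 * v] for v in nominal.values()],
}
```

Generating the bounds this way keeps the ± 10% assumption in one place, which makes it easy to widen or narrow the studied space later.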
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
Draw a number of samples from your problem definition
# say we want to draw 150 samples
num_samples = 150

# we draw a Latin Hypercube Sampling (LHS) that is fitted for an RBD-FAST analysis
# (other sampling methods are available in the library though)
from SALib.sample.latin import sample

all_samples = sample(problem, num_samples)
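To see what the LHS sampler is actually doing, here is a minimal hand-rolled Latin Hypercube sampler (an illustrative sketch only; use SALib's implementation in practice). Each dimension is split into `num_samples` equal-probability strata, one point is drawn per stratum, and the strata are shuffled independently per dimension:

```python
import numpy as np

def latin_hypercube(num_samples, bounds, seed=0):
    """Minimal Latin Hypercube sampler: one point per equal-width stratum,
    strata shuffled independently for each dimension (illustrative sketch)."""
    rng = np.random.RandomState(seed)
    out = np.empty((num_samples, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        # one uniform draw inside each of the num_samples strata of [0, 1)
        strata = (np.arange(num_samples) + rng.rand(num_samples)) / num_samples
        rng.shuffle(strata)
        out[:, j] = lo + strata * (hi - lo)
    return out

pts = latin_hypercube(150, [[-np.pi, np.pi]] * 3)
```

The stratification is what makes LHS cover each input range evenly with few samples, which is exactly why it suits RBD-FAST.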
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
Run your own model/workflow to get the output of interest

Here you use your own code; it can be Python code like here, or it can call any other program (Dymola, EnergyPlus, etc.).

We use a lumped thermal model, aka an RC model. The scientific question behind it (it's not just a stupid example ^^): can the model be entirely calibrated? A parameter that has no influence on the output (the indoor temperature) cannot be determined by any calibration algorithm, and that is fine: it means that we can just fix its value without interfering with the calibration process, which also reduces the dimensions of the inverse problem --> also good!
# this is where you define your own model, procedure, experiment...
import numpy as np

def run_model(x1, x2, x3):
    """
    COPY HERE YOUR OWN CODE
    the function takes 1 sample as input
    and returns 1 or more outputs
    """
    # Delete from HERE =======================================
    # As an example, we'll look at the famous Ishigami function
    # (A and B are specific parameters of the Ishigami function)
    A = 7
    B = 0.1
    y = np.sin(x1) + A * np.sin(x2)**2 + B * x3**4 * np.sin(x1)
    # ========= TO HERE and replace with your own piece of code
    return y
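The Ishigami function is a popular test case precisely because its first-order sensitivity indices are known analytically, so any SA method can be checked against them. A quick computation of those reference values (a standard textbook result, not something produced by SALib):

```python
import numpy as np

A, B = 7, 0.1
# total variance of the Ishigami function (standard analytical result)
V = A**2 / 8 + B * np.pi**4 / 5 + B**2 * np.pi**8 / 18 + 0.5
# first-order (main effect) partial variances
V1 = 0.5 * (1 + B * np.pi**4 / 5) ** 2
V2 = A**2 / 8
V3 = 0.0  # x3 only acts through its interaction with x1
S1, S2, S3 = V1 / V, V2 / V, V3 / V
# S1 ~= 0.314, S2 ~= 0.442, S3 = 0
```

So an RBD-FAST run on this model should report x2 as the most influential input, x1 second, and x3 as negligible at first order.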
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0
NOTE: you could also use outputs from a different program; you would then have to load your data into a Python numpy array.
# run your model, procedure, experiment for each sample of your sampling
# unpack all_samples into 3 vectors x1, x2, x3
x1, x2, x3 = all_samples.T

# run the model on all samples at once
ishigami_results = run_model(x1, x2, x3)
misc/Sensitivity_analysis.ipynb
locie/locie_notebook
lgpl-3.0