Ensemble Methods

Ensemble methods combine the predictions of several models, either by averaging them or by voting among them. There are two main families of ensemble methods:
- Averaging methods: bagging methods, Random Forests, ...
- Boosting methods: AdaBoost, Gradient Tree Boosting
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
04-more-supervised.ipynb
msadegh97/machine-learning-course
gpl-3.0
Random Forest
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(n_estimators=50, min_samples_split=2)
model.fit(X_train, y_train);
model.score(X_test, y_test)
04-more-supervised.ipynb
msadegh97/machine-learning-course
gpl-3.0
AdaBoost
from sklearn.ensemble import AdaBoostClassifier

model = AdaBoostClassifier(n_estimators=100)
model.fit(X_train, y_train);
model.score(X_test, y_test)
04-more-supervised.ipynb
msadegh97/machine-learning-course
gpl-3.0
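Gradient Tree Boosting is listed above among the boosting methods but not demonstrated. A minimal sketch, shown here self-contained with its own train/test split of the same breast-cancer data (the `random_state` and hyperparameter values are illustrative choices, not from the course):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Each new tree is fit to the residual errors of the current ensemble,
# scaled by the learning rate.
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

As with AdaBoost, `n_estimators` controls how many weak learners are added; unlike the averaging methods, the trees are built sequentially.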
A repository: a group of linked commits

<!-- offline: ![](files/fig/threecommits.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/threecommits.png" >

And this is pretty much the essence of Git!

First things first: git must be configured before first use

The minimal amount of configuration for git to work without pestering you is to tell it who you are:
%%bash
git config --global user.name "Jeremy S. Perkins"
git config --global user.email "jeremyshane@gmail.com"
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
And how you will edit text files (it will often ask you to edit messages and other information, and thus wants to know how you like to edit your files):
%%bash
# Put here your preferred editor. If this is not set, git will honor
# the $EDITOR environment variable
git config --global core.editor /usr/bin/nano  # Yes, I still use nano

# On Windows Notepad will do in a pinch; I recommend Notepad++ as a free alternative
# On the Mac, you can set nano or emacs as a basic option

# And while we're at it, we also turn on the use of color, which is very useful
git config --global color.ui "auto"
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Set git to use the credential memory cache so we don't have to retype passwords too frequently. On OSX, you should run the following (note that this requires git version 1.7.10 or newer):
%%bash
git config --global credential.helper osxkeychain

# Set the cache to timeout after 2 hours (setting is in seconds)
git config --global credential.helper 'cache --timeout=7200'
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
GitHub's help pages offer instructions on how to configure the credential helper for Linux and Windows.

Stage 1: Local, single-user, linear workflow

Simply type git to see a full list of all the 'core' commands. We'll now go through most of these via small practical exercises:
!git
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Now let's edit our first file in the test directory with a text editor... I'm doing it programmatically here for automation purposes, but you'd normally be editing by hand.
%%bash
cd test
echo "My first bit of text" > file1.txt
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Git supports aliases: new names given to command combinations. Let's make this handy shortlog an alias, so we only have to type git slog and see this compact log:
%%bash
cd test

# We create our alias (this saves it in git's permanent configuration file):
git config --global alias.slog "log --oneline --topo-order --graph"

# And now we can use it
git slog
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
And git rm works in a similar fashion.

Local user, branching

What is a branch? Simply a label for the 'current' commit in a sequence of ongoing commits:

<!-- offline: ![](files/fig/masterbranch.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/masterbranch.png" >

There can be multiple branches alive at any point in time; the working directory is the state of a special pointer called HEAD. In this example there are two branches, master and testing, and testing is the currently active branch since it's what HEAD points to:

<!-- offline: ![](files/fig/HEAD_testing.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/HEAD_testing.png" >

Once new commits are made on a branch, HEAD and the branch label move with the new commits:

<!-- offline: ![](files/fig/branchcommit.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/branchcommit.png" >

This allows the history of both branches to diverge:

<!-- offline: ![](files/fig/mergescenario.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/mergescenario.png" >

But based on this graph structure, git can compute the necessary information to merge the divergent branches back and continue with a unified line of development:

<!-- offline: ![](files/fig/mergeaftermath.png) -->
<img src="https://raw.github.com/fperez/reprosw/master/fig/mergeaftermath.png" >

Let's now illustrate all of this with a concrete example. Let's get our bearings first:
%%bash
cd test
git status
ls
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
We are now going to try two different routes of development: on the master branch we will add one file and on the experiment branch, which we will create, we will add a different one. We will then merge the experimental branch into master.
%%bash
cd test
git branch experiment
git checkout experiment

%%bash
cd test
echo "Some crazy idea" > experiment.txt
git add experiment.txt
git commit -a -m "Trying something new"
git slog

%%bash
cd test
git checkout master
git slog

%%bash
cd test
git status

%%bash
cd test
echo "fixed this little bug" >> file-newname.txt
git commit -a -m "The mainline keeps moving"
git slog

%%bash
cd test
ls

%%bash
cd test
git merge experiment
git slog
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Using remotes as a single user

We are now going to introduce the concept of a remote repository: a pointer to another copy of the repository that lives in a different location. This can be simply a different path on the filesystem or a server on the internet. For this discussion we'll be using remotes hosted on the GitHub.com service, but you can equally use other services like BitBucket or Gitorious, as well as host your own.
%%bash
cd test
ls
echo "Let's see if we have any remote repositories here:"
git remote -v
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Since the above cell didn't produce any output after the git remote -v call, we have no remote repositories configured. We will now proceed to do so. Once logged into GitHub, go to the new repository page and make a repository called test. Do not check the box that says Initialize this repository with a README, since we already have an existing repository here. That option is useful when you're starting out on GitHub and don't already have a repo on a local computer. We can now follow the instructions from the next page:
%%bash
cd test
git remote add origin https://github.com/kialio/test.git

%%bash
cd test
git push -u origin master
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
We can now see this repository publicly on GitHub. Let's see how this can be useful for backup and syncing work between two different computers. I'll simulate a 2nd computer by working in a different directory...
%%bash
# Here I clone my 'test' repo but with a different name, test2, to simulate a 2nd computer
git clone https://github.com/kialio/test.git test2
cd test2
pwd
git remote -v
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Now we put this new work up on the GitHub server so it's available from the internet:
%%bash
cd test2
git push
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
Now let's fetch that work from machine #1:
%%bash
cd test
git pull
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
So now let's see what happens if we try to merge the trouble branch into master:
%%bash
cd test
git merge trouble
Day_02/02_GitDevelopment/VersionControl.ipynb
saudijack/unfpyboot
mit
<br>

Task 1: Effect of Learning Rate $\alpha$

Use the Linear Regression code below with X="GrLivArea" as the input variable and y="SalePrice" as the target variable. Try the different values of $\alpha$ given below, comment on why each is or isn't useful, and state which one is a good choice.

$\alpha=0.000001$:
$\alpha=0.00000001$:
$\alpha=0.000000001$:

<br>

Load X and y
# Load X and y variables from pandas dataframe df_train
cols = ['GrLivArea']
X_train = np.array(df_train[cols])
y_train = np.array(df_train[["SalePrice"]])

# Get m = number of samples and n = number of features
m = X_train.shape[0]
n = X_train.shape[1]

# append a column of 1's to X for theta_0
X_train = np.insert(X_train, 0, 1, axis=1)
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
Linear Regression with Gradient Descent code
iterations = 1500
alpha = 0.000000001  # change it and find what happens

def h(X, theta):
    """Linear hypothesis function."""
    hx = np.dot(X, theta)
    return hx

def computeCost(theta, X, y):
    """Cost function.

    theta is an n-dimensional vector, X is a matrix with n columns and m rows,
    y is a matrix with m rows and 1 column.
    """
    # note to self: *.shape is (rows, columns)
    return float((1. / (2 * m)) * np.dot((h(X, theta) - y).T, (h(X, theta) - y)))

# Actual gradient descent minimizing routine
def gradientDescent(X, y, theta_start=np.zeros((n + 1, 1))):
    """
    theta_start is an n-dimensional vector of initial theta guesses.
    X is the input variable matrix with n columns and m rows.
    y is a matrix with m rows and 1 column.
    """
    theta = theta_start
    j_history = []      # Used to plot cost as a function of iteration
    theta_history = []  # Used to visualize the minimization path later on
    for _ in range(iterations):
        # Copy theta so the update is truly simultaneous
        # (plain assignment would alias theta and leak partial updates)
        tmptheta = theta.copy()
        # append for plotting
        j_history.append(computeCost(theta, X, y))
        theta_history.append(list(theta[:, 0]))
        # Simultaneously updating theta values
        for j in range(len(tmptheta)):
            tmptheta[j] = theta[j] - (alpha / m) * np.sum((h(X, theta) - y) * np.array(X[:, j]).reshape(m, 1))
        theta = tmptheta
    return theta, theta_history, j_history
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
Run Gradient Descent on training data
# Actually run gradient descent to get the best-fit theta values
initial_theta = np.zeros((n + 1, 1))
theta, theta_history, j_history = gradientDescent(X_train, y_train, initial_theta)

plt.plot(j_history)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
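The effect of $\alpha$ asked about in Task 1 can be seen in isolation on a toy 1-D problem (all names and values below are illustrative and unrelated to the housing data): too small converges slowly, moderate converges quickly, and too large makes the cost grow every iteration.

```python
import numpy as np

# Toy noise-free data: y = 2x, so the optimal theta is exactly 2.
X = np.arange(1, 6, dtype=float)   # feature values 1..5
y = 2.0 * X

def cost_history(alpha, iterations=50):
    """Run plain gradient descent on a single theta and record the cost each step."""
    theta = 0.0
    costs = []
    m = len(X)
    for _ in range(iterations):
        grad = (1.0 / m) * np.sum((theta * X - y) * X)
        theta -= alpha * grad
        costs.append((1.0 / (2 * m)) * np.sum((theta * X - y) ** 2))
    return costs

small = cost_history(alpha=0.001)  # converges, but very slowly
good = cost_history(alpha=0.05)    # converges quickly
big = cost_history(alpha=0.2)      # diverges: cost grows each step
print(small[-1], good[-1], big[-1])
```

For this data the step shrinks the error by a factor $|1 - \alpha \, \overline{x^2}|$ each iteration, so any $\alpha$ above $2/\overline{x^2}$ diverges; the same mechanism explains why the largest $\alpha$ in the table above may blow up on the un-normalized GrLivArea values.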
Plot trained line on data
# predict output for training data
hx_train = h(X_train, theta)

# plot it
plt.scatter(X_train[:, 1], y_train)
plt.plot(X_train[:, 1], hx_train[:, 0], color='red')
plt.show()
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
<br>

Task 2: Predict test data output and submit it to Kaggle

In this task we will use the model trained above to predict "SalePrice" on the test data. The test data has all the input variables/features but no target variable. Our aim is to use the trained model to predict the target variable for the test data. This is called generalization, i.e., how well your model works on unseen data. The output, in the form "Id","SalePrice", should be saved in a .csv file and submitted to Kaggle. Please provide your Kaggle score after this step as an image. It will be compared to the 5-feature Linear Regression later.
# read data in pandas frame df_test and check first few rows
# write code here
df_test.head()

# check statistics of test data, make sure no data is missing.
print(df_test.shape)
df_test[cols].describe()

# Get X_test; no target variable (SalePrice) is provided in the test data. It is what we need to predict.
X_test = np.array(df_test[cols])

# Insert the usual column of 1's into the "X" matrix
X_test = np.insert(X_test, 0, 1, axis=1)

# predict test data labels i.e. y_test
predict = h(X_test, theta)

# save prediction as .csv file
pd.DataFrame({'Id': df_test.Id, 'SalePrice': predict[:, 0]}).to_csv("predict1.csv", index=False)
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
Upload .csv file to Kaggle.com

Create an account at https://www.kaggle.com
Go to https://www.kaggle.com/c/house-prices-advanced-regression-techniques/submit
Upload the "predict1.csv" file created above.
Upload your score as an image below.
from IPython.display import Image
Image(filename='images/asgn_01.png', width=500)
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
<br>

Task 3: Use scikit-learn for Linear Regression

In this task we are going to use the Linear Regression class from the scikit-learn library to train the same model. The aim is to move from understanding the algorithm to using an existing, well-established library. There is a Linear Regression example available on the scikit-learn website as well.

Use the scikit-learn linear regression class to train the model on df_train.
Compare the parameters from scikit-learn linear_model.LinearRegression.coef_ to the $\theta_s$ from earlier.
Use linear_model.LinearRegression.predict on the test data and upload the result to Kaggle. See if your score improves. Provide a screenshot.

Note: there is no need to append 1's to X_train. Scikit-learn's linear regression has a parameter called fit_intercept that is enabled by default.
# import scikit-learn linear model
from sklearn import linear_model

# get X and y
# write code here

# Create linear regression object
# write code here; check the link above for an example

# Train the model using the training sets. Use the fit(X,y) command
# write code here

# The coefficients
print('Intercept: \n', regr.intercept_)
print('Coefficients: \n', regr.coef_)

# The mean squared error
print("Mean squared error: %.2f" % np.mean((regr.predict(X_train) - y_train) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(X_train, y_train))

# read test X without 1's
# write code here

# predict output for test data. Use the predict(X) command.
predict2 =  # write code here

# remove negative sale prices by replacing them with zeros
predict2[predict2 < 0] = 0

# save prediction as predict2.csv file
# write code here
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
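Before filling in the template above, it may help to see the whole scikit-learn workflow end to end on a tiny synthetic dataset (the data and variable names here are illustrative, not part of the assignment):

```python
import numpy as np
from sklearn import linear_model

# Synthetic data: y = 3x + 5 plus a little noise.
rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X[:, 0] + 5 + rng.normal(scale=0.1, size=100)

# fit_intercept=True (the default) plays the role of the column of 1's,
# so X is passed WITHOUT the extra column of ones.
regr = linear_model.LinearRegression()
regr.fit(X, y)

print(regr.intercept_)  # close to 5
print(regr.coef_)       # close to [3]
```

`intercept_` corresponds to $\theta_0$ from the gradient descent code, and `coef_` to the remaining $\theta_s$, which is exactly the comparison the task asks for.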
<br>

Task 4: Multivariate Linear Regression

Lastly, use the columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] and scikit-learn (or the code given above) to predict the output on the test data. Upload it to Kaggle as before and see how much it improves your score. Everything remains the same except that the dimensions of X change. There might be some data missing from the test or train data, which you can check using the pandas.DataFrame.describe() function. Below we provide some helper code for handling that missing data.
# define columns ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt']
# write code here

# check feature ranges and statistics. The training dataset looks fine as all features have the same count.
df_train[cols].describe()

# Load X and y variables from pandas dataframe df_train
# write code here

# Get m = number of samples and n = number of features
# write code here

# Feature-normalize the columns (subtract mean, divide by standard deviation)
# Store the mean and std for later use
# Note: don't modify the original X matrix; use a copy
stored_feature_means, stored_feature_stds = [], []
Xnorm = np.array(X_train).copy()
for icol in range(Xnorm.shape[1]):
    stored_feature_means.append(np.mean(Xnorm[:, icol]))
    stored_feature_stds.append(np.std(Xnorm[:, icol]))
    # Skip the first column if it is 1's
    # if not icol: continue
    # Faster to not recompute the mean and std again; just use the stored values
    Xnorm[:, icol] = (Xnorm[:, icol] - stored_feature_means[-1]) / stored_feature_stds[-1]

# check data after normalization
pd.DataFrame(data=Xnorm, columns=cols).describe()

# Run Linear Regression from scikit-learn or the code given above.
# write code here. Repeat from above.
# Predict output using ['OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'YearBuilt'] as input features.

# Check feature ranges and statistics to see if there is any missing data.
# As you can see from the counts, "GarageCars" and "TotalBsmtSF" have 1 missing value each.
df_test[cols].describe()

# Replace each missing value with the mean of its feature
df_test['GarageCars'] = df_test['GarageCars'].fillna(df_test['GarageCars'].mean())
df_test['TotalBsmtSF'] = df_test['TotalBsmtSF'].fillna(df_test['TotalBsmtSF'].mean())
df_test[cols].describe()

# read test X without 1's
# write code here

# predict using the trained model
predict3 =  # write code here

# replace any negative predicted sale price by zero
predict3[predict3 < 0] = 0

# predict the target/output variable for test data using the trained model and upload to Kaggle.
# write code to save output as predict3.csv here
assignments/.ipynb_checkpoints/assignment01-regression-checkpoint.ipynb
w4zir/ml17s
mit
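The two helper steps used above — mean-imputation of missing values and feature normalization — can be seen in isolation on a tiny frame (the column names here are made up for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0, np.nan, 4.0],
                   "b": [10.0, 20.0, 30.0, 40.0]})

# Replace missing values with the column mean, as done for GarageCars/TotalBsmtSF above.
df["a"] = df["a"].fillna(df["a"].mean())

# Standardize each column: subtract the mean, divide by the standard deviation.
# ddof=0 matches np.std's default, as used in the loop above.
norm = (df - df.mean()) / df.std(ddof=0)
print(norm.mean().round(10))  # each column now has mean ~0
print(norm.std(ddof=0))       # and unit standard deviation
```

After this, all features live on a comparable scale, which is what lets a single learning rate work across columns with very different units.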
'+' is the operator symbol; 2 and 3 are the operands.

Operator Types

Arithmetic Operators
Comparison Operators
Logical Operators
Bitwise Operators
Assignment Operators
Special Operators

Arithmetic Operators
x = 15
y = 6

# Addition Operator
print('x + y = ', x + y)

# Subtraction Operator
print('x - y = ', x - y)

# Multiplication Operator
print('x * y = ', x * y)

# Division Operator (true division)
print('x / y = ', x / y)

# Floor Division Operator
print('x // y =', x // y)

# Exponential Operator
print('x ** y = ', x ** y)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
Comparison Operators
x = 12
y = 10

# Greater Than Operator
print('x > y = ', x > y)

# Greater Than or Equal To
print('x >= y', x >= y)

# Lesser Than Operator
print('x < y = ', x < y)

# Lesser Than or Equal To
print('x <= y', x <= y)

# Equal To Operator
print('x == y ', x == y)

# Not Equal To
print('x != y', x != y)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
Logical Operators
<pre>
and - True if both operands are true
or  - True if at least one operand is true
not - True if the operand is false
</pre>

Bitwise Operators
<pre>
&  -> Bitwise AND
|  -> Bitwise OR
~  -> Bitwise NOT
^  -> Bitwise XOR
>> -> Bitwise Right Shift
<< -> Bitwise Left Shift
</pre>
x = 10
y = 4

# Bitwise AND
print('x & y = ', x & y)

# Bitwise OR
print('x | y = ', x | y)

# Bitwise NOT
print('~ x = ', ~x)

# Bitwise XOR
print('x ^ y = ', x ^ y)

# Right Shift
print('x >> y = ', x >> y)

# Left Shift
print('x << y = ', x << y)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
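The logical operators listed above can be demonstrated the same way as the bitwise ones:

```python
x = True
y = False

# and - True only if both operands are true
print('x and y = ', x and y)

# or - True if at least one operand is true
print('x or y = ', x or y)

# not - inverts the truth value of its operand
print('not x = ', not x)
```

Note that unlike the bitwise `&` and `|`, `and` and `or` short-circuit: the second operand is only evaluated when it is needed.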
Assignment Operators
a = 5
print(a)

a += 5   # a = a + 5
print(a)

a -= 5   # a = a - 5
print(a)

a *= 5   # a = a * 5
print(a)

a /= 5   # a = a / 5
print(a)

a //= 2  # a = a // 2
print(a)

a %= 1   # a = a % 1
print(a)

a = 10
a **= 2  # a = a ** 2
print(a)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
Special Operators

Identity Operators
<pre>
is - True if the operands are identical (the same object)
is not - True if the operands are not identical
</pre>
x1 = 2
y1 = 2
x2 = 'Hello'
y2 = "Hello"
x3 = [1, 2, 3]
y3 = (1, 2, 3)

print('x1 is y1 = ', x1 is y1)
print('x1 is y2 = ', x1 is y2)
print('x3 is not y3 = ', x3 is not y3)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
Membership Operators
<pre>
in - True if the value / variable is found in the sequence
not in - True if the value / variable is not found in the sequence
</pre>
x = 'Hello World'
y = {1: 'a', 2: 'b'}

print("'H' in x ", 'H' in x)
print('hello not in x ', 'hello' not in x)
print('1 in y = ', 1 in y)
# membership on a dict tests its keys, not its values
print("'a' in y = ", 'a' in y)
Python+Operators.ipynb
vravishankar/Jupyter-Books
mit
conda install numpy
! pip install numpy -I
import numpy.core.multiarray
! pip install spacy
! pip install scipy
! pip install opencv-python
! pip install pillow
! pip install matplotlib
! pip install h5py
! pip install keras
! pip3 install https://github.com/OlafenwaMoses/ImageAI/releases/download/2.0.2/imageai-2.0.2-py3-none-any.whl
computer-vision/computer vision.ipynb
decisionstats/pythonfordatascience
apache-2.0
Download the RetinaNet model file that will be used for object detection via this link https://github.com/OlafenwaMoses/ImageAI/releases/download/1.0/resnet50_coco_best_v2.0.1.h5
import os

os.getcwd()
os.chdir('C:\\Users\\KOGENTIX\\Desktop\\image')
os.getcwd()

from imageai.Detection import ObjectDetection

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
detector.setModelPath(os.path.join(execution_path, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()

detections = detector.detectObjectsFromImage(
    input_image=os.path.join(execution_path, "image.jpeg"),
    output_image_path=os.path.join(execution_path, "imagenew.jpeg"))

for eachObject in detections:
    print(eachObject["name"], " : ", eachObject["percentage_probability"])
computer-vision/computer vision.ipynb
decisionstats/pythonfordatascience
apache-2.0
What is a cross-flow turbine?

<figure style="float: right">
<img width="420px" src="figures/sandia_vawt.png">
<p class=citation>From Barone and Paquette (2012)</p>
</figure>

* Axis perpendicular to flow
* Little success in onshore wind
  * Fatigue issues
  * Exaggerated power ratings

<figure>
<img width="50%" src=figures/flowind.jpg>
<p class=citation>From wind-works.org</p>
</figure>

Kinematics
import warnings
warnings.filterwarnings("ignore")

os.chdir(os.path.join(os.path.expanduser("~"), "Google Drive", "Research", "CFT-vectors"))
import cft_vectors

fig, ax = plt.subplots(figsize=(15, 15))
old_fontsize = plt.rcParams["font.size"]
plt.rcParams["font.size"] *= 1.5
cft_vectors.plot_diagram(fig, ax, theta_deg=52, axis="off", label=True)
os.chdir(talk_dir)
plt.rcParams["font.size"] = old_fontsize
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
Kinematics

<center>
<video width=100% controls loop>
<source src="videos/cft-animation.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</center>

<!-- embed_video("C:/Users/Pete/Google Drive/Research/CFT-vectors/videos/cft-animation.mp4") -->

Marine hydrokinetics: ORPC

<center>
<img width=60% src=figures/orpc.jpg>
<p>
<img width=60% src=figures/orpc-rivgen.jpg>
<p class=citation> From orpc.co. </center>

## Wind turbine arrays: Caltech FLOWE

<center>
<img width=70% src=figures/caltech-flowe.jpg>
<p class=citation> From flowe.caltech.edu. </center>

## Research objectives

1. Understand and predict wake recovery to open up array possibilities
2. Improve performance modeling <!--Note: will benefit all bladed-devices!-->
* Progress in predicting unsteady separating flows

## Turbine test bed

Automated turbine testing in the UNH tow tank

<center>
<img width=80% src="figures/rm2_video_snap.png">
</center>

## UNH tow tank upgrades

* Redesigned broken linear guide system
* Added closed-loop position and velocity control (servo, belt drive)
* Improved acceleration by an order of magnitude
* Network-based DAQ
* On-board power and networking for turbine test bed
* Multi-axis motion control

## Test bed instrumentation

<center>
<img width=80% src=figures/turbine-test-bed-drawing.PNG>
</center>

## Wake measurement instrumentation

* Nortek Vectrino+ acoustic Doppler velocimeter (ADV)
* $y$–$z$ traversing carriage
* Motion control integration

<center>
<img width=70% src="figures/traverse_alone.jpg">
</center>

## Automation

<!-- Increased number of tows per experiment by an order of magnitude. -->

<center>
<img width=60% src="figures/TurbineDAQ.PNG">
</center>

## Job destruction

Test bed circa 2011:

<center>
<img width=90% src="figures/ivo.PNG">
</center>

## Operation

<center>
<video width=100% controls loop>
<source src="videos/rm2-low-tsr-tow-edited.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
</center>

## UNH-RVAT

<img width=44% class="float-right" src="figures/rvat-cad-no-hubs.PNG">

* Simple geometry
* High solidity $(c/R)$
* Baseline experiments
  * $U_\infty = 1$ m/s
  * Characterize performance
  * Understand wake dynamics
  * Establish modeling "targets"

## UNH-RVAT baseline performance

<!-- <center> <img src="figures/test.png" width=80%> </center> -->
# Generate figures from the experiments by their own methods
os.chdir("C:/Users/Pete/Research/Experiments/RVAT baseline")
import py_rvat_baseline.plotting as rvat_baseline

fig, (ax1, ax2) = plt.subplots(figsize=(14, 5), nrows=1, ncols=2)
rvat_baseline.plot_cp(ax1)
rvat_baseline.plot_cd(ax2, color=sns.color_palette()[2])
fig.tight_layout()
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
<center>
$\lambda = \frac{\omega R}{U_\infty}$ &nbsp; &nbsp; &nbsp; &nbsp; $C_P = \frac{P_\mathrm{mech}}{\frac{1}{2}\rho A_\mathrm{f} U_\infty^3}$ &nbsp; &nbsp; &nbsp; &nbsp; $C_D = \frac{F_\mathrm{drag}}{\frac{1}{2}\rho A_\mathrm{f} U_\infty^2}$
</center>

Baseline wake measurements $(x/D=1)$

<center>
<img width=80% src="figures/unh-rvat-wake-meas-coord-sys.PNG">
</center>

UNH-RVAT baseline wake characteristics
os.chdir(rvat_baseline_dir)
rvat_baseline.plotwake("meancontquiv", scale=1.8)
rvat_baseline.plotwake("kcont", scale=1.8)
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
Mean momentum transport

$$
\begin{split}
\frac{\partial U}{\partial x} = \frac{1}{U} \bigg[ & - V\frac{\partial U}{\partial y} - W\frac{\partial U}{\partial z} \\
& -\frac{1}{\rho}\frac{\partial P}{\partial x} \\
& - \frac{\partial}{\partial x} \overline{u'u'} - \frac{\partial}{\partial y} \overline{u'v'} - \frac{\partial}{\partial z} \overline{u'w'} \\
& + \nu\left(\frac{\partial^2 U}{\partial x^2} + \frac{\partial^2 U}{\partial y^2} + \frac{\partial^2 U}{\partial z^2} \right) \bigg].
\end{split}
$$

Mean kinetic energy transport

$$
\begin{split}
\frac{\partial K}{\partial x} = \frac{1}{U} \bigg[ & - \underbrace{V \frac{\partial K}{\partial y}}_{y\text{-adv.}} - \underbrace{W \frac{\partial K}{\partial z}}_{z\text{-adv.}}
% Pressure work
- \frac{1}{\rho}\frac{\partial}{\partial x_j} P U_i \delta_{ij}
% Work by viscous forces
+ \frac{\partial}{\partial x_j} 2 \nu U_i S_{ij} \\
% Turbulent transport of K
& - \underbrace{ \frac{1}{2}\frac{\partial}{\partial x_j} \overline{u_i' u_j'} U_i }_{\text{Turb. trans.}}
% Production of k
+ \underbrace{ \overline{u_i' u_j'} \frac{\partial U_i}{\partial x_j} }_{k\text{-prod.}}
% Mean dissipation
- \underbrace{ 2 \nu S_{ij}S_{ij} }_{\text{Mean diss.}} \bigg].
\end{split}
$$

Mean momentum transport

Weighted averages at $x/D=1$:
os.chdir("C:/Users/Pete/Research/Experiments/RVAT baseline")
import py_rvat_baseline.plotting as rvat_baseline

rvat_baseline.plotwake("mombargraph", scale=1.8, barcolor=None)
plt.grid(True)

rvat_baseline.plotwake("Kbargraph", scale=1.8, barcolor=sns.color_palette()[1])
plt.grid(True)
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
UNH-RVAT Reynolds number dependence

* Are our results relevant to full scale?
* Models should be validated for the scales at which they will be applied, if possible
* How cheap (small, slow) can experiments get?

$$ Re = \frac{Ul}{\nu} $$

UNH-RVAT Reynolds number dependence
os.chdir(rvat_re_dep_dir)
import py_rvat_re_dep.plotting as rvat_re_dep

fig, (ax1, ax2) = plt.subplots(figsize=(15, 6.5), nrows=1, ncols=2)
rvat_re_dep.plot_perf_curves(ax1, ax2)
fig.tight_layout()
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
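As a rough sanity check of the definition $Re = Ul/\nu$ above: the tow speed $U_\infty = 1$ m/s comes from the baseline experiments, while the 1 m length scale and the kinematic viscosity of water are assumed, illustrative values for this sketch.

```python
def reynolds(U, l, nu):
    """Reynolds number Re = U * l / nu."""
    return U * l / nu

U = 1.0    # tow speed, m/s (from the baseline experiments)
l = 1.0    # assumed length scale, m (e.g., a turbine diameter)
nu = 1e-6  # kinematic viscosity of water at ~20 C, m^2/s

print(f"Re = {reynolds(U, l, nu):.1e}")  # on the order of 10^6
```

Reducing the size or speed of a model turbine drops this number quickly, which is why the scale-dependence question above matters.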
Reynolds number dependence at $\lambda = 1.9$
os.chdir(rvat_re_dep_dir)
import py_rvat_re_dep.plotting as rvat_re_dep

fig, (ax1, ax2) = plt.subplots(figsize=(14, 6), nrows=1, ncols=2)
rvat_re_dep.plot_perf_re_dep(ax1, ax2)
fig.tight_layout()
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
Blade boundary layer dynamics

<center>
<img src="figures/McMasters-Henderson-1980.PNG" width=70%>
<p class="citation">From McMasters and Henderson (1980)</p>
</center>

Wake transport
os.chdir(rvat_re_dep_dir)
import py_rvat_re_dep.plotting as rvat_re_dep

fig, ax = plt.subplots(figsize=(13, 7))
rvat_re_dep.make_mom_bar_graph(ax, print_analysis=False)
fig.tight_layout()
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
Wake transport totals
os.chdir(rvat_re_dep_dir)
import py_rvat_re_dep.plotting as rvat_re_dep

fig, ax = plt.subplots()
rvat_re_dep.plot_wake_trans_totals(ax, emptymarkers=False,
                                   ucolor=sns.color_palette()[0],
                                   kcolor=sns.color_palette()[1])
fig.tight_layout()
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
Numerical modeling

* Experiments are expensive
* Can be difficult to modify, e.g., turbine geometry
* Scaling issues
* Can we compute instead?

Techniques

* Blade element methods: use section characteristics to predict loading
  * Momentum: very cheap; issues with high solidity; very little flow information
  * Vortex (potential flow): cheap; also issues with high solidity; no turbulence
* Navier–Stokes: turbulence modeled via RANS or LES (no DNS possible at this $Re$, yet); highest cost

UNH-RVAT blade-resolved RANS

<div>
<ul>
<li>Simulate baseline with OpenFOAM</li>
<img width="60%" style="float: right" src="figures/3D_vorticity_SA_964_10-threshold.png"/>
<li>Need to resolve the boundary layer</li>
<ul>
<li>Separation</li>
<li>Transition?</li>
</ul>
<li>Turbulence models (eddy viscosity)</li>
<ul>
<li>$k$–$\omega$ SST</li>
<li>Spalart–Allmaras</li>
</ul>
<li>2-D: $\sim 1$ CPU hour</li>
<li>3-D: $\sim 10^4$ CPU hours</li>
<ul>
<li>Feasible to replace experiments?</li>
<li>"Interpolate" wake measurements?</li>
</ul>
</ul>
</div>

Overall mesh topology

<center>
<img width=65% src="figures/2D_mesh.PNG">
</center>

Near-wall blade mesh

<center>
<img width=60% src="figures/2D_blade_mesh_closeup.png">
</center>

$$ y^+ \sim 1 $$

Verification (2-D)

<center>
<img width=90% src="figures/br-cfd-verification.png">
</center>

Performance predictions

<center>
<img width=90% src="figures/br-cfd-perf_bar_chart.png">
</center>

Near-wake mean velocity (SA 3-D vs. experiment)

<center>
<img width=65% src="figures/br-cfd-meancontquiv_SpalartAllmaras.png">
</center>
<center>
<img width=65% style="padding-left: 45px" src="figures/br-cfd-meancontquiv_exp.png">
</center>

Near-wake mean velocity (SST 3-D vs. experiment)

<center>
<img width=65% src="figures/br-cfd-meancontquiv_kOmegaSST.png">
</center>
<center>
<img width=65% style="padding-left: 45px" src="figures/br-cfd-meancontquiv_exp.png">
</center>

Near-wake TKE (SA 3-D vs. experiment)

<center>
<img width=80% src="figures/br-cfd-kcont_SpalartAllmaras.png">
</center>
<center>
<img width=80% src="figures/br-cfd-kcont_exp.png">
</center>

Near-wake TKE (SST 3-D vs. experiment)

<center>
<img width=80% src="figures/br-cfd-kcont_kOmegaSST.png">
</center>
<center>
<img width=80% src="figures/br-cfd-kcont_exp.png">
</center>

Near-wake momentum transport

<center>
<img width=90% src="figures/br-cfd-mom_bar_graph.png">
</center>

Summary: Blade-resolved CFD

* 2-D is feasible but a poor predictor of performance and wake
* 3-D may be good for a single turbine, but too expensive for arrays

Actuator line modeling

* Developed by Sorensen and Shen (2002)
* Blade element method coupled with Navier–Stokes
* Saves computational resources
  * No finely resolved blade boundary layers
  * No complicated meshing
  * No mesh motion
* More physical description of wake evolution and turbulence
* Should resolve the wakes of high-solidity turbine blades

Computing blade loading
import warnings
warnings.filterwarnings("ignore")

os.chdir(os.path.join(os.path.expanduser("~"), "Google Drive", "Research", "CFT-vectors"))
import cft_vectors

fig, ax = plt.subplots(figsize=(15, 15))
old_fontsize = plt.rcParams["font.size"]
plt.rcParams["font.size"] *= 1.5
cft_vectors.plot_diagram(fig, ax, theta_deg=52, axis="off", label=True)
os.chdir(talk_dir)
plt.rcParams["font.size"] = old_fontsize
presentation.ipynb
petebachant/2015-UNH-ME-grad-seminar-slides
mit
3. Dimensionality reduction Many times we want to combine variables (for linear regression to avoid multicollinearity, to create indexes, etc) Our data
# pandas and numpy are used throughout this notebook
import numpy as np
import pandas as pd

#Read data
df_companies = pd.read_csv("data/big3_position.csv",sep="\t")
df_companies["log_revenue"] = np.log10(df_companies["Revenue"])
df_companies["log_assets"] = np.log10(df_companies["Assets"])
df_companies["log_employees"] = np.log10(df_companies["Employees"])
df_companies["log_marketcap"] = np.log10(df_companies["MarketCap"])

#Keep only industrial companies
df_companies = df_companies.loc[:,["log_revenue","log_assets","log_employees","log_marketcap","Company_name","TypeEnt"]]
df_companies = df_companies.loc[df_companies["TypeEnt"]=="Industrial company"]

#Drop NaNs
df_companies = df_companies.replace([np.inf,-np.inf],np.nan)
df_companies = df_companies.dropna()
df_companies.head()
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Correlation between variables
# Compute the correlation matrix
corr = df_companies.corr()

# Generate a mask for the upper triangle (hide the upper triangle)
mask = np.zeros_like(corr, dtype=bool)
mask[np.triu_indices_from(mask)] = True

# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, square=True,linewidths=.5,cmap="YlOrRd",vmin=0,vmax=1)
plt.show()
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Revenue, employees and assets are highly correlated. Let's imagine we want to explain the market capitalization in terms of the other variables.
import statsmodels.formula.api as smf  # formula interface used below

mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies)
res = mod.fit()
print(res.summary())

#The residuals are fine
plt.figure(figsize=(4,3))
sns.regplot(res.predict(),df_companies["log_marketcap"] -res.predict())

#Fit several models to see how the coefficients change
from statsmodels.iolib.summary2 import summary_col
mod1 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_companies).fit()
mod2 = smf.ols(formula='log_marketcap ~ log_revenue + log_assets', data=df_companies).fit()
mod3 = smf.ols(formula='log_marketcap ~ log_employees + log_assets', data=df_companies).fit()
mod4 = smf.ols(formula='log_marketcap ~ log_assets', data=df_companies).fit()
mod5 = smf.ols(formula='log_marketcap ~ log_revenue + log_employees ', data=df_companies).fit()
mod6 = smf.ols(formula='log_marketcap ~ log_revenue ', data=df_companies).fit()
mod7 = smf.ols(formula='log_marketcap ~ log_employees ', data=df_companies).fit()
output = summary_col([mod1,mod2,mod3,mod4,mod5,mod6,mod7],stars=True)
print(mod1.rsquared_adj,mod2.rsquared_adj,mod3.rsquared_adj,mod4.rsquared_adj,mod5.rsquared_adj,mod6.rsquared_adj,mod7.rsquared_adj)
output
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
3.1 Combining variables Multiplying/summing variables It's easy It's arbitrary
X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]] X.head(2) #Let's scale all the columns to have mean 0 and std 1 from sklearn.preprocessing import scale X_to_combine = scale(X) X_to_combine #In this case we sum them together X_combined = np.sum(X_to_combine,axis=1) X_combined #Add a new column with our combined variable and run regression df_companies["combined"] = X_combined print(smf.ols(formula='log_marketcap ~ combined ', data=df_companies).fit().summary())
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
3.2 PCA Keep all the info The resulting variables do not actually mean much
#Do the fitting from sklearn.decomposition import PCA pca = PCA(n_components=2) new_X = pca.fit_transform(X) print("Explained variance") print(pca.explained_variance_ratio_) print() print("Weight of components") print(["log_revenue","log_employees","log_assets"]) print(pca.components_) print() new_X #Create our new variables (2 components, so 2 variables) df_companies["pca_x1"] = new_X[:,0] df_companies["pca_x2"] = new_X[:,1] print(smf.ols(formula='log_marketcap ~ pca_x1 + pca_x2 ', data=df_companies).fit().summary()) print("Before") sns.lmplot("log_revenue","log_assets",data=df_companies,fit_reg=False) print("After") sns.lmplot("pca_x1","pca_x2",data=df_companies,fit_reg=False)
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
3.3 Factor analysis The observations are assumed to be caused by a linear transformation of lower dimensional latent factors and added Gaussian noise. Without loss of generality the factors are distributed according to a Gaussian with zero mean and unit covariance. The noise is also zero mean and has an arbitrary diagonal covariance matrix.
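The generative model just described can be simulated directly with NumPy. The loadings `W` and noise variances `psi` below are made-up illustration values; the point is that the model implies $\mathrm{cov}(x) = WW^T + \Psi$, which the sample covariance approaches:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 5000, 3, 1          # samples, observed variables, latent factors

W = np.array([[0.9], [0.8], [0.7]])   # hypothetical factor loadings (p x k)
psi = np.array([0.1, 0.2, 0.1])       # diagonal noise variances

f = rng.standard_normal((n, k))                    # latent factors ~ N(0, I)
eps = rng.standard_normal((n, p)) * np.sqrt(psi)   # independent Gaussian noise
X = f @ W.T + eps                                  # observed data

implied = W @ W.T + np.diag(psi)       # model covariance
empirical = np.cov(X, rowvar=False)    # sample covariance
```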
from sklearn.decomposition import FactorAnalysis fa = FactorAnalysis(n_components=2) new_X = fa.fit_transform(X) print("Weight of components") print(["log_revenue","log_employees","log_assets"]) print(fa.components_) print() new_X #New variables df_companies["fa_x1"] = new_X[:,0] df_companies["fa_x2"] = new_X[:,1] print(smf.ols(formula='log_marketcap ~ fa_x1 + fa_x2 ', data=df_companies).fit().summary()) print("After") sns.lmplot("fa_x1","fa_x2",data=df_companies,fit_reg=False)
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Difference between FA and PCA http://stats.stackexchange.com/questions/1576/what-are-the-differences-between-factor-analysis-and-principal-component-analysi Principal component analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors. 3.4 Methods to avoid overfitting (Machine learning with regularization) SVR: http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVR.html Lasso regression: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html Both have a regularization parameter, that penalizes having many terms. How to choose the best value of this parameter? - With a train_test split (or cross-validation) - http://scikit-learn.org/stable/modules/cross_validation.html - http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
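The selection logic behind a train/test split can be sketched in miniature with plain NumPy. Here the tuning knob is a polynomial degree rather than Lasso's `alpha` or SVR's `C`, but the recipe is the same: fit each candidate on the training portion, score it on held-out data, keep the winner. (The data below are synthetic, purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

# Random train/validation split (the role train_test_split plays)
idx = rng.permutation(x.size)
train, val = idx[:140], idx[140:]

def val_error(degree):
    """Fit a polynomial on the training set, return MSE on the validation set."""
    coefs = np.polyfit(x[train], y[train], degree)
    pred = np.polyval(coefs, x[val])
    return np.mean((y[val] - pred) ** 2)

degrees = list(range(1, 11))
errors = [val_error(d) for d in degrees]
best = degrees[int(np.argmin(errors))]   # degree with lowest held-out error
```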
Image(url="http://www.holehouse.org/mlclass/07_Regularization_files/Image.png") from sklearn.model_selection import train_test_split y = df_companies["log_marketcap"] X = df_companies.loc[:,["log_revenue","log_employees","log_assets"]] X.head(2) X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33) X_train.head()
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Linear regression (to compare)
df_train = X_train.copy() df_train["log_marketcap"] = y_train df_train.head() mod = smf.ols(formula='log_marketcap ~ log_revenue + log_employees + log_assets', data=df_train).fit() print("log_revenue log_employees log_assets ") print(mod.params.values[1:])
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
SVR - Gives balanced weights (the most correlated independent variable (with the dependent) doesn't take all the weight). - Very good when you have hundreds of variables. You can iteratively drop the worst predictor. - It allows for more than linear "regression". The default kernel is "rbf", which fits curves. The problem is that interpreting it is hard.
from sklearn.svm import SVR clf = SVR(C=0.1, epsilon=0.2,kernel="linear") clf.fit(X_train, y_train) print("log_revenue log_employees log_assets ") print(clf.coef_)
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Lasso - Has a penalty - Discards the variables with low weights.
from sklearn import linear_model reg = linear_model.Lasso(alpha = 0.01) reg.fit(X_train,y_train) print("log_revenue log_employees log_assets ") print(reg.coef_)
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Summary
import sklearn.metrics  # needed for the error metrics below

print(["SVR","Lasso","Linear regression"])
err1 = sklearn.metrics.mean_squared_error(clf.predict(X_test),y_test)
err2 = sklearn.metrics.mean_squared_error(reg.predict(X_test),y_test)
err3 = sklearn.metrics.mean_squared_error(mod.predict(X_test),y_test)
print(err1,err2,err3)

print(["SVR","Lasso","Linear regression"])
err1 = sklearn.metrics.r2_score(clf.predict(X_test),y_test)
err2 = sklearn.metrics.r2_score(reg.predict(X_test),y_test)
err3 = sklearn.metrics.r2_score(mod.predict(X_test),y_test)
print(err1,err2,err3)
class8/class8_dimensionality_reduction.ipynb
jgarciab/wwd2017
gpl-3.0
Hessian function for marking corners
# Assumes earlier cells provide: `from pylab import *`, `from PIL import Image`,
# `import time`, the Gaussian helpers gfilter/gfilter1/gfilter2, the filter
# half-width `size`, and the helpers convolve() and plott().
# inputt - the input image name
# s      - standard deviation value
# t      - threshold on the eigenvalues to be considered a corner
def hessian(inputt,s,t):
    start = time.time()
    I = array(Image.open(inputt).convert('L')) # reads the input image into I
    G = []
    for i in range(-2,2+1):
        G.append(gfilter(i,0,s)) # equating y to 0 for a 1D matrix
    Gx = [] # Gaussian derivative in x direction
    for i in range(-size,size+1):
        Gx.append(gfilter1(i,0,s,'x'))
    Gy = [] # Gaussian derivative in y direction
    for i in range(-size,size+1):
        Gy.append([gfilter1(0,i,s,'y')])
    Gx2 = [] # second derivative of the Gaussian in x
    for i in range(-size,size+1):
        Gx2.append(gfilter2(i,0,s,'x'))
    Gy2 = [] # second derivative of the Gaussian in y
    for i in range(-size,size+1):
        Gy2.append([gfilter2(0,i,s,'y')])
    Ix = []
    for i in range(len(I[:,0])):
        Ix.extend([convolve(I[i,:],Gx)]) # I*G in x direction
    Ix = array(matrix(Ix))
    Iy = []
    for i in range(len(I[0,:])):
        Iy.extend([convolve(I[:,i],Gx)]) # I*G in y direction
    Iy = array(matrix(transpose(Iy)))
    Ixx = []
    for i in range(len(Ix[:,0])):
        Ixx.extend([convolve(Ix[i,:],Gx2)]) # Ix * Gx2 in x direction
    Ixx = array(matrix(Ixx))
    Iyy = []
    for i in range(len(Iy[0,:])):
        Iyy.extend([convolve(Iy[:,i],Gx2)]) # Iy * Gx2 in y direction
    Iyy = array(matrix(transpose(Iyy)))
    Ixy = []
    for i in range(len(Ix[0,:])):
        Ixy.extend([convolve(Ix[:,i],Gx)]) # mixed derivative: d/dy of Ix
    Ixy = array(matrix(transpose(Ixy)))
    # store values in x,y to plot the corners
    x = [] # array x[] stores x coordinates of the corners
    y = [] # array y[] stores y coordinates of the corners
    for i in range(len(I[:,0])):
        for j in range(len(I[0,:])):
            H1 = linalg.eigvals(([Ixx[i,j],Ixy[i,j]],[Ixy[i,j],Iyy[i,j]]))
            if((abs(H1[0])>t) & (abs(H1[1])>t)): # if corner
                y.append(i-2) # appending y index to mark the corner
                x.append(j-2) # appending x index to mark the corner
    plott(I,x,y)
    return time.time() - start

s = 1.5 # input standard deviation aka sigma value
inp1 = hessian('/home/srikar/CVPA1/CVV/input1.png',s,3.95695) # size of filter 3.95695
inp2 = hessian('/home/srikar/CVPA1/CVV/input2.png',s,5)
inp3 = hessian('/home/srikar/CVPA1/CVV/input3.png',s,5)
print ('The time for execution is:\nInput Image 1: %.2f seconds\nInput Image 2: %.2f seconds\nInput Image 3: %.2f seconds'%(inp1,inp2,inp3))
PA1-Q3-1.ipynb
srikarpv/CV_PA1
mit
Links
%%HTML <a href="http://beyond.io"> This link will take you on a journey to the beyond!</a> %%HTML <a href="http://unsplash.com"> <img width=400px src="https://images.unsplash.com/photo-1460400355256-e87506dcec4f?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=f35997a68b394f08caf7d46cf2d27791"/> <p> Here I've made this entire section a link.</p> </a>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
Content Arrangement Spans, divs and breaks are ways to arrange content. They do not do much alone. Instead you use javascript and CSS to manipulate them. For example, divs are how jupyter notebooks separate input cells, from the notebook, from the top, etc. Inspecting Tools The best way to learn HTML is to look at different webpages. Try right-cliking and inspecting something! Connecting CSS with HTML
%%HTML <h1> This is a header</h1> <style> h1 { color:red; } </style>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
Wow Notice it made all headers red In order to finely tune the connection between CSS, JS and HTML, there is something called selectors.
%%HTML <h1 class="attention-getter"> This is an important header</h1> <style> .attention-getter { color:blue; } </style>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
There is an established philosophy that HTML = content, CSS = style. Therefore, it's incorrect to create a class called "blue-class" because now you've spilled over some style into your HTML. <span class="attention-getter"> Testing out your own stuff</span> So with spans and divs, we're able to attach classes to HTML elements. If you want to learn more, I would recommend jsfiddle, which is a wonderful place for testing things out. If you want to have a structured lesson, check out w3schools and Codecademy Javascript JS is the programming language of the web. It allows you to manipulate elements.
%%HTML
<h3 class="grab-me"> This header will change soon</h3>

%%javascript
var grabme = document.querySelector('.grab-me');
grabme.textContent = 'Hoopla';

%%HTML
<ul class="fill-me">
</ul>

%%javascript
var fruits = ['strawberry', 'mango', 'banana'];
fruits.forEach(function(i) {
    document.querySelector('.fill-me').innerHTML += '<li>' + i + '</li>';
});
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
There's a lot of stuff going on in there! Here are some differences with Python: You need semicolons at the end of lines The for-loop syntax is different. Rather than have code below, we call a forEach function and give it a function we define right there. We also used the innerHTML instead of textContent this time around Using HTML, CSS, and JS in your notebook Example 1 &mdash; Changing font Using the inspector, I've discovered that all the paragraphs in rendered cells can be selected by .rendered_html. I'll now change their fonts This cell below grabs the fonts from google's font collection
%%HTML <link href='https://fonts.googleapis.com/css?family=Roboto+Condensed:300|Oswald:400,700' rel='stylesheet' type='text/css'> %%HTML <style> .rendered_html { font-family: 'Roboto Condensed'; font-size: 125%; } </style>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
I can also change headings to be something different
%%HTML <style> .rendered_html > h1,h2,h3,h4,h5,h6 { font-family: 'Oswald'; } </style>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
Example 2 &mdash; Changing the background Using the inspector, I've found that the dull gray background is from the .notebook_app.
%%HTML
<style>
.notebook_app {
    background-image: url('https://images.unsplash.com/photo-1453106037972-08fbfe790762?ixlib=rb-0.3.5&q=80&fm=jpg&crop=entropy&s=cbaaa89f2c5394ff276ac2ccbfffd4a4');
    background-repeat: no-repeat;
    background-size: cover;
}
</style>
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
Example 3 &mdash; Creating a nice results report If you just web-search for HTML table, you'll see the syntax. Basically you have this: ``` <table> <tr> each row </tr> </table> ``` with more than one column: ``` <table> <tr> <td> row 1 column 1</td> <td> column 2</td> </tr> </table> ```
import IPython.display as display def make_pretty_table(x, y): html = ''' <table> <tr> <th> x</th> <th> y </th> </tr> ''' for xi,yi in zip(x,y): html += '<tr> <td> {} </td> <td> {} </td> </tr> \n'.format(xi,yi) html += '</table>' d = display.HTML(html) display.display(d) x = range(10) y = range(10,20) make_pretty_table(x,y) def make_really_pretty_table(x, y): html = ''' <table class='pretty-table'> <tr> <th> x</th> <th> y </th> </tr> ''' for xi,yi in zip(x,y): html += '<tr> <td> {} </td> <td> {} </td> </tr> \n'.format(xi,yi) html += '</table>' html += ''' <style> .pretty-table > tbody > tr:nth-child(odd) { background-color: #ccc; } .pretty-table > tbody > tr > td, th { text-align: center !important; } </style> ''' d = display.HTML(html) display.display(d) x = range(10) y = range(10,20) make_really_pretty_table(x,y)
unit_17/lectures/lecture_2.ipynb
whitead/numerical_stats
gpl-3.0
Third example: Simple rectangular section with 7 nodes Define graph describing the section: 1) stringers are nodes with parameters: - x coordinate - y coordinate - Area 2) panels are oriented edges with parameters: - thickness - length, which is automatically calculated
import sympy

# The symbols a, h, A, t (geometry and section properties) are defined in earlier cells
stringers = {1:[(3*a,h),A],
             2:[(2*a,h),A],
             3:[(a,h),A],
             4:[(sympy.Integer(0),h),A],
             5:[(sympy.Integer(0),sympy.Integer(0)),A],
             6:[(sympy.Rational(3,2)*a,sympy.Integer(0)),A],
             7:[(3*a,sympy.Integer(0)),A]}

panels = {(1,2):t,
          (2,3):t,
          (3,4):t,
          (4,5):t,
          (5,6):t,
          (6,7):t,
          (7,1):t}
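The panel length mentioned above is just the Euclidean distance between a panel's two stringers. A plain-float sketch, with made-up numeric values standing in for the symbols `a` and `h`:

```python
from math import hypot

a, h = 1.0, 0.5   # illustrative numeric stand-ins for the sympy symbols

coords = {1: (3*a, h), 2: (2*a, h), 3: (a, h), 4: (0.0, h),
          5: (0.0, 0.0), 6: (1.5*a, 0.0), 7: (3*a, 0.0)}

def panel_length(edge):
    """Length of the panel joining the two stringers of an oriented edge."""
    (x1, y1), (x2, y2) = coords[edge[0]], coords[edge[1]]
    return hypot(x2 - x1, y2 - y1)

lengths = {e: panel_length(e)
           for e in [(1,2), (2,3), (3,4), (4,5), (5,6), (6,7), (7,1)]}
```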
06_CorrectiveSolutions-7nodes.ipynb
Ccaccia73/semimonocoque
mit
Discretization We load the trajectory data generated by Brownian Dynamics simulations.
import h5py
import numpy as np
import matplotlib.pyplot as plt

h5file = "data/cossio_kl0_Dx1_Dq1.h5"
f = h5py.File(h5file, 'r')
data = np.array(f['data'])
f.close()

fig, ax = plt.subplots(figsize=(12,3))
ax.plot(data[:,0],data[:,1],'.', markersize=1)
ax.set_ylim(-8,8)
ax.set_xlim(0,250000)
ax.set_ylabel('x')
ax.set_xlabel('time')
plt.tight_layout()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Clearly the system interconverts between two states. We can obtain a potential of mean force from a Boltzmann inversion of the probability distribution.
fig, ax = plt.subplots(figsize=(6,4)) hist, bin_edges = np.histogram(data[:,1], bins=np.linspace(-6.5,6.5,25),\ density=True) bin_centers = [0.5*(bin_edges[i]+bin_edges[i+1]) \ for i in range(len(bin_edges)-1)] ax.plot(bin_centers, -np.log(hist), lw=4) ax.set_xlim(-7,7) ax.set_ylim(0,8) ax.set_xlabel('x') _ = ax.set_ylabel('F ($k_BT$)') plt.tight_layout()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Instead of defining two states using an arbitrary cutoff in our single dimension, we discretize the trajectory by assigning frames to microstates. In this case we use as microstates the indexes of a grid on x.
assigned_trj = list(np.digitize(data[:,1],bins=bin_edges))
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
In this way, the continuous coordinate x is mapped onto a discrete microstate space.
fig,ax=plt.subplots(2,1, sharex=True) plt.subplots_adjust(wspace=0, hspace=0) ax[0].plot(range(100,len(data[:,1][:300])),data[:,1][100:300], lw=2) ax[1].step(range(100,len(assigned_trj[:300])),assigned_trj[100:300], color="g", lw=2) ax[0].set_xlim(100,300) ax[0].set_ylabel('x') ax[1].set_ylabel("state") ax[1].set_xlabel("time") plt.tight_layout()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
We then pass the discrete trajectory to the traj module to generate an instance of the TimeSeries class. Using some of its methods, we are able to generate and sort the names of the microstates in the trajectory, which will be useful later.
from mastermsm.trajectory import traj distraj = traj.TimeSeries(distraj=assigned_trj, dt=1) distraj.find_keys() distraj.keys.sort()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Master Equation Model After generating the discrete trajectory, we can build the master equation model, for which we use the msm module.
from mastermsm.msm import msm
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
First of all, we will create an instance of the SuperMSM class, which will be useful to produce and validate dynamical models.
msm_1D=msm.SuperMSM([distraj], sym=True)
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
For the simplest type of dynamical model validation, we carry out a convergence test to check that the relaxation times $\tau$ do not show a dependency on the lag time. We build the MSM at different lag times $\Delta t$.
for lt in [1, 2, 5, 10, 20, 50, 100]: msm_1D.do_msm(lt) msm_1D.msms[lt].do_trans(evecs=True) msm_1D.msms[lt].boots()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
We then check the dependence of the relaxation times of the system, $\tau$, with respect to the choice of lag time $\Delta t$. We find that they are very well converged even for the shortest value of $\Delta t$.
tau_vs_lagt = np.array([[x,msm_1D.msms[x].tauT[0], msm_1D.msms[x].tau_std[0]] \ for x in sorted(msm_1D.msms.keys())]) fig, ax = plt.subplots() ax.errorbar(tau_vs_lagt[:,0],tau_vs_lagt[:,1],fmt='o-', yerr=tau_vs_lagt[:,2], markersize=10) #ax.fill_between(10**np.arange(-0.2,3,0.2), 1e-1, 10**np.arange(-0.2,3,0.2), facecolor='lightgray') ax.fill_between(tau_vs_lagt[:,0],tau_vs_lagt[:,1]+tau_vs_lagt[:,2], \ tau_vs_lagt[:,1]-tau_vs_lagt[:,2], alpha=0.1) ax.set_xlabel(r'$\Delta$t', fontsize=16) ax.set_ylabel(r'$\tau$', fontsize=16) ax.set_xlim(0.8,120) ax.set_ylim(2e2,500) ax.set_yscale('log') ax.set_xscale('log') plt.tight_layout()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
While this is not the most rigorous test we can do, it already gives some confidence on the dynamical model derived. We can inspect the count and transition matrices at even the shortest lag time
lt=1 # lag time msm_1D.do_msm(lt) msm_1D.msms[lt].do_trans(evecs=True) msm_1D.msms[lt].boots() plt.figure() plt.imshow(np.log10(msm_1D.msms[lt].count), interpolation='none', \ cmap='viridis_r', origin='lower') plt.ylabel('$\it{j}$') plt.xlabel('$\it{i}$') plt.title('Count matrix (log), $\mathbf{N}$') plt.colorbar() #plt.savefig("../../paper/figures/1d_count.png", dpi=300, transparent=True) plt.figure() plt.imshow(np.log10(msm_1D.msms[lt].trans), interpolation='none', \ cmap='viridis_r', vmin=-3, vmax=0, origin='lower') plt.ylabel('$\it{j}$') plt.xlabel('$\it{i}$') plt.title('Transition matrix (log), $\mathbf{T}$') _ = plt.colorbar() msm_1D.do_lbrate() plt.figure() plt.imshow(msm_1D.lbrate, interpolation='none', \ cmap='viridis_r', origin='lower', vmin=-0.5, vmax=0.1) plt.ylabel('$\it{j}$') plt.xlabel('$\it{i}$') plt.title('Rate matrix, $\mathbf{K}$') plt.colorbar()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Analysis of the results The spectrum of relaxation times, $\tau_i$, derived from the eigenvalues of the transition matrix, is also remarkable. This system is expected to have a unique slow mode, corresponding to the barrier crossing process between the wells. In fact, that is precisely what we find.
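The conversion from transition-matrix eigenvalues to relaxation times is $\tau_i = -\Delta t / \ln \lambda_i$ for the eigenvalues below the stationary one ($\lambda_0 = 1$). A self-contained sketch on an invented 3-state matrix (not this system's):

```python
import numpy as np

dt = 1.0
# Hypothetical row-stochastic transition matrix (rows sum to 1)
T = np.array([[0.90, 0.08, 0.02],
              [0.10, 0.85, 0.05],
              [0.02, 0.08, 0.90]])

evals = np.sort(np.linalg.eigvals(T).real)[::-1]   # lambda_0 = 1 first

# Relaxation times from the non-stationary eigenvalues
taus = -dt / np.log(evals[1:])
```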
fig, ax = plt.subplots() ax.errorbar(range(1,len(msm_1D.msms[lt].tauT)+1),msm_1D.msms[lt].tauT, fmt='o-', \ yerr= msm_1D.msms[lt].tau_std, ms=10) ax.fill_between(range(1,len(msm_1D.msms[1].tauT)+1), \ np.array(msm_1D.msms[lt].tauT)+np.array(msm_1D.msms[lt].tau_std), \ np.array(msm_1D.msms[lt].tauT)-np.array(msm_1D.msms[lt].tau_std)) ax.set_xlabel('Eigenvalue') ax.set_ylabel(r'$\tau_i$') ax.set_yscale('log') plt.tight_layout()
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
From the eigenvectors we can also retrieve valuable information. The zeroth eigenvector, $\Psi^R_0$, corresponds to the equilibrium distribution. The slowest mode in our model, captured by the first eigenvector $\Psi^R_1$, corresponds to the transition between the folded and unfolded states of the protein.
fig, ax = plt.subplots(2,1, sharex=True) ax[0].plot(-msm_1D.msms[1].rvecsT[:,0]) ax[0].fill_between(range(len(msm_1D.msms[1].rvecsT[:,0])), \ -msm_1D.msms[1].rvecsT[:,0], 0, alpha=0.5) #ax[0].set_ylim(0,0.43) ax[1].plot(msm_1D.msms[1].rvecsT[:,1]) ax[1].axhline(0,0,25, c='k', ls='--', lw=1) ax[1].fill_between(range(len(msm_1D.msms[1].rvecsT[:,1])), \ msm_1D.msms[1].rvecsT[:,1], 0, alpha=0.5) ax[1].set_xlim(0,25) ax[1].set_xlabel("state") ax[0].set_ylabel("$\Psi^R_0$") ax[1].set_ylabel("$\Psi^R_1$") plt.tight_layout(h_pad=0)
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Calculation of committors Using the method by Berezhkovskii, Hummer and Szabo (JCP, 2009), we calculate the value of the committor (or $p_{fold}$ in the context of protein folding), the probability that the system will reach one state before returning to the other. For this we must first define microstates which are definitely one or the other, which we do with options FF and UU.
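The committor itself solves a small linear system: $q = 0$ on the UU states, $q = 1$ on the FF states, and $q_i = \sum_j T_{ij} q_j$ for intermediate states. A self-contained sketch on an invented 5-state chain (not the notebook's model, and a plain linear solve rather than the Berezhkovskii-Hummer-Szabo formulation):

```python
import numpy as np

# Hypothetical 5-state nearest-neighbour transition matrix (rows sum to 1)
T = np.array([[0.8, 0.2, 0.0, 0.0, 0.0],
              [0.3, 0.5, 0.2, 0.0, 0.0],
              [0.0, 0.2, 0.6, 0.2, 0.0],
              [0.0, 0.0, 0.2, 0.5, 0.3],
              [0.0, 0.0, 0.0, 0.2, 0.8]])

UU, FF = [0], [4]        # boundary ("unfolded" / "folded") states
inter = [1, 2, 3]        # intermediate states

# q_i = sum_j T_ij q_j for intermediates, with q = 0 on UU and q = 1 on FF
A = np.eye(len(inter)) - T[np.ix_(inter, inter)]
b = T[np.ix_(inter, FF)].sum(axis=1)

q = np.zeros(T.shape[0])
q[FF] = 1.0
q[inter] = np.linalg.solve(A, b)
```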
msm_1D.msms[1].do_rate() FF = list(range(19,22,1)) UU = list(range(4,7,1)) msm_1D.msms[1].do_pfold(FF=FF, UU=UU) msm_1D.msms[1].peqT fig, ax = plt.subplots() ax.set_xlim(0,25) axalt = ax.twinx() axalt.plot(-np.log(msm_1D.msms[1].peqT), alpha=0.3, c='b') axalt.fill_between(range(len(msm_1D.msms[1].rvecsT[:,1])), \ -np.log(msm_1D.msms[1].peqT), 0, alpha=0.25, color='b') axalt.fill_between([FF[0], FF[-1]], \ 10, 0, alpha=0.15, color='green') axalt.set_ylim(0,10) axalt.fill_between([UU[0], UU[-1]], \ 10, 0, alpha=0.15, color='red') ax.plot(msm_1D.msms[1].pfold, c='k', lw=3) ax.set_ylabel('$\phi$') axalt.set_ylabel(r'${\beta}G$', color='b') ax.set_xlabel('state') msm_1D.msms[1].do_pfold(FF=FF, UU=UU) print (msm_1D.msms[1].kf)
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Sensitivity analysis Using the methods described in De Sancho et al (JCTC, 2015), we calculate the value of the sensitivity parameter, indicating the changes in the global rate (actually, in $\ln(k_{UU\rightarrow FF})$) upon changes in the free energy of each microstate.
msm_1D.msms[1].sensitivity(FF=FF, UU=UU) plt.plot(msm_1D.msms[1].d_lnkf, 'k', lw=3) plt.fill_between([FF[0], FF[-1]], \ 0.2, -0.1, alpha=0.15, color='green') plt.fill_between([UU[0], UU[-1]], \ 0.2, -0.1, alpha=0.15, color='red') plt.xlabel('state') plt.ylabel(r'$\alpha$') plt.xlim(0,25) plt.ylim(-0.1,0.2)
examples/brownian_dynamics_1D/1D_smFS_MSM.ipynb
daviddesancho/MasterMSM
gpl-2.0
Get some data to play with
from sklearn.datasets import load_digits
digits = load_digits()

# train_test_split lives in sklearn.model_selection in current scikit-learn
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
X_train.shape
First Steps.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
3) Apply / evaluate
print(svm.predict(X_test)) svm.score(X_train, y_train) svm.score(X_test, y_test)
First Steps.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
<h3 style="color:#1f77b4">Calling CAS Actions</h3>
conn.serverstatus() conn.userinfo() conn.help();
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> Loading Data </h3>
tbl2 = conn.read_csv('https://raw.githubusercontent.com/' 'sassoftware/sas-viya-programming/master/data/cars.csv', casout=conn.CASTable('cars')) tbl2
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> CASTable </h3> <br/> CASTable objects contain a reference to a CAS table as well as filtering and grouping options, and computed columns.
conn.tableinfo() tbl = conn.CASTable('attrition') tbl.columninfo() tbl2? tbl2.fetch()
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> Exploring Data </h3>
tbl.summary() tbl.freq(inputs='Attrition')
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> Building Analytical Models </h3>
conn.loadactionset('regression') conn.help(actionset='regression'); output = tbl.logistic( target='Attrition', inputs=['Gender', 'MaritalStatus', 'AccountAge'], nominals = ['Gender', 'MaritalStatus'] ) output.keys() output from swat.render import render_html render_html(output)
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> Pandas-style DataFrame API </h3> <br/> Many Pandas DataFrame features are available on the CASTable objects.
import pandas as pd df = pd.read_csv('https://raw.githubusercontent.com/' 'sassoftware/sas-viya-programming/master/data/cars.csv') df.describe() tbl2.describe() tbl2.groupby('Origin').describe() tbl[['Gender', 'AccountAge']].head()
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
<h3 style="color:#1f77b4"> Visualization </h3>
from bokeh.plotting import show, figure from bokeh.charts import Bar from bokeh.io import output_notebook output_notebook() output1 = tbl.freq(inputs=['Attrition']) p = Bar(output1['Frequency'], 'FmtVar', values='Frequency', color="#1f77b4", agg='mean', title="", xlabel = "Attrition", ylabel = 'Frequency', bar_width=0.8, plot_width=600, plot_height=400 ) show(p)
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
conn.tableinfo() tbl2.groupby(['Origin', 'Type']).describe() tbl2[['MPG_CITY', 'MPG_Highway', 'MSRP']].describe() tbl2[(tbl2.MSRP > 90000) & (tbl2.Cylinders < 12)].head()
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
conn.runcode(code=''' data cars_temp; set cars; sqrt_MSRP = sqrt(MSRP); MPG_avg = (MPG_city + MPG_highway) / 2; run; ''') conn.tableinfo() conn.loadactionset('fedsql') conn.fedsql.execdirect(query=''' select make, model, msrp, mpg_highway from cars where msrp > 80000 and mpg_highway > 20 ''')
python/AX2016/Using Python With SAS Cloud Analytic Services (CAS).ipynb
sassoftware/sas-viya-programming
apache-2.0
Get a list of URLs 获取URL的列表 Search and scroll 搜索并翻看 Go to Google Images and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do.<br> 打开Google Images页面,搜索你感兴趣的图片。你在搜索框中输入的信息越精确,那么搜索的结果就越好,而需要你手动处理的工作就越少。 Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700.<br> 往下翻页直到你看到所有你想下载的图片,或者直到你看到一个"显示更多结果"的按钮为止。你刚翻看过的所有图片都是可下载的。为了获得更多的图片,点击"显示更多结果"按钮,继续翻看。Google Images最多可以显示700张图片。 It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants:<br> 在搜索请求框中增加一些你想排除在外的信息是个好主意。比如,如果你要搜canis lupus lupus这一类欧亚混血狼,最好筛除掉别的种类(这样返回的结果才比较靠谱) "canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown.<br> 你也可以限制搜索的结果,让搜索结果只显示照片,通过点击工具从Type里选择照片进行下载。 Download into file 下载到文件中 Now you must run some Javascript code in your browser which will save the URLs of all the images you want for your dataset.<br> 现在你需要在浏览器中运行一些javascript代码,浏览器将保存所有你想要放入数据集的图片的URL地址。 Press <kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd> in Windows/Linux and <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd> in Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands.<br> (浏览器窗口下)windows/linux系统按<kbd>Ctrl</kbd><kbd>Shift</kbd><kbd>J</kbd>,Mac系统按 <kbd>Cmd</kbd><kbd>Opt</kbd><kbd>J</kbd>,就会弹出javascript的"控制台"面板,在这个面板中,你可以把相关的javascript命令粘贴进去。 You will need to get the urls of each of the images.
Before running the following commands, you may want to disable ad-blocking add-ons (e.g. uBlock) in Chrome. Otherwise the window.open() command doesn't work. Then you can run the following commands:<br> 你需要获得每个图片对应的url。在运行下面的代码之前,你可能需要在Chrome中禁用广告拦截插件,否则window.open()函数将不能工作。然后你就可以运行下面的代码: javascript urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=&gt;JSON.parse(el.textContent).ou); window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n'))); Create directory and upload urls file into your server 创建一个目录并将url文件上传到服务器上 Choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.<br> 为带标签的图片选择一个合适的名字,你可以多次执行下面的步骤来创建不同的标签。
folder = 'black' file = 'urls_black.csv' folder = 'teddys' file = 'urls_teddys.csv' folder = 'grizzly' file = 'urls_grizzly.csv'
zh-nbs/Lesson2_download.ipynb
fastai/course-v3
apache-2.0
You will need to run this cell once per each category.<br> 下面的单元格,每一个品种运行一次。
path = Path('data/bears') dest = path/folder dest.mkdir(parents=True, exist_ok=True) path.ls()
zh-nbs/Lesson2_download.ipynb
fastai/course-v3
apache-2.0
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files.<br> 最后,上传你的url文件。你只需要在工作区点击“Upload”按钮,然后选择你要上传的文件,再点击“Upload”即可。 Download images 下载图片 Now you will need to download your images from their respective urls.<br> 现在,你要做的是从图片对应的url地址下载这些图片。 fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder and this function will download and save all images that can be opened. If they have some problem in being opened, they will not be saved.<br> fast.ai提供了一个函数来完成这个工作。你只需要指定url地址文件名和目标文件夹,这个函数就能自动下载和保存可打开的图片。如果图片本身无法打开的话,对应图片也不会被保存. Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls.<br> 我们开始下载图片吧!注意你可以设定需要下载的最大图片数量,这样我们就不会下载所有url地址了。 You will need to run this line once for every category.<br> 下面这行代码,每一个品种运行一次。
classes = ['teddys','grizzly','black']

download_images(path/file, dest, max_pics=200)

# If you have problems downloading, try with `max_workers=0` to see exceptions:
# download_images(path/file, dest, max_pics=20, max_workers=0)
Then we can remove any images that can't be opened: <br> 然后我们可以删除任何不能打开的图片:
for c in classes:
    print(c)
    verify_images(path/c, delete=True, max_size=500)
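`verify_images` tries to open each downloaded file and deletes the ones it can't decode. A rough standard-library approximation of the idea (not fastai's actual implementation, which decodes with PIL) is to check the file's magic bytes:

```python
from pathlib import Path

# Leading bytes of the common formats an image search returns.
MAGIC = (b"\xff\xd8\xff",  # JPEG
         b"\x89PNG",       # PNG
         b"GIF8")          # GIF

def looks_like_image(path):
    head = Path(path).read_bytes()[:4]
    return any(head.startswith(m) for m in MAGIC)

def prune_broken(folder):
    # Delete files that don't start with a known image signature
    # (e.g. an HTML error page saved under a .jpg name).
    for f in Path(folder).iterdir():
        if f.is_file() and not looks_like_image(f):
            f.unlink()
```

This only catches files that aren't images at all; fastai's check is stricter because it fully decodes each image.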
## View data 浏览数据
np.random.seed(42)
data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2,
        ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)

# If you already cleaned your data, run this cell instead of the one before
# 如果你已经清洗过你的数据,直接运行这格代码而不是上面的
# np.random.seed(42)
# data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv',
#         ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
Good! Let's take a look at some of our pictures then.<br> 好!我们浏览一些照片。
data.classes

data.show_batch(rows=3, figsize=(7,8))

data.classes, data.c, len(data.train_ds), len(data.valid_ds)
## Train model 训练模型
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)
learn.save('stage-1')

learn.unfreeze()
learn.lr_find()
learn.recorder.plot()

learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4))
learn.save('stage-2')
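Passing `max_lr=slice(3e-5,3e-4)` asks fastai to spread learning rates across the layer groups after unfreezing, so the early (pretrained) layers train at the low end and the new head at the high end. The spread is roughly geometric; a small sketch of that idea (our own helper, not fastai's exact `lr_range` code):

```python
def spread_lrs(lr_min, lr_max, n_groups):
    # Geometric spacing from lr_min (earliest layers) to lr_max (head).
    if n_groups == 1:
        return [lr_max]
    ratio = (lr_max / lr_min) ** (1.0 / (n_groups - 1))
    return [lr_min * ratio ** i for i in range(n_groups)]
```

For three layer groups, `spread_lrs(3e-5, 3e-4, 3)` puts the middle group about midway between the two endpoints on a log scale.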
## Interpretation 结果解读
learn.load('stage-2');

interp = ClassificationInterpretation.from_learner(learn)
interp.plot_confusion_matrix()
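What `plot_confusion_matrix` draws is just a tally of (actual, predicted) pairs over the validation set. A minimal standard-library version of that tally (using our bear labels as an example, nothing fastai-specific):

```python
from collections import Counter

def confusion_matrix(y_true, y_pred, labels):
    # rows = actual class, columns = predicted class
    counts = Counter(zip(y_true, y_pred))
    return [[counts[(t, p)] for p in labels] for t in labels]
```

Off-diagonal entries are the mistakes; the biggest ones tell you which pairs of classes the model confuses.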
## Cleaning Up 清理

Some of our top losses aren't due to bad performance by our model. There are images in our data set that shouldn't be.<br> 某些最大误差,不是由于模型的性能差导致的,而是由于数据集中的有些图片本身存在问题才导致的。

Using the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.<br> 从fastai.widgets库中导入并使用ImageCleaner小工具,我们就可以剔除那些归类错误的图片,从而减少预测失误。
from fastai.widgets import *
First we need to get the file paths from our top_losses. We can do this with `.from_toplosses`. We then feed the top losses indexes and corresponding dataset to `ImageCleaner`.<br> 首先,我们可以借助.from_toplosses,从top_losses中获取我们需要的文件路径。随后喂给ImageCleaner误差高的索引以及对应的数据集参数。

Notice that the widget will not delete images directly from disk but it will create a new csv file `cleaned.csv` from where you can create a new ImageDataBunch with the corrected labels to continue training your model.<br> 需要注意的是,这些小工具本身并不会直接从磁盘删除图片,它会创建一个新的csv文件cleaned.csv,通过这个文件,你可以新创建一个包含准确标签信息的ImageDataBunch(图片数据堆),并继续训练你的模型。

In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demonstrated the use of the `ds_type` param which no longer has any effect. See the thread for more details.<br> 为了清空整个图片集,我们需要创建一个新的未经分拆的数据集。视频课程里演示的ds_type 参数的用法已经不再有效。参照 the thread 来获取更多细节。
db = (ImageList.from_folder(path)
               .no_split()
               .label_from_folder()
               .transform(get_transforms(), size=224)
               .databunch()
     )

# If you already cleaned your data using indexes from `from_toplosses`,
# 如果你已经从`from_toplosses`使用indexes清理了你的数据
# run this cell instead of the one before to proceed with removing duplicates.
# 运行这个单元格里面的代码(而非上面单元格的内容)以便继续删除重复项
# Otherwise all the results of the previous step would be overwritten by
# the new run of `ImageCleaner`.
# 否则前一个步骤中的结果都会被覆盖
# 下面就是要运行的`ImageCleaner`代码,请把下面的注释去掉开始运行

# db = (ImageList.from_csv(path, 'cleaned.csv', folder='.')
#                .no_split()
#                .label_from_df()
#                .transform(get_transforms(), size=224)
#                .databunch()
#      )
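`ImageCleaner` writes `cleaned.csv` with one row per image you kept. Assuming the usual two-column `name,label` layout (check the header of your own file — the exact column names here are an assumption), reading it back with just the standard library looks like:

```python
import csv

def kept_images(cleaned_csv):
    # Return (relative_path, label) pairs for the images that survived cleaning.
    with open(cleaned_csv, newline="") as f:
        return [(row["name"], row["label"]) for row in csv.DictReader(f)]
```

This is handy for a quick count of how many images per class remain before rebuilding the `ImageDataBunch` from the csv.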
Then we create a new learner to use our new databunch with all the images.<br> 接下来,我们要创建一个新的学习器来使用包含全部图片的新数据堆。
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate)
learn_cln.load('stage-2');

ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via /tree, not /lab. Running the ImageCleaner widget in Jupyter Lab is not currently supported.<br> 确保你在Jupyter Notebook环境下运行这个notebook,而不是在Jupyter Lab中运行。我们可以通过/tree来访问(notebook),而不是/lab。目前还不支持在Jupyter Lab中运行ImageCleaner小工具。<br>
ImageCleaner(ds, idxs, path)