Note that we still use these spaces with the same $N_0 \cdot N_1$ quadrature points in real space, but the two highest frequencies have been set to zero. We have now created all relevant function spaces for the problem at hand. It remains to combine these spaces into tensor product spaces, and to combine tensor product...
```python
V1 = TensorProductSpace(comm, (D0X, D1Y))
V0 = TensorProductSpace(comm, (D0X, D0Y))
P = TensorProductSpace(comm, (PX, PY), modify_spaces_inplace=True)
```
Source: binder/drivencavity.ipynb (spectralDNS/shenfun, bsd-2-clause)
These tensor product spaces are all scalar valued. The velocity is a vector, and a vector requires a mixed vector basis like $W_1^{\boldsymbol{N}} = V_1^{\boldsymbol{N}} \times V_0^{\boldsymbol{N}}$. The vector basis is created in shenfun as
```python
W1 = VectorSpace([V1, V0])
W0 = VectorSpace([V0, V0])
```
Note that the second vector basis, $W_0^{\boldsymbol{N}} = V_0^{\boldsymbol{N}} \times V_0^{\boldsymbol{N}}$, uses homogeneous boundary conditions throughout.

## Mixed variational form
<div id="sec:mixedform"></div>

We now formulate a variational problem using the Galerkin method: Find $\boldsymbol{u} \in W_1^{\boldsymbo...
```python
VQ = CompositeSpace([W1, P])    # Coupling velocity and pressure
```
We can now create test and trial functions for the coupled space $VQ$, and then split them up into components afterwards:
```python
up = TrialFunction(VQ)
vq = TestFunction(VQ)
u, p = up
v, q = vq
```
Notice: the test function v uses homogeneous Dirichlet boundary conditions even though it is derived from VQ, which contains W1. It is currently not (and will probably never be) possible to use test functions with inhomogeneous boundary conditions. With the basis functions in place we may assemble the different bloc...
```python
Re = 100.
nu = 2./Re
if family.lower() == 'legendre':
    A = inner(grad(v), -nu*grad(u))
    G = inner(div(v), p)
else:
    A = inner(v, nu*div(grad(u)))
    G = inner(v, -grad(p))
D = inner(q, div(u))
```
The assembled subsystems A, G and D are lists containing the different blocks of the complete, coupled coefficient matrix. A actually contains 4 tensor product matrices of type TPMatrix. The first two matrices are for vector component zero of the test function v[0] and trial function u[0], the matrices 2 and 3 are for c...
```python
sol = la.BlockMatrixSolver(A+G+D)
```
In the solver sol there is now a regular block matrix found in sol.mat, which is the symmetric

$$ \begin{bmatrix} A[0]+A[1] & 0 & G[0] \\ 0 & A[2]+A[3] & G[1] \\ D[0] & D[1] & 0 \end{bmatrix} $$

The boundary matrices are similarly collected in a boundary block matrix in sol.bc_mat. This matrix is us...
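As a rough sketch of how such a saddle-point block matrix is laid out, here is a plain-numpy stand-in with small random blocks; the block sizes and the relation $D[i] = G[i]^T$ are illustrative assumptions, not shenfun's actual matrices:

```python
import numpy as np

# Hypothetical small dense stand-ins for the assembled TPMatrix blocks.
n = 3
rng = np.random.default_rng(0)
A01 = rng.standard_normal((n, n))
A01 = A01 + A01.T            # symmetric, like A[0]+A[1]
A23 = rng.standard_normal((n, n))
A23 = A23 + A23.T            # symmetric, like A[2]+A[3]
G0, G1 = rng.standard_normal((2, n, n))   # gradient blocks
Z = np.zeros((n, n))

# The coupled saddle-point matrix of the text, assuming D[i] = G[i]^T
M = np.block([[A01,  Z,    G0],
              [Z,    A23,  G1],
              [G0.T, G1.T, Z]])

assert M.shape == (3*n, 3*n)
assert np.allclose(M, M.T)   # symmetric, as stated above
```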
```python
# Create Function to hold solution. Use set_boundary_dofs to fix the degrees
# of freedom in uh_hat that determines the boundary conditions.
uh_hat = Function(VQ).set_boundary_dofs()
ui_hat = uh_hat[0]

# New solution (iterative)
uh_new = Function(VQ).set_boundary_dofs()
ui_new = uh_new[0]
```
The nonlinear right hand side also requires some additional attention. Nonlinear terms are usually computed in physical space before transforming to spectral. For this we need to evaluate the velocity vector on the quadrature mesh. We also need a rank 2 Array to hold the outer product $\boldsymbol{u}\boldsymbol{u}$. Th...
```python
bh_hat = Function(VQ)

# Create arrays to hold velocity vector solution
ui = Array(W1)

# Create work arrays for nonlinear part
QT = CompositeSpace([W1, W0])  # for uiuj
uiuj = Array(QT)
uiuj_hat = Function(QT)
```
The right hand side $L((\boldsymbol{v}, q);\boldsymbol{u}^{k})$ is computed in its own function compute_rhs as
```python
def compute_rhs(ui_hat, bh_hat):
    global ui, uiuj, uiuj_hat, V1, bh_hat0
    bh_hat.fill(0)
    ui = W1.backward(ui_hat, ui)
    uiuj = outer(ui, ui, uiuj)
    uiuj_hat = uiuj.forward(uiuj_hat)
    bi_hat = bh_hat[0]
    bi_hat = inner(v, div(uiuj_hat), output_array=bi_hat)
    #bi_hat = inner(grad(v), -uiuj_hat, ou...
```
Here outer() is a shenfun function that computes the outer product of two vectors and returns the product in a rank two array (here uiuj). With uiuj forward transformed to uiuj_hat we can assemble the linear form either as inner(v, div(uiuj_hat)) or inner(grad(v), -uiuj_hat). Now all that remains is to guess an initial ...
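A plain-numpy stand-in for what outer() computes (shenfun's own implementation differs; this only illustrates the shapes involved):

```python
import numpy as np

# For a 2D vector field u of shape (2, Nx, Ny), the rank-2 outer product
# u ⊗ u has shape (2, 2, Nx, Ny), computed pointwise on the quadrature mesh.
def outer_product(u):
    return np.einsum('i...,j...->ij...', u, u)

Nx, Ny = 4, 5
u = np.random.rand(2, Nx, Ny)
uiuj = outer_product(u)
assert uiuj.shape == (2, 2, Nx, Ny)
assert np.allclose(uiuj[0, 1], u[0] * u[1])   # the u0*u1 component, pointwise
```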
```python
converged = False
count = 0
alfa = 0.5
while not converged:
    count += 1
    bh_hat = compute_rhs(ui_hat, bh_hat)
    uh_new = sol(bh_hat, u=uh_new, constraints=((2, 0, 0),))
    error = np.linalg.norm(ui_hat-ui_new)
    uh_hat[:] = alfa*uh_new + (1-alfa)*uh_hat
    converged = abs(error) < 1e-8 or count >= 100
    p...
```
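The loop above is an under-relaxed (Picard) fixed-point iteration. A self-contained sketch of the same pattern, with the shenfun objects replaced by a generic map F (the toy fixed-point problem is only illustrative):

```python
import numpy as np

# Accept alfa of the new iterate and keep 1 - alfa of the old one,
# stopping on a tolerance or a maximum iteration count.
def picard(F, x0, alfa=0.5, tol=1e-8, maxiter=100):
    x = np.asarray(x0, dtype=float)
    for count in range(1, maxiter + 1):
        x_new = F(x)
        error = np.linalg.norm(x_new - x)
        x = alfa * x_new + (1 - alfa) * x
        if error < tol:
            break
    return x, count

# Toy example: the fixed point of cos(x) (the Dottie number, ~0.739085)
x, its = picard(np.cos, 1.0)
```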
## Partial Dependence Plots with categorical values

Sigurd Carlsen, Feb 2019. Holger Nahrstaedt, 2020.

.. currentmodule:: skopt

Plot objective now supports optional use of partial dependence as well as different methods of defining parameter values for dependency plots.
```python
print(__doc__)
import sys
from skopt.plots import plot_objective
from skopt import forest_minimize
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import...
```
Source: 0.8/notebooks/auto_examples/plots/partial-dependence-plot-with-categorical.ipynb (scikit-optimize/scikit-optimize.github.io, bsd-3-clause)
## Objective function

Here we define the function that we will evaluate.
```python
def objective(params):
    clf = DecisionTreeClassifier(
        **{dim.name: val for dim, val in zip(SPACE, params)
           if dim.name != 'dummy'})
    return -np.mean(cross_val_score(clf, *load_breast_cancer(True)))
```
## Bayesian optimization
```python
SPACE = [
    Integer(1, 20, name='max_depth'),
    Integer(2, 100, name='min_samples_split'),
    Integer(5, 30, name='min_samples_leaf'),
    Integer(1, 30, name='max_features'),
    Categorical(list('abc'), name='dummy'),
    Categorical(['gini', 'entropy'], name='criterion'),
    Categorical(list('def'), name='dumm...
```
## Partial dependence plot

Here we see an example of using partial dependence. Even when setting n_points all the way down to 10 from the default of 40, this method is still very slow. This is because partial dependence calculates 250 extra predictions for each point on the plots.
```python
_ = plot_objective(result, n_points=10)
```
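To see where that cost comes from, here is a minimal pure-numpy sketch of 1-D partial dependence; the model and data are toy stand-ins, not skopt's surrogate:

```python
import numpy as np

# One 1-D panel needs n_points * n_samples model evaluations:
# for each grid value, the model is run on a modified copy of all samples.
def partial_dependence_1d(model, X, dim, grid):
    pd = []
    for v in grid:                      # n_points grid values
        Xv = X.copy()
        Xv[:, dim] = v                  # pin the chosen dimension
        pd.append(model(Xv).mean())     # average over the other dimensions
    return np.array(pd)

model = lambda X: (X ** 2).sum(axis=1)  # toy surrogate
X = np.random.rand(250, 3)
grid = np.linspace(0, 1, 10)
pd = partial_dependence_1d(model, X, dim=0, grid=grid)
assert pd.shape == (10,)                # 10 * 250 = 2500 predictions for one panel
```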
## Plot without partial dependence

Here we plot without partial dependence. We see that it is a lot faster. Also, the values of the other parameters are set to the default "result", which is the parameter set of the best observed value so far. In the case of funny_func this is close to 0 for all parameters.
```python
_ = plot_objective(result, sample_source='result', n_points=10)
```
## Modify the shown minimum

Here we try setting the other parameters to something other than "result". When dealing with categorical dimensions we can't use 'expected_minimum'. Therefore we try "expected_minimum_random", which is a naive way of finding the minimum of the surrogate using only random sampling. n...
```python
_ = plot_objective(result, n_points=10,
                   sample_source='expected_minimum_random',
                   minimum='expected_minimum_random',
                   n_minimum_search=10000)
```
## Set a minimum location

Lastly, we can also define these parameter values ourselves by passing a list:
```python
_ = plot_objective(result, n_points=10,
                   sample_source=[15, 4, 7, 15, 'b', 'entropy', 'e'],
                   minimum=[15, 4, 7, 15, 'b', 'entropy', 'e'])
```
Each split is in a structured json file with a number of questions and answers for each passage (or context). We’ll take this apart into parallel lists of contexts, questions, and answers (note that the contexts here are repeated since there are multiple questions per context):
```python
def read_squad(path):
    dataInJSON = loadJSONData(path)
    return preprocessSQUAD(dataInJSON)

train_contexts, train_questions, train_answers = read_squad('/content/squad/train-v2.0.json')
print("Length of Context, Questions and Answers",
      len(train_contexts), ",", len(train_questions), ",", len(train_answers))
...
```
Source: Question_Answering_with_SQuAD_2_0_20210102.ipynb (HemantTiwariGitHub/AndroidNDSunshineProgress, apache-2.0)
The contexts and questions are just strings. The answers are dicts containing the subsequence of the passage with the correct answer as well as an integer indicating the character at which the answer begins. In order to train a model on this data we need (1) the tokenized context/question pairs, and (2) integers indica...
```python
def add_end_idx(answers, contexts):
    offByOneCount = 0
    offByTwoCount = 0
    exactCount = 0
    for answer, context in zip(answers, contexts):
        # extract Answers and Start Positions
        #print(answer)
        answerText = answer['text']
        answerStartIndex = answer['answer_start']
        # ca...
```
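The idea behind the (truncated) helper above can be sketched in plain Python; the small window search below is an assumption about how the off-by-one/off-by-two cases are handled, following the usual SQuAD preprocessing pattern:

```python
# SQuAD answer_start is sometimes off by one or two characters,
# so we search a small window around the given start index.
def add_end_idx(answers, contexts):
    for answer, context in zip(answers, contexts):
        gold = answer['text']
        start = answer['answer_start']
        for shift in (0, -1, -2):          # exact, off-by-one, off-by-two
            if context[start + shift:start + shift + len(gold)] == gold:
                answer['answer_start'] = start + shift
                answer['answer_end'] = start + shift + len(gold)
                break

contexts = ["The quick brown fox jumps."]
answers = [{'text': 'brown fox', 'answer_start': 11}]  # true start is 10
add_end_idx(answers, contexts)
assert answers[0]['answer_start'] == 10
assert answers[0]['answer_end'] == 19
```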
Now train_answers and val_answers include the character end positions and the corrected start positions. Next, let’s tokenize our context/question pairs. 🤗 Tokenizers can accept parallel lists of sequences and encode them together as sequence pairs.
```python
!pip install transformers==4.0.1
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')

# Your code here
train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True)
# Your code here
val_encodings = tokenizer(val_c...
```
Next we need to convert our character start/end positions to token start/end positions. When using 🤗 Fast Tokenizers, we can use the <b>built in char_to_token()</b> method.
```python
def add_token_positions(encodings, answers):
    start_positions = []
    end_positions = []
    # Your code here
    for answerIndex in range(len(answers)):
        #print(answers[answerIndex])
        start_positions.append(
            encodings.char_to_token(answerIndex, answers[answerIndex]['answer_start']))
        end_positi...
```
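A plain-Python stand-in for what char_to_token() does, using the per-token character offsets that fast tokenizers expose (the offsets below are made up for illustration):

```python
# Given per-token (start, end) character offsets, map a character index
# to the index of the token that covers it.
def char_to_token(offsets, char_idx):
    for tok_idx, (start, end) in enumerate(offsets):
        if start <= char_idx < end:
            return tok_idx
    return None  # e.g. the answer was truncated away

# Offsets for a hypothetical tokenization of "The quick brown fox"
offsets = [(0, 3), (4, 9), (10, 15), (16, 19)]
assert char_to_token(offsets, 10) == 2   # 'brown' is token 2
assert char_to_token(offsets, 25) is None
```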
Our data is ready. Let’s just put it in a PyTorch/TensorFlow dataset so that we can easily use it for training. In PyTorch, we define a custom Dataset class. In TensorFlow, we pass a tuple of (inputs_dict, labels_dict) to the from_tensor_slices method.
```python
import tensorflow as tf

# Your code here
train_dataset = tf.data.Dataset.from_tensor_slices((
    {key: train_encodings[key] for key in ['input_ids', 'attention_mask']},
    {key: train_encodings[key] for key in ['start_positions', 'end_positions']}
))
# Your code here
val_dataset = tf.data.Dataset.from_tensor_slices...
```
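For the PyTorch route mentioned above, the custom Dataset is typically just index-and-length plumbing; a sketch (shown here without importing torch, since torch.utils.data.Dataset only requires `__getitem__` and `__len__`, and the class name is illustrative):

```python
class SquadDataset:
    def __init__(self, encodings):
        self.encodings = encodings

    def __getitem__(self, idx):
        # One training example: a dict of the idx-th row of every field
        return {key: val[idx] for key, val in self.encodings.items()}

    def __len__(self):
        return len(self.encodings['input_ids'])

enc = {'input_ids': [[1, 2], [3, 4]], 'start_positions': [0, 1]}
ds = SquadDataset(enc)
assert len(ds) == 2
assert ds[1]['input_ids'] == [3, 4]
```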
Now we can use a DistilBert model with a QA head for training:
```python
from transformers import TFDistilBertForQuestionAnswering

# Your code here
model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased")
```
The data and model are both ready to go. You can train the model with Trainer/TFTrainer exactly as in the sequence classification example above. If using native PyTorch, replace labels with start_positions and end_positions in the training example. If using Keras’s fit, we need to make a minor modification to handle th...
```python
# Keras will expect a tuple when dealing with labels
# Write your code here to replace labels with start_positions and end_positions in the training example
train_dataset = train_dataset.map(
    lambda x, y: (x, (y['start_positions'], y['end_positions'])))
# Keras will assign a separate loss for each output and add them ...
```
<h1>Partial exam exercise</h1> Solve $\frac{dy}{dx}=\frac{x y^{4}}{3} - \frac{2 y}{3 x} + \frac{1}{3 x^{3} y^{2}}$. Hint: make the ansatz $\xi=ax+c$ and $\eta=by+d$ to find the symmetries.
```python
# load the function
f = x*y**4/3 - R(2,3)*y/x + R(1,3)/x**3/y**2
f
```
Source: examenes/ejer.5.ipynb (fdmazzone/Ecuaciones_Diferenciales, gpl-2.0)
We make the ansatz $\xi=ax+c$ and $\eta=by+d$ to find the symmetries.
```python
xi = a*x + c
eta = b*y + d
xi, eta
(eta.diff(x) + (eta.diff(y)-xi.diff(x))*f - xi*f.diff(x) - eta*f.diff(y)).factor()
```
Hence $c=d=0$ and $b=-\frac23 a$; we can take $a=1$.
```python
L = (eta.diff(x) + (eta.diff(y)-xi.diff(x))*f - xi*f.diff(x) - eta*f.diff(y)).factor()
#L.subs({c:0,d:0})
L = (2*a*x**3*y**4 + 2*a*x*y + 3*b*x**3*y**4 + 3*b*x*y
     + c*x**2*y**4 + 3*c*y + 4*d*x**3*y**3 + 2*d*x)
poly(L, x, y)
```
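Setting the coefficients of the polynomial above to zero (as `poly(L, x, y)` displays them) makes the conclusion explicit:

```latex
(2a+3b)\,x^{3}y^{4} + (2a+3b)\,x\,y + c\,x^{2}y^{4} + 3c\,y + 4d\,x^{3}y^{3} + 2d\,x = 0
\quad\Longrightarrow\quad
c = d = 0, \qquad b = -\tfrac{2}{3}\,a .
```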
Let us find the polar coordinates.
```python
y = Function('y')(x)
xi = x
eta = -R(2,3)*y
xi, eta
dsolve(Eq(y.diff(x), eta/xi), y)
y = symbols('y')
r = x**2*y**3
r
s = integrate(xi**(-1), x)
s
s = log(abs(x))
r, s
```
Let us find the equation $\frac{dr}{ds}=??$
```python
y = Function('y')(x)
r = x**2*y**3
s = log(abs(x))
f = x*y**4/3 - R(2,3)*y/x + R(1,3)/x**3/y**2
r, s, f
(r.diff(x)/s.diff(x)).subs(y.diff(x), f).simplify()
```
Let us solve $\frac{dr}{ds}=1+r^2$.
```python
r = symbols('r')
s = symbols('s')
C = symbols('C')
solEcuacionPolares = Eq(integrate((1+r**2)**(-1), r), s + C)
solEcuacionPolares
```
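Written out, the quadrature gives the solution explicitly (solving solEcuacionPolares for $r$):

```latex
\int \frac{dr}{1+r^{2}} = \arctan r = s + C
\quad\Longrightarrow\quad
r = \tan(s + C).
```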
Let us express the equation in Cartesian coordinates.
```python
solEcuacionCart = solEcuacionPolares.subs(r, x**2*y**3).subs(s, log(abs(x)))
solEcuacionCart
ec1 = Eq(solEcuacionCart.lhs.diff(x), solEcuacionCart.rhs.diff(x))
ec1
ec2 = Eq(ec1.lhs, 1/x)
ec2
solve(ec2, y.diff(x))
```
# Part 1: Linear Regression

## Purpose of linear regression

<div class="span5 alert alert-info"> <p> Given a dataset $X$ and $Y$, linear regression can be used to: </p> <ul> <li> Build a <b>predictive model</b> to predict future values of $X_i$ without a $Y$ value. </li> <li> Model the <b>strength of the relationship...
```python
from sklearn.datasets import load_boston

boston = load_boston()
boston.keys()
boston.data.shape

# Print column names
print(boston.feature_names)

# Print description of Boston housing data set
print(boston.DESCR)
```
Source: Mini_Project_Linear_Regression.ipynb (anonyXmous/CapstoneProject, unlicense)
Now let's explore the data set itself.
```python
bos = pd.DataFrame(boston.data)
bos.head()
```
There are no column names in the DataFrame. Let's add those.
```python
bos.columns = boston.feature_names
bos.head()
```
Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.
```python
print(boston.target.shape)
bos['PRICE'] = boston.target
bos.head()
```
## EDA and Summary Statistics

Let's explore this data set. First we use describe() to get basic summary statistics for each of the columns.
```python
bos.describe()
```
## Scatter plots

Let's look at some scatter plots for three variables: 'CRIM', 'RM' and 'PTRATIO'. What kind of relationship do you see? E.g. positive or negative, linear or non-linear?
```python
plt.scatter(bos.CRIM, bos.PRICE)
plt.xlabel("Per capita crime rate by town (CRIM)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
```
Your turn: Create scatter plots between RM and PRICE, and PTRATIO and PRICE. What do you notice?
```python
#your turn: scatter plot between *RM* and *PRICE*
plt.scatter(bos.RM, bos.PRICE)
plt.xlabel("average number of rooms per dwelling (RM)")
plt.ylabel("Housing Price")
plt.title("Relationship between RM and Price")

#your turn: scatter plot between *PTRATIO* and *PRICE*
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("pupi...
```
Your turn: What are some other numeric variables of interest? Plot scatter plots with these variables and PRICE.
```python
#your turn: create some other scatter plots
plt.scatter(bos.AGE, bos.PRICE)
plt.xlabel("proportion of owner-occupied units built prior to 1940 (AGE)")
plt.ylabel("Housing Price")
plt.title("Relationship between House Ages and Price")
```
## Scatter Plots using Seaborn

Seaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults. We can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (us...
```python
sns.regplot(y="PRICE", x="RM", data=bos, fit_reg=True)
```
## Histograms

Histograms are a useful way to visually summarize the statistical properties of numeric variables. They can give you an idea of the mean and the spread of the variables as well as outliers.
```python
plt.hist(bos.CRIM)
plt.title("CRIM")
plt.xlabel("Crime rate per capita")
plt.ylabel("Frequency")
plt.show()
```
Your turn: Plot separate histograms: one for RM and one for PTRATIO. Any interesting observations?
```python
#your turn
plt.hist(bos.RM)
plt.title("RM")
plt.xlabel("average number of rooms per dwelling")
plt.ylabel("Frequency")
plt.show()

# Histogram for pupil-teacher ratio by town
plt.hist(bos.PTRATIO)
plt.title("PTRATIO")
plt.xlabel("pupil-teacher ratio by town")
plt.ylabel("Frequency")
plt.show()
```
## Linear regression with Boston housing data example

Here, $Y$ = Boston housing prices (also called "target" data in Python) and $X$ = all the other features (or independent variables), which we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way ...
```python
# Import regression modules
# ols stands for Ordinary Least Squares; we'll use this
import statsmodels.api as sm
from statsmodels.formula.api import ols

# statsmodels works nicely with pandas dataframes
# The thing inside the "quotes" is called a formula, a bit on that below
m = ols('PRICE ~ RM', bos).fit()
print(m....
```
## Interpreting coefficients

There is a ton of information in this output, but we'll concentrate on the coefficient table (the middle table). We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P>|t|) is essentially zero. We can interpret the coefficient as, if we compare two grou...
```python
# your turn
plt.scatter(bos.PRICE, m.fittedvalues)
plt.xlabel("Housing Price")
plt.ylabel("Predicted Housing Price")
plt.title("Relationship between Predicted and Actual Price")
```
## Fitting Linear Regression using sklearn
```python
from sklearn.linear_model import LinearRegression

X = bos.drop('PRICE', axis=1)

# This creates a LinearRegression object
lm = LinearRegression()
lm
```
What can you do with a LinearRegression object? Check out the scikit-learn docs here. We have listed the main functions here.

Main functions | Description
--- | ---
lm.fit() | Fit a linear model
lm.predict() | Predict Y using the linear model with estimated coefficients
lm.score() | Returns the coefficient of determin...
```python
# Look inside lm object
#lm.<tab>
```
Output | Description
--- | ---
lm.coef_ | Estimated coefficients
lm.intercept_ | Estimated intercept

## Fit a linear model

The lm.fit() function estimates the coefficients of the linear regression using least squares.
```python
# Use all 13 predictors to fit linear regression model
lm.fit(X, bos.PRICE)
```
Your turn: How would you change the model to not fit an intercept term? Would you recommend not having an intercept?

## Estimated intercept and coefficients

Let's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_. After we have fit our linear regression model using the least squar...
```python
print('Estimated intercept coefficient:', lm.intercept_)
print('Number of coefficients:', len(lm.coef_))

# The coefficients
pd.DataFrame(list(zip(X.columns, lm.coef_)),
             columns=['features', 'estimatedCoefficients'])
```
## Predict Prices

We can calculate the predicted prices ($\hat{Y}_i$) using lm.predict.

$$ \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \ldots + \hat{\beta}_{13} X_{13} $$
```python
# first five predicted prices
lm.predict(X)[0:5]
```
Your turn: Histogram: Plot a histogram of all the predicted prices. Scatter plot: Let's plot the true prices compared to the predicted prices to see where they disagree (we did this with statsmodels before).
```python
# your turn
# Plot a histogram of all the predicted prices
plt.hist(lm.predict(X))
plt.title("Predicted Prices")
plt.xlabel("Predicted Prices")
plt.ylabel("Frequency")
plt.show()

# Let's plot the true prices compared to the predicted prices
plt.scatter(bos.PRICE, lm.predict(X))
plt.xlabel("Housin...
```
## Residual sum of squares

Let's calculate the residual sum of squares

$$ S = \sum_{i=1}^N r_i^2 = \sum_{i=1}^N \left(y_i - (\beta_0 + \beta_1 x_i)\right)^2 $$
```python
print(np.sum((bos.PRICE - lm.predict(X)) ** 2))
```
## Mean squared error

This is simply the residual sum of squares divided by the number of observations. Your turn: Calculate the mean squared error and print it.
```python
#your turn
print('Mean squared error: ', np.mean((bos.PRICE - lm.predict(X)) ** 2))
```
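On toy numbers (illustrative, not the Boston data), the relationship MSE = RSS / N is easy to check:

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 4.0])         # made-up true values
y_hat = np.array([1.1, 1.9, 3.2, 3.8])     # made-up predictions
residuals = y - y_hat
rss = np.sum(residuals ** 2)               # residual sum of squares
mse = np.mean(residuals ** 2)              # mean squared error
assert np.isclose(rss, 0.10)
assert np.isclose(mse, rss / len(y))
```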
## Relationship between PTRATIO and housing price

Try fitting a linear regression model using only 'PTRATIO' (pupil-teacher ratio by town), and calculate the mean squared error.
```python
lm = LinearRegression()
lm.fit(X[['PTRATIO']], bos.PRICE)
msePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)
print(msePTRATIO)
```
We can also plot the fitted linear regression line.
```python
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("Pupil-to-Teacher Ratio (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between PTRATIO and Price")

plt.plot(bos.PTRATIO, lm.predict(X[['PTRATIO']]), color='blue', linewidth=3)
plt.show()
```
Your turn: Try fitting a linear regression model using three independent variables:

- 'CRIM' (per capita crime rate by town)
- 'RM' (average number of rooms per dwelling)
- 'PTRATIO' (pupil-teacher ratio by town)

Calculate the mean squared error.
```python
# your turn
lm.fit(X[['CRIM']], bos.PRICE)
print('(MSE) Per capita crime rate by town: ',
      np.mean((bos.PRICE - lm.predict(X[['CRIM']])) ** 2))

lm.fit(X[['RM']], bos.PRICE)
print('(MSE) Average number of rooms per dwelling: ',
      np.mean((bos.PRICE - lm.predict(X[['RM']])) ** 2))

lm.fit(X[['PTRATIO']], bos.PRICE)
print(...
```
## Other important things to think about when fitting a linear regression model

<div class="span5 alert alert-danger"> <ul> <li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li> <li>**Constant standard deviation**. The SD of the d...
```python
sns.set(font_scale=.8)
sns.heatmap(X.corr(), vmax=.8, square=True, annot=True)
```
# Part 3: Training and Test Data sets

## Purpose of splitting data into Training/testing sets

<div class="span5 alert alert-info"> <p> Let's stick to the linear regression example: </p> <ul> <li> We built our model with the requirement that the model fit the data well. </li> <li> As a side-effect, the model will fit <...
```python
X_train = X[:-50]
X_test = X[-50:]
Y_train = bos.PRICE[:-50]
Y_test = bos.PRICE[-50:]
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
```
Another way is to split the data into random train and test subsets using the function train_test_split in sklearn.model_selection (formerly sklearn.cross_validation). Here's the documentation.
```python
from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(
    X, bos.PRICE, test_size=0.33, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
```
Your turn: Let's build a linear regression model using our new training data sets.

- Fit a linear regression model to the training set
- Predict the output on the test set
```python
# your turn
# Fit a linear regression model to the training set
lm.fit(X_train, Y_train)
lm.predict(X_test)
```
Your turn: Calculate the mean squared error

- using just the test data
- using just the training data

Are they pretty similar or very different? What does that mean?
```python
# your turn
# Calculate MSE using just the test data
print('(MSE) using just the test data: ',
      np.mean((Y_test - lm.predict(X_test)) ** 2))

# Calculate MSE using just the training data
print('(MSE) using just the training data: ',
      np.mean((Y_train - lm.predict(X_train)) ** 2))
```
Are they pretty similar or very different? What does that mean? -> They are quite different, because the model is based on the training data, so it is more accurate there than on the test data. The model was never exposed to the test data, so it gives a larger mean squared error on it. It means there are data in the test set which are diff...
```python
plt.scatter(lm.predict(X_train), lm.predict(X_train) - Y_train, c='b', s=40, alpha=0.5)
plt.scatter(lm.predict(X_test), lm.predict(X_test) - Y_test, c='g', s=40)
plt.hlines(y=0, xmin=0, xmax=50)
plt.title('Residual Plot using training (blue) and test (green) data')
plt.ylabel('Residuals')
```
Your turn: Do you think this linear regression model generalizes well on the test data? -> <b> No, the scatter points are not close to zero, so the model needs improvement. Check the features for highly correlated predictors and remove one of them, or check the parameters of the model and do fine-tuning. K-fold Cros...
```python
from sklearn.model_selection import cross_val_score
from sklearn import linear_model

# If the estimator is a classifier and y is either binary or multiclass,
# StratifiedKFold is used. In all other cases, KFold is used.
scores = cross_val_score(lm, X, bos.PRICE, scoring='neg_mean_squared_error', cv=4)

# This will print metric for evalua...
```
Below I'm running images through the VGG network in batches. Exercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).
```python
# Set the batch size higher if you can fit it in your GPU memory
batch_size = 10
codes_list = []
labels = []
batch = []
codes = None

with tf.Session() as sess:
    # TODO: Build the vgg network here
    vgg = vgg16.Vgg16()
    input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
    with tf.name_scope('content_...
```
Source: transfer-learning/Transfer_Learning.ipynb (snegirigens/DLND, mit)
Building the Classifier Now that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.
```python
# read codes and labels from file
import numpy as np
import csv

with open('labels') as f:
    reader = csv.reader(f, delimiter='\n')
    labels = np.array([each for each in reader if len(each) > 0]).squeeze()

with open('codes') as f:
    codes = np.fromfile(f, dtype=np.float32)
    codes = codes.reshape((len(labels), ...
```
## Data prep

As usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels! Exercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.
```python
from sklearn import preprocessing

lb = preprocessing.LabelBinarizer()
lb.fit(labels)
labels_vecs = lb.transform(labels)

#print('Labels: {}'.format([labels[i] for i in range(0, len(labels), 200)]))
#print('One-hot: {}'.format([labels_vecs[i] for i in range(0, len(labels), 200)]))
```
Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typic...
```python
from sklearn.model_selection import StratifiedShuffleSplit

sss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)
train_index, test_index = next(sss.split(codes, labels))
val_test_split = int(len(test_index)/2)

train_x, train_y = codes[train_index], labels_vecs[train_index]
val_x, val_y = codes[test_index[:val_te...
```
If you did it right, you should see these sizes for the training sets:

```
Train shapes (x, y): (2936, 4096) (2936, 5)
Validation shapes (x, y): (367, 4096) (367, 5)
Test shapes (x, y): (367, 4096) (367, 5)
```

## Classifier layers

Once you have the convolutional codes, you just need to build a classifier from some fully connected
```python
def fully_connected(x_tensor, num_outputs):
    weights = tf.Variable(
        tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[1], num_outputs],
                            stddev=0.1, dtype=tf.float32),
        name='weights')
    biases = tf.Variable(tf.zeros(shape=[num_outputs], dtype=tf.float32),
                         name='biases')
    activations = tf.add(tf.mat...
```
## Training

Here, we'll train the network. Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to ...
epochs = 10000
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(epochs):
        for x, y in get_batches(train_x, train_y):
            entropy_cost, _, train_accuracy = sess.run([cost, optimizer, accuracy],
                                                       feed_dict={inputs_: x, labels_: y})
    saver.save(sess, "checkpoints/flowers.ckpt")
transfer-learning/Transfer_Learning.ipynb
snegirigens/DLND
mit
Testing Below you see the test accuracy. You can also see the predictions returned for images.
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
    feed = {inputs_: test_x, labels_: test_y, keep_prob: 1.0}
    test_acc = sess.run(accuracy, feed_dict=feed)
    print("Test accuracy: {:.4f}".format(test_acc))

%matplotlib inline
import matplotlib.pyplot as plt
from scipy.ndimage import imread
transfer-learning/Transfer_Learning.ipynb
snegirigens/DLND
mit
Below, feel free to choose images and see how the trained classifier predicts the flowers in them.
test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'
test_img = imread(test_img_path)
plt.imshow(test_img)

# Run this cell if you don't have a vgg graph built
if 'vgg' in globals():
    print('"vgg" object already exists.  Will not create again.')
else:
    # create vgg
    with tf.Session() as sess:
        input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])
        vgg = vgg16.Vgg16()
        vgg.build(input_)
transfer-learning/Transfer_Learning.ipynb
snegirigens/DLND
mit
The basic elements of Fold are blocks. We'll start with some blocks that work on simple data types.
scalar_block = td.Scalar()
vector3_block = td.Vector(3)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Blocks are functions with associated input and output types.
def block_info(block):
    print("%s: %s -> %s" % (block, block.input_type, block.output_type))

block_info(scalar_block)
block_info(vector3_block)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
We can use eval() to see what a block does with its input:
scalar_block.eval(42)
vector3_block.eval([1, 2, 3])
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Not very exciting. We can compose simple blocks together with Record, like so:
record_block = td.Record({'foo': scalar_block, 'bar': vector3_block})
block_info(record_block)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
We can see that Fold's type system is a bit richer than vanilla TF; we have tuple types! Running a record block does what you'd expect:
record_block.eval({'foo': 1, 'bar': [5, 7, 9]})
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
One useful thing you can do with blocks is wire them up to create pipelines using the >> operator, which performs function composition. For example, we can take our tuple of two tensors and compose it with Concat, like so:
record2vec_block = record_block >> td.Concat()
record2vec_block.eval({'foo': 1, 'bar': [5, 7, 9]})
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Note that because Python dicts are unordered, Fold always sorts the outputs of a record block by dictionary key. If you want to preserve order you can construct a Record block from an OrderedDict. The whole point of Fold is to get your data into TensorFlow; the Function block lets you convert a TITO (Tensors In, Tensor...
negative_block = record2vec_block >> td.Function(tf.negative)
negative_block.eval({'foo': 1, 'bar': [5, 7, 9]})
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
This is all very cute, but where's the beef? Things start to get interesting when our inputs contain sequences of indeterminate length. The Map block comes in handy here:
map_scalars_block = td.Map(td.Scalar())
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
There's no TF type for sequences of indeterminate length, but Fold has one:
block_info(map_scalars_block)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Right, but you've done the TF RNN Tutorial and even poked at seq-to-seq. You're a wizard with dynamic rnns. What does Fold offer? Well, how about jagged arrays?
jagged_block = td.Map(td.Map(td.Scalar()))
block_info(jagged_block)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
The Fold type system is fully compositional; any block you can create can be composed with Map to create a sequence, or Record to create a tuple, or both to create sequences of tuples or tuples of sequences:
seq_of_tuples_block = td.Map(td.Record({'foo': td.Scalar(), 'bar': td.Scalar()}))
seq_of_tuples_block.eval([{'foo': 1, 'bar': 2}, {'foo': 3, 'bar': 4}])

tuple_of_seqs_block = td.Record({'foo': td.Map(td.Scalar()), 'bar': td.Map(td.Scalar())})
tuple_of_seqs_block.eval({'foo': range(3), 'bar': range(7)})
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Most of the time, you'll eventually want to get one or more tensors out of your sequence, for wiring up to your particular learning task. Fold has a bunch of built-in reduction functions for this that do more or less what you'd expect:
((td.Map(td.Scalar()) >> td.Sum()).eval(range(10)),
 (td.Map(td.Scalar()) >> td.Min()).eval(range(10)),
 (td.Map(td.Scalar()) >> td.Max()).eval(range(10)))
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
The general form of such functions is Reduce:
(td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.multiply))).eval(range(1,10))
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
If the order of operations is important, you should use Fold instead of Reduce (but if you can use Reduce you should, because it will be faster):
((td.Map(td.Scalar()) >> td.Fold(td.Function(tf.divide), tf.ones([]))).eval(range(1, 5)),
 (td.Map(td.Scalar()) >> td.Reduce(td.Function(tf.divide), tf.ones([]))).eval(range(1, 5)))  # bad, not associative!
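The same Fold-vs-Reduce distinction can be seen with plain Python: a left fold applies the operation in strict left-to-right order, while a tree-shaped reduction may pair elements differently, and the two disagree whenever the operation is not associative. This sketch uses only the standard library; the tree grouping shown is just one order a parallel reduction might use.

```python
from functools import reduce
import operator

xs = [1.0, 2.0, 3.0, 4.0]

# Ordered left fold: (((1/1)/2)/3)/4 -- what td.Fold computes
left_fold = reduce(operator.truediv, xs, 1.0)

# One tree-shaped grouping: (1/2) / (3/4) -- an order td.Reduce might use
tree = (xs[0] / xs[1]) / (xs[2] / xs[3])

# They differ because division is not associative:
# left_fold == 1/24, tree == 2/3
```

For associative operations like sum, min, or max, every grouping gives the same answer, which is exactly why Reduce is free to pick the faster tree shape.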
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Now, let's do some learning! This is the part where "magic" happens; if you want a deeper understanding of what's happening here you might want to jump right to our more formal blocks tutorial, or learn more about running blocks in TensorFlow.
def reduce_net_block():
    net_block = td.Concat() >> td.FC(20) >> td.FC(1, activation=None) >> td.Function(lambda xs: tf.squeeze(xs, axis=1))
    return td.Map(td.Scalar()) >> td.Reduce(net_block)
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
The reduce_net_block function creates a block (net_block) that contains a two-layer fully connected (FC) network that takes a pair of scalar tensors as input and produces a scalar tensor as output. This network gets applied in a binary tree to reduce a sequence of scalar tensors to a single scalar tensor. One thing to ...
def random_example(fn):
    length = random.randrange(1, 10)
    data = [random.uniform(0, 1) for _ in range(length)]
    result = fn(data)
    return data, result
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
The random_example function generates training data consisting of (example, fn(example)) pairs, where example is a random list of numbers, e.g.:
random_example(sum)
random_example(min)

def train(fn, batch_size=100):
    net_block = reduce_net_block()
    compiler = td.Compiler.create((net_block, td.Scalar()))
    y, y_ = compiler.output_tensors
    loss = tf.nn.l2_loss(y - y_)
    train = tf.train.AdamOptimizer().minimize(loss)
    sess.run(tf.global_variables_initializer())
    validation_fd = compiler.build_feed_dict(random_example(fn) for _ in range(1000))
    for i in range(2000):
        sess.run(train, compiler.build_feed_dict(random_example(fn) for _ in range(batch_size)))
        if i % 100 == 0:
            print(i, sess.run(loss, validation_fd))
    return net_block
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Now we're going to train a neural network to approximate a reduction function of our choosing. Calling eval() repeatedly is super-slow and cannot exploit batch-wise parallelism, so we create a Compiler. See our page on running blocks in TensorFlow for more on Compilers and how to use them effectively.
sum_block = train(sum)
sum_block.eval([1, 1])
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Breaking news: deep neural network learns to calculate 1 + 1!!!! Of course we've done something a little sneaky here by constructing a model that can only represent associative functions and then training it to compute an associative function. The technical term for being sneaky in machine learning is inductive bias.
min_block = train(min)
min_block.eval([2, -1, 4])
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Oh noes! What went wrong? Note that we trained our network to compute min on positive numbers; negative numbers are outside of its input distribution.
min_block.eval([0.3, 0.2, 0.9])
tensorflow_fold/g3doc/quick.ipynb
tensorflow/fold
apache-2.0
Building the graph From Chris McCormick's blog, we can see the general structure of our network. The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal. The idea here is to train...
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    labels = tf.placeholder(tf.int32, [None, None], name='labels')
embeddings/Skip-Gram word2vec.ipynb
guyk1971/deep-learning
mit
Embedding The embedding matrix has a size of the number of words by the number of neurons in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using one-hot encoded vectors for our inputs. When you do the matrix multiplication of the ...
n_vocab = len(int_to_vocab)
n_embedding = 300  # Number of embedding features

with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))  # embedding weight matrix
    embed = tf.nn.embedding_lookup(embedding, inputs)  # hidden layer output
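The claim that multiplying a one-hot vector by the embedding matrix just selects a row can be verified with a tiny pure-Python sketch (the toy 4-word vocabulary below is illustrative):

```python
embedding = [[0.1, 0.2, 0.3],   # toy 4-word vocab, 3-dim embeddings
             [0.4, 0.5, 0.6],
             [0.7, 0.8, 0.9],
             [1.0, 1.1, 1.2]]

word_idx = 2
one_hot = [1 if i == word_idx else 0 for i in range(4)]

# One-hot times matrix: every row is zeroed out except row word_idx
matmul = [sum(one_hot[i] * embedding[i][j] for i in range(4)) for j in range(3)]

# Direct lookup: just grab the row
lookup = embedding[word_idx]

# matmul == lookup == [0.7, 0.8, 0.9]
```

This is why `tf.nn.embedding_lookup` is used in place of the matrix multiplication: it skips all the multiply-by-zero work and indexes the row directly.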
embeddings/Skip-Gram word2vec.ipynb
guyk1971/deep-learning
mit
Next, let's load the data. The iris data set is included in scikit-learn's datasets submodule, so we can just load it directly like this:
iris = datasets.load_iris()
X = pd.DataFrame({k: v for k, v in zip(iris.feature_names, iris.data.T)})  # Convert the raw data to a data frame
X.head()
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
Exploratory data analysis Let's start by making a scatter plot matrix of our data. We can colour the individual scatter points according to their true class labels by passing c=iris.target to the function, like this:
pd.plotting.scatter_matrix(X, c=iris.target, figsize=(9, 9));
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
The colours of the data points here are our ground truth, that is, the actual labels of the data. Generally, when we cluster data, we don't know the ground truth, but in this instance it will help us to assess how well $K$-means clustering segments the data into its true categories. K-means clustering Let's build a $K$...
k_means = cluster.KMeans(n_clusters=3)
k_means.fit(X)
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
Note: In previous weeks, we have called fit(X, y) when fitting scikit-learn estimators. However, in each of these cases, we were fitting supervised learning models where y represented the true class labels of the data. This week, we're fitting $K$-means clustering models, which are unsupervised learners, and so there i...
labels = k_means.predict(X)
print(labels)
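Under the hood, `predict` simply assigns each point to its nearest cluster centre. A minimal pure-Python sketch of that assignment step (the helper name and toy data are illustrative, not scikit-learn internals):

```python
def assign_clusters(points, centroids):
    """Assign each point the index of its nearest centroid
    (squared Euclidean distance), as KMeans.predict does."""
    def sq_dist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return [min(range(len(centroids)), key=lambda k: sq_dist(p, centroids[k]))
            for p in points]

centroids = [(0.0, 0.0), (5.0, 5.0)]
points = [(0.2, -0.1), (4.8, 5.3), (1.0, 1.0)]
assignments = assign_clusters(points, centroids)  # [0, 1, 0]
```

The fitted model's `cluster_centers_` attribute holds the centroids that this nearest-centre rule is applied against.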
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
We can check the results of our clustering visually by building another scatter plot matrix, this time colouring the points according to the cluster labels:
pd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
As can be seen, the $K$-means algorithm has partitioned the data into three distinct sets, using just the values of petal length, petal width, sepal length and sepal width. The clusters do not precisely correspond to the true class labels plotted earlier but, as we usually perform clustering in situations where we don'...
k_means = cluster.KMeans(n_clusters=2)
k_means.fit(X)
labels = k_means.predict(X)
pd.plotting.scatter_matrix(X, c=labels, figsize=(9, 9));
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
Finding the optimum number of clusters One way to find the optimum number of clusters is to plot the variation in total inertia with increasing numbers of clusters. Because the total inertia decreases as the number of clusters increases, we can determine a reasonable, but possibly not true, clustering of the data by fi...
clusters = range(1, 10)
inertia = []
for n in clusters:
    k_means = cluster.KMeans(n_clusters=n)
    k_means.fit(X)
    inertia.append(k_means.inertia_)

plt.plot(clusters, inertia)
plt.xlabel("Number of clusters")
plt.ylabel("Inertia");
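The inertia being plotted is just the sum of squared distances from each point to its nearest centroid. A small pure-Python sketch makes the elbow behaviour concrete: with well-separated clumps, inertia drops sharply once the number of centroids matches the data. The helper and toy data below are illustrative, not scikit-learn code.

```python
def inertia(points, centroids):
    """Total inertia: sum over points of the squared distance to the
    nearest centroid -- the quantity KMeans exposes as inertia_."""
    total = 0.0
    for p in points:
        total += min(sum((pi - ci) ** 2 for pi, ci in zip(p, c))
                     for c in centroids)
    return total

# Two clumps of points, near (0, 0.5) and (10, 10.5)
points = [(0, 0), (0, 1), (10, 10), (10, 11)]

one_centroid = inertia(points, [(5.0, 5.5)])                  # k = 1
two_centroids = inertia(points, [(0.0, 0.5), (10.0, 10.5)])   # k = 2
# Inertia collapses from 201.0 to 1.0 when k matches the structure
```

Past the "right" k, further centroids keep lowering inertia but only marginally, which is why the elbow in the plot above is the natural stopping point.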
labs/09a - K-means clustering.ipynb
donaghhorgan/COMP9033
gpl-3.0
Load some text data from Wikipedia: pages on people
people = graphlab.SFrame('people_wiki.gl/')
machine-learning/foundations/document-retrieval-assignment.ipynb
scoaste/showcase
mit