We know some of these explanations, but not all of them, and that is a substantial part of what works. I will also be working on why it works, and on what happens if you go through this whole route, or if you have a different kind of candidate search in hand; that is what we will be doing, from theory to experiments. Eventually, what you can do is pick up any of these and see which are really good. We have also provisioned cluster access for participants of this course, so that you can get access to an HPC, or you can obviously buy your own compute. These toolboxes are all open source as of now, so you can use any of them, other than the one on MATLAB, for which you will definitely have to pay for the licenses. You can also get this book, and that is what we will be using as the major reading material.
This is in contrast to what we were doing in the earlier classes: there, we were using feature extractors and feature descriptors, which are hard-coded functions. Today, on the contrary, the neural network itself has to become an end-to-end learning framework, which means that the input to the neural network is an image, while the output from it is still a classification output. From there we enter into the multilayer perceptron, and then into how my input and my output are related and what happens during the learning phase, and this is a quite critical part over here.
In the last lecture and the lab which we had done, you were introduced to the concept of error backpropagation, and from there we had a gradient-descent-based learning rule. Now, what exactly happens that we call it error backpropagation, and why it happens, is what we are going to explain to you through this signal-flow graph representation. Other transformations and cost functions also exist, and we will eventually go down to those.
There were three inputs in the earlier case, last week, when we were doing it. Those were three different features, but these three can also be three pixels, or you can consider just three pixel components in some colour space, so that each component itself is represented as one independent scalar value: your x1 can be the red value of a pixel, x2 can be the green value, and x3 the blue value.
So let the decision associated with a particular pixel be p-hat. Now, with the simple neuron model, what would happen is that we have a weighted combination of these inputs going into a neuron; to that we add a bias and take a summation over them, and this summation has the form y = w0 + w1 x1 + w2 x2 + w3 x3, where w1, w2 and w3 are the three weights associated with the three values x1, x2 and x3, and w0 is what is called the bias. w0 can also be written as 1 × b, where b is the weight and the constant input on that particular edge is 1. In its linear-algebra form, that is, in its matrix representation, you would get y equal to the inner product, or the dot product, of two matrices: one of them is the weight matrix, where you have the weights and the bias taken together, and the other is a column matrix, which is why it is [x, 1] transpose.
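As a quick sketch of this weighted sum and its dot-product form (the numeric values below are made up for illustration; they are not from the lecture):

```python
import numpy as np

# Inputs: three scalar features of one pixel (illustrative values),
# e.g. x1 = red, x2 = green, x3 = blue.
x = np.array([0.2, 0.5, 0.1])

# Three weights plus a bias w0 (values chosen arbitrarily for the sketch).
w = np.array([0.4, -0.3, 0.8])
w0 = 0.1

# Explicit weighted sum: y = w0 + w1*x1 + w2*x2 + w3*x3
y_explicit = w0 + w[0] * x[0] + w[1] * x[1] + w[2] * x[2]

# The same thing as one dot product, with the bias folded in:
# append the constant input 1 to x, and w0 to the weight vector.
x_aug = np.append(x, 1.0)    # [x, 1] transpose
w_aug = np.append(w, w0)     # weights and bias taken together
y_dot = np.dot(w_aug, x_aug)

print(y_explicit, y_dot)     # both forms give the same value
```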
As the value of y tends towards plus infinity, this value tends towards plus one; as the value of y tends towards minus infinity, this value tends towards zero. With the tan hyperbolic, on the contrary, the output saturates at plus one and minus one.
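A minimal sketch of these saturation behaviours, using the standard logistic sigmoid and tanh:

```python
import math

def sigmoid(y):
    # Logistic sigmoid: tends to 1 as y -> +infinity, and to 0 as y -> -infinity.
    return 1.0 / (1.0 + math.exp(-y))

# Checking the limits numerically at large |y|:
print(sigmoid(50))     # very close to 1
print(sigmoid(-50))    # very close to 0

# tanh, by contrast, saturates at +1 and -1.
print(math.tanh(50))   # very close to +1
print(math.tanh(-50))  # very close to -1
```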
Now, going from a perceptron to a neural network formulation: I can map the same inputs to a different group of scalars, so maybe my first neuron maps everything to my p1-hat. Note over here, as we had also discussed in the earlier class, the weights which you see: w(1,2), w(1,1), w(1,3). The reasoning behind these weights is that the first subscript refers to the target, which is called p1, so my first subscript is going to be the subscript of the target, while the second is that of the input it is connecting; that is the nomenclature which we are following. Now, if I arrange all of these weights w11, w12 and w13 in a row-matrix form, then along with that I have my scalar value, which is my bias w10, or b1; accordingly, I take a dot product and then apply my non-linearity. Similarly, I take my second neuron, look at the output side of it, and feed it appropriately.
So I would get this second part of the partial network coming up, with the weights in general called w(k, j); impose a non-linearity on top of that, and so on, which is what we see. Now, if you look into this matrix of weights and biases which have come together: I arrange the outputs in a column-major format, so the output has just k rows and one column; accordingly, my biases can also be stacked one on top of the other, because each is independent of the others, and the weights give me some sort of a rectangular matrix. Now, if I look carefully, that gives me an output matrix, and this output matrix is a column matrix; my input matrix is also a column matrix.
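A small numpy sketch of this stacking, assuming a layer of K = 2 neurons with 3 inputs (all numeric values are illustrative):

```python
import numpy as np

# Each row of W holds the weights of one neuron: first index = target neuron,
# second index = input it connects to, matching the w(target, input) convention.
K = 2
W = np.array([[0.4, -0.3, 0.8],
              [0.1,  0.7, -0.5]])    # K x 3 rectangular weight matrix
b = np.array([0.1, -0.2])            # K biases stacked in a column

x = np.array([0.2, 0.5, 0.1])        # input column matrix

y = W @ x + b                        # output is again a column of K values

# The non-linearity is applied element-wise on the output column.
p_hat = 1.0 / (1.0 + np.exp(-y))
print(p_hat.shape)                   # one activation per output neuron
```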
So that is my y, and then I have a non-linearity applied on a matrix, which means that each element passes through it independently; all of them together is what I get as my target output. This was my very basic understanding of how a neural network works as such, and this was what we had covered in the revision part. Now, my error in prediction: if I have one of these predictors p1, then I get one scalar value, and for another predictor I get another; what we take over them is the Euclidean error. So the Euclidean error, or the total error of the network, is obtained by taking my actual ground truth p and my predicted value p-hat, subtracting these two matrices, and then taking the amplitude, or the L2 norm, of the difference. My input matrix is made up of all of these scalar x's, and one of these samples is x subscript one, for which I would get a predicted value p-hat subscript one. Similarly, I take my training data, and as I feed each sample, through to the last one, through my network, I will get a different value of error for each; but can we give some sort of consolidated error, and accordingly manipulate what happens to your predictions?
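One way to sketch the per-sample and consolidated Euclidean error (the ground-truth and predicted values below are made up):

```python
import numpy as np

# Ground truth p and predictions p_hat for a few samples (illustrative values).
p     = np.array([1.0, 0.0, 1.0])
p_hat = np.array([0.9, 0.2, 0.6])

# Per-sample squared error, then the consolidated (total) Euclidean error:
per_sample = (p - p_hat) ** 2
total_error = np.linalg.norm(p - p_hat)   # L2 norm of the difference matrix

print(per_sample)    # each sample contributes its own error
print(total_error)   # one consolidated number for the network
```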
Because there is not anything else on which it can change: see, my input x is constant. Across samples it will be a different value, that much is always known; between two samples the prediction will differ. But when I am training across epochs, feeding the same sample again and again, the only reason why the prediction value changes is the weights. Say in epoch zero I put in my sample and I get a predicted value p1-hat; I do all my updates and everything, and then it comes to my second epoch, which is epoch one. In epoch one I put in x1 again, and the p1-hat at epoch one is very different from the p1-hat at epoch zero, and the only reason why is that the weights have changed.
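A toy sketch of this point, using a single linear neuron with a squared-error update (the sample, target and learning rate are all illustrative, not the lecture's setup):

```python
import numpy as np

# Feeding the *same* sample over several epochs gives different predictions
# only because the weights change between epochs.
x, p = np.array([0.2, 0.5, 0.1]), 1.0
w, b, lr = np.zeros(3), 0.0, 0.5

preds = []
for epoch in range(3):
    p_hat = w @ x + b            # forward pass on the same sample x
    preds.append(p_hat)
    grad = 2 * (p_hat - p)       # d(error)/d(p_hat) for squared error
    w -= lr * grad * x           # gradient-descent update of the weights
    b -= lr * grad
    print(epoch, p_hat)          # p_hat differs from epoch to epoch
```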
Now that I have my cost function written down in terms of my weights, my final point is that we need to come to a point in the weight space such that my error is minimum. As I keep on changing the weights, there will be one particular combination of these weights such that my error is minimum, and that is the exact one which I would like to achieve. How that is achieved is through something called the gradient descent learning rule.
So in this gradient descent learning rule, what we do is basically an iterative update: you take the gradient over the weight space of your cost function. Say my cost J(w) is in a range of zero to one, and my gradient dJ/dw is of the order of ten to the power of some small exponent; then the raw change is going to be very small, and in that case this factor over here, the learning rate, comes to your rescue. That value is something which will significantly impact how the value of w changes. So this learning rate is basically a way of mathematically modulating the gradient so that it does, in some way, significantly impact the change in w, and that is how my w(k+1) will be revised at a much better rate than w(k) would have been if we did not have this factor. You keep iterating until you are at the final conclusive step.
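The update rule can be sketched on a simple one-dimensional cost (J(w) = (w - 3)^2 here is an illustrative stand-in, not the network's actual cost):

```python
# Gradient descent learning rule: w(k+1) = w(k) - learning_rate * dJ/dw.

def dJ_dw(w):
    return 2 * (w - 3.0)   # gradient of the illustrative cost (w - 3)^2

w = 0.0
learning_rate = 0.1        # modulates how strongly the gradient moves w
for k in range(100):
    w = w - learning_rate * dJ_dw(w)

print(w)                   # converges towards the minimum at w = 3
```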
Now, this was one way of trying to visualize our learning in terms of its cost function.