The idea of the cost function is that I have my predicted output, which is p hat, and I have my ground truth, which is p, and I need to find out what is the deviation between the two. One requirement is that the cost function has to be differentiable; if the cost function is not differentiable, then it does not work. We will be going into cost functions in more detail a bit later in this course, where we will bring in very specific cost functions designed for classification problems, and very specific cost functions designed for regression problems. Those will eventually come, but as of now let us stick to the very basic form of the Euclidean cost function.
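As a minimal sketch of what that Euclidean cost looks like in code (the tensors here are made-up examples, not values from the lecture):

```python
import torch

# Euclidean (squared-error) cost: sum of squared differences
# between the predicted output p_hat and the ground truth p.
p_hat = torch.tensor([0.8, 0.1, 0.1])   # hypothetical predicted output
p = torch.tensor([1.0, 0.0, 0.0])       # hypothetical ground truth
cost = torch.sum((p_hat - p) ** 2)
print(cost)   # tensor(0.0600)
```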
The next part is that you would be computing the gradient of the network, the nabla of the network, and this gradient is what gets propagated backward through the intermediate points of all of these hidden layers; what they give you is the partial derivative of the output with respect to w, and that gets remolded in terms of the chain rule. The utility of most of the libraries for deep learning is that you do not need to explicitly derive these gradients yourself; you use a library such as PyTorch, and that has very standard ways of doing it. From there you update w, and this update of w is something which happens in the reverse direction, as you had seen: the weights are updated from the output side to the input side. Then the next step is to continue again with step number one, which is to do a forward pass of x, obtain p, and then repeat all the steps.
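As a rough sketch of that cycle (forward pass, Euclidean cost, backward pass, weight update) using PyTorch autograd; the tiny model, data, and learning rate below are all made up for illustration:

```python
import torch

x = torch.randn(4, 3)        # a small batch of inputs
p_true = torch.randn(4, 2)   # ground truth targets
w = torch.randn(3, 2, requires_grad=True)   # the weights to learn
lr = 0.01                    # an assumed learning rate

for step in range(20):
    p_hat = x @ w                             # step 1: forward pass of x, obtain p hat
    cost = torch.sum((p_hat - p_true) ** 2)   # Euclidean cost
    cost.backward()                           # gradient of the network via autograd
    with torch.no_grad():
        w -= lr * w.grad                      # update w using the gradient
        w.grad.zero_()                        # clear gradients before the next pass
```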
That depends on the problem which you are handling, but I do often get this question of what a reasonable value of the loss is. It depends on everything from the kind of data, to the architecture of the network, to the kind of nonlinear transfer functions you use, and to the nature of the cost function which you are using. Some kinds of cost functions have values in the range of, say, one to ten; some of them have values on the order of ten to the power minus three or ten to the power minus four.
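To see how much the scale depends on the cost function alone, here is a small illustration (the tensors are arbitrary, and this snippet is not from the lecture's notebook):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(8, 10)            # hypothetical network outputs for 10 classes
targets = torch.randint(0, 10, (8,))   # hypothetical integer class labels
one_hot = F.one_hot(targets, 10).float()

# The same predictions, two very different loss scales:
print(nn.CrossEntropyLoss()(logits, targets))            # typically a few units
print(nn.MSELoss()(torch.softmax(logits, 1), one_hot))   # typically far below 1
```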
In well-known terms: net is basically a pointer which is used for defining the network, and the arguments over here are not indicated yet; these are the ones which we will be filling in. The predicted output and the actual state of p, the ground truth, these two will be the input to the criterion function. Next is when you find out the gradient of the cost, and that is done with what is called a backward operator; what this backward operator needs is basically the predicted output and the cost function.
It is not so hard to get this down, because if you go back to the earlier slides you will find the same sequence of steps. The next part is that you need to update the parameters of the network: the input over here is the set of parameters together with these gradients which were calculated, and together they will update w.
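Put together, that stage of the script looks roughly like the following; criterion, optimizer, the stand-in network, and the learning rate are assumptions of mine, not a copy of the lecture's notebook:

```python
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(3, 2)     # a stand-in network
x = torch.randn(4, 3)     # hypothetical inputs
p = torch.randn(4, 2)     # hypothetical ground truth

criterion = nn.MSELoss()  # a Euclidean-style cost
optimizer = optim.SGD(net.parameters(), lr=0.01)

output = net(x)                 # forward pass: the predicted output
loss = criterion(output, p)     # criterion takes prediction and ground truth
optimizer.zero_grad()           # clear any stale gradients
loss.backward()                 # backward operator: fills each parameter's .grad
optimizer.step()                # update w from the computed gradients
```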
The update goes down to each weight in a sequential fashion: this one updates, then the next. That completes what is called one single iteration, or one epoch, and eventually you can now trace your error and then either decide to stop, or you can continue.
This brings us to the tooling side, and it makes things much easier because of the vast script and library availability within Python, which lets you compute these forward passes and the gradients easily, in a computationally attractive way. Long training runs are not so great on laptops, where you might experience heating problems as well, so the best option is to get onto a custom workstation; a very competitively built workstation can be put together for this.
There are reading sources from which you can read further, and these are the two major conferences which you should follow. While we have finished our basic introductory lectures on what a multilayer perceptron is and how to use it for classification, here the objective is to build upon that.
We were extracting features from the images and feeding them through a neural network; these features were going in on the input side of the neural network, and the output side was connected to a one-hot vector of ten cross one dimension. That was basically the class encoding: any one of these positions is going to be one, based on which particular class is present, and that would help you in creating much better separation margins by learning.
Now we will be starting with that, so let's get into how this code particularly works. First is the header, and there is not much to redefine around this header, because we are just going to keep the same header as before. So let's just run it and get our initial data loaded in, along with my labels, both for the training and the testing.
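The header itself is not shown in this excerpt; a typical one for this kind of notebook would be something like the following (the exact imports are an assumption):

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
import time   # used later for timing the training epochs
```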
The next part is defining the network, with multiple hidden layers present. The first part is the layer definition, and both of them are linearly connected: the first one connects n number of channels to six, where n channels over here is the total number of features which you have on the input side, and the second one connects six to the number of classes in the output. OK, now once that is done, the part that is left is that I need to define what is my forward pass of this algorithm. In order to define my forward pass of the algorithm, what I define is that whatever comes in first passes through nn.Linear, which is a mapping from n number of channels to six channels. That then goes through a ReLU, which will truncate all the negative values to zero, and any value which is greater than zero will remain the same, so it is some sort of form in which it stays linear on the positive side. Finally you have a softmax as your nonlinearity coming out of the output.
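A sketch of that network as described, with two linear layers, a ReLU in between, and a softmax at the output; the class and attribute names are mine, while the hidden width of six is the one mentioned in the lecture:

```python
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, n_channels, n_classes):
        super().__init__()
        self.fc1 = nn.Linear(n_channels, 6)   # n input features -> 6 hidden units
        self.fc2 = nn.Linear(6, n_classes)    # 6 hidden units -> number of classes

    def forward(self, x):
        x = F.relu(self.fc1(x))               # negatives clipped to zero, positives kept
        return F.softmax(self.fc2(x), dim=1)  # softmax nonlinearity at the output
```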
Now once that is done, this is my network which gets defined over here, so let's just define my network. Now comes the data preparation stage.
In the data preparation, what we had done in the last class was quite simple. We defined two arrays: one for the data, and another of ten thousand cross ten, which was just for your labels. You remember clearly that we had labels in the range of zero to nine, but when training this network we needed to have a one-hot vector, and that meant it needs to have ten columns, where exactly one of the entries on each of these rows is going to be one, based on which particular class it belongs to; if it belongs to class one, you assign that value as one. Now once that part is done, the next part is to convert a NumPy array to a torch tensor, and that is what we were doing here.
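That label preparation might look roughly like this; the array names are mine, while the ten-thousand-by-ten shape follows the lecture's description:

```python
import numpy as np
import torch

num_samples, num_classes = 10000, 10
labels = np.random.randint(0, num_classes, num_samples)   # stand-in integer labels 0..9

# Build the 10000 x 10 one-hot matrix: row i has a 1 in column labels[i].
y_onehot = np.zeros((num_samples, num_classes), dtype=np.float32)
y_onehot[np.arange(num_samples), labels] = 1.0

# Convert from a NumPy array to a torch tensor.
y_tensor = torch.from_numpy(y_onehot)
```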
This part of the script is pretty much the same as what we had done in the earlier class. The next step was to check whether you have a GPU available and CUDA support on your system or not.
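That check is typically a couple of lines (the variable names here are mine):

```python
import torch

use_gpu = torch.cuda.is_available()   # True if a CUDA-capable GPU is present
device = torch.device('cuda' if use_gpu else 'cpu')
print('Using', device)
```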
Once that gets done, we start by defining the training routine. For that we have some sort of an iterator, and this is an epoch iterator; within the epoch iterator we also measure how many seconds or milliseconds it takes to execute each epoch. For twenty epochs it took twenty-seven seconds to execute. From there, the rest of the concepts, in terms of the running loss, creating a batch, and using your GPU, are repeating from the earlier case. Next, before you start training, you will always need to zero out your gradients. Once that is done, next is the feed-forward routine.
That is defined as output = model(input), where model is basically the function which wraps the network we defined. The criterion function gets defined a bit later on; as of now we are just using it as a given. Then comes the point where you need to update your parameters, and here is where this happens: these parameters are the weights, and they are what get updated by your update rule, which was shown over there. Now, once your parameters are updated, you can look into tracking the loss: you can keep creating an array where you keep appending all of those losses, and then it becomes easier to plot them. Here you also have these print statements, which basically report the progress. So let's just run this part and get it fetched as a function within your environment.
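Stitched together, the training routine looks roughly as follows; take this as a hedged sketch of what the notebook does (the names train, model, criterion, and optimizer are mine), not a copy of it:

```python
import time

def train(model, criterion, optimizer, x, y, num_epochs=20):
    losses = []                        # per-epoch losses, handy for plotting later
    for epoch in range(num_epochs):
        start = time.time()
        optimizer.zero_grad()          # always zero the gradients first
        output = model(x)              # feed-forward routine
        loss = criterion(output, y)    # compare prediction with ground truth
        loss.backward()                # backward pass
        optimizer.step()               # apply the update rule to the weights
        losses.append(loss.item())
        print(f'epoch {epoch}: loss={loss.item():.4f} '
              f'({time.time() - start:.3f} s)')
    return losses
```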
Now once that's done, the next part is to actually get into the initialization and training. The first part is that we need the length of the features, and this is what we know from our earlier experience; you could as well look into the second dimension of the pickle file and fetch it from there, which is also pretty much possible.
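So the initialization is essentially this (a sketch; the X_train array is a stand-in, and Net is the class sketched earlier):

```python
import numpy as np

X_train = np.random.randn(10000, 784).astype(np.float32)   # stand-in feature matrix

n_features = X_train.shape[1]   # the second dimension gives the feature length
model = Net(n_features, 10)     # ten output classes
```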
Each epoch roughly takes, as it shows here, about one second, with a few more milliseconds on top. So you see that it does not actually take much more time as compared to the earlier network.
And this is how your training loss was falling. You can add the extra module which we had in the earlier one, in terms of accuracy, and you can see the accuracy growing as well.
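Plotting that loss curve is then a few lines over the array collected above (matplotlib assumed, as in the header sketch):

```python
import matplotlib.pyplot as plt

plt.plot(losses)   # the per-epoch losses returned by train()
plt.xlabel('epoch')
plt.ylabel('training loss')
plt.show()
```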
Now once that is done, the next part after the train module was to look into its testing part. Over there, what we do is run an iterator over the test data, run the model, get your outputs, and then see if your predicted output matches your exact output; if it does, you count it as correct.
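That accuracy check would look something like this; test_loader-style batching and the model name are assumptions rather than the notebook's exact code:

```python
import torch

correct, total = 0, 0
with torch.no_grad():                        # no gradients needed at test time
    for inputs, labels in test_loader:       # iterate over the test set
        outputs = model(inputs)
        predicted = outputs.argmax(dim=1)    # class with the highest score
        correct += (predicted == labels).sum().item()
        total += labels.size(0)
print('accuracy:', correct / total)
```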
Since the training here has not yet completed, you get a zero accuracy, but you can keep this one running for a longer period of time; for the sake of time constraints we are not running it for that length of time. Now there are a few interesting aspects which I would like to reiterate from our earlier discussion.