These are all torch tensors which get created, so we need to create them for the training. And the efficiency gain is that when you are trying to load data onto, say, a GPU device, you have a lot of memory transfers going on between your RAM and the GPU: data moves from my CPU RAM, or system RAM, onto my GPU RAM, and then the rest of the computation runs on the GPU. While the GPU is working it is not making use of the CPU RAM, so I can prefetch the next set of data which will be needed for operating on the GPU in this duration, and that saves me the overhead of having to wait for the GPU operation to get over and then write results back to the RAM. In fact this also works while you are trying to fetch from your hard drive onto your RAM, so we will be covering that in much more detail in later classes. As of now, this data loader is set up to optimize these transfers.
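As a hedged sketch of what such a data loader might look like in PyTorch (the tensor shapes, stand-in dataset, and worker count here are assumptions for illustration, not the lecture's exact code):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# hypothetical tensors standing in for the actual training data
inputs = torch.randn(50000, 28 * 28)
targets = torch.randint(0, 10, (50000,))
train_dataset = TensorDataset(inputs, targets)

# pin_memory keeps batches in page-locked system RAM so CPU-to-GPU copies
# are faster, and num_workers > 0 lets worker processes prefetch the next
# samples from disk/RAM while the GPU is busy with the current one
train_loader = DataLoader(train_dataset, batch_size=1, shuffle=True,
                          pin_memory=True, num_workers=2)
```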
Now the next part which we are going to look at is to actually check whether a GPU is available. Remember that the GPU memory and the RAM which is present on the CPU side sit on two different data buses. So you will fetch, and that fetch happens through your data loader; once you have fetched, the point is that whenever you are fetching data and putting it onto the GPU, there is a transfer cost involved.
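A short sketch of that availability check, in the spirit of what the lecture describes:

```python
import torch

# flag consulted later in the training loop to decide whether tensors
# should be typecast onto the GPU
use_gpu = torch.cuda.is_available()
device = torch.device('cuda' if use_gpu else 'cpu')
print('training on', device)
```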
You had your network defined; the next part is that you need to define the trainer, so that given an x you get an output y. Now what you were getting was a y hat, which is the predicted version of y, and you have your original y, the ground truth, available. Based on these you are going to calculate a difference between them, which is your cost function; we had used the MSE, or L2 norm, as the cost function over there in order to measure this difference.
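In PyTorch this criterion is available directly; a minimal sketch with toy values:

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()  # mean squared error, the L2-norm style cost

y_hat = torch.tensor([0.9, 0.1])   # predicted output from the network
y = torch.tensor([1.0, 0.0])       # ground truth
loss = criterion(y_hat, y)
print(loss.item())                 # prints approximately 0.01
```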
As a simple step, what we define is a train-model function over here. The inputs to this one are: a reference to the model, that is, the neural network given over there; the cost function, also referred to within these libraries as the criterion; and the number of epochs over which it is going to learn. The number of epochs is basically the number of iterations over the whole training set.
Starting from weights at some arbitrary value, you put in your input data, you get some predicted output y hat, and you take a difference between them; that is your cost. Based on that cost you take a gradient and revise the weights; once you have done this, that is the first epoch. Then with these revised weights you are again going to put your input x in, get another predicted value y hat, compute the cost, back propagate, and this keeps on going; that is what is called an epoch, and the epoch count is the iteration counter over there. The next part is the learning rate, which is what was helping me scale the gradient and apply it in a gradual way so that there are no erratic variations. What we start with initially is a marker which is called the starting time point; this is just to show you how long it takes to traverse one epoch. Then we create a few lists: one of them is for the training loss, another one is for the training accuracy, and another is just for accumulating values as we go.
Then there is my loop counter variable over here, and it runs in the range of one to the number of epochs; so it is basically a counter that goes up to the total number of epochs. Let us also define a small variable for the running loss: the running loss is basically the criterion, the cost function, evaluated every now and then, and it keeps on accumulating as we keep going across the epoch.
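Pulling these pieces together, here is a hedged sketch of what such a train_model function could look like; the function name, the optimizer step (used here in place of the manual parameter update described later), and the bookkeeping details are assumptions for illustration:

```python
import time
import torch

def train_model(model, criterion, optimizer, train_loader, num_epochs, device):
    train_losses, train_accuracies = [], []   # lists for per-epoch statistics
    for epoch in range(1, num_epochs + 1):    # loop counter over epochs
        start = time.time()                   # starting time point per epoch
        running_loss, correct, total = 0.0, 0, 0
        for inputs, labels in train_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            loss.backward()                   # back propagate the cost
            optimizer.step()                  # revise the weights
            running_loss += loss.item()       # accumulate the running loss
            correct += (outputs.argmax(dim=1) == labels).sum().item()
            total += labels.size(0)
        train_losses.append(running_loss / len(train_loader))
        train_accuracies.append(correct / total)
        print(f'epoch {epoch}: loss {train_losses[-1]:.4f}, '
              f'acc {train_accuracies[-1]:.4f}, time {time.time() - start:.1f}s')
    return train_losses, train_accuracies
```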
Now the point is that we will start with loading data, so my data loader will go over the whole set; what that means is that I will basically traverse one data point at a time and go through it. So I iterate over my data, which exists in my train loader, up to the total number of data points; if I have fifty thousand samples, then for each one I compute the loss and back propagate it, so this is per-sample based over there.
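Reusing the train_loader sketched above (built with batch_size=1), that per-sample traversal would look like:

```python
# with batch_size=1 this visits one data point at a time, so a
# 50,000-sample set means 50,000 iterations in every epoch
for i, (inputs, labels) in enumerate(train_loader):
    pass  # per-sample forward pass, loss, and back propagation go here
```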
Going on from there, you remember that we had this use_gpu flag: if it is set, I typecast the tensors onto the GPU, and otherwise we come down to the else branch over here, where I do not need to typecast any of these. Now the reason for converting and wrapping the inputs as Variables is that unless a tensor is marked as an explicit Variable within the library, it will be taken as a constant across the computation and no gradient will be tracked for it. The reasoning behind this restriction is that, if you remember, we had the gradient ∂J(w)/∂w to be calculated, and this value has to be tracked by the library.
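In older PyTorch versions this wrapping used torch.autograd.Variable; in current versions the same effect comes from the requires_grad flag. A small sketch of the distinction, with a toy cost J(w):

```python
import torch

w = torch.randn(3, requires_grad=True)   # tracked: gradients flow into w.grad
x = torch.randn(3)                       # untracked: treated as a constant

J = ((w * x).sum()) ** 2                 # a toy cost J(w)
J.backward()                             # computes dJ/dw
print(w.grad)                            # populated; x.grad stays None
```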
Now once that is done, I will get my set of outputs coming from the network. Then what I would like to find out is which particular class has the maximum probability of occurring. Once I get that, whichever class has the maximum probability of occurring, I will just be putting that particular class as a one and everything else is going to subside down towards zero, like a one-hot. Once that is placed into my prediction, these are basically the predictions of whichever class scored highest.
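A small sketch of picking that winning class, assuming outputs holds a batch of class scores:

```python
import torch

outputs = torch.tensor([[0.1, 2.3, -0.5],
                        [1.7, 0.2, 0.9]])   # hypothetical network outputs

# torch.max over dim=1 returns the maximum score and, more usefully here,
# the index of the winning class for each sample
_, predicted = torch.max(outputs, 1)
print(predicted)  # tensor([1, 0])
```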
So this is my loss, and that loss is defined in terms of my criterion function. Now we go over the samples, keep on calculating the loss for each sample, and add them up over the complete epoch; that is what is found out over here. So I take my running loss: I basically get my running loss over here from the loss given by one particular data point. Then when I start my back propagation over there, the first step is to find out what my total loss is, or the total error which occurs over there, and then I can divide it by the batch size, which is equal to basically the total number of samples.
A bit later on, it may so happen that you do not need to calculate this for each sample individually: you can push, say, a hundred samples out of these fifty thousand in together, their error is calculated, and that is back propagated; next you have another hundred. That would mean there can be multiple back propagations within the same epoch as well, whereas here we are looking into one single back propagation.
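In PyTorch this mini-batch behaviour typically just falls out of the data loader; a sketch, reusing the train_dataset from the earlier snippet:

```python
from torch.utils.data import DataLoader

# 100 of the 50,000 samples now go in together; one epoch then becomes
# 500 forward/backward passes instead of 50,000 per-sample ones
batch_loader = DataLoader(train_dataset, batch_size=100, shuffle=True)
```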
That back propagation is of my cost function; if you remember, the first derivative of my cost function is what is invoked by the backward operator over here. Now once that is done, the next part is that I need to get my network parameters and keep on updating them. So there is a loop over here which basically runs over all my model parameters, that is, the parameters present within my neural network which can be updated. We have also calculated our training loss and everything along the way, and there is just a small script to print that out. For the update itself, you will be running through that parameter loop, updating each parameter with its scaled gradient.
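A hedged sketch of that manual update loop, assuming a model from earlier, an assumed learning rate lr, and that loss.backward() has already populated the gradients:

```python
import torch

lr = 0.01  # assumed learning rate

# manual gradient-descent step for every updatable parameter:
# w <- w - lr * dJ/dw; no_grad keeps the update itself out of autograd
with torch.no_grad():
    for param in model.parameters():
        if param.grad is not None:
            param -= lr * param.grad
    model.zero_grad()  # clear gradients before the next pass
```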
And as you keep going, what we find over here is that your gradients get calculated: you have the gradient for the loss function, you have your gradients through the network, and with all of these done you can now update everything. Once that is over, the next part which remains for us is just to keep looking into the errors. We will take this up over a longer duration later: what our strategies are, and how you come up with some way where, just by looking into the errors, you can come to a point where you know what to change.
So let us just run it. What I have done is define this function, and once that is done we need to start with the training. In the training part, the whole complex procedure is now just a function call for me. Over here the number of epochs is ten, and this has been empirically chosen; you can definitely play around with a hundred epochs, or a learning rate of ten to the power minus three, and just check what happens.
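A sketch of that single call, reusing the train_model and train_loader sketches from earlier; the stand-in network and the swap to cross-entropy for a classification run are assumptions for illustration:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Sequential(nn.Linear(28 * 28, 10)).to(device)  # stand-in network
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# ten epochs as in the lecture; try num_epochs=100 or lr=1e-3 and compare
losses, accs = train_model(model, criterion, optimizer, train_loader,
                           num_epochs=10, device=device)
```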
Looking at the errors over there, somewhere around this point is where the error is still changing; since it keeps going down, I do know that this will keep on continuing further and go down. Now this is the other side of it, where we are looking into the accuracy plot. The next part is to look into your accuracy over your test set; unfortunately it is still early, but we know from our experience that we would have about ten to twelve percent accuracy very easily coming up, so I will leave these for you to experiment with.
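A minimal sketch of measuring that test-set accuracy, assuming a test_loader built the same way as the train loader and the model and device from above:

```python
import torch

correct, total = 0, 0
model.eval()
with torch.no_grad():                      # no gradients needed for evaluation
    for inputs, labels in test_loader:
        inputs, labels = inputs.to(device), labels.to(device)
        outputs = model(inputs)
        _, predicted = torch.max(outputs, 1)
        correct += (predicted == labels).sum().item()
        total += labels.size(0)
print(f'test accuracy: {100.0 * correct / total:.1f}%')
```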
This lecture was on very basic and preliminary concepts of deep learning, and that is what we call an introduction to deep learning with neural networks. What happens within this introduction is that we take images and relate each image to its features; these features are more compact representations of how images are represented, and from there we get to something called a classification problem, or associating an image with a label. On the labs side you have also learned how to code this down, subsequently using a neural network for classification purposes.