Understanding this, one of the major points is that if you look into most parts of this code, they are written quite modularly. So if I want to, say, add a few more linear layers or change the number of channels over there, I can just keep adding them. Say I just want to add another layer: I can write it as nn.Linear. So I have six outputs from there, and from those six outputs I can again think of going down to, say, another six outputs. That's also pretty much possible, and then I can do an nn.Linear from there.
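As a minimal sketch of that modular style, assuming a hypothetical small network whose layer names and sizes are purely illustrative:

```python
import torch.nn as nn

# Chaining extra linear layers is just a matter of matching the output
# size of one layer to the input size of the next. Names and sizes here
# are illustrative, not the lecture's exact code.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(10, 6)  # some input size down to six outputs
        self.l2 = nn.Linear(6, 6)   # six outputs feeding another six
        self.l3 = nn.Linear(6, 6)   # keep adding layers the same way

    def forward(self, x):
        return self.l3(self.l2(self.l1(x)))
```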
…is done over here. Till this point I just had it with self.l2, and what I can add…
…defines these architectures, and you will just have to make a short change. So by now it's…
…behavior, and it was much slower in actually coming down; the error did decrease to…
So I just need to execute only this part of it. Keep one thing in mind: if you are making some changes to a previous block, then you need to execute all the subsequent blocks, so that…
…Python code for you, and it will appear in the same modular way within your Python environment, and you can just execute that .py file within whatever is your choice of environment. Most likely, most of us would be making use of Anaconda Python in order to do that, and…
…linearly coming down and much more jagged, and the error was also much higher. So I can make a change over there, say make this fifty, and see. These changes which I am making over there are on the learning rate; that's the factor which is going to drive the update in my learning rule, and that makes a very important statement in terms of how fast it…
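To make that concrete, here is a sketch of where the learning rate sits; the stand-in model and the lr values are assumptions, while torch.optim.SGD itself is the standard optimizer:

```python
import torch.nn as nn
import torch.optim as optim

net = nn.Linear(784, 400)  # stand-in model, for illustration only

# The learning rate eta scales each weight update in the rule
#     w <- w - eta * dL/dw
# A larger eta takes bigger, possibly jagged steps; a smaller eta
# descends more slowly but more smoothly.
optimizer = optim.SGD(net.parameters(), lr=0.10)  # edit lr and re-run this block
```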
…is also pretty much true, and that's what we do observe. So it's not necessary that always going slower is going to give you better convergence, or even that going…
…in the future lectures we would be getting a clearer understanding into how this…
…with more of these lectures, and do keep on coding as we continue with the theoretical…
…in order to hierarchically keep on building. So there were two different particular kinds…
…to what was present in that encoder. So if in the encoder you were reducing the sizes over there, in the decoder you are just going to increase the sizes, and then one way is where you define the decoder over there.
So we are actually going to replicate both of these methods over here, within the same code as you had done in the earlier ones, and as for how it goes down… that creates…
…and num_workers, which is just looking into how many parallel loaders you can work with; as such, it's equal to four. Then you also set your batch size, which is equal to the number of images it fetches in one particular batch. So, if your use_gpu flag is not set, then none of your models or data gets converted onto the GPU.
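A sketch of that loader setup, assuming the MNIST digits used in this lecture; the batch size of 32 is an illustrative value, not the lecture's:

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# num_workers: how many parallel loader processes fetch the data (four here);
# batch_size: how many images are fetched in one particular batch.
train_set = datasets.MNIST(root='./data', train=True, download=True,
                           transform=transforms.ToTensor())
train_loader = DataLoader(train_set, batch_size=32, num_workers=4, shuffle=True)

# If this flag is not set, neither the model nor the data is moved to
# the GPU and everything stays on the CPU.
use_gpu = torch.cuda.is_available()
```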
Once the layer definition and initialization is complete, the next part is that you write down your forward function. What your forward function does is a forward pass over the encoder, which consists of a linear layer and a tan hyperbolic as the transfer function.
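A minimal sketch of such a module, assuming the 784-to-400 encoder discussed here; the mirror-image decoder and the attribute names are illustrative:

```python
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder: a linear layer followed by a tanh transfer function
        self.encoder = nn.Sequential(nn.Linear(784, 400), nn.Tanh())
        # decoder: the mirror-image linear layer back to the input size
        self.decoder = nn.Linear(400, 784)

    def forward(self, x):
        # forward pass over the encoder, then reconstruct with the decoder
        return self.decoder(self.encoder(x))
```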
OK, so for the purpose of training it's kept small; we just have ten iterations…
…the perfect way, and the best way of doing it, is to take an L2 norm as the loss function…
…run it on a GPU. The next part, as it goes, is to zero out all the gradient residuals within a particular epoch. So basically, your gradients which were computed in the earlier epoch should not play any role in the current epoch, and that's the whole purpose of this zeroing of the gradients. Then the next part is just a forward pass; you…
…of an image; then you have a batch, then you give a batch of images…
…the output and the input image batches which are formed, and from there you have your loss; and then you take the derivative of the loss using loss.backward(). Once that…
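Putting those steps together as a sketch; the model, the random stand-in batch, and the lr value are assumptions, while nn.MSELoss is PyTorch's standard L2-style loss:

```python
import torch
import torch.nn as nn
import torch.optim as optim

net = nn.Sequential(nn.Linear(784, 400), nn.Tanh(), nn.Linear(400, 784))
criterion = nn.MSELoss()                  # L2-norm style loss
optimizer = optim.SGD(net.parameters(), lr=0.10)

images = torch.randn(32, 784)             # stand-in batch of flattened images

for epoch in range(10):                   # just ten iterations, as above
    optimizer.zero_grad()                 # zero gradients left from the last epoch
    outputs = net(images)                 # forward pass over the batch
    loss = criterion(outputs, images)     # compare output with the input batch
    loss.backward()                       # derivative of the loss
    optimizer.step()                      # apply the update
```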
…it comes down to 0.10; you clearly see that there is a decrease in error, and then it keeps on steadily decreasing. Now, one thing to keep in mind is that since it's a fully connected layer with a lot of neurons, it would take a bit of time, and that's where it takes a bit of time for us as well. And after some time you see that…
…to come down to an optimal value.
So the relative change, the relative difference between these errors across two epochs, is also going down quite slowly now. You can actually play around over there: what you can do is change your learning rates, play around, and see if…
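One hypothetical way to track that relative difference between epochs; the loss values below are made up purely for illustration:

```python
# Relative change in error between consecutive epochs.
prev_loss = None
for epoch_loss in [0.76, 0.45, 0.30, 0.22, 0.18]:  # illustrative values
    if prev_loss is not None:
        rel_change = abs(prev_loss - epoch_loss) / prev_loss
        print(f'relative change: {rel_change:.3f}')
    prev_loss = epoch_loss
```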
But then, it is not always necessary that this would be the only combination which is…
…part of the classes. So here you can just make a change with your learning rates as well, and then you would see that this changes. Now, this was the first approach…
So what I would do is: I have my network, which is trained, which was net. Now I take my encoder part over there, and then I add another new module to it; I call this…
…seven hundred eighty-four neurons going down to four hundred neurons, and then reconstructed again to seven eighty-four neurons. Here, what I do is: seven eighty-four neurons going down to four hundred neurons, going down to two hundred fifty-six neurons, from there coming up to four hundred neurons, and from there going to seven hundred eighty-four neurons. So this becomes my first part of it. Now similarly, in the decoder part, where I have two hundred fifty-six neurons to be connected to four hundred neurons, this comes down over there after the output of these four hundred neurons comes out; and then, before the decoder, I also have another tan hyperbolic function. Now, if you clearly…
…of these four hundred neurons. That means that any of the values which come out of these four hundred neurons are in the range of minus one to plus one.
…neurons, and for that reason what we have is four hundred neurons; so whenever…
…a tan hyperbolic transfer function, and not any other transfer function; so you need to keep this in mind.
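A quick check of that range property, just to illustrate tanh's bounds:

```python
import torch

x = torch.linspace(-5, 5, steps=11)
y = torch.tanh(x)
print(y.min().item(), y.max().item())  # both lie strictly inside (-1, +1)
```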
These parts are just added over there, and then I can print my network; so let's just see what…
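A sketch of printing such a network; the Sequential layout below follows the sizes described, though the actual module names in the lecture's code may differ:

```python
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(784, 400), nn.Tanh(),
    nn.Linear(400, 256), nn.Tanh(),
    nn.Linear(256, 400), nn.Tanh(),
    nn.Linear(400, 784),
)
print(net)  # lists each layer with its in/out feature sizes, in order
```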
…seven eighty-four neurons to four hundred neurons, then a tan hyperbolic; the output of this one goes into another sequential connection, from four hundred to two fifty-six, and then a tan hyperbolic. Now, the input to the next decoder layer is two hundred fifty-six to four hundred…
So that's up to you, how you define your hierarchical strategies; but as long as it's a linear network, it does not make much of a difference…
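The stacking described above might be sketched like this; the attribute names (encoder, encoder2, decoder2, decoder) are illustrative, not the lecture's exact identifiers:

```python
import torch.nn as nn

# Keep the trained 784 -> 400 encoder and 400 -> 784 decoder, and insert
# a new 400 -> 256 -> 400 stage between them; a tanh before each decoder
# stage keeps the values flowing in within (-1, +1).
class StackedAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder  = nn.Sequential(nn.Linear(784, 400), nn.Tanh())
        self.encoder2 = nn.Sequential(nn.Linear(400, 256), nn.Tanh())
        self.decoder2 = nn.Sequential(nn.Linear(256, 400), nn.Tanh())
        self.decoder  = nn.Linear(400, 784)

    def forward(self, x):
        return self.decoder(self.decoder2(self.encoder2(self.encoder(x))))
```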
…pass and get your outputs, and then you have your loss function defined, and the derivative…
…as in the earlier case, though you have a sort of better-trained model over there.
So if you look at the mean squared error, it comes down to around 0.76…
…initialized, and that does mean that whatever is coming down from the learnt representations and getting forwarded to the decoder layers subsequently is now pretty much random; and that's why the starting error is a bit more than what was the ending error over that…
…it may be the case that you might not be able to reach the exact error rates which you had achieved by using just one hidden layer, and there is nothing to panic about; the only…
…we are trying to solve the actual problem of classification over here, using an autoencoder for representation learning, and then we are using a stacking policy, and…
…one particular task, whereas the network is supposed to be used for another task, and that's classification. So, given all of these things in mind, what you need to have really…
…errors; however, clever stacking and a change of learning rates can actually bring it down.
So at this point I would leave it to you to actually go and update this learning rate; play around with this learning rate, and you can definitely see a change coming…
…was to create a neural network where I have two hidden layers and then a final output…
…that this is just a ten-digit classification problem, zero to nine, written in hundred and…
…and you have to classify it. Now, over there I would need to modify some part of the network, because I don't need the decoder as such anywhere now. So now, what I can do…
…this layer; these two layers are something which I can do away with, and the best way…
…over here. Now, once that is done, the next part is that I would need to add… so now, what…
…to two hundred and fifty-six neurons; and from two hundred and fifty-six neurons, I will have…
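That conversion, sketched under the assumption of the 784-400-256 stack and a ten-class output; the layout and names are illustrative:

```python
import torch.nn as nn

# Drop the decoder layers entirely and add a final 256 -> 10 output
# layer for the ten digit classes.
classifier = nn.Sequential(
    nn.Linear(784, 400), nn.Tanh(),  # first hidden layer (from the autoencoder)
    nn.Linear(400, 256), nn.Tanh(),  # second hidden layer (from the autoencoder)
    nn.Linear(256, 10),              # new output layer: ten digit classes
)
```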