How does that cost compare to the soft prompting? How does the effect compare at the end of it? And what about giving it multiple prompts and measuring some kind of metric, instead of doing the soft prompting? How does that affect the metric that lets you know how successful it is?
You know, I think we might be thinking too hard about this one.
Because the simple answer, at least for soft prompting, is that it's kind of literally giving the model a few more parameters that it can train. So it might be just as simple as: there are a few more parameters in the model to train, so it can do better, kind of equivalent to adding another layer into the network or something like that. So that should definitely give it some more capacity.
I think the idea, though, originally came from a setting where you have a model, but no one lets you change the original model; it's in the cloud somewhere. So you can only add something on top of it and maybe try to adapt it. Adding these learnable tokens at the beginning then gives you some extra capacity in the network that you do have control over.
I don't want to say more than that about what's going on and why it helps. Maybe it's doing something like feature engineering, but I also suspect it's just: here are some more parameters, so you can probably learn more. Do you have, like, suggestions? Yeah.
Send me a message on Piazza, and I'll send them; there are a couple of recent papers on this soft-prompting idea.
Thank you. Awesome.
Thank you. No problem.
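A minimal sketch of the soft-prompting setup described above, assuming PyTorch; the frozen model, embedding size, and prompt length are hypothetical stand-ins, since no specific model was named:

    import torch
    import torch.nn as nn

    class SoftPrompt(nn.Module):
        """Prepend a few learnable 'token' embeddings to a frozen model's input."""
        def __init__(self, frozen_lm: nn.Module, embed_dim: int, n_prompt: int = 10):
            super().__init__()
            self.frozen_lm = frozen_lm
            for p in self.frozen_lm.parameters():
                p.requires_grad = False  # the "model in the cloud": we never change it
            # The only trainable parameters: n_prompt extra embedding vectors.
            self.prompt = nn.Parameter(torch.randn(n_prompt, embed_dim) * 0.02)

        def forward(self, token_embeds):
            # token_embeds: (batch, seq_len, embed_dim); assumes the frozen
            # model accepts embeddings directly rather than token ids.
            soft = self.prompt.unsqueeze(0).expand(token_embeds.shape[0], -1, -1)
            return self.frozen_lm(torch.cat([soft, token_embeds], dim=1))

Only the prompt embeddings receive gradients here, which is exactly the "few more parameters the model can train" point from the answer.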
I have a question regarding the data normalization.
Sure. So, does that change the distribution of the original dataset? It does change the distribution, yeah, but not in a way we'll worry about. I'm worried that if the distribution changed, that affected the final result. So it shouldn't, because the changes that we want to make, we make spec…
Because if you're just doing linear operations, like adding a number or dividing by a number, all the ordinal values keep their ordering, and all the continuous values keep their ordering as well. So we're really just shrinking and expanding the distribution. Mathematically, it shouldn't change the result. It's kind of like when we do calculus: okay, we can do an integral, and there's a plus-some-constant at the end. It turns out that, at least for training our model, we can pick a constant that makes our learning rate a little bit easier to work with, but in the end the constant doesn't really matter for the answer.
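A small sketch of that linear-operations point, with made-up numbers: standardizing is just a shift and a scale, so the ordering of the values is untouched and only the spread changes.

    import numpy as np

    x = np.array([3.0, -1.0, 7.0, 2.0])   # some feature column
    z = (x - x.mean()) / x.std()          # standardize: subtract a number, divide by a number

    # Ordering is preserved; only the scale of the distribution changed.
    assert (np.argsort(x) == np.argsort(z)).all()
    print(z.mean(), z.std())              # ~0.0 and 1.0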
Oh, but there is. You can look at it this way: essentially, what we're trying to do is reduce variance over the model. And there's a lot of mathematics that says, okay, you can do all of these methods for reducing variance, and it doesn't add bias, where bias is what you mean by "it can affect the output of the model." So then you can prove that t… Okay, there are a lot of papers that kind of do this. This stuff's a bit older, but whenever you're applying one of these standardizations, there will usually be a proof in the paper that says this doesn't affect the bias, so your answer should still be the… The variance is the thing we're trying to change, and if we change the variance, we can usually decrease the ugliness of the statistics of the gradients and the learning. And it doesn't affect the bias. Oh, okay.
Yeah, okay, got it. Okay. Thank you. No problem.
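One concrete way to see the no-added-bias claim, sketched with scikit-learn (an assumption, since no library was named): for ordinary least squares, standardizing a feature only rescales the learned coefficient, and the fitted predictions are identical.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(loc=50.0, scale=10.0, size=(200, 1))  # badly scaled feature
    y = 3.0 * X[:, 0] + rng.normal(size=200)

    Xs = (X - X.mean()) / X.std()                        # standardized copy

    pred_raw = LinearRegression().fit(X, y).predict(X)
    pred_std = LinearRegression().fit(Xs, y).predict(Xs)

    # The transform improves the conditioning (the variance side) but
    # leaves the fitted answer unchanged (no bias added).
    assert np.allclose(pred_raw, pred_std)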
I have a question about, you know, the data with the circles, like in the picture. I was thinking about it, but I guess PCA wouldn't do anything with it, right? Like, if you do principal component analysis on something like that?
Yeah. So you mean, you know, when it was polar coordinates and it was separable in polar coordinates. But PCA can't do that, because PCA is still going to do linear transformations of the data, and polar coordinates is not a linear transformation. Right? PCA is just looking at… let me get my Zoom stuff.
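A sketch of the polar-coordinates point, using scikit-learn's two-circles toy data as a stand-in for the dataset from class: the radius, a nonlinear feature, separates the classes with one threshold, while PCA's linear transform leaves the circles concentric.

    import numpy as np
    from sklearn.datasets import make_circles
    from sklearn.decomposition import PCA

    X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

    # Nonlinear feature: the radius. One threshold now separates the classes
    # (label 1 is the inner circle in make_circles).
    r = np.sqrt(X[:, 0] ** 2 + X[:, 1] ** 2)
    acc_radius = ((r < 0.65) == (y == 1)).mean()

    # PCA only rotates/projects linearly, so the circles stay circles and
    # no single component separates the classes.
    X_pca = PCA(n_components=2).fit_transform(X)
    acc_pca = ((X_pca[:, 0] > 0) == (y == 1)).mean()

    print(f"radius threshold: {acc_radius:.2f}, PCA axis threshold: {acc_pca:.2f}")  # ~1.0 vs ~0.5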
So is there, like, any… with some of this kernel stuff, you know, that stuff tries to separate it out in different dimensions. Kernel stuff? You mean, like, learning a kernel to transform some of the data? Yeah, you know, kernel tricks, or whatever they call it.
Sometimes, I think it's when the data is small and the number of features is high, you try to map it into, like, different dimensions. These functions, I think, help you separate it, instead of just using PCA.
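And a sketch of the kernel-trick side of the question, on the same toy circles: an RBF-kernel SVM implicitly maps the data into a higher-dimensional space where it becomes linearly separable, without ever materializing that space or hand-crafting the polar feature.

    from sklearn.datasets import make_circles
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

    # No linear boundary exists in the raw 2-D space, so the linear SVM sits
    # near chance; the RBF kernel separates the circles almost perfectly.
    linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)   # ~0.5
    rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)         # ~1.0

    print(linear_acc, rbf_acc)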
And this is kind of why deep learning works.
I mean, it's nice, really, because we're just trying to bake all of these problems into the normal machine learning system, so it can do many of these transformations itself. It doesn't always do it perfectly, and it's really hard for it to find the true transformations. And it might be that it's doing that internally, and we would… And that's what I like: I've got enough complicated stuff to worry about, so it's better if the model can handle it.