...the quantile for the rest of the percentage. See what the difference is between those, and then you could just print out, you know, these rows that have pretty high values so that you can check them later. Maybe someone was inputting data and made mistakes, and you end up with outliers that look like these.
Any questions so far about these different ways to find outliers?
No? Okay.
Well, again, if something else comes up,
for example, in the homework, and you're not quite sure which one of these outlier methods to use, just shoot out a question on Piazza, and either myself or one of the TAs will end up answering it. If it's more about a specific question in the homework assignment, no problem.
Okay? And then, yeah, for this last example, you can also do something a bit different. So for the data points we have, False means that these values are valid, whereas True indicates the presence of an outlier.
So you could adjust your interquartile range a bit by doing this little multiplication, and it will give you these different values. So if you were looking at this, it's basically determining, for each one of our features, which is each one of the columns, whether or not each value looks like an outlier. Then you can get this big table of True and False, and you can just print out those particular rows of the data.
These will be your potential outliers.
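The boolean-mask approach described above might be sketched like this in pandas. The toy data, the column names, and the 1.5 × IQR multiplier are all assumptions here, since the actual slide code isn't shown:

```python
import pandas as pd

# Toy data; the last row contains deliberate entry errors.
df = pd.DataFrame({
    "height": [1.6, 1.7, 1.75, 1.8, 4.2],
    "weight": [60, 72, 68, 80, 500],
})

# Interquartile-range rule: flag values more than 1.5 * IQR
# outside the first/third quartiles, per column.
q1 = df.quantile(0.25)
q3 = df.quantile(0.75)
iqr = q3 - q1
mask = (df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)

# mask is a DataFrame of booleans: False = valid, True = outlier.
print(mask)
# Rows containing at least one flagged value, i.e. potential outliers:
print(df[mask.any(axis=1)])
```

Here `mask.any(axis=1)` collapses the per-column flags into one flag per row, which is what lets you print just the suspicious rows.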
So yeah, I guess I've been using the word "outliers" a lot. They're all potential outliers when you do these kinds of checks to see whether or not, statistically, they're far away from the average, far away from the mean.
Okay?
So there are some other ways we can look for outliers; there's really a whole bunch of different ways. One is to basically look at a couple of the percentiles over the data. Percentiles are used in statistics to indicate where a value sits in the distribution.
But the point is that you want to do this in some sort of statistical way, so that it captures some percent of the data. I've seen a mistake in this class before that's made a bit often: if you normalize your data in some way, say between zero and one, and we say we want to identify the most uncommon percentage at the ends, we don't literally mean the top slice and the bottom slice of the value range. Sorry, not percent, but values.
So remember, there's a little bit of a difference between the values that the features actually take on and the likelihood of how often those values appear in the data set. So just be careful when stuff is already mapped to between zero and one, because actually we want something a little bit different from that.
You could still have a whole bunch of data, and maybe a lot of it does appear at the ends of the value range. But when you end up computing something like a standard deviation, you'll get some statistical estimate of the distribution of the data, and we want to judge the data according to that distribution. Maybe the key idea to remember here: always make sure you're computing your outliers with respect to some sort of distribution over your data, not just the values of the data. That means some sort of representation of a probability distribution.
Okay?
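A small sketch of that distinction, assuming NumPy and skewed data as an example: thresholding on the normalized values themselves versus thresholding on a quantile of the distribution can select very different numbers of points.

```python
import numpy as np

rng = np.random.default_rng(0)

# Skewed data, min-max normalized to [0, 1]: most mass sits near 0.
x = rng.exponential(scale=1.0, size=10_000)
x = (x - x.min()) / (x.max() - x.min())

# Mistake: treat the top 1% of the *value range* as outliers.
n_by_value = int((x > 0.99).sum())

# Intended: treat the top 1% of the *distribution* as outliers.
hi = np.quantile(x, 0.99)
n_by_quantile = int((x > hi).sum())

# The value-range cut catches almost nothing on skewed data;
# the quantile cut catches roughly 1% of the points, as intended.
print(n_by_value, n_by_quantile)
```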
Now, the real question is: suppose we've got a bunch of outliers. What do we want to do with them? You know, if they're really weird outliers. Or not weird, but, you know, like the people that compete in the Olympics, who are most often outliers, maybe those are people we want to give medals to, rather than just eliminating them from the data set.
We basically have to choose between these options depending on what we want to do. So we could try and correct things.
So if you were to grab, all right, we want to get the quantile for, you know, the last chunk of the data, or the first chunk, we could basically take this upper limit and kind of clamp all the data to it. So in this example, we've got some Gaussian distribution over the data; it's, of course, much more dense here in the middle. We could take all these data points out in the tail and just stick them right here on the edge. Now we don't have any outliers, right? According to our original definition.
Okay, what's kind of weird about doing this? Any thoughts? Has anyone tried this before?
You're definitely changing the distribution, which you always kind of do when you deal with outliers.
But also in this case, you're making the distribution really sharp at the edges. For some weird reason, there's now a big concentration of points here and here. So in the worst case, when we do this, we kind of convert what was a Gaussian into a trimodal distribution: now there's a weird blob in the middle, a blob here, and a blob on the end, which really wasn't there originally. Same thing if we also take the lower limit and remap things to it.
And the one thing that can really get us, if we're rather unlucky: remember, most of the time we're supposed to assume, in a lot of our machine learning algorithms, that the data is distributed roughly according to a Gaussian distribution. So the more stuff we clamp, the less this is going to look like a Gaussian, so you could also destabilize training a little bit this way.
Well, that said, maybe if you're only clamping a few things, you can get away with doing this. Good.
The other option is, you can try removing them entirely. So, you know, we never really want to remove data; we want to try and keep as much as we can. But in some cases we can remove data. If you happen to have more data than you could possibly need, then it's okay to remove data if you want to.
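Dropping the flagged rows instead of clamping might be sketched like this in pandas. The toy data and the IQR-based mask are assumptions, one common choice rather than necessarily the lecture's:

```python
import pandas as pd

# Toy data; the last row contains deliberate entry errors.
df = pd.DataFrame({
    "height": [1.6, 1.7, 1.75, 1.8, 4.2],
    "weight": [60, 72, 68, 80, 500],
})

# Flag values more than 1.5 * IQR outside the quartiles, per column.
q1, q3 = df.quantile(0.25), df.quantile(0.75)
iqr = q3 - q1
mask = (df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)

# Keep only rows where no column was flagged as an outlier.
df_clean = df[~mask.any(axis=1)]
print(len(df), "->", len(df_clean))
```

Unlike clamping, this leaves the shape of the remaining distribution alone, at the cost of throwing away whole rows.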
Or, if you don't have a lot of data, you can explore different ways of collecting more. But you may not want to clamp it with respect to a hard boundary if your data was originally Gaussian, especially if the machine learning algorithm you're going to be using really likes your data to be in some sort of Gaussian shape. But maybe if you're using something like a decision tree, then clamping the stuff isn't as bad as you'd expect it to be.
Does that make sense? Any other questions about the perils of what to do with your outliers, or what to do if someone says you're not really allowed to remove them?
No? Well, I'll just hope we're in the nice case here, where we can just get rid of them and we don't have to worry about it.
Okay. Or not. And I think in a lot of cases you end up being in that space as well. So it's not too bad.
Would there be any motive for, instead of just chunking everything over one area, sticking it to the upper or lower limit, doing some sort of weighting, like a scaling? Or at that point, would it...