| inputs (string, lengths 190-2.97k) | targets (string, 12 classes) | _template_idx (int64, 0-9) | _task_source (string, 1 class) | _task_name (string, 1 class) | _template_type (string, 2 classes) |
|---|---|---|---|---|---|
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example: haha you guys are a bunch of losers.
Solution is here: Yes
Explanation: The comment here has word 'losers', which is used as an at... | No | 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | Yes | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Input: Consider Input: Interesting letter Jay Cross, perpetuating the liberal lie that republicans are racists, etc. That has been the provinc... | Output: No
| 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
haha you guys are a b... | No | 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example is below.
Q: haha you guys are a bunch of losers.
A: Yes
Rationale: The comment here has word 'losers', which is used as an attack ... | Yes | 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Example solution: Yes
Example explanation: The comment here has word 'losers', which is used as a... | Solution: No | 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | Yes | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Detailed Instructions: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
See one example below:
Problem: haha you guys are a bunch of losers.
Solution: Yes
Explanation: The comment here has wor... | Yes | 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
It is a real poor idea to kill people; sometimes it comes back to bite you in the butt. People get rather irate when this happens. The person ... | Yes
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: And there are a bunch of idiots who want the government to run our healthcare.
A: Yes
****
Q: Precisely 1 million, 230, 000. Okay, I admit... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: For your information home schooled fair WAY better than Public school children (proven Fact), you STUPID libtards took GOD out and put Tran... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
--------
Question: Wild bears in Alaska should be exterminated. They are dangerous and a nuisance. I am darned tired of carrying bear spray a... | No
| 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
--------
Question: But... but... but... FREEDOM!!!
Answer: No
Question: And yet people like you never have one word to say about the RG edit... | Yes
| 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
instruction:
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
question:
Let's not put the cart ahead of the horse here. This has more to do with the con mindset that "state law has no requirem... | No
| 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | Yes | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example: haha you guys are a bunch of losers.
Solution is here: Yes
Explanation: The comment here has word 'losers', which is used as an at... | Yes | 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | Yes | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
instruction:
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
question:
Racist dummy
answer:
Yes
question:
This idiot sent straight from writing articles about how Trump can't win the primar... | No
| 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | Yes | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
TASK DEFINITION: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
PROBLEM: A person's body adapts to the situations it most often finds itself in. If you most often find yourself sitting, you ... | No
| 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
instruction:
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
question:
And I never will. You can't argue with and you can't fix ignorance.
answer:
No
question:
When did 1000 feet per person... | No
| 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
instruction:
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
question:
I have to say that being a Homeroid, I've never particularly cared for Seaton from way before him becoming a pol but I d... | No
| 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[EX Q]: ADN....Question ? Are you trying to censor Anonymous ? Because each time one of his comments comes up to check if it is civil and it i... | Yes
| 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | No | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Detailed Instructions: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
See one example below:
Problem: haha you guys are a bunch of losers.
Solution: Yes
Explanation: The comment here has wor... | Yes | 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Detailed Instructions: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
See one example below:
Problem: haha you guys are a bunch of losers.
Solution: Yes
Explanation: The comment here has wor... | No | 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
These punks need a good beating, and then to be dragged back across the border by their hair...
Ex Output:
Yes
Ex Input:
in other... | Yes
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
haha you guys are a b... | No | 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
haha you guys are a b... | No | 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | No | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | Yes | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: I'm not sure they are done growing, up or down. Cold blooded murder of any animal should carry more than 30 days loss of freedom at a low... | Yes
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
Junior, your ignorance of basic supply and demand is profound. At least you were smart enough to delete your last post of babbling s... | Yes
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Medicaid expansion is a boondoggle, but we need to expand it to educate everyone as to what a failure it will be. The newly "covered" Medicai... | No
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Example solution: Yes
Example explanation: The comment here has word 'losers', which is used as a... | Solution: No | 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | No | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example: haha you guys are a bunch of losers.
Solution is here: Yes
Explanation: The comment here has word 'losers', which is used as an at... | Yes | 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: The problem is the voters, because we love to obsess about the most irrelevant crap, and then vote accordingly. Guess what? Then we are ... | Yes
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
TASK DEFINITION: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
PROBLEM: It is very stupid decision, because people after 18 can decide if they want to smoke or not.
SOLUTION: Yes
PROBLEM:... | No
| 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: Mary Sanchez is just another Hispanic with a nitpicking attitude full of hate and racism.
A: Yes
****
Q: It is ridiculous that people woul... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
haha you guys are a b... | No | 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
"...when the money that used to be available to pay them (state employees) is gone."
Meanwhile, Meyer (R-CONoco) doesn't hesitate to take ... | No
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
You're confusing "a" problem with "the" problem. Certainly black poverty, and the sick culture that has grown up around it, is a problem. But ... | Yes
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: Spot on. The original design was well north of 20 mil, even then they were cheap Charlie plans. And they kept cutting down from there. K... | Yes
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
"Francois...said school officials would do whatever they can to make students and staff feel safe at the high school on Monday. "
... | Yes
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | No | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example Input: "Islam is a pearl. A Muslim is an oyster."--old Muslim proverb meaning that a Muslims conduct isn't necessarily in compliance ... | No
| 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: I wonder what portion of commenters on here are paid by special interest PR groups to push a specific agenda in the comments. My only comp... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
I fail to see how this restaurant is celebrating colonial rule. Yes, the name of the restaurant and some of the items on the menu re... | Yes
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | Yes | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
Hey guys, I remixed this video. The jokes just re-write themselves! http://karllindfilms.com/
Ex Output:
No
Ex Input:
Here we go ... | No
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example Input: So suck it up and order it with less ice. My goodness people complain over the most stupid things!
Example Output: Yes
Example... | No
| 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example is below.
Q: haha you guys are a bunch of losers.
A: Yes
Rationale: The comment here has word 'losers', which is used as an attack ... | Yes | 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Yes, that is the review process we go through to comment. The three comments we see are determined by random chance, not by topic, author of t... | Yes
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Part 1. Definition
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Part 2. Example
haha you guys are a bunch of losers.
Answer: Yes
Explanation: The comment here has word 'losers', which is u... | No | 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Part 1. Definition
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Part 2. Example
haha you guys are a bunch of losers.
Answer: Yes
Explanation: The comment here has word 'losers', which is u... | Yes | 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: She may well come from Springfield.
[A]: No
[Q]: Patricia Blair: Instead of coming across as an insensitive hater, try tapping into the... | No
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: Did you not look at the map??? The other 40 states are 17 and 16. This would not have been a crime in any of those 40 states.
A: No
**... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: As far as I am concerned... You saw an opportunity to spread your big bad looter propaganda crap and everyone in archaeology is jumping o... | Yes
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | Yes | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example Input: Chris perhaps she is not "vested" in the community and has no desire to be. Just perhaps. Possibly she just wants to live somew... | Yes
| 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
He's from Rhode Island.
No
shot his legs out from under him? don't make me laugh
Yes
The oil price has increased from $26 to$44, so the PF d... | No
| 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | Yes | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | No | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example: haha you guys are a bunch of losers.
Solution is here: Yes
Explanation: The comment here has word 'losers', which is used as an at... | Yes | 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | Yes | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
--------
Question: No normal, well-adjusted person has resentment toward bicyclists or pedestrians, so there's nothing to increase. Your plans ... | No
| 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
instruction:
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
question:
Companies like LifeLock and IDShield are merely credit monitoring services. They don't "protect" you from identity fraud... | Yes
| 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
--------
Question: Curtis Kropar, You said, "While Trump may be a racist and has a bunch of other very obvious issues..." So you agree that Tr... | No
| 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example is below.
Q: haha you guys are a bunch of losers.
A: Yes
Rationale: The comment here has word 'losers', which is used as an attack ... | Yes | 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Ex Input:
"The government can take up to 17 percent of a monthly Social Security check, as long as the remaining amount is at least $750." And... | Yes
| 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | No | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | Yes | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Example solution: Yes
Example explanation: The comment here has word 'losers', which is used as a... | Solution: No | 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[EX Q]: Well, actually dumba** I go back to Cowper days and I still live in Alaska. I spent thirty years dealing with idiots like you, and m... | No
| 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | No | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | No | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Q: Cute? Probably, but I think, in a few spots anyway, it gives a bit of insight into the candidate's personality. I got a giggle out of the o... | No
****
| 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Part 1. Definition
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Part 2. Example
haha you guys are a bunch of losers.
Answer: Yes
Explanation: The comment here has word 'losers', which is u... | Yes | 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
--------
Question: I can't think of one comment you've made where you haven't come across like a total prick. I wish it was possible to block s... | Yes
| 7 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
TASK DEFINITION: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
PROBLEM: A person's body adapts to the situations it most often finds itself in. If you most often find yourself sitting, you ... | No
| 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: I've visited several dispensaries recently and I agree, they're long on schwag and short on knowledge. One answer is to secure a good gro... | No
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example input: haha you guys are a bunch of losers.
Example output: Yes
Example explanation: The comment here has word 'losers', which is used... | Yes | 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Given the task definition, example input & output, solve the new input case.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example: haha you guys are a bunch of losers.
Output: Yes
The comm... | No | 1 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Detailed Instructions: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
See one example below:
Problem: haha you guys are a bunch of losers.
Solution: Yes
Explanation: The comment here has wor... | Yes | 4 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Example Input: The referee is shit!
Example Output: Yes
Example Input: Are you shitting me?
Example Output: Yes
Example Input: Darn. I was s... | Yes
| 3 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example is below.
Q: haha you guys are a bunch of losers.
A: Yes
Rationale: The comment here has word 'losers', which is used as an attack ... | No | 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
You will be given a definition of a task first, then an example. Follow the example to solve a new instance of the task.
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
haha you guys are a b... | Yes | 0 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Input: Consider Input: Now, now... As the proponent of your incorrect glass-house assertions, which are incongruent with the facts, don't be t... | Output: Yes
| 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[Q]: Politicians love to spend other people's money. Why should Alaskans want to grow an already bloated parasite; State Government? The mo... | No
| 5 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
One example is below.
Q: haha you guys are a bunch of losers.
A: Yes
Rationale: The comment here has word 'losers', which is used as an attack ... | Yes | 9 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
[EX Q]: The referee is shit!
[EX A]: Yes
[EX Q]: And I eat Wendy's twice a week and have a pro-grade cardiovascular system.
Genetics, circum... | No
| 6 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | Yes | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Let me give you an example: haha you guys are a bunch of losers.
The answer to this example can be: Yes
Here is why: The comment here has word... | No | 8 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
Teacher: Categorize the comment on the basis of toxicity. If the comment is toxic (harmful, malicious, derogatory, threat, insult, identity attck, etc.) output Yes, otherwise output No.
Teacher: Now, understand the problem? If you are still confused, see the following example:
haha you guys are a bunch of losers.
Solut... | No | 2 | NIv2 | task1720_civil_comments_toxicity_classification | fs_opt |
End of preview.
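
The preview shows six columns: `inputs`, `targets`, `_template_idx`, `_task_source`, `_task_name`, and `_template_type`. Below is a minimal loading sketch, assuming the data is hosted on the Hugging Face Hub and readable with the `datasets` library; the repository id `user/dataset-name` and the `train` split are placeholders, since neither is shown in the preview.

```python
# Minimal sketch, assuming a Hugging Face Hub dataset loadable with `datasets`.
# "user/dataset-name" is a placeholder repo id, and "train" is an assumed split.
from datasets import load_dataset

ds = load_dataset("user/dataset-name", split="train")

# Columns shown in the preview above.
print(ds.column_names)
# ['inputs', 'targets', '_template_idx', '_task_source', '_task_name', '_template_type']

row = ds[0]
print(row["_task_name"])      # task1720_civil_comments_toxicity_classification
print(row["_template_type"])  # e.g. fs_opt
print(row["targets"])         # e.g. "Yes", "No", or a prefixed form like "Solution: No"
```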
Downloads last month: 11
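
The `targets` column holds 12 distinct strings, and the rows above include prefixed variants such as `Solution: No` and `Output: Yes` alongside plain `Yes`/`No`. The sketch below shows one way to collapse these to binary labels for evaluation; the prefix-stripping rule is an assumption based only on the variants visible in this preview, and the full dataset may contain others.

```python
# Hedged sketch: normalize the target strings seen in the preview
# ("Yes", "No", "Solution: No", "Output: Yes", ...) to binary labels.
def normalize_target(target: str) -> str:
    label = target.split(":")[-1].strip()  # drop prefixes like "Solution:" or "Output:"
    if label not in {"Yes", "No"}:
        raise ValueError(f"Unexpected target: {target!r}")
    return label

assert normalize_target("Yes") == "Yes"
assert normalize_target("Solution: No") == "No"
assert normalize_target("Output: Yes") == "Yes"
```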