{ "cells": [ { "cell_type": "markdown", "metadata": { "id": "4pXaxnMWrteH" }, "source": [ "# Aim: To calculate respective element-wise Cohen Kappa Scores for our custom inter-annotator datasets.\n", "\n", "- Tweet Dataset\n", "- Fake News Dataset" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2022-12-06T20:07:39.375584Z", "iopub.status.busy": "2022-12-06T20:07:39.374981Z", "iopub.status.idle": "2022-12-06T20:07:39.382165Z", "shell.execute_reply": "2022-12-06T20:07:39.380382Z", "shell.execute_reply.started": "2022-12-06T20:07:39.375550Z" }, "id": "K9lHkfzZUPyu", "trusted": true }, "outputs": [], "source": [ "import pandas as pd\n", "import numpy as np" ] }, { "cell_type": "markdown", "metadata": { "id": "gPGOd15ndFk2" }, "source": [ "# Overlapping Twitter News" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "20ml5y3yigqj" }, "outputs": [], "source": [ "df = pd.read_csv(\"\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 791 }, "id": "gdgHmS61ignW", "outputId": "99d7fe5a-8ed4-46bc-a8a2-1e793a051b6e" }, "outputs": [ { "data": { "text/html": [ "\n", "
\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
S_1Ann_1.1Ann_1.2Ann_1.3Ann_1.4Ann_1.5Ann_1.6Ann_1.7S_2Ann_2.1Ann_2.2Ann_2.3Ann_2.4Ann_2.5Ann_2.6Ann_2.7
0Is that a cave painting site? Exciting prizes ...SpeculationThe sentence does not provide sound logic to s...When,Where,WhyGreythe statement don’t have any positive and nega...Gaining AdvantageEducationalIs that a cave painting site? Exciting prizes ...SpeculationNo ProofWhen,Where,WhyGreytold to hype itGaining AdvantageEducational
1Sonia Gandhi to travel abroad for medical chec...SpeculationThe sentence does not provide sound logic to s...When,WhereGreythe statement don’t have any positive and nega...Gaining EsteemPoliticalSonia Gandhi to travel abroad for medical chec...SpeculationNo ProofWhen,WhereGreyNaNGaining EsteemPolitical
2S&T dept to set up 75 science hubs exclusively...SpeculationThe sentence does not provide sound logic to s...WhenGreythe statement don’t have any positive and nega...Gaining AdvantagePoliticalS&T dept to set up 75 science hubs exclusively...SpeculationNo ProofWhen,Where,WhyGreytold to show that they care for themGaining AdvantagePolitical
3PM Modi’s nuclear power push gains traction wi...SpeculationThe sentence does not provide sound logic to s...When,Where,WhyGreythe statement don’t have any positive and nega...Gaining AdvantagePoliticalPM Modi’s nuclear power push gains traction wi...SpeculationGains traction' without any proof,Inclined tow...When,Where,WhyBlackInclined towards one sideGaining EsteemPolitical
4India to showcase environmental conscious life...Opinionperson showing his thoughts for the enviornmentWhen,Where,WhyGreythe statement don’t have any positive and nega...Gaining EsteemPoliticalIndia to showcase environmental conscious life...DistortionNo proof , evidence to support the statementNaNBlackInclined towards one sideDefaming EsteemPolitical
\n", "
\n", " \n", " \n", " \n", "\n", " \n", "
\n", "
\n", " " ], "text/plain": [ " S_1 Ann_1.1 \\\n", "0 Is that a cave painting site? Exciting prizes ... Speculation \n", "1 Sonia Gandhi to travel abroad for medical chec... Speculation \n", "2 S&T dept to set up 75 science hubs exclusively... Speculation \n", "3 PM Modi’s nuclear power push gains traction wi... Speculation \n", "4 India to showcase environmental conscious life... Opinion \n", "\n", " Ann_1.2 Ann_1.3 Ann_1.4 \\\n", "0 The sentence does not provide sound logic to s... When,Where,Why Grey \n", "1 The sentence does not provide sound logic to s... When,Where Grey \n", "2 The sentence does not provide sound logic to s... When Grey \n", "3 The sentence does not provide sound logic to s... When,Where,Why Grey \n", "4 person showing his thoughts for the enviornment When,Where,Why Grey \n", "\n", " Ann_1.5 Ann_1.6 \\\n", "0 the statement don’t have any positive and nega... Gaining Advantage \n", "1 the statement don’t have any positive and nega... Gaining Esteem \n", "2 the statement don’t have any positive and nega... Gaining Advantage \n", "3 the statement don’t have any positive and nega... Gaining Advantage \n", "4 the statement don’t have any positive and nega... Gaining Esteem \n", "\n", " Ann_1.7 S_2 \\\n", "0 Educational Is that a cave painting site? Exciting prizes ... \n", "1 Political Sonia Gandhi to travel abroad for medical chec... \n", "2 Political S&T dept to set up 75 science hubs exclusively... \n", "3 Political PM Modi’s nuclear power push gains traction wi... \n", "4 Political India to showcase environmental conscious life... \n", "\n", " Ann_2.1 Ann_2.2 \\\n", "0 Speculation No Proof \n", "1 Speculation No Proof \n", "2 Speculation No Proof \n", "3 Speculation Gains traction' without any proof,Inclined tow... \n", "4 Distortion No proof , evidence to support the statement \n", "\n", " Ann_2.3 Ann_2.4 Ann_2.5 \\\n", "0 When,Where,Why Grey told to hype it \n", "1 When,Where Grey NaN \n", "2 When,Where,Why Grey told to show that they care for them \n", "3 When,Where,Why Black Inclined towards one side \n", "4 NaN Black Inclined towards one side \n", "\n", " Ann_2.6 Ann_2.7 \n", "0 Gaining Advantage Educational \n", "1 Gaining Esteem Political \n", "2 Gaining Advantage Political \n", "3 Gaining Esteem Political \n", "4 Defaming Esteem Political " ] }, "execution_count": 603, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Ew9vqgHyigkn" }, "outputs": [], "source": [ "#convert ann_1 and ann_2 into small letter\n", "df2 = df.apply(lambda x: x.astype(str).str.lower())" ] }, { "cell_type": "markdown", "metadata": { "id": "Gnb5gexkqOJT" }, "source": [ "## Cohen-Kappa Score implementation: (Manual)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "Mt05V5Juigf9" }, "outputs": [], "source": [ "def cohen_kappa(ann1, ann2):\n", " \"\"\"Computes Cohen kappa for pair-wise annotators.\n", " :param ann1: annotations provided by first annotator\n", " :type ann1: list\n", " :param ann2: annotations provided by second annotator\n", " :type ann2: list\n", " :rtype: float\n", " :return: Cohen kappa statistic\n", " \"\"\"\n", " count = 0\n", " for an1, an2 in zip(ann1, ann2):\n", " if an1 == an2:\n", " count += 1\n", " # print(count, len(ann1))\n", " A = count / len(ann1) # observed agreement A (Po)\n", "\n", " uniq = set(ann1 + ann2)\n", " # print(uniq)\n", " E = 0 # expected agreement E (Pe)\n", " for item in uniq:\n", " cnt1 = ann1.count(item)\n", " cnt2 = ann2.count(item)\n", " count = ((cnt1 / len(ann1)) * (cnt2 / len(ann2)))\n", " E += count\n", " # print(A, E)\n", " return round((A - E) / (1 - E), 4)" ] }, { "cell_type": "markdown", "metadata": { "id": "HIliUydFqUi_" }, "source": [ "## For SBDO Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "K79_jFVYigb1", "outputId": "0d093681-2794-434d-cbf8-b25c167f1aa8" }, "outputs": [ { "data": { "text/plain": [ "speculation 154\n", "sounds factual 130\n", "opinion 105\n", "distortion 81\n", "bias 22\n", "speculation,bias 1\n", "Name: Ann_1.1, dtype: int64" ] }, "execution_count": 607, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.1'].str.strip().value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "lDQQGv3HigZJ", "outputId": "c02e7fa4-4305-47d4-c786-41e4a69ca1a2" }, "outputs": [ { "data": { "text/plain": [ "speculation 204\n", "sounds factual 122\n", "opinion 92\n", "distortion 40\n", "bias 32\n", "speculation,bias 2\n", "speculation,sounds factual 1\n", "Name: Ann_2.1, dtype: int64" ] }, "execution_count": 608, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.1'].str.strip().value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "xBbHJ7ZXqWS4" }, "source": [ "### Overall Kappa Score\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "GHZQOWMmiwiT", "outputId": "4d3d2b32-d559-4a4c-c5c7-7bd94477033f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for SBDOSF column: 0.4019\n" ] } ], "source": [ "# Overall all kappa scores\n", "ann1 = df2['Ann_1.1'].str.strip().to_list()\n", "ann2 = df2['Ann_2.1'].str.strip().to_list()\n", "print(f'Cohen kappa score for SBDOSF column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "tAtoh0WhqaNA" }, "source": [ "### Element-wise Kappa Score" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "sSTEJP1fP35A", "outputId": "d192a2b8-b648-48ce-c201-d62f9a15d584" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score for speculation : 0.4651 and total when's according to ann_1 & ann_2 are 155 & 207 respectively!\n", "cohen_kappa score for sounds factual : 0.5376 and total sounds factual's according to ann_1 & ann_2 are 130 & 123 respectively!\n", "cohen_kappa score for opinion : 0.398 and total opinion's according to ann_1 & ann_2 are 105 & 92 respectively!\n", "cohen_kappa score for distortion : 0.1748 and total distortion's according to ann_1 & ann_2 are 81 & 40 respectively!\n", "cohen_kappa score for bias : 0.0525 and total bias's according to ann_1 & ann_2 are 23 & 34 respectively!\n" ] } ], "source": [ "ann1 = df2['Ann_1.1'].str.strip().to_list()\n", "ann2 = df2['Ann_2.1'].str.strip().to_list()\n", "\n", "# for speculation\n", "speculation_1 = [1 if when.find('speculation')>-1 else 0 for when in ann1]\n", "speculation_2 = [1 if when.find('speculation')>-1 else 0 for when in ann2]\n", "count_speculation_1, count_speculation_2 = sum(speculation_1), sum(speculation_2)\n", "print(f'cohen_kappa score for speculation : {cohen_kappa(speculation_1, speculation_2)} and total when\\'s according to ann_1 & ann_2 are {count_speculation_1} & {count_speculation_2} respectively!')\n", "\n", "# For sounds factual\n", "sounds_factual_1 = [1 if why.find('sounds factual')>-1 else 0 for why in ann1]\n", "sounds_factual_2 = [1 if why.find('sounds factual')>-1 else 0 for why in ann2]\n", "count_sounds_factual_1, count_sounds_factual_2 = sum(sounds_factual_1), sum(sounds_factual_2)\n", "\n", "print(f'cohen_kappa score for sounds factual : {cohen_kappa(sounds_factual_1, sounds_factual_2)} and total sounds factual\\'s according to ann_1 & ann_2 are {count_sounds_factual_1} & {count_sounds_factual_2} respectively!')\n", "\n", "\n", "# For opinion\n", "opinion_1 = [1 if who.find('opinion')>-1 else 0 for who in ann1]\n", "opinion_2 = [1 if who.find('opinion')>-1 else 0 for who in ann2]\n", "count_opinion_1, count_opinion_2 = sum(opinion_1), sum(opinion_2)\n", "\n", "print(f'cohen_kappa score for opinion : {cohen_kappa(opinion_1, opinion_2)} and total opinion\\'s according to ann_1 & ann_2 are {count_opinion_1} & {count_opinion_2} respectively!')\n", "\n", "\n", "# For distortion\n", "distortion_1 = [1 if where.find('distortion')>-1 else 0 for where in ann1]\n", "distortion_2 = [1 if where.find('distortion')>-1 else 0 for where in ann2]\n", "count_distortion_1, count_distortion_2 = sum(distortion_1), sum(distortion_2)\n", "\n", "print(f'cohen_kappa score for distortion : {cohen_kappa(distortion_1, distortion_2)} and total distortion\\'s according to ann_1 & ann_2 are {count_distortion_1} & {count_distortion_2} respectively!')\n", "\n", "\n", "# For bias\n", "bias_1 = [1 if what.find('bias')>-1 else 0 for what in ann1]\n", "bias_2 = [1 if what.find('bias')>-1 else 0 for what in ann2]\n", "count_bias_1, count_bias_2 = sum(bias_1), sum(bias_2)\n", "\n", "print(f'cohen_kappa score for bias : {cohen_kappa(bias_1, bias_2)} and total bias\\'s according to ann_1 & ann_2 are {count_bias_1} & {count_bias_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "9ksNDUyzqc-5" }, "source": [ "## For Missing W's Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VJzuMBCcizjn", "outputId": "b279befa-3c8d-4acb-800b-c483c3de7996" }, "outputs": [ { "data": { "text/plain": [ "when,where,why 247\n", "when,why 80\n", "when,where 41\n", "why 27\n", "what,when,where,why 17\n", "when 16\n", "where,why 16\n", "when,where,who,why 11\n", "where 6\n", "when,who,why 4\n", "when,where,who 4\n", "nan 3\n", "where,who,why 3\n", "what,when,why 3\n", "what,why 3\n", "when,who 2\n", "what,when,where 2\n", "what,where,why 2\n", "what 1\n", "whose,why 1\n", "who,why 1\n", "who 1\n", "what,when,who,why 1\n", "what,where,who,why 1\n", "Name: Ann_1.3, dtype: int64" ] }, "execution_count": 611, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.3'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "oqnA5zbLizhS", "outputId": "a9541730-a44c-4fe5-cbb5-7b42f0eb6a55" }, "outputs": [ { "data": { "text/plain": [ "when,where,why 218\n", "when,why 114\n", "when,where 50\n", "why 40\n", "when 18\n", "where,why 16\n", "nan 8\n", "where 5\n", "when,where,who,why 4\n", "what,when,why 3\n", "what 3\n", "when,who 3\n", "what,when 2\n", "what,when,where 2\n", "when,who,why 2\n", "where,who,why 2\n", "who 1\n", "what,when,where,why 1\n", "who,why 1\n", "Name: Ann_2.3, dtype: int64" ] }, "execution_count": 612, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.3'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "HdZKnQ7pqg7-" }, "source": [ "### Element-wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "nuVIRKYXizdq", "outputId": "c7bf60fa-e84e-4f28-e419-d6d482bedc57" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score for When : 0.5453 and total when's according to ann_1 & ann_2 are 428 & 417 respectively!\n", "cohen_kappa score for Why : 0.556 and total why's according to ann_1 & ann_2 are 417 & 401 respectively!\n", "cohen_kappa score for Who : 0.061 and total who's according to ann_1 & ann_2 are 29 & 13 respectively!\n", "cohen_kappa score for Where : 0.4753 and total where's according to ann_1 & ann_2 are 350 & 298 respectively!\n", "cohen_kappa score for What : 0.0671 and total what's according to ann_1 & ann_2 are 30 & 11 respectively!\n" ] } ], "source": [ "ann1 = df2['Ann_1.3'].str.strip().to_list()\n", "ann2 = df2['Ann_2.3'].str.strip().to_list()\n", "\n", "# for when\n", "when_1 = [1 if when.find('when')>-1 else 0 for when in ann1]\n", "when_2 = [1 if when.find('when')>-1 else 0 for when in ann2]\n", "count_when_1, count_when_2 = sum(when_1), sum(when_2)\n", "print(f'cohen_kappa score for When : {cohen_kappa(when_1, when_2)} and total when\\'s according to ann_1 & ann_2 are {count_when_1} & {count_when_2} respectively!')\n", "\n", "# For why\n", "why_1 = [1 if why.find('why')>-1 else 0 for why in ann1]\n", "why_2 = [1 if why.find('why')>-1 else 0 for why in ann2]\n", "count_why_1, count_why_2 = sum(why_1), sum(why_2)\n", "\n", "print(f'cohen_kappa score for Why : {cohen_kappa(why_1, why_2)} and total why\\'s according to ann_1 & ann_2 are {count_why_1} & {count_why_2} respectively!')\n", "\n", "\n", "# For who\n", "who_1 = [1 if who.find('who')>-1 else 0 for who in ann1]\n", "who_2 = [1 if who.find('who')>-1 else 0 for who in ann2]\n", "count_who_1, count_who_2 = sum(who_1), sum(who_2)\n", "\n", "print(f'cohen_kappa score for Who : {cohen_kappa(who_1, who_2)} and total who\\'s according to ann_1 & ann_2 are {count_who_1} & {count_who_2} respectively!')\n", "\n", "\n", "# For where\n", "where_1 = [1 if where.find('where')>-1 else 0 for where in ann1]\n", "where_2 = [1 if where.find('where')>-1 else 0 for where in ann2]\n", "count_where_1, count_where_2 = sum(where_1), sum(where_2)\n", "\n", "print(f'cohen_kappa score for Where : {cohen_kappa(where_1, where_2)} and total where\\'s according to ann_1 & ann_2 are {count_where_1} & {count_where_2} respectively!')\n", "\n", "\n", "# For what\n", "what_1 = [1 if what.find('what')>-1 else 0 for what in ann1]\n", "what_2 = [1 if what.find('what')>-1 else 0 for what in ann2]\n", "count_what_1, count_what_2 = sum(what_1), sum(what_2)\n", "\n", "print(f'cohen_kappa score for What : {cohen_kappa(what_1, what_2)} and total what\\'s according to ann_1 & ann_2 are {count_what_1} & {count_what_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "do3qpvqSqhwx" }, "source": [ "## For Color of lie Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "kDNWn13wizas", "outputId": "f6cabe7e-7d24-4462-990b-2f427c9025d3" }, "outputs": [ { "data": { "text/plain": [ "grey 127\n", "nan 118\n", "black 115\n", "red 77\n", "white 56\n", "Name: Ann_1.4, dtype: int64" ] }, "execution_count": 614, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.4'].str.strip().value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "XB068WS0izYo", "outputId": "d02e613d-e5d0-4a27-f9db-410de66a1b45" }, "outputs": [ { "data": { "text/plain": [ "black 168\n", "grey 150\n", "nan 86\n", "red 49\n", "white 40\n", "Name: Ann_2.4, dtype: int64" ] }, "execution_count": 615, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.4'].str.strip().value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "YvnrVXS0qmNf" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "iIPP6IK2izVY", "outputId": "514911a9-0c14-4ed4-954d-c09bc613471d" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Color of lie column: 0.5371\n" ] } ], "source": [ "ann1 = df2['Ann_1.4'].str.strip().to_list()\n", "ann2 = df2['Ann_2.4'].str.strip().to_list()\n", "print(f'Cohen kappa score for Color of lie column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "dAwp8A_Hqo9R" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "vpnQvOBKSz4U", "outputId": "50e54c8d-9d86-4f58-9498-3d515641ea31" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score for grey : 0.5043 and total grey's according to ann_1 & ann_2 are 127 & 150 respectively!\n", "cohen_kappa score for black : 0.438 and total black's according to ann_1 & ann_2 are 115 & 168 respectively!\n", "cohen_kappa score for red : 0.5122 and total red's according to ann_1 & ann_2 are 77 & 49 respectively!\n", "cohen_kappa score for white : 0.5628 and total white's according to ann_1 & ann_2 are 56 & 40 respectively!\n", "cohen_kappa score for nan : 0.7052 and total nan's according to ann_1 & ann_2 are 118 & 86 respectively!\n" ] } ], "source": [ "df2['Ann_1.4'] = df2['Ann_1.4'].fillna('nan')\n", "df2['Ann_2.4'] = df2['Ann_2.4'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.4'].str.strip().to_list()\n", "ann2 = df2['Ann_2.4'].str.strip().to_list()\n", "\n", "# for grey\n", "grey_1 = [1 if color.find('grey')>-1 else 0 for color in ann1]\n", "grey_2 = [1 if color.find('grey')>-1 else 0 for color in ann2]\n", "count_grey_1, count_grey_2 = sum(grey_1), sum(grey_2)\n", "print(f'cohen_kappa score for grey : {cohen_kappa(grey_1, grey_2)} and total grey\\'s according to ann_1 & ann_2 are {count_grey_1} & {count_grey_2} respectively!')\n", "\n", "# For black\n", "black_1 = [1 if color.find('black')>-1 else 0 for color in ann1]\n", "black_2 = [1 if color.find('black')>-1 else 0 for color in ann2]\n", "count_black_1, count_black_2 = sum(black_1), sum(black_2)\n", "\n", "print(f'cohen_kappa score for black : {cohen_kappa(black_1, black_2)} and total black\\'s according to ann_1 & ann_2 are {count_black_1} & {count_black_2} respectively!')\n", "\n", "\n", "# For red\n", "red_1 = [1 if color.find('red')>-1 else 0 for color in ann1]\n", "red_2 = [1 if color.find('red')>-1 else 0 for color in ann2]\n", "count_red_1, count_red_2 = sum(red_1), sum(red_2)\n", "\n", "print(f'cohen_kappa score for red : {cohen_kappa(red_1, red_2)} and total red\\'s according to ann_1 & ann_2 are {count_red_1} & {count_red_2} respectively!')\n", "\n", "\n", "# For white\n", "white_1 = [1 if color.find('white')>-1 else 0 for color in ann1]\n", "white_2 = [1 if color.find('white')>-1 else 0 for color in ann2]\n", "count_white_1, count_white_2 = sum(white_1), sum(white_2)\n", "\n", "print(f'cohen_kappa score for white : {cohen_kappa(white_1, white_2)} and total white\\'s according to ann_1 & ann_2 are {count_white_1} & {count_white_2} respectively!')\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "vwNdgAhjqlj5" }, "source": [ "## For Intent of Lie Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "sZMRUcomizSO", "outputId": "3a2459e3-c6ed-4754-af26-153e03823b0e" }, "outputs": [ { "data": { "text/plain": [ "gaining advantage 156\n", "nan 136\n", "protecting themselves 66\n", "gaining esteem 65\n", "protecting others 53\n", "avoiding embarrassment 9\n", "defaming esteem 8\n", "Name: Ann_1.6, dtype: int64" ] }, "execution_count": 618, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.6'].str.strip().value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "rI57NwXZizOv", "outputId": "416e01f1-292d-4555-c4eb-98d76d82bcec" }, "outputs": [ { "data": { "text/plain": [ "gaining advantage 188\n", "nan 104\n", "gaining esteem 79\n", "protecting themselves 64\n", "protecting others 29\n", "avoiding embarrassment 16\n", "defaming esteem 13\n", "Name: Ann_2.6, dtype: int64" ] }, "execution_count": 619, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.6'].str.strip().value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "rcv3bJoSqspW" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "FNMMZNkbizLv", "outputId": "8907ab3e-9cf8-4656-8463-070858b2fa89" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Intent of lie column: 0.4846\n" ] } ], "source": [ "ann1 = df2['Ann_1.6'].str.strip().to_list()\n", "ann2 = df2['Ann_2.6'].str.strip().to_list()\n", "print(f'Cohen kappa score for Intent of lie column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "eL7pKSsNqupD" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "RwmBoEwRiMWv", "outputId": "ae5c0b0b-15a7-4242-fe75-8e6f19ac685e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score gaining advantage : 0.4312 and total gaining advantage's according to ann_1 & ann_2 are 156 & 188 respectively!\n", "cohen_kappa score for gaining_esteem : 0.4966 and total gaining_esteem's according to ann_1 & ann_2 are 65 & 79 respectively!\n", "cohen_kappa score for protecting_themsprotecting_others : 0.557 and total protecting_themselves's according to ann_1 & ann_2 are 66 & 64 respectively!\n", "cohen_kappa score for protecting_others : 0.3401 and total protecting_others's according to ann_1 & ann_2 are 53 & 29 respectively!\n", "cohen_kappa score for avoiding_embarrassment : 0.6314 and total avoiding_embarrassment's according to ann_1 & ann_2 are 9 & 16 respectively!\n", "cohen_kappa score for defaming_esteem : 0.2711 and total defaming_esteem's according to ann_1 & ann_2 are 8 & 13 respectively!\n", "cohen_kappa score for nan : 0.5619 and total nan's according to ann_1 & ann_2 are 136 & 104 respectively!\n" ] } ], "source": [ "df2['Ann_1.6'] = df2['Ann_1.6'].fillna('nan')\n", "df2['Ann_2.6'] = df2['Ann_2.6'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.6'].str.strip().to_list()\n", "ann2 = df2['Ann_2.6'].str.strip().to_list()\n", "\n", "# for gaining advantage\n", "gaining_advantage_1 = [1 if color.find('gaining advantage')>-1 else 0 for color in ann1]\n", "gaining_advantage_2 = [1 if color.find('gaining advantage')>-1 else 0 for color in ann2]\n", "count_gaining_advantage_1, count_gaining_advantage_2 = sum(gaining_advantage_1), sum(gaining_advantage_2)\n", "print(f'cohen_kappa score gaining advantage : {cohen_kappa(gaining_advantage_1, gaining_advantage_2)} and total gaining advantage\\'s according to ann_1 & ann_2 are {count_gaining_advantage_1} & {count_gaining_advantage_2} respectively!')\n", "\n", "# For gaining_esteem\n", "gaining_esteem_1 = [1 if color.find('gaining esteem')>-1 else 0 for color in ann1]\n", "gaining_esteem_2 = [1 if color.find('gaining esteem')>-1 else 0 for color in ann2]\n", "count_gaining_esteem_1, count_gaining_esteem_2 = sum(gaining_esteem_1), sum(gaining_esteem_2)\n", "\n", "print(f'cohen_kappa score for gaining_esteem : {cohen_kappa(gaining_esteem_1, gaining_esteem_2)} and total gaining_esteem\\'s according to ann_1 & ann_2 are {count_gaining_esteem_1} & {count_gaining_esteem_2} respectively!')\n", "\n", "\n", "# For protecting_themselves\n", "protecting_themselves_1 = [1 if color.find('protecting themselves')>-1 else 0 for color in ann1]\n", "protecting_themselves_2 = [1 if color.find('protecting themselves')>-1 else 0 for color in ann2]\n", "count_protecting_themselves_1, count_protecting_themselves_2 = sum(protecting_themselves_1), sum(protecting_themselves_2)\n", "\n", "print(f'cohen_kappa score for protecting_themselves : {cohen_kappa(protecting_themselves_1, protecting_themselves_2)} and total protecting_themselves\\'s according to ann_1 & ann_2 are {count_protecting_themselves_1} & {count_protecting_themselves_2} respectively!')\n", "\n", "\n", "# For protecting_others\n", "protecting_others_1 = [1 if color.find('protecting others')>-1 else 0 for color in ann1]\n", "protecting_others_2 = [1 if color.find('protecting others')>-1 else 0 for color in ann2]\n", "count_protecting_others_1, count_protecting_others_2 = sum(protecting_others_1), sum(protecting_others_2)\n", "\n", "print(f'cohen_kappa score for protecting_others : {cohen_kappa(protecting_others_1, protecting_others_2)} and total protecting_others\\'s according to ann_1 & ann_2 are {count_protecting_others_1} & {count_protecting_others_2} respectively!')\n", "\n", "\n", "# For voiding_embarrassment \n", "avoiding_embarrassment_1 = [1 if color.find('avoiding embarrassment')>-1 else 0 for color in ann1]\n", "avoiding_embarrassment_2 = [1 if color.find('avoiding embarrassment')>-1 else 0 for color in ann2]\n", "count_avoiding_embarrassment_1, count_avoiding_embarrassment_2 = sum(avoiding_embarrassment_1), sum(avoiding_embarrassment_2)\n", "\n", "print(f'cohen_kappa score for avoiding_embarrassment : {cohen_kappa(avoiding_embarrassment_1, avoiding_embarrassment_2)} and total avoiding_embarrassment\\'s according to ann_1 & ann_2 are {count_avoiding_embarrassment_1} & {count_avoiding_embarrassment_2} respectively!')\n", "\n", "\n", "# For defaming esteem\n", "defaming_esteem_1 = [1 if color.find('defaming esteem')>-1 else 0 for color in ann1]\n", "defaming_esteem_2 = [1 if color.find('defaming esteem')>-1 else 0 for color in ann2]\n", "count_defaming_esteem_1, count_defaming_esteem_2 = sum(defaming_esteem_1), sum(defaming_esteem_2)\n", "\n", "print(f'cohen_kappa score for defaming_esteem : {cohen_kappa(defaming_esteem_1, defaming_esteem_2)} and total defaming_esteem\\'s according to ann_1 & ann_2 are {count_defaming_esteem_1} & {count_defaming_esteem_2} respectively!')\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "kxGkBjo5qo-9" }, "source": [ "## For category of lie Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "lr46SsDSizI0", "outputId": "e52b93c0-77aa-4a06-b118-5fc9903c996a" }, "outputs": [ { "data": { "text/plain": [ "political 281\n", "educational 102\n", "nan 47\n", "ethnicity 20\n", "religious 18\n", "racial 18\n", "other 7\n", "Name: Ann_1.7, dtype: int64" ] }, "execution_count": 622, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.7'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "C0RRl0TfizFk", "outputId": "10e4c3bf-182a-4c40-87fa-ce4abf3a1bfc" }, "outputs": [ { "data": { "text/plain": [ "political 293\n", "educational 81\n", "nan 45\n", "ethnicity 41\n", "religious 17\n", "other 8\n", "racial 8\n", "Name: Ann_2.7, dtype: int64" ] }, "execution_count": 623, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.7'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "ij4K7XrxqzEY" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "eArT7b3hizCd", "outputId": "883b8c7b-a275-4609-aeb7-9a2dd6f93ce0" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Final column: 0.7089\n" ] } ], "source": [ "ann1 = df2['Ann_1.7'].str.strip().to_list()\n", "ann2 = df2['Ann_2.7'].str.strip().to_list()\n", "print(f'Cohen kappa score for Final column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "m8805L-Jq2Ng" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "RS9tkWl1kbPO", "outputId": "917e6408-6861-4764-b288-de17c77e3763" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score political : 0.7417 and total political's according to ann_1 & ann_2 are 281 & 293 respectively!\n", "cohen_kappa score for educational : 0.7257 and total educational's according to ann_1 & ann_2 are 102 & 81 respectively!\n", "cohen_kappa score for ethnicity : 0.5665 and total ethnicity's according to ann_1 & ann_2 are 20 & 41 respectively!\n", "cohen_kappa score for religious : 0.6149 and total religious's according to ann_1 & ann_2 are 18 & 17 respectively!\n", "cohen_kappa score for racial : 0.4492 and total racial's according to ann_1 & ann_2 are 18 & 8 respectively!\n", "cohen_kappa score for other : 0.2554 and total other's according to ann_1 & ann_2 are 7 & 8 respectively!\n", "cohen_kappa score for nan : 0.8801 and total nan's according to ann_1 & ann_2 are 47 & 45 respectively!\n" ] } ], "source": [ "df2['Ann_1.7'] = df2['Ann_1.7'].fillna('nan')\n", "df2['Ann_2.7'] = df2['Ann_2.7'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.7'].str.strip().to_list()\n", "ann2 = df2['Ann_2.7'].str.strip().to_list()\n", "\n", "# for political\n", "political_1 = [1 if color.find('political')>-1 else 0 for color in ann1]\n", "political_2 = [1 if color.find('political')>-1 else 0 for color in ann2]\n", "count_political_1, count_political_2 = sum(political_1), sum(political_2)\n", "print(f'cohen_kappa score political : {cohen_kappa(political_1, political_2)} and total political\\'s according to ann_1 & ann_2 are {count_political_1} & {count_political_2} respectively!')\n", "\n", "\n", "# For educational\n", "educational_1 = [1 if color.find('educational')>-1 else 0 for color in ann1]\n", "educational_2 = [1 if color.find('educational')>-1 else 0 for color in ann2]\n", "count_educational_1, count_educational_2 = sum(educational_1), sum(educational_2)\n", "\n", "print(f'cohen_kappa score for educational : {cohen_kappa(educational_1, educational_2)} and total educational\\'s according to ann_1 & ann_2 are {count_educational_1} & {count_educational_2} respectively!')\n", "\n", "\n", "# For ethnicity\n", "ethnicity_1 = [1 if color.find('ethnicity')>-1 else 0 for color in ann1]\n", "ethnicity_2 = [1 if color.find('ethnicity')>-1 else 0 for color in ann2]\n", "count_ethnicity_1, count_ethnicity_2 = sum(ethnicity_1), sum(ethnicity_2)\n", "\n", "print(f'cohen_kappa score for ethnicity : {cohen_kappa(ethnicity_1, ethnicity_2)} and total ethnicity\\'s according to ann_1 & ann_2 are {count_ethnicity_1} & {count_ethnicity_2} respectively!')\n", "\n", "\n", "# For religious\n", "religious_1 = [1 if color.find('religious')>-1 else 0 for color in ann1]\n", "religious_2 = [1 if color.find('religious')>-1 else 0 for color in ann2]\n", "count_religious_1, count_religious_2 = sum(religious_1), sum(religious_2)\n", "\n", "print(f'cohen_kappa score for religious : {cohen_kappa(religious_1, religious_2)} and total religious\\'s according to ann_1 & ann_2 are {count_religious_1} & {count_religious_2} respectively!')\n", "\n", "\n", "# For racial \n", "racial_1 = [1 if color.find('racial')>-1 else 0 for color in ann1]\n", "racial_2 = [1 if color.find('racial')>-1 else 0 for color in ann2]\n", "count_racial_1, count_racial_2 = sum(racial_1), sum(racial_2)\n", "\n", "print(f'cohen_kappa score for racial : {cohen_kappa(racial_1, racial_2)} and total racial\\'s according to ann_1 & ann_2 are {count_racial_1} & {count_racial_2} respectively!')\n", "\n", "\n", "# For defaming other\n", "other_1 = [1 if color.find('other')>-1 else 0 for color in ann1]\n", "other_2 = [1 if color.find('other')>-1 else 0 for color in ann2]\n", "count_other_1, count_other_2 = sum(other_1), sum(other_2)\n", "\n", "print(f'cohen_kappa score for other : {cohen_kappa(other_1, other_2)} and total other\\'s according to ann_1 & ann_2 are {count_other_1} & {count_other_2} respectively!')\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "adLYMydvdJzf" }, "source": [ "# Overlapping Fake News Data" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2022-12-06T20:07:39.395133Z", "iopub.status.busy": "2022-12-06T20:07:39.393430Z", "iopub.status.idle": "2022-12-06T20:07:39.402869Z", "shell.execute_reply": "2022-12-06T20:07:39.401612Z", "shell.execute_reply.started": "2022-12-06T20:07:39.395087Z" }, "id": "eQPx_gEqUc9X", "trusted": true }, "outputs": [], "source": [ "df = pd.read_csv(\"\")" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/", "height": 739 }, "execution": { "iopub.execute_input": "2022-12-06T20:07:39.405036Z", "iopub.status.busy": "2022-12-06T20:07:39.404216Z", "iopub.status.idle": "2022-12-06T20:07:39.420289Z", "shell.execute_reply": "2022-12-06T20:07:39.418877Z", "shell.execute_reply.started": "2022-12-06T20:07:39.405007Z" }, "id": "jK7RTh6mUhWT", "outputId": "68649967-f665-4beb-d6db-1ff1942a3a7d", "trusted": true }, "outputs": [ { "data": { "text/html": [ "\n", "
\n", "
\n", "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
S_1Ann_1.1Ann_1.2Ann_1.3Ann_1.4Ann_1.5Ann_1.6Ann_1.7S_2Ann_2.1Ann_2.2Ann_2.3Ann_2.4Ann_2.5Ann_2.6Ann_2.7
0President Trump Travels to Orlando for Private...SpeculationWe have no proof if it is true or notWhen,WhyBlackTo defame TrumpGaining AdvantagePoliticalPresident Trump Travels to Orlando for Private...SpeculationReason For Visting The Svhool Is Not Mentioned.When,WhyRedTo Show Trump That He Does'T Care About The Pu...Gaining AdvantagePolitical
1VP MIKE PENCE Stops For BBQ Before Super Bowl…...SpeculationWe have no proof if it is true or notWhen,WhyBlackTo defame MikeGaining AdvantageEthnicityVP MIKE PENCE Stops For BBQ Before Super Bowl…...SpeculationNo Evidence Given Why He Had Stoppped At Bbq B...When,WhyBlackTo Show Mike Pence Doesn'T Gives Importance To...Avoiding EmbarrassmentEthnicity
2More Networks Join in the Boycott of New Trump AdSounds FactualNaNWhenNaNNaNNaNPoliticalMore Networks Join in the Boycott of New Trump AdSounds FactualNaNWhenNaNNaNNaNPolitical
3Julian Assange – “Everything that he has said/...OpinionThe Statement Happens To Be An Opinion By Assa...WhoBlackThe lie might be told to ramp up the Political...Gaining AdvantagePoliticalJulian Assange – “Everything that he has said/...OpinionIt Is Just A Opinion By A Person On Someone.When,WhyWhiteA Lie Said To Protect Someone And Can Be Under...Protecting ThemselvesPolitical
4Without Evidence/Trump Launches 59 Cruise Miss...SpeculationWe have no proof if it is true or notWhen,WhyBlackIt could be said to defame Trump and create panicProtecting ThemselvesPoliticalWithout Evidence/Trump Launches 59 Cruise Miss...SpeculationNo Evidence Given Why He Had LaunchedWhen,WhyRedCan Be Said Out Of Spite Of Rivalry Between Us...Protecting ThemselvesPolitical
\n", "
\n", " \n", " \n", " \n", "\n", " \n", "
\n", "
\n", " " ], "text/plain": [ " S_1 Ann_1.1 \\\n", "0 President Trump Travels to Orlando for Private... Speculation \n", "1 VP MIKE PENCE Stops For BBQ Before Super Bowl…... Speculation \n", "2 More Networks Join in the Boycott of New Trump Ad Sounds Factual \n", "3 Julian Assange – “Everything that he has said/... Opinion \n", "4 Without Evidence/Trump Launches 59 Cruise Miss... Speculation \n", "\n", " Ann_1.2 Ann_1.3 Ann_1.4 \\\n", "0 We have no proof if it is true or not When,Why Black \n", "1 We have no proof if it is true or not When,Why Black \n", "2 NaN When NaN \n", "3 The Statement Happens To Be An Opinion By Assa... Who Black \n", "4 We have no proof if it is true or not When,Why Black \n", "\n", " Ann_1.5 Ann_1.6 \\\n", "0 To defame Trump Gaining Advantage \n", "1 To defame Mike Gaining Advantage \n", "2 NaN NaN \n", "3 The lie might be told to ramp up the Political... Gaining Advantage \n", "4 It could be said to defame Trump and create panic Protecting Themselves \n", "\n", " Ann_1.7 S_2 \\\n", "0 Political President Trump Travels to Orlando for Private... \n", "1 Ethnicity VP MIKE PENCE Stops For BBQ Before Super Bowl…... \n", "2 Political More Networks Join in the Boycott of New Trump Ad \n", "3 Political Julian Assange – “Everything that he has said/... \n", "4 Political Without Evidence/Trump Launches 59 Cruise Miss... \n", "\n", " Ann_2.1 Ann_2.2 \\\n", "0 Speculation Reason For Visting The Svhool Is Not Mentioned. \n", "1 Speculation No Evidence Given Why He Had Stoppped At Bbq B... \n", "2 Sounds Factual NaN \n", "3 Opinion It Is Just A Opinion By A Person On Someone. \n", "4 Speculation No Evidence Given Why He Had Launched \n", "\n", " Ann_2.3 Ann_2.4 Ann_2.5 \\\n", "0 When,Why Red To Show Trump That He Does'T Care About The Pu... \n", "1 When,Why Black To Show Mike Pence Doesn'T Gives Importance To... \n", "2 When NaN NaN \n", "3 When,Why White A Lie Said To Protect Someone And Can Be Under... \n", "4 When,Why Red Can Be Said Out Of Spite Of Rivalry Between Us... \n", "\n", " Ann_2.6 Ann_2.7 \n", "0 Gaining Advantage Political \n", "1 Avoiding Embarrassment Ethnicity \n", "2 NaN Political \n", "3 Protecting Themselves Political \n", "4 Protecting Themselves Political " ] }, "execution_count": 627, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df.head()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2022-12-06T20:07:39.423451Z", "iopub.status.busy": "2022-12-06T20:07:39.422530Z", "iopub.status.idle": "2022-12-06T20:07:39.436062Z", "shell.execute_reply": "2022-12-06T20:07:39.433920Z", "shell.execute_reply.started": "2022-12-06T20:07:39.423407Z" }, "id": "BjWn56wcUjU9", "trusted": true }, "outputs": [], "source": [ "#convert ann_1 and ann_2 into small letter\n", "df2 = df.apply(lambda x: x.astype(str).str.lower())" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "execution": { "iopub.execute_input": "2022-12-06T20:07:39.460738Z", "iopub.status.busy": "2022-12-06T20:07:39.460391Z", "iopub.status.idle": "2022-12-06T20:07:39.476545Z", "shell.execute_reply": "2022-12-06T20:07:39.474667Z", "shell.execute_reply.started": "2022-12-06T20:07:39.460708Z" }, "id": "t1dUfuXRU7wR", "trusted": true }, "outputs": [], "source": [ "def cohen_kappa(ann1, ann2):\n", " \"\"\"Computes Cohen kappa for pair-wise annotators.\n", " :param ann1: annotations provided by first annotator\n", " :type ann1: list\n", " :param ann2: annotations provided by second annotator\n", " :type ann2: list\n", " :rtype: float\n", " :return: Cohen kappa statistic\n", " \"\"\"\n", " count = 0\n", " for an1, an2 in zip(ann1, ann2):\n", " if an1 == an2:\n", " count += 1\n", " # print(count, len(ann1))\n", " A = count / len(ann1) # observed agreement A (Po)\n", "\n", " uniq = set(ann1 + ann2)\n", " # print(uniq)\n", " E = 0 # expected agreement E (Pe)\n", " for item in uniq:\n", " cnt1 = ann1.count(item)\n", " cnt2 = ann2.count(item)\n", " count = ((cnt1 / len(ann1)) * (cnt2 / len(ann2)))\n", " E += count\n", " # print(A, E)\n", " return round((A - E) / (1 - E), 4)" ] }, { "cell_type": "markdown", "metadata": { "id": "SaizLrBImtrS" }, "source": [ "### For SBDO column" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Z1SD22F-b4To", "outputId": "b7339cc6-06d6-4e16-bd5e-0bff62f0d734" }, "outputs": [ { "data": { "text/plain": [ "speculation 294\n", "opinion 76\n", "distortion 31\n", "sounds factual 16\n", "bias 15\n", "speculation,sounds factual 2\n", "Name: Ann_1.1, dtype: int64" ] }, "execution_count": 632, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.1'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "7PAKecp7b-T7", "outputId": "c7536851-1800-4866-a25d-3c01c6b254cb" }, "outputs": [ { "data": { "text/plain": [ "speculation 283\n", "opinion 76\n", "distortion 33\n", "sounds factual 23\n", "bias 19\n", "Name: Ann_2.1, dtype: int64" ] }, "execution_count": 633, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.1'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "hcnIIPdWq-es" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "VCvquGGFcaxH", "outputId": "deedd8ac-9175-4589-8a89-8a6b67b294d3" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for SBDOSF column: 0.6668\n" ] } ], "source": [ "ann1 = df2['Ann_1.1'].to_list()\n", "ann2 = df2['Ann_2.1'].to_list()\n", "print(f'Cohen kappa score for SBDOSF column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "BiiGtisfrBJE" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "yGtnoW31mxrH", "outputId": "6649432b-8ea5-42b8-da9f-de279bf9b35b" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score for When : 0.6839 and total when's according to ann_1 & ann_2 are 296 & 283 respectively!\n", "cohen_kappa score for sounds factual : 0.4628 and total sounds factual's according to ann_1 & ann_2 are 18 & 23 respectively!\n", "cohen_kappa score for opinion : 0.7288 and total opinion's according to ann_1 & ann_2 are 76 & 76 respectively!\n", "cohen_kappa score for distortion : 0.6964 and total distortion's according to ann_1 & ann_2 are 31 & 33 respectively!\n", "cohen_kappa score for bias : 0.5717 and total bias's according to ann_1 & ann_2 are 15 & 19 respectively!\n" ] } ], "source": [ "ann1 = df2['Ann_1.1'].str.strip().to_list()\n", "ann2 = df2['Ann_2.1'].str.strip().to_list()\n", "\n", "# for speculation\n", "speculation_1 = [1 if when.find('speculation')>-1 else 0 for when in ann1]\n", "speculation_2 = [1 if when.find('speculation')>-1 else 0 for when in ann2]\n", "count_speculation_1, count_speculation_2 = sum(speculation_1), sum(speculation_2)\n", "print(f'cohen_kappa score for When : {cohen_kappa(speculation_1, speculation_2)} and total when\\'s according to ann_1 & ann_2 are {count_speculation_1} & {count_speculation_2} respectively!')\n", "\n", "# For sounds factual\n", "sounds_factual_1 = [1 if why.find('sounds factual')>-1 else 0 for why in ann1]\n", "sounds_factual_2 = [1 if why.find('sounds factual')>-1 else 0 for why in ann2]\n", "count_sounds_factual_1, count_sounds_factual_2 = sum(sounds_factual_1), sum(sounds_factual_2)\n", "\n", "print(f'cohen_kappa score for sounds factual : {cohen_kappa(sounds_factual_1, sounds_factual_2)} and total sounds factual\\'s according to ann_1 & ann_2 are {count_sounds_factual_1} & {count_sounds_factual_2} respectively!')\n", "\n", "\n", "# For opinion\n", "opinion_1 = [1 if who.find('opinion')>-1 else 0 for who in ann1]\n", "opinion_2 = [1 if who.find('opinion')>-1 else 0 for who in ann2]\n", "count_opinion_1, count_opinion_2 = sum(opinion_1), sum(opinion_2)\n", "\n", "print(f'cohen_kappa score for opinion : {cohen_kappa(opinion_1, opinion_2)} and total opinion\\'s according to ann_1 & ann_2 are {count_opinion_1} & {count_opinion_2} respectively!')\n", "\n", "\n", "# For distortion\n", "distortion_1 = [1 if where.find('distortion')>-1 else 0 for where in ann1]\n", "distortion_2 = [1 if where.find('distortion')>-1 else 0 for where in ann2]\n", "count_distortion_1, count_distortion_2 = sum(distortion_1), sum(distortion_2)\n", "\n", "print(f'cohen_kappa score for distortion : {cohen_kappa(distortion_1, distortion_2)} and total distortion\\'s according to ann_1 & ann_2 are {count_distortion_1} & {count_distortion_2} respectively!')\n", "\n", "\n", "# For bias\n", "bias_1 = [1 if what.find('bias')>-1 else 0 for what in ann1]\n", "bias_2 = [1 if what.find('bias')>-1 else 0 for what in ann2]\n", "count_bias_1, count_bias_2 = sum(bias_1), sum(bias_2)\n", "\n", "print(f'cohen_kappa score for bias : {cohen_kappa(bias_1, bias_2)} and total bias\\'s according to ann_1 & ann_2 are {count_bias_1} & {count_bias_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "dYFvLBnGm0Vj" }, "source": [ "### Missing W's column" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "yOTmeXTqNis1", "outputId": "350ad1e5-b338-4259-ea59-877eca4aefcd" }, "outputs": [ { "data": { "text/plain": [ "when,where,why 250\n", "when,where 71\n", "when,why 28\n", "when 20\n", "when,where,who,why 17\n", "why 9\n", "what,when,where,why 5\n", "what,where,why 5\n", "what,why 4\n", "what,when,why 3\n", "where 3\n", "where,why 3\n", "when,where,who 3\n", "where,who,why 2\n", "what 2\n", "nan 2\n", "who 2\n", "what,who,why 1\n", "what,when,who,why 1\n", "what,when,where 1\n", "what,when,where,who 1\n", "when,who 1\n", "Name: Ann_1.3, dtype: int64" ] }, "execution_count": 636, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.3'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "6UzcIps07QYh", "outputId": "0fdb3b05-03ae-40d0-a80e-a4471fb7f8c8" }, "outputs": [ { "data": { "text/plain": [ "when,where,why 219\n", "when,where 67\n", "when,why 42\n", "where,why 15\n", "when,where,who,why 14\n", "when 14\n", "what,when 10\n", "why,when 8\n", "why 7\n", "what,why 5\n", "what,when,why 5\n", "what 4\n", "nan 4\n", "when,who,why 4\n", "when,where,who 3\n", "what,when,where 3\n", "where 3\n", "why,when,where 2\n", "what,when,where,why 1\n", "what,who 1\n", "what,when,who 1\n", "when,who 1\n", "what,where,why 1\n", "Name: Ann_2.3, dtype: int64" ] }, "execution_count": 637, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.3'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "nORd8nG5rC8m" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "QEGT9jID9qpn", "outputId": "dbc7d6a1-58a8-4a75-d503-384280909534" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1]\n", "[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]\n", "cohen_kappa score for When : 0.1781 and total when's according to ann_1 & ann_2 are 401 & 394 respectively!\n", "cohen_kappa score for Why : 0.6253 and total why's according to ann_1 & ann_2 are 328 & 323 respectively!\n", "cohen_kappa score for Who : -0.0224 and total who's according to ann_1 & ann_2 are 28 & 24 respectively!\n", "cohen_kappa score for Where : 0.4768 and total where's according to ann_1 & ann_2 are 361 & 394 respectively!\n", "cohen_kappa score for What : 0.0141 and total what's according to ann_1 & ann_2 are 23 & 31 respectively!\n" ] } ], "source": [ "ann1 = df2['Ann_1.3'].str.strip().to_list()\n", "ann2 = df2['Ann_2.3'].str.strip().to_list()\n", "\n", "# for when\n", "when_1 = [1 if when.find('when') != -1 else 0 for when in ann1]\n", "when_2 = [1 if when.find('when') != -1 else 0 for when in ann2]\n", "print(when_1, when_2, sep = '\\n')\n", "count_when_1, count_when_2 = sum(when_1), sum(when_2)\n", "print(f'cohen_kappa score for When : {cohen_kappa(when_1, when_2)} and total when\\'s according to ann_1 & ann_2 are {count_when_1} & {count_when_2} respectively!')\n", "\n", "# For why\n", "why_1 = [1 if why.find('why')>-1 else 0 for why in ann1]\n", "why_2 = [1 if why.find('why')>-1 else 0 for why in ann2]\n", "count_why_1, count_why_2 = sum(why_1), sum(why_2)\n", "\n", "print(f'cohen_kappa score for Why : {cohen_kappa(why_1, why_2)} and total why\\'s according to ann_1 & ann_2 are {count_why_1} & {count_why_2} respectively!')\n", "\n", "\n", "# For who\n", "who_1 = [1 if who.find('who')>-1 else 0 for who in ann1]\n", "who_2 = [1 if who.find('who')>-1 else 0 for who in ann2]\n", "count_who_1, count_who_2 = sum(who_1), sum(who_2)\n", "\n", "print(f'cohen_kappa score for Who : {cohen_kappa(who_1, who_2)} and total who\\'s according to ann_1 & ann_2 are {count_who_1} & {count_who_2} respectively!')\n", "\n", "\n", "# For where\n", "where_1 = [1 if where.find('where')>-1 else 0 for where in ann1]\n", "where_2 = [1 if where.find('where')>-1 else 0 for where in ann2]\n", "count_where_1, count_where_2 = sum(where_1), sum(where_2)\n", "\n", "print(f'cohen_kappa score for Where : {cohen_kappa(where_1, where_2)} and total where\\'s according to ann_1 & ann_2 are {count_where_1} & {count_when_2} respectively!')\n", "\n", "\n", "# For what\n", "what_1 = [1 if what.find('what')>-1 else 0 for what in ann1]\n", "what_2 = [1 if what.find('what')>-1 else 0 for what in ann2]\n", "count_what_1, count_what_2 = sum(what_1), sum(what_2)\n", "\n", "print(f'cohen_kappa score for What : {cohen_kappa(what_1, what_2)} and total what\\'s according to ann_1 & ann_2 are {count_what_1} & {count_what_2} respectively!')\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "ReBYogmtpGtc" }, "outputs": [], "source": [ "# Sanity Check using Sklearn's implementation of Cohen Kappa Scores\n", "from sklearn.metrics import cohen_kappa_score\n", "cohen_kappa_score(when_1, when_2), cohen_kappa_score(why_1, why_2), cohen_kappa_score(what_1, what_2), cohen_kappa_score(where_1, where_2), cohen_kappa_score(who_1, who_2)" ] }, { "cell_type": "markdown", "metadata": { "id": "IjWo98Luq3Gq" }, "source": [ "## For color of lie column" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "zFp0zHWSdX1a", "outputId": "3a9a7d51-c38e-45d3-d838-330a78714f35" }, "outputs": [ { "data": { "text/plain": [ "black 281\n", "grey 73\n", "red 32\n", "white 30\n", "nan 18\n", "Name: Ann_1.4, dtype: int64" ] }, "execution_count": 640, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.4'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "yjSwlzqXddYH", "outputId": "239fae53-2b72-4ed9-bf46-1a02bbc97385" }, "outputs": [ { "data": { "text/plain": [ "black 253\n", "grey 90\n", "white 44\n", "red 24\n", "nan 23\n", "Name: Ann_2.4, dtype: int64" ] }, "execution_count": 641, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.4'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "hbXO9IsfrNCU" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "w8pCnUT7dAqR", "outputId": "ea441567-f02e-4168-d7f3-1d0e78176e2f" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Color of lie column: 0.5828\n" ] } ], "source": [ "ann1 = df2['Ann_1.4'].to_list()\n", "ann2 = df2['Ann_2.4'].to_list()\n", "print(f'Cohen kappa score for Color of lie column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "OqOp0c4orOw8" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "YlocrY-gm8kO", "outputId": "81056363-5617-4ca2-a556-2f4c0d0d00ae" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score for grey : 0.5705 and total grey's according to ann_1 & ann_2 are 73 & 90 respectively!\n", "cohen_kappa score for black : 0.593 and total black's according to ann_1 & ann_2 are 281 & 253 respectively!\n", "cohen_kappa score for red : 0.5425 and total red's according to ann_1 & ann_2 are 32 & 24 respectively!\n", "cohen_kappa score for white : 0.7055 and total white's according to ann_1 & ann_2 are 30 & 44 respectively!\n", "cohen_kappa score for nan : 0.4116 and total nan's according to ann_1 & ann_2 are 18 & 23 respectively!\n" ] } ], "source": [ "df2['Ann_1.4'] = df2['Ann_1.4'].fillna('nan')\n", "df2['Ann_2.4'] = df2['Ann_2.4'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.4'].str.strip().to_list()\n", "ann2 = df2['Ann_2.4'].str.strip().to_list()\n", "\n", "# for grey\n", "grey_1 = [1 if color.find('grey')>-1 else 0 for color in ann1]\n", "grey_2 = [1 if color.find('grey')>-1 else 0 for color in ann2]\n", "count_grey_1, count_grey_2 = sum(grey_1), sum(grey_2)\n", "print(f'cohen_kappa score for grey : {cohen_kappa(grey_1, grey_2)} and total grey\\'s according to ann_1 & ann_2 are {count_grey_1} & {count_grey_2} respectively!')\n", "\n", "# For black\n", "black_1 = [1 if color.find('black')>-1 else 0 for color in ann1]\n", "black_2 = [1 if color.find('black')>-1 else 0 for color in ann2]\n", "count_black_1, count_black_2 = sum(black_1), sum(black_2)\n", "\n", "print(f'cohen_kappa score for black : {cohen_kappa(black_1, black_2)} and total black\\'s according to ann_1 & ann_2 are {count_black_1} & {count_black_2} respectively!')\n", "\n", "\n", "# For red\n", "red_1 = [1 if color.find('red')>-1 else 0 for color in ann1]\n", "red_2 = [1 if color.find('red')>-1 else 0 for color in ann2]\n", "count_red_1, count_red_2 = sum(red_1), sum(red_2)\n", "\n", "print(f'cohen_kappa score for red : {cohen_kappa(red_1, red_2)} and total red\\'s according to ann_1 & ann_2 are {count_red_1} & {count_red_2} respectively!')\n", "\n", "\n", "# For white\n", "white_1 = [1 if color.find('white')>-1 else 0 for color in ann1]\n", "white_2 = [1 if color.find('white')>-1 else 0 for color in ann2]\n", "count_white_1, count_white_2 = sum(white_1), sum(white_2)\n", "\n", "print(f'cohen_kappa score for white : {cohen_kappa(white_1, white_2)} and total white\\'s according to ann_1 & ann_2 are {count_white_1} & {count_white_2} respectively!')\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "IkZRFH46q1nn" }, "source": [ "## For Intent of Lie Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "Jhciajeodhso", "outputId": "303f3aeb-0302-4295-9a3b-b8ebac8b695a" }, "outputs": [ { "data": { "text/plain": [ "gaining advantage 261\n", "protecting themselves 117\n", "gaining esteem 30\n", "nan 21\n", "avoiding embarrassment 5\n", "Name: Ann_1.6, dtype: int64" ] }, "execution_count": 644, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.6'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "EFxWPeSjdmbi", "outputId": "c31d5118-e272-4631-af89-03a92a7b08c7" }, "outputs": [ { "data": { "text/plain": [ "gaining advantage 218\n", "protecting themselves 133\n", "gaining esteem 52\n", "nan 22\n", "avoiding embarrassment 9\n", "Name: Ann_2.6, dtype: int64" ] }, "execution_count": 645, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.6'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "JRqI1TlYrP-Z" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "tdgAxyPldM3Q", "outputId": "87b8f7d6-631d-47cf-e523-d8ba6f726add" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Intent of lie column: 0.3823\n" ] } ], "source": [ "ann1 = df2['Ann_1.6'].to_list()\n", "ann2 = df2['Ann_2.6'].to_list()\n", "print(f'Cohen kappa score for Intent of lie column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "C6UNrtUSrR-8" }, "source": [ "### Element Wise Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "frXwHRUOm_3D", "outputId": "f266ee79-472d-486b-cebf-1ccc27866cd1" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score gaining advantage : 0.3773 and total gaining advantage's according to ann_1 & ann_2 are 261 & 218 respectively!\n", "cohen_kappa score for gaining_esteem : 0.2782 and total gaining_esteem's according to ann_1 & ann_2 are 30 & 52 respectively!\n", "cohen_kappa score for protecting_themselves : 0.4503 and total protecting_themselves's according to ann_1 & ann_2 are 117 & 133 respectively!\n", "cohen_kappa score for avoiding_embarrassment : -0.015 and total avoiding_embarrassment's according to ann_1 & ann_2 are 5 & 9 respectively!\n", "cohen_kappa score for nan : 0.4373 and total nan's according to ann_1 & ann_2 are 21 & 22 respectively!\n" ] } ], "source": [ "df2['Ann_1.6'] = df2['Ann_1.6'].fillna('nan')\n", "df2['Ann_2.6'] = df2['Ann_2.6'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.6'].str.strip().to_list()\n", "ann2 = df2['Ann_2.6'].str.strip().to_list()\n", "\n", "# for gaining advantage\n", "gaining_advantage_1 = [1 if color.find('gaining advantage')>-1 else 0 for color in ann1]\n", "gaining_advantage_2 = [1 if color.find('gaining advantage')>-1 else 0 for color in ann2]\n", "count_gaining_advantage_1, count_gaining_advantage_2 = sum(gaining_advantage_1), sum(gaining_advantage_2)\n", "print(f'cohen_kappa score gaining advantage : {cohen_kappa(gaining_advantage_1, gaining_advantage_2)} and total gaining advantage\\'s according to ann_1 & ann_2 are {count_gaining_advantage_1} & {count_gaining_advantage_2} respectively!')\n", "\n", "# For gaining_esteem\n", "gaining_esteem_1 = [1 if color.find('gaining esteem')>-1 else 0 for color in ann1]\n", "gaining_esteem_2 = [1 if color.find('gaining esteem')>-1 else 0 for color in ann2]\n", "count_gaining_esteem_1, count_gaining_esteem_2 = sum(gaining_esteem_1), sum(gaining_esteem_2)\n", "\n", "print(f'cohen_kappa score for gaining_esteem : {cohen_kappa(gaining_esteem_1, gaining_esteem_2)} and total gaining_esteem\\'s according to ann_1 & ann_2 are {count_gaining_esteem_1} & {count_gaining_esteem_2} respectively!')\n", "\n", "\n", "# For protecting_themselves\n", "protecting_themselves_1 = [1 if color.find('protecting themselves')>-1 else 0 for color in ann1]\n", "protecting_themselves_2 = [1 if color.find('protecting themselves')>-1 else 0 for color in ann2]\n", "count_protecting_themselves_1, count_protecting_themselves_2 = sum(protecting_themselves_1), sum(protecting_themselves_2)\n", "\n", "print(f'cohen_kappa score for protecting_themselves : {cohen_kappa(protecting_themselves_1, protecting_themselves_2)} and total protecting_themselves\\'s according to ann_1 & ann_2 are {count_protecting_themselves_1} & {count_protecting_themselves_2} respectively!')\n", "\n", "\n", "\n", "\n", "# For voiding_embarrassment \n", "avoiding_embarrassment_1 = [1 if color.find('avoiding embarrassment')>-1 else 0 for color in ann1]\n", "avoiding_embarrassment_2 = [1 if color.find('avoiding embarrassment')>-1 else 0 for color in ann2]\n", "count_avoiding_embarrassment_1, count_avoiding_embarrassment_2 = sum(avoiding_embarrassment_1), sum(avoiding_embarrassment_2)\n", "\n", "print(f'cohen_kappa score for avoiding_embarrassment : {cohen_kappa(avoiding_embarrassment_1, avoiding_embarrassment_2)} and total avoiding_embarrassment\\'s according to ann_1 & ann_2 are {count_avoiding_embarrassment_1} & {count_avoiding_embarrassment_2} respectively!')\n", "\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n", "\n", "\n", "# Extra classes not present in our dataset\n", "# # For protecting_others\n", "# protecting_others_1 = [1 if color.find('protecting others')>-1 else 0 for color in ann1]\n", "# protecting_others_2 = [1 if color.find('protecting others')>-1 else 0 for color in ann2]\n", "# count_protecting_others_1, count_protecting_others_2 = sum(protecting_others_1), sum(protecting_others_2)\n", "\n", "# print(f'cohen_kappa score for protecting_others : {cohen_kappa(protecting_others_1, protecting_others_2)} and total protecting_others\\'s according to ann_1 & ann_2 are {count_protecting_others_1} & {count_protecting_others_2} respectively!')\n", "\n", "# # For defaming esteem\n", "# defaming_esteem_1 = [1 if color.find('defaming esteem')>-1 else 0 for color in ann1]\n", "# defaming_esteem_2 = [1 if color.find('defaming esteem')>-1 else 0 for color in ann2]\n", "# count_defaming_esteem_1, count_defaming_esteem_2 = sum(defaming_esteem_1), sum(defaming_esteem_2)\n", "\n", "# print(f'cohen_kappa score for defaming_esteem : {cohen_kappa(defaming_esteem_1, defaming_esteem_2)} and total defaming_esteem\\'s according to ann_1 & ann_2 are {count_defaming_esteem_1} & {count_defaming_esteem_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "ayz3ibQ4qwwm" }, "source": [ "## For category of lie Column:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "8e7b_sTVdrr1", "outputId": "92fe3936-b486-424a-f227-b60cc2d3984e" }, "outputs": [ { "data": { "text/plain": [ "political 297\n", "educational 99\n", "ethnicity 18\n", "racial 12\n", "religious 4\n", "nan 2\n", "others 2\n", "Name: Ann_1.7, dtype: int64" ] }, "execution_count": 648, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_1.7'].value_counts()" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "QYoYGDY_dvll", "outputId": "642e5369-ee03-4e7b-b096-fe793c8d6466" }, "outputs": [ { "data": { "text/plain": [ "political 297\n", "educational 83\n", "ethnicity 23\n", "racial 20\n", "religious 8\n", "nan 2\n", "others 1\n", "Name: Ann_2.7, dtype: int64" ] }, "execution_count": 649, "metadata": {}, "output_type": "execute_result" } ], "source": [ "df2['Ann_2.7'].value_counts()" ] }, { "cell_type": "markdown", "metadata": { "id": "UjgiMUVord4f" }, "source": [ "### Overall Kappa Score:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "xsK4jodOd3RT", "outputId": "e7a93825-6014-44e6-a3c1-51077cf5a94a" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Cohen kappa score for Final column: 0.7098\n" ] } ], "source": [ "ann1 = df2['Ann_1.7'].str.strip().to_list()\n", "ann2 = df2['Ann_2.7'].str.strip().to_list()\n", "print(f'Cohen kappa score for Final column: {cohen_kappa(ann1, ann2)}')" ] }, { "cell_type": "markdown", "metadata": { "id": "t1xU4zAYrg5v" }, "source": [ "### Element wise Kappa Scores:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "colab": { "base_uri": "https://localhost:8080/" }, "id": "7svIEqAIqIes", "outputId": "3eb33ba9-8cd3-425e-e46a-11290fc6496e" }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "cohen_kappa score gaining advantage : 0.776 and total political's according to ann_1 & ann_2 are 297 & 297 respectively!\n", "cohen_kappa score for educational : 0.7086 and total educational's according to ann_1 & ann_2 are 99 & 83 respectively!\n", "cohen_kappa score for ethnicity : 0.514 and total ethnicity's according to ann_1 & ann_2 are 18 & 23 respectively!\n", "cohen_kappa score for religious : 0.4938 and total religious's according to ann_1 & ann_2 are 4 & 8 respectively!\n", "cohen_kappa score for racial : 0.6116 and total racial's according to ann_1 & ann_2 are 12 & 20 respectively!\n", "cohen_kappa score for other : 0.6656 and total other's according to ann_1 & ann_2 are 2 & 1 respectively!\n", "cohen_kappa score for nan : 1.0 and total nan's according to ann_1 & ann_2 are 2 & 2 respectively!\n" ] } ], "source": [ "df2['Ann_1.7'] = df2['Ann_1.7'].fillna('nan')\n", "df2['Ann_2.7'] = df2['Ann_2.7'].fillna('nan')\n", "\n", "ann1 = df2['Ann_1.7'].str.strip().to_list()\n", "ann2 = df2['Ann_2.7'].str.strip().to_list()\n", "\n", "# for political\n", "political_1 = [1 if color.find('political')>-1 else 0 for color in ann1]\n", "political_2 = [1 if color.find('political')>-1 else 0 for color in ann2]\n", "count_political_1, count_political_2 = sum(political_1), sum(political_2)\n", "print(f'cohen_kappa score gaining advantage : {cohen_kappa(political_1, political_2)} and total political\\'s according to ann_1 & ann_2 are {count_political_1} & {count_political_2} respectively!')\n", "\n", "\n", "# For educational\n", "educational_1 = [1 if color.find('educational')>-1 else 0 for color in ann1]\n", "educational_2 = [1 if color.find('educational')>-1 else 0 for color in ann2]\n", "count_educational_1, count_educational_2 = sum(educational_1), sum(educational_2)\n", "\n", "print(f'cohen_kappa score for educational : {cohen_kappa(educational_1, educational_2)} and total educational\\'s according to ann_1 & ann_2 are {count_educational_1} & {count_educational_2} respectively!')\n", "\n", "\n", "# For ethnicity\n", "ethnicity_1 = [1 if color.find('ethnicity')>-1 else 0 for color in ann1]\n", "ethnicity_2 = [1 if color.find('ethnicity')>-1 else 0 for color in ann2]\n", "count_ethnicity_1, count_ethnicity_2 = sum(ethnicity_1), sum(ethnicity_2)\n", "\n", "print(f'cohen_kappa score for ethnicity : {cohen_kappa(ethnicity_1, ethnicity_2)} and total ethnicity\\'s according to ann_1 & ann_2 are {count_ethnicity_1} & {count_ethnicity_2} respectively!')\n", "\n", "\n", "# For religious\n", "religious_1 = [1 if color.find('religious')>-1 else 0 for color in ann1]\n", "religious_2 = [1 if color.find('religious')>-1 else 0 for color in ann2]\n", "count_religious_1, count_religious_2 = sum(religious_1), sum(religious_2)\n", "\n", "print(f'cohen_kappa score for religious : {cohen_kappa(religious_1, religious_2)} and total religious\\'s according to ann_1 & ann_2 are {count_religious_1} & {count_religious_2} respectively!')\n", "\n", "\n", "# For racial \n", "racial_1 = [1 if color.find('racial')>-1 else 0 for color in ann1]\n", "racial_2 = [1 if color.find('racial')>-1 else 0 for color in ann2]\n", "count_racial_1, count_racial_2 = sum(racial_1), sum(racial_2)\n", "\n", "print(f'cohen_kappa score for racial : {cohen_kappa(racial_1, racial_2)} and total racial\\'s according to ann_1 & ann_2 are {count_racial_1} & {count_racial_2} respectively!')\n", "\n", "\n", "# For defaming other\n", "other_1 = [1 if color.find('other')>-1 else 0 for color in ann1]\n", "other_2 = [1 if color.find('other')>-1 else 0 for color in ann2]\n", "count_other_1, count_other_2 = sum(other_1), sum(other_2)\n", "\n", "print(f'cohen_kappa score for other : {cohen_kappa(other_1, other_2)} and total other\\'s according to ann_1 & ann_2 are {count_other_1} & {count_other_2} respectively!')\n", "\n", "\n", "# For null values\n", "nan_1 = [1 if color.find('nan')>-1 else 0 for color in ann1]\n", "nan_2 = [1 if color.find('nan')>-1 else 0 for color in ann2]\n", "count_nan_1, count_nan_2 = sum(nan_1), sum(nan_2)\n", "\n", "print(f'cohen_kappa score for nan : {cohen_kappa(nan_1, nan_2)} and total nan\\'s according to ann_1 & ann_2 are {count_nan_1} & {count_nan_2} respectively!')\n" ] }, { "cell_type": "markdown", "metadata": { "id": "xxL4i4X_rq8M" }, "source": [ "### END" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "id": "8-sxitPzrr61" }, "outputs": [], "source": [] } ], "metadata": { "colab": { "provenance": [] }, "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.12" } }, "nbformat": 4, "nbformat_minor": 0 }