Sentiment Analysis On Facebook Group Using Lexicon Based Approach

Sanjida Akter
Lecturer, CSE Dept.
Northern University Bangladesh
Dhaka, Bangladesh
Sanjida.akter@gmail.com

Muhammad Tareq Aziz
Software Engineer
goBD
Dhaka, Bangladesh
tareq@gob.co

Abstract— The Internet is one of the primary sources of Big Data. The rise of social networking platforms creates an enormous amount of data every second, in which human emotions are constantly expressed in real time. The sentiment behind each post, comment, and like can be found using opinion mining. If sentiment analysis is performed on this huge amount of data, it is possible to derive business value from these objects and events. Here, we have chosen FOODBANK, a very popular Facebook group in Bangladesh, and analyzed the sentiment of its data to find out its market value.

Keywords— social media, sentiment analysis, market basket analysis, supervised learning, probabilistic classifier, lexicon based approach

I. INTRODUCTION

Sentiment analysis addresses the question of how a person reacts to something or to some event. "What people think" has always been an important issue. People's thoughts are available on social media to be used as a data source for market basket analysis. Social media plays an important role here because we can obtain data even from people we do not know. Such analysis can help determine the recent market trend, or the market value of a particular product or event, based on people's interest. Here we have used FACEBOOK, a very popular social media platform, as our data source, and in particular "FOODBANK", a well-known FACEBOOK group. For opinion mining or sentiment analysis, several methods are commonly applied, such as the Naive Bayes machine learning classifier, SentiWordNet, and support vector machines. Here we applied both a machine learning approach and a lexicon based dictionary. Most work on opinion mining is done with Naïve Bayes. As our data set contains the Bangla language, we built a dictionary on top of the data set.
We count the occurrences of sentiment words and feature words, which are tagged as the targets of the calculated sentiment value. This approach correctly determines whether a post is positive or negative in 73% of cases. After comparing the two results, we found that the lexicon based algorithm works better here. At the end of the analysis, we can determine recent trends and characteristics of people's food habits. It can also be identified what the food habits of the future will be and where investors should put their attention.

II. SENTIMENT ANALYSIS

In recent times, sentiment analysis has become a popular research topic because many real-life problems fall under this subject matter. It is also a highly challenging NLP research topic that covers many novel sub-problems. Additionally, there was almost no significant research before 2000 in either NLP or linguistics, because of the lack of availability of opinionated text in digital form. Since the year 2000, the field has grown rapidly to become one of the most active research areas in NLP. It is also widely researched in data mining, Web mining, and information retrieval. In fact, it has spread from computer
science to management and business intelligence.

There are mainly three methods, or levels, of research on sentiment analysis. Document level [1]: analyze the overall sentiment expressed in the text and determine whether it is positive or negative. Sentence level [2]: examine the sentiment expressed in each sentence and determine whether it expresses a positive, negative, or neutral opinion. Aspect level [3]: aspect level performs finer analysis when document level and sentence level analysis cannot find what people liked and did not like. Aspect level is also called feature level. It does not work on documents, paragraphs, sentences, or clauses; it directly finds the sentiment and its target. The sentiment can be positive, negative, or neutral towards that specific target or entity. Recognizing opinion targets also helps us understand the sentiment analysis problem better. For example, in "I love to eat chicken" we can see a positive sentiment, and the target of this sentiment, the entity, is "chicken". There is a further challenge: opinions can be classified into two types, regular opinions and comparative opinions [4].

III. LEXICON BASED

978-1-5090-2906-8/16/$31.00 ©2016 IEEE, iCEEiCT 2016

Undoubtedly, the most important indicators of sentiment are sentiment words. These are words that are commonly used to express positive or negative sentiments. For example, good, wonderful, and amazing are positive sentiment words, while bad, poor, and terrible are negative sentiment words. Sentiment words are instrumental to sentiment analysis for obvious reasons. A list of such words is called a sentiment lexicon (or opinion lexicon). Over the years, researchers have designed numerous algorithms to compile such lexicons. Although sentiment words are important for sentiment analysis, using them alone is not sufficient. The problem is much more complex.
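The basic idea of lexicon based scoring can be sketched as follows. This is an illustrative toy, not the paper's actual implementation: the lexicon entries, polarity scores, and example posts are all hypothetical, standing in for the real 9,000-word Bangla/English dictionary described later.

```python
# Minimal sketch of lexicon based sentiment scoring.
# The lexicon entries and example posts are illustrative only,
# not the paper's actual dictionary.

import re

# Toy sentiment lexicon: word -> polarity score
LEXICON = {
    "good": 1, "wonderful": 2, "amazing": 2, "tasty": 1,
    "bad": -1, "poor": -1, "terrible": -2,
}

def score_post(text):
    """Sum the polarity of every lexicon word found in the post."""
    words = re.findall(r"[a-z']+", text.lower())
    return sum(LEXICON.get(w, 0) for w in words)

def classify(text):
    """Map the summed score to a positive/negative/neutral label."""
    s = score_post(text)
    if s > 0:
        return "positive"
    if s < 0:
        return "negative"
    return "neutral"

print(classify("The burger was so tasty and the place is amazing"))  # positive
print(classify("Terrible service and poor food"))  # negative
```

A real system must also handle the issues listed below (domain-dependent polarity, questions, and implicit opinions), which simple word counting cannot capture.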
Some issues related to the lexicon based approach are given below:

• A positive or negative sentiment word may have opposite orientations in different application domains. For example, "dangerous" is a negative word, but it can also be highly positive when someone posts "dangerous tasty".

• A sentence containing sentiment words may not express any sentiment. This happens frequently in question-type sentences. For example, "Suggest a good coffee shop around gulshan n banani..."

• Many sentences without sentiment words can also imply an opinion. For example, "They took a lot of time to serve us". This sentence has a negative sentiment.

In recent times many researchers have been working on the lexicon based approach [5].

IV. SENTIMENT ANALYSIS ON SOCIAL MEDIA GROUPS

It is possible to understand real-world movements by analyzing the data of social media groups [5]. Sentiment analysis tools have been applied to examine the relationship between the release of products, the 'discussion' online, and actual sales, with the outcome being that such data can be used to predict sales volumes [6]. As mentioned, sentiment analysis is about opinions or expressions that imply positive or negative sentiment or emotion towards an object. So the elements of sentiment analysis are: "Opinion", which is
a judgment, viewpoint, or statement about matters commonly considered to be subjective. "Entity", which is a product, service, topic, issue, person, organization, or event; it is the target of an opinion. "Subjectivity and emotions", which are the state of mind of a person and instinctive responses. Let us use a Facebook post from the "FoodBank" group to describe an opinion properly. The post is given below, with the time when it was posted.

Tanvir Rahman, May 8, 2015 at 04:30 pm: "Today we visit KFC. The decoration was awesome. We ordered Chicken Zinger burger. It was so tasty. But my sister think it contains too much calories."

Observations:
1. This post contains a number of opinions.
2. It has more than one target object or entity.
3. It has both positive and negative opinions.
4. It has two opinion holders.
5. This post holds a date.

V. SENTIMENT ANALYSIS USING MACHINE LEARNING APPROACH

Almost 90% of researchers work with different machine learning algorithms to detect sentiment or for opinion mining. Some common algorithms used for sentiment analysis are Naïve Bayes, maximum entropy, support vector machines, etc. [8]. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features/predictors) in a learning problem. Maximum-likelihood training can be done by evaluating a closed-form expression, which takes linear time, rather than by the expensive iterative approximation used for many other types of classifiers.

VI. SENTIMENT ANALYSIS USING LEXICON BASED APPROACH

The lexical approach is a method of teaching foreign languages described by M. Lewis in the 1990s. Recently, researchers have also been working with the lexicon based approach for sentiment analysis [9][10]. The basic concept on which this approach rests is the idea that an important part of learning a language consists of being able to understand and produce lexical phrases as chunks.
Students taught in this way are thought to be able to perceive patterns of language (grammar) as well as to have meaningful set uses of words at their disposal. Machine translation can use a method based on dictionary entries, which means that words are translated as a dictionary does, i.e. word by word, usually without much correlation of meaning between them. Dictionary lookups may be done with or without morphological analysis or lemmatization. While this approach to machine translation is probably the least sophisticated, dictionary-based machine translation is ideally suited to translating long lists of phrases at the sub-sentential (i.e., not a full sentence) level, e.g. inventories or simple catalogs of products and services.

VII. OUR APPROACH AND EXPERIMENTAL RESULTS

Among many possible processes, we have used two main approaches to analyze sentiments of social network data:

• the Naïve Bayes machine learning classifier, and
• a dictionary based approach.

A. Collecting Data

In line with our interest, we decided to run our sentiment analysis experiment on social network data. But social networks are full of chaos, so we needed a group where people would talk on a specific topic. We found
there are groups like BDCyclists and FoodBank where people discuss the same kind of topic; we considered "FoodBank" for this work. To collect data, Facebook provides an API for developers called the Graph API. This API lets a programmer perform different kinds of programmatic activities on top of Facebook data, so we wrote a C# console application to download all the posts available on FoodBank since its beginning.

B. Formatting Data

Before using the data in any of the sentiment analysis processes, we needed to do some formatting. The data we downloaded were in JSON format.

1) Formatting Data for Naïve Bayes Classification: We wrote another C# console application that extracts and lists the messages of all posts for Naïve Bayes classification. We also wrote a visual GUI application in C# that let our team mates do the manual classification, since all machine learning classifiers depend heavily on a trained data set.

2) Formatting Data for the Lexicon Based Approach: To do this, we tokenized each and every word. We made a list of unique words; there were 9,000 of them. Then we manually classified all these words with a sentiment rating such as neutral, positive, negative, super positive, super negative, food name, or location name.

C. Approach I (Naïve Bayes Classification)

In machine learning, Naïve Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Since it is a probabilistic classifier, it depends heavily on probability mathematics. We used the Python programming language for this classification technique. We took about 3,600 random statuses from the list and classified them manually based on human judgment.

Problems with the Naïve Bayes classifier: Several scientific papers have found the Naïve Bayes machine learning classifier extremely successful when applied to well-formed text corpora, such as movie reviews or novels.
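As a concrete illustration of the classifier described above, here is a minimal multinomial Naïve Bayes with Laplace smoothing written from scratch in Python. It is a sketch of the technique, not the exact code of our experiment, and the training statuses shown in the comment are hypothetical stand-ins for the 3,600 manually labeled ones:

```python
import math
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

class NaiveBayes:
    """Minimal multinomial Naive Bayes classifier with Laplace smoothing."""

    def fit(self, samples: List[Tuple[str, str]]) -> None:
        # samples: hypothetical (status text, label) pairs,
        # e.g. ("the burger was tasty", "positive").
        self.class_counts: Counter = Counter()
        self.word_counts: Dict[str, Counter] = defaultdict(Counter)
        self.vocab = set()
        for text, label in samples:
            self.class_counts[label] += 1
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text: str) -> str:
        total = sum(self.class_counts.values())
        best_label, best_logp = "", float("-inf")
        for label in self.class_counts:
            # Log prior for the class.
            logp = math.log(self.class_counts[label] / total)
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for word in text.lower().split():
                # Laplace-smoothed likelihood: unseen words get count 0 + 1.
                logp += math.log((self.word_counts[label][word] + 1) / denom)
            if logp > best_logp:
                best_label, best_logp = label, logp
        return best_label
```

Training simply counts word frequencies per class; prediction picks the class maximizing the log prior plus the smoothed log likelihoods, which is the closed-form, linear-time training property noted in Section V.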
But on Facebook data, Naïve Bayes has not performed well, at least according to our experimentation. When we took a close look at the poor performance of the Naïve Bayes classifier, we found that Facebook group posts are extremely noisy, since people post in random lengths with lots of spelling mistakes. For example:

Fig. 1. Variation in posts on the social network

Since the variation and combination of words in local Facebook groups is far too high, we would have needed a huge amount of training samples to reach a satisfactory level of accuracy. At this point, however, we found that simply counting positive and negative words and specialized keywords already gives better performance than the Naïve Bayes classifier. So, for the rest of the sentiment analysis we emphasized the lexicon based approach.

D. Approach II (Our Proposed Solution: Lexicon Based Sentiment Analysis)

In a nutshell, this approach is nothing more than counting key positive, negative, and neutral words. We found it extremely efficient, since ordinary people post on FoodBank without the complex sentence structures that would expose us to
satirical or ironic posts. It should be noted that people on FoodBank use a helpful pattern to review something. For example:

Fig. 2. Rating pattern on FoodBank

We can observe that people sometimes rate foods or locations on a scale of 10. This is an interesting keyword for determining what people think about certain feature words. But since people write such ratings in different manners, it is sometimes hard to find out what is actually being rated and whether there are multiple feature words. So, we consider such a sentiment rating as a basis for judging the whole message. Our approach calculates the sentiment of a status in a subjective manner rather than an objective one.

E. The Proposed Work

1. Download all the data from FoodBank.
2. Make a list of all the unique words used across all of the statuses.
3. Filter out all the words that do not represent any sentimental value or feature value (such as a location or food).
4. Rate the rest of the words, those that do represent a sentimental or feature value, and make a word list out of them.
5. Check each status for occurrences of words from our word list.
6. Tag each status with those words; the sentimental values represent the sentiment of the whole status, while the food or location indicates to whom the sentiment is expressed.
7. Once the sentiment of all the messages has been detected, a macro level overview such as food/restaurant popularity, trends, etc. becomes possible.

We have followed all these steps to analyze sentiment in this lexicon based technique.

VIII. APPLICATIONS OF SENTIMENT ANALYSIS

There are many applications of sentiment analysis. A short list is given below:
1. Customer opinion mining
2. Manipulation
3. Political observation
4. Social study
5. Crowd sourcing
6. Artificial intelligence research
7. Business insight

IX. ADVANTAGES OF SENTIMENT ANALYSIS USING LEXICON BASED APPROACH

The advantages of sentiment analysis on social media are given below:
1. Making decisions based on knowing others' interests.
2.
A low cost, time saving method of getting consumer insight.
3. A faster way of getting customer data.
4. The ability to act on customer suggestions.
5. Helps to identify an organization's Strengths, Weaknesses, Opportunities, and Threats (SWOT analysis).
6. More accurate and insightful customer perception and feedback.

X. CONCLUSION

In this paper, we have shown how to predict the sentiment behind a Facebook status post, which is by nature an unstructured, cross-language, and noisy dataset. It can thus be determined that traditional opinion mining is not efficient enough to find sentiments on social media such as Facebook, but the lexicon based dictionary approach works efficiently for such tasks. As sentiment analysis has many practical advantages, a structured process needs to be developed.

REFERENCES

[1] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan, "Thumbs up? Sentiment classification using machine learning techniques", Proceedings of EMNLP, pp. 79-86, 2002.
[2] Steven Bethard, Hong Yu, Ashley Thornton, Vasileios Hatzivassiloglou, and Dan Jurafsky, "Automatic Extraction of Opinion Propositions and
their Holders", Computing Attitude and Affect in Text: Theory and Applications, The Information Retrieval Series, Vol. 20, pp. 125-141.
[3] Wilson, Wiebe, and Hwa, "Just how mad are you? Finding strong and weak opinion clauses", presented at the 19th National Conference on Artificial Intelligence, pp. 761-767.
[4] Minqing Hu and Bing Liu, "Feature-based opinion mining and summarization", KDD '04: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168-177.
[5] Nitin Jindal and Bing Liu, "Mining Comparative Sentences and Relations", AAAI'06: Proceedings of the 21st National Conference on Artificial Intelligence, Vol. 2, pp. 1331-1336, 2006.
[6] Moreo A., Romero M., Castro J.L., Zurita J.M., "Lexicon-based comments-oriented news sentiment analyzer system", Expert Systems with Applications, 2012, pp. 66-80.
[7] Walaa Medhat, Ahmed Hassan, and Hoda Korashy, "Sentiment analysis algorithms and applications: A survey", Ain Shams Engineering Journal, Vol. 5, Issue 4, December 2014, pp. 1093-1113.
[8] Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan, "Sentiment classification using machine learning techniques", Proceedings of EMNLP, 2002, pp. 79-86.
[9] Prabu Palanisamy, Vineet Yadav, and Harsha Elchuri, "Serendio: Simple and Practical lexicon based approach to Sentiment Analysis", presented at the Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Seventh International Workshop on Semantic Evaluation (SemEval 2013), Atlanta, Georgia, June 14-15, 2013, pp. 543-548.
[10] Maite Taboada, Julian Brooke, Milan Tofiloski, Kimberly Voll, and Manfred Stede, "Lexicon-Based Methods for Sentiment Analysis", Computational Linguistics, Vol. 37, No. 2, pp. 268-307.
[11] Lincoln C. Wood, Torsten Reiners, and Haris S. Srivistava, "Expanding Sales and Operations Planning Using Sentiment Analysis: Demand and Sales Clarity from Social Media", ANZAM 2013, December 4-6, 2013.
[12] Mohammad Ali Abbasi, Sun-Ki Chai, Huan Liu, and Kiran Sagoo, "Real-World Behavior Analysis Through a Social Media Lens", SBP'12: Proceedings of the 5th International Conference on Social Computing, Behavioral-Cultural Modeling and Prediction, pp. 18-26.
[13] Bo Pang and Lillian Lee, "Opinion mining and sentiment analysis", Foundations and Trends in Information Retrieval, Vol. 2, No. 1-2, 2008, pp. 1-135.
<FEFF005500740069006c006900730065007a00200063006500730020006f007000740069006f006e00730020006100660069006e00200064006500200063007200e900650072002000640065007300200064006f00630075006d0065006e00740073002000410064006f006200650020005000440046002000700072006f00660065007300730069006f006e006e0065006c007300200066006900610062006c0065007300200070006f007500720020006c0061002000760069007300750061006c00690073006100740069006f006e0020006500740020006c00270069006d007000720065007300730069006f006e002e0020004c0065007300200064006f00630075006d0065006e00740073002000500044004600200063007200e900e90073002000700065007500760065006e0074002000ea0074007200650020006f007500760065007200740073002000640061006e00730020004100630072006f006200610074002c002000610069006e00730069002000710075002700410064006f00620065002000520065006100640065007200200035002e0030002000650074002000760065007200730069006f006e007300200075006c007400e90072006900650075007200650073002e> /GRE <FEFF03a703c103b703c303b903bc03bf03c003bf03b903ae03c303c403b5002003b103c503c403ad03c2002003c403b903c2002003c103c503b803bc03af03c303b503b903c2002003b303b903b1002003bd03b1002003b403b703bc03b903bf03c503c103b303ae03c303b503c403b5002003ad03b303b303c103b103c603b1002000410064006f006200650020005000440046002003ba03b103c403ac03bb03bb03b703bb03b1002003b303b903b1002003b103be03b903cc03c003b903c303c403b7002003c003c103bf03b203bf03bb03ae002003ba03b103b9002003b503ba03c403cd03c003c903c303b7002003b503c003b903c703b503b903c103b703bc03b103c403b903ba03ce03bd002003b503b303b303c103ac03c603c903bd002e0020002003a403b10020005000440046002003ad03b303b303c103b103c603b1002003c003bf03c5002003ad03c703b503c403b5002003b403b703bc03b903bf03c503c103b303ae03c303b503b9002003bc03c003bf03c103bf03cd03bd002003bd03b1002003b103bd03bf03b903c703c403bf03cd03bd002003bc03b5002003c403bf0020004100630072006f006200610074002c002003c403bf002000410064006f00620065002000520065006100640065007200200035002e0030002003ba03b103b9002003bc03b503c403b103b303b503bd03ad03c303c403b503c103b503c2002003b503ba03b403cc03c303b503b
903c2002e> /HEB <FEFF05D405E905EA05DE05E905D5002005D105D405D205D305E805D505EA002005D005DC05D4002005DB05D305D9002005DC05D905E605D505E8002005DE05E105DE05DB05D9002000410064006F006200650020005000440046002005E205D105D505E8002005D405E605D205D4002005D505D405D305E405E105D4002005D005DE05D905E005D4002005E905DC002005DE05E105DE05DB05D905DD002005E205E105E705D905D905DD002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E05D905D505EA05E8002E002D0033002C002005E205D905D905E005D5002005D105DE05D305E805D905DA002005DC05DE05E905EA05DE05E9002005E905DC0020004100630072006F006200610074002E002005DE05E105DE05DB05D90020005000440046002005E905E005D505E605E805D5002005E005D905EA05E005D905DD002005DC05E405EA05D905D705D4002005D105D005DE05E605E205D505EA0020004100630072006F006200610074002005D5002D00410064006F00620065002000520065006100640065007200200035002E0030002005D505D205E805E105D005D505EA002005DE05EA05E705D305DE05D505EA002005D905D505EA05E8002E> /HRV (Za stvaranje Adobe PDF dokumenata pogodnih za pouzdani prikaz i ispis poslovnih dokumenata koristite ove postavke. Stvoreni PDF dokumenti mogu se otvoriti Acrobat i Adobe Reader 5.0 i kasnijim verzijama.) 
/HUN <FEFF00410020006800690076006100740061006c006f007300200064006f006b0075006d0065006e00740075006d006f006b0020006d00650067006200ed007a00680061007400f30020006d0065006700740065006b0069006e007400e9007300e900720065002000e900730020006e0079006f006d00740061007400e1007300e10072006100200073007a00e1006e0074002000410064006f00620065002000500044004600200064006f006b0075006d0065006e00740075006d006f006b0061007400200065007a0065006b006b0065006c0020006100200062006500e1006c006c00ed007400e10073006f006b006b0061006c00200068006f007a006800610074006a00610020006c00e9007400720065002e0020002000410020006c00e90074007200650068006f007a006f00740074002000500044004600200064006f006b0075006d0065006e00740075006d006f006b00200061007a0020004100630072006f006200610074002000e9007300200061007a002000410064006f00620065002000520065006100640065007200200035002e0030002c0020007600610067007900200061007a002000610074007400f3006c0020006b00e9007301510062006200690020007600650072007a006900f3006b006b0061006c0020006e00790069007400680061007400f3006b0020006d00650067002e> /ITA (Utilizzare queste impostazioni per creare documenti Adobe PDF adatti per visualizzare e stampare documenti aziendali in modo affidabile. I documenti PDF creati possono essere aperti con Acrobat e Adobe Reader 5.0 e versioni successive.) 
/JPN <FEFF30d330b830cd30b9658766f8306e8868793a304a3088307353705237306b90693057305f002000410064006f0062006500200050004400460020658766f8306e4f5c6210306b4f7f75283057307e305930023053306e8a2d5b9a30674f5c62103055308c305f0020005000440046002030d530a130a430eb306f3001004100630072006f0062006100740020304a30883073002000410064006f00620065002000520065006100640065007200200035002e003000204ee5964d3067958b304f30533068304c3067304d307e305930023053306e8a2d5b9a3067306f30d530a930f330c8306e57cb30818fbc307f3092884c3044307e30593002> /KOR <FEFFc7740020c124c815c7440020c0acc6a9d558c5ec0020be44c988b2c8c2a40020bb38c11cb97c0020c548c815c801c73cb85c0020bcf4ace00020c778c1c4d558b2940020b3700020ac00c7a50020c801d569d55c002000410064006f0062006500200050004400460020bb38c11cb97c0020c791c131d569b2c8b2e4002e0020c774b807ac8c0020c791c131b41c00200050004400460020bb38c11cb2940020004100630072006f0062006100740020bc0f002000410064006f00620065002000520065006100640065007200200035002e00300020c774c0c1c5d0c11c0020c5f40020c2180020c788c2b5b2c8b2e4002e> /NLD (Gebruik deze instellingen om Adobe PDF-documenten te maken waarmee zakelijke documenten betrouwbaar kunnen worden weergegeven en afgedrukt. De gemaakte PDF-documenten kunnen worden geopend met Acrobat en Adobe Reader 5.0 en hoger.) 
/NOR <FEFF004200720075006b00200064006900730073006500200069006e006e007300740069006c006c0069006e00670065006e0065002000740069006c002000e50020006f0070007000720065007400740065002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e00740065007200200073006f006d002000650072002000650067006e0065007400200066006f00720020007000e5006c006900740065006c006900670020007600690073006e0069006e00670020006f00670020007500740073006b007200690066007400200061007600200066006f0072007200650074006e0069006e006700730064006f006b0075006d0065006e007400650072002e0020005000440046002d0064006f006b0075006d0065006e00740065006e00650020006b0061006e002000e50070006e00650073002000690020004100630072006f00620061007400200065006c006c00650072002000410064006f00620065002000520065006100640065007200200035002e003000200065006c006c00650072002e> /POL <FEFF0055007300740061007700690065006e0069006100200064006f002000740077006f0072007a0065006e0069006100200064006f006b0075006d0065006e007400f300770020005000440046002000700072007a0065007a006e00610063007a006f006e00790063006800200064006f0020006e00690065007a00610077006f0064006e00650067006f002000770079015b0077006900650074006c0061006e00690061002000690020006400720075006b006f00770061006e0069006100200064006f006b0075006d0065006e007400f300770020006600690072006d006f0077007900630068002e002000200044006f006b0075006d0065006e0074007900200050004400460020006d006f017c006e00610020006f007400770069006500720061010700200077002000700072006f006700720061006d006900650020004100630072006f00620061007400200069002000410064006f00620065002000520065006100640065007200200035002e0030002000690020006e006f00770073007a0079006d002e> /PTB 
<FEFF005500740069006c0069007a006500200065007300730061007300200063006f006e00660069006700750072006100e700f50065007300200064006500200066006f0072006d00610020006100200063007200690061007200200064006f00630075006d0065006e0074006f0073002000410064006f00620065002000500044004600200061006400650071007500610064006f00730020007000610072006100200061002000760069007300750061006c0069007a006100e700e3006f002000650020006100200069006d0070007200650073007300e3006f00200063006f006e0066006900e1007600650069007300200064006500200064006f00630075006d0065006e0074006f007300200063006f006d0065007200630069006100690073002e0020004f007300200064006f00630075006d0065006e0074006f00730020005000440046002000630072006900610064006f007300200070006f00640065006d0020007300650072002000610062006500720074006f007300200063006f006d0020006f0020004100630072006f006200610074002000650020006f002000410064006f00620065002000520065006100640065007200200035002e0030002000650020007600650072007300f50065007300200070006f00730074006500720069006f007200650073002e> /RUM <FEFF005500740069006c0069007a00610163006900200061006300650073007400650020007300650074010300720069002000700065006e007400720075002000610020006300720065006100200064006f00630075006d0065006e00740065002000410064006f006200650020005000440046002000610064006500630076006100740065002000700065006e007400720075002000760069007a00750061006c0069007a00610072006500610020015f006900200074006900700103007200690072006500610020006c0061002000630061006c006900740061007400650020007300750070006500720069006f0061007201030020006100200064006f00630075006d0065006e00740065006c006f007200200064006500200061006600610063006500720069002e002000200044006f00630075006d0065006e00740065006c00650020005000440046002000630072006500610074006500200070006f00740020006600690020006400650073006300680069007300650020006300750020004100630072006f006200610074002c002000410064006f00620065002000520065006100640065007200200035002e00300020015f00690020007600650072007300690075006e0069006c006500200075006c0074006500720069006f006100720065002e> /RUS 
<FEFF04180441043f043e043b044c04370443043904420435002004340430043d043d044b04350020043d0430044104420440043e0439043a043800200434043b044f00200441043e043704340430043d0438044f00200434043e043a0443043c0435043d0442043e0432002000410064006f006200650020005000440046002c0020043f043e04340445043e0434044f04490438044500200434043b044f0020043d0430043404350436043d043e0433043e0020043f0440043e0441043c043e044204400430002004380020043f04350447043004420438002004340435043b043e0432044b044500200434043e043a0443043c0435043d0442043e0432002e002000200421043e043704340430043d043d044b04350020005000440046002d0434043e043a0443043c0435043d0442044b0020043c043e0436043d043e0020043e0442043a0440044b043204300442044c002004410020043f043e043c043e0449044c044e0020004100630072006f00620061007400200438002000410064006f00620065002000520065006100640065007200200035002e00300020043800200431043e043b043504350020043f043e04370434043d043804450020043204350440044104380439002e> /SLV <FEFF005400650020006e006100730074006100760069007400760065002000750070006f0072006100620069007400650020007a00610020007500730074007600610072006a0061006e006a006500200064006f006b0075006d0065006e0074006f0076002000410064006f006200650020005000440046002c0020007000720069006d00650072006e006900680020007a00610020007a0061006e00650073006c006a00690076006f0020006f0067006c00650064006f00760061006e006a006500200069006e0020007400690073006b0061006e006a006500200070006f0073006c006f0076006e0069006800200064006f006b0075006d0065006e0074006f0076002e00200020005500730074007600610072006a0065006e006500200064006f006b0075006d0065006e0074006500200050004400460020006a00650020006d006f0067006f010d00650020006f0064007000720065007400690020007a0020004100630072006f00620061007400200069006e002000410064006f00620065002000520065006100640065007200200035002e003000200069006e0020006e006f00760065006a01610069006d002e> /SUO 
<FEFF004b00e40079007400e40020006e00e40069007400e4002000610073006500740075006b007300690061002c0020006b0075006e0020006c0075006f0074002000410064006f0062006500200050004400460020002d0064006f006b0075006d0065006e007400740065006a0061002c0020006a006f0074006b006100200073006f0070006900760061007400200079007200690074007900730061007300690061006b00690072006a006f006a0065006e0020006c0075006f00740065007400740061007600610061006e0020006e00e400790074007400e4006d0069007300650065006e0020006a0061002000740075006c006f007300740061006d0069007300650065006e002e0020004c0075006f0064007500740020005000440046002d0064006f006b0075006d0065006e00740069007400200076006f0069006400610061006e0020006100760061007400610020004100630072006f0062006100740069006c006c00610020006a0061002000410064006f00620065002000520065006100640065007200200035002e0030003a006c006c00610020006a006100200075007500640065006d006d0069006c006c0061002e> /SVE <FEFF0041006e007600e4006e00640020006400650020006800e4007200200069006e0073007400e4006c006c006e0069006e006700610072006e00610020006f006d002000640075002000760069006c006c00200073006b006100700061002000410064006f006200650020005000440046002d0064006f006b0075006d0065006e007400200073006f006d00200070006100730073006100720020006600f60072002000740069006c006c006600f60072006c00690074006c006900670020007600690073006e0069006e00670020006f006300680020007500740073006b007200690066007400650072002000610076002000610066006600e4007200730064006f006b0075006d0065006e0074002e002000200053006b006100700061006400650020005000440046002d0064006f006b0075006d0065006e00740020006b0061006e002000f600700070006e00610073002000690020004100630072006f0062006100740020006f00630068002000410064006f00620065002000520065006100640065007200200035002e00300020006f00630068002000730065006e006100720065002e> /TUR 
<FEFF005400690063006100720069002000620065006c00670065006c006500720069006e0020006700fc00760065006e0069006c0069007200200062006900720020015f0065006b0069006c006400650020006700f6007200fc006e007400fc006c0065006e006d006500730069002000760065002000790061007a0064013100720131006c006d006100730131006e006100200075007900670075006e002000410064006f006200650020005000440046002000620065006c00670065006c0065007200690020006f006c0075015f007400750072006d0061006b0020006900e70069006e00200062007500200061007900610072006c0061007201310020006b0075006c006c0061006e0131006e002e00200020004f006c0075015f0074007500720075006c0061006e0020005000440046002000620065006c00670065006c0065007200690020004100630072006f006200610074002000760065002000410064006f00620065002000520065006100640065007200200035002e003000200076006500200073006f006e0072006100730131006e00640061006b00690020007300fc007200fc006d006c00650072006c00650020006100e70131006c006100620069006c00690072002e> /ENU (Use these settings to create Adobe PDF documents suitable for reliable viewing and printing of business documents. Created PDF documents can be opened with Acrobat and Adobe</s>
<s>Reader 5.0 and later.)>> setdistillerparams /HWResolution [600 600] /PageSize [612.000 792.000]>> setpagedevice</s>
Aspect-Based Sentiment Analysis on Small Datasets

Aspect-Based Sentiment Analysis Using SemEval and Amazon Datasets

By: Tamanna Hasib (ID: 17141017) and Saima Ahmed Rahin (ID: 13301117)
Advisor: Mr. Moin Mostakim
BRAC University, Department of Computer Science and Engineering

THESIS REPORT Tamanna Hasib, Saima Ahmed Rahin Aspect-Based Sentiment Analysis Using SemEval and Amazon Datasets PAGE 1 OF 37

Declaration

This is to certify that the research work titled "Aspect-Based Sentiment Analysis Using SemEval and Amazon Datasets" is submitted by Saima Ahmed Rahin and Tamanna Hasib to the Department of Computer Science and Engineering, BRAC University, in partial fulfillment of the requirements for the degree of Bachelor of Science in Computer Science and Engineering. We hereby declare that this thesis is based on results found from our own work. Materials of work found by other researchers are mentioned by reference. This thesis, neither in whole nor in part, has been previously submitted for any degree. We carried out our research under the supervision of Mr. Moin Mostakim.

Supervisor's signature
__________________
Mr. Moin Mostakim

Author's signature              Author's signature
__________________              __________________
Saima Ahmed Rahin               Tamanna Hasib

Abstract

Sentiment analysis has become one of the most important tools in natural language processing, since it opens many possibilities for understanding people's opinions on different topics. Aspect-based sentiment analysis takes this a step further and aims to find out what exactly someone is talking about, and whether they like or dislike it. The millions of customer reviews available in online shops are a perfect real-world application area for this task. There have been multiple approaches to tackling this problem, using machine learning, deep learning and neural networks.
However, the number of labelled reviews currently available for training classifiers is very small. Therefore, we undertook multiple steps to research ways of improving ABSA performance on small datasets, by comparing recurrent and feed-forward neural networks and by incorporating additional input data that was generated using different readily available NLP tools.

Keywords

Opinion Mining, Natural Language Processing, Recurrent Neural Network, Feed-Forward Neural Network, Part-of-Speech Tagging, Dependency Parsing, Word Vectors

Acknowledgements

First and foremost, we thank our thesis supervisor Mr. Moin Mostakim for his continuous support throughout the course of our work on this thesis. Moreover, we thank our families and friends for their encouragement and support. Lastly, we thank BRAC University for giving us the opportunity to complete our Bachelor of Science in Computer Science and Engineering.

Table of Contents

Declaration ......................................... 1
Abstract ............................................ 2
Keywords ............................................ 2
Acknowledgements .................................... 2
Table of Contents ................................... 3
List of Figures ..................................... 4
List of Tables ...................................... 4
Abbreviations ....................................... 5
1. Introduction ..................................... 6
2. Literature Review ................................ 7
2.1. Natural Language Processing .................... 7
2.2. Related Work ................................... 8
2.3. Word Vectors ................................... 9
2.4. Part-of-Speech Tagging ......................... 10
2.5. Dependency Parsing ............................. 11
2.6. Recurrent Neural Network ....................... 11
2.7. Feed-Forward Artificial Neural Network ......... 12
3. Methodology ...................................... 13
4. System Implementation ............................ 14
4.1. Languages and Tools ............................ 14
4.2. Datasets ....................................... 15
4.3. Text Preparation ............................... 16
4.4. Training and Preparing Word Vectors ............ 17
4.5. Integrating Part-of-Speech Tags ................ 18
4.6. Integrating Word Dependencies .................. 18
4.7. Running the Neural Networks .................... 19
5. Experiments and Result Analysis .................. 23
5.1. Experiments .................................... 23
5.2. Result Analysis ................................ 32
6. Comparative Analysis ............................. 33
7. Conclusion and Future Work ....................... 34
8. References ....................................... 35

List of Figures

Fig. 1 Basic aspect-based sentiment analysis example ........................ 6
Fig. 2 Model visualization of word vector clustering ........................ 9
Fig. 3 Simple example for a POS-tagged sentence ............................. 10
Fig. 4 Relevance of POS tags for aspects and sentiment ...................... 10
Fig. 5 Simple representation of a sentence with parsed dependencies ......... 11
Fig. 6 Layers of a basic recurrent neural network ........................... 11
Fig. 7 Layers of a basic feed-forward neural network ........................ 12
Fig. 8 Screenshot of the labelling tool we built for this task .............. 15
Fig. 9 Sample sentence from the SemEval laptop dataset ...................... 16
Fig. 10 DisplaCy Dependency Visualizer example output ....................... 19
Fig. 11 Aspect counting F1 score development using RNN ...................... 25
Fig. 12 Aspect counting training process using FF-ANN ....................... 25
Fig. 13 Aspect extraction F1 score development using RNN .................... 26
Fig. 14 Aspect extraction F1 score development using FF-ANN ................. 26
Fig. 15 Aspect sentiment prediction F1 score development using RNN .......... 27
Fig. 16 Aspect sentiment prediction F1 score development using FF-ANN ....... 27
Fig. 17 F1 score comparison of all experiments .............................. 32
Fig. 18 Aspect sentiment prediction F1 score development and trendline using FF-ANN ... 34

List of Tables

Table 1: Results of experiments using RNN: only the review sentences ........ 28
Table 2: Results of experiments using FF-ANN: only the review sentences ..... 28
Table 3: Results of experiments using RNN: word vectors ..................... 29
Table 4: Results of experiments using FF-ANN: word vectors .................. 29
Table 5: Results of experiments using RNN: word vectors & POS tags .......... 30
Table 6: Results of experiments using FF-ANN: word vectors & POS tags ....... 30
Table 7: Results of experiments using RNN: word vectors, POS tags & dep. .... 31
Table 8: Results of experiments using FF-ANN: word vectors, POS tags & dep. . 31
Table 9: Comparison of the complete result set .............................. 32

Abbreviations

NLP – Natural Language Processing
ABSA – Aspect-Based Sentiment Analysis
POS – Part-of-Speech
WV – Word Vectors
WD – Word Dependencies
RNN – Recurrent Neural Network
LSTM – Long Short-Term Memory
FF-ANN – Feed-Forward Artificial Neural Network
1. Introduction

Opinion mining is the task of gathering huge amounts of data that contain valuable information about people's views on different topics. Online shopping websites like Amazon.com are of specific interest for this task, as they host hundreds of thousands of user reviews for tens of thousands of different products. However, those reviews are currently only of use to users who read them one by one. Using aspect-based sentiment analysis, it is possible to analyze these reviews and predict opinions not only for a whole review or sentence, but on an aspect level [1]. This means it is possible to find out that, for example, a sentence is praising a laptop's display while simultaneously criticizing its battery.

Fig. 1 Basic aspect-based sentiment analysis example

So far there have been multiple approaches to and competitions on aspect-based sentiment analysis, which resulted in various solutions using machine learning algorithms, as well as a few implementations using neural networks [1][2][3][4]. Despite the many approaches, nearly all of them based their work on a relatively small dataset provided by the SemEval community for this specific task, which consists of review sentences for laptops, hotels and restaurants, labelled with their respective aspects and sentiment polarities [1]. Small datasets like this are a problem for classification tasks, since they cannot cover the wide variety of expressions that occur in human language.

To offer a solution to this problem, we proposed that it is possible to support aspect-based sentiment analysis by generating additional structural data out of the existing text data.
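The kind of aspect-level prediction illustrated in Fig. 1 can be sketched as follows. The sentence and labels here are invented for illustration and are not taken from the SemEval data; the point is only the difference between one label per sentence and one label per aspect.

```python
# Toy illustration (invented sentence and labels, not SemEval data):
# a sentence-level classifier gives one polarity for the whole sentence,
# while aspect-based sentiment analysis yields one polarity per aspect.
sentence = "The display is gorgeous, but the battery dies within two hours."

# A whole-sentence classifier must collapse the mixed opinion into one label.
sentence_level = {"sentence": sentence, "polarity": "mixed"}

# ABSA keeps the two opinions apart, one polarity per aspect term.
aspect_level = [
    {"aspect": "display", "polarity": "positive"},
    {"aspect": "battery", "polarity": "negative"},
]
```

The aspect-level representation is what the classifiers discussed in the following sections are trained to produce.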
We wanted to prove that this can be achieved by using natural language processing models that are already trained on big corpora of text and can provide metadata about each review sentence down to the word level. We intended to show that feeding this additional data into a neural network along with the labelled text can increase the accuracy in predicting aspect-based sentiments for reviews.

2. Literature Review

We researched various methods and technologies for processing written human language to achieve our goal. Data mining and natural language processing have been important topics in recent years, and scientists all around the world have achieved various levels of success. The relevant background studies and literature for our task are given below.

2.1. Natural Language Processing

Natural language processing is an umbrella term for all kinds of approaches to using computers to understand and manipulate human language. Research in this field covers written language as well as actual auditory speech [5]. Scientists have been working in this field since sufficiently capable computers became available, but with the rise of the internet as one of the most important means of communication, NLP became more and more important. Today, NLP has become a part of our daily lives. Google, for instance, tries to predict what exactly we mean when we start typing a search term into its search
field. Moreover, nearly all modern phones include software that tries to understand our speech to do certain tasks for us, and gadgets like Amazon's Alexa and Google Home are already being used as home assistants that understand what we ask them to do.
Over the years, many different out-of-the-box solutions have been developed, which can be used freely by anyone to work on various NLP tasks. A big part of those solutions are classifiers, which provide a desired output given a certain textual input. Since our goal was to implement an aspect-based sentiment analysis system, the already available tools were of great interest in finding out which of them could aid in realizing our classifier. These tools are explained in detail in this section.
2.2. Related Work
There have already been multiple attempts at aspect-based sentiment analysis using different approaches, both machine learning [3][6][7] and neural networks [2]. Most approaches so far were based on machine learning. In fact, in the annual SemEval competitions on ABSA in 2014 and 2015, most teams decided to use machine learning techniques like support vector machines (SVM) or conditional random field (CRF) classifiers and scored the best results with those approaches. The ABSA task was split into two sub-tasks: aspect extraction and sentiment prediction. The winning team of 2015 in the aspect extraction task used CRF, modelled aspect extraction as a multiclass classification problem, and used n-grams and word clusters learnt from Amazon (laptop review task) and Yelp (restaurant review task) reviews. The winning team on the sentiment prediction task in 2015 used a maximum entropy classifier in its machine learning approach [1]. Our initial inspiration for using deep learning was mostly based on Wang's and Liu's work on aspect-based sentiment analysis [2].
Using deep neural networks, they provided a proof of concept showing that deep learning algorithms are capable of outperforming other implementations in aspect-based sentiment analysis. However, as mentioned before, they mainly used the small SemEval dataset. Their approach scored better than any of the winning teams in the 2015 SemEval competition. Along with it, they also made use of the Google News 300-dimensional word vectors [16]. We have seen the practice of using word vectors to support machine learning and neural networks in multiple research papers [2][8][9][10], hence we decided to adopt it for our work as well. However, like one team in [1] did with word clusters, we decided to train our word vectors on reviews rather than newspaper articles. Wang and Liu used word vectors trained with the word2vec algorithm. While word2vec is a predictive model [11][12][13], there are other approaches like GloVe, which are count-based models. Baroni, Dinu and Kruszewski found that predictive models are superior to count-based ones [14]. Based on this information and the conclusions provided, we chose word2vec as our training method for word vectors as well.
2.3. Word Vectors
Word vectors are word representations in the form of high-dimensional vectors of real numbers. Each word in a given text corpus can be represented by one vector. The vectors of words that share a close relationship, because they often appear together in the text corpus, are clustered together in their vector representation as well. This way it is possible, for example, to obtain similar words or synonyms for a word simply by retrieving words with a vector representation close to that of a given word [15].
Fig. 2 Model visualization of word vector clustering
The behavior of word vectors was of particular interest for our aspect extraction sub-task. If a review is talking about a laptop's display, the use of word vectors could help the network understand more quickly that the words "display" and "screen" are often used interchangeably and belong to the same aspect category [17]. Moreover, if the word vectors are trained using a high number of product reviews from the same domain as the reviews to be classified, they could also act as a kind of spell check, since common misspellings cluster together with their correct forms [18].
2.4. Part-of-Speech Tagging
Part-of-speech (POS) tagging is a way of marking up a text with meta-information regarding the grammatical roles of the text's words. Common tags are word types (noun, verb, adjective, adverb etc.), but many POS taggers provide even more information, such as whether a word appears in plural form, signifies possession or even negation.
Fig. 3 Simple example of a POS-tagged sentence
There are multiple classifiers which accomplish this task successfully, one of the most popular being the Stanford Log-linear Part-Of-Speech Tagger [23][24].
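As a sketch, POS tags make it easy to separate aspect candidates (nouns) from sentiment carriers (adjectives and verbs); the hand-tagged pairs below stand in for a real tagger's output:

```python
# Hand-tagged (word, POS) pairs stand in for a real POS tagger's output.
tagged = [("the", "DET"), ("display", "NOUN"), ("is", "VERB"),
          ("beautiful", "ADJ")]

def candidate_aspects(tagged):
    # In many cases, sentence aspects are represented by nouns.
    return [word for word, tag in tagged if tag == "NOUN"]

def candidate_sentiment_words(tagged):
    # Sentiment is often carried by adjectives and verbs.
    return [word for word, tag in tagged if tag in ("ADJ", "VERB")]

print(candidate_aspects(tagged))           # ['display']
print(candidate_sentiment_words(tagged))   # ['is', 'beautiful']
```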
There are different methods used for POS tagging; one of the most accurate approaches today is based on the Hidden Markov Model (HMM) [26]. As shown in Fig. 3, POS taggers can determine which words are adjectives, which are nouns, and which are verbs.
Fig. 4 Relevance of POS tags for aspects and sentiment
The relevance of this for our task lies in the fact that in many cases, sentence aspects are represented by nouns, while sentiment can be detected using verbs and adjectives (Fig. 4). For this reason, we were confident that POS tagging would be a valuable addition to the data used to train the aspect-based sentiment analysis networks.
2.5. Dependency Parsing
Fig. 5 Simple representation of a sentence with parsed dependencies
Just like part-of-speech tagging, dependency parsing is a way of finding out how text is structured. However, dependency parsing goes a step further and extracts the meaning of a text by analyzing the word relationships within it [22][28]. With a well-trained dependency parser it is possible to know exactly which adjective
describes which noun, or which noun belongs to which verb [27]. We hoped that the dependencies would, on the one hand, improve the sentiment analysis, as proposed in [27], and, on the other hand, help the classifier understand which aspect belongs to which sentiment [25].
2.6. Recurrent Neural Network
Fig. 6 Layers of a basic recurrent neural network
A standard RNN is a deep neural network which maps sequences to sequences [21][20]. In comparison to other neural networks, like feed-forward or convolutional neural networks, the hidden states of the network's output in an RNN are fed back into the network. This way, inputs from earlier data points in a sequence still have an influence on later iterations, which closely resembles the way human memory stores information. For this reason, recurrent neural networks are often used to classify or generate sequential data.
2.6.1. Long Short-Term Memory
Conventional recurrent neural networks used to make use of Back-Propagation Through Time (BPTT) or Real-Time Recurrent Learning (RTRL) for the error calculation. The problem with these approaches was that error signals tended to blow up or vanish, which led to a lack of efficiency and accuracy when training RNNs. In [30], Hochreiter and Schmidhuber proposed a new recurrent network architecture – the LSTM. It was designed to overcome said problems by using memory cells and gates which learn to bridge time intervals. The gates define whether a memory is added to, kept in or removed from a cell. This way the LSTM learns which memories should be kept and which can be "forgotten". The result was a recurrent neural network which offers drastically better efficiency and accuracy, and can even keep improving over 1000 and more epochs of training.
2.7. Feed-Forward Artificial Neural Network
Fig.
7 Layers of a basic feed-forward neural network
In contrast to recurrent neural networks, feed-forward networks allow signals to travel only one way – previous outputs do not influence later states. Feed-forward ANNs are today the most widely used neural networks, since they are easy to implement and very versatile in their application. They consist of one input layer, a desired number of hidden layers and an output layer. The input size is defined by the desired input data and the output size by the target labels used for loss calculation [31][32]. Using hidden layers with one or more nonlinear activation functions, many classification tasks can be implemented using this type of neural network.
3. Methodology
The ABSA classification task can be divided into three sub-tasks, namely:
1. Predicting how many aspects a sentence contains
2. Extracting those aspects
3. Predicting the sentiment for each of the sentence's aspects
Since each sentence can potentially contain more than one aspect, the classifier of sub-task 2
had to return a probability distribution over aspects in a sentence. However, this probability distribution alone can hardly be used to define how many of the highest probabilities should be used to create the result set of aspects. Therefore, sub-task 1 was introduced to first define how many aspects the second classifier should take from the values it returns. Sub-task 3 then uses the found aspects to predict which sentiment is applied to each aspect inside a review sentence.
Because the available labelled SemEval dataset is very small, around one thousand additional Amazon review sentences on the relevant product category were scraped and labelled by hand to increase the amount of training data. The resulting dataset contained about 2500 labelled sentences. In addition to this dataset, around 100,000 more reviews were scraped from Amazon to train the word vectors. The word vectors were trained using the popular word2vec algorithm [11][12][13].
The main intention of our work was to find out if part-of-speech tags and semantic word dependencies obtained from the original datasets are suitable for improving the performance of an ABSA classifier. For this task, a POS tagger and a dependency parser were used to extract the needed information from each of the review sentences. Three neural networks were used to take care of the three classification sub-tasks defined above. The first neural network takes a review sentence, the pre-trained word vectors and a list of POS tags to predict how many aspects the sentence contains. The second neural network takes the same data as the first one; however, the sentences are split into sub-sentences according to the obtained word dependencies. It returns a prediction of which aspects are most likely contained in the sentence and uses the result from the first network to return the correct number of aspects.
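The interplay between sub-tasks 1 and 2 can be sketched as follows: the predicted aspect count decides how many of the extraction network's probabilities form the result set (the probabilities and label names here are made up):

```python
def select_aspects(aspect_probs, predicted_count):
    # Keep the `predicted_count` most probable aspect labels: sub-task 1's
    # output tells sub-task 2 how many of its probabilities to trust.
    ranked = sorted(aspect_probs, key=aspect_probs.get, reverse=True)
    return ranked[:predicted_count]

# Hypothetical probability distribution returned by the extraction network:
probs = {"DISPLAY": 0.61, "BATTERY": 0.27, "PRICE": 0.08, "KEYBOARD": 0.04}
print(select_aspects(probs, 2))  # → ['DISPLAY', 'BATTERY']
```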
The third neural network again takes the same data as the second one; however, an aspect label is also fed into it to determine the polarity for that specific aspect within the sentence. All networks were implemented both as recurrent neural networks and as feed-forward neural networks, to compare their performance on the ABSA task.
4. System Implementation
The implementation of the classification system consisted of multiple parts. In the following sections, the necessary steps are explained in detail and the tools used are mentioned. Firstly, the datasets had to be prepared and tokenized so that they could be fed into a neural network. During the preparation of the datasets, POS tags and dependencies were generated. Afterwards, word vectors were trained using the set of 100,000 Amazon reviews. In the end, three neural networks were implemented, one for each of the three ABSA tasks: aspect count prediction, aspect extraction and polarity prediction.
4.1. Languages and Tools
The system is implemented in Python 3 and makes use of the following Python modules:
4.1.1. SpaCy
SpaCy is an NLP library offering
multiple tools for various tasks. It is one of the fastest syntactic parsers available, and its accuracy lies within 1% of the best available tools [29]. Apart from dependency parsing, it also offers a full-fledged POS tagger. SpaCy was developed as an NLP tool that can be used in production environments.
4.1.2. PyTorch
PyTorch is a Python implementation of the Torch library, which was written in the Lua programming language. Just like other deep learning libraries such as TensorFlow, it uses tensors for its computations. Tensors are matrices of real numbers which offer multiple extra functionalities that make using them in neural networks very easy, taking care of various tasks that would otherwise have to be done by hand. The main difference between PyTorch and other popular libraries like TensorFlow is that it uses a dynamic graph definition approach. This means you can define, change and execute nodes as you go, which makes the programming of networks much more intuitive and integrates better into the Python programming environment. PyTorch offers many different types of neural networks that can be implemented very quickly. Apart from that, you are also free to implement your own network from scratch.
4.1.3. Gensim
Gensim is an open-source library specializing in unsupervised text modelling algorithms. We used it to easily train word vectors.
4.2. Datasets
The main datasets we used were the "Laptops Train Data" and "Laptops Trial Data" of SemEval's 2015 Task 12: Aspect Based Sentiment Analysis, for training our neural networks and evaluating the results respectively. About 1000 review sentences were additionally labelled by hand to increase the dataset's size to about 2500 labelled sentences.
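SemEval's aspect labels combine an entity and an attribute separated by a "#" character; a minimal sketch of parsing such a label:

```python
def parse_label(label):
    # SemEval aspect labels look like "ENTITY#ATTRIBUTE", e.g. "LAPTOP#PRICE".
    entity, attribute = label.split("#")
    return entity, attribute

print(parse_label("LAPTOP#PRICE"))  # → ('LAPTOP', 'PRICE')
```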
For the task of labelling the additional sentences in an easy way, we implemented a web-based tool that presents the user with a review sentence, pulled from a database of scraped reviews which had previously been split into sentences. The user can then choose which aspects are present in the sentence and which polarity they have.
Fig. 8 Screenshot of the labelling tool we built for this task
The set of aspect labels in the original dataset consists of entities and attributes, separated by a "#" character. Our labelling tool used the same entities and attributes, to create a dataset with the same amount of information as in the SemEval dataset. There are 22 different entities (e.g. LAPTOP, BATTERY, DISPLAY) and 9 different attributes (e.g. USABILITY, PRICE, OPERATION_PERFORMANCE). This leads to a total of 198 possible combinations, but the SemEval dataset uses 81 combinations, hence 81 individual aspect labels.
Fig. 9 Sample sentence from the SemEval laptop dataset
Using such a high number of possible labels to train a classifier on a small dataset makes the system very prone to overfitting [19]. For this reason, we
translated the entity-attribute combinations into simpler labels, decreasing the total number of aspect labels to 30. As for the sentiment polarity, the labels "positive", "neutral" and "negative" were used as intended in the SemEval dataset.
The second dataset used consisted of around 100,000 Amazon reviews on laptops, tablets and smartphones. This dataset was used to train the word vectors that were later used as word embeddings in the neural networks.
4.3. Text Preparation
In NLP, it is important to consider that there are different types of text corpora. An example is the difference between lyrical texts and scientific texts. Not only the choice of words, but also the sentence structure and even the punctuation can be very different. This is the reason why data scientists use different text corpora to train different algorithms for their classification tasks, depending on which kind of text they are working with. The area of product reviews is in itself very diverse: some users use a language which is close to scientific texts; others are sometimes even hard for humans to understand, because they contain errors, wrong punctuation and even emoticons.
It may not be possible to change those reviews' sentence structures to bring them in line with each other, but there are certain steps that can be undertaken to at least minimize huge differences. These steps are:
1. Converting the whole text corpus to lower case
2. Removing non-alphanumerical characters (except for apostrophes)
3. Spell-checking every word in the corpus
4.3.1. Preparing the Main Dataset
Our main dataset of 2500 sentences was pre-processed first. Firstly, the three steps above were applied to the text. Next, the text was tokenized, meaning it was split into its separate words.
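A minimal sketch of preparation steps 1 and 2 plus tokenization (spell-checking, step 3, would require an external dictionary and is omitted here):

```python
import re

def prepare(text):
    # Steps 1-2 from section 4.3: lowercase the text, then replace every
    # non-alphanumerical character (except apostrophes) with a space.
    text = text.lower()
    text = re.sub(r"[^a-z0-9' ]+", " ", text)
    # Simple whitespace tokenization into separate words.
    return text.split()

print(prepare("GREAT Display, but the battery's TERRIBLE!!"))
# → ['great', 'display', 'but', 'the', "battery's", 'terrible']
```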
In this step, it was important to consider the format expected by the external tools we used, like the POS tagger and dependency parser. For example, some tools expect the sentence "i'm fine." to be split as ["i'm", "fine"], others expect ["i", "'m", "fine"]. Since we were using the Python library SpaCy, which provides tokenization, POS tagging and dependency parsing, the sentences were already tokenized in the correct form. After tokenization, the words of all review sentences were added to a global vocabulary, mapping each word to exactly one distinct integer value.
4.3.2. Preparing the Word Vector Dataset
Gensim's algorithm for training word vectors automatically takes care of tokenization and other necessary tasks. However, it expects a text file containing one long line of text. The reviews which were scraped from Amazon therefore needed to be concatenated into one file. The resulting text corpus was converted to lowercase, since only lowercase text was relevant for the implemented network. Moreover, punctuation was also stripped from the text corpus.
4.4. Training and Preparing Word Vectors
The word vectors were trained using Gensim's implementation of Google's word2vec algorithm. A dimensionality of 50 was used for the resulting vectors, and the
vectors were trained using the text file described in the section above. After the word vectors were ready, they were matched with the vocabulary obtained from the 2500 review sentences. Since words that weren't present in the review sentences were of no use for training the networks, all word vectors representing words not in the vocabulary were removed.
4.5. Integrating Part-of-Speech Tags
As mentioned before, the POS tags were generated using SpaCy. The tagger provides two types of POS tags for each word: a simple POS tag (e.g. VERB, NOUN etc.), which only provides the basic word type, and a detailed tag (e.g. VBZ, VBG, NN), which provides a more sophisticated description of the word's role, based on its context and form. Both types of tags were used, to later determine which one leads to better results. Since POS tags can't simply be fed into the network as words, a POS vocabulary mapping each tag to a distinct integer value was generated, very similar to how it was done with the main dataset. This vocabulary was then used to generate embeddings, which were concatenated with the word vectors during training to form a tensor containing the sentence's text information as well as its POS tags.
4.6. Integrating Word Dependencies
Bringing the word dependencies into a form which can be fed to the neural network was more complex, since they describe relationships rather than one-to-one tags such as those the POS tagging process produces. We decided to use the word dependencies provided by SpaCy to split sentences into multiple sub-sentences, which were then fed into the network separately. This leads to the situation that parts of sentences are trained on wrong aspects, since, for example, a sentence with two aspects will be split, and both parts will be trained using each of the two aspects.
However, given other occurrences of sentences and sub-sentences which contain the same topics and aspects, this problem should be negligible, and predictions should still tend to point to the correct aspect.
4.6.1. Recognizing Sentence Parts
SpaCy's dependency parser attaches a parent node to each of a sentence's words. This parent can either have another parent itself or be the root word of the whole sentence. Each edge in the resulting dependency tree is described by a dependency descriptor that shows which kind of connection two words share. A useful dependency visualizer called DisplaCy is provided by SpaCy and was used to understand the dependencies more easily during development.
Fig. 10 DisplaCy Dependency Visualizer example output (source: demos.explosion.ai/displacy/)
To split sentences into multiple semantic parts, each word in the sentence was analyzed in a loop. For each item, the dependency tree was traversed from the leaves upwards to the root word. During the traversal,
it was checked whether the current word is the main verb of a sub-sentence, meaning a verb that has its own subject. If that was the case, every connection leading to this verb described its own semantic unit within the given sentence, hence a separable sub-sentence. This procedure worked well if the sentence contained a verb. However, it failed when there were no verbs, as for example in the sentence "Nice display, but terrible battery." If no verb was found in the first loop, a second loop went over the sentence again, this time looking for nouns. If a noun was found which wasn't part of a compound, signifying that it is the main subject of a sub-sentence, every connected word leading to this noun was treated as a sub-sentence. Using these procedures, sentences were split into multiple semantic parts, which were likely talking about different aspects.
4.7. Running the Neural Networks
The first two steps of the proposed classifier were predicting the number of aspects and extracting the aspects from each of the reviews' sentences. The third step was then to use this information and predict the sentiment applied to each of the found aspects in the given sentence. As this three-step approach is a logical process for determining the sentiment of an aspect in a sentence, we split our classifier into these steps and implemented one neural network for each of them. To find out which type of neural network suits our needs best, we implemented one RNN and one feed-forward ANN for each of the three tasks. However, they were trained to predict different labels. The first two networks (RNN + FF-ANN) were optimized to predict how many aspects are contained in one sentence. The second two were trained to then extract those aspects, while the last two networks were trained to predict a sentiment given one of the aspects.
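The sub-sentence splitting described in section 4.6.1 can be sketched over a hand-built dependency tree; the `Tok` class below merely mimics the spaCy `Token` attributes the procedure relies on (`pos_`, `dep_`, `children`), and the parse for "Nice display but terrible battery" is constructed by hand for illustration:

```python
class Tok:
    # Minimal stand-in for spaCy's Token: index, text, coarse POS,
    # dependency label and child nodes in the dependency tree.
    def __init__(self, i, text, pos, dep):
        self.i, self.text, self.pos_, self.dep_ = i, text, pos, dep
        self.children = []

def anchors(tokens):
    # First pass: main verbs, i.e. verbs that have their own subject.
    verbs = [t for t in tokens if t.pos_ == "VERB"
             and any(c.dep_ in ("nsubj", "nsubjpass") for c in t.children)]
    if verbs:
        return verbs
    # Fallback for verbless sentences: non-compound nouns act as heads.
    return [t for t in tokens if t.pos_ == "NOUN" and t.dep_ != "compound"]

def span(anchor, anchor_set):
    # Collect every word leading to this anchor, stopping at other anchors.
    words = [anchor]
    for c in anchor.children:
        if c not in anchor_set:
            words.extend(span(c, anchor_set))
    return words

def split_subsentences(tokens):
    a = anchors(tokens)
    return [[t.text for t in sorted(span(x, a), key=lambda t: t.i)]
            for x in a]

# Hand-built parse of the verbless sentence "Nice display but terrible battery".
nice = Tok(0, "Nice", "ADJ", "amod")
display = Tok(1, "display", "NOUN", "ROOT")
but = Tok(2, "but", "CCONJ", "cc")
terrible = Tok(3, "terrible", "ADJ", "amod")
battery = Tok(4, "battery", "NOUN", "conj")
display.children = [nice, battery]
battery.children = [but, terrible]

print(split_subsentences([nice, display, but, terrible, battery]))
# → [['Nice', 'display'], ['but', 'terrible', 'battery']]
```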
Lastly, the results of the different types of networks were compared, as well as the results using different combinations of input data.
4.7.1. Calculation of Loss and Optimization
All our networks used the mean squared error loss function for calculating the loss, using the output of the neural network and the expected targets (e.g. number of aspects, aspect and aspect polarity). After backpropagating to calculate the gradients – which is easily done in PyTorch, since the framework automatically records the necessary calculations – the network was optimized using stochastic gradient descent as the optimization function with a learning rate of 0.005, which we found to provide the best possible results in our test cases.
4.7.2. Common Setup of the Recurrent Neural Networks
All three recurrent networks were implemented using PyTorch's nn.LSTM class and initialized using the generated word vectors as embeddings. They consisted of four layers, the first one being the input layer. The next two layers were provided by the LSTM, which computes the following function for each element
in the input sequence (e.g. the word representations for each word in a sentence) for those two layers:

$i_t = \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})$
$f_t = \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})$
$g_t = \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg})$
$o_t = \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho})$
$c_t = f_t \odot c_{t-1} + i_t \odot g_t$
$h_t = o_t \odot \tanh(c_t)$
Equation 1 LSTM layer calculations

$h_t$ is the hidden state at time $t$, $c_t$ is the cell state at time $t$, and $x_t$ is the hidden state of the previous layer at time $t$, or the input in the case of the first layer. $i_t$, $f_t$, $g_t$ and $o_t$ are the input, forget, cell and output gates. $\sigma$ is the sigmoid function. A general explanation of the LSTM can be found in section 2.6.1 Long Short-Term Memory.
The last layer first computed a linear transformation

$y = x W^T + b$
Equation 2 Linear transformation

to transform the dimensionality of the data coming from the LSTM back into a desired output dimension, which is defined by the corresponding training label vector. Afterwards, the nonlinear softmax function was used to convert the output layer to a probability distribution:

$\mathrm{softmax}(x_j) = \frac{e^{x_j - \mathrm{shift}}}{\sum_k e^{x_k - \mathrm{shift}}}$, where $\mathrm{shift} = \max_i x_i$
Equation 3 Softmax function

The loss function and optimization approach described in section 4.7.1 Calculation of Loss and Optimization was ultimately used to compute the loss and optimize the network in each epoch.
4.7.3. Common Setup of the Feed-Forward Networks
Like the three RNNs we used, the three feed-forward neural networks also share the same setup. Each of them consists of one input layer, one hidden layer and one output layer. The hidden layer applies a linear transformation to the input:

$y = x W^T + b$
Equation 4 Linear transformation

We used it to transform the data to a 150-dimensional hidden layer, which we found to be a good value for best results in our test cases.
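The recurrent setup of section 4.7.2 – embedding layer, two LSTM layers, a linear transformation and a softmax – can be sketched in PyTorch as follows; the sizes are illustrative, and initializing the embedding weights from the trained word vectors is omitted:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    # Sketch of the recurrent setup: embedding -> two-layer LSTM ->
    # linear -> softmax. Sizes are illustrative defaults, not the
    # thesis's exact configuration.
    def __init__(self, vocab_size, embed_dim=50, hidden_dim=150, n_classes=30):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids).unsqueeze(1)  # (seq_len, batch=1, embed_dim)
        h, _ = self.lstm(x)
        logits = self.out(h[-1])                # hidden state of last time step
        return torch.softmax(logits, dim=-1)    # probability distribution

model = LSTMClassifier(vocab_size=100)
probs = model(torch.tensor([5, 17, 42]))        # three made-up token ids
print(probs.shape)  # one probability distribution over 30 aspect labels
```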
Next, the first nonlinearity – the ReLU function – was applied to each element of the layer:

$\mathrm{ReLU}(x) = \max(0, x)$
Equation 5 Rectified Linear Units (ReLU) function

After that, another linear transformation was applied to transform the data to the target dimension, which was defined by the desired label dimensionality. Afterwards, the nonlinear softmax function was used to convert the vector values into a probability distribution for the loss function:

$\mathrm{softmax}(x_j) = \frac{e^{x_j - \mathrm{shift}}}{\sum_k e^{x_k - \mathrm{shift}}}$, where $\mathrm{shift} = \max_i x_i$
Equation 6 Softmax function

The loss function and optimization approach described in section 4.7.1 Calculation of Loss and Optimization was ultimately used to compute the loss and optimize the network.
4.7.4. Network Training Specification for Each Classification Task
The data used to train the RNNs and feed-forward networks was essentially the same, hence the approaches described in the following three sections apply to both types of networks.
4.7.4.1. Aspect Count Prediction Networks
The main difference between the three classification networks was the data which was fed into them. The first network, which was trained to predict the number of aspects in a given sentence, was provided with a review sentence – or its sub-sentences, if it was split using word dependencies –
using its word vectors and POS tags, which were translated into word embeddings. For the training label in this network, we used a one-hot encoding. We first determined the maximum number of aspects in one sentence across the whole dataset and created an array of this length, containing only zeros. For each sentence we then placed a "1" at the position in the array which represented the number of aspects in that sentence; e.g. for a sentence with two aspects, the "1" was placed at array index two. The network was set up to return a prediction in line with this target array, so that the loss function could be used to backpropagate and then optimize the network.
4.7.4.2. Aspect Extraction Networks
The second network, which was trained to extract the aspects within a given sentence, was fed one review sentence or sub-sentence, along with its POS tags. Each token in the word sequence, as well as in the POS tag sequence, was translated into word embeddings. The resulting matrices were then concatenated and fed into the first layer of the network. Each sentence in the dataset was fed into the network according to how many aspects it had. For example, a sentence that was labelled with two aspects was fed into the network two times, each time using one of the two aspects as the target. The target labels for this network were the correct aspects for each iteration. Since only one aspect was fed into the network per iteration, the target also contained only one aspect. Again, we used a one-hot encoding to represent the correct aspect among the set of all aspects in an array of zeros.
4.7.4.3. Polarity Prediction Networks
For the third task, the word embeddings and POS tag embeddings were generated in the same way as in the first two networks.
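The one-hot count target described in section 4.7.4.1 can be sketched as:

```python
def count_target(n_aspects, max_aspects):
    # One-hot target for the aspect count network: a "1" at the index equal
    # to the number of aspects in the sentence, zeros everywhere else.
    target = [0.0] * (max_aspects + 1)
    target[n_aspects] = 1.0
    return target

# A sentence with two aspects, with at most four aspects in the dataset:
print(count_target(2, 4))  # → [0.0, 0.0, 1.0, 0.0, 0.0]
```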
To incorporate the aspect, another vector had to be introduced as well, which let us feed the desired aspect into the network. The simple approach was to use another embedding representation for this aspect. When the training algorithm iterated over the input sets, the three inputs – sentence, POS tags and aspect – were translated into embeddings, which were then concatenated and fed into the first network layer. Each sentence or sub-sentence was again fed into the network according to how many aspect labels it contained. The target label used was a one-hot encoded vector, like in the previous two training algorithms. It consisted of the three polarities, represented by indexes in the vector, and the correct polarity for the given aspect was encoded using a "1" inside a vector of zeros.
5. Experiments and Result Analysis
5.1. Experiments
Experiments with the six networks were run using different parameters for the number of training epochs, hidden layer dimensions and learning rates, to find out which parameters give the best results.
All parameter combinations were first run using only the sentences as inputs, then using word vectors, then using word vectors and POS tags, and finally using word vectors, POS tags and word dependencies. This was done for each of the six recurrent and feed-forward networks, resulting in a total of 24 training and evaluation rounds each time we tested the networks. This way we could analyze which parameters lead to the best results for which combination of input data. In the following sections we present and discuss the results we obtained, using the best precision, recall and F1 measure we achieved during testing. Since our classification problems involved multiple classes, and precision, recall and F1 measure are generally used for binary classification problems, we calculated each of the measures for each class and took the average precision, recall and F1 measure over all classes.
5.1.1. Training Process
In every epoch of the training process, we let the algorithm output the current average precision, recall and F1 score on the training set for each task and network respectively. As explained above, firstly all the networks were trained only on text, then with additional word vectors, then with additional POS tags and lastly using the split sentences. To compare training speed, we compared the development of the F1 score between the networks trained on different input data. After running the training process multiple times using different settings, we settled on using 150 dimensions for the hidden layers, stochastic gradient descent as the optimization function, a learning rate of 0.005 for the optimizer and mean squared error as the loss function, since these settings overall gave the best results in our tests.
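The macro-averaging described above can be sketched as follows (the per-class counts are made up for illustration):

```python
def macro_prf(per_class_counts):
    # Macro-averaged precision/recall/F1 as in section 5.1: compute each
    # measure per class from (tp, fp, fn) counts, then average over classes.
    ps, rs, fs = [], [], []
    for tp, fp, fn in per_class_counts:
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p); rs.append(r); fs.append(f)
    n = len(per_class_counts)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n

# (tp, fp, fn) per class; two classes with made-up counts:
print(macro_prf([(8, 2, 2), (3, 1, 3)]))
```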
5.1.1.1. Aspect Counting Accuracy Increase during Training

Fig. 11 Aspect counting F1 score development using RNN
Fig. 12 Aspect counting training process using FF-ANN

The comparisons show that the feed-forward network trained much faster overall and reached a higher score on the test data for every combination of input data. POS tags looked especially promising on the RNN.

5.1.1.2. Aspect Extraction Accuracy Increase during Training

Fig. 13 Aspect extraction F1 score development using RNN
Fig. 14 Aspect extraction F1 score development using FF-ANN

The RNN's performance on the training set during training seemed disappointing; however, its performance on the test set later proved to be slightly better. The feed-forward network, in contrast, showed solid learning curves during training.

5.1.1.3. Aspect Sentiment Prediction Accuracy Increase during Training

Fig. 15 Aspect sentiment prediction F1 score development using RNN
Fig. 16 Aspect sentiment prediction F1 score development using FF-ANN

The most notable curves among the sentiment networks were those of the text-only training processes, since their accuracy increased drastically right from the start. It was hard to tell whether this was due to very good training or to overfitting, so it had to be checked against the test dataset.

5.1.2. Evaluation Using Test Dataset

5.1.2.1. Without Additional Data

The first experiment used only the bare sentences as inputs. Sentence tokens were translated into random embeddings with a dimensionality of 50. This means the embeddings were only used to uniquely identify words in the sentence, without introducing any semantic information into the network, as word vectors would. We used this test's results as a baseline against which the results of the following experiments could be compared, to see whether POS tags and word dependencies can improve the accuracy of our neural network implementations.

Results using recurrent networks:

               Precision   Recall      F1
Aspect count    84.46%     84.12%    84.29%
Aspects         46.85%     50.08%    48.41%
Polarities      58.06%     57.70%    57.88%

Table 1: Results of experiments using RNN: only the review sentences

Results using feed-forward networks:

               Precision   Recall      F1
Aspect count    84.96%     84.87%    84.91%
Aspects         54.20%     56.96%    55.55%
Polarities      59.49%     59.83%    59.66%

Table 2: Results of experiments using FF-ANN: only the review sentences

The results on the test set using only the sentence text were good, but did not quite live up to the expectations raised by the accuracy increase seen during training. This is most apparent in the polarity prediction task, where the feed-forward network reached nearly 100% accuracy during training but scored only about 60% on the test data. It apparently overfitted slightly to the training data.
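The random-embedding baseline used in this first experiment can be sketched as follows. This is an illustrative NumPy sketch under assumed names; each word is assigned a fixed random 50-dimensional vector, so tokens are uniquely identified without carrying any semantic information.

```python
import numpy as np

# Sketch of the text-only baseline: every vocabulary word gets a fixed random
# 50-dimensional vector. The same word always maps to the same vector, but the
# vectors carry no semantic information (unlike pretrained word vectors).

rng = np.random.default_rng(42)
EMB_DIM = 50

embedding_table = {}  # word -> fixed random vector

def random_embedding(word):
    """Look up (or lazily create) the random embedding for a word."""
    if word not in embedding_table:
        embedding_table[word] = rng.normal(size=EMB_DIM)
    return embedding_table[word]

sentence = "the battery life is great".split()
vectors = np.stack([random_embedding(w) for w in sentence])
print(vectors.shape)

# The same word always resolves to the same vector:
assert np.array_equal(random_embedding("the"), embedding_table["the"])
```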
5.1.2.2. With Word Vectors

In the second experiment, the sentence tokens were translated into the generated word vectors and then fed into the networks. Using word vectors is already common practice to support neural networks on NLP tasks; testing their effect separately from POS tags and word dependencies therefore allows a comparison between this often-used practice and our proposed techniques.

Results using recurrent networks:

               Precision   Recall      F1
Aspect count    84.60%     84.73%    84.66%
Aspects         49.86%     52.95%    51.36%
Polarities      64.57%     62.49%    63.51%

Table 3: Results of experiments using RNN: word vectors

Results using feed-forward networks:

               Precision   Recall      F1
Aspect count    93.16%     93.84%    93.50%
Aspects         61.90%     56.61%    59.14%
Polarities      65.70%     65.49%    65.59%

Table 4: Results of experiments using FF-ANN: word vectors

As expected, the performance of both the recurrent and feed-forward networks increased with word vectors compared to the text-only experiments. The highest rise can be seen in the results of the aspect count task using the feed-forward network.
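The advantage word vectors bring over the random embeddings of the baseline is that semantically related words receive similar vectors. The tiny hand-made vectors below are illustrative stand-ins for real word2vec output, just to show the property the networks can exploit:

```python
import numpy as np

# Illustrative sketch: with pretrained word vectors, related words ("good",
# "great") are close in vector space, unrelated words ("good", "cable") are
# not. These 3-d vectors are made up; real word2vec vectors are much larger.

word_vectors = {
    "good":  np.array([0.9, 0.1, 0.0]),
    "great": np.array([0.8, 0.2, 0.1]),
    "cable": np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine(word_vectors["good"], word_vectors["great"])
sim_unrelated = cosine(word_vectors["good"], word_vectors["cable"])
print(round(sim_related, 3), round(sim_unrelated, 3))
```

Random baseline embeddings have no such structure, which is one plausible reason the word-vector runs outperform the text-only runs.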
5.1.2.3. With Word Vectors and POS Tags

The third experiment used POS tags along with word vectors applied to the sentence tokens. Since POS tags contain semantic information that is similar to, but not as detailed as, that contained in word dependencies, they were tested separately as well.

Results using recurrent networks:

               Precision   Recall      F1
Aspect count    84.45%     84.58%    84.51%
Aspects         50.81%     53.32%    52.03%
Polarities      62.72%     65.00%    63.84%

Table 5: Results of experiments using RNN: word vectors & POS tags

Results using feed-forward networks:

               Precision   Recall      F1
Aspect count    94.57%     94.57%    94.57%
Aspects         63.56%     55.39%    59.19%
Polarities      66.90%     68.92%    67.89%

Table 6: Results of experiments using FF-ANN: word vectors & POS tags

Again, the results of nearly all experiments improved compared to the word-vector and text-only approaches, and are overall in line with what the training process suggested. However, despite the promising training accuracy of the aspect counting RNN, its score on that task on the test set was not significantly higher than in the previous tests; in fact, it is even slightly lower.

5.1.2.4. With Word Vectors, POS Tags and Word Dependencies

In the final experiment, all types of the proposed input data were used: input sentence, word vectors, POS tags and word dependencies, to see whether word dependencies can increase the networks' accuracy even further than POS tags alone.
Results using recurrent networks:

               Precision   Recall      F1
Aspect count    79.71%     81.82%    80.75%
Aspects         54.32%     58.36%    56.27%
Polarities      52.87%     53.79%    53.33%

Table 7: Results of experiments using RNN: word vectors, POS tags & dependencies

Results using feed-forward networks:

               Precision   Recall      F1
Aspect count    91.84%     85.77%    88.70%
Aspects         69.61%     73.54%    71.52%
Polarities      64.17%     50.51%    56.53%

Table 8: Results of experiments using FF-ANN: word vectors, POS tags & dependencies

Training the networks on sub-sentences derived from word dependencies led to an interesting result: compared to the tests with less input data, accuracy decreased on the aspect counting and polarity prediction tasks, but increased significantly on the aspect extraction task using the feed-forward network.

5.2. Result Analysis

            Precision                    Recall                       F1
            Count   Aspect  Polarity    Count   Aspect  Polarity    Count   Aspect  Polarity
RNN
-           84.46%  46.85%  58.06%     84.12%  50.08%  57.70%      84.29%  48.41%  57.88%
WV          84.60%  49.86%  64.57%     84.73%  52.95%  62.49%      84.66%  51.36%  63.51%
WV+P        84.45%  50.81%  62.72%     84.58%  53.32%  65.00%      84.51%  52.03%  63.84%
WV+P+WD     79.71%  54.32%  52.87%     81.82%  58.36%  53.79%      80.75%  56.27%  53.33%
FF-ANN
-           84.96%  54.20%  59.49%     84.87%  56.96%  59.83%      84.91%  55.55%  59.66%
WV          93.16%  61.90%  65.70%     93.84%  56.61%  65.49%      93.50%  59.14%  65.59%
WV+P        94.57%  63.56%  66.90%     94.57%  55.39%  68.92%      94.57%  59.19%  67.89%
WV+P+WD     91.84%  69.61%  64.17%     85.77%  73.54%  50.51%      88.70%  71.52%  56.53%

Table 9: Comparison of the complete result set (WV = word vectors, P = POS tags, WD = word dependencies)

Fig. 17 F1 score comparison of all experiments

As expected, the use of word vectors, already common practice in many NLP tasks, increased accuracy in all the implemented neural networks compared to using the input sentences alone. Moreover, the results show that in all cases the feed-forward neural networks are superior to the RNNs. In fact, for the aspect counting task, the highest F1 score of the RNN is still lower than the lowest score of the feed-forward networks, and for the aspect extraction task, the best RNN F1 score is only slightly better than the lowest score among the feed-forward networks.

Overall, the networks trained using POS tags performed best on the aspect counting and polarity prediction tasks. The aspect extraction task, however, was by far best solved by the feed-forward network that was additionally trained on word dependencies. The result set also shows that the proposed way of integrating word dependencies noticeably decreased accuracy in the polarity prediction task, both for the RNN and the FF-ANN, which suggests that the information contained in word dependencies is more relevant to aspect recognition than to polarity recognition for a given aspect.

Given these insights, we conclude that feed-forward neural networks are much better suited to the task of aspect-based sentiment analysis and all its sub-tasks, namely aspect counting, aspect extraction and aspect polarity prediction. We also conclude that feeding POS tags and word dependencies into the networks as additional inputs is indeed capable of increasing accuracy in all the sub-tasks.

6. Comparative Analysis

In this section, we briefly compare our results to those of Wang and Liu [2], to deepen our understanding and identify possible lessons. Wang and Liu used an approach with only two steps, aspect extraction and aspect sentiment prediction, so we can only compare those two tasks to their work.

Aspect Extraction Task: Their best result on the aspect extraction task was an F1 score of 51.3%, which our approach outperformed with an F1 score of 71.5%, achieved using POS tags and word dependencies.

Sentiment Prediction Task: Wang and Liu solved the sentiment prediction task with an F1 score of 78.3%, while our best result, using POS tags, was only 67.9%. This can very likely be improved with different network parameters, and especially with a higher number of epochs: the trendline of our training process on sentiment prediction using POS tags (Fig. 18) shows that the network keeps improving well beyond 300 epochs. This should be investigated in future attempts.
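The trendline argument above amounts to fitting a least-squares line to the per-epoch F1 scores and inspecting its slope; a minimal sketch follows, where the scores are synthetic placeholders rather than our measured values.

```python
import numpy as np

# Sketch: fit a linear trendline to per-epoch F1 scores with np.polyfit.
# The synthetic curve below stands in for the measured training curve.

rng = np.random.default_rng(1)
epochs = np.arange(1, 301)
f1_scores = 0.40 + 0.001 * epochs + rng.normal(0.0, 0.01, size=300)

slope, intercept = np.polyfit(epochs, f1_scores, 1)
print(f"fitted slope per epoch: {slope:.5f}")
# A clearly positive slope near the last epochs suggests the network would
# keep improving if trained beyond 300 epochs.
```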
Fig. 18 Aspect sentiment prediction F1 score development and trendline using FF-ANN and POS tags

7. Conclusion and Future Work

In this work, we have shown that the performance of neural networks for aspect-based sentiment analysis can indeed be improved by using part-of-speech tags and word dependencies, as proposed. A direct comparison also showed that feed-forward neural networks perform considerably better on the given ABSA tasks than recurrent neural networks do. Our hope is that these results inspire the use of additional data, easily gathered with readily available NLP tools, to increase the performance of neural networks on other tasks as well. While our results are very promising, future work should aim to increase accuracy further by tweaking different parameters and trying out other layer combinations inside the feed-forward neural networks. Additionally, tasks other than ABSA should be tested for the applicability of our findings concerning the integration of POS tags, word dependencies and possibly other NLP tools.

8. References

[1] M. Pontiki, D. Galanis, H. Papageorgiou, S. Manandhar, I. Androutsopoulos (2015). SemEval-2015 Task 12: Aspect Based Sentiment Analysis. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, Colorado, USA.
[2] B. Wang, M. Liu (2015). Deep Learning for Aspect-Based Sentiment Analysis.
[3] S. Brody, N. Elhadad (2010). An Unsupervised Aspect-Sentiment Model for Online Reviews. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, pp. 804–812, Los Angeles, California, USA.
[4] D. Zhang, T. Luo, D. Wang, R. Liu (2015). Learning from LDA using Deep Neural Networks.
[5] G. Chowdhury (2003). Natural Language Processing. Annual Review of Information Science and Technology, 37, pp. 51–89. ISSN 0066-4200. http://dx.doi.org/10.1002/aris.1440370103.
[6] M. Hu, B. Liu (2004). Mining and Summarizing Customer Reviews. In KDD'04, August 22–25, 2004, Seattle, Washington, USA.
[7] X. Ding, B. Liu, P. S. Yu (2008). A Holistic Lexicon-Based Approach to Opinion Mining. In WSDM'08, February 11–12, 2008, Palo Alto, California, USA.
[8] H. Shirani-Mehr (2015). Applications of Deep Learning to Sentiment Analysis of Movie Reviews.
[9] Z. Tu, W. Jiang, Q. Liu, S. Lin (2012). Dependency Forest for Sentiment Analysis. Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS, Beijing, China. Natural Language Processing and Chinese Computing, pp. 69–77.
[10] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, C. Potts (2011). Learning Word Vectors for Sentiment Analysis.
[11] T. Mikolov, K. Chen, G. Corrado, J. Dean (2013). Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR.
[12] T. Mikolov, I. Sutskever, K. Chen, G. Corrado, J. Dean (2013). Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of NIPS.
[13] T. Mikolov, W.-t. Yih, G. Zweig (2013). Linguistic Regularities in Continuous Space Word Representations. In Proceedings of NAACL HLT.
[14] M. Baroni, G. Dinu, G. Kruszewski (2014). Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. Center for Mind/Brain Sciences, University of Trento, Italy.
[15] T. Mikolov, K. Chen, G. Corrado, J. Dean (2013). Efficient Estimation of Word Representations in Vector Space.
[16] T. Mikolov et al. (2013). Distributed Representations of Words and Phrases and their Compositionality. Conference on Neural Information Processing Systems (NIPS 2013).
[17] K. Kasahara, T. Kato, C. Manning. Synonym Retrieval Using Word Vectors from Text Data. https://nlp.stanford.edu/~manning/xyzzy/acl30224.pdf
[18] Y.-R. Wang, Y.-F. Liao (2015). Word Vector/Conditional Random Field-based Chinese Spelling Error Detection for SIGHAN-2015 Evaluation. pp. 46–49. 10.18653/v1/W15-3108.
[19] P. Domingos (2012). A Few Useful Things to Know About Machine Learning. University of Washington, Seattle, USA.
[20] H. Sak, A. Senior, F. Beaufays (2014). Long Short-Term Memory Based Recurrent Neural Network Architectures for Large Vocabulary Speech Recognition.
[21] I. Sutskever (2013). Training Recurrent Neural Networks. University of Toronto.
[22] D. Chen, C. D. Manning (2014). A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of EMNLP 2014.
[23] K. Toutanova, C. D. Manning (2000). Enriching the Knowledge Sources Used in a Maximum Entropy Part-of-Speech Tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pp. 63–70.
[24] K. Toutanova, D. Klein, C. Manning, Y. Singer (2003). Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proceedings of HLT-NAACL 2003, pp. 252–259.
[25] T. Thura Thet, J.-C. Na, C. S. G. Khoo (2010). Aspect-based sentiment analysis of movie reviews on discussion boards. Journal of Information Science, 36(6), 2010, pp. 823–848.
[26] F. M. Hasan, N. UzZaman, M. Khan (2007). Comparison of different POS Tagging Techniques (n-gram, HMM and Brill's tagger) for Bangla. BRAC University, Bangladesh.
[27] H. Pouransari, S. Ghili (2014). Deep Learning for Sentiment Analysis of Movie Reviews.
[28] M.-C. de Marneffe, C. D. Manning (2008–2016). Stanford typed dependencies manual.
[29] J. D. Choi et al. (2015). It Depends: Dependency Parser Comparison Using A Web-based Evaluation Tool. ACL.
[30] S. Hochreiter, J. Schmidhuber (1997). Long Short-Term Memory. Neural Computation, 9(8), pp. 1735–1780.
[31] D. Dai, W. Tan, H. Zhan (2017). Understanding the Feedforward Artificial Neural Network Model From the Perspective of Network Flow. arxiv.org/abs/1704.08068.
[32] R. Amardeep, K. T. Swamy (2017). Training Feed forward Neural Network With Backpropogation Algorithm. IJECS, Volume 6, Issue 1, January 2017, pp. 19860–19866.
Visualizing of NLP Results 0��5���� ������� � � �� ���5$� �� %5�� � � ������ ��� $�'�� ���$�$����#���� ��%���������� �������2�������5���=���������5���.�;��$�.� ���� ��� ��� ���: � ����� �$� � !)!!� �:������ !++�� ��� �����5##����������� ��*�!�������� ��������.����� � ��2=� � ��5���� �� �� #������� �� ��� ��� ������������ � � �5�� ��������� :�� ��� ����� �#� �� � �� �� ��� � �� +�������� ��� �� ������� �%<������� �� �� ��� ��$� �:������ 5���.�M��#���������:�������: �$�����9��5�����2�� �;�.���.�.� �+� ��#������ ���� �5%<�������2� � �� #������2.� ��� ��� ��������: �������$� �+��������������������� ���������� ��+�++���� ���� �5##����</s>
<s>�����.� ;��� ���� �#� �� � �5%<�������2�� ++�� ������ ���������� ��*)�)������5##�����������$� �������#� �� ��5%<�������2.�1:���� ����5�� �� �:���� ������%������� � � ;�.� �*.�0� �:����$�� �� :�� � �� 1:������ 5���� #������� 5$%��� #��#��� ��$$� ��� �#�����5����#���.�.���. 3������:����������2�������.���. 3������:���"5%<�������2�������.��+. "5%<�������2�� ���������2�&����$�.��*. 0��:����A��5���=���� �)�+Authorized licensed use limited to: Macquarie University. Downloaded on May 31,2020 at 17:17:31 UTC from IEEE Xplore. Restrictions apply. B. Algorithmic performance and result visualization �����$���#����$� ����� �%������5������� ��$���5����5�� ��� 5��� �$����9.�� ���������:� ������� �:��������%����5��5������$�� .�5��� �$����9� ���#���� ��������9��$����9�:����� ��� ������$%� ���� ������5���������� ��#��������������.�(2��� 5��� �$����9�:���� ��������#�������� ���$��������5���%2���$#��� :�������5�������� ��������.��1�%�������.�1�(D�����.� �38;6"�38�4�10�J���������� Predicted Classyes noActual Class yes ��� �������� ���� ����� �������� ���� no ����� �������� ���� �� �������� ���� ���5���2� ��� ��� ��� ��� ���� ������ %��:�� � ������� � �� 1���18�� ��������� ���1���18��;8��;�.�"�$#�2�:���� �:������������:�.�FPFNTNTPTNTPAccuracy++++= �0������ ������ ������ ���� ������%��:�� �1��� ��������� ��1���;8.�1����������: �� ���������:� 0� iiFNTPTPcall= ��������� � ������ ������ ���� ������%��:�� �1��� ��������� ��1���;�.�E���� �:������������:�.��� iiiFPTPTPecision= �;��� ����5���� $���5��� ����� :�� $5���#����� ��� ������� T�#������� �:���������������� ����� �#���5�����������%2�������� ����������#������� .���0���0�� ivecisioncallecisioncallmeasureF××=− �E�� ����� �9#���$� ���� �5�� ���5��5���� ����� �:���� %2�������� ����� �2�����#���� ������ � ���������� ��%2�������� ��2� #���� �� ���� � � �������� :����� �:� �2� ���� #���� ���������� :��� ��� ����� � � �� �:� �2� #���� �� �������� :��� �������� ����#�������2.���� ��� ������������C88�$���� ������ � ��������$����: ����� %���� 
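The four evaluation measures (accuracy, recall, precision, F-measure) can be checked with a short, dependency-free Python sketch; the function name and the toy labels below are illustrative, not taken from the paper:

```python
from collections import Counter

def binary_metrics(y_true, y_pred, positive="yes"):
    """Compute accuracy, recall, precision and F-measure from
    confusion-matrix counts, as in Eqs. (i)-(iv)."""
    counts = Counter(zip(y_true, y_pred))  # (actual, predicted) pairs
    tp = counts[(positive, positive)]
    fn = sum(v for (t, p), v in counts.items() if t == positive and p != positive)
    fp = sum(v for (t, p), v in counts.items() if t != positive and p == positive)
    tn = sum(counts.values()) - tp - fn - fp

    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f_measure = 2 * recall * precision / (recall + precision)
    return accuracy, recall, precision, f_measure

# toy example: four supporting ("yes") and two non-supporting ("no") tweets
y_true = ["yes", "yes", "yes", "yes", "no", "no"]
y_pred = ["yes", "yes", "yes", "no", "no", "yes"]
print(binary_metrics(y_true, y_pred))  # accuracy 4/6, recall 0.75, precision 0.75, F 0.75
```

The same counting generalizes to the multi-class case by treating each class in turn as the positive class, which is how the per-classifier recall, precision, and F-measure tables are obtained.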
Comparison diagrams present the accuracy of the KNN, NB, DT, and RF classifiers. Table IV presents the recall, precision, and F-measure of the classifiers used; it is observed that the performance of KNN is the best, followed by RF, NB, and DT, with respect to these measures, and a graphical representation of them is also presented.

C. Comparisons with existing works
We compared the accuracy of our results with existing works that used the DT, KNN, and RF classifiers, taking the accuracies of the proposed models from their confusion matrices.

TABLE V. COMPARISONS OF ACCURACY WITH EXISTING WORKS

For both DT and KNN the proposed method achieves a higher accuracy than the corresponding existing work. We also compared the recall, precision, and F-measure of the proposed models with an existing related work; here, too, the proposed models achieve the better results.

TABLE VI. COMPARISON OF RECALL, PRECISION AND F-MEASURE WITH EXISTING RELATED WORKS

D. Discussions
In this section we discuss the experimental results of the proposed method in comparison with related machine learning approaches. From the NLP results it is observed that most of the tweets are in the supporting class. For the subjectivity analysis, the number of subjective tweets dominates, and the polarity analysis shows that most of the analyzed tweets are positive. To make classification decisions about these data, we implemented five types of supervised machine learning algorithms; among them, the KNN algorithm gives the best performance in terms of accuracy. Compared with the existing related works, our proposed method also performs better with respect to accuracy, recall, precision, and F-measure.

V. CONCLUSION
In this paper we have shown a complete application of text mining and sentiment analysis on real data. For the proposal, we analyzed the collected tweets, and five types of supervised machine learning algorithms were implemented on the proposed structured data. The experimental results show that most of the tweets are in the supporting class, and that the KNN algorithm gives the best results compared with the other algorithms. We can therefore conclude that our proposed method performs better when compared with the existing methods tested. In future, we will try to work with more data from platforms like Facebook and YouTube.

Authorized licensed use limited to: Macquarie University. Downloaded on May 31,2020 at 17:17:31 UTC from IEEE Xplore. Restrictions apply.
Proceedings of the 2019 5th International Conference on Advances in Electrical Engineering (ICAEE), 26-28 September, Dhaka, Bangladesh

A Computational Approach of Recognizing Emotion from Bengali Texts

Hasan Abid Ruposh and Mohammed Moshiul Hoque
Dept. of Computer Science & Engineering, Chittagong University of Engineering & Technology, Chittagong-4349, Bangladesh
{hasan.ruposh, mmoshiulh}@gmail.com

Abstract— Emotion recognition is the task of determining the distinct emotion exhibited in a text. In recent years, the availability of an enormous amount of textual data, especially dogmatic and self-expressive text, has drawn significant research attention to this area. This paper presents an emotion recognition technique that can identify six basic emotions from Bengali texts: happy, sad, anger, fear, surprise, and disgust. We develop a corpus consisting of 1200 emotive words that are used to train an SVM classifier for identifying the different emotions. Experimental results show that the proposed system can recognize emotions with 73% accuracy, which is higher than the Naive Bayes based approach (60%).

Keywords– Bangla language processing; Emotion recognition; Feature extraction; Emotion corpus; Evaluation

I. INTRODUCTION
Emotion involves experience, cognition, feelings, behaviour, physiology, and conceptualization [1]. Numerous approaches are employed to identify emotions from humans, such as body gestures, facial expressions, heart rate, blood pressure, and text information. In this work, we concentrate on the recognition of emotion from Bengali texts. Emotion recognition from text is a growing research topic of NLP/computational linguistics that is closely related to sentiment analysis.
Sentiment analysis aims to identify negative, positive, or neutral feelings in text, whereas emotion analysis aims to identify the kind of feeling expressed through the text, such as happiness, disgust, anger, sadness, surprise, or fear.

Recently, emotion recognition in text has become a more attractive topic due to its broad utilization in marketing, psychology, HCI, advertising, artificial intelligence, pervasive computing, etc. The interpretation of emotions may therefore be advantageous to any company or private enterprise, for managing the response to a typical disaster, calculating a happiness index, developing better interactive AI agents, analyzing consumer reactions, assessing the impact of products on a particular population, recommendation systems, question-answering systems, and so on [2]. There are six basic emotions that a human is capable of expressing through facial expression: happiness, sadness, anger, fear, surprise, and disgust [3]. Humans express their emotions through facial expressions, speech, body gestures, or writing. Various distinct sources of information, such as text, speech, and visuals, can be considered to analyze human emotions. In this work, we focus on the recognition of the six basic emotions from Bengali text only.

Emotion recognition in text refers to the use of computational linguistics and natural language processing to determine discrete emotional information from source texts. Text is the most common interaction medium on the web, and recently, due to the rapid growth of Internet usage and the evolution of Web 2.0, users are uploading huge amounts of text content in the form of social media posts, micro-blogs, news articles, etc.
These contents can be used to develop better interactive systems, which need to be able to analyze text and deduce the emotion of the end user.

Determining emotions from text is a quite challenging and complicated task due to the evasive character of emotion expression in text and the intricacy of human emotions [2]. Although Bengali is the sixth-most widely spoken language in the world, no usable computational system has been developed that can recognize emotions from Bengali texts. Significant research activity on emotion recognition in text has been carried out, especially in English and European languages. However, very little work has been done on sentiment analysis in Bengali text [4][5]. In addition, no useful work has yet been conducted to recognize the six basic emotions from Bengali texts in Bangladesh. Thus, in this work we propose a computational technique for recognizing emotions from Bengali text using a machine learning algorithm.

978-1-7281-4934-9/19/$31.00 ©2019 IEEE
Authorized licensed use limited to: University of Exeter. Downloaded on May 06,2020 at 15:16:59 UTC from IEEE Xplore. Restrictions apply.

II. RELATED WORK
A significant number of studies have been conducted in English, Chinese, or European languages to detect emotion from text data. They fall broadly into three approaches: keyword based [6], learning based [7], and hybrid based [8]. These works used features adopted from semantic and syntactic data to detect emotions. Several works used hashtags as the emotion label for the data and SVM as the classifier. Purver et al. [9] used an SVM classifier on Twitter data and gained 82% accuracy in categorizing the emotion Happy, and 67% in categorizing over the whole dataset for the identical emotion. Balabantaray et al. [10] conducted an emotion classification task on Twitter data in which 8000 tweets were labelled manually with the six basic emotions; this work used multi-class SVM with 73.24% accuracy. A study was carried out by Seyeditabari et al. [2] for classifying comments in social media. An unsupervised method was proposed to automatically identify emotions in text, based on categorical as well as dimensional approaches to emotions [11]. An automatic tweet-based emotion detection system is developed by Hasan et al.
[12].

Emotion detection from Bengali text is a relatively new research issue in the Bangla language processing field. A few attempts have been made to classify sentiment in Bengali text into positive, negative, or neutral categories. Shaika et al. [13] presented a methodology to extract sentiment from Twitter posts into positive or negative categories; to classify the posts they used SVM and Maximum Entropy algorithms. A Naive Bayes approach was developed to classify sentiment into positive, negative, or neutral from both English and Bangla texts [14]; this method used Amazon reviews as data sets and achieved 85.7% and 85.0% accuracy for English and Bengali review texts respectively. Islam et al. [15] described a supervised approach based on Naive Bayes for recognizing sentiment as positive or negative; they used Facebook statuses written in Bengali as source data and achieved 72.0% accuracy. A deep learning based sentiment detector is proposed by Hayder et al. [5]; this method classifies sentiment into three categories (negative, positive, neutral) and gained 78.0% accuracy. A method based on the TF.IDF algorithm is proposed to detect positive, negative, and neutral sentiment in Bengali social media texts [16]. Das et al. [17] conducted a study to identify emotions in Bengali blog texts; they used a rule-based baseline system and SVM to detect emotional expressions in blogs. Another work on emotion analysis, based on conditional random fields, detected the six basic emotions (sad, happy, fear, anger, surprise, and disgust) [18]; in that work classification is done on Bengali blog and news texts at the word and sentence level.

Most of the previous studies focused on sentiment detection in terms of positive, negative, or neutral categories in Bengali blogs, tweets, or social media texts. In the proposed approach, our main task is to recognize the six basic emotions (happy, sad, anger, fear, surprise, and disgust) in Bengali texts.

III. PROPOSED METHODOLOGY
Fig. 1 illustrates the schematic representation of the proposed approach to emotion recognition. The approach consists of four major modules: training, classification, testing, and recognition.

Fig. 1. Proposed approach of emotion recognition in Bengali text

A. Data Preprocessing
Data preprocessing is needed in order to fill in missing values, smooth noisy data, and resolve inconsistencies.
1) Tokenization: Tokenization divides the sentences into individual words. Sentences may be broken into distinct words and punctuation across the white spaces. Fig. 2 shows the result of tokenizing the text input 'ami besh bhalo achi'.

Fig. 2. Tokenized text

2) Append back remaining words in the text: After the tokenization process, the rest of the text is appended back and included in the corpus. Fig. 3 shows a fragment of the corpus after cleaning. Insignificant or irrelevant words are removed from the text. We use the main body of each text to train the emotional text classifier and represent the text document using a list of words and their frequencies. Finally, the data are manually tagged with the six emotion categories.

B. Feature Extraction
Word frequencies are quite often used as features extracted from the text.
1) CountVectorizer: CountVectorizer is used to learn the vocabulary of a set of texts and then transform them into a data-frame that can be used for building models. CountVectorizer takes a few parameters that are important for extracting features.
• Max-Features: The 500 most frequent words are used, which reduces time and storage complexity.
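The bag-of-words feature extraction described here can be sketched without dependencies; the helper below is an illustrative stand-in for what scikit-learn's `CountVectorizer(max_features=...)` computes, and the romanized sentences are toy placeholders rather than corpus samples:

```python
from collections import Counter

def fit_bag_of_words(texts, max_features=500):
    """Learn a vocabulary of the most frequent words, then represent
    each text as a vector of word counts (the CountVectorizer idea)."""
    totals = Counter(w for t in texts for w in t.lower().split())
    # keep only the max_features most frequent words (ties broken alphabetically)
    ranked = sorted(totals.items(), key=lambda kv: (-kv[1], kv[0]))
    vocab = [w for w, _ in ranked[:max_features]]

    def transform(text):
        counts = Counter(text.lower().split())
        return [counts[w] for w in vocab]  # one column per vocabulary word

    return vocab, [transform(t) for t in texts]

# toy romanized stand-ins for Bengali sentences (illustrative only)
corpus = ["ami besh bhalo achi", "ami khub bhoy pacchi"]
vocab, features = fit_bag_of_words(corpus, max_features=3)
print(vocab)     # ['ami', 'achi', 'besh']
print(features)  # [[1, 1, 1], [1, 0, 0]]
```

Capping the vocabulary at the most frequent words is exactly what keeps the resulting document-term matrix small and mostly sparse.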
This also serves to minimize the sparse matrix.
• Stop Words: In the proposed system we take those words into account, as the dataset is smaller in size, which improves system accuracy.
2) Bag of Words: A bag-of-words expresses the distribution of words within a text document. It takes account of two factors: a vocabulary of known words and a count of the occurrences of the known words. In the bag-of-words model, a histogram of the words within the text is investigated in which each word count is considered as a feature. Each text document is converted into a binary vector that may be used as input or output in a learning model. The histogram intensity of a word is calculated as in Eq. 1 [19]:

I = N_key / N_total   (1)

where I denotes the intensity, N_key denotes the number of keywords in an emotion text, and N_total denotes the total number of words in the text.

Fig. 3. A fragment of the developed Bengali corpus

The feature space is a two-dimensional array in which the rows represent each text of the corpus and the columns represent the unique words available in the corpus. Each cell of the array holds the number of times a specific word occurs in a specific text. Fig. 4 shows a small fragment of the training-set feature space.

Fig. 4. A fragment of feature space of training set

3) Predicted Level: In the proposed implementation, the predicted level (i.e., emotion category) of a test sample is labeled 0, 1, 2, 3, 4, or 5, depending on the category. It represents the semantic interpretation of the text in the test sample. If the sample text cannot be labeled with one of the categories, the system fails to process it. Fig. 5 shows a sample output of the predicted level of text samples.

Fig. 5. A sample predicted level of texts

C. Classifier
All extracted features are used to train the classifier model. We used an SVM classifier with linear and non-linear classification using the kernel trick. For linear classification, the cost function can be evaluated by Eq. 2:

(1/n) Σ_{i=1..n} max(0, 1 − y_i(w·x_i − b)) + λ‖w‖²   (2)

where λ identifies the trade-off between the margin size and confirming that the samples remain on the correct side of the margin. The second term of the loss function becomes insignificant for smaller values of λ, which then behaves like a hard-margin SVM. The general polynomial kernel trick is determined according to Eq. 3, with d = 1 giving the linear kernel:

k(x_i, x_j) = (x_i · x_j)^d   (3)

D. Training and Testing Phases
A set of text files is used as training samples to train the classifier. The output of the training phase is a trained machine learning model that is used in the testing phase for emotion detection and recognition. A sample text is processed by the tokenizer, and the extracted features are used to learn the classifier model. The input to the testing phase is a text whose emotion category is unknown or to be determined; the classifier module uses the extracted features to determine the emotion category of the test sample.

IV. EXPERIMENTS
It is a quite challenging task to develop a useful system in the Bangla language processing field due to the scarcity of available resources in Bengali. Due to the unavailability of an emotion corpus in Bengali, we first focus on developing an emotion corpus to serve our purpose. We evaluate the proposed system in terms of several standard metrics: confusion matrix, precision, recall, F1 score, and ROC measures.

A. Corpus Preparation
The corpus contains 1200 emotional subjective Bengali text samples, which are labeled in terms of the six basic emotions. We collected half of our data from the Cambridge English Corpus by translating it from English to Bengali using the Google translator. Some of the emotional texts were collected from online blogs, Facebook pages, and Bengali newspapers. To classify the emotions we used Ekman's basic emotion categories: sad, happy, fear, anger, disgust, and surprise [20]. We adopted the following properties of a text to label it with one of the emotion categories [21]:
• A text is considered happiness if it contains emotional words of feeling well, showing joy or pleasure.
• A text is considered sadness if it contains emotional words of being affected with or expressive of grief, upset, or failure.
• A text is considered anger if it contains emotional words that highly contrast or disagree with the emotion happiness, or that show rage toward someone.
• A text is considered fear if it contains emotional words that express a disagreeable emotion caused by the threat of endangerment, pain, or harm.
• A text is considered disgust if it contains emotional words that offend good taste or the moral sense; extreme detest.
Table I summarizes the statistics
of the developed corpus.

TABLE I. SUMMARY OF THE BENGALI EMOTION CORPUS
Number of documents: 1200
Number of sentences: 3600
Number of words: 12000
Total unique words: 2137

1) Evaluation Measures: In order to evaluate our proposed system, we used several evaluation metrics: confusion matrix, precision, recall, F1 score, and ROC measures.
• Confusion Matrix: a tabular representation of data used for evaluating the performance of the classification model. As our system is a multi-class classification model, we used a confusion matrix consisting of 7 (rows) × 7 (columns). This matrix holds the total numbers of true positives, false positives, true negatives, and false negatives.
• Precision: referred to as the positive predictive value. It is the ratio of the texts exactly classified into a particular class to the total number of texts classified into that emotion class, obtained by Eq. 4:

Precision = TP / (TP + FP)   (4)

• Recall: the ratio of the texts correctly classified into a particular class to the total number of texts that belong to that emotion class (Eq. 5):

Recall = TP / (TP + FN)   (5)

• F1 score: the weighted mean of the recall and precision measures, calculated with Eq. 6:

F1 = (2 × Recall × Precision) / (Recall + Precision)   (6)

• Accuracy: a statistical evaluation of how well a classification test correctly determines or excludes a condition. It is the proportion of true results, both true positives and true negatives, among the total number of test samples, and can be measured using Eq. 7:

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (7)

B. Results
In order to measure the effectiveness, we used two classification algorithms: SVM and Naive Bayes. Table II shows the classification report for SVM with a linear kernel, and Fig. 6 shows the ROC measures for the different emotion classes of text using SVM.

TABLE II. PRECISION, RECALL AND F1 SCORE FOR EMOTION CLASSES USING SVM
Class       Precision  Recall  F1 score  Support
Happiness   0.63       0.71    0.67      17
Sadness     0.50       0.43    0.46      14
Anger       0.65       0.69    0.67      16
Fear        0.90       1.00    0.95      19
Surprise    0.78       0.72    0.75      25
Disgust     0.81       0.76    0.79      17
avg./total  0.73       0.73    0.73      108

Fig. 6. ROC Curve (SVM Linear Kernel)

Table III shows the classification report for the Naive Bayes classifier. Here, support represents the total number of documents tested by the system.

TABLE III. PRECISION, RECALL AND F1 SCORE FOR EMOTION CLASSES USING NAIVE BAYES
Class       Precision  Recall  F1 score  Support
Happiness   0.60       0.53    0.56      17
Sadness     0.50       0.50    0.50      14
Anger       0.43       0.38    0.40      16
Fear        0.59       0.68    0.63      19
Surprise    0.78       0.72    0.75      25
Disgust     0.60       0.71    0.65      17
avg./total  0.73       0.73    0.73      108

Table IV presents the comparison between the SVM and Naive Bayes classification algorithms for all emotion classes in terms of accuracy. The results reveal that SVM performs better than the Naive Bayes classifier in recognizing emotions from Bengali texts: on average, SVM gives 73% accuracy while Naive Bayes provides only 60%.

TABLE IV. COMPARISON OF EACH EMOTION CLASS
Emotion Category  Naive Bayes (%)  SVM (%)
Happiness         0.53             0.75
Sadness           0.50             0.63
Anger             0.40             0.67
Fear              0.69             0.80
Surprise          0.72             0.75
Disgust           0.71             0.76

1) Sample Input-Output: Sample test texts (.txt files) are kept in a folder. These samples are processed by the proposed system, which determines the corresponding emotion category of each test input. Fig. 7 depicts sample input texts and the corresponding predicted level as emotion categories.

Fig. 7. Sample input and corresponding output

V. CONCLUSION
The main purpose of the proposed system is to classify Bengali texts in terms of the six basic emotions: sadness, happiness, fear, anger, disgust, and surprise. For this purpose, we developed an emotion corpus of Bengali text and trained SVM and Naive Bayes classifiers. The evaluation results show that SVM performs better than Naive Bayes in terms of higher accuracy and lower error rate. The performance of the system can be improved with a larger corpus including more emotive words.

REFERENCES
[1] A. Ortony, G. Clore, and A. Collins, "The cognitive structure of emotions." 1988.
[2] A. Seyeditabari, N. Tabari, and W. Zadrozn, "Emotion detection in text: a review." CoRR, vol. abs/1806.00674, 2018.
[3] P. Ekman, "Facial expression and emotion." American Psychologist, vol. 48, no. 4, p. 384, 1993.
[4] D. Das and S. Bandyopadhyay, "Word to sentence level emotion tagging for bengali blogs." ACL-IJCNLP, pp. 149–152, 2009.
[5] M. S. Haydar, M. Al Helal, and S. A. Hossain, "Sentiment extraction from bangla text: A character level supervised recurrent neural network approach," in 2018 International Conference on Computer, Communication, Chemical, Material and Electronic Engineering (IC4ME2). IEEE, 2018, pp. 1–4.
[6] T. Hancock, C. Landrigan, and C. Silver, "Expressing emotion in text-based communication." Proc. of the SIGCHI Conf. on Human Factors in Computing Systems, pp. 929–932, 2007.
[7] C. Yang, Y. Lin, and H. Chen, "Emotion classification using web blog corpora." Proc. of IEEE/WIC/ACM Int. Conf. on Web Intelligence, pp. 275–278, 2007.
[8] S. Aman and S. Szpakowicz, "Identifying expressions of emotion in text." Proc. of Int. Conf. on Text, Speech and Dialogue, vol. LNCS, 4629.
[9] M. Purver and S. Battersby, "Experimenting with distant supervision for emotion classification." In Proc. of the 13th Conf.
of the EuropeanChapter of the Association for Computational Linguistics, pp. 482–491, 2012.[10] R. Balabantaray, M. Mohammad, and N. Sharma, “Multi-class twitteremotion classication: A new approach.” Int. J. of Applied Info. Sys.,vol. 4, no. 1, pp. 48–53, 2012.[11] S. Kim, A. Valitutti, and R. Calvo, “Evaluation of unsupervisedemotion models to textual affect recognition.” In Proc. of the NAACLHLT 2010 Workshop on Computational Approaches to Analysis andGeneration of Emotion in Text, pp. 62–70, 2010.[12] M. Hasan, E. Rundensteiner, and E. Agul, “Automatic emotion detec-tion in text streams by analyzing twitter data.” Int. J of Data Sci andAnal, vol. 7, no. 1, pp. 35–51, 2019.[13] C. Shaika and W. Chowdhury, “Sentiment analysis for bangla mi-croblog posts,” in Proc. Int. Conf. on Informatics, Electronics andVision. IEEE, 2014.[14] K. A. Hasan, M. S. Sabuj, and Z. Afrin, “Opinion mining usingnaive bayes,” in IEEE Int. WIE Conf. on Electrical and ComputerEngineering. IEEE, 2015, pp. 511–514.[15] M. S. Islam, M. A. Islam, M. A. Hossain, and J. J. Dey, “Supervisedapproach of sentimentality extraction from bengali facebook status,”in Computer and Information Technology (ICCIT), 2016 19th Inter-national Conference on. IEEE, 2016, pp. 383–387.[16] M. Nabi, T. Altaf, and S. Ismail, “Detecting sentiment from banglatext using machine learning technique and feature analysis,” Int J ofCom App, vol. 153, no. 11, pp. 28–34, 2016.[17] D. Das and S. Bandyopadhyay, “Emotions on bengali blog texts: roleof holder and topic,” in</s>
<s>Proc. of Int. Conf. on Advances in SocialNetworks Analysis and Mining. IEEE, 2011, pp. 587–592.[18] D. Dipankar and S. Bandyopadhyay, “Analyzing emotion in blog andnews at word and sentence level,” in Proc. of the 4th Indian Int. Conf.on Artificial Intelligence, 2009, pp. 1402–1414.[19] S. Sriram, , and X. Yuan, “An enhanced approach for classifyingemotions using customized decision tree algorithm,” in Proc. IEEESoutheastcon. IEEE, 2012.[20] P. Ekman, “Cross-cultural studies of facial expression,” Darwin andfacial expression: A century of research in review, vol. 169222, p. 1,1973.[21] L. Alam and M. Hoque, “A text-based chat system embodied withan expressive agent,” Advances in Human-Computer Interaction, pp.1–14.���Authorized licensed use limited to: University of Exeter. Downloaded on May 06,2020 at 15:16:59 UTC from IEEE Xplore. Restrictions apply.</s>
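The per-class precision, recall and F1 figures reported in Tables II and III can be reproduced from predicted and true labels. The sketch below is a pure-Python illustration of that computation for one class, not the authors' code; the label strings and function name are made up for the example:

```python
def class_report(y_true, y_pred, label):
    """Precision, recall, F1 and support for a single emotion class,
    as reported per row in Tables II and III."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    support = sum(1 for t in y_true if t == label)
    return precision, recall, f1, support

y_true = ["fear", "fear", "anger", "fear", "anger"]
y_pred = ["fear", "anger", "anger", "fear", "fear"]
# For "fear": tp=2, fp=1, fn=1, so precision = recall = f1 = 2/3, support = 3.
print(class_report(y_true, y_pred, "fear"))
```

The avg./total row of the tables is then a support-weighted average of these per-class figures.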
Data Set For Sentiment Analysis On Bengali News Comments And Its Baseline Evaluation

International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019

Md. Akhter-Uz-Zaman Ashik, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: ashikchowdhury76@gmail.com
Shahriar Shovon, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: shahriarshovon69@gmail.com
Summit Haque, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. Email: summit.haque@gmail.com

Abstract—The biggest challenge of Bengali language processing is creating a strong data set to do research on. The main focus of this paper is to introduce an authentic and credible data set for Bengali sentiment analysis, open to all for educational purposes(1), where the data was extracted from a well-known online news portal's user comments. Comments on various news articles were scraped, and five sentiment labels were used to capture the true sentiments of the sentences. An online crowd-sourcing platform was used for data annotation. To ensure the credibility and validity of the data set, every entry was tagged three times. Three text-classification models were used for baseline evaluation to check the validity of the data set. This data set might be of valuable help for future work and research on Bengali sentiment analysis.

Keywords: Sentiment Analysis, Data Set, Bengali, News Comments, SVM, RNN, LSTM, CNN.

I. INTRODUCTION

People's values and opinions have a strong impact in today's world. In a world where almost everything is online, we get a large amount of feedback from people on online sites, blogs, and forums. Sentiment Analysis is the method of getting a comprehension of this huge amount of opinion from people in a technological way.
SA (sentiment analysis) has revolutionized the way we perceive data and make decisions out of this growing amount of data. People like to express their opinions on online sites, preferably in their own language. Bengali is a very widely spoken language, and many Bengali news portals have gained popularity in the recent decade. SA is fairly new to the Bengali language as opposed to English.

A. Motivation

Language is often vague or highly contextual, which makes it very difficult for a machine to understand without human help. As such, human-annotated data is essential when training a machine learning platform to analyze sentiment. The performance of a machine learning model at detecting sentiments depends largely on the training data. For this reason, three different perspectives have been used to make the data set more precise.

(1) Data set: https://data.mendeley.com/datasets/n53xt69gnf/3

There is plenty of scope for analysis and research on sentiment analysis of the Bengali language. Countless news portals, social sites, blogs, etc. that use this language have been on the rise. These online sites can throw valuable insight on people's sentiment as a whole. One of the major challenges of performing sentiment analysis on the Bengali language is creating an authenticated and credible data set without ambiguous data. If an entry is classified under opposite categories by two different persons while preparing the data set, we call it 'ambiguous'. The data set on which sentiment analysis is to be done has to be free of this sort of data. One other thing that inspired us was that when an entry is labeled according to a person's
<s>opinion, that person’sopinion is not judged whether s/he is right or not. So wedecided to create a data set where the data set will not belabeled according to just one person’s opinion to ensure thecredibility of the data set.II. RELATED WORKSentiment Analysis means the characterization of the sen-timent content of a text unit using Natural Language Pro-cessing, statistics or Machine Learning methods. The worksof Minqing Hu and Bing Liu circa 2004 done on customerreviews was the first major work done on sentiment analysis.They proposed the Feature-Based Opinion Mining Modelwhich is now known as Aspect-Based Opinion Mining[1].Md. Atikur Rahman and Emon Kumar Dey presented a workbased on their data sets for ABSA(aspect based sentimentanalysis)[2]. Their data set had around 5092 data in total. Theyworked with three tags positive, negative, neutral. M. Nabil etal created a data set from Arabic sentiment tweets consistingof 10,000 tweets and did experiments with 4 way sentimentclassification[3]. Akhtar, Md Shad et al created a data set for978-1-7281-5242-4/19 c©2019 IEEE978-1-7281-5241-7/19/$31.00 ©2019 IEEEl C/ICAuthorized licensed use limited to: University of Exeter. Downloaded on June 13,2020 at 14:28:14 UTC from IEEE Xplore. Restrictions apply. aspect based sentiment analysis in Hindi and did it’s baselineevaluation for data validation where the data set contains5,417 review sentences across 12 domains. There are a totalof 2,290 positive, 712 negative, 2,226 neutral and 189 conflictreviews [4]. A. K. Paul and P. C. Shill did sentiment analysisusing mutual information for feature selection and multi-nominal Naive Bayes for classification using English languagedata, they have achieved 85.1% accuracy without using nega-tion and got 85.8% accuracy with negation using Englishtesting data. For Bengali, using Bengali testing data, theygot 84.78% accuracy without using negation and got 83.77%accuracy with negation[5]. 
Sentiment analysis of micro-blog posts was also done by S. Chowdhury and W. Chowdhury. They used support vector machines and maximum entropy to do a comparative analysis of these two machine learning algorithms [6]. M. S. Islam et al. presented a research paper in which six different approaches to evaluating sentiment were discussed. They also discussed the implementation of cosine similarity using TF-IDF to determine sentiments more accurately [7][8].

III. METHODOLOGIES

A. Data Collection

Online newspapers have a huge collection of user comments. The data was collected from the user comments of the widely popular online news portal Prothom-Alo(2). From this huge collection of comments, 10 specific fields were selected. There are other fields where users comment expressing their opinions, but compared to these 10 fields they are negligible. These 10 fields were selected as they are fairly common in comments.

Fig. 1. Categories of the comments

Priority was given to "Bangladesh", "Economy", "Opinion" and "Sports", these being the most frequently occurring themes among the comments.

(2) prothomaalo.com

B. Data Pre-processing

The crawled data had some characteristic problems. The issues we faced were:
- Sentences got divided and separated while crawling
- Some sentences were not properly structured
- The same sentences appeared more than once
- Some sentences had signs and unnecessary characters

To solve these problems, sentences that were not properly structured were dropped. Signs like multiple question marks, multiple dots and exclamation marks were cleaned up. Comments that appeared more than once were dropped.

C. Data Annotation

We wanted to create our own data set with proper attention given to the credibility of the sentiment behind each sentence. When a sentence is labelled, a single tag cannot ensure the actual sentiment of the sentence. A sentence might seem negative to one individual but not to another. The data set therefore has each entry tagged by three different individuals to get three different perspectives.

There are many approaches to data labelling. Among them, crowd-sourcing is a convenient approach for sentiment analysis. Crowdsourcing is a practice in which information or inputs are obtained from a large number of people, typically via the Internet. We used this practice for data annotation; Pipilika's crowdsourcing platform(3) was used for the campaigns. The data set uses the standard 5 sentiment categories, which are strongly positive, positive, neutral, negative and strongly negative, and every sentence is labelled 3 times by 3 different individuals to ensure credibility.

D. Processing

Every entry having been tagged three times, the tags had to be processed and turned into one single tag. We assigned a particular value to every tag and used these values to choose the final tag. Suppose an entry has three tags, one positive and two negative; as the negative tag appears the most, that entry is labelled negative. Some entries were tagged with three different labels, which makes them ambiguous, so we decided to drop such entries. As a result, the data set is free from ambiguous data.

E. Data Set Statistics

The data set we built has 13809 entries in it. It has mainly 5 labels of sentiment:

(3) crowd.pipilika.com

Fig. 2.
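The tag-aggregation rule described in the Processing subsection (majority vote over three annotators, dropping entries with three distinct labels) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the function and label names are made up:

```python
from collections import Counter

def aggregate_tags(tags):
    """Collapse three annotator tags into one final tag.

    Returns the majority tag, or None when all three tags
    differ (the 'ambiguous' case, which is dropped)."""
    assert len(tags) == 3
    tag, freq = Counter(tags).most_common(1)[0]
    if freq == 1:  # three distinct labels -> ambiguous entry
        return None
    return tag

# One positive and two negative votes -> the entry becomes negative.
print(aggregate_tags(["positive", "negative", "negative"]))  # negative
# Three different labels -> the entry is dropped as ambiguous.
print(aggregate_tags(["positive", "neutral", "negative"]))   # None
```

Filtering out the `None` results then yields the ambiguity-free data set described above.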
Labels and there amountTABLE ILABEL TYPE AND COUNTLabel CountSlightly Positive 1436Positive 2279Neutral 2955Negative 3936Slightly Negative 3203An overall map of the percentage of the sentiments isgiven below:Fig. 3. Percentage of each labelsThe data set does not have a negative or positive bias.The conventional model evaluation methods do not accuratelymeasure model performance when faced with imbalanceddata-sets. Standard classifier algorithms like Decision Treeand Logistic Regression have a bias towards classes whichhave number of instances. They tend to only predict themajority class data. The features of the minority class aretreated as noise and are often ignored. Thus, there is a highprobability of wrong classification of the minority class ascompared to the majority class[9].The statistical summary of the data set:TABLE IIDATA SET STATISTICSCategory WordsTotal 248562Longest Sentence 118Average Sentence Length 44Numeric Words 2389Bengali Words 244432Non-Bengali Words 4130The various topics we tried to cover are given below in thetable:TABLE IIITOPICS OF THE DATA SETFields of Data NumberOpinion 248562Sports 118Bangladesh 44Economy 2389Entertainment 244432International 4130Education 4130Technology 4130Lifestyle 4130Various 4130Opinion, Sports, Bangladesh and Economy are the majorityof the topics covered.IV. BASELINE EVALUATIONThere are mainly three ways for classifying the sentimentof a text unit[10]. These are Machine Learning Techniques,Lexicon-Based Techniquesand and Hybrid Techniques.Sentiment Analysis uses the evaluation metrics of Precision,Recall, F-score, and Accuracy for the classification problem.Also, average measures like macro, micro, and weighted F1-scores are useful for multi-class problems. Based on thebalance of classes of</s>
the data set, the appropriate metric should be chosen. Four effective measures have been selected for this study, based on the confusion matrix of the output. These are:

Precision: P = TP / (TP + FP)    (1)
Recall: R = TP / (TP + FN)    (2)
Accuracy: A = (TP + TN) / (TP + TN + FP + FN)    (3)
F1-score: F1 = 2 * (P * R) / (P + R)    (4)

Our focus here is performing a baseline evaluation on the data set we have created from Bengali news comments for sentiment analysis. We have selected three models for the baseline evaluation: a binary SVM classifier, a multi-class SVM classifier, and an LSTM (neural network).

A. SVM classifier

In machine learning, support-vector machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis [11]. We can use an SVM classifier when the data has exactly two classes. An SVM classifies data by finding the best hyperplane that separates all the data points of one class from those of the other class.

The data set was split into test data and train data in the following way:
- Amount of Train set: 10809
- Amount of Test set: 3000
- Amount of Validation set: 2809

The SVM classifier is well suited to binary classification, where the number of categories is only two: negative and positive.
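Equations (1)-(4) translate directly into code. The sketch below (illustrative only, not the authors' implementation) computes all four measures from raw confusion-matrix counts:

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Precision, recall, accuracy and F1 from confusion-matrix counts,
    following Eqs. (1)-(4)."""
    precision = tp / (tp + fp)                             # Eq. (1)
    recall = tp / (tp + fn)                                # Eq. (2)
    accuracy = (tp + tn) / (tp + tn + fp + fn)             # Eq. (3)
    f1 = 2 * (precision * recall) / (precision + recall)   # Eq. (4)
    return precision, recall, accuracy, f1

# Toy counts: 80 true positives, 90 true negatives,
# 20 false positives, 10 false negatives.
p, r, a, f1 = evaluation_metrics(80, 90, 20, 10)
print(round(p, 2), round(r, 2), round(a, 2), round(f1, 2))  # 0.8 0.89 0.85 0.84
```

The same four numbers are what each evaluation table below reports, expressed as percentages.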
Taking the slightly positive and positive classes as the positive label and the rest as the negative label, and dropping the neutral class, we obtained a binary task. Following is the confusion matrix we generated using the SVM classifier:

TABLE IV
CONFUSION MATRIX OF BINARY SVM CLASSIFIER

                   Predicted Positive  Predicted Negative
Actually Positive  2823                892
Actually Negative  1714                5425

The following table shows the overall accuracy:

TABLE V
ACCURACY OF BINARY SVM CLASSIFIER

Model                  Test(%)  Train(%)  Validation(%)
Binary SVM classifier  66.232   93.397    67.488

Taking the Precision, Recall, Accuracy and F1-score of this model into consideration, we generate the table:

TABLE VI
EVALUATION OF THE SVM MODEL

Precision  Recall  Accuracy  F1-score
57.59      72.41   61.34     63.97

B. RNN (LSTM)

Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks are often used for sentiment analysis. They introduce a memory into the model. Having a memory in a network is useful because, when dealing with text data, the meaning of a word depends on the context of the previous text.
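The five-to-two label mapping used for the binary evaluations above and below can be sketched as follows. The label strings and function name are illustrative; the paper does not publish its code:

```python
def to_binary(label):
    """Map the 5-way sentiment labels to a binary task:
    slightly positive / positive -> 'positive',
    neutral -> dropped (None),
    everything else -> 'negative'."""
    if label == "neutral":
        return None  # neutral entries are dropped
    if label in ("slightly positive", "positive"):
        return "positive"
    return "negative"

labels = ["positive", "neutral", "slightly negative",
          "slightly positive", "negative"]
binary = [b for b in (to_binary(l) for l in labels) if b is not None]
print(binary)  # ['positive', 'negative', 'positive', 'negative']
```

This collapse is what turns the five-class counts of Table I into the two-class confusion matrices reported for each model.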
A drawback of the Recurrent Neural network is thatit is only capable of dealing with short-term dependencies.Long Short-Term Memory networks address this problem byintroducing a long-term memory into the network[12]We created the training , testing and validation splits asfollows:• Amount of Train set: 11041• Amount of Test set: 1380• Amount of Validation set: 1381Like the SVM classifier, we took the slightly positive andpositive classes into the positive label and the rest into thenegative label, and dropping the neutral class, we did theevaluation.The confusion matrix came up to be the following:TABLE VIICONFUSION MATRIX OF LSTM CLASSIFIERPredicted Positive Predicted NegativeActually Positive 2529 1151Actually Negative 1495 5867The following table shows the accuracy of this model:TABLE VIIIACCURACY OF THE LSTM MODELModel Test(%) Train(%) Validation(%)LSTM 74.741 96.967 78.833By using the measurements of Precison, Recall, Accuracyand F1-Score, we can generate the table:TABLE IXEVALUATION OF THE LSTM MODELPrecision Recall Accuracy F1-score72.84 87.22 74.74 79.291C. CNNConvolutional Neural Networks(CNN) is a</s>
class of Deep Neural Networks. CNNs are regularized multi-layer perceptrons. Multi-layer perceptrons usually have fully connected networks, which can cause overfitting. CNNs address this issue by taking advantage of hierarchical patterns in data, assembling more complex patterns from smaller and simpler ones.

For text classification with a CNN, we usually embed the words of a sentence into a 2D array, stacking them together. Convolution filters are applied to a selective number of words to produce a new feature representation. Some pooling is then performed on the new features, and the pooled features from different filters are concatenated with each other to form the hidden representation. These representations are then followed by one (or multiple) fully connected layer(s) to make the final prediction [13].

We split the data in the following way for the evaluation:
- Amount of Train Data: 10809
- Amount of Test Data: 3000
- Amount of Validation Data: 2809

The confusion matrix for the CNN:

TABLE X
CONFUSION MATRIX OF CNN CLASSIFIER

                   Predicted Positive  Predicted Negative
Actually Positive  2144                1373
Actually Negative  1862                5455

The accuracy table for the CNN:

TABLE XI
ACCURACY OF THE CNN MODEL

Model  Test(%)  Train(%)  Validation(%)
CNN    60.49    89.03     63.68

The evaluation of the CNN model:

TABLE XII
EVALUATION OF THE CNN MODEL

Precision  Recall  Accuracy  F1-score
58.92      68.52   60.49     66.24

V. DISCUSSION

SVM employs kernel tricks and maximal-margin concepts to perform well on non-linear and high-dimensional tasks. SVMs are great for relatively small data sets with few outliers. Neural networks typically perform better on very large data sets, and they profit a lot if the data points are structured in a way that can be exploited by the architecture. That is the case with our data set, which is why a neural network (LSTM) is a better choice.
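The convolution-and-pooling pipeline described for text CNNs can be illustrated with a dependency-free toy example. The embedding and filter values below are made up for the illustration; a real model would learn them:

```python
# Toy 1D convolution over word embeddings followed by max-pooling,
# mirroring the CNN-for-text pipeline sketched in the text.
embeddings = {            # made-up 3-dimensional word vectors
    "the":  [0.1, 0.0, 0.2],
    "film": [0.7, 0.3, 0.1],
    "was":  [0.0, 0.1, 0.0],
    "good": [0.9, 0.8, 0.4],
}
sentence = ["the", "film", "was", "good"]
matrix = [embeddings[w] for w in sentence]   # the sentence as a 2D array

# One convolution filter spanning a window of 2 words (2 x 3 weights).
filt = [[0.5, -0.2, 0.1], [0.3, 0.4, -0.1]]

def convolve(mat, filt):
    """Slide the filter over every window of len(filt) consecutive rows."""
    k = len(filt)
    feats = []
    for i in range(len(mat) - k + 1):
        s = sum(w * x
                for row_f, row_m in zip(filt, mat[i:i + k])
                for w, x in zip(row_f, row_m))
        feats.append(s)
    return feats

features = convolve(matrix, filt)  # one feature per 2-word window
pooled = max(features)             # max-pooling keeps the strongest response
print(len(features), round(pooled, 3))
```

In a real model, many such filters run in parallel and their pooled outputs are concatenated and fed to fully connected layers, as described above.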
Neural networks may require more data, but they almost always come up with a pretty robust model. Deep learning really shines on complex problems such as image classification, natural language processing, and speech recognition. The data set has no clear distinct pattern that can be exploited by the CNN model.

VI. CONCLUSION

In our endeavour, we have created our very own data set for analysis, consisting of 13809 entries from the Prothom-Alo news portal. Baseline model selection and evaluation have been done on this data set using three different models: SVM, CNN and LSTM. We have done some comparative performance testing with our data set related to Bengali sentiment analysis and presented the results in tabular form.

ACKNOWLEDGMENT

We are thankful to our Department of Computer Science and Engineering and the NLP group of Shahjalal University of Science and Technology, Sylhet 3114, Bangladesh, for the help they have always provided us with and for motivating this research.

REFERENCES

[1] B. Liu, "Opinion mining, sentiment analysis, opinion extraction," available at: https://www.cs.uic.edu/~liub/FBS/sentiment-analysis.html. (Accessed on 16 May 2019).
[2] M. A. Rahman and E. Kumar Dey, "Datasets for aspect-based sentiment analysis in bangla and its baseline evaluation," Data, vol. 3, no. 2, 2018. [Online]. Available: https://www.mdpi.com/2306-5729/3/2/15
[3] M. Nabil, M. Aly, and A. Atiya, "Astd: Arabic sentiment tweets dataset," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 2515–2519.
[4] M. S. Akhtar, A. Ekbal, and P. Bhattacharyya, "Aspect based sentiment analysis in Hindi: Resource creation and evaluation," in Proceedings of the Tenth International Conference on
Language Resources and Evaluation (LREC 2016). Portorož, Slovenia: European Language Resources Association (ELRA), May 2016, pp. 2703–2709. [Online]. Available: https://www.aclweb.org/anthology/L16-1429
[5] A. K. Paul and P. C. Shill, "Sentiment mining from bangla data using mutual information," in 2016 2nd International Conference on Electrical, Computer Telecommunication Engineering (ICECTE), Dec 2016, pp. 1–4.
[6] S. Chowdhury and W. Chowdhury, "Performing sentiment analysis in bangla microblog posts," in 2014 International Conference on Informatics, Electronics Vision (ICIEV), May 2014, pp. 1–6.
[7] M. Al-Amin, M. S. Islam, and S. Das Uzzal, "A comprehensive study on sentiment of bengali text," in 2017 International Conference on Electrical, Computer and Communication Engineering (ECCE), Feb 2017, pp. 267–272.
[8] M. S. Islam, M. A. Amin, and S. Das Uzzal, "Word embedding with hellinger pca to detect the sentiment of bengali text," in 2016 19th International Conference on Computer and Information Technology (ICCIT), Dec 2016, pp. 363–366.
[9] R. Longadge and S. Dongre, "Class imbalance problem in data mining review," arXiv preprint arXiv:1305.1707, 2013.
[10] S. Symeonidis, "5 things you need to know about sentiment analysis and classification," available at: https://www.kdnuggets.com/2018/03/5-things-sentiment-analysis-classification.html. (Accessed on 16 June 2019).
[11] B. Scholkopf and A. J. Smola, Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT press, 2001.
[12] F. Miedema, "Sentiment analysis with long short-term memory networks," Vrije Universiteit Amsterdam, vol. 1, 2018.
[13] S. Minaee, E. Azimi, and A. Abdolrashidi, "Deep-sentiment: Sentiment analysis using ensemble of cnn and bi-lstm models," arXiv preprint arXiv:1904.04206, 2019.
Sentiment Analysis on Bangladesh Cricket with Support Vector Machine

Conference Paper, September 2018. DOI: 10.1109/ICBSLP.2018.8554585. Available at: https://www.researchgate.net/publication/329395265

International Conference on Bangla Speech and Language Processing (ICBSLP), 21-22 September, 2018

Shamsul Arafin Mahtab, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. arafinmahtab@gmail.com
Nazmul Islam, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. nazmul.islam.6978@gmail.com
Md Mahfuzur Rahaman, Department of CSE, Shahjalal University of Science and Technology, Sylhet, Bangladesh. mahfuzsustbd@gmail.com

Abstract—While social platforms and news portals play a big role on the Internet today, they have also become a valuable medium for public opinion. We want to perform sentiment analysis on these public opinions.
Much work has been done on sentiment analysis in different sectors for the English language, but work in Bengali is limited to Bengali corpora and micro-blogging. So we have targeted a special sector, Bangladesh cricket, where people express their opinions in their native Bengali language on social media at every moment. We have prepared a data set of three sentiment classes about Bangladesh cricket from real people's sentiments. We have processed our data set by removing unnecessary words from the Bengali texts. Then we used a TF-IDF vectorizer for vectorization and a Support Vector Machine classifier to classify our data.

Index Terms—Sentiment Analysis, TF-IDF, SVM

I. INTRODUCTION

Sentiment analysis is the field of study that analyzes people's opinions, sentiments and emotions towards different targets such as products, organizations, services, events and social issues. It is critical because it helps us see what people like and dislike about us, our brands, names or other aspects. User feedback from social media, websites, call-center agents, or any other source contains a treasure trove of useful information. But it is not enough to know what users are talking about; we must also know how they feel. Sentiment analysis is one way to uncover those feelings. According to the Oxford dictionary, it is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether a person's attitude towards a particular topic, product, etc. is positive, negative, or neutral. Although full understanding of natural language is well beyond the capabilities of machines, statistical analysis can provide a meaningful categorization of sentiments.

In our country, people love cricket like a religion, so they have diverse sentiments about this game. In the world of cricket, people are regularly expressing various kinds of emotions in various aspects.
So this has become an interesting area for us to analyze, using real people's emotions about cricket. Overall, most of our data are well-structured Bengali sentiments, as people express their feelings for cricket in their native language. But working with Bengali is challenging because of the shortage of resources for Bengali language processing.

In this research, we have extracted sentiments or opinions of people from a news portal and a social platform and then identified the overall
polarity of texts as positive, negative or neutral. We then shifted our work to detecting praise, criticism and sadness. First we created a Google Form and labeled the data with these three classes. Then we preprocessed the data, extracted features using TF-IDF, and applied machine learning models. For classification, our first preference is the Support Vector Machine, as it gives good performance on small datasets. Besides this, we used a Decision Tree and Multinomial Naive Bayes. We have also gathered a lot of knowledge on deep learning, which we are holding for our future work.

II. RELATED WORKS

Our work is inspired by previous work in these related fields, some of which served to build our knowledge. Our idea is mostly inspired by [10], where the authors tried to identify different classes such as violence and emotions. One of the notable works we followed is [18], which finds emotions in text messages. They used TF-IDF to increase classification accuracy and a Support Vector Machine for classification. Their approach is quite similar to the beginning of our research, but they also used the Vector Space Model (VSM) as the document representation model. In [9], the authors utilize naive Bayes and fuzzy classifiers to classify tweets according to the positive, negative or neutral behavior of a particular person. In [12], the authors consider the problem of classifying documents not by topic but by overall sentiment, determining whether a review is positive or negative. In [8], sentiment analysis is used to detect insults and flames; this inspired us to include the criticism class for Bangladesh cricket in our data set.

Also, a most recent work [1] for the Bengali language uses Twitter data to find the polarity of a Bengali text, whether positive or negative.
They used the Bangla POS-Tagger package for POS tagging, and Support Vector Machine and Maximum Entropy to do a comparative analysis of the performance of these two algorithms, experimenting with various sets of features. We are interested in using the POS tagger they used for our future work. In article [2], the authors propose multiple computational techniques, WordNet-based, dictionary-based, corpus-based and generative approaches, for generating SentiWordNet(s) for three Indian languages: Bengali, Hindi and Telugu. In report [16], the authors aim to automatically extract the sentiment or polarity by using an HMM to perform POS tagging and an SVM classifier.

[17] also works on finding sentiment, positive and negative reviews, over 2000 movie data points. They used TF-IDF, and their classification is performed using the Support Vector Machine provided by the Weka tool. They considered unigrams, bigrams, POS tags of words and function words as the feature set. In paper [14], the authors prepared gold-standard Bengali-English code-mixed data with language and polarity tags for sentiment analysis purposes, and discussed the systems they built to collect and filter raw Twitter data.

Another notable paper we followed is [7], where the authors perform opinion mining and mood extraction, classifying the polarity of text as positive, negative and neutral. [5] surveys sentiment analysis, analyzing text classification for opinion mining. [19] also surveys sentiment analysis of text data. [3] analyzed and tracked the emotions of English and Bengali texts. From
<s>this literature, we have learned about Bengali language processing. In addition, [11] focused on Twitter microblogging data and classified it as positive, negative or neutral. In [6], the focus is restaurant reviews, and the polarity of the text is classified. [13] applied several common machine learning techniques to Twitter microblogging data, including various forms of Naive Bayes and a Maximum Entropy model. We have also done research on finding the emotions of people about Bangladesh cricket from Facebook and the online newspaper Prothom-Alo. From all of this literature, we arrived at our approach of using the TF-IDF vectorizer and SVM to classify our data.

III. METHODOLOGY

Our initial preprocessing of the Bengali text data is performed using the Python Natural Language Toolkit (NLTK)1, and for vectorization and classification with machine learning models we have imported the tools of scikit-learn2. Our whole system, outlining the whole process, is described below.

1 A leading platform for building Python programs to work with human language data.
2 Simple and efficient open-source tools for data mining and data analysis.

A. Dataset

From the beginning of our research, we tried to find a proper way to collect and prepare our main dataset, as data collection is sometimes time-expensive. For this reason, we wanted to optimize the time spent on data collection while keeping proper labeling. Web scraping can give us lots of data, but some of that data might not be well structured, and may be noisy or non-standard for our thesis. So we have split our work across two datasets of the same standard. Initially we collected a dataset referred to as the Bengali ABSA dataset [15]. This dataset contains comments related to Bangladesh cricket, which is quite similar to the dataset we want to build, so we primarily chose it to train and build our machine learning system for sentiment analysis. The ABSA dataset contains 2979 rows with 5 columns; all of the data are crawled and tagged from BBC Bangla.
Though this dataset was made for aspect-based sentiment analysis, we ignored the aspect columns, as we do not need aspect-based training. We have chosen only the comment column and the target column, which contains the Positive, Negative and Neutral classes, and trained on this dataset.

Besides training the ABSA dataset with our system, we have also built our main dataset. As mentioned previously, we collected sentiments from public posts and comments in a Facebook group on Bangladesh Cricket and from the sports section of the Prothom-Alo newspaper. Then we created a Google form to label all these opinions with the classes Praise, Criticism and Sadness. Because of this manual process, our overall data are well structured and less noisy. Our dataset contains 1601 samples in 3 classes: praise with 513, criticism with 604 and sadness with 484 labeled data.

B. Preprocessing

As our research deals with natural language data, we need to produce a cleaned version of the data, so data formatting and data cleaning play a significant role in our system. In the beginning we separated all the words of the Bengali texts into tokens, using Python NLTK for tokenizing; our preprocessing ends after splitting all the words of the natural language sentences into tokens. After tokenizing, we collected a large array containing the Bengali stopwords [4]. This array includes the Bengali</s>
<s>stopwords, equivalents of words like "so", "in", "they", "but" and "or", as we do not need these words while training our model. In addition, we have manually listed an array of punctuation marks and Bengali numbers. All of these unnecessary words, numbers and punctuation marks are filtered out in the initialization of the TF-IDF vectorizer, which is explained in the Feature Extraction section.

C. Feature Extraction

Our text data requires special, customized preparation before it can be used for predictive modeling: text must be parsed and tokenized, and all of the words must be converted into integer or floating-point values to serve as machine input, which is referred to as vectorization. Here the scikit-learn library provides easy-to-use tools to perform feature extraction on text data. A simple and effective model for thinking about text documents in machine learning is the Bag-of-Words (BoW) model; it is so simple that it throws away all of the word-order information and focuses only on the occurrence of words in a document. We previously used CountVectorizer, which counts word occurrences, to extract features from our data, but TF-IDF is one of the most powerful ways to vectorize data, because TF-IDF weights measure relevance, not just frequency. So we have replaced the word counts with TF-IDF scores across the whole dataset. We used the parameters below to accurately filter our data with the TF-IDF vectorizer:

ngram_range=(1,2),
analyzer='word',
lowercase=False,
stop_words=bangla_stopwords,
tokenizer=nltk_tokenizing,
sublinear_tf=True,
use_idf=True

Here we have used n-grams with a lower and an upper boundary to shape the vocabulary. The stop_words parameter of TF-IDF is assigned our Bengali stopwords. Tokenizing is also done in this step, initialized with Python NLTK, which is quite good for tokenizing.
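The filtering and vectorization described above can be sketched with scikit-learn's TfidfVectorizer. This is a minimal sketch only: the two-word stopword list and the whitespace tokenizer below are illustrative stand-ins for the paper's actual bangla_stopwords array and NLTK-based nltk_tokenizing function.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-ins for the paper's Bengali stopword list [4] and NLTK tokenizer.
bangla_stopwords = ["এবং", "কিন্তু"]          # "and", "but" (placeholder list)

def nltk_tokenizing(text):
    # Placeholder: the paper tokenizes with Python NLTK instead.
    return text.split()

vectorizer = TfidfVectorizer(
    ngram_range=(1, 2),           # unigrams and bigrams
    analyzer="word",
    lowercase=False,              # Bengali script has no case distinction
    stop_words=bangla_stopwords,  # filter stopwords at vectorization time
    tokenizer=nltk_tokenizing,
    sublinear_tf=True,            # tf -> 1 + log(tf)
    use_idf=True,                 # inverse-document-frequency re-weighting
)

docs = ["ভালো খেলেছে", "খারাপ খেলেছে"]   # toy two-comment corpus
X = vectorizer.fit_transform(docs)       # sparse TF-IDF feature matrix
```

The resulting matrix has one row per comment and one column per unigram or bigram that survives the stopword filter.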
We have also applied sublinear tf scaling, 1 + log(tf), and enabled inverse-document-frequency re-weighting. All of these parameters have properly processed our dataset and enriched our current system; with this implementation, we obtained improved accuracy on our small dataset.

D. Classifier Selection

In machine learning and statistics, classification is a supervised learning approach in which a computer program learns from the input data given to it and then uses this learning to classify new observations. The dataset may simply be bi-class (such as identifying whether a sentiment is positive or negative, or whether a mail is spam or not) or it may be multi-class; in our analysis, we use three classes in each dataset. There are many machine-learning classifiers for text classification. After proper study, we decided to use a Support Vector Machine with a linear kernel, which gives better performance on small datasets. The kernel defines a similarity or distance measure between new data and the support vectors. Other kernels, such as a polynomial kernel or a radial kernel, transform the input space into higher dimensions; this is called the kernel trick. SVMs have been found to provide better accuracy when classifying text. As SVMs are binary classifiers, they are well suited to classifying the polarity of a sentence, and our task of identifying different types of emotions is similar to binary classification, so the Support Vector Machine is our chosen method. Besides it, we have also used a default Decision Tree and, for a</s>
<s>probabilistic model-based approach, we have used the Multinomial Naive Bayes classifier, to compare and analyze our results.

We have chosen a random 10% of the data as our test set and trained our machine learning model on the remaining 90% of the dataset. The trained model predicts, for the test set, whether a public opinion is related to praise, criticism or sadness.

IV. EXPERIMENT & RESULT ANALYSIS

We have carried out experiments and result analysis for both of our datasets and for the machine learning models we chose to experiment with. The precision, recall, F1-score and support are given for each dataset.

Precision is the number of sentences in the test set that are correctly labeled by the classifier, out of the total sentences in the test set that the classifier assigns to a particular class:

Precision = True Positive / (True Positive + False Positive)

Recall is the number of sentences in the test set that are correctly labeled by the classifier, out of the total sentences in the test set that actually carry the label of a particular class:

Recall = True Positive / (True Positive + False Negative)

The F-measure is the weighted harmonic mean of precision and recall for a particular class:

F1-Score = 2 * ((Precision * Recall) / (Precision + Recall))

A. Trained on Our Dataset

Though our dataset is small, our system implementation and its results are worth showing here. Accuracy: 64.596%.

TABLE I: CLASSIFICATION REPORT
Label        Precision  Recall  F1-Score  Support
Praise       0.80       0.73    0.76      51
Criticism    0.56       0.81    0.67      59
Sadness      0.63       0.37    0.47      51
avg / total  0.66       0.65    0.63      161

From our observations, we have discovered that criticism and sadness are similar in some cases, so we must improve our results with more data; around 2000 labeled samples per class would give a satisfactory result for our current system.

B. Trained on the ABSA Dataset

We have built a model, but as our own dataset is small, we have used the same-topic ABSA dataset to be sure that our model performs well.
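The 90/10 split, the linear-kernel SVM, and the per-class precision/recall/F1 formulas above can be wired together roughly as follows. This is a sketch: the random numeric features and integer labels are toy stand-ins for the real TF-IDF matrix and the praise/criticism/sadness annotations.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stand-ins for the TF-IDF feature matrix and the three class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 3, size=200)     # 0=praise, 1=criticism, 2=sadness

# Random 10% test set, 90% training, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.10, random_state=42)

clf = SVC(kernel="linear").fit(X_train, y_train)
y_pred = clf.predict(X_test)

def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall and F1 for one class, per the formulas above."""
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Calling per_class_metrics once per class, plus the support (number of true instances per class), reproduces the rows of the classification report tables.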
Accuracy: 73.490%.

TABLE II: CLASSIFICATION REPORT
Label        Precision  Recall  F1-Score  Support
Praise       0.73       0.25    0.37      64
Criticism    0.74       0.97    0.84      208
Sadness      0.33       0.04    0.07      26
avg / total  0.70       0.73    0.67      298

From the above report we see that our current system performs well on the ABSA dataset, even though we have not done much preprocessing for it. Besides the Support Vector Machine, we have used Decision Tree and Multinomial Naive Bayes to recheck our accuracy levels. We applied these models to both datasets, and the results are close.

Fig. 1. Comparison between two datasets

This analysis shows that the ABSA dataset results are a bit higher for Support Vector Machine, Decision Tree and Multinomial Naive Bayes. This is because our dataset is small, whereas the ABSA dataset is about twice as large; proper training of our system with more data can therefore give better output.

V. FUTURE WORK AND CONCLUSION

Our research needs some improvements in the future. First of all, we have a limited amount of data: using 10% of our dataset as the test set, we found around 64% accuracy, so our first target is to increase our dataset. We will also increase the number of target classes, which is now only three, and try to improve our approach for better accuracy. And of</s>
<s>course we will apply deep learning theory to our existing system. The most important point is that we are working with the Bengali language, yet we have not done stemming, spell-checking or Bengali parts-of-speech tagging in our current research; use of proper natural language processing will highly improve our system. So we will definitely go on to work with accurate natural language processing.

ACKNOWLEDGMENT

Our deep thanks to our supervisor Md Mahfuzur Rahaman for his guidance, flexibility and continuous support throughout the work. We are also thankful to our family and friends for their support and encouragement. Finally, we thank the SUST NLP Research Group and the Department of CSE, SUST, for their support throughout the thesis.

REFERENCES

[1] S. Chowdhury and W. Chowdhury. Performing sentiment analysis in Bangla microblog posts. In 2014 International Conference on Informatics, Electronics & Vision (ICIEV), pages 1–6. IEEE, 2014.
[2] A. Das and S. Bandyopadhyay. SentiWordNet for Indian languages. In Proceedings of the Eighth Workshop on Asian Language Resources, pages 56–63, 2010.
[3] D. Das. Analysis and tracking of emotions in English and Bengali texts: a computational approach. In Proceedings of the 20th International Conference Companion on World Wide Web, pages 343–348. ACM, 2011.
[4] G. Diaz. (2018, Oct.) Bengali stopwords. [Online]. Available: https://github.com/stopwords-iso/stopwords-bn.
[5] D. M. E. D. M. Hussein. A survey on sentiment analysis challenges. Journal of King Saud University-Engineering Sciences, 2016.
[6] H. Kang, S. J. Yoo, and D. Han. Senti-lexicon and improved naïve Bayes algorithms for sentiment analysis of restaurant reviews. Expert Systems with Applications, 39(5):6000–6010, 2012.
[7] A. Kaur and V. Gupta. A survey on sentiment analysis and opinion mining techniques. Journal of Emerging Technologies in Web Intelligence, 5(4):367–371, 2013.
[8] A. Mahmud, K. Z. Ahmed, and M. Khan. Detecting flames and insults in text. 2008.
[9] R. Mehra, M.</s>
<s>K. Bedi, G. Singh, R. Arora, T. Bala, and S. Saxena. Sentimental analysis using fuzzy and naive Bayes. In Computing Methodologies and Communication (ICCMC), 2017 International Conference on, pages 945–950. IEEE, 2017.
[10] S. M. Mohammad. Sentiment analysis: Detecting valence, emotions, and other affectual states from text. In Emotion Measurement, pages 201–237. Elsevier, 2016.
[11] A. Pak and P. Paroubek. Twitter as a corpus for sentiment analysis and opinion mining. In LREC, volume 10, pages 1320–1326, 2010.
[12] B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up?: sentiment classification using machine learning techniques. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, pages 79–86. Association for Computational Linguistics, 2002.
[13] R. Parikh and M. Movassate. Sentiment analysis of user-generated Twitter updates using various classification techniques. CS224N Final Report, 118, 2009.
[14] B. G. Patra, D. Das, and A. Das. Sentiment analysis of code-mixed Indian languages: An overview of SAIL code-mixed shared task @ ICON-2017. arXiv preprint arXiv:1803.06745, 2018.
[15] A. Rahman. (2018, Oct.) Bengali ABSA dataset. [Online]. Available: https://github.com/AtikRahman/Bangla_Datasets_ABSA.
[16] A. Roy and A. A. Singh. (2018, Oct.) Sentiment Analysis ANLP Research Report. [Online]. Available: https://github.com/abhie19/Sentiment-Analysis-Bangla-Language/.
[17] P. H. Shahana and B. Omman. Evaluation of features on sentimental analysis. Procedia Computer Science, 46:1585–1592, 2015.
[18] J. D. Silva and P. S. Haddela. A term weighting method for identifying emotions from text content. In Industrial and Information Systems (ICIIS), 2013 8th IEEE International Conference on, pages 381–386. IEEE, 2013.
[19] H. Tang, S. Tan, and X. Cheng. A survey on sentiment</s>
<s>detection of reviews. Expert Systems with Applications, 36(7):10760–10773, 2009.</s>
<s>Performance Measurement of Multiple Supervised Learning Algorithms for Bengali News Headline Sentiment Classification. Conference Paper, November 2019. DOI: 10.1109/SMART46866.2019.9117477. Authors include Md. Majedul Islam (Daffodil International University), Abu Kaisar Mohammad Masum (Daffodil International University) and Md Golam Rabbani (Pabna University of Science and Technology).
Proceedings of SMART–2019, IEEE Conference ID: 46866, 8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November, 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India. Copyright © IEEE–2019, ISBN: 978-1-7281-3245-7.

Performance Measurement of Multiple Supervised Learning Algorithms for Bengali News Headline Sentiment Classification

Md. Majedul Islam1, Abu Kaisar Mohammad Masum2, Md Golam Rabbani3, Raihana Zannat4 and Mushfiqur Rahman5
1,2,3,5 Dept. of CSE, Daffodil International University, Dhaka, Bangladesh
4 Dept. of Software Engineering, Daffodil International University, Dhaka, Bangladesh
E-mail: 1majedul15-6784@diu.edu.bd, 2mohammad15-6759@diu.edu.bd, 3golam15-204@diu.edu.bd, 4zannat.swe@diu.edu.bd, 5mushfiqur.cse@diu.edu.bd

Abstract— Reading the newspaper is a common habit in today's life. Before reading a news article, everyone focuses on the headline; by understanding its meaning, anybody can easily identify the type of news, i.e. whether the article provides positive or negative news. Analyzing the sentiment of the news headline is a good solution to this kind of problem. Sentiment analysis is a chief part of Natural Language Processing: it mines any kind of opinion and sets the sentiment of any text. We propose a method for Bengali news headline sentiment measurement with different kinds of supervised learning algorithms and measure their performance.
Firstly, we set the sentiment of each news headline; then we used classification methods to predict whether a headline is positive or negative. After all, Bengali is one of the most used languages in the world; a lot of research has been done in other languages but very little in Bengali, so increasing Bengali-language research resources requires developing different kinds of tools and technology.

Keywords: Sentiment Analysis, Natural Language Processing, Opinion Mining, Bengali News Headline Sentiment

I. Introduction

Human language problems are solved by NLP in AI research fields: it grasps the concepts of human language problems and tries to provide solutions for the machine. Machine learning algorithms are the most commonly used for understanding NLP problems and their solutions. Machine learning means an automatic learning system, and it has a few approaches, such as supervised, unsupervised and semi-supervised learning. In supervised learning, labelled data are provided with input and output; in unsupervised learning, only unlabeled input data are provided and the output is generated from the input data. Semi-supervised learning is made from</s>
<s>a combination of both, where labeled and unlabeled data are mixed.

People express an opinion after reading any kind of text, and the opinion may be negative, positive or neutral; sentiment analysis helps to appreciate the opinion in the provided text documents. A news headline is a short text containing the gist of the news; everybody reads the headline before the article, and at that point they understand the sentiment of the news. In this paper, we introduce a method for Bengali news headline sentiment analysis using multiple machine learning algorithms. We label headline sentiment with 0 and 1, where 0 denotes negative news and 1 positive news. After preparing the data, we train multiple supervised classification algorithms, which provide predicted output with good accuracy.

II. Related Work

Sentiment analysis is among the most common research topics in natural language processing, and much research has previously been done successfully in this field. In this section we discuss some related work that helped us complete our research.

A. News and Blogs Sentiment

News sentiment analysis is different from normal text sentiment analysis such as review analysis, Balahur et al. [6]: the terminology of a news article is set by the writer, and while in review analysis the relevant words are easy to identify, in news they are difficult to find in long and complex descriptions, so short- and long-word essences are built to find positive and negative news sentiment. Godbole et al. [2] attach a score to express positive or negative news and blog sentiment, offering a solution for large text content; analyzing this sentiment helps indicate the future acclaim and advertising of news and blogs. Fu et al. [5] proposed a methodology for travel news sentiment analysis: they analyze the key factors for China tourism and provide better predictive accuracy for future tourism research.

B.
ML Algorithms for Sentiment Analysis

ML approaches provide satisfactory results and accuracy for review sentiment; Naive Bayes and SVM give the best performance among the algorithms, Jagdale et al. [7]. Twitter is the most important social media source for sentiment analysis; here opinions are divided into three categories: happy, unhappy and neutral. Kurnaz et al. [8] proposed a system with a sparse autoencoder algorithm which gives 0.98 accuracy for Twitter data sentiment analysis. For sentence-level news text, SVM and Naive Bayes give 96.46% and 94.16% accuracy, Shirsat et al. [1]. Working with Bengali text in any NLP research area is challenging: data processing and preparation differ from other languages. In this paper we try to apply different ML approaches to provide an accurate news headline sentiment prediction, where different algorithms provide different accuracy with a</s>
<s>correct prediction result.

III. Methodology

Machine learning approaches help to solve NLP problems. Important natural language processing problems, such as text analysis, sentiment analysis, speech-to-text conversion, text summarization, image-to-text conversion and language-to-language translation, are all solved using machine learning techniques. Sentiment analysis is also an important part of natural language work.
Mining the opinion from a text document is the main concept in solving the sentiment analysis problem. In this research work we follow NLP and ML approaches to solve Bengali news sentiment classification. A workflow for this research work is given below.

Fig. 1: Working flow for Bengali news headline sentiment (dataset → set sentiment → vocabulary count → train/test split → define algorithms: Naive Bayes, Random Forest, Decision Tree, SVM, KNN → predict output)

A. Data Collection and Dataset Properties

Newspaper headline sentiment prediction is the primary focus of our research work, so a labeled dataset is required for sentiment classification. We gathered data from the Bengali newspaper "Prothom Alo" using a web scraping system written in Python. After collecting the data, we set the sentiment of each headline; headline sentiment is of two types, where 0 means a negative headline and 1 means a positive headline. The dataset properties are:

a. Total data: 1619
b. 11 types of news
c. 1109 positive headlines and 510 negative headlines
d. Minimum & maximum word length: 1 and 14

Fig. 2: Frequency of positive & negative sentiment

In figure 2, the x-axis shows the frequency of the negative and positive news headlines and the y-axis the positive and negative news sentiment.

Fig. 3: Word length of positive and negative headlines

In figure 3, the x-axis shows the number of headlines and the y-axis the total length. The maximum</s>
B. ML Algorithms for Sentiment Analysis
ML approaches provide satisfactory results and accuracy for review sentiment; Naive Bayes and SVM give the best performance among the algorithms compared by Jagdale et al. [7]. Twitter is an important source for social media sentiment analysis, with opinion divided into three categories: happy, unhappy, and neutral. Kurnaz et al. [8] proposed a system with a sparse autoencoder algorithm that gives 0.98 accuracy for Twitter sentiment analysis. For sentence-level news text, SVM and Naive Bayes give 96.46% and 94.16% accuracy, Shirsat et al. [1]. Working with Bengali text is challenging in any NLP research area, since data processing and preparation differ from other languages. In this paper we apply different ML approaches to provide accurate news headline sentiment prediction, where different algorithms provide different accuracies with correct prediction results.

III. METHODOLOGY
Machine learning approaches help to solve NLP problems. Important problems in natural language processing such as text analysis, sentiment analysis, speech-to-text conversion, text summarization, image-to-text conversion, and language-to-language translation are all solved using machine learning techniques. Sentiment analysis is an important part of natural language processing, and mining the opinion from a text document is the main concept in solving a sentiment analysis problem. In this research work we follow NLP and ML approaches to solve Bengali news sentiment classification. The workflow for this research work is given below.

Fig. 1: Working Flow for Bengali News Headline Sentiment
[Workflow: Dataset → Set Sentiment → Vocabulary Count → Train/Test → Define Algorithms (Naive Bayes, Random Forest, Decision Tree, SVM, KNN) → Predict Output]

A. Data Collection and Dataset Properties
Newspaper headline sentiment prediction is the primary focus of our research work, so a labeled dataset is required for sentiment classification. We collected data from the Bengali newspaper "Prothom Alo" using a web-scraping system written in Python. After collecting the data we set the sentiment of each headline; headline sentiment is divided into two types, where 0 means a negative headline and 1 means a positive headline. The dataset properties are given below.
a. Total data: 1619
b. 11 types of news
c. 1109 positive headlines and 510 negative headlines
d. Minimum and maximum word length: 1 and 14

Fig. 2: Frequency of Positive & Negative Sentiment
In figure 2, the x-axis shows the frequency of the negative and positive news headlines and the y-axis shows the positive and negative news sentiment.

Fig. 3: Word Length of Positive and Negative Headline
In figure 3, the x-axis shows the number of headlines and the y-axis shows total length. The maximum length of a negative news headline is 12, with 2 such headlines, and the minimum length is 2, with 3 such headlines. For positive news the maximum text length is 14, with 12 such headlines, and the minimum text length is 1, with 5 such headlines.
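The dataset properties listed above (and the frequencies summarized in figures 2 and 3) can be recomputed directly from the labeled data. A minimal sketch follows; the first headline is the sample used later in the paper, while the other two are hypothetical stand-ins for the scraped Prothom Alo corpus, with label 1 = positive and 0 = negative as defined above.

```python
# Hypothetical labeled headlines standing in for the scraped dataset.
dataset = [
    ("রাজবাড়ীতে মোটরসাইকেল দুর্ঘটনায় কলেজ ছাত্রের মৃত্যু", 0),  # negative
    ("বাংলাদেশ ক্রিকেট দলের দারুণ জয়", 1),                      # positive
    ("নতুন হাসপাতাল উদ্বোধন", 1),                               # positive
]

# Class frequencies (figure 2) and headline word lengths (figure 3).
positives = sum(1 for _, label in dataset if label == 1)
negatives = len(dataset) - positives
lengths = [len(text.split()) for text, _ in dataset]

print(f"total={len(dataset)} positive={positives} negative={negatives}")
print(f"min word length={min(lengths)} max word length={max(lengths)}")
```

On the full 1619-headline dataset, the same computation yields the properties stated above (1109 positive, 510 negative, word lengths 1 to 14).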
Performance Measurement of Multiple Supervised Learning Algorithms for Bengali News Headline Sentiment Classification. Copyright © IEEE–2019, ISBN: 978-1-7281-3245-7.
B. Data Preprocessing
Processing Bengali text data differs from processing data in other languages: the machine cannot recognize Bengali characters or symbols natively. To remove an unwanted character, space, letter, or digit, or Bengali punctuation, the Bengali Unicode range of the characters must be defined; Bengali characters occupy the Unicode range 0980-09FF. Preprocessing also removes extra spaces from each line and removes the stop words: we collect all Bengali stop words, save them into a file, and then remove them from the dataset.
1) Add Contractions: Using a short form of a word is known as a contraction, and there are a few contractions in the Bengali language; for example, "ডা." is the short form of "ডাক্তার". Before preprocessing, all of these contractions were added to the dataset text.
2) Stop Word Removal: Removing stop words is very important in preprocessing. Stop words are the most common words in a text or document, so in natural language processing they are removed from the text for any language modelling. There are many stop words in the Bengali language, such as আছে, আমরা, এখন.
3) Unwanted Character Removal: A machine cannot understand a rare character or word, so removing unwanted characters in the preprocessing step is very important. In Bengali text, whitespace, punctuation, and some digits are included among the unwanted characters.

C. Vocabulary Count
For the vocabulary count we use CountVectorizer, which counts the split words appearing in the dataset and then uses the weights as input for the vocabulary count. After counting, we fit and transform the input with the vocabulary.

D. Train Test Data
After fitting the input parameters, the dataset needs to be trained for machine learning. A supervised learning approach is required for the classification technique because the dataset provides labels and input-output pairs. A test dataset is then defined to obtain an unbiased assessment: almost 85% of the data is given for model training, and 15% is defined as the test dataset with random state 101.

E. Machine Learning Algorithms
Supervised learning algorithms are used to solve classification problems. Classification problems follow true and false logic: if the predicted input is positive it is true, otherwise it is false, and all predicted outputs depend on the input label. Suppose x is an input variable and y is an output variable, so the output variable y depends on the input variable x. The classification function f will be

y = f(x) ... (1)
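The three preprocessing steps above can be sketched as follows. The contraction table and stop-word list here hold only the examples named in the text (the real pipeline loads a full Bengali stop-word file), and the character filter keeps only the Bengali Unicode block 0980-09FF plus whitespace.

```python
import re

# Hypothetical minimal tables; the paper's pipeline loads full lists from files.
CONTRACTIONS = {"ডা.": "ডাক্তার"}        # short form -> full form
STOP_WORDS = {"আছে", "আমরা", "এখন"}      # common Bengali stop words

def preprocess(headline: str) -> str:
    # 1) Add (expand) contractions before any character filtering,
    #    otherwise the trailing dot of "ডা." would be stripped first.
    for short, full in CONTRACTIONS.items():
        headline = headline.replace(short, full)
    # 2) Keep only characters in the Bengali Unicode block U+0980-U+09FF
    #    (plus whitespace); this drops punctuation, digits, and rare symbols.
    headline = re.sub(r"[^\u0980-\u09FF\s]", " ", headline)
    # 3) Remove stop words and collapse the extra whitespace.
    tokens = [t for t in headline.split() if t not in STOP_WORDS]
    return " ".join(tokens)

print(preprocess("ডা. আমরা এখন ঢাকায় আছে!"))
```

The order matters: contraction expansion must run before the Unicode filter, since the filter would otherwise remove the "." that marks the short form.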
Headline sentiment is a classification problem: the input news headline text determines the output, and the output contains the sentiment of the news. A classification algorithm supports true prediction of the output result. For the experiment we used five classification algorithms with suitable parameters, briefly discussed below.

1) Naive Bayes Classifier: This algorithm is used to calculate the probability for the classification problem. In our research we use multinomial NB, a discrete classifier suited to multinomial distributions. The probability of the input feature is

p = p(x | c) ... (2)

where x is the independent variable and c is the class.
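The multinomial naive Bayes idea behind Eq. (2), estimating p(x | c) from word counts, can be sketched from scratch with add-one (Laplace) smoothing. The tiny English token lists below are hypothetical placeholders for the vectorized Bengali headline counts.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Count documents per class and word occurrences per class."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        class_docs[label] += 1
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_docs, word_counts, vocab

def predict_nb(model, tokens):
    """Pick the class maximizing log prior + sum of log p(x_i | c)."""
    class_docs, word_counts, vocab = model
    total_docs = sum(class_docs.values())
    best, best_lp = None, -math.inf
    for c in class_docs:
        lp = math.log(class_docs[c] / total_docs)       # log prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for t in tokens:
            # Eq. (2) with add-one smoothing: p(x_i | c)
            lp += math.log((word_counts[c][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical training data: 1 = positive headline, 0 = negative headline.
docs = [(["good", "win"], 1), (["great", "win"], 1), (["accident", "death"], 0)]
model = train_nb(docs)
print(predict_nb(model, ["win", "good"]))   # 1 (positive)
print(predict_nb(model, ["accident"]))      # 0 (negative)
```

In practice the paper's pipeline would feed the CountVectorizer output into such a classifier rather than raw token lists.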
2) Random Forest Classifier: The random forest classifier is built on decision tree logic. For each classification prediction, every tree works as a separate decision tree, and the class predicted by the maximum number of trees becomes the predicted output of the classifier. Averaging over the single decision trees gives the random forest feature importance:

RFfi_i = ( Σ_{j ∈ all trees} normfi_ij ) / T ... (3)

Here, RFfi_i is the importance of feature i computed from all trees, normfi_ij is the normalized importance of feature i in tree j, and T is the number of trees.

3) Decision Tree Classifier: The decision tree is a capable and widely used classification algorithm. Output is generated on a yes/no basis: every value depends on the input label, from which the prediction is generated.

Fig. 4: Decision Tree for News Sentiment
[The tree splits on "News?" with branches to Positive and Negative]
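Eq. (3) reads as a simple average of the per-tree normalized feature importances over all T trees. A sketch with hypothetical per-tree values for three features:

```python
# Hypothetical normalized feature importances normfi_ij for two trees;
# each tree's importances sum to 1.0 after normalization.
per_tree_normfi = [
    {"f1": 0.6, "f2": 0.3, "f3": 0.1},   # tree 1
    {"f1": 0.5, "f2": 0.4, "f3": 0.1},   # tree 2
]
T = len(per_tree_normfi)   # number of trees in the forest

# Eq. (3): RFfi_i = (sum over trees j of normfi_ij) / T
rf_fi = {f: sum(tree[f] for tree in per_tree_normfi) / T
         for f in per_tree_normfi[0]}
print(rf_fi)
```

This matches how scikit-learn's RandomForestClassifier exposes `feature_importances_`, as a normalized average over its trees, though the snippet above is only an illustration of the formula, not that API.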
4) Nearest Neighbors Classifier: KNN is a non-parametric approach to classification. The output value is calculated from the k nearest values, where k is a parameter for finding the related output: the algorithm searches the dataset for the values closest to the provided parameter. In our experiment the value k = 3 provides a good result. Each instance is selected by distance measurement: the instances nearest to the query are put among the k nearest neighbours, and the minimum distance among them determines the final value.

5) Support Vector Machine Classifier: The support vector machine is among the most useful methods for sentiment classification because it provides the best accuracy for this type of problem. Each support vector machine classifier uses a hyperplane, and each hyperplane divides the dataset into two parts. The hyperplane is computed based on a kernel, where the kernel represents an algebraic calculation; we use the SVC kernel for our classification problem, where SVC contains a vector classifier.

8th International Conference on System Modeling & Advancement in Research Trends, 22nd–23rd November, 2019, College of Computing Sciences & Information Technology, Teerthanker Mahaveer University, Moradabad, India.

F. Model Discussion
Machine learning algorithms provide good results for sentiment analysis problems. Previous research shows that the support vector machine and Naive Bayes algorithms give more accurate results than other supervised learning algorithms for classifying sentiment analysis problems. In this research we try to find the best algorithms for Bengali news headline sentiment classification from among several supervised learning algorithms, and finally select the algorithms for classifying the Bengali news type based on their predictions. The necessary steps of the model for choosing the classification algorithm are given below.
Step 1: Read the news headline dataset.
Step 2: Set the news sentiment: negative news = 0 and positive news = 1.
Step 3: Pre-process the headline text.
Step 4: Count the vocabulary for use as model input.
Step 5: Fit and transform the vocabulary.
Step 6: Divide the train and test sets.
Step 7: Define the machine learning algorithm and train the model.
Step 8: Check the algorithm's accuracy and prediction result. If the prediction of the algorithm equals the actual result, select that algorithm for headline classification.
All of these steps are followed for news headline classification with each of the algorithms used.

IV. EXPERIMENT AND OUTPUT
In this experiment, after dividing the dataset into train and test sets we applied multiple machine learning algorithms: Naive Bayes, SVM, random forest, decision tree, and k-nearest neighbours. In previous sentiment analysis experiments, Naive Bayes and SVM have contributed the best accuracy. Similarly, in this experiment the SVM classification algorithm gives 75% accuracy and Naive Bayes gives 73%, the best among the five; random forest gives 69%, KNN gives 68%, and the decision tree gives 60% accuracy for positive and negative news classification. Table 1 reports the performance and accuracy of the algorithms.

Table 1: Performance for Bengali Headline Sentiment Analysis
Approach       Sentiment  Precision  Recall  F1-score  Accuracy
Naive Bayes    0          0.55       0.24    0.34      73%
               1          0.75       0.92    0.83
SVM            0          0.68       0.21    0.33      75%
               1          0.75       0.96    0.84
Random Forest  0          0.44       0.36    0.39      69%
               1          0.76       0.82    0.79
Decision Tree  0          0.33       0.40    0.36      60%
               1          0.73       0.67    0.70
KNN            0          0.45       0.39    0.41      68%
               1          0.76       0.79    0.78

In figure 5, the bar chart displays the accuracy comparison of the applied algorithms.
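Step 8 of the model discussion, the selection rule, can be sketched directly as code. The prediction values below are hypothetical placeholders for the five classifiers' outputs on a checked negative headline.

```python
def select_algorithms(predictions: dict, actual: int) -> list:
    """Keep only the algorithms whose predicted sentiment equals the actual one."""
    return [name for name, pred in predictions.items() if pred == actual]

# Hypothetical per-algorithm predictions for one headline (0 = negative).
predictions = {"SVM": 0, "Naive Bayes": 0, "Decision Tree": 0,
               "Random Forest": 1, "KNN": 1}
actual = 0   # the checked headline is negative news

print(select_algorithms(predictions, actual))
```

Repeating this check over several sample headlines narrows the selection to the algorithms that consistently match the actual sentiment.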
Similarly, in this experiment we obtained 75% accuracy from SVM and 73% from the Naive Bayes classification algorithm, which are the best among the five algorithms. Random Forest achieves 69%, KNN 68%, and Decision Tree 60% accuracy for positive and negative news classification. Table 1 presents the performance and accuracy of the algorithms.

Table 1: Performance for Bengali Headline Sentiment Analysis

Approach       Sentiment  Precision  Recall  F1-score  Accuracy
Naive Bayes    0          0.55       0.24    0.34      73%
               1          0.75       0.92    0.83
SVM            0          0.68       0.21    0.33      75%
               1          0.75       0.96    0.84
Random Forest  0          0.44       0.36    0.39      69%
               1          0.76       0.82    0.79
Decision Tree  0          0.33       0.40    0.36      60%
               1          0.73       0.67    0.70
KNN            0          0.45       0.39    0.41      68%
               1          0.76       0.79    0.78

The bar chart in Fig. 5 displays the accuracy comparison for the applied algorithms.

Fig. 5: Accuracy Chart for ML Algorithms

Now we have used another table to check the classification result with a Bangla news headline, where each of the applied algorithms predicts an output.

Headline = "রাজবাড়ীতে মোটরসাইকেল দুর্ঘটনায় কলেজ ছাত্রের মৃত্যু", in English: "College student dies in a motorcycle accident in Rajbari"
Actual label = 0
News Type = Negative News

Performance Measurement of Multiple Supervised Learning Algorithms for Bengali News Headline Sentiment Classification. Copyright © IEEE–2019, ISBN: 978-1-7281-3245-7. Authorized licensed use limited to: University of Exeter. Downloaded on June 17,2020 at 06:48:51 UTC from IEEE Xplore. Restrictions apply.

Table 2: News Classification for the Given Headline

Prediction      Sentiment  News Type      News Classification
SVM Prediction  0          Negative News  Correct
NB Prediction   0          Negative News  Correct
DT Prediction   0          Negative News  Correct
RF Prediction   1          Positive News  Incorrect
KNN Prediction  1          Positive News  Incorrect

Table 2 shows the classification result for the given headline. The provided headline is negative news and its actual label is 0. So, if the actual output is equal to the predicted output, then that algorithm is chosen for news headline classification. Here SVM, Naive Bayes, and Decision Tree give the correct prediction, while the other two give the wrong prediction. On other samples, only SVM and Naive Bayes consistently provide accurate predictions. Finally, the SVM and Naive Bayes classifiers are chosen for Bengali news headline sentiment classification.

VI. Conclusion and Future Work

This work proposed a methodology for building a Bengali news headline sentiment analyzer using multiple ML algorithms. Although no machine gives a perfectly precise outcome, the applied algorithms give reasonably accurate results.
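The selection logic behind Table 2, keeping an algorithm only when its prediction matches the actual label, can be sketched as follows. The prediction values are taken from Table 2; the dictionary structure and function names are ours, for illustration only.

```python
# Predictions of each trained model for the sample headline (from Table 2);
# 0 = negative news, 1 = positive news.
predictions = {
    "SVM": 0,
    "Naive Bayes": 0,
    "Decision Tree": 0,
    "Random Forest": 1,
    "KNN": 1,
}
actual = 0  # the sample headline is labeled as negative news

def classify(pred):
    return "Negative News" if pred == 0 else "Positive News"

# Step 8: keep only the algorithms whose prediction equals the actual label.
selected = [name for name, pred in predictions.items() if pred == actual]

for name, pred in predictions.items():
    verdict = "Correct" if pred == actual else "Incorrect"
    print(f"{name}: {classify(pred)} -> {verdict}")
```

Repeating this check over many samples is what narrows the final choice down to SVM and Naive Bayes.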
Using the proposed technique, we have effectively identified positive and negative news in Bengali newspapers. Improving the precision of the applied methods is part of our future work. There are a couple of imperfections in the proposed system. One is the small dataset: an accurate result needs a large dataset, but labeling sentiment manually is a lengthy process. The vocabulary of the dataset is also small, so achieving good accuracy requires increasing the vocabulary. Machine learning algorithms show good performance for Bengali data, though not as good as they achieve on the same problem in English. So in future there is room to improve accuracy for Bangla text and obtain excellent outcomes from ML algorithms.

Acknowledgement

We acknowledge and thank our DIU NLP and Machine Learning Research Lab for their full assistance. Special thanks to our Computer Science and Engineering department for helping to complete the work and providing the facilities for research.
Analyzing Performance of Different Machine Learning Approaches With Doc2vec for Classifying Sentiment of Bengali Natural Language. Conference Paper, February 2019. DOI: 10.1109/ECACE.2019.8679272.
2019 International Conference on Electrical, Computer and Communication Engineering (ECCE), 7-9 February, 2019

Analyzing Performance of Different Machine Learning Approaches With Doc2vec for Classifying Sentiment of Bengali Natural Language

Md. Tazimul Hoque∗, Ashraful Islam†‡, Eshtiak Ahmed†‡, Khondaker A.
Mamun∗ and Mohammad Nurul Huda∗
∗Department of Computer Science and Engineering, United International University, Dhaka-1212, Bangladesh. Email: tazim.ndc@gmail.com, {mamun, mnh}@cse.uiu.ac.bd
†Department of Computer Science and Engineering, Daffodil International University, Dhaka-1207, Bangladesh. Email: {ashraful, eshtiak}.cse@diu.edu.bd
‡Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka-1000, Bangladesh

Abstract—Vector or numeric representation of text documents has been a revolution in natural language processing, as it represents similar parts of text in such a way that they are very close to each other, making it very easy to classify or find similarities among them. These vectors also represent the way we use the words or parts of documents, which helps finding similarity even between pairs of words. While word2vec is such a technique that represents each word as a vector, doc2vec takes it to another level by representing a whole sentence or document as a vector. Being able to represent an entire document as a vector allows comparing a substantial number of words or sentences at a time, which can save computational power as well as bandwidth. This relatively newer doc2vec technology has not yet been implemented for Bengali sentiment analysis and its feasibility is also unknown. In this study, we have trained a doc2vec model using a corpus constructed with 7,000 Bengali sentences. The model consists of two types of data differentiated by their polarity, i.e. positive and negative.
Later, we have employed several machine learning algorithms for comparing the accuracy of classification, among which Bi-Directional Long Short-Term Memory (BLSTM) has obtained the highest accuracy of 77.85% along with precision, recall and F-1 score of 78.06%, 77.39% and 77.72% respectively.

Keywords—Sentiment Analysis (SA), Machine Learning (ML), Natural Language Processing (NLP), Bi-directional Long Short-Term Memory (BLSTM), Sequential Model (SM), doc2vec.

I. INTRODUCTION

In recent years, various social media platforms, e.g. Facebook, Twitter, Youtube, Google+, play a vital role in day to day life due to their ease-of-access, portability, and affordability [1], [2]. According to Statista, around 2.46 billion people are using social media worldwide as of 2017 and the number is expected to reach 3.02 billion in 2021, with Facebook remaining the most popular platform as of April, 2018 [3]. Another survey conducted in September 2018 by StatCounter says that 89.04% of social media users
interact using Facebook in Bangladesh [4]. A very large amount of data has accumulated on the Internet as a result of this enormous activity on social media platforms, which makes a significant contribution to sentiment analysis (SA) [1]. To be specific, analyzing the reactions by users accumulated from social media contents and posts leads to categorizing them into several labels, i.e. sad, angry, love. SA is also known as opinion mining, mood extraction or emotion analysis, and is an application of Natural Language Processing (NLP). The year 2001 or thereabouts can be marked as the beginning of research awareness in the field of SA and opinion mining [5]. Research papers mentioning sentiment analysis focus specifically on the application of text classification according to polarity: positive (good), negative (bad) or neutral. But nowadays SA is understood more broadly to mean the computational treatment of opinion or review in text, processing natural language, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information [6]. In addition, recent advances in machine learning research, particularly deep learning based methods, e.g. recurrent neural networks (RNN), provide the opportunity to infer decisions by training a model in SA. Moreover, the latest key technique, titled doc2vec and developed by Google Inc. [7], in which a document is represented by a vector, can be an emerging tactic for classifying emotions or opinions from social media reactions and posts. Although a lot of research has been conducted in the area of SA, it is mainly based on social media posts written in English; these areas are yet to be explored for social media posts in the Bengali language. This paper aims to analyze public sentiments composed in Bengali on any topic and then categorize them into two particular classes, i.e.
positive sentiment and negative sentiment. For this we are considering the Facebook post reactions Love, Wow, Sad, Angry, and Haha, which represent different states of emotion. Here, Love and Wow reactions are considered as positive sentiment whilst Sad and Angry reactions are considered as negative sentiment. Facebook added this new reactions feature allowing users to react, along with Like, to a post. We have employed nine machine learning methods, i.e. Logistic Regression (LR), Support Vector Machine (SVM), Stochastic Gradient Descent (SGD), Decision Tree (DT), K-Neighbors, Linear Discriminant Analysis (LDA), Gaussian Naive Bayes (GaussianNB), Sequential Model (SM), and Bidirectional Long Short-Term Memory (BLSTM), to build classification models so that they can classify the sentiments from users' reactions to different posts published in Bengali. Among these methods, BLSTM has performed best as it provides an accuracy of 77.85% along with precision, recall and F-1 score of 78.06%, 77.39% and 77.72% respectively. Therefore, these results are very promising for further investigations in this ground.

978-1-5386-9111-3/19/$31.00 ©2019 IEEE

The layout of this paper is organized as follows: a brief description of related works is narrated in Section II, followed by the essential indications for building the corpus and model experimented in this study in Section III. Thereafter, the conducted methods are plotted sequentially in Section IV. Then Section V explains the results established in this study. Finally, Section VI concludes the findings of this study with possible future implications.

II. RELATED WORKS

Many research works have been accomplished by measuring the overall polarity of a document or sentence to determine whether it is a positive
or negative review [8]–[10]. Turney et al. used a simple unsupervised learning algorithm which finds the average semantic orientation of phrases from the review containing adjectives or adverbs [8]. In system [9], Dave et al. trained a classifier using a self-tagged corpus of reviews from web sites. Pang et al. applied a machine-learning method for text categorization to just the subjective portions of the document [10]. Phrase-level sentiment analysis is discussed in [11], which identifies the contextual polarity for a large subset of sentiment expressions. In their work they explained that the contextual polarity of a phrase may be different from the polarities of the words that appear in that phrase. Some popular approaches to sentiment analysis (subjective lexicon, N-Gram modeling, machine learning) are discussed in [12]. Using a deep learning model, Ouyang et al. proposed a word2vec + Convolutional Neural Network (CNN) framework [13] for classifying the sentiment of movie reviews into five labels: negative, somewhat negative, neutral, somewhat positive and positive. They achieved 45.4% accuracy. Though a lot of work has been explored in this ground, very few experiments have been investigated for Bengali in recent years. Chowdhury et al. worked on sentiment analysis in Bengali microblog posts using SVM and Maximum Entropy (MaxEnt) classification techniques [14]. They collected 1,300 tweets using the Twitter API and split the dataset into 1,000 tweets for training and 300 tweets for testing. They identified the overall polarity of a sentence as either negative or positive. Their achieved accuracy is 93% for SVM using unigrams with emoticons as features. Das et al. developed a phrase-level polarity classification system using SVM [15]. They constructed a Bengali news corpus containing 3,435 distinct word-forms. It can categorize an opinion phrase as either positive or negative. Their evaluated result has a precision of 70.04% and a recall of 63.02%. Amin et al.
used the word2vec model for vector representation of Bengali words [16]. They achieved 75.5% accuracy using the word2vec word co-occurrence score together with the words' sentiment polarity scores. They collected 16,000 Bengali single-line and multi-line comments from blog posts and tagged them as positive or negative comments through a survey. Hassan et al. used the deep recurrent model Long Short-Term Memory (LSTM), with two loss functions, binary cross-entropy and categorical cross-entropy, for Bengali sentiment analysis [17]. They used 10,000 Bengali and Romanized Bengali text samples which were divided into three categories: Positive, Negative and Ambiguous. They achieved 70% accuracy with the Bengali dataset, and using the combined Bengali and Romanized Bengali dataset the accuracy score was 55%.

III. CORPUS AND MODEL PREPARATION

1) Corpus Collection: The aim of this study is to analyze public sentiment on any topic from Bengali text and then categorize it based on sentiment polarity. We have considered positive and negative sentiment polarity in this work. To construct a corpus for Bengali sentiment analysis, different sources have been considered, among which Facebook post data seems most promising for SA as it represents the most natural form of language. In Facebook posts, people react with different reactions, i.e. "Like", "Love", "Wow", "Sad", "Angry", and "Haha", each of which represents a different state of emotion. Our aim is to classify these emotions into either a positive or negative class. Users react with "Like" more than other reactions as it is easy to perform, although it does not represent a specific sentiment polarity that can be classified as positive or negative [18]. Correlation
among "Like" and the other reactions can be expressed as:

• Strongly positive correlation with "Love" and "Wow".
• Weakly positive correlation with "Sad" and "Angry".

Although the "Like" reaction is the most common, we have considered it low-effort data from users and ignored it while classifying the sentiment polarity of a post. Furthermore, we have observed that people use the "Wow" reaction in funny or sarcastic posts more than any other reaction. Therefore, we cannot polarize post sentiment into either the positive or negative category based on the "Wow" reaction. We have used the Facebook Graph API [19], driven by our own Python script, to collect data regularly from some popular Bengali Facebook public pages. We have collected 6,244 Facebook posts, which were pre-processed afterwards to validate them as proper text data. The pre-processing stage includes the filtering of any kind of hyperlink, special characters, duplicate posts and non-Bengali phonetics. This filtering shrank the volume of our data set to 4,317 posts. We stored this data in a database containing the following columns: page type, page post text, and reaction counts of "like", "love", "wow", "sad", "angry" and "haha". Fig. 1 demonstrates the total flow of data collection and corpus preparation from Facebook posts. To prepare positive and negative post documents from this database, we had to categorize multiple reactions into either positive or negative. Here, "Love" and "Wow" reactions represent positive polarity while "Sad" and "Angry" reactions represent negative polarity. We considered the total count of "Love" and "Wow" reactions as the sum of positive reactions, and the total count of "Sad" and "Angry" reactions as the sum of negative reactions. Comparing the total numbers of positive and negative reactions of a post, we categorized it accordingly.

Fig. 1: Flow of data collection and corpus preparation from Facebook posts.

This process is summarized in Algorithm 1.
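A Python sketch of this categorization rule, including the skip conditions formalized in Algorithms 1 and 2 below, could look as follows. The function name is ours, and the "Wow dominates" condition is one reasonable reading of the paper's ambiguously stated rule, not a verbatim reimplementation.

```python
def categorize_post(love, wow, sad, angry):
    """Return 'positive', 'negative', or None when the post is skipped.

    Skip rules (our reading of the paper): ignore a post when the "Wow"
    count dominates the other reaction counts (often sarcasm), or when
    the positive and negative totals tie (including both being zero).
    """
    positive = love + wow   # total positive reactions (Love + Wow)
    negative = sad + angry  # total negative reactions (Sad + Angry)
    if wow > love and wow > negative:  # "Wow" dominates: not categorizable
        return None
    if positive == negative:           # tie, or no reactions at all
        return None
    return "positive" if positive > negative else "negative"
```

A post with reactions Love=10, Wow=2, Sad=1, Angry=0 would land in `positive.txt`, while a post where "Wow" outnumbers everything else would be skipped.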
Here, we have not categorized a post's sentiment if:

• the total number of "Wow" reactions is greater than the positive or negative reactions.
• the total numbers of positive and negative reactions are the same, or both are zero.

The procedure to determine whether a post is categorized or not is shown in Algorithm 2. After this procedure we have 3,193 posts, where the majority reaction counts are:

• Love: 1,162
• Wow: 529
• Sad: 1,007
• Angry: 495

Algorithm 1 Preparing Positive/Negative Documents from Facebook Page Posts
procedure PARSEPOSTS(posts)
  for each post do
    positive ← count(Love) + count(Wow)
    negative ← count(Sad) + count(Angry)
    if CATEGORIZABLE() = false then
      skip to the next post
    else if positive > negative then
      save post text into positive.txt
    else
      save post text into negative.txt
    end if
  end for
end procedure

So, finally we have 1,691 posts with positive polarity and 1,502 posts with negative polarity. To keep the polarity data balanced, we finally stored 1,500 posts per sentiment polarity (positive and negative). Socian Ltd. [20] provided a public corpus containing 4,000 Bengali sentences labeled according to their sentiment polarity, either positive or negative, with an equal distribution of labeled data. They collected this corpus from different social media platforms, newspaper sites and blogs. We included this data set with our prepared corpus. This way we finally managed to prepare a corpus of 7,000 posts (3,500 for each sentiment polarity).

2) Model Preparation: Creating a numerical representation of any document is the goal of doc2vec [21]. Here each document or sentence
is represented as a vector, where similar documents have closer values. We used our corpus, prepared using the process described in the previous subsection, to train the doc2vec model. All the labeled sentences from our corpus were fed into the doc2vec model to build its vocabulary. Here each labeled sentence contains a list of Bengali words and a label, either Positive or Negative, based on its sentiment polarity. An example of the labeled sentences used to train doc2vec is:

[['word1', 'word2', 'word3', ..., 'last word'], ['label']]

To configure the doc2vec model we have considered window size 50 and vector size 120. Here window size represents the maximum distance considered between the current and predicted word in a sentence [21]. The output feature vectors' dimensionality is represented by vector size. We trained the doc2vec model for 40 epochs and stored it for further use. Our final corpus and doc2vec model are uploaded to Kaggle [22].

IV. SENTIMENT CLASSIFICATION

For sentiment classification using our prepared doc2vec model, we used the machine learning approaches LR, SVM, SGD, DT, K-Neighbors Classifier, LDA, GaussianNB, SM, and BLSTM. Our trained doc2vec model contains vector representations of 7,000 labeled sentences with a 120-dimensional feature vector each. We split the data randomly into 80% for training and 20% for testing. All the classifiers are trained and tested using this data accordingly. We observed the performance of the different classifiers on the doc2vec model. Among all classifiers, the deep learning approach of BLSTM provided the best performance; hence we describe BLSTM and the configuration we used to train this model.

Algorithm 2 Checking whether a post is categorizable
procedure CATEGORIZABLE()
  if count(Wow) > positive or negative then
    return false
  else if positive = negative then
    return false
  else if positive = negative = 0 then
    return false
  else
    return true
  end if
end procedure

BLSTM, an extension of LSTM, can improve the performance of a model on sequence classification problems. We configured the LSTM with 32 hidden nodes, and the dropout value was set to 0.5 to reduce overfitting. The LSTM hidden layers were wrapped in a Bidirectional layer, which created two copies of the hidden layers and fits them on both the original and the reversed sequence of input. We used 10% of the training data for validation while training the model. The BLSTM model was trained for 200 epochs with a batch size of 200. This configuration provides an accuracy of 77.85%, which is the best among all the classifiers employed.

V. RESULT ANALYSIS

In this study, we have applied the most common machine learning performance metrics, i.e. accuracy, precision, recall and F-1 score, for the evaluation of the employed classifiers, and the obtained results are represented in TABLE I. These results are sorted in decreasing order of the classification accuracy achieved by the employed classifiers. According to the data available in TABLE I, BLSTM has the best performance as it has gained an accuracy of 77.85%, whilst GaussianNB has attained the lowest accuracy, 59.21%, for the corpus we have built in this study. In addition, TABLE II illustrates the confusion matrix for BLSTM. Fig. 2 and Fig. 3 convey the history of accuracy and loss respectively on the training and validation datasets over the model training epochs. The plot in Fig. 2 shows that the model could
achieve more accuracy, as the rates of accuracy on both the training and validation datasets are increasing significantly over the training epochs. On the other hand, the plot of loss on both the training and the validation datasets represented in Fig. 3 indicates a sign for stopping model training at an earlier epoch if the two curves are found to diverge consistently.

Fig. 2: A plot of accuracy on the training (train) and validation (valid) datasets over training epochs for BLSTM

Fig. 3: A plot of loss on the training (train) and validation (valid) datasets over training epochs for BLSTM

VI. CONCLUSION AND FUTURE IMPLICATIONS

While the doc2vec technology has been employed in numerous research-based studies for sentiment analysis in the English language, its use in Bengali sentiment analysis has not been seen so far. However, our classification accuracy for different classifiers shows that this technology has enough potential if implemented properly. The primary contribution of this study is that it presents the very first doc2vec model for Bengali sentiment analysis, while the achieved classification accuracy is significantly better than that of other implementations using word2vec.

TABLE I. OBTAINED RESULTS FOR EMPLOYED CLASSIFIERS

Classifier   Accuracy (%)  Precision (%)  Recall (%)  F-1 Score (%)
BLSTM        77.85         78.06          77.39       77.72
SM           74.35         72.94          77.42       75.12
SGD          74            73.46          75.14       74.29
LR           73.14         71.89          76          73.88
LDA          72.42         72.49          72.28       72.38
SVM          67.21         67.04          67.71       67.37
K-Neighbors  63.28         60.59          76          67.42
DT           61.07         61.05          61.14       61.09
GaussianNB   59.21         57.32          72.14       63.88

TABLE II. CONFUSION MATRIX FOR BLSTM

                 Predicted Negative  Predicted Positive
Actual Negative  549                 152
Actual Positive  158                 541

Although the model is currently constructed with the polarity of sentiment, it is a definite possibility that a multi-class model can be prepared given enough time and larger volumes of data. In future, we can work with multiple-class classification instead of the polarities being just positive and negative.
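The positive-class metrics reported for BLSTM in TABLE I can be reproduced, up to small rounding differences, from the confusion matrix in TABLE II. This is a verification sketch; the variable names are ours.

```python
# Confusion-matrix counts for BLSTM taken from TABLE II.
tn, fp = 549, 152  # actual negative: predicted negative / predicted positive
fn, tp = 158, 541  # actual positive: predicted negative / predicted positive

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)  # positive-class precision
recall = tp / (tp + fn)     # positive-class recall
f1 = 2 * precision * recall / (precision + recall)

# The computed values agree with the reported 77.85% / 78.06% / 77.39% /
# 77.72% to within 0.01 percentage points (likely rounding in the paper).
print(f"accuracy={accuracy:.4f} precision={precision:.4f} "
      f"recall={recall:.4f} f1={f1:.4f}")
```

The same arithmetic applied per class is how the precision/recall/F-1 columns of TABLE I are obtained for every classifier.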
Detecting non-polarized textual data can be another improvement of our current system. Additionally, we pre-processed the dataset to keep only text containing Bengali phonetics, which filtered out Romanized Bengali texts. This narrowed down our dataset and also the scope to work with Latin letters used to write Bengali sentences (Romanized Bengali text). Filtering special characters removed any kind of emoticons used in the textual posts, but emoticons play a vital role in sentiment expression. We intend to work with Romanized text and emoticons in our next research work involving sentiment analysis. To summarize, this study demonstrates the great potential of the Bengali doc2vec technique and opens the door to more significant contributions in this aspect.

REFERENCES

[1] E. Cambria, "Affective computing and sentiment analysis," IEEE Intelligent Systems, vol. 31, no. 2, pp. 102–107, 2016.
[2] R. Gaspar, C. Pedro, P. Panagiotopoulos, and B. Seibt, "Beyond positive or negative: Qualitative sentiment analysis of social media reactions to unexpected stressful events," Computers in Human Behavior, vol. 56, pp. 179–191, 2016.
[3] "Social Media Statistics Facts," https://www.statista.com/topics/1164/social-networks/, (Visited on 10/27/2018).
[4] "Social Media Stats Bangladesh," http://gs.statcounter.com/social-media-stats/all/bangladesh, (Visited on 10/27/2018).
[5] B. Pang and L. Lee, "Opinion mining and sentiment analysis," Found. Trends Inf. Retr., vol. 2, no. 1-2, pp. 1–135, 2008.
[6] "Sentiment analysis," https://en.wikipedia.org/wiki/Sentiment_analysis, (Visited on 10/27/2018).
[7] Q. Le and T. Mikolov, "Distributed representations of sentences and documents," in International Conference on Machine Learning, 2014, pp. 1188–1196.
[8] P. D. Turney, "Thumbs up or thumbs down?: semantic orientation applied to unsupervised classification of reviews," Proceedings
<s>of the40th Annual Meeting on Association for Computational Linguistics -ACL ’02, pp. 417–424, 2002.[9] K. Dave, S. Lawrence, and D. M. Pennock, “Mining the Peanut Gallery:Opinion Extraction and Semantic Classification of Product Reviews,”in Proceedings of the 12th international conference on World Wide Web(WWW ’03), 2003, pp. 519–528.[10] B. Pang, L. Lee, Z. A. Bán, B. Pang, L. Lee, and S. Vaithyanathan,Proceedings of the Conference on Empirical Methods in NaturalLanguage Processing, vol. 48, no. 1, pp. 49–55, 2002.[11] T. Wilson, J. Wiebe, and P. Hoffman, “Recognizing contextual polarityin phrase level sentiment analysis,” in Proceedings of the conference onhuman language technology and empirical methods in natural languageprocessing, 2005, pp. 347–354.[12] A. Kaur and V. Gupta, “A Survey on Sentiment Analysis and OpinionMining Techniques,” Journal of Emerging Technologies in Web Intelli-gence, vol. 5, no. 4, pp. 367–371, 2013.[13] X. Ouyang, P. Zhou, C. H. Li, and L. Liu, “Sentiment analysis usingconvolutional neural network,” in 2015 IEEE International Conferenceon Computer and Information Technology; Ubiquitous Computing andCommunications; Dependable, Autonomic and Secure Computing; Per-vasive Intelligence and Computing, 2015, pp. 2359–2364.[14] S. Chowdhury and W. Chowdhury, “Performing sentiment analysisin Bangla microblog posts,” in 2014 International Conference onInformatics, Electronics and Vision, ICIEV 2014, 2014.[15] A. Das and S. Bandyopadhyay, “Opinion-polarity identification in ben-gali,” in International Conference on Computer Processing of OrientalLanguages, 2010, pp. 169–182.[16] M. Al-Amin, M. S. Islam, and S. D. Uzzal, “Sentiment analysis of Ben-gali comments with Word2Vec and sentiment information of words,” inECCE 2017 - International Conference on Electrical, Computer andCommunication Engineering, 2017, pp. 186–190.[17] A. Hassan, M. R. Amin, A. K. A. Azad, and N. 
Mohammed, “Sentimentanalysis on bangla and romanized bangla text using deep recurrent mod-els,” in IWCI 2016 - 2016 International Workshop on ComputationalIntelligence, 2017, pp. 51–56.[18] “Facebook Reactions,” http://minimaxir.com/2016/06/interactive-reactions/, (Visited on 10/27/2018).[19] “Facebook Graph API,” https://developers.facebook.com/docs/graph-api/, (Visited on 10/27/2018).[20] “Socian Bangla Sentiment Dataset,” https://github.com/socianltd/socian-bangla-sentiment-dataset-labeled/, (Visited on 10/27/2018).[21] “Doc2vec paragraph embeddings,” https://radimrehurek.com/gensim/models/doc2vec.html, (Visited on 10/27/2018).[22] “Sentence corpus and Doc2Vec file for Bengali Sentiment Analy-sis,” https://www.kaggle.com/tazimhoque/bengali-sentiment-text, (Vis-ited on 10/29/2018).View publication statsView publication statshttps://www.researchgate.net/publication/332582518</s>
International Conference on Bangla Speech and Language Processing (ICBSLP), 27-28 September, 2019

Sentimental Style Transfer in Text with Multigenerative Variational Auto-Encoder

Mehedi Hasan Palash, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, evonloch@gmail.com
Partha Protim Das, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, meparthaprotim@gmail.com
Summit Haque, Department of Computer Science and Engineering, Shahjalal University of Science and Technology, summit.haque@gmail.com

978-1-7281-5241-7/19/$31.00 ©2019 IEEE

Abstract—Style transfer is an emerging trend among the applications of deep learning; especially with image and audio data it has proven very useful, and sometimes the results are astonishing. Gradually, the styles of textual data are also being changed in many novel works. This paper focuses on transferring the sentimental vibe of a sentence: given a positive clause, the negative version of that clause or sentence is generated while keeping the context the same, and the same is done in the opposite direction for negative sentences. Previously this was a very tough job, because the go-to techniques for such tasks, such as Recurrent Neural Networks (RNNs) [1] and Long Short-Term Memories (LSTMs) [2], cannot perform well on it. But with newer technologies like Generative Adversarial Networks (GANs) and Variational Auto-Encoders (VAEs) emerging, this work is becoming more feasible and effective. In this paper, a multi-generative Variational Auto-Encoder is employed to transfer sentiment values. In spite of working with a small dataset, the model proves to be promising.

Index Terms—text, style, style-transfer, vae, sentiment-transfer

I. INTRODUCTION

In this fast-growing era of deep learning, understanding and transferring stylistic attributes in numerous fields has proven very fruitful. Especially in the case of images and other visual forms of data, neural networks have worked wonders in capturing the style elements and manipulating them [3]. This study explores the transfer of styles in linguistic expressions. The possibilities of text style transfer are limitless. It can be used to make customized chatbots that interact with humans as any human would, which could come in handy in different organizations and even for personal interests. Moreover, generating parallel data can be another application of textual style transfer. This is a very important aspect, because in many style transfer jobs the main problem is getting the parallel data. Transferring the style from one stream of text to another can solve this problem easily and reliably with much less effort.

Since dealing with textual data is very different from dealing with pictorial data, transferring styles is much more of a challenge here, mostly because slight distortions can be overlooked in transferred images, which is not the case for linguistic data. Here the model truly has to understand the context and act accordingly. This work is based on a basic stylistic attribute: sentiment. We choose to work with converting a very specific sentiment in human interactions, the positivity and negativity of sentences. We take a sentence or clause with a positive vibe and then change it to a negative one. This is a primitive step towards general style transfer in text, but if this basic problem can be solved, we can hope to get closer to more human-like machine-generated text.

We used a multi-generator Variational Auto-Encoder (VAE) to achieve our goal. We also tried a vanilla auto-encoder, but it does not work as well as VAEs, because VAE latent spaces are continuous. In a VAE, both the mean and the variance are taken into account, and that simple technique helps get a better result, for it expands the window of opportunity for getting more diverse, and therefore more relevant, outputs. We use two different decoders to get two types of response: one for positive and another for negative styled output.

II. RELATED WORK

Style transfer with non-parallel text has been exercised extensively in recent years. The most influential style transfer work would be that of Gatys et al. [3], which showed that the style of images can be transferred; this and several similar works inspired the building of machines that would do the same for textual data. Zhang et al. [4] use CNNs for this task, which face many problems, mostly because text and images are different in nature and should not be treated alike. Therefore, Neural Machine Translation (NMT) comes into play [5] [6], where parallel data is used to let the model learn. But parallel texts are a bit problematic to find, and sometimes even impossible to get. In the post-NMT era of text style transfer, Xu et al. [6], Fu et al. [7], Li et al. [8], Shen et al. [9], and Prabhumoye et al. [10] introduce newer techniques like GANs [11] and VAEs to perform this job in an unsupervised way, that is, without the help of parallel data. There have been many different techniques to evaluate the success of these approaches. Shen et al. [9] use sentiment modification, word substitution cipher decipherment, and word order recovery as human verification factors. They infer styles from a sentence and its original style indicator, and a style-dependent decoder is used to render them. Moreover, a brilliant technique, cross-generated sentences, is used to gather more information. Hu et al. [12] employ Variational Auto-Encoders (VAEs) and attribute discriminators to generate sentences whose attributes are controlled by trained disentangled latent representations. There, the Yelp and Amazon datasets are used and evaluated by humans to get a more accurate response.

III. DATASET

A. Data Source

We used a dataset of Bangla sentences: 4600 comments collected from Prothom Alo [13] news. Every comment has five options for tagging: 'Surely Negative', 'Slightly Negative', 'Neutral', 'Slightly Positive', and 'Surely Positive'. We had each comment tagged three times by volunteers to make the tags more reliable. Some examples are shown in Fig. 1.

Fig. 1. Sample data

To use the tags appropriately for learning, we replaced them with numerical values assigned in accordance with their strength: the more negative a sentence seems, the more negative the value, and likewise in the opposite direction for positive sentences. ['Surely Negative', 'Slightly Negative', 'Neutral', 'Slightly Positive', 'Surely Positive'] = [-2, -1, 0, 1, 2].

We then sum the three tags for each instance of the dataset. If the sum is less than zero, we mark the comment as negative, and if greater than zero, as positive. For example, suppose a comment has the three tags 'Surely Negative', 'Neutral', and 'Slightly Positive'. The sum for this comment is -2 + 0 + 1 = -1, so ultimately we treat it as a negative comment. After eliminating duplicates and unusable sentences, we get 2500 negative comments and 2500 positive comments.

B. Preprocessing

When we work with text data, we have to read a stream of characters. Single characters normally do not mean anything on their own; combined together carefully, they make sense. Tokenizing means splitting the stream of characters so that the pieces can be considered semantic units of the language. Tokenization [14] also removes the punctuation marks. An example of tokenization is shown in Fig. 2.

Fig. 2. Tokenizing Sentences

We used the Natural Language Toolkit (NLTK) for tokenization. We fixed the sequence length to 20; sentences shorter than 20 tokens were padded with a fixed string.

IV. MODEL OVERVIEW

We have used a variational auto-encoder (Doersch et al. [15]) with two generators in our model. Normally, every variational auto-encoder has two parts: an encoder and a decoder. First, the encoder maps the input sequence to a latent representation z. Then the generator takes samples from it and saturates them with the desired style, which is positive or negative in our case.

We can consider the encoder as a neural network that takes a datapoint x and outputs a hidden representation z. Then another neural network, called the generator, samples from z and produces the desired output. z preserves the context of the input, and the decoder is used for mixing in the desired attribute's properties.

In this model, depicted in Fig. 3, two generators are used: one for positive and another for negative. The positive generator is specialized to modify sentences with more positivity; the other is for generating negative sentences. First of all, the input sentence passes through an encoder network, which outputs the parameters of a distribution Q(z|x), where x is the vectorized input sentence. The latent vector z is sampled from this distribution. This z is the information holder of x. The decoders use z to recreate the sentence with a varied style.

While training, the positive decoder and the negative decoder are used separately so that each can learn about a specific style. That is, when dealing with positive data, we train the negative generator; again, we train the positive generator with the negative data to make it capable of generating positive sentences.

Fig. 3. High Level view of the model

V. RESULT

We have used human evaluation to determine our model's accuracy, as there are not enough software resources for evaluating style transfer in Bangla. Human evaluation has been used extensively in the validation of recent linguistic tasks, such as in the work of Shen et al. [9].

We have measured the following parameters for determining the success of an output:

• Grammatical Correctness: This metric is used to see how well the model can understand the structure of the language.
• Context Similarity: Whether the output could capture the context.
• Polarity: This is the most important point in our task, whether the output conveys a positive or negative vibe.

We have copied the input of the model and used it as the output to get a baseline result. Since human comments are used as the baseline input and output, the baseline has one hundred percent grammatical correctness and context similarity with the input. But it does not achieve the correct polarity, because the negativity of a positive sentence is small, and vice versa.

Fig. 4. Comparing our results with state of the art models

Comparing with the human-tested results on the Yelp dataset, all the parameters (grammar, context, positivity/negativity) of the baseline are much higher than those of all the state-of-the-art models, which means that machines have not yet been able to beat humans in this respect. Among the best performing models, the cross-aligned technique proves to be the best. Since our dataset is very small and in Bangla, we could not achieve such performance. Nonetheless, we have come close to that model in grammar and sentiment transferability, though we must note that context-wise we are far from expectation.

VI. ANALYSIS

The sentiment transferability and grammatical correctness of our model are not far from those of the model proposed by Shen et al. (2017) [9], but the context preservation is not satisfactory. The dataset we have used has a variety of contexts: some comments are about politics, some about sports, and so on. Shen et al. (2017) [9], in contrast, used a standard dataset of Amazon product reviews, so the context preservation of our model is not as good as theirs. If we could use data from a single context, the model might have performed better.

A sample output generated by our model is shown in Fig. 5. Here we can see that for the given positive sentence, our model tried to capture the context (green boxed) and then, staying within the context, tried to generate a totally opposite sentence. Similarly, if a negative sentence is fed into the machine, as in Fig. 6, the context is still captured, the style of this negativity is changed, and it becomes a positive sentence.

Fig. 5. Positive to generated Negative output

Fig. 6. Negative to generated Positive output

VII. CONCLUSION

A. Discussion

Transferring style in text is not a much-explored field.
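As a rough illustration of the encoder and two style-specific generators described in Section IV, the following NumPy sketch implements the reparameterization step z = mu + sigma * eps and routes the same latent vector through two decoders. All layer shapes, weight initializations, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper): 20-token input,
# a small embedding, and a 16-dimensional latent space.
SEQ_LEN, EMB, LATENT = 20, 8, 16

def encoder(x, w_mu, w_logvar):
    """Map a vectorized input sentence to the parameters of Q(z|x)."""
    h = x.reshape(-1)              # toy "encoder": flattened features
    return w_mu @ h, w_logvar @ h  # mean and log-variance of Q(z|x)

def sample_z(mu, logvar):
    """Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def generator(z, w_out, style):
    """One decoder per style; here just an affine map back to token scores."""
    return (w_out[style] @ z).reshape(SEQ_LEN, EMB)

# Toy weights for the encoder and the two style-specific generators.
w_mu = rng.standard_normal((LATENT, SEQ_LEN * EMB)) * 0.01
w_logvar = rng.standard_normal((LATENT, SEQ_LEN * EMB)) * 0.01
w_out = {s: rng.standard_normal((SEQ_LEN * EMB, LATENT)) * 0.01
         for s in ("positive", "negative")}

x = rng.standard_normal((SEQ_LEN, EMB))   # a vectorized input sentence
mu, logvar = encoder(x, w_mu, w_logvar)
z = sample_z(mu, logvar)                  # shared, context-preserving latent
pos = generator(z, w_out, "positive")     # positive-styled reconstruction
neg = generator(z, w_out, "negative")     # negative-styled reconstruction
print(z.shape, pos.shape, neg.shape)
```

The design point this sketch captures is that both decoders read the same z, so the context is shared while the style-specific weights differ.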
It is still a newer area than other well-explored fields like images and audio, yet it is very promising, because there are so many things we can do with textual data. The power to generate new sentences is alone a huge task for a machine, and if we can apply customized styles and personalization to it, this sounds even more wonderful, because up until now this was solely a human capability.

We explore only a small portion of this vast field: translating negative sentences into positive ones and positive sentences into negative ones. Our model succeeds in some cases; the accuracy we get is 53.2%. This may not be a very good result, but our model could not do much better because of the shortage of data. If we could get hold of a much larger dataset, this model could achieve more, mostly because it is very data-dependent.

B. Future Work

Working with sentimental values is an initial step in the textual style transfer process. In this paper, only the positivity and negativity of a sentence are exploited. There are many other stylistic aspects of a sentence that can be explored in this manner, like gender, tense, political stance, etc. If these basic style-bearing aspects can be properly understood and taught to the machine, it will have many interesting applications, like personalized chatbots and even a universal language translator.

VIII. ACKNOWLEDGEMENT

We would like to thank the NLP group of Shahjalal University of Science and Technology for the necessary insight and expertise that greatly assisted the research.

REFERENCES

[1] A. Karpathy, "The Unreasonable Effectiveness of Recurrent Neural Networks," May 2015. [Online]. Available: http://karpathy.github.io/2015/05/21/rnn-effectiveness/
[2] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[3] L. A. Gatys, A. S. Ecker, and M. Bethge, "Image style transfer using convolutional neural networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414-2423.
[4] X. Zhang and Y. LeCun, "Text understanding from scratch," arXiv preprint arXiv:1502.01710, 2015.
[5] J. M. Hughes, N. J. Foti, D. C. Krakauer, and D. N. Rockmore, "Quantitative patterns of stylistic influence in the evolution of literature," Proceedings of the National Academy of Sciences, vol. 109, no. 20, pp. 7682-7686, 2012.
[6] W. Xu, A. Ritter, B. Dolan, R. Grishman, and C. Cherry, "Paraphrasing for style," in Proceedings of COLING 2012, 2012, pp. 2899-2914.
[7] Z. Fu, X. Tan, N. Peng, D. Zhao, and R. Yan, "Style transfer in text: Exploration and evaluation," in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[8] J. Li, R. Jia, H. He, and P. Liang, "Delete, retrieve, generate: A simple approach to sentiment and style transfer," arXiv preprint arXiv:1804.06437, 2018.
[9] T. Shen, T. Lei, R. Barzilay, and T. Jaakkola, "Style transfer from non-parallel text by cross-alignment," in Advances in Neural Information Processing Systems, 2017, pp. 6830-6841.
[10] S. Prabhumoye, Y. Tsvetkov, R. Salakhutdinov, and A. W. Black, "Style transfer through back-translation," arXiv preprint arXiv:1804.09000, 2018.
[11] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672-2680.
[12] Z. Hu, Z. Yang, X. Liang, R. Salakhutdinov, and E. P. Xing, "Toward controlled generation of text," in Proceedings of the 34th International Conference on Machine Learning, Volume 70. JMLR.org, 2017, pp. 1587-1596.
[13] "Prothom Alo," https://www.prothomalo.com/, accessed: 2019-07-31.
[14] "Tokenization," https://nlp.stanford.edu/IR-book/html/htmledition/tokenization-1.html, accessed: 2019-07-10.
[15] C. Doersch, "Tutorial on variational autoencoders," arXiv preprint arXiv:1606.05908, 2016.
Data: Data Descriptor

Datasets for Aspect-Based Sentiment Analysis in Bangla and Its Baseline Evaluation

Md. Atikur Rahman * and Emon Kumar Dey *
Institute of Information Technology, University of Dhaka, Dhaka 1000, Bangladesh
* Correspondence: bsse0521@iit.du.ac.bd (M.A.R.); emonkd@iit.du.ac.bd (E.K.D.)
Received: 20 March 2018; Accepted: 2 May 2018; Published: 4 May 2018

Data 2018, 3, 15; doi:10.3390/data3020015

Abstract: With the extensive growth of user interactions through prominent advances of the Web, sentiment analysis has obtained more focus from an academic and a commercial point of view. Recently, sentiment analysis in the Bangla language is progressively being considered as an important task, for which previous approaches have attempted to detect the overall polarity of a Bangla document. To the best of our knowledge, there is no research on the aspect-based sentiment analysis (ABSA) of Bangla text. This can be attributed to the lack of available datasets for ABSA. In this paper, we provide two publicly available datasets to perform the ABSA task in Bangla. One of the datasets consists of human-annotated user comments on cricket, and the other consists of customer reviews of restaurants. We also describe a baseline approach for the subtask of aspect category extraction to evaluate our datasets.

Dataset: https://github.com/AtikRahman/Bangla_ABSA_Datasets
Dataset License: CC0
Keywords: ABSA dataset; Bangla ABSA; aspect extraction from Bangla

1. Summary

People trust human opinion more than traditional advertising. For example, consumers are used to seeking advice and recommendations from others before making decisions regarding important purchases. Word of mouth (WOM) has always been salient for consumers when making a decision. Such referrals have a strong impact on both customer decision-making and new customer acquisition for the purchasing of a company's product or service [1]. On the other hand, organizations are eager to mine all the activities and interactions of people to understand what their weaknesses and strengths are. This understanding would help them develop their organizational strategy in this competitive world.

Sentiment analysis (or opinion mining) is a process to determine the viewpoint of a person on a certain topic. It classifies the polarity of a document (i.e., a review, tweet, blog, or news item), that is, whether the communicated opinion is positive, negative, or neutral. There are three levels at which sentiment is analyzed [2]: the document level, the sentence level, and the aspect level. The document level considers that a document has an opinion on an entity, and the task is to classify whether the entire document expresses a positive or negative sentiment. The task at the sentence level regards sentences and determines whether each sentence expresses a positive, negative, or neutral opinion. Neither the document-level nor the sentence-level analysis discovers exactly what people liked and did not like. The aspect level (or aspect-based sentiment analysis, ABSA) performs a finer-grained analysis that identifies the aspects of a given document or sentence and the sentiment expressed towards each aspect. This level of analysis is the most detailed version, capable of discovering complex opinions from reviews.

There are two major tasks when performing ABSA. The first is to extract the specific areas or aspects mentioned in the opinionated review. The second is to identify the polarity (positive, negative, or neutral) for every aspect. For example, the following review of a restaurant reveals two aspects, service and food, and both aspects have a positive polarity.

"The service was excellent and the food was delicious."

As one can see, the names of the aspect categories are explicitly mentioned in this review. A review might also contain implicit categories; for example, "The staff makes you feel at home and the chicken is great." Here, the same aspects, "service" and "food", are contained without being directly mentioned.

Semantic Evaluation (SemEval), a reputed workshop in the NLP domain, introduced a complete dataset [3] in English for the ABSA task. Later this was expanded by adding multilingual datasets in which eight languages over seven domains were incorporated. To perform ABSA, datasets in several languages, such as Arabic [4], Czech [5], and French [6], were created. There is no dataset for Bangla in the field of ABSA; consequently, no work is being done to extract aspects and to identify corresponding polarities for Bangla reviews. We are currently working on a project to extract the aspects from Bangla reviews or comments for a particular product of a company, as online shopping is very popular nowadays in Bangladesh and is growing rapidly. People like to buy products online after reading the comments of others.

In this paper, we have created two new datasets that serve as a benchmark for the ABSA domain in Bangla texts. We present two datasets, named "Cricket" and "Restaurant". The first dataset contains 2900 comments on cricket over 5 aspect categories, and the second dataset contains 2600 restaurant reviews.

Because there is no work in Bangla for the ABSA task, we have introduced ABSA by extracting aspect categories from Bangla texts in order to evaluate our datasets. We performed the task with different training approaches and found a satisfactory outcome compared to evaluations in other languages.

There are some related works from which we derived the idea of this topic.
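The two ABSA subtasks described above, aspect extraction followed by per-aspect polarity, can be illustrated with a minimal data structure for the example review. The field names below are illustrative assumptions, loosely following SemEval-style annotation records rather than this paper's exact format:

```python
# A minimal sketch of an ABSA annotation for the example review
# "The service was excellent and the food was delicious."
# Field names are illustrative, loosely modeled on SemEval-style records.
review = {
    "text": "The service was excellent and the food was delicious.",
    "aspects": [
        {"category": "service", "polarity": "positive"},
        {"category": "food", "polarity": "positive"},
    ],
}

def polarity_of(review, category):
    """Look up the annotated polarity for one aspect category."""
    for aspect in review["aspects"]:
        if aspect["category"] == category:
            return aspect["polarity"]
    return None  # category not mentioned in this review

print(polarity_of(review, "food"))   # positive
print(polarity_of(review, "price"))  # None
```

Note that a document-level label would collapse this review to a single "positive", whereas the aspect-level record keeps one polarity per category.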
The restaurantreview dataset, provided by Ganu et al. [7], was used to improve rating predictions. Their annotationsincluded six aspect categories and overall sentence polarities. They had not prepared a complete ABSAdataset, as the aspect category was present but the corresponding polarity of that aspect was absent.The SemEval 2014 evaluation campaign [3] extended their dataset by adding three more fields with theaspect category. They published their dataset with four fields being contained for each review, that is,with the aspect term occurring in the sentences, the aspect term’s polarity, the aspect category, and theaspect category’s polarity. They also provided a laptop-review dataset and manually annotated withsimilar entities as for the restaurant dataset. These are the benchmark datasets that [8–11] researcheshave used for performing the ABSA task.The task was repeated in SemEval 2015 [12], for which aspect categories were the combination ofthe entity type and an attribute type. Multilingual datasets were released in the SemEval 2016workshop [13] on the seven domains (restaurant, laptop, mobile phone, digital camera, hotel,and museum) and in eight languages (English, Arabic, French, Chinese, Turkish, Spanish, Dutch,and Russian).A book-review dataset in the Arabic language was provided by [4]. They annotated book reviewsinto 14 categories and 4 types of polarities, including “Conflict”.</s>
<s>In [5], the author created an ITproduct-review dataset for the ABSA task, in which a total 2200 reviews were contained.The contribution of this paper is as follows:• We have collected and presented two Bangla datasets for ABSA and have made thempublicly available.• We performed statistical linguistic analysis on the datasets.Data 2018, 3, 15 3 of 10• We implemented state-of-the-art machine learning approaches for the collected datasets andfound satisfactory accuracies.2. Data Description“Digital Bangladesh” is the integral part of the Bangladesh government’s Vision 2021. The Internetis growing very fast over the country, and people are using different online platforms in every aspectof their lives. This encouraged us to construct datasets to analyze the Bengali people’s opinions and toextract their sentiments in different aspects. In this paper, we have constructed two different datasets,namely, the Cricket dataset and Restaurant dataset, to evaluate the people’s opinions.2.1. Cricket DatasetThe Cricket dataset consists of 2900 different comments from different online sources withfive different aspect categories. Most of the comments are collected from Facebook pages(https://www.facebook.com/BBCBengaliService/; https://www.facebook.com/DailyProthomAlo/).Some comments are collected from two popular Bengali Websites, BBC Bangla (http://www.bbc.com/bengali), and the Daily Prothom Alo (http://www.prothomalo.com). This dataset was collected by theauthors of this paper. The comments are of different lengths and each review contains approximately3–100 Bangla words. The reasons behind choosing these Websites for collecting data are given below:• BBC Bangla and the Daily Prothom Alo are very popular online news sites for the Bengalicommunity all around the world. They are popular for publishing trustworthy and authenticnews. Bengali people frequently read the news and sometimes make comments to share theiropinion. 
Although people write their comments or opinions in both Bangla and English, most ofthe time, they choose Bangla. We studied different articles and found that in almost 90% of thecases, people expressed their opinion in Bangla.• The Facebook page of Prothom Alo has over 13 million followers, and BBC Bangla has over11 million. These two pages provide enormous text posts as well as a large number of comments.• Cricket is one of the most popular games nowadays for Bengali people. We found that peopleare more interested in making comments on cricket-related news than on any other topic.Thus, we chose this category for our experiment.Table 1 shows an example of comments collected from the Facebook pages.Table 1. Example of cricket-related comments on Prothom Alo and BBC Bangla Facebook pages.Data 2018, 3, x 3 of 11 2. Data Description “Digital Bangladesh” is the integral part of the Bangladesh government’s Vision 2021. The Internet is growing very fast over the country, and people are using different online platforms in every aspect of their lives. This encouraged us to construct datasets to analyze the Bengali people’s opinions and to extract their sentiments in different aspects. In this paper, we have constructed two different datasets, namely, the Cricket dataset and Restaurant dataset, to evaluate the people’s opinions. 2.1. Cricket Dataset The Cricket dataset consists of 2900 different comments from different online sources with five different aspect categories. Most of the comments are collected from Facebook</s>
<s>pages (https://www.facebook.com/BBCBengaliService/; https://www.facebook.com/DailyProthomAlo/). Some comments are collected from two popular Bengali Websites, BBC Bangla (http://www.bbc.com/bengali), and the Daily Prothom Alo (http://www.prothomalo.com). This dataset was collected by the authors of this paper. The comments are of different lengths and each review contains approximately 3–100 Bangla words. The reasons behind choosing these Websites for collecting data are given below:  BBC Bangla and the Daily Prothom Alo are very popular online news sites for the Bengali community all around the world. They are popular for publishing trustworthy and authentic news. Bengali people frequently read the news and sometimes make comments to share their opinion. Although people write their comments or opinions in both Bangla and English, most of the time, they choose Bangla. We studied different articles and found that in almost 90% of the cases, people expressed their opinion in Bangla.  The Facebook page of Prothom Alo has over 13 million followers, and BBC Bangla has over 11 million. These two pages provide enormous text posts as well as a large number of comments.  Cricket is one of the most popular games nowadays for Bengali people. We found that people are more interested in making comments on cricket-related news than on any other topic. Thus, we chose this category for our experiment. Table 1 shows an example of comments collected from the Facebook pages. Table 1. Example of cricket-related comments on Prothom Alo and BBC Bangla Facebook pages. 
Comments | Source
মাশরািফ এক জাদুকারী নাম । য নামটা নেলই মন ভের যায়। আমােদর জিহর,জনসন, পালক, টিল নাই তেব এক জন মাশরািফ আেছ | Prothom Alo Facebook page
বালারেদরও দাষ নই, দাষটা 100% ম ােনজেমে র। পস বালারেদর জন উইেকট তির তা তারাই কের না। দাষটা ম ােনজেম না মানেল আিম নব। | Prothom Alo Facebook page
আশা কির তাসিকন অিত তু দেল িফরেব আর িনয়িমত খলেব এবং চর আউট করেব। | Prothom Alo Facebook page
এখন দশকেদর কােছ জনি য় হে 20--- ট /ওয়ানেড মানুষ এখন দখেতই চায়না.. | Prothom Alo Facebook page
হারেলও বাংলােদশ জতেলও বাংলােদশ।আগামীেত আবার আমরাই জতেবা। ইশ! আমােদর যিদ িবরাট কাহিলর মেতা একটা ব াট ান থাকেতা | BBC Bangla Facebook page
আমার পরামশ হেলা েকটারেদর কেপােরট জগত থেক দুের রাখেত হেব। | BBC Bangla Facebook page

People usually comment in Bangla about the news. We also found that 5–10% of the time, they commented in English or wrote Bangla sentences in the English alphabet. We did not include these opinions in our dataset. In addition, some comments contained only emoticons and no other text; we omitted these as well. All of these filtering steps were done manually by the authors. The following section describes the annotation process for the collected corpus of cricket-related comments.

2.1.1. Annotation of Cricket Dataset

The Bangla text on cricket was annotated jointly by the authors, a group of second-year BSSE students, and two employees of the Institute of Information Technology, University of Dhaka, Bangladesh. All participants agreed to categorize the whole dataset into five aspect categories: bowling, batting, team, team management, and other. Given a comment, the task of the annotators was to recommend the aspect category and a polarity label for each. Three polarities were considered: positive, negative, and neutral. Table 2 shows information about the participants.

Table 2.
Information about the participants in data collection.

Participant ID | Gender | Profession | Task
P1 | Male | MS student/author | Data collection (Cricket) and annotation
P2 | Male | Faculty/author | Data collection (Cricket) and annotation
P3 | Male | Graduate student | Annotation (Cricket) and translation (Restaurant)
P4 | Female | Graduate student | Annotation (Cricket) and translation (Restaurant)
P5 | Female | Graduate student | Annotation (Cricket) and translation (Restaurant)
P6 | Male | Graduate student | Annotation (Cricket) and translation (Restaurant)
P7 | Male | Graduate student | Annotation (Cricket) and translation (Restaurant)
P8 | Male | Graduate student | Annotation (Cricket) and translation (Restaurant)
P9 | Female | Accountant | Annotation
P10 | Male | Officer | Annotation

Each participant categorized every comment of the dataset. We applied the majority voting technique to make the final decision about the aspect category and the polarity of a sentence. As an example, we have taken the following comment:
“এই িপেচ রান করা টাফ, বািলং িনঃসে েহ ভােলা হেয়েছ”

The voting result we found for this comment is given in Table 3.

Table 3. Voting example to
define the category and polarity.

Comment: এই িপেচ রান করা টাফ, বািলং িনঃসে েহ ভােলা হেয়েছ

Participant | Voting for Category | Voting for Polarity
P1 | Bowling | Positive
P2 | Bowling | Positive
P3 | Batting | Negative
P4 | Batting | Negative
P5 | Other | Neutral
P6 | Bowling | Positive
P7 | Other | Neutral
P8 | Bowling | Positive
P9 | Bowling | Positive
P10 | Batting | Negative

From Table 3, we can see that the comment had three votes for Batting with a negative polarity, two votes for Other with a neutral polarity, and five votes for Bowling with a positive polarity. Thus, our method determined this comment as being in the Bowling category with a positive polarity. We also had ties for some comments. In this situation, we took both categories, with their polarities, into our dataset. Table 4 shows an example of this scenario.
Table 4. Voting category identification.

Comment: ওরা 200 কেরেছ, তামরা 100 করেত পারেব না?

Participant | Voting for Category | Voting for Polarity
P1 | Batting | Negative
P2 | Team | Negative
P3 | Batting | Negative
P4 | Batting | Negative
P5 | Batting | Negative
P6 | Team | Negative
P7 | Team | Negative
P8 | Batting | Negative
P9 | Team | Negative
P10 | Team | Negative

We can see from Table 4 that 50% of the evaluators voted for Batting and 50% voted for the Team category, both with a negative polarity. As they were tied, our algorithm took both categories into the labeled dataset with a negative polarity.

We also faced another kind of problem in constructing the dataset. After determining the category, we found disagreement among the participants regarding the polarity of some comments. For example, for the following comment we found the voting result given in Table 5.

Table 5. Problem related to polarity determination.

Comment: রা াক বুেড়া হেয় গেছ, খলা পােরনা। তাহেল আজ িকভােব িক করেলা?
Participant | Voting for Category | Voting for Polarity
P1 | Bowling | Positive
P2 | Bowling | Positive
P3 | Team | Negative
P4 | Bowling | Negative
P5 | Other | Positive
P6 | Team | Negative
P7 | Other | Negative
P8 | Team | Negative
P9 | Bowling | Positive
P10 | Team | Negative

From Table 5, we can see that both the Bowling and Team categories had four votes for this comment. Thus, we kept both categories in our annotated dataset. For polarity, Bowling received three positive votes and one negative vote, so we assigned it a positive polarity and added it to our dataset. Table 6 shows a sample of the labeled Cricket dataset.
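The aggregation rules illustrated in Tables 3 and 4 can be sketched as a small helper. This is a minimal sketch, not the authors' code: the function name and data layout are ours, and a within-category polarity tie (which does not occur in the paper's examples) would need an extra rule, since `most_common(1)` picks arbitrarily.

```python
from collections import Counter

def aggregate_votes(votes):
    """votes: one (category, polarity) pair per annotator.
    Returns {category: polarity} for every category tied at the
    highest vote count; the polarity is the majority among the
    annotators who chose that category."""
    category_counts = Counter(category for category, _ in votes)
    top = max(category_counts.values())
    winners = [c for c, n in category_counts.items() if n == top]
    labels = {}
    for category in winners:
        polarity_counts = Counter(p for c, p in votes if c == category)
        labels[category] = polarity_counts.most_common(1)[0][0]
    return labels

# Votes from Table 3: a clear winner (Bowling, positive)
table3 = [("Bowling", "Positive"), ("Bowling", "Positive"),
          ("Batting", "Negative"), ("Batting", "Negative"),
          ("Other", "Neutral"), ("Bowling", "Positive"),
          ("Other", "Neutral"), ("Bowling", "Positive"),
          ("Bowling", "Positive"), ("Batting", "Negative")]
print(aggregate_votes(table3))   # {'Bowling': 'Positive'}

# Votes from Table 4: a 5-5 tie, so both categories are kept
table4 = [("Batting", "Negative"), ("Team", "Negative"),
          ("Batting", "Negative"), ("Batting", "Negative"),
          ("Batting", "Negative"), ("Team", "Negative"),
          ("Team", "Negative"), ("Batting", "Negative"),
          ("Team", "Negative"), ("Team", "Negative")]
print(aggregate_votes(table4))   # {'Batting': 'Negative', 'Team': 'Negative'}
```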
In Table 7, a summary of the complete Cricket dataset is presented. We can see that there are a total of
3034 different comments in five categories: Batting, Bowling, Team, Team Management, and Other. Each category contains three polarities: positive, negative, and neutral. For example, the Batting category contains a total of 583 comments, of which 138 are positive, 389 are negative, and 56 are neutral. The Bowling category contains 154 positive, 145 negative, and 33 neutral comments. The Team, Team Management, and Other categories contain totals of 774, 332, and 1013 comments, respectively.

Table 6. A part of the Cricket dataset in xlsx format.

বয্াপার না। eটা ধুমাt ঘর্টনা ছাড়া িকছু না। | other | neutral
ভকামনা টাiগারেদর জনয্। | team | positive
বাংলােদশ eখেনা তািমেমর েযাগয্ oেপনার েপেলা না। | batting | negative
বাংলােদশ হারেব আজ । | team | negative
সাতজন িsনার িনেয় মােঠ েবাতল টানােনা যায় ময্াচ েজতা যায় না। | bowling | negative
টািনর্ং িপচ বািনেয় ঔষধ খুেজ লাভ িক বাuিn িপচ বানােলi হয়। | team management | negative
eটা পুরাদেম িফিkং eকটা েখলা হiেস। | team | negative
জয় ধু সমেয়র aেপkা। | other | positive
েযi জেয়র সমান!! | other | neutral
েবালাররা েয পিরমােন শটর্ বল িদেc- তােত রান কতেবিশ হয় েসটাi েদখার িবষয়! | bowling | negative
তােক েটs আর oিডআi দেল িনয়িমত চাi। | team | positive
বাংলােদশ িkেকট আেরা eিগেয় যােব, oেপনারেদর eকটু ভােলা করেত হেব। | team | positive
বাংলােদশ িkেকট আেরা eিগেয় যােব, oেপনারেদর eকটু ভােলা করেত হেব। | batting | negative
িফেরi চমক েদখােলন রাjাক | bowling | positive
নবীনেদর সুেযাগ েদয়া দরকার. | other | neutral
েবািলং িপচ তেব আমােদর বয্াটসময্ানেদর আuট েলা আtহতয্া ছাড়া আর িকছুi নয়। | batting | negative
েবািলং িপচ তেব আমােদর বয্াটসময্ানেদর আuট েলা আtহতয্া ছাড়া আর িকছুi নয়। | bowling | neutral
দািয়tjান হীনতার aভাব? | other | negative
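Since Table 6 stores each labeled comment as a (text, category, polarity) row, per-category polarity counts like those in Table 7 can be derived with a few lines of pandas. This is a sketch with toy English stand-in rows; the xlsx path and column names are our assumptions, not the paper's.

```python
import pandas as pd

# Toy stand-ins for the dataset's three-column xlsx layout
rows = [
    ("comment 1", "team", "positive"),
    ("comment 2", "batting", "negative"),
    ("comment 3", "batting", "negative"),
    ("comment 4", "other", "neutral"),
]
df = pd.DataFrame(rows, columns=["comment", "category", "polarity"])
# With the real file (hypothetical path/columns):
# df = pd.read_excel("cricket.xlsx", names=["comment", "category", "polarity"])

# Per-category polarity counts plus row/column totals, mirroring Table 7
stats = pd.crosstab(df["category"], df["polarity"],
                    margins=True, margins_name="Total")
print(stats)
```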
Table 7. The complete statistics of the Cricket dataset.

Category | Positive | Negative | Neutral | Total
Batting | 138 | 389 | 56 | 583
Bowling | 154 | 145 | 33 | 332
Team | 166 | 502 | 66 | 774
Team Management | 24 | 293 | 15 | 332
Other | 89 | 828 | 96 | 1013
Total Comments | | | | 3034

2.1.2. Analysis of Proposed Cricket Dataset

We used Zipf's law [14] for our proposed Cricket dataset. Zipf's law is an empirical observation stating that, in a corpus, the frequency of a given word is inversely proportional to its rank. The most frequent word (rank 1) should occur approximately twice as often as the second most frequent word, three
times as often as the third most frequent, and so on. Figure 1 plots the word frequencies of our Cricket dataset; the plot follows the trend of Zipf's law. We also calculated the reliability of the annotation process: the intraclass correlation coefficient (ICC) was 0.71.

2.2. Restaurant Dataset

To create the Bangla Restaurant dataset, we took help directly from the English benchmark Restaurant dataset [3]. All comments were abstractly translated into Bangla with their exact annotations. The original English dataset contains a total of 2800 different comments. The participants involved in creating the Cricket dataset, except P9 and P10, were also involved in translating the Restaurant dataset. We divided the original dataset equally into eight parts and distributed them to the participants, who abstractly translated their assigned parts.
Finally, participants P1 and P2 merged the separate sections and performed an extensive proofread.

Figure 1. Distribution of word frequencies of the Cricket dataset using Zipf's law.

Annotation Schema for Restaurant

The Restaurant reviews dataset [3] used in this paper was abstractly translated into Bangla. There were five aspect categories: Food, Price, Service, Ambiance, and Miscellaneous. As the objective was to identify the aspect category and its corresponding polarity, the participants did not add aspect terms or their polarities. For the polarity of an aspect category, we considered only three labels: positive, negative, and neutral. The original dataset had four polarity labels: positive, negative, neutral, and conflict. In our translated Bangla dataset, we omitted the conflict label and treated it as neutral. The annotators were asked to assign each translated Bangla restaurant review the category and polarity of the original dataset. Table 8 shows a sample of the translated Restaurant dataset.

Table 8. A part of the Restaurant dataset in
xlsx format.

খুব সীিমত আসন আেছ eবং খাদয্ পাoয়ার জনয্ যেথ aেপkা করেত হেব। | ambience | negative
খুব সীিমত আসন আেছ eবং খাদয্ পাoয়ার জনয্ যেথ aেপkা করেত হেব। | service | negative
দাম তুলনামূলকভােব কম। | price | positive
াi িছল মজাদার | food | positive
যিদo খাবারিট চমৎকার িছল, eিট সsা িছল না। | food | positive
যিদo খাবারিট চমৎকার িছল, eিট সsা িছল না। | price | negative
খুব ভাল! | miscellaneous | positive
আচােরর সংেযাজন খুব ভাল িছল । | food | positive
ধুমাt রাnাi েয েসরা তা নয় , েসবা সবসময় মেনােযাগী eবং ভাল হেয়েছ। | food | positive
ধুমাt রাnাi েয েসরা তা নয় , েসবা সবসময় মেনােযাগী eবং ভাল হেয়েছ। | service | positive
সবর্দা eকিট সুnর িভড়, িকn েকান েকালাহল েনi। | ambience | positive
সjা alsl eবং পির ার - িব াn বা pশংসা করা িকছুi েনi | ambience | neutral
আিম িনি ত েয আমােক বারবর িফের েযেত হেব, !!! | miscellaneous | positive
সmাবত eিট eকিট েছাট আরামদায়ক েরsুেরn,ভাল সjার সে েরামািnক aনুভূিত। | ambience | positive
যিদo খাদয্ ভাল িছল পিরেবষনা িছল িব । | food | positive
যিদo খাদয্ ভাল িছল পিরেবষনা িছল িব । | service | negative
কম রা মেনােযাগী eবং বnুtপূণর্। | service | positive
খাবার ভাল িছল। | food | positive

Table 9 shows the complete statistics of the Bangla Restaurant dataset.
We can see from the table that the five categories, that is, Food, Price, Service, Ambiance, and Miscellaneous, contained 713, 178, 336, 234, and 613 reviews, respectively, with three different polarities. For example, the Food category contained 500 positive, 126 negative, and 87 neutral sentiment labels. The Service category contained 186 positive, 118 negative, and 32 neutral sentiments. We also found that the Restaurant dataset follows Zipf's law, as shown in Figure 2.

Table 9. Complete statistics of the Bangla Restaurant dataset.

Category | Positive | Negative | Neutral | Total
Food | 500 | 126 | 87 | 713
Price | 102 | 60 | 16 | 178
Service | 186 | 118 | 32 | 336
Ambiance | 138 | 53 | 43 | 234
Miscellaneous | 300 | 120 | 193 | 613

Figure 2. Word frequency of the Bangla Restaurant dataset according to Zipf's law.

3. Baseline Evaluation

Our objective is to provide benchmark datasets for Bangla ABSA. Our datasets are designed for two major tasks of ABSA: aspect category extraction and the identification of polarity for each aspect category. In
this paper, we experimented with the first subtask, that is, the extraction of the aspect category. We applied three major steps to extract the aspect category. First, preprocessing was performed on the dataset. Then, features were extracted from the data, and finally classification was performed using some popular classification models.

3.1. Preprocessing and Feature Extraction

In the preprocessing phase, each Bangla document was represented as a "bag of words". We applied traditional preprocessing steps for the evaluation. First, punctuation and stop words were removed from each of the comments. After this, we removed digits from the dataset, because we found that digits were not relevant to the aspect category. Finally, we tokenized each Bangla word.

Thus, a vocabulary of Bangla words was prepared after preprocessing. We created a feature matrix in which each review was represented by a vector over that vocabulary. Term frequency–inverse document frequency (TF–IDF) was used for calculating the features.

3.2. Results

In the training phase, the extracted feature sets were used to train popular supervised machine learning algorithms. Because this is a multi-label classification problem, we trained our models with multi-label output. We used linear SVC as the support vector machine (SVM) implementation. The following machine learning algorithms were used:

I. Support vector machine (SVM)
II. Random forest (RF)
III. K-nearest neighbor (KNN)

After training was completed, our proposed Bangla test dataset was evaluated on the trained models. Table 10 shows the results for the task of aspect category extraction on the datasets presented in this paper. We can see that the SVM obtained the highest precision on both datasets, while both datasets showed low recall and F1-scores. Figure 3 shows the overall accuracy of the models on our datasets.
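The pipeline described in Sections 3.1 and 3.2 can be sketched with scikit-learn. This is a minimal sketch, not the authors' code: the toy English comments stand in for preprocessed Bangla text (punctuation, digits, and stop words already removed), and the label sets are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import LinearSVC

# Toy stand-ins for preprocessed comments; each comment may carry
# several aspect-category labels (multi-label classification)
train_texts = [
    "bowler short ball wicket",
    "batsman opener run century",
    "captain selector squad decision",
    "bowler spin batsman collapse",
]
train_labels = [["bowling"], ["batting"],
                ["team management"], ["bowling", "batting"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(train_labels)      # binary indicator matrix, one column per aspect

vec = TfidfVectorizer()                  # TF-IDF over the bag-of-words vocabulary
X = vec.fit_transform(train_texts)

clf = OneVsRestClassifier(LinearSVC())   # one binary linear SVC per aspect category
clf.fit(X, Y)

pred = clf.predict(vec.transform(["short ball bowler"]))
print(mlb.inverse_transform(pred))
```

Swapping `LinearSVC()` for `RandomForestClassifier()` or `KNeighborsClassifier()` reproduces the other two baselines in the same one-vs-rest setup.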
The inherent nature of the datasets is the reason behind the lower performance of the models on both datasets. People share opinions according to their individual judgment, so the variety of opinions in the datasets is large. Moreover, aspect extraction is a multi-label classification problem: a single opinion may carry multiple aspect categories, and conventional classifiers miss some of them.

Table 10. Performance of proposed datasets.

Dataset | Model | Precision | Recall | F1-Score
Cricket | SVM | 0.71 | 0.22 | 0.34
Cricket | RF | 0.60 | 0.27 | 0.37
Cricket | KNN | 0.45 | 0.21 | 0.25
Restaurant | SVM | 0.77 | 0.30 | 0.38
Restaurant | RF | 0.64 | 0.26 | 0.33
Restaurant | KNN | 0.54 | 0.34 | 0.42

Figure 3. The results of the three models on our datasets.

4. Conclusions and Future Work

Two datasets are provided for the ABSA of Bangla text. These datasets have been designed to support two tasks: aspect category extraction and the identification of polarity for each aspect category. We also report baseline results for the task of aspect category extraction. In the future, we aim to extend this work to further domains such as cars, mobile phones, and laptops. We are also working on more advanced methods for the ABSA of Bangla text using our datasets to achieve better performance.

Author Contributions: All authors contributed equally to this work, and have read and approved the final manuscript.

Conflicts of Interest: The