# Method

We aimed to mimic a setting of sparse/noisy content distribution, which mandated curating a novel dataset via specific lexicons. We scraped $500$ random posts from a recognized transport forum[^4]. A pool of $50$ uni/bi-grams was created from tf-idf representations extracted from the posts, which was further pruned by annotators. Querying posts on *Twitter* with the extracted lexicons led to a collection of $19,300$ tweets. To increase lexical diversity, we added $2500$ randomly sampled tweets to our dataset. In spite of the sparse nature of these posts, their lexical characteristics act as **information cues**. Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"} pictorially represents our methodology.

Our approach required an initial set of ***informative tweets***, for which two human annotators labeled a random sub-sample of the original dataset. Of the $1500$ samples, $326$ were marked as *informative* and $1174$ as *non-informative* ($\kappa=0.81$), discriminated by this criterion: ***Is the tweet addressing any complaint or raising grievances about modes of transport or services/events associated with transportation, such as traffic or public/private transport?*** An example tweet marked as *informative*: . We utilized tf-idf to identify initial seed lexicons from the curated set of ***informative tweets***. The $50$ terms with the highest tf-idf scores were matched against the complete dataset, and based on sub-string match the ***transport relevant tweets*** were identified. Redundant tweets were filtered out based on their cosine similarity scores. **Implicit information indicators** were identified using a ***domain relevance score***, a metric that gauges the coverage of an n-gram ($n = 1, 2, 3$) when evaluated against a randomly created pool of posts. For this purpose, we collected a pool of $5000$ randomly sampled tweets from outside the data collection period.
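As a rough sketch of this step, the snippet below ranks uni-grams by a toy tf-idf variant and filters tweets by sub-string match. It is illustrative only: function names, the tokenizer, and the choice of taking the maximum per-document score are our assumptions, not details from the original implementation.

```python
import math
from collections import Counter

def tfidf_top_terms(docs, k=50):
    """Rank uni-grams by tf-idf (toy variant: keep each term's max
    per-document score). Whitespace tokenization for simplicity."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    n = len(tokenized)
    scores = Counter()
    for toks in tokenized:
        tf = Counter(toks)
        for term, count in tf.items():
            idf = math.log(n / df[term])
            scores[term] = max(scores[term], (count / len(toks)) * idf)
    return [term for term, _ in scores.most_common(k)]

def match_relevant(tweets, lexicon):
    """Keep tweets containing any lexicon term as a sub-string."""
    return [t for t in tweets if any(term in t.lower() for term in lexicon)]
```

In practice one would use a library tf-idf implementation (e.g. scikit-learn's `TfidfVectorizer`); the sketch above only mirrors the sub-string matching logic described in the text.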
The rationale behind this metric was to discard commonly occurring n-grams, normalized by random noise, and retain the ones of lexical importance. We used terms with high domain relevance scores (threshold determined experimentally) as seed lexicons for the next set of iterations; the growing dictionary augments the collection process. The process ran for $4$ iterations, after which no new lexicons were identified, providing us with $7200$ ***transport relevant tweets***. To identify linguistic signals associated with *complaint* posts, we randomly sampled a set of $2000$ tweets as the training set and manually annotated them into two labels: *complaint relevant* ($702$) and *complaint non-relevant* ($1298$) ($\kappa=0.79$). We employed the following features on our dataset.
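The text does not give the exact formula for the domain relevance score. One plausible form, sketched below, scores each n-gram ($n \le 3$) by its frequency in the domain pool normalized by its smoothed frequency in the random pool; the function names and the smoothing constant `alpha` are our assumptions.

```python
from collections import Counter

def ngrams(tokens, n):
    """Contiguous n-grams of a token list, joined by spaces."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def domain_relevance(domain_posts, random_posts, max_n=3, alpha=1.0):
    """Hypothetical domain relevance score: frequency in the domain pool
    divided by (smoothed) frequency in a random pool, so that n-grams
    common in random noise are pushed down."""
    dom, rnd = Counter(), Counter()
    for posts, counter in ((domain_posts, dom), (random_posts, rnd)):
        for post in posts:
            toks = post.lower().split()
            for n in range(1, max_n + 1):
                counter.update(ngrams(toks, n))
    return {g: dom[g] / (rnd[g] + alpha) for g in dom}
```

Under this form, an n-gram like "bus delay" that is frequent in transport posts but absent from the random pool scores high, while generic words shared with the random pool are suppressed, matching the stated rationale.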
*Figure 1: Pictorial representation of the proposed pipeline.*
**Linguistic markers**. To capture linguistic aspects of complaints, we utilized Bag of Words, counts of POS tags, and Word2vec clusters.

**Sentiment markers**. We used a quantified score based on the ratio of tokens matched in the following lexicons: MPQA, NRC, VADER, and Stanford.

**Information specific markers**. These account for a set of handcrafted features associated with *complaints*: (a) Text-Meta Data, which includes the counts of URLs, hashtags, user mentions, and special symbols, used to enhance retweet impact; (b) Request Identification, where we employed the model presented in [@danescu2013no] to identify whether a specific tweet assertion is a request; (c) Intensifiers, a feature set derived from the number of words starting with capital letters and the repetition of special symbols (exclamation and question marks) within the same post; (d) Politeness Markers, the politeness score of the tweet extracted from the model presented in [@danescu2013no]; (e) Pronoun Variation, which can reveal or intensify personal involvement. We utilized the frequency of pronoun types $\{\textit{first, second, third, demonstrative, indefinite}\}$ using pre-defined dictionaries.

From the pool of $7200$ ***transport relevant tweets***, we sampled $3500$ tweets as the testing set. The results are reported in Table [1](#tab:res){reference-type="ref" reference="tab:res"} with $10$-fold cross-validation. **With an increasing number of iterations, the pool of seed lexicons gets refined and augments the selection of *transport relevant tweets***. The proposed pipeline is tailored to identify complaint relevant tweets in a noisy scenario.
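A minimal sketch of two of the handcrafted markers, (c) Intensifiers and (e) Pronoun Variation, is shown below. The pronoun dictionaries are deliberately truncated and the feature names are hypothetical; the full feature set described above would also include the meta-data, request, and politeness components.

```python
import re

# Truncated illustrative pronoun dictionaries (the real pre-defined
# dictionaries would be more complete and cover all five types).
FIRST_PERSON = {"i", "me", "my", "mine", "we", "us", "our", "ours"}
SECOND_PERSON = {"you", "your", "yours"}

def complaint_features(tweet):
    """Extract toy intensifier and pronoun-variation counts from a tweet."""
    tokens = re.findall(r"[A-Za-z']+", tweet)
    lower = [t.lower() for t in tokens]
    return {
        # words starting with a capital letter (intensifier signal)
        "capitalized": sum(1 for t in tokens if t[0].isupper()),
        # runs of repeated ! or ? (e.g. "??", "!!!")
        "intensifier_runs": len(re.findall(r"[!?]{2,}", tweet)),
        # pronoun-variation counts from the truncated dictionaries
        "first_person": sum(1 for t in lower if t in FIRST_PERSON),
        "second_person": sum(1 for t in lower if t in SECOND_PERSON),
    }
```

Feature dictionaries of this shape can be vectorized (e.g. concatenated with the Bag of Words and sentiment scores) before training the complaint classifier.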