# Introduction

While NLP has not yet been applied to the field of SD, in recent years there have been notable applications of Artificial Intelligence (AI) in this area. This is evidenced by the rise of young research fields that seek to help meet the SDGs, such as *Computational Sustainability* [@gomes2019computational] and *AI for Social Good* [@hager2019artificial; @DBLP:journals/corr/abs-2001-01818]. In this context, Machine Learning, particularly in the field of Computer Vision [@de2018machine], has been applied to contexts ranging from conservation biology [@kwok2019ai], to poverty [@blumenstock2015predicting] and slavery mapping [@DBLP:journals/remotesensing/FoodyLBLW19], to deforestation and water quality monitoring [@DBLP:journals/remotesensing/HollowayM18].

Despite its positive impact, it is important to recognise that some AI techniques can act both as an enhancer and as an inhibitor of sustainability. As recently shown by , AI might inhibit meeting a considerable number of targets across the SDGs and may result in inequalities within and across countries due to application biases. Understanding the implications of AI and its related fields for SD, or Social Good more generally, is particularly important for countries where action on the SDGs is being focused and where issues are most acute [@unescoai; @unescolearningweek].

Various works highlight the importance of understanding the local context and engaging with local stakeholders, including beneficiaries, to achieve project sustainability. Where such information is not available, projects are designed and delivered based on the judgment of other actors (e.g. project funders, developers or domain experts) [@risal2014mismatch; @axinn1988international; @harman2014international]. Their judgment, in turn, is subject to biases [@kahneman2011thinking] that are shaped by past experiences, beliefs, preferences and worldviews: such biases can include, for example, preferences towards a specific sector (e.g. energy or water), technology (e.g. solar, hydro) or gender group (e.g. solutions which benefit one gender disproportionately), which are pushed without considering local needs. NLP has the potential to increase the availability of community-specific data to key decision makers and to ensure that project design is properly informed and appropriately targeted. However, careful attention needs to be paid to the potential for bias in data collection resulting from the interviewers [@bryman2016social], as well as to the potential to introduce new bias through NLP.
User-Perceived Value wheel.
Flowchart of the intersection between NLP (purple square) and the delivery of SD projects.
Using UPVs (1) to build sustainable projects: note the role of NLP (purple square in 3).
# Method

As a means to obtain qualitative data with the characteristics mentioned above, we adapt the User-Perceived Values (UPV) framework [@hirmerthesis]. The UPV framework builds on value theory, which is widely used in marketing and product design in the developed world [@sheth1991we; @woo1992cognition; @solomon2002value; @boztepe2007user]. Value theory assumes that a deep connection exists between what consumers perceive as important and their inclination to adopt a new product or service [@nurkka2009capturing]. In the context of developing countries, our UPV framework identifies a set of 58 UPVs which can be used to frame the wide range of perspectives on what is of greatest concern to project beneficiaries [@HIRMER2016UPVmethod]. UPVs (or *tier 3* (T3) values) can be clustered into 17 *tier 2* (T2) value groups, each embracing a set of similar T3 values; in turn, T2 values can be categorized into 6 *tier 1* (T1) high-level value pillars, as follows [@HIRMER2014145]:

1. *Emotional*: contains the T2 values *Conscience*, *Contentment* and *Human Welfare* (tot. 9 T3 values)
2. *Epistemic*: contains the T2 values *Information* and *Knowledge* (tot. 2 T3 values)
3. *Functional*: contains the T2 values *Convenience*, *Cost Economy*, *Income Economy* and *Quality and Performance* (tot. 21 T3 values)
4. *Indigenous*: contains the T2 values *Social Norm* and *Religion* (tot. 5 T3 values)
5. *Intrinsic Human*: contains the T2 values *Health*, *Physiological* and *Quality of Life* (tot. 11 T3 values)
6. *Social Significance*: contains the T2 values *Identity*, *Status* and *Social Interaction* (tot. 11 T3 values)

The interplay between T1, T2 and T3 values is graphically depicted in the *UPV Wheel* (Figure [1](#fig:upv_wheel){reference-type="ref" reference="fig:upv_wheel"}). See Appendix A for the full set of UPV definitions.
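The two upper tiers of this hierarchy can be encoded as a simple lookup table. A minimal sketch (the mapping follows the list above; variable names are our own):

```python
# The T1 -> T2 mapping given in the list above; T3 counts per pillar in comments.
UPV_T1_TO_T2 = {
    "Emotional": ["Conscience", "Contentment", "Human Welfare"],          # 9 T3 values
    "Epistemic": ["Information", "Knowledge"],                            # 2 T3 values
    "Functional": ["Convenience", "Cost Economy", "Income Economy",
                   "Quality and Performance"],                            # 21 T3 values
    "Indigenous": ["Social Norm", "Religion"],                            # 5 T3 values
    "Intrinsic Human": ["Health", "Physiological", "Quality of Life"],    # 11 T3 values
    "Social Significance": ["Identity", "Status", "Social Interaction"],  # 11 T3 values
}

# Inverted index: look up the T1 pillar of any of the 17 T2 value groups.
T2_TO_T1 = {t2: t1 for t1, t2s in UPV_T1_TO_T2.items() for t2 in t2s}
```

Such an index makes the tier of any label recoverable in constant time, which is convenient for the hierarchy-aware sampling and multi-task setup described later.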
The UPV approach offers a theoretical framework to place communities at the centre of project design (Figure [3](#fig:proj_dev_schema){reference-type="ref" reference="fig:proj_dev_schema"}). Notably, it (a) facilitates more responsible and beneficial project planning [@gallarza2006value]; and (b) enables effective communication with rural dwellers. The latter allows project benefits to be communicated in a way that resonates with the beneficiaries' own understanding of benefits, as discussed by . This results in higher end-user acceptance, because the initiative is perceived to have personal value to the beneficiaries: as a consequence, community commitment is increased, eventually enhancing the project success rate and leading to more sustainable results [@hirmerthesis].
Playing the UPV game in Uganda. From left to right: 4) Cards for the items generator, cow, flush toilet and newspapers (adapted to the Ugandan context with the support of international experts and academics from the U. of Cambridge); 5) Women playing the UPV game in village (1); 7) Map of case-study villages.
Data conveying the beneficiaries' perspective is seldom considered in practical applications, mainly because it comes in the form of unstructured qualitative interviews. As introduced above, data needs to be *structured* in order to be useful [@OECD; @unstats]. This makes the entire process long and costly, and thus almost prohibitive in practice for most small-scale projects. In this context, AI, and more specifically NLP, has a yet unexplored opportunity: implementing successful NLP systems to automatically perform the annotation process on interviews (Figure [3](#fig:proj_dev_schema){reference-type="ref" reference="fig:proj_dev_schema"}, purple square), which constitutes the major bottleneck in the project planning pipeline (Section [4.1](#sec:corpus){reference-type="ref" reference="sec:corpus"}), would dramatically speed up the entire project life-cycle and drastically reduce its costs. We therefore introduce the task of *Automatic UPV classification*, which consists of annotating each sentence of a given input interview with the appropriate UPV labels that are (implicitly) conveyed by the interviewee. To enable research in UPV classification, we release S2I, a corpus of labelled reports from 7 rural villages in Uganda (Figure [7](#fig:map){reference-type="ref" reference="fig:map"}). In this Section, we report on the corpus collection and annotation procedures and outline the challenges these pose for NLP.

**The UPV game.** As widely recognised in marketing practice [@van2005consumer], consumers are usually unable to articulate their own values and needs [@ulwick2002turn]. This requires the use of methods that elicit what is important, such as laddering [@reynolds2001laddering] or the Zaltman Metaphor Elicitation Technique (ZMET) [@coulter2001interpreting].
To avoid direct inquiry [@pinegar2006customers],  developed an approach to identify perceived values in low-income settings by means of a game (hereafter referred to as the *UPV game*). Expanding on the items proposed by , the UPV game makes reference to 46 everyday-use items in rural areas[^4], which are graphically depicted (Figure [4](#fig:cards){reference-type="ref" reference="fig:cards"}). The decision to represent items graphically stems from the high level of illiteracy across developing countries [@unesco2013adult]. Building on the techniques proposed by Coulter *et al.*  and Reynolds *et al.* , the UPV game is framed as a semi-structured interview:\
[(1)]{.smallcaps} participants are asked to select 20 items, based on what is most important to them (*Select stimuli*);\
[(2)]{.smallcaps} to rank them in order of importance; and finally,\
[(3)]{.smallcaps} to give reasons as to why an item is important to them. *Why-probing* was used to encourage discussion (*Storytelling*).

**Case-Study Villages.** 7 rural villages were studied: 3 in the West Nile Region (Northern Uganda); 1 in Mount Elgon (Eastern Uganda); 2 in the Ruwenzori Mountains (Western Uganda); and 1 in South Western Uganda. All villages are located in remote areas far from the main roads (Figure [7](#fig:map){reference-type="ref" reference="fig:map"}). A total of 7 languages are spoken across the villages[^5].
UPV frequencies from the S2I corpus (see Appendix A for UPV definitions).
**Data Collection Setting and Guidelines for Interviewers.** For each village, 3 native-speaker interviewers guided the UPV game. To ensure consistency and data quality, a two-day training workshop was held at Makerere University (Kampala, Uganda), and a local research assistant oversaw the entire data collection process in the field.

**Data Collection.** 12 people per village were interviewed, consisting of an equal split between men and women with varying backgrounds and ages. In order to gather complete insight into the underlying decision-making process -- which might be influenced by the context [@barry2008determining] -- interviews were conducted both individually and in groups of 6 people, following standard focus group methods [@silverman2013doing; @bryman2016social]. Each interview lasted around 90 minutes. The data collection process took place over a period of 3 months and resulted in a total of 119 interviews.

**Ethical Considerations.** Participants received compensation equivalent to 1 day of labour. An informed consent form was read out loud by the interviewer prior to the UPV game, to cater for the high level of illiteracy amongst participants. To ensure integrity, a risk assessment following the University of Cambridge's *Policy on the Ethics of Research Involving Human Participants and Personal Data* was completed. To protect the participants' identity, locations and proper names were anonymized.

**Data Annotation.** The interviews were translated[^6] into English, analysed and annotated by domain experts[^7] using the computer-assisted qualitative data analysis software *HyperResearch* [@hesse1991hyperresearch]. To ensure consistency across interviews, they were annotated following , using cross-sectional indexing [@mason2002organizing]. Due to the considerable size of the collected data, the annotation process took around 6 months. We obtain a final corpus of 5102 annotated utterances from the interviews.
Samples have an average length of 20 tokens. The average number of samples per T3 label is 169.1, with an extremely skewed distribution: the most frequent T3, *Economic Opportunity*, occurs 957 times, while the least common, *Preservation of the Environment*, occurs only 7 times (Figure [8](#fig:stats_aggregated){reference-type="ref" reference="fig:stats_aggregated"}). 58.8% of the samples are associated with more than 1 UPV, and 22.3% with more than 2 UPVs (refer to Appendix B for further details on UPV correlation). Such characteristics make UPV classification highly challenging to model: the task is an extreme multi-class multi-label problem with high class imbalance. Imbalanced label distributions pose a challenge for many NLP applications -- such as sentiment analysis [@li2011imbalanced], sarcasm detection [@liusarcasm] and NER [@tomanek2009reducing] -- but are not uncommon in user-generated data [@imran2016twitter]. The following interview excerpt illustrates the multi-class multi-label characteristics of the problem:

1. *If I have a flush toilet in my house I can be a king of all kings because I can't go out on those squatting latrines* \[Reputation\]\[Aspiration\]
2. *And recently I was almost rapped* (sic.) *when I escorted my son to the latrine* \[Security\]
3. *That \[\...\] we have so many cases in our village of kids that fall into pit latrine* \[Safety\]\[Caring\]
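For illustration, the corpus statistics above (per-label frequencies, share of multi-label samples) can be computed directly from annotated utterances. A minimal sketch over the three abridged sentences from the excerpt above:

```python
from collections import Counter

# Toy annotated samples, abridged from the interview excerpt above.
samples = [
    ("If I have a flush toilet ... king of all kings", ["Reputation", "Aspiration"]),
    ("And recently I was almost rapped (sic.) ...", ["Security"]),
    ("... kids that fall into pit latrine", ["Safety", "Caring"]),
]

# Per-T3 label frequencies across all samples.
label_counts = Counter(label for _, labels in samples for label in labels)

# Fraction of samples carrying more than one UPV label.
multi_label_rate = sum(len(labels) > 1 for _, labels in samples) / len(samples)
```

On the full S2I corpus, the same computation yields the skewed distribution and the 58.8% multi-label rate reported above.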
Examples of negative samples generated through data augmentation.
Further challenges for NLP are introduced by the frequent use of non-standard grammar and poor sentence structure, which often occur in oral production [@cole1995challenge]. Moreover, manual transcription of interviews may introduce spelling errors, thus increasing the number of OOVs. This is illustrated in the excerpts below (spelling errors are underlined):

- *Also men like phone [there]{.underline} are so jealous for their women for example like in the morning my husband called me and asked that are you in church; so that's why they picked a phone.*
- *A house keeps secrecy for example \[\...\] I can be bitten by a snake if I had sex outside \[\...\] you see, me I cannot because [may]{.underline} child is looking for [mangoes]{.underline} in the bush and finds me there, how do I explain, can you imagine!!*

As outlined above, given an input interview, the task consists of annotating each sentence with the appropriate UPV(s). The extreme multi-class multi-label nature of the task (Section [4.2](#sec:corpus_nlp){reference-type="ref" reference="sec:corpus_nlp"}) makes it impractical to tackle as a standard *multi-class classification* problem---where, given an input sample $x$, a system is trained to predict its label from a tagset $T=\{l_1, l_2, l_3\}$ as $x\rightarrow l_2$ (i.e. \[0,1,0\]). Instead, we model the task as a *binary classification* problem: given $x$, the system learns to predict its *relatedness* to each of the possible labels, i.e. $(x, l_1) \rightarrow 0$, $(x, l_2) \rightarrow 1$ and $(x, l_3) \rightarrow 0$[^8]. We consider the samples from the S2I corpus as *positive instances*. Then, we generate three kinds of *negative instances* by pairing the sample text with random labels.
To illustrate, consider the three T2 classes *Contentment*, *Identity* and *Status*, which contain the following T3 values:

- *Contentment*$_{T2}$ = {*Aesthetic*$_{T3}$, *Comfort*$_{T3}$, \...}
- *Identity*$_{T2}$ = {*Appearance*$_{T3}$, *Dignity*$_{T3}$, \...}
- *Status*$_{T2}$ = {*Aspiration*$_{T3}$, *Reputation*$_{T3}$, \...}

Moreover, *Contentment*$_{T2}$ $\in$ *Emotional*$_{T1}$ and {*Identity*$_{T2}$, *Status*$_{T2}$} $\in$ *SocialSignificance*$_{T1}$. Given a sample $x$ and its gold label *Aspiration*$_{T3}$, we can generate the following training samples:

- $(x, \text{\textit{Aspiration}}_{T3})$ is a *positive sample*;
- $(x, \text{\textit{Reputation}}_{T3})$ is a *mildly negative sample*, as $x$ is linked with a wrong T3 from the same T2;
- $(x, \text{\textit{Dignity}}_{T3})$ is a *negative sample*, as $x$ is associated with a wrong T3 from a different T2 class, but both T2 classes belong to the same T1; and
- $(x, \text{\textit{Aesthetic}}_{T3})$ is a *strictly negative sample*, as $x$ is associated with a wrong label from another T2 class in a different T1.

In this way, during training the system is exposed to positive (real) samples and negative (randomly generated) samples. A UPV classification system should satisfy the following desiderata: (1) it should be relatively light, given that it will be used in the context of developing countries, which may suffer from access bias[^9]; and (2) the goal of such a system is not to completely replace the work of human SD experts, but rather to reduce the time needed for interview annotation. In this context, false positives are quick to notice and delete, while false negatives are more difficult to spot and correct. Moreover, when assessing a community's needs and values, missing a relevant UPV is worse than including one which was not originally present. For these reasons, recall is particularly important for a UPV classifier. In the next Section, we provide a set of strong baselines for future reference.
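The pair-generation scheme above can be sketched in a few lines; a minimal version restricted to the six illustrative T3 labels (function and variable names are our own):

```python
# Illustrative slices of the label hierarchy, taken from the example above.
T3_TO_T2 = {"Aspiration": "Status", "Reputation": "Status",
            "Appearance": "Identity", "Dignity": "Identity",
            "Aesthetic": "Contentment", "Comfort": "Contentment"}
T2_TO_T1 = {"Status": "Social Significance", "Identity": "Social Significance",
            "Contentment": "Emotional"}

def negative_kind(gold_t3, wrong_t3):
    """Classify a wrong T3 label relative to the gold one."""
    if T3_TO_T2[wrong_t3] == T3_TO_T2[gold_t3]:
        return "mildly negative"        # wrong T3, same T2 group
    if T2_TO_T1[T3_TO_T2[wrong_t3]] == T2_TO_T1[T3_TO_T2[gold_t3]]:
        return "negative"               # different T2, same T1 pillar
    return "strictly negative"          # different T1 pillar

def make_pairs(text, gold_t3):
    """Recast one annotated sentence as binary (text, label) -> {0, 1} pairs."""
    pairs = [((text, gold_t3), 1)]
    pairs += [((text, t3), 0) for t3 in T3_TO_T2 if t3 != gold_t3]
    return pairs
```

In practice, negatives would be subsampled at random rather than enumerated exhaustively, and the three kinds can be mixed in different proportions.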
*Embedding Layer.* The system receives an input sample $(x, T3)$, where $x$ is the sample text $(e_1, ..., e_n)$, $T3$ is the T3 label as the sequence of its tokens $(e_1, ..., e_m)$, and $e_i$ is the word embedding representation of the token at position $i$. We obtain a T3 embedding $e_{T3}$ for each T3 label using a max pooling operation over its word embeddings: given the short length of T3 codes, this proved to work well, and it is similar to findings in relation extraction and targeted sentiment analysis [@tang2015effective]. We replicate $e_{T3}$ $n$ times and concatenate it to the text's word embeddings $x$ (Figure [\[fig:architecture\]](#fig:architecture){reference-type="ref" reference="fig:architecture"}).

*Encoding Layer.* We encode the concatenated input with a forward LSTM [@gers1999learning], and then apply attention to capture the key parts of the input text w.r.t. the given T3. In detail, given the output matrix of the LSTM layer $H = [h_1, ..., h_n]$, we produce a hidden representation $h_{text}$ as follows:

$$\begin{aligned}
M &= \tanh\left( \begin{bmatrix} W_h H\\ W_v e_{T3} \otimes e_N \end{bmatrix} \right)\\
\alpha &= softmax(w^{T}M)\\
h_{text} &= H\alpha^{T}
\end{aligned}$$

This is similar in principle to the attention-based LSTM by , and proved to work better than classic attention over $H$ on our data.

*Decoding Layer.* We predict $\hat{y} \in [0,1]$ with a dense layer followed by a sigmoidal activation. Each T3 comes with a short description, which was written by domain experts and used during manual labelling (the complete list is in Appendix A). We integrate information from such descriptions into our model as follows: given the ordered word embeddings from the UPV description $(e_1, ..., e_d)$, we obtain a description representation $h_{descr}$ following the same steps as for the sample text.
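The attention step can be sketched without any deep learning library; a minimal, dependency-free version on toy dimensions, where random matrices stand in for the learned parameters $W_h$, $W_v$ and $w$ (all names are our own):

```python
import math
import random

random.seed(0)

def matmul(A, B):
    """Plain list-of-lists matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

d, n = 4, 3                                  # toy hidden size and sequence length
rand = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]

H = rand(d, n)                               # LSTM outputs, one column per token
e_t3 = rand(d, 1)                            # pooled T3 label embedding
W_h, W_v = rand(d, d), rand(d, d)            # learned projections (random here)
w = rand(1, 2 * d)                           # learned scoring vector (random here)

# M = tanh([W_h H ; W_v (e_t3 ⊗ e_N)]): replicate e_t3 across the n positions,
# project both parts, and stack them vertically into a (2d x n) matrix.
rep = [[e_t3[i][0]] * n for i in range(d)]
stacked = matmul(W_h, H) + matmul(W_v, rep)  # list "+" = vertical concatenation
M = [[math.tanh(v) for v in row] for row in stacked]

alpha = softmax(matmul(w, M)[0])             # attention weights over positions
h_text = [sum(H[i][j] * alpha[j] for j in range(n)) for i in range(d)]  # H alpha^T
```

The same routine, run over the description embeddings, yields $h_{descr}$; in the actual model the weights are trained end-to-end rather than drawn at random.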
In line with previous studies on siamese networks [@yan2018few], we observe better results when sharing the weights between the two LSTMs. We keep two separate attention layers for sample texts and descriptions. We concatenate $h_{text}$ and $h_{descr}$ and feed the obtained vector to the output layer.

A clear hierarchy exists between T3, T2 and T1 values (Section [3](#sec:upv_theory){reference-type="ref" reference="sec:upv_theory"}). We integrate such information using multi-task learning [@caruana1997multitask; @DBLP:journals/corr/Ruder17a]. Given an input sample, we predict its relatedness not only w.r.t. a T3 label, but also w.r.t. its corresponding T2 and T1 labels[^10]. In practice, given the hidden representation $h = h_{text} \oplus h_{descr}$, we first feed it into a dense layer $dense_{T1}$ to obtain $h_{T1}$, and predict $\hat{y}_{T1}$ with a sigmoidal function. We then concatenate $h_{T1}$ with the previously obtained $h$, and predict $\hat{y}_{T2}$ with a T2-specific dense layer, $\sigma(dense_{T2}(h \oplus h_{T1}))$. Finally, $\hat{y}_{T3}$ is predicted as $\sigma(dense_{T3}(h \oplus h_{T2}))$. In this way, the prediction $\hat{y}_i$ is based on both the original $h$ and the hidden representation computed in the previous stage of the hierarchy, $h_{i-1}$ (Figure [\[fig:architecture\]](#fig:architecture){reference-type="ref" reference="fig:architecture"}).

|       | text | +att     | +descr   | +att+descr |
|-------|------|----------|----------|------------|
| P     | 77.5 | 78.1     | **80.4** | 78.9       |
| R     | 65.5 | **71.0** | 66.5     | 70.6       |
| $F_1$ | 71.0 | 74.2     | 72.8     | **74.4**   |
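The hierarchical decoding described above can be sketched as follows, again with toy sizes and random weights in place of trained parameters. One assumption of our own: each tier's scalar prediction is produced by a separate single-output scoring layer, where the text leaves the exact shape of $\sigma(dense_{T_i}(\cdot))$ implicit:

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def dense(h, W, b):
    """Toy dense layer: W is (out x in), b is the bias vector."""
    return [sum(w * x for w, x in zip(row, h)) + c for row, c in zip(W, b)]

def layer(n_out, n_in):
    """Random (W, b) pair standing in for learned parameters."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

d = 6                                            # toy size of h = h_text ⊕ h_descr
h = [random.uniform(-1, 1) for _ in range(d)]

dense_t1, dense_t2, dense_t3 = layer(d, d), layer(d, 2 * d), layer(d, 2 * d)
score_t1, score_t2, score_t3 = layer(1, d), layer(1, d), layer(1, d)

h_t1 = dense(h, *dense_t1)                       # h_T1 = dense_T1(h)
y_t1 = sigmoid(dense(h_t1, *score_t1)[0])        # T1 relatedness
h_t2 = dense(h + h_t1, *dense_t2)                # input: h ⊕ h_T1 (list concat)
y_t2 = sigmoid(dense(h_t2, *score_t2)[0])        # T2 relatedness
h_t3 = dense(h + h_t2, *dense_t3)                # input: h ⊕ h_T2
y_t3 = sigmoid(dense(h_t3, *score_t3)[0])        # T3 relatedness
```

Each tier thus conditions on both the shared representation $h$ and the hidden state of the tier above it, which is what lets the T3 head exploit the T1/T2 structure during multi-task training.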